About

Hi, I'm Sree Rohith.

I'm currently pursuing an M.S. in Electrical Engineering at Columbia University, New York. Before that, I worked as a Software Engineer at Xilinx-AMD for two years.

Here's my LinkedIn profile: https://in.linkedin.com/in/rohith-pulipaka

I enjoy learning about Deep Learning and aim to build mathematical understanding alongside experimental insight.

Current Work:

I am currently researching attention-based mechanisms with Professor John Wright at Columbia University.

I am also working on a project that uses Distributionally Robust Optimization (DRO) to make CNNs robust to affine transformations; a rough sketch of the idea follows.
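To make that concrete, here is a minimal sketch of one way such a setup can look, assuming the DRO ambiguity set is a small family of affine transforms and approximating the inner maximization by random search. The function name and the transform ranges are illustrative, not the project's actual formulation:

```python
import torch
import torchvision.transforms.functional as TF

def worst_case_affine_loss(model, x, y, loss_fn, n_trials=8):
    """Approximate the inner maximization of the DRO objective by random
    search over a small family of affine transforms, keeping the transform
    that hurts the model most. (All ranges below are illustrative.)"""
    worst = None
    for _ in range(n_trials):
        angle = float(torch.empty(1).uniform_(-15.0, 15.0))  # rotation, degrees
        tx = int(torch.randint(-2, 3, (1,)))                 # x-translation, pixels
        ty = int(torch.randint(-2, 3, (1,)))                 # y-translation, pixels
        scale = float(torch.empty(1).uniform_(0.9, 1.1))     # isotropic scaling
        x_t = TF.affine(x, angle=angle, translate=[tx, ty],
                        scale=scale, shear=0.0)
        loss = loss_fn(model(x_t), y)
        if worst is None or loss.item() > worst.item():
            worst = loss
    return worst  # differentiable; call .backward() on it as usual
```

In a training loop, one would replace the standard batch loss with `worst_case_affine_loss(model, x, y, torch.nn.functional.cross_entropy)` and backpropagate through the worst case. Gradient-based search over the transform parameters would give a tighter (but costlier) approximation of the inner maximum.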

Coursework:

Courses: Machine Learning, Reinforcement Learning, Sparse Models for High-Dimensional Data, Convex Optimization, Fair and Robust Algorithms for Deep Learning

Fun Stuff:

It is well known that Transformers are data-hungry: without access to large compute and data, our model does not learn the inductive biases necessary for good generalization. So I'm curating a list of "eco-friendly" Transformers that can be trained on datasets such as CIFAR-10, or even MNIST. (For perspective, I trained a tiny Transformer (from DeiT) and could achieve only 91% accuracy on MNIST after a tedious training period; a sketch of a model at that scale follows below.) Here's the list: (Eco-Friendly) Transformers
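For a concrete sense of scale, here is a minimal ViT-style classifier sized for MNIST (4x4 patches, so 49 tokens per 28x28 image). The hyperparameters are illustrative and are not the DeiT configuration mentioned above:

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal ViT-style classifier for 28x28 MNIST images."""
    def __init__(self, dim=64, depth=4, heads=4, num_classes=10, patch=4):
        super().__init__()
        # Non-overlapping patch embedding: one conv = linear projection per patch.
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_tokens = (28 // patch) ** 2  # 49 patches for 4x4 on 28x28
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_tokens + 1, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)  # pre-LN, as in ViT/DeiT
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # [B, 49, dim]
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        out = self.encoder(tokens)
        return self.head(out[:, 0])  # classify from the [CLS] token
```

Training it is ordinary supervised learning (cross-entropy on MNIST batches); the interesting question, and what the list collects, is which architectural and training tricks let models at this scale generalize well without huge pretraining datasets.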