Email me at ritik.raina@stonybrook.edu
I am a second-year Cognitive Science Ph.D. student at Stony Brook University working with Dr. Gregory Zelinsky.
My research interests lie at the intersection of multimodal generative modeling and visual perception, focusing on building brain-inspired neural network architectures and generating human-aligned visual content.
I am always open to research collaborations. Feel free to schedule a 30-minute meeting on Calendly if you would like to collaborate or discuss research ideas.
Framework for a Generative Multi-modal model of Embodied Thought
Gregory Zelinsky, Ritik Raina, Abe Leite, Seoyoung Ahn
CCN 2024
Generating objects in peripheral vision using attention-guided diffusion models
Ritik Raina, Seoyoung Ahn, Gregory Zelinsky
VSS 2024
Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels
Vijay Veerabadran, Srinivas Ravishankar, Yuan Tang, Ritik Raina, Virginia R. de Sa
NeurIPS 2023
Cortically motivated recurrence enables task extrapolation
Vijay Veerabadran, Yuan Tang, Ritik Raina, Virginia R. de Sa
COSYNE 2023
Exploring Biases in Facial Expression Analysis using Synthetic Faces
Ritik Raina, Miguel Monares, Mingze Xu, Sarah Fabi, Xiaojing Xu, Lehan Li, Will Sumerfield, Jin Gan, Virginia R. de Sa
NeurIPS SyntheticData4ML Workshop 2022
Bio-inspired learnable divisive normalization for ANNs
Vijay Veerabadran, Ritik Raina, Virginia R. de Sa
NeurIPS SVRHM Workshop 2021