Hello! 👋 I am a first-year Cognitive Science PhD student at Stony Brook University, advised by Dr. Gregory Zelinsky in the EyeCog Lab.
My research lies at the intersection of multimodal generative modelling and visual perception, focusing on building brain-inspired neural network architectures and generating human-aligned visual content. More broadly, my interests span computer vision, representation learning, and neuroscience.
(Representative papers are highlighted. *=equal contribution)
Generating objects in peripheral vision using attention-guided diffusion models. Ritik Raina, Seoyoung Ahn, Gregory Zelinsky. VSS 2024 (Vision Sciences Society).
Cortically motivated recurrence enables task extrapolation. Vijay Veerabadran, Yuan Tang, Ritik Raina, Virginia R. de Sa. COSYNE 2023 (Computational and Systems Neuroscience).
Bio-inspired learnable divisive normalization for ANNs. Vijay Veerabadran, Ritik Raina, Virginia R. de Sa. NeurIPS 2021, Workshop on Shared Visual Representations in Human & Machine Intelligence (SVRHM).