Hello! 👋 I am a second-year Cognitive Science PhD student at Stony Brook University, under the supervision of
Dr. Gregory Zelinsky in the EyeCog Lab.
My research interests lie at the intersection of multimodal generative modelling and visual perception, with a focus on building brain-inspired neural network architectures and generating human-aligned visual content.
Recent News
New! August 2024
Short paper accepted at CCN 2024!
My current research interests intersect computer vision, representation learning, and neuroscience.
(Representative papers are highlighted. *=equal contribution)
Framework for a Generative Multi-modal Model of Embodied Thought
Gregory Zelinsky, Ritik Raina, Abraham Leite, Seoyoung Ahn
CCN 2024 | Cognitive Computational Neuroscience
Generating objects in peripheral vision using attention-guided diffusion models
Ritik Raina, Seoyoung Ahn, Gregory Zelinsky
VSS 2024 | Vision Sciences Society
Cortically motivated recurrence enables task extrapolation
Vijay Veerabadran, Yuan Tang, Ritik Raina, Virginia R. de Sa
COSYNE 2023 | Computational and Systems Neuroscience
Bio-inspired learnable divisive normalization for ANNs
Vijay Veerabadran, Ritik Raina, Virginia R. de Sa
NeurIPS 2021 | Workshop on Shared Visual Representations in Human & Machine Intelligence (SVRHM)