My interest lies in identifying regularities underlying neural computations that contribute to data-efficient learning and robust inference. These regularities are commonly framed as tuning, where the activity of many neurons is sensitive to a common axis (the tuning axis) in the input space. For instance, in the visual system many neurons have similar receptive fields, each rotated by a different angle, which makes the rotation angle the tuning axis. The questions I am currently working on are: 1) What other regularities exist in the computations performed by cortical neuronal populations? 2) Can we develop methods that automatically discover tuning in a given multiple-input, multiple-output (MIMO) system? 3) How can we use models to design experiments that control the activity of neuronal populations and, ultimately, learning and inference? To answer these questions I use deep neural networks as models of the visual system and work with large-scale neurophysiological and neuroanatomical data.
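To make the notion of a tuning axis concrete, here is a minimal sketch (a hypothetical NumPy example, not code from my work): a population of model neurons shares one Gabor-shaped receptive field, each rotated to a different preferred angle, so the rotation angle acts as the population's tuning axis.

```python
import numpy as np

def gabor(theta, size=32, freq=0.2, sigma=5.0):
    """A Gabor receptive field at orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# A population of neurons sharing one receptive-field shape,
# each rotated to a different preferred angle: the rotation
# angle is the common (tuning) axis of the population.
angles = np.linspace(0, np.pi, 8, endpoint=False)
population = [gabor(a) for a in angles]

# Linear responses to an oriented stimulus peak for the neuron
# whose preferred angle matches the stimulus orientation.
stimulus = gabor(np.pi / 4)
responses = [float(np.sum(rf * stimulus)) for rf in population]
preferred = angles[int(np.argmax(responses))]
```

Sweeping the stimulus orientation traces out each neuron's tuning curve along this single shared axis, which is the kind of regularity the discovery methods above aim to identify automatically.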
Mohammad Bashiri, Edgar Y. Walker, Konstantin-Klemens Lurz, Akshay Kumar Jagadish, Taliah Muhammad, Zhiwei Ding, Zhuokun Ding, Andreas S. Tolias, Fabian H. Sinz. A flow-based latent state generative model of neural population responses to natural images. NeurIPS (spotlight).
Konstantin-Klemens Lurz, Mohammad Bashiri, Konstantin Friedrich Willeke, Akshay Kumar Jagadish, Eric Wang, Edgar Y. Walker, Santiago Cadena, Taliah Muhammad, Eric Cobos, Andreas Tolias, Alexander Ecker, Fabian Sinz. Generalization in data-driven models of primary visual cortex. ICLR (spotlight).