Meta-learning for neural system identification

Project Description

In computational neuroscience, neural system identification is the task of predicting the responses of a population of neurons to arbitrary stimuli, with the goal of identifying the functional relationship between stimulus as input and neuronal response as output [1 - 3]. Deep neural networks (DNNs) have achieved state-of-the-art performance in predicting the responses of neurons in the visual cortex to natural image stimuli, as they have been shown to capture the highly nonlinear and complex computations required to characterize neuronal responses [4 - 7].
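
As a rough illustration of this setup (not taken from the cited work), the problem can be framed as supervised regression from stimuli to neuronal responses. Below is a minimal PyTorch sketch; the architecture, layer sizes, and the choice of a Poisson loss are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class ResponsePredictor(nn.Module):
    """Toy CNN mapping a grayscale stimulus to predicted firing rates of n_neurons neurons."""
    def __init__(self, n_neurons: int):
        super().__init__()
        self.core = nn.Sequential(                       # shared nonlinear feature extractor
            nn.Conv2d(1, 32, kernel_size=9), nn.ELU(),
            nn.Conv2d(32, 32, kernel_size=7), nn.ELU(),
        )
        self.readout = nn.Sequential(                    # simple readout onto the neurons
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_neurons),
        )

    def forward(self, x):
        # Softplus keeps predicted firing rates positive, matching a Poisson noise model.
        return nn.functional.softplus(self.readout(self.core(x)))

model = ResponsePredictor(n_neurons=100)
images = torch.randn(8, 1, 64, 64)                       # dummy batch of stimuli
responses = torch.rand(8, 100)                           # dummy observed responses
loss = nn.PoissonNLLLoss(log_input=False)(model(images), responses)
loss.backward()
```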

Currently, there are two main approaches for training DNNs to perform neural system identification: the task-driven approach and the data-driven approach. The task-driven approach takes a DNN pretrained on large datasets for standard vision tasks, such as object recognition, and fine-tunes it on pairs of images and corresponding neuronal responses [8]. Since pretraining already equips the network with the essential nonlinear computations, task-driven models are relatively sample efficient in terms of the number of image-response pairs required. The data-driven approach relies on training DNNs from scratch on a large number of image-response pairs [9]. Data-driven models thus learn the nonlinear computations shared among thousands of neurons and have been shown to generalize better to new neurons than task-driven models [9].
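
To make the task-driven idea concrete, here is a hedged sketch: a core pretrained for object recognition is kept fixed and only a new readout is fitted to image-response pairs. The use of VGG16, the truncation point, and all sizes are assumptions for illustration, not a description of the models in [8].

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical task-driven model: freeze a core pretrained for object recognition
# and fit only a new readout on image-response pairs.
backbone = models.vgg16(pretrained=True).features[:16]   # truncate at an intermediate conv layer
for p in backbone.parameters():
    p.requires_grad = False                              # pretrained features stay fixed

readout = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 100))

images = torch.randn(8, 3, 224, 224)                     # dummy stimuli
responses = torch.rand(8, 100)                           # dummy neuronal responses
predictions = nn.functional.softplus(readout(backbone(images)))
loss = nn.PoissonNLLLoss(log_input=False)(predictions, responses)
loss.backward()                                          # gradients reach only the readout
```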

Both of the above approaches entail a time-consuming optimization process to fit and predict the responses of new neurons. In this project, we want to use meta-learning as the training paradigm for DNNs performing neural system identification. Meta-learning enables us to formulate the problem as a few-shot learning problem [10 - 12]. The objective will be to train a DNN such that it requires only a few training examples to learn the responses of new neurons, vastly improving sample efficiency. This is important in experimental contexts, where a network needs to be adapted to new neurons and stimuli quickly. Furthermore, we hypothesize that the representation learned by the meta-learned DNN captures general nonlinear characteristics of the visual cortex.
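
One way to realize this is a MAML-style training loop in the spirit of [11]: an inner loop adapts a copy of the weights to a new neuron from a handful of examples, and an outer loop updates the shared initialization so that this adaptation works well. The sketch below is a first-order variant on dummy data; the architecture, task sampling, and hyperparameters are assumptions, and it relies on torch.func.functional_call (PyTorch >= 2.0).

```python
import torch
import torch.nn as nn

# Illustrative first-order MAML-style loop, not the project's final design.
model = nn.Sequential(nn.Linear(64, 128), nn.ELU(), nn.Linear(128, 1), nn.Softplus())
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, k_shot = 0.01, 5

def sample_neuron_task():
    """Dummy task: k support and k query (stimulus-feature, response) pairs for one neuron."""
    x, y = torch.randn(2 * k_shot, 64), torch.rand(2 * k_shot, 1)
    return (x[:k_shot], y[:k_shot]), (x[k_shot:], y[k_shot:])

for step in range(100):
    (xs, ys), (xq, yq) = sample_neuron_task()
    # Inner loop: adapt a copy of the weights to the new neuron with a few examples.
    fast_weights = {n: p.clone() for n, p in model.named_parameters()}
    support_loss = nn.functional.poisson_nll_loss(
        torch.func.functional_call(model, fast_weights, (xs,)), ys, log_input=False)
    grads = torch.autograd.grad(support_loss, list(fast_weights.values()))
    fast_weights = {n: p - inner_lr * g for (n, p), g in zip(fast_weights.items(), grads)}
    # Outer loop: evaluate the adapted weights on query data, update the shared initialization.
    query_loss = nn.functional.poisson_nll_loss(
        torch.func.functional_call(model, fast_weights, (xq,)), yq, log_input=False)
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()
```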

The master’s thesis consists of a confirmatory component – using meta-learning to train a DNN for the task of neural system identification – and an exploratory component – inspecting the representation learned by the resulting DNN model. It offers the opportunity to gain insights into state-of-the-art deep learning models and their application to neural system identification.

Our ideal candidate has a background in machine learning and deep learning, with experience in implementing convolutional neural networks in PyTorch. We offer a highly collaborative work environment, great team spirit, and state-of-the-art infrastructure.

We look forward to hearing from you.

References:

  1. Klindt, D., et al. “Neural system identification for large populations separating ‘what’ and ‘where’.” Advances in Neural Information Processing Systems 30 (Curran Associates, Red Hook, NY, 2017): 3506-3516.

  2. Wu, Michael C-K., Stephen V. David, and Jack L. Gallant. “Complete functional characterization of sensory neurons by system identification.” Annu. Rev. Neurosci. 29 (2006): 477-505.

  3. Carandini, Matteo, et al. “Do we know what the early visual system does?.” Journal of Neuroscience 25.46 (2005): 10577-10597.

  4. D. L. K. Yamins, H. Hong, C. F. Cadieu, E. A. Solomon, D. Seibert, and J. J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 111(23):8619–24, 2014. ISSN 1091-6490. doi: 10.1073/pnas.1403112111.

  5. D. L. K. Yamins and J. J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3):356–365, 2016. ISSN 1097-6256. doi: 10.1038/nn.4244.

  6. S. A. Cadena, G. H. Denfield, E. Y. Walker, L. A. Gatys, A. S. Tolias, M. Bethge, and A. S. Ecker. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Computational Biology, 15(4): e1006897, 2019a. doi: 10.1371/journal.pcbi.1006897.

  7. F. Sinz, A. S. Ecker, P. Fahey, E. Walker, E Cobos, E. Froudarakis, D. Yatsenko, X. Pitkow, J. Reimer, and A. Tolias. Stimulus domain transfer in recurrent models for large scale cortical population prediction on video. In Advances in Neural Information Processing Systems 31. 2018. doi:10.1101/452672

  8. S. A. Cadena, F.H. Sinz, T. Muhammad, E. Froudarakis, E. Cobos, E. Y. Walker, J. Reimer, M. Bethge, and A. S. Ecker. How well do deep neural networks trained on object recognition characterize the mouse visual system? In NeurIPS 2019 Workshop Neuro AI, 2019b.

  9. Lurz, Konstantin-Klemens, et al. “Generalization in data-driven models of primary visual cortex.” bioRxiv (2020).

  10. Hospedales, Timothy, et al. “Meta-learning in neural networks: A survey.” arXiv preprint arXiv:2004.05439 (2020).

  11. Finn, Chelsea, Pieter Abbeel, and Sergey Levine. “Model-agnostic meta-learning for fast adaptation of deep networks.” International Conference on Machine Learning. PMLR, 2017.

  12. Cotton, R. James, Fabian H. Sinz, and Andreas S. Tolias. “Factorized Neural Processes for Neural Processes: K-Shot Prediction of Neural Responses.” arXiv preprint arXiv:2010.11810 (2020).