Group Leader, Professor
I am interested in how model biases in biological neuronal networks, realized through network architecture, neuronal nonlinearities, and their dynamics, lead to more robust inference and faster learning. I use deep learning to approach these questions through theoretical analysis and system identification on large-scale neurophysiological and neuroanatomical data.
Edgar Y. Walker
My interest lies in how populations of cortical neurons encode, and subsequently perform computations on, world state variables. In particular, I have studied how sensory cortical populations represent visual stimulus information, including sensory uncertainty, in accordance with the theory of probabilistic population coding (PPC), combining population electrophysiology from macaque V1 with Bayesian and neural network analyses to decode likelihood functions. I have also been building network models of sensory cortex to understand the representations and computations carried out by repeating canonical computational units in the mouse brain.
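As a minimal sketch of what likelihood decoding means in the PPC setting — assuming a toy population of independent Poisson neurons with Gaussian tuning curves, which is far simpler than the actual V1 analyses — the log-likelihood over the stimulus is linear in the spike counts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy PPC decoding: independent Poisson neurons with Gaussian tuning curves.
s_grid = np.linspace(-10, 10, 201)           # candidate stimulus values
prefs = np.linspace(-10, 10, 40)             # preferred stimuli of the population
gain, width = 5.0, 2.0
tuning = gain * np.exp(-(s_grid[None, :] - prefs[:, None]) ** 2 / (2 * width**2))

s_true = 1.5
rates = gain * np.exp(-(s_true - prefs) ** 2 / (2 * width**2))
spikes = rng.poisson(rates)                  # one observed population response

# For independent Poisson noise:
#   log L(s) = sum_i n_i * log f_i(s) - sum_i f_i(s) + const
# i.e. the decoded log-likelihood is linear in the spike counts n_i.
loglik = spikes @ np.log(tuning + 1e-12) - tuning.sum(axis=0)
s_hat = s_grid[np.argmax(loglik)]
print(round(s_hat, 1))                       # maximum-likelihood stimulus estimate
```

The width of `loglik` around its peak carries the trial-by-trial uncertainty that PPC posits the population encodes alongside the point estimate.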
My interest lies in system identification, i.e., finding a mathematical model that maps measurements of system inputs (visual stimuli) to system outputs (neuronal activity in the visual cortex of mice). The approach I chose for fitting these models is machine learning, more precisely deep convolutional neural networks (DCNNs). Identifying the underlying computations in a biological neural network using DCNNs can help the field in two ways: it can 1) provide insights into the functioning of biological neural networks for the neuroscience community and 2) identify useful inductive biases to be transferred to artificial neural networks for the machine learning community. My current project revolves around fitting models that generalize between animals of the same species. Only if this condition is met can we assume that the fitted model is not susceptible to subject-specific features and noise but captures general nonlinear features that are characteristic of the visual cortex of mice as a whole.
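A minimal sketch of the system-identification idea, with a toy linear-nonlinear Poisson neuron and simulated data standing in for the DCNNs and recordings described above: fit a model mapping stimuli to spike counts, then check how well the fitted filter recovers the (here, known) ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a neuron: linear filter -> softplus nonlinearity -> Poisson spikes.
n_stimuli, n_pixels = 2000, 256
true_rf = rng.normal(size=n_pixels) * 3 / np.sqrt(n_pixels)  # ground-truth filter
stimuli = rng.normal(size=(n_stimuli, n_pixels))             # white-noise stimuli
rates = np.log1p(np.exp(stimuli @ true_rf))                  # softplus firing rate
responses = rng.poisson(rates)                               # noisy spike counts

# System identification: fit the filter by gradient descent on the
# Poisson negative log-likelihood (a stand-in for training a DCNN).
w = np.zeros(n_pixels)
lr = 0.1
for _ in range(500):
    drive = stimuli @ w
    pred = np.log1p(np.exp(drive))                 # predicted rate, softplus(drive)
    sig = 1.0 / (1.0 + np.exp(-drive))             # d softplus / d drive
    grad_drive = sig * (1.0 - responses / np.maximum(pred, 1e-8))
    w -= lr * (stimuli.T @ grad_drive) / n_stimuli

# The identified filter should correlate strongly with the true one.
corr = np.corrcoef(w, true_rf)[0, 1]
print(round(corr, 2))
```

With real recordings the ground truth is unknown, which is why generalization across stimuli — and, in my project, across animals — is the criterion for a successful fit rather than filter recovery.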
My primary interest lies in uncovering fundamental algorithms and principles of intelligence and intelligent behavior, be it biological or artificial. My current work revolves around theories of perception in the brain, specifically the theory that the brain performs perception via probabilistic inference. I am interested in modeling and evaluating these theories on large-scale neurophysiological recordings from the visual cortex using modern probabilistic machine learning methods.
Despite their great success in many areas, deep neural networks frequently suffer from poor generalization on out-of-domain data. Good inductive biases, i.e., assumptions built into a model that help it learn the target function and generalize beyond the training data, can help overcome this issue. One source of inspiration in the search for such inductive biases is the brain, as brains frequently demonstrate great generalization abilities across a variety of tasks. My goal is to identify methods that allow us to reliably transfer inductive biases from a source environment (e.g. a robust artificial neural network) to a target environment (e.g. a previously non-robust network). Once such a method is found, we can use it to transfer inductive biases from biological to artificial neural networks and hopefully gain further insight into both along the way.
My interest lies in identifying regularities underlying neural computations that contribute to data-efficient learning and robust inference. These regularities are commonly framed as tuning, where the activity of many neurons is sensitive to a common axis (the tuning axis) in the input space. For instance, in the visual system many neurons have similar receptive fields that are each rotated differently, which makes the rotation angle the tuning axis. The questions I am currently working on are: 1) What other regularities are there in the computations performed by cortical neuronal populations? 2) Can we develop methods that automatically discover tuning in a given multiple-input, multiple-output (MIMO) system? And 3) How can we use models to design experiments such that we can control the activity of neuronal populations and, ultimately, control learning and inference? To answer these questions I use deep neural networks as models of the visual system and work with large-scale neurophysiological and neuroanatomical data.
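The rotated-receptive-field example above can be made concrete with a toy population (hypothetical Gabor filters, not the lab's actual models): each model neuron applies the same filter at a different rotation, and probing with oriented gratings recovers orientation as the shared tuning axis.

```python
import numpy as np

# A Gabor receptive field at orientation theta (illustrative parameters).
def gabor(theta, size=32, freq=0.2, sigma=5.0):
    ax = np.arange(size) - size / 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

# Population: the same filter rotated to 8 preferred orientations.
prefs = np.linspace(0, np.pi, 8, endpoint=False)
filters = np.stack([gabor(t).ravel() for t in prefs])

# Probe with full-field gratings (a Gabor with a huge envelope) at many
# orientations and read out each neuron's tuning curve.
probe_oris = np.linspace(0, np.pi, 64, endpoint=False)
gratings = np.stack([gabor(t, sigma=1e6).ravel() for t in probe_oris])
responses = np.abs(filters @ gratings.T)        # (neurons, probe orientations)

# Each neuron's response peaks at its own rotation angle: the tuning axis.
recovered = probe_oris[np.argmax(responses, axis=1)]
print(np.allclose(recovered, prefs, atol=np.pi / 64))
```

Automatically discovering such a shared axis from responses alone, without knowing the filters in advance, is the harder MIMO version of this problem that question 2 above is after.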
I am fascinated by the similarities between artificial and biological neural networks, and I am particularly interested in how machine learning algorithms and the brain solve the complex task of vision. To gain insight into these questions, I use deep neural networks to model the early visual cortex of monkeys and mice and examine in which cases these models work and in which they do not. Using network dissection techniques and in silico experiments, my aim is to discover functional principles of artificial neural networks that are instructive for neuroscientific experiments and that could lead to insights into the fundamentals of our cognitive system and its implementation. I am also passionate about open science, the sharing of data and methods, and the movement to use AI to empower society.