Arne Nix

Graduate Student

Inductive Bias Transfer

Despite their great success in many areas, deep neural networks frequently suffer from poor generalization on out-of-domain data. Good inductive biases, i.e. assumptions built into the model that help it learn the target function and generalize beyond the training data, can help overcome this issue. One source of inspiration when looking for such inductive biases is the brain, which frequently demonstrates great generalization abilities across a variety of tasks. My goal is to identify methods that allow us to reliably transfer inductive biases from a source environment (e.g. a robust artificial neural network) to a target environment (e.g. a previously non-robust network). Once such a method is found, we can use it to transfer inductive biases from biological to artificial neural networks and hopefully gain further insights into both along the way.
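
To illustrate the kind of functional transfer method studied in the AISTATS 2022 paper listed below, here is a minimal knowledge-distillation sketch in PyTorch: a target network is trained to match the softened outputs of a source network, so that whatever inductive bias shapes the source's function can influence the target. The architectures, data, and hyperparameters are illustrative placeholders, not the setup used in any of the papers.

```python
# Minimal sketch of one functional transfer method (knowledge distillation).
# The networks and data below are placeholders for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Source network (carries the inductive bias) and target network to train.
source = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
target = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

optimizer = torch.optim.Adam(target.parameters(), lr=1e-3)
temperature = 4.0  # softens the source outputs the target has to match


def distillation_step(images, labels, alpha=0.5):
    """One training step: match the source network's soft outputs
    while also fitting the hard labels."""
    with torch.no_grad():
        source_logits = source(images)
    target_logits = target(images)

    # KL divergence between softened output distributions (functional transfer).
    soft_loss = F.kl_div(
        F.log_softmax(target_logits / temperature, dim=1),
        F.softmax(source_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    hard_loss = F.cross_entropy(target_logits, labels)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with random tensors standing in for a real dataset.
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))
distillation_step(images, labels)
```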


Publications


2022

Santiago A. Cadena, Konstantin F. Willeke, Kelli Restivo, George Denfield, Fabian H. Sinz, Matthias Bethge, Andreas S. Tolias, Alexander S. Ecker. Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks. bioRxiv.
Arne Nix, Suhas Shrinivasan, Edgar Y. Walker, Fabian H. Sinz. Can Functional Transfer Methods Capture Simple Inductive Biases? AISTATS 2022.

2021

Shahd Safarani, Arne Nix, Konstantin Willeke, Santiago A. Cadena, Kelli Restivo, George Denfield, Andreas S. Tolias, Fabian H. Sinz. Towards robust vision by multi-task learning on monkey visual cortex. NeurIPS 2021.
Shahd Safarani, Arne Nix, Konstantin Willeke, Santiago A. Cadena, Kelli Restivo, George Denfield, Andreas S. Tolias, Fabian H. Sinz. Towards robust vision by multi-task learning on monkey visual cortex. ICLR 2021 Workshop: How Can Findings About The Brain Improve AI Systems?