Context-dependent integration of information in rate-based recurrent neural networks
The goal of this workgroup is to study rate-based recurrent neural network (RNN) models of context-dependent integration of sensory evidence. Such models successfully describe some aspects of cortical computation (Mante et al., 2013). How context-dependent input selection and integration work in these models can be understood from the behaviour of the linearized dynamics around fixed points in activity space. In particular, in those models a line attractor serves as a substrate for the memory of the integrated information. Two questions remain open: (1) it is unclear how the existence of such attractors manifests in the RNN parameters, and (2) we do not know whether there are fundamentally different types of solutions, all leading to roughly the same input-output behaviour, or whether solutions form a continuum.
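As an illustration of the linearization idea, here is a minimal sketch assuming vanilla rate dynamics tau * dx/dt = -x + W tanh(x) + b; the random W below is a stand-in (the pre-trained networks will supply their own weights, and may use a different parametrization). A line attractor would show up as one eigenvalue of the Jacobian with real part close to zero while all others are stable:

```python
import numpy as np
from scipy.optimize import root

# Stand-in parameters; replace with the weights of a pre-trained network.
rng = np.random.default_rng(0)
N = 100
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
b = np.zeros(N)
tau = 1.0

def f(x):
    """Right-hand side of tau * dx/dt = -x + W @ tanh(x) + b, input set to zero."""
    return (-x + W @ np.tanh(x) + b) / tau

# Find a fixed point x* with f(x*) = 0, starting from a random state.
x_star = root(f, 0.1 * rng.normal(size=N)).x

# Jacobian at x*: J = (-I + W @ diag(1 - tanh(x*)^2)) / tau.
J = (-np.eye(N) + W * (1.0 - np.tanh(x_star) ** 2)) / tau

# For a line attractor: one eigenvalue with real part ~ 0,
# all remaining eigenvalues with negative real part.
eig_real = np.sort(np.linalg.eigvals(J).real)
print("largest real parts:", eig_real[-3:])
```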
We will supply a set of 40 pre-trained RNNs that perform context-dependent information integration, together with a few Python functions to load these networks, evolve the network dynamics in time, and compute their fixed points and the linearizations around those fixed points.
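The exact interface of the supplied functions may differ, but evolving the dynamics could look roughly like this sketch (a plain Euler integrator; all names here are hypothetical):

```python
import numpy as np

def evolve(x0, inputs, W, B, b, tau=1.0, dt=0.1):
    """Euler-integrate  tau * dx/dt = -x + W @ tanh(x) + B @ u_t + b
    over a sequence of input vectors u_t.

    x0     : initial state, shape (N,)
    inputs : input sequence, shape (T, n_inputs)
    Returns the state trajectory of shape (T + 1, N).
    """
    xs = [np.asarray(x0, dtype=float)]
    for u_t in inputs:
        x = xs[-1]
        dx = (-x + W @ np.tanh(x) + B @ u_t + b) / tau
        xs.append(x + dt * dx)
    return np.stack(xs)
```

A trajectory for one trial would then be obtained with something like `traj = evolve(np.zeros(N), us, W, B, b)`, where `us` holds the sensory and context inputs for that trial.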
Our hope is to come up with suitable ways of comparing and characterizing the various solutions. A better understanding of the 'structure of the solution space' could enable us to formulate learning rules that are simpler, tractable on hardware, and more biologically plausible than backpropagation through time.
Besides studying the trained networks, we will also discuss various possibilities for setting up hard-coded solutions to the context-dependent integration problem and compare those to the learned architectures; a toy example of such a hand-coded solution is sketched below.
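As a deliberately minimal illustration (our own construction, not one of the supplied networks), the computation can be reduced to a binary context signal that gates which evidence stream reaches a one-dimensional perfect integrator, i.e. a line attractor collapsed to a single neutral mode:

```python
import numpy as np

def hand_coded_integrator(u, context, dt=0.1):
    """Toy hand-coded solution to context-dependent integration.

    u       : array of shape (T, 2), two noisy evidence streams
    context : 0 or 1, index of the relevant stream
    Returns the integrated evidence over time, shape (T,).
    """
    y, trace = 0.0, []
    for u_t in u:
        y += dt * u_t[context]   # accumulate only the cued evidence
        trace.append(y)
    return np.array(trace)
```

Comparing such idealized constructions against the trained networks should help clarify which features of the learned solutions are necessary and which are incidental.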
Session 1
Introduction; explanation of Mante et al., 2013; how to evolve RNN dynamics
Explanation of various solution types
Session 2
Fixed points, properties of weight matrices, ...
Session 3
...
Timetable
Day | Time | Location |
---|---|---|
Thu, 25.04.2019 | 14:00 - 15:00 | Lecture room |
Fri, 26.04.2019 | 21:30 - 22:30 | Sala Panorama |
Tue, 30.04.2019 | 20:30 - 21:30 | Sala Panorama |