08.07.2022
Organised by Marcus Ghosh (France, Postdoc)
Introduction
Our first workshop brought together 16 researchers (undergraduates, doctoral students and postdocs) from 8 countries! We started the day by discussing ideas for the project and the work people had already committed to the repository. Our ideas fell into roughly four themes: time constants, Dale’s law, learning delays and analysing trained networks, so we created a breakout room for each and let everyone join the team of most interest to them. In most teams one person shared their screen and wrote code, while the rest of the team provided ideas and input and read relevant literature. Below are some notes on each team’s work, and on our cross-team discussion at the end of the day.
Teams
Time Constants
Members
Chen Li (UK, Masters)
Peter Crowe (Germany, Undergraduate)
Rory Byrne (UK, Masters)
Zachary Friedenberger (Canada, PhD)
Questions
- How do membrane time constants affect network performance?
- If we train the membrane time constants, what distribution emerges?
- Do heterogeneous time constants improve performance?
- What happens if we allow separate time constants per layer?
Results
- Model performance decreases as the membrane time constant increases
- Performance also seemed to improve when the output units had small time constants
- Code to train the membrane time constants seemed to be working (see the sketch below)
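For anyone following along, here is a minimal sketch of one way to make membrane time constants trainable in PyTorch. The class and parameter names (`LeakyIntegrator`, `tau_raw`) are ours rather than the repository’s, and the softplus parametrisation is just one convenient choice for keeping the time constants positive during training.

```python
import torch
import torch.nn as nn

class LeakyIntegrator(nn.Module):
    """Leaky integrator with a learnable membrane time constant per unit."""

    def __init__(self, n_units, dt=1e-3, tau_init=20e-3):
        super().__init__()
        self.dt = dt
        # Inverse softplus of tau_init, so tau starts at exactly tau_init.
        raw = torch.log(torch.expm1(torch.full((n_units,), tau_init)))
        self.tau_raw = nn.Parameter(raw)

    @property
    def tau(self):
        # Softplus keeps the trained time constants strictly positive.
        return nn.functional.softplus(self.tau_raw)

    def forward(self, inputs):
        # inputs: (time, batch, n_units); discretisation of v' = (input - v) / tau
        alpha = torch.exp(-self.dt / self.tau)  # per-unit decay factor
        v = torch.zeros_like(inputs[0])
        voltages = []
        for x in inputs:
            v = alpha * v + (1 - alpha) * x
            voltages.append(v)
        return torch.stack(voltages)
```

With `tau_raw` as a vector, heterogeneous per-unit time constants can emerge during training; making it a scalar would force a single shared value.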
Dale’s Law
Members
Bena Gabriel (UK, PhD)
Jose Gomes (Portugal, PhD)
Questions
- Can networks with only positive weights learn the task?
- If we implement Dale’s law (i.e. each unit’s output connection weights must all be positive or all be negative), how does performance change as a function of the ratio of excitatory:inhibitory units?
Results
- Wrote code to enforce Dale’s law and to vary the ratio of excitatory:inhibitory units (see the sketch below)
- Preliminary results suggest this is a promising direction to explore!
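As a rough illustration of the approach (not the team’s exact code), Dale’s law can be enforced by training non-negative weight magnitudes and fixing one sign per presynaptic unit; the names `DaleLinear` and `exc_ratio` below are our own.

```python
import torch
import torch.nn as nn

class DaleLinear(nn.Module):
    """Linear layer obeying Dale's law: each input unit's outgoing
    weights share a single, fixed sign (excitatory or inhibitory)."""

    def __init__(self, n_in, n_out, exc_ratio=0.8):
        super().__init__()
        n_exc = int(round(exc_ratio * n_in))
        signs = torch.ones(n_in)
        signs[n_exc:] = -1.0  # first n_exc units excitatory, the rest inhibitory
        self.register_buffer("signs", signs)
        self.weight_raw = nn.Parameter(0.1 * torch.rand(n_in, n_out))

    def forward(self, x):
        # abs() keeps magnitudes non-negative; the fixed signs enforce Dale's law.
        w = torch.abs(self.weight_raw) * self.signs[:, None]
        return x @ w
```

Sweeping `exc_ratio` then probes the second question above.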
References
- Cornford, J., Kalajdzievski, D., Leite, M., Lamarquette, A., Kullmann, D. M., & Richards, B. (2020). Learning to live with Dale’s principle: ANNs with separate excitatory and inhibitory units. bioRxiv. https://doi.org/10.1101/2020.11.02.364968
Learning delays
Members
Karim Habashy (Germany, PhD)
Leonidas Richter (Germany, PhD)
Questions
- With a random weight matrix - can you solve the task by just learning input delays?
- With a bit of weight pretraining - can you improve performance by learning input delays?
Results
- Learning the input delays didn’t seem to improve performance, but we discussed how a surrogate-gradient-style trick could help with this in future (see the sketch below).
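The difficulty is that integer time shifts have zero gradient. One surrogate-gradient-style workaround (a sketch under our own assumptions, with inputs shaped `(time, batch, channels)`) is to treat delays as continuous and linearly interpolate between the two neighbouring time steps, so that gradients can flow to the delay parameters:

```python
import torch
import torch.nn as nn

class LearnableDelays(nn.Module):
    """Delay each input channel by a learnable, fractional number of steps."""

    def __init__(self, n_channels, max_delay):
        super().__init__()
        self.max_delay = max_delay
        self.delays = nn.Parameter(torch.zeros(n_channels))

    def forward(self, x):
        # x: (time, batch, n_channels)
        d = self.delays.clamp(0.0, float(self.max_delay))
        lo = d.floor().long()  # integer part of each delay
        frac = d - lo          # fractional part (carries the gradient)
        pad_len = self.max_delay + 1
        pad = torch.zeros((pad_len, *x.shape[1:]), dtype=x.dtype, device=x.device)
        xp = torch.cat([pad, x], dim=0)  # zero history before t = 0
        idx = torch.arange(x.shape[0], device=x.device)
        out = []
        for c in range(x.shape[-1]):
            a = xp[idx + pad_len - lo[c], :, c]      # x[t - floor(d)]
            b = xp[idx + pad_len - lo[c] - 1, :, c]  # x[t - floor(d) - 1]
            out.append((1 - frac[c]) * a + frac[c] * b)
        return torch.stack(out, dim=-1)
```

A true surrogate gradient for hard integer delays would be an alternative; this interpolation is simply the easiest differentiable relaxation.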
Analysing trained networks
Members
Gabryel Mason-Williams (UK, Undergraduate)
Josh Bourne (UK, Masters)
Tanushri Kabra (India, Masters)
Tomas Fiers (UK, PhD)
Zekai Xu (UK, Undergraduate)
Questions
- How can we interpret the spiking activity of trained networks?
- Is it helpful to rank units’ importance (by the impact of ablating each unit on task performance)?
Results
- Only some hidden units spike: ablating those degrades performance, while ablating the others does not (see the ablation sketch below)
- Hidden units seem continuously tuned to the input angles
- Smaller networks (2 or 7 hidden units, as opposed to 30) performed significantly better than chance, but not as well as the larger networks
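A minimal sketch of the ablation ranking idea, assuming a model that exposes a `hidden_mask` buffer multiplying its hidden layer output (that hook and the function name are ours, not the repository’s):

```python
import torch

def rank_units_by_ablation(model, data, targets, accuracy_fn, n_hidden):
    """Rank hidden units by the accuracy drop caused by silencing each one."""
    with torch.no_grad():
        baseline = accuracy_fn(model(data), targets)
        drops = []
        for i in range(n_hidden):
            model.hidden_mask[i] = 0.0  # ablate unit i
            drops.append(baseline - accuracy_fn(model(data), targets))
            model.hidden_mask[i] = 1.0  # restore it
    # Units whose ablation hurts accuracy most come first.
    order = sorted(range(n_hidden), key=lambda i: drops[i], reverse=True)
    return order, drops
```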
Next steps
- Train networks until convergence
- Test the impact of dropout (dropping units or connections)
- Regularize network spiking (e.g. using a lower and upper bound; see the sketch below)
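For the spiking regulariser, one option (a sketch only; the bounds and strength below are placeholder values, not agreed choices) is a penalty that is zero while each unit’s spike count stays within [lower, upper] and grows quadratically outside it:

```python
import torch

def spike_count_penalty(spikes, lower=1.0, upper=20.0, strength=1e-3):
    """Penalise units whose per-trial spike counts leave [lower, upper].

    spikes: (time, batch, n_units) tensor of 0/1 spike indicators.
    """
    counts = spikes.sum(dim=0)            # spike count per unit, per trial
    too_few = torch.relu(lower - counts)  # shortfall below the lower bound
    too_many = torch.relu(counts - upper) # excess above the upper bound
    return strength * (too_few ** 2 + too_many ** 2).mean()
```

This term would simply be added to the task loss during training.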
Discussion¶
Before lunch and at the end of the day we regrouped to share our progress. For the latter discussion we were joined by Alessandro Galloni (USA, Postdoc) and Boris Marin (Brazil, Assistant Professor).
Based on the time constants team’s results, we agreed that we should all use a shorter membrane time constant when training our networks, and there was a general consensus that we should base our analyses on networks trained until convergence. We agreed that the breakout room format worked well (though five may be a reasonable limit on team size), and were pleased to hear that those not coding themselves learnt a lot from following along. Looking ahead, we decided to meet on a monthly basis (starting in September) and agreed that a local meetup format would be great. Ideas for future work included: conductance-based synapses, heterogeneity (e.g. of activation functions) and a reinforcement learning version of the task.