
Organising the paper

We are currently in the process of writing up the paper. If you are one of the contributors and would like to be recognised as an author, please make yourself known to us: either email Dan Goodman or join the SNUFA Discord channel #sound-localisation-paper (or both).

The current plan for writing up the paper is as follows:

  1. Gather information on all contributors (names, email addresses, institutions, etc.). The current plan is that Marcus will be first author and I will be last author, with no other decisions about author ordering yet. Please share your thoughts, including whether you disagree with the first/last author placements (disagreeing is fine; this is only a provisional decision). If nobody has strong feelings, we may simply randomise the rest of the order.
  2. Clean up and merge all notebooks into main. Please note that I have renamed some of the notebooks, moved some into the research folder, and changed the headings on some to give them better descriptions. You may have conflicts that need resolving in your pull request.
  3. Marcus and I will write a first draft of the main body of the paper that will attempt to summarise everything as well as talk about the process. You can see the results so far online as we write.
  4. If you want to do a more detailed write-up of your section, including methods, results, etc., it will be included in the Appendices part of the paper.
  5. Every notebook will be included in the Supplementary Materials section of the paper.
  6. I am happy to discuss any of the decisions above. I’ve only made them in order to get things going quickly!
  7. We’ll do several rounds of iteration and comments. We will also continue our monthly meetings and discuss at those.
  8. We’ll submit a preprint of the paper to arXiv or bioRxiv.
  9. We’ll try to get this submitted to a journal.

Actions

Current known contributors

If you add a contribution, please use one of the following templates (see examples below):

  • Wrote the paper (plus which section if you would like to specify)
  • Conducted research (please give a link to your notebook formatted like this [](../research/3-Starting-Notebook.ipynb), or specify another sort of contribution)
  • Supervised research (please give the name of your supervisee)

Table 1: Contributors, ordered by GitHub commits.

| Name | GitHub | Contribution |
| --- | --- | --- |
| Tomas Fiers | @tfiers | Built the website infrastructure. |
| Marcus Ghosh | @ghoshm | Managed the project, wrote the paper, conducted research (Quick Start Notebook, Sound localisation following Dale's law), gave the Cosyne tutorial. |
| Dan Goodman | @thesamovar | Conceived the project, wrote the paper, wrote and recorded the Cosyne tutorial, conducted research (Starting Notebook, Analysing performance and solutions as time constants change). |
| Francesco De Santis | @francescodesantis | Conducted research (TODO INCLUDE LINK), wrote the paper (Contralateral glycinergic inhibition as key factor in creating ITD sensitivity). |
| Karim Habashy | @KarimHabashy | Conducted research (Learning delays, Learning delays (v2), Vanilla sound localization problem with a single delay layer (non-spiking)), wrote the paper (Learning Delays), project management (Quick Start Notebook). |
| Mingxuan Hong | @mxhong | Conducted research (Altering Output Neurons, Dynamic threshold). |
| Dilay Fidan Erçelik | @dilayercelik | Conducted research (Quick Start Notebook, Version with 250 Hz input (clean version)). |
| Rory Byrne | @rorybyrne | Organised the source code structure, conducted research (Improving Performance: Optimizing the membrane time constant). |
| Zach Friedenberger | @ZachFriedenberger | Conducted research (Improving Performance: Optimizing the membrane time constant). |
| Helena Yuhan Liu | @Helena-Yuhan-Liu | Conducted research (Analysis: thresholding W1W2 plot). |
| Jose Gomes (Portugal, PhD) | @JoseGomesJPG | Conducted research (Sound localisation following Dale's law). |
| ??? | @a-dtk | (TODO) |
| Sara Evers | @saraevers | Conducted research (Analysing Dale's law and distribution of excitatory and inhibitory neurons). |
| Ido Aizenbud | @ido4848 | Conducted research (Filter-and-Fire Neuron Model). |
| Balázs Mészáros | @mbalazs98 | Wrote the paper (DCLS-based delay learning in the appendix), conducted research (Noise offsets in every iteration, Dilated Convolution with Learnable Spacings). |
| Sebastian Schmitt | @schmitts | (TODO) |
| Rowan Cockett | @rowanc1 | MyST technical support. |
| Jakub Smékal | @smejak | (TODO) |
| Alberto Antonietti | @alberto-antonietti | Supervised Francesco De Santis, wrote the paper (Contralateral glycinergic inhibition as key factor in creating ITD sensitivity). |
| Lavínia Takarabe | @laviniamitiko | (TODO) |
| Danish Shaikh | @danishbizkit | (TODO) |
| ??? | @pfcgit | (TODO) |
| ??? | @luis-rr | (TODO) |
| Pietro Monticone | @pitmonticone | Cleaned paper and notebooks. |
| Adam Haber | @adamhaber | (TODO) |
| Gabriel Béna | @GabrielBena | Conducted research (Analysing trained networks - workshop edition, Sound localisation following Dale's law). |
| Divyansh Gupta | @guptadivyansh | (TODO) |
| Gabryel Mason-Williams (UK undergrad) | ??? | Conducted research (Analysing trained networks - workshop edition). |
| Josh Bourne (UK MSc student) | ??? | Conducted research (Analysing trained networks - workshop edition). |
| Zekai Xu (UK MSc student) | ??? | Conducted research (Analysing trained networks - workshop edition). |
| Leonidas Richter (Germany, PhD) | ??? | Conducted research (Learning delays). |
| Chen Li (UK MSc) | ??? | Conducted research (Improving Performance: Optimizing the membrane time constant). |
| Peter Crowe (Germany, Undergraduate) | ??? | Conducted research (Improving Performance: Optimizing the membrane time constant). |

Notebook map

The following lists the notebooks in this project, along with their authors, a summary, and related notebooks.

Introductory notebooks

Background
Explanation of the background. (Author: Dan Goodman.)
Questions & challenges
List of research questions and challenges. (Author: everyone.)

Templates / starting points

Starting Notebook
The template notebook suggested as a starting point, based on the Cosyne tutorial that kicked off this project. (Author: Dan Goodman.)
Quick Start Notebook
Condensed version of Starting Notebook using the shorter membrane time constants from Improving Performance: Optimizing the membrane time constant and Dale's law from Sound localisation following Dale's law. (Author: Dilay Fidan Erçelik, Karim Habashy, Marcus Ghosh.)

Individual notebooks

Filter-and-Fire Neuron Model
Using an alternative neuron model. (Author: Ido Aizenbud based on work from Dilay Fidan Erçelik.)
Altering Output Neurons
Comparison of three different ways of reading out the network's decision (average membrane potential, maximum membrane potential, spiking outputs) with short and long time constants; a minimal sketch of these readouts appears after this list. (Author: Mingxuan Hong.)
Analysing trained networks - workshop edition
Group project from an early workshop looking at hidden unit spiking activity and single unit ablations. Found that some hidden neurons don’t spike, and ablating those does not harm performance. Builds on (WIP) Analysing trained networks. (Author: Gabriel Béna, Josh Bourne, Tomas Fiers, Tanushri Kabra, Zekai Xu.)
Sound localisation following Dale's law
Investigation into the results of imposing Dale's law; a sign-constraint sketch appears after this list. Incorporated into Quick Start Notebook. Uses a fix from Analysing Dale's law and distribution of excitatory and inhibitory neurons. (Author: Marcus Ghosh, Gabriel Béna, Jose Gomes.)
Dynamic threshold
Adds an adaptive threshold to the neuron model and compares results; see the sketch after this list. The conclusion is that the dynamic threshold does not help in this case. (Author: Mingxuan Hong.)
Sound localisation with excitatory-only inputs surrogate gradient descent
Results of imposing an excitatory-only constraint on the neurons. Appears to find solutions closer to what would be expected from the Jeffress model. (Author: TODO: who is @luis-rr?)
Learning delays, Learning delays (v2) and Vanilla sound localization problem with a single delay layer (non-spiking)
Delay learning using a differentiable delay layer (sketched after this list), written up in Learning delays. (Author: Karim Habashy.)
Dilated Convolution with Learnable Spacings
Delay learning using Dilated Convolution with Learnable Spacings, written up in Learning delays. (Author: Balázs Mészáros.)
Robustness to Noise and Dropout
Tests the effects of adding Gaussian noise and/or dropout during the training phase; see the sketch after this list. The conclusion is that dropout does not help and adding noise decreases performance. (Author: TODO: who is @a-dtk?)
Version with 250 Hz input, Version with 250 Hz input (clean version)
Analysis of results with a higher-frequency input stimulus and different membrane time constants for the hidden and output layers. The conclusion is that a smaller time constant matters for the hidden layer but not for the output layer. (Author: Dilay Fidan Erçelik.)
Analysing performance and solutions as time constants change
Deeper analysis of the strategies found by trained networks as time constants vary. Added firing rate regularisation. Extends Improving Performance: Optimizing the membrane time constant. (Author: Dan Goodman.)
Workshop 1 Write-up
Write-up of what happened at the first workshop. (Author: Marcus Ghosh.)
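
The sketches below illustrate some of the techniques named in this list. They are minimal examples under assumed shapes and parameter values, not the notebooks' actual code.

For Altering Output Neurons, this is one way the three readouts could be implemented; `v_out` and `s_out` are assumed tensor names:

```python
import torch

# Minimal sketch of three readouts of the output layer (not the notebook's code).
# v_out: membrane potentials, shape (time, batch, classes); s_out: output spikes.
def readout_mean_v(v_out):
    return v_out.mean(dim=0)        # average membrane potential over time

def readout_max_v(v_out):
    return v_out.max(dim=0).values  # maximum membrane potential over time

def readout_spike_count(s_out):
    return s_out.sum(dim=0)         # total spike count per output neuron

# In each case the network's decision is the argmax over classes, e.g.
# pred = readout_mean_v(v_out).argmax(dim=-1)
```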
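For Sound localisation following Dale's law, one common way to impose the constraint is to keep non-negative weight magnitudes and multiply by a fixed per-unit sign; the 50/50 excitatory/inhibitory split here is an assumption:

```python
import torch

# Minimal sketch of Dale's law as a sign constraint (not the notebook's code).
n_hidden = 30
sign = torch.ones(n_hidden)
sign[: n_hidden // 2] = -1.0  # assumed split: half inhibitory, half excitatory

W = torch.nn.Parameter(torch.rand(n_hidden, 2))  # non-negative magnitudes

def effective_weights():
    # Each unit's outgoing weights share its fixed sign, so excitatory
    # units stay excitatory and inhibitory units stay inhibitory.
    return sign[:, None] * W

with torch.no_grad():
    W.clamp_(min=0.0)  # project back to non-negative after each optimiser step
```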
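For Dynamic threshold, a standard adaptive-threshold LIF update looks like the following; all parameter names and values are illustrative:

```python
import torch

# Minimal sketch of an adaptive (dynamic) spiking threshold (illustrative values).
# The threshold jumps by `delta` on each spike and decays back towards theta0.
def lif_adaptive_step(v, theta, i_in, alpha=0.95, beta=0.9, theta0=1.0, delta=0.5):
    v = alpha * v + i_in          # leaky membrane integration
    spike = (v >= theta).float()  # fire when v crosses the moving threshold
    v = v * (1 - spike)           # reset membrane on spike
    theta = theta0 + beta * (theta - theta0) + delta * spike  # adaptation
    return v, theta, spike
```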
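For the delay-learning notebooks, one way to make a delay differentiable is to interpolate linearly between the two neighbouring integer shifts, so gradients flow into the real-valued delay; shapes and names here are assumptions:

```python
import torch

# Minimal sketch of a differentiable delay (not the notebooks' code).
# x: input of shape (time, channels); d: real-valued delay in time steps.
def apply_delay(x, d):
    k = int(torch.floor(d))                          # integer part of the delay
    frac = d - k                                     # fractional part in [0, 1)
    pad = x.new_zeros((k + 1, x.shape[1]))
    x0 = torch.cat([pad[:k], x])[: x.shape[0]]       # delayed by k steps
    x1 = torch.cat([pad[: k + 1], x])[: x.shape[0]]  # delayed by k + 1 steps
    return (1 - frac) * x0 + frac * x1               # gradient reaches d via frac
```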
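For Robustness to Noise and Dropout, the two perturbations could be applied as below, during training only; `sigma` and `p` are assumed values:

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the two perturbations (illustrative values, not the notebook's).
def perturb(x, hidden_spikes, training=True, sigma=0.1, p=0.2):
    if training:
        x = x + sigma * torch.randn_like(x)            # additive Gaussian input noise
        hidden_spikes = F.dropout(hidden_spikes, p=p)  # random dropout of hidden spikes
    return x, hidden_spikes
```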

Inconclusive

The following notebooks did not reach a solid conclusion.

Compute hessians (jax version)
An unfinished attempt to perform sensitivity analysis using Hessian matrices computed via autodifferentiation with the JAX library; see the sketch at the end of this list. (Author: Adam Haber.)
Noise offsets in every iteration
Analysis of an alternative way of handling noise. (Author: Balázs Mészáros.)
Analysis: thresholding W1W2 plot
Unfinished attempt to improve analysis code. (Author: Helena Yuhan Liu.)
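
For Compute hessians (jax version), the underlying idea can be sketched with a stand-in loss; `loss` here is an assumption, not the project's network:

```python
import jax
import jax.numpy as jnp

# Minimal sketch: Hessian-based sensitivity analysis via autodiff in JAX.
def loss(w):
    return jnp.sum(jnp.tanh(w) ** 2)  # stand-in scalar loss

w = jnp.ones(5)
H = jax.hessian(loss)(w)            # (5, 5) matrix of second derivatives
eigvals = jnp.linalg.eigvalsh(H)    # large eigenvalues mark sensitive directions
```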

Historical

This subsection includes notebooks whose content was later merged into an updated notebook.

(WIP) Analysing trained networks
Early work on analysing the strategies learned by trained networks. Folded into Analysing trained networks - workshop edition. (Author: Dan Goodman.)
Improving Performance: Optimizing the membrane time constant
Analyses how performance depends on membrane time constant. Folded into Analysing performance and solutions as time constants change. (Author: Zach Friedenberger, Chen Li, Peter Crowe.)
Analysing Dale’s law and distribution of excitatory and inhibitory neurons
Fixed a mistake in an earlier version of Sound localisation following Dale's law. (Author: Sara Evers.)