Thank you! Looking forward to seeing you there!
@achterbrain
Neuroscience & AI at University of Oxford and University of Cambridge | Principles of efficient computations + learning in brains, AI, and silicon 🧠 NeuroAI | Gates Cambridge Scholar www.jachterberg.com
We're organising a #Cosyne workshop on biologically-inspired AI on Monday 16th with @achterbrain.bsky.social! 🧠
We've got an incredibly exciting array of speakers, and you can sign up to present your poster on #NeuroAI!
More info below ⬇️
#compneuro #neuroscience #neuroskyence
Super looking forward to the workshop! Alongside the talks and poster session, we are also working on an industry careers in NeuroAI meet-up for Early Career Researchers.
All details and updates can be found on our website: sites.google.com/view/cosyne-...
We are also hosting a NeuroAI poster session during the lunch break! If you are presenting a NeuroAI-related poster at Cosyne, you are welcome to put it up in our workshop room so attendees can find relevant work they might otherwise have missed.
Signup: forms.gle/kzefm2FtVuRk...
...continued:
Shahab Bakhtiari (University of Montreal) @shahabbakht.bsky.social
Filippo Moro (Institute of Neuroinformatics, University of Zurich and ETH Zurich)
Charlotte Frenkel (TU Delft)
Rui Ponte Costa (University of Oxford) @somnirons.bsky.social
We have a super exciting lineup of speakers joining us:
Jonathan Cornford (University of Leeds) @repromancer.bsky.social
Ida Momennejad (Microsoft Research) @neuroai.bsky.social
Yulia Sandamirskaya (Zurich University of Applied Sciences)
Wolfgang Maass (TU Graz)
Going to Cosyne? There will be a "Biologically-inspired Artificial Intelligence: Challenges and Opportunities" workshop on Monday 16th! 🧠
Exciting lineup of speakers, plus the opportunity to present your #NeuroAI poster. More info on the program & poster signup below!
#compneuro #neuroscience
Giacomo's commentary was in response to this great recent paper by Iqbal et al., also in PNAS:
"Biologically grounded neocortex computational primitives implemented on neuromorphic hardware improve vision transformer performance"
www.pnas.org/doi/10.1073/...
Enjoyed @giacomoi.bsky.social's commentary in PNAS on how #NeuroAI and Neuromorphic Engineering should come together so that brain circuit motifs can positively influence the design of computing systems:
Biological fidelity: The engine driving the neuromorphic renaissance
www.pnas.org/doi/full/10....
Let me know if you are in San Diego for #NeurIPS and want to chat about #compneuro / #neuroAI and neuroscience-inspired computing!
How do brain areas control each other? 🧠
✨In our NeurIPS 2025 Spotlight paper, we introduce a data-driven framework to answer this question using deep learning, nonlinear control, and differential geometry. 🧵⬇️
A beautiful summary of our paper! Thank you @neurosock.bsky.social
One example of how multiplexing might be implemented in the brain was shown in the great work by Thomas Akam with Dmitri Kullmann, a decade ago.
The papers are well cited, but the general multiplexing idea never really took the field by storm as much as it deserved.
www.nature.com/articles/nrn...
Postdoc fellowship opportunity for ECRs (<3 yrs post-PhD). Note that if you want to apply to work with me as your mentor, our dept has an internal deadline of Dec 4th, so please email me asap. Our internal process is shorter than the full application. 🤖🧠🧪
royalcommission1851.org/fellowships/...
This new model opens up a whole new world of analysing multi-region interactions across trials and tasks! More analyses and findings can be found in our paper, linked below. Work led by Jack Cook, with great help from @danakarca.bsky.social and @somnirons.bsky.social!
arxiv.org/abs/2506.02813
We also find that while complex regions are needed to learn complex tasks, these tasks are eventually moved toward simpler regions, similar to how you may struggle at first when learning a new skill but slowly get better with practice.
Furthermore, we find that these pathways mirror our expected behavior of pathways in the brain! We find that difficult tasks need to be learned in more complex regions, similar to how you need to think βharderβ when learning how to solve a difficult math problem.
With these three features in place, we find that our third criterion of distinct pathways is also met. While baseline models exhibit largely random expert usage patterns, our models exhibit highly structured pathways between regions that reliably emerge during learning.
Our third contribution is expert dropout. Without it, models suffer large performance deficits when experts outside the active pathway are disabled, whereas we want models to depend primarily on the experts they use most.
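Expert dropout in this spirit might be sketched as follows, assuming soft gates that are masked and renormalized during training; the keep probability, the renormalization, and all names here are illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def expert_dropout(gates, p=0.3):
    """Randomly disable experts during training and renormalize the gates,
    so pathways learn not to rely on experts they rarely use."""
    mask = rng.random(gates.shape[1]) > p        # keep each expert w.p. 1-p
    if not mask.any():                           # always keep at least one
        mask[rng.integers(gates.shape[1])] = True
    g = gates * mask
    return g / g.sum(axis=1, keepdims=True)

gates = np.array([[0.6, 0.3, 0.1]])
print(expert_dropout(gates))                     # surviving gates, re-summing to 1
```

Because the surviving gates are renormalized, the model still produces a full output each step, but gradients only flow through the experts that were kept.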
When put together, these two contributions resulted in remarkable pathway consistency in our model, which we measured by correlating the routing patterns across 10 different models trained on the same tasks.
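One simple way to quantify such consistency, assuming each model's routing can be summarized as a task-by-expert matrix, is a Pearson correlation between flattened routing maps. This is a hypothetical sketch of the measurement, not necessarily the paper's exact metric, and the data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def pathway_consistency(routing_a, routing_b):
    """Pearson correlation between two models' task-by-expert routing maps."""
    return np.corrcoef(routing_a.ravel(), routing_b.ravel())[0, 1]

# Synthetic example: a second seed that found similar pathways vs. one
# whose routing is just a shuffled copy (structure destroyed).
base = rng.random((10, 4))                       # tasks x experts routing
similar = base + 0.05 * rng.standard_normal(base.shape)
shuffled = rng.permutation(base.ravel()).reshape(base.shape)

print(pathway_consistency(base, similar))        # close to 1
print(pathway_consistency(base, shuffled))       # close to 0
```

High cross-seed correlation indicates the same pathways reliably emerge, rather than being an accident of initialization.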
We then identify three inductive biases that yield pathways that meet each of these criteria.
The first of these is a routing loss that penalizes the use of more complex experts during training, and the second scales this loss by the modelβs performance on the task being solved.
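A routing loss of this shape could look like the following sketch; the exact form, the use of parameter count as the complexity proxy, and the accuracy scaling are my assumptions, not the paper's definition.

```python
import numpy as np

def routing_loss(gates, expert_sizes, task_accuracy):
    """Penalize routing mass sent to complex (large) experts.

    gates: (batch, n_experts) routing probabilities
    expert_sizes: per-expert size (a proxy for complexity)
    task_accuracy: current accuracy in [0, 1]; the better the model
                   already does, the harder it is pushed toward
                   simpler experts.
    """
    cost = np.asarray(expert_sizes, dtype=float)
    cost = cost / cost.sum()                     # normalized complexity cost
    per_sample = gates @ cost                    # expected complexity used
    return task_accuracy * per_sample.mean()

gates = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6]])
print(routing_loss(gates, [4, 16, 64], task_accuracy=0.9))   # ≈ 0.3
```

Scaling by performance means the pressure toward simple experts only kicks in once the task is being solved, so complex regions remain available while the task is still hard.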
We then set three criteria to determine whether pathways had formed:
(1) Consistency: Models trained on the same tasks should have similar pathways
(2) Self-sufficiency: Pathways should be primarily reliant on their own experts
(3) Distinctness: Many distinct pathways should be present
We first needed to create a model in which we could study pathway formation. We chose a Heterogeneous Mixture-of-Experts architecture, in which information can be dynamically routed to computational experts, or regions, of varying sizes.
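To make the setup concrete, here is a minimal numpy sketch of a heterogeneous Mixture-of-Experts layer with a soft router over experts of different hidden sizes. All dimensions and names are illustrative, and the soft (rather than top-k) routing is my simplification, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 16, 8

# Experts of varying hidden size: "regions" of different complexity
hidden_sizes = [4, 16, 64]
experts = [
    (rng.standard_normal((d_in, h)) / np.sqrt(d_in),
     rng.standard_normal((h, d_out)) / np.sqrt(h))
    for h in hidden_sizes
]
W_router = rng.standard_normal((d_in, len(experts)))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x):
    """Route each input across all experts, weighted by the router."""
    gates = softmax(x @ W_router)                           # (batch, n_experts)
    outs = np.stack([np.tanh(x @ W1) @ W2 for W1, W2 in experts], axis=1)
    return (gates[..., None] * outs).sum(axis=1), gates

x = rng.standard_normal((5, d_in))
y, gates = moe_forward(x)
print(y.shape, gates.shape)                                 # (5, 8) (5, 3)
```

A "pathway" in this picture is a learned, input-dependent pattern over the gates, i.e. which regions a given task routes through.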
We train models on 82 tasks of varying complexity (ModCog)!
Brains have many pathways and subnetworks, but which principles underlie their formation?
In our #NeurIPS paper, led by Jack Cook, we identify biologically relevant inductive biases that create pathways in brain-like Mixture-of-Experts models 🧵
#neuroskyence #compneuro #neuroAI
arxiv.org/abs/2506.02813
All good Dan!
Check out this cool new work led by @pengfei-sun.bsky.social!
With my great advisors and colleagues, @achterbrain.bsky.social @zhe @danakarca.bsky.social @neural-reckoning.org, we show that if heterogeneous axonal delays, even imprecise ones, capture the essential temporal structure of a task, spiking networks do not need precise synaptic weights to perform well.
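A toy illustration of the delay idea, entirely my own construction rather than the paper's spiking model: heterogeneous axonal delays align spikes in time, so a coincidence detector with coarse, uniform weights can do the temporal work that precise weights would otherwise have to.

```python
import numpy as np

# Two input spikes arrive at different times; per-axon delays line their
# copies up at the detector, so uniform weights suffice to detect the pattern.
spike_times = np.array([2.0, 5.0])
delays = np.array([3.0, 0.0])        # heterogeneous axonal delays
arrivals = spike_times + delays      # both copies now arrive at t = 5
weights = np.array([1.0, 1.0])       # coarse, imprecise weights

# Fire if the delayed spikes coincide and total drive crosses threshold
fires = np.ptp(arrivals) < 0.5 and weights.sum() >= 2.0
print(arrivals, fires)               # [5. 5.] True
```

The point of the toy: the temporal structure of the task lives in the delays, so the weights only need to be roughly right.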
Come be our colleague at EPFL! Several open calls for positions 🧪🧠🤖
* Neuroscience www.epfl.ch/about/workin... (deadline Oct 1)
* Life Science Engineering www.epfl.ch/about/workin...
* CS general call www.epfl.ch/about/workin...
* Learning Sciences www.epfl.ch/about/workin...
Statistical methods for dissecting interactions between brain areas
www.sciencedirect.com/science/arti...