Adopting an engineering mindset will help the field focus its research priorities, writes @timothyoleary.bsky.social.
#neuroskyence
www.thetransmitter.org/systems-neur...
Honored to have a research highlight featuring our work!
A comprehensive overview of our results and their impact on future research and applications:
www.nature.com/articles/s41...
I'm super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!
www.biorxiv.org/content/10.1...
In our Learning Club @cmc-lab.bsky.social today (Aug 18, Thu, 2pm CET), Samuel Liebana will tell us about his paper (www.cell.com/cell/fulltex... [joint work w/ @saxelab.bsky.social & @laklab.bsky.social]). Want to attend? Send an empty email to virtual-talk-link-request@cmclab.org to get the link!
🚨Our preprint is online!🚨
www.biorxiv.org/content/10.1...
How do #dopamine neurons perform the key calculations in reinforcement #learning?
Read on to find out more! 🧵
Beautiful and clear results showing that temporal difference error calculation is hardwired in the dopamine/striatum microcircuits: www.biorxiv.org/content/10.1...
from @malcolmgcampbell.bsky.social and @naoshigeuchida.bsky.social
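For readers new to reinforcement learning, the temporal-difference (TD) error mentioned above is the textbook quantity δ_t = r_t + γV(s_{t+1}) − V(s_t). A minimal numpy sketch of that definition (the function name and example numbers are mine, not from the paper):

```python
import numpy as np

def td_errors(rewards, values, gamma=0.9):
    """Textbook TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    values = np.append(values, 0.0)  # value of the terminal state is 0
    return rewards + gamma * values[1:] - values[:-1]

# A 3-step episode with reward only at the end:
deltas = td_errors(np.array([0.0, 0.0, 1.0]), np.array([0.5, 0.6, 0.8]))
# deltas = [0.04, 0.12, 0.2]
```

Dopamine responses are classically interpreted as reporting exactly this δ_t; the preprint asks how the circuit computes it.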
Read the full paper “Dopamine encodes deep network teaching signals for individual learning trajectories” in @cellpress.bsky.social ⬇️
www.cell.com/cell/fulltex...
@yulonglilab.bsky.social
@saxelab.bsky.social
@oxforddpag.bsky.social
@laklab.bsky.social
Very glad you liked it Blake!
Super excited to see this paper from Armin Lak & colleagues out! (I've seen @saxelab.bsky.social present it before.)
www.cell.com/cell/fulltex...
tl;dr: The learning trajectories that individual mice take correspond to different saddle points in a deep net's loss landscape.
🧠🧪 #NeuroAI
How does in-context learning emerge in attention models during gradient descent training?
Sharing our new Spotlight paper @icmlconf.bsky.social: Training Dynamics of In-Context Learning in Linear Attention
arxiv.org/abs/2501.16265
Led by Yedi Zhang with @aaditya6284.bsky.social and Peter Latham
Excited to share new work @icmlconf.bsky.social by Loek van Rossem exploring the development of computational algorithms in recurrent neural networks.
Hear it live tomorrow, Oral 1D, Tues 15 Jul, West Exhibition Hall C: icml.cc/virtual/2025...
Paper: openreview.net/forum?id=3go...
(1/11)
Thanks Tim!!! Very glad you liked it
Thank you to all our collaborators and funders for making this work possible!
Finally, a deep neural network model trained with gradient descent and dopamine-like teaching signals captured the mice's learning trajectories from naive to expert.
Remarkably, the model's fixed-point graph succinctly explained the diverse yet systematic strategies mice developed through learning.
Dopamine (DA) signals in the dorsolateral striatum (DLS) provided further evidence for deep GD learning.
DLS DA acted as a partial stimulus-based RPE that only drove learning for stimuli used in decisions ("associated"), resembling the dependence of GD updates on hidden-layer representations.
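A hedged illustration of that last point, the dependence of GD updates on hidden-layer representations: in a toy ReLU network (all weights and stimuli below are invented for illustration, not taken from the paper), the first-layer gradient vanishes for a stimulus the hidden layer does not represent, so a global error signal only "teaches" associated stimuli:

```python
import numpy as np

W1 = np.array([[1.0, -1.0],
               [0.5, -0.5]])      # input -> hidden weights (toy values)
w2 = np.array([1.0, 1.0])         # hidden -> output readout

def grad_W1(x, target):
    """Gradient of squared error w.r.t. W1 for one stimulus x."""
    pre = W1 @ x
    h = np.maximum(pre, 0.0)            # ReLU hidden layer
    err = w2 @ h - target               # scalar error ("teaching signal")
    gate = (pre > 0).astype(float)      # ReLU derivative gates the update
    return np.outer(err * w2 * gate, x)

x_assoc  = np.array([1.0, 0.0])   # stimulus the hidden layer represents
x_unused = np.array([0.0, 1.0])   # stimulus it ignores (all units silent)

g_assoc = grad_W1(x_assoc, target=1.0)    # nonzero: learning happens
g_unused = grad_W1(x_unused, target=1.0)  # exactly zero: no learning
```

The same scalar error arrives in both cases, but the update is zeroed wherever the hidden representation is silent, which is the qualitative signature the DLS DA data resemble.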
We found evidence for deep GD dynamics in mice learning a task from naive to expert:
1. Learning transitioned through strategies that persisted for several days
2. From early behavior, we could predict behavior many days later
3. Strategies developed sensitivity to visual stimuli over learning
Deep learning theory has identified key properties of GD dynamics such as:
1. Learning plateaus, in deep but not shallow networks
2. Local learning, with connected & systematic trajectories
3. A hierarchy of learning stages of increasing complexity
Does animal learning share these properties?
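Hallmark 1 is visible in the simplest possible setting. A minimal sketch of my own (a toy model, not the paper's network): gradient descent on a product of scalar weights, i.e. a "deep" linear chain, sits on a loss plateau before converging, while the single-weight shallow case descends immediately:

```python
import numpy as np

def train(depth, steps=2000, lr=0.05, init=0.1, target=1.0):
    """GD on 0.5*(prod(w) - target)^2 with one scalar weight per layer."""
    w = np.full(depth, init)
    losses = []
    for _ in range(steps):
        err = np.prod(w) - target
        losses.append(0.5 * err**2)
        # gradient w.r.t. layer i is err * (product of the other layers)
        grads = np.array([err * np.prod(np.delete(w, i))
                          for i in range(depth)])
        w -= lr * grads
    return np.array(losses)

shallow = train(depth=1)
deep = train(depth=3)
# shallow loss decays from step one; deep loss is still near its
# starting value at step 100, then drops rapidly once it escapes
```

With small initial weights, each layer's gradient is scaled by the (tiny) product of the other layers, producing the plateau; a shallow net has no such factor, which is why plateaus are a depth-specific hallmark.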
Does the brain learn by gradient descent?
It's a pleasure to share our paper at @cp-cell.bsky.social, showing how mice learning over long timescales display key hallmarks of gradient descent (GD).
The culmination of my PhD supervised by @laklab.bsky.social, @saxelab.bsky.social and Rafal Bogacz!
Now online! Dopamine encodes deep network teaching signals for individual learning trajectories
Our work, out at Cell, shows that the brain's dopamine signals teach each individual a unique learning trajectory. Collaborative experiment-theory effort, led by Sam Liebana in the lab. The first experiment my lab started just shy of 6y ago & v excited to see it out: www.cell.com/cell/fulltex...
Schematic of the study
New research shows long-term learning is shaped by dopamine signals that act as partial reward prediction errors.
The study in mice reveals how early behavioural biases predict individual learning trajectories.
Find out more ⬇️
www.sainsburywellcome.org/web/blog/lon...