Anirudh GJ

@anirudhgj

NeuroAI PhD student @ Mila & Universite de Montreal w/ Prof. Matthew Perich. Studying continual learning and adaptation in brains and ANNs.

53 Followers · 201 Following · 1 Post · Joined 07.10.2024

Latest posts by Anirudh GJ @anirudhgj

New paper with @deanpospisil.bsky.social, in which we introduce a new estimator for the "signal eigenspectrum" (i.e., the eigenvalues of the noiseless population responses). We re-analyze data from Stringer et al. 2019 and show eigenvalues of mouse V1 are well explained by a broken power law.

28.01.2026 07:39 👍 33 🔁 7 💬 0 📌 1
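[Editor's note] For readers unfamiliar with the term, a broken power law is a power law whose decay exponent changes at a break rank. A minimal numpy sketch of what such an eigenspectrum looks like (illustrative parameters only; this is not the estimator from the paper):

```python
import numpy as np

def broken_power_law(n, alpha1=0.7, alpha2=1.5, n_break=100, c=1.0):
    """Eigenvalue at rank n: decays as n**-alpha1 up to n_break,
    then as n**-alpha2, with the two pieces matched at the break."""
    n = np.asarray(n, dtype=float)
    scale = c * n_break ** (alpha2 - alpha1)  # continuity at n_break
    return np.where(n <= n_break, c * n ** -alpha1, scale * n ** -alpha2)

ranks = np.arange(1, 1001)
spectrum = broken_power_law(ranks)  # monotonically decreasing spectrum
```

On a log-log plot this traces two straight segments meeting at the break rank, which is the signature the re-analysis refers to.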

Thrilled to see the first preprint of the lab out 🤩 Check it out if you need to compare dynamics in your data and RNN (or any other combinations of dynamical systems)!

08.01.2026 16:37 👍 45 🔁 9 💬 0 📌 0
What a Neuron Teaches Us About Computation's Limits When we try to formalize a neuron computationally, we don't translate biology into codeβ€”we perform a violent collapse. We lock causation into fixed arrows when biology lives in causal ambiguity. We sy...

When we try to formalize a neuron computationally, we don't translate biology into codeβ€”we perform a violent collapse.

www.ocrampal.com/what-a-neuro...

#philosophy #science #psychology #AI #intelligence #physics #biology #philmind #philsci #philsky #philpsy #neurosky #neuroskyence

03.01.2026 16:16 👍 47 🔁 20 💬 3 📌 3

Episode #36 in #TheoreticalNeurosciencePodcast: On low-dimensional manifolds in motor cortex – with Sara Solla @sasolla.bsky.social

theoreticalneuroscience.no/thn36

Manifold analysis has changed our thinking on how cortex works. One of the pioneers of this modelling approach explains.

03.01.2026 09:21 👍 48 🔁 16 💬 2 📌 2
Subjective functions Where do objective functions come from? How do we select what goals to pursue? Human intelligence is adept at synthesizing new objective functions on the fly. How does this work, and can we endow arti...

Goal selection through the lens of subjective functions:
arxiv.org/abs/2512.15948
I welcome any feedback on these preliminary ideas.

19.12.2025 03:15 👍 67 🔁 27 💬 4 📌 1

We took a stab at how to infer both the dynamics and control parameters of partially-observable systems.

It’s a nasty problem, but @vgeadah.bsky.social made tremendous progress, ending up with some really elegant formalisms.

18.12.2025 18:51 👍 18 🔁 6 💬 1 📌 0
The next revolution in neuroscience is happening outside the lab By tracking brain activity as primates move freely in the wild, neuroethology could reshape what we think we know about our own minds.

Do we need to study animals in the wild to fully understand the brain? Maybe. OTOH, I sit in front of a computer all day.
bigthink.com/neuropsych/n...
#neuroscience

17.12.2025 19:09 👍 22 🔁 5 💬 3 📌 1

New paper for #neurips2025!

AI models adjust millions of internal settings to get better at a task. But how are these adjustments determined? For decades, we've mostly figured this out through trial & error.

We took a different approach...🧵 (1/6)

🔗 openreview.net/forum?id=oMi...

16.12.2025 19:29 👍 49 🔁 14 💬 3 📌 2
A theory of multi-task computation and task selection Neural activity during the performance of a stereotyped behavioral task is often described as low-dimensional, occupying only a limited region in the space of all firing-rate patterns. This region has...

1/X Excited to present this preprint on multi-tasking, with
@david-g-clark.bsky.social and Ashok Litwin-Kumar! Timely too, as “low-D manifold” has been trending again. (If you read thru the end, we escape Flatland and return to the glorious high-D world we deserve.) www.biorxiv.org/content/10.6...

15.12.2025 19:41 👍 83 🔁 20 💬 1 📌 2

1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs.
🔗 doi.org/10.1101/2025...
Led by @atenagm.bsky.social @mshalvagal.bsky.social

27.11.2025 08:24 👍 142 🔁 42 💬 3 📌 4

📍 Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!

arxiv.org/pdf/2410.03972

It started from a question I kept running into:

When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️

24.11.2025 16:43 👍 108 🔁 27 💬 5 📌 6
Connectivity Structure and Dynamics of Nonlinear Recurrent Neural Networks The structure of brain connectivity predicts collective neural activity, with a small number of connectivity features determining activity dimensionality, linking circuit architecture to network-level...

Now in PRX: Theory linking connectivity structure to collective activity in nonlinear RNNs!
For neuro fans: conn. structure can be invisible in single neurons but shape pop. activity
For low-rank RNN fans: a theory of rank=O(N)
For physics fans: fluctuations around DMFT saddle ⇒ dimension of activity

03.11.2025 21:47 👍 60 🔁 16 💬 2 📌 2
Three panel thing. In the left panel we use error bars. In the second, we take statistical significance as the biggest number but still have error bars. In LLM science, we just have the biggest number

What if we did a single run and declared victory

23.10.2025 02:28 👍 340 🔁 70 💬 13 📌 9
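[Editor's note] The serious point behind the joke, reporting variability across random seeds instead of a single run, is cheap to do. A minimal sketch with hypothetical accuracy numbers (nothing here is from a real experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: accuracies of the same model trained with 5 random seeds.
runs = rng.normal(loc=0.85, scale=0.02, size=5)

mean = runs.mean()
sem = runs.std(ddof=1) / np.sqrt(len(runs))  # standard error of the mean

# Report mean +/- SEM across seeds, not the single best (or only) run.
print(f"accuracy = {mean:.3f} +/- {sem:.3f} (n={len(runs)})")
```

That one extra loop over seeds is the difference between the left panel and the right one.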
Defining and quantifying compositional structure What is compositionality? For those of us working in AI or cognitive neuroscience this question can appear easy at first, but becomes increasingly perplexing the more we think about it. We aren’t shor...

Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out! ericelmoznino.github.io/blog/2025/08...

18.08.2025 20:46 👍 18 🔁 6 💬 1 📌 1

📰 I really enjoyed writing this article with @thetransmitter.bsky.social! In it, I summarize parts of our recent perspective article on neural manifolds (www.nature.com/articles/s41...), with a focus on highlighting just a few cool insights into the brain we've already seen at the population level.

04.08.2025 18:45 👍 54 🔁 15 💬 1 📌 1
From Spikes To Rates (YouTube video by Gerstner Lab)

Is it possible to go from spikes to rates without averaging?

We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!

Presented at Gatsby Neural Dynamics Workshop, London.

08.08.2025 15:25 👍 61 🔁 17 💬 2 📌 1

I wonder, where would be a good place to do modeling and chat with many people that study different species or do comparative studies? (asking for a friend)

16.07.2025 22:13 👍 15 🔁 3 💬 2 📌 0
A summary figure for a NeurIPS competition where AI agents compete with mice in a visual foraging task.

Mice learn these tasks and are robust to perturbations like fog. Now, we invite you all to make AI agents to beat mice.

We present our #NeurIPS competition. You can learn about it here: robustforaging.github.io (7/n)

10.07.2025 12:22 👍 44 🔁 10 💬 1 📌 1
Simple low-dimensional computations explain variability in neuronal activity Our understanding of neural computation is founded on the assumption that neurons fire in response to a linear summation of inputs. Yet experiments demonstrate that some neurons are capable of complex...

This paper carefully examines how well simple units capture neural data.

To quote someone from my lab (they can take credit if they want):

Def not news to those of us who use [ANN] models, but a good counter argument to the "but neurons are more complicated" crowd.

arxiv.org/abs/2504.08637

🧠 📈 🧪

25.06.2025 15:49 👍 64 🔁 13 💬 3 📌 2
Working memory control dynamics follow principles of spatial computing - Nature Communications It is unclear how cognitive computations are performed on sensory information. Here, neural evidence from working memory tasks suggests that the physical dimensions of cortical networks are used to up...

"These findings validate core predictions of Spatial Computing by showing that oscillatory dynamics not only gate information in time but also shape where in the cortex cognitive content is represented."
More on Spatial Computing:
doi.org/10.1038/s414...

25.06.2025 17:39 👍 10 🔁 2 💬 0 📌 0
Structure of activity in multiregion recurrent neural networks | PNAS Neural circuits comprise multiple interconnected regions, each with complex dynamics. The interplay between local and global activity is thought to...

(1/23) In addition to the new Lady Gaga album "Mayhem," my paper with Manuel Beiran, "Structure of activity in multiregion recurrent neural networks," has been published today.

PNAS link: www.pnas.org/doi/10.1073/...

(see dclark.io for PDF)

An explainer thread...

07.03.2025 19:39 👍 87 🔁 18 💬 2 📌 0
Universality and diversity in human song Songs exhibit universal patterns across cultures.

Music is universal. It varies more within than between societies and can be described by a few key dimensions. That’s because brains operate by using the raw materials of music: oscillations (brainwaves).
www.science.org/doi/10.1126/...
#neuroscience

23.06.2025 11:38 👍 39 🔁 20 💬 4 📌 1

1/N
How do neural dynamics in motor cortex interact with those in subcortical networks to flexibly control movement? I’m beyond thrilled to share our work on this problem, led by Eric Kirk @eric-kirk.bsky.social with help from Kangjia Cai!
www.biorxiv.org/content/10.1...

23.06.2025 12:28 👍 88 🔁 27 💬 3 📌 1

Thrilled to announce I'll be starting my own neuro-theory lab, as an Assistant Professor at @yaleneuro.bsky.social @wutsaiyale.bsky.social this Fall!

My group will study offline learning in the sleeping brain: how neural activity self-organizes during sleep and the computations it performs. 🧵

23.06.2025 15:55 👍 417 🔁 48 💬 61 📌 7
screenshot of biorxiv paper titled "Neuromorphic hierarchical modular reservoirs", authors Filip Milisav, Andrea I Luppi, Laura E Suárez, Guillaume Lajoie, Bratislav Misic

aside from this being a v cool paper I also want to congratulate the authors on the incredible SNR achieved in the title via a complete absence of filler words

Neuromorphic hierarchical modular reservoirs
www.biorxiv.org/content/10.1...

22.06.2025 17:13 👍 25 🔁 3 💬 3 📌 0

New preprint! 🧠🤖

How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?

We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!

🧵 1/7

06.06.2025 17:40 👍 54 🔁 24 💬 2 📌 8
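[Editor's note] As background for readers new to SSMs: the core of any discrete linear state-space layer is the recurrence x_t = A x_{t-1} + B u_t with readout y_t = C x_t. A generic sketch of that recurrence (this is background only, not POSSM's actual architecture; all shapes and names here are illustrative):

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run the linear state-space recurrence x_t = A x_{t-1} + B u_t,
    y_t = C x_t, over an input sequence u of shape (T, d_in)."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B @ u_t   # state update
        ys.append(C @ x)      # linear readout
    return np.stack(ys)       # outputs, shape (T, d_out)

rng = np.random.default_rng(0)
T, d_in, d_state, d_out = 10, 3, 8, 2
A = 0.9 * np.eye(d_state)               # stable state transition
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_out, d_state))
y = ssm_scan(A, B, C, rng.normal(size=(T, d_in)))
```

The per-step cost is constant in sequence length, which is what makes SSM-style decoders attractive for the real-time axis above.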

Curious about the history of the manifold/trajectory view of neural activity.

My own first exposure was Gilles Laurent's chapter in "21 Problems in Systems Neuroscience", where he cites odor trajectories in locust AL (2005). This was v inspiring as a biophysics student studying dynamical systems...

21.02.2025 19:11 👍 116 🔁 18 💬 10 📌 1

I think the biological evidence points to this not being the case. We can see instances where synapses literally undergo a form of reverse plasticity, e.g. see here: www.cell.com/trends/cogni...

I think it cannot be assumed that we never wipe memories from our brains completely!

24.01.2025 23:10 👍 10 🔁 1 💬 1 📌 0
How a neuroscientist solved the mystery of his own long COVID

How a neuroscientist solved the mystery of his own #LongCovid, leading to a new scientific discovery. Inspiring story.
Thank you for sharing your journey @jeffmyau.bsky.social
www.youcanknowthings.com/how-one-neur...

08.01.2025 16:53 👍 149 🔁 44 💬 6 📌 3
Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations Animals thrive in a constantly changing environment and leverage the temporal structure to learn well-factorized causal representations. In contrast, traditional neural networks suffer from forgetting...

I love how this paper uses cortico-thalamic interactions (context switching) for continual learning.
arxiv.org/abs/2205.11713

06.01.2025 13:00 👍 1 🔁 0 💬 0 📌 0