Very happy to share our review on Reinforcement Learning vs Statistical Learning, with @ambrafer.bsky.social and @predictivebrain.bsky.social:
www.sciencedirect.com/science/arti...
A nice summary:
www.sainsburywellcome.org/blog/two-eng...
To elaborate: multiple random networks under optimal feedback control with a fixed cost (task) produce similar neural dynamics, while changes in the cost produce differences. We suggest that what may be conserved is the feedback control and task structure rather than the specific circuit connectivity. Eager to read the paper in detail!
Nice work, congratulations! In a recent model we found that similar neural dynamics may be due to similar task structure independent of network connectivity (supp. Figure). Could a simpler explanation be that all animals experienced essentially the same task structure?
www.cell.com/cell-reports...
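The claim above can be sketched in code. This is a minimal illustration, not the paper's actual model: it assumes linear network dynamics, an infinite-horizon discrete LQR controller (computed by plain Riccati iteration), and a shared quadratic task cost across several random "circuits". All names and parameters here are hypothetical.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Infinite-horizon discrete LQR gain via Riccati value iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def simulate(A, B, K, x0, T=200):
    """Closed-loop rollout x_{t+1} = (A - B K) x_t."""
    xs = [x0]
    for _ in range(T):
        xs.append((A - B @ K) @ xs[-1])
    return np.array(xs)

n, m = 20, 3                       # neurons, control inputs
Q = np.eye(n)                      # fixed task cost, shared across networks
R = 0.1 * np.eye(m)
x0 = np.random.default_rng(0).standard_normal(n)

trajs = []
for seed in range(3):              # three random "circuits", same task
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n)) / np.sqrt(n)  # random connectivity
    B = rng.standard_normal((n, m)) / np.sqrt(n)
    K = dlqr(A, B, Q, R)
    trajs.append(simulate(A, B, K, x0))

# Task-level dynamics: state energy decays under feedback control
# regardless of the particular random connectivity.
energies = [np.sum(t**2, axis=1) for t in trajs]
```

The point of the sketch: each network has different connectivity `A`, but all are wrapped in optimal feedback control for the same cost `(Q, R)`, so the closed-loop behavior is shaped by the task rather than the circuit.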
Thanks Kevin!
Favourite paper I have read this year. Check it out! Great work @harikalidindi.bsky.social and @fredcrevecoeur.bsky.social!
awesome paper bridging the gap between RNN and optimal control models of motor control
Enjoyed doing this work with @fredcrevecoeur.bsky.social throughout! Glad it's finally out!
Here is the accompanying code:
github.com/neurohari/si...
Amid the rise of billion-parameter models, I argue that toy models, with just a few neurons, remain essential, and may be all neuroscience needs, writes @marcusghosh.bsky.social.
#neuroskyence
www.thetransmitter.org/theoretical-...
Now published in the Journal of Neurophysiology:
journals.physiology.org/doi/full/10....
Get in touch if you think this tool could help in your science! We will be developing improvements and extensions over the next year.
Image of robots struggling with a social dilemma.
1/ Why does RL struggle with social dilemmas? How can we ensure that AI learns to cooperate rather than compete?
Introducing our new framework: MUPI (Embedded Universal Predictive Intelligence) which provides a theoretical basis for new cooperative solutions in RL.
Preprint 🧵👇
(Paper link below.)
Good piece by @kohitij.bsky.social on why neuroscientists use an "outdated" vision model. Neuroscience is different than AI and that's ok! medium.com/@kohitij_716...
Very interesting work!
The brain computes by processing information over time through interactions between connectivity and dynamics that are hard to model. Here we infer these interactions from data and find they better predict cognitive performance! www.nature.com/articles/s41... w/ @lindenmp.bsky.social
Join us for Fall 2026. In our group, you can run studies from human behavior and neuroimaging, to large-scale NHP ephys, and join them up with a robust computational foundation. Bonus: you can help build the reading list.
European universities leading the way
Thread of French and Dutch research institutes slowly unsubscribing from web of science (and thence impact factors).
0/10 Thanks for the interest in our preprint. Some takes say it negates or fully supports the "manifold hypothesis"; neither is quite right. Our results show that if you focus only on the manifold capturing most of the task-related variance, you can miss important dynamics that actually drive behavior.
How I contributed to rejecting one of my favorite papers of all time. Yes, I teach it to students daily and refer to it in lots of papers. Sorry. open.substack.com/pub/kording/...
Unlike current AI systems, animals can quickly and flexibly adapt to changing environments.
This is the topic of our new perspective in Nature MI (rdcu.be/eSeif), where we relate dynamical and plasticity mechanisms in the brain to in-context and continual learning in AI. #NeuroAI
0/7 Excited to announce that our (@mkashefi.bsky.social @diedrichsenjorn.bsky.social @andpru.bsky.social) new preprint on sequence preparation and its effect on reaction time is now up: www.biorxiv.org/content/10.1...
Please get in touch if there is anything you'd like to discuss! Brief summary 🧵👇
What's the main factor that prevents us from getting universities and grant committees to pull the plug on these massive profiteering journals and put control back in the hands of researchers?
Two posts from Bluesky. The first shows a figure from a paper published in Nature Scientific Reports, full of totally incoherent AI-fabricated gibberish words. The second shows a comment on a paper recently published by eLife, discussing the paper and the peer reviews that were published alongside it.
Nature Sci Rep publishes incoherent AI slop. eLife publishes a paper which the reviewers didn't agree with, making all the comments and responses public with thoughtful commentary. One of these journals got delisted by Web of Science for quality concerns from not doing peer review. Guess which one?
13/ Feel free to reach out to discuss this work, or its application to your field of study. Or come swing by our poster at #NeurIPS2025. We'd love to chat!
Paper: openreview.net/forum?id=I82...
Code: github.com/adamjeisen/J...
Poster: Thu 4 Dec 11am - 2pm PST (#2111)
Very interesting work!
How do brain areas control each other? 🧠
In our NeurIPS 2025 Spotlight paper, we introduce a data-driven framework to answer this question using deep learning, nonlinear control, and differential geometry. 🧵👇
looks very cool!
I am not even sure it is a hypothesis.
I mean, it is a certainty that neural activity coding for a behavior is not using the full subspace of coding available.
It is interesting how many dimensions are in use.
But these are almost mathematically guaranteed.
Can you state the hypothesis?
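The "how many dimensions are in use" question above has a standard operationalization: the participation ratio of the activity covariance spectrum. A minimal sketch, with all data and parameters hypothetical (a low-rank latent process randomly embedded into many neurons):

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of activity X (time x neurons):
    PR = (sum lambda_i)^2 / sum(lambda_i^2) over covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)       # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
T, n, k = 1000, 100, 5
latent = rng.standard_normal((T, k))    # k-dimensional latent dynamics
W = rng.standard_normal((k, n))         # random embedding into n neurons
X = latent @ W + 0.01 * rng.standard_normal((T, n))  # low-rank + small noise

pr = participation_ratio(X)             # near k, far below n
```

This makes the point in the thread concrete: activity confined to a low-dimensional latent process yields a participation ratio near the latent dimension, far below the number of recorded neurons, so "less than the full subspace" is the expected outcome, and the interesting quantity is the measured dimension itself.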
"The inevitability and superfluousness of cell types in spatial cognition". Intuitive cell types are found in random artificial networks using the same selection criteria neuroscientists use with actual data. elifesciences.org/reviewed-pre... 1/2
It looks like networks stripped to their bare minimum end up emulating multiple observations while being analytically tractable. I'm curious to see how learning in networks interacts with this control process
This has made me think that it's just as important to ask what an ideal actor would do without imposing structure, nonlinearities, or data training on the networks in the first place... this can allow us to interpret empirical observations in a rule-based framework (a useful complementary approach!!)