PNAS
Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...
Bots have made their way to Prolific experiments. Our lab has stopped online testing of adults entirely now for this reason - we want to know if what we study is real. Probably data collected 2-3 years ago are ok, but moving forward we just can't know. www.pnas.org/doi/10.1073/...
19.02.2026 15:14
Beautiful - recommended! Here, @sasolla.bsky.social recaps her decades-long journey from physics to neural networks (working with LeCun & Hopfield) to motor cortex, and from industry (including Bell Labs) to academia, all driven by curiosity and awe (which flows from her voice). Inspiring!
04.01.2026 11:03
Toy models, just in time for Christmas!
Excited to share my first article for @thetransmitter.bsky.social
#neuroskyence
22.12.2025 15:39
this honestly looks like AI slop...
09.12.2025 16:02
Discrete and systematic communication in a continuous signal-meaning space
Abstract. Human spoken language uses a continuous stream of acoustic signals to communicate about continuous features of the world, by using discrete forms
Human speech is continuous, and many meaning spaces (like color) are continuous too. Yet we use discrete words like "blue" and "green" that carve these spaces into categories.
In our new paper, we ask: How do people turn continuous spaces into structured, word-like systems for communication? (1/8)
26.11.2025 14:35
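A minimal toy sketch of the setup described in the thread (my own illustration, not the paper's model): a speaker carves a continuous 1-D meaning space into k equal-width "words," a listener decodes each word as its category center, and communication error shrinks as the lexicon grows.

```python
import numpy as np

# Hypothetical toy, not the paper's model: a speaker discretizes a
# continuous 1-D meaning space (say, hue in [0, 1)) into k equal-width
# "words"; the listener decodes each word as the center of its category.

def communicate(meanings, k):
    """Encode continuous meanings with k discrete words, then decode."""
    words = np.floor(meanings * k).astype(int)  # speaker: pick a word
    return (words + 0.5) / k                    # listener: category center

rng = np.random.default_rng(0)
meanings = rng.uniform(0.0, 1.0, 10_000)

# Reconstruction error falls as the lexicon grows (expected ~0.25 / k).
for k in (2, 4, 8):
    err = np.abs(communicate(meanings, k) - meanings).mean()
    print(f"k = {k}: mean error {err:.3f}")
```

Under uniform meanings and equal-width categories, the expected absolute error is 0.25 / k, so doubling the lexicon halves the error; the interesting scientific question is which carvings people actually converge on.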
What does it mean to understand language?
Language understanding entails not just extracting the surface-level meaning of the linguistic input, but constructing rich mental models of the situation it describes. Here we propose that because pr...
What does it mean to understand language? We argue that the brain's core language system is limited, and that *deeply* understanding language requires EXPORTING info to other brain regions.
w/ @neuranna.bsky.social @evfedorenko.bsky.social @nancykanwisher.bsky.social
arxiv.org/abs/2511.19757
1/n 🧵
26.11.2025 16:26
anything less than 420-D should not be called high D imo
26.11.2025 03:08
Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks
Plasticity at inhibitory synapses maintains balanced excitatory and inhibitory synaptic inputs at cortical neurons.
"Spiking Networks Hate It! Find Out the One Plasticity Trick They Donβt Want You to Know! Never stabilise models by hand again." - I woke up thinking we missed an opportunity with the title of this one. :/ www.science.org/doi/10.1126/... Also: It snowed in Vienna, 10cm white fluffies! Happy Sunday!
23.11.2025 07:09
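The "one plasticity trick" can be caricatured in a few lines (a rate-based toy of my own, not the paper's spiking model): update the inhibitory weight in proportion to the deviation of the output rate from a target, and excitation-inhibition balance emerges without hand-tuning.

```python
# Rate-based caricature of inhibitory plasticity (my toy, not the paper's
# spiking network): the inhibitory weight w grows when the output rate
# exceeds a target and shrinks when it falls below, so excitation and
# inhibition settle into balance on their own.

def simulate(excitation=10.0, target=2.0, eta=0.05, steps=500):
    w = 0.0            # inhibitory weight, learned
    pre_inh = 1.0      # fixed presynaptic inhibitory rate
    rate = excitation
    for _ in range(steps):
        rate = max(excitation - w * pre_inh, 0.0)  # rectified output rate
        w += eta * pre_inh * (rate - target)       # inhibitory plasticity rule
    return rate, w

rate, w = simulate()
print(f"final rate {rate:.2f} (target 2.0), inhibitory weight {w:.2f}")
```

At the fixed point the inhibitory weight exactly cancels the excess excitation (here w converges to 8.0, leaving the target rate of 2.0), which is the "never stabilise by hand" property the post celebrates.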
This raises what I like to call the "AI test for tasks".
If many people use AI to do task X, then that tells you that task X is actually just a brainless administrative exercise.
Any such task should probably be eliminated, and if that's not an option, modified to make automation even easier.
14.11.2025 19:14
What is the most profitable industry in the world, this side of the law? Not oil, not IT, not pharma.
It's *scientific publishing*.
We call this the Drain of Scientific Publishing.
Paper: arxiv.org/abs/2511.04820
Background: doi.org/10.1162/qss_...
Thread: @markhanson.fediscience.org.ap.brid.gy
12.11.2025 10:31
Richard Sutton - Father of RL thinks LLMs are a dead end
YouTube video by Dwarkesh Patel
The way Sutton himself interprets the "bitter lesson" in this interview definitely caught a lot of bitter lesson enthusiasts off guard.
LLMs not actually being an example of the bitter lesson was quite a nuance no one saw coming.
youtu.be/21EYKqUsPfg?...
04.10.2025 03:55
OSF
So far, learning traps seem robust to social learning in our cases. Surprisingly, despite many manipulations that have tried to reduce this learning trap, the most effective has been simply being a child (see @emilyliquin.bsky.social's work on traps in children) osf.io/preprints/ps...
26.09.2025 03:30
The New York Times piece today about US science is terrible and wrong, in many ways.
I could write a whole article about this, but as one example:
"To close observers, the original crisis began well before any of this..."
No. I'm a close observer of science, and this is incorrect.
22.09.2025 12:20
Is this in principle justified by the Rao-Blackwell theorem? One abstracts the problem enough that the data we do have form a sufficient statistic for the inference problem.
16.09.2025 16:06
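The Rao-Blackwell intuition in the comment above is easy to check numerically (a toy of my own, not from the thread): conditioning an unbiased estimator on a sufficient statistic never increases its variance.

```python
import numpy as np

# Toy Rao-Blackwell check (my example): for X_i ~ N(theta, 1), the single
# draw X_1 is an unbiased estimator of theta; conditioning on the sufficient
# statistic mean(X) gives E[X_1 | mean(X)] = mean(X), which is still
# unbiased but has variance 1/n instead of 1.

rng = np.random.default_rng(1)
theta, n, trials = 3.0, 10, 20_000

samples = rng.normal(theta, 1.0, size=(trials, n))
naive = samples[:, 0]        # unbiased, variance ~ 1
rb = samples.mean(axis=1)    # Rao-Blackwellized, variance ~ 1/n

print(f"var(naive) = {naive.var():.3f}")
print(f"var(RB)    = {rb.var():.3f}")
```

The same logic underlies the comment's point: abstracting the problem so the available data are sufficient means no information is lost relative to the full data.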
Can a single cell learn? Even without a brain, some microbes show simple forms of cognition. Can this basal cognition be engineered? Check our new paper with @jordiplam.bsky.social on the minimal synthetic circuits & their cognitive limits. @drmichaellevin.bsky.social www.biorxiv.org/content/10.1...
10.09.2025 11:48
I'm not sure how useful this formulation is for characterizing a part of the brain that does a specific computation, though. The heart is an important part of keeping me alive to do face processing, but it doesn't seem useful to say face processing -> the heart is active, even though it's logically correct.
06.09.2025 15:56
LLRX republished the blogpost www.llrx.com/2025/08/ai-s...
22.08.2025 20:20
Theoretical neuroscience has room to grow
Nature Reviews Neuroscience - The goal of theoretical neuroscience is to uncover principles of neural computation through careful design and interpretation of mathematical models. Here, I examine...
I wrote a Comment on neurotheory, and now you can read it!
Some thoughts on where neurotheory has and has not taken root within the neuroscience community, how it has shaped those subfields, and where we theorists might look next for fresh adventures.
www.nature.com/articles/s41...
20.08.2025 16:09
MIT report: 95% of generative AI pilots at companies are failing
There's a stark difference in success rates between companies that purchase AI tools from vendors and those that build them internally.
After interviewing 150 executives, surveying 350 workers, and analyzing 300 projects, MIT's NANDA initiative found that 95% of generative AI deployments fail. The real "productivity gains" seem to come from layoffs and squeezing more work from fewer people, not AI.
20.08.2025 04:51
So what drives drift? We looked closely at the neurons and found that a small group of them were stable. These stable neurons were more excitable than neighboring cells, making the fate of the cells predictable.
23.07.2025 16:15
The Lemkin Institute for Genocide Prevention is calling on every single leader in the world: DO EVERYTHING YOU CAN TO GET FOOD & WATER INTO GAZA RIGHT AWAY. Even if it takes bypassing the reports, meetings, endless conferences, parliamentary sessions, UN sessions, and all the other regular diplomatic channels that have led nowhere. Just do it. Genocide must not be allowed to continue while we all watch. We must not allow mass starvation in Gaza. We cannot wait any longer. IF YOU HAVE POWER, USE IT. HISTORY WILL DEMONSTRATE THE RECTITUDE OF YOUR ACTIONS. DO EVERYTHING YOU CAN TO GET FOOD AND WATER INTO GAZA.
This is from the Lemkin Institute begging..... we are all begging.
21.07.2025 21:57
"this is an unfair comparison because the model has not been trained on all data that has ever existed and on all future data that will be digitalized! Our foundation model is omniscient, which renders the concept of generalization null!!!!"
16.07.2025 19:16
Model mimicry limits conclusions about neural tuning and can mistakenly imply unlikely priors
Nature Communications - Model mimicry limits conclusions about neural tuning and can mistakenly imply unlikely priors
Who doesn't like a good model of the brain? Yet, from simple regression to neural nets, some limitations keep popping up (e.g., overfitting). @mjwolff.bsky.social & I saw some cool but puzzling data, ran a quick analysis & found one such limitation: model mimicry. Now in #naturecommunications & 🧵 below
02.07.2025 08:50
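Model mimicry is easy to demonstrate in a toy setting (my own sketch, not the paper's analysis): two structurally different tuning models can make nearly indistinguishable predictions over the sampled stimulus range, so noisy data cannot tell them apart.

```python
import numpy as np

# Toy illustration of model mimicry (my sketch, not the paper's analysis):
# a Gaussian bump and a cosine-power curve are different model families,
# yet over this stimulus range their predictions are almost identical, so
# a good fit of one says little about the truth of either.

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)  # stimulus values (radians)

gaussian = np.exp(-theta**2 / (2 * 0.8**2))  # Gaussian tuning model
cosine = np.cos(theta) ** 1.5                # cosine-power tuning model

r = np.corrcoef(gaussian, cosine)[0, 1]
print(f"correlation between model predictions: {r:.3f}")
```

With predictions this correlated, any noisy dataset generated by one model will be fit almost equally well by the other, which is exactly the inferential trap the paper warns about.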
My latest Aronov lab paper is now published @Nature!
When a chickadee looks at a distant location, the same place cells activate as if it were actually there
The hippocampus encodes where the bird is looking, AND what it expects to see next -- enabling spatial reasoning from afar
bit.ly/3HvWSum
11.06.2025 22:24