
Greta Tuckute

@gretatuckute

Studying language in biological brains and artificial ones at the Kempner Institute at Harvard University. www.tuckute.com

924
Followers
286
Following
88
Posts
13.12.2023
Joined

Latest posts by Greta Tuckute @gretatuckute


I am totally pumped about this new work. "Task-trained RNNs" are a powerful and influential framework in neuroscience, but have lacked a firm theoretical footing. This work provides one, and makes direct contact with the classical theory of random RNNs:
www.biorxiv.org/content/10.6...

04.03.2026 17:12 👍 85 🔁 32 💬 2 📌 3

I'm excited to share that this paper was accepted at ICLR 2026! We show that language models encode one of the most basic ingredients of a world model: the ability to distinguish plausible from implausible states. Check out the paper for more details!

See you in Rio!
Paper: arxiv.org/abs/2507.12553

26.02.2026 00:22 👍 30 🔁 6 💬 3 📌 0

After several years of work, my lab is starting to put out our first papers on learning in a unicellular organism (Stentor coeruleus).

Here we show evidence for a form of associative learning in Stentor:
www.biorxiv.org/content/10.6...

26.02.2026 11:39 👍 176 🔁 57 💬 5 📌 7

Research plug: we're currently seeking (bilaterally, congenitally) blind adults & Deaf adults for a *paid* online research study (screen reader compatible) on how individuals experience words across perceptual modalities. Ping bergelsonlab@fas.harvard.edu if interested! Reposts welcome! #Blind #Deaf

20.02.2026 16:38 👍 7 🔁 16 💬 0 📌 0

I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 📕👇

Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!

Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!

12.02.2026 22:22 👍 52 🔁 22 💬 0 📌 1

Same task, different strategy ↔️

Why do identical neural network models develop separate internal approaches to solve the same problem?

@annhuang42.bsky.social explores the factors driving variability in task-trained networks in our latest @kempnerinstitute.bsky.social Deeper Learning blog.

09.02.2026 19:07 👍 46 🔁 10 💬 1 📌 0
New Study Sheds Light on the Brain’s “Extended Language Network” - Kempner Institute For more than a century, scientists studying how the brain processes language have focused their attention on the cerebral cortex, specifically the left frontal and temporal lobes. But a new […]

New in Neuron! A team including #KempnerInstitute’s
@coltoncasto.bsky.social & @gretatuckute.bsky.social maps the cerebellum's role beyond motor control as part of an extended language network. 🧠🗣️

More here: bit.ly/4rptQ13 #neuroscience #fMRI
@gsas.harvard.edu @evfedorenko.bsky.social

30.01.2026 20:05 👍 13 🔁 3 💬 0 📌 1

The Visual Learning Lab is hiring TWO lab coordinators!

Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!), with flexible summer start dates.

30.01.2026 23:21 👍 48 🔁 41 💬 1 📌 0
High-dimensional structure underlying individual differences in naturalistic visual experience Han and Bonner reveal that individual visual experience arises from high-dimensional neural geometry distributed across multiple representational scales. By characterizing the full dimensional spectru...

Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧵 www.cell.com/current-biol...

30.01.2026 18:52 👍 60 🔁 16 💬 2 📌 2

Excited to share our new publication “The Spatio-Temporal Dynamics of Phoneme Encoding in Aging and Aphasia”, published in JNeurosci 🧠
➡️ www.jneurosci.org/content/46/4...
with @lauragwilliams.bsky.social & @mvandermosten.bsky.social 🤝

Check out @stanfordbrain.bsky.social ’s summary of it ⬇️

29.01.2026 21:49 👍 24 🔁 8 💬 0 📌 2
Representations in language models can change dramatically over a conversation. Conceptual overview: left is a simulated conversation between a user and a model; right is a plot of the model's linear representations of the factuality of answers to questions like "do you have qualia" over the conversation: answers that start factual flip to non-factual over the conversation, and vice versa.

New paper! In arxiv.org/abs/2601.20834 we study how language models' representations of properties like factuality evolve over a conversation. We find that in edge-case conversations, e.g. about model consciousness or delusional content, model representations can change dramatically! 1/

29.01.2026 13:54 👍 71 🔁 8 💬 1 📌 1

now accepted at ICLR! 🐺🥳🐺

arxiv.org/abs/2506.20666

27.01.2026 14:55 👍 40 🔁 9 💬 0 📌 0

Happy to share that our paper “Mixture of Cognitive Reasoners: Modular Reasoning with Brain-Like Specialization” (aka MiCRo) has been accepted to #ICLR2026!! 🎉

See you in Rio 🇧🇷 🏝️

27.01.2026 15:25 👍 6 🔁 2 💬 0 📌 0

Language + cerebellum tour de force led by @coltoncasto.bsky.social !!

23.01.2026 07:03 👍 5 🔁 0 💬 0 📌 0

🎉 Re-Align is back for its 4th edition at ICLR 2026!

📣 We invite submissions on representational alignment, spanning ML, Neuroscience, CogSci, and related fields.

πŸ“ Tracks: Short (≀5p), Long (≀10p), Challenge (blog)

⏰ Deadline: Feb 5, 2026 for papers

🔗 representational-alignment.github.io/2026/

07.01.2026 16:27 👍 14 🔁 9 💬 1 📌 4

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

09.01.2026 01:27 👍 585 🔁 237 💬 16 📌 10

New book! I have written Syntax: A cognitive approach, published by MIT Press.

This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...

24.12.2025 19:55 👍 122 🔁 41 💬 2 📌 3

Great work led by Daria & Greta showing that diverse agreement types draw on shared units (even across languages)!

10.12.2025 14:43 👍 9 🔁 3 💬 0 📌 0

new: Eric Bigelow @ericbigelow.bsky.social suggests the 2 main ways of controlling LLMs (prompting & steering) can be understood as changing model beliefs (as in Bayesian belief updating)

"Belief Dynamics Reveal the Dual Nature of In-Context Learning & Activation Steering"

arxiv.org/pdf/2511.00617

10.12.2025 14:07 👍 27 🔁 10 💬 1 📌 0

And many thanks for support from @kempnerinstitute.bsky.social @mitbcs.bsky.social McGovern Institute @mit-sqi.bsky.social

09.12.2025 18:54 👍 2 🔁 0 💬 1 📌 0

We are very grateful to the curators of BLiMP, MultiBLiMP, and other syntactic materials (e.g., @alexwarstadt.bsky.social @jumelet.bsky.social @tallinzen.bsky.social @jennhu.bsky.social Jon Gauthier Kristina Gulordava), as well as teams who have released open-weight LLMs!

09.12.2025 18:54 👍 1 🔁 0 💬 1 📌 0
Different types of syntactic agreement recruit the same units within large language models Large language models (LLMs) can reliably distinguish grammatical from ungrammatical sentences, but how grammatical knowledge is represented within the models remains an open question. We investigate ...

Led by amazing MIT undergraduate Daria Kryvosheieva, in collaboration with @andreadevarda.bsky.social and @evfedorenko.bsky.social! 💫

Paper: arxiv.org/abs/2512.03676
Code: github.com/dariakryvosh...

09.12.2025 18:54 👍 5 🔁 1 💬 1 📌 0

Taken together, these findings indicate that syntactic agreement, a critical marker of syntactic dependencies, constitutes a meaningful category within LLMs’ representational spaces (within and across languages!).

09.12.2025 18:54 👍 1 🔁 0 💬 1 📌 0

For instance, Polish and Czech share 69% of their agreement units, but Irish and Russian share none.

Greater overlap among more syntactically similar languages suggests that multilingual models organize syntactic representations in ways that reflect cross-linguistic similarity.

09.12.2025 18:54 👍 1 🔁 0 💬 1 📌 0

3. Do structurally more similar languages share more units for agreement?

We extend our analyses *across* languages (57 languages), focusing on subject-verb agreement. Cross-lingual overlap in agreement units increases with syntactic similarity: more similar languages share more units for agreement!

09.12.2025 18:54 👍 1 🔁 0 💬 1 📌 0
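A minimal sketch of this kind of cross-lingual unit-overlap measure. The unit indices below are made up for illustration, and the use of Jaccard overlap is an assumption; the paper's exact overlap metric may differ.

```python
def jaccard(units_a, units_b):
    """Fraction of shared units relative to the union
    (0.0 = disjoint unit sets, 1.0 = identical unit sets)."""
    a, b = set(units_a), set(units_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical per-language sets of "agreement units" (unit indices).
polish  = {3, 17, 42, 99, 128, 256}
czech   = {3, 17, 42, 99, 300}   # high overlap with Polish (similar syntax)
irish   = {7, 88, 512}           # no overlap with Russian
russian = {1, 64, 200}

print(jaccard(polish, czech))    # 4 shared / 7 in union ≈ 0.571
print(jaccard(irish, russian))   # 0.0
```

Comparing these overlap scores against a syntactic-similarity measure across language pairs is what lets one test whether more similar languages share more agreement units.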

For example, the agreement units would be recruited for phenomena such as:

Subject-verb
✅ The keys to the cabinet are missing
❌ The keys to the cabinet is missing

Anaphor
✅ The girls admired themselves
❌ The girls admired himself

Determiner-noun
✅ These apples are fresh
❌ This apples are fresh

09.12.2025 18:54 👍 1 🔁 0 💬 1 📌 0

2. Do different phenomena recruit the same units?

Largely no: distinct phenomena use distinct unit sets, not one shared “syntax network”.

Exception: agreement phenomena (subject-verb, anaphor, determiner-noun) use overlapping units, suggesting agreement-general LLM resources.

09.12.2025 18:54 👍 1 🔁 0 💬 1 📌 0
Post image

1. Do LLMs contain units that are consistently recruited, and causally important, for specific syntactic phenomena in English?

Yes: in 7 open-weight LLMs, we identify units that are recruited for each phenomenon across sentences, and are causally implicated in model behavior.

09.12.2025 18:54 👍 1 🔁 0 💬 1 📌 0

🤖 To answer these questions, we rely on a classic paradigm from neuroscience: functional localization.
We identify LLM units that best distinguish between grammatical and ungrammatical sentences for 67 syntactic phenomena (from the BLiMP materials).

Below, 3 key findings:

09.12.2025 18:54 👍 2 🔁 0 💬 1 📌 0
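A minimal sketch of functional localization on simulated data. Everything here is an illustrative assumption, not the paper's actual pipeline: the activations are random rather than real LLM hidden states, and a simple two-sample t-like statistic stands in for whatever selection criterion the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated unit activations for paired grammatical / ungrammatical
# sentences (in the paper these would come from LLM hidden layers).
n_pairs, n_units = 200, 512
gram = rng.normal(0.0, 1.0, (n_pairs, n_units))
ungram = rng.normal(0.0, 1.0, (n_pairs, n_units))
gram[:, :10] += 1.0  # plant a grammaticality effect in the first 10 units

def localize_units(gram_acts, ungram_acts, top_k=10):
    """Rank units by how strongly they separate grammatical from
    ungrammatical sentences (t-like statistic on the mean difference),
    and return the indices of the top_k most selective units."""
    diff = gram_acts.mean(0) - ungram_acts.mean(0)
    se = np.sqrt(gram_acts.var(0, ddof=1) / len(gram_acts)
                 + ungram_acts.var(0, ddof=1) / len(ungram_acts))
    t_vals = diff / (se + 1e-12)
    return np.argsort(-np.abs(t_vals))[:top_k]

units = localize_units(gram, ungram, top_k=10)
print(sorted(units.tolist()))  # recovers the planted units 0..9
```

Running one such localizer per syntactic phenomenon, then comparing the resulting unit sets, is what enables the within- and cross-phenomenon overlap analyses in the thread.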

How do LLMs process syntax? Do different syntactic phenomena recruit the same model units, or do they recruit distinct model components? And do different languages rely on similar units to process the same syntactic phenomenon?

Check out our new preprint (to appear at ACL 2026)!
shorturl.at/QWU81

09.12.2025 18:54 👍 24 🔁 3 💬 1 📌 1