another great paper from @mh-christiansen.bsky.social, showing that non-constituents* can be primed
It's more evidence that traditional linguists were mistaken to believe memory was in short supply:
Human memory is compressed, clustered, implicit and vast
09.02.2026 13:12
Careers | Human Resources
We are hiring a research specialist to start this summer! This position would be a great fit for individuals looking to get more experience in computational and cognitive neuroscience research before applying to graduate school. #neurojobs Apply here: research-princeton.icims.com/jobs/21503/r...
04.02.2026 13:12
Does memory fade slowly, or in drops and bursts? We analyzed 728k tests from 210k people. Key finding: "stability" isn't a trait you either have or don't have - it's often a time-limited state at different points in aging. Preprint "Punctuated Memory Change": www.biorxiv.org/content/10.6...
23.01.2026 07:17
Congrats Jonathan! Excited to see these amazing results get published officially!
23.01.2026 13:34
Episodic memory facilitates flexible decision-making via access to detailed events - Nature Human Behaviour
Nicholas and Mattar found that people use episodic memory to make decisions when it is unclear what will be needed in the future. These findings reveal how the rich representational capacity of episod...
Our experiences have countless details, and it can be hard to know which matter.
How can we behave effectively in the future when, right now, we don't know what we'll need?
Out today in @nathumbehav.nature.com, @marcelomattar.bsky.social and I find that people solve this by using episodic memory.
23.01.2026 13:18
Fantastic thread and a must-read for anyone working on spatial cognition.
10.01.2026 23:37
Excited to announce a new book telling the story of mathematical approaches to studying the mind, from the origins of cognitive science to modern AI! The Laws of Thought will be published in February and is available for pre-order now.
18.12.2025 15:59
What a privilege and a delight to work with @coltoncasto.bsky.social, @ev_fedorenko and @neuranna on this new speculative piece on what it means to understand language, nicely summarized in this tweeprint from @coltoncasto.bsky.social: arxiv.org/abs/2511.19757
26.11.2025 16:34
I am really proud that eLife has published this paper. It is a very nice paper, but you also need to read the reviews to understand why! 1/n
25.11.2025 20:34
I'm going to present our latest memory model that learns causal inference during narrative comprehension! Stop by the poster on Monday to chat about causality, memory, brain 🧠, and AI 🤖!
#sfn2025 #sfn25
15.11.2025 02:41
An RNN with episodic memory, trained on free recall, learned the memory palace strategy -- the network developed an abstract item-index code so that it can "walk along" the same trajectory in the hidden state space to encode/retrieve item sequences!
Feedback appreciated!
22.10.2025 13:29
I'm super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!
www.biorxiv.org/content/10.1...
24.09.2025 09:52
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses.
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking -- incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
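The core failure mode described here can be sketched with a toy simulation (my own illustrative setup, not the paper's actual tasks, models, or configurations): when ground truth shows no group difference, a group-correlated annotation bias alone is enough to manufacture a "significant" effect, while symmetric annotation noise is not.

```python
import math
import random

def two_prop_pvalue(p1, n1, p2, n2):
    # Two-sided two-proportion z-test (normal approximation).
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
n = 1000
# Ground truth: both groups have the same 50% positive rate (no true effect).
truth_a = [random.random() < 0.5 for _ in range(n)]
truth_b = [random.random() < 0.5 for _ in range(n)]

def annotate(labels, flip_pos, flip_neg):
    # Simulated annotator: flips each true label with a
    # class-dependent error probability.
    return [(not y) if random.random() < (flip_pos if y else flip_neg) else y
            for y in labels]

# "Config 1" (hypothetical): symmetric 5% annotation noise in both groups.
a1, b1 = annotate(truth_a, 0.05, 0.05), annotate(truth_b, 0.05, 0.05)
# "Config 2" (hypothetical): a variant that over-calls the positive class,
# but only on group A's texts -- an assumed, group-correlated bias.
a2, b2 = annotate(truth_a, 0.0, 0.30), annotate(truth_b, 0.05, 0.05)

def pval(a, b):
    return two_prop_pvalue(sum(a) / len(a), len(a), sum(b) / len(b), len(b))

p1, p2 = pval(a1, b1), pval(a2, b2)
print(f"symmetric-noise config: p = {p1:.3f}")   # noise, no group bias
print(f"biased config:          p = {p2:.3g}")   # spurious "effect"
```

With group A's positive rate inflated by roughly 15 points, the biased configuration produces a far smaller p-value than the symmetric-noise one, despite identical ground truth -- the annotator choice, not the data, drives the conclusion.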
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.
Paper: arxiv.org/pdf/2509.08825
12.09.2025 10:33
Our new lab for Human & Machine Intelligence is officially open at Princeton University!
Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)
08.09.2025 13:59
A key-value memory network can learn to represent event memories by their causal relations to support event cognition!
Congrats to @hayoungsong.bsky.social on this exciting paper! So fun to be involved!
05.09.2025 13:07
Our new study (titled "Memory Loves Company") asks whether working memory holds more when objects belong together.
And yes, when everyday objects are paired meaningfully (Bow-Arrow), people remember them better than when they're unrelated (Glass-Arrow). (mini thread)
28.08.2025 12:07
Now out in print at @jephpp.bsky.social! doi.org/10.1037/xhp0...
Yu, X., Thakurdesai, S. P., & Xie, W. (2025). Associating everything with everything else, all at once: Semantic associations facilitate visual working memory formation for real-world objects. JEP:HPP.
27.06.2025 01:24
Cortico-hippocampal interactions underlie schema-supported memory encoding in older adults
New paper led by @shenyanghuang.bsky.social!
academic.oup.com/cercor/artic...
Older adults' memory benefits from richer semantic contexts. We found connectivity patterns supporting this semantic scaffolding.
19.08.2025 18:26
Successful prediction of the future enhances encoding of the present.
I am so delighted that this work found a wonderful home at Open Mind. The peer review journey was a rollercoaster but it *greatly* improved the paper.
direct.mit.edu/opmi/article...
09.08.2025 16:27