8 days left to submit to the Human x AI Finance conference!
- 🏆 $1,000 prize for the best paper.
- ✈️ The top 4 papers will be invited for presentation at the Fink Center Conference on Financial Markets at UCLA Anderson School of Management (travel and lodging included).
🔗 humanxaifinance.org
10.03.2026 18:31
👍 3
🔁 2
💬 0
📌 0
Sycophantic AI distorts reality by returning responses that are biased to reinforce existing beliefs.
"sycophantic AI distorts belief, manufacturing certainty where there should be doubt."
Unbiased sampling produces discovery rates 5X higher! arxiv.org/pdf/2602.14270
09.03.2026 14:50
👍 22
🔁 11
💬 1
📌 1
📣 New publication 📣
Very excited to share our new paper "A neural signature of adaptive mentalization" out now in Nature Neuroscience (the project started all the way back in 2018!); with
@niklasbuergi.bsky.social
@drgokhanaydogan.bsky.social
@christianruff.bsky.social
(1/4)
09.03.2026 10:32
👍 24
🔁 7
💬 1
📌 0
OSF
When you collect data online, are the results from humans or AI? In a project led by Booth PhD student Grace Zhang, we estimate the prevalence of AI agents on commonly used survey platforms:
osf.io/preprints/ps...
🧵
07.03.2026 20:22
👍 108
🔁 50
💬 4
📌 3
Graph of award probability for R35 and R01 grants from the NIH Data Book as a function of review rank percentile. As is apparent, 2025 is a significant departure from the norm, with lower award probabilities at all scores <40; even being in the top 10% is no longer a near-certain indicator of success.
Data source: https://report.nih.gov/nihdatabook/report/302
The data is in: the NIH goalposts have shifted.
What were once almost certainly fundable scores have become coin flips, and what used to be likely grants have become aspirational, leading to fewer awards.
Another manifestation of how HHS policies have led to fewer awards and less science.
07.03.2026 01:59
👍 680
🔁 416
💬 19
📌 60
Why we value things more as we are about to lose them: A reference-based theory
A common belief is that we only truly appreciate things or people when we are about to lose them. This phenomenon is often observed in real-world scen…
Why we value things more as we are about to lose them
A novel theory that explains why subjective value and effort-based decisions change over time, shedding light on reference-dependent behavior in both everyday and high-stakes decision-making contexts
www.sciencedirect.com/science/arti...
04.03.2026 15:48
👍 15
🔁 5
💬 1
📌 1
Chapter 5: Assessing Model Quality - Does Our Model Make Sense?
My notes for the advanced cognitive modeling course - 2026
my course notes on a Bayesian workflow for (single agent) cognitive modeling are now fully revised and online: fusaroli.github.io/AdvancedCogn...
Predictive checks, updating checks, sensitivity analyses and simulation-based calibration in @mc-stan.org
Feedback is very welcome!
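(Not from the notes themselves, but as a pointer to what simulation-based calibration checks: a minimal numpy sketch using a conjugate Beta-Binomial model so the exact posterior is available in closed form. In the course the draws would come from a Stan fit; every name below is illustrative.)

```python
import numpy as np

# Simulation-based calibration (SBC), sketched with a conjugate
# Beta-Binomial model so the "posterior sampler" is exact.
rng = np.random.default_rng(0)
n_sims, n_draws, n_trials = 1000, 99, 50
ranks = np.empty(n_sims, dtype=int)

for s in range(n_sims):
    theta_true = rng.beta(2, 2)                        # draw from the prior
    y = rng.binomial(n_trials, theta_true)             # simulate a dataset
    post = rng.beta(2 + y, 2 + n_trials - y, n_draws)  # exact posterior draws
    ranks[s] = int(np.sum(post < theta_true))          # rank of truth in draws

# If inference is well calibrated, ranks are uniform on 0..n_draws.
hist, _ = np.histogram(ranks, bins=10, range=(0, n_draws + 1))
print(hist)  # roughly equal counts per bin indicates calibration
```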
04.03.2026 16:39
👍 52
🔁 12
💬 2
📌 2
The inferred value of unchosen options spreads to related items in memory
Counterfactual thinking — considering what could have come of choosing the other path — can facilitate inference. Previous studies have demonstrated t…
📢New paper out today in @cognitionjournal.bsky.social!
Does the value of an unchosen option — inferred through counterfactual reasoning — spread to related items in memory, similar to how the value of a chosen option — acquired through direct experience — does?
In short, yes!
28.02.2026 19:11
👍 59
🔁 25
💬 1
📌 0
Amplifiers of Epistemic Posture
Essays and writing on AI
I'm a cognitive scientist with an interest in epistemic vigilance, and this essay that's been going around gave me pause.
I don't think it's straightforward to apply the concept of epistemic vigilance to interactions with LLMs, as this essay does.
🧵/
sbgeoaiphd.github.io/rotating_the...
26.02.2026 13:18
👍 290
🔁 121
💬 8
📌 33
Deciding for others alters metacognition leading to responsibility aversion
Making decisions on behalf of other people reduces decision confidence, which leads to responsibility aversion.
Happy to share my first first-author paper, new in Science Advances: Deciding for others alters metacognition leading to responsibility aversion www.science.org/doi/10.1126/... #ScienceAdvancesResearch @zne-uzh.bsky.social @econ.uzh.ch
25.02.2026 19:50
👍 21
🔁 11
💬 2
📌 1
Come join us at BAMB! to learn all about modelling behavior and what Barcelona’s beaches have to offer 🏐🏄♂️🏊♂️
Applications for 2026 are open here: www.bambschool.org
23.02.2026 16:07
👍 6
🔁 8
💬 0
📌 0
ALT: a penguin is sticking his head out of a hole next to a job application
🚨 JOB alert: 📢
We are looking for a PhD student to work on our international @wellcometrust.bsky.social project on information gathering in OCD and Schizophrenia!
If you have a background in computational psychiatry / neuroimaging and speak German, apply here: devcompsy.org/wp-content/u...
23.02.2026 07:37
👍 20
🔁 24
💬 1
📌 0
This Wednesday, February 25! Dr. Michael Treadway (Emory University) is presenting in the
@motcogmeet.bsky.social series: "Effort-Based Decision-Making and Its Discontents: Precision medicine approaches for understanding the pathophysiology and treatment of motivational deficits in mental illness" 1/
23.02.2026 15:08
👍 14
🔁 8
💬 1
📌 0
Where you look next isn’t arbitrary.
In our new paper, we model human eye movements in immersive visual search as reinforcement learning under cognitive constraints. 🧵
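(The paper's model isn't quoted here; as a toy illustration of "reinforcement learning under cognitive constraints", the sketch below scores candidate fixations by expected information gain minus a saccade cost. All quantities and names are hypothetical, not the paper's.)

```python
import numpy as np

# Toy constrained fixation policy: value each candidate location by a
# crude information-gain proxy, penalized by the distance of the saccade.
rng = np.random.default_rng(1)
n_locations = 25                                 # 5x5 grid of fixations
belief = np.full(n_locations, 1 / n_locations)   # P(target at location)
current = 12                                     # start at the center
cost_per_unit = 0.05                             # hypothetical saccade cost
xy = np.array([(i // 5, i % 5) for i in range(n_locations)], float)

for step in range(5):
    gain = belief * -np.log(belief + 1e-12)      # entropy-reduction proxy
    cost = cost_per_unit * np.linalg.norm(xy - xy[current], axis=1)
    current = int(np.argmax(gain - cost))        # constrained value of saccade
    print(f"step {step}: fixate location {current}")
    belief[current] *= 0.2                       # target not found; discount
    belief /= belief.sum()
```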
23.02.2026 15:42
👍 34
🔁 14
💬 1
📌 0
in my decision-making course we devote one class to a group exercise in which the students need to use what they learned in Act 1 ("Rational Decision Making") to shut down a rogue AI in the semi-distant future; this is the intro.
23.02.2026 14:46
👍 22
🔁 5
💬 1
📌 2
OSF
I reviewed 5+ fMRI papers on response inhibition within roughly the last year, and the same points come up over and over again. So I wrote a short note last week entitled "The unique limitations of BOLD-fMRI in the study of response inhibition". You can read it here.
osf.io/preprints/ps...
21.02.2026 14:58
👍 64
🔁 19
💬 2
📌 0
Two side-by-side images depicting the nested hierarchical IPOMDP and the non-hierarchical x-IPOMDP mechanism.
What happens when we can't use recursive belief to compete? We can use anomaly detection instead!
Here, we (led by soon-to-be-Dr @nitalon.bsky.social) devise a multi-agent account where compression & reward expectation are used to notice deception
jair.org/index.php/ja...
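(A minimal sketch of the compression idea, not the paper's x-IPOMDP mechanism: an opponent's action stream that compresses poorly against past behavior is flagged as anomalous, i.e., possibly deceptive. The action strings are made up for illustration.)

```python
import zlib

# Compression-based anomaly detection: extra bytes needed to encode a new
# sequence given the history serve as a surprisal / anomaly score.
def compressed_size(s: bytes) -> int:
    return len(zlib.compress(s))

def anomaly_score(history: bytes, new: bytes) -> int:
    # NCD-style proxy: cost of `new` conditional on `history`.
    return compressed_size(history + new) - compressed_size(history)

history = b"cooperate,cooperate,cooperate," * 20   # hypothetical action log
honest  = b"cooperate,cooperate,cooperate,"
devious = b"cooperate,defect,cooperate,defect,"

print(anomaly_score(history, honest))   # small: fits the learned pattern
print(anomaly_score(history, devious))  # larger: deviates from expectation
```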
18.02.2026 08:49
👍 18
🔁 7
💬 1
📌 0
Mathematical Methods in Computational Neuroscience
Summer school in Eresfjord, Norway (July 6-24, 2026)
Applications are now open for the summer school: 𝐌𝐚𝐭𝐡𝐞𝐦𝐚𝐭𝐢𝐜𝐚𝐥 𝐌𝐞𝐭𝐡𝐨𝐝𝐬 𝐢𝐧 𝐂𝐨𝐦𝐩𝐮𝐭𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐍𝐞𝐮𝐫𝐨𝐬𝐜𝐢𝐞𝐧𝐜𝐞
🧠 Apply before March 15: www.compneuronrsn.org
📍 Located in beautiful Eresfjord 🇳🇴
🗓️ Between July 6-24
Supported by the @kavlifoundation.org
In collaboration with @kavlintnu.bsky.social
17.02.2026 22:22
👍 61
🔁 29
💬 0
📌 5
Beyond grateful to be selected as a 2026 Sloan Research Fellow in Neuroscience! 🧠🤓
It takes a village, and this wouldn't be possible without my amazing team, mentees, mentors, collaborators and colleagues! Very excited to continue our work on the neuroscience of social learning. #SloanFellow
18.02.2026 03:20
👍 29
🔁 2
💬 0
📌 1
Book cover. A silhouette of a person's head filled with colorful geometric shapes—perhaps symbolizing cognitive resources or deployment thereof. The style is attractive and modern, if generic.
text:
The Rational Use of Cognitive Resources
Falk Lieder, Frederick Callaway, Thomas L. Griffiths
I'm excited to announce that I had my first (co-authored) book published today! "The Rational Use of Cognitive Resources" with Falk Lieder and Tom Griffiths (@cocoscilab.bsky.social ). You can read it for free! (see thread)
18.02.2026 01:05
👍 142
🔁 45
💬 2
📌 0
We're hiring! This is a unique opportunity to translate our understanding of neural computation - from circuit-level mechanisms to computational principles - into the human brain, through the establishment of cutting-edge human neural recording capabilities with collaborators in London and abroad.
We’re hiring a Group Leader!
Join us to lead a transformative initiative in human systems neuroscience.
Find out more and apply ⤵️
www.sainsburywellcome.org/content/curr...
13.02.2026 13:41
👍 31
🔁 25
💬 1
📌 4
"While most AI tries to fix humans
@simile_ai
is building AI that understands them.
They build digital twins that capture someone’s worldview, then simulate how customers, employees or entire populations will actually respond to change.
Born out of Stanford generative agent research. Now backed by $100M to turn that into a category.
AI is getting smarter and Simile is making it more human. We're proud to be in their corner."
A proposed solution is to build generative agents that represent specific individuals (Box 1). One such study [6] recruited a sample of ~1000 US participants nationally representative for age, gender, race, region, education, and political ideology; programmed an LLM chatbot to interview each participant for 2 h; and asked the participants to complete a battery of questionnaires and tasks. They then used the interview transcripts to prompt ~1000 LLM agents to role-play each of the human participants on the same questionnaires and tasks. Observing a high correspondence between the responses of the generative agents and their human counterparts, the researchers concluded that LLMs prompted in this way can capture the 'idiosyncratic nature' of real people across a range of situations [57]. Some researchers propose making generative agents even more representative by training them on their human counterparts' 'emails, messages and social media posts', as well as 'text generated by friends, family or coworkers' [23]. (We note this raises critical questions about informed consent; see Outstanding questions.) The logic here is that, because generative agents are built to represent a diverse sample of specific individuals, researchers could then run thousands of experiments on the generative agents and feel confident that the resultant data are faithful to the original samples. Researchers could even populate virtual worlds with generative agents, running large-scale simulations to test interventions and policies (Box 2).
Nevertheless, the generative agents paradigm faces hard limits to its potential representativeness. By design, generative agents can only represent individuals who consent to sharing sensitive data with scientists, which carries substantial privacy risks [6,58]. Given these risks, people with stronger privacy concerns are less likely to consent to such studies. Members of marginalized groups in the USA, including women, gender minorities, people of color, and disabled people, have heightened privacy concerns and more negative attitudes about AI [59,60]. These groups have historically faced disproportionate surveillance [61,62] and theft of their biometric and behavioral data for scientific research [63–65], including training machine learning models [66]. Regimes of digital surveillance spread globally [67], creating frictions where global north ideologies touch down in the global south [68]. These entrenched and repeating patterns raise cascading problems for the generative agents approach: first, members of marginalized groups are less likely to participate and, second, those who do will be less representative of their groups. Any attempt to build AI Surrogates that are truly representative of diverse populations will likely face a hard limit that marginalized people are (justifiably) less willing to entrust their data to scientists.
Box 2. Generative agents and simulated worlds
Researchers note that 'many of the most interesting research questions, such as the psychology of world leaders, the effects of large-scale policy change, or the effects of large-scale events on the general public' are 'logistically infeasible' to study in the laboratory 'with any realistic amount of resources' [23]. In response, generative agents populating simulated worlds are seen as promising research paths. For example, researchers could create generative agents based on the profiles of Palo Alto residents and simulate how the community would respond to different pandemic interventions. Much of the technical research on artificial agents acting in simulated worlds originates in fields beyond cognitive science, including computer science, sociology, economics, political science, computational social science, as well as private industry [9,112–116].
Developers of these agent architectures have lofty ambitions. They believe that this technology can 'test interventions and theories and gain real-world insights' [58], serving as 'a high-fidelity platform for policy outcome evaluation' to enable 'data-driven policy selection' [115]. Given these ambitions, validating that these models can generalize to the real world is imperative [116], and some researchers caution that 'current architectures must cover some distance before their use is reliable' [58]. Yet, such validation faces a paradox: these models can only be validated against the ground truth of real-world data, but their appeal lies in simulating scenarios where ground truth is not available. Some researchers [22] propose to meet this challenge by identifying 'the most proximal cases for which ground-truth data from human subjects is available' and using those cases to validate the simulation's predictions 'before turning the model to a domain in which no ground truth exists'. However, there is currently 'no consensus' around how proximal is proximal enough [116].
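(To make the quoted protocol concrete: a schematic of the role-play step the excerpt describes: prompt an LLM with a participant's interview transcript, then have it answer the same questionnaire items the human completed. `call_llm` and the helper names are placeholders for whatever model API a lab actually uses, not a real library.)

```python
from typing import Callable

# Schematic of the generative-agent role-play protocol from the excerpt.
def make_agent_prompt(transcript: str, item: str) -> str:
    return (
        "You will role-play the person interviewed below.\n"
        f"--- interview transcript ---\n{transcript}\n---\n"
        f"Answer as that person would. Question: {item}"
    )

def simulate_responses(
    transcript: str,
    items: list[str],
    call_llm: Callable[[str], str],  # placeholder for a real model call
) -> dict[str, str]:
    # One simulated answer per questionnaire item, conditioned on the
    # participant's own words; validation would compare these against the
    # human's actual answers (e.g., per-item agreement or correlation).
    return {item: call_llm(make_agent_prompt(transcript, item)) for item in items}
```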
Stanford CS researchers just got a huge payday for promising AI agents that can simulate the real world. @mjcrockett.bsky.social and I wrote about these researchers' vision. Screenshotting quite a lengthy part of our paper, because we spent A LOT of time thinking about the paucity of this promise
13.02.2026 14:43
👍 82
🔁 24
💬 5
📌 6