
Elizabeth Jiwon Im

@imelizabeth

🌲 PhD student at Stanford 🧠 Intersection of developing brain, visual experience, and computational models 🐦 Johns Hopkins alum imelizabeth.github.io

161 Followers · 143 Following · 9 Posts · Joined 25.09.2023

Latest posts by Elizabeth Jiwon Im @imelizabeth

How Congress can restore the independence of US science
Members must go beyond reinstating US government research spending and re-establish decentralized governance at the National Institutes of Health and other agencies.

The most important change at #NIH and to US science this year is bigger than grant cancellations: it's how the agency is governed.

For 75 years NIH has been largely independent of presidential control. That's changed this year. New piece from me and @nataliebaviles.bsky.social in @nature.com
🧪

09.03.2026 12:26 👍 263 🔁 151 💬 4 📌 4

Excited to share new work on how the brain makes social inferences from visual input! 🧠👯‍♂️
(With @lisik.bsky.social , @shariliu.bsky.social, @tianminshu.bsky.social , and Minjae Kim!) www.biorxiv.org/content/10.6...

26.02.2026 22:09 👍 45 🔁 16 💬 1 📌 2

We've posted a new fMRI study of semantic relations (has-part, is-a, made-of, etc.), a key aspect of language. We find that relations are represented in the same brain regions as are other semantic concepts, though voxels tend to be selective for only one relation or another.
doi.org/10.64898/202...

23.02.2026 21:06 👍 55 🔁 20 💬 1 📌 2
Real-world Objects Scaffold Visual Working Memory for Features: Increased Neural Engagement When Colors Are Remembered as Part of Meaningful Objects
Abstract. Visual working memory is a core cognitive function that allows active storage of task-relevant visual information. Contrary to the common assumption that the capacity of this system is fixed...

New paper with @timbrady.bsky.social and @violastoermer.bsky.social now out in JoCN! "Real-world Objects Scaffold Visual Working Memory for Features: Increased Neural Engagement When Colors Are Remembered as Part of Meaningful Objects" doi.org/10.1162/JOCN...

22.02.2026 01:29 👍 39 🔁 11 💬 2 📌 0

🚨 Jewelia's new preprint! We report the first pRF mapping in teens + reveal functional fingerprints of category regions in high-level visual cortex. www.biorxiv.org/content/10.6...

20.02.2026 00:16 👍 10 🔁 3 💬 1 📌 1

New preprint with @SamJung @timbrady.bsky.social and @violastoermer.bsky.social: osf.io/preprints/ps.... Here we uncover what might be driving the "meaningfulness benefit" in visual working memory. Studies show that real objects are remembered better in VWM tasks than abstract stimuli. But why? 1/

09.02.2026 21:06 👍 41 🔁 24 💬 1 📌 0
Functional Magnetic Resonance Imaging in Awake Infants: Insights From More Than 750 Scanning Sessions
Functional magnetic resonance imaging (fMRI) in awake infants has the potential to reveal how the early developing brain gives rise to cognition and behavior. However, awake infant fMRI poses signifi...

Congratulations to @lillianbehm.bsky.social, Nick Turk-Browne, and a huge team for putting together this paper (out today) on lessons from a decade of attempts to study awake infants with fMRI:
onlinelibrary.wiley.com/doi/10.1111/...

31.01.2026 18:44 👍 60 🔁 13 💬 2 📌 0

The Visual Learning Lab is hiring TWO lab coordinators!

Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!), with flexible summer start dates.

30.01.2026 23:21 👍 48 🔁 41 💬 1 📌 0

Excited to share our new publication "The Spatio-Temporal Dynamics of Phoneme Encoding in Aging and Aphasia", published in JNeurosci 🧠
➡️ www.jneurosci.org/content/46/4...
with @lauragwilliams.bsky.social & @mvandermosten.bsky.social 🤝

Check out @stanfordbrain.bsky.social ’s summary of it ⬇️

29.01.2026 21:49 👍 24 🔁 8 💬 0 📌 2

Now in press at Nature Communications!
www.nature.com/articles/s41...
Check it out if you are interested in category selectivity, the organization of visual cortex, and topographic models!

21.12.2025 12:26 👍 22 🔁 8 💬 0 📌 0

Excited that this is now out in @nathumbehav.nature.com 🎉

David Rose (davdrose.github.io) led this project on how children's understanding of causal language develops.

📃 (preprint): osf.io/preprints/ps...
📎: github.com/davdrose/cau...

05.12.2025 16:20 👍 51 🔁 11 💬 0 📌 0

How physical information is used to make sense of the psychological world

Perspective by Shari Liu, Seda Karakose-Akbiyik, Joseph Outa & Minjae J. Kim

Web: go.nature.com/3Xwo40J
PDF: rdcu.be/eSMfa

02.12.2025 13:16 👍 12 🔁 5 💬 0 📌 0
People – Scaffolding of Cognition Team

We are recruiting a lab manager/research assistant to start in early 2026! The successful candidate will conduct awake infant fMRI, meet cute babies, and join a fun team!

More details (e.g. responsibilities): soc.stanford.edu/people/#join...

Apply here: careersearch.stanford.edu/jobs/social-...

21.11.2025 00:16 👍 44 🔁 40 💬 1 📌 1
Figure 1 showing alignment pipeline using CLIP models on BabyView data.

Figure 2: human judgments are correlated with CLIP scores.

Can we use VLMs to quantify multimodal alignment in children's experiences? We analyze a large corpus of headcam videos to find out!

New preprint from our BabyView project, led by @alvinwmtan.bsky.social and Jane Yang: arxiv.org/abs/2511.18824

01.12.2025 18:05 👍 27 🔁 5 💬 1 📌 0
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…

Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how "AI Surrogates" entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/

21.10.2025 20:24 👍 289 🔁 119 💬 9 📌 27

We're recruiting a postdoctoral fellow to join our team! 🎉

I'm happy to share that I've reopened the search for this position (it was temporarily closed due to funding uncertainty).

See lab page and doc below for details!

07.10.2025 02:39 👍 63 🔁 37 💬 2 📌 1
infant data from experiment 1

conceptual schema for different habituation models

title page

results from experiment 2 with adults

Ever wonder how habituation works? Here's our attempt to understand:

A stimulus-computable rational model of visual habituation in infants and adults doi.org/10.7554/eLif...

This is the thesis of two wonderful students: @anjiecao.bsky.social @galraz.bsky.social, w/ @rebeccasaxe.bsky.social

29.09.2025 23:38 👍 76 🔁 28 💬 1 📌 2

🚨New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!

Humans are capable of sophisticated theory of mind, but when do we use it?

We formalize & document a new cognitive shortcut: belief neglect, inferring others' preferences as if their beliefs are correct 🧵

17.09.2025 00:58 👍 50 🔁 16 💬 2 📌 1
Flyer for the event!

*Sharing for our department's trainees*

🧠 Looking for insight on applying to PhD programs in psychology?

✨ Apply by Sep 25th to Stanford Psychology's 9th annual Paths to a Psychology PhD info-session/workshop to have all of your questions answered!

πŸ“ Application: tinyurl.com/pathstophd2025

02.09.2025 20:01 👍 10 🔁 8 💬 0 📌 0

New Open dataset alert:
🧠 Introducing "Spacetop" – a massive multimodal fMRI dataset that bridges naturalistic and experimental neuroscience!

N = 101 x 6 hours each = 606 functional iso-hours combining movies, pain, faces, theory-of-mind and other cognitive tasks!

🧡below

04.09.2025 19:21 👍 114 🔁 58 💬 3 📌 3

🚨New preprint with my stellar student Junsong

A Small-World Mind Theory of social cognition doi.org/10.31234/osf...

Big network, close connection 🤯

It explains why low-dimensional findings dominate & high-dimensional evidence emerges

Naturalistic designs matter to understanding the complex mind

02.09.2025 18:16 👍 13 🔁 3 💬 0 📌 0

Can self-supervised learning help us understand how the brain learns to see the world?

Our latest study, led by Josephine Raugel (FAIR, ENS), is now out:

📄 arxiv.org/pdf/2508.18226
🧡 thread below

03.09.2025 05:18 👍 46 🔁 12 💬 1 📌 0
Social inference brain networks in autistic adults during movie-viewing: functional specialization and heterogeneity - Molecular Autism
Background Difficulty in social inferences is a core feature in autism spectrum disorders (ASD). On the behavioral level, it remains unclear whether reasoning about others' mental states (Theory of Mi...

❗New Paper Alert ❗

We found some (but overall weak) evidence for greater differential responses in brain networks underlying theory of mind inferences than in those involved in empathic responses in autism. #neuroskyence #autism #openaccess
@hilaryrichardson.bsky.social

rdcu.be/eBZwE

27.08.2025 15:29 👍 4 🔁 2 💬 0 📌 0

Two new preprints from our lab!
@sabinemuzellec.bsky.social leads work on reverse (Brain ➡ ANN) predictivity, showing gaps in forward (ANN ➡ Brain) metrics shorturl.at/bnoFl
@marenwehrheim.bsky.social leads work on facial expressions emerging from shared, not segregated, neural subspaces. shorturl.at/QAIl4

28.08.2025 19:17 👍 9 🔁 1 💬 1 📌 0
Postdoctoral Associate

Job alert: I'm hiring a postdoc for my lab at CU Boulder starting Fall 2026!

We study person perception, stereotyping & prejudice, and intervention science using behavioral & neuroimaging methods.

Link: jobs.colorado.edu/jobs/JobDeta...

Review starts Nov 1 and continues until filled.

28.08.2025 16:51 👍 20 🔁 19 💬 0 📌 2

So cool to see our project, spearheaded by Igor Bascandziev, featured in Harvard's 'Usable Knowledge'!

27.08.2025 21:00 👍 5 🔁 1 💬 1 📌 0
Investigating action topography in visual cortex and deep artificial neural networks
High-level visual cortex contains category-selective areas embedded within larger-scale topographic maps like animacy and real-world size. Here, we propose action as a key organizing factor shaping vi...

New preprint out! We propose that action is a key dimension shaping the topographic organization of object categories in lateral occipitotemporal cortex (LOTC), and test whether standard and topographic neural networks capture this pattern. A thread:

www.biorxiv.org/content/10.1...

🧡 1/n

07.08.2025 15:17 👍 35 🔁 7 💬 1 📌 1

Lazarus et al. (2025): A simple act with a lasting impact: Holding babies skin-to-skin in the NICU helped support their development and reduced differences linked to family income. Early touch can be a powerful way to promote equity from the very start #infancypapers doi.org/10.1111/infa...

31.07.2025 15:54 👍 15 🔁 8 💬 0 📌 1
Visual Word Form Area demonstrates individual and task-agnostic consistency but inter-individual variability
Ventral Occipital Temporal Cortex (VOTC) is home to a mosaic of categorically-selective functional regions that respond to visual stimuli. Within left VOTC lies the Visual Word Form Area (VWFA) - a te...

next preprint is out - ever wonder why findings about VWFA differ so much? @jyeatman.bsky.social @mayayablonski.bsky.social , Mia Fuentes-Jimenez, Hannah Stone, and I might have the answer...

www.biorxiv.org/content/10.1...

30.07.2025 05:05 👍 7 🔁 3 💬 1 📌 1
continuum of inductive potential from low (relatively minimal categories whose members are dissimilar) to high (coherent meaningful categories whose members are similar) above a cartoon child. an icon of a tiger appears under "high" inductive potential, with "closes eyes when happy" appearing as a feature of a tiger in a zoo, with an arrow pointing to the tiger icon, and a dashed arrow extending it to a tiger on a savanna. an icon of a pedestrian appears under "low" inductive potential, with "closes eyes when happy" appearing as a feature of a woman on a street, with Xs over arrows pointing to the pedestrian icon, and to a different pedestrian.

📣 new paper! people use some categories to generalize (e.g., we generalize something we learn about one tiger 🐯 to other tigers 🐅), but not others (e.g., we don't generalize from one pedestrian 🚶 to other pedestrians 🚶‍♂️). how do people learn what categories allow for generalization? 🧵

31.07.2025 06:10 πŸ‘ 45 πŸ” 16 πŸ’¬ 2 πŸ“Œ 1