
Laura Gwilliams

@lauragwilliams

Language processing - Neuroscience - Machine Learning - Assistant Professor at Stanford University - She/Her - 🏳️‍🌈

1,996
Followers
394
Following
83
Posts
16.10.2023
Joined

Latest posts by Laura Gwilliams @lauragwilliams

going to be a great meeting!!

08.03.2026 00:49 👍 2 🔁 0 💬 0 📌 0

😀

08.03.2026 00:39 👍 0 🔁 0 💬 0 📌 0
Shifting neural code powers speech comprehension Dynamic coding helps explain how the brain processes multiple features of speech, from the smallest units of sound to full sentences, simultaneously.

thank you @claudia-lopez.bsky.social for this clear and thoughtful coverage in @thetransmitter.bsky.social on some of our recent work on dynamic neural codes in speech processing! 🧠 🌀 ✨ www.thetransmitter.org/language/shi...

06.03.2026 22:28 👍 22 🔁 1 💬 1 📌 0

thank you @stellamayerhoff.bsky.social for covering our work in this article!

25.02.2026 21:36 👍 6 🔁 2 💬 1 📌 0

Yes!

11.02.2026 01:45 👍 1 🔁 0 💬 0 📌 0

Soon hiring a lab manager! Looking for someone who is really interested in language neuroscience, who is organised, motivated, a great communicator, and who works well in a research team. Express interest by submitting this form: tinyurl.com/glysn-labman...

Reposts appreciated!

03.02.2026 16:14 👍 41 🔁 39 💬 3 📌 1

1/7 Can infants recognise the world around them? 👶🧠 As part of the FOUNDCOG project, we scanned 134 awake infants using fMRI. Published today in Nature Neuroscience, our research reveals that 2-month-old infants already possess complex visual representations in VVC that align with DNNs.

02.02.2026 16:00 👍 155 🔁 70 💬 4 📌 8

CCN Proceedings submission deadline coming up soon!

Feb 9 - abstract and metadata
Feb 12 - full paper PDF

01.02.2026 18:15 👍 1 🔁 1 💬 0 📌 0

very happy to share our latest work! building upon our basic science discoveries to better understand language processing in aphasia and healthy older adults

30.01.2026 14:45 👍 17 🔁 1 💬 1 📌 0
The Spatio-Temporal Dynamics of Phoneme Encoding in Aging and Aphasia During successful language comprehension, speech sounds (phonemes) are encoded within a series of neural patterns that evolve over time. Here we tested whether these neural dynamics of speech encoding...

A new study by @lauragwilliams.bsky.social, @kriesjill.bsky.social, and team suggests that phonetic features are robustly encoded in healthy older adults, but show reduced encoding strength in individuals with post-stroke aphasia during speech comprehension.

www.jneurosci.org/content/46/4...

29.01.2026 20:48 👍 7 🔁 3 💬 0 📌 1

Get ready to submit your 8-pagers!

28.01.2026 16:49 👍 11 🔁 4 💬 0 📌 0

Applications are open for the Klingenstein Fellowship Awards in Neuroscience, supporting early-career investigators engaged in basic or clinical research that may lead to a better understanding of neurological and psychiatric disorders.

Learn more: klingenstein.org/esther-a-jos...

15.01.2026 17:37 👍 2 🔁 1 💬 0 📌 0

main goal for this year: find a new job! 🙂

looking for a role with fun & complex technical challenges & within a great community. my main expertise is in signal processing/EEG/MEG, but topic-wise I am quite flexible.

science/industry both great! starting mid-year. nschawor.github.io/cv

16.01.2026 10:14 👍 102 🔁 66 💬 3 📌 3

I really enjoyed sharing my lab's work with ABIM today! looking forward to another great day of talks and posters tomorrow 🏔️

14.01.2026 23:12 👍 6 🔁 0 💬 0 📌 0

Believe it or not, it's already day 3 of #ABIM2026: @lauragwilliams.bsky.social and @jeanremiking.bsky.social will shed light onto the neural algorithms of human language, their emergence, and the role LLMs can play in their investigation.

14.01.2026 13:12 👍 7 🔁 1 💬 1 📌 0
Commentary title: 
Linguists should learn to love speech-based deep learning models 

Authors: 
Marianne de Heer Kloots, Paul Boersma, Willem Zuidema

Abstract: 
Futrell and Mahowald present a useful framework bridging technology-oriented deep learning systems and explanation-oriented linguistic theories. Unfortunately, the target article's focus on generative text-based LLMs fundamentally limits fruitful interactions with linguistics, as many interesting questions on human language fall outside what is captured by written text. We argue that audio-based deep learning models can and should play a crucial role.


'Tis the season to preprint BBS commentaries; I'm happy to share ours too! 🎄✨

The textual basis of current LLMs causes trouble, but linguistically relevant insights *can* be found in systems modelling the more natural form of human spoken language: the speech signal itself. arxiv.org/abs/2512.14506

17.12.2025 15:21 👍 27 🔁 10 💬 1 📌 1

We will start the review for this position in a couple of days!

08.12.2025 17:55 👍 12 🔁 3 💬 0 📌 0

Was a pleasure speaking at the @dataonbrainmind.bsky.social workshop! Plenty of exciting work still to come today - head over to room 10 if you're at #NeurIPS!

07.12.2025 18:29 👍 3 🔁 0 💬 0 📌 0

Thrilled to start 2026 as faculty in Psych & CS
@ualberta.bsky.social + Amii.ca Fellow! 🥳 Recruiting students to develop theories of cognition in natural & artificial systems 🤖💭🧠. Find me at #NeurIPS2025 workshops (speaking coginterp.github.io/neurips2025 & organising @dataonbrainmind.bsky.social)

06.12.2025 19:26 👍 103 🔁 27 💬 4 📌 1

Excited that this is now out in @nathumbehav.nature.com 🎉

David Rose (davdrose.github.io) led this project on how children's understanding of causal language develops.

📃 (preprint): osf.io/preprints/ps...
📎: github.com/davdrose/cau...

05.12.2025 16:20 👍 51 🔁 11 💬 0 📌 0

"You coming with?" is definitely a thing

05.12.2025 04:40 👍 1 🔁 0 💬 0 📌 0

wooo!! congrats tyler!!

26.11.2025 01:06 👍 2 🔁 0 💬 1 📌 0
a red building on UPENN's campus photographed during the fall


the Philadelphia skyline, with clear skies and autumn trees


starting fall 2026 i'll be an assistant professor at @upenn.edu 🥳

my lab will develop scalable models/theories of human behavior, focused on memory and perception

currently recruiting PhD students in psychology, neuroscience, & computer science!

reach out if you're interested 😊

25.11.2025 21:36 👍 227 🔁 45 💬 25 📌 3

Applications due December 1st!
Come work with us!

24.11.2025 21:41 👍 11 🔁 6 💬 0 📌 1

proud to share this work, led by the brilliant @ilinabg.bsky.social, now out in Nature! Ilina finds that speech-sound neural processing is VERY similar in a language you know and one you don't. differences only emerge at the level of word boundaries and learnt statistical structure 🧠✨

20.11.2025 19:11 👍 58 🔁 11 💬 2 📌 1
Could brain implants read our thoughts? (Not yet) Join us as we talk with Erin Kunz about building brain-computer interfaces to restore speech to people

Imagine losing your ability to speak. You know what you want to say, but the brain-to-muscle connection for forming words no longer works.

Can BCIs bypass those broken circuits to help people speak again? Erin Kunz shares how they work and what's ahead.

neuroscience.stanford.edu/news/could-b...

19.11.2025 19:52 👍 4 🔁 1 💬 0 📌 0

Come check out my poster at #SfN, Wed (8am-12pm, NN1)!
"Auditory and multisensory representations in the human hippocampus"

Using high-resolution fMRI, we ask how the human hippocampus represents sensory modalities beyond vision, and how it integrates sensory information across modalities.

18.11.2025 20:20 👍 13 🔁 3 💬 0 📌 0
Human cortical dynamics of auditory word form encoding We perceive continuous speech as a series of discrete words, despite the lack of clear acoustic boundaries. The superior temporal gyrus (STG) encodes …

happy to share our new paper, out now in Neuron! led by the incredible Yizhen Zhang, we explore how the brain segments continuous speech into word-forms and uses adaptive dynamics to code for relative time - www.sciencedirect.com/science/arti...

07.11.2025 18:16 👍 48 🔁 17 💬 2 📌 1

I'm excited to share our Findings of EMNLP paper w/ @cocoscilab.bsky.social, @rtommccoy.bsky.social, and @rdhawkins.bsky.social!

Language models, unlike humans, require large amounts of data, which suggests the need for an inductive bias.
But what kind of inductive biases do we need?

07.11.2025 09:17 👍 7 🔁 5 💬 1 📌 1
View from your office onto Giessen and surrounding villages.


Please repost! I am looking for a PhD candidate in the area of Computational Cognitive Neuroscience to start in early 2026.

The position is funded as part of the Excellence Cluster "The Adaptive Mind" at @jlugiessen.bsky.social.

Please apply here by Nov 25:
www.uni-giessen.de/de/ueber-uns...

04.11.2025 13:57 👍 80 🔁 98 💬 1 📌 4