
ERCbravenewword

@ercbravenewword

Exploring how new words convey novel meanings in the ERC Consolidator project #BraveNewWord 🧠 Unveiling language and cognition insights 🔍 Join our research journey! https://bravenewword.unimib.it/

205 Followers · 319 Following · 50 Posts · Joined 25.01.2025

Latest posts by ERCbravenewword @ercbravenewword

The Multidimensional Nature of Semantic Transparency in a Cross-Linguistic Perspective: Evidence From Human Intuitions, Computational Estimates, and Processing Data for Chinese Compounds Semantic transparency is a key construct for understanding how complex words are represented and processed, yet it has been conceptualized and operationalized in diverse ways across studies. In this ...

Do human ratings and LLMs inform the same underlying latent structure of semantic transparency? And does the multidimensionality of transparency hold for Chinese compounds?

Chen J., Chersoni E., Marelli M., Huang C.
🔗 onlinelibrary.wiley.com/doi/10.1111/...

@jing1chen2yes.bsky.social

12.03.2026 10:36 👍 0 🔁 0 💬 0 📌 0

How do our brain hemispheres divide the labor of narrative comprehension? Seminar by Prof. Jamie Reilly (Temple University, @reilly-coglab.com) on local vs. global semantic processing. 🗓️ Monday, March 9 | 2:00 PM 📍 Milano-Bicocca (U6 - Sala Lauree) 🌐 Join online: meet.google.com/wkc-tyiv-zdx

02.03.2026 04:54 👍 9 🔁 2 💬 0 📌 0
Morphemes in the wild: Modelling affix learning from the noisy landscape of natural text Morphological knowledge serves as a powerful heuristic for vocabulary growth and contributes significantly to the speed and efficiency of reading. Whi…

New study in JML @Marco_Marelli
@mariakna.bsky.social, @kathyrastle.bsky.social

How does morphological knowledge serve as a powerful heuristic for vocabulary growth and reading efficiency? How do readers navigate noisy text to learn the meanings of affixes?

www.sciencedirect.com/science/arti...

26.01.2026 09:47 👍 5 🔁 0 💬 0 📌 0
Mbs Vector Space Lab

Our seminar archive is available online for those interested in the latest research on language and cognition.

The latest upload features Prof. Giovanni Cassani (Tilburg University), discussing how humans and LLMs interpret novel words in context.

www.youtube.com/channel/UClH...

20.01.2026 08:35 👍 2 🔁 0 💬 0 📌 0

Join us for our next seminar featuring Prof. Giovanni Cassani (Tilburg Uni). We will explore how humans and LLMs interpret novel words in context.

๐Ÿ—“๏ธ Today, Jan 19, 2:00 PM CET ๐Ÿ“ Bicocca (U6 - Sala Lauree) ๐ŸŒ Join online: meet.google.com/suf-ybti-oop

12.01.2026 12:19 👍 2 🔁 1 💬 0 📌 0
Making sense from the parts: What Chinese compounds tell us about reading | Cheng-Yu Hsieh | Milan (YouTube video by Mbs Vector Space Lab)

For those who couldn't attend, the recording of Hsieh Cheng-Yu's seminar is now available on our YouTube channel.

Watch the full presentation here: youtu.be/v7DHox_6duE

03.11.2025 08:29 👍 3 🔁 1 💬 0 📌 1
Compositionality in the semantic network: a model-driven representational similarity analysis Abstract. Semantic composition allows us to construct complex meanings (e.g., “dog house”, “house dog”) from simpler constituents (“dog”, “house”). Neuroim

How does the brain handle semantic composition?

Our new Cerebral Cortex paper shows the left inferior frontal gyrus (BA45) does it automatically, even when task-irrelevant. We used fMRI + computational models.

Congrats Marco Ciapparelli, Marco Marelli & team!

doi.org/10.1093/cerc...

31.10.2025 06:18 👍 9 🔁 2 💬 0 📌 0

🚨 New publication: How to improve conceptual clarity in psychological science?

Thrilled to see this article with @ruimata.bsky.social out. We discuss how LLMs can be leveraged to map, clarify, and generate psychological measures and constructs.

Open access article: doi.org/10.1177/0963...

23.10.2025 07:27 👍 44 🔁 19 💬 0 📌 2
Italian blasphemy and German ingenuity: how swear words differ around the world Once swearwords were dismissed as a sign of low intelligence, now researchers argue the ‘power’ of taboo words has been overlooked

A fascinating read in @theguardian.com on the psycholinguistics of swearing!

Did you know Germans averaged 53 taboo words, while Brits and Spaniards listed only 16?
Great to see the work of our colleague Simone Sulpizio & Jon Andoni Duñabeitia highlighted! 👏

www.theguardian.com/science/2025...

23.10.2025 12:27 👍 5 🔁 3 💬 0 📌 0

Join us for our next seminar! We're excited to host Hsieh Cheng-Yu (University of London)

He'll discuss "Making sense from the parts: What Chinese compounds tell us about reading," exploring how we process ambiguity & meaning consistency

๐Ÿ—“๏ธ 27th Oct โฐ 2PM (CET)๐Ÿ“UniMiB ๐Ÿ’ป meet.google.com/zvk-owhv-tfw

19.10.2025 07:35 👍 7 🔁 2 💬 1 📌 1

I'm sharing a Colab notebook on using large language models for cognitive science! GitHub repo: github.com/MarcoCiappar...

It's geared toward psychologists & linguists and covers extracting embeddings, predictability measures, and comparing models across languages & modalities (vision). See examples 🧵

18.07.2025 13:39 👍 11 🔁 4 💬 1 📌 0
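One of the notebook topics above is computing predictability measures. As a toy illustration (not taken from the linked repo, which uses actual LLMs), word predictability is often quantified as surprisal, -log2 P(word | context); here estimated from made-up bigram counts:

```python
import math

# Hypothetical bigram counts, for illustration only;
# the linked notebook derives probabilities from an LLM instead.
counts = {
    ("the", "dog"): 8,
    ("the", "house"): 2,
}

def surprisal(context: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | context), from raw bigram counts."""
    total = sum(c for (ctx, _), c in counts.items() if ctx == context)
    p = counts[(context, word)] / total
    return -math.log2(p)

# "dog" after "the" is predictable (p = 0.8), "house" less so (p = 0.2)
print(surprisal("the", "dog"))    # ≈ 0.32 bits
print(surprisal("the", "house"))  # ≈ 2.32 bits
```

Less predictable words carry higher surprisal, which is the quantity typically correlated with reading times.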
Sigmoid function. Non-linearities in neural networks allow them to behave in distributed and near-symbolic fashions.

New paper! 🚨 I argue that LLMs represent a synthesis between distributed and symbolic approaches to language, because, when exposed to language, they develop highly symbolic representations and processing mechanisms in addition to distributed ones.
arxiv.org/abs/2502.11856

30.09.2025 13:15 👍 27 🔁 11 💬 1 📌 0
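The figure caption's point can be made concrete with a few lines of code (a sketch of mine, not from the paper): a shallow sigmoid yields graded, "distributed" responses, while a steep one approaches a binary step, i.e. near-symbolic behavior from the same unit.

```python
import math

def sigmoid(x: float, gain: float = 1.0) -> float:
    """Logistic function; higher gain makes the transition sharper."""
    return 1.0 / (1.0 + math.exp(-gain * x))

# Shallow gain: graded, "distributed" response to the same input
print(round(sigmoid(0.5, gain=1.0), 3))   # 0.622
# Steep gain: near-binary, "symbolic" response
print(round(sigmoid(0.5, gain=50.0), 3))  # 1.0
```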
Compositionality in the semantic network: a model-driven representational similarity analysis Abstract. Semantic composition allows us to construct complex meanings (e.g., “dog house”, “house dog”) from simpler constituents (“dog”, “house”). Neuroim

Important fMRI/RSA study by @marcociapparelli.bsky.social et al. Compositional (multiplicative) representations of compounds/phrases in left IFG (BA45), mSTS, and ATL; the left AG encodes the constituents but not their composition, weighting the right-hand element more strongly, whereas the IFG shows the opposite pattern 🧠🧩
academic.oup.com/cercor/artic...

26.09.2025 09:29 👍 9 🔁 4 💬 0 📌 0

Great week at #ESLP2025 in Aix-en-Provence! Huge congrats to our colleagues for their excellent talks on computational models, sound symbolism, and multimodal cognition. Proud of the team and the stimulating discussions!

25.09.2025 10:28 👍 5 🔁 2 💬 0 📌 0

📣 The chapter “Specificity: Metric and Norms”
w/ @mariannabolog.bsky.social is now online & forthcoming in the #ElsevierEncyclopedia of Language & Linguistics
🔍 Theoretical overview, quantification tools, and behavioral evidence on specificity.
👉 Read: urly.it/31c4nm
@abstractionerc.bsky.social

18.09.2025 08:57 👍 6 🔁 4 💬 1 📌 0

The dataset includes over 240K fixations and 150K word-level metrics, with saccade, fixation, and (word) interest area reports. Preprint osf.io/preprints/os..., data osf.io/hx2sj/. Work conducted with @davidecrepaldi.bsky.social and Maria Ktori. (2/2)

22.08.2025 18:49 👍 1 🔁 1 💬 0 📌 0

How can we reduce conceptual clutter in the psychological sciences?

@ruimata.bsky.social and I propose a solution based on a fine-tuned 🤖 LLM (bit.ly/mpnet-pers) and test it for 🎭 personality psychology.

The paper is finally out in @natrevpsych.bsky.social: go.nature.com/4bEaaja

11.03.2025 10:57 👍 52 🔁 29 💬 1 📌 5
Abhilasha Kumar, Beyond Arbitrariness: How a Word's Shape Influences Learning and Memory (YouTube video by Mbs Vector Space Lab)

For those who couldn't attend, the recording of Abhilasha Kumar's seminar on form-meaning interactions in novel word learning and memory search is now available on our YouTube channel!

Watch the full presentation here:
www.youtube.com/watch?v=VJTs...

12.09.2025 11:42 👍 4 🔁 1 💬 0 📌 0

Happy to share that our work on semantic composition is out now -- open access -- in Cerebral Cortex!

With Marco Marelli (@ercbravenewword.bsky.social), @wwgraves.bsky.social & @carloreve.bsky.social.

doi.org/10.1093/cerc...

12.09.2025 09:15 👍 12 🔁 3 💬 0 📌 1

Great presentation by @fabiomarson.bsky.social last Saturday at #AMLAP2025! He shared his latest research using EEG to study how we integrate novel semantic representations, “linguistic chimeras”, from context.

Congratulations on a fascinating talk!

09.09.2025 11:10 👍 4 🔁 1 💬 0 📌 0
The Computational Approach to Morphological Productivity | Harald Baayen at Bicocca (YouTube video by Mbs Vector Space Lab)

For those who couldn't attend, the recording of Prof. Harald Baayen's seminar on morphological productivity and the Discriminative Lexicon Model is now available on our YouTube channel.

Watch the full presentation here:
www.youtube.com/watch?v=zN7G...

09.09.2025 10:45 👍 8 🔁 2 💬 0 📌 0

New seminar announcement!

Exploring form-meaning interactions in novel word learning and memory search
Abhilasha Kumar (Assistant Professor, Bowdoin College)

A fantastic opportunity to delve into how we learn new words and retrieve them from memory.

💻 Join remotely: meet.google.com/pay-qcpv-sbf

27.08.2025 11:06 👍 0 🔁 0 💬 0 📌 0

📢 Upcoming Seminar!

A computational approach to morphological productivity using the Discriminative Lexicon Model
Professor Harald Baayen (University of Tübingen, Germany)

๐Ÿ—“๏ธ September 8, 2025
2:00 PM - 3:30 PM
๐Ÿ“ UniMiB, Room U6-01C, Milan
๐Ÿ”— Join remotely: meet.google.com/dkj-kzmw-vzt

25.08.2025 12:52 👍 4 🔁 3 💬 0 📌 0
hidden state representation during training


Iโ€™d like to share some slides and code for a โ€œMemory Model 101 workshopโ€ I gave recently, which has some minimal examples to illustrate the Rumelhart network & catastrophic interference :)
slides: shorturl.at/q2iKq
code (with colab support!): github.com/qihongl/demo...

26.05.2025 11:56 👍 31 🔁 10 💬 1 📌 0
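For readers new to the topic, catastrophic interference can be demonstrated in a deliberately tiny sketch (my own toy example, not code from the linked workshop): when two conflicting tasks share the same weight, sequential gradient-descent training on task B overwrites what was learned for task A.

```python
# Catastrophic interference in one shared parameter: a linear "network" y = w * x.
def train(w, x, target, lr=0.1, steps=200):
    """Gradient descent on squared error E = 0.5 * (w*x - target)^2."""
    for _ in range(steps):
        y = w * x
        w -= lr * (y - target) * x  # dE/dw
    return w

w = 0.0
w = train(w, x=1.0, target=1.0)        # task A: map 1 -> 1, so w converges to 1
error_A_before = (w * 1.0 - 1.0) ** 2  # near zero: task A learned

w = train(w, x=1.0, target=0.0)        # task B: map 1 -> 0, same shared weight
error_A_after = (w * 1.0 - 1.0) ** 2   # near one: task A forgotten

print(error_A_before < 1e-6)  # True
print(error_A_after > 0.9)    # True
```

Interleaving examples from both tasks (rather than training sequentially) is the standard way to avoid this, which is one motivation for replay-based accounts of memory consolidation.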
ChiWUG: A Graph-based Evaluation Dataset for Chinese Lexical Semantic Change Detection

🎉 We're thrilled to welcome Jing Chen, PhD to our team!
She investigates how meanings are encoded and evolve, combining linguistic and computational approaches.
Her work spans diachronic modeling of lexical change in Mandarin and semantic transparency in LLMs.
🔗 research.polyu.edu.hk/en/publicati...

08.07.2025 10:54 👍 1 🔁 0 💬 0 📌 0
Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English - Psychonomic Bulletin & Review Auditory iconic words display a phonological profile that imitates their referents’ sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this articl...

📢 New paper out! We show that auditory iconicity is not marginal in English: word sounds often resemble real-world sounds. Using neural networks and sound similarity measures, we crack the myth of arbitrariness.
Read more: link.springer.com/article/10.3...

@andreadevarda.bsky.social

04.07.2025 12:16 👍 4 🔁 1 💬 0 📌 0
Conceptual Combination in Large Language Models: Uncovering Implicit Relational Interpretations in Compound Words With Contextualized Word Embeddings Large language models (LLMs) have been proposed as candidate models of human semantics, and as such, they must be able to account for conceptual combination. This work explores the ability of two LLM...

1/n Happy to share a new paper with Calogero Zarbo & Marco Marelli! How well do LLMs represent the implicit meaning of familiar and novel compounds? How do they compare with simpler distributional semantics models (DSMs; i.e., word embeddings)?
doi.org/10.1111/cogs...

19.03.2025 14:09 👍 13 🔁 4 💬 1 📌 0
Words are weird? On the role of lexical ambiguity in language - Gemma Boleda - unimib (YouTube video by Mbs Vector Space Lab)

Here's the video of the seminar for those who missed it. Enjoy!

youtu.be/p2YXb6WHCi4

18.03.2025 10:21 👍 2 🔁 0 💬 0 📌 0
Compositional processing in the recognition of Chinese compounds: Behavioural and computational studies - Psychonomic Bulletin & Review Recent research has shown that the compositional meaning of a compound is routinely constructed by combining meanings of constituents. However, this body of research has focused primarily on Germanic ...

1st post here! Excited to share this work with Marelli & @kathyrastle.bsky.social. We've found that readers "routinely" combine constituent meanings when computing the meanings of Chinese compounds, despite variability in constituent meaning and word structure, even when they're not asked to. See thread 👇 for more details:

10.03.2025 15:36 👍 7 🔁 4 💬 2 📌 0

Link to the seminar: meet.google.com/vwm-hsug-niv
📅 Don’t miss it!

03.03.2025 13:43 👍 0 🔁 0 💬 0 📌 0