The submission deadline for CMCL is coming up in less than a month! (Feb 25) CMCL will be co-located with LREC and take place on May 16! https://sites.google.com/view/cmclworkshop/cfp
Paper accepted to #EACL2026 main conference!
@taniseceron.bsky.social, Sebastian Padó and I test multilingual LLMs before and after English-only fine-tuning and find strong cross-lingual political opinion transfer across five Western languages.
www.arxiv.org/abs/2508.05553
Does it matter how you prompt an LLM with a persona? Do LLMs respond differently to natural conversation history compared to names and explicit mentions? Go check out our new preprint!
Our paper has been accepted to EACL 2026! We systematically evaluate several vision-language models (VLMs) and language-only models, measuring their alignment with brain responses to concept words. Our results show that vision-language models offer a promising tool for modeling human concept processing.
The CfP for CMCL is out! We are looking forward to receiving many interesting submissions! (Deadline: February 25, 2026) sites.google.com/view/cmclwor...
Thank you so much for having me! @milanlp.bsky.social
New main paper out at #EMNLP2025!
We show that personalization of content moderation models can be harmful and perpetuate hate speech, defeating the purpose of the system and hurting the community.
We argue that personalized moderation needs boundaries, and we show how to build them.
Thrilled to be heading to Suzhou with a big team of GroNLP'ers!
Interested in interpretable, cognitively inspired, low-resource LMs? Don't miss our posters & talks at #EMNLP2025!
Next week, I'll be at #EMNLP presenting our work "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization"
Ethics, Bias, and Fairness (Poster Session 2)
Wed, November 5, 11:00-12:30 - Hall C
Check the paper: arxiv.org/abs/2505.16467
See you in Suzhou!
Three-panel figure: in the left panel we use error bars; in the second, we take statistical significance to mean the biggest number, but still have error bars; in LLM science, we just have the biggest number.
What if we did a single run and declared victory
Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!
LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data.
We extend this effort to 45 new languages!
Hi all, there is a postdoc position open in the group I'm currently based in! Let me know if you are interested or have questions. Please share if you know someone who might be interested: www.uu.nl/en/organisat...
Are you interested in a PhD in #NLProc to study and improve how AI models emotions and social signals?
Exciting news: I'm hiring a PhD candidate at LIACS,
@unileiden.bsky.social.
Leiden, The Netherlands
Deadline: 17 Nov 2025
Position details and application link: tinyurl.com/5x5v6zsa
Are you looking for a PhD in #NLProc dealing with #LLMs?
Good news: I am hiring!
The position is part of the "Contested Climate Futures" project. You will focus on developing next-generation AI methods to analyze climate-related concepts in content, including texts, images, and videos.
Q. Who aligns the aligners?
A. alignmentalignment.ai
Today I'm humbled to announce an epoch-defining event: the launch of the Center for the Alignment of AI Alignment Centers.
Interspeech paper title: What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training
Authors: Marianne de Heer Kloots, Hosein Mohebbi, Charlotte Pouw, Gaofei Shen, Willem Zuidema, Martijn Bentum
Do self-supervised speech models learn to encode language-specific linguistic features from their training data, or only more language-general acoustic correlates?
At #Interspeech2025 we presented our new Wav2Vec2-NL model and SSL-NL evaluation dataset to test this!
arxiv.org/abs/2506.00981
Delighted to share that our paper "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization" (joint work with @arianna-bis.bsky.social and Raquel Fernández) got accepted to the main conference of #EMNLP!
Can't wait to discuss our work at #EMNLP2025 in Suzhou this November!
Our paper on multilingual reasoning is accepted to Findings of #EMNLP2025! (OA: 3/3/3.5/4)
We show SOTA LMs struggle with reasoning in non-English languages; prompt hacks & post-training improve alignment but trade off accuracy.
arxiv.org/abs/2505.22888
See you in Suzhou! #EMNLP
What a privilege to have #CCN2025 in (an exceptionally warm and sunny) Amsterdam this year!
It was my first time attending the conference, and being surrounded by so many talented researchers whose interests are similar to mine has been a deeply enriching experience.
Some amazing @amsterdamnlp.bsky.social people in Vienna! #acl2025 Raquel Fernández, Sandro Pezzelle, Katia Shutova, @esamghaleb.bsky.social, @veraneplenbroek.bsky.social, @annabavaresco.bsky.social + @leobertolazzi.bsky.social
#ACL2025
With @ecekt.bsky.social, @alberto-testoni.bsky.social
Monday, July 28, 11:00-12:30, Hall 4/5
See you in Vienna! @aclmeeting.bsky.social
With @michaelwhanna.bsky.social, @akoller.bsky.social, @andre-t-martins.bsky.social, @pmondorf.bsky.social, Vera Neplenbroek, Sandro Pezzelle, @barbaraplank.bsky.social, @davidschlangen.bsky.social, Alessandro Suglia, @akskuchi.bsky.social
2. LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks (Main Conference)
With @annabavaresco.bsky.social, @raffagbernardi.bsky.social, @leobertolazzi.bsky.social, @delliott.bsky.social, Raquel Fernández, Albert Gatt, @esamghaleb.bsky.social, Mario Giulianelli
Happy to share that I will be presenting two papers at ACL 2025.
1. Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation (Findings)
With Vera Neplenbroek, @arianna-bis.bsky.social, Raquel Fernández
Monday, July 28, 18:00-19:30, Hall 4/5
[4/4] We hope to inspire future research into methods that counter the influence of stereotypical associations on the model's latent representation of the user, particularly when the user's demographic group is unknown.
Code and data:
github.com/Veranep/impl...
[3/4] Our findings reveal that LLMs infer demographic information based on stereotypical signals, sometimes even when the user explicitly identifies with a different demographic group. We mitigate this by intervening on the model's internal representations using a trained linear probe.
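The probe-based intervention in [3/4] can be sketched roughly as follows: train a linear probe to decode the group attribute from hidden states, then project that direction out of every state. This is a toy numpy sketch on synthetic "hidden states"; the data, shapes, and least-squares probe are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hidden states: two user groups separated along one
# "stereotype" axis (synthetic data, illustrative only).
d = 16
X = rng.normal(size=(200, d))
y = rng.integers(0, 2, size=200)
X[y == 1, 0] += 2.0  # group signal on axis 0

# Train a linear probe (least-squares) predicting the group label.
Xc = X - X.mean(axis=0)
w, *_ = np.linalg.lstsq(Xc, 2.0 * y - 1.0, rcond=None)
w_unit = w / np.linalg.norm(w)

# Intervention: project the probe direction out of every hidden state,
# erasing the linearly decodable group information.
X_edit = Xc - np.outer(Xc @ w_unit, w_unit)
print(np.allclose(X_edit @ w_unit, 0.0))  # True: the probe reads nothing
```

After the projection the probe's output is identically zero on the edited states, so the group attribute is no longer linearly decodable along that direction.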
[2/4] We systematically explore how LLMs respond to stereotypical cues in controlled synthetic conversations, analyzing the models' latent user representations through both model internals and generated answers to targeted user questions.
Do LLMs assume demographic information based on stereotypes?
We (@arianna-bis.bsky.social, Raquel Fernández and I) answered this question in our new paper: "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization".
arxiv.org/abs/2505.16467