
@a-krishnan

Master's student at Saarland University

45
Followers
68
Following
8
Posts
28.11.2024
Joined

Latest posts by @a-krishnan

Join me and @mariusmosbach.bsky.social to chat about our work on frequency effects in unlearning — and how @ai2.bsky.social's Olmo helped us gain key insights.

💬 AMA: Tue, Oct 28 — 8:00 PT / 16:00 CEST
💡 Bring your questions!
🔗 discord.gg/ai2

26.10.2025 16:12 👍 2 🔁 0 💬 0 📌 0

We're presenting “Not all data are unlearned equally” at #COLM2025!

We show that data properties shape how LLMs forget — stop by to chat more!

🗓 Wednesday, Oct 8
🕓 4:30–6:30 pm
📍 poster #710 (session 4)

paper: arxiv.org/abs/2504.05058
Work with @mariusmosbach.bsky.social @sivareddyg.bsky.social

05.10.2025 15:55 👍 0 🔁 0 💬 0 📌 0

very happy to see the trend of a Behind the Scenes section catching on! transparent & honest science 👌

love the detailed Montreal spots mentioned

consider including such a section in your next appendix!

(paper by @a-krishnan.bsky.social arxiv.org/pdf/2504.050...)

13.08.2025 12:19 👍 8 🔁 1 💬 1 📌 1

Our new paper in #PNAS (bit.ly/4fcWfma) presents a surprising finding — when words change meaning, older speakers rapidly adopt the new usage; inter-generational differences are often minor.

w/ Michelle Yang, @sivareddyg.bsky.social, @msonderegger.bsky.social and @dallascard.bsky.social 👇 (1/12)

29.07.2025 12:05 👍 34 🔁 17 💬 3 📌 2
Preview
Announcements Keynote Speaker Announcement 🔊 30.07.2025 We are delighted to announce the keynote speech that will happen at the special session! Speaker: Prof. Karen Livescu, Toyota Technological Institute at Ch...

📢 #SpeechTech & #SpeechScience researchers!
We are thrilled to announce that Prof. Karen Livescu will keynote our Special Session on Interpretable Audio and Speech Models at #Interspeech2025:
"What can interpretability do for us (and what can it not)?"
🗓️ Aug 18, 11:00
@interspeech.bsky.social

30.07.2025 18:25 👍 3 🔁 1 💬 0 📌 1

Cool work! See you @interspeech.bsky.social 😀📊

27.05.2025 13:55 👍 1 🔁 0 💬 0 📌 0
Preview
On the reliability of feature attribution methods for speech classification As the capabilities of large-scale pre-trained models evolve, understanding the determinants of their outputs becomes more important. Feature attribution aims to reveal which parts of the input elemen...

I am excited to announce that my paper "On the reliability of feature attribution methods for speech classification" has been accepted to #Interspeech2025!
Co-authors: @hmohebbi.bsky.social, Arianna Bisazza, Afra Alishahi, @grzegorz.chrupala.me
Find the preprint here: arxiv.org/abs/2505.16406

26.05.2025 08:21 👍 10 🔁 2 💬 1 📌 1
Title slide: Processing Trans Languaging - Vagrant Gautam (they/xe), Saarland University, with a very brightly patterned background featuring colourful people and math symbols.

Come to my keynote tomorrow at the first official @queerinai.com workshop at #NAACL2025 to hear about how trans languaging is complex and cool, and how this makes it extra difficult to process computationally. I will have SO many juicy examples!

03.05.2025 20:52 👍 44 🔁 14 💬 3 📌 0

Chain-of-Thought (CoT) reasoning lets LLMs solve complex tasks, but long CoTs are expensive. How short can they be while still working? Our new ICML paper tackles this foundational question.

05.05.2025 12:25 👍 12 🔁 2 💬 2 📌 0

A must-read for anyone in NLP right now

01.05.2025 16:00 👍 6 🔁 1 💬 1 📌 0

Congratulations to Mila members @adadtur.bsky.social, Gaurav Kamath and @sivareddyg.bsky.social for their SAC award at NAACL! Check out Ada's talk in Session I: Oral/Poster 6. Paper: arxiv.org/abs/2502.05670

01.05.2025 14:30 👍 13 🔁 7 💬 0 📌 3

Incredibly proud of my students @adadtur.bsky.social and Gaurav Kamath for winning a SAC award at #NAACL2025 for their work on assessing how LLMs model constituent shifts.

01.05.2025 15:11 👍 17 🔁 5 💬 1 📌 0

💡 New ICLR paper! 💡
"On Linear Representations and Pretraining Data Frequency in Language Models":

We provide an explanation for when & why linear representations form in large (or small) language models.

Led by @jackmerullo.bsky.social, w/ @nlpnoah.bsky.social & @sarah-nlp.bsky.social

25.04.2025 01:55 👍 42 🔁 12 💬 3 📌 3

The intern (after loads of feedback) 😜

17.04.2025 12:40 👍 0 🔁 0 💬 0 📌 0

DeepSeek-R1 Thoughtology: Let’s <think> about LLM reasoning

142-page report diving into the reasoning chains of R1. It spans 9 unique axes: safety, world modeling, faithfulness, long context, etc.

Now on arxiv: arxiv.org/abs/2504.07128

12.04.2025 16:11 👍 6 🔁 1 💬 1 📌 0

AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories

We are releasing the first benchmark to measure how well automatic evaluators, such as LLM judges, assess web agent trajectories.

15.04.2025 19:10 👍 7 🔁 4 💬 1 📌 1

Check out Benno's notes on our paper about the impact of interpretability 👇.

Also, we are organizing a workshop at #ICML2025 which is inspired by some of the questions discussed in the paper: actionable-interpretability.github.io

15.04.2025 23:11 👍 11 🔁 3 💬 0 📌 0
Diagram illustrating a hypothesis about knowledge unlearning in language models. The left side shows a training corpus with varying frequencies of facts, such as 'Montreal is a city in Quebec' (high frequency) and 'Atlantis is a city in the ocean' (lower frequency). The center shows a language model being trained on this data, then undergoing unlearning. The right side demonstrates the 'Forget Quality' results, where the model more effectively unlearns the less frequent fact ('Atlantis is in Greece') while retaining the more frequent knowledge. Labels A, B, and C mark key points in the hypothesis: A (frequency variations in training data), B (influence of frequency), and C (unlearning effectiveness).

Check out our new paper on unlearning for LLMs 🤖. We show that *not all data are unlearned equally* and argue that future work on LLM unlearning should take properties of the data to be unlearned into account. This work was led by my intern @a-krishnan.bsky.social
🔗: arxiv.org/abs/2504.05058

09.04.2025 13:30 👍 33 🔁 5 💬 1 📌 2

📢 Excited to announce our upcoming workshop - Vision Language Models For All: Building Geo-Diverse and Culturally Aware Vision-Language Models (VLMs-4-All) @CVPR 2025!
🌐 sites.google.com/view/vlms4all

14.03.2025 15:55 👍 17 🔁 11 💬 1 📌 4

Agents like OpenAI Operator can solve complex computer tasks, but what happens when people use them to cause harm, e.g. to spread misinformation?

To find out, we introduce SafeArena (safearena.github.io), a benchmark to assess the capabilities of web agents to complete harmful web tasks. A thread 👇

10.03.2025 17:45 👍 17 🔁 7 💬 1 📌 5
Preview
Home Introduction Audio and speech technology has recently achieved unprecedented success in real-world applications, driven primarily by self-supervised pre-training of large neural networks on massive da...

📅 Submit via the official portal!

🔗 More about the session: sites.google.com/view/intersp...

01.02.2025 09:28 👍 0 🔁 0 💬 0 📌 0

📢 #SpeechTech & #SpeechScience researchers!

⏳ Reminder: The #Interspeech2025 deadline is approaching! 🚀 If your work focuses on interpretability in speech & audio, submit through our Special Session and showcase your research! 🎤

#Interpretability @interspeech.bsky.social

01.02.2025 09:28 👍 1 🔁 0 💬 1 📌 0

Hello, could I be added to the list?

06.12.2024 20:56 👍 1 🔁 0 💬 0 📌 0