Adam Davies

@adamdaviesnlp

PhD candidate @ UIUC | NLP, interpretability, cognitive science | http://ahdavies6.github.io

36 Followers · 99 Following · 25 Posts · Joined 26.11.2024

Latest posts by Adam Davies @adamdaviesnlp

Special thanks to my fantastic collaborator and primary author Amogh Mannekote for all his great work in making this paper/project happen!

10.10.2025 15:47 👍 0 🔁 0 💬 0 📌 0

We introduce a framework for evaluating (b), finding that popular models do NOT consistently apply their learned world models when simulating social behavior. The upshot: even when models "know" how people might behave in a given situation, they often fail to apply it in actual simulations!

10.10.2025 15:45 👍 0 🔁 0 💬 1 📌 0
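The belief-behavior consistency check this thread describes could be sketched roughly as follows. Everything here is a hypothetical stand-in, not the paper's actual protocol: `ask_model` is a stub for an LLM call, and the canned answers and scenario are purely illustrative.

```python
# Hypothetical sketch of a belief-behavior consistency check.
# `ask_model`, its canned answers, and the scenario are illustrative
# stand-ins, not the paper's actual method.

def ask_model(prompt):
    # Stub: a real implementation would call an LLM API here.
    canned = {
        "belief": "decline the offer",
        "behavior": "accept the offer",
    }
    return canned["belief" if "would a person" in prompt else "behavior"]

def belief_behavior_consistent(scenario):
    # (a) Elicit the model's stated belief about how a person would act.
    belief = ask_model(
        f"In this scenario, what would a person typically do? {scenario}"
    )
    # (b) Have the model role-play that person in the same scenario.
    behavior = ask_model(
        f"You are the person in this scenario. What do you do? {scenario}"
    )
    # Flag instances where simulated behavior diverges from the model's
    # own stated belief.
    return belief == behavior

print(belief_behavior_consistent("A stranger offers to hold your wallet."))  # → False
```

In this toy run the model's stated belief ("decline") and its role-played behavior ("accept") disagree, which is exactly the kind of inconsistency the framework is designed to surface.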

For LLM social simulations to be useful, models must both (a) learn faithful world models re: how various people might realistically behave in different circumstances; and (b) simulate behavior consistent with that world model.

10.10.2025 15:45 👍 0 🔁 0 💬 1 📌 0
Do Role-Playing Agents Practice What They Preach? Belief-Behavior... As large language models (LLMs) are increasingly studied as role-playing agents to generate synthetic data for human behavioral research, ensuring that their outputs remain coherent with their...

With all the attention on "agentic LLM social simulations", how do we know if simulated behaviors are realistic? Come by our poster at the #COLM #SocialSim workshop at noon-1pm to find out! (More details in 🧵, or at openreview.net/forum?id=1BD...)

10.10.2025 15:45 👍 1 🔁 0 💬 1 📌 0

Special thanks to my fantastic collaborators @sewoong-sam-lee.bsky.social, Amogh Mannekote, Marc E. Canby, Julia Hockenmaier, @guohaoli.bsky.social, Kristy Boyer, ChengXiang Zhai, Bonnie J. Dorr, and @frapintoml.bsky.social!

08.10.2025 17:09 👍 1 🔁 1 💬 0 📌 0

Paper 2: Do Role-Playing Agents Practice What They Preach? Belief-Behavior Alignment in LLM-Based Simulations of Human Trust (SocialSim workshop; openreview.net/forum?id=1BD...)

08.10.2025 17:09 👍 1 🔁 0 💬 1 📌 0

Paper 1: Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality (main conference; openreview.net/forum?id=Xhd...)

08.10.2025 17:09 👍 0 🔁 0 💬 1 📌 0

In Montreal at COLM 2025 presenting two papers; DM me if you'd like to chat! Happy to talk all things NLP, interpretability, or cognitive science, and I'm actively looking for Research Scientist roles (graduating May 2026).

08.10.2025 17:09 👍 1 🔁 0 💬 1 📌 0

It was a real pleasure to work with my fantastic collaborators at @oxfordtvg.bsky.social on this project 🤗 already looking forward to our future work in this direction!

#OOD #generalization #LLM #steering #ICML

15.07.2025 07:37 👍 0 🔁 0 💬 0 📌 0
Focus Instruction Tuning (ICML25) Updating LLM instruction tuning with adaptive test-time steerability.

*Come by our poster today to hear more!* 🙉 It's Tue Jul 15 at 11am-1:30pm (East Exhibition Hall A-B #E-2800) 📍 You can also visit our project page at tomalamb.github.io/focus-instru... for more details and links 🔗

15.07.2025 07:37 👍 0 🔁 0 💬 1 📌 0

This forces models to learn both (a) explicit relationships between latent features and task behaviors 🎯🙅↔🛠️ and (b) how to dynamically steer generation based on those relationships 🛞🤖

15.07.2025 07:37 👍 0 🔁 0 💬 1 📌 0

The core idea is to train LLMs to generate different responses to the same task instances by conditioning on "focus"/"ignore" instructions 💡

15.07.2025 07:37 👍 0 🔁 0 💬 1 📌 0
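The focus/ignore conditioning described in this thread could be sketched as paired training examples built from the same task instance. The helper name, instruction templates, feature names, and labels below are all hypothetical illustrations, not the paper's actual data format.

```python
# Hypothetical sketch of building "focus"/"ignore" instruction-tuning pairs.
# Templates, feature names, and labels are illustrative assumptions only.

def make_focus_pair(task_input, feature, focus_label, ignore_label):
    """Return two training examples for the same task instance: one that
    conditions the model to use `feature`, one that conditions it to
    ignore `feature`, each paired with the corresponding target."""
    focus_ex = {
        "instruction": f"Answer the task. Base your answer on the {feature}.",
        "input": task_input,
        "target": focus_label,
    }
    ignore_ex = {
        "instruction": f"Answer the task. Do NOT rely on the {feature}.",
        "input": task_input,
        "target": ignore_label,
    }
    return focus_ex, ignore_ex

# Same review, different targets depending on whether the feature is in focus:
focus_ex, ignore_ex = make_focus_pair(
    task_input="Review: 'Great plot, terrible acting.' Sentiment?",
    feature="mention of acting",
    focus_label="negative",   # judged by the focused feature
    ignore_label="positive",  # judged with that feature ignored
)
```

Because both examples share the same input but differ in instruction and target, the model cannot memorize one response per instance; it has to learn to condition its behavior on the focus/ignore instruction.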

Great news: we developed an approach to improve instruction tuning so that the "how"/steering instructions DO work, and it even generalizes to unseen features and tasks! 🎉

15.07.2025 07:37 👍 0 🔁 0 💬 1 📌 0

This means it's ineffective to simply ask models to focus on the "right" (causal 🎯) features and ignore the "wrong" (spurious/biased 🙅) ones, which can lead to poor generalization and biased behaviors 😬 Wouldn't it be cool if that DID work, though? 🤔

15.07.2025 07:37 👍 0 🔁 0 💬 1 📌 0

Traditional instruction tuning teaches LLMs to perform open-ended tasks given text instructions 💬🤖🛠️ But standard techniques are ineffective for controlling (steering 🛞) HOW models should perform the task

15.07.2025 07:37 👍 0 🔁 0 💬 1 📌 0

📄👋 #ICML2025 paper presentation TODAY (Tue morning): Focus Instruction Tuning, updating LLM instruction tuning with adaptive test-time steerability 🤖🛞

🧵

15.07.2025 07:37 👍 1 🔁 0 💬 1 📌 0

Come by our lightning talk at 3:40pm or our poster session at 4pm to hear more 🙉 (both are located in the East Ballroom A/B). Hope to see you there!

15.12.2024 22:44 👍 0 🔁 0 💬 0 📌 0
Measuring the Reliability of Causal Probing Methods: Tradeoffs... Causal probing aims to analyze foundation models by examining how intervening on their representation of various latent properties impacts their outputs. Recent works have cast doubt on the...

But interpretability methods can sometimes be unreliable 🔬👎 In our second paper (openreview.net/forum?id=tmp...), we define and measure their reliability, finding that concept removal methods are unreliable and counterfactual methods have key tradeoffs between different experimental goals

15.12.2024 22:44 👍 2 🔁 1 💬 1 📌 0
Competence-Based Analysis of Language Models Despite the recent successes of large, pretrained neural language models (LLMs), comparatively little is known about the representations of linguistic structure they learn during pretraining, which...

Models fail to generalize under distribution shift if they rely on spurious features 📉🙅 In CALM (openreview.net/forum?id=x6Z...), we study whether models rely more on spurious or causal features for a range of tasks -- TLDR: they do both, leading to high performance ceilings but low floors!

15.12.2024 22:44 👍 1 🔁 1 💬 1 📌 0

How can we interpret what features LLMs use to perform a given task? 🤖💭 And how do we know if our interpretation is correct? 🤔🔬

Excited to be presenting 2 papers + oral on these questions in the #InterpretableAI workshop at #neurips2024 📢 -- come by our posters/talk to hear more!

15.12.2024 22:44 👍 0 🔁 0 💬 1 📌 0
Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models We introduce IllusionBench, a dataset that challenges current cutting-edge VLMs to decipher shape information when the shape is represented by an arrangement of visual elements in a scene.

Check out our project page arshiahemmat.github.io/illusionbench/

13.12.2024 00:33 👍 0 🔁 0 💬 0 📌 0

Special thanks to my fabulous co-authors Arshia Hemmat, Tom Lamb, @dydyydyyyd.bsky.social, Phil Torr, Ashkan Khakzar, and @frapintoml.bsky.social -- loved working with you all, and can't wait for our next paper! 🚀

13.12.2024 00:23 👍 0 🔁 0 💬 0 📌 0

I'm excited to be presenting our paper -- Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models -- today at NeurIPS (West Ballroom A-D, Poster 5202). Hope to see you there!

13.12.2024 00:23 👍 0 🔁 0 💬 1 📌 0

Shape perception is fundamental to human vision 👁️🔷 but years of research on shape vs texture bias has relied on benchmarks that are simplistic relative to today's best VLMs 🤖🧠 It's time for a new dataset generated with methods as powerful as the models we're testing! 🦾

13.12.2024 00:23 👍 0 🔁 0 💬 1 📌 0

Introducing 🪄 IllusionBench 🎩 our multimodal shape recognition benchmark at #NeurIPS2024

🎯 Can vision-language models recognize these shapes? (❌ nope!)

13.12.2024 00:23 👍 5 🔁 0 💬 2 📌 0