Conceptual Buccaneering
Thanks! We have a fully updated review paper on this forthcoming in Philosophy Compass (it should be preprinted soon), and a book in preparation that goes into more detail and could be used as a textbook for that kind of course.
New work by my former PhD student, Boyang Li
His team produced 500 stories of fewer than 100 words each. LLMs were basically at chance level when answering binary questions about the stories.
arxiv.org/abs/2601.12410
Very glad to see this out! Great paper
now accepted at ICLR! 🐺🥳🐺
arxiv.org/abs/2506.20666
The main takeaway for me is that structural information in language is far more constraining than intuition suggests. That's very interesting (and I agree that parrot metaphors are misleading) but it seems like a claim about language more than intelligence. 3/3
The LLM has to do something like schema-conditioned infilling: produce a high-probability member of the equivalence class consistent with those constraints. So I'm not sure how unexpected the results are? That's roughly what I'd expect from matching to structurally similar training passages. 2/3
Very cool idea! Some quick thoughts. It looks like the corruption preserves a lot of information (function words, morphology, word order, punctuation, numbers, register), which would strongly constrain the posterior over plausible discourse frames, as it were. 1/3
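To make that concrete, here's a toy sketch (my own illustration, not the paper's actual corruption procedure) of how masking content words while keeping function words, numbers, punctuation, and word order leaves a lot of recoverable structure behind:

```python
# Toy illustration only: replace content words with same-length placeholders
# while keeping function words, numbers, punctuation, and word order intact.
import re

FUNCTION_WORDS = {
    "the", "a", "an", "of", "to", "in", "on", "and", "but", "that",
    "is", "was", "were", "it", "he", "she", "they", "for", "with", "as",
}

def corrupt(sentence: str) -> str:
    tokens = re.findall(r"\w+|[^\w\s]", sentence)
    out = []
    for tok in tokens:
        if tok.lower() in FUNCTION_WORDS or tok.isdigit() or not tok.isalpha():
            out.append(tok)              # keep structural material as-is
        else:
            out.append("X" * len(tok))   # mask content words
    return " ".join(out)

print(corrupt("The merchant sold 12 horses to the farmer in October."))
# -> "The XXXXXXXX XXXX 12 XXXXXX to the XXXXXX in XXXXXXX ."
```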
With @jesusoxford.bsky.social we are looking for a Professor of Statistics.
Become part of a historic institution and a community focused on academic excellence, innovative thinking, and significant practical application.
About the role: tinyurl.com/b8uy6mr5
Deadline: 15 September
I'm happy to share that I'll be joining Oxford this fall as an associate professor, a fellow of @jesusoxford.bsky.social, and an affiliate of the Institute for Ethics in AI. I'll also begin my AI2050 Fellowship from @schmidtsciences.bsky.social there. Looking forward to getting started!
Thanks Ali! We'll (hopefully soon) publish a Philosophy Compass review and, for a longer read, a Cambridge Elements book; both are spiritual successors to these preprints and up to date with recent technical and philosophical developments.
There's a lot more in the full paper – here's the open access link:
sciencedirect.com/science/arti...
Special thanks to @taylorwwebb.bsky.social and @melaniemitchell.bsky.social for comments on previous versions of the paper!
9/9
This opens interesting avenues for future work. By using causal intervention methods with open-weights models, we can start to reverse-engineer these emergent analogical abilities and compare the discovered mechanisms to computational models of analogical reasoning. 8/9
But models also showed different sensitivities than humans. For example, top LLMs were more affected by permuting the order of examples and were more distracted by irrelevant semantic information, hinting at different underlying mechanisms. 7/9
We found that the best-performing LLMs match human performance across many of our challenging new tasks. This provides evidence that sophisticated analogical reasoning can emerge from domain-general learning, where existing computational models fall short. 6/9
In our second study, we highlighted the role of semantic content. Here, the task required identifying specific properties of concepts (e.g., "Is it a mammal?", "How many legs does it have?") and mapping them to features of the symbol strings. 5/9
In our first study, we tested whether LLMs could map semantic relationships between concepts to symbolic patterns. We included controls such as permuting the order of examples or adding semantic distractors to test for robustness and content effects (see full list below). 4/9
We tested humans & LLMs on analogical reasoning tasks that involve flexible re-representation. We strived to apply best practices from cognitive science – designing novel tasks to avoid data contamination, including careful controls, and doing proper statistical analysis. 3/9
We focus on an important feature of analogical reasoning often called "re-representation" – the ability to dynamically select which features of analogs matter to make sense of the analogy (e.g. if one analog is "horse", which properties of horses does the analogy rely on?). 2/9
Can LLMs reason by analogy like humans? We investigate this question in a new paper published in the Journal of Memory and Language (link below). This was a long-running but very rewarding project. Here are a few thoughts on our methodology and main findings. 1/9
I'm glad this can be useful! And I totally agree regarding QKV vectors – focusing on information movement across token positions is way more intuitive. I had to simplify things quite a bit, but hopefully the video animation is helpful too.
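For anyone who wants the bare-bones version of that framing, here's a minimal single-head attention sketch in NumPy (my own toy code, not taken from the entry): the attention weights decide how much of each position's value vector gets moved to each other position.

```python
# Minimal single-head self-attention: queries are compared to keys across
# positions, and the resulting weights mix value vectors across positions.
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv                  # project into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (seq_len, seq_len) position-to-position scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over source positions
    return weights @ V                                # move value information across positions

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                          # 5 tokens, d_model = 16
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)            # (5, 8)
```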
See also @melaniemitchell.bsky.social's excellent entry on Large Language Models:
oecs.mit.edu/pub/zp5n8ivs/
I wrote an entry on Transformers for the Open Encyclopedia of Cognitive Science (@oecs-bot.bsky.social). I had to work with a tight word limit, but I hope it's useful as a short introduction for students and researchers who don't work on machine learning:
oecs.mit.edu/pub/ppxhxe2b
Happy to share this updated Stanford Encyclopedia of Philosophy entry on 'Associationist Theories of Thought' with @ericman.bsky.social. Among other things, we included a new major section on reinforcement learning. Many thanks to Eric for bringing me on board!
plato.stanford.edu/entries/asso...
The sycophantic tone of ChatGPT always sounded familiar, and then I recognized where I'd heard it before: author response letters to reviewer comments.
"You're exactly right, that's a great point!"
"Thank you so much for this insight!"
Also how it always agrees even when it contradicts itself.
The paper is available in open access. There's a lot more in it, including a discussion of how social engineering attacks on humans relate to the exploitation of normative conflicts in LLMs, and some examples of "thought injection attacks" on RLMs. 13/13
link.springer.com/article/10.1...
In sum: the vulnerability of LLMs to adversarial attacks partly stems from shallow alignment that fails to handle normative conflicts. New methods like @OpenAI's “deliberative alignment” seem promising on paper, but still far from fully effective on jailbreak benchmarks. 12/13
I'm not convinced that the solution is a “scoping” approach to capabilities that seeks to remove information from the training data or model weights; we also need to augment models with a robust capacity for normative deliberation, even for out-of-distribution conflicts. 11/13
This has serious implications as models become more capable in high-stakes domains. LLMs are arguably already capable of causing real harm. Even if the probability of success of a single attack is negligible, success becomes almost inevitable with enough attempts. 10/13
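A quick back-of-the-envelope sketch of that last point (illustrative numbers only, assuming independent attempts, not figures from the paper):

```python
# Even a tiny per-attempt success probability compounds quickly over
# repeated independent attempts: P(at least one success) = 1 - (1 - p)^n.
p = 0.001                                   # assumed chance a single attack succeeds
for n in (100, 1_000, 10_000):
    print(n, round(1 - (1 - p) ** n, 3))    # 100 -> 0.095, 1000 -> 0.632, 10000 -> 1.0
```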
Example of a "thought injection attack" on Deepseek R1, asking for a violent tirade against philosophers (note that the attack method also works on much more serious examples of harmful speech). This shows the reasoning trace before the actual answer.
Example of a "thought injection attack" on Deepseek R1, asking for a violent tirade against philosophers (note that the attack method also works on much more serious examples of harmful speech). This shows the actual answer after the reasoning trace.
For example, an RLM asked to generate a hateful tirade may conclude in its reasoning trace that it should refuse; but if the prompt instructs it to assess each hateful sentence within its thinking process, it will often leak the full harmful content! (see example below) 9/13