Improving LLM Personas via Rationalization with Psychological Scaffolds
Language models prompted with a user description or persona can predict a user's preferences and opinions, but existing approaches to building personas -- based solely on a user's demographic attribut...
So fun to collaborate with amazing folks at USC (Xiang Ren, @swabhs.bsky.social) and Apple (Rik Koncel-Kedziorski, Tim Paek) on this work!
Excited to finally share this -- let us know what you think!
More details in our draft here: arxiv.org/abs/2504.17993
(8/n)
29.04.2025 01:05
PB&J improvements do not hinge on any particular group -- we see gains across education, race, income, and gender.
(7/n)
29.04.2025 01:05
It's not just about having more reasoning tokens -- it's about what those tokens convey.
We found that *human-written rationales* improved personas the most, but LLM-generated ones came close, showing that good scaffolds are very useful for personas.
(6/n)
29.04.2025 01:05
We tested PB&J on two tasks: predicting opinions and movie ratings. PB&J consistently outperformed methods based on demographics and/or judgments. Using scaffolds boosted performance even more!
(5/n)
29.04.2025 01:05
We propose psychological scaffolds -- frameworks based on life experiences, personality traits, or core beliefs -- to further guide the structure of LLM rationales.
(4/n)
29.04.2025 01:05
We introduce PB&J (Psychology of Behaviour and Judgments), which draws from folk psychology -- the way people naturally explain each other's actions. PB&J adds LLM-generated rationales to personas, helping models better represent user behavior.
(3/n)
29.04.2025 01:05
LLMs can mimic human behavior with personas, but most current approaches focus only on demographics or past responses as user context, often overlooking the reasons why users think or act a certain way.
(2/n)
29.04.2025 01:05
Reasoning about the "why" behind user behavior can improve LLM personas!
Excited to share our new work: Improving LLM Personas via Rationalization with Psychological Scaffolds
arxiv.org/abs/2504.17993
(1/n)
29.04.2025 01:05
omggg it's amazing, combined together gave me queen vibes!!
21.11.2024 22:30
UGH ADORABLE
17.11.2024 22:53
omg this can be 3D printed at the makerspace!!
17.11.2024 21:09
How I Use "AI"
I don't think that AI models (by which I mean: large language models) are over-hyped. In this post I will list 50 ways I've used them.
These days, I've been obsessed with reading about how people use LLMs, and this blog by Nicholas Carlini has been my TOP read since August - nicholas.carlini.com/writing/2024...
Not a lot of us who work with LLMs "use" them, and this is such a great starter to all of that!
17.11.2024 19:59
@justachetan.bsky.social!!!
17.11.2024 18:57
Netflix needs to stop asking if I'm still watching and start asking if I moved the laundry to the dryer
15.11.2024 13:00
#EMNLP2024 has been a treat so far! Of course it doesn't hurt to win an outstanding paper award with my incredible PhD students Jaspreet Ranjit and @brihi.bsky.social and our wonderful collaborators at USC.
Paper: dill-lab.github.io/oath-frames/
14.11.2024 23:31