
Yamil Ricardo Velez

@yamilrvelez

political scientist at Columbia | MIA ✈️ NYC | tailored surveys and experiments

2,400 Followers · 1,097 Following · 117 Posts · Joined 29.08.2023

Latest posts by Yamil Ricardo Velez @yamilrvelez

Hi Andy, that's a good catch! Agree that it might be confusing as currently written. I don’t mean measurement in the “measurement error” sense, but more that we might want to consider measuring relevant beliefs before developing interventions.

09.03.2026 21:37 👍 0 🔁 0 💬 0 📌 0

You’re too kind, Damien. Speaking as someone whose musical career never really got off the ground, being called an academic rockstar is the highest form of praise.

09.03.2026 19:01 👍 1 🔁 0 💬 1 📌 0

Thanks, David!

09.03.2026 18:48 👍 1 🔁 0 💬 0 📌 0

Congratulations to @yamilrvelez.bsky.social, @patrickpliu.bsky.social, and @scottclifford.bsky.social!

we think attitudes are some function of beliefs; our exps routinely move beliefs but not (even correlated) attitudes. This team found a way to guess which beliefs matter more (and they do!)

09.03.2026 16:37 👍 23 🔁 6 💬 1 📌 0

Thank you! We have several measures of attitudes, so here I mean that the effects on these measures are more consistently observed for the focal vs. distal counterargument.

09.03.2026 16:07 👍 0 🔁 0 💬 0 📌 0

Link: www.dropbox.com/scl/fi/vw8rq...

09.03.2026 14:15 👍 0 🔁 0 💬 0 📌 0

These findings speak to the common null effects of informational interventions on attitudes, suggesting such nulls may partly reflect a measurement problem, not a motivational one.

09.03.2026 14:15 👍 1 🔁 0 💬 2 📌 0

The design embeds LLMs into surveys to recover each person’s own justifications, then generates tailored counterarguments, highlighting how such experiments can assess thorny mechanisms that traditional designs may struggle to capture.

09.03.2026 14:15 👍 0 🔁 0 💬 1 📌 0

We used AI to identify the specific beliefs people say justify their views, then randomly targeted those “focal” beliefs vs. peripheral ones. Both shift beliefs equally, but focal belief counterarguments produce more consistent attitude change within and across surveys.

09.03.2026 14:15 👍 0 🔁 0 💬 1 📌 0
Post image

Conditionally accepted at the APSR (w/ @scottclifford.bsky.social & @patrickpliu.bsky.social):

Why does political information so often change beliefs but NOT attitudes? We highlight the role of belief relevance, or the extent to which beliefs bear on attitudes.

09.03.2026 14:15 👍 112 🔁 32 💬 4 📌 3

Science communication has never been more important.

In this animation, @yamilrvelez.bsky.social, Donald Green and I break down our research exploring whether AI chatbots can increase political engagement among young, politically unaligned voters.

Link to animation: www.youtube.com/watch?v=iCuX...

03.03.2026 15:53 👍 6 🔁 2 💬 0 📌 0

Appreciate the shoutout! I ran an AI image discernment experiment a couple of years ago, but haven’t used images as treatments yet. Would love to see what others have done!

07.02.2026 01:41 👍 2 🔁 0 💬 0 📌 0

Thanks, Dan!

19.12.2025 02:23 👍 2 🔁 0 💬 1 📌 0

People feeling hopeless about AI/LLMs spoiling all online polling and survey research should take a look at what @yamilrvelez.bsky.social is doing with “Pulse” — a tool to detect proof of life that’s compatible with Qualtrics

19.12.2025 01:55 👍 17 🔁 5 💬 2 📌 0
Post image

An AI Voter bot improves knowledge about politics
But, the AI bot has weak effects on downstream outcomes like vote preferences and party evaluations among respondents whose primary issue position aligns closely with one of the parties.
Partisan action is hard to change.
www.pnas.org/doi/10.1073/...

17.12.2025 19:34 👍 4 🔁 4 💬 0 📌 0

Though LLMs as persuasion tools are (rightly) getting attention, their most valuable civic use may be retrieval: grounded answers with links to the source material using language that is accessible to users. Positive use cases aren’t dominating the discourse, but that’s where I think the upside is.

12.12.2025 14:28 👍 4 🔁 0 💬 0 📌 0

In ongoing work w/ Alec Ewig, we explore how RAG can enhance measures of representation by retrieving relevant legislation from a 15,000+ bill database mapped to voter preferences. RAG + agentic workflows can uncover info buried in dense policy docs and answer complex queries with citations.

12.12.2025 14:28 👍 1 🔁 0 💬 1 📌 0
Post image

We used retrieval-augmented generation (RAG), a method that pulls relevant text from curated sources directly into prompts. After a vector database, multiple API calls, and a lot of trial and error, the approach substantially reduced errors and preserved issue-specific language from party platforms.

12.12.2025 14:28 👍 3 🔁 0 💬 1 📌 0
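A minimal sketch of the retrieve-then-prompt pattern the thread describes. All names and excerpts here are hypothetical, and token-overlap (Jaccard) similarity stands in for the embedding-based vector database the authors actually used:

```python
# Minimal retrieval-augmented prompt construction (illustrative only).
# A production system would embed documents, store them in a vector
# database, and rank by embedding similarity; token overlap is a stand-in.

def _tokens(text: str) -> set[str]:
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip(".,?!:;").lower() for w in text.split()} - {""}

def score(query: str, doc: str) -> float:
    """Jaccard similarity between query and document token sets."""
    q, d = _tokens(query), _tokens(doc)
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most similar documents to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Inject retrieved excerpts into the prompt so answers stay grounded."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the excerpts below; quote their wording.\n"
        f"Excerpts:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical platform excerpts (not from the actual bill database).
platform_excerpts = [
    "Party A platform: expand renewable energy subsidies.",
    "Party B platform: reduce corporate tax rates.",
    "Party A platform: increase the minimum wage.",
]
print(build_prompt("What does Party A say about energy?", platform_excerpts))
```

Constraining the model to quoted excerpts is what curbs hallucinated policy stances and preserves the platforms’ own issue-specific language.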

When we started this project, standard LLMs weren't up to the task. They produced plausible-sounding answers but regularly hallucinated policy stances, especially on niche or novel issues. For a voting advice application, accuracy was critical, so we had to build a more involved approach.

12.12.2025 14:28 👍 2 🔁 0 💬 1 📌 0

This paper was a blast to work on. The challenge: present party positions across many issues, in real time, using language voters actually use. 🧵 on why we went with a more involved retrieval-based approach and where I think these tools are headed.

12.12.2025 14:28 👍 18 🔁 6 💬 1 📌 1

Congrats, Damien!

10.12.2025 12:50 👍 2 🔁 0 💬 0 📌 0
Post image

🚨Excited to share our new paper published in PNAS (joint with @yamilrvelez.bsky.social and Don Green)! AI can enhance political knowledge and provide balanced information about politics with proper guardrails and vetted sources (e.g., party platforms).

www.pnas.org/doi/full/10....

08.12.2025 21:40 👍 35 🔁 8 💬 2 📌 2

I have a 2024 pre-election Verasight survey I carried out with Alec Ewig that used an adaptive survey method to create a kind of dynamic CES. Happy to chat offline if you think it would be useful!

08.12.2025 20:13 👍 0 🔁 0 💬 0 📌 0

Looking for a tl;dr for these two excellent papers? I've got you covered: www.science.org/doi/10.1126/...

04.12.2025 22:21 👍 7 🔁 2 💬 1 📌 0
Post image Post image

🚨 New in Nature+Science! 🚨
AI chatbots can shift voter attitudes on candidates & policies, often by 10+ pp
🔹 Experiments in the US, Canada, Poland & the UK
🔹 More “facts” → more persuasion (not psych tricks)
🔹 Increasing persuasiveness reduces “fact” accuracy
🔹 Right-leaning bots = more inaccurate

04.12.2025 20:42 👍 167 🔁 70 💬 2 📌 3

All of this is to say that I hope I’m also invited to the party, not only because I care about identifying causal effects, but because I also care about measuring theoretical constructs with a level of precision that quasi-experimental and field experimental designs simply can’t deliver.

04.12.2025 14:10 👍 6 🔁 0 💬 0 📌 0

As we increasingly interact with GUIs via LLMs, targeted ads, and social media, survey experiments are *even more* fit for the task of understanding human behavior, and I hope we continue relying on them to sort through thorny causal questions.

04.12.2025 14:10 👍 5 🔁 0 💬 1 📌 0

That’s a useful distinction. Another way to put it is whether a treatment *intervenes* on an outcome. On its face, your standard persuasion experiment looks a lot like “do you prefer [Joe/Jose] for this job?,” but the latter design isn’t trying to shift Y. It’s a measurement exercise.

04.12.2025 14:10 👍 2 🔁 0 💬 1 📌 0
03.12.2025 13:45 👍 2 🔁 0 💬 0 📌 0

It's likely not a problem *yet*

24.11.2025 16:41 👍 2 🔁 0 💬 0 📌 0