
Peter Damgaard

@peterdamgaard

Assistant professor (TT). Department of Political Science and Public Management. University of Southern Denmark. Website: https://peterdamgaard.com

523 Followers · 442 Following · 14 Posts · Joined 22.09.2023

Latest posts by Peter Damgaard @peterdamgaard

Fascinating paper on supply-side factors of negativity bias, misinformation, and misperceptions. Also interesting measurements of the accuracy of reporting and the negativity bias of specific media orgs (e.g. CNN, Fox News).

10.03.2026 18:47 👍 1 🔁 0 💬 0 📌 0

Conditionally accepted at the APSR (w/ @scottclifford.bsky.social & @patrickpliu.bsky.social):

Why does political information so often change beliefs but NOT attitudes? We highlight the role of belief relevance, or the extent to which beliefs bear on attitudes.

09.03.2026 14:15 👍 112 🔁 32 💬 4 📌 3

When you collect data online, are the results from humans or AI? In a project led by Booth PhD student Grace Zhang, we estimate the prevalence of AI agents on commonly used survey platforms:
osf.io/preprints/ps...
🧡

07.03.2026 20:22 👍 108 🔁 50 💬 4 📌 4

I'm also curious about this!

02.03.2026 14:19 👍 0 🔁 0 💬 0 📌 0
Screenshot of Claude writing a design, no trouble

Writing simulations in DeclareDesign just went from "I should do that, but it's kind of a lot of work" to extremely easy
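DeclareDesign is an R package, so the following is only a rough, language-agnostic sketch of the kind of simulation loop it automates (all numbers here are hypothetical, not from the post): declare a design, simulate it many times, and diagnose its power.

```python
import math
import random
import statistics

def simulated_power(n_per_arm=100, effect=0.4, sims=400, seed=1):
    """Monte Carlo power for a two-arm experiment: simulate the design many
    times and count how often a difference-in-means z-test rejects at 5%."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        control = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, 1) for _ in range(n_per_arm)]
        diff = statistics.fmean(treated) - statistics.fmean(control)
        se = math.sqrt(statistics.variance(treated) / n_per_arm
                       + statistics.variance(control) / n_per_arm)
        rejections += abs(diff / se) > 1.96
    return rejections / sims
```

With these made-up parameters the design lands in the neighborhood of 80% power; the point is that the declare-simulate-diagnose loop is a short function, not a big project.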

27.02.2026 16:34 👍 61 🔁 10 💬 4 📌 2
Creating actually publication-ready figures for journals using `ggplot2`: A practical guide to creating publication-ready figures in R using ggplot2, covering journal dimension requirements, custom themes, updated geom defaults, and SVG export, with minimal manual adjustment...

#rstats Here's a useful guide to creating publication-ready #ggplot figures to journal specifications, which is often quite fiddly.

jaquent.github.io/2026/02/crea...

24.02.2026 13:47 👍 76 🔁 23 💬 3 📌 3
Expanding Paternity Leave: Effects on Beliefs, Norms, and Gender Gaps. Founded in 1920, the NBER is a private, non-profit, non-partisan organization dedicated to conducting economic research and to disseminating research findings among academics, public policy makers, an...

Interesting paper on the effect of a paternity leave reform on gender beliefs and norms. #WinningHeartsAndMinds
www.nber.org/papers/w34862

24.02.2026 12:25 👍 5 🔁 3 💬 0 📌 0

Russia, Venezuela, Iran, China, the Sahel region, the United States ...

Want to know why state agents carry out brutal repression, or participate in illegal coups?

Our new book "Making a Career in Dictatorship" provides answers; it just got published by @academic.oup.com:

tinyurl.com/ystwm3tf

16.02.2026 11:09 👍 135 🔁 74 💬 13 📌 10

Paper on statistical power necessary for interaction effects
doi.org/10.1177/2515...
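The paper's specifics aren't in the post, but the core intuition is easy to simulate: in a 2x2 factorial, the interaction contrast has twice the standard error of an averaged main-effect contrast, so a same-sized interaction is detected far less often. A sketch under hypothetical effect sizes (unit error variance treated as known, for simplicity):

```python
import math
import random
import statistics

def power_2x2(beta_main=0.3, beta_int=0.3, n_per_cell=200, sims=300, seed=7):
    """Monte Carlo rejection rates for a main effect vs. a same-sized
    interaction in a 2x2 factorial, estimated from cell means."""
    rng = random.Random(seed)

    def cell_mean(mu):
        return statistics.fmean(rng.gauss(mu, 1) for _ in range(n_per_cell))

    se_main = math.sqrt(1 / n_per_cell)  # averaged main-effect contrast
    se_int = math.sqrt(4 / n_per_cell)   # diff-in-diff contrast: 2x the SE
    hit_main = hit_int = 0
    for _ in range(sims):
        m00, m10 = cell_mean(0), cell_mean(beta_main)
        m01, m11 = cell_mean(0), cell_mean(beta_main + beta_int)
        main = (m10 + m11) / 2 - (m00 + m01) / 2
        inter = (m11 - m10) - (m01 - m00)
        hit_main += abs(main / se_main) > 1.96
        hit_int += abs(inter / se_int) > 1.96
    return hit_main / sims, hit_int / sims
```

Under this setup the main effect is detected essentially always while the same-sized interaction is detected only around half the time, which is why interaction studies need much larger samples.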

20.02.2026 09:17 👍 154 🔁 60 💬 4 📌 8

"Acceptable or Not? Understanding Attitudes Toward Citizens' Discrimination Against Frontline Workers" by @halling.bsky.social, @mathildececchini.bsky.social, & Benedicte Gronhoj shows that language-based requests are viewed as more acceptable than religious ones.

doi.org/10.1111/pa...

17.12.2025 17:26 👍 7 🔁 6 💬 0 📌 0
dplyr 1.2.0 fills in some important gaps in dplyr's API: we've added a new complement to `filter()` focused on dropping rows, and we've expanded the `case_when()` family with three new recoding and re...

dplyr 1.2.0 is out now and we are SO excited!

- `filter_out()` for dropping rows

- `recode_values()`, `replace_values()`, and `replace_when()` that join `case_when()` as a complete family of recoding/replacing tools

These are huge quality-of-life wins for #rstats!

tidyverse.org/blog/2026/02...

04.02.2026 11:39 👍 466 🔁 133 💬 12 📌 14

Is meat consumption becoming political? @willemboterman.bsky.social & @eelcoharteveld.bsky.social examine Dutch surveys showing meat eating aligns with right-wing ideology & climate scepticism. Read OPEN ACCESS: buff.ly/HU5Mdec

@polstudiesassoc.bsky.social #polsky #polsci #FoodPolitics

26.01.2026 18:00 👍 6 🔁 7 💬 0 📌 0

We’re organizing a workshop at Aarhus University. Please share and consider submitting!

πŸ—“οΈ 13–14 April 2026 | πŸ“ Deadline: Mon, 16 Feb 2026 (extended abstract) β€” junior scholars prioritized

🎀 Keynotes: @stefwalter.bsky.social (Univ. of Zurich) & @hhuang.bsky.social (Ohio State)

15.01.2026 13:51 👍 42 🔁 30 💬 0 📌 3

Survey experiments have become a popular methodology among social scientists. Have they been effective?

In POQ, Rauf et al. study the efficacy of 100 survey experiments. Their results show that a majority of hypotheses were not supported.

Read now: doi.org/10.1093/poq/...

18.12.2025 22:33 👍 49 🔁 32 💬 1 📌 4

Here comes another aviation analogy.

I sincerely hope that these types of tools will be used to help us do _better_ research first and foremost.

I fear instead it will be used to help us do _more_ research _faster_.

The magic of autopilot is that it helps pilots fly better, not more.

16.12.2025 20:30 👍 4 🔁 2 💬 0 📌 0
Will you incorporate LLMs and AI prompting into the course in the future?
No.

Why won’t you incorporate LLMs and AI prompting into the course?
These tools are useful for coding (see this for my personal take on this).

However, they’re only useful if you know what you’re doing first. If you skip the learning-the-process-of-writing-code step and just copy/paste output from ChatGPT, you will not learn. You cannot learn. You cannot improve. You will not understand the code.

In that post, it warns that you cannot use it as a beginner:

…to use Databot effectively and safely, you still need the skills of a data scientist: background and domain knowledge, data analysis expertise, and coding ability.

There is no LLM-based shortcut to those skills. You cannot LLM your way into domain knowledge, data analysis expertise, or coding ability.

The only way to gain domain knowledge, data analysis expertise, and coding ability is to struggle. To get errors. To google those errors. To look over the documentation. To copy/paste your own code and adapt it for different purposes. To explore messy datasets. To struggle to clean those datasets. To spend an hour looking for a missing comma.

This isn’t a form of programming hazing, like “I had to walk to school uphill both ways in the snow and now you must too.” It’s the actual process of learning and growing and developing and improving. You’ve gotta struggle.

This Tumblr post puts it well (it’s about art specifically, but it applies to coding and data analysis too):

Contrary to popular belief the biggest beginner’s roadblock to art isn’t even technical skill, it’s frustration tolerance, especially in the age of social media. It hurts and the frustration is endless but you must build the frustration tolerance equivalent to a roach’s capacity to survive a nuclear explosion. That’s how you build on the technical skill. Throw that “won’t even start because I’m afraid it won’t be perfect” shit out the window. Just do it. Just start. Good luck. (The original post has disappeared, but here’s a reblog.)

It’s hard, but struggling is the only way to learn anything.

You might not enjoy code as much as Williams does (or I do), but there’s still value in maintaining coding skills as you improve and learn more. You don’t want your skills to atrophy.

As I discuss here, when I do use LLMs for coding-related tasks, I purposely throw as much friction into the process as possible:

To avoid falling into over-reliance on LLM-assisted code help, I add as much friction into my workflow as possible. I only use GitHub Copilot and Claude in the browser, not through the chat sidebar in Positron or Visual Studio Code. I treat the code it generates like random answers from StackOverflow or blog posts and generally rewrite it completely. I disable the inline LLM-based auto complete in text editors. For routine tasks like generating {roxygen2} documentation scaffolding for functions, I use the {chores} package, which requires a bunch of pointing and clicking to use.

Even though I use Positron, I purposely do not use either Positron Assistant or Databot. I have them disabled.

So in the end, for pedagogical reasons, I don’t foresee me incorporating LLMs into this class. I’m pedagogically opposed to it. I’m facing all sorts of external pressure to do it, but I’m resisting.

You’ve got to learn first.

Some closing thoughts for my students this semester on LLMs and learning #rstats datavizf25.classes.andrewheiss.com/news/2025-12...

09.12.2025 20:17 👍 331 🔁 99 💬 14 📌 31
At will? Whose will? Removing independent agency heads is part of a broader assault on a nonpartisan government

I was going to just share an excerpt of this great @donmoyn.bsky.social piece, but there are too many excerpts worth sharing. Just read the whole thing. If you care about governance, democracy, and the rule of law, these issues are crucial. open.substack.com/pub/donmoyni...

09.12.2025 12:43 👍 313 🔁 109 💬 5 📌 3
Information Equivalence in Survey Experiments | Political Analysis, Volume 26, Issue 4 | Cambridge Core

I consider this piece on information spillover/information equivalence a must-read for survey experimentalists. It addresses a key limitation of many survey experiments (especially the hypothetical scenario/vignette type): www.cambridge.org/core/journal...

03.12.2025 08:17 👍 10 🔁 0 💬 0 📌 0
Designing Information Provision Experiments (March 2023) - Information provision experiments allow researchers to test economic theories and answer policy-relevant questions by varying the information set available to respondents. We survey the...

Treatment can also be (new) information, with a focus on effects on belief updating and downstream outcomes. See also "information provision experiments" in the econ lit: www.aeaweb.org/articles?id=...

03.12.2025 07:59 👍 7 🔁 0 💬 1 📌 0

Today I published a replication outlining concerns with "Instrumentally Inclusive" by Turnbull-Dugarte and LΓ³pez Ortega (2024, APSR).

I document seemingly idiosyncratic and ad hoc choices made by the authors that create a pattern of statistically significant results consistent with their theory.

28.11.2025 17:08 👍 85 🔁 19 💬 2 📌 2

🚨 SynthNet is out 🚨
Researchers propose new constructs and measures faster than anyone can track. We (@anniria.bsky.social @ruben.the100.ci) built a search engine to check what already exists and help identify redundancies; indexing 74,000 scales from ~31,500 instruments in APA PsycTests. 🧡1/3

26.11.2025 11:42 👍 158 🔁 86 💬 3 📌 3

Selective reporting

20.11.2025 14:50 👍 2 🔁 0 💬 0 📌 0

I recently had a similar issue and ended up using a solution combining geom_step with a histogram, as detailed by @kjhealy.co here: kieranhealy.org/blog/archive...

19.11.2025 22:18 👍 3 🔁 0 💬 1 📌 1

📄 New WP version out - full overhaul!

The Politics of Evidence Selection (w/ @jesperasring.bsky.social )

Comments welcome!

🔗 osf.io/preprints/so...

06.11.2025 15:30 👍 52 🔁 18 💬 2 📌 0

Interesting new study on the elusive connection between organizational performance and user satisfaction

04.11.2025 08:56 👍 4 🔁 0 💬 0 📌 0

After a huge post-election flip in economic perceptions, I thought Democrats and Republicans might be lying to pollsters to send a partisan message, but I was wrong!

New in the Journal of Experimental Political Science (open access): doi.org/10.1017/XPS....

27.10.2025 16:23 👍 92 🔁 41 💬 3 📌 8
Title: A Justification for 80% Power
Abstract:
Cohen’s heuristic reason for choosing 80% power (balancing Type I and Type II errors) conveniently arrives at approximately the same number as an approach where one maximizes the marginal gain in power per standard error reduction. I have yet to see someone point this out, and this is interesting because it provides a non-arbitrary justification for 80% power.

a derivation of the result

I think this is kind of neat and I don't think anyone else has noticed it (I've looked and I can't find anyone who has) osf.io/preprints/so...

Maybe I should back off "justification" language, but it's at least a remarkable coincidence. I still think someone else *must* have noticed it...
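The preprint's derivation isn't reproduced in the thread, but under one natural reading of "maximizing the marginal gain in power per standard error reduction" the optimum is easy to check numerically (this is my sketch, not the author's code): power is Φ(δ/s − z) with s the standard error and z the two-sided 5% critical value, the marginal gain per unit of SE reduction is φ(u)·δ/s² with u = δ/s − z, and setting its derivative to zero gives the quadratic u(u + z) = 2.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

z = 1.959963984540054  # two-sided 5% critical value

# Solve u * (u + z) = 2 for the positive root: the point where the
# marginal power gain per unit of SE reduction is largest.
u = (-z + math.sqrt(z**2 + 8)) / 2
power_at_optimum = Phi(u)
print(round(power_at_optimum, 2))  # prints 0.77
```

Whether this matches the preprint's exact setup I can't verify from the post alone; treat the ~0.77 here as one reading of that optimization, consistent with the "approximately the same number" framing.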

24.10.2025 12:23 👍 70 🔁 21 💬 5 📌 0

When there is a random way to do something, there is a less random way that is better but requires more thought. In this case, regression models that make no sense don't belong in a multiverse analysis. An inferential regression without a causal justification is like an opinion without reasons.

23.10.2025 16:34 👍 135 🔁 35 💬 4 📌 3
If your random seed is 42 I will come to your office and set your computer on fire 🔥. Figuratively. More likely you'll get a stern talking to.

We need to have a conversation about random seeds. Don't use 42.
blog.genesmindsmachines.com/p/if-your-ra...
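The linked post isn't excerpted here, but the practice it gestures at is easy to sketch: don't let a result hinge on one magic seed; use explicit local RNGs and re-run under several seeds to check robustness. The data and seed values below are made up for illustration:

```python
import random
import statistics

def bootstrap_ci(data, n_boot=2000, seed=0):
    """95% percentile bootstrap CI for the mean, with an explicit local RNG
    rather than a hidden global seed."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot) - 1]

data = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5, 2.2, 3.0]
# Re-run under several seeds: if the interval (and the conclusion) moves
# materially with the seed, that fragility is the finding, not a nuisance.
intervals = [bootstrap_ci(data, seed=s) for s in range(5)]
```

If the five intervals broadly agree, the seed was never doing any work; if they don't, no single seed (42 or otherwise) should be reported as "the" result.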

22.10.2025 12:49 👍 89 🔁 35 💬 16 📌 15
Postdoctoral Fellow in Political Science (3-4 years) (288628) | University of Oslo. Deadline: Monday, November 17, 2025

We are hiring PhDs and postdocs to work on the ERC project GETGOV, where I am the PI.

We will investigate governing elites since 1789. I am sure that it will be a lot of fun and result in great research!

Postdocs: www.jobbnorge.no/en/available...

PhDs: www.jobbnorge.no/en/available...

20.10.2025 15:27 👍 85 🔁 79 💬 1 📌 5