we've put out the call for teams for the 2026 cooperative election study! be the unsung hero who organizes your colleagues for a module tischcollege.tufts.edu/research-fac...
Most Americans donβt see compromise as central to good citizenship.
Our election survey data reveal some surprising findings.
The latest from @bfschaffner.bsky.social, see the data here: goodauthority.org/news/most-am...
in 2024, Trump improved his vote share most in counties that saw the fewest people moving in. the newest @TuftsUniversity Public Opinion Lab blog post from Toby Winick. tufts-pol.medium.com/trumps-suppo...
there are a lot of things that Americans think are important for being a good citizen; engaging in compromise is not one of them ... my latest post with @goodauth.bsky.social
goodauthority.org/news/most-am...
It must be very hard to publish null results

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
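For intuition, here is a back-of-the-envelope version of the selection calculation (not the paper's calibrated model; the pre-publication base rates below are made-up assumptions):

```python
# back-of-the-envelope publication-bias arithmetic (illustrative only;
# the paper calibrates a more careful model than this sketch)

def selection_ratio(pub_sig, pub_null, base_sig, base_null):
    """How much more likely a significant result is to enter the published
    record than a null result, given the shares observed in print and an
    assumed pre-publication mix of results."""
    # P(published | sig) / P(published | null)
    #   = (pub_sig / base_sig) / (pub_null / base_null)
    return (pub_sig / base_sig) / (pub_null / base_null)

pub_sig, pub_null = 0.94, 0.02  # shares observed in published abstracts

# assumed (made-up) share of all *conducted* analyses that come up null
for base_null in (0.5, 0.7):
    base_sig = 1 - base_null
    r = selection_ratio(pub_sig, pub_null, base_sig, base_null)
    print(f"if {base_null:.0%} of conducted tests are null -> "
          f"significant results ~{r:.0f}x more likely to be published")
```

The point of the sketch is just that the observed 94/2 split is hard to generate without a very strong filter, whatever you assume about the underlying mix of results.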
it is actually a question asked of respondents, though I think YouGov grabs the metadata and it lines up closely with how people respond to this item. the question is available in public releases, variable name is comptype, I believe. curious to see what you find!
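if you want to poke at it yourself, something like this should work on a public release (the file name here is a placeholder, and the variable name is my best recollection; check the codebook):

```python
import pandas as pd

# load a public CES release (placeholder file name; grab the real file from
# the CES Dataverse and check the codebook for the exact variable coding)
ces = pd.read_csv("ces_common_content.csv", low_memory=False)

# tabulate the device-type item ("comptype", to the best of my recollection)
print(ces["comptype"].value_counts(dropna=False, normalize=True))
```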
in a new Tufts Public Opinion Lab post, Seona Maskara (class of '26) shows that LGBTQ liberals are more supportive of Gaza than other liberals tufts-pol.medium.com/queers-for-p...
πInattentive respondents are a growing concern in online surveys.
β‘S Blatte & @bfschaffner.bsky.social that 4 to 6% of respondents pass attention checks yet remain inattentive, biasing public opinion estimates for small subgroups www.cambridge.org/core/journal... #FirstView
Very interesting new study using CES data from Scott Blatte and @bfschaffner.bsky.social, new in @psrm.bsky.social
www.cambridge.org/core/journal...
i have a new article out in @psrm.bsky.social with Scott Blatte exploring respondent attentiveness in the Cooperative Election Study. we find fairly low rates of inattentiveness but our intervention failed to nudge respondents to pay more attention (thread)
www.cambridge.org/core/journal...
tldr: the fairly low rate of inattentiveness is unlikely to affect estimates with the full sample or large subgroups, but may bias estimates for smaller subgroups.
this was Scott's pet project & everyone on the CES team is grateful to him for helping us learn more about inattentiveness in the CES!
we also tested an experimental intervention where we flagged respondents' contradictory responses to one set of items, but that intervention did not produce more attentiveness on subsequent items in the survey compared to the control group
it is important to note that the inattentiveness we document occurs in the final sample provided by @today.yougov.com, after they have applied their own extensive quality control checks. these rates are much higher in raw samples obtained from lower quality vendors
while these low rates of inattentiveness are ignorable for analyses using the full sample, inattentive respondents tend to make up a bigger share of small subgroups and can affect those subgroup estimates more.
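a toy calculation shows why (all numbers made up):

```python
# toy example of how a small overall rate of inattentiveness can swamp a
# small subgroup (all numbers made up for illustration)

n = 10_000        # total respondents
p_inatt = 0.05    # 5% inattentive, answering effectively at random
p_true = 0.01     # 1% of attentive respondents truly in the small group
k_options = 10    # inattentive respondents pick 1 of 10 options at random

attentive_in_group = n * (1 - p_inatt) * p_true       # 95 respondents
inattentive_in_group = n * p_inatt * (1 / k_options)  # 50 respondents

share = inattentive_in_group / (attentive_in_group + inattentive_in_group)
print(f"inattentive share of the small subgroup: {share:.0%}")  # ~34%
```

so 5% of the full sample can end up being a third of a 1% subgroup, which is where the bias creeps in.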
respondents who were consistently inattentive moved through the survey much more quickly, were more likely to straight-line and to take the survey on a mobile device, and were less likely to answer the post-election wave or to be validated voters
we use two pairs of items for which it would be logically inconsistent to indicate support for both items in a pair. about 5% of respondents gave logically inconsistent responses on at least one pair, and just 2% gave inconsistent responses on both pairs
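in code, the flag looks something like this (the column names and toy data are placeholders, not the actual CES variables):

```python
import pandas as pd

# illustrative flag for logically inconsistent responses; the column names
# below are placeholders, not the actual CES variable names
df = pd.DataFrame({
    "item_a":     [1, 1, 2, 1],  # 1 = support, 2 = oppose
    "item_a_rev": [2, 1, 1, 1],  # supporting both a and a_rev is contradictory
    "item_b":     [1, 2, 1, 1],
    "item_b_rev": [2, 2, 1, 1],  # supporting both b and b_rev is contradictory
})

pair_a = (df["item_a"] == 1) & (df["item_a_rev"] == 1)
pair_b = (df["item_b"] == 1) & (df["item_b_rev"] == 1)

df["inconsistent_any"] = pair_a | pair_b   # ~5% of CES respondents
df["inconsistent_both"] = pair_a & pair_b  # ~2% of CES respondents
print(df[["inconsistent_any", "inconsistent_both"]].mean())
```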
If you need a cite backing up that issues commonly associated with African Americans (like criminal justice and poverty) *are actually perceived to be* associated with African Americans, and that we aren't all just talking out of our @$$ assuming that criminal justice = Black, Tatishe and I did the work:
in the newest Tufts Public Opinion Lab post, Miles Kendrick and Rachel Kuhn show that while Black Americans with college experience strongly support affirmative action, those without such experiences oppose it tufts-pol.medium.com/the-x-perien...
really interesting analysis from Tufts Public Opinion Lab alum Zoe Kava!
this deep dive on immigration attitudes from Tufts (and CES) alum Caroline Soler and her co-authors is really good! www.nytimes.com/2025/11/19/p...
This is out today in open access, and while it's very much a methodology piece, I think it's really important for building the foundation for how we move forward in understanding the role of gender in politics.
worth noting that some of this tension was clear in the NYC mayoral race, as exit polls show that Mamdani received 79% of the secular white vote but just 56% among Black voters.
atheist/agnostic Americans are now the biggest single voting bloc in the Democratic Party, and their leftist views may be alienating another of the party's traditional sources of strength ... new piece from me and Steve Ansolabehere @goodauth.bsky.social goodauthority.org/news/secular...
nice work!
the @nytimes.com recently profiled LGB Republicans. in the newest Tufts Public Opinion Lab blog post, Toby Winick explores what the data tell us about this group and finds that LGB Republicans look quite different from other LGB Americans
tufts-pol.medium.com/lgb-republic...