Also available on Spotify: open.spotify.com/episode/0zH0...
Here's my conversation with @jamesheathers.bsky.social, Founder/Director of the Medical Evidence Project, on Metascience Matters: www.youtube.com/watch?v=QH87...
We discussed his book on Forensic Metascience, the story behind the GRIM test, how technology can enable metascience, and other topics.
Steve Brunton's videos are good: youtu.be/rCdxlN6Ph14?...
Replication Research (R2), a community-led Diamond OA journal, makes replication studies more discoverable, publishable & rigorously evaluated, without subscription barriers or author fees. Ahead of #LoveReplicationsWeek, R2's senior editors shared their vision in our Q&A:
Wonderful to see this replication effort in the physical sciences using the models of many labs, preregistration, and transparency that have benefitted other fields.
And, an investment of $9.5 million to do it!
www.nature.com/articles/d41...
Here's what a Cohen's d = 22 looks like. Totally normal. See it all the time in my own data...
Today in that-didn't-happen: Cohen's d = 22.
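For scale, here is a minimal sketch of what d = 22 would imply; the means and SD below are made-up numbers chosen only to produce d = 22, not values from the paper:

```python
import math

def cohens_d(mean1, mean2, sd_pooled):
    """Standardized mean difference between two groups."""
    return (mean1 - mean2) / sd_pooled

def prob_superiority(d):
    """P(X1 > X2) for two unit-variance normals whose means differ by d
    standardized units: Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1 + math.erf(d / 2))

print(cohens_d(32.0, 10.0, 1.0))   # d = 22: means sit 22 pooled SDs apart
print(prob_superiority(22.0))      # essentially 1.0: zero overlap
```

At d = 22 a random draw from one group exceeds a random draw from the other with probability indistinguishable from 1; for comparison, a "large" effect of d = 0.8 gives about 0.71.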
Williams et al. (2014) has 145 citations, putting it in the top 1% of most cited psych articles.
It is a load-bearing publication in its area, despite having impossible results.
pubpeer.com/publications...
It must be very hard to publish null results.

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
Without publication bias, we might not need many replications. With publication bias, 20% to 40% might be justified (but of course, extremely dependent on the assumptions in the simulations!). If the field is a mess, we need a lot of replication studies to clean up!
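The filter described in these posts can be sketched with a toy simulation. All numbers below (base rate of true effects, power, publication probabilities) are my own illustrative assumptions, not the paper's calibrated model:

```python
import random

def selection_on_significance(n_studies=200_000, p_true=0.1, power=0.8,
                              alpha=0.05, pub_sig=0.5, pub_null=0.01,
                              seed=1):
    """Toy model of publication as a significance filter.
    Returns (share of published record that is significant,
             share of published significant results that are false)."""
    rng = random.Random(seed)
    pub_sig_n = pub_sig_false = pub_null_n = 0
    for _ in range(n_studies):
        is_true = rng.random() < p_true
        significant = rng.random() < (power if is_true else alpha)
        if significant:
            if rng.random() < pub_sig:          # significant: easy to publish
                pub_sig_n += 1
                if not is_true:
                    pub_sig_false += 1
        elif rng.random() < pub_null:           # null: 50x harder here
            pub_null_n += 1
    total = pub_sig_n + pub_null_n
    return pub_sig_n / total, pub_sig_false / pub_sig_n

sig_share, false_share = selection_on_significance()
print(round(sig_share, 2), round(false_share, 2))
```

Even this crude filter, with significant results only ~50x (not 100x) more publishable, makes the published record roughly 90% significant claims while more than a third of them are false positives, which is one way to see why a large share of replication studies could be justified.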
My colleague Krist Vaessen wrote a new book: "Neomania: How our obsession with innovation is failing science, and how to restore trust". It's a great analysis of how the drive for novelty hinders reliable scientific progress. Open Access, so read it here: books.openbookpublishers.com/10.11647/obp...
Here's my conversation with Mu Yang on Metascience Matters: www.youtube.com/watch?v=E2EK...
We discussed her work as a scientific sleuth, academic incentives for positive data, individual cases she has pursued, and why she loves being a sleuth.
Also on Spotify: open.spotify.com/episode/16R6...
New submission format at SBE:
"Replications as Registered Reports"
link.springer.com/journal/1118...
You can get "in-principle acceptance" before data collection even begins; the final paper gets published regardless of the results, provided the study is conducted rigorously.
#EconSky
Call for metascience grants has a focus on three areas:
- The impact of artificial intelligence on scientific practice and the research landscape
- The effective design and leadership of research organisations
- Scientometrics approaches to understanding research excellence, efficiency and equity
Some discussion about this in a conversation I'll be releasing in early March, thanks Rasu!
YouTube: youtube.com/@metascience...
Spotify: open.spotify.com/show/7coSExb...
Apple: podcasts.apple.com/us/podcast/m...
iHeart: www.iheart.com/podcast/269-...
I started a podcast! Metascience Matters features conversations with metascientists.
Two episodes are live:
Chirag Patel on Exposomics, and Vibration of Effects: youtu.be/RT2nypyb-iM?...
@floriannaudet.bsky.social on Clinical Trials, Registered Reports, and Psychiatry: youtu.be/fn4qtnc99Xo?...
This paper in Management Science has been cited more than 6,000 times. Wall Street execs, top govt officials, and even a former U.S. Vice President have all referenced it. It's fatally flawed, and the scholarly community refuses to do anything about it.
statmodeling.stat.columbia.edu/2026/01/22/a...
Congratulations!
Statistical Rethinking 2026 Lecture B01 Multilevel Models is online. This is the first lecture of the "experienced" section, in which we start with multilevel models and venture into vast covariance spaces. Full lecture list still here: github.com/rmcelreath/s...
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
The rarest of sights - a big glossy journal publishing negative replications! Yes, we had to bundle 4 replications into one article AND we had to wait 2 (!!) years in peer review, but here we are:
www.science.org/doi/10.1126/...
Here are two, both from the left:
www.currentaffairs.org/news/abandon...
www.levernews.com/abundance-is...
Important. Remember you can't take science at face value: publication bias is everywhere.
Have had the same surprise. The Errington paper is a milestone not just for the scale of the project; they also have a separate paper about issues they encountered running the project, hugely instructive for anyone doing replication studies: elifesciences.org/articles/67995
The other classic ref is Begley and Ellis 2012 (www.nature.com/articles/483... ) which has the same issue, but the ref I always cite now is Errington et al :) elifesciences.org/articles/71601
# Bayesians are to frequentists as vegetarians are to murderers

I have close friends who do not eat meat, for moral reasons. It's not that they find meat disgusting. In fact, they find it delicious. Instead they regard meat as murder. And yet we continue to be friends. I myself do not think meat is murder. I regard it in fact as an ordinary and normative part of human society. It's so commonplace. How could it be murder? This sort of moral incompatibility is commonplace. Vegetarians and vegans have to put up with assholes like me all the time. They are surrounded.
When I take train journeys, I sometimes write things. Things that I probably shouldn't publish.
To anyone who may have been present at my talk today in which I intimated that you can just, you know, do this: you can just, you know, do this.
Registered reports now
For many social dilemmas in science (e.g. the slow uptake of diamond open access journals), stronger top-down management is necessary. It won't just happen. If scientists will not create this management themselves, someone is going to create it for us.
For comparison, the profits of four publishers ($2.64B) amount to 5.58% of the FY2024 NIH budget. Revenues ($7.36B) are *15.52%*. I agree with the authors' perspective that funders, governments, and universities should lead efforts to change this. All journals should be diamond open-access.
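A back-of-the-envelope check of the percentages in that post, taking its dollar figures at face value; the ~$47.35B NIH budget is an assumption back-solved from the post's 5.58% figure, not an official number:

```python
# Amounts in billions of USD, as quoted in the post.
profits_b, revenues_b = 2.64, 7.36
nih_budget_b = 47.35  # assumed FY2024 NIH budget implied by the post

print(round(100 * profits_b / nih_budget_b, 2))   # ~5.6% of the budget
print(round(100 * revenues_b / nih_budget_b, 2))  # ~15.5% of the budget
```

Both quoted percentages are consistent with a budget of roughly $47.3-47.4B, so the arithmetic in the post checks out to rounding.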