Great to hear! Making the code presentable took some time, so I'm glad you mentioned that this was actually productive!
Sir, would you be available to provide a quick fantasy court ruling as a neutral third party (something related to draft order for an upcoming draft)? If so, I'll send you the details
I don't doubt the author's findings using their design, but it seems like a leap to claim that their findings based on GPT-4o-mini apply to contemporary state-of-the-art or near-state-of-the-art LLMs
Attempting to replicate this... I plucked one random low-profile retracted paper (pmc.ncbi.nlm.nih.gov/articles/PMC...) and asked GPT-5/Gemini/Claude "What do you think of this paper," with the title + abstract pasted. No model mentioned a retraction, but all said the paper has low evidential value
6/🧵 Together, these findings produce a map of how we build meaning: from concept coding in the occipitotemporal cortex to relational integration in frontoparietal and striatal regions
Come see our full preprint here: www.biorxiv.org/content/10.1...
(Thanks for reading!)
5/🧵 Relational analysis also identified a role for the dorsal striatum. Although not typically seen as an area for semantic coding, striatal regions robustly represent relational content, and the strength of this representation predicts participants’ judgments about item pairs
4/π§΅ We used LLM embeddings to model neural representations via fMRI and RSA, which revealed dissociations in information processing. Occipitotemporal structures parse concepts but not relations. Conversely, frontoparietal regions (especially the PFC) almost exclusively encode relational information
3/🧵 Next, we tested relational information. LLM activity for texts like “A prison and poker table” robustly predicts human ratings of the likelihood of finding a poker table in a prison. Further analyses show that LLMs parsing such texts also capture precise propositional relational features
2/🧵 To produce concept embeddings, we submit a short text (“A porcupine”) to contemporary LLMs and extract the LLMs’ residual stream. Leveraging data from our normative feature study, we find that LLM embeddings better predict human-reported propositional features than older models (word2vec, BERT)
🚨 New preprint 🚨
Prior work has mapped how the brain encodes concepts: If you see fire and smoke, your brain will represent the fire (hot, bright) and smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧵
I figure the results showing that, nowadays, stronger p-values are linked to more citations and higher-IF journals point to progress that is difficult to explain with just MTurk
Another bit from the paper, perhaps consistent with your message: "there remain many studies publishing weak p values, suggesting that there have still been issues in eliminating the most problematic research. This deserves consideration despite the aggregate trend toward fewer fragile findings."
Averaging to 26% isn't "mission accomplished", but this 26% value (or, say, 20% per Peter's simulations) still seems like a meaningful reference. Considering it seems more informative than just expecting 0% (i.e., seeing 33% → 26% as eliminating only 7/33 of problematic studies)
I figure the most appropriate conclusion would be that there are many studies achieving >80% power along with numerous studies below this
> But it should be obvious that the problem in psychology was not that 6% of the papers had p-values in a bad range.
I also talk about this 6% number in this other thread: bsky.app/profile/did:...
Although we certainly shouldn't conclude that the replication crisis is over, it seems fair to say that there has been productive progress
Continuing on this response to "The replication crisis in psychology is over", it is also worth considering psychology's place relative to other fields. Looking at analogous p-value data from neuro or med journals, only psychology seems to make a meaningful push to increase the strength of results
The replication crisis is certainly not over, and the paper always refers to it as ongoing. However, I wonder what the online layperson's view of psychology replicability is. The crisis entered public consciousness, but I doubt the public is as aware of the progress toward increasing replicability
Small take on your COVID vaccine example: a p-value of p = .01 based on a correlation seems intuitive? Flip a coin 10 times and you'll get heads 9 times about 1% of the time. Yet, Pfizer and society should act strongly on that result given their priors on efficacy and given the importance of the topic
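For anyone who wants to sanity-check that coin-flip intuition, a quick stdlib calculation (the 9-heads framing is from the post above; both the exact-9 and at-least-9 readings land near 1%):

```python
from math import comb

# Probability of exactly 9 heads, and of 9 or more heads,
# in 10 flips of a fair coin.
exactly_9 = comb(10, 9) / 2**10                          # 10/1024
at_least_9 = sum(comb(10, k) for k in (9, 10)) / 2**10   # 11/1024
print(round(exactly_9, 4), round(at_least_9, 4))  # ≈ 0.0098 and 0.0107, both ~1%
```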
I agree that trying to convert aggregate p-values into a replication rate won't be reliable. Nonetheless, a paper's p-values seem to track something awfully related to replicability. Per Figure 6, fragile p-values neatly identify numerous topics and methods known to produce non-replicable findings
Not sure how informative this would be, but an earlier draft included a subfigure trying to demonstrate that there remain a substantial number of likely problematic papers. Somewhat arbitrarily, the figure showed the percentage of papers where a majority of significant p-values were p > .01
I regret not exploring this ~26.4% number more or having a paragraph on why a rate of 26.4% likely wouldn't correspond to entirely kosher studies. Your 16% and 20% results, if I'm reading this correctly, are based on some studies having above 80% power, which seems like a correct assumption
Hi Peter, thanks for these additional analyses. For my simulations of 80% power, I sampled z-scores from a distribution centered at 2.8. This only has a small effect (~0.4%), but I also accounted for the number of p-values each actual paper in the dataset reported, then computed the average
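A minimal sketch of the fragile-p-value share this setup implies, computed analytically rather than by sampling (the N(2.8, 1) z-score assumption is from the post above; the mean of 2.8 gives ~80% power since P(Z > 1.96) ≈ .80):

```python
from statistics import NormalDist

nd = NormalDist()
z_05 = nd.inv_cdf(1 - 0.05 / 2)  # ≈ 1.96, two-sided .05 cutoff
z_01 = nd.inv_cdf(1 - 0.01 / 2)  # ≈ 2.576, two-sided .01 cutoff
mu = 2.8                         # mean z-score, implying ~80% power

power = 1 - nd.cdf(z_05 - mu)                    # P(significant) ≈ 0.80
fragile = nd.cdf(z_01 - mu) - nd.cdf(z_05 - mu)  # P(.01 < p < .05)
print(round(fragile / power, 3))  # share of significant results that are fragile, ≈ 0.264
```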
If all studies are either 80% power or 45% power, a 32% level of fragile p-values implies that 25% of studies are questionable.
This math isn't meant to argue that 25% of studies were questionable but to show why a 32% fragile percentage can suggest a rate of questionable studies presumably >6%
Let's define a study as questionable if it wouldn't reach 80% power even with 2.5x the sample size. Eyeballing based on playing with G*Power, this means a questionable study is one with <45% power. An effect with 45% power will produce a fragile (.01 < p < .05) p-value about 50% of the time
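The arithmetic across these posts can be sketched in a few lines, under the simplifying assumptions stated above (unit-variance normal z-scores; every study at either 80% or 45% power; a 32% overall fragile share among significant results):

```python
from statistics import NormalDist

nd = NormalDist()
z_05, z_01 = nd.inv_cdf(0.975), nd.inv_cdf(0.995)  # two-sided .05 and .01 cutoffs

def fragile_share(power: float) -> float:
    """Share of significant results with .01 < p < .05 at a given power."""
    mu = z_05 - nd.inv_cdf(1 - power)  # mean z-score implied by the power level
    frag = nd.cdf(z_01 - mu) - nd.cdf(z_05 - mu)
    return frag / power  # denominator = P(significant), which equals power here

hi, lo = fragile_share(0.80), fragile_share(0.45)  # ≈ 0.26 and ≈ 0.50
# Solve hi * (1 - q) + lo * q = 0.32 for q, the questionable-study fraction.
q = (0.32 - hi) / (lo - hi)
print(round(q, 2))  # ≈ 0.25
```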
Hi, I'm only seeing this thread now.
>it implies that ~6% of the literature was questionable
Sorry about giving off this impression as it is not at all what I had in mind. No doubt, much more than 6% of the literature remains questionable even today
This isn't on my website, but I also have gathered data on medical journals (e.g., oncology, surgery). I'm not an MD, but my feeling is that much medical research has perhaps bigger replicability issues than psych or neuro. I encourage any med academic interested in that direction to contact me
My feeling is that cog neuro has less confronted replicability issues (at least the subset of cog neuro reporting p-values). I don't plan to pursue a cog neuro paper, but anybody interested is free to reach out to me for the data, including data linking p-values to specific portions of text
Thrilled to see a news piece by @science.org on my recent paper. By analyzing p-values across >240k papers, the study suggests that the rate of statistically questionable findings in psychology has declined since the replication crisis began
www.science.org/content/arti...
"Depression" and "anxiety" have a perfect storm going for them, intersecting clinical and personality research, being a common covariate... I figure, depression and anxiety are also some of the easier clinical conditions to study (e.g., many schools wouldn't have access to schizophrenia patients)