
Paul Bogdan

@pbogdan

I have a website: https://pbogdan.com/

138 Followers · 108 Following · 51 Posts · Joined 19.10.2024

Latest posts by Paul Bogdan @pbogdan

Great to hear! Making the code presentable took some time, so I'm glad you mentioned that this was actually productive!

13.10.2025 12:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Sir, would you be available to please provide a quick fantasy court ruling as a neutral third party (something related to draft order for an upcoming draft)? If so, I will send you the details

27.08.2025 12:22 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I don't doubt the author's findings using their design, but it seems like a leap to claim that their findings based on GPT-4o-mini apply to contemporary state-of-the-art or near-state-of-the-art LLMs

25.08.2025 12:38 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Attempting to replicate this... I plucked one random low-profile retracted paper (pmc.ncbi.nlm.nih.gov/articles/PMC...) and asked GPT-5/Gemini/Claude "What do you think of this paper?" with the title + abstract pasted. No model mentioned a retraction, but all said the paper has low evidential value

25.08.2025 12:28 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

6/🧡 Together, these findings produce a map of how we build meaning: from concept coding in the occipitotemporal cortex to relational integration in frontoparietal and striatal regions

Come see our full preprint here: www.biorxiv.org/content/10.1...

(Thanks for reading!)

24.06.2025 13:49 πŸ‘ 7 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Post image

5/🧡 Relational analysis also identified a role of the dorsal striatum. Despite not being typically seen as an area for semantic coding, striatal regions robustly represent relational content, and the strength of this representation predicts participants' judgments about item pairs

24.06.2025 13:49 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Post image

4/🧡 We used LLM embeddings to model neural representations via fMRI and RSA, which revealed dissociations in information processing. Occipitotemporal structures parse concepts but not relations. Conversely, frontoparietal regions (especially the PFC) almost exclusively encode relational information

24.06.2025 13:49 πŸ‘ 7 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
Post image
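For readers unfamiliar with RSA, the core comparison can be sketched in a few lines. This is a generic illustration with random data standing in for voxel patterns and embeddings, not the preprint's actual pipeline; the array shapes and distance metric here are assumptions for demonstration only:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items = 20
neural = rng.standard_normal((n_items, 100))  # stand-in voxel patterns, one row per item
embed = rng.standard_normal((n_items, 768))   # stand-in LLM embeddings, one row per item

# Representational dissimilarity matrices: pairwise correlation distance
# between items, flattened to the upper triangle (n*(n-1)/2 entries)
rdm_neural = pdist(neural, metric="correlation")
rdm_model = pdist(embed, metric="correlation")

# RSA score: rank correlation between the neural and model RDMs
rho, p = spearmanr(rdm_neural, rdm_model)
```

A region "encodes" the model's information to the extent that item pairs the model treats as dissimilar also evoke dissimilar neural patterns, i.e., a high rho.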

3/🧡 Next, we tested relational information. LLM activity for texts like "A prison and poker table" robustly predicts human ratings on the likelihood of finding a poker table in a prison. Further analyses show how LLMs parsing such texts also capture precise propositional relational features

24.06.2025 13:49 πŸ‘ 1 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Post image

2/🧡 To produce concept embeddings, we submit a short text ("A porcupine") to contemporary LLMs and extract the LLMs' residual stream. Leveraging data from our normative feature study, we find that LLM embeddings better predict human-reported propositional features than older models (word2vec, BERT)

24.06.2025 13:49 πŸ‘ 1 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Post image

🚨 New preprint 🚨

Prior work has mapped how the brain encodes concepts: If you see fire and smoke, your brain will represent the fire (hot, bright) and smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧡

24.06.2025 13:49 πŸ‘ 32 πŸ” 8 πŸ’¬ 1 πŸ“Œ 2

I figure the results showing how, nowadays, stronger p-values are linked to more citations and higher-IF journals point to progress that is difficult to explain with just MTurk

10.06.2025 21:37 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Another bit from the paper, perhaps consistent with your message: "there remain many studies publishing weak p values, suggesting that there have still been issues in eliminating the most problematic research. This deserves consideration despite the aggregate trend toward fewer fragile findings."

10.06.2025 21:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Averaging to 26% isn't "mission accomplished", but this 26% value (or, say, 20% per Peter's simulations) still seems like a meaningful reference. Considering it seems more informative than just expecting 0% (i.e., seeing 33% ➜ 26% as eliminating only 7/33 of problematic studies)

10.06.2025 21:18 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I figure the most appropriate conclusion would be that there are many studies achieving >80% power along with numerous studies below this

10.06.2025 21:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

> But it should be obvious that the problem in psychology was not that 6% of the papers had p-values in a bad range.

I also talk about this 6% number in this other thread: bsky.app/profile/did:...

10.06.2025 18:13 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Although we certainly shouldn't conclude that the replication crisis is over, it seems fair to say that there has been productive progress

10.06.2025 18:12 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Continuing on this response to "The replication crisis in psychology is over", it is also worth considering psychology's place relative to other fields. Looking at analogous p-value data from neuro or med journals, only psychology seems to make a meaningful push to increase the strength of results

10.06.2025 18:11 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The replication crisis is certainly not over, and the paper always refers to it as ongoing. However, I wonder what is the online layperson's view of psychology replicability. The crisis entered public consciousness, but I doubt the public is as aware of the progress to increase replicability

10.06.2025 18:09 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Small take on your COVID vaccine example: a p-value of .01 based on a correlation seems intuitive? Flip a coin 10 times and you'll get heads 9 times about 1% of the time. Yet Pfizer and society should act strongly on that result given their priors on efficacy and given the importance of the topic

10.06.2025 18:04 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image
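The coin-flip figure checks out with the binomial formula; a quick verification:

```python
from math import comb

# Probability of exactly 9 heads in 10 fair flips
p_exactly_9 = comb(10, 9) * 0.5**10            # 10/1024, about 0.98%

# Probability of 9 or more heads (adds the 10-heads case)
p_at_least_9 = p_exactly_9 + comb(10, 10) * 0.5**10  # 11/1024, about 1.07%
```

Either reading lands at roughly 1%, matching the intuition in the post.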

I agree that trying to convert aggregate p-values into a replication rate won't be reliable. Nonetheless, a paper's p-values seem to track something awfully related to replicability. Per Figure 6, fragile p-values neatly identify numerous topics and methods known to produce non-replicable findings

10.06.2025 17:57 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Not sure how informative this would be, but an earlier draft included a subfigure trying to demonstrate that there remain a substantial number of likely problematic papers. Somewhat arbitrarily, the figure showed the percentage of papers where a majority of significant p-values were p > .01

10.06.2025 17:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I regret not exploring this ~26.4% number more or having a paragraph on why a rate of 26.4% likely wouldn't correspond to entirely kosher studies. Your 16% and 20% results, if I'm reading this correctly, are based on some studies having above 80% power, which seems like a correct assumption

10.06.2025 17:27 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Hi Peter, thanks for these additional analyses. For my simulations of 80% power, I sampled z-scores from a distribution centered at 2.8. This only has a small effect (~0.4%), but I also accounted for the number of p-values each actual paper in the dataset reported and then computed the average

10.06.2025 17:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
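The simulation described here can be sketched directly. Under a two-sided alpha = .05 test, 80% power corresponds to a z-score distribution centered near 1.96 + 0.84 = 2.8; conditioning on significance, the fragile share comes out near 26%. This is a minimal reconstruction of the stated idea, not the paper's code (it omits the per-paper weighting the post mentions):

```python
import numpy as np

rng = np.random.default_rng(0)

# z-scores from studies at 80% power: normal, centered at 2.8
z = rng.normal(loc=2.8, scale=1.0, size=1_000_000)

# Two-sided thresholds: p < .05 at |z| > 1.96, p < .01 at |z| > 2.576;
# with the mean at 2.8 the negative tail is negligible, so use z directly
significant = z > 1.96
fragile = (z > 1.96) & (z < 2.576)  # .01 < p < .05

# Share of significant results that are fragile: roughly 0.26
fragile_rate = fragile.sum() / significant.sum()
```

This reproduces the ~26% baseline discussed elsewhere in the thread as the fragile rate expected even from well-powered studies.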

If all studies are either 80% power or 45% power, a 32% level of fragile p-values implies that 25% of studies are questionable.

This math isn't meant to argue that 25% of studies were questionable but to show why a 32% fragile percentage can suggest a rate of questionable studies well above 6%

10.06.2025 17:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image
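The arithmetic behind this is a two-point mixture. Using the rounded rates from this thread (~26.4% fragile at 80% power, ~50% fragile at 45% power) as assumed inputs:

```python
# Fragile-p rates by study type, per the thread's figures
f_well_powered = 0.264   # 80%-power studies
f_questionable = 0.50    # <45%-power ("questionable") studies
observed = 0.32          # overall fragile percentage

# Mixture: observed = (1 - q) * f_well_powered + q * f_questionable
# Solve for q, the fraction of questionable studies
q = (observed - f_well_powered) / (f_questionable - f_well_powered)
```

With these rounded inputs q lands near 0.24, in line with the ~25% figure in the post; small changes to the assumed rates move it a few points either way.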

Let's define a study as questionable if it doesn't have 80% power even when the sample size is 2.5x. Eyeballing based on playing with G*Power, this means a questionable study is one with <45% power. An effect with 45% power will produce a fragile (.01 < p < .05) p-value about 50% of the time

10.06.2025 17:00 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Hi, I'm only seeing this thread now.

>it implies that ~6% of the literature was questionable

Sorry for giving that impression; it's not at all what I had in mind. No doubt, much more than 6% of the literature remains questionable even today

10.06.2025 16:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

This isn't on my website, but I also have gathered data on medical journals (e.g., oncology, surgery). I'm not an MD, but my feeling is that much medical research has perhaps bigger replicability issues than psych or neuro. I encourage any med academic interested in that direction to contact me

06.06.2025 22:54 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

My feeling is that cog neuro has been slower to confront replicability issues (at least the subset of cog neuro reporting p-values). I don't plan to pursue a cog neuro paper, but anybody interested is free to reach out to me for the data, including data linking p-values to specific portions of text

06.06.2025 22:51 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
'A big win': Dubious statistical results are becoming less common in psychology
Fewer papers are reporting findings on the border of statistical significance, a potential marker of dodgy research practices

Thrilled to see a news piece by @science.org on my recent paper. By analyzing p-values across >240k papers, the study suggests that the rate of statistically questionable findings in psychology has declined since the replication crisis began

www.science.org/content/arti...

06.06.2025 19:19 πŸ‘ 122 πŸ” 35 πŸ’¬ 3 πŸ“Œ 3

"Depression" and "anxiety" have a perfect storm going for them, intersecting clinical and personality research, being a common covariate... I figure, depression and anxiety are also some of the easier clinical conditions to study (e.g., many schools wouldn't have access to schizophrenia patients)

03.06.2025 10:10 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0