
AnthonyHigney

@anthonyhigney

Economist at Glasgow University. Environmental economics and meta-research. https://anthonychigney.github.io/home/

213 Followers Β· 1,518 Following Β· 18 Posts Β· Joined 11.11.2024

Latest posts by AnthonyHigney @anthonyhigney

I also signed this letter

23.02.2026 15:47 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
4-panel comic. (1) [Person 1 with ponytail flanked by person with short hair and another person speaking into microphone at podium] PERSON 1: In the early 2010s, researchers found that many major scientific results couldn’t be reproduced. (2) PERSON 1: Over a decade into the replication crisis, we wanted to see if today’s studies have become more robust. (3) PERSON 1: Unfortunately, our replication analysis has found exactly the same problems that those 2010s researchers did. (4) [newspaper with image of speakers from previous panels] Headline: Replication Crisis Solved

Replication Crisis

xkcd.com/3117/

21.07.2025 23:54 πŸ‘ 4880 πŸ” 659 πŸ’¬ 28 πŸ“Œ 31

I should say: cutting US funding for Gavi is a tragically misguided decision that can and should be reversed.

25.06.2025 15:34 πŸ‘ 1 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

πŸš— Noise pollution also drowns out the sounds of nature🚐
Our research shows that traffic noise reduces the calming, restorative power of birdsong 🐀
w/ @konraduebel.bsky.social, @simon-butler.bsky.social, @anthonyhigney.bsky.social, Nick Hanley & Eleanor Ratcliffe

πŸ‘‰ osf.io/preprints/os...

24.06.2025 16:15 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

🚨 Thrilled to announce 4th Annual SGPE Summer School

πŸŽ“ One-day masterclass on causal inference
πŸ“š Topics: SCM, matrix completion, partial ID
πŸ—“οΈ 5 June | πŸ“Stirling, UK | πŸ’» Dr Anthony Higney (Glasgow)
🎟️ £50 PhDs / £100 staff
Lunch & dinner included!
Sign up: bit.ly/SGPE2025SS
All welcome!

13.05.2025 17:44 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
The image shows data from a WHO report on adolescent mental health, focusing on loneliness among 11-, 13-, and 15-year-olds across 44 countries. It highlights that 16% of adolescents report feeling lonely most of the time or always, with rates nearly doubling from age 11 to 15. Girls consistently report higher levels than boys. A heatmap table shows the prevalence by age, gender, and country, with darker shades indicating higher rates. Notably high rates among 15-year-old girls include the UK (40%), Belgium (French, 37%), Finland (26%), and Germany (32%), with much lower rates for boys. The report also notes that loneliness is more common among adolescents from low-affluence families.

Why is there such a difference in loneliness by gender for teenagers? who.int/europe/publica…

02.04.2025 06:56 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
90% of paint samples tested contain lead above permissible limits in India: Study 90% of 51 paints used to paint houses in India contain lead above govt's permissible limit of 90 ppm. 76.4% of these paints contained lead more than 111 times the limit. Lead poisoning in children is ...

www.thehindu.com/news/nationa...

22.03.2025 18:32 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
The James Webb Space Telescope took a fantastic shot of the planet Saturn with its Near-Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI). This is just an amazing shot that shows the planet and the rings in brilliant detail.

24.02.2025 06:16 πŸ‘ 19286 πŸ” 3125 πŸ’¬ 281 πŸ“Œ 229
The figure is a line graph titled **"Heat Pump Adoption and Weekly Energy Consumption, Great Britain."**  

- **Y-axis:** "Change in weekly consumption, kWh," ranging from -350 to 150 in increments of 50.  
- **X-axis:** "Weeks since adoption," ranging from -50 to 90, with a vertical dashed line at week 0 marking the **"Week of heat pump adoption."**  

The graph displays two lines with confidence intervals (shaded areas):  
- **Blue line ("Electricity")**: Starts near zero before adoption, then jumps to approximately +50 kWh after adoption and gradually increases over time.  
- **Red line ("Gas")**: Starts near zero before adoption, then drops sharply to around -200 kWh after adoption. It shows seasonal variation afterward, with consumption dropping as low as -250 kWh around week 70.  

**Source:** Researchers' calculations using data from Octopus Energy.

Paper finds heat pump adoption in the UK led to 70% less carbon and 40% less energy use.
www.nber.org/202502/diges...

03.02.2025 19:06 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
The detail with a boat and a house

My favourite Minoan fresco from Akrotiri, showing boats arriving to the harbour. A detail. #FrescoFriday

10.01.2025 07:24 πŸ‘ 19 πŸ” 3 πŸ’¬ 1 πŸ“Œ 1

I've read a couple of papers on external validity where they define it in a way that is, in my view, much too restrictive. Doesn't sit right.

31.12.2024 12:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

What sort of information? Anything: magnitude, sign, variation, heterogeneity (e.g., maybe a drug has more of an effect on the young than the old).

31.12.2024 12:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

How do you evaluate that? The same way you evaluate any forecast.

31.12.2024 12:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Workshopping here: if an estimate contains information about an ex-ante unobserved treatment then it has some degree of external validity for that treatment.

31.12.2024 12:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We should at a minimum do what Nature does, in which the referee comments and author responses are published along with the paper.

Allows the paper itself to be an authoritative artifact while lifting the curtain on the debate that led to its creation.

(quoting @dholtz.bsky.social )

4/

24.12.2024 14:45 πŸ‘ 178 πŸ” 17 πŸ’¬ 12 πŸ“Œ 6
Climbing the Ivory Tower: How Socio-Economic Background Shapes Academia Founded in 1920, the NBER is a private, non-profit, non-partisan organization dedicated to conducting economic research and to disseminating research findings among academics, public policy makers, an...

Some hope for first gens after all! Fabulous new study notes: "academics from poorer backgrounds introduce more novel scientific concepts but are less likely to receive recognition." This is good, not bad, news. Our different view on life is our super power to make change! www.nber.org/papers/w33289

23.12.2024 12:22 πŸ‘ 91 πŸ” 24 πŸ’¬ 1 πŸ“Œ 3
Medieval MP of the Month: Santa Claus in Parliament - The History of Parliament Here's a seasonal offering from Hannes Kleineke of the House of Commons 1422-1504 Section for our Medieval MP of Month...

According to some of the records in the archive, Santa Claus himself sat in Parliament in the 15th century... But was he really in Westminster when he should have been at the North Pole? #Christmas

Below, Dr Hannes Kleineke explores the mystery of 'Nicholas Christmas'

20.12.2024 08:00 πŸ‘ 5 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0
Take a look at this artifact. This inscription is ostensibly part of a large stele which has evidently been broken into fragments; we are seeing one (middle) section. All the edges are broken and unfinished. Here is the problem: if this were a broken stele, some letters would be fragmentary.

13.12.2024 01:33 πŸ‘ 52 πŸ” 7 πŸ’¬ 2 πŸ“Œ 0
We present the expected values from p-value hacking as a choice of the minimum p-value among m independent tests, which can be considerably lower than the "true" p-value, even with a single trial, owing to the extreme skewness of the meta-distribution.
We first present an exact probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena. We derive the distribution for small samples 2<n≀nβˆ—β‰ˆ30 as well as the limiting one as the sample size n becomes large. We also look at the properties of the "power" of a test through the distribution of its inverse for a given p-value and parametrization.
The formulas allow the investigation of the stability of the reproduction of results and "p-hacking" and other aspects of meta-analysis.
P-values are shown to be extremely skewed and volatile, regardless of the sample size n, and to vary greatly across repetitions of exactly the same protocols under identical stochastic copies of the phenomenon; such volatility makes the minimum p-value diverge significantly from the "true" one. Setting the power is shown to offer little remedy unless the sample size is increased markedly or the p-value is lowered by at least one order of magnitude.

Reading this paper now. Interesting but I don't quite agree with his take on "true" p-values. Will post about it next week. arxiv.org/abs/1603.07532
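The abstract's point about taking the minimum of m p-values can be sketched with a quick Monte Carlo (the function name and setup here are mine, purely illustrative): under a true null, each p-value is Uniform(0, 1), so the minimum of m independent p-values is sharply left-skewed.

```python
import random

def median_min_p(m, trials=10_000, seed=1):
    """Empirical median of min(p_1, ..., p_m) over m independent tests
    of a true null hypothesis, where each p-value is Uniform(0, 1)."""
    rng = random.Random(seed)
    mins = sorted(min(rng.random() for _ in range(m)) for _ in range(trials))
    return mins[trials // 2]

# Analytically, the minimum of m uniform p-values has CDF 1 - (1 - p)^m,
# so its median is 1 - 0.5 ** (1 / m): about 0.067 for m = 10.
print(median_min_p(1))   # close to 0.5
print(median_min_p(10))  # close to 0.067
```

So reporting the best of ten "independent looks" at null data typically yields p β‰ˆ 0.07, and p < 0.05 roughly 40% of the time (1 βˆ’ 0.95¹⁰), which is the hacking mechanism the abstract formalizes.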

12.12.2024 15:21 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
How Much Should We Trust Publication Bias Detection Techniques? Publication bias occurs when the distribution of observed effects differs from the distribution of all such effects. Usually, this means there is some filter based on the magnitude or direction of the...

As I explained here, you need to take into account study characteristics correlated with effect size and standard error when you use publication bias detection methods. anthonychigney.github.io/home/blog/Tr...

12.12.2024 12:53 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Abstract
Does lead pollution increase crime? We perform the first meta-analysis of the effect of lead on crime, pooling 542 estimates from 24 studies. The effect of lead is overstated in the literature due to publication bias. Our main estimates of the mean effect sizes are a partial correlation of 0.16, and an elasticity of 0.09. Our estimates suggest the abatement of lead pollution may be responsible for 7–28% of the fall in homicide in the US. Given the historically higher urban lead levels, reduced lead pollution accounted for 6–20% of the convergence in US urban and rural crime rates. Lead increases crime, but does not explain the majority of the fall in crime observed in some countries in the 20th century. Additional explanations are needed.

Read the study yourself. t.co/BoS1kMsYlg

12.12.2024 12:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Funnel plot with elasticity effect size on x axis and precision (1/se) on y. Shows long right tail and smaller left tail at low precision. Clear positive and significant effect towards top of funnel.

They also cut the bottom off that figure.

12.12.2024 12:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Screenshot of the Twitter account Cremieux. Basically says lead has no effect on crime, citing my meta-analysis. Includes a funnel plot with partial correlations on the x axis and precision (1/se) on the y axis. More precise effects are centered around a zero PCC.

This person is using my study/chart in a misleading way on Twitter.
There is a correlation here between the SE and effect size that is picked up by publication bias methods. This is partly due to underlying study characteristics. Adjusting for these, we find there is publication bias, but lead does cause crime.
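For readers unfamiliar with how a publication bias method "picks up" an SE–effect correlation, here is a minimal sketch of a FAT-PET-style funnel-asymmetry regression (my own toy code, not the estimator from the paper), which regresses effects on their standard errors; a slope far from zero flags exactly the correlation described here.

```python
def ols(x, y):
    """Bivariate OLS; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def fat_pet(effects, ses):
    """Funnel-asymmetry test: effect_i = b0 + b1 * se_i + error.
    A b1 far from zero signals correlation between effects and SEs
    (possible publication bias); b0 is a bias-adjusted mean effect."""
    return ols(ses, effects)
```

The caveat in the post applies: study characteristics that move both the effect and the SE can produce a nonzero slope without any selective publication, which is why the adjusted analysis adds study-level covariates (a multivariate meta-regression) rather than relying on this bivariate version.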

12.12.2024 12:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Guaranteed Income In The Wild: Summarizing Evidence From Pilot Studies and Implications for Policy - Jain Family Institute How to make sense of competing claims about guaranteed income?

I haven't fully read this piece by @jacklandry.bsky.social yet, but according to his meta-analysis of only the recent spate of GI pilots, the income elasticity of labor supply on the intensive margin is...0.16. Which is broadly consistent with the literature!

jainfamilyinstitute.org/guaranteed-i...

09.12.2024 20:01 πŸ‘ 13 πŸ” 4 πŸ’¬ 1 πŸ“Œ 2
This fearless science sleuth risked her career to expose publication fraud Anna Abalkina is part of Nature’s 10, a list of people who shaped science in 2024.

This fearless science sleuth risked her career to expose publication fraud
Anna Abalkina @abalkina.bsky.social is part of Nature’s 10, a list of people who shaped science in 2024.
Holly Else reports at Nature.

www.nature.com/articles/d41...

10.12.2024 07:51 πŸ‘ 325 πŸ” 93 πŸ’¬ 5 πŸ“Œ 4
Investigating statistical methods to assess the conduct and integrity of clinical trials at University of Aberdeen on FindAPhD.com PhD Project - Investigating statistical methods to assess the conduct and integrity of clinical trials at University of Aberdeen , listed on FindAPhD.com

Funding available to do a PhD on statistical methods for assessing the integrity of RCTs, based in Aberdeen, with Alison Avenell, myself, Graeme MacLennan and Mark Bolland. Competitive process, funded by MRC Trials Methodology Research Partnership: www.findaphd.com/phds/project...

03.12.2024 11:40 πŸ‘ 57 πŸ” 52 πŸ’¬ 2 πŸ“Œ 3
The authors and others have argued that this shows that if effects are heterogeneous, and researchers have a good idea of the effect size ex ante and therefore efficiently select sample sizes so that all studies have, for example, 80% power, there will be a correlation between the errors and the effects. This is true, but it would mean all the studies are estimating effects that are not only known to be different; we would also know how they differ. We could rank the effects. Should they be doing that? No! In general, in a (random-effects) meta-analysis, we assume each study is estimating a different effect and estimate the average of the effects, but the differences are assumed to be random, drawn from a common underlying distribution.
When this is not the case, they are not i.i.d., nor exchangeable. That means any inference we do from this model will be off. This is important because when we test for publication bias, we are not just describing the data, we are doing inference.
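The random-effects model described here is standard; as a concrete reference point, here is a minimal sketch of the DerSimonian-Laird estimator (toy code, assuming known within-study variances), where the between-study variance tau^2 is exactly the "random draws from a common distribution" assumption:

```python
def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis: pooled effect and between-study
    variance tau^2, treating study effects as exchangeable draws
    from a common underlying distribution."""
    w = [1.0 / v for v in variances]            # fixed-effect weights
    sw = sum(w)
    fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fe) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    k = len(effects)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)          # method-of-moments tau^2
    wr = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    return pooled, tau2
```

If the studies' effects instead differ in a known, rankable way (the scenario criticized above), the exchangeability baked into tau^2 is violated, and the standard errors from this model, and from publication bias tests built on it, are no longer reliable.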

I also have a brief discussion of how you need to meet the assumption of exchangeability at least in meta-analysis. Happy to hear if I am wrong about anything. Only way to learn.

06.12.2024 16:05 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
β€œβ€¦[E]ach of these data sets comprises studies on disparate processes, using radically different procedures and designs. Few meta-analysts would be interested in pooling such diverse effects.”

In other words, they are combining studies that are looking at completely different things! For me, this is fatal for the empirical part of the paper. This is β€œcombining incommensurable results” and I cannot see any good reason for doing so. For example, they are combining a replication of how much people agree β€œthe earth is flat” with a replication of the effect of less caveolin-1 in mice!

For example, I think they combine incommensurable effects.

06.12.2024 16:05 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
How Much Should We Trust Publication Bias Detection Techniques? Publication bias occurs when the distribution of observed effects differs from the distribution of all such effects. Usually, this means there is some filter based on the magnitude or direction of the...

I have started writing about papers as I read them to help remember them, but in case others are interested I am also putting them on my site.

First one is a working paper criticising publication bias detection. I think their critique is not quite right.

anthonychigney.github.io/home/blog/Tr...

06.12.2024 16:05 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Maybe it's just YOUR testosterone that's low How the measurement tools have led us to falsely believe our T is low

My default explanation for sudden changes in a time series is change in measurement.

So glad to see another example, sent to me by a colleague who shares my cynicism. A short, clear explanation of liquid chromatography-tandem mass spectrometry, which is important to the story.

03.12.2024 09:50 πŸ‘ 178 πŸ” 45 πŸ’¬ 9 πŸ“Œ 17