Judging by the poor quality of biomarker research I see reported in biomedical journals, my article "How to Do Bad Biomarker Research" must have been hugely influential: www.fharrell.com/post/badb/in... #Statistics #StatsSky #EpiSky
1) A covariate that is predictive of outcome should be in the model even if unpredictive of assignment (eg matched pairs design).
2) A covariate that is not predictive of outcome should not be in the model, even if predictive of assignment (see the simulation sketch after this list).
3) The propensity score is stupid.
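A minimal simulation sketch of points 1) and 2) above (my own illustration, not from the post; the covariate names, effect sizes, and use of Python/statsmodels are assumptions). Assignment depends only on x2 and the outcome depends only on x1, so adjusting for x1 shrinks the standard error of the treatment estimate while adjusting for x2 does not help:

```python
# Illustrative simulation (assumed setup, not from the original post).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_sim = 400, 500
se_unadj, se_adj_x1, se_adj_x2 = [], [], []

for _ in range(n_sim):
    x1 = rng.normal(size=n)                      # prognostic covariate (predicts outcome only)
    x2 = rng.normal(size=n)                      # assignment covariate (predicts assignment only)
    d = rng.binomial(1, 1 / (1 + np.exp(-x2)))   # assignment depends on x2 only
    y = 1.0 * d + 2.0 * x1 + rng.normal(size=n)  # outcome depends on x1, not x2

    for cols, store in [([d], se_unadj),
                        ([d, x1], se_adj_x1),
                        ([d, x2], se_adj_x2)]:
        fit = sm.OLS(y, sm.add_constant(np.column_stack(cols))).fit()
        store.append(fit.bse[1])                  # SE of the treatment coefficient

print("mean SE, unadjusted:      ", round(float(np.mean(se_unadj)), 3))
print("mean SE, adjusted for x1: ", round(float(np.mean(se_adj_x1)), 3))  # smaller: point 1
print("mean SE, adjusted for x2: ", round(float(np.mean(se_adj_x2)), 3))  # no smaller: point 2
```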
A century ago RA Fisher proposed randomisation for allocating treatments in designed experiments. Ever since, we have been misinterpreting this as a means of producing unbiased estimates, but RAF proposed it for validly estimating standard errors.
share.google/Hyfj5Ub07dcN...
"we probably do not need to worry about the fact that the actual effect of one treatment rather than the other is not the same for all patients. Quite limited knowledge about an average improvement is the best that we can do" John Tukey, Controlled Clinical Trials, 1993 p282
So, what's the advantage of PS?
I thought the PS was a revolutionary tool in causal inference for emulating randomization...
100 years ago RA Fisher set out his views on randomisation. This blog post of mine errorstatistics.com/2020/04/20/s... from five years ago tries to explain them.
Nice work, and these linear models are special cases of semiparametric ordinal regression with no normality assumption - hbiostat.org/rmsc/ordsurv
Cheaper if the total cost of the 5 OSs is ignorable. More problems: loss of perceived equipoise limiting RCT recruitment, too much influence of OS results on clinical practice before the too-late RCT is launched, an RCT never launched because of an impressive OS, etc.
The idea that OS are cheap is a myth. The real costs simply aren't taken into account.
"The predictions here are not of future events, as in the context of time-series analysis, but rather of what would have happened in the experiment if other conditions had prevailed." I'm baffled. This is from a paper of 1982. But experts inform me that causality is never considered in statistics.
You categorised patients as responders or non-responders by dichotomising a change from baseline?
You triple criminal!
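A quick way to see the cost of that dichotomisation is the loss of power. A minimal sketch (my own illustration; the sample size, effect size, and the 0 cut-off defining "responder" are assumptions, not from the post) comparing a t-test on the continuous change score with a chi-squared test on responder proportions:

```python
# Illustrative power comparison (assumed data-generating model, not from the original post):
# dichotomising a continuous change score into responder/non-responder discards information.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, n_sim, effect = 100, 2000, 0.4
hits_cont, hits_dich = 0, 0

for _ in range(n_sim):
    change_ctrl = rng.normal(0.0, 1.0, n_per_arm)
    change_trt = rng.normal(effect, 1.0, n_per_arm)

    # analyse the continuous change score directly
    if stats.ttest_ind(change_trt, change_ctrl).pvalue < 0.05:
        hits_cont += 1

    # dichotomise at 0 ("responder" = any improvement) and compare proportions
    table = [[np.sum(change_trt > 0), np.sum(change_trt <= 0)],
             [np.sum(change_ctrl > 0), np.sum(change_ctrl <= 0)]]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if p < 0.05:
        hits_dich += 1

print("power, continuous analysis:", hits_cont / n_sim)
print("power, responder analysis: ", hits_dich / n_sim)   # noticeably lower
```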
I just posted a critique of this paper at discourse.datamethods.org/t/critique-o... where I hope others will add their thoughts #StatsSky #EpiSky #Statistics #rct
The advances we've made in statistics, experimental study design, and causal inference over the past century are remarkably useful for understanding our world. But there has never been a push to make people use them like the one we are seeing with generative AI. Perhaps take a moment to consider why.
And to obtain the same parameters...
Stop interpreting clinical trials as if they were surveys.
www.linkedin.com/pulse/illegi...
Miguel Hernán is presenting at EFSPI and giving a frankly embarrassing rant about (addendum-style) estimands. While lots of people are constructively using it to complement causal inference, he is sticking to basic & straw-man gripes that are easily addressed by thinking or listening to others.
Congrats! It's a better life
What, only maybe?? 🤔 I'm going to have to read you once again.
Terrific @scientificdiscovery.dev post on randomized controlled trials in @ourworldindata.org.
Including this important chart on the impact of pre-registration.
Magically, when people had to pre-register outcomes, many of the benefits disappeared.
ourworldindata.org/randomized-c...
Thanks Paul. It’s nice to have a paper that’s both elegant (if I may say so) and practically useful. The conclusion is that using a particular covariate balancing PS estimator — inverse probability tilting — renders IPW, AIPW, and IPWRA all numerically identical with a linear conditional mean.
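A numerical illustration of that conclusion (my own sketch, not code from the paper; the simulated data-generating model, the logistic link, and the use of scipy's root finder are assumptions). The propensity score is estimated by inverse probability tilting, i.e. by solving moment conditions that force exact balance of the covariate means, and with a linear conditional mean the IPW, AIPW, and IPWRA estimates of the ATE then coincide:

```python
# Illustrative check (assumed setup, not code from the paper).
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])        # design with intercept
ps_true = 1 / (1 + np.exp(-(0.3 * X[:, 1] - 0.5 * X[:, 2])))
D = rng.binomial(1, ps_true)
Y = 1.0 + 0.5 * X[:, 1] + 1.5 * X[:, 2] + 2.0 * D + rng.normal(size=n)

def tilt(target):
    """Solve sum_i [target_i / p_i(gamma) - 1] X_i = 0 (inverse probability tilting, logistic link).
    Assumes the root finder converges for this simulated example."""
    def moments(gamma):
        p = 1 / (1 + np.exp(-X @ gamma))
        return X.T @ (target / p - 1)
    return root(moments, np.zeros(X.shape[1])).x

p1 = 1 / (1 + np.exp(-X @ tilt(D)))        # tilted P(treated | X)
p0 = 1 / (1 + np.exp(-X @ tilt(1 - D)))    # tilted P(control | X)

def ols(Xm, y, w=None):
    """(Weighted) least squares via sqrt-weight rescaling."""
    w = np.ones(len(y)) if w is None else w
    return np.linalg.lstsq(Xm * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)[0]

b1 = ols(X[D == 1], Y[D == 1])                          # outcome model for AIPW (treated)
b0 = ols(X[D == 0], Y[D == 0])                          # outcome model for AIPW (control)
b1w = ols(X[D == 1], Y[D == 1], w=1 / p1[D == 1])        # weighted regression for IPWRA
b0w = ols(X[D == 0], Y[D == 0], w=1 / p0[D == 0])

ipw = np.mean(D * Y / p1) - np.mean((1 - D) * Y / p0)
aipw = (np.mean(D * Y / p1 - (D / p1 - 1) * (X @ b1))
        - np.mean((1 - D) * Y / p0 - ((1 - D) / p0 - 1) * (X @ b0)))
ipwra = np.mean(X @ b1w) - np.mean(X @ b0w)

print(ipw, aipw, ipwra)   # identical up to the solver's numerical precision
```

The identity arises because exact balance makes the AIPW augmentation term vanish when the outcome model is linear in the balanced covariates, and makes the weighted-regression predictions average out to the IPW mean.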
Perfectly stated Darren. Don’t anyone think this is an exaggeration. I’ve seen this in supposedly reputable cardiology journals, sometimes even with omission of “by drawing a DAG”.
We didn't randomize, and there was no allocation concealment or blinding, and we can't really be sure what intervention they got or how the outcomes were measured, but we emulated a trial by drawing a DAG.
Pleased to report on two papers on #ANOVA with @poschm.bsky.social and Franz König
The first lnkd.in/eKmjiQdk
looks at median stratification for a single covariate.
The second lnkd.in/e3DH96G8
considers adjustment for many covariates.
Variance inflation factors are key.
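Variance inflation factors in the usual collinearity sense are 1/(1 - R_j^2), with R_j^2 from regressing covariate j on the other covariates (the papers may define them relative to the treatment-effect variance; this sketch shows only the standard version, with simulated data as my own illustration):

```python
# Illustrative VIF computation (assumed data, not from the papers).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # correlated with x1
x3 = rng.normal(size=n)                          # independent of the others
X = sm.add_constant(np.column_stack([x1, x2, x3]))

# VIF for each covariate (column 0 is the constant, so start at 1);
# x1 and x2 are correlated, so their VIFs exceed 1, while x3's is near 1.
for j, name in enumerate(["x1", "x2", "x3"], start=1):
    print(name, variance_inflation_factor(X, j))
```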
A reminder that the mRNA vaccine technology saved millions of lives during the Covid pandemic and received the 2023 Nobel Prize for Medicine
It seems like a lot of systematic-reviews/meta-analyses are analogous to money laundering. You run a load of studies that nobody actually reads, many of them "dirty", through some process. On the other end you get "clean" results that the ignorant or deceitful can easily communicate as "evidence".
True story
Even @robcaliff.bsky.social says "real world" data is only at the "promise" stage. The only thing it seems to be "missing" is accuracy, completeness, and the ability to account for bias. Their words not mine! 😜 www.nejm.org/doi/full/10....
ht @emilymoin.com