Fun exercise for students in a causal inference class: draw a DAG for this debate.
I mean, without major assumptions.
Interesting that, of the designs in the MHE (Mostly Harmless Econometrics) bag of tricks, the only one that definitely identifies a basic interventional estimand such as the ATE is selection on observables (SOB). Imbens & Angrist's IV identifies the CACE, RDDs identify estimands at the cutoff, and DID identifies the ATT, but none of them identifies the ATE.
If you have a massive disagreement about which DAGs are valid, but you don't use or understand DAGs, then the work needed to resolve those disagreements is just herculean.
Interesting response by @yiqingxu.bsky.social and others to @urisohn.bsky.social: arxiv.org/pdf/2502.05717 Causal inference is a serious job. In Pearl's parlance, "define first, identify second, estimate last". If the first two parts are correct, one can use adaptive models in a semiparametric fashion.
A very cool Econometrics Journal editorial by @jaapabbring.bsky.social, @victorchernozhukov.bsky.social & Fernandez-Val on Wright's 1928 contribution to causal inference and IV.
Very interesting stuff!
Link: arxiv.org/abs/2501.16395
Lisp is so great!
Here are the first five sets of slides:
01 Introduction: psantanna.com/DiD/01_Intro...
02 Classical 2x2 setup: psantanna.com/DiD/02_two_b...
03 Clustering issues: psantanna.com/DiD/03_Clust...
04 Functional form: psantanna.com/DiD/04_Funct...
05 Covariates: psantanna.com/DiD/05_Covar...
Thank you so much, Eduardo. I learned a lot from your papers.
Thank you, Jacob, for your words.
Thank you so much Lorena!
Thank you so much, Jamil. I was away for a while because of the job market, but soon we'll grab a coffee.
Thank you so much Melody. And thanks a lot for your help during this process
Thank you so much Mike. It was really great to meet you at Polmeth
Thank you so much, Guilherme!!!
Congrats Anton! This is excellent for you and for Madison!
I am thrilled to announce that I will be joining the Department of Government at Harvard University, first as a postdoctoral fellow (2025) and then as an assistant professor (2026). I am grateful and really excited for this new opportunity.
Just did, Claudia. Good to see you here.
New paper! arxiv.org/pdf/2411.14285
Led by amazing postdoc Alex Levis: www.awlevis.com/about/
We show causal effects of new "soft" interventions are less sensitive to unmeasured confounding
& study which effects are *least* sensitive to confounding -> makes new connections to optimal transport
academic.oup.com/ije/article/...
#causalsky #statsky #episky #causalinference
#Rstats `MatchIt` v4.6.0 is out! `MatchIt` implements propensity score matching and other matching methods for causal effect estimation. This isn't a major release, but here are the main updates: 🧵
#causalsky #econsky #episky #statsky
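Not the `MatchIt` API itself, just a hedged sketch in Python (with made-up data) of the core idea behind the kind of 1:1 nearest-neighbor propensity score matching that packages like `MatchIt` implement:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
# treatment probability depends on covariates -> confounding
ps_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
D = rng.binomial(1, ps_true)
Y = 2.0 * D + X[:, 0] + rng.normal(size=n)  # true treatment effect = 2

# 1. estimate propensity scores with a logistic regression
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]

# 2. 1:1 nearest-neighbor matching on the propensity score (with replacement)
treated = np.where(D == 1)[0]
control = np.where(D == 0)[0]
matches = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]

# 3. ATT estimate: mean outcome difference within matched pairs
att = (Y[treated] - Y[matches]).mean()
print(att)
```

The naive difference in means `Y[D == 1].mean() - Y[D == 0].mean()` would be biased upward here; matching on the estimated propensity score largely removes the confounding from `X`.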
A clear example is the Russian Roulette case proposed by Anders Huitfeldt, and then studied by Pearl and Cinelli (2021). Unfortunately, Anders, Carlos Cinelli, and Pearl don't seem to be here on bsky
In fact, no regression method can ensure external validity by itself. You need structural and sometimes functional assumptions. I feel like people in causal inference usually use the CATE invariance assumption. But in many cases this is not 100% guaranteed.
I made a starter-pack for Statistics and Statistics-related groups, departments or organisations. Please share, and suggest accounts that I have missed.
go.bsky.app/q6MfWL
Econ has some ML gems. One of my favorites is Sandroni's sweeping result that no empirical test can distinguish an informed from an uninformed forecaster. I teach it in my ML class: www.youtube.com/watch?v=7OAI... But it is presented as negative, when it is in fact a sweeping positive result.
(Brand new, WIP) Python package for synthetic control estimators with a fast weight solver (pyensmallen). Currently implements jackknife CIs, since inference in the single-treated-unit setting is basically made up anyway.
hope this passes muster, @paulgp.com ?
github.com/apoorvalal/s...
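Not this package's actual API (which uses pyensmallen), just a minimal sketch of the core synthetic control step: simplex-constrained least squares for the donor weights, here via scipy on fabricated data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T0, J = 20, 5  # pre-treatment periods, donor units
Y_donors = rng.normal(size=(T0, J)) + 0.1 * np.arange(T0)[:, None]
w_true = np.array([0.6, 0.4, 0.0, 0.0, 0.0])
y_treated = Y_donors @ w_true + rng.normal(scale=0.05, size=T0)

# synthetic control weights: w >= 0, sum(w) = 1, minimizing pre-period fit error
def loss(w):
    return np.sum((y_treated - Y_donors @ w) ** 2)

res = minimize(
    loss,
    x0=np.full(J, 1 / J),
    bounds=[(0, 1)] * J,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
    method="SLSQP",
)
w_hat = res.x
print(np.round(w_hat, 2))
```

The post-treatment counterfactual is then just `Y_donors_post @ w_hat`; the simplex constraint is what keeps the synthetic unit an interpolation of donors rather than an extrapolation.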
I guess I am a DAG person. Also an ADMG person.
Identification strategies concern what can be learned about the value of a parameter based on the data and the model assumptions. The literature on partial identification is motivated by the fact that it is not possible to learn the exact value of the parameter for many empirically relevant cases. A typical result in the literature on partial identification is a statement about characterizing the identified set, which summarizes what can be learned about the parameter of interest given the data and model assumptions. For instance, this may mean that the value of the parameter can be learned to be necessarily within some set of values. First, the review surveys the general frameworks that have been developed for conducting a partial identification analysis. Second, the review surveys some of the more recent results on partial identification.
"Recent Developments in Partial Identification" by Kline and Tamer (2023). #stats
For those curious about Manski bounds and partial identification more generally, a nice review!
Open access: www.annualreviews.org/content/jour...
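For intuition, a small numeric sketch (made-up data, assuming a binary outcome and no assumptions on selection) of Manski's worst-case bounds, whose width is always exactly 1 for a binary outcome:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
D = rng.binomial(1, 0.4, n)                       # observed treatment
Y = rng.binomial(1, np.where(D == 1, 0.7, 0.3))   # observed binary outcome

p = D.mean()
y1, y0 = Y[D == 1].mean(), Y[D == 0].mean()

# worst-case bounds: unobserved potential outcomes filled with 0 or 1
# E[Y(1)] in [y1*p + 0*(1-p), y1*p + 1*(1-p)], analogously for E[Y(0)]
ey1_lo, ey1_hi = y1 * p, y1 * p + (1 - p)
ey0_lo, ey0_hi = y0 * (1 - p), y0 * (1 - p) + p

ate_lo, ate_hi = ey1_lo - ey0_hi, ey1_hi - ey0_lo
print(round(ate_lo, 2), round(ate_hi, 2))
```

The bounds always contain zero and have width (1 - p) + p = 1, which is exactly Manski's point: the data alone cannot sign the ATE, and anything sharper has to come from assumptions.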
Just added all of them
Sure. I have to update it. Let me know your recs.