I am excited to see whether this will work. Irrespective of that, I think journals should experiment more with the publication process.
Haha, I thought you would have liked it.
Seems an excellent place to start!
(2/2) I will also discuss why I am not a big fan of difference-in-differences estimation.
(1/2) Looking forward to giving two presentations on estimating causal effects with panel data on Saturday at the ASSA meetings. First a continuing education lecture at 8am and at 2:30 a presentation on a new estimator.
Yes, it can be. If you see variation in average outcomes across pods, that must lead to a large variance under interference.
I think the cluster variance is conservative the same way the standard Neyman variance is conservative, and there is not much you can do about it.
(2/2) That is well defined (assuming no spillovers beyond the pods). It is a somewhat weird object, tied to the design and the population, but well defined (an over-used term!), and its variance is estimated using clustering. The treatment is defined as being assigned to a group with a random set of peers.
(1/2) Re the spillovers: Suppose I have a fixed population of 100 units. My experiment is to assign 50 to a control group, and the other 50 to 5 pods (in Seema's notation), where all they do is have a meeting and talk. I am interested in the average effect of being assigned to the treatment group.
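The 100-unit/5-pod design in the tweet above can be sketched in a small simulation. This is purely illustrative, with made-up effect sizes not taken from the thread; it shows how a naive unit-level standard error understates the uncertainty when outcomes move together within pods, while clustering at the pod level accounts for it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, not from the thread: 100 units,
# 50 controls, 50 treated split into 5 pods of 10.
n_pods, pod_size = 5, 10
tau = 1.0  # assumed average treatment effect

y_control = rng.normal(0.0, 1.0, size=50)
# Pod-level common shock standing in for within-pod spillovers.
pod_effects = rng.normal(0.0, 0.5, size=n_pods)
y_treated = (tau + np.repeat(pod_effects, pod_size)
             + rng.normal(0.0, 1.0, size=n_pods * pod_size))

ate_hat = y_treated.mean() - y_control.mean()

# Naive (unit-level) variance ignores the pod structure.
var_naive = y_treated.var(ddof=1) / 50 + y_control.var(ddof=1) / 50

# Cluster at the pod level: with equal-sized pods, the treated mean
# is the mean of the 5 pod means, so use their variance; controls
# are independent units.
pod_means = y_treated.reshape(n_pods, pod_size).mean(axis=1)
var_cluster = pod_means.var(ddof=1) / n_pods + y_control.var(ddof=1) / 50

print(f"ATE estimate: {ate_hat:.2f}")
print(f"naive SE:     {np.sqrt(var_naive):.3f}")
print(f"clustered SE: {np.sqrt(var_cluster):.3f}")
```

With only 5 pods the clustered variance is estimated very imprecisely, which is part of why small numbers of clusters are delicate in practice.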
(5/5) This question is as in Abadie et al (QJE). The answers there are relevant here. If the estimand involves averaging over a large population of therapists, you should cluster; if you want to keep the population of therapists fixed, you should not cluster. There are some technical subtleties.
(4/5) Without spillovers (the individual-therapy case with the same therapist for multiple patients in Sterba, Figure 1(a)), the answer depends on the estimand. Do you want the average effect over a large population of therapists, or the average effect given the set of therapists in the experiment?
(3/5) To have valid standard errors if there are spillovers, I think you ought to cluster at the pod/group level. This is what Cai-Szeidl do, so I think their standard errors are right. (I don't entirely like the way they justify them, but who cares given that they get it right.)
(2/5) In the Cai-Szeidl case, which is like the "group-therapy" case in Figure 1(b) in the Sterba paper, my concern would be about the presence of spillovers between units (firms or individuals) in the same pod/group. (It seems the only reason the groups matter is through interactions/spillovers.)
(1/5) Interesting set of questions. I had not seen the Cai-Szeidl paper, nor the Sterba paper referenced by Pustejovsky. Both are quite nice and recommended, especially the classification of cases in Figure 1 in Sterba. Here is my (preliminary) view.
but was I actually right?
(2/2) Design-based inference is about where you think the uncertainty is coming from. For settings with endogenous treatments you can base the inference on (conditional) randomization of the instrument. See my paper with Paul Rosenbaum in JRSS-A.
(1/2) I agree largely with Kevin. I don't see any particular concerns about doing design-based inference in settings outside the potential outcome framework, because that is largely equivalent to other ways of formulating causal questions, e.g., structural equations or graphs.
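The design-based idea in the thread above, that the uncertainty comes from the random assignment itself, can be illustrated with a Fisher randomization test. This is a minimal sketch with made-up data (the 40-unit sample and 0.8 effect are assumptions for illustration): under the sharp null of no effect, re-randomizing the assignment generates the null distribution of the test statistic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 40 units, first 20 assigned to treatment.
n = 40
y = rng.normal(size=n)
d = np.zeros(n, dtype=bool)
d[:20] = True
y[d] += 0.8  # assumed treatment effect, for illustration only

obs_stat = y[d].mean() - y[~d].mean()

# Fisher randomization test of the sharp null of no effect:
# re-draw the assignment many times and recompute the statistic.
n_draws = 5000
draws = np.empty(n_draws)
for i in range(n_draws):
    p = rng.permutation(d)
    draws[i] = y[p].mean() - y[~p].mean()

p_value = (np.abs(draws) >= np.abs(obs_stat)).mean()
print(f"randomization p-value: {p_value:.3f}")
```

The same logic extends to endogenous treatments by re-randomizing the instrument (conditionally) rather than the treatment, as in the Imbens-Rosenbaum JRSS-A paper mentioned above.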
I will present it at the ASSA meetings in San Francisco as one of the methods lectures. I expect it will get recorded there.
Not quite my style! There is a lot of value to the DAGs, but perhaps not quite as much as Judea Pearl thinks.
We're working on it.