Tweet with the text “TRUMP SAYS "I THINK THE (IRAN) WAR IS VERY COMPLETE, PRETTY MUCH" - CBS REPORTER ON X, CITING AN INTERVIEW”
progress update at the co-author meeting
Tweet with the text “Rates For Programmars Will Tank Non-techies creating full-stack web and mobile apps will reduce demand for devs by around 15-20% This will happen in ~ 3 months and will cause a massive drop in the TO for SWEs. Some elite engineers will still command a high compensation, but even that will last about 12 months. If someone blindly recommends pursuing a CS degree right now, they are not thinking straight.”
Happy 1 year anniversary to this take!!!
Wikipedia article on the 1953 Iranian coup
You see, this time we’ll definitely get it right
LLMs will usher in a democratization of intellect. Now you too can have a junior (AI) scholar that you can blame for errors and fraud. Previously, only the top scholars had access to that kind of power.
g-computation, let’s go!
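For anyone curious what the cheer is about: a minimal g-computation sketch on simulated data (the data-generating process and the true effect of 2.0 are my own illustrative assumptions, not from any post here). Fit an outcome model, predict everyone's outcome under treatment and under control, and average the difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: confounder L affects both treatment A and outcome Y.
n = 5000
L = rng.normal(size=n)
A = (rng.uniform(size=n) < 1 / (1 + np.exp(-L))).astype(float)
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)   # true ATE = 2.0 by construction

# Step 1: fit an outcome model E[Y | A, L] (here, OLS via least squares).
X = np.column_stack([np.ones(n), A, L])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2: predict for every unit under A=1 and under A=0 (standardize over L).
X1 = np.column_stack([np.ones(n), np.ones(n), L])
X0 = np.column_stack([np.ones(n), np.zeros(n), L])

# Step 3: average the difference -> g-computation estimate of the ATE.
ate_hat = np.mean(X1 @ beta - X0 @ beta)
print(round(ate_hat, 2))  # should be near 2.0
```

With a linear outcome model this collapses to the OLS coefficient on A; the value of g-computation shows up once the outcome model is nonlinear or the treatment is time-varying.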
Deep irony about AI teaching/learning in universities: the most useful AI skills right now are essentially *people management* skills (for now!) which most academics (myself included) are pretty bad at.
Maybe a good bet would help give clarity to the “AI will kill academic papers” debate: will someone get tenure at a top 20 social science department in the next 10 years with the majority of their work being something other than an academic article or book?
Haha I can see why a “stochastic parrot” might make this mistake but, yeah, it’s not great.
Yeah, I think you could prompt much better than I did. I was trying to use it in a way that I expect a “typical” user might.
I think it highlights how you need to approach your use of the tools (they make different mistakes than humans)
...The literature review should give a brief history of the methods outside the social sciences and the prominent uses or developments within the social sciences."
Prompt: "I want you to write a literature review for an academic article on the use of "marginal structural models" in the social sciences. It should be in a tex file that can compile with a .bib file with the references. Use bibtex/natbib for the citations in the latex file...
Here's the lit review if folks are interested: mattblackwell.org/files/share/...
Took about 10mins without any intervention from me other than approving commands.
Was thinking more about AI lit reviews, so I gave Opus 4.6 a shot to do a lit review I know well (marginal structural models in the social sciences). Looked pretty good until it credited me with a paper by @smtorres.bsky.social
Unfortunately, Google AI overview confirmed my authorship...
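For readers who haven't met the method the lit review covers: marginal structural models are typically fit with inverse-probability-of-treatment weights. A minimal point-treatment sketch on simulated data (MSMs earn their keep with time-varying treatments, but the weighting logic is the same; the data-generating process here is my own illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated point-treatment data with a single confounder L.
n = 5000
L = rng.normal(size=n)                       # confounder
p = 1 / (1 + np.exp(-L))                     # true propensity score
A = (rng.uniform(size=n) < p).astype(float)  # treatment
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)   # true marginal effect = 2.0

# Fit the treatment model by logistic regression (Newton's method on the
# log-likelihood), then form stabilized inverse-probability weights.
X = np.column_stack([np.ones(n), L])
b = np.zeros(2)
for _ in range(25):
    ph = 1 / (1 + np.exp(-X @ b))
    W = ph * (1 - ph)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (A - ph))
ph = 1 / (1 + np.exp(-X @ b))
sw = np.where(A == 1, A.mean() / ph, (1 - A.mean()) / (1 - ph))

# The MSM E[Y^a] = b0 + b1 * a is fit by weighted least squares of Y on A
# alone; the weights, not covariates in the model, handle confounding.
D = np.column_stack([np.ones(n), A])
coef = np.linalg.solve(D.T @ (sw[:, None] * D), D.T @ (sw * Y))
print(round(coef[1], 2))  # should be near 2.0
```

Note the contrast with g-computation: here the confounder enters only through the treatment model, and the outcome model is deliberately marginal.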
I know the AI hype can drive anxiety about academic productivity, but please remember that an additional paper can have negative returns to both society and your reputation.
I think the longer task-duration estimates are likely flawed, and it kinda shows here. I don’t think anyone believes Opus 4.6 is almost 3x better than 4.5, and yet that’s what they are finding here.
Trump will do something insane this weekend just to mess with some poor economist’s shift-share design
Great, happy to leave it. Good luck in your efforts.
Right, you were the one insinuating that I am unethical and when I clarified that I am talking about completely ethical behavior, you say that’s obviously ethical but I am the one who needs better arguments. Curious!
That’s a lot of words for “no”
If I’m working on an observational paper and I conduct some robustness checks on the identification assumptions and find that they are not valid, then I feel comfortable not further pursuing a paper, regardless of what the EU has codified.
Are you saying you have published all results you have ever calculated for any project that you have investigated at all?
It could be malicious, sure, but it could also be a file-drawer situation where you abandon projects that do not seem robust enough for the publication process. I have frustrated some collaborators by nixing projects with lots of work done that just didn’t have justifiably consistent results.
Would publishing the null effects that we find with the same imprecision and errors give us more insight about the truth? I’m skeptical, but I guess that may be the source of disagreement
I guess I don’t follow. Replication came into social psych, and perceptions about results started to adjust, as I said in my post.
Biases toward aesthetically or ideologically pleasing results might be stronger than what I’m discussing here!
I’m not advocating for any of those positions. I’m simply saying that we should be humble in updating too heavily on one study’s point estimate of some effect. Could be biased (for many reasons), could be a fluke, etc. In science, we maintain skepticism and accumulate evidence.
Ignoring low power, I don’t think there’s anything wrong about focusing on the tails of the true effect distribution? Like, we don’t need studies to show that, say, the number of unicycles in a census block causally affects voter turnout. We don’t need to explore the entire distribution of effects.
data engineering is my passion
Say what you will about subjective Bayesianism but at least it’s an ethos.
We can do finite-population causal inference that doesn’t rely on random samples from superpopulations, but I take your broader point. I’m open to different views on causality as long as the arguments/assumptions are clear.
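A minimal sketch of what finite-population (Neyman-style) inference looks like, under my own simulated assignment: the n units *are* the population, randomness comes only from which units get treated, and the variance estimator is conservative without invoking any superpopulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed potential outcomes for a finite population of n units
# (illustrative assumption: a constant unit-level effect of 1.0).
n, n1 = 100, 50
y0 = rng.normal(size=n)            # outcomes under control, held fixed
y1 = y0 + 1.0                      # outcomes under treatment

# The only randomness: complete randomization of n1 units to treatment.
assign = rng.permutation(n) < n1
y_obs = np.where(assign, y1, y0)

# Difference-in-means estimate of the finite-population ATE.
tau_hat = y_obs[assign].mean() - y_obs[~assign].mean()

# Neyman's conservative variance estimator; no sampling model required.
v_hat = y_obs[assign].var(ddof=1) / n1 + y_obs[~assign].var(ddof=1) / (n - n1)
print(round(tau_hat, 2), round(np.sqrt(v_hat), 2))
```

The variance estimator over-covers in general because the cross-unit covariance of potential outcomes is unidentifiable; it is exact under constant effects, as assumed here.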