An arrow into age? Would that 't were
Ok pals. Looking to move all my teaching materials open and online, and am developing new modules that I want to start off this way. Content = reading, quizzes, videos, and code tutorials. Opinions on the best platform? Good examples of best practice I can steal from? Many thanks.
Utilize is so over-utilized that I think I avoid it even when it's the right word. I basically always use use.
Great idea. I saw @epi.ist present about this recently. He has other ideas in the same vein.
You can nitpick that sentence all you want, I had fun writing it.
Most people don't use the word utilize, they utilize utilize because they utilize it to sound sophisticated rather than for its intended purpose, which is to employ something for a purpose other than its intended purpose.
A very nice initiative where you can post your DAG: opencausal.org
And because they're machine readable, they're much easier to search.
They nouned the word
I always like the reassurance that I'm not a Cylon. Actually, Battlestar Galactica would have been quite different if they had this technology.
This is a super nice and concise way of putting this.
Daryl Bem has entered the chat
That was our goal in the paper. We're hoping at the very least it gets people talking!
In my experience, Bayesians almost always make for great company. Frequentists, on the other hand, are very hit or miss.
Actually, I think it's unreasonable that I have to put the word "ice" in front of hockey, but I nonetheless agree that it's necessary on this bizarre continent of Europe.
Unless you can say the latter (which IMO you almost never can), I think you should have a U there. And, of course, you should take the U seriously. Bias analysis, negative controls, arguments based on expert knowledge.
Oh interesting. I prefer the U on the DAG. I fully agree that it can often just be performative and in the absence of doing anything else, entirely useless. But it's one thing to say "I can't think of any other confounders" and "there are no other confounders."
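For illustration, here's a minimal sketch of what explicitly drawing the U buys you, assuming networkx and entirely hypothetical node names (A = exposure, Y = outcome, L = measured confounder, U = unmeasured confounder); with U on the graph, the open backdoor path is there to be found rather than quietly assumed away:

```python
# Hypothetical DAG: A = exposure, Y = outcome,
# L = measured confounder, U = unmeasured confounder.
import networkx as nx

dag = nx.DiGraph([
    ("L", "A"), ("L", "Y"),   # measured confounding
    ("U", "A"), ("U", "Y"),   # the U, drawn explicitly
    ("A", "Y"),               # effect of interest
])

# Backdoor paths from A to Y: undirected paths whose first edge
# points *into* A. Walk the undirected skeleton to find them.
skeleton = dag.to_undirected()
for path in nx.all_simple_paths(skeleton, "A", "Y"):
    if len(path) > 2 and dag.has_edge(path[1], "A"):
        status = "blocked by adjusting for L" if "L" in path else "OPEN"
        print(" - ".join(path), "->", status)

# Prints something like:
#   A - L - Y -> blocked by adjusting for L
#   A - U - Y -> OPEN
```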
If you're looking for a way to ditch Amazon-owned Audible...we can help. We're the audiobook company that shares profits with indie bookstores. We're also:
- A Certified B Corp & registered Social Purpose Corporation. We care about the world & doing active, social good
- 100% employee owned
Love this stuff! I like to remind people that when p is like 0.049, any doubt at all in their assumptions will push them over 0.05.
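A toy numeric illustration of that tipping point, with invented numbers and assuming scipy:

```python
# How fragile p = 0.049 is: a toy calculation, numbers invented.
from scipy.stats import norm

z = norm.ppf(1 - 0.049 / 2)   # z giving two-sided p = 0.049; ~1.968
print(f"z = {z:.4f}")

# Shrink the estimate by just 0.5% (a whisper of bias in the
# assumptions) and "significance" is gone:
z_shrunk = 0.995 * z
p_shrunk = 2 * (1 - norm.cdf(z_shrunk))
print(f"p after 0.5% shrinkage = {p_shrunk:.4f}")   # ~0.0502 > 0.05
```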
Yeah. I agree. If there's no connection to substantive knowledge then I'd say useless. But if you can compare to the strength of other confounders or just anything that gives a scale to the potential uncontrolled confounding, then it can be useful.
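For concreteness, one classic way to put a scale on it is simple external adjustment for a hypothetical binary unmeasured confounder (Bross/Schlesselman-style). A sketch, with every number below made up:

```python
# Sketch: external adjustment of an observed risk ratio for a
# hypothetical binary unmeasured confounder U. Numbers made up.

def bias_factor(rr_uy: float, p_u_exposed: float, p_u_unexposed: float) -> float:
    """Bias factor B such that RR_true = RR_observed / B."""
    num = rr_uy * p_u_exposed + (1 - p_u_exposed)
    den = rr_uy * p_u_unexposed + (1 - p_u_unexposed)
    return num / den

rr_observed = 1.5
# Suppose U doubles outcome risk and is moderately imbalanced across
# exposure groups -- values you'd choose by analogy with the measured
# confounders, which is where the substantive knowledge comes in:
b = bias_factor(rr_uy=2.0, p_u_exposed=0.5, p_u_unexposed=0.3)
print(f"bias factor = {b:.2f}, adjusted RR = {rr_observed / b:.2f}")
# bias factor = 1.15, adjusted RR = 1.30
```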
Those are usually reserved for discussions about ice hockey (I'm Canadian)
This book should be essential reading for anyone who has anything to do with AI. Well researched, with proper reference to recent history, revealing the insane underlying philosophies that enthusiastic techies & EA advocates buy into in the name of techno-optimism.
When discussing ideas with other people I have 3 internal categories:
1. Reasonable idea and I agree
2. Reasonable idea but I don't agree
3. Unreasonable idea
Category 2 is the most fun.
Makes sense. I'm more of an "estimate" guy but that's probably mostly because I don't deal in trials. I can see how starting from a testing perspective could be more useful in trial design. So you've been unsuccessful in making me mad twice now.
Ah gotcha. I think the word claim made me think you were trying to say something different. But I'm fully onboard with that paragraph. Thanks
I wanted to write vaaaaaaaaaaague to make it clear I mean a very vague idea but I ran out of characters.
Interesting. Do you mean let's just test? Or more, let's estimate the effect size and interpret it very carefully in the context of all potential biases, lack of generalizability, vagueness of treatment definition, effect heterogeneity etc? And best case scenario we get a vague idea of effect size?
I definitely agree that lazy "there's probably uncontrolled confounding" statements are unproductive. But it doesn't follow, in my opinion, that the authors' claim is to be believed otherwise. We should be asking authors for much more convincing arguments than are currently required (in epidemiology, anyway).
I agree bias/tipping-point analyses help, but they must be steeped in very strong subject-matter knowledge to be convincing. Bias analyses (e.g., E-values) in the absence of subject-matter knowledge are bordering on useless.
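For readers unfamiliar with the term, a sketch of what an E-value actually computes, which also makes clear why the number alone means little without a substantive benchmark:

```python
# E-value for a risk ratio (VanderWeele & Ding, 2017): the minimum
# strength of association, on the risk-ratio scale, that an unmeasured
# confounder would need with BOTH exposure and outcome to fully
# explain away the observed RR.
import math

def e_value(rr: float) -> float:
    rr = max(rr, 1 / rr)              # flip protective RRs first
    return rr + math.sqrt(rr * (rr - 1))

print(f"{e_value(1.5):.2f}")          # 2.37 -- but 2.37 relative to what?
# Without subject-matter knowledge (how strong are the *measured*
# confounders?), there's no way to judge whether 2.37 is implausible.
```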
That tells me even they, the experts in both the topic and the data, can't say anything convincing about the existence or magnitude of uncontrolled confounding. The authors are the ones making a claim. They should be able to make better arguments.
I used to think this way but more recently have tilted the other way. The authors are (hopefully) deeply familiar with the data and they are likely world experts on the topic being studied. If all they can say is "we adjusted for some things but there's likely some uncontrolled confounding..."