Seems like a logical name. I just had seen the same concept as a different name pubmed.ncbi.nlm.nih.gov/11113946/
Huh interesting. I've not seen this called centinels before but rather sympercents.
"[Of] 25 replication studies, 7 (28%) studies demonstrated robust replicability, meeting all three validation criteria: achieving statistical significance (p < 0.05) in the same direction as the original study and showing compatible effect size magnitudes as per the Z test (p > 0.05)."
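The effect-size compatibility criterion quoted above can be sketched as a two-sided Z test on the difference between two standardized effects. A minimal illustration (the effect estimates and standard errors below are made up, not taken from the papers):

```python
from math import sqrt, erf

def z_test_effect_sizes(d1, se1, d2, se2):
    """Two-sided Z test for the difference between two effect sizes.
    p > 0.05 fails to reject equality, i.e. the replication effect is
    treated as compatible in magnitude with the original."""
    z = (d1 - d2) / sqrt(se1 ** 2 + se2 ** 2)
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# hypothetical original d = 0.80 (SE 0.20) vs replication d = 0.35 (SE 0.15)
z, p = z_test_effect_sizes(0.80, 0.20, 0.35, 0.15)
```

Here the two effects would count as compatible by that criterion, since p is (just barely) above 0.05.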
I'm pleased to announce the publication of two important papers in Sports Medicine on the first ever large replication project in sport and exercise science. Read the full papers:
lnkd.in/gH3NCqK5
lnkd.in/gE7izySW
#SportsScience #EvidenceBasedPractice #Research #OpenScience #Replication
I've implemented a couple different ways of plotting this in the TOSTER package aaroncaldwell.us/TOSTERpkg/ar...
Read it and had to do a double take to understand. Looking at this, I've never felt so American 🤣
We're hiring at the U. of Arkansas!
Asst. Prof. position with a research program focused on circular bioeconomy systems, plus teaching, service, and mentoring of graduate students and research staff. Tenure will accrue in the Dept. of Biol. & Agric. Engr.
Apply: uasys.wd5.myworkdayjobs.com/en-US/UASYS/...
⚠️ Job Alert ⚠️
We're looking to add an Assistant/Associate Professor of Biostatistics to our crew at UAMS in Northwest Arkansas! Please share widely!
Awesome team, great work-life balance, and you get to help make a real impact in community health. #stats
uasys.wd5.myworkdayjobs.com/en-US/UAMS_A...
Mind sharing the DOI? I'm curious about this one.
Hey folks, I intermittently get on social media. If you ever need me, send me a message through my contact form on my website aaroncaldwell.us
New article from me:
"Inconsistent multiple testing corrections: The fallacy of using family-based error rates to make inferences about individual hypotheses"
Open access: doi.org/10.1016/j.me...
#Stats
#Methodology
You can make these mean difference plots easily in SimplyAgree aaroncaldwell.us/SimplyAgree/
New article: "Effects of preferred versus nonpreferred music on bench press performance". By Jasmin Hutchinson, @jennymurphy2.bsky.social, et al.
doi.org/10.51224/cik...
Check out our replication study as part of the larger replication project by the ssreplicationcentre.com
Thanks so much to Jasmin and team for their brilliant work on this
tweet from "gentile news network": ⚠️ HERE ARE THE RULES ⚠️ 🚨 Post a picture of a Jew. 🚨 Say "_____ is a Jew." (1st line) 🚨 Include what they are known for/who they are (2nd line) 🚨 #NameThem. (3rd line) Please post only clean images of the person i.e. no stars of David etc. Let's get this trending. ❤️
gonna tell my grandkids this was musk's twitter, because it was
Also a good paper that may be useful for the scenario you described pubmed.ncbi.nlm.nih.gov/10734289/
Ah, I had forgotten about this post. I like the simplicity of his MLM approach!
Yup, not necessarily the wrong approach either. IMHO, I'd prefer to report Glass's delta (standardizing by the pre-intervention SD) peerj.com/articles/103...
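For anyone unfamiliar, Glass's delta divides the mean difference by the baseline (pre-intervention) SD rather than a pooled SD. A minimal sketch with made-up numbers, not data from any paper mentioned here:

```python
from statistics import mean, stdev

def glass_delta(pre, post):
    """Glass's delta: mean difference standardized by the
    pre-intervention (baseline) sample standard deviation."""
    return (mean(post) - mean(pre)) / stdev(pre)

# hypothetical pre/post scores for one group
pre = [10.0, 12.0, 11.0, 9.0, 13.0]
post = [13.0, 14.0, 15.0, 12.0, 16.0]
d = glass_delta(pre, post)
```

Using the baseline SD keeps the standardizer free of any treatment-induced change in variability, which is the usual rationale for preferring it in pre/post designs.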
Yeah, it's a feature of the design (I'm guessing this is the SMD of the change scores). The lack of a concurrent control provides a conveniently large effect size.
Yeah, that's pretty much how I'd do it (at a glance)
So the mixed model you had but with t1 on the "right" and t2 and t3 on the "left"
Then I'd only include t1 as a covariate
When is the experimental treatment exposure? Before or after t1?
New article: "Model specification in mixed-effects models: A focus on random effects". By Keith Lohse et al.
doi.org/10.51224/cik...
No, that sounds entirely unreasonable. Are these limits for limits of agreement or for an equivalence test?
If you don't think it would be any better, your conclusion/claim would be better stated as: "Effect sizes are a function of the experimental design and analysis approach, so describing an effect outside the context of the experiment just isn't meaningful."
Claim 1: "Effect size is a function of the experiment design and analysis approach as much as, if not more than, the underlying effect. So describing an effect's Cohen's d outside of the context of the experiment just isn't meaningful."
Is misleading the reader then, no?
Not sure I agree; it all depends on what information/inference you are trying to draw from the comparison. Let me ask you this though: what makes you think an *unstandardized* mean difference would be any better for comparing between experiments?
And heterogeneity does not mean the effect sizes can't be compared. Random-effects models in meta-analysis exist for a reason.
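To sketch that point: a random-effects model absorbs between-study variance (tau²) into the weights rather than assuming one common effect. A minimal DerSimonian-Laird implementation, with hypothetical effect sizes and sampling variances (not from any study discussed here):

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird
    method-of-moments estimator. Returns (pooled effect, tau^2)."""
    w = [1 / v for v in variances]                    # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments tau^2 (truncated at zero)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # re-weight with between-study variance added to each sampling variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# hypothetical SMDs and sampling variances from three studies
pooled, tau2 = dersimonian_laird([0.2, 0.5, 0.8], [0.04, 0.05, 0.04])
```

With heterogeneous effects, tau² comes out positive and the weights flatten toward equality, but the studies are still pooled and compared on a common scale.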
To put it another way: if I had two different studies that used different measures with wildly different reliabilities (within subject variation) I wouldn't be surprised by differences in the SMD.
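A toy numeric version of that point (all numbers hypothetical): the same raw mean difference, standardized by two measures with different within-subject SDs, yields very different SMDs.

```python
# Hypothetical illustration: identical raw mean difference, but the two
# measures differ in within-subject variability (i.e. reliability).
raw_diff = 5.0                          # raw mean difference (made up)
sd_reliable, sd_noisy = 5.0, 10.0       # SDs of two hypothetical measures

smd_reliable = raw_diff / sd_reliable   # 1.0
smd_noisy = raw_diff / sd_noisy         # 0.5
```

Same underlying raw effect, double the measurement noise, half the SMD, which is exactly why the SMDs from two such studies need not agree.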