This week's Site Sanity breaks down the budget mechanics behind it and what sites can actually negotiate to fix it.
80% of clinical sites are running on six months of cash or less
Time to First Patient In isn't just a timeline metric. It's a survival signal.
Slow FPI tells sponsors your site can't execute. And they remember.
Speed = system health. If enrollment drags, fix the backend first.
#ClinicalResearch
Research suggests 37% of sites consistently under-enroll. When you're a 2-3 person independent site, every empty slot is payroll you're covering out of pocket.
The break-even math is tough. The admin burden makes it worse.
#ClinicalResearch #MSky
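For anyone who wants the back-of-the-envelope version, here's a minimal sketch of that break-even math in Python. Every number below is a hypothetical assumption for illustration, not a published benchmark:

```python
# Hypothetical break-even sketch for a small independent site.
# All figures are illustrative assumptions, not real benchmarks.
monthly_fixed_costs = 45_000        # payroll, rent, insurance for a 2-3 person site
revenue_per_enrollee = 4_500        # average sponsor payment per enrolled patient
variable_cost_per_enrollee = 1_500  # per-enrollment labor and supplies

margin = revenue_per_enrollee - variable_cost_per_enrollee
break_even = monthly_fixed_costs / margin

print(f"Enrollments needed per month just to break even: {break_even:.1f}")
# Every slot under that number is payroll covered out of pocket.
```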
Research suggests CRCs toggle between 22+ systems per trial. Not a training problem. A retention problem.
You can't fix burnout with pizza parties when the cognitive load is structural.
#ClinicalResearch
Most sites track screen failures. Almost none track them by referral source.
That's not a documentation problem. It's a data architecture problem.
Fix the structure, and suddenly you know which pipelines work.
#ClinicalResearch
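What "fix the structure" could look like in practice: tag every screen failure with its referral source at intake, and the pipeline comparison becomes a one-line group-by. A minimal sketch (the `referral_source` field and the sample records are hypothetical):

```python
# Hypothetical sketch: screen failures become comparable across pipelines
# once every record carries a referral_source field.
from collections import Counter

screen_failures = [
    {"patient_id": 101, "referral_source": "EMR query"},
    {"patient_id": 102, "referral_source": "social ads"},
    {"patient_id": 103, "referral_source": "social ads"},
    {"patient_id": 104, "referral_source": "physician referral"},
]

by_source = Counter(f["referral_source"] for f in screen_failures)
for source, count in by_source.most_common():
    print(f"{source}: {count} screen failures")
```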
Screen failures aren't recruitment failures. They're systems failures.
Most sites lose candidates in the middle - between contact and consent. That's where Cora lives.
cora.getmaxoutput.com
#ClinicalResearch #ScienceFeed
Research suggests screen failures cost sites ~$1,200 in unrecoverable labor. Per occurrence.
Most sites track conversion rates. Few track cost by referral source.
That's the actual admin burden no one's measuring.
#ClinicalResearch
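Extending that idea with the ~$1,200 figure above: once failures are grouped by referral source, putting a dollar figure on each pipeline is trivial. The counts here are made up for illustration:

```python
# Hypothetical sketch: dollarizing screen failures per referral pipeline.
COST_PER_SCREEN_FAILURE = 1_200  # ~unrecoverable labor per failure (figure from the post)

failures_by_source = {"EMR query": 3, "social ads": 11, "physician referral": 2}

for source, count in failures_by_source.items():
    print(f"{source}: {count} failures -> ${count * COST_PER_SCREEN_FAILURE:,} lost")
```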
https://open.substack.com/pub/maxoutput/p/everyone-at-scope-heard-80-heres?r=6o81r5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
This week's newsletter: the recruitment math nobody talks about.
AI tools can't fix a broken enrollment funnel. But they can show you exactly where it's breaking.
The average trial now generates ~296 protocol deviations. Nearly 33% of the data collected isn't even scientifically essential.
This isn't a training problem. It's a design problem.
E6(R3) finally gives sites the language to push back on bloated protocols.
#ClinicalResearch
The average site operates under conflicting protocol versions for 215 days per trial. Multiply that by 10 active studies. You aren't disorganized - you're managing structural chaos with a calendar and a prayer. #ClinicalResearch #CRC
Protocol Complexity breaks down the math: https://open.substack.com/pub/maxoutput/p/why-clinical-trial-complexity-is?r=6o81r5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
This isn't negligence. It's arithmetic.
301 procedures. 215 days juggling conflicting protocol versions. 24% of sites now declining trials outright.
301 procedures per trial. 4.9M data points. 32.5% of it non-essential.
24% of sites are now declining trials because the complexity is operationally impossible.
#ClinicalResearch #CRC
Who called it "eligibility screening" instead of "reading a 47-page mystery novel to find out the patient was disqualified on page 12 by a footnote that references an appendix"?
#ClinicalResearch #CRC
30-35 eligibility criteria. 300+ procedures to verify. 80% of EMR data buried in unstructured notes.
Eligibility screening isn't a memory test - it's a systems design problem.
When CRCs spend 31% of their week hunting through charts, that's broken architecture.
#ClinicalResearch
55-69% of screen failures? Same I/E criteria every time. 60% of RCTs have at least one poorly justified exclusion criterion.
SCRS calls them "unicorn protocols."
Predictable failures, labeled unavoidable. The system isn't broken by accident.
#ClinicalResearch
41 minutes to screen one patient. 36% fail anyway. $1,200 lost per failure because the protocol was written like a mystery novel.
The eligibility system is designed to fail CRCs.
New newsletter breaks down why (and what actually works).
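A quick expected-value sketch of those numbers, using only the figures in the post:

```python
# Expected-loss sketch built from the figures above.
SCREEN_MINUTES = 41       # time to screen one patient
FAIL_RATE = 0.36          # share of screens that fail
COST_PER_FAILURE = 1_200  # unrecoverable labor per failure

expected_loss = FAIL_RATE * COST_PER_FAILURE   # per screening attempt
crc_hours_per_100 = SCREEN_MINUTES * 100 / 60  # screening time per 100 attempts

print(f"Expected unrecoverable loss per screening attempt: ${expected_loss:.0f}")
print(f"Per 100 screens: ${expected_loss * 100:,.0f} lost, {crc_hours_per_100:.0f} CRC hours spent")
```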
"AI that summarizes PDFs" ≠ "AI that changes outcomes."
CRCs need tools that cut screening time, surface eligibility risks, and reduce deviations across active protocols, not just shorter text.
If it doesn't change your workflow, it's just decoration.
Protocol deviations are like Valentine's Day plans - no matter how carefully you document them, something unexpected is getting reported to the IRB.
Happy Friday.
FDA's new safety reporting guidance clarifies what many sites get wrong: sponsors determine if an event is "expected" or not. You assess causality. They see the full safety database across all sites.
FDA rejected both January warning letter responses. Not because sites didn't promise to fix things. Because they didn't explain HOW.
Root cause analysis. Specific procedures. Timelines. Verification methods.
Promises don't pass inspections. Systems do.
Protocol adherence shows up in 40-45% of FDA inspection severity changes. It's the #1 reason clinical research site inspections get escalated to more serious findings.
https://open.substack.com/pub/maxoutput/p/two-warning-letters-in-five-days?r=6o81r5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
So the Wegovy pill ad basically tried to promise weight loss, emotional healing, and a whole new life arc, but FDA said this isn't therapy in a capsule.
FDA inspectors upgrade severity in 40-45% of cases for one reason: protocol adherence failures.
Not data integrity. Not safety reporting. Protocol compliance.
Tomorrow's newsletter breaks down what sites miss most - and what to fix now. https://maxoutput.substack.com/
#ClinicalResearch #MSky
Wild to see modern imaging used on a 28,000-year-old skull like this. Do you think it actually makes sense to diagnose specific conditions like NF1 in remains this old, or does it risk over-interpreting what the bones show?