Strategic Governance of Artificial Intelligence (AI)–Enabled Clinical Algorithm Development: Formative Evaluation of the Semiautomatic Clinical Algorithm Development Framework
Background: Health care leaders face a strategic dilemma: traditional expert-led content development ensures safety but is too slow for digital innovation, whereas artificial intelligence (AI) automation offers speed but introduces risks from hallucinations. Resolving this tension requires governance frameworks that balance operational efficiency with rigorous accountability for patient safety.

Objective: This study describes the development process and conducts a formative evaluation of the Semiautomatic Clinical Algorithm Development (S-ACAD) framework as an industry-driven implementation strategy. We aimed to assess the feasibility of this “human-in-the-loop” governance model in balancing the need for operational efficiency with the rigorous safety standards required for pediatric emergency guidance.

Methods: We conducted a prospective, single-day proof-of-concept case study focusing on pediatric febrile seizures. A single physician expert executed a 4-phase workflow: (1) parallel data collection using multiple AI agents, (2) AI-assisted synthesis, (3) iterative refinement via “AI sparring,” and (4) final clinical validation. The resulting algorithm was reviewed by 2 independent external pediatric specialists. We benchmarked this process against a fully automated system (Fully Autonomous Clinical Algorithm Development [F-ACAD]) to illustrate comparative efficiency and safety trade-offs.

Results: In this single execution, the S-ACAD framework produced a parent-actionable febrile seizure algorithm in approximately 245 minutes. Two independent pediatric specialists (N=2) reviewed the output and did not identify medically inaccurate sections or critical safety errors requiring mandatory correction, and both rated overall clinical validity highly (9.0 and 9.5 out of 10). During the workflow, 19 human expert interventions were recorded, with clinical judgment (n=8, 42.1%) and safety review (n=5, 26.3%) as the most frequent categories in an exploratory post hoc analysis. By comparison, the fully automated approach (F-ACAD) completed the task in approximately 68 minutes, but its own AI critics identified 17 issues (9 high-priority), including concerns related to emergency response clarity and standard-of-care alignment.

Conclusions: These preliminary findings suggest that the S-ACAD framework may offer a potential pathway for “active governance” in AI-assisted clinical content development. In this proof-of-concept case, the framework combined rapid AI-assisted drafting with continuous expert oversight and independent clinical review, suggesting the potential to reduce turnaround time while maintaining safety safeguards. However, these results are based on a single expert applying the workflow to a single clinical topic, and validation across multiple experts, topics, and institutional contexts is needed before generalizability can be established.
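The 4-phase S-ACAD workflow can be sketched as a minimal, hypothetical pipeline in which a human expert gates each phase and logs interventions by category. This is an illustrative sketch only, not the authors' implementation; the function names, phase labels, and the example review log (echoing the reported counts of 8 clinical-judgment and 5 safety-review interventions out of 19) are assumptions for demonstration.

```python
from collections import Counter

# Phases of the S-ACAD workflow as described in the Methods section.
PHASES = [
    "parallel_data_collection",  # multiple AI agents gather sources
    "ai_assisted_synthesis",     # AI drafts the candidate algorithm
    "ai_sparring",               # iterative AI critique and refinement
    "clinical_validation",       # final human expert sign-off
]

def run_s_acad(expert_review):
    """Step through each phase with a human-in-the-loop gate.

    `expert_review` maps a phase name to the list of intervention
    categories the expert recorded in that phase (hypothetical labels,
    e.g. "clinical_judgment", "safety_review").
    Returns a Counter of interventions by category.
    """
    log = Counter()
    for phase in PHASES:
        for category in expert_review.get(phase, []):
            log[category] += 1  # every intervention is logged for audit
    return log

# Hypothetical review log whose totals match the reported case study.
log = run_s_acad({
    "ai_assisted_synthesis": ["clinical_judgment"] * 8,
    "ai_sparring": ["safety_review"] * 5,
    "clinical_validation": ["other"] * 6,
})
total_interventions = sum(log.values())  # 19 in the reported execution
```

The design point the sketch illustrates is that, unlike the fully automated F-ACAD baseline, no phase output advances without an auditable record of human oversight.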
JMIR Formative Research: Strategic Governance of Artificial Intelligence (AI)–Enabled Clinical Algorithm Development: Formative Evaluation of the Semiautomatic Clinical Algorithm Development Framework