#AdaptiveTesting

Latest posts tagged with #AdaptiveTesting on Bluesky

A Computerized Adaptive Test for the Knowledge of Effective Parenting Test–Internalizing Module: Instrument Validation Study

Background: The development of efficient, scalable, and precise tools to assess knowledge of evidence-based parenting strategies is critical, particularly as increased parenting knowledge is a core target of many intervention programs.

Objective: To promote this goal, we developed and evaluated a computerized adaptive testing (CAT) version of the Knowledge of Effective Parenting Test–Internalizing Module (KEPT-I).

Methods: Using CAT simulations from a large (N = 1,000) national dataset, we compared the performance of the KEPT-I CAT to both the full-length KEPT-I and a 10-item static short form (KEPT-I Brief).

Results: The KEPT-I CAT achieved efficiency comparable to the KEPT-I Brief (10 items), while demonstrating superior psychometric properties and modestly reducing the potential for practice effects.

Conclusions: Given these advantages, the KEPT-I CAT is well-suited for post-intervention assessment and may facilitate research examining how increases in parenting knowledge relate to behavior change and reductions in child internalizing symptoms.

JMIR Formative Res: A Computerized Adaptive Test for the Knowledge of Effective Parenting Test–Internalizing Module: Instrument Validation Study #EffectiveParenting #ParentingKnowledge #ChildDevelopment #Psychometrics #AdaptiveTesting


Transformative change in education! Discover how board exams are going digital and adaptive. This move emphasizes skill-based assessment, leading to enhanced learning quality and a more equitable evaluation process. #EducationReform #AdaptiveTesting zurl.co/BvGbg


Exciting news! My preprint on an open-source Python package for Computerized Adaptive Testing (CAT) is available!

Key features:

• Seamless software integration

• Fully customizable design

• 100% open source

🔗 doi.org/10.31219/osf...

#AdaptiveTesting #BayesianMethods #OpenSource #Python
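The post links the preprint rather than code, but the heart of any CAT engine is an item-selection and ability-estimation loop. A minimal sketch, assuming a two-parameter logistic (2PL) IRT model with maximum-information item selection and EAP scoring on a standard-normal prior grid; all function names and parameters here are illustrative assumptions, not the package's actual API:

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_next_item(theta, a, b, administered):
    """Pick the unadministered item with maximum information."""
    info = item_information(theta, a, b)
    info[list(administered)] = -np.inf
    return int(np.argmax(info))

def run_cat(respond, a, b, max_items=12, se_stop=0.3):
    """Administer items until SE < se_stop or max_items is reached.
    Ability is estimated by EAP over a standard-normal prior grid."""
    grid = np.linspace(-4.0, 4.0, 81)
    prior = np.exp(-0.5 * grid**2)
    likelihood = np.ones_like(grid)
    administered, theta, se = set(), 0.0, np.inf
    while len(administered) < max_items and se >= se_stop:
        j = select_next_item(theta, a, b, administered)
        x = respond(j)                           # observe a 0/1 response
        pj = p_correct(grid, a[j], b[j])
        likelihood *= pj if x else (1.0 - pj)
        posterior = likelihood * prior
        posterior /= posterior.sum()
        theta = float(np.sum(grid * posterior))  # EAP ability estimate
        se = float(np.sqrt(np.sum((grid - theta)**2 * posterior)))
        administered.add(j)
    return theta, se, len(administered)
```

Under these assumptions, `run_cat` takes a response callback (here a simulated examinee) plus item discrimination (`a`) and difficulty (`b`) arrays, and returns the final ability estimate, its standard error, and the number of items administered.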

Customizing Computerized Adaptive Test Stopping Rules for Clinical Settings Using the Negative Affect Subdomain of the NIH Toolbox Emotion Battery: Simulation Study

Background: Patient-reported outcome measures (PROMs) are crucial for informed medical decisions and for evaluating treatments. However, they can be burdensome for patients and sometimes lack the reliability clinicians need for clear clinical interpretations.

Objective: To assess the extent to which alternative stopping rules can increase reliability for clinical use while minimizing burden in computerized adaptive tests (CATs).

Methods: CAT simulations were conducted on three NIH Toolbox for Assessment of Neurological and Behavioral Function® (NIH Toolbox®) Emotion Battery adult item banks in the Negative Affect subdomain (i.e., Anger Affect, Fear Affect, and Sadness), each containing at least eight items. Under the originally applied NIH Toolbox stopping rules, the CAT stopped once the score standard error (SE) fell below 0.3, or after a maximum of 12 items. We first contrasted this with an SE-change rule in a planned simulation analysis. We then contrasted the original rules with fixed-length CATs (4-12 items), a reduced maximum of eight items, and other modifications in post hoc analyses. Burden was measured by the number of items administered per simulation; precision by the percentage of assessments meeting reliability cutoffs (0.85, 0.90, and 0.95); and score recovery by the correlation and root mean squared error (RMSE) between the generating theta and the CAT-estimated EAP-based theta.

Results: In general, relative to the original rules, the alternative stopping rules slightly decreased burden while increasing the proportion of assessments achieving high reliability for the adult banks; however, the SE-change rule and fixed-length CATs with eight or fewer items also notably increased the share of assessments with reliability below 0.85. Among the alternative rules explored, the reduced-maximum stopping rule best balanced precision and parsimony, presenting another option beyond the original rules.

Conclusions: Our findings demonstrate the challenge of reducing test burden while still achieving score precision for clinical use. Stopping rules should be modified in accordance with the study population and the purpose of the study.
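The stopping rules contrasted in this abstract can each be expressed as a predicate over the running SE trajectory of a CAT session. A hedged sketch of the three rule families described above; the helper names and the SE-change threshold are illustrative assumptions, not the NIH Toolbox implementation:

```python
def original_rule(se_history, max_items=12, se_stop=0.3):
    """Original rule: stop once SE < se_stop, or after max_items."""
    return len(se_history) >= max_items or se_history[-1] < se_stop

def se_change_rule(se_history, min_change=0.01, max_items=12):
    """SE-change rule: stop when SE improves by less than min_change
    between consecutive items (or at the item cap)."""
    if len(se_history) >= max_items:
        return True
    if len(se_history) < 2:
        return False
    return (se_history[-2] - se_history[-1]) < min_change

def fixed_length_rule(se_history, length=8):
    """Fixed-length rule: stop after a set number of items, regardless of SE."""
    return len(se_history) >= length
```

Each predicate takes the list of SEs observed so far (one entry per administered item) and returns whether to stop; the "reduced maximum" variant is simply `original_rule` with `max_items=8`.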

JMIR Formative Res: Background: Patient-reported outcome measures (PROMs) are crucial for informed medical decisions and evaluating treatments. However, they can be burdensome for patients and sometimes lack the… #ClinicalResearch #PatientOutcomes #AdaptiveTesting #MentalHealth #Reliability
