#MultipleChoiceQuestions

Latest posts tagged with #MultipleChoiceQuestions on Bluesky

Fine-Tuned Large Language Models for Generating Multiple-Choice Questions in Anesthesiology: Psychometric Comparison With Faculty-Written Items

Background: Multiple-choice questions (MCQs) are widely used in medical education to ensure standardized and objective assessment. Developing high-quality items requires both subject expertise and methodological rigor. Large language models (LLMs) offer new opportunities for automated item generation; however, most evaluations rely on general-purpose prompting, and psychometric comparisons with faculty-written items remain scarce.

Objective: This study aimed to evaluate whether a fine-tuned LLM can generate single-best-answer (Type A) MCQs in anesthesiology with psychometric properties comparable to those written by expert faculty.

Methods: The study was embedded in the regular written anesthesiology examination of the eighth-semester medical curriculum, taken by 157 students. The examination comprised 30 single-best-answer MCQs, of which 15 were written by senior faculty and 15 were generated by a fine-tuned GPT-based model. A custom GPT-4-based model was adapted with anesthesiology lecture slides, the National Competence-Based Learning Objectives Catalogue (NKLM 2.0), past examination questions, and faculty publications, using supervised instruction-tuning with standardized prompt–response pairs. Item analysis followed established psychometric standards.

Results: In total, 29 items (14 expert-written, 15 LLM-generated) were analyzed. Expert-written questions had a mean difficulty of 0.81 (SD 0.19), point-biserial correlation of 0.19 (SD 0.07), and discrimination index of 0.09 (SD 0.08). LLM-generated items had a mean difficulty of 0.79 (SD 0.18), point-biserial correlation of 0.17 (SD 0.04), and discrimination index of 0.08 (SD 0.11). Mann-Whitney tests revealed no significant differences between expert- and LLM-generated items for difficulty (P=.38), point-biserial correlation (P=.96), or discrimination index (P=.59). Categorical analyses confirmed no significant group differences. Both sets, however, showed only modest psychometric quality.

Conclusions: Supervised fine-tuned LLMs can generate MCQs with psychometric properties comparable to those written by experienced faculty. Given the limitations and cohort dependency of psychometric indices, automated item generation should be considered a complement to, rather than a replacement for, manual item writing. Further research with larger item sets and multi-institutional validation is needed to confirm generalizability and to optimize the integration of LLM-based tools into assessment development.
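For readers who want to see how the indices named in the Results are typically computed, here is a minimal Python sketch of classical item analysis (difficulty, point-biserial correlation, upper-lower discrimination index) plus a Mann-Whitney comparison of two item sets. The simulated 0/1 response matrix, the 27% upper-lower split, and the group indexing are illustrative assumptions; the study's actual analysis code and data are not part of the post.

```python
# Classical item-analysis indices computed from a hypothetical 0/1 response
# matrix (students x items). All data below are simulated for illustration.
import numpy as np
from scipy.stats import pointbiserialr, mannwhitneyu

rng = np.random.default_rng(0)
responses = (rng.random((157, 30)) < 0.8).astype(int)  # 157 students, 30 items

def item_difficulty(resp):
    # Proportion of students answering each item correctly (higher = easier).
    return resp.mean(axis=0)

def point_biserial(resp):
    # Correlation of each item score with the total score on the remaining items.
    totals = resp.sum(axis=1)
    return np.array([
        pointbiserialr(resp[:, j], totals - resp[:, j])[0]
        for j in range(resp.shape[1])
    ])

def discrimination_index(resp, frac=0.27):
    # Difference in item difficulty between the top and bottom 27% of students
    # ranked by total score.
    totals = resp.sum(axis=1)
    order = np.argsort(totals)
    n = max(1, int(frac * resp.shape[0]))
    low, high = resp[order[:n]], resp[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)

diff = item_difficulty(responses)
rpb = point_biserial(responses)
disc = discrimination_index(responses)

# Compare two 15-item subsets (e.g., faculty-written vs. LLM-generated)
# index by index, mirroring the Mann-Whitney tests reported in the Results.
expert_idx, llm_idx = np.arange(15), np.arange(15, 30)
for name, vals in [("difficulty", diff), ("point-biserial", rpb), ("discrimination", disc)]:
    stat, p = mannwhitneyu(vals[expert_idx], vals[llm_idx])
    print(f"{name}: U={stat:.1f}, P={p:.2f}")
```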

JMIR Formative Res: Fine-Tuned Large Language Models for Generating Multiple-Choice Questions in Anesthesiology: Psychometric Comparison With Faculty-Written Items #Anesthesiology #MedicalEducation #MultipleChoiceQuestions #LearningAssessment #LanguageModels

Marketing Management Kotler 14th Edition Multiple Choice Questions

Marketing Management Kotler 14th Edition Multiple Choice Questions have become a popular tool for testing and assessing students’ knowledge in the field of marketing. These multiple-choice questions are based on the renowned textbook “Marketing Management” by Philip Kotler, which is widely recognized as a fundamental resource for marketing professionals. Spanning several editions, the 14th […]

Marketing Management Kotler 14th Edition Multiple Choice Questions: Marketing Management Kotler 14th Edition Multiple Choice Questions have become a popular tool for testing and assessing students’ knowledge… #MarketingManagement #PhilipKotler #MultipleChoiceQuestions #MarketingEducation #StudyTips


Attempt the GK MCQ by PendulumEdu to find out the primary purpose of RAM (Random Access Memory) in a computer.
pendulumedu.com/qotd/what-is...
#GKmcq #Mathmcqs #ScienceMCQs #PolityMCQs #GeographyMCQs #AncientHistoryMCQ #QuestionofTheDay #DailyMCQs #MCQsquiz #MultipleChoiceQuestions #Quizoftheday

Using a Hybrid of AI and Template-Based Method in Automatic Item Generation to Create Multiple-Choice Questions in Medical Education: Hybrid AIG

Background: Template-based automatic item generation (AIG) is more efficient than traditional item writing, but it still relies heavily on expert effort in model development. Non-template-based AIG, leveraging artificial intelligence (AI), offers efficiency but faces accuracy challenges. Medical education, a field that relies heavily on both formative and summative assessments with multiple-choice questions, is in dire need of AI-based support for the efficient automatic generation of items.

Objective: We aimed to propose a Hybrid AIG method and demonstrate whether it is possible to generate item templates using AI in the field of medical education.

Methods: This is a mixed-methods methodological study with proof-of-concept elements. We propose the Hybrid AIG method as a structured series of interactions between a human subject matter expert and AI, designed as a collaborative authoring effort. The method leverages AI to generate item models (templates) and cognitive models, combining the advantages of the two AIG approaches. To demonstrate how to create item models using Hybrid AIG, we used two medical multiple-choice questions: one on respiratory infections in adults and another on acute allergic reactions in the pediatric population.

Results: The Hybrid AIG method we propose consists of seven steps. The first five steps are performed by an expert in a customized AI environment and involve providing a parent item, identifying elements for manipulation, selecting options and assigning values to elements, and generating the cognitive model. After a final expert review (Step 6), the content in the template can be used for item generation with traditional (non-AI) software (Step 7). We showed that AI is capable of generating item templates for AIG under the control of a human expert in only 10 minutes. Leveraging AI made template development less challenging.

Conclusions: The Hybrid AIG method transcends the traditional template-based approach by marrying the “art” that comes from AI as a “black box” with the “science” of algorithmic generation under the oversight of an expert as a “marriage registrar”. It not only capitalizes on the strengths of both approaches but also mitigates their weaknesses, offering a human-AI collaboration that increases efficiency in medical education.
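As an illustration of the final, non-AI generation step described above (Step 7), here is a minimal Python sketch of filling an item model with assigned element values and a simple cognitive-model lookup for the keyed answer. The template text, element names, values, and keyed answers are invented for this example and are not taken from the study's actual item models.

```python
# Traditional (non-AI) template filling: enumerate element-value combinations
# for an item model and attach the keyed answer from a simple cognitive model.
# All content below is hypothetical and for illustration only.
from itertools import product

item_model = (
    "A {age}-year-old patient presents with {symptom}. "
    "Which of the following is the most likely diagnosis?"
)

# Elements selected for manipulation and the values assigned to them.
elements = {
    "age": ["5", "35", "70"],
    "symptom": [
        "acute wheezing and urticaria after exposure to an allergen",
        "productive cough with fever",
    ],
}

# A toy cognitive model: map each symptom value to its keyed answer.
keyed_answer = {
    "acute wheezing and urticaria after exposure to an allergen": "Acute allergic reaction",
    "productive cough with fever": "Community-acquired pneumonia",
}

def generate_items(model, elements):
    # Render the stem for every combination of element values.
    names = list(elements)
    for combo in product(*elements.values()):
        values = dict(zip(names, combo))
        yield {"stem": model.format(**values), "key": keyed_answer[values["symptom"]]}

for item in generate_items(item_model, elements):
    print(item["stem"], "->", item["key"])
```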

JMIR Formative Res: Using a Hybrid of AI and Template-Based Method in Automatic Item Generation to Create Multiple-Choice Questions in Medical Education: Hybrid AIG #MedicalEducation #ArtificialIntelligence #ItemGeneration #MultipleChoiceQuestions #HealthTech
