Content Accuracy of Options of Multiple-Choice Questions (MCQS) Developed By Pulmonologist And Medical Educationist Verified with ChatGPT

Authors

  • Rano Mal Piryani, Bilawal Medical College for Boys, Liaquat University of Medical & Health Sciences, Jamshoro, Sindh, Pakistan
  • Rajesh Piryani, Akkodis Research, Akkodis, Blagnac, France
  • Shomeeta Piryani, Consultant Radiologist, Memon Medical Institute, Karachi, Sindh, Pakistan

DOI:

https://doi.org/10.3126/jucms.v13i03.88834

Keywords:

AI, Content Accuracy, ChatGPT, Items, MCQs, Options

Abstract

INTRODUCTION
Vetting, the process of reviewing multiple-choice questions (MCQs, or items) including the stem, lead-in question, and options (correct answer and distractors), is carried out by a panel of experts. The objective of this study was to assess the content accuracy of the options (correct answers and distractors) of MCQs developed by a pulmonologist and medical educator against those generated by the Chat Generative Pre-trained Transformer (ChatGPT) from the same stems and lead-in questions, and to assess the rationale provided for the options.

MATERIAL AND METHODS
In the first step, one-best-answer (Type A) MCQs at the undergraduate level were developed by the pulmonologist and medical educator, following item-writing guidelines and using an item-quality checklist. In the second step, the options (correct answers and distractors) for the developed MCQs were generated by ChatGPT, with rationales, using concise, contextual, and relevant prompts. In the third step, the correct answers of those MCQs developed by the medical educator whose correct answer differed from the one created by ChatGPT were verified with ChatGPT twice. In the fourth step, the content accuracy of the options and correct answers of all MCQs was reviewed with ChatGPT. Finally, the percentage of the options and correct answers generated by ChatGPT that were accurate was calculated.
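
For readers wishing to reproduce the option-generation step (step two) programmatically, a minimal sketch is given below. It assumes access to the OpenAI API rather than the ChatGPT web interface used in the study; the model identifier, prompt wording, and example stem are illustrative assumptions, not the authors' actual prompt or items.

# Minimal sketch, assuming OpenAI API access (the study used the ChatGPT web interface).
# The prompt wording and example stem are illustrative, not the authors' actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stem_and_lead_in = (
    "A 55-year-old smoker presents with a two-month history of productive cough "
    "and progressive breathlessness. Which one of the following is the most likely diagnosis?"
)

prompt = (
    "You are assisting with undergraduate pulmonology MCQ writing. "
    "For the stem and lead-in question below, generate one correct answer and four "
    "plausible distractors (Type A, one-best-answer format), and give a brief rationale "
    "for each option.\n\n" + stem_and_lead_in
)

response = client.chat.completions.create(
    model="gpt-4.1",  # illustrative model identifier
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)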

RESULTS
The free version of ChatGPT 4.1 confirmed the content of 91% of the options (correct answers and distractors) as accurate, correct, and acceptable, while 9% were judged possibly incorrect, less specific, and less plausible. The rationales generated by ChatGPT were acceptable.

CONCLUSION
ChatGPT 4.1 may be considered an expert for confirming the content accuracy of MCQ options, including the correct answer and distractors, with acceptable rationales.


Author Biographies

Rano Mal Piryani, Bilawal Medical College for Boys, Liaquat University of Medical & Health Sciences, Jamshoro, Sindh, Pakistan

Department of Pulmonology

Shomeeta Piryani, Consultant Radiologist, Memon Medical Institute, Karachi, Sindh, Pakistan

Department of Radiology

Published

2025-12-31

How to Cite

Rano Mal Piryani, Rajesh Piryani, & Shomeeta Piryani. (2025). Content Accuracy of Options of Multiple-Choice Questions (MCQS) Developed By Pulmonologist And Medical Educationist Verified with ChatGPT. Journal of Universal College of Medical Sciences, 13(03), 46–51. https://doi.org/10.3126/jucms.v13i03.88834

Issue

Vol. 13 No. 03 (2025)

Section

Medical Education