Gathering Validity Evidence for a Simulation-Based Test of Otoscopy Skills

Josefine Hastrup von Buchwald, Martin Frendø, Andreas Frithioff, Anders Britze, Thomas Winther Frederiksen, Jacob Melchiors, Steven Arild Wuyts Andersen

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Objective: Otoscopy is a key clinical examination used by multiple healthcare providers, but training and testing of otoscopy skills remain largely uninvestigated. Simulator-based assessment of otoscopy skills exists, but evidence on its validity is scarce. In this study, we explored automated assessment and performance metrics of an otoscopy simulator through collection of validity evidence according to Messick's framework.

Methods: Novices and experienced otoscopists completed a test program on the Earsi otoscopy simulator. Automated assessments of diagnostic ability and performance were compared with manual ratings of technical skills. Reliability of assessment was evaluated using generalizability theory. Linear mixed models and correlation analysis were used to compare automated and manual assessments. Finally, we used the contrasting groups method to define a pass/fail level for the automated score.

Results: A total of 12 novices and 12 experienced otoscopists completed the study. We found an overall G-coefficient of .69 for the automated assessment. The experienced otoscopists achieved a significantly higher mean automated score than the novices (59.9% (95% CI [57.3%-62.6%]) vs. 44.6% (95% CI [41.9%-47.2%]), P < .001). For the manual assessment of technical skills, there was no significant difference between groups, nor did the automated score correlate with the manually rated score (Pearson's r = .20, P = .601). We established a pass/fail standard of 49.3% for the simulator's automated score.

Conclusion: We explored validity evidence supporting an otoscopy simulator's automated score, demonstrating that this score mainly reflects cognitive skills. Manual assessment therefore still seems necessary at this point, and external video recording is required for valid assessment of technical skills. To improve reliability, the test program should include more cases to achieve a higher G-coefficient, and a higher pass/fail standard should be used.
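
The conclusion that adding cases would raise the G-coefficient follows from how generalizability coefficients scale in a decision (D) study. The Python sketch below is illustrative only: it assumes a simple person × case design, and the variance components are hypothetical (scaled so that an 8-case test lands near the reported .69); the study's actual design, number of cases, and variance components are not given in this record.

```python
def projected_g(var_person: float, var_residual: float, n_cases: int) -> float:
    """Relative G-coefficient for a person x case design, projected to a
    test with n_cases cases (a D-study calculation): true-score variance
    divided by itself plus the case-averaged error variance."""
    return var_person / (var_person + var_residual / n_cases)

# Hypothetical variance components, chosen so that an 8-case test yields
# roughly the reported G-coefficient of .69; NOT taken from the paper.
var_person, var_residual = 1.0, 3.6
for n_cases in (8, 16, 24):
    print(f"{n_cases:2d} cases -> G = {projected_g(var_person, var_residual, n_cases):.2f}")
```

Under these made-up components the projection rises from about .69 at 8 cases to about .87 at 24 cases, which is the mechanism behind the authors' recommendation to lengthen the test.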
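
The contrasting groups method named in Methods sets the pass/fail score where the score distributions of the two groups cross. Below is a minimal sketch, assuming a normal distribution fitted to each group; the helper `contrasting_groups_cutoff` and the scores are invented for illustration, so the output will not reproduce the study's 49.3% standard.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def contrasting_groups_cutoff(novice_scores, experienced_scores):
    """Pass/fail standard via the contrasting groups method: fit a normal
    distribution to each group's scores and return the score between the
    two means where the fitted densities intersect."""
    mu_n, sd_n = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    mu_e, sd_e = np.mean(experienced_scores), np.std(experienced_scores, ddof=1)

    def density_gap(x):
        return norm.pdf(x, mu_n, sd_n) - norm.pdf(x, mu_e, sd_e)

    # The gap is positive at the novice mean and negative at the
    # experienced mean, so a root is bracketed between them.
    return brentq(density_gap, mu_n, mu_e)

# Invented percentage scores for 12 + 12 participants; NOT the study's data.
novices = [41, 43, 44, 45, 46, 44, 43, 47, 45, 44, 46, 47]
experienced = [56, 58, 60, 61, 59, 63, 57, 60, 64, 59, 62, 60]
print(f"Pass/fail standard: {contrasting_groups_cutoff(novices, experienced):.1f}%")
```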

Original language: English
Journal: Annals of Otology, Rhinology and Laryngology
Volume: 134
Issue: 2
Pages (from-to): 70-78
Number of pages: 9
ISSN: 0003-4894
DOIs
Publication status: Published - Feb 2025

Keywords

  • Adult
  • Clinical Competence
  • Educational Measurement/methods
  • Female
  • Humans
  • Male
  • Otolaryngology/education
  • Otoscopy/methods
  • Reproducibility of Results
  • Simulation Training
  • technical skills training
  • handheld otoscopy
  • otology
  • simulation-based training
  • evidence-based medical education
