Abstract
BACKGROUND: Patients seeking orthodontic treatment may use large language models (LLMs) such as ChatGPT for self-education, thereby influencing their decision-making process. This study assesses the reliability and validity of ChatGPT prompts aimed at informing patients about orthodontic side effects and examines patients' perceptions of this information.
MATERIALS AND METHODS: To assess reliability, n = 28 individuals were asked to generate information about side effects related to orthodontic treatment from Generative Pretrained Transformer (GPT)-3.5 and GPT-4, using both self-formulated and standardized prompts. Three experts evaluated the validity of the content generated from these prompts. We then asked a cohort of 46 orthodontic patients about their perceptions after reading an AI-generated information text about orthodontic side effects and compared these with their perceptions of the standard text from the postgraduate orthodontic program at Aarhus University.
RESULTS: Although the GPT-generated answers mentioned several relevant side effects, the replies varied considerably. The experts generally rated the AI-generated content as "neither deficient nor satisfactory," with GPT-4 achieving higher scores than GPT-3.5. Patients perceived the GPT-generated information as more useful and more comprehensive than the standard text and reported less nervousness when reading it. Nearly 80% of patients preferred the AI-generated information over the standard text.
CONCLUSIONS: Although patients generally prefer AI-generated information regarding the side effects of orthodontic treatment, the tested prompts fall short of providing consistently satisfactory, high-quality patient education.
| Original language | English |
| --- | --- |
| Journal | CUREUS |
| Volume | 16 |
| Issue | 8 |
| Number of pages | 12 |
| ISSN | 2168-8184 |
| DOI | |
| Status | Published - 1 Aug 2024 |