Abstract
Research shows that peer feedback can significantly impact higher education students’ performance. However, peer feedback is often of poor quality. Studies indicate that peer feedback quality can be enhanced by using prompts to guide students’ feedback composition. To date, interventions have applied static prompts that do not adapt to students’ peer feedback input. Hence, this study explores the impact of an AI-based feedback coach that dynamically prompts students during feedback provision. The following research question was addressed: Does an AI-based feedback coach influence the composition of written peer feedback?
To investigate this, a quasi-experimental study was conducted in a competency development course for Ph.D. students (N = 46). Participants were tasked with submitting a written lesson plan in a digital learning environment and providing peer feedback. Students were divided into an intervention group (IG, n = 20) and a control group (CG, n = 26). In the IG, an AI feedback coach was enabled during the feedback process; in the CG, it was not.
Peer feedback quality was assessed by analyzing the feedback comments concerning length, readability, and sentiment using Natural Language Processing. Using Prins et al.’s (2006) feedback quality index (e.g., use of examples and explanations), a quantitative-qualitative content analysis of each feedback comment is currently being conducted.
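The abstract does not specify how the three text metrics were operationalized. As a purely illustrative sketch, comment length, readability, and sentiment could be computed roughly as follows (all function names, the syllable heuristic, and the mini sentiment lexicon are hypothetical stand-ins, not the study’s actual pipeline):

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word: str) -> int:
        # crude proxy: count groups of consecutive vowels
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syll = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)

# tiny illustrative lexicon; a real analysis would use a validated sentiment resource
POSITIVE = {"good", "clear", "helpful", "strong", "excellent"}
NEGATIVE = {"unclear", "missing", "weak", "confusing", "poor"}

def analyze_comment(comment: str) -> dict:
    """Return word count, readability score, and a naive lexicon-based sentiment
    in [-1, 1] for one peer feedback comment."""
    tokens = re.findall(r"[A-Za-z']+", comment.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return {
        "length_words": len(tokens),
        "readability": flesch_reading_ease(comment),
        "sentiment": (pos - neg) / max(1, pos + neg),
    }
```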
Analyses showed that CG participants wrote 326 comments, while IG participants wrote 418 comments. Preliminary findings indicate that participants with AI coaching wrote significantly shorter comments (t-test, p < .05, d = 0.15). Readability scores and sentiment analysis did not yield significant findings. Results from the in-depth content analysis will be presented at the conference.
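The reported group comparison rests on a t-test and Cohen’s d; the abstract does not state which t-test variant was used. A minimal stdlib sketch of the two statistics (Welch’s t is assumed here; the sample lists are hypothetical):

```python
import math
from statistics import mean, stdev

def cohens_d(a: list[float], b: list[float]) -> float:
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(
        ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    )
    return (mean(a) - mean(b)) / pooled_sd

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic (does not assume equal variances)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)
```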
This study contributes to the field of feedback research by analyzing how AI applications can enhance peer feedback quality and the peer feedback process.
Original language | English
---|---
Publication date | 26 Jun 2024
Number of pages | 3
Status | Published - 26 Jun 2024
Event | JURE 2024 - Faculty of Psychology of the University of Sevilla, C. Camilo José Cela, s/n, 41018 Sevilla, Sevilla, Spain. Duration: 24 Jun 2024 → 28 Jun 2024. Conference number: 28th. https://www.earli.org/events/jure2024
Conference
Conference | JURE 2024
---|---
Number | 28th
Location | Faculty of Psychology of the University of Sevilla, C. Camilo José Cela, s/n, 41018 Sevilla.
Country/Territory | Spain
City | Sevilla
Period | 24/06/2024 → 28/06/2024
Internet address | https://www.earli.org/events/jure2024
Keywords
- learning
- generative AI
- university didactics
- learning technology