Abstract
This paper reports on a research project that aims to explore how, and to what extent, generative AI can be used to produce different types of explanations that can be activated in writing assistants for Chinese learners of English. It first places the project in its lexicographic context and describes the general methodology, including the limited usefulness of a learner corpus as an empirical basis and the consequent need to use ChatGPT as a supplement when determining the error sub-categories to be explained. On this basis, 26 error sub-categories are identified within the main category of subject-verb disagreement. The paper then compares two generative AI chatbots, Baidu’s Ernie Bot and OpenAI’s ChatGPT, and describes why the latter was found to be the more efficient of the two and was therefore prompted by lexicographers with experience in second-language teaching to write long explanations for each of the error sub-categories. Several examples demonstrate both the chatbot’s remarkable performance and the constant need for human supervision and intervention. At the same time, the paper argues for integrating generative AI directly into writing assistants to produce short default explanations for errors found in learners’ texts. Finally, the paper summarises the findings, including the complex relationship between human and artificial intelligence.
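To make the idea of AI-generated "short default explanations" concrete, the sketch below shows how a writing assistant could, in principle, request such an explanation from a generative AI model for a detected subject-verb disagreement. This is a minimal illustration only, assuming the OpenAI chat completions API; the model name, prompt wording and helper function are hypothetical and do not reproduce the authors' implementation, which relied on lexicographer-supervised prompting of ChatGPT and Ernie Bot.

```python
# Illustrative sketch: asking a generative AI model for a short, learner-friendly
# explanation of a subject-verb agreement error detected in a learner's sentence.
# Client, model name and prompt wording are assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def short_explanation(sentence: str, correction: str) -> str:
    """Request a brief explanation of the error, suitable for an L2 learner."""
    prompt = (
        "A Chinese learner of English wrote:\n"
        f"  {sentence}\n"
        "The corrected version is:\n"
        f"  {correction}\n"
        "In at most two sentences, explain the subject-verb agreement error "
        "in simple English suitable for an intermediate learner."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name for illustration
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep explanations stable rather than creative
    )
    return response.choices[0].message.content


# Example call with a typical subject-verb disagreement error
print(short_explanation("The results shows a clear trend.",
                        "The results show a clear trend."))
```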
Original language | English |
---|---|
Journal | Lexikos |
Volume | 34 |
Pages (from-to) | 397-418 |
Number of pages | 22 |
ISSN | 1684-4904 |
DOIs | |
Publication status | Published - Oct 2024 |
Keywords
- Automatic error correction
- Chatbots
- Error explanation
- Frequency criteria
- Generative AI
- L2-learning
- Language model
- Learner corpus
- Mother-tongue glosses
- Writing assistants
- Lemma-centered lexicographical databases
- Problem-centered lexicographical databases
- Human-AI symbiosis
- Human-AI collaboration
- Human-AI co-creation
- ChatGPT
- Ernie Bot