Open Access

GPT Modeling for English Learning Assistance Functions in Non-Native Language Environments

03 Sep 2024
ABOUT THIS ARTICLE


In this paper, we design an English multi-turn dialogue model based on the pre-trained GPT-2 model and optimize the decoding strategy by replacing greedy search and beam search with Top-p and Top-k sampling. A dialogue-history keyword copy mechanism is then introduced to improve contextual consistency. Building on this GPT-2 English multi-turn dialogue model, a conversational teaching model for English instruction in a non-native language environment is constructed, and its impact on students' theoretical learning and teaching skills is examined. The results show that the BLEU-1 score of the GPT-2DH model pre-trained on the ChatterBot corpus is 0.356, nearly double that of the seq2seq baseline, indicating that the model proposed in this paper is superior. Compared with traditional teaching methods, the GPT dialogic learning model significantly improved students' theoretical learning performance, English listening and speaking skills, learning attitude (P=0.029<0.05), and extrinsic learning motivation (P=0.046<0.05), but had no significant effect on their instructional design skills or intrinsic learning motivation.
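
As a rough illustration of the decoding strategy mentioned above, the following minimal PyTorch sketch shows how Top-k and Top-p (nucleus) filtering are typically applied to a logits vector before sampling the next token, instead of taking the greedy argmax. The function name, the k=50 and p=0.9 values, and the vocabulary size are illustrative assumptions for this sketch, not settings reported in the paper.

    import torch
    import torch.nn.functional as F

    def top_k_top_p_filtering(logits, top_k=50, top_p=0.9):
        # Illustrative sketch; hyperparameters are assumptions, not the paper's values.
        # Top-k: mask every token whose logit is below the k-th largest logit.
        if top_k > 0:
            kth_value = torch.topk(logits, top_k).values[..., -1, None]
            logits = logits.masked_fill(logits < kth_value, float("-inf"))
        # Top-p (nucleus): keep the smallest set of tokens whose cumulative
        # probability exceeds p, masking the rest.
        if 0.0 < top_p < 1.0:
            sorted_logits, sorted_idx = torch.sort(logits, descending=True)
            cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
            remove = cum_probs > top_p
            remove[..., 1:] = remove[..., :-1].clone()  # shift right so the first token is always kept
            remove[..., 0] = False
            remove_orig = torch.zeros_like(remove).scatter_(-1, sorted_idx, remove)
            logits = logits.masked_fill(remove_orig, float("-inf"))
        return logits

    # Sample the next token from the filtered distribution (vocabulary size is GPT-2's 50257).
    logits = torch.randn(1, 50257)
    filtered = top_k_top_p_filtering(logits.clone(), top_k=50, top_p=0.9)
    next_token = torch.multinomial(F.softmax(filtered, dim=-1), num_samples=1)

In contrast to greedy or beam search, sampling from the filtered distribution keeps the responses diverse while still excluding the low-probability tail, which is why this family of decoding strategies is commonly paired with GPT-2 for open-ended dialogue.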

Language:
English
Publication frequency:
Once a year
Journal subjects:
Biological Sciences, Life Sciences, other, Mathematics, Applied Mathematics, General Mathematics, Physics, Physics, other