The Application of Intelligent Translation System Based on Machine Translation in English Education Curriculum Reforms
Published online: 21 Mar 2025
Received: 08 Nov 2024
Accepted: 08 Feb 2025
DOI: https://doi.org/10.2478/amns-2025-0562
© 2025 Xiaoyan Cao, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
With the rapid development of science and technology, intelligent translation has gradually become an important tool in English education; in classroom reform in particular, machine-translation-based intelligent translation systems play an important role.
Machine translation is the process of using computers to translate text from one natural language into another [1–2]. Advances in artificial intelligence technologies such as deep learning, machine learning and neural networks have driven the emergence of neural machine translation, which in turn promotes the development of education and teaching, changing both the demand for talent and the shape of education. Intelligent machine translation is applied in the English writing classroom in two main forms: pre-editing and post-editing [3–5]. Pre-editing refers to using an intelligent translation system to translate fixed expressions, proper nouns and other such items before the writing itself begins, and then completing the composition with the system's assistance [6–7].
It has been found that when Google Translate is used as a pre-editing tool in writing activities for low-level English language learners, students produce a greater number of conceptual categories, generate more original ideas, write more fluent compositions, express themselves better in the second language, and show enhanced creativity in writing [8–10]. Post-editing refers to students using an intelligent translation system in the classroom to translate their native language into English, then detecting, selecting and modifying the machine-translated content, negotiating its meaning with their peers, and finally completing the translation [11–12]. The basic principle of an intelligent translation system can be summarized as using machine learning and deep learning algorithms to train on an English corpus, build statistical models, and realize the comprehension and conversion of English text [13–16]. From a linguistic point of view, an intelligent translation system supports lexical-grammatical knowledge, facilitates reading comprehension and writing, and ultimately promotes language learning. From an affective perspective, it reduces language anxiety, increases motivation and confidence, and creates a stress-free learning environment for learners. The use of machine-intelligent translation systems cuts both ways, however, and English teachers should be aware of its possibilities and limitations and provide adequate and effective guidance to their students [17–20].
Intelligent translation systems thus have substantial application value and influence in the reform of English education classes: they not only change traditional education modes and methods and improve the efficiency and accuracy of translation, but also bring new opportunities to the field of English education, playing an important role in advancing classroom reform.
In this paper, an intelligent translation model based on machine translation is constructed. The self-attention encoder of the traditional Transformer model is improved using dependency syntactic structure: the structure is expressed as a relation matrix that guides the encoding of source-language utterances. On top of the conventional model structure, linguistic structural information is added, and the structured knowledge of dependency syntax is used to guide the updating of semantic information at each step of decoding, so that semantic information is captured in a more focused way. Combining the idea of word blocks with dependency syntactic structure, a relatively smooth hidden-variable updating strategy is adopted: the syntactic relationship between neighboring words yields a weight that acts as a gating switch for updating or maintaining semantic information, thereby enhancing the translation performance of the model. On the basis of this intelligent translation model, a Chinese-English intelligent translation system is constructed, and English teaching experiments are carried out below to explore the practical effect of the system in English education reform.
With the continued development of Internet technology and the emergence of artificial intelligence (AI), the role of machine translation in the field of English translation is becoming increasingly prominent. As an emerging technology, machine translation integrates computer technology and AI: it is the technology of translating (converting) one language into another with the help of computers. It is an intelligent, high-tech product whose main function is to reduce the communication obstacles caused by language barriers and to make the work of English education and classroom reform more convenient.
In this paper, we first give a preliminary introduction to dependency syntax and neural-network-based models, and then build an intelligent translation model based on machine translation that integrates dependency syntax, the attention mechanism, and neural network models such as the Transformer.
Dependency syntactic analysis is a natural language processing technique designed to identify dependencies between the words of a sentence [21]. These dependencies can be represented in a tree structure called a dependency syntax tree. The aim of dependency syntactic analysis is to discover the connections between every word and the other words in the sentence and to establish a dependency syntax tree with a specific word as its root node. In this tree structure, every word is a node and the connections between words are edges. Common dependency relations include the subject-predicate relation, the verb-object relation, and the attributive (modifier-head) relation. Dependency parsing models are usually trained and applied using machine learning algorithms or deep learning models. Dependency syntactic analysis has a wide range of applications in modern natural language processing, such as machine translation, text categorization and information extraction; as the example below illustrates, it helps computers understand the meaning and structure of natural language, enabling more accurate natural language processing tasks.
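As a concrete illustration (not part of this paper's system), a dependency parse can be obtained with the spaCy library; the sentence and the en_core_web_sm model below are illustrative choices.

```python
# A minimal dependency-parsing illustration with spaCy (assumes the
# en_core_web_sm model was installed via `python -m spacy download`).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The new service provides official weather observations.")

for token in doc:
    # Each token points to exactly one head; the root points to itself.
    print(f"{token.text:<12} --{token.dep_:>6}--> {token.head.text}")
```

Each printed line corresponds to one edge of the dependency tree, with the relation label between dependent and head.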
Transition-based and graph-based dependency parsing are the most commonly used methods. Transition-based methods consist of two parts: states and actions. States record incomplete predictions, while actions control the transitions between states. To generate a dependency syntax tree, parsing starts from an initial empty state, moves to the next state through actions, and builds the tree step by step; the final state preserves a complete dependency tree from which the relationships between words can be read off. A toy sketch of this transition process is given below.
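The following sketch illustrates the state/action mechanics in arc-standard style; the sentence and the hard-coded action sequence are illustrative stand-ins for the classifier that would normally predict each action.

```python
# A toy arc-standard transition parser: a state is (stack, buffer, arcs),
# and a sequence of actions builds the dependency tree step by step.
def parse(words, actions):
    stack, buffer, arcs = [], list(range(len(words))), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT":                # second-from-top depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))  # (head, dependent)
        elif act == "RIGHT":               # top depends on second-from-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

words = ["new", "service", "provides", "observations"]
actions = ["SHIFT", "SHIFT", "LEFT", "SHIFT", "LEFT", "SHIFT", "RIGHT"]
for head, dep in parse(words, actions):
    print(f"{words[head]} -> {words[dep]}")
```

The final state keeps the root of the completed tree on the stack, matching the description above.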
RNN
Recurrent Neural Network (RNN) is a neural network structure mainly used to process sequential data, in which each input data point is related to the previous ones [22]. An RNN models sequential information by recursively passing state along the elements of a sequence.
It has a memory function that allows it to capture long-term dependencies in a sequence. An RNN consists of a recurrent unit that, at each step, receives the current input and the hidden state of the previous step and computes the output and the new hidden state for the current time step. RNNs are most commonly used for tasks such as natural language processing and speech recognition, which involve sequential inputs and outputs. In detail, the RNN receives an input $x_t$ at each time step $t$ and updates its hidden state as
$$h_t = \tanh(W_x x_t + W_h h_{t-1} + b),$$
from which the output of the current step is computed.
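As a minimal sketch of this recurrence (the dimensions and random weights here are illustrative, not this paper's configuration):

```python
# A minimal vanilla RNN cell in NumPy: at each step it combines the current
# input x_t with the previous hidden state h_prev to produce the new state.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8
W_x = rng.normal(scale=0.1, size=(d_hid, d_in))
W_h = rng.normal(scale=0.1, size=(d_hid, d_hid))
b = np.zeros(d_hid)

def rnn_step(x_t, h_prev):
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(d_hid)
for x_t in rng.normal(size=(5, d_in)):  # a length-5 input sequence
    h = rnn_step(x_t, h)                # h carries memory across steps
print(h.shape)                          # (8,)
```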
LSTM model
Long Short-Term Memory (LSTM) is a deep neural network commonly used for sequence modeling; compared with traditional recurrent neural networks, it better captures long-term dependencies when dealing with time-series data. This is mainly due to the three gating mechanisms introduced in the LSTM: the forget gate, the input gate and the output gate, which control the flow of information and thus effectively mitigate the vanishing gradient problem. The forget gate decides whether to remove specific information from the cell state based on the input at the current moment and the state at the previous moment. The input gate decides, on the same basis, how much new information should be added to the state, and the output gate determines how much of the cell state is exposed as the hidden output. As a special type of recurrent neural network mainly used for time-series data, the LSTM alleviates the vanishing and exploding gradient problems of traditional recurrent networks through these three gates, enabling the network to capture long-range historical dependencies well.
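For reference, the standard LSTM gate equations take the following form, with $\sigma$ the logistic sigmoid and $\odot$ element-wise multiplication:

```latex
% Standard LSTM equations: f_t (forget), i_t (input), o_t (output) gates,
% c_t the cell state, h_t the hidden state, [h_{t-1}; x_t] a concatenation.
\begin{aligned}
f_t &= \sigma(W_f [h_{t-1}; x_t] + b_f) \\
i_t &= \sigma(W_i [h_{t-1}; x_t] + b_i) \\
o_t &= \sigma(W_o [h_{t-1}; x_t] + b_o) \\
\tilde{c}_t &= \tanh(W_c [h_{t-1}; x_t] + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```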
Attention mechanism model
The attention mechanism is a technique used to direct a neural network's attention to the important parts of the input, enabling the network to better capture the relevant features of the input information [23]. It was initially applied in natural language processing but is now widely used in fields such as image recognition, speech recognition, and machine translation. In an ordinary neural network, each input feature is treated equally, without recognition of its importance. In the attention mechanism, by contrast, each input feature is given a weight indicating its importance for the current task. In this way, the network can more accurately identify the most important parts of the input data and focus its processing on them. In practice, there are many ways to implement attention; one of the most common is softmax attention, which takes the weighted sum of the input features as the output, with the weights computed by the softmax function:
$$\alpha_i = \frac{\exp(e_i)}{\sum_{j} \exp(e_j)}, \qquad \mathrm{output} = \sum_{i} \alpha_i v_i,$$
where $e_i$ is the relevance score of the $i$-th input feature and $v_i$ its value vector. In short, the attention mechanism is a very useful technique that helps neural networks handle complex input data and improves the prediction accuracy of the model.
Transformer model
Transformer is a neural network model built on the attention mechanism. It consists of two parts, an encoder and a decoder, each composed of several layers. The encoder embeds each word of the input sequence into a vector space and applies a multi-head attention mechanism to obtain an encoded representation. This encoded representation is fed into the decoder for subsequent processing, ultimately generating an output sequence in the target language. Unlike traditional recurrent or convolutional neural networks, the Transformer uses a self-attention mechanism, i.e., each word computes its own representation as a weighted sum over the other words. In addition, the Transformer introduces residual connections, normalization and dropout to speed up model training and prevent overfitting.
Transformer has achieved excellent results in natural language processing; for example, on the WMT2014 English-German translation task its BLEU score exceeded the previous best model, and it achieved the best results on several benchmarks. Beyond translation, the Transformer can be applied to text categorization, text generation, and other tasks. It is a powerful and flexible model that can be used to solve a wide range of natural language processing problems and plays an important role in state-of-the-art NLP models.
The Transformer model as a whole still uses the encoder-decoder seq2seq structure, with major changes in the specific implementation on both sides: each uses a multi-head self-attention layer with residual connections as the main unit structure, and multi-layer perceptrons adjust the outputs at certain positions [24]. A sketch of such an encoder layer follows.
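The following compact sketch assembles one such encoder layer from PyTorch building blocks; the dimensions are the common Transformer defaults, not values taken from this paper.

```python
# One Transformer encoder layer as described above: multi-head self-attention
# with a residual connection, then a position-wise MLP with a residual
# connection, each followed by layer normalization.
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):                   # x: (batch, seq_len, d_model)
        a, _ = self.attn(x, x, x)           # self-attention over the sequence
        x = self.norm1(x + a)               # residual + layer norm
        return self.norm2(x + self.mlp(x))  # MLP + residual + layer norm

out = EncoderLayer()(torch.randn(2, 10, 512))
print(out.shape)                            # torch.Size([2, 10, 512])
```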
The multi-head self-attention mechanism is the key to the Transformer. Although the idea of self-attention replaces RNN-style sequence modeling and reduces the time complexity from the order of the sequence length to the order of the (uniform) network depth, it also makes it difficult to model the positional relationships of the input words: lacking a serialized information-transfer process, the positional information of words is hard to obtain. Here, instead of having the network learn positions, a fixed function called the positional embedding is used to express positional information:
$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right),$$
where $pos$ denotes the position of the current word, $i$ the dimension index, and $d_{\mathrm{model}}$ the embedding dimension.
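The sinusoidal positional embedding can be computed directly; this sketch follows the formula above, with illustrative dimensions.

```python
# Sinusoidal positional embedding as in the Transformer: even dimensions use
# sine, odd dimensions use cosine, with geometrically spaced wavelengths.
import numpy as np

def positional_embedding(max_len, d_model):
    pos = np.arange(max_len)[:, None]              # word positions
    i = np.arange(0, d_model, 2)[None, :]          # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

print(positional_embedding(50, 512).shape)         # (50, 512)
```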
Although the Transformer model has made great progress, its modeling remains at the level of sequence structure, and its treatment of positional information is relatively simple. These weaknesses in structural and positional information leave room for improvement. Dependency syntactic structures provide a more principled way of transferring semantic information at the syntactic level, so the encoder of the self-attention model can be improved using them.
Dependency syntactic structure representation based on the idea of self-attention
The self-attention mechanism computes pairwise attention scores between all words and weights the semantic vectors accordingly, which is similar in form to the structure of dependency relations; by analogy with self-attention, the dependency syntax tree can therefore be encoded as a dependency matrix. Specifically, let the encoded sentence length be $n$ and let $D \in \mathbb{R}^{n \times n}$ be the dependency matrix. For two words with no dependency relationship, the dependency weight is set to 0; when a dependency relationship exists, the more levels apart the two words are, the smaller the weight, for example
$$D_{ij} = \alpha^{k-1},$$
where word $j$ is an ancestor of word $i$ lying $k$ levels above it and $0 < \alpha < 1$ is a decay factor. Since the effect of reversing a dependency is only the reversal of the key-value relationship, the reverse-direction matrix is the transpose $D^{\top}$ of the dependency matrix $D$. A sketch of constructing such a matrix is given below.
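The following sketch builds such a dependency matrix from head indices; the decay factor 0.5 is an illustrative assumption, since the paper's exact weighting was lost in extraction.

```python
# Dependency matrix D as described above: D[i][j] is positive when word j is
# an ancestor of word i in the dependency tree, decaying as the number of
# levels between them grows, and 0 when no dependency path exists.
import numpy as np

def dependency_matrix(heads, decay=0.5):
    """heads[i] = index of word i's head, or -1 for the root."""
    n = len(heads)
    D = np.zeros((n, n))
    for i in range(n):
        j, level = heads[i], 1
        while j != -1:                       # walk up toward the root
            D[i, j] = decay ** (level - 1)   # closer ancestors weigh more
            j, level = heads[j], level + 1
    return D

# "new -> service -> provides", "observations -> provides"
print(dependency_matrix([1, 2, -1, 2]))
```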
Self-attention model with added syntactic knowledge
Following the idea of the self-attention mechanism, the query vectors used to compute attention, the key representations being scored, and the value vectors they carry are packed into three matrices, query ($Q$), key ($K$) and value ($V$), and the weighted sum of values is computed with the query-key correlations as the weights, as follows:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$
When dependency information is included, the relevance scores are merged with the dependency matrix, and the attention scoring function $\mathrm{Score}(Q, K)$ is updated by weighting with $D$:
$$\mathrm{Score}_{\mathrm{fwd}}(Q, K) = \frac{QK^{\top}}{\sqrt{d_k}} \odot D$$
Considering the directionality of dependencies, in addition to this forward modeling, the reverse structural information is also expressed, i.e., the attention scores are weighted with the transposed dependency matrix:
$$\mathrm{Score}_{\mathrm{rev}}(Q, K) = \frac{QK^{\top}}{\sqrt{d_k}} \odot D^{\top}$$
Two different "head" structures are thus defined: forward heads using $D$ and reverse heads using $D^{\top}$. The outputs of all head structures are concatenated and dimensionally transformed to obtain the final output:
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^{O}$$
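The sketch below shows how such dependency-weighted attention scores could be computed; the element-wise weighting and the placeholder matrix D are assumptions consistent with the reconstruction above.

```python
# Dependency-weighted scaled dot-product attention: the standard scores are
# element-wise weighted by D for forward heads and by D.T for reverse heads
# before the softmax, injecting syntactic structure into the attention.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dep_attention(Q, K, V, D, reverse=False):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # standard relevance scores
    scores = scores * (D.T if reverse else D)  # weight by syntactic structure
    return softmax(scores) @ V                 # weighted sum of values

n, d = 4, 8
rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
D = np.eye(n)                                  # placeholder dependency matrix
print(dep_attention(Q, K, V, D).shape)         # (4, 8)
```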
In the actual implementation, the dimension of the word vector is fixed in advance, and the rest of the encoder configuration is the same as for the Transformer model.
In this section, we utilize the structured knowledge of dependency syntax to guide the updating of semantic information at each step of the decoding process, with the aim of making the capture of semantic information more focused.
Whether the decoder is a recurrent neural network or a self-attention-based decoder, the same abstraction can be made for the attention computation over the encoded result: at decoding step $t$, let $a_t$ denote the attention context computed from the decoder state and the encoder outputs, and let $c_t$ denote the semantic hidden vector maintained by the decoder.
Distinct from updating semantics at the frequency of individual words, the word-block idea requires the semantic vectors to remain consistent within the same word block, i.e., the update of the semantic hidden vector is gated at block boundaries. Each decoding step then proceeds as follows. First, obtain the semantic update weight for the current decoding step; the weight value is computed from the output of the previous step:
$$g_t = \sigma\!\left(W_g\, y_{t-1} + b_g\right)$$
Next, calculate the updated semantic hidden vector. It depends not only on the information passed from the encoder side through the attention mechanism, but also refers to the semantic expression of the previous word block, finally giving:
$$\tilde{c}_t = \tanh\!\left(W_c\, [a_t; c_{t-1}] + b_c\right)$$
Finally, the weight combines the two to generate the semantic vector for the new step, in place of a purely word-level semantic vector:
$$c_t = g_t\, \tilde{c}_t + (1 - g_t)\, c_{t-1}$$
Subsequent operations such as dimension transformation, softmax and argmax are then performed to obtain the output values. In the training phase, since a prediction process for syntactic information has been added, the mean square error between the ideal weights and the actually generated values is added to the loss function, normalizing the weight ratio and introducing the results of the a priori dependency syntactic analysis:
$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}} + \lambda \sum_{t} \left(g_t - g_t^{*}\right)^2,$$
where $g_t^{*}$ is the ideal weight derived from the dependency parse and $\mathcal{L}_{\mathrm{CE}}$ is the standard cross-entropy translation loss.
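A sketch of this gated, block-level update follows; the symbols and parameter shapes track the reconstruction above, and all weights are illustrative.

```python
# Gated word-block semantic update: a scalar gate g_t computed from the
# previous output decides how much the semantic hidden vector is refreshed
# versus carried over within a word block.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 8
rng = np.random.default_rng(2)
W_g = rng.normal(scale=0.1, size=d)            # gate parameters (illustrative)
W_c = rng.normal(scale=0.1, size=(d, 2 * d))   # candidate-vector parameters

def update_semantics(y_prev, attn_ctx, c_prev):
    g = sigmoid(W_g @ y_prev)                  # update weight from last output
    cand = np.tanh(W_c @ np.concatenate([attn_ctx, c_prev]))  # candidate
    return g * cand + (1.0 - g) * c_prev       # gated update / maintenance

c = np.zeros(d)
for _ in range(3):                             # a few decoding steps
    y_prev, attn_ctx = rng.normal(size=d), rng.normal(size=d)
    c = update_semantics(y_prev, attn_ctx, c)
print(c.shape)                                 # (8,)
```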
For the translation task between Chinese and English, this chapter builds on the intelligent translation model constructed above and on experience accumulated in actual operation to construct a Chinese-English bilingual intelligent machine translation system that realizes translation in both directions. In addition, this paper discusses how the Chinese-English intelligent machine translation system can be applied in English education curriculum reform, together with specific English teaching designs.
This translation system uses HTML to build the front-end interface, while the back-end uses Flask, a lightweight web application framework written in Python, to call the trained translation model through a remote-access platform. The translation system therefore adopts a B/S (browser/server) architecture and contains data input, data processing, machine translation, data output, and application display modules.
Data input module: the Flask framework and the foreground HTML interface realize the data interaction. When the user enters the source language in the application interface, Flask obtains the user's input and passes it on for processing: the input data are encapsulated in a payload, the front end submits them via a POST request, and the encapsulated data are retrieved on the server through request.form.
Data processing module: the Chinese-English parallel data are first given basic preprocessing, such as removing garbled characters and stray spaces from the corpus; the LTP language platform is then used for Chinese word segmentation and dependency parsing, while spaCy is used for segmentation and dependency parsing on the English side.
Machine translation module: this module mainly comprises the processed bilingual data and the translation model used for training. It calls the model, converts the input data into word vectors according to the dictionary, and passes them into the neural machine translation model to obtain the final translation results.
Data output module: according to the model's translation results, the Flask framework feeds the results back to the application display interface.
Application display module: the final translation results are displayed on the browser page, which mainly provides data input, data output, language selection, and translation-direction switching. A minimal sketch of this input-to-output flow is given below.
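In this sketch, the /translate route and translate_fn are hypothetical placeholders rather than the system's actual code; only the Flask calls themselves are standard.

```python
# Minimal input-to-output flow: the front end posts the source text via a
# form, Flask extracts it from request.form, and the translation model
# (translate_fn is a stand-in) produces the text returned to the page.
from flask import Flask, request

app = Flask(__name__)

def translate_fn(text: str) -> str:
    # Placeholder for the trained neural machine translation model.
    return text[::-1]

@app.route("/translate", methods=["POST"])
def translate():
    source = request.form.get("source", "")  # text entered by the user
    return {"translation": translate_fn(source)}

if __name__ == "__main__":
    app.run()  # serves the browser-side (B/S) interface locally
```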
The specific ways in which the Chinese-English bilingual intelligent translation system constructed in this paper is applied in English education curriculum reform include multimedia, Internet platform teaching, CAT (Computer Aided Translation), and instant communication tools.
The intelligent translation model introduced in this paper is applied to a multimedia teaching platform. The platform can be built on the LAN and campus network and can connect to and support terminal devices of various forms. Browsing of courseware, audio and video files, and even real-time teaching interaction can be integrated into the teaching terminals on teachers' and students' electronic devices. Such a convenient teaching platform lets students access translation corpora of various forms at any time and watch the teacher's exposition of relevant translation strategies, skills, theories and the various problems raised in class, realizing an organic integration of classroom teaching and self-directed learning. Students can also join introductory projects as participants, achieving a seamless transition from theory to practice and gradually improving their translation ability.
A complete and efficient Internet teaching interaction platform solves the problem of dispersed and insufficient translation teaching resources and allows a large number of teaching resources to be centralized and integrated. Using the intelligent translation model in this paper, students can make full use of rich online learning and academic resources such as CNKI (China National Knowledge Infrastructure), Wikipedia and TED-Ed with the help of the Internet teaching platform: learning independently, browsing translation courseware, and combining online Q&A, online tasks and interactive discussions to consolidate the translation theories and skills taught in class. Teachers can publish translation courses online, assign translation homework, organize translation practice sessions, and even administer tests and evaluations.
Combined with the intelligent translation model in this paper, computer-aided translation equipment is used to carry out university English translation teaching, realizing an all-round interactive teaching environment and helping translators complete large volumes of translation and proofreading work efficiently, which is of great value in helping students of various majors quickly master the professional translation vocabulary of their fields.
After the intelligent translation model of this paper is integrated with new media resources, introducing those resources into translation teaching makes teaching more engaging, increases students' participation, and facilitates interactive communication and discussion inside and outside the classroom. Through these new media, discussion between teachers and students on teaching-related topics is no longer limited by time and place, and students are no longer bored or unmotivated by a single form of classroom teaching; on the contrary, such self-media platforms can maximally mobilize young learners' interest in learning, and learners can participate in the teaching activities organized by teachers in various forms. This enriches the form of teaching and mobilizes the initiative of teachers and students, while this flexible, boundary-free sharing technology also extends the influence of learning to learners outside the classroom.
Since students' grasp of the translation process is still immature and their understanding of the criteria for evaluating translations is limited, appropriate pedagogical design is needed when the intelligent translation system constructed in this paper is used in English education curriculum reform, striving for a genuine integration of machine translation and classroom teaching.
The main form of translation tasks is from Chinese to English, progressing gradually from sentences to paragraphs to chapters. The advantage of this approach is that students will not be suddenly burdened with a large number of learning tasks, nor will they feel rejected or burned out as a result. When assigning translation tasks, common translation skills and strategies such as additions and omissions, sentence contrasts, and word order adjustments are also taught in conjunction with the corresponding content.
Before formal translation, the source language data should be pre-processed, and the source language text should be moderately decomposed during grammatical analysis to improve accuracy during machine translation. After the machine translation is finished, the translated text should be organized, and the corresponding translation skills should be used to check the spelling and correct the semantics of the translated text, so as to ensure the grammatical correctness of the translated text as well as to make the semantics precise and free from ambiguity.
Interactive online teaching systems that have emerged in recent years, such as Critique.com, can monitor the quality of translations effectively to a certain extent. These relatively mature intelligent systems are very sensitive to diction, grammar and Chinese-style English, provide targeted analysis and evaluation, and can also handle target-language and target-culture errors, which to some extent improves both the quality and the efficiency of students' evaluation and correction of translations.
In this chapter, the practical effects of the application of the Chinese-English bilingual intelligent translation system constructed in this paper in the English education curriculum reform will be analyzed and discussed in depth. Before formally carrying out the application practice, the performance of the intelligent translation model used to realize the system function in this paper is tested to ensure the normal operation of the system function.
Experimental results of NIST Chinese-English translation task
In this section, Transformer, BERT, GAN, BP and POS models are selected as comparison models, the test sets are NIST2005, NIST2008 and NIST2012, and case-sensitive BLEU is used as the evaluation index. The results of the different models on the three test sets are shown in Table 1. The average BLEU score of the proposed model across the NIST2005, NIST2008 and NIST2012 test sets reaches 39.79, which is 7.57, 6.21, 3.10, 2.77 and 3.32 points higher than the Transformer, BERT, GAN, BP and POS models respectively; the quality of its Chinese-English translation is better than that of all comparison models.
Experimental results on WMT Chinese-English translation tasks
To further verify the effectiveness of the intelligent translation model under large-scale experimental datasets, experiments are conducted in this section on three WMT translation tasks (WMT2017, WMT2018, WMT2019). The evaluation metric is still the industry-standard case-sensitive BLEU, and the test results are shown in Table 2. On these three WMT tasks, this paper's model is on average 1.71, 1.55, 1.66, 1.14 and 1.15 BLEU points higher than the Transformer, BERT, GAN, BP and POS models. Compared with the NIST translation task, the quality gap between the various models' Chinese-English translations is relatively small, but the results still suffice to show that the proposed model is effective.
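For reference, case-sensitive corpus BLEU of the kind reported in Tables 1 and 2 can be computed with the sacrebleu package; the sentences below are illustrative, not drawn from the test sets.

```python
# Case-sensitive corpus BLEU (sacrebleu's default) over a hypothesis list
# and a parallel list of references.
import sacrebleu

hypotheses = ["the new service provides global official weather observations"]
references = [["the new service provides official weather observations worldwide"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```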
Sample Analysis
In this section, a translation example is used to test the model's ability to encode the syntactic knowledge of the source language. The example is "new ① service ② first time ③ provide ④ global ⑤ official ⑥ weather ⑦ observation ⑧", drawn from the test set of the NIST Chinese-English translation task. Building on the traditional Transformer, the model in this paper combines the idea of word blocks with dependency syntactic structure information and is more flexible in Chinese-English translation. Taking the traditional Transformer as the comparison model for this example, the translation weight matrices of the two models are shown in Fig. 1, where (a) and (b) are the weight matrices of this paper's model and of the traditional Transformer respectively; darker color represents larger weight. The dependency relationships between words are very clear in the weight matrix of this paper's model: for example, the first word "new" is a modifier of the second word "service", so the entry in the first row and second column of the matrix has a very high weight and appears dark black. By contrast, the weight matrix of the traditional Transformer is not clear enough for the syntactic structure of the translation sample to be derived, and its performance on this example is inferior to that of this paper's model.
Figure 1. Weight matrix
Table 1. Test results on the NIST task
Model | NIST2005 | NIST2008 | NIST2012 | Average |
---|---|---|---|---|
Transformer | 38.14 | 30.15 | 28.37 | 32.22 |
BERT | 39.25 | 32.15 | 29.34 | 33.58 |
GAN | 43.88 | 35.72 | 30.48 | 36.69 |
BP | 43.12 | 35.18 | 32.75 | 37.02 |
POS | 44.12 | 35.14 | 30.14 | 36.47 |
This paper's model | 45.16 | 38.84 | 35.37 | 39.79 |
Table 2. Test results on the WMT task
Model | WMT2017 | WMT2018 | WMT2019 | Average |
---|---|---|---|---|
Transformer | 20.29 | 32.32 | 26.18 | 26.26 |
BERT | 20.69 | 32.35 | 26.21 | 26.42 |
GAN | 21.66 | 31.48 | 25.78 | 26.31 |
BP | 22.15 | 32.03 | 26.31 | 26.83 |
POS | 21.12 | 33.03 | 26.31 | 26.82 |
This paper's model | 22.84 | 34.18 | 26.88 | 27.97 |
Next, the practical utility of the machine-translation-based Chinese-English bilingual intelligent translation system is explored in the teaching practice of English education reform, under the experimental hypothesis that the intelligent translation system in this paper can significantly improve students' total English achievement, so that a significant difference in total English achievement will appear between the experimental class and the control class.
In this study, two classes of first-year Business English majors (class of 2023) at Normal University of D, 86 students in total with 43 students per class and comparable levels of English learning, were taken as the research subjects. An experimental class and a control class were set up: the experimental class applied this paper's intelligent translation system to English education reform and used the system to carry out teaching activities, while the control class kept the traditional way of English teaching. The teaching experiment lasted one semester (March 2024 to June 2024).
Analysis of total English scores. The total English achievement of the students in the experimental class and the control class before and after the experiment is shown in Table 3. The mean difference between the pretest scores of the two classes is 0.6, a small gap, and the t-test gives P=0.684>0.05, indicating that the two classes' total English achievement was comparable before the experiment, with no significant difference. At the end of the one-semester teaching experiment, a post-test (somewhat more difficult than the pre-test) was administered to both classes. The mean post-test score of the experimental class was 69.66, while that of the control class was 66.52; the experimental class's post-test mean was 2.61 higher than its pretest mean, while the control class's was 1.13 lower than its pretest mean. The difference between the two classes' post-test means (3.14) is larger than the difference between their pretest means (0.6). The independent-samples t-test of the post-test scores gives P=0.041<0.05, indicating a significant difference between the post-test scores of the two classes, with the experimental class outperforming the control class. The data indicate that the experiment achieved the expected effect and that the hypothesis of this paper is verified.
Score analysis of each question type in the English post-test. To further understand the impact of two-way translation on students' English learning, the scores of each part of the post-test for the experimental and control classes were statistically analyzed with SPSS 21.0; they are shown in Table 4. Among the five question types, the score differences between the experimental and control classes on listening comprehension and reading comprehension are 0.07 and 0.25; the experimental class scores slightly higher, the gap is small, and the independent-samples t-test gives P>0.05, i.e., no significant difference. For the translation, writing, and vocabulary-and-structure questions, the experimental class scores higher than the control class by 0.79, 1.16 and 0.87 respectively, a larger gap, and the independent-samples t-tests give P<0.05, indicating significant differences on these three question types, with the experimental class's scores significantly better than the control class's. A sketch of how such an independent-samples t-test can be run from summary statistics is given below.
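The inputs below are the post-test values from Table 3; since the paper's figures come from SPSS, results computed from the rounded summary statistics need not match the reported T and P exactly.

```python
# Independent-samples t-test from summary statistics (mean, std, n per group)
# using SciPy, applied to the post-test values reported in Table 3.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=69.66, std1=5.736, nobs1=43,
                            mean2=66.52, std2=6.175, nobs2=43)
print(f"t = {t:.3f}, p = {p:.3f}")
```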
Table 3. Total English scores
Dimension | Test | Class | N | Mean | Standard deviation | T | P |
---|---|---|---|---|---|---|---|
Total English score | Before experiment | Experimental class | 43 | 67.05 | 10.471 | 0.383 | 0.684 |
 | | Control class | 43 | 67.65 | 11.105 | | |
 | After experiment | Experimental class | 43 | 69.66 | 5.736 | 2.058 | 0.041 |
 | | Control class | 43 | 66.52 | 6.175 | | |
Table 4. Scores on each question type
Question type | Class | N | Mean | Standard deviation | T | P |
---|---|---|---|---|---|---|
Translation | Experimental class | 43 | 10.42 | 1.509 | 2.066 | 0.031 |
 | Control class | 43 | 9.63 | 1.172 | | |
Writing | Experimental class | 43 | 8.85 | 2.031 | 2.036 | 0.036 |
 | Control class | 43 | 7.69 | 2.253 | | |
Vocabulary and structure | Experimental class | 43 | 9.88 | 1.249 | 2.038 | 0.033 |
 | Control class | 43 | 9.01 | 1.712 | | |
Listening comprehension | Experimental class | 43 | 17.08 | 1.683 | 0.227 | 0.088 |
 | Control class | 43 | 17.01 | 2.101 | | |
Reading comprehension | Experimental class | 43 | 23.43 | 2.623 | 0.556 | 0.608 |
 | Control class | 43 | 23.18 | 2.704 | | |
In this paper, an intelligent translation model is proposed by integrating relevant machine translation theory; on this basis a Chinese-English intelligent translation system is constructed and applied in English education curriculum reform. Before the application practice in English education reform was formally implemented, the performance of the intelligent translation model underlying the system was tested. In the NIST Chinese-English translation experiments, the average BLEU score of this paper's model across the NIST2005, NIST2008 and NIST2012 test sets reaches 39.79, higher than all comparison models, and its Chinese-English translation quality is the best among them. On the three WMT translation tasks WMT2017, WMT2018 and WMT2019, this paper's model outperforms the Transformer, BERT, GAN, BP and POS models by an average of 1.71, 1.55, 1.66, 1.14 and 1.15 BLEU points respectively; the quality gap between the translations is relatively small but still sufficient to demonstrate the model's effectiveness. A translation example selected from the NIST test set was used for sample analysis: the dependency relationships between words are clear in the translation weight matrix of this paper's model, while those in the weight matrix of the comparison Transformer model are not clear enough, and this paper's model shows better translation performance.
Taking the first-year Business English majors (class of 2023) of Normal University of D as the research subjects, an experimental class and a control class were set up to carry out a semester-long English teaching experiment applying the Chinese-English intelligent translation system constructed in this paper. In terms of total English achievement, the mean post-test score of the experimental class is 69.66, an increase of 2.61 over its pre-test mean and 3.14 higher than the control class's post-test mean, a significant difference (P=0.041<0.05). Among the question types in the English post-test, the experimental class scored only 0.07 and 0.25 higher than the control class on listening comprehension and reading comprehension, small differences that are not significant (P>0.05). By contrast, the experimental class scored 0.79, 1.16 and 0.87 higher than the control class on translation, writing, and vocabulary-and-structure questions respectively, larger differences that are significant (P<0.05).