
The Paths and Strategies of Constructing the Multilingual Discourse System of National Community Awareness with the Aid of Machine Translation

  
Mar 21, 2025


Introduction

The Chinese nation is a multicultural, multi-ethnic family that has weathered great changes and accumulated a rich national culture and historical heritage. In the new era, its members should be all the more determined to stand together and build a strong sense of community of the Chinese nation, so as to realize its great rejuvenation [1-4]. Community consciousness refers to the shared psychological and cultural makeup formed through exchange, interaction, and integration among ethnic groups [5]. In concrete practice, community consciousness has the following connotations. Cultural identity: respecting and accepting the cultural strengths of different nationalities and regions and integrating them into a common national culture [6-8]. Social identity: forming the shared social values of the Chinese nation through social activities, mutual assistance, and the common struggle against hardship; especially in the face of emergencies and major disasters, the Chinese nation has always maintained unity and solidarity [9-12]. Historical identity: respecting historical facts, upholding the spirit of history and humanity, promoting Chinese culture, and strengthening the will and self-confidence of the Chinese nation [13].

Language is an important tool of human communication and an important part of culture and national identity. China is a large country in which many ethnic groups, languages, and dialects coexist, and the richness and diversity of the Chinese discourse system is a treasure of Chinese culture [14-17]. The system includes Chinese as the main language, minority languages such as Tibetan, Mongolian, and Uyghur, and the dialects and colloquialisms of different regions. These languages and dialects differ in phonology, vocabulary, and grammar, and some are mutually unintelligible [18-21]. Yet this plurality and diversity is precisely the charm of the Chinese discourse system: it constitutes a rich language resource and reflects China's long history and culture [22-23].

In this study, we collect corpus data in Chinese, Cantonese, Hmong, and Mongolian with web crawling software and store it as datasets to support the subsequent experimental analysis. A multilingual machine translation model is then constructed by combining an encoder-decoder architecture, an attention mechanism, and the Transformer model. With the help of this model, three paths and strategies for constructing the multilingual discourse system of national community consciousness are proposed. Finally, the model is validated through manual and automatic evaluation, and the multilingual discourse system under machine translation theory is examined through a controlled experiment.

Multilingual discourse systems assisted by machine translation
Theoretical Foundations of Machine Translation

Machine translation here refers to models built on deep neural networks and trained on monolingual or parallel bilingual data. Given input sentences on the source side, the model generates the target-side sentences word by word [24-25]. The process is divided into two stages: encoding followed by decoding. Compared with traditional machine translation models, neural machine translation, which models the mapping in a continuous vector space, greatly improves translation quality and has become the dominant approach among researchers in the field.

Encoder-Decoder

The encoder-decoder architecture was originally designed for machine translation, but it has since been widely used in sequence-to-sequence tasks such as speech recognition, multi-turn dialogue, and text summarization. It transforms an input sequence into a hidden state in a high-dimensional space through an encoder, and then generates a target sequence from that hidden state through a decoder. The encoder-decoder framework is shown in Fig. 1.

Figure 1. Encoder-decoder framework

Therefore, the encoder in the neural machine translation model transforms the source sentence sequence $x=\{x_1,x_2,\ldots,x_n\}$ into a fixed-dimensional intermediate representation $h_{context}$, and the decoder uses this intermediate representation together with the contextual information to generate the target sequence $y=\{y_1,y_2,\ldots,y_m\}$ step by step. The formulas for this process are shown below:
$$h_{context}=\mathrm{Encoder}(x) \quad (1)$$
$$y_i \mid h_{context} \sim \mathrm{Decoder}(h_{context};y_{<i}) \quad (2)$$

where $h_{context}$ represents the intermediate representation generated by the encoder, and Equation (2) gives the conditional probability of generating the target sequence given the source sequence and context. With a parallel dataset containing $k$ sentence pairs, the optimization objective of the neural machine translation model is to maximize the conditional probability of the target sequences in the dataset, as shown in the following formula:
$$P(y \mid x)=\prod_{i=1}^{m}P(y_i \mid y_{<i},x;\theta) \quad (3)$$

In Eq. (3), $P(y \mid x)$ denotes the probability of generating the target-side sequence $y$ given the source-side sequence $x$. This probability is the product of the output probabilities over all time steps, i.e., $\prod_{i=1}^{m}P(y_i \mid y_{<i},x;\theta)$, where $P(y_i \mid y_{<i},x;\theta)$ denotes the probability of the output at time step $i$ given the source-side input sequence $x$ and the history output $y_{<i}$, and $\theta$ denotes the model parameters.

The training objective of neural machine translation is to minimize the negative log-likelihood of the conditional probabilities, given by the following formula:
$$L(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\log P(y_i \mid y_{<i},x;\theta) \quad (4)$$

where $\theta$ denotes the model parameters.
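As a concrete illustration of this training objective, the following PyTorch sketch (a minimal example under assumed tensor shapes and hypothetical names, not the implementation used in this paper) computes the masked per-token negative log-likelihood from decoder output scores:

```python
import torch
import torch.nn.functional as F

def sequence_nll(decoder_logits, target_ids, pad_id=0):
    """Negative log-likelihood L(theta) averaged over target tokens.

    decoder_logits: (batch, tgt_len, vocab) unnormalized scores from the decoder.
    target_ids:     (batch, tgt_len) gold target token indices y_1..y_m.
    """
    log_probs = F.log_softmax(decoder_logits, dim=-1)                 # log P(y_i | y_<i, x; theta)
    gold = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1) # log-prob of each gold token
    mask = (target_ids != pad_id).float()                             # ignore padding positions
    return -(gold * mask).sum() / mask.sum()                          # loss to minimize
```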

The encoder-decoder framework is very widely used in machine translation; its simple and effective design has spread to other natural language tasks and has attracted many researchers to improve it. However, during decoding the model uses the same source-side context representation for every generated word, which does not match how humans understand natural language, since different target words are often associated with different source-side words. The attention mechanism was proposed to solve this problem.

Attention mechanisms

Attention mechanisms are used in neural machine translation models to help the decoder generate target-side translations by extracting, from the source-side sequence, the information most relevant to a particular target-side word, so that translations more faithful to the original can be produced. This process is shown in the following equation:
$$y_i \mid x \sim \mathrm{Decoder}(c_i;y_{<i}) \quad (5)$$

In Eq. (5), $y_i$ denotes the output generated at time step $i$, and $\mathrm{Decoder}(c_i;y_{<i})$ denotes the probability distribution over $y_i$ produced by the decoder from the context vector $c_i$ and the history output sequence $y_{<i}$ at the current time step. Unlike $\mathrm{Decoder}(h_{context};y_{<i})$ in Eq. (2), the attention mechanism produces a dynamic context vector $c_i$ for each time-step word $y_i$ instead of a static context vector $h_{context}$. The decoder then takes the context vector and the previously generated partial output sequence as inputs to produce the current time-step output. The decoder can be implemented as a Recurrent Neural Network (RNN), a Convolutional Neural Network (CNN), or stacked Transformer layers, among others.

Computing the dynamic context vector first requires a scoring function that assigns different weights to the hidden states. This scoring function takes as input the encoder hidden states $h_{context}=\{h_1,h_2,\ldots,h_n\}$ and the decoder hidden state $s_{i-1}$ of the previous time step, and outputs a score for each $h_j$ with respect to the current time step, as shown below:
$$v_{i,j}=f(s_{i-1},h_j) \quad (6)$$
$$a_{i,j}=\frac{e^{v_{i,j}}}{\sum_{k=1}^{n}e^{v_{i,k}}} \quad (7)$$

In these formulas, the two inputs are evaluated with the scoring function $f(\cdot,\cdot)$ to obtain the weights $v_{i,j}$; all weights are then normalized with softmax to obtain the scores $a_{i,j}$, and finally the dynamically weighted context vector is obtained from the scores:
$$c_i=\sum_{j=1}^{n}a_{i,j}h_j \quad (8)$$

For the scoring function $f(\cdot,\cdot)$, the following variants are commonly used:
$$f(s_i,h_j)=\begin{cases} s_i^{T}h_j & \text{dot product} \\ \dfrac{s_i^{T}h_j}{\sqrt{d}} & \text{scaled dot product} \\ s_i^{T}Wh_j & \text{bilinear} \\ v^{T}\tanh(W_1s_i+W_2h_j) & \text{additive} \\ v^{T}\tanh(W_2\tanh(W_1[s_i;h_j])) & \text{multilayer perceptron} \end{cases} \quad (9)$$

In Eq. (9), $v$ is a weight vector, $W$, $W_1$, and $W_2$ are weight matrices, and $d$ is the vector dimension. Different scoring functions suit different application scenarios and models; the dot-product scoring function is the most general and commonly used.
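As an illustration, the following PyTorch sketch (hypothetical names, a minimal single-query example rather than the paper's implementation) computes the scores, softmax weights, and dynamic context vector of Eqs. (6)-(8) using the dot-product scoring function:

```python
import torch

def dot_product_context(s_prev, h_states):
    """s_prev: (d,) decoder state s_{i-1}; h_states: (n, d) encoder states h_1..h_n."""
    scores = h_states @ s_prev               # v_{i,j} = s_{i-1}^T h_j, shape (n,)
    weights = torch.softmax(scores, dim=0)   # a_{i,j}, normalized over j
    c_i = weights @ h_states                 # c_i = sum_j a_{i,j} h_j, shape (d,)
    return c_i, weights
```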

Transformer model

The Transformer has very good scalability. Its flexibility stems from the fact that it makes almost no assumptions about the structure of the input data, which allows it to be applied to a wide variety of downstream tasks and domains [26]; for example, it has been used in computer vision, biology, chemistry, and speech. The structure of the Transformer model is shown in Fig. 2; it mainly consists of multi-head attention, feed-forward neural network, and positional encoding modules.

Figure 2. Transformer model structure

Multi-head attention: attention within the Transformer uses multiple heads, which are designed to capture information from different perspectives, such as syntax, grammar, and low-frequency words. The formulas for multi-head attention are as follows:
$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \quad (10)$$
$$\mathrm{head}_i(Q,K,V)=\mathrm{Attention}(QW_i^{Q},KW_i^{K},VW_i^{V}) \quad (11)$$
$$\mathrm{MultiHeadAttention}(Q,K,V)=\mathrm{Concat}(\mathrm{head}_1,\mathrm{head}_2,\ldots,\mathrm{head}_h)W^{O} \quad (12)$$

In Eqs. (10)-(12), multi-head attention maps the input matrices $Q,K,V\in\mathbb{R}^{n\times d}$ (where $n$ denotes the sequence length and $d$ the input vector dimension of each head) onto new matrices obtained by linear transformations of the three inputs, which improves the model's ability to fit and generalize. The term $\frac{QK^{T}}{\sqrt{d_k}}$ computes the scaled dot product between $Q$ and $K$, yielding an $n \times n$ matrix; the scaling prevents gradients from exploding or vanishing. A softmax is then applied to this matrix to obtain a normalized weight matrix in which the elements of each row indicate that position's attention weights over the other positions, and this matrix is used to compute a weighted sum of $V$. Finally, the results of the individual heads are concatenated and passed through a linear transformation to obtain the final output matrix. Here $W_i^{Q}$, $W_i^{K}$, $W_i^{V}$, and $W^{O}$ are the parameter matrices of the linear transformations. The structure of multi-head attention is shown in Fig. 3.

Figure 3. Multi-head attention structure
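To make the computation concrete, here is a minimal PyTorch sketch of Eqs. (10)-(12) (a hypothetical module, assuming d_model is divisible by the number of heads; not the paper's implementation):

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0
        self.h, self.d_k = num_heads, d_model // num_heads
        self.w_q = nn.Linear(d_model, d_model)  # W^Q
        self.w_k = nn.Linear(d_model, d_model)  # W^K
        self.w_v = nn.Linear(d_model, d_model)  # W^V
        self.w_o = nn.Linear(d_model, d_model)  # W^O

    def forward(self, q, k, v):
        b, n, _ = q.shape
        # Project and split into h heads: (batch, heads, seq_len, d_k).
        q = self.w_q(q).view(b, n, self.h, self.d_k).transpose(1, 2)
        k = self.w_k(k).view(b, -1, self.h, self.d_k).transpose(1, 2)
        v = self.w_v(v).view(b, -1, self.h, self.d_k).transpose(1, 2)
        # Scaled dot-product attention, Eq. (10).
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_k), dim=-1)
        # Concatenate heads and apply the output projection, Eq. (12).
        out = (attn @ v).transpose(1, 2).reshape(b, n, self.h * self.d_k)
        return self.w_o(out)
```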

Feed-forward neural network: the feed-forward network is a fully connected network that applies the same nonlinear transformation independently to the hidden state at each position of the input sequence. It increases the model's fitting ability and expressive power for complex tasks and can be highly parallelized. Feed-forward networks usually use ReLU as the activation function, and the computation is shown below:
$$\mathrm{ReLU}(x)=\max(0,x) \quad (13)$$
$$\mathrm{FFN}(x)=\mathrm{ReLU}(xW_1+b_1)W_2+b_2 \quad (14)$$

where $W_1$, $W_2$, $b_1$, and $b_2$ are trainable parameters.
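A position-wise feed-forward block matching Eqs. (13)-(14) could be sketched as follows (hypothetical dimensions, not the paper's code):

```python
import torch.nn as nn

class FeedForward(nn.Module):
    """FFN(x) = ReLU(x W1 + b1) W2 + b2, applied position-wise."""
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),   # x W1 + b1
            nn.ReLU(),                  # max(0, .)
            nn.Linear(d_ff, d_model),   # . W2 + b2
        )

    def forward(self, x):
        return self.net(x)  # same transform at every sequence position
```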

Positional encoding: an RNN implicitly encodes positional information by receiving the hidden state of the previous word, whereas the attention mechanism in the Transformer lets every word interact with every other word, which improves parallel efficiency but cannot capture the positional information of the sequence. The Transformer therefore adds positional encodings to the word vectors to help the model distinguish words at different positions. The Transformer's commonly used trigonometric positional encoding is given by:
$$PE(pos,2i)=\sin\left(pos/10000^{2i/d_{model}}\right) \quad (15)$$
$$PE(pos,2i+1)=\cos\left(pos/10000^{2i/d_{model}}\right) \quad (16)$$

where $pos$ denotes the position index, i.e., the absolute position of the word in the sentence, $i$ denotes the dimension index of the positional encoding, $d_{model}$ denotes the dimension of the word embedding and the positional encoding, and $\sin(\cdot)$ and $\cos(\cdot)$ are the sine and cosine functions, respectively.
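The following sketch generates the sinusoidal encodings of Eqs. (15)-(16), assuming an even d_model (an illustrative simplification):

```python
import torch

def sinusoidal_positional_encoding(max_len, d_model):
    """PE(pos, 2i) = sin(pos / 10000^{2i/d_model}); PE(pos, 2i+1) = cos(...)."""
    pos = torch.arange(max_len).unsqueeze(1).float()   # (max_len, 1)
    i = torch.arange(0, d_model, 2).float()            # even indices: equals the 2i in the formula
    div = torch.pow(10000.0, i / d_model)              # 10000^{2i/d_model}
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(pos / div)                 # even dimensions
    pe[:, 1::2] = torch.cos(pos / div)                 # odd dimensions
    return pe                                          # added to the word embeddings
```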

Machine Translation Model for Multilingualism
Language characteristics

The four languages involved in this paper's model belong to different language families and differ greatly in grammar and structure; several of them have no direct parallel corpus with one another and suffer from loose, scarce corpus resources. In this project we use Chinese, Cantonese, Hmong, and Mongolian to train a multilingual neural machine translation system. To facilitate corpus processing and word vector feature extraction, the linguistic characteristics of each language first need to be analyzed.

Chinese

Chinese belongs to the Sino-Tibetan language family. Grammatically it is an isolating language, i.e., it generally expresses grammatical information not through inflection but through function words and fixed word order. The Chinese script is morphemic, i.e., its characters represent words or morphemes (the smallest semantic units of a language). Chinese words are not visibly separated from each other, so the text must be segmented when processing the corpus.

Cantonese

Cantonese has nine tones, more than the four tones of Mandarin Chinese. This abundance of tones makes Cantonese more delicate in expression and able to distinguish more homophonous characters. Cantonese also retains the entering (checked) tones of Middle Chinese, which have disappeared from many other Chinese dialects, adding to the rhythmic quality of the language.

Hmong

Hmong has a complex phonological system containing many initials, rhymes, and tones. Its clauses have a subject-predicate structure in which the subject usually precedes the predicate. When a noun serves as a modifier, it is placed before the head word, while adjectival modifiers follow it. Monosyllabic words are common in Hmong, and polysyllabic words are comparatively rare.

Mongolian

Mongolian has a strict law of vowel harmony: the vowels within a word are either all back vowels or all mid vowels, while front vowels can coexist with back or mid vowels. Mongolian has 35 letters. In Mongolian sentence structure, the last word of each sentence is a verb, and postpositions are an important grammatical feature.

Corpus data

The multilingual corpus of national community consciousness used to train the model in this study was mainly crawled from the OPUS website. Before training, the data underwent preprocessing operations such as cleansing and segmentation, mainly using the Jieba segmentation library and the UnderTheSea multilingual processing toolkit. The collected and organized corpus contains 100,000 sentence pairs, which are randomly divided in a 7:2:1 ratio: 70,000 pairs for training, 20,000 pairs for validation, and 10,000 pairs for testing.
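A minimal sketch of the 7:2:1 random split described above (a hypothetical helper; the paper does not specify its splitting tool):

```python
import random

def split_corpus(pairs, seed=42):
    """Randomly split sentence pairs 7:2:1 into train/validation/test sets."""
    random.Random(seed).shuffle(pairs)          # fixed seed for reproducibility
    n_train, n_val = int(0.7 * len(pairs)), int(0.2 * len(pairs))
    return (pairs[:n_train],                    # 70,000 pairs for 100k input
            pairs[n_train:n_train + n_val],     # 20,000 pairs
            pairs[n_train + n_val:])            # 10,000 pairs
```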

Data processing

The corpus datasets (training, validation, and test sets) of the four languages need to be processed separately. In Chinese, an isolating language, the word is the smallest unit carrying semantic information, but Chinese text is a continuous sequence of characters with no separators between words; Stanford University's open-source word segmentation tool is therefore used to segment the text and exclude as much irrelevant information as possible. Cantonese, Hmong, and Mongolian are written phonetically with clear separators between words, and since word-level segmentation is adopted as the smallest unit in this project, their corpora only need to be normalized. After that, an artificial marker is introduced at the beginning of each source-side input sentence to specify the desired target language for the subsequent model training.
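As an illustration of this pipeline, the sketch below segments a Chinese source sentence with the jieba library and prepends a target-language tag token. The <2xx> tag format is a common multilingual NMT convention adopted here as an assumption; the paper does not specify its marker format.

```python
import jieba  # the Jieba segmentation library mentioned above

def preprocess_source(sentence, tgt_lang, is_chinese=True):
    """Segment a source sentence and prepend a target-language marker token."""
    tokens = list(jieba.cut(sentence)) if is_chinese else sentence.split()
    return [f"<2{tgt_lang}>"] + tokens  # hypothetical tag, e.g. <2mn> for Mongolian

# preprocess_source("中华民族共同体意识", "yue")
# -> ['<2yue>', '中华民族', '共同体', '意识'] (segmentation may vary)
```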

Model training

The purpose of model training is not only to verify the effectiveness of the multilingual machine translation model, but also to let the model better assist the construction of the multilingual discourse system of the national community. The encoder and decoder are both set to 10 layers, 5 of which share parameters, and the number of attention heads is set to 5. The hidden_size of the Transformer's hidden layer is set to 256, the batch size during training is 32, the number of training epochs is 120, the dropout rate is 0.6, the initial learning rate is 0.005, and the learning rate decay is 0.1. The optimal model parameters are finally obtained by repeatedly adjusting these settings during training.
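For reference, the reported hyperparameters can be collected in a single configuration sketch (the key names are illustrative, not from the paper):

```python
# Hyperparameters reported in this subsection (key names are illustrative).
train_config = {
    "encoder_layers": 10,
    "decoder_layers": 10,
    "shared_layers": 5,        # 5 layers share parameters between encoder and decoder
    "attention_heads": 5,
    "hidden_size": 256,
    "batch_size": 32,
    "epochs": 120,
    "dropout": 0.6,
    "initial_learning_rate": 0.005,
    "lr_decay_rate": 0.1,
}
```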

Evaluation methodology

Evaluation of multilingual machine translation models can be divided into manual and automatic methods. With either method, translation quality is currently evaluated at the sentence level, and an overall score is then given over a corpus dataset containing a large number of sentences.

Manual Evaluation

As the name suggests, manual evaluation means that a linguistic expert, drawing on their understanding of the two languages, measures how well the original and the translation correspond. For the output of a machine translation system, the expert examines each translation individually and judges its correctness; a common practice is to have the evaluator grade the translation accordingly. Traditionally, translation emphasizes "faithfulness, expressiveness, and elegance", in which "elegance" reflects a literary, creative mode of translation and is not suitable as an intuitive quantitative index. The "correctness" of a translation is therefore measured mainly through "faithfulness" and "expressiveness", specifically by using fidelity and fluency as the judgment criteria in manual evaluation.

Fidelity refers to the extent to which the translation correctly expresses the content of the original text and how much of the original information it retains, including the correctness of word translation and the coverage of the translated content; it corresponds to "faithfulness" in "faithfulness, expressiveness, and elegance".

Fluency refers to whether the language of the translated text is fluent and natural and conforms to the expression habits of the target language, including word order, tense usage, morphology, and collocation; it corresponds to "expressiveness" in "faithfulness, expressiveness, and elegance".

Automated assessment

Compared with manual evaluation, automatic evaluation methods are less costly and are reproducible. An automatic evaluation metric calculates the degree of similarity or deviation between the machine translation and a reference translation with a specific measure and expresses it numerically: the closer the machine translation is to the reference, the higher its quality is judged to be. This indirect approach is expected to reflect the user's assessment of machine translation quality.

BLEU is currently the most widely used automatic evaluation metric and, owing to its simplicity and reliability, serves as an official metric for various machine translation evaluation campaigns. BLEU is mainly used to evaluate the translation quality of a document-level collection of machine translations. Its principle is to measure how well phrase fragments of different lengths in the machine translations match those in the reference translations, with matching restricted to within sentences.

The specific calculation counts the ratio of the number of n-gram matches (usually n = 1, 2, 3, 4) between the machine translation and the reference translation to the total number of n-grams in the machine translation. For a machine translation of a given length, the more matches, the higher the quality of the candidate translation. On top of the n-gram match proportions, the BLEU metric introduces a length penalty factor to prevent overly short translations from obtaining inflated scores. The BLEU formula is:

$$\mathrm{BLEU}=BP\cdot\exp\left(\sum_{n=1}^{N}W_n\log P_n\right) \quad (17)$$

where $N$ is the length of the longest n-gram examined (usually $N=4$, denoted BLEU-4), $P_n=m_n/h_n$ is the precision of the n-gram matches in the text (where $m_n$ is the number of correctly matched n-grams and $h_n$ is the total number of n-grams occurring in the machine-translated text), $W_n$ is the weight of the n-gram matches (usually $1/N$), and $\log$ denotes the logarithm, taken to base $e$ throughout this article. $BP$ is a length penalty factor that penalizes translations shorter than the reference translation, calculated as:
$$BP=\begin{cases} 1 & \text{if } c>r \\ e^{(1-r/c)} & \text{if } c\leq r \end{cases} \quad (18)$$

where $c$ is the total length of the machine translations in the document (counted in words) and $r$ is the total length of the reference translations.
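The computation of Eqs. (17)-(18) can be sketched as follows for a single tokenized sentence pair (a simplified illustration with uniform weights $W_n=1/N$ and crude smoothing of zero counts; production evaluations typically use a corpus-level implementation such as sacrebleu):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, with counts."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU-N with brevity penalty; inputs are lists of words."""
    log_sum = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        m_n = sum(min(count, ref[g]) for g, count in cand.items())  # clipped matches
        h_n = max(sum(cand.values()), 1)                            # total candidate n-grams
        p_n = m_n / h_n if m_n > 0 else 1e-9                        # P_n, smoothed
        log_sum += math.log(p_n) / max_n                            # W_n = 1/N
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / max(c, 1))              # brevity penalty BP
    return bp * math.exp(log_sum)
```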

Paths and Strategies for the Construction of Discourse System under the Perspective of Machine Translation

In the context of cultural diversity, this paper constructs a multilingual discourse system for national community consciousness with the help of the machine translation model described above, so that the national culture does not become a vassal of other cultures. Since the purpose of constructing a national cultural discourse system is to promote the dissemination of national culture, its construction must take into account not only the needs of expressing and interpreting national ideology and culture, but also the perspective of machine translation, so that the discourse system and machine translation remain compatible. The discourse system exists to facilitate the accurate dissemination of national ideology and culture, and its outward dissemination must be realized with the help of machine translation. Machine translation is thus of great significance to the national cultural discourse system: it helps the nation make its voice heard, expands the influence of national culture, and improves outside perceptions of the national image. The construction of the national cultural discourse system therefore needs to be standardized from the translation perspective, and the following subsections give the detailed paths and strategies for constructing the discourse system under the perspective of machine translation.

Spirit of national culture

An ethnic minority discourse system uses nationalized language to express national ideology and culture. From the standpoint of the purpose of discourse system construction, even when the language carrying the national culture changes during translation, the national cultural connotation must still be emphasized and clearly expressed in the target language. From the machine translation perspective, the construction of the ethnic cultural discourse system therefore needs to foreground national ideology and culture and retain their kernel, so that a discourse system assisted by machine translation technology can effectively convey the national cultural connotation in the target language. To achieve this, the construction of the national cultural discourse system must first distill the connotations of minority ideology and culture, and on that basis choose appropriate linguistic expressions and modes of linguistic thinking. Only in this way can the constructed discourse system truly reflect the connotations of national ideology and culture and take their roots as its kernel.

Ethnocultural harmonization

Translation is the linguistic conversion of culture, so cultural conflicts are inevitable in the translation process: concepts with divergent meanings, inconsistent linguistic meanings, and inconsistent intra-language meanings all arise, and such conflicts often make translation impossible or render the translated text meaningless. Since the construction of the minority cultural discourse system must be disseminated through translation, it must coordinate these cultural conflicts so that they can be handled effectively; only then can the national cultural discourse system be translated smoothly and its influence expanded through translation. An important purpose of constructing the national minority discourse system is to explain the cultural connotations of the national culture itself, to express its own cultural understanding and claims, and above all to convey a cultural and ideological stance. To help other cultural groups overcome cultural conflict and understand and accept the national discourse system more readily, the system should be expressed as clearly as possible when constructed, so as to avoid ambiguity.

Ethnocultural legibility

In the translation process, differences in translators' cultural literacy and subjective preferences, as well as deviations in their understanding of linguistic connotations, mean that the same text can be translated with very different effects; since a text has only one true intention or style, different translations of a culture are bound to suffer some distortion. Because the text language reflects the discourse system behind it, from the translation perspective the extent to which translation distortion can be reduced depends largely on the linguistic characteristics chosen in constructing the national cultural discourse system. Moreover, since the main purpose of constructing the system is to let the national culture make its own voice heard and expand its influence, the language chosen must also allow the translated text to be genuinely recognized and understood by target-language audiences. Minority cultures differ greatly from popular culture in religious beliefs and cultural knowledge, and most other groups are relatively unfamiliar with them, making the connotations of national ideology and culture hard to recognize and understand. In view of the needs of translation and dissemination, the avoidance of translation distortion, and the popular understanding of ethnic culture, the construction of the ethnic cultural discourse system must therefore use language that is easy to understand, so that the cultural discourse system and its audience can enter into dialogue and communication.

Research on multilingual discourse systems under the theory of machine translation
Model validation analysis
Analysis of manual assessment results

In this subsection, following the manual evaluation method described above, the machine translation model of this paper is verified and analyzed in terms of fidelity and fluency. To lend the results reliability, four control models are introduced (CNN, RNN, SVM, and LSTM), and four translation evaluation experts (each proficient in Chinese, Cantonese, Hmong, and Mongolian, to ensure rigor) are selected to score the outputs of the different models. The manual evaluation results are analyzed in Fig. 4, where panels (a) and (b) show fidelity and fluency respectively, the horizontal axes list the four ethnic discourse corpora (Chinese, Cantonese, Hmong, and Mongolian), and the vertical axes give the fidelity and fluency index values. Figures 4(a) and 4(b) together show that this paper's model outperforms the other four control models on both indicators, which fully verifies its excellent performance under manual evaluation. For example, across the four ethnic discourse corpora the model handles lexical conversion and word-order switching well, performs excellently in word addition and deletion, and is also good at fusing simple sentences and splitting compound sentences. In addition, the model can handle grammatical phenomena such as reference, substitution, omission, conjunction, and repetition by analyzing the linguistic context and the relationships between adjacent sentences, and can carry out syntactic conversion freely, making the sentences more fluent and the translations more acceptable.

Figure 4. Analysis of manual evaluation results

Analysis of automated assessment results

As noted above, the automatic evaluation method works by calculating BLEU values. As in subsection 3.1.1, the training sets of the four ethnic discourse corpora (Chinese, Cantonese, Hmong, and Mongolian) are input into the models for training, the BLEU value of each model is calculated, and the BLEU values of the different models are compared using grouped columns; the automatic evaluation results are shown in Fig. 5. It can be seen that the BLEU values of this paper's model (0.904, 0.935, 0.945, and 0.946 on the four ethnic discourse corpora) exceed those of the other four models (CNN, RNN, SVM, and LSTM), underscoring the effectiveness of this paper's model in the multilingual ethnic discourse translation task, which is of great significance for constructing the multilingual discourse system of ethnic community consciousness.

Figure 5. Automatic evaluation results analysis

Evaluation of the effectiveness of multilingual discourse systems
Description of the experiment

Multilingual discourse professionals were selected as the research subjects of this experiment. The research sample was heavily screened, from an initial 138 people down to 18, and these 18 were evenly divided into an experimental group and a control group, with the experimental group adopting the multilingual discourse system under machine translation theory and the control group adopting the traditional multilingual discourse system. From the perspective of the paths and strategies for constructing the multilingual discourse system of national community consciousness, a scale test was used to obtain the evaluation index values of the multilingual discourse system (discourse adaptability, discourse coordination, and discourse consistency), and on the basis of these values the effectiveness of the system under machine translation theory was validated using independent-samples t-tests. The specific experimental analysis comprises the following three comparisons.
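A minimal sketch of the independent-samples t-test used throughout these comparisons (assuming scipy; the example scores are hypothetical, not the study's data):

```python
from scipy import stats

def compare_groups(experimental_scores, control_scores, alpha=0.05):
    """Independent-samples t-test on one assessment index, e.g. discourse adaptability."""
    t_stat, p_value = stats.ttest_ind(experimental_scores, control_scores)
    return t_stat, p_value, p_value < alpha  # True if the difference is significant

# Hypothetical scale scores for the two groups of nine subjects each:
# compare_groups([4.2, 4.5, 4.1, 4.4, 4.3, 4.6, 4.2, 4.5, 4.4],
#                [3.6, 3.8, 3.5, 3.7, 3.9, 3.6, 3.8, 3.7, 3.6])
```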

Pre-intervention comparative analysis

On the basis of the obtained assessment index values of the multilingual discourse system, the pre-intervention values of the experimental group and the control group were compared using independent-samples t-tests; the results of the pre-intervention comparison are shown in Fig. 6, where panels (a)~(c) show discourse adaptability, discourse coordination, and discourse consistency respectively, the angle indicates the sample number, and the radial length indicates the assessment index value. In terms of means, the differences in assessment index values (discourse adaptability, discourse coordination, discourse consistency) between the experimental and control groups before the intervention are small. The P-values of the assessment indicators likewise show no significant differences between the pre-intervention experimental and control groups, meeting the requirements of the experiment.

Figure 6. Comparative analysis before intervention

Comparative post-intervention analysis

After comparing the assessment indexes of the experimental and control groups before the intervention, the same independent-samples t-test was applied to the assessment index values of the two groups after the intervention; the post-intervention comparison results are shown in Fig. 7. It can be clearly seen that after a period of experimental intervention there are significant differences (P<0.05) between the assessment index values of the experimental group and the control group (discourse adaptability P=0.013, discourse coordination P=0.009, discourse consistency P=0.005), which proves the validity of the multilingual discourse system under machine translation theory.

Figure 7. Comparative analysis after intervention

Comparative analysis within groups

Finally, to strengthen the credibility of the research results, a within-group comparison of the assessment index values was added for the experimental and control groups; the within-group comparison results are shown in Fig. 8, where panels (a) and (b) show the control group and the experimental group respectively, A1, A2, and A3 denote discourse adaptability, discourse coordination, and discourse consistency, and the line segments of each box denote the upper limit, upper quartile, median, lower quartile, and lower limit. The control group shows no significant difference before and after the intervention (P=0.119>0.05), whereas the experimental group shows a significant difference in its evaluation indexes before and after the intervention (P=0.001<0.05), indicating that the machine translation model performs excellently in constructing the multilingual discourse system of national community consciousness.

Figure 8. Comparative analysis within the group

Conclusion

In this paper, we first construct a multilingual machine translation model and, with its support, propose paths and strategies for constructing the multilingual discourse system of national community consciousness. We then evaluate the effect of constructing the system by combining research data and evaluation indexes. In both manual and automatic assessment, compared with the four control models, this paper's model achieves excellent translation results on the four ethnic discourse corpora (Chinese, Cantonese, Hmong, and Mongolian). Taking the pre-intervention, post-intervention, and within-group analyses together, it can be concluded that the multilingual discourse system for national community consciousness supported by the machine translation model is effective.

Funding:

The article is sponsored by the Department of Education of Hubei Province with Project “Research on the Development Path of Translation Technology and the Cultivation of Translators’ Competence” (Project No. 22G086).
