Open Access

Construction of Evaluation System of Civic Education Work of College Students Based on Decision Tree Algorithm in New Media Era

  
24 Mar 2025


Introduction

With the deepening of educational reform, ideological and political education occupies an increasingly prominent position in higher education. As an important part of higher education, Civics education aims to cultivate students’ ideological and political qualities and guide them to establish a correct worldview, outlook on life, and values through course teaching [1-4]. However, how to scientifically and effectively evaluate the teaching effect of Civics courses in higher education has long been a focus of attention in the field of education. To realize the strategic goal of rejuvenating China through science and education, it is essential to comprehensively improve the ideological and political literacy of college students [5-7]. In the context of the new media era, guaranteeing scientific and effective ideological education requires building a reasonable evaluation system for students’ ideological education, so that ideological education in colleges and universities can play its full role [8-10].

The evaluation system of students’ civic and political education is an important tool for all-round, multi-level evaluation of Civics education: it can objectively assess teachers’ teaching effect, help teachers improve teaching methods and content, and enhance students’ interest in and effectiveness of learning [11-14]. At the same time, it is an important means for school administrators to supervise teaching quality, providing decision-making references for school leadership and promoting the continuous improvement and innovation of Civics teaching [15-18]. The decision tree algorithm can play an important role in constructing such a system.

Aiming at the problem of possibly redundant indicators in the existing evaluation index system for college students’ civic and political education work, this study analyzes the degree of correlation between the existing evaluation indicators by correlation analysis and eliminates strongly correlated indicators. The traditional ID3 decision tree algorithm is then optimized to address the potentially long running time of discretizing the evaluation data: an information-entropy-based approach is adopted for discretization, the improved information-entropy ID3 algorithm is used to generate a decision tree for college students’ civic education work, and the indicators are screened accordingly. Finally, subjective and objective weighting methods are applied to the screened index system, and a game-theory-based combination weighting method is introduced to establish a G1-CRITIC model for combining the weights, which improves the scientific validity of the weighting results.

Construction of an evaluation system for the work of university students in civic and political education
Evaluation index system for the work of university students’ civic and political education
Principles of construction

Principle of Policy Orientation

The purpose of college students’ Civic and Political Education is to improve students’ ideological awareness, moral cultivation, and political literacy. The Party’s policy reflects the requirements of China’s education sector and its expectations for educational targets, and is the core guidance for the evaluation system of ideological and political education. Since the 19th National Congress, China has frequently issued policies and documents on ideological and political education and has continually put forward new requirements and suggestions for this work. These policies and documents not only point the direction for the development of ideological and political education in China, but also provide a strong reference and theoretical basis for constructing its evaluation system.

Principle of Effectiveness

The principle of effectiveness is the quality requirement for the evaluation of college students’ ideological and political education. Evaluating the quality of ideological and political education helps to identify problems in the education process and propose corresponding measures, strengthens and improves the system of ideological and political education, and promotes its quality. The quality of the evaluation therefore determines the quality of the ideological and political work itself.

Dynamic principle

The evaluation of the ideological and political education of college students should be updated and improved with the development of contemporary education, the continuous construction of the Party, and China’s changing national conditions. The indicator weights can then be determined through the Delphi method: for example, external experts and the Party secretary of the college jointly make decisions, and the differences in the weights assigned to each indicator are narrowed through multiple rounds of solicitation until the experts’ opinions are basically consistent; the final summarized result serves as the reference basis for the indicator weights.

Construction of evaluation index system

According to preliminary surveys and consultation with relevant experts, the index system of college students’ civic education is shown in Table 1.

College students’ education index system

Primary indicator Secondary indicator
Thinking of education Ideal belief education (I1)
Situation and policy education (I2)
Mental health education (I3)
School wind construction Daily education management (I4)
Course learning (I5)
Time activity (I6)
Team construction Organization construction (I7)
Teacher team (I8)
Nurturing effect Student rewards and punishments (I9)
One-time employment (I10)
Decision Tree and Improved ID3 Algorithm
Decision Tree Algorithm Fundamentals

Decision Tree

A decision tree is a tree-shaped data structure. Branches extend from the root node to connect different child nodes; using the structure of relationships between child and parent nodes, any internal node or leaf node in the tree can be located.

Decision trees are also known as classification trees. Their structure is relatively simple: the root node branches into a number of subtrees, forming an attribute structure composed of branches of different categories. Each branch represents one set of test outputs, and internal nodes and leaf nodes are distributed along it. An internal node represents a test, while a leaf node represents the decision category of that branch. After several iterations, the predicted class value stored at a leaf node is found [19].

Classification Algorithm

The core decision tree algorithms are the classification methods ID3, C4.5, and CART. They are currently widely used for analyzing text, graphs, and other types of data.

Of the three core decision tree algorithms, CART uses recursive partitioning and eventually forms a binary tree structure. ID3 and C4.5 are similar in that both use information entropy to calculate gain; they differ only in how splits are chosen, with the former using information gain as the criterion and the latter adopting the information gain ratio as the main measure. Depending on the application, decision tree methods can classify and predict different types of data well.

Decision Tree Construction

The construction of a decision tree is realized through the calculation and analysis of attributes. The attributes of different nodes divide them into the internal nodes of the tree; the whole process splits on attribute values, and when no further splitting is possible, the node is defined as a leaf. The complete decision tree is obtained by repeating this process. Each path corresponds to one rule, and the whole tree corresponds to a set of rules.

Decision tree construction is divided into two phases: creating branch nodes, and pruning. In the first phase, the algorithm analyzes the given training sample dataset and, through classification iterations, finds all internal nodes until the leaf nodes of each branch are reached. Typically, this process can be implemented with existing open-source data-mining frameworks and software. The second phase, pruning, trims the tree to remove overfitting and redundancy and is a necessary step. Pruning methods are usually categorized as pre-pruning and post-pruning, and the method should be chosen with care according to the application.

Classical ID3 Algorithm

For a classification system, suppose X is the set of data samples, x is the number of samples in X, C is the attribute category variable taking the values C={C1,C2,C3...Cn}$$C = \left\{ {{C_1},{C_2},{C_3}...{C_n}} \right\}$$, n is the total number of categories, and xi is the number of samples contained in category Ci. Then the probability that any sample belongs to Ci is: pi=xi/x$${p_i} = \frac{{{x_i}}}{x}$$

The information entropy of the sample classification is: I(x1,x2...xn)=i=1npilog2pi$$I({x_1},{x_2}...{x_n}) = - \sum\limits_{i = 1}^n {{p_i}} {\log _2}{p_i}$$

Calculation of information entropy (per attribute)

Let attribute Y have m different values; these m values divide X into {X1,X2,...,Xm}$$\left\{ {{X_1},{X_2},...,{X_m}} \right\}$$. Let xij be the number of samples belonging to Ci in subset Xj; then the information entropy of Y is: E(Y)=j=1mx1j+x2j+...+xnjx×I(x1j,x2j,...,xnj)$$E(Y) = \sum\limits_{j = 1}^m {\frac{{{x_{1j}} + {x_{2j}} + ... + {x_{nj}}}}{x}} \times I({x_{1j}},\:{x_{2j}},...,\:{x_{nj}})$$

Calculation of information gain

The information gain Gain of attribute Y is then: Gain=I(x1,x2,...xn)E(Y)$$Gain = I({x_1},{x_2},...{x_n}) - E(Y)$$
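As an illustrative sketch (not code from the paper), the entropy and information gain formulas above can be computed directly in Python on a toy dataset; the data values here are assumptions for demonstration only.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """I(x1..xn) = -sum p_i * log2(p_i) over class frequencies."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    """Gain = I(parent) - E(Y): parent entropy minus the weighted
    entropy of the subsets induced by the values of attribute Y."""
    total = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(label)
    expected = sum(len(s) / total * entropy(s) for s in subsets.values())
    return entropy(labels) - expected

# Toy data: one attribute with values 'a'/'b', binary class labels G/B.
rows = [('a',), ('a',), ('b',), ('b',)]
labels = ['G', 'G', 'B', 'B']
print(info_gain(rows, labels, 0))  # perfectly separating attribute -> 1.0
```

A perfectly separating attribute reduces the subset entropies to zero, so the gain equals the parent entropy.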

Information entropy based algorithm improvement

This subsection focuses on the improvement method to the ID3 algorithm, and the principle of the improvement is described in detail.

Calculation of information entropy

The classical ID3 algorithm was introduced in the previous section. Its calculation requires many logarithm operations; this is undemanding for small datasets, but for large datasets computational performance becomes particularly important. This paper therefore transforms the original logarithm operation, optimizing the algorithm by means of a power-series expansion that converts the complex operation into faster addition, subtraction, multiplication, and division. The specific improvement steps are as follows:

Let pi be expressed in terms of an auxiliary variable si by setting: pi=(1-si)/(1+si)$${p_i} = \frac{{1 - {s_i}}}{{1 + {s_i}}}$$

Solving for si: si=(1-pi)/(1+pi)$${s_i} = \frac{{1 - {p_i}}}{{1 + {p_i}}}$$

Substituting this expression for pi into the information entropy formula yields: i=1n1si1+silog21si1+si=i=1n1si1+siln1si1+siln2$$ - \sum\limits_{i = 1}^n {\frac{{1 - {s_i}}}{{1 + {s_i}}}} {\log _2}\frac{{1 - {s_i}}}{{1 + {s_i}}} = - \sum\limits_{i = 1}^n {\frac{{1 - {s_i}}}{{1 + {s_i}}}} \frac{{\ln \frac{{1 - {s_i}}}{{1 + {s_i}}}}}{{\ln 2}}$$

From the power series expansion of the logarithm, this can be rewritten as: 2ln2i=1n1si1+si(si+13si3+15si5+17si7+...)$$\frac{2}{{\ln 2}}\sum\limits_{i = 1}^n {\frac{{1 - {s_i}}}{{1 + {s_i}}}} \left( {{s_i} + \frac{1}{3}s_i^3 + \frac{1}{5}s_i^5 + \frac{1}{7}s_i^7 + ...} \right)$$

The accuracy of the expansion increases as higher powers are retained. To simplify the calculation, and because the calculation only compares magnitudes of information entropy, the first two terms of the expansion are taken and organized to obtain: 2ln2i=1n1si1+si(si+13si3)=23ln2i=1nsi(1si)(3+si2)1+si$$\frac{2}{{\ln 2}}\sum\limits_{i = 1}^n {\frac{{1 - {s_i}}}{{1 + {s_i}}}} \left( {{s_i} + \frac{1}{3}s_i^3} \right) = \frac{2}{{3\ln 2}}\sum\limits_{i = 1}^n {\frac{{{s_i}(1 - {s_i})(3 + s_i^2)}}{{1 + {s_i}}}}$$

Substituting si = (1 − pi)/(1 + pi) back yields: 83ln2i=1npi(1pi3)(1+pi)3$$\frac{8}{{3\ln 2}}\sum\limits_{i = 1}^n {\frac{{{p_i}(1 - p_i^3)}}{{{{(1 + {p_i})}^3}}}}$$

Substituting pi = xi/x into the previous expression yields: 83xln2i=1nxi(x3xi3)(x+xi)3$$\frac{8}{{3x\ln 2}}\sum\limits_{i = 1}^n {\frac{{{x_i}({x^3} - x_i^3)}}{{{{(x + {x_i})}^3}}}}$$

The factor 83xln2$$\frac{8}{{3x\ln 2}}$$ in the above equation is a constant at a given node (x is the same for every attribute there), so it does not affect the comparison of magnitudes and can be omitted. The final formula for comparing information entropies is: 1xi=1nxi(x3xi3)(x+xi)3$$\frac{1}{x}\sum\limits_{i = 1}^n {\frac{{{x_i}({x^3} - x_i^3)}}{{{{(x + {x_i})}^3}}}}$$
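The surrogate formula can be checked against the exact entropy on small examples. Since the constant 8/(3 ln 2) is dropped, only the ordering of values matters when comparing attributes. A minimal sketch (illustrative counts, not data from the paper):

```python
from math import log2

def exact_entropy(counts):
    """Exact entropy from raw class counts x_i with x = sum(x_i)."""
    x = sum(counts)
    return -sum((xi / x) * log2(xi / x) for xi in counts if xi)

def approx_entropy(counts):
    """Truncated power-series surrogate (1/x) * sum x_i (x^3 - x_i^3) / (x + x_i)^3.
    Proportional to the two-term expansion, so it preserves the ordering
    used when comparing attribute entropies."""
    x = sum(counts)
    return sum(xi * (x**3 - xi**3) / (x + xi)**3 for xi in counts) / x

balanced, skewed = [5, 5], [9, 1]
# Both measures rank the balanced split as higher-entropy than the skewed one.
print(approx_entropy(balanced) > approx_entropy(skewed))  # True
```

The surrogate uses only addition, multiplication, and division, matching the stated motivation of avoiding repeated logarithm calls.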

Create decision tree

When creating a decision tree, the attribute with the largest information gain is selected as the splitting node. Since I(x1, x2, ..., xn) is fixed for a given node, the attribute information gain depends mainly on the attribute information entropy. The construction process is top-down: at the start of classification, all the data are concentrated in the root node, the attribute entropy of the node is calculated, and the data are initially divided according to the resulting information gain. Classification is often carried out according to an agreed statistical procedure or heuristic rules. When an attribute can no longer be split, splitting stops and the remaining samples are merged into the same class.
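The top-down construction described above can be sketched as a small recursive ID3 in Python. This is an illustrative sketch on toy data, not the paper’s implementation; attribute indices and values are assumptions.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attrs):
    """Pick the attribute with the largest information gain."""
    def gain(a):
        subsets = {}
        for row, lab in zip(rows, labels):
            subsets.setdefault(row[a], []).append(lab)
        return entropy(labels) - sum(len(s) / len(labels) * entropy(s)
                                     for s in subsets.values())
    return max(attrs, key=gain)

def id3(rows, labels, attrs):
    """Top-down construction: stop at pure nodes or when attributes run out."""
    if len(set(labels)) == 1:
        return labels[0]                       # pure leaf
    if not attrs:
        return Counter(labels).most_common(1)[0][0]  # majority leaf
    a = best_attribute(rows, labels, attrs)
    node = {"attr": a, "children": {}}
    for value in set(row[a] for row in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[a] == value]
        srows, slabels = zip(*sub)
        node["children"][value] = id3(list(srows), list(slabels),
                                      [x for x in attrs if x != a])
    return node

rows = [("high", "yes"), ("high", "no"), ("low", "yes"), ("low", "no")]
labels = ["G", "G", "B", "B"]
tree = id3(rows, labels, [0, 1])
print(tree)  # splits on attribute 0, which alone determines the class
```

Swapping `entropy` for the surrogate of the previous subsection would not change the chosen splits, since only the ordering of entropies matters.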

Determination of indicator weights by combinatorial weighting methods
Sequential Relationship Approach to Determine Subjective Weights

The method and steps for determining the subjective weight coefficients by the ordinal relationship method are as follows:

Determine the ordinal relationship between evaluation indicators

If the importance of evaluation indicator xi in relation to the evaluation objective is greater than or equal to that of xj, this is denoted xi > xj. An ordinal relationship is established for a group of evaluation indicators when the indicators x1, x2, ⋯, xm satisfy x1*x2*xm*$$x_1^* \geq x_2^* \geq \cdots \geq x_m^*$$ in relation to the evaluation objective, where xj*$$x_j^*$$ denotes the jth evaluation indicator after ranking according to the ordinal relationship.

Determine the ratio of relative importance between neighboring indicators.

Calculate the weight wi of evaluation indicator xi. Let the ratio of the importance of evaluation indicators xz1*$$x_{z - 1}^*$$ and xz*$$x_z^*$$ be rz, i.e.: w(z1)*/wz*=rz(z=m,m1,....,2)$$w_{(z - 1)}^*/w_z^* = {r_z}\quad (z = m,m - 1,....,2)$$

Where wz1*$$w_{z - 1}^*$$ and wz*$$w_z^*$$ are the weights of the (z − 1)th and zth indicators respectively. Before calculation, weight wz*$$w_z^*$$ is unknown; rz can be obtained by expert scoring.

Calculate the weight coefficient of each evaluation indicator

Obviously wz2*>wz*$$w_{z - 2}^* > w_z^*$$, and because wz − 1* > 0, it follows that wz2*/wz1*>wz*/wz1*$$w_{z - 2}^*/w_{z - 1}^* > w_z^*/w_{z - 1}^*$$, i.e. rz − 1 > 1/rz. Because: i=zmri=wz1*wz*wz*wz+1*wz+1*wz+2*wm2*wm1*wm1*wm*=wz1*wm*z2$$\prod\limits_{i = z}^m {{r_i}} = \frac{{w_{z - 1}^*}}{{w_z^*}}\frac{{w_z^*}}{{w_{z + 1}^*}}\frac{{w_{z + 1}^*}}{{w_{z + 2}^*}} \cdots \frac{{w_{m - 2}^*}}{{w_{m - 1}^*}}\frac{{w_{m - 1}^*}}{{w_m^*}} = \frac{{w_{z - 1}^*}}{{w_m^*}}\quad z \geq 2$$

Summing over z from 2 to m has: z=2mi=zmri=z=2mwz1*wm*=1wm*(w1*+w2*++wm1*)=1wm*(z=1mwz*wm*)$$\sum\limits_{z = 2}^m {\prod\limits_{i = z}^m {{r_i}} } = \sum\limits_{z = 2}^m {\frac{{w_{z - 1}^*}}{{w_m^*}}} = \frac{1}{{w_m^*}}(w_1^* + w_2^* + \cdots + w_{m - 1}^*) = \frac{1}{{w_m^*}}(\sum\limits_{z = 1}^m {w_z^*} - w_m^*)$$

And because z=1mwz*=1$$\sum\limits_{z = 1}^m {w_z^*} = 1$$, therefore: z=2mi=zmri=1wm*(1wm*)=1/wm*1$$\sum\limits_{z = 2}^m {\prod\limits_{i = z}^m {{r_i}} } = \frac{1}{{w_m^*}}(1 - w_m^*) = 1/w_m^* - 1$$

Then wm can be solved for: wm=(1+z=2mi=zmri)1$${w_m} = {\left( {1 + \sum\limits_{z = 2}^m {\prod\limits_{i = z}^m {{r_i}} } } \right)^{ - 1}}$$

According to the formula: wz1=rzwz(z=m,m1,,2)$${w_{z - 1}} = {r_z}\:{w_z}\:(z = m,m - 1, \cdots ,2)$$

The weighting coefficients for each indicator can be derived.
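The ordinal-relation (G1) procedure above reduces to a short backward recursion once the expert ratios rz are given. A minimal sketch, with illustrative ratios rather than values from the paper:

```python
def g1_weights(r):
    """G1 ordinal-relation weights.

    r is the list [r_2, r_3, ..., r_m] of expert ratios w_{z-1}/w_z,
    for indicators already sorted from most to least important.
    Returns weights [w_1, ..., w_m] summing to 1."""
    m = len(r) + 1
    # sum over z of prod_{i=z}^m r_i, accumulated backwards
    total, prod = 0.0, 1.0
    for rz in reversed(r):
        prod *= rz
        total += prod
    w = [0.0] * m
    w[m - 1] = 1.0 / (1.0 + total)       # w_m = (1 + sum of products)^-1
    for z in range(m - 1, 0, -1):
        w[z - 1] = r[z - 1] * w[z]       # w_{z-1} = r_z * w_z
    return w

w = g1_weights([1.2, 1.4])  # three indicators, hypothetical expert ratios
print([round(x, 4) for x in w])  # most important indicator gets the largest weight
```

The ratios here (1.2, 1.4) are assumptions; in the paper they would come from expert scoring via the Delphi-style consultation described earlier.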

Improvement of the CRITIC method for determining objective weights

Data standardization

Before using the CRITIC weighting method [20], the indicators are first standardized. Commonly used standardization methods include the extremely-large-value method, the extremely-small-value method, and the eigenvalue assignment method. The extremely-large-value method is suitable for standardizing indicators that are positively related to the evaluation objective: the larger the indicator value, the larger the standardized value. Conversely, the extremely-small-value method applies to indicators that are negatively related to the evaluation objective: the larger the indicator value, the smaller the standardized value.

Extremely-large-value method: standard_resulti=(xij−min(xj))/(max(xj)−min(xj))$$\mathrm{standard\_result}_i = \frac{{{x_{ij}} - \min ({x_j})}}{{\max ({x_j}) - \min ({x_j})}}$$

Extremely-small-value method: standard_resulti=(max(xj)−xij)/(max(xj)−min(xj))$$\mathrm{standard\_result}_i = \frac{{\max ({x_j}) - {x_{ij}}}}{{\max ({x_j}) - \min ({x_j})}}$$

Where: xij represents the jth indicator of the ith object to be evaluated; max(xj) is the maximum value of the jth indicator data; min(xj) is the minimum value of the jth indicator data.

Calculation of the correlation coefficient and the value of the quantitative indicator of conflictability

The correlation coefficient between the ith indicator and the jth indicator can be calculated by the following formula: rij=Σ(xixi¯)(xjxj¯)/√(Σ(xixi¯)²Σ(xjxj¯)²),ij$${r_{ij}} = \frac{{\sum {({x_i} - \overline {{x_i}} )({x_j} - \overline {{x_j}} )} }}{{\sqrt {\sum {{{({x_i} - \overline {{x_i}} )}^2}} \sum {{{({x_j} - \overline {{x_j}} )}^2}} } }},\quad i \ne j$$

Where: xi and xj represent the values of the two variables, and x¯i$${\bar x_i}$$ and x¯j$${\bar x_j}$$ their means. When the scale-dependent standard deviation is used to reflect the discriminating power of the indicators, their different scales and orders of magnitude mean that standard deviations cannot be compared directly; instead, the standard deviation coefficient Sj should be used to measure discriminating power: Sj=σjx¯jj=1,2......m$${S_j} = \frac{{{\sigma _j}}}{{{{\bar x}_j}}}\quad j = 1,2......m$$

Where: Sj is the standard deviation coefficient of indicator j, σj=i=1n(standard_resultijx¯j)2n1$${\sigma _j} = \sqrt {\frac{{\sum\limits_{i = 1}^n {{{(\mathrm{standard\_result}_{ij} - {{\bar x}_j})}^2}} }}{{n - 1}}}$$ is the standard deviation of indicator j, and xj¯$$\overline {{x_j}}$$ is the mean of indicator j.

The conflict measure of the jth indicator with the other indicators is: Aj=i=1n(1rij),ij$${A_j} = \sum\limits_{i = 1}^n {(1 - {r_{ij}})} ,\:i \ne j$$

When this formula is used to calculate the conflict measure Aj, the correlation coefficient between indicators i and j may be negative, yet positive and negative correlations with the same absolute value should reflect the same degree of association between the indicators. It is therefore unreasonable to measure conflict this way. Improving on this point, the following formula avoids the problem: Aj'=i=1n(1|rij|),ij$$A_j' = \sum\limits_{i = 1}^n {(1 - |{r_{ij}}|)} , i \ne j$$

Calculating the information content of indicators

The objective weight of each indicator is measured by combining contrast intensity and conflict. Let Cj denote the amount of information contained in the jth indicator factor; the formula for calculating Cj is as follows: Cj=Sj×Aj=σjx¯ji=1n(1|rij|),ij,j=1,2,n$${C_j} = {S_j} \times {A_j} = \frac{{{\sigma _j}}}{{{{\bar x}_j}}}\sum\limits_{i = 1}^n {(1 - |{r_{ij}}|)} ,\:i \ne j,\:j = 1,2, \cdots n$$

Calculation of indicator weights Wj=Cji=1nCj,j=1,2,,n$${W_j} = \frac{{{C_j}}}{{\sum\limits_{i = 1}^n {{C_j}} }},\quad j = 1,\:2, \cdots ,n$$

Where: the larger Cj is, the greater the amount of information contained in the jth evaluation indicator and the greater its relative importance, i.e., the greater its weight.
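The improved CRITIC steps (standard deviation coefficient, absolute-value conflict measure, information content, normalization) can be sketched as follows. The data columns are illustrative assumptions and are taken as already standardized:

```python
from statistics import mean, stdev

def pearson(a, b):
    """Plain Pearson correlation between two equal-length lists."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def critic_weights(columns):
    """Improved CRITIC: C_j = (sigma_j / mean_j) * sum_i (1 - |r_ij|),
    W_j = C_j / sum C_j. `columns` holds standardized indicator columns."""
    m = len(columns)
    C = []
    for j in range(m):
        s_j = stdev(columns[j]) / mean(columns[j])       # std-dev coefficient S_j
        a_j = sum(1 - abs(pearson(columns[i], columns[j]))
                  for i in range(m) if i != j)            # conflict A'_j with |r|
        C.append(s_j * a_j)
    total = sum(C)
    return [c / total for c in C]

# Three hypothetical standardized indicator columns over four objects.
cols = [[0.2, 0.4, 0.9, 0.6], [0.8, 0.1, 0.3, 0.7], [0.5, 0.5, 0.6, 0.4]]
w = critic_weights(cols)
print([round(x, 3) for x in w])  # objective weights, summing to 1
```

Using |r| in the conflict term means a strong negative correlation is penalized the same as a strong positive one, which is exactly the improvement argued for above.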

Improving game theory ideas to determine portfolio weights

Game theory is an operations research method for studying competitive situations. Drawing on its ideas, the subjective and objective weights are regarded as decision-making subjects in a non-cooperative game; the two sides seek a balance of interests amid continuous conflict, realizing an optimal combination of weights and making the indicator weighting more scientific and reasonable [21]. The specific process is as follows:

Assume there are n weighting methods. The indicator weights calculated by the first method are denoted w1 = (w11, w12, w13, ⋯w1m), those of the second method w2 = (w21, w22, w23, ⋯w2m), and so on, up to the nth method wn = (wn1, wn2, wn3, ⋯wnm). A combination weight is then constructed as a linear combination of w1, w2, ⋯wn: w=α1w1T+α2w2T+αnwnT$$w = {\alpha _1}w_1^T + {\alpha _2}w_2^T + \cdots {\alpha _n}w_n^T$$

Where α1, α2, ⋯, αn are the combination coefficients of the respective weighting methods.

The Nash equilibrium point is solved according to the principle of game theory, i.e., finding equilibrium between different weights and minimizing the deviation between the combination of weights and each weight, with the objective function and constraints: f=mina1,a2,ani=1n|(k=1nakwiwkT)wiwiT|$$f = {\min {_{{a_1},{a_2}, \cdots {a_n}}}}\sum\limits_{i = 1}^n {\left| {(\sum\limits_{k = 1}^n {{a_k}} {w_i}w_k^T) - {w_i}w_i^T} \right|}$$ s.t.k=1nak=1$${\text{s}}.{\text{t}}.\sum\limits_{k = 1}^n {{a_k}} = 1$$

By solving the model, the optimal combination of weights under the joint consideration of each weighting method can be obtained. A system of linear equations is established as follows: $$\left( {\begin{array}{ccc} {{w_1}w_1^T}& \cdots &{{w_1}w_n^T} \\ \vdots & \ddots & \vdots \\ {{w_n}w_1^T}& \cdots &{{w_n}w_n^T} \end{array}} \right)\left( {\begin{array}{c} {{\alpha _1}} \\ \vdots \\ {{\alpha _n}} \end{array}} \right) = \left( {\begin{array}{c} {{w_1}w_1^T} \\ \vdots \\ {{w_n}w_n^T} \end{array}} \right)$$

The combination coefficients α1, α2, …, αn obtained from this system are normalized, i.e., αi*=ai/inai$$\alpha _i^* = {a_i}/\sum\limits_i^n {{a_i}}$$, giving the final combination weights: w*=α1*w1T+α2*w2T++αn*wnT$${w^*} = \alpha _1^*w_1^T + \alpha _2^*w_2^T + \cdots + \alpha _n^*w_n^T$$

The combination weight thus depends on the linear combination coefficients α1, α2, ⋯, αn obtained by solving the system of linear equations above. However, not every coefficient so obtained is positive; if negative values appear among α1, α2, ⋯, αn, the assumptions of the model are not satisfied. For this reason, this paper uses an improved game-theoretic combination weighting method.

The objective function f above is rewritten to ensure that the combination coefficients α1, α2, ⋯, αn are greater than zero: f=mina1,a2,ani=1n|(k=1nakwiwkT)wiwiT|$$f = {\min {_{{a_1},{a_2}, \cdots {a_n}}}}\sum\limits_{i = 1}^n {\left| {(\sum\limits_{k = 1}^n {{a_k}} {w_i}w_k^T) - {w_i}w_i^T} \right|}$$

The weight combination factor ak(k = 1, 2⋯n) satisfies the following conditions: k=1nak=1$$\sum\limits_{k = 1}^n {{a_k}} = 1$$

Or: k=1nak2=1$$\sum\limits_{k = 1}^n {a_k^2} = 1$$

Combining the objective function with these constraints, the optimization model is obtained as: $$\left\{ {\begin{array}{l} {f = \mathop {\min }\limits_{{a_1},{a_2}, \cdots ,{a_n}} \sum\limits_{i = 1}^n {\left| {(\sum\limits_{k = 1}^n {{a_k}} {w_i}w_k^T) - {w_i}w_i^T} \right|} } \\ {s.t.\:{a_k} > 0,\:k = 1,2 \cdots n,\quad \sum\limits_{k = 1}^n {a_k^2} = 1} \end{array}} \right.$$

The following Lagrangian function is established to solve the model: L(ak,γ)=i=1n|(k=1nakwiwkT)wiwiT|+γ2(k=1nak21)$$L({a_k},\gamma ) = \sum\limits_{i = 1}^n {\left| {(\sum\limits_{k = 1}^n {{a_k}} {w_i}w_k^T) - {w_i}w_i^T} \right|} + \frac{\gamma }{2}(\sum\limits_{k = 1}^n {a_k^2} - 1)$$

Taking partial derivatives with respect to ak(k = 1, 2⋯n) and γ, the conditions for an extremum give: Lak=i=1nwiwkT+γak=0$$\frac{{\partial L}}{{\partial {a_k}}} = - \sum\limits_{i = 1}^n {{w_i}} w_k^T + \gamma \:{a_k} = 0$$ Lγ=k=1nak21=0$$\frac{{\partial L}}{{\partial \gamma }} = \sum\limits_{k = 1}^n {a_k^2} - 1 = 0$$

From the first condition: ak=i=1nwiwkTγ$${a_k} = \frac{{\sum\limits_{i = 1}^n {{w_i}} w_k^T}}{\gamma }$$

Substituting this into the constraint on the ak yields: γ=±k=1n(i=1nwiwkT)2$$\gamma = \pm \sqrt {\sum\limits_{k = 1}^n {{{\left( {\sum\limits_{i = 1}^n {{w_i}} w_k^T} \right)}^2}} }$$

Substituting γ back gives: ak=i=1nwiwkT±k=1n(i=1nwiwkT)2$${a_k} = \frac{{\sum\limits_{i = 1}^n {{w_i}} w_k^T}}{{ \pm \sqrt {\sum\limits_{k = 1}^n {{{\left( {\sum\limits_{i = 1}^n {{w_i}} w_k^T} \right)}^2}} } }}$$

Since the combination coefficient ak > 0, the positive root is taken: ak=i=1nwiwkTk=1n(i=1nwiwkT)2$${a_k} = \frac{{\sum\limits_{i = 1}^n {{w_i}} w_k^T}}{{\sqrt {\sum\limits_{k = 1}^n {{{\left( {\sum\limits_{i = 1}^n {{w_i}} w_k^T} \right)}^2}} } }}$$

This is the unique solution satisfying the optimization model. Finally, ak(k = 1, 2⋯n) is normalized to obtain ak*$$a_k^*$$: ak*=i=1nwiwkTk=1ni=1nwiwkT$$a_k^* = \frac{{\sum\limits_{i = 1}^n {{w_i}} w_k^T}}{{\sum\limits_{k = 1}^n {\sum\limits_{i = 1}^n {{w_i}} w_k^T} }}$$

Substituting ak*$$a_k^*$$ into the following equation yields the value of portfolio weights as: w*=k=1nak*wkT,(k=1,2n)$${w^*} = \sum\limits_{k = 1}^n {a_k^*} w_k^T\:,\:(k = 1,2 \cdots n)$$
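Under the positivity assumption, the normalized coefficients have the closed form above, which makes the computation straightforward. A minimal sketch, with two hypothetical weight vectors standing in for the subjective (G1) and objective (CRITIC) results:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def combine_weights(weight_sets):
    """Improved game-theoretic combination: a_k* proportional to
    sum_i <w_i, w_k>, normalized to sum to 1 (all terms are positive
    since the weight vectors are positive); w* = sum_k a_k* w_k."""
    scores = [sum(dot(wi, wk) for wi in weight_sets) for wk in weight_sets]
    total = sum(scores)
    a = [s / total for s in scores]                    # normalized a_k*
    m = len(weight_sets[0])
    return [sum(a[k] * weight_sets[k][j] for k in range(len(weight_sets)))
            for j in range(m)]

w_subj = [0.40, 0.35, 0.25]   # hypothetical G1 subjective weights
w_obj  = [0.30, 0.30, 0.40]   # hypothetical CRITIC objective weights
w_star = combine_weights([w_subj, w_obj])
print([round(x, 4) for x in w_star])  # convex combination, sums to 1
```

Because the coefficients are positive and sum to 1, each combined weight lies between the corresponding subjective and objective weights, which is the balance-of-interests behavior the game-theoretic formulation aims for.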

Evaluation of the work of political education for university students
Data Collection and Redundancy Indicator Removal
Objects of collection

Twenty-seven teachers from each of five colleges (Computer Science and Engineering, Mechanical Engineering, Electrical and Automation Engineering, Mathematics and Statistics, and Foreign Languages) at a university in City A were randomly selected as subjects. The 27 teachers selected from each college span young, middle-aged, and senior age groups; their title ranks include assistant professor, lecturer, associate professor, and professor; and their degrees include doctoral, master’s, and bachelor’s. In addition, 44 students were randomly selected from each teacher’s class. The decision-tree-based evaluation of Civics teaching is carried out on these Civics education classroom teaching data.

Acquisition of data

The survey mainly used a questionnaire method involving the distribution and retrieval of questionnaires. The survey was conducted from April to June 2023; 656 student questionnaires were distributed, 423 were recovered, and 389 of the recovered questionnaires were valid.

Redundant indicators elimination

In this experiment, the dataset DATA is used to carry out correlation analysis of students’ civic education work. First, the data are tested for normality; given the large amount of data, the K-S test is used, and some indicators have significance p < 0.05, indicating that the evaluation dataset does not satisfy a normal distribution, so the Pearson correlation coefficient is not chosen. Kendall correlation analysis is suited to consistency checks of data such as judges’ scores and rankings, while the Spearman correlation coefficient has more relaxed data requirements and suits quantitative data that do not satisfy a normal distribution. Therefore, Spearman’s correlation coefficient is used in this section as the method of correlation analysis.

Correlation analysis was carried out using the SPSS tool with the Spearman correlation coefficient selected; the results are shown in Table 2.

Results of the correlation analysis of the liberal arts data

I1 I2 I3 I4 I5 I6 I7 I8 I9 I10
I1 1 0.354 0.327 0.345 0.46 0.406 0.346 0.431 0.313 0.272
I2 0.354 1 0.336 0.406 0.401 0.416 0.125 0.439 0.37 0.028
I3 0.327 0.336 1 0.452 0.469 0.228 0.346 0.384 0.484 0.06
I4 0.345 0.406 0.452 1 0.442 0.308 0.285 0.621 0.316 0.094
I5 0.46 0.401 0.469 0.442 1 0.318 0.377 0.461 0.417 0.116
I6 0.406 0.416 0.228 0.308 0.318 1 0.373 0.431 0.345 0.095
I7 0.346 0.125 0.346 0.285 0.377 0.373 1 0.058 0.59 0.628
I8 0.431 0.439 0.384 0.621 0.461 0.431 0.058 1 0.4 0.315
I9 0.313 0.37 0.484 0.316 0.417 0.345 0.59 0.4 1 0.406
I10 0.272 0.028 0.06 0.094 0.116 0.090 0.628 0.315 0.406 1

The table shows a high degree of correlation between indicator 7 and indicator 10, so one redundant indicator must be removed from the pair. This paper uses the variable-correlation removal method: for each of the two indicators, the mean correlation with all other variables is calculated and compared. The mean correlation of indicator 7 is 0.347, which is larger than that of indicator 10, 0.223; therefore indicator 7 is deleted. The dataset of students’ civic education work with redundant indicator 7 removed is denoted DATA2.
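The variable-correlation removal step can be sketched as follows, using the I7 and I10 rows of Table 2 (diagonal 1s excluded); the function name is illustrative, not from the paper:

```python
def drop_most_redundant(corrs, a, b):
    """Given each indicator's correlations with the other indicators,
    drop whichever of a, b has the larger mean correlation (i.e. is
    more redundant with the rest of the system)."""
    mean = lambda xs: sum(xs) / len(xs)
    return a if mean(corrs[a]) > mean(corrs[b]) else b

# Correlation values for I7 and I10 taken from Table 2.
corrs = {
    "I7":  [0.346, 0.125, 0.346, 0.285, 0.377, 0.373, 0.058, 0.59, 0.628],
    "I10": [0.272, 0.028, 0.06, 0.094, 0.116, 0.090, 0.628, 0.315, 0.406],
}
print(drop_most_redundant(corrs, "I7", "I10"))  # I7 (mean 0.347 > 0.223)
```

Running this on the paper’s values reproduces the decision reported above: indicator 7 is the one removed.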

Decision Tree Based Evaluation Indicator Screening

In this section, the experimental data are divided into a training set and a test set in the ratio 7:3, and a decision tree is constructed for the dataset of students' civic education work, as shown in Fig. 1, where B stands for "Bad" and G stands for "Good". Observing the decision tree of the dataset of students' civic education work shows that indicator 1 and indicator 9 are deleted: during generation of the decision tree, the remaining indicators already suffice to determine the evaluation level, so these two indicators play no role in the generated model and are therefore treated as ineffective indicators of the evaluation system of students' civic education work.
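As a stand-in sketch of this step (the paper's improved information-entropy ID3 algorithm is not reproduced here), a scikit-learn decision tree with the entropy criterion can be trained on synthetic evaluation data with a 7:3 split; the data, labels, and threshold rule are all assumptions for illustration.

```python
# Illustrative stand-in for the paper's improved-ID3 tree: a scikit-learn
# decision tree with the entropy criterion on synthetic evaluation data,
# split 7:3 into training and test sets as described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Hypothetical 5-point scores on the nine remaining indicators.
X = rng.integers(1, 6, size=(300, 9)).astype(float)
# Hypothetical label: "G" (Good) when the mean score is high, else "B" (Bad).
y = np.where(X.mean(axis=1) >= 3.0, "G", "B")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_train, y_train)
acc = tree.score(X_test, y_test)

# Indicators never used for a split contribute nothing to the model and
# can be treated as ineffective, mirroring the deletion of indicators 1
# and 9 in the paper.
unused = np.where(tree.feature_importances_ == 0)[0]
print(round(acc, 3), unused)
```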

Figure 1.

The decision tree model of the liberal arts

In this section, the constructed decision tree model is applied to the test set of students' civic education work, and a confusion matrix is used to evaluate the model; the evaluation metrics are shown in Table 3. The per-class precision of the decision tree on the dataset of students' civic education work lies between 88% and 93%, and the overall accuracy of the decision tree reaches 91%.

Evaluation metrics of the decision tree for the liberal arts data

Accuracy rate Sensitivity F metric Class
0.934 0.895 0.924 GOOD
0.885 0.887 0.881 BAD
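The per-class metrics in Table 3 can be derived from a confusion matrix as follows. The counts below are hypothetical, chosen only to illustrate the computation, not the paper's actual predictions.

```python
# Per-class precision ("accuracy rate"), sensitivity (recall) and
# F-measure computed from a hypothetical 2x2 confusion matrix.
import numpy as np

# Rows: true class (G, B); columns: predicted class (G, B).
# Counts are illustrative only.
cm = np.array([[57, 6],
               [4, 33]])

for i, label in enumerate(["GOOD", "BAD"]):
    tp = cm[i, i]
    precision = tp / cm[:, i].sum()   # "accuracy rate" per class
    recall = tp / cm[i, :].sum()      # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    print(label, round(precision, 3), round(recall, 3), round(f1, 3))

overall_acc = np.trace(cm) / cm.sum()  # overall accuracy
print(round(overall_acc, 3))
```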
Calculation of indicator weights
Calculating subjective weights

The subjective weights of Civic Education, Academic Style Construction, Team Building, and Nurturing Effect are calculated; the subjective weights finally determined by the group G1 method are shown in Table 4.

Group G1 subjective weight calculation results

Primary indicator Primary index weight Secondary indicator Secondary index weight
Thinking of education 0.3415 Situation and policy education 0.2826
Thinking of education 0.3415 Daily education management 0.0589
School wind construction 0.2467 Course learning 0.0621
School wind construction 0.2467 Time activity 0.1105
School wind construction 0.2467 Teacher team 0.0741
Team construction 0.1278 One-time employment 0.1278
Childbearing effect 0.2840 Mental health education 0.2840
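The group G1 (ordinal relation) weighting used for the subjective weights can be sketched as follows. The expert importance ratios below are hypothetical, not the paper's survey values; only the computation scheme is illustrated.

```python
# Minimal sketch of the G1 (ordinal relation) subjective weighting:
# indicators are ranked by importance and an expert supplies importance
# ratios r_k = w_{k-1} / w_k; the last weight follows from normalisation
# and the rest by back-substitution.
def g1_weights(ratios):
    # ratios[k-1] is r_{k+1} = w_k / w_{k+1}, for k = 1 .. n-1.
    prods, acc = [], 1.0
    for r in reversed(ratios):
        acc *= r
        prods.append(acc)          # cumulative products r_n, r_n r_{n-1}, ...
    w_last = 1.0 / (1.0 + sum(prods))
    weights = [w_last]
    for r in reversed(ratios):
        weights.append(weights[-1] * r)   # w_{k-1} = r_k * w_k
    return list(reversed(weights))

# Four primary indicators ranked most to least important
# (hypothetical expert ratios).
w = g1_weights([1.2, 1.4, 1.1])
print([round(x, 4) for x in w], round(sum(w), 4))
```

The weights always sum to one and decrease with the expert's ranking, which is the defining property of the method.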
Calculation of objective weights

On the basis of data normalization, SPSS software was used to calculate the variability σj, conflict Rj and information content Cj of each indicator, yielding the objective weight values calculated by the CRITIC method shown in Table 5:

CRITIC objective weight calculation results for the liberal arts data

Secondary indicator Index variability Index conflict Information content Weighting
Situation and policy education 2.102 1.327 2.774 9.45%
Mental health education 2.071 1.166 2.400 17.92%
Daily education management 2.071 1.262 2.597 10.72%
Course learning 2.077 1.533 3.167 20.05%
Time activity 2.074 1.455 3.001 16.37%
Teacher team 2.153 1.339 2.866 12.92%
One-time employment 2.071 1.241 2.555 13.55%
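The CRITIC computation described above can be sketched as follows. The score matrix is synthetic (the paper's DATA2 set is not available here); only the variability-conflict-information pipeline is illustrated.

```python
# Sketch of CRITIC objective weighting: variability (standard deviation),
# conflict (one minus pairwise correlation, summed) and information
# content combine into normalised weights.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 7))             # 200 hypothetical samples x 7 indicators

# Min-max normalisation before applying CRITIC.
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

sigma = Z.std(axis=0, ddof=1)                        # variability sigma_j
R = (1 - np.corrcoef(Z, rowvar=False)).sum(axis=0)   # conflict R_j
C = sigma * R                                        # information content C_j
w = C / C.sum()                                      # objective weights

print(np.round(w, 4), round(w.sum(), 4))
```

Indicators that vary more and are less correlated with the others carry more information and therefore receive larger weights.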
Calculating combination weights

In this paper, two sets of weights have been calculated by the group G1 method and the CRITIC method. According to the game-theoretic combination weighting formula (46), the optimal solution satisfies the matrix equation $$\left[ {\begin{array}{cc} {{W_{G1}}W_{G1}^T}&{{W_{G1}}W_{CRITIC}^T} \\ {{W_{CRITIC}}W_{G1}^T}&{{W_{CRITIC}}W_{CRITIC}^T} \end{array}} \right]\left[ {\begin{array}{c} {{\lambda _1}} \\ {{\lambda _2}} \end{array}} \right] = \left[ {\begin{array}{c} {{W_{G1}}W_{G1}^T} \\ {{W_{CRITIC}}W_{CRITIC}^T} \end{array}} \right]$$ where $W_{G1}$ and $W_{CRITIC}$ denote the subjective and objective weight vectors, respectively.
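The 2x2 system can be solved directly and the coefficients normalised to blend the two weight vectors. The inputs below are the subjective and objective columns of Table 6; because of rounding in the published columns, the resulting coefficients and blended weights need not reproduce the paper's composite weights exactly.

```python
# Sketch of the game-theoretic combination step: solve the 2x2 linear
# system for lambda_1, lambda_2, normalise them, and blend the two
# weight vectors (values taken from the Table 6 columns).
import numpy as np

w1 = np.array([0.08165, 0.05816, 0.0438, 0.17097,
               0.11756, 0.24946, 0.3624])   # subjective weights
w2 = np.array([0.1051, 0.1898, 0.1178, 0.2111,
               0.1743, 0.1398, 0.1461])     # objective (CRITIC) weights

# [[w1.w1, w1.w2], [w2.w1, w2.w2]] [l1, l2]^T = [w1.w1, w2.w2]^T
A = np.array([[w1 @ w1, w1 @ w2],
              [w2 @ w1, w2 @ w2]])
b = np.array([w1 @ w1, w2 @ w2])
lam = np.linalg.solve(A, b)
lam = lam / lam.sum()            # normalise the game coefficients

w = lam[0] * w1 + lam[1] * w2    # combined weights
print(np.round(lam, 4), np.round(w, 5))
```

With normalised non-negative coefficients, each combined weight lies between the corresponding subjective and objective weights, which is the intended compromise behaviour.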

The final combination weights were found according to formula (46) as shown in Table 6.

Each index game combination weight

Secondary indicator Subjective weight Objective (CRITIC) weight Composite weight
Situation and policy education 8.165% 10.51% 0.09817
Mental health education 5.816% 18.98% 0.15031
Daily education management 4.38% 11.78% 0.0956
Course learning 17.097% 21.11% 0.19906
Time activity 11.756% 17.43% 0.15728
Teacher team 24.946% 13.98% 0.17298
One-time employment 36.24% 14.61% 0.21099

According to the combination weight results, among the seven evaluation indicators the one-time employment rate carries the highest weight (0.21099), followed by course learning, the teaching team, time activities, mental health education, situation and policy education, and daily education management. This shows that the nurturing effect and the construction of academic style have a relatively large impact on the ideological education of college students.

Conclusion

The study constructed the evaluation indicators of college students' civic education through surveys and expert opinions, optimized the traditional ID3 decision tree algorithm, and used the improved information-entropy ID3 algorithm to generate a decision tree of college students' civic education work for indicator screening. Subjective and objective weighting methods were used to calculate the weights of the evaluation indicator system, and a G1-CRITIC combination weighting model based on game theory was established to combine the indicator weights. The resulting weights of the indicators of students' civic education work are (0.09817, 0.15031, 0.0956, 0.19906, 0.15728, 0.17298, 0.21099), among which the one-time employment rate has the highest weight and thus the greatest impact on the civic education of college students.
