A Study on Teaching Quality Improvement of Microcontroller Principles and Applications Course Based on Convolutional Neural Networks

Introduction

Principles and Applications of Microcontroller is a professional compulsory course that combines theory and practice and is characterized by strong comprehensiveness and practicability [1-2]. The course not only integrates basic theoretical knowledge from the fields of analog electronics, digital electronics, and electrical machinery and appliances, but also requires students to be able to design hardware circuits and write programs to realize microcontroller application systems.

At present, the teaching of the microcontroller course generally adopts the traditional mode, i.e. “teacher + blackboard + experiment”, in which knowledge is transferred to students through the teacher’s systematic lectures [3-4]. In recent years, this course has faced several challenges under traditional teaching. Traditional classroom lectures leave students passively absorbing knowledge, and this single teaching mode usually feels monotonous and boring, which affects students’ motivation and learning outcomes [5-8]. The methods used in experimental teaching are usually limited to fixed, unchanging circuits and programs, which does not cultivate students’ innovative thinking and independent learning ability; this in turn leads to a disconnection between theoretical knowledge and practical application, making the teaching results unsatisfactory [9-12]. In addition, the assessment and evaluation of the microcontroller principles and applications course is one-dimensional, mostly relying on written tests, with many theoretical exams and few skill operations and practical investigations [13-14]. After the course ends, many students still do not know how to design a microcontroller application system, and it is difficult to achieve the desired teaching effect. Given these problems, it is necessary to reflect on and improve the existing teaching mode in order to achieve better teaching results.

The blended teaching method focuses on the integration of teaching resources and realizes personalized learning by adjusting the proportion and content of the online and offline parts of course teaching. Qiu, G. designed a blended learning method for the microcontroller principles and applications course using digital technology, integrating learning resources on a digital platform and cultivating students’ diversified abilities in theoretical and practical teaching; this significantly enhanced students’ learning motivation and brought opportunities for the teaching reform of the course [15]. Tong, Y. et al. introduced the design, implementation and effect of a blended teaching mode for the microcontroller principles and applications course, which can stimulate students’ learning initiative, improve their ability to participate in practical applications of microcontrollers, and conform to the comprehensive and practical objectives of the course [16]. Jingang, J. et al. showed that the traditional teaching mode of the course is not conducive to cultivating students’ individuality and innovative talent, so introducing the flipped classroom methodology into the teaching design of the microcontroller course helps improve engineering students’ practical ability and capacity for technological innovation [17]. Zhou, N. et al. emphasized the important role of online teaching resources in the flipped classroom teaching mode; microlearning resources are an emerging type of online teaching resource with great potential for application in the flipped classroom sessions of microcontroller courses, and they can significantly improve students’ initiative, enthusiasm and learning effect [18]. Wu, H. et al. designed a knowledge map of the microcontroller principles and applications experimental class and, based on it, produced microclasses targeting the key and difficult points of the traditional experimental teaching program; the study showed that combining microclasses with instruction can significantly improve the efficiency of experimental teaching, and the result-process assessment method it supports also diversifies the evaluation of college education [19].

Project-based learning is a student-centered teaching method that develops students’ ability to solve practical problems by having them independently participate in the design and implementation of actual projects, prompting them to learn and master knowledge in practice. Liang, C. used innovative entrepreneurial projects to drive students’ learning of the microcontroller principles and applications course, so that students combine theoretical learning with practical operation; this not only inspires students’ enthusiasm and confidence in innovation and entrepreneurship, but also cultivates innovative and entrepreneurial talent with comprehensive ability [20]. Zhang, B. et al. used a project-based teaching methodology to design microcontroller experimental courses, building an experimental platform and constructing an experimental content system aimed at cultivating students’ ability to solve complex engineering problems, and found that the methodology achieved significant teaching results [21]. Yang, Z. et al. pointed out that starting from students’ characteristics and the actual teaching content, a reasonable project-driven teaching method and competition-based training mode can increase the diversity and practicality of microcontroller course teaching, which has a positive effect on teaching effectiveness [22]. Zhang, T. et al. explored and constructed an innovative experimental teaching mode for the course that is student-centered, gives full play to students’ initiative and creativity, and is conducive to cultivating lifelong learning habits [23]. Luo, Q. et al. designed the teaching content of the course based on the OBE education concept, which not only enhances students’ learning interest and desire to explore and improves the teaching effect, but also provides diversified course evaluation methods that promote continuous improvement of teaching quality [24].

In addition to blended and project-based teaching methods, many scholars have reflected on and improved existing teaching models and proposed diversified teaching methods in pursuit of better results. Lei, J. combined the quality education method with the microcontroller principles course in colleges and universities; subtly influencing students’ ideological behavior, moral values and spiritual pursuits during daily teaching can effectively improve learning outcomes [25]. Zhang, T. et al. proposed adjusting the course content, using flexible teaching methods, and reforming the course assessment to solve problems in the teaching process of the microcontroller principles and applications course, in order to improve teaching quality and achieve the ultimate goal of cultivating students’ practical ability [26].

This paper first detects and identifies students’ expressions in the “Microcontroller Principles and Applications” classroom through a lightweight convolutional neural network, then identifies students’ classroom behaviors using a two-stream convolutional neural network. Based on the recognition and analysis of students’ expressions and behaviors, an instant feedback model of classroom teaching effect is constructed, giving real-time feedback on students’ expressions and behaviors to assist teachers in adjusting the classroom and thereby optimizing the teaching effect of the course; together these components form a real-time feedback system for teaching effect based on convolutional neural networks. By comparing the classroom emotional activeness, students’ head-up rate and course grades of Microcontroller Principles and Applications before and after the experiment, the teaching quality of this paper’s convolutional neural network based teaching model is examined.

Construction of real-time feedback system for teaching effect based on convolutional neural network
Student expression recognition based on lightweight convolutional neural network
Convolutional neural network structure design

VGG model network architecture

VGGNet is a classical convolutional neural network with a deep structural hierarchy [27]. The idea of using 3 × 3 convolutional kernels in this network became the basis of many later models, and VGGNet was the first to reduce the image classification error rate to less than 10%. The VGG16 network structure is shown in Figure 1.
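As a brief illustration (the paper does not specify its software stack, so PyTorch/torchvision is an assumption here), a VGG16 backbone can be instantiated and its classifier head adapted to a small number of output classes, for example the five expression categories used later in this paper:

```python
# Minimal sketch: load a VGG16 backbone with torchvision (assumed toolchain)
# and replace the classifier head for a 5-class expression task.
import torch
import torch.nn as nn
from torchvision import models

vgg16 = models.vgg16(weights=None)        # 13 convolutional + 3 fully connected layers
vgg16.classifier[6] = nn.Linear(4096, 5)  # replace the 1000-way ImageNet head with 5 classes

x = torch.randn(1, 3, 224, 224)           # VGG16 expects 224 x 224 RGB input
logits = vgg16(x)
print(logits.shape)                       # torch.Size([1, 5])
```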

Selection of activation function

An activation function is a nonlinear function, and it can take a variety of specific forms. Several commonly used activation functions are introduced below.

Sigmoid function

The Sigmoid function, also called the Logistic function, can be used for the output of hidden-layer neurons. It takes values in the range (0, 1) and can map any real number into (0, 1), so it can be applied to binary classification problems. However, the function is computationally expensive, and during backpropagation the gradient easily vanishes, making it difficult to complete the training of deep networks:

$\mathrm{Sigmoid}(x)=\dfrac{1}{1+e^{-x}}$

Tanh function

The Tanh function, also known as the hyperbolic tangent function, has an output range of -1 to 1; compared to the Sigmoid function, its convergence speed is improved. However, the gradient vanishing phenomenon can still occur:

$\mathrm{Tanh}(x)=\dfrac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$

ReLU function

The ReLU function is a popular activation function in deep learning. Compared with the Sigmoid and Tanh functions, it passes positive inputs through unchanged and sets negative inputs directly to zero. After taking the derivative of the ReLU function, the gradient does not vanish on the interval x > 0, so its performance is much better than Sigmoid and Tanh and its convergence speed is faster; however, for negative inputs its derivative is 0:

$\mathrm{ReLU}(x)=\max(0,x)$

ReLU looks more like a linear function and is generally easier to optimize when the behavior of the neural network is linear or close to linear. Networks trained with this activation function avoid the problem of vanishing gradients almost completely, and with modern deep learning neural networks, the default activation function is the ReLU activation function.
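For reference, a minimal NumPy sketch of the three activation functions discussed above (an illustration only; the paper's own code is not published):

```python
# Sigmoid, Tanh and ReLU, written out explicitly for clarity.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # maps any real number into (0, 1)

def tanh(x):
    return np.tanh(x)                 # maps into (-1, 1), zero-centred

def relu(x):
    return np.maximum(0.0, x)         # passes positives, zeroes negatives

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x), tanh(x), relu(x), sep="\n")
```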

SeLU function

In order to give full play to the advantages of the ReLU function, this paper adopts a variant of ReLU, SeLU, as the activation function. The SeLU function is defined as follows:

$\mathrm{SeLU}(z)=\lambda\begin{cases}z & z>0\\ \alpha\left(e^{z}-1\right) & z\le 0\end{cases}$

The partial derivative of the loss function L with respect to layer l is:

$\delta^{l}=\lambda\begin{cases}\delta^{l+1} & z^{l}>0\\ \alpha\,\delta^{l+1}e^{z^{l}} & z^{l}\le 0\end{cases}$

Among them: $\alpha=1.6732632423543772848170429916717$, $\lambda=1.0507009873554804934193349852946$.

The slope of SeLU on the positive semi-axis is greater than 1, which allows activations to grow when the variance is too small while preventing the gradient from vanishing. In this way the activation function has a fixed point, and as the network gets deeper the output of each layer tends toward mean 0 and variance 1, which accelerates model convergence, similar to the effect of batch normalization but with lower computational complexity.
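A minimal NumPy sketch of SeLU and its derivative using the constants quoted above (an illustration only, not the paper's implementation):

```python
# SeLU forward pass and derivative with the standard alpha and lambda constants.
import numpy as np

ALPHA = 1.6732632423543772848170429916717
LAMBDA = 1.0507009873554804934193349852946

def selu(z):
    # lambda * z for z > 0, lambda * alpha * (exp(z) - 1) otherwise
    return LAMBDA * np.where(z > 0, z, ALPHA * (np.exp(z) - 1.0))

def selu_grad(z):
    # derivative: lambda for z > 0, lambda * alpha * exp(z) otherwise
    return LAMBDA * np.where(z > 0, 1.0, ALPHA * np.exp(z))

z = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
print(selu(z))
print(selu_grad(z))
```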

Optimizer selection

The essence of the machine learning training process is to minimize the loss. Once the loss function is defined, the optimizer comes into play: for many supervised learning models, a loss function is constructed for the original model and then an optimization algorithm is used to minimize it in order to find the optimal parameters w. Gradient descent and momentum-based methods are the optimization algorithms most commonly used in convolutional neural network training.

Gradient descent

Gradient descent is a first-order optimization algorithm, also known as the steepest descent method [28]. From the basic gradient descent method, the batch gradient descent (BGD) method and the stochastic gradient descent (SGD) method are further derived. All three methods update the parameter w, and the expressions are shown in equations (7) to (9):

$\mathrm{GD}: w_{t+1}=w_{t}-\eta_{t}\nabla J(w_{t})$ (7)

$\mathrm{BGD}: w_{t+1}=w_{t}-\eta_{t}\sum_{i=1}^{n}\nabla J(w_{t},x_{i},y_{i})$ (8)

$\mathrm{SGD}: w_{t+1}=w_{t}-\eta_{t}\nabla J(w_{t},x_{i_{s}},y_{i_{s}})$ (9)

where $w_{t}$ and $w_{t+1}$ denote the parameters before and after the update, respectively, $\eta_{t}$ is the learning rate, $\nabla J$ is the gradient of the loss function with respect to the parameters, and $i_{s}$ denotes an arbitrarily selected sample. The objective function $J(w)$ is minimized using the gradient descent method.

When using the GD method, the parameters are initialized first and then repeatedly updated until a global minimum is reached. When using the BGD method, the gradient of the cost function is computed over the complete dataset, which makes each iteration slow. When using the SGD method, the dataset is first shuffled and then only one sample $(x_{i_{s}}, y_{i_{s}})$ is selected at each iteration.
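The following NumPy sketch illustrates the update rules in equations (7) to (9) on an assumed toy least-squares objective; the data, learning rate and iteration count are arbitrary choices for demonstration:

```python
# Toy comparison of batch and stochastic gradient descent on a linear
# least-squares problem (synthetic data, illustrative settings only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

def grad(w, Xb, yb):
    # gradient of the mean squared error 1/(2n) * ||Xb w - yb||^2
    return Xb.T @ (Xb @ w - yb) / len(yb)

w_bgd, w_sgd = np.zeros(3), np.zeros(3)
eta = 0.1                                    # learning rate (arbitrary)
for t in range(200):
    # BGD: gradient over the complete dataset, one (slow) full pass per update
    w_bgd -= eta * grad(w_bgd, X, y)
    # SGD: one randomly selected sample per update
    i = rng.integers(len(y))
    w_sgd -= eta * grad(w_sgd, X[i:i + 1], y[i:i + 1])

print(w_bgd, w_sgd, sep="\n")
```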

Momentum optimization method

When the convergence of ordinary gradient descent becomes slow, the historical gradient should be taken into account to guide the parameters toward the optimal value and converge faster; this is the basic idea behind the momentum algorithm. The momentum term m and the discount factor α are introduced into the gradient descent problem, and the parameter update expression is shown in equation (10):

$m_{t}=\alpha m_{t-1}+\eta_{t}\nabla J(w_{t},x_{i},y_{i}), \qquad w_{t+1}=w_{t}-m_{t}$ (10)

Figure 1.

VGG16 network structure

Where $m_{t}$ denotes the accumulated momentum term at moment t, $w_{t}$ denotes the model parameters at moment t, $\eta_{t}$ denotes the learning rate at moment t, and $\eta_{t}\nabla J(w_{t},x_{i},y_{i})$ denotes the amount of update in iteration t. If the gradient at the current moment points in the same direction as the historical gradient, this trend is strengthened at the current moment; otherwise the gradient direction at the current moment is weakened. Performing the update not at $w_{t}$ but one momentum unit ahead, at $w_{t}-\alpha m_{t-1}$, yields the Nesterov accelerated gradient algorithm, which takes the form shown in (11):

$m_{t}=\alpha m_{t-1}+\eta_{t}\nabla J(w_{t}-\alpha m_{t-1}), \qquad w_{t+1}=w_{t}-m_{t}$ (11)

For the facial expression recognition classification problem addressed in this paper, the SGD optimization algorithm with Nesterov momentum is selected as the model's optimizer, which ensures fast training while avoiding getting stuck in local minima.
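As a usage sketch (assuming a PyTorch implementation, which the paper does not specify), SGD with Nesterov momentum can be selected directly when constructing the optimizer; the tiny model, learning rate and batch below are placeholders:

```python
# One training step with SGD + Nesterov momentum (illustrative settings).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224 * 3, 5))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)           # a dummy mini-batch of 224 x 224 RGB images
labels = torch.randint(0, 5, (8,))        # dummy expression labels

optimizer.zero_grad()
loss = criterion(model(x), labels)
loss.backward()
optimizer.step()                          # Nesterov-momentum update of the weights
```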

Lightweight methods for convolutional neural networks

In this paper, we use a channel-level pruning model compression method and a binarization-based model quantization method to achieve lightweight improvements in the VGG16 model.

Model compression is a common method for improving the efficiency of network models; specifically, a trained network model is compressed. On the one hand, compression can start from the model weights; on the other hand, it can start from the network architecture. In this way, the network can be trained and stored with fewer parameters, which addresses the problems of model storage and prediction speed.

Deep learning network models contain a large number of redundant parameters from the convolutional layers to the fully connected layers, and many neurons whose activation values converge to 0. Removing these neurons leaves the model's representation essentially unchanged; this redundancy is known as over-parameterization, and the corresponding technique is known as model pruning. Dropout randomly sets the outputs of some neurons to zero, which is neuron pruning, while DropConnect randomly sets the connections between some neurons to zero, making the weight matrix sparse, which is weight-connection pruning.
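A minimal NumPy sketch contrasting the two pruning views just described, with illustrative shapes and thresholds (the paper's channel-level pruning criterion is not detailed here):

```python
# Neuron pruning vs weight-connection pruning on one fully connected layer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))            # weights of one fully connected layer
x = rng.normal(size=128)

# Neuron pruning (Dropout-style): zero the outputs of selected neurons
keep_neurons = rng.random(64) > 0.3       # stand-in for an activation-based criterion
pruned_out = (W @ x) * keep_neurons

# Weight-connection pruning (DropConnect-style): zero small-magnitude connections
threshold = np.quantile(np.abs(W), 0.7)   # drop the smallest 70% of connections
W_sparse = np.where(np.abs(W) >= threshold, W, 0.0)

print(pruned_out.shape, float(np.mean(W_sparse == 0.0)))
```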

Quantization is another common method of model compression. Quantization approximates weights or activation values expressed in a high bit-width (32-bit float) with a lower bit-width (e.g., int8), which amounts to discretizing the continuous weight or activation values. Quantization helps speed up computation: for example, when a 32-bit float is transformed into an int8 representation, fixed-point arithmetic is faster than floating-point arithmetic, while the data storage space is reduced to about one quarter of the original.
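The storage argument can be illustrated with a symmetric int8 quantization sketch in NumPy (a simplified stand-in for demonstration; the paper itself uses a binarization-based quantization method):

```python
# Quantize 32-bit float weights to int8 and measure the storage saving.
import numpy as np

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)

scale = np.max(np.abs(w)) / 127.0                 # map the float range onto [-127, 127]
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale     # approximate reconstruction

print(w.nbytes, w_int8.nbytes)                    # 4000 bytes vs 1000 bytes: ~1/4 the storage
print(float(np.max(np.abs(w - w_dequant))))       # worst-case quantization error, about scale/2
```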

Student Classroom Behavior Recognition Based on Dual-Stream Convolutional Neural Networks
How the Attention Mechanism Works

The concept of the attention mechanism comes from the study of human vision; in cognitive science, people tend to notice only part of the available information and selectively ignore the rest, a phenomenon known as the attention mechanism [29]. Different parts of the human retina have different abilities to process information, which is known as visual sensitivity. To make the most of limited visual processing resources, a person selects a specific area in the visual field and concentrates on it; for example, when reading, a person attends to and processes only a small portion of the words at a time. In summary, the attention mechanism consists of two parts: identifying the region to focus on, and allocating the limited information-processing resources to that region. Attention algorithms borrow this principle of attention resource allocation in the human brain and use probabilistic weighting to weight different input features, so as to better identify the features relevant to the task. The basic structure of the Attention model is shown in Fig. 2.

Figure 2.

Basic structure of attention mechanism
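The probabilistic weighting idea can be sketched in a few lines of NumPy: relevance scores over input features are normalized with a softmax and then used to weight the features (an illustration only, not the paper's implementation):

```python
# Softmax-weighted combination of input features: the core of attention weighting.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 8))                 # 5 input feature vectors of dimension 8
scores = rng.normal(size=5)                        # task-relevance score of each feature

weights = np.exp(scores) / np.sum(np.exp(scores))  # softmax: probabilistic attention weights
attended = weights @ features                      # weighted combination emphasises relevant features

print(weights.round(3), attended.shape)
```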

Dual-stream convolutional neural network with embedded attention module

The role of dual-stream convolutional neural networks in video classification and behavior recognition is becoming more significant [30]. In recent years, researchers have conducted many experiments on dual-stream networks to improve their recognition performance. In behavior recognition, the spatial network and the temporal network take different input image data and their network structures are not exactly the same; finding the differences between the features extracted by the two networks, removing the redundancy between them, and synthesizing the effective information is complex and laborious work. With the application of the attention mechanism in other fields, researchers have noticed its power and have gradually combined it with dual-stream convolutional networks in order to improve the recognition efficiency and accuracy of dual-stream networks.

In this paper, an attention mechanism module is added to the dual-stream convolutional neural network to form a dual-stream convolutional neural network based on the attention mechanism, whose structure is shown in Fig. 3. Video data of students' behaviors are collected and split into frames; a single RGB frame is selected as the input of the spatial network, and 10 consecutive frames from the remaining frames are processed into a stacked optical flow map as the input of the temporal network. The input images are uniformly resized to 224 × 224, and the backbone consists of a pre-trained VGG16 network together with the attention module. Except for the attention module, the dual-stream convolutional neural network based on the attention mechanism is the same as the traditional dual-stream network; the next section mainly explains how the attention mechanism is combined with the dual-stream network.

Figure 3.

The double flow convolution neural network with the attention mechanism

In view of the lack of connection between the two networks and the lack of filtering of key regions in the traditional dual-stream network, this paper adds an attention mechanism module to the network, embeds it between the convolutional layers, and uses the attention weights to screen the key regions layer by layer. As shown in Figure 3, the attention module is composed of 4 attention matrices and 3 pooling layers, and each convolutional layer of the spatial network and the temporal network is connected through an attention matrix. A pooling layer is arranged after each attention matrix to perform the pooling operation and reduce the number of parameters. In the implementation, the weight matrix Attention1 is calculated first; the remaining three attention matrices are obtained after three pooling operations in the network. The two inputs, each of size 224 × 224 × 3, are fed into the spatial network and the temporal network, and the feature maps output by the first convolutional layer of the two networks are cascaded along the channel dimension. The cascaded feature map has size 112 × 112 × 128, where 112 × 112 is the size of each feature map and 128 is the number of channels, i.e., the number of feature maps. Then the cascaded feature maps are passed through a fully connected layer to obtain the attention weights, which are calculated using the following formulas:

$g_{i,j}=\tanh\left(w_{1}^{T}H_{i,j}+b_{1}\right)$

$G_{i,j}=\dfrac{e^{g_{i,j}}}{\sum_{i=1}^{112}\sum_{j=1}^{112}e^{g_{i,j}}}$

Where $w_{1}$ is the weight, $b_{1}$ is the bias, $H_{i,j}\in\mathbb{R}^{1\times d}$ is the depth feature vector at position $(i,j)$, $i,j\in\{1,\dots,112\}$, on the feature map, $g_{i,j}$ is the un-normalized attention weight and $G_{i,j}$ is the normalized attention weight. Passing the obtained attention weights $g_{i,j}$ through the Softmax function yields the 112 × 112 normalized attention weight matrix $G_{i,j}$, i.e., Attention1.

After calculating the first attention weight matrix Attention1, Attention1 is pairwise dot-multiplied with each feature map output from the second convolutional layer of the spatial and temporal networks, respectively, so as to obtain the feature map with attention weights, which is used as the input of the third convolutional layer group. At the same time, pooling operation is performed on the Attention1 weight matrix, and after pooling, an Attention2 matrix of size 56 × 56 is obtained, and the Attention2 matrix is multiplied by each feature map output from the third convolutional layer of the spatial and temporal networks, respectively, to obtain the feature map with the attention weight, which is the input of the fourth convolutional layer group. Attention2 is pooled to obtain an attention weight matrix Attention3 of size 28 × 28. Attention weight matrix Attention3 is multiplied by each feature map output from the fourth convolutional layer of the spatial and temporal networks respectively to obtain a feature map with attention weights as the input of the fifth convolutional layer. Attention3 continues the pooling operation to obtain an attention weight matrix Attention4 of size 14 × 14. The attention weight matrix Attention4 is pairwise dot-multiplied with each feature map output from the fifth convolutional layer of the spatial and temporal networks, respectively, to obtain the final feature map with attention weights. Finally, the output feature maps of the fifth convolutional layer group obtained from the temporal and spatial streams after the above operations are cascaded and fed into the fully connected layer for action-behavior recognition.
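A minimal PyTorch sketch of the attention-weight computation described above is given below. It assumes conv-1 outputs of 64 channels per stream at 112 × 112 (as stated in the text), uses a 1 × 1 convolution as the fully connected scoring layer $w_1^T H_{i,j} + b_1$, and average pooling to derive Attention2 to Attention4; these implementation details are assumptions, not the authors' released code:

```python
# Attention1..4 from cascaded dual-stream feature maps, then feature re-weighting.
import torch
import torch.nn as nn
import torch.nn.functional as F

score_layer = nn.Conv2d(128, 1, kernel_size=1)     # computes w1^T H_ij + b1 at every position

def attention_matrix(spatial_feat, temporal_feat):
    # conv-1 outputs of the two streams, each (B, 64, 112, 112)
    H = torch.cat([spatial_feat, temporal_feat], dim=1)            # (B, 128, 112, 112)
    g = torch.tanh(score_layer(H))                                 # un-normalized weights g_ij
    b = g.shape[0]
    return F.softmax(g.view(b, -1), dim=1).view(b, 1, 112, 112)    # normalize over all positions

B = 2
spatial = torch.randn(B, 64, 112, 112)
temporal = torch.randn(B, 64, 112, 112)
att1 = attention_matrix(spatial, temporal)   # 112 x 112 weights, summing to 1 per sample
att2 = F.avg_pool2d(att1, 2)                 # 56 x 56
att3 = F.avg_pool2d(att2, 2)                 # 28 x 28
att4 = F.avg_pool2d(att3, 2)                 # 14 x 14

# Each attention map re-weights the feature maps of the next convolutional layer
# group in both streams, e.g. for an (assumed) conv-2 output of the spatial stream:
conv2_spatial = torch.randn(B, 128, 112, 112)
weighted = conv2_spatial * att1              # element-wise multiplication, broadcast over channels
print(att1.shape, att2.shape, att3.shape, att4.shape, weighted.shape)
```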

On this basis, by adding four attention mechanism modules, the method can not only selectively learn according to the importance of the information, eliminate redundant and interfering actions, and enhance the weight of the target, but also make full use of the correlation between temporal and spatial networks to obtain more accurate and diversified behavioral characteristics, thus improving the ability to identify students’ behaviors.

Instant Feedback Model of Classroom Teaching Effectiveness

Teachers need to receive timely feedback from students and make timely adjustments. However, in one-to-one classroom teaching, excessive and unconscious comments from the teacher can interfere with students' active thinking, so this educational concept cannot be fully implemented.

With the popularization and application of video detection technology and face detection technology, real-time monitoring of students’ learning status and expression has become possible. In this paper, we utilize the related artificial intelligence detection technology to monitor the students’ expression and status changes in classroom teaching in real time, and design a model of classroom teaching effect instant feedback system based on convolutional neural network, which lays a theoretical foundation for the future development and implementation.

After studying the algorithm design of face detection, expression analysis and behavior recognition, it becomes possible to construct an instant feedback system for classroom teaching effect, which should not only be able to analyze and categorize the real-time expressions and behaviors of students, but also embody the module setup of human-computer interaction.

The classroom recording module has hard requirements for multimedia and video equipment. The classroom micro-expression recognition and tracking module processes the real-time images from the classroom recording video, and the convolutional neural network based face detection and behavior recognition algorithms automatically identify five kinds of expressions and behaviors: understanding, listening, confusion, not listening, and resentment.

The author defines four classroom teaching effectiveness evaluation indexes based on these five micro-expressions as follows: (1) Attention: attention reflects students' listening status (listening states: understanding, listening, confusion; non-listening states: not listening, resentment). (2) Mastery: mastery reflects students' understanding of the knowledge points. (3) Confusion: confusion reflects the degree of students' confusion about the knowledge points. (4) Fuzzy degree: when students are in the listening state, their mastery of the knowledge points cannot be judged, so the fuzzy degree reflects the uncertainty in students' mastery of the knowledge points.

The human-computer interaction design module contains two parts: setting the evaluation index thresholds and individual/whole-class tracking analysis. Through threshold setting, teachers can set the thresholds for the four classroom teaching effect evaluation indexes themselves. If the feedback value of an evaluation index exceeds the set threshold, the instant feedback system outputs an alarm signal to remind the teacher that there is a problem with classroom teaching at that moment, and the teacher can adjust the teaching strategies and methods in a timely manner. Individual/whole-class tracking analysis means that the evaluation indexes can be tracked and fed back not only for the class as a whole but also for individual students; the teacher can access an individual student's evaluation indexes in real time, making it easier to tailor teaching to students' needs.
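An illustrative sketch of the feedback logic follows. The paper does not give explicit formulas for the four indexes, so the definitions and thresholds below are assumptions chosen only to show how recognized states could be mapped to indexes and compared against teacher-set thresholds:

```python
# Hypothetical mapping from recognized states to evaluation indexes and alarms.
from collections import Counter

def classroom_indexes(states):
    # states: per-student labels produced by the recognition model, e.g.
    # "understanding", "listening", "confusion", "not listening", "resentment"
    n = len(states)
    c = Counter(states)
    listening = c["understanding"] + c["listening"] + c["confusion"]
    return {
        "attention": listening / n,          # share of students in a listening state
        "mastery": c["understanding"] / n,   # share showing understanding
        "confusion": c["confusion"] / n,     # share showing confusion
        "fuzzy_degree": c["listening"] / n,  # listening, but mastery undetermined
    }

# Hypothetical teacher-set thresholds; exceeding one triggers an alarm signal.
thresholds = {"confusion": 0.30, "fuzzy_degree": 0.50}

indexes = classroom_indexes(["understanding", "listening", "confusion",
                             "confusion", "not listening", "resentment"])
alarms = [name for name, limit in thresholds.items() if indexes[name] > limit]
print(indexes, alarms)
```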

Analysis of the quality of teaching

The model of this paper was applied in the “Microcontroller Principles and Applications” classroom of Class 1 of the second-year communication engineering major at School S. After a one-semester teaching experiment, we compare the classroom emotional activeness, students' head-up rate, and teaching effect before and after the experiment in order to validate the teaching quality of this paper's model in the “Microcontroller Principles and Applications” classroom.

Analysis of Emotional Activity in the Classroom

The classroom emotional activeness on Principles and Applications of Microcontrollers at the beginning, middle and end of the semester was analyzed by extracting the emotional activeness of five students at the 5th, 20th, and 35th minutes of the classroom through the convolutional neural network model in this paper. The results of the three extractions were compiled, and the classroom emotional activeness in Principles and Applications of Microcontrollers at the beginning, middle, and end of the semester are shown in Tables 1, 2, and 3, respectively.

Table 1. Class emotional activity level at the beginning of the semester

Time 5min 20min 35min
Teacher face expression activity 0.5655 0.6149 0.3895
Student 1 Positive expression activity 0.3304 0.5775 0.3558
Negative expression activity 0.6696 0.4225 0.6442
Student 2 Positive expression activity 0.4365 0.3425 0.3379
Negative expression activity 0.5635 0.6575 0.6621
Student 3 Positive expression activity 0.5606 0.4459 0.4725
Negative expression activity 0.4394 0.5541 0.5275
Student 4 Positive expression activity 0.4122 0.4998 0.4186
Negative expression activity 0.5878 0.5002 0.5814
Student 5 Positive expression activity 0.4634 0.3548 0.5872
Negative expression activity 0.5366 0.6452 0.4128
Class emotional activity level 0.4754 0.4366 0.4895
Teaching effect evaluation Average Average Average

Table 2. Class emotional activity level in the middle of the semester

Time 5min 20min 35min
Teacher face expression activity 0.6641 0.5538 0.6284
Student 1 Positive expression activity 0.5805 0.5679 0.5271
Negative expression activity 0.4195 0.4321 0.4729
Student 2 Positive expression activity 0.5581 0.4563 0.5704
Negative expression activity 0.4419 0.5437 0.4296
Student 3 Positive expression activity 0.5479 0.5586 0.5584
Negative expression activity 0.4521 0.4414 0.4416
Student 4 Positive expression activity 0.5502 0.6611 0.6112
Negative expression activity 0.4498 0.3389 0.3888
Student 5 Positive expression activity 0.4886 0.6408 0.5059
Negative expression activity 0.5114 0.3592 0.4941
Class emotional activity level 0.6853 0.6462 0.6401
Teaching effect evaluation Good Good Good

Table 3. Class emotional activity level at the end of the semester

Time 5min 20min 35min
Teacher face expression activity 0.7084 0.6538 0.6984
Student 1 Positive expression activity 0.7705 0.6679 0.6271
Negative expression activity 0.2295 0.3321 0.3729
Student 2 Positive expression activity 0.7588 0.6563 0.6704
Negative expression activity 0.2412 0.3437 0.3296
Student 3 Positive expression activity 0.6765 0.5586 0.5584
Negative expression activity 0.3235 0.4414 0.4416
Student 4 Positive expression activity 0.5372 0.6611 0.6712
Negative expression activity 0.4628 0.3389 0.3288
Student 5 Positive expression activity 0.7332 0.6408 0.5959
Negative expression activity 0.2668 0.3592 0.4041
Class emotional activity level 0.7004 0.7462 0.7101
Teaching effect evaluation Excellent Excellent Excellent

As can be seen from Tables 1 to 3, at the beginning of the semester the negative expression activity of most students in the “Microcontroller Principles and Applications” classroom was greater than their positive expression activity, and the classroom emotional activeness in the 5th, 20th, and 35th minutes was 0.4754, 0.4366, and 0.4895, respectively, all at an average level. After half a semester of the teaching experiment, by the middle of the semester most students' positive expression activity slightly exceeded, or was at about the same level as, their negative expression activity. In the middle of the semester the teaching effect reached a good level, with classroom emotional activeness of 0.6853, 0.6462, and 0.6401 in the 5th, 20th, and 35th minutes, respectively. The teaching effect at the end of the semester was at an excellent level: the classroom emotional activeness in the 5th, 20th, and 35th minutes exceeded 0.7, and most students' positive expression activeness was greater than their negative expression activeness. This indicates that after the teaching experiment, students' motivation and interest in learning improved significantly.

Analysis of student head-up rates

In the experiment, the overall head-up rate of the Principles and Applications of Microcontrollers class at the beginning, middle, and end of the semester was detected and analyzed. The equipment used was a cell phone rear camera recording .mp4 video with a frame width of 1280 pixels, a frame height of 720 pixels, and a frame rate of 30 FPS. The video was input into the system, a detection and computation was performed every 3 seconds, and a total of about 800 detections were carried out. The overall head-up rate of the class in each time period was analyzed, and the overall concentration of the students at different times across the whole class is shown in Fig. 4.

Figure 4.

Test result of student head lifting rate
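A sketch of the sampling protocol described above (OpenCV assumed; the head-counting function is a placeholder for the paper's CNN-based detector, and the class size is an assumed value):

```python
# Sample one frame every 3 seconds from a 30 FPS recording and accumulate the head-up rate.
import cv2

def count_heads_up(frame):
    return 0  # placeholder: the paper's CNN-based detector would count raised heads here

cap = cv2.VideoCapture("classroom.mp4")      # 1280 x 720, 30 FPS classroom recording
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = int(fps * 3)                          # one detection every 3 seconds
class_size = 40                              # assumed number of students in view
frame_idx, rates = 0, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        rates.append(count_heads_up(frame) / class_size)
    frame_idx += 1
cap.release()

if rates:
    print(f"overall head-up rate: {sum(rates) / len(rates):.2%}")
```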

Observation of Fig. 4 reveals that the head-up rate of students in the Principles and Applications of Microcontrollers classroom gradually increased as the teaching experiment progressed. The overall head-up rate at the beginning of the semester was the lowest, only 37.26%; with less than half of the class looking up, students' engagement in learning was low and their motivation was not high. After half a semester of teaching with this paper's convolutional neural network based real-time feedback system, the overall head-up rate in the “Microcontroller Principles and Applications” classroom improved to 59.85%, an increase of 22.59 percentage points over the period before the experiment, so the teaching experiment achieved initial results. After a whole semester of teaching, the students' head-up rate was measured again and found to have increased to 85.31%, which is 48.05 and 25.46 percentage points higher than at the beginning and middle of the semester, respectively. Students' concentration in the Microcontroller Principles and Applications classroom improved significantly, and the classroom atmosphere and attitude toward learning improved greatly.

Analysis of Teaching Effectiveness

Finally, the grades in Principles and Applications of Microcontroller at the beginning and end of the semester are compared between Communication Class 1 (experimental group), taught with the model of this paper, and Communication Class 2 (control group), taught conventionally, to investigate the effect of the convolutional neural network based course teaching model. The examination of performance in “Microcontroller Principles and Applications” is divided into five dimensions: specification implementation, system design, system simulation, hardware extension and experimental analysis.

Comparison between groups

The pre-test results of the experimental (EC) and control (CC) groups' performance in Principles and Applications of Microcontroller are shown in Table 4, and the post-test results are shown in Table 5.

Table 4. Pre-test results of the experimental and control groups

Dimension Group N M SD F P
Normative execution EC 49 14.06 1.45 4.985 0.546
CC 50 13.88 1.69
System design EC 49 14.13 1.84 3.749 0.715
CC 50 13.82 1.99
System simulation EC 49 13.58 2.05 5.613 0.826
CC 50 14.67 2.03
Hardware extension EC 49 13.98 1.85 5.068 0.668
CC 50 13.94 1.87
Experiment analysis EC 49 13.53 2.01 4.946 0.729
CC 50 12.98 2.15

Table 5. Post-test results of the experimental and control groups

Dimension Group N M SD F P
Normative execution EC 49 19.44 2.84 5.954 0.001
CC 50 14.95 1.89
System design EC 49 18.25 2.23 6.485 0.001
CC 50 13.88 1.65
System simulation EC 49 18.56 2.69 7.065 0.002
CC 50 14.69 2.34
Hardware extension EC 49 19.37 2.72 5.994 0.001
CC 50 14.06 1.94
Experiment analysis EC 49 19.32 2.06 8.049 0.000
CC 50 13.08 2.01

As can be seen from the pre-test and post-test comparison in Tables 4 and 5, the score differences between the two groups in the five dimensions of Microcontroller Principles and Applications before the experiment are 0.18, 0.31, 1.09, 0.04 and 0.55, respectively; these differences are not significant, the p-values are greater than 0.05, and the two groups have similar scores. After the teaching experiment, the p-values for all five dimensions are less than 0.05, and a significant gap opened up between the two groups' achievements. The experimental group outperformed the control group by 4.49, 4.37, 3.87, 5.31, and 6.24 points in specification implementation, system design, system simulation, hardware extension, and experimental analysis, respectively.
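For reference, a between-group check of this kind can be reproduced with SciPy's one-way ANOVA, which with two groups yields an F statistic and p-value of the form reported in Tables 4 and 5; since the raw per-student scores are not published, the arrays below are synthetic, generated from the table's means and standard deviations:

```python
# Synthetic re-creation of one post-test comparison (illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ec_post = rng.normal(loc=19.44, scale=2.84, size=49)   # experimental group, post-test scores
cc_post = rng.normal(loc=14.95, scale=1.89, size=50)   # control group, post-test scores

f_stat, p_value = stats.f_oneway(ec_post, cc_post)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")           # p < 0.05 indicates a significant gap
```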

Within-group comparisons

The beginning- and end-of-semester grades in Principles and Applications of Microcontrollers for the experimental and control groups, respectively, are then compared to examine the changes within each group before and after the experiment. The pre- and post-test comparison of the experimental group's “Microcontroller Principles and Applications” grades is shown in Table 6, and that of the control group is shown in Table 7.

Table 6. Pre-test and post-test results of the experimental group

Dimension Pre/post-test N M SD F P
Normative execution Pre-test 49 14.06 1.45 5.984 0.001
Post-test 49 19.44 2.84
System design Pre-test 49 14.13 1.84 6.748 0.003
Post-test 49 18.25 2.23
System simulation Pre-test 49 13.58 2.05 7.162 0.002
Post-test 49 18.56 2.69
Hardware extension Pre-test 49 13.98 1.85 6.485 0.001
Post-test 49 19.37 2.72
Experiment analysis Pre-test 49 13.53 2.01 9.054 0.000
Post-test 49 19.32 2.06

Table 7. Pre-test and post-test results of the control group

Dimension Pre/post-test N M SD F P
Normative execution Pre-test 50 13.88 1.69 3.154 0.898
Post-test 50 14.95 1.89
System design Pre-test 50 13.82 1.99 4.068 0.758
Post-test 50 13.88 1.65
System simulation Pre-test 50 14.67 2.03 5.068 0.692
Post-test 50 14.69 2.34
Hardware extension Pre-test 50 13.94 1.87 4.698 0.882
Post-test 50 14.06 1.94
Experiment analysis Pre-test 50 13.08 2.01 2.946 0.943
Post-test 50 13.08 2.01

From the data in Tables 6 and 7, it can be seen that the experimental group's performance in Principles and Applications of Microcontrollers improved considerably after the experiment, with increases of 5.38, 4.12, 4.98, 5.39, and 5.79 points in the five dimensions, and the p-values of all dimensions in the pre- and post-test comparison were less than 0.05. In contrast, the control group's scores increased by only 1.07, 0.06, 0.02, 0.12, and 0.10 points in the five dimensions, with p-values greater than 0.05; the change in the control group's performance was negligible, with performance before and after the experiment basically at the same level.

Conclusion

In this paper, a lightweight convolutional neural network and a two-stream convolutional neural network are used to detect and identify students' expressions and behaviors in the classroom, respectively. On this basis, an immediate feedback mechanism for classroom teaching effect is constructed that provides real-time feedback on students' expressions and behaviors, forming a real-time feedback system for teaching effect based on convolutional neural networks. The effectiveness of the system is verified by comparing classroom emotional activity, students' head-up rate, and learning achievement before and after the experiment.

The emotional activeness of the classroom of Microcontroller Principles and Applications at the beginning of the semester was 0.4754, 0.4366, and 0.4895 at the 5th, 20th, and 35th minutes, respectively, which were all average. Emotional activation levels at the middle of the semester were 0.6853, 0.6462, and 0.6401 respectively, which were good levels. The emotional activeness at the end of the semester was 0.7004, 0.7462, and 0.7101, respectively, which were excellent levels.

The overall head-up rate of the students at the beginning, middle and end of the semester was 37.26%, 59.85% and 85.31% respectively, the learning concentration was significantly improved, and the classroom atmosphere and learning attitude were greatly improved.

Before the experiment, there was no significant difference between the two groups in the five dimensions of Microcontroller Principles and Applications, and the p-values were greater than 0.05. After the teaching experiment, the p-values of the five dimensions were less than 0.05, and a large gap appeared between the two groups' performance. The experimental group improved by 5.38, 4.12, 4.98, 5.39, and 5.79 points in the five dimensions after the experiment, and the p-value of each dimension in the pre- and post-test comparison was less than 0.05. The control group's p-values after the experiment were greater than 0.05 in all five dimensions, and its performance remained essentially at the pre-experiment level.
