Design of an Intelligent Teaching Platform under Multidimensional Data Fusion in Music Performance Teaching
Published Online: Mar 26, 2025
Received: Nov 08, 2024
Accepted: Feb 09, 2025
DOI: https://doi.org/10.2478/amns-2025-0807
© 2025 Ni Li et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The rapid development of information technology and the wide application of the Internet, communication technology, and digital media have not only dramatically changed how people work and live, but also significantly expanded the scope, styles, and efficiency of their learning [1-3]. In the new era, the state actively advocates the comprehensive, interdisciplinary, and integrated development of the "new liberal arts" [4-5], integrating modern scientific and technological means into the research and teaching of traditional liberal arts such as philosophy, literature, and art. It strives to break down professional barriers, actively explores a new curriculum system of multidisciplinary cross-integration and collaborative innovation, cultivates innovative talents with in-depth learning and higher-order thinking skills, and builds a lifelong independent learning system [6-8].
As institutions specializing in training musical talent, higher music colleges and universities hold as one of their core objectives the cultivation of high-quality, well-rounded performers who combine applied and research capabilities [9-10]. Music performance is a practice-oriented course, and its teaching has mostly relied on oral transmission between teacher and student [11]. However, this traditional method, built on isolated technical training, easily deprives the learning process of autonomy and discernment, so that students fall into patterned, subconscious performance habits [12-13]. In teaching music performance, therefore, it is far from enough to focus only on improving students' performance skills; their ability to conduct research on music performance itself must also be cultivated, especially the ability to form in-depth research skills with the help of an interdisciplinary vision and modern scientific and technological means [14-16]. In the field of music performance, composers and works have long been the core of academic research, while research on performance itself has been relatively neglected and gradually marginalized. To change this situation, it is necessary to strengthen the guidance of students' academic research abilities and their methods for studying music performance [17-19].
In recent years, with the rapid development of a new generation of information technologies such as cloud computing, big data, and artificial intelligence, a wave of digital transformation of education has emerged globally, and digital technology has gradually been integrated into all fields of education, providing more possibilities for the further development of the study of music performance [20-22]. Educational informatization integrates teaching, teachers, and students into a whole, providing multifaceted, multi-level support for teaching and learning and enabling teachers and students to break through constraints of time and space, teaching and learning anytime and anywhere with the help of the Internet, computers, and other electronic terminals [23-24]. At the same time, teachers can use rich online teaching resources to design novel courses and create a livelier music performance classroom that meets students' diverse learning needs, stimulates their interest in music learning, and deeply explores their musical potential [25-26].
Scholars have studied the teaching and learning of music performance from both traditional teaching perspectives and modern information technology perspectives, deepening the understanding of the practical effects of different teaching modes in music performance. Literature [27] developed a framework for teaching music performance with reference to John Hattie's feedback theory in educational psychology, to support music performance teachers in attending to pedagogical feedback during the teaching and learning process. Literature [28] empirically investigated the dynamics of stakeholder expectations in the formation of a music performance interaction company and the integration of instrumental/vocal teaching into higher education learning environments, concluding that there is a strong need to develop excellence in vocal and instrumental teaching in the UK, with HEIs playing a key role. Literature [29] built a diversified intelligent music performance teaching system based on the concept of object-oriented music teaching and learning, and evaluated it in practice using a music performance course as an example.
As intelligent information technology comes into general use in education, how to apply it to the design of teaching platforms and teaching modes has become a focus of research on educational informatization. Literature [30], through simulation experiments, showed that a digital teaching platform with artificial intelligence technology as its core logic achieves personalized, intelligent teaching recommendations and improves teaching efficiency and the student learning experience. Literature [31] conceived an intelligent digital teaching framework based on the particle swarm optimization algorithm and the least squares support vector machine algorithm, obtaining good results in numerical tests. Literature [32] designed an interactive learning platform with artificial intelligence technology as the underlying logic, which has lower memory and runtime occupancy and helps the efficiency of teacher-student interaction. Literature [33] used an experimental method to compare the performance and functions of teaching information systems, concluding that they improve the quality and efficiency of teaching in schools to a certain extent. Literature [34] integrated cloud computing theory, a teaching service-oriented architecture, and artificial intelligence technology to build an intelligent education management system that promotes the informatization and intelligence of education, and confirmed the feasibility of the proposed intelligent teaching management platform in database stress tests.
Literature [35] drew on relevant standards such as the e-government standardization guidelines of the National Information Office, the e-government top-level design model, and the planning guidelines for the construction of education informatization to build a public educational resources product with cloud computing technology as its core architecture, meeting educational needs. Traditional methods of teaching music performance can no longer meet the needs of society and students in the information age, while the practice of intelligent information technology in education has matured, so it is necessary to explore a path of informatization and intelligence for music performance teaching based on intelligent information technology.
In this paper, the construction process and practical effects of the intelligent teaching platform are studied. Based on the main contents and overall objectives of platform construction, the chord sequence matching algorithm is used for preprocessing to obtain new MIDI files. Musical characteristics are then analyzed, and based on these characteristics the platform assesses students' sight-singing ability, evaluating their music performance skill level through the resulting scores. Five songs are used in controlled experiments on the matching correct rate of the chord sequence matching algorithm selected in this paper, to verify its superiority in matching correctness. The intelligent teaching platform is then used to assess students' sight-singing ability, with a 15-member experimental class selected to test the credibility of the platform's sight-singing assessment steps. At the end of the overall teaching experiment, the experimental class's satisfaction with the intelligent teaching platform was surveyed by questionnaire, to judge the application effect of the platform under multidimensional data fusion in music performance teaching.
To design an intelligent teaching platform under multi-dimensional data fusion, it is first necessary to clarify the main content of the platform for music performance majors, as well as the overall goal of the platform construction, and to sort out the underlying ideas of the platform design from the level of basic construction.
According to the professional directions and specializations of the music performance major, abundant resources are available for teaching music performance, falling mainly into the following categories. The first is vocal singing, with bel canto, ethnic vocal, and popular singing as the three main specialization directions. The second is instrumental performance, which mainly includes orchestral instruments: woodwinds such as flute, oboe, clarinet, and bassoon; brass such as trumpet, trombone, French horn, tuba, and saxophone; strings such as violin, viola, cello, double bass, and classical guitar; and percussion. It also includes ethnic instrumental performance: bowed strings such as erhu, gaohu, and banhu; plucked and struck strings such as yangqin, guzheng, and pipa; wind instruments such as suona and sheng; and ethnic percussion. The third direction is keyboard instruments such as piano, electronic organ, accordion, and pipe organ, along with some electro-acoustic instruments used in popular music performance.
The overall goal of constructing the music performance teaching resources service platform based on multidimensional data fusion is to actively utilize the features and advantages of the multidimensional data fusion model to integrate the mutually independent music performance teaching resources of colleges and universities, as well as the various supporting software and hardware resources, in order to realize resource sharing and efficient resource data processing. According to the different functions of the servers, the platform comprises four layers. The first layer is the external connection layer, the direct experience layer through which teachers and students obtain the multidimensional data fusion user experience: they need only use computers, smartphones, tablets, and other network-capable terminal devices to access the resource platform at any time and obtain related services, without worrying about problems such as network congestion and slow download speeds. The second layer is the application layer, which integrates resources to provide teachers and students with music performance teaching resources such as domestic and international master classes, concerts, and professional competitions, together with the software services and application program interfaces needed to use these resources. The third layer is the platform service layer, which mainly provides the multidimensional data fusion infrastructure for the overall platform, containing a reasonable software and hardware operating environment as well as system development interfaces, and carries out comprehensive management on this basis.
The fourth layer is the basic service layer, which is the foundation of the four-layer architecture of the whole multidimensional data fusion platform, where it mainly carries out the management of the infrastructure related to multidimensional data fusion, such as servers, network storage devices and their virtualization resources and other hardware and software, and provides basic services such as storage, network and computing.
According to the main contents and overall objectives of the platform design, the design is further transformed into reality through corresponding technologies, algorithms, and so on. This part describes the steps necessary to realize the platform, such as preprocessing music information, characterizing music, and assessing sight-singing ability.
A MIDI file is a digital music file that contains instructions for electronic devices to produce music. Specifically, a MIDI file contains a series of digital messages that communicate information about notes and other parameters to MIDI-compatible devices, such as synthesizers, samplers, and digital audio workstations. The information contained in a MIDI file typically includes:
- Note information: MIDI messages can specify the pitch, duration, and velocity (intensity) of individual notes.
- Rhythm and tempo information: MIDI files also contain information about the rhythm and timing of the music, such as the number of beats per minute and the tempo.
- Track information: MIDI messages can specify the type of instrument or sound used for each note, such as piano, guitar, or drums.
- Control messages: MIDI messages can also specify various controllers and effects, such as modulating pitch and volume.
- Program and program change messages: a MIDI file can include a series of program change events used to change an instrument or sound effect; this information is stored in "Program Change" events.
- Other information: MIDI files can also include other information such as key signatures and tempo changes.
MIDI files do not contain actual audio data like WAV or MP3 files. Instead, they contain instructions that tell the MIDI device how to make sounds. When a MIDI file is played on a MIDI-enabled device, the device uses the information in the file to produce the appropriate musical sound.
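The event types listed above can be sketched in plain Python. This is an illustrative stand-in for the kinds of information a MIDI file encodes, not the actual MIDI byte format; in practice a library such as mido would parse real files, and the class and field names below are our own simplification.

```python
from dataclasses import dataclass

# Minimal stand-ins for the event types a MIDI file stores (illustrative names).

@dataclass
class NoteEvent:
    pitch: int       # MIDI semitone number, 0-127 (60 = middle C)
    velocity: int    # strike intensity, 0-127
    start_tick: int  # onset position in ticks
    duration: int    # length in ticks

@dataclass
class TempoEvent:
    start_tick: int
    bpm: float       # beats per minute

@dataclass
class ProgramChange:
    start_tick: int
    program: int     # instrument number, e.g. 0 = acoustic grand piano

# A "track" is simply an ordered list of such events.
track = [
    TempoEvent(0, 120.0),
    ProgramChange(0, 0),
    NoteEvent(60, 90, 0, 480),    # middle C, one quarter note at 480 ticks/beat
    NoteEvent(64, 90, 480, 480),  # the E above it
]

note_pitches = [e.pitch for e in track if isinstance(e, NoteEvent)]
print(note_pitches)  # [60, 64]
```

When the file is played back, a synthesizer walks such a track in tick order and renders each note event as sound, which is why MIDI files stay far smaller than WAV or MP3 audio.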
The platform's design and methodology cover pitch set processing, interval calculation, chord label construction, patterns and colors for chord progressions, harmonic functions, and circle-of-fifths progressions. Before running the chord sequence matching algorithm, the MIDI files must first be preprocessed; drawing on music theory, a digital encoding method is chosen for this preprocessing to facilitate subsequent processing.
Figure 1 shows the main programming process of the chord sequence matching algorithm. First, the data parameters in the input MIDI file are parsed using the mido library in Python. The tonality of the melody is calculated and determined based on the digitized expression of the music. Next, the melody is divided into fixed-length measures based on note characteristics, and the notes in each measure are transposed into the same octave. Finally, the algorithm computes the maximum chord-matching score for each measure and generates a chord sequence accordingly, corrects the chords by chord inversion, then creates a new track in the MIDI file into which the harmonic accompaniment is entered and merged with the main melody track to generate a new MIDI file.

Flow chart of chord sequence matching algorithm
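The per-measure matching step in the flow above can be sketched as follows. The triad templates and the scoring rule (counting the measure's notes whose pitch class falls in the candidate chord) are illustrative assumptions for the sketch, not the paper's exact scoring function.

```python
# Illustrative per-measure chord matching: pick the triad whose pitch classes
# cover the most notes in the measure. Templates are pitch-class sets (0 = C).

CHORD_TEMPLATES = {
    "C":  {0, 4, 7},    # C E G
    "Dm": {2, 5, 9},
    "Em": {4, 7, 11},
    "F":  {5, 9, 0},
    "G":  {7, 11, 2},
    "Am": {9, 0, 4},
}

def match_score(measure_pitches, chord_pcs):
    """Number of notes in the measure that fall on the chord's pitch classes."""
    return sum(1 for p in measure_pitches if p % 12 in chord_pcs)

def best_chord(measure_pitches):
    """Chord with the maximum matching score for one measure."""
    return max(CHORD_TEMPLATES,
               key=lambda name: match_score(measure_pitches, CHORD_TEMPLATES[name]))

def chord_sequence(measures):
    return [best_chord(m) for m in measures]

melody = [[60, 64, 67, 62], [65, 69, 60, 65]]  # two measures of MIDI pitches
print(chord_sequence(melody))  # ['C', 'F']
```

The real algorithm additionally handles tonality detection, chord inversion correction, and writing the result back into a new MIDI track, which this sketch omits.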
In a melody, notes are not of equal importance. Certain notes play a decisive role in the music, and we define these as characteristic notes. Characteristic notes derive from the three musical concepts of accents, syncopations, and long notes. In the generation process, characteristic notes can indicate the importance of the current note and optimize the attention distribution of the model. The specific formula expressions are shown in Eqs. (1) to (3) below:
The characteristic note is the most representative note in a melody, summarizing the character of the piece, while the other notes serve mainly to decorate the melody. In music theory, the beat-strength pattern of 4/4 time is "strong, weak, sub-strong, weak".
Characteristic notes generally refer to notes that are accented but not syncopated, or notes that are not accented but are both long and syncopated, as shown in equation (4):

$$C(n)=\big(A(n)\wedge \neg S(n)\big)\vee\big(\neg A(n)\wedge L(n)\wedge S(n)\big)$$

where $A(n)$, $S(n)$, and $L(n)$ indicate whether note $n$ is accented, syncopated, and long, respectively, and $C(n)$ marks it as a characteristic note.
A measure is composed of several notes, and the characteristics of the notes in the measure can reflect the characteristics of the measure. Therefore, this paper divides the measure by the information of the strong beat in the measure, and derives the characteristics of the measure from the characteristics of the notes in the measure. Bars help give structure and rhythm to music, enabling musicians to play and read music more accurately and precisely. They also help to create pulse and momentum in the music and can be used to create tension and release in a phrase.
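The characteristic-note rule described above (accented but not syncopated, or unaccented but both long and syncopated) reduces to a short boolean test. The note attributes below are assumed to have already been derived from beat position and duration; their names are our own.

```python
from dataclasses import dataclass

# A sketch of the characteristic-note rule: characteristic if accented and not
# syncopated, or if unaccented but both long and syncopated.

@dataclass
class Note:
    accented: bool    # falls on a strong beat ("strong, weak, sub-strong, weak" in 4/4)
    syncopated: bool  # onset displaced off the beat
    long: bool        # duration of at least one beat (illustrative threshold)

def is_characteristic(n: Note) -> bool:
    return (n.accented and not n.syncopated) or \
           (not n.accented and n.long and n.syncopated)

notes = [
    Note(accented=True,  syncopated=False, long=False),  # downbeat note
    Note(accented=False, syncopated=True,  long=True),   # long syncopation
    Note(accented=False, syncopated=True,  long=False),  # short syncopation
]
print([is_characteristic(n) for n in notes])  # [True, True, False]
```

A measure's characteristics can then be summarized from the characteristic notes it contains, as the text describes.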
Here, harmonic progressions are categorized into four common types depending on the style of the music: 1) full (authentic) cadence: V-I chord progressions, usually preceded by a subdominant-function chord (II, IV, or VI); 2) plagal cadence progressions: IV-I progressions that emphasize the support of the subdominant chord over the dominant; 3) deceptive (interrupted) cadence progressions: a V7-I progression replaced by a V7-VI progression; and 4) incomplete (half) cadence progressions: progressions from any chord to a V or VII chord. By analyzing these four cases, the harmonic intonation of folk music can be expressed more accurately, as shown in equation (5):
The identification of terminal chords divides the melodic phrase by the characteristics of the harmonic progression and determines the harmonic interval section in turn. This plays a crucial role in the subsequent matching of the multi-track accompaniment.
Multi-track accompaniment of music can enhance the expressiveness of the harmony and make the music sound fuller. Figure 2 shows the main flow chart of the multiple multi-track accompaniment patterns designed in this paper. Through the method described above, the music structure is analyzed, the bar and phrase characteristics are delineated, and the multi-track accompaniment music with different patterns is generated according to the melodic progression.

Multi-track pattern matching diagram
The evaluation of sight-singing in this study uses MIDI score files as the standard reference. MIDI describes the notes in the score in terms of pitch value and duration (the length of time the pitch is held), so it is necessary to extract the pitch characteristics and their durations from the sight-singing audio. MIDI files use semitone values to represent the pitch of notes, and the semitone value $p$ corresponds to the fundamental frequency $f$ (in Hz) according to

$$p = 69 + 12\log_{2}\left(\frac{f}{440}\right)$$

where 69 is the semitone value corresponding to the international standard pitch A4, whose fundamental frequency is 440 Hz.
The time value of a note in a score is defined relative to the time value of a beat, e.g., the time value of a quarter note is one-fourth of the time required to play a beat in the current score, and the time value of a beat is determined by the tempo of the score. The tempo indicates how fast or slow the score is sung, usually recorded in beats per minute, the reciprocal of which is the time of a beat (in minutes), e.g., 120 beats/minute means that a beat needs to be sung for 0.5 seconds.
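The pitch and tempo relationships above can be written directly in code. The semitone-frequency conversion follows the standard MIDI convention (A4 = 440 Hz = semitone 69), and the beat duration follows the rule stated in the text (seconds per beat = 60 / tempo in BPM).

```python
import math

def freq_to_semitone(f_hz: float) -> float:
    """Fundamental frequency (Hz) -> MIDI semitone value."""
    return 69 + 12 * math.log2(f_hz / 440.0)

def semitone_to_freq(p: float) -> float:
    """MIDI semitone value -> fundamental frequency (Hz)."""
    return 440.0 * 2 ** ((p - 69) / 12)

def beat_seconds(bpm: float) -> float:
    """Duration of one beat in seconds, given the tempo in beats per minute."""
    return 60.0 / bpm

print(round(freq_to_semitone(440.0)))  # 69  (A4)
print(round(semitone_to_freq(60), 2))  # 261.63  (middle C)
print(beat_seconds(120))               # 0.5 seconds per beat
```

A quarter note in a 120 BPM score thus lasts 0.5 s, and a MIDI duration in beats converts to seconds by multiplying with `beat_seconds(bpm)`.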
In actual singing, the timing of each beat is controlled by the singer, and it is difficult to sing the standard duration of each note accurately. Even when the same person sings the same song many times, there are differences in individual notes and overall rhythm. Therefore, when matching the pitch sequence against the template, the match cannot be strictly one-to-one in time; instead, dynamic regularization operations such as temporal offset and scaling of the pitch sequence are needed. Considering this objective requirement, this paper uses the chord sequence matching algorithm described in the previous section to achieve template matching.
In addition to improving the accuracy of the similarity calculation, the matched sequences produced by the chord sequence matching algorithm can be used to evaluate the problems of each sung note according to the pitch subsequence corresponding to each note in the template, which makes the feedback more targeted and instructive.
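The dynamic-regularization idea (tolerating temporal offset and scaling) can be illustrated with classic dynamic time warping (DTW). This is a textbook sketch of that general technique, not the paper's exact matching algorithm.

```python
# Classic DTW: alignment cost between two pitch sequences when each element of
# one sequence may be matched to one or more elements of the other.

def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # one-to-one step
    return d[n][m]

template = [60, 62, 64, 65]
sung     = [60, 60, 62, 64, 64, 65]  # same line, sung with uneven timing
print(dtw_distance(template, sung))  # 0.0: warping absorbs the timing drift
```

The optimal warping path also yields, for each template note, the pitch subsequence aligned to it, which is exactly what the per-note evaluation described above needs.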
The distance between sequences is obtained through the chord sequence matching algorithm and normalized to between 0 and 1.
Duration correctness: a note's duration is considered sung correctly when the absolute difference between the frame count of the sung note and the frame count of its corresponding template pitch subsequence is no more than 0.3 times the note's frame count, calculated according to the following formula:
The overall duration correctness is:
Pitch correctness: calculate the mean pitch value of the subsequence corresponding to each note and judge whether the note's pitch is sung correctly by comparing it with the template pitch.
The overall pitch correctness is:
Smoothness of breath mainly refers to the stability of pitch while singing; this paper uses the degree of dispersion (standard deviation) of the pitch values in the subsequence corresponding to each note to determine whether the singer's breath is smooth. The smoothness of a single note is:
Overall smoothness:
The final score of the performance takes the mean of the duration correctness and pitch correctness and then applies the breath smoothness as a weighting factor:

$$S = F_s \cdot \frac{A_d + A_p}{2}$$

where $A_d$ is the overall duration correctness, $A_p$ is the overall pitch correctness, and $F_s$ is the overall breath smoothness used as the weighting factor.
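The scoring scheme described above can be sketched end to end. The 0.3 duration tolerance follows the text; the half-semitone pitch tolerance, the smoothness mapping `1/(1+sd)`, and the exact multiplicative weighting form are illustrative assumptions for the sketch.

```python
import statistics

def duration_correct(sung_frames, ref_frames, tol=0.3):
    """Duration is correct if the frame-count difference is within tol * reference."""
    return abs(sung_frames - ref_frames) <= tol * ref_frames

def pitch_correct(pitch_subseq, ref_semitone, tol=0.5):
    """Pitch is correct if the mean sung pitch is within tol semitones of the template."""
    return abs(statistics.mean(pitch_subseq) - ref_semitone) <= tol

def note_smoothness(pitch_subseq):
    """Lower pitch dispersion within a note means steadier breath (1.0 = perfectly steady)."""
    return 1.0 / (1.0 + statistics.pstdev(pitch_subseq))

def final_score(notes):
    """notes: list of (pitch_subseq, ref_semitone, sung_frames, ref_frames)."""
    dur = statistics.mean(duration_correct(s, r) for _, _, s, r in notes)
    pit = statistics.mean(pitch_correct(p, t) for p, t, _, _ in notes)
    smooth = statistics.mean(note_smoothness(p) for p, _, _, _ in notes)
    return smooth * (dur + pit) / 2 * 100  # percentage score

notes = [
    ([60.0, 60.1, 59.9], 60.0, 10, 10),   # steady, on pitch, on time
    ([62.4, 62.6, 62.5], 62.5, 12, 10),   # on pitch, slightly long but within tolerance
]
print(round(final_score(notes), 1))
```

Because booleans act as 0/1 integers, the per-note correctness flags average directly into the overall duration and pitch rates.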
After constructing the intelligent teaching platform under multidimensional data fusion, and to ensure that the platform can effectively assist teachers in teaching and students in learning, this part conducts experiments on the correct rate of the chord sequence matching algorithm and on the credibility of the platform's sight-singing scoring, and analyzes students' satisfaction with the platform and the improvement of their learning ability by means of a questionnaire survey.
Before assessing students' sight-singing ability, this part verifies the matching correctness of the chord sequence matching algorithm on which the platform mainly relies. First, the test MIDI files are submitted to professional music researchers for chord discrimination to obtain the true chord types; then chord types are discriminated with the chord sequence matching algorithm, a traditional hidden Markov model (HMM), and an HMM combined with improved PCP (pitch class profile) features, and the correct rates of the three models are compared to judge the accuracy of the chord sequence matching algorithm in identifying chord types.
After the three models are trained, five files are randomly selected from the test dataset as test objects for chord prediction, and the predicted chord sequences are recorded. To obtain the correct chord sequences for judging whether the predicted chords are accurate, this paper takes prior expert music theory as the basis: the test files are submitted to professional music researchers for accompaniment chord discrimination, and the result is used as the ground-truth chord type for judging the correctness of the chords generated by the platform's matching algorithm.
The chord correctness rates of the five test music files obtained by the chord sequence matching algorithm and by the HMM combined with improved PCP features are counted, and the experimental results are shown in Table 1. The data in Table 1 show that the chord sequence matching algorithm used in this paper improves the correctness of chord arrangement to a certain extent relative to the HMM with improved PCP features: in the songs Vacation, Better Hurry Up, and Holiday Time, the correct rate of chord arrangement is higher by 5.48, 6.16, and 6.55 percentage points respectively, and in Cool Day and Better Us by 2.81 and 2.35 percentage points. Overall, the chord sequence matching algorithm proposed in this paper achieves better chord arrangement results than the improved PCP+HMM model.
Comparison of improved PCP+HMM and chord sequence matching algorithm results

| Training data | Test song | System type | Chord arrangement correct rate (%) |
|---|---|---|---|
| 450 MIDI music library files | Vacation | Improved PCP+HMM | 79.30 |
| | | Chord sequence matching algorithm | 84.78 |
| | Better Hurry Up | Improved PCP+HMM | 76.39 |
| | | Chord sequence matching algorithm | 82.55 |
| | Cool Day | Improved PCP+HMM | 72.62 |
| | | Chord sequence matching algorithm | 75.43 |
| | Holiday Time | Improved PCP+HMM | 78.12 |
| | | Chord sequence matching algorithm | 84.67 |
| | Better Us | Improved PCP+HMM | 72.19 |
| | | Chord sequence matching algorithm | 74.54 |
Further, to analyze the effectiveness of the chord sequence matching algorithm more comprehensively, the traditional hidden Markov model (HMM) was also selected as a control for comparison.
Table 2 shows the experimental results of the chord sequence matching algorithm and the traditional HMM. From Table 2 it can be seen that the chord sequence matching algorithm significantly improves the correct rate of chord arrangement compared with the traditional HMM: in the songs Vacation, Better Hurry Up, and Holiday Time, the correct rate is higher by 9.71, 10.08, and 11.55 percentage points respectively, and in Cool Day and Better Us by 5.25 and 5.75 percentage points. The chord sequence matching algorithm therefore performs better than the traditional HMM in matching chord arrangement sequences.
Comparison of traditional HMM and chord sequence matching algorithm results

| Training data | Test song | System type | Chord arrangement correct rate (%) |
|---|---|---|---|
| 450 MIDI music library files | Vacation | Traditional HMM | 74.21 |
| | | Chord sequence matching algorithm | 83.92 |
| | Better Hurry Up | Traditional HMM | 72.75 |
| | | Chord sequence matching algorithm | 82.83 |
| | Cool Day | Traditional HMM | 69.10 |
| | | Chord sequence matching algorithm | 74.35 |
| | Holiday Time | Traditional HMM | 72.24 |
| | | Chord sequence matching algorithm | 83.79 |
| | Better Us | Traditional HMM | 68.22 |
| | | Chord sequence matching algorithm | 73.97 |
Comparing the data in Tables 1 and 2, it can be observed that the overall chord matching rates of Cool Day and Better Us are lower than those of the other three songs, and the improvement is less pronounced. To explore the reasons, the accompaniment chord sequences of the five test music files were further analyzed by professional deduction. The chord comparison shows that a main cause of the lower chord arrangement correct rate lies in chord analysis itself: the chords involved in this paper are triads, and recognition errors are especially likely when two notes in a chord are repeated, which lowers the correct rate of chord arrangement.
The research in 4.1 verifies that the chord sequence matching algorithm chosen for this platform achieves a high correct rate when matching MIDI files and can be applied to grading students' sight-singing ability. In this paper, a class of 15 students was selected as an experimental class, the constructed intelligent teaching platform was used as an auxiliary teaching resource for music teaching, and the platform's sight-singing assessment function was used in this semester's final exam to grade students' sight-singing ability step by step, in order to determine whether the students' musical ability improved after learning with the platform. To verify the credibility of the scoring results, a credibility test was designed in this part: the same experimental class was tested at each corresponding step, and the results were analyzed.
Figure 3 is the statistical chart of the test results of step 1 of the credibility test. For step 1, "Ten Years" sung by Eason Chan was selected, and the test results were obtained by having the same singer repeat the test 5 times.

Reliability test Step 1 Test result data
Analyzing Figure 3, the following conclusions can be drawn: the final score without sound input is much lower than with sound input, and the final score when the song is not sung according to the songbook is much lower than when it is, which indicates that the platform's test results meet the expected goals of the test cases. It is worth noting, however, that the platform still produces a small score even with no sound input. The reason is that the recording environment cannot be absolutely quiet, so the recorded samples still contain slight noise fluctuations; these can be filtered out by later adding a short-time energy calculation on the voice signal. Since the impact on the final score is very small and within an acceptable range, this improvement is deferred to the next version of the platform.
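The short-time energy gate mentioned above can be sketched in a few lines: frames whose energy falls below a threshold are treated as silence and excluded from scoring. The frame length and threshold values here are illustrative assumptions.

```python
# Short-time energy gating: mean squared amplitude per frame, then keep only
# frames above a silence threshold.

def short_time_energy(samples, frame_len=160):
    """Mean squared amplitude of each non-overlapping frame."""
    return [sum(x * x for x in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def voiced_frames(samples, frame_len=160, threshold=1e-4):
    """Indices of frames whose energy clears the silence threshold."""
    energies = short_time_energy(samples, frame_len)
    return [k for k, e in enumerate(energies) if e >= threshold]

silence = [0.0001] * 480     # near-silent room noise (three frames)
voice = [0.1, -0.1] * 240    # a crude stand-in for sung audio (three frames)
print(voiced_frames(silence + voice))  # [3, 4, 5]: only the voiced frames survive
```

Scoring only the voiced frames would remove the spurious residual score observed in the no-input test case.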
Figure 4 is the statistical chart of the test results in step 2 of the credibility test. For step 2, Liu Ruoying's "Later" was selected, and 7 female and 8 male singers were tested in two groups, on the premise that all 15 singers had practiced and were able to sing the song according to the songbook.

Reliability test Step 2 Test result data
Analyzing Figure 4, the following conclusion can be drawn: across the 15 tests, the average final score of the female singers is significantly higher than that of the male singers, in line with the expected results of the test case. The reason is that, generally speaking, the female range is distributed about two octaves above the male range, and male singers without professional training basically cannot reach the range of songs written for female singers; the scores they do obtain largely reflect wrong pitches that happen to land on some standard pitches in the bass part of the song.
Figure 5 is a statistical graph of the test results of step 3 of the credibility test. For step 3, Eason Chan's "Ten Years" was selected and tested in the same way as step 2 to obtain the test results.

Reliability test Step 3 Test result data
Analyzing Figure 5, the following conclusion can be drawn: across the 15 tests, the average final score of the male singers is significantly higher than that of the female singers, in line with the expected results of the test case. The reason is the same as in step 2: it is basically impossible for untrained female singers to reach the male singers' range.
The results of steps 2 and 3 point to a user requirement for the next platform upgrade: since both male and female singers perform the same songs, how can the platform score singers of different genders on the same song while still distinguishing their singing ability? The current proposal is to have the platform continuously detect the singer's pitch range; if the sung pitches concentrate in a certain register over a period of time, the singer's register can be identified, and the scoring rules then adjusted by transposing the pitch parameters up or down relative to the songbook's range before scoring. This scheme is still under research and will be the next focus of the work.
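The register-detection idea proposed above can be sketched as follows: estimate the singer's register from the median sung pitch and transpose the template by whole octaves before scoring. The whole-octave snapping rule is our illustrative assumption.

```python
import statistics

def octave_shift(sung_pitches, template_pitches):
    """Whole-octave offset (in semitones) that best aligns the sung register
    to the template's register."""
    diff = statistics.median(sung_pitches) - statistics.median(template_pitches)
    return 12 * round(diff / 12)

def transposed_template(sung_pitches, template_pitches):
    """Template transposed into the singer's detected register."""
    shift = octave_shift(sung_pitches, template_pitches)
    return [p + shift for p in template_pitches]

template = [69, 71, 72, 74]   # written for a higher (e.g. female) register
male_take = [57, 59, 61, 62]  # sung roughly an octave lower
print(transposed_template(male_take, template))  # [57, 59, 60, 62]
```

Scoring against the transposed template would let a correctly sung octave-down rendition earn full pitch credit instead of being penalized for register.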
Figure 6 is a statistical graph of the test result data for Step 4 of the credibility test. For Step 4, the recordings of the seven female singers from Step 2 were sent to judges with music expertise for scoring, and the judges’ scores were compared with the system scores.

Credibility test Step 4 test result data
Analysis of Figure 6 shows that each of the five judges scored every singer; after removing the highest and lowest scores, the average of the remaining scores was computed and compared with the corresponding system score. The platform’s scores run somewhat higher than the judges’, but the overall ranking is unchanged, which is, to a certain extent, in line with the expected results of the test case. The gap between human and platform scoring arises because the current platform scores only on acoustic elements such as pitch and note duration, whereas human judges also consider subtler factors such as the coherence of the singer’s breath and emotional expression. No mature and effective way to model these factors has yet been found for the current scoring platform, and they are the main direction of the next round of algorithmic improvement.
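The judge-score aggregation used above (drop the highest and lowest of the five marks, then average, then compare rankings with the system) can be sketched as follows. The singer labels and scores here are hypothetical, not the experiment’s data:

```python
def trimmed_mean(scores):
    """Average after removing one highest and one lowest score,
    as done for the five judges' marks per singer."""
    if len(scores) < 3:
        raise ValueError("need at least 3 scores to trim both ends")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

def same_ranking(judge_scores, system_scores):
    """True if the platform preserves the judges' overall ranking,
    even when its absolute scores run higher."""
    judge_avg = {name: trimmed_mean(v) for name, v in judge_scores.items()}
    order = lambda d: sorted(d, key=d.get, reverse=True)
    return order(judge_avg) == order(system_scores)

# Hypothetical example: system scores are uniformly higher,
# but the ordering of singers is unchanged.
judges = {"A": [85, 88, 90, 92, 79], "B": [70, 75, 73, 68, 80]}
system = {"A": 94.0, "B": 81.0}
print(same_ranking(judges, system))
```

Trimming one score from each end makes the per-singer average robust to a single unusually generous or harsh judge, which is why the comparison in Step 4 focuses on ranking agreement rather than absolute score differences.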
After the eighteen-week teaching experiment, a questionnaire was distributed to statistically analyze students’ satisfaction. It consisted of 17 questions divided into five dimensions: teachers’ teaching ability (questions 1-7), independent learning ability (questions 8-9), platform usage (questions 10-13), overall evaluation of teaching (questions 14-15), and knowledge and skill level (questions 16-17).
The satisfaction levels of the 15 students after the experiment are analyzed in Table 3. It shows that most students believed that teaching with the intelligent teaching platform under multidimensional data fusion had a positive impact on all five dimensions: teachers’ teaching ability, independent learning ability, platform usage, overall evaluation of teaching, and knowledge and skill level. This teaching method integrates the interactivity of traditional face-to-face teaching with the convenience of online learning. The main reasons for its success in enhancing student satisfaction include the following:
Personalized learning experience: the platform allows students to schedule their learning according to their own pace and time, better meeting the needs of different students and thus increasing satisfaction.

Enhanced interactivity: online platforms provide additional interactive tools, such as forums and video conferencing, which help students communicate more frequently and deeply with teachers and classmates.

Rich resources: platform teaching integrates a variety of learning resources, including online videos and interactive simulations; this diversity helps increase students’ interest and participation in learning.

Real-time feedback and support: in acquiring music performance skills, feedback plays a crucial role, since it is through feedback that performance technique is adjusted and corrected during learning. With this information, teachers can target their teaching more precisely and improve learning outcomes. The online component provides instant feedback and assessment, allowing students to quickly gauge their progress and level of understanding, which is crucial for improving learning efficiency and satisfaction.

Flexibility and convenience: students can learn via the Internet from any location, a great convenience for those who cannot attend traditional classes because of geographic location or other reasons.

Reinforced practice: learning music performance relies heavily on practice; the platform provides theoretical support online while offline classes focus on actual rehearsal and the improvement of practical skills, and this combination raises students’ practical ability more effectively.
Analysis of students’ satisfaction after the experiment (N=15)
| Dimension | Very satisfied | Satisfied | Moderately satisfied | Dissatisfied | Very dissatisfied |
|---|---|---|---|---|---|
| Teacher’s teaching ability | 45% | 42% | 13% | 0% | 0% |
| Self-directed learning ability | 32% | 61% | 7% | 0% | 0% |
| Platform usage | 25% | 56% | 19% | 0% | 0% |
| Holistic evaluation of teaching | 33% | 62% | 5% | 0% | 0% |
| Knowledge and skill level | 44% | 48% | 8% | 0% | 0% |
In summary, the application of platform teaching in music performance can fully mobilize students’ learning enthusiasm, improve teaching quality, and increase student satisfaction. At the same time, it is also in line with the current development trend of education digitization, which helps promote reform and innovation in the education model.
This paper utilizes the features and advantages of the multidimensional data fusion model to design and construct an intelligent teaching platform as an auxiliary teaching resource for music teachers, taking into account the actual needs of music performance teaching. Controlled experiments against other algorithm models show that the chord sequence matching algorithm chosen for the platform has an advantage in chord arrangement accuracy, improving it by up to 11.55%. In the credibility test of the platform’s sight-singing scoring, the results of Steps 1 through 4 all reached the expected goals of the test cases, demonstrating that the platform’s sight-singing evaluation function is usable and needs only further optimization and upgrading. A questionnaire collecting and analyzing the experimental class’s satisfaction with the intelligent platform found that no student reported dissatisfaction in any of the five dimensions. This verifies that the teaching method combined with the intelligent teaching platform can mobilize students’ motivation to learn music performance, and teachers can continue to update their teaching plans by drawing on the research in this paper.
