Open Access

Research on the construction method of dance education scenario simulation system integrating virtual reality technology

  
26 March 2025

Introduction

With the development of society and the rising demands of quality education for college students, more and more art courses have entered the curricula of colleges and universities to cultivate students' pursuit of art. Dance education in colleges and universities takes two forms: professional dance education, which aims to cultivate professional dance talents, and general dance education, which offers dance as an art elective [1-3]. The latter helps improve students' physical coordination and their aesthetic appreciation of dance. With the continuous development of information technology, higher requirements have also been put forward for the field of education. The traditional dance classroom suffers from problems such as low student interest and poor learning outcomes [4-5]. Virtual reality, as an emerging technology, can simulate on the computer tasks that are difficult or impossible to realize in the real world, making it easier for students to understand knowledge and improve their practical ability [6-7]. Therefore, combining virtual reality with dance teaching is an inevitable trend of the current education reform [8].

In traditional dance education, teachers convey knowledge and skills mainly through language, text, and demonstration. This single form of teaching can hardly meet modern society's needs for talent cultivation [9-10]. Virtual reality technology can visualize abstract movements, allowing students to understand and learn the relevant content more intuitively. It can also provide students with more opportunities for independent exploration and practice, helping them better master dance skills and performance methods [11-13]. In addition, the interactivity of virtual reality technology is unique: students can interact by operating handles or through body movements, which enhances the fun of the classroom. The college dance teaching mode based on virtual reality technology therefore has broad application prospects. At present, however, most Chinese universities still adopt the traditional dance teaching mode, with oral instruction and demonstration as the main means. Although this mode once played an important role in Chinese higher education, it has gradually lagged behind the requirements of the times with social change and the development of science and technology [14-16].

Therefore, we should fully recognize the great advantages and potential that VR technology brings to dance teaching in colleges and universities, and actively explore how to make full use of this new technical means to promote the reform and innovation of dance teaching. In short, with the continuous progress of science and technology and changing social demands, the college dance teaching mode needs constant renewal to meet the requirements of the times [17-18]. As a new teaching tool and platform, VR technology will become an indispensable part of college dance teaching in the future.

To address the needs of existing dance teaching for virtual reality technology, this paper opens a discussion on combining virtual reality with dance education. It adopts 3DMAX three-dimensional virtual stage scene modeling and virtual design as the basic modeling method, and collects five visual characteristic factors from this method to construct a nonlinear optimization function. The function is used to optimize the camera control state of the 3DMAX modeling method in real time, and the advantages of the VRP virtual platform are combined to develop the modeling method of this paper. At the practical application level, the performance of the virtual scene before and after simplification and the LOD detail levels are examined, and the modeling accuracy, stereoscopic sense, and application performance of this paper's method are compared with the commonly used X3D and VRML modeling methods.

Virtual Dance Education Stage
Integration of Virtual Reality and Dance Education

The emergence of various information technologies has brought about an innovative impetus for the development of art forms. Virtual technology is a simulation of real scenes, and its technology for design, display, and viewing is becoming increasingly sophisticated. For art education, virtual reality provides new opportunities for the construction and presentation of professional courses. College teachers must take the lead in accepting the challenges presented by new technologies, creating more suitable and engaging courses for students, and bringing innovation to art education.

In dance education, on-site teaching is commonly used, and students typically understand the knowledge and skills of dance by imitating their teacher’s movements. Generally, online courses are only a two-dimensional presentation of knowledge, with simple interaction between teachers and students, which is also not applicable to dance education. In China’s dance education, the emergence of virtual technology provides opportunities for the innovation of dance education in colleges and universities.

Virtual reality can be combined with dance education, using new technology to expand knowledge for students, increase knowledge of the dance background, stimulate students’ learning pleasure, and display dance for students from a variety of perspectives, which helps students understand the connotation of dance education.

The combination of virtual technology and dance education comprises two parts, the collection and the display of virtual data: collecting data from dance professionals, establishing a dance teaching database, applying motion capture and acquisition technology, virtualizing dance characters, and further abstracting them into digital dance. Students' movements can then be assessed and corrected. Combined with music, stage design, text, and so on, it can deepen the integration of dance with other arts, support virtual display and even on-site virtual projection, provide a three-dimensional teaching and display space for dance education, and increase students' interest and professional perception.

Application of the virtual stage

The combination of virtual reality and the stage has been used extremely widely in the display of dance abroad. Dance, as a comprehensive art, needs music, props, and lighting to enrich and enhance it. Adopting digital technology to provide a virtual stage for dancers can heighten the visual effect of the performance, achieve stage effects that blend the real and the illusory, and increase the audience's interest in watching. Director Zhang Yimou's application of virtual reality technology to "Swan Lake" in the Hangzhou G20 cultural performance is an excellent example. Yet although the magnificent stage technology is dazzling, it is worth reflecting on whether virtual reality upstages the dance, leading the audience to overlook the performance of the dancers themselves.

Virtual stage design requires not only the dancer's own performance but also a scene suited to the dancer, which demands a very high artistic aesthetic: designs must fit actual needs rather than being flashy. The virtual stage needs to make comprehensive use of scenery, lighting, music, and so on to create a dance mood and give the audience room for rational imagination. Dancers should also form a benign interaction with the virtual stage so that performer and scene blend seamlessly.

3D virtual stage scene modeling and virtual design

Three-dimensional modeling and virtual design are carried out for the stage scene. Figure 1 shows the steps of stage virtual scene modeling and design using the 3DMAX software.

1. Carry out scene modeling of the main stage;
2. Construct sub-models taking the stage accessories as objects, including stage equipment, curtains, sound, performance props, and the suspension and replacement bracket systems;
3. Synthesize the constructed main stage model and accessory models according to the real scale, and make appropriate adjustments to the synthesized stage model;
4. Add lights to render the stage effect;
5. Use the VRP editor to export the synthesized model and carry out the virtual design of the stage on the computer;
6. Render the stage background with a sky box, then add characters and actions to the stage design through the character and action modules to obtain the complete stage virtual design.

Figure 1.

Virtual stage production process

VRP exchange function

The main functional modules of VRP (Virtual Reality Platform) include camera transition effects, vertex shading, normal mapping, the VRP particle library, the sky box, the action module, the character module, and Flash space. Support for programming in the Lua language gives the VRP editor greater specificity and practical application value. Figure 2 shows the VRP exchange function diagram.

Figure 2.

Structure diagram of function exchange

As can be seen from Figure 2, the wide use and outstanding effect of the VRP editor rely mainly on its powerful language editing capability. When facing multiple types of programs, mastery of language editing is the basis for realizing the interactivity of the VRP editor.

Screen space location factor Mp

Considering the principles of aesthetic screen composition in photography, targets should lie within the area formed by the screen's centers of interest: when there are multiple targets it is best to place them on the one-third lines of the screen, and when there is a single target to keep it close to the screen center, so as to attract the observer's attention and achieve a harmonious picture. Defining the screen center point as $U_{cen}$ and the coordinates of the two benchmark targets in the ellipsoidal camera space as $U_1$ and $U_2$, the target position factor $M_p$ in screen space is obtained as equation (1): $M_p=\left\|U_1-U_2\right\|+\sum_{\lambda=3}^{N}\left\|U_\lambda-U_{cen}\right\|$

Screen space size factor Ms

The on-screen size of a target in this study can be expressed as the size of its bounding sphere on the screen. From the screen coordinates $U_\lambda$ of the sphere center and $U_{r\lambda}$ of the sphere's right edge, the size factor of the target on the screen is obtained as equation (2): $M_s=\sum_{\lambda=1}^{N}\pi\left\|U_\lambda-U_{r\lambda}\right\|^{2}$

Proportion of screen space obscured Mo

The proportion of a target occluded in screen space can be expressed as the proportion of its corresponding sphere that is occluded on the screen. Suppose target A is in front of target B with respect to the camera viewpoint and blocks part of B on the screen; the blocked portion of B can be defined as ξ and approximated by the product of its occluded lateral length $l_{o\lambda}$ and vertical length $h_{o\lambda}$, so that the target's occluded proportion in screen space $M_o$ is expressed as equation (3): $M_o=\sum_{\lambda=1}^{N}\frac{l_{o\lambda}h_{o\lambda}}{4\left\|U_\lambda-U_{r\lambda}\right\|^{2}}$

Camera viewing angle factor Mv

The viewing angle of the camera with respect to a target can be represented by the direction vector $D_{cam}$ of the camera and the direction vector $D_\lambda$ of the target. So that the front of each target appears on the screen as much as possible, the viewing angle factor of the camera is defined as equation (4): $M_v=\sum_{\lambda=1}^{N}\arccos\frac{D_\lambda\cdot\left(P_\lambda-P_{cam}\right)}{\left|D_\lambda\right|\left|P_\lambda-P_{cam}\right|}$
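As a concrete illustration, the visual characteristic factors of equations (1)-(4) can be computed as follows. This is a minimal sketch only: all function names are hypothetical, targets are plain coordinate tuples (2-D screen positions for equations (1)-(3), 3-D world positions for equation (4)), and no claim is made about the paper's actual implementation.

```python
import math

# Sketch of the four visual characteristic factors, eqs. (1)-(4).
# Names follow the text: U_cen is the screen center, U[i] the target
# centers on screen, Ur[i] the right-edge points of the bounding spheres.

def dist(a, b):
    """Euclidean distance between two 2-D screen points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def position_factor(U, U_cen):
    """Eq. (1): distance between the two benchmark targets plus the
    distance of every remaining target from the screen center."""
    Mp = dist(U[0], U[1]) if len(U) >= 2 else 0.0
    Mp += sum(dist(u, U_cen) for u in U[2:])
    return Mp

def size_factor(U, Ur):
    """Eq. (2): on-screen area of each target's bounding circle,
    with radius the center-to-right-edge distance."""
    return sum(math.pi * dist(u, ur) ** 2 for u, ur in zip(U, Ur))

def occlusion_factor(lo, ho, U, Ur):
    """Eq. (3): occluded extent (lateral x vertical) of each target
    relative to 4 times its squared screen radius."""
    return sum(l * h / (4 * dist(u, ur) ** 2)
               for l, h, u, ur in zip(lo, ho, U, Ur))

def view_angle_factor(D, P, P_cam):
    """Eq. (4): angle between each target's facing direction D_lambda
    and the camera-to-target vector, in 3-D world coordinates."""
    total = 0.0
    for d, p in zip(D, P):
        v = [p[i] - P_cam[i] for i in range(3)]
        dot = sum(d[i] * v[i] for i in range(3))
        nd = math.sqrt(sum(x * x for x in d))
        nv = math.sqrt(sum(x * x for x in v))
        # Clamp to avoid domain errors from floating-point round-off.
        total += math.acos(max(-1.0, min(1.0, dot / (nd * nv))))
    return total
```

For example, two benchmark targets at (0, 0) and (3, 4) give a position factor of 5, and a target facing directly at the camera contributes a zero viewing angle.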

Definition and Optimization of Nonlinear Optimization Functions

By collecting the five visual characteristic factors, a nonlinear optimization function can be constructed. The function takes the camera's input control state $\mu_{in}(\alpha_{in},\theta_{in},\varphi_{in})$ as its input and, according to the current visual characteristic factors, obtains a local optimal solution yielding the output control state $\mu_{out}(\alpha_{out},\theta_{out},\varphi_{out})$. The specific form of the function is expressed as equation (5): $\min_{\alpha,\theta,\varphi}\;\omega_p M_p+\omega_s M_s+\omega_o M_o+\omega_v M_v+\omega_c M_c$ s.t. $M_p>\varepsilon$ (position constraint), $M_s\in[\psi_{min},\psi_{max}]$ (size constraint), $M_o<\eta$ (occlusion constraint), $U_\lambda\in U(x,y)$ (screen constraint), $(\alpha_0,\theta_0,\varphi_0)=(\hat{\alpha},\hat{\theta},\hat{\varphi})$ (initial value constraint)

Here ε, ψmin, ψmax, and η are the specific constraint values for the different visual factors, which can be set according to the scenario and need, and ωp, ωs, ωo, ωv, ωc are the normalization parameters for the five visual characteristic factors, set to 10, -1, 50, -3, and 10, respectively. ωp and ωo are set to 0 in the case of a single person in spherical coordinate space. (α0, θ0, φ0) is the initial control state of the camera.

In this study, a model predictive control (MPC) method is used to optimize the control state in real time: the visual characteristic factors are sampled at every interval K, and the optimization function is constructed after each sampling. Sequential Quadratic Programming (SQP) is adopted as the specific solver for the nonlinear optimization function: it transforms the original problem into a series of quadratic programming sub-problems and obtains the optimal solution by locally searching the active set. The obtained solution is compared with the initial input control state parameters, and feedback optimization adjusts the current control state. Real-time optimization of each control state realizes the optimized operation of the whole camera.
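The per-interval optimization loop can be sketched as follows. The paper solves equation (5) with SQP; to keep the example self-contained, the sketch below substitutes a simple penalty-based coordinate-descent local search for SQP, omits the fifth factor Mc, and uses hypothetical names throughout (`objective`, `optimize_camera`, the `weights` and `limits` dictionaries) rather than anything from the paper's implementation.

```python
# Stand-in for the per-interval camera optimization of eq. (5).
# Constraints (position > eps, size within [psi_min, psi_max],
# occlusion < eta) are handled with large penalties instead of the
# exact constrained SQP step used in the paper.

def objective(state, factors, weights, limits):
    """Weighted sum of the visual factors for a control state
    (alpha, theta, phi), with penalty terms for violated constraints."""
    Mp, Ms, Mo, Mv = factors(state)
    cost = (weights["p"] * Mp + weights["s"] * Ms
            + weights["o"] * Mo + weights["v"] * Mv)
    if Mp <= limits["eps"]:
        cost += 1e6
    if not (limits["psi_min"] <= Ms <= limits["psi_max"]):
        cost += 1e6
    if Mo >= limits["eta"]:
        cost += 1e6
    return cost

def optimize_camera(state_in, factors, weights, limits,
                    step=0.1, iters=50):
    """Local search from the input control state; returns the output
    control state (alpha_out, theta_out, phi_out)."""
    best = list(state_in)
    best_cost = objective(best, factors, weights, limits)
    for _ in range(iters):
        improved = False
        for i in range(3):           # one coordinate at a time
            for delta in (step, -step):
                cand = list(best)
                cand[i] += delta
                c = objective(cand, factors, weights, limits)
                if c < best_cost:
                    best, best_cost, improved = cand, c, True
        if not improved:
            step /= 2               # refine once no step helps
    return tuple(best)
```

In the MPC loop, `optimize_camera` would be called once per sampling interval with freshly measured factors, and its result fed back as the next input state.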

While the camera tracks the target, it may jitter or jump abnormally owing to the uncertainty of the target's motion. To solve this problem, this study interpolates the camera's motion trajectory. From the solution $\mu_{out}(\alpha_{out},\theta_{out},\varphi_{out})$ of the nonlinear optimization function and the input state $\mu_{in}(\alpha_{in},\theta_{in},\varphi_{in})$, the spatial parameters of the camera's motion are expressed as equation (6): $\mu_{in}\leftarrow\mu_{in}+(\mu_{out}-\mu_{in})\,t/f,\quad\mu=\mu_{in}$

In equation (6), t represents the time consumed from the initial position to the target position, and f represents the frame rate (FPS) at which the system operates. From the spatial parameters obtained during camera operation, the spatial position Pcam and direction Dcam of the camera can be found.
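The interpolation of equation (6) can be sketched as below, assuming the control state is a 3-tuple (alpha, theta, phi); the function names are hypothetical. Each frame moves the input state a fraction t/f of the way toward the optimized state, which damps abrupt jumps while still converging to the target.

```python
# Sketch of the trajectory interpolation of eq. (6).

def interpolate_state(mu_in, mu_out, t, f):
    """One interpolation step: move mu_in toward mu_out by the
    fraction t/f, where t is the transition time and f the frame rate."""
    k = t / f
    return tuple(a + (b - a) * k for a, b in zip(mu_in, mu_out))

def smooth_trajectory(mu_in, mu_out, t, f, frames):
    """Apply the update once per frame; the residual distance to
    mu_out shrinks by the factor (1 - t/f) each frame."""
    mu = tuple(mu_in)
    for _ in range(frames):
        mu = interpolate_state(mu, mu_out, t, f)
    return mu
```

With t/f = 0.5, for instance, a single step covers half the remaining distance, so the camera approaches the optimized state geometrically instead of snapping to it.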

Practical applications

This chapter applies 3D modeling technology, virtual character technology, and complex scene optimization technology to the dance teaching scene, verifies the fluency and the LOD detail level effect of the simplified scene, and finally compares the method of this paper with two commonly used modeling methods, the X3D-based and the VRML-based virtual reality design methods, to comprehensively evaluate its modeling performance.

Before and after virtual scene simplification

In the dance virtual context, the comparison in this paper captures the visual effects of the stage set and stage characters before and after model simplification. There is no obvious difference between the models in static visualization, and the visual effect remains good. Nor is there much difference between the simplified model and the original model once textures are added, so texture mapping can serve as an effective means of reducing the amount of mesh data while maintaining a realistic appearance. Finally, as Table 1 shows, the rendering efficiency of the whole scene is significantly improved after simplification: the realism of the graphics is maintained while the real-time requirement is satisfied, making this a feasible method. Table 1 summarizes the experimental comparison before and after virtual scene simplification.

Scene simplification experimental data summary

Scene              Models   Cap Segments   State     FPS   Visual effect
Original scene     24895    266589         Static    36    Strong sense of reality, poor real-time
                                           Dynamic   21
Simplified scene   23047    174823         Static    40    Strong sense of reality, good real-time
                                           Dynamic   36

The simplified scene is drawn at frame rates of 40 and 36 in the static and dynamic states respectively, a small difference: the reduced data volume gives the system enough processing time to present a smooth picture. The frame rates of the original scene differ markedly, 36 in the static state but only 21 in the dynamic state, because the huge amount of data challenges the system's data loading and rendering capacity and causes the picture to lag, especially during scene operations. As the user's viewpoint changes, the loading and display of the scene also change, and the picture is not smooth.

LOD detail level comparison

The LOD detail level technique draws models at different levels of approximation in real time. As the viewpoint distance decreases, the model's level of detail becomes progressively clearer, giving a good visual effect. The experimental data for the different levels of the LOD technique are shown in Table 2. The camera moves along with the user, so as the camera distance increases from less than 3 m to more than 10 m, the model's image on the projection plane shrinks and the model complexity decreases accordingly. The level of model detail decreases from LOD0 to LOD3, and the number of mesh facets drops from the original 7189 to 0.

LOD detail level experimental data summary

Dance figure           LOD0 (original)   LOD1 (reduced)   LOD2 (reduced)   LOD3 (reduced)
Camera distance        <3 m              3-5 m            5-10 m           >10 m
Precision              High (100%)       Medium (55%)     Low (30%)        Model disappears (15%)
Cap Segments           7189              4423             875              0
Rendering efficiency   120 Hz            90 Hz            60 Hz            30 Hz
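The distance thresholds of Table 2 amount to a simple lookup, sketched below. The numeric values (distances, Cap Segments, rendering rates) are taken directly from the table, while the function and variable names are hypothetical placeholders rather than part of the paper's system.

```python
# LOD selection per the thresholds of Table 2.
# Each entry: (max camera distance in meters, level, Cap Segments, Hz).
LOD_TABLE = [
    (3.0,          "LOD0", 7189, 120),  # high precision, 100%
    (5.0,          "LOD1", 4423, 90),   # medium precision, 55%
    (10.0,         "LOD2", 875,  60),   # low precision, 30%
    (float("inf"), "LOD3", 0,    30),   # model disappears, 15%
]

def select_lod(camera_distance_m):
    """Return (level, cap_segments, rate_hz) for a camera distance,
    picking the first row whose distance bound is not yet exceeded."""
    for max_d, level, faces, rate in LOD_TABLE:
        if camera_distance_m < max_d:
            return level, faces, rate
    return LOD_TABLE[-1][1:]  # unreachable with the inf sentinel
```

A camera 2 m away thus selects LOD0 with the full 7189 faces, while one beyond 10 m selects LOD3, where the model is culled entirely.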
Comparative Analysis of Modeling Accuracy of Teaching Stages

In this subsection, the superiority of this paper's method in stage modeling for dance teaching is further verified by comparing its performance with that of two other commonly used virtual reality design methods at three levels: visual modeling accuracy, stereoscopic sense, and application performance.

Visual modeling accuracy

The visualization modeling accuracy test is the core step in revealing the strengths and weaknesses of a method's scene visualization effect: higher modeling accuracy indicates a better design effect. To highlight the design performance of this paper's method, comparison experiments are conducted against two commonly used virtual reality design methods based on X3D and VRML. In the experiments, the three methods are used in turn to design light and shadow effects, scene layout, stage scenes, dance props, dance characters, and dance movements. Figure 3 displays the visualization modeling accuracy of the three methods.

Figure 3.

Visual modeling accuracy test results

Analysis of Figure 3 shows that this paper's method achieves a visualization modeling accuracy above 98.00% for every type of element, much higher than the X3D-based and VRML-based virtual reality design methods. Compared with these two commonly used design methods, this paper's method better meets the needs of scene visualization.

Stereoscopic analysis

The overall stereoscopic sense of the light and shadow, scene layout, stage scene, dance prop, dance character, and dance movement designs produced by the three methods was scored by relevant experts; the evaluation results are shown in Table 3. The mean stereoscopic score of the X3D-based virtual reality design method is 5 points below that of this paper's method, and that of the VRML-based method is 8 points below, indicating that the stereoscopic sense of the two commonly used methods falls short of this paper's method.

Design effect three-dimensional evaluation

Design type               This paper's method   X3D method   VRML method
Light and shadow effect   99                    92           92
Stage scene               98                    96           89
Stage set                 98                    94           88
Dance props               99                    93           93
Dance figure              97                    90           85
Dance movement            97                    93           93
Mean value                98                    93           90
Application modeling performance

Teaching stage modeling needs to focus not only on stereoscopic sense; scene visualization should also consider eight aspects of the stage design effect: scene diversity, set consistency, richness of light and shadow, action reproduction, character authenticity, fluency, clarity, and coordination. The application performance of this paper's method and of the X3D-based and VRML-based virtual reality design methods in designing scenes is therefore evaluated from these eight perspectives. The evaluation results for the three methods are shown in Table 4.

Comparison of the application performance of the three methods

Evaluation index            This paper's method   X3D method   VRML method
Scene diversity             0.98                  0.84         0.84
Set consistency             0.91                  0.91         0.83
Light and shadow richness   0.92                  0.84         0.76
Action reproduction         0.98                  0.89         0.75
Character authenticity      0.96                  0.83         0.82
Fluency                     0.96                  0.86         0.84
Clarity                     0.95                  0.83         0.79
Coordination                0.92                  0.83         0.76

Analyzing the evaluation results in Table 4, the eight application performance indexes of this paper's method range from a minimum of 0.91 to a maximum of 0.98, while the X3D-based virtual reality design method reaches at most 0.91 with a minimum of 0.83, and the VRML-based method at most 0.84 with a minimum of 0.75. The gap between these two methods and the method of this paper is obvious, demonstrating its unique superiority in teaching stage modeling.

Conclusion

Virtual reality technology can help existing dance education overcome the many inconveniences of on-site teaching and realize three-dimensional teaching. Taking this as the starting point, this paper starts the discussion on the suitability of virtual reality technology and dance education. Subsequently, this paper chooses a 3DMAX modeling method and optimizes it. It then combines the optimized 3DMAX modeling method with a VRP editor to obtain the modeling method in this paper.

In practical application, the rendering performance of this paper's modeling method after scene simplification is high: the frame rate of the simplified scene reaches 40 in the static state and 36 in the dynamic state, an obvious improvement over the unsimplified scene. In the LOD detail level comparison, the dance characters sampled by the modeling method of this paper show good visual modeling effects. In the comparison with the two other commonly used modeling methods, the visualization modeling accuracy of this paper's method exceeds 98.00%. This indicates that the method has excellent capabilities in modeling accuracy, scene rendering, and overall performance.

Compared with other methods of the same type, the modeling method proposed in this paper offers faithful restoration and superior modeling performance. It has broad development potential and can provide an effective reference for the virtual stage design of existing dance teaching.