Research on 3D Rendering Technology to Enhance the Spatial Expression of Virtual Art in the Age of Digital Media
Published: 23 Sep 2025
Received: 31 Jan 2025
Accepted: 09 May 2025
DOI: https://doi.org/10.2478/amns-2025-1108
© 2025 Cui Wei, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Since the beginning of the 21st century, with the rapid development of science and technology, the virtual wave has come to affect more and more areas of people's daily work, life and learning. "Virtual" here refers to an effect, created by computer and communication technology, that simulates reality and transcends reality; the contemporary digital technology that realizes this effect is called virtual technology [1–2]. Discussions of virtualization in the field of technology mainly concern the functions and impact of virtual technology as applied in aerospace, healthcare, education and the military. In the field of art, the virtual gives rise to a new type of art, virtual art, a product of today's intersection of technology and art. Research results on the specific ways virtual art is created, the ways its works exist, and the art forms and characteristics they take are still few, and the research perspectives are not yet comprehensive [3–5]. Nowadays the development of art depends more and more on science and technology. VR (including AR, MR, etc.) technology extends or strengthens human perception and breaks through the intrinsic barrier of real material space, so that human beings can enter a "virtual" space unrestricted by time, space and cultural obstacles. This space may be a "hybrid space" that comes infinitely close to combining the truly real and the virtual, or a completely empty space of virtual imagination [6–7]. With the continuous progress of simulation technology, virtual art, a style of art whose content is mainly generated and controlled by computers, has begun to trigger a change in art styles: through the virtual space created by digital graphics technology it changes human ways of narration and viewing, challenging human cognitive concepts in an avant-garde way [8–10].
With the help of a hypertextual, non-linear narrative structure, time and space can be alternated and interlaced arbitrarily in virtual art; VR technology allows the audience's traditional understanding of visual art to be superimposed and augmented by multiple perceptions within the same space and time, and the artwork itself becomes a platform for the audience's thinking. The emergence of this new art form not only inspires artists to create thoughtful works but also subverts the way the audience appreciates art [11–13]. 3D rendering technology is the process of converting a 3D model into a 2D image or animation; it involves simulating the propagation and reflection of light in 3D space and calculating the color and brightness values of each pixel, allowing designers to keep hold of the project and its quality control [14–17]. Three-dimensional visualization rendering technology can effectively raise the display level of virtual art and in turn improve its spatial expression, allowing people to experience a feeling of reality within the virtual. This not only expands the comprehensive expression of art and related fields, but also enhances the immersion and visual experience of the audience, improves the contextual atmosphere and narrative of the virtual art space, and stimulates users' interactivity and interest in participation [18–20].
Enhancing the visual expressive tension of 3D rendering technology in art space requires continuously selecting, collecting and integrating finer elements so as to better touch the artistic perception of the art space. When constructing the virtual art space scene, this article maps the real scene into the virtual 3D art scene through coordinate conversion with a texture mapping algorithm, and combines inter-block texture light-and-color homogenization with a shading illumination model to enhance brightness and detail during mapping. To enhance the spatial expression of virtual art, this paper combines volume rendering with the neural radiance field, introduces a deep residual network to establish a deep neural 3D rendering model, and compresses the rendering file with a wavelet reconstruction algorithm to raise the rendering efficiency of the virtual art space. The effectiveness of the deep neural 3D rendering model is verified in terms of texture mapping effect, surface reconstruction performance and rendering efficiency, with the aim of enhancing the artistic expression of the virtual art space.
Virtual art space is a highly integrated creative form of technology and art. With the support of computer graphics technology, human imagination can be transformed into excellent works of art, and through audio-visual experience and the continuous upgrading of emotional experience, the audience is presented with a highly imaginative and immersive story time and space. The three-dimensional rendering technology of the new era has improved the efficiency of virtual art creation and optimized the creative process: the restrictions that funding, schedules, venues and manpower place on imagination and creativity will be further reduced, the content-centred output form of the virtual art space will further converge, and the core values of storytelling, emotional expression and world-view construction will be further highlighted.
In the three-dimensional rendering of a virtual art space scene, texture mapping of the image is an important technique. Texture mapping assigns the information of points in two-dimensional space to points in three-dimensional space through a mapping relationship; the information that can be mapped includes color, brightness and so on. Using texture mapping enhances the realism of the model, so that the detail and readability of the three-dimensional model of the virtual art space scene are greatly improved.
Texture mapping of 3D models involves transformations between image space, camera space and pixel space: the coordinates of actual objects in space are transformed into image coordinates through the camera lens and the screen, and the image coordinates are converted into pixel coordinates when texture mapping is performed [21]. The coordinate conversions during texture mapping are as follows:
Conversion of object coordinates to image coordinates. Image coordinates are the coordinates generated when the actual object is imaged through the camera; the object coordinate system and the camera coordinate system are established as shown in Figure 1. The image coordinates and the actual coordinates of the object satisfy the collinearity condition equations.
Conversion between image-plane coordinates and pixel coordinates. The unit of the image plane is generally the millimeter; when converting image-plane coordinates to texture coordinates, they first need to be converted to pixel coordinates according to the principal point and the physical pixel size.
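The two conversions can be written out concretely. The following is a minimal sketch with standard photogrammetric notation that the excerpt does not itself define: a rotation matrix R with rows (a1, b1, c1), (a2, b2, c2), (a3, b3, c3), projection centre (X_S, Y_S, Z_S), focal length f, principal point (u_0, v_0) and physical pixel sizes (d_x, d_y); the function name is illustrative.

```python
import numpy as np

def project_to_pixels(X, Xs, R, f, principal_point, pixel_size):
    """Project an object-space point to pixel coordinates.

    X               : (3,) object-space point (X, Y, Z)
    Xs              : (3,) camera projection centre (X_S, Y_S, Z_S)
    R               : (3, 3) rotation matrix, rows (a1 b1 c1), (a2 b2 c2), (a3 b3 c3)
    f               : focal length in millimetres
    principal_point : (u0, v0) principal point, in pixels
    pixel_size      : (dx, dy) physical pixel pitch, in millimetres
    """
    # Collinearity condition: object point, projection centre and image point lie on one line.
    d = np.asarray(R) @ (np.asarray(X, dtype=float) - np.asarray(Xs, dtype=float))
    x = -f * d[0] / d[2]                      # image-plane coordinate in millimetres
    y = -f * d[1] / d[2]
    # Image-plane coordinates (mm) -> pixel coordinates.
    u = principal_point[0] + x / pixel_size[0]
    v = principal_point[1] - y / pixel_size[1]   # image rows usually grow downwards
    return u, v
```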

Schematic of the object coordinate system and camera coordinate system
In order to make the texture of each model in the virtual art scene smoother, the idea of 2D Poisson image editing is introduced so that the light and color between the texture blocks of the virtual art scene can be evened out, eliminating the texture seam lines between blocks. The method is essentially a process of constructing and solving a Poisson equation. Constructing the Poisson equation for a 3D model requires computing the corrected color values of the triangular faces in the overlapping area (corresponding to boundary pixels in a 2D image) and the divergence values of the triangular faces in the non-overlapping area (corresponding to internal pixels in a 2D image).
Setting the overlapping area
A certain overlap region is set between adjacent sub-blocks, and the triangular faces in the overlap region are treated as boundary triangular faces; their corrected color values are computed and used as the boundary constraints in the Poisson editing.
Calculate the average color value of triangular faces
The texture coordinates of the three object-space vertices of a triangular face in the mesh are used to obtain the three image-space vertices of its corresponding texture map, which form a texture-plane triangle. The texture-plane triangle is then rasterized to obtain all the pixels it contains, and the average of their colors is taken as the average color value of the corresponding object-space triangle.
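The average-color step can be sketched as follows, rasterizing the texture-plane triangle over its bounding box with barycentric coordinates; the function name and the bounding-box rasterization are illustrative choices, not taken from the original.

```python
import numpy as np

def triangle_average_color(texture, uv):
    """Average colour of the texture pixels covered by a UV triangle.

    texture : (H, W, 3) texture image
    uv      : (3, 2) image-space (pixel) coordinates of the triangle's vertices
    """
    h, w = texture.shape[:2]
    (x0, y0), (x1, y1), (x2, y2) = uv
    denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    if denom == 0:                                       # degenerate triangle
        return texture[int(np.clip(y0, 0, h - 1)), int(np.clip(x0, 0, w - 1))]
    # Rasterize: test every pixel centre of the bounding box with barycentric coordinates.
    xs = np.clip(np.arange(int(min(x0, x1, x2)), int(max(x0, x1, x2)) + 1), 0, w - 1)
    ys = np.clip(np.arange(int(min(y0, y1, y2)), int(max(y0, y1, y2)) + 1), 0, h - 1)
    gx, gy = np.meshgrid(xs, ys)
    w0 = ((y1 - y2) * (gx - x2) + (x2 - x1) * (gy - y2)) / denom
    w1 = ((y2 - y0) * (gx - x2) + (x0 - x2) * (gy - y2)) / denom
    w2 = 1.0 - w0 - w1
    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
    pixels = texture[gy[inside], gx[inside]]
    return pixels.mean(axis=0) if pixels.size else texture[int(np.clip(y0, 0, h - 1)), int(np.clip(x0, 0, w - 1))]
```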
Calculate the divergence of the triangular faces
Referring to the way pixel divergence is calculated in 2D Poisson image editing, the divergence of every non-boundary triangular face of the reference mesh is calculated face by face, from the differences between the average color of the face and the average colors of its adjacent faces.
Calculate the color correction value of the triangulated surfaces in the overlapping region
In this paper, the color correction values of the triangular faces in the overlapping region of the reference mesh are used as the boundary constraints. For a triangular face in the overlap area, the correction value is obtained from the average color of that face in the reference mesh and the average color of the corresponding face in the neighboring mesh, and every pixel within the face is then corrected by this value.
Since the above boundary constraints are corrected color values obtained from the average colors of the triangular faces of the reference and neighboring meshes, the face colors obtained by Poisson editing converge toward the colors of the neighboring faces. This yields a smooth transition of the color and luminance differences between the reference mesh and its neighbors and solves the problem of abrupt changes in color and luminance between blocks.
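The construction and solution of the Poisson equation over mesh faces can be sketched as a sparse linear system: each interior (non-boundary) face satisfies a discrete Laplace equation driven by its divergence value, while overlap faces are fixed to their corrected colors. The sketch below assumes the adjacency, divergence and boundary colors have already been computed as described above; the data layout and function name are illustrative.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_face_colors(adjacency, divergence, boundary_color):
    """Solve a per-face Poisson system for corrected colours.

    adjacency      : dict {face_id: list of neighbouring face_ids}
    divergence     : dict {face_id: (3,) divergence of each interior face}
    boundary_color : dict {face_id: (3,) corrected colour of each overlap (boundary) face}
    """
    interior = [f for f in adjacency if f not in boundary_color]
    index = {f: i for i, f in enumerate(interior)}
    n = len(interior)
    A = lil_matrix((n, n))
    b = np.zeros((n, 3))
    for f in interior:
        i = index[f]
        A[i, i] = len(adjacency[f])
        b[i] = np.asarray(divergence[f], dtype=float)
        for g in adjacency[f]:
            if g in boundary_color:          # known boundary neighbour: move to the right-hand side
                b[i] += np.asarray(boundary_color[g], dtype=float)
            else:                            # unknown interior neighbour: stays in the system matrix
                A[i, index[g]] = -1.0
    A = A.tocsr()
    x = np.column_stack([spsolve(A, b[:, c]) for c in range(3)])  # solve each colour channel
    colors = dict(boundary_color)
    colors.update({f: x[index[f]] for f in interior})
    return colors
```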
In order to further enhance the color expression of each art scene in the virtual art space, this paper adopts a stepped illumination model for interior coloring. The diffuse reflection lighting model is a lighting model that conforms to Lambert's law: the intensity of the reflected light is proportional to the cosine of the angle between the surface normal and the direction of the light source [22]. Its diffuse color value can be written as
$$c_{d} = k_{d}\, I_{l}\, \max\!\left(0,\ \mathbf{N}\cdot\mathbf{L}\right),$$
where $k_{d}$ is the diffuse reflectance of the surface, $I_{l}$ is the intensity of the light source, $\mathbf{N}$ is the unit surface normal and $\mathbf{L}$ is the unit vector pointing toward the light source.
The color values of the diffuse reflection lighting model are then stepped to give an initial simulation of ink shades, and at the same time a thin layer of transition color is added at the boundary for subsequent processing. The stepping function quantizes the diffuse value into discrete tone levels according to a set of thresholds; in this paper the thresholds for the diffuse color values of the model are 0.25, 0.55 and 0.75.
The edges of each color level could be interpolated by certain algorithms, or replaced through a one-dimensional texture map, so that the edge transitions become more natural and produce an ink-halo effect. However, other techniques are used in the subsequent processing of this paper to enhance the halo effect, and considering the real-time requirement of the system and the small impact on the final result, only a layer of transition color is added at this stage.
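For illustration only, the stepped diffuse shading can be sketched as below; the thresholds 0.25, 0.55 and 0.75 come from the text, while the output tone levels and the function name are assumptions.

```python
import numpy as np

def stepped_diffuse(normal, light_dir, base_color,
                    thresholds=(0.25, 0.55, 0.75),
                    levels=(0.2, 0.5, 0.8, 1.0)):
    """Quantise the Lambert diffuse term into discrete, ink-like tone bands."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    diffuse = max(0.0, float(np.dot(n, l)))          # Lambert: proportional to cos(theta)
    band = sum(diffuse > t for t in thresholds)      # which threshold band the value falls into
    return np.asarray(base_color, dtype=float) * levels[band]
```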
Volume Rendering Techniques
Suppose that a volume density field $\sigma(\mathbf{x})$ is defined in space, and consider a camera ray $\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}$ passing through it.
The transmittance $T(t)$ along the ray satisfies the differential equation $\mathrm{d}T/\mathrm{d}t=-\sigma(\mathbf{r}(t))\,T(t)$, whose solution is $T(t)=\exp\!\big(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))\,\mathrm{d}s\big)$.
Volume rendering maps the color along the ray as
$$C(\mathbf{r})=\int_{t_{n}}^{t_{f}} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,\mathrm{d}t .$$
Numerical estimation of this integral is obtained by sampling $N$ points along the ray and using the quadrature
$$\hat{C}(\mathbf{r})=\sum_{i=1}^{N} T_{i}\,\big(1-e^{-\sigma_{i}\delta_{i}}\big)\,\mathbf{c}_{i},\qquad T_{i}=\exp\!\Big(-\sum_{j<i}\sigma_{j}\delta_{j}\Big),$$
where $\delta_{i}$ is the distance between adjacent samples.
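A minimal NumPy sketch of this quadrature along a single ray (variable names are illustrative):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Numerical estimate of the volume rendering integral along one ray.

    sigmas : (N,)   volume densities at the N sample points
    colors : (N, 3) emitted colours at the sample points
    deltas : (N,)   distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                            # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))     # transmittance T_i
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)                      # accumulated ray colour
    return rgb, weights
```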
Neural Radiance Field
A neural radiance field (NeRF) abstracts a scene into the form of a radiance field and represents that field with a neural network. The radiance field consists of a series of particles, each of which has different color and density properties when observed from different viewing angles. The neural network represents the scene through a five-dimensional function
$$F_{\Theta}:\ (x,\,y,\,z,\,\theta,\,\phi)\ \mapsto\ (\mathbf{c},\,\sigma),$$
which maps a spatial position and a viewing direction to a color $\mathbf{c}$ and a volume density $\sigma$.
The neural radiance field aims to learn an implicit representation of a 3D scene from a set of sparse 2D pictures; the coordinates of each sampling point $(x, y, z)$ and its viewing direction are first positionally encoded before being fed into the network.
Positional encoding is achieved by applying a series of sine and cosine functions to the input coordinates; the encoded vectors capture spatial variations at different scales, allowing the neural network to better fit data with high-frequency variations.
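A minimal sketch of such an encoding; the number of frequency bands (10 here) is a typical NeRF choice, not a value given in the excerpt.

```python
import numpy as np

def positional_encoding(p, num_freqs=10):
    """Map each input coordinate to sines and cosines at exponentially growing frequencies."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi          # 2^k * pi, k = 0 .. L-1
    angles = p[..., None] * freqs                          # broadcast over every coordinate
    encoded = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return encoded.reshape(*p.shape[:-1], -1)              # flatten the per-coordinate bands
```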
When the network becomes too deep, repeated convolution causes the correlation between the backpropagated gradients to deteriorate, producing the vanishing-gradient problem. To solve this, the ResNet residual network lets every few stacked nonlinear layers fit a residual mapping instead of directly fitting the underlying mapping: if the desired underlying mapping is H(x), the stacked nonlinear layers fit another mapping F(x) := H(x) - x, and the original mapping becomes F(x) + x.
The formula F(x) + x can be realized by a feedforward neural network "with shortcuts". A shortcut is a connection that skips one or more nonlinear layers and whose output is added to the output of the stacked layers; it adds neither extra parameters nor computational complexity, and the whole network can still be trained by backpropagation. Formally, the input-output mapping of a residual module is
$$\mathbf{y}=F(\mathbf{x},\{W_{i}\})+\mathbf{x},$$
where $F(\mathbf{x},\{W_{i}\})$ is the residual mapping learned by the stacked layers.
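A residual module of this kind can be sketched with fully connected layers as below (PyTorch; the layer width and activation are illustrative choices):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two stacked nonlinear layers fit F(x); the shortcut adds x back, giving F(x) + x."""
    def __init__(self, width: int):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.act(self.fc2(self.act(self.fc1(x))))   # F(x)
        return x + residual                                     # F(x) + x via the shortcut
```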
Virtual reality scenes are generally built with 3D scene-building tools and then imported directly. This paper combines the use of such tools with script development and terrain construction on the Unity 3D platform, and finally renders the model. The specific process is shown in Figure 2.

Schematic of scene model construction
First, the RenderDoc software tool is linked with Google Maps so that it can capture the real-time rendering application programming interface (API) calls made by Google Maps, which call the map data for real-time rendering. The tool intercepts this data, obtains the relevant feature and terrain information, and organizes it. RenderDoc is then used for data cleaning, noise processing and alignment to improve the accuracy and consistency of the intercepted data.
The captured data files are then imported into Unity 3D, where a script converts the data into a format accepted by the terrain modeling tool, and the terrain tool in Unity 3D is used to model the environment directly. The terrain tool in Unity 3D can create a large, editable terrain; continuous hills and valleys can be created with the Terrain Height Map Editor tool. Since the acquired data inevitably has small flaws, the model is smoothed and detailed, and the model texture is edited with the relevant tools. Terrain textures are added to the terrain with the Terrain Material tool.
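As one possible bridge between the intercepted terrain data and Unity's terrain tool, the sketch below normalizes a 2D elevation array into a 16-bit RAW heightmap of the kind the Terrain tool can import; the target resolution, byte order and function name are assumptions rather than details from the original.

```python
import numpy as np

def export_unity_raw_heightmap(elevation, path, resolution=1025):
    """Normalise a 2D elevation array and write it as a 16-bit RAW heightmap
    suitable for importing into Unity's Terrain tool ("Import Raw")."""
    h, w = elevation.shape
    rows = np.linspace(0, h - 1, resolution).astype(int)
    cols = np.linspace(0, w - 1, resolution).astype(int)
    grid = elevation[np.ix_(rows, cols)].astype(np.float64)   # nearest-sample resize to a square grid
    grid = (grid - grid.min()) / max(grid.ptp(), 1e-9)        # normalise heights to [0, 1]
    (grid * 65535).astype("<u2").tofile(path)                 # little-endian unsigned 16-bit samples

# Example: export_unity_raw_heightmap(np.random.rand(512, 512), "terrain_1025.raw")
```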
Given a set of multi-view images with known camera poses, the goal of this paper is to reconstruct surfaces without mask supervision, combining the advantages of neural rendering and volume rendering. The 3D spatial field of the target is represented by a signed distance function (SDF), the corresponding surface is extracted as the zero level set of the SDF, and the signed distance function is optimized during rendering. Combining the deep residual network with volume rendering and the neural radiance field, a multi-view surface reconstruction model based on deep residual neural rendering is constructed as shown in Fig. 3; it mainly includes modules for image appearance embedding, volume rendering interpolation and color weight regularization, geometric constraints, and a coarse sampling strategy.

Virtual art space rendering reconstruction
Appearance embedding
To eliminate feature sparsity bias, the rendered color is modeled as a function of the 3D point position, the viewing direction and an environment feature vector, i.e. the rendered color is updated accordingly, and the residual network is expected to encode the ambient illumination information into this environment feature vector. Increasing the inflow of environmental information strengthens the MLP network's ability to infer color. Therefore, this paper extracts shallow appearance features from each image to optimize the color MLP in the forward pass.
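One way such an appearance embedding can feed the color MLP is sketched below; the embedding size, hidden width, view-encoding dimension and number of images are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class ColorHead(nn.Module):
    """Colour MLP conditioned on a per-image appearance (environment) embedding."""
    def __init__(self, feat_dim=256, view_dim=27, app_dim=32, hidden=128, num_images=1000):
        super().__init__()
        self.embed = nn.Embedding(num_images, app_dim)    # one learnable vector per training image
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + view_dim + app_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),           # RGB in [0, 1]
        )

    def forward(self, point_feat, view_enc, image_ids):
        app = self.embed(image_ids)                       # environment feature vector per sample
        return self.mlp(torch.cat([point_feat, view_enc, app], dim=-1))
```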
Body rendering interpolation and color weight regularization
In order to eliminate the bias introduced by the discrete sampling operation along each ray, the first intersection of the ray with the surface is approximated by linear interpolation between the two adjacent sample points on either side of the surface. A new point set is then formed around this interpolated intersection and rendered again, which reduces the color deviation between the coarse estimate and the interpolated estimate; after interpolation, a color closer to the true surface color is obtained. At the same time, this paper also reduces the weight error accordingly, regularizing the weight distribution so that the volume rendering weights concentrate near the interpolated surface intersection.
Geometric Coarse Sampling Strategy
A scene is usually dominated by unoccupied space, while more computational resources and view dependencies are needed around the target itself during reconstruction. Based on this fact, the rough 3D region of interest should be found efficiently before fine reconstruction, which substantially reduces the number of query points on each ray in the later fine reconstruction stage.
When processing the input data, traditional methods rely on manual filtering to remove geometric parts that are not of interest. In contrast, DVGO selects the point cloud of interest automatically, based on the nearest and farthest points at which the rays from each camera intersect the scene point cloud. In this paper, the center of the point cloud is found from the camera position information, the average distance from this center to the camera positions is computed, and this average distance is used as the radius of a sphere around the center; the point cloud inside this 360° surrounding region is taken as the region of interest, where the radius of the surrounding region is defined according to the camera's shooting mode (surrounding the scene or long-distance panoramic coverage).
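The region-of-interest selection just described can be sketched as follows; since the excerpt does not specify exactly how the center is estimated from the camera information, the mean of the camera positions is used as a stand-in.

```python
import numpy as np

def select_region_of_interest(points, camera_positions):
    """Keep the points inside a sphere whose radius is the mean camera-to-centre distance."""
    centre = camera_positions.mean(axis=0)        # stand-in estimate of the point-cloud centre
    radius = np.linalg.norm(camera_positions - centre, axis=1).mean()
    keep = np.linalg.norm(points - centre, axis=1) <= radius
    return points[keep], centre, radius
```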
When reconstructing virtual art space scenes, a rendering file containing too many scenes may lead to long rendering times. This paper therefore proposes a rendering-file compression algorithm based on wavelet reconstruction, which reduces the size of the file produced by 3D rendering and improves the model rendering and output speed.
First, the wavelet technique is used to discretize the 3D model of the virtual art scene, and then wavelet reconstruction is used to rebuild the model, which compresses the rendering data and improves rendering efficiency. The scale and translation parameters are the key parameters of the wavelet reconstruction of the virtual art scene 3D model; after they are discretized, the discretized wavelet transform reconstruction takes the form
$$f(t)=\sum_{j}\sum_{k} c_{j,k}\, 2^{-j/2}\, \psi\!\left(2^{-j}t-k\right),$$
where $\psi$ is the wavelet basis function, $j$ is the discretized scale index, $k$ is the discretized translation index and $c_{j,k}$ are the wavelet coefficients.
One level of wavelet transform splits the image to be rendered into one low-frequency subband and three high-frequency subbands, which ultimately yields the 3D wavelet reconstruction method. The three high-frequency regions, HL, LH and HH, denote the high-pass-horizontal/low-pass-vertical subband, the low-pass-horizontal/high-pass-vertical subband and the high-pass-horizontal/high-pass-vertical subband respectively. The resolution and frequency range of the image elements in the 3D model are reduced to 1/2 to complete the first wavelet transform, and to 1/4 to complete the second, and so on, decomposing the image level by level until the required compression of the rendering file is reached.
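For illustration, the level-by-level subband decomposition can be reproduced with the PyWavelets library, which the paper does not name; the wavelet type and number of levels are assumptions.

```python
import numpy as np
import pywt

def wavelet_subbands(image, levels=2, wavelet="haar"):
    """Split an image into one low-frequency and three high-frequency subbands per level."""
    subbands = []
    ll = np.asarray(image, dtype=float)
    for _ in range(levels):
        # dwt2 returns the approximation band and the (horizontal, vertical, diagonal) detail bands,
        # which correspond to the LH/HL/HH regions described in the text; resolution halves per level.
        ll, (lh, hl, hh) = pywt.dwt2(ll, wavelet)
        subbands.append({"LH": lh, "HL": hl, "HH": hh})
    return ll, subbands
```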
With continuous breakthroughs in computer hardware and software research and development, three-dimensional rendering technology has gone through several stages of development and become increasingly mature; the Unreal Engine, whose technical core is three-dimensional rendering, is very widely applied in games, digital art, industrial design, virtual production and other fields. Relying on three-dimensional rendering technology to optimize the virtual art space scene aims to enhance the artistic expression of the virtual art space, so that the audience can experience art immersively in the virtual space.
In this paper, texture mapping of the virtual art scene is realized through coordinate conversion of the scene, texture mapping enhancement and the shading illumination model. To analyze the effectiveness of the method, it is compared experimentally with other models, and its texture mapping accuracy is verified quantitatively with two indicators. In the virtual art scene space, P2S is the average Euclidean distance from the vertices of the reconstructed surface to the ground truth, measured in centimeters. To further verify the detail of the local texture mapping, the L2 normal reprojection error (Normal) is obtained by projecting the reconstructed 3D points through the texture perspective projection and comparing them with the corresponding pixel coordinates of the pictures; it is likewise reported in centimeters and helps extract feature information in the images more efficiently, thus improving the accuracy of texture mapping onto the 3D surfaces of the virtual art scene. Lower Normal and P2S values mean a better texture mapping effect, and higher values a worse one.
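A minimal sketch of the P2S measure, approximating the ground-truth surface by a dense point sample; the nearest-neighbour approximation and the function name are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_surface_error(reconstructed_vertices, ground_truth_points):
    """P2S: mean Euclidean distance from reconstructed vertices to the ground-truth surface,
    approximated here by nearest-neighbour distances to a dense ground-truth point sample."""
    distances, _ = cKDTree(ground_truth_points).query(reconstructed_vertices)
    return distances.mean()   # reported in the same unit as the inputs (centimetres in the paper)
```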
The publicly available BUFF dataset is selected as the experimental data; it is collected by serialized 3D scanning and supports accurate quantitative evaluation of texture mapping. Five texture mapping methods, BodyNet, SiCloPe, IM-NET, HSI and PIFu, are chosen as comparisons to illustrate the effectiveness of the method in this paper. Figure 4 shows the comparison results of texture mapping for virtual art scenes.

The comparison of the texture mapping of the virtual art scene
From the comparison results it can be seen that, for texture mapping of the virtual art space scene, the Normal and P2S values of this paper's method are 0.182 cm and 3.253 cm, and its errors differ from those of the comparison methods by between 0.045 and 0.109, so the overall texture mapping error is smaller. This indicates that the method in this paper can provide more accurate texture mapping results in 3D reconstruction of virtual art space scenes, making the artistic atmosphere of the virtual art space more intuitive. In this paper, texture mapping of the virtual art scene is performed through the conversion of world coordinates, which better represents the location of the art space scene in 3D space; the continuity of the virtual art space is enhanced through Poisson editing that homogenizes the light and color of the inter-block texture; and the color arrangement of the virtual art space scene is further optimized by the shading illumination model to enhance its color expression.
In order to enhance the continuity of the scenes in the virtual art space, this paper proposes a Poisson-editing-based method for evening out light and color between blocks, which aims to connect the scenes of the virtual art space smoothly and reduce the chromatic deviation introduced by texture mapping. To test the feasibility of this method for handling color differences between texture blocks during texture mapping enhancement, OpenMVS is chosen as the comparison method, ten different types of artworks are mapped in virtual space, and brightness and computation time are selected as the indexes for studying the effectiveness of the Poisson-editing-based light-and-color homogenization. Table 1 shows the comparison results of the different methods.
Comparison results of different methods
Dataset | OpenMVS brightness | OpenMVS operation time | This paper brightness | This paper operation time |
---|---|---|---|---|
1 | 67.05% | 5.02 s | 83.67% | 3.68 s |
2 | 71.41% | 4.94 s | 77.21% | 3.91 s |
3 | 67.52% | 4.76 s | 78.48% | 4.03 s |
4 | 72.26% | 4.83 s | 79.86% | 3.72 s |
5 | 73.47% | 4.85 s | 76.53% | 3.85 s |
6 | 72.39% | 5.09 s | 80.95% | 4.07 s |
7 | 68.03% | 5.14 s | 76.01% | 3.94 s |
8 | 70.18% | 4.97 s | 82.32% | 4.01 s |
9 | 70.34% | 4.88 s | 83.54% | 3.76 s |
10 | 65.23% | 4.71 s | 77.26% | 3.79 s |
Mean | 69.79% | 4.92 s | 79.58% | 3.88 s |
From the table it can be seen that the Poisson-editing-based inter-block texture light homogenization raises the average brightness of the texture mapping results to 79.58%, with an average computation time of 3.88 s for the whole algorithm; compared with the OpenMVS algorithm, this is an increase of 14.03% in brightness and a reduction of 21.14% in computation time. This fully demonstrates that Poisson-editing-based inter-block texture homogenization can significantly enhance the texture mapping effect of the art scene, so that the texture mapping results of the scenes in the virtual art space show no color difference, the texture transitions naturally at the seam lines, and the computation is faster. It enhances the readability and display effect of the 3D model, providing reliable technical support for enhancing the expressiveness of the virtual art space.
In this paper, a multi-view surface reconstruction model based on deep residual neural rendering and the neural radiance field is constructed by combining the deep residual network with the SDF to extract multi-view surface information of the virtual art space. To verify the feasibility of the model for multi-view reconstruction of the virtual art space, the DTU dataset is selected for model training, and some scenes are randomly selected from it for 3D reconstruction of the virtual space, with Colmap, NeuS, NeuS2, PET-NeuS and HF-NeuS as the comparison models. The chamfer distance (CD), a metric used to measure the approximate distance between two geometries, is chosen as the evaluation metric for surface reconstruction: it quantifies the minimum spacing between data points, and a smaller chamfer distance means that the reconstructed surface is closer to the real surface. Fig. 5 shows the quantitative comparison of surface reconstruction quality for the different models on different scenes of the DTU dataset.
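A minimal sketch of a symmetric chamfer distance between two surface point samples; the averaging convention is one common choice and is not specified in the excerpt.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    """Symmetric chamfer distance between two point sets sampled from two surfaces."""
    d_ab, _ = cKDTree(points_b).query(points_a)   # nearest-neighbour distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)   # nearest-neighbour distances B -> A
    return 0.5 * (d_ab.mean() + d_ba.mean())
```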

Three-dimensional surface reconstruction comparison
The average chamfer distance of this paper's model across the different scenes is 0.636, an improvement of 8.49% over the state-of-the-art implicit surface reconstruction method (NeuS2). This indicates that when performing 3D reconstruction of the virtual art space, the method in this paper can reconstruct rich surface details, fully display the artistic atmosphere of the virtual art space, and effectively handle abrupt changes in the depth of the art scene. Pure MLP surface reconstruction methods such as NeuS and HF-NeuS can reconstruct smooth surfaces, but they are fitted only with a fully connected neural network, whose fitting capability is limited and whose computational complexity is high; such models are not easily extended for better 3D reconstruction and have difficulty reconstructing fine surface details. For example, in the reconstruction of the roof of Scan38, the surface structure shows depressions. Although NeuS2 can reconstruct surfaces quickly with the help of multi-resolution voxel meshes, it is only an accelerated version of NeuS, and the waste of voxel meshes in empty space and the difficulty of expanding the resolution reduce its representation ability. Compared with PET-NeuS, the method in this paper achieves similar reconstruction quality; the tri-vector mesh-based representation makes it easier to scale the resolution, and because this paper focuses its sampling on the high-frequency regions of the images during training, it performs better in the parts of the geometry with detailed variation.
In this paper, the components of the 3D reconstruction model of the virtual art space proposed in the previous section are ablated and analyzed in order to verify the effectiveness of each component. In this section NeRF is used as the baseline: model 1 adds the volume rendering technique to NeRF, model 2 adds the SDF constraints to model 1, model 3 adds the deep residual network to model 2, and model 4 adds the texture mapping module to model 3, i.e. the model of this paper. Table 2 shows the comparison results of the model ablation experiments.
Model ablation experiment comparison results
Model | Accuracy | F1-score | CD | Time |
---|---|---|---|---|
Model 1 | 0.813 | 0.853 | 0.712 | 15.24h |
Model 2 | 0.825 | 0.872 | 0.638 | 6.98h |
Model 3 | 0.837 | 0.891 | 0.615 | 3.75h |
Model 4 | 0.886 | 0.925 | 0.573 | 2.07h |
As can be seen from the table, after adding the volume rendering technique, SDF constraints, deep residual network and texture mapping module on the basis of the neural radiance field, the accuracy of the model improves from 0.813 to 0.886 and the training time shortens from 15.24 h to 2.07 h, a speed-up of more than seven times, so the efficiency of the model's 3D surface reconstruction is greatly improved. Applying the volume rendering technique directly on the SDF representation, as NeuS does, improves the accuracy of the algorithm to a certain extent while shortening the training time to 6.98 hours. The difference in training time between the two MLP networks, which both use volume rendering and the same number of layers, arises mainly because the NeuS-style network fits faster thanks to the explicit geometric constraint of the SDF. Therefore, when performing 3D reconstruction of virtual art spaces, all the modules listed in this paper's model can effectively enhance both the generalization performance and the operational efficiency of the model.
1) Performance efficiency of the rendering method
In this paper, three sets of scene models of the virtual art space (art scenes 1, 2 and 3) produced with the terrain editor in the Unity engine are used as inputs to test the scene rendering of the method implemented in this paper, and the performance is tested under different scene structure resolutions and different numbers of diffusion iterations. Table 3 shows the performance efficiency of the rendering method in this paper.
The method in this paper consists mainly of two parts: the light-and-color homogenization and shading of the virtual art scene texture mapping, and the rendering reconstruction of the virtual art space. The computational complexity of the shading part is directly related to the number of vertices of the input 3D object, while the main performance cost of the rendering reconstruction part is concentrated in the ink-diffusion rendering, which depends on the resolution of the scene structure and the number of diffusion simulation iterations. Analysis of the data shows that the scene structure resolution has a large impact on rendering efficiency, and the number of diffusion simulation iterations also has a considerable impact; by contrast, the number of model vertices has little impact, because the parallel structure of the GPU greatly optimizes the shading calculation. It can be seen that the main performance cost of this paper's rendering method is concentrated in the diffusion simulation. According to the data, at a scene structure resolution of 1024 the frame rate of every group exceeds 75 fps, which meets the requirements of real-time rendering. At a scene structure resolution of 2048 with no more than 50 diffusion iterations, the rendering frame rate is between 59.37 and 62.59 fps, which basically meets the performance requirements of real-time rendering. In summary, the 3D rendering method for virtual art scenes proposed in this paper can meet real-time requirements in terms of performance. In terms of visual effect, the method can produce a good artistic halo effect while also reflecting certain artistic characteristics.
2) Rendering comparison of virtual scenes
To further verify the effectiveness of the 3D rendering technology proposed in this paper for rendering the virtual art space, two algorithms, adaptive light projection and importance-sampled light mapping, are selected for comparison and named Algorithm A and Algorithm B. The sky scene in the virtual art space is taken as the rendering object, and cloud-image effects are compared at three rendering scales: 200*300*40, 512*360*60 and 1024*512*80. Table 4 shows the rendering comparison of the virtual art space scene for the different methods.
As can be seen from the table, compared with the adaptive light projection and importance-sampled light mapping algorithms, the average preprocessing time of this paper's method across the different rendering scales is 12.35 s, which is 20.63% and 15.87% lower than the two comparison algorithms respectively. In terms of rendering frame rate, the average frame rate of this paper's method is 27.2 fps, which is 53.76% and 30.46% higher than the comparison methods respectively. This indicates that the method performs better in both preprocessing time and rendering frame rate, can satisfy the rendering of virtual art scenes at different scales, and fully ensures the rendering efficiency of virtual art space scenes.
Performance efficiency of rendering methods
Input | Number of vertices | Scene structure resolution | Diffusion simulation iterations | Frame rate (fps) |
---|---|---|---|---|
Art scene 1 | 315200 | 1024 | 50 | 85.24 |
Art scene 1 | 315200 | 1024 | 100 | 76.38 |
Art scene 1 | 315200 | 2048 | 50 | 61.45 |
Art scene 1 | 315200 | 2048 | 100 | 42.27 |
Art scene 2 | 296700 | 1024 | 50 | 86.35 |
Art scene 2 | 296700 | 1024 | 100 | 76.43 |
Art scene 2 | 296700 | 2048 | 50 | 62.59 |
Art scene 2 | 296700 | 2048 | 100 | 45.07 |
Art scene 3 | 604800 | 1024 | 50 | 84.39 |
Art scene 3 | 604800 | 1024 | 100 | 75.18 |
Art scene 3 | 604800 | 2048 | 50 | 59.37 |
Art scene 3 | 604800 | 2048 | 100 | 36.54 |
Comparison of virtual art scene rendering
Rendering scale | Method | Preprocessing time (s) | Rendering frame rate (fps) |
---|---|---|---|
200*300*40 | Algorithm A | 8.75 | 22.15 |
200*300*40 | Algorithm B | 7.93 | 26.42 |
200*300*40 | This paper | 5.81 | 35.67 |
512*360*60 | Algorithm A | 16.54 | 18.54 |
512*360*60 | Algorithm B | 15.69 | 22.08 |
512*360*60 | This paper | 13.06 | 27.21 |
1024*512*80 | Algorithm A | 21.38 | 12.39 |
1024*512*80 | Algorithm B | 20.42 | 14.06 |
1024*512*80 | This paper | 18.17 | 18.73 |
To further verify how texture drawing changes when the virtual art space is rendered in 3D, this section examines the texture drawing results for the virtual art space scene. The texture drawing of the virtual art space is designed in the Unity engine with C# scripting combined with the framework programming; using image-system monitoring and analysis tools for debugging, the program flow of the images is traced and the textures of the virtual art space scene images are sampled in real time. On this basis, the texture drawing process of the virtual art space scene images is run on a hardware server, and the texture drawing and rendering buffer of the virtual art space scene images are verified experimentally. Fig. 6 shows the calculated delay of the rendered scene textures and the filter results for sampling the rendered scene into buffered pixels.

Scene rendering latency and buffer time
During the texture drawing of the virtual art space 3D scene, a maximum peak (4090 dpi) is generated. When the peak reaches its maximum it is assigned to a processing core, and after linear protection the texture drawing time of the virtual art space scene can be effectively shortened, further improving texture rendering efficiency. By analyzing the caching parameters of the 3D scene, real-time and realistic texture drawing is achieved and the artistic expression of the virtual art is better displayed. Because the processed scene information is buffered during rendering, the waiting time for scene textures detected by the server is 0. In ordinary 3D rendering, texture objects are usually delayed to avoid pipeline conflicts; however, since this paper combines volume rendering with the neural radiance field and makes full use of the texture mapping module of the virtual art space scene, scene drawing does not depend on pre-drawn texture objects, so the traditional scheduling problem does not arise. Therefore, when texture drawing is performed for the 3D scene of the virtual art space with the rendering technology designed in this paper, textures can be obtained more quickly, in real time and with zero waiting. While guaranteeing the artistic expression of the virtual art space, the method in this paper renders more efficiently and can also effectively reduce the production cost of virtual art space scenes.
This paper establishes a virtual art space using the Unity 3D platform, combines virtual art scene mapping and 3D deep neural rendering technology for efficient rendering of the virtual art space, and verifies the effectiveness of the 3D rendering technology in enhancing the expressiveness of the virtual art space through simulation.
When performing virtual art space scene mapping, the Normal and P2S values obtained by this paper's method are 0.182 cm and 3.253 cm respectively, 7.61% and 11.46% lower than those of the second-best PIFu model. Combined with the Poisson editing that homogenizes texture light and color and the shading illumination model, the average brightness of the texture mapping results reaches 79.58%, and the average algorithm running time is only 3.88 s. Performing texture mapping before the 3D rendering of the virtual art space helps to better restore the artistic atmosphere in the 3D virtual space and provides textural support for enhancing the expressive power of the virtual art space.
The average chamfer distance of this paper's model across different virtual art space scenes is 0.636, an improvement of 8.49% in 3D surface reconstruction performance over the state-of-the-art implicit surface reconstruction method (NeuS2). Accurate 3D surface reconstruction brings the art displayed in the virtual art space closer to reality and provides technical support for creating its artistic atmosphere.
When the method of this paper is used for 3D rendering of the virtual art space, the frame rate is above 75 fps at a scene structure resolution of 1024, and between 59.37 and 62.59 fps at a resolution of 2048 when the number of diffusion simulation iterations does not exceed 50. This shows that rendering the virtual art space with the method in this paper is highly efficient, and the rendering results are more realistic and refined, which can attract more viewers. In addition, the user can adjust the shape and proportion of the model by changing the rendering parameters to create an artistic effect in line with the intended aesthetic.