Open Access

Integration and Innovation of Traditional Ceramic Art and Modern Film and Television Scene Designs

29 September 2025


Introduction

Chinese traditional ceramic art has a long history and has been an important part of Chinese culture for thousands of years. Ceramic art is known worldwide for its unique aesthetic value and exquisite craftsmanship [1-3]. With the continuous progress of modern technology and production techniques, ceramic products are no longer limited to traditional functional applications; in the field of modern film and television scene design, ceramics have been given greater artistic and aesthetic value [4-7].

As a key link in film and television production, modern scene design plays an important role in telling the film's story, conveying the characters' emotions, and creating atmosphere. Good scene design must consider not only the audience's viewing experience but also the portrayal of detail and the fit with the storyline [8-11]. Ceramic art and film and television scene design are two highly artistic and creative fields, each with its own charm in different media. At their intersection in contemporary art, however, we see the two fields merging and developing together [12-15].

The integration and innovation of traditional ceramic art and modern film and television scene design is a field of continuous exploration. The charm of traditional ceramic art lies in its long history and unique craft characteristics, while modern film and television scene design focuses on practicality and innovation [16-19]. Integrating traditional ceramic art with modern design trends can bring new vitality and development opportunities to the ceramic industry, and the use of innovative design concepts and technological means will promote the continuous innovation and progress of ceramic art [20-23]. The combination of tradition and modernity not only revitalizes ceramic art but also promotes the development of the ceramic industry. Integration is the future trend of ceramic design; only the continuous pursuit of innovation can win the favor of the market and consumers [24-27].

In this paper, we draw on the camera imaging principle to project real-world objects onto a 2D image, realizing the sequential transformation among world, camera, image, and pixel coordinates. Based on the principle of epipolar geometric constraints, we infer the camera parameters and determine the depth range, obtaining the camera's intrinsic and extrinsic parameters and the depth information in the scene. After depth estimation, a 3D point cloud model of the scene is generated through a depth map fusion algorithm to reconstruct the scene's 3D model. UV mapping is used in this process to fit texture maps accurately to the 3D model and refine its appearance. The audience enters the scene through Leap Motion and learns traditional pottery hand movements through gesture interaction, while a DenseNet monitors the audience's gestures in real time to improve the accuracy of gesture recognition during the pottery process. Finally, we design a traditional pottery virtual museum scene according to this process and conduct an empirical investigation of the audience's experience to provide reference opinions for subsequent adjustments.

Virtual ceramic art scene design
Scene 3D reconstruction related technology
Camera model

Figure 1 shows a schematic of pinhole camera model imaging. 3D reconstruction restores the representation of an object in 3D space from image information captured at multiple viewpoints [28], and the camera imaging principle describes how the camera projects real-world objects onto a two-dimensional image. The pinhole camera model, as a classical simplified model, explains the imaging principle of the camera, which involves the transformation relationships among the world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system.

Figure 1.

Pinhole camera model imaging schematic

World coordinate system: a coordinate system used to represent the position of an object in the real world, usually denoted $O_W X_W Y_W Z_W$, where $O_W$ is a point in real space. For example, point $P$ has coordinates $(X_W, Y_W, Z_W)$ in the world coordinate system.

Camera coordinate system: a coordinate system defined relative to the camera itself, usually denoted $O_C X_C Y_C Z_C$, where $O_C$ is the optical center of the camera. The optical axis of the camera is defined as the $Z$-axis and describes the camera's viewing direction. Point $P$ has coordinates $(X_C, Y_C, Z_C)$ in the camera coordinate system.

Image coordinate system: according to the imaging principle of the pinhole camera, the imaging plane lies behind the camera plane. Taking the intersection point $O$ of the $Z$-axis of the camera coordinate system with the imaging plane as the origin, the image coordinate system $Oxy$ is established, and point $P$ has coordinates $(x, y)$ in the image coordinate system.

Pixel coordinate system: the pixel coordinate system $O_0 uv$ is established with the corner vertex $O_0$ of the captured image as the origin, and point $P$ has coordinates $(u, v)$ in the pixel coordinate system.

Conversion from the world coordinate system to the camera coordinate system

During the conversion from the world coordinate system to the camera coordinate system, the object is not deformed; only the coordinate system changes. This conversion is accomplished by a translation vector $t$ and a rotation matrix $R$, as shown in equation (1):

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + t \tag{1}$$

where $R$ is a $3 \times 3$ orthonormal matrix describing the rotation of the world coordinate system relative to the camera coordinate system, and $t = [t_x, t_y, t_z]^T$ is the offset between the world origin $O_W$ and the camera origin $O_C$. To simplify the computation, the rotation matrix $R$ and the translation vector $t$ are combined into a single homogeneous transformation matrix, so that points in the world coordinate system can be transformed into the camera coordinate system by one matrix operation, as shown in equation (2):

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \tag{2}$$
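The transform in equations (1)-(2) can be sketched in a few lines of numpy; the rotation and translation values below are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative pose: 90-degree rotation about the Z-axis plus a translation
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])

# 4x4 homogeneous transformation matrix combining R and t, as in equation (2)
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

P_world = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous world point
P_cam = T @ P_world                         # camera-frame coordinates
# equivalent to equation (1): R @ P_world[:3] + t
```

The homogeneous form lets a chain of rotations and translations be composed by plain matrix multiplication.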

Conversion from camera coordinate system to image coordinate system

Converting from the camera coordinate system to the image coordinate system projects three-dimensional points onto a two-dimensional plane. The 3D point $P$ is projected through the camera's optical center onto the imaging plane, forming a similar-triangle relationship, by which the coordinates of $P$ in the camera coordinate system can be converted to coordinates in the image coordinate system.

Based on the properties of similar triangles, the following relations hold:

$$\frac{Z_c}{f} = \frac{X_c}{x} = \frac{Y_c}{y} \quad\Longrightarrow\quad x = f\frac{X_c}{Z_c},\qquad y = f\frac{Y_c}{Z_c},\qquad z = f \tag{3}$$

where $f$ denotes the focal length, i.e., the distance between the origin of the camera coordinate system and the origin of the image coordinate system.

According to this derivation, the homogeneous transformation from the camera coordinate system point $P(X_C, Y_C, Z_C)$ to the image coordinate system point $P(x, y)$ is:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \frac{1}{Z_c}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{4}$$

Conversion from image coordinate system to pixel coordinate system

In image processing, the pixel coordinate system is usually used to describe pixel positions, so after the conversion from the camera coordinate system to the image coordinate system, the image coordinate system must be converted to the pixel coordinate system. This conversion involves a scaling and a translation of the origin. Let $d_x$ and $d_y$ be the physical dimensions of one pixel on the imaging plane, and let $u_0$ and $v_0$ be the offsets between the origins of the two coordinate systems along the horizontal and vertical axes, respectively. The correspondence can then be expressed as:

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0 \tag{5}$$

Collected into homogeneous form, the conversion equation is:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{6}$$

By combining Eqs. (2), (4), and (6), the conversion equation from the world coordinate system to the pixel coordinate system can be derived as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_c}\begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = \frac{KT}{Z_c}\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \tag{7}$$

Let:

$$K = \begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad T = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \tag{8}$$

where $K$ is the intrinsic (internal parameter) matrix of the camera, a fixed matrix determined by the camera manufacturer that contains internal parameters such as the focal length and principal point offset. $T$ is the extrinsic (external parameter) matrix, which comprises the rotation matrix $R$ and translation vector $t$; it changes with the position and orientation of the camera in space and describes the camera's pose relative to the world coordinate system. $Z_C$ is the depth value of the point in the camera coordinate system, i.e., its distance from the camera's optical center.
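As a minimal numpy sketch of the full chain in equation (7), the following snippet projects a world point to pixel coordinates; the focal length, pixel size, principal point, and pose are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative intrinsics: 50 mm focal length, 0.1 mm pixels, VGA principal point
f, dx, dy = 0.05, 1e-4, 1e-4
u0, v0 = 320.0, 240.0
K = np.array([[f / dx, 0.0,    u0],
              [0.0,    f / dy, v0],
              [0.0,    0.0,    1.0]])

R = np.eye(3)                    # camera axes aligned with the world frame
t = np.zeros(3)

def world_to_pixel(P_world):
    """World -> camera -> image -> pixel, following equation (7)."""
    P_cam = R @ P_world + t      # extrinsic transform, eq. (1)
    Zc = P_cam[2]                # depth along the optical axis
    uvw = K @ P_cam              # pixel coordinates scaled by the depth
    return uvw[:2] / Zc          # divide by Z_c to get (u, v)

u, v = world_to_pixel(np.array([0.1, -0.2, 2.0]))
```

The division by $Z_c$ is what makes distant objects appear smaller: the same lateral offset maps to fewer pixels at greater depth.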

Epipolar geometric constraints

Figure 2 shows a schematic of the epipolar geometric constraints. When a certain number of matched feature points are known between two or more images, the inference of the camera parameters and the determination of the depth range can be accomplished according to the principle of epipolar geometric constraints. This process exploits the geometric relationship between the cameras: by analyzing and processing the matched points, the intrinsic and extrinsic parameters of the cameras as well as the depth information in the scene can be inferred.

Figure 2.

Epipolar geometric constraints

In Fig. 2, points $O_1$ and $O_2$ are the optical centers of the two cameras. Point $P$ is a point in three-dimensional space, and $p_1$ and $p_2$ are its projections on the imaging planes $I_1$ and $I_2$ of the two cameras. Points $e_1$ and $e_2$ are the intersections of the line through the optical centers $O_1$ and $O_2$ with the two imaging planes. The plane formed by $O_1$, $O_2$, and $P$ is called the epipolar plane, and the lines $p_1e_1$ and $p_2e_2$ where the epipolar plane intersects the imaging planes $I_1$ and $I_2$ are called the epipolar lines. As the figure shows, the projections on imaging plane $I_2$ of all points on the ray $O_1P$ can only fall on the epipolar line $p_2e_2$. From Eq. (7), Eq. (9) can be obtained to express the corresponding coordinate relationship:

$$s_1 p_1 = K P, \qquad s_2 p_2 = K T_{12} P = K \left(R_{12} P + t_{12}\right) \tag{9}$$

where $s_1$ and $s_2$ are the depths of point $P$ in the $O_1$ and $O_2$ camera coordinate systems, respectively, $R_{12}$ is the rotation matrix transforming the $O_1$ camera coordinate system into the $O_2$ camera coordinate system, and $t_{12}$ is the corresponding translation vector. Combining the two relations in Eq. (9) gives:

$$s_2 p_2 = K\left(R_{12} K^{-1} s_1 p_1 + t_{12}\right) \tag{10}$$

Left-multiplying both sides by $K^{-1}$ gives:

$$K^{-1} s_2 p_2 = R_{12} K^{-1} s_1 p_1 + t_{12} \tag{11}$$

Letting $x_2 = K^{-1} s_2 p_2$ and $x_1 = K^{-1} s_1 p_1$, we obtain:

$$x_2 = R_{12} x_1 + t_{12} \tag{12}$$

Taking the cross product of both sides with $t_{12}$ (written as the skew-symmetric matrix $t_{12}^{\wedge}$) and then left-multiplying both sides by $x_2^T$, the following simplification is obtained, since the inner product of perpendicular vectors is zero:

$$x_2^T t_{12}^{\wedge} R_{12} x_1 = 0 \tag{13}$$

Substituting back $x_2$ and $x_1$ gives:

$$p_2^T K^{-T} t_{12}^{\wedge} R_{12} K^{-1} p_1 = 0 \tag{14}$$

Denoting the middle parts of these expressions as the fundamental matrix $F$ and the essential matrix $E$, respectively, the epipolar geometric constraint simplifies to:

$$x_2^T E x_1 = p_2^T F p_1 = 0, \qquad E = t_{12}^{\wedge} R_{12}, \qquad F = K^{-T} E K^{-1} \tag{15}$$

When both the intrinsic and extrinsic parameters of the camera are unknown, the classical 8-point method can usually be used, i.e., the fundamental matrix $F$ is solved from 8 pairs of matching points.
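The 8-point method mentioned above can be sketched as follows, with Hartley normalization added for numerical stability (a standard implementation choice, not something the paper specifies); the two-camera setup at the end is purely synthetic and illustrative:

```python
import numpy as np

def _normalize(pts):
    """Hartley normalization: translate to zero mean, scale to sqrt(2) mean norm."""
    mean = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
    T = np.array([[scale, 0.0, -scale * mean[0]],
                  [0.0, scale, -scale * mean[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(p1, p2):
    """Estimate the fundamental matrix F from N >= 8 matches (pixel coords),
    solving p2^T F p1 = 0 in the least-squares sense, then enforcing rank 2."""
    q1, T1 = _normalize(p1)
    q2, T2 = _normalize(p2)
    A = np.array([[x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1.0]
                  for (x1, y1, _), (x2, y2, _) in zip(q1, q2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)              # null vector of A
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                            # enforce the rank-2 constraint
    F = U @ np.diag(S) @ Vt
    return T2.T @ F @ T1                  # undo the normalization

# Illustrative synthetic check: two cameras observing a random point cloud
rng = np.random.default_rng(0)
P3d = rng.uniform(-1.0, 1.0, (12, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R12 = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
t12 = np.array([0.5, 0.0, 0.0])
h1 = (K @ P3d.T).T
p1 = h1[:, :2] / h1[:, 2:]
h2 = (K @ ((R12 @ P3d.T).T + t12).T).T
p2 = h2[:, :2] / h2[:, 2:]
F = eight_point(p1, p2)
```

For noiseless matches like these, the epipolar residual $p_2^T F p_1$ is numerically zero for every pair.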

Depth Map Filtering and Fusion

After depth estimation is complete, filtering out outliers from the background and occluded regions is a critical step to ensure the quality and accuracy of the final point cloud [29]. The depth maps are filtered using geometric consistency between multiple images to eliminate depth values that do not satisfy the conditions. Let pixel $p$ on the reference depth map $I_0$ have depth $d_p$. Pixel $p$ is projected, according to its depth $d_p$, to a pixel $q$ on another depth map $I_1$ with depth $d_q$; pixel $q$ is then projected back onto the reference depth map $I_0$ according to its depth $d_q$. If the reprojected coordinate $p_{\text{reproj}}$ and the reprojected depth $d_{\text{reproj}}$ satisfy Equation (16), the depth estimate at $p$ is said to satisfy two-view geometric consistency. In the experiments in this paper, a depth estimate is kept only if it satisfies consistency in at least three views:

$$\left\{ \begin{aligned} &\left| p_{\text{reproj}} - p \right| < 1 \\ &\frac{\left| d_{\text{reproj}} - d_p \right|}{d_p} < 0.01 \end{aligned} \right. \tag{16}$$
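The two-view consistency check of equation (16) can be sketched as follows; the camera intrinsics and the constant-depth second view are illustrative assumptions, and `geometrically_consistent` is a hypothetical helper name, not from the paper's code:

```python
import numpy as np

def backproject(K, pix, depth):
    """Pixel plus depth -> 3D point in that camera's frame."""
    u, v = pix
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

def project(K, X):
    """3D point in a camera frame -> (pixel, depth)."""
    uvw = K @ X
    return uvw[:2] / uvw[2], X[2]

def geometrically_consistent(p, d_p, depth1, K, R, t):
    """Two-view check of equation (16). R, t map view-0 coordinates to
    view-1; depth1 is the second view's depth map, indexed depth1[v, u]."""
    q, _ = project(K, R @ backproject(K, p, d_p) + t)   # p -> q in view 1
    qi = np.round(q).astype(int)
    if not (0 <= qi[1] < depth1.shape[0] and 0 <= qi[0] < depth1.shape[1]):
        return False                                    # fell outside view 1
    d_q = depth1[qi[1], qi[0]]
    X0 = R.T @ (backproject(K, q, d_q) - t)             # q back into view 0
    p_re, d_re = project(K, X0)
    return bool(np.linalg.norm(p_re - p) < 1.0 and abs(d_re - d_p) / d_p < 0.01)

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
ok = geometrically_consistent(np.array([320.0, 240.0]), 2.0,
                              np.full((480, 640), 2.0), K, np.eye(3), np.zeros(3))
```

With identical cameras and a matching depth in the second view the check passes; a conflicting second-view depth fails the 1% relative-depth test.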

After all depth maps have been filtered, the depth map fusion algorithm converts each depth map into a unified coordinate system, generating a 3D point cloud model of the scene and realizing the 3D reconstruction of the target. This paper uses the open-source fusibile code to implement this step.

Mesh Reconstruction and Texture Mapping

After obtaining the depth-map-fused point cloud, two more steps are required before a complete 3D model is available: mesh reconstruction and texture mapping. Mesh reconstruction is the process of generating a continuous 3D mesh model from discrete point cloud data.

In this paper, we use UV mapping to realize texture mapping. UV mapping is a special way of assigning texture coordinates: UV coordinates represent positions on a 2D texture image, analogous to X and Y coordinates. In UV mapping, each vertex is assigned a UV coordinate that determines that vertex's position on the texture image. UV coordinates usually range over $[0, 1]$, where $(0, 0)$ denotes the lower-left corner of the texture image and $(1, 1)$ the upper-right corner. By mapping UV coordinates to the texture image, the texture can be made to fit the surface of the 3D model precisely, giving the model a more vivid appearance.
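As a small illustration of the UV convention described above (with $(0,0)$ the lower-left corner of the texture), a nearest-neighbour texture lookup might be sketched as follows; the 2×2 toy texture is purely illustrative:

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbour lookup of a UV coordinate in a texture image.

    texture: (H, W, 3) array stored top-row-first; uv lies in [0, 1]^2 with
    (0, 0) the lower-left corner and (1, 1) the upper-right, as above.
    """
    h, w = texture.shape[:2]
    u, v = uv
    col = min(int(u * (w - 1) + 0.5), w - 1)
    row = min(int((1.0 - v) * (h - 1) + 0.5), h - 1)   # v axis points upward
    return texture[row, col]

# 2x2 toy texture: top row red/green, bottom row blue/white
tex = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]])
corner = sample_texture(tex, (0.0, 0.0))   # lower-left texel
```

Real renderers typically use bilinear or mipmapped filtering instead of nearest-neighbour, but the coordinate convention is the same.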

Traditional pottery interactive system design

After the virtual 3D scene is constructed, the audience enters it through Leap Motion and learns traditional pottery hand movements through gesture interaction [30]. An improved DenseNet with embedded SENet in the background script monitors the starting key frames of the audience's gestures in real time, classifies the gestures with a fully connected neural network, and then gives feedback to the audience according to the prediction results.

Leap Motion Gesture Recognition Algorithm

Several mathematical formulas and technical descriptions are used in computing hand positions for the Leap Motion device, covering computer vision, coordinate transformations, and geometric calculations. Leap Motion uses dual cameras to capture hand positions, so points in the camera coordinate system need to be transformed into the world coordinate system.

A point $(x_c, y_c, z_c)$ in the camera coordinate system can be converted to a point $(x_w, y_w, z_w)$ in the world coordinate system through the camera's projection matrix $P$, as shown in equation (17):

$$\begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} = P \begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix} \tag{17}$$

The projection matrix $P$ can usually be expressed as equation (18):

$$P = K\,[R \mid t] \tag{18}$$

In Equation (18), $K$ is the camera's intrinsic matrix, $R$ is the rotation matrix, and $t$ is the translation vector.

Leap Motion uses stereo vision, determining depth information from the parallax between the two cameras. Assuming two cameras with parallel optical axes and baseline distance $B$, the depth $z$ of corresponding points $(x_L, y_L)$ and $(x_R, y_R)$ in the left and right cameras can be computed as shown in Equation (19):

$$z = \frac{fB}{x_L - x_R} \tag{19}$$

In Equation (19), $f$ is the focal length of the camera and $x_L - x_R$ is the horizontal disparity of the corresponding points in the two cameras. With the depth value $z$ computed from the disparity, a point $(x, y)$ in the camera plane can be converted to a point $(X, Y, Z)$ in 3D space, realizing 3D reconstruction as shown in Equation (20):

$$X = \frac{xz}{f}, \qquad Y = \frac{yz}{f}, \qquad Z = z \tag{20}$$
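Equations (19)-(20) combine into one small function; the focal length and baseline below are illustrative values, not Leap Motion's actual calibration:

```python
def stereo_to_3d(xL, yL, xR, f, B):
    """Depth from horizontal disparity (eq. 19), then back-projection (eq. 20)."""
    disparity = xL - xR          # horizontal parallax between the two views
    z = f * B / disparity
    return xL * z / f, yL * z / f, z

# Illustrative numbers: f = 500 px, B = 0.04 m, a 20 px disparity
X, Y, Z = stereo_to_3d(10.0, 5.0, -10.0, f=500.0, B=0.04)
```

Note that depth is inversely proportional to disparity, so small disparities (far objects) are the most sensitive to matching error.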

In addition to position information and 3D reconstruction, Leap Motion can recognize finger posture and movement trajectories. Let the 3D position of finger joint $i$ be $p_i = (x_i, y_i, z_i)$; then the direction vector $d_i$ of the finger bone can be expressed as Equation (21):

$$d_i = p_{i+1} - p_i \tag{21}$$

To recognize different gestures, the angles between finger bone direction vectors can be calculated. The angle $\theta$ between two vectors $d_1$ and $d_2$ is computed by Eq. (22):

$$\cos\theta = \frac{d_1 \cdot d_2}{\|d_1\|\,\|d_2\|} \tag{22}$$

In Equation (22), $\cdot$ denotes the dot product and $\|d\|$ denotes the norm of a vector.
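Equations (21)-(22) translate directly into numpy; the joint positions below are toy values:

```python
import numpy as np

def bone_directions(joints):
    """Direction vectors d_i = p_{i+1} - p_i along one finger (eq. 21)."""
    joints = np.asarray(joints, dtype=float)
    return joints[1:] - joints[:-1]

def angle_between(d1, d2):
    """Angle in degrees between two bone vectors, via eq. (22)."""
    c = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))   # clip guards rounding

# Toy joint chain: one right-angle bend
dirs = bone_directions([[0, 0, 0], [1, 0, 0], [1, 1, 0]])
theta = angle_between(dirs[0], dirs[1])
```

The `np.clip` keeps the cosine inside $[-1, 1]$ so that floating-point rounding on nearly parallel bones cannot push `arccos` out of its domain.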

Gesture Characterization and Modeling

Based on the above gesture tracking model, the scattered hand data features are computed as follows:

$$D_i = \frac{\|F_i - C\|}{S}, \qquad i = 1, \ldots, 5 \tag{23}$$

where $D_i$ describes the Euclidean distance between the tip of finger $i$ and the palm center $C$, and $S$ represents the scale factor, obtained from Eq. (24):

$$S = \|F_{\text{middle}} - C\| \tag{24}$$

All hand feature values (except the directions) are normalized by dividing the three-dimensional distance between the fingertip and the palm center by the scale factor $S$.

The angle between the vector from the palm center $C$ to the projected fingertip $F_i^p$ and the palm direction vector $h$ is $A_i$, calculated as:

$$A_i = \angle\left(F_i^p - C,\; h\right) \tag{25}$$

where $F_i^p$ represents the projected position of the fingertip along the palm normal vector $n$. The normalized vertical distance of the fingertip from the palm center is described by $E_i$, calculated as:

$$E_i = \operatorname{sgn}\left((F_i - F_i^p) \cdot n\right) \frac{\|F_i - F_i^p\|}{S} \tag{26}$$

Each gesture extracted by Leap Motion is thus characterized by a set of feature vectors $V = (D, A, E)$ comprising the feature values $D$, $A$, and $E$.
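A sketch of the feature extraction in equations (23)-(26), assuming the middle finger is index 2 and reading $F_i^p$ as the projection of the fingertip onto the palm plane through $C$ with normal $n$ (one possible interpretation of the text); the toy hand at the end is illustrative:

```python
import numpy as np

def gesture_features(tips, C, n, h):
    """Compute the feature vector V = (D, A, E) from eqs. (23)-(26).

    tips: (5, 3) fingertip positions F_i; C: palm centre; n: palm normal;
    h: palm direction vector. Assumptions: middle finger is index 2 (for
    the scale factor S); F_i^p is the fingertip projected onto the palm
    plane with normal n.
    """
    F = np.asarray(tips, dtype=float)
    C = np.asarray(C, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    h = np.asarray(h, dtype=float)
    S = np.linalg.norm(F[2] - C)                      # eq. (24)
    D = np.linalg.norm(F - C, axis=1) / S             # eq. (23)
    Fp = F - ((F - C) @ n)[:, None] * n[None, :]      # projected fingertips
    A = np.array([np.degrees(np.arccos(np.clip(      # eq. (25)
        np.dot(fp - C, h) /
        (np.linalg.norm(fp - C) * np.linalg.norm(h) + 1e-12), -1.0, 1.0)))
        for fp in Fp])
    E = (np.sign(((F - Fp) * n).sum(axis=1))          # eq. (26)
         * np.linalg.norm(F - Fp, axis=1) / S)
    return D, A, E

# Toy hand: palm at the origin, normal +z, pointing +y, middle tip 2 units away
D, A, E = gesture_features(
    [[1, 0, 0], [0, 1, 0], [0, 2, 0], [0, 1, 1], [-1, 0, 0]],
    [0, 0, 0], [0, 0, 1], [0, 1, 0])
```

Dividing by $S$ makes the features invariant to hand size, which is what allows the same trained classifier to serve different visitors.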

DenseNet

DenseNet is a common convolutional neural network [31]. Its basic idea continues that of ResNet, the core feature of both being the use of shortcut connections across layers. A DenseNet contains a number of dense blocks, within which the spatial size of the feature maps generally does not change; transition layers with downsampling are added between dense blocks to reduce the feature dimensions. Let $x_0$ be the input to the first layer of the network, $l$ denote the layer index, and $H_l(\cdot)$ the composite transform function of layer $l$. An ordinary convolutional network computes the output $x_l$ of layer $l$ as in Eq. (27). ResNet uses cross-layer connections as in Eq. (28), i.e., the transformed result $H_l(x_{l-1})$ plus the output $x_{l-1}$ of the previous layer. DenseNet instead takes the outputs of all preceding $l$ layers as the input to layer $l$, as in Eq. (29), where $[x_0, x_1, \ldots, x_{l-1}]$ denotes the concatenation of the outputs of the first $l$ layers:

$$x_l = H_l(x_{l-1}) \tag{27}$$
$$x_l = H_l(x_{l-1}) + x_{l-1} \tag{28}$$
$$x_l = H_l([x_0, x_1, \ldots, x_{l-1}]) \tag{29}$$

All preceding layers are densely connected to the layers behind them. This connection pattern helps alleviate the vanishing-gradient problem and, with its good regularization effect, can effectively reduce overfitting during training. Moreover, DenseNet realizes feature reuse by concatenating features along the channel dimension, which saves many parameters and reduces learning redundancy. These two features allow DenseNet to achieve better performance than classical networks such as ResNet with fewer total parameters and lower computational cost.
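The dense connectivity of equation (29) can be illustrated with plain channel vectors and a stand-in transform (a random linear map here, not DenseNet's actual BN-ReLU-Conv layers); all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
c0, growth_rate, depth = 8, 4, 3          # illustrative channel sizes

def H(x, out_channels):
    """Stand-in for a layer's composite transform H_l (random linear map
    plus ReLU; real DenseNet layers use BN-ReLU-Conv)."""
    W = rng.standard_normal((out_channels, x.shape[0]))
    return np.maximum(W @ x, 0.0)

features = [rng.standard_normal(c0)]      # x0, the block's input
for l in range(depth):
    concat = np.concatenate(features)     # [x0, x1, ..., x_{l-1}], eq. (29)
    features.append(H(concat, growth_rate))
# every layer sees all earlier outputs; input width grows linearly with depth
```

Each layer adds only `growth_rate` new channels while re-reading everything before it, which is exactly the feature-reuse property discussed above.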

SENet

SENet is a network structure that focuses on channel relationships, in contrast to the usual focus on the spatial dimension for improving network performance. The model adaptively recalibrates channel features based on the interdependencies between channels, differentiating the importance of each feature channel so as to enhance valuable feature information and suppress irrelevant features. Its structure comprises two key operations, Squeeze and Excitation, hence the name SENet [32].

Let $F_{tr}$ denote the convolution, $X$ and $U$ the input and output feature maps, $F_{sq}(\cdot)$ the squeeze function, and $F_{ex}(\cdot, W)$ the excitation function; a typical SENet module is represented in the figure.

The squeeze module is a global average pooling operation that turns spatial information into channel descriptors, as in Eq. (30): $F_{sq}(\cdot)$ shrinks the spatial dimensions $H \times W$ of $U$ (e.g., from $6 \times 6$ to $1 \times 1$), and the $c$th element of $z$ is:

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i, j) \tag{30}$$

The excitation module passes the pooled $C$-dimensional feature vector through two fully connected layers in sequence, which first reduce the dimension to $C/r$ and then restore it to $C$, and finally applies the Sigmoid activation $\sigma$, limiting the output values to between 0 and 1. In Eq. (31), $\delta$ represents the ReLU function, and the weight matrices $W_1$ and $W_2$ implement the dimension reduction and restoration:

$$s = F_{ex}(z, W) = \sigma(g(z, W)) = \sigma\left(W_2\,\delta(W_1 z)\right) \tag{31}$$

After these two modules, the learned $C$-dimensional weights are fused with the input $U$ channel by channel. In Eq. (32), $u_c$ is a two-dimensional feature map, $s_c$ is the computed weight, and the $F_{scale}$ function multiplies the two:

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c \tag{32}$$
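Equations (30)-(32) can be sketched on a single feature map with numpy; the channel count, spatial size, reduction ratio, and random weights are all illustrative:

```python
import numpy as np

def se_block(U, W1, W2):
    """Squeeze-and-Excitation over a (C, H, W) feature map, eqs. (30)-(32)."""
    z = U.mean(axis=(1, 2))                       # squeeze: global average pool
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))  # excitation
    return s[:, None, None] * U                   # scale: reweight each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 6, 6, 2                           # illustrative sizes
U = rng.standard_normal((C, H, W))
W1 = rng.standard_normal((C // r, C))             # reduce to C/r
W2 = rng.standard_normal((C, C // r))             # restore to C
X = se_block(U, W1, W2)
```

Because every per-channel weight $s_c$ lies strictly between 0 and 1, the block can only attenuate channels, never amplify them, which is how it suppresses irrelevant features.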

As the previous section shows, the SENet module is simple and very flexible: it can be loaded into existing network architectures relatively easily, and residual network models can all integrate the SE block as a nonlinear branch. In this paper, we therefore embed SENet into DenseNet to improve the accuracy of pottery gesture recognition.

DenseNet with embedded SENet

The network model in this paper uses Leap Motion to acquire depth image data and detect the starting key frames, so as to obtain the whole segment of gesture motion data from the starting frame numbers. The dense skip-connection convolutional network structure is improved on the basis of DenseNet: a feature channel weight training module (the SENet module) is added between each DenseNet dense block and the following transition layer, which helps the network focus on the effective feature parts and reduces the activation of irrelevant parts, in order to improve the accuracy of gesture recognition during the pottery process.

Scenography for a virtual museum of traditional ceramics
Scenography

The overall scene design of the traditional ceramic art virtual exhibition hall is based on the architectural layout of Jiangnan water-town architecture and Suzhou garden architecture, with their "tiled walls" and "a new scene at every step," so that visitors entering the virtual exhibition hall feel as if they were entering the pleasant scenery of a Jiangnan water town or a Suzhou garden, and can more truly feel the regional culture in which traditional ceramic art originated, was inherited, and developed. In terms of spatial layout, the scene is divided into four parts: the history of traditional pottery, classic pottery works, master potters, and pottery production techniques, so that the audience can either visit each part along the established route or independently choose what to visit according to their own interests. In addition, interactive areas are laid out according to the more than ten production procedures of traditional pottery, so that the audience can use VR equipment to enter the virtual scene, observe the traditional pottery production process more realistically and comprehensively, and even participate in it.

Display design

Three-dimensional, immersive display is one of the main functions of the traditional pottery virtual exhibition hall. According to the attributes of the displayed content, it can be divided into three parts: graphic display, video display, and three-dimensional display. The graphic display, mainly through text and picture layout design, presents the history of traditional ceramic art, famous makers, and classic works, which the audience can browse during the visit through clicks, collisions, and other simple interactions. The video display consists mainly of existing film and television materials presented through slideshows, video playback, and other forms, so that the audience can learn about traditional pottery culture as if watching a movie.

The three-dimensional display uses 3D scanning technology to digitize some exquisite traditional ceramic works, or creates 3D models with modeling tools such as Maya, 3ds Max, and C4D, and uses the 3D display functions of the virtual engine so that the audience can carefully observe, appreciate, and identify traditional ceramic works from all angles, feeling the beauty of traditional ceramic modeling, patterns, materials, and production processes.

Interaction design

The interaction design of the traditional pottery virtual exhibition hall mainly uses real-time, dynamic, three-dimensional realistic images to engage the audience's hearing, touch, force, movement, smell, and other perceptions, and then responds to head rotation, gaze, gestures, and other human behaviors with the help of sensing devices such as VR glasses, VR handles, and VR helmets, forming interaction with a stronger sense of immersion and interactivity. Based on the content of traditional pottery culture and the key points of traditional pottery skills, the interaction design includes conventional interactions such as clicking, collision, and switching, as well as interactions with stronger sensory experience such as tactile feedback, eye tracking, gesture tracking, direction tracking, and voice interaction. For example, in the interaction design for experiencing traditional pottery textures, patterns, and materials, traditional collision interaction is combined with gesture tracking and other interaction modes: the audience can activate different styles of materials and patterns through mouse clicks or handle ray collisions, and, with the help of the VR handle and other professional interaction equipment, use data positioning and hand tracking to realize pattern design and material replacement more realistically, intuitively, and quickly. This stronger interactivity, fun, and immersion gives the audience a deeper understanding of traditional pottery culture and skills.

Virtual scene-based display of traditional ceramic art
Presentation of design results

The micro-evaluation of the ceramic display space from the perspective of the modern virtual scene is the audience's evaluation of the design practice results in terms of spatial visual elements. Audience groups scored the spatial design effect of each area in four respects: modeling form, color matching, material use, and visual style, making the results of the spatial design practice of the ceramic museum display scene clearer through scores. Each item is scored out of ten: 6 points or below indicates that the design performance of the corresponding space is poor and needs further adjustment; 6-8 points indicates good performance; 8 points or above indicates excellent performance that can serve as a reference. To test the strengths and weaknesses of the design program's presented effect, 10 ceramic pavilion exhibitor staff, 10 exhibition visitors, and 10 designers were invited to score the display space of the ceramic pavilion design practice results; each score is the average of the 30 ratings, and the statistics are shown in Figure 3.

Figure 3.

Design results score statistics

In modeling form, the anteroom area averaged 8.265 points, the experience area 8.425 points, and the end-of-hall area 8.945 points, indicating that the modeling design of these three areas is excellent and can serve as a design reference. In color matching, the front hall area averaged 5.569 and the aisle area 5.765, indicating that these two areas have poorer color matching effects and need further adjustment. In material use, the overall scores of all areas are stable, with the atrium transition area using materials most appropriately. In visual style, the end-of-hall and experience areas received the highest ratings, indicating that their visual style best matches the characteristics of a traditional ceramic display space.

Survey on Audience Experience in Traditional Pottery Virtual Museum Scene
Evaluation of traditional ceramic scene design

The SD method (Semantic Differential method) is a psychometric method in which the psychological feelings of subjects are measured through language, yielding quantitative data on the subjects' feelings.

The test subjects comprised 15 females and 15 males, mainly aged 18 to 60. Thirty questionnaires were distributed and all 30 were recovered, an effective recovery rate of 100%. From the statistics of the collected questionnaire data, the SD score table of the 5 samples was obtained, as shown in Table 1. The SD score in the table is the average score of each evaluation factor across all valid questionnaires; larger values mean scores closer to the favorable adjective on the right, and smaller values mean scores closer to the unfavorable adjective on the left.

Table 1. SD scores for 5 samples

Index                          Adjective pair            Sample 1  Sample 2  Sample 3  Sample 4  Sample 5  Average
Colour                         Humdrum—Abundant            0.126     0.179     0.452     0.296     0.348    0.2802
Visual element                 Rough—Exquisite             0.125     0.148     0.125    -0.053    -0.062    0.0566
Interface layout               Clutter—Orderly            -0.015     0.246     0.245     0.042     0.315    0.1666
Attraction                     None—Yes                   -0.085     0.065     0.123     0.266     0.215    0.1168
Ease of use                    Hard—Easy                   0.031     0.052     0.012     0.052     0.269    0.0832
User guidance design           Fuzzy—Definite              0.052    -0.135     0.089     0.179     0.052    0.0474
Information display hierarchy  Complex—Intuitive          -0.165     0.125     0.125     0.266     0.268    0.1238
Operating sense                Disconnected—Coherent      -0.031    -0.052     0.236     0.215     0.045    0.0826
Operational feedback           Single—Abundant             0.141    -0.125     0.098    -0.035     0.052    0.0262
System function                Deficient—Complete         -0.245     0.286     0.095     0.052     0.032    0.044
Information richness           Lack—Abundant              -0.293     0.053     0.153     0.268    -0.186   -0.001
Display effect                 Inaccurate—Truthful        -0.021     0.054     0.327     0.378    -0.186    0.1104
Tour route                     Zigzag—Smooth              -0.079     0         0.155     0.265    -0.023    0.0636

The mean SD scores of the 13 evaluation factors are concentrated in the range of -0.001 to 0.2802, indicating that the evaluators are broadly satisfied with the overall experience quality of the five samples. Among the virtual pottery scene design factors, the color factor receives a higher mean score than any other evaluation factor, indicating that respondents are most satisfied with the perceived color design of the samples. Among the interaction design factors, operational feedback scores lowest, with respondents reporting that the samples' operation feedback is too homogeneous. Among the exhibition design factors, the richness of ceramic exhibit information scores -0.001, the lowest overall, indicating that respondents are least satisfied with the richness of the samples' exhibit information.
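The aggregation described above can be sketched in a few lines: each factor's SD score is simply the mean of the five samples' ratings. The values below are copied from two rows of Table 1 for illustration; the full dataset is not reproduced.

```python
# Minimal sketch of the SD score aggregation: a factor's overall score
# is the mean of its ratings across the five evaluated samples.
sd_scores = {
    "Colour": [0.126, 0.179, 0.452, 0.296, 0.348],
    "Information richness": [-0.293, 0.053, 0.153, 0.268, -0.186],
}

def factor_mean(ratings):
    """Average a factor's SD ratings across all evaluated samples."""
    return round(sum(ratings) / len(ratings), 4)

for factor, ratings in sd_scores.items():
    print(factor, factor_mean(ratings))
# Colour -> 0.2802, Information richness -> -0.001 (matching Table 1)
```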

The SD score curves plotted from the data in the SD score table are shown in Figure 4, allowing a visual comparison of the differences in the five samples' scores across the 13 evaluation factors. The curve lying furthest to the right overall (the highest score) belongs to ceramic sample 3, with a mean score of 0.2812.

Figure 4.

SD score curve

Sample 3's functional architecture is clear at a glance, its system well organized, and its interface layout appropriately spaced and rhythmic. Its scores on the color and exhibit display effect factors are significantly higher than those of the other samples: Sample 3 retains the traditional colors unique to traditional ceramics, and this distinctive color style reflects traditional ceramic characteristics. The design of the whole traditional pottery display scene fully embodies the principle of situational experience; its visuals and interactions are exquisite and efficient, and it combines audio, video, descriptive information, and real-scene reproduction to present a realistic digital exhibition. This gives the audience an immersive experience while reducing the fatigue of prolonged digital device use, enhancing both the pleasure of using the product and the audience's curiosity. However, Sample 3's extensive exhibition content and highly functional architecture raise the cost of use for viewers. The sample with the lowest overall curve (lowest score) is Sample 1, the Traditional Ceramics Virtual Showroom, with an average score of 0.02654. Sample 1 exhibits only a single type of content, a museum real-scene reproduction; the whole digital exhibition has low interactivity, low interest, weak attraction, and relatively limited functionality. Subsequent scene designs should therefore enhance the interactivity between the pottery and the audience.

Traditional Ceramic Scene User Satisfaction

Fig. 5 shows user satisfaction with the traditional ceramic scene space, compiled from the satisfaction questionnaire results; most of the index evaluation results are positive. The indicators in the figure are D1 (scene scale suitability), D2 (acceptability), D3 (scene space utilization), D4 (scene space enclosure), D5 (aesthetic value of traditional ceramic technology), D6 (scene construction perfection), D7 (pottery tool perfection), D8 (continuity), D9 (traditional cultural activity), D10 (traditional cultural environment dependence), D11 (activity frequency), D12 (cultural atmosphere), D13 (social influence), D14 (operation management), D15 (mechanism soundness), D16 (willingness to participate in activities), D17 (participation in activities), D18 (support), and D19 (cultural identity). The average value across the 19 user satisfaction indicators was 23.6316, and the scores of the individual indices ranged from 13 to 35.

Figure 5.

Customer satisfaction of traditional ceramic scene space

By evaluating the various sub-element level indicators in the previous section, the final scores are calculated and plotted together with the SD mean semantic distribution, and the final results are shown in Figure 6.

Positive factor analysis

Positive factors account for the majority of the overall factors for the traditional ceramic art virtual scene space, and their absolute values are generally high. Activity frequency reaches the maximum satisfaction state, with a factor evaluation of 2: the traditional ceramic art virtual scene designed in this paper raises the audience's activity frequency and has won the audience's recognition. Compared with the above indicators, however, scene scale suitability, traditional cultural activity, operation management, mechanism soundness, and cultural identity appear slightly inferior, with factor evaluations of 0.245, 0.496, 0.615, 0.079, and 0.344 respectively. The reason is that the mechanism soundness score is strongly related to the audience's degree of cultural identity: many audience members see fewer parts of the ceramic cultural value and do not strongly identify with the cultural environment they are in.

Negative factor analysis

There are only seven negative factor indicators for the virtual scene space of traditional ceramic art: acceptability, scene space enclosure, aesthetic value of traditional ceramic craftsmanship, scene construction perfection, willingness to participate in activities, participation in activities, and support, with factor evaluations of -1.056, -0.765, -0.562, -0.945, -0.645, and -0.254. These are the difficulties that future optimization design research needs to resolve.

Figure 6.

The average semantic distribution of traditional ceramic art scene space

IPA data analysis

In this section, IPA analysis is combined with the SD data from the previous section: the quantitative SD data and the corresponding weight data are transformed into a qualitative graphical language, enabling a more accurate classification of the 32 evaluation factors and providing a basis for the strategy proposals and optimization design of the later sections. The traditional ceramic scene evaluation and user satisfaction indicators are summarized, with the scene evaluation indicators denoted A1-A13.

This paper uses the constructed IPA model to categorize and analyze the traditional ceramic scene evaluation and user satisfaction indicators respectively. The reference value for importance on the horizontal axis is 0.031; the reference value for satisfaction on the vertical axis differs by sample, being the mean of each sample's user satisfaction scores.
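The quadrant classification that the IPA model performs can be sketched as follows. The thresholds follow the text (importance reference 0.031; satisfaction reference set to the SD score mean); the example indicator values passed in are hypothetical, for illustration only.

```python
# Sketch of IPA quadrant classification: each indicator is placed in
# one of four zones by comparing its importance and satisfaction
# against the reference values given in the text.
def ipa_zone(importance, satisfaction, imp_ref=0.031, sat_ref=0.4126):
    """Assign an indicator to IPA zones I-IV.

    I  : important and satisfying         -> maintain the advantage
    II : less important, still satisfying -> keep as-is
    III: less important, less satisfying  -> secondary optimization
    IV : important but unsatisfying       -> primary optimization target
    """
    if importance >= imp_ref:
        return "I" if satisfaction >= sat_ref else "IV"
    return "II" if satisfaction >= sat_ref else "III"

# Hypothetical indicator values (not the measured data):
print(ipa_zone(0.05, 0.60))  # zone I
print(ipa_zone(0.05, 0.20))  # zone IV
```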

The reference value for satisfaction on the vertical axis of the traditional ceramic art scene space is the SD score mean, 0.4126. Figure 7 shows user satisfaction with the traditional ceramic scene; its 19 sub-factor indicators are distributed as follows. Zone I contains 3 indicators, D3 (scene space utilization), D9 (traditional cultural activity), and D10 (traditional cultural environment dependence); these are considered important by the experts and relatively satisfying by users, and optimization strategies should focus on maintaining their advantages. Zone II contains 7 indicators that experts consider relatively unimportant but that still achieve high user satisfaction; these are strengths that users of traditional ceramic scenes are satisfied with, and it suffices to maintain them. Zone III contains 5 indicators of lower importance and relatively low user satisfaction; these are secondary targets for optimization and enhancement. Zone IV contains 4 indicators that experts consider important but that have lower user satisfaction; these are the primary targets of the optimization strategy.

Figure 7.

Traditional ceramic scene user satisfaction

Figure 8 shows the evaluation of the traditional ceramic scene, categorizing its 13 indicators. Zones I and II contain a total of 4 indicators; these are strengths of the traditional ceramic art scene space and need only be maintained. Zone III contains 2 indicators of lower importance for which the traditional ceramic art scene space also scores relatively low; these are secondary targets for optimization and enhancement. Zone IV contains 3 indicators, A5 (ease of use), A7 (information display hierarchy), and A10 (system function); these 3 are the primary targets of the optimization strategy.

Figure 8.

Traditional ceramic scene evaluation

Traditional pottery production in the scene

The audience uses Leap Motion to enter the 3D scene and learns traditional pottery hand movements through gesture interaction. Figure 9 shows the results of traditional pottery gesture learning; the indicators in the figure are the gesture variables of pottery production. Among the 30 audience members, most learning results score 5 points. The downward-pressing and upward-lifting movements have the largest number of 5-point scores, 16 people each, so more than 50% of participants achieved full marks for these movements. This indicates that the design of the pottery interaction system helps the audience understand the production process of traditional pottery.

Figure 9.

The traditional ceramic gesture learning results

The introductory information on exhibits in the virtual traditional ceramic scene is an indispensable element of the virtual ceramic museum. It is therefore necessary to analyze the importance of ceramic exhibit information from the viewer's perspective; Table 2 shows the importance scores of ceramic exhibit information. The results show that historical background is the information the audience most wants to view, with an average importance score of 4.2, followed by the size and appearance of the traditional ceramics and their usage scenarios. Based on the survey results, the modules the audience would like to see can be added as appropriate to enhance satisfaction of use.

Importance scores of ceramic exhibit information

| Options | 1 | 2 | 3 | 4 | 5 | Mean |
|---|---|---|---|---|---|---|
| Size | 3 (10.00%) | 2 (6.67%) | 7 (23.33%) | 8 (26.67%) | 10 (33.33%) | 3.67 |
| Appearance | 2 (6.67%) | 5 (16.67%) | 6 (20.00%) | 5 (16.67%) | 12 (40.00%) | 3.67 |
| Production process | 4 (13.33%) | 7 (23.33%) | 5 (16.67%) | 3 (10.00%) | 11 (36.67%) | 3.33 |
| Usage scenario | 3 (10.00%) | 5 (16.67%) | 4 (13.33%) | 8 (26.67%) | 10 (33.33%) | 3.57 |
| Historical background | 1 (3.33%) | 2 (6.67%) | 4 (13.33%) | 6 (20.00%) | 17 (56.67%) | 4.20 |
| Subtotal | 13 (43.33%) | 21 (70.00%) | 26 (86.67%) | 30 (100.00%) | 60 (200.00%) | 3.69 |
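The mean importance scores above are weighted means of the 1–5 ratings over the 30 respondents, and can be recomputed directly from the response counts in Table 2 (two rows shown for illustration):

```python
# Recompute Table 2 mean importance scores: weighted mean of the 1-5
# Likert ratings, given the number of respondents choosing each score.
counts = {
    "Size":                  [3, 2, 7, 8, 10],   # respondents scoring 1..5
    "Historical background": [1, 2, 4, 6, 17],
}

def likert_mean(freqs):
    """Weighted mean of a 1-5 Likert item from per-score counts."""
    total = sum(freqs)
    return round(sum(score * n for score, n in enumerate(freqs, start=1)) / total, 2)

for item, freqs in counts.items():
    print(item, likert_mean(freqs))
# Size -> 3.67, Historical background -> 4.2 (matching Table 2)
```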
Conclusion

In this paper, a virtual pottery scene is reconstructed using three-dimensional reconstruction technology, and a gesture interaction system is designed within it to realize interaction between the audience and traditional pottery production. The traditional pottery virtual museum scene is designed from three aspects: scene, display, and interaction design. The space of the traditional pottery virtual scene is analyzed in terms of visual elements: the average scores of the anteroom area, the experience area, and the end-hall area are 8.265, 8.425, and 8.945, indicating that the morphological modeling schemes of these three areas are well designed. Using the semantic differential method, quantitative data on subjects' feelings were obtained; the audience's evaluations of the traditional ceramic scene are concentrated between -0.001 and 0.2802, indicating that the evaluators are broadly satisfied with the overall experience quality of the five samples. Using IPA analysis, the SD data from the previous section were transformed into a qualitative graphical language. With the satisfaction reference value on the vertical axis set to the SD score mean of 0.4126, three indicators are both considered important by experts and relatively satisfying to users: D3 (scene space utilization), D9 (traditional cultural activity), and D10 (traditional cultural environment dependence).
