Analysis of the Application of Artificial Intelligence Technology in the Digital Processing of Traditional Visual Art Elements
Published Online: Jun 05, 2025
Received: Dec 26, 2024
Accepted: Apr 22, 2025
DOI: https://doi.org/10.2478/amns-2025-1057
© 2025 Jing Tang, published by Sciendo.
This work is licensed under the Creative Commons Attribution 4.0 International License.
With the development of digital technology, the art field has formed digital art creation platforms, which further stimulate artists' creative enthusiasm and are gradually changing the public's perception of art [1-2]. Driven by advances in science and technology, the art field has entered the digital information age. Contemporary visual artists adopt new concepts and means of expression, seeking to reshape artistic forms through digital technology and to display artworks in digital ways [3-5].
The biggest difference between digital visual art and traditional art is that the former is the product of virtual reality, technological experimentation, interactive experience, and mechanical reproduction; its forms can be viewed and appreciated but are hard to touch [6-7]. Driven by digital technology, contemporary art, represented by visual art, has undergone many formal changes, mainly reflected in the integration and reconstruction of tradition and technology, as well as in abstract transformations that discard the old and embrace new technology [8-10].
Nowadays, the relationship between the forms of visual art and digital technology is ever closer, and it is difficult for artists to eliminate digital elements entirely, whether in the creation process or in dissemination and display. The mutual integration of technology and art has become an important characteristic of the development of contemporary visual art [11-12]. The value of contemporary visual art creation is thus reflected not only in the works themselves but also in the way art is produced. While viewing the works, people tend to realize that technology has changed not only the shape of their lives but also the shape of art [13-14]. The intervention of digital technology in contemporary visual art creation manifests itself on several levels; for artists, digital technology is a factor they must consider, both when conceiving a work and when choosing materials and methods [15-16].
The application of Artificial Intelligence (AI) technology in the field of visual arts shows an unprecedented trend of innovation and change [17]. In the digital era, artists are able to create colorful and vibrant visual works with the help of AI technology [18]. AI not only simulates the creative process of human artists, but also analyzes a large number of art works through deep learning and neural networks, extracts creative styles and techniques from them, and then recombines and innovates these elements to create a completely new art form [19-20]. In this context, artificial intelligence systems such as DeepArt and Prisma bring unlimited possibilities to visual art by algorithmically transforming photos into art images with specific styles. In addition, some artists have collaborated with engineers to develop AI-based interactive installation art [21-23]. These installations can not only respond to the audience’s movements and emotions in real time, but also evolve and adjust their own expressions according to the audience’s feedback, thus providing a more personalized and interactive art experience [24-25]. With the continuous development of technology, the application of artificial intelligence in visual art not only enriches the means of artistic expression, but also challenges the boundaries and definitions of traditional art creation, enabling artists to break through their own limitations and explore unknown artistic fields, thus promoting the diversification and modernization of visual art [26-27].
In this paper, several typical generative adversarial networks, namely GAN, WGAN, DCGAN and CGAN, are reviewed. On this basis, in order to create high-quality works from traditional visual art elements, a generative adversarial network based on asymmetric cyclic consistency is proposed to realize artistic style migration between real natural images and traditional visual art element images. The network is equipped with generators of different conversion abilities and discriminators of different judgment abilities; the two discriminators judge the authenticity of the generated real photos and the generated traditional visual art element images, respectively. In addition, a saliency edge extractor in the network extracts the saliency edge maps of the images, and a saliency edge loss is introduced to constrain the subject edges of traditional visual art element images, so that the network learns the subject style of these images. The effectiveness of the improved network is verified in comparative experiments, and aesthetic experience data are analyzed to explore effective strategies for improving the audience's aesthetic experience.
The framework of the GAN [28] deep learning algorithm draws on the idea of a two-person zero-sum game. In this algorithm, the generative model $G$ maps a noise vector $z \sim p_z(z)$ to a generated sample $G(z)$, while the discriminative model $D$ outputs the probability that its input comes from the real data distribution $p_{data}(x)$ rather than from the generator.

When the input is a true sample, the output of the discriminator network should be close to 1; when the input is a false sample, the output should be close to 0. Therefore, the loss function (or cost function) of the discriminator network is:

$$\max_D V(D)=\mathbb{E}_{x\sim p_{data}(x)}[\log D(x)]+\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]$$

where $x$ denotes a true sample drawn from the data distribution and $z$ denotes the noise vector fed to the generator.

In order for the discriminator to make as many errors as possible, the generator $G$ is trained to minimize $\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]$, i.e., to make the generated samples be judged as real. The training of a generative adversarial network is thus a minimax game, and its optimization requires maximizing the loss function over $D$ while minimizing it over $G$:

$$\min_G \max_D V(D,G)=\mathbb{E}_{x\sim p_{data}(x)}[\log D(x)]+\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]$$
Assuming that the distribution of the generated samples is denoted $p_g(x)$, with the generative model given (held fixed), it can be shown that the optimal discriminator at this point is:

$$D^*(x)=\frac{p_{data}(x)}{p_{data}(x)+p_g(x)}$$

To address the problem of vanishing gradients during early training, the generator is trained using the non-saturating objective $\max_G \mathbb{E}_{z\sim p_z(z)}[\log D(G(z))]$ instead of minimizing $\log(1-D(G(z)))$.
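As an illustrative sketch (not the paper's code), the optimal-discriminator formula $D^*(x)=p_{data}(x)/(p_{data}(x)+p_g(x))$ can be checked numerically: at any point $x$, the discriminator objective density $p_{data}\log D + p_g\log(1-D)$ is maximized at that value.

```python
import numpy as np

# Numerically verify that D*(x) = p_data(x) / (p_data(x) + p_g(x)) maximizes
# the pointwise discriminator objective p_data*log(D) + p_g*log(1 - D).

def pointwise_objective(d, p_data, p_g):
    """Discriminator objective density at a single point x."""
    return p_data * np.log(d) + p_g * np.log(1.0 - d)

p_data, p_g = 0.7, 0.3          # example density values at some point x
d_grid = np.linspace(0.01, 0.99, 999)
best_d = d_grid[np.argmax(pointwise_objective(d_grid, p_data, p_g))]
d_star = p_data / (p_data + p_g)

print(round(best_d, 2), round(d_star, 2))  # both ≈ 0.70
```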
When the generator loss function described above is used, the optimization of the generator becomes a problem of simultaneously minimizing the KL divergence between the generated distribution and the true distribution while maximizing their JS divergence, which can cause gradient instability. To solve this problem, Wasserstein GAN (WGAN) [29] was proposed. Its core idea is to replace the cost function of the native network, based on KL and JS divergence, with a loss function based on the Wasserstein distance:

$$\min_G \max_{D\in\mathcal{D}} \mathbb{E}_{x\sim P_r}[D(x)]-\mathbb{E}_{z\sim p_z(z)}[D(G(z))]$$

where the real data distribution is denoted by $P_r$, $\mathcal{D}$ is the set of 1-Lipschitz functions, and the critic $D$ outputs a real-valued score rather than a probability.
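For intuition about the quantity WGAN's critic approximates, the following hedged sketch computes the empirical Wasserstein-1 distance between two 1-D samples (for equal-size samples it reduces to the mean absolute difference of the sorted values); this illustrates the distance itself, not WGAN training.

```python
import numpy as np

# Empirical Wasserstein-1 distance between equal-size 1-D samples:
# sort both samples and take the mean absolute difference.

def wasserstein_1d(x, y):
    x, y = np.sort(x), np.sort(y)
    return float(np.mean(np.abs(x - y)))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 10000)   # samples from the "real" distribution
fake = rng.normal(2.0, 1.0, 10000)   # samples from a shifted "generated" one
d = wasserstein_1d(real, fake)
print(round(d, 1))  # ≈ 2.0, since the distributions differ by a shift of 2
```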
The Deep Convolutional Generative Adversarial Network (DCGAN) replaces the fully connected networks in the native GAN with convolutional neural networks and eliminates all pooling layers. Transposed convolution is used in the generator to perform up-sampling, strided convolution replaces pooling in the discriminator, and batch normalization is applied in all layers except the output layer of the generator and the input layer of the discriminator, which supports deep neural network training. The tendency of generative networks toward mode collapse is also alleviated to some extent. For the activation function of the generator's output layer, DCGAN uses tanh, while the other generator layers use ReLU; in the discriminator network, LeakyReLU is chosen as the activation function. This design achieves a better generation effect and has a clear advantage over other network models when handling high-resolution images.
The original GAN is an unsupervised network model in which the generator's input is random noise, so the modality of the output data is uncontrollable. To solve this problem, Conditional GAN (CGAN) was proposed: category conditions are added to both the generator and the discriminator, providing a better representation for multimodal data generation.
During training, CGAN adds auxiliary information $y$ (such as a class label) to the inputs of both the generator and the discriminator, giving the objective:

$$\min_G \max_D V(D,G)=\mathbb{E}_{x\sim p_{data}(x)}[\log D(x\mid y)]+\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z\mid y)))]$$

This cost function is closely related to the input information $y$: by conditioning on $y$, the modality of the generated data becomes controllable.
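A framework-agnostic sketch of the conditioning mechanism: the class label is one-hot encoded and concatenated with the noise vector, so the generator input carries the category condition. The dimensions here are illustrative assumptions.

```python
import numpy as np

# CGAN-style conditional input: concatenate noise z with a one-hot label y,
# so the generator can be steered toward a chosen category.

def conditional_input(z, label, num_classes):
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return np.concatenate([z, one_hot])

z = np.random.default_rng(0).normal(size=100)      # 100-dim noise vector
g_in = conditional_input(z, label=3, num_classes=10)
print(g_in.shape)  # (110,): 100 noise dims + 10 condition dims
```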
Drawing on the design ideas of the DCGAN and VGG network models, a real-time fast image style migration method is proposed. The model contains two networks: an image transformation network and a loss network. The input image $x$ is mapped by the image transformation network to an output image $\hat{y}$, which is then fed, together with the content target and the style target, into the loss network to compute the training losses.
The VGG network is trained on the ImageNet dataset, and the resulting pre-trained model constitutes the loss network. Let $\phi_j(x)$ denote the activations of the $j$-th layer of the loss network for input $x$, with shape $C_j \times H_j \times W_j$. The feature reconstruction loss penalizes differences between the features of the output image $\hat{y}$ and those of the content target $y_c$.

The feature reconstruction loss is:

$$\ell_{feat}^{\phi,j}(\hat{y},y_c)=\frac{1}{C_j H_j W_j}\left\|\phi_j(\hat{y})-\phi_j(y_c)\right\|_2^2$$

where $C_j$, $H_j$ and $W_j$ are the channel, height and width dimensions of the $j$-th feature map. The style reconstruction loss is the squared Frobenius norm of the difference between the Gram matrices of the output image and the style target $y_s$:

$$\ell_{style}^{\phi,j}(\hat{y},y_s)=\left\|G_j^{\phi}(\hat{y})-G_j^{\phi}(y_s)\right\|_F^2$$

where the Gram matrix $G_j^{\phi}(x)$ is the $C_j \times C_j$ matrix whose elements are:

$$G_j^{\phi}(x)_{c,c'}=\frac{1}{C_j H_j W_j}\sum_{h=1}^{H_j}\sum_{w=1}^{W_j}\phi_j(x)_{h,w,c}\,\phi_j(x)_{h,w,c'}$$
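The Gram-matrix style loss can be sketched in a few lines of numpy; here `phi_y` and `phi_yhat` are random stand-ins for VGG feature maps of shape (C, H, W), not real network activations.

```python
import numpy as np

# Gram matrix and style loss in the perceptual-loss formulation:
# the Gram matrix captures channel co-activation statistics ("style"),
# and the style loss is the squared Frobenius norm of the difference.

def gram_matrix(feat):
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)        # (C, C), normalized

def style_loss(feat_gen, feat_target):
    g1, g2 = gram_matrix(feat_gen), gram_matrix(feat_target)
    return float(np.sum((g1 - g2) ** 2))  # squared Frobenius norm

rng = np.random.default_rng(0)
phi_y = rng.normal(size=(8, 16, 16))    # hypothetical target feature map
phi_yhat = rng.normal(size=(8, 16, 16)) # hypothetical generated feature map
print(style_loss(phi_y, phi_y), style_loss(phi_yhat, phi_y) > 0.0)
```

Note that the Gram matrix discards spatial arrangement entirely, which is why the style loss constrains texture and stroke statistics rather than content layout.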
The real-time image style migration algorithm in this paper is built on a convolutional neural network. Although the generated images are of slightly lower quality, once the image transformation network has been trained, each input image requires only a single forward pass through the network, rather than a full round of neural network optimization per image, so generation speed is improved by more than 200 times.
In order to quantitatively analyze the information-richness asymmetry in the traditional visual art element style migration task, we calculated the average image entropy of different image domains (a larger entropy value indicates higher information richness). The HSI color space comprises hue, saturation and intensity channels: the image entropy of the first two channels effectively reflects the color information of the image, the entropy of the intensity channel effectively reflects its content information, and the individual channels can be related to the stylistic characteristics of painting-art images more effectively. Therefore, using image entropy computed in the HSI color space to reflect the information richness of the traditional visual art element image domain is reasonable and effective.
Considering that an image domain contains several images, we take the average image entropy of all images in the domain as the image entropy value of that domain. For a single channel whose gray levels occur with probabilities $p_i$, the image entropy is defined as:

$$H=-\sum_{i=0}^{255} p_i \log_2 p_i$$

where $p_i$ is the proportion of pixels with gray level $i$. The image entropy of an image domain $X$ containing $N$ images is then:

$$H(X)=\frac{1}{N}\sum_{n=1}^{N} H_n$$

where $H_n$ is the image entropy of the $n$-th image in the domain.
In order to measure the image entropy difference between the two image domains in an image conversion task more intuitively, we calculate the information entropy ratio between the two image domains, defined as:

$$R(X,Y)=\frac{H(X)}{H(Y)}$$

where $H(X)$ and $H(Y)$ denote the image entropies of the source and target domains, respectively; the further $R$ deviates from 1, the more asymmetric the information richness of the two domains.
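The entropy measure above can be sketched as follows; the 256-bin histogram over 8-bit gray levels is an assumption about the binning, and the images are synthetic stand-ins rather than data from the paper.

```python
import numpy as np

# Shannon image entropy over a channel's intensity histogram, averaged over
# an image domain, plus the entropy ratio between two domains.

def image_entropy(channel, bins=256):
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                         # 0*log(0) terms contribute nothing
    return float(-np.sum(p * np.log2(p)) + 0.0)

def entropy_ratio(domain_a, domain_b):
    h_a = np.mean([image_entropy(img) for img in domain_a])
    h_b = np.mean([image_entropy(img) for img in domain_b])
    return h_a / h_b                     # ratio of the two domains' entropies

rng = np.random.default_rng(0)
rich = [rng.integers(0, 256, (64, 64)) for _ in range(3)]  # high-entropy images
flat = [np.full((64, 64), 128) for _ in range(3)]          # single-value images
print(image_entropy(flat[0]))      # 0.0 bits: no information
print(entropy_ratio(rich, rich))   # 1.0: perfectly symmetric domains
```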
In order to better measure the information asymmetry of the traditional visual art element style migration task, we calculated the information entropy ratios of several generic image conversion tasks simultaneously. It is found that the information entropy ratios of both the traditional visual art element style migration task and the asymmetric image conversion task are larger than the information entropy ratio of the symmetric image conversion task, which suggests that the traditional visual art element style migration task is characterized by the asymmetry of the domain information richness, and it can be regarded as an asymmetric image conversion task.
The traditional visual art element style migration network proposed in this chapter, based on an asymmetric cyclic consistency structure, consists of two generators, two discriminators and a saliency edge extractor (SEE).

Generators. Based on the symmetric cyclic consistency structure of CycleGAN [30], we improve the generator structure in a targeted way so that the two generators have different conversion abilities, matching the asymmetric information richness of the two image domains.

Discriminators. The main role of a discriminator is to distinguish generated images from real images; the two discriminators judge the authenticity of the generated real photos and the generated traditional visual art element images, respectively. We use a 70×70 PatchGAN as the discriminator, which helps the model attend to image details more than the original discriminator: each patch yields a real/fake probability, and the mean value of the whole probability matrix is used as the authenticity output.

Saliency edge extraction module. To simulate the stylistic characteristic of prominent body strokes in traditional visual art elements, we use saliency edges, obtained by extracting the edges of the salient subjects in the image, to represent the body strokes. The module includes a saliency detection part and an edge detection part. A pre-trained PFAN network detects the region mask of the salient object, which effectively represents the main body of the painting, and the HED edge extraction network extracts the image edges, which helps simulate brush strokes of different thicknesses. The saliency edge map obtained from this module is used to compute the saliency edge loss.
Based on the original loss of CycleGAN, we introduce feature-level based cyclic consistency loss and saliency edge loss.
Adversarial loss. The adversarial loss is used to optimize the generator and the discriminator, improving the generative ability of the former and the discriminative ability of the latter. For style migration from the real natural image domain $X$ to the traditional visual art element image domain $Y$, the adversarial loss constrains the model to generate output images closer to traditional visual art element images:

$$\mathcal{L}_{adv}(G,D_Y)=\mathbb{E}_{y\sim Y}[\log D_Y(y)]+\mathbb{E}_{x\sim X}[\log(1-D_Y(G(x)))]$$

where $G$ is the generator from $X$ to $Y$ and $D_Y$ is the discriminator on domain $Y$; a symmetric term is defined for the reverse direction with generator $F$ and discriminator $D_X$.
Identity loss. The identity loss helps the model avoid meaningless transformations and constrains the output image to keep the same color distribution as the input image:

$$\mathcal{L}_{id}(G,F)=\mathbb{E}_{y\sim Y}\left[\|G(y)-y\|_1\right]+\mathbb{E}_{x\sim X}\left[\|F(x)-x\|_1\right]$$

where $F$ is the generator from $Y$ back to $X$: a generator given an image already from its target domain should return it unchanged.
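A minimal numpy sketch of an L1 identity loss of the kind described above; `identity_map` and `shifted` are hypothetical stand-ins for a generator, not the paper's networks.

```python
import numpy as np

# L1 identity loss: a generator fed an image already from its target domain
# should return it unchanged; any deviation is penalized per pixel.

def identity_loss(generator, y):
    return float(np.mean(np.abs(generator(y) - y)))

identity_map = lambda img: img          # perfect identity generator
shifted = lambda img: img + 0.1         # generator that shifts the colors

y = np.random.default_rng(0).random((3, 32, 32))
print(identity_loss(identity_map, y))          # 0.0
print(round(identity_loss(shifted, y), 4))     # ≈ 0.1
```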
Cyclic consistency loss based on feature level. The pre-trained VGG network is used to extract deep features, and the cyclic consistency constraint is imposed in feature space rather than pixel space:

$$\mathcal{L}_{cyc}^{feat}(G,F)=\mathbb{E}_{x\sim X}\left[\|\phi(F(G(x)))-\phi(x)\|_1\right]+\mathbb{E}_{y\sim Y}\left[\|\phi(G(F(y)))-\phi(y)\|_1\right]$$

where $\phi(\cdot)$ denotes the features extracted by the pre-trained network. Compared with pixel-level cyclic consistency, this relaxes the requirement of exact reconstruction, which suits the asymmetric conversion task.
Saliency edge loss. The creation of traditional visual art elements focuses on emphasizing the main body of the scene, which is usually achieved by strengthening the strokes at the edges of the salient body. We therefore propose a saliency edge loss to simulate this stylistic characteristic, in which the main body is emphasized by heavy strokes. First, the real natural image and the generated traditional visual art element image are input into the saliency edge module; the resulting saliency subject edge maps are then compared with a balanced cross-entropy loss:

$$\mathcal{L}_{edge}(G)=\mathrm{BCE}\big(\mathrm{SEE}(G(x)),\,\mathrm{SEE}(x)\big)$$

where $\mathrm{SEE}(\cdot)$ denotes the saliency edge map produced by the extraction module and $\mathrm{BCE}$ the balanced cross-entropy.
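The balanced cross-entropy between two saliency edge maps can be sketched as below, in the spirit of HED's class-balanced edge loss; the exact weighting scheme is an assumption here, not the paper's formula.

```python
import numpy as np

# Class-balanced cross-entropy between saliency edge maps: edge pixels are
# rare, so they are upweighted by beta = fraction of non-edge pixels.
# Both maps hold probabilities in [0, 1].

def balanced_edge_loss(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1.0 - eps)
    beta = 1.0 - target.mean()           # upweight the rare edge pixels
    pos = -beta * target * np.log(pred)
    neg = -(1.0 - beta) * (1.0 - target) * np.log(1.0 - pred)
    return float(np.mean(pos + neg))

target = np.zeros((32, 32)); target[16, :] = 1.0   # one thin edge line
good = np.clip(target, 0.02, 0.98)                  # near-perfect prediction
bad = np.full((32, 32), 0.5)                        # uninformative prediction
print(balanced_edge_loss(good, target) < balanced_edge_loss(bad, target))
```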
Total target loss. Finally, the total target loss of the model is the weighted sum of the above terms:

$$\mathcal{L}_{total}=\mathcal{L}_{adv}+\lambda_{cyc}\mathcal{L}_{cyc}^{feat}+\lambda_{id}\mathcal{L}_{id}+\lambda_{edge}\mathcal{L}_{edge}$$

where $\lambda_{cyc}$, $\lambda_{id}$ and $\lambda_{edge}$ are weighting hyperparameters. We aim to optimize the following objective function, as detailed in Eq. (20):

$$G^*,F^*=\arg\min_{G,F}\max_{D_X,D_Y}\mathcal{L}_{total}$$
The main methods compared are Bilinear Interpolation (abbreviated BI in the comparison charts), CycleGAN, CGAN, and WGAN. Bilinear interpolation is considered the most natural algorithm for generating pixel art, and many existing pixel-art websites draw on this image-downsampling algorithm. CycleGAN proposes a cyclic consistency loss to achieve unsupervised deep learning, realizing image style migration from unpaired training samples. In this subsection, two evaluation metrics are used to compare the four baseline methods with the model proposed in this paper.
Image quality is quantitatively assessed using the objective evaluation metric SSIM, which structurally analyzes the similarity between the target image and the generated image. In this section, 20 sets of images are selected, each containing the output image of the five methods and the corresponding real pixel image. The SSIM values between the real pixel images and the generated pixel images are calculated using the compare_ssim function in the skimage library (renamed skimage.metrics.structural_similarity in recent versions); the SSIM statistics for the five methods are shown in Fig. 1, and their averages in Table 1. Among the 20 groups of comparison images, most of the SSIM values of this paper's model are higher than those of the other four methods, indicating that the images it generates are more similar to the real pixel images in brightness, contrast and structure. The average SSIM of this paper's model, 0.8859, is the largest, which indicates that its pixel image results are closer to the real pixel image works than those of the other four methods, and that its perceived image quality is higher in most cases.

Comparison method SSIM chart
Comparison method SSIM value table
| Evaluation index | BI | CycleGAN | CGAN | WGAN | This method |
|---|---|---|---|---|---|
| SSIM | 0.8645 | 0.8597 | 0.8054 | 0.5896 | 0.8859 |
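For reference, the SSIM formula can be sketched in numpy; this simplified version computes the statistics globally over the whole image, whereas the library implementation used in the experiments averages over local windows, so the two will not produce identical values.

```python
import numpy as np

# Simplified global SSIM: luminance, contrast and structure are compared via
# the means, variances and covariance of the two float images in [0, 1].

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(0, 0.1, img.shape), 0, 1)
print(round(ssim_global(img, img), 6))   # 1.0 for identical images
print(ssim_global(img, noisy) < 1.0)     # noise lowers the similarity
```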
PSNR evaluates the generated pixel image at the pixel level. The PSNR algorithm is simple and fast to compute, but in some cases its value may be inconsistent with subjective perception. In this section the same 20 groups of images as in the previous experiment are selected, and the PSNR values of each group are calculated for the five methods. The PSNR statistics for the five methods are shown in Fig. 2, and their averages in Table 2. PSNR reflects the pixel-level difference between the generated image and the target image; the larger the value, the better. As seen in the table, the PSNR value of the pixel images generated by the proposed model is 21.59, the largest among all methods, which indicates that this paper's model performs best in terms of peak signal-to-noise ratio, and that its pixel images are closer to the real pixel images at the pixel level than those of the other four methods.

The PSNR values of the images for the five methods
Comparison method PSNR value statistics
| Evaluation index | BI | CycleGAN | CGAN | WGAN | This method |
|---|---|---|---|---|---|
| PSNR | 20.89 | 19.93 | 19.62 | 18.05 | 21.59 |
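The PSNR metric used above is straightforward to implement; the sketch below assumes 8-bit images (MAX = 255) and synthetic data.

```python
import numpy as np

# PSNR = 10 * log10(MAX^2 / MSE); higher means smaller pixel-level error.

def psnr(target, generated, max_val=255.0):
    mse = np.mean((target.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

rng = np.random.default_rng(0)
real = rng.integers(0, 256, (64, 64)).astype(np.uint8)
noise = rng.integers(-10, 11, real.shape)
noisy = np.clip(real.astype(int) + noise, 0, 255).astype(np.uint8)
print(psnr(real, real))                  # inf for identical images
print(20.0 < psnr(real, noisy) < 40.0)   # a plausible range for mild noise
```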
A total of 50 subjects were recruited for this experiment (20 males, 30 females; mean age M = 24.58 years). Analysis of the collected data on art knowledge and art interest using SPSS 26 revealed that the subjects could be classified into three categories based on differences in cognitive characteristics: general subjects, learning subjects and expert subjects.
Descriptive statistics of the immediate aesthetic evaluation results are shown in Table 3. In the dimensions of beauty and preference, the distribution of learning subjects and expert subjects is similar, while in the dimension of understanding, the evaluation situation of learning subjects is more similar to that of general subjects and different from that of expert subjects, which emphasizes the importance of artistic knowledge in understanding the content of the picture of traditional visual art elements.
Descriptive statistics of immediate aesthetic evaluation
| Subject matter | Dimension | General M | General SD | Learning M | Learning SD | Expert M | Expert SD |
|---|---|---|---|---|---|---|---|
| Flower bird | Beauty | 4.4794 | 1.27679 | 5.4163 | 0.49452 | 6.1462 | 0.843 |
| Flower bird | Preference | 4.896 | 1.3262 | 5.2333 | 0.7103 | 5.9582 | 0.58194 |
| Flower bird | Understanding | 4.9169 | 1.47824 | 5.4334 | 0.6221 | 6.0626 | 0.80898 |
| Figures | Beauty | 4.8959 | 1.14595 | 5.4669 | 0.90124 | 5.9791 | 1.03635 |
| Figures | Preference | 4.1672 | 1.09575 | 5.0837 | 0.9665 | 5.604 | 0.86906 |
| Figures | Understanding | 5.1253 | 1.07371 | 5.6326 | 0.92959 | 5.8955 | 0.85834 |
| Mountain and water | Beauty | 4.9162 | 0.92306 | 4.8671 | 1.05036 | 5.2296 | 1.03058 |
| Mountain and water | Preference | 5.1462 | 0.8774 | 4.6334 | 0.7866 | 5.5215 | 0.79036 |
| Mountain and water | Understanding | 5.0209 | 1.09181 | 5.3836 | 1.01036 | 5.6453 | 0.92278 |
Second, there were differences between subject types. On the beauty dimension, expert subjects rated flower-and-bird paintings the highest, learning subjects rated figure paintings the highest, and both of these groups rated landscape paintings the lowest, whereas general subjects gave landscape paintings their highest ratings. On the preference dimension, general subjects gave figure paintings the lowest ratings and landscape paintings the highest, with mean ratings of 4.1672 and 5.1462, respectively; the other two groups rated landscape paintings the lowest and flower-and-bird paintings the highest. On the understanding dimension, general and learning subjects rated figure paintings the highest while expert subjects rated flower-and-bird paintings the highest, and expert subjects showed the greatest differences across subject matters.
Further correlation analysis of the dimensions was conducted, and the results of the correlation analysis are shown in Table 4, which shows that there is a highly significant positive correlation among the three dimensions in all the themes. During the viewing process, viewers are influenced by the visual stimulation of the images and their own cognitive background to judge the perceived quality of traditional visual art elements and form an initial impression of the quality of “beauty” of traditional visual art elements. Over time, the viewer undergoes deeper cognitive processes and tries to analyze the work at a higher level in order to “understand” the traditional visual arts elements, to extract the meaning of the work, and to analyze the value behind it.
Correlation analysis of immediate aesthetic evaluation
| Dimension | Subject matter | Beauty | Preference | Understanding |
|---|---|---|---|---|
| Beauty | Flower bird | 1 | 0.75** | 0.415** |
| Beauty | Figures | 1 | 0.44** | 0.307** |
| Beauty | Mountain and water | 1 | 0.445** | 0.479** |
| Preference | Flower bird |  | 1 | 0.385** |
| Preference | Figures |  | 1 | 0.309** |
| Preference | Mountain and water |  | 1 | 0.403** |
| Understanding | Flower bird |  |  | 1 |
| Understanding | Figures |  |  | 1 |
| Understanding | Mountain and water |  |  | 1 |
Note: ** correlation is significant at the 0.01 level (two-tailed).

Finally, the viewer evaluates the preceding processing and outputs an overall aesthetic judgment; a successful analysis of the work triggers a positive aesthetic judgment. The cognitive levels and emotional experiences in the whole process are interconnected and accompanied by escalating emotional states, forming aesthetic emotions and evaluations that are dynamic and complex.
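The Pearson correlations reported above can be sketched as follows; the rating data here is synthetic and purely illustrative, not the experimental data.

```python
import numpy as np

# Pearson correlation coefficient between two rating dimensions:
# covariance of the centered ratings divided by the product of their norms.

def pearson_r(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

rng = np.random.default_rng(0)
beauty = rng.normal(5.0, 1.0, 50)                # hypothetical beauty ratings
prefer = 0.7 * beauty + rng.normal(0, 0.5, 50)   # positively related dimension
r = pearson_r(beauty, prefer)
print(-1.0 <= r <= 1.0, r > 0.3)  # bounded, and clearly positive here
```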
The Aesthetic Experience Questionnaire for Traditional Visual Art Elements adopted in this study adds the two flow-experience dimensions from the Aesthetic Experience Questionnaire (AEQ), forming a questionnaire with a total of 8 dimensions and 32 questions; the meanings of the dimensions and the reliability tests are shown in Table 5. The reliability and validity tests on the collected data showed that the questionnaire performs very well in this experiment (KMO = 0.889, P < 0.001; Cronbach's α = 0.908), so further analysis can proceed.
Aesthetic experience evaluation dimension meaning and reliability test
| Dimension | Representative meaning | Cronbach's Alpha coefficient |
|---|---|---|
| Negative emotion | Unpleasant emotional reactions to the art | 0.638 |
| Positive emotion | Pleasant emotional reactions to the art | 0.845 |
| Artistic quality | Evaluation of artistic quality, such as technique | 0.859 |
| Self-association | Able to relate the picture to personal experience | 0.812 |
| Professional knowledge | Able to connect the painting to its art-historical background, such as style and school | 0.913 |
| Cognitive exploration | Curious about the author, art history, or other things | 0.679 |
| Flow balance | Able to enter a flow state (challenge-skill balance) | 0.804 |
| Flow experience | Able to focus on the experience of the painting and immerse oneself in it | 0.772 |
Further repeated-measures ANOVA on each dimension under the different conditions is shown in Table 6. The subject-type factor has a significant main effect on six dimensions: positive emotion, self-association, professional knowledge, cognitive exploration, flow balance, and flow experience. This suggests that the stronger a subject's art knowledge base and art interest, the more the subject believes he or she can correctly analyze and process the painting, and the higher the aesthetic evaluation output accordingly. The subject-matter factor showed a significant main effect only on the "cognitive exploration" dimension (F = 3.378), and only "negative emotion" showed no significance under any condition.
Repeated-measures ANOVA of aesthetic experience dimensions
| Dimension | Factor | df | Mean square | F | Eta-squared |
|---|---|---|---|---|---|
| Negative emotion | Subject matter | 2 | 0.953 | 2.994 | 0.059 |
| Test type | 2 | 0.814 | 1.445 | 0.057 | |
| Subject matter* Test type | 4 | 0.522 | 1.645 | 0.164 | |
| Positive emotion | Subject matter | 2 | 0.982 | 2.395 | 0.048 |
| Test type | 2 | 9.933 | 12.094*** | 0.335 | |
| Subject matter* Test type | 4 | 1.404 | 3.425* | 0.125 | |
| Artistic quality | Subject matter | 1.655 | 0.161 | 0.578 | 0.017 |
| Test type | 2 | 2.059 | 2.163 | 0.084 | |
| Subject matter* Test type | 3.211 | 0.984 | 3.485* | 0.125 | |
| Self-association | Subject matter | 2 | 0.215 | 0.575 | 0.017 |
| Test type | 2 | 6.494 | 3.427* | 0.126 | |
| Subject matter* Test type | 4 | 0.395 | 1.035 | 0.043 | |
| Professional knowledge | Subject matter | 1.689 | 0.005 | 0.025 | 0 |
| Test type | 2 | 32.12 | 20.109*** | 0.452 | |
| Subject matter* Test type | 3.321 | 0.735 | 2.448 | 0.093 | |
| Cognitive exploration | Subject matter | 2 | 1.079 | 3.378* | 0.065 |
| Test type | 2 | 8.669 | 10.539*** | 0.302 | |
| Subject matter* Test type | 4 | 0.412 | 1.318 | 0.053 | |
| Flow balance | Subject matter | 1.758 | 0.588 | 1.874 | 0.038 |
| Test type | 2 | 16.594 | 17.225*** | 0.415 | |
| Subject matter* Test type | 3.544 | 0.716 | 2.269 | 0.087 | |
| Flow experience | Subject matter | 2 | 0.408 | 1.503 | 0.229 |
| Test type | 2 | 4.393 | 4.365* | 0.156 | |
| Subject matter* Test type | 4 | 0.769 | 2.829* | 0.108 |
The ANOVA results also showed that the interaction between subject matter and subject type had a significant effect on the three dimensions of "positive emotion", "artistic quality" and "flow experience"; that is, the influence of subject-matter characteristics on these dimensions varies with subject type, and vice versa.
The "artistic quality" dimension leans toward judging the aesthetic quality of the painting and can represent part of the output of aesthetic judgment. "Positive emotion" represents, to some extent, the subject's emotional state during the aesthetic process, and "flow experience" reflects the subject's grasp of the artistic conception of traditional visual art elements and summarizes the overall viewing experience. Taken together, these three dimensions showed a significant interaction effect between subject characteristics and subject matter, which supports the view that aesthetic cognitive processing proceeds through continuous interaction between automatic and conscious processing.
The correlation analysis of immediate evaluation showed that the aesthetic experience of traditional visual art elements is a complex and dynamic evaluation process, integrating basic perceptual analysis, higher-order cognitive operations, and continuous updating of the emotional assessment of the artwork; whether there are interactions among the multiple dimensions is therefore also worth exploring. The results of the correlation analysis between the dimensions are shown in Table 7, where dimensions 1-8 represent negative emotion, positive emotion, artistic quality, self-association, professional knowledge, cognitive exploration, flow balance, and flow experience, respectively.
Correlation analysis among aesthetic experience dimensions
| Dimension | Subject matter | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Flower bird | 1 | 0.476** | 0.568** | 0.078 | 0.235 | 0.125 | 0.375** | 0.382** |
| 1 | Figures | 1 | 0.566** | 0.536** | 0.136 | 0.267 | 0.368 | 0.515** | 0.443** |
| 1 | Mountain and water | 1 | 0.432** | 0.465** | 0.098 | 0.188 | 0.267 | 0.238 | 0.465** |
| 2 | Flower bird |  | 1 | 0.725** | 0.277** | 0.415** | 0.695** | 0.498** | 0.705** |
| 2 | Figures |  | 1 | 0.763** | 0.449** | 0.389** | 0.764** | 0.523** | 0.712** |
| 2 | Mountain and water |  | 1 | 0.569** | 0.489** | 0.376** | 0.523** | 0.215 | 0.637** |
| 3 | Flower bird |  |  | 1 | 0.115 | 0.308* | 0.549** | 0.365** | 0.625** |
| 3 | Figures |  |  | 1 | 0.218 | 0.203 | 0.618** | 0.408** | 0.625** |
| 3 | Mountain and water |  |  | 1 | 0.136 | 0.195 | 0.378** | 0.113 | 0.298* |
| 4 | Flower bird |  |  |  | 1 | 0.387** | 0.315* | 0.446** | 0.205 |
| 4 | Figures |  |  |  | 1 | 0.364** | 0.264 | 0.225 | 0.314* |
| 4 | Mountain and water |  |  |  | 1 | 0.467** | 0.145 | 0.406** | 0.328* |
| 5 | Flower bird |  |  |  |  | 1 | 0.338* | 0.725** | 0.315* |
| 5 | Figures |  |  |  |  | 1 | 0.306* | 0.637** | 0.268 |
| 5 | Mountain and water |  |  |  |  | 1 | 0.437** | 0.789** | 0.223 |
| 6 | Flower bird |  |  |  |  |  | 1 | 0.375** | 0.547** |
| 6 | Figures |  |  |  |  |  | 1 | 0.372** | 0.708** |
| 6 | Mountain and water |  |  |  |  |  | 1 | 0.365** | 0.436** |
| 7 | Flower bird |  |  |  |  |  |  | 1 | 0.398** |
| 7 | Figures |  |  |  |  |  |  | 1 | 0.305* |
| 7 | Mountain and water |  |  |  |  |  |  | 1 | 0.236 |
| 8 | Flower bird |  |  |  |  |  |  |  | 1 |
| 8 | Figures |  |  |  |  |  |  |  | 1 |
| 8 | Mountain and water |  |  |  |  |  |  |  | 1 |
The analysis of the Pearson correlation coefficients between the dimensions showed that:
Negative emotion had a significant moderate positive correlation with artistic quality and a low positive correlation with flow experience. Positive emotion had positive correlations of some degree with all dimensions. Artistic quality had generally low positive correlations with negative emotion, positive emotion, cognitive exploration, and flow experience. Self-association had generally low positive correlations with positive emotion, professional knowledge, and flow balance, suggesting that the degree to which subjects related the paintings to their personal experience was significantly associated with their ability to maintain a positive emotional state while viewing. Flow experience had a significant low positive correlation with negative emotion, and significant moderate positive correlations with positive emotion, artistic quality, and cognitive exploration, indicating that subjects' evaluation of the overall painting-viewing experience is influenced by both aesthetic judgment and aesthetic emotion.
The above results confirm that the dynamics and complexity of aesthetic experience described in models of aesthetic cognitive processing also arise in the visual art category of traditional visual art elements, and further reveal how perceptual characteristics of the viewers themselves are interwoven into the process, which provides a basis for clarifying the inner mechanism of viewers' aesthetic experience of traditional visual art elements later in this paper.
This paper analyzes the shortcomings of existing artwork style migration methods and proposes a style migration model for traditional visual art elements based on asymmetric cyclic consistency of generative adversarial networks.
The asymmetric cyclic consistency structure designed in this paper can improve the visual quality of generated traditional visual art element images. The results of the comparison experiments show that the average values of SSIM and PSNR of this paper’s model are 0.8859 and 21.59, respectively, and the pixel image results are closer to the real image than the remaining four methods, and also close to the pixel level standard in terms of the peak signal-to-noise ratio, which proves the superiority of this paper’s method.
This study explores the cognitive processing and aesthetic experience of different types of viewers when viewing traditional visual art elements with different subject matters from the perspective of aesthetic experimentation. It concludes that different types of viewers have different cognitive and aesthetic experiences of traditional visual art elements, that viewers' aesthetic experiences in this visual art category are dynamic and complex, and that the higher the viewers' artistic knowledge and artistic interest, the stronger their aesthetic experience of traditional visual art elements.
Since higher artistic knowledge leads to a better aesthetic experience for the audience, curators can transform traditional visual art elements from "collections" into "exhibits" and break down the barriers between traditional visual art elements and audiences. Attention should always be paid to differences in cognitive characteristics among audience members, and display content should meet their varied needs, so as to enhance the dissemination effect and aesthetic experience of exhibitions of traditional visual art elements.
