
Research on Innovative Integration Strategies of Chinese Liangshan Yi Clothing Culture in Modern Clothing Design in the Age of Artificial Intelligence

17 March 2025

Introduction

As the sixth largest ethnic minority in China, the Yi people have had a distinctive influence on China's development. Traditional ethnic clothing such as Yi clothing has very clear characteristics of inheritance, which allows it to serve as an important emotional tie in interpersonal communication. Designers therefore need to draw on the elements of Yi and other traditional ethnic costumes in a skillful way in modern clothing design, carrying the emotional connotation of traditional clothing over to a new medium [1-2]. Integrating Yi clothing into modern clothing design has theoretical significance for the development of cultural heritage and practical significance for national rejuvenation.

In integrating Yi clothing elements with modern clothing design, modern clothing does not simply copy or imitate traditional Yi clothing; rather, it scientifically incorporates the patterns, accessories, styles, colors and other elements that give Yi clothing its unique charm, while paying greater attention to inheriting and promoting the national spirit [3-6]. Overall, the distinctiveness of Yi clothing lies in its extensive use of exquisite patterns in natural and folk colors, and it is precisely these unique pattern designs that give Yi clothing its increasingly rich ethnic flavor [7-10]. The refined production techniques and elaborate pattern design enabled by the AI era allow Yi dress to stand out among many clothing categories and make it a bright landmark in the history of traditional Chinese ethnic costume [11-13]. Integrating Yi clothing elements into modern clothing design can vividly convey their original, natural flavor.

Ji, Z. et al. showed that color is a specific expression of the combination of local characteristics and traditional culture; based on a study of the color culture and color imagery of traditional Liangshan Yi women's costumes, they analyzed the color distribution of sample garments to establish a regional cultural color scheme, providing a reference for designers developing regional cultural products [14]. Jiancai, W. A. N. G. studied the aesthetic characteristics of Yi costumes in Yunnan, showing that their color, structure, decoration, craft features and other aspects reflect the diversity, richness, importance and dominance of the aesthetic subject, making Yi dress one of the most important carriers of expression of Yi culture [15]. Lu, H. et al. researched the unique three-color culture and color features of the Yi people and found that the Yi people's reverence for and worship of color have a deep cultural heritage and historical origin; they further explored the aesthetic concepts through which Yi clothing displays Yi color culture, helping to recognize the special value of Yi color culture and to promote the dissemination of minority color culture [16]. Wang, H. et al. pointed out that the headdress culture of the Yi people is more distinctive than that of other ethnic dress cultures and is considered one of the important features of Yi women's dress culture; by investigating the developmental history of Yi women's headdresses, they provided basic information on the headdresses and dresses [17].

Lei, W. et al. introduced virtual reality and somatosensory interaction technology, established a model database of Yi Tusi dress, and promoted interactive virtual try-on, allowing users to intuitively feel the unique charm and cultural connotation of Yi Tusi dress [18]. Zhu, H. et al. used the K-means clustering algorithm to extract the colors of traditional Yi clothing through cluster analysis, effectively restoring the color images and characteristics of Yi clothing and providing technical support for the innovative application of Yi clothing colors [19]. Zhao, L. et al. proposed a K-means-based color scheme evaluation method for the color design of ethnic minority garments, which unified the objectivity of color laws with the subjectivity of cultural imagery and promoted the dissemination of ethnic clothing culture by establishing an evaluation standard for apparel color schemes [20].

In this paper, we take the pattern and color elements of Yi clothing in Liangshan, China, as research objects, put forward an innovation and integration strategy for modern clothing design, and construct a pattern segmentation model and a color feature extraction model for Yi clothing. A lightweight pattern segmentation algorithm based on the AES-UNet model is proposed by integrating multi-scale feature extraction and attention: the U-Net model is improved, ASPP and ECA attention are embedded in its encoder, and an improved loss function combining the strengths of the BCE loss and the Lovász-hinge loss is proposed to enhance segmentation accuracy. For color analysis, the images are first denoised; the applied colors and related color information of the sample image series are then extracted with an adaptive K-means clustering method followed by a custom K-means clustering method, the color features are extracted, and a color network relationship model is constructed and optimized. Performance experiments are carried out on the Yi clothing pattern segmentation model and the color feature extraction model, a series of modern garments with the theme of "Love at the Sight of Yi" is designed with the technology of this paper, and the design is evaluated in terms of color and pattern matching, clothing style and decorative details, and the degree of ethnic style and overall design effect.

Development of Liangshan Yi Clothing Culture in the Age of Artificial Intelligence

With the advance of globalization, the communication and integration of ethnic cultures has become a trend. In this context, how to combine traditional ethnic clothing elements with modern clothing design so as to maintain the uniqueness of ethnic culture while meeting modern aesthetic needs, and how to find a balance in this collision so that ancient ethnic clothing elements gain new vitality, has become an important issue in the field of clothing design. As an important part of Chinese traditional culture, Liangshan Yi clothing carries a profound historical and cultural heritage; its rich patterns, bright colors, exquisite accessories and unique shapes provide a valuable source of inspiration for modern clothing design, and with its distinctive artistic charm it opens up countless possibilities for modern clothing design, promoting it toward a new level of diversification and personalization.

Overview of Yi Clothing

The Yi people are mainly concentrated in the mountainous and plateau areas of Yunnan, Sichuan, Guizhou, and Guangxi. Yi culture is rich and colorful, and one of its remarkable features is its diverse costumes. According to statistics, there are more than 300 kinds of Yi costumes, which not only reflect the gender, age and social status of the wearer, but also reflect particular needs on different occasions and at important moments in life. For example, there are special costumes for weddings and funerals, religious ceremonies, rites of passage, and the elderly. These costumes are not simply clothes; they are carriers of Yi culture, history, and tradition, displaying deep ethnic emotion and cultural heritage.

The patterns and colors of Yi costumes show diversity and innovation, which are not only a true portrayal of life, but also a unique expression of Yi culture. The unique design, bold colors, primitive style, and ruggedness of the costumes demonstrate the deep artistic attainments and infinite creativity of the Yi people.

Innovative integration strategies for modern clothing design

As noted above, the communication and integration of ethnic cultures has become a trend under globalization, and combining traditional ethnic clothing elements with modern clothing design, while maintaining the uniqueness of ethnic culture and meeting modern aesthetic needs, so that ancient ethnic clothing elements gain new vitality, has become an important issue in the field of modern clothing design.

Drawing on artificial intelligence technology, this paper puts forward the following innovative integration strategies for modern clothing design that incorporate Yi dress culture.

1) Realize the segmentation of Yi dress patterns to provide rich Yi pattern elements for modern clothing design.

Yi dress patterns are extremely complex in composition and color, and their styles are highly diverse. To address this complexity, an image segmentation model for Yi dress patterns is proposed.

2) Realize color feature extraction for Yi clothing to provide a source of Yi colors for modern clothing design.

The Yi dresses are colorful, diverse and unique, showing the deep heritage of Yi culture. Accordingly, this paper proposes a color feature extraction model for Yi clothing.

In the next section, the corresponding algorithm models are constructed for Yi dress pattern segmentation and dress color feature extraction in line with the above innovative integration strategy, so as to provide technical support for integrating Liangshan Yi dress culture into modern clothing design.

Yi Clothing Pattern Segmentation Model

In this chapter, an AES-UNet model for Yi ethnic dress pattern segmentation is proposed based on the U-Net model with appropriate improvements [21].

The model combines attention mechanisms and atrous spatial pyramid pooling (ASPP) with the U-Net model to improve the accuracy of Yi clothing pattern segmentation. To make the model more lightweight, this chapter first modifies the U-Net framework by removing one downsampling layer and the corresponding upsampling layer, which reduces the amount of computation and increases inference speed. The encoder, whose role is to extract features through downsampling, is then improved: the convolutional layers used for downsampling are replaced with residual blocks from the ResNet model, which likewise perform downsampling. The model downsamples three times in total, and after each downsampling the multi-scale feature extraction module ASPP and ECA attention are embedded. Because Yi dress patterns are complex, ASPP helps capture pattern features at different receptive fields, and since downsampling produces a large number of feature maps rich in channel information, ECA attention immediately follows each ASPP to capture the correlations and relative importance between channels [22-23]. Finally, the decoder of the U-Net model, whose role is to recover the image size through upsampling for end-to-end output, is improved: after the first two upsampling-and-concatenation steps of the decoder, SA attention is embedded to help the model localize the foreground regions of Yi dress patterns accurately and to enhance the feature representation of fine patterns, thereby achieving accurate segmentation.
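To make the layout above concrete, the following is a minimal PyTorch-style skeleton of how such an AES-UNet could be organized, assuming three residual downsampling stages each followed by ASPP and ECA, and SA attention after the first two decoder fusions. It is a sketch, not the authors' implementation: the channel widths are assumptions, and the ASPP, ECA and SA modules (sketched in the following subsections) are stood in here by nn.Identity placeholders so the skeleton runs on its own.

```python
# Structural sketch (an assumption, not the authors' code) of the AES-UNet layout.
import torch
import torch.nn as nn

def down_block(cin, cout):
    # residual-style stride-2 block replacing U-Net's pooling-based downsampling
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class AESUNetSkeleton(nn.Module):
    def __init__(self, in_ch=3, n_classes=1, widths=(64, 128, 256, 512)):
        super().__init__()
        c1, c2, c3, c4 = widths
        self.stem = nn.Sequential(nn.Conv2d(in_ch, c1, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.ModuleList([down_block(a, b) for a, b in [(c1, c2), (c2, c3), (c3, c4)]])
        self.aspp = nn.ModuleList([nn.Identity() for _ in range(3)])   # -> ASPP modules
        self.eca = nn.ModuleList([nn.Identity() for _ in range(3)])    # -> ECA modules
        self.up = nn.ModuleList([nn.ConvTranspose2d(c4, c3, 2, stride=2),
                                 nn.ConvTranspose2d(c3, c2, 2, stride=2),
                                 nn.ConvTranspose2d(c2, c1, 2, stride=2)])
        self.fuse = nn.ModuleList([nn.Conv2d(c3 * 2, c3, 3, padding=1),
                                   nn.Conv2d(c2 * 2, c2, 3, padding=1),
                                   nn.Conv2d(c1 * 2, c1, 3, padding=1)])
        self.sa = nn.ModuleList([nn.Identity(), nn.Identity()])        # -> SA modules
        self.head = nn.Conv2d(c1, n_classes, 1)

    def forward(self, x):
        skips, f = [], self.stem(x)
        for d, a, e in zip(self.down, self.aspp, self.eca):
            skips.append(f)
            f = e(a(d(f)))                      # downsample -> ASPP -> ECA
        for i, (u, fuse) in enumerate(zip(self.up, self.fuse)):
            f = fuse(torch.cat([u(f), skips[-1 - i]], dim=1))
            if i < 2:                           # SA after the first two fusions
                f = self.sa[i](f)
        return self.head(f)

# Shape check: a 3x256x256 image yields a 1-channel mask of the same size.
# print(AESUNetSkeleton()(torch.randn(1, 3, 256, 256)).shape)
```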

ASPP multi-scale feature extraction

Yi dress patterns are complex, and various pattern characteristics affect the segmentation performance of the model: the patterns are diverse in style, their textures are delicate, and their colors are vivid, so the feature information of the patterns must be extracted thoroughly. At the same time, to compensate for the feature loss caused by reducing the number of downsampling layers, this chapter embeds atrous spatial pyramid pooling (ASPP) into the encoder, which has the advantage of obtaining feature information about Yi dress patterns at multiple scales with different receptive fields, thereby capturing contextual information.

ASPP mainly consists of several atrous (dilated) convolutions with different dilation rates, together with pooling. In this chapter, considering the fineness and complexity of Yi dress patterns, the dilation rates of the atrous convolutions are set to 3, 6, and 9, in addition to an ordinary convolution and a pooling branch; together these capture the complex features of Yi dress patterns at different receptive fields while avoiding the heavy computation and resolution loss of traditional convolution. The results of each convolution and pooling branch are combined to share feature information, and this information is then fused by convolution and passed downward.

The work of ASPP can be expressed as follows:
$$F = \mathrm{conv}_{1\times 1}\big(\mathrm{conv}_{1\times 1}(X);\ \mathrm{atrous}_3(X);\ \mathrm{atrous}_6(X);\ \mathrm{atrous}_9(X);\ f_{up}(\mathrm{pool}(X))\big)$$

where $X$ is the input feature map, $\mathrm{conv}_{1\times 1}$ is a 1 × 1 convolution, $\mathrm{atrous}_r$ is an atrous convolution with dilation rate $r$, $\mathrm{pool}$ is pooling, $f_{up}$ is upsampling, and $F$ is the output feature map.
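As an illustration of the formula above, the following is a minimal sketch (not the authors' code) of an ASPP block with a 1 × 1 convolution, atrous convolutions at dilation rates 3, 6 and 9, and an upsampled global-pooling branch, concatenated and fused by a final 1 × 1 convolution; keeping the output channel width equal to the input width is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, channels, rates=(3, 6, 9)):
        super().__init__()
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.atrous = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates])
        self.pool = nn.AdaptiveAvgPool2d(1)          # global pooling branch
        self.pool_conv = nn.Conv2d(channels, channels, kernel_size=1)
        # fuse the concatenated branches back to the original channel width
        self.fuse = nn.Conv2d(channels * (2 + len(rates)), channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        branches = [self.conv1x1(x)] + [conv(x) for conv in self.atrous]
        pooled = F.interpolate(self.pool_conv(self.pool(x)), size=(h, w),
                               mode="bilinear", align_corners=False)
        branches.append(pooled)                      # f_up(pool(X))
        return self.fuse(torch.cat(branches, dim=1))
```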

ECA Attention

Because ASPP captures a large amount of Yi costume pattern feature information, ECA attention is used to model its channels efficiently. ECA attention first compresses the input feature map with global average pooling, then learns channel features with a convolution kernel of adaptive size, activates the result with the Sigmoid function, and finally applies the channel attention to obtain a new feature map with enhanced feature channels, which effectively strengthens the channels that carry rich Yi costume pattern information.

The variable convolution kernel in ECA attention, for a given number of channels, determines the size of the receptive field of the convolution kernel, which is expressed by the following equation:
$$k = \phi(c) = \left| \frac{\log_2(c)}{\gamma} + \frac{b}{\gamma} \right|_{odd}$$

where $c$ is the number of channels, $k$ is the size of the variable convolution kernel, the parameters $\gamma$ and $b$ are taken as 2 and 1, respectively, and $|\cdot|_{odd}$ denotes the nearest odd number.
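A minimal sketch of ECA attention under the rule above (with γ = 2 and b = 1) might look as follows; rounding k to the nearest odd value is implemented here, as is common, by taking the next odd number when the computed value is even.

```python
import math
import torch
import torch.nn as nn

def eca_kernel_size(channels, gamma=2, b=1):
    k = int(abs(math.log2(channels) / gamma + b / gamma))
    return k if k % 2 == 1 else k + 1              # nearest odd number

class ECA(nn.Module):
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        k = eca_kernel_size(channels, gamma, b)
        self.gap = nn.AdaptiveAvgPool2d(1)         # global average pooling
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # (N, C, H, W) -> (N, C, 1, 1) -> (N, 1, C) for the 1-D convolution
        y = self.gap(x).squeeze(-1).transpose(1, 2)
        y = self.sigmoid(self.conv(y)).transpose(1, 2).unsqueeze(-1)
        return x * y                               # reweight channels
```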

SA Attention

Since Yi dress patterns have very fine textures, SA attention is embedded so that the feature information of the Yi dress pattern is fully exploited, enabling more precise segmentation of the pattern.

The SA attention module is structured into three parts: Group, Split and Aggregate. The Group part divides the input feature map into groups to obtain sub-features, which helps to fully subdivide the feature information of the Yi dress pattern. The Split part splits each grouped sub-feature along the channel dimension into two branches, which perform a channel attention operation and a spatial attention operation respectively, so that the feature information of the Yi dress pattern can be fully used. The channel-attention branch compresses the feature information with global average pooling followed by a linear transformation, while the other branch applies group normalization followed by a linear transformation; both branches use the Sigmoid function as the response to complete the channel and spatial attention operations. Finally, the results of the two attention operations are combined into a new sub-feature. The two branches can be expressed by the following equations:
$$F_{k1} = \delta\big(f_{c1}(f_{GAP}(X_{k1}))\big) \cdot X_{k1} = \delta(W_1 S_1 + b_1) \cdot X_{k1}$$
$$F_{k2} = \delta\big(f_{c2}(f_{GN}(X_{k2}))\big) \cdot X_{k2} = \delta(W_2 S_2 + b_2) \cdot X_{k2}$$

where $X_{k1}$ and $X_{k2}$ are the input features of the two branches after grouping, $f_{GAP}$ is global average pooling, $f_{GN}$ is group normalization, $f_{c1}$ and $f_{c2}$ are linear functions, $\delta$ is the activation function, $W_1, b_1, W_2, b_2$ are the linear function parameters, $S_1$ is the feature after global average pooling, $S_2$ is the feature after group normalization, and $F_{k1}$ and $F_{k2}$ are the output features.

The Aggregate section aggregates all the new sub-features to facilitate the exchange of feature information across groups. Finally, the channel shuffle operation is performed to form the final attention feature map, which can effectively help the network achieve accurate segmentation of Yi ethnic dress patterns.
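A compact sketch of such a shuffle-attention module, following the two branch equations and the Group/Split/Aggregate structure described above, is given below; the number of groups is an assumption, and W1, b1, W2, b2 are modeled as learned per-channel affine weights.

```python
import torch
import torch.nn as nn

class SA(nn.Module):
    def __init__(self, channels, groups=8):
        super().__init__()
        assert channels % (2 * groups) == 0
        c = channels // (2 * groups)
        self.groups = groups
        self.cw = nn.Parameter(torch.ones(1, c, 1, 1))    # W1 (channel branch)
        self.cb = nn.Parameter(torch.zeros(1, c, 1, 1))   # b1
        self.sw = nn.Parameter(torch.ones(1, c, 1, 1))    # W2 (spatial branch)
        self.sb = nn.Parameter(torch.zeros(1, c, 1, 1))   # b2
        self.gn = nn.GroupNorm(c, c)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        n, c, h, w = x.shape
        x = x.reshape(n * self.groups, c // self.groups, h, w)   # Group
        x0, x1 = x.chunk(2, dim=1)                               # Split
        # channel attention: F_k1 = sigmoid(W1 * GAP(X_k1) + b1) * X_k1
        s1 = x0.mean(dim=(2, 3), keepdim=True)
        x0 = self.sigmoid(self.cw * s1 + self.cb) * x0
        # spatial attention: F_k2 = sigmoid(W2 * GN(X_k2) + b2) * X_k2
        s2 = self.gn(x1)
        x1 = self.sigmoid(self.sw * s2 + self.sb) * x1
        out = torch.cat([x0, x1], dim=1).reshape(n, c, h, w)     # Aggregate
        # channel shuffle to exchange information across groups
        out = out.reshape(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
        return out
```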

Improved loss function

Yi dress patterns are highly complex, so the choice of loss function is crucial to segmentation accuracy. The widely used BCE loss function is strong for binary classification tasks, while the Lovász-hinge loss function introduced in this paper compares predictions pixel by pixel and effectively mitigates class imbalance between sample categories; its effectiveness has already been demonstrated in the segmentation of Miao (Hmong) costume patterns. To give the model finer supervision and improve segmentation accuracy, this chapter combines the BCE loss function and the Lovász-hinge loss function into an improved loss function, formulated as follows.

$$L = \alpha \times L_{BCE} + (1 - \alpha) \times L_{Lh}$$

where $\alpha$ is the weight coefficient, $L_{BCE}$ is the BCE loss function, and $L_{Lh}$ is the Lovász-hinge loss function.
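A sketch of this combined loss for binary (foreground/background) segmentation is shown below, pairing BCEWithLogitsLoss with a compact flattened Lovász-hinge term; the weight α = 0.5 is an assumption, as the paper does not state its value here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def lovasz_grad(gt_sorted):
    # gradient of the Lovász extension of the Jaccard loss w.r.t. sorted errors
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.float().cumsum(0)
    union = gts + (1 - gt_sorted).float().cumsum(0)
    jaccard = 1.0 - intersection / union
    if p > 1:
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard

def lovasz_hinge_flat(logits, labels):
    # logits, labels: flattened 1-D tensors; labels in {0, 1}
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
    grad = lovasz_grad(labels[perm])
    return torch.dot(F.relu(errors_sorted), grad)

class CombinedLoss(nn.Module):
    def __init__(self, alpha=0.5):                 # alpha value is an assumption
        super().__init__()
        self.alpha = alpha
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, target):
        # logits, target: (N, 1, H, W); target in {0, 1}
        l_bce = self.bce(logits, target.float())
        l_lh = lovasz_hinge_flat(logits.reshape(-1), target.reshape(-1))
        return self.alpha * l_bce + (1 - self.alpha) * l_lh
```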

Color Feature Extraction Model for Yi Clothing
Image denoising

The experimental sample images collected in this paper contain noise, and Gaussian filtering is used here to denoise them [24]. Since the filtering is applied to images, a two-dimensional Gaussian function is used, whose model equation is:
$$G(x, y) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$$

where $G(x, y)$ is the Gaussian filter weight, whose value depends strongly on $\sigma$, which in turn affects the final denoising effect.
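In practice this step can be sketched with OpenCV's built-in Gaussian filter; the kernel size and σ below are assumptions for illustration.

```python
# Minimal sketch of the Gaussian-filter denoising step using OpenCV.
# A larger sigma smooths more aggressively but blurs fine pattern edges.
import cv2

def denoise(image_path, ksize=5, sigma=1.0):
    img = cv2.imread(image_path)                      # BGR image
    return cv2.GaussianBlur(img, (ksize, ksize), sigma)

# Example: denoised = denoise("yi_dress_sample.jpg")  # hypothetical file name
```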

Imagery color extraction

Imagery color extraction refers to extracting representative colors from a scene; unlike the point-like and local colors used in color measurement and color matching, imagery colors are an overall generalization of the scene's colors. In this paper, imagery colors are extracted by combining an adaptive clustering method with a custom clustering method, colors are extracted in descending order of their proportions according to the required number of colors, and the extracted colors are referred to as "extracted colors" in the following.

Color extraction scheme

After comparing a variety of clustering algorithms and considering the requirements of this topic, an improved K-means clustering algorithm is selected for the color extraction part, and two rounds of color clustering are used to extract the colors of the image series [25].

Initial clustering

In the color clustering process, the first round of color clustering, i.e., the initial clustering, uses an adaptive K-means clustering method, in which the program automatically generates the extracted colors of each image. The initial adaptive color clustering is carried out sequentially for all scene images, and the principal steps of the adaptive clustering method are as follows (a code sketch follows the steps).

1) Image pixel points are used as the clustering sample points, and the sample set is denoted $\{a_1, a_2, \ldots, a_n\}$;

2) Compute the Euclidean distance between sample points. The Euclidean distance $d(i, j) = \|a_i - a_j\|$ indicates the similarity of two sample points $a_i$ and $a_j$ ($i, j = 1, 2, \ldots, n$): the smaller $d(i, j)$ is, the more similar the two sample points are, and vice versa. Then set and calculate the critical distance $D$ between sample points, for example by the formula:
$$D = \frac{2}{n(n-1)} \sum_{i=1}^{n} \sum_{j=i+1}^{n} d(i, j)$$

3) Take each sample point as a center point and use the critical distance $D$ to classify the sample points, obtaining the number of samples in each cluster;

4) Take the center sample point of the cluster containing the most samples as the first clustering center; if the distance between the center sample point of the cluster with the second most samples and the first clustering center exceeds $2D$, set it as the second clustering center; and so on, adaptively generating the clustering centers and the number of clusters.
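A sketch of these four steps, under the stated definitions of $d(i, j)$ and $D$, is given below; to keep the pairwise-distance computation tractable, a random subsample of pixels is used, which is an implementation assumption rather than part of the method description.

```python
import numpy as np

def adaptive_centers(pixels, sample_size=1000, seed=0):
    # pixels: (N, 3) array of RGB pixel values, e.g. image.reshape(-1, 3)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pixels), size=min(sample_size, len(pixels)), replace=False)
    a = pixels[idx].astype(float)
    d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)   # pairwise distances
    n = len(a)
    D = d[np.triu_indices(n, k=1)].mean()       # critical distance (mean pairwise distance)
    counts = (d <= D).sum(axis=1)               # cluster size around each candidate center
    centers = []
    for i in np.argsort(-counts):               # densest candidates first
        if all(np.linalg.norm(a[i] - c) > 2 * D for c in centers):
            centers.append(a[i])                # accept only if > 2D from existing centers
    return np.array(centers)                    # adaptive centers; K = len(centers)

# centers = adaptive_centers(image.reshape(-1, 3))
```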

Secondary clustering

Considering the comparability of the color distributions of various ethnic groups, a custom K-means clustering method is used in the second round of color clustering. Custom clustering requires determining the initial clustering centers and their number. Its principal steps are as follows (a code sketch follows the steps).

1) Take all the image extracted colors produced by the first round of color clustering as the sample points of the second round; manually set the initial clustering centers and thereby determine the number of clusters;

2) Calculate the distance from every sample point other than the initial clustering centers to each cluster center, and assign each sample point to the cluster whose center is closest;

3) Within each cluster formed in the previous step, calculate the centroid of all its points as the new cluster center;

4) After calculating the new center of each cluster, repeat steps (2) and (3) until the clustering centers no longer change; the clustering centers that no longer change constitute the final clustering result.
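A sketch of this second stage is given below, using scikit-learn's KMeans with explicitly supplied initial centers to reflect the custom (user-defined) initialization; the example center values in the usage comment are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def secondary_clustering(first_stage_colors, init_centers):
    # first_stage_colors: (m, 3) array pooling every image's extracted colors
    # init_centers: (K2, 3) user-defined initial cluster centers
    km = KMeans(n_clusters=len(init_centers), init=np.asarray(init_centers, float),
                n_init=1, max_iter=300, tol=1e-4)
    labels = km.fit_predict(np.asarray(first_stage_colors, float))
    return km.cluster_centers_, labels          # comprehensive extracted colors, assignments

# centers, labels = secondary_clustering(colors, [[200, 30, 40], [20, 20, 20], [240, 220, 60]])
```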

Through the custom clustering described above, the final extracted colors of the image series are obtained, allowing further color information research.

Feature extraction

In the color analysis section, in order to analyze the extracted color information effectively and understand the color mechanism and color patterns of the series of scene images, three types of key feature information are extracted: color percentage, pairwise color co-occurrence rate, and color space distance.

Extracting color ratio

Using the adaptive cluster number $K_1$ and the custom cluster number $K_2$, the proportion of each color in the target object area is calculated separately. During clustering, pixels of the same color are treated as the same class; the classes are counted in a two-dimensional matrix in which the value at each position is the color class number of the corresponding pixel after clustering, and the ratio of the number of pixels in each cluster to the total number of pixels in the target object area is the color percentage of that cluster color. The cluster-color ratios obtained after the second clustering are the comprehensive extracted color ratios of the image series, which can be used to set the diameter or radius of the color circles in the color network relationship model, whose visualization helps the user intuitively and quickly understand how each extracted color is used in the image series. In the current procedure, the counted background color is eliminated and does not participate in the ranking of the extracted colors. The formula for calculating the percentage of an extracted color is:
$$P = \frac{e}{E}$$

where $P$ is the percentage of an extracted color, $e$ is the number of pixels of that extracted color, and $E$ is the total number of pixels in the image.
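A minimal sketch of this ratio computation from a per-pixel cluster label map is shown below; marking background pixels with the label -1 is an assumption for illustration.

```python
import numpy as np

def color_ratios(labels):
    # labels: per-pixel cluster assignments; background pixels marked with -1 (assumption)
    labels = np.asarray(labels).ravel()
    fg = labels[labels >= 0]                  # drop background pixels
    E = fg.size                               # total pixels in the target object area
    return {int(k): np.count_nonzero(fg == k) / E for k in np.unique(fg)}

# ratios = color_ratios(clustered_label_map)  # e.g. {0: 0.41, 1: 0.33, 2: 0.26}
```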

Pairwise color co-occurrence rate

The procedure for calculating the co-occurrence rate of a pair of colors is as follows. If, in the $P$-th image of the series, the color tolerance between its $s$-th extracted color and the $i$-th comprehensive extracted color is less than or equal to the threshold $f$, and at the same time the color tolerance between its $t$-th extracted color and the $j$-th comprehensive extracted color is also less than or equal to the threshold $f$, then the $i$-th and $j$-th comprehensive extracted colors are regarded as co-occurring in the $P$-th image. In this paper the color tolerance is computed as the Euclidean distance between the two colors, namely:

When $D_{i,s} \le f$ and $D_{j,t} \le f$, $a_P = 1$, and at this time:
$$C_{i,j} = \frac{\sum_{P=1}^{A} a_P}{A}$$

where $C_{i,j}$ denotes the co-occurrence value of the $i$-th and $j$-th comprehensive extracted colors over the image series, $D_{i,s}$ denotes the Euclidean distance between the $s$-th extracted color of the $P$-th image and the $i$-th comprehensive extracted color, $D_{j,t}$ denotes the Euclidean distance between the $t$-th extracted color of the $P$-th image and the $j$-th comprehensive extracted color, $f$ is the color tolerance threshold, $a_P = 1$ denotes that the $i$-th and $j$-th comprehensive extracted colors co-occur in the $P$-th image, and $A$ denotes the total number of images in the series.
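The following sketch computes the full co-occurrence matrix $C$ over a series of images according to the definition above; the tolerance threshold f = 30 (in RGB units) is an assumption.

```python
import numpy as np

def cooccurrence_matrix(per_image_colors, comprehensive_colors, f=30.0):
    # per_image_colors: list of (k_p, 3) arrays, the extracted colors of each image
    # comprehensive_colors: (K, 3) array of comprehensive extracted colors
    comp = np.asarray(comprehensive_colors, float)
    K, A = len(comp), len(per_image_colors)
    present = np.zeros((A, K), dtype=bool)
    for p, cols in enumerate(per_image_colors):
        dist = np.linalg.norm(np.asarray(cols, float)[:, None, :] - comp[None, :, :], axis=-1)
        present[p] = (dist <= f).any(axis=0)    # comprehensive color i appears in image p
    # C[i, j] = fraction of the A images in which colors i and j are both present
    C = (present[:, :, None] & present[:, None, :]).sum(axis=0) / A
    return C
```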

Color space distance

In the color clustering and extraction stage, the RGB color space model is used for color value calculation and statistics, and the color space distance is likewise computed in the RGB color space. The difference between colors is expressed by their Euclidean distance, calculated as follows:
$$D_{m,n} = \sqrt{(R_m - R_n)^2 + (G_m - G_n)^2 + (B_m - B_n)^2}$$

where $D_{m,n}$ is the Euclidean distance between the $m$-th and $n$-th extracted colors, with the numerical results normalized; $R_m, G_m, B_m$ are the R, G, B values of the $m$-th extracted color, and $R_n, G_n, B_n$ are the R, G, B values of the $n$-th extracted color. The inter-color difference results allow the user to understand the color relationships between the extracted colors, such as adjacent colors, similar colors, or contrasting colors.
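A one-line sketch of this distance, normalized by the maximum possible RGB distance so that the result lies in [0, 1] (this normalization convention is an assumption), is:

```python
import numpy as np

def color_distance(rgb_m, rgb_n, normalize=True):
    # Euclidean distance between two RGB colors, optionally scaled to [0, 1]
    d = np.linalg.norm(np.asarray(rgb_m, float) - np.asarray(rgb_n, float))
    return d / np.sqrt(3 * 255.0 ** 2) if normalize else d

# color_distance((200, 30, 40), (20, 20, 20))  # ~0.41: strongly contrasting colors
```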

Experiments on Segmentation of Yi Clothing Patterns and Extraction of Color Features

Based on the proposed innovative integration strategy for modern clothing design grounded in Yi dress culture, this paper has constructed a Yi dress pattern segmentation model and a Yi dress color feature extraction model from the two major aspects of Yi patterns and colors, respectively, to provide a source of Yi dress pattern and color elements for modern clothing design.

In this chapter, we will focus on testing the effectiveness of the Yi dress pattern segmentation model proposed in this paper on the segmentation of Yi dress patterns, and verifying the performance of the Yi dress color feature extraction model on the extraction of Yi dress colors.

The experimental environment in this chapter is MATLAB R2018a on the Windows 10 operating system, with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 8 GB of RAM.

Experiment on the segmentation of Yi ethnic dress patterns

The validity index (VXB), a commonly used index in the field of image segmentation, is selected to quantitatively analyze the segmentation results. Evaluating the VXB index measures the performance of the segmentation algorithm more objectively and provides an important reference for further optimization of the algorithm, yielding more accurate and reliable image segmentation results.

In this section, two types of Yi dress images, with simple and with complex patterns, are selected as experimental objects. The simple Yi costume images comprise Image 1, Image 2 and Image 3, to which 10% salt-and-pepper noise, Gaussian noise and mixed noise are added in turn. The complex Yi costume images comprise Image 4, Image 5 and Image 6, with the corresponding noise added as for the simple images. The VXB of each method on the different images is shown in Fig. 1. It can be seen that when different types of noise are added to the Yi dress patterns, the VXB values of the CNN, K-means and FCM algorithms are relatively low, with averages of 4.49%, 5.055% and 5.45%, respectively. The RTFormer algorithm segments the Yi dress images comparatively well, with an average VXB of 10.985%. The Yi dress pattern segmentation model proposed in this paper achieves the best segmentation of Yi dress images among all algorithms and methods, with VXB values on Image 3 and Image 6 as high as 27.22% and 21.14% and an average VXB of 15.28%. The experimental results show that the proposed Yi dress pattern segmentation model has higher efficiency and performance in analyzing Yi dress patterns.

Figure 1. VXB of each method on the different test images

Experiments on Color Feature Extraction of Yi Clothing

To quantify the color extraction effect of the proposed Yi dress color feature extraction model on Yi dress images, two quantitative parameters, color extraction completeness and color accuracy, are introduced as evaluation indexes in this section. Using the color feature extraction model of this paper, the color extraction of 40 Yi dress images from the Yi dress sample set is tested 10 times in three color spaces; the average color extraction completeness and average color accuracy in the three color spaces are shown in Table 1. In terms of color extraction completeness, the values in RGB space, L*a*b* space and HSV space are 92.84%, 95.82% and 90.21%, respectively, all greater than 90%, indicating that the method in this paper can extract the colors in Yi ethnic dress images essentially completely. In terms of the color accuracy parameter, the ratings in RGB space, L*a*b* space and HSV space are 3.19, 3.45 and 3.82, respectively, all greater than 3, showing that the method has strong color extraction accuracy in the different spaces.

Table 1. Completeness of color extraction and color accuracy

Number    Completeness of color extraction (%)    Color accuracy
          RGB      L*a*b*    HSV                  RGB     L*a*b*    HSV
1         93.96    92.92     88.89                2.62    3.55      2.86
2         94.65    96.27     90.17                3.80    2.92      4.27
3         93.58    95.67     91.72                3.15    3.84      4.85
4         91.51    98.02     90.97                3.10    3.55      4.59
5         90.79    95.90     89.13                3.88    4.48      4.40
6         90.61    96.73     91.08                2.22    3.43      2.60
7         93.41    97.26     88.88                3.66    2.86      2.93
8         92.88    94.39     88.07                2.39    3.24      2.35
9         94.46    95.42     92.81                3.78    3.96      4.89
10        92.56    95.59     90.42                3.33    2.70      4.48
Average   92.84    95.82     90.21                3.19    3.45      3.82
Modern Clothing Design Practice Based on Yi Clothing Culture

In the preceding sections, this paper has proposed an innovative integration strategy for modern clothing design in the era of artificial intelligence, together with a Yi clothing pattern segmentation model and a Yi clothing color feature extraction model corresponding to the two dimensions of pattern and color.

In this chapter, the pattern segmentation method and the color feature extraction method for Yi clothing are combined with representative Yi clothing elements and modern clothing styles to design a clothing series. The series targets women aged 22-45, takes "Love at the Sight of Yi" as its theme, and integrates and redesigns Yi clothing pattern elements into modern women's garments in the Liangshan Yi style. In the following, the "Love at the Sight of Yi" series designed in this paper is evaluated from three perspectives: color and pattern matching, clothing style and decorative details, and the degree of ethnic style and design effect.

Eighty respondents with normal color vision, aged 22-45, were invited to participate in the evaluation of the clothing design. All respondents participated voluntarily and fully understood the evaluation process.

Evaluation of color and pattern matching

Words suitable for describing the color and pattern of clothing were selected for the color and pattern matching evaluation, and six groups of evaluation words were finally determined: "simple-gorgeous", "traditional-popular", "incongruous-coordinated", "ordinary-chic", "modern-classical", and "not easy to match-easy to match", referred to in order as A~F. For the six groups of evaluation vocabulary, a five-level Likert scale was used, with a score of 3 indicating neutrality (no bias toward either side); the closer a score is to either end, the higher the degree of recognition of the evaluation word at that end. From the statistics of all respondents' evaluations, the color and pattern matching evaluation results of the "Love at the Sight of Yi" series designed in this paper were obtained, as shown in Figure 2. Among the six evaluation groups, the scores below 3 were "traditional-popular" and "modern-classical", with evaluation values of 2.8 and 1.85, respectively. The evaluation values of the other vocabulary groups were all greater than 3: the "simple-gorgeous", "incongruous-coordinated", "ordinary-chic", and "not easy to match-easy to match" groups scored 3.32, 3.46, 3.4, and 3.48, respectively. Combining the scores of each group, the respondents consider that the overall color and pattern matching style of the "Love at the Sight of Yi" series designed in this paper leans toward gorgeous, traditional, coordinated, chic, modern and easy to match.

Figure 2. Evaluation values of color and pattern matching

Evaluation of clothing style and decorative details

Clothing style and decorative detail design are also main elements of clothing design. A five-point scoring system was used: the closer the score is to 5, the higher the respondents' agreement with and satisfaction toward the clothing style and decorative detail design of the "Love at the Sight of Yi" series. The distribution of the evaluation values of clothing style and decorative detail design is shown in Figure 3. As can be seen from the figure, the distribution is skewed: evaluation values in the [1,3] range occur with low frequency. Slightly more respondents rated the decorative detail design below 3 points than rated the clothing style design below 3 points, 24 people versus 19. Meanwhile, for both clothing style and decorative detail design, the evaluation values are mainly concentrated in the [3,5] range, with high frequency. Overall, the clothing styles and decorative details of the "Love at the Sight of Yi" series are generally recognized by and satisfying to the respondents.

Figure 3. Distribution of evaluation values for clothing style and decorative detail design

Evaluation of the degree of ethnicity and design effects

This section introduces the "degree of ethnic style" as an indicator to measure how well Yi dress culture elements are integrated into the "Love at the Sight of Yi" series designed in this paper, and also makes a comprehensive assessment of the overall design effect of the series. The degree-of-ethnic-style indicator has six levels: 0 indicates no ethnic style, 1 very weak, 2 weak, 3 medium, 4 strong, and 5 very strong. The overall design effect is divided into four levels: 0 indicates a bad design effect, 1 fair, 2 good, and 3 very good. Before the evaluation began, the respondents were numbered from 1 to 80. The respondents' evaluations of the degree of ethnic style and the design effect of the "Love at the Sight of Yi" series are shown in Figure 4, where the x-axis represents the respondent number, the y-axis the degree of ethnic style, and the z-axis the overall design effect. From the yz projection in the figure, the respondents who rated the degree of ethnic style in the interval [3,5] and the design effect in the interval [2,3] constitute the majority, as high as 80%. The interval [3,5] covers the "medium" to "very strong" range of ethnic style, while [2,3] covers the "good" to "very good" range of design effect. Clearly, the Yi ethnic style and the overall design effect embodied in the series are generally recognized by the respondents.

Figure 4. Evaluation of the degree of ethnic style and overall design effect

Conclusion

Facing the inheritance and development of Yi clothing culture in Liangshan, China in the era of artificial intelligence, this paper puts forward an innovative integration strategy from the perspective of modern clothing design, and builds a Yi clothing pattern segmentation model and a Yi clothing color feature extraction model to realize the acquisition of pattern elements and color features of Yi clothing.

Performance tests were carried out on the Yi dress pattern segmentation model and the Yi dress color feature extraction model constructed in this paper. The average VXB of the image segmentation model in this paper is 15.28%, higher than those of the compared CNN, K-means, FCM and RTFormer algorithms, so it offers higher efficiency and performance in the segmentation of Yi dress patterns. In the color feature extraction experiments on Yi dress images, the color feature extraction model achieves a color extraction completeness greater than 90% in RGB space, L*a*b* space and HSV space, with color accuracy greater than 3, so its color feature extraction performance is superior.

Using the Yi dress pattern segmentation model and the Yi dress color feature extraction model of this paper, combined with Yi dress elements, a series of modern garments was designed. Respondents consider that the overall color and pattern matching style of the series leans toward gorgeous, traditional, coordinated, chic, modern and easy to match. In the evaluation of clothing style and decorative details, the evaluation values of the "Love at the Sight of Yi" series are mainly concentrated in the [3,5] range, with the overall score closer to 5, and the clothing style and decorative detail design are generally well received. Among the respondents, 80% rated both the degree of ethnic style of the series in the "medium" to "very strong" interval [3,5] and the design effect in the "good" to "very good" interval [2,3]; thus the Yi ethnic style embodied in the series and the overall design effect of the clothing design are widely recognized by the respondents.
