Open Access

Research on Creativity Generation and Innovative Expression Strategies of Graphic Design Based on Graphic Processing Technology

24 Mar 2025


Introduction

With the continuous development of computer technology, graphics processing technology has been widely applied in many fields. From film production and game development to architectural design and product display, its applications keep broadening, and it has become an indispensable part of modern society [1-4]. Graphics processing technology refers to the digitization and manipulation of graphics. A digital image is a two-dimensional signal composed of pixels, each of which carries a grayscale or color value. The main purpose of graphics processing techniques is to extract, improve, and analyze graphical information [5-8]. These techniques mainly include image acquisition, image enhancement, and image compression, and they have important applications in idea generation and creative expression in graphic design [9-11].

Graphic design is a creative activity that takes the two-dimensional plane as its basic carrier and conveys information and mood through visual symbols such as text, pictures, and colors. Graphic processing technology greatly expands a designer's imaginative space and design capabilities, allowing design work to be completed more flexibly and quickly [12-15]. For example, in poster design, graphics processing technology helps designers process and edit pictures to make them more colorful and visually appealing. In brochure design, it can make photos more realistic and text clearer, better expressing the characteristics and advantages of a product. In packaging design, designers can use it to create more creative and attractive packaging images, making products more readily accepted by consumers [16-19].

This paper notes that the application of graphic image processing technology in graphic design falls mainly into two areas: poster design and logo design. Addressing the new demands that logo design places on graphic processing, as well as the deficiencies of existing image super-resolution models, an image super-resolution network with an added multi-frequency fusion attention module is proposed, and its performance is analyzed using image quality evaluation indexes. Combined with the cognitive and aesthetic requirements of dynamic graphics, an evaluation system for selecting the optimal dynamic graphic design scheme is constructed. This evaluation system is used to score dynamic graphic design samples, and innovative expression strategies for a new form of graphic design (dynamic graphics) are proposed based on the resulting scores.

Graphic design requirements and graphic processing techniques
Application of graphic image processing technology in graphic design

Graphic image processing techniques are dedicated to various manipulations and improvements of digital images. Their basic goal is to use computer algorithms and mathematical models to accurately analyze, process, and enhance images to meet the needs of specific applications. The scope of graphic image processing technology covers image acquisition, image preprocessing, feature extraction, image recognition, image synthesis, and more, forming a comprehensive and complex system [20-21].

Creative Poster Design

In today’s digital era, graphic image processing technology brings unprecedented innovation and inspiration to the design field. In creative posters, the boundaries of traditional design can be broken through advanced graphic processing technology: abstract lines, light, and shadow intertwine to outline an illusory and unique space, drawing viewers from reality into a virtual world in which they can immerse themselves.

Rich and wonderful changes in color and graphics are another feature of graphic image processing technology. This is not only a display of technology, but also a bold exploration of design language, highlighting the endless possibilities of image processing technology in creative design.

The three-dimensionality and texture of a poster are enhanced through graphic image processing technology. Every shadow and highlight is carefully tuned to present a more realistic and fascinating effect, so that viewers feel they can almost touch every element in the image; this is precisely the marvelous power that technology lends to design.

Brand identity and logo design

Brand identity and logo design are key areas in graphic design. Through graphic image processing technology, designers are able to utilize their ideas and creativity in shaping brand images. Vectorization and graphic processing techniques enable brand logos to retain their unique identity while being clear and scalable, whether on large billboards or small business cards.

Visual enhancement techniques provide more expressive means of logo design. With treatments such as shading, lighting, and gradients, designers are able to give logos a more vivid and three-dimensional appearance.

Mathematical Foundations of Graphics Processing Techniques

An image is an objective reflection of a natural scene, and the image saved on a photographic, drawing, or video recording medium is continuous. A computer cannot receive and process an image whose spatial distribution and brightness values are continuous, so the continuous image must be discretized, that is, digitized. This work includes both sampling and quantization.

By sampling, a continuous image is spatially partitioned into an M × N grid, and each grid cell is represented by a luminance value. Sampling discretizes the continuous image spatially, but the luminance values at the sampling points are still continuously distributed over some amplitude interval. The process of converting the continuously varying luminance at a sampling point to a single specific value is called quantization, i.e., the discretization of luminance at the sampling point.

An image is sampled and quantized to obtain a digital image, which can usually be represented by a grayscale matrix:

$$f(x,y)=\begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,n-1) \\ f(1,0) & f(1,1) & \cdots & f(1,n-1) \\ \vdots & \vdots & & \vdots \\ f(m-1,0) & f(m-1,1) & \cdots & f(m-1,n-1) \end{bmatrix}$$

The elements of the matrix are called pixels. Each pixel has two coordinates, x and y, indicating its position in the image. The value of a pixel represents the gray level, which corresponds to the brightness of the original image at that point.

The quantization of an image is the discretization of its gray values, i.e., the gray value of each pixel in the grayscale matrix is converted to the corresponding gray level, whose count is expressed as an integer power of 2:

$$L = 2^m, \quad m = 1, 2, \cdots, 8$$

If m = 1, the image is a black-and-white binary image with gray levels 0 and 1. If m = 8, the image is a grayscale image with 256 levels, with gray values ranging from 0 to 255.

A digitized image entered into a computer is characterized by three quantities: the grayscale resolution is represented by m (giving 2^m gray levels), and the spatial resolution by M and N. The total number of bits b entering the computer is:

$$b = M \times N \times m$$

The values of M, N, and m obviously affect image quality, as well as the size of the computer's storage area and its processing speed. Generally speaking, the larger M, N, and m are, the better the image quality. However, because of the limits of human visual perception, once M, N, and m are large enough, image quality is good enough; increasing them further yields no significant improvement in quality, while the memory occupied grows and processing speed suffers considerably.
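A minimal sketch of the digitization arithmetic above (the function names are illustrative, not from the paper): mapping a continuous luminance to one of L = 2^m gray levels, and computing the storage requirement b = M × N × m.

```python
# Illustrative sketch of sampling/quantization arithmetic (not the paper's code).

def quantize(value, m):
    """Map a continuous luminance in [0, 1) to one of L = 2**m gray levels."""
    levels = 2 ** m
    return min(int(value * levels), levels - 1)

def storage_bits(M, N, m):
    """Total bits b = M * N * m for an M x N image with m-bit pixels."""
    return M * N * m

# A mid-gray sample quantized with 8 bits falls at level 128.
print(quantize(0.5, 8))           # 128
print(quantize(0.5, 1))           # 1 (binary image: levels 0 or 1)
print(storage_bits(512, 512, 8))  # 2097152 bits = 256 KiB
```

Doubling the spatial resolution (M, N) quadruples b, which is why the text warns that quality gains saturate while storage and processing cost keep rising.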

Changes in the direction, size, and shape of a graphic are accomplished by geometric transformations that change the coordinate description of the object. The basic geometric transformations are translation, rotation, scaling, shearing, and symmetry. Compound geometric transformations are those in which the figure undergoes more than one basic geometric transformation; the result is the product of the matrices of the individual basic transformations. For the geometric transformation of a two-dimensional figure in homogeneous coordinates, the transformation matrix can be expressed as:

$$T=\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$$

Here the submatrix $\begin{bmatrix} a & b \\ d & e \end{bmatrix}$ produces scaling, symmetry, rotation, and shearing of the figure; $\begin{bmatrix} g & h \end{bmatrix}$ performs translation; $\begin{bmatrix} c \\ f \end{bmatrix}$ produces a projective transformation; and $[i]$ scales the figure as a whole.

For a point (X, Y) with homogeneous coordinates $[\,X \ \ Y \ \ 1\,]$, let the transformed point be (X′, Y′) with homogeneous coordinates $[\,X' \ \ Y' \ \ 1\,]$. The transformation process can be expressed as:

$$[\,X' \ \ Y' \ \ 1\,] = [\,X \ \ Y \ \ 1\,]\,T$$

The geometric transformation matrices for the most commonly used transformations, such as translation, scaling, and counterclockwise rotation around the coordinate origin, are:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_x & T_y & 1 \end{bmatrix} \qquad \begin{bmatrix} S_x & 0 & 0 \\ 0 & S_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $T_x$ and $T_y$ are the translations in the x and y directions, respectively; $S_x$ and $S_y$ are the scaling factors in the x and y directions, respectively; and θ is the angle of counterclockwise rotation around the coordinate origin.

For points and line segments, the geometric transformation applies the transformation formula (6) directly to the coordinates of the points and of the points on the line segments.
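A minimal sketch of these homogeneous-coordinate transforms, using the same row-vector convention [X Y 1]·T as above (function and variable names are illustrative):

```python
import math

# Homogeneous 2D transforms in row-vector convention: [X Y 1] @ T.

def translate(tx, ty):
    return [[1, 0, 0], [0, 1, 0], [tx, ty, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(theta):
    """Counterclockwise rotation about the origin by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def apply(T, x, y):
    """Transform the point (x, y): [X' Y' 1] = [x y 1] T."""
    p = [x, y, 1]
    X = sum(p[k] * T[k][0] for k in range(3))
    Y = sum(p[k] * T[k][1] for k in range(3))
    return X, Y

print(apply(translate(3, 4), 1, 1))      # (4, 5)
print(apply(scale(2, 2), 1, 1))          # (2, 2)
print(apply(rotate(math.pi / 2), 1, 0))  # approximately (0, 1)
```

A compound transformation is just the matrix product of the basic matrices, applied in the order the row-vector convention dictates.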

For the cubic B-spline curves used later, the geometric transformation first transforms each control vertex $P_i(X_i, Y_i)\ (i = 0, 1, \cdots, n)$ of the constructed B-spline curve, and then constructs a new B-spline curve from the resulting control vertices $P_i'(X_i', Y_i')\ (i = 0, 1, \cdots, n)$. In this way, the geometric transformation of the B-spline curve is completed.

There is a transformation relationship between the window and the viewing area. Let the width of the viewing area be $L_v$, its height $H_v$, and its lower-left corner $(X_{v1}, Y_{v1})$; let the width of the window be $L_w$, its height $H_w$, and its lower-left corner $(X_{w1}, Y_{w1})$. For a point $(X_v, Y_v)$ in the viewing area there is a corresponding point $(X_w, Y_w)$ in the window, with:

$$\begin{cases} X_w = X_{w1} + \dfrac{L_w}{L_v}(X_v - X_{v1}) \\[2mm] Y_w = Y_{w1} + \dfrac{H_w}{H_v}(Y_v - Y_{v1}) \end{cases}$$

Therefore, as long as every pair of windows and view areas are defined, the coordinate relationship between the two can be established so that the graphics of each part of the user’s coordinate system can be displayed in different view areas with different scales and positional relationships. It follows that a proper selection of windows can facilitate the observation of all or part of the information in a graphic for computer graphics processing.
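The window/viewing-area mapping above can be sketched directly; the corner-plus-size tuple convention here is an assumption for illustration:

```python
# Viewing-area-to-window mapping: a point maps linearly via the
# width and height ratios between the two rectangles.

def viewport_to_window(Xv, Yv, win, view):
    (Xw1, Yw1, Lw, Hw) = win    # window lower-left corner + size
    (Xv1, Yv1, Lv, Hv) = view   # viewing-area lower-left corner + size
    Xw = Xw1 + (Lw / Lv) * (Xv - Xv1)
    Yw = Yw1 + (Hw / Hv) * (Yv - Yv1)
    return Xw, Yw

# Map the centre of a unit viewing area into a 100x100 window at (10, 10).
print(viewport_to_window(0.5, 0.5, (10, 10, 100, 100), (0, 0, 1, 1)))  # (60.0, 60.0)
```

Changing the window size or position rescales and repositions the displayed portion of the graphic, which is exactly how different view areas show the same user coordinates at different scales.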

The Need for Logo Design in the Age of Smart Media
Needs of the external environment

The development of logo design in the era of smart media is related to many aspects such as social economy, culture, media, technology, etc. Changes in the external environment present new requirements for logo design, leading to the development and iteration of logo designs.

The development of mass media also plays an important role in the development of logo design. People’s exposure to various information provides an opportunity for symbol consumption. The existence and development of mass intelligent media provide an excellent communication carrier for logo design, and different communication subjects realize the importance of logo design, which also makes it necessary for logo design to adapt to different media.

The development of science and technology in the era of smart media enables logo design to be presented in the form of dynamization or virtualization, and artificial intelligence, virtual reality and other technologies have deepened the influence of technology on various industries, which also prompts designers to incorporate more scientific and technological elements to strengthen the scientific and technological expression of logo design.

Demand within the brand

The business purpose of a brand is to obtain economic benefits, and a series of economic activities are also centered on this purpose. In the context of increasingly fierce market competition in the age of smart media, brand logo design can, to a certain extent, enhance the symbolic value of the brand.

At the visual level, most brand logos have increased color brightness, which is related to adapting to the sharpness of smart-device screens. With more and more display channels, and media forms that differ in how they present content, maintaining visual consistency across media while enhancing expressive power is an important internal demand of brand logo design today.

New technology of graphic image processing in graphic design
Description of the problem

Most of the current image super-resolution models follow the same type of module design for the most basic residual network module in the network, which always performs feature mining at the same resolution in the feature space. The information in the feature space varies depending on the resolution, but there has been no distinction between the frequency components of the feature space in previous research on image super-resolution tasks.

Taking this as a starting point, this paper designs a new multi-frequency fusion attention module (MFAB) for feature enhancement based on the different frequency components in the feature space. The module builds on the assumption of a feature scale space and differentiates how information of different frequencies in the feature space is used. An attention mechanism assigns the extracted feature information of different frequencies back to the original features, so that information of different frequencies in the original feature space carries different weights, improving how efficiently the corresponding frequency components are utilized. In addition, a lightweight design reduces the parameter and computation burden that module replacement brings to the network.

Network structure

In this section, the single-image super-resolution enhanced deep residual network (EDSR) is used as the basic framework, and the new multi-frequency fusion attention network (MFAN) is obtained by replacing the base residual modules in EDSR with the proposed MFAB.

The overall structure of the EDSR model is divided into three network modules: feature extraction, nonlinear mapping, and reconstruction. EDSR is trained using the mean absolute error (MAE, also known as the L1 loss), defined in equation (8):

$$\mathrm{MAE}(X, Y; \theta) = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} \left| Y(i,j) - X(i,j) \right|}{h \times w}$$
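The L1 loss of equation (8) can be written out directly for small nested-list "images" (a didactic sketch; real training frameworks compute this over tensors):

```python
# Pixel-wise mean absolute error (the L1 loss used to train EDSR)
# for plain nested-list "images" of size h x w.

def mae(X, Y):
    h, w = len(X), len(X[0])
    return sum(abs(Y[i][j] - X[i][j]) for i in range(h) for j in range(w)) / (h * w)

pred   = [[0.0, 0.5], [1.0, 0.5]]
target = [[0.0, 1.0], [1.0, 0.0]]
print(mae(pred, target))  # 0.25
```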

The network takes low-resolution images as input to the network. After the feature extraction module it is fed into the feature enhancement module, which consists of several multi-frequency fusion attention modules (MFAB). The strengthened features are fed into the final image reconstruction module to obtain the final resolution-enhanced high-definition image as the output of the network. The most important part of the whole network model framework is the feature reinforcement module, which determines the efficiency of the whole super-resolution network to utilize the information in the feature space.

Multi-frequency fusion attention module

The module proposed in this section starts from a new perspective and, based on scale-space theory, digs deeper into the feature information corresponding to different frequencies in the feature space. The most basic residual module in the feature enhancement module is improved, yielding the multi-frequency fusion attention module (MFAB).

In MFAB, the standard residual module is still used as the feature extractor, but a new module (MFF) that fuses multi-frequency information is added after this module, and MFAB is the product of the combination of the two.

From the structure of the Multifrequency Fusion Attention Block (MFAB), it can be seen that MFAB adds a new branch of attention to the traditional residual module. This branch focuses on feature mining in space with different resolutions, fully considering the importance of different frequency information of features. And then the information sampled in different frequency spaces is upsampled to the original resolution space for fusion, so as to obtain a weight map with different attention to different information.

Let the input feature of the i-th MFAB be $f_{i-1}$. The input feature is passed through a plain convolution module to obtain feature $x_i$, i.e.:

$$x_i = \mathrm{Conv}_{3\times3}(\mathrm{ReLU}(\mathrm{Conv}_{3\times3}(f_{i-1})))$$

The feature resolution is reduced by three downsampling modules to obtain new features $x_i^2$, $x_i^4$, $x_i^8$ at different resolutions, i.e.:

$$x_i^2 = \mathrm{Downsample}(x_i, 2), \quad x_i^4 = \mathrm{Downsample}(x_i, 4), \quad x_i^8 = \mathrm{Downsample}(x_i, 8)$$

The new features at each resolution then pass through a basic block for feature extraction and information filtering, giving the feature weight samples $s_i^2$, $s_i^4$, $s_i^8$ at each resolution, i.e.:

$$s_i^2 = \mathrm{Conv}_{1\times1}(\mathrm{ReLU}(\mathrm{Conv}_{3\times3}(x_i^2))), \quad s_i^4 = \mathrm{Conv}_{1\times1}(\mathrm{ReLU}(\mathrm{Conv}_{3\times3}(x_i^4))), \quad s_i^8 = \mathrm{Conv}_{1\times1}(\mathrm{ReLU}(\mathrm{Conv}_{3\times3}(x_i^8)))$$

Each of the obtained feature weight samples is then upsampled back to the original resolution, and fusion yields the feature weights $w_i$ in the original space, i.e.:

$$w_i = \mathrm{Conv}_{3\times3}(\mathrm{Upsample}(s_i^2, 2) + \mathrm{Upsample}(s_i^4, 4) + \mathrm{Upsample}(s_i^8, 8))$$

After the weight map is normalized with a sigmoid, it weights the original features (× denotes the Hadamard product of matrices), and the result is added to the input features $f_{i-1}$ from the identity-mapping branch to give the final output features $f_i$, i.e.:

$$f_i = f_{i-1} + x_i \times \mathrm{Sigmoid}(w_i)$$
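A minimal numeric sketch of this final combination step, with tiny nested lists standing in for feature maps (a real implementation operates on multi-channel tensors):

```python
import math

# Sketch of MFAB's output combination f_i = f_{i-1} + x_i * sigmoid(w_i):
# the fused multi-frequency weight map gates the residual features
# element-wise (Hadamard product) before the identity-branch addition.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mfab_combine(f_prev, x, w):
    return [[f_prev[i][j] + x[i][j] * sigmoid(w[i][j])
             for j in range(len(f_prev[0]))] for i in range(len(f_prev))]

f_prev = [[1.0, 2.0]]
x      = [[4.0, 4.0]]
w      = [[0.0, 100.0]]  # weight 0 -> gate 0.5; large weight -> gate near 1
print(mfab_combine(f_prev, x, w))  # [[3.0, ~6.0]]
```

The sigmoid gate lets information at frequencies the attention branch considers important pass through with weight near 1, while down-weighting the rest, which is exactly the differentiated treatment of frequency components described above.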

So that it can be easily embedded into existing image super-resolution networks as a replacement for the existing residual module, this section applies a lightweight design to the attention branch, placed after the Conv-ReLU-Conv output of the feature maps in the traditional residual branch and before the addition with the identity-mapping branch.

The scheme used in this section is to reduce the number of channels of the feature map and maximize the downsampling multiplicity, which corresponds to the additional modules in the network design and the corresponding 2-, 4-, and 8-fold downsampling multiplicities.

Given an input channel count $C_{in}$, output channel count $C_{out}$, input feature map size $(H_{in}, W_{in})$, output feature map size $(H_{out}, W_{out})$, convolution kernel size $K$, a stride of 1, and ignoring the bias term, the computation of a standard convolution is:

$$\mathrm{FLOPs} = (C_{in} \times K \times K + C_{in} - 1) \times H_{out} \times W_{out} \times C_{out}$$

With a channel reduction factor of n, the computation of the standard convolution inside the lightweight multi-frequency fusion attention module proposed in this section is reduced to:

$$\mathrm{FLOPs}' = \left(\frac{C_{in}}{n} \times K \times K + \frac{C_{in}}{n} - 1\right) \times H_{out} \times W_{out} \times \frac{C_{out}}{n}$$

Clearly $\mathrm{FLOPs} \approx \mathrm{FLOPs}' \times n^2$: the computation of the new multi-frequency fusion attention branch becomes about $1/n^2$ of the original. This greatly reduces the branch's share of the computation, enabling the proposed multi-frequency fusion attention module to replace the residual module in an ordinary super-resolution task without increasing the computation too much.
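A quick numeric check of the claimed reduction, using assumed layer dimensions (64 channels, a 48×48 output, K = 3, n = 4; these numbers are illustrative, not the paper's configuration):

```python
# FLOPs of a standard KxK convolution versus the channel-reduced version
# used inside the lightweight attention branch (reduction factor n).

def conv_flops(c_in, c_out, h_out, w_out, k):
    return (c_in * k * k + c_in - 1) * h_out * w_out * c_out

def reduced_conv_flops(c_in, c_out, h_out, w_out, k, n):
    return (c_in // n * k * k + c_in // n - 1) * h_out * w_out * c_out // n

full = conv_flops(64, 64, 48, 48, 3)
lite = reduced_conv_flops(64, 64, 48, 48, 3, 4)
print(full / lite)  # close to n**2 = 16
```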

Performance evaluation of image super-resolution enhancement techniques
Image quality evaluation criteria

In the field of image SR, peak signal-to-noise ratio (PSNR) compares two images pixel by pixel and measures the difference between their pixel values. PSNR objectively reflects the degree of distortion between the images before and after reconstruction, but its results occasionally differ considerably from the subjective perception of the human eye, so image quality assessment in the SR field combines PSNR with SSIM. PSNR is defined in Equation (16):

$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{(2^n - 1)^2}{\mathrm{MSE}}\right)$$

Structural similarity (SSIM) is an objective index for evaluating the degree of similarity between images; its value lies in [0, 1], and a higher SSIM indicates greater similarity between the two images. In the field of image SR, a higher SSIM value indicates a more effective reconstruction. SSIM combines comparisons of brightness, contrast, and structure, and is defined as:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

In the above equation, $\mu_x$ and $\mu_y$ are the average gray values of sample images x and y, respectively; $\sigma_x^2$ and $\sigma_y^2$ are the variances of images x and y; $\sigma_{xy}$ is their covariance; and $C_1$ and $C_2$ are small constants that prevent instability when the denominator approaches zero.
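Both metrics can be sketched for tiny flat-list gray images. Note that this is a didactic, single-window version: practical SSIM is averaged over local windows, and the C1/C2 constants below are the conventional choices for 8-bit images.

```python
import math

# PSNR and a global (single-window) SSIM for small 8-bit gray images
# stored as flat lists.

def psnr(x, y, bits=8):
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    peak = (2 ** bits - 1) ** 2
    return 10 * math.log10(peak / mse)

def ssim(x, y, C1=6.5025, C2=58.5225):  # conventional constants for 8-bit range
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + C1) * (2 * cov + C2)) / ((mx**2 + my**2 + C1) * (vx + vy + C2))

a = [10, 20, 30, 40]
b = [12, 18, 31, 39]
print(psnr(a, b))  # higher is better; undefined for identical images (MSE = 0)
print(ssim(a, a))  # 1.0 for identical images
```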

Experimental environment and dataset

The equipment configuration used for the experiments in this paper is an Intel i7-8700 CPU with 32 GB of RAM and an RTX 2070 GPU with 8 GB of video memory, running 64-bit Ubuntu 16.04 LTS. The deep learning framework used for model training is Caffe, and the software platform for model testing is MATLAB 2021b.

The experiments in this paper use the Berkeley segmentation dataset (containing 200 images) as the training data. To make full use of the 200 training images and improve the model's generalization ability, the experiments apply three kinds of data augmentation to them:

(1) rotating the images by 90°, 180°, and 270°; (2) flipping the images horizontally; (3) downscaling the images with scaling factors of 0.9, 0.8, 0.7, and 0.6.
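The augmentations can be sketched on a tiny nested-list "image"; the downscaling here uses simple subsampling for illustration, whereas the fractional factors 0.9-0.6 above would require proper interpolation:

```python
# Data-augmentation sketch on a tiny 2x2 "image" (nested lists):
# 90-degree rotation, horizontal flip, and subsampling-based downscaling.

def rotate90(img):
    """Rotate the image by 90 degrees (transpose of the vertically flipped image)."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Flip the image horizontally (reverse each row)."""
    return [row[::-1] for row in img]

def downscale(img, step):
    """Keep every step-th pixel in each dimension (nearest-neighbour subsampling)."""
    return [row[::step] for row in img[::step]]

img = [[1, 2],
       [3, 4]]
print(rotate90(img))      # [[3, 1], [4, 2]]
print(hflip(img))         # [[2, 1], [4, 3]]
print(downscale(img, 2))  # [[1]]
```

Composing the three operations multiplies the effective training-set size, which is the point of the augmentation step.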

The test sets used for the experiments in this paper are four datasets commonly used in the field of image super-resolution reconstruction: Set5, Set14, BSD100, and Urban100. The test sets mainly contain people, animals, natural scenery, and cityscapes.

Experimental parameterization

Training Sample Preprocessing

In this paper, the original high-resolution images are first downsampled by a scale factor of m using bicubic interpolation to generate the corresponding LR images. Each LR image is cropped into a set of sub-images of size $l_{sub} \times l_{sub}$ pixels, and the corresponding real HR image is divided into sub-images of size $m l_{sub} \times m l_{sub}$. These LR/HR sub-image pairs are the training samples.

The models in this paper were trained using the Caffe framework, with the transposed convolutional layer producing an output of size $(m l_{sub} - m + 1)^2$, and $(m-1)$-pixel boundaries were cropped from the HR sub-images. Finally, the sizes of the LR/HR sub-image pairs are set to $41^2/79^2$, $21^2/73^2$, and $14^2/76^2$ for the models with scales 2, 3, and 4, respectively.

Training details

The model proposed in this paper has a feature enhancement module whose enhancement unit uses 64 and 32 kernels for the two convolutional layers on each path. The transposed convolution uses a kernel size of 24 × 24 at all scales, all convolutional layers in the model are initialized with the MSRA method, and all activation functions are Leaky ReLU.

Model training is optimized using the Adam optimizer with a batch size of 64 and a weight decay of 10−4. The model is first trained with the L1 loss and then fine-tuned with the L2 loss. The learning rate is initially set to 10−4.

Experimental results and analysis

To verify the validity of the proposed model, it is quantitatively compared with the bicubic interpolation method (Bicubic), the VDSR model, the DRCN model, and the IDN model on four datasets and three scales. The VDSR, DRCN, and IDN models are all image super-resolution reconstruction models based on deep residual networks.

The experiments in this paper use two evaluation metrics, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), to evaluate the model performance, and the comparison of the experimental results of the different models on the test set is shown in Table 1.

Table 1. Average PSNR and SSIM values on the test sets

Data set Scale Bicubic VDSR DRCN IDN Ours
Set5 x2 35.25/0.9321 45.25/0.9361 41.85/0.9212 41.52/0.8695 46.77/0.9525
x3 30.11/0.8047 34.61/0.9112 37.16/0.9103 41.06/0.9211 43.61/0.9301
x4 28.36/0.8124 41.56/0.8241 39.42/0.8893 32.42/0.8694 47.15/0.9052
Set14 x2 31.75/0.7216 32.65/0.9159 32.76/0.7214 36.12/0.7953 40.29/0.9332
x3 28.98/0.7178 36.89/0.7549 31.52/0.8245 35.12/0.8841 37.62/0.9124
x4 29.34/0.8542 31.21/0.8121 29.98/0.7936 35.43/0.9016 33.12/0.9175
BSD100 x2 30.15/0.7893 30.54/0.8512 32.96/0.8121 32.16/0.7715 36.79/0.8971
x3 35.62/0.8497 29.68/0.7562 32.56/0.8246 33.79/0.8925 38.44/0.9015
x4 38.54/0.8511 30.24/0.8266 29.81/0.9013 32.15/0.9196 40.98/0.9127
Urban100 x2 31.89/0.8647 29.79/0.8978 28.64/0.8694 33.78/0.9101 35.67/0.9134
x3 29.18/0.7412 31.52/0.7552 29.43/0.8941 35.88/0.9133 34.05/0.9217
x4 25.64/0.6689 28.34/0.7496 30.14/0.8236 30.21/0.8264 33.67/0.8596

Compared with the Bicubic method, the VDSR model, and the DRCN model, the image super-resolution model proposed in this paper shows a large improvement in both peak signal-to-noise ratio and structural similarity, and it outperforms the IDN model in most cases. Its performance on the Set5 dataset is particularly notable, where the peak signal-to-noise ratio reaches 46.77 dB, 43.61 dB, and 47.15 dB for scale factors 2, 3, and 4, respectively.

At scale factor 3 on Set5, the PSNR of this paper's model improves by 13.5 dB, 9 dB, 6.45 dB, and 2.55 dB, and the SSIM improves by 12.54, 1.89, 1.98, and 0.9 percentage points, over the Bicubic method, the VDSR model, the DRCN model, and the IDN model, respectively. This demonstrates that the proposed feature enhancement module can enhance features more effectively and improve the image reconstruction effect.

In addition, this section compares the runtime performance of the proposed model with that of the VDSR, DRCN, and IDN models. The runtime comparison is shown in Table 2 (in seconds).

Table 2. Runtime comparison of the different models (s)

Data set Scale VDSR DRCN IDN Ours
Set5 x2 0.053 0.755 0.019 0.025
x3 0.068 0.715 0.012 0.019
x4 0.056 0.725 0.009 0.017
Set14 x2 0.135 1.623 0.035 0.032
x3 0.147 1.528 0.018 0.017
x4 0.098 1.631 0.012 0.014
BSD100 x2 0.065 0.998 0.019 0.006
x3 0.076 0.989 0.015 0.015
x4 0.072 0.993 0.007 0.008
Urban100 x2 0.468 5.631 0.084 0.074
x3 0.512 5.248 0.036 0.063
x4 0.479 5.123 0.052 0.029

The model proposed in this paper performs well in terms of runtime: it stays within 0.08 seconds on the Set5, Set14, BSD100, and Urban100 datasets. Compared with the VDSR and DRCN models, the runtime is significantly reduced, and it is only slightly higher than that of the IDN model, maintaining a high level of runtime performance. The proposed model thus achieves real-time speed while maintaining reconstruction accuracy.

Innovative applications of graphic design and creative expression strategies
Graphical Dynamization Design and Evaluation
Graphic design dynamization

The image super resolution technique proposed in this paper is applied to dynamic graphic design.

Graphic design dynamization has the following innovations. Unlike conventional video or animation, dynamized graphic design can intercept video highlights and add other design elements, satisfying the audience's curiosity about unplayed footage while presenting static, continuous imagery in a dynamic form. Dynamic graphic design therefore demands content that is short, novel, and striking in order to convey a sense of motion, using three-dimensional characters and other constituent elements to build dynamic models. Because of its inherent two-dimensional spatial characteristics, the work cannot exceed its spatial boundaries, so designers expand the expressive space as much as possible through content and presentation methods, balancing the dynamic and the static.

Evaluation design

To ensure the validity of the survey results, a total of 20 experts and scholars related to motion graphic design were invited to conduct the test. Among them, there are 4 teachers of visual communication design, 3 graduate students of art design, 3 doctoral students of design, 3 graduate students of digital media design, 4 dynamic designers, and 3 designers in the direction of user perception.

The dynamic graphic perceptual fluency factors, dynamic graphic cognitive and aesthetic themes were refined to constitute the specific components of dynamic graphic design evaluation based on perceptual fluency. Perceptual attributes, perceptual experience attributes, cognitive attributes, and aesthetic attributes are subdivided separately. Specifically, clear visual organization A1, coherent graphic movement A2, smooth camera movement A3, reasonable temporal order A4 and effective audio-visual integration A5 are categorized as perceptual attributes. Content familiarity B1, stylistic difference B2 and perceptual imagination B3 were categorized as perceptual experience attributes. Information readability C1, attention attraction C2, and memory retention C3 were categorized as cognitive attributes. Aesthetic pleasure D1, aesthetic expectation D2, and aesthetic interest D3 were categorized as aesthetic attributes.

The specific elements of the dynamic graphic design evaluation model based on perceptual fluency are as follows:

Goal layer. The only element of the goal layer is the optimal design solution for dynamic graphics based on perceptual fluency.

Criteria layer. For dynamic graphic design, the main attributes that affect its perceptual fluency are perceptual attributes, perceptual experience attributes, cognitive attributes, and aesthetic attributes. These four are taken as the model criterion layer.

Sub-criteria layer. The 14 specific design elements described above (A1-A5, B1-B3, C1-C3, D1-D3) form the sub-criteria layer.

Evaluation methods for motion graphics design

Each tester was asked to compare the relative importance of the elements within the same level pairwise. Scores used a 1-9 scale, and the geometric mean algorithm was applied to construct judgment matrices and compute the weight of each design element. A consistency test was then performed on each judgment matrix to ensure its logical soundness. After the computation, all CR values were less than 0.1, so the consistency test was passed. The results are shown in Table 3.
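The weighting and consistency procedure can be sketched as follows. The 3×3 judgment matrix and RI value below are illustrative assumptions, not the experts' actual data:

```python
import math

# AHP sketch: geometric-mean weights from a pairwise judgment matrix,
# plus the CI/CR consistency check (CR < 0.1 passes).

def ahp_weights(M):
    n = len(M)
    gm = [math.prod(row) ** (1 / n) for row in M]  # geometric mean of each row
    s = sum(gm)
    return [g / s for g in gm]

def consistency_ratio(M, w, RI):
    n = len(M)
    # lambda_max estimated from (M w)_i / w_i averaged over rows
    Mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Mw[i] / w[i] for i in range(n)) / n
    CI = (lam - n) / (n - 1)
    return CI / RI

# A perfectly consistent 3x3 matrix: each element twice as important as the next.
M = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w = ahp_weights(M)
print(w)                              # ~[0.571, 0.286, 0.143]
print(consistency_ratio(M, w, 0.58))  # ~0.0, so the check passes
```

With 20 experts, the individual judgments are typically aggregated (e.g. by geometric mean) into one matrix per criterion before this computation.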

Table 3. Consistency test results

The best design for dynamic graphics based on perceptual fluency Perceptual attribute Perceptual experience attribute Cognitive attribute Aesthetic attribute
λmax 4.682 6.357 4.365 4.052 4.139
CI 0.021 0.089 0.007 0.004 0.008
RI 0.904 1.135 0.634 0.337 0.563
CR 0.025 0.084 0.025 0.005 0.002

In order to calculate the comprehensive weight ranking of each design element of the sub-criteria layer, the design elements of the sub-criteria layer are multiplied with the design elements of the corresponding criterion layer to obtain the comprehensive weights of the 14 design elements of the dynamic graphic, which are ranked according to the magnitude of the weight values.

The perceptual fluency-based weight ranking of the dynamic graphic design elements is shown in Figure 1. The top three design elements are clear visual organization (A1 = 0.2012), aesthetic interest (D3 = 0.1896), and coherent graphic movement (A2 = 0.1287). From the weight calculation, the four design attributes of the criterion layer rank, in order, as perceptual attributes, aesthetic attributes, cognitive attributes, and perceptual experience attributes. In dynamic graphic design, the most important index for improving perceptual fluency is therefore the perceptual attribute, and this perspective should be prioritized in design creation.

Figure 1.

Weight ranking of dynamic graphic design elements based on perceptual fluency

Dynamic graphic design application and strategy adjustment

In this paper, we apply the graphic image super-resolution processing technique proposed earlier to generate the dynamic logo of an intelligent social APP. This APP mainly consists of six functions: instant square, soul matching, love bell, group chat party, voice matching, and video matching. Based on personal characteristics derived from a test, the APP matches users for chat with partners who are highly similar across multiple dimensions such as values, feelings, preferences, and tastes. In some of these interactive behaviors, super-high-resolution motion graphics processed by image enhancement serve as special emotional icons for dating.

Survey data

A total of 50 interview questionnaires were distributed and all 50 were recovered, of which 46 were valid and 4 were invalid. Considering the personality characteristics of the people who use this intelligent social APP, the questionnaire took the form of a scale and was used for a small-scale quantitative analysis, allowing a more precise assessment of the respondents' scores.

Evaluation results

The questionnaire data are shown in Table 4. User ratings were based on a five-point Likert scale, with scores of 5 to 1 representing the grades very favorable, relatively favorable, fair, relatively poor, and very poor, respectively. The surveyed users rated the dynamic graphic design of this social APP at 3.57 points for perceptual attributes, 3.82 points for aesthetic attributes, 4.11 points for cognitive attributes, and 3.67 points for perceptual experience attributes, all at the medium level or above. The overall dynamic graphic design score of the social APP was calculated to be 75.68.

Questionnaire data

| Evaluation dimension | Evaluation weight | User evaluation (mean) | Evaluation result |
| Perceptual attribute | 0.3561 | 3.57 | 25.43 |
| Perceptual experience attribute | 0.1147 | 3.67 | 8.42 |
| Cognitive attribute | 0.2414 | 4.11 | 19.84 |
| Aesthetic attribute | 0.2878 | 3.82 | 21.99 |
| The APP's dynamic graphic design score | | | 75.68 |
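The scoring rule implied by Table 4 appears to be: each dimension contributes weight × Likert mean × 20 (scaling the 5-point scale to 100 points), and the total is the sum over the four dimensions. A minimal sketch, assuming that rule:

```python
# Per-dimension contribution = AHP weight x Likert mean (1-5) x 20,
# which maps a mean of 5.0 with total weight 1.0 to a full score of 100.
dims = {  # dimension: (AHP weight, mean user rating)
    "perceptual": (0.3561, 3.57),
    "perceptual experience": (0.1147, 3.67),
    "cognitive": (0.2414, 4.11),
    "aesthetic": (0.2878, 3.82),
}
results = {d: round(w * m * 20, 2) for d, (w, m) in dims.items()}
total = round(sum(results.values()), 2)
print(results)
print(total)  # -> 75.68, matching the reported overall score
```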
Application design analysis of motion graphics

The realization of graphic dynamism is striking: in terms of both visual sensation and information communication, dynamic graphics bring far more value to design than traditional static graphic design.

Combining interviews with observations of the interviewees, it was found that the use of motion graphics has been accepted by the new media community. New media practitioners hope that motion graphic design will not be limited by the medium, but will become a free, open, cross-regional, and multidisciplinary applied design. Design can be made more flexible by tailoring it to users' needs. Advances in intelligent and interactive technology also lead new media practitioners to expect motion graphics to deliver novel experiences that satisfy their pursuit of new things. Future design should therefore pay more attention to the specific needs of each person and propose different solutions for various special situations.

Innovative graphic design has the following two main points:

Focus on visual impact effect

Graphic design is itself an art of visual communication; both for consumers and for the design itself, a visual focus is needed, and creating that focus is essential to visual expression. When people view a graphic design work, they first attend to the visual focus, then to the shape and direction of the image, following a visual process in which the form changes with color and movement trends.

With the development of the new media environment, dynamic graphic design has gained advantages in visual communication over traditional graphic design. Graphic imagery in the new media field is richer in form and, built on humanized service, combines more sensory experience, making the audience more comfortable.

Take the smart social APP as an example: it positions itself as a new media medium for expanding the social circle of the new media crowd. Dynamic graphic design in the APP emphasizes simplicity and clarity. Since the flat space of a cell phone screen is small, the dynamic performance and effects in the logo must be maximally concentrated and concise. Overly complex and scattered dynamic effects not only fail to capture the audience's visual focus, but also scatter the audience's gaze, conveying incorrect or unimportant information. Motion graphic design therefore needs to follow the principle of high expressiveness: it must both create a visual focus and remain identifiable as a whole. With the whole kept largely static, directional details of local motion effects can guide the visual focus from point to point.

Focus on information visualization expression

Graphic design not only lets people feel the visual beauty of graphic communication when looking at graphics, but also expresses, on the canvas, the information and emotions the designer wants to convey.

In traditional design, points, lines, surfaces, and many other factors can only appear in a relatively flat two-dimensional world, which contains limited information space and standards.

With the advent of the era of picture-based information reading, a single static perspective can hardly attract the audience's attention. The intervention of the timeline compensates for this limitation, providing a better visual solution than static design. Dynamic planar space adds the dimension of time, bringing spatio-temporal mobility and depth of expression, and making the message of a work more evocative and easier for the public to understand.

Conclusion

This paper proposes the use of image processing technology, namely an image super-resolution algorithm model, in graphic design. Addressing the new requirements of graphic design in the era of smart media, and combining the innovative advantages of dynamic graphic design with its design elements, a dynamic graphic design evaluation method is constructed. Based on the resulting dynamic design scores, the paper evaluates dynamic graphic applications and proposes innovative expression strategies for dynamic graphics in graphic design.

After setting the experimental parameters, model performance was evaluated using two indexes: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Compared with the Bicubic method, the VDSR model, and the DRCN model, the image super-resolution model proposed in this paper shows a large improvement in both PSNR and SSIM.
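As an illustration of the first index, PSNR can be computed directly from the mean squared error between a reference image and a reconstruction (SSIM is more involved; library implementations such as scikit-image's `structural_similarity` are typically used). This is a generic sketch, not the paper's evaluation code:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a uniform error of 1 gray level gives MSE = 1 -> PSNR ~ 48.13 dB
a = np.zeros((8, 8), dtype=np.uint8)
b = a + 1
print(round(psnr(a, b), 2))
```

Higher PSNR (and SSIM closer to 1) indicates a reconstruction closer to the ground-truth high-resolution image.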

To innovate dynamic graphic design and obtain the optimal dynamic graphic design solution based on perceptual fluency, a dynamic graphic design evaluation model was constructed. The analyzed sample's graphic design was evaluated, with a total score of 75.68. Combined with this sample's graphic design, it is proposed that dynamic graphic design should rest on both visual effect and information visualization, emphasizing the visual and informational interaction between graphics and the audience.
