Open Access

Research on robotic mechanical power sensing model based on multimodal sensor fusion

  
17 March 2025


Introduction

Many tasks in daily life are simple and repetitive, and exploratory activities often expose situations that exceed human physical limits, restricting what people can do [1-2]. This motivated the use of machines in place of people for repetitive or dangerous jobs, and thus the study of robotics began. Robotics is a comprehensive discipline that draws on bionics, mechanics, materials science, computer science, and control science, and it is precisely this interdisciplinary nature that has jointly driven its development [3-4].

Humans perceive the world by acquiring external information through smell, touch, vision, and other senses, and robots likewise need to perceive external information for feedback control [5]. Robot sensors act much like human eyes, ears, and nose: using known physical laws, they convert detected quantities into physical quantities the robot can recognize, analyze, and compute. The measured signal data are sent to a central processor, which executes the corresponding actions to realize the desired functions. Sensors therefore play a very important role in robot motion control [6-7].

Currently, in common multi-sensor robot interaction scenarios, force sensors detect contact forces, cameras capture external visual information, proximity sensors perceive objects approaching and moving away, and acceleration sensors measure object motion and vibration amplitude. de Gea Fernández, J. et al. presented the development of a dual-arm robotic system for human-robot collaboration in industrial production, focusing on the robot’s sensor system and arm control system [8]. Din, S. et al. explored the design and fabrication of multimodal sensor fusion and confirmed, through theoretical analysis and experiments, that flexible printed circuit board substrates can be converted into stretchable circuits integrating multimodal sensors using current PCB fabrication and laser processing techniques [9]. Xue, T. et al. synthesized the research literature on multimodal sensors, summarized current breakthroughs and obstacles, and provided an outlook on future research directions for multimodal sensor fusion [10]. Park, S. et al. proposed introducing multimodal sensing and interaction technologies into wearable robotic rehabilitation devices so that they can adapt to a wide range of upper-limb injury conditions, effectively extending the practical scope of such devices [11]. Wang, Z. et al. illustrated the seamless integration of multi-material systems that enable robots to sense temperature, haptics (i.e., material recognition), and electrochemical stimuli, pointing out that magnetic soft robots with multimodal sensing capabilities can serve as the basis for research and innovation in next-generation magnetic soft robots [12]. Research across these sensing fields reveals that robot perception is a key direction for innovative robotics research.

The drive system is an important component of the robot as a whole. Research in this area covers motion patterns, drive principles, and dynamics analysis, but most work remains theoretical or experimental, while practical applications are scarce. He, J. et al. comprehensively compared and analyzed recent designs of multi-limbed robots, especially their drive systems and dynamic control, and discussed practical trends for future applications [13]. Goldberg, B. et al. envisioned an insect-like robot with autonomously controlled dynamics, introducing microcontrollers and customized drive electronics to improve its flexibility and maneuverability [14]. Pal, A. et al. explored the differences between soft and rigid robots and proposed a new drive approach that exploits mechanical instability to increase drive speed and output power [15]. Farrell Helbling, E. et al. presented cutting-edge research on the design of a small flapping-wing aerial vehicle, in particular its drive technology and flight motion control system, contributing positively to the optimization and innovation of small flapping-wing aerial robots [16]. Yandell, M. B. et al. combined motion capture and force measurement as a technical basis to design wearable walking assistive devices, and revealed the power transmission process between the assistive devices and the human body through their analysis [17].

In this paper, we construct a cross-modal generation model based on audio-visual and haptic multimodal co-representation, which fully exploits the complementarity and co-distribution of multimodal data to achieve cross-modal generation from the audio-visual modalities to the haptic modality. Specifically, the model first encodes the inputs with audio-visual encoders, mapping the different input modalities to a common feature space. The model then uses a decoder on that feature space to generate the target modal image. At the same time, a haptic self-encoding network is used to retain haptic reconstruction information and capture the semantic coherence of the haptic modality itself. Finally, two discriminative models simultaneously impose intra-modal constraints on the high-dimensional data and inter-modal constraints on the low-dimensional features. Compared with current mainstream cross-modal generation methods, the model in this paper uses generative adversarial networks to optimize multimodal co-perception for improved accuracy.

Method

Typically, when a robot uses multiple sensing devices to acquire information in several modalities, each device perceives the environment in isolation. This severs the intrinsic correlation between the modalities and loses key information about the physical world. In terms of performance, such a setup still has clear advantages over unimodal sensing, but it also has drawbacks that limit the intelligent development of robots. On the one hand, the multimodal information obtained from multiple sensing devices differs greatly in structural configuration, time scale, and spatial dimension. Fusing the simultaneous measurements from force, tactile, vision, and other sensors, and reconciling the spatial and temporal scale differences between modalities in order to determine how data are exchanged between the information world and the physical world, is a core difficulty in the cognitive computation and inference of perceptual data, and it places extremely high demands on algorithm performance and processing equipment. On the other hand, when the sensing devices operate together, there are time differences in processing and converting information between modalities, which makes the robot appear less responsive and is another important factor affecting judgments of robot intelligence. Therefore, opening up new methods for obtaining multimodal information is especially important for the intelligent development of robots.

The purpose of this paper is to design a sensing model for robot multimodal information perception, to improve the robot’s intelligence, and to enhance its sensory prediction ability.

Generative Adversarial Network Algorithms
Autoencoder (AE)

Autoencoders have been studied for decades, with early related work on Boltzmann machines. They have a structure loosely analogous to the neural organization of the brain and were originally used mainly to solve combinatorial optimization problems. Later, nonlinear principal component analysis was used to discover and remove nonlinearly correlated components in the data, reducing its dimensionality by eliminating redundant information. A typical autoencoder operates as a feed-forward neural network composed mainly of an encoder network (input side) and a decoder network (output side); the structure is shown in Fig. 1. The encoder compresses the high-dimensional input into a low-dimensional bottleneck representation, and the decoder tries to reconstruct the input from this bottleneck as closely as possible. The reconstruction loss is measured with the L2 norm (Euclidean distance).

Figure 1.

Autoencoder structure
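For concreteness, the following is a minimal sketch of the encoder-decoder structure in Fig. 1, assuming a PyTorch implementation; the layer sizes and the 784-dimensional input are illustrative choices rather than parameters from the paper.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal feed-forward autoencoder: the encoder compresses the input to a
    low-dimensional bottleneck, the decoder reconstructs it."""
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, bottleneck_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # bottleneck representation
        return self.decoder(z)   # reconstruction

# Reconstruction loss: squared L2 (Euclidean) distance, as described above.
model = AutoEncoder()
x = torch.randn(16, 784)         # dummy batch
loss = nn.functional.mse_loss(model(x), x)
```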

Variational autoencoders (VAE)

The variational autoencoder (VAE) has a structure very similar to the autoencoder (AE). Unlike the AE, however, the VAE regularizes the latent representation and can generate new data rather than merely reconstructing the input. It consists of two neural networks, an inference network and a generative network, connected through a latent variable: the inference network performs variational inference on the original input data to obtain the probability distribution of the latent variable, and the generative network approximates the original data distribution from samples of that latent distribution. Figure 2 illustrates the distinction between the classical autoencoder and the variational autoencoder.

Figure 2.

The difference between a standard autoencoder and a variational autoencoder
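A corresponding minimal VAE sketch, again assuming PyTorch: the inference network outputs the mean and log-variance of the latent variable, and the generative network decodes a reparameterized sample. Layer sizes and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Inference network outputs (mu, log-variance) of the latent variable;
    the generative network decodes a sample drawn from that distribution."""
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

# Loss = reconstruction term + KL regularizer on the latent distribution.
def vae_loss(x_hat, x, mu, logvar):
    rec = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```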

Generative Adversarial Networks

A Generative Adversarial Network (GAN) consists of two parts: a generator G and a discriminator D [18]. The goal of the generator is to capture the latent distribution of the training data and generate plausible data to deceive the discriminator, while the goal of the discriminator is to distinguish whether its input comes from the training data or from the generator. G and D are trained simultaneously in this adversarial system, each attempting to optimize its own objective. The objective function can be expressed as shown in equation (1):

$$\min_G \max_D V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z\sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right] \tag{1}$$

where $p_{data}$ is the distribution of the real data and $p_z(z)$ is the prior distribution of the noise input $z$. In the ideal training outcome, we obtain a generator that produces data close enough to the real data to fool the discriminator.
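The objective in equation (1) maps directly onto a standard binary cross-entropy formulation, as in the sketch below; D is assumed to end with a sigmoid, and the common non-saturating generator loss is used in place of minimizing $\log(1 - D(G(z)))$, which is a practical choice rather than something prescribed by the paper.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # assumes D outputs probabilities via a final sigmoid

def discriminator_loss(D, G, x_real, z):
    """Minimizing this BCE is equivalent to maximizing
    E[log D(x)] + E[log(1 - D(G(z)))] in Eq. (1)."""
    real_score = D(x_real)
    fake_score = D(G(z).detach())            # do not backprop into G here
    return bce(real_score, torch.ones_like(real_score)) + \
           bce(fake_score, torch.zeros_like(fake_score))

def generator_loss(D, G, z):
    """Non-saturating variant: minimize -E[log D(G(z))] instead of
    E[log(1 - D(G(z)))]; both share the same fixed point."""
    fake_score = D(G(z))
    return bce(fake_score, torch.ones_like(fake_score))
```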

The principle of the generative adversarial network is to take a vector drawn from a Gaussian distribution and map it into the generated modal space; the generating function usually takes the form of a neural network, so that the generated image or text can closely approximate a real image or text. The cost function of the GAN discriminator is shown in equation (2):

$$J^{(D)}\left(\theta^{(D)},\theta^{(G)}\right) = -\frac{1}{2}\mathbb{E}_{x\sim p_{data}}\log D(x) - \frac{1}{2}\mathbb{E}_{z\sim p_z}\log\left(1 - D(G(z))\right) \tag{2}$$

where $\mathbb{E}$ denotes the expectation over the corresponding probability distribution.

As mentioned earlier, the generator and the discriminator play a zero-sum game, so their costs must sum to zero. It can therefore be deduced that the generator’s cost function satisfies equation (3):

$$J^{(G)} = -J^{(D)} \tag{3}$$

Therefore, a value function $V$ can be introduced to represent both $J^{(G)}$ and $J^{(D)}$.

The resulting form of the GAN cost functions is shown in Eqs. (4) to (6):

$$V\left(\theta^{(D)},\theta^{(G)}\right) = \mathbb{E}_{x\sim p_{data}}\log D(x) + \mathbb{E}_{z\sim p_z}\log\left(1 - D(G(z))\right) \tag{4}$$

$$J^{(D)} = -\frac{1}{2}V\left(\theta^{(D)},\theta^{(G)}\right) \tag{5}$$

$$J^{(G)} = \frac{1}{2}V\left(\theta^{(D)},\theta^{(G)}\right) \tag{6}$$

The problem now translates into finding suitable parameters $\theta^{(D)}$ and $\theta^{(G)}$ that make $J^{(D)}$ and $J^{(G)}$ as small as possible.

According to the definition of a Nash equilibrium in game theory, neither player can unilaterally change its behavior to improve its own payoff. The same holds in a GAN, which must seek an equilibrium point that minimizes the cost functions of both sides. That is, the problem can be defined as finding a minimax value, as shown in equation (7):

$$\arg\min_G \max_D V(D,G) \tag{7}$$

The minimax formulation means that the function is maximized in one direction (over D) and minimized in the other (over G).

After the above derivation, the generator and discriminator of an ideal generative adversarial network are given by equation (8):

$$D^* = \arg\max_D V(D,G), \qquad G^* = \arg\min_G \max_D V(D,G) = \arg\min_G V(D^*,G) \tag{8}$$

For $D^*$ in the above equation, fix the generator $G$; writing the expectation over the generated samples $x = G(z)$ as an integral, $V$ can be expressed as shown in equation (9):

$$V = \mathbb{E}_{x\sim p_{data}}\log D(x) + \mathbb{E}_{z\sim p_z}\log\left(1 - D(G(z))\right) = \int p_{data}(x)\log D(x)\,dx + \int p_g(x)\log\left(1 - D(x)\right)dx = \int \left[p_{data}(x)\log D(x) + p_g(x)\log\left(1 - D(x)\right)\right]dx \tag{9}$$

It now remains to find a $D$ that maximizes $V$, that is, a $D$ that maximizes the integrand $f(x) = p_{data}(x)\log D(x) + p_g(x)\log(1 - D(x))$ for every value of $x$. Since $p_{data}$ is fixed, and the generator $G$ was assumed fixed so that $p_g$ is also fixed, $D$ can be found by maximizing $f(x)$ pointwise. For a fixed $x$, setting the derivative of $f(x)$ with respect to $D(x)$ to zero gives equations (10) and (11):

$$\frac{df(x)}{dD(x)} = \frac{p_{data}(x)}{D(x)} - \frac{p_g(x)}{1 - D(x)} = 0 \tag{10}$$

$$D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)} \tag{11}$$

The resulting value lies between 0 and 1, which matches the standard behavior of the discriminator: ideally it should output 1 for real data and 0 for generated data, and when the generated distribution is very close to the real distribution it should output 1/2.
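The pointwise optimum in Eq. (11) is easy to check numerically; the sketch below scans $D(x)$ on a grid for one illustrative pair of density values (the numbers 0.6 and 0.2 are arbitrary assumptions, not data from the paper).

```python
import numpy as np

# Illustrative density values at a single point x.
p_data, p_g = 0.6, 0.2

# f(D) = p_data*log D + p_g*log(1 - D); scan D over (0, 1) and locate its maximum.
D = np.linspace(1e-4, 1 - 1e-4, 100_000)
f = p_data * np.log(D) + p_g * np.log(1 - D)

print(D[np.argmax(f)])             # ~0.75, found numerically
print(p_data / (p_data + p_g))     # 0.75, closed form from Eq. (11)
```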

Having found $D^*$, for the generator $G^*$, substituting $D^*(x)$ into the previous integral gives equation (12):

$$\max_D V(G,D) = V(G,D^*) = \int p_{data}(x)\log D^*(x)\,dx + \int p_g(x)\log\left(1 - D^*(x)\right)dx = \int p_{data}(x)\log\frac{p_{data}(x)}{p_{data}(x)+p_g(x)}\,dx + \int p_g(x)\log\frac{p_g(x)}{p_{data}(x)+p_g(x)}\,dx \tag{12}$$

In probability and statistics, the JS divergence, like the previously mentioned KL divergence, measures the similarity between two probability distributions. It is defined in terms of the KL divergence and inherits its non-negativity, but with one important difference: the JS divergence is symmetric. The relationship between the JS divergence and the KL divergence is given in Eq. (13), and the explicit formula for the JS divergence in Eq. (14):

$$JSD(P\|Q) = \frac{1}{2}KL(P\|M) + \frac{1}{2}KL(Q\|M), \qquad M = \frac{1}{2}(P+Q) \tag{13}$$

$$JSD(P\|Q) = \frac{1}{2}\int p(x)\log\frac{p(x)}{\frac{p(x)+q(x)}{2}}\,dx + \frac{1}{2}\int q(x)\log\frac{q(x)}{\frac{p(x)+q(x)}{2}}\,dx \tag{14}$$

For $\max_D V(G,D)$, since the JS divergence is non-negative, the expression attains its global minimum if and only if $p_{data}$ and $p_g$ are equal. Thus the optimal generator $G^*$ is exactly the one whose distribution $p_g$ matches $p_{data}$, as shown in equation (15):

$$\max_D V(G,D) = -\log 4 + KL\left(p_{data}\Big\|\frac{p_{data}+p_g}{2}\right) + KL\left(p_g\Big\|\frac{p_{data}+p_g}{2}\right) = -\log 4 + 2\,JSD\left(p_{data}\|p_g\right) \tag{15}$$
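Eqs. (13)-(14) translate directly into code for discrete distributions; the following sketch builds the JS divergence from the KL divergence against the mixture $M$ and illustrates its symmetry. The example distributions are arbitrary.

```python
import numpy as np

def kl(p, q):
    """KL(P || Q) for discrete distributions (arrays summing to 1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jsd(p, q):
    """JS divergence: built from KL against the mixture M = (P + Q)/2."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.1, 0.4, 0.5]
q = [0.3, 0.4, 0.3]
print(jsd(p, q), jsd(q, p))   # equal values: JSD is symmetric, unlike KL
```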

Cross-modal generative model based on audio-visual-haptic multimodal co-representation
Model Architecture

The model designed in this paper consists of three main parts: the cross-modal generative network $G_{IS}$, the haptic self-encoding network $G_T$, and the discriminative networks ($D_{IS}$, $D_T$, $D_{C1}$, $D_{C2}$). The generative model is named CRCM-GAN.

Multimodal co-representation generative network

The multimodal dataset is denoted $D = \{D_{tr}, D_{te}\}$, where $D_{tr}$ is the training data and $D_{te}$ is the test data. Specifically, the training data contain paired data from three modalities, visual, auditory, and haptic: $D_{tr} = \{I_{tr}, S_{tr}, T_{tr}\}$, with $I_{tr} = \{i_x\}_{x=1}^{m_{tr}}$, $S_{tr} = \{s_x\}_{x=1}^{m_{tr}}$, and $T_{tr} = \{t_x\}_{x=1}^{m_{tr}}$, so that each modality has $m_{tr}$ pairs of training samples $i_x$, $s_x$, $t_x$. Similarly, the test data are denoted $D_{te} = \{I_{te}, S_{te}, T_{te}\}$, with $I_{te} = \{i_y\}_{y=1}^{n_{te}}$, $S_{te} = \{s_y\}_{y=1}^{n_{te}}$, and $T_{te} = \{t_y\}_{y=1}^{n_{te}}$, giving $n_{te}$ test samples for each modality.

Given a visual-auditory signal pair $\{i_x, s_x\}$, encoders $E_I$ and $E_S$ map the input data to the feature space, yielding $\{f_I, f_S\}$, as shown in equations (16) and (17):

$$f_I = E_I(i_x) \tag{16}$$

$$f_S = E_S(s_x) \tag{17}$$

Here $E_I(\cdot)$ and $E_S(\cdot)$ denote the forward computation of the encoding networks that produce the visual and auditory features $\{f_I, f_S\}$. These features are concatenated to obtain the hidden-layer input $h_0^G$, which passes through the intermediate hidden-layer module to extract the common representation feature $h_1^G$. The decoding network $Dec^{IS}$ of the cross-modal generative network $G$ extracts the output features $f_k^G$ of its different convolutional layers from the fused feature $h_1^G$, as shown in equation (18):

$$f_k^G = Dec_k^{IS}(h_1^G) \tag{18}$$

where $k$ denotes layer $k$ of the decoding network $Dec^{IS}$, and $Dec_k^{IS}(\cdot)$ denotes its forward computation [19].
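A hedged sketch of this encode-fuse-decode path (Eqs. (16)-(18)) is given below; the input dimensions, layer widths, and the use of fully connected layers in place of the paper’s convolutional decoder are simplifying assumptions.

```python
import torch
import torch.nn as nn

class CrossModalGenerator(nn.Module):
    """Sketch of G_IS: encode visual and auditory inputs, fuse them into a
    common representation, and decode toward the haptic modality."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.E_I = nn.Sequential(nn.Linear(1024, feat_dim), nn.ReLU())  # visual encoder
        self.E_S = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())   # auditory encoder
        self.shared = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())
        # Decoder kept as an explicit list so per-layer features f_k^G can be read out.
        self.dec_layers = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU()),
            nn.Sequential(nn.Linear(256, 512), nn.ReLU()),
            nn.Sequential(nn.Linear(512, 1024)),
        ])

    def forward(self, i_x, s_x):
        f_I, f_S = self.E_I(i_x), self.E_S(s_x)    # Eqs. (16)-(17)
        h0 = torch.cat([f_I, f_S], dim=1)          # hidden-layer input h_0^G
        h1 = self.shared(h0)                       # common representation h_1^G
        feats, h = [], h1
        for layer in self.dec_layers:              # Eq. (18): f_k^G per layer
            h = layer(h)
            feats.append(h)
        return h, feats                            # generated haptic data t_hat, {f_k^G}
```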

Haptic reconstruction network

For a given real haptic image $t_x$, the encoder $E_T$ of the haptic self-encoding network $T$ converts it into a latent feature representation $f_T$, i.e.:

$$f_T = E_T(t_x) \tag{19}$$

The encoded feature vector $f_T$, i.e. $h_0^T$, passes through two fully connected layers to obtain the hidden-layer feature $h_1^T$. The decoding network $Dec^T$ of the haptic self-encoding network takes $h_1^T$ as input, and the features of each layer in the decoding process are given by Eq. (20):

$$f_k^T = Dec_k^T(h_1^T) \tag{20}$$

where $k$ denotes layer $k$ of the decoding network $Dec^T$ and $Dec_k^T(\cdot)$ denotes the forward computation of the self-encoding network $T$.
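A matching sketch of the haptic self-encoding path (Eqs. (19)-(20)); it mirrors the generator’s decoder so that the per-layer features $f_k^T$ can later serve as supervision. Dimensions are again illustrative assumptions.

```python
import torch
import torch.nn as nn

class HapticAutoEncoder(nn.Module):
    """Sketch of G_T: encode a real haptic image t_x, pass it through two
    fully connected layers, and decode while exposing per-layer features."""
    def __init__(self, haptic_dim=1024, feat_dim=128):
        super().__init__()
        self.E_T = nn.Sequential(nn.Linear(haptic_dim, feat_dim), nn.ReLU())  # Eq. (19)
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.dec_layers = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU()),
            nn.Sequential(nn.Linear(256, 512), nn.ReLU()),
            nn.Sequential(nn.Linear(512, haptic_dim)),
        ])

    def forward(self, t_x):
        h0 = self.E_T(t_x)              # f_T, i.e. h_0^T
        h1 = self.fc(h0)                # hidden-layer features h_1^T
        feats, h = [], h1
        for layer in self.dec_layers:   # Eq. (20): f_k^T per layer
            h = layer(h)
            feats.append(h)
        return h, feats                 # reconstruction t', {f_k^T}
```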

Discriminator network

For the adversarial-loss discriminative models, the inputs to the discriminator $D_{IS}$ are the real tactile image $t_{real}$ and the image $t_{fake}$ generated by the cross-modal generation network, while the inputs to $D_T$ are the real tactile image $t_{real}$ and the tactile information $t_{ae}$ reconstructed by the tactile self-encoding network. Feature-level discrimination is performed by the discriminators of the common representation module, which judge whether a latent feature was encoded from the target modality or not; this mechanism is used to uncover common features between the different modalities. With the above definitions, the generative and discriminative models follow the game-theoretic idea of generative adversarial networks, and the CRCM-GAN designed in this paper can be trained by jointly solving the learning problems of two parallel GANs.

Model Loss Function
Generative adversarial loss

The generative model aims to uncover the intrinsic structure and characteristics of the data, thus enabling the generation of multimodal data.

Here the respective discriminators $D_{IS}$ and $D_T$ of the $G_{IS}$ and $G_T$ networks are used as independent discriminators. In addition, two discriminators, $D_{C1}$ and $D_{C2}$, are designed to explore the common representation among the different modalities:

$$\min_{G_{IS},G_T} \max_{D_{IS},D_T,D_{C1},D_{C2}} L_{G1}(G_{IS},G_T,D_{IS},D_T) + L_{G2}(G_{IS},G_T,D_{C1},D_{C2})$$

The optimization objective of the cross-modal generative model for audio-visual co-representation, $G_{IS}$, is to minimize the difference between the generated modality and the target modality.

A discriminator network $D_{IS}$ is used to judge the authenticity of the input image, receiving the real haptic data $t$ and the pseudo-image $\hat{t}$ generated by the cross-modal generation network $G_{IS}$. The total generative adversarial loss is expressed as:

$$L_{G_{IS}}\left(G_{IS}, D_{IS}\right) = \mathbb{E}_{t\sim P(t)}\left[\log\left(D_{IS}(t)\right)\right] + \mathbb{E}_{\hat{t}\sim P(\hat{t})}\left[\log\left(1 - D_{IS}\left(G_{IS}(\hat{t})\right)\right)\right]$$

The discriminator $D_{IS}$ is then optimized to maximize $L_{G_{IS}}$, while the generator $G_{IS}$ is optimized to minimize:

$$L_{G_{IS}} = -\mathbb{E}_{\hat{t}\sim P(\hat{t})}\left[\log\left(D_{IS}\left(G_{IS}(\hat{t})\right)\right)\right]$$

The loss function $L_{G1}$ for haptic generation from vision and audition, together with the haptic self-encoding network, is then:

$$L_{G1} = L_{G_{IS}} + L_{G_T}$$
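Putting the adversarial terms above into code, a possible sketch is the following; $D_{IS}$ and $D_T$ are assumed to output probabilities, and all returned quantities are expressed as losses to be minimized.

```python
import torch

def adversarial_losses(D_IS, D_T, t_real, t_fake, t_ae, eps=1e-8):
    """Sketch of the adversarial terms: discriminator losses for D_IS and D_T
    and the combined generator-side loss L_G1. All values are minimized."""
    # Discriminator losses: push D toward 1 on real haptics, 0 on generated ones.
    loss_D_IS = -(torch.log(D_IS(t_real) + eps).mean()
                  + torch.log(1 - D_IS(t_fake.detach()) + eps).mean())
    loss_D_T = -(torch.log(D_T(t_real) + eps).mean()
                 + torch.log(1 - D_T(t_ae.detach()) + eps).mean())
    # Generator-side loss L_G1: make the generated and reconstructed haptics
    # look real to their respective discriminators.
    loss_G1 = -(torch.log(D_IS(t_fake) + eps).mean()
                + torch.log(D_T(t_ae) + eps).mean())
    return loss_D_IS, loss_D_T, loss_G1
```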

Common representation learning loss

In the training phase of discriminators $D_{C1}$ and $D_{C2}$, feature representations from the same path are labeled 1 and feature representations from different paths are labeled 0. This exploits the common representation between the audio-visual and tactile modalities and facilitates the generation of the tactile modality.

$$L_{G2} = \mathbb{E}_{(i,s,t)\sim P(i,s,t)}\left[D_{C1}\left(G_{ISenc}(i,s)\right) - D_{C1}\left(G_{Tenc}(t)\right) + D_{C2}\left(G_{Tenc}(t)\right) - D_{C2}\left(G_{ISenc}(i,s)\right)\right]$$
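A minimal sketch of this common-representation term; here h_is and h_t stand for the encoded representations $G_{ISenc}(i,s)$ and $G_{Tenc}(t)$, and the exact critic architectures are left unspecified assumptions.

```python
import torch

def common_representation_loss(D_C1, D_C2, h_is, h_t):
    """Sketch of L_G2: the two critics D_C1 and D_C2 compare the fused
    audio-visual representation h_is with the haptic representation h_t,
    pushing the two encoders toward a common feature space."""
    return (D_C1(h_is) - D_C1(h_t) + D_C2(h_t) - D_C2(h_is)).mean()
```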
Feature-level supervised loss

Unlike the traditional feature matching loss, the proposed model applies a feature-level supervised loss between the haptic signal generated by $G_{IS}$ and the haptic signal reconstructed by $G_T$. When trained alongside the cross-modal generative network $G_{IS}$, the network $G_T$ models the distribution of the haptic data better and converges faster than $G_{IS}$. Therefore, the model uses the output of each decoder layer of the haptic self-encoding network $G_T$ as supervisory information for the decoder of the cross-modal generative network $G_{IS}$, imposing feature-level constraints on the output of each decoder layer. According to Eqs. (18) and (20), the feature supervision loss is defined as:

$$L_{FM} = \sum_{k=1}^{n}\left\| f_k^G - f_k^T \right\|_2$$

where $\|\cdot\|_2$ denotes the $l_2$ loss and $n$ denotes the number of convolutional layers of the decoder network.
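The feature supervision loss can be sketched as a sum of $l_2$ distances over corresponding decoder layers; detaching the autoencoder features reflects their role as fixed supervisory targets, which is an implementation assumption.

```python
import torch

def feature_supervision_loss(feats_G, feats_T):
    """L_FM: l2 distance between corresponding decoder-layer features of the
    cross-modal generator (f_k^G) and the haptic autoencoder (f_k^T); the
    autoencoder features are detached so they act purely as supervision."""
    return sum(torch.norm(f_g - f_t.detach(), p=2)
               for f_g, f_t in zip(feats_G, feats_T))
```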

Training process and algorithm steps

For the cross-modal generative model, the feature vectors $f_I$ and $f_S$, carrying high-level representations of the visual and auditory modalities, are first obtained from the encoders of the generative model. The two are concatenated to obtain the fused feature vector $h_x^G$, which after the last fully connected layer yields the audio-visual representation vector $s_x^G$; this is then passed through the $G_{IS}$ decoder to generate the cross-modal haptic representation $\hat{t}$. Subsequently, the generated images are discriminated, distinguishing the cross-modal generated data $\hat{t}$ from the real data $t$ [20]. The stochastic gradient for the discriminator is:

$$\nabla_{\theta_{D_{IS}}} \frac{1}{N}\sum_{x=1}^{N}\left[\log\left(1 - D_{IS}(\hat{t})\right) + \log\left(D_{IS}(t)\right)\right]$$

where $N$ is the number of instances in a batch. Similarly, the haptic reconstruction network reconstructs the input haptic information to obtain the reconstructed representation $t'$. The reconstructed representation is compared with the real image, and the corresponding discriminative model is updated using:

$$\nabla_{\theta_{D_T}} \frac{1}{N}\sum_{x=1}^{N}\left[\log\left(1 - D_T(t')\right) + \log\left(D_T(t)\right)\right]$$

The stochastic gradient for $D_{C1}$ is calculated as follows:

$$\nabla_{\theta_{D_{C1}}} \frac{1}{N}\sum_{x=1}^{N}\left[\log D_{C1}\left(s_x^G, h_x^G\right) + \log\left(1 - D_{C1}\left(s_x^T, h_x^G\right)\right)\right]$$

where $(s, h)$ denotes the concatenation of the two feature vectors.

The stochastic gradient for $D_{C2}$ is calculated as follows:

$$\nabla_{\theta_{D_{C2}}} \frac{1}{N}\sum_{x=1}^{N}\left[\log D_{C2}\left(s_x^T, h_x^T\right) + \log\left(1 - D_{C2}\left(s_x^G, h_x^T\right)\right)\right]$$

The input to the decoder is a mapping matrix from the target modality to the common representation space, and the output is a reconstructed image of the target modality in that space. The aim is to minimize the objective function so as to fit the true correlation distribution. The stochastic gradient descent step is given by:

$$\nabla_{\theta_{G_{IS}}} \frac{1}{N}\sum_{x=1}^{N}\left[\log D_{C2}\left(s_x^G, h_x^T\right) + \log\left(D_{IS}(\hat{t})\right)\right]$$

Similarly, for the self-encoding network model, the spatial distribution of the reconstructed signal is fitted to that of the real signal:

$$\nabla_{\theta_{G_T}} \frac{1}{N}\sum_{x=1}^{N}\left[\log D_{C1}\left(s_x^T, h_x^G\right) + \log\left(D_T(t')\right)\right]$$
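Combining the pieces, one alternating training step might look like the sketch below; the common-representation critics $D_{C1}$ and $D_{C2}$ are omitted for brevity, and the module and optimizer names are illustrative rather than taken from the paper.

```python
import torch

def train_step(batch, G_IS, G_T, D_IS, D_T, opt_D, opt_G, eps=1e-8):
    """One alternating CRCM-GAN update (sketch): discriminators first, then
    generators, assuming D_IS and D_T output probabilities."""
    i, s, t = batch                                   # visual, auditory, haptic

    t_fake, feats_G = G_IS(i, s)                      # cross-modal generation
    t_ae, feats_T = G_T(t)                            # haptic reconstruction

    # 1) Discriminator step: D should output 1 on real haptics, 0 on generated ones.
    opt_D.zero_grad()
    d_loss = -(torch.log(D_IS(t) + eps).mean()
               + torch.log(1 - D_IS(t_fake.detach()) + eps).mean()
               + torch.log(D_T(t) + eps).mean()
               + torch.log(1 - D_T(t_ae.detach()) + eps).mean())
    d_loss.backward()
    opt_D.step()

    # 2) Generator step: adversarial term plus feature-level supervision from
    #    the haptic autoencoder's decoder features.
    opt_G.zero_grad()
    g_adv = -(torch.log(D_IS(t_fake) + eps).mean()
              + torch.log(D_T(t_ae) + eps).mean())
    g_fm = sum(torch.norm(f_g - f_t.detach(), p=2)
               for f_g, f_t in zip(feats_G, feats_T))
    g_loss = g_adv + g_fm
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```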

The training process iterates the generative and discriminative models until they reach a stable equilibrium. In this process, the generative model tries to generate samples similar to the real samples, while the discriminative model tries to distinguish real from generated samples. As a result, the heterogeneity gap between the different modalities gradually decreases and the modalities come to share a common representation space.

Results and Discussion
Experiments on localized pose prediction of objects

The experimental platform is a UR3 robotic arm equipped with a Barrett Hand dexterous hand. In this experiment, only one finger is used to predict the local pose of the object. There are two experimental objects, a water bottle and a cube. Each object is placed on a flat surface; given an initial position of the dexterous hand relative to the object, the hand, equipped with a fingertip tactile sensor, closes from an open configuration to grasp the object, and the sensor outputs the proximity-unit data during the approach. The collected dataset is processed by removing outliers and normalizing, and the proximity sensing data of the two objects are then fed into the trained model. The prediction curves obtained after model fitting are shown in Fig. 3 and Fig. 4, where the parameters d, xrot, and zrot are the perceptual prediction values on the corresponding coordinate axes of the object pose.

Figure 3.

The local attitude prediction curve of the water bottle

Figure 4.

The local attitude prediction diagram of the cube

Observing the parameter d, the prediction curves for both the water bottle and the cube decrease gradually toward zero in an obvious step-like shape, indicating a good prediction effect.

The prediction curve of the cube shows an overall decreasing trend, with small fluctuations in the first half. Observing the parameters xrot and zrot, for both the water bottle and the cube, xrot decreases overall while zrot increases. This is because the continual bending of the dexterous hand’s fingers causes pose changes between the fingers and the objects, while the different surface structures of the grasped objects lead the proximity sensing unit to perceive different local poses. Overall, the predicted trends of the local poses of the different objects are reasonable, and different object surfaces produce correspondingly different predictions. It can be concluded that the sensing performance of the multimodal sensing model in this paper meets the design expectation.

Overall Multimodal Sensing of Objects

To analyze the effect of dictionary size K on the algorithm, the K value is varied with the pooling mode set to average pooling, and the object recognition accuracies of OSL-SR and the proposed algorithm are observed; the results are shown in Figure 5.

Figure 5.

Relationship between recognition accuracy and dictionary size

The object recognition results depend not only on the specific algorithm but also on the dictionary size parameter it uses. As the dictionary size K increases, the single-sample recognition results of both OSL-SR and the proposed model improve. As K increases from 30 to 80, the recognition accuracies of the OSL-SR model are 84%, 86%, 87%, 90%, 91%, and 91%, while those of the proposed model are 89%, 90%, 92%, 94%, 95%, and 92%. The figure shows that the recognition accuracy of the proposed model is higher than that of OSL-SR at every K stage, which directly indicates better generalization under different parameters and reflects that considering the temporal characteristics of reconstructed data with coupling properties helps improve the efficiency of the algorithm.

Figure 6 shows the F1 score results for perceptual state recognition for the proposed algorithm and the other two algorithms. At every sparsity level, the recognition performance of the proposed model is significantly better than that of JKSC and AMDL. When T = 5, its maximum recognition result is 0.953, higher than the recognition performance of the other two models. When T > 5, the proposed model begins to show a decreasing trend but still outperforms the other algorithms. It is further found that AMDL is more sensitive to sparsity than JKSC because it considers the force association between multiple fingers. It can be concluded that the proposed model is also superior in overall multimodal perception of objects.

Figure 6.

Comparison of the algorithms at different sparsity levels

Conclusion

In this paper, a sensing model for robotic multimodal information perception is designed. It fuses the multimodal heterogeneous data acquired by multiple sensing devices across structural configurations, time scales, and spatial dimensions, and ultimately enhances the robot’s perceptual prediction capability. The experiments verify that the predicted trends of the local poses of different objects are reasonable and that different object surfaces produce correspondingly different predictions. Comparing the proposed model with OSL-SR, JKSC, and AMDL shows that the robot’s perceptual prediction performance is related to the dictionary size parameter of the algorithm. As the dictionary size K increases, the single-sample object recognition results of both OSL-SR and the proposed model improve, and the recognition accuracy of the proposed model is higher than that of OSL-SR at every K stage, demonstrating better generalization under different parameters and again reflecting that considering the temporal features of reconstructed data with coupling characteristics improves the algorithm’s efficiency. At every sparsity level, the recognition performance of the proposed model is significantly better than that of JKSC and AMDL; at T = 5 its maximum recognition result is 0.953, superior to the other two models, and when T > 5 it begins to decrease but still remains higher than the other algorithms. In summary, the proposed model achieves the design goal of intelligently enhancing the robot’s perceptual prediction ability.

Project Number:

220124038, Heilongjiang Institute of Technology Horizontal Research Project, Development of a Web-based Product Selection Platform for S Enterprise’s Gear Reducers.
