Recursive neural network-based design of unmanned aircraft swarm collaborative mission execution and autonomous navigation system
Published online: 24 Mar 2025
Received: 08 Nov 2024
Accepted: 19 Feb 2025
DOI: https://doi.org/10.2478/amns-2025-0772
© 2025 Ken Chen et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
A recurrent neural network (RNN) is a neural network with recurrent connections: each node can receive its own output from the previous time step as input. This recurrent structure enables RNNs to process sequential data such as natural language and time series [1-3]. Compared with traditional feed-forward neural networks, RNNs have memory: previously seen information can influence the current output, allowing them to better capture temporal relationships in sequence data. They are widely applied across science and engineering [4-6].
Unmanned aerial vehicles (UAVs) are an important achievement of modern science and technology. They are unmanned, efficient, and low-cost, and can be applied in many fields; among these applications, UAV swarm collaborative mission execution is one of the most important directions [7-10]. An autonomous UAV navigation system allows a UAV to independently perceive its surroundings, analyze environmental information, plan paths according to its mission characteristics, execute flight missions, and react to abnormal situations [11-13]. Given the importance of swarm collaborative mission execution, the need for UAVs to cooperate to accomplish missions, and the importance of autonomous navigation, designing a recurrent neural network-based UAV swarm collaborative mission execution and autonomous navigation system is of great significance for UAV applications [14-16].
In this paper, we first analyze the process characteristics of autonomous UAV navigation and select the Yolov5 target detection model as the network model for visual reconstruction, performing single-stage target detection for the UAV. Environmental information is collected by fusing distance information with visual perception results. Building on the SAC algorithm, an LSTM-enhanced Layered-RSAC algorithm is proposed. Next, the UAV autonomous navigation system is built, and randomly selected localization data are tested for accuracy. Fixed-point autonomous takeoff and landing tests are used to analyze autonomous landing accuracy and derive the UAV operation indexes. Finally, the performance of the Layered-RSAC algorithm is compared with other algorithms based on the adaptive decay of the a priori strategy.
Autonomous navigation of unmanned aerial vehicles (UAVs) in complex environments refers to the generation of control commands by UAVs based on their own sensors’ observations of the surrounding environment to fly from a starting position to a target position in complex environments. The environment (e.g., urban environment with tall buildings, forest environment with regular trees) is unknown, obstacles in the environment appear randomly, and the starting position and target position are generated randomly.
Generally speaking, UAV control design involves three aspects: speed, direction, and altitude. For simplicity, this paper assumes that the UAV flies at a fixed altitude during autonomous navigation, so it only needs to control direction and speed. In addition, this paper ignores the physical limitations of the UAV dynamics model and assumes that control commands take effect instantly. Based on the above simplifications and assumptions, let
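The fixed-altitude simplification above can be sketched as a planar kinematic update with instantaneous command effect. The state layout, time step, and speed bound below are illustrative assumptions, not the paper's exact definitions:

```python
import math

def step(x, y, heading, speed, d_heading, d_speed, dt=0.1, v_max=10.0):
    """One fixed-altitude kinematic update: commands take effect instantly."""
    heading = (heading + d_heading) % (2 * math.pi)   # direction control
    speed = min(max(speed + d_speed, 0.0), v_max)     # speed control, clamped
    x += speed * math.cos(heading) * dt               # planar position update
    y += speed * math.sin(heading) * dt
    return x, y, heading, speed

# Example: fly east at 5 m/s for one 0.1 s step -> advances 0.5 m along x
x, y, h, v = step(0.0, 0.0, 0.0, 5.0, 0.0, 0.0)
```

Altitude is held constant, so it never appears in the state; only the two controlled quantities (direction and speed) evolve.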
In practical applications, a UAV's observation of the surrounding environment can be realized using visual cameras, radars, and similar sensors, while localization of its own position can be realized using the Global Positioning System (GPS). In the simulation environment, however, it is assumed that the UAV only uses ranging sensors deployed in a limited number of directions to perceive the surrounding local environment, and its position information is given directly by the simulation environment. In this paper, we propose a deployment scheme for the virtual ranging sensors carried by the UAV, where the first-view direction
Based on the above definitions, the navigation problem of the UAV in a complex environment can be further described as follows: the first-view direction
where
This paper proposes an algorithmic framework for collaborative task execution and autonomous navigation of UAV swarms based on recurrent neural networks. The basic idea is as follows: multiple UAVs and targets exist in the environment. First, each UAV obtains raw data from its onboard sensors; through the cooperative perception module, the raw visual data are reconstructed, stitched, grayscaled, and downscaled. The inter-UAV distance information and each UAV's own motion information are jointly mapped onto the visual data, and the joint perception results from multiple moments are stacked into a joint state description that serves as input to the collaborative decision-making module. The collaborative decision-making module uses this joint state description to generate the optimal action for each UAV end-to-end. By building a simulated navigation environment and training interactively in a centralized manner, the joint rewards are jointly optimized and a stable collaborative strategy is finally achieved. The proposed algorithm is described in detail below in terms of the design and implementation of the joint state description and the structure of the collaborative decision-making network.
Neural network-based target detection methods can be divided into two main families: two-stage algorithms, represented by Fast R-CNN, and single-stage algorithms, represented by the Yolo series. Two-stage algorithms first generate candidate regions and then classify or regress them with a convolutional neural network; they usually achieve higher accuracy but slower detection. Single-stage algorithms treat target detection as a bounding-box regression problem and extract image features directly to predict object category and location; they usually detect faster and are better at learning generalized object features. Given the real-time requirements of UAV control, this paper selects a single-stage detection algorithm. The Yolo series has released nine versions since its introduction, of which YoloV5 has advantages in ecosystem, accuracy, and real-time performance; it is especially stable and robust with small samples and runs in real time on lower-performance GPU embedded platforms. Therefore, this paper chooses the Yolov5 target detection model as the network model for visual reconstruction, balancing real-time UAV control with convenient future deployment on edge computing devices.
The basic principle of the YOLO family of algorithms is to directly predict and localize multiple targets in an image by employing a single neural network model with a single forward propagation. This class of algorithms usually divides the input image into
Convolutional networks with multiple layers and branches and different convolutional kernel sizes are used to extract feature maps at different scales, which are then fused; supervised learning completes network training using samples containing ground truth. The loss function usually consists of a localization error (a squared-error term on center coordinates, width, and height), a confidence error (cross-entropy loss), and a category error (binary cross-entropy or squared error). Overlapping bounding boxes are then eliminated and the false detection rate reduced through post-processing such as non-maximum suppression, so that the location and category of each object in the input image can be inferred directly from the trained model. The category error is calculated as (3):
The position prediction of the bounding box is calculated as follows:
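For reference, the public YOLOv5 formulation decodes the raw network outputs through sigmoids into grid-relative coordinates. The sketch below follows that standard scheme under the assumption that the paper uses it unchanged; the anchor sizes and stride are illustrative:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, stride):
    """YOLOv5-style decoding of raw outputs (t*) to an image-space box.

    (cx, cy) is the grid cell index; stride converts cells to pixels.
    """
    bx = (2 * sigmoid(tx) - 0.5 + cx) * stride   # center x in pixels
    by = (2 * sigmoid(ty) - 0.5 + cy) * stride   # center y in pixels
    bw = (2 * sigmoid(tw)) ** 2 * anchor_w       # width, scaled from anchor
    bh = (2 * sigmoid(th)) ** 2 * anchor_h       # height, scaled from anchor
    return bx, by, bw, bh

# Zero logits in cell (3, 4): center lands mid-cell, size equals the anchor
bx, by, bw, bh = decode_box(0, 0, 0, 0, 3, 4, 30.0, 60.0, 8)
```

The `2σ(·) − 0.5` form lets the predicted center reach slightly outside its cell, and the squared term bounds the width/height multiplier to (0, 4), which stabilizes training.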
The localization box error is calculated using the squared loss function:
The overall loss function is usually the sum of (3) and (8). A dataset is built by collecting and labeling data, and the neural network is trained with the Adam optimizer until the error falls below a set threshold.
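The composite loss described above (squared localization error plus cross-entropy confidence and class terms) can be sketched for a single predicted box. The weighting coefficients and the two-class setup are illustrative assumptions, not the paper's values:

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one predicted probability p against label y."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def yolo_box_loss(pred, target, lambda_coord=5.0, lambda_cls=1.0):
    """pred/target: dicts with 'box' (x, y, w, h), 'conf', 'cls' (per-class probs)."""
    # squared localization error on center coordinates, width, and height
    loc = sum((p - t) ** 2 for p, t in zip(pred["box"], target["box"]))
    conf = bce(pred["conf"], target["conf"])                           # confidence error
    cls = sum(bce(p, t) for p, t in zip(pred["cls"], target["cls"]))   # category error
    return lambda_coord * loc + conf + lambda_cls * cls

pred = {"box": (0.5, 0.5, 0.2, 0.2), "conf": 0.9, "cls": [0.8, 0.1]}
target = {"box": (0.5, 0.5, 0.2, 0.2), "conf": 1.0, "cls": [1.0, 0.0]}
loss = yolo_box_loss(pred, target)
```

With a perfectly localized box, only the confidence and class cross-entropy terms contribute, which is the behavior the decomposition above implies.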
Since a UAV captures only a limited range of information when it relies solely on its front-facing onboard RGB camera, the limited field of view may lead to collisions between UAVs that cannot sense each other in a cooperative navigation task. Introducing inter-UAV distances into the sensing data offers the following advantages:
When the distance between UAVs is too small, the decision-making module can adjust the strategy in time, for example by changing direction, to avoid collision. Combining inter-UAV distance information with visual perception results lets the agent better perceive the relative positions of the UAVs and understand the spatial layout of the surrounding environment, which is more conducive to path planning. Distance information also helps coordinate movement between UAVs: the agent can dynamically adjust the target allocation according to inter-UAV distances and visual perception, ensuring more efficient cooperative navigation.
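One way to realize the fusion described above is to append normalized inter-UAV distances and ego-motion to the flattened visual features. The layout below is a hypothetical sketch, not the paper's exact joint-state definition; `d_max` is an assumed sensing range used for normalization:

```python
import math

def joint_state(visual_feat, own_pos, own_vel, peer_positions, d_max=100.0):
    """Concatenate visual features, ego motion, and normalized peer distances."""
    dists = sorted(
        math.dist(own_pos, p) / d_max     # normalize to [0, 1] within range
        for p in peer_positions
    )
    return list(visual_feat) + list(own_vel) + [min(d, 1.0) for d in dists]

# Two peers at 10 m and 50 m from the ego UAV, d_max = 100 m
state = joint_state([0.2, 0.7], (0.0, 0.0), (1.0, 0.0), [(30.0, 40.0), (6.0, 8.0)])
```

Sorting the distances makes the state invariant to peer ordering, so the decision network does not have to learn a permutation symmetry.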
For the autonomous navigation task of cellular-connected UAVs in highly dynamic environments, the algorithm in this study is designed first to satisfy the two basic requirements of UAV navigation: safety and efficiency. The complex decision-making process of UAV navigation is therefore decomposed into three more tractable subproblems: avoiding obstacles, approaching the destination, and choosing a specific action from the solutions of the first two subproblems so as to ensure the UAV's air-ground communication connectivity. For the first two subproblems, this study designs an RSAC-based sub-network architecture consisting of an Evade Network and an Approach Network, each generating actions matched to its subtask: the Evade Network generates actions for obstacle avoidance, and the Approach Network generates actions for approaching the destination.
The structure of the sub-networks is elaborated starting from the composition of their input vectors. Since this input relates to the obstacle-avoidance subproblem and should contain information about the historical motion of the dynamic obstacles, the input of this sub-network structure, as shown in (9), still adopts real-time updated historical information as the input vector; the difference is that this history also contains the sub-actions generated by the sub-network from the previous historical information:
Once the input vectors are determined, the sub-network architecture is designed in conjunction with the classical LSTM network structure. The hierarchical sub-network designed in this study uses an Actor-Critic architecture similar to the classical SAC algorithm, with each sub-network consisting of an Actor
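A minimal sketch of one such LSTM-based sub-network follows, with an Actor head producing a distribution over discrete sub-actions and a Critic head estimating soft Q-values, in PyTorch as used in the paper's experimental environment. The layer sizes and the single-layer LSTM are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RSACSubNet(nn.Module):
    """LSTM-based Actor-Critic sub-network (e.g., Evade or Approach)."""
    def __init__(self, obs_dim, action_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)  # history encoder
        self.actor = nn.Linear(hidden, action_dim)              # policy logits
        self.critic = nn.Linear(hidden, action_dim)             # soft Q-values

    def forward(self, history):
        # history: (batch, time, obs_dim) of past observations and sub-actions
        out, _ = self.lstm(history)
        h = out[:, -1]                    # last hidden state summarizes the history
        probs = torch.softmax(self.actor(h), dim=-1)
        return probs, self.critic(h)

net = RSACSubNet(obs_dim=16, action_dim=5)
probs, q = net(torch.zeros(2, 10, 16))   # batch of 2 histories, 10 steps each
```

Feeding the previous sub-actions back into `history` is what lets the recurrent encoder capture the motion characteristics of dynamic obstacles, as the input-vector design above requires.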
Then, based on the overall algorithm flow, the sub-networks' action selection and iterative parameter updates are described. During the interaction between each cellular-connected UAV and the simulated complex dynamic urban environment, the policy networks of the Evade Network and the Approach Network output the estimated probability distribution
where

The specific structure of the designed RSAC-based policy-value network
The Integrated Network then selects one of the two sub-actions as
As mentioned before, the value networks of the sub-networks are constructed to better train their policy networks, while the estimates of their output
where
The process of iterative updating of parameters
Similarly, small changes can be made to the objective function of the training strategy network as shown in (14):
After this, stochastic gradient ascent is performed to maximize (14). Finally, the loss function of the temperature coefficient is calculated by Eq. (15).
From the above introduction, it can be seen that the integrated complex strategy for autonomous UAV navigation in this algorithm evolves from three simpler strategies: avoidance, approach, and selection. The algorithm is named Layered-RSAC because it is based on the LSTM-enhanced SAC algorithm, uses a layered neural network framework to solve the complex optimization problem, and applies a layer-by-layer optimization approach.
In order to explore the application effect of the proposed autonomous UAV navigation system, this paper constructs the system as follows. The space occupied by autonomous UAV navigation is set as a 4 km² quadrilateral, with its four vertices selected as localization points. Localization point 1 is the origin of the local coordinate system; the direction from localization point 1 to localization point 2 is the x-axis, and the direction from localization point 1 to localization point 3 is the y-axis. Nine positions in the space were randomly selected and their positioning data obtained; the positioning errors in the three directions, compared with the true positions measured by laser ranging equipment, are shown in Fig. 2.

System positioning error
Analysis of the data shows that the average localization error of the autonomous navigation system is 9.35 cm in the x-direction, 9.23 cm in the y-direction, and 17.59 cm in the z-direction.
In order to improve stability during takeoff and landing, ultrasonic altimetry data were fused in the near-ground stage for UAV altitude control. The landing point set for the experiment was (8, 8, 0). The UAV was unlocked and took off from an arbitrary position, rose to a height of 3 m, flew to the pre-landing position (8, 8, -3), and then entered landing mode.
A total of 18 flight tests were conducted, each time with the UAV taking off from a random position; the actual landing positions are shown in Table 1. The average landing deviation was 12.26 cm in the x-direction and 9.72 cm in the y-direction.
Analysis of UAV autonomous landing accuracy
| Experiment | Landing position/m | Landing deviation/m |
|---|---|---|
| 1 | (7.813,7.930) | (0.187,0.070) |
| 2 | (8.105,8.098) | (0.105,0.098) |
| 3 | (7.983,8.021) | (0.017,0.021) |
| 4 | (7.892,7.928) | (0.108,0.072) |
| 5 | (8.221,8.097) | (0.221,0.097) |
| 6 | (7.779,8.006) | (0.221,0.006) |
| 7 | (8.201,8.195) | (0.201,0.195) |
| 8 | (8.112,8.099) | (0.112,0.099) |
| 9 | (8.250,8.192) | (0.250,0.192) |
| 10 | (8.004,7.995) | (0.004,0.005) |
| 11 | (7.921,8.107) | (0.079,0.107) |
| 12 | (8.112,8.099) | (0.112,0.099) |
| 13 | (7.911,8.024) | (0.089,0.024) |
| 14 | (7.899,8.133) | (0.101,0.133) |
| 15 | (7.980,8.210) | (0.020,0.210) |
| 16 | (7.821,7.905) | (0.179,0.095) |
| 17 | (8.199,8.207) | (0.199,0.207) |
| 18 | (8.002,8.019) | (0.002,0.019) |
The metrics of the UAV's operation during autonomous navigation are shown in Figure 3. Figure 3(a) shows the variation of the UAV's memory pool utilization against the exploration completion rate. The memory pool was never full: its utilization was effectively controlled, growing rapidly at moments when the UAV flew into large amounts of unknown space. The exploration completion rate exceeded 85% of the space.

Navigation health indicators
An octree is a tree-based data structure for describing three-dimensional space that effectively avoids wasted storage and excessive computational complexity. Figure 3(b) shows the change in the number of known octree nodes during the UAV's autonomous navigation. Given the characteristics of the environmental obstacles, idle (free) nodes trigger pruning as exploration continues, so their number decreases in some periods while growing smoothly overall. Occupied nodes, by contrast, rarely trigger pruning because only their surfaces can be explored, so their number grows continuously. At t = 150 s, the numbers of idle and occupied nodes both flatten, indicating that most of the space has been explored and most of the corresponding nodes are known.
The proposed Layered-RSAC autonomous UAV navigation algorithm is simulated and analyzed; its parameters are listed in Table 2. The space occupied by autonomous UAV navigation is set to about 4 km², the UAV flight altitude is assumed constant, the maximum number of training steps is set to 186,000, and the influence of the natural environment is ignored for now. The simulation environment and equipment are: Intel® Core™ i7-9700K CPU, 32 GB dual-channel memory, Windows 10 64-bit operating system, Python 3.8, PyTorch 1.9.0.
Layered-RSAC algorithm parameters
| Parameter | Value |
|---|---|
| Learning rate | 0.0001 |
| Batch learning scale | 22 |
| Multithread scale | 15 |
| Discount factor | 0.99 |
| Maximum training step size | 186000 |
| Prior Policy experience attenuation | 0.00005 |
| Prior Policy initial variance | 0.15 |
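Reading the Table 2 values, the Prior-Policy guidance can be sketched as a Gaussian perturbation whose standard deviation decays each training step, gradually handing control to the learned policy. This linear schedule and blending weight are an illustrative interpretation of the "experience attenuation" parameter, not the paper's exact mechanism:

```python
def prior_policy_std(step, sigma0=0.15, decay=5e-5, sigma_min=0.0):
    """Linearly decay the Prior-Policy standard deviation over training steps."""
    return max(sigma0 - decay * step, sigma_min)

def blend_weight(step, sigma0=0.15, decay=5e-5):
    """Weight of the prior in the behavior policy, shrinking as sigma decays."""
    return prior_policy_std(step, sigma0, decay) / sigma0

# At step 0 the prior dominates; with these values its influence vanishes by step 3000
w0, w_end = blend_weight(0), blend_weight(3000)
```

This matches the qualitative behavior analyzed below: a larger initial sigma weakens early guidance, while a smaller one keeps the prior dominant for longer.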
The Layered-RSAC algorithm reduces detours during autonomous UAV navigation by better matching the learning model's capability with the a priori policy weighting in the early stage of training. To verify the impact of this algorithm on the final training effect, the extra distance ratio is designed to evaluate path optimization when navigating between a randomly generated start and end point at a given training step; it also reflects the degree of detour during navigation. The extra distance ratio is defined as the actual path length minus the straight-line (direct) path length, divided by the straight-line path length.
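The extra distance ratio defined above is straightforward to compute; "direct path length" is read here as the straight-line start-to-goal distance:

```python
import math

def extra_distance_ratio(path, start, goal):
    """(actual path length - straight-line length) / straight-line length."""
    actual = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    straight = math.dist(start, goal)
    return (actual - straight) / straight

# A detour through (0, 3) on the way from (0, 0) to (4, 3): 7 m flown vs 5 m direct
ratio = extra_distance_ratio([(0, 0), (0, 3), (4, 3)], (0, 0), (4, 3))
```

A ratio of 0 means the UAV flew the straight line; larger values indicate proportionally longer detours.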
Due to the large number of training steps, this paper averages the extra distance ratio over every 5,000 training steps, so the first 2.5×10⁵ training steps condense into 50 points on the curve. To better demonstrate the algorithm's training effect, the autonomous navigation success rate curves under different a priori strategy standard deviations are shown in Figure 4.

Success rate curve of autonomous navigation under different prior strategies
As can be seen from Fig. 4, when the standard deviation is very small, the model's navigation success rate is very low, peaking at about 30%. The reason is that when the initial standard deviation is too small, the Prior-Policy's guidance of the UAV is too strong, i.e., the Prior-Policy accounts for too large a proportion of the executed behavioral strategy; the learning strategy then lacks the motivation to improve its own ability, and the training results tend to match only the Prior-Policy's own capability. When the Prior-Policy's standard deviation is too large, its guidance in the early training period is too weak and the inexperienced learning strategy accounts for a relatively high proportion of the executed behavior, making it difficult for the learning strategy to absorb experience; the model then oscillates, and although the navigation success rate is higher, performance is not stable. The network with a standard deviation of 0.35 guides the decision network more strongly than the one with 0.45, so its Prior-Policy decays more slowly and the model converges more slowly, but its converged navigation success is comparable to that with a standard deviation of 0.45.
Except when the standard deviation is too low or too high, where the model has a lower success rate and difficulty converging, the model converges to a high navigation success rate for most initial Prior-Policy standard deviations within a reasonable range. This indicates that the Layered-RSAC algorithm is relatively robust to this hyperparameter. Therefore, this paper chooses the better-performing 0.45 as the initial Prior-Policy standard deviation; the autonomous navigation success rate curves of different algorithms with σ = 0.45 are shown in Fig. 5.

Success rate of autonomous navigation for different algorithms at σ = 0.45
As can be seen in Fig. 5, the Layered-RSAC algorithm proposed in this paper significantly outperforms the classical Deep Deterministic Policy Gradient (DDPG) algorithm and Prior-Policy (which uses only a priori policies, without a learning network) in navigation success rate, reaching 90% or more, and also converges in significantly fewer steps. Layered-RSAC first reaches a 90% navigation success rate at 50 training steps and stabilizes at 90% to 100% after 100 training steps. The SAC algorithm, although it reaches a higher navigation success rate of 95% to 100%, converges too slowly, first reaching 90% success at 150 training steps. Prior-Policy, which relies solely on a priori policies without reinforcement learning, converges faster, at 75 training steps, but stabilizes at only a 25% to 30% success rate. The classical DDPG algorithm has an almost negligible navigation success rate because it fails to learn effective experience. Simulation results show that the Layered-RSAC algorithm effectively improves both training convergence speed and navigation success rate.
This paper proposes a UAV swarm collaborative task execution and autonomous navigation system based on recurrent neural network, and conducts system operation tests and performance evaluation of the Layered-RSAC algorithm.
Data analysis shows that the average positioning error of the autonomous navigation system is 9.35 cm in the x-direction, 9.23 cm in the y-direction, and 17.59 cm in the z-direction. A total of 18 fixed-point autonomous takeoff and landing tests were conducted, each with the UAV taking off from a random position; the average landing deviation was 12.26 cm in the x-direction and 9.72 cm in the y-direction. During autonomous navigation, at t = 150 s the exploration completion reached 85%: most of the space had been explored, and most of the corresponding nodes were in a known state.
When the standard deviation of the a priori strategy is 0.45, the model's navigation success rate is optimal. With σ = 0.45 as the initial value, the Layered-RSAC algorithm first reaches a 90% navigation success rate at 50 training steps and stabilizes at 90% to 100% after 100 training steps, significantly ahead of the Prior-Policy, DDPG, and SAC algorithms, effectively verifying the superiority of the Layered-RSAC algorithm.
