AI-driven resource scheduling optimisation model and its system architecture design for digital library management
Published: 17 Mar 2025
Received: 22 Oct 2024
Accepted: 30 Jan 2025
DOI: https://doi.org/10.2478/amns-2025-0156
© 2025 Yu Zhao et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Current digital library management suffers from a mismatch between resource utilization and service performance. In this paper, we design a decomposition-based ARIMA-LSTM resource prediction model. The model dynamically adjusts the scheduling threshold by predicting the overall load of the cluster and the migration failure rate of pods, and uses the utilization rate of each resource indicator on high-load nodes as the weight of each pod's contribution. Target nodes are then selected according to the resource type dominating the load on high-load nodes, while a queue of low-load nodes is maintained for each resource metric type, thereby optimizing the scheduling of library resources. Experiments show that Kubernetes's default resource scheduling strategy yields markedly uneven overall CPU and memory utilization across nodes Node1~Node4. The baseline model IGAACO schedules resources slightly better than the Kubernetes default strategy, but it still suffers from severely unbalanced local load. In contrast, the resource scheduling model proposed in this paper, based on a neural network algorithm, balances the load of each node in the cluster and improves its load capacity. After reallocating accesses, the dynamic scheduling model reduces the cluster's overall latency, improving efficiency and achieving better load balancing.
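The migration step summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold values, resource names, and data layout are assumptions introduced for the example. A pod's contribution is weighted by the per-resource utilization of its (high-load) node, and the migration target is drawn from a queue of low-load nodes for the node's dominant resource type.

```python
# Illustrative sketch of the threshold-and-queue migration heuristic.
# Thresholds and data shapes are assumed; the paper tunes the high-load
# threshold dynamically from predicted cluster load and migration
# failure rate.

HIGH, LOW = 0.8, 0.3  # assumed static stand-ins for the dynamic thresholds

def pod_contribution(pod, node_util):
    # Weight each pod's usage of a resource by how saturated that
    # resource is on its node, so pods stressing the bottleneck rank first.
    return sum(node_util[r] * pod[r] for r in node_util)

def select_migration(nodes, pods_by_node):
    """Pick one (pod_id, source_node, target_node) migration, or None."""
    # Maintain a low-load node queue per resource metric, least-loaded first.
    metrics = next(iter(nodes.values())).keys()
    low_queues = {
        r: sorted((n for n, u in nodes.items() if u[r] < LOW),
                  key=lambda n: nodes[n][r])
        for r in metrics
    }
    for name, util in nodes.items():
        if max(util.values()) < HIGH:
            continue  # not a high-load node
        # The dominant (most saturated) resource type drives target choice.
        dominant = max(util, key=util.get)
        pod = max(pods_by_node[name], key=lambda p: pod_contribution(p, util))
        targets = [n for n in low_queues[dominant] if n != name]
        if targets:
            return pod["id"], name, targets[0]
    return None
```

For example, with Node1 saturated on CPU and Node2 lightly loaded, the heuristic evicts the pod contributing most to Node1's CPU pressure and places it on Node2, the head of the low-load CPU queue.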