Open Access

Research on image processing based on machine learning feature fusion and sparse representation

17 March 2025

Image fusion combines multiple images of the same scene, acquired by multiple sensors, into a single image containing the complete scene information, with the aim of improving resolution and clarity for observation and further processing. In this paper, we combine machine learning feature fusion with sparse representation to construct a sparse autoencoder-based image training and fusion model, ITFSAE. The model first performs sliding-window chunking of the original image, reshapes each image chunk into a column vector, and combines these vectors into a union matrix. The union matrix is fed into a sparse autoencoder, which is trained to obtain a feature dictionary. The orthogonal matching pursuit (OMP) algorithm and a maximization selection rule are then used to obtain the joint sparse coefficient matrix, and finally the fused image is computed and output. The ITFSAE model reaches its minimum Loss and maximum Accuracy after 2500 training iterations, achieving its best performance. In the fusion of the “airplane” and “grass” images, the model’s values on five evaluation indexes (standard deviation, mutual information, entropy, average gradient, and spatial frequency) exceed those of the comparison models, indicating that the ITFSAE model constructed in this paper is more effective in image fusion and can provide a useful reference for image processing using feature fusion and sparse representation.
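The patch-extraction and OMP-based fusion steps described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the feature dictionary `D` is a random stand-in for the one the paper learns with a sparse autoencoder, the patch size, stride, and sparsity level are arbitrary choices, and the maximization selection rule is approximated here by keeping, for each patch, the coefficient vector with the larger L1 activity.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp


def extract_patches(img, size=8, stride=4):
    """Slide a window over the image and stack each patch as a column
    vector, forming the 'union matrix' of shape (size*size, n_patches)."""
    h, w = img.shape
    cols = []
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            cols.append(img[i:i + size, j:j + size].reshape(-1))
    return np.array(cols).T


def fuse_patches(D, P1, P2, n_nonzero=5):
    """Sparse-code both source patch matrices against dictionary D with
    OMP, pick the more active coefficient vector per patch (a simple
    maximization selection rule), and reconstruct the fused patches."""
    C1 = orthogonal_mp(D, P1, n_nonzero_coefs=n_nonzero)  # (n_atoms, n_patches)
    C2 = orthogonal_mp(D, P2, n_nonzero_coefs=n_nonzero)
    pick = np.abs(C1).sum(axis=0) >= np.abs(C2).sum(axis=0)
    C = np.where(pick, C1, C2)  # broadcast the per-patch choice over rows
    return D @ C                # fused patches, same shape as P1/P2


rng = np.random.default_rng(0)
img1 = rng.random((16, 16))
img2 = rng.random((16, 16))
P1 = extract_patches(img1)
P2 = extract_patches(img2)

# Hypothetical dictionary: random unit-norm atoms in place of the
# autoencoder-learned feature dictionary from the paper.
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)

fused = fuse_patches(D, P1, P2)
```

Reassembling `fused` into an image would additionally require averaging the overlapping patch regions, which is omitted here for brevity.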