Open Access

A study of auditory-associative musical emotion based on multidimensional signal processing techniques

17 Mar 2025

In this paper, we introduce an attention mechanism into the VGG16 network and use the feature maps of its convolutional layers to characterize the visual emotion of music. For the auditory side, a CNN is constructed to extract emotional features from the music signal. The extracted audio and visual features are fed into a fusion module, realizing the study of multidimensional signal processing and auditory-associative musical emotion. A comparative analysis of the emotion recognition performance of the proposed method shows that the fusion module is most effective when the audiovisual associative features are reduced to 200 dimensions. The average emotion recognition rate when fusing audiovisual features is 88.07%, which improves on single-feature recognition. At a music-clip length of 60 s, the recognition accuracy is 0.87; the shorter the clip, the higher the recognition accuracy. Rhythmic features, however, have no significant effect on emotion recognition.
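The pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the spatial-attention pooling over mock VGG16 conv feature maps, the audio feature vector, and the random projection standing in for the paper's (unspecified) dimensionality-reduction step are all assumptions; only the 200-dimensional fusion target comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(feature_maps):
    """Attention-weighted pooling of conv feature maps (C, H, W) into a (C,) vector."""
    C, H, W = feature_maps.shape
    flat = feature_maps.reshape(C, H * W)      # flatten spatial grid: (C, HW)
    scores = flat.mean(axis=0)                 # per-position saliency scores (HW,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over spatial positions
    return flat @ weights                      # attention-weighted visual feature (C,)

def fuse(visual_vec, audio_vec, out_dim=200):
    """Concatenate audiovisual features and reduce to out_dim dimensions.

    A random projection is used here purely as a stand-in for whatever
    reduction the fusion module actually applies.
    """
    joint = np.concatenate([visual_vec, audio_vec])
    W = rng.standard_normal((out_dim, joint.size)) / np.sqrt(joint.size)
    return W @ joint

visual_maps = rng.standard_normal((512, 14, 14))  # mock VGG16 conv5 output
audio_vec = rng.standard_normal(128)              # mock CNN audio-emotion features
fused = fuse(attention_pool(visual_maps), audio_vec)
print(fused.shape)                                # 200-dim fused representation
```

In a real system the fused vector would feed a classifier over the emotion categories; the 200-dimensional bottleneck reflects the abstract's finding that fusion is most effective at that size.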

Language:
English
Publication frequency:
Once a year
Journal subjects:
Biological Sciences, Life Sciences, other, Mathematics, Applied Mathematics, General Mathematics, Physics, Physics, other