Open Access

Combination of GNN and MNL: a new model for dealing with multi-classification tasks

Nov 27, 2024


In daily life, tasks such as choosing a mode of transportation, diagnosing diseases and selecting medications in healthcare, and recommending products in e-commerce can all be framed as multi-classification tasks. Effective approaches to such tasks currently include the behavior-modeling-based Integrated Choice and Latent Variable (ICLV) model and the machine-learning-based Multinomial Logit (MNL) model. The former slightly outperforms the latter on multi-classification tasks because it can identify latent variables and integrate the choice process. However, if certain shortcomings of the MNL model (the independence-of-irrelevant-alternatives assumption, the linearity assumption, and the lack of a hierarchical structure) can be addressed, MNL could outperform the ICLV model on some datasets. Graph Neural Networks (GNNs), which treat the entire feature set as a graph and model the relationships between features, break the linearity assumption and offer a more flexible, hierarchical structure, so they can effectively alleviate these limitations of MNL. We therefore propose a GNN-MNL composite model: a GNN first extracts features from the dataset, the extracted features are then used to train an MNL model, and the trained MNL finally classifies new samples. The model's accuracy was further improved by using Generative Adversarial Networks (GANs) for data augmentation during training. Validation on three datasets, including modeChoiceData, shows that the GNN-MNL composite model indeed achieves higher accuracy, confirming its feasibility. Future research could explore the generalizability of the GNN-MNL model to other classification domains.
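The two-stage pipeline described in the abstract (GNN-based feature extraction followed by MNL training and classification) can be sketched minimally in NumPy. Everything below is illustrative: the toy data, the fully connected feature graph, the single graph-convolution layer, and the gradient-descent training loop are assumptions, not the paper's actual architecture, and the GAN augmentation step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features, k classes (shapes are illustrative).
n, d, k = 200, 6, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)

# Step 1: GNN-style feature extraction.
# The paper treats the feature set as a graph; here we assume a fully
# connected feature graph with self-loops and apply one graph-convolution
# layer, H = ReLU(X A_hat W), where A_hat is the symmetrically normalized
# adjacency over the d feature nodes and W is a (random, untrained) weight.
A = np.ones((d, d))                      # fully connected feature graph
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt      # symmetric normalization
W = rng.normal(scale=0.5, size=(d, 8))
H = np.maximum(0.0, X @ A_hat @ W)       # extracted features, shape (n, 8)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

# Step 2: train an MNL (multinomial logit) classifier on the extracted
# features with a few steps of gradient descent on the cross-entropy loss.
B = np.zeros((H.shape[1], k))            # MNL coefficients
Y = np.eye(k)[y]                         # one-hot labels
for _ in range(300):
    P = softmax(H @ B)
    B -= 0.1 * H.T @ (P - Y) / n         # gradient of mean cross-entropy

# Step 3: use the trained MNL to classify samples.
pred = softmax(H @ B).argmax(axis=1)
```

In a realistic implementation the graph-convolution weights would be trained (e.g. with PyTorch Geometric) rather than fixed at random, and the MNL stage would be fit with a standard multinomial logistic-regression routine.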

Language:
English