Enhancing Research Support for Humanities PhD Teachers: A Novel Model Combining BERT and Reinforcement Learning
ABOUT THIS ARTICLE
Published online: 27 Feb 2025
Received: 08 Oct 2024
Accepted: 12 Jan 2025
DOI: https://doi.org/10.2478/amns-2025-0125
© 2025 Peng Wang, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Figure 1.
Figure 2.
Figure 3.
Figure 4.
Figure 5.
Performance metrics of different models on the S2ORC and MAG datasets

| Model Name | S2ORC Precision | S2ORC Recall | S2ORC F1 | MAG Precision | MAG Recall | MAG F1 |
|---|---|---|---|---|---|---|
| Transformer + DNN | 0.82 | 0.88 | 0.85 | 0.84 | 0.87 | 0.86 |
| BERT + LSTM | 0.80 | 0.87 | 0.83 | 0.82 | 0.86 | 0.84 |
| GPT-3 | 0.83 | 0.85 | 0.84 | 0.85 | 0.88 | 0.86 |
| Ours | 0.85 | 0.90 | 0.87 | 0.87 | 0.89 | 0.88 |
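The reported F1 scores can be sanity-checked against the precision and recall columns. A minimal Python sketch, assuming the standard harmonic-mean definition of F1 and two-decimal rounding in the reported figures (checked here against the S2ORC columns):

```python
# Consistency check on the S2ORC columns of the table above,
# assuming F1 = 2PR / (P + R) with two-decimal rounding.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F1) per model, S2ORC columns
s2orc = {
    "Transformer + DNN": (0.82, 0.88, 0.85),
    "BERT + LSTM":       (0.80, 0.87, 0.83),
    "GPT-3":             (0.83, 0.85, 0.84),
    "Ours":              (0.85, 0.90, 0.87),
}
for name, (p, r, reported) in s2orc.items():
    assert round(f1(p, r), 2) == reported, name
```

Under these assumptions every S2ORC row is internally consistent; small discrepancies elsewhere would be attributable to rounding of the precision and recall values themselves.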
Comparison of the model-extracted themes and their consistency scores across the S2ORC and MAG datasets

| Theme ID | Extracted Keywords | S2ORC Consistency Score (C_v, C_umass, C_npmi) | S2ORC Related Publications | MAG Consistency Score (C_v, C_umass, C_npmi) | MAG Related Publications |
|---|---|---|---|---|---|
| 1 | Funding challenges | 0.82, -0.12, 0.50 | 95 | 0.85, -0.10, 0.52 | 120 |
| 2 | Resource scarcity | 0.79, -0.15, 0.48 | 80 | 0.80, -0.14, 0.49 | 110 |
| 3 | Publication bias | 0.86, -0.09, 0.53 | 65 | 0.88, -0.08, 0.55 | 90 |
| 4 | Collaboration issues | 0.77, -0.19, 0.45 | 55 | 0.78, -0.18, 0.47 | 70 |
| 5 | Methodological issues | 0.84, -0.11, 0.51 | 100 | 0.86, -0.09, 0.53 | 130 |
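The C_npmi column reports normalized pointwise mutual information averaged over topic-word pairs. A minimal sketch of one common estimator, using per-document co-occurrence on a toy corpus (the paper's exact windowing scheme is not specified here, and the tokens below are purely illustrative):

```python
import math
from itertools import combinations

# Illustrative C_npmi-style coherence: NPMI averaged over topic-word
# pairs, with probabilities estimated from per-document co-occurrence.
def npmi_coherence(topic_words, documents, eps=1e-12):
    doc_sets = [set(d) for d in documents]
    n_docs = len(doc_sets)

    def prob(*words):
        # Fraction of documents containing all the given words.
        return sum(all(w in s for w in words) for s in doc_sets) / n_docs

    scores = []
    for w1, w2 in combinations(topic_words, 2):
        p12 = prob(w1, w2)
        if p12 == 0:
            scores.append(-1.0)  # words never co-occur: minimum NPMI
            continue
        pmi = math.log(p12 / (prob(w1) * prob(w2)))
        scores.append(pmi / (-math.log(p12) + eps))
    return sum(scores) / len(scores)

# Toy corpus (hypothetical tokens, not the S2ORC/MAG data):
docs = [["funding", "grant", "budget"],
        ["funding", "budget"],
        ["grant", "budget", "review"]]
print(round(npmi_coherence(["grant", "review"], docs), 3))  # → 0.369
```

NPMI is bounded in [-1, 1], with 1 for words that always co-occur and -1 for words that never do, which is why it is often preferred over raw PMI for comparing topics across corpora of different sizes.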