Open Access

Enhancing Research Support for Humanities PhD Teachers: A Novel Model Combining BERT and Reinforcement Learning

Feb 27, 2025


Figure 1.

Overall Workflow of the Study. This figure illustrates the overall workflow of the study, encompassing data collection, preprocessing (including data cleaning and standardization), and the use of deep learning models (BERT and RNN layers) combined with reinforcement learning for addressing research difficulties.

Figure 2.

BERT Input Representation Structure. This figure shows the BERT model’s input representation, combining token, sentence, and positional embeddings to capture contextual and sequential information.
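The input representation in Figure 2 can be illustrated with a minimal NumPy sketch: BERT sums a token embedding, a sentence (segment) embedding, and a positional embedding element-wise for each input position. All sizes and names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical sizes: vocab of 100 tokens, 2 segment ids (sentence A/B),
# max sequence length 16, hidden size 8.
rng = np.random.default_rng(0)
vocab_size, n_segments, max_len, hidden = 100, 2, 16, 8

token_emb = rng.normal(size=(vocab_size, hidden))    # one vector per token id
segment_emb = rng.normal(size=(n_segments, hidden))  # sentence A vs. sentence B
position_emb = rng.normal(size=(max_len, hidden))    # absolute position vectors

def bert_input(token_ids, segment_ids):
    """Element-wise sum of the three embeddings, as in BERT's input layer."""
    positions = np.arange(len(token_ids))
    return token_emb[token_ids] + segment_emb[segment_ids] + position_emb[positions]

x = bert_input(np.array([1, 5, 9]), np.array([0, 0, 1]))
print(x.shape)  # (3, 8): one hidden vector per input token
```

Because the three embeddings are simply added, every downstream layer sees a single vector per position that mixes content, sentence membership, and order.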

Figure 3.

Structure of the Recurrent Neural Network

Figure 4.

Effectiveness of breakthrough strategies in addressing the primary research difficulties faced by PhD teachers in new arts colleges.

Figure 5.

Convergence speed of different research resource allocation strategies.

Performance metrics of different models on the S2ORC and MAG datasets

Model Name          S2ORC dataset               MAG dataset
                    Precision  Recall  F1       Precision  Recall  F1
Transformer + DNN   0.82       0.88    0.85     0.84       0.87    0.86
BERT + LSTM         0.80       0.87    0.83     0.82       0.86    0.84
GPT-3               0.83       0.85    0.84     0.85       0.88    0.86
Ours                0.85       0.90    0.87     0.87       0.89    0.88
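As a sanity check on the table, F1 is the harmonic mean of precision and recall, so each F1 cell follows from the two cells beside it. A minimal check for the "Ours" row (values rounded to two decimals, as reported):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.85, 0.90), 2))  # 0.87  (S2ORC row)
print(round(f1(0.87, 0.89), 2))  # 0.88  (MAG row)
```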

Comparison of the model-extracted themes and their coherence scores (C_v, C_umass, C_npmi) across the S2ORC and MAG datasets

                                   S2ORC dataset                          MAG dataset
Theme ID  Extracted Keywords       C_v    C_umass  C_npmi  Publications   C_v    C_umass  C_npmi  Publications
1         Funding challenges       0.82   -0.12    0.50    95             0.85   -0.10    0.52    120
2         Resource scarcity        0.79   -0.15    0.48    80             0.80   -0.14    0.49    110
3         Publication bias         0.86   -0.09    0.53    65             0.88   -0.08    0.55    90
4         Collaboration issues     0.77   -0.19    0.45    55             0.78   -0.18    0.47    70
5         Methodological issues    0.84   -0.11    0.51    100            0.86   -0.09    0.53    130
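The C_npmi column rests on normalized pointwise mutual information (NPMI), which scores how much more often two topic words co-occur than chance would predict, normalized into [-1, 1]. A hedged sketch of that pairwise score, using toy probabilities rather than the paper's corpus:

```python
import math

def npmi(p_ij, p_i, p_j, eps=1e-12):
    """NPMI of a word pair: PMI divided by -log of the joint probability.

    p_ij: joint probability the two words co-occur in a document;
    p_i, p_j: marginal probabilities of each word. eps avoids log(0).
    """
    pmi = math.log((p_ij + eps) / (p_i * p_j))
    return pmi / -math.log(p_ij + eps)

# Toy numbers: the pair co-occurs 5x more often than independence predicts,
# giving a clearly positive score; independent words score ~0.
print(round(npmi(0.05, 0.1, 0.1), 3))  # 0.537
print(round(npmi(0.01, 0.1, 0.1), 3))  # 0.0
```

A topic's C_npmi coherence is then typically the average of this score over the top keyword pairs of the topic, which is why the tabled values sit in roughly the 0.45–0.55 band.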
Language:
English
Publication timeframe:
1 issue per year
Journal subjects:
Biology; Biology, other; Mathematics; Applied Mathematics; Mathematics, General; Physics; Physics, other