
Curriculum Content Construction and Updating for Intelligent Educational Systems Using Knowledge Graphs

  

Introduction

With the continuous development and application of artificial intelligence technology, intelligent education systems are being used more and more widely in school education, vocational education, online education, and other fields. Intelligent education systems can not only provide personalized learning recommendations and accurate learning assessment, but also solve many of the problems faced in traditional teaching, improving learning efficiency and teaching quality [1-4].

An intelligent education system is an innovative teaching tool that combines artificial intelligence with education, bringing considerable convenience and change for students and teachers. Its characteristics include personalized learning, automatic assessment, and interactivity; its advantages lie in improving learning outcomes, stimulating learning interest, and promoting teaching improvement [5-8]. The impact of intelligent education systems on education is revolutionary, promoting the intelligent, personalized, and sustainable development of education. Through their application, education becomes more diversified, flexible, and efficient [9-11].

In an intelligent education system, constructing a knowledge graph is an important means of raising the system's level of intelligence. A knowledge graph is a graphical model for describing and organizing knowledge: it represents entities, attributes, and the relationships between entities in graph form, constituting a large knowledge network [12-15]. Through a knowledge graph, various types of knowledge can be integrated into an interdisciplinary knowledge system that provides learners with comprehensive, accurate, and effective knowledge resources and learning paths, and it facilitates the construction and updating of course content in intelligent education systems [16-19].

Literature [20] describes the construction and application of a fuzzy knowledge graph system for intelligent education, analyzes quality resource sharing and personalized services in AI-assisted intelligent education, and points out that an intelligent-education knowledge graph can integrate disciplinary knowledge and thereby enhance the interpretability of AI-assisted intelligent education. Literature [21] used the CiteSpace software to survey the current situation, hotspots, and evolutionary trends of intelligent education research from the perspective of knowledge graphs, emphasizing that the field is gradually receiving attention and that communication between research institutions and countries is strong, while communication between individual researchers is weak. Literature [22] proposed a knowledge graph-based evaluation method to address the many problems in existing college student evaluation systems, making full use of the semantic representation and semantic reasoning abilities of knowledge graphs to obtain more accurate and comprehensive student information.

Literature [23] analyzed the current situation of Q&A teaching and designed an intelligent Q&A system integrating knowledge graph technology, intelligent Q&A technology, and big data technology, which is able to solve students' problems, construct knowledge network graphs, predict students' learning behaviors, and provide feedback on teaching effects. Literature [24] provides a broad and complete overview of the definitions and challenges of knowledge graph convergence, offering system practitioners and researchers a holistic approach to integrating, enhancing, and unifying knowledge graphs. Literature [25] explored how intelligent devices can improve and optimize the educational curriculum system in higher education by proposing and improving a clustering algorithm, showing that the improved algorithm strengthens data analysis in the teaching and learning process and can promote the popularization of educational informatization and the improvement of teaching quality. Literature [26] uses new technologies such as AI, big data, and blockchain to discuss new paths, mechanisms, and modes for knowledge graph-based professional resource construction and application, aiming to provide resource construction and application solutions for talent cultivation. The studies above analyze applications of knowledge graphs in education systems and show excellent results, but none of them addresses the use of knowledge graphs to construct and update the curriculum of an intelligent education system.

This paper describes the process and methodology of constructing and updating the curriculum content of an educational system using a knowledge graph. The jieba library is used to segment Chinese text, and a bidirectional long short-term memory (BI-LSTM) network combined with a CRF discriminative model is used for named entity recognition. An intelligent Q&A module is added to the education system to serve students' learning needs: students' Q&A intentions are matched by a deep learning model together with data records, and a BERT-TextCNN model handles the intent classification task. Finally, named entity recognition experiments, student question intent recognition tests, and intelligent Q&A module tests are conducted on the NLPCC-ICCPOL 2018 dataset and the basic education dataset of the University of T, respectively.

Methods to realize intelligent education using knowledge graphs
Knowledge graphs

A knowledge graph is a structured knowledge representation that presents information about entities, their attributes, and the relationships between them in the form of a graph. Its goal is to build a complete and accurate knowledge base so that machines can understand and reason over this knowledge and thus provide better intelligent services to humans. Knowledge graph data is derived from the wide variety of structured, semi-structured, and unstructured data on the Internet. These data come from many information sources; by processing and extracting them, entities, relationships, and attributes can be extracted and organized into an organic graph structure [27].

Knowledge graph construction method

The construction methods of knowledge graph can be mainly categorized into top-down and bottom-up.

Top-down construction: this method usually starts by constructing an ontology model, which defines the basic elements of the target domain such as entities, relations, and attributes, and specifies their semantics and constraints. This is the conceptual layer of the knowledge graph, also known as the ontology layer. Based on this ontology model, relevant entity data are then extracted and organized to form the entity layer of the knowledge graph.

Bottom-up construction: this approach first collects a large amount of raw data and then discovers the entities, attributes, and relationships in the data through data mining, information extraction, and other techniques to construct the entity layer of the knowledge graph. The characteristics of the entity layer are subsequently analyzed and summarized, and the conceptual layer of the knowledge graph is gradually formed. A minimal illustration of the two layers is sketched below.
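To make the two layers concrete, the following minimal Python sketch (with hypothetical course-domain names, not taken from this paper's data) represents concept-layer triples defined top-down and entity-layer triples gathered bottom-up, together with a simple check that extracted facts respect the ontology.

# Concept layer (ontology layer): classes and the relations allowed between them.
ontology = [
    ("Course", "taught_by", "Teacher"),
    ("Course", "has_knowledge_point", "KnowledgePoint"),
]

# Entity layer: concrete facts extracted from raw course data.
entities = [
    ("Computer Principles", "taught_by", "Prof. Zhang"),
    ("Computer Principles", "has_knowledge_point", "Cache memory"),
]

# A bottom-up pipeline can validate each extracted triple against the ontology
# before accepting it into the graph.
allowed_relations = {relation for _, relation, _ in ontology}
for head, relation, tail in entities:
    assert relation in allowed_relations, f"relation '{relation}' is not defined in the ontology"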

Knowledge graph-based approach for course content construction and updating

The construction of a knowledge graph is based on ontology rules defined by domain experts to extract and fuse fact-consistent triples from a large amount of heterogeneous data. To achieve accurate extraction and fast aggregation of large amounts of knowledge, the construction process relies on techniques such as ontology construction, named entity recognition, relationship extraction, and knowledge storage.

The data layer originates from major Internet platforms, and the raw data must be cleaned and aligned. Through entity recognition, meaningful entity information is identified in the large amount of heterogeneous data. In the algorithm layer, the encapsulated algorithms are invoked according to the needs of the task layer; the data are linked into standard-compliant triples and imported into the graph database to complete knowledge storage. In the final task layer, this stored knowledge is used for applications such as the intelligent Q&A module: this stage concerns how to utilize the stored knowledge and how to answer students' questions effectively.

Ontology construction

Ontology construction is a key step in building a subject knowledge graph. Through ontology construction, we can comprehensively define and explain the characteristics of the entities in the knowledge graph and the relationships between them. When constructing a disciplinary knowledge ontology library, we first identify the main entities, relationships, and attributes that constitute it, and define the relationships between entities and the attributes of entities. For example, teacher and thesis ontologies are used to provide a comprehensive description and representation of knowledge, so as to construct a knowledge graph with rich semantics and structure. Moreover, by explicitly defining the features and relationships of these entities, we can further support efficient knowledge retrieval and reasoning.

Knowledge extraction

The goal of knowledge extraction is to automatically extract useful information, such as named entities, relationships, and events, from unstructured data. Relationship extraction determines the associative relationships between recognized entities and plays an important role in constructing knowledge graphs, text understanding, information retrieval, and other tasks. This paper uses neural network-based relationship extraction, which allows the network to infer relationships between entities from their context by learning local and global features in the text. A convolutional neural network (CNN) is a powerful feature extractor that learns complex patterns hierarchically. Here, the text is divided into several segments, a convolution operation is applied to each segment, and the outputs of all segments are then fused, which captures more complex, long-range dependencies between entities. This approach handles long-distance and complex relationships effectively and performs better than traditional CNN models [28].
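As an illustration of this idea, the following PyTorch sketch (an assumption for exposition, not the authors' exact architecture; the vocabulary size, kernel widths, and relation set are placeholders) convolves word embeddings with kernels of several widths, max-pools the feature maps, and classifies the relation expressed in the sentence.

import torch
import torch.nn as nn

class CNNRelationExtractor(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, num_relations=10,
                 kernel_sizes=(2, 3, 4), num_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per kernel width; each learns local n-gram features.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_relations)

    def forward(self, token_ids):                          # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)      # (batch, embed, seq)
        # Max-pool each feature map over the sequence, then concatenate.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))   # relation logits

# Toy usage: two sentences of 30 token IDs each.
logits = CNNRelationExtractor()(torch.randint(0, 5000, (2, 30)))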

Named entity recognition and relationship extraction are two key steps in knowledge extraction, and they are usually used in combination. Named entity recognition first locates and identifies important entities in the text; relationship extraction then identifies the various types of relationships between these entities. Together, the two steps extract structured knowledge such as facts, events, and relationships from large amounts of unstructured text, greatly improving the efficiency of information acquisition and understanding. This not only helps people better understand text, but also enables machines to understand and process natural language, opening up new application scenarios such as the knowledge graph-based intelligent Q&A application proposed later.

Knowledge storage

With the exponential growth of data volume in the Internet era, the storage and management of graph data is particularly important. Knowledge graphs can be stored in many different ways; graph databases are a common choice, as they can efficiently store and query graph-structured data. Nodes in a graph database represent entities and edges represent relationships between entities. Graph databases support complex graph queries, such as finding specific paths or detecting patterns in the graph. Neo4j and OrientDB are two common graph databases.

In this project, we chose the Neo4j graph database to build a prototype knowledge graph system for subject data; a storage sketch is given below.
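A minimal storage sketch with the py2neo client (which the implementation section later installs) might look as follows; the connection settings, labels, and example triple are illustrative assumptions.

from py2neo import Graph, Node, Relationship

# Connect to a local Neo4j instance (credentials are placeholders).
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

course = Node("Course", name="Computer Principles")
teacher = Node("Teacher", name="Prof. Zhang")
graph.merge(course, "Course", "name")       # merge on the name property to avoid duplicates
graph.merge(teacher, "Teacher", "name")
graph.create(Relationship(teacher, "TEACHES", course))

# Example query: who teaches a given course?
result = graph.run(
    "MATCH (t:Teacher)-[:TEACHES]->(c:Course {name: $name}) RETURN t.name AS teacher",
    name="Computer Principles").data()
print(result)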

Intelligent Q&A

Intelligent Q&A systems can usually be divided into three sub-modules: question parsing, data retrieval, and answer construction. A knowledge graph-based intelligent Q&A system first preprocesses the questions submitted by students, then extracts the core entity information in the questions, converts it into query statements that can be executed against the knowledge base, and finally returns the constructed answers. Current Q&A methods can be categorized into three kinds: methods based on constructed templates, methods based on semantic parsing, and methods based on deep learning.

The core idea of the deep learning-based method is to first map the students' questions and the related semantic information into word vectors, then use deep learning algorithms to calculate the similarity between the vectors, and finally score and rank the candidate relations against the question sentences to obtain the answer. This is the Q&A method adopted in this paper for intelligent responses.

Design of Q&A module based on intelligent education system
Named Entity Identification for System Q&A Functions

Before named entity recognition, data preprocessing is needed. Upon receiving a question from a student, the question is first segmented using jieba, a third-party Python library. jieba is a popular Chinese word segmentation library that supports segmentation, part-of-speech tagging, and keyword extraction for Chinese text. The result of segmentation is a series of words or phrases that describe different aspects of the question, as in the sketch below.
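A minimal preprocessing sketch with jieba is shown below; the example question is illustrative.

import jieba
import jieba.posseg as pseg
import jieba.analyse

question = "T大学计算机原理课程的主讲老师是谁"

words = jieba.lcut(question)                              # plain word segmentation
tagged = [(w, flag) for w, flag in pseg.cut(question)]    # word + part-of-speech tag
keywords = jieba.analyse.extract_tags(question, topK=3)   # TF-IDF keyword extraction

print(words)
print(tagged)
print(keywords)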

To reduce labor costs, this paper adopts a deep learning-based approach and chooses the BI-LSTM-CRF model for named entity recognition.

BI-LSTM-CRF is a deep learning architecture consisting of two parts, a bidirectional long short-term memory network (BI-LSTM) and a conditional random field (CRF), and it can be used for named entity recognition. Compared with a unidirectional LSTM, BI-LSTM alleviates the problem that an LSTM can only encode forward utterance information.

LSTM is a special type of RNN that can learn long-term dependencies. It adds a cell state $C_t$ to the RNN and designs a “gate” structure that controls the ability to add or remove information from the cell state. A standard LSTM consists of a forget gate, an input gate, and an output gate [29].


The forget gate selectively forgets information in the cell state. It reads the hidden state $h_{t-1}$ of the previous moment and the input $x_t$ of the current moment and outputs a value between 0 and 1 for each number in the cell state $C_{t-1}$: an output of 1 means “completely retain” and an output of 0 means “completely discard”. The output is denoted $f_t$, see Equation (1): $$f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)$$

The input gate selectively records new information into the cell state. It reads the hidden state $h_{t-1}$ of the previous moment and the input $x_t$ of the current moment, producing the candidate information $\tilde C_t$ carried by the new input and the proportion $i_t$ of that information to be retained, see Equations (2) and (3). The updated cell state $C_t$ is then obtained from the results of the forget gate and the input gate, see Equation (4): $$\tilde C_t = \tanh\left(W_C \cdot [h_{t-1}, x_t] + b_C\right)$$ $$i_t = \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right)$$ $$C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde C_t$$

The output gate produces the output of the LSTM. It reads the hidden state $h_{t-1}$ of the previous moment and the input $x_t$ of the current moment to compute the output gate result $o_t$, see Equation (5). The hidden state at the current moment is then obtained by combining the updated cell state $C_t$, see Equation (6): $$o_t = \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right)$$ $$h_t = o_t \cdot \tanh\left(C_t\right)$$
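To make the gate equations concrete, the following numpy sketch transcribes Equations (1)-(6) into a single LSTM step; the dimensions and random parameters are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):
    z = np.concatenate([h_prev, x_t])          # the concatenation [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)               # forget gate, Eq. (1)
    C_tilde = np.tanh(W_C @ z + b_C)           # candidate information, Eq. (2)
    i_t = sigmoid(W_i @ z + b_i)               # input gate, Eq. (3)
    C_t = f_t * C_prev + i_t * C_tilde         # cell state update, Eq. (4)
    o_t = sigmoid(W_o @ z + b_o)               # output gate, Eq. (5)
    h_t = o_t * np.tanh(C_t)                   # hidden state, Eq. (6)
    return h_t, C_t

hidden, inp = 4, 3
rng = np.random.default_rng(0)
# Alternate weight matrices (hidden x (hidden+inp)) and bias vectors (hidden).
params = [rng.standard_normal((hidden, hidden + inp)) if k % 2 == 0
          else rng.standard_normal(hidden) for k in range(8)]
h, C = lstm_step(rng.standard_normal(inp), np.zeros(hidden), np.zeros(hidden), *params)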

BI-LSTM combines a forward LSTM and a backward LSTM to obtain both the forward-order and reverse-order hidden-layer information of the input sequence, which enables better modeling of sequence information and learning of contextual semantics.

In the named entity recognition task, BI-LSTM can effectively encode the information in the sentence, better recognize bidirectional semantic dependencies, and improve the accuracy of entity recognition.


The CRF is a discriminative model for modeling conditional distributions. It takes the influence of neighboring context or states on the prediction into account, which makes it well suited to named entity recognition. The output of BI-LSTM for each word is a labeling score, and the CRF adds constraints on the final labels to ensure the validity of the predicted label sequence [30]. The loss function of the CRF involves two types of scores, the emission score and the transfer score:

The emission score can be obtained from the labeling scores output from BI-LSTM.

The transfer score represents the score of moving from one label to another; a transfer matrix is used to store the scores of all label-to-label transitions and thus represents the constraints on the labels. The transfer matrix is a parameter of the BI-LSTM-CRF model: it can be randomly initialized before training, and all the scores in this matrix are updated during training. With continuous training, these scores become more and more reasonable, forming sensible constraints on the labels.

The goal of the model is to maximize the probability of the true path, as shown in Equation (7), where $P_{RealPath}$ denotes the score of the true path and $P_1 + P_2 + \cdots + P_N$ denotes the sum of the scores of all possible paths: $$prob = \frac{P_{RealPath}}{P_1 + P_2 + \cdots + P_N}$$

For ease of calculation, the negative logarithm of Eq. (7) is taken as the loss function, see Eq. (8): $$LossFunction = \log\left(P_1 + P_2 + \cdots + P_N\right) - \log P_{RealPath}$$

The score of the real path is the sum of its emission scores and transfer scores, and the sum of the scores of all paths can be calculated by dynamic programming. The BI-LSTM-CRF model is obtained by connecting the BI-LSTM and the CRF through a fully connected layer, as in the sketch below.
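A minimal BI-LSTM-CRF sketch in PyTorch is given below; it assumes the third-party pytorch-crf package for the CRF layer (the paper does not name a specific implementation), and the vocabulary size, hidden size, and tag set size are placeholders.

import torch
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf (an assumption, not named by the paper)

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden=128, num_tags=7):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.emissions = nn.Linear(hidden, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)     # holds the transfer (transition) matrix

    def loss(self, token_ids, tags, mask):
        # mask is a bool tensor marking real (non-padding) tokens.
        e = self.emissions(self.lstm(self.embedding(token_ids))[0])
        return -self.crf(e, tags, mask=mask)           # negative log-likelihood, as in Eq. (8)

    def decode(self, token_ids, mask):
        e = self.emissions(self.lstm(self.embedding(token_ids))[0])
        return self.crf.decode(e, mask=mask)           # best-scoring tag sequence per sentence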

Intent recognition model design and construction
Intent Recognition Model Design

In the Q&A function, the questions provided by students may have the following problems. First, there is little contextual semantic information: students may input short statements or abbreviations, which makes it difficult to capture contextual semantics effectively, so semantic understanding requires a more accurate model. Second, entity references in the question may be unclear: students may make typos or submit incomplete questions, which affects the quality of the answer.

Given these problems, traditional string matching or statistics-based methods require considerable manpower and time to prepare large numbers of templates in advance, and even with many templates, the complexity of natural language means some expressions will inevitably be missed; exhausting all possibilities is not feasible. Therefore, this paper completes intention matching through deep learning models together with data records, which avoids consuming excessive labor and material resources and also improves the efficiency of the system to a certain extent.

Pre-training model selection

As with named entity recognition, transforming text into vectors is one of the most basic steps in solving natural language problems, and its importance is self-evident. In this section, the BERT model is selected for pre-training.

Selection of Classifiers

In intent recognition, a single BERT model could complete the experiment, but relying on BERT alone, with only small changes to its internal parameter values, would inevitably produce overfitting; the features learned by a single model are also limited, so its effect is not satisfactory. Therefore, TextCNN is fused on top of BERT: it uses convolution kernels of multiple sizes to extract information from text spans of different lengths, learning more keyword information and compensating for the deficiency of the BERT layer.

Intent Recognition Model Architecture

In this study, the BERT-TextCNN model is used for the intent classification task. The model framework is divided into a BERT embedding layer, a feature extraction layer, and a classification layer. Taking “Who is the lecturer of Computer Principles at the University of T?” as an example, the processing flow is as follows.

The BERT layer splits the statement “Who is the lecturer of Computer Principles at the University of T?” into individual Chinese characters, the smallest unit, and generates a text sequence.

The sequence in the BERT layer is encoded as vectors according to the token IDs in the dictionary, and all vectors are aligned by padding to ensure consistent vector dimensions.

The vector matrix is input into the TextCNN model, the parameters are set, feature maps of different dimensions are computed and extracted using kernels of different sizes, and the resulting vectors are spliced together after pooling.

The spliced vectors are passed through a Softmax function for probability calculation. A minimal sketch of this pipeline is given below.
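The following PyTorch sketch outlines the BERT-TextCNN pipeline described in these steps; the "bert-base-chinese" checkpoint, kernel widths, and filter counts are assumptions rather than the paper's exact configuration, and loading the checkpoint downloads pretrained weights.

import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

class BertTextCNN(nn.Module):
    def __init__(self, num_intents=5, kernel_sizes=(2, 3, 4), num_filters=64):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        dim = self.bert.config.hidden_size
        # One convolution per kernel width over the BERT token embeddings.
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_intents)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h = h.transpose(1, 2)                               # (batch, dim, seq)
        # Max-pool each feature map, splice, then classify.
        pooled = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))            # intent logits

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
batch = tokenizer(["T大学计算机原理的主讲老师是谁"], padding=True, return_tensors="pt")
probs = torch.softmax(BertTextCNN()(batch["input_ids"], batch["attention_mask"]), dim=-1)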

The main layers are introduced below.

Feature extraction layer

The feature extraction layer can be subdivided into a convolutional layer, a pooling layer, a fusion layer, and a fully connected layer. Information is extracted by the kernels in the convolutional layer, the pooling layer pools the extracted information, and the vectors are then fused and reprocessed through the fully connected layer. The TextCNN model used here contains three convolution kernels of different sizes, compared in different training combinations, and the parameters in each kernel are optimized by the back-propagation algorithm; the number of weights in a kernel is the product of “the number of words the kernel covers in the vertical direction” and “the dimension of the word vector”. During processing, each kernel yields a feature column vector that takes into account the word-order relationship between several neighboring words, so that local features of the text are extracted; the calculation is shown in (9): $$a_i = f\left(W \times T_{i,i+n-1} + b\right)$$

where $W$ is the weight, $b$ is the bias, $n$ is the size of the convolution kernel, $f$ is a nonlinear function, and $T_{i,i+n-1}$ represents the word vectors from position $i$ to $i+n-1$ of the text.

As in many neural networks, the output of the convolutional layer then enters the pooling layer. The features obtained in the convolutional layer are high-dimensional, which burdens subsequent computation; pooling reduces the size of the data so that important semantic features are not lost while the number of parameters is reduced, and it also helps prevent overfitting. Max pooling is used here, i.e., the maximum value in each feature vector is selected to represent that vector.

Three different local features are obtained after pooling; these are then spliced by the fusion layer and fused into a single vector. At this point the acquired features are still local, so a fully connected layer is needed: it transforms all the local features into global features, after which the Softmax function is used for intent recognition.

Classification Layer

The Softmax layer is the classification layer, often used for multi-class problems. It compresses each element of the vector into a value between 0 and 1, indicating the probability of the corresponding class. The probability formula is shown in (10): $$\mathrm{softmax}\left(x_i\right) = \frac{e^{x_i}}{\sum_{c=0}^{k} e^{x_c}}$$

The exponential function is introduced into Softmax because it grows quickly, which widens the numerical gap between categories and makes the dominant element as prominent as possible, benefiting the discrimination of multi-class tasks. For the output vector $x$ of BERT, the probability that the vector corresponds to category $i$ is calculated as shown in (11): $$P(y = i \mid x) = \frac{e^{x^{T} w_i}}{\sum_{c=0}^{k} e^{x^{T} w_c}}$$

Let the samples in the training set be $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where $x_i$ is the vector representation of the question sentence after word embedding and feature extraction, $y_i$ denotes the category vector to which question $x_i$ belongs, and $n$ denotes the size of the sample set. The sentence-level vector $C$ output by BERT is passed through a fully connected layer and then through the Softmax function to obtain the category probabilities, as shown in (12): $$P = \mathrm{softmax}\left(C W^{T}\right)$$

During fine-tuning, the loss function guides the model toward better results; it is calculated as shown in (13): $$loss_i = -\log\frac{e^{x_i}}{\sum_{c=0}^{k} e^{x_c}} = \log\sum_{c=0}^{k} e^{x_c} - x_i$$

Taking logarithms on both sides leads to Equation (14): $$\log\left(P_i\right) = \log\frac{e^{x_i}}{\sum_{c=0}^{k} e^{x_c}}$$
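A small numerical check of Equations (10) and (13), with illustrative logits:

import numpy as np

x = np.array([2.0, 0.5, -1.0])                   # scores for k+1 = 3 classes
i = 0                                            # index of the true class

p = np.exp(x) / np.exp(x).sum()                  # softmax probabilities, Eq. (10)
loss_direct = -np.log(p[i])                      # -log softmax(x_i)
loss_rewritten = np.log(np.exp(x).sum()) - x[i]  # right-hand side of Eq. (13)

assert np.isclose(loss_direct, loss_rewritten)   # the two forms agree
print(p, loss_direct)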

Implementation of the automatic subject knowledge Q&A system
System development environment

The system adopts a B/S architecture. The software and hardware development environment includes the PyCharm development platform, the Element UI framework for the front-end pages, the Python-based Django framework for the back-end business logic, the Scrapy 2.3.0 crawler framework, and the HanLP language cloud platform.

System Functional Module Implementation

Data Collection Module

The data collection module is the entrance and foundation of the whole system. It mainly collects network resources and paper resources, such as Baidu Encyclopedia, thematic websites, paper textbooks, syllabi, and lesson plans, using the Scrapy crawler framework and the OCR interface of Baidu AI.

The Scrapy crawler framework is composed of an engine, a scheduler, a downloader, crawlers (spiders), and data pipeline components; because it is built on the Twisted asynchronous network framework, data crawling is greatly accelerated. The engine, scheduler, and downloader are already implemented in the framework and need not be coded by the developer. (1) Scrapy engine: the core of the crawler framework, responsible for controlling the transfer of data and requests between the framework components. (2) Scheduler: the storage and scheduling center for URL addresses, responsible for queueing URL requests sent by the engine and deciding when each URL is requested. (3) Downloader and downloader middleware: the downloader downloads the Requests passed by the scheduler and returns them to the engine; downloader middleware allows developers to customize and extend the downloader, for example by setting proxies, cookies, or the User-Agent. (4) Spider and spider middleware: the spider processes the Response passed by the Scrapy engine, extracts the URLs and data in it, and returns them to the engine; spider middleware allows developers to customize Request components and filter Response content. (5) Data pipeline: responsible for processing the data passed by the engine, such as data iteration and data storage.

This section uses Google Chrome, JSONViewer, and other tools to analyze the URL rules of the target websites, then creates a custom crawler by defining a spider class in the Scrapy project: the spider name, allowed domains, start addresses, and crawler logging are defined, the data parsing rules are written with XPath, and a data pipeline is set up to store the data, as in the sketch below.
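A skeleton of such a custom spider is sketched below; the domain, start URL, and XPath expressions are placeholders rather than the project's real targets.

import scrapy

class CourseSpider(scrapy.Spider):
    name = "course_spider"                        # crawler name
    allowed_domains = ["example.com"]             # allowable range
    start_urls = ["https://example.com/courses"]  # start address

    def parse(self, response):
        # Parse each course entry with XPath and hand the item to the data pipeline.
        for row in response.xpath("//div[@class='course-item']"):
            yield {
                "title": row.xpath(".//h2/text()").get(),
                "teacher": row.xpath(".//span[@class='teacher']/text()").get(),
            }
        # Follow pagination links back through the engine to the scheduler.
        next_page = response.xpath("//a[@rel='next']/@href").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)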

To improve OCR recognition and ensure data quality, this section calls the OCR interface of Baidu AI to collect paper-based resources. There are two main ways to use it: downloading the OCR module or calling the official interface. To facilitate code debugging and data preservation, this section uses the pip command to download it locally for invocation. Baidu AI's OCR recognition involves parameter verification, image recognition, JSON result return, and text extraction and storage.

System Management Module Implementation

The system management module is designed for administrators to carry out student management, system parameter maintenance, and knowledge base maintenance. In the student management sub-module, administrators can modify student rights and personal information, and delete or add students. Django provides a built-in user management mechanism, so it is only necessary to create the student class in models.py and define the student-related fields to enable student management. However, this alone cannot satisfy customized permission settings for individual students, so the permissions parameter must also be defined in the Meta class under the student class. In addition, to implement modification of student information, the names of the database table and view table must be defined in advance in the model class of models.py, business methods must be created in views.py with their parameters and response objects set, and routing must be configured in urls.py, as in the sketch below.
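A sketch of the student model with customized Meta permissions, as it might appear in the project's models.py, is given below; the field and permission names are illustrative.

from django.db import models

class Student(models.Model):
    student_no = models.CharField(max_length=20, unique=True)
    name = models.CharField(max_length=50)
    email = models.EmailField(blank=True)

    class Meta:
        db_table = "student"                     # explicit database table name
        permissions = [                          # customized per-student permission entries
            ("can_use_qa", "Can use the knowledge Q&A module"),
            ("can_view_record", "Can view learning records"),
        ]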

For system parameter maintenance, the django-crontab module can be installed and a time-setting script written to perform scheduled server startup and shutdown. Knowledge base maintenance covers the updating of the subject knowledge graph and the MongoDB content; it is realized by installing the py2neo module and registering the app in settings.py, which converts Python statements into the corresponding database operation statements.

Personal information module realization

The personal information module serves the login and registration of users at all levels, the maintenance of personal information, and the statistics of learning records. In models.py, model classes are defined for administrators, students, and social learners. In views.py, methods are defined for checking the login name and password, validating the data entered when registering personal information, and requesting statistics of a specific student's learning records. Finally, the routes are configured in urls.py.

Knowledge Quiz Module and Visualization Implementation

The knowledge Q&A module is the core of the automatic Q&A system. It mainly consists of question recognition and preprocessing, answer generation and retrieval, history query, and information feedback submodules.

The history query lets students view their recent knowledge queries so that they can review and consolidate in time; it is implemented by querying the database for records containing the student field and returning the results to views.py. Information feedback uses a text field to receive the text entered by students and then sends it with the lightweight send_mail method provided in Django's django.core.mail module, as in the sketch below.
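A sketch of the feedback submodule built on send_mail is shown below; the addresses are placeholders, and the mail backend is assumed to be configured in settings.py.

from django.core.mail import send_mail

def submit_feedback(student_name: str, feedback_text: str) -> None:
    # Forward the text a student submits to the system maintainers.
    send_mail(
        subject=f"Q&A system feedback from {student_name}",
        message=feedback_text,
        from_email="noreply@example.edu",        # placeholder sender
        recipient_list=["admin@example.edu"],    # placeholder maintainer address
        fail_silently=False,
    )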

The knowledge visualization module presents the queried knowledge points and the relationships between them as a graph structure on the student's page, helping students deepen their understanding and mastery of the knowledge. This module is implemented with a D3.js force-directed graph; the construction process includes defining the drawing board and canvas and loading the query answers, as shown in Figure 1.

Figure 1.

Construction flow of the D3.js force-directed graph

Application effect of intelligent education system
Experiment on the accuracy of course knowledge content recommendation

Precision, recall, F1 value, and accuracy are used as the evaluation criteria for the Q&A accuracy of the knowledge graph-based intelligent education system proposed in this paper. The experimental parameters for entity recognition are shown in Table 1.

Table 1. Experimental parameters for entity recognition

Experimental parameters Value
train_batch_size 48
eval_batch_size 64
max_seq_length 64
num_train_epochs 10
drop_out 0.5
embedding_size 752
learning_rate 5.0×10-5
hidden_dropout_prob 48
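For reference, the precision P, recall R, F1 value, and accuracy reported below follow the standard definitions in terms of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN): $$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F1 = \frac{2PR}{P + R}, \qquad Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$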

In this paper, we use an entity recognition dataset based on NLPCC-ICCPOL 2018 and the pre-processed basic education dataset of the University of T.

Experimental results based on NLPCC-ICCPOL 2018 dataset

As the number of iterations increases during training, the loss rate of the entity recognition model decreases and eventually stabilizes with only slight fluctuation; the change in loss rate is shown in Figure 2.

In the figure, the horizontal coordinate indicates the number of times the entity recognition model is evaluated on the validation set during the iteration process, and the vertical coordinate indicates the loss rate on the validation set at each evaluation. As the model is trained continuously, the loss rate keeps decreasing and finally fluctuates around a stable value, which shows that the model has been fully trained.

Figure 2.

Change of loss rate during training

The F1 value of the entity recognition model changes with the number of training epochs as shown in Figure 3. The horizontal coordinate (epochs) represents the number of training epochs, and the vertical coordinate represents the F1 value during training. As training proceeds, F1 keeps rising, then stabilizes and drops slightly; the highest F1 is reached at the seventh epoch, where it reaches 98.67%.

Figure 3.

F1 value during training

The test results of the different models are shown in Table 2. Comparing the entity recognition model proposed in this paper with the CRF, Bi_LSTM, Bi_LSTM+CRF, and BERT models shows that it achieves higher precision P, recall R, and F1 than the other deep learning models.

Table 2. Test results of different models

Model Precision (%) Recall (%) F1 (%)
CRF 73.22 66.08 69.47
Bi_LSTM 82.28 75.23 78.60
Bi_LSTM+CRF 87.35 87.63 87.49
BERT 94.69 95.33 95.01
Ours 98.87 98.98 98.93

Table 2 lists the precision P, recall R, and F1 of each deep learning model and of the entity recognition model proposed in this paper for comparison; Figure 4 presents the same comparison as a bar chart for a more intuitive view.

Figure 4.

Comparison of the deep learning models

In Figure 4, the horizontal coordinate represents each deep learning model and the vertical coordinate represents the metric value; within each group, the first bar is the precision P, the second the recall R, and the third the F1 value. All three values of this paper's model are higher than those of the other deep learning models.

Experimental results based on the basic education dataset of the University of T

Figure 5 shows the change of the loss rate. As the number of iterations increases during training, the loss rate of the entity recognition model decreases and eventually stabilizes with only slight fluctuation. The horizontal coordinate indicates the number of times the model is evaluated on the validation set during the iteration process, and the vertical coordinate indicates the loss rate on the validation set at each evaluation. As the model is trained continuously, the loss rate keeps decreasing and finally fluctuates around a stable value, which shows that the model has completed training.

Figure 5.

Change of loss rate during training

The F1 value of the entity recognition model changes with the number of training epochs as shown in Figure 6. The horizontal coordinate (epochs) represents the number of training epochs, and the vertical coordinate represents the F1 value during training. As training proceeds, F1 keeps rising, then stabilizes and drops slightly; the highest F1 is reached at the fifth epoch, where it reaches 97.43%.

Figure 6.

F1 value during training

Student Intent Recognition Test

In this paper, the user intent recognition task is converted into a question classification task, and user questions are categorized into five types using this paper's algorithm: factual, statistical, whether-or-not, list, and method. The prediction results of the classifier for the five question types are shown in Table 3.

Table 3. Classifier experimental results (%)

Question type Precision Recall F1
Factual type 92.22 92.63 92.42
Statistical type 92.63 93.48 93.05
Whether or not type 91.82 92.54 92.18
List type 91.51 91.29 91.40
Method type 89.61 90.17 89.89

Table 3 shows that this paper's algorithm performs well on the dataset: the F1 scores for all question types are around 90.0%, indicating good classification ability. The statistical type is classified best, with a precision of 92.63%, a recall of 93.48%, and an F1 score of 93.05%. Its F1 score is slightly higher than those of the other four types, indicating that the features of statistical questions are more distinctive and that the model captures them better. The F1 score of the method type is slightly lower, possibly because the test set contains few samples of this type or because its features are less distinctive. In addition, the differences in F1 scores across the five question types are small, and precision and recall are relatively balanced, indicating that the model classifies each type of question comparably well; the overall classification performance can support the user intent recognition task.

In addition, Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) are selected as comparison models to verify the effectiveness of this paper's algorithm; the experimental results of the models are shown in Figure 7. The F1 scores of SVM, KNN, and this paper's algorithm are all above 85.0%, and the precision, recall, and F1 score of this paper's algorithm are higher than those of the other two. The precision of SVM is slightly higher than that of KNN, but its recall is much lower, indicating that the misclassification rate of SVM is higher than that of KNN.

Figure 7.

Comparison of the effects of multiple models

Table 4 shows the experimental comparison of the models. Combined with the evaluation indexes, the results show that under the same dataset and feature selection, the precision, recall, and F1 score of the SVM algorithm are lower, indicating that the model performs only moderately in this experiment, which is partly due to the small dataset. The KNN algorithm achieves a recall of 89.38%, but its precision and F1 score are lower, suggesting that the algorithm may be overfitting.

Table 4. Experimental comparison of the models (%)

Classification algorithm Macro precision Macro recall Macro F1
SVM 87.36 85.81 86.58
KNN 86.84 89.38 88.09
Ours 91.54 92.22 91.88

This paper's algorithm outperforms the other two on this question classification task: its precision, recall, and F1 score exceed those of SVM by 4.18%, 6.41%, and 5.3%, and those of KNN by 4.7%, 2.84%, and 3.79%, respectively, indicating higher accuracy and generalization ability and a more satisfactory classification effect. In addition, the training times of the three algorithms are compared: because the KNN algorithm needs to calculate the distance between each test sample and the training samples, its training time is the longest, while the SVM algorithm and this paper's algorithm require relatively little training time.

Knowledge Graph-based Intelligent Q&A Module Testing

To verify the practical effectiveness of the system, the Q&A function must be tested. In this paper, 1,500 questions within the answering scope of the Q&A system are selected from the Chinese learning intentions dataset CMID for testing, and the practical value of the Q&A system is assessed according to whether these questions are answered correctly. The test mainly examines the following three indicators:

Whether the test question was answered correctly.

Whether the entities in the test question were recognized.

Whether the query intent of the test question can be correctly recognized.

In each experiment, 120 questions randomly selected from the dataset were put to the Q&A system, and the results were checked manually; 10 experiments were conducted in total, with the results shown in Table 5.

Table 5. Comparison results

No. Number of questions The number of answers returned The correct number of answers Accuracy Precision
1 120 108 95 79.2% 87.96%
2 120 103 89 74.2% 86.41%
3 120 91 84 70.0% 92.31%
4 120 96 85 70.8% 88.54%
5 120 106 91 75.8% 85.85%
6 120 110 97 80.8% 88.18%
7 120 107 96 80.0% 89.72%
8 120 97 90 75.0% 92.78%
9 120 105 98 81.7% 93.33%
10 120 89 82 68.3% 92.13%
Average 120 101.2 90.7 75.6% 89.62%

In Table 5, the accuracy rate is the number of correct answers as a percentage of the number of questions asked, and the precision rate is the number of correct answers as a percentage of the number of answers returned. Across the 10 tests, an average of 101.2 answers were returned, of which an average of 90.7 were correct. When a query could not be parsed successfully, the system returned a pre-set failure response template: “I'm sorry, I can't understand your question.”
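As a quick check of these two definitions against the data, the first experiment in Table 5 gives: $$Accuracy_1 = \frac{95}{120} \approx 79.2\%, \qquad Precision_1 = \frac{95}{108} \approx 87.96\%$$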

All things considered, the Q&A system achieved a 75.6% accuracy rate, which shows practical value. Analysis of the test results reveals the following problems:

The data in the Chinese knowledge graph is not complete enough: it contains only 36,000 Chinese entities, which may not cover all the entities in users' questions, leading to lower accuracy of the Q&A system.

In the intent recognition part, the set of question intent types is incomplete, so some question types cannot be covered and the system cannot correctly recognize the intent of those questions. Both aspects need subsequent optimization.

Conclusion

In this paper, we design a course content knowledge Q&A system applicable to intelligent education using knowledge graph and deep learning techniques. On the NLPCC-ICCPOL 2018 dataset and the basic education dataset of the University of T, this paper's model reaches highest F1 values of 98.67% and 97.43%, respectively, and its loss rate converges quickly and stabilizes, indicating that the model can recommend new learning content accurately. Intent recognition accuracy is high for all five question types, with the statistical type classified best at an F1 score of 93.05%. On the randomly selected test questions (120 per run), the Q&A system achieved an average accuracy of 75.6%, which can meet students' usage requirements. In summary, the knowledge graph-based intelligent education system proposed in this paper can construct and update course content effectively and has practical application value.
