Application and Innovation in Financial Market Risk Management Based on Big Data and Artificial Intelligence
Published online: 21 March 2025
Received: 31 Oct 2024
Accepted: 15 Feb 2025
DOI: https://doi.org/10.2478/amns-2025-0579
© 2025 Ruimei Wang, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Over the past thirty years, under the profound influence of economic globalization and financial integration, the global financial market has developed rapidly while its volatility has intensified, and enterprises, financial institutions, and ordinary investors now face unprecedented financial risks. The occurrence of financial risk not only seriously affects the normal operation of enterprises and financial institutions and the livelihoods of individuals, but also causes serious harm to the health and stability of national and even global financial markets and economies [1-4].
With the progress of the times, people's investment philosophy has changed: they are no longer satisfied with earning income through labor alone, and the financial market has undergone radical change in a very short period of time. A variety of trading markets have accordingly been created, including the stock exchange market, the currency market, and the debt market, with daily turnover reaching hundreds of millions of dollars [5-7]. Risk assessment and professional management of these markets are therefore needed.
Risk management is commonly defined as the decision-making process by which organizations or individuals reduce the negative consequences of risk [8-10]. The process comprises the following steps: risk identification, risk assessment, risk evaluation, selection of risk management techniques, and evaluation of risk management effectiveness [11-12]. The purpose of risk management is to obtain the highest security within the expected range at the lowest cost [13-15]. The core and foundation of financial market risk management is the measurement of risk. With the growing complexity of financial markets, the expanding scale of financial asset transactions, and the development of financial theories and instruments, risk measurement techniques have become increasingly intricate [16-17]. In this setting, financial institutions and ordinary investors must weigh return against risk, and the correct choice of risk control indicators for measuring financial market risk has become a key research question.
The development of science and technology has long shaped the financial field, especially the financial market. Visible and available data are collected through big data, and artificial intelligence techniques are then applied to these data to assess and predict financial market risk and to search for ways to reduce it [18]. Literature [19] used artificial intelligence to assess the trust risk of loans in the financing market, and analysis of publicly available data found that this assessment can increase the probability of obtaining a loan for those without the privilege to do so; such assessment is closer to the retail financial market and benefits both banks and borrowers. Literature [20] shows that internal corporate risk management is significantly correlated with the value added by audits, internal auditors, and internal auditing, from which targeted regulations can help companies improve their risk management processes, also providing a reference for financial market risk management. In addition, literature [21] assessed the role of technologies such as big data and artificial intelligence in the financial sector in realizing fintech, which reduces the risk of commercial banks and improves profitability and financial innovation. Literature [22] analyzed the available literature using a hybrid analysis; the results showed that financial market participants need to learn applications of artificial intelligence, which can help regulators such as governments, as well as trading practitioners, understand financial market risks, fraud detection, and credit scoring, and trade more effectively. In addition, several studies have shown that gold is a hedge in times of stress [23-24]. Literature [25] therefore established an intelligent model based on artificial neural networks to predict gold price fluctuations, providing a reference for the decisions of financial market investors. Literature [26] describes the use of AI in cryptocurrency trading to address cybersecurity, price trend prediction, and volatility prediction. Literature [27] summarizes recent applications of AI to the stock market for price prediction, sentiment analysis, portfolio optimization, and so on, which indirectly reduces market trading risk. Literature [28] analyzed sentiment indicators related to financial market data using algorithms and found that these indicators may be effective for predicting financial risks and issuing warnings. Literature [29] constructed a model of volatility indicators that measure market risk under high-frequency data, and analyzed the predictions derivable from short-term volatility to assist participants and monitors in the financial market.
In this paper, financial market risk management is approached from two aspects: risk prediction and risk assessment. For risk prediction, three models are used, the GARCH model, the LSTM network, and the ARFIMA model, which are then linked together to form a hybrid ARFIMA-GARCH-LSTM model. For risk assessment, a VaR model is used to analyze the relative level of financial market risk by measuring stock returns over a given period. Relevant prediction indicators are introduced to empirically analyze the ARFIMA-GARCH-LSTM hybrid model, which is found to better predict the state of financial market risk. The evaluation effect of the VaR model is then verified using the returns of the SSE 50.
Generally speaking, market volatility means fluctuation within a certain reasonable range, and fluctuation beyond that range is something that needs to be taken seriously.
It has long been observed that negative news has a stronger impact on stock market volatility than equivalent positive news. In addition, return series share a common characteristic often referred to as "long memory", that is, correlation between earlier and later observations of the series.
Historical volatility: This method infers future volatility from historical volatility. It is easy to apply in daily use and is usually the first method one encounters. Generally speaking, historical volatility is sufficient for measuring volatility; the heteroskedasticity models in this paper, for example, are built on historical volatility. Its formula is as follows:
$$\sigma = \sqrt{\frac{1}{n-1}\sum_{t=1}^{n}\left(r_t - \bar{r}\right)^2}$$
In the above equation, $r_t = \ln(P_t / P_{t-1})$ is the log return on day $t$, $\bar{r}$ is the mean return, and $n$ is the sample length.
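As a minimal illustration, the calculation can be sketched in a few lines of Python; the price series is a hypothetical placeholder, and `trading_days=252` assumes daily data annualized over a typical trading year:

```python
import numpy as np

def historical_volatility(prices, trading_days=252):
    """Annualized historical volatility from a series of closing prices."""
    prices = np.asarray(prices, dtype=float)
    log_returns = np.diff(np.log(prices))        # r_t = ln(P_t / P_{t-1})
    sigma_daily = log_returns.std(ddof=1)        # sample standard deviation
    return sigma_daily * np.sqrt(trading_days)   # scale to annual terms

# Example with a short placeholder price series
print(historical_volatility([100, 101, 99.5, 102, 101.2]))
```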
Implied volatility: Next we introduce another measure. Volatility can also be derived in reverse from an option pricing model, and the resulting implied volatility is a method often used in the options trading market. Keeping the basic assumptions constant, the volatility can be backed out from the classical Black-Scholes option pricing model:
$$C = S_0 N(d_1) - K e^{-rT} N(d_2), \qquad d_1 = \frac{\ln(S_0/K) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}$$
The symbols denote: $C$ the call option price, $S_0$ the current price of the underlying asset, $K$ the strike price, $r$ the risk-free rate, $T$ the time to maturity, $N(\cdot)$ the standard normal cumulative distribution function, and $\sigma$ the implied volatility to be solved for.
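Because the Black-Scholes call price is monotonically increasing in $\sigma$, the implied volatility can be recovered numerically. The following sketch inverts the call-price formula by bisection; all inputs are hypothetical illustration values:

```python
import numpy as np
from scipy.stats import norm

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_volatility(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the pricing formula by bisection on sigma (price is increasing in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call_price(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(implied_volatility(price=10.45, S=100, K=100, T=1.0, r=0.05))
```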
The idea underlying the ARCH model is that, conditional on the currently available information, the noise at a given time follows a normal distribution with mean zero and a time-varying conditional variance.
Let $r_t$ denote the return series and $\varepsilon_t$ the innovation. The GARCH($p$,$q$) model is specified as follows.

Mean value equation:
$$r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad z_t \sim N(0,1)$$

Variance equation:
$$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2$$

where $\sigma_t^2$ is the conditional variance at time $t$, and $\omega > 0$, $\alpha_i \ge 0$, $\beta_j \ge 0$ guarantee that the conditional variance stays positive.
The first-order GARCH model, which is used most often in research practice, has the following form:
$$\sigma_t^2 = \omega + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2$$
Therefore, the GARCH model [30] requires far fewer parameters than the ARCH model and, even at a relatively low order, can adequately represent a high-order ARCH process. Even so, GARCH does not fully overcome the shortcomings of the original model.
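As a hedged sketch of how such a model might be estimated in practice, the following uses the third-party Python `arch` package on a simulated placeholder return series; the package and the data are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from arch import arch_model  # third-party "arch" package

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000) * 1.5   # placeholder daily returns (%)

am = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1, dist='normal')
res = am.fit(disp='off')
print(res.params)                                # mu, omega, alpha[1], beta[1]

# One-step-ahead conditional volatility forecast
sigma_forecast = res.forecast(horizon=1).variance.iloc[-1] ** 0.5
print(sigma_forecast)
```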
In order to describe the asymmetric effects in financial markets and compensate for the shortcomings of the GARCH model, another model, the TGARCH (threshold GARCH) model, was proposed; it correctly distinguishes the effects of positive and negative shocks on conditional volatility.
Mean value equation:
$$r_t = \mu + \varepsilon_t$$

Variance equation:
$$\sigma_t^2 = \omega + \sum_{i=1}^{q}\left(\alpha_i + \gamma_i d_{t-i}\right)\varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2$$

where $d_{t-i}$ is an indicator variable equal to 1 when $\varepsilon_{t-i} < 0$ (bad news) and 0 otherwise. The difference between this model and the GARCH model is the additional term $\gamma_i d_{t-i}\varepsilon_{t-i}^2$, called the asymmetric term. At $\gamma_i = 0$ the model reduces to the ordinary GARCH model, while $\gamma_i \neq 0$ indicates that positive and negative shocks affect conditional volatility asymmetrically.
The first-order TGARCH model is more commonly used in practice and takes the following form:
$$\sigma_t^2 = \omega + \left(\alpha + \gamma d_{t-1}\right)\varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2$$
When $\gamma > 0$, a negative shock raises conditional volatility more than a positive shock of equal size, which is the leverage effect commonly observed in stock markets.
Another asymmetric model is the EGARCH model [31]; its basic expression is shown below:
Mean value equation:
$$r_t = \mu + \varepsilon_t$$

Variance equation:
$$\ln \sigma_t^2 = \omega + \sum_{j=1}^{p} \beta_j \ln \sigma_{t-j}^2 + \sum_{i=1}^{q}\left[\alpha_i \left|\frac{\varepsilon_{t-i}}{\sigma_{t-i}}\right| + \gamma_i \frac{\varepsilon_{t-i}}{\sigma_{t-i}}\right]$$

There is another, more commonly used form of the above equation, on which the rest of this paper is based:
$$\ln \sigma_t^2 = \omega + \sum_{j=1}^{p} \beta_j \ln \sigma_{t-j}^2 + \sum_{i=1}^{q}\left[\alpha_i \left(\left|\frac{\varepsilon_{t-i}}{\sigma_{t-i}}\right| - E\left|\frac{\varepsilon_{t-i}}{\sigma_{t-i}}\right|\right) + \gamma_i \frac{\varepsilon_{t-i}}{\sigma_{t-i}}\right]$$

The sign of $\gamma_i$ captures the asymmetry: $\gamma_i < 0$ means that negative shocks increase conditional volatility more than positive shocks of the same magnitude.
In practice, the most commonly used is the first-order EGARCH model, whose variance equation takes the following form:
$$\ln \sigma_t^2 = \omega + \beta \ln \sigma_{t-1}^2 + \alpha \left|\frac{\varepsilon_{t-1}}{\sigma_{t-1}}\right| + \gamma \frac{\varepsilon_{t-1}}{\sigma_{t-1}}$$
Observing the above equation, the conditional variance takes an exponential form. In fact, because the model is specified for $\ln \sigma_t^2$, the conditional variance is guaranteed to be positive without imposing non-negativity constraints on the parameters.
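Both asymmetric specifications can be sketched with the same `arch` package, where the `o=1` term adds the asymmetric component (a GJR/TGARCH-style indicator term under `vol='GARCH'`, the signed-shock term under `vol='EGARCH'`); the data remain placeholders:

```python
import numpy as np
from arch import arch_model  # third-party "arch" package

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000) * 1.5   # placeholder daily returns (%)

# TGARCH-style asymmetry: o=1 adds the indicator-weighted shock term
tgarch = arch_model(returns, vol='GARCH', p=1, o=1, q=1).fit(disp='off')

# EGARCH(1,1): models ln(sigma_t^2), so no positivity constraints are needed
egarch = arch_model(returns, vol='EGARCH', p=1, o=1, q=1).fit(disp='off')

# gamma[1] summarizes how differently negative and positive shocks
# move conditional volatility in each specification
print(tgarch.params['gamma[1]'], egarch.params['gamma[1]'])
```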
Based on the distributional characteristics of the series discussed above, the normal distribution is not appropriate for the remainder of this paper. This paper therefore additionally introduces two widely used alternative distributions.
GED distribution: The GED (generalized error distribution) is widely used in applied research and can describe the fat-tailed characteristics of financial time series. Its probability density function is:
$$f(x) = \frac{v \exp\left(-\frac{1}{2}\left|\frac{x}{\lambda}\right|^{v}\right)}{\lambda \cdot 2^{1+1/v}\,\Gamma(1/v)}, \qquad \lambda = \left[\frac{2^{-2/v}\,\Gamma(1/v)}{\Gamma(3/v)}\right]^{1/2}$$
The random variable $x$ follows a GED with tail-thickness parameter $v$: when $v = 2$ the GED reduces to the standard normal distribution, and $v < 2$ produces tails fatter than the normal.

Student's $t$ distribution: The Student's $t$ distribution is another heavy-tailed alternative. Its probability density function is given below:
$$f(x) = \frac{\Gamma\left(\frac{v+1}{2}\right)}{\sqrt{v\pi}\,\Gamma\left(\frac{v}{2}\right)}\left(1 + \frac{x^2}{v}\right)^{-\frac{v+1}{2}}$$

Let $v$ denote the degrees of freedom; the smaller $v$ is, the fatter the tails, and as $v \to \infty$ the distribution converges to the normal distribution.
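To see why the choice of distribution matters for tail-sensitive quantities such as VaR, the sketch below compares 1% lower-tail quantiles under the three candidates using SciPy; note that SciPy's `gennorm` follows the GED shape up to a scale convention (it is not standardized to unit variance), so the values are illustrative only:

```python
from scipy.stats import norm, t, gennorm

alpha = 0.01                          # lower-tail probability
print(norm.ppf(alpha))                # standard normal quantile
print(t.ppf(alpha, df=5))             # Student's t with 5 degrees of freedom
print(gennorm.ppf(alpha, beta=1.2))   # GED-type shape v = 1.2 (fatter tails)
```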
When there are many dependencies between the layers of the model, a recurrent neural network suffers from vanishing gradients. Scholars have therefore proposed an improved model based on recurrent neural networks: the Long Short-Term Memory model (LSTM for short). Compared with the original model, it adds a memory storage component, which allows more accurate learning of long-term dependencies between inputs and outputs.
LSTM [32] replaces the activation function layer in the original neural network with a long short-term memory unit, which contains an input gate, a forget gate, and an output gate.
The input at time $t$ passes through the input gate, whose output scaling is:
$$i_t = \sigma\left(W_i x_t + U_i h_{t-1} + b_i\right)$$

Similarly, the output information of the output gate is proportional to:
$$o_t = \sigma\left(W_o x_t + U_o h_{t-1} + b_o\right)$$

The output ratio of the forget gate is:
$$f_t = \sigma\left(W_f x_t + U_f h_{t-1} + b_f\right)$$

According to the update formulas for each gate above, the candidate input information of the memory storage block is:
$$\tilde{c}_t = \tanh\left(W_c x_t + U_c h_{t-1} + b_c\right)$$

Then the memory unit filters the historical state information by virtue of the forget gate and retains part of it:
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$

After the above filtering, the output information of the memory unit is:
$$h_t = o_t \odot \tanh(c_t)$$

In the above equations, $x_t$ is the input at time $t$, $h_t$ the hidden output, $c_t$ the state of the memory cell, $\sigma(\cdot)$ the sigmoid function, $\odot$ the element-wise product, $W$ and $U$ weight matrices, and $b$ bias vectors.
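A minimal NumPy sketch of one forward step makes the gate interplay concrete; the dimensions and random parameters are arbitrary illustration choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b hold parameters for the i, f, o, c parts."""
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])        # input gate
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])        # forget gate
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])        # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate input
    c = f * c_prev + i * c_tilde   # memory cell: keep some past, add some new
    h = o * np.tanh(c)             # scaled output of the memory cell
    return h, c

# Tiny usage example with random parameters (hidden size 4, input size 3)
rng = np.random.default_rng(1)
W = {k: rng.standard_normal((4, 3)) * 0.1 for k in 'ifoc'}
U = {k: rng.standard_normal((4, 4)) * 0.1 for k in 'ifoc'}
b = {k: np.zeros(4) for k in 'ifoc'}
h, c = lstm_step(rng.standard_normal(3), np.zeros(4), np.zeros(4), W, U, b)
print(h)
```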
Since the direction in which the LSTM updates information is opposite to the direction of propagation, the LSTM network must propagate errors backward through time in order to update the corresponding weights. In this process, the error signal received by the hidden output satisfies the following iterative formula:
$$\delta h_t = \frac{\partial L}{\partial h_t} + U_i^{\top}\delta i_{t+1} + U_f^{\top}\delta f_{t+1} + U_o^{\top}\delta o_{t+1} + U_c^{\top}\delta \tilde{c}_{t+1}$$
In the above equation, $L$ denotes the loss function and $\delta(\cdot)$ the error signal of the corresponding gate at the indicated time step.
Finally we obtain the output errors of the output gate and the memory cell:
$$\delta o_t = \delta h_t \odot \tanh(c_t) \odot o_t \odot (1 - o_t)$$
$$\delta c_t = \delta h_t \odot o_t \odot \left(1 - \tanh^2(c_t)\right) + \delta c_{t+1} \odot f_{t+1}$$

From the above formulas, we further derive the error signal of the candidate memory input:
$$\delta \tilde{c}_t = \delta c_t \odot i_t \odot \left(1 - \tilde{c}_t^{2}\right)$$

Based on the above iterations, the error signals of the forget gate and the input gate are obtained in the same way:
$$\delta f_t = \delta c_t \odot c_{t-1} \odot f_t \odot (1 - f_t)$$
$$\delta i_t = \delta c_t \odot \tilde{c}_t \odot i_t \odot (1 - i_t)$$
Finally, the gradients are computed from the error signals obtained above and the weights are updated, e.g. for the input-side weights:
$$W \leftarrow W - \eta \sum_{t} \delta_t\, x_t^{\top}$$
Among them, $\eta$ denotes the learning rate and $\delta_t$ the error signal of the corresponding gate.
What is obtained in summary is the detailed process of training, propagation, and derivation of the LSTM.
The ARMA model captures the short-range correlation in a series through its order-$p$ autoregressive and order-$q$ moving-average terms. From the definition and derivation of the ARFIMA model, it can consider both the short and long memory of a time series, making it better suited to fitting series with long memory.
Long memory concerns the persistent dependence of time series fluctuations across long time spans, analyzed through sub-series over long-distance periods, and has become an indispensable factor in financial time series forecasting and modeling.
Definition 1: If there is a time series $\{x_t\}$ whose autocorrelation function $\rho(k)$ satisfies
$$\lim_{n \to \infty} \sum_{k=-n}^{n} |\rho(k)| = \infty$$
then $\{x_t\}$ is said to have long memory.
Definition 2: If the time series $\{x_t\}$ has an autocorrelation function that decays hyperbolically,
$$\rho(k) \sim c\,k^{2d-1} \quad (k \to \infty)$$
for some constant $c > 0$ and $0 < d < 0.5$, then $\{x_t\}$ is a long-memory series, where $d$ is the memory (fractional differencing) parameter.
Hurst exponent: The most common test for long memory is rescaled range (R/S) analysis.

Let the time series $\{x_t\},\ t = 1, 2, \ldots, N$ be divided into $A$ adjacent subintervals of length $n$, with $A \times n = N$. For subinterval $a$ with elements $x_{i,a}$, define the cumulative deviations
$$X_{k,a} = \sum_{i=1}^{k}\left(x_{i,a} - \bar{x}_a\right), \qquad k = 1, \ldots, n$$
where $\bar{x}_a$ is the sample mean of subinterval $a$.

At this point the range and standard deviation of subinterval $a$ are
$$R_a = \max_{1 \le k \le n} X_{k,a} - \min_{1 \le k \le n} X_{k,a}, \qquad S_a = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_{i,a} - \bar{x}_a\right)^2}$$
and the rescaled range statistic is the average $(R/S)_n = \frac{1}{A}\sum_{a=1}^{A} R_a / S_a$.

In the above process, subintervals of different lengths $n$ are taken, and the statistic obeys the scaling law
$$(R/S)_n = c\,n^{H}$$
where $c$ is a constant and $H$ is the Hurst exponent.

Taking logarithms, $H$ is estimated as the slope of the linear relationship between $\ln(R/S)_n$ and $\ln n$. When $0.5 < H < 1$, the series is persistent and has long memory. When $H = 0.5$, the series is a random walk with no memory. When $0 < H < 0.5$, the series is anti-persistent (mean-reverting).
After judging that the series has the characteristic of long memory, the fractional differencing order can be obtained from the relationship between the Hurst exponent and the memory parameter, $d = H - 0.5$.
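A compact sketch of this R/S estimation of $H$ follows; the window sizes and the white-noise placeholder series are illustrative, and white noise should yield an estimate near 0.5 (hence $d$ near 0):

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # adjacent subintervals
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())        # cumulative deviations
            r = dev.max() - dev.min()                # range R_a
            s = seg.std(ddof=1)                      # standard deviation S_a
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    h, _ = np.polyfit(log_n, log_rs, 1)              # slope = Hurst exponent
    return h

rng = np.random.default_rng(2)
print(hurst_rs(rng.standard_normal(1024)))           # ~0.5 for white noise
```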
Although the ARMA model is the preferred model for time series modeling, integer-order differencing may over-difference the series, losing the information in its low-frequency components and lowering prediction accuracy. If $\{x_t\}$ has memory parameter $d$, the ARFIMA($p$,$d$,$q$) model applies a fractional difference operator instead:
$$\Phi(B)(1-B)^{d} x_t = \Theta(B)\varepsilon_t, \qquad (1-B)^{d} = \sum_{k=0}^{\infty} \binom{d}{k}(-B)^{k}$$
where $B$ is the lag operator and $\Phi(B)$, $\Theta(B)$ are the autoregressive and moving-average polynomials.
Examine whether first-order differencing causes over-differencing: according to Eq. (41), the spectral density of the first-differenced series is the product of the original spectral density $f(\omega)$ and the filter factor $|1 - e^{-i\omega}|^2 = 4\sin^2(\omega/2)$, which vanishes at frequency $\omega = 0$; when the true differencing order is fractional ($d < 1$), first-order differencing therefore destroys the low-frequency, long-memory information of the series.
If there is a fractional order $d$ with $0 < d < 0.5$ such that the filtered series $(1-B)^{d}x_t$ is stationary, then fractional differencing removes the long memory while preserving the low-frequency information that integer differencing would discard.
The sample autocorrelations of the resulting stationary sequence can be regarded as normally distributed with zero mean and standard deviation $1/\sqrt{N}$, where $N$ is the sample length; this provides the bounds for testing whether the filtered series is white noise.
In modeling, the order can be identified from the autocorrelation and partial autocorrelation plots: by roughly determining after which order the function values begin to converge gradually to zero, the model orders, i.e., the values of $p$ and $q$, can be chosen initially. Because determining the order by inspecting plots is subjective, information criteria can be used to find the relatively optimal combination of orders within a limited range. Commonly used information criteria are the AIC and BIC:
$$\mathrm{AIC} = -2\ln L + 2k, \qquad \mathrm{BIC} = -2\ln L + k\ln N$$
where $L$ is the maximized likelihood of the model, $k$ the number of estimated parameters, and $N$ the sample size; the order combination minimizing the criterion is selected.

Figure 1: Modeling process of the ARFIMA model
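The two core steps of this process, fractional filtering and information-criterion order selection, can be sketched as follows with statsmodels; the truncation length of the binomial expansion and the placeholder series are assumptions of the illustration, and $d$ would in practice come from $d = H - 0.5$:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def frac_diff(x, d, n_weights=100):
    """Apply the fractional difference filter (1 - B)^d via a truncated
    binomial expansion: w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    w = np.asarray(w)
    out = []
    for t in range(len(x)):
        rev = x[:t + 1][::-1]          # x_t, x_{t-1}, ..., x_0
        m = min(t + 1, len(w))
        out.append(w[:m] @ rev[:m])
    return np.asarray(out)

rng = np.random.default_rng(3)
y = frac_diff(rng.standard_normal(400), d=0.2)   # placeholder filtered series

# Grid-search small (p, q) orders on the filtered series and pick by AIC
best = min(
    ((p, q, ARIMA(y, order=(p, 0, q)).fit().aic)
     for p in range(3) for q in range(3)),
    key=lambda t: t[2],
)
print("best (p, q) by AIC:", best[:2])
```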
The series hybrid approach, i.e., the linear combination approach, first decomposes the data into linear and nonlinear parts, fits the linear component with the ARFIMA model, processes the residual, i.e., the nonlinear component, with the LSTM model, and takes the combination of the two prediction results as the final predicted value.
The original data $\{y_t\}$ are decomposed as $y_t = L_t + N_t$, where $L_t$ denotes the linear component and $N_t$ the nonlinear component.

The linear component of the sequence $\{y_t\}$ is fitted and predicted with the ARFIMA model, yielding $\hat{L}_t$; the residual $e_t = y_t - \hat{L}_t$ then constitutes the nonlinear component.

Finally, the LSTM model is used to fit and predict the nonlinear component, yielding $\hat{N}_t$, and the final prediction is $\hat{y}_t = \hat{L}_t + \hat{N}_t$.

Figure 2: Series model prediction flow chart
According to the different rules for assigning weights, combination methods come in many varieties; in this paper, the combination is carried out by the inverse error method.
The data are modeled and predicted with the LSTM and ARFIMA-GARCH models respectively, and the inverse error method then assigns the weights (i.e., the model with the larger error receives the smaller weight and the model with the smaller error receives the larger weight):
$$w_i = \frac{1/e_i}{\sum_{j} 1/e_j}$$
where $e_i$ is the prediction error of model $i$.
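A minimal sketch of the inverse error weighting; the two error values are hypothetical RMSEs:

```python
import numpy as np

def inverse_error_weights(errors):
    """Combination weights proportional to the inverse of each model's error."""
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

# e.g. RMSEs of the two component models (hypothetical values)
w_lstm, w_arfima_garch = inverse_error_weights([0.025, 0.019])
print(w_lstm, w_arfima_garch)   # the smaller-error model gets the larger weight
```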
VaR is short for Value at Risk. It is a statistical estimate, within a certain risk range, of the loss that an asset may incur over a holding period due to normal market fluctuations. From the standpoint of a financial institution, VaR can be defined as the maximum loss that a financial asset can incur with a given probability over a defined period of time. From a statistical point of view, VaR is the maximum expected loss in the value of a financial asset or portfolio of securities over a given future period under normal market volatility at a given probability (confidence) level.
We know that future returns can be either positive or negative; a negative return is what is known as a loss or risk, while a positive return is a profit. If the stochastic change in the value of a financial asset over the future holding period $\Delta t$ is denoted $\Delta P$, then at confidence level $c$ the VaR is defined by
$$P\left(\Delta P \le -\mathrm{VaR}\right) = 1 - c$$

From the above, the VaR computed directly from the definition is a negative quantile of the value change, but in practice VaR is usually quoted as a positive value, so the mathematical expression for calculating VaR can be written as:
$$\mathrm{VaR} = -Q_{1-c}(\Delta P)$$
where $Q_{1-c}(\cdot)$ denotes the $(1-c)$ quantile of the distribution and $c$ is the confidence level.
If returns are converted into losses, i.e., by setting $X = -\Delta P$ and ordering the losses, then VaR is simply the $c$ quantile of the loss distribution.

From the expression of VaR, it can be seen that the essence of VaR is the $1 - c$ lower quantile of the return distribution.

The value of the confidence level $c$ is usually taken as 95% or 99%; the higher the confidence level, the more conservative the risk estimate.
According to the expression of VaR, the VaR at time $t$ under confidence level $c$ can be written as
$$\mathrm{VaR}_t = -\left(\mu_t + z_{1-c}\,\sigma_t\right)$$
where $\mu_t$ and $\sigma_t$ are the conditional mean and conditional standard deviation of the return at time $t$, and $z_{1-c}$ is the $(1-c)$ quantile of the standardized residual distribution.

To estimate the VaR value at moment $t$, therefore, the conditional mean and the conditional volatility must be forecast; in this paper they are provided by the models introduced above.
The VaR calculation method is the core of the whole estimation of the VaR value, and different estimation methods have been produced depending on the starting point.
In order to portray the mean $\mu_t$ and the volatility $\sigma_t$ more accurately, the parametric method models them explicitly with time-series models, for example fitting the mean with an ARMA/ARFIMA specification and the conditional variance with a GARCH-family model, and then substitutes the fitted values into the VaR expression above.
In order to avoid making any assumption about the distribution that returns follow, the historical simulation method estimates VaR directly from the empirical distribution of historical returns.
The main steps of the historical simulation method are:
First, a time series of returns over an appropriate time period is selected as historical data.
Then, the selected historical return series are sorted from largest to smallest.
Finally, for a given confidence level, say 95%, find the 95th percentile sample point of the total number of samples, then the value under that sample point is the future VaR value of the asset at the 95% confidence level.
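The three steps above amount to taking an empirical quantile; a minimal sketch with placeholder returns:

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """Historical-simulation VaR: the (1 - c) empirical quantile of returns,
    reported as a positive loss figure."""
    q = np.quantile(np.asarray(returns, dtype=float), 1.0 - confidence)
    return -q

rng = np.random.default_rng(4)
daily_returns = rng.normal(0.0003, 0.015, size=1500)   # placeholder history
print(historical_var(daily_returns, confidence=0.95))
```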
The extreme value theory method was proposed as scholars continually improved the calculation of VaR. Extreme value theory mainly studies the tail characteristics of the distribution of independent and identically distributed sequences, and from the definition of VaR it can be seen that VaR concerns exactly the tail of the data distribution. In early studies, scholars therefore directly assumed that the time series was independently and identically distributed and applied extreme value theory to the raw series to calculate VaR. In reality, however, a return sequence is not independently and identically distributed, whereas the residual sequence filtered by a GARCH-family model is approximately so; later work therefore applies extreme value theory to the filtered residuals.
If we denote by $\{z_t\}$ the standardized residual sequence obtained after fitting a GARCH-family model, then $r_t = \mu_t + \sigma_t z_t$ with $\{z_t\}$ approximately independent and identically distributed.

Because:
$$P\left(r_t < -\mathrm{VaR}_t \mid \mathcal{F}_{t-1}\right) = P\left(z_t < \frac{-\mathrm{VaR}_t - \mu_t}{\sigma_t}\right) = 1 - c$$

To wit:
$$\frac{-\mathrm{VaR}_t - \mu_t}{\sigma_t} = z_{1-c}$$

So the value at risk VaR of the sequence $\{r_t\}$ is:
$$\mathrm{VaR}_t = -\left(\mu_t + \sigma_t z_{1-c}\right)$$
where $z_{1-c}$ is the $(1-c)$ quantile of the standardized residuals, estimated from the extreme value (tail) distribution fitted to $\{z_t\}$.
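A hedged sketch of the peaks-over-threshold calculation using SciPy's generalized Pareto fit; the threshold choice (90th percentile of losses), the simulated residuals, and the forecast values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import genpareto

def evt_quantile(z, confidence=0.99, threshold_q=0.90):
    """Tail quantile of standardized residuals via peaks-over-threshold."""
    losses = -np.asarray(z, dtype=float)           # work on the loss tail
    u = np.quantile(losses, threshold_q)           # threshold u
    exceed = losses[losses > u] - u                # exceedances over u
    xi, _, beta = genpareto.fit(exceed, floc=0)    # GPD shape and scale
    n, n_u = len(losses), len(exceed)
    p = 1.0 - confidence                           # tail probability
    loss_q = u + (beta / xi) * ((n / n_u * p) ** (-xi) - 1.0)
    return -loss_q                                 # back to the return scale

rng = np.random.default_rng(5)
z = rng.standard_t(df=4, size=5000)                # placeholder residuals
z_q = evt_quantile(z, confidence=0.99)
mu_t, sigma_t = 0.0002, 0.016                      # assumed model forecasts
print(-(mu_t + sigma_t * z_q))                     # VaR_t = -(mu_t + sigma_t * z_q)
```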
The empirical object of this paper is the SSE 50 Index and its volatility, over the period from January 5, 2016 to September 30, 2021. The model features are constructed in four parts: volume and price data, parameter data of the GARCH-family models, attention index data, and realized volatility data. The volatility of the SSE 50 Index, the dependent variable of the model, is represented by the realized volatility calculated from intraday high-frequency data.
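For a sampling grid of, say, 5-minute bars, the realized volatility of one trading day can be sketched as follows; the simulated price path is a placeholder for the actual intraday data:

```python
import numpy as np

def realized_volatility(intraday_prices):
    """Daily realized volatility: square root of the sum of squared
    intraday log returns (e.g. 5-minute sampling)."""
    p = np.asarray(intraday_prices, dtype=float)
    r = np.diff(np.log(p))
    return np.sqrt(np.sum(r ** 2))

rng = np.random.default_rng(6)
prices = 2600 * np.exp(np.cumsum(rng.normal(0, 0.0008, size=48)))  # one day of 5-min bars
print(realized_volatility(prices))
```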
The first part of the features is the volume and price data of the SSE 50 Index, comprising six series: the opening price, highest price, lowest price, closing price, volume, and turnover of the SSE 50 Index. The data come from the financial data of the MiBasket quantitative platform. The trend of the daily closing price of the SSE 50 Index from January 5, 2016 to September 30, 2021 is shown in Figure 3; the graph shows that the SSE 50 Index was highly volatile during 2016-2017. Table 1 displays the descriptive statistics of the six volume and price series.

Figure 3: Trend of the daily closing price of the SSE 50 Index
Table 1: Descriptive statistics of the volume and price data
Variable | Mean | Median | Maximum value | Minimum value
---|---|---|---|---
Opening price | 2619.061 | 2642.19 | 2597.586 | 2621.28
Highest price | 2614.54 | 2640.291 | 2591.675 | 2623.009
Lowest price | 1915.107 | 1953.378 | 1874.166 | 1912.711
Closing price | 3494.843 | 3494.713 | 3403.695 | 3458.697
Volume | 339.332 | 341.808 | 334.741 | 338.293
Turnover amount | 2619.051 | 2642.2 | 2597.486 | 2621.25
The second part of the features comprises the parameters of the GARCH-family models. The GARCH family in this paper includes three model types, the GARCH, eGARCH, and tGARCH models, with 11 parameters in total: three from the GARCH model, four from the eGARCH model, and four from the tGARCH model. First, the daily return series is calculated from the closing prices of the SSE 50 Index from January 5, 2016 to September 30, 2021; the results are shown in Figure 4. It can be seen intuitively that the daily return series of the SSE 50 Index is essentially stationary and roughly symmetrically distributed around the zero axis, but a stationarity test is still needed before modeling the series.

Figure 4: Daily return series of the SSE 50 Index
The classical test is the unit root test, i.e., the ADF test. The null hypothesis (H0) of the ADF test is that the time series under study contains a unit root; if the ADF test statistic is smaller than the critical value at a given significance level, the null hypothesis can be rejected at the corresponding confidence level, indicating that the series contains no unit root and is stationary. Table 2 shows the descriptive statistics of the daily return series of the SSE 50 Index together with the ADF statistic and p-value. The ADF statistic of the daily return series is -7.625, smaller than the critical value at the 1% significance level, and the p-value is very close to 0, so the null hypothesis can be rejected: the return series of the SSE 50 Index is stationary.
Table 2: Descriptive statistics of the daily return series

Mean | Median | Maximum value | Minimum value | Standard deviation | Kurtosis | Skewness | ADF test
---|---|---|---|---|---|---|---
2.78e-4 | 4.62e-4 | 7.82e-2 | -9.35e-2 | 1.52e-2 | 6.475 | -0.588 | -7.625 (p = 0.000)
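A sketch of the test with statsmodels' `adfuller` on a placeholder return series (the reported numbers will of course differ from the paper's):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
returns = rng.normal(0.0003, 0.015, size=1400)   # placeholder return series

stat, pvalue, _, _, crit, _ = adfuller(returns)
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.4f}")
print("1% critical value:", crit['1%'])          # reject H0 if stat < this value
```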
Before using a GARCH-family model, the series must be tested for the ARCH effect, i.e., for serial correlation of the conditional heteroskedasticity. The ARCH effect is generally examined with the Ljung-Box test, whose null hypothesis is that the time series is white noise. The squared daily return series of the SSE 50 Index is shown in Figure 5, which clearly displays pronounced volatility clustering. The Ljung-Box test is applied below to examine whether the squared daily return series of the SSE 50 Index is autocorrelated.

Figure 5: Squared daily returns of the SSE 50 Index
Setting the lag order from 1 to 10, the p-values of the Ljung-Box LB statistic change as shown in Fig. 6. The p-values are all far below 0.05, so the null hypothesis that the squared daily return series of the SSE 50 Index is white noise can be rejected, indicating an ARCH effect in the series. Having passed the ARCH effect test, the daily return series of the SSE 50 Index can be modeled with GARCH-family models.

Figure 6: p-values of the Ljung-Box test
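The corresponding check can be sketched with statsmodels' `acorr_ljungbox` applied to squared returns, again with placeholder data:

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(7)
returns = rng.normal(0.0003, 0.015, size=1400)   # placeholder return series

# Test squared returns for autocorrelation at lags 1..10; small p-values
# reject the white-noise null and indicate an ARCH effect.
lb = acorr_ljungbox(returns ** 2, lags=list(range(1, 11)), return_df=True)
print(lb[['lb_stat', 'lb_pvalue']])
```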
The effect of investor attention on stock market volatility has been studied in related fields: the volatility of the S&P 500 Index has been predicted with deep learning using Google Trends as features, and the volatility of the CSI 300 Index using the Baidu index as model input. Following these selection methods, this paper chooses 20 economy- and finance-related keywords as indicators of macroeconomic factors and collects daily Baidu index data for them; these data offer a fresh perspective on studying volatility. Selecting one of the 20 keywords, such as "financial crisis", and comparing its trend with the realized volatility of the SSE 50 Index, as shown in Figure 7, it can be seen intuitively that the two move roughly together: when the search volume of "financial crisis" is large, the volatility of the SSE 50 Index is also large, and when the search volume is small, the volatility is small as well.

Figure 7: Comparison of realized volatility and the "financial crisis" search index
First, the model is used to predict the linear part of the test set data, giving its specific expression; second, the nonlinear residual sequence $e_t$ is calculated from the linear prediction results and normalized. The samples are again divided into training and test sets at a ratio of 3:1, and the optimal parameters of the model are again timestep = 20 and batchsize = 16. The normalized residual sequence is input into the model for training, and the autocorrelation function of the error of the nonlinear component is shown in Fig. 8. Apart from being significantly non-zero at order 0, the autocorrelations at almost all other orders fall within the confidence interval, so the network error can be considered within the required range. The Loss plot is shown in Fig. 9; the error is concentrated in the range 0.002-0.005, indicating that the network trains well.

Figure 8: Autocorrelation function of the nonlinear component error

Figure 9: Training results (Loss) of the nonlinear component network
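A hedged Keras sketch of the residual-fitting step just described, using the reported timestep = 20 and batchsize = 16; the window construction, the 50-unit layer width, and the simulated residual series are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np
from tensorflow import keras

TIMESTEP, BATCH = 20, 16   # optimal parameters reported in the text

def make_windows(series, timestep):
    """Turn a 1-D residual series into (samples, timestep, 1) windows."""
    X = np.stack([series[i:i + timestep] for i in range(len(series) - timestep)])
    y = series[timestep:]
    return X[..., None], y

rng = np.random.default_rng(8)
resid = rng.normal(0.0, 1.0, size=800)   # placeholder normalized residuals
split = int(len(resid) * 0.75)           # 3:1 train/test split
X_train, y_train = make_windows(resid[:split], TIMESTEP)
X_test, y_test = make_windows(resid[split:], TIMESTEP)

model = keras.Sequential([
    keras.Input(shape=(TIMESTEP, 1)),
    keras.layers.LSTM(50),               # assumed width; not from the paper
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=10, batch_size=BATCH, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))   # test-set MSE
```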
The linear and nonlinear predictions are then superimposed to obtain the prediction results of the hybrid model, shown in Fig. 10, with the prediction performance indexes in Table 3. The series hybrid model predicts well: the predicted and actual curves largely overlap, and the MSE, RMSE, and MAE are all small, at 0.000386, 0.019559, and 0.014479, respectively.

Figure 10: Prediction of the series hybrid model
Table 3: Prediction performance indexes of the series hybrid model
Model | MSE | RMSE | MAE
---|---|---|---
ARFIMA-GARCH-LSTM | 0.000386 | 0.019559 | 0.014479
Stock return prediction can be achieved through a variety of methods. The model in this paper gives the best prediction, and optimizing the hyperparameters improves it slightly further. The residual sequence of the model is plotted as a histogram, shown in Figure 11; from the figure, the residuals appear to follow a normal distribution.

Figure 11: Histogram of the residual series
The descriptive statistics of the residual series are shown in Table 4. The mean is -0.0011645, approximately 0; the standard deviation is 0.0245188; the skewness S = -0.025 is approximately 0; and the kurtosis K = 3.312 is close to the value of about 3 for the standard normal distribution, so the residuals obey a normal distribution.
Table 4: Descriptive statistics of the residual series

Variable | Numerical value
---|---
Mean value | -0.0011645
Standard deviation | 0.0245188
Maximum value | 0.1445756
Minimum value | -0.1678244
Skewness | -0.025
Kurtosis | 3.312
Next, rolling forecasts of the standard deviation of returns are made with the model; the optimization process is the same as for stock return prediction. The optimized parameters are epochs = 10, batch size = 8, N = 9, lstm units = 100, optimizer = "adam", dropout = 0.1, and activation = "tanh". The final loss function value of the model is 0.00601476, and the fitted line plot of the predicted standard deviation is shown in Figure 12. The prediction performs well: the predicted and actual values largely overlap, fitting the trend and direction of the standard deviation.

Figure 12: Fitted and predicted standard deviation
Regarding the measurement of VaR: in the historical simulation method, VaR is calculated from the empirical distribution of stock returns constructed from historical data. In this paper, the distribution of stock return forecasts is obtained with the deep learning method, from which the risk measure VaR is computed, yielding a value-at-risk (VaR) sequence. The part measured on the test set is shown in Fig. 13, where the VaR series can be seen to fluctuate sharply.

Figure 13: Trend of the value at risk (VaR)
The 1% and 5% sample quantiles of stock returns are used as stock market risk warning lines, and the warning chart is shown in Figure 14. An early warning signal is issued when the value at risk falls below the warning line. From the chart, the value at risk VaR is in a trough near July 2020; zooming in on this interval, as shown in Figure 15, the value at risk is below the 1% sample quantile warning line between June 25, 2020 and around July 10, 2020.

Figure 14: Risk warning chart
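A minimal sketch of how such warning lines and breach days might be computed; all series here are simulated placeholders:

```python
import numpy as np

rng = np.random.default_rng(9)
var_series = rng.normal(-0.02, 0.01, size=250)        # placeholder VaR-style series
returns_hist = rng.normal(0.0003, 0.015, size=1400)   # placeholder return history

line_1pct = np.quantile(returns_hist, 0.01)           # 1% sample quantile
line_5pct = np.quantile(returns_hist, 0.05)           # 5% sample quantile

# Flag the days on which the risk series falls below each warning line
alerts_1pct = np.flatnonzero(var_series < line_1pct)
alerts_5pct = np.flatnonzero(var_series < line_5pct)
print(len(alerts_5pct), "days below the 5% line;",
      len(alerts_1pct), "below the 1% line")
```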
Intercepting the stock returns in this interval to observe their true state, a bar chart is plotted as shown in Figure 15. Most of the returns fluctuate at low levels, except on July 7, 8, and 10, 2020, when returns are slightly above zero. This verifies that the period lies in the risky zone.

Figure 15: Bar chart of stock returns in the warning interval
In the financial market, accurate risk identification is a prerequisite for risk management. To achieve this goal, it is necessary to establish a comprehensive risk identification framework and systematize the identification of various risks by combining advanced data analysis techniques.
First, financial institutions need to establish a sound risk identification mechanism, which can adopt a risk identification matrix to categorize and grade risks and clarify the characteristics, sources, and impacts of various types of risks.
Second, financial institutions should ensure the comprehensiveness, accuracy, and timeliness of data by building a perfect data collection system.
Third, financial institutions should focus on the dynamic and forward-looking nature of risk identification. To this end, financial institutions should establish an early warning system for risk identification that tracks market changes in a timely manner to dynamically adjust risk identification strategies.
Fourth, risk identification is not only the responsibility of the risk management department, but also requires the cooperation of other departments, such as business, compliance, and audit. Financial institutions need to set up regular cross-departmental risk assessment meetings to exchange risk experience and countermeasures from each department to form a unified risk identification and management strategy.
Fifth, financial institutions need to strengthen external cooperation and information exchange to enhance their risk identification capabilities through industry cooperation and guidance from regulators.
In risk management in financial markets, optimizing risk assessment is a key step in enhancing the efficiency and effectiveness of risk management.
First, financial institutions must ensure that risk assessment models are highly accurate, which requires constant updating and maintenance of the assessment models to ensure that they can truly reflect the latest risk dynamics.
Second, financial institutions need to establish a multi-dimensional risk assessment system, which should assess risks from multiple perspectives.
Third, with the development of information technology, real-time data analysis has become possible, and financial institutions need to utilize real-time data streams to continuously monitor risk indicators in order to quickly identify and respond to risks brought about by market movements.
Fourth, in order to optimize risk assessment in a comprehensive manner, financial institutions need to establish a cross-departmental risk management team, which should have expertise and experience in multiple fields, so as to analyze and assess risks from different perspectives.
Financial institutions enhance the adaptability of risk control by establishing a multi-level risk control system, introducing advanced risk control technology, strengthening risk control coordination, and enhancing risk control capabilities.
First, financial institutions should establish a multi-level risk control system to ensure the comprehensiveness and effectiveness of risk control by setting risk limits, establishing internal control mechanisms, and formulating contingency plans.
Second, the implementation of a risk limit system by financial institutions is the basis for strengthening risk control, a strategy that requires institutions to set clear risk-tolerance thresholds and ensure that all operations are carried out within these limits.
Third, refined capital management is also a key component of enhanced risk control. Financial institutions are required to enhance their risk resilience by maintaining adequate capital buffers.
Fourth, financial institutions need to identify and correct deficiencies and loopholes in the risk management process in a timely manner through the establishment of a sound internal control system and frequent audits and inspections.
Fifth, the use of insurance and derivatives by financial institutions for risk transfer is a common means of controlling risks. The proper use of derivatives can help institutions lock in costs, reduce fluctuations in returns, and protect them from extreme market volatility. Enhanced use of derivatives will lead to better risk control.
Risk management is a central focus of financial market research. This paper combines traditional econometric models with a deep learning model to construct a more accurate model for predicting and measuring stock market risk, with the following conclusions:
The error autocorrelation function of the model in this paper is non-zero at order 0, but at other orders it is controlled within the confidence interval overall, and the prediction error is concentrated between 0.002 and 0.005, which is very small. The models are combined in series; the predicted curves obtained in this way overlap closely with the actual ones, and the values of the relevant prediction indexes are all below 0.02. The training of the model network is therefore considered good, the prediction accuracy meets the requirements, and the series connection of the hybrid model has a positive effect on its prediction accuracy.
In terms of risk measurement, a VaR assessment model is constructed, with the 1% and 5% sample quantiles used as warning lines; when the value at risk falls below the warning line, the risk is high and preventive measures should be taken. On this basis, this paper discusses specific risk management strategies in depth from three aspects: improving the risk identification mechanism, optimizing the risk assessment method, and strengthening risk control measures.