Open Access

Research on Data Asset Management Strategy for Cloud Computing Environment

  
Mar 19, 2025


Introduction

Data assets are data owned and controlled by an enterprise, including but not limited to electronic documents, databases, data warehouses, datasets, and data tables, and are categorized into internal data assets and external data assets. Data asset management refers to the methods and strategies for comprehensively managing and utilizing both the internal and external data assets of an enterprise [1-4]. By collecting, storing, analyzing, and protecting data, data asset management helps enterprises better understand the value of their data, which plays a key role in business decision-making and innovation. However, effective data asset management is not easy to realize, and enterprises face many strategic choices and challenges [5-8].

Enterprises first need to develop a clear data strategy that defines the position and role of data in business development. This includes identifying which data are core assets, how they are collected, stored, and used, and how data can drive business growth. The data strategy should be aligned with the overall strategy of the organization to support the long-term development of the enterprise [9-12]. Data governance is the key to ensuring data quality, security, and compliance. Enterprises need to establish a comprehensive data governance framework, including formulating data policies, processes, and standards, clarifying the responsibilities of data owners, managers, and users, and establishing data quality assessment and monitoring mechanisms [13-16]. Effective data governance improves the credibility and usability of data and reduces data risks. Beyond these, data security and privacy protection, data quality management, and data integration are all important strategies for enterprise digital management [17-20].

In this paper, drawing on the idea of the DPoS consensus algorithm, a small number of "high-quality nodes" are screened out to reach consensus, reducing the use of cloud computing resources and overhead. On the basis of the Byzantine fault-tolerant algorithm, a reputation-value influence factor and a mechanism for monitoring and replacing abnormal nodes are introduced, and the current mainstream consensus algorithm is embedded to form a safe and fast consensus algorithm suitable for the cloud computing environment; on this basis, a blockchain-based data asset management platform for cloud computing is constructed. In simulation experiments, the number of blocks generated per second by the proposed algorithm, the time delay of each transaction, and the memory and CPU consumption are examined separately, and an evaluation system is designed to assess the maturity of data asset management on the blockchain platform.

Method
Blockchain-based Data Asset Management Platform

The public chain suffers from a large number of nodes, huge data volumes, low efficiency in reaching consensus, serious resource consumption, and regulatory difficulties. To guarantee the equal participation of all nodes in data asset circulation, transaction, and sharing while avoiding these shortcomings, it is more appropriate for the blockchain system serving the data asset management platform to adopt the access audit of a consortium chain. In addition, to ensure the smooth operation of business such as data asset authentication and transactions, the data asset management platform needs to exercise a regulatory function and provide public-welfare services in the blockchain system [21].

Architecture system

The architecture of the blockchain system that serves data asset management consists, from the bottom up, of five layers:

The first layer is the data layer, which includes all relevant information on digital assets, such as data categories and system node account information, stored in chained blocks.

The second layer is the network layer, which includes the data dissemination and data verification mechanisms built on a peer-to-peer network, aiming to maintain the synchronization and verification of block data between different nodes.

The third layer is the consensus layer. The blockchain system is essentially a decentralized application run simultaneously and jointly maintained by multiple nodes, and the results generated by a single node must be confirmed by consensus across the whole network before they can be packaged onto the chain. The commonly used PoW (Proof of Work) mechanism requires a great deal of time and computing power to compete for the bookkeeping right, while the DPoS (Delegated Proof of Stake) mechanism has all nodes vote for super nodes that directly obtain the bookkeeping right, requiring only a very small amount of computation time and overhead to keep the blockchain system running normally. Considering that the data asset management system is a consortium chain with high node trustworthiness, the DPoS consensus algorithm is more concise and efficient, and is therefore more suitable as the consensus algorithm of the system [22].

The fourth layer is the contract layer, which mainly uses smart contracts composed of automated script code to realize functions such as matching the two sides of a transaction under the constraints of the management system. The fifth layer is the application layer, which externally provides a variety of applications for the blockchain-based system, such as node registration, account management, digital asset rights confirmation, and circulation transactions.

Business process design

In the blockchain-based data asset management platform, the process of confirming rights and transaction flow of data assets mainly involves the following stages:

The first stage is account registration. Both data asset providers and data demanders need to apply for registration of system accounts using a unique identification number; after being audited and approved by the platform, they obtain proof of identity and the corresponding public and private keys.

The second stage is rights confirmation of data assets. After the data provider turns its data into data assets in a standard form and applies for rights confirmation, the platform audits their originality and completeness, then broadcasts the summary information of the qualified data assets, reaches a consensus, records it, and completes the upload to the chain, confirming the ownership of the data assets.

The third stage is the transaction application stage. A data provider whose ownership has been confirmed can apply to the management platform for the transfer of the data asset, write a smart contract in script code, and reach a consensus to record it in the blockchain system.

The fourth stage is the transaction occurrence stage. The management platform continuously interacts with the blockchain network, obtains the information set of the data assets applied for transfer, searches for suitable demanders, and provides transaction matching services for both parties. After matching is completed, the data asset provider encrypts the data asset with the demander's public key to obtain the ciphertext, computes a digest of the data with a hash function, and digitally signs the digest with its private key. The provider sends the ciphertext and digital signature together to the demander. Upon receipt, the demander first decrypts the digital signature with the provider's public key to obtain the data digest and authenticate the provider's identity, then decrypts the ciphertext with its own private key to obtain the original data, and finally computes the data digest with the same hash function to quickly verify the integrity of the data asset. The transaction is completed if this double verification passes.

The fifth stage is transaction confirmation. After the transaction and the data asset have flowed between the two parties, the transaction content record must be broadcast, recognized by all nodes in the system through the consensus mechanism, and recorded in the next block uploaded to the system, at which point the transaction is formally completed.
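The hash-and-sign double verification of the fourth stage can be sketched in a few lines. The sketch below is illustrative only: HMAC over a SHA-256 digest stands in for the asymmetric sign/verify key pair described above, and all names and keys are made up.

```python
import hashlib
import hmac

def sign(data: bytes, key: bytes) -> bytes:
    digest = hashlib.sha256(data).digest()                  # digest of the data asset
    return hmac.new(key, digest, hashlib.sha256).digest()   # placeholder "signature"

def verify(data: bytes, signature: bytes, key: bytes) -> bool:
    digest = hashlib.sha256(data).digest()                  # recompute the digest
    expected = hmac.new(key, digest, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)         # identity + integrity check

asset = b"standardized data asset"
shared_key = b"stand-in for the provider's key pair"
sig = sign(asset, shared_key)
assert verify(asset, sig, shared_key)                       # double verification passes
assert not verify(b"tampered asset", sig, shared_key)       # tampering is detected
```

In a real deployment the placeholder HMAC would be replaced by asymmetric primitives, so that the demander can verify with the provider's public key alone.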

Practical Byzantine Fault-Tolerant Algorithms

To solve the problem of reaching consensus in the presence of node failures, the Byzantine Fault Tolerance (BFT) algorithm was proposed in academia. However, for a long time the BFT algorithm and its improved variants suffered from excessive complexity, until the Practical Byzantine Fault Tolerance (PBFT) algorithm reduced the complexity from exponential to polynomial, making Byzantine fault tolerance feasible in practical applications. The PBFT algorithm guarantees both the safety and the liveness of consensus provided that the faulty nodes number no more than 1/3 of the total. The execution flow of the PBFT algorithm, shown in Fig. 1, consists of three processes: the request process, the broadcast process, and the response process [23].

Request process: the request process consists of two steps, master selection and request. First a node is selected as the master node by rotation or by a random algorithm, and then the client sends a request message to the master node in the format 〈REQUEST, operation, timestamp, client〉σc. Thereafter, the period during which the master node remains unchanged is called a view.

Broadcast process: the master node collects the request messages and, after checking them, broadcasts them to all other replica nodes.

Response process: after processing the request, all nodes return the processing result to the client in the message format 〈REPLY, view, timestamp, client, id_node, response〉. The client counts the received results, and once it receives at least f + 1 identical results (f is the number of tolerable Byzantine nodes) from different nodes, that result is taken as the final result.
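The client-side counting rule of the response process can be sketched as follows; the reply dictionaries and field names are illustrative stand-ins for the REPLY message format above.

```python
from collections import Counter

def final_result(replies, f):
    """Return the result once at least f+1 identical REPLY messages arrive
    from different nodes; f is the number of tolerable Byzantine nodes."""
    counts = Counter(r["response"] for r in replies)
    for response, n in counts.most_common():
        if n >= f + 1:
            return response
    return None  # not enough matching replies yet

replies = [{"id_node": i, "response": "ok"} for i in range(3)]
replies.append({"id_node": 3, "response": "bad"})   # one faulty reply
assert final_result(replies, f=1) == "ok"           # 3 matching >= f+1 = 2
assert final_result(replies[:1], f=1) is None       # a single reply is not enough
```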

Figure 1.

The execution process of the PBFT consensus algorithm

In particular, the master node broadcasting process consists of three phases: the pre-preparation phase, the preparation phase and the submission phase.

Pre-preparation phase: the master node first assigns a number to the client's request, and then sends a pre-preparation message 〈〈PRE-PREPARE, view, n, digest〉σp, message〉 to each replica node, where view is the view number, n is the proposal number, message is the request message, and digest is the numerical summary of the message.

Preparation phase: a replica node first verifies the legitimacy of the pre-preparation message and, after it passes verification, sends the preparation message 〈PREPARE, view, n, digest, id〉σi to the other nodes, where id is the replica node's identity proof. While sending its own preparation message, the replica node also receives preparation messages from other nodes. Each verified preparation message is added to the message log, and the node enters the prepared state when it has received at least 2f verified preparation messages.

Submission phase: the node broadcasts a confirmation message 〈COMMIT, view, n, digest〉σi to notify the other nodes that proposal n is in the preparation phase in view view; when at least 2f + 1 verified COMMIT messages are received, the proposal passes.

DPoS-based blockchain consensus for data assets in the cloud
Cloud Consensus Network Modeling

In CloudDPoS, the consensus process is divided into two parts, as in the DPoS consensus algorithm: in the first part, a certain number of "delegates" responsible for producing blocks are selected; in the second part, the blocks produced by the "delegates" are handed over to the other nodes involved in the consensus process for verification. The set of all virtual nodes in the cloud participating in the consensus process is defined as N, of size k (k ∈ N*), and the set N is divided into two disjoint node sets: the consensus node set N_C, of size l (l ∈ N*, l < k), and the transaction node set N_T, of size (k − l). For N_C and N_T, N_C ∪ N_T = N and N_C ∩ N_T = ∅. The sub-network consisting of the consensus nodes N_i^C is called the consensus network, and the sub-network consisting of the transaction nodes N_i^T is called the transaction network. Neither the consensus network nor the transaction network is static, since the nodes in the consensus network change with each consensus. The consensus network remains unchanged from the time the current node identities are determined until the next round of node election changes them.

The set of consensus nodes is further subdivided into a set of witness nodes N_W, of size m (m ∈ N*, m < l), and a set of participant nodes N_P, of size (l − m). In the blockchain environment of the CloudDPoS consensus algorithm, the transaction nodes are responsible for generating, encrypting, and signing transactions, broadcasting them to the blockchain network, and voting for the consensus nodes. The witness nodes are the winners of the vote, i.e., the "delegate" nodes mentioned earlier, and are responsible for producing blocks; the participant nodes are responsible for validating the blocks produced by the witness nodes; and all nodes keep a complete copy of the blockchain. The classification of nodes is shown in Figure 2:

Figure 2.

Node classification schematic

During the consensus process, a consensus node in N_C is not allowed to quit midway; it can only choose to quit when the next round of node election starts. However, if any witness node outputs an incorrect block, the next round of node election is executed immediately.

In CloudDPoS, each user with cloud computing resources is defined as a node participating in the blockchain consensus process. The resources of a cloud user are pledged to maintain the normal operation of the blockchain system, and this part of the user's resources serves as the amount of equity that, in the traditional DPoS consensus algorithm, decides which nodes become block producers. A vector R_i = (C_i^max, S_i^max, E_i^max), (i ∈ [1, k]), is defined in CloudDPoS to represent the total resources owned by a cloud user, where C_i^max is the total number of CPU cores, S_i^max is the memory size in KB, and E_i^max is the network bandwidth in Kbps. To ensure the normal business operation of the cloud user N_i, the algorithm reserves a certain amount of resources according to the resources it has already used, R_i^U = (C_i^U, S_i^U, E_i^U). The algorithm also sets a greedy factor σ_i ∈ (0,1) for each cloud user N_i, and the user can set the size of σ_i to decide how much of its free resources to stake as equity in the blockchain consensus process. The greedy factor σ_i is calculated as:

σ_i = w_1·σ_cpu^i + w_2·σ_mem^i + w_3·σ_nw^i    (1)

where σ_cpu^i, σ_mem^i, and σ_nw^i in Eq. (1) are the CPU, memory, and network bandwidth components of the greedy factor, respectively, and w_1, w_2, and w_3 are scaling parameters with Σ_{k∈{1,2,3}} w_k = 1. The purpose of introducing the greedy factor is to ensure that cloud users do not stake identical resources as equity, which brings heterogeneity to the voting in consensus. The equity function is therefore defined as:

f(R, R^U, σ) = σ(R − R^U)    (2)

For a cloud user N_i, (i ∈ [1, k]), its amount of equity is defined as α_i = (α_C^i, α_S^i, α_E^i), where:

α_C^i = σ_cpu^i (C_i^max − C_i^U)    (3)
α_S^i = σ_mem^i (S_i^max − S_i^U)    (4)
α_E^i = σ_nw^i (E_i^max − E_i^U)    (5)

α_C^i in Eq. (3) is the CPU component of the equity amount, α_S^i in Eq. (4) is the memory component, and α_E^i in Eq. (5) is the network bandwidth component. The resources staked as the equity amount by users participating in consensus are deducted at the beginning of each consensus round, and are returned to the users only when a new consensus round starts or when a user is identified as a transaction node and quits mid-round.
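Equations (3)-(5) amount to scaling each free resource by its greedy-factor component. The following sketch computes the equity vector α_i; all concrete resource numbers and σ values are made up for illustration.

```python
# Illustrative computation of the CloudDPoS equity amount (Eqs. (3)-(5)).
def equity(total, used, greed):
    """total/used: (CPU cores, memory in KB, bandwidth in Kbps);
    greed: per-resource greedy-factor components, each in (0, 1)."""
    return tuple(g * (t - u) for g, t, u in zip(greed, total, used))

R_max = (16, 8_388_608, 100_000)      # C_i^max, S_i^max, E_i^max (hypothetical)
R_used = (4, 2_097_152, 20_000)       # resources reserved for the user's own business
sigma = (0.5, 0.25, 0.4)              # sigma_cpu, sigma_mem, sigma_nw (hypothetical)

alpha = equity(R_max, R_used, sigma)  # (alpha_C, alpha_S, alpha_E)
assert alpha == (6.0, 1_572_864.0, 32_000.0)
```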

In CloudDPoS, for any node (user) N_i, (i ∈ [1, k]), the larger the value |α_i| of its equity amount, the higher its probability of becoming a witness. This is defined as the bias probability P_i, calculated as:

P_i = |α_i| / Σ_{k=1}^{l} |α_k|    (6)

The numerator of Equation (6) is the equity value of a node participating in the election, and the denominator is the sum of the equity values of all consensus nodes; the bias probability is used in the witness election process. For the election, the algorithm borrows the timer idea from the Raft consensus algorithm: CloudDPoS prepares a biased timer TIMER for each node participating in the witness election. TIMER uses the waiting time TIMEWAIT as its upper timing limit, and the maximum value of TIMEWAIT is MAXTO. In CloudDPoS, MAXTO is set to a random number between 150 ms and 300 ms, a range experimentally verified to give a more even distribution of voting results. The formula for TIMEWAIT is:

TIMEWAIT = MAXTO × P_i    (7)

Before a consensus node initiates a voting request to a transaction node, each node will first turn on its respective TIMER and wait for the respective TIMEWAIT length of time before initiating the voting request. Since in CloudDPoS, each transaction node can only cast one vote in each round of voting, having a biased timer TIMER ensures that a node with a large amount of equity has the opportunity to obtain more votes, and thus ensures that it has a higher probability of being elected as a witness.
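Equations (6) and (7) can be sketched together; the equity magnitudes below are illustrative, and MAXTO is drawn uniformly from the 150-300 ms range stated above.

```python
import random

def bias_probability(alpha_abs, all_alpha_abs):
    # Eq. (6): one node's equity value over the sum across all consensus nodes
    return alpha_abs / sum(all_alpha_abs)

def time_wait(p_i):
    max_to = random.uniform(150, 300)   # MAXTO in milliseconds
    return max_to * p_i                 # Eq. (7): TIMEWAIT = MAXTO * P_i

equities = [40.0, 30.0, 20.0, 10.0]     # |alpha_k| for the l consensus nodes (made up)
p = [bias_probability(a, equities) for a in equities]
assert abs(sum(p) - 1.0) < 1e-9         # the bias probabilities form a distribution
assert p[0] == max(p)                   # the largest equity has the highest bias
assert 0 <= time_wait(p[0]) <= 300      # waiting time never exceeds MAXTO's range
```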

CloudDPoS Consensus Formation Mechanisms

The CloudDPoS consensus algorithm consists of two main parts: the election of consensus nodes, and block generation and consensus. This subsection introduces each part in detail, and the overall flow chart of the consensus algorithm is given at the end.

Election of consensus nodes

CloudDPoS completes the screening and division of the node set through the election of consensus nodes. The election process is described in detail as Algorithm 1.

Algorithm 1 is executed after the initialization of the blockchain system is completed; it is also executed periodically during the subsequent operation of the system, and immediately whenever a witness node loses responsiveness or generates an erroneous block. The algorithm takes as input the round information Round of the current election and the set of nodes N, and returns the set of witness nodes N_W and the set of participant nodes N_P.
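Since the paper gives Algorithm 1 only by description, the following is a hypothetical reading of its core step: each transaction node casts one vote, the m vote winners become witnesses, and the remaining consensus nodes become participants. Node names and vote counts are invented.

```python
def elect(consensus_nodes, votes, m):
    """votes: {node_id: vote_count}; returns (witness set N_W, participant set N_P)."""
    ranked = sorted(consensus_nodes, key=lambda n: votes.get(n, 0), reverse=True)
    witnesses = set(ranked[:m])          # the m vote winners produce blocks
    participants = set(ranked[m:])       # the rest verify the produced blocks
    return witnesses, participants

nodes = ["n1", "n2", "n3", "n4", "n5"]
votes = {"n1": 9, "n2": 7, "n3": 4, "n4": 2, "n5": 1}
n_w, n_p = elect(nodes, votes, m=2)
assert n_w == {"n1", "n2"} and n_p == {"n3", "n4", "n5"}
assert n_w.isdisjoint(n_p)               # N_W and N_P partition the consensus set
```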

Block generation and consensus

When the node election is over and the identity of each node has been determined, the blockchain system starts the formal block generation and consensus process to extend the blockchain. The block generation and consensus process is described in detail as Algorithm 2.

Algorithm 2 is executed after the consensus node election algorithm completes, and is responsible for block generation and block consensus in the consensus process. The algorithm takes the set of witness nodes N_W and the set of participant nodes N_P as inputs; it returns the judgment message BlockCORRECT if the newly generated block is correct, and BlockERROR if it is wrong.
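As with Algorithm 1, only a description is given, so the sketch below is a hypothetical minimal version of Algorithm 2: a witness packs transactions into a block, participants re-verify its hash, and the round returns BlockCORRECT or BlockERROR. The block structure is invented for illustration.

```python
import hashlib

def produce_block(prev_hash, txs):
    # witness node: pack transactions and seal them with a SHA-256 hash
    payload = prev_hash + "".join(txs)
    return {"prev": prev_hash, "txs": txs,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def validate(block):
    # participant node: recompute the hash and compare with the sealed one
    payload = block["prev"] + "".join(block["txs"])
    return hashlib.sha256(payload.encode()).hexdigest() == block["hash"]

def consensus_round(block, n_participants):
    approvals = sum(validate(block) for _ in range(n_participants))
    return "BlockCORRECT" if approvals == n_participants else "BlockERROR"

block = produce_block("genesis", ["tx1", "tx2"])
assert consensus_round(block, n_participants=3) == "BlockCORRECT"
block["txs"] = ["tx1", "forged"]                  # tamper after production
assert consensus_round(block, n_participants=3) == "BlockERROR"
```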

Overall flow of the consensus algorithm

To facilitate understanding, the overall flow chart of the CloudDPoS consensus algorithm is shown in Figure 3.

Figure 3.

Flow chart of the CloudDPoS consensus algorithm

Results and Discussion
Simulation Performance Testing

For the performance experiments, since this paper implements the protocol on the CloudDPoS model, the officially provided performance testing tool can be used. With this tool, the block generation speed, time delay, number of successful transactions, sending frequency, memory usage, and CPU usage of the protocol were measured during the experiment. The detailed results are as follows.

Number of blocks generated per second

First, the results of the block generation speed test are shown in Fig. 4. The number of blocks generated per second ranges from 46 to 53 at sending frequencies of 50 tps, 100 tps, 150 tps, and 200 tps. Compared with Bitcoin, which generates a block every ten minutes, and Ethereum, which generates a block every 10 seconds, the block generation rate of the model is greatly improved. This paper also implements a one-way currency transaction function on the testing platform; compared with one-way currency transactions, the number of blocks generated per second is slightly lower, but the overall performance is comparable.

Figure 4.

Blocks per second

Time delay per transaction

The time delay refers to the time between the user's command and the system's response; the transaction time delay results are shown in Fig. 5. The time delay test results appear in Fig. 5(b), where the horizontal axis is the sending frequency, i.e., the number of requests the user sends per second, and the vertical axis is the time delay. As can be seen from the figure, in this experimental environment, when the system receives 50 requests per second, the average time delay is 1.67 s, the minimum is 0.69 s, and the maximum is 2.84 s. As the sending frequency increases, the time delay also increases, but the average time delay stays at around 10 s, with no significant further increase at the later stage. Figure 5(a) shows the performance of a one-way currency transaction; compared with it, the time delay of this transaction is basically comparable, with no major performance shortfall. Analysis shows that when the data sending frequency exceeds 50 tps there is a sudden increase in delay, because block generation can no longer keep up with the data sending frequency. Therefore, the system can basically meet users' needs in the absence of large-scale sending requests; under large-scale sending requests, the resulting time delay may be larger due to the limitation of the block generation speed.

Figure 5.

Time delay of transactions

Memory and CPU consumption

The memory consumption of the model at each node is shown in Table 1. In the simulation experiment, a total of 10 nodes were created: 6 peer nodes, 3 CA nodes, and 1 orderer node. The table shows that the peer nodes have the highest memory consumption during the operation of this protocol: at a sending frequency of 50 tps, their average memory consumption is 129.3 MB; at 100 tps, 140 MB; and at 150 tps and 200 tps, 152.56 MB and 167.8 MB, respectively. The next-highest consumer is the orderer node, whose total consumption stays within 120 MB. Finally, the CA nodes consume essentially no memory. From the experimental point of view, the system's memory consumption is therefore modest and suitable for the needs of ordinary users.

The memory consumption of each node (MB)

Node 50 tps 100 tps 150 tps 200 tps
Max Avg Max Avg Max Avg Max Avg
peer0.org1 128.1 121.4 142.8 133.8 154.6 147.8 164.5 162.8
peer1.org1 127.5 120.8 136.1 131.7 148.9 141.7 158.1 157.6
peer0.org2 125.2 120.1 143.5 132.2 153.1 146.2 163.7 162.5
peer1.org2 150.6 146.2 164.2 154.9 172.4 167.3 181.2 180.7
peer0.org3 129.7 123.5 145.7 134.8 155.8 149.6 164.8 164.6
peer1.org3 148.2 143.7 159.6 152.9 169.7 162.8 179.6 178.6
ca.org1 14.9 14.9 14.9 14.9 14.9 14.9 14.9 12.8
ca.org2 9.8 9.8 9.8 9.8 9.8 9.8 9.8 8.4
ca.org3 8.6 8.6 8.6 8.6 8.6 8.6 8.6 8.2
orderer 87.3 84.6 94.6 90.6 105.6 99.6 112.5 111.5

Finally, the CPU consumption of each node in the simulation experiment is shown in Table 2. Among the 10 nodes created, the peer and orderer nodes consume more CPU resources, while the CA nodes consume essentially none. At sending rates of 50, 100, 150, and 200 tps, the CPU consumption of the peer nodes is all around 46%, and it does not increase significantly as the sending rate rises. The experiments therefore show that the CPU consumption of the system is also within a tolerable range, suitable for the needs of ordinary users.

The CPU consumption of each node (%)

Node 50 tps 100 tps 150 tps 200 tps
Max Avg Max Avg Max Avg Max Avg
peer0.org1 59.21 50.91 64.58 47.84 56.15 47.89 69.74 17.37
peer1.org1 54.85 43.64 59.62 42.85 55.92 49.22 67.25 15.28
peer0.org2 57.13 49.37 65.71 49.08 62.37 45.38 59.82 15.96
peer1.org2 51.88 43.82 62.84 43.66 61.96 40.64 66.39 15.79
peer0.org3 57.16 49.85 64.85 48.17 60.32 48.83 61.28 16.28
peer1.org3 50.34 42.67 64.18 43.91 61.59 39.16 59.34 15.71
ca.org1 0 0 0.47 0.46 0 0 1.34 0.05
ca.org2 0 0 0.82 0.06 0 0 1.24 0.05
ca.org3 0 0 0 0 0 0 0 0
orderer 57.39 38.69 98.25 44.38 175.28 46.85 135.89 8.14
Assessment of data asset management integrity

The maturity of the data asset management strategy proposed in this paper is then evaluated. To ensure the completeness and objectivity of the results, the assessment indicators focus on the management, technical, and operational levels.

Management-level indicators are the cornerstone of the data management system and guarantee the successful implementation of its tasks: through strategy formulation, improvement of the organizational structure, establishment of rules and regulations, governance and monitoring mechanisms, and training and publicity, they ensure the smooth implementation of data management work. Technical-level indicators provide technical support for the data processing system, relying on data standards and data architectures; they promote the standardization and normalization of data, rapid data collection, reduced storage costs, improved processing efficiency, the breaking of information silos, and the creation of a secure data environment. Operational-level indicators are the key to the data management system: by clarifying data ownership, assessing data value, realizing trustworthy data transactions, and providing a variety of data services, and by relying on data lifecycle management to enhance data quality throughout the whole process, perfect the data system, and record and trace data, they promote data circulation, increase the scope and depth of data openness, enhance the speed of data flow, and accelerate the process of data assetization.

The indicators at each level and their weights, calculated by the entropy weight method, are shown in Table 3. In the subsequent assessment, a score of 0~1 corresponds to Level 1 (initial), 1~2 to Level 2, 2~3 to Level 3 (robust), 3~4 to Level 4 (good), and 4~5 to the highest Level 5 (excellent).
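As a rough illustration of how the entropy weight method produces such indicator weights, the sketch below computes entropy-based weights for a small, made-up score matrix (rows: assessed objects, columns: indicators); it is not the paper's actual data.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: normalize each column, compute its entropy
    e_j = -(1/ln n) * sum(p_ij * ln p_ij), then weight w_j = (1-e_j)/sum(1-e_k)."""
    n = len(matrix)
    m = len(matrix[0])
    entropies = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        entropies.append(e)
    redundancy = [1 - e for e in entropies]   # more dispersion -> larger weight
    return [d / sum(redundancy) for d in redundancy]

scores = [[3.4, 4.2, 4.5],
          [4.7, 4.3, 4.7],
          [4.0, 3.7, 4.2]]
w = entropy_weights(scores)
assert abs(sum(w) - 1.0) < 1e-9               # weights normalize to 1
assert all(wi >= 0 for wi in w)
```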

The indicators and the weights of the various levels

Classification Primary indicator Peer weight Secondary indicator Peer weight Global weight
Management class A data strategy 0.125 A1 data strategy planning 0.317 0.04
A2 data strategy implementation 0.385 0.048
A3 data strategy assessment 0.298 0.037
B data management 0.125 B1 data governance organization 0.322 0.04
B2 data governance system 0.359 0.045
B3 data governance communication 0.319 0.04
Technical class C data architecture 0.125 C1 data model 0.238 0.03
C2 data distribution 0.229 0.029
C3 data integration and sharing 0.262 0.033
C4 metadata 0.271 0.034
D data standard 0.125 D1 business terms 0.266 0.033
D2 reference data and master data 0.245 0.031
D3 data element 0.237 0.03
D4 index data 0.252 0.032
E data security 0.125 E1 data security strategy 0.311 0.039
E2 data security management 0.356 0.045
E3 data security audit 0.333 0.042
Operation class F data application 0.125 F1 data analysis 0.357 0.045
F2 data open sharing 0.295 0.037
F3 data service 0.348 0.044
G data quality 0.125 G1 data quality requirements 0.236 0.03
G2 data quality check 0.281 0.035
G3 data quality analysis 0.257 0.032
G4 data quality improvement 0.226 0.028
H data life cycle 0.125 H1 data requirements 0.258 0.032
H2 data design and development 0.216 0.027
H3 data operations 0.283 0.035
H4 data retirement 0.243 0.03
Assessment of management category integrity

The management-class capabilities include the two first-level indicators, A data strategy and B data governance, and their second-level indicators. Table 4 shows the management-class assessment of the strategy proposed in this paper. Table 4 indicates that the two competency domains of the management class score 4.05 and 4.57, both at Level 5 (excellent).

The data strategy planning score of 3.44 is assessed at level four, indicating that the builder reflected the goals and scope of data management during the platform construction process.

The data strategy implementation score of 4.23, at Level 5 (excellent), indicates that the platform has carried out relevant work at the departmental level to a certain extent and can better assess the gaps between key data functions and the vision and goals.

The data strategy assessment score of 4.49 is at Level 5 (excellent), demonstrating that the platform has established empirical examples of data functions and activities within the specific-needs project teams.

The data governance organization score of 4.69 is at Level 5 (excellent). Positions for data management and application have been established within the specific-needs project teams with clear functional responsibilities; however, the platform currently still relies on individual capability to handle data issues and has not yet established a professional team at the organizational level.

The data governance system score of 4.32 is at Level 5 (excellent), showing that the platform has established data-related systems within the specific-needs project teams, and members of each project team autonomously decide how to implement and enforce the data governance system.

The data governance communication score of 4.69 is at Level 5 (excellent), showing that the platform has carried out communication activities and managed communication within the specific-needs project teams.

Management ability maturity evaluation results

Classification Primary indicator Score Grade Secondary indicator Score Grade
Management class A 4.05 Level 5 A1 3.44 Level 4
A2 4.23 Level 5
A3 4.49 Level 5
B 4.57 Level 5 B1 4.69 Level 5
B2 4.32 Level 5
B3 4.69 Level 5
Assessment of the refinement of the technology category

The technical-class capabilities comprise the first-level indicators data architecture, data standards, and data security, together with their second-level indicators. Table 5 shows the evaluation results. As the table shows, the data standards score is relatively high, while the data security score objectively reflects the strong guarantee of data security provided by this paper's strategy.

Technical capability maturity evaluation results

| Classification | Primary indicator | Score | Grade | Secondary indicator | Score | Grade |
| Technical class | C | 3.99 | Level 4 | C1 | 4.02 | Level 5 |
| | | | | C2 | 3.68 | Level 4 |
| | | | | C3 | 4.20 | Level 5 |
| | | | | C4 | 4.05 | Level 5 |
| | D | 4.40 | Level 5 | D1 | 3.42 | Level 4 |
| | | | | D2 | 4.88 | Level 5 |
| | | | | D3 | 4.78 | Level 5 |
| | | | | D4 | 4.53 | Level 5 |
| | E | 3.87 | Level 4 | E1 | 3.44 | Level 4 |
| | | | | E2 | 4.51 | Level 5 |
| | | | | E3 | 3.65 | Level 4 |

The scores of data architecture, data standards, and data security are 3.99, 4.4, and 3.87, respectively, at Level 4 (good), Level 5 (excellent), and Level 4 (good). The highest-scoring second-level indicators are D2 and D3, at 4.88 and 4.78 respectively. The platform has established data standards for some of the reference and master data and described their attributes comprehensively; it has also established preliminary management specifications for reference and master data. Unified recording of public data meta-information has been realized in individual departments, and a data element identification mechanism has been established in individual departments for the identification and creation of data elements.

Assessment of operational excellence

The assessment results for the operational-class capabilities, including the first-level indicators data application, data quality, and data life cycle together with their second-level indicators, are shown in Table 6. The assessments of data application, data quality, and data life cycle are all above 4 points, at Level 5 (excellent).

Operational capability maturity evaluation results

| Classification | Primary indicator | Score | Grade | Secondary indicator | Score | Grade |
| Operation class | F | 4.05 | Level 5 | F1 | 3.91 | Level 4 |
| | | | | F2 | 3.89 | Level 4 |
| | | | | F3 | 4.36 | Level 5 |
| | G | 4.30 | Level 5 | G1 | 4.15 | Level 5 |
| | | | | G2 | 4.41 | Level 5 |
| | | | | G3 | 3.79 | Level 4 |
| | | | | G4 | 4.83 | Level 5 |
| | H | 4.06 | Level 5 | H1 | 3.90 | Level 4 |
| | | | | H2 | 4.33 | Level 5 |
| | | | | H3 | 3.93 | Level 4 |
| | | | | H4 | 4.08 | Level 5 |

Among the higher-scoring second-level indicators are data service, data quality check, and data quality improvement, with scores of 4.36, 4.41, and 4.83, respectively, all at Level 5 (excellent). This indicates that the platform has clearly defined the security, quality, and monitoring requirements for data services, and has formulated processes and strategies for data service management to guide standardized management in each department. However, data quality inspection is carried out only when data problems arise, and data quality corrections are performed only for issues that appear in individual sections.

Summarizing the above assessment results, the final comprehensive score for the perfection of the data asset management strategy proposed in this paper is 4.16, at Level 5 (excellent). The data asset management platform constructed in this paper thus satisfies the design expectations.
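The numbers in Tables 4-6 are internally consistent with a simple aggregation: every first-level score equals the unweighted mean of its second-level scores, the comprehensive score of 4.16 equals the unweighted mean of the eight first-level scores, and scores of 4.0 or above map to Level 5 (excellent) while lower scores map to Level 4 (good). The paper does not state its weighting scheme or grade thresholds explicitly, so the following Python sketch reconstructs them from the tabulated values as an assumption:

```python
# Hedged sketch: reproduce the maturity scores in Tables 4-6 under the
# inferred equal-weight aggregation. The second-level scores below are
# taken directly from the tables; the mean-based roll-up and the 4.0
# grade threshold are inferred from the numbers, not stated in the paper.

SECONDARY_SCORES = {
    "A Data Strategy":     [3.44, 4.23, 4.49],
    "B Data Governance":   [4.69, 4.32, 4.69],
    "C Data Architecture": [4.02, 3.68, 4.20, 4.05],
    "D Data Standards":    [3.42, 4.88, 4.78, 4.53],
    "E Data Security":     [3.44, 4.51, 3.65],
    "F Data Application":  [3.91, 3.89, 4.36],
    "G Data Quality":      [4.15, 4.41, 3.79, 4.83],
    "H Data Life Cycle":   [3.90, 4.33, 3.93, 4.08],
}

def grade(score: float) -> str:
    """Map a score to a maturity grade (threshold inferred from the tables)."""
    return "Level 5 (excellent)" if score >= 4.0 else "Level 4 (good)"

# First-level score = unweighted mean of its second-level scores.
primary = {name: sum(vals) / len(vals) for name, vals in SECONDARY_SCORES.items()}

# Comprehensive score = unweighted mean of the eight first-level scores.
comprehensive = sum(primary.values()) / len(primary)

for name, score in primary.items():
    print(f"{name}: {score:.2f} -> {grade(score)}")
print(f"Comprehensive: {comprehensive:.2f} -> {grade(comprehensive)}")
```

Under these assumptions the sketch reproduces every first-level score in the tables (e.g. C = 3.99, D = 4.40) and the final comprehensive score of 4.16.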

Conclusion

This paper proposes a CloudDPoS consensus algorithm for cloud platforms, builds a blockchain-based data asset management platform, and realizes data asset management for the cloud computing environment. Performance simulation experiments are conducted using the officially provided performance testing tool, and the perfection of the platform's data asset management is evaluated. Compared with the block generation intervals of Bitcoin and Ethereum, roughly 10 minutes and 10 seconds respectively, the platform in this paper generates 46 to 53 blocks per second at sending frequencies of 50 tps, 100 tps, 150 tps, and 200 tps, a great improvement in rate. In the time-delay experiment, when the system receives 50 requests per second, the average delay is 1.67 s, the minimum delay 0.69 s, and the maximum delay 2.84 s. As the sending frequency increases, the delay also grows, but the average delay remains around 10 s with no significant increase at later stages, which ensures reasonable response efficiency for user interactions without consuming excessive memory or CPU resources. The evaluation of the perfection of the proposed data asset management strategy yields a final comprehensive score of 4.16, reflecting the platform's strong guarantees and excellent performance in the dimensions of data application, data security, and data governance.
