
Strategies for optimal allocation of cloud computing resources for innovation and entrepreneurship education in industry-teaching integration environment

Sep 26, 2025


Introduction

With the rapid development of the social economy and the rise of the knowledge economy, innovation and entrepreneurship education has become one of the hot topics in higher education. As an important way to cultivate innovative and entrepreneurial talent, the innovation and entrepreneurship education model based on the integration of industry and education has gradually attracted attention [1-4]. Industry-education integration refers to in-depth cooperation between universities and enterprises that organically combines education with industry through jointly cultivating talent, jointly developing research projects, and similar activities. In this context, a strategy for the optimal allocation of cloud computing resources for innovation and entrepreneurship education is of great significance [5-8].

With the rapid development of information technology, cloud computing, as an advanced computing model, has been widely adopted across industries [9-10]. Through centralized resource management and virtualization technology, cloud computing makes efficient use of resources and allows users to conveniently access the computing resources they need [11-14]. As an emerging technology, it is reshaping the way educational resources are configured, injecting new vitality into the development of education and creating better conditions for cultivating innovative talent suited to the needs of the times. However, the allocation and optimization of resources in the cloud computing environment remains an important challenge [15-18]. A reasonable allocation and optimization strategy for cloud computing resources is of great significance in the practice of innovation and entrepreneurship education. By combining resource allocation and planning, resource utilization optimization, and resource scheduling strategies, resource utilization efficiency in the cloud computing environment can be improved, providing good quality of service and meeting the needs of innovation and entrepreneurship education [19-22].

This paper constructs an innovation and entrepreneurship education platform based on cloud computing technology, covering its functional modules, platform framework, and business support service architecture. For the allocation of the platform's cloud computing resources, the ant colony algorithm is adopted as the core of the scheduling strategy, the simulated annealing algorithm is used to solve the combinatorial optimization problem of cloud resource allocation, and an improved optimization algorithm, HGAACO, is proposed. Three pheromone updating methods for the ant colony algorithm are considered, namely the ant-quantity model, the ant-density model, and the ant-cycle model, and the ant-cycle model is chosen as the pheromone updating strategy on the basis of its convergence speed. With the simulated annealing component, the representation of tasks and nodes is refined, a matching factor is introduced to express the degree of matching between tasks and resource nodes, and the load balancing degree is defined over the mapping sequence of tasks to resources. The fitness function is then specified, and the time span and load balancing degree are combined as the evaluation criteria to formulate the objective function. Finally, the cloud computing simulation tool CloudSim is used to carry out simulation experiments on the application of the innovation and entrepreneurship education platform, exploring the performance of the proposed HGAACO algorithm for optimal cloud resource allocation in terms of task execution, resource allocation, and load balancing.

Innovation and Entrepreneurship Education Platform in Industry-Education Integration Environment

Innovation and entrepreneurship education is an important way to cultivate well-rounded practical talent. In recent years, information technology and education have become increasingly intertwined, and building an innovation and entrepreneurship education platform in an industry-education integration environment plays an important role in improving young people's innovation and entrepreneurship capabilities [23].

This chapter establishes the innovation and entrepreneurship education platform and completes its functional module construction and architecture design.

Functional Module Construction
Curriculum development and learning modules

Course construction is mainly divided into four major parts: basic course information, course content editing, resource management, and course learning.

Basic course information

The function of basic course information includes general information, course overview, course content, activities and assessment.

General information: create the course cover, name, credits, suggested hours, classification and start time.

Course Overview: Including basic modules such as course introduction, learning objectives, learning methods, course assessment, cited resources, etc. You can also add some modules by yourself according to your needs, such as knowledge points, teaching forms, assessment scope, etc.

Course content: Teachers can create the complete information content of the course, including the course catalog, detailed chapter content, knowledge points, excellent cases, all kinds of courseware resources and so on.

Activities: Teachers can publish some activity information.

Assessment: Teachers can publish exams.

Course Content Editing

The course content editing function includes the following parts.

Chapter: You can edit the chapter information of the course content, divided into two levels.

Resource Library: Teacher’s existing resources can be inserted into the course content.

Local resources: upload and insert local resources into the course content.

Online Video: Insert the online course code.

Content editing: you can enter course content, pictures, etc. in the page.

Resource Management

Resource management organizes all learning resources on the platform according to its media resource standards, such as videos, Word documents, PDF documents, JPEG files, and Excel documents. Managing these resources makes it easy for training teachers to call on them when creating courses or assembling test papers, and they also serve as important learning material for learners.

Course Learning

The course learning page has two major functions: learning content and learning support. Learning content includes chapters, graphic resources, video resources, and so on. Learning support provides tools such as study groups, finding teachers, study notes, and evaluation.

Question bank module

The platform allows test papers to be edited directly online, and existing Word documents can also be imported directly as test papers. For online test assembly, training teachers use the "insert test paper" function provided by the platform to edit test questions online and can set the question stem, answer analysis, score, and difficulty. The question bank built by the teachers of a course can be dynamically extended and adjusted, and it can also be referenced by teachers of other courses through authorization.

Guided learning support modules

The guided learning function is mainly realized through instant messaging and a learning trace system. Instant messaging is mainly used for tutoring services such as Q&A, group discussion, and teaching management between students and between students and teachers. Learning traces make it easy for teachers to view students' current learning status, analyze the problems students face, and give them guidance.

Platform framework design

This platform relies on cloud computing technology, and its framework is shown in Figure 1. The platform mainly comprises four aspects: the cloud foundation, cloud management, cloud services, and cloud security. The cloud foundation is the overall hardware foundation and support platform, including physical resources such as servers, storage devices, and network devices; these physical resources are virtualized through virtualization technology, and on-demand allocation and dynamic deployment of resources are realized through the scheduling and management of the virtualized resources. Cloud management mainly covers the storage of educational resources, service scheduling, and disaster recovery, as well as the migration of services according to resource utilization; cloud storage of educational resources provides distributed storage of massive resources and streaming media playback. Cloud services manage the resources and provide them to users through the digital educational resource construction and sharing service platform, including question bank management, learning trace records, instant messaging, course creation, course learning, learning monitoring, and so on. Cloud security protects the security of the cloud computing platform and its users through identity authentication, role management, permission management, and other mechanisms.

Figure 1.

The structure diagram of the platform framework

Business Support Service Architecture Design

This platform relies on cloud computing virtualization technology. The platform support services mainly include five major components: the load balancing server, business nodes, file server cluster, video-on-demand service cluster, and database cluster. The load balancing server is the business entrance of the whole platform, automatically distributing user requests to different services according to business concurrency. The business nodes include four major microservices: the assessment service, instant messaging, course content, and unified authentication. The file server cluster is a general-purpose highly available file cluster and the main support service for course content. The video-on-demand service cluster is a general-purpose highly available video cluster that supports course video resources. The database cluster is a general-purpose highly available, disaster-tolerant database cluster and the core database support service for operational business.

Resource allocation of innovation and entrepreneurship education platform based on cloud computing

In the previous chapter, this paper relied on cloud computing technology to construct an innovation and entrepreneurship education platform, covering its functional modules, platform framework, and business support service architecture. In this chapter, we optimize the resource allocation problem faced by the platform in the cloud environment, adopting the ant colony algorithm as the core of the scheduling strategy and introducing the simulated annealing algorithm to address the ant colony algorithm's tendency to fall into local optima in its later stages.

Cloud Computing Resource Allocation

The cloud computing system framework can be divided into a task layer, a service layer, and a resource layer. The specific process of resource request and allocation in the cloud environment is as follows: at the task layer, the user sends service demands through various terminals accessing the network; the service layer requests the corresponding resources from the resource layer according to the established service mode; and the resource layer allocates available resources according to the current operating state of the system resources and the scheduling strategy, finally delivering them to the user through the service layer. The three layers cooperate in a bottom-up manner, and the most important element is the allocation scheme for the bottom-layer resources.

Cloud Resource Allocation Model

To formalize the problem, we assume that each data center in the cloud contains a number of physical machines (PMs) and that there is a set of virtual machines (VMs) waiting to be allocated. The ultimate goal is to find an allocation policy that minimizes the total number of PMs in use.

In the following, we provide a unified description of the resource model of the system:

Let the set of PMs in the data center be denoted as $P = (p_1, p_2, \ldots, p_j, \ldots, p_M)$, where $p_j$ denotes the $j$th PM and $p_j = (c_{j\_cpu}, c_{j\_mem}, c_{j\_stor}, c_{j\_bw})$, denoting the hardware configuration of that PM in terms of CPU performance, memory, hard disk, and network bandwidth, respectively.

The set of unallocated VMs is denoted as $V = (vm_1, vm_2, \ldots, vm_i, \ldots, vm_N)$, where $vm_i$ denotes the $i$th VM and $vm_i = (r_{i\_cpu}, r_{i\_mem}, r_{i\_stor}, r_{i\_bw})$, denoting the resources requested by that VM in each dimension, corresponding to CPU performance, memory, hard disk, and network bandwidth, respectively.

For each $p_j \in P$, there exists a vector $H_j = (h_{j1}, h_{j2}, \ldots, h_{ji}, \ldots, h_{jN})$ representing the mapping from this physical machine to the virtual machines: if there exists a mapping relationship from $p_j$ to $vm_i$, then $h_{ji} = 1$; otherwise $h_{ji} = 0$.

Obviously, any valid mapping must satisfy the following: for every $p_j \in P$, the total capacity in each dimension (CPU performance, memory, hard disk, and network bandwidth) must be no less than the sum of the corresponding demands of all the VMs mapped to it:

$$\sum_{i=1}^{N} r_{i\_cpu} \times h_{ji} \le c_{j\_cpu}, \quad 1 \le j \le M$$
$$\sum_{i=1}^{N} r_{i\_mem} \times h_{ji} \le c_{j\_mem}, \quad 1 \le j \le M$$
$$\sum_{i=1}^{N} r_{i\_stor} \times h_{ji} \le c_{j\_stor}, \quad 1 \le j \le M$$
$$\sum_{i=1}^{N} r_{i\_bw} \times h_{ji} \le c_{j\_bw}, \quad 1 \le j \le M$$

Meanwhile, the state of the PMs involved in the allocation is represented by the set $U = (u_1, u_2, \ldots, u_M)$, where $u_j = 1$ if there exists $vm_i \in V$ with $h_{ji} = 1$, and $u_j = 0$ otherwise.

The final optimization objective can then be expressed as:
$$\min X = \sum_{j=1}^{M} u_j, \quad \text{where } u_j = 1 \text{ if } \exists\, vm_i \in V \text{ with } h_{ji} = 1, \text{ and } u_j = 0 \text{ otherwise,}$$

where $X$ denotes the total number of PMs participating in the allocation.
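To make the model concrete, the following is a minimal Java sketch (not the paper's implementation; all class and variable names are hypothetical) that checks the four capacity constraints for a candidate mapping $h$ and counts the active PMs that the objective $X$ minimizes.

```java
// Minimal sketch of the PM/VM allocation model above; names are illustrative.
public class AllocationModel {
    // pmCap[j] = {c_cpu, c_mem, c_stor, c_bw} for PM j
    // vmReq[i] = {r_cpu, r_mem, r_stor, r_bw} for VM i
    // h[j][i]  = 1 if VM i is placed on PM j, else 0

    /** Checks the four capacity constraints for every PM. */
    static boolean feasible(double[][] pmCap, double[][] vmReq, int[][] h) {
        int M = pmCap.length, N = vmReq.length, D = 4; // 4 resource dimensions
        for (int j = 0; j < M; j++) {
            for (int d = 0; d < D; d++) {
                double used = 0.0;
                for (int i = 0; i < N; i++) used += vmReq[i][d] * h[j][i];
                if (used > pmCap[j][d]) return false;   // constraint violated
            }
        }
        return true;
    }

    /** Objective X: number of PMs that host at least one VM (u_j = 1). */
    static int activePMs(int[][] h) {
        int count = 0;
        for (int[] row : h) {
            for (int hi : row) { if (hi == 1) { count++; break; } }
        }
        return count;
    }
}
```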

Evaluation metrics for cloud resource allocation

Resource allocation is also the process of deploying and executing tasks, but completing a single task is not the only goal; the execution of tasks also needs to be evaluated so as to maximize economic efficiency, improve resource utilization, and so on. At present, the resource allocation process mainly takes time span and load balancing as its two main objectives.

Time Span

The time span is the total response time the system consumes to fulfill all task requests from all users. Compared with the independent response times of individual tasks, the time span of the system better reflects its computing power and overall throughput. Obviously, the shorter the completion time, the higher the performance of the system, and the quality of service and user satisfaction improve accordingly.

Load Balancing

The cloud environment is multi-layered and scalable, and the computing nodes in the system are heterogeneous and independent of each other. Allocating resources in such a complex and changeable network environment requires not only taking the parallelism between tasks into account, but also monitoring in real time the load status of each resource in the cloud and whether it has failed, so as to keep the system load balanced and improve overall resource utilization.

Resource allocation based on ACO algorithm
Principles of Ant Colony Algorithm

The Ant Colony Optimization (ACO) algorithm, also known as the ant colony algorithm, is a heuristic algorithm that simulates the foraging behavior of ant colonies [24]. The ACO algorithm combines deterministic construction with stochastic choice, builds optimized solutions through continuous iteration, adjusts dynamically as environmental factors change, and finally reaches the global optimum through the accumulation of pheromones. Because of its positive feedback mechanism and robustness, many scholars have carried out in-depth research on the ACO algorithm and applied it to a variety of NP-complete problems, such as the classical traveling salesman problem (TSP), graph coloring, and the quadratic assignment problem, achieving notable results.

The execution process of ant colony algorithm is essentially to simulate the behavioral process of ants foraging for food.

Solving the problem using the ant colony algorithm

The background and principles of the ACO algorithm were introduced in the previous section. As a heuristic scheduling algorithm, the ACO algorithm is suitable for solving NP-hard problems. Its solution process for the classical TSP is described in detail below.

A brief description of the traveling salesman problem (TSP): given a set of cities and the distances between each pair of cities, find the shortest route that visits every city exactly once and finally returns to the starting point. This is a classical NP combinatorial optimization problem, whose mathematical model and solution process were first formulated by Dantzig et al. Enumeration is one possible solution approach, but it has a complexity of $O(n!)$, where $n$ denotes the number of cities; as the number of cities increases, the problem cannot be solved in polynomial time this way.

For ease of understanding, the problem can be formalized as a graph $G(N, E)$, where $N$ denotes the cities and $E$ the edges connecting them. Suppose there are $n$ cities traversed by $m$ ants, subject to the following conditions.

The ants release pheromones during their search and this particular chemical can remain on the path for some time.

Each ant selects the target city for the next hop with probability based on the concentration of the pheromone, and the concentration of the pheromone is proportional to the probability of selection.

Each ant visits each city only once, bounded by a taboo table.

For an ant $k$ ($k \in [1, m]$), let its tabu list during the search be $tabu_k$, which records the cities the ant has already visited, with $tabu_k[s]$ denoting the $s$th city visited by ant $k$ on its journey. When the ant reaches a city node, it calculates the state transition probability in each direction according to the pheromone strength and heuristic information on each legal path, and decides the direction of the next hop based on these probabilities. The specific formula is as follows:
$$p_{ij}^{k}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{s \in allowed_k} [\tau_{is}(t)]^{\alpha}\,[\eta_{is}]^{\beta}} & \text{if } j \in allowed_k \\ 0 & \text{otherwise} \end{cases}$$
where $\tau_{ij}(t)$ is the pheromone concentration on edge $(i, j)$ at time $t$, $\eta_{ij}$ is the heuristic information (typically the reciprocal of the distance $d_{ij}$), $\alpha$ and $\beta$ weight the relative importance of pheromone and heuristic information, and $allowed_k$ is the set of cities ant $k$ may still visit.
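As an illustration of the transition rule above, here is a minimal Java sketch (the names tau, eta, and allowed are assumptions, not the paper's code) that computes the probability of ant $k$ moving from city $i$ to city $j$.

```java
import java.util.List;

// Illustrative sketch of the ACO state transition rule (not the paper's code).
public class AntTransition {
    /** Probability that ant k moves from city i to city j, given its allowed set. */
    static double transitionProbability(int i, int j, double[][] tau, double[][] eta,
                                        double alpha, double beta, List<Integer> allowed) {
        if (!allowed.contains(j)) return 0.0;
        double numerator = Math.pow(tau[i][j], alpha) * Math.pow(eta[i][j], beta);
        double denominator = 0.0;
        for (int s : allowed) {
            denominator += Math.pow(tau[i][s], alpha) * Math.pow(eta[i][s], beta);
        }
        return numerator / denominator;
    }
}
```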

When all the ants have completed one traversal, i.e., after one full path-construction cycle, the pheromone values are updated according to the paths found, using the following formulas:
$$\tau_{ij}(t') = \rho \times \tau_{ij}(t) + \Delta\tau_{ij}(t)$$
$$\Delta\tau_{ij}(t) = \sum_{k=1}^{m} \Delta\tau_{ij}^{k}(t)$$
where $\rho$ is the pheromone retention coefficient and $\Delta\tau_{ij}^{k}(t)$ is the pheromone deposited on edge $(i, j)$ by ant $k$ in this iteration.

There are three main pheromone updating models: the ant-quantity model, the ant-density model, and the ant-cycle model.

Ant-quantity model:
$$\Delta\tau_{ij}^{k}(t) = \begin{cases} \dfrac{const}{d_{ij}} & \text{if ant } k \text{ passes through edge } (i, j) \text{ in this search} \\ 0 & \text{otherwise} \end{cases}$$

Ant-density model:
$$\Delta\tau_{ij}^{k}(t) = \begin{cases} const & \text{if ant } k \text{ passes through edge } (i, j) \text{ in this search} \\ 0 & \text{otherwise} \end{cases}$$

Ant-cycle model:
$$\Delta\tau_{ij}^{k}(t) = \begin{cases} \dfrac{const}{L_k} & \text{if ant } k \text{ passes through edge } (i, j) \text{ in this search} \\ 0 & \text{otherwise} \end{cases}$$

where $const$ is a constant factor and $L_k$ denotes the total length of ant $k$'s search path in this iteration.

Compared with the previous two strategies, the ant-cycle model uses the full length of ant $k$'s search path in the current iteration as the pheromone update factor; that is, it is a global information update method that better reflects the influence of the overall path length. Owing to the positive feedback mechanism of the pheromone, this approach speeds up the convergence of the algorithm, and we therefore adopt the ant-cycle model as the pheromone updating strategy.
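A hedged Java sketch of the pheromone update with the ant-cycle model follows; the tour representation and the names tours, tourLength, rho, and constQ are illustrative assumptions, not the paper's implementation.

```java
// Sketch of the pheromone update with the ant-cycle model (illustrative names).
public class PheromoneUpdate {
    /**
     * tau'(i,j) = rho * tau(i,j) + sum_k dTau_k(i,j),
     * where dTau_k(i,j) = const / L_k if ant k used edge (i,j), else 0.
     * tours[k] is the city sequence of ant k, tourLength[k] its total length L_k.
     */
    static void update(double[][] tau, int[][] tours, double[] tourLength,
                       double rho, double constQ) {
        int n = tau.length;
        // retention (evaporation of the complementary part)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                tau[i][j] *= rho;
        // deposit along each ant's tour
        for (int k = 0; k < tours.length; k++) {
            double deposit = constQ / tourLength[k];
            int[] tour = tours[k];
            for (int s = 0; s < tour.length - 1; s++) {
                int i = tour[s], j = tour[s + 1];
                tau[i][j] += deposit;
                tau[j][i] += deposit; // symmetric TSP
            }
        }
    }
}
```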

Optimal Resource Allocation Based on the Simulated Annealing-Optimized Ant Colony Algorithm

The resource allocation problem in the cloud computing environment is still a combinatorial optimization problem. In the cloud environment there are many factors influencing resource allocation, so it is necessary to maintain population diversity during the iterative process and to expand the search space to a certain extent, which helps prevent premature convergence. For this reason, this paper introduces the simulated annealing algorithm to optimize the ant colony algorithm and proposes a new improved algorithm (HGAACO) to optimize resource allocation in the innovation and entrepreneurship education platform.

Simulated annealing algorithm

The specific steps of the simulated annealing (SA) algorithm can be divided into the following three parts [25]:

Generation of a new solution X′: a new solution is generated in the solution space by a generating function. The most common method is a simple random transformation of the current solution X, such as randomly replacing some of its elements; this keeps later calculations convenient and reduces the transformation time.

Deciding whether the new solution X′ is accepted: acceptance is decided by two aspects, one being randomness and the other the difference in the objective function Δf, usually computed as the increment of the objective function between the original solution X and the new solution X′. The Metropolis acceptance criterion is the most commonly used acceptance rule and is introduced in detail in the next subsection.

Handling the new solution X′: there are two cases, depending on whether X′ is accepted. If X′ is accepted, it replaces the current solution X, the value of the objective function is updated accordingly, and the next iteration is built on X′; at this point one iteration for the current solution has been completed. If X′ is not accepted, the next iteration simply continues from X, meaning that this perturbation has no effect on the result. The algorithm ends when the termination condition is met.

The Metropolis acceptance criterion decides whether to accept the new state as the solution of one iteration [26]. Let the initial solution be X and the randomly transformed new solution be X′; the energy of the initial solution is denoted E(X) and that of the new solution E(X′). The energy difference is ΔE = E(X′) − E(X). If ΔE < 0, the new solution X′ is accepted and becomes the initial state of the next annealing iteration. If ΔE > 0, a random number in the interval (0, 1) is generated and the acceptance probability p of the new state is computed; if p > rand(0, 1), the new solution X′ is still accepted, otherwise X′ is discarded and the initial solution X of this iteration is kept as the current state. The transition probability is:
$$p = \begin{cases} \exp\left(-\dfrac{E(X') - E(X)}{T}\right) & E(X') > E(X) \\ 1 & E(X') \le E(X) \end{cases}$$
where $T$ is the current temperature.
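The Metropolis rule above can be sketched as follows; this is an illustrative Java fragment rather than the paper's implementation, and the method name accept is an assumption.

```java
import java.util.Random;

// Sketch of the Metropolis acceptance criterion described above (illustrative only).
public class Metropolis {
    private static final Random RNG = new Random();

    /** Returns true if the new solution with energy eNew is accepted at temperature T. */
    static boolean accept(double eCurrent, double eNew, double T) {
        double deltaE = eNew - eCurrent;
        if (deltaE <= 0) return true;        // better (or equal) solution: always accept
        double p = Math.exp(-deltaE / T);    // acceptance probability for a worse solution
        return p > RNG.nextDouble();         // accept the worse solution with probability p
    }
}
```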

Characteristics of the simulated annealing algorithm: because the algorithm always accepts, with a certain probability, suboptimal solutions that are worse than the current solution, it enlarges the solution space explored, preserves diversity to some extent, and helps the algorithm avoid getting trapped in local optima; the design of the algorithm is also relatively simple. Of course, the SA algorithm also has shortcomings: it cannot steer the search in the most effective direction and involves a great deal of randomness, so its convergence is relatively slow. Considering these advantages and disadvantages, together with the fact that the GAAA algorithm needs a larger solution space in its later stages, the Metropolis acceptance mechanism of the SA algorithm is introduced to propose a new improved algorithm (HGAACO), which is applied to the resource allocation problem in the cloud computing environment.

HGAACO improved algorithm

Task and node representation

Assume there are $n$ tasks to be executed, denoted $T = \{T_1, T_2, \ldots, T_n\}$, and $m$ resources, denoted $V = \{V_1, V_2, \ldots, V_m\}$. Each resource node is described by six attributes and represented as a six-tuple $V_i = \langle VID_i, VC_i, VB_i, VR_i, VF_i, VP_i \rangle$, where $VID_i$ is the node number, $VC_i$ the node's CPU processing capacity, $VB_i$ the node bandwidth, $VR_i$ the node memory, $VF_i$ the node failure rate, and $VP_i$ the node usage price. Similarly, each task $T_i$ is a six-tuple describing the amount of resources the task expects. The execution time of task $i$ on resource $j$ is represented by a matrix $S$, where $S_{ij}$ is the execution time of $T_i$ on $V_j$, $1 \le i \le n$, $1 \le j \le m$; the resource allocation matrix is $E$, where $E_{ij} = 1$ if $T_i$ is allocated to $V_j$ for execution and $E_{ij} = 0$ otherwise, $1 \le i \le n$, $1 \le j \le m$.
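For illustration only, the six-tuple representation of nodes and tasks and the matrices S and E described above could be held in data structures such as the following Java sketch (all names are hypothetical).

```java
// Illustrative data structures for the six-tuple node/task representation above.
public class CloudModel {
    /** Resource node V_i: number, CPU capacity, bandwidth, memory, failure rate, price. */
    static class ResourceNode {
        int id;             // VID_i
        double cpu;         // VC_i
        double bandwidth;   // VB_i
        double memory;      // VR_i
        double failureRate; // VF_i
        double price;       // VP_i
    }

    /** Task T_i: the resources it expects, expressed with the same six attributes. */
    static class Task {
        int id;
        double cpu, bandwidth, memory, failureRate, price;
    }

    double[][] execTime;  // S[i][j]: execution time of task i on node j
    int[][] assignment;   // E[i][j]: 1 if task i runs on node j, else 0
}
```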

User Task and Resource Node Matching Factors

A matching factor is introduced to represent the degree of matching between each task and each resource node. The larger the difference between the task's expected values and the node's resource values, the less suitable it is to assign the task to that resource; the smaller the difference, the more suitable. The matching factor between the $i$th task and the $j$th resource node is defined as:
$$Match_{ij} = 1 \Big/ \sqrt{\sum_{m=1}^{5} (Tm_i - Vm_j)^2}$$

where $Tm_i$ is the expectation of task $i$ for the $m$th attribute, obtained from the six-tuple $T$, and $Vm_j$ is the corresponding parameter value of resource node $j$, obtained from the six-tuple $V$. A larger $Match_{ij}$ means that task $i$ prefers to be executed on resource $j$, i.e., that the resource node can provide better service for task $i$.
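A minimal Java sketch of the matching factor computation, assuming the five compared attribute values are passed in as arrays (names are illustrative):

```java
// Sketch of the matching factor Match_ij between task i and resource node j.
public class Matching {
    /**
     * taskExpect and nodeValue hold the attribute values compared in the formula
     * (the five resource-related dimensions of the six-tuples above).
     */
    static double matchFactor(double[] taskExpect, double[] nodeValue) {
        double sum = 0.0;
        for (int m = 0; m < taskExpect.length; m++) {
            double diff = taskExpect[m] - nodeValue[m];
            sum += diff * diff;
        }
        // note: a perfect match (zero difference) would need special handling
        return 1.0 / Math.sqrt(sum);   // larger value = better match
    }
}
```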

Load Balancing Degree and Computing Power of Resource Nodes

Let $X$ be a mapping sequence of tasks to resources; the load balancing degree is then defined as:
$$Load(X) = \sqrt{\sum_{i=1}^{n} \left(1 - C_1 \frac{Y_i}{Lv_i}\right)^2}$$

where $Y_i$ denotes the number of tasks assigned to resource node $i$ and $Lv_i$ denotes the computational power of the resource node, $Lv_i = a \cdot c(V_i) + c \cdot b(V_i) + b \cdot r(V_i)$, where $c(V_i)$, $b(V_i)$, and $r(V_i)$ denote the node's processing power, memory, and bandwidth, respectively, and the parameters $a$, $b$, and $c$ represent the importance of each item in the computational power. $C_1$ is a normalization parameter satisfying $0 < C_1 \frac{Y_i}{Lv_i} < 1$.
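The load balancing degree and node computing power defined above can be sketched as follows; the weight pairing in computingPower mirrors the formula exactly as written in the paper, and all method and parameter names are illustrative.

```java
// Sketch of the load balancing degree Load(X) and node computing power Lv_i.
public class LoadBalance {
    /** Lv_i = a*c(V_i) + c*b(V_i) + b*r(V_i), following the formula as written above. */
    static double computingPower(double cpu, double memory, double bandwidth,
                                 double a, double b, double c) {
        return a * cpu + c * memory + b * bandwidth;
    }

    /** Load(X) = sqrt( sum_i (1 - C1 * Y_i / Lv_i)^2 ). */
    static double load(int[] tasksPerNode, double[] lv, double c1) {
        double sum = 0.0;
        for (int i = 0; i < tasksPerNode.length; i++) {
            double term = 1.0 - c1 * tasksPerNode[i] / lv[i];
            sum += term * term;
        }
        return Math.sqrt(sum);
    }
}
```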

Genetic Coding

The genetic algorithm (GA) performs operations such as crossover and mutation on the ordinal numbers of paths when solving the TSP. In the resource allocation problem, the decision variables of the problem must be encoded; existing approaches include binary encoding, real-number encoding, matrix encoding, and tree encoding. In this paper, in order to capture the specifics of resource allocation in the cloud computing environment, real-number encoding is used: a one-dimensional string represents the task allocation, and $Str(i) = a$ denotes that task $T_i$ is assigned to resource $V_a$ for execution. For example, for the allocation scheme
$$AssignM: \quad V_1: T_1, T_3, T_5 \qquad V_2: T_2, T_4, T_6$$
the corresponding encoding is $(1\,2\,1\,2\,1\,2)$, meaning that tasks $T_1, T_3, T_5$ are assigned to $V_1$ and $T_2, T_4, T_6$ to $V_2$, so resource node 1 appears at positions 1, 3, 5 and resource node 2 at positions 2, 4, 6.
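A tiny illustrative Java example of this real-number encoding, decoding the string (1 2 1 2 1 2) from the AssignM example above (class and variable names are assumptions):

```java
// Sketch of the real-number (one-dimensional string) encoding: Str(i) = a means
// task T_i runs on resource V_a.
public class Encoding {
    public static void main(String[] args) {
        // encoding (1 2 1 2 1 2): tasks T1, T3, T5 -> V1 ; tasks T2, T4, T6 -> V2
        int[] str = {1, 2, 1, 2, 1, 2};
        for (int i = 0; i < str.length; i++) {
            System.out.println("Task T" + (i + 1) + " -> resource V" + str[i]);
        }
    }
}
```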

Definition of fitness function

The fitness function represents how good an individual in the population is. The genetic algorithm seeks to maximize the fitness value, while the cloud computing resource allocation problem seeks to minimize the total cost, so the objective function cannot be used directly as the fitness function. In this paper, the fitness function is defined as follows:
$$f(i) = \left[\frac{\sum_{k=1}^{M} \left(t(k) - t_{\min} + 1\right)}{M \cdot \left(t(i) - t_{\min} + 1\right)}\right]^2, \quad i = 1, 2, \ldots, M$$

where $M$ is the population size, $t_{\min}$ is the minimum completion time among the task allocation schemes, and $t(i)$ is the maximum completion time of the $i$th task allocation scheme. From the formula above, the size of $t(i)$ determines the size of $f(i)$: a smaller $t(i)$ means a shorter completion time, and in that case the fitness value is larger, i.e., the individual is more likely to be selected for the next genetic operation. To prevent the fitness values of different individuals from being too close to each other, the formula squares the ratio, which enhances the distinction between individuals.
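A sketch of the fitness computation follows; the grouping of the denominator as M·(t(i) − t_min + 1) is a reading of the formula rather than something the source states explicitly, and all names are illustrative.

```java
// Sketch of the fitness function f(i) defined above (denominator grouping assumed).
public class Fitness {
    /** t[i] is the maximum completion time of the i-th allocation scheme. */
    static double fitness(double[] t, int i) {
        int M = t.length;
        double tMin = Double.MAX_VALUE;
        for (double v : t) tMin = Math.min(tMin, v);

        double numerator = 0.0;
        for (double v : t) numerator += (v - tMin + 1);

        double denominator = M * (t[i] - tMin + 1);
        double ratio = numerator / denominator;
        return ratio * ratio;   // squared to widen gaps between close fitness values
    }
}
```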

Definition of Objective Function

Task allocation in the cloud environment refers to assigning the tasks submitted by users to the appropriate resources so as to achieve optimal scheduling: the execution time of the assigned tasks should be minimized as far as possible, and the load on the resources should be as balanced as possible. In this paper, the time span and the load balancing degree are therefore combined as the evaluation criteria, and the objective function is:
$$F(X) = Load(X) \times Makespan(X)$$

where $Makespan(X) = \sum_{i=1}^{n} \sum_{j=1}^{m} S_{ij} \times E_{ij}$ denotes the task time span, $S_{ij}$ denotes the execution time of $T_i$ on $V_j$, and $E_{ij}$ denotes whether $T_i$ is assigned to $V_j$ for execution ($E_{ij} = 1$ if assigned, $E_{ij} = 0$ otherwise), with $1 \le i \le n$, $1 \le j \le m$.
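A minimal Java sketch of the objective F(X) = Load(X) × Makespan(X); S and E follow the definitions above, and the method names are illustrative assumptions.

```java
// Sketch of the objective F(X) = Load(X) * Makespan(X) used to evaluate a mapping.
public class Objective {
    /** Makespan(X) = sum_i sum_j S[i][j] * E[i][j]. */
    static double makespan(double[][] S, int[][] E) {
        double total = 0.0;
        for (int i = 0; i < S.length; i++)
            for (int j = 0; j < S[i].length; j++)
                total += S[i][j] * E[i][j];
        return total;
    }

    /** F(X): the load balancing degree multiplied by the time span. */
    static double objective(double load, double[][] S, int[][] E) {
        return load * makespan(S, E);
    }
}
```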

Simulation experiment on the application of innovation and entrepreneurship education platform

In this chapter, CloudSim, a cloud computing simulation tool, will be introduced as a tool for simulation experiments, and simulation experiments of innovation and entrepreneurship education platform applications will be carried out on the CloudSim platform.

Overview of CloudSim simulation platform

CloudSim is a general-purpose, extensible open-source simulation framework that evaluates the performance and energy consumption of a system during scheduling under different scheduling goals and scheduling strategies. Based on this cloud computing simulator, users can extend certain classes according to their own requirements to implement their own scheduling strategies.

CloudSim Architecture

The CloudSim platform has a layered architecture consisting of the UserCode layer, the CloudSim layer, and the CloudSim core simulation engine.

UserCode layer, i.e., the user code layer, allows users to customize their own services and scheduling strategies according to their needs.

The CloudSim layer provides the management of all interfaces in the simulation platform and offers modeling and simulation functions based on data center virtualization technology and virtualized clouds. In this layer the cloud service provider receives the user's data, calls a suitable allocation strategy, and verifies the effectiveness of the strategy.

The CloudSim core simulation engine provides resource monitoring, host-to-virtual machine mapping functions, and the implementation of CloudSim events and implementation classes, and is the bottom and core of the CloudSim layered architecture.

Simulation flow of CloudSim

The simulation experiment process of CloudSim includes the following steps (a minimal code sketch follows the list).

Initialization process: Initialize the environment of CloudSim.

Create the data center.

Create the data center broker (agent).

Create the virtual machine (VM) list.

Create the task (cloudlet) list.

Bind cloud tasks to VMs according to a specific algorithm.

Start the simulation.

End the simulation and print the result.
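The following is a minimal, self-contained sketch of these steps using CloudSim 3.0.2's standard API (the version listed in Table 1). All parameter values (MIPS, RAM, task lengths, and so on) are illustrative and do not reflect the paper's experimental configuration, and the explicit binding step merely stands in for where a scheduling algorithm such as HGAACO would decide the task-to-VM mapping.

```java
// A minimal CloudSim 3.0.2 sketch of the simulation steps listed above.
import java.util.ArrayList;
import java.util.Calendar;
import java.util.List;

import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;

public class SimulationSketch {
    public static void main(String[] args) throws Exception {
        // 1. Initialise the CloudSim environment
        CloudSim.init(1, Calendar.getInstance(), false);

        // 2. Create a data center with one host
        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));           // 1000 MIPS core
        List<Host> hostList = new ArrayList<>();
        hostList.add(new Host(0, new RamProvisionerSimple(4096),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));
        DatacenterCharacteristics ch = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", ch,
                new VmAllocationPolicySimple(hostList), new ArrayList<Storage>(), 0);

        // 3. Create the data center broker (agent)
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        int brokerId = broker.getId();

        // 4. Create the VM list
        List<Vm> vmList = new ArrayList<>();
        vmList.add(new Vm(0, brokerId, 500, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared()));

        // 5. Create the task (cloudlet) list
        List<Cloudlet> cloudletList = new ArrayList<>();
        UtilizationModel um = new UtilizationModelFull();
        Cloudlet task = new Cloudlet(0, 40000, 1, 300, 300, um, um, um);
        task.setUserId(brokerId);
        cloudletList.add(task);

        broker.submitVmList(vmList);
        broker.submitCloudletList(cloudletList);

        // 6. Bind cloudlets to VMs (a scheduling algorithm would decide this mapping)
        broker.bindCloudletToVm(0, 0);

        // 7-8. Run the simulation and print the result
        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        for (Cloudlet c : broker.getCloudletReceivedList()) {
            System.out.println("Cloudlet " + c.getCloudletId()
                    + " finished at " + c.getFinishTime());
        }
    }
}
```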

CloudSim experiment environment configuration

The software and hardware environment used in this simulation experiment is shown in Table 1.

Table 1. Experimental environment configuration

Software environment: Windows 7, jdk1.8.0, CloudSim 3.0.2
Hardware environment (CPU): Intel(R) Core(TM) i5-4590 @ 3.30GHz
Hardware environment (Memory): 4 GB
Analysis of simulation experiment results

The tasks simulated and scheduled in this paper form 10 groups of experimental data ranging from 30 to 300 tasks, in increments of 30. Using these 10 groups of data, the simulated annealing-based improved ant colony algorithm proposed in this paper (HGAACO) is compared with the basic ant colony algorithm (ACO), the genetic-algorithm-based ant colony algorithm (GAAA), and the round-robin (RR) algorithm as resource allocation strategies. Three experiments compare the four algorithms in terms of task execution time, execution cost, and load balancing; each experiment is run 10 times and the average value is taken.

Task execution time

The largest task completion time in the output of the CloudSim simulation tool is taken as the execution time of the simulation experiment; the experimental results are shown in Figure 2.

Figure 2.

Comparison of execution times when the number of tasks is different

The results of the experiment are shown in Fig. 2, which gives the running time of the four algorithms under different numbers of tasks. The execution times of the algorithms do not differ much when the number of tasks is small; in particular, when the number of tasks is 30, the four algorithms differ very little. However, when the number of tasks exceeds 30, the execution times of the HGAACO, ACO, and GAAA algorithms are significantly lower than that of the round-robin algorithm. Comparing the ACO, GAAA, and HGAACO algorithms, the HGAACO algorithm of this paper has the shortest execution time. Taking the case of 300 tasks as an example, the execution time of the HGAACO algorithm is only 13 ms, while the round-robin algorithm, the ACO algorithm, and the GAAA algorithm reach 33 ms, 13.5 ms, and 27 ms, respectively. Therefore, the HGAACO algorithm shortens task completion time and is suitable for large-scale task scheduling in cloud environments.

Task implementation costs

The cost model is built in CloudSim, and its calculated output is used as the execution cost of this simulation experiment. The execution costs of the four scheduling algorithms under different numbers of tasks are shown in Fig. 3. The cost differences among the four algorithms are relatively stable, and when the number of tasks is 30 the differences are smallest. As the number of tasks gradually increases, the cost of the GAAA algorithm tends toward that of the round-robin algorithm, while the costs of the ACO algorithm and the proposed HGAACO algorithm are significantly lower than those of the GAAA and round-robin algorithms.

Figure 3.

Comparison of execution costs when the number of tasks is different

Load imbalance values for virtual machines

In this section, the VM load imbalance value DI is used to measure the load. The numbers of tasks scheduled in the experiment are divided into six groups of 50, 100, 150, 200, 250, and 300, and the load imbalance values of the four algorithms are then computed. The load imbalance values of the four scheduling algorithms under different numbers of tasks are shown in Fig. 4. As the number of tasks increases, the DI values of all four algorithms decrease gradually, indicating that the resource allocation of the system becomes more and more balanced. The figure also shows that the HGAACO algorithm significantly reduces the load imbalance value compared with the round-robin algorithm, indicating that it has a significant effect on improving the load balancing of the system. When users are more concerned about load balancing, the proposed algorithm can clearly achieve this purpose.

Figure 4.

Comparison of load values when the number of tasks is different

Resource Allocation Time and Load Balancing

To further illustrate the optimization achieved by the HGAACO algorithm in resource allocation scheduling time and the improvement in load balancing, the GAAA algorithm and the ACO algorithm are simulated under the same resource and task conditions and compared with the HGAACO algorithm using the same algorithm parameter values. A resource allocation scheduling task ends when the specified number of iterations is completed, or when the gap between the task completion times of two successive iterations is within 0.1%.

Comparison of resource load and resource scheduling task execution

According to the results of task allocation on the resources, the load of each resource is organized as shown in Figure 5. In the figure, the horizontal coordinates 1~10 represent resources 1 to 10 in turn, and the vertical coordinate is the number of cloud computing resource scheduling tasks. The number of cloud computing tasks assigned by the HGAACO algorithm is higher than that of the ACO algorithm on resources 1~4, and the two are the same on resource 5. After resource 5, the number of tasks assigned by the HGAACO algorithm is always lower than that of the ACO algorithm and also lower than that of the GAAA algorithm.

Figure 5.

Resource load situation

Based on the resource load, the total time for resource scheduling task execution is calculated, as shown in Table 2. Ordered by time, HGAACO algorithm (73.6 ms) < GAAA algorithm (83.6 ms) < ACO algorithm (92.8 ms). Clearly, the HGAACO algorithm takes less time for resource allocation scheduling and has better performance.

Table 2. Time consumed by resource allocation scheduling (ms)

GAAA: 83.6
ACO: 92.8
HGAACO: 73.6

Comparison of resource load balancing and task execution time span

The number of resource provisioning tasks in this section is gradually increased from 20 to 100, and the cloud computing resource nodes are still the 10 nodes established in the previous section. The proposed HGAACO algorithm, the GAAA algorithm, and the ACO algorithm are each run 20 times, and the average of the 20 runs is taken. The standard deviation of the allocation results of the three algorithms under different numbers of resource allocation tasks is calculated, as shown in Fig. 6, where the horizontal coordinate indicates the number of tasks and the vertical coordinate the standard deviation of the allocation results. The figure shows that the HGAACO algorithm allocates tasks more evenly than the GAAA and ACO algorithms in cloud computing task scheduling, and as the number of user-submitted tasks increases, the HGAACO allocation becomes even more even; the standard deviation of its allocation results is always less than 5, demonstrating better performance on the task load balancing problem.

Figure 6.

Standard deviation of distribution results

The execution time span of the different algorithms on the resource allocation tasks is calculated, and the results are shown in Fig. 7, where the horizontal coordinate represents the number of tasks on the cloud platform and the vertical coordinate the task execution time of the cloud platform system. Under the same number of tasks, the execution time relationship is HGAACO algorithm < GAAA algorithm < ACO algorithm, and as the number of tasks increases, the advantage of the HGAACO algorithm becomes more and more obvious. When the number of resource allocation tasks is 20, 40, 60, 80, and 100, the execution time of the HGAACO algorithm is only 75 ms, 141 ms, 199 ms, 268 ms, and 322 ms, respectively. The execution time grows with the number of resource allocation tasks, but compared with the GAAA and ACO algorithms, the HGAACO algorithm always maintains a lower execution time.

Figure 7.

Comparison of task execution time span

Conclusion

Combining cloud computing technology, this paper constructs an innovation and entrepreneurship education platform, improves the ant colony algorithm using the simulated annealing algorithm, and realizes the allocation of cloud computing resources.

With the help of the cloud simulation tool CloudSim, simulation experiments on the application of the innovation and entrepreneurship education platform are carried out. In terms of task execution time and execution cost, the task execution time of the HGAACO algorithm is shorter than those of the round-robin, ACO, and GAAA algorithms, and its execution cost is significantly lower than those of the GAAA and round-robin algorithms and slightly lower than that of the ACO algorithm. As for the VM load imbalance value, the HGAACO algorithm significantly reduces the load imbalance value compared with the round-robin algorithm. For the scheduling allocation and load balancing of cloud computing resources, the total resource scheduling task execution time of the HGAACO algorithm is 73.6 ms, lower than the 83.6 ms of the GAAA algorithm and the 92.8 ms of the ACO algorithm, and the standard deviation of its allocation results is always less than 5, showing a more even allocation of cloud computing resources. Under the same number of resource allocation tasks, the execution time relationship is HGAACO algorithm < GAAA algorithm < ACO algorithm, and the HGAACO algorithm always maintains a lower execution time.
