19 results for "Zhu, Xiaomin"
Search Results
2. A scalable approach for content based image retrieval in cloud datacenter
- Author
-
Liao, Jianxin, Yang, Di, Li, Tonghong, Wang, Jingyu, Qi, Qi, and Zhu, Xiaomin
- Published
- 2014
- Full Text
- View/download PDF
3. Elastic Resource Provisioning Using Data Clustering in Cloud Service Platform.
- Author
-
Fei, Bowen, Zhu, Xiaomin, Liu, Daqian, Chen, Junjie, Bao, Weidong, and Liu, Ling
- Abstract
Currently, cloud computing has received great attention in commerce and scientific research due to its flexibility and strong data processing capability. However, the types of tasks display an upward trend as service demands grow, and the different types of tasks arrive at the system without regularity. Moreover, the resources deployed in the cloud cannot be flexibly provisioned in the face of obvious workload fluctuations. In this article, we present a method of elastic resource provisioning using data clustering in a cloud service platform. The framework of the proposed method consists of three core components: task clustering, prediction of the amount of tasks in each cluster, and dynamic resource provisioning and scheduling. In workload classification, we propose a clustering ensemble method, which utilizes a novel distance decision-making method to obtain the final results. Our method can effectively partition the arriving tasks into several clusters based on similarity among tasks. For each cluster, we forecast the amount of tasks arriving at the next moment with a time-series prediction model to provide a reference for the follow-up resource provisioning. Afterwards, an energy-saving resource provisioning method is designed to dynamically provide resources for tasks in each cluster to meet their performance requirements. We implement the experiments on the Google cloud traces dataset, and the results show that our method achieves 92.3 percent, 91.2 percent, and 3679.2 kW·h respectively in terms of guarantee ratio, resource utilization, and total energy consumption, which demonstrates the effectiveness of the proposed method for dynamic resource provisioning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Uncertainty-Aware Online Scheduling for Real-Time Workflows in Cloud Service Environment.
- Author
-
Chen, Huangke, Zhu, Xiaomin, Liu, Guipeng, and Pedrycz, Witold
- Abstract
Scheduling workflows in cloud service environments has attracted great enthusiasm, and various approaches have been reported to date. However, these approaches often ignore the uncertainties in the scheduling environment, such as uncertain task start/execution/finish times, uncertain data transfer times among tasks, and the sudden arrival of new workflows. Ignoring these uncertain factors often leads to the violation of workflow deadlines and increases the service renting costs of executing workflows. This study is devoted to improving the performance of cloud service platforms by minimizing uncertainty propagation when scheduling workflow applications that have both uncertain task execution times and data transfer times. To be specific, a novel scheduling architecture is designed to control the count of workflow tasks directly waiting on each service instance (e.g., virtual machine or container). Once a task is completed, its start/execution/finish times are available, which means its uncertainties disappear and will not affect the subsequent waiting tasks on the same service instance. Thus, controlling the count of waiting tasks on service instances can prohibit the propagation of uncertainties. Based on this architecture, we develop an unceRtainty-aware Online Scheduling Algorithm (ROSA) to schedule dynamic and multiple workflows with deadlines. The proposed ROSA skillfully integrates both proactive and reactive strategies. During the execution of the generated baseline schedules, the reactive strategy in ROSA is dynamically called to produce new proactive baseline schedules for dealing with uncertainties. Then, on the basis of real-world workflow traces, five groups of simulation experiments are carried out to compare ROSA with five typical algorithms.
The comparison results reveal that ROSA performs better than the five compared algorithms with respect to costs (up to 56 percent), deviation (up to 70 percent), resource utilization (up to 37 percent), and fairness (up to 37 percent). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. Cloud Service Platform for Julia Programming on Supercomputer
- Author
-
Zhu Xiaomin, Liu Wei, Duan Shufeng, Liu Renfen, and Zhang Changyou
- Subjects
Software, General Computer Science, Beijing, business.industry, Computer science, Library science, Center (algebra and category theory), Cloud computing, business, Supercomputer, China - Abstract
(1) Institute of Software, Chinese Academy of Science, Beijing 100190, China; (2) Sifang institute, Shijiazhuang Tiedao University, Shijiazhuang 051132, China; (3) Shijiazhuang Tiedao University, Shijiazhuang 050043, China; (4) National Super Computer Center, Jinan 250101, China
- Published
- 2014
- Full Text
- View/download PDF
6. Minimal Fault-Tolerant Coverage of Controllers in IaaS Datacenters.
- Author
-
Xie, Junjie, Guo, Deke, Zhu, Xiaomin, Ren, Bangbang, and Chen, Honghui
- Abstract
Large-scale datacenters are the key infrastructure of cloud computing. Inside a datacenter, a large number of servers are interconnected using a specific datacenter network to deliver infrastructure as a service (IaaS) for tenants. To realize novel cloud applications like network virtualization and network isolation among tenants, the principle of software-defined networking (SDN) has been applied to datacenters. In this setting, multiple distributed controllers are deployed to offer a control plane over the entire datacenter to efficiently manage network usage. Despite such efforts, cloud datacenters still lack a scalable and resilient control plane. Consequently, this paper systematically studies the coverage problem of controllers, which aims to cover all network devices using the least number of controllers. More precisely, we tackle this essential problem from three aspects: the minimal coverage, the minimal fault-tolerant coverage, and the minimal communication overhead among controllers. After modelling and analyzing these three problems, we design efficient approaches to approximate the optimal solution, respectively. Extensive evaluation results indicate that our approaches can significantly reduce the number of required controllers, improve the fault-tolerant capability of the control plane and reduce the communication overhead of state synchronization among controllers. The design methodologies proposed in this paper can be applied to cloud datacenters with other networking structures after minimal modifications. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
7. A decision-making methodology for the cloud-based recycling service of smart products: a robot vacuum cleaner case study.
- Author
-
Jiang, Hui, Yi, Jianjun, Zhou, Kai, and Zhu, Xiaomin
- Subjects
DECISION making, OBSOLETE digital media, DEVELOPING countries, CLOUD computing, VIRTUAL machine systems, VACUUM cleaners - Abstract
The smart product represents a new type of product containing both software and hardware components. Many obsolete smart products need to be recycled each year. In developing countries, the inappropriate treatment of obsolete smart products contaminates the natural environment. This study aims to coordinate the abilities of social recycling organisations and improve the recovery rate of waste smart products. A framework of the recycling service for smart products is proposed based on the concept of cloud manufacturing. In the framework, a smart product ontology is designed to integrate the lifecycle data generated by the smart product. To decide the recycling choice for the components of smart products with the lifecycle data, a fuzzy-rule-based decision method is proposed. In the cloud-based recycling service, the abilities of different recycling factories are virtualised as a pool of recycling resources. The virtualisation is achieved by the recycling resource ontology. In order to select optimal recycling resources and encapsulate the recycling service, a grey-relational-analysis-based method is proposed. A robot vacuum cleaner, which is a smart product, is adopted to demonstrate the proposed decision methods. The results show that the methods are effective for deciding the cloud-based recycling service for smart products. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
8. DEFT: Dynamic Fault-Tolerant Elastic scheduling for tasks with uncertain runtime in cloud.
- Author
-
Yan, Hui, Zhu, Xiaomin, Chen, Huangke, Guo, Hui, Zhou, Wen, and Bao, Weidong
- Subjects
FAULT tolerance (Engineering), CLOUD computing, FAULT-tolerant computing, SYSTEMS design, ESTIMATION theory, ALGORITHMS, COMPUTER scheduling - Abstract
With the widespread use of clouds, the reliability and efficiency of the cloud have become the main concerns of service providers and users. Thus, fault tolerance has become a hotspot in both industry and academia, especially for real-time applications. To achieve fault tolerance in the cloud, a great number of in-depth studies have been conducted. Nevertheless, in addressing the issue of fault tolerance, few studies have taken into account the uncertainty of task runtime, which is however more practical and really needs urgent attention. In this paper, we introduce uncertainty into our task runtime estimation model, and we propose a fault-tolerant task allocation mechanism that strategically uses two fault-tolerant task scheduling models while the uncertainty is considered. Moreover, we employ an overlapping mechanism to improve the resource utilization of the cloud. Based on the two mechanisms, we propose an innovative Dynamic Fault-Tolerant Elastic scheduling algorithm, DEFT, for scheduling real-time tasks in the cloud where system performance volatility should be considered. The purpose of DEFT is to achieve both fault tolerance and resource utilization efficiency. We compare DEFT with three baseline algorithms: NDRFT, DRFT, and NWDEFT. The results from our extensive experiments on the workload of the Google tracelogs show that DEFT can guarantee fault tolerance while achieving high resource utilization. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
9. Cost-Aware Big Data Processing Across Geo-Distributed Datacenters.
- Author
-
Xiao, Wenhua, Bao, Weidong, Zhu, Xiaomin, and Liu, Ling
- Subjects
BIG data, CLOUD computing, VIRTUAL machine systems, ONLINE algorithms, SERVER farms (Computer network management), MANAGEMENT - Abstract
With the globalization of services, organizations continuously produce large volumes of data that need to be analysed over geo-dispersed locations. The traditional centralized approach of moving all data to a single cluster is inefficient or infeasible due to limitations such as the scarcity of wide-area bandwidth and the low-latency requirement of data processing. Processing big data across geo-distributed datacenters has continued to gain popularity in recent years. However, managing distributed MapReduce computations across geo-distributed datacenters poses a number of technical challenges: how to allocate data among a selection of geo-distributed datacenters to reduce the communication cost, how to determine the Virtual Machine (VM) provisioning strategy that offers high performance and low cost, and what criteria should be used to select a datacenter as the final reducer for big data analytics jobs. In this paper, these challenges are addressed by balancing bandwidth cost, storage cost, computing cost, migration cost, and latency cost between the two MapReduce phases across datacenters. We formulate this complex cost optimization problem for data movement, resource provisioning and reducer selection as a joint stochastic integer nonlinear optimization problem by minimizing the five cost factors simultaneously. The Lyapunov framework is integrated into our study, and an efficient online algorithm that is able to minimize the long-term time-averaged operation cost is further designed. Theoretical analysis shows that our online algorithm can provide a near-optimum solution with a provable gap and can guarantee that the data processing can be completed within pre-defined bounded delays. Experiments on the WorldCup98 web site trace validate the theoretical analysis results and demonstrate that our approach is close to the offline optimum performance and superior to some representative approaches. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
10. Scheduling for Workflows with Security-Sensitive Intermediate Data by Selective Tasks Duplication in Clouds.
- Author
-
Chen, Huangke, Zhu, Xiaomin, Qiu, Dishan, Liu, Ling, and Du, Zhihui
- Subjects
CLOUD computing security measures, INFORMATION technology security, DATA protection, DISTRIBUTED computing, PARALLEL algorithms - Abstract
With the wide deployment of cloud computing in many business enterprises as well as science and engineering domains, high-quality security services are increasingly critical for processing workflow applications with sensitive intermediate data. Unfortunately, most existing workflow scheduling approaches disregard the security requirements of the intermediate data produced by workflows, and overlook the performance impact of the encryption time of intermediate data on the start of subsequent workflow tasks. Furthermore, the idle time slots on resources, resulting from data dependencies among workflow tasks, have not been adequately exploited to mitigate the impact of data encryption time on workflows’ makespans and monetary cost. To address these issues, this paper presents a novel task-scheduling framework for security-sensitive workflows with three novel features. First, we provide comprehensive theoretical analyses on how selectively duplicating a task’s predecessor tasks helps prevent both the data transmission time and encryption time from delaying the task’s start time. Then, we define workflow tasks’ latest finish time, and prove that tasks can be completed before their latest finish time by using the cheapest resources to reduce monetary cost without delaying tasks’ successors’ start times and workflows’ makespans. Based on these analyses, we devise a novel scheduling approach with selective tasks duplication, named SOLID, incorporating two important phases: 1) task scheduling with selectively duplicating predecessor tasks to idle time slots on resources; and 2) intermediate data encrypting by effectively exploiting tasks’ laxity time. We evaluate our solution approach through a rigorous performance evaluation study using both randomly generated workflows and some real-world workflow traces. Our results show that the proposed SOLID approach prevails over existing algorithms in terms of makespan, monetary costs and resource efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
11. Fault-Tolerant Scheduling for Real-Time Scientific Workflows with Elastic Resource Provisioning in Virtualized Clouds.
- Author
-
Zhu, Xiaomin, Wang, Ji, Guo, Hui, Zhu, Dakai, Yang, Laurence T., and Liu, Ling
- Subjects
FAULT-tolerant computing, CLOUD computing, GRID computing, WORKFLOW management systems, ALGORITHMS - Abstract
Clouds are becoming an important platform for scientific workflow applications. However, with many nodes being deployed in clouds, managing the reliability of resources becomes a critical issue, especially for real-time scientific workflow execution where deadlines should be satisfied. Therefore, fault tolerance in clouds is extremely essential. PB (primary backup) based scheduling is a popular technique for fault tolerance and has been used effectively in cluster and grid computing. However, applying this technique to real-time workflows in a virtualized cloud is much more complicated and has rarely been studied. In this paper, we address this problem. We first establish a real-time workflow fault-tolerant model that extends the traditional PB model by incorporating the cloud characteristics. Based on this model, we develop approaches for task allocation and message transmission to ensure faults can be tolerated during workflow execution. Finally, we propose a dynamic fault-tolerant scheduling algorithm, FASTER, for real-time workflows in the virtualized cloud. FASTER has three key features: 1) it employs a backward shifting method to make full use of idle resources and incorporates task overlapping and VM migration for high resource utilization; 2) it applies the vertical/horizontal scaling-up technique to quickly provision resources for a burst of workflows; and 3) it uses the vertical scaling-down scheme to avoid unnecessary and ineffective resource changes due to fluctuating workflow requests. We evaluate our FASTER algorithm with synthetic workflows and workflows collected from real scientific and business applications and compare it with six baseline algorithms. The experimental results demonstrate that FASTER can effectively improve resource utilization and schedulability even in the presence of node failures in virtualized clouds. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
12. Dynamic Request Redirection and Resource Provisioning for Cloud-Based Video Services under Heterogeneous Environment.
- Author
-
Xiao, Wenhua, Bao, Weidong, Zhu, Xiaomin, Wang, Chen, Chen, Lidong, and Yang, Laurence T.
- Subjects
CLOUD computing, WEB services, LYAPUNOV functions, MATHEMATICAL optimization, ALGORITHM research - Abstract
Cloud computing provides a new opportunity for Video Service Providers (VSPs) to run compute-intensive video applications in a cost-effective manner. Under this paradigm, a VSP may rent virtual machines (VMs) from multiple geo-distributed datacenters that are close to video requestors to run its services. As user demands are difficult to predict and the prices of VMs vary across time and region, optimizing the number of VMs of each type rented from datacenters located in different regions in a given time frame becomes essential to achieving cost effectiveness for VSPs. Meanwhile, it is equally important to guarantee users' Quality of Experience (QoE) with the rented VMs. In this paper, we give a systematic method called Dynamic Request Redirection and Resource Provisioning (DYRECEIVE) to address this problem. We formulate the problem as a stochastic optimization problem and design an online algorithm based on the Lyapunov optimization framework to solve it. Our method is able to minimize the long-term time-averaged cost of renting cloud resources while maintaining user QoE. Theoretical analysis shows that our online algorithm can produce a solution within an upper bound of the optimal solution achieved through offline computing. Extensive experiments show that our method is adaptive to request pattern changes over time and outperforms existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
13. ANGEL: Agent-Based Scheduling for Real-Time Tasks in Virtualized Clouds.
- Author
-
Zhu, Xiaomin, Chen, Chao, Yang, Laurence T., and Xiang, Yang
- Subjects
CLOUD computing, PRODUCTION scheduling, REAL-time computing, QUALITY of service, HEURISTIC algorithms - Abstract
The success of cloud computing makes an increasing number of real-time applications such as signal processing and weather forecasting run in the cloud. Meanwhile, scheduling for real-time tasks is playing an essential role for a cloud provider to maintain its quality of service and enhance the system’s performance. In this paper, we devise a novel agent-based scheduling mechanism in cloud computing environment to allocate real-time tasks and dynamically provision resources. In contrast to traditional contract net protocols, we employ a bidirectional announcement-bidding mechanism and the collaborative process consists of three phases, i.e., basic matching phase, forward announcement-bidding phase and backward announcement-bidding phase. Moreover, the elasticity is sufficiently considered while scheduling by dynamically adding virtual machines to improve schedulability. Furthermore, we design calculation rules of the bidding values in both forward and backward announcement-bidding phases and two heuristics for selecting contractors. On the basis of the bidirectional announcement-bidding mechanism, we propose an agent-based dynamic scheduling algorithm named ANGEL for real-time, independent and aperiodic tasks in clouds. Extensive experiments are conducted on CloudSim platform by injecting random synthetic workloads and the workloads from the last version of the Google cloud tracelogs to evaluate the performance of our ANGEL. The experimental results indicate that ANGEL can efficiently solve the real-time task scheduling problem in virtualized clouds. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
14. AGILE: A terminal energy efficient scheduling method in mobile cloud computing.
- Author
-
Chen, Chao, Bao, Weidong, Zhu, Xiaomin, Ji, Haoran, Xiao, Wenhua, and Wu, Jianhong
- Subjects
TELECOMMUNICATION, TELECOMMUNICATION systems, CELL phones, CLOUD computing, COMPUTER network software - Abstract
With the development of mobile telecommunication technology, mobile phones have become a necessary tool in daily life and provide us many conveniences. Meanwhile, the huge number of cell phones constitutes a potential high-performance data processing system, called mobile cloud computing, to strengthen the capacity of individual devices. Many researchers have studied the architectures and scheduling algorithms of mobile cloud computing. However, little work has been done on how to schedule mobile application tasks in data centers to extend battery life for mobile terminals. To address this issue, we investigate agent models, mobile energy consumption models and data transmission models under different connection environments. Based on these, we propose a novel terminal energy-efficient scheduling method (AGILE for short). AGILE compares the energy consumption of cloud execution and mobile execution according to the actual wireless environment, then makes energy-efficient decisions. Extensive experiments are conducted to evaluate the performance of AGILE under different wireless channels, and the performance impact of different parameters is studied. The experimental results indicate that the proposed method can save mobile devices' energy effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
15. FESTAL: Fault-Tolerant Elastic Scheduling Algorithm for Real-Time Tasks in Virtualized Clouds.
- Author
-
Wang, Ji, Bao, Weidong, Zhu, Xiaomin, Yang, Laurence T., and Xiang, Yang
- Subjects
FAULT-tolerant computing, ELASTICITY, RELIABILITY in engineering, FAULT tolerance (Engineering), CLOUD computing, COMPUTER scheduling - Abstract
As clouds have been deployed widely in various fields, the reliability and availability of clouds have become the major concern of cloud service providers and users. Thereby, fault tolerance in clouds receives a great deal of attention in both industry and academia, especially for real-time applications due to their safety-critical nature. Large amounts of research have been conducted to realize fault tolerance in distributed systems, among which fault-tolerant scheduling plays a significant role. However, few studies on fault-tolerant scheduling sufficiently consider the virtualization and the elasticity, two key features of clouds. To address this issue, this paper presents a fault-tolerant mechanism which extends the primary-backup model to incorporate the features of clouds. Meanwhile, for the first time, we propose an elastic resource provisioning mechanism in the fault-tolerant context to improve resource utilization. On the basis of the fault-tolerant mechanism and the elastic resource provisioning mechanism, we design a novel fault-tolerant elastic scheduling algorithm for real-time tasks in clouds named FESTAL, aiming at achieving both fault tolerance and high resource utilization in clouds. Extensive experiments injected with random synthetic workloads as well as the workload from the latest version of the Google cloud tracelogs are conducted on CloudSim to compare FESTAL with three baseline algorithms, i.e., Non-Migration-FESTAL (NMFESTAL), Non-Overlapping-FESTAL (NOFESTAL), and Elastic First Fit (EFF). The experimental results demonstrate that FESTAL is able to effectively enhance the performance of virtualized clouds. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
16. Optimization in distributed information systems.
- Author
-
Zhu, Xiaomin, Yang, Laurence T., Jiang, Hai, Thulasiraman, Parimala, and Di Martino, Beniamino
- Subjects
MATHEMATICAL optimization, INFORMATION storage & retrieval systems, RESOURCE allocation, CLOUD computing, LOAD balancing (Computer networks) - Abstract
In past decades, intensive attention has been drawn to optimization problems, such as task scheduling, resource allocation and network traffic management, in distributed information systems. However, the rapid increases in the scale of physical devices and the types of applications in distributed information systems make the system more difficult to operate and manage. Optimizing system performance is an ongoing, challenging and important issue. This special issue reports twelve high-quality contributions on solving challenging optimization problems in distributed information systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. A server consolidation method with integrated deep learning predictor in local storage based clouds.
- Author
-
Zhang, Guoliang, Bao, Weidong, Zhu, Xiaomin, Zhao, Weiwei, and Yan, Huining
- Subjects
DEEP learning, CLOUD storage, CLOUD computing, RESOURCE allocation, CLIENT/SERVER computing equipment - Abstract
Summary: Server consolidation is one of the critical techniques for energy efficiency in cloud data centers. It is often assumed that cloud service instances (e.g., Amazon EC2 instances) utilize shared storage only. In recent years, however, cloud service providers have been providing local storage for cloud users, since local storage can offer better performance at an identified price. However, such cloud instances usually contain much more data than shared-storage cloud instances. Thus, in a local-storage-based cloud center, the migration cost can be really high, and efficient resource pre-allocation is in dire need. If we can predict the resource demand in advance, the migration oscillation will be reduced, minimizing the migration cost. There is some related work on server consolidation based on forecasting. Unfortunately, the latest work did not consider the "local storage" background mentioned above. At the same time, some research on local storage did not involve the prediction strategy, which plays a significant part in server consolidation. To address this issue, this paper proposes Losari, a consolidation method which takes numeric forecasting and local storage architecture into consideration. Losari consolidates servers on the basis of the resource demand value predicted using a statistical learning method. We model the workload from a real cloud production environment as a time series. Taking deep learning as a frame of reference, multiple deep belief networks integrated with an ARIMA model were trained to study the features of the historical workload. The experimental results show that its average prediction error is only 10.7% in the short term, which is much lower than that of the most common threshold-based model (19.8%) on the same dataset. What is more, the results show that Losari not only simulates the true sequences with high accuracy but also scales the compute resources well, which demonstrates the validity of this integrated deep learning model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
- Author
-
Wu, Guanlin, Bao, Weidong, Zhu, Xiaomin, and Zhang, Xiongtao
- Subjects
INTERNET of things, COMPUTER scheduling, ALGORITHMS, TASK performance, CLOUD computing, PROBLEM solving - Abstract
The diversity of IoT services and applications brings enormous challenges to improving the performance of scheduling multiple computer tasks in cross-layer cloud computing systems. Unfortunately, the commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and computer tasks. Then, we design the scheduling framework based on the analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, algorithms are given based on the framework, and extensive experiments are conducted to validate its effectiveness, as well as its superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. Adaptive workflow scheduling for diverse objectives in cloud environments.
- Author
-
Ji, Haoran, Bao, Weidong, and Zhu, Xiaomin
- Subjects
CLOUD computing, WORKFLOW management, WEB services, COMPUTER scheduling, ALGORITHMS, RANDOM access memory, BANDWIDTHS - Abstract
Cloud computing environments facilitate applications by providing virtualised resources through the network and serve clients via the pay-as-you-go mechanism, building on the rapid development of the network. Normally, economic cost is the most important factor in providing cloud services. However, under some special conditions, the objectives may change. In this scenario, it is impractical to reset the scheduling mechanism only for an occasional incident, so an adaptive scheduling mechanism is needed to address the issues of scheduling under different conditions. To the best of our knowledge, no work has well solved this problem. In this paper, we convert the problem to a multi-objective scheduling problem with varying objective weights and propose a two-phase algorithm, called the adaptive priority-based workflow scheduling (DRAWS) algorithm. The algorithm self-regulates the priorities of tasks to adapt to different objectives. In the experimental part, simulation environments are set up and three classic workflows are used to evaluate the performance. Four comparison algorithms, a resource-sensitive scheduling algorithm, Best Fit-millions of instructions per second, Best Fit-random-access memory and Best Fit-Bandwidth, are used to evaluate the performance of DRAWS. It is demonstrated that each phase of our algorithm optimises the scheduling. Furthermore, we conclude that the DRAWS algorithm is superior to the comparison algorithms. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library