3,617 results for "virtual machines"
Search Results
2. Quality of service aware improved coati optimization algorithm for efficient task scheduling in cloud computing environment
- Author
- Tamilarasu, P. and Singaravel, G.
- Published
- 2024
- Full Text
- View/download PDF
3. Microarchitectural Security of Firecracker VMM for Serverless Cloud Platforms
- Author
- Weissman, Zane, Tiemann, Thore, Eisenbarth, Thomas, Sunar, Berk, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Patil, Vishwas T., editor, Krishnan, Ram, editor, and Shyamasundar, Rudrapatna K., editor
- Published
- 2025
- Full Text
- View/download PDF
4. Decentralized dynamic load balancing for virtual machines in cloud computing: a blockchain-enabled system with state channel optimization.
- Author
- Roselin, J. and Insulata, Israelin J.
- Abstract
This paper introduces an innovative load balancing algorithm that utilizes blockchain-enabled cloud computing environments. The proposed scheme leverages blockchain technology's decentralized architecture to dynamically and efficiently distribute workloads across virtual machines (VMs). This approach optimizes resource utilization and enhances the performance of cloud services. By integrating smart contracts and employing a meticulous VM selection process, our method effectively addresses the challenges associated with traditional load balancing techniques, which often struggle to adapt to dynamic, heterogeneous workloads. Furthermore, our algorithm promotes transparency and security in task allocation and execution, capitalizing on blockchain's inherent features of immutability and consensus. The effectiveness of the proposed scheme is demonstrated through rigorous simulation using the CloudSim toolkit, showcasing significant improvements over existing methods in terms of makespan, execution time, resource utilization, and throughput. These results underline the potential of our proposed solution to revolutionize cloud computing infrastructure management, making it more adaptable, efficient, and resilient to varying computing demands. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
5. Efficient load balancing strategy for cloud computing environment with African vultures algorithm.
- Author
- Karuppan, A. Sandana and Bhalaji, N.
- Subjects
- VIRTUAL machine systems, CLOUD computing, SCHEDULING, VULTURES, ENERGY consumption
- Abstract
Load balancing is essential in cloud computing (CC) to manage the increasing load on servers efficiently. This article proposes a load balancing strategy utilizing constraint measures to distribute the load evenly amongst the servers while minimizing power consumption. Firstly, the capacity and load of every Virtual Machine (VM) are evaluated, and tasks are assigned using the African Vultures Algorithm (AVA) when the load exceeds a predefined threshold. This approach aims to minimize energy consumption, makespan, and data center usage. Additionally, a load balancing method computes critical features for each VM and assesses its load, followed by calculating selection factors for tasks. Tasks with superior selection factors are assigned to VMs. The proposed Efficient Load Balancing in Cloud Computing under African Vultures Algorithm (ELB-CC-AVA) demonstrates better performance in cloud environments, achieving a 32.82%, 30.47%, and 25.32% lower makespan, along with higher resource utilization rates of 38.22%, 40.21%, and 25.46%, compared to the existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
6. Estimation of the required number of nodes of a university cloud virtualization cluster.
- Author
- Akhmetov, Bakhytzhan, Lakhno, Valery, Oshanova, Nurzhamal, Alimseitova, Zhuldyz, Bereke, Madina, and Izbasova, Nurgul
- Subjects
- VIRTUAL machine systems, LABORATORY management, VIRTUAL universities & colleges, GENETIC algorithms, VIRTUAL design
- Abstract
When designing a virtual desktop infrastructure (VDI) for a university or inter-university cloud, developers must overcome many complex technical challenges. One of these tasks is estimating the required number of virtualization cluster nodes. Such nodes host virtual machines for users; these virtual machines can be used by students and teachers to complete academic assignments or research work. Another task that arises in the VDI design process is the problem of algorithmizing the placement of virtual machines in a computer network. In this case, optimal placement of virtual machines will reduce the number of computer nodes without affecting functionality. This, ultimately, helps to reduce the cost of such a solution, which is important for educational institutions. The article proposes a model for estimating the required number of virtualization cluster nodes. The proposed model is based on a combined approach, which involves jointly solving the problem of optimal packing and finding the configuration of server platforms of a private university cloud using a genetic algorithm. The model introduced in this research is universal: it can be used in the design of university cloud systems for different purposes, for example educational systems or inter-university scientific laboratory management systems. [ABSTRACT FROM AUTHOR]
(An illustrative packing sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
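The node-count estimate in entry 6 rests on an optimal packing problem, but the record above does not include the authors' model. As a minimal, hypothetical sketch of the packing half only, the snippet below uses a first-fit-decreasing heuristic to pack per-VM (CPU, RAM) demands into identical nodes and reports how many nodes are opened; the function name, node capacities and VM sizes are illustrative assumptions.

```python
from typing import List, Tuple

def estimate_nodes(vm_demands: List[Tuple[int, int]], node_cpu: int, node_ram: int) -> int:
    """First-fit-decreasing packing of (cpu, ram) VM demands into identical nodes."""
    # Sort VMs by their tightest dimension, biggest first, so large VMs are placed early.
    ordered = sorted(vm_demands,
                     key=lambda d: max(d[0] / node_cpu, d[1] / node_ram),
                     reverse=True)
    nodes: List[List[int]] = []          # remaining [cpu, ram] capacity per opened node
    for cpu, ram in ordered:
        for free in nodes:
            if free[0] >= cpu and free[1] >= ram:   # first node that still fits this VM
                free[0] -= cpu
                free[1] -= ram
                break
        else:
            nodes.append([node_cpu - cpu, node_ram - ram])   # open a new node
    return len(nodes)

# Example: 60 student VMs of 2 vCPU / 4 GB RAM on nodes with 32 vCPU / 128 GB RAM -> 4 nodes.
print(estimate_nodes([(2, 4)] * 60, node_cpu=32, node_ram=128))
```

In the paper's combined approach, a genetic algorithm additionally searches over server-platform configurations, which this sketch does not attempt.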
7. Frequency aware task scheduling using DVFS for energy efficiency in Cloud data centre.
- Author
- Samual, Joshua, Hussin, Masnida, Hamid, Nor Asilah Wati Abdul, and Abdullah, Azizol
- Subjects
- SERVER farms (Computer network management), PROCESS capability, VIRTUAL machine systems, SPECTRUM allocation, QUALITY of service
- Abstract
Reliable processing capacity and flexible storage space make Cloud computing the most recent favourable technology. Many organizations have converted their conventional processing data centre to a Cloud data centre. Cloud computing provides promising execution and storage, which leads to massive growth in processing demand by Cloud users. This makes the Cloud data centre increase the number of virtual machines (VMs) to execute the users' tasks. Hence, it causes high frequency usage and increased energy consumption. Many techniques have been proposed that focus on Cloud energy saving. However, there is still a lack of trade-off between energy-efficient task allocation and frequency scaling for a given workload. In this work, we propose a task scheduling algorithm that aims to minimize energy consumption through the frequency scaling technique while improving task execution time. Specifically, our scheduler comprises two modules: the frequency scaling module and the frequency-aware task scheduling module. In the first module, we utilize Dynamic Voltage and Frequency Scaling-Optimal Frequency (DVFS) to determine the optimal frequency and to select the best server for the incoming tasks. The required number of VMs is then created on the best server. In the second module, the VM processing capacity is scaled to the required frequency of the task; we identify this as the processing capacity required for executing the tasks. The experimental results show that our algorithm outperforms existing energy-saving techniques and efficiently minimizes energy consumption in the Cloud data centre. Meanwhile, the task allocation also meets the system's Quality of Service (QoS). Significantly, leveraging the resource processing frequency gains a better trade-off between performance and energy consumption in the Cloud data centre. [ABSTRACT FROM AUTHOR]
(A rough frequency-selection sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
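As a rough illustration of the frequency-scaling idea in entry 7 (not the authors' DVFS-Optimal Frequency module), the sketch below picks the lowest discrete CPU frequency that still meets a task's deadline, under the textbook assumptions that execution time scales as cycles/f and dynamic power grows roughly as the cube of f; all names and numbers are hypothetical.

```python
def pick_frequency(cycles: float, deadline_s: float, freqs_hz: list[float],
                   power_coeff: float = 1e-27) -> tuple[float, float]:
    """Return (frequency, estimated energy in joules) for the slowest frequency meeting the deadline.

    Assumes execution time = cycles / f and dynamic power ~ power_coeff * f**3,
    so energy ~ power_coeff * f**2 * cycles; both are textbook approximations.
    """
    for f in sorted(freqs_hz):                # try the lowest frequencies first
        if cycles / f <= deadline_s:          # deadline met at this frequency?
            return f, power_coeff * f ** 2 * cycles
    f = max(freqs_hz)                         # otherwise fall back to the maximum frequency
    return f, power_coeff * f ** 2 * cycles

freq, energy = pick_frequency(cycles=3e9, deadline_s=2.0,
                              freqs_hz=[1.2e9, 1.8e9, 2.4e9, 3.0e9])
print(freq, energy)   # 1.8 GHz is the slowest setting that finishes within 2 s
```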
8. Machine learning-centric prediction and decision based resource management in cloud computing environments.
- Author
- Kashyap, Shobhana, Singh, Avtar, and Gill, Sukhpal Singh
- Subjects
- VIRTUAL machine systems, DECISION support systems, CLOUD computing, MACHINE learning, PREDICTION models, SERVER farms (Computer network management)
- Abstract
In cloud data centers, precise resource prediction is a critical issue due to the dynamic environment, the presence of irrelevant data points, and the unpredictable nature of resource demand. An accurate prediction helps with resource management, cost planning, and improving cloud-related services, whereas an inaccurate prediction increases the budget because of unused and overused resources. The presence of dynamic and irrelevant data not only creates confusion in the model, resulting in inaccurate predictions, but also adds unnecessary complexity and cost to the entire process. To address these challenges, we propose a multi-model approach that uses a sliding window with an adjustable size to estimate important data points from the real trace. The current work conducts three experiments to assess the impact of unpredictable data and improve the models' performance for greater accuracy. Initially, we conducted the experiment on the entire datasets, but this approach failed to produce accurate and efficient machine-learning models. Next, a fixed-window technique provides real-time activity recognition, making it suitable for applications that require immediate feedback or response. Finally, a novel Variable-Size Sliding Window (VSSW) is proposed that selects the relevant data points that provide better performance. Additionally, a Model Selector Decision Support System (MSDSS) is designed for forecasting and optimizing resource demand. This system determines the best predictive model for a specific set of resources based on observations gathered over a defined time frame. The experimental outcomes demonstrate that the proposed algorithm improves the Mean Absolute Error (MAE) by approximately 50.01% compared to the baseline method and approximately 31.75% compared to the fixed window size approach. Furthermore, the proposed model effectively addresses the challenge of predicting resource workloads in a dynamic environment. [ABSTRACT FROM AUTHOR]
(A minimal window-selection sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
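The adjustable-window idea in entry 8 can be illustrated with a small, hypothetical sketch (not the authors' VSSW or MSDSS): candidate window lengths are scored by the one-step-ahead mean absolute error of a rolling-mean forecast on a CPU trace, and the best-scoring window is kept. The candidate lengths and synthetic trace are assumptions.

```python
import numpy as np

def window_mae(trace: np.ndarray, window: int) -> float:
    """One-step-ahead forecast = mean of the last `window` points; return the mean absolute error."""
    errors = [abs(trace[t] - trace[t - window:t].mean()) for t in range(window, len(trace))]
    return float(np.mean(errors))

def best_window(trace: np.ndarray, candidates=(6, 12, 24, 48)) -> int:
    """Keep the candidate window length whose rolling-mean forecast scores the lowest MAE."""
    return min(candidates, key=lambda w: window_mae(trace, w))

rng = np.random.default_rng(0)
cpu = 50 + 10 * np.sin(np.arange(500) / 12.0) + rng.normal(0, 2, 500)   # synthetic CPU-usage trace
print(best_window(cpu))
```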
9. A Multi-Objective Approach for Optimizing Virtual Machine Placement Using ILP and Tabu Search
- Author
- Mohamed Koubàa, Rym Regaieg, Abdullah S. Karar, Muhammad Nadeem, and Faouzi Bahloul
- Subjects
- cloud computing, virtualization, virtual machines, placement problem, multi-objective optimization, integer linear programming, Computer engineering. Computer hardware, TK7885-7895, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Efficient Virtual Machine (VM) placement is a critical challenge in optimizing resource utilization in cloud data centers. This paper explores both exact and approximate methods to address this problem. We begin by presenting an exact solution based on a Multi-Objective Integer Linear Programming (MOILP) model, which provides an optimal VM Placement (VMP) strategy. Given the NP-completeness of the MOILP model when handling large-scale problems, we then propose an approximate solution using a Tabu Search (TS) algorithm. The TS algorithm is designed as a practical alternative for addressing these complex scenarios. A key innovation of our approach is the simultaneous optimization of three performance metrics: the number of accepted VMs, resource wastage, and power consumption. To the best of our knowledge, this is the first application of a TS algorithm in the context of VMP. Furthermore, these three performance metrics are jointly optimized to ensure operational efficiency (OPEF) and minimal operational expenditure (OPEX). We rigorously evaluate the performance of the TS algorithm through extensive simulation scenarios and compare its results with those of the MOILP model, enabling us to assess the quality of the approximate solution relative to the optimal one. Additionally, we benchmark our approach against existing methods in the literature to emphasize its advantages. Our findings demonstrate that the TS algorithm strikes an effective balance between efficiency and practicality, making it a robust solution for VMP in cloud environments. The TS algorithm outperforms the other algorithms considered in the simulations, achieving a gain of 2% to 32% in OPEF, with a worst-case increase of up to 6% in OPEX.
- Published
- 2024
- Full Text
- View/download PDF
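To make the Tabu Search side of entry 9 concrete, here is a generic TS skeleton over VM-to-host assignments, assuming a caller-supplied cost function; it illustrates the metaheuristic pattern (neighborhood moves, a tabu list, retention of the best solution found) rather than the paper's three-objective MOILP formulation, and every name and number is hypothetical.

```python
import random
from collections import deque
from typing import Callable, List

def tabu_search(initial: List[int], n_hosts: int, cost: Callable[[List[int]], float],
                iters: int = 500, tabu_len: int = 50) -> List[int]:
    """Generic tabu search over VM-to-host assignments (assignment[i] = host index of VM i)."""
    current, best = initial[:], initial[:]
    tabu: deque = deque(maxlen=tabu_len)            # recently applied (vm, host) moves
    for _ in range(iters):
        vm = random.randrange(len(current))         # neighborhood: move one VM to another host
        candidates = [(cost(current[:vm] + [h] + current[vm + 1:]), h)
                      for h in range(n_hosts)
                      if h != current[vm] and (vm, h) not in tabu]
        if not candidates:
            continue
        new_cost, host = min(candidates)            # best non-tabu neighbor (may be worse)
        current[vm] = host
        tabu.append((vm, host))
        if new_cost < cost(best):                   # always keep the global best seen so far
            best = current[:]
    return best

# Toy cost: number of hosts actually used (a crude power proxy) for 6 VMs on 3 hosts.
print(tabu_search([0, 1, 2, 0, 1, 2], n_hosts=3, cost=lambda a: len(set(a))))
```

In the paper, the cost would jointly reflect accepted VMs, resource wastage and power consumption rather than the toy host-count proxy used here.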
10. A DELAYED MALWARE PROPAGATION MODEL FOR CLOUD COMPUTING SECURITY.
- Author
- YU, XIAODONG and ZHANG, ZIZHEN
- Subjects
- CLOUD computing security measures, VIRTUAL machine systems, LINEAR matrix inequalities, TIME delay systems, EXPONENTIAL stability
- Abstract
Securing the virtual environment in cloud computing is a crucial issue, since cloud computing provides various services in many areas around the globe. The main aim of this paper is to investigate a delayed malware propagation model for cloud computing security. A time delay due to the time interval that infected virtual machines need to reinstall their system and a time delay due to the temporary immunization period of the protected virtual machines are introduced into the model. First, sufficient conditions for local stability and the existence of Hopf bifurcation are derived by choosing different combinations of the two time delays as the bifurcating parameter. Second, global exponential stability of the model is explored with the aid of the linear matrix inequality method. Finally, numerical simulations are carried out to illustrate the obtained results, and suggestions to ensure the security of cloud computing, based on an analysis of the dynamics of the proposed delayed malware propagation model, are given in the conclusion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. HOGWO: a fog inspired optimized load balancing approach using hybridized grey wolf algorithm.
- Author
- Das, Debashreet, Sengupta, Sayak, Satapathy, Shashank Mouli, and Saini, Deepanshu
- Subjects
- VIRTUAL machine systems, PARTICLE swarm optimization, COST control, GENETIC algorithms, INTERNET of things
- Abstract
A distributed archetype, fog computing relocates the storage, computation, and services closer to the network's edge, where the data is generated. Despite these advantages, users expect proper load management in the fog environment. This has expanded the Internet of Things (IoT) field, increasing user requests for the fog computing layer. Given this growth, Virtual Machines (VMs) in the fog layer become overburdened due to user demands. In the fog layer, it is essential to evenly and fairly distribute the workload among the segment's current VMs. Numerous load-management strategies for fog environments have been implemented up to this point. This study aims to create a hybridized and optimized approach for load management (HOGWO), in which the population set is generated using the Invasive Weed Optimisation (IWO) algorithm. The rest of the functional part is done with the help of the Grey Wolf Optimization (GWO) algorithm. This process ensures cost optimization, increased performance, scalability, and adaptability to any domain, such as healthcare, vehicular traffic management, etc. Also, the efficiency of the enhanced approach is analyzed in various scenarios to provide a more optimal solution set. The proposed approach is well illustrated and outperforms existing algorithms, such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), in terms of cost and load management. It was found that more than 97% of jobs were completed on time, according to the testing data, and the hybrid technique outperformed all other approaches in terms of fluctuation of load and makespan. [ABSTRACT FROM AUTHOR]
(The standard GWO position update is given after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
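For readers unfamiliar with the Grey Wolf Optimization step used inside HOGWO (entry 11), the standard GWO position update, as commonly stated in the literature and independent of this paper's IWO hybridization, is:

```latex
\[
\vec{A} = 2\vec{a}\cdot\vec{r}_1 - \vec{a}, \qquad
\vec{C} = 2\vec{r}_2, \qquad
\vec{D} = \left|\vec{C}\cdot\vec{X}_p(t) - \vec{X}(t)\right|, \qquad
\vec{X}(t+1) = \vec{X}_p(t) - \vec{A}\cdot\vec{D},
\]
\[
\vec{X}(t+1) \;=\; \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3},
\]
```

where X_p is a leader position (the alpha, beta or delta wolf), X_1, X_2 and X_3 are the candidate positions computed with respect to those three leaders, the components of a decrease linearly from 2 to 0 over the iterations, and r_1, r_2 are random vectors in [0, 1].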
13. Resource Sizing for Virtual Environments of Networked Interconnected System Services.
- Author
- Albychev, Alexandr, Ilin, Dmitry, and Nikulchev, Evgeny
- Subjects
- VIRTUAL machine systems, VIRTUAL reality, CLOUD computing, RESOURCE allocation, NUMBER systems
- Abstract
Networked interconnected systems are often deployed in infrastructures with resource allocation using isolated virtual environments. The technological implementation of such systems varies significantly, making it difficult to accurately estimate the required volume of resources to allocate for each virtual environment. This leads to overprovisioning of some services and underprovisioning of others, and the problem of distributing the available computational resources between the system services arises. To use resources efficiently and reduce resource waste, the problem of minimizing free resources under conditions of unknown ratios of resource distribution between services is formalized; an approach to determining regression dependencies of computing resource consumption by services on the number of requests and a procedure for efficient resource distribution between services are proposed. The proposed solution is experimentally evaluated using the networked interconnected system model. The results show an increase in throughput by 20.75% compared to arbitrary resource distribution and a reduction in wasted resources by 55.59%. The dependences of resource usage by networked interconnected system services on the number of incoming requests, identified using the proposed solution, can also be used for scaling in the event of an increase in the total volume of allocated resources. [ABSTRACT FROM AUTHOR]
(A tiny regression-based sizing sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
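A tiny, hypothetical illustration of the regression idea in entry 13 (not the authors' procedure): fit a linear dependence of CPU consumption on request rate for each service, then split a fixed CPU budget in proportion to each service's predicted demand at its expected load. The service names and measurements are invented for the example.

```python
import numpy as np

def fit_cpu_model(requests: np.ndarray, cpu_used: np.ndarray) -> tuple[float, float]:
    """Least-squares fit of cpu ~= slope * requests + intercept for one service."""
    slope, intercept = np.polyfit(requests, cpu_used, deg=1)
    return float(slope), float(intercept)

def distribute_budget(models: dict, expected_rps: dict, total_cpu: float) -> dict:
    """Split total_cpu between services in proportion to their predicted demand."""
    demand = {s: max(slope * expected_rps[s] + intercept, 0.0)
              for s, (slope, intercept) in models.items()}
    scale = total_cpu / sum(demand.values())
    return {s: round(d * scale, 2) for s, d in demand.items()}

# Toy measurements: CPU cores consumed at several request rates for two services.
api = fit_cpu_model(np.array([10, 50, 100]), np.array([0.6, 2.4, 4.8]))
db = fit_cpu_model(np.array([10, 50, 100]), np.array([1.0, 3.0, 5.5]))
print(distribute_budget({"api": api, "db": db}, {"api": 80, "db": 40}, total_cpu=8.0))
```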
14. The Goodness of Nesting Containers in Virtual Machines for Server Consolidation.
- Author
- Bermejo, Belen, Juiz, Carlos, and Calzarossa, Maria Carla
- Abstract
Virtualization and server consolidation are the technologies that govern today's data centers, allowing efficient management at the functionality level as well as at the energy and performance levels. There are two main ways to virtualize: using virtual machines or using containers. Both have a series of characteristics and applications, and they are sometimes not compatible with each other. To avoid losing the advantages of each, there is a trend to load data centers by nesting containers in virtual machines. Although there are good experiences at a functional level, the performance and energy consumption trade-off of these solutions is not completely clear. Therefore, it is necessary to study how this new trend affects both energy consumption and performance. In this work, we present an experimental study aimed at investigating the behavior of nesting containers in virtual machines while executing CPU-intensive workloads. Our objective is to understand which performance and energy nesting configurations are equivalent and which are not. In this way, administrators will be able to manage their data centers more efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A Queuing Theory Approach to Task Scheduling in Cloud Computing with Generalized Processor Sharing Queue Model and Heavy Traffic Approximation.
- Author
- Ghazali, Mohamed and Tahar, Abdelghani Ben
- Subjects
- VIRTUAL machine systems, QUEUING theory, RESOURCE allocation, DATA warehousing, RESOURCE management
- Abstract
Cloud computing has transformed data storage, management, and processing by offering scalable and flexible resources via the internet. A key component of this technology is the efficient allocation and management of resources, particularly through task scheduling at the level of virtual machines (VMs). Task scheduling is critical for maximizing resource utilization and system performance in cloud environments. However, it presents significant challenges due to the dynamic and distributed nature of these environments. Effective task scheduling algorithms are necessary to balance load, minimize response time, and optimize resource usage, making it a crucial area for ongoing research and development in cloud computing. This paper addresses the challenge of task scheduling in cloud computing by employing an analytical approach based on queuing theory. We model the system using a generalized processor sharing (GPS) queue and evaluate its performance through heavy traffic approximation. This method allows us to derive performance metrics for queuing systems prone to congestion, considering general interarrival and service time distributions, thus providing a comprehensive analysis of scheduling efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
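For reference, the defining property of the generalized processor sharing (GPS) discipline analysed in entry 15, in its idealized fluid form and independent of the paper's heavy-traffic treatment, is that each backlogged class i with weight phi_i is served at rate

```latex
\[
r_i(t) \;=\; \frac{\phi_i}{\sum_{j \in B(t)} \phi_j}\, C,
\]
```

where C is the total service capacity of the VM and B(t) is the set of classes that are backlogged at time t.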
16. Improving Quality of Service in Cloud Computing Frameworks Using Whale Optimization Algorithm.
- Author
- Ibrahim, Maher Ali, Al-Tahar, Inas Anouar, Salamah, Hasan Mohamed, and Mohamad, Naji Ibrahem
- Subjects
- VIRTUAL machine systems, QUALITY of service, COMPUTING platforms, POINT cloud, SATISFACTION, METAHEURISTIC algorithms, HEURISTIC algorithms
- Abstract
Quality of Service is one of the most important research topics in cloud computing, both from the customer's point of view and from the cloud service provider's point of view, due to the increasing number of cloud services and applications, along with the significant increase in users and workloads. Task scheduling has been a topic of discussion in many research works, some of which have proposed new ways to improve the Quality of Service in cloud systems. Recently, metaheuristic algorithms have been employed to improve job execution efficiency in such systems, which has proven effective in finding optimal task scheduling solutions. In this research, the recent metaheuristic Whale Optimization Algorithm (WOA) is applied to optimize task scheduling in cloud systems. Additionally, a multi-objective optimization model is used to achieve a balance between user satisfaction with these systems and the requirements of service providers. The research results indicate WOA's superiority in minimizing the execution time cost, price cost, and total cost parameters compared with current metaheuristic algorithms, which ultimately results in higher service quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. AlphaBoot: accelerated container cold start using SmartNICs
- Author
- Shaunak Galvankar and Sean Choi
- Subjects
- SmartNICs, cloud computing, containers, data center networks, virtual machines, Computer software, QA76.75-76.765
- Abstract
Scalability and flexibility of modern cloud application can be mainly attributed to virtual machines (VMs) and containers, where virtual machines are isolated operating systems that run on a hypervisor while containers are lightweight isolated processes that share the Host OS kernel. To achieve the scalability and flexibility required for modern cloud applications, each bare-metal server in the data center often houses multiple virtual machines, each of which runs multiple containers and multiple containerized applications that often share the same set of libraries and code, often referred to as images. However, while container frameworks are optimized for sharing images within a single VM, sharing images across multiple VMs, even if the VMs are within the same bare-metal server, is nearly non-existent due to the nature of VM isolation, leading to repetitive downloads, causing redundant added network traffic and latency. This work aims to resolve this problem by utilizing SmartNICs, which are specialized network hardware that provide hardware acceleration and offload capabilities for networking tasks, to optimize image retrieval and sharing between containers across multiple VMs on the same server. The method proposed in this work shows promise in cutting down container cold start time by up to 92%, reducing network traffic by 99.9%. Furthermore, the result is even more promising as the performance benefit is directly proportional to the number of VMs in a server that concurrently seek the same image, which guarantees increased efficiency as bare metal machine specifications improve.
- Published
- 2025
- Full Text
- View/download PDF
18. Comparative Study of Web Server Performance Testing with and without Docker Based on Virtual Machines
- Author
- Fajar Kurnia Ramadhan, Garno Garno, and Arip Solehudin
- Subjects
- web server performance, docker, virtual machines, performance testing, load test, system infrastructure development life cycle (sidlc), Electronic computers. Computer science, QA75.5-76.95
- Abstract
Web server development is often hindered by the cost and resources required, as developing a web server typically requires a bare-metal server. Container technology, which allows for the development of multiple web servers on a single bare-metal server, has become popular, and one of the most widely used containers is Docker. Docker reduces cost and resource requirements. Beyond the issues of cost and resource requirements, the performance of web servers also needs to be considered, so the performance of web servers with and without Docker needs to be verified. This research aims to test the performance of two web servers, one using Docker and one not using Docker, utilizing the native hypervisor VMware ESXi. The web server performance test items in this study include CPU and RAM resource usage. The method for developing the infrastructure systems uses the SIDLC (System Infrastructure Development Life Cycle). Performance testing (load test) was conducted using Apache JMeter as a tool, with a predetermined set of thread counts. Resource usage information was monitored using Prometheus and Grafana. The research results show that, with the same resources for each virtual machine, the CPU resource usage of Virtual Machine 2 (Undockerized) is less than that of Virtual Machine 1 (Dockerized). Meanwhile, RAM resource usage is not affected by the number of users on either virtual machine. Virtual Machine 2 (Undockerized) is better at handling HTTP requests: Virtual Machine 1 (Dockerized) can handle only 2,790 users, while Virtual Machine 2 (Undockerized) can handle more than 2,790 users without errors.
- Published
- 2024
- Full Text
- View/download PDF
19. Anomaly detection on longitudinal data with applications in cloud & healthcare
- Author
- Abubakar, Abdullahi, Mai, Thai Son, Kilpatrick, Peter, and Nikolopoulos, Dimitrios
- Subjects
- Anomaly detection, time series, virtual machines, dengue fever, parallel computing, ensemble methods, climate variable, longitudinal data, cloud computing, prediction, time-series forecasting, concept drift
- Abstract
Over a decade, analysing longitudinal data has presented a challenge in meeting the demands of extracting useful knowledge. For instance, as cloud/data centres grow in scale and complexity, effective monitoring and management of the cloud becomes a critical challenge. Competition for resource sharing and virtual machine overload are prone to cause anomalies, which will possibly cause downtime. This will seriously affect the reliability and availability of the entire cloud infrastructure. Given the increasing number of data-rich application fields including healthcare systems and seismic activities, there is a growing research interest in anomaly detection. Despite several attempts to create anomaly detection frameworks for cloud/data centres, developing a framework that properly identifies anomalous Virtual Machines (VMs) in a large-scale, highly dynamic cloud environment remains a difficult task. A framework (Predictive Ensemble based Anomaly Detection System-PEADS) is developed to monitor virtual machine CPU utilisation (as time-series data) by handling massive quantities of data. It is designed to handle low-latency reads and updates in a linearly scalable and fault-tolerant way. Furthermore, it can differentiate a true drift from an anomaly. Additionally, it has the ability to detect anomalies as soon as they occur because it was built based on prediction (forecast) algorithms; this eventually leads to early detection. Early anomaly detection can lead to a potential disaster prevention, such as VM failure in the cloud/data centre. This will help cloud providers to plan VM migration. PEADS provides better performance, high accuracy, and faster analysis than some state-of-the-art anomaly detection systems such as, DQR-AD, DeepAnT and VAE-LSTM. The success can be attributed to the proposed algorithm that reduce false alarms while maintaining accurate detection. Additionally, the windowing technique and similarity measure also played a significant role in its success. More importantly, the fact that it was an ensemble system whereas DQR-AD, DeepAnT and VAE-LSTM are based on single models also contributed to its success. In addition to the anomaly detection on virtual machines in cloud settings, we also utilised the outlier detection technique to predict Dengue fever outbreaks in Vietnam. Dengue Fever is an emerging mosquito-borne infectious disease that affect hundred millions of people each year with considerable morbidity and mortality rates, especial in children and the elderly. The World Health Organization listed Dengue fever (virus) among the top ten diseases responsible for the most global deaths. The spread of Dengue virus has been linked to the interplay between extreme weather events and mosquito dynamics. Therefore, a framework that utilises meteorological (climate) and Dengue fever vulnerability trends that might be predictive of Dengue fever epidemics, especially at the city (province), regional level and the entire Vietnam is proposed. Exploratory data analysis is performed to identify the correlation between Dengue incidences and climate variables using outlier detection and visualisation techniques. Finally, Dengue outbreaks across several provinces of Vietnam are predicted using the classification technique. Seventeen distinct machine learning algorithms as the base learners are utilised to generate ensemble prediction. These base learners are based on independent procedures and have diverse decision boundaries to predict the Dengue fever outbreak. 
Several models are constructed and validated for each algorithm using the cross-validation technique. The models with high training/validation accuracy from each machine learning algorithm are then returned as the final models. These models form the basis for ensemble prediction. For every prediction algorithm, four evaluation metrics were used to ascertain the quality of the prediction: accuracy, balanced accuracy, specificity, and sensitivity. The study period for predicting (forecasting) Dengue fever is a range of time t, from zero to six months into the future. For every forecast study period, the model is evaluated, and the performance of every prediction on the base models as well as the ensemble model is measured. The ensemble method performs better than the base models. It can successfully forecast a Dengue fever outbreak six months in advance with 90% accuracy, but with a modest decrease in accuracy compared to short-term forecasting.
(A generic forecast-plus-threshold anomaly detection sketch follows this entry.)
- Published
- 2023
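A minimal sketch of the prediction-based detection idea behind entry 19, as a generic forecast-plus-threshold rule rather than the PEADS framework itself: forecast the next CPU reading from a rolling window and flag points whose residual exceeds a few standard deviations of recent residuals. The window length, threshold factor and synthetic trace are assumptions.

```python
import numpy as np

def flag_anomalies(cpu: np.ndarray, window: int = 24, k: float = 3.0) -> list[int]:
    """Flag indices where |actual - rolling-mean forecast| exceeds k * std of recent residuals."""
    flagged, residuals = [], []
    for t in range(window, len(cpu)):
        forecast = cpu[t - window:t].mean()          # naive one-step-ahead forecast
        resid = abs(cpu[t] - forecast)
        if len(residuals) >= window and resid > k * (np.std(residuals[-window:]) + 1e-9):
            flagged.append(t)
        residuals.append(resid)
    return flagged

rng = np.random.default_rng(1)
trace = 40 + rng.normal(0, 1.5, 300)    # synthetic, well-behaved VM CPU trace
trace[200] = 95                         # injected overload spike
print(flag_anomalies(trace))            # expected to report index 200
```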
20. Energy and time-aware scheduling in diverse virtualized cloud computing environments using optimized self-attention progressive generative adversarial network.
- Author
- Senthilkumar, G. and Anandamurugan, S.
- Subjects
- GENERATIVE adversarial networks, OPTIMIZATION algorithms, VIRTUAL machine systems, SCHEDULING, VIRTUAL networks
- Abstract
The rapid growth of cloud computing has led to the widespread adoption of heterogeneous virtualized environments, offering scalable and flexible resources to meet diverse user demands. However, the increasing complexity and variability in workload characteristics pose significant challenges in optimizing energy consumption. Many scheduling algorithms have been suggested to address this. Therefore, a self-attention-based progressive generative adversarial network optimized with the Dwarf Mongoose algorithm for Energy and Deadline Aware Scheduling in heterogeneous virtualized cloud computing (SAPGAN-DMA-DAS-HVCC) is proposed in this paper. Here, a self-attention based progressive generative adversarial network (SAPGAN) is proposed to schedule activities in a cloud environment with an objective function of makespan and energy consumption. Then the Dwarf Mongoose algorithm is used to optimize the weight parameters of SAPGAN. The proposed SAPGAN-DMA-DAS-HVCC approach attains 32.77%, 34.83% and 35.76% higher right-skewed makespan and 31.52%, 33.28% and 29.14% lower cost when analysed against existing models, namely task scheduling in a heterogeneous cloud environment utilizing the mean grey wolf optimization approach, the energy- and performance-efficient task scheduling algorithm for heterogeneous virtualized clouds, and energy- and makespan-aware scheduling of deadline-sensitive tasks in the cloud environment, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Resource Utilization Based on Hybrid WOA-LOA Optimization with Credit Based Resource Aware Load Balancing and Scheduling Algorithm for Cloud Computing.
- Author
- Narwal, Abhikriti
- Abstract
In a cloud computing environment, tasks are divided among virtual machines (VMs) with different start times, duration and execution periods. Thus, distributing these loads among the virtual machines is crucial, in order to maximize resource utilization and enhance system performance, load balancing must be implemented that ensures balance across all virtual machines (VMs). In the proposed framework, a credit-based resource-aware load balancing scheduling algorithm (HO-CB-RALB-SA) was created using a hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA) for cloud computing. The proposed model is developed by jointly performing both load balancing and task scheduling. This article improves the credit-based load-balancing ideas by integrating a resource-aware strategy and scheduling algorithm. It maintains a balanced system load by evaluating the load as well as processing capacity of every VM through the use of a resource-aware load balancing algorithm. This method functions primarily on two stages which include scheduling dependent on the VM’s processing power. By employing supply and demand criteria to determine which VM has the least amount of load to map jobs or redistribute jobs from overloaded to underloaded VM. For efficient resource management and equitable task distribution among VM, the load balancing method makes use of a resource-aware optimization algorithm. After that, the credit-based scheduling algorithm weights the tasks and applies intelligent resource mapping that considers the computational capacity and demand of each resource. The FILL and SPILL functions in Resource Aware and Load utilize the hybrid Optimization Algorithm to facilitate this mapping. The user tasks are scheduled in a queued based on the length of the task using the FILL and SPILL scheduler algorithm. This algorithm functions with the assistance of the PEFT approach. The optimal threshold values for each VM are selected by evaluating the task based on the fitness function of minimising makespan and cost function using the hybrid Walrus Optimization Algorithm (WOA) and Lyrebird Optimization Algorithm (LOA).The application has been simulated and the QOS parameter, which includes Turn Around Time (TAT), resource utilization, Average Response Time (ART), Makespan Time (MST), Total Execution Time (TET), Total Processing Cost (TPC), and Total Processing Time (TPT) for the 400, 800, 1200, 1600, and 2000 cloudlets, has been determined by utilizing the cloudsim tool. The performance parameters for the proposed HO-CB-RALB-SA and the existing models are evaluated and compared. For the proposed HO-CB-RALB-SA model with 2000 cloudlets, the following parameter values are found: 526.023 ms of MST, 12741.79 ms of TPT, 33422.87$ of TPC, 23770.45 ms of TET, 172.32 ms of ART, 9593 MB of network utilization, 28.1 of energy consumption, 79.9 Mbps of throughput, 5 ms of TAT, 18.6 ms for total waiting time and 17.5% of resource utilization. Based on several performance parameters, the simulation results demonstrate that the HO-CB-RALB-SA strategy is superior to the other two existing models in the cloud environment for efficient resource utilization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Mean makespan task scheduling approach for the edge computing environment.
- Author
- Saini, Nisha and Kumar, Jitender
- Subjects
- VIRTUAL machine systems, PRODUCTION scheduling, EDGE computing, RESEARCH personnel, SCHEDULING
- Abstract
Task scheduling in the edge computing environment poses significant challenges due to its inherent NP-hard nature. Several researchers concentrated on minimizing simple makespan, disregarding the reduction of the mean time to complete all tasks, resulting in uneven distributions of mean completion times. To address this issue, this study proposes a novel mean makespan task scheduling strategy (MMTSS) to minimize simple and mean makespan. MMTSS optimizes the utilization of virtual machine capacity and uses the mean makespan optimization to minimize the processing time of tasks. In addition, it reduces imbalance by evenly distributing tasks among virtual machines, which makes it easier to schedule batches subsequently. Using genetic algorithm optimization, MMTSS effectively lowers processing time and mean makespan, offering a viable approach for effective task scheduling in the edge computing environment. The simulation results, obtained using cloudlets ranging from 500 to 2000, explicitly demonstrate the improved performance of our approach in terms of both simple and mean makespan metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
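The distinction entry 22 draws between the simple makespan and the mean completion time of a task batch can be written compactly (a standard formulation; the paper's exact objective may differ). With completion time C_j for task j out of n tasks:

```latex
\[
\text{makespan} \;=\; \max_{1 \le j \le n} C_j, \qquad
\overline{C} \;=\; \frac{1}{n}\sum_{j=1}^{n} C_j ,
\]
```

Two schedules can share the same makespan yet differ sharply in the mean completion time, depending on how early the bulk of the tasks finish.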
23. An improved particle swarm optimization algorithm for scheduling tasks in cloud environment.
- Author
- Wang, Zi-Ren, Hu, Xiao-Xiang, Wei, Peng, and Yuan, Bo
- Subjects
- PARTICLE swarm optimization, SWARM intelligence, DISTRIBUTED algorithms, SEARCH algorithms, GENETIC algorithms
- Abstract
Cloud computing provides services dynamically according to the contract between service providers and users. However, inappropriate scheduling of tasks on VMs can lead to huge resource waste and load imbalance, which becomes a seriously challenging problem. Current swarm intelligence algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO) are combinations of random initialization and a local search algorithm, which avoids inconsistent results for different problem instances. However, existing swarm intelligence works sometimes search for the optima without analysing task scheduling situations comprehensively; global search efficiency is low and convergence occurs too early. In this paper, we propose the SNSK-IPSO algorithm, which is developed as a two-phase algorithm: enumerating all distributed solutions between VMs and tasks, then finding the optimal solution through IPSO. It not only minimizes the execution time but also improves resource utilization and load balance. Several experiments demonstrate that our novel algorithm outperforms others in terms of achieving load balance, higher resource utilization and lower execution times. [ABSTRACT FROM AUTHOR]
(The canonical PSO update equations are given after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
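For context on the baseline that entry 23 improves upon, the canonical PSO velocity and position updates (the standard formulation, not the paper's SNSK-IPSO variant) are:

```latex
\[
v_i(t+1) = w\,v_i(t) + c_1 r_1\bigl(p_i - x_i(t)\bigr) + c_2 r_2\bigl(g - x_i(t)\bigr), \qquad
x_i(t+1) = x_i(t) + v_i(t+1),
\]
```

where p_i is particle i's personal best position, g the swarm's global best, w the inertia weight, c_1 and c_2 the acceleration coefficients, and r_1, r_2 uniform random numbers in [0, 1].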
24. Cosine Similarity Based Golden Jackal Optimization for Efficient Task Scheduling in Cloud Computing.
- Author
- Rahman, Mohammed Ziaur and Pichandi, Anandaraj Shanthi
- Subjects
- VIRTUAL machine systems, PARTICLE swarm optimization, RESOURCE allocation, FUZZY logic, FUZZY sets, LOAD balancing (Computer networks)
- Abstract
Task scheduling is a process in the cloud that has attained substantial attention because of the enhanced demand for computing resources and services. Various load balancing approaches have been developed for allocating tasks to Virtual Machines (VMs) according to the tasks' priority and execution. However, some problems are faced in identifying the best tasks for allocation and in classifying the relevant tasks based on the load. In this research, a Fuzzy Logic (FL) and Cosine Similarity based Golden Jackal Optimization (CSGJO) algorithm is proposed for solving the problem of load balancing and task scheduling in CC. The FL optimally distributes the tasks based on its fuzzy rules, whereas the CSGJO improves the fuzzy rule sets and identifies the optimal resource allocation, minimizing the response time even when handling a large number of tasks. The Longest Job to Fastest Processor (LJFP) and Minimum Completion Time (MCT) approaches are the heuristic approaches used to initialize CSGJO. The multi-objective functions of Makespan, Degree of Imbalance (DOI), Execution Time, Energy Consumption and Resource Utilization are used for validating the proposed method's effectiveness. The CSGJO achieves a minimum makespan of 1.9 s, as opposed to Particle Swarm Optimization (PSO) and Enhanced Sunflower Optimization (ESFO). [ABSTRACT FROM AUTHOR]
(The standard cosine similarity measure is given after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
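The cosine similarity used to steer the Golden Jackal Optimization in entry 24 is the standard measure below; how the task and VM feature vectors are built is specific to the paper and not reproduced here.

```latex
\[
\operatorname{sim}(\mathbf{a},\mathbf{b}) \;=\; \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert}
\;=\; \frac{\sum_{k} a_k b_k}{\sqrt{\sum_{k} a_k^{2}}\,\sqrt{\sum_{k} b_k^{2}}},
\]
```

for example between a task's resource-demand vector a and a VM's available-capacity vector b.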
25. Cloud Versus Local: Performance Evaluation of Multi-node Hadoop Clusters Using HiBench Benchmarks
- Author
- Chaubey, Harshit Kumar, Arelli, Siri, Patel, Tanu, Verma, Vishnu, Mallikharjuna Rao, K., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, and Arai, Kohei, editor
- Published
- 2024
- Full Text
- View/download PDF
26. A Multi-objective Virtual Machine Placement Optimization in Sustainable Cloud Environment
- Author
- Swain, Smruti Rekha, Parashar, Anshu, Singh, Ashutosh Kumar, Lee, Chung Nan, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Pastor-Escuredo, David, editor, Brigui, Imene, editor, Kesswani, Nishtha, editor, Bordoloi, Sushanta, editor, and Ray, Ashok Kumar, editor
- Published
- 2024
- Full Text
- View/download PDF
27. K-Means Clustering Based VM Placement Using MAD and IQR
- Author
- Tandon, Akanksha, Jena, Aditya, Patel, Sanjeev, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Verma, Anshul, editor, Verma, Pradeepika, editor, Pattanaik, Kiran Kumar, editor, Dhurandher, Sanjay Kumar, editor, and Woungang, Isaac, editor
- Published
- 2024
- Full Text
- View/download PDF
28. End-to-End Mechanized Proof of a JIT-Accelerated eBPF Virtual Machine for IoT
- Author
- Yuan, Shenghao, Besson, Frédéric, Talpin, Jean-Pierre, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Gurfinkel, Arie, editor, and Ganesh, Vijay, editor
- Published
- 2024
- Full Text
- View/download PDF
29. Temporal Bin Packing Problems with Placement Constraints: MIP-Models and Complexity
- Author
- Borisovsky, Pavel, Eremeev, Anton, Panin, Artem, Sakhno, Maksim, Hartmanis, Juris, Founding Editor, Goos, Gerhard, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Eremeev, Anton, editor, Khachay, Michael, editor, Kochetov, Yury, editor, Mazalov, Vladimir, editor, and Pardalos, Panos, editor
- Published
- 2024
- Full Text
- View/download PDF
30. A Novel Method for Efficient Resource Management in Cloud Environment Using Improved Ant Colony Optimization
- Author
- Yogeshwari, M., Sathya, S., Radhakrishnan, Sangeetha, Padmini, A., Megala, M., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Rajagopal, Sridaran, editor, Popat, Kalpesh, editor, Meva, Divyakant, editor, and Bajeja, Sunil, editor
- Published
- 2024
- Full Text
- View/download PDF
31. Duplicated Tasks Elimination for Cloud Data Center Using Modified Grey Wolf Optimization Algorithm for Energy Minimization
- Author
- Ullah, Arif, Chakir, Aziza, Abbasi, Irshad Ahmed, Rehman, Muhammad Zubair, Alam, Tanweer, Chakir, Aziza, editor, Andry, Johanes Fernandes, editor, Ullah, Arif, editor, Bansal, Rohit, editor, and Ghazouani, Mohamed, editor
- Published
- 2024
- Full Text
- View/download PDF
32. Optimization of Cloud Migration Parameters Using Novel Linear Programming Technique
- Author
- Afzal, Shahbaz, Thakur, Abhishek, Singh, Pankaj, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Shaw, Rabindra Nath, editor, Siano, Pierluigi, editor, Makhilef, Saad, editor, Ghosh, Ankush, editor, and Shimi, S. L., editor
- Published
- 2024
- Full Text
- View/download PDF
33. UAV-D2D Assisted Latency Minimization and Load Balancing in Mobile Edge Computing with Deep Reinforcement Learning
- Author
- Song, Qinglin, Qu, Long, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Jin, Hai, editor, Yu, Zhiwen, editor, Yu, Chen, editor, Zhou, Xiaokang, editor, Lu, Zeguang, editor, and Song, Xianhua, editor
- Published
- 2024
- Full Text
- View/download PDF
34. Evaluation of connection pool PgBouncer efficiency for optimizing relational database computing resources
- Author
- A. S. Boronnikov, P. S. Tsyngalev, V. G. Ilyin, and T. A. Demenkova
- Subjects
- pgbouncer, postgresql, connection pool, balancer, databases, optimization, monitoring, virtual machines, cloud technologies, Information theory, Q350-390
- Abstract
Objectives. The aim of the research is to investigate the possibilities of using the PgBouncer connection pool with various configurations in modern database installations by conducting load testing with diverse real-world-like scenarios, identifying critical metrics, obtaining testing results, and interpreting them in the form of graphs.
Methods. The research utilized methods of experimentation, induction, testing, and statistical analysis.
Results. The main features, architecture and modes of operation of the PgBouncer service are considered. Load testing was carried out on a virtual machine deployed on an open cloud platform with different configurations of computing resources (CPU, RAM) and according to several scenarios with different configurations and different numbers of balancer connections to the database, during which the following main indicators were investigated: distribution of processor usage, utilization of RAM, disk space, and CPU. The interpretation of the data obtained and the analysis of the results, highlighting critical parameters, are performed. On the basis of this analysis, conclusions and recommendations are formulated on the use of a connection balancer in real high-load installations for optimizing the resources utilized by the server on which the database management system (DBMS) is located. A conclusion is presented on the usefulness of using the PgBouncer query balancer along with proposed configuration options for subsequent use in real installations.
Conclusions. The degree of influence of the PgBouncer connection balancer on the performance of a system deployed in a virtualized environment is investigated. The results of the work showed that the use of PgBouncer allows significant optimization of the computing resources of the computing node hosting the DBMS server: load on the CPU decreased by 15%, RAM usage by 25–50%, and disk subsystem load by 20%, depending on the test scenarios, the number of connections to the database, and the configuration of the connection balancer.
- Published
- 2024
- Full Text
- View/download PDF
35. Clustering based EO with MRF technique for effective load balancing in cloud computing
- Author
- N., Hanuman Reddy, Lathigara, Amit, Aluvalu, Rajanikanth, and V., Uma Maheswari
- Published
- 2024
- Full Text
- View/download PDF
36. Security and Privacy Considerations in Multimedia Resource Management Using Hybrid Deep Learning Techniques in Cloud Computing.
- Author
- Nallasivan. G., Karpagam. T., Geetha. M., Sankarasubramanian. R. S., Kannan. R., Bhuvanesh. A., and Poojitha. G.
- Subjects
- DEEP learning, ANT algorithms, OPTIMIZATION algorithms, VIRTUAL machine systems, RESOURCE management, CLOUD computing
- Abstract
The management of various multimedia assets, such as photos, videos, audio files, and other rich media content, within a cloud computing environment is referred to as managing multimedia resources in the cloud. To suit the needs of applications and users, this entails the effective storage, retrieval, processing, and distribution of multimedia resources. Given the significance of work planning and managing resources in the cloud computing environment, we present a unique hybrid algorithm in this research. Many cloud-based computing systems have made extensive use of traditional scheduling techniques like ant colony optimization (ACO), first come first serve, etc. The cloud gets client tasks at a high rate, so it is important to handle resource allocation for these tasks carefully. Using the improved pelican optimization algorithm, we efficiently distribute the tasks to the virtual machines in this proposed work. The proposed hybrid algorithm (Improved POA + Improved GJO) is then used to distribute and manage the resources (Memory and CPU) as needed by the tasks. According to experimental findings, the accuracy of the proposed technique increases by 1.12%, 2.11%, and 14.2%, respectively. It shows that the proposed method has good accuracy compared with the existing HUNTER, FT-ERM, and RU-VMM approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Real-World Implementation and Performance Analysis of Distributed Learning Frameworks for 6G IoT Applications.
- Author
- Naseh, David, Abdollahpour, Mahdi, and Tarchi, Daniele
- Subjects
- FEDERATED learning, INTERNET of things, RASPBERRY Pi, EDGE computing, ENERGY consumption
- Abstract
This paper explores the practical implementation and performance analysis of distributed learning (DL) frameworks on various client platforms, responding to the dynamic landscape of 6G technology and the pressing need for a fully connected distributed intelligence network for Internet of Things (IoT) devices. The heterogeneous nature of clients and data presents challenges for effective federated learning (FL) techniques, prompting our exploration of federated transfer learning (FTL) on Raspberry Pi, Odroid, and virtual machine platforms. Our study provides a detailed examination of the design, implementation, and evaluation of the FTL framework, specifically adapted to the unique constraints of various IoT platforms. By measuring the accuracy of FTL across diverse clients, we reveal its superior performance over traditional FL, particularly in terms of faster training and higher accuracy, due to the use of transfer learning (TL). Real-world measurements further demonstrate improved resource efficiency with lower average load, memory usage, temperature, power, and energy consumption when FTL is implemented compared to FL. Our experiments also showcase FTL's robustness in scenarios where users leave the server's communication coverage, resulting in fewer clients and less data for training. This adaptability underscores the effectiveness of FTL in environments with limited data, clients, and resources, contributing valuable information to the intersection of edge computing and DL for the 6G IoT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Adaptive Heterogeneity Index Cloudlet Scheduler for Variable Workload and Virtual Machine Configuration
- Author
- Gritto D and Muthulakshmi P
- Subjects
- Cloudlets, Quality of Service, Load Balancing, Scheduling, Virtual Machines, Electronic computers. Computer science, QA75.5-76.95
- Abstract
In any service-based computing environment, performance pertains to the effectiveness of a system or application in managing user tasks. The key performance assessment metrics include makespan, responsiveness, speed, throughput, resource utilization, etc. In any distributed landscape, like cloud computing, optimal performance relies on resource management techniques such as scheduling, load balancing, etc. Cloud environments often exhibit varying levels of heterogeneity arising from the diverse characteristics of cloudlets and virtual machines. This research paper focuses on the impact of this heterogeneity and proposes two scheduling algorithms to address it effectively: the Variance Managed Heuristic Scheduler (VMHS) and the Adaptive Heterogeneity Index Cloudlet Scheduler (AHICS). AHICS aims to minimize makespan, virtual machine underutilization, the degree of load imbalance, and the deviation of completion time among virtual machines. AHICS functions as the main scheduler, whereas VMHS and MaxMin act as sub-schedulers in this proposed work. AHICS is designed to be flexible and adjust its scheduling strategy based on the level of heterogeneity within the cloud environment. AHICS utilizes the VMHS scheduler in scenarios with a low heterogeneity index or the MaxMin scheduler when the cloudlets and the virtual machines characteristics are highly diverse or heterogeneous. This multi-objective AHICS scheduling algorithm harnesses the strengths of both schedulers as a hybrid algorithm. Implemented using the CloudSim 3.0.3 simulator, experimental results demonstrate that AHICS outperforms other heuristic scheduling algorithms, including MinMin, TASA, HAMM, PTFR, and RSSM, in terms of makespan, virtual machine utilization ratio, the degree of load imbalance, and the deviation of completion time among the virtual machines in both low and high heterogeneity levels
- Published
- 2024
- Full Text
- View/download PDF
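The AHICS record above describes a dispatcher that measures how heterogeneous the cloudlets and VMs are and then hands the work to one of two sub-schedulers. The sketch below shows that dispatch pattern; the heterogeneity-index formula (mean coefficient of variation), the 0.5 threshold, and the simplified VMHS stand-in are assumptions, while MaxMin follows its usual greedy definition.

# Illustrative AHICS-style dispatcher: compute a heterogeneity index over the
# cloudlet lengths and VM speeds, then delegate to a VMHS-like scheduler when
# heterogeneity is low or to MaxMin when it is high. Index, threshold, and the
# VMHS stand-in are assumptions; lengths are instructions, speeds are MIPS.
import statistics

def coeff_var(xs):
    return statistics.pstdev(xs) / statistics.mean(xs)

def heterogeneity_index(cloudlet_lengths, vm_mips):
    return (coeff_var(cloudlet_lengths) + coeff_var(vm_mips)) / 2   # assumed definition

def greedy_assign(ordered_cloudlets, vm_mips):
    finish = [0.0] * len(vm_mips)
    plan = []
    for length in ordered_cloudlets:
        vm = min(range(len(vm_mips)), key=lambda i: finish[i] + length / vm_mips[i])
        finish[vm] += length / vm_mips[vm]
        plan.append((length, vm))
    return plan

def vmhs_like(cloudlets, vm_mips):
    return greedy_assign(cloudlets, vm_mips)                         # simplified stand-in

def max_min(cloudlets, vm_mips):
    return greedy_assign(sorted(cloudlets, reverse=True), vm_mips)   # largest cloudlet first

def ahics_like(cloudlets, vm_mips, threshold=0.5):
    scheduler = vmhs_like if heterogeneity_index(cloudlets, vm_mips) < threshold else max_min
    return scheduler(cloudlets, vm_mips)

print(ahics_like([4000, 1200, 800, 15000], [1000, 2500, 500]))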
39. RLPRAF: Reinforcement Learning-Based Proactive Resource Allocation Framework for Resource Provisioning in Cloud Environment
- Author
-
Reena Panwar and M. Supriya
- Subjects
Resource allocation ,resource provisioning ,autonomic computing systems ,machine learning ,reinforcement learning ,virtual machines ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Recent developments in cloud technology enable one to dynamically deploy heterogeneous resources as and when needed. The dynamic nature of the incoming workload causes fluctuations in the cloud environment, which are currently addressed using traditional reactive scaling techniques. Simple reactive approaches affect elastic system performance either by over-provisioning resources, which significantly increases cost, or by under-provisioning, which leads to starvation. Hence, automated resource provisioning becomes an effective method to deal with such workload fluctuations. These problems can also be resolved by using intelligent resource provisioning techniques that dynamically assign the required resources while adapting to the environment. In this paper, a reinforcement learning-based proactive resource allocation framework (RLPRAF) is proposed. This framework simultaneously learns the environment and distributes the resources. The proposed work presents a paradigm for the optimal allocation of resources by merging the notions of autonomic computing, linear regression, and reinforcement learning. When tested with real-time workloads, the proposed RLPRAF method surpasses previous auto-scaling algorithms in terms of CPU usage, response time, and throughput. Finally, a set of tests demonstrates that the suggested strategy lowers overall expense by 30% and SLA violations by 77.7%. Furthermore, it converges in a timely manner and proves feasible for a wide range of real-world service-based cloud applications.
- Published
- 2024
- Full Text
- View/download PDF
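The RLPRAF entry above combines learning the environment with proactive resource allocation. As a rough illustration of the reinforcement-learning part only, the sketch below runs tabular Q-learning over a toy scaling problem; the state discretization, reward shape, and simulated workload are assumptions and do not reproduce the paper's framework (the regression and autonomic-computing components are omitted).

# Minimal tabular Q-learning sketch for proactive VM scaling (illustrative only).
# State: discretized CPU utilization plus current VM count; actions: remove/keep/add a VM.
# The reward balances SLA pressure (high utilization) against cost (VM count);
# the environment below is a toy workload simulator, not a real cloud trace.
import random

ACTIONS = (-1, 0, +1)
Q = {}                                      # (util_bucket, vms) -> [q per action]
alpha, gamma, eps = 0.2, 0.9, 0.1

def bucket(util):
    return min(int(util * 10), 9)

def reward(util, vms):
    sla_penalty = 10.0 if util > 0.8 else 0.0
    return -(sla_penalty + 0.5 * vms)       # assumed cost/SLA trade-off

vms, demand = 4, 2.0
for step in range(5000):
    demand = max(0.5, demand + random.uniform(-0.3, 0.3))    # toy workload drift
    util = min(demand / vms, 1.0)
    state = (bucket(util), vms)
    q = Q.setdefault(state, [0.0, 0.0, 0.0])
    a = random.randrange(3) if random.random() < eps else q.index(max(q))
    vms = min(20, max(1, vms + ACTIONS[a]))
    next_util = min(demand / vms, 1.0)
    next_q = Q.setdefault((bucket(next_util), vms), [0.0, 0.0, 0.0])
    q[a] += alpha * (reward(next_util, vms) + gamma * max(next_q) - q[a])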
40. Minimizing Virtual Machine Live Migration Latency for Proactive Fault Tolerance Using an ILP Model With Hybrid Genetic and Simulated Annealing Algorithms
- Author
-
Jayroop Ramesh, Zahra Solatidehkordi, Khaled El-Fakih, and Raafat Aburukba
- Subjects
Fault tolerance ,virtual machines ,live migration ,cloud ,monitoring ,optimization ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Cloud computing has grown significantly in recent years and is now widely used by both individuals and businesses. Cloud service providers need to be prepared for failures to ensure that they can respond appropriately, minimizing downtime and maintaining business continuity. This can be accomplished through fault tolerance mechanisms, which aim to predict, prevent, and recover from failures. One of the common causes of task failures in the cloud is the deterioration of physical machines. If deterioration is detected in a physical machine, the virtual machines (VMs) it hosts must be moved to healthy physical machines. This paper proposes an approach to VM allocation in the event of host deterioration or failure. Namely, an integer linear programming optimization model is proposed for the allocation of the failing VMs while minimizing VM migration time. In addition, hybrid genetic and simulated annealing (HGA and HSA) algorithms are proposed for providing efficient solutions to the optimization model. The HGA is hybridized with a procedure that penalizes infeasible solutions to reduce their chances of being selected in the offspring populations and a hill-climbing procedure that modifies feasible solutions to produce better (more fit) ones. The HSA starts from a feasible solution and, like the HGA, uses elitism in its search to ensure that good candidate solutions are preserved. To validate the quality of the obtained solutions, we compare our approach to the commercial optimization solver CPLEX. In comparison to the CPLEX solver, the results show that for small-size data center problems the proposed HGA and HSA achieve near-optimal solutions with 70.80% and 81.43% increases in speed, at an average solution-quality trade-off of 20.57% and 78.52%, respectively. Notably, for medium, large, and very large problems the HGA and HSA obtained solutions where the CPLEX solver could not provide solutions within acceptable time. For medium-size problems the HGA attains solutions of better quality than the HSA (by 75.82%), yet with higher execution time (57.27%). For large and very large problems the HGA significantly outperforms the HSA in terms of solution quality (better by 84.88% and 86.46%), yet with significantly higher execution time (74.22% and 76.55%), respectively. Thus, the designer may choose which algorithm to use based on whether the focus is on quality or execution time.
- Published
- 2024
- Full Text
- View/download PDF
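The record above hybridizes a genetic algorithm with an infeasibility penalty and a hill-climbing pass for reassigning failing VMs. The sketch below mirrors that structure on a toy instance; the migration-time model (memory divided by host bandwidth), capacities, penalty weight, and GA parameters are assumptions rather than the paper's ILP formulation.

# Illustrative hybrid GA for reassigning failing VMs to healthy hosts while
# minimizing total migration time. Infeasible children (host memory capacity
# exceeded) are penalized; feasible children get one hill-climbing pass.
# The migration-time model (memory / bandwidth) and all parameters are assumptions.
import random

random.seed(7)
VMS = [{"mem": random.randint(1, 8)} for _ in range(12)]             # GB to migrate
HOSTS = [{"cap": 20, "bw": random.choice([1.0, 2.0])} for _ in range(4)]

def migration_time(assign):
    return sum(VMS[v]["mem"] / HOSTS[h]["bw"] for v, h in enumerate(assign))

def overload_penalty(assign):
    over = 0
    for h, host in enumerate(HOSTS):
        load = sum(VMS[v]["mem"] for v, hh in enumerate(assign) if hh == h)
        over += max(0, load - host["cap"])
    return 100.0 * over                        # large penalty per GB of overload

def fitness(assign):
    return migration_time(assign) + overload_penalty(assign)

def hill_climb(assign):
    best = list(assign)
    for v in range(len(VMS)):                  # try relocating each VM once
        for h in range(len(HOSTS)):
            cand = list(best)
            cand[v] = h
            if overload_penalty(cand) == 0 and fitness(cand) < fitness(best):
                best = cand
    return best

pop = [[random.randrange(len(HOSTS)) for _ in VMS] for _ in range(30)]
for generation in range(50):
    pop.sort(key=fitness)
    elite = pop[:5]                            # elitism keeps the best candidates
    children = []
    while len(children) < 25:
        a, b = random.sample(pop[:15], 2)      # parents drawn from the fitter half
        cut = random.randrange(1, len(VMS))
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:              # mutation: move one VM at random
            child[random.randrange(len(VMS))] = random.randrange(len(HOSTS))
        children.append(hill_climb(child) if overload_penalty(child) == 0 else child)
    pop = elite + children
print("best migration time:", round(migration_time(min(pop, key=fitness)), 2))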
41. A Systematic Literature Review of Cloud Brokers for Autonomic Service Distribution
- Author
-
Mohd Hamzah Khan, Mohamed Hadi Habaebi, and Md. Rafiqul Islam
- Subjects
Cloud broker ,cloud service broker ,data center ,cloud computing ,load balancing ,virtual machines ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
In recent years, cloud computing has become an essential distributed computing platform and has achieved enormous popularity. Within cloud computing, the cloud service broker creates an abstraction layer between provider and consumer so that customers see a single, unified view of the services offered by cloud service providers. Cloud service brokers help connect the cloud's substantial resources and select the cloud data centers that meet the user's requirements while optimizing overall response time and reducing cost. This systematic literature review examines the landscape of autonomic cloud brokers, using the PRISMA approach to analyze the literature. This comprehensive review of cloud brokerage mechanisms is tailored towards the autonomic distribution of services. To emphasize autonomic computing and cloud-access security brokers, the evolving paradigms of cloud service selection are detailed and critically analyzed with a view to enhancing service distribution efficiency. Further, the role of cloud brokers in load-balancing services is also highlighted in this study. A new taxonomy for the structured framework of cloud brokerage mechanisms is introduced based on functionalities, deployment models, and architecture for autonomic service distribution. Finally, the study offers valuable insights into future research challenges and best practices in cloud security.
- Published
- 2024
- Full Text
- View/download PDF
42. Resource Sizing for Virtual Environments of Networked Interconnected System Services
- Author
-
Alexandr Albychev, Dmitry Ilin, and Evgeny Nikulchev
- Subjects
cloud computing ,virtual environment sizing ,networked interconnected system ,virtual machines ,CPU allocation ,Technology - Abstract
Networked interconnected systems are often deployed in infrastructures that allocate resources through isolated virtual environments. The technological implementation of such systems varies significantly, making it difficult to accurately estimate the volume of resources to allocate to each virtual environment. This leads to overprovisioning of some services and underprovisioning of others, raising the problem of distributing the available computational resources among the system services. To use resources efficiently and reduce waste, the problem of minimizing free resources under unknown ratios of resource distribution between services is formalized; an approach to determining regression dependencies of computing resource consumption by services on the number of requests and a procedure for efficient resource distribution between services are proposed. The proposed solution is experimentally evaluated using a networked interconnected system model. The results show an increase in throughput of 20.75% compared to arbitrary resource distribution and a reduction in wasted resources of 55.59%. The dependencies of resource usage by networked interconnected system services on the number of incoming requests, identified using the proposed solution, can also be used for scaling when the total volume of allocated resources increases.
- Published
- 2024
- Full Text
- View/download PDF
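The entry above fits regression dependencies of per-service resource consumption on request counts and then distributes a fixed resource pool to minimize slack. A minimal sketch of that two-step idea follows; the measurement data, the linear model, and the proportional allocation rule are assumptions standing in for the authors' procedure.

# Illustrative resource-sizing sketch: fit a linear model of CPU consumption vs.
# request rate for each service, then divide a fixed CPU budget in proportion to
# predicted demand at the expected load. Data and the allocation rule are assumed.
import numpy as np

# Hypothetical measurements: (requests/s, CPU cores used) per service.
measurements = {
    "auth":    ([10, 50, 100, 200], [0.3, 1.1, 2.2, 4.1]),
    "catalog": ([10, 50, 100, 200], [0.2, 0.7, 1.3, 2.6]),
    "orders":  ([10, 50, 100, 200], [0.5, 1.9, 3.8, 7.4]),
}

models = {name: np.polyfit(np.array(r), np.array(cpu), 1)      # slope, intercept
          for name, (r, cpu) in measurements.items()}

def size_services(expected_rps, cpu_budget):
    predicted = {n: float(np.polyval(m, expected_rps)) for n, m in models.items()}
    total = sum(predicted.values())
    # Allocate the whole budget in proportion to predicted demand -> minimal slack.
    return {n: cpu_budget * p / total for n, p in predicted.items()}

print(size_services(expected_rps=150, cpu_budget=16))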
43. GEP optimization for load balancing of virtual machines (LBVM) in cloud computing
- Author
-
G. Muneeswari, Jhansi Bharathi Madavarapu, R. Ramani, C. Rajeshkumar, and C. John Clement Singh
- Subjects
Load balancing ,Cloud computing ,Virtual machines ,Bi-LSTM ,Genetic expression programming ,Electric apparatus and materials. Electric circuits. Electric networks ,TK452-454.4 - Abstract
Cloud computing relies heavily on load balancing to distribute workloads evenly among servers, network connections, and drives. Depending on the cloud architecture and user requests, the load assigned to the cloud system may leave it underloaded, overloaded, or balanced. An important component of task scheduling in clouds is the load balancing of workloads that may be dependent on or independent of virtual machines (VMs). To overcome these drawbacks, a novel Load Balancing of Virtual Machines (LBVM) scheme for cloud computing is proposed in this paper. The input tasks from multiple users are collected by a single task collector and sent to the load balancer, which contains a deep learning network based on the Bi-LSTM technique. When the load is unbalanced, VM migration begins by sending the task details to the load balancer. The Bi-LSTM is optimized by a Genetic Expression Programming (GEP) optimizer and finally balances the input loads across VMs. The efficiency of the proposed LBVM has been determined against existing techniques such as MVM, PLBVM, and VMIS in terms of evaluation metrics such as configuration latency, detection rate, and accuracy. Experimental results show that the proposed method reduces migration time by 49%, 41.7%, and 17.8% compared with the existing MVM, PLBVM, and VMIS techniques, respectively.
- Published
- 2024
- Full Text
- View/download PDF
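The LBVM entry above triggers VM migration when the load balancer's learned model detects imbalance. The sketch below shows only that trigger-and-migrate step; the predicted loads are supplied directly (standing in for the Bi-LSTM/GEP predictor), and the threshold and heaviest-task migration rule are assumptions.

# Illustrative VM-migration trigger: predicted per-VM loads (which LBVM would get
# from its Bi-LSTM/GEP model; here they are just supplied numbers) drive a simple
# rule that moves the heaviest task from the busiest VM to the least busy one.
def rebalance(vm_tasks, predicted_load, threshold=0.2):
    avg = sum(predicted_load.values()) / len(predicted_load)
    hot = max(predicted_load, key=predicted_load.get)
    cold = min(predicted_load, key=predicted_load.get)
    if predicted_load[hot] - predicted_load[cold] > threshold * avg and vm_tasks[hot]:
        task = max(vm_tasks[hot])              # migrate the heaviest task (assumed rule)
        vm_tasks[hot].remove(task)
        vm_tasks[cold].append(task)
    return vm_tasks

tasks = {"vm1": [5, 9, 4], "vm2": [1], "vm3": [2, 2]}
print(rebalance(tasks, {"vm1": 18.0, "vm2": 1.0, "vm3": 4.0}))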
44. Efficient task scheduling in cloud networks using ANN for green computing.
- Author
-
Zavieh, Hadi, Javadpour, Amir, and Sangaiah, Arun Kumar
- Subjects
- *
CLOUD computing , *VIRTUAL machine systems , *ARTIFICIAL neural networks , *SUSTAINABILITY , *ENERGY consumption , *SCHEDULING - Abstract
Summary: Recently, there has been a growing emphasis on reducing energy consumption in cloud networks and achieving green computing practices to address environmental concerns and optimize resource utilization. In this context, efficient task scheduling minimizes energy usage and enhances overall system performance. To tackle the challenge of energy-efficient task allocation, we propose a novel approach that harnesses the power of Artificial Neural Networks (ANN). Our Artificial Neural Network Dynamic Balancing (ANNDB) method is designed to achieve green computing in cloud environments. ANNDB leverages the feed-forward network architecture and a multi-layer perceptron, effectively allocating requests to higher-power and higher-quality virtual machines, resulting in optimized energy utilization. Through extensive simulations, we demonstrate the superiority of ANNDB over existing methods, including WPEG, IRMBBC, and BEMEC, in terms of energy and power efficiency. Specifically, our proposed ANNDB method exhibits substantial improvements of 13.81%, 8.62%, and 9.74% in the Energy criterion compared to WPEG, IRMBBC, and BEMEC, respectively. Additionally, in the Power criterion, the method achieves performance enhancements of 3.93%, 4.84%, and 4.19% over the mentioned methods. The findings from this research hold significant promise for organizations seeking to optimize their cloud computing environments while reducing energy consumption and promoting sustainable computing practices. By adopting the ANNDB approach for efficient task scheduling, businesses and institutions can contribute to green computing efforts, reduce operational costs, and make more environmentally friendly choices without compromising task allocation performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
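The ANNDB entry above routes requests to higher-power, higher-quality VMs using a feed-forward multi-layer perceptron. The toy pass below scores each VM with a small MLP and picks the best one; the random weights, feature choice, and scoring interpretation are placeholders, since the paper's trained network is not reproduced here.

# Toy multi-layer perceptron that scores VMs from (power, quality, current load)
# features and routes each request to the highest-scoring VM. Weights here are
# random placeholders; ANNDB would use a trained feed-forward network instead.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def score(vm_features):
    h = np.maximum(0, vm_features @ W1 + b1)     # ReLU hidden layer
    return (h @ W2 + b2).item()

vms = np.array([[2.4, 0.90, 0.3],    # [clock GHz, quality score, utilization] per VM
                [1.8, 0.70, 0.1],
                [3.2, 0.95, 0.8]])

def route(request_id):
    best = int(np.argmax([score(v) for v in vms]))
    return f"request {request_id} -> VM{best}"

print(route(0))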
45. ARIMA-PID: container auto scaling based on predictive analysis and control theory.
- Author
-
Joshi, Nisarg S, Raghuwanshi, Raghav, Agarwal, Yash M, Annappa, B, and Sachin, DN
- Abstract
Containerization has become a widely popular virtualization mechanism alongside Virtual Machines (VMs) for deploying applications and services in the cloud. Containers form the backbone of modern microservice architectures and provide a lightweight virtualization mechanism for IoT and edge systems. Elasticity is one of the key requirements of modern applications, with constraints ranging from Service Level Agreements (SLAs) to optimization of resource utilization, cost management, etc. Auto scaling is a technique used to attain elasticity by scaling the number of containers or resources. This work introduces a novel mechanism for auto-scaling containers in cloud environments, addressing the key elasticity requirement in modern applications. The proposed mechanism combines predictive analysis using the Auto-Regressive Integrated Moving Average (ARIMA) model and control theory utilizing the Proportional-Integral-Derivative (PID) controller. The major contributions of this work include the development of the ARIMA-PID algorithm for forecasting resource utilization and maintaining desired levels, a comparison of ARIMA-PID with existing threshold mechanisms, and a demonstration of its superior performance in terms of CPU utilization and average response times. Experimental results showcase improvements of approximately 10% in CPU utilization and 30% in average response times. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
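The ARIMA-PID entry above forecasts resource utilization and feeds the prediction into a PID controller that scales containers. The sketch below wires a statsmodels ARIMA(1,1,1) forecast into a textbook PID update over a synthetic CPU series; the model order, gains, target utilization, and data are assumptions, not the paper's configuration.

# Illustrative ARIMA-PID auto-scaler: an ARIMA(1,1,1) model forecasts the next
# CPU-utilization sample and a PID controller converts the deviation from a
# target utilization into a change in replica count. Gains, model order, and the
# synthetic utilization series below are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

target = 0.6                        # desired CPU utilization
kp, ki, kd = 2.0, 0.5, 0.1          # assumed PID gains
integral, prev_error, replicas = 0.0, 0.0, 3

history = list(0.5 + 0.1 * np.sin(np.arange(60) / 5) + 0.02 * np.random.randn(60))

for step in range(10):
    forecast = float(ARIMA(history, order=(1, 1, 1)).fit().forecast(steps=1)[0])
    error = forecast - target                    # positive -> predicted overload
    integral += error
    derivative = error - prev_error
    adjustment = kp * error + ki * integral + kd * derivative
    replicas = max(1, replicas + int(round(adjustment)))
    prev_error = error
    history.append(forecast)                     # stand-in for a new measurement
    print(f"step {step}: forecast={forecast:.2f} replicas={replicas}")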
46. Multi-Faceted Job Scheduling Optimization Using Q-learning With ABC In Cloud Environment.
- Author
-
Sharma, Sanjeev and Pandey, Neeraj Kumar
- Subjects
VIRTUAL machine systems ,PARTICLE swarm optimization ,BEES algorithm ,REINFORCEMENT learning ,RESOURCE allocation - Abstract
Resource allocation is among the most challenging and common problems, particularly in the Infrastructure as a Service (IaaS) cloud service model. Load balancing is so critical that an irregular load distribution may result in a system crash. Adopting a suitable allocation plan and allowing the system to spread work among all existing resources leads to appropriate utilization of Virtual Machines (VMs). To obtain enhanced results from the Artificial Bee Colony (ABC) algorithm, the reinforcement learning technique Q-learning is combined with it in the proposed multifaceted job scheduling optimization based on ABC (QMFOABC). The proposed approach improves resource utilization and scheduling based on resource use, cost, and makespan. The efficiency of the suggested strategy was evaluated on the Synthetic Workload, Google Cloud Jobs (GoCJ), and Random datasets using CloudSim, against other load-balancing scheduling strategies such as Max-Min, Multifaceted Cuckoo Search (MFCS), Multifaceted Particle Swarm Optimization (MFPSO), Q-learning, Heuristic job scheduling with the Artificial Bee Colony approach and Largest Job First algorithm (HABC LJF), and First Come First Serve (FCFS). According to the findings of the experiments, the algorithm employing the QMFOABC method achieves better results in resource utilization, throughput, cost, and makespan. Considerable improvements are found, with shorter times than Max-Min (82.31%), MOPSO (35.62%), HABC LJF (21.65%), Q-Learning (11.72%), VTO-QABC FCFS (5.87%), VTO-ABC LJF (5.86%), and MOCS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
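The QMFOABC entry above couples Q-learning with an Artificial Bee Colony search over multiple objectives. The sketch below shows only the Q-learning half as a simplified bandit-style assignment of jobs to VMs with a reward mixing completion time and cost; the ABC refinement, objective weights, VM parameters, and workload are assumptions or omissions relative to the paper.

# Simplified Q-learning job-to-VM assignment (the ABC search phase is omitted).
# State: job size class; action: which VM runs the job; reward mixes makespan
# pressure (VM finish time) and execution cost. All numbers are assumed.
import random

VM_MIPS = [500, 1000, 2000]          # processing speed per VM
VM_COST = [0.5, 1.0, 2.5]            # cost per time unit (assumed)
Q = [[0.0] * len(VM_MIPS) for _ in range(3)]
alpha, eps, w_time, w_cost = 0.3, 0.1, 0.7, 0.3

def size_class(length):
    return 0 if length < 2000 else 1 if length < 8000 else 2

busy = [0.0] * len(VM_MIPS)
for episode in range(3000):
    length = random.choice([500, 4000, 12000])
    s = size_class(length)
    a = random.randrange(3) if random.random() < eps else Q[s].index(max(Q[s]))
    runtime = length / VM_MIPS[a]
    busy[a] += runtime
    reward = -(w_time * busy[a] + w_cost * runtime * VM_COST[a])
    Q[s][a] += alpha * (reward - Q[s][a])      # bandit-style update (simplified)
    if episode % 500 == 499:
        busy = [0.0] * len(VM_MIPS)            # reset the toy queue periodically
print([[round(v, 1) for v in row] for row in Q])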
47. S-method: secure multimedia encryption technique in cloud environment.
- Author
-
Saini, Parul and Kumar, Krishan
- Abstract
Security of multimedia content on the Cloud has emerged as a major research area in today's internet era. Most exchange of multimedia content across the globe is done over the Cloud. Such communication takes place openly in a cloud environment where anyone can quickly access the data. Therefore, reliable security mechanisms and standards should be provided for users' shared multimedia data. This study achieves better protection of multimedia data over the Cloud than existing approaches, such that the information cannot feasibly be recovered by unauthorized parties. The encryption process is sped up using multiple VMs over the Cloud for parallel processing of chunks of the data. The proposed algorithm is well designed and suited for efficient computation with limited resources, as most wireless devices come with limited bandwidth and computational power. Qualitative and quantitative evaluations are performed to compare the performance of the proposed S-Method model with state-of-the-art models. The computing-time results show that the proposed approach can meet the requirements of real-time applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
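The S-Method entry above speeds up encryption by processing chunks of the multimedia data in parallel across VMs. The sketch below imitates that with local worker processes and an off-the-shelf authenticated cipher (Fernet from the cryptography package); the cipher choice, chunk size, and key handling are assumptions, since the S-Method cipher itself is not reproduced.

# Illustrative chunked, parallel encryption: split the payload into fixed-size
# chunks and encrypt them concurrently, mimicking distribution across VMs with
# local worker processes. Fernet (AES-CBC plus HMAC) stands in for the S-Method
# cipher; the chunk size and key handling are assumed.
from concurrent.futures import ProcessPoolExecutor
from cryptography.fernet import Fernet

CHUNK = 1 << 20                                      # 1 MiB chunks (assumed)

def encrypt_chunk(args):
    key, chunk = args
    return Fernet(key).encrypt(chunk)                # one worker ("VM") per chunk

def encrypt_parallel(data, key, workers=4):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encrypt_chunk, [(key, c) for c in chunks]))

if __name__ == "__main__":
    key = Fernet.generate_key()
    payload = b"\x00" * (5 * CHUNK)                  # stand-in for multimedia data
    print(len(encrypt_parallel(payload, key)), "encrypted chunks")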
48. Energy and Resource Modeling: A Comparative Analysis of Containers and Virtual Machines.
- Author
-
Dervinis, Donatas
- Subjects
- *
VIRTUAL machine systems , *POWER resources , *ENERGY consumption , *RANDOM access memory , *PERFORMANCE technology - Abstract
This paper presents a comparative analysis of energy and resource utilization between containers and virtual machines (VMs), technologies essential for modern cloud computing environments. Containers, lightweight virtualization solutions, enable rapid deployment, efficient scaling, and reduced overhead by sharing the host OS kernel, making them ideal for microservices and agile development workflows. Conversely, VMs offer enhanced security and isolation by virtualizing entire operating systems, suiting multi-tenant and legacy applications. Through mathematical modeling, this study quantifies the differences in energy consumption and resource efficiency of these technologies. The models utilize variables such as CPU and RAM usage and server load to assess each technology's performance in various scenarios. Results from simulations indicate that containers can significantly reduce infrastructure costs by optimizing resource allocation. A sample calculation for VMs and containers was performed to assess resource and energy demands. The results indicate that running 10 VMs requires 9.2% more CPU resources, and 12.5% more RAM compared to containers. In terms of energy consumption, VMs require 82% more energy than an equivalent setup of 10 containers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
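The comparison above reports that 10 VMs need 9.2% more CPU, 12.5% more RAM, and 82% more energy than 10 containers. The short calculation below applies those reported overheads to an assumed container baseline to show how the absolute figures would be derived; the baseline values are illustrative only.

# Worked example of the reported overheads for 10 instances: VM demand is derived
# from an assumed container baseline using the stated percentages
# (+9.2% CPU, +12.5% RAM, +82% energy). The baseline values are illustrative.
container_baseline = {"cpu_cores": 20.0, "ram_gb": 40.0, "energy_kwh": 3.0}
overhead = {"cpu_cores": 0.092, "ram_gb": 0.125, "energy_kwh": 0.82}

vm_demand = {k: v * (1 + overhead[k]) for k, v in container_baseline.items()}
for k in container_baseline:
    print(f"{k}: containers={container_baseline[k]:.1f}  VMs={vm_demand[k]:.2f}")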
49. Optimizing Edge Computing through Virtualization: A Comprehensive Review.
- Author
-
Saini, Nisha and Kumar, Jitender
- Subjects
EDGE computing ,VIRTUAL machine systems ,VIRTUAL reality ,RESEARCH methodology - Abstract
The adoption of virtualization in edge computing allows numerous applications and operating systems to run on a single physical server. The technological environment becomes increasingly efficient, economical, and hassle-free by virtue of this technology. This study explores the potential of virtualization in edge computing environments by creating virtual versions of physical resources and services. The primary objective is to reduce hardware and operational expenses by consolidating multiple systems onto one physical system. Various deployment strategies for virtualization on edge nodes, including containers, hypervisors, and virtual machines, are discussed. The article also delves into a virtualization-based framework to address challenges in edge computing. Additionally, through historical research on virtualization techniques, this article provides insights into the benefits and limitations of virtualization in edge computing. Moreover, the integration of virtualization and edge computing is examined, highlighting open challenges that need to be addressed for optimal utilization of virtualization in edge computing environments. The findings contribute to a deeper understanding of the possibilities and challenges associated with virtualization in edge computing environments, laying the foundation for further research in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Efficient multiverse electro search optimization for multi-cloud task scheduling and resource allocation
- Author
-
Ravi, Rupesh and Pillai, Manu J.
- Published
- 2024
- Full Text
- View/download PDF