5 results for "Tuli, Shreshth"
Search Results
2. MCDS: AI Augmented Workflow Scheduling in Mobile Edge Cloud Computing Systems
- Author
- Tuli, Shreshth, Casale, Giuliano, and Jennings, Nicholas R.
- Subjects
- Workflow scheduling, edge computing, cloud computing, deep learning, Monte Carlo learning, Quality of Service, optimal scheduling, processor scheduling, AI for PDC; arXiv: Distributed, Parallel, and Cluster Computing (cs.DC), Artificial Intelligence (cs.AI), Performance (cs.PF)
- Abstract
Workflow scheduling is a long-studied problem in parallel and distributed computing (PDC), aiming to utilize compute resources efficiently to meet users' service requirements. Recently proposed scheduling methods leverage the low response times of edge computing platforms to optimize application Quality of Service (QoS). However, scheduling workflow applications in mobile edge-cloud systems is challenging due to computational heterogeneity, changing latencies of mobile devices, and the volatile nature of workload resource requirements. To overcome these difficulties, it is essential, but at the same time challenging, to develop a long-sighted optimization scheme that efficiently models the QoS objectives. In this work, we propose MCDS: Monte Carlo Learning using Deep Surrogate Models to efficiently schedule workflow applications in mobile edge-cloud computing systems. MCDS is an Artificial Intelligence (AI) based scheduling approach that uses a tree-based search strategy and a deep neural network-based surrogate model to estimate the long-term QoS impact of immediate actions for robust optimization of scheduling decisions. Experiments on physical and simulated edge-cloud testbeds show that MCDS can improve over the state-of-the-art methods in terms of energy consumption, response time, SLA violations, and cost by at least 6.13, 4.56, 45.09, and 30.71 percent, respectively. (A minimal code sketch of this search-plus-surrogate loop appears after this record.)
- Comment
- Accepted in IEEE Transactions on Parallel and Distributed Systems (Special Issue on PDC for AI), 2022
- Published
- 2021
- Full Text
- View/download PDF
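The abstract above sketches the core of MCDS: a Monte Carlo tree search over scheduling actions in which leaf evaluations come from a deep surrogate model instead of costly rollouts. Below is a minimal, hypothetical sketch of that pattern; the node structure, the UCT constants, and the `surrogate`/`expand` callables are illustrative assumptions, not the paper's implementation.

```python
# Surrogate-guided Monte Carlo tree search (illustrative sketch, not MCDS's code).
import math
import random

N_SIMULATIONS = 100  # search budget per scheduling decision (assumed)
C_UCT = 1.4          # UCT exploration constant (assumed)

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node):
    # Pick the child with the best exploit + explore score.
    return max(node.children, key=lambda c: c.value / (c.visits + 1e-9)
               + C_UCT * math.sqrt(math.log(node.visits + 1) / (c.visits + 1e-9)))

def schedule(root_state, surrogate, expand):
    """surrogate(state) -> estimated long-term QoS; expand(state) -> [(action, next_state)]."""
    root = Node(root_state)
    for _ in range(N_SIMULATIONS):
        node = root
        # 1. Selection: descend via UCT until reaching a leaf.
        while node.children:
            node = uct_select(node)
        # 2. Expansion: add a child per feasible scheduling action.
        for action, nxt in expand(node.state):
            node.children.append(Node(nxt, parent=node, action=action))
        leaf = random.choice(node.children) if node.children else node
        # 3. Evaluation: the deep surrogate replaces an expensive rollout.
        value = surrogate(leaf.state)
        # 4. Backpropagation: update visit counts and values up to the root.
        while leaf is not None:
            leaf.visits += 1
            leaf.value += value
            leaf = leaf.parent
    # Commit to the most-visited immediate action.
    return max(root.children, key=lambda c: c.visits).action
```

In the paper's setting, `surrogate` would correspond to the trained neural QoS estimator and `expand` to enumerating feasible task-to-host placements; both are left abstract here.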
3. SimTune: bridging the simulator reality gap for resource management in edge-cloud computing.
- Author
- Tuli, Shreshth, Casale, Giuliano, and Jennings, Nicholas R.
- Subjects
- Resource management, Digital twin, Edge computing, Quality of service, Cloud computing, Data transmission systems, Internet of Things
- Abstract
Industries and services are undergoing an Internet of Things centric transformation globally, giving rise to an explosion of multi-modal data generated each second. This, combined with the requirement of low-latency result delivery, has led to the ubiquitous adoption of the edge and cloud computing paradigms. Edge computing follows the data gravity principle, wherein the computational devices move closer to the end-users to minimize data transfer and communication times. However, large-scale computation has exacerbated the problem of efficient resource management in hybrid edge-cloud platforms. In this regard, data-driven models such as deep neural networks (DNNs) have gained popularity, giving rise to the notion of edge intelligence. However, DNNs face significant problems of data saturation when fed volatile data: providing more data no longer translates into improvements in performance. To address this issue, prior work has leveraged coupled simulators that, akin to digital twins, generate out-of-distribution training data, alleviating the data-saturation problem. However, simulators face the reality-gap problem, that is, inaccuracy in the emulation of real computational infrastructure due to the abstractions such simulators make. To combat this, we develop a framework, SimTune, that tackles this challenge by leveraging a low-fidelity surrogate model of the high-fidelity simulator to update the parameters of the latter, so as to increase the simulation accuracy. This further helps co-simulated methods generalize to edge-cloud configurations for which human-encoded parameters are not known a priori. Experiments comparing SimTune against state-of-the-art data-driven resource management solutions on a real edge-cloud platform demonstrate that simulator tuning can improve quality of service metrics such as energy consumption and response time by up to 14.7% and 7.6%, respectively. (A minimal sketch of a surrogate-assisted tuning loop of this kind appears after this record.)
- Published
- 2022
- Full Text
- View/download PDF
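The tuning loop the abstract describes, where a cheap surrogate of the high-fidelity simulator steers the search for simulator parameters that match real measurements, can be illustrated with a generic surrogate-assisted optimization. The sketch below swaps in a Gaussian-process surrogate and a random candidate search; `run_simulator`, `real_metrics`, and the parameter bounds are assumptions, not the SimTune API.

```python
# Generic surrogate-assisted simulator tuning (illustrative sketch, not SimTune's code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def tune(run_simulator, real_metrics, bounds, n_init=10, n_iter=30):
    """run_simulator(params) -> simulated QoS vector; real_metrics: measured QoS vector."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    # Initial design: random parameter settings and their reality-gap errors.
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([np.linalg.norm(run_simulator(p) - real_metrics) for p in X])
    gp = GaussianProcessRegressor()
    for _ in range(n_iter):
        gp.fit(X, y)  # low-fidelity surrogate of params -> reality gap
        # The surrogate screens many candidate settings cheaply...
        cand = rng.uniform(lo, hi, size=(1000, len(lo)))
        best = cand[np.argmin(gp.predict(cand))]
        # ...and only the most promising one hits the expensive simulator.
        err = np.linalg.norm(run_simulator(best) - real_metrics)
        X, y = np.vstack([X, best]), np.append(y, err)
    return X[np.argmin(y)]  # simulator parameters with the smallest reality gap
```

A call such as tune(my_simulator, measured_qos, bounds=[(0, 1), (0, 100)]) would return the parameter vector to load back into the simulator; in SimTune's terms, this shrinks the reality gap before the simulator generates training data.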
4. MCDS: AI Augmented Workflow Scheduling in Mobile Edge Cloud Computing Systems.
- Author
- Tuli, Shreshth, Casale, Giuliano, and Jennings, Nicholas R.
- Subjects
- Computer systems, Edge computing, Artificial intelligence, Workflow, Mobile computing, Cloud computing, Mobile learning
- Abstract
Workflow scheduling is a long-studied problem in parallel and distributed computing (PDC), aiming to utilize compute resources efficiently to meet users' service requirements. Recently proposed scheduling methods leverage the low response times of edge computing platforms to optimize application Quality of Service (QoS). However, scheduling workflow applications in mobile edge-cloud systems is challenging due to computational heterogeneity, changing latencies of mobile devices, and the volatile nature of workload resource requirements. To overcome these difficulties, it is essential, but at the same time challenging, to develop a long-sighted optimization scheme that efficiently models the QoS objectives. In this work, we propose MCDS: Monte Carlo Learning using Deep Surrogate Models to efficiently schedule workflow applications in mobile edge-cloud computing systems. MCDS is an Artificial Intelligence (AI) based scheduling approach that uses a tree-based search strategy and a deep neural network-based surrogate model to estimate the long-term QoS impact of immediate actions for robust optimization of scheduling decisions. Experiments on physical and simulated edge-cloud testbeds show that MCDS can improve over the state-of-the-art methods in terms of energy consumption, response time, SLA violations, and cost by at least 6.13, 4.56, 45.09, and 30.71 percent, respectively.
- Published
- 2022
- Full Text
- View/download PDF
5. Dynamic Scheduling for Stochastic Edge-Cloud Computing Environments Using A3C Learning and Residual Recurrent Neural Networks.
- Author
- Tuli, Shreshth, Ilager, Shashikant, Ramamohanarao, Kotagiri, and Buyya, Rajkumar
- Subjects
- Recurrent neural networks, Reinforcement learning, Scheduling, Energy consumption
- Abstract
The ubiquitous adoption of Internet-of-Things (IoT) based applications has resulted in the emergence of the Fog computing paradigm, which allows seamlessly harnessing both mobile-edge and cloud resources. Efficient scheduling of application tasks in such environments is challenging due to constrained resource capabilities, mobility factors in IoT, resource heterogeneity, network hierarchy, and stochastic behaviors. Existing heuristics and Reinforcement Learning based approaches lack generalizability and quick adaptability, and thus fail to tackle this problem optimally. They are also unable to utilize temporal workload patterns and are suitable only for centralized setups. However, asynchronous advantage actor-critic (A3C) learning is known to adapt quickly to dynamic scenarios with little data, and residual recurrent neural networks (R2N2) can quickly update model parameters. Thus, we propose an A3C based real-time scheduler for stochastic Edge-Cloud environments that allows decentralized learning concurrently across multiple agents. We use the R2N2 architecture to capture a large number of host and task parameters together with temporal patterns to provide efficient scheduling decisions. The proposed model is adaptive and able to tune different hyper-parameters based on the application requirements. We explicate our choice of hyper-parameters through sensitivity analysis. Experiments conducted on a real-world dataset show a significant improvement in terms of energy consumption, response time, Service Level Agreement (SLA) violations, and running cost by 14.4, 7.74, 31.9, and 4.64 percent, respectively, when compared to the state-of-the-art algorithms. (A minimal sketch of a residual recurrent actor-critic network appears after this record.)
- Published
- 2022
- Full Text
- View/download PDF
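As a rough illustration of the network shape the abstract describes, the sketch below places a residual skip connection around a recurrent cell and attaches separate actor and critic heads. The GRU cell, layer widths, and flat state encoding are assumptions for illustration; the paper's R2N2 architecture and its featurization of host/task parameters may differ.

```python
# Residual recurrent actor-critic network (illustrative sketch, not the paper's R2N2).
import torch
import torch.nn as nn

class ResidualRecurrentScheduler(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.encode = nn.Linear(state_dim, hidden)  # embed host + task features
        self.gru = nn.GRUCell(hidden, hidden)       # captures temporal workload patterns
        self.actor = nn.Linear(hidden, n_actions)   # policy over host assignments
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, state, h):
        x = torch.relu(self.encode(state))
        h = self.gru(x, h) + x  # residual skip over the recurrent cell
        return torch.softmax(self.actor(h), dim=-1), self.critic(h), h

# Single forward pass; in an A3C setup, several asynchronous workers would each
# hold a copy of this network and push policy-gradient updates to shared weights.
net = ResidualRecurrentScheduler(state_dim=64, n_actions=10)
probs, value, h = net(torch.zeros(1, 64), torch.zeros(1, 128))
```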
Discovery Service for Jio Institute Digital Library