5 results for "Ciciani, Bruno"
Search Results
2. Proactive Scalability and Management of Resources in Hybrid Clouds via Machine Learning
- Author
-
Bruno Ciciani, Pierangelo Di Sanzo, Luca Forte, Alessandro Pellegrini, and Dimiter R. Avresky
- Subjects
Computer science, Cloud computing, Distributed computing, Software rejuvenation, Software aging, Overlay network, Workload, Machine learning, Scalability, Artificial intelligence
- Abstract
In this paper, we present a novel framework for supporting the management and optimization of applications subject to software anomalies and deployed on large-scale cloud architectures composed of different geographically distributed cloud regions. The framework uses machine learning models to predict failures caused by the accumulation of anomalies. It introduces a novel workload balancing approach and a proactive system scale-up/scale-down technique. We developed a prototype of the framework and present experiments validating the applicability of the proposed approaches.
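The proactive technique the abstract describes can be sketched as a simple control loop: a model predicts time-to-failure from anomaly-related metrics, and the system scales up or rejuvenates before the predicted failure. The linear predictor, metric names, and thresholds below are illustrative assumptions, not the authors' actual models.

```python
# Hypothetical sketch of failure-prediction-driven proactive scaling.
# The predictor and thresholds are stand-ins for the paper's ML models.

def predict_time_to_failure(mem_leak_rate_mb_s, error_rate_per_s, free_mem_mb):
    """Crude remaining-time estimate: seconds until leaked memory
    exhausts free memory, shortened by the observed error rate."""
    if mem_leak_rate_mb_s <= 0:
        return float("inf")
    base = free_mem_mb / mem_leak_rate_mb_s      # seconds to memory exhaustion
    return base / (1.0 + error_rate_per_s)       # errors accelerate failure

def plan_action(ttf_seconds, scale_up_horizon=600, rejuvenate_horizon=120):
    """Map the predicted time-to-failure to a proactive action."""
    if ttf_seconds < rejuvenate_horizon:
        return "rejuvenate"      # restart the aging instance immediately
    if ttf_seconds < scale_up_horizon:
        return "scale_up"        # add a replica before the failure hits
    return "steady"

ttf = predict_time_to_failure(mem_leak_rate_mb_s=2.0,
                              error_rate_per_s=0.5,
                              free_mem_mb=900.0)
print(plan_action(ttf))
```

In the real framework the predictor would be trained on collected anomaly traces rather than hard-coded; the point of the sketch is the decision structure, not the model.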
- Published
- 2015
3. Analytical/ML Mixed Approach for Concurrency Regulation in Software Transactional Memory
- Author
-
Bruno Ciciani, Diego Rughetti, Francesco Quaglia, and Pierangelo Di Sanzo
- Subjects
Computer science, Distributed computing, Concurrency, Concurrent computing, Software transactional memory, Performance models, Performance optimization, Energy optimization, Data modeling, Benchmark (computing)
- Abstract
In this article we exploit a combination of analytical and Machine Learning (ML) techniques to build a performance model that allows dynamically tuning the level of concurrency of applications based on Software Transactional Memory (STM). Our mixed approach has the advantage of reducing the training time of pure machine-learning methods while avoiding the approximation errors that typically affect pure analytical approaches. It therefore allows very fast construction of highly reliable performance models, which can be promptly and effectively exploited to optimize actual application runs. We also present a real implementation of a concurrency regulation architecture based on the mixed modeling approach, integrated with the open-source TinySTM package, together with experimental data from runs of applications taken from the STAMP benchmark suite, demonstrating the effectiveness of our proposal. © 2014 IEEE.
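The mixed approach can be illustrated in miniature: an analytical throughput curve provides the baseline shape, and a learned per-configuration correction absorbs the residual error from a few observed samples. The toy contention model and correction scheme below are illustrative assumptions, not the authors' actual equations or ML component.

```python
# Illustrative sketch of analytical + learned-correction concurrency tuning.

def analytical_throughput(threads, abort_prob_per_pair=0.05):
    """Toy analytical model: useful work grows with the thread count,
    but each extra thread raises the transaction abort probability."""
    p_commit = (1.0 - abort_prob_per_pair) ** (threads - 1)
    return threads * p_commit

def learn_correction(observed):
    """'ML' step reduced to its essence: learn a multiplicative residual
    correction from observed (threads -> throughput) samples."""
    return {t: obs / analytical_throughput(t) for t, obs in observed.items()}

def best_concurrency(candidates, correction):
    """Pick the thread count maximizing corrected predicted throughput."""
    return max(candidates,
               key=lambda t: analytical_throughput(t) * correction.get(t, 1.0))

# Pretend measurements show the analytical model is optimistic at high counts.
observed = {2: 1.8, 4: 3.0, 8: 3.2, 16: 2.0}
corr = learn_correction(observed)
print(best_concurrency([2, 4, 8, 16], corr))
```

The benefit mirrored here is the one the abstract claims: the analytical curve needs no training, and the correction needs far fewer samples than learning the whole curve from scratch.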
- Published
- 2014
4. Dynamic feature selection for machine-learning based concurrency regulation in STM
- Author
-
Francesco Quaglia, Pierangelo Di Sanzo, Bruno Ciciani, and Diego Rughetti
- Subjects
Computer science, Distributed computing, Concurrency, Concurrency control, Feature selection, Machine learning, Artificial neural network, Software transactional memory, Transaction processing, Performance optimization, Performance prediction, Sampling overhead reduction, Benchmark (computing)
- Abstract
In this paper we explore machine-learning approaches for dynamically selecting the well-suited number of concurrent threads in applications relying on Software Transactional Memory (STM). Specifically, we present a solution that dynamically shrinks or enlarges the set of input features to be exploited by the machine learner. This allows for tuning the concurrency level while also minimizing the overhead of input-feature sampling, given that the cardinality of the input-feature set is always tuned to the minimum value that still guarantees reliability of workload characterization. We also present a fully fledged implementation of our proposal within the TinySTM open source framework, and provide the results of an experimental study relying on the STAMP benchmark suite, which show a significant reduction of the response time with respect to proposals based on static feature selection. © 2014 IEEE.
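The shrink-to-minimum-cardinality idea can be sketched as greedy backward elimination: drop any feature whose removal keeps the predictor's validation error within a budget, so only the features that actually pay for their sampling cost remain. The feature names and the error oracle below are stand-ins for retraining and validating the neural predictor, not the paper's actual procedure.

```python
# Hypothetical sketch of dynamic feature-set shrinking for low-overhead
# workload characterization.

def validation_error(feature_set):
    """Stand-in for the ML validation step: here, error drops as more
    of the 'informative' features are included."""
    informative = {"read_set_size", "write_set_size", "abort_rate"}
    covered = len(informative & set(feature_set))
    return 0.30 - 0.08 * covered     # toy monotone relationship

def shrink_features(features, max_error=0.10):
    """Greedy backward elimination: drop any feature whose removal
    keeps the validation error within the budget."""
    selected = list(features)
    for f in list(selected):
        trial = [x for x in selected if x != f]
        if trial and validation_error(trial) <= max_error:
            selected = trial
    return selected

all_feats = ["read_set_size", "write_set_size", "abort_rate",
             "tx_duration", "cache_misses"]
print(sorted(shrink_features(all_feats)))
```

A symmetric "enlarge" pass (re-adding features when the error budget is violated as the workload shifts) would complete the dynamic scheme the abstract describes.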
- Published
- 2014
5. Providing Transaction Class-Based QoS in In-Memory Data Grids via Machine Learning
- Author
-
Diego Rughetti, Bruno Ciciani, Francesco Maria Molfese, and Pierangelo Di Sanzo
- Subjects
Computer science, Distributed computing, Cloud computing, Quality of service, Machine learning, Artificial neural network, In-memory transactional data grids, Grid computing, Workload, Performance optimization, Performance prediction, Database transaction, Benchmark (computing)
- Abstract
Elastic architectures and the "pay-as-you-go" resource pricing model offered by many cloud infrastructure providers may seem the right choice for companies dealing with data-centric applications characterized by highly variable workloads. In this context, in-memory transactional data grids have proven particularly well suited to exploiting the advantages of elastic computing platforms, mainly thanks to their ability to be dynamically (re-)sized and tuned. However, when specific QoS requirements must be met, architectures of this kind have proven complex for humans to manage. In particular, their management is a very complex task without mechanisms supporting run-time automatic sizing/tuning of the data platform and of the underlying (virtual) hardware resources provided by the cloud. In this paper, we present a neural network-based architecture in which the system is constantly and automatically re-configured, particularly in terms of computing resources, in order to achieve transaction class-based QoS while minimizing infrastructure costs. We also present results showing the effectiveness of our architecture, which has been evaluated on top of the FutureGrid IaaS cloud using the Red Hat Infinispan in-memory data grid and the TPC-C benchmark.
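The sizing decision the abstract describes reduces to: find the cheapest resource allocation whose predicted per-class response time satisfies every class's QoS bound. In the paper a trained neural network supplies the response-time predictions; the queueing-style stand-in, transaction class names, and rates below are illustrative assumptions only.

```python
# Illustrative sketch of transaction class-based QoS-driven sizing.
# A toy M/M/1-style formula stands in for the neural-network predictor.

def predicted_response_time(tx_class, arrival_rate, nodes):
    """Stand-in predictor: per-class service capacity scales with nodes."""
    per_node_rate = {"payment": 50.0, "new_order": 30.0}[tx_class]
    service_rate = per_node_rate * nodes
    if arrival_rate >= service_rate:
        return float("inf")              # saturated: QoS cannot be met
    return 1.0 / (service_rate - arrival_rate)

def min_nodes_for_qos(workload, qos, max_nodes=16):
    """workload: class -> arrival rate (tx/s); qos: class -> max resp. time (s).
    Return the smallest node count meeting every class's bound."""
    for nodes in range(1, max_nodes + 1):
        if all(predicted_response_time(c, rate, nodes) <= qos[c]
               for c, rate in workload.items()):
            return nodes
    return max_nodes

workload = {"payment": 120.0, "new_order": 80.0}
qos = {"payment": 0.05, "new_order": 0.10}
print(min_nodes_for_qos(workload, qos))
```

Running this loop periodically against fresh workload measurements gives the constant automatic re-configuration the abstract claims, with cost minimized by always choosing the smallest feasible allocation.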
- Published
- 2014