112 results for "Computer software -- Quality control"
Search Results
2. Modern software review : techniques and technologies.
- Author
-
Wong, Yuk Kuen
- Subjects
Computer software -- Development, Computer software -- Evaluation, Computer software -- Quality control
- Abstract
Summary: "This book provides an understanding of the critical factors affecting software review performance and to provide practical guidelines for software reviews"--Provided by publisher.
- Published
- 2006
3. Towards a modern quality framework
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Services, Information and Data Engineering, Glinz, Martin, Seyff, Norbert, Bühne, Stan, Franch Gutiérrez, Javier, and Lauenroth, Kim
- Abstract
Quality frameworks have been used in requirements engineering (RE) for a long time to help elicit and document quality requirements. However, existing quality frameworks have major issues that hamper their applicability, particularly in RE, but also in other fields such as the design of digital systems. In this paper, we discuss the issues of existing quality frameworks and propose a new quality model, which has been designed for application as a quality framework in RE as well as in the design of digital systems. We present the rationale and requirements for our new model, introduce the model and sketch its application. Our work contributes to the improvement of quality frameworks used in RE and Digital Design., Peer Reviewed, Postprint (published version)
- Published
- 2023
4. Applying project-based learning to teach software analytics and best practices in data science
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Services, Information and Data Engineering, Martínez Fernández, Silverio Juan, Gómez Seoane, Cristina, and Lenarduzzi, Valentina
- Abstract
Due to recent industry needs, synergies between data science and software engineering are starting to be present in data science and engineering academic programs. Two synergies are: applying data science to manage the quality of software (software analytics) and applying software engineering best practices in data science projects to ensure quality attributes such as maintainability and reproducibility. The lack of these synergies in academic programs has been argued to be an educational problem. Hence, it becomes necessary to explore how to teach software analytics and software engineering best practices in data science programs. In this context, we provide hands-on guidance for conducting laboratories that apply project-based learning in order to teach software analytics and software engineering best practices to data science students. We aim to improve the software engineering skills of data science students in order to produce software of higher quality through software analytics. We focus on two skills: following a process and applying software engineering best practices. We apply project-based learning as the main teaching methodology to reach the intended outcomes. This teaching experience shows the introduction of project-based learning in a laboratory, where students applied data science and software engineering best practices to analyze and detect improvements in software quality. We carried out a case study over two academic semesters with 63 data science bachelor students. The students found the synergies of the project positive for their learning. In the project, they highlighted both the utility of using the CRISP-DM data mining process and software engineering best practices such as a software project structure convention applied to a data science project., This paper was partly funded by a teaching innovation project of ICE@UPC-BarcelonaTech (entitled ‘‘Audiovisual and digital material for data engineering, a teaching innovation project with open science’’), and the ‘‘Beatriz Galindo’’ Spanish Program BEA-GAL18/00064., Peer Reviewed, Postprint (published version)
- Published
- 2023
5. Bayesian network analysis of software logs for data-driven software maintenance
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Services, Information and Data Engineering, Rey Juárez, Santiago del, Martínez Fernández, Silverio Juan, and Salmerón Cerdán, Antonio
- Abstract
Software organisations aim to develop and maintain high-quality software systems. Due to the large amounts of behaviour data available, software organisations can conduct data-driven software maintenance. Indeed, software quality assurance and improvement programs have attracted many researchers' attention. Bayesian Networks (BNs) are proposed as a log analysis technique to discover poor performance indicators in a system and to explore usage patterns that usually require temporal analysis. For this, an action research study is designed and conducted to improve the software quality and the user experience of a web application using BNs as a technique to analyse software logs. To this aim, three models with BNs are created. As a result, multiple enhancement points have been identified within the application, ranging from performance issues and errors to recurring user usage patterns. These enhancement points enable the creation of cards in the Scrum process of the web application, contributing to its data-driven software maintenance. Finally, the authors consider that BNs within quality-aware and data-driven software maintenance have great potential as a software log analysis technique and encourage the community to investigate their possible applications further. For this, the applied methodology and a replication package are shared., Junta de Andalucía, Grant/Award Number: P20‐00091; AEI, Grant/Award Number: PID2019‐106758GB‐32/AEI/10.13039/501100011033; Spanish project, Grant/Award Number: PDC2021‐121195‐I00; Spanish Program, Grant/Award Number: BEAGAL18/00064, Peer Reviewed, Postprint (published version)
- Published
- 2023
6. Bayesian network analysis of software logs for data-driven software maintenance
- Author
-
Antonio Salmerón Cerdán, Santiago Del Rey Juárez, Silverio Martínez-Fernández, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Services, Information and Data Engineering
- Subjects
Informàtica::Enginyeria del software [Àrees temàtiques de la UPC], Software quality, Computer software -- Quality control, Programari -- Control de qualitat, Software maintenance, Computer Graphics and Computer-Aided Design, Bayes methods
- Abstract
Software organisations aim to develop and maintain high-quality software systems. Due to the large amounts of behaviour data available, software organisations can conduct data-driven software maintenance. Indeed, software quality assurance and improvement programs have attracted many researchers' attention. Bayesian Networks (BNs) are proposed as a log analysis technique to discover poor performance indicators in a system and to explore usage patterns that usually require temporal analysis. For this, an action research study is designed and conducted to improve the software quality and the user experience of a web application using BNs as a technique to analyse software logs. To this aim, three models with BNs are created. As a result, multiple enhancement points have been identified within the application, ranging from performance issues and errors to recurring user usage patterns. These enhancement points enable the creation of cards in the Scrum process of the web application, contributing to its data-driven software maintenance. Finally, the authors consider that BNs within quality-aware and data-driven software maintenance have great potential as a software log analysis technique and encourage the community to investigate their possible applications further. For this, the applied methodology and a replication package are shared. Junta de Andalucía, Grant/Award Number: P20‐00091; AEI, Grant/Award Number: PID2019‐106758GB‐32/AEI/10.13039/501100011033; Spanish project, Grant/Award Number: PDC2021‐121195‐I00; Spanish Program, Grant/Award Number: BEAGAL18/00064
- Published
- 2023
7. Applying project-based learning to teach software analytics and best practices in data science
- Author
-
Martínez Fernández, Silverio Juan, Gómez Seoane, Cristina, Lenarduzzi, Valentina, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Services, Information and Data Engineering
- Subjects
Software engineering, Mètode de projectes, Project method in teaching, Software analytics, Informàtica::Enginyeria del software [Àrees temàtiques de la UPC], Software quality, Computer software -- Quality control, Programari -- Control de qualitat, Project-based learning, Data science
- Abstract
Due to recent industry needs, synergies between data science and software engineering are starting to be present in data science and engineering academic programs. Two synergies are: applying data science to manage the quality of software (software analytics) and applying software engineering best practices in data science projects to ensure quality attributes such as maintainability and reproducibility. The lack of these synergies in academic programs has been argued to be an educational problem. Hence, it becomes necessary to explore how to teach software analytics and software engineering best practices in data science programs. In this context, we provide hands-on guidance for conducting laboratories that apply project-based learning in order to teach software analytics and software engineering best practices to data science students. We aim to improve the software engineering skills of data science students in order to produce software of higher quality through software analytics. We focus on two skills: following a process and applying software engineering best practices. We apply project-based learning as the main teaching methodology to reach the intended outcomes. This teaching experience shows the introduction of project-based learning in a laboratory, where students applied data science and software engineering best practices to analyze and detect improvements in software quality. We carried out a case study over two academic semesters with 63 data science bachelor students. The students found the synergies of the project positive for their learning. In the project, they highlighted both the utility of using the CRISP-DM data mining process and software engineering best practices such as a software project structure convention applied to a data science project.
This paper was partly funded by a teaching innovation project of ICE@UPC-BarcelonaTech (entitled ‘‘Audiovisual and digital material for data engineering, a teaching innovation project with open science’’), and the ‘‘Beatriz Galindo’’ Spanish Program BEA-GAL18/00064.
- Published
- 2023
8. Measuring and Improving Agile Processes in a Small-Size Software Development Company
- Author
-
Michal Choras, Prabhat Ram, Lidia López, Rafał Kozik, Silverio Martínez-Fernández, Xavier Franch, Pilar Rodríguez, Tomasz Springer, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, and Publica
- Subjects
Computer software -- Development, General Computer Science, Computer science, Informàtica::Enginyeria del software [Àrees temàtiques de la UPC], Programari àgil -- Desenvolupament, SMEs, Computer software -- Quality control, Programari -- Control de qualitat, Tools, Software development process, Software, Empirical research, General Materials Science, Process metrics, Standards organizations, General Engineering, Software development, Software quality, Engineering management, Rapid software development, Software measurement, Programari -- Desenvolupament, Small and medium-sized enterprises, Agile software development, Companies, Software engineering
- Abstract
Context: Agile software development has become commonplace in software development companies due to the numerous benefits it provides. However, conducting Agile projects is demanding in Small and Medium Enterprises (SMEs), because projects start and end quickly, but still have to fulfil customers' quality requirements. Objective: This paper aims at reporting a practical experience on the use of metrics related to the software development process as a means of supporting SMEs in the development of software following an Agile methodology. Method: We followed Action-Research principles in a Polish small-size software development company. We developed and executed a study protocol suited to the needs of the company, using a pilot case. Results: A catalogue of Agile development process metrics practically validated in the context of a small-size software development company, adopted by the company in their Agile projects. Conclusions: Practitioners may adopt these metrics in their Agile projects, especially if working in an SME, and customise them to their own needs and tools. Academics may use the findings as a baseline for new research work, including new empirical studies. The authors would like to thank all the members of the QRapids H2020 project consortium.
- Published
- 2020
9. QFL: Data-driven feedback loop to manage quality in agile development
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, López Cuesta, Lidia, Bagnato, Alessandra, Ahbervé, Antonin, and Franch Gutiérrez, Javier
- Abstract
Background: Quality requirements (QRs) describe desired system qualities, playing an important role in the success of software projects. In the context of agile software development (ASD), where the main objective is the fast delivery of functionalities, QRs are often ill-defined and not well addressed during the development process. Software analytics tools help to control quality through the measurement of quality-related software aspects to support decision-makers in the process of QR management. Aim: The goal of this research is to explore the benefits of integrating a concrete software analytics tool, Q-Rapids Tool, to assess software quality and support QR management processes. Method: In the context of a technology transfer project, the Softeam company has integrated Q-Rapids Tool in their development process. We conducted a series of workshops involving Softeam members working in the Modelio product development. Results: We present the Quality Feedback Loop (QFL) process to be integrated in software development processes to control the complete QR life-cycle, from elicitation to validation. As a result of the implementation of QFL in Softeam, Modelio’s team members highlight the benefits of integrating a data analytics tool with their project planning tool and the fact that project managers can control the whole process, making the final decisions. Conclusions: Practitioners can benefit from the integration of software analytics tools as part of their software development toolchain to control software quality. The implementation of QFL promotes quality in the organization, and the integration of software analytics and project planning tools also improves the communication between teams., This work was supported by Q-Rapids (Quality-Aware Rapid Software Development). Q-Rapids was funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement nº 732253., Peer Reviewed, Postprint (author's final draft)
- Published
- 2021
10. Parallelware Tools: An Experimental Evaluation on POWER Systems
- Author
-
Xavier Martorell, Manuel Arenaz, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Barcelona Supercomputing Center, and Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions
- Subjects
Computer science, Concurrency, Informàtica::Enginyeria del software [Àrees temàtiques de la UPC], Parallel programming (Computer science), Static code analysis, POWER systems, Static program analysis, Computer software -- Quality control, Programari -- Control de qualitat, Programació en paral·lel (Informàtica), Concurrency and parallelism, Software development process, Software, Informàtica::Arquitectura de computadors::Arquitectures paral·leles [Àrees temàtiques de la UPC], Tasking, Software architecture, Parallelware tools, Detection of software defects, OpenMP, Systems development life cycle, Programari -- Disseny, Quality assurance and testing, Software engineering
- Abstract
Static code analysis tools are designed to aid software developers to build better quality software in less time, by detecting defects early in the software development life cycle. Even the most experienced developer regularly introduces coding defects. Identifying, mitigating and resolving defects is an essential part of the software development process, but frequently defects can go undetected. One defect can lead to a minor malfunction or cause serious security and safety issues. This is magnified in the development of the complex parallel software required to exploit modern heterogeneous multicore hardware. Thus, there is an urgent need for new static code analysis tools to help in building better concurrent and parallel software. The paper reports preliminary results about the use of Appentra’s Parallelware technology to address this problem from the following three perspectives: finding concurrency issues in the code, discovering new opportunities for parallelization in the code, and generating parallel-equivalent codes that enable tasks to run faster. The paper also presents experimental results using well-known scientific codes and POWER systems. This work has been partly funded from the Spanish Ministry of Science and Technology (TIN2015-65316-P), the Departament d’Innovació, Universitats i Empresa de la Generalitat de Catalunya (MPEXPAR: Models de Programació i Entorns d’Execució Parallels, 2014-SGR-1051), and the European Union’s Horizon 2020 research and innovation program through grant agreements MAESTRO (801101) and EPEEC (801051).
- Published
- 2021
11. QFL: Data-driven feedback loop to manage quality in agile development
- Author
-
Alessandra Bagnato, Antonin Ahberve, Lidia López, Xavier Franch, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Computer science, Informàtica::Enginyeria del software [Àrees temàtiques de la UPC], Programari àgil -- Desenvolupament, Software analytics tool, Computer software -- Quality control, Programari -- Control de qualitat, Requirements pattern, Toolchain, Software development process, Software analytics, Software, Quality management process, Quality monitoring, Decisió, Presa de, Software development, Quality, Software quality, Project planning, Requirement, Software engineering, Agile software development, Decision-making, Quality assessment
- Abstract
Background: Quality requirements (QRs) describe desired system qualities, playing an important role in the success of software projects. In the context of agile software development (ASD), where the main objective is the fast delivery of functionalities, QRs are often ill-defined and not well addressed during the development process. Software analytics tools help to control quality through the measurement of quality-related software aspects to support decision-makers in the process of QR management. Aim: The goal of this research is to explore the benefits of integrating a concrete software analytics tool, Q-Rapids Tool, to assess software quality and support QR management processes. Method: In the context of a technology transfer project, the Softeam company has integrated Q-Rapids Tool in their development process. We conducted a series of workshops involving Softeam members working in the Modelio product development. Results: We present the Quality Feedback Loop (QFL) process to be integrated in software development processes to control the complete QR life-cycle, from elicitation to validation. As a result of the implementation of QFL in Softeam, Modelio's team members highlight the benefits of integrating a data analytics tool with their project planning tool and the fact that project managers can control the whole process, making the final decisions. Conclusions: Practitioners can benefit from the integration of software analytics tools as part of their software development toolchain to control software quality. The implementation of QFL promotes quality in the organization, and the integration of software analytics and project planning tools also improves the communication between teams., Comment: 9 pages, Accepted for publication in IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS), IEEE, 2021
- Published
- 2021
12. Measuring and improving Agile Processes in a small-size software development company
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Choras, Michal, Springer, Thomas, Kozik, Rafal, López Cuesta, Lidia, Martínez Fernández, Silverio Juan, Ram, Prabhat, Rodríguez, Pilar, and Franch Gutiérrez, Javier
- Abstract
Context: Agile software development has become commonplace in software development companies due to the numerous benefits it provides. However, conducting Agile projects is demanding in Small and Medium Enterprises (SMEs), because projects start and end quickly, but still have to fulfil customers' quality requirements. Objective: This paper aims at reporting a practical experience on the use of metrics related to the software development process as a means of supporting SMEs in the development of software following an Agile methodology. Method: We followed Action-Research principles in a Polish small-size software development company. We developed and executed a study protocol suited to the needs of the company, using a pilot case. Results: A catalogue of Agile development process metrics practically validated in the context of a small-size software development company, adopted by the company in their Agile projects. Conclusions: Practitioners may adopt these metrics in their Agile projects, especially if working in an SME, and customise them to their own needs and tools. Academics may use the findings as a baseline for new research work, including new empirical studies., The authors would like to thank all the members of the QRapids H2020 project consortium., Peer Reviewed, Postprint (published version)
- Published
- 2020
13. Actionable software metrics: An industrial perspective
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Ram, Prabhat, Rodríguez, Pilar, Oivo, Markku, Martínez Fernández, Silverio Juan, Bagnato, Alessandra, Choras, Michal, Kozik, Rafal, Aaramaa, Sanja, and Ahola, Milla
- Abstract
Background: Practitioners would like to take action based on software metrics, as long as they find them reliable. Existing literature explores how metrics can be made reliable, but it remains unclear whether there are other conditions necessary for a metric to be actionable. Context & Method: In the context of a European H2020 project, we conducted a multiple case study to examine metrics’ use in four companies, and identified instances where these metrics influenced actions. We used an online questionnaire to enquire about the project participants’ views on actionable metrics. Next, we invited one participant from each company to elaborate on the identified metrics’ use for taking actions and the questionnaire responses (N=17). Result: We learned that a metric that is practical, contextual, and exhibits high data quality characteristics is actionable. Even a non-actionable metric can be useful, but an actionable metric mostly requires interpretation. However, the more these metrics are simple and reflect the software development context accurately, the less interpretation is required to infer actionable information from the metric. Company size and project characteristics can also influence the type of metric that can be actionable. Conclusion: This exploration of industry’s views on actionable metrics helps characterize actionable metrics in practical terms. This awareness of what characteristics constitute an actionable metric can facilitate their definition and development right from the start of a software metrics program., This work is a result of the Q-Rapids Project, funded by the European Union’s Horizon 2020 research and innovation program, under grant agreement No. 732253., Peer Reviewed, Postprint (author's final draft)
- Published
- 2020
14. Estimación y priorización de requisitos no-funcionales para desarrollo de software: Estado del arte
- Author
-
Universitat Politècnica de Catalunya. Doctorat en Computació, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Salamea Bravo, María José, González Palacio, Liliana, Oriol Hilari, Marc, and Farré Tost, Carles
- Abstract
Quality requirements (also called non-functional requirements) are those that make it possible to ensure software quality. They cover very diverse aspects, such as availability, security, performance, scalability, portability and usability, among others. Continuous technological advances, such as cloud software or the Internet of Things, pose new challenges in software development in order to guarantee a satisfactory level of quality in these aspects. Likewise, agile development methodologies, whose use is on the rise, such as SCRUM, XP and Kanban, do not provide the support needed for managing these quality requirements. In order to help software engineers make decisions about the level of quality required in a project, it is essential to know in advance: 1) which criteria will be taken into account to verify, prioritise, plan and/or negotiate quality requirements. It is also necessary to: 2) specify how those criteria will be evaluated, and 3) identify which factors of the project context may affect that evaluation. To address these three research questions, the authors of this chapter have designed and are carrying out a systematic literature study. This work presents, for discussion, a description of the methodology followed in that study, as well as some of the preliminary results obtained during its execution., Peer Reviewed, Postprint (published version)
- Published
- 2020
15. Industrial practices on requirements reuse: An interview-based study
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Franch Gutiérrez, Javier, Palomares Bonache, Cristina, and Quer, Carme
- Abstract
[Context and motivation] Requirements reuse has been proposed as a key asset for requirements engineers to efficiently elicit, validate and document software requirements and, as a consequence, obtain requirements specifications of better quality through more effective engineering processes. [Question/problem] Regardless of the impact requirements reuse could have on software projects’ success and efficiency, the requirements engineering community has published very few studies reporting the way in which this activity is conducted in industry. [Principal ideas/results] In this paper, we present the results of an interview-based study involving 24 IT professionals on whether they reuse requirements or not and how. Some kind of requirements reuse is carried out by the majority of respondents, with organizational and project-related factors being the main drivers. Quality requirements are the type most reused. The most common strategy is find-copy-paste-adapt. Respondents agreed that requirements reuse is beneficial, especially for project-related reasons. The most stated challenge to overcome in requirements reuse is related to the domain of the project and the development of a completely new system. [Contribution] With this study, we contribute to the state of the practice in the reuse of requirements by showing how real organizations carry out this process and the factors that influence it., This work has been partially funded by the Horizon 2020 project OpenReq, which is supported by the European Union under the Grant Nr. 732463., Peer Reviewed, Postprint (author's final draft)
- Published
- 2020
16. Industrial practices on requirements reuse: An interview-based study
- Author
-
Carme Quer, Xavier Franch, Cristina Palomares, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
050101 languages & linguistics ,Computer science ,Process (engineering) ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Context (language use) ,02 engineering and technology ,Requirements elicitation ,Computer software -- Quality control ,Programari -- Control de qualitat ,Reuse ,Domain (software engineering) ,0202 electrical engineering, electronic engineering, information engineering ,Enginyeria de requisits ,0501 psychology and cognitive sciences ,Quality (business) ,Software requirements ,Interview-based study ,Survey ,media_common ,Requirements reuse ,Requirements documentation ,Requirements engineering ,05 social sciences ,Engineering management ,020201 artificial intelligence & image processing - Abstract
[Context and motivation] Requirements reuse has been proposed as a key asset for requirements engineers to efficiently elicit, validate and document software requirements and, as a consequence, obtain requirements specifications of better quality through more effective engineering processes. [Question/problem] Regardless of the impact requirements reuse could have on software projects' success and efficiency, the requirements engineering community has published very few studies reporting the way in which this activity is conducted in industry. [Principal ideas/results] In this paper, we present the results of an interview-based study involving 24 IT professionals on whether they reuse requirements or not and how. Some kind of requirements reuse is carried out by the majority of respondents, with organizational and project-related factors being the main drivers. Quality requirements are the type most reused. The most common strategy is find-copy-paste-adapt. Respondents agreed that requirements reuse is beneficial, especially for project-related reasons. The most frequently stated challenge to overcome in requirements reuse relates to the domain of the project and the development of a completely new system. [Contribution] With this study, we contribute to the state of the practice in the reuse of requirements by showing how real organizations carry out this process and the factors that influence it. This work has been partially funded by the Horizon 2020 project OpenReq, which is supported by the European Union under the Grant Nr. 732463.
- Published
- 2020
17. Estimación y priorización de requisitos no-funcionales para desarrollo de software: Estado del arte
- Author
-
Salamea Bravo, María José, González Palacio, Liliana, Oriol Hilari, Marc|||0000-0003-1928-7024, Farré Tost, Carles|||0000-0001-5814-3782, Universitat Politècnica de Catalunya. Doctorat en Computació, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Computer software -- Quality control ,Programari -- Control de qualitat - Abstract
Quality requirements (also called non-functional requirements) are those that make it possible to assure software quality. They cover very diverse aspects, such as availability, security, performance, scalability, portability and usability, among others. Continuous technological advances, such as cloud software or the Internet of Things, pose new challenges in software development to guarantee a satisfactory quality level for these aspects. Likewise, agile development methodologies, whose use is on the rise, such as SCRUM, XP and Kanban, do not provide the support needed to manage these quality requirements. To help software engineers make decisions about the quality level required in a project, it is essential to know in advance: 1) which criteria will be taken into account to verify, prioritize, plan and/or negotiate quality requirements. It is also necessary to: 2) specify how those criteria will be evaluated, and 3) identify which factors of the project context may affect that evaluation. To answer these three research questions, the authors of this chapter have designed and are carrying out a systematic literature study. This work presents for discussion the methodology followed in that study, as well as some of the preliminary results obtained during its execution.
- Published
- 2020
18. Actionable software metrics: an industrial perspective
- Author
-
Alessandra Bagnato, Pilar Rodríguez, Milla Ahola, Michał Choraś, Silverio Martínez-Fernández, Prabhat Ram, Markku Oivo, Rafał Kozik, Sanja Aaramaa, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
metrics program ,Computer science ,business.industry ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Perspective (graphical) ,Software development ,020207 software engineering ,Context (language use) ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Computer-assisted web interviewing ,actionable metrics ,Data science ,Software metric ,context ,Action (philosophy) ,020204 information systems ,Data quality ,Machine learning ,Aprenentatge automàtic ,0202 electrical engineering, electronic engineering, information engineering ,data quality ,Metric (unit) ,business - Abstract
Background: Practitioners would like to take action based on software metrics, as long as they find them reliable. Existing literature explores how metrics can be made reliable, but it remains unclear whether other conditions are necessary for a metric to be actionable. Context & Method: In the context of a European H2020 project, we conducted a multiple case study of metrics' use in four companies, and identified instances where these metrics influenced actions. We used an online questionnaire to enquire about the project participants' views on actionable metrics. Next, we invited one participant from each company to elaborate on the identified metrics' use for taking actions and on the questionnaire responses (N=17). Result: We learned that a metric that is practical, contextual, and exhibits high data-quality characteristics is actionable. Even a non-actionable metric can be useful, but an actionable metric mostly requires interpretation. However, the more these metrics are simple and reflect the software development context accurately, the less interpretation is required to infer actionable information from the metric. Company size and project characteristics can also influence the type of metric that can be actionable. Conclusion: This exploration of industry's views on actionable metrics helps characterize actionable metrics in practical terms. Awareness of what characteristics constitute an actionable metric can facilitate their definition and development right from the start of a software metrics program. This work is a result of the Q-Rapids Project, funded by the European Union's Horizon 2020 research and innovation program, under grant agreement No. 732253.
- Published
- 2020
19. Practical experiences and value of applying software analytics to manage quality
- Author
-
Anna Maria Vollmer, Alessandra Bagnato, Pilar Rodríguez, Lidia López, Silverio Martínez-Fernández, Jari Partanen, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
FOS: Computer and information sciences ,Analytics ,Process management ,Process (engineering) ,Computer science ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,summative evaluation ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Computer Science - Software Engineering ,Software analytics ,Software ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,software analytics ,Quality (business) ,media_common ,technology transfer ,Product design ,business.industry ,020207 software engineering ,software quality ,Software quality ,Software Engineering (cs.SE) ,Enginyeria del programari ,business ,Quality assurance ,software engineering - Abstract
Background: Despite the growth in the use of software analytics platforms in industry, little empirical evidence is available about the challenges that practitioners face and the value that these platforms provide. Aim: The goal of this research is to explore the benefits of using a software analytics platform for practitioners managing quality. Method: In a technology transfer project, a software analytics platform was incrementally developed between academic and industrial partners to address their software quality problems. This paper focuses on exploring the value provided by this software analytics platform in two pilot projects. Results: Practitioners emphasized major benefits including the improvement of product quality and process performance and an increased awareness of product readiness. They especially perceived the semi-automated functionality of generating quality requirements by the software analytics platform as the benefit with the highest impact and most novel value for them. Conclusions: Practitioners can benefit from modern software analytics platforms, especially if they have time to adopt such a platform carefully and integrate it into their quality assurance activities., Comment: This is an Author's Accepted Manuscript of a paper consisting of a post-peer-review, pre-copyedit version of a paper accepted at the 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 2019. The final authenticated version is available online: https://ieeexplore.ieee.org/document/8870162
- Published
- 2019
- Full Text
- View/download PDF
20. Parallelware tools: an experimental evaluation on POWER systems
- Author
-
Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Barcelona Supercomputing Center, Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions, Arenaz Silva, Manuel, Martorell Bofill, Xavier, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Barcelona Supercomputing Center, Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions, Arenaz Silva, Manuel, and Martorell Bofill, Xavier
- Abstract
Static code analysis tools are designed to aid software developers to build better quality software in less time, by detecting defects early in the software development life cycle. Even the most experienced developer regularly introduces coding defects. Identifying, mitigating and resolving defects is an essential part of the software development process, but frequently defects can go undetected. One defect can lead to a minor malfunction or cause serious security and safety issues. This is magnified in the development of the complex parallel software required to exploit modern heterogeneous multicore hardware. Thus, there is an urgent need for new static code analysis tools to help in building better concurrent and parallel software. The paper reports preliminary results about the use of Appentra’s Parallelware technology to address this problem from the following three perspectives: finding concurrency issues in the code, discovering new opportunities for parallelization in the code, and generating parallel-equivalent codes that enable tasks to run faster. The paper also presents experimental results using well-known scientific codes and POWER systems., This work has been partly funded from the Spanish Ministry of Science and Technology (TIN2015-65316-P), the Departament d’Innovació, Universitats i Empresa de la Generalitat de Catalunya (MPEXPAR: Models de Programació i Entorns d’Execució Parallels, 2014-SGR-1051), and the European Union’s Horizon 2020 research and innovation program throughgrant agreements MAESTRO (801101) and EPEEC (801051)., Peer Reviewed, Postprint (author's final draft)
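The concurrency-issue detection described above can be illustrated with a deliberately tiny sketch. The code below is not Appentra's Parallelware analysis (which builds a full semantic model of the program); it is a toy, regex-based check with an invented function name that flags writes to variables an OpenMP `parallel for` region neither privatizes nor declares locally, a classic data-race pattern.

```python
import re

def find_shared_writes(source: str) -> list[str]:
    """Toy static check: flag variables written inside an OpenMP
    parallel-for region that are declared outside it and not covered
    by a private/reduction clause -- a common source of data races.
    Purely illustrative; a real tool resolves scopes semantically."""
    races = []
    # Find each parallel region and the braced body that follows it.
    for region in re.finditer(r"#pragma omp parallel for(.*?)\{(.*?)\}",
                              source, re.DOTALL):
        clause, body = region.groups()
        # Variables the pragma already privatizes or reduces are safe
        # (toy regex: captures only the last name in each clause).
        protected = set(re.findall(
            r"\b(?:private|reduction)\s*\(\s*[^)]*?(\w+)\s*\)", clause))
        # Toy write detector: name followed by =, +=, -= or *=.
        for write in re.finditer(r"(\w+)\s*(?:\+|-|\*)?=", body):
            var = write.group(1)
            declared_locally = re.search(
                rf"\b(?:int|float|double)\s+{var}\b", body)
            if var not in protected and not declared_locally:
                races.append(var)
    return races

code = """
#pragma omp parallel for
{ sum += a[i]; }
"""
print(find_shared_writes(code))  # → ['sum']
```

A real checker would also have to handle pointer aliasing, function calls and nested regions, which is exactly why semantic tools such as Parallelware exist.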
- Published
- 2019
21. Influence of developer factors on code quality: a data study
- Author
-
Universitat Politècnica de Catalunya. Doctorat en Computació, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Salamea Bravo, María José, Farré Tost, Carles, Universitat Politècnica de Catalunya. Doctorat en Computació, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Salamea Bravo, María José, and Farré Tost, Carles
- Abstract
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes,creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works., Automatic source-code inspection tools help to assess, monitor and improve code quality. Since these tools only examine the software project’s codebase, they overlook other possible factors that may impact code quality and the assessment of the technical debt (TD). Our initial hypothesis is that human factors associated with the software developers, like coding expertise, communication skills, and experience in the project have some measurable impact on the code quality. In this exploratory study, we test this hypothesis on two large open source repositories, using TD as a code quality metric and the data that may be inferred from the version control systems. The preliminary results of our statistical analysis suggest that the level of participation of the developers and their experience in the project have a positive correlation with the amount of TD that they introduce. On the contrary, communication skills have barely any impact on TD., Peer Reviewed, Postprint (author's final draft)
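The kind of statistical analysis the study describes can be sketched in a few lines. The per-developer numbers below are invented for illustration; in the study, the factors (participation, project experience) and the technical debt introduced are inferred from version control systems.

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-developer data mined from a version control system:
commits    = [12, 45, 3, 78, 20, 55]     # level of participation
td_minutes = [30, 160, 5, 300, 70, 190]  # technical debt introduced
r = pearson(commits, td_minutes)
print(round(r, 2))  # close to 1: strong positive correlation
```

A positive `r` here would match the paper's preliminary finding that participation correlates positively with the amount of TD introduced.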
- Published
- 2019
22. Practical experiences and value of applying software analytics to manage quality
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Vollmer, Anna Maria, Martinez Fernandez, Silverio, Bagnato, Alessandra, Partanen, Jari, López Cuesta, Lidia, Rodríguez, Pilar, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Vollmer, Anna Maria, Martinez Fernandez, Silverio, Bagnato, Alessandra, Partanen, Jari, López Cuesta, Lidia, and Rodríguez, Pilar
- Abstract
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes,creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works., Background: Despite the growth in the use of software analytics platforms in industry, little empirical evidence is available about the challenges that practitioners face and the value that these platforms provide. Aim: The goal of this research is to explore the benefits of using a software analytics platform for practitioners managing quality. Method: In a technology transfer project, a software analytics platform was incrementally developed between academic and industrial partners to address their software quality problems. This paper focuses on exploring the value provided by this software analytics platform in two pilot projects. Results: Practitioners emphasized major benefits including the improvement of product quality and process performance and an increased awareness of product readiness. They especially perceived the semi-automated functionality of generating quality requirements by the software analytics platform as the benefit with the highest impact and most novel value for them. Conclusions: Practitioners can benefit from modern software analytics platforms, especially if they have time to adopt such a platform carefully and integrate it into their quality assurance activities., Peer Reviewed, Postprint (author's final draft)
- Published
- 2019
23. Continuously assessing and improving software quality with software analytics tools: a case study
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Martínez Fernández, Silverio Juan, Vollmer, Anna Maria, Jedlitschka, Andreas, Franch Gutiérrez, Javier, López Cuesta, Lidia, Ram, Prabhat, Rodríguez, Pilar, Aaramaa, Sanja, Bagnato, Alessandra, Choras, Michal, Partanen, Jari, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Martínez Fernández, Silverio Juan, Vollmer, Anna Maria, Jedlitschka, Andreas, Franch Gutiérrez, Javier, López Cuesta, Lidia, Ram, Prabhat, Rodríguez, Pilar, Aaramaa, Sanja, Bagnato, Alessandra, Choras, Michal, and Partanen, Jari
- Abstract
In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem to be particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality has also been a key target for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This study aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product, and whether practitioners intend to use it. Over the course of more than one year, the four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. Quantitative and qualitative analyses provided positive results; i.e., the practitioners’ perception with regard to the tool’s understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings and constructive feedback can be used for future improvements. 
We conclude that potential for future adoption of quality models within software analytics tools definitely exists and encourage other practitioners to use the presented seven challenges and seven lessons learned and adopt them in their companies., Peer Reviewed, Postprint (published version)
- Published
- 2019
24. Integrating runtime data with development data to monitor external quality: challenges from practice
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Aghabayli, Aytaj, Pfahl, Dietmar, Martínez Fernández, Silverio Juan, Trendowicz, Adam, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Aghabayli, Aytaj, Pfahl, Dietmar, Martínez Fernández, Silverio Juan, and Trendowicz, Adam
- Abstract
The use of software analytics in software development companies has grown in the last years. Still, there is little support for such companies to obtain integrated, insightful and actionable information at the right time. This research aims at exploring the integration of runtime and development data to analyze to what extent external quality is related to internal quality based on real project data. Over the course of more than three months, we collected and analyzed data of a software product following the CRISP-DM process. We studied the integration possibilities between runtime and development data, and implemented two integrations. The number of bugs found in code has a weak positive correlation with code quality measures and a moderate negative correlation with the number of rule violations found. Other types of correlations require more data cleaning and higher-quality data for their exploration. During our study, several challenges to exploiting data gathered both at runtime and during development were encountered. Lessons learned from integrating external and internal data in software projects may be useful for practitioners and researchers alike., Peer Reviewed, Postprint (author's final draft)
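The key-based integration of runtime and development data that the study performs can be sketched as follows; the module names and counts below are invented for illustration, not the study's data.

```python
# Hypothetical per-module measurements: runtime data (bugs reported
# by users) and development data (static-analysis rule violations).
runtime = {"auth": 14, "billing": 3, "search": 9, "export": 1}
development = {"auth": 120, "billing": 25, "search": 80, "export": 10,
               "admin": 5}  # no runtime data for 'admin' yet

# Integrate: keep only modules present in both sources -- the same
# kind of key-based join needed before any cross-source correlation.
joined = {m: (runtime[m], development[m])
          for m in runtime if m in development}

for module, (bugs, violations) in sorted(joined.items()):
    print(f"{module:8s} bugs={bugs:3d} violations={violations:4d}")
```

Modules present in only one source (like `admin` here) drop out of the join, which is one concrete form of the data-cleaning challenge the abstract mentions.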
- Published
- 2019
25. Q-Rapids: Quality-Aware Rapid Software Development: an H2020 Project
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, López Cuesta, Lidia, Oriol Hilari, Marc, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, López Cuesta, Lidia, and Oriol Hilari, Marc
- Abstract
This work reports the objectives, current state, and outcomes of the Q-Rapids H2020 project. Q-Rapids (Quality-Aware Rapid Software Development) proposes a data-driven approach to the production of software following very short development cycles. The focus of Q-Rapids is on quality aspects, represented through quality requirements. The Q-Rapids platform, which is the tangible software asset emerging from the project, mines software repositories and usage logs to identify candidate quality requirements that may ameliorate the values of strategic indicators like product quality, time to market or team productivity. Four companies are providing use cases to evaluate the platform and associated processes., Peer Reviewed, Postprint (author's final draft)
- Published
- 2019
26. Quality-aware Rapid Software Development Project: The Q-Rapids Project
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Franch Gutiérrez, Javier, López Cuesta, Lidia, Martinez Fernandez, Silverio, Oriol Hilari, Marc, Rodríguez, Pilar, Trendowicz, Adam, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Franch Gutiérrez, Javier, López Cuesta, Lidia, Martinez Fernandez, Silverio, Oriol Hilari, Marc, Rodríguez, Pilar, and Trendowicz, Adam
- Abstract
Software quality poses continuously new challenges in software development, including aspects related to both software development and system usage, which significantly impact the success of software systems. The Q-Rapids H2020 project defines an evidence-based, data-driven quality-aware rapid software development methodology. Quality requirements (QRs) are incrementally elicited, refined and improved based on data gathered from software repositories, project management tools, system usage and quality of service. This data is analysed and aggregated into quality-related key strategic indicators (e.g., development effort required to include a given QR in the next development cycle) which are presented to decision makers using a highly informative dashboard. The Q-Rapids platform is being evaluated in-premises by the four companies participating in the consortium, reporting useful lessons learned and directions for new development., Peer Reviewed, Postprint (author's final draft)
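The bottom-up aggregation of gathered data into key strategic indicators can be illustrated with a minimal sketch. The metric names, factor structure and weights below are invented for the example, not the actual Q-Rapids quality model.

```python
# Hypothetical quality model: raw metrics (normalized to [0, 1]) are
# aggregated into quality factors, and factors into a strategic
# indicator, using weighted averages.
metrics = {"test_coverage": 0.8, "code_smells": 0.6,
           "build_stability": 0.9, "bug_density": 0.7}

factors = {
    "code_quality":   {"test_coverage": 0.5, "code_smells": 0.5},
    "process_health": {"build_stability": 0.6, "bug_density": 0.4},
}

def aggregate(weights: dict, values: dict) -> float:
    """Weighted average of the named values."""
    return sum(w * values[name] for name, w in weights.items())

factor_values = {f: aggregate(w, metrics) for f, w in factors.items()}

# A strategic indicator, e.g. 'product readiness', from the factors:
readiness = aggregate({"code_quality": 0.5, "process_health": 0.5},
                      factor_values)
print(round(readiness, 3))  # → 0.76
```

A dashboard would then present `readiness` (and the factor values behind it) to decision makers, letting them drill down from the indicator to the raw metrics.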
- Published
- 2019
27. Software development metrics prediction using time series methods
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Choras, Michal, Kozik, Rafal, Pawlicki, Marek, Holubowicz, Witold, Franch Gutiérrez, Javier, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Choras, Michal, Kozik, Rafal, Pawlicki, Marek, Holubowicz, Witold, and Franch Gutiérrez, Javier
- Abstract
The software development process is an intricate task: the growing complexity of software solutions and the inflating code-line count erode code coherence and readability, and are thus among the causes of software faults and declining software quality. Debugging software during development is significantly less expensive than attempting damage control after the software's release. An automated quality-related analysis of developed code, which includes code analysis and correlation of development data, seems like an ideal solution. In this paper, the ability to predict software faults and software quality is scrutinized. We investigate four models that can be used to analyze time-based data series for prediction of trends observed in the software development process: Exponential Smoothing, the Holt-Winters Model, the Autoregressive Integrated Moving Average (ARIMA) and Recurrent Neural Networks (RNN). Time-series analysis methods prove a good fit for software-related data prediction. Such methods and tools can lend a helping hand to Product Owners in their daily decision-making, e.g. in the assignment of tasks and the prediction of time, bugs and time to release. Results of the research are presented., Peer Reviewed, Postprint (author's final draft)
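Of the four models, Exponential Smoothing is the simplest to sketch. The weekly bug counts and the smoothing factor below are invented for illustration; the paper's data comes from real development projects.

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each smoothed value blends the
    newest observation (weight alpha) with the previous smoothed
    value (weight 1 - alpha)."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical weekly bug counts from an issue tracker:
bugs = [12, 15, 11, 18, 16, 20, 19]
level = exponential_smoothing(bugs, alpha=0.4)

# Simple exponential smoothing yields a flat one-step-ahead forecast:
forecast = level[-1]
print(round(forecast, 2))  # → 17.86
```

Holt-Winters extends this with trend and seasonality terms, while ARIMA and RNNs model richer temporal structure at the cost of more data and tuning.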
- Published
- 2019
28. Continuously assessing and improving software quality with software analytics tools: a case study
- Author
-
Silverio Martinez-Fernandez, Anna Maria Vollmer, Andreas Jedlitschka, Xavier Franch, Lidia Lopez, Prabhat Ram, Pilar Rodriguez, Sanja Aaramaa, Alessandra Bagnato, Michal Choras, Jari Partanen, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, and Publica
- Subjects
Monitoring ,Software analytics ,Case study ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Programari àgil -- Desenvolupament ,Software analytics tool ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,Tools ,case study ,software analytics ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,quality model ,Agile software development ,lcsh:TK1-9971 ,Companies ,Real-time systems ,Quality model ,software analytics tool - Abstract
In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem to be particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality have also been key targets for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This paper aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product and whether practitioners intend to use it. Over the course of more than a year, four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. The quantitative and qualitative analyses provided positive results, i.e., the practitioners' perception with regard to the tool's understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings, and constructive feedback can be used for future improvements. 
We conclude that the potential for future adoption of quality models within software analytics tools definitely exists and encourage other practitioners to use the presented seven challenges and seven lessons learned and adopt them in their companies.
- Published
- 2019
29. Q-Rapids: Quality-Aware Rapid Software Development – An H2020 Project
- Author
-
Marc Oriol, Lidia López, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
050101 languages & linguistics ,Process management ,Computer science ,media_common.quotation_subject ,Time to market ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Asset (computer security) ,Software ,Enginyeria de requisits ,0202 electrical engineering, electronic engineering, information engineering ,Data-driven requirements engineering ,0501 psychology and cognitive sciences ,Use case ,Quality (business) ,media_common ,Q-Rapids H2020 Project ,business.industry ,05 social sciences ,Software development ,Requirements engineering ,Product (business) ,020201 artificial intelligence & image processing ,business ,Quality requirements ,Rapid software development - Abstract
This work reports the objectives, current state, and outcomes of the Q-Rapids H2020 project. Q-Rapids (Quality-Aware Rapid Software Development) proposes a data-driven approach to the production of software following very short development cycles. The focus of Q-Rapids is on quality aspects, represented through quality requirements. The Q-Rapids platform, which is the tangible software asset emerging from the project, mines software repositories and usage logs to identify candidate quality requirements that may ameliorate the values of strategic indicators like product quality, time to market or team productivity. Four companies are providing use cases to evaluate the platform and associated processes.
- Published
- 2019
- Full Text
- View/download PDF
30. Quality-Aware Rapid Software Development Project: The Q-Rapids Project
- Author
-
Silverio Martínez-Fernández, Xavier Franch, Adam Trendowicz, Lidia López, Pilar Rodríguez, Marc Oriol, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Process management ,Computer science ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Dashboard (business) ,Programari àgil -- Desenvolupament ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Software repositories ,Software development process ,Software ,Q-Rapids H2020 ,0502 economics and business ,Enginyeria de requisits ,0202 electrical engineering, electronic engineering, information engineering ,Data-driven requirements engineering ,Quality models ,Software system ,Project management ,Software analytic tools ,business.industry ,05 social sciences ,Software development ,Requirements engineering ,Non-functional requirements ,020207 software engineering ,Agile software development ,business ,Quality requirements ,050203 business & management ,Rapid software development - Abstract
Software quality continuously poses new challenges in software development, involving aspects of both software development and system usage that significantly impact the success of software systems. The Q-Rapids H2020 project defines an evidence-based, data-driven, quality-aware rapid software development methodology. Quality requirements (QRs) are incrementally elicited, refined, and improved based on data gathered from software repositories, project management tools, system usage, and quality of service. This data is analysed and aggregated into quality-related key strategic indicators (e.g., the development effort required to include a given QR in the next development cycle), which are presented to decision makers in a highly informative dashboard. The Q-Rapids platform is being evaluated on-premises by the four companies participating in the consortium, which report useful lessons learned and directions for new development.
- Published
- 2019
- Full Text
- View/download PDF
31. A quality model for actionable analytics in rapid software development
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Martínez Fernández, Silverio Juan, Jedlitschka, Andreas, Guzmán, Liliana, Vollmer, Anna Maria, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Martínez Fernández, Silverio Juan, Jedlitschka, Andreas, Guzmán, Liliana, and Vollmer, Anna Maria
- Abstract
Accessing relevant data on the product, process, and usage perspectives of software as well as integrating and analyzing such data is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that take more time to find manually and adds transparency among the perspectives of system, process, and usage., Peer Reviewed, Postprint (author's final draft)
- Published
- 2018
32. Data-driven elicitation, assessment and documentation of quality requirements in agile software development
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Franch Gutiérrez, Javier, Gómez Seoane, Cristina, Jedlitschka, Andreas, López Cuesta, Lidia, Martínez Fernández, Silverio Juan, Oriol Hilari, Marc, Partanen, Jari, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Franch Gutiérrez, Javier, Gómez Seoane, Cristina, Jedlitschka, Andreas, López Cuesta, Lidia, Martínez Fernández, Silverio Juan, Oriol Hilari, Marc, and Partanen, Jari
- Abstract
Quality Requirements (QRs) are difficult to manage in agile software development. Given the pressure to deploy fast, quality concerns are often sacrificed for the sake of richer functionality. Besides, artefacts as user stories are not particularly well-suited for representing QRs. In this exploratory paper, we envisage a data-driven method, called Q-Rapids, to QR elicitation, assessment and documentation in agile software development. Q-Rapids proposes: 1) The collection and analysis of design and runtime data in order to raise quality alerts; 2) The suggestion of candidate QRs to address these alerts; 3) A strategic analysis of the impact of such requirements by visualizing their effect on a set of indicators rendered in a dashboard; 4) The documentation of the requirements (if finally accepted) in the backlog. The approach is illustrated with scenarios evaluated through a questionnaire by experts from a telecom company., Peer Reviewed, Postprint (author's final draft)
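Steps 1 and 2 of the method outlined above (raise quality alerts from collected data, then suggest candidate QRs) can be sketched as follows; the thresholds, metric names, and QR templates are hypothetical, invented only to show the shape of the pipeline:

```python
# Illustrative sketch of steps 1-2 of the data-driven method described
# above: a runtime measurement crosses a threshold, a quality alert is
# raised, and a candidate quality requirement (QR) is suggested.
# Thresholds, metric names, and QR templates are hypothetical.

THRESHOLDS = {"avg_response_ms": 200.0, "crash_rate": 0.01}
QR_TEMPLATES = {
    "avg_response_ms": "The system shall answer 95% of requests within {limit} ms.",
    "crash_rate": "The crash rate shall stay below {limit} per session.",
}

def suggest_qrs(measurements):
    """Return candidate QRs for every metric that violates its threshold."""
    alerts = {m: v for m, v in measurements.items()
              if m in THRESHOLDS and v > THRESHOLDS[m]}
    return [QR_TEMPLATES[m].format(limit=THRESHOLDS[m]) for m in alerts]

qrs = suggest_qrs({"avg_response_ms": 350.0, "crash_rate": 0.002})
print(qrs)  # one candidate QR, for response time only
```

In the method itself, accepted candidates would then be assessed against strategic indicators and, if accepted, documented in the backlog.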
- Published
- 2018
33. A Quality Model for Actionable Analytics in Rapid Software Development
- Author
-
Liliana Guzman, Anna Maria Vollmer, Andreas Jedlitschka, Silverio Martínez-Fernández, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
FOS: Computer and information sciences ,Agile ,Computer science ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Programari -- Fiabilitat ,Data modeling ,Software analytics ,Computer Science - Software Engineering ,Software ,Decisió, Presa de ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,media_common ,business.industry ,H2020 ,Software development ,020207 software engineering ,Computer software -- Reliability ,Data science ,Software Engineering (cs.SE) ,Analytics ,020201 artificial intelligence & image processing ,business ,Decision making ,Quality model ,Q-Rapids ,Rapid software development ,Agile software development - Abstract
Background: Accessing relevant data on the product, process, and usage perspectives of software as well as integrating and analyzing such data is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). 
Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that take more time to find manually and adds transparency among the perspectives of system, process, and usage., This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online
- Published
- 2018
- Full Text
- View/download PDF
34. Conflicts and synergies among quality requirements
- Author
-
Xavier Franch, Barry Boehm, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Engineering ,Non-functional requirement ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,02 engineering and technology ,Computer software -- Quality control ,Programari -- Control de qualitat ,Integrated product team ,Programari -- Fiabilitat ,Software qualities ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,Reliability (statistics) ,media_common ,Vulnerability (computing) ,business.industry ,Software architecture ,Nonfunctional requirements ,020207 software engineering ,Usability ,Availability ,Computer software -- Reliability ,Reliability ,Software quality ,Maintainability ,Risk analysis (engineering) ,Security ,020201 artificial intelligence & image processing ,Programari -- Disseny ,Single point of failure ,business ,Software engineering ,Quality requirements - Abstract
Analyses of the interactions among quality requirements (QRs) have often found that optimizing one QR can cause serious problems with other QRs. As just one relevant example, one large project had an Integrated Product Team optimize the system for Security. In doing so, it reduced the system's vulnerability profile by having a single-agent key distribution system and a single copy of the database – only to have the Reliability engineers point out that these were system-critical single points of failure. The project's Security-optimized architecture also created conflicts with the system's Performance, Usability, and Modifiability. Of course, optimizing the system for Security had synergies with Reliability in providing high levels of Confidentiality, Integrity, and Availability. This panel aims at fostering discussion on these relationships among QRs and on how the use of data repositories may help discover them.
- Published
- 2017
35. Conflicts and synergies among quality requirements
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Boehm, Barry, Franch Gutiérrez, Javier, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Boehm, Barry, and Franch Gutiérrez, Javier
- Abstract
Analyses of the interactions among quality requirements (QRs) have often found that optimizing one QR can cause serious problems with other QRs. As just one relevant example, one large project had an Integrated Product Team optimize the system for Security. In doing so, it reduced the system's vulnerability profile by having a single-agent key distribution system and a single copy of the database – only to have the Reliability engineers point out that these were system-critical single points of failure. The project's Security-optimized architecture also created conflicts with the system's Performance, Usability, and Modifiability. Of course, optimizing the system for Security had synergies with Reliability in providing high levels of Confidentiality, Integrity, and Availability. This panel aims at fostering discussion on these relationships among QRs and on how the use of data repositories may help discover them., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
36. Determinizing Monitors for HML with Recursion
- Author
-
Luca Aceto, Anna Ingólfsdóttir, Adrian Francalanza, Sævar Örn Kjartansson, and Antonis Achilleos
- Subjects
Discrete mathematics ,FOS: Computer and information sciences ,Computer software -- Development ,Computer Science - Logic in Computer Science ,Finite-state machine ,TheoryofComputation_COMPUTATIONBYABSTRACTDEVICES ,Logic ,Computer science ,Formal Languages and Automata Theory (cs.FL) ,Process (computing) ,Recursion (computer science) ,Computer Science - Formal Languages and Automata Theory ,Computer software -- Quality control ,Theoretical Computer Science ,Exponential function ,Logic in Computer Science (cs.LO) ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Computational Theory and Mathematics ,Exponential growth ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Computer Science::Logic in Computer Science ,Software ,Computer Science::Formal Languages and Automata Theory - Abstract
We examine the determinization of monitors for HML with recursion. We demonstrate that every monitor is equivalent to a deterministic one, which is at most doubly exponential in size with respect to the original monitor. When monitors are described as CCS-like processes, this doubly exponential bound is optimal. When (deterministic) monitors are described as finite automata (as their LTS), then they can be exponentially more succinct than their CCS process form., non peer-reviewed
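The determinization result can be illustrated with the textbook subset (powerset) construction, here applied to a small nondeterministic monitor given as a labelled transition system; the concrete monitor below is invented for illustration:

```python
from itertools import chain

# Subset (powerset) construction on a small nondeterministic monitor,
# given as an LTS: delta maps (state, action) to a set of successor
# states, and reaching any state in VERDICT flags a violation. The
# worst case is exponential in the number of states, in line with the
# bounds discussed above. The monitor itself is invented for illustration.
delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q1", "b"): {"q2"},
}
VERDICT = {"q2"}

def determinize(start):
    """Return deterministic transitions over frozensets of states."""
    dtrans, todo, seen = {}, [frozenset({start})], set()
    while todo:
        s = todo.pop()
        if s in seen:
            continue
        seen.add(s)
        actions = {a for (q, a) in delta if q in s}
        for a in actions:
            succ = frozenset(chain.from_iterable(
                delta.get((q, a), set()) for q in s))
            dtrans[(s, a)] = succ
            todo.append(succ)
    return dtrans

dtrans = determinize("q0")
s = frozenset({"q0"})
for act in ["a", "b"]:          # run the trace a.b deterministically
    s = dtrans[(s, act)]
print(bool(s & VERDICT))  # True: the trace a.b reaches the verdict
```

Each macro-state is a set of monitor states, so in the worst case the construction visits all 2^n subsets; the doubly exponential bound in the paper concerns the CCS-like process representation, which this sketch does not model.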
- Published
- 2016
- Full Text
- View/download PDF
37. Many-valued institutions for constraint specification
- Author
-
Fernando Orejas, José Luiz Fiadeiro, Claudia Elena Chiriźăź, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, and Universitat Politècnica de Catalunya. ALBCOM - Algorismia, Bioinformàtica, Complexitat i Mètodes Formals
- Subjects
Graded semantic consequence ,Computer science ,Constraint satisfaction problems ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Service discovery ,Logical systems ,0102 computer and information sciences ,02 engineering and technology ,Computer software -- Quality control ,Contracts ,Programari -- Control de qualitat ,01 natural sciences ,Expressive power ,Functional requirements ,Soft-constraint satisfaction problems ,Complex software system specification ,Formal specification ,0202 electrical engineering, electronic engineering, information engineering ,Constraint specification ,Soft constraints ,Software system ,Many-valued institutions ,Constraint satisfaction problem ,Service-level agreements ,business.industry ,Functional requirement ,Quality attributes ,010201 computation theory & mathematics ,Compatibility (mechanics) ,020201 artificial intelligence & image processing ,Software engineering ,business - Abstract
We advance a general technique for enriching logical systems with soft constraints, making them suitable for specifying complex software systems where parts are put together not just based on how they meet certain functional requirements but also on how they optimise certain constraints. This added expressive power is required, for example, for capturing quality attributes that need to be optimised or, more generally, for formalising what are usually called service-level agreements. More specifically, we show how institutions endowed with a graded semantic consequence can accommodate soft-constraint satisfaction problems. We illustrate our approach by showing how, in the context of service discovery, one can quantify the compatibility of two specifications and thus formalise the selection of the most promising provider of a required resource.
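The selection step sketched in the abstract (grading how well each candidate satisfies soft constraints and choosing the most promising provider) can be illustrated with a toy example; the providers, constraints, grades, and the averaging scheme are all invented for illustration and do not reflect the institutions-based formalism itself:

```python
# Toy illustration of soft-constraint selection: each provider is
# graded in [0, 1] on each soft constraint, overall compatibility is
# the average satisfaction degree, and the most promising provider is
# selected. Providers, constraints, and grades are invented.

providers = {
    "provider_a": {"latency": 0.9, "cost": 0.4, "availability": 0.8},
    "provider_b": {"latency": 0.6, "cost": 0.9, "availability": 0.7},
}

def compatibility(grades):
    """Average satisfaction degree across soft constraints."""
    return sum(grades.values()) / len(grades)

best = max(providers, key=lambda p: compatibility(providers[p]))
print(best, round(compatibility(providers[best]), 3))  # provider_b 0.733
```

The paper's many-valued institutions generalize exactly this idea: satisfaction is graded rather than Boolean, so specifications can be ranked instead of merely accepted or rejected.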
- Published
- 2016
38. Using artificial bee colony to optimize software quality estimation models. (c2015)
- Author
-
Abou Assi, Tatiana Antoine and Abou Assi, Tatiana Antoine
- Abstract
Computer software has become an important foundation in many domains, including medicine and engineering. Consequently, with such widespread application of software, the need to ensure software quality characteristics such as efficiency, reliability, and stability has emerged. To measure such characteristics directly, we must wait until the software is implemented, tested, and put to use for a certain amount of time. Several software metrics have been proposed in the literature to avoid this long and costly process, and they have proved to be a good means of estimating software quality. For this purpose, software quality prediction models are built. These establish a relationship between internal sub-characteristics such as inheritance, coupling, and size, and external software quality attributes such as maintainability and stability. Using such relationships, one can build a model to estimate the quality of new software systems. Such models are mainly constructed by either statistical techniques such as regression, or machine learning techniques such as C4.5 and neural networks. We build our model using machine learning techniques, in particular rule-based models. These have a white-box nature that yields both the classification and the reason for it, making them attractive to domain experts. In this thesis, we propose a novel heuristic based on Artificial Bee Colony (ABC) to optimize rule-based software quality prediction models. We validate our technique on data describing the maintainability and reliability of classes in an object-oriented system. We compare our models to others constructed using well-established techniques such as C4.5, genetic algorithms, simulated annealing, tabu search, a multi-layer perceptron with back-propagation, a multi-layer perceptron hybridized with ABC, and the majority classifier. Results show that, in most cases, our proposed technique outperforms the others
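The thesis applies ABC to discrete rule-based models; as a hedged sketch of the basic ABC loop it builds on, here minimizing a toy continuous function rather than optimizing rules, with all parameters chosen for illustration:

```python
import random

# Minimal Artificial Bee Colony (ABC) loop minimizing a toy sphere
# function. The thesis applies ABC to rule-based quality models; this
# continuous version only illustrates the employed / onlooker / scout
# phases. All parameters are illustrative.
random.seed(0)
DIM, SOURCES, LIMIT, ITERS = 2, 10, 20, 200
LO, HI = -5.0, 5.0

def fitness(x):                       # lower is better
    return sum(v * v for v in x)

def neighbour(x, other):
    """Perturb one dimension of x relative to another food source."""
    j = random.randrange(DIM)
    y = list(x)
    y[j] += random.uniform(-1, 1) * (x[j] - other[j])
    y[j] = min(HI, max(LO, y[j]))
    return y

foods = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(SOURCES)]
trials = [0] * SOURCES

for _ in range(ITERS):
    # Employed bees: local search around each food source.
    for i in range(SOURCES):
        cand = neighbour(foods[i], random.choice(foods))
        if fitness(cand) < fitness(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    # Onlooker bees: revisit sources proportionally to their quality.
    weights = [1.0 / (1.0 + fitness(f)) for f in foods]
    for _ in range(SOURCES):
        i = random.choices(range(SOURCES), weights=weights)[0]
        cand = neighbour(foods[i], random.choice(foods))
        if fitness(cand) < fitness(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    # Scout bees: abandon sources that stopped improving.
    for i in range(SOURCES):
        if trials[i] > LIMIT:
            foods[i] = [random.uniform(LO, HI) for _ in range(DIM)]
            trials[i] = 0

best = min(foods, key=fitness)
print(round(fitness(best), 6))
```

Adapting this loop to rule-based models replaces the continuous perturbation with discrete edits to rules (e.g., flipping a condition), while the greedy acceptance and the three bee phases stay the same.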
- Published
- 2016
39. Many-valued institutions for constraint specification
- Author
-
Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, Universitat Politècnica de Catalunya. ALBCOM - Algorismia, Bioinformàtica, Complexitat i Mètodes Formals, Chirita, Claudia Elena, Fiadeiro, José Luiz, Orejas Valdés, Fernando, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, Universitat Politècnica de Catalunya. ALBCOM - Algorismia, Bioinformàtica, Complexitat i Mètodes Formals, Chirita, Claudia Elena, Fiadeiro, José Luiz, and Orejas Valdés, Fernando
- Abstract
We advance a general technique for enriching logical systems with soft constraints, making them suitable for specifying complex software systems where parts are put together not just based on how they meet certain functional requirements but also on how they optimise certain constraints. This added expressive power is required, for example, for capturing quality attributes that need to be optimised or, more generally, for formalising what are usually called service-level agreements. More specifically, we show how institutions endowed with a graded semantic consequence can accommodate soft-constraint satisfaction problems. We illustrate our approach by showing how, in the context of service discovery, one can quantify the compatibility of two specifications and thus formalise the selection of the most promising provider of a required resource., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
40. Software engineering and formal methods. Lecture notes in computer science
- Author
-
Gabriel Dimech, Adrian Francalanza, and Christian Colombo
- Subjects
Connected component ,Formal methods (Computer science) ,Service (systems architecture) ,Computer software -- Development ,Correctness ,Computer science ,business.industry ,Runtime verification ,Autonomous distributed systems ,Real-time data processing ,Computer software -- Quality control ,Abstraction layer ,Business process management ,Object-oriented methods (Computer science) ,Computer software -- Verification ,Embedded system ,Component (UML) ,Instrumentation (computer programming) ,business - Abstract
Enterprise Service Buses (ESBs) are highly-dynamic component platforms that are hard to test for correctness because their connected components may not necessarily be present prior to deployment. Runtime Verification (RV) is a potential solution towards ascertaining correctness of an ESB, by checking the ESB’s execution at runtime, and detecting any deviations from the expected behaviour. A crucial aspect impinging upon the feasibility of this verification approach is the runtime overheads introduced, which may have adverse effects on the execution of the ESB system being monitored. In turn, one factor that bears a major effect on such overheads is the instrumentation mechanism adopted by the RV setup. In this paper we identify three likely (but substantially different) ESB instrumentation mechanisms, detail their implementation over a widely-used ESB platform, assess them qualitatively, and empirically evaluate the runtime overheads introduced by these mechanisms., peer-reviewed
- Published
- 2015
41. Requirements capturing and software development methodologies for trustworthy systems
- Author
-
Κάτσικας, Σωκράτης, Σχολή Τεχνολογιών Πληροφορικής και Επικοινωνιών. Τμήμα Ψηφιακών Συστημάτων, and Τεχνοοικονομική Διοίκηση και Ασφάλεια Ψηφιακών Συστημάτων
- Subjects
Computer software -- Development ,Software engineering ,Computer software -- Reliability ,Computer software -- Quality control - Abstract
In this Master's thesis, the concept of information systems trustworthiness is covered: existing methodologies for collecting and documenting security requirements are described, as well as how existing methodologies support the delivery of trustworthy systems. Moreover, the thesis employs a case study in order to support its conclusions on how to achieve trustworthy software. Trustworthiness is a characteristic of any system that satisfies the desired level of trust by not failing. Systems that should possess such a property are mainly those that manage sensitive records, critical infrastructure, etc. Capturing a system's requirements is the process of discovering and identifying the system's stakeholders and their needs. A system's requirements are the features and qualities that the system should possess, and they are elicited from the system's stakeholders (i.e., owners, users). Therefore, the identification of security requirements is of crucial importance for achieving the desired security goals, namely trustworthiness. In order for a system to ensure that its security specifications are satisfied, security concerns must be taken into consideration in every phase of the software engineering lifecycle: from requirements engineering to design, implementation, testing, and deployment. To increase users' trust in the systems they use, software defects must be reduced. By following a systematic development methodology during the software development process, the risk of not achieving an acceptable result is reduced, if not eliminated, since software development methodologies impose a disciplined process upon software development.
- Published
- 2014
42. Compliance Issues In Cloud Computing Systems
- Author
-
Yimam, Dereje (author), Fernandez, Eduardo B. (Thesis advisor), Florida Atlantic University (Degree grantor), College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science, Yimam, Dereje (author), Fernandez, Eduardo B. (Thesis advisor), Florida Atlantic University (Degree grantor), College of Engineering and Computer Science, and Department of Computer and Electrical Engineering and Computer Science
- Abstract
Summary: Appealing features of cloud services such as elasticity, scalability, universal access, low entry cost, and flexible billing motivate consumers to migrate their core businesses into the cloud. However, there are challenges about security, privacy, and compliance. Building compliant systems is difficult because of the complex nature of regulations and cloud systems. In addition, the lack of complete, precise, vendor neutral, and platform independent software architectures makes compliance even harder. We have attempted to make regulations clearer and more precise with patterns and reference architectures (RAs). We have analyzed regulation policies, identified overlaps, and abstracted them as patterns to build compliant RAs. RAs should be complete, precise, abstract, vendor neutral, platform independent, and with no implementation details; however, their levels of detail and abstraction are still debatable and there is no commonly accepted definition about what an RA should contain. Existing approaches to build RAs lack structured templates and systematic procedures. In addition, most approaches do not take full advantage of patterns and best practices that promote architectural quality. We have developed a five-step approach by analyzing features from available approaches but refined and combined them in a new way. We consider an RA as a big compound pattern that can improve the quality of the concrete architectures derived from it and from which we can derive more specialized RAs for cloud systems. We have built an RA for HIPAA, a compliance RA (CRA), and a specialized compliance and security RA (CSRA) for cloud systems. These RAs take advantage of patterns and best practices that promote software quality. We evaluated the architecture by creating profiles. The proposed approach can be used to build RAs from scratch or to build new RAs by abstracting real RAs for a given context. 
We have also described an RA itself as a compound pattern by using a modified, 2015, Includes bibliography., Degree granted: Dissertation (Ph.D.)--Florida Atlantic University, 2015., Collection: FAU Electronic Theses and Dissertations Collection
- Published
- 2015
43. Software defect prediction using Bayesian networks and kernel methods
- Author
-
Okutan, Ahmet, Yıldız, Olcay Taner, Işık Üniversitesi, Fen Bilimleri Enstitüsü, Bilgisayar Mühendisliği Doktora Programı, Okutan, Ahmet, and Bilgisayar Mühendisliği Anabilim Dalı
- Subjects
Neural networks (Computer science) ,Artificial intelligence ,Bayesian statistical decision theory ,QA76.76.Q35 O38 2012 ,Computer software -- Quality control ,Computer Engineering and Computer Science and Control ,Bilgisayar Mühendisliği Bilimleri-Bilgisayar ve Kontrol - Abstract
Text in English; Abstract: English and Turkish Includes bibliographical references (leaves 115-127) xix, 128 leaves There are lots of different software metrics discovered and used for defect prediction in the literature. Instead of dealing with so many metrics, it would be practical and easy if we could determine the set of metrics that are most important and focus on them more to predict defectiveness. We use Bayesian modelling to determine the influential relationships among software metrics and defect proneness. In addition to the metrics used in Promise data repository, We define two more metrics, i.e. NOD for the number of developers and LOCQ for the source code quality. We wxtract these metrics by inspecting the source code repositories of the selected Promise data repository data sets. At the end of our modeling, We learn both the marginal defect proneness probability of the whole software system and the set of most effective metrics. Our experiments on nine open source Promise data repository data sets show that respense for class (RFC), lines of code (LOC), and lack of coding quality (LOCQ) are the most efective metrics whereas coupling between objets (CBO), weighted method per class (WMC), and lack of cohesion of methods (LCOM) are less efective metris on defect proneness. Furthermore, number of children (NOC) and depth of inheritance tree (DIT) have very limited effect and are unstustworthy. On tthe other hand, based on the experiments on Poi, Tomcat, and Xalan data sets, We observe that there is a positive correlation between the number of developers (NOD) and the level of defectiveness.However, futher investigation involving a greater number of projects, is need to confirm our findings. Furthermore, we propose a novel technique for defect prediction that uses plagiarism detection tools. Although the defect prediction problem haz been researched for a long time, the results achieved are not so bright. 
We use kernel programming to model the relationship between source code similarity and defectiveness. Each value in the kernel matrix shows how much parallelism exit between the corresponding files ib the kernel matrix shows how much parallelism exist between the corresponding files in the software system chosen. Our experiments on 10 real world datasets indicate that support vector machines (SVM) with a precalculated kernel matrix performs better than the SVM with the usual linear and RBF kernels and generates comparable results with the famous defect prediction methods like linear logistic regression and J48 in terms of the area under the curve (AUC).Furthermore, we observed that when the amount of similarity among the files of a software system is high, then the AUC found by the SVM with precomputed kernel can be used to predict the number of defects in the files or classes of a software system, because we observe a relationship between source code similarity and the number of defects. Based on the results of our analysis, the developers can focus on more defective modules rather than on less or non defective ones during testing activities. The experiments on 10 Promise datasets indicate that while predicting the number of defects, SVM with a precomputed kernel performs as good as the SVM with the usual linear and RBF kernels, in terms of the root mean square error (RMSE). The method proposed is also comparable with other regression methods like linear regression and IBK. The results of these experiments suggest that source code similarity is a good means of predicting both defectiveness and the number of defects in software modules. Literatürde kullanılan çok çeşitli yazılım ölçütleri mevcuttur. Çok sat-yıda ölçütle hata tahmini yapmak yerine, en önemli ölçüt kümesini belirleyip bu kümedeki ölçütleri hata tahmininde kullanmak daha pratik ve kolay olacaktır. 
In this thesis, Bayesian modelling is used to reveal the influential relationships between software metrics and defect proneness. In addition to the software metrics in the Promise data repository, two new metrics are defined: the number of developers (NOD) and source code quality (LOCQ). The open source code of the data sets in the Promise data repository is inspected to extract these metrics. As a result of the modelling, both the probability that the system under study is defective and the set of most effective metrics are obtained. Experiments on 9 Promise data sets show that RFC, LOC, and LOCQ are the most effective metrics, while CBO, WMC, and LCOM are less effective. It is also observed that NOC and DIT have a limited effect and are not reliable. On the other hand, experiments on the Poi, Tomcat, and Xalan data sets lead to the conclusion that there is a positive correlation between the number of developers (NOD) and the level of defectiveness. However, experiments on more data sets are needed to confirm these findings. This thesis also proposes a novel method for defect prediction that uses plagiarism detection tools. Although the defect prediction problem has been researched for a long time, the results obtained are not very promising. To bring a different perspective, a kernel method is used to model the relationship between source code similarity and defect proneness. In this method, each value in the generated kernel matrix indicates the similarity between the source code files in the corresponding row and column of the matrix. Experiments on 10 data sets show that the SVM method using a precomputed kernel matrix is more successful than the SVM methods using linear or RBF kernels, and that it produces results comparable to the existing defect prediction methods linear logistic regression and J48.
Moreover, it is observed that when the code similarity among the files of a software system is higher, the area under the ROC curve (AUC) is also higher. It is also shown that the SVM method using a precomputed kernel matrix can be used to predict the number of defects in a software system, owing to the relationship observed between the number of defects and source code similarity. Based on the results of this analysis, software developers can focus on the modules containing more defects rather than on defect-free or less defective modules. Experiments on 10 Promise data sets show that, when predicting the number of defects, the SVM method using a precomputed kernel matrix is as successful as the SVM methods using linear or RBF kernels in terms of root mean square error (RMSE). The method applied also produces results comparable to other regression methods such as linear regression and IBk. The results of these experiments show that source code similarity is a good means of predicting both defect proneness and the number of defects.
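The core mechanism of the abstract's second contribution, training an SVM on a precomputed source-code-similarity kernel, can be sketched with scikit-learn, assuming it is available. The "files", the toy labels, and the token-overlap kernel below are all invented stand-ins; in the thesis the similarity values come from plagiarism detection tools and real defect data.

```python
# Sketch (not the thesis code): SVM classification with a precomputed
# similarity kernel, in the spirit of the approach described above.
import numpy as np
from collections import Counter
from sklearn.svm import SVC

files = [
    "int add(int a,int b){return a+b;}",
    "int sum(int x,int y){return x+y;}",
    "void log(char*m){printf(m);}",
    "void warn(char*m){printf(m);}",
]
labels = np.array([1, 1, 0, 0])  # 1 = defect-prone (toy labels)

def bag(s):
    """Crude bag-of-tokens representation of a source file."""
    return Counter(s.replace("(", " ").replace(")", " ").split())

def k(a, b):
    """Dot product of token-count vectors: a valid PSD kernel standing in
    for a plagiarism-tool similarity score."""
    ca, cb = bag(a), bag(b)
    return sum(ca[t] * cb[t] for t in ca)

# Precompute the full kernel (Gram) matrix over the training files.
K = np.array([[k(a, b) for b in files] for a in files], dtype=float)

clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))  # predictions on the training files
```

At prediction time for unseen files, the matrix passed to `predict` must contain similarities between each test file and every training file, with shape (n_test, n_train).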
Software Metrics; Static Code Metrics; McCabe Metrics; Line of Code Metrics; Halstead Metrics; Object Oriented Metrics; Developer Metrics; Process Metrics; Defect Prediction; Defect Prediction Data; Performance Measure; An Overview of the Defect Prediction Studies; Defect Prediction Using Statistical Methods; Defect Prediction Using Machine Learning Methods; Previous Work on Defect Prediction; Critics About Studies; Benchmarking Studies; Bayesian Networks; Background on Bayesian Networks; K2 Algorithm; Previous Work on Bayesian Networks; Kernel Machines; Background on Kernel Machines; Support Vector Machines; Support Vector Machines for Regression; Kernel Functions; String Kernels; Previous Work on Kernel Machines; Plagiarism Tools; Similarity Detection; Kernel Methods for Defect Prediction; Proposed Method; Bayesian Networks; Bayesian Network of Metrics and Defect Proneness; Ordering Metrics for Bayesian Network Construction; Kernel Methods to Predict Defectiveness; Selecting Plagiarism Tools and Tuning Their Input Parameters; Data Set Selection; Kernel Matrix Generation; Kernel Methods to Predict the Number of Defects; Experiments and Results; Experiment I: Determine Influential Relationships Among Metrics and Defectiveness Using Bayesian Networks; Experiment Design; Experiment II: Determine the Role of Coding Quality and Number of Developers on Defectiveness Using Bayesian Networks; Conclusion; Instability Test; Effectiveness of Metric Pairs; Feature Selection Tests; Effectiveness of the Number of Developers (NOD); Experiment III: Defect Proneness Prediction Using Kernel Methods; Experiment IV: Prediction of the Number of Defects with Kernel Methods; Threats to Validity; Summary of Results; Bayesian Networks; Kernel Methods to Predict Defectiveness; Kernel Methods to Predict the Number of Defects; Contributions; Bayesian Networks; Kernel Methods to Predict Defectiveness; Kernel Methods to Predict the Number of Defects; Future Work
- Published
- 2012
44. Simplifying Contract-Violating Traces
- Author
-
Ian Grima, Christian Colombo, and Adrian Francalanza
- Subjects
FOS: Computer and information sciences ,Computer Science - Logic in Computer Science ,Computer software -- Development ,Computer science ,business.industry ,lcsh:Mathematics ,Distributed computing ,media_common.quotation_subject ,Computer software -- Quality control ,lcsh:QA1-939 ,lcsh:QA75.5-76.95 ,Logic in Computer Science (cs.LO) ,Software Engineering (cs.SE) ,Computer Science - Software Engineering ,Software ,Debugging ,Software deployment ,Scalability ,lcsh:Electronic computers. Computer science ,business ,TRACE (psycholinguistics) ,Drawback ,media_common - Abstract
Contract conformance is hard to determine statically, prior to the deployment of large pieces of software. A scalable alternative is to monitor for contract violations post-deployment: once a violation is detected, the trace characterising the offending execution is analysed to pinpoint the source of the offence. A major drawback with this technique is that, often, contract violations take time to surface, resulting in long traces that are hard to analyse. This paper proposes a methodology together with an accompanying tool for simplifying traces and assisting contract-violation debugging., Comment: In Proceedings FLACOS 2012, arXiv:1209.1699
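The trace-shrinking step this abstract describes can be sketched as a backward slice over the violating trace: starting from the violation, keep only the events that touch the objects involved. This is an illustrative sketch, not the authors' tool; the event format, the `simplify` function, and the sample trace are all invented for the example.

```python
# Hypothetical sketch of contract-violation trace simplification:
# walk backwards from the violation and keep only relevant events.

def simplify(trace, relevant):
    """Keep events whose target object is in `relevant`; objects that a
    kept event reads from are pulled into the slice as dependencies."""
    kept = []
    for event in reversed(trace):  # walk backwards from the violation
        if event["obj"] in relevant:
            kept.append(event)
            relevant |= set(event.get("reads", ()))
    return list(reversed(kept))

trace = [
    {"op": "new",   "obj": "conn"},
    {"op": "new",   "obj": "logger"},
    {"op": "write", "obj": "logger"},
    {"op": "close", "obj": "conn"},
    {"op": "send",  "obj": "conn"},   # violation: send after close
]
short = simplify(trace, {"conn"})
print([e["op"] for e in short])  # ['new', 'close', 'send']
```

The unrelated `logger` events disappear, leaving a short trace that still explains the send-after-close violation; real tools apply much richer dependency analyses than this object-based filter.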
- Published
- 2012
- Full Text
- View/download PDF
45. Slowdown invariance of timed regular expressions
- Author
-
Bondin, Ingram, Pace, Gordon J., Colombo, Christian, and 2nd WICT National Workshop in Information and Communication Technology (WICT 2009)
- Subjects
Computer software -- Development ,Object-oriented methods (Computer science) ,Computer software -- Quality control - Abstract
In critical systems, it is frequently essential to know whether the system satisfies a number of real-time constraints, usually specified in a real-time logic such as timed regular expressions. However, after having verified a system correct, changes in its environment may slow it down or speed it up, possibly invalidating the properties. Colombo et al. (1) have presented a theory of slowdown and speedup invariance to determine which specifications are safe with respect to system retiming, and applied the approach to duration calculus. In this paper we build upon their approach, applying it to timed regular expressions. We hence identify a fragment of the logic which is invariant under the speedup or slowdown of a system, enabling more resilient verification of properties written in the logic., peer-reviewed
- Published
- 2009
46. On-Line Failure Detection and Confinement in Caches
- Author
-
Xavier Vera, Pedro Chaparro, Antonio González, Jaume Abella, Javier Carretero, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, and Universitat Politècnica de Catalunya. ARCO - Microarquitectura i Compiladors
- Subjects
Logic ,CPU cache ,Computer science ,Cache memory ,Testing ,Memòria cau ,Word error rate ,Computer software -- Quality control ,Programari -- Control de qualitat ,Fault detection and isolation ,Hardware ,Memòria ràpida de treball (Informàtica) ,Error correction codes ,Informàtica::Arquitectura de computadors [Àrees temàtiques de la UPC] ,Read-only memory ,Protection ,Hardware_MEMORYSTRUCTURES ,business.industry ,Wires ,Chip ,Costs ,Soft error ,Computer engineering ,Error analysis ,Logic gate ,Embedded system ,Cache ,business ,Fault detection - Abstract
Technology scaling leads to burn-in phase out and increasing post-silicon test complexity, which increase the in-the-field error rate due to both latent defects and actual errors. As a consequence, there is an increasing need for continuous on-line testing techniques to cope with hard errors in the field. Similarly, those techniques are needed for detecting soft errors in logic, whose error rate is expected to rise in future technologies. Cache memories, which occupy most of the area of the chip, are typically protected with parity or ECC, but most of the wires as well as some combinational blocks remain unprotected against both soft and hard errors. This paper presents a set of techniques to detect and confine hard and soft errors in cache memories in combination with parity/ECC at very low cost. By means of hard signatures in data rows and error tracking, faults can be detected, classified properly and confined for hardware reconfiguration.
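The parity protection this abstract builds on can be shown in a few lines: a single even-parity bit per cache line detects any odd number of bit flips. This is only the baseline mechanism; the paper's contribution (row signatures, error tracking, confinement) is layered on top and not shown here.

```python
# Minimal even-parity sketch of the baseline cache-line protection.

def parity(word: int) -> int:
    """Even parity bit of an integer's binary representation."""
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

line = 0b1011_0010          # toy 8-bit cache line
stored = (line, parity(line))

# A soft error flips one bit in the stored data.
corrupted = line ^ (1 << 4)

print(parity(corrupted) != stored[1])  # True: the flip is detected
```

Parity alone only detects (it cannot correct, and an even number of flips escapes it), which is why larger structures use ECC and why the paper targets the wires and combinational blocks that parity/ECC leave uncovered.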
- Published
- 2008
- Full Text
- View/download PDF
47. A unified framework for verification techniques for object invariants
- Author
-
Drossopoulou, Sophia, Francalanza, Adrian, Muller, Peter, Summers, Alexander J., and 22nd European Conference on Object-Oriented Programming
- Subjects
Soundness ,Computer software -- Development ,Theoretical computer science ,Computer science ,Programming language ,Semantics (computer science) ,Separation logic ,Computer software -- Quality control ,Type (model theory) ,Object (computer science) ,computer.software_genre ,Consistency (database systems) ,Meaning (philosophy of language) ,Object-oriented methods (Computer science) ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Proof obligation ,computer - Abstract
Object invariants define the consistency of objects. They have subtle semantics, mainly because of call-backs, multi-object invariants, and subclassing. Several verification techniques for object invariants have been proposed. It is difficult to compare these techniques, and to ascertain their soundness, because of their differences in restrictions on programs and invariants, in the use of advanced type systems (e.g., ownership types), in the meaning of invariants, and in proof obligations. We develop a unified framework for such techniques. We distil seven parameters that characterise a verification technique, and identify sufficient conditions on these parameters which guarantee soundness. We instantiate our framework with three verification techniques from the literature, and use it to assess soundness and compare expressiveness., peer-reviewed
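The notion of an object invariant, and the call-back subtlety the abstract mentions, can be sketched in a dynamically checked form: a base class that re-checks `invariant()` after every public method. This is an illustrative sketch only; the `Checked` and `Account` classes are invented, and the paper's verification techniques are static, not runtime checks like this one.

```python
# Hypothetical runtime sketch of object invariants: every public method
# call is followed by an invariant check.

class Checked:
    def invariant(self):
        return True

    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        if callable(attr) and not name.startswith("_") and name != "invariant":
            def wrapped(*args, **kw):
                result = attr(*args, **kw)
                # The check runs after the method body, so the invariant may
                # be temporarily broken mid-method -- the call-back problem.
                assert self.invariant(), f"invariant broken after {name}()"
                return result
            return wrapped
        return attr

class Account(Checked):
    def __init__(self):
        self.balance = 0

    def invariant(self):
        return self.balance >= 0

    def deposit(self, amount):
        self.balance += amount

acc = Account()
acc.deposit(10)
print(acc.balance)  # 10
try:
    acc.deposit(-100)
except AssertionError as e:
    print("caught:", e)
```

If `deposit` called out to other code mid-method (a call-back), that code could observe the object while its invariant is broken; handling exactly this situation soundly is one of the parameters the paper's framework captures.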
- Published
- 2008
48. A theory of system behaviour in the presence of node and link failure
- Author
-
Adrian Francalanza and Matthew Hennessy
- Subjects
Reduction barbed congruence ,Computer software -- Development ,Computational Theory and Mathematics ,Distributed operating systems (Computers) ,Distributed calculi ,Bisimulation ,Computer software -- Quality control ,Node and link failure ,Labelled transition systems ,Theoretical Computer Science ,Information Systems ,Computer Science Applications - Abstract
We develop a behavioural theory of distributed programs in the presence of failures such as nodes crashing and links breaking. The framework we use is that of Dπ, a language in which located processes, or agents, may migrate between dynamically created locations. In our extended framework, these processes run on a distributed network, in which individual nodes may crash in fail-stop fashion or the links between these nodes may become permanently broken. The original language, Dπ, is also extended by a ping construct for detecting and reacting to these failures. We define a bisimulation equivalence between these systems, based on labelled actions which record, in addition to the effect actions have on the processes, the effect on the actual state of the underlying network and the view of this state known to observers. We prove that the equivalence is fully abstract, in the sense that two systems will be differentiated if and only if, in some sense, there is a computational context, consisting of a surrounding network and an observer, which can see the difference., peer-reviewed
- Published
- 2008
49. ISO 9001 Registration for Small and Medium-Sized Software Enterprises
- Author
-
Bailetti, Antonio J., FitzGibbon, Chris, Bailetti, Antonio J., and FitzGibbon, Chris
- Published
- 1995
50. Software architecture evaluation for framework-based systems
- Author
-
Liming Zhu
- Subjects
Software architecture -- Reliability ,Software architecture -- Evaluation ,Component software -- Evaluation ,Component software -- Reliability ,Software measurement ,Computer software -- Quality control ,Computer software -- Evaluation - Abstract
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: A new technique was designed aiming to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. The feasibility of the technique was validated through a case study. Significant improvements of scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: A new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study. This approach has significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute specific analysis (performance only): A model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. This technique leads to the benchmark producing more representative measures of the eventual application. It reduces the complexity behind the load testing suite and framework-specific performance data collecting utilities. 
4) Trade-off and sensitivity analysis: A new technique was designed to improve the Analytic Hierarchy Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify: 1) the trade-offs implied by an architecture alternative, along with their magnitude; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling changes in quality-attribute priorities.
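The AHP step the thesis builds on can be sketched in a few lines: derive priority weights for candidate frameworks from a pairwise-comparison matrix, here via the common geometric-mean approximation. The 3x3 comparison values below are invented for illustration; the thesis's contribution concerns the trade-off and sensitivity analysis around this step, not the basic weighting shown here.

```python
# Sketch of basic AHP prioritisation with the geometric-mean method.
import math

# comparison[i][j]: how strongly alternative i is preferred over j
# (reciprocal Saaty-scale values; made-up numbers).
comparison = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(m):
    """Normalised geometric means of the rows approximate the AHP
    principal-eigenvector priorities."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in m]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(comparison)
print([round(w, 3) for w in weights])
```

Sensitivity analysis, in AHP terms, then asks how much a comparison value must change before the ranking of the alternatives flips, which is what the thesis's fourth technique quantifies.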
- Published
- 2007
- Full Text
- View/download PDF