56 results on '"Computer software -- Quality control"'
Search Results
2. Modern software review : techniques and technologies.
- Author
-
Wong, Yuk Kuen
- Subjects
Computer software -- Development ,Computer software -- Evaluation ,Computer software -- Quality control - Abstract
Summary: "This book provides an understanding of the critical factors affecting software review performance and to provide practical guidelines for software reviews"--Provided by publisher.
- Published
- 2006
3. Bayesian network analysis of software logs for data-driven software maintenance
- Author
-
Antonio Salmerón Cerdán, Santiago Del Rey Juárez, Silverio Martínez-Fernández, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Services, Information and Data Engineering
- Subjects
Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,Software maintenance ,Computer Graphics and Computer-Aided Design ,Bayes methods - Abstract
Software organisations aim to develop and maintain high-quality software systems. Due to the large amounts of behaviour data available, software organisations can conduct data-driven software maintenance. Indeed, software quality assurance and improvement programs have attracted many researchers' attention. Bayesian Networks (BNs) are proposed as a log analysis technique to discover poor performance indicators in a system and to explore usage patterns that usually require temporal analysis. For this, an action research study is designed and conducted to improve the software quality and the user experience of a web application, using BNs as a technique to analyse software logs. To this end, three BN models are created. As a result, multiple enhancement points have been identified within the application, ranging from performance issues and errors to recurring user usage patterns. These enhancement points enable the creation of cards in the Scrum process of the web application, contributing to its data-driven software maintenance. Finally, the authors consider that BNs within quality-aware and data-driven software maintenance have great potential as a software log analysis technique and encourage the community to explore their possible applications further. For this, the applied methodology and a replication package are shared. Junta de Andalucía, Grant/Award Number: P20‐00091; AEI, Grant/Award Number: PID2019‐106758GB‐32/AEI/10.13039/501100011033; Spanish project, Grant/Award Number: PDC2021‐121195‐I00; Spanish Program, Grant/Award Number: BEAGAL18/00064
- Published
- 2023
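Entry 3 above uses Bayesian Networks over software logs to surface poor performance indicators and usage patterns, but the abstract does not reproduce the three models themselves. The sketch below is only a minimal illustration of the general idea under stated assumptions: the log fields (endpoint, hour, slow), the two-edge network structure, and the use of the pgmpy library are editorial choices, not the paper's models.

# Minimal sketch (not the paper's models): a hand-specified Bayesian network
# over discretised web-log fields, fitted and queried with pgmpy.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Toy log records: endpoint called, time-of-day bin, and whether the response
# exceeded a latency threshold ("slow").
logs = pd.DataFrame({
    "endpoint": ["search", "search", "login", "export", "export", "search"],
    "hour":     ["peak",   "off",    "off",   "peak",   "peak",   "peak"],
    "slow":     [1,        0,        0,       1,        1,        0],
})

# Assumed structure: both the endpoint and the time of day influence slowness.
model = BayesianNetwork([("endpoint", "slow"), ("hour", "slow")])
model.fit(logs, estimator=MaximumLikelihoodEstimator)  # CPTs estimated from counts

# Query: probability of a slow response for the "export" endpoint at peak time.
posterior = VariableElimination(model).query(
    variables=["slow"], evidence={"endpoint": "export", "hour": "peak"}
)
print(posterior)

On real logs the structure would typically be refined with domain knowledge or a structure-learning algorithm, and the discretisation of timestamps and latencies chosen per system.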
4. Applying project-based learning to teach software analytics and best practices in data science
- Author
-
Martínez Fernández, Silverio Juan, Gómez Seoane, Cristina, Lenarduzzi, Valentina, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Services, Information and Data Engineering
- Subjects
Software engineering ,Mètode de projectes ,Project method in teaching ,Software analytics ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,Project-based learning ,Data science - Abstract
Due to recent industry needs, synergies between data science and software engineering are starting to be present in data science and engineering academic programs. Two synergies are: applying data science to manage the quality of the software (software analytics) and applying software engineering best practices in data science projects to ensure quality attributes such as maintainability and reproducibility. The lack of these synergies in academic programs has been argued to be an educational problem. Hence, it becomes necessary to explore how to teach software analytics and software engineering best practices in data science programs. In this context, we provide hands-on guidance for conducting laboratories that apply project-based learning in order to teach software analytics and software engineering best practices to data science students. We aim at improving the software engineering skills of data science students in order to produce software of higher quality through software analytics. We focus on two skills: following a process and applying software engineering best practices. We apply project-based learning as the main teaching methodology to reach the intended outcomes. This teaching experience shows the introduction of project-based learning in a laboratory where students applied data science and software engineering best practices to analyze and detect improvements in software quality. We carried out a case study in two academic semesters with 63 data science bachelor students. The students found the synergies of the project positive for their learning. In the project, they highlighted both the utility of using a CRISP-DM data mining process and software engineering best practices, such as a software project structure convention applied to a data science project. This paper was partly funded by a teaching innovation project of ICE@UPC-BarcelonaTech (entitled ‘‘Audiovisual and digital material for data engineering, a teaching innovation project with open science’’), and the ‘‘Beatriz Galindo’’ Spanish Program BEA-GAL18/00064.
- Published
- 2023
5. Measuring and Improving Agile Processes in a Small-Size Software Development Company
- Author
-
Michal Choras, Prabhat Ram, Lidia López, Rafał Kozik, Silverio Martínez-Fernández, Xavier Franch, Pilar Rodríguez, Tomasz Springer, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, and Publica
- Subjects
Computer software -- Development ,General Computer Science ,Computer science ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Programari àgil -- Desenvolupament ,SMEs ,Context (language use) ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Tools ,Software development process ,Software ,Empirical research ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,process metrics ,9. Industry and infrastructure ,business.industry ,Standards organizations ,General Engineering ,Software development ,020207 software engineering ,software quality ,Software quality ,Engineering management ,rapid software development ,Software measurement ,Programari -- Desenvolupament ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Small and medium-sized enterprises ,Agile software development ,business ,Companies ,lcsh:TK1-9971 ,software engineering - Abstract
Context: Agile software development has become commonplace in software development companies due to the numerous benefits it provides. However, conducting Agile projects is demanding in Small and Medium Enterprises (SMEs), because projects start and end quickly, but still have to fulfil customers' quality requirements. Objective: This paper aims at reporting a practical experience on the use of metrics related to the software development process as a means of supporting SMEs in the development of software following an Agile methodology. Method: We followed Action-Research principles in a Polish small-size software development company. We developed and executed a study protocol suited to the needs of the company, using a pilot case. Results: A catalogue of Agile development process metrics, practically validated in the context of a small-size software development company and adopted by the company in their Agile projects. Conclusions: Practitioners may adopt these metrics in their Agile projects, especially if working in an SME, and customise them to their own needs and tools. Academics may use the findings as a baseline for new research work, including new empirical studies. The authors would like to thank all the members of the QRapids H2020 project consortium.
- Published
- 2020
- Full Text
- View/download PDF
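Entry 5 reports a practically validated catalogue of Agile process metrics, which the abstract does not enumerate. As a rough, hypothetical illustration of what such process metrics look like, the sketch below computes cycle time and weekly throughput from an issue-tracker export with pandas; the column names and the two metrics are assumptions, not the paper's catalogue.

# Illustrative sketch only: two generic Agile process metrics computed from an
# issue-tracker export. Column names and metric choices are assumed.
import pandas as pd

issues = pd.DataFrame({
    "key":      ["PRJ-1", "PRJ-2", "PRJ-3"],
    "started":  pd.to_datetime(["2020-01-02", "2020-01-03", "2020-01-10"]),
    "resolved": pd.to_datetime(["2020-01-05", "2020-01-10", "2020-01-12"]),
})

# Cycle time: elapsed days from work started to resolution, per issue.
issues["cycle_time_days"] = (issues["resolved"] - issues["started"]).dt.days

# Throughput: number of issues resolved per calendar week.
throughput = issues.resample("W", on="resolved")["key"].count()

print(issues[["key", "cycle_time_days"]])
print(throughput)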
6. Parallelware Tools: An Experimental Evaluation on POWER Systems
- Author
-
Xavier Martorell, Manuel Arenaz, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Barcelona Supercomputing Center, and Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions
- Subjects
FOS: Computer and information sciences ,Exploit ,Computer science ,Concurrency ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Parallel programming (Computer science) ,Static code analysis ,POWER systems ,Static program analysis ,Computer software -- Quality control ,Programari -- Control de qualitat ,010103 numerical & computational mathematics ,Programació en paral·lel (Informàtica) ,01 natural sciences ,Concurrency and parallelism ,Software development process ,Software ,0101 mathematics ,Informàtica::Arquitectura de computadors::Arquitectures paral·leles [Àrees temàtiques de la UPC] ,Tasking ,business.industry ,Software architecture ,Parallelware tools ,Detection of software defects ,OpenMP ,010101 applied mathematics ,Computer Science - Distributed, Parallel, and Cluster Computing ,Systems development life cycle ,Programari -- Disseny ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Quality assurance and testing ,Software engineering ,business - Abstract
Static code analysis tools are designed to aid software developers in building better quality software in less time, by detecting defects early in the software development life cycle. Even the most experienced developer regularly introduces coding defects. Identifying, mitigating and resolving defects is an essential part of the software development process, but frequently defects can go undetected. One defect can lead to a minor malfunction or cause serious security and safety issues. This is magnified in the development of the complex parallel software required to exploit modern heterogeneous multicore hardware. Thus, there is an urgent need for new static code analysis tools to help in building better concurrent and parallel software. The paper reports preliminary results about the use of Appentra’s Parallelware technology to address this problem from the following three perspectives: finding concurrency issues in the code, discovering new opportunities for parallelization in the code, and generating parallel-equivalent codes that enable tasks to run faster. The paper also presents experimental results using well-known scientific codes and POWER systems. This work has been partly funded by the Spanish Ministry of Science and Technology (TIN2015-65316-P), the Departament d’Innovació, Universitats i Empresa de la Generalitat de Catalunya (MPEXPAR: Models de Programació i Entorns d’Execució Paral·lels, 2014-SGR-1051), and the European Union’s Horizon 2020 research and innovation program through grant agreements MAESTRO (801101) and EPEEC (801051).
- Published
- 2021
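Parallelware itself is a proprietary static analysis technology, so entry 6's tooling cannot be reproduced here. Purely as a toy analogue of one of the three perspectives it mentions, discovering parallelization opportunities, the sketch below walks a Python AST and flags augmented assignments inside for-loops as candidates for parallel reductions (the pattern that, in C/C++ code, a tool might suggest parallelising with an OpenMP reduction clause). It is a didactic stand-in, not Parallelware.

# Toy stand-in for static detection of parallelization opportunities: flag
# augmented assignments (e.g. total += ...) inside for-loops as reduction
# candidates. This is NOT Parallelware.
import ast

SOURCE = """
total = 0.0
for x in values:
    total += f(x)   # loop-carried accumulation -> reduction candidate
for y in values:
    print(y)        # no accumulation -> nothing reported
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.For):
        for stmt in ast.walk(node):
            if isinstance(stmt, ast.AugAssign) and isinstance(stmt.target, ast.Name):
                print(f"line {stmt.lineno}: '{stmt.target.id}' is accumulated "
                      "inside a loop; candidate for a parallel reduction")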
7. QFL: Data-driven feedback loop to manage quality in agile development
- Author
-
Alessandra Bagnato, Antonin Ahberve, Lidia López, Xavier Franch, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
FOS: Computer and information sciences ,Computer science ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Programari àgil -- Desenvolupament ,Software analytics tool ,Computer software -- Quality control ,Programari -- Control de qualitat ,Requirements pattern ,Toolchain ,Software development process ,Software analytics ,Computer Science - Software Engineering ,Software ,Quality management process ,Quality monitoring ,Decisió, Presa de ,business.industry ,Software development ,Quality ,Software quality ,Software Engineering (cs.SE) ,Project planning ,Requirement ,business ,Software engineering ,Agile software development ,Decision-making ,Quality assessment - Abstract
Background: Quality requirements (QRs) describe desired system qualities, playing an important role in the success of software projects. In the context of agile software development (ASD), where the main objective is the fast delivery of functionalities, QRs are often ill-defined and not well addressed during the development process. Software analytics tools help to control quality through the measurement of quality-related software aspects to support decision-makers in the process of QR management. Aim: The goal of this research is to explore the benefits of integrating a concrete software analytics tool, Q-Rapids Tool, to assess software quality and support QR management processes. Method: In the context of a technology transfer project, the Softeam company has integrated Q-Rapids Tool in their development process. We conducted a series of workshops involving Softeam members working in the Modelio product development. Results: We present the Quality Feedback Loop (QFL) process to be integrated in software development processes to control the complete QR life-cycle, from elicitation to validation. As a result of the implementation of QFL in Softeam, Modelio's team members highlight the benefits of integrating a data analytics tool with their project planning tool and the fact that project managers can control the whole process, making the final decisions. Conclusions: Practitioners can benefit from the integration of software analytics tools as part of their software development toolchain to control software quality. The implementation of QFL promotes quality in the organization, and the integration of software analytics and project planning tools also improves the communication between teams., Comment: 9 pages, Accepted for publication in IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS), IEEE, 2021
- Published
- 2021
8. Industrial practices on requirements reuse: An interview-based study
- Author
-
Carme Quer, Xavier Franch, Cristina Palomares, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
050101 languages & linguistics ,Computer science ,Process (engineering) ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Context (language use) ,02 engineering and technology ,Requirements elicitation ,Computer software -- Quality control ,Programari -- Control de qualitat ,Reuse ,Domain (software engineering) ,0202 electrical engineering, electronic engineering, information engineering ,Enginyeria de requisits ,0501 psychology and cognitive sciences ,Quality (business) ,Software requirements ,Interview-based study ,Survey ,media_common ,Requirements reuse ,Requirements documentation ,Requirements engineering ,05 social sciences ,Engineering management ,020201 artificial intelligence & image processing - Abstract
[Context and motivation] Requirements reuse has been proposed as a key asset for requirements engineers to efficiently elicit, validate and document software requirements and, as a consequence, obtain requirements specifications of better quality through more effective engineering processes. [Question/problem] Regardless of the impact requirements reuse could have on software projects' success and efficiency, the requirements engineering community has published very few studies reporting the way in which this activity is conducted in industry. [Principal ideas/results] In this paper, we present the results of an interview-based study involving 24 IT professionals on whether they reuse requirements or not and how. Some kind of requirements reuse is carried out by the majority of respondents, with organizational and project-related factors being the main drivers. Quality requirements are the type most reused. The most common strategy is find-copy-paste-adapt. Respondents agreed that requirements reuse is beneficial, especially for project-related reasons. The most stated challenge to overcome in requirements reuse is related to the domain of the project and the development of a completely new system. [Contribution] With this study, we contribute to the state of the practice in the reuse of requirements by showing how real organizations carry out this process and the factors that influence it. This work has been partially funded by the Horizon 2020 project OpenReq, which is supported by the European Union under the Grant Nr. 732463.
- Published
- 2020
9. Estimación y priorización de requisitos no-funcionales para desarrollo de software: Estado del arte
- Author
-
Salamea Bravo, María José, González Palacio, Liliana, Oriol Hilari, Marc, Farré Tost, Carles, Universitat Politècnica de Catalunya. Doctorat en Computació, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Computer software -- Quality control ,Programari -- Control de qualitat - Abstract
Quality requirements (also called non-functional requirements) are those that make it possible to assure the quality of the software. They cover very diverse aspects, such as availability, security, performance, scalability, portability and usability, among others. Continuous technological advances, such as cloud software or the Internet of Things, pose new challenges for software development in order to guarantee a satisfactory level of quality in these aspects. Likewise, agile development methodologies, whose use is on the rise, such as SCRUM, XP and Kanban, do not provide the support needed for managing these quality requirements. In order to help software engineers make decisions about the level of quality needed in a project, it is essential to know in advance: 1) which criteria will be taken into account to verify, prioritise, plan and/or negotiate the quality requirements. It is also necessary to: 2) specify how these criteria will be evaluated, and 3) identify which factors of the project context may affect that evaluation. To try to answer these three research questions, the authors of this chapter have designed and are carrying out a systematic literature review. This work presents for discussion the description of the methodology followed in that study, as well as some of the preliminary results obtained during its execution.
- Published
- 2020
10. Actionable software metrics: an industrial perspective
- Author
-
Alessandra Bagnato, Pilar Rodríguez, Milla Ahola, Michał Choraś, Silverio Martínez-Fernández, Prabhat Ram, Markku Oivo, Rafał Kozik, Sanja Aaramaa, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
metrics program ,Computer science ,business.industry ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Perspective (graphical) ,Software development ,020207 software engineering ,Context (language use) ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Computer-assisted web interviewing ,actionable metrics ,Data science ,Software metric ,context ,Action (philosophy) ,020204 information systems ,Data quality ,Machine learning ,Aprenentatge automàtic ,0202 electrical engineering, electronic engineering, information engineering ,data quality ,Metric (unit) ,business - Abstract
Background: Practitioners would like to take action based on software metrics, as long as they find them reliable. Existing literature explores how metrics can be made reliable, but it remains unclear whether there are other conditions necessary for a metric to be actionable. Context & Method: In the context of a European H2020 project, we conducted a multiple case study to study metrics' use in four companies, and identified instances where these metrics influenced actions. We used an online questionnaire to enquire about the project participants' views on actionable metrics. Next, we invited one participant from each company to elaborate on the identified metrics' use for taking actions and the questionnaire responses (N=17). Result: We learned that a metric that is practical, contextual, and exhibits high data quality characteristics is actionable. Even a non-actionable metric can be useful, but an actionable metric mostly requires interpretation. However, the more these metrics are simple and reflect the software development context accurately, the less interpretation is required to infer actionable information from the metric. Company size and project characteristics can also influence the type of metric that can be actionable. Conclusion: This exploration of industry's views on actionable metrics helps characterize actionable metrics in practical terms. This awareness of what characteristics constitute an actionable metric can facilitate their definition and development right from the start of a software metrics program. This work is a result of the Q-Rapids Project, funded by the European Union's Horizon 2020 research and innovation program, under grant agreement No. 732253.
- Published
- 2020
11. Practical experiences and value of applying software analytics to manage quality
- Author
-
Anna Maria Vollmer, Alessandra Bagnato, Pilar Rodríguez, Lidia López, Silverio Martínez-Fernández, Jari Partanen, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
FOS: Computer and information sciences ,Analytics ,Process management ,Process (engineering) ,Computer science ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,summative evaluation ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Computer Science - Software Engineering ,Software analytics ,Software ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,software analytics ,Quality (business) ,media_common ,technology transfer ,Product design ,business.industry ,020207 software engineering ,software quality ,Software quality ,Software Engineering (cs.SE) ,Enginyeria del programari ,business ,Quality assurance ,software engineering - Abstract
Background: Despite the growth in the use of software analytics platforms in industry, little empirical evidence is available about the challenges that practitioners face and the value that these platforms provide. Aim: The goal of this research is to explore the benefits of using a software analytics platform for practitioners managing quality. Method: In a technology transfer project, a software analytics platform was incrementally developed between academic and industrial partners to address their software quality problems. This paper focuses on exploring the value provided by this software analytics platform in two pilot projects. Results: Practitioners emphasized major benefits including the improvement of product quality and process performance and an increased awareness of product readiness. They especially perceived the semi-automated functionality of generating quality requirements by the software analytics platform as the benefit with the highest impact and most novel value for them. Conclusions: Practitioners can benefit from modern software analytics platforms, especially if they have time to adopt such a platform carefully and integrate it into their quality assurance activities., Comment: This is an Author's Accepted Manuscript of a paper consisting of a post-peer-review, pre-copyedit version of a paper accepted at the 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 2019. The final authenticated version is available online: https://ieeexplore.ieee.org/document/8870162
- Published
- 2019
- Full Text
- View/download PDF
12. Continuously assessing and improving software quality with software analytics tools: a case study
- Author
-
Silverio Martinez-Fernandez, Anna Maria Vollmer, Andreas Jedlitschka, Xavier Franch, Lidia Lopez, Prabhat Ram, Pilar Rodriguez, Sanja Aaramaa, Alessandra Bagnato, Michal Choras, Jari Partanen, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, and Publica
- Subjects
Monitoring ,Software analytics ,Case study ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Programari àgil -- Desenvolupament ,Software analytics tool ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,Tools ,case study ,software analytics ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,quality model ,Agile software development ,lcsh:TK1-9971 ,Companies ,Real-time systems ,Quality model ,software analytics tool - Abstract
In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem to be particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality have also been key targets for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This paper aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product and whether practitioners intend to use it. Over the course of more than a year, four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. The quantitative and qualitative analyses provided positive results, i.e., the practitioners’ perception with regard to the tool’s understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings, and constructive feedback can be used for future improvements. We conclude that the potential for future adoption of quality models within software analytics tools definitely exists and encourage other practitioners to use the presented seven challenges and seven lessons learned and adopt them in their companies.
- Published
- 2019
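Entry 12's central argument is that raw analytics data becomes more useful when aggregated through a quality model into higher-level goals. The Q-Rapids quality model itself is not spelled out in the abstract; the sketch below only shows the generic aggregation pattern (normalised metrics combined into factors, factors combined into a strategic indicator). All metric names, weights and thresholds are hypothetical.

# Generic sketch of aggregating metrics -> quality factors -> strategic
# indicator. Names, weights and thresholds are hypothetical, not the
# Q-Rapids quality model itself.

def normalise(value, worst, best):
    """Map a raw metric onto [0, 1], where 1 is best (works for either direction)."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

metrics = {
    "test_coverage": normalise(72.0, worst=0.0, best=100.0),   # percent
    "duplication":   normalise(6.0, worst=30.0, best=0.0),     # percent, lower is better
    "failed_builds": normalise(2.0, worst=10.0, best=0.0),     # per week, lower is better
}

factors = {
    "code_quality":    0.5 * metrics["test_coverage"] + 0.5 * metrics["duplication"],
    "build_stability": metrics["failed_builds"],
}

# Strategic indicator, e.g. "product readiness", as a weighted mean of factors.
product_readiness = 0.6 * factors["code_quality"] + 0.4 * factors["build_stability"]
print(factors, round(product_readiness, 2))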
13. Q-Rapids: Quality-Aware Rapid Software Development – An H2020 Project
- Author
-
Marc Oriol, Lidia López, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
050101 languages & linguistics ,Process management ,Computer science ,media_common.quotation_subject ,Time to market ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Asset (computer security) ,Software ,Enginyeria de requisits ,0202 electrical engineering, electronic engineering, information engineering ,Data-driven requirements engineering ,0501 psychology and cognitive sciences ,Use case ,Quality (business) ,media_common ,Q-Rapids H2020 Project ,business.industry ,05 social sciences ,Software development ,Requirements engineering ,Product (business) ,020201 artificial intelligence & image processing ,business ,Quality requirements ,Rapid software development - Abstract
This work reports the objectives, current state, and outcomes of the Q-Rapids H2020 project. Q-Rapids (Quality-Aware Rapid Software Development) proposes a data-driven approach to the production of software following very short development cycles. The focus of Q-Rapids is on quality aspects, represented through quality requirements. The Q-Rapids platform, which is the tangible software asset emerging from the project, mines software repositories and usage logs to identify candidate quality requirements that may ameliorate the values of strategic indicators like product quality, time to market or team productivity. Four companies are providing use cases to evaluate the platform and associated processes.
- Published
- 2019
- Full Text
- View/download PDF
14. Quality-Aware Rapid Software Development Project: The Q-Rapids Project
- Author
-
Silverio Martínez-Fernández, Xavier Franch, Adam Trendowicz, Lidia López, Pilar Rodríguez, Marc Oriol, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Process management ,Computer science ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Dashboard (business) ,Programari àgil -- Desenvolupament ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Software repositories ,Software development process ,Software ,Q-Rapids H2020 ,0502 economics and business ,Enginyeria de requisits ,0202 electrical engineering, electronic engineering, information engineering ,Data-driven requirements engineering ,Quality models ,Software system ,Project management ,Software analytic tools ,business.industry ,05 social sciences ,Software development ,Requirements engineering ,Non-functional requirements ,020207 software engineering ,Agile software development ,business ,Quality requirements ,050203 business & management ,Rapid software development - Abstract
Software quality continuously poses new challenges in software development, including aspects related to both software development and system usage, which significantly impact the success of software systems. The Q-Rapids H2020 project defines an evidence-based, data-driven quality-aware rapid software development methodology. Quality requirements (QRs) are incrementally elicited, refined and improved based on data gathered from software repositories, project management tools, system usage and quality of service. This data is analysed and aggregated into quality-related key strategic indicators (e.g., development effort required to include a given QR in the next development cycle) which are presented to decision makers using a highly informative dashboard. The Q-Rapids platform is being evaluated on premises by the four companies participating in the consortium, reporting useful lessons learned and directions for new development.
- Published
- 2019
- Full Text
- View/download PDF
15. A Quality Model for Actionable Analytics in Rapid Software Development
- Author
-
Liliana Guzman, Anna Maria Vollmer, Andreas Jedlitschka, Silverio Martínez-Fernández, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
FOS: Computer and information sciences ,Agile ,Computer science ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,02 engineering and technology ,Programari -- Fiabilitat ,Data modeling ,Software analytics ,Computer Science - Software Engineering ,Software ,Decisió, Presa de ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,media_common ,business.industry ,H2020 ,Software development ,020207 software engineering ,Computer software -- Reliability ,Data science ,Software Engineering (cs.SE) ,Analytics ,020201 artificial intelligence & image processing ,business ,Decision making ,Quality model ,Q-Rapids ,Rapid software development ,Agile software development - Abstract
Background: Accessing relevant data on the product, process, and usage perspectives of software as well as integrating and analyzing such data is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that take more time to find manually and adds transparency among the perspectives of system, process, and usage., This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online
- Published
- 2018
- Full Text
- View/download PDF
16. Conflicts and synergies among quality requirements
- Author
-
Xavier Franch, Barry Boehm, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Engineering ,Non-functional requirement ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,02 engineering and technology ,Computer software -- Quality control ,Programari -- Control de qualitat ,Integrated product team ,Programari -- Fiabilitat ,Software qualities ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,Reliability (statistics) ,media_common ,Vulnerability (computing) ,business.industry ,Software architecture ,Nonfunctional requirements ,020207 software engineering ,Usability ,Availability ,Computer software -- Reliability ,Reliability ,Software quality ,Maintainability ,Risk analysis (engineering) ,Security ,020201 artificial intelligence & image processing ,Programari -- Disseny ,Single point of failure ,business ,Software engineering ,Quality requirements - Abstract
Analyses of the interactions among quality requirements (QRs) have often found that optimizing on one QR will cause serious problems with other QRs. As just one relevant example, one large project had an Integrated Product Team optimize the system for Security. In doing so, it reduced its vulnerability profile by having a single-agent key distribution system and a single copy of the database – only to have the Reliability engineers point out that these were system-critical single points of failure. The project’s Security-optimized architecture also created conflicts with the system’s Performance, Usability, and Modifiability. Of course, optimizing the system for Security had synergies with Reliability in having high levels of Confidentiality, Integrity, and Availability. This panel aims at fostering discussion on these relationships among QRs and how the use of data repositories may help in discovering them.
- Published
- 2017
17. Determinizing Monitors for HML with Recursion
- Author
-
Luca Aceto, Anna Ingólfsdóttir, Adrian Francalanza, Sævar Örn Kjartansson, and Antonis Achilleos
- Subjects
Discrete mathematics ,FOS: Computer and information sciences ,Computer software -- Development ,Computer Science - Logic in Computer Science ,Finite-state machine ,TheoryofComputation_COMPUTATIONBYABSTRACTDEVICES ,Logic ,Computer science ,Formal Languages and Automata Theory (cs.FL) ,Process (computing) ,Recursion (computer science) ,Computer Science - Formal Languages and Automata Theory ,Computer software -- Quality control ,Theoretical Computer Science ,Exponential function ,Logic in Computer Science (cs.LO) ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Computational Theory and Mathematics ,Exponential growth ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Computer Science::Logic in Computer Science ,Software ,Computer Science::Formal Languages and Automata Theory - Abstract
We examine the determinization of monitors for HML with recursion. We demonstrate that every monitor is equivalent to a deterministic one, which is at most doubly exponential in size with respect to the original monitor. When monitors are described as CCS-like processes, this doubly exponential bound is optimal. When (deterministic) monitors are described as finite automata (as their LTS), then they can be exponentially more succinct than their CCS process form., non peer-reviewed
- Published
- 2016
- Full Text
- View/download PDF
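A compact restatement of the size bounds claimed in entry 17's abstract, in LaTeX; the asymptotic form is an editorial paraphrase of "at most doubly exponential", not a quotation of the paper's exact statement.

% Editorial paraphrase of the bounds in the abstract above.
% For a monitor m (for HML with recursion) written as a CCS-like process,
% an equivalent deterministic monitor det(m) exists with
\[
  |\det(m)| \;\le\; 2^{2^{O(|m|)}},
\]
% and, per the abstract, this doubly exponential blow-up is optimal for the
% CCS process form, while presenting det(m) as a finite automaton (its LTS)
% can be exponentially more succinct than the process form.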
18. Many-valued institutions for constraint specification
- Author
-
Fernando Orejas, José Luiz Fiadeiro, Claudia Elena Chiriźăź, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, and Universitat Politècnica de Catalunya. ALBCOM - Algorismia, Bioinformàtica, Complexitat i Mètodes Formals
- Subjects
Graded semantic consequence ,Computer science ,Constraint satisfaction problems ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Service discovery ,Logical systems ,0102 computer and information sciences ,02 engineering and technology ,Computer software -- Quality control ,Contracts ,Programari -- Control de qualitat ,01 natural sciences ,Expressive power ,Functional requirements ,Soft-constraint satisfaction problems ,Complex software system specification ,Formal specification ,0202 electrical engineering, electronic engineering, information engineering ,Constraint specification ,Soft constraints ,Software system ,Many-valued institutions ,Constraint satisfaction problem ,Service-level agreements ,business.industry ,Functional requirement ,Quality attributes ,010201 computation theory & mathematics ,Compatibility (mechanics) ,020201 artificial intelligence & image processing ,Software engineering ,business - Abstract
We advance a general technique for enriching logical systems with soft constraints, making them suitable for specifying complex software systems where parts are put together not just based on how they meet certain functional requirements but also on how they optimise certain constraints. This added expressive power is required, for example, for capturing quality attributes that need to be optimised or, more generally, for formalising what are usually called service-level agreements. More specifically, we show how institutions endowed with a graded semantic consequence can accommodate soft-constraint satisfaction problems. We illustrate our approach by showing how, in the context of service discovery, one can quantify the compatibility of two specifications and thus formalise the selection of the most promising provider of a required resource.
- Published
- 2016
19. Software engineering and formal methods. Lecture notes in computer science
- Author
-
Gabriel Dimech, Adrian Francalanza, and Christian Colombo
- Subjects
Connected component ,Formal methods (Computer science) ,Service (systems architecture) ,Computer software -- Development ,Correctness ,Computer science ,business.industry ,Runtime verification ,Autonomous distributed systems ,Real-time data processing ,Computer software -- Quality control ,Abstraction layer ,Business process management ,Object-oriented methods (Computer science) ,Computer software -- Verification ,Embedded system ,Component (UML) ,Instrumentation (computer programming) ,business - Abstract
Enterprise Service Buses (ESBs) are highly-dynamic component platforms that are hard to test for correctness because their connected components may not necessarily be present prior to deployment. Runtime Verification (RV) is a potential solution towards ascertaining correctness of an ESB, by checking the ESB’s execution at runtime, and detecting any deviations from the expected behaviour. A crucial aspect impinging upon the feasibility of this verification approach is the runtime overheads introduced, which may have adverse effects on the execution of the ESB system being monitored. In turn, one factor that bears a major effect on such overheads is the instrumentation mechanism adopted by the RV setup. In this paper we identify three likely (but substantially different) ESB instrumentation mechanisms, detail their implementation over a widely-used ESB platform, assess them qualitatively, and empirically evaluate the runtime overheads introduced by these mechanisms., peer-reviewed
- Published
- 2015
20. Requirements capturing and software development methodologies for trustworthy systems
- Author
-
Κάτσικας, Σωκράτης, Σχολή Τεχνολογιών Πληροφορικής και Επικοινωνιών. Τμήμα Ψηφιακών Συστημάτων, and Τεχνοοικονομική Διοίκηση και Ασφάλεια Ψηφιακών Συστημάτων
- Subjects
Computer software -- Development ,Software engineering ,Computer software -- Reliability ,Computer software -- Quality control - Abstract
In this Master's thesis, the concept of information systems trustworthiness will be covered, in terms of describing existing methodologies for collecting and documenting security requirements as well as describing how existing methodologies support the delivery of trustworthy systems. Moreover, this essay will employ a case study in order to reinforce the essay's outcomes on how to achieve trustworthy software. Trustworthiness is a characteristic that can be applied to any system that satisfies the desired level of trust by not failing. The systems that should possess such a property are mainly systems that manage sensitive records, critical infrastructure, etc. The capturing of a system's requirements is the process of discovering and identifying the system's stakeholders and their needs. A system's requirements are the features and qualities that a system should possess, and are extracted from the system's stakeholders (i.e. owners, users). Therefore, the identification of security requirements is of crucial importance for the achievement of the desired security goals, namely trustworthiness. With respect to security requirements, in order for a system to ensure that its security specifications are satisfied, security concerns must be taken into consideration in every phase of the software engineering lifecycle; namely, from requirements engineering to design, implementation, testing, and deployment. In order to increase users' trust in the systems they use, software defects must be reduced through the adoption of systematic development methodologies. Following a systematic development methodology during the software development process, the risk of not achieving the acceptable result is reduced, if not eliminated, since software development methodologies impose a disciplined process upon software development.
- Published
- 2014
21. Software defect prediction using Bayesian networks and kernel methods
- Author
-
Okutan, Ahmet, Yıldız, Olcay Taner, Işık Üniversitesi, Fen Bilimleri Enstitüsü, Bilgisayar Mühendisliği Doktora Programı, and Bilgisayar Mühendisliği Anabilim Dalı
- Subjects
Neural networks (Computer science) ,Artificial intelligence ,Bayesian statistical decision theory ,QA76.76.Q35 O38 2012 ,Computer software -- Quality control ,Computer Engineering and Computer Science and Control ,Bilgisayar Mühendisliği Bilimleri-Bilgisayar ve Kontrol - Abstract
Text in English; Abstract: English and Turkish. Includes bibliographical references (leaves 115-127). xix, 128 leaves. There are many different software metrics discovered and used for defect prediction in the literature. Instead of dealing with so many metrics, it would be practical and easy if we could determine the set of metrics that are most important and focus on them to predict defectiveness. We use Bayesian modelling to determine the influential relationships among software metrics and defect proneness. In addition to the metrics used in the Promise data repository, we define two more metrics, i.e. NOD for the number of developers and LOCQ for the source code quality. We extract these metrics by inspecting the source code repositories of the selected Promise data repository data sets. At the end of our modelling, we learn both the marginal defect proneness probability of the whole software system and the set of most effective metrics. Our experiments on nine open source Promise data repository data sets show that response for class (RFC), lines of code (LOC), and lack of coding quality (LOCQ) are the most effective metrics, whereas coupling between objects (CBO), weighted methods per class (WMC), and lack of cohesion of methods (LCOM) are less effective metrics on defect proneness. Furthermore, number of children (NOC) and depth of inheritance tree (DIT) have very limited effect and are untrustworthy. On the other hand, based on the experiments on the Poi, Tomcat, and Xalan data sets, we observe that there is a positive correlation between the number of developers (NOD) and the level of defectiveness. However, further investigation involving a greater number of projects is needed to confirm our findings. Furthermore, we propose a novel technique for defect prediction that uses plagiarism detection tools. Although the defect prediction problem has been researched for a long time, the results achieved are not so bright. We use kernel programming to model the relationship between source code similarity and defectiveness. Each value in the kernel matrix shows how much parallelism exists between the corresponding files in the software system chosen. Our experiments on 10 real world datasets indicate that support vector machines (SVM) with a precalculated kernel matrix perform better than the SVM with the usual linear and RBF kernels and generate comparable results with well-known defect prediction methods like linear logistic regression and J48 in terms of the area under the curve (AUC). Furthermore, we observed that when the amount of similarity among the files of a software system is high, the AUC found by the SVM with a precomputed kernel can be used to predict the number of defects in the files or classes of a software system, because we observe a relationship between source code similarity and the number of defects. Based on the results of our analysis, developers can focus on more defective modules rather than on less or non-defective ones during testing activities. The experiments on 10 Promise datasets indicate that, while predicting the number of defects, SVM with a precomputed kernel performs as well as the SVM with the usual linear and RBF kernels, in terms of the root mean square error (RMSE). The method proposed is also comparable with other regression methods like linear regression and IBK.
The results of these experiments suggest that source code similarity is a good means of predicting both defectiveness and the number of defects in software modules.
Thesis outline: software metrics (static code, McCabe, lines-of-code, Halstead, object-oriented, developer and process metrics); defect prediction data and performance measures; an overview of defect prediction studies using statistical and machine learning methods, including benchmarking studies; background on Bayesian networks and the K2 algorithm; background on kernel machines, support vector machines (including regression) and string kernels; plagiarism tools for similarity detection; the proposed Bayesian-network and kernel-based prediction methods; experiments on metric influence, defect proneness prediction and prediction of the number of defects; threats to validity; summary of results; contributions; and future work.
- Published
- 2012
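The thesis in entry 21 feeds a precomputed source-code-similarity matrix into an SVM. Its similarity values come from plagiarism-detection tools, which are not reproduced here; the sketch below shows only the scikit-learn mechanics of an SVM with a precomputed kernel, on a made-up 4-file similarity matrix, so the shapes and calls are clear.

# Mechanics-only sketch: SVM with a precomputed (similarity) kernel in
# scikit-learn. The similarity values are invented; the thesis derives them
# from plagiarism-detection tools applied to source files.
import numpy as np
from sklearn.svm import SVC

# Pairwise similarity among 4 training files; labels: 1 = defective.
K_train = np.array([
    [1.0, 0.8, 0.1, 0.2],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.7],
    [0.2, 0.1, 0.7, 1.0],
])
y_train = np.array([1, 1, 0, 0])

clf = SVC(kernel="precomputed")
clf.fit(K_train, y_train)

# To classify new files, pass their similarities to the *training* files,
# shape (n_test, n_train).
K_test = np.array([[0.9, 0.7, 0.1, 0.2]])
print(clf.predict(K_test))  # expected: [1], i.e. more similar to the defective files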
22. Simplifying Contract-Violating Traces
- Author
-
Ian Grima, Christian Colombo, and Adrian Francalanza
- Subjects
FOS: Computer and information sciences ,Computer Science - Logic in Computer Science ,Computer software -- Development ,Computer science ,business.industry ,lcsh:Mathematics ,Distributed computing ,media_common.quotation_subject ,Computer software -- Quality control ,lcsh:QA1-939 ,lcsh:QA75.5-76.95 ,Logic in Computer Science (cs.LO) ,Software Engineering (cs.SE) ,Computer Science - Software Engineering ,Software ,Debugging ,Software deployment ,Scalability ,lcsh:Electronic computers. Computer science ,business ,TRACE (psycholinguistics) ,Drawback ,media_common - Abstract
Contract conformance is hard to determine statically, prior to the deployment of large pieces of software. A scalable alternative is to monitor for contract violations post-deployment: once a violation is detected, the trace characterising the offending execution is analysed to pinpoint the source of the offence. A major drawback with this technique is that, often, contract violations take time to surface, resulting in long traces that are hard to analyse. This paper proposes a methodology together with an accompanying tool for simplifying traces and assisting contract-violation debugging., Comment: In Proceedings FLACOS 2012, arXiv:1209.1699
- Published
- 2012
- Full Text
- View/download PDF
23. Slowdown invariance of timed regular expressions
- Author
-
Bondin, Ingram, Pace, Gordon J., Colombo, Christian, and 2nd WICT National Workshop in Information and Communication Technology (WICT 2009)
- Subjects
Computer software -- Development ,Object-oriented methods (Computer science) ,Computer software -- Quality control - Abstract
In critical systems, it is frequently essential to know whether the system satisfies a number of real-time constraints, usually specified in a real-time logic such as timed regular expressions. However, after having verified a system correct, changes in its environment may slow it down or speed it up, possibly invalidating the properties. Colombo et al. (1) have presented a theory of slowdown and speedup invariance to determine which specifications are safe with respect to system retiming, and applied the approach to duration calculus. In this paper we build upon their approach, applying it to timed regular expressions. We hence identify a fragment of the logic which is invariant under the speedup or slowdown of a system, enabling more resilient verification of properties written in the logic., peer-reviewed
- Published
- 2009
24. On-Line Failure Detection and Confinement in Caches
- Author
-
Xavier Vera, Pedro Chaparro, Antonio González, Jaume Abella, Javier Carretero, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, and Universitat Politècnica de Catalunya. ARCO - Microarquitectura i Compiladors
- Subjects
Logic ,CPU cache ,Computer science ,Cache memory ,Testing ,Memòria cau ,Word error rate ,Computer software -- Quality control ,Programari -- Control de qualitat ,Fault detection and isolation ,Hardware ,Memòria ràpida de treball (Informàtica) ,Error correction codes ,Informàtica::Arquitectura de computadors [Àrees temàtiques de la UPC] ,Read-only memory ,Protection ,Hardware_MEMORYSTRUCTURES ,business.industry ,Wires ,Chip ,Costs ,Soft error ,Computer engineering ,Error analysis ,Logic gate ,Embedded system ,Cache ,business ,Fault detection - Abstract
Technology scaling leads to the phase-out of burn-in and increasing post-silicon test complexity, which increases the in-the-field error rate due to both latent defects and actual errors. As a consequence, there is an increasing need for continuous on-line testing techniques to cope with hard errors in the field. Similarly, those techniques are needed for detecting soft errors in logic, whose error rate is expected to rise in future technologies. Cache memories, which occupy most of the area of the chip, are typically protected with parity or ECC, but most of the wires as well as some combinational blocks remain unprotected against both soft and hard errors. This paper presents a set of techniques to detect and confine hard and soft errors in cache memories, in combination with parity/ECC, at very low cost. By means of hard signatures in data rows and error tracking, faults can be detected, properly classified and confined for hardware reconfiguration.
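A rough software analogy of the error-tracking part (invented thresholds and structure, not the hardware design) is sketched below: a parity mismatch is reported per row, a single occurrence is treated as a soft error, and repeated occurrences at the same row are classified as a hard fault and the row is confined.

# Software model of error tracking in a cache: repeated errors in the same
# row suggest a hard (permanent) fault; isolated ones look like soft errors.
from collections import defaultdict

HARD_FAULT_THRESHOLD = 2   # repeated errors at the same row => call it "hard"

class CacheErrorTracker:
    def __init__(self):
        self.error_count = defaultdict(int)
        self.disabled_rows = set()

    @staticmethod
    def parity(data: bytes) -> int:
        """1-bit even parity over a stored line (stand-in for parity/ECC)."""
        acc = 0
        for byte in data:
            acc ^= byte
        return bin(acc).count("1") % 2

    def report_mismatch(self, row: int) -> str:
        """Called whenever the read-back line disagrees with its check bits."""
        self.error_count[row] += 1
        if self.error_count[row] >= HARD_FAULT_THRESHOLD:
            self.disabled_rows.add(row)   # confine: stop using this row
            return "hard fault suspected: row disabled"
        return "soft error assumed: correct and retry"

tracker = CacheErrorTracker()
line = bytes([0b1010_1010, 0b0000_1111])
check = tracker.parity(line)

corrupted = bytes([line[0] ^ 0b0000_0001, line[1]])   # single bit flip
if tracker.parity(corrupted) != check:                 # mismatch detected
    print(tracker.report_mismatch(row=7))              # first hit: soft error
    print(tracker.report_mismatch(row=7))              # second hit: hard fault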
- Published
- 2008
- Full Text
- View/download PDF
25. A unified framework for verification techniques for object invariants
- Author
-
Drossopoulou, Sophia, Francalanza, Adrian, Muller, Peter, Summers, Alexander J., and 22nd European Conference on Object-Oriented Programming
- Subjects
Soundness ,Computer software -- Development ,Theoretical computer science ,Computer science ,Programming language ,Semantics (computer science) ,Separation logic ,Computer software -- Quality control ,Type (model theory) ,Object (computer science) ,computer.software_genre ,Consistency (database systems) ,Meaning (philosophy of language) ,Object-oriented methods (Computer science) ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Proof obligation ,computer - Abstract
Object invariants define the consistency of objects. They have subtle semantics, mainly because of call-backs, multi-object invariants, and subclassing. Several verification techniques for object invariants have been proposed. It is difficult to compare these techniques, and to ascertain their soundness, because of their differences in restrictions on programs and invariants, in the use of advanced type systems (e.g., ownership types), in the meaning of invariants, and in proof obligations. We develop a unified framework for such techniques. We distil seven parameters that characterise a verification technique, and identify sufficient conditions on these parameters which guarantee soundness. We instantiate our framework with three verification techniques from the literature, and use it to assess soundness and compare expressiveness., peer-reviewed
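The call-back subtlety mentioned above is easy to reproduce; the sketch below (invented classes) shows an invariant observed while temporarily broken, which is exactly the kind of situation the surveyed proof obligations must rule out or account for. It is an illustration, not the paper's formal framework.

# Why object invariants are subtle: a call-back can observe an object while
# its invariant is temporarily broken. Classes and invariant are invented.

class Account:
    """Invariant: balance == sum(history)."""
    def __init__(self, auditor):
        self.balance = 0
        self.history = []
        self.auditor = auditor

    def invariant_holds(self):
        return self.balance == sum(self.history)

    def deposit(self, amount):
        self.balance += amount
        # Call-back while the invariant is broken (history not yet updated):
        self.auditor.notify(self)
        self.history.append(amount)

class Auditor:
    def notify(self, account):
        print("invariant holds during call-back?", account.invariant_holds())

acct = Account(Auditor())
acct.deposit(10)                              # prints: ... False
print("after the call:", acct.invariant_holds())  # True again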
- Published
- 2008
26. A theory of system behaviour in the presence of node and link failure
- Author
-
Adrian Francalanza and Matthew Hennessy
- Subjects
Reduction barbed congruence ,Computer software -- Development ,Computational Theory and Mathematics ,Distributed operating systems (Computers) ,Distributed calculi ,Bisimulation ,Computer software -- Quality control ,Node and link failure ,Labelled transition systems ,Theoretical Computer Science ,Information Systems ,Computer Science Applications - Abstract
We develop a behavioural theory of distributed programs in the presence of failures such as nodes crashing and links breaking. The framework we use is that of Dπ, a language in which located processes, or agents, may migrate between dynamically created locations. In our extended framework, these processes run on a distributed network, in which individual nodes may crash in fail-stop fashion or the links between these nodes may become permanently broken. The original language, Dπ, is also extended by a ping construct for detecting and reacting to these failures. We define a bisimulation equivalence between these systems, based on labelled actions which record, in addition to the effect actions have on the processes, the effect on the actual state of the underlying network and the view of this state known to observers. We prove that the equivalence is fully abstract, in the sense that two systems will be differentiated if and only if, in some sense, there is a computational context, consisting of a surrounding network and an observer, which can see the difference., peer-reviewed
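To make fail-stop node failure and the ping construct concrete, here is a small illustrative simulation in plain code (invented names, not Dπ syntax): a process migrates work only when a ping of the target location succeeds and otherwise takes the failure branch.

# Toy model of located processes with fail-stop nodes and a ping construct.
# This only illustrates the behaviours the calculus formalises.

class Network:
    def __init__(self, nodes):
        self.alive = {n: True for n in nodes}
        self.links = set()

    def connect(self, a, b):
        self.links.add(frozenset((a, b)))

    def crash(self, node):
        self.alive[node] = False          # fail-stop: the node never recovers

    def ping(self, src, dst):
        """A ping succeeds only if both endpoints are alive and linked."""
        return (self.alive[src] and self.alive[dst]
                and frozenset((src, dst)) in self.links)

def migrate_or_fallback(net, here, there, task):
    if net.ping(here, there):
        return f"run {task!r} at {there}"
    return f"{there} unreachable from {here}: run {task!r} locally"

net = Network(["l1", "l2"])
net.connect("l1", "l2")
print(migrate_or_fallback(net, "l1", "l2", "index"))   # runs remotely
net.crash("l2")
print(migrate_or_fallback(net, "l1", "l2", "index"))   # failure branch taken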
- Published
- 2008
27. ISO 9001 Registration for Small and Medium-Sized Software Enterprises
- Author
-
Bailetti, Antonio J. and FitzGibbon, Chris
- Published
- 1995
28. Software architecture evaluation for framework-based systems
- Author
-
Liming Zhu
- Subjects
Software architecture -- Reliability ,Software architecture -- Evaluation ,Component software -- Evaluation ,Component software -- Reliability ,Software measurement ,Computer software -- Quality control ,Computer software -- Evaluation - Abstract
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: A new technique was designed aiming to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. The feasibility of the technique was validated through a case study. Significant improvements of scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: A new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study. This approach has significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute specific analysis (performance only): A model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. This technique leads to the benchmark producing more representative measures of the eventual application. It reduces the complexity behind the load testing suite and framework-specific performance data collecting utilities. 4) Trade-off and sensitivity analysis: A new technique was designed seeking to improve the Analytical Hierarchical Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify 1) trade-offs implied by an architecture alternative, along with the magnitude of these trade-offs; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling changes in quality attribute priorities.
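Since the fourth technique builds on the Analytic Hierarchy Process, a compact, textbook AHP computation is sketched below (an invented comparison matrix, not the author's extension): criteria weights come from the principal eigenvector of the pairwise comparison matrix, and the consistency ratio flags contradictory judgements.

# Standard AHP weight and consistency computation for a 3-criterion
# framework-selection trade-off (matrix values invented for illustration).
import numpy as np

# Pairwise comparisons: performance vs maintainability vs cost.
# A[i, j] = how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalised priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)               # consistency index
ri = 0.58                                     # Saaty's random index for n = 3
cr = ci / ri                                  # consistency ratio (< 0.1 is acceptable)

print("weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))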
- Published
- 2007
- Full Text
- View/download PDF
29. Goal-driven agent-oriented software processes
- Author
-
Enric Mayol, Carlos Cares, Enrique Alvarez, Xavier Franch, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, and Universitat Politècnica de Catalunya. IMP - Information Modeling and Processing
- Subjects
Informatics ,Computer science ,Programari -- Tests ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Spirals ,Programari -- Control de qualitat ,Computer software -- Testing ,Software testing ,Software safety ,Mètodes formals (Informàtica) ,Software tools ,Software verification and validation ,Quality management ,Software systems ,Formal methods (Computer science) ,Social software engineering ,Software engineering ,business.industry ,Software development ,Goal-Driven Software Development Process ,Software construction ,Personal software process ,Software design ,Ergonomics ,business - Abstract
The quality of software processes is acknowledged as a critical factor for delivering quality software systems. Any initiative for improving the quality of software processes requires their explicit representation and management. A current representational metaphor for systems is agent orientation, which has become one of the recently recognized engineering paradigms. In this article, we argue for the convenience of representing the software process using an agent-oriented language to model it and a goal-driven procedure to design it. Particularly we propose using the i* framework which is both an agent- and a goal-oriented modeling language. We review the possibilities of i* as a software process modeling language, and we also show how success factors can be made explicit in i* representations of the software processes. Finally, we illustrate the approach with an example based on the development of a set of ergonomic and safety software tools.
- Published
- 2006
30. DesCOTS-SL: a tool for the selection of COTS components
- Author
-
Xavier Franch, Carme Quer, X. Lopez-Pelegrin, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Engineering ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,COTS components ,Domain (software engineering) ,Software selection ,Software quality analyst ,IEC standards ,Quality (business) ,Software verification and validation ,Software measurement ,media_common ,Programari -- Avaluació ,business.industry ,ISO standards ,Systems engineering ,Software packages ,Computer software -- Evaluation ,Software engineering ,business ,Software architecture ,Software metrics ,Software quality control - Abstract
DesCOTS is a system that aims to help clients in the selection of COTS components. It is based on the use of quality models associated with a software domain for evaluating the products in that domain and for formally defining the clients' requirements so as to find a suitable product in that domain. The evaluation and the formal definition of requirements are facilitated by metrics attached to each quality entity in the quality models. Our ISO/IEC 9126-1-based quality models are sets of quality entities structured in hierarchies of characteristics, sub-characteristics and attributes, with possible intermediate hierarchies of sub-characteristics and attributes.
- Published
- 2006
31. Una propuesta conforme a MOF para la modelización de la calidad del software
- Author
-
Burgués Illa, Xavier, Franch Gutiérrez, Javier, Ribó Balust, Josep Maria, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
UML (Computer science) ,UML (Informàtica) ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Computer software -- Quality control ,Programari -- Control de qualitat - Abstract
In previous work we proposed formalising the quality aspects of software artefacts by means of three types of hierarchically structured models. In this article we integrate our proposal into the UML metamodel and, specifically, into the four-level MOF architecture. We apply a methodology for extending the UML metamodel and introduce the notion of an association induced by a metamodel.
- Published
- 2004
32. QM: A tool for building software quality models
- Author
-
Carvallo Vega, Juan Pablo, Franch Gutiérrez, Javier, Grau Colom, Gemma, Quer, Carme, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Programació orientada a l'objecte (Informàtica) ,Information analysis ,Measurement standards ,Informàtica::Sistemes d'informació [Àrees temàtiques de la UPC] ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Requirements engineering ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,Q factor ,ISO standards ,Object-oriented programming (Computer science) ,Portals ,Enginyeria de requisits ,IEC standards ,Quality models ,Quality management ,Unified modeling language ,Taxonomy - Abstract
This paper presents QM, a tool for supporting the construction of quality models for software systems. The quality framework assumed for QM is defined by means of a conceptual model. The goals of QM are enumerated and its functionalities described. Among others, QM provides functionalities to define software quality factors, to reuse these quality factors among different quality models, to state relationships among them, and to assign metrics for their future evaluation. The construction of quality models can be guided following any method, and new methods can be defined using the tool itself. QM has been designed to be integrated with other tools to support processes in the different contexts where quality models can be used (software development, component selection, etc.). Finally, as an example, the architecture of a whole system for supporting component selection processes is included.
- Published
- 2004
33. DesCOTS: a software system for selecting COTS components
- Author
-
Grau, G., Juan Pablo Carvallo, Franch, X., Quer, C., Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Programació orientada a l'objecte (Informàtica) ,Informàtica::Sistemes d'informació [Àrees temàtiques de la UPC] ,Computer software -- Reusability ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Computer software -- Quality control ,Programari -- Control de qualitat ,Software selection ,Object-oriented programming (Computer science) ,Software System ,Risk management ,Project management ,Software packages ,Software tools ,Software portability ,Formal specification ,Open systems ,Programari -- Reusabilitat - Abstract
Selection of commercial off-the-shelf software components (COTS components) has a growing importance in software engineering. Unfortunately, selection projects have a high risk of ending in abandonment or yielding an incorrect selection. The use of software engineering practices such as the definition of quality models can reduce this risk. We defined a process for COTS component selection based on the use of quality models and started to apply it in academic and industrial cases. The need for a tool to support this process arose and, although some tools already exist that partially support the involved activities, none of them was suitable enough. Because of this, we developed DesCOTS, a software system that embraces several tools that interact to support the different activities of our process. The system has been designed taking into account not only functional concerns but also non-functional aspects such as reusability, interoperability and portability. We present in this paper the different subsystems of DesCOTS and discuss their applicability.
- Published
- 2004
34. A quality-model-based approach for describing and evaluating software packages
- Author
-
Xavier Franch, Juan Pablo Carvallo, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Computer science ,Computer software -- Standards ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Software requirements specification ,Computer software -- Quality control ,Programari -- Control de qualitat ,Software selection ,Software quality analyst ,Software requirements ,Software verification and validation ,Programari -- Normes ,Software measurement ,Programari -- Avaluació ,Software standards ,business.industry ,ISO standards ,Systems engineering ,Software packages ,Package development process ,Computer software -- Evaluation ,Formal specification ,Software metrics ,Software engineering ,business ,Software quality control - Abstract
Selection of software packages from user requirements is a central task in software engineering. Selection of inappropriate packages may compromise business processes and may interfere negatively in the functioning of the involved organization. Success of package selection is endangered because of many factors, one of the most important being the absence of structured descriptions of both package features and user quality requirements. In this paper, we propose a methodology for describing quality factors of software packages using the ISO/IEC quality standard as a framework. Following this standard, relevant attributes for a specific software domain are identified and structured as a hierarchy, and metrics for them are chosen. Software packages in this domain can then be described in a uniform and comprehensive way. Therefore, selection of packages can be ameliorated by transforming user quality requirements into requirements expressed in terms of quality model attributes. We illustrate the approach by presenting, in some depth, a quality model for the mail server domain.
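A minimal data-structure sketch of such a quality-model-based description is given below; the attributes, metrics and thresholds are invented (loosely inspired by the mail server domain mentioned in the abstract), and user requirements become predicates over quality-model attributes.

# Illustrative quality model: characteristics -> attributes with metrics,
# package descriptions as attribute values, requirements as predicates.
# All names and thresholds are invented, not taken from the paper.

quality_model = {
    "Reliability": {
        "max_delivery_delay_s": "seconds (lower is better)",
        "uptime_pct": "percentage over the last year",
    },
    "Functionality": {
        "supports_smtp_auth": "boolean",
    },
}

mail_server_x = {          # description of one candidate package
    "max_delivery_delay_s": 30,
    "uptime_pct": 99.5,
    "supports_smtp_auth": True,
}

# User quality requirements, restated over quality-model attributes.
requirements = [
    ("delivery delay under one minute", lambda p: p["max_delivery_delay_s"] < 60),
    ("uptime of at least 99.9%",        lambda p: p["uptime_pct"] >= 99.9),
    ("SMTP authentication supported",   lambda p: p["supports_smtp_auth"]),
]

for name, check in requirements:
    print(f"{name}: {'satisfied' if check(mail_server_x) else 'NOT satisfied'}")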
- Published
- 2003
- Full Text
- View/download PDF
35. A Quality Model for the Ada Standard Container Library
- Author
-
Xavier Franch, Jordi Marco, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Database ,business.industry ,Computer science ,Reliability (computer networking) ,media_common.quotation_subject ,Libraries ,Software development ,Context (language use) ,Computer software -- Quality control ,Programari -- Control de qualitat ,Informàtica::Llenguatges de programació [Àrees temàtiques de la UPC] ,Security policy ,Abstract data type ,computer.software_genre ,Containers ,ADA (Llenguatge de programació) ,Ada (Computer program language) ,Container (abstract data type) ,Key (cryptography) ,Quality (business) ,Software engineering ,business ,computer ,media_common - Abstract
The existence of a standard container library has been widely recognized as a key feature for improving the quality and effectiveness of Ada programming. In this paper, we aim at providing a quality model that makes explicit the quality features (those concerning functionality, suitability, etc.) that determine the form such a library might take. Quality features are arranged hierarchically according to the ISO/IEC quality standard. We tailor this standard to the specific context of container libraries by identifying their observable attributes and establishing some tradeoffs among them. Afterwards, we apply the resulting model to a pair of existing container libraries. As the main contribution of our proposal, the resulting quality model provides a structured framework for (1) discussing and evaluating the capabilities that the prospective Ada Standard Container Library might offer, and (2) analyzing the consequences of the decisions taken during its design.
- Published
- 2003
- Full Text
- View/download PDF
36. A structured approach to software process modelling
- Author
-
Josep M. Ribó, Xavier Franch, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Theoretical computer science ,business.industry ,Computer science ,Programació estructurada ,Computer software -- Reusability ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,Software maintenance ,Structured programming ,Modularity ,Software reusability ,Software development process ,Software ,Formal specification ,Software system ,Programari -- Reusabilitat ,business - Abstract
Systematic formulation of software process models (SPM) is currently a challenging problem in software engineering. We present an approach to define such models that encourages: reuse of both elements and models; modularity and incrementality in model construction; simplicity and naturalness of the resulting model; and a high degree of concurrency in their enactment. We focus on model definition, distinguishing as usual its static and dynamic parts. We define the static part by means of formally defined hierarchies introducing the categories of elements that take part in SPM definition. Such hierarchies may be constructed and enlarged according to the requirements of any specific SPM. We present as an example a hierarchy for component programming that takes into account non-functional aspects of software (efficiency, etc.). The dynamic part of the SPM is defined by means of precedence relationships between tasks that take part in the model. These precedence relationships are represented with precedence graphs. Development strategies are defined by encapsulating new precedence relationships in modules, which can be combined and reused.
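The dynamic part, precedence relationships between tasks, can be pictured with a short sketch: topologically layering an invented precedence graph shows which tasks may be enacted concurrently. This is only an illustration of the idea, not the authors' notation.

# A tiny precedence graph over software-process tasks and a layering that
# shows which tasks may be enacted concurrently. Tasks are invented.
from graphlib import TopologicalSorter   # Python 3.9+

# task -> set of tasks that must precede it
precedes = {
    "specify":     set(),
    "design":      {"specify"},
    "implement":   {"design"},
    "write_tests": {"specify"},
    "run_tests":   {"implement", "write_tests"},
}

ts = TopologicalSorter(precedes)
ts.prepare()
step = 0
while ts.is_active():
    ready = list(ts.get_ready())      # tasks with all predecessors done
    print(f"step {step}: can run concurrently -> {sorted(ready)}")
    ts.done(*ready)
    step += 1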
- Published
- 2002
- Full Text
- View/download PDF
37. Modelling non-functional requirements
- Author
-
Botella López, Pere, Burgués Illa, Xavier|||0000-0001-6974-9886, Franch Gutiérrez, Javier|||0000-0001-9733-8830, Huerta, Mario, Salazar, Guadalupe, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
ISO/IEC quality standard ,Language NoFun ,Universal quality property ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Non-functional requirement ,Enginyeria de requisits ,Requirements engineering ,Software quality characteristic ,Computer software -- Quality control ,Programari -- Control de qualitat - Abstract
We present in this paper the language NoFun for stating component quality in the framework of the ISO/IEC quality standards. The language consists of three parts. In the first one, software quality characteristics and attributes are defined, possibly in a hierarchical manner. As part of this definition, abstract quality models can be formulated and further refined into more specialised ones. In the second part, values are assigned to the basic quality attributes of components. In the third one, quality requirements can be stated over components, both context-free (universal quality properties) and context-dependent (quality properties for a given framework: software domain, company, project, etc.). Lastly, we address the translation of the language into UML, using its extension mechanisms to capture the fundamental non-functional concepts.
- Published
- 2001
38. Analyzing data locality in numeric applications
- Author
-
Antonio González, J. Sanchez, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, and Universitat Politècnica de Catalunya. ARCO - Microarquitectura i Compiladors
- Subjects
Profiling (computer programming) ,Computer science ,Locality ,Program compilers ,Compiladors (Programes d'ordinador) ,Parallel computing ,Computer software -- Quality control ,Programari -- Control de qualitat ,computer.software_genre ,Program diagnostics ,Computer engineering ,Hardware and Architecture ,Compiler ,Electrical and Electronic Engineering ,computer ,Informàtica::Arquitectura de computadors [Àrees temàtiques de la UPC] ,Software ,Compile time ,Compilers (Computer programs) - Abstract
In this article, we introduce SPLAT (Static and Profiled Data Locality Analysis Tool). The tool's purpose is to provide a fast study of memory behavior without the necessity of a costly memory simulator. SPLAT consists of a static locality analysis enhanced by simple profiling data. Its overhead is low because it performs most of the analysis at compile time, and because the required profiling support is just a basic-block-execution count. Many commercial compilers support this profiling option. Compared with simulation techniques, SPLAT's estimation technique is highly accurate for numeric codes.
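As a rough illustration of combining static locality estimates with basic-block execution counts (the only profiling SPLAT requires), the sketch below weights a per-reference miss-ratio estimate by how often its enclosing block executed; all numbers and names are invented and the real analysis is far more detailed.

# Hypothetical combination of static per-reference miss estimates with
# basic-block execution counts to approximate overall cache behaviour.

# Static analysis output: for each memory reference, its basic block,
# references per block execution, and an estimated miss ratio.
static_info = [
    {"ref": "A[i]",    "block": "loop1", "refs_per_exec": 1, "miss_ratio": 0.125},
    {"ref": "B[j][i]", "block": "loop1", "refs_per_exec": 1, "miss_ratio": 1.0},
    {"ref": "x",       "block": "entry", "refs_per_exec": 1, "miss_ratio": 0.0},
]

# Profiling output: how many times each basic block executed.
block_counts = {"entry": 1, "loop1": 10_000}

total_refs = total_misses = 0
for info in static_info:
    execs = block_counts[info["block"]]
    refs = execs * info["refs_per_exec"]
    total_refs += refs
    total_misses += refs * info["miss_ratio"]

print(f"estimated miss ratio: {total_misses / total_refs:.3f}")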
- Published
- 2000
- Full Text
- View/download PDF
39. A visual program design language
- Author
-
Xu, Qi-Quan
- Subjects
Visual programming languages (Computer science) ,Computer software -- Development -- Computer program ,Computer software -- Development -- Standards ,Computer software -- Quality control - Abstract
Not available
- Published
- 1998
40. Systematic formulation of non-functional characteristics of software
- Author
-
Xavier Franch, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Computer software -- Development ,Theoretical computer science ,business.industry ,Computer science ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software development ,Software requirements specification ,Requirements engineering ,Component programming ,Non-functional requirements ,Computer software -- Quality control ,Programari -- Control de qualitat ,computer.software_genre ,Software framework ,Software sizing ,Component-based software engineering ,Software construction ,Programari -- Desenvolupament ,Enginyeria de requisits ,NoFun ,Software system ,business ,Software engineering ,computer ,Software measurement - Abstract
This paper presents NoFun, a notation aimed at dealing with non-functional aspects of software systems at the product level in the component programming framework. NoFun can be used to define hierarchies of non-functional attributes, which can be bound to individual software components, libraries of components or (sets of) software systems. Non-functional attributes can be defined in several ways, making it possible to choose a particular definition in a concrete context. NoFun also allows stating the values of the attributes in component implementations and formulating non-functional requirements over them. The notation is complemented with an algorithm able to select the best implementation of components (with respect to their non-functional characteristics) in their context of use.
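The selection algorithm mentioned at the end can be illustrated as follows; the attribute encoding, scoring scheme and component names are invented for the example and are not NoFun's syntax or semantics.

# Illustrative selection of the "best" implementation of a component given
# hard non-functional requirements and context-dependent preferences.

implementations = {
    "list_based": {"lookup_time": "linear",      "memory_kb": 4,  "thread_safe": False},
    "hash_based": {"lookup_time": "constant",    "memory_kb": 64, "thread_safe": True},
    "tree_based": {"lookup_time": "logarithmic", "memory_kb": 16, "thread_safe": True},
}

# Hard requirement imposed by the context of use.
def meets_requirements(attrs):
    return attrs["thread_safe"]

# Context-dependent preference: faster lookups matter more than memory here.
SPEED_RANK = {"constant": 0, "logarithmic": 1, "linear": 2}

def score(attrs):                      # lower is better
    return 10 * SPEED_RANK[attrs["lookup_time"]] + attrs["memory_kb"] / 16

candidates = {name: attrs for name, attrs in implementations.items()
              if meets_requirements(attrs)}
best = min(candidates, key=lambda name: score(candidates[name]))
print("selected implementation:", best)    # hash_based, in this invented context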
- Published
- 1998
- Full Text
- View/download PDF
41. Software architecture evaluation for framework-based systems.
- Author
-
Zhu, Liming
- Subjects
- Software architecture -- Evaluation, Software architecture -- Reliability, Component software -- Evaluation, Component software -- Reliability, Computer software -- Evaluation, Computer software -- Quality control, Software measurement
- Abstract
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: A new technique was designed aiming to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. The feasibility of the technique was validated through a case study. Significant improvements of scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: A new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study. This approach has significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute specific analysis (performance only): A model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. This technique leads to the benchmark producing more representative measures of the eventual application. It reduces the complexity behind the load testing suite and framework-specific performance data collecting utilities. 4) Trade-off and sensitivity analysis: A new technique was designed seeking to improve the Analytical Hierarchical Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify 1) trade-offs implied by an architecture alternative, along with the magnitude of these trade-offs; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling changes in quality attribute priorities.
- Published
- 2007
42. Application of computational quality attributes in a distributed application environment
- Author
-
Stubbs, Rodrick Keith
- Subjects
- Computer software -- Quality control, Electrical and Computer Engineering, Engineering, Dissertations, Academic -- Engineering; Engineering -- Dissertations, Academic
- Abstract
This item is only available in print in the UCF Libraries. If this is your thesis or dissertation, you can help us make it available online for use by researchers around the world; contact STARS for more information.
- Published
- 2002
43. Dependability as a computational quality attribute
- Author
-
Houchin, Charles Andrew
- Subjects
- Computer software -- Quality control, Software engineering, Electrical and Computer Engineering, Engineering, Systems and Communications, Dissertations, Academic -- Engineering; Engineering -- Dissertations, Academic
- Abstract
This item is only available in print in the UCF Libraries. If this is your thesis or dissertation, you can help us make it available online for use by researchers around the world; contact STARS for more information.
- Published
- 2002
44. La prueba de programas
- Author
-
Costa Romero de Tejada, Manuel and Facultat d'Informàtica de Barcelona
- Subjects
Programació (Ordinadors) ,Programari -- Control de qualitat ,Computer software -- Quality control ,Computer programming ,Informàtica::Programació [Àrees temàtiques de la UPC] - Published
- 1976
45. Data-Driven Elicitation, Assessment and Documentation of Quality Requirements in Agile Software Development
- Author
-
Jari Partanen, Andreas Jedlitschka, Xavier Franch, Cristina Gómez, Silverio Martínez-Fernández, Lidia López, Marc Oriol, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Computer science ,business.industry ,media_common.quotation_subject ,Dashboard (business) ,User story ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Programari àgil -- Desenvolupament ,020207 software engineering ,02 engineering and technology ,Computer software -- Quality control ,Programari -- Control de qualitat ,Data-driven ,Set (abstract data type) ,Documentation ,Order (business) ,NFR ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Quality (business) ,Software engineering ,business ,Agile software development ,Quality requirement ,media_common - Abstract
Quality Requirements (QRs) are difficult to manage in agile software development. Given the pressure to deploy fast, quality concerns are often sacrificed for the sake of richer functionality. Besides, artefacts such as user stories are not particularly well suited for representing QRs. In this exploratory paper, we envisage a data-driven method, called Q-Rapids, for QR elicitation, assessment and documentation in agile software development. Q-Rapids proposes: 1) The collection and analysis of design and runtime data in order to raise quality alerts; 2) The suggestion of candidate QRs to address these alerts; 3) A strategic analysis of the impact of such requirements by visualizing their effect on a set of indicators rendered in a dashboard; 4) The documentation of the requirements (if finally accepted) in the backlog. The approach is illustrated with scenarios evaluated through a questionnaire by experts from a telecom company.
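A minimal sketch of envisaged steps 1 and 2, raising a quality alert from runtime data and phrasing a candidate QR for the backlog, might look as follows; the metric, threshold and wording are invented and the actual Q-Rapids tooling is considerably richer.

# Hypothetical data-driven quality alert -> candidate quality requirement.
from statistics import mean

# Runtime data: response times (ms) collected for one service this sprint.
response_times_ms = [180, 220, 250, 900, 1200, 230, 210, 1500]

TARGET_SHARE_UNDER_500MS = 0.90   # invented quality threshold

share_fast = mean(1 if t < 500 else 0 for t in response_times_ms)

if share_fast < TARGET_SHARE_UNDER_500MS:
    alert = (f"Only {share_fast:.0%} of requests answered under 500 ms "
             f"(target {TARGET_SHARE_UNDER_500MS:.0%})")
    candidate_qr = ("The system shall answer at least 90% of requests "
                    "within 500 ms under normal load.")
    print("ALERT:", alert)
    print("Candidate QR for the backlog:", candidate_qr)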
- Full Text
- View/download PDF
46. Software Development Metrics Prediction Using Time Series Methods
- Author
-
Rafał Kozik, Michał Choraś, Xavier Franch, Witold Hołubowicz, Marek Pawlicki, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Computer software -- Development ,Time series ,Computer science ,media_common.quotation_subject ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,0211 other engineering and technologies ,Static program analysis ,02 engineering and technology ,Computer software -- Quality control ,Programari -- Control de qualitat ,computer.software_genre ,Software development process ,Software ,021105 building & construction ,0202 electrical engineering, electronic engineering, information engineering ,Autoregressive integrated moving average ,media_common ,Software engineering ,business.industry ,Software development ,Software quality ,Software metric ,Debugging ,Programari -- Desenvolupament ,020201 artificial intelligence & image processing ,Metrics ,Data mining ,business ,Prediction ,computer - Abstract
The software development process is an intricate task; the growing complexity of software solutions and the inflating code-line count erode code coherence and readability, which is one of the causes of software faults and of declining software quality. Debugging software during development is significantly less expensive than attempting damage control after the software's release. An automated quality-related analysis of developed code, which includes code analysis and the correlation of development data, therefore seems like an ideal solution. In this paper the ability to predict software faults and software quality is scrutinized. We investigate four models that can be used to analyze time-based data series for the prediction of trends observed in the software development process: Exponential Smoothing, the Holt-Winters Model, Autoregressive Integrated Moving Average (ARIMA) and Recurrent Neural Networks (RNN). Time-series analysis methods prove a good fit for software-related data prediction. Such methods and tools can lend a helping hand to Product Owners in their daily decision-making, e.g. for the assignment of tasks, time predictions, bug predictions, time to release, etc. Results of the research are presented.
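Two of the four models named above, Holt-Winters exponential smoothing and ARIMA, are available in statsmodels; the sketch below forecasts a synthetic weekly bug-count series with both. Data and model orders are invented for illustration, not those used in the paper.

# Forecasting a synthetic weekly bug-count series with Holt-Winters
# exponential smoothing and ARIMA (illustrative parameters only).
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
weeks = pd.date_range("2023-01-01", periods=60, freq="W")
bugs = pd.Series(20 + 0.3 * np.arange(60) + rng.normal(0, 3, 60), index=weeks)

train, test = bugs[:-8], bugs[-8:]

hw = ExponentialSmoothing(train, trend="add").fit()
hw_forecast = hw.forecast(len(test))

arima = ARIMA(train, order=(1, 1, 1)).fit()
arima_forecast = arima.forecast(len(test))

print("Holt-Winters MAE:", np.mean(np.abs(hw_forecast.values - test.values)).round(2))
print("ARIMA MAE:      ", np.mean(np.abs(arima_forecast.values - test.values)).round(2))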
- Full Text
- View/download PDF
47. Building and using quality models for complex software domains
- Author
-
Carvallo Vega, Juan Pablo, Franch Gutiérrez, Javier|||0000-0001-9733-8830, Quer, Carme|||0000-0002-9000-6371, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Informàtica::Sistemes d'informació [Àrees temàtiques de la UPC] ,Enginyeria de requisits ,Requirements engineering ,Computer software -- Quality control ,Quality models ,Programari -- Control de qualitat ,Package descriptions ,Quality requirements - Abstract
The use of quality models in software package procurement provides a framework for describing the domain to which the package belongs. Package descriptions and user quality requirements may be translated into the quality concepts defined in the model, making package procurement more efficient and reliable. In this paper we address the construction of quality models for complex software domains, defined as domains that imply a mixture of functionalities. Procurement processes taking place in complex domains require not a single package to be selected but a set of them. As a consequence, instead of a standard, single quality model, we need a more elaborate quality model for driving the simultaneous procurement of multiple software packages. We describe the parts that compose this kind of model, the methodology for building them and their usage in software procurement. We apply the approach to the complex domain of mail server systems.
48. J2EE instrumentation for software aging root cause application component determination with AspectJ
- Author
-
Alonso López, Javier, Torres Viñals, Jordi|||0000-0003-1963-7418, Berral García, Josep Lluís|||0000-0003-3037-3580, Gavaldà Mestre, Ricard|||0000-0003-4736-7179, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Universitat Politècnica de Catalunya. Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions, and Universitat Politècnica de Catalunya. LARCA - Laboratori d'Algorísmia Relacional, Complexitat i Aprenentatge
- Subjects
Aspect-oriented programming ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat - Abstract
Unplanned system outages have a negative impact on company revenues and image. While the last decades have seen a lot of effort from industry and academia to avoid them, they still happen and their impact is increasing. According to many studies, one of the most important causes of these outages is software aging. The software aging phenomenon refers to the accumulation of errors, usually provoking resource contention, during long-running application executions (such as web applications), which normally causes applications or systems to hang or crash. Determining the root cause of software aging, and not only the resource or resources involved, is a huge task due to the ever-growing complexity of systems. In this paper we present a monitoring framework based on aspect-oriented programming to monitor, at runtime, the resources used by every application component. Knowing the resources used by every component of the application, we can determine which components are related to the software aging. Furthermore, we present a case study where we evaluate our approach to determine, in a web application scenario, which components are involved in the software aging, with promising results.
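Purely as an analogy (in Python rather than AspectJ, and not the authors' framework), the sketch below wraps component methods with a decorator that records per-component memory growth, the kind of per-component signal that points at a software-aging root cause; the components and the leak are invented.

# Python analogy of aspect-style instrumentation: record per-component
# memory growth around each call to spot the component that "ages".
import functools
import tracemalloc
from collections import defaultdict

memory_growth = defaultdict(int)   # component name -> net allocated bytes

def monitored(component):
    """Decorator standing in for an around-advice on component methods."""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            before, _ = tracemalloc.get_traced_memory()
            result = func(*args, **kwargs)
            after, _ = tracemalloc.get_traced_memory()
            memory_growth[component] += after - before
            return result
        return inner
    return wrap

_cache = []

@monitored("SessionBean")
def handle_request(payload):
    _cache.append(payload * 1000)   # invented leak: the cache is never cleared
    return len(_cache)

@monitored("Renderer")
def render(page):
    return page.upper()             # retains nothing between calls

tracemalloc.start()
for i in range(200):
    handle_request(f"req-{i}")
    render("home")
print(sorted(memory_growth.items(), key=lambda kv: -kv[1]))  # SessionBean dominates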
49. Determining criteria for selecting software components: lessons learned
- Author
-
Xavier Franch, Juan Pablo Carvallo, Carme Quer, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Programació orientada a l'objecte (Informàtica) ,Collaborative software ,Computer science ,business.industry ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Software development ,Software quality ,Computer software -- Quality control ,Programari -- Control de qualitat ,computer.software_genre ,Software selection ,Object-oriented programming (Computer science) ,Workflow ,Component-based software engineering ,Decisió, Presa de ,Data mining ,Software engineering ,business ,Completeness (statistics) ,computer ,Decision making ,Software - Abstract
Software component selection is growing in importance. Its success relies on correctly assessing the candidate components' quality. For a particular project, you can assess quality by identifying and analyzing the criteria that affect it. Component selection thus depends on the suitability and completeness of the criteria used for evaluation. Experiences from determining criteria for several industrial projects provide important lessons. For a particular selection process, you can organize selection criteria into a criteria catalog. A CC is built for a scope, which can be either a domain (workflow systems, mail servers, antivirus tools, and so on) or a category of domains (communication infrastructure, collaboration software, and so on). Structurally, a CC arranges selection criteria in a hierarchical tree-like structure. The higher-level selection criteria serve to classify more concrete selection criteria, usually allowing some overlap. They also serve to leverage the CC.
50. Using quality models in software package selection
- Author
-
Xavier Franch, Juan Pablo Carvallo, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, and Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
- Subjects
Requirement ,Business requirements ,Procurement ,Computer science ,Maintenance ,Informàtica::Enginyeria del software [Àrees temàtiques de la UPC] ,Usability ,Software requirements specification ,Software quality ,Requirements elicitation ,Computer software -- Quality control ,Programari -- Control de qualitat ,User requirements document ,Software ,Formal specification ,Software quality analyst ,Non-functional testing ,IEC standards ,Context modeling ,Software requirements ,Software verification and validation ,Requirements analysis ,Programari -- Avaluació ,Social software engineering ,business.industry ,Software standards ,Software development ,Software metric ,ISO standards ,Packaging ,Requirement prioritization ,Software construction ,Software quality management ,Package development process ,Software design ,Software packages ,Computer software -- Evaluation ,Software engineering ,business ,Software quality control - Abstract
The growing importance of commercial off-the-shelf software packages requires adapting some software engineering practices, such as requirements elicitation and testing, to this emergent framework. Also, some specific new activities arise, among which selection of software packages plays a prominent role. All the methodologies that have been proposed recently for choosing software packages compare user requirements with the packages' capabilities. There are different types of requirements, such as managerial, political, and, of course, quality requirements. Quality requirements are often difficult to check. This is partly due to their nature, but there is another reason that can be mitigated, namely the lack of structured and widespread descriptions of package domains (that is, categories of software packages such as ERP systems, graphical or data structure libraries, and so on). This absence hampers the accurate description of software packages and the precise statement of quality requirements, and consequently overall package selection and confidence in the result of the process. Our methodology for building structured quality models helps solve this drawback.