1. Which Generated Test Failures Are Fault Revealing? Prioritizing Failures Based on Inferred Precondition Violations Using PAF
- Authors
- Shing-Chi Cheung, Sunghun Kim, and Mijung Kim
- Subjects
Unit testing, Similarity, Dataflow, Computer science, Software engineering, Fault, Field (computer science), Precondition, Null, False alarm, Data mining
- Abstract
Automated unit testing tools, such as Randoop, have been developed to produce failing tests as a means of finding faults. However, these tools often produce false alarms and thus are not widely used in practice. The main reason for a false alarm is that the generated failing test violates an implicit precondition of the method under test, such as a requirement that a field not be null at the entry of the method. Such a condition is not explicitly programmed or documented but is implicitly assumed by developers. To address this limitation, we propose a technique called PAF that clusters generated test failures arising from the same cause and reorders them based on their likelihood of violating an implicit precondition of the method under test. From various test executions, PAF observes the dataflows into the variables whose values are used when the program fails. Based on the dataflow similarity and on where these values originate, PAF clusters failures and determines their likelihood of being fault revealing. We integrated PAF into Randoop. Our empirical results on open-source projects show that PAF effectively clusters fault-revealing tests arising from the same fault and successfully prioritizes the fault-revealing ones.
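The kind of false alarm the abstract describes can be sketched in a few lines. The example below is hypothetical (the class, field, and method names are not from the paper): a method implicitly assumes a field is non-null at entry, and a Randoop-style generated call sequence that never initializes the field triggers a failure that reveals no real fault.

```java
// Hypothetical illustration of an implicit-precondition false alarm.
public class Account {
    private String owner; // developers implicitly assume this is set first

    public void setOwner(String o) {
        this.owner = o;
    }

    // Implicit precondition: owner != null. It is neither checked nor
    // documented, so a test generator is free to violate it.
    public int ownerNameLength() {
        return owner.length(); // throws NullPointerException if violated
    }

    public static void main(String[] args) {
        // A generated-style sequence that skips setOwner(...):
        Account a = new Account();
        try {
            a.ownerNameLength(); // fails, but not because of a real fault
            System.out.println("no failure");
        } catch (NullPointerException e) {
            // This failing test violates the implicit precondition, so a
            // technique like PAF should rank it low as a likely false alarm.
            System.out.println("false alarm: implicit precondition violated");
        }
    }
}
```

A fault-revealing failure, by contrast, would occur even on inputs whose dataflows resemble those of passing executions; clustering failures by where the offending values originate is what lets PAF separate the two cases.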
- Published
- 2018