
Testing for Imperfect Debugging in Software Reliability

Authors :
Eric V. Slud
Source :
Scandinavian Journal of Statistics. 24:555-572
Publication Year :
1997
Publisher :
Wiley, 1997.

Abstract

This paper continues the study of the software reliability model of Fakhre-Zakeri & Slud (1995), an "exponential order statistic model" in the sense of Miller (1986) with general mixing distribution, imperfect debugging, and large-sample asymptotics reflecting the increase of the initial number of bugs with software size. The parameters of the model are μ (proportional to the initial number of bugs in the software), G(·, θ) (the mixing d.f., with finite-dimensional unknown parameter θ, for the rates λ_i with which the bugs in the software cause observable system failures), and p (the probability with which a detected bug is instantaneously replaced by another bug instead of being removed). Maximum likelihood estimation theory for (μ, θ, p) is applied to construct a likelihood-based score test, for large-sample data, of the hypothesis of "perfect debugging" (p = 0) vs "imperfect" (p > 0) within the models studied. There are important models (including the Jelinski-Moranda) under which the score statistics with 1/√n normalization are asymptotically degenerate. These statistics, illustrated on a software reliability data set of Musa (1980), can nevertheless serve as important diagnostics for the inadequacy of simple models.
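
As a rough illustration of the model class described above (not code from the paper), the minimal Python sketch below simulates observable failure times from an exponential order statistic model with imperfect debugging: each bug's rate λ_i is drawn from the mixing d.f. G(·, θ), and on detection the bug is replaced with probability p or removed with probability 1 - p. The function and parameter names (simulate_failures, rate_sampler, n_bugs, t_max) are hypothetical; the Jelinski-Moranda special case shown corresponds to a degenerate mixing distribution with all rates equal, and setting p = 0 recovers perfect debugging.

import numpy as np

def simulate_failures(n_bugs, rate_sampler, p, t_max, rng):
    # n_bugs:       initial number of bugs (grows with software size)
    # rate_sampler: draws failure rates lambda_i i.i.d. from the mixing d.f. G(., theta)
    # p:            probability a detected bug is replaced by a new bug (imperfect debugging)
    # t_max:        observation horizon
    rates = list(rate_sampler(n_bugs, rng))
    t, failures = 0.0, []
    while rates:
        probs = np.asarray(rates, dtype=float)
        total = probs.sum()
        probs = probs / total
        t += rng.exponential(1.0 / total)       # time to next observable failure
        if t > t_max:
            break
        i = rng.choice(len(rates), p=probs)     # failing bug chosen proportionally to its rate
        failures.append(t)
        if rng.random() < p:
            rates[i] = rate_sampler(1, rng)[0]  # imperfect debugging: bug replaced, new rate drawn from G
        else:
            rates.pop(i)                        # perfect debugging: bug removed
    return failures

rng = np.random.default_rng(0)
jm_rates = lambda k, rng: np.full(k, 0.02)      # Jelinski-Moranda case: degenerate mixing d.f.
times = simulate_failures(n_bugs=100, rate_sampler=jm_rates, p=0.1, t_max=200.0, rng=rng)
print(len(times), "observable failures by t_max")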

Details

ISSN :
1467-9469 and 0303-6898
Volume :
24
Database :
OpenAIRE
Journal :
Scandinavian Journal of Statistics
Accession number :
edsair.doi...........b9ba0f0a92260dce427ee2840cd0c7f0
Full Text :
https://doi.org/10.1111/1467-9469.00081