Abstract
Software testing practitioners can choose from an array of techniques to test their software. Nevertheless, there is little empirical evidence about the capability of each technique to detect specific types of defect. As a result, when selecting and combining testing techniques for a project, practitioners must rely on their own experience. This paper studies the behaviour of two specific techniques, equivalence partitioning and decision coverage, to determine which types of defect are potentially undetectable by each one. It presents a differentiated replication based on a previous experimental design, but using different artifacts. The experiment confirms the hypothesis that some defect types are undetectable by each technique: even when a technique is applied correctly, some defects will be detected only by chance. This study adds new empirical evidence towards a classification of defects that takes technique detection capabilities into account.
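The contrast between the two techniques can be illustrated with a toy example (hypothetical code, not taken from the study): a seeded defect that full decision coverage does not force a tester to exercise, while equivalence partitioning, by demanding a representative from the invalid input class, does.

```python
# Hypothetical function under test. Assumed spec: ages 0-17 are "minor",
# 18 and above are "adult"; negative ages are invalid.
def classify_age(age):
    if age < 18:           # Defect: negative ages fall into this branch
        return "minor"     # instead of being rejected as invalid.
    return "adult"

# Decision coverage: one test per branch outcome suffices (true and false
# for the single decision). The defect can go undetected.
dc_cases = {10: "minor", 30: "adult"}

# Equivalence partitioning: one representative per input class, including
# the invalid class (negative ages), which exposes the defect.
ep_cases = {-1: "invalid", 10: "minor", 30: "adult"}

for age, expected in dc_cases.items():
    assert classify_age(age) == expected   # passes; 100% decision coverage

# The invalid-partition case fails against the spec:
assert classify_age(-1) == "minor"         # defect surfaces only here
```

This mirrors the paper's point in miniature: a correct application of decision coverage never obliges the tester to pick a negative age, so this defect class is detected only by chance under that technique.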
Original language | English |
---|---|
Title of host publication | SEKE 2014 |
Subtitle of host publication | Proceedings of the Twenty-Sixth International Conference on Software Engineering & Knowledge Engineering |
Place of Publication | Skokie |
Publisher | Knowledge Systems Institute Graduate School |
Pages | 106-109 |
Number of pages | 4 |
ISBN (Print) | 1891706357 |
Publication status | Published - 2014 |
Externally published | Yes |
Event | The 26th International Conference on Software Engineering & Knowledge Engineering, Hyatt Regency, Vancouver, Canada. Duration: 1 Jul 2014 → 3 Jul 2014 |
Conference
Conference | The 26th International Conference on Software Engineering & Knowledge Engineering |
---|---|
Abbreviated title | SEKE 2014 |
Country/Territory | Canada |
City | Vancouver |
Period | 1/07/14 → 3/07/14 |
Keywords
- software testing
- experiment
- defect detection