An empirical evaluation of evolutionary algorithms for unit test suite generation

  • José Campos
  • Yan Ge
  • Nasser Albunian
  • Gordon Fraser*
  • Marcelo Eler
  • Andrea Arcuri

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Context
Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many specific aspects of these algorithms have been evaluated in detail (e.g., test length and different kinds of techniques aimed at improving performance, like seeding), the influence of the choice of evolutionary algorithm has to date seen less attention in the literature.

Objective
Since it is theoretically impossible to design an algorithm that is best on all possible problems, a common approach in software engineering is to first apply the most widely used algorithm, a genetic algorithm, and only afterwards refine it or compare it against alternative algorithms to determine whether any of them is better suited to the problem at hand. The objective of this paper is to perform this analysis, in order to shed light on the influence of the search algorithm applied for unit test generation.

Method
We empirically evaluate thirteen different evolutionary algorithms and two random approaches on a selection of non-trivial open source classes. All algorithms are implemented in the EvoSuite test generation tool, which includes recent optimisations such as the use of an archive during the search and optimisation for multiple coverage criteria.

Results
Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that the DynaMOSA many-objective search algorithm is the most effective algorithm for unit test generation.

Conclusion
Our results show that the choice of algorithm can have a substantial influence on the performance of whole test suite optimisation. Although we can make a recommendation on which algorithm to use in practice, no algorithm is clearly superior in all cases, suggesting future work on improved search algorithms for unit test generation.
Original language: English
Pages (from-to): 207-235
Number of pages: 29
Journal: Information and Software Technology
Volume: 104
Early online date: 22 Aug 2018
DOIs
Publication status: Published - 31 Dec 2018
Externally published: Yes

Keywords

  • evolutionary algorithms
  • test suite generation
  • empirical study
