ICST 2016: Unit Test Generation During Software Development: EvoSuite Plugins for Maven, IntelliJ and Jenkins

  • [PDF] A. Arcuri, J. Campos, and G. Fraser, “Unit Test Generation During Software Development: EvoSuite Plugins for Maven, IntelliJ and Jenkins,” in IEEE International Conference on Software Testing, Verification and Validation (ICST), 2016, pp. 401-408.
    [Bibtex]
    @inproceedings{ICST16_Tool, 
      author={A. Arcuri and J. Campos and G. Fraser}, 
      booktitle={IEEE International Conference on Software Testing, Verification and Validation (ICST)}, 
      title={Unit Test Generation During Software Development: EvoSuite Plugins for Maven, IntelliJ and Jenkins}, 
      year={2016}, 
      pages={401--408},
      publisher = {IEEE Computer Society},
    }
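
The paper shows how test generation can be invoked directly from the build and CI toolchain via these plugins. As a rough sketch of the Maven side, assuming the plugin coordinates and goal documented on the EvoSuite website (the version number is a placeholder):

    <!-- pom.xml: declare the EvoSuite Maven plugin -->
    <build>
      <plugins>
        <plugin>
          <groupId>org.evosuite.plugins</groupId>
          <artifactId>evosuite-maven-plugin</artifactId>
          <version>1.0.3</version>
        </plugin>
      </plugins>
    </build>

Tests for the whole project can then be generated with mvn evosuite:generate; see the paper for the IntelliJ and Jenkins sides of the workflow.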

EvoSuite wins the SBST 2016 tool competition

EvoSuite achieved the highest overall score of all competing tools at the SBST 2016 Unit Testing Tool Competition. For more details, read the following paper:

  • [PDF] G. Fraser and A. Arcuri, “EvoSuite at the SBST 2016 Tool Competition,” in 9th International Workshop on Search-Based Software Testing (SBST’16) at ICSE’16, 2016, pp. 33-36.
    [Bibtex]
    @inproceedings{SBST16_competition,
      author    = {Gordon Fraser and Andrea Arcuri},
      title     = {EvoSuite at the SBST 2016 Tool Competition},
      booktitle = {9th International Workshop on Search-Based Software Testing (SBST'16) at ICSE'16},
      year      = {2016},
      pages     = {33--36},
    }

STVR: Seeding strategies in search-based unit test generation

  • [PDF] [DOI] J. M. Rojas, G. Fraser, and A. Arcuri, “Seeding strategies in search-based unit test generation,” Software Testing, Verification and Reliability, p. n/a–n/a, 2016.
    [Bibtex]
    @article{STVR_seeding,
    author = {Rojas, Jos{\'e} Miguel and Fraser, Gordon and Arcuri, Andrea},
    title = {Seeding strategies in search-based unit test generation},
    journal = {Software Testing, Verification and Reliability},
    issn = {1099-1689},
    url = {http://dx.doi.org/10.1002/stvr.1601},
    doi = {10.1002/stvr.1601},
    pages = {n/a--n/a},
    keywords = {test case generation, search-based testing, testing classes, search-based software engineering, JUnit, Java},
    year = {2016},
    }

EMSE: A detailed investigation of the effectiveness of whole test suite generation

  • [PDF] [DOI] J. M. Rojas, M. Vivanti, A. Arcuri, and G. Fraser, “A detailed investigation of the effectiveness of whole test suite generation,” Empirical Software Engineering, pp. 1-42, 2016.
    [Bibtex]
    @Article{emse16_effectiveness,
    author="Rojas, Jos{\'e} Miguel
    and Vivanti, Mattia
    and Arcuri, Andrea
    and Fraser, Gordon",
    title="A detailed investigation of the effectiveness of whole test suite generation",
    journal="Empirical Software Engineering",
    year="2016",
    pages="1--42",
    abstract="A common application of search-based software testing is to generate test cases for all goals defined by a coverage criterion (e.g., lines, branches, mutants). Rather than generating one test case at a time for each of these goals individually, whole test suite generation optimizes entire test suites towards satisfying all goals at the same time. There is evidence that the overall coverage achieved with this approach is superior to that of targeting individual coverage goals. Nevertheless, there remains some uncertainty on (a) whether the results generalize beyond branch coverage, (b) whether the whole test suite approach might be inferior to a more focused search for some particular coverage goals, and (c) whether generating whole test suites could be optimized by only targeting coverage goals not already covered. In this paper, we perform an in-depth analysis to study these questions. An empirical study on 100 Java classes using three different coverage criteria reveals that indeed there are some testing goals that are only covered by the traditional approach, although their number is only very small in comparison with those which are exclusively covered by the whole test suite approach. We find that keeping an archive of already covered goals along with the tests covering them and focusing the search on uncovered goals overcomes this small drawback on larger classes, leading to an improved overall effectiveness of whole test suite generation.",
    issn="1573-7616",
    doi="10.1007/s10664-015-9424-2",
    url="http://dx.doi.org/10.1007/s10664-015-9424-2"
    }
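
The archive-based variant described at the end of the abstract can be illustrated concretely. Below is a minimal sketch in Java, not EvoSuite's actual implementation, using plain String identifiers for tests and coverage goals:

    // A minimal illustrative sketch of the test-archive idea from the abstract:
    // covered goals are stored together with the first test that covers them,
    // and the search only keeps targeting the goals that remain uncovered.
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class TestArchive {
        private final Map<String, String> coveringTestPerGoal = new HashMap<>();
        private final Set<String> uncoveredGoals;

        public TestArchive(Set<String> allGoals) {
            this.uncoveredGoals = new HashSet<>(allGoals);
        }

        // Record a test the first time it covers a goal; later tests for the
        // same goal are ignored, so the archive never loses coverage.
        public void update(String test, Set<String> goalsCoveredByTest) {
            for (String goal : goalsCoveredByTest) {
                if (uncoveredGoals.remove(goal)) {
                    coveringTestPerGoal.put(goal, test);
                }
            }
        }

        // The fitness function only needs to score these remaining goals.
        public Set<String> remainingGoals() {
            return uncoveredGoals;
        }

        // The final suite is assembled from the archived covering tests.
        public Map<String, String> archivedTests() {
            return coveringTestPerGoal;
        }
    }

During the search, every generated test is passed to update; the fitness function then only scores the goals in remainingGoals(), and the final suite is assembled from archivedTests().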

New 1.0.3 release

A new version 1.0.3 of EvoSuite has now been released. This is the version used at this year’s SBST tool competition.

Besides a number of bug fixes and performance improvements, this release adds test naming: tests are no longer named with numbers (test1, test2, etc.), but based on what they cover (testFoo, testBarReturnsTrue, etc.). If you want to try this feature, run EvoSuite with the option -Dtest_naming_strategy=coverage, as in the example below.
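
For example, with the standalone jar (the class name and classpath below are placeholders for your own project):

    java -jar evosuite-1.0.3.jar -class com.example.Foo \
         -projectCP target/classes \
         -Dtest_naming_strategy=coverage

Here -class selects the class under test and -projectCP points EvoSuite at the compiled classes; both are standard EvoSuite command-line options.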

Release data is available on the 1.0.3 release results page.

Best paper awards

Work done on EvoSuite led to a Best Paper with industry-relevant SBSE results Award at SSBSE’15 for “Combining Multiple Coverage Criteria in Search-Based Unit Test Generation”, and an ACM Distinguished Paper Award at ASE’15 for “Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges”. See Publications for PDF copies.