STVR: Seeding strategies in search-based unit test generation

  • [PDF] [DOI] J. M. Rojas, G. Fraser, and A. Arcuri, “Seeding strategies in search-based unit test generation,” Software Testing, Verification and Reliability, p. n/a–n/a, 2016.
    [Bibtex]
    @article{STVR_seeding,
    author = {Rojas, Jos{\'e} Miguel and Fraser, Gordon and Arcuri, Andrea},
    title = {Seeding strategies in search-based unit test generation},
    journal = {Software Testing, Verification and Reliability},
    issn = {1099-1689},
    url = {http://dx.doi.org/10.1002/stvr.1601},
    doi = {10.1002/stvr.1601},
    pages = {n/a--n/a},
    keywords = {test case generation, search-based testing, testing classes, search-based software engineering, JUnit, Java},
    year = {2016},
    }

EMSE: A detailed investigation of the effectiveness of whole test suite generation

  • [PDF] [DOI] J. M. Rojas, M. Vivanti, A. Arcuri, and G. Fraser, “A detailed investigation of the effectiveness of whole test suite generation,” Empirical Software Engineering, pp. 1-42, 2016.
    [Bibtex]
    @article{emse16_effectiveness,
    author = {Rojas, Jos{\'e} Miguel and Vivanti, Mattia and Arcuri, Andrea and Fraser, Gordon},
    title = {A detailed investigation of the effectiveness of whole test suite generation},
    journal = {Empirical Software Engineering},
    year = {2016},
    pages = {1--42},
    abstract = {A common application of search-based software testing is to generate test cases for all goals defined by a coverage criterion (e.g., lines, branches, mutants). Rather than generating one test case at a time for each of these goals individually, whole test suite generation optimizes entire test suites towards satisfying all goals at the same time. There is evidence that the overall coverage achieved with this approach is superior to that of targeting individual coverage goals. Nevertheless, there remains some uncertainty on (a) whether the results generalize beyond branch coverage, (b) whether the whole test suite approach might be inferior to a more focused search for some particular coverage goals, and (c) whether generating whole test suites could be optimized by only targeting coverage goals not already covered. In this paper, we perform an in-depth analysis to study these questions. An empirical study on 100 Java classes using three different coverage criteria reveals that indeed there are some testing goals that are only covered by the traditional approach, although their number is only very small in comparison with those which are exclusively covered by the whole test suite approach. We find that keeping an archive of already covered goals along with the tests covering them and focusing the search on uncovered goals overcomes this small drawback on larger classes, leading to an improved overall effectiveness of whole test suite generation.},
    issn = {1573-7616},
    doi = {10.1007/s10664-015-9424-2},
    url = {http://dx.doi.org/10.1007/s10664-015-9424-2}
    }

New 1.0.3 release

A new version 1.0.3 of EvoSuite has now been released. This is the version used at this year’s SBST tool competition.

Besides a number of bug fixes and performance improvements, this release also includes test naming: tests are no longer named with plain numbers (test1, test2, etc.) but according to what they cover (testFoo, testBarReturnsTrue, etc.). To try this feature, run EvoSuite with the option -Dtest_naming_strategy=coverage.
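For example, a command line along these lines would enable the feature (the class name and classpath here are placeholders for your own project; the evosuite.jar path depends on where you downloaded the release):

```shell
# Generate tests for a target class with coverage-based test naming enabled.
# com.example.Stack and build/classes are hypothetical; substitute your own
# class under test and compiled-classes directory.
java -jar evosuite.jar -class com.example.Stack -projectCP build/classes \
  -Dtest_naming_strategy=coverage
```

With this option, the generated JUnit test suite uses descriptive method names derived from the coverage goals each test satisfies, instead of the default numbered names.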

Release data is available on the 1.0.3 release results page.

Best paper awards

Work done on EvoSuite led to a Best Paper with Industry-Relevant SBSE Results Award at SSBSE’15 for “Combining Multiple Coverage Criteria in Search-Based Unit Test Generation”, and an ACM Distinguished Paper Award at ASE’15 for “Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges”. See Publications for PDF copies.

ISSTA 2015: Automated Unit Test Generation during Software Development: A Controlled Experiment and Think-Aloud Observations

  • [PDF] J. M. Rojas, G. Fraser, and A. Arcuri, “Automated Unit Test Generation during Software Development: A Controlled Experiment and Think-Aloud Observations,” in Proceedings of the 2015 International Symposium on Software Testing and Analysis, 2015, pp. 338-349.
    [Bibtex]
    @inproceedings{ISSTA15_Study,
     author = {Jos{\'e} Miguel Rojas and Gordon Fraser and Andrea Arcuri},
     title = {Automated Unit Test Generation during Software Development: A Controlled Experiment and Think-Aloud Observations},
     booktitle = {Proceedings of the 2015 International Symposium on Software Testing and Analysis},
     series = {ISSTA '15},
     year = {2015},
     publisher = {ACM},
     pages={338--349},
    }

EvoSuite at the SBST 2015 competition

  • [PDF] G. Fraser and A. Arcuri, “EvoSuite at the SBST 2015 Tool Competition,” in 8th International Workshop on Search-Based Software Testing (SBST’15) at ICSE’15, 2015.
    [Bibtex]
    @inproceedings{SBST15_competition,
      author    = {Gordon Fraser and Andrea Arcuri},
      title     = {EvoSuite at the SBST 2015 Tool Competition},
      booktitle = {8th International Workshop on Search-Based Software Testing (SBST'15) at ICSE'15},
      year      = {2015},
      note = {To appear}
    }