# bears-benchmark

**Repository Path**: dehengyang/bears-benchmark

## Basic Information

- **Project Name**: bears-benchmark
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: GPL-3.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2021-03-23
- **Last Updated**: 2025-06-14

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# The Bears-Benchmark

The Bears-Benchmark, or just Bears, is a benchmark of bugs for automatic program repair studies in Java. The bugs are collected from open-source projects hosted on GitHub through a process that scans pairs of builds from the Travis continuous integration service and reproduces each bug (via a test failure) and its patch (via a passing test suite).

If you use Bears, please cite our paper:

```bibtex
@inproceedings{Madeiral2019,
  author    = {Fernanda Madeiral and Simon Urli and Marcelo Maia and Martin Monperrus},
  title     = {{Bears: An Extensible Java Bug Benchmark for Automatic Program Repair Studies}},
  booktitle = {Proceedings of the 26th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER '19)},
  year      = {2019},
  url       = {https://arxiv.org/abs/1901.06024}
}
```

## Bug Repository Design

Each bug is stored in its own branch. The name of each branch follows the pattern `<GitHub project slug>-<buggy build ID>-<patched build ID>`. The files and data on a bug are organized in commits in the branch of the bug as follows:

- Commit #1 contains the version of the program with the bug
- Commit #2 contains the changes in the tests
- Commit #3 contains the version of the program with the human-written patch
- Commit #4 contains the metadata file `bears.json`, which gathers the information collected during the bug reproduction process. It contains information about the bug (e.g. test failure names), the patch (e.g. patch size), and the bug reproduction process (e.g. duration).
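Since `bears.json` is plain JSON, it can be inspected with standard tooling. The sketch below loads a metadata file and pulls out a few values; the field names and values shown (`bugId`, `tests.failureNames`, `patch.size`, `reproduction.durationInSeconds`) are illustrative assumptions, not the exact `bears.json` schema.

```python
# Minimal sketch: reading a per-bug bears.json metadata file.
# NOTE: the field names below are hypothetical placeholders, not the
# actual bears.json schema; adapt them to the real file contents.
import json

# Inline sample standing in for the contents of a bears.json file.
sample = """
{
  "bugId": "Bears-1",
  "tests": {"failureNames": ["org.example.FooTest#testBar"]},
  "patch": {"size": 12},
  "reproduction": {"durationInSeconds": 540}
}
"""

metadata = json.loads(sample)
print(metadata["bugId"])                         # which bug this branch holds
print(metadata["tests"]["failureNames"])         # failing tests that expose the bug
print(metadata["patch"]["size"])                 # size of the human-written patch
```

In practice one would replace `sample` with `open(".../bears.json").read()` on a checked-out bug branch.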
## How to use

The [scripts](scripts/) folder contains scripts to check out bugs, compile them, and run their tests.

You can check out all bugs at once with

```bash
python scripts/checkout_all.py --workspace <workspace path>
```

or you can check out a single bug by giving a bug ID (see [bug IDs](https://bears-bugs.github.io/bears-benchmark)) as an argument with

```bash
python scripts/checkout_bug.py --bugId <bug ID> --workspace <workspace path>
```

In the same way as you check out one or all bugs, you can compile the checked-out buggy program versions and run tests on them using the scripts `compile_all.py`, `compile_bug.py`, `run_tests_all.py`, and `run_tests_bug.py`.

## How to contribute

Any contribution is very welcome! The types of contribution include, but are not limited to:

- Proposing bugs to be added to Bears (specific instructions coming soon)
- Flagging possibly incorrect branches added to Bears
- Participating in the manual validation of bugs/branches, i.e. reviewing open pull requests

## Contact

Feel free to create issues in this repository with questions, or drop an email to fer.madeiral@gmail.com
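Putting the scripts above together, a typical per-bug workflow is checkout, compile, then run the tests. The sketch below builds those three command lines from Python; the bug ID and workspace path are illustrative placeholders, and the execution call is left commented out so the sketch only shows the intended invocations.

```python
# Sketch of a per-bug workflow driving the helper scripts described above.
# "Bears-1" and "/tmp/bears-workspace" are hypothetical example values.
bug_id = "Bears-1"
workspace = "/tmp/bears-workspace"

steps = [
    f"python scripts/checkout_bug.py --bugId {bug_id} --workspace {workspace}",
    f"python scripts/compile_bug.py --bugId {bug_id} --workspace {workspace}",
    f"python scripts/run_tests_bug.py --bugId {bug_id} --workspace {workspace}",
]

for command in steps:
    print(command)
    # To actually execute from the repository root, one could use, e.g.:
    # subprocess.run(shlex.split(command), check=True)
```

The `*_all.py` variants follow the same shape, dropping the `--bugId` argument to process every bug in the benchmark.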