# 💪🧠 Reasoning Gym

Reasoning Gym is a community-created Python library of procedural dataset generators and algorithmically verifiable reasoning environments for training reasoning models with reinforcement learning (RL). The goal is to generate virtually infinite training data with adjustable complexity.

It currently provides more than 80 tasks across many domains, including algebra, arithmetic, computation, cognition, geometry, graph theory, logic, and common games.

Some tasks have a single correct answer, while others, such as Rubik's Cube and Countdown, have many correct solutions. To support this, we provide a standard interface for procedurally verifying solutions.

In [GALLERY.md](GALLERY.md), you can find example outputs of all datasets available in reasoning-gym.
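
Each dataset exposes configuration options that control difficulty; like `size` and `seed` in the example further below, they are passed as keyword arguments to `create_dataset`. The following is a minimal sketch of the idea only: the `min_animals` and `max_animals` parameter names are illustrative assumptions, so check each dataset's config class (or GALLERY.md) for the actual fields.

```python
import reasoning_gym

# NOTE: min_animals/max_animals are assumed parameter names for
# illustration -- each dataset documents its own config fields.
easy = reasoning_gym.create_dataset('leg_counting', size=10, seed=42, min_animals=1, max_animals=2)
hard = reasoning_gym.create_dataset('leg_counting', size=10, seed=42, min_animals=5, max_animals=10)
```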

## ⬇️ Installation

The reasoning-gym package requires Python >= 3.11.

Install the latest published package from PyPI via pip:

```bash
pip install reasoning-gym
```

Note that this project is currently under active development, and the version published on PyPI may be a few days behind `main`.
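
If you need changes that have not reached PyPI yet, pip can also install directly from the repository (standard pip syntax; this assumes the default branch is `main`, as noted above):

```bash
pip install git+https://github.com/open-thought/reasoning-gym.git
```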

## 🛠️ Development

For development setup, see [CONTRIBUTING.md](CONTRIBUTING.md).

## Example Usage

```python
import reasoning_gym

data = reasoning_gym.create_dataset('leg_counting', size=10, seed=42)
for i, x in enumerate(data):
    print(f'{i}: q="{x["question"]}", a="{x["answer"]}"')
    print('metadata:', x['metadata'])
    # use the dataset's `score_answer` method for algorithmic verification
    assert data.score_answer(answer=x['answer'], entry=x) == 1.0
```

Output:

```
0: q="How many legs are there in total if you have 1 sea slug, 1 deer?", a="4"
metadata: {'animals': {'sea slug': 1, 'deer': 1}, 'total_legs': 4}
1: q="How many legs are there in total if you have 2 sheeps, 2 dogs?", a="16"
metadata: {'animals': {'sheep': 2, 'dog': 2}, 'total_legs': 16}
2: q="How many legs are there in total if you have 1 crab, 2 lobsters, 1 human, 1 cow, 1 bee?", a="42"
...
```
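
Because `score_answer` verifies answers programmatically rather than by comparing against a single reference string, it can be used directly as an RL reward signal. A minimal sketch, assuming only what the example above shows (a correct answer scores 1.0) plus the assumption that incorrect answers score below that; partial-credit behaviour may vary by dataset:

```python
import reasoning_gym

data = reasoning_gym.create_dataset('leg_counting', size=1, seed=42)
entry = next(iter(data))

# The reference answer verifies as fully correct (as asserted above).
assert data.score_answer(answer=entry['answer'], entry=entry) == 1.0

# Assumption: an incorrect answer scores lower; the exact
# partial-credit behaviour varies by dataset.
assert data.score_answer(answer='999999', entry=entry) < 1.0
```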

## 🔍 Evaluation

Instructions for running the evaluation scripts are provided in [eval/README.md](eval/README.md).

Evaluation results of different reasoning models will be tracked in the reasoning-gym-eval repo.

## 👷 Contributing

Please see [CONTRIBUTING.md](CONTRIBUTING.md).

If you have ideas for dataset generators, please create an issue in this repository or contact us in the `#reasoning-gym` channel of the GPU-Mode Discord server.