Padres: Spatial RL Environment

Video Demo

Watch the demo video

Environment Design & Motivation

Padres is a 3D spatial reasoning environment that challenges LLMs to understand and manipulate objects in a simulated 3D world. The environment uses PyBullet for physics simulation and integrates with LLMs for task generation and execution. The primary goal is to test and improve LLMs' spatial reasoning capabilities through interactive tasks that require understanding of relative positioning, object manipulation, and spatial relationships.
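To make the task structure concrete, here is a minimal sketch of how a spatial task and a relationship check might be represented. The names (`SpatialTask`, `is_left_of`) are illustrative assumptions; the actual classes and predicates live in spatial_env.py and may differ.

```python
from dataclasses import dataclass

@dataclass
class SpatialTask:
    # Hypothetical task description; field names are assumptions,
    # not the exact schema used by spatial_env.py.
    object_id: str
    target_position: tuple   # (x, y, z) in world coordinates
    relation: str            # e.g. "left_of", "above"
    reference_id: str

def is_left_of(pos, ref_pos, axis=0):
    """True if pos lies on the negative side of ref_pos along the given axis."""
    return pos[axis] < ref_pos[axis]

task = SpatialTask("cube_1", (0.0, 0.5, 0.0), "left_of", "sphere_1")
print(is_left_of((-0.3, 0.5, 0.0), (0.2, 0.5, 0.0)))  # True
```

An LLM's proposed action can then be validated by checking the relevant predicate against the post-step object poses from the physics simulation.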

Quickstart

  1. Install dependencies:
pip install -r requirements.txt
  2. Set up environment variables:
cp .env.example .env
# Add your OpenAI API key to .env
  3. Run the environment:
python spatial_env.py
  4. View the visualization:
cd visualization
python3 -m http.server 8080

Then visit http://localhost:8080

W&B Integration & Metrics

View the latest run here

Key metrics tracked:

  • Task completion score (0-1)
  • Final object distance
  • Spatial condition satisfaction
  • Action success rate
  • LLM response time
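The metrics above can be assembled into a single dictionary per episode and logged in one call. The sketch below is an assumption about how such a helper might look; the distance-to-score mapping (`1 - d`) and the key names are illustrative, not the exact formulas used by the environment.

```python
import math

def task_metrics(final_pos, target_pos, condition_met,
                 actions_ok, actions_total, llm_seconds):
    """Assemble one episode's metrics; formulas are illustrative."""
    d = math.dist(final_pos, target_pos)
    # Assumes distances are roughly normalized to 1 world unit.
    score = max(0.0, 1.0 - d)
    return {
        "task_completion_score": score if condition_met else 0.0,
        "final_object_distance": d,
        "spatial_condition_satisfied": condition_met,
        "action_success_rate": actions_ok / actions_total,
        "llm_response_time_s": llm_seconds,
    }
```

A dictionary like this can be passed directly to `wandb.log(...)` at the end of each episode.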

Additional Details

The environment implements a reward function that balances:

  1. Proximity to target position
  2. Spatial relationship constraints
  3. Task completion verification

The current implementation focuses on basic spatial tasks but is designed to be extensible for more complex scenarios. The reward function is structured to prevent common reward hacking strategies by requiring both position accuracy and spatial relationship satisfaction.
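The structure described above can be sketched as follows. This is a hedged approximation, not the environment's actual reward code: the weights, normalization constant, and the verification gate are all assumptions chosen to show how position accuracy and spatial-relationship satisfaction can be required jointly.

```python
import math

def reward(final_pos, target_pos, constraint_satisfied, task_verified,
           max_dist=1.0, w_prox=0.5, w_constraint=0.5):
    """Illustrative reward combining proximity, spatial constraint,
    and task verification; weights and scaling are assumptions."""
    d = math.dist(final_pos, target_pos)
    proximity = max(0.0, 1.0 - d / max_dist)
    shaped = w_prox * proximity + w_constraint * float(constraint_satisfied)
    # Gate on verification to resist reward hacking: shaped partial
    # credit alone is capped below the full reward.
    return shaped if task_verified else min(shaped, 0.5)
```

Gating the shaped terms on a separate verification check means an agent cannot reach maximum reward by optimizing proximity alone while ignoring the spatial constraint, which is the hacking pattern the section above describes.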