atropos/atroposlib/envs/server_handling
Commit e1ece3e64e by teknium: Add reasoning configuration support across server implementations
- Updated server classes (OpenAIServer, SGLangServer, TrlVllmServer, VLLMServer) to accept a ReasoningConfig parameter during initialization.
- Enhanced ReasoningConfig to allow flexible max_tokens without strict validation, accommodating varying provider limits.
- Implemented reasoning configuration injection in APIServer methods for chat and completion handling.
- Updated tests to reflect changes in max_tokens validation logic.

This commit integrates reasoning capabilities into the server handling architecture, improving compatibility with diverse reasoning models.
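As a rough illustration of the pattern this commit describes — a server class accepting an optional ReasoningConfig at init and injecting it into outgoing chat/completion requests — a minimal sketch follows. All names, fields, and signatures here (`ReasoningConfig`, `effort`, `_inject_reasoning`, the `"reasoning"` payload key) are hypothetical stand-ins, not the actual atroposlib API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReasoningConfig:
    """Hypothetical reasoning settings; field names are illustrative only."""
    effort: Optional[str] = None       # e.g. "low" / "medium" / "high"
    max_tokens: Optional[int] = None   # deliberately unvalidated: provider limits vary


class OpenAIServer:
    """Sketch of a server that accepts a ReasoningConfig during initialization."""

    def __init__(self, reasoning_config: Optional[ReasoningConfig] = None):
        self.reasoning_config = reasoning_config

    def _inject_reasoning(self, payload: dict) -> dict:
        """Merge configured reasoning settings into a request payload, if any."""
        cfg = self.reasoning_config
        if cfg is None:
            return payload
        reasoning = {}
        if cfg.effort is not None:
            reasoning["effort"] = cfg.effort
        if cfg.max_tokens is not None:
            reasoning["max_tokens"] = cfg.max_tokens
        if reasoning:
            payload = {**payload, "reasoning": reasoning}
        return payload


server = OpenAIServer(ReasoningConfig(effort="high", max_tokens=4096))
req = server._inject_reasoning({"model": "m", "messages": []})
```

The same injection hook would sit in each server variant (SGLang, TRL-vLLM, vLLM), which is presumably why the commit touches all four implementations at once.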
2026-01-05 23:20:01 +00:00
MANAGED_SERVER.md    Update MANAGED_SERVER.md                                             2025-11-12 07:22:40 +01:00
managed_server.py    made masked logprobs coherently decided on                           2025-10-29 10:52:38 -05:00
openai_server.py     Add reasoning configuration support across server implementations    2026-01-05 23:20:01 +00:00
server_baseline.py   Add reasoning configuration support across server implementations    2026-01-05 23:20:01 +00:00
server_harness.py    fix tests                                                            2025-10-29 10:55:10 -05:00
server_manager.py    Add reasoning configuration support across server implementations    2026-01-05 23:20:01 +00:00
sglang_server.py     Add reasoning configuration support across server implementations    2026-01-05 23:20:01 +00:00
trl_vllm_server.py   Add reasoning configuration support across server implementations    2026-01-05 23:20:01 +00:00
vllm_server.py       Add reasoning configuration support across server implementations    2026-01-05 23:20:01 +00:00