Server Handling

This module provides server abstraction layers for different LLM inference backends.

Reasoning Model Support

The ReasoningConfig class enables support for reasoning/thinking models across different providers.

Provider Differences

| Feature | OpenAI | OpenRouter / Others |
|---|---|---|
| Format | `{"reasoning_effort": "high"}` | `{"reasoning": {"enabled": true, "effort": "high"}}` |
| Effort levels | `none`, `minimal`, `low`, `medium`, `high`, `xhigh` | `none`, `minimal`, `low`, `medium`, `high`, `xhigh` |
| Max tokens | Not supported | `{"reasoning": {"max_tokens": 16000}}` |
| Temperature | Must be 1.0 | No restriction |
| Token param | `max_completion_tokens` | `max_tokens` |
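The format difference above can be sketched as a small helper. This is an illustrative sketch, not the actual atroposlib API; the function name and provider strings are assumptions.

```python
def build_reasoning_params(provider: str, effort: str) -> dict:
    """Return reasoning-related kwargs for a chat completion request (sketch)."""
    if provider == "openai":
        # OpenAI takes a flat string field alongside max_completion_tokens.
        return {"reasoning_effort": effort}
    # OpenRouter and other providers nest the config under a "reasoning" object.
    return {"reasoning": {"enabled": True, "effort": effort}}
```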

Effort Level to Token Mapping

When a provider does not support effort strings, each effort level maps to an approximate token budget (based on a 32,000-token base):

| Effort | Tokens | Percentage |
|---|---|---|
| none | 1,024 | Minimum |
| minimal | 3,200 | ~10% |
| low | 6,400 | ~20% |
| medium | 16,000 | ~50% |
| high | 25,600 | ~80% |
| xhigh | 30,400 | ~95% |
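The mapping above can be reproduced with a simple lookup. This is an illustrative sketch of the table, assuming a 32,000-token base budget; the names `EFFORT_FRACTIONS` and `effort_to_tokens` are hypothetical, not part of the module.

```python
# Fractions of the base budget, matching the table above (hypothetical names).
EFFORT_FRACTIONS = {
    "minimal": 0.10,
    "low": 0.20,
    "medium": 0.50,
    "high": 0.80,
    "xhigh": 0.95,
}


def effort_to_tokens(effort: str, base: int = 32_000) -> int:
    """Map an effort level to an approximate reasoning-token budget."""
    if effort == "none":
        return 1_024  # hard floor rather than a fraction of the base
    return int(base * EFFORT_FRACTIONS[effort])
```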

Provider Token Limits

  • OpenRouter: Caps Anthropic reasoning at 1,024-32,000 tokens (per the OpenRouter docs)
  • Native Anthropic: Supports up to 128k extended thinking tokens
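A requested budget can be clamped to the OpenRouter range above. A minimal sketch, assuming the 1,024-32,000 limits from the docs; the helper name is hypothetical.

```python
# Documented OpenRouter limits for Anthropic reasoning tokens.
OPENROUTER_ANTHROPIC_MIN = 1_024
OPENROUTER_ANTHROPIC_MAX = 32_000


def clamp_openrouter_budget(requested: int) -> int:
    """Clamp a requested reasoning budget into OpenRouter's allowed range."""
    return max(OPENROUTER_ANTHROPIC_MIN,
               min(requested, OPENROUTER_ANTHROPIC_MAX))
```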

Usage

Reasoning is injected only for chat completions (not the completions or logprobs APIs).

```python
# Via environment config
config = BaseEnvConfig(
    thinking_mode=True,
    reasoning_effort="high",
    max_reasoning_tokens=16000,
)

# Direct ReasoningConfig
reasoning_config = ReasoningConfig(
    enabled=True,
    effort="high",
    max_tokens=16000,
)
```

Bypassing Reasoning Injection

Pass skip_reasoning=True to any chat completion call:

```python
await server.chat_completion(messages=messages, skip_reasoning=True)
```

Important Constraints

  1. OpenRouter: Accepts only ONE of effort or max_tokens, not both. When both are specified, effort takes priority.
  2. OpenAI: All effort levels are passed through directly.
  3. Auto-enable: Setting effort or max_tokens automatically enables reasoning mode.
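The priority and auto-enable rules above can be sketched as follows. This is a hypothetical helper illustrating the constraints, not the module's actual resolution code.

```python
def resolve_reasoning(effort=None, max_tokens=None) -> dict:
    """Resolve a reasoning config per the constraints above (sketch)."""
    # Auto-enable: setting either field turns reasoning on.
    enabled = effort is not None or max_tokens is not None
    config = {"enabled": enabled}
    if effort is not None:
        config["effort"] = effort          # effort takes priority on OpenRouter
    elif max_tokens is not None:
        config["max_tokens"] = max_tokens  # used only when effort is absent
    return config
```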