Commit graph

9 commits

Dakota · 7d6aeb9bbf · 2026-02-09 15:26:29 -06:00
  add tokenizer name config to set the vllm/sglang tokenizer to something different if needed
teknium · e1ece3e64e · 2026-01-05 23:20:01 +00:00
  Add reasoning configuration support across server implementations
  - Updated server classes (OpenAIServer, SGLangServer, TrlVllmServer, VLLMServer) to accept a ReasoningConfig parameter during initialization.
  - Enhanced ReasoningConfig to allow flexible max_tokens without strict validation, accommodating varying provider limits.
  - Implemented reasoning configuration injection in APIServer methods for chat and completion handling.
  - Updated tests to reflect changes in max_tokens validation logic.

  This commit integrates reasoning capabilities into the server handling architecture, improving compatibility with diverse reasoning models.
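The reasoning-config commit above can be sketched roughly as follows. Only the names ReasoningConfig and APIServer, and the deliberately unvalidated max_tokens, come from the commit message; the field names (effort), the injection method, and the payload shape are assumptions for illustration, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class ReasoningConfig:
    # Hypothetical fields; max_tokens is intentionally left unvalidated
    # so differing provider limits can be accommodated.
    effort: Optional[str] = None
    max_tokens: Optional[int] = None


class APIServer:
    """Minimal stand-in for the server base class; real servers
    (OpenAIServer, SGLangServer, etc.) would subclass something like this."""

    def __init__(self, reasoning_config: Optional[ReasoningConfig] = None):
        self.reasoning_config = reasoning_config

    def inject_reasoning(self, payload: dict[str, Any]) -> dict[str, Any]:
        # Merge any set reasoning fields into an outgoing
        # chat/completion request payload.
        if self.reasoning_config is None:
            return payload
        reasoning = {
            k: v for k, v in vars(self.reasoning_config).items() if v is not None
        }
        return {**payload, "reasoning": reasoning} if reasoning else payload


server = APIServer(ReasoningConfig(max_tokens=4096))
print(server.inject_reasoning({"model": "m", "messages": []}))
# → {'model': 'm', 'messages': [], 'reasoning': {'max_tokens': 4096}}
```

A server constructed without a config passes payloads through unchanged, which keeps the feature opt-in per the commit's description.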
Dakota · 578175a709 · 2025-10-29 14:47:50 -05:00
  fix pre-commit

pre-commit-ci[bot] · 0d80da5146 · 2025-10-24 20:10:29 +00:00
  [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci

dmahan93 · 7bf4cfbf80 · 2025-10-24 13:09:46 -07:00
  add managed server to make grabbing logprobs easier w/ tokenized items

Dakota · d240dbb3b7 · 2025-10-16 13:46:03 -05:00
  Merge remote-tracking branch 'origin/add-logprob-server-manager-fn' into add-logprob-server-manager-fn

Dakota · 134cbc09d0 · 2025-10-16 13:45:55 -05:00
  update openai/trl_vllm server with new fn

pre-commit-ci[bot] · 1e6a745491 · 2025-10-16 17:39:04 +00:00
  [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci

Dakota · c36ec29656 · 2025-10-16 12:38:03 -05:00
  add sglang specific token level logprob handling and server manager/baseline logprob/token fn