Commit graph

8 commits

Author SHA1 Message Date
pre-commit-ci[bot]
60fb6cae11 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2026-02-20 04:58:47 +00:00
Jai Suphavadeeprasit
809b88bf30 gsm8k trial 2026-02-19 21:32:40 -05:00
Jai Suphavadeeprasit
bbbfaf1680 gsm8k trial 2026-02-19 21:17:49 -05:00
Dakota
7d6aeb9bbf add tokenizer name config to set the vllm/sglang tokenizer to something different if needed 2026-02-09 15:26:29 -06:00
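The tokenizer override that commit 7d6aeb9bbf describes can be sketched as a small config object; the field and class names below (`ServerConfig`, `tokenizer_name`, `model_name`) are assumptions for illustration, not taken from the repository.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the change in commit 7d6aeb9bbf: a config field
# that lets the vllm/sglang tokenizer differ from the model name when needed.
# All names here are assumptions, not the repository's actual identifiers.
@dataclass
class ServerConfig:
    model_name: str
    tokenizer_name: Optional[str] = None  # falls back to model_name when unset

    @property
    def effective_tokenizer(self) -> str:
        return self.tokenizer_name or self.model_name

# Default: tokenizer follows the model.
cfg = ServerConfig(model_name="my-org/my-model")
assert cfg.effective_tokenizer == "my-org/my-model"

# Override: a different tokenizer is used for the same model.
cfg = ServerConfig(model_name="my-org/my-model",
                   tokenizer_name="my-org/other-tokenizer")
assert cfg.effective_tokenizer == "my-org/other-tokenizer"
```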
Siddharth Balyan
7f28c52994 Merge branch 'main' into sid/verifiers 2026-01-16 11:50:27 +05:30
balyan.sid@gmail.com
6a27e88023 use managed server 2026-01-14 17:09:01 +05:30
teknium
e1ece3e64e Add reasoning configuration support across server implementations
- Updated server classes (OpenAIServer, SGLangServer, TrlVllmServer, VLLMServer) to accept a ReasoningConfig parameter during initialization.
- Enhanced ReasoningConfig to allow flexible max_tokens without strict validation, accommodating varying provider limits.
- Implemented reasoning configuration injection in APIServer methods for chat and completion handling.
- Updated tests to reflect changes in max_tokens validation logic.

This commit integrates reasoning capabilities into the server handling architecture, improving compatibility with diverse reasoning models.
2026-01-05 23:20:01 +00:00
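The reasoning-configuration change in commit e1ece3e64e can be sketched as follows; the class shape, field names, and payload keys are assumptions made for illustration, not the repository's actual API. The sketch shows the two behaviors the message names: `max_tokens` stored without a strict upper-bound check (provider limits vary), and the config injected into a chat/completion payload by the server.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of commit e1ece3e64e: a ReasoningConfig whose
# max_tokens is accepted without strict validation, and which the server
# injects into request payloads. All names here are assumptions.
@dataclass
class ReasoningConfig:
    effort: Optional[str] = None       # e.g. "low", "medium", "high"
    max_tokens: Optional[int] = None   # no upper-bound check; providers differ

    def inject(self, payload: dict) -> dict:
        """Return a copy of a chat/completion payload with reasoning fields added."""
        out = dict(payload)
        if self.effort is not None:
            out["reasoning_effort"] = self.effort
        if self.max_tokens is not None:
            out["reasoning_max_tokens"] = self.max_tokens
        return out

cfg = ReasoningConfig(effort="high", max_tokens=32768)
payload = cfg.inject({"model": "demo", "messages": []})
assert payload["reasoning_effort"] == "high"
assert payload["reasoning_max_tokens"] == 32768
assert payload["model"] == "demo"  # original fields are preserved
```

A server class (e.g. the commit's OpenAIServer or VLLMServer) would then take a `ReasoningConfig` at initialization and call `inject` on each outgoing request, which keeps provider-specific limits out of the config itself.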
Dakota
e6ac3abdcb add managed vllm server 2025-11-07 13:06:49 -06:00