Commit graph

38 commits

Author SHA1 Message Date
dmahan93
37f040a883 fix pre-commit 2025-05-09 19:14:45 -05:00
dmahan93
c1ba77ec26
Merge pull request #7 from misrasaurabh1/codeflash/optimize-grab_exact_from_heterogeneous_queue-ma3pegzo
⚡️ Speed up function `grab_exact_from_heterogeneous_queue` by 1,680%
2025-05-09 12:18:56 -05:00
dmahan93
40b12dae60 run pre-commit on all files 2025-05-09 09:54:20 -05:00
hjc-puro
f303853e36 Update README.md 2025-05-09 02:41:17 -04:00
hjc-puro
629d8c1731 Merge pull request #14 from NousResearch/2025-05-02-server-cli 2025-05-09 13:37:54 +08:00
artem
693b28b961
Merge pull request #22 from NousResearch/vision_env_fixes
fix multimodal envs. add view_run_multimodal
2025-05-08 20:28:57 -07:00
dmahan93
8ff48065a3 Update server_manager.py to not continue to API config stuff if serverbaseline is set 2025-05-08 20:18:15 -05:00
dmahan93
f9b39c28f9
Merge pull request #27 from NousResearch/24-keyerror-on-self_state-in-base-register-env-fail
24 keyerror on self state in base register env fail
2025-05-08 17:46:41 -05:00
hjc-puro
515c4cb6ab Update README.md 2025-05-08 15:12:44 -04:00
hjc-puro
7b0028d0ba Update README.md 2025-05-08 15:07:43 -04:00
dmahan93
61af36b226 Update base.py 2025-05-08 11:53:15 -05:00
dmahan93
1848c7d453 Update base.py 2025-05-08 11:29:29 -05:00
dmahan93
301cc03b9d require register-env to wait until batch is hit 2025-05-08 11:28:38 -05:00
hjc-puro
9415cadc53 fix cls name 2025-05-08 06:54:43 -07:00
hjc-puro
b5d81a9532 update readme with design philosophy 2025-05-07 22:43:07 -04:00
Artem Yatsenko
0f15be68a2 fix multimodal envs. add view_run_multimodal 2025-05-07 21:53:01 +00:00
hjc-puro
cdf5a9baa9 remove , 2025-05-07 15:22:01 -04:00
hjc-puro
0373005175 forgot to condition on is ServerBaseline instance 2025-05-07 15:09:34 -04:00
hjc-puro
ec6b86bb5d unbreak ServerBaseline 2025-05-07 14:51:51 -04:00
edmund
2cb1ff0087 Removed mentions of NousResearch/DeepHermes-3-Llama-3-1B-Preview and swapped it for NousResearch/DeepHermes-3-Llama-3-3B-Preview
I don't think there is a NousResearch/DeepHermes-3-Llama-3-1B-Preview
2025-05-07 18:03:17 +01:00
hjc-puro
38575d7029 not supported warning for server baseline 2025-05-06 22:29:34 -04:00
hjc-puro
1d35b9d626 remove comment 2025-05-03 16:26:35 -07:00
hjc-puro
ae24b022c3 fix bug where None would be parsed as a str instead of special value 2025-05-03 16:24:35 -07:00
hjc-puro
a4d8d7e875 remove spurious comments 2025-05-03 15:58:17 -07:00
hjc-puro
aa23f10857 remove try/except because handled in separate pr 2025-05-03 15:52:13 -07:00
hjc-puro
4348dd2ec1 hide complicated openai config override behavior somewhere else 2025-05-03 14:18:50 -07:00
hjc-puro
fe616ec7fa add exceptions 2025-05-03 05:28:40 -04:00
hjc-puro
af26b2e68a propagate cli stuff to serve command 2025-05-02 15:29:29 -04:00
hjc-puro
7c6c5edf30 add back env_config_cls 2025-05-02 09:00:57 -07:00
hjc-puro
6661e286c4 remove use_api in env_manager, log config to wandb 2025-05-02 08:52:28 -07:00
hjc-puro
8c9752b731 preserve descs 2025-05-02 06:13:45 -07:00
hjc-puro
e40dce445c remove oai key in defaults for process 2025-05-02 05:57:34 -07:00
hjc-puro
60d67d91e7 --slurm and --testing in outer namespace 2025-05-02 03:46:34 -07:00
hjc-puro
9a8ae1630b import refactor 2025-05-02 01:00:04 -07:00
hjc-puro
78cfef9daf add process subcommand 2025-05-02 03:42:10 -04:00
hjc-puro
0f966ec3fb add import 2025-05-02 03:24:14 -04:00
codeflash-ai[bot]
837ef6295d
⚡️ Speed up function grab_exact_from_heterogeneous_queue by 1,680%
Here’s a highly optimized version of your code for both **runtime** and **memory**, based on the profile hot spots.

- **Avoid repeated summing** for checking lengths in a growing list — we keep a running sum.
- **Avoid repeatedly copying lists/dicts** by using lists of indices and marking to remove in one pass, and using set operations for fast membership checks.
- **Avoid creating lots of small dicts** and list extensions inside loops.
- **Combine related generator expressions** so costly operations are only done once.
- **Group similar linear scans** into one to minimize the number of loops over `queue`.
- Use **pre-allocated lists and sets** where it saves time.

Here's the rewritten function (all comments preserved except where the code logic was changed).



**Key optimizations:**
- Only a *single pass* over queue for setup.
- No repeated `.append(dict)`; pass only indices around until the end.
- Use `.clear()` for lists inside dict to avoid reallocations.
- Use lists of lengths for O(1) access everywhere.
- Maintain a running sum for batch size check, not repeated `sum`.
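Since the rewritten function itself is not shown here, a minimal illustrative sketch of the running-sum and index-marking pattern described above might look like this (the function name, `queue` shape, and `batch_size` semantics are assumptions for illustration, not the actual optimized code):

```python
from typing import List, Optional

def grab_exact_sketch(queue: List[List[int]], batch_size: int) -> Optional[List[List[int]]]:
    """Illustrative only: take groups from `queue` whose lengths sum to exactly
    `batch_size`, using a running sum and a single removal pass instead of
    repeated sum() calls and per-iteration list/dict copies."""
    lengths = [len(group) for group in queue]  # O(1) length access everywhere
    taken_indices: List[int] = []
    running = 0  # running sum replaces re-summing a growing list each iteration
    for i, n in enumerate(lengths):
        if running + n <= batch_size:
            taken_indices.append(i)
            running += n
            if running == batch_size:
                break
    if running != batch_size:
        return None  # could not assemble an exact batch; queue left untouched
    taken_set = set(taken_indices)  # fast membership check for the removal pass
    batch = [queue[i] for i in taken_indices]
    # remove all taken groups in one pass rather than popping one by one
    queue[:] = [g for i, g in enumerate(queue) if i not in taken_set]
    return batch
```

This mirrors the listed ideas (single setup pass, indices instead of copied dicts, set membership, running sum), not the exact algorithm of the real function.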

This should **dramatically cut runtime**, especially at the hot spots from your line profiler output. If you need even more speed and the queue is huge/long-lived, consider reworking the data structure for the queue itself (`deque`, heap, etc.), but for code-level optimization this is near optimal for this algorithm!
2025-04-30 08:58:23 +00:00
Dakota Nous
621d00dd80 first commit 2025-04-29 12:10:10 -07:00