mirror of
https://github.com/NousResearch/atropos.git
synced 2026-04-19 12:57:58 +00:00
cleanup 3
This commit is contained in:
parent 39d307b440
commit 43cc71e070
4 changed files with 4 additions and 93 deletions
@@ -330,7 +330,6 @@ The trainer supports multiple optimizer options to trade off between speed, memory
| Optimizer | Memory | Speed | Precision | Dependency |
|---|---|---|---|---|
| `adamw` | ~32GB (for 8B model) | Fastest | Full FP32 | None |
| `adamw_8bit` (default) | ~8GB | Fast | 8-bit quantized | `bitsandbytes` |
| `adafactor` | ~8GB | Fast | Full (no momentum) | `transformers` |
| `adamw_cpu` | ~0GB (on CPU) | ~2x slower | Full FP32 | None |

**Usage:**
```bash
@@ -342,21 +341,16 @@ The trainer supports multiple optimizer options to trade off between speed, memory
# Adafactor - no momentum states, good for large models
--optimizer adafactor

# CPU offload - experimental, use when nothing else fits
--optimizer adamw_cpu
```
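For readers wiring these flags up themselves, here is a minimal sketch of how the `--optimizer` values might be resolved to optimizer classes. The function and registry names (`resolve_optimizer`, `OPTIMIZERS`) are hypothetical, not the trainer's actual internals; the import paths (`torch.optim.AdamW`, `bitsandbytes.optim.AdamW8bit`, `transformers.optimization.Adafactor`) are the standard locations of these classes in each library.

```python
import importlib.util

# Hypothetical registry: map each --optimizer flag value to the class that
# typically provides it, plus any extra pip dependency it needs.
OPTIMIZERS = {
    # flag value   -> (module path, class name, extra dependency or None)
    "adamw":      ("torch.optim", "AdamW", None),
    "adamw_8bit": ("bitsandbytes.optim", "AdamW8bit", "bitsandbytes"),
    "adafactor":  ("transformers.optimization", "Adafactor", "transformers"),
    "adamw_cpu":  ("torch.optim", "AdamW", None),  # states held on CPU
}

def resolve_optimizer(name: str) -> tuple[str, str]:
    """Return (module_path, class_name) for a flag value; raise a clear
    error for an unknown name or a missing optional dependency."""
    if name not in OPTIMIZERS:
        raise ValueError(
            f"unknown optimizer {name!r}; choose from {sorted(OPTIMIZERS)}"
        )
    module_path, cls_name, extra = OPTIMIZERS[name]
    # Fail early with an actionable message if the optional package is absent.
    if extra is not None and importlib.util.find_spec(extra) is None:
        raise ImportError(f"--optimizer {name} requires `pip install {extra}`")
    return module_path, cls_name

print(resolve_optimizer("adamw"))  # ('torch.optim', 'AdamW')
```

Checking the dependency up front mirrors the table's "Dependency" column: `adamw` and `adamw_cpu` work with a plain PyTorch install, while the 8-bit and Adafactor paths need their extra package.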
**Recommendations:**

- **8B models on 80GB:** Use `adamw` (fastest)
- **14B+ models on 80GB:** Use `adamw_8bit` or `adafactor`
- **24B models:** Use `adafactor` with reduced batch size
- **adamw_cpu:** Experimental - not well tested, ~2x slower due to CPU↔GPU transfers

**Potential Risks:**

- `adamw_8bit`: Quantization may slightly affect convergence in edge cases; generally safe
- `adafactor`: No momentum can make training slightly less stable; use with larger batch sizes
- `adamw_cpu`: Significantly slower; only use when you have no other option

---