DeepSeek: R1 Distill Llama 70B
DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), trained on outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance across...
| Spec | Value |
| --- | --- |
| Context window | 131,072 tokens |
| Input cost | $0.70 / 1M tokens |
| Output cost | $0.80 / 1M tokens |
| Latency (p50) | — |
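The per-token prices above translate directly into a request-cost estimate. The helper below is a hypothetical sketch (not part of any official SDK) that applies the listed $0.70 / 1M input and $0.80 / 1M output rates:

```python
# Hypothetical cost estimator using the listed per-million-token prices.
INPUT_PRICE_PER_M = 0.70   # USD per 1M input tokens (from the listing)
OUTPUT_PRICE_PER_M = 0.80  # USD per 1M output tokens (from the listing)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10k-token prompt with a 2k-token completion.
print(round(estimate_cost(10_000, 2_000), 4))
```

For instance, processing one million tokens of input plus one million tokens of output would come to $1.50 at these rates.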
