Meta: Llama 4 Scout
meta-llama
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion of its 109 billion total parameters per token across 16 experts. It supports native multimodal input...
- Context window: 327,680 tokens
- Input cost: $0.08 / 1M tokens
- Output cost: $0.30 / 1M tokens
- Latency (p50): —
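Given the per-token prices listed above, the cost of a single request can be estimated with a few lines of arithmetic. The sketch below is illustrative only; the function name and the example token counts are assumptions, and prices are taken from the listing ($0.08 input, $0.30 output, each per 1M tokens).

```python
# Hypothetical cost estimator based on the listed per-1M-token prices.
INPUT_PRICE_PER_M = 0.08   # USD per 1M input tokens (from the listing)
OUTPUT_PRICE_PER_M = 0.30  # USD per 1M output tokens (from the listing)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 100k-token prompt with a 2k-token completion.
print(estimate_cost(100_000, 2_000))  # → 0.0086
```

Note that output tokens cost roughly 3.75x as much as input tokens here, so long completions dominate the bill even for large prompts.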
