Compare Models
Run side-by-side checks for pricing, context window, and latency.
Mistral 7B Instruct v0.1 (mistralai)
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
- Context window: 2,824 tokens
- Input cost: $0.11 / 1M tokens
- Output cost: $0.19 / 1M tokens
- Latency (p50): —
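To make the per-million-token prices above concrete, here is a minimal sketch of how a single request's cost works out from the listed input and output rates. The token counts in the example are hypothetical, chosen only for illustration.

```python
# Listed rates for Mistral 7B Instruct v0.1 (USD per 1M tokens).
INPUT_COST_PER_M = 0.11
OUTPUT_COST_PER_M = 0.19

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Hypothetical request: a 1,500-token prompt with a 400-token completion.
print(f"${request_cost(1_500, 400):.6f}")  # → $0.000241
```

At these rates, even a million such requests would cost roughly $241, which is why per-million pricing is the standard unit for comparison.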