
Compare Models

Run side-by-side checks for pricing, context window, and latency.
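A side-by-side cost check boils down to multiplying each model's per-1M-token rates by an expected request size. A minimal Python sketch of that arithmetic follows; the Llama 4 Maverick prices come from the listing below, while the model key and request sizes are hypothetical.

```python
# Minimal sketch of a per-request cost estimate from per-1M-token prices.
# Prices (USD per 1M tokens) are taken from the Llama 4 Maverick listing below;
# the model key and token counts are illustrative assumptions.

MODELS = {
    "meta-llama/llama-4-maverick": {"input": 0.15, "output": 0.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from per-1M-token rates."""
    rates = MODELS[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token completion.
cost = request_cost("meta-llama/llama-4-maverick", 10_000, 1_000)
print(f"${cost:.4f}")  # 0.0015 input + 0.0006 output = $0.0021
```

The same function applied to several entries gives the side-by-side comparison the page describes.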

Meta: Llama 4 Maverick

meta-llama

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.

Context window: 1,048,576 tokens
Input cost: $0.15 per 1M tokens
Output cost: $0.60 per 1M tokens
Latency (p50):