modelstop.top

Compare Models

Run side-by-side checks for pricing, context window, and latency.

Inception: Mercury 2


Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving...
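The parallel-refinement idea can be illustrated with a toy sketch. This is not Mercury 2's actual algorithm (which is not described here); it is a generic masked-diffusion decoding loop under assumed names (`predict`, `parallel_decode`, a tiny stand-in vocabulary): start from a fully masked sequence, propose tokens for every position at once, and commit the most confident masked positions each step.

```python
import random

MASK = "_"
VOCAB = ["the", "cat", "sat", "on", "mat"]

def predict(seq, rng):
    # Stand-in for the model: propose a (token, confidence) pair for
    # every position in parallel. A real dLLM would run one forward pass.
    return [(rng.choice(VOCAB), rng.random()) for _ in seq]

def parallel_decode(length=5, steps=3, seed=0):
    """Toy diffusion-style decoder: refine all positions over a fixed
    number of steps instead of emitting tokens left to right."""
    rng = random.Random(seed)
    seq = [MASK] * length
    for step in range(steps):
        proposals = predict(seq, rng)
        masked = [i for i in range(length) if seq[i] == MASK]
        # Commit the highest-confidence masked positions first,
        # spreading the remaining masks over the remaining steps.
        masked.sort(key=lambda i: -proposals[i][1])
        k = -(-len(masked) // (steps - step))  # ceil division
        for i in masked[:k]:
            seq[i] = proposals[i][0]
    return seq
```

The sequential bottleneck disappears because each step fills several positions at once; the number of model calls is `steps`, not `length`.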

Context window: 128,000 tokens
Input cost: $0.25 / 1M tokens
Output cost: $0.75 / 1M tokens
Latency (p50):
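Given the listed prices ($0.25 per 1M input tokens, $0.75 per 1M output tokens), per-request cost is straightforward to estimate; the helper name `request_cost` below is illustrative, not part of any API.

```python
INPUT_PER_M = 0.25   # USD per 1M input tokens (listed above)
OUTPUT_PER_M = 0.75  # USD per 1M output tokens (listed above)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed Mercury 2 prices."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. 10,000 input tokens + 1,000 output tokens -> $0.00325
cost = request_cost(10_000, 1_000)
```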