
Inception: Mercury 2

Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving...

Best for

Complex Reasoning · Math & Logic · Research · Bulk Data Extraction
Context Window
128K tokens ≈ 284 pages of text
Input Cost
$0.25/1M
Output Cost
$0.75/1M
Latency p50
Not reported

Pricing Details

Standard Pricing
Input (per 1M tokens)
$0.25
Output (per 1M tokens)
$0.75
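The listed rates can be turned into a per-request cost estimate. A minimal sketch, assuming the standard pricing above; the function name is illustrative, not part of any official SDK:

```python
# Rates from the pricing table above (USD per 1M tokens).
INPUT_PER_M = 0.25
OUTPUT_PER_M = 0.75

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the listed standard rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a full 128K-token context with a 2K-token reply:
print(f"${estimate_cost(128_000, 2_000):.4f}")  # → $0.0335
```

Actual billing may differ (rounding, cached tokens, batch discounts), so treat this as a rough planning estimate.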

Hallucination Score™ (est.)

Community reliability estimate · not official

Not yet rated

About this score: Community-estimated based on user reports and publicly available benchmark data (e.g. TruthfulQA). This is not an official score from the model provider. Scores may be inaccurate — always verify with the official leaderboard before making production decisions.

Price History

Not enough historical data yet. Check back after the next pricing sync.

Provider

Inception

Community Prompts

Proven prompts shared by the community for this model