Inception: Mercury
Mercury is the first diffusion large language model (dLLM). Built on a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude...
Best for
Bulk Data Extraction · High-Volume Tasks · Long Documents · Book Summarisation
Context Window
128K tokens ≈ 284 pages of text
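As a rough sanity check, the listed 284-page figure implies a density of about 450 tokens per page; that density is an assumption for illustration only, since actual tokens per page vary with the tokenizer and page layout:

```python
# Back out the tokens-per-page density implied by "128K tokens ≈ 284 pages".
# TOKENS_PER_PAGE is an assumed round figure, not an official number.
CONTEXT_TOKENS = 128_000
TOKENS_PER_PAGE = 450  # ≈ 128_000 / 284, the density this listing implies

pages = CONTEXT_TOKENS / TOKENS_PER_PAGE
print(round(pages))  # ≈ 284 pages of text
```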
Input Cost
$0.25/1M
Output Cost
$0.75/1M
Latency (p50)
—
Pricing Details
Standard Pricing
Input (per 1M tokens)
$0.25
Output (per 1M tokens)
$0.75
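A minimal sketch of estimating per-request cost from the rates above ($0.25 per 1M input tokens, $0.75 per 1M output tokens). The function name and example token counts are illustrative assumptions; verify current rates with the provider before making billing decisions:

```python
# Rates copied from the listing above (USD per token).
INPUT_RATE = 0.25 / 1_000_000
OUTPUT_RATE = 0.75 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the listed standard pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10K-token prompt producing a 2K-token completion
# 10_000 * $0.25/1M = $0.0025 input; 2_000 * $0.75/1M = $0.0015 output
print(estimate_cost(10_000, 2_000))  # 0.004
```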
Hallucination Score™ (est.)
Community reliability estimate · not official
—
Not yet rated
About this score: Community-estimated based on user reports and publicly available benchmark data (e.g. TruthfulQA). This is not an official score from the model provider. Scores may be inaccurate — always verify with the official leaderboard before making production decisions.
Price History
Not enough historical data yet. Check back after the next pricing sync.
Provider
inception
