AI21 Jamba 1.6 Large
AI21 Jamba 1.6 Large uses a hybrid Mamba-Transformer architecture, offering a lower memory footprint and higher throughput than equivalently sized Transformer-only models. It provides a 256K-token context window at a fraction of the inference cost.
Best for
Long Documents · Book Summarisation · RAG · Instruction Following
Context Window
256K tokens ≈ 569 pages of text
Input Cost
$2.00/1M
Output Cost
$8.00/1M
Latency p50
—
Pricing Details
Standard Pricing
Input (per 1M tokens)
$2.00
Output (per 1M tokens)
$8.00
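The listed rates imply a simple per-request cost formula: tokens ÷ 1,000,000 × rate, summed for input and output. A minimal sketch using the rates above (the token counts are illustrative, not from the source):

```python
# Estimate the cost of one request at the listed Jamba 1.6 Large rates.
# Rates come from the pricing table above; token counts are illustrative.
INPUT_RATE_PER_1M = 2.00   # USD per 1M input tokens
OUTPUT_RATE_PER_1M = 8.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return ((input_tokens / 1_000_000) * INPUT_RATE_PER_1M
            + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_1M)

# e.g. a long-document RAG call: 100K tokens in, 10K tokens out
print(f"${estimate_cost(100_000, 10_000):.2f}")  # → $0.28
```

At these rates, even a request that fills the full 256K context costs roughly $0.51 in input tokens alone, which is worth factoring into batch-summarisation budgets.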
Hallucination Score™ (est.)
Community reliability estimate · not official
—
Not yet rated
About this score: Community-estimated based on user reports and publicly available benchmark data (e.g. TruthfulQA). This is not an official score from the model provider. Scores may be inaccurate — always verify with the official leaderboard before making production decisions.
Provider
ai21
