
google/gemma-4-26B-A4B-it

Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Despite a 25.2B total parameter count, only 3.8B parameters activate per token during inference, delivering near-31B quality at...
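The sparse-activation idea described above can be sketched as a toy top-k router: a gate scores every expert for each token, only the highest-scoring few actually run, so most parameters stay idle. The expert count, scoring function, and k below are illustrative assumptions, not the model's real architecture.

```python
# Toy Mixture-of-Experts routing sketch (illustrative values only):
# a gate scores each expert per token, only the top-k experts execute,
# so the active parameter count is a small fraction of the total.

def route_top_k(gate_scores, k=2):
    """Return indices of the k highest-scoring experts."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return ranked[:k]

def moe_forward(token, experts, gate, k=2):
    """Run only the top-k experts for this token, average their outputs."""
    scores = gate(token)
    active = route_top_k(scores, k)
    outputs = [experts[i](token) for i in active]
    return sum(outputs) / len(outputs), active

# Hypothetical setup: 8 tiny "experts", each just scales its input.
experts = [lambda x, s=s: x * s for s in range(1, 9)]
gate = lambda x: [(x * s) % 7 for s in range(1, 9)]  # arbitrary toy scorer

out, active = moe_forward(3.0, experts, gate, k=2)
print(active)  # only 2 of the 8 experts ran for this token
```

Real MoE layers route in a learned embedding space and weight expert outputs by gate probability, but the per-token sparsity shown here is the mechanism the "A4B" (active parameters) naming refers to.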

Best for

Long Documents · Book Summarisation · RAG
Context Window: 262K tokens (≈ 583 pages of text)
Input Cost: Free
Output Cost: (not listed)
Latency p50: (not listed)

Pricing Details

No pricing data. The model may be free or require direct access.

Hallucination Score™ (est.)

Community reliability estimate · not official

68
Generally reliable

About this score: estimated by the community from user reports and publicly available benchmark data (e.g. TruthfulQA). It is not an official score from the model provider and may be inaccurate; always verify against the official leaderboard before making production decisions.

Price History

Not enough historical data yet. Check back after the next pricing sync.

Provider

deepinfra
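As a rough sketch of how a model listed on this provider is typically called: DeepInfra exposes an OpenAI-compatible chat-completions API, so a request is a JSON body carrying the model ID from this page. The endpoint path and field names below follow that convention but should be verified against DeepInfra's current documentation; the snippet only assembles the request and does not send it.

```python
import json

# Hedged sketch: builds an OpenAI-style chat request body for a
# DeepInfra-hosted model. The URL path is an assumption based on
# DeepInfra's OpenAI-compatible API; confirm before use.
API_URL = "https://api.deepinfra.com/v1/openai/chat/completions"

def build_request(prompt, model="google/gemma-4-26B-A4B-it", max_tokens=256):
    """Assemble (but do not send) an OpenAI-style chat request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_request("Summarise this book chapter: ...")
print(json.dumps(body, indent=2))
```

An actual call would POST this body to `API_URL` with a bearer token in the `Authorization` header.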

Community Prompts

Proven prompts shared by the community for this model
