modelstop.top

Compare Models

Run side-by-side checks for pricing, context window, and latency.

google/gemma-4-26B-A4B-it

deepinfra

Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Of its 25.2B total parameters, only 3.8B are active per token during inference, delivering near-31B quality at...
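The sparse-activation idea behind that claim can be sketched in a few lines: a gate scores every expert per token, but only the top-k experts actually run, so the parameters touched per token stay far below the total. This is a toy illustration, not the real Gemma routing code, and the expert counts below are hypothetical.

```python
def top_k_experts(gate_scores, k):
    """Return the indices of the k highest-scoring experts for one token."""
    return sorted(range(len(gate_scores)),
                  key=lambda i: gate_scores[i],
                  reverse=True)[:k]

def active_params(params_per_expert, shared_params, k):
    """Parameters touched per token: shared layers plus k routed experts."""
    return shared_params + k * params_per_expert

# Hypothetical gate output for one token over 5 experts; route to top 2.
scores = [0.1, 2.3, -0.5, 1.7, 0.0]
print(top_k_experts(scores, 2))  # -> [1, 3]: only experts 1 and 3 run
```

Because `active_params` scales with `k` rather than the expert count, a model can grow its total parameter budget while per-token compute stays roughly constant.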

Context window: 262,144 tokens