modelstop.top

Compare Models

Run side-by-side checks for pricing, context window, and latency.

Google: Gemma 4 26B A4B (free)


Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Despite 25.2B total parameters, only 3.8B activate per token during inference, delivering near-31B quality at...

Context window: 262,144 tokens
Input cost: $0.08 / 1M tokens
Output cost: $0.35 / 1M tokens
Latency (p50):
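The listed per-token prices can be turned into a per-request estimate with simple arithmetic: multiply input and output token counts by their respective rates per 1M tokens. A minimal sketch, assuming the rates shown above ($0.08 input, $0.35 output per 1M tokens); the function name and example token counts are illustrative, not part of the listing:

```python
# Estimate per-request cost at the listed rates (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.08   # input cost from the listing above
OUTPUT_PRICE_PER_M = 0.35  # output cost from the listing above

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10k-token prompt producing a 1k-token completion.
print(f"${request_cost(10_000, 1_000):.6f}")
```

For this example, 10,000 × $0.08/1M plus 1,000 × $0.35/1M comes to $0.00115 per request, which makes it easy to compare models side by side at your own traffic mix.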