modelstop.top

Compare Models

Run side-by-side checks for pricing, context window, and latency.

openai/gpt-oss-120b

Provider: deepinfra

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized...

Context window: 131,072 tokens
Input cost:
Output cost:
Latency (p50):
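A side-by-side check of pricing and context fit, as described above, can be sketched in a few lines. The structure and prices below are hypothetical placeholders for illustration, not the provider's real rates; only the 131,072-token context window comes from the listing.

```python
# Sketch: estimating per-request cost and context fit for a model listing.
# Prices are hypothetical placeholders, not real provider rates.
from dataclasses import dataclass


@dataclass
class ModelListing:
    name: str
    context_window: int          # max tokens per request
    input_cost_per_mtok: float   # USD per 1M input tokens (placeholder)
    output_cost_per_mtok: float  # USD per 1M output tokens (placeholder)

    def fits(self, prompt_tokens: int, max_output_tokens: int) -> bool:
        """Check that prompt plus completion fit in the context window."""
        return prompt_tokens + max_output_tokens <= self.context_window

    def request_cost(self, prompt_tokens: int, output_tokens: int) -> float:
        """Estimated USD cost for one request."""
        return (prompt_tokens * self.input_cost_per_mtok
                + output_tokens * self.output_cost_per_mtok) / 1_000_000


# Context window from the listing; pricing values are made up.
gpt_oss = ModelListing("openai/gpt-oss-120b", 131_072, 0.10, 0.40)
print(gpt_oss.fits(8_000, 2_000))                        # fits comfortably
print(round(gpt_oss.request_cost(8_000, 2_000), 6))      # estimated USD cost
```

Comparing two models then reduces to building one `ModelListing` per provider entry and sorting by `request_cost` for a representative workload.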