modelstop.top

Qwen/Qwen3-235B-A22B-Thinking-2507

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports a context window of up to 262,144 tokens.
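As a rough sketch, the model can be reached through DeepInfra's OpenAI-compatible chat-completions endpoint. The base URL and parameter choices below are assumptions for illustration, not official values; to stay self-contained the example only builds the request payload rather than sending it. Verify endpoint details against DeepInfra's documentation.

```python
import json

# Assumed endpoint: DeepInfra exposes an OpenAI-compatible API, but
# confirm the exact base URL in their documentation before use.
BASE_URL = "https://api.deepinfra.com/v1/openai/chat/completions"

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completions payload for the Thinking model.

    Thinking-series models emit chain-of-thought before the final
    answer, so leave generous headroom in max_tokens.
    """
    return {
        "model": "Qwen/Qwen3-235B-A22B-Thinking-2507",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Prove that the sum of two odd integers is even.")
print(json.dumps(payload, indent=2))
```

Sending this payload as a POST body with an `Authorization: Bearer <api key>` header follows the standard OpenAI-style flow.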

Best for

Complex Reasoning · Math & Logic · Research · Bulk Data Extraction
Context Window
262K tokens ≈ 583 pages of text
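The "≈ 583 pages" figure is consistent with common rule-of-thumb conversions. The ratios below (roughly 0.75 English words per token, and a words-per-page value implied by the listing's own estimate) are assumptions for illustration only:

```python
CONTEXT_TOKENS = 262_144   # native context window
WORDS_PER_TOKEN = 0.75     # rough English average (assumption)
WORDS_PER_PAGE = 337       # implied by the listing's ~583-page figure

words = CONTEXT_TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"~{pages:.0f} pages")  # ~583 pages
```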
Input Cost
Free
Output Cost
—
Latency p50
—

Pricing Details

No pricing data. The model may be free or may require direct access.

Hallucination Score™ (est.)

Community reliability estimate · not official

Not yet rated

About this score: Community-estimated based on user reports and publicly available benchmark data (e.g. TruthfulQA). This is not an official score from the model provider. Scores may be inaccurate — always verify with the official leaderboard before making production decisions.

Price History

Not enough historical data yet. Check back after the next pricing sync.

Provider

deepinfra

Community Prompts

Proven prompts shared by the community for this model
