Z.ai: GLM 4.5
GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly...
Best for
Autonomous Agents · Tool Use · Function Calling · Bulk Data Extraction
Context Window
131K tokens ≈ 291 pages of text
Input Cost
$0.60/1M
Output Cost
$2.20/1M
Latency p50
—
Pricing Details
Standard Pricing
Input (per 1M tokens)
$0.60
Output (per 1M tokens)
$2.20
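At the listed rates ($0.60 per 1M input tokens, $2.20 per 1M output tokens), a request's cost is a simple weighted sum. The helper below is an illustrative sketch for back-of-envelope estimates, not part of any official Z.ai SDK; the function name and defaults are assumptions.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.60, output_rate: float = 2.20) -> float:
    """Estimate request cost in USD, with rates given per 1M tokens.

    Defaults reflect the standard pricing listed above (assumed current).
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token completion
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0104
```

Note that output tokens cost roughly 3.7× as much as input tokens at these rates, so long completions dominate the bill for extraction-style workloads.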
Hallucination Score™ (est.)
Community reliability estimate · not official
—
Not yet rated
About this score: Community-estimated based on user reports and publicly available benchmark data (e.g. TruthfulQA). This is not an official score from the model provider. Scores may be inaccurate — always verify with the official leaderboard before making production decisions.
Price History
Not enough historical data yet. Check back after the next pricing sync.
Provider
z-ai
Community Prompts
Proven prompts shared by the community for this model
