Publish time: 2025/12/22
Model Series: GLM
Input type:
Output type:
Context Window: 128,000
Max Output Length: 2,048
Input Price: ¥2 / 1M tokens
Output Price: ¥8 / 1M tokens

GLM-4.7 is Zhipu AI's latest flagship model. Built for agentic coding scenarios, it strengthens coding ability, long-horizon task planning, and tool coordination, and achieves leading performance among open-source models on the current leaderboards of several public benchmarks. General capabilities are also improved: responses are more concise and natural, and writing is more immersive. On complex agent tasks it follows instructions more reliably when calling tools, and it further improves the front-end polish of Artifacts and agentic-coding output as well as the efficiency of completing long-horizon tasks.
Zhinao API routes requests to the best-fit provider and automatically fails over to the one with highest availability.
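The routing-and-failover behavior can be sketched client-side. This is a minimal illustration of the idea, not the gateway's actual implementation; the `Provider` shape and `withFailover` helper are hypothetical.

```typescript
// A minimal sketch of availability-based failover: try providers in
// order of reported uptime and fall back to the next one on failure.
// The `Provider` shape and `withFailover` helper are illustrative
// assumptions, not part of the actual Zhinao API.
type Provider = {
  name: string;
  uptime: number; // reported availability, 0..1
  call: (prompt: string) => Promise<string>;
};

async function withFailover(providers: Provider[], prompt: string): Promise<string> {
  // Prefer the provider with the highest reported availability.
  const ordered = [...providers].sort((a, b) => b.uptime - a.uptime);
  let lastError: unknown;
  for (const p of ordered) {
    try {
      return await p.call(prompt);
    } catch (err) {
      lastError = err; // this provider failed; try the next one
    }
  }
  throw lastError;
}
```

In the real gateway this selection happens server-side, so a single request to the normalized endpoint is enough; the sketch only shows the ordering-plus-fallback logic.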
TTFT: 4.55 s
Throughput: 22.41 tps
Uptime: 99.00%
Provider Model: bigmodel/glm-4.7
Supported Parameters:
Recent Uptime:
Reasoning: -
Supported Response Formats:
Request Log Collection: -
Distillable: -
Total Context: 128,000
Max Output: 2,048
Input Price: ¥2 / 1M tokens
Output Price: ¥8 / 1M tokens
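The listed prices are per million tokens, billed separately for input and output. A minimal sketch of the arithmetic, using the prices above (the `estimateCostYuan` helper is hypothetical):

```typescript
// Estimate request cost from token counts at the listed GLM-4.7 rates:
// ¥2 per 1M input tokens, ¥8 per 1M output tokens.
const INPUT_PRICE_PER_M = 2;  // ¥ per 1M input tokens
const OUTPUT_PRICE_PER_M = 8; // ¥ per 1M output tokens

function estimateCostYuan(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PRICE_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_M
  );
}

// e.g. 100k input tokens plus the 2,048-token max output:
console.log(estimateCostYuan(100_000, 2_048).toFixed(4)); // → "0.2164"
```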
Compare different providers across Zhinao API (13.84 tok/s, 4.92 s).
[Chart: uptime for z-ai/glm-4.7 across all providers]
Zhinao API normalizes requests and responses across providers for you
import OpenAI from "openai";

// Point the OpenAI SDK at the Zhinao API gateway.
const client = new OpenAI({
  baseURL: "https://api.360.cn/v1",
  apiKey: process.env.ZHINAO_API_KEY,
});

const response = await client.chat.completions.create({
  model: "z-ai/glm-4.7",
  messages: [
    { role: "user", content: "Hello, how are you?" },
  ],
  temperature: 0.7,
  max_tokens: 1000,
});

console.log(response.choices[0].message.content);
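For interactive use, the same endpoint can also stream tokens as they are generated via the OpenAI SDK's `stream: true` option; a sketch, assuming the gateway passes the streaming protocol through:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.360.cn/v1",
  apiKey: process.env.ZHINAO_API_KEY,
});

// Request a streamed completion; the SDK returns an async iterable
// of chunks, each carrying an incremental `delta` of the reply.
const stream = await client.chat.completions.create({
  model: "z-ai/glm-4.7",
  messages: [{ role: "user", content: "Hello, how are you?" }],
  stream: true,
});

for await (const chunk of stream) {
  // Print each token fragment as it arrives.
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```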