Qwen2.5-Coder-32B-Instruct
Type: LLM | Precision: FP8 | Context Length: 131,072 tokens
Control Bar
Model: Qwen2.5-Coder-32B-Instruct
Adjustable parameters: Max Tokens, Temperature, Top P, System Prompt
API
Code samples are available in Python, TypeScript, cURL, and Gradio, and the control-bar parameters map directly onto the request fields, as shown in the sketch below.
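A minimal Python sketch of how the control-bar settings might translate into a request, assuming the model is served behind an OpenAI-compatible chat-completions endpoint. The base URL, the API-key environment variable, and the exact model identifier below are placeholders, not the provider's confirmed values.

```python
# A minimal sketch, assuming an OpenAI-compatible chat-completions endpoint.
# Base URL, API-key env var, and model identifier are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",    # hypothetical endpoint
    api_key=os.environ["EXAMPLE_API_KEY"],    # hypothetical env var
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # identifier may differ per provider
    messages=[
        # System Prompt control
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=512,   # Max Tokens: cap on generated tokens (context window is 131,072)
    temperature=0.7,  # Temperature: sampling randomness
    top_p=0.9,        # Top P: nucleus-sampling cutoff
)

print(response.choices[0].message.content)
```

Here max_tokens, temperature, and top_p correspond one-to-one to the control-bar fields, and the system message plays the role of the System Prompt box; the TypeScript and cURL tabs issue the same request in their respective languages.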