For advanced users who want to use z-image-turbo for API-based bulk generation and automation, this guide explains how to set up a RunPod Serverless environment.
If you’re a beginner, we recommend starting with ConoHa AI Canvas first.
## What Is RunPod Serverless?

RunPod is a cloud GPU platform. Its Serverless feature lets you run AI inference behind an auto-scaling API endpoint, paying only for the seconds a worker is active.
Advantages:
- Pay-as-you-go billing (zero cost when idle)
- Auto-scaling support
- Callable via API for automation and batch processing
- Parallel execution with multiple workers possible
Disadvantages:
- Technical knowledge required (Docker, API)
- Latency of a few seconds to tens of seconds on cold start
For cost comparisons with other services, see the Cloud GPU Comparison article.
## Setup Steps

### Step 1: Create a RunPod Account
- Visit RunPod and create an account
- Register a credit card (for pay-as-you-go billing)
- Add initial credits ($10+ recommended)
### Step 2: Create a Serverless Endpoint
- In the RunPod dashboard, navigate to the Serverless section
- Click New Endpoint
- Enter the following settings:
| Setting | Recommended Value | Description |
|---|---|---|
| Endpoint Name | z-image-turbo | Any name |
| Docker Image | z-image-turbo compatible image | Docker image containing the model |
| GPU Type | RTX A4000 / RTX 3090 | A4000 for cost efficiency |
| Min Workers | 0 | Zero cost when idle |
| Max Workers | 3 | Upper limit for concurrent requests |
| Idle Timeout | 5 seconds | How long a worker stays warm after finishing a request before shutting down |
### Step 3: Get an API Key
- In the RunPod dashboard, go to Settings → API Keys
- Generate a new API key
- Also note down the Endpoint ID
### Step 4: Set Environment Variables

```bash
# Save to ~/.config/z-image/.env
RUNPOD_API_KEY=your_api_key_here
RUNPOD_ENDPOINT_ID=your_endpoint_id_here
```
### Step 5: Test Request

Use curl to test the API:

```bash
curl -X POST "https://api.runpod.ai/v2/${RUNPOD_ENDPOINT_ID}/runsync" \
  -H "Authorization: Bearer ${RUNPOD_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "prompt": "a beautiful Japanese woman, portrait, natural lighting, 85mm lens",
      "width": 1024,
      "height": 1024,
      "steps": 8,
      "sampler": "euler",
      "scheduler": "ddim_uniform"
    }
  }'
```
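The same request can be sent from Python using only the standard library. This is a sketch, not an official client; `build_request` is an illustrative helper whose payload mirrors the curl example:

```python
import json
import urllib.request

def build_request(endpoint_id, api_key, prompt,
                  width=1024, height=1024, steps=8):
    """Build a urllib Request for RunPod's synchronous /runsync endpoint."""
    url = f"https://api.runpod.ai/v2/{endpoint_id}/runsync"
    payload = {
        "input": {
            "prompt": prompt,
            "width": width,
            "height": height,
            "steps": steps,
            "sampler": "euler",
            "scheduler": "ddim_uniform",
        }
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (requires valid credentials and network access):
# import os
# req = build_request(os.environ["RUNPOD_ENDPOINT_ID"],
#                     os.environ["RUNPOD_API_KEY"], "a red bicycle")
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```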
## Cost Optimization

### GPU Selection
| GPU | Rate (/sec) | Speed | Cost Efficiency |
|---|---|---|---|
| RTX A4000 | $0.00016 | Good | Excellent |
| RTX 3090 | $0.00022 | Very Good | Good |
| RTX A5000 | $0.00028 | Very Good | Good |
| A100 | $0.00076 | Fastest | Poor (overkill) |
z-image-turbo can generate in 8 steps, so expensive GPUs aren’t necessary. RTX A4000 is the best cost-performance option.
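The per-second rates translate directly into per-image estimates. The generation times below are illustrative assumptions, not measured benchmarks:

```python
def cost_per_image(rate_per_sec, seconds_per_image):
    """Estimated compute cost for one generated image."""
    return rate_per_sec * seconds_per_image

# Assumed times for an 8-step 1024x1024 image (illustrative only):
a4000_cost = cost_per_image(0.00016, 5)  # ~5 s/image on an RTX A4000
a100_cost = cost_per_image(0.00076, 2)   # ~2 s/image on an A100
```

Even granting the A100 a large speed advantage, the A4000 still comes out cheaper per image under these assumptions.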
### Worker Configuration Tips
- Min Workers = 0: Zero cost when idle. However, cold starts will occur.
- Min Workers = 1: Keeps one worker always running; choose this when cold-start latency is unacceptable.
- Max Workers: Upper limit for concurrent requests. Set according to your batch processing parallelism.
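The trade-off of Min Workers = 1 can be quantified. Assuming a warm worker bills at the full per-second rate around the clock (actual idle billing on RunPod may differ), the always-on cost is:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def always_on_cost_per_day(rate_per_sec):
    """Daily cost of keeping one worker permanently warm (Min Workers = 1)."""
    return rate_per_sec * SECONDS_PER_DAY

# RTX A4000 at $0.00016/s comes to roughly $13.82 per day,
# which only pays off for sustained, latency-sensitive traffic.
daily = always_on_cost_per_day(0.00016)
```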
### Batch Processing
Using RunPod API’s asynchronous endpoint (/run) lets you send multiple requests in parallel for batch processing.
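A batch run can be sketched as: submit every prompt to `/run`, collect the returned job IDs, then poll `/status/{id}` until each finishes. The helpers below are illustrative, not an official client; the HTTP call is injected as a function so the flow is testable without network access:

```python
import time

def submit_batch(prompts, call):
    """Queue one async job per prompt via /run.

    `call(path, payload)` performs the HTTP request and returns parsed JSON.
    """
    return [call("/run", {"input": {"prompt": p, "steps": 8}})["id"]
            for p in prompts]

def wait_all(job_ids, call, poll=2.0):
    """Poll /status/{id} until every job completes, fails, or is cancelled."""
    results, pending = {}, set(job_ids)
    while pending:
        for job_id in list(pending):
            out = call(f"/status/{job_id}", None)
            if out.get("status") in ("COMPLETED", "FAILED", "CANCELLED"):
                results[job_id] = out
                pending.discard(job_id)
        if pending:
            time.sleep(poll)
    return results
```

In practice, `call` would wrap an authenticated request to `https://api.runpod.ai/v2/{endpoint_id}{path}`, with the same headers as the curl example above.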
## Prompt Configuration
For prompt writing, see these articles:
- Prompt Basics — Word order and emphasis syntax fundamentals
- Prompt Best Practices — Verified prompt knowledge
- Prompt Examples — Actual examples with prompts
The same parameter settings used in the ComfyUI workflow can also be used with the API.
## Troubleshooting

### Cold Start Is Slow
- Set Min Workers to 1 to keep a worker always running (increased cost)
- Enable the FlashBoot option (if supported for the GPU)
### Images Not Generating
- Check that the API key is correct
- Check that the Endpoint ID is correct
- Check the endpoint status in the RunPod dashboard
### Costs Higher Than Expected
- Shorten the Idle Timeout (5 seconds recommended)
- Verify Min Workers is set to 0
- Delete unused endpoints
## Summary
z-image-turbo environment setup on RunPod Serverless:
- Create account → Configure endpoint → Get API key
- RTX A4000 is the best cost-performance option
- Min Workers = 0 for zero idle cost
- curl or script for API requests
## Related Articles
- What Is z-image-turbo — Model features and selection guide
- ConoHa AI Canvas Getting Started — Beginners start here
- ComfyUI Workflow — Pre-configured workflow
- Cloud GPU Comparison — Cost comparison between services