Documentation
Everything you need to start generating images and videos through one unified API.
Quickstart
Three steps to your first generation:
- Create an account at panel.designapi.ink/register
- Top up your balance and create an API key (Tokens → Add new)
- Make your first request:
curl https://api.designapi.ink/v1/images/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "flux",
"prompt": "a futuristic city at sunset",
"n": 1,
"size": "1024x1024"
}'
Authentication
All requests require an API key passed in the Authorization header:
Authorization: Bearer sk-...
Generate keys in your dashboard. You can create multiple keys, set per-key quotas, restrict allowed models, and revoke at any time.
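In Python with the requests library, the header can be attached once on a session and reused for every call (a minimal sketch; the helper name is ours, not part of the API):

```python
import requests

def make_session(api_key: str) -> requests.Session:
    """Return a requests session with the Authorization header pre-set."""
    session = requests.Session()
    session.headers.update({
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })
    return session

session = make_session("sk-...")
print(session.headers["Authorization"])  # Bearer sk-...
```

Every request made through this session then carries the key automatically.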
Endpoints overview
All paths below are relative to the base URL https://api.designapi.ink.
Different models use different endpoint patterns. Pick the one matching your model:
| Use case | Endpoint | Models |
|---|---|---|
| Image generation | /v1/images/generations | flux, dall-e-3, recraft, ideogram, kling-image, etc. |
| Image via chat | /v1/chat/completions | gemini-*-image, gpt-image-*, gpt-4o-image, sora_image, nano-banana |
| Midjourney | /mj/submit/imagine | mj_fast_*, mj_relax_* |
| Video (async) | /v1/video/generations | kling-*, sora-2, veo3.1, runway-*, hailuo, wan2.*, seedance, pika, luma, pixverse, etc. |
Errors & rate limits
Errors follow OpenAI format:
{
"error": {
"message": "insufficient quota",
"type": "one_api_error",
"code": 403
}
}
| Code | Meaning |
|---|---|
| 401 | Invalid API key |
| 403 | Insufficient balance / model not allowed for this key |
| 429 | Rate limit hit (per-key concurrent limit) |
| 500/503 | Upstream provider error — retry after a few seconds |
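Since 429 and 5xx responses are retryable, a small exponential-backoff wrapper helps. This sketch takes any callable that returns an object with a status_code attribute (the helper is ours, not part of the API):

```python
import time

RETRYABLE = {429, 500, 503}

def with_retry(call, max_attempts=4, base_delay=1.0):
    """Invoke call() and retry on retryable HTTP status codes with exponential backoff."""
    for attempt in range(max_attempts):
        resp = call()
        if resp.status_code not in RETRYABLE:
            return resp
        if attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return resp  # last response, still an error after all attempts
```

Usage: `with_retry(lambda: requests.post(url, headers=headers, json=body))`.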
Image generation — DALL-E format
Works for: flux, flux-dev, flux-pro, flux-schnell, dall-e-3, recraftv3, ideogram-generate-v3, kling-image, doubao-seedream-*, bfl/flux-*, and most other static image models.
Request
POST https://api.designapi.ink/v1/images/generations
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
{
"model": "flux-pro",
"prompt": "a cyberpunk samurai under neon rain, ultra detailed",
"n": 1,
"size": "1024x1024",
"response_format": "url"
}
Parameters
| Field | Type | Notes |
|---|---|---|
| model | string | Required. Exact model id. |
| prompt | string | Required. |
| n | int | 1–4 (depends on model) |
| size | string | 512x512, 1024x1024, 1792x1024, etc. |
| response_format | string | url (default) or b64_json |
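With response_format set to b64_json, the response carries the image as base64 in data[0].b64_json instead of a URL; decoding it to a file is a few lines (a sketch, with a helper name of our own):

```python
import base64

def save_b64_image(b64_data: str, path: str) -> int:
    """Decode a b64_json payload and write it to disk; returns bytes written."""
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)
```

Usage: `save_b64_image(result.data[0].b64_json, "out.png")`.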
Python (OpenAI SDK)
from openai import OpenAI
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.designapi.ink/v1")
result = client.images.generate(
model="flux-pro",
prompt="a cyberpunk samurai under neon rain",
size="1024x1024",
n=1,
)
print(result.data[0].url)
Node.js
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "YOUR_API_KEY",
baseURL: "https://api.designapi.ink/v1",
});
const r = await client.images.generate({
model: "flux-pro",
prompt: "a cyberpunk samurai under neon rain",
size: "1024x1024",
});
console.log(r.data[0].url);
Image generation — chat-style
Some image models (Google Gemini, GPT-Image-via-chat, Sora image) return images through the chat completions endpoint. Works for: gemini-2.5-flash-image, gemini-3-pro-image-preview, nano-banana, nano-banana-pro, gpt-4o-image, sora_image.
curl https://api.designapi.ink/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "nano-banana-pro",
"messages": [
{"role": "user", "content": "Generate an image of a futuristic city at sunset, 4K"}
]
}'
The image URL or base64 data is returned inside choices[0].message.content, either as a markdown image link (![...](url)) or as a raw base64 string.
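Extracting the URL from markdown-formatted content can be done with a small regex (a sketch; it assumes the standard ![alt](url) markdown image form):

```python
import re

MD_IMAGE = re.compile(r"!\[[^\]]*\]\((\S+?)\)")

def extract_image_url(content: str):
    """Return the first markdown image URL in a chat completion's content, or None."""
    m = MD_IMAGE.search(content)
    return m.group(1) if m else None

print(extract_image_url("Here you go: ![image](https://cdn.example/img.png)"))
# → https://cdn.example/img.png
```

If no markdown link is found, treat the content as a raw base64 string instead.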
Image editing
Models with the image-edit tag (Flux Kontext, SeedEdit, Gemini-image-edit, qwen-image-edit) accept an existing image plus an instruction.
Flux Kontext (DALL-E format with image)
curl https://api.designapi.ink/v1/images/edits \
-H "Authorization: Bearer YOUR_API_KEY" \
-F "model=flux-kontext-pro" \
-F "prompt=add sunglasses to the cat" \
-F "image=@cat.png"
Gemini image edit (chat format)
{
"model": "gemini-2.5-flash-image-preview",
"messages": [{
"role": "user",
"content": [
{"type": "text", "text": "Replace the background with a beach"},
{"type": "image_url", "image_url": {"url": "https://..."}}
]
}]
}
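To edit a local file rather than a hosted image, the image can be inlined as a base64 data URL in the image_url field (a sketch; the helper name is ours, and data-URL support should be verified per model):

```python
import base64

def edit_payload(model: str, instruction: str, image_bytes: bytes,
                 mime: str = "image/png") -> dict:
    """Build a chat-format image-edit request with the image inlined as a data URL."""
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }
```

The returned dict is posted to /v1/chat/completions exactly like the hosted-URL example above.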
Midjourney — submit job
Midjourney is asynchronous. You submit a job, get a task_id, then poll until done.
1. Submit
POST https://api.designapi.ink/mj/submit/imagine
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
{
"prompt": "a futuristic city at sunset --ar 16:9 --v 6",
"botType": "MID_JOURNEY"
}
→ {"code": 1, "result": "1745324567890123", "description": "submitted"}
Use mj_fast_* models for fast queue, mj_relax_* for relax queue (slower but cheaper).
Midjourney — poll status
GET https://api.designapi.ink/mj/task/{task_id}/fetch
Authorization: Bearer YOUR_API_KEY
→ {
"id": "1745324567890123",
"status": "SUCCESS",
"imageUrl": "https://cdn.../image.png",
"buttons": [...]
}
Possible status values: NOT_START, SUBMITTED, IN_PROGRESS, SUCCESS, FAILURE.
Python polling helper
import requests, time
BASE, KEY = "https://api.designapi.ink", "YOUR_API_KEY"
H = {"Authorization": f"Bearer {KEY}"}
r = requests.post(f"{BASE}/mj/submit/imagine", headers=H, json={
"prompt": "a futuristic city --ar 16:9", "botType": "MID_JOURNEY"
}).json()
task_id = r["result"]
while True:
    s = requests.get(f"{BASE}/mj/task/{task_id}/fetch", headers=H).json()
    if s["status"] == "SUCCESS":
        print(s["imageUrl"])
        break
    if s["status"] == "FAILURE":
        raise RuntimeError(s.get("failReason"))
    time.sleep(3)
Midjourney — actions (upscale, variation, blend)
After a successful imagine, the response contains buttons with customId values. Use them to trigger actions:
POST /mj/submit/action
{
"taskId": "PARENT_TASK_ID",
"customId": "MJ::JOB::upsample::1::abc123"
}
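Picking the right customId out of the buttons array can be wrapped in a tiny helper (a sketch; we assume each button object carries label and customId fields, which is worth confirming against a real fetch response):

```python
def find_button(buttons, label: str):
    """Return the customId of the first button whose label matches, e.g. 'U1' to upscale image 1."""
    for b in buttons:
        if b.get("label") == label:
            return b.get("customId")
    return None

# Payload for /mj/submit/action:
# {"taskId": parent_task_id, "customId": find_button(task["buttons"], "U1")}
```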
Other endpoints: /mj/submit/blend (combine images), /mj/submit/describe (image to prompt), /mj/submit/modal (zoom/pan).
Video generation — async pattern
All video models are asynchronous. The flow is identical across providers:
- Submit — POST returns a task_id
- Poll — GET the task endpoint until status is completed or failed
- Download — read the resulting video URL from the response
Generic example
POST https://api.designapi.ink/v1/video/generations
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
{
"model": "kling-video-v2-6",
"prompt": "a cinematic shot of a hummingbird",
"duration": 5,
"aspect_ratio": "16:9"
}
→ {"id": "vid_abc123", "status": "queued"}
GET https://api.designapi.ink/v1/video/generations/vid_abc123
Authorization: Bearer YOUR_API_KEY
→ {
"id": "vid_abc123",
"status": "completed",
"video_url": "https://..."
}
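Because the submit/poll cycle is identical for every video model, it is worth one generic helper. This sketch separates the polling loop from the HTTP call, so it works with any fetcher (the helper and its error fields are ours):

```python
import time

def poll_until_done(fetch_status, interval=5.0, timeout=600.0):
    """Call fetch_status() until status is 'completed' or 'failed', or until timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_status()
        if task["status"] == "completed":
            return task
        if task["status"] == "failed":
            raise RuntimeError(task.get("error", "generation failed"))
        time.sleep(interval)
    raise TimeoutError("video generation did not finish in time")
```

Usage: `task = poll_until_done(lambda: requests.get(f"{BASE}/v1/video/generations/{vid}", headers=H).json())`, then read `task["video_url"]`.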
Sora 2
Models: sora-2, sora-2-pro. Supports up to 15s (sora-2) or 25s (sora-2-pro).
{
"model": "sora-2",
"prompt": "a paper airplane gliding through a sunset sky, cinematic",
"duration": 10,
"size": "1280x720"
}
Sora-2 also supports the chat completions endpoint with messages instead of prompt — see the Sora API reference.
Kling
Models: kling-video-v1 through kling-video-v2-6, kling-video-v2-master (flagship), kling-effects, kling-lip-sync, kling-video-extend.
{
"model": "kling-video-v2-6",
"prompt": "a hummingbird hovering near a flower",
"image_url": "https://...", // optional, for i2v
"duration": 5,
"mode": "pro" // "std" or "pro"
}
Veo 3.1 (Google)
Models: veo3.1, veo3.1-fast, veo3.1-pro, veo3.1-pro-4k, veo3.1-components. Generates video with native audio.
{
"model": "veo3.1-pro",
"prompt": "ocean waves crashing on rocks at sunset, with sound of waves and seagulls",
"duration": 8,
"aspect_ratio": "16:9"
}
Runway Gen-3 / Gen-4
Models: runwayml-gen3a_turbo, runwayml-gen4_turbo, runway-act_one (face-to-character animation), runway-aleph (video editing).
{
"model": "runwayml-gen4_turbo",
"prompt": "a vintage car driving through Tokyo at night",
"image_url": "https://...", // for image-to-video
"duration": 5
}
Billing & quota
- Each request deducts the model's price from your balance.
- Per-call models charge a fixed amount per request (regardless of output size).
- Token-based models charge per 1M input/output tokens.
- Top up via redeem codes — purchase from @claudxeseller.
- Detailed usage breakdown by model is available in your dashboard under Logs.
SDK compatibility
The base URL https://api.designapi.ink/v1 is fully OpenAI-compatible. You can use it as a drop-in replacement in any OpenAI SDK or library:
- Python: openai package — set base_url
- Node.js: openai package — set baseURL
- LangChain: set openai_api_base
- LiteLLM: set api_base
- Cursor / Continue.dev: custom OpenAI provider, base URL https://api.designapi.ink/v1
Need help? Contact @claudxeseller on Telegram.