Wan 2.6 API: Image & Video Generation
Alibaba's multimodal generation model from the Wan AI suite, supporting text-to-video, image-to-video, reference-to-video with audio, and text-to-image, in both Chinese and English.
Integrate Wan 2.6 into your workflow for text-to-image generation, image editing, and video generation via Lumenfall's unified OpenAI-compatible API. These endpoints support both direct text prompting and reference image guidance to maintain stylistic consistency across your generated media.
Base URLs:
- Image: https://api.lumenfall.ai/openai/v1
- Video: https://api.lumenfall.ai/v1

Model ID: wan-2.6
Code Examples
Text to Image
POST /v1/images/generations

curl -X POST https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "wan-2.6",
    "prompt": "A serene mountain lake at sunrise, photorealistic",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'wan-2.6',
  prompt: 'A serene mountain lake at sunrise, photorealistic',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="wan-2.6",
    prompt="A serene mountain lake at sunrise, photorealistic",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
Image Edit: POST /v1/images/edits
Text to Video: POST /v1/videos/generations
Image to Video: POST /v1/videos/generations
Video to Video: POST /v1/videos/generations

Parameter Reference
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| `prompt` | string | Required. Text prompt describing the desired output | T2I, Edit, T2V, I2V, V2V |
| `negative_prompt` | string | Negative prompt to guide generation away from undesired content | T2I, Edit, T2V, I2V, V2V |
| `seed` | integer | Random seed for reproducibility | T2I, Edit, T2V, I2V, V2V |
Size & Layout

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `size` | string | Dimensions as WxH pixels (e.g. `"1024x1024"`) or aspect ratio (e.g. `"16:9"`). WxH determines both shape and scale (`aspect_ratio` and `resolution` are ignored when `size` is provided); the W:H format is equivalent to `aspect_ratio`. | T2I, Edit, T2V, I2V, V2V |
| `aspect_ratio` | string | Aspect ratio of the output (e.g. `"16:9"`, `"1:1"`). Controls shape independently of scale; use with `resolution` to control both. If `size` is also provided, `size` takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I, Edit, T2V, I2V, V2V |
| `resolution` | string | Output resolution tier (e.g. `"1K"`, `"4K"`). Controls scale independently of shape; higher tiers produce larger images and cost more. If `size` is also provided, `size` takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I, Edit, T2V, I2V, V2V |
At a glance:

| Parameter | Effect | Example |
|---|---|---|
| `size` | Exact pixel dimensions | `"1920x1080"` |
| `aspect_ratio` | Shape only, default scale | `"16:9"` |
| `resolution` | Scale tier, preserves shape | `"1K"` |

Priority when combined: `size` is most specific and always wins. `aspect_ratio` and `resolution` control shape and scale independently.

How matching works: requesting `7:1` on a model that supports `4:1` and `8:1` gives you `8:1`. Resolution tiers may be named (`0.5K`, `1K`, `2K`, `4K`) or expressed as megapixel tiers (`0.25`, `1`). If the exact tier isn't available, you get the nearest one.
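The nearest-match behavior described above can be sketched in a few lines. This is an illustrative approximation, not Lumenfall's actual implementation; the supported ratio lists and the megapixel values assigned to tier names below are assumptions.

```python
from fractions import Fraction

def nearest_aspect_ratio(requested: str, supported: list[str]) -> str:
    """Map a requested W:H ratio to the numerically closest supported one."""
    def value(ratio: str) -> Fraction:
        w, h = ratio.split(":")
        return Fraction(int(w), int(h))
    target = value(requested)
    return min(supported, key=lambda r: abs(value(r) - target))

def nearest_resolution_tier(requested_mp: float, tiers: dict[str, float]) -> str:
    """Map a requested megapixel budget to the closest named tier."""
    return min(tiers, key=lambda name: abs(tiers[name] - requested_mp))

# The doc's example: 7:1 on a model supporting 4:1 and 8:1 resolves to 8:1.
print(nearest_aspect_ratio("7:1", ["4:1", "8:1"]))  # 8:1
```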
Media Inputs

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `image` | file | Required. Input image(s) to edit. Supports PNG, JPEG, WebP. | T2I, Edit, T2V, I2V, V2V |
Output & Format

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `response_format` | string | How to return the image: `url` or `b64_json`. Default: `"url"` | T2I, Edit, T2V, I2V, V2V |
| `output_format` | string | Output image format: `png`, `jpeg`, `gif`, `webp`, `avif`. The gateway converts to the requested format if the provider doesn't support it natively. | T2I, Edit, T2V, I2V, V2V |
| `output_compression` | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit, T2V, I2V, V2V |
| `n` | integer | Number of images to generate. Default: `1`. The gateway generates multiple images in parallel even if the provider only supports 1. | T2I, Edit, T2V, I2V, V2V |
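When you request `response_format: "b64_json"`, the image arrives inline as base64 rather than as a URL. A minimal sketch of decoding it to a file, assuming the response follows the OpenAI images API shape shown in the examples above:

```python
import base64

def save_b64_image(b64_data: str, path: str) -> int:
    """Decode a base64-encoded image payload, write it to disk,
    and return the number of bytes written."""
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Usage with the SDK response shape:
# save_b64_image(response.data[0].b64_json, "out.png")
```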
Additional Parameters

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `input_reference` | array | Input image(s) to animate into video | T2I, Edit, T2V, I2V, V2V |
| `input_video` | string | Input video URL to transform | T2I, Edit, T2V, I2V, V2V |
| `enable_prompt_expansion` (fal) | boolean | Enable LLM prompt optimization. Significantly improves results for simple prompts but adds 3-4 seconds of processing time. | T2I, Edit, T2V, I2V, V2V |
| `enable_safety_checker` (fal) | boolean | Enable content moderation for input and output. | T2I, Edit, T2V, I2V, V2V |
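Putting the tables together, a generation request might combine several of these parameters. A hypothetical payload sketch: the field names come from the tables above, but the exact request shape for the video endpoints isn't shown in this page's examples, so treat this as illustrative rather than authoritative.

```python
import json

def build_payload(prompt: str, **options) -> dict:
    """Assemble a request body from documented parameters,
    dropping anything left unset (None)."""
    body = {"model": "wan-2.6", "prompt": prompt, **options}
    return {k: v for k, v in body.items() if v is not None}

payload = build_payload(
    "A paper boat drifting down a rain-soaked street",
    negative_prompt="blurry, low quality",
    seed=42,
    aspect_ratio="16:9",
    resolution="1K",
    enable_prompt_expansion=True,
)
print(json.dumps(payload, indent=2))
```

Note that `size` is omitted here: per the Size & Layout table, providing it would override both `aspect_ratio` and `resolution`.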
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | `style`, `quality` |
| renamed | Same value, mapped to the field name the provider expects | `prompt` |
| converted | Transformed to the provider's native format | `size` |
| emulated | Works even if the provider has no concept of it | `n`, `response_format` |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
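As an illustration of the emulated behavior, here is how a gateway might fan out `n` requests against a provider that only returns one image per call. This is a conceptual sketch, not Lumenfall's implementation; `provider_call` stands in for whatever single-image request the gateway actually makes.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_n(provider_call, prompt: str, n: int) -> list:
    """Emulate n > 1 by issuing n parallel single-image requests
    to a provider that only supports one image per call."""
    with ThreadPoolExecutor(max_workers=max(n, 1)) as pool:
        futures = [pool.submit(provider_call, prompt) for _ in range(n)]
        return [f.result() for f in futures]

# Stand-in provider returning one fake result per call:
fake_provider = lambda prompt: {"url": "https://example.com/img.png"}
print(len(generate_n(fake_provider, "a red fox", n=3)))  # 3
```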