Wan 2.5 (Preview)

AI Image Generation & Editing Model

Image · $0.05 per image

Alibaba's text-to-image and image-to-image generation model from the Wan AI suite, offering high-quality visual generation capabilities.

Example outputs coming soon

Supported Modes
Text to Image · Image Edit
Status: Active

Details

Model ID
wan-2.5-preview
Creator
Alibaba
Family
wan
Tags
image-generation text-to-image image-editing
Get Started

Ready to integrate?

Access wan-2.5-preview via our unified API.

Create Account
Available at 1 provider

Starting from

$0.050 /image via fal.ai

Prices shown are in USD

Full pricing details

Providers & Pricing (2)

Wan 2.5 (Preview) is available from fal.ai in two configurations (text-to-image and image edit), with per-image pricing of $0.05.

fal.ai
Text to Image
fal/wan-2.5-preview
Provider Model ID: fal-ai/wan-25-preview/text-to-image
$0.050 /image
fal.ai
Image Edit
fal/wan-2.5-preview-edit
Provider Model ID: fal-ai/wan-25-preview/image-to-image
$0.050 /image

Wan 2.5 (Preview) API (OpenAI-compatible)

Integrate Alibaba's Wan 2.5 (Preview) into your applications via the Lumenfall OpenAI-compatible API to programmatically generate and edit images from text prompts.

Base URL
https://api.lumenfall.ai/openai/v1
Model
wan-2.5-preview

Code Examples

Text to Image

/v1/images/generations
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "wan-2.5-preview",
    "prompt": "",
    "size": "1024x1024"
  }'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }

Image Edit

/v1/images/edits
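The edits endpoint mirrors the generation call. The sketch below assumes the same `wan-2.5-preview` model ID routes to the edit configuration when sent to `/v1/images/edits`, and that the gateway accepts the input image as a URL in a JSON body (OpenAI's reference API uses multipart form uploads for edits, so check the gateway docs for the exact convention). The image URL is a placeholder.

```shell
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/edits \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "wan-2.5-preview",
    "prompt": "replace the background with a snowy mountain range",
    "image": "https://example.com/input.png",
    "size": "1024x1024"
  }'
```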

Parameter Reference

Legend: Required · Supported · Not available

Core Parameters

Parameter Type Description Modes
prompt string Required. Text prompt describing the desired image, or the edit instruction in edit mode
T2I Edit
negative_prompt string Negative prompt to guide generation away from undesired content
T2I Edit
seed integer Random seed for reproducibility
T2I Edit

Size & Layout

Parameter Type Description Modes
size string Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9")
WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided). W:H format is equivalent to aspect_ratio.
T2I Edit
aspect_ratio string Aspect ratio of the output image (e.g. "16:9", "1:1")
Controls shape independently of scale. Use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value.
T2I Edit
resolution string Output resolution tier (e.g. "1K", "4K")
Controls scale independently of shape. Higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value.
T2I Edit
  • size: exact pixel dimensions, e.g. "1920x1080"
  • aspect_ratio: shape only, default scale, e.g. "16:9"
  • resolution: scale tier, preserves shape, e.g. "1K"

Priority when combined

size > aspect_ratio + resolution > aspect_ratio > resolution

size is most specific and always wins. aspect_ratio and resolution control shape and scale independently.
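The precedence rules above can be sketched as a small resolver. This is illustrative only (not Lumenfall's actual implementation); "default" stands in for whatever the model's default shape or scale is.

```python
def resolve_layout(size=None, aspect_ratio=None, resolution=None):
    """Decide which parameters determine shape and scale (illustrative only)."""
    if size and "x" in size:
        return {"shape": size, "scale": size}  # exact WxH wins outright
    if size:
        aspect_ratio = size  # "W:H" size is equivalent to aspect_ratio
    return {
        "shape": aspect_ratio or "default",
        "scale": resolution or "default",
    }

print(resolve_layout(size="1920x1080", aspect_ratio="1:1"))  # size wins
print(resolve_layout(aspect_ratio="16:9", resolution="1K"))  # shape + scale
print(resolve_layout(resolution="4K"))                       # scale only
```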

How matching works

  • Shape matching: we pick the closest supported ratio. Ask for 7:1 on a model that supports 4:1 and 8:1, and you get 8:1.
  • Scale matching: providers use different tier formats, either K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (e.g. 0.25 MP, 1 MP). If the exact tier isn't available, you get the nearest one.
  • Dimension clamping: if a model has pixel limits, we clamp dimensions to fit and keep the aspect ratio intact.
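The matching steps above can be sketched in Python. This is an illustrative reimplementation, not Lumenfall's routing code; the supported ratios, tiers, and pixel limits in the examples are invented.

```python
from fractions import Fraction

def nearest_ratio(requested: str, supported: list[str]) -> str:
    """Pick the supported aspect ratio closest to the requested one."""
    def value(r: str) -> Fraction:
        w, h = r.split(":")
        return Fraction(int(w), int(h))
    target = value(requested)
    return min(supported, key=lambda r: abs(value(r) - target))

def nearest_tier(requested_mp: float, supported_mp: list[float]) -> float:
    """Pick the supported scale tier (in megapixels) closest to the request."""
    return min(supported_mp, key=lambda t: abs(t - requested_mp))

def clamp_to_limits(w: int, h: int, max_side: int) -> tuple[int, int]:
    """Shrink dimensions to fit a pixel limit while keeping the aspect ratio."""
    scale = min(1.0, max_side / max(w, h))
    return round(w * scale), round(h * scale)

print(nearest_ratio("7:1", ["4:1", "8:1"]))  # 8:1, as in the example above
print(nearest_tier(0.5, [0.25, 1.0]))        # 0.25
print(clamp_to_limits(3840, 2160, 2048))     # (2048, 1152)
```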

Media Inputs

Parameter Type Description Modes
image file Required in edit mode. Input image(s) to edit
Supports PNG, JPEG, WebP.
Edit

Output & Format

Parameter Type Description Modes
response_format string How to return the image
Allowed values: "url", "b64_json"
Default: "url"
T2I Edit
output_format string Output image format
Allowed values: png, jpeg, gif, webp, avif
Gateway converts to requested format if provider doesn't support it natively.
T2I Edit
output_compression integer Compression level for lossy formats (JPEG, WebP, AVIF)
T2I Edit
n integer Number of images to generate
Default: 1
Gateway generates multiple images in parallel even if provider only supports 1.
T2I Edit

Additional Parameters

Parameter Type Description Modes
enable_prompt_expansion boolean (fal.ai) Whether to enable prompt rewriting using an LLM. Improves results for short prompts but increases processing time.
T2I Edit
enable_safety_checker boolean (fal.ai) Enables the provider's safety checker.
T2I Edit

Parameter Normalization

How we handle parameters across different providers

Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:

Behavior What happens Example
passthrough Sent as-is to the provider style, quality
renamed Same value, mapped to the field name the provider expects prompt
converted Transformed to the provider's native format size
emulated Works even if the provider has no concept of it n, response_format

Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
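As an illustration of these four behaviors, here is a toy normalizer. The rule table and field names are invented for the example and don't reflect Lumenfall's real provider mappings.

```python
def normalize(params: dict, rules: dict) -> dict:
    """Apply per-parameter handling rules before calling a provider.

    rules maps a parameter name to one of:
      ("passthrough",)      - send unchanged
      ("rename", new_name)  - same value, provider's field name
      ("convert", fn)       - transform to the provider's format
      ("emulate",)          - handled by the gateway, not forwarded
    Unknown parameters pass straight through.
    """
    out = {}
    for key, value in params.items():
        rule = rules.get(key, ("passthrough",))
        if rule[0] == "passthrough":
            out[key] = value
        elif rule[0] == "rename":
            out[rule[1]] = value
        elif rule[0] == "convert":
            out[key] = rule[1](value)
        elif rule[0] == "emulate":
            pass  # the gateway implements it (e.g. n > 1 via parallel calls)
    return out

# Hypothetical provider that takes "text" instead of "prompt"
# and wants size as a [width, height] list.
rules = {
    "prompt": ("rename", "text"),
    "size": ("convert", lambda s: [int(x) for x in s.split("x")]),
    "n": ("emulate",),
}
print(normalize({"prompt": "a cat", "size": "1024x1024", "n": 2, "seed": 7}, rules))
# {'text': 'a cat', 'size': [1024, 1024], 'seed': 7}
```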

Wan 2.5 (Preview) FAQ

How much does Wan 2.5 (Preview) cost?

Wan 2.5 (Preview) starts at $0.05 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

How do I use Wan 2.5 (Preview) via API?

You can use Wan 2.5 (Preview) through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "wan-2.5-preview". Code examples are available in Python, JavaScript, and cURL.

Which providers offer Wan 2.5 (Preview)?

Wan 2.5 (Preview) is available through fal.ai on Lumenfall. Lumenfall automatically routes requests to the best available provider.

Overview

Wan 2.5 (Preview) is a high-performance image generation model developed by Alibaba’s Wan AI team. It is designed for both text-to-image and image-to-image workflows, focusing on high-fidelity visual output and nuanced prompt adherence. This preview release represents Alibaba’s latest advancement in generative modeling, aiming to compete with leading diffusion models by balancing computational efficiency with aesthetic quality.

Strengths

  • Prompt Adherence: The model demonstrates a strong ability to follow complex, multi-part descriptive prompts, accurately placing objects and maintaining specified color palettes.
  • Image-to-Image Versatility: Beyond generating images from scratch, it excels at taking reference images and applying stylistic or structural modifications while preserving the essence of the source material.
  • Compositional Detail: It is particularly effective at rendering scenes with realistic lighting, shadows, and textures, reducing the common “plastic” look sometimes found in earlier diffusion iterations.
  • Text Rendering: Its architecture shows improved reliability in rendering legible text within generated images compared to older generation models in the same class.

Limitations

  • Sensitivity to Short Prompts: As a preview model, it often performs best with detailed descriptions; very brief or ambiguous prompts may lead to generic or unpredictable results.
  • Anatomical Accuracy: Like many current diffusion models, it can occasionally struggle with complex human anatomy, such as intricate hand positions or high-action poses, requiring iterative prompting to resolve.
  • Regional Latency: Depending on the provider infrastructure, inference times may be slightly higher than lightweight distilled models, making it less suitable for real-time applications.

Technical Background

Wan 2.5 is part of the Wan AI suite and utilizes a diffusion-based architecture optimized for high-resolution synthesis. The model is trained on a massive dataset of high-quality image-text pairs, employing specific training techniques to enhance spatial reasoning and visual consistency. While specific architectural whitepapers for this preview release are forthcoming, it follows the transformer-based diffusion paradigm (DiT) that has become the standard for modern high-performance generative AI.

Best For

  • Creative Asset Generation: Ideal for designers needing concept art, marketing visuals, or high-fidelity backgrounds with precise control.
  • Style Transfer and Editing: Strong for workflows where a user needs to transform an existing image into a different aesthetic or update specific elements of a composition.
  • Prototyping: Useful for developers building applications that require high-quality visual outputs for user-facing content.

Wan 2.5 (Preview) is available for immediate testing through Lumenfall’s unified API and interactive playground, allowing you to integrate it into your production environment or experiment with its capabilities alongside other leading models.

Try Wan 2.5 (Preview) in Playground

Generate images with custom prompts — no API key needed.

Open Playground