Alibaba's Qwen Image 2.0 model with enhanced text rendering, supporting both Chinese and English prompts and up to 6 images per request
Details
qwen-image-2.0
Starting from $0.035/image. Prices shown are in USD.
Providers & Pricing (1)
Qwen Image 2.0 is available exclusively through Alibaba Cloud, starting at $0.035/image.
alibaba/qwen-image-2.0
Qwen Image 2.0 API (OpenAI-compatible)
Connect to Qwen Image 2.0 via the Lumenfall OpenAI-compatible API to generate high-quality images from text prompts, with support for up to 6 images per request.
Base URL: https://api.lumenfall.ai/openai/v1
Model ID: qwen-image-2.0
Code Examples
Text to Image
POST /v1/images/generations

```shell
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-image-2.0",
    "prompt": "A serene mountain lake at sunrise",
    "size": "1024x1024"
  }'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'qwen-image-2.0',
  prompt: 'A serene mountain lake at sunrise',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="qwen-image-2.0",
    prompt="A serene mountain lake at sunrise",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```
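Since the model supports up to six images per request, batching is often preferable to repeated single-image calls. The sketch below assumes any OpenAI-compatible client object; the `generate_batch` helper name is ours, not part of the API:

```python
def generate_batch(client, prompt, count=4):
    """Request several images in one call and return their URLs.

    `client` is any OpenAI-compatible client, e.g.
    OpenAI(api_key=..., base_url="https://api.lumenfall.ai/openai/v1").
    """
    response = client.images.generate(
        model="qwen-image-2.0",
        prompt=prompt,
        n=count,  # Qwen Image 2.0 accepts up to 6 images per request
        size="1024x1024",
    )
    # Each item in response.data carries a hosted URL by default
    return [image.url for image in response.data]
```

A single call with `n=4` also tends to keep stylistic consistency across the set better than four sequential single-image calls (see Strengths below).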
Parameter Reference
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| `prompt` | string | Required. Text prompt for image generation | T2I |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| `size` | string | Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). WxH determines both shape and scale (`aspect_ratio` and `resolution` are ignored when `size` is provided). W:H format is equivalent to `aspect_ratio`. | T2I |
| `aspect_ratio` | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with `resolution` to control both. If `size` is also provided, `size` takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I |
| `resolution` | string | Output resolution tier (e.g. "1K", "4K"). Controls scale independently of shape; higher tiers produce larger images and cost more. If `size` is also provided, `size` takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I |
- `size`: exact pixel dimensions, e.g. "1920x1080"
- `aspect_ratio`: shape only, default scale, e.g. "16:9"
- `resolution`: scale tier, preserves shape, e.g. "1K"
Priority when combined
`size` is most specific and always wins. `aspect_ratio` and `resolution` control shape and scale independently.
How matching works
Requested values are mapped to the nearest supported option. For example, requesting 7:1 on a model that supports 4:1 and 8:1 gives you 8:1. Resolution accepts named tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25, 1). If the exact tier isn't available, you get the nearest one.
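Nearest-value matching for aspect ratios can be sketched in a few lines. This illustrates the behavior described above, not the gateway's actual implementation; `nearest_aspect_ratio` is a hypothetical helper:

```python
def nearest_aspect_ratio(requested, supported):
    """Map a requested W:H ratio string to the closest supported one."""
    def as_float(ratio):
        w, h = ratio.split(":")
        return float(w) / float(h)

    target = as_float(requested)
    # Pick the supported ratio whose numeric value is closest to the request
    return min(supported, key=lambda r: abs(as_float(r) - target))

print(nearest_aspect_ratio("7:1", ["4:1", "8:1"]))  # → 8:1
```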
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| `response_format` | string | How to return the image: `url` or `b64_json`. Default: `"url"` | T2I |
| `output_format` | string | Output image format: `png`, `jpeg`, `gif`, `webp`, or `avif`. The gateway converts to the requested format if the provider doesn't support it natively. | T2I |
| `output_compression` | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I |
| `n` | integer | Number of images to generate. Default: `1`. The gateway generates multiple images in parallel even if the provider only supports 1. | T2I |
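When `response_format` is `b64_json`, each item in `data` carries the image as base64 text rather than a URL. A minimal sketch of decoding such a payload to disk; `save_b64_image` is our helper name, not part of the API:

```python
import base64

def save_b64_image(b64_payload, path):
    """Decode a b64_json payload and write the raw image bytes to disk."""
    raw = base64.b64decode(b64_payload)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)  # bytes written
```

In practice the payload would come from `response.data[0].b64_json` on a request made with `response_format="b64_json"`.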
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | `style`, `quality` |
| renamed | Same value, mapped to the field name the provider expects | `prompt` |
| converted | Transformed to the provider's native format | `size` |
| emulated | Works even if the provider has no concept of it | `n`, `response_format` |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
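These behaviors, plus the pass-through of unrecognized keys, can be sketched as a small dispatch table. The rule structure here is illustrative, not the gateway's internals; emulated parameters such as `n` are consumed by the gateway itself rather than forwarded:

```python
def normalize_params(params, rules):
    """Translate unified parameters into a provider's native fields."""
    native = {}
    for key, value in params.items():
        rule = rules.get(key)
        if rule is None or rule[0] == "passthrough":
            native[key] = value                    # unknown keys pass straight through
        elif rule[0] == "renamed":
            native[rule[1]] = value                # same value, provider's field name
        elif rule[0] == "converted":
            native[rule[1]] = rule[2](value)       # transformed to native format
        # "emulated" keys are handled by the gateway, not forwarded
    return native

# Hypothetical rule table for one provider
rules = {
    "prompt": ("renamed", "text"),
    "size": ("converted", "image_size", lambda s: [int(x) for x in s.split("x")]),
    "n": ("emulated",),
}

print(normalize_params(
    {"prompt": "a cat", "size": "1024x768", "watermark": False}, rules
))  # {'text': 'a cat', 'image_size': [1024, 768], 'watermark': False}
```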
Qwen Image 2.0 FAQ
How much does Qwen Image 2.0 cost?
Qwen Image 2.0 starts at $0.035 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use Qwen Image 2.0 via API?
You can use Qwen Image 2.0 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "qwen-image-2.0". Code examples are available in Python, JavaScript, and cURL.
Which providers offer Qwen Image 2.0?
Qwen Image 2.0 is available through Alibaba Cloud on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What is the maximum resolution for Qwen Image 2.0?
Qwen Image 2.0 supports images up to 2048x2048 resolution.
Overview
Qwen Image 2.0 is a text-to-image generation model developed by Alibaba that specializes in high-fidelity visual synthesis from both Chinese and English prompts. Released in early 2026, it distinguishes itself through its ability to handle complex compositional instructions and its native support for creating sequences of up to six related images within a single request.
Strengths
- Multilingual Semantic Alignment: The model demonstrates high instruction-following accuracy for prompts written in both Chinese and English, reducing the need for translation middleware.
- Batch Consistency: By supporting up to six images per request, the model maintains a higher degree of stylistic and character consistency across a set of generated assets compared to individual sequential calls.
- Typography and Text Rendering: It features enhanced text rendering capabilities, allowing for the inclusion of legible, accurate characters within the generated imagery.
- Complex Composition: The model excels at spatial reasoning, correctly placing multiple subjects or objects in relation to one another as described in long-form textual descriptions.
Limitations
- Batch Latency: Generating multiple images in a single request (up to six) results in higher per-call latency than single-image generation models optimized for speed.
- Specific Domain Gaps: While strong in general artistic and photorealistic styles, it may lag behind niche-specific models trained exclusively on medical or highly technical architectural schematics.
- Regional Cultural Bias: Given its training origin, the model may default to East Asian aesthetic preferences or cultural contexts for ambiguous prompts unless specified otherwise.
Technical Background
As part of the Qwen family, this model utilizes a diffusion-based architecture integrated with a large-scale multimodal transformer backbone. It leverages a dual-language text encoder that allows it to project Chinese and English tokens into a shared latent space, ensuring consistent conceptual mapping across languages. Alibaba utilized fine-grained reinforcement learning from human feedback (RLHF) specifically tuned for image aesthetic quality and text-alignment accuracy.
Best For
Qwen Image 2.0 is ideal for marketing teams creating localized content for global audiences, concept artists requiring consistent character references across multiple frames, and developers building applications that require accurate text overlays in images.
You can experiment with its multi-image generation capabilities and compare its multilingual performance through the Lumenfall unified API and playground, which provides a standardized interface for Qwen and other leading image models.
Try Qwen Image 2.0 in Playground
Generate images with custom prompts — no API key needed.