ShengShu Technology's text-to-image and reference-to-image model with support for character consistency and multi-reference image processing
vidu-q2

Providers & Pricing
Vidu Q2 is available from 2 providers, with per-image pricing starting at $0.10 through fal.ai. Prices shown are in USD.

Routes:
fal/vidu-q2
fal/vidu-q2-edit
Vidu Q2 API (OpenAI-compatible)
Integrate Vidu Q2 into your workflow using Lumenfall's OpenAI-compatible API to perform advanced text-to-image generation and complex image editing through a single endpoint.
Base URL: https://api.lumenfall.ai/openai/v1
Model ID: vidu-q2
Code Examples
Text to Image
POST /v1/images/generations

curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "vidu-q2",
    "prompt": "A watercolor fox in a snowy forest",
    "size": "1024x1024"
  }'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'vidu-q2',
  prompt: 'A watercolor fox in a snowy forest',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="vidu-q2",
    prompt="A watercolor fox in a snowy forest",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
Image Edit
POST /v1/images/edits

Parameter Reference
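The captured examples above cover only text-to-image, so here is a hedged sketch of an edit request. It builds, but does not send, a POST against the /v1/images/edits endpoint using only the Python standard library. The JSON body shape and the "image" field name are assumptions extrapolated from the generations example and the parameter table; OpenAI's native edits endpoint uses multipart form data, so check Lumenfall's documentation before relying on this shape.

```python
import json
import urllib.request

API_BASE = "https://api.lumenfall.ai/openai/v1"

def build_edit_request(api_key: str, prompt: str, image_url: str) -> urllib.request.Request:
    # Assumed JSON body; the "image" field name mirrors the Media Inputs
    # parameter table but is not confirmed for this endpoint.
    body = json.dumps({
        "model": "vidu-q2",
        "prompt": prompt,
        "image": image_url,
        "size": "1024x1024",
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/images/edits",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Build only; call urllib.request.urlopen(req) to actually send it.
req = build_edit_request("YOUR_API_KEY", "Swap the background for a beach at dusk",
                         "https://example.com/reference.png")
print(req.full_url)  # → https://api.lumenfall.ai/openai/v1/images/edits
```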
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| prompt | string | Required. Text prompt for image generation | T2I, Edit |
| seed | integer | Random seed for reproducibility | T2I, Edit |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| size | string | Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). Supported: 1365x768, 768x1365, 1024x1024. WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided). W:H format is equivalent to aspect_ratio. | T2I, Edit |
| aspect_ratio | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Supported: 9:16, 1:1, 16:9. Controls shape independently of scale. Use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I, Edit |
| resolution | string | Output resolution tier (e.g. "1K", "4K"). Supported: 1K. Controls scale independently of shape. Higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I, Edit |
1K (3 sizes)
| Output | size | aspect_ratio + resolution |
|---|---|---|
| 1024 × 1024 | "1024x1024" | "1:1" + "1K" |
| 768 × 1365 | "768x1365" | "9:16" + "1K" |
| 1365 × 768 | "1365x768" | "16:9" + "1K" |
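The precedence rules behind this table can be sketched as a small Python function. This is illustrative only: resolve and its return shape are hypothetical helpers, not part of the API, and the defaults are assumptions.

```python
# Hypothetical resolver for the sizing rules: an explicit WxH `size` always
# wins; otherwise aspect_ratio picks the shape and resolution picks the scale.
def resolve(size=None, aspect_ratio=None, resolution=None):
    if size and "x" in size:
        return {"source": "size", "value": size}
    # A "W:H" size behaves like aspect_ratio; the fallbacks are assumptions.
    shape = aspect_ratio or (size if size and ":" in size else None) or "1:1"
    scale = resolution or "1K"
    return {"source": "aspect_ratio+resolution", "value": f"{shape} @ {scale}"}

print(resolve(size="1024x1024", aspect_ratio="16:9"))  # size wins
print(resolve(aspect_ratio="16:9", resolution="1K"))
```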
How these parameters work
| Parameter | Controls | Example |
|---|---|---|
| size | Exact pixel dimensions | "1920x1080" |
| aspect_ratio | Shape only, default scale | "16:9" |
| resolution | Scale tier, preserves shape | "1K" |

Priority when combined
size is most specific and always wins. aspect_ratio and resolution control shape and scale independently.

How matching works
Requested values are mapped to the nearest supported one. For example, requesting 7:1 on a model that supports 4:1 and 8:1 gives you 8:1. Resolution can be specified as named tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25, 1). If the exact tier isn't available, you get the nearest one.
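The nearest-match rule can be sketched in a few lines of Python. This is illustrative: the gateway's actual distance metric is not documented here, so measuring distance on the ratio's numeric value is an assumption.

```python
# Map a requested aspect ratio to the nearest supported one, comparing the
# numeric value of each W:H ratio.
def nearest_ratio(requested: str, supported: list[str]) -> str:
    def value(ratio: str) -> float:
        w, h = ratio.split(":")
        return float(w) / float(h)

    target = value(requested)
    return min(supported, key=lambda r: abs(value(r) - target))

print(nearest_ratio("7:1", ["4:1", "8:1"]))  # → 8:1, matching the example above
```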
Media Inputs
| Parameter | Type | Description | Modes |
|---|---|---|---|
| image | file | Required. Input image(s) to edit. Supports PNG, JPEG, WebP. | Edit |
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| response_format | string | How to return the image: "url" or "b64_json". Default: "url" | T2I, Edit |
| output_format | string | Output image format: png, jpeg, gif, webp, avif. Gateway converts to the requested format if the provider doesn't support it natively. | T2I, Edit |
| output_compression | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit |
| n | integer | Number of images to generate. Default: 1. Gateway generates multiple images in parallel even if the provider only supports 1. | T2I, Edit |
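When response_format is "b64_json", the image arrives inline rather than as a URL. A minimal sketch of decoding it to a file, assuming the response follows the OpenAI images schema shown in the examples above; the response dict here is fabricated for illustration.

```python
import base64

# Stand-in for a real response with response_format="b64_json"
# (real data would be a complete encoded image, not this stub).
fake_response = {"data": [{"b64_json": base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()}]}

# Decode the base64 payload and write it out as a PNG file.
png_bytes = base64.b64decode(fake_response["data"][0]["b64_json"])
with open("out.png", "wb") as f:
    f.write(png_bytes)
```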
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | style, quality |
| renamed | Same value, mapped to the field name the provider expects | prompt |
| converted | Transformed to the provider's native format | size |
| emulated | Works even if the provider has no concept of it | n, response_format |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
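A toy sketch of these four behaviors. The field map and the conversion rule here are hypothetical stand-ins; the real mappings are internal to the gateway.

```python
# Hypothetical provider field map: `prompt` is renamed for this provider.
FIELD_MAP = {"prompt": "text_prompt"}

def normalize(params: dict) -> dict:
    out = {}
    for key, value in params.items():
        if key == "size" and ":" not in str(value):
            # converted: made-up rule turning "WxH" into the provider's ratio field
            w, h = value.split("x")
            out["aspect_ratio"] = f"{w}:{h}"
        elif key in FIELD_MAP:
            # renamed: same value, provider's field name
            out[FIELD_MAP[key]] = value
        elif key in ("n", "response_format"):
            # emulated: handled by the gateway, never forwarded
            continue
        else:
            # passthrough: unrecognized keys go straight to the provider
            out[key] = value
    return out

print(normalize({"prompt": "a fox", "size": "1024x1024", "n": 2, "style": "anime"}))
```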
Vidu Q2 FAQ
How much does Vidu Q2 cost?
Vidu Q2 starts at $0.10 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use Vidu Q2 via API?
You can use Vidu Q2 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "vidu-q2". Code examples are available in Python, JavaScript, and cURL.
Which providers offer Vidu Q2?
Vidu Q2 is available through fal.ai on Lumenfall. Lumenfall automatically routes requests to the best available provider.
Overview
Vidu Q2 is a specialized image generation model developed by ShengShu Technology that prioritizes structural control and character consistency. Unlike standard text-to-image models that often struggle to maintain identity across multiple generations, Vidu Q2 is designed to process multiple reference images to anchor the visual features of a subject. This makes it a functional tool for creators who need to place the same character or object into varying environments and poses without losing visual fidelity.
Strengths
- Character Consistency: The model excels at preserving the identity, facial features, and attire of a subject when provided with reference images, reducing the “hallucination” of new traits between frames or shots.
- Multi-Reference Processing: It can ingest and synthesize information from more than one reference image simultaneously, allowing for better 360-degree understanding of a subject’s geometry and textures.
- Structural Adherence: Vidu Q2 demonstrates high accuracy in following compositional instructions, ensuring that the spatial relationship between the subject and the background remains coherent.
- Prompt Alignment: It maintains a strong correlation between complex text prompts and the resulting visual elements, even when constrained by specific image references.
Limitations
- Style Rigidity: Because the model focuses heavily on consistency, it may sometimes inherit unwanted lighting or stylistic artifacts from the reference images, making it difficult to completely pivot to a drastically different art style without significant prompting effort.
- Attribute Bleeding: When using multiple reference images with conflicting details (e.g., a character wearing different hats in two photos), the model may intermittently blend these features in unexpected ways.
- Lower Creative Variance: Users seeking “happy accidents” or high stylistic diversity may find the model’s output overly constrained compared to more generalized diffusion models like Stable Diffusion XL or Flux.
Technical Background
Vidu Q2 is part of the Vidu family of generative models, utilizing a transformer-based architecture optimized for multimodal inputs. The model’s key technical differentiator is its specialized attention mechanism that gives weighted priority to visual tokens extracted from reference images. This training approach allows the model to treat reference images as “hard constraints” rather than mere stylistic suggestions, ensuring the generated output remains grounded in the provided visual data.
Best For
Vidu Q2 is best suited for storyboarding, character design, and brand-consistent marketing campaigns where maintaining a singular “hero” subject is critical. It is an effective choice for game developers and concept artists who need to visualize a character in multiple scenarios or lighting conditions.
Vidu Q2 is available to explore through the Lumenfall playground and can be integrated into production workflows via the Lumenfall unified API, providing a consistent interface for high-fidelity character generation.
Try Vidu Q2 in Playground
Generate images with custom prompts; no API key needed.