“Hyper-realistic cinematic close-up of a professional speedcuber solving a 3x3 Rubik's Cube at world-record pace. His hands move with insane precision and blistering speed — fingers flying across the glossy colored faces in a complex sequence of advanced algorithms, rapid twists, and smooth layer turns. The cube rotates with perfect realistic physics, slight motion blur on fast turns, and flawless color consistency as it progresses toward a solved state. Subtle sweat glistening on skin, visible veins, hyper-detailed fingerprints and nail textures. Intense focused facial expression with micro-expressions of concentration in shallow depth of field. Dramatic cinematic side lighting with strong specular highlights and reflections dancing across the cube surfaces and skin. Smooth slow orbiting camera that circles the hands and cube, capturing every intricate finger movement from dynamic angles. Photorealistic, 8K, subtle film grain, anamorphic lens flare, moody intense atmosphere, 24fps.”
OpenAI's professional video generation model with higher resolution support up to 1080p, native audio synthesis, and durations up to 20 seconds
Model ID: sora-2-pro

Providers & Pricing (4)
Prices shown are in USD
Sora 2 Pro is available from 4 providers, with per-video pricing starting at $0.30 through fal.ai.
All modes
- fal/sora-2-pro
- fal/sora-2-pro-i2v
- openai/sora-2-pro
- replicate/sora-2-pro
Sora 2 Pro API: Async video generation
Access Sora 2 Pro through the Lumenfall API to generate high-resolution video up to 1080p with native audio synthesis and 20-second durations.
https://api.lumenfall.ai/v1
sora-2-pro
Code Examples
Text to Video
/v1/videos/generations

# Step 1: Submit video generation request
VIDEO_ID=$(curl -s -X POST \
  https://api.lumenfall.ai/v1/videos \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sora-2-pro",
    "prompt": "A professional speedcuber solving a 3x3 cube at world-record pace",
    "size": "1365x768"
  }' | jq -r '.id')
echo "Video ID: $VIDEO_ID"

# Step 2: Poll for completion
while true; do
  RESULT=$(curl -s \
    "https://api.lumenfall.ai/v1/videos/$VIDEO_ID" \
    -H "Authorization: Bearer $LUMENFALL_API_KEY")
  STATUS=$(echo "$RESULT" | jq -r '.status')
  echo "Status: $STATUS"
  if [ "$STATUS" = "completed" ]; then
    echo "$RESULT" | jq -r '.output.url'
    break
  elif [ "$STATUS" = "failed" ]; then
    echo "$RESULT" | jq -r '.error.message'
    break
  fi
  sleep 5
done
const BASE_URL = 'https://api.lumenfall.ai/v1';
const API_KEY = 'YOUR_API_KEY';

// Step 1: Submit video generation request
const submitRes = await fetch(`${BASE_URL}/videos`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'sora-2-pro',
    prompt: 'A professional speedcuber solving a 3x3 cube at world-record pace',
    size: '1365x768'
  })
});
const { id: videoId } = await submitRes.json();
console.log('Video ID:', videoId);

// Step 2: Poll for completion
while (true) {
  const pollRes = await fetch(`${BASE_URL}/videos/${videoId}`, {
    headers: { 'Authorization': `Bearer ${API_KEY}` }
  });
  const result = await pollRes.json();
  if (result.status === 'completed') {
    console.log('Video URL:', result.output.url);
    break;
  } else if (result.status === 'failed') {
    console.error('Error:', result.error.message);
    break;
  }
  await new Promise(r => setTimeout(r, 5000));
}
import requests
import time

BASE_URL = "https://api.lumenfall.ai/v1"
API_KEY = "YOUR_API_KEY"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

# Step 1: Submit video generation request
response = requests.post(
    f"{BASE_URL}/videos",
    headers=HEADERS,
    json={
        "model": "sora-2-pro",
        "prompt": "A professional speedcuber solving a 3x3 cube at world-record pace",
        "size": "1365x768"
    }
)
video_id = response.json()["id"]
print(f"Video ID: {video_id}")

# Step 2: Poll for completion
while True:
    result = requests.get(
        f"{BASE_URL}/videos/{video_id}",
        headers=HEADERS
    ).json()
    if result["status"] == "completed":
        print(f"Video URL: {result['output']['url']}")
        break
    elif result["status"] == "failed":
        print(f"Error: {result['error']['message']}")
        break
    time.sleep(5)
Image to Video
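Image-to-video follows the same submit-and-poll flow as the Text to Video examples above, with an input_reference image added to the request body (see Media Inputs below). The sketch below only builds the request body; whether input_reference accepts a URL, a file upload, or base64 data is provider-dependent, so the URL here is an assumption to check against the provider docs.

```python
import json

def build_i2v_request(prompt: str, image_ref: str, size: str = "1365x768") -> dict:
    """Build the JSON body for an image-to-video request.

    image_ref is assumed here to be an image URL; the accepted format
    for input_reference may vary by provider.
    """
    return {
        "model": "sora-2-pro",
        "prompt": prompt,
        "size": size,
        # Per the parameter reference, the image must have the same
        # aspect ratio as the requested video.
        "input_reference": image_ref,
    }

body = build_i2v_request(
    "The scene slowly comes to life with gentle camera motion",
    "https://example.com/first-frame.jpg",
)
print(json.dumps(body, indent=2))
```

POST this body to the same /videos endpoint, with the same headers and polling loop, as in the Text to Video examples.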
/v1/videos/generations

Parameter Reference
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| prompt | string | Required. Text prompt for video generation | T2V, I2V |
| duration | number | Video duration in seconds | T2V, I2V |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| size | string | Video dimensions as WxH pixels (e.g. "1920x1080") or aspect ratio (e.g. "16:9"). Options: auto, 1365x768, 768x1365. WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided); W:H format is equivalent to aspect_ratio. | T2V, I2V |
| aspect_ratio | string | Aspect ratio of the output video (e.g. "16:9", "1:1"). Options: auto, 9:16, 16:9. Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2V, I2V |
| resolution | string | Output resolution tier (e.g. "1K", "4K"). Options: auto, 1K. Controls scale independently of shape; higher tiers produce larger videos and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2V, I2V |
Flexible
| Output | size | aspect_ratio + resolution |
|---|---|---|
| Auto | "auto" | Model chooses optimal dimensions |

1K (2 sizes)
| Output | size | aspect_ratio + resolution |
|---|---|---|
| 768 × 1365 | "768x1365" | "9:16" + "1K" |
| 1365 × 768 | "1365x768" | "16:9" + "1K" |
How these parameters work
| Parameter | Controls | Example |
|---|---|---|
| size | Exact pixel dimensions | "1920x1080" |
| aspect_ratio | Shape only, default scale | "16:9" |
| resolution | Scale tier, preserves shape | "1K" |

Priority when combined: size is the most specific and always wins; aspect_ratio and resolution control shape and scale independently.
How matching works
Requested values are mapped to the nearest supported one: if you request 7:1 on a model that supports 4:1 and 8:1, you get 8:1. Resolution tiers may be expressed as K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25, 1). If the exact tier isn't available, you get the nearest one.
Media Inputs
| Parameter | Type | Description | Modes |
|---|---|---|---|
| input_reference (replicate) | string | An optional image to use as the first frame of the video. The image must have the same aspect ratio as the video. | T2V, I2V |
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| n | integer | Number of videos to generate. Default: 1. The gateway generates multiple videos in parallel even if the provider only supports one per request. | T2V, I2V |
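The table above notes that n > 1 is emulated by fanning single-video requests out in parallel. A minimal sketch of that fan-out pattern, where generate_one is a hypothetical stand-in for one provider call:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_one(prompt: str, index: int) -> dict:
    """Hypothetical stand-in for a single provider call returning one video."""
    # A real implementation would POST to the provider and poll to completion.
    return {"index": index, "status": "completed"}

def generate_n(prompt: str, n: int) -> list[dict]:
    """Emulate n > 1 by issuing n single-video requests concurrently."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(generate_one, prompt, i) for i in range(n)]
        # Collect results in submission order.
        return [f.result() for f in futures]

results = generate_n("A speedcuber solving a cube", 3)
print([r["index"] for r in results])  # -> [0, 1, 2]
```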
Additional Parameters
Provider-specific passthrough fields, available only when the request is routed to the listed provider.
| Parameter | Type | Description | Modes |
|---|---|---|---|
| fal | | | |
| character_ids | array | Up to two character IDs (from create-character) to use in the video. Refer to characters by name in the prompt. When set, only the OpenAI provider is used. | T2V, I2V |
| delete_video | boolean | Whether to delete the video after generation for privacy reasons. If true, the video cannot be used for remixing and will be permanently deleted. | T2V, I2V |
| detect_and_block_ip | boolean | If enabled, the prompt (and image for image-to-video) will be checked for known intellectual property references and the request will be blocked if any are detected. | T2V, I2V |
| replicate | | | |
| openai_api_key | string | Optional. Your OpenAI API key. If you use your own OpenAI API key, you will be charged directly by OpenAI. | T2V, I2V |
| seconds | integer | Duration of the video in seconds | T2V, I2V |
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | style, quality |
| renamed | Same value, mapped to the field name the provider expects | prompt |
| converted | Transformed to the provider's native format | size |
| emulated | Works even if the provider has no concept of it | n, response_format |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
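As a toy illustration of the four behaviors, a gateway-side translation step might look like the following. The provider field names (text_prompt, width, height) are hypothetical, not Lumenfall's real mappings.

```python
def normalize_for_provider(params: dict) -> dict:
    """Translate unified parameters into a hypothetical provider's schema."""
    out = {}
    for key, value in params.items():
        if key == "prompt":
            # renamed: same value, provider-specific field name
            out["text_prompt"] = value
        elif key == "size":
            # converted: "WxH" string becomes separate width/height integers
            w, h = value.split("x")
            out["width"], out["height"] = int(w), int(h)
        elif key == "n":
            # emulated: handled by the gateway itself, not forwarded
            pass
        else:
            # passthrough: sent as-is, including unrecognized parameters
            out[key] = value
    return out

print(normalize_for_provider(
    {"prompt": "a cat", "size": "1365x768", "n": 2, "style": "cinematic"}
))
```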
Sora 2 Pro Benchmarks
Sora 2 Pro is ranked #1 in Text-to-Video with an Elo of 1186 on the Lumenfall Arena, where real users pick the better video in blind comparisons. These rankings are based on 3 blind-vote competitions.
Text-to-Video Landscape
[Charts: Elo vs Cost, Elo vs Speed]
Speed data is still warming up: we only have enough recent requests for Grok Imagine Video (45.0s average).
Competition Results
Uncategorized
“Extreme cinematic close-up of a beautiful young woman experiencing deep, raw emotion. Her expression slowly shifts from quiet sorrow to intense cathartic crying — realistic skin texture with visible pores, subtle muscle twitches, glistening tears forming in her eyes and rolling down her cheeks, red-rimmed eyes with natural blinking and micro-expressions of pain and release. Soft dramatic side lighting with gentle rim light highlighting the tears, very shallow depth of field, slight emotional camera push-in during the emotional peak, photorealistic, 8K, intricate skin and eye details, filmic color grading, subtle film grain.”
“Hyper-realistic cinematic video of an elegant young woman in a flowing white silk dress dancing gracefully in heavy pouring rain at night on a neon-lit Tokyo street. Her long wet hair whips dramatically in the wind, the dress clings and flows with realistic fabric and water physics, raindrops splash and create perfect reflections of pink and blue neon signs on the wet pavement. Subtle emotional expression of freedom mixed with melancholy on her face, water droplets on skin and eyelashes catching the light. Smooth dynamic orbiting camera with slight cinematic handheld feel, dramatic volumetric lighting with god rays piercing through the rain, photorealistic, 8K, film grain, shallow depth of field, anamorphic lens flare.”
Sora 2 Pro FAQ
How much does Sora 2 Pro cost?
Sora 2 Pro starts at $0.30 per video through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use Sora 2 Pro via API?
You can use Sora 2 Pro through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "sora-2-pro". Code examples are available in Python, JavaScript, and cURL.
Which providers offer Sora 2 Pro?
Sora 2 Pro is available through OpenAI, fal.ai, and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.
Overview
Sora 2 Pro is OpenAI’s high-fidelity video generation model designed for professional creative workflows. It extends the capabilities of the original Sora architecture by supporting 1080p high-definition output, native audio synthesis, and extended shot durations of up to 20 seconds. The model supports both text-to-video generation, where motion is synthesized from natural language prompts, and image-to-video, which uses a static image as a starting frame to guide the visual composition.
Strengths
- Temporal Stability: Maintains consistent character features, background elements, and object persistence over a continuous 20-second duration.
- Multi-Modal Synthesis: Generates synchronized audio alongside video, reducing the need for external sound design for basic environmental or atmospheric effects.
- High-Resolution Output: Supports native rendering up to 1080p, offering significantly more detail in textures and lighting compared to standard-definition latent diffusion models.
- Instruction Adherence: Excels at following complex prompts that require specific camera movements (e.g., “dolly zoom” or “tracking shot”) and physics-compliant motion.
Limitations
- Prompt Latency: Due to the high parameter count and high-resolution output, generation times are significantly longer than lower-resolution or shorter-form video models.
- Synthesizing Physics: While improved, the model may still struggle with complex physical interactions, such as objects shattering or fluid dynamics that require precise causal logic.
- Cost: At a starting price of $0.30 per generation, it is more expensive than many lightweight competitors, making it less ideal for rapid, iterative storyboarding.
Technical Background
Sora 2 Pro is built on a transformer-based diffusion architecture that operates on spacetime patches. By treating video frames as sequences of patches rather than individual images, the model can learn spatial and temporal relationships simultaneously. This version incorporates a more robust latent-space representation for higher resolutions and an integrated audio head that generates waveforms conditioned on the visual latent representations.
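The spacetime-patch idea can be sketched with plain array operations. This is a schematic illustration of cutting a video tensor into flattened patches; the actual model patchifies a learned latent representation, and its real implementation is not public.

```python
import numpy as np

def to_spacetime_patches(video: np.ndarray, t: int, p: int) -> np.ndarray:
    """Split a (T, H, W, C) video into flattened (t x p x p) spacetime patches.

    Schematic only: each patch spans t frames and a p x p spatial window,
    so the model sees space and time in a single token sequence.
    """
    T, H, W, C = video.shape
    return (
        video.reshape(T // t, t, H // p, p, W // p, p, C)
             .transpose(0, 2, 4, 1, 3, 5, 6)   # group patch axes together
             .reshape(-1, t * p * p * C)       # one row per spacetime patch
    )

video = np.zeros((8, 64, 64, 3))
print(to_spacetime_patches(video, t=2, p=16).shape)  # -> (64, 1536)
```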
Best For
Sora 2 Pro is best suited for high-end marketing content, high-fidelity cinematic prototyping, and social media production where visual clarity and longer durations are required. It is an ideal choice for creators who need to animate existing concept art via its image-to-video mode or generate high-quality B-roll directly from text. You can experiment with and deploy Sora 2 Pro through Lumenfall’s unified API and interactive playground.
Try Sora 2 Pro in Playground
Generate videos with custom prompts, no API key needed.