Sora 2 Pro

AI Video Generation Model

Video $$$$ · 30¢

OpenAI's professional video generation model with higher-resolution output up to 1080p, native audio synthesis, and durations up to 20 seconds

20 seconds
Max Video Duration
Supported Modes
Text to Video Image to Video
Active

Details

Model ID
sora-2-pro
Creator
OpenAI
Family
sora
Tags
video-generation text-to-video image-to-video
// Get Started

Ready to integrate?

Access sora-2-pro via our unified API.

Create Account
Available at 3 providers

Starting from

$0.300 /second via fal.ai, OpenAI, Replicate

Popular formats

720p (1280×720)
~$0.300
1080p (1920×1080)
~$0.500
high
~$0.500

Prices shown are in USD

See all providers

Providers & Pricing (4)

Sora 2 Pro is available from 3 providers across 4 listings, with per-second pricing starting at $0.30 through fal.ai.

fal.ai
Text to Video
fal/sora-2-pro
Provider Model ID: fal-ai/sora-2/text-to-video/pro

Output

1080p: $0.500 per second
720p: $0.300 per second
View official pricing • As of Mar 13, 2026
fal.ai
Image to Video
fal/sora-2-pro-i2v
Provider Model ID: fal-ai/sora-2/image-to-video/pro

Output

1080p: $0.500 per second
720p: $0.300 per second
View official pricing • As of Mar 13, 2026
OpenAI
Text to Video Image to Video
openai/sora-2-pro
Provider Model ID: sora-2-pro

Output

1080p: $0.500 per second
720p: $0.300 per second
View official pricing • As of Jan 21, 2026
Replicate
Text to Video Image to Video
replicate/sora-2-pro
Provider Model ID: openai/sora-2-pro

Output

high: $0.500 per second
standard: $0.300 per second
View official pricing • As of Jan 21, 2026

Sora 2 Pro API: Async video generation

Access Sora 2 Pro through the Lumenfall API to generate high-resolution video up to 1080p with native audio synthesis and 20-second durations.

Base URL
https://api.lumenfall.ai/v1
Model
sora-2-pro

Code Examples

Text to Video

/v1/videos
# Step 1: Submit video generation request
VIDEO_ID=$(curl -s -X POST \
  https://api.lumenfall.ai/v1/videos \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sora-2-pro",
    "prompt": "A drone shot over a coastal town at golden hour",
    "size": "1365x768"
  }' | jq -r '.id')
echo "Video ID: $VIDEO_ID"

# Step 2: Poll for completion
while true; do
  RESULT=$(curl -s \
    "https://api.lumenfall.ai/v1/videos/$VIDEO_ID" \
    -H "Authorization: Bearer $LUMENFALL_API_KEY")
  STATUS=$(echo "$RESULT" | jq -r '.status')
  echo "Status: $STATUS"
  if [ "$STATUS" = "completed" ]; then
    echo "$RESULT" | jq -r '.output.url'
    break
  elif [ "$STATUS" = "failed" ]; then
    echo "$RESULT" | jq -r '.error.message'
    break
  fi
  sleep 5
done

Image to Video

/v1/videos

Parameter Reference

Required Supported Not available

Core Parameters

Parameter Type Description Modes
prompt string Required. Text prompt for video generation
T2V I2V
duration number Video duration in seconds
T2V I2V

Size & Layout

Parameter Type Description Modes
size string Video dimensions as WxH pixels (e.g. "1920x1080") or aspect ratio (e.g. "16:9")
Allowed values: auto, 1365x768, 768x1365
WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided). W:H format is equivalent to aspect_ratio.
T2V I2V
aspect_ratio string Aspect ratio of the output video (e.g. "16:9", "1:1")
Allowed values: auto, 9:16, 16:9
Controls shape independently of scale. Use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value.
T2V I2V
resolution string Output resolution tier (e.g. "1K", "4K")
Allowed values: auto, 1K
Controls scale independently of shape. Higher tiers produce larger videos and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value.
T2V I2V
Supported output sizes

auto: model chooses optimal dimensions
768 × 1365: "768x1365", or "9:16" + "1K"
1365 × 768: "1365x768", or "16:9" + "1K"

How these parameters work

size

Exact pixel dimensions

"1920x1080"
aspect_ratio

Shape only, default scale

"16:9"
resolution

Scale tier, preserves shape

"1K"

Priority when combined

size > aspect_ratio + resolution > aspect_ratio > resolution

size is most specific and always wins. aspect_ratio and resolution control shape and scale independently.
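That precedence can be sketched in a few lines of Python. This is only an illustration of the documented behavior (it ignores the W:H-in-size special case), not the gateway's actual code:

```python
def effective_sizing(params: dict) -> dict:
    """Report which field controls shape and which controls scale, following
    the documented precedence: size > aspect_ratio + resolution."""
    if params.get("size"):
        # size is most specific: it fixes both shape and scale
        return {"shape": "size", "scale": "size"}
    return {
        "shape": "aspect_ratio" if params.get("aspect_ratio") else "model default",
        "scale": "resolution" if params.get("resolution") else "model default",
    }

print(effective_sizing({"size": "1920x1080", "aspect_ratio": "1:1"}))
# size wins: {'shape': 'size', 'scale': 'size'}
print(effective_sizing({"aspect_ratio": "16:9", "resolution": "1K"}))
# {'shape': 'aspect_ratio', 'scale': 'resolution'}
```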

How matching works

Shape matching – we pick the closest supported ratio. Ask for 7:1 on a model that supports 4:1 and 8:1, and you get 8:1.
Scale matching – providers use different tier formats: K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25 MP, 1 MP). If the exact tier isn't available, you get the nearest one.
Dimension clamping – if a model has pixel limits, we clamp dimensions to fit while keeping the aspect ratio intact.
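The matching rules can be sketched as follows. This is an illustrative Python sketch of the described behavior, not the gateway's implementation; comparing ratios by log distance is an assumption (it treats 2:1 and 1:2 as equally far from 1:1):

```python
import math

def nearest_ratio(requested: str, supported: list[str]) -> str:
    """Pick the supported W:H ratio closest to the request in log space."""
    def value(r: str) -> float:
        w, h = r.split(":")
        return float(w) / float(h)
    target = math.log(value(requested))
    return min(supported, key=lambda r: abs(math.log(value(r)) - target))

def nearest_tier(requested_mp: float, supported_mp: list[float]) -> float:
    """Pick the supported scale tier (in megapixels) closest to the request."""
    return min(supported_mp, key=lambda t: abs(t - requested_mp))

# The example from the text: asking for 7:1 on a model with 4:1 and 8:1
print(nearest_ratio("7:1", ["4:1", "8:1"]))  # 8:1
print(nearest_tier(2.0, [0.25, 1.0]))        # 1.0
```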

Output & Format

Parameter Type Description Modes
n integer Number of videos to generate
Default: 1
The gateway generates multiple videos in parallel even if the provider only supports 1.
T2V I2V

Additional Parameters

Parameter Type Description Modes
character_ids fal array Up to two character IDs (from create-character) to use in the video. Refer to characters by name in the prompt. When set, only the OpenAI provider is used.
T2V I2V
delete_video fal boolean Whether to delete the video after generation for privacy reasons. If true, the video cannot be used for remixing and will be permanently deleted.
T2V I2V
detect_and_block_ip fal boolean If enabled, the prompt (and image for image-to-video) will be checked for known intellectual property references and the request will be blocked if any are detected.
T2V I2V
input_reference replicate string An optional image to use as the first frame of the video. The image must be the same aspect ratio as the video.
T2V I2V
openai_api_key replicate string Optional: Your OpenAI API key. If you use your own OpenAI API key, you will be charged directly by OpenAI.
T2V I2V
seconds replicate integer Duration of the video in seconds
T2V I2V

Parameter Normalization

How we handle parameters across different providers

Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:

Behavior What happens Example
passthrough Sent as-is to the provider style, quality
renamed Same value, mapped to the field name the provider expects prompt
converted Transformed to the provider's native format size
emulated Works even if the provider has no concept of it n, response_format

Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
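A normalization layer like this can be sketched as follows. The provider field names and converters here are hypothetical, and this is not the gateway's actual mapping:

```python
def normalize(params: dict, rename: dict, convert: dict, emulated: set) -> tuple[dict, dict]:
    """Split a unified request into provider-native fields and fields the
    gateway must emulate itself. Unrecognized keys pass through untouched."""
    native, emulate = {}, {}
    for key, value in params.items():
        if key in emulated:
            emulate[key] = value                                 # emulated (e.g. n via parallel calls)
        elif key in convert:
            native[rename.get(key, key)] = convert[key](value)   # converted to native format
        elif key in rename:
            native[rename[key]] = value                          # renamed, same value
        else:
            native[key] = value                                  # passthrough, incl. unknown keys
    return native, emulate

# Hypothetical provider that calls the prompt "text", expects size as a dict,
# and has no concept of `n`; `motion_strength` is an unrecognized passthrough:
native, emulate = normalize(
    {"prompt": "a fox", "size": "1365x768", "n": 2, "motion_strength": 0.7},
    rename={"prompt": "text"},
    convert={"size": lambda s: dict(zip(("width", "height"), map(int, s.split("x"))))},
    emulated={"n"},
)
print(native)   # {'text': 'a fox', 'size': {'width': 1365, 'height': 768}, 'motion_strength': 0.7}
print(emulate)  # {'n': 2}
```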

Sora 2 Pro FAQ

How much does Sora 2 Pro cost?

Sora 2 Pro starts at $0.30 per second of video through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

How do I use Sora 2 Pro via API?

You can use Sora 2 Pro through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "sora-2-pro". Code examples are available in Python, JavaScript, and cURL.

Which providers offer Sora 2 Pro?

Sora 2 Pro is available through OpenAI, fal.ai, and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.

Overview

Sora 2 Pro is OpenAI’s high-fidelity video generation model designed for professional creative workflows. It extends the capabilities of the original Sora architecture by supporting 1080p high-definition output, native audio synthesis, and extended shot durations of up to 20 seconds. The model supports both text-to-video generation, where motion is synthesized from natural language prompts, and image-to-video, which uses a static image as a starting frame to guide the visual composition.

Strengths

  • Temporal Stability: Maintains consistent character features, background elements, and object persistence over a continuous 20-second duration.
  • Multi-Modal Synthesis: Generates synchronized audio alongside video, reducing the need for external sound design for basic environmental or atmospheric effects.
  • High-Resolution Output: Supports native rendering up to 1080p, offering significantly more detail in textures and lighting compared to standard-definition latent diffusion models.
  • Instruction Adherence: Excels at following complex prompts that require specific camera movements (e.g., “dolly zoom” or “tracking shot”) and physics-compliant motion.

Limitations

  • Generation Latency: Due to the high parameter count and high-resolution output, generation times are significantly longer than those of lower-resolution or shorter-form video models.
  • Physics Simulation: While improved, the model may still struggle with complex physical interactions, such as objects shattering or fluid dynamics that require precise causal logic.
  • Cost: At a starting price of $0.30 per second of output, it is more expensive than many lightweight competitors, making it less ideal for rapid, iterative storyboarding.

Technical Background

Sora 2 Pro is built on a transformer-based diffusion architecture that operates on spacetime patches. By treating video frames as sequences of patches rather than individual images, the model can learn spatial and temporal relationships simultaneously. This version incorporates a latent-space representation robust at higher resolutions and an integrated audio head that generates waveforms conditioned on the visual latents.

Best For

Sora 2 Pro is best suited for high-end marketing content, high-fidelity cinematic prototyping, and social media production where visual clarity and longer durations are required. It is an ideal choice for creators who need to animate existing concept art via its image-to-video mode or generate high-quality B-roll directly from text. You can experiment with and deploy Sora 2 Pro through Lumenfall’s unified API and interactive playground.

Try Sora 2 Pro in Playground

Generate videos with custom prompts — no API key needed.

Open Playground