Sora 2

AI Video Generation Model


OpenAI's video generation model supporting text-to-video and image-to-video at 720p resolution with durations up to 20 seconds

20 seconds
Max Video Duration
Input / Output
Text Image Video
Active

Details

Model ID
sora-2
Creator
OpenAI
Family
sora
Tags
video-generation text-to-video image-to-video
Get Started

Ready to integrate?

Access sora-2 via our unified API.

Create Account

Providers & Pricing (3)

Sora 2 is available from 3 providers, with pricing starting at $0.10 per second through Replicate.

Replicate
replicate/sora-2
Provider Model ID: openai/sora-2
$0.100 /second
OpenAI
openai/sora-2
Provider Model ID: sora-2
$0.100 /second
fal.ai
fal/sora-2
Provider Model ID: fal-ai/sora-2/text-to-video
$0.100 /second

Sora 2 API: Async Video Generation

Integrate Sora 2 via Lumenfall’s OpenAI-compatible API to generate high-fidelity videos up to 20 seconds long with 720p resolution.

Base URL
https://api.lumenfall.ai/v1
Model
sora-2

Text to Video

Submit a prompt, poll for the result, and download the video

# Step 1: Submit the video generation request
VIDEO_ID=$(curl -s -X POST \
  https://api.lumenfall.ai/v1/videos \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sora-2",
    "prompt": "A serene mountain landscape at sunset",
    "size": "1280x720"
  }' | jq -r '.id')
echo "Video ID: $VIDEO_ID"

# Step 2: Poll until the job completes or fails
while true; do
  RESULT=$(curl -s \
    "https://api.lumenfall.ai/v1/videos/$VIDEO_ID" \
    -H "Authorization: Bearer $LUMENFALL_API_KEY")
  STATUS=$(echo "$RESULT" | jq -r '.status')
  echo "Status: $STATUS"
  if [ "$STATUS" = "completed" ]; then
    echo "$RESULT" | jq -r '.output.url'
    break
  elif [ "$STATUS" = "failed" ]; then
    echo "$RESULT" | jq -r '.error.message'
    break
  fi
  sleep 5
done
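Once the loop reports completed, the response body carries the download URL. The payload below is an illustrative sketch of that shape (field names assumed from the polling example above, not an authoritative schema); the same jq filters extract the fields locally:

```shell
# Illustrative completed-job payload; the exact schema is an assumption.
RESULT='{"id":"vid_123","status":"completed","output":{"url":"https://example.com/video.mp4"}}'

STATUS=$(echo "$RESULT" | jq -r '.status')
URL=$(echo "$RESULT" | jq -r '.output.url')

echo "Status: $STATUS"
# Download the finished video once the URL is known:
# curl -L -o sora-output.mp4 "$URL"
```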

Size, Aspect Ratio & Resolution Reference

Three optional parameters for controlling output dimensions

size

Exact pixel dimensions

"1920x1080"
aspect_ratio

Shape only, default scale

"16:9"
resolution

Scale tier, preserves shape

"1K"

Priority when combined

size > aspect_ratio + resolution > aspect_ratio > resolution

size is most specific and always wins. aspect_ratio and resolution control shape and scale independently.
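The precedence rules can be sketched as a toy resolver in shell (illustrative only; the actual resolution happens server-side, and the fallback labels here are ours):

```shell
# Toy precedence sketch: size wins outright; otherwise aspect_ratio and
# resolution each apply independently, falling back to defaults when absent.
resolve_dims() {
  size="$1"; ar="$2"; res="$3"
  if [ -n "$size" ]; then
    echo "size=$size"
  else
    echo "ratio=${ar:-default} scale=${res:-default}"
  fi
}

resolve_dims "1920x1080" "16:9" "1K"   # size overrides the others
resolve_dims "" "16:9" ""              # shape only, default scale
```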

How matching works

  • Shape matching – we pick the closest supported ratio. Ask for 7:1 on a model that supports 4:1 and 8:1, and you get 8:1.
  • Scale matching – providers use different tier formats: K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25 MP, 1 MP). If the exact tier isn't available, you get the nearest one.
  • Dimension clamping – if a model has pixel limits, we clamp dimensions to fit and keep the aspect ratio intact.
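The shape-matching step can be sketched locally. This awk one-liner picks the supported ratio numerically closest to the requested one (a sketch of the idea, not the gateway's actual code; the helper name is ours):

```shell
# Nearest-ratio selection sketch: compare each supported W:H ratio to the
# requested one and keep the candidate with the smallest absolute difference.
nearest_ratio() {
  want="$1"; shift
  echo "$@" | tr ' ' '\n' | awk -F: -v want="$want" '
    BEGIN { split(want, w, ":"); target = w[1] / w[2] }
    { d = $1 / $2 - target; if (d < 0) d = -d
      if (NR == 1 || d < best) { best = d; pick = $0 } }
    END { print pick }'
}

nearest_ratio "7:1" "4:1" "8:1"   # 8:1, matching the example above
```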

Parameters & Unified Output Reference

These work the same regardless of which provider runs your request

response_format

url or b64_json. If you ask for a URL but the provider returns base64, we store it temporarily and hand you a link valid for 60 minutes.

output_format

Pick from png jpeg gif webp avif. We convert if the provider generates a different format.

output_compression

Quality level from 1 to 100 for lossy formats (jpeg, webp, avif). Higher means better quality, larger file.

n

Request multiple images in one call. If the provider caps at 1, we run parallel requests behind the scenes and return them together.
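The fan-out pattern behind n-emulation — run the requests in parallel, wait for all of them, then collect results in order — looks like this as a local shell sketch (each background job stands in for one API call; the placeholder output is ours):

```shell
# Fan-out sketch for n=3: launch jobs in parallel, block until all finish,
# then gather the outputs in request order.
outdir=$(mktemp -d)
for i in 1 2 3; do
  ( echo "video-$i" > "$outdir/$i" ) &   # stand-in for one generation request
done
wait   # block until every background job has finished

for i in 1 2 3; do
  cat "$outdir/$i"
done
```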

Parameter support

Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:

  • passthrough – sent as-is to the provider (e.g. style, quality)
  • renamed – same value, mapped to the field name the provider expects (e.g. prompt)
  • converted – transformed to the provider's native format (e.g. size)
  • emulated – works even if the provider has no concept of it (e.g. n, response_format)

Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
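In practice that means a single request body can mix unified parameters with provider-specific ones. A sketch with jq (the motion_strength key is a made-up passthrough option for illustration, not a documented parameter):

```shell
# Build one request body mixing unified params (model, prompt, aspect_ratio)
# with a hypothetical provider-specific key that would pass through unchanged.
BODY=$(jq -n \
  --arg model "sora-2" \
  --arg prompt "A serene mountain landscape at sunset" \
  '{model: $model, prompt: $prompt, aspect_ratio: "16:9", motion_strength: 0.7}')

echo "$BODY"
```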

Sora 2 FAQ

How much does Sora 2 cost?

Sora 2 starts at $0.10 per second of generated video through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

How do I use Sora 2 via API?

You can use Sora 2 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "sora-2". Code examples are available in Python, JavaScript, and cURL.

Which providers offer Sora 2?

Sora 2 is available through Replicate, OpenAI, and fal.ai on Lumenfall. Lumenfall automatically routes requests to the best available provider.

Overview

Sora 2 is a high-fidelity video generation model developed by OpenAI that transforms text prompts or static images into cinematic video sequences. It represents an evolution in the Sora family, capable of producing content at 720p resolution with extended durations reaching up to 20 seconds. This model is distinguished by its ability to maintain temporal consistency and complex motion dynamics over longer spans than many first-generation video models.

Strengths

  • Temporal Consistency: Maintains the identity of characters, objects, and environmental details across the entire 20-second duration, minimizing the “morphing” or warping effects common in shorter-clip models.
  • Physical Simulation: Demonstrates a sophisticated understanding of physical interactions, such as fluid dynamics, lighting reflections, and gravity, leading to more realistic movement.
  • Multimodal Input Flexibility: Supports both text-to-video for purely generative tasks and image-to-video for animating existing assets or extending still photography into motion.
  • Enhanced Resolution: Outputs native 720p video, providing sufficient clarity for social media content, prototyping, and digital backgrounds without immediate need for upscaling.

Limitations

  • Causal Reasoning: While physically grounded, the model may still struggle with complex “cause and effect” sequences, such as a character taking a bite out of a cookie and the cookie not showing a bite mark.
  • Spatial Confusion: In high-action scenes involving multiple moving parts (e.g., a crowded street), the model can occasionally mix up left/right orientations or produce impossible limb movements.
  • Resolution Ceiling: At 720p, it lacks the native 4K or 1080p detail required for professional film production pipelines without significant post-processing.

Technical Background

Sora 2 utilizes a diffusion transformer (DiT) architecture, which combines the scaling properties of transformers with the generative capabilities of diffusion models. It operates on spacetime patches, treating video data as a three-dimensional collection of patches that allow the model to train on diverse aspect ratios and resolutions. This architecture enables the model to look ahead and behind in a sequence to ensure global coherence rather than generating frames in a strictly linear, autoregressive fashion.

Best For

Sora 2 is best suited for rapid prototyping in creative agencies, generating b-roll for digital marketing, and creating environmental backgrounds for web design. It excels at animating conceptual art where maintaining character consistency is a priority. This model is available for testing and integration through Lumenfall’s unified API and playground, allowing developers to compare its outputs directly against other video generation frameworks.

Try Sora 2 in Playground

Generate videos with custom prompts — no API key needed.

Open Playground