OpenAI's legacy image generation model supporting generations, edits with masks (inpainting), and variations
Details
dall-e-2
Prices shown are in USD
Providers & Pricing (2)
DALL-E 2 is available from 2 providers, with per-image pricing starting at $0.016 through OpenAI.
openai/dall-e-2
Pricing Notes (3)
- Deprecated model; support ends on May 12, 2026
- Pricing is per image, varying by size
- Supports generations, edits with masks (inpainting), and variations
replicate/dall-e-2
DALL-E 2 OpenAI-compatible API
Lumenfall provides an OpenAI-compatible API to generate images, create variations, and perform mask-based inpainting using the DALL-E 2 model. Developers can programmatically produce 1024x1024 visuals and execute image-to-image edits through a single unified endpoint.
https://api.lumenfall.ai/openai/v1
dall-e-2
Code Examples
Text to Image
/v1/images/generations

curl -X POST \
https://api.lumenfall.ai/openai/v1/images/generations \
-H "Authorization: Bearer $LUMENFALL_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "dall-e-2",
"prompt": "A red fox curled up in snow, digital art",
"size": "1024x1024"
}'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'YOUR_API_KEY',
baseURL: 'https://api.lumenfall.ai/openai/v1'
});
const response = await client.images.generate({
model: 'dall-e-2',
prompt: 'A red fox curled up in snow, digital art',
size: '1024x1024'
});
// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://api.lumenfall.ai/openai/v1"
)
response = client.images.generate(
model="dall-e-2",
prompt="A red fox curled up in snow, digital art",
size="1024x1024"
)
# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
Image Edit
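The edits endpoint accepts an input image plus a mask whose fully transparent pixels mark the region to repaint. A Python sketch using the OpenAI SDK against Lumenfall's base URL; the 1x1 placeholder PNG and the API-key guard are additions of this sketch so it runs offline, and a real edit needs a square PNG with an alpha channel:

```python
import base64
import os
from pathlib import Path

# A tiny 1x1 PNG (base64) used as a stand-in input and mask so the
# sketch runs without real assets. In a real edit, mask.png has the
# same dimensions as input.png, with transparent pixels marking the
# region the model should repaint.
PNG_1X1 = (
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)
Path("input.png").write_bytes(base64.b64decode(PNG_1X1))
Path("mask.png").write_bytes(base64.b64decode(PNG_1X1))

# The network call is skipped when no key is set, so the sketch
# stays runnable offline.
if os.environ.get("LUMENFALL_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["LUMENFALL_API_KEY"],
        base_url="https://api.lumenfall.ai/openai/v1",
    )
    result = client.images.edit(
        model="dall-e-2",
        image=open("input.png", "rb"),
        mask=open("mask.png", "rb"),
        prompt="Fill the masked region with a clear blue sky",
        size="1024x1024",
    )
    print(result.data[0].url)
```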
/v1/images/edits

Parameter Reference
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| prompt | string | Required. Text prompt for image generation | T2I, Edit |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| size | string | Image dimensions as WxH pixels (e.g. "1024x1024") or an aspect ratio (e.g. "16:9"). WxH sets both shape and scale (aspect_ratio and resolution are ignored when size is provided); the W:H form is equivalent to aspect_ratio. | T2I, Edit |
| aspect_ratio | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I, Edit |
| resolution | string | Output resolution tier (e.g. "1K", "4K"). Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I, Edit |
| Parameter | Controls | Example |
|---|---|---|
| size | Exact pixel dimensions | "1920x1080" |
| aspect_ratio | Shape only, default scale | "16:9" |
| resolution | Scale tier, preserves shape | "1K" |

Priority when combined: size is the most specific and always wins; aspect_ratio and resolution control shape and scale independently.

How matching works: a requested aspect ratio maps to the nearest ratio the model supports; requesting 7:1 on a model that offers 4:1 and 8:1 yields 8:1. Resolution tiers may be named (0.5K, 1K, 2K, 4K) or expressed in megapixels (0.25, 1); if the exact tier isn't available, you get the nearest one.
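The nearest-match rule can be sketched in a few lines; this is illustrative, not the gateway's actual code, and the supported lists are made-up examples:

```python
def ratio(s: str) -> float:
    """Parse a "W:H" aspect-ratio string into a float."""
    w, h = s.split(":")
    return float(w) / float(h)

def nearest_ratio(requested: str, supported: list[str]) -> str:
    """Map a requested aspect ratio to the closest supported one."""
    return min(supported, key=lambda s: abs(ratio(s) - ratio(requested)))

def nearest_tier(requested_mp: float, tiers: list[float]) -> float:
    """Map a requested megapixel count to the closest available tier."""
    return min(tiers, key=lambda t: abs(t - requested_mp))

print(nearest_ratio("7:1", ["4:1", "8:1"]))  # → 8:1
print(nearest_tier(0.3, [0.25, 1.0]))        # → 0.25
```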
Media Inputs
| Parameter | Type | Description | Modes |
|---|---|---|---|
| image | file | Required. Input image(s) to edit. Supports PNG, JPEG, WebP. | Edit |
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| response_format | string | How to return the image: "url" or "b64_json". Default: "url" | T2I, Edit |
| output_format | string | Output image format: png, jpeg, gif, webp, avif. The gateway converts to the requested format if the provider doesn't support it natively. | T2I, Edit |
| output_compression | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit |
| n | integer | Number of images to generate. Default: 1. The gateway generates multiple images in parallel even if the provider only supports 1. | T2I, Edit |
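With response_format set to b64_json, the image arrives inline as base64 rather than a URL. A minimal decode step; the hard-coded string is a tiny 1x1 placeholder PNG standing in for data[0].b64_json so the snippet runs without a network call:

```python
import base64
from pathlib import Path

# Stand-in for data[0].b64_json from a response_format="b64_json"
# request: a 1x1 PNG encoded as base64.
b64_png = (
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)

raw = base64.b64decode(b64_png)
Path("output.png").write_bytes(raw)
print(raw[:4])  # → b'\x89PNG'
```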
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | style, quality |
| renamed | Same value, mapped to the field name the provider expects | prompt |
| converted | Transformed to the provider's native format | size |
| emulated | Works even if the provider has no concept of it | n, response_format |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
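The four behaviors amount to a per-parameter dispatch. A sketch under stated assumptions: the rule table below is hypothetical (including the idea of a provider expecting a "text" field), not the gateway's real configuration:

```python
# Hypothetical rule table illustrating the four behaviors; not the
# gateway's actual configuration.
RULES = {
    "quality": ("passthrough", None),
    "prompt": ("renamed", "text"),  # imagine a provider expecting "text"
    "size": ("converted", None),
    "n": ("emulated", None),
}

def convert_size(size: str) -> dict:
    """Example conversion: "1024x1024" -> {"width": 1024, "height": 1024}."""
    w, h = size.split("x")
    return {"width": int(w), "height": int(h)}

def normalize(params: dict) -> tuple[dict, dict]:
    """Split request params into provider-bound and gateway-emulated sets."""
    provider, emulated = {}, {}
    for key, value in params.items():
        behavior, target = RULES.get(key, ("passthrough", None))
        if behavior == "emulated":
            emulated[key] = value  # gateway handles it (e.g. parallel calls for n)
        elif behavior == "renamed":
            provider[target] = value
        elif behavior == "converted":
            provider[key] = convert_size(value)
        else:
            provider[key] = value  # passthrough, including unrecognized params
    return provider, emulated

provider, emulated = normalize(
    {"prompt": "a cat", "size": "1024x1024", "n": 4, "custom_flag": True}
)
print(emulated)  # → {'n': 4}
```

Note that the unknown custom_flag falls through to the provider untouched, matching the passthrough rule for unrecognized parameters.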
DALL-E 2 FAQ
How much does DALL-E 2 cost?
DALL-E 2 starts at $0.016 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use DALL-E 2 via API?
You can use DALL-E 2 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "dall-e-2". Code examples are available in Python, JavaScript, and cURL.
Which providers offer DALL-E 2?
DALL-E 2 is available through OpenAI and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What is the maximum resolution for DALL-E 2?
DALL-E 2 supports images up to 1024x1024 resolution.
Overview
DALL-E 2 is a legacy text-to-image diffusion model developed by OpenAI that generates images from natural language descriptions. While succeeded by newer iterations, it remains a stable benchmark for image synthesis, offering a distinct feature set that includes image-to-image variations and mask-based inpainting. It is particularly known for its ability to combine disparate concepts and objects in a coherent, albeit often stylized, visual manner.
Strengths
- Image Inpainting: The model excels at modifying existing images through masking, allowing users to replace specific elements or extend backgrounds while maintaining the original image’s context and lighting.
- Concept Blending: It demonstrates a strong capability for semantic synthesis, such as placing a 3D-rendered character in a real-world setting or applying specific artistic styles (e.g., “in the style of Van Gogh”) to original subjects.
- Compositional Understanding: DALL-E 2 handles spatial relationships and object attributes with reasonable accuracy, ensuring that adjectives are generally applied to the correct nouns within a prompt.
- Variation Generation: It can ingest an existing image and output multiple visual permutations that retain the original’s core theme and color palette without being exact copies.
Limitations
- Low Resolution: Native output is limited to 1024x1024 pixels, which often lacks the fine-grained texture and sharp detail found in more modern models like DALL-E 3 or Midjourney.
- Text Rendering: The model struggles significantly with rendering legible text; characters often appear as nonsensical glyphs or blurred artifacts.
- Photorealism Constraints: Compared to newer latent diffusion models, DALL-E 2 often produces images with a “plastic” or overly smooth aesthetic, struggling with complex human anatomy like hands or eyes.
Technical Background
DALL-E 2 is built on a CLIP-guided diffusion architecture, specifically a process OpenAI refers to as “unCLIP.” It uses the CLIP (Contrastive Language-Image Pre-training) latent space to translate text embeddings into image embeddings, which a decoder then converts into a visual representation. This approach prioritizes the relationship between visual concepts and their linguistic descriptions over raw pixel-mapping.
Best For
DALL-E 2 is best suited for rapid prototyping, creating stylized illustrations, and performing basic image editing tasks like inpainting or outpainting where high-fidelity photorealism isn’t the primary requirement. It is a cost-effective choice for developers who need consistent, programmatic image variations.
This model is available for testing and integration through Lumenfall’s unified API and interactive playground, allowing you to compare its outputs directly against more recent generative models.
Try DALL-E 2 in Playground
Generate images with custom prompts — no API key needed.