OpenAI's legacy image generation model supporting generations, edits with masks (inpainting), and variations
Details
dall-e-2
Providers & Pricing (2)
DALL-E 2 is available from 2 providers, with per-image pricing starting at $0.016 through Replicate.
replicate/dall-e-2
openai/dall-e-2
Output
Pricing Notes (3)
- Deprecated model - will stop being supported on May 12, 2026
- Pricing is per image, varying by size
- Supports generations, edits with masks (inpainting), and variations
DALL-E 2 API (OpenAI-compatible)
Lumenfall provides an OpenAI-compatible API for image generation, mask-based inpainting, and image variations with DALL-E 2.
https://api.lumenfall.ai/openai/v1
dall-e-2
Text to Image
Create images from text descriptions
curl -X POST \
https://api.lumenfall.ai/openai/v1/images/generations \
-H "Authorization: Bearer $LUMENFALL_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "dall-e-2",
"prompt": "A serene mountain landscape at sunset",
"size": "1024x1024"
}'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://..." }] }
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'YOUR_API_KEY',
baseURL: 'https://api.lumenfall.ai/openai/v1'
});
const response = await client.images.generate({
model: 'dall-e-2',
prompt: 'A serene mountain landscape at sunset',
size: '1024x1024'
});
// { created: 1234567890, data: [{ url: "https://..." }] }
console.log(response.data[0].url);
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://api.lumenfall.ai/openai/v1"
)
response = client.images.generate(
model="dall-e-2",
prompt="A serene mountain landscape at sunset",
size="1024x1024"
)
# { created: 1234567890, data: [{ url: "https://..." }] }
print(response.data[0].url)
Image Editing
Transform existing images with text instructions
curl -X POST \
https://api.lumenfall.ai/openai/v1/images/edits \
-H "Authorization: Bearer $LUMENFALL_API_KEY" \
-F "model=dall-e-2" \
-F "[email protected]" \
-F "prompt=Add a starry night sky to this image" \
-F "size=1024x1024"
# Response:
# { "created": 1234567890, "data": [{ "url": "https://..." }] }
import OpenAI from 'openai';
import fs from 'fs';
const client = new OpenAI({
apiKey: 'YOUR_API_KEY',
baseURL: 'https://api.lumenfall.ai/openai/v1'
});
const response = await client.images.edit({
model: 'dall-e-2',
image: fs.createReadStream('source.png'),
prompt: 'Add a starry night sky to this image',
size: '1024x1024'
});
// { created: 1234567890, data: [{ url: "https://..." }] }
console.log(response.data[0].url);
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://api.lumenfall.ai/openai/v1"
)
response = client.images.edit(
model="dall-e-2",
image=open("source.png", "rb"),
prompt="Add a starry night sky to this image",
size="1024x1024"
)
# { created: 1234567890, data: [{ url: "https://..." }] }
print(response.data[0].url)
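Variations are the third operation DALL-E 2 supports but have no example above. A minimal Python sketch using the same client setup (the `n` parameter and file name are illustrative choices, not values from this page):

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

# Request two variations of an existing image; the output keeps the
# source's theme and palette without being an exact copy.
response = client.images.create_variation(
    model="dall-e-2",
    image=open("source.png", "rb"),
    n=2,
    size="1024x1024"
)

for item in response.data:
    print(item.url)
```

Note that variations take no text prompt; the input image alone drives the output.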
DALL-E 2 FAQ
How much does DALL-E 2 cost?
DALL-E 2 starts at $0.016 per image through Lumenfall. Pricing varies by provider and image size. Lumenfall does not add any markup to provider pricing.
How do I use DALL-E 2 through Lumenfall?
You can use DALL-E 2 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with the model ID "dall-e-2". Code examples are available in Python, JavaScript, and cURL.
Which providers offer DALL-E 2?
DALL-E 2 is available through Replicate and OpenAI on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What resolutions does DALL-E 2 support?
DALL-E 2 supports images up to 1024x1024 resolution.
Overview
DALL-E 2 is a legacy text-to-image diffusion model developed by OpenAI that generates images from natural language descriptions. While succeeded by newer iterations, it remains a stable benchmark for image synthesis, offering a distinct feature set that includes image-to-image variations and mask-based inpainting. It is particularly known for its ability to combine disparate concepts and objects in a coherent, albeit often stylized, visual manner.
Strengths
- Image Inpainting: The model excels at modifying existing images through masking, allowing users to replace specific elements or extend backgrounds while maintaining the original image’s context and lighting.
- Concept Blending: It demonstrates a strong capability for semantic synthesis, such as placing a 3D-rendered character in a real-world setting or applying specific artistic styles (e.g., “in the style of Van Gogh”) to original subjects.
- Compositional Understanding: DALL-E 2 handles spatial relationships and object attributes with reasonable accuracy, ensuring that adjectives are generally applied to the correct nouns within a prompt.
- Variation Generation: It can ingest an existing image and output multiple visual permutations that retain the original’s core theme and color palette without being exact copies.
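The mask-based inpainting described above uses the same edits endpoint shown earlier, with an additional `mask` parameter: a PNG the same size as the source image whose fully transparent pixels mark the region to repaint. A minimal Python sketch (file names are placeholders):

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

# mask.png must match source.png's dimensions; transparent pixels
# mark where DALL-E 2 may repaint, opaque pixels are preserved.
response = client.images.edit(
    model="dall-e-2",
    image=open("source.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="Replace the sky with a starry night",
    size="1024x1024"
)
print(response.data[0].url)
```

Without a mask, the prompt is applied to the whole image; with one, edits stay confined to the transparent region while the surrounding context and lighting are kept.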
Limitations
- Low Resolution: Native output is limited to 1024x1024 pixels, which often lacks the fine-grained texture and sharp detail found in more modern models like DALL-E 3 or Midjourney.
- Text Rendering: The model struggles significantly with rendering legible text; characters often appear as nonsensical glyphs or blurred artifacts.
- Photorealism Constraints: Compared to newer latent diffusion models, DALL-E 2 often produces images with a “plastic” or overly smooth aesthetic, struggling with complex human anatomy like hands or eyes.
Technical Background
DALL-E 2 is built on a CLIP-guided diffusion architecture, specifically a process OpenAI refers to as “unCLIP.” It uses the CLIP (Contrastive Language-Image Pre-training) latent space to translate text embeddings into image embeddings, which a decoder then converts into a visual representation. This approach prioritizes the relationship between visual concepts and their linguistic descriptions over raw pixel-mapping.
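The unCLIP pipeline above can be sketched in plain Python with stand-in functions (every function here is a hypothetical stub for illustration, not OpenAI code, and the embedding dimension is shrunk from CLIP's real size): a text encoder produces a CLIP text embedding, a prior maps it to a CLIP image embedding, and a decoder turns that embedding into pixels.

```python
import random

EMBED_DIM = 4  # real CLIP embeddings have hundreds of dimensions

def clip_text_encoder(prompt: str) -> list[float]:
    """Stand-in for CLIP's text encoder: prompt -> text embedding."""
    random.seed(prompt)  # deterministic per prompt, for the sketch only
    return [random.random() for _ in range(EMBED_DIM)]

def prior(text_embedding: list[float]) -> list[float]:
    """Stand-in for the diffusion prior: text embedding -> image embedding."""
    return [x * 0.5 for x in text_embedding]

def decoder(image_embedding: list[float], size: int = 4) -> list[list[float]]:
    """Stand-in for the diffusion decoder: image embedding -> pixel grid."""
    mean = sum(image_embedding) / len(image_embedding)
    return [[mean for _ in range(size)] for _ in range(size)]

# unCLIP: text -> CLIP text embedding -> CLIP image embedding -> image
prompt = "A serene mountain landscape at sunset"
image = decoder(prior(clip_text_encoder(prompt)))
print(len(image), len(image[0]))  # prints "4 4"
```

The point of the two-stage design is that the decoder conditions on an image embedding rather than raw text, which is why the same embedding can also seed the variations endpoint.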
Best For
DALL-E 2 is best suited for rapid prototyping, creating stylized illustrations, and performing basic image editing tasks like inpainting or outpainting where high-fidelity photorealism isn’t the primary requirement. It is a cost-effective choice for developers who need consistent, programmatic image variations.
This model is available for testing and integration through Lumenfall’s unified API and interactive playground, allowing you to compare its outputs directly against more recent generative models.
Try DALL-E 2 in Playground
Generate images with custom prompts — no API key needed.