# DALL-E 2

> OpenAI's legacy image generation model supporting generations, edits with masks (inpainting), and variations

## Quick Reference

- Model ID: dall-e-2
- Creator: OpenAI
- Status: deprecated
- Family: dall-e
- Base URL: https://api.lumenfall.ai/openai/v1

## Specifications

- Max Resolution: 1024x1024
- Max Output Images: 10
- Input Modalities: text, image
- Output Modalities: image
- Supported Modes: Text to Image, Image Edit

## API Parameters

The compiled parameter schema for this model is available via the API: `GET /v1/models/dall-e-2?schema=true`.

### Core Parameters

- `prompt` (string) — REQUIRED: Text prompt for image generation. Modes: Text to Image

### Size & Layout

- `size` (string): Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). Modes: Text to Image, Image Edit
- `aspect_ratio` (string): Aspect ratio of the output image (e.g. "16:9", "1:1"). Modes: Text to Image, Image Edit
- `resolution` (string): Output resolution tier (e.g. "1K", "4K"). Modes: Text to Image, Image Edit

### Media Inputs

- `image` (file) — REQUIRED: Input image(s) to edit. Modes: Image Edit

### Output & Format

- `response_format` (string): How to return the image. Default: url. Values: url, b64_json. Modes: Text to Image, Image Edit
- `output_format` (string): Output image format. Values: png, jpeg, gif, webp, avif. Modes: Text to Image, Image Edit
- `output_compression` (integer): Compression level for lossy formats (JPEG, WebP, AVIF). Modes: Text to Image, Image Edit
- `n` (integer): Number of images to generate. Default: 1. Modes: Text to Image, Image Edit

## Model Identifiers

- Primary Slug: dall-e-2

## Dates

- Released: April 2022
- Sunset Date: May 12, 2026

## Tags

image-generation, text-to-image, image-editing, inpainting

## Available Providers

### OpenAI

- Config Key: openai/dall-e-2
- Provider Model ID: dall-e-2
- Pricing: $0.016/image, $0.018/image, $0.020/image
- Note: Deprecated model - will stop being supported on May 12, 2026
- Note: Pricing is per image, varying by size
- Note: Supports generations, edits with masks (inpainting), and variations
- Source: https://platform.openai.com/docs/pricing

### Replicate

- Config Key: replicate/dall-e-2
- Provider Model ID: openai/dall-e-2
- Pricing: $0.020/image
- Source: https://replicate.com/openai/dall-e-2

## Image Gallery

4 images available for this model. Browse all at https://lumenfall.ai/models/openai/dall-e-2/gallery

### Curated Examples

- [A wide, cinematic shot of a sophisticated artisan chocolate boutique at dusk, where the name "DAL...](https://assets.lumenfall.ai/yMnc7Gev_1nMH6Km9AlaCuVb5Xr17eAWwdHV3lHdGGQ/rs:fit:1500:1500/plain/gs://lumenfall-prod-assets/239d16dko8aewcszjrxqpliwluqi@jpeg)
- [A hyper-realistic, wide-angle interior shot of a sun-drenched minimalist ceramic studio. In the c...](https://assets.lumenfall.ai/sKGEpjKO6e-jcDyYJJ4jnpl6GFrOY-8Bcye4uvjX9xU/rs:fit:1500:1500/plain/gs://lumenfall-prod-assets/9mxfynzxpslk4drvfj2kdc6mgwen@jpeg)
- [A hyper-realistic, wide-angle cinematic shot of a master carpenter's sun-drenched workshop. In th...](https://assets.lumenfall.ai/JuLzK_wlNCbfHdVWzL7vpWsUFUtH8QNOQe0_J2POz50/rs:fit:1500:1500/plain/gs://lumenfall-prod-assets/mfv2o8v5b8m4u56xcps2dd2zk2ri@jpeg)
- [A sunlit, rustic bakery storefront with a large glass window displaying the hand-painted gold lea...](https://assets.lumenfall.ai/lGhlVYsUBHz416CxbyZIpLpUfJN9el7W2TPxL5VH7bI/rs:fit:1500:1500/plain/gs://lumenfall-prod-assets/8r71j9v55i4lx6x5qd4664cylejr@jpeg)

## Example Prompt

The following prompt was used to generate an example image in our playground:

> A sunlit, rustic bakery storefront with a large glass window displaying the hand-painted gold leaf typography "THE GOLDEN CRUST" on the pane. A cozy capybara sits quietly on the sidewalk by the door, watching people pass by. 8k, warm tones.

## Code Examples

### Text to Image (`/v1/images/generations`)

#### cURL

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-2",
    "prompt": "A cozy capybara outside a sunlit rustic bakery, warm tones",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```

#### JavaScript

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'dall-e-2',
  prompt: 'A cozy capybara outside a sunlit rustic bakery, warm tones',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```

#### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="dall-e-2",
    prompt="A cozy capybara outside a sunlit rustic bakery, warm tones",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```

### Image Edit (`/v1/images/edits`)

#### cURL

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/edits \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -F "model=dall-e-2" \
  -F "image=@source.png" \
  -F "prompt=Add a starry night sky to this image" \
  -F "size=1024x1024"

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```

#### JavaScript

```javascript
import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.edit({
  model: 'dall-e-2',
  image: fs.createReadStream('source.png'),
  prompt: 'Add a starry night sky to this image',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```

#### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.edit(
    model="dall-e-2",
    image=open("source.png", "rb"),
    prompt="Add a starry night sky to this image",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```

## About

### Overview

DALL-E 2 is a legacy text-to-image diffusion model developed by OpenAI that generates images from natural language descriptions. While succeeded by newer iterations, it remains a stable benchmark for image synthesis, offering a distinct feature set that includes image-to-image variations and mask-based inpainting. It is particularly known for its ability to combine disparate concepts and objects in a coherent, albeit often stylized, visual manner.

### Strengths

* **Image Inpainting:** The model excels at modifying existing images through masking, allowing users to replace specific elements or extend backgrounds while maintaining the original image's context and lighting.
* **Concept Blending:** It demonstrates a strong capability for semantic synthesis, such as placing a 3D-rendered character in a real-world setting or applying specific artistic styles (e.g., "in the style of Van Gogh") to original subjects.
* **Compositional Understanding:** DALL-E 2 handles spatial relationships and object attributes with reasonable accuracy, ensuring that adjectives are generally applied to the correct nouns within a prompt.
* **Variation Generation:** It can ingest an existing image and output multiple visual permutations that retain the original’s core theme and color palette without being exact copies.

### Limitations

* **Low Resolution:** Native output is limited to 1024x1024 pixels, which often lacks the fine-grained texture and sharp detail found in more modern models like DALL-E 3 or Midjourney.
* **Text Rendering:** The model struggles significantly with rendering legible text; characters often appear as nonsensical glyphs or blurred artifacts.
* **Photorealism Constraints:** Compared to newer latent diffusion models, DALL-E 2 often produces images with a "plastic" or overly smooth aesthetic, struggling with complex human anatomy like hands or eyes.

### Technical Background

DALL-E 2 is built on a CLIP-guided diffusion architecture, specifically a process OpenAI refers to as "unCLIP." It uses the CLIP (Contrastive Language-Image Pre-training) latent space to translate text embeddings into image embeddings, which a decoder then converts into a visual representation. This approach prioritizes the relationship between visual concepts and their linguistic descriptions over raw pixel mapping.

### Best For

DALL-E 2 is best suited for rapid prototyping, creating stylized illustrations, and performing basic image editing tasks like inpainting or outpainting where high-fidelity photorealism isn't the primary requirement. It is a cost-effective choice for developers who need consistent, programmatic image variations.
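The variations workflow mentioned throughout this page has no code sample of its own. Below is a minimal Python sketch built around the OpenAI SDK's `images.create_variation` method; the `variation_args` helper is purely illustrative, and the assumption that Lumenfall proxies OpenAI's `/v1/images/variations` endpoint is not confirmed by this page. The live call is left commented out because it needs a real key and a local source image.

```python
# Sizes DALL-E 2 accepts; the 1-10 image cap matches the Specifications above.
VALID_SIZES = {"256x256", "512x512", "1024x1024"}

def variation_args(n: int = 3, size: str = "1024x1024") -> dict:
    """Validate and build keyword arguments for a DALL-E 2 variation call."""
    if size not in VALID_SIZES:
        raise ValueError(f"unsupported size for dall-e-2: {size}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    return {"model": "dall-e-2", "n": n, "size": size, "response_format": "url"}

# Live usage (requires `pip install openai`, a valid key, and a local source.png):
# from openai import OpenAI
# client = OpenAI(api_key="YOUR_API_KEY",
#                 base_url="https://api.lumenfall.ai/openai/v1")
# with open("source.png", "rb") as f:
#     resp = client.images.create_variation(image=f, **variation_args(n=3))
# for item in resp.data:
#     print(item.url)  # distinct variations that keep the source's theme
```

Validating `size` and `n` locally keeps malformed requests from spending per-image credits on a deprecated model.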
This model is available for testing and integration through Lumenfall’s unified API and interactive playground, allowing you to compare its outputs directly against more recent generative models.

## Frequently Asked Questions

### How much does DALL-E 2 cost?

DALL-E 2 starts at $0.016 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

### How do I use DALL-E 2 via API?

You can use DALL-E 2 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "dall-e-2". Code examples are available in Python, JavaScript, and cURL.

### Which providers offer DALL-E 2?

DALL-E 2 is available through OpenAI and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.

### What is the maximum resolution for DALL-E 2?

DALL-E 2 supports images up to 1024x1024 resolution.

## Links

- Model Page: https://lumenfall.ai/models/openai/dall-e-2
- About: https://lumenfall.ai/models/openai/dall-e-2/about
- Providers, Pricing & Performance: https://lumenfall.ai/models/openai/dall-e-2/providers
- API Reference: https://lumenfall.ai/models/openai/dall-e-2/api
- Benchmarks: https://lumenfall.ai/models/openai/dall-e-2/benchmarks
- Use Cases: https://lumenfall.ai/models/openai/dall-e-2/use-cases
- Gallery: https://lumenfall.ai/models/openai/dall-e-2/gallery
- Playground: https://lumenfall.ai/playground?model=dall-e-2
- API Documentation: https://docs.lumenfall.ai
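One loose end from the API examples above: they all use the default `response_format` of `url`. With `response_format: "b64_json"` the image bytes come back inline and must be decoded before use. The stdlib-only sketch below shows that decoding step; the synthetic payload stands in for a real `resp.data[0].b64_json` value so the snippet runs offline, and the `save_b64_image` helper name is ours, not part of any SDK.

```python
import base64
from pathlib import Path

def save_b64_image(b64_payload: str, path: str) -> int:
    """Decode a base64 image payload (the `b64_json` field) and write it to
    disk. Returns the number of bytes written."""
    data = base64.b64decode(b64_payload)
    return Path(path).write_bytes(data)

# A real call would pass resp.data[0].b64_json; here we synthesize a stand-in
# payload (PNG signature plus filler bytes) so the sketch is runnable offline:
fake_payload = base64.b64encode(b"\x89PNG\r\n\x1a\n" + b"\x00" * 64).decode()
written = save_b64_image(fake_payload, "out.png")
print(written)  # 72: 8-byte PNG signature + 64 filler bytes
```

Inline `b64_json` responses avoid a second download round-trip, which is convenient when generating up to the 10-image maximum in one request.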