# DALL-E 3

> OpenAI's previous-generation image model, with higher quality than DALL-E 2 and support for larger resolutions.

## Quick Reference

- Model ID: dall-e-3
- Creator: OpenAI
- Status: deprecated
- Family: dall-e
- Base URL: https://api.lumenfall.ai/openai/v1

## Specifications

- Max Resolution: 1792x1792
- Max Output Images: 1
- Input Modalities: text
- Output Modalities: image

## Model Identifiers

- Primary Slug: dall-e-3

## Dates

- Sunset Date: May 12, 2026

## Tags

image-generation, text-to-image

## Available Providers

### Replicate

- Config Key: replicate/dall-e-3
- Provider Model ID: openai/dall-e-3
- Pricing:
  - Source: official
  - Currency: USD
  - Components: $0.12 per output image
  - Source URL: https://replicate.com/openai/dall-e-3
  - Effective: 2026-01-27

### OpenAI

- Config Key: openai/dall-e-3
- Provider Model ID: dall-e-3
- Notes:
  - Deprecated model: will stop being supported on May 12, 2026
  - Pricing is per image, varying by quality (standard/hd) and size
  - Text-to-image generation only (no image editing)
- Pricing (source: official, currency: USD, per output image):

| Size      | Quality  | Price per image |
|-----------|----------|-----------------|
| 1024x1024 | standard | $0.04           |
| 1024x1792 | standard | $0.08           |
| 1792x1024 | standard | $0.08           |
| 1024x1024 | hd       | $0.08           |
| 1024x1792 | hd       | $0.12           |
| 1792x1024 | hd       | $0.12           |

- Source URL: https://platform.openai.com/docs/pricing

## Image Gallery

4 images
available for this model.

- Curated examples: 4
  - "A cinematic, wide-angle shot of a high-end, contemporary art gallery at dusk. The focal point is a sleek, minimalist ..."
  - "A medium shot of an elderly artisan in a sunlit Mediterranean workshop, carefully hand-painting intricate blue azulej..."
  - "A hyper-realistic, macro close-up of a rustic wooden sign hanging outside an old-fashioned apothecary. The weathered ..."
  - "A cozy street-side flower shop with a chalkboard sign that says "BLOOM & GROW" in elegant cursive. Vibrant bouquets l..."

## Example Prompt

The following prompt was used to generate an example image in our playground:

> A cozy street-side flower shop with a chalkboard sign that says "BLOOM & GROW" in elegant cursive. Vibrant bouquets line the front. In the background shadows, a capybara wearing a tiny bow tie peacefully naps near a bucket of daisies.

## Code Examples

### Text to Image (Generation)

#### cURL

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A serene mountain landscape at sunset",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```

#### JavaScript

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'dall-e-3',
  prompt: 'A serene mountain landscape at sunset',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```

#### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="dall-e-3",
    prompt="A serene mountain landscape at sunset",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```

## About

## Overview

DALL-E 3 is a text-to-image generation model developed by OpenAI that focuses on precise prompt adherence and complex scene composition. It is designed to interpret nuanced instructions without the need for complex "prompt engineering," natively supporting various aspect ratios and higher resolutions than its predecessor, DALL-E 2. A defining characteristic of this model is its deep integration with large language models to refine user queries into detailed visual descriptions.

## Strengths

* **Prompt Adherence:** The model excels at following complex, multi-part instructions, accurately placing specific objects in relation to one another as described in the text.
* **Text Rendering:** Unlike many earlier diffusion models, DALL-E 3 can reliably generate legible text, signs, and labels within images.
* **Contextual Understanding:** It handles nuanced requests involving specific artistic styles, historical periods, or lighting conditions with higher fidelity than previous iterations.
* **Compositional Logic:** It demonstrates a strong grasp of spatial reasoning, such as "a small blue cube sitting on top of a large red sphere," reducing the frequency of floating or merged objects.

## Limitations

* **Photorealism Constraints:** While capable of high-quality output, it may sometimes produce images with a "polished" or "rendered" aesthetic that lacks the organic imperfection found in models specifically tuned for hyper-realism.
* **Human Anatomy:** Like many generative models, it can occasionally struggle with the fine details of human hands, fingers, and complex joint positions in crowded or high-action scenes.
* **Generation Speed:** Due to the complexity of the model and its alignment process, generation times are generally slower than those of smaller or more optimized latent diffusion models.

## Technical Background

DALL-E 3 is built upon a diffusion-based architecture that utilizes a highly descriptive captioning system. During training, OpenAI used a visual-language model to re-generate captions for the training dataset, resulting in a model that associates visual patterns with much more specific and detailed linguistic descriptions than models trained on raw alt-text. This bridge between the language and vision domains allows the model to process long, descriptive paragraphs of input text effectively.

## Best For

DALL-E 3 is ideal for creative brainstorming, generating marketing assets with embedded text, and producing illustrative content where the specific placement of elements is critical. It is well suited for users who prefer natural language descriptions over technical parameter tuning. You can experiment with DALL-E 3 alongside other leading vision models through Lumenfall's unified API and interactive playground to compare output styles and consistency.

## Frequently Asked Questions

### How much does DALL-E 3 cost?

DALL-E 3 starts at $0.04 per image through Lumenfall. Pricing varies by provider, image size, and quality. Lumenfall does not add any markup to provider pricing.

### How do I use DALL-E 3 via API?

You can use DALL-E 3 through Lumenfall's OpenAI-compatible API: send requests to the unified endpoint with the model ID "dall-e-3". Code examples are available in Python, JavaScript, and cURL.

### Which providers offer DALL-E 3?

DALL-E 3 is available through Replicate and OpenAI on Lumenfall. Lumenfall automatically routes requests to the best available provider.
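Because pricing varies by size and quality and each request returns a single image, it can be convenient to validate parameters and estimate spend client-side before calling the API. A minimal sketch in Python, using the per-image rates from the OpenAI provider pricing above (the helper names are illustrative, not part of any Lumenfall SDK):

```python
# Per-image prices (USD) for the OpenAI provider, keyed by (size, quality).
# Values mirror the pricing table in the "Available Providers" section.
DALL_E_3_PRICES = {
    ("1024x1024", "standard"): 0.04,
    ("1024x1792", "standard"): 0.08,
    ("1792x1024", "standard"): 0.08,
    ("1024x1024", "hd"): 0.08,
    ("1024x1792", "hd"): 0.12,
    ("1792x1024", "hd"): 0.12,
}

def build_generation_payload(prompt: str, size: str = "1024x1024",
                             quality: str = "standard") -> dict:
    """Build a request body for POST /images/generations, rejecting
    size/quality combinations the pricing table does not list."""
    if (size, quality) not in DALL_E_3_PRICES:
        raise ValueError(f"unsupported size/quality: {size}/{quality}")
    # DALL-E 3 generates one image per request (Max Output Images: 1).
    return {"model": "dall-e-3", "prompt": prompt,
            "size": size, "quality": quality, "n": 1}

def estimate_cost(size: str, quality: str, n_requests: int = 1) -> float:
    """Estimate total USD cost for n_requests single-image generations."""
    return round(DALL_E_3_PRICES[(size, quality)] * n_requests, 2)

payload = build_generation_payload("A serene mountain landscape at sunset")
print(estimate_cost("1024x1024", "standard", 25))  # -> 1.0
```

The payload dict can be sent as the JSON body of the cURL example above; keeping the price table in one place makes it easy to update if provider rates change.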
### What is the maximum resolution for DALL-E 3?

DALL-E 3 supports images up to 1792x1792 resolution.

## Links

- Model Page: https://lumenfall.ai/models/openai/dall-e-3
- About: https://lumenfall.ai/models/openai/dall-e-3/about
- Providers, Pricing & Performance: https://lumenfall.ai/models/openai/dall-e-3/providers
- API Reference: https://lumenfall.ai/models/openai/dall-e-3/api
- Benchmarks: https://lumenfall.ai/models/openai/dall-e-3/benchmarks
- Use Cases: https://lumenfall.ai/models/openai/dall-e-3/use-cases
- Gallery: https://lumenfall.ai/models/openai/dall-e-3/gallery
- Playground: https://lumenfall.ai/playground?model=dall-e-3
- API Documentation: https://docs.lumenfall.ai