“Create a clean, modern vector infographic poster about the Apollo 11 mission. NASA-inspired palette (navy, white, muted red, light gray). Flat-vector style, crisp lines, consistent iconography, subtle gradients only. Steps (stop at landing): 1. Launch (Saturn V icon) 2. Earth Orbit (Earth + orbit ring icon) 3. Translunar (trajectory arc icon) 4. Lunar Orbit (Moon + orbit ring icon) 5. Descent (lunar module descending icon) 6. Landing (lunar module on the surface icon) Small supporting elements (minimal text): • Crew strip: three silhouette icons with only last names: Armstrong, Aldrin, Collins. • Landing site marker: Moon pin labeled "Tranquility" only. Layout constraints: generous margins, large readable labels, clean background with subtle stars. Vector-only, print-poster look, high resolution.”
Stability AI's 8.1-billion-parameter Multimodal Diffusion Transformer (MMDiT) text-to-image model, featuring improved image quality, typography, complex prompt understanding, and resource efficiency.
Details
stable-diffusion-3.5-large
Ready to integrate?
Access stable-diffusion-3.5-large via our unified API.
Starting from $0.065 / image
Prices shown are in USD · Some prices estimated from per-megapixel or per-token pricing
Providers & Pricing (2)
Stable Diffusion 3.5 Large is available from 2 providers, with per-image pricing starting at $0.065 through fal.ai.
fal/stable-diffusion-3.5-large
replicate/stable-diffusion-3.5-large
Stable Diffusion 3.5 Large API (OpenAI-compatible)
Integrate Stable Diffusion 3.5 Large into your application via Lumenfall’s OpenAI-compatible API to generate high-resolution images from text prompts. This unified endpoint provides programmatic access to the model's Multimodal Diffusion Transformer for scalable media generation.
https://api.lumenfall.ai/openai/v1
stable-diffusion-3.5-large
Code Examples
Text to Image
`/v1/images/generations`

```shell
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "stable-diffusion-3.5-large",
    "prompt": "A minimalist vector poster of the Apollo 11 launch",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'stable-diffusion-3.5-large',
  prompt: 'A minimalist vector poster of the Apollo 11 launch',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="stable-diffusion-3.5-large",
    prompt="A minimalist vector poster of the Apollo 11 launch",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```
Image Edit

`/v1/images/edits`

Parameter Reference
Core Parameters

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `prompt` | string | **Required.** Text prompt for image generation | T2I, Edit |
| `negative_prompt` | string | Negative prompt to guide generation away from undesired content | T2I, Edit |
| `seed` | integer | Random seed for reproducibility | T2I, Edit |
Size & Layout

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `size` | string | Image dimensions as `WxH` pixels (e.g. `"1024x1024"`) or aspect ratio (e.g. `"16:9"`). `WxH` determines both shape and scale (`aspect_ratio` and `resolution` are ignored when `size` is provided); the `W:H` format is equivalent to `aspect_ratio`. | T2I, Edit |
| `aspect_ratio` | string | Aspect ratio of the output image (e.g. `"16:9"`, `"1:1"`). Controls shape independently of scale; use with `resolution` to control both. If `size` is also provided, `size` takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I, Edit |
| `resolution` | string | Output resolution tier (e.g. `"1K"`, `"4K"`). Default: `"1K"`. Controls scale independently of shape; higher tiers produce larger images and cost more. If `size` is also provided, `size` takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I, Edit |
Flexible

| Output | `size` | `aspect_ratio` + `resolution` |
|---|---|---|
| Custom, 1–14142 px per side | `"WxH"` | — |

Any pixel dimensions within model constraints.

1K (9 sizes)

| Output | `size` | `aspect_ratio` + `resolution` |
|---|---|---|
| 916 × 1145 | `"916x1145"` | `"4:5"` + `"1K"` |
| 1145 × 916 | `"1145x916"` | `"5:4"` + `"1K"` |
| 1024 × 1024 | `"1024x1024"` | `"1:1"` + `"1K"` |
| 836 × 1254 | `"836x1254"` | `"2:3"` + `"1K"` |
| 1254 × 836 | `"1254x836"` | `"3:2"` + `"1K"` |
| 768 × 1365 | `"768x1365"` | `"9:16"` + `"1K"` |
| 1365 × 768 | `"1365x768"` | `"16:9"` + `"1K"` |
| 670 × 1564 | `"670x1564"` | `"9:21"` + `"1K"` |
| 1563 × 670 | `"1563x670"` | `"21:9"` + `"1K"` |
How these parameters work

- `size`: exact pixel dimensions, e.g. `"1920x1080"`
- `aspect_ratio`: shape only, at the default scale, e.g. `"16:9"`
- `resolution`: scale tier, preserves shape, e.g. `"1K"`

Priority when combined: `size` is most specific and always wins. `aspect_ratio` and `resolution` control shape and scale independently.

How matching works: aspect ratios map to the nearest supported value, so requesting `7:1` on a model that supports `4:1` and `8:1` gives you `8:1`. Resolution tiers may be K-style (`0.5K`, `1K`, `2K`, `4K`) or megapixel tiers (`0.25`, `1`); if the exact tier isn't available, you get the nearest one.
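The precedence and nearest-match rules above can be sketched as plain functions. This is an illustrative reading of the documented behavior, not the gateway's actual code; `resolve_dimensions` and `nearest_ratio` are hypothetical names:

```python
def resolve_dimensions(size=None, aspect_ratio=None, resolution=None):
    """size is most specific and always wins; otherwise aspect_ratio
    sets shape and resolution sets scale."""
    if size is not None:
        if "x" in size:                  # "WxH": exact pixels, fixes shape and scale
            w, h = size.split("x")
            return {"width": int(w), "height": int(h)}
        aspect_ratio = size              # "W:H" form is equivalent to aspect_ratio
    resolved = {"resolution": resolution or "1K"}   # default scale tier
    if aspect_ratio:
        resolved["aspect_ratio"] = aspect_ratio     # shape, independent of scale
    return resolved

def nearest_ratio(requested, supported):
    """Map any requested ratio to the numerically closest supported one."""
    def val(r):
        w, h = r.split(":")
        return float(w) / float(h)
    return min(supported, key=lambda s: abs(val(s) - val(requested)))

print(resolve_dimensions(size="1920x1080"))           # exact pixels win outright
print(resolve_dimensions(aspect_ratio="16:9"))        # shape + default 1K scale
print(nearest_ratio("7:1", ["4:1", "8:1"]))           # 8:1
```

The `7:1` request lands on `8:1` because it is numerically closer, matching the example in the text above.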
Media Inputs

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `image` | file | **Required.** Input image(s) to edit. Supports PNG, JPEG, WebP. | Edit |
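The required `image` upload pairs with the `/v1/images/edits` route. A minimal sketch of the fields such a request carries; `build_edit_fields` is a hypothetical helper name, and only `model`, `prompt`, `image`, and `strength` come from the parameter reference:

```python
def build_edit_fields(prompt, image_path, strength=None):
    """Assemble the fields for a hypothetical /v1/images/edits call."""
    fields = {
        "model": "stable-diffusion-3.5-large",
        "prompt": prompt,        # required text instruction
        "image": image_path,     # required input image (PNG, JPEG, or WebP)
    }
    if strength is not None:
        # 0 keeps the input unchanged, 1 fully regenerates from the prompt
        fields["strength"] = strength
    return fields

fields = build_edit_fields("Replace the sky with sunset clouds", "photo.png", strength=0.6)
print(fields["model"])  # stable-diffusion-3.5-large
```

In a real request the `image` entry is sent as a file upload rather than a path string, with the remaining fields alongside it.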
Output & Format

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `response_format` | string | How to return the image: `url` or `b64_json`. Default: `"url"` | T2I, Edit |
| `output_format` | string | Output image format: `png`, `jpeg`, `gif`, `webp`, `avif`. Gateway converts to the requested format if the provider doesn't support it natively. | T2I, Edit |
| `output_compression` | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit |
| `n` | integer | Number of images to generate. Default: `1`. Gateway generates multiple images in parallel even if the provider only supports 1. | T2I, Edit |
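To show the `b64_json` path concretely, here is a small decoding sketch. The payload below is a fabricated stand-in with the fields from the response examples above; the bytes are not a real image:

```python
import base64
import json

# Hypothetical response body for response_format="b64_json".
raw = json.dumps({
    "created": 1234567890,
    "data": [{"b64_json": base64.b64encode(b"\x89PNG\r\n\x1a\n...").decode()}],
})

payload = json.loads(raw)
image_bytes = base64.b64decode(payload["data"][0]["b64_json"])

# Persist in the requested output_format (png here):
# with open("out.png", "wb") as f:
#     f.write(image_bytes)
print(image_bytes[:4])  # b'\x89PNG'
```

`b64_json` avoids a second fetch for the image URL at the cost of a larger response body.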
Additional Parameters

| Parameter | Type | Description | Modes |
|---|---|---|---|
| `cfg_scale` | number | Classifier-free guidance scale: higher values stick more closely to the prompt | T2I, Edit |
| `strength` | number | How much to transform the input image: 0 keeps it unchanged, 1 fully regenerates from the prompt | T2I, Edit |
| `controlnet` (fal) | object | ControlNet for inference | T2I, Edit |
| `enable_safety_checker` (fal) | boolean | If set to true, the safety checker will be enabled | T2I, Edit |
| `ip_adapter` (fal) | object | IP-Adapter to use during inference | T2I, Edit |
| `loras` (fal) | array | The LoRAs to use for image generation. Any number of LoRAs can be used, and they are merged together to generate the final image. | T2I, Edit |
| `num_inference_steps` (fal) | integer | The number of inference steps to perform | T2I, Edit |
| `sync_mode` (fal) | boolean | If `true`, the media is returned as a data URI and the output data won't be available in the request history | T2I, Edit |
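A sketch of how these provider-scoped options might ride alongside the core parameters in one request body. The LoRA entry shape (`path`/`scale`) and the example URL are assumptions for illustration, not taken from the table above:

```python
# Core parameters plus fal-specific extras in a single request body.
body = {
    "model": "stable-diffusion-3.5-large",
    "prompt": "A vector emblem of a retro coffee house",
    "size": "1024x1024",
    "cfg_scale": 5.0,               # higher values follow the prompt more closely
    "num_inference_steps": 28,      # fal-specific: diffusion step count
    "loras": [                      # fal-specific: merged at generation time
        {"path": "https://example.com/style.safetensors", "scale": 0.8}
    ],
    "enable_safety_checker": True,  # fal-specific
}
print(len(body))  # 7
```

When the request is routed to a provider that lacks one of these fields, the normalization rules described below decide whether it is renamed, converted, emulated, or passed through.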
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | `style`, `quality` |
| renamed | Same value, mapped to the field name the provider expects | `prompt` |
| converted | Transformed to the provider's native format | `size` |
| emulated | Works even if the provider has no concept of it | `n`, `response_format` |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
Stable Diffusion 3.5 Large Benchmarks
Stable Diffusion 3.5 Large currently holds rank #21 in the Text-to-Image arena with a competitive Elo rating of 1225. This 8.1-billion parameter model utilizes a Multimodal Diffusion Transformer (MMDiT) architecture to balance prompt adherence with visual quality.
Text-to-Image Landscape
[Charts: Elo vs Cost and Elo vs Speed; models without speed data are omitted.]
Competition Results
“Modern minimalist restaurant menu design, white background with colorful food photos in grid, sections for appetizers/pizza/mains, bold sans-serif fonts, vibrant accents, clean professional layout for casual dining.”
“Vintage minimalist restaurant logo for "Caffè Florian", retro cloche dome with steam and "Est. 1720" banner, classic typography, warm brown and cream tones, subtle texture on light background, vector emblem style.”
“A candid street photo of an elderly Japanese man repairing a red bicycle in light rain, reflections on wet pavement, shallow depth of field, 50mm lens, natural skin texture, imperfect framing, motion blur from passing cars, cinematic but realistic, no stylization.”
“Close portrait of a battle-worn paladin in ornate engraved plate armor, hair braided with small beads, faint scars and dirt on the skin, warm torchlight reflecting off metal, shallow depth of field, bokeh sparks, lifelike eyes, highly detailed texture on leather straps and cloth underlayer.”
Uncategorized
“Hyper-photorealistic interior of a lush Victorian glass greenhouse filled with exotic tropical plants, vibrant blooming orchids, tall ferns, colorful butterflies in flight, sunlight filtering through ornate glass roof creating realistic caustics and dew on leaves, intricate iron framework visible, misty atmosphere, 8K masterpiece.”
“A glass cube on a wooden table. Inside the cube is a small blue sphere. On top of the cube sits a red book. A green plant is behind the cube, partially visible through the glass. Soft window light from the left.”
“Create a clear, 45° top-down isometric miniature 3D cartoon scene of Japan's signature dish: sushi, with soft refined textures, realistic PBR materials, gentle lighting, on a small raised diorama base with minimal garnish and plate. Solid light blue background. At top-center: 'JAPAN' in large bold text, 'SUSHI' below it, small flag icon. Perfectly centered, ultra-clean, high-clarity, square format.”
“Perfectly symmetrical mandala made entirely of real flowers, petals, leaves, fruits, and seeds in vibrant natural colors, intricate layered patterns with radial symmetry, top-down view on a soft neutral background, hyper-detailed organic textures and subtle shadows, photorealistic, 8K masterpiece.”
“Hyper-photorealistic scene of fluffy baby animals—a golden retriever puppy, tabby kitten, baby bunny, and red fox kit—with big expressive eyes and ultra-detailed soft fur, playfully chasing butterflies and tumbling together in a lush wildflower meadow, warm golden sunrise light with god rays and dew sparkles, joyful wholesome vibe, 8K masterpiece.”
“Hyper-photorealistic full-body portrait of a female superhero standing triumphantly on a New York skyscraper rooftop at golden sunset, wearing a classic modest superhero costume with flowing cape, chest emblem, gloves, and boots in red and blue colors, practical design, short hair, strong determined heroic expression looking into the distance, powerful confident stance with hands on hips and cape billowing dramatically in the wind, detailed urban cityscape background, warm natural sunlight with sharp shadows and fabric highlights, ultra-sharp textures on suit, hair, and concrete, 8K masterpiece, empowering family-friendly style.”
Top Matchups
See how Stable Diffusion 3.5 Large performs head-to-head against other AI models, ranked by community votes in blind comparisons.
vs Nano Banana
Challenge: Heroic Super Hero Portrait
14% W · 86% L
vs Nano Banana Pro
Challenge: Apollo 11: Journey to Tranquility
60% W · 40% L
vs FLUX.2 [dev] Turbo
Challenge: Geometric Composition
0% W · 100% L
vs Nano Banana Pro
Challenge: Victorian Greenhouse Oasis
33% W · 67% L
vs Nano Banana Pro
Challenge: Candid Street Photography
50% W · 50% L
Use Cases
The model performs best in Text Rendering, where it ranks #14 with a 42.3% win rate, alongside a #15 ranking for portrait generation. It shows lower comparative performance in commercial branding and photorealism categories, ranking #19 and #17 respectively.
Gallery
Stable Diffusion 3.5 Large FAQ
How much does Stable Diffusion 3.5 Large cost?
Stable Diffusion 3.5 Large starts at $0.065 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use Stable Diffusion 3.5 Large via API?
You can use Stable Diffusion 3.5 Large through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "stable-diffusion-3.5-large". Code examples are available in Python, JavaScript, and cURL.
Which providers offer Stable Diffusion 3.5 Large?
Stable Diffusion 3.5 Large is available through fal.ai and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What is the maximum resolution for Stable Diffusion 3.5 Large?
Stable Diffusion 3.5 Large supports the 1K resolution tier with preset sizes such as 1024x1024, 1365x768, and 916x1145, and also accepts custom `WxH` dimensions within model constraints via the `size` parameter.
Overview
Stable Diffusion 3.5 Large is an 8.1-billion parameter text-to-image model developed by Stability AI. Built on the Multimodal Diffusion Transformer (MMDiT) architecture, it is designed to balance high-fidelity visual output with the ability to follow intricate, multi-part natural language instructions. This model represents a significant refinement in the Stable Diffusion lineage, focusing on improved prompt adherence and photorealism compared to its predecessors.
Strengths
- Complex Prompt Adherence: The model excels at interpreting long, descriptive prompts that include specific spatial relationships, multiple subjects, and detailed stylistic instructions.
- Typography and Text Rendering: It demonstrates a high degree of accuracy when generating legible text within images, minimizing the spelling errors common in earlier latent diffusion models.
- Subject Diversity: It is capable of generating a wide range of human skin tones, textures, and facial features without a strong inherent bias toward a single aesthetic style.
- Structural Composition: The MMDiT architecture allows the model to maintain better global consistency, ensuring that large-scale elements (like limbs or architectural features) are proportionally correct and logically placed.
Limitations
- Hardware Requirements: At 8.1 billion parameters, it requires significant VRAM for local inference, making it less suitable for consumer-grade hardware without quantization.
- Generation Speed: Due to its size and the complexity of the transformer-based backbone, it generally has higher latency per image compared to “Turbo” or “Lightning” versions of the SD3 family.
- Anatomical Edge Cases: While improved, the model can still struggle with extremely complex anatomical poses or highly overlapping human figures in crowded scenes.
Technical Background
The model utilizes a Multimodal Diffusion Transformer (MMDiT) architecture, which uses separate sets of weights for image and text representations but allows them to interact via a bidirectional flow of information. This approach enables the model to treat visual and textual data as equal contributors to the final output, improving the alignment between the user’s input and the generated pixels. The training process prioritized resource efficiency and stable convergence, allowing the 8.1B parameter model to outperform larger competitors in specific benchmark categories.
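The joint-attention idea behind MMDiT can be sketched in a toy, dependency-free form. This is illustrative only, not Stability AI's implementation; the dimensions are arbitrarily small and the weights random:

```python
import math
import random

random.seed(0)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def rand_mat(r, c):
    return [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]

d = 4                                             # toy embedding width
text_tokens, image_tokens = rand_mat(3, d), rand_mat(5, d)

# MMDiT keeps SEPARATE projection weights per modality...
text_w = {k: rand_mat(d, d) for k in "qkv"}
image_w = {k: rand_mat(d, d) for k in "qkv"}

def project(tokens, w):
    return {k: matmul(tokens, w[k]) for k in "qkv"}

t, i = project(text_tokens, text_w), project(image_tokens, image_w)

# ...then concatenates both streams for ONE joint attention pass, so
# information flows bidirectionally between text and image tokens.
q, k, v = t["q"] + i["q"], t["k"] + i["k"], t["v"] + i["v"]

scores = [[sum(a * b for a, b in zip(qr, kr)) / math.sqrt(d) for kr in k] for qr in q]
weights = [softmax(row) for row in scores]
out = matmul(weights, v)

# Split back into per-modality streams after the shared attention step.
text_out, image_out = out[:3], out[3:]
print(len(text_out), len(image_out))  # 3 5
```

The key property, visible in the split at the end, is that both modalities attend over the same concatenated sequence while keeping their own projection weights, which is what lets text and image act as equal contributors.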
Best For
Stable Diffusion 3.5 Large is ideal for professional design workflows where precise control over composition and text is required, such as creating posters, book covers, or conceptual art from detailed briefs. It is a strong choice for users who need a versatile, general-purpose model that can handle both photorealistic and stylized requests without extensive fine-tuning.
This model is available for testing and integration through Lumenfall’s unified API and playground, allowing you to compare its output alongside other industry-standard image generation models.
Try Stable Diffusion 3.5 Large in Playground
Generate images with custom prompts — no API key needed.