Stable Diffusion 3.5 Medium

AI Image Generation Model

Image · from 2¢ per image

Stability AI's 2.5-billion-parameter text-to-image model built on the Multimodal Diffusion Transformer with improvements (MMDiT-X) architecture, optimized for consumer hardware and featuring improved image quality, typography, and complex-prompt understanding.

Max Resolution: 1414 x 1414
Supported Modes: Text to Image, Image Edit
Status: Active

Details

Model ID
stable-diffusion-3.5-medium
Creator
Stability AI
Family
stable-diffusion-3.5
Tags
image-generation text-to-image open-weights
Get Started

Ready to integrate?

Access stable-diffusion-3.5-medium via our unified API.

Create Account
Available at 2 providers

Starting from

$0.020 /image via fal.ai · +1 more

Prices shown are in USD · Some prices estimated from per-megapixel or per-token pricing
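The footnote above says some per-image prices are estimated from per-megapixel pricing. A minimal sketch of that conversion, assuming a megapixel means 10^6 pixels (the function name and the gateway's exact rounding rules are assumptions, not documented behavior):

```python
def estimate_image_price(per_megapixel_usd: float, width: int, height: int) -> float:
    """Estimate a per-image price from per-megapixel pricing.

    Assumes 1 megapixel = 1,000,000 pixels; the gateway's exact
    rounding rules are not documented here.
    """
    megapixels = (width * height) / 1_000_000
    return per_megapixel_usd * megapixels

# fal.ai charges $0.020/megapixel; a 1024x1024 image is ~1.05 MP
print(round(estimate_image_price(0.020, 1024, 1024), 3))  # → 0.021
```

This is why the "/megapixel" fal.ai rate and the "$0.020 /image" starting price in the summary are nearly the same number at the default 1024x1024 size.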

See all providers

Providers & Pricing (2)

Stable Diffusion 3.5 Medium is available from 2 providers, with per-image pricing starting at $0.02 through fal.ai.

fal.ai
Text to Image
fal/stable-diffusion-3.5-medium
Provider Model ID: fal-ai/stable-diffusion-v35-medium
$0.020 /megapixel
Replicate
Text to Image Image Edit
replicate/stable-diffusion-3.5-medium
Provider Model ID: stability-ai/stable-diffusion-3.5-medium
$0.035 /image

Stable Diffusion 3.5 Medium API (OpenAI-compatible)

Lumenfall provides an OpenAI-compatible API for generating high-quality images using the Stable Diffusion 3.5 Medium MMDiT-X architecture.

Base URL
https://api.lumenfall.ai/openai/v1
Model
stable-diffusion-3.5-medium

Code Examples

Text to Image

/v1/images/generations
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "stable-diffusion-3.5-medium",
    "prompt": "",
    "size": "1024x1024"
  }'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }

Image Edit

/v1/images/edits
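A hedged sketch of what an /v1/images/edits request might contain, mirroring the text-to-image example. Field names follow OpenAI's images/edits convention; whether Lumenfall accepts exactly these names is an assumption. Actually sending it requires a multipart POST (for example with the requests library, passing the image as a file part):

```python
import json

def build_edit_request(model: str, prompt: str, image_path: str,
                       size: str = "1024x1024") -> dict:
    """Return the form fields for a multipart POST to /v1/images/edits."""
    return {
        "model": model,
        "prompt": prompt,
        "image": image_path,  # sent as a file part (PNG, JPEG, or WebP)
        "size": size,
    }

fields = build_edit_request(
    "stable-diffusion-3.5-medium",
    "Replace the sky with a dramatic sunset",
    "input.png",
)
print(json.dumps(fields, indent=2))
```

With requests, this would be sent roughly as `requests.post(url, headers=auth, data=fields_without_image, files={"image": open("input.png", "rb")})`.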

Parameter Reference

Legend: Required · Supported · Not available

Core Parameters

prompt (string): Required. Text prompt for image generation. Modes: T2I, Edit

Size & Layout

size (string): Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided); the W:H format is equivalent to aspect_ratio. Modes: T2I, Edit
aspect_ratio (string): Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. Modes: T2I, Edit
resolution (string): Output resolution tier (e.g. "1K", "4K"). Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. Modes: T2I, Edit
size: exact pixel dimensions, e.g. "1920x1080"
aspect_ratio: shape only, default scale, e.g. "16:9"
resolution: scale tier, preserves shape, e.g. "1K"

Priority when combined

size > aspect_ratio + resolution > aspect_ratio > resolution

size is most specific and always wins. aspect_ratio and resolution control shape and scale independently.

How matching works

Shape matching – we pick the closest supported ratio. Ask for 7:1 on a model with 4:1 and 8:1, you get 8:1.
Scale matching – providers use different tier formats: K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25 MP, 1 MP). If the exact tier isn't available, you get the nearest one.
Dimension clamping – if a model has pixel limits, we clamp dimensions to fit and keep the aspect ratio intact.
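The matching rules above can be sketched in a few lines. `nearest_ratio` and `clamp_dimensions` are illustrative names, and the log-distance metric for comparing ratios is an assumption; the docs only promise "the nearest supported value":

```python
import math

def nearest_ratio(requested: str, supported: list[str]) -> str:
    """Map a requested W:H ratio to the closest supported one."""
    def to_float(r: str) -> float:
        w, h = r.split(":")
        return float(w) / float(h)
    # Compare in log space so 1:2 and 2:1 are equally far from 1:1
    target = math.log(to_float(requested))
    return min(supported, key=lambda r: abs(math.log(to_float(r)) - target))

def clamp_dimensions(width: int, height: int, max_side: int = 1414) -> tuple[int, int]:
    """Scale dimensions down to fit a pixel limit, keeping the aspect ratio."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

# The doc's own example: asking for 7:1 on a model that supports 4:1 and 8:1
print(nearest_ratio("7:1", ["4:1", "8:1"]))   # → 8:1
print(clamp_dimensions(1920, 1080))           # fits within this model's 1414 limit
```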

Media Inputs

image (file): Required for editing. Input image(s) to edit. Supports PNG, JPEG, WebP. Modes: Edit

Output & Format

response_format (string): How to return the image. Options: url, b64_json. Default: "url". Modes: T2I, Edit
output_format (string): Output image format. Options: png, jpeg, gif, webp, avif. The gateway converts to the requested format if the provider doesn't support it natively. Modes: T2I, Edit
output_compression (integer): Compression level for lossy formats (JPEG, WebP, AVIF). Modes: T2I, Edit
n (integer): Number of images to generate. Default: 1. The gateway generates multiple images in parallel even if the provider only supports 1. Modes: T2I, Edit
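The note that the gateway generates multiple images in parallel when a provider only returns one per call can be sketched with a thread pool. `generate_one` here is a stand-in for a real single-image provider call, not Lumenfall's actual client:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_one(prompt: str, seed: int) -> dict:
    """Stand-in for one single-image provider call (no network here)."""
    return {"url": f"https://example.invalid/{seed}.png"}

def generate_n(prompt: str, n: int) -> list[dict]:
    """Emulate n > 1 by fanning out n single-image requests in parallel."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda i: generate_one(prompt, i), range(n)))

images = generate_n("a red bicycle", 3)
print(len(images))  # → 3
```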

Parameter Normalization

How we handle parameters across different providers

Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:

passthrough: sent as-is to the provider (e.g. style, quality)
renamed: same value, mapped to the field name the provider expects (e.g. prompt)
converted: transformed to the provider's native format (e.g. size)
emulated: works even if the provider has no concept of it (e.g. n, response_format)

Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
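The four behaviors can be sketched as a small dispatch table. The rule entries and upstream field names below are illustrative, not Lumenfall's actual mapping:

```python
# Illustrative rules: parameter -> (behavior, upstream field name if renamed)
RULES = {
    "prompt": ("renamed", "text"),    # same value, provider's field name
    "size":   ("converted", None),    # transformed to native width/height
    "n":      ("emulated", None),     # handled by the gateway itself
}

def normalize(params: dict) -> tuple[dict, dict]:
    """Split params into what goes upstream vs. what the gateway emulates."""
    upstream, emulated = {}, {}
    for key, value in params.items():
        behavior, new_name = RULES.get(key, ("passthrough", None))
        if behavior == "emulated":
            emulated[key] = value
        elif behavior == "renamed":
            upstream[new_name] = value
        elif behavior == "converted":
            # e.g. "1024x1024" becomes separate width/height fields
            w, h = value.split("x")
            upstream["width"], upstream["height"] = int(w), int(h)
        else:  # passthrough: unrecognized params go straight upstream
            upstream[key] = value
    return upstream, emulated

up, emu = normalize({"prompt": "a cat", "size": "1024x1024", "n": 2, "style": "vivid"})
print(up)   # {'text': 'a cat', 'width': 1024, 'height': 1024, 'style': 'vivid'}
print(emu)  # {'n': 2}
```

Note how "style", absent from the rules, passes straight through, matching the behavior described above for unrecognized parameters.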

Stable Diffusion 3.5 Medium FAQ

How much does Stable Diffusion 3.5 Medium cost?

Stable Diffusion 3.5 Medium starts at $0.02 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

How do I use Stable Diffusion 3.5 Medium via API?

You can use Stable Diffusion 3.5 Medium through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "stable-diffusion-3.5-medium". Code examples are available in Python, JavaScript, and cURL.
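A stdlib-only Python sketch of the same generation request as the cURL example. The endpoint and field names mirror that example; the actual network call is left commented out so you can inspect the request first (set LUMENFALL_API_KEY in your environment before sending):

```python
import json
import os
import urllib.request

def build_request(prompt: str, size: str = "1024x1024") -> urllib.request.Request:
    """Build a POST request for the images/generations endpoint."""
    body = json.dumps({
        "model": "stable-diffusion-3.5-medium",
        "prompt": prompt,
        "size": size,
    }).encode()
    return urllib.request.Request(
        "https://api.lumenfall.ai/openai/v1/images/generations",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('LUMENFALL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("A lighthouse on a cliff at dawn")
# response = urllib.request.urlopen(req)  # uncomment to actually send
# print(json.loads(response.read())["data"][0]["url"])
print(req.full_url)
```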

Which providers offer Stable Diffusion 3.5 Medium?

Stable Diffusion 3.5 Medium is available through fal.ai and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.

What is the maximum resolution for Stable Diffusion 3.5 Medium?

Stable Diffusion 3.5 Medium supports images up to 1414x1414 resolution.

Overview

Stable Diffusion 3.5 Medium is a 2.5-billion parameter text-to-image model developed by Stability AI. It utilizes the Multimodal Diffusion Transformer with improvements (MMDiT-X) architecture to balance high-quality image generation with hardware efficiency. This model is specifically designed to run on consumer-grade GPUs while maintaining the prompt adherence and visual fidelity of larger models in the SD3.5 family.

Strengths

  • Prompt Adherence: Excels at interpreting complex, multi-subject prompts and correctly assigning attributes (colors, positions, and styles) to specific objects within a scene.
  • Typography Rendering: Significant improvements in spelling accuracy and font legibility compared to previous iterations like SDXL or SD 1.5.
  • Hardware Efficiency: With 2.5 billion parameters, it occupies a “sweet spot” that allows for fast inference and fine-tuning on standard consumer hardware without requiring enterprise-grade VRAM.
  • Anatomical Realism: Demonstrates improved accuracy in rendering human forms, hands, and faces, reducing the common artifacts associated with earlier diffusion models.

Limitations

  • Compositional Drift: While improved, extremely long or contradictory prompts can still lead to “concept bleeding,” where styles or colors intended for one object merge into another.
  • Resolution Constraints: While capable of generating high-resolution images, it performs most reliably at its native training resolutions; exceeding these without tiled upscaling can lead to repetitive patterns or distorted proportions.
  • Photorealism Nuance: Compared to the “Large” variant of SD 3.5, the Medium model may occasionally lack the same level of fine-grained skin texture or micro-detail in complex lighting environments.

Technical Background

Stable Diffusion 3.5 Medium is built on the MMDiT-X architecture, which uses separate sets of weights for the image and text modalities. This allows the model to process visual and linguistic information in a more integrated fashion than traditional U-Net architectures. The training process focused on streamlining the transformer blocks to ensure the model remains performant on local deployments while benefiting from the scaling laws observed in the larger 8B parameter versions.

Best For

This model is ideal for developers building creative tools, marketing asset generators, or localized image synthesis applications where speed and memory efficiency are prioritized. It is particularly effective for projects requiring embedded text or precise layout control via natural language.

Stable Diffusion 3.5 Medium is available for testing and integration through Lumenfall’s unified API and interactive playground, allowing you to compare its performance against other models in the Stable Diffusion family.

Try Stable Diffusion 3.5 Medium in Playground

Generate images with custom prompts — no API key needed.

Open Playground