Stability AI's 2.5-billion-parameter Multimodal Diffusion Transformer with improvements (MMDiT-X) text-to-image model, optimized for consumer hardware and featuring improved image quality, typography, and complex-prompt understanding.
Details
stable-diffusion-3.5-medium
Ready to integrate?
Access stable-diffusion-3.5-medium via our unified API.
Providers & Pricing (2)
Stable Diffusion 3.5 Medium is available from 2 providers, with per-image pricing starting at $0.02 through fal.ai.
fal/stable-diffusion-3.5-medium
replicate/stable-diffusion-3.5-medium
Stable Diffusion 3.5 Medium API (OpenAI-compatible)
Developers can generate high-resolution images using Stable Diffusion 3.5 Medium through Lumenfall’s OpenAI-compatible API. This 2.5-billion parameter model supports standard text-to-image workflows, delivering improved visual quality and prompt fidelity at a smaller footprint.
https://api.lumenfall.ai/openai/v1
stable-diffusion-3.5-medium
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "stable-diffusion-3.5-medium",
    "prompt": "A serene mountain landscape at sunset",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'stable-diffusion-3.5-medium',
  prompt: 'A serene mountain landscape at sunset',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="stable-diffusion-3.5-medium",
    prompt="A serene mountain landscape at sunset",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
Gallery
Stable Diffusion 3.5 Medium FAQ
Stable Diffusion 3.5 Medium starts at $0.02 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
You can use Stable Diffusion 3.5 Medium through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "stable-diffusion-3.5-medium". Code examples are available in Python, JavaScript, and cURL.
Stable Diffusion 3.5 Medium is available through Replicate and fal.ai on Lumenfall. Lumenfall automatically routes requests to the best available provider.
Stable Diffusion 3.5 Medium supports images up to 1414x1414 resolution.
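Since requests beyond the 1414x1414 cap will fail or degrade, a client can validate the `size` string before calling the endpoint. The helper below is an illustrative sketch, not part of any Lumenfall SDK; only the 1414x1414 maximum comes from the listing above.

```python
# Illustrative helper (not part of the Lumenfall SDK): check a requested
# "WIDTHxHEIGHT" size string against Stable Diffusion 3.5 Medium's
# 1414x1414 maximum before calling the images endpoint.
MAX_DIM = 1414

def validate_size(size: str) -> tuple[int, int]:
    """Parse a "WIDTHxHEIGHT" string and enforce the model's resolution cap."""
    width, height = (int(part) for part in size.lower().split("x"))
    if width > MAX_DIM or height > MAX_DIM:
        raise ValueError(f"{size} exceeds the {MAX_DIM}x{MAX_DIM} maximum")
    return width, height

print(validate_size("1024x1024"))  # (1024, 1024)
```

Rejecting an oversized request client-side avoids a wasted round trip to the provider.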
Overview
Stable Diffusion 3.5 Medium is a 2.5-billion parameter text-to-image model developed by Stability AI. It utilizes the Multimodal Diffusion Transformer with improvements (MMDiT-X) architecture to balance high-quality image generation with hardware efficiency. This model is specifically designed to run on consumer-grade GPUs while maintaining the prompt adherence and visual fidelity of larger models in the SD3.5 family.
Strengths
- Prompt Adherence: Excels at interpreting complex, multi-subject prompts and correctly assigning attributes (colors, positions, and styles) to specific objects within a scene.
- Typography Rendering: Significant improvements in spelling accuracy and font legibility compared to previous iterations like SDXL or SD 1.5.
- Hardware Efficiency: With 2.5 billion parameters, it occupies a “sweet spot” that allows for fast inference and fine-tuning on standard consumer hardware without requiring enterprise-grade VRAM.
- Anatomical Realism: Demonstrates improved accuracy in rendering human forms, hands, and faces, reducing the common artifacts associated with earlier diffusion models.
Limitations
- Compositional Drift: Although improved over earlier versions, extremely long or contradictory prompts can still cause “concept bleeding,” where styles or colors intended for one object merge into another.
- Resolution Constraints: While capable of generating high-resolution images, it performs most reliably at its native training resolutions; exceeding these without tiled upscaling can lead to repetitive patterns or distorted proportions.
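The grid arithmetic behind tiled upscaling can be sketched in a few lines. This is a toy illustration assuming simple non-overlapping tiles; production tiled-upscaling pipelines typically overlap and blend tiles to hide seams.

```python
import math

# Hedged sketch: split a target larger than the native training resolution
# into tiles at or below a chosen tile size. Real pipelines overlap and
# blend tiles; this only shows the grid computation.
def tile_grid(width: int, height: int, tile: int = 1024):
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    # Each box is (left, top, right, bottom), clamped to the image bounds.
    return [(c * tile, r * tile,
             min((c + 1) * tile, width), min((r + 1) * tile, height))
            for r in range(rows) for c in range(cols)]

# A 2048x1536 target at a 1024px tile size yields a 2x2 grid of boxes.
print(tile_grid(2048, 1536))
```

Generating each tile near the native resolution and then stitching avoids the repetitive patterns the limitation above describes.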
- Photorealism Nuance: Compared to the “Large” variant of SD 3.5, the Medium model may occasionally lack the same level of fine-grained skin texture or micro-detail in complex lighting environments.
Technical Background
Stable Diffusion 3.5 Medium is built on the MMDiT-X architecture, which uses separate sets of weights for the image and text modalities. This allows the model to process visual and linguistic information in a more integrated fashion than traditional U-Net architectures. The training process focused on streamlining the transformer blocks to ensure the model remains performant on local deployments while benefiting from the scaling laws observed in the larger 8B parameter versions.
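The separate-weights idea can be made concrete with a toy sketch: each modality gets its own projection weights, but attention runs jointly over the concatenated token sequence. This is an illustration of the general MMDiT pattern, not Stability AI's implementation; all dimensions and weights below are made up.

```python
import numpy as np

# Toy MMDiT-style joint attention: text and image tokens use separate
# projection weights (the "separate sets of weights") but attend over
# one concatenated sequence. Illustrative only.
rng = np.random.default_rng(0)
d = 16                                   # shared model dimension
text = rng.standard_normal((4, d))       # 4 text tokens
image = rng.standard_normal((8, d))      # 8 image-patch tokens

# One QKV weight set per modality.
W_text = {k: rng.standard_normal((d, d)) for k in "qkv"}
W_img = {k: rng.standard_normal((d, d)) for k in "qkv"}

def proj(x, W):
    return x @ W["q"], x @ W["k"], x @ W["v"]

tq, tk, tv = proj(text, W_text)
iq, ik, iv = proj(image, W_img)
q = np.concatenate([tq, iq])
k = np.concatenate([tk, ik])
v = np.concatenate([tv, iv])

# Joint attention over all 12 tokens (softmax of scaled dot products).
scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ v
print(out.shape)  # (12, 16)
```

Because every token attends to every other token regardless of modality, text conditioning is integrated into the visual stream rather than injected via cross-attention as in U-Net-based models.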
Best For
This model is ideal for developers building creative tools, marketing asset generators, or localized image synthesis applications where speed and memory efficiency are prioritized. It is particularly effective for projects requiring embedded text or precise layout control via natural language.
Stable Diffusion 3.5 Medium is available for testing and integration through Lumenfall’s unified API and interactive playground, allowing you to compare its performance against other models in the Stable Diffusion family.
Try Stable Diffusion 3.5 Medium in Playground
Generate images with custom prompts — no API key needed.