Black Forest Labs' open-weights image generation model with frontier performance, available for non-commercial local deployment
Details
Model ID: flux.2-dev
Starting from $0.012 per image
Prices shown are in USD. Some prices are estimated from per-megapixel or per-token pricing.
Provider Performance
Fastest generation is through replicate, at a 5,827ms median latency with an 84.6% success rate.
Aggregated from real API requests over the last 30 days.
[Charts: Generation Time, Success Rate, Time to First Byte]
Provider Rankings
| # | Provider | p50 Gen Time | p95 Gen Time | Success Rate | TTFB (p50) |
|---|---|---|---|---|---|
| 1 | replicate | 5,827ms | 11,907ms | 84.6% | 5,293ms |
| 2 | fal | 8,089ms | 12,277ms | 100.0% | 8,396ms |
Providers & Pricing (3)
FLUX.2 [dev] is available from 3 providers, with per-image pricing starting at $0.012 through fal.ai.
fal/flux.2-dev
fal/flux.2-dev-edit
Pricing Notes (4)
- Resolution is rounded up to the next megapixel, separately for each reference image and the generated image
- 1 megapixel = 1024x1024 pixels
- Each reference image is counted separately (minimum 1 MP each)
- Images exceeding 4 megapixels are resized to 4 megapixels
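As a rough sketch, the rounding rules above can be expressed in a few lines of Python. The 0.012 per-megapixel rate is a placeholder taken from the starting per-image price on this page; check the provider's live pricing before relying on it.

```python
import math

MP = 1024 * 1024  # 1 megapixel as defined in the pricing notes

def billed_megapixels(width: int, height: int) -> int:
    """Round an image up to the next whole megapixel, capped at 4 MP."""
    return min(math.ceil(width * height / MP), 4)

def estimate_cost(gen_size, ref_sizes=(), rate_per_mp=0.012):
    """Estimate per-request cost: the generated image and each reference
    image are rounded up separately (minimum 1 MP each).
    rate_per_mp is a placeholder, not a quoted price."""
    total = billed_megapixels(*gen_size)
    total += sum(max(billed_megapixels(w, h), 1) for w, h in ref_sizes)
    return total * rate_per_mp
```

For example, a 1024x1024 generation with one 512x512 reference image bills 2 MP in total, since the reference is rounded up to the 1 MP minimum.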
replicate/flux.2-dev
Pricing Notes (2)
- Resolution is rounded up to the next megapixel, separately for each reference image and the generated image
- 1 megapixel = 1024x1024 pixels
FLUX.2 [dev] API (OpenAI-compatible)
Integrate FLUX.2 [dev] through Lumenfall’s unified API to programmatically generate or edit images using an OpenAI-compatible interface. This allows developers to trigger high-quality text-to-image generations and image-to-image transformations with a single standardized request.
Base URL: https://api.lumenfall.ai/openai/v1
Model ID: flux.2-dev
Code Examples
Text to Image
/v1/images/generations

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux.2-dev",
    "prompt": "A red bicycle leaning against a brick wall in light rain",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'flux.2-dev',
  prompt: 'A red bicycle leaning against a brick wall in light rain',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="flux.2-dev",
    prompt="A red bicycle leaning against a brick wall in light rain",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```
Image Edit
/v1/images/edits
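The page does not include an edit-mode snippet, so here is a minimal sketch of what a request to /v1/images/edits might look like, assuming multipart form fields named after the parameter reference below (model, prompt, image, response_format). The request-sending portion is commented out; verify the exact encoding against the live API.

```python
# Sketch of an image-edit request to the /v1/images/edits endpoint.
# Field names follow this page's parameter reference; the multipart
# encoding is an assumption, not a documented contract.
BASE_URL = "https://api.lumenfall.ai/openai/v1"

def build_edit_request(image_path, prompt, model="flux.2-dev"):
    """Assemble the URL and form fields for an image-edit request."""
    url = f"{BASE_URL}/images/edits"
    data = {"model": model, "prompt": prompt, "response_format": "url"}
    return url, data

# import os, requests
# url, data = build_edit_request("photo.png", "Replace the sky with a sunset")
# with open("photo.png", "rb") as f:  # PNG, JPEG, or WebP
#     resp = requests.post(url, data=data, files={"image": f},
#                          headers={"Authorization": f"Bearer {os.environ['LUMENFALL_API_KEY']}"})
# print(resp.json()["data"][0]["url"])
```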
Parameter Reference

Core Parameters

| Parameter | Type | Description | Modes |
|---|---|---|---|
| prompt | string | Required. Text prompt describing the image to generate, or the edit to apply | T2I, Edit |
| seed | integer | Random seed for reproducibility | T2I, Edit |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| size | string | Image dimensions as WxH pixels (e.g. "1024x1024") or an aspect ratio (e.g. "16:9"). WxH sets both shape and scale (aspect_ratio and resolution are ignored when size is provided); the W:H form is equivalent to aspect_ratio. | T2I, Edit |
| aspect_ratio | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I, Edit |
| resolution | string | Output resolution tier (e.g. "1K", "4K"); this model supports "auto" and "1K". Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I, Edit |
Flexible

| Output | size | aspect_ratio + resolution |
|---|---|---|
| Auto | "auto" | — (model chooses optimal dimensions) |
| Custom (1–14142px per side) | "WxH" | — (any pixel dimensions within model constraints) |

1K (9 sizes)
| Output | size | aspect_ratio + resolution |
|---|---|---|
| 1183 × 887 | "1183x887" | "4:3" + "1K" |
| 916 × 1145 | "916x1145" | "4:5" + "1K" |
| 1145 × 916 | "1145x916" | "5:4" + "1K" |
| 1024 × 1024 | "1024x1024" | "1:1" + "1K" |
| 887 × 1182 | "887x1182" | "3:4" + "1K" |
| 836 × 1254 | "836x1254" | "2:3" + "1K" |
| 1254 × 836 | "1254x836" | "3:2" + "1K" |
| 768 × 1365 | "768x1365" | "9:16" + "1K" |
| 1365 × 768 | "1365x768" | "16:9" + "1K" |
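For convenience, the 1K presets above can be captured as a lookup table. The dimensions are copied verbatim from the table; the helper name size_for is our own:

```python
# The nine 1K preset sizes from the table above, keyed by aspect ratio.
SIZES_1K = {
    "4:3":  (1183, 887),
    "4:5":  (916, 1145),
    "5:4":  (1145, 916),
    "1:1":  (1024, 1024),
    "3:4":  (887, 1182),
    "2:3":  (836, 1254),
    "3:2":  (1254, 836),
    "9:16": (768, 1365),
    "16:9": (1365, 768),
}

def size_for(aspect_ratio: str) -> str:
    """Return the explicit "WxH" size equivalent to a 1K preset."""
    w, h = SIZES_1K[aspect_ratio]
    return f"{w}x{h}"
```

So passing aspect_ratio "16:9" with resolution "1K" is equivalent to passing size "1365x768".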
How these parameters work

- size: exact pixel dimensions, e.g. "1920x1080"
- aspect_ratio: shape only, at the default scale, e.g. "16:9"
- resolution: scale tier, preserving shape, e.g. "1K"

Priority when combined: size is the most specific and always wins; aspect_ratio and resolution control shape and scale independently.

How matching works: requested ratios snap to the nearest supported value, so asking for 7:1 on a model that supports 4:1 and 8:1 gets you 8:1. Resolution accepts K-style tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25, 1); if the exact tier isn't available, you get the nearest one.
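A plausible sketch of the nearest-ratio snapping described above, using log-space distance so too-wide and too-tall mismatches are weighted symmetrically (the gateway's actual distance metric is not documented here):

```python
import math

def parse_ratio(r: str) -> float:
    """Convert "W:H" to a width/height number, e.g. "16:9" -> 1.78."""
    w, h = r.split(":")
    return float(w) / float(h)

def nearest_ratio(requested: str, supported: list[str]) -> str:
    """Snap a requested aspect ratio to the nearest supported one.
    Distance is measured in log space; this metric is an assumption."""
    target = math.log(parse_ratio(requested))
    return min(supported, key=lambda s: abs(math.log(parse_ratio(s)) - target))
```

With the document's own example, nearest_ratio("7:1", ["4:1", "8:1"]) returns "8:1".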
Media Inputs
| Parameter | Type | Description | Modes |
|---|---|---|---|
| image | file | Required. Input image(s) to edit. Supports PNG, JPEG, WebP. | Edit |
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| response_format | string | How to return the image: "url" or "b64_json". Default: "url" | T2I, Edit |
| output_format | string | Output image format: png, jpeg, gif, webp, avif. The gateway converts to the requested format if the provider doesn't support it natively. | T2I, Edit |
| output_compression | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit |
| n | integer | Number of images to generate. Default: 1. The gateway generates multiple images in parallel even if the provider only supports 1. | T2I, Edit |
Additional Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| cfg_scale | number | Classifier-free guidance scale; higher values stick more closely to the prompt | T2I, Edit |
| acceleration (fal) | string | Acceleration level for image generation: none, regular, high | T2I, Edit |
| disable_safety_checker (replicate) | boolean | Disable the safety checker for generated images | T2I, Edit |
| enable_prompt_expansion (fal) | boolean | If true, the prompt is expanded for better results | T2I, Edit |
| enable_safety_checker (fal) | boolean | If true, the safety checker is enabled | T2I, Edit |
| go_fast (replicate) | boolean | Run faster predictions with additional optimizations | T2I, Edit |
| height (replicate) | integer | Height of the generated image in text-to-image mode; only used when aspect_ratio=custom. Must be a multiple of 32 (otherwise rounded to the nearest multiple of 32) | T2I |
| num_inference_steps (fal) | integer | Number of inference steps to perform | T2I, Edit |
| output_quality (replicate) | integer | Quality when saving output images, from 0 to 100 (100 is best). Not relevant for .png outputs | T2I, Edit |
| sync_mode (fal) | boolean | If true, the media is returned as a data URI and the output data is not stored in the request history | T2I, Edit |
| width (replicate) | integer | Width of the generated image in text-to-image mode; only used when aspect_ratio=custom. Must be a multiple of 32 (otherwise rounded to the nearest multiple of 32) | T2I |
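The multiple-of-32 rule that replicate's height and width parameters describe can be sketched as a small helper; the floor of 32 for degenerate inputs is our assumption:

```python
def round_to_32(value: int) -> int:
    """Round a dimension to the nearest multiple of 32, as described for
    replicate's height/width parameters. The minimum of 32 is an assumption."""
    return max(32, round(value / 32) * 32)
```

For example, a requested height of 1080 is not a multiple of 32, so it would be nudged to 1088.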
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | style, quality |
| renamed | Same value, mapped to the field name the provider expects | prompt |
| converted | Transformed to the provider's native format | size |
| emulated | Works even if the provider has no concept of it | n, response_format |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
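As an illustration only, the four behaviors could be modeled with a dispatch table. The specific rule entries below (e.g. renaming prompt to text) are hypothetical and do not reflect any real provider mapping:

```python
# Toy sketch of the four normalization behaviors described above.
# Rule entries are hypothetical; real provider mappings differ.
RULES = {
    "style":  ("passthrough", None),
    "prompt": ("renamed", "text"),  # hypothetical provider field name
    "size":   ("converted", lambda v: dict(zip(("width", "height"),
                                               map(int, v.split("x"))))),
    "n":      ("emulated", None),   # gateway fans out requests itself
}

def normalize(params: dict) -> tuple[dict, dict]:
    """Split request params into provider-bound and gateway-emulated fields.
    Unrecognized keys pass straight through, as the gateway documents."""
    provider, emulated = {}, {}
    for key, value in params.items():
        behavior, arg = RULES.get(key, ("passthrough", None))
        if behavior == "passthrough":
            provider[key] = value
        elif behavior == "renamed":
            provider[arg] = value
        elif behavior == "converted":
            provider.update(arg(value))
        else:  # emulated: handled by the gateway, not forwarded
            emulated[key] = value
    return provider, emulated
```

Here normalize({"prompt": "cat", "size": "1024x768", "n": 2}) would forward a renamed prompt and converted dimensions to the provider while keeping n for the gateway to emulate.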
FLUX.2 [dev] Benchmarks
FLUX.2 [dev] maintains a competitive position in high-end image generation with a Text-to-Image Elo of 1236, currently ranking #18 globally. This open-weights model from Black Forest Labs delivers frontier-level visual fidelity comparable to top-tier proprietary systems.
Text-to-Image Landscape
[Charts: Elo vs Cost and Elo vs Speed; models without speed data are omitted.]
Competition Results
“Close portrait of a battle-worn paladin in ornate engraved plate armor, hair braided with small beads, faint scars and dirt on the skin, warm torchlight reflecting off metal, shallow depth of field, bokeh sparks, lifelike eyes, highly detailed texture on leather straps and cloth underlayer.”
“A glass cube on a wooden table. Inside the cube is a small blue sphere. On top of the cube sits a red book. A green plant is behind the cube, partially visible through the glass. Soft window light from the left.”
“Hyper-photorealistic scene of fluffy baby animals—a golden retriever puppy, tabby kitten, baby bunny, and red fox kit—with big expressive eyes and ultra-detailed soft fur, playfully chasing butterflies and tumbling together in a lush wildflower meadow, warm golden sunrise light with god rays and dew sparkles, joyful wholesome vibe, 8K masterpiece.”
“Hyper-photorealistic full-body portrait of a female superhero standing triumphantly on a New York skyscraper rooftop at golden sunset, wearing a classic modest superhero costume with flowing cape, chest emblem, gloves, and boots in red and blue colors, practical design, short hair, strong determined heroic expression looking into the distance, powerful confident stance with hands on hips and cape billowing dramatically in the wind, detailed urban cityscape background, warm natural sunlight with sharp shadows and fabric highlights, ultra-sharp textures on suit, hair, and concrete, 8K masterpiece, empowering family-friendly style.”
Top Matchups
See how FLUX.2 [dev] performs head-to-head against other AI models, ranked by community votes in blind comparisons.
Use Cases
The model performs strongest in portrait generation, where it holds rank #11 with a 42.9% win rate, though its photorealism ranking sits at #15. It excels at text-to-image synthesis and complex image editing tasks where anatomical accuracy and prompt adherence are critical.
Gallery
FLUX.2 [dev] FAQ
How much does FLUX.2 [dev] cost?
FLUX.2 [dev] starts at $0.012 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use FLUX.2 [dev] via API?
You can use FLUX.2 [dev] through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "flux.2-dev". Code examples are available in Python, JavaScript, and cURL.
Which providers offer FLUX.2 [dev]?
FLUX.2 [dev] is available through fal.ai and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What is the maximum resolution for FLUX.2 [dev]?
FLUX.2 [dev] supports images up to 2048x2048 resolution.
Overview
FLUX.2 [dev] is an open-weights image generation model developed by Black Forest Labs, designed to offer frontier-level performance for non-commercial applications. It serves as an intermediate iteration between high-speed distilled models and large-scale professional versions, balancing computational efficiency with high visual fidelity. The model is specifically engineered to handle complex text-to-image prompts through a refined Rectified Flow architecture.
Strengths
- High Text Rendering Accuracy: The model demonstrates significant improvements in rendering legible, correctly spelled text within generated images, even in complex layouts or unconventional fonts.
- Instruction Adherence: It excels at following multi-part prompts that specify spatial relationships, color palettes, and specific lighting conditions without losing detail in the background.
- Anatomical Realism: Compared to previous iterations in the FLUX family, this version shows increased stability in generating human anatomy, particularly regarding hands, limb articulation, and skin textures.
- Compositional Diversity: The model is less prone to “canonical” centering, allowing for more dynamic framing and varied perspectives based on descriptive text.
Limitations
- Non-Commercial Licensing: Unlike the “schnell” variants or standard open-source models, FLUX.2 [dev] is restricted to non-commercial use, which limits its application in production environments or for-profit products.
- Hardware Requirements: While designed for local deployment, the model still requires significant VRAM to run at full precision, making it less accessible for entry-level consumer GPUs without quantization.
- Inference Latency: It prioritizes output quality over generation speed, meaning it is noticeably slower than distilled 4-step models.
Technical Background
FLUX.2 [dev] is built on a Rectified Flow-based transformer architecture, which improves upon traditional diffusion methods by straightening the trajectory from noise to image. This approach allows for more efficient sampling and better alignment between the text encoder and the visual output. The training process leverages a massive-scale dataset designed to enhance the model’s understanding of complex semantics and nuanced visual concepts.
Best For
This model is best suited for visual researchers, creative hobbyists, and developers prototyping new image generation workflows who require high-quality visual outputs without the constraints of a closed API. It is particularly useful for projects requiring precise typography or complex scene composition. FLUX.2 [dev] is available for experimentation and integration through Lumenfall’s unified API and interactive playground, allowing you to compare its performance against other models in its class.
Try FLUX.2 [dev] in Playground
Generate images with custom prompts — no API key needed.