Black Forest Labs' 12-billion parameter flow transformer for high-quality text-to-image generation, suitable for personal and commercial use with streaming support
Details
flux.1-dev
Starting from $0.025 per image (prices shown are in USD)
Providers & Pricing (2)
FLUX.1 [dev] is available from 2 providers, with per-image pricing starting at $0.025 through fal.ai.
Provider model IDs (all modes):
- fal/flux.1-dev
- replicate/flux.1-dev
flux.1-dev API (OpenAI-compatible)
Lumenfall provides an OpenAI-compatible API to integrate FLUX.1 [dev] for high-fidelity text-to-image generation and streaming media workflows.
Base URL: https://api.lumenfall.ai/openai/v1
Code Examples
Text to Image
POST /v1/images/generations

```shell
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux.1-dev",
    "prompt": "A serene mountain lake at sunset",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'flux.1-dev',
  prompt: 'A serene mountain lake at sunset',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="flux.1-dev",
    prompt="A serene mountain lake at sunset",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```
Image Edit
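The Image Edit mode posts to /v1/images/edits. A minimal stdlib sketch of the request shape — field names follow the parameter reference below, but embedding the input image as a base64 data URI is an assumption (some gateways expect multipart/form-data instead; check Lumenfall's docs for the exact upload format):

```python
import base64
import json
import os
import urllib.request

BASE_URL = "https://api.lumenfall.ai/openai/v1"


def build_edit_payload(prompt: str, image_path: str, size: str = "1024x1024") -> dict:
    """Assemble a JSON body for /v1/images/edits, embedding the input
    image as a base64 data URI (assumed upload format)."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {
        "model": "flux.1-dev",
        "prompt": prompt,
        "image": f"data:image/png;base64,{b64}",
        "size": size,
    }


def edit_image(payload: dict) -> dict:
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/images/edits",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['LUMENFALL_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# result = edit_image(build_edit_payload("Replace the sky with a sunset", "input.png"))
# print(result["data"][0]["url"])
```

The official OpenAI SDKs work the same way against this endpoint; the raw-HTTP version is shown here only to make the request body explicit.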
POST /v1/images/edits

Parameter Reference
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| prompt | string | Required. Text prompt for image generation | T2I, Edit |
| seed | integer | Random seed for reproducibility | T2I, Edit |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| size | string | Image dimensions as WxH pixels (e.g. "1024x1024") or an aspect ratio (e.g. "16:9"). WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided); the W:H format is equivalent to aspect_ratio. | T2I, Edit |
| aspect_ratio | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I, Edit |
| resolution | string | Output resolution tier (e.g. "1K", "4K"). Default: "1K". Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I, Edit |
Supported sizes

Flexible (custom): 1–14142px per side

| Output | size | aspect_ratio + resolution |
|---|---|---|
| Any pixel dimensions within model constraints | "WxH" | — |

1K (11 sizes)

| Output | size | aspect_ratio + resolution |
|---|---|---|
| 1183 × 887 | "1183x887" | "4:3" + "1K" |
| 916 × 1145 | "916x1145" | "4:5" + "1K" |
| 1145 × 916 | "1145x916" | "5:4" + "1K" |
| 1024 × 1024 | "1024x1024" | "1:1" + "1K" |
| 887 × 1182 | "887x1182" | "3:4" + "1K" |
| 836 × 1254 | "836x1254" | "2:3" + "1K" |
| 1254 × 836 | "1254x836" | "3:2" + "1K" |
| 768 × 1365 | "768x1365" | "9:16" + "1K" |
| 1365 × 768 | "1365x768" | "16:9" + "1K" |
| 670 × 1564 | "670x1564" | "9:21" + "1K" |
| 1563 × 670 | "1563x670" | "21:9" + "1K" |
How these parameters work
- size: exact pixel dimensions, e.g. "1920x1080"
- aspect_ratio: shape only, at the default scale, e.g. "16:9"
- resolution: scale tier, preserves shape, e.g. "1K"

Priority when combined: size is the most specific and always wins; aspect_ratio and resolution control shape and scale independently.
How matching works
Ratios are mapped to the nearest supported value: requesting 7:1 on a model that supports 4:1 and 8:1 yields 8:1. Resolution tiers may be named ("0.5K", "1K", "2K", "4K") or expressed in megapixels ("0.25", "1"). If the exact tier isn't available, you get the nearest one.
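A sketch of how nearest-value matching could work for aspect ratios (illustrative only — the gateway's actual matching logic isn't published):

```python
def nearest_ratio(requested: str, supported: list[str]) -> str:
    """Return the supported W:H ratio numerically closest to the request."""
    def as_float(ratio: str) -> float:
        w, h = ratio.split(":")
        return float(w) / float(h)

    target = as_float(requested)
    # Pick the supported ratio minimizing the distance to the request.
    return min(supported, key=lambda r: abs(as_float(r) - target))


# As in the example above: 7:1 is nearer to 8:1 than to 4:1.
print(nearest_ratio("7:1", ["4:1", "8:1"]))  # 8:1
```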
Media Inputs
| Parameter | Type | Description | Modes |
|---|---|---|---|
| image | file | Required. Input image(s) to edit. Supports PNG, JPEG, WebP. | Edit |
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| response_format | string | How to return the image: "url" or "b64_json". Default: "url" | T2I, Edit |
| output_format | string | Output image format: png, jpeg, gif, webp, avif. The gateway converts to the requested format if the provider doesn't support it natively. | T2I, Edit |
| output_compression | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit |
| n | integer | Number of images to generate. Default: 1. The gateway generates multiple images in parallel even if the provider only supports 1. | T2I, Edit |
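With response_format set to "b64_json", each entry in data carries the image inline in a b64_json field (per the OpenAI images schema) rather than a URL. A small helper to decode and save it:

```python
import base64


def decode_b64_image(b64_data: str) -> bytes:
    """Decode the b64_json field of an image response into raw bytes."""
    return base64.b64decode(b64_data)


def save_image(b64_data: str, path: str) -> int:
    """Write the decoded image to disk; returns the byte count."""
    raw = decode_b64_image(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)


# After a request with response_format="b64_json":
# save_image(response.data[0].b64_json, "output.png")
```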
Additional Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| cfg_scale | number | Classifier-free guidance scale; higher values stick more closely to the prompt | T2I, Edit |
| strength | number | How much to transform the input image: 0 keeps it unchanged, 1 fully regenerates from the prompt | T2I, Edit |
| acceleration (fal) | string | Generation speed: "none", "regular", or "high"; higher settings generate faster | T2I, Edit |
| disable_safety_checker (replicate) | boolean | Disable the safety checker for generated images | T2I, Edit |
| enable_safety_checker (fal) | boolean | If true, the safety checker is enabled | T2I, Edit |
| go_fast (replicate) | boolean | Run faster predictions with a speed-optimized model (currently fp8-quantized); disable to run in the original bf16. Outputs are not deterministic when this is enabled, even with a fixed seed | T2I, Edit |
| megapixels (replicate) | string | Approximate megapixels for the generated image: "0.25" or "1" | T2I, Edit |
| num_inference_steps | integer | Number of inference steps to perform | T2I, Edit |
| output_quality (replicate) | integer | Quality when saving output images, 0–100 (100 is best). Not relevant for .png outputs | T2I, Edit |
| sync_mode (fal) | boolean | If true, the media is returned as a data URI and the output data is not available in the request history | T2I, Edit |
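Provider-specific parameters such as num_inference_steps or go_fast aren't part of the standard OpenAI SDK method signatures, so they go into the raw request body. A sketch of assembling such a payload (field names taken from the table above):

```python
def build_generation_payload(prompt: str, size: str = "1024x1024", **provider_params) -> dict:
    """Standard fields plus any provider-specific extras, which the
    gateway passes through (or normalizes) as described below."""
    payload = {"model": "flux.1-dev", "prompt": prompt, "size": size}
    payload.update(provider_params)
    return payload


body = build_generation_payload(
    "A lighthouse in a storm",
    num_inference_steps=28,
    go_fast=False,  # replicate-only: keep bf16 so the seed stays deterministic
    seed=42,
)
```

With the official OpenAI Python SDK, the same extras can be supplied via the extra_body keyword, e.g. client.images.generate(..., extra_body={"num_inference_steps": 28}).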
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | style, quality |
| renamed | Same value, mapped to the field name the provider expects | prompt |
| converted | Transformed to the provider's native format | size |
| emulated | Works even if the provider has no concept of it | n, response_format |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
Gallery (3 images)
FLUX.1 [dev] FAQ
How much does FLUX.1 [dev] cost?
FLUX.1 [dev] starts at $0.025 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use FLUX.1 [dev] via API?
You can use FLUX.1 [dev] through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "flux.1-dev". Code examples are available in Python, JavaScript, and cURL.
Which providers offer FLUX.1 [dev]?
FLUX.1 [dev] is available through fal.ai and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What is the maximum resolution for FLUX.1 [dev]?
FLUX.1 [dev] supports images up to 2048x2048 resolution.
Overview
FLUX.1 [dev] is a 12-billion parameter text-to-image synthesis model developed by Black Forest Labs. As an open-weight model derived from the FLUX.1 [pro] architecture, it is designed for high-fidelity image generation while remaining accessible for non-commercial development. It is distinguished by its use of flow matching, which allows it to generate images with higher composition quality and structural integrity than traditional diffusion-based models of similar size.
Strengths
- Precise Text Rendering: The model excels at following complex prompts requiring the inclusion of specific text, exhibiting high character accuracy and legibility in generated signs, labels, and documents.
- Anatomical Accuracy: It shows a significant reduction in common AI artifacts, such as distorted hands or inconsistent limb counts, producing more anatomically correct human figures.
- Prompt Adherence: The architecture is highly responsive to detailed, long-form descriptions, maintaining high fidelity to nuanced instructions regarding lighting, camera angles, and object placement.
- Visual Variety: It is capable of generating a wide range of styles, from photorealistic portraits to stylized digital art, without requiring extensive LoRA fine-tuning for basic aesthetic changes.
Limitations
- Hardware Requirements: With 12 billion parameters, the model is computationally heavy; running it locally requires substantial VRAM (typically 24GB or more) compared to smaller models like Stable Diffusion XL.
- Inference Speed: While it supports streaming, the generation process is inherently slower than “schnell” or distilled versions of the same family due to the higher step count required for optimal results.
- Licensing Constraints: Unlike the [schnell] variant, the [dev] model is released under a non-commercial license, which may limit its direct use in some production environments without a commercial agreement.
Technical Background
FLUX.1 [dev] is built on a flow-based transformer architecture. Rather than relying on standard latent diffusion, it utilizes flow matching—a method that learns a vector field to map a simple noise distribution to the target data distribution. This approach, combined with its high parameter count, allows the model to capture more complex spatial relationships and finer details during the sampling process.
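As a sketch of the idea, the standard conditional flow-matching objective (not necessarily Black Forest Labs' exact training setup): given noise $x_0 \sim \mathcal{N}(0, I)$ and a data sample $x_1$, the model learns the velocity field along a straight-line path between them:

```latex
x_t = (1 - t)\,x_0 + t\,x_1, \qquad t \in [0, 1]

\mathcal{L}_{\mathrm{CFM}} = \mathbb{E}_{t,\,x_0,\,x_1}
  \bigl\| v_\theta(x_t, t) - (x_1 - x_0) \bigr\|^2
```

Sampling then integrates the learned ODE $\mathrm{d}x/\mathrm{d}t = v_\theta(x_t, t)$ from noise at $t = 0$ to an image at $t = 1$, often in fewer steps than ancestral diffusion sampling requires.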
Best For
This model is best suited for developers building high-end creative tools, designers requiring precise typography within images, and researchers exploring the limits of flow-matching models. It is an ideal choice for tasks where image quality and prompt accuracy are more critical than raw generation speed. FLUX.1 [dev] is available for testing and integration through Lumenfall’s unified API and interactive playground.
Try FLUX.1 [dev] in Playground
Generate images with custom prompts — no API key needed.