Black Forest Labs' open-weights multimodal flow transformer for in-context image generation and editing, available for non-commercial use with character consistency and style transfer capabilities
Details
flux.1-kontext-dev
Starting from $0.025 per image. Prices shown are in USD.
Providers & Pricing (3)
FLUX.1 Kontext [dev] is available from 3 providers, with per-image pricing starting at $0.025 through fal.ai.
fal/flux.1-kontext-dev
fal/flux.1-kontext-dev-edit
replicate/flux.1-kontext-dev
FLUX.1 Kontext [dev] API (OpenAI-compatible)
Integrate FLUX.1 Kontext [dev] into your application via the Lumenfall OpenAI-compatible API to perform advanced text-to-image generation and precise image editing using flow transformer technology.
https://api.lumenfall.ai/openai/v1
flux.1-kontext-dev
Code Examples
Text to Image
/v1/images/generations
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux.1-kontext-dev",
    "prompt": "A cozy cabin in a snowy forest at dusk",
    "size": "1024x1024"
  }'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'flux.1-kontext-dev',
  prompt: 'A cozy cabin in a snowy forest at dusk',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="flux.1-kontext-dev",
    prompt="A cozy cabin in a snowy forest at dusk",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
Image Edit
/v1/images/edits
Parameter Reference
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| prompt | string | Required. Text prompt for generation, or the edit instruction when editing | T2I, Edit |
| seed | integer | Random seed for reproducibility | T2I, Edit |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| size | string | Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided); the W:H form is equivalent to aspect_ratio | T2I, Edit |
| aspect_ratio | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value | T2I, Edit |
| resolution | string | Output resolution tier; this model supports "auto" and "1K". Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value | T2I, Edit |
Flexible

| Output | size | aspect_ratio + resolution | Notes |
|---|---|---|---|
| Auto | "auto" | — | Model chooses optimal dimensions |
| Custom (1–14142px per side) | "WxH" | — | Any pixel dimensions within model constraints |

1K (11 sizes)
Each preset below can be requested either by exact size or by the equivalent aspect_ratio + resolution pair.

| Output | size | aspect_ratio + resolution |
|---|---|---|
| 1183 × 887 | "1183x887" | "4:3" + "1K" |
| 916 × 1145 | "916x1145" | "4:5" + "1K" |
| 1145 × 916 | "1145x916" | "5:4" + "1K" |
| 1024 × 1024 | "1024x1024" | "1:1" + "1K" |
| 887 × 1182 | "887x1182" | "3:4" + "1K" |
| 836 × 1254 | "836x1254" | "2:3" + "1K" |
| 1254 × 836 | "1254x836" | "3:2" + "1K" |
| 768 × 1365 | "768x1365" | "9:16" + "1K" |
| 1365 × 768 | "1365x768" | "16:9" + "1K" |
| 670 × 1564 | "670x1564" | "9:21" + "1K" |
| 1563 × 670 | "1563x670" | "21:9" + "1K" |
How these parameters work
size: exact pixel dimensions, e.g. "1920x1080"
aspect_ratio: shape only, default scale, e.g. "16:9"
resolution: scale tier, preserves shape, e.g. "1K"

Priority when combined
size is the most specific and always wins. aspect_ratio and resolution control shape and scale independently.

How matching works
Aspect ratios are mapped to the nearest supported value: request 7:1 on a model that supports 4:1 and 8:1, and you get 8:1. Resolution tiers may be labeled (0.5K, 1K, 2K, 4K) or expressed in megapixels (0.25, 1); if the exact tier isn't available, you get the nearest one.
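The precedence and nearest-match rules above can be sketched in a few lines. This is an illustrative re-implementation, not the gateway's actual code; the function names, the supported-ratio list (taken from the 1K table above), and the use of simple linear distance for "nearest" are all assumptions:

```python
from fractions import Fraction

# Supported ratios for this model, from the 1K size table above.
SUPPORTED_RATIOS = ["4:3", "4:5", "5:4", "1:1", "3:4", "2:3", "3:2",
                    "9:16", "16:9", "9:21", "21:9"]

def nearest_ratio(requested: str) -> str:
    """Map any W:H string to the nearest supported aspect ratio."""
    want = float(Fraction(*map(int, requested.split(":"))))
    return min(SUPPORTED_RATIOS,
               key=lambda r: abs(float(Fraction(*map(int, r.split(":")))) - want))

def resolve(size=None, aspect_ratio=None, resolution=None):
    """size wins outright; otherwise aspect_ratio and resolution combine."""
    if size:
        if "x" in size:          # "WxH": exact pixels fix both shape and scale
            return {"pixels": size}
        aspect_ratio = size      # "W:H" is shorthand for aspect_ratio
    return {"ratio": nearest_ratio(aspect_ratio or "1:1"),
            "tier": resolution or "1K"}   # tier matching works analogously
```

For example, `resolve(aspect_ratio="16:10", resolution="1K")` snaps 16:10 to the nearest supported ratio (3:2), while `resolve(size="1024x1024")` bypasses ratio matching entirely.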
Media Inputs
| Parameter | Type | Description | Modes |
|---|---|---|---|
| image | file | Required for editing. Input image(s) to edit. Supports PNG, JPEG, WebP | Edit |
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| response_format | string | How to return the image: "url" or "b64_json". Default: "url" | T2I, Edit |
| output_format | string | Output image format: png, jpeg, gif, webp, avif. The gateway converts to the requested format if the provider doesn't support it natively | T2I, Edit |
| output_compression | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit |
| n | integer | Number of images to generate. Default: 1. The gateway generates multiple images in parallel even if the provider only supports 1 | T2I, Edit |
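When `response_format` is "b64_json", each item in `data` carries the image bytes inline instead of a URL. A small sketch of decoding and saving them, assuming the standard OpenAI images response schema (the helper name and file-naming scheme are illustrative):

```python
import base64

def save_b64_images(data: list, stem: str = "out") -> list:
    """Decode b64_json payloads from an images response and write them to disk."""
    paths = []
    for i, item in enumerate(data):
        raw = base64.b64decode(item["b64_json"])   # inline image bytes
        path = f"{stem}-{i}.png"
        with open(path, "wb") as f:
            f.write(raw)
        paths.append(path)
    return paths
```

With `n` greater than 1, `data` contains one entry per generated image, so this loop writes `out-0.png`, `out-1.png`, and so on.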
Additional Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| cfg_scale | number | Classifier-free guidance scale; higher values stick more closely to the prompt | T2I, Edit |
| acceleration (fal) | string | Generation speed: none, regular, or high. Higher settings generate faster | T2I, Edit |
| disable_safety_checker (replicate) | boolean | Disable the NSFW safety checker | T2I, Edit |
| enable_safety_checker (fal) | boolean | If true, the safety checker is enabled | T2I, Edit |
| enhance_prompt (fal) | boolean | Whether to enhance the prompt for better results | T2I, Edit |
| num_inference_steps | integer | Number of inference steps to perform | T2I, Edit |
| output_quality (replicate) | integer | Quality when saving output images, 0–100 (100 is best, 0 is lowest). Not relevant for .png outputs | T2I, Edit |
| resolution_mode (fal) | string | How the output resolution is set for image editing. `auto`: the model selects an optimal resolution from a predefined set that best matches the input image's aspect ratio; recommended for most use cases, as it matches the model's training. `match_input`: the model attempts to use the input image's resolution, adjusted to the model's requirements (e.g. dimensions must be multiples of 16 and within supported limits). A few additional aspect ratios are also supported | T2I, Edit |
| sync_mode (fal) | boolean | If `True`, the media is returned as a data URI and the output data won't be available in the request history | T2I, Edit |
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | style, quality |
| renamed | Same value, mapped to the field name the provider expects | prompt |
| converted | Transformed to the provider's native format | size |
| emulated | Works even if the provider has no concept of it | n, response_format |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
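The four behaviors can be sketched as a small dispatch over a rule table. Everything here is a toy illustration: the rule entries, the provider field name `text`, and the width/height conversion are invented for the example, not Lumenfall's actual mappings:

```python
# Hypothetical normalization rules: unified param -> (behavior, provider field).
RULES = {
    "style":  ("passthrough", None),
    "prompt": ("renamed", "text"),     # pretend this provider expects "text"
    "size":   ("converted", None),     # "WxH" -> {"width": W, "height": H}
    "n":      ("emulated", None),      # handled by the gateway, not sent upstream
}

def normalize(params: dict):
    """Split unified params into a provider payload and gateway-emulated extras."""
    payload, emulated = {}, {}
    for key, value in params.items():
        behavior, target = RULES.get(key, ("passthrough", None))
        if behavior == "renamed":
            payload[target] = value
        elif behavior == "converted" and key == "size":
            w, h = map(int, value.split("x"))
            payload.update(width=w, height=h)
        elif behavior == "emulated":
            emulated[key] = value        # gateway fans out / post-processes
        else:                            # passthrough, incl. unrecognized params
            payload[key] = value
    return payload, emulated
```

Note that unrecognized keys fall through the `RULES` lookup into passthrough, which mirrors the behavior described above for provider-specific options.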
FLUX.1 Kontext [dev] FAQ
How much does FLUX.1 Kontext [dev] cost?
FLUX.1 Kontext [dev] starts at $0.025 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.
How do I use FLUX.1 Kontext [dev] via API?
You can use FLUX.1 Kontext [dev] through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "flux.1-kontext-dev". Code examples are available in Python, JavaScript, and cURL.
Which providers offer FLUX.1 Kontext [dev]?
FLUX.1 Kontext [dev] is available through fal.ai and Replicate on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What is the maximum resolution for FLUX.1 Kontext [dev]?
FLUX.1 Kontext [dev] supports images up to 2048x2048 resolution.
Overview
FLUX.1 Kontext [dev] is an open-weights multimodal flow transformer developed by Black Forest Labs, designed specifically for in-context image generation and editing. It extends the foundational FLUX.1 architecture to allow for complex image-to-image workflows, enabling users to maintain consistent characters or styles across different compositions. This model is intended for non-commercial development and research, offering a high-fidelity bridge between text prompts and visual reference inputs.
Strengths
- Character Consistency: The model excels at maintaining the identity and features of a specific subject across multiple generated frames by leveraging reference images as “context.”
- Zero-Shot Style Transfer: It can adapt the aesthetic, color palette, and texture of a target image onto a new prompt without requiring specific LoRA training or fine-tuning.
- Complex Attribute Mapping: It demonstrates high accuracy in following dense textual instructions while respecting the spatial constraints and structural information provided in the input image.
- Prompt Adherence: Like other models in the FLUX.1 family, it minimizes common artifacts in hand rendering and manages high-density text within images effectively.
Limitations
- Non-Commercial License: The [dev] version is released under a restrictive license that prohibits revenue-generating applications, making it unsuitable for production environments without further licensing.
- Hardware Intensity: Due to the flow transformer architecture and the multimodal input requirements, it demands significant VRAM and compute compared to standard latent diffusion models.
- Prompt Sensitivity: Achieving the perfect balance between the input image context and the text prompt can require iterative testing, as the model may occasionally over-index on the reference image at the expense of prompt instructions.
Technical Background
FLUX.1 Kontext [dev] is built on a multimodal flow transformer architecture, a departure from traditional U-Net-based diffusion models. This approach uses flow matching to improve training efficiency and sampling quality. By integrating text and image embeddings into a shared latent space, the model treats visual context as a primary input alongside textual tokens, allowing for more natural in-context learning during the generation process.
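Flow matching is commonly summarized by the rectified-flow objective used across this model family. The formulation below is the general one; Black Forest Labs has not published Kontext [dev]'s exact training loss, so treat it as representative rather than definitive:

```latex
% Linear interpolation between noise x_0 and data x_1:
%   x_t = (1 - t) x_0 + t x_1
% The network v_theta regresses the constant velocity (x_1 - x_0),
% with c denoting the conditioning (text and reference-image tokens):
\mathcal{L}_{\mathrm{FM}}
  = \mathbb{E}_{t \sim \mathcal{U}[0,1],\; x_0 \sim \mathcal{N}(0, I),\; x_1 \sim p_{\mathrm{data}}}
    \left\| v_\theta\!\left(x_t,\, t,\, c\right) - \left(x_1 - x_0\right) \right\|^2,
\qquad x_t = (1 - t)\, x_0 + t\, x_1
```

Sampling then integrates the learned velocity field from noise to data, typically with far fewer steps than DDPM-style samplers need.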
Best For
FLUX.1 Kontext [dev] is best suited for storyboarding, character design sheets, and stylistic exploration where visual continuity is required across a series of images. It is an excellent choice for developers experimenting with advanced image-editing pipelines or researchers studying multimodal integration in large-scale generative models. You can experiment with its in-context capabilities through the Lumenfall unified API and playground, which simplifies the integration of its multimodal inputs into your development workflow.
Try FLUX.1 Kontext [dev] in Playground
Generate images with custom prompts — no API key needed.