FLUX.1 [schnell] FP8

AI Image Generation Model

Image · Free

FP8-quantized variant of Black Forest Labs' FLUX.1 [schnell] model, offering roughly 2x faster inference at reduced precision while maintaining high-quality image generation in 4 steps.

Example outputs coming soon

Max Resolution
1024 x 1024
Supported Modes
Text to Image
Status
Active

Details

Model ID
flux.1-schnell-fp8
Family
flux.1
Released
October 2024
Tags
image-generation text-to-image fast open-weights quantized
Get Started

Ready to integrate?

Access flux.1-schnell-fp8 via our unified API.

Create Account
Available at 1 provider · Free

Prices shown are in USD

Full pricing details

Provider Performance

Fastest generation through fireworks at 1,564ms median latency with 95.2% success rate.

Aggregated from real API requests over the last 30 days.

Generation Time

fireworks
1,564ms
p95: 2,958ms

Success Rate

fireworks
95.2%
334 / 351 requests

Time to First Byte

fireworks
795ms
p95: 1,235ms
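The figures above are aggregates over recent API requests: p50 is the median, p95 the 95th-percentile latency. As an illustrative sketch (not Lumenfall's actual pipeline), such stats can be derived with a nearest-rank percentile and a rounded success share:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value covering p% of the sorted samples."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(p * len(ordered) / 100) - 1)
    return ordered[rank]

def success_rate(succeeded, total):
    """Share of successful requests, as a percentage rounded to one decimal."""
    return round(100 * succeeded / total, 1)
```

For example, with 334 of 351 requests succeeding, `success_rate(334, 351)` yields 95.2, matching the table above.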

Provider Rankings

# | Provider  | p50 Gen Time | p95 Gen Time | Success Rate | TTFB (p50)
1 | fireworks | 1,564ms      | 2,958ms      | 95.2%        | 795ms
Data updated every 15 minutes. Based on all API requests through Lumenfall over the last 30 days.

Providers & Pricing (1)

FLUX.1 [schnell] FP8 is free to use through Fireworks AI.

Fireworks AI
fireworks/flux.1-schnell-fp8
Provider Model ID: accounts/fireworks/models/flux-1-schnell-fp8/text_to_image

Output

Image
Free per image
Pricing Notes (4)
  • Free to try
  • Normally priced at $0.00035 per inference step
  • FLUX.1 [schnell] uses 4 steps by default, making the effective per-image cost $0.0014
  • FP8 variant uses reduced precision for ~2x faster inference
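The per-image arithmetic from the notes above, as a one-liner (this is the standard rate; the model is currently free on Lumenfall):

```python
PRICE_PER_STEP = 0.00035   # USD per inference step (standard, non-free rate)
DEFAULT_STEPS = 4          # FLUX.1 [schnell] default step count

cost_per_image = PRICE_PER_STEP * DEFAULT_STEPS  # effective per-image cost in USD
```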

FLUX.1 [schnell] FP8 API OpenAI-compatible

Developers can generate images with FLUX.1 [schnell] FP8 via Lumenfall's OpenAI-compatible API; the FP8 quantization delivers roughly 2x faster inference than the full-precision model.

Base URL
https://api.lumenfall.ai/openai/v1
Model
flux.1-schnell-fp8

Code Examples

Text to Image

/v1/images/generations
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux.1-schnell-fp8",
    "prompt": "",
    "size": "1024x1024"
  }'
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
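The same request can be assembled in Python. The sketch below only builds the JSON body and extracts the image URL from a response of the documented shape; it does not perform the HTTP call, and the prompt and URL are illustrative:

```python
import json

# Request body for POST /v1/images/generations
payload = {
    "model": "flux.1-schnell-fp8",
    "prompt": "a lighthouse at dusk, photorealistic",
    "size": "1024x1024",
}
body = json.dumps(payload)

# A response of the documented shape:
response = {
    "created": 1234567890,
    "data": [{"url": "https://example.com/image.png", "revised_prompt": "..."}],
}
image_url = response["data"][0]["url"]
```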

Parameter Reference

Legend: Required · Supported · Not available

Core Parameters

Parameter | Type | Description | Modes
prompt | string | Required. Text prompt for image generation | T2I

Size & Layout

Parameter | Type | Description | Modes
size | string | Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). WxH sets both shape and scale (aspect_ratio and resolution are ignored when size is provided); the W:H form is equivalent to aspect_ratio. | T2I
aspect_ratio | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I
resolution | string | Output resolution tier (e.g. "1K", "4K"). Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I
  • size: exact pixel dimensions, e.g. "1920x1080"
  • aspect_ratio: shape only, default scale, e.g. "16:9"
  • resolution: scale tier, preserves shape, e.g. "1K"

Priority when combined

size > aspect_ratio + resolution > aspect_ratio > resolution

size is most specific and always wins. aspect_ratio and resolution control shape and scale independently.
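The precedence can be sketched as a simple resolver (illustrative only, not Lumenfall's internal code):

```python
def resolve_sizing(size=None, aspect_ratio=None, resolution=None):
    """Return which sizing input wins, following the documented precedence:
    size > aspect_ratio + resolution > aspect_ratio > resolution."""
    if size is not None:
        return ("size", size)
    if aspect_ratio is not None and resolution is not None:
        return ("aspect_ratio+resolution", (aspect_ratio, resolution))
    if aspect_ratio is not None:
        return ("aspect_ratio", aspect_ratio)
    if resolution is not None:
        return ("resolution", resolution)
    return ("default", None)
```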

How matching works

  • Shape matching – we pick the closest supported ratio. Ask for 7:1 on a model with 4:1 and 8:1, and you get 8:1.
  • Scale matching – providers use different tier formats: K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25, 1). If the exact tier isn't available, you get the nearest one.
  • Dimension clamping – if a model has pixel limits, we clamp dimensions to fit while keeping the aspect ratio intact.
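The matching rules above can be sketched as follows (illustrative: the log-space distance metric is an assumption, and the supported-ratio list is hypothetical):

```python
import math

def parse_ratio(r):
    """Turn a "W:H" string into a numeric width/height ratio."""
    w, h = r.split(":")
    return float(w) / float(h)

def nearest_ratio(requested, supported):
    """Map a requested W:H ratio to the closest supported one
    (log-space distance, so wide and tall ratios are treated symmetrically)."""
    target = parse_ratio(requested)
    return min(supported, key=lambda r: abs(math.log(parse_ratio(r) / target)))

def clamp_dimensions(width, height, max_width, max_height):
    """Scale dimensions down to fit pixel limits while keeping the aspect ratio."""
    scale = min(max_width / width, max_height / height, 1.0)
    return round(width * scale), round(height * scale)
```

With this metric, requesting 7:1 against supported ratios 4:1 and 8:1 selects 8:1, matching the example above.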

Output & Format

Parameter | Type | Description | Modes
response_format | string | How to return the image: "url" or "b64_json". Default: "url" | T2I
output_format | string | Output image format: png, jpeg, gif, webp, avif. Gateway converts to the requested format if the provider doesn't support it natively | T2I
output_compression | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I
n | integer | Number of images to generate. Default: 1. Gateway generates multiple images in parallel even if the provider only supports 1 | T2I
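The n-greater-than-1 behavior noted above can be emulated with a simple parallel fan-out: when a provider only accepts one image per request, the gateway can issue n single-image requests concurrently. A minimal sketch, where `generate_one` is a hypothetical stand-in for a provider call:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_one(prompt, index):
    """Hypothetical single-image provider call; returns a stub result here."""
    return {"prompt": prompt, "index": index}

def generate_n(prompt, n):
    """Emulate n > 1 by fanning out n single-image requests in parallel."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda i: generate_one(prompt, i), range(n)))
```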

Parameter Normalization

How we handle parameters across different providers

Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:

Behavior | What happens | Example
passthrough | Sent as-is to the provider | style, quality
renamed | Same value, mapped to the field name the provider expects | prompt
converted | Transformed to the provider's native format | size
emulated | Works even if the provider has no concept of it | n, response_format

Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
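As a sketch of the dispatch logic (the rule table and target field names are hypothetical, not Lumenfall's actual mappings):

```python
# Per-parameter behavior for one hypothetical provider.
RULES = {
    "style": ("passthrough", None),
    "prompt": ("renamed", "text_prompt"),
    "size": ("converted", lambda v: dict(zip(("width", "height"), map(int, v.split("x"))))),
}

def normalize(params):
    """Apply passthrough/renamed/converted rules; unknown params pass through.
    (Emulated params like n are handled at the gateway layer, not shown here.)"""
    out = {}
    for key, value in params.items():
        behavior, arg = RULES.get(key, ("passthrough", None))
        if behavior == "renamed":
            out[arg] = value
        elif behavior == "converted":
            out.update(arg(value))
        else:
            out[key] = value
    return out
```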

FLUX.1 [schnell] FP8 FAQ

How much does FLUX.1 [schnell] FP8 cost?

FLUX.1 [schnell] FP8 is free to use through Lumenfall's unified API.

How do I use FLUX.1 [schnell] FP8 via API?

You can use FLUX.1 [schnell] FP8 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "flux.1-schnell-fp8". Code examples are available in Python, JavaScript, and cURL.

Which providers offer FLUX.1 [schnell] FP8?

FLUX.1 [schnell] FP8 is available through Fireworks AI on Lumenfall. Lumenfall automatically routes requests to the best available provider.

What is the maximum resolution for FLUX.1 [schnell] FP8?

FLUX.1 [schnell] FP8 supports images up to 1024x1024 resolution.

Overview

FLUX.1 [schnell] FP8 is a quantized version of Black Forest Labs’ distilled text-to-image model, optimized for maximum inference speed. By utilizing 8-bit floating-point precision, this variant achieves significantly lower latency and reduced memory overhead compared to the standard model. It is specifically designed for high-throughput applications where generating competitive imagery in a handful of steps is the primary requirement.

Strengths

  • Generation Speed: Produces usable 1024x1024 images in just 1 to 4 sampling steps, making it one of the fastest high-resolution open-weight models available.
  • Resource Efficiency: FP8 quantization reduces the VRAM footprint and computational load, allowing roughly 2x faster inference than the full-precision version without a proportional loss in visual quality.
  • Prompt Adherence: Despite the lowered precision and distillation, the model retains the architectural ability to follow complex descriptive prompts and render legible, coherent text within images.
  • Output Consistency: It maintains the structural integrity and composition characteristic of the FLUX.1 family, even at extremely low step counts.

Limitations

  • Artistic Nuance: Due to the distillation and quantization, it offers less stylistic flexibility and fine-grained detail compared to the [dev] or [pro] iterations of FLUX.1.
  • Precision Loss: FP8 quantization can occasionally lead to minor artifacts or less smooth gradients in complex lighting scenarios that would be better handled by 16-bit or 32-bit models.
  • Step Sensitivity: The model is strictly tuned for low-step counts; increasing the sampling steps beyond the recommended range usually yields diminishing returns or visual regressions.

Technical Background

FLUX.1 [schnell] is a latent diffusion model based on a flow-based transformer architecture. This specific FP8 variant applies post-training quantization to the model weights, mapping them to 8-bit precision to optimize throughput on modern hardware. The “schnell” version itself is the result of a performance-oriented distillation process, allowing the model to reach a converged image state in a fraction of the time required by standard diffusion processes.
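To make the quantization step concrete, here is a simplified sketch of e4m3-style rounding with a per-tensor scale. This is illustrative only: real FP8 kernels handle subnormals, saturation modes, and hardware-specific details that this toy version ignores.

```python
import math

FP8_MAX = 448.0  # largest normal value in the e4m3 format

def round_fp8_e4m3(x):
    """Round x to a simplified e4m3 grid: 3 mantissa bits, clamped to +/-448."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = min(abs(x), FP8_MAX)
    mantissa, exponent = math.frexp(mag)    # mag = mantissa * 2**exponent
    mantissa = round(mantissa * 16) / 16    # keep 4 significant bits
    return sign * math.ldexp(mantissa, exponent)

def quantize_tensor(weights):
    """Post-training quantization: scale weights into FP8 range, round, and keep
    the scale so values can be dequantized later (w is approximately q * scale)."""
    scale = max(abs(w) for w in weights) / FP8_MAX
    return [round_fp8_e4m3(w / scale) for w in weights], scale
```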

Best For

This model is ideal for real-time applications, rapid prototyping, and high-volume image generation workflows where operational cost and latency are critical. It is a strong choice for “generate-as-you-type” interfaces or large-scale content pipelines that require decent photorealism at minimal compute expense. FLUX.1 [schnell] FP8 is available for testing and integration through Lumenfall’s unified API and interactive playground.

Try FLUX.1 [schnell] FP8 in Playground

Generate images with custom prompts — no API key needed.

Open Playground