OpenAI's state-of-the-art image generation model with arbitrary resolution up to 4K and strong instruction following
Details
gpt-image-2
Prices shown are in USD · Some prices estimated from per-megapixel or per-token pricing
Provider Performance
Fastest generation through openai at 58,739ms median latency with 83.7% success rate.
Aggregated from real API requests over the last 30 days.
Provider Rankings
| # | Provider | p50 Gen Time | p95 Gen Time | Success Rate | TTFB (p50) |
|---|---|---|---|---|---|
| 1 | openai | 58,739ms | 142,760ms | 83.7% | 56,881ms |
Providers & Pricing (1)
openai/gpt-image-2
Pricing Notes (7)
- Token-based pricing; gpt-image-2 accepts arbitrary resolutions, so a per-image table is not encoded here.
- Example per-image costs at the three legacy preset sizes (derived from the same token pricing):
  - Low: 1024x1024 ~= $0.006, 1024x1536 ~= $0.005, 1536x1024 ~= $0.005
  - Medium: 1024x1024 ~= $0.053, 1024x1536 ~= $0.041, 1536x1024 ~= $0.041
  - High: 1024x1024 ~= $0.211, 1024x1536 ~= $0.165, 1536x1024 ~= $0.165
- Processes every image input at high fidelity; the input_fidelity parameter is not supported.
- Does not support transparent backgrounds.
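The example figures above can be folded into a small budgeting helper. A minimal sketch that hardcodes the per-image costs listed in the pricing notes; actual charges are token-based and may differ:

```python
# Example per-image costs (USD) at the legacy preset sizes, taken from the
# pricing notes above. Real billing is token-based, so treat these as estimates.
PRESET_COSTS = {
    ("low", "1024x1024"): 0.006,
    ("low", "1024x1536"): 0.005,
    ("low", "1536x1024"): 0.005,
    ("medium", "1024x1024"): 0.053,
    ("medium", "1024x1536"): 0.041,
    ("medium", "1536x1024"): 0.041,
    ("high", "1024x1024"): 0.211,
    ("high", "1024x1536"): 0.165,
    ("high", "1536x1024"): 0.165,
}

def batch_cost(quality: str, size: str, n: int = 1) -> float:
    """Estimate the cost of generating n images at a preset quality/size."""
    return round(PRESET_COSTS[(quality, size)] * n, 3)

batch_cost("high", "1024x1024", 10)  # estimated cost for a batch of 10
```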
GPT Image 2 API (OpenAI-compatible)
Base URL: https://api.lumenfall.ai/openai/v1
Model ID: gpt-image-2
Code Examples
Text to Image
/v1/images/generations

```shell
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2",
    "prompt": "A lighthouse on a rocky coast at dawn",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'gpt-image-2',
  prompt: 'A lighthouse on a rocky coast at dawn',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="gpt-image-2",
    prompt="A lighthouse on a rocky coast at dawn",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```
Image Edit
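The edits endpoint accepts a multipart form: a prompt plus one or more image files. A stdlib-only sketch of assembling that request; the `build_multipart` helper and the placeholder image bytes are illustrative, not part of the API:

```python
import io
import os
import urllib.request
import uuid

def build_multipart(fields, files):
    """Encode plain fields and file uploads as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(f'Content-Disposition: form-data; name="{name}"\r\n\r\n'.encode())
        buf.write(f"{value}\r\n".encode())
    for name, (filename, data, ctype) in files.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(
            f'Content-Disposition: form-data; name="{name}"; filename="{filename}"\r\n'.encode()
        )
        buf.write(f"Content-Type: {ctype}\r\n\r\n".encode())
        buf.write(data)
        buf.write(b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return boundary, buf.getvalue()

# Build the request; actually sending it requires a valid key and a real image.
fields = {"model": "gpt-image-2", "prompt": "Replace the sky with a sunset"}
files = {"image": ("photo.png", b"<png bytes>", "image/png")}
boundary, body = build_multipart(fields, files)
req = urllib.request.Request(
    "https://api.lumenfall.ai/openai/v1/images/edits",
    data=body,
    headers={
        "Authorization": f"Bearer {os.environ.get('LUMENFALL_API_KEY', '')}",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    },
)
# urllib.request.urlopen(req) would return the usual images JSON response.
```

The openai SDKs shown above wrap this for you (`client.images.edit(...)`); the sketch just makes the wire format visible.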
/v1/images/edits

Parameter Reference
Core Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| prompt | string | Required. Text prompt for image generation | T2I, Edit |
| quality | string | Image quality level: low, medium, high | T2I, Edit |
Size & Layout
| Parameter | Type | Description | Modes |
|---|---|---|---|
| size | string | Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided); the W:H format is equivalent to aspect_ratio. | T2I, Edit |
| aspect_ratio | string | Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. | T2I, Edit |
| resolution | string | Output resolution tier (e.g. "1K", "4K"). Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. | T2I, Edit |
| Parameter | Controls | Example |
|---|---|---|
| size | Exact pixel dimensions | "1920x1080" |
| aspect_ratio | Shape only, default scale | "16:9" |
| resolution | Scale tier, preserves shape | "1K" |

Priority when combined: size is most specific and always wins; aspect_ratio and resolution control shape and scale independently.
How matching works
Requested values are mapped to the nearest supported one. For example, asking for 7:1 on a model that supports 4:1 and 8:1 yields 8:1. Resolution tiers may be expressed as K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25, 1); if the exact tier isn't available, you get the nearest one.
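The nearest-value matching described above can be sketched as a numeric comparison over ratios. This is a hypothetical helper illustrating the behavior, not the gateway's actual code:

```python
from fractions import Fraction

def nearest_ratio(requested: str, supported: list[str]) -> str:
    """Map a requested W:H ratio to the closest supported one,
    comparing the ratios' numeric values."""
    def value(ratio: str) -> Fraction:
        w, h = ratio.split(":")
        return Fraction(int(w), int(h))

    want = value(requested)
    return min(supported, key=lambda r: abs(value(r) - want))

nearest_ratio("7:1", ["4:1", "8:1"])  # -> "8:1", matching the example above
```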
Media Inputs
| Parameter | Type | Description | Modes |
|---|---|---|---|
| image | file | Required. Input image(s) to edit. Supports PNG, JPEG, WebP. | Edit |
Output & Format
| Parameter | Type | Description | Modes |
|---|---|---|---|
| response_format | string | How to return the image: url or b64_json. Default: "url" | T2I, Edit |
| output_format | string | Output image format: png, jpeg, gif, webp, avif. Gateway converts to the requested format if the provider doesn't support it natively. | T2I, Edit |
| output_compression | integer | Compression level for lossy formats (JPEG, WebP, AVIF) | T2I, Edit |
| n | integer | Number of images to generate. Default: 1. Gateway generates multiple images in parallel even if the provider only supports 1. | T2I, Edit |
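With response_format set to "b64_json", the image arrives base64-encoded in the response body instead of behind a URL. A minimal decoding sketch; the `response` object here is fabricated to mimic the shape of a real reply:

```python
import base64

# Fabricated stand-in for an images API reply with response_format="b64_json".
fake_png = b"\x89PNG\r\n\x1a\n..."  # placeholder bytes, not a real image
response = {
    "created": 1234567890,
    "data": [{"b64_json": base64.b64encode(fake_png).decode("ascii")}],
}

# Decode the payload and write it to disk.
png_bytes = base64.b64decode(response["data"][0]["b64_json"])
with open("out.png", "wb") as f:
    f.write(png_bytes)
```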
Additional Parameters
| Parameter | Type | Description | Modes |
|---|---|---|---|
| background (openai) | string | Background handling. gpt-image-2 does not currently support transparent backgrounds; supported value: opaque. | T2I, Edit |
| moderation (openai) | string | Moderation strictness; supported value: low. | T2I, Edit |
| user (openai) | string | Stable end-user identifier used by OpenAI abuse monitoring. | T2I, Edit |
Parameter Normalization
How we handle parameters across different providers
Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:
| Behavior | What happens | Example |
|---|---|---|
| passthrough | Sent as-is to the provider | style, quality |
| renamed | Same value, mapped to the field name the provider expects | prompt |
| converted | Transformed to the provider's native format | size |
| emulated | Works even if the provider has no concept of it | n, response_format |
Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
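Putting the four behaviors together, a single request payload can mix normalized and passthrough fields. A sketch where "style" stands in for a hypothetical provider-specific option the gateway doesn't recognize:

```python
# One generations payload mixing the four normalization behaviors described above.
payload = {
    "model": "gpt-image-2",
    "prompt": "Isometric pixel-art city block at night",
    "quality": "high",         # passthrough: sent as-is
    "size": "1024x1024",       # converted: mapped to the provider's native format
    "n": 2,                    # emulated: parallel generation if provider caps at 1
    "response_format": "url",  # emulated if the provider only returns base64
    "style": "vivid",          # unrecognized: passed straight through upstream
}
```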
GPT Image 2 FAQ
How do I use GPT Image 2 via API?
You can use GPT Image 2 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "gpt-image-2". Code examples are available in Python, JavaScript, and cURL.
Which providers offer GPT Image 2?
GPT Image 2 is available through OpenAI on Lumenfall. Lumenfall automatically routes requests to the best available provider.
What is the maximum resolution for GPT Image 2?
GPT Image 2 supports images up to 3840x2160 (4K) resolution.
Overview
GPT Image 2 is a high-fidelity image generation model developed by OpenAI, designed to produce visual content from text prompts and existing images. It represents an evolution in the GPT-image family, characterized by its ability to handle arbitrary resolutions up to 4K and its rigorous adherence to complex, multi-part instructions. This model supports both text-to-image generation and granular image editing, allowing users to move from initial concept to refined final asset within a single framework.
Strengths
- High-Resolution Output: The model generates images at arbitrary aspect ratios with a maximum resolution of 4K, making it suitable for professional print and digital media without immediate upscaling requirements.
- Prompt Adherence: It demonstrates strong instruction-following capabilities, accurately placing specific objects, managing spatial relationships, and maintaining stylistic consistency as described in the input text.
- Multi-mode Versatility: GPT Image 2 natively supports both text-to-image (creating visuals from scratch) and image-editing (modifying existing imagery based on textual instructions), ensuring a cohesive workflow for iterative design.
- Complex Composition: The model excels at rendering scenes with multiple subjects or dense detail that typically challenge standard diffusion models, maintaining structural integrity even at high pixel densities.
Limitations
- Compute Intensity: Due to the 4K resolution ceiling and model complexity, generation times can run longer than those of lower-resolution latent diffusion models.
- Instruction Sensitivity: While following instructions accurately, the model may require precise, descriptive language to achieve specific artistic styles, as it prioritizes literal interpretation of the prompt.
Technical Background
GPT Image 2 is built upon OpenAI’s proprietary architecture for visual synthesis, moving beyond fixed-aspect ratio training to support dynamic resolution scaling. The model utilizes a training approach that emphasizes the alignment between dense textual descriptions and high-resolution visual tokens. This allows the model to interpret nuanced natural language prompts as precise spatial and stylistic commands during the generation process.
Best For
GPT Image 2 is optimized for professional workflows requiring high-definition assets, such as marketing collateral, detailed concept art, and complex photo manipulation. It is particularly effective for users who need to iterate on an existing image through precise text-based edits rather than regenerating a scene from scratch. This model is available for integration and testing through Lumenfall’s unified API and playground, providing a streamlined environment for experimenting with 4K generation and image editing.
Try GPT Image 2 in Playground
Generate images with custom prompts — no API key needed.