# Qwen Image Edit Latest

> Alibaba's Qwen image editing model for instruction-based image modifications and transformations

## Quick Reference

- Model ID: qwen-image-edit
- Creator: Alibaba
- Status: active
- Family: qwen
- Base URL: https://api.lumenfall.ai/openai/v1

## Specifications

- Input Modalities: text, image
- Output Modalities: image

## Model Identifiers

- Primary Slug: qwen-image-edit
- Aliases: qwen-image-edit-plus

## Tags

image-generation, image-editing

## Available Providers

### Replicate

- Config Key: replicate/qwen-image-edit
- Provider Model ID: qwen/qwen-image-edit
- Pricing:
  - source: official
  - currency: USD
  - components: [{"type": "output", "metric": "image", "unit_price": 0.03}]
  - source_url: https://replicate.com/qwen/qwen-image-edit
  - effective_at: 2026-01-02

### fal.ai

- Config Key: fal/qwen-image-edit
- Provider Model ID: fal-ai/qwen-image-edit-2511
- Pricing:
  - source: official
  - currency: USD
  - components: [{"type": "output", "metric": "megapixel", "unit_price": 0.03}]
  - source_url: https://fal.ai/models/fal-ai/qwen-image-edit-2511
  - effective_at: 2025-12-29

## Image Gallery

1 image available for this model.

- Curated examples: 1
- "A professional, wide-angle cinematic shot of a high-end interior design studio bathed in the warm, golden glow of the..."

## Code Examples

### Text to Image (Generation)

#### cURL

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-image-edit",
    "prompt": "A serene mountain landscape at sunset",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```

#### JavaScript

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'qwen-image-edit',
  prompt: 'A serene mountain landscape at sunset',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```

#### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="qwen-image-edit",
    prompt="A serene mountain landscape at sunset",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```

### Image Editing

#### cURL

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/edits \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -F "model=qwen-image-edit" \
  -F "image=@source.png" \
  -F "prompt=Add a starry night sky to this image" \
  -F "size=1024x1024"

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```

#### JavaScript

```javascript
import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.edit({
  model: 'qwen-image-edit',
  image: fs.createReadStream('source.png'),
  prompt: 'Add a starry night sky to this image',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```

#### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.edit(
    model="qwen-image-edit",
    image=open("source.png", "rb"),
    prompt="Add a starry night sky to this image",
    size="1024x1024"
)

# { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
print(response.data[0].url)
```

## Overview

Qwen Image Edit is a specialized instruction-based image transformation model developed by Alibaba's Qwen team. Unlike standard text-to-image generators, this model is designed to modify existing visual assets through natural language prompts, allowing for precise alterations without manual masking or complex layering. It sits within the broader Qwen ecosystem, leveraging large-scale multimodal pre-training to interpret spatial relationships and semantic changes within an image.

## Strengths

- **Instructional Precision:** The model excels at following specific commands for object replacement, color grading, and style transfers while maintaining the underlying composition of the original image.
- **Spatial Reasoning:** It demonstrates a strong understanding of where objects are located relative to one another, which helps prevent unintended distortions to the background during foreground edits.
- **Semantic Consistency:** When altering a subject, such as changing a character's clothing or an object's material, the model effectively preserves the identity and perspective of the original subject.
- **Multi-Modal Input Processing:** It handles the interplay between the reference image and the text instructions with high fidelity, reducing the "hallucination" of new elements that weren't requested.

## Limitations

- **High-Frequency Detail:** Like many diffusion-based editors, it may struggle with micro-textures or extremely fine text rendering during complex transformations.
- **Drastic Structural Changes:** While it handles local edits well, attempting to fundamentally change the camera angle or the core geometry of a scene can result in artifacts or loss of consistency with the source image.
- **Large-Scale Inpainting:** For tasks requiring the generation of massive amounts of new content in large empty spaces, dedicated outpainting or general-purpose generative models might offer more creative variety.
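All of the sample responses above share one shape: a `created` timestamp plus a `data` array of objects carrying `url` and `revised_prompt`. When working with raw JSON rather than the SDK's typed objects, a small defensive helper keeps that assumption in one place. This is a sketch; the field names come from the sample responses, and treating a missing `data` key as an empty result is our choice:

```python
def extract_image_urls(response: dict) -> list[str]:
    """Pull image URLs out of an images API response dict.

    Expects the shape shown in the examples above:
    { "created": ..., "data": [{ "url": ..., "revised_prompt": ... }] }
    Entries without a "url" field are skipped rather than raising.
    """
    return [item["url"] for item in response.get("data", []) if "url" in item]


sample = {
    "created": 1234567890,
    "data": [{"url": "https://example.com/out.png", "revised_prompt": "..."}],
}
print(extract_image_urls(sample))  # ['https://example.com/out.png']
```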
## Technical Background

Qwen Image Edit is part of the Qwen multimodal family, utilizing an architecture that integrates vision encoders with language models to bridge the gap between pixels and prose. It likely employs a diffusion-based framework fine-tuned on instruction-following datasets, where the model is trained on triples of "before" images, "after" images, and the specific text instructions that link them. This training approach emphasizes the delta between two states rather than just generating a static image from scratch.

## Best For

This model is ideal for automated e-commerce workflows, such as changing the color or texture of products, and for creative direction where a user needs to iterate on a concept image without restarting the generation process. It is also well-suited for social media content creation and rapid prototyping of visual assets. Qwen Image Edit is available for integration and testing through Lumenfall's unified API and interactive playground.

## Frequently Asked Questions

### How much does Qwen Image Edit Latest cost?

Qwen Image Edit Latest starts at $0.03 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

### How do I use Qwen Image Edit Latest via API?

You can use Qwen Image Edit Latest through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "qwen-image-edit". Code examples are available in Python, JavaScript, and cURL.

### Which providers offer Qwen Image Edit Latest?

Qwen Image Edit Latest is available through Replicate and fal.ai on Lumenfall. Lumenfall automatically routes requests to the best available provider.
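The two providers listed above meter differently: Replicate bills $0.03 per output image, while fal.ai bills $0.03 per output megapixel, so the effective cost on fal.ai depends on the requested resolution. A rough back-of-the-envelope comparison, using only the unit prices from the pricing tables above (the function and constant names are ours, and real invoices may round differently):

```python
# Unit prices taken from the provider pricing sections above (USD).
REPLICATE_PER_IMAGE = 0.03   # flat rate per output image
FAL_PER_MEGAPIXEL = 0.03     # rate per output megapixel


def estimate_cost(width: int, height: int, n_images: int = 1) -> dict:
    """Estimate USD cost for a batch of outputs under each pricing model."""
    megapixels = (width * height) / 1_000_000
    return {
        "replicate": n_images * REPLICATE_PER_IMAGE,
        "fal": n_images * megapixels * FAL_PER_MEGAPIXEL,
    }


# A 1024x1024 output is ~1.05 megapixels, so both providers land
# near $0.03 per image at that size; larger outputs diverge.
print(estimate_cost(1024, 1024))
```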
## Links

- Model Page: https://lumenfall.ai/models/alibaba/qwen-image-edit
- About: https://lumenfall.ai/models/alibaba/qwen-image-edit/about
- Providers, Pricing & Performance: https://lumenfall.ai/models/alibaba/qwen-image-edit/providers
- API Reference: https://lumenfall.ai/models/alibaba/qwen-image-edit/api
- Benchmarks: https://lumenfall.ai/models/alibaba/qwen-image-edit/benchmarks
- Use Cases: https://lumenfall.ai/models/alibaba/qwen-image-edit/use-cases
- Gallery: https://lumenfall.ai/models/alibaba/qwen-image-edit/gallery
- Playground: https://lumenfall.ai/playground?model=qwen-image-edit
- API Documentation: https://docs.lumenfall.ai