# Qwen Image Edit 2509

> Alibaba's Qwen image-editing model for instruction-based image modifications and transformations

## Quick Reference

- Model ID: `qwen-image-edit-2509`
- Creator: Alibaba
- Status: active
- Family: qwen
- Base URL: https://api.lumenfall.ai/openai/v1

## Specifications

- Input Modalities: text, image
- Output Modalities: image

## Model Identifiers

- Primary Slug: `qwen-image-edit-2509`
- Aliases: `qwen-image-edit-plus-2509`

## Tags

image-generation, image-editing

## Available Providers

### fal.ai

- Config Key: `fal/qwen-image-edit-2509`
- Provider Model ID: `fal-ai/qwen-image-edit-2509`
- Pricing:
  - Source: official
  - Currency: USD
  - Components: output, $0.03 per megapixel
  - Source URL: https://fal.ai/models/fal-ai/qwen-image-edit-2509
  - Effective: 2025-12-29

## Image Gallery

1 image available for this model.

- Curated examples: 1
- "A hyper-realistic, wide-angle cinematic shot of a master restorer's sun-drenched attic studio. In the center, a large..."

## Example Prompt

The following prompt was used to generate an example image in our playground:

> A cozy sunlit greenhouse filled with exotic monsteras and hanging ferns. In the center, a bright orange vintage typewriter sits on a rustic wooden table. Through the glass panes in the distance, a capybara grazes peacefully in the garden.

## Code Examples

### Text to Image (Generation)

#### cURL

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/generations \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-image-edit-2509",
    "prompt": "A serene mountain landscape at sunset",
    "size": "1024x1024"
  }'

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```

#### JavaScript

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.generate({
  model: 'qwen-image-edit-2509',
  prompt: 'A serene mountain landscape at sunset',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```

#### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.generate(
    model="qwen-image-edit-2509",
    prompt="A serene mountain landscape at sunset",
    size="1024x1024"
)

# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
print(response.data[0].url)
```

### Image Editing

#### cURL

```bash
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/edits \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -F "model=qwen-image-edit-2509" \
  -F "image=@source.png" \
  -F "prompt=Add a starry night sky to this image" \
  -F "size=1024x1024"

# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
```

#### JavaScript

```javascript
import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.lumenfall.ai/openai/v1'
});

const response = await client.images.edit({
  model: 'qwen-image-edit-2509',
  image: fs.createReadStream('source.png'),
  prompt: 'Add a starry night sky to this image',
  size: '1024x1024'
});

// { created: 1234567890, data: [{ url: "https://...", revised_prompt: "..." }] }
console.log(response.data[0].url);
```

#### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.lumenfall.ai/openai/v1"
)

response = client.images.edit(
    model="qwen-image-edit-2509",
    image=open("source.png", "rb"),
    prompt="Add a starry night sky to this image",
    size="1024x1024"
)

# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
print(response.data[0].url)
```

## Overview

Qwen Image Edit 2509 is a specialized vision-language model from Alibaba designed for instruction-based image manipulation. Unlike standard text-to-image generators, it accepts both a source image and a natural-language prompt, performing targeted modifications and transformations. It is distinctive for its ability to interpret complex editing instructions while maintaining the structural integrity of the original image.

## Strengths

* **Instruction Adherence:** The model accurately maps natural-language verbs and nouns to visual changes, such as "change the color of the shirt" or "add a sunset to the background."
* **Contextual Consistency:** It excels at preserving the identity and spatial layout of primary subjects while altering specific attributes or environmental elements.
* **Zero-shot Composition:** The model handles varied editing tasks, including stylization, object insertion, and attribute modification, without requiring mask-based inputs or fine-tuning for specific styles.
* **Complex Transformation:** Beyond simple filters, it can handle structural transformations such as changing a character's pose or modifying the lighting conditions of a scene based on text descriptions.

## Limitations

* **High-Detail Text Rendering:** Like many diffusion-based and vision-language architectures, it may struggle to render precise, small-scale legible text within an edited image.
* **Large-Scale Compositional Overhauls:** While it handles local edits and style transfers well, it may produce artifacts if the prompt asks for a complete reimagining that contradicts the fundamental geometry of the source image.
* **Anatomical Accuracy:** In complex edits involving human figures, there is a risk of anatomical inconsistencies, particularly in hands or overlapping limbs.
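Since billing is metered per megapixel of output rather than per request, cost scales with the resolution you ask for. A minimal sketch of that arithmetic (the `estimate_cost` helper is illustrative, not part of any SDK; the $0.03/megapixel rate is the fal.ai figure quoted above):

```python
def estimate_cost(width: int, height: int, unit_price: float = 0.03) -> float:
    """Approximate USD cost of one output image at the given resolution,
    assuming per-megapixel pricing (1 megapixel = 1,000,000 pixels)."""
    megapixels = (width * height) / 1_000_000
    return round(megapixels * unit_price, 4)

# A 1024x1024 output is ~1.05 MP:
print(estimate_cost(1024, 1024))  # 0.0315
```

Actual billing is determined by the provider; treat this as a rough budgeting aid.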
## Technical Background

Developed as part of the Qwen model family, Qwen Image Edit 2509 pairs a vision encoder with a generative backbone trained on large-scale paired datasets of images and their corresponding edit instructions. The architecture focuses on cross-modal alignment, ensuring that text embeddings effectively guide the latent representation of the source image during the denoising or reconstruction process. This approach prioritizes semantic understanding of the "before" and "after" relationship described in the prompt.

## Best For

Qwen Image Edit 2509 is best suited for workflows requiring rapid prototyping of visual concepts, such as changing product backgrounds, adjusting fashion-photography attributes, or iterating on character designs. It is an excellent choice for developers building creative tools that need natural-language control over existing visual assets rather than generation from scratch.

This model is available through **Lumenfall's unified API and playground**, allowing easy integration into multi-model pipelines alongside text and vision-analysis models.

## Frequently Asked Questions

### How much does Qwen Image Edit 2509 cost?

Qwen Image Edit 2509 starts at $0.03 per megapixel of output through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

### How do I use Qwen Image Edit 2509 via API?

You can use Qwen Image Edit 2509 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID `qwen-image-edit-2509`. Code examples are available in Python, JavaScript, and cURL.

### Which providers offer Qwen Image Edit 2509?

Qwen Image Edit 2509 is available through fal.ai on Lumenfall. Lumenfall automatically routes requests to the best available provider.
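The API examples earlier in this page return a hosted URL rather than raw image bytes, so a typical integration downloads the result as a follow-up step. A minimal sketch using only the Python standard library (the `save_image` helper is illustrative, not part of any SDK):

```python
import urllib.request

def save_image(url: str, path: str) -> str:
    """Download an image from a URL and write it to a local file."""
    with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
        out.write(resp.read())
    return path

# Usage after an images.generate or images.edit call:
# save_image(response.data[0].url, "output.png")
```

Hosted URLs may expire, so persist results you want to keep soon after the request completes.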
## Links

- Model Page: https://lumenfall.ai/models/alibaba/qwen-image-edit-2509
- About: https://lumenfall.ai/models/alibaba/qwen-image-edit-2509/about
- Providers, Pricing & Performance: https://lumenfall.ai/models/alibaba/qwen-image-edit-2509/providers
- API Reference: https://lumenfall.ai/models/alibaba/qwen-image-edit-2509/api
- Benchmarks: https://lumenfall.ai/models/alibaba/qwen-image-edit-2509/benchmarks
- Use Cases: https://lumenfall.ai/models/alibaba/qwen-image-edit-2509/use-cases
- Gallery: https://lumenfall.ai/models/alibaba/qwen-image-edit-2509/gallery
- Playground: https://lumenfall.ai/playground?model=qwen-image-edit-2509
- API Documentation: https://docs.lumenfall.ai