Riverflow 1

AI Image Editing Model

Image · 3.9¢ / image

Sourceful's state-of-the-art image editing model, combining a vision-language model with chain-of-thought reasoning and open-weights diffusion models for design-grade precision.

Example outputs coming soon

Max Resolution: 1024 x 1024
Supported Modes: Image Edit
Status: Active

Details

Model ID: riverflow-1
Also known as: riverflow-1-standard, riverflow-1-base
Creator: Sourceful
Family: riverflow
Max Input Images: 3
Tags: image-generation, image-editing
Get Started

Ready to integrate?

Access riverflow-1 via our unified API.

Create Account
Available at 1 provider

Starting from

$0.039 /image via Runware

Prices shown are in USD

Full pricing details

Providers & Pricing (1)

Riverflow 1 is available exclusively through Runware, starting at $0.039/image.

Runware
runware/riverflow-1
Provider Model ID: sourceful:1@1
$0.039 /image

riverflow-1 API (OpenAI-compatible)

Integrate Riverflow 1 into your applications via Lumenfall’s OpenAI-compatible API to perform complex image editing and high-fidelity image generation. The model’s chain-of-thought reasoning enables precise visual manipulations within modern diffusion workflows.

Base URL
https://api.lumenfall.ai/openai/v1
Model
riverflow-1

Code Examples

Image Edit

/v1/images/edits
curl -X POST \
  https://api.lumenfall.ai/openai/v1/images/edits \
  -H "Authorization: Bearer $LUMENFALL_API_KEY" \
  -F "model=riverflow-1" \
  -F "[email protected]" \
  -F "prompt=Add a starry night sky to this image" \
  -F "size=1024x1024"
# Response:
# { "created": 1234567890, "data": [{ "url": "https://...", "revised_prompt": "..." }] }
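For Python, the same request can be composed with only the standard library; `build_edit_request` is a hypothetical helper mirroring the curl call above, with sending left to `urllib.request.urlopen`:

```python
import urllib.request
import uuid

API_URL = "https://api.lumenfall.ai/openai/v1/images/edits"

def build_edit_request(api_key, image_bytes, prompt,
                       model="riverflow-1", size="1024x1024"):
    """Build a multipart/form-data request matching the curl example."""
    boundary = uuid.uuid4().hex
    parts = []
    # Plain form fields: model, prompt, size.
    for name, value in (("model", model), ("prompt", prompt), ("size", size)):
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
             f'{value}\r\n').encode()
        )
    # The image file part (PNG assumed here for the Content-Type).
    parts.append(
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="image"; filename="input.png"\r\n'
         f'Content-Type: image/png\r\n\r\n').encode()
        + image_bytes + b"\r\n"
    )
    parts.append(f"--{boundary}--\r\n".encode())
    return urllib.request.Request(
        API_URL,
        data=b"".join(parts),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )
```

Send the request with `urllib.request.urlopen(req)` and read `data[0]["url"]` (or `b64_json`) from the JSON body.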

Parameter Reference

Legend: Required · Supported · Not available

Core Parameters

prompt (string, required) – Edit instruction for the image. Modes: Edit

Size & Layout

size (string) – Image dimensions as WxH pixels (e.g. "1024x1024") or aspect ratio (e.g. "16:9"). WxH determines both shape and scale (aspect_ratio and resolution are ignored when size is provided); W:H format is equivalent to aspect_ratio. Modes: Edit
aspect_ratio (string) – Aspect ratio of the output image (e.g. "16:9", "1:1"). Controls shape independently of scale; use with resolution to control both. If size is also provided, size takes precedence. Any ratio is accepted and mapped to the nearest supported value. Modes: Edit
resolution (string) – Output resolution tier (e.g. "1K", "4K"). Controls scale independently of shape; higher tiers produce larger images and cost more. If size is also provided, size takes precedence for scale. Any tier is accepted and mapped to the nearest supported value. Modes: Edit
size – exact pixel dimensions, e.g. "1920x1080"
aspect_ratio – shape only, default scale, e.g. "16:9"
resolution – scale tier, preserves shape, e.g. "1K"

Priority when combined

size > aspect_ratio + resolution > aspect_ratio > resolution

size is the most specific and always wins. aspect_ratio and resolution control shape and scale independently.
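The precedence rules can be sketched as a small resolver; the function name and return shape are illustrative, not the gateway's actual internals:

```python
def resolve_layout(size=None, aspect_ratio=None, resolution=None):
    """Resolve size/aspect_ratio/resolution precedence into one layout request."""
    if size and "x" in size:
        # "WxH" pixels: most specific, fixes shape AND scale; others are ignored.
        return {"dimensions": size}
    if size:
        # "W:H" size is equivalent to aspect_ratio.
        aspect_ratio = size
    layout = {}
    if aspect_ratio:
        layout["aspect_ratio"] = aspect_ratio   # shape
    if resolution:
        layout["resolution"] = resolution       # scale
    return layout
```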

How matching works

Shape matching – we pick the closest supported ratio. Ask for 7:1 on a model that supports 4:1 and 8:1, and you get 8:1.
Scale matching – providers use different tier formats: K tiers (0.5K, 1K, 2K, 4K) or megapixel tiers (0.25 MP, 1 MP). If the exact tier isn't available, you get the nearest one.
Dimension clamping – if a model has pixel limits, we clamp dimensions to fit and keep the aspect ratio intact.
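The matching rules above can be sketched as follows; the supported ratio list, the tier set, and the 1024-pixel limit are example values, not Riverflow 1's actual tables:

```python
SUPPORTED_RATIOS = ["1:1", "4:1", "8:1", "16:9"]  # example support list

def _as_float(ratio):
    w, h = ratio.split(":")
    return float(w) / float(h)

def nearest_ratio(requested):
    """Shape matching: pick the supported ratio closest to the requested one."""
    target = _as_float(requested)
    return min(SUPPORTED_RATIOS, key=lambda r: abs(_as_float(r) - target))

def nearest_tier(requested_px, tiers=(512, 1024, 2048, 4096)):
    """Scale matching: pick the nearest available tier (pixels per side)."""
    return min(tiers, key=lambda t: abs(t - requested_px))

def clamp(width, height, max_side=1024):
    """Dimension clamping: fit within the pixel limit, keeping aspect ratio."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)
```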

Media Inputs

image (file, required) – Input image(s) to edit. Supports PNG, JPEG, WebP. Modes: Edit
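A quick way to validate inputs client-side is magic-byte sniffing; this illustrative helper (not part of the API) covers the three supported formats:

```python
def detect_format(data: bytes):
    """Identify PNG, JPEG, or WebP input by its magic bytes; None otherwise."""
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        return "png"
    if data[:3] == b"\xff\xd8\xff":
        return "jpeg"
    # WebP is a RIFF container with "WEBP" at offset 8.
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "webp"
    return None
```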

Output & Format

response_format (string) – How to return the image: "url" or "b64_json". Default: "url". Modes: Edit
output_format (string) – Output image format: png, jpeg, gif, webp, avif. The gateway converts to the requested format if the provider doesn't support it natively. Modes: Edit
output_compression (integer) – Compression level for lossy formats (JPEG, WebP, AVIF). Modes: Edit
n (integer) – Number of images to generate. Default: 1. The gateway generates multiple images in parallel even if the provider only supports 1. Modes: Edit
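When response_format is "b64_json", the image comes back inline rather than as a URL; a minimal decoding helper, assuming the response shape shown in the curl example earlier on this page:

```python
import base64

def decode_images(response):
    """Return raw image bytes for each entry in an images API response."""
    return [base64.b64decode(item["b64_json"]) for item in response["data"]]
```

Write each element to disk with an extension matching your requested output_format.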

Parameter Normalization

How we handle parameters across different providers

Not every provider speaks the same language. When you send a parameter, we handle it in one of four ways depending on what the model supports:

passthrough – sent as-is to the provider (e.g. style, quality)
renamed – same value, mapped to the field name the provider expects (e.g. prompt)
converted – transformed to the provider's native format (e.g. size)
emulated – works even if the provider has no concept of it (e.g. n, response_format)

Parameters we don't recognize pass straight through to the upstream API, so provider-specific options still work.
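The four behaviors can be illustrated with a toy normalizer; the rule table here is hypothetical, not Lumenfall's actual mapping:

```python
# Hypothetical per-model rules: parameter -> (behavior, renamed target or None).
RULES = {
    "prompt": ("renamed", "text"),   # e.g. a provider that expects "text"
    "size":   ("converted", None),   # "WxH" -> separate width/height fields
    "n":      ("emulated", None),    # gateway fans out N requests itself
}

def normalize(params):
    """Apply passthrough/renamed/converted/emulated handling to each parameter."""
    out = {}
    for key, value in params.items():
        behavior, target = RULES.get(key, ("passthrough", None))
        if behavior == "passthrough":
            out[key] = value                      # unrecognized: sent as-is
        elif behavior == "renamed":
            out[target] = value                   # same value, provider's name
        elif behavior == "converted" and key == "size":
            w, h = value.split("x")
            out["width"], out["height"] = int(w), int(h)
        elif behavior == "emulated":
            pass                                  # handled by the gateway, not sent
    return out
```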

Riverflow 1 FAQ

How much does Riverflow 1 cost?

Riverflow 1 starts at $0.039 per image through Lumenfall. Pricing varies by provider. Lumenfall does not add any markup to provider pricing.

How do I use Riverflow 1 via API?

You can use Riverflow 1 through Lumenfall's OpenAI-compatible API. Send requests to the unified endpoint with model ID "riverflow-1". Code examples are available in Python, JavaScript, and cURL.

Which providers offer Riverflow 1?

Riverflow 1 is available through Runware on Lumenfall. Lumenfall automatically routes requests to the best available provider.

What is the maximum resolution for Riverflow 1?

Riverflow 1 supports images up to 1024x1024 resolution.

Overview

Riverflow 1 is a multimodal image editing model developed by Sourceful that focuses on high-precision design tasks. It differentiates itself from standard diffusion models by integrating a vision language model (VLM) that utilizes chain-of-thought reasoning to interpret complex editing instructions. This architecture allows the model to better understand spatial relationships and specific design constraints before executing pixel-level changes.

Strengths

  • Instruction Adherence: The integration of chain-of-thought reasoning helps the model follow multi-step or nuanced natural language instructions more accurately than models that rely on simple CLIP embeddings.
  • Design-Grade Precision: Optimized for professional workflows where maintaining the structural integrity of the original image—such as perspective, lighting, and object proportions—is critical during the editing process.
  • Spatial Awareness: The vision-language component excels at identifying specific regions for modification, reducing the need for manual masking or complex in-painting coordinates.
  • Multimodal Input Flexibility: Seamlessly processes both text prompts and reference images to perform contextual edits, such as style transfers or object replacements that match the surrounding environment.

Limitations

  • Processing Latency: Because the model performs cognitive reasoning steps (chain-of-thought) before generating the output, it may have higher inference times compared to single-pass diffusion models.
  • Stylistic Range: While highly effective for realistic and design-oriented modifications, it may not exhibit the same level of abstract creativity as specialized artistic models when given highly open-ended or vague prompts.

Technical Background

Riverflow 1 is built on a hybrid architecture that bridges vision-language modeling with open-weights diffusion frameworks. The core innovation involves using the VLM to generate an internal reasoning path that guides the diffusion process, effectively acting as an intelligent controller for the image generation backbone. This approach mimics a designer’s logic by first analyzing the “what” and “where” of an edit before committing to the final visual output.

Best For

Riverflow 1 is best suited for professional product photography editing, architectural visualization updates, and marketing asset iteration where precise control over existing imagery is required. It is an excellent choice for developers building tools that require “smart” image manipulation without forcing users to learn complex prompt engineering.

You can experiment with Riverflow 1 and integrate it into your applications through Lumenfall’s unified API and interactive playground.

Try Riverflow 1 in Playground

Generate images with custom prompts — no API key needed.

Open Playground