Qwen Image Edit at the Speed of Light
Upload an image, describe what you want to change, and see it happen in under 2 seconds.
Fastest Qwen-Image-Edit inference. Period.
All benchmarks are measured at a fixed 1024×1024 output resolution, so every provider is doing the same work per step. Lower per-step time is better; higher MP·steps/s is better.
1.18× faster than Fal
highest throughput of any provider
| Provider | sec / step | MP·steps/s |
|---|---|---|
| Luminal | 0.38s | 2.80 |
| Prodia | 0.43s | 2.42 |
| fal.ai | 0.44s | 2.36 |
Measured April 2026. All tests at 1024×1024 resolution, 4 steps, CFG 1. Per-step time is average over 10 runs, excluding queue wait and cold starts. MP·steps/s = (megapixels × steps) / total inference time.
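The throughput column can be reproduced from the formula above. A minimal sketch (the function name is ours; for the Luminal row we assume the "total inference time" is the 1.50s end-to-end figure quoted below):

```python
def mp_steps_per_sec(width: int, height: int, steps: int, total_time_s: float) -> float:
    """Throughput metric: (megapixels * steps) / total inference time."""
    megapixels = (width * height) / 1_000_000
    return megapixels * steps / total_time_s

# Luminal: 1024x1024 output, 4 steps, 1.50s total
print(round(mp_steps_per_sec(1024, 1024, 4, 1.50), 2))  # → 2.8
```

Because the metric normalizes by both resolution and step count, it stays comparable across providers even if they default to different step counts or output sizes.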
End-to-end latency across resolutions
Average end-to-end time over 10 runs per provider. Our latency stays flat regardless of input resolution.
Luminal vs. Fal vs. Prodia
Our output matches Fal — Prodia's is sometimes worse
Given the same prompt and source image, our outputs and Fal's are essentially identical. Same model, same distillation (we believe), same effective parameters — the images are a visual match.
Prodia's results are close but occasionally noticeably worse, with subtle quality differences that suggest a different distillation checkpoint or post-processing pipeline. The model is the same, but the output fidelity isn't always there.
Between us and Fal, then, the benchmark is a pure speed comparison: the images are the same, and the only question is who gets one to you faster. At 1.50s end-to-end, that's us.
Try it above — or bring it to production
The demo above is live. If you need this speed for your product — with an API, SLAs, and dedicated capacity — let's talk.