
Qwen-Image is a foundation image generation model developed by Alibaba's Qwen team. It has two standout capabilities: complex text rendering and precise image editing.
Complex text rendering: Qwen-Image can render text in images at very high quality, even long paragraphs. It is exceptionally accurate in English and Chinese, and it preserves typographic detail, layout, and the text's harmony with the rest of the image.
Precise image editing: The model supports style transfer, adding or removing objects, refining details, editing text within images, and even manipulating human poses. This puts near-professional-level editing within reach of everyday users.
Qwen-Image is a 20-billion-parameter MMDiT (Multimodal Diffusion Transformer) model, released as open source under the Apache 2.0 license.
Availability: natively supported in ComfyUI, also available on Hugging Face and ModelScope, and it can be tried as a demo on Qwen Chat.
Performance: in independent evaluations it shows outstanding results in both image generation and image editing, and it is currently among the best open-source models on the market.
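If you would rather try it from Python than from ComfyUI, a minimal sketch with the Hugging Face diffusers library looks roughly like this. It assumes a diffusers release recent enough to support the Qwen/Qwen-Image checkpoint; check the model card for the exact parameters:

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Assumes a diffusers version recent enough to support Qwen/Qwen-Image.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image",
    torch_dtype=torch.bfloat16,  # the 20B model is large; bfloat16 halves memory
)
pipe.to("cuda")

image = pipe(
    prompt='A bookstore window with a sign that reads "Grand Opening"',
    num_inference_steps=50,  # number of denoising steps
).images[0]
image.save("qwen_image_demo.png")
```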
MMDiT (Multimodal Diffusion Transformer) is the backbone of the Qwen-Image generation model. (The same architecture has proven effective in other models as well, such as the FLUX and Seedream series.)
Now let's see what this means exactly:
Imagine the model as a sculptor who starts from random noise (like static on an old TV screen). The essence of a diffusion model is to remove this noise gradually, step by step, until a clean, recognizable image emerges. This does not happen directly on the pixels but on a compressed, abstract form of the image, known as the (image) latent space. Qwen-Image uses a dedicated component, a VAE (Variational AutoEncoder), to transform images into and out of these encoded latent representations.
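To make the latent-space idea concrete, here is a toy roundtrip through a VAE. It uses a generic pretrained Stable Diffusion VAE from diffusers as a stand-in for Qwen-Image's own encoder, purely to show the shapes involved:

```python
# Toy VAE roundtrip: pixels -> compact latent -> pixels.
# A generic Stable Diffusion VAE stands in for Qwen-Image's own VAE.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

pixels = torch.randn(1, 3, 512, 512)  # fake 512x512 RGB image (real inputs are scaled to [-1, 1])
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()  # -> (1, 4, 64, 64)
    recon = vae.decode(latents).sample                 # -> back to (1, 3, 512, 512)

print(pixels.shape, "->", latents.shape, "->", recon.shape)
# Diffusion runs on the small latent tensor, not on the full pixel grid.
```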
During the diffusion process, MMDiT learns the complex relationships between noisy latents and the clean, desired latents. In effect, it learns the "recipe" for transforming noise into specific visual content.
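In code, that recipe is just an iterative loop: start from pure noise in latent space and repeatedly subtract the network's noise prediction. The sketch below is schematic (real samplers such as flow matching or DDPM use carefully derived update rules, not a uniform step), but the control flow is the same:

```python
# Schematic denoising loop; real sampler math is more involved,
# but the control flow is the same.
import torch

def denoise(model, text_embeddings, steps=50, shape=(1, 4, 64, 64)):
    latents = torch.randn(shape)                # start from pure noise
    for t in reversed(range(steps)):            # walk from noisy toward clean
        with torch.no_grad():
            noise_pred = model(latents, t, text_embeddings)
        latents = latents - noise_pred / steps  # peel off a slice of the noise
    return latents                              # clean latents for the VAE decoder

# Dummy stand-in for the trained MMDiT, just to show the loop runs:
fake_mmdit = lambda latents, t, cond: 0.1 * latents
clean = denoise(fake_mmdit, text_embeddings=None)
print(clean.shape)  # torch.Size([1, 4, 64, 64])
```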
Qwen-Image uses the Qwen2.5-VL model to extract interpretable "instructions" for MMDiT from the text input, so the model generates exactly the image we described.
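Conceptually, the text encoder turns the prompt into a sequence of embedding vectors that MMDiT attends to at every denoising step. In the sketch below, a small text-only Qwen model stands in for Qwen2.5-VL; the real pipeline feeds in Qwen2.5-VL hidden states under a specific prompt template described in the technical report:

```python
# Turning a prompt into conditioning vectors for the diffusion transformer.
# Sketch only: a small text-only Qwen model stands in for Qwen2.5-VL.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
encoder = AutoModel.from_pretrained("Qwen/Qwen2.5-0.5B")

tokens = tokenizer("A red bicycle leaning against a brick wall", return_tensors="pt")
text_embeddings = encoder(**tokens).last_hidden_state  # (1, seq_len, hidden_dim)

# MMDiT cross-attends to these vectors at every denoising step,
# which is what steers the noise toward the described content.
print(text_embeddings.shape)
```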
Qwen-Image has multimodal capabilities. Not only can it generate images from text (Text-to-Image), but it can also edit images based on text instructions (Text-Image-to-Image). It can even perform certain image understanding tasks, such as object recognition or depth estimation. This is because MMDiT is designed to process and interpret text and image information simultaneously.
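For text-guided editing (Text-Image-to-Image), the flow is similar, except the source image is also encoded and handed to the model together with the instruction. The sketch below assumes the companion Qwen/Qwen-Image-Edit checkpoint and a standard diffusers editing call; the file name is hypothetical, and the exact signature should be verified against the Hugging Face model card:

```python
# Text-guided editing sketch. Assumes the companion Qwen/Qwen-Image-Edit
# checkpoint; verify the exact call signature on its model card.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")

source = load_image("storefront.png")  # hypothetical input photo
edited = pipe(
    image=source,
    prompt='Change the sign text to "Open 24 Hours"',
).images[0]
edited.save("storefront_edited.png")
```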
Links
Qwen-Image blog: https://qwenlm.github.io/blog/qwen-image/
Qwen-Image Technical Report: https://arxiv.org/pdf/2508.02324
GitHub: https://github.com/QwenLM/Qwen-Image
Hugging Face: https://huggingface.co/Qwen/Qwen-Image
Qwen Chat: https://chat.qwen.ai/
Hugging Face Demo: https://huggingface.co/spaces/Qwen/Qwen-Image
Képgenerátor Aréna (Image Generator Arena): https://github.com/mp3pintyo/Leaderboard-Image