---
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-to-image
library_name: diffusers
---

## 🔥🔥🔥 News!!

* Nov 26, 2025: 👋 We release [Step1X-Edit-v1p2](https://huggingface.co/stepfun-ai/Step1X-Edit-v1p2) (referred to as **ReasonEdit-S** in the paper), a native reasoning image-editing model with improved performance on KRIS-Bench and GEdit-Bench. The technical report is available [here](https://arxiv.org/abs/2511.22625).
<table>
  <thead>
    <tr>
      <th rowspan="2">Models</th>
      <th colspan="3">GEdit-Bench</th>
      <th colspan="4">KRIS-Bench</th>
    </tr>
    <tr>
      <th>G_SC⬆️</th><th>G_PQ⬆️</th><th>G_O⬆️</th>
      <th>FK⬆️</th><th>CK⬆️</th><th>PK⬆️</th><th>Overall⬆️</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Flux-Kontext-dev</td><td>7.16</td><td>7.37</td><td>6.51</td><td>53.28</td><td>50.36</td><td>42.53</td><td>49.54</td></tr>
    <tr><td>Qwen-Image-Edit-2509</td><td>8.00</td><td>7.86</td><td>7.56</td><td>61.47</td><td>56.79</td><td>47.07</td><td>56.15</td></tr>
    <tr><td>Step1X-Edit v1.1</td><td>7.66</td><td>7.35</td><td>6.97</td><td>53.05</td><td>54.34</td><td>44.66</td><td>51.59</td></tr>
    <tr><td>Step1X-Edit-v1p2-preview</td><td>8.14</td><td>7.55</td><td>7.42</td><td>60.49</td><td>58.81</td><td>41.77</td><td>52.51</td></tr>
    <tr><td>Step1X-Edit-v1p2 (base)</td><td>7.77</td><td>7.65</td><td>7.24</td><td>58.23</td><td>60.55</td><td>46.21</td><td>56.33</td></tr>
    <tr><td>Step1X-Edit-v1p2 (thinking)</td><td>8.02</td><td>7.64</td><td>7.36</td><td>59.79</td><td>62.76</td><td>49.78</td><td>58.64</td></tr>
    <tr><td>Step1X-Edit-v1p2 (thinking + reflection)</td><td>8.18</td><td>7.85</td><td>7.58</td><td>62.44</td><td>65.72</td><td>50.42</td><td>60.93</td></tr>
  </tbody>
</table>
## ⚡️ Model Usages

Make sure `transformers==4.55.0` is installed (the version we tested with), then install the `diffusers` package with the following commands:

```bash
git clone -b step1xedit_v1p2 https://github.com/Peyton-Chen/diffusers.git
cd diffusers
pip install -e .
```

Here is an example of using the `Step1X-Edit-v1p2` model to edit images:

```python
import torch
from diffusers import Step1XEditPipelineV1P2
from diffusers.utils import load_image

# Load the pipeline in bfloat16 and move it to the GPU.
pipe = Step1XEditPipelineV1P2.from_pretrained(
    "stepfun-ai/Step1X-Edit-v1p2", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

print("=== processing image ===")
image = load_image("examples/0000.jpg").convert("RGB")
prompt = "add a ruby pendant on the girl's neck."

enable_thinking_mode = True
enable_reflection_mode = True

pipe_output = pipe(
    image=image,
    prompt=prompt,
    num_inference_steps=50,
    true_cfg_scale=6,
    generator=torch.Generator().manual_seed(42),
    enable_thinking_mode=enable_thinking_mode,
    enable_reflection_mode=enable_reflection_mode,
)

# In thinking mode, the MLLM first rewrites the instruction into a more
# explicit editing prompt.
if enable_thinking_mode:
    print("Reformat Prompt:", pipe_output.reformat_prompt)

# Save every edited image; in reflection mode, also print the per-image
# reflection results.
for image_idx in range(len(pipe_output.images)):
    pipe_output.images[image_idx].save(f"0001-{image_idx}.jpg", lossless=True)
    if enable_reflection_mode:
        print(pipe_output.think_info[image_idx])
        print(pipe_output.best_info[image_idx])

# In reflection mode, save the image selected as the final output.
if enable_reflection_mode:
    pipe_output.final_images[0].save("0001-final.jpg", lossless=True)
```

The results look like:
*(Figure: example editing results.)*
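If the full bfloat16 pipeline does not fit in your GPU memory, the generic `diffusers` offloading helpers may help. Note that this is standard `diffusers` functionality rather than anything documented for this model, so treat it as a sketch:

```python
import torch
from diffusers import Step1XEditPipelineV1P2

pipe = Step1XEditPipelineV1P2.from_pretrained(
    "stepfun-ai/Step1X-Edit-v1p2", torch_dtype=torch.bfloat16
)
# Keep submodules on the CPU and move each one to the GPU only while it
# runs (generic diffusers API; requires `accelerate` to be installed).
# Do not also call pipe.to("cuda") when offloading is enabled.
pipe.enable_model_cpu_offload()
```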
## 📖 Introduction

Step1X-Edit-v1p2 represents a step towards reasoning-enhanced image editing models. We show that unlocking the reasoning capabilities of MLLMs can further expand the limits of instruction-based editing. Specifically, we introduce two complementary reasoning mechanisms, thinking and reflection, to improve instruction comprehension and editing accuracy. Building on these mechanisms, our framework performs editing in a thinking–editing–reflection loop: **the thinking stage** leverages the MLLM's world knowledge to interpret abstract instructions, while **the reflection stage** reviews the edited outputs, corrects unintended changes, and determines when to stop. For more details, please refer to our technical report.
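To make the loop concrete, here is a minimal conceptual sketch in Python. Every helper here (`mllm_think`, `edit_once`, `mllm_reflect`, `Reflection`) is an illustrative stub, not the actual Step1X-Edit-v1p2 implementation; only the thinking–editing–reflection structure follows the description above.

```python
from dataclasses import dataclass

# Conceptual sketch of the thinking-editing-reflection loop. All names
# below are illustrative placeholders, NOT the model's real internals.

@dataclass
class Reflection:
    accept: bool         # did the edit satisfy the instruction?
    refined_prompt: str  # corrected prompt for the next round

def mllm_think(image, instruction: str) -> str:
    """Thinking: rewrite an abstract instruction into an explicit
    editing prompt using the MLLM's world knowledge (stub)."""
    return instruction

def edit_once(image, prompt: str):
    """Editing: one diffusion editing pass (stub)."""
    return image

def mllm_reflect(image, edited, instruction: str) -> Reflection:
    """Reflection: review the edit, flag unintended changes, and
    decide whether to stop (stub: always accept)."""
    return Reflection(accept=True, refined_prompt=instruction)

def reason_edit(image, instruction: str, max_rounds: int = 3):
    prompt = mllm_think(image, instruction)
    edited = image
    for _ in range(max_rounds):
        edited = edit_once(image, prompt)
        verdict = mllm_reflect(image, edited, instruction)
        if verdict.accept:               # reflection accepts: stop early
            break
        prompt = verdict.refined_prompt  # otherwise refine and retry
    return edited

if __name__ == "__main__":
    reason_edit("<image>", "add a ruby pendant on the girl's neck.")
```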
## Citation

```
@article{yin2025reasonedit,
  title={ReasonEdit: Towards Reasoning-Enhanced Image Editing Models},
  author={Fukun Yin and Shiyu Liu and Yucheng Han and Zhibo Wang and Peng Xing and Rui Wang and Wei Cheng and Yingming Wang and Aojie Li and Zixin Yin and Pengtao Chen and Xiangyu Zhang and Daxin Jiang and Xianfang Zeng and Gang Yu},
  journal={arXiv preprint arXiv:2511.22625},
  year={2025}
}

@article{liu2025step1x-edit,
  title={Step1X-Edit: A Practical Framework for General Image Editing},
  author={Shiyu Liu and Yucheng Han and Peng Xing and Fukun Yin and Rui Wang and Wei Cheng and Jiaqi Liao and Yingming Wang and Honghao Fu and Chunrui Han and Guopeng Li and Yuang Peng and Quan Sun and Jingwei Wu and Yan Cai and Zheng Ge and Ranchen Ming and Lei Xia and Xianfang Zeng and Yibo Zhu and Binxing Jiao and Xiangyu Zhang and Gang Yu and Daxin Jiang},
  journal={arXiv preprint arXiv:2504.17761},
  year={2025}
}
```