From Statics to Dynamics: Physics-Aware Image Editing with Latent Transition Priors

¹KAUST  ²CUHK MMLAB  ³Krea AI  ⁴Hugging Face

Abstract

Instruction-based image editing has achieved remarkable success in semantic alignment, yet state-of-the-art models frequently fail to render physically plausible results when editing involves complex causal dynamics, such as refraction or material deformation. We attribute this limitation to the dominant paradigm that treats editing as a discrete mapping between image pairs, which provides only boundary conditions and leaves transition dynamics underspecified. To address this, we reformulate physics-aware editing as predictive physical state transitions and introduce PhysicTran38K, a large-scale video-based dataset comprising 38K transition trajectories across five physical domains, constructed via a two-stage filtering and constraint-aware annotation pipeline. Building on this supervision, we propose PhysicEdit, an end-to-end framework equipped with a textual-visual dual-thinking mechanism. It combines a frozen Qwen2.5-VL for physically grounded reasoning with learnable transition queries that provide timestep-adaptive visual guidance to a diffusion backbone. Experiments show that PhysicEdit improves over Qwen-Image-Edit by 5.9% in physical realism and 10.1% in knowledge-grounded editing, setting a new state-of-the-art for open-source methods, while remaining competitive with leading proprietary models.

Motivation

Bridging semantic alignment and physical plausibility. Existing editing models achieve high semantic fidelity yet frequently violate physical principles, because they learn discrete image-to-image mappings with underspecified constraints. We reformulate editing as a Physical State Transition, leveraging continuous dynamics to steer generation away from physically implausible hallucinations and toward valid trajectories.

Motivation Teaser

PhysicTran38K: A video-based dataset of physical state transitions

Overview of the PhysicTran38K construction pipeline. Starting from hierarchical physics categories, we synthesize videos with Wan2.2-T2V-A14B and filter them using ViPE with an adaptive strategy that preserves high-dynamic transitions. Candidate videos then undergo principle-driven verification by GPT-5-mini under a rigorous retention rule. Finally, Qwen2.5-VL-7B performs constraint-aware annotation, generating instructions and structured reasoning while incorporating the verification results to prevent hallucinations.
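The two-stage filtering can be illustrated with a minimal sketch. Everything here is an assumption for exposition: the field names, the motion-score statistic, the adaptive-threshold heuristic, and the strict all-pass retention rule are hypothetical stand-ins, not the paper's actual pipeline code.

```python
def adaptive_motion_filter(videos, base_threshold=0.3):
    """Keep videos whose motion score clears an adaptive cutoff.

    The cutoff is lowered toward the batch mean so that clips with
    high-dynamic transitions are preserved rather than over-pruned
    (an assumed heuristic standing in for ViPE's adaptive strategy).
    """
    scores = [v["motion_score"] for v in videos]
    threshold = min(base_threshold, sum(scores) / len(scores))
    return [v for v in videos if v["motion_score"] >= threshold]


def retention_rule(verdicts):
    """Rigorous retention: keep a clip only if every physical-principle
    check from the verifier passed (a strict all-pass rule, assumed)."""
    return all(verdicts)


videos = [
    {"id": "v1", "motion_score": 0.9},
    {"id": "v2", "motion_score": 0.1},
    {"id": "v3", "motion_score": 0.5},
]
kept = adaptive_motion_filter(videos)       # v2 falls below the cutoff
retained = [v for v in kept if retention_rule([True, True])]
```

Annotation (the final stage) would then run only on `retained`, so the structured reasoning is grounded in clips that already passed verification.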

Method Overview

Overview of the PhysicEdit framework. (a) Training: we distill physical transition priors from video data into learnable transition queries, which are supervised by complementary visual features extracted from intermediate keyframes. (b) Inference: PhysicEdit follows a sequential workflow: the frozen MLLM first generates physically grounded reasoning, which is then concatenated with the learned transition queries to condition the diffusion backbone.
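The sequential inference path in (b) amounts to building one conditioning sequence from two sources. The sketch below shows only that data flow; the token counts, embedding width, and function name are illustrative assumptions, and real reasoning tokens and queries would of course be model outputs and learned parameters, not placeholder vectors.

```python
def build_condition(reasoning_tokens, transition_queries):
    """Concatenate along the sequence axis: [reasoning ; queries].

    reasoning_tokens: embeddings of the frozen MLLM's reasoning output.
    transition_queries: the learned transition-query embeddings.
    Both are lists of equal-width vectors (lists of floats here).
    """
    return reasoning_tokens + transition_queries


DIM = 1024                                   # assumed embedding width
reasoning = [[0.0] * DIM for _ in range(77)]  # assumed 77 reasoning tokens
queries = [[0.0] * DIM for _ in range(16)]    # assumed 16 transition queries
cond = build_condition(reasoning, queries)    # sequence of 77 + 16 tokens
```

The condition `cond` would then be fed to the diffusion backbone in place of (or alongside) a plain text embedding, which is what lets the transition queries inject timestep-adaptive visual guidance.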

Qualitative Results

Quantitative Results

Quantitative comparisons on PICABench-Superficial and KRIS.

BibTeX

@misc{zhao2026staticsdynamicsphysicsawareimage,
      title={From Statics to Dynamics: Physics-Aware Image Editing with Latent Transition Priors}, 
      author={Liangbing Zhao and Le Zhuo and Sayak Paul and Hongsheng Li and Mohamed Elhoseiny},
      year={2026},
      eprint={2602.21778},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.21778}, 
      }