
Alibaba releases Wan2.7-Video video generation model focusing on the entire creative process
Alibaba-W (09988.HK) has released Wan2.7-Video, a newly upgraded video generation model. The model accepts multi-modal inputs including text, images, video, and audio, and covers the entire creative workflow: generation, editing, replication, reshaping, driving, continuation, and referencing. Alibaba claims the model is more controllable and versatile, "capable of both directing and performing."
Users can make local edits to video frames through commands, with the edited regions blending naturally into the original footage in lighting and texture. The model also supports adding and removing elements, replacing objects, and modifying object attributes, as well as making precise additions based on the content of a reference image.
The model can also change the environment or visual style while keeping character movements unchanged: the background season can shift from summer to late autumn, or the scene can be instantly rendered in a felt style, as if jumping between parallel universes. In addition, it supports video quality enhancement, visual understanding tasks, and adjustments to camera techniques to meet diverse editing needs.

