Alibaba Unveils AI Model for Character Animation and Replacement – Alizila
New open-source model for digital-human video generation
Second model for digital human video generation unveiled within a month
By Shao Xiaoyi | Published on Sept. 22, 2025
Alibaba has introduced Wan2.2-animate, a new digital-human video-generation model optimized for character animation and replacement. This is the second open-source digital-human video-generation model unveiled by the company within a month, underlining its ongoing commitment to AI innovation.
As part of Alibaba’s Wan2.2 video-generation series, the model animates a provided character image using a supporting reference video: it accurately captures the facial expressions and body movements of the character in the reference and applies them to a new video.
The model can also replace a character in a video with one from a source image, while preserving the original character’s expressions and movements. By replicating the original lighting and color, Wan2.2-animate enables seamless integration of the new character into the scene.
Last month, Alibaba unveiled Wan2.2-S2V (Speech-to-Video), its first open-source model designed for digital human video creation. This tool converts portrait photos into film-quality avatars capable of speaking, singing, and performing.
Innovative approach
The model uses an innovative approach to bringing characters to life: it deconstructs human motion into fundamental skeletal patterns while capturing facial expressions from source videos. This enables the creation of natural-looking animated characters with precise motion control. The technology can interpret and reproduce subtle movements and expressions that would typically require extensive manual effort from skilled animators, speeding up the pace at which realistic-looking characters can be created.
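To make the skeletal idea concrete, here is a deliberately minimal sketch of the general concept behind skeleton-based motion retargeting, not Wan2.2-animate's actual implementation: joint positions from the reference frame are expressed relative to a root joint, scaled to the target character's proportions, and re-anchored on the target body. All function and variable names here are illustrative.

```python
# Illustrative sketch of skeleton-based motion retargeting (the general
# concept, NOT the Wan2.2-animate implementation, which is not public in
# this article): express reference keypoints relative to a root joint,
# scale to the target character's proportions, and re-anchor them.
import numpy as np

def retarget_pose(ref_keypoints: np.ndarray,
                  ref_root: np.ndarray,
                  tgt_root: np.ndarray,
                  scale: float) -> np.ndarray:
    """Map reference-video joint positions onto a target character.

    ref_keypoints: (N, 2) array of 2D joint positions from the reference frame.
    ref_root / tgt_root: root-joint (e.g. pelvis) position in each image.
    scale: ratio of target skeleton size to reference skeleton size.
    """
    offsets = ref_keypoints - ref_root   # motion expressed relative to the root
    return tgt_root + scale * offsets    # re-anchored on the target body

# Example: a 3-joint reference pose transferred to a half-size character
# whose root joint sits at (50, 50).
ref = np.array([[10.0, 10.0], [10.0, 20.0], [10.0, 30.0]])
out = retarget_pose(ref, ref_root=ref[0],
                    tgt_root=np.array([50.0, 50.0]), scale=0.5)
```

Real systems operate on full skeleton hierarchies with joint rotations rather than raw 2D points, but the decompose-then-reapply structure is the same idea the paragraph above describes.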
One of the major challenges in digital character replacement has been achieving realistic integration into scenes. The new model incorporates an auxiliary relighting Low-Rank Adaptation (LoRA) module, which automatically adjusts the character’s appearance to match the video’s environment, handling everything from subtle shadows to complex lighting conditions.
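For readers unfamiliar with the LoRA technique the paragraph names, the sketch below shows the standard low-rank adaptation formulation in plain NumPy; it is the generic recipe, not Wan2.2-animate's relighting module, and all dimensions and names are illustrative. A frozen weight matrix W is augmented with a trainable low-rank update B @ A, so only the two small matrices need training.

```python
# Generic Low-Rank Adaptation (LoRA) sketch: adapt a frozen weight W with
# a trainable low-rank correction B @ A, where rank r is much smaller
# than the layer width d. Illustrative only; dimensions are made up.
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                              # layer width, adapter rank (r << d)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # Base path plus the low-rank correction. With B zero-initialized,
    # the adapter starts as an exact no-op, as in the original LoRA recipe.
    return x @ W.T + x @ (B @ A).T

x = rng.standard_normal((1, d))
base = x @ W.T   # output of the unmodified frozen layer
```

Because only A and B (2 × r × d parameters) are trained, such an auxiliary adapter can specialize a large frozen model for a narrow task like relighting at a small fraction of full fine-tuning cost.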
This advancement supports content creators across film, television, short-form video, gaming, and advertising by simplifying animation workflows and reducing production costs for both entertainment and commercial applications.
Wan2.2-animate is available for download on Hugging Face, GitHub, and Alibaba’s open-source community ModelScope. Alibaba has been a significant contributor to the global open-source community: to date, the Wan series has generated over 30 million downloads across open-source communities and third-party platforms.