A free browser-based workflow for character animation and subject replacement. Upload a source video and a reference image, then render a new performance with controllable masks and frame-by-frame consistency tools.
Direction Control
Switch between video-driven and reference-driven replacement paths depending on whether you prioritize motion fidelity or identity fidelity.
Advanced Masking
Refine face and subject regions before propagation to reduce spill and keep edges cleaner around hair, jawline, and fast motion.
Intermediate Outputs
Check pose, mask, background, and face tracks to quickly diagnose issues and rerun with better settings.
Web-Native Workflow
No local install required. Launch, test, and iterate directly in the browser with lightweight setup and fast feedback loops.
This embedded app mirrors alexnasa/Wan2.2-Animate-ZEROGPU on Hugging Face Spaces.
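If you would rather script runs than click through the UI, Gradio-backed Spaces can usually be driven with the gradio_client library. This is a minimal sketch, not a documented interface; the endpoint and parameter names in the commented call are assumptions, so run client.view_api() first to see what this Space actually exposes.

```python
# Hedged sketch: driving the Space programmatically with gradio_client.
from gradio_client import Client, handle_file

client = Client("alexnasa/Wan2.2-Animate-ZEROGPU")
client.view_api()  # prints the real endpoint names and parameter signatures

# The call below is illustrative only; the parameter names and endpoint
# are assumptions, so adapt them to what view_api() reports.
# result = client.predict(
#     video=handle_file("source_clip.mp4"),    # assumed parameter name
#     reference=handle_file("reference.png"),  # assumed parameter name
#     api_name="/predict",                     # assumed endpoint
# )
# print(result)
```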
Recommended Input Pairing
Use a source clip with clear frontal-to-3/4 face visibility and a reference image with similar lighting direction for more stable replacement quality.
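One quick way to sanity-check face visibility before uploading is OpenCV's bundled frontal-face detector. This is a rough pre-flight heuristic, assuming a local file named source_clip.mp4; a low hit rate suggests the clip lacks the frontal-to-3/4 coverage this workflow prefers.

```python
# Pre-flight check: what fraction of sampled frames contain a detectable
# frontal face? A low ratio hints the clip may be a weak source.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("source_clip.mp4")  # placeholder filename
hits = total = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    total += 1
    if total % 10:  # sample every 10th frame for speed
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(detector.detectMultiScale(gray, 1.1, 5)) > 0:
        hits += 1
cap.release()

print(f"frontal face found in {hits}/{total // 10} sampled frames")
```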
Best Clip Length
Short clips (2-6 seconds) generally converge faster and make troubleshooting easier before running longer sequences.
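To produce a short test segment, a lossless stream-copied trim is enough. Here is a sketch that drives the ffmpeg and ffprobe command-line tools from Python; the filenames are placeholders and both tools must be on your PATH.

```python
# Probe a clip's duration, then trim a 4-second test segment if it is long.
import subprocess

def duration_seconds(path: str) -> float:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

src = "source_clip.mp4"  # placeholder
if duration_seconds(src) > 6:
    # Stream-copy the first 4 seconds: fast and lossless, no re-encode.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", "0", "-t", "4", "-i", src,
         "-c", "copy", "test_clip.mp4"],
        check=True,
    )
```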
Plan your run like a mini pipeline: clean inputs, correct mode selection, mask refinement, then quality inspection using intermediate streams.
Supports both Video -> Ref Image and Video <- Ref Image replacement directions. Test both with the same assets and keep the one that better preserves the feature you care about most.
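Once you have both renders saved locally, a side-by-side comparison video makes the trade-off obvious. A sketch assuming hypothetical output filenames for the two modes:

```python
# Stitch the two renders into one side-by-side comparison video.
# Filenames are assumptions; use whatever you saved each mode's output as.
import cv2
import numpy as np

a = cv2.VideoCapture("output_video_to_ref.mp4")
b = cv2.VideoCapture("output_ref_to_video.mp4")
writer = None

while True:
    ok_a, fa = a.read()
    ok_b, fb = b.read()
    if not (ok_a and ok_b):
        break
    fb = cv2.resize(fb, (fa.shape[1], fa.shape[0]))  # match frame sizes
    pair = np.hstack([fa, fb])
    if writer is None:
        fps = a.get(cv2.CAP_PROP_FPS) or 24
        writer = cv2.VideoWriter(
            "mode_comparison.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
            fps, (pair.shape[1], pair.shape[0]),
        )
    writer.write(pair)

a.release()
b.release()
if writer:
    writer.release()
```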
Activate advanced mask editing for complex backgrounds, overlapping hands, or fast motion. Better masks usually reduce ghosting and edge artifacts in the final render.
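The app's internal masking is not shown here, but the underlying technique of growing and then feathering a binary mask is easy to prototype if you want to understand why better masks reduce ghosting. A sketch assuming a hypothetical exported mask image, subject white on black:

```python
# Sketch of the general technique, not the app's own masking code.
# Assumes a hypothetical "subject_mask.png": subject 255, background 0.
import cv2
import numpy as np

mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

# Grow the mask slightly so hair and jawline edges are fully covered.
kernel = np.ones((7, 7), np.uint8)
grown = cv2.dilate(mask, kernel, iterations=1)

# Feather the boundary: a soft 0-255 ramp blends better than a hard cut
# and tends to reduce visible spill in fast-motion frames.
feathered = cv2.GaussianBlur(grown, (21, 21), 0)

cv2.imwrite("subject_mask_feathered.png", feathered)
```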
Review pose, background, mask, and face previews to locate the exact stage where drift appears. This lets you adjust only what matters instead of changing everything blindly.
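If the app lets you download the intermediate tracks, stacking their first frames into a single image speeds up this inspection. A small sketch with assumed filenames:

```python
# Build a one-glance overview of the intermediate tracks.
# Filenames are assumptions; substitute whatever the app exports.
import cv2
import numpy as np

tracks = ["pose.mp4", "mask.mp4", "background.mp4", "face.mp4"]

frames = []
for path in tracks:
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()  # first frame of each track is usually enough
    cap.release()
    if ok:
        frames.append(cv2.resize(frame, (320, 180)))

if frames:
    cv2.imwrite("track_overview.png", np.hstack(frames))
```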
Follow this process for your first successful run, then use the FAQ to solve common issues fast.
Start with a short clip to validate quality quickly. Pick a duration that matches your test goal and keeps queue time manageable.
Choose an image with a similar camera angle and lighting direction. Then select the replacement mode that best preserves either motion or identity.
Review the final and intermediate streams. If boundaries flicker, refine masks before changing other settings.
Drift usually comes from weak reference quality or a severe pose mismatch. Use a clearer reference, shorten the clip, and compare both direction modes.
Rebuild masks around hair and contour boundaries, and avoid highly compressed source videos where edge detail is already lost.
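Compression is easy to measure before you upload. A rough heuristic sketch using ffprobe; the bits-per-pixel threshold is a loose rule of thumb, not something the app enforces.

```python
# Rough compression check: very low bitrate for the resolution usually
# means edge detail around hair is already destroyed.
import subprocess

def probe(path: str, entry: str) -> str:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", f"stream={entry}",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

src = "source_clip.mp4"  # placeholder
width, height = int(probe(src, "width")), int(probe(src, "height"))
raw = probe(src, "bit_rate")

if raw.isdigit():
    bpp = int(raw) / (width * height * 30)  # assumes ~30 fps for the estimate
    print(f"{int(raw) / 1e6:.1f} Mbps, ~{bpp:.3f} bits per pixel per frame")
    if bpp < 0.05:  # loose heuristic threshold, not a hard rule
        print("warning: heavily compressed; edge detail may already be gone")
else:
    print("container did not report a stream bitrate for this file")
```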
Start at 2-4 seconds for iteration speed, then render longer once your mode and mask setup are validated.