This is an open implementation of a motion brush. Like Gen-2's Motion Brush, it lets you specify the region of an image that should have motion. This implementation is based on:
- diffusers
- Stable Video Diffusion by Stability AI
🔥🚀 Try it in Google Colab 🚀🔥
Please use a GPU to run the demo; on a T4 GPU, it may take ~5 minutes to process a 25x256x512 video.
Steps to animate your image:
- Upload your image
- Use the sketch brush to specify the region to have motion
- [Optional] Adjust the parameters; the motion bucket ID controls the motion strength (larger means stronger motion)
- Click "Generate" to see the result
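On the motion bucket ID parameter: in diffusers, it is passed to `StableVideoDiffusionPipeline` as `motion_bucket_id` (default 127), and SVD was trained with values up to roughly 255. A hypothetical helper (not part of this repository) that maps a normalized strength slider to a valid bucket ID might look like:

```python
# Illustrative helper (assumption, not the repo's code): map a motion
# strength in [0, 1] to SVD's integer `motion_bucket_id` conditioning.
def strength_to_motion_bucket_id(strength: float, max_bucket: int = 255) -> int:
    """Clamp strength to [0, 1] and scale to an integer bucket ID (>= 1)."""
    strength = min(max(strength, 0.0), 1.0)
    return max(1, round(strength * max_bucket))
```

The returned value would then be forwarded to the pipeline call, e.g. `pipe(image, motion_bucket_id=strength_to_motion_bucket_id(0.8), ...)`.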
If you want to try a simpler demo or see the trick behind the motion brush, the minimal code lives in the motion_brush_minimum_code directory of this repository. It is quite simple and easy to understand.
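In broad strokes, the trick appears to be latent replacement during denoising: outside the brushed mask, the evolving video latents are overwritten at each step with re-noised latents of the static input image, so only the masked region is free to move. A minimal sketch of that blend, using NumPy on toy tensors for illustration (the names and the exact renoising schedule are assumptions, not the repository's code):

```python
import numpy as np

def masked_latent_replacement(video_latents, static_latents, mask, noise, sigma):
    """Freeze everything outside `mask` to the (re-noised) static image.

    video_latents:  (frames, c, h, w) latents currently being denoised
    static_latents: (1, c, h, w) latents of the input image, broadcast over frames
    mask:           (1, 1, h, w) with 1 = animate, 0 = freeze
    noise, sigma:   noise sample and current noise level, used to bring the
                    static latents to the scheduler's current step
    """
    noisy_static = static_latents + sigma * noise  # renoise to the current step
    # Brushed region keeps the generated motion; the rest is pinned
    return mask * video_latents + (1 - mask) * noisy_static

# Toy shapes -- no model needed to see the behavior
rng = np.random.default_rng(0)
video = rng.normal(size=(4, 4, 8, 8))
static = rng.normal(size=(1, 4, 8, 8))
noise = rng.normal(size=(4, 4, 8, 8))
mask = np.zeros((1, 1, 8, 8))
mask[..., :4] = 1.0  # animate only the left half of the frame
out = masked_latent_replacement(video, static, mask, noise, sigma=0.5)
```

In a real pipeline this blend would run inside the denoising loop (e.g. via a step callback), which is also where the "number of time steps to do replacement" item on the TODO list below would plug in.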
First, install PyTorch for your device, then clone the repository and install the dependencies:

```shell
git clone https://github.com/cplusx/open-diffusion-motion-brush.git
cd open-diffusion-motion-brush
pip install -r requirements.txt
```
Start the demo by running:

```shell
python gradio_demo.py
```
The following GIF shows how to use the motion brush to specify the region that should have motion.
(Note: the GIF omits the processing frames; generation takes ~2 minutes on a V100 for a 25x576x1024 video.)
Functions TODO list
- [ ] Mask dilation
- [x] Google Colab demo
- [ ] Handle image resizing when the input image is too large or does not satisfy the size requirement
- [ ] Set the number of time steps to do replacement
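For the image-resizing item, one possible sketch (an assumption, not the planned implementation): SVD-style UNets typically want spatial dimensions divisible by a fixed multiple, and very large inputs are slow, so scale the image down to a cap and snap both sides to that multiple.

```python
# Hypothetical helper for the resizing TODO: scale to fit `max_side`
# and snap width/height to multiples of `multiple` (assumed to be 64).
def fit_dimensions(width: int, height: int, max_side: int = 1024, multiple: int = 64):
    """Return (new_width, new_height) scaled to fit and snapped to `multiple`."""
    scale = min(1.0, max_side / max(width, height))
    new_w = max(multiple, int(width * scale) // multiple * multiple)
    new_h = max(multiple, int(height * scale) // multiple * multiple)
    return new_w, new_h
```

For example, a 2048x1152 input would come out as 1024x576, which matches SVD's default resolution.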