Building on the work of Wang and Ponce [1], we analyze the structure of the latent space of human motion learned by VAEs, to understand whether these spaces contain interpretable directions of change even without explicit training.
To set up the environment:

```
conda env create -f environment.yml
```
- Get the code from MotionCLIP [2], ACTOR [3], and GAN-Geometry [1]. To run ACTOR and MotionCLIP simultaneously, rename MotionCLIP's `src` folder to `clip_src`.
- Change the paths in `tools/paths.py` to point to the correct locations of these repos.
- The main file is `main.py`. You can specify the generator and the distance function to test. Of the available tasks, the two most important are calculating the Hessian at randomly sampled latent points (`python3 main.py calculate`) and visualizing the changes in the generated motion sequences while moving along the dominant eigenvectors (`python3 main.py visualize`).
Example:

```
python3 main.py visualize --wrapper clip --scorer low --cutoff 50 --sample_class 4 --num_samples 2 --eiglist 0,4,10,30,49 --maxdist 0.5
```
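For context, the `calculate` task follows the approach of Wang and Ponce [1]: the distance between the motion generated at a reference point `z0` and the motion generated at a nearby `z` is treated as a function of `z`, and the eigenvectors of its Hessian at `z0` give the dominant directions of change. Below is a minimal sketch of that computation, assuming a differentiable PyTorch generator and a scalar distance function; the names `generate`, `dist_fn`, and `z0` are illustrative, not the repo's actual API:

```python
import torch

def hessian_eigs(generate, dist_fn, z0, top_k=50):
    """Eigendecompose the Hessian of d(G(z0), G(z)) at z = z0.

    `generate` maps a latent vector to a motion sequence and `dist_fn`
    is a scalar distance between two motions; both stand in for the
    repo's wrapper and scorer objects.
    """
    ref = generate(z0).detach()  # fixed reference motion

    def objective(z):
        return dist_fn(ref, generate(z))

    # Full Hessian via autograd; feasible for moderate latent dimensions.
    H = torch.autograd.functional.hessian(objective, z0)
    eigvals, eigvecs = torch.linalg.eigh(H)  # ascending eigenvalue order
    # Keep the top-k directions of largest curvature (dominant eigenvectors).
    return eigvals.flip(0)[:top_k], eigvecs.flip(1)[:, :top_k]
```

The `visualize` task then moves `z0` along these eigenvectors and renders the resulting motions; `--maxdist` presumably bounds how far to travel along each direction.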
To add a new distance function (see the sketch after this list):

- Add the distance function to `tools/sim_score.py`.
- Update the list of `available_dist_functions` in `tools/paths.py`.
- Update `make_scorer` in `main.py`.
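As an illustration of the first step, here is a minimal sketch of a distance function that could live in `tools/sim_score.py`; the per-frame MSE metric, the function name, and the tensor layout are all assumptions, not the repo's actual code:

```python
import torch

def mse_distance(motion_a, motion_b):
    """Mean squared error between two motion sequences.

    Assumes both inputs are tensors of identical shape, e.g.
    (frames, joints, 3); the real layout depends on the wrapper in use.
    """
    return torch.mean((motion_a - motion_b) ** 2)
```

Once registered in `available_dist_functions` and mapped in `make_scorer`, the new function can be selected with the `--scorer` flag.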
To add a new generator (a minimal wrapper sketch follows this list):

- Add a wrapper file for the generator to `tools/`. This wrapper should include a `sample_vector` function and a `generate` function.
- Update the list of `available_wrappers` in `tools/paths.py`.
- Update `make_wrapper` in `main.py`.
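A minimal sketch of such a wrapper module, assuming module-level functions and a PyTorch generator; the latent dimensionality, checkpoint path, and `decode` entry point are placeholders for whatever the new generator actually provides:

```python
import torch

LATENT_DIM = 256   # assumption: latent dimensionality of the new generator
_model = None      # the pretrained generator, loaded lazily

def _load_model():
    global _model
    if _model is None:
        # Assumption: loading depends entirely on the generator's own repo.
        _model = torch.load("path/to/checkpoint.pt")
    return _model

def sample_vector(num_samples=1, sample_class=None):
    """Draw random latent starting points (class conditioning unused here)."""
    return torch.randn(num_samples, LATENT_DIM)

def generate(z):
    """Decode latent vectors into motion sequences (must stay differentiable)."""
    return _load_model().decode(z)  # assumption: decoder entry point
```

After registering the module in `available_wrappers` and `make_wrapper`, it can be selected with the `--wrapper` flag.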
[1] Binxu Wang, Carlos R. Ponce (2021). The Geometry of Deep Generative Image Models and Its Application. ICLR 2021.
[2] Guy Tevet, Brian Gordon, Amir Hertz, Amit H. Bermano, Daniel Cohen-Or (2022). MotionCLIP: Exposing Human Motion Generation to CLIP Space. ECCV 2022.
[3] Mathis Petrovich, Michael J. Black, Gül Varol (2021). Action-Conditioned 3D Human Motion Synthesis with Transformer VAE. ICCV 2021.