FACSvatar is a modular framework that connects different software over ZeroMQ (a data-messaging library) to enable both the animation and analysis of Action Units (AUs). At present, the standard workflow uses a modified OpenFace as FACS input, which is transported and manipulated (smoothed) through FACSvatar and then forwarded to either Unity3D (real-time) or Blender (high-quality) for animation.
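The "smoothing" step mentioned above can be pictured with a minimal sketch. This is an illustration only, assuming a simple exponential moving average over per-frame AU values; FACSvatar's actual filter and data layout may differ.

```python
# Illustrative smoothing of a stream of per-frame AU values with an
# exponential moving average (NOT FACSvatar's actual filter).

def smooth_stream(frames, alpha=0.4):
    """Yield smoothed per-frame AU dicts; alpha in (0, 1]."""
    state = {}
    for frame in frames:
        smoothed = {}
        for au, value in frame.items():
            prev = state.get(au, value)  # first sighting: no smoothing
            smoothed[au] = alpha * value + (1 - alpha) * prev
        state = smoothed
        yield smoothed

frames = [{"AU06": 0.0, "AU12": 1.0}, {"AU06": 1.0, "AU12": 1.0}]
print(list(smooth_stream(frames, alpha=0.5)))
# → [{'AU06': 0.0, 'AU12': 1.0}, {'AU06': 0.5, 'AU12': 1.0}]
```

A higher `alpha` tracks the raw OpenFace output more closely; a lower one trades latency for smoother animation.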
Almost all modules have a main.py which functions as the entry point for that module.
To see which additional arguments are available, execute: python main.py -h.
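As a rough sketch, an entry point of this kind typically parses its arguments with argparse; the snippet below is hypothetical (only `--csv_folder` appears in these docs, and each real main.py defines its own options, listed by `python main.py -h`).

```python
# Hypothetical sketch of a module entry point's argument parsing;
# the real main.py scripts define their own arguments.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="FACSvatar module")
    parser.add_argument("--csv_folder", default="openface",
                        help="folder containing OpenFace .csv files")
    return parser.parse_args(argv)

args = parse_args(["--csv_folder", "my_videos"])
print(args.csv_folder)  # → my_videos
```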
First time using these modules? Please head over to: First run (non-Docker)
These modules provide the basic functionality of FACSvatar and are (almost) always needed.
FACS from OpenFace csv
Allows OpenFace's analysis results to be sent as messages through FACSvatar.
Looks in the folder given as argument (--csv_folder) for the specified .csv file(s).
Actually, it first checks
specified_folder_clean to see if a cleaned version of the .csv file already exists.
If not, it creates this
_clean folder with the cleaned .csv.
"Cleaning" here means removing unused columns, stripping trailing spaces from column names, etc.
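The cleaning step can be sketched as follows. This is a simplified illustration: the kept column names and prefixes are assumptions for the example, and the real module's column selection differs in detail.

```python
# Illustrative "cleaning" of an OpenFace .csv: strip whitespace from
# column names and keep only the columns the pipeline uses.
# The kept names/prefixes below are assumptions for this example.
import csv
import io

KEEP_EXACT = ("frame", "timestamp", "confidence", "success")

def clean_csv(text, keep_prefixes=("AU",), keep_exact=KEEP_EXACT):
    reader = csv.reader(io.StringIO(text))
    header = [name.strip() for name in next(reader)]  # drop trailing spaces
    cols = [i for i, name in enumerate(header)
            if name in keep_exact or name.startswith(keep_prefixes)]
    rows = [[header[i] for i in cols]]
    for row in reader:
        rows.append([row[i] for i in cols])
    return rows

raw = "frame, timestamp, gaze_angle_x, AU06_r\n1,0.033,0.1,0.8\n"
print(clean_csv(raw))
# → [['frame', 'timestamp', 'AU06_r'], ['1', '0.033', '0.8']]
```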
In multi-avatar setups, you can send 2 or more .csv files in parallel. For details, see:
This module allows modules to communicate in an m-to-n fashion.
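The m-to-n idea is that of publish/subscribe routing: any number of publishers send messages on a topic, and any number of subscribers receive the topics they subscribed to. FACSvatar does this over ZeroMQ; the pure-Python sketch below only illustrates the pattern (the module and topic names are made up).

```python
# Pure-Python illustration of m-to-n publish/subscribe routing.
# FACSvatar does this over ZeroMQ, where subscribers filter messages
# by topic prefix; topic names here are made up.

class Bus:
    def __init__(self):
        self.subscribers = []  # list of (topic_prefix, callback)

    def subscribe(self, prefix, callback):
        self.subscribers.append((prefix, callback))

    def publish(self, topic, message):
        # Deliver to every subscriber whose prefix matches the topic.
        for prefix, callback in self.subscribers:
            if topic.startswith(prefix):
                callback(topic, message)

bus = Bus()
received = []
bus.subscribe("openface", lambda t, m: received.append(("unity", m)))
bus.subscribe("openface", lambda t, m: received.append(("blender", m)))
bus.publish("openface.au", {"AU06": 0.8})
print(received)
# → [('unity', {'AU06': 0.8}), ('blender', {'AU06': 0.8})]
```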
FACS to MB-Lab blend shapes
Converts FACS-based AU values to the blend shape values found in avatars created with MB-Lab.
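Conceptually, such a conversion maps each AU intensity onto one or more blend shape weights. The sketch below is hypothetical: the blend shape names and the linear 0-5 to 0-1 scaling are assumptions for illustration, not FACSvatar's actual mapping table.

```python
# Hypothetical mapping from AU intensities (OpenFace reports 0-5) to
# blend shape weights (0-1). Target names and the linear scaling are
# illustrative assumptions, not FACSvatar's real conversion table.
AU_TO_BLENDSHAPE = {
    "AU06": "cheekSquint_L",   # illustrative target name
    "AU12": "mouthSmile_L",    # illustrative target name
}

def aus_to_blendshapes(aus, max_intensity=5.0):
    shapes = {}
    for au, value in aus.items():
        target = AU_TO_BLENDSHAPE.get(au)
        if target is not None:
            # Scale to 0-1 and clamp out-of-range values.
            shapes[target] = min(max(value / max_intensity, 0.0), 1.0)
    return shapes

print(aus_to_blendshapes({"AU06": 2.5, "AU12": 5.0}))
# → {'cheekSquint_L': 0.5, 'mouthSmile_L': 1.0}
```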
OpenFace offline (use .csv files created with OpenFace)
OpenFace creates .csv files after analyzing a video. This module reads those .csv files and sends their data as one message per frame to FACSvatar.
Please see ‘Use your own videos’ if you want to use your own OpenFace analysis results.
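The per-frame sending can be sketched as below. This is a simplified illustration assuming JSON messages with `frame`, `timestamp`, and AU columns; the real module's message format and ZeroMQ transport details may differ.

```python
# Illustrative conversion of OpenFace .csv rows into one message per
# frame; the real module sends such messages over ZeroMQ, and its
# exact JSON layout may differ.
import csv
import io
import json

def frames_to_messages(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        aus = {k: float(v) for k, v in row.items() if k.startswith("AU")}
        yield json.dumps({"frame": int(row["frame"]),
                          "timestamp": float(row["timestamp"]),
                          "au_r": aus})

csv_text = "frame,timestamp,AU06_r\n1,0.033,0.8\n2,0.066,0.9\n"
for msg in frames_to_messages(csv_text):
    print(msg)
```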
These modules add extra functionality to FACSvatar.
Deep Neural Network