This is the official repository for the paper EmoFace: Audio-driven emotional 3D face animation: Arxiv
The pre-trained weights of EmoFace are now available on Google Drive! Download the weights and put them under the weight folder.
As we cannot release the full dataset yet, we provide one sample from the evaluation set.
First, the dataloaders for training and testing need to be generated with data.py. The path to the dataset needs to be assigned in the script.
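As a rough sketch of what that setup might look like (the class and variable names below, such as EmoFaceDataset and dataset_path, are illustrative assumptions rather than the actual contents of data.py):

```python
# Illustrative sketch only -- data.py defines its own dataset class and
# path variable, so adapt these names to the actual script.
from torch.utils.data import DataLoader, Dataset

dataset_path = "/path/to/emoface_dataset"  # assign the dataset root here

class EmoFaceDataset(Dataset):
    """Hypothetical dataset pairing audio features with 174-dim rig sequences."""
    def __init__(self, root, split="train"):
        self.samples = []  # fill with (audio, rig) pairs loaded from `root`

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        audio, rig = self.samples[idx]
        return audio, rig

train_loader = DataLoader(EmoFaceDataset(dataset_path, "train"), batch_size=1, shuffle=True)
test_loader = DataLoader(EmoFaceDataset(dataset_path, "test"), batch_size=1, shuffle=False)
```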
Training and testing are combined in main.py. To run the model with the default settings, you only need to set the maximum number of epochs:
python main.py --max_epoch 1000
During training, the weights are saved in the weight/ directory.
The directory blink contains files related to blinks.
demo.py uses the trained model to output the corresponding controller rig values for audio clips. The model weight path PATH, the path to the audio files audio_path, and the save path pred_path need to be assigned inside the script.
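For example, the top of demo.py could be edited along the following lines (the file and directory names are placeholders, not files shipped with the repository):

```python
# Example assignments inside demo.py -- paths below are placeholders.
PATH = "weight/emoface.pth"   # pre-trained model weights
audio_path = "audio/"         # input audio clips
pred_path = "result/"         # where the predicted rig values are written
```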
The predicted expressions are formatted as 174-dimensional MetaHuman controller parameter sequences and saved in a .txt file. The results can be visualized using either Maya or UE5; the visualization scripts have been uploaded to the visualization folder.
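To inspect a prediction programmatically, a sketch like the following should work, assuming whitespace-separated values with one frame per row (the filename is a placeholder):

```python
import numpy as np

# Expected shape is (num_frames, 174): one row of MetaHuman controller
# parameters per frame. Pass delimiter="," instead if the file is comma-separated.
rig_values = np.loadtxt("result/sample_pred.txt")
print(rig_values.shape)
```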
(1) MAYA
(2) Unreal Engine 5
Unlike in Maya, the output controller rig sequences cannot be imported into UE5 directly, so we use Maya as an intermediary: export an .fbx file from Maya and then import it into UE5. The workflow is introduced in this video: How to Rig and Animate a Metahuman: Maya to Unreal Engine 5 Workflow - YouTube. For setting keyframes, set_frames.py can be used to set frame-wise controller values in Maya, as sketched below.
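As a rough illustration of frame-wise keying in Maya (not necessarily how set_frames.py is implemented), each controller attribute can be keyed per frame with maya.cmds:

```python
# Run inside Maya's Python environment. Controller/attribute names are
# placeholders; the real MetaHuman face board controls should be used.
import maya.cmds as cmds

def set_controller_keys(frames, ctrl_names, start_frame=0):
    """Key every controller attribute for every frame.

    frames:     sequence of frames, each a list of 174 controller values
    ctrl_names: list of 'node.attribute' strings matching the 174 columns
    """
    for i, values in enumerate(frames):
        for name, value in zip(ctrl_names, values):
            node, attr = name.split(".")
            cmds.setKeyframe(node, attribute=attr, time=i + start_frame, value=value)
```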
If you find our work helpful for your research, please cite our paper:
@inproceedings{liu2024emoface,
title={EmoFace: Audio-driven emotional 3D face animation},
author={Liu, Chang and Lin, Qunfen and Zeng, Zijiao and Pan, Ye},
booktitle={2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
pages={387--397},
year={2024},
organization={IEEE}
}