- Hugging Face Space demo app
- Google Colab notebook demo
- User Guide, Documentation, ChatGPT facetorch guide
- Docker Hub (GPU)
Facetorch is a Python library designed for facial detection and analysis, leveraging the power of deep neural networks. Its primary aim is to curate open-source face analysis models from the community, optimize them for high performance using TorchScript, and integrate them into a versatile face analysis toolkit. The library offers the following key features:
- **Customizable Configuration:** Easily configure your setup using Hydra and its powerful OmegaConf capabilities.
- **Reproducible Environments:** Ensure reproducibility with tools like conda-lock for dependency management and Docker for containerization.
- **Accelerated Performance:** Enjoy enhanced performance on both CPU and GPU with TorchScript optimization.
- **Simple Extensibility:** Extend the library by uploading your model file to Google Drive and adding a corresponding configuration YAML file to the repository.
Facetorch provides an efficient, scalable, and user-friendly solution for facial analysis tasks, catering to developers and researchers looking for flexibility and performance.
Please use this library responsibly and with caution. Adhere to the European Commission's Ethics Guidelines for Trustworthy AI to ensure ethical and fair usage. Keep in mind that the models may have limitations and potential biases, so it is crucial to evaluate their outputs critically and consider their impact.
PyPI:

```bash
pip install facetorch
```

Conda:

```bash
conda install -c conda-forge facetorch
```

Docker Compose provides an easy way of building a working facetorch environment with a single command.

CPU:

```bash
docker compose run facetorch python ./scripts/example.py
```

GPU:

```bash
docker compose run facetorch-gpu python ./scripts/example.py analyzer.device=cuda
```

Check `data/output` for the resulting images with bounding boxes and facial 3D landmarks.
(Apple Mac M1) Use Rosetta 2 emulator in Docker Desktop to run the CPU version.
The project is configured by files located in `conf`, with the main file being `conf/config.yaml`. One can easily add or remove modules from the configuration.
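As a rough illustration, a Hydra-composed configuration might look like the sketch below. This is not facetorch's actual schema; the real keys are defined by the files under `conf/`, and the group and option names here are only meant to show the shape of a Hydra defaults list and a runtime override.

```yaml
# Illustrative sketch only -- the actual schema lives in conf/config.yaml
# and the files it composes; key names below may differ from facetorch's.
defaults:
  - analyzer/detector: retinaface
  - analyzer/predictor/fer: efficientnet_b2_8

analyzer:
  device: cpu      # can be overridden at runtime, e.g. analyzer.device=cuda
  batch_size: 8
```

Hydra merges the composed files into a single config object, which is why a command-line override such as `analyzer.device=cuda` works without editing any file.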
FaceAnalyzer is the main class of facetorch as it is the orchestrator responsible for initializing and running the following components:
```
analyzer
├── reader
├── detector
├── unifier
├── predictor
│   ├── embed
│   ├── verify
│   ├── fer
│   ├── au
│   ├── va
│   ├── deepfake
│   └── align
└── utilizer
    ├── align
    ├── draw
    └── save
```
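The orchestration above can be sketched in plain Python. This is a conceptual, stdlib-only sketch of the pipeline pattern; the class and method names are illustrative and are not facetorch's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Conceptual sketch of the orchestrator pattern; names are illustrative,
# not facetorch's actual API.
@dataclass
class Pipeline:
    reader: Callable        # loads an image from a path
    detector: Callable      # finds faces in the image
    unifier: Callable       # normalizes face crops to a common form
    predictors: Dict[str, Callable] = field(default_factory=dict)  # embed, fer, ...
    utilizers: List[Callable] = field(default_factory=list)        # draw, save, ...

    def run(self, path: str):
        data = self.reader(path)
        faces = self.detector(data)
        faces = [self.unifier(face) for face in faces]
        preds = {name: [p(face) for face in faces]
                 for name, p in self.predictors.items()}
        for utilizer in self.utilizers:
            utilizer(data, faces, preds)
        return preds

# Toy stages standing in for the real components.
pipe = Pipeline(
    reader=lambda p: p,
    detector=lambda img: ["face1", "face2"],
    unifier=lambda f: f.upper(),
    predictors={"fer": lambda f: f + ":happy"},
)
print(pipe.run("test.jpg"))  # {'fer': ['FACE1:happy', 'FACE2:happy']}
```

Each stage only needs to agree with its neighbors on the data it passes along, which is what lets individual modules be swapped via configuration.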
| model | source | params | license | version |
| ------------- | --------- | --------- | ----------- | ------- |
| RetinaFace | biubug6 | 27.3M | MIT license | 1 |
| model | source | params | license | version |
| ----------------- | ---------- | ------- | ----------- | ------- |
| ResNet-50 VGG 1M | 1adrianb | 28.4M | MIT license | 1 |
`include_tensors` needs to be True in order to include the model prediction in `Prediction.logits`.

| model | source | params | license | version |
| ---------------- | ----------- | -------- | ------------------ | ------- |
| MagFace+UNPG | Jung-Jun-Uk | 65.2M | Apache License 2.0 | 1 |
| AdaFaceR100W12M | mk-minchul | - | MIT License | 2 |
`include_tensors` needs to be True in order to include the model prediction in `Prediction.logits`.

| model | source | params | license | version |
| ----------------- | -------------- | -------- | ------------------ | ------- |
| EfficientNet B0 7 | HSE-asavchenko | 4M | Apache License 2.0 | 1 |
| EfficientNet B2 8 | HSE-asavchenko | 7.7M | Apache License 2.0 | 2 |
| model | source | params | license | version |
| ------------------- | --------- | ------- | ------------------ | ------- |
| OpenGraph Swin Base | CVI-SZU | 94M | MIT License | 1 |
| model | source | params | license | version |
| ----------------- | ---------- | ------- | ----------- | ------- |
| ELIM AL AlexNet | kdhht2334 | 2.3M | MIT license | 1 |
| model | source | params | license | version |
| -------------------- | ---------------- | -------- | ----------- | ------- |
| EfficientNet B7 | selimsef | 66.4M | MIT license | 1 |
| model | source | params | license | version |
| ----------------- | ---------------- | -------- | ----------- | ------- |
| MobileNet v2 | choyingw | 4.1M | MIT license | 1 |
`include_tensors` needs to be True in order to include the model prediction in `Prediction.logits`.

Models are downloaded automatically at runtime to the `models` directory. You can also download the models manually from a public Google Drive folder.
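Download-on-first-use caching of this kind can be sketched with the standard library alone. The function and argument names below are illustrative, not facetorch's actual downloader.

```python
import urllib.request
from pathlib import Path

# Sketch of a download-on-first-use model cache; names are illustrative,
# not facetorch's actual downloader implementation.
def ensure_model(url: str, models_dir: str, filename: str) -> Path:
    target = Path(models_dir) / filename
    if target.exists():  # already cached from a previous run: skip the download
        return target
    target.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, target)  # fetch once, reuse afterwards
    return target
```

On subsequent runs the file is found on disk and the network is never touched, which is why only the first run pays the download cost.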
Once the default model configuration (`conf/config.yaml`) has been initialized and warmed up to the initial image size 1080x1080 by the first run, the image test.jpg (4 faces) is analyzed (including drawing boxes and landmarks, but not saving) in about 486 ms and test3.jpg (25 faces) in about 1845 ms (batch_size=8) on an NVIDIA Tesla T4 GPU. Execution times can be monitored in the logs at the DEBUG level.
Detailed test.jpg execution times:
```
analyzer
├── reader: 27 ms
├── detector: 193 ms
├── unifier: 1 ms
├── predictor
│   ├── embed: 8 ms
│   ├── verify: 58 ms
│   ├── fer: 28 ms
│   ├── au: 57 ms
│   ├── va: 1 ms
│   ├── deepfake: 117 ms
│   └── align: 5 ms
└── utilizer
    ├── align: 8 ms
    ├── draw_boxes: 22 ms
    ├── draw_landmarks: 7 ms
    └── save: 298 ms
```
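Per-component timings like these can be captured with a simple DEBUG-level timer. The sketch below uses only the standard library and only illustrates the idea; it is not facetorch's internal logging implementation.

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("analyzer")

# Stdlib timing helper; illustrates DEBUG-level execution-time logs,
# not facetorch's actual internals.
@contextmanager
def timed(name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        logger.debug("%s: %.0f ms", name, elapsed_ms)

with timed("detector"):
    time.sleep(0.05)  # stand-in for real work; logs "detector: 50 ms"
```

Because the messages are emitted at DEBUG level, they are invisible at the default INFO level and appear only when verbose logging is switched on.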
Run the Docker container:

```bash
docker compose -f docker-compose.dev.yml run facetorch-dev
docker compose -f docker-compose.dev.yml run facetorch-dev-gpu
```

Facetorch works with models that were exported from PyTorch to TorchScript. You can apply the `torch.jit.trace` function to compile a PyTorch model as a TorchScript module. Please verify that the output of the traced model equals the output of the original model.
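That verification can be as simple as the sketch below, which uses a toy two-layer model as a stand-in; replace it with your own model and a representative input.

```python
import torch
import torch.nn as nn

# Toy stand-in for the model you want to export; use your real model here.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU())
model.eval()

example_input = torch.randn(1, 8)  # representative input for tracing
traced = torch.jit.trace(model, example_input)

# Verify the traced module reproduces the original model's output.
assert torch.allclose(model(example_input), traced(example_input))

traced.save("traced_model.pt")  # TorchScript file ready for deployment
```

Checking on a fresh input (not just the tracing input) is a good idea, since tracing records one execution path and can silently bake in data-dependent behavior.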
The first models are hosted in my public Google Drive folder. You can either send the new model to me for upload, host it on your own Google Drive, or host it elsewhere and add your own downloader object to the codebase.
1. Create a new predictor configuration folder in `/conf/analyzer/predictor/`, following the FER example in `/conf/analyzer/predictor/fer/`.
2. Copy `/conf/analyzer/predictor/fer/efficientnet_b2_8.yaml` to the new folder `/conf/analyzer/predictor/<predictor_name>/` and rename it to match the new model: `/conf/analyzer/predictor/<predictor_name>/<model_name>.yaml`.
3. Add a fixture for the new predictor to the `/tests/conftest.py` file and write tests in `/tests/test_<predictor_name>.py`.
4. Format the code with `black facetorch`.
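The new `<model_name>.yaml` can follow the shape of the existing predictor configs. The fragment below is a hedged sketch only; the actual fields and their names are defined by the configs in `/conf/analyzer/predictor/`, so copy a real file such as `efficientnet_b2_8.yaml` and adapt it rather than typing this from scratch.

```yaml
# Illustrative sketch only; the real schema is defined by the existing
# configs, e.g. conf/analyzer/predictor/fer/efficientnet_b2_8.yaml.
downloader:
  file_id: <google-drive-file-id>   # ID of your uploaded TorchScript model
  path_local: models/<predictor_name>/<model_name>/model.pt
device: ${analyzer.device}          # inherit the analyzer's device setting
```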
CPU:

Update the `environment.yml` file, then lock the dependencies:

```bash
conda lock -p linux-64 -f environment.yml --lockfile conda-lock.yml
docker compose -f docker-compose.dev.yml run facetorch-lock
conda-lock install --name env conda-lock.yml
```

GPU:

Update the `gpu.environment.yml` file, then lock the dependencies:

```bash
conda lock -p linux-64 -f gpu.environment.yml --lockfile gpu.conda-lock.yml
docker compose -f docker-compose.dev.yml run facetorch-lock-gpu
conda-lock install --name env gpu.conda-lock.yml
```

Run the tests with coverage:

```bash
pytest tests --verbose --cov-report html:coverage --cov facetorch
```

Generate the documentation:

```bash
pdoc --html facetorch --output-dir docs --force --template-dir pdoc/templates
```

Profile the example script:

```bash
python -m cProfile -o profiling/example.prof scripts/example.py
snakeviz profiling/example.prof
```

Sharma, Paritosh, Camille Challant, and Michael Filhol. "Facial Expressions for Sign Language Synthesis using FACSHuman and AZee." Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages, pp. 354–360, 2024.
Liang, Cong, Jiahe Wang, Haofan Zhang, Bing Tang, Junshan Huang, Shangfei Wang, and Xiaoping Chen. "Unifarn: Unified transformer for facial reaction generation." Proceedings of the 31st ACM International Conference on Multimedia, pp. 9506–9510, 2023.
Gue, Jia Xuan, Chun Yong Chong, and Mei Kuan Lim. "Facial Expression Recognition as markers of Depression." 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 674–680, 2023.
I would like to thank the open-source community and the researchers who have shared their work and published models. This project would not have been possible without their contributions.
If you use facetorch in your work, please make sure to appropriately credit the original authors of the models it employs. Additionally, you may consider citing the facetorch library itself. Below is an example citation for facetorch:
```bibtex
@misc{facetorch,
  author = {Gajarsky, Tomas},
  title = {Facetorch: A Python Library for Analyzing Faces Using PyTorch},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/tomas-gajarsky/facetorch}}
}
```