# SadTalker
**Repository Path**: hola/sad-talker
## Basic Information
- **Project Name**: SadTalker
- **Description**: SadTalker (mirror of the overseas project)
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 1
- **Created**: 2024-08-23
- **Last Updated**: 2024-08-23
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README

[Colab Demo](https://colab.research.google.com/github/Winfredy/SadTalker/blob/main/quick_demo.ipynb) [Hugging Face Demo](https://huggingface.co/spaces/vinthony/SadTalker)

¹ Xi'an Jiaotong University&ensp;² Tencent AI Lab&ensp;³ Ant Group

CVPR 2023

TL;DR: a single portrait image + audio = a talking-head video.
## 🔥 Highlight
- Several new modes, e.g., `still mode`, `reference mode`, and `resize mode`, are now available for better and more customizable applications.
- We are happy to see our method used in various talking and singing avatars; check out these wonderful demos on [bilibili](https://search.bilibili.com/all?keyword=sadtalker&from_source=webtop_search&spm_id_from=333.1007&search_source=3) and [twitter #sadtalker](https://twitter.com/search?q=%23sadtalker&src=typed_query).
## Changelog
- __[2023.03.30]__: Launched a new feature: by using reference videos, our algorithm can generate videos with more natural eye blinking and some eyebrow movement.
- __[2023.03.29]__: `resize mode` is online via `python inference.py --preprocess resize`! It produces a larger crop of the image, as discussed in https://github.com/Winfredy/SadTalker/issues/35.
- __[2023.03.29]__: The local Gradio demo is online! Run `python app.py` to start it. A new `requirements.txt` is used to avoid bugs in `librosa`.
- __[2023.03.28]__: The online demo is launched on [Hugging Face Spaces](https://huggingface.co/spaces/vinthony/SadTalker), thanks to AK!
- __[2023.03.22]__: Launched a new feature: generating 3D face animation from a single image. New applications based on it will follow.
- __[2023.03.22]__: Launched a new feature: `still mode`, in which only a small head pose is produced, via `python inference.py --still`.
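
The flags above can be combined in a single run. A minimal sketch of such an invocation (the `--driven_audio` and `--source_image` flags and the example file paths are assumptions based on typical usage of this kind of tool, not taken from this changelog; check `python inference.py --help` in the repository for the authoritative options):

```shell
# Hypothetical combined invocation: resize preprocessing plus still mode.
# Replace the input paths with your own audio clip and portrait image.
python inference.py \
  --driven_audio examples/driven_audio/speech.wav \
  --source_image examples/source_image/portrait.png \
  --preprocess resize \
  --still
```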