- ffmpeg (https://www.ffmpeg.org/)
  download > Windows > Windows builds from gyan.dev > release builds > ffmpeg-release-full.7z
  extract the archive and move bin\*.exe to C:\Windows\System32\
- conda create -n liveportrait python=3.9 and activate it (see the sketch below)
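A minimal cmd sketch of the two steps above; `ffmpeg -version` is only a sanity check that the copied binaries are visible on PATH, and the environment name follows these notes:

```bat
:: confirm the copied ffmpeg binaries are found
ffmpeg -version

:: create and activate the conda environment used in these notes
conda create -n liveportrait python=3.9
conda activate liveportrait
```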
- d1: git clone https://github.com/KwaiVGI/LivePortrait
- d2: git clone the pretrained weights from Hugging Face: https://huggingface.co/KwaiVGI/LivePortrait
  move both the insightface and liveportrait directories into d1 ./pretrained_weights/ (see the sketch below)
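A hedged cmd sketch of the clone-and-copy steps. The `LivePortrait-weights` folder name is arbitrary, the `pip install -r requirements.txt` line comes from the upstream README rather than these notes, and the xcopy lines assume the Hugging Face repo mirrors the insightface/ and liveportrait/ layout expected under pretrained_weights/:

```bat
:: d1: the code repository
git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait
pip install -r requirements.txt
cd ..

:: d2: the pretrained weights (large files need git-lfs)
git lfs install
git clone https://huggingface.co/KwaiVGI/LivePortrait LivePortrait-weights

:: copy the weight folders into the code repo's pretrained_weights\
xcopy /E /I LivePortrait-weights\insightface LivePortrait\pretrained_weights\insightface
xcopy /E /I LivePortrait-weights\liveportrait LivePortrait\pretrained_weights\liveportrait
```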
- run from the command line (cmd):
  first: python inference.py to create a sample in d1 ./animations
  custom: python inference.py -s d1 .\LivePortrait\assets\examples\source\?.jpg -d d1 .\LivePortrait\assets\examples\driving\?.mp4 and see the output in d1 ./animations (see the sketch below)
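The two invocations above as cmd commands, run from the d1 repo root with the environment active; paths here are relative to the repo root, `-s`/`-d` are the source/driving flags documented upstream, and the `?` placeholders from the notes stand for an actual file name:

```bat
:: default sample; output lands in .\animations\
python inference.py

:: custom source image and driving video (replace ? with a real file name)
python inference.py -s .\assets\examples\source\?.jpg -d .\assets\examples\driving\?.mp4
```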
- run the web GUI:
  python app.py and open http://127.0.0.1:8890
  public: in d1 .\LivePortrait\src\config\argument_config.py, adjust server_ip and server_port as needed and set share to True (see the sketch below)
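A cmd sketch of the web GUI steps; the comments on the config fields describe standard Gradio behavior (share=True returns a temporary public link) and are an assumption about how app.py passes them through, not something stated in these notes:

```bat
:: launch the Gradio web GUI from the repo root, then browse to http://127.0.0.1:8890
python app.py

:: for public access, first edit src\config\argument_config.py:
::   share = True             -> Gradio returns a temporary public link
::   server_ip / server_port  -> change if a different bind address or port is needed
python app.py
```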