Unverified commit bcde8702, authored by YixinKristy, committed by GitHub

Update mai-ha-hi related content (#215)

Parent 92083054
@@ -38,7 +38,7 @@ GAN-Generative Adversarial Network, was praised by "the Father of Convolutional
* [Pixel2Pixel](./docs/en_US/tutorials/pix2pix_cyclegan.md)
* [CycleGAN](./docs/en_US/tutorials/pix2pix_cyclegan.md)
* [PSGAN](./docs/en_US/tutorials/psgan.md)
* [First Order Motion Model](./docs/en_US/tutorials/motion_driving.md): **🤩Key Technology of "Mai-ha-hi🐜" (Face Swapping)🤩**
* [FaceParsing](./docs/en_US/tutorials/face_parse.md)
* [AnimeGANv2](./docs/en_US/tutorials/animegan.md)
* [U-GAT-IT](./docs/en_US/tutorials/ugatit.md)
@@ -58,7 +58,8 @@ You can run those projects in the [AI Studio](https://aistudio.baidu.com/aistudi
|Online Tutorial | Link |
|--------------|-----------|
|Face Swapping - multi-person "Mai-ha-hi" | [Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1603391) |
|Face Swapping - "Mai-ha-hi" |[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1586056?channelType=0&channel=0)|
|Restore a video of Beijing from a hundred years ago|[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1161285)|
|Face Swapping - when "Su Daqiang" sings "unravel" |[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1048840)|
......
@@ -59,11 +59,14 @@ GAN - Generative Adversarial Network, praised by the "Father of Convolutional Networks" **Yann LeCun**
|Online Tutorial | Link |
|--------------|-----------|
|Facial expression transfer - one-click multi-person "Mai-ha-hi" | [Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1603391) |
|Facial expression transfer - the viral "Mai-ha-hi" |[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1586056?channelType=0&channel=0)|
|Restoration of old Beijing videos|[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1161285)|
|Facial expression transfer - when "Su Daqiang" sings "unravel" |[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1048840)|
## Showcase
### Mai-ha-hi🤪
......
# First Order Motion model
## First Order Motion model introduction
[First order motion model](https://arxiv.org/abs/2003.00196) addresses the image animation task: generating a video sequence in which the object in a source image is animated according to the motion of a driving video. The first order motion framework solves this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), the method can be applied to any object of that class. To achieve this, it decouples appearance and motion information using a self-supervised formulation. In addition, to support complex motions, it uses a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image with the motion derived from the driving video.
<div align="center">
<img src="../../imgs/fom_demo.png" width="500"/>
</div>
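The "first order" in the name refers to a first-order Taylor expansion: the motion field that maps the driving frame D back to the source frame S is approximated near each learned keypoint by the keypoint location plus a local affine (Jacobian) term. A sketch of the core approximation from the paper, where R denotes an abstract reference frame and p_k the k-th keypoint:

```latex
% First-order expansion of the backward motion field near keypoint p_k
% (cf. arXiv:2003.00196; R is an abstract reference frame):
\mathcal{T}_{S \leftarrow D}(z) \approx
    \mathcal{T}_{S \leftarrow R}(p_k)
    + J_k \bigl( z - \mathcal{T}_{D \leftarrow R}(p_k) \bigr),
\qquad
J_k = \Bigl( \tfrac{d}{dp} \mathcal{T}_{S \leftarrow R}(p) \big|_{p = p_k} \Bigr)
      \Bigl( \tfrac{d}{dp} \mathcal{T}_{D \leftarrow R}(p) \big|_{p = p_k} \Bigr)^{-1}
```

A dense motion network then combines these local approximations into a single dense motion field plus an occlusion map, which the generator uses to warp and inpaint the source appearance.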
## Multi-face swapping
For photos with multiple faces, we first detect all of the faces, then perform facial expression transfer for each face, and finally paste the animated faces back into the original photo to generate a complete new video.
The specific technical steps are shown below (a short code sketch of this loop follows):
1. Use the S3FD model to detect all faces in a photo
2. Use the First Order Motion model to perform facial expression transfer on each detected face
3. Paste the generated faces back into their original positions in the photo
In addition, specifically for face-related work, PaddleGAN provides a ["faceutils" tool](https://github.com/PaddlePaddle/PaddleGAN/tree/develop/ppgan/faceutils), including face detection and face segmentation models, among others.
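To make the loop concrete, here is a minimal sketch of the detect-animate-paste pipeline. The helpers `detect_faces` and `animate_face` are hypothetical placeholders standing in for PaddleGAN's S3FD detector and First Order Motion predictor; the actual implementation lives in `tools/first-order-demo.py` and [`ppgan/faceutils`](https://github.com/PaddlePaddle/PaddleGAN/tree/develop/ppgan/faceutils).

```python
# Minimal sketch of the multi-face pipeline. detect_faces and animate_face
# are hypothetical placeholders, not the actual PaddleGAN API.
import cv2

def swap_faces_one_frame(photo, driving_frame):
    """Produce one output frame: animate every detected face by one driving frame."""
    # 1. Detect every face box in the photo (PaddleGAN uses S3FD for this step).
    boxes = detect_faces(photo)  # -> list of (x1, y1, x2, y2)

    result = photo.copy()
    for x1, y1, x2, y2 in boxes:
        crop = photo[y1:y2, x1:x2]
        # 2. Facial expression transfer on the cropped face (First Order Motion).
        animated = animate_face(crop, driving_frame)
        # 3. Resize and paste the animated face back to its original position.
        result[y1:y2, x1:x2] = cv2.resize(animated, (x2 - x1, y2 - y1))
    return result
```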
## How to use
Users can upload a prepared source image and driving video, then substitute their paths for the `source_image` and `driving_video` parameters in the following command. It will generate a video file named `result.mp4` in the `output` folder, which is the animated video.
Note: for photos with multiple faces, the larger the distance between faces, the better the resulting quality.
```
cd applications/
python -u tools/first-order-demo.py \
     --driving_video ../docs/imgs/fom_dv.mp4 \
     --source_image ../docs/imgs/fom_source_image.png \
     --ratio 0.4 \
     --relative --adapt_scale
```
**Parameters:**
- driving_video: the driving video; its motion will be transferred to the source image.
- source_image: the source image; both single-person and multi-person images are supported. The person(s) in the image will be animated according to the motion of the driving video.
- relative: whether to use relative or absolute coordinates of the keypoints; relative coordinates are recommended, since absolute coordinates can distort the characters after animation.
- adapt_scale: adapt movement scale based on convex hull of keypoints.
- ratio: the proportion of the generated image taken up by the pasted-back face; adjust this parameter when adjacent faces in a multi-person image are close together. The default value is 0.4 and the range is [0.4, 0.5].
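Besides the command-line flags above, the same functionality can be reached from Python. A minimal sketch, assuming a `ppgan.apps.FirstOrderPredictor` class whose constructor parameters mirror the flags above; check your installed PaddleGAN version for the exact signature:

```python
# Sketch of the Python predictor API. The class and parameter names below
# are assumptions mirroring the CLI flags -- verify against your version.
from ppgan.apps import FirstOrderPredictor

predictor = FirstOrderPredictor(
    output="output",      # folder where result.mp4 will be written
    filename="result.mp4",
    relative=True,        # relative keypoint coordinates (recommended)
    adapt_scale=True,     # adapt movement scale from the keypoint convex hull
    ratio=0.4,            # pasted-face ratio, tune within [0.4, 0.5]
)
predictor.run("../docs/imgs/fom_source_image.png", "../docs/imgs/fom_dv.mp4")
```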
**Online tutorials running on AI Studio:**
* **Multi-face swapping: https://aistudio.baidu.com/aistudio/projectdetail/1603391**
* **Single face swapping: https://aistudio.baidu.com/aistudio/projectdetail/1586056**
## Animation results
......
@@ -12,25 +12,45 @@ The task of the First order motion model is image animation: given a source image,
However, the method proposed in this paper only needs to be trained on a dataset of the same object category. For example, to transfer Tai Chi motions, train on a Tai Chi video dataset; to achieve facial expression transfer, train on the face video dataset VoxCeleb. Once training is done, the corresponding pretrained model enables the real-time image animation described in the introduction.
## Multi-face expression transfer with a face detection model
Using the [S3FD face detection algorithm](https://github.com/PaddlePaddle/PaddleGAN/tree/develop/ppgan/faceutils/face_detection/detection) provided by PaddleGAN, multiple faces in a photo are detected and their expressions transferred, swapping several faces at once.
The specific technical steps:
1. Use the S3FD face detection model to detect and crop each face in the photo
2. Use the First Order Motion model to perform facial expression transfer on each cropped face
3. Trim the expression-transferred faces appropriately and paste them back to their original positions in the photo
In addition, PaddleGAN provides the [faceutils toolkit](https://github.com/PaddlePaddle/PaddleGAN/tree/develop/ppgan/faceutils) for face-related processing, including face detection, facial feature segmentation, and keypoint detection.
## How to use
Users can upload a single-person or multi-person photo and a driving video, substitute their own image and video paths for the source_image and driving_video parameters in the command below, and run it to perform single- or multi-person facial expression transfer. The output is a video file named result.mp4, saved in the output folder.
Note: for multi-face photos, images with larger distances between the faces give better results; you can also tune ratio manually to improve the output.
This project provides a sample source image and driving video for demonstration. Run the following command:
```
cd applications/
python -u tools/first-order-demo.py \
--driving_video ../docs/imgs/fom_dv.mp4 \
--source_image ../docs/imgs/fom_source_image.png \
--ratio 0.4 \
--relative --adapt_scale
```
**Parameters:**
- driving_video: the driving video; the expressions and motions of the person in it are what will be transferred
- source_image: the source image, supporting both single-person and multi-person images; the expressions and motions from the driving video will be transferred onto the person(s) in this image
- relative: whether to use relative or absolute coordinates of the keypoints in the video and image; relative coordinates are recommended, since absolute coordinates can distort the animated person
- adapt_scale: adapt the movement scale based on the convex hull of the keypoints
- ratio: the proportion of the original image occupied by the pasted-back generated face; adjust it according to the output, especially when faces in a multi-person image are close together. The default is 0.4 and the range is [0.4, 0.5]
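For example, if adjacent faces in a multi-person photo interfere with each other, nudging ratio upward within its range may help; the paths below are the sample assets used above:

```
cd applications/
python -u tools/first-order-demo.py \
    --driving_video ../docs/imgs/fom_dv.mp4 \
    --source_image ../docs/imgs/fom_source_image.png \
    --ratio 0.45 \
    --relative --adapt_scale
```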
**Online tutorials**
* **Multi-face: https://aistudio.baidu.com/aistudio/projectdetail/1603391**
* **Single-face: https://aistudio.baidu.com/aistudio/projectdetail/1586056**
## Results
......