@@ -199,6 +199,29 @@ You can run those projects in the [AI Studio](https://aistudio.baidu.com/aistudi
## Changelog
- v2.1.0 (2021.12.8)
  - Release the video super-resolution model PP-MSVSR and multiple pre-trained weights
  - Release several SOTA video super-resolution models and their pre-trained weights, including BasicVSR, IconVSR and BasicVSR++
  - Release a lightweight motion-driving model (model size compressed from 229M to 10.1M) with an optimized fusion effect
  - Release high-resolution FOMM and Wav2Lip pre-trained models
  - Release several interesting applications based on StyleGANv2, such as face inversion, face fusion and face editing
  - Release Baidu's self-developed and effective style transfer model LapStyle with its interesting applications, and launch the official [experience page](https://www.paddlepaddle.org.cn/paddlegan)
  - Release the lightweight image super-resolution model PAN
- v2.0.0 (2021.6.2)
  - Release the [First Order Motion](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/motion_driving.md) model and multiple pre-trained weights
  - Release applications that support [multi-face motion driving](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/motion_driving.md#1-test-for-face)
  - Release the video super-resolution model [EDVR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/video_super_resolution.md) and multiple pre-trained weights
  - Release the contents of the [7-day punch-in training camp](https://github.com/PaddlePaddle/PaddleGAN/tree/develop/education) for PaddleGAN
  - Improve the robustness of PaddleGAN on the Windows platform
- v2.0.0-beta (2021.3.1)
  - Fully migrate to the Paddle 2.0.0 API.
  - Release super-resolution models: ESRGAN, RealSR, LESRCNN, DRN, etc.
  - Release the lip-sync model Wav2Lip
  - Release the street-view animation model AnimeGANv2
  - Release face cartoonization models: U-GAT-IT, Photo2Cartoon
  - Release the SOTA generative model StyleGAN2
- v0.1.0 (2020.11.02)
  - Release the first version; supported models include Pixel2Pixel, CycleGAN and PSGAN, and supported applications include video frame interpolation, super resolution, image/video colorization and image animation.
@@ -31,7 +31,7 @@ The proposed method is not exclusively for facial expression transfer, it also s
At the same time, PaddleGAN also provides a ["faceutils" tool](https://github.com/PaddlePaddle/PaddleGAN/tree/develop/ppgan/faceutils) for face-related work, including face detection, face segmentation, keypoints detection, etc.
#### Face Enhancement
**This effect significantly improves the definition of the driven video.**
...
@@ -154,8 +154,8 @@ Currently, we use mobilenet combined with pruning to compress models, see the co
|            | Size (M) | Reconstruction loss |
| ---------- | -------- | ------------------- |
| Original   | 229      | 0.041781392         |
| Compressed | 10.1     | 0.047878753         |
**Training:** First, set `mode` in configs/firstorder_vox_mobile_256.yaml to `kp_detector` to train the compressed kp_detector model while freezing the original generator model. Then set `mode` to `generator` to train the compressed generator model while freezing the original kp_detector model. Finally, set `mode` to `both` and point `kp_weight_path` and `gen_weight_path` in the config to the trained weights to train both parts jointly.
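The three training stages above only differ in a few config fields. A minimal sketch of the relevant parts of configs/firstorder_vox_mobile_256.yaml (the key names `mode`, `kp_weight_path` and `gen_weight_path` come from the text; their top-level placement and the weight-file paths are assumptions for illustration):

```yaml
# Stage 1: train the compressed kp_detector (original generator frozen)
mode: kp_detector

# Stage 2: train the compressed generator (original kp_detector frozen)
# mode: generator

# Stage 3: joint training of both compressed sub-models,
# loading the weights produced by the two previous stages
# mode: both
# kp_weight_path: output_dir/kp_detector/last_checkpoint.pdparams   # hypothetical path
# gen_weight_path: output_dir/generator/last_checkpoint.pdparams    # hypothetical path
```

Run the same training entry point after each edit; only the stage-specific fields change between runs.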