Unverified commit 3916fc7b authored by LielinJiang, committed by GitHub

add weight demo for stylegan mix (#410)

* add weight demo for stylegan mix

* update eng docs
Parent 9ca7b962
@@ -34,7 +34,7 @@ python -u tools/styleganv2mixing.py \
- latent1: The path of the first style vector. It comes from the `dst.npy` generated by Pixel2Style2Pixel or the `dst.fitting.npy` generated by the StyleGANv2 Fitting module
- latent2: The path of the second style vector. Its source is the same as that of the first style vector
- weights: The two style vectors are mixed in different proportions at the different levels. For a resolution of 1024 there are 18 levels; for a resolution of 512 there are 16 levels, and so on.
The earlier levels mainly affect the overall structure of the mixed image, while the later levels mainly affect its details. The figure below shows the fusion results for different weights as a reference, and a minimal sketch of the per-level mixing follows this parameter list.
- need_align: whether to crop the input image into one that the model can recognize. For an image that has already been cropped, such as the `src.png` pre-generated when Pixel2Style2Pixel is used to generate the style vector, the need_align parameter can be omitted
- start_lr: learning rate at the beginning of training
- final_lr: learning rate at the end of training
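
To make the level-wise mixing concrete, here is a minimal sketch, assuming each style vector is stored as a NumPy array of shape `(num_levels, 512)` and that a weight of 1.0 at a level keeps the first vector while 0.0 keeps the second; the function names and shapes are illustrative assumptions, not the tool's internal API:

```python
import math

import numpy as np


def num_style_levels(resolution):
    """Number of style levels: 18 for 1024x1024, 16 for 512x512, and so on."""
    return int(2 * math.log2(resolution)) - 2


def mix_latents(latent1, latent2, weights):
    """Blend two style vectors level by level.

    latent1, latent2: arrays of shape (num_levels, 512), e.g. loaded with
    np.load() from dst.npy / dst.fitting.npy (shape assumed for illustration).
    weights: one value per level; 1.0 keeps the first vector, 0.0 keeps the second.
    """
    weights = np.asarray(weights, dtype=latent1.dtype).reshape(-1, 1)
    return weights * latent1 + (1.0 - weights) * latent2  # broadcast over the 512 dims
```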
@@ -72,6 +72,25 @@ The result of mixing two style vectors in a specific ratio:
<img src="../../imgs/stylegan2mixing-sample.png" width="256"/>
</div>
## Results with different weights
The image corresponding to the first style vector:
<div align="center">
<img src="https://user-images.githubusercontent.com/50691816/130604304-292e2de4-5dc3-4613-a355-ff6163f9390f.png" width="300"/>
</div>
The image corresponding to the second style vector:
<div align="center">
<img src="https://user-images.githubusercontent.com/50691816/130604334-3550d429-742a-4b12-a445-e54c867dbd24.png" width="256"/>
</div>
The result of mixing the two style vectors with different weights:
<div align="center">
<img src="https://user-images.githubusercontent.com/50691816/130603897-05f76968-bfdd-4bca-a00c-417a6e1d70af.png" height="256"/>
</div>
## Reference
- 1. [Analyzing and Improving the Image Quality of StyleGAN](https://arxiv.org/abs/1912.04958)
@@ -32,8 +32,8 @@ python -u tools/styleganv2mixing.py \
**Parameters:**
- latent1: The path of the first style vector. It can come from the `dst.npy` generated by Pixel2Style2Pixel or the `dst.fitting.npy` generated by the StyleGANv2 Fitting module
- latent2: The path of the second style vector. Its source is the same as that of the first style vector
- weights: The two style vectors are mixed in different proportions at the different levels. For a resolution of 1024 there are 18 levels; for a resolution of 512 there are 16 levels, and so on. The earlier levels mainly affect the overall structure of the mixed image, while the later levels mainly affect its details.
The figure below shows the fusion results for different weights as a reference; see also the sketch after this parameter list.
- output_path: the directory where the generated images are saved
- weight_path: the path of the pretrained model
- model_type: the model type built into PaddleGAN. If a model type that already exists in PaddleGAN is given, `weight_path` is ignored. Currently recommended: `ffhq-config-f`
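
As a rough illustration of how the weight list maps onto the levels (the split point and values below are only an example, not a recommendation from the tool itself):

```python
# For a 512x512 model there are 16 levels (2 * log2(512) - 2).
# The leading entries steer global structure (layout, pose, face shape),
# the trailing entries steer details (texture, color).
coarse = [1.0] * 8   # take the overall structure from the first style vector
fine = [0.0] * 8     # take the fine details from the second style vector
weights = coarse + fine

# A flat schedule such as [0.5] * 16 instead blends both images evenly at every level.
```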
@@ -63,6 +63,24 @@ python -u tools/styleganv2mixing.py \
<img src="../../imgs/stylegan2mixing-sample.png" width="256"/>
</div>
## Results with different weights
The image corresponding to the first style vector:
<div align="center">
<img src="https://user-images.githubusercontent.com/50691816/130604304-292e2de4-5dc3-4613-a355-ff6163f9390f.png" width="300"/>
</div>
The image corresponding to the second style vector:
<div align="center">
<img src="https://user-images.githubusercontent.com/50691816/130604334-3550d429-742a-4b12-a445-e54c867dbd24.png" width="256"/>
</div>
The result of mixing the two style vectors with different weights:
<div align="center">
<img src="https://user-images.githubusercontent.com/50691816/130603897-05f76968-bfdd-4bca-a00c-417a6e1d70af.png" height="256"/>
</div>
# References
- 1. [Analyzing and Improving the Image Quality of StyleGAN](https://arxiv.org/abs/1912.04958)