# photopen
|Module Name|photopen|
| :--- | :---: |
|Category|Image - Image Generation|
|Network|SPADEGenerator|
|Dataset|coco_stuff|
|Fine-tuning supported or not|No|
|Module Size|74MB|
|Latest update date|2021-12-14|
|Data indicators|-|

## I. Basic Information

- ### Application Effect Display
  - Sample results:
    <p align="center">
    <img src="https://camo.githubusercontent.com/22e94b0c7278af08da8c475a3d968ba2f3cd565fcb2ad6b9a165c8a65f2d12f8/68747470733a2f2f61692d73747564696f2d7374617469632d6f6e6c696e652e63646e2e626365626f732e636f6d2f39343733313032336561623934623162393762396361383062643362333038333063393138636631363264303436626438383534306464613435303239356133" width = "90%" hspace='10'/>
    <br />
    </p>

- ### Module Introduction
  - This module uses Pix2PixHD, a pixel-level style transfer network that generates photo-realistic images from input semantic segmentation label maps. To address the loss of label semantic information caused by the generator's normalization layers, a SPADE (Spatially-Adaptive Normalization) module is added to the Pix2PixHD generator: two convolution layers predict the scale and bias applied after normalization while preserving their spatial dimensions, which improves the quality of the generated images. Semantic label images can be obtained from the [coco_stuff dataset](https://github.com/nightrome/cocostuff), or you can generate custom label images to try out via [this project in the PaddleGAN repo](https://github.com/PaddlePaddle/PaddleGAN/blob/87537ad9d4eeda17eaa5916c6a585534ab989ea8/docs/zh_CN/tutorials/photopen.md).
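  - To make the mechanism concrete, here is a minimal sketch of a spatially-adaptive normalization layer in Paddle. It only illustrates the idea described above and is not the exact `SPADEGenerator` implementation from ppgan; the class name `SPADE` and the layer sizes are illustrative assumptions:
  - ```python
    import paddle
    import paddle.nn as nn
    import paddle.nn.functional as F

    class SPADE(nn.Layer):
        """Sketch: normalize the feature map, then re-scale and re-shift it with
        per-pixel parameters predicted from the segmentation map, so label
        semantics survive normalization."""

        def __init__(self, norm_nc, label_nc, hidden_nc=128):
            super().__init__()
            # Parameter-free normalization; the affine part comes from the labels.
            self.norm = nn.BatchNorm2D(norm_nc, weight_attr=False, bias_attr=False)
            self.shared = nn.Sequential(nn.Conv2D(label_nc, hidden_nc, 3, padding=1), nn.ReLU())
            self.gamma = nn.Conv2D(hidden_nc, norm_nc, 3, padding=1)
            self.beta = nn.Conv2D(hidden_nc, norm_nc, 3, padding=1)

        def forward(self, x, segmap):
            # Resize the label map to the feature resolution, then predict
            # spatially varying scale (gamma) and bias (beta).
            segmap = F.interpolate(segmap, size=x.shape[2:], mode='nearest')
            h = self.shared(segmap)
            return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

    # 14 label channels, matching semantic_nc in the photopen config further below.
    layer = SPADE(norm_nc=64, label_nc=14)
    out = layer(paddle.rand([2, 64, 32, 32]), paddle.rand([2, 14, 256, 256]))
    print(out.shape)  # [2, 64, 32, 32]
    ```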
## II. Installation

- ### 1. Environmental Dependence
  - ppgan

- ### 2. Installation
  - ```shell
    $ hub install photopen
    ```
  - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
    | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)

## III. Module API Prediction

- ### 1. Command line Prediction
  - ```shell
    # Read from a file
    $ hub run photopen --input_path "/PATH/TO/IMAGE"
    ```
  - If you want to call the photopen module via command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)

- ### 2. Prediction Code Example
  - ```python
    import paddlehub as hub

    module = hub.Module(name="photopen")
    input_path = ["/PATH/TO/IMAGE"]
    # Read from a file
    module.photo_transfer(paths=input_path, output_dir='./transfer_result/', use_gpu=True)
    ```

- ### 3. API
  - ```python
    photo_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True)
    ```
  - Image generation API.
  - **Parameters**
    - images (list\[numpy.ndarray\]): image data, with each ndarray.shape being \[H, W, C\];<br/>
    - paths (list\[str\]): image paths;<br/>
    - output\_dir (str): directory where the results are saved;<br/>
    - use\_gpu (bool): whether to use GPU;<br/>
    - visualization (bool): whether to save the results to a local folder
## IV. Server Deployment

- PaddleHub Serving can deploy an online image generation service.

- ### Step 1: Start PaddleHub Serving
  - Run the startup command:
  - ```shell
    $ hub serving start -m photopen
    ```
  - This deploys the online image generation service API; the default port is 8866.
  - **NOTE:** If GPU is used for prediction, the CUDA\_VISIBLE\_DEVICES environment variable must be set before starting the service; otherwise it does not need to be set.

- ### Step 2: Send a predictive request
  - With the server configured, the following lines of code send a prediction request and fetch the result:
  - ```python
    import requests
    import json
    import cv2
    import base64


    def cv2_to_base64(image):
        data = cv2.imencode('.jpg', image)[1]
        # ndarray.tostring() is deprecated; tobytes() is the current spelling.
        return base64.b64encode(data.tobytes()).decode('utf8')

    # Send an HTTP request
    data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
    headers = {"Content-type": "application/json"}
    url = "http://127.0.0.1:8866/predict/photopen"
    r = requests.post(url=url, headers=headers, data=json.dumps(data))

    # Print the prediction results
    print(r.json()["results"])
    ```
## V. Release Note

* 1.0.0

  First release

- ```shell
  $ hub install photopen==1.0.0
  ```
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import cv2
import numpy as np
import paddle
from PIL import Image
from PIL import ImageOps
from ppgan.models.generators import SPADEGenerator
from ppgan.utils.filesystem import load
from ppgan.utils.photopen import data_onehot_pro
class PhotoPenPredictor:
def __init__(self, weight_path, gen_cfg):
        # Initialize the model
gen = SPADEGenerator(
gen_cfg.ngf,
gen_cfg.num_upsampling_layers,
gen_cfg.crop_size,
gen_cfg.aspect_ratio,
gen_cfg.norm_G,
gen_cfg.semantic_nc,
gen_cfg.use_vae,
gen_cfg.nef,
)
gen.eval()
para = load(weight_path)
if 'net_gen' in para:
gen.set_state_dict(para['net_gen'])
else:
gen.set_state_dict(para)
self.gen = gen
self.gen_cfg = gen_cfg
    def run(self, image):
        # Convert the label map to a single channel and resize it with
        # nearest-neighbor interpolation so label values are not blended.
        sem = Image.fromarray(image).convert('L')
        sem = sem.resize((self.gen_cfg.crop_size, self.gen_cfg.crop_size), Image.NEAREST)
        sem = np.array(sem).astype('float32')
        sem = paddle.to_tensor(sem)
        sem = sem.reshape([1, 1, self.gen_cfg.crop_size, self.gen_cfg.crop_size])

        # One-hot encode the labels and run the generator.
        one_hot = data_onehot_pro(sem, self.gen_cfg)
        predicted = self.gen(one_hot)

        # CHW output in [-1, 1] -> HWC uint8 image in [0, 255].
        pic = predicted.numpy()[0].reshape((3, 256, 256)).transpose((1, 2, 0))
        pic = ((pic + 1.) / 2. * 255).astype('uint8')
        return pic
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import ast
import copy
import os
import cv2
import numpy as np
import paddle
from ppgan.utils.config import get_config
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .model import PhotoPenPredictor
from .util import base64_to_cv2
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(
name="photopen", type="CV/style_transfer", author="paddlepaddle", author_email="", summary="", version="1.0.0")
class Photopen:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "photopen.pdparams")
cfg = get_config(os.path.join(self.directory, "photopen.yaml"))
self.network = PhotoPenPredictor(weight_path=self.pretrained_model, gen_cfg=cfg.predict)
def photo_transfer(self,
images: list = None,
paths: list = None,
output_dir: str = './transfer_result/',
use_gpu: bool = False,
visualization: bool = True):
'''
images (list[numpy.ndarray]): data of images, shape of each is [H, W, C], color space must be BGR(read by cv2).
paths (list[str]): paths to images
output_dir (str): the dir to save the results
use_gpu (bool): if True, use gpu to perform the computation, otherwise cpu.
visualization (bool): if True, save results in output_dir.
'''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
        if images is None and paths is None:
            print('No image provided. Please input an image or an image path.')
            return
        if images is not None:
            for image in images:
                image = image[:, :, ::-1]  # BGR -> RGB for the predictor
                out = self.network.run(image)
                results.append(out)
        if paths is not None:
            for path in paths:
                image = cv2.imread(path)[:, :, ::-1]
                out = self.network.run(image)
                results.append(out)
        if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
if out is not None:
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
results = self.photo_transfer(
paths=[self.args.input_path],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
return results
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = [base64_to_cv2(image) for image in images]
results = self.photo_transfer(images=images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
        self.arg_config_group.add_argument(
            '--visualization', type=ast.literal_eval, default=False, help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--input_path', type=str, help="path to input image.")
total_iters: 1
output_dir: output_dir
checkpoints_dir: checkpoints
model:
name: PhotoPenModel
generator:
name: SPADEGenerator
ngf: 24
num_upsampling_layers: normal
crop_size: 256
aspect_ratio: 1.0
norm_G: spectralspadebatch3x3
semantic_nc: 14
use_vae: False
nef: 16
discriminator:
name: MultiscaleDiscriminator
ndf: 128
num_D: 4
crop_size: 256
label_nc: 12
output_nc: 3
contain_dontcare_label: True
no_instance: False
n_layers_D: 6
criterion:
name: PhotoPenPerceptualLoss
crop_size: 224
lambda_vgg: 1.6
label_nc: 12
contain_dontcare_label: True
batchSize: 1
crop_size: 256
lambda_feat: 10.0
dataset:
train:
name: PhotoPenDataset
content_root: test/coco_stuff
load_size: 286
crop_size: 256
num_workers: 0
batch_size: 1
test:
name: PhotoPenDataset_test
content_root: test/coco_stuff
load_size: 286
crop_size: 256
num_workers: 0
batch_size: 1
lr_scheduler: # abandoned
name: LinearDecay
learning_rate: 0.0001
start_epoch: 99999
decay_epochs: 99999
# will get from real dataset
iters_per_epoch: 1
optimizer:
lr: 0.0001
optimG:
name: Adam
net_names:
- net_gen
beta1: 0.9
beta2: 0.999
optimD:
name: Adam
net_names:
- net_des
beta1: 0.9
beta2: 0.999
log_config:
interval: 1
visiual_interval: 1
snapshot_config:
interval: 1
predict:
name: SPADEGenerator
ngf: 24
num_upsampling_layers: normal
crop_size: 256
aspect_ratio: 1.0
norm_G: spectralspadebatch3x3
semantic_nc: 14
use_vae: False
nef: 16
contain_dontcare_label: True
label_nc: 12
batchSize: 1
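The module loads this file with ppgan's `get_config` and passes the `predict` section straight into `SPADEGenerator` (see `PhotoPenPredictor.__init__` above). A small sketch of reading it, assuming the config above is saved locally as `photopen.yaml`:

```python
from ppgan.utils.config import get_config

cfg = get_config('photopen.yaml')
# Fields of cfg.predict map one-to-one onto SPADEGenerator's constructor args.
print(cfg.predict.ngf, cfg.predict.crop_size, cfg.predict.semantic_nc)  # 24 256 14
```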
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# face_parse
|Module Name|face_parse|
| :--- | :---: |
|Category|Image - Face Parsing|
|Network|BiSeNet|
|Dataset|COCO-Stuff|
|Fine-tuning supported or not|No|
|Module Size|77MB|
|Latest update date|2021-12-07|
|Data indicators|-|

## I. Basic Information

- ### Application Effect Display
  - Sample results:
    <p align="center">
    <img src="https://user-images.githubusercontent.com/22424850/157190651-595b6964-97c5-4b0b-ac0a-c30c8520a972.png" width = "40%" hspace='10'/>
    <br />
    Input image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/157192693-b3f737ed-1a24-4ef9-8454-bfd9d51755af.png" width = "40%" hspace='10'/>
    <br />
    Output image
    <br />
    </p>

- ### Module Introduction
  - Face parsing is a special case of semantic image segmentation: it computes a pixel-level label map over the semantic components of a face image (such as hair, lips, nose, and eyes). Given an input face image, face parsing assigns a pixel-level label to each semantic component.
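  - The module returns the parsing result rendered as a color image, one color per component (`mask2image` from ppgan does this in the module code below). A hypothetical sketch of the idea, with a made-up four-color palette:
  - ```python
    import numpy as np

    # Illustrative palette only; the real one lives in ppgan's mask2image.
    PALETTE = np.array([[0, 0, 0], [255, 85, 0], [0, 255, 170], [85, 0, 255]], dtype=np.uint8)

    def colorize(label_map):
        """Map an [H, W] integer label map to an RGB visualization."""
        return PALETTE[label_map % len(PALETTE)]

    print(colorize(np.array([[0, 1], [2, 3]])).shape)  # (2, 2, 3)
    ```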
## II. Installation

- ### 1. Environmental Dependence
  - ppgan
  - dlib

- ### 2. Installation
  - ```shell
    $ hub install face_parse
    ```
  - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
    | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)

## III. Module API Prediction

- ### 1. Command line Prediction
  - ```shell
    # Read from a file
    $ hub run face_parse --input_path "/PATH/TO/IMAGE"
    ```
  - If you want to call the face_parse module via command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)

- ### 2. Prediction Code Example
  - ```python
    import paddlehub as hub

    module = hub.Module(name="face_parse")
    input_path = ["/PATH/TO/IMAGE"]
    # Read from a file
    module.style_transfer(paths=input_path, output_dir='./transfer_result/', use_gpu=True)
    ```

- ### 3. API
  - ```python
    style_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True)
    ```
  - Face parsing API.
  - **Parameters**
    - images (list\[numpy.ndarray\]): image data, with each ndarray.shape being \[H, W, C\];<br/>
    - paths (list\[str\]): image paths;<br/>
    - output\_dir (str): directory where the results are saved;<br/>
    - use\_gpu (bool): whether to use GPU;<br/>
    - visualization (bool): whether to save the results to a local folder
## IV. Server Deployment

- PaddleHub Serving can deploy an online face parsing service.

- ### Step 1: Start PaddleHub Serving
  - Run the startup command:
  - ```shell
    $ hub serving start -m face_parse
    ```
  - This deploys the online face parsing service API; the default port is 8866.
  - **NOTE:** If GPU is used for prediction, the CUDA\_VISIBLE\_DEVICES environment variable must be set before starting the service; otherwise it does not need to be set.

- ### Step 2: Send a predictive request
  - With the server configured, the following lines of code send a prediction request and fetch the result:
  - ```python
    import requests
    import json
    import cv2
    import base64


    def cv2_to_base64(image):
        data = cv2.imencode('.jpg', image)[1]
        return base64.b64encode(data.tobytes()).decode('utf8')

    # Send an HTTP request
    data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
    headers = {"Content-type": "application/json"}
    url = "http://127.0.0.1:8866/predict/face_parse"
    r = requests.post(url=url, headers=headers, data=json.dumps(data))

    # Print the prediction results
    print(r.json()["results"])
    ```
## V. Release Note

* 1.0.0

  First release

- ```shell
  $ hub install face_parse==1.0.0
  ```
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import argparse
from PIL import Image
import numpy as np
import cv2
import ppgan.faceutils as futils
from ppgan.utils.preprocess import *
from ppgan.utils.visual import mask2image
class FaceParsePredictor:
def __init__(self):
self.input_size = (512, 512)
self.up_ratio = 0.6 / 0.85
self.down_ratio = 0.2 / 0.85
self.width_ratio = 0.2 / 0.85
self.face_parser = futils.mask.FaceParser()
    def run(self, image):
        image = Image.fromarray(image)
        # Detect a face with dlib; return None when no face is found.
        face = futils.dlib.detect(image)
        if not face:
            return
        face_on_image = face[0]
        # Crop around the detected face with the configured margins.
        image, face, crop_face = futils.dlib.crop(image, face_on_image, self.up_ratio, self.down_ratio,
                                                  self.width_ratio)
        np_image = np.array(image)
        # Parse the resized crop, then render the label map as a color image.
        mask = self.face_parser.parse(np.float32(cv2.resize(np_image, self.input_size)))
        mask = cv2.resize(mask.numpy(), (256, 256))
        mask = mask.astype(np.uint8)
        mask = mask2image(mask)
        return mask
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import ast
import copy
import os
import cv2
import numpy as np
import paddle
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .model import FaceParsePredictor
from .util import base64_to_cv2
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(
name="face_parse", type="CV/style_transfer", author="paddlepaddle", author_email="", summary="", version="1.0.0")
class Face_parse:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "bisenet.pdparams")
self.network = FaceParsePredictor()
def style_transfer(self,
images: list = None,
paths: list = None,
output_dir: str = './transfer_result/',
use_gpu: bool = False,
visualization: bool = True):
'''
images (list[numpy.ndarray]): data of images, shape of each is [H, W, C], color space must be BGR(read by cv2).
paths (list[str]): paths to images
output_dir (str): the dir to save the results
use_gpu (bool): if True, use gpu to perform the computation, otherwise cpu.
visualization (bool): if True, save results in output_dir.
'''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
        if images is None and paths is None:
            print('No image provided. Please input an image or an image path.')
            return
        if images is not None:
            for image in images:
                image = image[:, :, ::-1]  # BGR -> RGB for the predictor
                out = self.network.run(image)
                results.append(out)
        if paths is not None:
            for path in paths:
                image = cv2.imread(path)[:, :, ::-1]
                out = self.network.run(image)
                results.append(out)
        if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
if out is not None:
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
results = self.style_transfer(
paths=[self.args.input_path],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
return results
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = [base64_to_cv2(image) for image in images]
results = self.style_transfer(images=images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
        self.arg_config_group.add_argument(
            '--visualization', type=ast.literal_eval, default=False, help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--input_path', type=str, help="path to input image.")
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# lapstyle_circuit
|Module Name|lapstyle_circuit|
| :--- | :---: |
|Category|Image - Style Transfer|
|Network|LapStyle|
|Dataset|COCO|
|Fine-tuning supported or not|No|
|Module Size|121MB|
|Latest update date|2021-12-07|
|Data indicators|-|

## I. Basic Information

- ### Application Effect Display
  - Sample results:
    <p align="center">
    <img src="https://user-images.githubusercontent.com/22424850/144995283-77ddba45-9efe-4f72-914c-1bff734372ed.png" width = "50%" hspace='10'/>
    <br />
    Input content image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144997574-8b4028ad-d871-4caf-87d1-191582bba805.jpg" width = "50%" hspace='10'/>
    <br />
    Input style image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144997589-407a12b9-95bf-44e7-b558-b1026ef3cd5a.png" width = "50%" hspace='10'/>
    <br />
    Output image
    <br />
    </p>

- ### Module Introduction
  - LapStyle, the Laplacian pyramid stylization network, is a fast feed-forward stylization network that produces high-quality stylized images. It progressively generates complex texture transfer effects and runs at 100 fps on 512-resolution images. It supports fast transfer of many different artistic styles and has broad applications in artistic image generation, filters, and related areas.
  - For more details, see [Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer](https://arxiv.org/pdf/2104.05376.pdf)
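  - The network operates on a Laplacian pyramid: a draft is stylized at low resolution, then revision networks add back high-frequency detail level by level. The decomposition itself loses nothing, as this minimal sketch shows; it mirrors the `laplacian` and `fold_laplace_pyramid` helpers in the module code further below:
  - ```python
    import paddle
    import paddle.nn.functional as F

    def down(x):
        return F.interpolate(x, [x.shape[2] // 2, x.shape[3] // 2], mode='bilinear', align_corners=False)

    def up(x, size):
        return F.interpolate(x, size, mode='bilinear', align_corners=False)

    x = paddle.rand([1, 3, 256, 256])
    low = down(x)                       # low-frequency base
    residual = x - up(low, [256, 256])  # high-frequency Laplacian residual
    rebuilt = residual + up(low, [256, 256])
    print(float((rebuilt - x).abs().max()))  # ~0: folding recovers the input
    ```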
## II. Installation

- ### 1. Environmental Dependence
  - ppgan

- ### 2. Installation
  - ```shell
    $ hub install lapstyle_circuit
    ```
  - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../../docs/docs_ch/get_start/windows_quickstart.md)
    | [Linux_Quickstart](../../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../../docs/docs_ch/get_start/mac_quickstart.md)

## III. Module API Prediction

- ### 1. Command line Prediction
  - ```shell
    # Read from a file
    $ hub run lapstyle_circuit --content "/PATH/TO/IMAGE" --style "/PATH/TO/IMAGE1"
    ```
  - If you want to call the lapstyle_circuit module via command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)

- ### 2. Prediction Code Example
  - ```python
    import cv2
    import paddlehub as hub

    module = hub.Module(name="lapstyle_circuit")
    content = cv2.imread("/PATH/TO/IMAGE")
    style = cv2.imread("/PATH/TO/IMAGE1")
    results = module.style_transfer(images=[{'content': content, 'style': style}], output_dir='./transfer_result', use_gpu=True)
    ```

- ### 3. API
  - ```python
    style_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True)
    ```
  - Style transfer API.
  - **Parameters**
    - images (list\[dict\]): image data, with each element being a dict with keys content and style:
      - content (numpy.ndarray): image to be transferred, with shape \[H, W, C\], BGR format;<br/>
      - style (numpy.ndarray): style image, with shape \[H, W, C\], BGR format;<br/>
    - paths (list\[dict\]): image paths, with each element being a dict with keys content and style:
      - content (str): path to the image to be transferred;<br/>
      - style (str): path to the style image;<br/>
    - output\_dir (str): directory where the results are saved;<br/>
    - use\_gpu (bool): whether to use GPU;<br/>
    - visualization (bool): whether to save the results to a local folder
## IV. Server Deployment

- PaddleHub Serving can deploy an online image style transfer service.

- ### Step 1: Start PaddleHub Serving
  - Run the startup command:
  - ```shell
    $ hub serving start -m lapstyle_circuit
    ```
  - This deploys the online image style transfer service API; the default port is 8866.
  - **NOTE:** If GPU is used for prediction, the CUDA\_VISIBLE\_DEVICES environment variable must be set before starting the service; otherwise it does not need to be set.

- ### Step 2: Send a predictive request
  - With the server configured, the following lines of code send a prediction request and fetch the result:
  - ```python
    import requests
    import json
    import cv2
    import base64


    def cv2_to_base64(image):
        data = cv2.imencode('.jpg', image)[1]
        return base64.b64encode(data.tobytes()).decode('utf8')

    # Send an HTTP request
    data = {'images':[{'content': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE")), 'style': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE1"))}]}
    headers = {"Content-type": "application/json"}
    url = "http://127.0.0.1:8866/predict/lapstyle_circuit"
    r = requests.post(url=url, headers=headers, data=json.dumps(data))

    # Print the prediction results
    print(r.json()["results"])
    ```
## V. Release Note

* 1.0.0

  First release

- ```shell
  $ hub install lapstyle_circuit==1.0.0
  ```
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import urllib.request
import cv2 as cv
import numpy as np
import paddle
import paddle.nn.functional as F
from paddle.vision.transforms import functional
from PIL import Image
from ppgan.models.generators import DecoderNet
from ppgan.models.generators import Encoder
from ppgan.models.generators import RevisionNet
from ppgan.utils.visual import tensor2img
def img(img):
    # Some images have 4 channels; drop the alpha channel and keep the first 3.
    # (The HWC -> CHW transpose happens later, in functional.to_tensor.)
    if img.shape[2] > 3:
        img = img[:, :, :3]
    return img
def img_totensor(content_img, style_img):
if content_img.ndim == 2:
content_img = cv.cvtColor(content_img, cv.COLOR_GRAY2RGB)
else:
content_img = cv.cvtColor(content_img, cv.COLOR_BGR2RGB)
h, w, c = content_img.shape
content_img = Image.fromarray(content_img)
content_img = content_img.resize((512, 512), Image.BILINEAR)
content_img = np.array(content_img)
content_img = img(content_img)
content_img = functional.to_tensor(content_img)
style_img = cv.cvtColor(style_img, cv.COLOR_BGR2RGB)
style_img = Image.fromarray(style_img)
style_img = style_img.resize((512, 512), Image.BILINEAR)
style_img = np.array(style_img)
style_img = img(style_img)
style_img = functional.to_tensor(style_img)
content_img = paddle.unsqueeze(content_img, axis=0)
style_img = paddle.unsqueeze(style_img, axis=0)
return content_img, style_img, h, w
def tensor_resample(tensor, dst_size, mode='bilinear'):
return F.interpolate(tensor, dst_size, mode=mode, align_corners=False)
def laplacian(x):
"""
Laplacian
return:
x - upsample(downsample(x))
"""
return x - tensor_resample(tensor_resample(x, [x.shape[2] // 2, x.shape[3] // 2]), [x.shape[2], x.shape[3]])
def make_laplace_pyramid(x, levels):
"""
Make Laplacian Pyramid
"""
pyramid = []
current = x
for i in range(levels):
pyramid.append(laplacian(current))
current = tensor_resample(current, (max(current.shape[2] // 2, 1), max(current.shape[3] // 2, 1)))
pyramid.append(current)
return pyramid
def fold_laplace_pyramid(pyramid):
"""
Fold Laplacian Pyramid
"""
current = pyramid[-1]
for i in range(len(pyramid) - 2, -1, -1): # iterate from len-2 to 0
up_h, up_w = pyramid[i].shape[2], pyramid[i].shape[3]
current = pyramid[i] + tensor_resample(current, (up_h, up_w))
return current
class LapStylePredictor:
def __init__(self, weight_path=None):
self.net_enc = Encoder()
self.net_dec = DecoderNet()
self.net_rev = RevisionNet()
self.net_rev_2 = RevisionNet()
self.net_enc.set_dict(paddle.load(weight_path)['net_enc'])
self.net_enc.eval()
self.net_dec.set_dict(paddle.load(weight_path)['net_dec'])
self.net_dec.eval()
self.net_rev.set_dict(paddle.load(weight_path)['net_rev'])
self.net_rev.eval()
self.net_rev_2.set_dict(paddle.load(weight_path)['net_rev_2'])
self.net_rev_2.eval()
def run(self, content_img, style_image):
content_img, style_img, h, w = img_totensor(content_img, style_image)
pyr_ci = make_laplace_pyramid(content_img, 2)
pyr_si = make_laplace_pyramid(style_img, 2)
pyr_ci.append(content_img)
pyr_si.append(style_img)
cF = self.net_enc(pyr_ci[2])
sF = self.net_enc(pyr_si[2])
stylized_small = self.net_dec(cF, sF)
stylized_up = F.interpolate(stylized_small, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[1], stylized_up], axis=1)
stylized_rev_lap = self.net_rev(revnet_input)
stylized_rev = fold_laplace_pyramid([stylized_rev_lap, stylized_small])
stylized_up = F.interpolate(stylized_rev, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[0], stylized_up], axis=1)
stylized_rev_lap_second = self.net_rev_2(revnet_input)
stylized_rev_second = fold_laplace_pyramid([stylized_rev_lap_second, stylized_rev_lap, stylized_small])
stylized = stylized_rev_second
stylized_visual = tensor2img(stylized, min_max=(0., 1.))
return stylized_visual
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import ast
import copy
import os
import cv2
import numpy as np
import paddle
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .model import LapStylePredictor
from .util import base64_to_cv2
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(
name="lapstyle_circuit",
type="CV/style_transfer",
author="paddlepaddle",
author_email="",
summary="",
version="1.0.0")
class Lapstyle_circuit:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "lapstyle_circuit.pdparams")
self.network = LapStylePredictor(weight_path=self.pretrained_model)
def style_transfer(self,
images: list = None,
paths: list = None,
output_dir: str = './transfer_result/',
use_gpu: bool = False,
visualization: bool = True):
'''
        Transfer an image to circuit style.
images (list[dict]): data of images, each element is a dict:
- content (numpy.ndarray): input image,shape is \[H, W, C\],BGR format;<br/>
- style (numpy.ndarray) : style image,shape is \[H, W, C\],BGR format;<br/>
        paths (list[dict]): paths to images, each element is a dict:
- content (str): path to input image;<br/>
- style (str) : path to style image;<br/>
output_dir (str): the dir to save the results
use_gpu (bool): if True, use gpu to perform the computation, otherwise cpu.
visualization (bool): if True, save results in output_dir.
'''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
        if images is None and paths is None:
            print('No image provided. Please input an image or an image path.')
            return
        if images is not None:
            for image_dict in images:
                content_img = image_dict['content']
                style_img = image_dict['style']
                results.append(self.network.run(content_img, style_img))
        if paths is not None:
            for path_dict in paths:
                content_img = cv2.imread(path_dict['content'])
                style_img = cv2.imread(path_dict['style'])
                results.append(self.network.run(content_img, style_img))
        if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
self.style_transfer(
paths=[{
'content': self.args.content,
'style': self.args.style
}],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = copy.deepcopy(images)
for image in images_decode:
image['content'] = base64_to_cv2(image['content'])
image['style'] = base64_to_cv2(image['style'])
results = self.style_transfer(images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
        self.arg_config_group.add_argument(
            '--visualization', type=ast.literal_eval, default=False, help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--content', type=str, help="path to content image.")
self.arg_input_group.add_argument('--style', type=str, help="path to style image.")
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
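A quick self-contained round-trip check of this decode helper, paired with the `cv2_to_base64` encoding from the serving examples; PNG is used here so the comparison is lossless (the JPEG in the docs would be lossy):

```python
import base64

import cv2
import numpy as np

def base64_to_cv2(b64str):
    data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
    return cv2.imdecode(data, cv2.IMREAD_COLOR)

img = np.zeros((4, 4, 3), dtype=np.uint8)
b64 = base64.b64encode(cv2.imencode('.png', img)[1].tobytes()).decode('utf8')
assert (base64_to_cv2(b64) == img).all()  # lossless with PNG encoding
```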
# lapstyle_ocean
|Module Name|lapstyle_ocean|
| :--- | :---: |
|Category|Image - Style Transfer|
|Network|LapStyle|
|Dataset|COCO|
|Fine-tuning supported or not|No|
|Module Size|121MB|
|Latest update date|2021-12-07|
|Data indicators|-|

## I. Basic Information

- ### Application Effect Display
  - Sample results:
    <p align="center">
    <img src="https://user-images.githubusercontent.com/22424850/144995283-77ddba45-9efe-4f72-914c-1bff734372ed.png" width = "50%" hspace='10'/>
    <br />
    Input content image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144997958-9162c304-dff4-4048-a197-607882ded00c.png" width = "50%" hspace='10'/>
    <br />
    Input style image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144997967-43d7579c-cc73-452e-a920-5759eb5a5d67.png" width = "50%" hspace='10'/>
    <br />
    Output image
    <br />
    </p>

- ### Module Introduction
  - LapStyle, the Laplacian pyramid stylization network, is a fast feed-forward stylization network that produces high-quality stylized images. It progressively generates complex texture transfer effects and runs at 100 fps on 512-resolution images. It supports fast transfer of many different artistic styles and has broad applications in artistic image generation, filters, and related areas.
  - For more details, see [Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer](https://arxiv.org/pdf/2104.05376.pdf)
## II. Installation

- ### 1. Environmental Dependence
  - ppgan

- ### 2. Installation
  - ```shell
    $ hub install lapstyle_ocean
    ```
  - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../../docs/docs_ch/get_start/windows_quickstart.md)
    | [Linux_Quickstart](../../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../../docs/docs_ch/get_start/mac_quickstart.md)

## III. Module API Prediction

- ### 1. Command line Prediction
  - ```shell
    # Read from a file
    $ hub run lapstyle_ocean --content "/PATH/TO/IMAGE" --style "/PATH/TO/IMAGE1"
    ```
  - If you want to call the lapstyle_ocean module via command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)

- ### 2. Prediction Code Example
  - ```python
    import cv2
    import paddlehub as hub

    module = hub.Module(name="lapstyle_ocean")
    content = cv2.imread("/PATH/TO/IMAGE")
    style = cv2.imread("/PATH/TO/IMAGE1")
    results = module.style_transfer(images=[{'content': content, 'style': style}], output_dir='./transfer_result', use_gpu=True)
    ```

- ### 3. API
  - ```python
    style_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True)
    ```
  - Style transfer API.
  - **Parameters**
    - images (list\[dict\]): image data, with each element being a dict with keys content and style:
      - content (numpy.ndarray): image to be transferred, with shape \[H, W, C\], BGR format;<br/>
      - style (numpy.ndarray): style image, with shape \[H, W, C\], BGR format;<br/>
    - paths (list\[dict\]): image paths, with each element being a dict with keys content and style (see the path-based example after this list):
      - content (str): path to the image to be transferred;<br/>
      - style (str): path to the style image;<br/>
    - output\_dir (str): directory where the results are saved;<br/>
    - use\_gpu (bool): whether to use GPU;<br/>
    - visualization (bool): whether to save the results to a local folder
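  - A path-based call sketch, with the same placeholder paths as in the examples above:
  - ```python
    import paddlehub as hub

    module = hub.Module(name="lapstyle_ocean")
    results = module.style_transfer(
        paths=[{'content': '/PATH/TO/IMAGE', 'style': '/PATH/TO/IMAGE1'}],
        output_dir='./transfer_result',
        visualization=True)
    ```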
## IV. Server Deployment

- PaddleHub Serving can deploy an online image style transfer service.

- ### Step 1: Start PaddleHub Serving
  - Run the startup command:
  - ```shell
    $ hub serving start -m lapstyle_ocean
    ```
  - This deploys the online image style transfer service API; the default port is 8866.
  - **NOTE:** If GPU is used for prediction, the CUDA\_VISIBLE\_DEVICES environment variable must be set before starting the service; otherwise it does not need to be set.

- ### Step 2: Send a predictive request
  - With the server configured, the following lines of code send a prediction request and fetch the result:
  - ```python
    import requests
    import json
    import cv2
    import base64


    def cv2_to_base64(image):
        data = cv2.imencode('.jpg', image)[1]
        return base64.b64encode(data.tobytes()).decode('utf8')

    # Send an HTTP request
    data = {'images':[{'content': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE")), 'style': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE1"))}]}
    headers = {"Content-type": "application/json"}
    url = "http://127.0.0.1:8866/predict/lapstyle_ocean"
    r = requests.post(url=url, headers=headers, data=json.dumps(data))

    # Print the prediction results
    print(r.json()["results"])
    ```
## V. Release Note

* 1.0.0

  First release

- ```shell
  $ hub install lapstyle_ocean==1.0.0
  ```
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import urllib.request
import cv2 as cv
import numpy as np
import paddle
import paddle.nn.functional as F
from paddle.vision.transforms import functional
from PIL import Image
from ppgan.models.generators import DecoderNet
from ppgan.models.generators import Encoder
from ppgan.models.generators import RevisionNet
from ppgan.utils.visual import tensor2img
def img(img):
    # Some images have 4 channels; drop the alpha channel and keep the first 3.
    # (The HWC -> CHW transpose happens later, in functional.to_tensor.)
    if img.shape[2] > 3:
        img = img[:, :, :3]
    return img
def img_totensor(content_img, style_img):
if content_img.ndim == 2:
content_img = cv.cvtColor(content_img, cv.COLOR_GRAY2RGB)
else:
content_img = cv.cvtColor(content_img, cv.COLOR_BGR2RGB)
h, w, c = content_img.shape
content_img = Image.fromarray(content_img)
content_img = content_img.resize((512, 512), Image.BILINEAR)
content_img = np.array(content_img)
content_img = img(content_img)
content_img = functional.to_tensor(content_img)
style_img = cv.cvtColor(style_img, cv.COLOR_BGR2RGB)
style_img = Image.fromarray(style_img)
style_img = style_img.resize((512, 512), Image.BILINEAR)
style_img = np.array(style_img)
style_img = img(style_img)
style_img = functional.to_tensor(style_img)
content_img = paddle.unsqueeze(content_img, axis=0)
style_img = paddle.unsqueeze(style_img, axis=0)
return content_img, style_img, h, w
def tensor_resample(tensor, dst_size, mode='bilinear'):
return F.interpolate(tensor, dst_size, mode=mode, align_corners=False)
def laplacian(x):
"""
Laplacian
return:
x - upsample(downsample(x))
"""
return x - tensor_resample(tensor_resample(x, [x.shape[2] // 2, x.shape[3] // 2]), [x.shape[2], x.shape[3]])
def make_laplace_pyramid(x, levels):
"""
Make Laplacian Pyramid
"""
pyramid = []
current = x
for i in range(levels):
pyramid.append(laplacian(current))
current = tensor_resample(current, (max(current.shape[2] // 2, 1), max(current.shape[3] // 2, 1)))
pyramid.append(current)
return pyramid
def fold_laplace_pyramid(pyramid):
"""
Fold Laplacian Pyramid
"""
current = pyramid[-1]
for i in range(len(pyramid) - 2, -1, -1): # iterate from len-2 to 0
up_h, up_w = pyramid[i].shape[2], pyramid[i].shape[3]
current = pyramid[i] + tensor_resample(current, (up_h, up_w))
return current
class LapStylePredictor:
def __init__(self, weight_path=None):
self.net_enc = Encoder()
self.net_dec = DecoderNet()
self.net_rev = RevisionNet()
self.net_rev_2 = RevisionNet()
self.net_enc.set_dict(paddle.load(weight_path)['net_enc'])
self.net_enc.eval()
self.net_dec.set_dict(paddle.load(weight_path)['net_dec'])
self.net_dec.eval()
self.net_rev.set_dict(paddle.load(weight_path)['net_rev'])
self.net_rev.eval()
self.net_rev_2.set_dict(paddle.load(weight_path)['net_rev_2'])
self.net_rev_2.eval()
def run(self, content_img, style_image):
content_img, style_img, h, w = img_totensor(content_img, style_image)
pyr_ci = make_laplace_pyramid(content_img, 2)
pyr_si = make_laplace_pyramid(style_img, 2)
pyr_ci.append(content_img)
pyr_si.append(style_img)
cF = self.net_enc(pyr_ci[2])
sF = self.net_enc(pyr_si[2])
stylized_small = self.net_dec(cF, sF)
stylized_up = F.interpolate(stylized_small, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[1], stylized_up], axis=1)
stylized_rev_lap = self.net_rev(revnet_input)
stylized_rev = fold_laplace_pyramid([stylized_rev_lap, stylized_small])
stylized_up = F.interpolate(stylized_rev, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[0], stylized_up], axis=1)
stylized_rev_lap_second = self.net_rev_2(revnet_input)
stylized_rev_second = fold_laplace_pyramid([stylized_rev_lap_second, stylized_rev_lap, stylized_small])
stylized = stylized_rev_second
stylized_visual = tensor2img(stylized, min_max=(0., 1.))
return stylized_visual
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import ast
import copy
import os
import cv2
import numpy as np
import paddle
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .model import LapStylePredictor
from .util import base64_to_cv2
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(
name="lapstyle_ocean",
type="CV/style_transfer",
author="paddlepaddle",
author_email="",
summary="",
version="1.0.0")
class Lapstyle_ocean:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "lapstyle_ocean.pdparams")
self.network = LapStylePredictor(weight_path=self.pretrained_model)
def style_transfer(self,
images: list = None,
paths: list = None,
output_dir: str = './transfer_result/',
use_gpu: bool = False,
visualization: bool = True):
'''
        Transfer an image to ocean style.
images (list[dict]): data of images, each element is a dict:
- content (numpy.ndarray): input image,shape is \[H, W, C\],BGR format;<br/>
- style (numpy.ndarray) : style image,shape is \[H, W, C\],BGR format;<br/>
        paths (list[dict]): paths to images, each element is a dict:
- content (str): path to input image;<br/>
- style (str) : path to style image;<br/>
output_dir (str): the dir to save the results
use_gpu (bool): if True, use gpu to perform the computation, otherwise cpu.
visualization (bool): if True, save results in output_dir.
'''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
        if images is None and paths is None:
            print('No image provided. Please input an image or an image path.')
            return
        if images is not None:
            for image_dict in images:
                content_img = image_dict['content']
                style_img = image_dict['style']
                results.append(self.network.run(content_img, style_img))
        if paths is not None:
            for path_dict in paths:
                content_img = cv2.imread(path_dict['content'])
                style_img = cv2.imread(path_dict['style'])
                results.append(self.network.run(content_img, style_img))
        if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
self.style_transfer(
paths=[{
'content': self.args.content,
'style': self.args.style
}],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = copy.deepcopy(images)
for image in images_decode:
image['content'] = base64_to_cv2(image['content'])
image['style'] = base64_to_cv2(image['style'])
results = self.style_transfer(images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
        self.arg_config_group.add_argument(
            '--visualization', type=ast.literal_eval, default=False, help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--content', type=str, help="path to content image.")
self.arg_input_group.add_argument('--style', type=str, help="path to style image.")
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# lapstyle_starrynew
|Module Name|lapstyle_starrynew|
| :--- | :---: |
|Category|Image - Style Transfer|
|Network|LapStyle|
|Dataset|COCO|
|Fine-tuning supported or not|No|
|Module Size|121MB|
|Latest update date|2021-12-07|
|Data indicators|-|

## I. Basic Information

- ### Application Effect Display
  - Sample results:
    <p align="center">
    <img src="https://user-images.githubusercontent.com/22424850/144995283-77ddba45-9efe-4f72-914c-1bff734372ed.png" width = "50%" hspace='10'/>
    <br />
    Input content image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144995349-59651a1d-7be4-479f-ad58-063b4fc6dded.png" width = "50%" hspace='10'/>
    <br />
    Input style image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144995779-bb87c39e-643c-4c75-be49-7de5f8b52a17.png" width = "50%" hspace='10'/>
    <br />
    Output image
    <br />
    </p>

- ### Module Introduction
  - LapStyle, the Laplacian pyramid stylization network, is a fast feed-forward stylization network that produces high-quality stylized images. It progressively generates complex texture transfer effects and runs at 100 fps on 512-resolution images. It supports fast transfer of many different artistic styles and has broad applications in artistic image generation, filters, and related areas.
  - For more details, see [Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer](https://arxiv.org/pdf/2104.05376.pdf)
## II. Installation

- ### 1. Environmental Dependence
  - ppgan

- ### 2. Installation
  - ```shell
    $ hub install lapstyle_starrynew
    ```
  - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../../docs/docs_ch/get_start/windows_quickstart.md)
    | [Linux_Quickstart](../../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../../docs/docs_ch/get_start/mac_quickstart.md)

## III. Module API Prediction

- ### 1. Command line Prediction
  - ```shell
    # Read from a file
    $ hub run lapstyle_starrynew --content "/PATH/TO/IMAGE" --style "/PATH/TO/IMAGE1"
    ```
  - If you want to call the lapstyle_starrynew module via command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)

- ### 2. Prediction Code Example
  - ```python
    import cv2
    import paddlehub as hub

    module = hub.Module(name="lapstyle_starrynew")
    content = cv2.imread("/PATH/TO/IMAGE")
    style = cv2.imread("/PATH/TO/IMAGE1")
    results = module.style_transfer(images=[{'content': content, 'style': style}], output_dir='./transfer_result', use_gpu=True)
    ```

- ### 3. API
  - ```python
    style_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True)
    ```
  - Style transfer API.
  - **Parameters**
    - images (list\[dict\]): image data, with each element being a dict with keys content and style:
      - content (numpy.ndarray): image to be transferred, with shape \[H, W, C\], BGR format;<br/>
      - style (numpy.ndarray): style image, with shape \[H, W, C\], BGR format;<br/>
    - paths (list\[dict\]): image paths, with each element being a dict with keys content and style:
      - content (str): path to the image to be transferred;<br/>
      - style (str): path to the style image;<br/>
    - output\_dir (str): directory where the results are saved;<br/>
    - use\_gpu (bool): whether to use GPU;<br/>
    - visualization (bool): whether to save the results to a local folder; with False you can save the returned arrays yourself, as in the sketch after this list
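  - Sketch of saving results manually with `visualization=False`; the filename `starrynew.png` is only an example, and the returned arrays are RGB, so they are flipped to BGR for OpenCV:
  - ```python
    import cv2
    import paddlehub as hub

    module = hub.Module(name="lapstyle_starrynew")
    content = cv2.imread("/PATH/TO/IMAGE")
    style = cv2.imread("/PATH/TO/IMAGE1")
    results = module.style_transfer(images=[{'content': content, 'style': style}], visualization=False)
    cv2.imwrite('starrynew.png', results[0][:, :, ::-1])
    ```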
## IV. Server Deployment

- PaddleHub Serving can deploy an online image style transfer service.

- ### Step 1: Start PaddleHub Serving
  - Run the startup command:
  - ```shell
    $ hub serving start -m lapstyle_starrynew
    ```
  - This deploys the online image style transfer service API; the default port is 8866.
  - **NOTE:** If GPU is used for prediction, the CUDA\_VISIBLE\_DEVICES environment variable must be set before starting the service; otherwise it does not need to be set.

- ### Step 2: Send a predictive request
  - With the server configured, the following lines of code send a prediction request and fetch the result:
  - ```python
    import requests
    import json
    import cv2
    import base64


    def cv2_to_base64(image):
        data = cv2.imencode('.jpg', image)[1]
        return base64.b64encode(data.tobytes()).decode('utf8')

    # Send an HTTP request
    data = {'images':[{'content': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE")), 'style': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE1"))}]}
    headers = {"Content-type": "application/json"}
    url = "http://127.0.0.1:8866/predict/lapstyle_starrynew"
    r = requests.post(url=url, headers=headers, data=json.dumps(data))

    # Print the prediction results
    print(r.json()["results"])
    ```
## V. Release Note

* 1.0.0

  First release

- ```shell
  $ hub install lapstyle_starrynew==1.0.0
  ```
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import urllib.request
import cv2 as cv
import numpy as np
import paddle
import paddle.nn.functional as F
from paddle.vision.transforms import functional
from PIL import Image
from ppgan.models.generators import DecoderNet
from ppgan.models.generators import Encoder
from ppgan.models.generators import RevisionNet
from ppgan.utils.visual import tensor2img
def img(img):
    # Some images have 4 channels; drop the alpha channel and keep the first 3.
    # (The HWC -> CHW transpose happens later, in functional.to_tensor.)
    if img.shape[2] > 3:
        img = img[:, :, :3]
    return img
def img_totensor(content_img, style_img):
if content_img.ndim == 2:
content_img = cv.cvtColor(content_img, cv.COLOR_GRAY2RGB)
else:
content_img = cv.cvtColor(content_img, cv.COLOR_BGR2RGB)
h, w, c = content_img.shape
content_img = Image.fromarray(content_img)
content_img = content_img.resize((512, 512), Image.BILINEAR)
content_img = np.array(content_img)
content_img = img(content_img)
content_img = functional.to_tensor(content_img)
style_img = cv.cvtColor(style_img, cv.COLOR_BGR2RGB)
style_img = Image.fromarray(style_img)
style_img = style_img.resize((512, 512), Image.BILINEAR)
style_img = np.array(style_img)
style_img = img(style_img)
style_img = functional.to_tensor(style_img)
content_img = paddle.unsqueeze(content_img, axis=0)
style_img = paddle.unsqueeze(style_img, axis=0)
return content_img, style_img, h, w
def tensor_resample(tensor, dst_size, mode='bilinear'):
return F.interpolate(tensor, dst_size, mode=mode, align_corners=False)
def laplacian(x):
"""
Laplacian
return:
x - upsample(downsample(x))
"""
return x - tensor_resample(tensor_resample(x, [x.shape[2] // 2, x.shape[3] // 2]), [x.shape[2], x.shape[3]])
def make_laplace_pyramid(x, levels):
"""
Make Laplacian Pyramid
"""
pyramid = []
current = x
for i in range(levels):
pyramid.append(laplacian(current))
current = tensor_resample(current, (max(current.shape[2] // 2, 1), max(current.shape[3] // 2, 1)))
pyramid.append(current)
return pyramid
def fold_laplace_pyramid(pyramid):
"""
Fold Laplacian Pyramid
"""
current = pyramid[-1]
for i in range(len(pyramid) - 2, -1, -1): # iterate from len-2 to 0
up_h, up_w = pyramid[i].shape[2], pyramid[i].shape[3]
current = pyramid[i] + tensor_resample(current, (up_h, up_w))
return current
class LapStylePredictor:
def __init__(self, weight_path=None):
self.net_enc = Encoder()
self.net_dec = DecoderNet()
self.net_rev = RevisionNet()
self.net_rev_2 = RevisionNet()
self.net_enc.set_dict(paddle.load(weight_path)['net_enc'])
self.net_enc.eval()
self.net_dec.set_dict(paddle.load(weight_path)['net_dec'])
self.net_dec.eval()
self.net_rev.set_dict(paddle.load(weight_path)['net_rev'])
self.net_rev.eval()
self.net_rev_2.set_dict(paddle.load(weight_path)['net_rev_2'])
self.net_rev_2.eval()
def run(self, content_img, style_image):
content_img, style_img, h, w = img_totensor(content_img, style_image)
pyr_ci = make_laplace_pyramid(content_img, 2)
pyr_si = make_laplace_pyramid(style_img, 2)
pyr_ci.append(content_img)
pyr_si.append(style_img)
cF = self.net_enc(pyr_ci[2])
sF = self.net_enc(pyr_si[2])
stylized_small = self.net_dec(cF, sF)
stylized_up = F.interpolate(stylized_small, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[1], stylized_up], axis=1)
stylized_rev_lap = self.net_rev(revnet_input)
stylized_rev = fold_laplace_pyramid([stylized_rev_lap, stylized_small])
stylized_up = F.interpolate(stylized_rev, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[0], stylized_up], axis=1)
stylized_rev_lap_second = self.net_rev_2(revnet_input)
stylized_rev_second = fold_laplace_pyramid([stylized_rev_lap_second, stylized_rev_lap, stylized_small])
stylized = stylized_rev_second
stylized_visual = tensor2img(stylized, min_max=(0., 1.))
return stylized_visual
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import ast
import copy
import os
import cv2
import numpy as np
import paddle
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .model import LapStylePredictor
from .util import base64_to_cv2
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(
name="lapstyle_starrynew",
type="CV/style_transfer",
author="paddlepaddle",
author_email="",
summary="",
version="1.0.0")
class Lapstyle_starrynew:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "lapstyle_starrynew.pdparams")
self.network = LapStylePredictor(weight_path=self.pretrained_model)
def style_transfer(self,
images: list = None,
paths: list = None,
output_dir: str = './transfer_result/',
use_gpu: bool = False,
visualization: bool = True):
'''
        Transfer an image to starrynew style.
images (list[dict]): data of images, each element is a dict:
- content (numpy.ndarray): input image,shape is \[H, W, C\],BGR format;<br/>
- style (numpy.ndarray) : style image,shape is \[H, W, C\],BGR format;<br/>
        paths (list[dict]): paths to images, each element is a dict:
- content (str): path to input image;<br/>
- style (str) : path to style image;<br/>
output_dir (str): the dir to save the results
use_gpu (bool): if True, use gpu to perform the computation, otherwise cpu.
visualization (bool): if True, save results in output_dir.
'''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
        if images is None and paths is None:
            print('No image provided. Please input an image or an image path.')
            return
        if images is not None:
            for image_dict in images:
                content_img = image_dict['content']
                style_img = image_dict['style']
                results.append(self.network.run(content_img, style_img))
        if paths is not None:
            for path_dict in paths:
                content_img = cv2.imread(path_dict['content'])
                style_img = cv2.imread(path_dict['style'])
                results.append(self.network.run(content_img, style_img))
        if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
self.style_transfer(
paths=[{
'content': self.args.content,
'style': self.args.style
}],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = copy.deepcopy(images)
for image in images_decode:
image['content'] = base64_to_cv2(image['content'])
image['style'] = base64_to_cv2(image['style'])
results = self.style_transfer(images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
        self.arg_config_group.add_argument(
            '--visualization', type=ast.literal_eval, default=False, help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--content', type=str, help="path to content image.")
self.arg_input_group.add_argument('--style', type=str, help="path to style image.")
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# lapstyle_stars
|Module Name|lapstyle_stars|
| :--- | :---: |
|Category|Image - Style Transfer|
|Network|LapStyle|
|Dataset|COCO|
|Fine-tuning supported or not|No|
|Module Size|121MB|
|Latest update date|2021-12-07|
|Data indicators|-|

## I. Basic Information

- ### Application Effect Display
  - Sample results:
    <p align="center">
    <img src="https://user-images.githubusercontent.com/22424850/144995283-77ddba45-9efe-4f72-914c-1bff734372ed.png" width = "50%" hspace='10'/>
    <br />
    Input content image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144998358-14b87265-e966-422e-95f7-1738407e84ee.png" width = "50%" hspace='10'/>
    <br />
    Input style image
    <br />
    <img src="https://user-images.githubusercontent.com/22424850/144998367-5bc21fae-27fc-4c0e-8e1e-9702c7ee2b26.png" width = "50%" hspace='10'/>
    <br />
    Output image
    <br />
    </p>

- ### Module Introduction
  - LapStyle, the Laplacian pyramid stylization network, is a fast feed-forward stylization network that produces high-quality stylized images. It progressively generates complex texture transfer effects and runs at 100 fps on 512-resolution images. It supports fast transfer of many different artistic styles and has broad applications in artistic image generation, filters, and related areas.
  - For more details, see [Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer](https://arxiv.org/pdf/2104.05376.pdf)
## 二、安装
- ### 1、环境依赖
- ppgan
- ### 2、安装
- ```shell
$ hub install lapstyle_stars
```
- 如您安装时遇到问题,可参考:[零基础windows安装](../../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [零基础Linux安装](../../../../../docs/docs_ch/get_start/linux_quickstart.md) | [零基础MacOS安装](../../../../../docs/docs_ch/get_start/mac_quickstart.md)
## 三、模型API预测
- ### 1、命令行预测
- ```shell
# Read from a file
$ hub run lapstyle_stars --content "/PATH/TO/IMAGE" --style "/PATH/TO/IMAGE1"
```
- 通过命令行方式实现风格转换模型的调用,更多请见 [PaddleHub命令行指令](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、预测代码示例
- ```python
import paddlehub as hub
module = hub.Module(name="lapstyle_stars")
content = cv2.imread("/PATH/TO/IMAGE")
style = cv2.imread("/PATH/TO/IMAGE1")
results = module.style_transfer(images=[{'content':content, 'style':style}], output_dir='./transfer_result', use_gpu=True)
```
- ### 3、API
- ```python
style_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True)
```
- 风格转换API。
- **参数**
- images (list[dict]): data of images, 每一个元素都为一个 dict,有关键字 content, style, 相应取值为:
- content (numpy.ndarray): 待转换的图片,shape 为 \[H, W, C\],BGR格式;<br/>
- style (numpy.ndarray) : 风格图像,shape为 \[H, W, C\],BGR格式;<br/>
- paths (list[str]): paths to images, 每一个元素都为一个dict, 有关键字 content, style, 相应取值为:
- content (str): 待转换的图片的路径;<br/>
- style (str) : 风格图像的路径;<br/>
- output\_dir (str): 结果保存的路径; <br/>
- use\_gpu (bool): 是否使用 GPU;<br/>
- visualization(bool): 是否保存结果到本地文件夹
## IV. Server Deployment
- PaddleHub Serving can deploy an online style transfer service.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m lapstyle_stars
```
- This deploys an online style transfer service API, with the default port number 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA\_VISIBLE\_DEVICES environment variable before starting the service; otherwise it need not be set.
- ### Step 2: Send a predictive request
- With the server configured, the following lines of code send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')  # tostring() is deprecated

# Send an HTTP request
data = {'images':[{'content': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE")), 'style': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE1"))}]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/lapstyle_stars"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results
print(r.json()["results"])
```
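- The returned results hold one stylized image per input pair, serialized as a nested list in RGB order (as produced by the module). A minimal sketch for writing one entry back to disk, assuming the request above succeeded:
- ```python
import numpy as np

stylized = np.array(r.json()["results"][0], dtype=np.uint8)
cv2.imwrite("stylized.png", stylized[:, :, ::-1])  # RGB -> BGR for OpenCV
```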
## V. Release Note
* 1.0.0
First release
- ```shell
$ hub install lapstyle_stars==1.0.0
```
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import urllib.request
import cv2 as cv
import numpy as np
import paddle
import paddle.nn.functional as F
from paddle.vision.transforms import functional
from PIL import Image
from ppgan.models.generators import DecoderNet
from ppgan.models.generators import Encoder
from ppgan.models.generators import RevisionNet
from ppgan.utils.visual import tensor2img
def img(img):
# some images have 4 channels; keep only the first three
if img.shape[2] > 3:
img = img[:, :, :3]
return img
def img_totensor(content_img, style_img):
if content_img.ndim == 2:
content_img = cv.cvtColor(content_img, cv.COLOR_GRAY2RGB)
else:
content_img = cv.cvtColor(content_img, cv.COLOR_BGR2RGB)
h, w, c = content_img.shape
content_img = Image.fromarray(content_img)
content_img = content_img.resize((512, 512), Image.BILINEAR)
content_img = np.array(content_img)
content_img = img(content_img)
content_img = functional.to_tensor(content_img)
style_img = cv.cvtColor(style_img, cv.COLOR_BGR2RGB)
style_img = Image.fromarray(style_img)
style_img = style_img.resize((512, 512), Image.BILINEAR)
style_img = np.array(style_img)
style_img = img(style_img)
style_img = functional.to_tensor(style_img)
content_img = paddle.unsqueeze(content_img, axis=0)
style_img = paddle.unsqueeze(style_img, axis=0)
return content_img, style_img, h, w
def tensor_resample(tensor, dst_size, mode='bilinear'):
return F.interpolate(tensor, dst_size, mode=mode, align_corners=False)
def laplacian(x):
"""
Laplacian
return:
x - upsample(downsample(x))
"""
return x - tensor_resample(tensor_resample(x, [x.shape[2] // 2, x.shape[3] // 2]), [x.shape[2], x.shape[3]])
def make_laplace_pyramid(x, levels):
"""
Make Laplacian Pyramid
"""
pyramid = []
current = x
for i in range(levels):
pyramid.append(laplacian(current))
current = tensor_resample(current, (max(current.shape[2] // 2, 1), max(current.shape[3] // 2, 1)))
pyramid.append(current)
return pyramid
def fold_laplace_pyramid(pyramid):
"""
Fold Laplacian Pyramid
"""
current = pyramid[-1]
for i in range(len(pyramid) - 2, -1, -1): # iterate from len-2 to 0
up_h, up_w = pyramid[i].shape[2], pyramid[i].shape[3]
current = pyramid[i] + tensor_resample(current, (up_h, up_w))
return current
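# Illustrative sanity check (not part of the module): because each pyramid level
# stores exactly x - upsample(downsample(x)) and the base level stores the final
# downsampled image, folding the pyramid reconstructs the input up to
# floating-point error.
def _pyramid_roundtrip_check():
    x = paddle.rand([1, 3, 64, 64])
    x_rec = fold_laplace_pyramid(make_laplace_pyramid(x, 2))
    return float(paddle.abs(x - x_rec).max())  # expected to be ~0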
class LapStylePredictor:
def __init__(self, weight_path=None):
self.net_enc = Encoder()
self.net_dec = DecoderNet()
self.net_rev = RevisionNet()
self.net_rev_2 = RevisionNet()
self.net_enc.set_dict(paddle.load(weight_path)['net_enc'])
self.net_enc.eval()
self.net_dec.set_dict(paddle.load(weight_path)['net_dec'])
self.net_dec.eval()
self.net_rev.set_dict(paddle.load(weight_path)['net_rev'])
self.net_rev.eval()
self.net_rev_2.set_dict(paddle.load(weight_path)['net_rev_2'])
self.net_rev_2.eval()
def run(self, content_img, style_image):
content_img, style_img, h, w = img_totensor(content_img, style_image)
pyr_ci = make_laplace_pyramid(content_img, 2)
pyr_si = make_laplace_pyramid(style_img, 2)
pyr_ci.append(content_img)
pyr_si.append(style_img)
cF = self.net_enc(pyr_ci[2])
sF = self.net_enc(pyr_si[2])
stylized_small = self.net_dec(cF, sF)
stylized_up = F.interpolate(stylized_small, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[1], stylized_up], axis=1)
stylized_rev_lap = self.net_rev(revnet_input)
stylized_rev = fold_laplace_pyramid([stylized_rev_lap, stylized_small])
stylized_up = F.interpolate(stylized_rev, scale_factor=2)
revnet_input = paddle.concat(x=[pyr_ci[0], stylized_up], axis=1)
stylized_rev_lap_second = self.net_rev_2(revnet_input)
stylized_rev_second = fold_laplace_pyramid([stylized_rev_lap_second, stylized_rev_lap, stylized_small])
stylized = stylized_rev_second
stylized_visual = tensor2img(stylized, min_max=(0., 1.))
return stylized_visual
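# Illustrative usage sketch (not part of the module): the weight path is a
# placeholder, and run() expects BGR numpy arrays as read by cv2.
def _lapstyle_demo(weight_path, content_bgr, style_bgr):
    predictor = LapStylePredictor(weight_path=weight_path)
    return predictor.run(content_bgr, style_bgr)  # stylized image, HWC RGB uint8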
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import copy
import os
import cv2
import numpy as np
import paddle
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .model import LapStylePredictor
from .util import base64_to_cv2
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(
name="lapstyle_stars",
type="CV/style_transfer",
author="paddlepaddle",
author_email="",
summary="",
version="1.0.0")
class Lapstyle_stars:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "lapstyle_stars.pdparams")
self.network = LapStylePredictor(weight_path=self.pretrained_model)
def style_transfer(self,
images: list = None,
paths: list = None,
output_dir: str = './transfer_result/',
use_gpu: bool = False,
visualization: bool = True):
'''
Transfer an image to the stars style.
images (list[dict]): data of images, each element is a dict:
- content (numpy.ndarray): input image, shape [H, W, C], BGR format;
- style (numpy.ndarray): style image, shape [H, W, C], BGR format;
paths (list[dict]): paths to images, each element is a dict:
- content (str): path to the input image;
- style (str): path to the style image;
output_dir (str): directory in which to save the results.
use_gpu (bool): if True, use the GPU for computation, otherwise the CPU.
visualization (bool): if True, save the results in output_dir.
'''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
if images is None and paths is None:
print('No image provided. Please input an image or an image path.')
return
if images is not None:
for image_dict in images:
content_img = image_dict['content']
style_img = image_dict['style']
results.append(self.network.run(content_img, style_img))
if paths is not None:
for path_dict in paths:
content_img = cv2.imread(path_dict['content'])
style_img = cv2.imread(path_dict['style'])
results.append(self.network.run(content_img, style_img))
if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
self.style_transfer(
paths=[{
'content': self.args.content,
'style': self.args.style
}],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = copy.deepcopy(images)
for image in images_decode:
image['content'] = base64_to_cv2(image['content'])
image['style'] = base64_to_cv2(image['style'])
results = self.style_transfer(images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
# NOTE: argparse's type=bool treats any non-empty string (even "False") as True, so parse the value explicitly.
self.arg_config_group.add_argument(
'--visualization', type=lambda s: s.lower() in ('true', '1'), default=False, help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--content', type=str, help="path to content image.")
self.arg_input_group.add_argument('--style', type=str, help="path to style image.")
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.frombuffer(data, np.uint8)  # np.fromstring is deprecated for binary data
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# paint_transformer
|Module Name|paint_transformer|
| :--- | :---: |
|Category|image - style transfer|
|Network|Paint Transformer|
|Dataset|Baidu self-built dataset|
|Fine-tuning supported or not|No|
|Module Size|77MB|
|Latest update date|2021-12-07|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/145002878-ffdeea71-8ff4-48cc-88d0-fba1aa1dce4b.jpg" width = "40%" hspace='10'/>
<br />
Input image
<br />
<img src="https://user-images.githubusercontent.com/22424850/145002301-97c45887-cb2e-4a06-9d00-07b74080effa.png" width = "40%" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- This module renders an image in an oil painting style.
- For more details, please refer to: [Paint Transformer: Feed Forward Neural Painting with Stroke Prediction](https://github.com/wzmsltw/PaintTransformer)
## II. Installation
- ### 1. Environmental Dependence
- ppgan
- ### 2. Installation
- ```shell
$ hub install paint_transformer
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1. Command line Prediction
- ```shell
# Read from a file
$ hub run paint_transformer --input_path "/PATH/TO/IMAGE"
```
- For more information on calling the style transfer module from the command line, see: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2. Prediction Code Example
- ```python
import paddlehub as hub

module = hub.Module(name="paint_transformer")
input_path = ["/PATH/TO/IMAGE"]
# Read from a file
module.style_transfer(paths=input_path, output_dir='./transfer_result/', use_gpu=True)
```
- ### 3. API
- ```python
style_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, need_animation=False, visualization=True)
```
- Oil painting style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, with ndarray.shape \[H, W, C\];<br/>
- paths (list\[str\]): image paths;<br/>
- output\_dir (str): directory in which to save the results;<br/>
- use\_gpu (bool): whether to use GPU;<br/>
- need\_animation (bool): whether to save the intermediate results to build an animation;<br/>
- visualization (bool): whether to save the results to a local folder
## IV. Server Deployment
- PaddleHub Serving can deploy an online oil painting style transfer service.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m paint_transformer
```
- This deploys an online oil painting style transfer service API, with the default port number 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA\_VISIBLE\_DEVICES environment variable before starting the service; otherwise it need not be set.
- ### Step 2: Send a predictive request
- With the server configured, the following lines of code send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')  # tostring() is deprecated

# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/paint_transformer"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results
print(r.json()["results"])
```
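- For this module, each entry of results is the list of intermediate frames for one input image (BGR uint8 arrays serialized as nested lists), with the finished painting last. A minimal sketch for saving the final frame, assuming the request above succeeded:
- ```python
import numpy as np

frames = r.json()["results"][0]
final = np.array(frames[-1], dtype=np.uint8)
cv2.imwrite("painting.png", final)  # frames are already in BGR order for OpenCV
```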
## V. Release Note
* 1.0.0
First release
- ```shell
$ hub install paint_transformer==1.0.0
```
import numpy as np
from PIL import Image
import network
import os
import math
import render_utils
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import cv2
import render_parallel
import render_serial
def main(input_path, model_path, output_dir, need_animation=False, resize_h=None, resize_w=None, serial=False):
if not os.path.exists(output_dir):
os.mkdir(output_dir)
input_name = os.path.basename(input_path)
output_path = os.path.join(output_dir, input_name)
frame_dir = None
if need_animation:
if not serial:
print('Animation results require serial mode; setting serial to True.')
serial = True
frame_dir = os.path.join(output_dir, input_name[:input_name.find('.')])
if not os.path.exists(frame_dir):
os.mkdir(frame_dir)
stroke_num = 8
#* ----- load model ----- *#
paddle.set_device('gpu')
net_g = network.Painter(5, stroke_num, 256, 8, 3, 3)
net_g.set_state_dict(paddle.load(model_path))
net_g.eval()
for param in net_g.parameters():
param.stop_gradient = True
#* ----- load brush ----- *#
brush_large_vertical = render_utils.read_img('brush/brush_large_vertical.png', 'L')
brush_large_horizontal = render_utils.read_img('brush/brush_large_horizontal.png', 'L')
meta_brushes = paddle.concat([brush_large_vertical, brush_large_horizontal], axis=0)
import time
t0 = time.time()
original_img = render_utils.read_img(input_path, 'RGB', resize_h, resize_w)
if serial:
final_result_list = render_serial.render_serial(original_img, net_g, meta_brushes)
if need_animation:
print("total frame:", len(final_result_list))
for idx, frame in enumerate(final_result_list):
cv2.imwrite(os.path.join(frame_dir, '%03d.png' % idx), frame)
else:
cv2.imwrite(output_path, final_result_list[-1])
else:
final_result = render_parallel.render_parallel(original_img, net_g, meta_brushes)
cv2.imwrite(output_path, final_result)
print("total infer time:", time.time() - t0)
if __name__ == '__main__':
main(
input_path='input/chicago.jpg',
model_path='paint_best.pdparams',
output_dir='output/',
need_animation=True, # whether need intermediate results for animation.
resize_h=512, # resize original input to this size. None means do not resize.
resize_w=512, # resize original input to this size. None means do not resize.
serial=True) # if need animation, serial must be True.
import paddle
import paddle.nn as nn
import math
class Painter(nn.Layer):
"""
network architecture written in paddle.
"""
def __init__(self, param_per_stroke, total_strokes, hidden_dim, n_heads=8, n_enc_layers=3, n_dec_layers=3):
super().__init__()
self.enc_img = nn.Sequential(
nn.Pad2D([1, 1, 1, 1], 'reflect'),
nn.Conv2D(3, 32, 3, 1),
nn.BatchNorm2D(32),
nn.ReLU(), # maybe replace with the inplace version
nn.Pad2D([1, 1, 1, 1], 'reflect'),
nn.Conv2D(32, 64, 3, 2),
nn.BatchNorm2D(64),
nn.ReLU(),
nn.Pad2D([1, 1, 1, 1], 'reflect'),
nn.Conv2D(64, 128, 3, 2),
nn.BatchNorm2D(128),
nn.ReLU())
self.enc_canvas = nn.Sequential(
nn.Pad2D([1, 1, 1, 1], 'reflect'), nn.Conv2D(3, 32, 3, 1), nn.BatchNorm2D(32), nn.ReLU(),
nn.Pad2D([1, 1, 1, 1], 'reflect'), nn.Conv2D(32, 64, 3, 2), nn.BatchNorm2D(64), nn.ReLU(),
nn.Pad2D([1, 1, 1, 1], 'reflect'), nn.Conv2D(64, 128, 3, 2), nn.BatchNorm2D(128), nn.ReLU())
self.conv = nn.Conv2D(128 * 2, hidden_dim, 1)
self.transformer = nn.Transformer(hidden_dim, n_heads, n_enc_layers, n_dec_layers)
self.linear_param = nn.Sequential(
nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
nn.Linear(hidden_dim, param_per_stroke))
self.linear_decider = nn.Linear(hidden_dim, 1)
self.query_pos = paddle.static.create_parameter([total_strokes, hidden_dim],
dtype='float32',
default_initializer=nn.initializer.Uniform(0, 1))
self.row_embed = paddle.static.create_parameter([8, hidden_dim // 2],
dtype='float32',
default_initializer=nn.initializer.Uniform(0, 1))
self.col_embed = paddle.static.create_parameter([8, hidden_dim // 2],
dtype='float32',
default_initializer=nn.initializer.Uniform(0, 1))
def forward(self, img, canvas):
"""
prediction
"""
b, _, H, W = img.shape
img_feat = self.enc_img(img)
canvas_feat = self.enc_canvas(canvas)
h, w = img_feat.shape[-2:]
feat = paddle.concat([img_feat, canvas_feat], axis=1)
feat_conv = self.conv(feat)
pos_embed = paddle.concat([
self.col_embed[:w].unsqueeze(0).tile([h, 1, 1]),
self.row_embed[:h].unsqueeze(1).tile([1, w, 1]),
],
axis=-1).flatten(0, 1).unsqueeze(1)
hidden_state = self.transformer((pos_embed + feat_conv.flatten(2).transpose([2, 0, 1])).transpose([1, 0, 2]),
self.query_pos.unsqueeze(1).tile([1, b, 1]).transpose([1, 0, 2]))
param = self.linear_param(hidden_state)
decision = self.linear_decider(hidden_state)
return param, decision
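# Illustrative smoke test (not part of the module): shapes follow the
# constructor call used by the paint_transformer module, i.e.
# Painter(5, 8, 256, 8, 3, 3) with a 32x32 input so the 8x8 positional
# embeddings cover the downsampled feature map exactly.
if __name__ == '__main__':
    net = Painter(5, 8, 256, 8, 3, 3)
    img = paddle.rand([1, 3, 32, 32])
    canvas = paddle.zeros([1, 3, 32, 32])
    param, decision = net(img, canvas)
    # param: [1, 8, 5] stroke parameters; decision: [1, 8, 1] stroke confidence logits
    print(param.shape, decision.shape)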
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import argparse
import copy
import paddle
import paddlehub as hub
from paddlehub.module.module import moduleinfo, runnable, serving
import numpy as np
import cv2
from skimage.io import imread
from skimage.transform import rescale, resize
from .model import Painter
from .render_utils import totensor, read_img
from .render_serial import render_serial
from .util import base64_to_cv2
@moduleinfo(
name="paint_transformer",
type="CV/style_transfer",
author="paddlepaddle",
author_email="",
summary="",
version="1.0.0")
class paint_transformer:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "paint_best.pdparams")
self.network = Painter(5, 8, 256, 8, 3, 3)
self.network.set_state_dict(paddle.load(self.pretrained_model))
self.network.eval()
for param in self.network.parameters():
param.stop_gradient = True
#* ----- load brush ----- *#
brush_large_vertical = read_img(os.path.join(self.directory, 'brush/brush_large_vertical.png'), 'L')
brush_large_horizontal = read_img(os.path.join(self.directory, 'brush/brush_large_horizontal.png'), 'L')
self.meta_brushes = paddle.concat([brush_large_vertical, brush_large_horizontal], axis=0)
def style_transfer(self,
images: list = None,
paths: list = None,
output_dir: str = './transfer_result/',
use_gpu: bool = False,
need_animation: bool = False,
visualization: bool = True):
'''
images (list[numpy.ndarray]): data of images, shape of each is [H, W, C]; color space must be BGR (as read by cv2).
paths (list[str]): paths to images.
output_dir (str): directory in which to save the results.
use_gpu (bool): if True, use the GPU for computation, otherwise the CPU.
need_animation (bool): if True, save every frame to show the painting process.
visualization (bool): if True, save the results in output_dir.
'''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
if images is None and paths is None:
print('No image provided. Please input an image or an image path.')
return
if images is not None:
for image in images:
image = image[:, :, ::-1]
image = totensor(image)
final_result_list = render_serial(image, self.network, self.meta_brushes)
results.append(final_result_list)
if paths is not None:
for path in paths:
image = cv2.imread(path)[:, :, ::-1]
image = totensor(image)
final_result_list = render_serial(image, self.network, self.meta_brushes)
results.append(final_result_list)
if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
if out:
if need_animation:
curoutputdir = os.path.join(output_dir, 'output_{}'.format(i))
if not os.path.exists(curoutputdir):
os.makedirs(curoutputdir, exist_ok=True)
for j, outimg in enumerate(out):
cv2.imwrite(os.path.join(curoutputdir, 'frame_{}.png'.format(j)), outimg)
else:
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
results = self.style_transfer(
paths=[self.args.input_path],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
need_animation=self.args.need_animation,
visualization=self.args.visualization)
return results
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = [base64_to_cv2(image) for image in images]
results = self.style_transfer(images=images_decode, **kwargs)
# style_transfer returns a list of frame lists, so serialize every frame;
# calling .tolist() on the outer Python list would raise an AttributeError.
tolist = [[frame.tolist() for frame in result] for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
# NOTE: argparse's type=bool treats any non-empty string (even "False") as True, so parse the values explicitly.
self.arg_config_group.add_argument(
'--visualization', type=lambda s: s.lower() in ('true', '1'), default=False, help='save results or not.')
self.arg_config_group.add_argument(
'--need_animation', type=lambda s: s.lower() in ('true', '1'), default=False, help='save intermediate results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--input_path', type=str, help="path to input image.")
import render_utils
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import numpy as np
import math
def crop(img, h, w):
H, W = img.shape[-2:]
pad_h = (H - h) // 2
pad_w = (W - w) // 2
remainder_h = (H - h) % 2
remainder_w = (W - w) % 2
img = img[:, :, pad_h:H - pad_h - remainder_h, pad_w:W - pad_w - remainder_w]
return img
def stroke_net_predict(img_patch, result_patch, patch_size, net_g, stroke_num, patch_num):
"""
stroke_net_predict
"""
img_patch = img_patch.transpose([0, 2, 1]).reshape([-1, 3, patch_size, patch_size])
result_patch = result_patch.transpose([0, 2, 1]).reshape([-1, 3, patch_size, patch_size])
#*----- Stroke Predictor -----*#
shape_param, stroke_decision = net_g(img_patch, result_patch)
stroke_decision = (stroke_decision > 0).astype('float32')
#*----- sampling color -----*#
grid = shape_param[:, :, :2].reshape([img_patch.shape[0] * stroke_num, 1, 1, 2])
img_temp = img_patch.unsqueeze(1).tile([1, stroke_num, 1, 1,
1]).reshape([img_patch.shape[0] * stroke_num, 3, patch_size, patch_size])
color = nn.functional.grid_sample(
img_temp, 2 * grid - 1, align_corners=False).reshape([img_patch.shape[0], stroke_num, 3])
param = paddle.concat([shape_param, color], axis=-1)
param = param.reshape([-1, 8])
param[:, :2] = param[:, :2] / 2 + 0.25
param[:, 2:4] = param[:, 2:4] / 2
param = param.reshape([1, patch_num, patch_num, stroke_num, 8])
decision = stroke_decision.reshape([1, patch_num, patch_num, stroke_num]) #.astype('bool')
return param, decision
def param2img_parallel(param, decision, meta_brushes, cur_canvas, stroke_num=8):
"""
Input stroke parameters and decisions for each patch, meta brushes, current canvas, frame directory,
and whether there is a border (if intermediate painting results are required).
Output the painting results of adding the corresponding strokes on the current canvas.
Args:
param: a tensor with shape batch size x patch along height dimension x patch along width dimension
x n_stroke_per_patch x n_param_per_stroke
decision: a 01 tensor with shape batch size x patch along height dimension x patch along width dimension
x n_stroke_per_patch
meta_brushes: a tensor with shape 2 x 3 x meta_brush_height x meta_brush_width.
The first slice on the batch dimension denotes vertical brush and the second one denotes horizontal brush.
cur_canvas: a tensor with shape batch size x 3 x H x W,
where H and W denote height and width of padded results of original images.
Returns:
cur_canvas: a tensor with shape batch size x 3 x H x W, denoting painting results.
"""
# param: b, h, w, stroke_per_patch, param_per_stroke
# decision: b, h, w, stroke_per_patch
b, h, w, s, p = param.shape
h, w = int(h), int(w)
param = param.reshape([-1, 8])
decision = decision.reshape([-1, 8])
H, W = cur_canvas.shape[-2:]
is_odd_y = h % 2 == 1
is_odd_x = w % 2 == 1
render_size_y = 2 * H // h
render_size_x = 2 * W // w
even_idx_y = paddle.arange(0, h, 2)
even_idx_x = paddle.arange(0, w, 2)
if h > 1:
odd_idx_y = paddle.arange(1, h, 2)
if w > 1:
odd_idx_x = paddle.arange(1, w, 2)
cur_canvas = F.pad(cur_canvas, [render_size_x // 4, render_size_x // 4, render_size_y // 4, render_size_y // 4])
valid_foregrounds = render_utils.param2stroke(param, render_size_y, render_size_x, meta_brushes)
#* ----- load dilation/erosion ---- *#
dilation = render_utils.Dilation2d(m=1)
erosion = render_utils.Erosion2d(m=1)
#* ----- generate alphas ----- *#
valid_alphas = (valid_foregrounds > 0).astype('float32')
valid_foregrounds = valid_foregrounds.reshape([-1, stroke_num, 1, render_size_y, render_size_x])
valid_alphas = valid_alphas.reshape([-1, stroke_num, 1, render_size_y, render_size_x])
temp = [dilation(valid_foregrounds[:, i, :, :, :]) for i in range(stroke_num)]
valid_foregrounds = paddle.stack(temp, axis=1)
valid_foregrounds = valid_foregrounds.reshape([-1, 1, render_size_y, render_size_x])
temp = [erosion(valid_alphas[:, i, :, :, :]) for i in range(stroke_num)]
valid_alphas = paddle.stack(temp, axis=1)
valid_alphas = valid_alphas.reshape([-1, 1, render_size_y, render_size_x])
foregrounds = valid_foregrounds.reshape([-1, h, w, stroke_num, 1, render_size_y, render_size_x])
alphas = valid_alphas.reshape([-1, h, w, stroke_num, 1, render_size_y, render_size_x])
decision = decision.reshape([-1, h, w, stroke_num, 1, 1, 1])
param = param.reshape([-1, h, w, stroke_num, 8])
def partial_render(this_canvas, patch_coord_y, patch_coord_x):
canvas_patch = F.unfold(
this_canvas, [render_size_y, render_size_x], strides=[render_size_y // 2, render_size_x // 2])
# canvas_patch: b, 3 * py * px, h * w
canvas_patch = canvas_patch.reshape([b, 3, render_size_y, render_size_x, h, w])
canvas_patch = canvas_patch.transpose([0, 4, 5, 1, 2, 3])
selected_canvas_patch = paddle.gather(canvas_patch, patch_coord_y, 1)
selected_canvas_patch = paddle.gather(selected_canvas_patch, patch_coord_x, 2)
selected_canvas_patch = selected_canvas_patch.reshape([0, 0, 0, 1, 3, render_size_y, render_size_x])
selected_foregrounds = paddle.gather(foregrounds, patch_coord_y, 1)
selected_foregrounds = paddle.gather(selected_foregrounds, patch_coord_x, 2)
selected_alphas = paddle.gather(alphas, patch_coord_y, 1)
selected_alphas = paddle.gather(selected_alphas, patch_coord_x, 2)
selected_decisions = paddle.gather(decision, patch_coord_y, 1)
selected_decisions = paddle.gather(selected_decisions, patch_coord_x, 2)
selected_color = paddle.gather(param, patch_coord_y, 1)
selected_color = paddle.gather(selected_color, patch_coord_x, 2)
selected_color = paddle.gather(selected_color, paddle.to_tensor([5, 6, 7]), 4)
selected_color = selected_color.reshape([0, 0, 0, stroke_num, 3, 1, 1])
for i in range(stroke_num):
i = paddle.to_tensor(i)
cur_foreground = paddle.gather(selected_foregrounds, i, 3)
cur_alpha = paddle.gather(selected_alphas, i, 3)
cur_decision = paddle.gather(selected_decisions, i, 3)
cur_color = paddle.gather(selected_color, i, 3)
cur_foreground = cur_foreground * cur_color
selected_canvas_patch = cur_foreground * cur_alpha * cur_decision + selected_canvas_patch * (
1 - cur_alpha * cur_decision)
selected_canvas_patch = selected_canvas_patch.reshape([0, 0, 0, 3, render_size_y, render_size_x])
this_canvas = selected_canvas_patch.transpose([0, 3, 1, 4, 2, 5])
# this_canvas: b, 3, h_half, py, w_half, px
h_half = this_canvas.shape[2]
w_half = this_canvas.shape[4]
this_canvas = this_canvas.reshape([b, 3, h_half * render_size_y, w_half * render_size_x])
# this_canvas: b, 3, h_half * py, w_half * px
return this_canvas
# even - even area
# 1 | 0
# 0 | 0
canvas = partial_render(cur_canvas, even_idx_y, even_idx_x)
if not is_odd_y:
canvas = paddle.concat([canvas, cur_canvas[:, :, -render_size_y // 2:, :canvas.shape[3]]], axis=2)
if not is_odd_x:
canvas = paddle.concat([canvas, cur_canvas[:, :, :canvas.shape[2], -render_size_x // 2:]], axis=3)
cur_canvas = canvas
# odd - odd area
# 0 | 0
# 0 | 1
if h > 1 and w > 1:
canvas = partial_render(cur_canvas, odd_idx_y, odd_idx_x)
canvas = paddle.concat([cur_canvas[:, :, :render_size_y // 2, -canvas.shape[3]:], canvas], axis=2)
canvas = paddle.concat([cur_canvas[:, :, -canvas.shape[2]:, :render_size_x // 2], canvas], axis=3)
if is_odd_y:
canvas = paddle.concat([canvas, cur_canvas[:, :, -render_size_y // 2:, :canvas.shape[3]]], axis=2)
if is_odd_x:
canvas = paddle.concat([canvas, cur_canvas[:, :, :canvas.shape[2], -render_size_x // 2:]], axis=3)
cur_canvas = canvas
# odd - even area
# 0 | 0
# 1 | 0
if h > 1:
canvas = partial_render(cur_canvas, odd_idx_y, even_idx_x)
canvas = paddle.concat([cur_canvas[:, :, :render_size_y // 2, :canvas.shape[3]], canvas], axis=2)
if is_odd_y:
canvas = paddle.concat([canvas, cur_canvas[:, :, -render_size_y // 2:, :canvas.shape[3]]], axis=2)
if not is_odd_x:
canvas = paddle.concat([canvas, cur_canvas[:, :, :canvas.shape[2], -render_size_x // 2:]], axis=3)
cur_canvas = canvas
# even - odd area
# 0 | 1
# 0 | 0
if w > 1:
canvas = partial_render(cur_canvas, even_idx_y, odd_idx_x)
canvas = paddle.concat([cur_canvas[:, :, :canvas.shape[2], :render_size_x // 2], canvas], axis=3)
if not is_odd_y:
canvas = paddle.concat([canvas, cur_canvas[:, :, -render_size_y // 2:, -canvas.shape[3]:]], axis=2)
if is_odd_x:
canvas = paddle.concat([canvas, cur_canvas[:, :, :canvas.shape[2], -render_size_x // 2:]], axis=3)
cur_canvas = canvas
cur_canvas = cur_canvas[:, :, render_size_y // 4:-render_size_y // 4, render_size_x // 4:-render_size_x // 4]
return cur_canvas
def render_parallel(original_img, net_g, meta_brushes):
patch_size = 32
stroke_num = 8
with paddle.no_grad():
original_h, original_w = original_img.shape[-2:]
K = max(math.ceil(math.log2(max(original_h, original_w) / patch_size)), 0)
original_img_pad_size = patch_size * (2**K)
original_img_pad = render_utils.pad(original_img, original_img_pad_size, original_img_pad_size)
final_result = paddle.zeros_like(original_img)
for layer in range(0, K + 1):
layer_size = patch_size * (2**layer)
img = F.interpolate(original_img_pad, (layer_size, layer_size))
result = F.interpolate(final_result, (layer_size, layer_size))
img_patch = F.unfold(img, [patch_size, patch_size], strides=[patch_size, patch_size])
result_patch = F.unfold(result, [patch_size, patch_size], strides=[patch_size, patch_size])
# There are patch_num * patch_num patches in total
patch_num = (layer_size - patch_size) // patch_size + 1
param, decision = stroke_net_predict(img_patch, result_patch, patch_size, net_g, stroke_num, patch_num)
final_result = param2img_parallel(param, decision, meta_brushes, final_result)
# paint another time for last layer
border_size = original_img_pad_size // (2 * patch_num)
img = F.interpolate(original_img_pad, (layer_size, layer_size))
result = F.interpolate(final_result, (layer_size, layer_size))
img = F.pad(img, [patch_size // 2, patch_size // 2, patch_size // 2, patch_size // 2])
result = F.pad(result, [patch_size // 2, patch_size // 2, patch_size // 2, patch_size // 2])
img_patch = F.unfold(img, [patch_size, patch_size], strides=[patch_size, patch_size])
result_patch = F.unfold(result, [patch_size, patch_size], strides=[patch_size, patch_size])
final_result = F.pad(final_result, [border_size, border_size, border_size, border_size])
patch_num = (img.shape[2] - patch_size) // patch_size + 1
param, decision = stroke_net_predict(img_patch, result_patch, patch_size, net_g, stroke_num, patch_num)
final_result = param2img_parallel(param, decision, meta_brushes, final_result)
final_result = final_result[:, :, border_size:-border_size, border_size:-border_size]
final_result = (final_result.numpy().squeeze().transpose([1, 2, 0])[:, :, ::-1] * 255).astype(np.uint8)
return final_result
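# Illustrative call sketch (not part of the module): net_g is a trained Painter
# and meta_brushes the stacked brush textures loaded in main().
def _render_parallel_demo(image_path, net_g, meta_brushes):
    img = render_utils.read_img(image_path, 'RGB', 512, 512)  # [1, 3, 512, 512] in [0, 1]
    return render_parallel(img, net_g, meta_brushes)  # HWC BGR uint8 array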
#!/usr/bin/env python3
"""
Code for oil painting style transfer.
"""
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import numpy as np
from PIL import Image
import math
import cv2
import time
from .render_utils import param2stroke, Dilation2d, Erosion2d
def get_single_layer_lists(param, decision, ori_img, render_size_x, render_size_y, h, w, meta_brushes, dilation,
erosion, stroke_num):
"""
get_single_layer_lists
"""
valid_foregrounds = param2stroke(param[:, :], render_size_y, render_size_x, meta_brushes)
valid_alphas = (valid_foregrounds > 0).astype('float32')
valid_foregrounds = valid_foregrounds.reshape([-1, stroke_num, 1, render_size_y, render_size_x])
valid_alphas = valid_alphas.reshape([-1, stroke_num, 1, render_size_y, render_size_x])
temp = [dilation(valid_foregrounds[:, i, :, :, :]) for i in range(stroke_num)]
valid_foregrounds = paddle.stack(temp, axis=1)
valid_foregrounds = valid_foregrounds.reshape([-1, 1, render_size_y, render_size_x])
temp = [erosion(valid_alphas[:, i, :, :, :]) for i in range(stroke_num)]
valid_alphas = paddle.stack(temp, axis=1)
valid_alphas = valid_alphas.reshape([-1, 1, render_size_y, render_size_x])
patch_y = 4 * render_size_y // 5
patch_x = 4 * render_size_x // 5
img_patch = ori_img.reshape([1, 3, h, ori_img.shape[2] // h, w, ori_img.shape[3] // w])
img_patch = img_patch.transpose([0, 2, 4, 1, 3, 5])[0]
xid_list = []
yid_list = []
error_list = []
for flag_idx, flag in enumerate(decision.cpu().numpy()):
if flag:
flag_idx = flag_idx // stroke_num
x_id = flag_idx % w
flag_idx = flag_idx // w
y_id = flag_idx % h
xid_list.append(x_id)
yid_list.append(y_id)
inner_fores = valid_foregrounds[:, :, render_size_y // 10:9 * render_size_y // 10, render_size_x // 10:9 *
render_size_x // 10]
inner_alpha = valid_alphas[:, :, render_size_y // 10:9 * render_size_y // 10, render_size_x // 10:9 *
render_size_x // 10]
inner_fores = inner_fores.reshape([h * w, stroke_num, 1, patch_y, patch_x])
inner_alpha = inner_alpha.reshape([h * w, stroke_num, 1, patch_y, patch_x])
inner_real = img_patch.reshape([h * w, 3, patch_y, patch_x]).unsqueeze(1)
R = param[:, 5]
G = param[:, 6]
B = param[:, 7]
R = R.reshape([-1, stroke_num]).unsqueeze(-1).unsqueeze(-1).unsqueeze(-1)
G = G.reshape([-1, stroke_num]).unsqueeze(-1).unsqueeze(-1).unsqueeze(-1)
B = B.reshape([-1, stroke_num]).unsqueeze(-1).unsqueeze(-1).unsqueeze(-1)
error_R = R * inner_fores - inner_real[:, :, 0:1, :, :]
error_G = G * inner_fores - inner_real[:, :, 1:2, :, :]
error_B = B * inner_fores - inner_real[:, :, 2:3, :, :]
error = paddle.abs(error_R) + paddle.abs(error_G) + paddle.abs(error_B)
error = error * inner_alpha
error = paddle.sum(error, axis=(2, 3, 4)) / paddle.sum(inner_alpha, axis=(2, 3, 4))
error_list = error.reshape([-1]).numpy()[decision.numpy()]
error_list = list(error_list)
valid_foregrounds = paddle.to_tensor(valid_foregrounds.numpy()[decision.numpy()])
valid_alphas = paddle.to_tensor(valid_alphas.numpy()[decision.numpy()])
selected_param = paddle.to_tensor(param.numpy()[decision.numpy()])
return xid_list, yid_list, valid_foregrounds, valid_alphas, error_list, selected_param
def get_single_stroke_on_full_image_A(x_id, y_id, valid_foregrounds, valid_alphas, param, original_img, render_size_x,
render_size_y, patch_x, patch_y):
"""
get_single_stroke_on_full_image_A
"""
tmp_foreground = paddle.zeros_like(original_img)
patch_y_num = original_img.shape[2] // patch_y
patch_x_num = original_img.shape[3] // patch_x
brush = valid_foregrounds.unsqueeze(0)
color_map = param[5:]
brush = brush.tile([1, 3, 1, 1])
color_map = color_map.unsqueeze(-1).unsqueeze(-1).unsqueeze(0) #.repeat(1, 1, H, W)
brush = brush * color_map
pad_l = x_id * patch_x
pad_r = (patch_x_num - x_id - 1) * patch_x
pad_t = y_id * patch_y
pad_b = (patch_y_num - y_id - 1) * patch_y
tmp_foreground = nn.functional.pad(brush, [pad_l, pad_r, pad_t, pad_b])
tmp_foreground = tmp_foreground[:, :, render_size_y // 10:-render_size_y // 10, render_size_x //
10:-render_size_x // 10]
tmp_alpha = nn.functional.pad(valid_alphas.unsqueeze(0), [pad_l, pad_r, pad_t, pad_b])
tmp_alpha = tmp_alpha[:, :, render_size_y // 10:-render_size_y // 10, render_size_x // 10:-render_size_x // 10]
return tmp_foreground, tmp_alpha
def get_single_stroke_on_full_image_B(x_id, y_id, valid_foregrounds, valid_alphas, param, original_img, render_size_x,
render_size_y, patch_x, patch_y):
"""
get_single_stroke_on_full_image_B
"""
x_expand = patch_x // 2 + render_size_x // 10
y_expand = patch_y // 2 + render_size_y // 10
pad_l = x_id * patch_x
pad_r = original_img.shape[3] + 2 * x_expand - (x_id * patch_x + render_size_x)
pad_t = y_id * patch_y
pad_b = original_img.shape[2] + 2 * y_expand - (y_id * patch_y + render_size_y)
brush = valid_foregrounds.unsqueeze(0)
color_map = param[5:]
brush = brush.tile([1, 3, 1, 1])
color_map = color_map.unsqueeze(-1).unsqueeze(-1).unsqueeze(0) #.repeat(1, 1, H, W)
brush = brush * color_map
tmp_foreground = nn.functional.pad(brush, [pad_l, pad_r, pad_t, pad_b])
tmp_foreground = tmp_foreground[:, :, y_expand:-y_expand, x_expand:-x_expand]
tmp_alpha = nn.functional.pad(valid_alphas.unsqueeze(0), [pad_l, pad_r, pad_t, pad_b])
tmp_alpha = tmp_alpha[:, :, y_expand:-y_expand, x_expand:-x_expand]
return tmp_foreground, tmp_alpha
def stroke_net_predict(img_patch, result_patch, patch_size, net_g, stroke_num):
"""
stroke_net_predict
"""
img_patch = img_patch.transpose([0, 2, 1]).reshape([-1, 3, patch_size, patch_size])
result_patch = result_patch.transpose([0, 2, 1]).reshape([-1, 3, patch_size, patch_size])
#*----- Stroke Predictor -----*#
shape_param, stroke_decision = net_g(img_patch, result_patch)
stroke_decision = (stroke_decision > 0).astype('float32')
#*----- sampling color -----*#
grid = shape_param[:, :, :2].reshape([img_patch.shape[0] * stroke_num, 1, 1, 2])
img_temp = img_patch.unsqueeze(1).tile([1, stroke_num, 1, 1,
1]).reshape([img_patch.shape[0] * stroke_num, 3, patch_size, patch_size])
color = nn.functional.grid_sample(
img_temp, 2 * grid - 1, align_corners=False).reshape([img_patch.shape[0], stroke_num, 3])
stroke_param = paddle.concat([shape_param, color], axis=-1)
param = stroke_param.reshape([-1, 8])
decision = stroke_decision.reshape([-1]).astype('bool')
param[:, :2] = param[:, :2] / 1.25 + 0.1
param[:, 2:4] = param[:, 2:4] / 1.25
return param, decision
def sort_strokes(params, decision, scores):
"""
sort_strokes
"""
sorted_scores, sorted_index = paddle.sort(scores, axis=1, descending=False)
sorted_params = []
for idx in range(8):
tmp_pick_params = paddle.gather(params[:, :, idx], axis=1, index=sorted_index)
sorted_params.append(tmp_pick_params)
sorted_params = paddle.stack(sorted_params, axis=2)
sorted_decison = paddle.gather(decision.squeeze(2), axis=1, index=sorted_index)
return sorted_params, sorted_decison
def render_serial(original_img, net_g, meta_brushes):
patch_size = 32
stroke_num = 8
H, W = original_img.shape[-2:]
K = max(math.ceil(math.log2(max(H, W) / patch_size)), 0)
dilation = Dilation2d(m=1)
erosion = Erosion2d(m=1)
frames_per_layer = [20, 20, 30, 40, 60]
final_frame_list = []
with paddle.no_grad():
#* ----- read in image and init canvas ----- *#
final_result = paddle.zeros_like(original_img)
for layer in range(0, K + 1):
t0 = time.time()
layer_size = patch_size * (2**layer)
img = nn.functional.interpolate(original_img, (layer_size, layer_size))
result = nn.functional.interpolate(final_result, (layer_size, layer_size))
img_patch = nn.functional.unfold(img, [patch_size, patch_size], strides=[patch_size, patch_size])
result_patch = nn.functional.unfold(result, [patch_size, patch_size], strides=[patch_size, patch_size])
h = (img.shape[2] - patch_size) // patch_size + 1
w = (img.shape[3] - patch_size) // patch_size + 1
render_size_y = int(1.25 * H // h)
render_size_x = int(1.25 * W // w)
#* -------------------------------------------------------------*#
#* -------------generate strokes on window type A---------------*#
#* -------------------------------------------------------------*#
param, decision = stroke_net_predict(img_patch, result_patch, patch_size, net_g, stroke_num)
expand_img = original_img
wA_xid_list, wA_yid_list, wA_fore_list, wA_alpha_list, wA_error_list, wA_params = \
get_single_layer_lists(param, decision, original_img, render_size_x, render_size_y, h, w,
meta_brushes, dilation, erosion, stroke_num)
#* -------------------------------------------------------------*#
#* -------------generate strokes on window type B---------------*#
#* -------------------------------------------------------------*#
#*----- generate input canvas and target patches -----*#
wB_error_list = []
img = nn.functional.pad(img, [patch_size // 2, patch_size // 2, patch_size // 2, patch_size // 2])
result = nn.functional.pad(result, [patch_size // 2, patch_size // 2, patch_size // 2, patch_size // 2])
img_patch = nn.functional.unfold(img, [patch_size, patch_size], strides=[patch_size, patch_size])
result_patch = nn.functional.unfold(result, [patch_size, patch_size], strides=[patch_size, patch_size])
h += 1
w += 1
param, decision = stroke_net_predict(img_patch, result_patch, patch_size, net_g, stroke_num)
patch_y = 4 * render_size_y // 5
patch_x = 4 * render_size_x // 5
expand_img = nn.functional.pad(original_img, [patch_x // 2, patch_x // 2, patch_y // 2, patch_y // 2])
wB_xid_list, wB_yid_list, wB_fore_list, wB_alpha_list, wB_error_list, wB_params = \
get_single_layer_lists(param, decision, expand_img, render_size_x, render_size_y, h, w,
meta_brushes, dilation, erosion, stroke_num)
#* -------------------------------------------------------------*#
#* -------------rank strokes and plot stroke one by one---------*#
#* -------------------------------------------------------------*#
numA = len(wA_error_list)
numB = len(wB_error_list)
total_error_list = wA_error_list + wB_error_list
sort_list = list(np.argsort(total_error_list))
sample = 0
samples = np.linspace(0, len(sort_list) - 2, frames_per_layer[layer]).astype(int)
for ii in sort_list:
ii = int(ii)
if ii < numA:
x_id = wA_xid_list[ii]
y_id = wA_yid_list[ii]
valid_foregrounds = wA_fore_list[ii]
valid_alphas = wA_alpha_list[ii]
sparam = wA_params[ii]
tmp_foreground, tmp_alpha = get_single_stroke_on_full_image_A(
x_id, y_id, valid_foregrounds, valid_alphas, sparam, original_img, render_size_x, render_size_y,
patch_x, patch_y)
else:
x_id = wB_xid_list[ii - numA]
y_id = wB_yid_list[ii - numA]
valid_foregrounds = wB_fore_list[ii - numA]
valid_alphas = wB_alpha_list[ii - numA]
sparam = wB_params[ii - numA]
tmp_foreground, tmp_alpha = get_single_stroke_on_full_image_B(
x_id, y_id, valid_foregrounds, valid_alphas, sparam, original_img, render_size_x, render_size_y,
patch_x, patch_y)
final_result = tmp_foreground * tmp_alpha + (1 - tmp_alpha) * final_result
if sample in samples:
saveframe = (final_result.numpy().squeeze().transpose([1, 2, 0])[:, :, ::-1] * 255).astype(np.uint8)
final_frame_list.append(saveframe)
sample += 1
print("layer %d cost: %.02f" % (layer, time.time() - t0))
saveframe = (final_result.numpy().squeeze().transpose([1, 2, 0])[:, :, ::-1] * 255).astype(np.uint8)
final_frame_list.append(saveframe)
return final_frame_list
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import cv2
import numpy as np
from PIL import Image
import math
class Erosion2d(nn.Layer):
"""
Erosion2d
"""
def __init__(self, m=1):
super(Erosion2d, self).__init__()
self.m = m
self.pad = [m, m, m, m]
def forward(self, x):
batch_size, c, h, w = x.shape
x_pad = F.pad(x, pad=self.pad, mode='constant', value=1e9)
channel = nn.functional.unfold(x_pad, 2 * self.m + 1, strides=1, paddings=0).reshape([batch_size, c, -1, h, w])
result = paddle.min(channel, axis=2)
return result
class Dilation2d(nn.Layer):
"""
Dilation2d
"""
def __init__(self, m=1):
super(Dilation2d, self).__init__()
self.m = m
self.pad = [m, m, m, m]
def forward(self, x):
batch_size, c, h, w = x.shape
x_pad = F.pad(x, pad=self.pad, mode='constant', value=-1e9)
channel = nn.functional.unfold(x_pad, 2 * self.m + 1, strides=1, paddings=0).reshape([batch_size, c, -1, h, w])
result = paddle.max(channel, axis=2)
return result
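# Illustrative example (not part of the module): Dilation2d/Erosion2d implement
# sliding-window max/min pooling, i.e. grayscale morphology with a
# (2m+1)x(2m+1) square structuring element.
def _morphology_demo():
    seed = np.zeros([1, 1, 5, 5], dtype='float32')
    seed[0, 0, 2, 2] = 1.0
    x = paddle.to_tensor(seed)
    dilated = Dilation2d(m=1)(x)  # the single pixel grows into a 3x3 block
    eroded = Erosion2d(m=1)(dilated)  # erosion shrinks the block back to one pixel
    return dilated, eroded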
def param2stroke(param, H, W, meta_brushes):
"""
param2stroke
"""
b = param.shape[0]
param_list = paddle.split(param, 8, axis=1)
x0, y0, w, h, theta = [item.squeeze(-1) for item in param_list[:5]]
sin_theta = paddle.sin(math.pi * theta)
cos_theta = paddle.cos(math.pi * theta)
index = paddle.full((b, ), -1, dtype='int64').numpy()
index[(h > w).numpy()] = 0
index[(h <= w).numpy()] = 1
meta_brushes_resize = F.interpolate(meta_brushes, (H, W)).numpy()
brush = paddle.to_tensor(meta_brushes_resize[index])
warp_00 = cos_theta / w
warp_01 = sin_theta * H / (W * w)
warp_02 = (1 - 2 * x0) * cos_theta / w + (1 - 2 * y0) * sin_theta * H / (W * w)
warp_10 = -sin_theta * W / (H * h)
warp_11 = cos_theta / h
warp_12 = (1 - 2 * y0) * cos_theta / h - (1 - 2 * x0) * sin_theta * W / (H * h)
warp_0 = paddle.stack([warp_00, warp_01, warp_02], axis=1)
warp_1 = paddle.stack([warp_10, warp_11, warp_12], axis=1)
warp = paddle.stack([warp_0, warp_1], axis=1)
grid = nn.functional.affine_grid(warp, [b, 3, H, W])  # note: paddle's and torch's defaults here are opposite
brush = nn.functional.grid_sample(brush, grid)
return brush
def read_img(img_path, img_type='RGB', h=None, w=None):
"""
read img
"""
img = Image.open(img_path).convert(img_type)
if h is not None and w is not None:
img = img.resize((w, h), resample=Image.NEAREST)
img = np.array(img)
if img.ndim == 2:
img = np.expand_dims(img, axis=-1)
img = img.transpose((2, 0, 1))
img = paddle.to_tensor(img).unsqueeze(0).astype('float32') / 255.
return img
def preprocess(img, w=512, h=512):
image = cv2.resize(img, (w, h), interpolation=cv2.INTER_NEAREST)  # pass interpolation by keyword; the third positional arg is dst
image = image.transpose((2, 0, 1))
image = paddle.to_tensor(image).unsqueeze(0).astype('float32') / 255.
return image
def totensor(img):
image = img.transpose((2, 0, 1))
image = paddle.to_tensor(image).unsqueeze(0).astype('float32') / 255.
return image
def pad(img, H, W):
b, c, h, w = img.shape
pad_h = (H - h) // 2
pad_w = (W - w) // 2
remainder_h = (H - h) % 2
remainder_w = (W - w) % 2
expand_img = nn.functional.pad(img, [pad_w, pad_w + remainder_w, pad_h, pad_h + remainder_h])
return expand_img
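# Illustrative check (not part of the module): pad() centers the image inside an
# H x W canvas, putting any odd remainder on the right/bottom side.
def _pad_demo():
    img = paddle.rand([1, 3, 30, 45])
    return pad(img, 64, 64).shape  # [1, 3, 64, 64]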
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.frombuffer(data, np.uint8)  # np.fromstring is deprecated for binary data
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
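# Illustrative round trip (not part of the module): base64_to_cv2 inverts the
# cv2_to_base64 helper used in the README examples; with PNG encoding the
# round trip is lossless.
def _base64_roundtrip_check(image):
    encoded = base64.b64encode(cv2.imencode('.png', image)[1].tobytes()).decode('utf8')
    decoded = base64_to_cv2(encoded)
    return (decoded == image).all()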
# psgan
|Module Name|psgan|
| :--- | :---: |
|Category|image - makeup transfer|
|Network|PSGAN|
|Dataset|-|
|Fine-tuning supported or not|No|
|Module Size|121MB|
|Latest update date|2021-12-07|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/157190651-595b6964-97c5-4b0b-ac0a-c30c8520a972.png" width = "30%" hspace='10'/>
<br />
Input content image
<br />
<img src="https://user-images.githubusercontent.com/22424850/145003966-c5c2e6ad-d306-4eaf-89a2-965a3dbf3675.jpg" width = "30%" hspace='10'/>
<br />
Input makeup image
<br />
<img src="https://user-images.githubusercontent.com/22424850/157190800-b1dd79d4-0eca-4b36-b091-6fcd2c00dcf6.png" width = "30%" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- PSGAN performs makeup transfer: it transfers the makeup of an arbitrary reference image onto a makeup-free source image, a technique required by many portrait beautification applications.
- For more details, please refer to: [PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer](https://arxiv.org/pdf/1909.06956.pdf)
## II. Installation
- ### 1. Environmental Dependence
- ppgan
- dlib
- ### 2. Installation
- ```shell
$ hub install psgan
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1. Command line Prediction
- ```shell
# Read from a file
$ hub run psgan --content "/PATH/TO/IMAGE" --style "/PATH/TO/IMAGE1"
```
- For more information on calling the makeup transfer module from the command line, see: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2. Prediction Code Example
- ```python
import cv2
import paddlehub as hub

module = hub.Module(name="psgan")
content = cv2.imread("/PATH/TO/IMAGE")
style = cv2.imread("/PATH/TO/IMAGE1")
results = module.makeup_transfer(images=[{'content': content, 'style': style}], output_dir='./transfer_result', use_gpu=True)
```
- ### 3. API
- ```python
makeup_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True)
```
- Makeup style transfer API.
- **Parameters**
- images (list\[dict\]): image data, each element is a dict with the keys content and style:
- content (numpy.ndarray): image to be transferred, shape \[H, W, C\], BGR format;<br/>
- style (numpy.ndarray): makeup reference image, shape \[H, W, C\], BGR format;<br/>
- paths (list\[dict\]): image paths, each element is a dict with the keys content and style:
- content (str): path to the image to be transferred;<br/>
- style (str): path to the makeup reference image;<br/>
- output\_dir (str): directory in which to save the results;<br/>
- use\_gpu (bool): whether to use GPU;<br/>
- visualization (bool): whether to save the results to a local folder
## IV. Server Deployment
- PaddleHub Serving can deploy an online makeup transfer service.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m psgan
```
- This deploys an online makeup transfer service API, with the default port number 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA\_VISIBLE\_DEVICES environment variable before starting the service; otherwise it need not be set.
- ### Step 2: Send a predictive request
- With the server configured, the following lines of code send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')  # tostring() is deprecated

# Send an HTTP request
data = {'images':[{'content': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE")), 'style': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE1"))}]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/psgan"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results
print(r.json()["results"])
```
## V. Release Note
* 1.0.0
First release
- ```shell
$ hub install psgan==1.0.0
```
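# Training configuration for the PSGAN makeup transfer model, loaded through
# ppgan's get_config (see the psgan model code below).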
epochs: 100
output_dir: tmp
checkpoints_dir: checkpoints
find_unused_parameters: True
model:
name: MakeupModel
generator:
name: GeneratorPSGANAttention
conv_dim: 64
repeat_num: 6
discriminator:
name: NLayerDiscriminator
ndf: 64
n_layers: 3
input_nc: 3
norm_type: spectral
cycle_criterion:
name: L1Loss
idt_criterion:
name: L1Loss
loss_weight: 0.5
l1_criterion:
name: L1Loss
l2_criterion:
name: MSELoss
gan_criterion:
name: GANLoss
gan_mode: lsgan
dataset:
train:
name: MakeupDataset
trans_size: 256
dataroot: data/MT-Dataset
cls_list: [non-makeup, makeup]
phase: train
test:
name: MakeupDataset
trans_size: 256
dataroot: data/MT-Dataset
cls_list: [non-makeup, makeup]
phase: test
lr_scheduler:
name: LinearDecay
learning_rate: 0.0002
start_epoch: 100
decay_epochs: 100
# will get from real dataset
iters_per_epoch: 1
optimizer:
optimizer_G:
name: Adam
net_names:
- netG
beta1: 0.5
optimizer_DA:
name: Adam
net_names:
- netD_A
beta1: 0.5
optimizer_DB:
name: Adam
net_names:
- netD_B
beta1: 0.5
log_config:
interval: 10
visiual_interval: 500
snapshot_config:
interval: 5
# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import sys
from pathlib import Path
import numpy as np
import paddle
import paddle.vision.transforms as T
import ppgan.faceutils as futils
from paddle.utils.download import get_weights_path_from_url
from PIL import Image
from ppgan.models.builder import build_model
from ppgan.utils.config import get_config
from ppgan.utils.filesystem import load
from ppgan.utils.options import parse_args
from ppgan.utils.preprocess import *
def toImage(net_output):
img = net_output.squeeze(0).transpose((1, 2, 0)).numpy() # [1,c,h,w]->[h,w,c]
img = (img * 255.0).clip(0, 255)
img = np.uint8(img)
img = Image.fromarray(img, mode='RGB')
return img
PS_WEIGHT_URL = "https://paddlegan.bj.bcebos.com/models/psgan_weight.pdparams"
class PreProcess:
def __init__(self, config, need_parser=True):
self.img_size = 256
self.transform = T.Compose([
T.Resize(size=256),
T.ToTensor(),
])
self.norm = T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
if need_parser:
self.face_parser = futils.mask.FaceParser()
self.up_ratio = 0.6 / 0.85
self.down_ratio = 0.2 / 0.85
self.width_ratio = 0.2 / 0.85
def __call__(self, image):
face = futils.dlib.detect(image)
if not face:
return
face_on_image = face[0]
image, face, crop_face = futils.dlib.crop(image, face_on_image, self.up_ratio, self.down_ratio,
self.width_ratio)
np_image = np.array(image)
image_trans = self.transform(np_image)
mask = self.face_parser.parse(np.float32(cv2.resize(np_image, (512, 512))))
mask = cv2.resize(mask.numpy(), (self.img_size, self.img_size), interpolation=cv2.INTER_NEAREST)
mask = mask.astype(np.uint8)
mask_tensor = paddle.to_tensor(mask)
lms = futils.dlib.landmarks(image, face) / image_trans.shape[:2] * self.img_size
lms = lms.round()
P_np = generate_P_from_lmks(lms, self.img_size, self.img_size, self.img_size)
mask_aug = generate_mask_aug(mask, lms)
return [self.norm(image_trans).unsqueeze(0),
np.float32(mask_aug),
np.float32(P_np),
np.float32(mask)], face_on_image, crop_face
class PostProcess:
def __init__(self, config):
self.denoise = True
self.img_size = 256
def __call__(self, source: Image, result: Image):
        # TODO: Refactor -> name, resize
source = np.array(source)
result = np.array(result)
height, width = source.shape[:2]
small_source = cv2.resize(source, (self.img_size, self.img_size))
        laplacian_diff = source.astype(np.float64) - cv2.resize(small_source, (width, height)).astype(np.float64)
result = (cv2.resize(result, (width, height)) + laplacian_diff).round().clip(0, 255).astype(np.uint8)
if self.denoise:
result = cv2.fastNlMeansDenoisingColored(result)
result = Image.fromarray(result).convert('RGB')
return result
class Inference:
def __init__(self, config, model_path=''):
self.model = build_model(config.model)
self.preprocess = PreProcess(config)
self.model_path = model_path
    def transfer(self, source, reference, with_face=False):
        # preprocess returns None when no face is detected, so check before unpacking
        source_result = self.preprocess(source)
        reference_result = self.preprocess(reference)
        if source_result is None or reference_result is None:
            if with_face:
                return None, None
            return
        source_input, face, crop_face = source_result  # keep the source face for cropping
        reference_input, _, _ = reference_result
        consis_mask = np.float32(calculate_consis_mask(source_input[1], reference_input[1]))
        consis_mask = paddle.to_tensor(np.expand_dims(consis_mask, 0))
        for i in range(1, len(source_input) - 1):
            source_input[i] = paddle.to_tensor(np.expand_dims(source_input[i], 0))
        for i in range(1, len(reference_input) - 1):
            reference_input[i] = paddle.to_tensor(np.expand_dims(reference_input[i], 0))
input_data = {
'image_A': source_input[0],
'image_B': reference_input[0],
'mask_A_aug': source_input[1],
'mask_B_aug': reference_input[1],
'P_A': source_input[2],
'P_B': reference_input[2],
'consis_mask': consis_mask
}
state_dicts = load(self.model_path)
for net_name, net in self.model.nets.items():
net.set_state_dict(state_dicts[net_name])
result, _ = self.model.test(input_data)
        # min-max normalize the output to [0, 1]
        min_, max_ = result.min(), result.max()
        result = paddle.divide(result - min_, max_ - min_ + 1e-5)
img = toImage(result)
if with_face:
return img, crop_face
return img
class PSGANPredictor:
def __init__(self, cfg, weight_path):
self.cfg = cfg
self.weight_path = weight_path
def run(self, source, reference):
source = Image.fromarray(source)
reference = Image.fromarray(reference)
inference = Inference(self.cfg, self.weight_path)
postprocess = PostProcess(self.cfg)
        # Transfer makeup from the reference image to the source image.
image, face = inference.transfer(source, reference, with_face=True)
source_crop = source.crop((face.left(), face.top(), face.right(), face.bottom()))
image = postprocess(source_crop, image)
image = np.array(image)
return image
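A minimal sketch of driving PSGANPredictor end to end, mirroring how the PaddleHub module code below wires it up (the config and weight file names are assumptions):

```python
# Illustrative end-to-end use of PSGANPredictor, following the module code below.
import cv2
from ppgan.utils.config import get_config

cfg = get_config('makeup.yaml')                        # the config shown above
predictor = PSGANPredictor(cfg, 'psgan_weight.pdparams')
source = cv2.imread('face.jpg')[:, :, ::-1]            # BGR -> RGB, as in the module
reference = cv2.imread('makeup.jpg')[:, :, ::-1]
result = predictor.run(source, reference)              # RGB ndarray
cv2.imwrite('output.png', result[:, :, ::-1])          # back to BGR for saving
```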
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import copy
import os
import cv2
import numpy as np
import paddle
from ppgan.utils.config import get_config
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .model import PSGANPredictor
from .util import base64_to_cv2
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(name="psgan", type="CV/gan", author="paddlepaddle", author_email="", summary="", version="1.0.0")
class psgan:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "psgan_weight.pdparams")
cfg = get_config(os.path.join(self.directory, 'makeup.yaml'))
self.network = PSGANPredictor(cfg, self.pretrained_model)
def makeup_transfer(self,
images=None,
paths=None,
output_dir='./transfer_result/',
use_gpu=False,
visualization=True):
        '''
        Transfer the makeup style from a reference image to a source image.

        images (list[dict]): data of images, each element is a dict with keys content and style:
          - content (numpy.ndarray): image to apply makeup to, shape [H, W, C], BGR format;
          - style (numpy.ndarray): makeup reference image, shape [H, W, C], BGR format;
        paths (list[dict]): paths to images, each element is a dict with keys content and style:
          - content (str): path to the image to apply makeup to;
          - style (str): path to the makeup reference image;
        output_dir: the dir to save the results
        use_gpu: if True, use gpu to perform the computation, otherwise cpu.
        visualization: if True, save results in output_dir.
        '''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
        if images is None and paths is None:
            print('No image provided. Please input an image or an image path.')
            return
        if images is not None:
for image_dict in images:
content_img = image_dict['content'][:, :, ::-1]
style_img = image_dict['style'][:, :, ::-1]
results.append(self.network.run(content_img, style_img))
        if paths is not None:
for path_dict in paths:
content_img = cv2.imread(path_dict['content'])[:, :, ::-1]
style_img = cv2.imread(path_dict['style'])[:, :, ::-1]
results.append(self.network.run(content_img, style_img))
        if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
self.makeup_transfer(
paths=[{
'content': self.args.content,
'style': self.args.style
}],
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = copy.deepcopy(images)
for image in images_decode:
image['content'] = base64_to_cv2(image['content'])
image['style'] = base64_to_cv2(image['style'])
results = self.makeup_transfer(images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='transfer_result', help='output directory for saving result.')
        self.arg_config_group.add_argument('--visualization', action='store_true', help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--content', type=str, help="path to content image.")
self.arg_input_group.add_argument('--style', type=str, help="path to style image.")
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
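A quick, illustrative round trip for the helper above (the `cv2_to_base64` encoder mirrors the one shown in the README serving examples; the dummy image is an assumption for the sketch):

```python
import base64
import cv2
import numpy as np

def cv2_to_base64(image):
    # Client-side encoder used in the serving examples.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

img = np.zeros((64, 64, 3), np.uint8)         # dummy BGR image
restored = base64_to_cv2(cv2_to_base64(img))  # back to an ndarray
assert restored.shape == img.shape
```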
# prnet
|Module Name|prnet|
| :--- | :---: |
|Category|image - image generation|
|Network|PRN|
|Dataset|300W-LP|
|Fine-tuning supported or not|No|
|Module Size|154MB|
|Latest update date|2021-11-20|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
  - Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/157190651-595b6964-97c5-4b0b-ac0a-c30c8520a972.png" width = "450" hspace='10'/>
<br />
Input source image
<br />
<img src="https://user-images.githubusercontent.com/22424850/142995636-dd5e1f0a-3810-4ae9-b680-4b2482858001.jpg" width = "450" height = "300" hspace='10'/>
<br />
Input reference image
<br />
<img src="https://user-images.githubusercontent.com/22424850/157205282-89c9ace9-5fec-4112-ace1-edaebbfc30f8.png" width = "450" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
  - PRNet proposes a method that jointly reconstructs the 3D facial structure and performs dense face alignment, and can be applied to tasks such as face alignment, 3D face reconstruction, and face texture editing. This module provides the face texture editing capability, which transfers the facial texture of a reference image onto a source image.
  - For more details, please refer to: [Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network](https://arxiv.org/pdf/1803.07835.pdf)
## II. Installation
- ### 1. Environmental Dependence
- dlib
- scikit-image
- ### 2. Installation
- ```shell
$ hub install prnet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
  | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1. Command line Prediction
- ```shell
$ hub run prnet --source "/PATH/TO/IMAGE1" --ref "/PATH/TO/IMAGE2"
```
- Invoke face texture editing through the command line. For more information, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2. Prediction Code Example
- ```python
import paddlehub as hub
module = hub.Module(name="prnet")
source_path = "/PATH/TO/IMAGE1"
ref_path = "/PATH/TO/IMAGE2"
module.face_swap(paths=[{'source':source_path, 'ref':ref_path}],
mode = 0,
output_dir='./swapping_result/',
use_gpu=True,
visualization=True)
```
- ### 3. API
- ```python
def face_swap(self,
images=None,
paths=None,
mode = 0,
output_dir='./swapping_result/',
use_gpu=False,
visualization=True):
```
- Face texture editing API, which transfers the facial texture of the reference image onto the source image. An in-memory usage sketch follows the parameter list below.
- **Parameters**
  - images (list\[dict\]): image data, each element is a dict with keys source and ref:
    - source (numpy.ndarray): image to be edited, shape \[H, W, C\], BGR format;<br/>
    - ref (numpy.ndarray): reference image, shape \[H, W, C\], BGR format;<br/>
  - paths (list\[dict\]): image paths, each element is a dict with keys source and ref:
    - source (str): path to the image to be edited;<br/>
    - ref (str): path to the reference image;<br/>
  - mode (int): option, 0 to modify only part of the texture (e.g. the eyes), 1 to swap the whole face;<br/>
  - output\_dir (str): directory to save the results;<br/>
  - use\_gpu (bool): whether to use GPU;<br/>
  - visualization (bool): whether to save the results to a local folder
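- Besides the path-based call shown earlier, face_swap also accepts decoded images directly. A minimal sketch (the image paths are placeholders):

  - ```python
    import cv2
    import paddlehub as hub

    module = hub.Module(name="prnet")
    # Pass decoded ndarrays (BGR, as read by cv2) instead of file paths.
    source_img = cv2.imread("/PATH/TO/IMAGE1")
    ref_img = cv2.imread("/PATH/TO/IMAGE2")
    results = module.face_swap(images=[{'source': source_img, 'ref': ref_img}],
                               mode=1,  # 1 = swap the whole face
                               output_dir='./swapping_result/',
                               use_gpu=False,
                               visualization=True)
    ```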
## IV. Server Deployment
- PaddleHub Serving can deploy an online face texture editing service.
- ### Step 1: Start PaddleHub Serving
  - Run the startup command:
- ```shell
$ hub serving start -m prnet
```
  - This completes the deployment of an online face texture editing service API, with the default port number 8866.
  - **NOTE:** If GPU is used for prediction, the CUDA_VISIBLE_DEVICES environment variable needs to be set before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
  - With the server configured, the following lines of code send a prediction request and print the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')
# Send an HTTP request
data = {'images':[{'source': cv2_to_base64(cv2.imread("/PATH/TO/IMAGE1")), 'ref':cv2_to_base64(cv2.imread("/PATH/TO/IMAGE2"))}]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/prnet/"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# Print the prediction results
print(r.json()["results"])
```
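- The returned results are nested lists (one per input pair), produced by ndarray.tolist() on the server side. A minimal sketch of turning the first one back into an image file (the channel flip mirrors how the module itself saves results):

  - ```python
    import cv2
    import numpy as np

    # Convert the first result back to an ndarray and save it as BGR.
    img = np.array(r.json()["results"][0], dtype=np.uint8)
    cv2.imwrite("result.png", img[:, :, ::-1])
    ```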
## V. Release Note
* 1.0.0
  First release
- ```shell
$ hub install prnet==1.0.0
```
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from time import time
import numpy as np
import paddle
from skimage.io import imread
from skimage.io import imsave
from skimage.transform import estimate_transform
from skimage.transform import warp
from .predictor import PosPrediction
class PRN:
''' Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network
Args:
is_dlib(bool, optional): If true, dlib is used for detecting faces.
        prefix(str, optional): If running from another folder, an absolute path is needed to load the data.
'''
def __init__(self, is_dlib=False, prefix='.'):
# resolution of input and output image size.
self.resolution_inp = 256
self.resolution_op = 256
#---- load detectors
if is_dlib:
import dlib
detector_path = os.path.join(prefix, 'Data/net-data/mmod_human_face_detector.dat')
self.face_detector = dlib.cnn_face_detection_model_v1(detector_path)
#---- load PRN
params = paddle.load(os.path.join(prefix, "pd_model/model.pdparams"))
self.pos_predictor = PosPrediction(params, self.resolution_inp, self.resolution_op)
# uv file
self.uv_kpt_ind = np.loadtxt(os.path.join(prefix,
'Data/uv-data/uv_kpt_ind.txt')).astype(np.int32) # 2 x 68 get kpt
self.face_ind = np.loadtxt(os.path.join(prefix, 'Data/uv-data/face_ind.txt')).astype(
np.int32) # get valid vertices in the pos map
self.triangles = np.loadtxt(os.path.join(prefix, 'Data/uv-data/triangles.txt')).astype(np.int32) # ntri x 3
self.uv_coords = self.generate_uv_coords()
def generate_uv_coords(self):
resolution = self.resolution_op
uv_coords = np.meshgrid(range(resolution), range(resolution))
uv_coords = np.transpose(np.array(uv_coords), [1, 2, 0])
uv_coords = np.reshape(uv_coords, [resolution**2, -1])
uv_coords = uv_coords[self.face_ind, :]
uv_coords = np.hstack((uv_coords[:, :2], np.zeros([uv_coords.shape[0], 1])))
return uv_coords
def dlib_detect(self, image):
return self.face_detector(image, 1)
def net_forward(self, image):
        ''' The core of our method: regress the position map of a given image.
Args:
image: (256,256,3) array. value range: 0~1
Returns:
pos: the 3D position map. (256, 256, 3) array.
'''
return self.pos_predictor.predict(image)
def process(self, input, image_info=None):
''' process image with crop operation.
Args:
input: (h,w,3) array or str(image path). image value range:1~255.
image_info(optional): the bounding box information of faces. if None, will use dlib to detect face.
Returns:
pos: the 3D position map. (256, 256, 3).
'''
if isinstance(input, str):
try:
image = imread(input)
except IOError:
print("error opening file: ", input)
return None
else:
image = input
if image.ndim < 3:
image = np.tile(image[:, :, np.newaxis], [1, 1, 3])
if image_info is not None:
if np.max(image_info.shape) > 4: # key points to get bounding box
kpt = image_info
if kpt.shape[0] > 3:
kpt = kpt.T
left = np.min(kpt[0, :])
right = np.max(kpt[0, :])
top = np.min(kpt[1, :])
bottom = np.max(kpt[1, :])
else: # bounding box
bbox = image_info
left = bbox[0]
right = bbox[1]
top = bbox[2]
bottom = bbox[3]
old_size = (right - left + bottom - top) / 2
center = np.array([right - (right - left) / 2.0, bottom - (bottom - top) / 2.0])
size = int(old_size * 1.6)
else:
detected_faces = self.dlib_detect(image)
if len(detected_faces) == 0:
print('warning: no detected face')
return None
d = detected_faces[
0].rect ## only use the first detected face (assume that each input image only contains one face)
left = d.left()
right = d.right()
top = d.top()
bottom = d.bottom()
old_size = (right - left + bottom - top) / 2
center = np.array([right - (right - left) / 2.0, bottom - (bottom - top) / 2.0 + old_size * 0.14])
size = int(old_size * 1.58)
# crop image
src_pts = np.array([[center[0] - size / 2, center[1] - size / 2], [center[0] - size / 2, center[1] + size / 2],
[center[0] + size / 2, center[1] - size / 2]])
DST_PTS = np.array([[0, 0], [0, self.resolution_inp - 1], [self.resolution_inp - 1, 0]])
tform = estimate_transform('similarity', src_pts, DST_PTS)
image = image / 255.
cropped_image = warp(image, tform.inverse, output_shape=(self.resolution_inp, self.resolution_inp))
cropped_pos = self.net_forward(cropped_image)
# restore
cropped_vertices = np.reshape(cropped_pos, [-1, 3]).T
z = cropped_vertices[2, :].copy() / tform.params[0, 0]
cropped_vertices[2, :] = 1
vertices = np.dot(np.linalg.inv(tform.params), cropped_vertices)
vertices = np.vstack((vertices[:2, :], z))
pos = np.reshape(vertices.T, [self.resolution_op, self.resolution_op, 3])
return pos
def get_landmarks(self, pos):
'''
Args:
pos: the 3D position map. shape = (256, 256, 3).
Returns:
kpt: 68 3D landmarks. shape = (68, 3).
'''
kpt = pos[self.uv_kpt_ind[1, :], self.uv_kpt_ind[0, :], :]
return kpt
def get_vertices(self, pos):
'''
Args:
pos: the 3D position map. shape = (256, 256, 3).
Returns:
vertices: the vertices(point cloud). shape = (num of points, 3). n is about 40K here.
'''
all_vertices = np.reshape(pos, [self.resolution_op**2, -1])
vertices = all_vertices[self.face_ind, :]
return vertices
def get_colors_from_texture(self, texture):
'''
Args:
texture: the texture map. shape = (256, 256, 3).
Returns:
colors: the corresponding colors of vertices. shape = (num of points, 3). n is 45128 here.
'''
all_colors = np.reshape(texture, [self.resolution_op**2, -1])
colors = all_colors[self.face_ind, :]
return colors
def get_colors(self, image, vertices):
        '''
        Args:
            image: the input image, shape = (h, w, 3).
            vertices: the vertices (point cloud), shape = (num of points, 3).
        Returns:
            colors: the corresponding colors of the vertices. shape = (num of points, 3). n is 45128 here.
        '''
[h, w, _] = image.shape
vertices[:, 0] = np.minimum(np.maximum(vertices[:, 0], 0), w - 1) # x
vertices[:, 1] = np.minimum(np.maximum(vertices[:, 1], 0), h - 1) # y
ind = np.round(vertices).astype(np.int32)
colors = image[ind[:, 1], ind[:, 0], :] # n x 3
return colors
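A minimal sketch of the typical PRN pipeline, following the calls that texture_editing makes in the module code below (the image path and is_dlib flag are assumptions):

```python
# Illustrative PRN pipeline; 'face.jpg' is a placeholder, and the Data/ and
# pd_model/ folders are assumed to sit under prefix.
from skimage.io import imread

prn = PRN(is_dlib=True, prefix='.')
image = imread('face.jpg')            # (h, w, 3), values 0~255
pos = prn.process(image)              # (256, 256, 3) position map, or None
if pos is not None:
    vertices = prn.get_vertices(pos)  # point cloud, about 45K vertices
    kpt = prn.get_landmarks(pos)      # (68, 3) landmarks
    colors = prn.get_colors(image, vertices)
```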
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import copy
import os
import cv2
import numpy as np
import paddle
from skimage.io import imread
from skimage.transform import rescale
from skimage.transform import resize
import paddlehub as hub
from .api import PRN
from .predictor import PosPrediction
from .util import base64_to_cv2
from .utils.render import render_texture
from paddlehub.module.module import moduleinfo
from paddlehub.module.module import runnable
from paddlehub.module.module import serving
@moduleinfo(name="prnet", type="CV/", author="paddlepaddle", author_email="", summary="", version="1.0.0")
class PRNet:
def __init__(self):
self.pretrained_model = os.path.join(self.directory, "pd_model/model.pdparams")
self.network = PRN(is_dlib=True, prefix=self.directory)
def face_swap(self,
images: list = None,
paths: list = None,
mode: int = 0,
output_dir: str = './swapping_result/',
use_gpu: bool = False,
visualization: bool = True):
        '''
        Transfer the facial texture of a reference image onto a source image.

        images (list[dict]): data of images, each element is a dict:
          - source (numpy.ndarray): input image, shape is [H, W, C], BGR format;
          - ref (numpy.ndarray): reference image, shape is [H, W, C], BGR format;
        paths (list[dict]): paths to images, each element is a dict:
          - source (str): path to input image;
          - ref (str): path to reference image;
        mode (int): option, 0 to change part of the texture, 1 to change the whole face
        output_dir (str): the dir to save the results
        use_gpu (bool): if True, use gpu to perform the computation, otherwise cpu.
        visualization (bool): if True, save results in output_dir.
        '''
results = []
paddle.disable_static()
place = 'gpu:0' if use_gpu else 'cpu'
place = paddle.set_device(place)
        if images is None and paths is None:
            print('No image provided. Please input an image or an image path.')
            return
        if images is not None:
for image_dict in images:
source_img = image_dict['source'][:, :, ::-1]
ref_img = image_dict['ref'][:, :, ::-1]
results.append(self.texture_editing(source_img, ref_img, mode))
        if paths is not None:
for path_dict in paths:
source_img = cv2.imread(path_dict['source'])[:, :, ::-1]
ref_img = cv2.imread(path_dict['ref'])[:, :, ::-1]
results.append(self.texture_editing(source_img, ref_img, mode))
        if visualization:
if not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
for i, out in enumerate(results):
cv2.imwrite(os.path.join(output_dir, 'output_{}.png'.format(i)), out[:, :, ::-1])
return results
def texture_editing(self, source_img, ref_img, mode):
# read image
image = source_img
[h, w, _] = image.shape
prn = self.network
#-- 1. 3d reconstruction -> get texture.
pos = prn.process(image)
vertices = prn.get_vertices(pos)
image = image / 255.
texture = cv2.remap(
image,
pos[:, :, :2].astype(np.float32),
None,
interpolation=cv2.INTER_NEAREST,
borderMode=cv2.BORDER_CONSTANT,
borderValue=(0))
        #-- 2. Texture Editing
        # change part of the texture (e.g. for data augmentation / selfie editing; the eyes are modified here as an example)
        if mode == 0:
# load eye mask
uv_face_eye = imread(os.path.join(self.directory, 'Data/uv-data/uv_face_eyes.png'), as_gray=True) / 255.
uv_face = imread(os.path.join(self.directory, 'Data/uv-data/uv_face.png'), as_gray=True) / 255.
eye_mask = (abs(uv_face_eye - uv_face) > 0).astype(np.float32)
# texture from another image or a processed texture
ref_image = ref_img
ref_pos = prn.process(ref_image)
ref_image = ref_image / 255.
ref_texture = cv2.remap(
ref_image,
ref_pos[:, :, :2].astype(np.float32),
None,
interpolation=cv2.INTER_NEAREST,
borderMode=cv2.BORDER_CONSTANT,
borderValue=(0))
# modify texture
new_texture = texture * (1 - eye_mask[:, :, np.newaxis]) + ref_texture * eye_mask[:, :, np.newaxis]
# change whole face(face swap)
        elif mode == 1:
# texture from another image or a processed texture
ref_image = ref_img
ref_pos = prn.process(ref_image)
ref_image = ref_image / 255.
ref_texture = cv2.remap(
ref_image,
ref_pos[:, :, :2].astype(np.float32),
None,
interpolation=cv2.INTER_NEAREST,
borderMode=cv2.BORDER_CONSTANT,
borderValue=(0))
ref_vertices = prn.get_vertices(ref_pos)
new_texture = ref_texture #(texture + ref_texture)/2.
else:
print('Wrong Mode! Mode should be 0 or 1.')
exit()
#-- 3. remap to input image.(render)
vis_colors = np.ones((vertices.shape[0], 1))
face_mask = render_texture(vertices.T, vis_colors.T, prn.triangles.T, h, w, c=1)
face_mask = np.squeeze(face_mask > 0).astype(np.float32)
new_colors = prn.get_colors_from_texture(new_texture)
new_image = render_texture(vertices.T, new_colors.T, prn.triangles.T, h, w, c=3)
new_image = image * (1 - face_mask[:, :, np.newaxis]) + new_image * face_mask[:, :, np.newaxis]
        # Poisson editing to blend the swapped face into the source image
vis_ind = np.argwhere(face_mask > 0)
vis_min = np.min(vis_ind, 0)
vis_max = np.max(vis_ind, 0)
center = (int((vis_min[1] + vis_max[1]) / 2 + 0.5), int((vis_min[0] + vis_max[0]) / 2 + 0.5))
output = cv2.seamlessClone((new_image * 255).astype(np.uint8), (image * 255).astype(np.uint8),
(face_mask * 255).astype(np.uint8), center, cv2.NORMAL_CLONE)
return output
@runnable
def run_cmd(self, argvs: list):
"""
Run as a command.
"""
self.parser = argparse.ArgumentParser(
description="Run the {} module.".format(self.name),
prog='hub run {}'.format(self.name),
usage='%(prog)s',
add_help=True)
self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
self.arg_config_group = self.parser.add_argument_group(
title="Config options", description="Run configuration for controlling module behavior, not required.")
self.add_module_config_arg()
self.add_module_input_arg()
self.args = self.parser.parse_args(argvs)
self.face_swap(
paths=[{
'source': self.args.source,
'ref': self.args.ref
}],
mode=self.args.mode,
output_dir=self.args.output_dir,
use_gpu=self.args.use_gpu,
visualization=self.args.visualization)
@serving
def serving_method(self, images, **kwargs):
"""
Run as a service.
"""
images_decode = copy.deepcopy(images)
for image in images_decode:
image['source'] = base64_to_cv2(image['source'])
image['ref'] = base64_to_cv2(image['ref'])
results = self.face_swap(images_decode, **kwargs)
tolist = [result.tolist() for result in results]
return tolist
def add_module_config_arg(self):
"""
Add the command config options.
"""
self.arg_config_group.add_argument(
'--mode', type=int, default=0, help='process option, 0 for part texture, 1 for whole face.', choices=[0, 1])
self.arg_config_group.add_argument('--use_gpu', action='store_true', help="use GPU or not")
self.arg_config_group.add_argument(
'--output_dir', type=str, default='swapping_result', help='output directory for saving result.')
        self.arg_config_group.add_argument('--visualization', action='store_true', help='save results or not.')
def add_module_input_arg(self):
"""
Add the command input options.
"""
self.arg_input_group.add_argument('--source', type=str, help="path to source image.")
self.arg_input_group.add_argument('--ref', type=str, help="path to reference image.")
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import paddle
from .pd_model.x2paddle_code import TFModel
class PosPrediction():
def __init__(self, params, resolution_inp=256, resolution_op=256):
# -- hyper settings
self.resolution_inp = resolution_inp
self.resolution_op = resolution_op
self.MaxPos = resolution_inp * 1.1
# network type
self.network = TFModel()
self.network.set_dict(params, use_structured_name=False)
self.network.eval()
def predict(self, image):
paddle.disable_static()
image_tensor = paddle.to_tensor(image[np.newaxis, :, :, :], dtype='float32')
pos = self.network(image_tensor)
pos = pos.numpy()
pos = np.squeeze(pos)
return pos * self.MaxPos
    def predict_batch(self, images):
        # Batch version of predict: images has shape (n, 256, 256, 3), values in 0~1.
        # (The original body used a leftover TensorFlow session that does not exist here.)
        images_tensor = paddle.to_tensor(images, dtype='float32')
        pos = self.network(images_tensor).numpy()
        return pos * self.MaxPos
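A minimal sketch of calling the predictor directly (the weight path is an assumption, and a random array stands in for a cropped 256x256 face scaled to 0~1, per the net_forward docstring above):

```python
# Illustrative direct use of PosPrediction.
import numpy as np
import paddle

params = paddle.load('pd_model/model.pdparams')  # placeholder path
predictor = PosPrediction(params)
image = np.random.rand(256, 256, 3).astype(np.float32)
pos = predictor.predict(image)                   # (256, 256, 3) position map
```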
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import cv2
import numpy as np
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
    # decode as a 3-channel BGR image; the module slices [:, :, ::-1] downstream
    data = np.frombuffer(data, np.uint8)
    data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cv2
import numpy as np
end_list = np.array([17, 22, 27, 42, 48, 31, 36, 68], dtype=np.int32) - 1
def plot_kpt(image, kpt):
''' Draw 68 key points
Args:
image: the input image
kpt: (68, 3).
'''
image = image.copy()
kpt = np.round(kpt).astype(np.int32)
for i in range(kpt.shape[0]):
st = kpt[i, :2]
image = cv2.circle(image, (st[0], st[1]), 1, (0, 0, 255), 2)
if i in end_list:
continue
ed = kpt[i + 1, :2]
image = cv2.line(image, (st[0], st[1]), (ed[0], ed[1]), (255, 255, 255), 1)
return image
def plot_vertices(image, vertices):
image = image.copy()
vertices = np.round(vertices).astype(np.int32)
for i in range(0, vertices.shape[0], 2):
st = vertices[i, :2]
image = cv2.circle(image, (st[0], st[1]), 1, (255, 0, 0), -1)
return image
def plot_pose_box(image, P, kpt, color=(0, 255, 0), line_width=2):
''' Draw a 3D box as annotation of pose. Ref:https://github.com/yinguobing/head-pose-estimation/blob/master/pose_estimator.py
Args:
image: the input image
P: (3, 4). Affine Camera Matrix.
kpt: (68, 3).
'''
image = image.copy()
point_3d = []
rear_size = 90
rear_depth = 0
point_3d.append((-rear_size, -rear_size, rear_depth))
point_3d.append((-rear_size, rear_size, rear_depth))
point_3d.append((rear_size, rear_size, rear_depth))
point_3d.append((rear_size, -rear_size, rear_depth))
point_3d.append((-rear_size, -rear_size, rear_depth))
front_size = 105
front_depth = 110
point_3d.append((-front_size, -front_size, front_depth))
point_3d.append((-front_size, front_size, front_depth))
point_3d.append((front_size, front_size, front_depth))
point_3d.append((front_size, -front_size, front_depth))
point_3d.append((-front_size, -front_size, front_depth))
    point_3d = np.array(point_3d, dtype=np.float64).reshape(-1, 3)
# Map to 2d image points
point_3d_homo = np.hstack((point_3d, np.ones([point_3d.shape[0], 1]))) #n x 4
point_2d = point_3d_homo.dot(P.T)[:, :2]
point_2d[:, :2] = point_2d[:, :2] - np.mean(point_2d[:4, :2], 0) + np.mean(kpt[:27, :2], 0)
point_2d = np.int32(point_2d.reshape(-1, 2))
# Draw all the lines
cv2.polylines(image, [point_2d], True, color, line_width, cv2.LINE_AA)
cv2.line(image, tuple(point_2d[1]), tuple(point_2d[6]), color, line_width, cv2.LINE_AA)
cv2.line(image, tuple(point_2d[2]), tuple(point_2d[7]), color, line_width, cv2.LINE_AA)
cv2.line(image, tuple(point_2d[3]), tuple(point_2d[8]), color, line_width, cv2.LINE_AA)
return image
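A minimal sketch tying these plotting helpers to the PRN outputs (prn, image, and pos are assumed to come from the PRN pipeline sketched earlier):

```python
# Illustrative use of the plotting helpers.
kpt = prn.get_landmarks(pos)              # (68, 3) landmarks
vis_kpt = plot_kpt(image, kpt)            # points joined, breaking across end_list
vertices = prn.get_vertices(pos)
vis_pts = plot_vertices(image, vertices)  # every other vertex drawn as a dot
```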
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from scipy import ndimage
from .utils.render import render_texture
from .utils.render import vis_of_vertices
def get_visibility(vertices, triangles, h, w):
triangles = triangles.T
vertices_vis = vis_of_vertices(vertices.T, triangles, h, w)
vertices_vis = vertices_vis.astype(bool)
for k in range(2):
tri_vis = vertices_vis[triangles[0, :]] | vertices_vis[triangles[1, :]] | vertices_vis[triangles[2, :]]
ind = triangles[:, tri_vis]
vertices_vis[ind] = True
# for k in range(2):
# tri_vis = vertices_vis[triangles[0,:]] & vertices_vis[triangles[1,:]] & vertices_vis[triangles[2,:]]
# ind = triangles[:, tri_vis]
# vertices_vis[ind] = True
vertices_vis = vertices_vis.astype(np.float32) #1 for visible and 0 for non-visible
return vertices_vis
def get_uv_mask(vertices_vis, triangles, uv_coords, h, w, resolution):
triangles = triangles.T
vertices_vis = vertices_vis.astype(np.float32)
uv_mask = render_texture(uv_coords.T, vertices_vis[np.newaxis, :], triangles, resolution, resolution, 1)
uv_mask = np.squeeze(uv_mask > 0)
uv_mask = ndimage.binary_closing(uv_mask)
uv_mask = ndimage.binary_erosion(uv_mask, structure=np.ones((4, 4)))
uv_mask = ndimage.binary_closing(uv_mask)
uv_mask = ndimage.binary_erosion(uv_mask, structure=np.ones((4, 4)))
uv_mask = ndimage.binary_erosion(uv_mask, structure=np.ones((4, 4)))
uv_mask = ndimage.binary_erosion(uv_mask, structure=np.ones((4, 4)))
uv_mask = uv_mask.astype(np.float32)
return np.squeeze(uv_mask)
def get_depth_image(vertices, triangles, h, w, isShow=False):
z = vertices[:, 2:]
if isShow:
z = z / max(z)
depth_image = render_texture(vertices.T, z.T, triangles.T, h, w, 1)
return np.squeeze(depth_image)
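A minimal sketch of how these estimation helpers compose (prn, image, and vertices are assumed from the PRN pipeline sketched earlier):

```python
# Illustrative composition of the visibility / UV-mask / depth helpers.
h, w = image.shape[:2]
vertices_vis = get_visibility(vertices, prn.triangles, h, w)  # 1 = visible vertex
uv_mask = get_uv_mask(vertices_vis, prn.triangles, prn.uv_coords, h, w, 256)
depth = get_depth_image(vertices, prn.triangles, h, w)
```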
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
def frontalize(vertices):
canonical_vertices = np.load('Data/uv-data/canonical_vertices.npy')
vertices_homo = np.hstack((vertices, np.ones([vertices.shape[0], 1]))) #n x 4
P = np.linalg.lstsq(vertices_homo, canonical_vertices)[0].T # Affine matrix. 3 x 4
front_vertices = vertices_homo.dot(P.T)
return front_vertices