Commit 5a17edc5 authored by: H haoyuying

add readme_en for first 300 models

Parent be73bfb6
@@ -22,7 +22,7 @@
<img src="https://user-images.githubusercontent.com/35907364/136653401-6644bd46-d280-4c15-8d48-680b7eb152cb.png" width = "300" height = "450" hspace='10'/> <img src="https://user-images.githubusercontent.com/35907364/136648959-40493c9c-08ec-46cd-a2a2-5e2038dcbfa7.png" width = "300" height = "450" hspace='10'/>
</p>
- user_guided_colorization is a colorization model based on ''Real-Time User-Guided Image Colorization with Learned Deep Priors", which colorizes an image using user-supplied color hint blocks.
- user_guided_colorization is a colorization model based on "Real-Time User-Guided Image Colorization with Learned Deep Priors", which colorizes an image using user-supplied color hint blocks.
## II. Installation
# user_guided_colorization
|Module Name|user_guided_colorization|
| :--- | :---: |
|Category |Image editing|
|Network| Local and Global Hints Network |
|Dataset|ILSVRC 2012|
|Fine-tuning supported or not|Yes|
|Module Size|131MB|
|Data indicators|-|
|Latest update date |2021-02-26|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/136653401-6644bd46-d280-4c15-8d48-680b7eb152cb.png" width = "300" height = "450" hspace='10'/> <img src="https://user-images.githubusercontent.com/35907364/136648959-40493c9c-08ec-46cd-a2a2-5e2038dcbfa7.png" width = "300" height = "450" hspace='10'/>
</p>
- ### Module Introduction
- user_guided_colorization is a colorization model based on "Real-Time User-Guided Image Colorization with Learned Deep Priors". The model colorizes a grayscale image using pre-supplied color hint blocks.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install user_guided_colorization
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Command line Prediction
```shell
$ hub run user_guided_colorization --input_path "/PATH/TO/IMAGE"
```
- ### 2、Prediction Code Example
```python
import paddle
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='user_guided_colorization')
model.set_config(prob=0.1)
result = model.predict(images=['/PATH/TO/IMAGE'])
```
- ### 3.Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start using the user_guided_colorization model to fine-tune on datasets such as [Canvas](../../docs/reference/datasets.md#class-hubdatasetsCanvas) by executing `python train.py`.
- Steps:
- Step1: Define the data preprocessing method
- ```python
import paddlehub.vision.transforms as T
transform = T.Compose([T.Resize((256, 256), interpolation='NEAREST'),
T.RandomPaddingCrop(crop_size=176),
T.RGB2LAB()], to_rgb=True)
```
- `transforms`: the data augmentation module defines a number of data preprocessing methods. Users can replace these preprocessing methods according to their needs.
- Step2: Download the dataset
- ```python
from paddlehub.datasets import Canvas
color_set = Canvas(transform=transform, mode='train')
```
* `transforms`: data preprocessing methods.
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* `hub.datasets.Canvas()`: the dataset will be automatically downloaded from the network and decompressed to the `$HOME/.paddlehub/dataset` directory.
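- A held-out split can be built the same way and used later as the evaluation dataset (a minimal sketch reusing the `transform` defined in Step1; `mode='test'` is one of the options listed above):
- ```python
eval_set = Canvas(transform=transform, mode='test')  # held-out split that could be passed as eval_dataset in Step4
```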
- Step3: Load the pre-trained model
- ```python
model = hub.Module(name='user_guided_colorization', load_checkpoint=None)
model.set_config(classification=True, prob=1)
```
* `name`: model name.
* `load_checkpoint`: path of the checkpoint to load; if it is None, the pre-trained parameters provided with the module are loaded.
* `classification`: The model is trained in two stages. At the beginning, `classification` is set to True for training the shallow layers of the network. In the later stage of training, set `classification` to False to train the output layer of the network (see the sketch after this list).
* `prob`: The probability that a prior color block is not added to each input image; the default is 1, i.e. no prior color blocks are added. For example, when `prob` is set to 0.9, the probability that a picture receives exactly two prior color blocks is (1-0.9) * (1-0.9) * 0.9 = 0.009.
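- The two training stages described above can be configured as follows (a minimal sketch; the stage-2 `prob` value of 0.125 and the point at which to switch stages are illustrative assumptions, not values prescribed by the module):
- ```python
# Stage 1: train the shallow layers with the classification branch, without prior color blocks
model.set_config(classification=True, prob=1)
# ... run trainer.train(...) as shown in Step4 ...

# Stage 2: switch to the regression output layer and start sampling prior color blocks
model.set_config(classification=False, prob=0.125)
# ... continue training, e.g. from the checkpoint saved in Stage 1 ...
```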
- Step4: Optimization strategy
```python
optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters())
trainer = Trainer(model, optimizer, checkpoint_dir='img_colorization_ckpt_cls_1')
trainer.train(color_set, epochs=201, batch_size=25, eval_dataset=color_set, log_interval=10, save_interval=10)
```
- Run configuration
- `Trainer` mainly controls the fine-tuning process, including the following controllable parameters:
* `model`: Optimized model;
* `optimizer`: Optimizer selection;
* `use_vdl`: Whether to use vdl to visualize the training process;
* `checkpoint_dir`: The storage address of the model parameters;
* `compare_metrics`: The measurement index of the optimal model;
- `trainer.train` mainly controls the specific training process, including the following controllable parameters:
* `train_dataset`: Training dataset;
* `epochs`: Epochs of training process;
* `batch_size`: Batch size;
* `num_workers`: Number of workers.
* `eval_dataset`: Validation dataset;
* `log_interval`:The interval for printing logs;
* `save_interval`: The interval for saving model parameters.
- Model prediction
- When fine-tuning is completed, the model with the best performance on the validation set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
- ```python
import paddle
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='user_guided_colorization', load_checkpoint='/PATH/TO/CHECKPOINT')
model.set_config(prob=0.1)
result = model.predict(images=['/PATH/TO/IMAGE'])
```
- **NOTE:** If you want to get the oil painting style, please download the parameter file [Canvas colorization](https://paddlehub.bj.bcebos.com/dygraph/models/canvas_rc.pdparams)
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of colorization.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m user_guided_colorization
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
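- For example (a minimal sketch; the device index 0 is only an illustration):
- ```shell
$ export CUDA_VISIBLE_DEVICES=0
$ hub serving start -m user_guided_colorization
```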
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
```python
import requests
import json
import cv2
import base64
import numpy as np
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.fromstring(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# Send an HTTP request
org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images':[cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/user_guided_colorization"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
data = base64_to_cv2(r.json()["results"]['data'][0]['fake_reg'])
cv2.imwrite('color.png', data)
```
## V. Release Note
* 1.0.0
First release
# dcscn
|Module Name|dcscn|
| :--- | :---: |
|Category |Image editing|
|Network|dcscn|
|Dataset|DIV2k|
|Fine-tuning supported or not|No|
|Module Size|260KB|
|Data indicators|PSNR37.63|
|Latest update date|2021-02-26|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/133558583-0b7049db-ed1f-4a16-8676-f2141fcb3dee.png" width = "450" height = "300" hspace='10'/> <img src="https://user-images.githubusercontent.com/35907364/130899031-a6f8c58a-5cb7-4105-b990-8cca5ae15368.png" width = "450" height = "300" hspace='10'/>
</p>
- ### Module Introduction
- DCSCN is a super-resolution model based on 'Fast and Accurate Image Super Resolution by Deep CNN with Skip Connection and Network in Network'. The model uses a residual structure and skip connections to extract local and global features, and uses a parallel 1x1 convolutional network to learn detailed features and improve performance. This model provides super-resolution results with a scale factor of x2.
- For more information, please refer to: [dcscn](https://github.com/jiny2001/dcscn-super-resolution)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install dcscn
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run dcscn --input_path "/PATH/TO/IMAGE"
```
- ### 2、Prediction Code Example
```python
import cv2
import paddlehub as hub
sr_model = hub.Module(name='dcscn')
im = cv2.imread('/PATH/TO/IMAGE').astype('float32')
res = sr_model.reconstruct(images=[im], visualization=True)
print(res[0]['data'])
sr_model.save_inference_model()
```
- ### 3、API
- ```python
def reconstruct(self,
images=None,
paths=None,
use_gpu=False,
visualization=False,
output_dir="dcscn_output")
```
- Prediction API.
- **Parameter**
* images (list\[numpy.ndarray\]): image data,ndarray.shape is in the format \[H, W, C\],BGR;
* paths (list\[str\]): image path;
* use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**;
* visualization (bool): Whether to save the recognition results as picture files;
* output\_dir (str): save path of images, "dcscn_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result, save_path is '' if no image is saved.
* data (numpy.ndarray): result of super resolution.
- ```python
def save_inference_model(self,
dirname='dcscn_save_model',
model_filename=None,
params_filename=None,
combined=False)
```
- Save the model to the specified path.
- **Parameters**
* dirname: Save path.
* model\_filename: model file name, default is \_\_model\_\_;
* params\_filename: parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True);
* combined: Whether to save the parameters to a unified file.
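- For example, a minimal sketch that saves the model and parameters into a combined file (reusing the `sr_model` object from the prediction example above):
- ```python
sr_model.save_inference_model(dirname='dcscn_save_model', combined=True)
```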
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of super resolution.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m dcscn
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
```python
import requests
import json
import base64
import cv2
import numpy as np
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.fromstring(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images':[cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/dcscn"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
sr = np.expand_dims(cv2.cvtColor(base64_to_cv2(r.json()["results"][0]['data']), cv2.COLOR_BGR2GRAY), axis=2)
shape =sr.shape
org_im = cv2.cvtColor(org_im, cv2.COLOR_BGR2YUV)
uv = cv2.resize(org_im[...,1:], (shape[1], shape[0]), interpolation=cv2.INTER_CUBIC)
combine_im = cv2.cvtColor(np.concatenate((sr, uv), axis=2), cv2.COLOR_YUV2BGR)
cv2.imwrite('dcscn_X2.png', combine_im)
print("save image as dcscn_X2.png")
```
## V. Release Note
- 1.0.0
First release
# falsr_a
|Module Name|falsr_a|
| :--- | :---: |
|Category |Image editing|
|Network |falsr_a|
|Dataset|DIV2k|
|Fine-tuning supported or not|No|
|Module Size |8.9MB|
|Data indicators|PSNR37.82|
|Latest update date|2021-02-26|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/133558583-0b7049db-ed1f-4a16-8676-f2141fcb3dee.png" width = "450" height = "300" hspace='10'/> <img src="https://user-images.githubusercontent.com/35907364/130899031-a6f8c58a-5cb7-4105-b990-8cca5ae15368.png" width = "450" height = "300" hspace='10'/>
</p>
- ### Module Introduction
- falsr_a is a lightweight super-resolution model based on `Accurate and Lightweight Super-Resolution with Neural Architecture Search`. The model uses a multi-objective approach to handle the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super-resolution results with a scale factor of x2.
- For more information, please refer to:[falsr_a](https://github.com/xiaomi-automl/FALSR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install falsr_a
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run falsr_a --input_path "/PATH/TO/IMAGE"
```
- ### 2、Prediction Code Example
```python
import cv2
import paddlehub as hub
sr_model = hub.Module(name='falsr_a')
im = cv2.imread('/PATH/TO/IMAGE').astype('float32')
res = sr_model.reconstruct(images=[im], visualization=True)
print(res[0]['data'])
sr_model.save_inference_model()
```
- ### 3、API
- ```python
def reconstruct(self,
images=None,
paths=None,
use_gpu=False,
visualization=False,
output_dir="falsr_a_output")
```
- Prediction API.
- **Parameter**
* images (list\[numpy.ndarray\]): image data,ndarray.shape is in the format \[H, W, C\],BGR;
* paths (list\[str\]): image path;
* use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**;
* visualization (bool): Whether to save the recognition results as picture files;
* output\_dir (str): save path of images, "falsr_a_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result, save_path is '' if no image is saved.
* data (numpy.ndarray): result of super resolution.
- ```python
def save_inference_model(self,
dirname='falsr_a_save_model',
model_filename=None,
params_filename=None,
combined=False)
```
- Save the model to the specified path.
- **Parameters**
* dirname: Save path.
* model\_filename: model file name, default is \_\_model\_\_;
* params\_filename: parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True);
* combined: Whether to save the parameters to a unified file.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of super resolution.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m falsr_a
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
```python
import requests
import json
import base64
import cv2
import numpy as np
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.fromstring(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images':[cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/falsr_a"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
sr = base64_to_cv2(r.json()["results"][0]['data'])
cv2.imwrite('falsr_a_X2.png', sr)
print("save image as falsr_a_X2.png")
```
## V. Release Note
- 1.0.0
First release
# falsr_b
|Module Name|falsr_b|
| :--- | :---: |
|Category |Image editing|
|Network |falsr_b|
|Dataset|DIV2k|
|Fine-tuning supported or not|No|
|Module Size |4MB|
|Data indicators|PSNR37.61|
|Latest update date|2021-02-26|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/133558583-0b7049db-ed1f-4a16-8676-f2141fcb3dee.png" width = "450" height = "300" hspace='10'/> <img src="https://user-images.githubusercontent.com/35907364/130899031-a6f8c58a-5cb7-4105-b990-8cca5ae15368.png" width = "450" height = "300" hspace='10'/>
</p>
- ### Module Introduction
- falsr_b is a lightweight super-resolution model based on `Accurate and Lightweight Super-Resolution with Neural Architecture Search`. The model uses a multi-objective approach to handle the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super-resolution results with a scale factor of x2.
- For more information, please refer to:[falsr_b](https://github.com/xiaomi-automl/FALSR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install falsr_b
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run falsr_b --input_path "/PATH/TO/IMAGE"
```
- ### 2、Prediction Code Example
```python
import cv2
import paddlehub as hub
sr_model = hub.Module(name='falsr_b')
im = cv2.imread('/PATH/TO/IMAGE').astype('float32')
res = sr_model.reconstruct(images=[im], visualization=True)
print(res[0]['data'])
sr_model.save_inference_model()
```
- ### 3、API
- ```python
def reconstruct(self,
images=None,
paths=None,
use_gpu=False,
visualization=False,
output_dir="falsr_b_output")
```
- Prediction API.
- **Parameter**
* images (list\[numpy.ndarray\]): image data,ndarray.shape is in the format \[H, W, C\],BGR;
* paths (list\[str\]): image path;
* use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**;
* visualization (bool): Whether to save the recognition results as picture files;
* output\_dir (str): save path of images, "falsr_b_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result, save_path is '' if no image is saved.
* data (numpy.ndarray): result of super resolution.
- ```python
def save_inference_model(self,
dirname='falsr_b_save_model',
model_filename=None,
params_filename=None,
combined=False)
```
- Save the model to the specified path.
- **Parameters**
* dirname: Save path.
* model\_filename: model file name, default is \_\_model\_\_;
* params\_filename: parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True);
* combined: Whether to save the parameters to a unified file.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of super resolution.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m falsr_b
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
```python
import requests
import json
import base64
import cv2
import numpy as np
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.fromstring(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images':[cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/falsr_b"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
sr = base64_to_cv2(r.json()["results"][0]['data'])
cv2.imwrite('falsr_b_X2.png', sr)
print("save image as falsr_b_X2.png")
```
## V. Release Note
- 1.0.0
First release
# attgan_celeba
|Module Name|attgan_celeba|
| :--- | :---: |
|Category |image generation|
|Network |AttGAN|
|Dataset|Celeba|
|Fine-tuning supported or not |No|
|Module Size |167MB|
|Latest update date|2021-02-26|
|Data indicators |-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/137855667-43c5c40c-28f5-45d8-accc-028e185b988f.JPG" width=1200><br/>
The image attributes are: original image, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Gender, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Aged<br/>
</p>
- ### Module Introduction
- AttGAN is a generative adversarial network that uses classification loss and reconstruction loss to train the network. The PaddleHub Module is trained on the CelebA dataset and currently supports attributes of "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged".
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.5.2
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install attgan_celeba==1.0.0
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md).
## III. Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run attgan_celeba --image "/PATH/TO/IMAGE" --style "target_attribute"
```
- **Parameters**
- image: image path
- style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options.
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
attgan = hub.Module(name="attgan_celeba")
test_img_path = ["/PATH/TO/IMAGE"]
trans_attr = ["Bangs"]
# set input dict
input_dict = {"image": test_img_path, "style": trans_attr}
# execute predict and print the result
results = attgan.generate(data=input_dict)
print(results)
```
- ### 3、API
- ```python
def generate(data)
```
- Style transfer API.
- **Parameter**
- data(list[dict]): each element in the list is dict and each field is:
- image (list\[str\]): Each element in the list is the path of the image to be converted.
- style (list\[str\]): Each element in the list is a string specifying the face attribute to be converted.
- **Return**
- res (list\[str\]): Save path of the result.
## IV. Release Note
- 1.0.0
First release
# cyclegan_cityscapes
|Module Name|cyclegan_cityscapes|
| :--- | :---: |
|Category |image generation|
|Network |CycleGAN|
|Dataset|Cityscapes|
|Fine-tuning supported or not |No|
|Module Size |33MB|
|Latest update date |2021-02-26|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/137839740-4be4cf40-816f-401e-a73f-6cda037041dd.png" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://user-images.githubusercontent.com/35907364/137839777-89fc705b-f0d7-4a93-94e2-76c0d3c5a0b0.png" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- CycleGAN belongs to Generative Adversarial Networks (GANs). Unlike traditional GANs that can only generate pictures in one direction, CycleGAN can translate images between two domains in both directions. The PaddleHub Module is trained on the Cityscapes dataset, and supports the conversion from real images to semantic segmentation results, as well as the conversion from semantic segmentation results to real images.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.1.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install cyclegan_cityscapes==1.0.0
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run cyclegan_cityscapes --input_path "/PATH/TO/IMAGE"
```
- **Parameters**
- input_path: image path
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
cyclegan = hub.Module(name="cyclegan_cityscapes")
test_img_path = "/PATH/TO/IMAGE"
# set input dict
input_dict = {"image": [test_img_path]}
# execute predict and print the result
results = cyclegan.generate(data=input_dict)
print(results)
```
- ### 3、API
- ```python
def generate(data)
```
- Style transfer API.
- **Parameters**
- data(list[dict]): each element in the list is dict and each field is:
- image (list\[str\]): image path.
- **Return**
- res (list\[str\]): The list of style transfer results, where each element is dict and each field is:
- origin: original input path.
- generated: save path of images.
## IV. Release Note
* 1.0.0
First release
# stargan_celeba
|Module Name|stargan_celeba|
| :--- | :---: |
|Category|image generation|
|Network|STGAN|
|Dataset|Celeba|
|Fine-tuning supported or not|No|
|Module Size |33MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/137855887-f0abca76-2735-4275-b7ad-242decf31bb3.PNG" width=600><br/>
The image attributes are: original image, Black_Hair, Blond_Hair, Brown_Hair, Male, Aged<br/>
</p>
- ### Module Introduction
- STGAN takes the difference between the original attribute and the target attribute as input, and proposes STUs (Selective Transfer Units) to select and modify features of the encoder. The PaddleHub Module is trained on the CelebA dataset and currently supports attributes of "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged".
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.5.2
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install stargan_celeba==1.0.0
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run stargan_celeba --image "/PATH/TO/IMAGE" --style "target_attribute"
```
- **Parameters**
- image: image path
- style: Specify the attributes to be converted. The options are "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged". You can choose one of the options.
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
stargan = hub.Module(name="stargan_celeba")
test_img_path = ["/PATH/TO/IMAGE"]
trans_attr = ["Blond_Hair"]
# set input dict
input_dict = {"image": test_img_path, "style": trans_attr}
# execute predict and print the result
results = stargan.generate(data=input_dict)
print(results)
```
- ### 3、API
- ```python
def generate(data)
```
- Style transfer API.
- **Parameter**
- data(list[dict]): each element in the list is dict and each field is:
- image (list\[str\]): Each element in the list is the path of the image to be converted.
- style (list\[str\]): Each element in the list is a string specifying the face attribute to be converted.
- **Return**
- res (list\[str\]): Save path of the result.
## IV. Release Note
- 1.0.0
First release
# stgan_celeba
|Module Name|stgan_celeba|
| :--- | :---: |
|Category|image generation|
|Network|STGAN|
|Dataset|Celeba|
|Fine-tuning supported or not|No|
|Module Size |287MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/137856070-2a43facd-cda0-473f-8935-e61f5dd583d8.JPG" width=1200><br/>
The image attributes are: original image, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Gender, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Aged<br/>
</p>
- ### Module Introduction
- STGAN takes the difference between the original attribute and the target attribute as input, and proposes STUs (Selective Transfer Units) to select and modify features of the encoder. The PaddleHub Module is trained on the CelebA dataset and currently supports attributes of "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged".
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.5.2
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install stgan_celeba==1.0.0
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run stgan_celeba --image "/PATH/TO/IMAGE" --info "original_attributes" --style "target_attribute"
```
- **Parameters**
- image: image path
- info: attributes of the original image; the gender ("Male" or "Female") must be included. The other options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". For example, if the input picture is a girl with black hair, fill in "Female,Black_Hair".
- style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options.
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
stgan = hub.Module(name="stgan_celeba")
test_img_path = ["/PATH/TO/IMAGE"]
org_info = ["Female,Black_Hair"]
trans_attr = ["Bangs"]
# set input dict
input_dict = {"image": test_img_path, "style": trans_attr, "info": org_info}
# execute predict and print the result
results = stgan.generate(data=input_dict)
print(results)
```
- ### 3、API
- ```python
def generate(data)
```
- Style transfer API.
- **Parameter**
- data(list[dict]): each element in the list is dict and each field is:
- image (list\[str\]): Each element in the list is the path of the image to be converted.
- style (list\[str\]): Each element in the list is a string specifying the face attribute to be converted.
- info (list\[str\]): Represents the face attributes of the original image. Different attributes are separated by commas.
- **Return**
- res (list\[str\]): Save path of the result.
## IV. Release Note
- 1.0.0
First release
# ID_Photo_GEN
|Module Name |ID_Photo_GEN|
| :--- | :---: |
|Category|image generation|
|Network|HRNet_W18|
|Dataset |-|
|Fine-tuning supported or not |No|
|Module Size|28KB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://img-blog.csdnimg.cn/20201224163307901.jpg" >
</p>
- ### Module Introduction
- This model is based on face_landmark_localization and FCN_HRNet_W18_Face_Seg. It can generate ID photos with white, red, or blue backgrounds.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install ID_Photo_GEN
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Prediction Code Example
- ```python
import cv2
import paddlehub as hub
model = hub.Module(name='ID_Photo_GEN')
result = model.Photo_GEN(
images=[cv2.imread('/PATH/TO/IMAGE')],
paths=None,
batch_size=1,
output_dir='output',
visualization=True,
use_gpu=False)
```
- ### 2、API
- ```python
def Photo_GEN(
images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False,
use_gpu=False):
```
- Prediction API, generating ID photos.
- **Parameter**
* images (list[np.ndarray]): image data, ndarray.shape is in the format [H, W, C], BGR;
* paths (list[str]): image path
* batch_size (int): batch size
* output_dir (str): save path of images, output by default.
* visualization (bool): Whether to save the recognition results as picture files;
* use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
**NOTE:** Choose one of `paths` and `images` to provide input data.
- **Return**
* results (list[dict{"write":np.ndarray,"blue":np.ndarray,"red":np.ndarray}]): The list of generation results.
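- The generated photos can be written out directly (a minimal sketch reusing `result` from the prediction example above; the dictionary keys follow the return description, where "write" presumably holds the white-background photo):
- ```python
import cv2
cv2.imwrite('id_photo_white.jpg', result[0]['write'])
cv2.imwrite('id_photo_blue.jpg', result[0]['blue'])
cv2.imwrite('id_photo_red.jpg', result[0]['red'])
```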
## IV. Release Note
- 1.0.0
First release
# UGATIT_83w
|Module Name|UGATIT_83w|
| :--- | :---: |
|Category|Image editing|
|Network |U-GAT-IT|
|Dataset|selfie2anime|
|Fine-tuning supported or not|No|
|Module Size|41MB|
|Latest update date |2021-02-26|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/136651638-33cac040-edad-41ac-a9ce-7c0e678d8c52.jpg" width = "400" height = "400" hspace='10'/> <img src="https://user-images.githubusercontent.com/35907364/136651644-dd1d3836-99b3-40f0-8543-37de18f9cfd9.jpg" width = "400" height = "400" hspace='10'/>
</p>
- ### Module Introduction
- UGATIT can transfer the input face image into the anime style.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.2
- paddlehub >= 1.8.0
- ### 2、Installation
- ```shell
$ hub install UGATIT_83w
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Prediction Code Example
- ```python
import cv2
import paddlehub as hub
model = hub.Module(name='UGATIT_83w', use_gpu=False)
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(
self,
images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False
)
```
- Style transfer API, convert the input face image into anime style.
- **Parameters**
* images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
* paths (list\[str\]): image path,default is None;
* batch\_size (int): batch size, default is 1;
* visualization (bool): Whether to save the recognition results as picture files, default is False;
* output\_dir (str): save path of images, `output` by default.
**NOTE:** Choose one of `paths` and `images` to provide data.
- **Return**
- res (list\[numpy.ndarray\]): result, ndarray.shape is in the format [H, W, C].
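- Since the API returns the stylized images as arrays, they can also be saved manually instead of relying on `visualization` (a minimal sketch reusing `result` from the prediction example above; the output filename is arbitrary):
- ```python
import cv2
cv2.imwrite('ugatit_anime.jpg', result[0])
```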
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of Style transfer task.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m UGATIT_83w
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/UGATIT_83w"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V. Release Note
- 1.0.0
First release
# UGATIT_92w
|Module Name|UGATIT_92w|
| :--- | :---: |
|Category|image editing|
|Network |U-GAT-IT|
|Dataset|selfie2anime|
|Fine-tuning supported or not|No|
|Module Size|41MB|
|Latest update date |2021-02-26|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/136651638-33cac040-edad-41ac-a9ce-7c0e678d8c52.jpg" width = "400" height = "400" hspace='10'/> <img src="https://user-images.githubusercontent.com/35907364/136653047-f00c30fb-521f-486f-8247-8d8f63649473.jpg" width = "400" height = "400" hspace='10'/>
</p>
- ### Module Introduction
- UGATIT can transfer the input face image into the anime style.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.2
- paddlehub >= 1.8.0
- ### 2、Installation
- ```shell
$ hub install UGATIT_92w
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Prediction Code Example
- ```python
import cv2
import paddlehub as hub
model = hub.Module(name='UGATIT_92w', use_gpu=False)
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(
self,
images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False
)
```
- Style transfer API, convert the input face image into anime style.
- **Parameters**
* images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
* paths (list\[str\]): image path,default is None;
* batch\_size (int): batch size, default is 1;
* visualization (bool): Whether to save the recognition results as picture files, default is False;
* output\_dir (str): save path of images, `output` by default.
**NOTE:** Choose one of `paths` and `images` to provide input data.
- **Return**
- res (list\[numpy.ndarray\]): result, ndarray.shape is in the format [H, W, C].
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of Style transfer task.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m UGATIT_92w
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/UGATIT_92w"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V. Release Note
- 1.0.0
First release
# animegan_v2_paprika_54
|Module Name |animegan_v2_paprika_54|
| :--- | :---: |
|Category |image generation|
|Network|AnimeGAN|
|Dataset|Paprika|
|Fine-tuning supported or not|No|
|Module Size|9.4MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/6574669d87b24bab9627c6e33896528b4a0bf5af1cd84ca29655d68719f2d551" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 image style transfer model. The model can convert the input image into the Paprika anime style. The model weights are converted from the [AnimeGAN V2 official repo](https://github.com/TachibanaYoshino/AnimeGAN).
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_paprika_54
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_paprika_54")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list\[str\]): image path;
- output\_dir (str): save path of images, `output` by default;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size, default is 32;
- max\_size (int): maximum size, default is 1024.
**NOTE:** Choose one of `paths` and `images` to provide input data.
- **Return**
- res (list\[numpy.ndarray\]): The list of style transfer results,ndarray.shape is in the format [H, W, C].
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_paprika_54
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_54"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V. Release Note
- 1.0.0
First release.
- 1.0.1
Support PaddleHub 2.0.
- 1.0.2
Remove the batch_size parameter.
# animegan_v2_paprika_97
|Module Name |animegan_v2_paprika_97|
| :--- | :---: |
|Category |image generation|
|Network|AnimeGAN|
|Dataset|Paprika|
|Fine-tuning supported or not|No|
|Module Size|9.7MB|
|Latest update date|2021-07-30|
|Data indicators|-|
## I. Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/35907364/136652269-48b8c902-3a2b-46b7-a9f2-d500097bbb0e.jpg" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://user-images.githubusercontent.com/35907364/136652280-7e9ebfd2-8a45-4b5b-b3ac-f107770525c4.jpg" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 image style transfer model. The model can convert the input image into the Paprika anime style. The model weights are converted from the [AnimeGAN V2 official repo](https://github.com/TachibanaYoshino/AnimeGAN).
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_paprika_97
```
- In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
| [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III. Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_paprika_97")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list\[str\]): image path;
- output\_dir (str): save path of images, `output` by default;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size, default is 32;
- max\_size (int): maximum size, default is 1024.
**NOTE:** Choose one of `paths` and `images` to provide input data.
- **Return**
- res (list\[numpy.ndarray\]): The list of style transfer results,ndarray.shape is in the format [H, W, C].
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_paprika_97
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_97"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V. Release Note
- 1.0.0
First release.
- 1.0.1
Support PaddleHub 2.0.
- 1.0.2
Remove the batch_size parameter.