- UGATIT is a model for style transfer. This module can be used to transfer a face image to cartoon style. For more information, please refer to [UGATIT-Paddle Project](https://github.com/miraiwk/UGATIT-paddle).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install UGATIT_100w
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="UGATIT_100w")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- batch_size (int): batch size;
- visualization (bool): whether to save the results as image files;
- output_dir (str): save path of output images;
- **NOTE:** Choose either `paths` or `images` to provide the input data.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m UGATIT_100w
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
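- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/UGATIT_100w`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/UGATIT_100w"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```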
- AnimeGAN V1 is a style transfer model that can transfer an image into the Hayao Miyazaki cartoon style. For more information, please refer to [AnimeGAN V1 Project](https://github.com/TachibanaYoshino/AnimeGAN).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install animegan_v1_hayao_60
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v1_hayao_60")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- output_dir (str): save path of output images;
- visualization (bool): whether to save the results as image files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **NOTE:** Choose either `paths` or `images` to provide the input data.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m animegan_v1_hayao_60
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
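- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/animegan_v1_hayao_60`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v1_hayao_60"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```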
- AnimeGAN V2 is a style transfer model that can transfer an image into the Hayao Miyazaki cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install animegan_v2_hayao_64
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_hayao_64")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- output_dir (str): save path of output images;
- visualization (bool): whether to save the results as image files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **NOTE:** Choose either `paths` or `images` to provide the input data.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m animegan_v2_hayao_64
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
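- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/animegan_v2_hayao_64`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_hayao_64"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```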
- AnimeGAN V2 is a style transfer model that can transfer an image into the Hayao Miyazaki cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install animegan_v2_hayao_99
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_hayao_99")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- output_dir (str): save path of output images;
- visualization (bool): whether to save the results as image files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **NOTE:** Choose either `paths` or `images` to provide the input data.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m animegan_v2_hayao_99
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
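- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/animegan_v2_hayao_99`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_hayao_99"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```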
- AnimeGAN V2 is a style transfer model that can transfer an image into the Paprika cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install animegan_v2_paprika_74
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_paprika_74")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- output_dir (str): save path of output images;
- visualization (bool): whether to save the results as image files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m animegan_v2_paprika_74
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
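- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/animegan_v2_paprika_74`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_74"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```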
- AnimeGAN V2 is a style transfer model that can transfer an image into the Paprika cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install animegan_v2_paprika_98
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_paprika_98")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- output_dir (str): save path of output images;
- visualization (bool): whether to save the results as image files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **NOTE:** Choose either `paths` or `images` to provide the input data.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m animegan_v2_paprika_98
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
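- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/animegan_v2_paprika_98`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_98"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```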
- AnimeGAN V2 is a style transfer model that can transfer an image into the Makoto Shinkai cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install animegan_v2_shinkai_33
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_shinkai_33")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- output_dir (str): save path of output images;
- visualization (bool): whether to save the results as image files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **NOTE:** Choose either `paths` or `images` to provide the input data.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m animegan_v2_shinkai_33
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
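- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/animegan_v2_shinkai_33`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_shinkai_33"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```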
- AnimeGAN V2 is a style transfer model that can transfer an image into the Makoto Shinkai cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install animegan_v2_shinkai_53
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Prediction Code Example
```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_shinkai_53")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\], BGR;
- paths (list\[str\]): image paths;
- output_dir (str): save path of output images;
- visualization (bool): whether to save the results as image files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **NOTE:** Choose either `paths` or `images` to provide the input data.
- **Return**
- res (list\[numpy.ndarray\]): result list, with ndarray.shape in the format \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
```shell
$ hub serving start -m animegan_v2_shinkai_53
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If you want to use GPU for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following code to send a prediction request and obtain the result:
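- The snippet below is a minimal client sketch; it assumes the service is running locally on the default port 8866 and that PaddleHub Serving exposes this module at `/predict/animegan_v2_shinkai_53`. Adjust the URL and image path to your setup.
```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode a BGR ndarray read by OpenCV as a base64 string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


# Build the request payload from a local image.
data = {'images': [cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_shinkai_53"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

# Print the prediction results returned by the service.
print(r.json()["results"])
```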
- StyleProNet is a lightweight style transfer model with fast inference. This module is based on StyleProNet, trained with MS-COCO as the content dataset and WikiArt as the style dataset, and can be used for style transfer. For more information, please refer to [StyleProNet](https://arxiv.org/abs/2003.07694).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub]()
- ### 2、Installation
```shell
$ hub install stylepro_artistic
```
- In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
## III.Module API Prediction
- ### 1、Command line Prediction
```shell
$ hub run stylepro_artistic --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)