diff --git a/modules/image/Image_gan/style_transfer/Photo2Cartoon/README_en.md b/modules/image/Image_gan/style_transfer/Photo2Cartoon/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d909df96a758dfa1215fb7651e490d0fb5cf11a
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/Photo2Cartoon/README_en.md
@@ -0,0 +1,95 @@
+# Photo2Cartoon
+
+|Module Name|Photo2Cartoon|
+| :--- | :---: |
+|Category|image generation|
+|Network|U-GAT-IT|
+|Dataset|cartoon_data|
+|Fine-tuning supported or not|No|
+|Module Size|205MB|
+|Latest update date|2021-02-26|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+
+
+
+- ### Module Introduction
+
+  - This module encapsulates the [photo2cartoon](https://github.com/minivision-ai/photo2cartoon-paddle) project.
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 2.0.0
+
+ - paddlehub >= 2.0.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install Photo2Cartoon
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="Photo2Cartoon")
+ result = model.Cartoon_GEN(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.Cartoon_GEN(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def Cartoon_GEN(images=None,
+ paths=None,
+ batch_size=1,
+ output_dir='output',
+ visualization=False,
+ use_gpu=False):
+ ```
+
+ - Cartoon style generation API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - batch_size (int): the size of batch;
+ - visualization (bool): Whether to save the results as picture files;
+ - use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+
+## IV.Release Note
+
+* 1.0.0
+
+ First release
+
+ - ```shell
+ $ hub install Photo2Cartoon==1.0.0
+ ```
diff --git a/modules/image/Image_gan/style_transfer/U2Net_Portrait/README_en.md b/modules/image/Image_gan/style_transfer/U2Net_Portrait/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..2fc10840371ec96e5d4a8553389af333206a0b8b
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/U2Net_Portrait/README_en.md
@@ -0,0 +1,102 @@
+# U2Net_Portrait
+
+|Module Name|U2Net_Portrait|
+| :--- | :---: |
+|Category|image generation|
+|Network|U^2Net|
+|Dataset|-|
+|Fine-tuning supported or not|No|
+|Module Size|254MB|
+|Latest update date|2021-02-26|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input image
+
+
+
+ Output image
+
+
+
+
+- ### Module Introduction
+
+  - U2Net_Portrait can be used to generate a portrait drawing from a face image.
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 2.0.0
+
+ - paddlehub >= 2.0.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install U2Net_Portrait
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="U2Net_Portrait")
+ result = model.Portrait_GEN(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.Portrait_GEN(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def Portrait_GEN(images=None,
+ paths=None,
+ scale=1,
+ batch_size=1,
+ output_dir='output',
+ face_detection=True,
+ visualization=False):
+ ```
+
+ - Portrait generation API.
+
+ - **Parameters**
+
+    - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+    - paths (list[str]): image path;
+    - scale (float): scaling factor for resizing the image;
+    - batch_size (int): the size of batch;
+    - output_dir (str): save path of images;
+    - face_detection (bool): whether to detect the face before generation;
+    - visualization (bool): Whether to save the results as picture files;
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+
+## IV.Release Note
+
+* 1.0.0
+
+ First release
+
+ - ```shell
+ $ hub install U2Net_Portrait==1.0.0
+ ```
diff --git a/modules/image/Image_gan/style_transfer/UGATIT_100w/README_en.md b/modules/image/Image_gan/style_transfer/UGATIT_100w/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..6c0606bb7b64026f4c30c50f9fbeeb4dddf298d9
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/UGATIT_100w/README_en.md
@@ -0,0 +1,139 @@
+# UGATIT_100w
+
+|Module Name|UGATIT_100w|
+| :--- | :---: |
+|Category|image generation|
+|Network|U-GAT-IT|
+|Dataset|selfie2anime|
+|Fine-tuning supported or not|No|
+|Module Size|41MB|
+|Latest update date|2021-02-26|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input image
+
+
+
+ Output image
+
+
+
+
+- ### Module Introduction
+
+  - U-GAT-IT is a style transfer model. This module can be used to transfer a face image into cartoon style. For more information, please refer to [UGATIT-Paddle Project](https://github.com/miraiwk/UGATIT-paddle).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install UGATIT_100w
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="UGATIT_100w")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ batch_size=1,
+ output_dir='output',
+ visualization=False)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - batch_size (int): the size of batch;
+ - visualization (bool): Whether to save the results as picture files;
+ - output_dir (str): save path of images;
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m UGATIT_100w
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
+
+- ### Step 2: Send a predictive request
+
+  - Once the server is configured, use the following code to send a prediction request and obtain the result:
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+ def cv2_to_base64(image):
+ data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/UGATIT_100w"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+ - ```shell
+ $ hub install UGATIT_100w==1.0.0
+ ```
diff --git a/modules/image/Image_gan/style_transfer/animegan_v1_hayao_60/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v1_hayao_60/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..0fe31afd24aa6b1efd6822b28660f0112bbcacee
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/animegan_v1_hayao_60/README_en.md
@@ -0,0 +1,149 @@
+# animegan_v1_hayao_60
+
+|Module Name|animegan_v1_hayao_60|
+| :--- | :---: |
+|Category|image generation|
+|Network|AnimeGAN|
+|Dataset|The Wind Rises|
+|Fine-tuning supported or not|No|
+|Module Size|18MB|
+|Latest update date|2021-07-30|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input Image
+
+
+
+ Output Image
+
+
+
+
+
+- ### Module Introduction
+
+  - AnimeGAN V1 is a style transfer model that can render an image in Miyazaki cartoon style. For more information, please refer to [AnimeGAN V1 Project](https://github.com/TachibanaYoshino/AnimeGAN).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install animegan_v1_hayao_60
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="animegan_v1_hayao_60")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ output_dir='output',
+ visualization=False,
+ min_size=32,
+ max_size=1024)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - visualization (bool): Whether to save the results as picture files;
+    - min\_size (int): minimum image size, default is 32;
+    - max\_size (int): maximum image size, default is 1024.
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m animegan_v1_hayao_60
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
+
+- ### Step 2: Send a predictive request
+
+  - Once the server is configured, use the following code to send a prediction request and obtain the result:
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+ def cv2_to_base64(image):
+ data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/animegan_v1_hayao_60"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+  Adapted to PaddleHub 2.0
+
+* 1.0.2
+
+  Removed optional parameter batch_size
+
+ - ```shell
+ $ hub install animegan_v1_hayao_60==1.0.2
+ ```
diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_hayao_64/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_hayao_64/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac84636f50bb5b39fa266d3d666e130f4c1c8479
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/animegan_v2_hayao_64/README_en.md
@@ -0,0 +1,148 @@
+# animegan_v2_hayao_64
+
+|Module Name|animegan_v2_hayao_64|
+| :--- | :---: |
+|Category|image generation|
+|Network|AnimeGAN|
+|Dataset|The Wind Rises|
+|Fine-tuning supported or not|No|
+|Module Size|9.4MB|
+|Latest update date|2021-07-30|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input image
+
+
+
+ Output image
+
+
+
+- ### Module Introduction
+
+  - AnimeGAN V2 is a style transfer model that can render an image in Miyazaki cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install animegan_v2_hayao_64
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="animegan_v2_hayao_64")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ output_dir='output',
+ visualization=False,
+ min_size=32,
+ max_size=1024)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - visualization (bool): Whether to save the results as picture files;
+    - min\_size (int): minimum image size, default is 32;
+    - max\_size (int): maximum image size, default is 1024.
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m animegan_v2_hayao_64
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
+
+- ### Step 2: Send a predictive request
+
+  - Once the server is configured, use the following code to send a prediction request and obtain the result:
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+ def cv2_to_base64(image):
+ data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/animegan_v2_hayao_64"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+  Adapted to PaddleHub 2.0
+
+* 1.0.2
+
+  Removed optional parameter batch_size
+
+ - ```shell
+ $ hub install animegan_v2_hayao_64==1.0.2
+ ```
diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_hayao_99/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_hayao_99/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..06d1fba7c5f9222a2589407b096978a1099ebeb7
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/animegan_v2_hayao_99/README_en.md
@@ -0,0 +1,148 @@
+# animegan_v2_hayao_99
+
+|Module Name|animegan_v2_hayao_99|
+| :--- | :---: |
+|Category|image generation|
+|Network|AnimeGAN|
+|Dataset|The Wind Rises|
+|Fine-tuning supported or not|No|
+|Module Size|9.4MB|
+|Latest update date|2021-07-30|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input image
+
+
+
+ Output image
+
+
+
+
+- ### Module Introduction
+
+  - AnimeGAN V2 is a style transfer model that can render an image in Miyazaki cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install animegan_v2_hayao_99
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="animegan_v2_hayao_99")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ output_dir='output',
+ visualization=False,
+ min_size=32,
+ max_size=1024)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - visualization (bool): Whether to save the results as picture files;
+    - min\_size (int): minimum image size, default is 32;
+    - max\_size (int): maximum image size, default is 1024.
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m animegan_v2_hayao_99
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
+
+- ### Step 2: Send a predictive request
+
+  - Once the server is configured, use the following code to send a prediction request and obtain the result:
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+ def cv2_to_base64(image):
+ data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/animegan_v2_hayao_99"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+  Adapted to PaddleHub 2.0
+
+* 1.0.2
+
+  Removed optional parameter batch_size
+
+ - ```shell
+ $ hub install animegan_v2_hayao_99==1.0.2
+ ```
diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_paprika_74/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_74/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e72e8211c733e4671637afdbf9c2e199780266d
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_74/README_en.md
@@ -0,0 +1,147 @@
+# animegan_v2_paprika_74
+
+|Module Name|animegan_v2_paprika_74|
+| :--- | :---: |
+|Category|image generation|
+|Network|AnimeGAN|
+|Dataset|Paprika|
+|Fine-tuning supported or not|No|
+|Module Size|9.4MB|
+|Latest update date|2021-02-26|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input Image
+
+
+
+ Output Image
+
+
+
+
+- ### Module Introduction
+
+  - AnimeGAN V2 is a style transfer model that can render an image in Paprika cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install animegan_v2_paprika_74
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="animegan_v2_paprika_74")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ output_dir='output',
+ visualization=False,
+ min_size=32,
+ max_size=1024)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - visualization (bool): Whether to save the results as picture files;
+    - min\_size (int): minimum image size, default is 32;
+    - max\_size (int): maximum image size, default is 1024.
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m animegan_v2_paprika_74
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
+
+- ### Step 2: Send a predictive request
+
+  - Once the server is configured, use the following code to send a prediction request and obtain the result:
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+ def cv2_to_base64(image):
+ data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_74"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+  Adapted to PaddleHub 2.0
+
+* 1.0.2
+
+  Removed optional parameter batch_size
+
+ - ```shell
+ $ hub install animegan_v2_paprika_74==1.0.2
+ ```
diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_paprika_98/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_98/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e16bfb89e81291ea7c81c2df32c4e7681f19441
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_98/README_en.md
@@ -0,0 +1,149 @@
+# animegan_v2_paprika_98
+
+|Module Name|animegan_v2_paprika_98|
+| :--- | :---: |
+|Category|image generation|
+|Network|AnimeGAN|
+|Dataset|Paprika|
+|Fine-tuning supported or not|No|
+|Module Size|9.4MB|
+|Latest update date|2021-07-30|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input image
+
+
+
+ Output image
+
+
+
+
+- ### Module Introduction
+
+  - AnimeGAN V2 is a style transfer model that can render an image in Paprika cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install animegan_v2_paprika_98
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="animegan_v2_paprika_98")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ output_dir='output',
+ visualization=False,
+ min_size=32,
+ max_size=1024)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - visualization (bool): Whether to save the results as picture files;
+    - min\_size (int): minimum image size, default is 32;
+    - max\_size (int): maximum image size, default is 1024.
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m animegan_v2_paprika_98
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
+
+- ### Step 2: Send a predictive request
+
+  - Once the server is configured, use the following code to send a prediction request and obtain the result:
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+ def cv2_to_base64(image):
+ data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_98"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+  Adapted to PaddleHub 2.0
+
+* 1.0.2
+
+  Removed optional parameter batch_size
+
+ - ```shell
+ $ hub install animegan_v2_paprika_98==1.0.2
+ ```
diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_shinkai_33/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_shinkai_33/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..383d5813a5dd33069dd0f8a6317a85469dbf4221
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/animegan_v2_shinkai_33/README_en.md
@@ -0,0 +1,150 @@
+# animegan_v2_shinkai_33
+
+|Module Name|animegan_v2_shinkai_33|
+| :--- | :---: |
+|Category|image generation|
+|Network|AnimeGAN|
+|Dataset|Your Name, Weathering with You|
+|Fine-tuning supported or not|No|
+|Module Size|9.4MB|
+|Latest update date|2021-07-30|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input image
+
+
+
+ Output image
+
+
+
+
+- ### Module Introduction
+
+  - AnimeGAN V2 is a style transfer model that can render an image in Makoto Shinkai cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install animegan_v2_shinkai_33
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="animegan_v2_shinkai_33")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ output_dir='output',
+ visualization=False,
+ min_size=32,
+ max_size=1024)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - visualization (bool): Whether to save the results as picture files;
+    - min\_size (int): minimum image size, default is 32;
+    - max\_size (int): maximum image size, default is 1024.
+
+    **NOTE:** Provide the input with either `images` or `paths`; choose only one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+
+
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m animegan_v2_shinkai_33
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
+
+- ### Step 2: Send a predictive request
+
+  - Once the server is configured, use the following code to send a prediction request and obtain the result:
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+ def cv2_to_base64(image):
+ data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/animegan_v2_shinkai_33"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+  Adapted to PaddleHub 2.0
+
+* 1.0.2
+
+  Removed optional parameter batch_size
+
+ - ```shell
+ $ hub install animegan_v2_shinkai_33==1.0.2
+ ```
diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_shinkai_53/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_shinkai_53/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ab8c6b6913a002f3604c3b70c13bb15bbcbab2a
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/animegan_v2_shinkai_53/README_en.md
@@ -0,0 +1,149 @@
+# animegan_v2_shinkai_53
+
+|Module Name|animegan_v2_shinkai_53|
+| :--- | :---: |
+|Category|image generation|
+|Network|AnimeGAN|
+|Dataset|Your Name, Weathering with You|
+|Fine-tuning supported or not|No|
+|Module Size|9.4MB|
+|Latest update date|2021-07-30|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+ Input image
+
+
+
+ Output image
+
+
+
+
+- ### Module Introduction
+
+  - AnimeGAN V2 is a style transfer model that can render an image in Makoto Shinkai cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.8.0
+
+ - paddlehub >= 1.8.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install animegan_v2_shinkai_53
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ model = hub.Module(name="animegan_v2_shinkai_53")
+ result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
+ # or
+ # result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
+ ```
+
+- ### 2、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ output_dir='output',
+ visualization=False,
+ min_size=32,
+ max_size=1024)
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+
+ - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
+ - paths (list[str]): image path;
+ - output_dir (str): save path of images;
+ - visualization (bool): Whether to save the results as picture files;
+    - min\_size (int): minimum size of the input image; default is 32;
+    - max\_size (int): maximum size of the input image; default is 1024.
+
+    **NOTE:** Provide the input data through either `images` or `paths`; choose one of the two.
+
+ - **Return**
+    - res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
+
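The `min_size`/`max_size` bounds above constrain the working resolution. As a rough illustration, here is a sketch of how such bounds might rescale an input; the helper and its policy are assumptions for illustration, not the module's actual code:

```python
# Hypothetical helper illustrating min_size/max_size semantics: inputs are
# scaled so the longest edge is at most max_size and the shortest edge is
# at least min_size. This is an assumption, not the module's actual code.
def clamp_size(height, width, min_size=32, max_size=1024):
    scale = 1.0
    if max(height, width) > max_size:
        scale = max_size / max(height, width)
    elif min(height, width) < min_size:
        scale = min_size / min(height, width)
    return int(height * scale), int(width * scale)

print(clamp_size(2048, 1024))  # a 2048x1024 input is halved to 1024x512
print(clamp_size(16, 64))      # a 16x64 input is doubled to 32x128
```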
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m animegan_v2_shinkai_53
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, there is no need to set it.
+
+- ### Step 2: Send a prediction request
+
+ - With a configured server, use the following lines of code to send the prediction request and obtain the result
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+
+
+    def cv2_to_base64(image):
+        data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+ # Send an HTTP request
+ data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/animegan_v2_shinkai_53"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(r.json()["results"])
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+  Adapted to PaddleHub 2.0
+
+* 1.0.2
+
+  Removed the optional parameter batch_size
+
+ - ```shell
+ $ hub install animegan_v2_shinkai_53==1.0.2
+ ```
diff --git a/modules/image/Image_gan/style_transfer/stylepro_artistic/README_en.md b/modules/image/Image_gan/style_transfer/stylepro_artistic/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..355c32aeb5098a7bb7425201d97e883b4d204a56
--- /dev/null
+++ b/modules/image/Image_gan/style_transfer/stylepro_artistic/README_en.md
@@ -0,0 +1,186 @@
+# stylepro_artistic
+
+|Module Name|stylepro_artistic|
+| :--- | :---: |
+|Category|image generation|
+|Network|StyleProNet|
+|Dataset|MS-COCO + WikiArt|
+|Fine-tuning supported or not|No|
+|Module Size|28MB|
+|Latest update date|2021-02-26|
+|Data indicators|-|
+
+
+## I.Basic Information
+
+- ### Application Effect Display
+ - Sample results:
+
+
+
+
+- ### Module Introduction
+
+  - StyleProNet is a lightweight and fast style transfer model. This module is based on StyleProNet, trained on the MS-COCO (content) and WikiArt (style) datasets, and can be used for style transfer. For more information, please refer to [StyleProNet](https://arxiv.org/abs/2003.07694).
+
+
+## II.Installation
+
+- ### 1、Environmental Dependence
+
+ - paddlepaddle >= 1.6.2
+
+ - paddlehub >= 1.6.0 | [How to install PaddleHub]()
+
+- ### 2、Installation
+
+ - ```shell
+ $ hub install stylepro_artistic
+ ```
+ - In case of any problems during installation, please refer to: [Windows_Quickstart]() | [Linux_Quickstart]() | [Mac_Quickstart]()
+
+## III.Module API Prediction
+
+- ### 1、Command line Prediction
+
+ - ```shell
+ $ hub run stylepro_artistic --input_path "/PATH/TO/IMAGE"
+ ```
+ - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
+- ### 2、Prediction Code Example
+
+ - ```python
+ import paddlehub as hub
+ import cv2
+
+ stylepro_artistic = hub.Module(name="stylepro_artistic")
+ result = stylepro_artistic.style_transfer(
+ images=[{
+ 'content': cv2.imread('/PATH/TO/CONTENT_IMAGE'),
+ 'styles': [cv2.imread('/PATH/TO/STYLE_IMAGE')]
+ }])
+
+ # or
+ # result = stylepro_artistic.style_transfer(
+ # paths=[{
+ # 'content': '/PATH/TO/CONTENT_IMAGE',
+ # 'styles': ['/PATH/TO/STYLE_IMAGE']
+ # }])
+ ```
+
+- ### 3、API
+
+ - ```python
+ def style_transfer(images=None,
+ paths=None,
+ alpha=1,
+ use_gpu=False,
+ visualization=False,
+ output_dir='transfer_result')
+ ```
+
+ - Style transfer API.
+
+ - **Parameters**
+    - images (list\[dict\]): each element is a dict that includes:
+      - content (numpy.ndarray): input image array, ndarray.shape is \[H, W, C\], BGR format;
+      - styles (list\[numpy.ndarray\]): list of style image arrays, ndarray.shape is \[H, W, C\], BGR format;
+      - weights (list\[float\], optional): weight for each style; if not set, each style has the same weight;
+    - paths (list\[dict\]): each element is a dict that includes:
+      - content (str): path of the input image;
+      - styles (list\[str\]): paths of the style images;
+      - weights (list\[float\], optional): weight for each style; if not set, each style has the same weight;
+    - alpha (float): strength of the style transfer, in the range \[0, 1\]; default is 1;
+ - use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
+ - visualization (bool): Whether to save the results as picture files;
+ - output_dir (str): save path of images;
+
+    **NOTE:** Provide the input data through either `images` or `paths`; choose one of the two.
+
+ - **Return**
+
+    - res (list\[dict\]): list of results, each element is a dict that includes:
+      - path (str): path of the input image
+      - data (numpy.ndarray): output image
+
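When several styles are passed with `weights`, the result blends the per-style transfers. The README does not spell out the blending rule; a plausible reading, a weighted average over normalized weights, assumed here purely for illustration, can be sketched as:

```python
import numpy as np

# Stand-ins for two hypothetical per-style transfer outputs
# (the real model blends internally; this only illustrates the semantics).
style_a = np.full((2, 2, 3), 100.0)
style_b = np.full((2, 2, 3), 200.0)

# weights=[0.75, 0.25]: normalize to sum to 1, then combine as a weighted average.
weights = np.array([0.75, 0.25])
weights = weights / weights.sum()
blended = weights[0] * style_a + weights[1] * style_b
print(blended[0, 0, 0])  # 0.75*100 + 0.25*200 = 125.0
```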
+
+ - ```python
+ def save_inference_model(dirname,
+ model_filename=None,
+ params_filename=None,
+ combined=True)
+ ```
+  - Save the model to a specific path.
+
+ - **Parameters**
+
+    - dirname: output directory for saving the model
+    - model\_filename: filename for saving the model
+    - params\_filename: filename for saving the parameters
+    - combined: whether to save the parameters into one file
+
+
+## IV.Server Deployment
+
+- PaddleHub Serving can deploy an online service of style transfer.
+
+- ### Step 1: Start PaddleHub Serving
+
+ - Run the startup command:
+ - ```shell
+ $ hub serving start -m stylepro_artistic
+ ```
+
+  - The serving API is now deployed, with the default port number 8866.
+
+  - **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, there is no need to set it.
+
+- ### Step 2: Send a prediction request
+
+ - With a configured server, use the following lines of code to send the prediction request and obtain the result
+
+ - ```python
+ import requests
+ import json
+ import cv2
+ import base64
+ import numpy as np
+
+
+    def cv2_to_base64(image):
+        data = cv2.imencode('.jpg', image)[1]
+        return base64.b64encode(data.tobytes()).decode('utf8')
+
+    def base64_to_cv2(b64str):
+        data = base64.b64decode(b64str.encode('utf8'))
+        data = np.frombuffer(data, np.uint8)
+        data = cv2.imdecode(data, cv2.IMREAD_COLOR)
+        return data
+
+ # Send an HTTP request
+ data = {'images':[
+ {
+ 'content':cv2_to_base64(cv2.imread('/PATH/TO/CONTENT_IMAGE')),
+ 'styles':[cv2_to_base64(cv2.imread('/PATH/TO/STYLE_IMAGE'))]
+ }
+ ]}
+ headers = {"Content-type": "application/json"}
+ url = "http://127.0.0.1:8866/predict/stylepro_artistic"
+ r = requests.post(url=url, headers=headers, data=json.dumps(data))
+
+ # print prediction results
+ print(base64_to_cv2(r.json()["results"][0]['data']))
+ ```
+
+
+## V.Release Note
+
+* 1.0.0
+
+ First release
+
+* 1.0.1
+
+ - ```shell
+ $ hub install stylepro_artistic==1.0.1
+ ```