Unverified commit 27b1e6d9 authored by KP, committed by GitHub

Merge pull request #1710 from rainyfly/add_English_Readme

Add english readme
# stgan_bald
|Module Name|stgan_bald|
| :--- | :---: |
|Category|image generation|
|Network|STGAN|
|Dataset|CelebA|
|Fine-tuning supported or not|No|
|Module Size|287MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Please refer to this [link](https://aistudio.baidu.com/aistudio/projectdetail/1145381)
- ### Module Introduction
- This module is based on the STGAN model, trained on the CelebA dataset, and can be used to predict the bald appearance after 1, 3 and 5 years.
## II.Installation
- ### 1、Environmental Dependence
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install stgan_bald
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
stgan_bald = hub.Module(name="stgan_bald")
result = stgan_bald.bald(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = stgan_bald.bald(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def bald(images=None,
paths=None,
use_gpu=False,
visualization=False,
output_dir="bald_output")
```
- Bald appearance generation API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
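- A minimal usage sketch based only on the parameters documented above; it lets the module save the generated images itself via `visualization`:
- ```python
import paddlehub as hub
import cv2

stgan_bald = hub.Module(name="stgan_bald")
# Save the generated images to disk using only the documented parameters.
result = stgan_bald.bald(
    images=[cv2.imread('/PATH/TO/IMAGE')],
    use_gpu=False,
    visualization=True,        # save results as picture files
    output_dir="bald_output")  # default save path from the signature above
```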
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of bald appearance generation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m stgan_bald
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
import numpy as np
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.fromstring(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/stgan_bald"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# save results
one_year = cv2.cvtColor(base64_to_cv2(r.json()["results"]['data_0']), cv2.COLOR_RGB2BGR)
three_year = cv2.cvtColor(base64_to_cv2(r.json()["results"]['data_1']), cv2.COLOR_RGB2BGR)
five_year = cv2.cvtColor(base64_to_cv2(r.json()["results"]['data_2']), cv2.COLOR_RGB2BGR)
cv2.imwrite("stgan_bald_server.png", one_year)
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install stgan_bald==1.0.0
```
# Photo2Cartoon
|Module Name|Photo2Cartoon|
| :--- | :---: |
|Category|image generation|
|Network|U-GAT-IT|
|Dataset|cartoon_data|
|Fine-tuning supported or not|No|
|Module Size|205MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://img-blog.csdnimg.cn/20201224164040624.jpg" hspace='10'/> <br />
</p>
- ### Module Introduction
- This module encapsulates the [photo2cartoon](https://github.com/minivision-ai/photo2cartoon-paddle) project.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install Photo2Cartoon
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="Photo2Cartoon")
result = model.Cartoon_GEN(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.Cartoon_GEN(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def Cartoon_GEN(images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False,
use_gpu=False):
```
- Cartoon style generation API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- batch_size (int): the size of batch;
- visualization (bool): Whether to save the results as picture files;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
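- A small sketch (with hypothetical paths) showing batch prediction with the documented `paths`, `batch_size` and `visualization` parameters:
- ```python
import paddlehub as hub

model = hub.Module(name="Photo2Cartoon")
# Process several images in one batch and save the cartoon results to `output`.
result = model.Cartoon_GEN(
    paths=['/PATH/TO/IMAGE_1', '/PATH/TO/IMAGE_2'],
    batch_size=2,
    visualization=True,
    output_dir='output')
```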
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install Photo2Cartoon==1.0.0
```
# U2Net_Portrait
|Module Name|U2Net_Portrait|
| :--- | :---: |
|Category|image generation|
|Network|U^2Net|
|Dataset|-|
|Fine-tuning supported or not|No|
|Module Size|254MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/07f73466f3294373965e06c141c4781992f447104a94471dadfabc1c3d920861" height='50%' hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/c6ab02cf27414a5ba5921d9e6b079b487f6cda6026dc4d6dbca8f0167ad7cae3" height='50%' hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- U2Net_Portrait can be used to create a face portrait.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install U2Net_Portrait
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="U2Net_Portrait")
result = model.Portrait_GEN(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.Portrait_GEN(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def Portrait_GEN(images=None,
paths=None,
scale=1,
batch_size=1,
output_dir='output',
face_detection=True,
visualization=False):
```
- Portrait generation API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- scale (float) : scale for resizing image;<br/>
- batch_size (int): the size of batch;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
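- A usage sketch built only from the parameters above; the `scale` value here is an arbitrary example:
- ```python
import paddlehub as hub
import cv2

model = hub.Module(name="U2Net_Portrait")
# Resize the input before generation and save the portrait to `output`.
result = model.Portrait_GEN(
    images=[cv2.imread('/PATH/TO/IMAGE')],
    scale=1.5,             # example value; resizing factor as documented
    face_detection=True,
    visualization=True,
    output_dir='output')
```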
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install U2Net_Portrait==1.0.0
```
# UGATIT_100w
|Module Name|UGATIT_100w|
| :--- | :---: |
|Category|image generation|
|Network|U-GAT-IT|
|Dataset|selfie2anime|
|Fine-tuning supported or not|No|
|Module Size|41MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/d130fabd8bd34e53b2f942b3766eb6bbd3c19c0676d04abfbd5cc4b83b66f8b6" height='80%' hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/8538af03b3f14b1884fcf4eec48965baf939e35a783d40129085102057438c77" height='80%' hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- UGATIT is a model for style transfer. This module can be used to transfer a face image to cartoon style. For more information, please refer to [UGATIT-Paddle Project](https://github.com/miraiwk/UGATIT-paddle).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install UGATIT_100w
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="UGATIT_100w")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
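- A sketch of batch usage that assumes nothing beyond the signature above:
- ```python
import paddlehub as hub
import cv2

model = hub.Module(name="UGATIT_100w")
# Transfer two face images in one batch and save the results to `output`.
result = model.style_transfer(
    images=[cv2.imread('/PATH/TO/IMAGE_1'), cv2.imread('/PATH/TO/IMAGE_2')],
    batch_size=2,
    visualization=True,
    output_dir='output')
```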
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m UGATIT_100w
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/UGATIT_100w"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install UGATIT_100w==1.0.0
```
# animegan_v1_hayao_60
|Module Name|animegan_v1_hayao_60|
| :--- | :---: |
|Category|image generation|
|Network|AnimeGAN|
|Dataset|The Wind Rises|
|Fine-tuning supported or not|No|
|Module Size|18MB|
|Latest update date|2021-07-30|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input Image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/10175bb964e94ce18608a84b0ab6ebfe154b523df42f44a3a851b2d91dd17a63" width = "450" height = "300" hspace='10'/>
<br />
Output Image
<br />
</p>
- ### Module Introduction
- AnimeGAN V1 is a style transfer model that can transfer an image to the Miyazaki cartoon style. For more information, please refer to [AnimeGAN V1 Project](https://github.com/TachibanaYoshino/AnimeGAN).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v1_hayao_60
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v1_hayao_60")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
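- A sketch showing how the documented size limits can be tightened; the values are arbitrary examples within the documented range:
- ```python
import paddlehub as hub
import cv2

model = hub.Module(name="animegan_v1_hayao_60")
# Cap the working resolution to keep memory usage predictable.
result = model.style_transfer(
    images=[cv2.imread('/PATH/TO/IMAGE')],
    min_size=32,
    max_size=512,          # example cap below the default of 1024
    visualization=True,
    output_dir='output')
```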
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v1_hayao_60
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v1_hayao_60"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Adapt to paddlehub2.0
* 1.0.2
Delete optional parameter batch_size
- ```shell
$ hub install animegan_v1_hayao_60==1.0.2
```
# animegan_v2_hayao_64
|Module Name|animegan_v2_hayao_64|
| :--- | :---: |
|Category|image generation|
|Network|AnimeGAN|
|Dataset|The Wind Rises|
|Fine-tuning supported or not|No|
|Module Size|9.4MB|
|Latest update date|2021-07-30|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/49620341f1fe4f00af4d93c22694897a1ae578a235844a1db1bbb4bd37bf750b" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 is a style transfer model that can transfer an image to the Miyazaki cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_hayao_64
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_hayao_64")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_hayao_64
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_hayao_64"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Adapt to paddlehub2.0
* 1.0.2
Delete optional parameter batch_size
- ```shell
$ hub install animegan_v2_hayao_64==1.0.2
```
# animegan_v2_hayao_99
|Module Name|animegan_v2_hayao_99|
| :--- | :---: |
|Category|image generation|
|Network|AnimeGAN|
|Dataset|The Wind Rises|
|Fine-tuning supported or not|No|
|Module Size|9.4MB|
|Latest update date|2021-07-30|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/16195e03d7e0412d990349587c587a26d9ae9e2ed1ec4fa1b4dc994e948d1f7d" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 is a style transfer model that can transfer an image to the Miyazaki cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_hayao_99
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_hayao_99")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_hayao_99
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_hayao_99"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Adapt to paddlehub2.0
* 1.0.2
Delete optional parameter batch_size
- ```shell
$ hub install animegan_v2_hayao_99==1.0.2
```
# animegan_v2_paprika_74
|Module Name|animegan_v2_paprika_74|
| :--- | :---: |
|Category|image generation|
|Network|AnimeGAN|
|Dataset|Paprika|
|Fine-tuning supported or not|No|
|Module Size|9.4MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input Image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/6574669d87b24bab9627c6e33896528b4a0bf5af1cd84ca29655d68719f2d551" width = "450" height = "300" hspace='10'/>
<br />
Output Image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 is a style transfer model that can transfer an image to the Paprika cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_paprika_74
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_paprika_74")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_paprika_74
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_74"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Adapt to paddlehub2.0
* 1.0.2
Delete optional parameter batch_size
- ```shell
$ hub install animegan_v2_paprika_74==1.0.2
```
# animegan_v2_paprika_98
|Module Name|animegan_v2_paprika_98|
| :--- | :---: |
|Category|image generation|
|Network|AnimeGAN|
|Dataset|Paprika|
|Fine-tuning supported or not|No|
|Module Size|9.4MB|
|Latest update date|2021-07-30|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/495436a627ef423ab572536c5f2ba6d0eb99b1ce098947a5ac02af36e7eb85f7" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 is a style transfer model that can transfer an image to the Paprika cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_paprika_98
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_paprika_98")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_paprika_98
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_98"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Adapt to paddlehub2.0
* 1.0.2
Delete optional parameter batch_size
- ```shell
$ hub install animegan_v2_paprika_98==1.0.2
```
# animegan_v2_shinkai_33
|Module Name|animegan_v2_shinkai_33|
| :--- | :---: |
|Category|image generation|
|Network|AnimeGAN|
|Dataset|Your Name, Weathering with you|
|Fine-tuning supported or not|No|
|Module Size|9.4MB|
|Latest update date|2021-07-30|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/776a84a0d97c452bbbe479592fbb8f5c6fe9c45f3b7e41fd8b7da80bf52ee668" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 is a style transfer model that can transfer an image to the Shinkai cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_shinkai_33
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_shinkai_33")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_shinkai_33
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_shinkai_33"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Adapt to paddlehub2.0
* 1.0.2
Delete optional parameter batch_size
- ```shell
$ hub install animegan_v2_shinkai_33==1.0.2
```
# animegan_v2_shinkai_53
|Module Name|animegan_v2_shinkai_53|
| :--- | :---: |
|Category|image generation|
|Network|AnimeGAN|
|Dataset|Your Name, Weathering with you|
|Fine-tuning supported or not|No|
|Module Size|9.4MB|
|Latest update date|2021-07-30|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/bd002c4bb6a7427daf26988770bb18648b7d8d2bfd6746bfb9a429db4867727f" width = "450" height = "300" hspace='10'/>
<br />
Input image
<br />
<img src="https://ai-studio-static-online.cdn.bcebos.com/fa4ba157e73c48658c4c9c6b8b92f5c99231d1d19556472788b1e5dd58d5d6cc" width = "450" height = "300" hspace='10'/>
<br />
Output image
<br />
</p>
- ### Module Introduction
- AnimeGAN V2 is a style transfer model that can transfer an image to the Shinkai cartoon style. For more information, please refer to [AnimeGAN V2 Project](https://github.com/TachibanaYoshino/AnimeGANv2).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_shinkai_53
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="animegan_v2_shinkai_53")
result = model.style_transfer(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.style_transfer(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def style_transfer(images=None,
paths=None,
output_dir='output',
visualization=False,
min_size=32,
max_size=1024)
```
- Style transfer API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- min\_size (int): minimum size of the image shape, default is 32;
- max\_size (int): maximum size of the image shape, default is 1024.
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[numpy.ndarray\]): result list, ndarray.shape is \[H, W, C\]
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m animegan_v2_shinkai_53
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/animegan_v2_shinkai_53"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Adapt to paddlehub2.0
* 1.0.2
Delete optional parameter batch_size
- ```shell
$ hub install animegan_v2_shinkai_53==1.0.2
```
# stylepro_artistic
|Module Name|stylepro_artistic|
| :--- | :---: |
|Category|image generation|
|Network|StyleProNet|
|Dataset|MS-COCO + WikiArt|
|Fine-tuning supported or not|No|
|Module Size|28MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://paddlehub.bj.bcebos.com/resources/style.png" width='80%' hspace='10'/> <br />
</p>
- ### Module Introduction
- StyleProNet is a lightweight and fast model for style transfer. This module is based on StyleProNet, trained on the MS-COCO (content) and WikiArt (style) datasets, and can be used for style transfer. For more information, please refer to [StyleProNet](https://arxiv.org/abs/2003.07694).
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install stylepro_artistic
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run stylepro_artistic --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
stylepro_artistic = hub.Module(name="stylepro_artistic")
result = stylepro_artistic.style_transfer(
images=[{
'content': cv2.imread('/PATH/TO/CONTENT_IMAGE'),
'styles': [cv2.imread('/PATH/TO/STYLE_IMAGE')]
}])
# or
# result = stylepro_artistic.style_transfer(
# paths=[{
# 'content': '/PATH/TO/CONTENT_IMAGE',
# 'styles': ['/PATH/TO/STYLE_IMAGE']
# }])
```
- ### 3、API
- ```python
def style_transfer(images=None,
paths=None,
alpha=1,
use_gpu=False,
visualization=False,
output_dir='transfer_result')
```
- Style transfer API.
- **Parameters**
- images (list\[dict\]): each element is a dict that includes:
- content (numpy.ndarray): input image array, shape is \[H, W, C\], BGR format;<br/>
- styles (list\[numpy.ndarray\]): list of style image arrays, shape is \[H, W, C\], BGR format;<br/>
- weights (list\[float\], optional): weight for each style; if not set, each style has the same weight;<br/>
- paths (list\[dict\]): each element is a dict that includes:
- content (str): path of the input image;<br/>
- styles (list\[str\]): paths of the style images;<br/>
- weights (list\[float\], optional): weight for each style; if not set, each style has the same weight;<br/>
- alpha (float): alpha value in \[0, 1\], default is 1<br/>
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
**NOTE:** Provide data with either `paths` or `images`; only one of the two is required.
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (numpy.ndarray): output image
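- A sketch of multi-style blending based only on the parameter descriptions above; the weights and alpha value are arbitrary examples:
- ```python
import paddlehub as hub
import cv2

stylepro_artistic = hub.Module(name="stylepro_artistic")
# Blend two styles with explicit per-style weights and save the result.
result = stylepro_artistic.style_transfer(
    images=[{
        'content': cv2.imread('/PATH/TO/CONTENT_IMAGE'),
        'styles': [cv2.imread('/PATH/TO/STYLE_IMAGE_1'),
                   cv2.imread('/PATH/TO/STYLE_IMAGE_2')],
        'weights': [0.7, 0.3]       # per-style weights, as documented
    }],
    alpha=0.8,                      # stylization strength in [0, 1], as documented
    visualization=True,
    output_dir='transfer_result')
```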
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether to save the parameters into one file
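- A short sketch of exporting the inference model with the signature above; the directory name is a hypothetical example:
- ```python
import paddlehub as hub

stylepro_artistic = hub.Module(name="stylepro_artistic")
# Export the model and parameters to a local directory for later inference.
stylepro_artistic.save_inference_model(
    dirname="stylepro_artistic_inference",  # hypothetical output directory
    combined=True)                          # save all parameters into one file
```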
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m stylepro_artistic
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- ```python
import requests
import json
import cv2
import base64
import numpy as np
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
def base64_to_cv2(b64str):
data = base64.b64decode(b64str.encode('utf8'))
data = np.fromstring(data, np.uint8)
data = cv2.imdecode(data, cv2.IMREAD_COLOR)
return data
# Send an HTTP request
data = {'images':[
{
'content':cv2_to_base64(cv2.imread('/PATH/TO/CONTENT_IMAGE')),
'styles':[cv2_to_base64(cv2.imread('/PATH/TO/STYLE_IMAGE'))]
}
]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/stylepro_artistic"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(base64_to_cv2(r.json()["results"][0]['data']))
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
- ```shell
$ hub install stylepro_artistic==1.0.1
```
# DriverStatusRecognition
|Module Name|DriverStatusRecognition|
| :--- | :---: |
|Category|image classification|
|Network|MobileNetV3_small_ssld|
|Dataset|Distractible Driver Dataset|
|Fine-tuning supported or not|No|
|Module Size|6MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- This module can be used to recognize distracted drivers by analyzing the expression on the driver's face.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- paddlex >= 1.3.7
- ### 2、Installation
- ```shell
$ hub install DriverStatusRecognition
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
- ### 3、Online experience
[AI Studio](https://aistudio.baidu.com/aistudio/projectdetail/1649513)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run DriverStatusRecognition --input_path /PATH/TO/IMAGE
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="DriverStatusRecognition")
images = [cv2.imread('/PATH/TO/IMAGE')]
results = classifier.predict(images=images)
for result in results:
print(result)
```
- ### 3、API
- ```python
def predict(images)
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- **Return**
- result (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
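- Assuming the return format described above (each element maps a label name to its probability), the top prediction can be extracted as in this sketch:
- ```python
import paddlehub as hub
import cv2

classifier = hub.Module(name="DriverStatusRecognition")
results = classifier.predict(images=[cv2.imread('/PATH/TO/IMAGE')])
# Each element is a dict of {label: probability}; pick the most likely label.
top_label, top_prob = max(results[0].items(), key=lambda kv: kv[1])
print(top_label, top_prob)
```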
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install DriverStatusRecognition==1.0.0
```
# SnakeIdentification
|Module Name|SnakeIdentification|
| :--- | :---: |
|Category|image classification|
|Network|ResNet50_vd_ssld|
|Dataset|Snake Dataset|
|Fine-tuning supported or not|No|
|Module Size|84MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- This module can be used to identify the species of a snake and judge its toxicity.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- paddlex >= 1.3.7
- ### 2、Installation
- ```shell
$ hub install SnakeIdentification
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
- ### 3、Online experience
[AI Studio](https://aistudio.baidu.com/aistudio/projectdetail/1646951)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run SnakeIdentification --input_path /PATH/TO/IMAGE
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="SnakeIdentification")
images = [cv2.imread('/PATH/TO/IMAGE')]
results = classifier.predict(images=images)
for result in results:
print(result)
```
- ### 3、API
- ```python
def predict(images)
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- **Return**
- result (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install SnakeIdentification==1.0.0
```
# alexnet_imagenet
|Module Name|alexnet_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|AlexNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|234MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- AlexNet is a classification model proposed by Alex Krizhevsky in 2012, which won the ILSVRC 2012 championship. This module is based on AlexNet, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install alexnet_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run alexnet_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="alexnet_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
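- A sketch of inspecting the returned label-to-probability dicts, assuming the return format described above:
- ```python
import paddlehub as hub

classifier = hub.Module(name="alexnet_imagenet")
input_dict = {"image": ["/PATH/TO/IMAGE"]}
result = classifier.classification(data=input_dict)
# Each element is a dict of {label: probability}; print the most likely label.
for res in result:
    print(max(res.items(), key=lambda kv: kv[1]))
```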
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install alexnet_imagenet==1.0.0
```
# darknet53_imagenet
|Module Name|darknet53_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DarkNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|160MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DarkNet53 is a classification model proposed by Joseph Redmon; it serves as the backbone network of YOLOv3 for feature extraction. This module is based on DarkNet53, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install darknet53_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run darknet53_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="darknet53_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install darknet53_imagenet==1.0.0
```
# densenet121_imagenet
|Module Name|densenet121_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DenseNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|34MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DenseNet is the model from the CVPR 2017 best paper. Each layer passes its output to all subsequent layers, forming a densely connected topology. The dense connections ease the problem of vanishing gradients and improve information flow. This module is based on DenseNet121, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install densenet121_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run densenet121_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="densenet121_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install densenet121_imagenet==1.0.0
```
# densenet161_imagenet
|Module Name|densenet161_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DenseNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|114MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DenseNet is the model from the CVPR 2017 best paper. Each layer passes its output to all subsequent layers, forming a densely connected topology. The dense connections ease the problem of vanishing gradients and improve information flow. This module is based on DenseNet161, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install densenet161_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run densenet161_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="densenet161_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install densenet161_imagenet==1.0.0
```
# densenet169_imagenet
|Module Name|densenet169_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DenseNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|59MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DenseNet is the model from the CVPR 2017 best paper. Each layer passes its output to every subsequent layer, forming a densely connected topology. These dense connections ease the vanishing-gradient problem and improve information flow. This module is based on DenseNet169, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install densenet169_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run densenet169_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="densenet169_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install densenet169_imagenet==1.0.0
```
# densenet201_imagenet
|Module Name|densenet201_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DenseNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|82MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DenseNet is the model from the CVPR 2017 best paper. Each layer passes its output to every subsequent layer, forming a densely connected topology. These dense connections ease the vanishing-gradient problem and improve information flow. This module is based on DenseNet201, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install densenet201_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run densenet201_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="densenet201_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install densenet201_imagenet==1.0.0
```
# densenet264_imagenet
|Module Name|densenet264_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DenseNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|135MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DenseNet is the model from the CVPR 2017 best paper. Each layer passes its output to every subsequent layer, forming a densely connected topology. These dense connections ease the vanishing-gradient problem and improve information flow. This module is based on DenseNet264, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install densenet264_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run densenet264_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="densenet264_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install densenet264_imagenet==1.0.0
```
# dpn107_imagenet
|Module Name|dpn107_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DPN|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|335MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DPN (Dual Path Networks) won the Object Localization Task of ILSVRC 2017. This module is based on DPN107, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install dpn107_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run dpn107_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="dpn107_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install dpn107_imagenet==1.0.0
```
# dpn131_imagenet
|Module Name|dpn131_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DPN|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|306MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DPN (Dual Path Networks) won the Object Localization Task of ILSVRC 2017. This module is based on DPN131, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install dpn131_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run dpn131_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="dpn131_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install dpn131_imagenet==1.0.0
```
# dpn68_imagenet
|Module Name|dpn68_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DPN|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|50MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DPN (Dual Path Networks) won the Object Localization Task of ILSVRC 2017. This module is based on DPN68, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install dpn68_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run dpn68_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="dpn68_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install dpn68_imagenet==1.0.0
```
# dpn92_imagenet
|Module Name|dpn92_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DPN|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|146MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DPN (Dual Path Networks) won the Object Localization Task of ILSVRC 2017. This module is based on DPN92, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install dpn92_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run dpn92_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="dpn92_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install dpn92_imagenet==1.0.0
```
# dpn98_imagenet
|Module Name|dpn98_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|DPN|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|238MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- DPN (Dual Path Networks) won the Object Localization Task of ILSVRC 2017. This module is based on DPN98, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install dpn98_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run dpn98_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="dpn98_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install dpn98_imagenet==1.0.0
```
# efficientnetb0_imagenet
|Module Name|efficientnetb0_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|22MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB0, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3. A rough sketch of the squeeze-and-excitation operation follows.
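- The following is a NumPy sketch of the squeeze-and-excitation (SE) idea with made-up weights (illustration only, not the module's actual implementation): channel statistics are squeezed by global average pooling and then used to re-weight (excite) the channels.
- ```python
  import numpy as np

  # Conceptual sketch of squeeze-and-excitation (illustration only).
  def squeeze_excite(x, reduction=4):
      n, h, w, c = x.shape
      w1 = np.random.rand(c, c // reduction).astype("float32")   # made-up weights
      w2 = np.random.rand(c // reduction, c).astype("float32")
      s = x.mean(axis=(1, 2))                                     # squeeze: global average pool -> (n, c)
      e = np.maximum(s @ w1, 0) @ w2                              # excite: FC -> ReLU -> FC
      scale = 1.0 / (1.0 + np.exp(-e))                            # sigmoid gate per channel
      return x * scale[:, None, None, :]                          # re-weight the channels

  x = np.random.rand(2, 8, 8, 16).astype("float32")
  print(squeeze_excite(x).shape)                                  # (2, 8, 8, 16), same shape, channels re-weighted
  ```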
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb0_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb0_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb0_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities (see the batched usage sketch below)
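- For illustration (a usage sketch with placeholder image paths), several images can be classified in one call and the top 3 labels printed per image:
- ```python
  import paddlehub as hub

  classifier = hub.Module(name="efficientnetb0_imagenet")
  # Placeholder paths; batch two images per forward pass and keep the top 3 labels each.
  results = classifier.classification(
      paths=['/PATH/TO/IMAGE_1', '/PATH/TO/IMAGE_2'],
      batch_size=2,
      top_k=3)
  for res in results:
      # each element is assumed to map label names to probabilities
      for label, prob in sorted(res.items(), key=lambda kv: kv[1], reverse=True):
          print(label, prob)
  ```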
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb0_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb0_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
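- When experimenting with the service, it can help to check the HTTP status code before parsing the body; a minimal defensive sketch around the request `r` from the example above:
- ```python
  # Hypothetical follow-up to the request above: fail loudly instead of assuming success.
  if r.status_code == 200:
      print(r.json()["results"])
  else:
      print("Request failed:", r.status_code, r.text)
  ```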
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb0_imagenet==1.1.0
```
# efficientnetb0_small_imagenet
|Module Name|efficientnetb0_small_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|20MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB0, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb0_small_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb0_small_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb0_small_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb0_small_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb0_small_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install efficientnetb0_small_imagenet==1.0.0
```
# efficientnetb1_imagenet
|Module Name|efficientnetb1_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|33MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB1, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb1_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb1_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb1_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb1_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb1_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb1_imagenet==1.1.0
```
# efficientnetb2_imagenet
|Module Name|efficientnetb2_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|38MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB2, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb2_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb2_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb2_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb2_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb2_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb2_imagenet==1.1.0
```
# efficientnetb3_imagenet
|Module Name|efficientnetb3_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|51MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB3, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb3_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb3_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb3_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb3_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb3_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb3_imagenet==1.1.0
```
# efficientnetb4_imagenet
|Module Name|efficientnetb4_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|77MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB4, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb4_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb4_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb4_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb4_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb4_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb4_imagenet==1.1.0
```
# efficientnetb5_imagenet
|Module Name|efficientnetb5_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|121MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB5, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb5_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb5_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb5_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb5_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb5_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb5_imagenet==1.1.0
```
# efficientnetb6_imagenet
|Module Name|efficientnetb6_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|170MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB6, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb6_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb6_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb6_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb6_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb6_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb6_imagenet==1.1.0
```
# efficientnetb7_imagenet
|Module Name|efficientnetb7_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|EfficientNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|260MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- EfficientNet is a lightweight model proposed by Google. It is built from MBConv blocks and takes advantage of the squeeze-and-excitation operation. This module is based on EfficientNetB7, trained on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install efficientnetb7_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run efficientnetb7_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="efficientnetb7_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m efficientnetb7_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/efficientnetb7_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Improve the prediction performance and users' experience
- ```shell
$ hub install efficientnetb7_imagenet==1.1.0
```
# fix_resnext101_32x48d_wsl_imagenet
|Module Name|fix_resnext101_32x48d_wsl_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|3.1GB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on the ResNeXt model, trained with weak supervision on billions of social images and fine-tuned on the ImageNet-2012 dataset, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install fix_resnext101_32x48d_wsl_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run fix_resnext101_32x48d_wsl_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="fix_resnext101_32x48d_wsl_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose keys are label names and whose values are the corresponding probabilities
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m fix_resnext101_32x48d_wsl_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/fix_resnext101_32x48d_wsl_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install fix_resnext101_32x48d_wsl_imagenet==1.0.0
```
# food_classification
|Module Name|food_classification|
| :--- | :---: |
|Category|image classification|
|Network|ResNet50_vd_ssld|
|Dataset|Food Dataset|
|Fine-tuning supported or not|No|
|Module Size|91MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- This module can be used for food classification.
## II.Installation
- ### 1、Environmental Dependence
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- paddlex >= 1.3.7
- ### 2、Installation
- ```shell
$ hub install food_classification
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run food_classification --input_path /PATH/TO/IMAGE
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="food_classification")
images = [cv2.imread('/PATH/TO/IMAGE')]
results = classifier.predict(images=images)
for result in results:
print(result)
```
- ### 3、API
- ```python
def predict(images)
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- **Return**
- result (list[dict]): classification results; each element in the list is a dict with the following fields:
- category_id (int): category id;
- category(str): category name;
- score(float): probability
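- As an illustrative follow-up (assuming the fields listed above), low-confidence predictions can be filtered out before use:
- ```python
  # Hypothetical post-processing of `results` from the prediction example above.
  confident = [r for r in results if r['score'] > 0.5]   # 0.5 is an arbitrary threshold
  for r in confident:
      print(r['category_id'], r['category'], r['score'])
  ```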
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install food_classification==1.0.0
```
# googlenet_imagenet
|Module Name|googlenet_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|GoogleNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|28MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- GoogleNet was proposed by Christian Szegedy in 2014 and won the ILSVRC 2014 championship. This module is based on GoogleNet, trained on ImageNet-2012, and can classify images of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install googlenet_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run googlenet_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="googlenet_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability (see the sketch below)
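- As an illustrative sketch (assuming the documented label-to-probability mapping; the image path is a placeholder), the most likely label can be extracted like this:
- ```python
import paddlehub as hub

classifier = hub.Module(name="googlenet_imagenet")
result = classifier.classification(data={"image": ["/PATH/TO/IMAGE"]})
# result[0] maps label name -> probability for the first image
top_label, top_prob = max(result[0].items(), key=lambda kv: kv[1])
print(top_label, round(top_prob, 4))
```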
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install googlenet_imagenet==1.0.0
```
# inception_v4_imagenet
|Module Name|inception_v4_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|Inception_V4|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|167MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- The Inception structure was first introduced in GoogLeNet, so GoogLeNet is also named Inception-v1. Inception-v4 is an improvement on it that takes advantage of several useful strategies, such as batch normalization and residual learning. This module is based on Inception-v4, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install inception_v4_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run inception_v4_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="inception_v4_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install inception_v4_imagenet==1.0.0
```
# marine_biometrics
|Module Name|marine_biometrics|
| :--- | :---: |
|Category|image classification|
|Network|ResNet50_vd_ssld|
|Dataset|Fish4Knowledge|
|Fine-tuning supported or not|No|
|Module Size|84MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- This module can be used for marine organism classification.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install marine_biometrics
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run marine_biometrics --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="marine_biometrics")
images = [cv2.imread('/PATH/TO/IMAGE')]
results = classifier.predict(images=images)
for result in results:
print(result)
```
- ### 3、API
- ```python
def predict(images)
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install marine_biometrics==1.0.0
```
# mobilenet_v2_animals
|Module Name|mobilenet_v2_animals|
| :--- | :---: |
|Category|image classification|
|Network|MobileNet_v2|
|Dataset|Baidu Animal Dataset|
|Fine-tuning supported or not|No|
|Module Size|50MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- MobileNet is a lightweight convolutional network. This module is trained on a Baidu animal dataset and can classify 7978 kinds of animals.
- For more information, please refer to:[MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/pdf/1801.04381.pdf)
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install mobilenet_v2_animals
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run mobilenet_v2_animals --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="mobilenet_v2_animals")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability (see the sketch below)
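- For example, a batched call that keeps the three most likely labels per image could look like this (a sketch based on the parameters above; paths are placeholders):
- ```python
import paddlehub as hub

classifier = hub.Module(name="mobilenet_v2_animals")
# two images in one batch, top-3 labels per image, CPU inference
res = classifier.classification(
    paths=['/PATH/TO/IMAGE_1', '/PATH/TO/IMAGE_2'],
    batch_size=2,
    use_gpu=False,
    top_k=3)
for per_image in res:
    print(per_image)  # dict: label name -> probability
```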
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m mobilenet_v2_animals
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/mobilenet_v2_animals"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install mobilenet_v2_animals==1.0.0
```
# mobilenet_v2_dishes
|Module Name|mobilenet_v2_dishes|
| :--- | :---: |
|Category|image classification|
|Network|MobileNet_v2|
|Dataset|Baidu food Dataset|
|Fine-tuning supported or not|No|
|Module Size|52MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- MobileNet is a lightweight convolutional network. This module is trained on a Baidu food dataset and can classify 8416 kinds of food.
<p align="center">
<img src="http://bj.bcebos.com/ibox-thumbnail98/e7b22762cf42ab0e1e1fab6b8720938b?authorization=bce-auth-v1%2Ffbe74140929444858491fbf2b6bc0935%2F2020-04-08T11%3A49%3A16Z%2F1800%2F%2Faf385f56da3c8ee1298588939d93533a72203c079ae1187affa2da555b9898ea" width = "800" hspace='10'/> <br />
</p>
- For more information, please refer to:[MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/pdf/1801.04381.pdf)
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install mobilenet_v2_dishes
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run mobilenet_v2_dishes --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="mobilenet_v2_dishes")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m mobilenet_v2_dishes
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/mobilenet_v2_dishes"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install mobilenet_v2_dishes==1.0.0
```
# mobilenet_v2_imagenet
|Module Name|mobilenet_v2_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|Mobilenet_v2|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|15MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- MobileNet V2 is an image classification model proposed by Mark Sandler, Andrew Howard et al. in 2018. It is a lightweight model for mobile and embedded devices that reaches high accuracy with few parameters. This module is based on MobileNet V2, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install mobilenet_v2_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run mobilenet_v2_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="mobilenet_v2_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
* 1.0.1
Fix the problem of encoding in python2
- ```shell
$ hub install mobilenet_v2_imagenet==1.0.1
```
# mobilenet_v2_imagenet_ssld
|Module Name|mobilenet_v2_imagenet_ssld|
| :--- | :---: |
|Category|image classification|
|Network|Mobilenet_v2|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|15MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- MobileNet V2 is an image classification model proposed by Mark Sandler, Andrew Howard et al. in 2018. It is a lightweight model for mobile and embedded devices that reaches high accuracy with few parameters. This module is based on MobileNet V2, trained on ImageNet-2012 with the SSLD distillation strategy, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install mobilenet_v2_imagenet_ssld
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run mobilenet_v2_imagenet_ssld --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="mobilenet_v2_imagenet_ssld")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability (see the sketch below)
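- If GPU inference is desired, the environment variable mentioned above must be set before prediction; a minimal sketch (device id 0 is an assumption):
- ```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # must be set before GPU prediction

import cv2
import paddlehub as hub

classifier = hub.Module(name="mobilenet_v2_imagenet_ssld")
result = classifier.classification(
    images=[cv2.imread('/PATH/TO/IMAGE')],
    use_gpu=True)
print(result)
```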
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m mobilenet_v2_imagenet_ssld
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/mobilenet_v2_imagenet_ssld"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install mobilenet_v2_imagenet_ssld==1.0.0
```
# mobilenet_v3_large_imagenet_ssld
|Module Name|mobilenet_v3_large_imagenet_ssld|
| :--- | :---: |
|Category|image classification|
|Network|Mobilenet_v3_large|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|23MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- MobileNetV3 is an image classification model proposed by Google in 2019. The authors searched for the network architecture with a combination of NAS and NetAdapt and released two versions of the model, Large and Small. This module is based on MobileNetV3 Large, trained on ImageNet-2012 with the SSLD distillation strategy, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install mobilenet_v3_large_imagenet_ssld
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run mobilenet_v3_large_imagenet_ssld --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="mobilenet_v3_large_imagenet_ssld")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m mobilenet_v3_large_imagenet_ssld
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/mobilenet_v3_large_imagenet_ssld"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install mobilenet_v3_large_imagenet_ssld==1.0.0
```
# mobilenet_v3_small_imagenet_ssld
|Module Name|mobilenet_v3_small_imagenet_ssld|
| :--- | :---: |
|Category|image classification|
|Network|Mobilenet_v3_Small|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|13MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- MobileNetV3 is an image classification model proposed by Google in 2019. The authors searched for the network architecture with a combination of NAS and NetAdapt and released two versions of the model, Large and Small. This module is based on MobileNetV3 Small, trained on ImageNet-2012 with the SSLD distillation strategy, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install mobilenet_v3_small_imagenet_ssld
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run mobilenet_v3_small_imagenet_ssld --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="mobilenet_v3_small_imagenet_ssld")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m mobilenet_v3_small_imagenet_ssld
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/mobilenet_v3_small_imagenet_ssld"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install mobilenet_v3_small_imagenet_ssld==1.0.0
```
# nasnet_imagenet
|Module Name|nasnet_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|NASNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|345MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- NASNet was proposed by Google, and its architecture was obtained through AutoML (automated neural architecture search). This module is based on NASNet, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install nasnet_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run nasnet_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="nasnet_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
* 1.0.1
Fix the problem of encoding in python2
- ```shell
$ hub install nasnet_imagenet==1.0.1
```
# pnasnet_imagenet
|Module Name|pnasnet_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|PNASNet|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|333MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- PNASNet was proposed by Google, and its architecture was obtained through AutoML (automated neural architecture search). This module is based on PNASNet, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install pnasnet_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run pnasnet_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="pnasnet_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
* 1.0.1
Fix the problem of encoding in python2
- ```shell
$ hub install pnasnet_imagenet==1.0.1
```
# res2net101_vd_26w_4s_imagenet
|Module Name|res2net101_vd_26w_4s_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|Res2Net|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|179MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- Res2Net is an improvement on ResNet that boosts performance without increasing computational cost. This module is based on Res2Net, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install res2net101_vd_26w_4s_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run res2net101_vd_26w_4s_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="res2net101_vd_26w_4s_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m res2net101_vd_26w_4s_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/res2net101_vd_26w_4s_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install res2net101_vd_26w_4s_imagenet==1.0.0
```
# resnet18_vd_imagenet
|Module Name|resnet18_vd_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNet_vd|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|46MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. ResNet-vd is a variant of ResNet. This module is based on ResNet_vd, trained on ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet18_vd_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnet18_vd_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet18_vd_imagenet")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m resnet18_vd_imagenet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/resnet18_vd_imagenet"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnet18_vd_imagenet==1.0.0
```
# resnet50_vd_10w
|Module Name|resnet50_vd_10w|
| :--- | :---: |
|Category|image classification|
|Network|ResNet_vd|
|Dataset|Baidu Dataset|
|Fine-tuning supported or not|No|
|Module Size|92MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. ResNet-vd is a variant of ResNet. This module is based on ResNet_vd, trained on a Baidu dataset (consisting of 100 thousand classes and 40 million pairs of data), and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet50_vd_10w
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet50_vd_10w")
input_dict, output_dict, program = classifier.context(trainable=True)
```
- ### 2、API
- ```python
def context(trainable=True, pretrained=True)
```
- **Parameters**
- trainable (bool): whether parameters are trainable;<br/>
- pretrained (bool): whether load the pre-trained model.
- **Return**
- inputs (dict): model inputs; the key is 'image' and the value is the image tensor;<br/>
- outputs (dict): model outputs, with the keys 'classification' and 'feature_map':
  - classification (paddle.fluid.framework.Variable): classification result;
  - feature\_map (paddle.fluid.framework.Variable): feature map extracted by the model.
- context\_prog (fluid.Program): computation graph, used for transfer learning (a usage sketch follows this list).
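- A minimal sketch of pulling the documented tensors out of the context (key names follow the return description above):
- ```python
import paddlehub as hub

classifier = hub.Module(name="resnet50_vd_10w")
inputs, outputs, program = classifier.context(trainable=True, pretrained=True)

image = inputs['image']               # input image tensor
feature_map = outputs['feature_map']  # backbone feature, e.g. for a custom head
logits = outputs['classification']    # original classification output
```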
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- **Parameters**
- dirname: output dir for saving the model; <br/>
- model_filename: filename of the model, default is \_\_model\_\_; <br/>
- params_filename: filename of the parameters, default is \_\_params\_\_ (only effective when `combined` is True); <br/>
- combined: whether to save all parameters into one file (an export sketch follows this list)
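- For example, exporting a combined inference model (the directory name below is arbitrary):
- ```python
import paddlehub as hub

classifier = hub.Module(name="resnet50_vd_10w")
# writes __model__ and __params__ into the given directory
classifier.save_inference_model(dirname="resnet50_vd_10w_infer", combined=True)
```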
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnet50_vd_10w==1.0.0
```
# resnet50_vd_dishes
|Module Name|resnet50_vd_dishes|
| :--- | :---: |
|Category|image classification|
|Network|ResNet50_vd|
|Dataset|Baidu Food Dataset|
|Fine-tuning supported or not|No|
|Module Size|158MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. ResNet-vd is a variant of ResNet. This module is based on ResNet-vd and can classify 8416 kinds of food.
<p align="center">
<img src="http://bj.bcebos.com/ibox-thumbnail98/77fa9b7003e4665867855b2b65216519?authorization=bce-auth-v1%2Ffbe74140929444858491fbf2b6bc0935%2F2020-04-08T11%3A05%3A10Z%2F1800%2F%2F1df0ecb4a52adefeae240c9e2189e8032560333e399b3187ef1a76e4ffa5f19f" width = "800" hspace='10'/> <br />
</p>
- For more information, please refer to:[Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf)
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet50_vd_dishes
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnet50_vd_dishes --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet50_vd_dishes")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m resnet50_vd_dishes
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/resnet50_vd_dishes"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnet50_vd_dishes==1.0.0
```
# resnet50_vd_wildanimals
|Module Name|resnet50_vd_wildanimals|
| :--- | :---: |
|Category|image classification|
|Network|ResNet_vd|
|Dataset|IFAW Wild Animal Dataset|
|Fine-tuning supported or not|No|
|Module Size|92MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. ResNet-vd is a variant of ResNet. This module is based on ResNet_vd, trained on the IFAW wild animal dataset, and can identify ten kinds of wild animals and wildlife products.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet50_vd_wildanimals
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnet50_vd_wildanimals --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet50_vd_wildanimals")
result = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = classifier.classification(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def classification(images=None,
paths=None,
batch_size=1,
use_gpu=False,
top_k=1):
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- top\_k (int): return the first k results
- **Return**
- res (list\[dict\]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability (see the sketch below)
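- Since the module distinguishes ten categories, asking for a larger top_k makes it easy to inspect the whole probability distribution (a sketch; the image path is a placeholder):
- ```python
import cv2
import paddlehub as hub

classifier = hub.Module(name="resnet50_vd_wildanimals")
res = classifier.classification(
    images=[cv2.imread('/PATH/TO/IMAGE')],
    top_k=10)  # keep all ten candidate labels with their probabilities
print(res[0])
```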
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m resnet50_vd_wildanimals
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/resnet50_vd_wildanimals"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnet50_vd_wildanimals==1.0.0
```
# resnet_v2_101_imagenet
|Module Name|resnet_v2_101_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNet V2 101|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|173MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. This module is based on ResNet101, trained on ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet_v2_101_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnet_v2_101_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet_v2_101_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
* 1.0.1
Fix the problem of encoding in python2
- ```shell
$ hub install resnet_v2_101_imagenet==1.0.1
```
# resnet_v2_152_imagenet
|Module Name|resnet_v2_152_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNet V2|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|234MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. This module is based on ResNet152, trained on ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet_v2_152_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnet_v2_152_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet_v2_152_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
* 1.0.1
Fix the problem of encoding in python2
- ```shell
$ hub install resnet_v2_152_imagenet==1.0.1
```
# resnet_v2_18_imagenet
|Module Name|resnet_v2_18_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNet V2|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|46MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. This module is based on ResNet18, trained on ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet_v2_18_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnet_v2_18_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet_v2_18_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnet_v2_18_imagenet==1.0.0
```
# resnet_v2_34_imagenet
|Module Name|resnet_v2_34_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNet V2|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|85MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNet proposed a residual unit to solve the problem of training an extremely deep network, and improved the prediction accuracy of models. This module is based on ResNet34, trained on ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnet_v2_34_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnet_v2_34_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnet_v2_34_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnet_v2_34_imagenet==1.0.0
```
# resnext101_32x16d_wsl
|Module Name|resnext101_32x16d_wsl|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_wsl|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|744MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- As the scale of manually annotated datasets approaches its limit, researchers at Facebook adopted a new transfer-learning approach: images were labeled with their hashtags, the network was pre-trained on billions of social-media images in a weakly supervised way, and then transferred to downstream tasks. The top-1 accuracy of ResNeXt101_32x16d_wsl on ImageNet reaches 84.24%. This module is based on ResNeXt101_32x16d_wsl, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_32x16d_wsl
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_32x16d_wsl --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_32x16d_wsl")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_32x16d_wsl==1.0.0
```
# resnext101_32x32d_wsl
|Module Name|resnext101_32x32d_wsl|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_wsl|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|1.8GB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- As the scale of manually annotated datasets approaches its practical limit, researchers at Facebook adopted a new transfer-learning approach: images are weakly labeled with their hashtags, the network is pre-trained on billions of such social images, and the weakly supervised result is then transferred to the target task. The top-1 accuracy of ResNeXt101_32x32d_wsl on ImageNet reaches 84.97%. This module is based on ResNeXt101_32x32d_wsl and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_32x32d_wsl
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_32x32d_wsl --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_32x32d_wsl")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_32x32d_wsl==1.0.0
```
# resnext101_32x48d_wsl
|Module Name|resnext101_32x48d_wsl|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_wsl|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|342MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- As the scale of manually annotated datasets approaches its practical limit, researchers at Facebook adopted a new transfer-learning approach: images are weakly labeled with their hashtags, the network is pre-trained on billions of such social images, and the weakly supervised result is then transferred to the target task. The top-1 accuracy of ResNeXt101_32x48d_wsl on ImageNet reaches 85.4%. This module is based on ResNeXt101_32x48d_wsl and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_32x48d_wsl
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_32x48d_wsl --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_32x48d_wsl")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_32x48d_wsl==1.0.0
```
# resnext101_32x4d_imagenet
|Module Name|resnext101_32x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|172MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext101_32x4d, which denotes a network with 101 layers, 32 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_32x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_32x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_32x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_32x4d_imagenet==1.0.0
```
# resnext101_32x8d_wsl
|Module Name|resnext101_32x8d_wsl|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_wsl|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|317MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- As the scale of manually annotated datasets approaches its practical limit, researchers at Facebook adopted a new transfer-learning approach: images are weakly labeled with their hashtags, the network is pre-trained on billions of such social images, and the weakly supervised result is then transferred to the target task. The top-1 accuracy of ResNeXt101_32x8d_wsl on ImageNet reaches 82.55%. This module is based on ResNeXt101_32x8d_wsl and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_32x8d_wsl
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_32x8d_wsl --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_32x8d_wsl")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_32x8d_wsl==1.0.0
```
# resnext101_64x4d_imagenet
|Module Name|resnext101_64x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|322MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext101_64x4d, which denotes a network with 101 layers, 64 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_64x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_64x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_64x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_64x4d_imagenet==1.0.0
```
# resnext101_vd_32x4d_imagenet
|Module Name|resnext101_vd_32x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|172MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext101_vd_32x4d, which denotes a network with 101 layers, 32 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_vd_32x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_vd_32x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_vd_32x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_vd_32x4d_imagenet==1.0.0
```
# resnext101_vd_64x4d_imagenet
|Module Name|resnext101_vd_64x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_vd|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|172MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext101_vd_64x4d_imagenet, which denotes a network with 101 layers, 64 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext101_vd_64x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext101_vd_64x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext101_vd_64x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext101_vd_64x4d_imagenet==1.0.0
```
# resnext152_32x4d_imagenet
|Module Name|resnext152_32x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|233MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext152_32x4d_imagenet, which denotes a network with 152 layers, 32 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext152_32x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext152_32x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext152_32x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext152_32x4d_imagenet==1.0.0
```
# resnext152_64x4d_imagenet
|Module Name|resnext152_64x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|444MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext152_64x4d_imagenet, which denotes a network with 152 layers, 64 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext152_64x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext152_64x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext152_64x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext152_64x4d_imagenet==1.0.0
```
# resnext152_vd_64x4d_imagenet
|Module Name|resnext152_vd_64x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_vd|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|444MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext152_vd_64x4d_imagenet, which denotes a network with 152 layers, 64 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext152_vd_64x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext152_vd_64x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext152_vd_64x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext152_vd_64x4d_imagenet==1.0.0
```
# resnext50_32x4d_imagenet
|Module Name|resnext50_32x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|97MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext50_32x4d, which denotes a network with 50 layers, 32 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext50_32x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext50_32x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext50_32x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext50_32x4d_imagenet==1.0.0
```
# resnext50_64x4d_imagenet
|Module Name|resnext50_64x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|174MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext50_64x4d_imagenet, which denotes a network with 50 layers, 64 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext50_64x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext50_64x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext50_64x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext50_64x4d_imagenet==1.0.0
```
# resnext50_vd_32x4d_imagenet
|Module Name|resnext50_vd_32x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_vd|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|98MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext50_vd_32x4d, which denotes a network with 50 layers, 32 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext50_vd_32x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext50_vd_32x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext50_vd_32x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext50_vd_32x4d_imagenet==1.0.0
```
# resnext50_vd_64x4d_imagenet
|Module Name|resnext50_vd_64x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ResNeXt_vd|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|175MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ResNeXt was proposed by UC San Diego and Facebook AI Research in 2017. This module is based on resnext50_vd_64x4d_imagenet, which denotes a network with 50 layers, 64 branches, and 4 input/output channels per branch. It was pre-trained with weak supervision on billions of social images, fine-tuned on the ImageNet-2012 dataset, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install resnext50_vd_64x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run resnext50_vd_64x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="resnext50_vd_64x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install resnext50_vd_64x4d_imagenet==1.0.0
```
# se_resnext101_32x4d_imagenet
|Module Name|se_resnext101_32x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|SE_ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|191MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- Squeeze-and-Excitation Network was proposed by Momenta in 2017. The model learns channel-wise weights to strengthen important feature channels and improve classification accuracy, and it won the ILSVRC 2017 classification competition. This module is based on se_resnext101_32x4d, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install se_resnext101_32x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run se_resnext101_32x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="se_resnext101_32x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install se_resnext101_32x4d_imagenet==1.0.0
```
# se_resnext50_32x4d_imagenet
|Module Name|se_resnext50_32x4d_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|SE_ResNeXt|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|107MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- Squeeze-and-Excitation Network was proposed by Momenta in 2017. The model learns channel-wise weights to strengthen important feature channels and improve classification accuracy, and it won the ILSVRC 2017 classification competition. This module is based on SE_ResNeXt50_32x4d, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install se_resnext50_32x4d_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run se_resnext50_32x4d_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="se_resnext50_32x4d_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install se_resnext50_32x4d_imagenet==1.0.0
```
# shufflenet_v2_imagenet
|Module Name|shufflenet_v2_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|ShuffleNet V2|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|11MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- ShuffleNet V2 is a lightweight model proposed by MEGVII in 2018. The model proposed pointwise group convolution and channel shuffle to maintain accuracy while reducing the amount of computation. This module is based on ShuffleNet V2, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install shufflenet_v2_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run shufflenet_v2_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="shufflenet_v2_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install shufflenet_v2_imagenet==1.0.0
```
# spinalnet_res101_gemstone
|Module Name|spinalnet_res101_gemstone|
| :--- | :---: |
|Category|image classification|
|Network|resnet101|
|Dataset|gemstone|
|Fine-tuning supported or not|No|
|Module Size|246MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- This module is based on SpinalNet, trained on a gemstone dataset, and can be used to classify gemstones.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install spinalnet_res101_gemstone
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run spinalnet_res101_gemstone --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="spinalnet_res101_gemstone")
result = classifier.predict(['/PATH/TO/IMAGE'])
print(result)
```
- ### 3、API
- ```python
def predict(images)
```
- classification API.
- **Parameters**
- images (list[numpy.ndarray]): image data.
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability (see the sketch below)
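- A minimal usage sketch with decoded image data (assuming the `images` parameter accepts BGR ndarrays as documented; the quick-start example above passes file paths instead):
- ```python
import cv2
import paddlehub as hub
classifier = hub.Module(name="spinalnet_res101_gemstone")
# Decode the image with OpenCV and wrap it in a list to match the documented images parameter.
image = cv2.imread('/PATH/TO/IMAGE')
result = classifier.predict([image])
print(result)
```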
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install spinalnet_res101_gemstone==1.0.0
```
# spinalnet_res50_gemstone
|Module Name|spinalnet_res50_gemstone|
| :--- | :---: |
|Category|image classification|
|Network|resnet50|
|Dataset|gemstone|
|Fine-tuning supported or not|No|
|Module Size|137MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- This module is based on SpinalNet, trained on a gemstone dataset, and can be used to classify gemstones.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install spinalnet_res50_gemstone
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run spinalnet_res50_gemstone --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="spinalnet_res50_gemstone")
result = classifier.predict(['/PATH/TO/IMAGE'])
print(result)
```
- ### 3、API
- ```python
def predict(images)
```
- classification API.
- **Parameters**
- images (list): images to be predicted.
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install spinalnet_res50_gemstone==1.0.0
```
# spinalnet_vgg16_gemstone
|Module Name|spinalnet_vgg16_gemstone|
| :--- | :---: |
|Category|image classification|
|Network|vgg16|
|Dataset|gemstone|
|Fine-tuning supported or not|No|
|Module Size|1.5GB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- This module is based on SpinalNet, trained on a gemstone dataset, and can be used to classify gemstones.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install spinalnet_vgg16_gemstone
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run spinalnet_vgg16_gemstone --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="spinalnet_vgg16_gemstone")
result = classifier.predict(['/PATH/TO/IMAGE'])
print(result)
```
- ### 3、API
- ```python
def predict(images)
```
- classification API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install spinalnet_vgg16_gemstone==1.0.0
```
# vgg11_imagenet
|Module Name|vgg11_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|VGG|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|507MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- VGG is a series of image classification models proposed by the University of Oxford and DeepMind. The series demonstrates that deeper networks tend to perform better, and VGG is widely used as a backbone for feature extraction in image classification tasks. This module is based on VGG11, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install vgg11_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run vgg11_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="vgg11_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability (see the batch prediction sketch below)
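- Batch prediction is a straightforward extension of the example above: put all paths in the list under the "image" key (a sketch, assuming results are returned in input order):
- ```python
import paddlehub as hub
classifier = hub.Module(name="vgg11_imagenet")
image_paths = ["/PATH/TO/IMAGE_1", "/PATH/TO/IMAGE_2"]
results = classifier.classification(data={"image": image_paths})
# One {label: probability} mapping is expected per input image.
for path, labels in zip(image_paths, results):
    print(path, labels)
```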
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install vgg11_imagenet==1.0.0
```
# vgg13_imagenet
|Module Name|vgg13_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|VGG|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|508MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- VGG is a series of image classification models proposed by the University of Oxford and DeepMind. The series demonstrates that deeper networks tend to perform better, and VGG is widely used as a backbone for feature extraction in image classification tasks. This module is based on VGG13, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install vgg13_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run vgg13_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="vgg13_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install vgg13_imagenet==1.0.0
```
# vgg16_imagenet
|Module Name|vgg16_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|VGG|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|528MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- VGG is a series of image classification models proposed by the University of Oxford and DeepMind. The series demonstrates that deeper networks tend to perform better, and VGG is widely used as a backbone for feature extraction in image classification tasks. This module is based on VGG16, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install vgg16_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run vgg16_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="vgg16_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install vgg16_imagenet==1.0.0
```
# vgg19_imagenet
|Module Name|vgg19_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|VGG|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|549MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- VGG is a series of image classification models proposed by the University of Oxford and DeepMind. The series demonstrates that deeper networks tend to perform better, and VGG is widely used as a backbone for feature extraction in image classification tasks. This module is based on VGG19, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install vgg19_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run vgg19_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="vgg19_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install vgg19_imagenet==1.0.0
```
# xception41_imagenet
|Module Name|xception41_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|Xception|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- Xception is a model proposed by Google in 2016, which is an improvement on Inception V3. This module is based on Xception41, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install xception41_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run xception41_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="xception41_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install xception41_imagenet==1.0.0
```
# xception65_imagenet
|Module Name|xception65_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|Xception|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|140MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- Xception is a model proposed by Google in 2016, which is an improvement on Inception V3. This module is based on Xception65, trained on ImageNet-2012, and can predict an image of size 224*224*3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install xception65_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run xception65_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="xception65_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
- result (list[dict]): classification results; each element in the list is a dict whose key is the label name and whose value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install xception65_imagenet==1.0.0
```
# xception71_imagenet
|Module Name|xception71_imagenet|
| :--- | :---: |
|Category|image classification|
|Network|Xception|
|Dataset|ImageNet-2012|
|Fine-tuning supported or not|No|
|Module Size|147MB|
|Latest update date|-|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
  - Xception is a model proposed by Google in 2016 as an improvement on Inception V3. This module is based on Xception71, trained on ImageNet-2012, and can be used to classify images with an input size of 224 x 224 x 3.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install xception71_imagenet
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run xception71_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
classifier = hub.Module(name="xception71_imagenet")
test_img_path = "/PATH/TO/IMAGE"
input_dict = {"image": [test_img_path]}
result = classifier.classification(data=input_dict)
```
- ### 3、API
- ```python
def classification(data)
```
- classification API.
- **Parameters**
- data (dict): key is "image", value is a list of image paths
- **Return**
      - result (list\[dict\]): classification results; each element in the list is a dict, where the key is the label name and the value is the corresponding probability
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install xception71_imagenet==1.0.0
```
# MiDaS_Large
|Module Name|MiDaS_Large|
| :--- | :---: |
|Category|depth estimation|
|Network|-|
|Dataset|3D Movies, WSVD, ReDWeb, MegaDepth|
|Fine-tuning supported or not|No|
|Module Size|399MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://img-blog.csdnimg.cn/20201227112600975.jpg" width='70%' hspace='10'/> <br />
</p>
- ### Module Introduction
  - The MiDaS_Large module is used for monocular depth estimation.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install MiDaS_Large
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="MiDaS_Large")
result = model.depth_estimation(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.depth_estimation(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def depth_estimation(images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False):
```
- depth estimation API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
      - res (list\[numpy.ndarray\]): depth data, ndarray.shape is \[H, W\]
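    - A minimal sketch for visualizing a returned depth map (assuming `result[0]` is the H x W array described above; the normalization below is only for display):
    - ```python
      import cv2
      import numpy as np
      import paddlehub as hub

      model = hub.Module(name="MiDaS_Large")
      result = model.depth_estimation(images=[cv2.imread('/PATH/TO/IMAGE')])

      # Normalize the first depth map to [0, 255] and save it as a grayscale image.
      depth = result[0].astype(np.float32)
      depth -= depth.min()
      depth /= max(float(depth.max()), 1e-6)
      cv2.imwrite('depth_vis.png', (depth * 255).astype(np.uint8))
      ```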
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install MiDaS_Large==1.0.0
```
# pyramidbox_face_detection
|Module Name|pyramidbox_face_detection|
| :--- | :---: |
|Category|face detection|
|Network|PyramidBox|
|Dataset|WIDER FACE Dataset|
|Fine-tuning supported or not|No|
|Module Size|220MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131602468-351eb3fb-81e3-4294-ac8e-b49a3a0232cb.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - PyramidBox is a one-stage face detector based on SSD. It can predict results across six scale levels of feature maps. This module is based on PyramidBox, trained on the WIDER FACE Dataset, and supports face detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install pyramidbox_face_detection
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run pyramidbox_face_detection --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
face_detector = hub.Module(name="pyramidbox_face_detection")
result = face_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = face_detector.face_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def face_detection(images=None,
paths=None,
use_gpu=False,
output_dir='detection_result',
visualization=False,
score_thresh=0.15)
```
- Detect all faces in image
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- score_thresh (float): the confidence threshold
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
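    - A minimal sketch for drawing the returned boxes on the input image (assuming the result fields described above):
    - ```python
      import cv2
      import paddlehub as hub

      face_detector = hub.Module(name="pyramidbox_face_detection")
      img = cv2.imread('/PATH/TO/IMAGE')
      result = face_detector.face_detection(images=[img])

      # Draw every detected face box on a copy of the input image.
      canvas = img.copy()
      for face in result[0]['data']:
          pt1 = (int(face['left']), int(face['top']))
          pt2 = (int(face['right']), int(face['bottom']))
          cv2.rectangle(canvas, pt1, pt2, (0, 255, 0), 2)
      cv2.imwrite('face_boxes.jpg', canvas)
      ```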
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of face detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m pyramidbox_face_detection
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/pyramidbox_face_detection"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.0
Fix the problem of reading numpy
- ```shell
$ hub install pyramidbox_face_detection==1.1.0
```
# pyramidbox_lite_mobile
|Module Name|pyramidbox_lite_mobile|
| :--- | :---: |
|Category|face detection|
|Network|PyramidBox|
|Dataset|WIDER FACE Dataset + Baidu Face Dataset|
|Fine-tuning supported or not|No|
|Module Size|7.3MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131602468-351eb3fb-81e3-4294-ac8e-b49a3a0232cb.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - PyramidBox-Lite is a light-weight model based on PyramidBox proposed by Baidu in ECCV 2018. This model has solid robustness against interferences such as light and scale variation. This module is optimized for mobile devices, based on PyramidBox, trained on WIDER FACE Dataset and Baidu Face Dataset, and can be used for face detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install pyramidbox_lite_mobile
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run pyramidbox_lite_mobile --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
face_detector = hub.Module(name="pyramidbox_lite_mobile")
result = face_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = face_detector.face_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def face_detection(images=None,
paths=None,
use_gpu=False,
output_dir='detection_result',
visualization=False,
shrink=0.5,
confs_threshold=0.6)
```
- Detect all faces in image
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- shrink (float): the scale to resize image
- confs\_threshold (float): the confidence threshold
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of face detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m pyramidbox_lite_mobile
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/pyramidbox_lite_mobile"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.2.0
- ```shell
$ hub install pyramidbox_lite_mobile==1.2.0
```
# pyramidbox_lite_mobile_mask
|Module Name|pyramidbox_lite_mobile_mask|
| :--- | :---: |
|Category|face detection|
|Network|PyramidBox|
|Dataset|WIDER FACE Dataset + Baidu Face Dataset|
|Fine-tuning supported or not|No|
|Module Size|1.2MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131603304-690a2e3b-9f24-42f6-9297-a12ada884191.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - PyramidBox-Lite is a light-weight model based on PyramidBox proposed by Baidu in ECCV 2018. This model has solid robustness against interferences such as light and scale variation. This module is optimized for mobile devices, based on PyramidBox, trained on WIDER FACE Dataset and Baidu Face Dataset, and can be used for mask detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install pyramidbox_lite_mobile_mask
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run pyramidbox_lite_mobile_mask --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
mask_detector = hub.Module(name="pyramidbox_lite_mobile_mask")
result = mask_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = mask_detector.face_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def face_detection(images=None,
paths=None,
batch_size=1,
use_gpu=False,
visualization=False,
output_dir='detection_result',
use_multi_scale=False,
shrink=0.5,
confs_threshold=0.6)
```
- Detect all faces in image, and judge the existence of mask.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
      - use\_multi\_scale (bool): whether to detect across multiple scales;
- shrink (float): the scale to resize image
- confs\_threshold (float): the confidence threshold
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (list): detection results, each element in the list is dict
- label (str): 'NO MASK' or 'MASK';
- confidence (float): the confidence of the result
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
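    - A minimal sketch for counting masked and unmasked faces from the returned labels (assuming the result fields described above):
    - ```python
      import cv2
      import paddlehub as hub

      mask_detector = hub.Module(name="pyramidbox_lite_mobile_mask")
      result = mask_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])

      # Each detected face carries a 'label' of 'MASK' or 'NO MASK'; tally them.
      counts = {'MASK': 0, 'NO MASK': 0}
      for face in result[0]['data']:
          counts[face['label']] = counts.get(face['label'], 0) + 1
      print(counts)
      ```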
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of face detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m pyramidbox_lite_mobile_mask
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/pyramidbox_lite_mobile_mask"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Paddle Lite Deployment
- ### Save model demo
- ```python
import paddlehub as hub
pyramidbox_lite_mobile_mask = hub.Module(name="pyramidbox_lite_mobile_mask")
# save model in directory named test_program
pyramidbox_lite_mobile_mask.save_inference_model(dirname="test_program")
```
- ### Transform the model
  - The model downloaded from PaddleHub is a prediction model. If we want to deploy it on a mobile device, we can use the OPT tool provided by Paddle-Lite to transform the model. For more information, please refer to the [OPT tool](https://paddle-lite.readthedocs.io/zh/latest/user_guides/model_optimize_tool.html)
- ### Deploy the model with Paddle Lite
  - Please refer to [Paddle-Lite mask detection model deployment demo](https://github.com/PaddlePaddle/Paddle-Lite/tree/develop/lite/demo/cxx)
## VI.Release Note
* 1.0.0
First release
* 1.3.0
- ```shell
$ hub install pyramidbox_lite_mobile_mask==1.3.0
```
# pyramidbox_lite_server
|Module Name|pyramidbox_lite_server|
| :--- | :---: |
|Category|face detection|
|Network|PyramidBox|
|Dataset|WIDER FACE Dataset + Baidu Face Dataset|
|Fine-tuning supported or not|No|
|Module Size|8MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131602468-351eb3fb-81e3-4294-ac8e-b49a3a0232cb.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- PyramidBox-Lite is a light-weight model based on PyramidBox proposed by Baidu in ECCV 2018. This model has solid robustness against interferences such as light and scale variation. This module is based on PyramidBox, trained on WIDER FACE Dataset and Baidu Face Dataset, and can be used for face detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install pyramidbox_lite_server
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run pyramidbox_lite_server --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
face_detector = hub.Module(name="pyramidbox_lite_server")
result = face_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = face_detector.face_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def face_detection(images=None,
paths=None,
use_gpu=False,
output_dir='detection_result',
visualization=False,
shrink=0.5,
confs_threshold=0.6)
```
- Detect all faces in image
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- shrink (float): the scale to resize image
- confs\_threshold (float): the confidence threshold
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of face detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m pyramidbox_lite_server
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/pyramidbox_lite_server"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.2.0
Fix the problem of reading numpy
- ```shell
$ hub install pyramidbox_lite_server==1.2.0
```
# pyramidbox_lite_server_mask
|Module Name|pyramidbox_lite_server_mask|
| :--- | :---: |
|Category|face detection|
|Network|PyramidBox|
|Dataset|WIDER FACE Dataset + Baidu Face Dataset|
|Fine-tuning supported or not|No|
|Module Size|1.2MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131603304-690a2e3b-9f24-42f6-9297-a12ada884191.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- PyramidBox-Lite is a light-weight model based on PyramidBox proposed by Baidu in ECCV 2018. This model has solid robustness against interferences such as light and scale variation. This module is based on PyramidBox, trained on WIDER FACE Dataset and Baidu Face Dataset, and can be used for mask detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install pyramidbox_lite_server_mask
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run pyramidbox_lite_server_mask --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
mask_detector = hub.Module(name="pyramidbox_lite_server_mask")
result = mask_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = mask_detector.face_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def face_detection(images=None,
paths=None,
batch_size=1,
use_gpu=False,
visualization=False,
output_dir='detection_result',
use_multi_scale=False,
shrink=0.5,
confs_threshold=0.6)
```
- Detect all faces in image, and judge the existence of mask.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
      - use\_multi\_scale (bool): whether to detect across multiple scales;
- shrink (float): the scale to resize image
- confs\_threshold (float): the confidence threshold
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (list): detection results, each element in the list is dict
- label (str): 'NO MASK' or 'MASK';
- confidence (float): the confidence of the result
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of face detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m pyramidbox_lite_server_mask
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/pyramidbox_lite_server_mask"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Paddle Lite Deployment
- ### Save model demo
- ```python
import paddlehub as hub
pyramidbox_lite_server_mask = hub.Module(name="pyramidbox_lite_server_mask")
# save model in directory named test_program
pyramidbox_lite_server_mask.save_inference_model(dirname="test_program")
```
- ### Transform the model
  - The model downloaded from PaddleHub is a prediction model. If we want to deploy it on a mobile device, we can use the OPT tool provided by Paddle-Lite to transform the model. For more information, please refer to the [OPT tool](https://paddle-lite.readthedocs.io/zh/latest/user_guides/model_optimize_tool.html)
- ### Deploy the model with Paddle Lite
  - Please refer to [Paddle-Lite mask detection model deployment demo](https://github.com/PaddlePaddle/Paddle-Lite/tree/develop/lite/demo/cxx)
## VI.Release Note
* 1.0.0
First release
* 1.3.1
- ```shell
$ hub install pyramidbox_lite_server_mask==1.3.1
```
# ultra_light_fast_generic_face_detector_1mb_320
|Module Name|ultra_light_fast_generic_face_detector_1mb_320|
| :--- | :---: |
|Category|face detection|
|Network|Ultra-Light-Fast-Generic-Face-Detector-1MB|
|Dataset|WIDER FACE Dataset|
|Fine-tuning supported or not|No|
|Module Size|2.6MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131604811-bce29c3f-66f7-45cb-a388-d739368bfeb9.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - Ultra-Light-Fast-Generic-Face-Detector-1MB is an extremely light-weight model for real-time face detection on low-computation-power devices. This module is based on Ultra-Light-Fast-Generic-Face-Detector-1MB, trained on the WIDER FACE Dataset, and can be used for face detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install ultra_light_fast_generic_face_detector_1mb_320
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run ultra_light_fast_generic_face_detector_1mb_320 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
face_detector = hub.Module(name="ultra_light_fast_generic_face_detector_1mb_320")
result = face_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = face_detector.face_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
    def face_detection(images=None,
                       paths=None,
                       batch_size=1,
                       use_gpu=False,
                       output_dir='face_detector_640_predict_output',
                       visualization=False,
                       confs_threshold=0.5)
```
- Detect all faces in image
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- confs\_threshold (float): the confidence threshold
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str): path for saving output image
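    - A minimal sketch of batch prediction from image paths, reading back the saved visualizations (assuming the parameters and return fields described above):
    - ```python
      import paddlehub as hub

      face_detector = hub.Module(name="ultra_light_fast_generic_face_detector_1mb_320")

      # Predict two images in one batch and save the visualized results to the default output_dir.
      result = face_detector.face_detection(
          paths=['/PATH/TO/IMAGE_1', '/PATH/TO/IMAGE_2'],
          batch_size=2,
          visualization=True)
      for item in result:
          print(item['path'], len(item['data']), 'faces ->', item.get('save_path'))
      ```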
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of face detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m ultra_light_fast_generic_face_detector_1mb_320
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/ultra_light_fast_generic_face_detector_1mb_320"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.2
- ```shell
$ hub install ultra_light_fast_generic_face_detector_1mb_320==1.1.2
```
# ultra_light_fast_generic_face_detector_1mb_640
|Module Name|ultra_light_fast_generic_face_detector_1mb_640|
| :--- | :---: |
|Category|face detection|
|Network|Ultra-Light-Fast-Generic-Face-Detector-1MB|
|Dataset|WIDER FACE Dataset|
|Fine-tuning supported or not|No|
|Module Size|2.9MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131604811-bce29c3f-66f7-45cb-a388-d739368bfeb9.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - Ultra-Light-Fast-Generic-Face-Detector-1MB is an extremely light-weight model for real-time face detection on low-computation-power devices. This module is based on Ultra-Light-Fast-Generic-Face-Detector-1MB, trained on the WIDER FACE Dataset, and can be used for face detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install ultra_light_fast_generic_face_detector_1mb_640
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run ultra_light_fast_generic_face_detector_1mb_640 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
face_detector = hub.Module(name="ultra_light_fast_generic_face_detector_1mb_640")
result = face_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = face_detector.face_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
    def face_detection(images=None,
                       paths=None,
                       batch_size=1,
                       use_gpu=False,
                       output_dir='face_detector_640_predict_output',
                       visualization=False,
                       confs_threshold=0.5)
```
- Detect all faces in image
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
- confs\_threshold (float): the confidence threshold
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- path (str): path for input image
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str): path for saving output image
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of face detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m ultra_light_fast_generic_face_detector_1mb_640
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/ultra_light_fast_generic_face_detector_1mb_640"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.2
- ```shell
$ hub install ultra_light_fast_generic_face_detector_1mb_640==1.1.2
```
# faster_rcnn_resnet50_coco2017
|Module Name|faster_rcnn_resnet50_coco2017|
| :--- | :---: |
|Category|object detection|
|Network|faster_rcnn|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|131MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131504887-d024c7e5-fc09-4d6b-92b8-4d0c965949d0.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - Faster_RCNN is a two-stage detector; it consists of feature extraction, proposal, classification and refinement processes. This module is trained on the COCO2017 dataset, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install faster_rcnn_resnet50_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run faster_rcnn_resnet50_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="faster_rcnn_resnet50_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
    # result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
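    - A minimal sketch for filtering the returned detections by label and confidence (assuming the result fields described above and COCO-style labels such as 'person'):
    - ```python
      import cv2
      import paddlehub as hub

      object_detector = hub.Module(name="faster_rcnn_resnet50_coco2017")
      result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])

      # Keep only confident 'person' detections from the first image.
      people = [d for d in result[0]['data']
                if d['label'] == 'person' and d['confidence'] >= 0.5]
      for d in people:
          print(d['label'], d['confidence'], d['left'], d['top'], d['right'], d['bottom'])
      ```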
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m faster_rcnn_resnet50_coco2017
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/faster_rcnn_resnet50_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.1.0
First release
* 1.1.1
Fix the problem of reading numpy
- ```shell
$ hub install faster_rcnn_resnet50_coco2017==1.1.1
```
# faster_rcnn_resnet50_fpn_coco2017
|Module Name|faster_rcnn_resnet50_fpn_coco2017|
| :--- | :---: |
|Category|object detection|
|Network|faster_rcnn|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|161MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131504887-d024c7e5-fc09-4d6b-92b8-4d0c965949d0.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - Faster_RCNN is a two-stage detector; it consists of feature extraction, proposal, classification and refinement processes. This module is trained on the COCO2017 dataset, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install faster_rcnn_resnet50_fpn_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run faster_rcnn_resnet50_fpn_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="faster_rcnn_resnet50_fpn_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
    # result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m faster_rcnn_resnet50_fpn_coco2017
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/faster_rcnn_resnet50_fpn_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.1
Fix the problem of reading numpy
- ```shell
$ hub install faster_rcnn_resnet50_fpn_coco2017==1.0.1
```
# faster_rcnn_resnet50_fpn_venus
|Module Name|faster_rcnn_resnet50_fpn_venus|
| :--- | :---: |
|Category|object detection|
|Network|faster_rcnn|
|Dataset|Baidu Detection Dataset|
|Fine-tuning supported or not|Yes|
|Module Size|317MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
  - Faster_RCNN is a two-stage detector; it consists of feature extraction, proposal, classification and refinement processes. This module is trained on the Baidu Detection Dataset, which contains 1.7 million images and more than 10 million boxes, and improves accuracy on 8 test datasets by 2.06% on average. In addition, this module supports fine-tuning and can achieve faster convergence and better performance.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install faster_rcnn_resnet50_fpn_venus
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、API
- ```python
def context(num_classes=81,
trainable=True,
pretrained=True,
phase='train')
```
- Extract features, and do transfer learning
- **Parameters**
- num\_classes (int): number of classes;<br/>
- trainable (bool): whether parameters trainable or not;<br/>
- pretrained (bool): whether load pretrained model or not
      - phase (str): optional, 'train' or 'predict'; 'train' is used for training, 'predict' is used for prediction.
- **Return**
- inputs (dict): inputs, a dict:
if phase is 'train', keys are:
- image (Variable): image variable
- im\_size (Variable): image size
- im\_info (Variable): image information
- gt\_class (Variable): box class
- gt\_box (Variable): box coordination
- is\_crowd (Variable): if multiple objects in box
        if phase is 'predict', keys are:
- image (Variable): image variable
- im\_size (Variable): image size
- im\_info (Variable): image information
- outputs (dict): model output
if phase is 'train', keys are:
- head_features (Variable): features extracted
        - rpn\_cls\_loss (Variable): classification loss in box
- rpn\_reg\_loss (Variable): regression loss in box
- generate\_proposal\_labels (Variable): proposal labels
        if phase is 'predict', keys are:
- head_features (Variable): features extracted
- rois (Variable): roi
- bbox\_out (Variable): prediction results
- program for transfer learning
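    - A minimal sketch of fetching the transfer-learning context (assuming `context()` returns `(inputs, outputs, program)` in that order):
    - ```python
      import paddlehub as hub

      module = hub.Module(name="faster_rcnn_resnet50_fpn_venus")

      # Assuming context() returns (inputs, outputs, program) in this order,
      # fetch the training graph pieces used for transfer learning.
      inputs, outputs, program = module.context(num_classes=81,
                                                trainable=True,
                                                pretrained=True,
                                                phase='train')
      print(sorted(inputs.keys()))    # e.g. image, im_info, im_size, gt_class, gt_box, is_crowd
      print(sorted(outputs.keys()))   # e.g. head_features, rpn_cls_loss, rpn_reg_loss, ...
      ```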
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install faster_rcnn_resnet50_fpn_venus==1.0.0
```
# ssd_mobilenet_v1_pascal
|Module Name|ssd_mobilenet_v1_pascal|
| :--- | :---: |
|Category|object detection|
|Network|SSD|
|Dataset|PASCAL VOC|
|Fine-tuning supported or not|No|
|Module Size|24MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131504887-d024c7e5-fc09-4d6b-92b8-4d0c965949d0.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
  - Single Shot MultiBox Detector (SSD) is a one-stage detector. Different from two-stage detectors, SSD frames object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. This module is based on MobileNet-v1, trained on the PASCAL VOC dataset, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install ssd_mobilenet_v1_pascal
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run ssd_mobilenet_v1_pascal --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="ssd_mobilenet_v1_pascal")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True,
)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
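- Building on the prediction example above, the returned fields can be consumed as in the following sketch:
- ```python
  import paddlehub as hub
  import cv2

  object_detector = hub.Module(name="ssd_mobilenet_v1_pascal")
  result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])

  # One result dict per input image; each detection is a dict as described above
  for res in result:
      for obj in res['data']:
          print("{} ({:.2f}): left={}, top={}, right={}, bottom={}".format(
              obj['label'], obj['confidence'],
              obj['left'], obj['top'], obj['right'], obj['bottom']))
  ```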
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m ssd_mobilenet_v1_pascal
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/ssd_mobilenet_v1_pascal"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.2
Fix the problem of reading numpy
- ```shell
$ hub install ssd_mobilenet_v1_pascal==1.1.2
```
# ssd_vgg16_512_coco2017
|Module Name|ssd_vgg16_512_coco2017|
| :--- | :---: |
|Category|object detection|
|Network|SSD|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|139MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131506781-b4ecb77b-5ab1-4795-88da-5f547f7f7f9c.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- Single Shot MultiBox Detector (SSD) is a one-stage detector. Different from two-stage detectors, SSD frames object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. This module is based on VGG16, trained on the COCO2017 dataset, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install ssd_vgg16_512_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run ssd_vgg16_512_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="ssd_vgg16_512_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m ssd_vgg16_512_coco2017
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/ssd_vgg16_512_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.2
Fix the problem of reading numpy
- ```shell
$ hub install ssd_vgg16_512_coco2017==1.0.2
```
# yolov3_darknet53_coco2017
|Module Name|yolov3_darknet53_coco2017|
| :--- | :---: |
|Category|object detection|
|Network|YOLOv3|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|239MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131506781-b4ecb77b-5ab1-4795-88da-5f547f7f7f9c.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- YOLOv3 is a one-stage detector proposed by Joseph Redmon and Ali Farhadi, which can reach comparable accuracy but twice as fast as traditional methods. This module is based on YOLOv3, trained on COCO2017, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install yolov3_darknet53_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run yolov3_darknet53_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="yolov3_darknet53_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
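- A sketch of batch prediction with a custom confidence threshold (the image paths are placeholders):
- ```python
  import paddlehub as hub
  import cv2

  object_detector = hub.Module(name="yolov3_darknet53_coco2017")
  images = [cv2.imread('/PATH/TO/IMAGE1'), cv2.imread('/PATH/TO/IMAGE2')]

  # Two images per batch; keep only detections with confidence above 0.3
  results = object_detector.object_detection(images=images,
                                             batch_size=2,
                                             score_thresh=0.3,
                                             visualization=True,
                                             output_dir='detection_result')
  ```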
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m yolov3_darknet53_coco2017
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/yolov3_darknet53_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.1.1
Fix the problem of reading numpy
- ```shell
$ hub install yolov3_darknet53_coco2017==1.1.1
```
# yolov3_darknet53_pedestrian
|Module Name|yolov3_darknet53_pedestrian|
| :--- | :---: |
|Category|object detection|
|Network|YOLOv3|
|Dataset|Baidu Pedestrian Dataset|
|Fine-tuning supported or not|No|
|Module Size|238MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131492636-714c697c-3275-4c8c-a83a-cf971a91ba98.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- YOLOv3 is a one-stage detector proposed by Joseph Redmon and Ali Farhadi, which can reach comparable accuracy but twice as fast as traditional methods. This module is based on YOLOv3, trained on Baidu Pedestrian Dataset, and can be used for pedestrian detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install yolov3_darknet53_pedestrian
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run yolov3_darknet53_pedestrian --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
pedestrian_detector = hub.Module(name="yolov3_darknet53_pedestrian")
result = pedestrian_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = pedestrian_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='yolov3_pedestrian_detect_output',
score_thresh=0.2,
visualization=True)
```
- Detection API, detect positions of all pedestrians in the image (a usage sketch follows the return description below)
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
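- As referenced above, a sketch that raises the confidence threshold from the default 0.2 and saves visualized results (the output directory name is illustrative):
- ```python
  import paddlehub as hub
  import cv2

  pedestrian_detector = hub.Module(name="yolov3_darknet53_pedestrian")
  result = pedestrian_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')],
                                                score_thresh=0.5,
                                                visualization=True,
                                                output_dir='pedestrian_vis')

  for obj in result[0]['data']:
      print(obj['label'], obj['confidence'])
  ```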
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m yolov3_darknet53_pedestrian
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/yolov3_darknet53_pedestrian"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.2
Fix the problem of reading numpy
- ```shell
$ hub install yolov3_darknet53_pedestrian==1.0.2
```
# yolov3_darknet53_vehicles
|Module Name|yolov3_darknet53_vehicles|
| :--- | :---: |
|Category|object detection|
|Network|YOLOv3|
|Dataset|Baidu Vehicle Dataset|
|Fine-tuning supported or not|No|
|Module Size|238MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131529643-70ee93fc-c9f3-40df-a981-901074683beb.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- YOLOv3 is a one-stage detector proposed by Joseph Redmon and Ali Farhadi, which can reach comparable accuracy but twice as fast as traditional methods. This module is based on YOLOv3, trained on Baidu Vehicle Dataset, and can be used for vehicle detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install yolov3_darknet53_vehicles
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run yolov3_darknet53_vehicles --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
vehicles_detector = hub.Module(name="yolov3_darknet53_vehicles")
result = vehicles_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = vehicles_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='yolov3_vehicles_detect_output',
score_thresh=0.2,
visualization=True)
```
- Detection API, detect positions of all vehicles in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m yolov3_darknet53_vehicles
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/yolov3_darknet53_vehicles"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.2
Fix the problem of reading numpy
- ```shell
$ hub install yolov3_darknet53_vehicles==1.0.2
```
# yolov3_darknet53_venus
|Module Name|yolov3_darknet53_venus|
| :--- | :---: |
|Category|object detection|
|Network|YOLOv3|
|Dataset|Baidu Detection Dataset|
|Fine-tuning supported or not|Yes|
|Module Size|501MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Module Introduction
- YOLOv3 is a one-stage detector proposed by Joseph Redmon and Ali Farhadi, which can reach comparable accuracy but twice as fast as traditional methods. This module is based on YOLOv3 and trained on the Baidu Detection Dataset, which contains about 1.7 million images and more than 10 million annotated boxes; it improves accuracy on 8 test datasets by 5.36% on average and can be used as a pre-trained model for object detection transfer learning.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install yolov3_darknet53_venus
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、API
- ```python
def context(trainable=True,
pretrained=True,
get_prediction=False)
```
- Extract features for transfer learning (a usage sketch follows the return description below).
- **Parameters**
- trainable(bool): whether parameters trainable or not
- pretrained (bool): whether load pretrained model or not
- get\_prediction (bool): whether perform prediction
- **Return**
- inputs (dict): inputs, a dict including two keys: "image" and "im\_size"
- image (Variable): image variable
- im\_size (Variable): image size
- outputs (dict): model output
- program for transfer learning
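- A minimal usage sketch (assuming, as described above, that `context` returns the inputs dict, the outputs dict and the program, in that order):
- ```python
  import paddlehub as hub

  module = hub.Module(name="yolov3_darknet53_venus")
  # Build the program with pretrained, trainable weights for transfer learning
  inputs, outputs, program = module.context(trainable=True,
                                            pretrained=True,
                                            get_prediction=False)

  image = inputs['image']      # image variable
  im_size = inputs['im_size']  # image size variable
  ```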
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
score_thresh=0.5,
visualization=True,
output_dir='detection_result')
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
- **Return**
- res (list\[dict\]): detection results, each element in the list is dict
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
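- A minimal direct-prediction sketch using the API above:
- ```python
  import paddlehub as hub
  import cv2

  object_detector = hub.Module(name="yolov3_darknet53_venus")
  result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')],
                                            score_thresh=0.5)

  # Print detections of the first (and only) input image
  print(result[0]['data'])
  ```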
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install yolov3_darknet53_venus==1.0.0
```
# yolov3_mobilenet_v1_coco2017
|Module Name|yolov3_mobilenet_v1_coco2017|
| :--- | :---: |
|Category|object detection|
|Network|YOLOv3|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|96MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131506781-b4ecb77b-5ab1-4795-88da-5f547f7f7f9c.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- YOLOv3 is a one-stage detector proposed by Joseph Redmon and Ali Farhadi, which can reach comparable accuracy but twice as fast as traditional methods. This module is based on YOLOv3, trained on COCO2017, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install yolov3_mobilenet_v1_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run yolov3_mobilenet_v1_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="yolov3_mobilenet_v1_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m yolov3_mobilenet_v1_coco2017
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/yolov3_mobilenet_v1_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.2
Fix the problem of reading numpy
- ```shell
$ hub install yolov3_mobilenet_v1_coco2017==1.0.2
```
# yolov3_resnet34_coco2017
|Module Name|yolov3_resnet34_coco2017|
| :--- | :---: |
|Category|object detection|
|Network|YOLOv3|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|164MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131506781-b4ecb77b-5ab1-4795-88da-5f547f7f7f9c.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- YOLOv3 is a one-stage detector proposed by Joseph Redmon and Ali Farhadi, which can reach comparable accuracy but twice as fast as traditional methods. This module is based on YOLOv3, trained on COCO2017, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install yolov3_resnet34_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run yolov3_resnet34_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="yolov3_resnet34_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m yolov3_resnet34_coco2017
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/yolov3_resnet34_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.2
Fix the problem of reading numpy
- ```shell
$ hub install yolov3_resnet34_coco2017==1.0.2
```
# yolov3_resnet50_vd_coco2017
|Module Name|yolov3_resnet50_vd_coco2017|
| :--- | :---: |
|Category|object detection|
|Network|YOLOv3|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|178MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131506781-b4ecb77b-5ab1-4795-88da-5f547f7f7f9c.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- YOLOv3 is a one-stage detector proposed by Joseph Redmon and Ali Farhadi, which can reach comparable accuracy but twice as fast as traditional methods. This module is based on YOLOv3, trained on COCO2017, and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install yolov3_resnet50_vd_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run yolov3_resnet50_vd_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="yolov3_resnet50_vd_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API, detect positions of all objects in image
- **Parameters**
- paths (list[str]): image path;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): the size of batch;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): save path of images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save model to specific path
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether save parameters into one file
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m yolov3_resnet50_vd_coco2017
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/yolov3_resnet50_vd_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
* 1.0.2
Fix the problem of reading numpy
- ```shell
$ hub install yolov3_resnet50_vd_coco2017==1.0.2
```
# FCN_HRNet_W18_Face_Seg
|Module Name|FCN_HRNet_W18_Face_Seg|
| :--- | :---: |
|Category|image segmentation|
|Network|FCN_HRNet_W18|
|Dataset|-|
|Fine-tuning supported or not|No|
|Module Size|56MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/88155299a7534f1084f8467a4d6db7871dc4729627d3471c9129d316dc4ff9bc" width='70%' hspace='10'/> <br />
</p>
- ### Module Introduction
- This module is based on the FCN_HRNet_W18 model and can be used to segment the face region in an image.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install FCN_HRNet_W18_Face_Seg
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="FCN_HRNet_W18_Face_Seg")
result = model.Segmentation(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = model.Segmentation(paths=['/PATH/TO/IMAGE'])
```
- ### 2、API
- ```python
def Segmentation(images=None,
paths=None,
batch_size=1,
output_dir='output',
visualization=False):
```
- Face segmentation API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- paths (list[str]): image path;
- batch_size (int): the size of batch;
- output_dir (str): save path of images;
- visualization (bool): Whether to save the results as picture files;
**NOTE:** choose one parameter to provide data from paths and images
- **Return**
- res (list\[numpy.ndarray\]): result list,ndarray.shape is \[H, W, C\]
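- A sketch of saving the returned arrays to disk (assuming each array is an 8-bit BGR image that cv2 can write directly):
- ```python
  import paddlehub as hub
  import cv2

  model = hub.Module(name="FCN_HRNet_W18_Face_Seg")
  result = model.Segmentation(images=[cv2.imread('/PATH/TO/IMAGE')])

  # Each element is an [H, W, C] array; save the segmented face of the first image
  cv2.imwrite('face_seg_result.png', result[0])
  ```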
## IV.Release Note
* 1.0.0
First release
- ```shell
$ hub install FCN_HRNet_W18_Face_Seg==1.0.0
```
# Vehicle_License_Plate_Recognition
|Module Name|Vehicle_License_Plate_Recognition|
| :--- | :---: |
|Category|text recognition|
|Network|-|
|Dataset|CCPD|
|Fine-tuning supported or not|No|
|Module Size|111MB|
|Latest update date|2021-03-22|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://ai-studio-static-online.cdn.bcebos.com/35a3dab32ac948549de41afba7b51a5770d3f872d60b437d891f359a5cef8052" width = "450" height = "300" hspace='10'/> <br />
</p>
- ### Module Introduction
- Vehicle_License_Plate_Recognition is a module for license plate recognition, trained on the CCPD dataset. The model can detect the position of the license plate and recognize its contents.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.4
- paddleocr >= 2.0.2
- ### 2、Installation
- ```shell
$ hub install Vehicle_License_Plate_Recognition
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
model = hub.Module(name="Vehicle_License_Plate_Recognition")
result = model.plate_recognition(images=[cv2.imread('/PATH/TO/IMAGE')])
```
- ### 2、API
- ```python
def plate_recognition(images)
```
- Prediction API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- **Return**
- results (list\[dict\]): recognition results; each element is a dict containing the keys 'license' (the recognized plate number) and 'bbox' (the plate's bounding box).
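- A sketch of consuming the returned fields (key names follow the return description above):
- ```python
  import paddlehub as hub
  import cv2

  model = hub.Module(name="Vehicle_License_Plate_Recognition")
  results = model.plate_recognition(images=[cv2.imread('/PATH/TO/IMAGE')])

  for item in results:
      print("license: {}, bbox: {}".format(item['license'], item['bbox']))
  ```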
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of text recognition.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m Vehicle_License_Plate_Recognition
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/Vehicle_License_Plate_Recognition"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install Vehicle_License_Plate_Recognition==1.0.0
```
# german_ocr_db_crnn_mobile
|Module Name|german_ocr_db_crnn_mobile|
| :--- | :---: |
|Category|text recognition|
|Network|Differentiable Binarization+CRNN|
|Dataset|icdar2015|
|Fine-tuning supported or not|No|
|Module Size|3.8MB|
|Latest update date|2021-02-26|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/133761772-8c47f25f-0d95-45b4-8075-867dbbd14c86.jpg" width="80%" hspace='10'/> <br />
</p>
- ### Module Introduction
- german_ocr_db_crnn_mobile Module is used to identify German characters in pictures. It first obtains the text boxes detected by [chinese_text_detection_db_mobile Module](), then recognizes the German characters and carries out angle classification on these text boxes. CRNN (Convolutional Recurrent Neural Network) is adopted as the final recognition algorithm. This Module is an ultra-lightweight German OCR model that supports direct prediction.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- shapely
- pyclipper
- ```shell
$ pip install shapely pyclipper
```
- **This Module relies on the third-party libraries, shapely and pyclipper. Please install shapely and pyclipper before using this Module.**
- ### 2、Installation
- ```shell
$ hub install german_ocr_db_crnn_mobile
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run german_ocr_db_crnn_mobile --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
ocr = hub.Module(name="german_ocr_db_crnn_mobile", enable_mkldnn=True) # MKLDNN acceleration is only available on CPU
result = ocr.recognize_text(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = ocr.recognize_text(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def __init__(text_detector_module=None, enable_mkldnn=False)
```
- Construct the GermanOCRDBCRNNMobile object
- **Parameters**
- text_detector_module(str): Name of text detection module in PaddleHub Module, if set to None, [chinese_text_detection_db_mobile Module]() will be used by default. It serves to detect the text in the picture.
- enable_mkldnn(bool): Whether to enable MKLDNN for CPU computing acceleration. This parameter is valid only when the CPU is running. The default is False.
- ```python
def recognize_text(images=[],
paths=[],
use_gpu=False,
output_dir='ocr_result',
visualization=False,
box_thresh=0.5,
text_thresh=0.5,
angle_classification_thresh=0.9)
```
- Prediction API, detecting the position of all German text in the input image (a sketch of consuming the results follows the return description below).
- **Parameter**
- paths (list[str]): image path
- images (list[numpy.ndarray]): image data, ndarray.shape is in the format [H, W, C], BGR;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- box_thresh (float): The confidence threshold for text box detection;
- text_thresh (float): The confidence threshold for German text recognition;
- angle_classification_thresh(float): The confidence threshold for text angle classification
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
- **Return**
- res (list[dict]): The list of recognition results, where each element is dict and each field is:
- data (list[dict]): recognition results, each element in the list is dict and each field is:
- text(str): Recognized texts
- confidence(float): The confidence of the results
- text_box_position(list): The pixel coordinates of the text box in the original picture, a 4*2 matrix representing the coordinates of the lower left, lower right, upper right and upper left vertices of the text box in turn, data is [] if there's no result
- save_path (str, optional): Save path of the result, save_path is '' if no image is saved.
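- As referenced above, a sketch of traversing the nested result structure:
- ```python
  import paddlehub as hub
  import cv2

  ocr = hub.Module(name="german_ocr_db_crnn_mobile")
  results = ocr.recognize_text(images=[cv2.imread('/PATH/TO/IMAGE')])

  for res in results:            # one entry per input image
      for item in res['data']:   # one entry per detected text box
          print(item['text'], item['confidence'], item['text_box_position'])
  ```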
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of text recognition.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m german_ocr_db_crnn_mobile
```
- The servitization API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
data = cv2.imencode('.jpg', image)[1]
return base64.b64encode(data.tostring()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/german_ocr_db_crnn_mobile"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install german_ocr_db_crnn_mobile==1.0.0
```
# japan_ocr_db_crnn_mobile
|Module Name|japan_ocr_db_crnn_mobile|
| :--- | :---: |
|Category|text recognition|
|Network|Differentiable Binarization+CRNN|
|Dataset|icdar2015|
|Fine-tuning supported or not|No|
|Module Size|8MB|
|Latest update date|2021-04-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/133761650-91f24c1e-f437-47b1-8cfb-a074e7150ff5.jpg" width='80%' hspace='10'/> <br />
</p>
- ### Module Introduction
- japan_ocr_db_crnn_mobile Module is used to identify Japanese characters in pictures. It first obtains the text box detected by [chinese_text_detection_db_mobile Module](), then identifies the Japanese characters and carries out angle classification to these text boxes. CRNN(Convolutional Recurrent Neural Network) is adopted as the final recognition algorithm. This Module is an ultra-lightweight Japanese OCR model that supports direct prediction.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- shapely
- pyclipper
- ```shell
$ pip install shapely pyclipper
```
- **This Module relies on the third-party libraries shapely and pyclipper. Please install them before using this Module.**
- ### 2、Installation
- ```shell
$ hub install japan_ocr_db_crnn_mobile
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run japan_ocr_db_crnn_mobile --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
ocr = hub.Module(name="japan_ocr_db_crnn_mobile", enable_mkldnn=True) # MKLDNN acceleration is only available on CPU
result = ocr.recognize_text(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = ocr.recognize_text(paths=['/PATH/TO/IMAGE'])
```
- ### 3、API
- ```python
def __init__(text_detector_module=None, enable_mkldnn=False)
```
- Construct the JapanOCRDBCRNNMobile object
- **Parameters**
- text_detector_module(str): Name of text detection module in PaddleHub Module, if set to None, [chinese_text_detection_db_mobile Module]() will be used by default. It serves to detect the text in the picture.
- enable_mkldnn(bool): Whether to enable MKLDNN for CPU computing acceleration. This parameter is valid only when the CPU is running. The default is False.
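- As a hedged sketch of these two options (constructor arguments are forwarded through hub.Module, as in the example above; the detector name shown is the documented default):
- ```python
  import paddlehub as hub

  # Enable MKLDNN acceleration for CPU inference (mirrors the earlier example).
  ocr = hub.Module(name="japan_ocr_db_crnn_mobile", enable_mkldnn=True)

  # Explicitly select the text detection module; this is the documented default,
  # so it is equivalent to leaving text_detector_module as None.
  ocr_explicit = hub.Module(name="japan_ocr_db_crnn_mobile",
                            text_detector_module="chinese_text_detection_db_mobile")
  ```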
- ```python
def recognize_text(images=[],
paths=[],
use_gpu=False,
output_dir='ocr_result',
visualization=False,
box_thresh=0.5,
text_thresh=0.5,
angle_classification_thresh=0.9)
```
- Prediction API, detecting the position of all Japanese text in the input image.
- **Parameters**
- paths (list[str]): image path
- images (list[numpy.ndarray]): image data, ndarray.shape is in the format [H, W, C], BGR;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- box_thresh (float): The confidence threshold for text box detection;
- text_thresh (float): The confidence threshold for Japanese text recognition;
- angle_classification_thresh (float): The confidence threshold for text angle classification;
- visualization (bool): Whether to save the results as picture files;
- output_dir (str): save path of images;
- **Return**
- res (list[dict]): The list of recognition results, where each element is dict and each field is:
- data (list[dict]): recognition results, each element in the list is dict and each field is:
- text(str): Recognized texts
- confidence(float): The confidence of the results
- text_box_position(list): The pixel coordinates of the text box in the original picture, a 4*2 matrix representing the coordinates of the lower left, lower right, upper right and upper left vertices of the text box in turn, data is [] if there's no result
- save_path (str, optional): Save path of the result, save_path is '' if no image is saved.
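- Because text_box_position holds the four corner points of every box, the detections can also be visualized manually. The snippet below is a minimal sketch (the output file name and drawing color are arbitrary choices):
- ```python
  import cv2
  import numpy as np
  import paddlehub as hub

  ocr = hub.Module(name="japan_ocr_db_crnn_mobile")
  image = cv2.imread('/PATH/TO/IMAGE')
  result = ocr.recognize_text(images=[image])

  # Draw every detected text box (a 4x2 list of corner coordinates) on the image.
  for rec in result[0]['data']:
      box = np.array(rec['text_box_position'], dtype=np.int32).reshape(-1, 1, 2)
      cv2.polylines(image, [box], isClosed=True, color=(0, 255, 0), thickness=2)
  cv2.imwrite('ocr_boxes.jpg', image)  # illustrative output path
  ```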
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of text recognition.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m japan_ocr_db_crnn_mobile
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/japan_ocr_db_crnn_mobile"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
First release
- ```shell
$ hub install japan_ocr_db_crnn_mobile==1.0.0
```