Commit 7f9274d9 authored by KP, committed by GitHub

Merge pull request #1841 from rainyfly/fix_ernie_tiny_doc

delete doc
# ernie_tiny
|Module Name|ernie_tiny|
| :--- | :---: |
|Category|Image - Image Generation|
|Network|SPADEGenerator|
|Dataset|coco_stuff|
|Fine-tuning supported or not|No|
|Module Size|74MB|
|Latest update date|2021-12-14|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://camo.githubusercontent.com/22e94b0c7278af08da8c475a3d968ba2f3cd565fcb2ad6b9a165c8a65f2d12f8/68747470733a2f2f61692d73747564696f2d7374617469632d6f6e6c696e652e63646e2e626365626f732e636f6d2f39343733313032336561623934623162393762396361383062643362333038333063393138636631363264303436626438383534306464613435303239356133" width = "90%" hspace='10'/>
<br />
</p>
- ### Module Introduction
- This module uses Pix2PixHD, a pixel-level style-transfer network, to generate photo-realistic images from input semantic segmentation labels. To avoid the loss of label semantic information caused by the normalization layers, a SPADE (Spatially-Adaptive Normalization) module is added to the Pix2PixHD generator: convolution layers preserve the spatial dimensions of the scale and bias parameters learned for normalization, which improves the quality of the generated images. Semantic label images can be obtained from the [coco_stuff dataset](https://github.com/nightrome/cocostuff), or you can create custom label images by following [this project in the PaddleGAN repo](https://github.com/PaddlePaddle/PaddleGAN/blob/87537ad9d4eeda17eaa5916c6a585534ab989ea8/docs/zh_CN/tutorials/photopen.md).
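- As a reading aid, the following is a minimal, hypothetical sketch of the spatially-adaptive normalization idea described above (parameter-free normalization whose per-pixel scale and bias come from convolutions over the segmentation map). It is illustrative only and does not reproduce the actual PaddleGAN SPADEGenerator code; all class and variable names are made up for this example.
- ```python
# Illustrative SPADE-style layer: normalize features without learned affine
# parameters, then modulate them with per-pixel scale (gamma) and bias (beta)
# predicted by small convolutions over the (resized) segmentation map.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

class SPADE(nn.Layer):
    def __init__(self, norm_nc, label_nc, hidden_nc=128):
        super().__init__()
        # parameter-free normalization (no learnable scale/bias)
        self.param_free_norm = nn.BatchNorm2D(norm_nc, weight_attr=False, bias_attr=False)
        self.shared = nn.Sequential(
            nn.Conv2D(label_nc, hidden_nc, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2D(hidden_nc, norm_nc, 3, padding=1)
        self.beta = nn.Conv2D(hidden_nc, norm_nc, 3, padding=1)

    def forward(self, x, segmap):
        normalized = self.param_free_norm(x)
        # resize the label map to the feature map's spatial size
        segmap = F.interpolate(segmap, size=x.shape[2:], mode='nearest')
        actv = self.shared(segmap)
        # spatially varying scale and bias keep the label layout information
        return normalized * (1 + self.gamma(actv)) + self.beta(actv)

# toy usage: 64 feature channels, 12 label channels
features = paddle.randn([1, 64, 32, 32])
labels = paddle.randn([1, 12, 64, 64])
out = SPADE(64, 12)(features, labels)
print(out.shape)  # [1, 64, 32, 32]
```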
## II.Installation
- ### 1、Environmental Dependence
- ppgan
- ### 2、Installation
- ```shell
$ hub install photopen
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
# Read from a file
$ hub run photopen --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
module = hub.Module(name="photopen")
input_path = ["/PATH/TO/IMAGE"]
# Read from a file
module.photo_transfer(paths=input_path, output_dir='./transfer_result/', use_gpu=True)
```
- ### 3、API
- ```python
photo_transfer(images=None, paths=None, output_dir='./transfer_result/', use_gpu=False, visualization=True):
```
- Image style-transfer generation API.
- **Parameters**
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format \[H, W, C\];<br/>
- paths (list\[str\]): image paths;<br/>
- output\_dir (str): output directory for saving results; <br/>
- use\_gpu (bool): use GPU or not;<br/>
- visualization (bool): whether to save the results to a local folder
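- A hypothetical end-to-end call using the `images` parameter instead of `paths`; the input array is assumed to be a BGR, HWC semantic label image read with OpenCV, and the path is a placeholder:
- ```python
import cv2
import paddlehub as hub

module = hub.Module(name="photopen")
# read a semantic label image as a BGR HWC ndarray
label_map = cv2.imread("/PATH/TO/IMAGE")
results = module.photo_transfer(
    images=[label_map],
    output_dir='./transfer_result/',
    use_gpu=False,
    visualization=True)
```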
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of image generation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m photopen
```
- The image generation serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA\_VISIBLE\_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/photopen"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
## V.Release Note
* 1.0.0
  First release
- ```shell
$ hub install ernie_tiny==1.1.0
```
# ernie_tiny
|Module Name|ernie_tiny|
| :--- | :---: |
|Category|object detection|
|Network|faster_rcnn|
|Dataset|COCO2017|
|Fine-tuning supported or not|No|
|Module Size|161MB|
|Latest update date|2021-03-15|
|Data indicators|-|
## I.Basic Information
- ### Application Effect Display
- Sample results:
<p align="center">
<img src="https://user-images.githubusercontent.com/22424850/131504887-d024c7e5-fc09-4d6b-92b8-4d0c965949d0.jpg" width='50%' hspace='10'/>
<br />
</p>
- ### Module Introduction
- Faster_RCNN is a two-stage detector that consists of feature extraction, proposal generation, classification and refinement stages. This module is trained on the COCO2017 dataset and can be used for object detection.
## II.Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.6.2
- paddlehub >= 1.6.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install faster_rcnn_resnet50_fpn_coco2017
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)
## III.Module API Prediction
- ### 1、Command line Prediction
- ```shell
$ hub run faster_rcnn_resnet50_fpn_coco2017 --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
object_detector = hub.Module(name="faster_rcnn_resnet50_fpn_coco2017")
result = object_detector.object_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
# or
# result = object_detector.object_detection(paths=['/PATH/TO/IMAGE'])
```
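- A hypothetical variation of the call above, using the optional parameters documented in the API section below (all values are illustrative):
- ```python
import paddlehub as hub

object_detector = hub.Module(name="faster_rcnn_resnet50_fpn_coco2017")
# keep only boxes above 0.7 confidence and save visualizations to a custom directory
result = object_detector.object_detection(
    paths=['/PATH/TO/IMAGE'],
    batch_size=1,
    use_gpu=False,
    score_thresh=0.7,
    output_dir='detection_result',
    visualization=True)
```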
- ### 3、API
- ```python
def object_detection(paths=None,
images=None,
batch_size=1,
use_gpu=False,
output_dir='detection_result',
score_thresh=0.5,
visualization=True)
```
- Detection API that detects the positions of all objects in an image.
- **Parameters**
- paths (list[str]): image paths;
- images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
- batch_size (int): batch size;
- use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
- output_dir (str): output directory for result images;
- score\_thresh (float): confidence threshold;<br/>
- visualization (bool): whether to save the results as picture files;
**NOTE:** provide image data through either `paths` or `images`; choose one of the two.
- **Return**
- res (list\[dict\]): results
- data (list): detection results, each element in the list is dict
- confidence (float): the confidence of the result
- label (str): label
- left (int): the upper left corner x coordinate of the detection box
- top (int): the upper left corner y coordinate of the detection box
- right (int): the lower right corner x coordinate of the detection box
- bottom (int): the lower right corner y coordinate of the detection box
- save\_path (str, optional): output path for saving results
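- A hypothetical loop over the returned structure, continuing from the prediction example above and using the documented keys:
- ```python
# `result` is the return value of object_detector.object_detection(...) above
for res in result:
    for det in res['data']:
        print("{} ({:.2f}): left={}, top={}, right={}, bottom={}".format(
            det['label'], det['confidence'],
            det['left'], det['top'], det['right'], det['bottom']))
```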
- ```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
combined=True)
```
- Save the model to a specific path.
- **Parameters**
- dirname: output dir for saving model
- model\_filename: filename for saving model
- params\_filename: filename for saving parameters
- combined: whether to save parameters into one file
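- A hypothetical call that exports the loaded module to a local directory; the directory name is illustrative, not prescribed by this document:
- ```python
import paddlehub as hub

object_detector = hub.Module(name="faster_rcnn_resnet50_fpn_coco2017")
# export the model and parameters into ./inference_model (parameters combined into one file)
object_detector.save_inference_model(
    dirname="inference_model",
    combined=True)
```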
## IV.Server Deployment
- PaddleHub Serving can deploy an online service of object detection.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m faster_rcnn_resnet50_fpn_coco2017
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result
- ```python
import requests
import json
import cv2
import base64
def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')
# Send an HTTP request
data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/faster_rcnn_resnet50_fpn_coco2017"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
# print prediction results
print(r.json()["results"])
```
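- Assuming the serving response mirrors the structure returned by `object_detection()` above (an assumption, not stated in this document), the returned boxes can be read back like this, continuing from the request above:
- ```python
# assumption: each entry in r.json()["results"] carries a 'data' list of boxes
for res in r.json()["results"]:
    for det in res.get("data", []):
        print(det.get("label"), det.get("confidence"))
```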
## V.Release Note
* 1.0.0
First release
* 1.0.1
Fix the problem of reading numpy
- ```shell
$ hub install ernie_tiny==1.1.0
```