# MindSpore-based Inference Service Deployment
<!-- TOC -->

- [MindSpore-based Inference Service Deployment](#mindspore-based-inference-service-deployment)
    - [Overview](#overview)
    - [Starting Serving](#starting-serving)
    - [Application Example](#application-example)
        - [Exporting Model](#exporting-model)
        - [Starting Serving Inference](#starting-serving-inference)
        - [Client Samples](#client-samples)
            - [Python Client Sample](#python-client-sample)
            - [C++ Client Sample](#cpp-client-sample)

<!-- /TOC -->
<a href="https://gitee.com/mindspore/docs/blob/master/tutorials/source_en/advanced_use/serving.md" target="_blank"><img src="../_static/logo_source.png"></a>
## Overview
MindSpore Serving is a lightweight and high-performance service module that helps MindSpore developers efficiently deploy online inference services in the production environment. After completing model training using MindSpore, you can export the MindSpore model and use MindSpore Serving to create an inference service for the model. Currently, only Ascend 910 is supported.
## Starting Serving
After MindSpore is installed using `pip`, the Serving executable program is stored in `/{your python path}/lib/python3.7/site-packages/mindspore/ms_serving`.
Run the following command to start Serving:
```bash
ms_serving [--help] [--model_path <MODEL_PATH>] [--model_name <MODEL_NAME>]
[--port <PORT>] [--device_id <DEVICE_ID>]
```
Parameters are described as follows:
|Parameter|Attribute|Function|Parameter Type|Default Value|Value Range|
|---|---|---|---|---|---|
|`--help`|Optional|Displays the help information about the startup command.|-|-|-|
|`--model_path=<MODEL_PATH>`|Mandatory|Path for storing the model to be loaded.|String|Null|-|
|`--model_name=<MODEL_NAME>`|Mandatory|Name of the model file to be loaded.|String|Null|-|
|`--port=<PORT>`|Optional|Specifies the external Serving port number.|Integer|5500|1–65535|
|`--device_id=<DEVICE_ID>`|Optional|Specifies the device ID to be used.|Integer|0|0–7|
> Before running the startup command, add the path `/{your python path}/lib:/{your python path}/lib/python3.7/site-packages/mindspore/lib` to the environment variable `LD_LIBRARY_PATH`.
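For example, on a host where Python 3.7.5 lives under `/usr/local/python3.7.5` and the model exported in the application example below is stored in `/tmp/model` (both placeholder paths; substitute your own), a launch sequence might look as follows:

```bash
# Make the MindSpore shared libraries visible to the dynamic linker (example paths).
export LD_LIBRARY_PATH=/usr/local/python3.7.5/lib:/usr/local/python3.7.5/lib/python3.7/site-packages/mindspore/lib:${LD_LIBRARY_PATH}

# Serve tensor_add.mindir from /tmp/model on device 0 at the default port 5500.
ms_serving --model_path=/tmp/model --model_name=tensor_add.mindir --device_id=0
```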
## Application Example
The following uses a simple network as an example to describe how to use MindSpore Serving.
### Exporting Model
Use [add_model.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/export_model/add_model.py) to build a network with only the Add operator and export the MindSpore inference deployment model.
```bash
python add_model.py
```
Execute the script to generate the `tensor_add.mindir` file. The input of the model is two 2-D tensors with shape [2, 2], and the output is the sum of the two input tensors.
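The linked `add_model.py` is the authoritative script. For orientation, a minimal sketch of such an export script is shown below; the operator and serialization API names (`P.TensorAdd`, `export` with `file_format='MINDIR'`) reflect the MindSpore version this tutorial targets and may differ in later releases:

```python
import numpy as np
import mindspore.context as context
import mindspore.nn as nn
import mindspore.ops.operations as P
from mindspore import Tensor
from mindspore.train.serialization import export

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")


class Net(nn.Cell):
    """A network containing only the Add operator."""
    def __init__(self):
        super(Net, self).__init__()
        self.add = P.TensorAdd()

    def construct(self, x_, y_):
        return self.add(x_, y_)


def export_net():
    """Export the Add network as tensor_add.mindir with two [2, 2] float32 inputs."""
    x = np.ones([2, 2]).astype(np.float32)
    y = np.ones([2, 2]).astype(np.float32)
    add = Net()
    export(add, Tensor(x), Tensor(y), file_name='tensor_add', file_format='MINDIR')


if __name__ == "__main__":
    export_net()
```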
### Starting Serving Inference
```bash
ms_serving --model_path={model directory} --model_name=tensor_add.mindir
```
If the server prints the `MS Serving Listening on 0.0.0.0:5500` log, the Serving has loaded the inference model.
### Client Samples
#### <span name="python-client-sample">Python Client Sample</span>
Obtain [ms_client.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/python_client/ms_client.py) and start the Python client.
```bash
python ms_client.py
```
If the following information is displayed, the Serving has correctly executed the inference of the Add network.
```
ms client received:
[[2. 2.]
[2. 2.]]
```
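The linked `ms_client.py` is the reference implementation. The sketch below shows the typical structure of such a client; it assumes the generated Python stubs are named `ms_service_pb2`/`ms_service_pb2_grpc` and that `PredictReply` carries its output tensors in a repeated `result` field, mirroring the message types used in the C++ sample below:

```python
import grpc
import numpy as np
import ms_service_pb2
import ms_service_pb2_grpc


def run():
    # Connect to the Serving instance started earlier (default port 5500).
    channel = grpc.insecure_channel('localhost:5500')
    stub = ms_service_pb2_grpc.MSServiceStub(channel)

    request = ms_service_pb2.PredictRequest()
    # The Add network takes two inputs; fill the same [2, 2] float32 tensor twice.
    for _ in range(2):
        tensor = request.data.add()
        tensor.tensor_shape.dims.extend([2, 2])
        tensor.tensor_type = ms_service_pb2.MS_FLOAT32
        tensor.data = np.ones([2, 2], np.float32).tobytes()

    reply = stub.Predict(request)
    print("ms client received:")
    for result in reply.result:  # assumed reply field name
        print(np.frombuffer(result.data, np.float32).reshape(2, 2))


if __name__ == '__main__':
    run()
```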
#### <span name="cpp-client-sample">C++ Client Sample</span>
1. Obtain an executable client sample program.

    Download the [MindSpore source code](https://gitee.com/mindspore/mindspore). You can use either of the following methods to compile and obtain the client sample program:
    + When MindSpore is compiled using the source code, the Serving C++ client sample program is generated. You can find the `ms_client` executable program in the `build/mindspore/serving/example/cpp_client` directory.
    + Independent compilation

        Preinstall [gRPC](https://gRPC.io).

        Run the following command in the MindSpore source code path to compile a client sample program:
        ```bash
        cd mindspore/serving/example/cpp_client
        mkdir build && cd build
        cmake -D GRPC_PATH={grpc_install_dir} ..
        make
        ```
        In the preceding command, `{grpc_install_dir}` indicates the gRPC installation path. Replace it with the actual gRPC installation path.

2. Start the client.

    Execute `ms_client` to send an inference request to the Serving:
    ```bash
    ./ms_client --target=localhost:5500
    ```
    If the following information is displayed, the Serving has correctly executed the inference of the Add network:
    ```
    Compute [[1, 2], [3, 4]] + [[1, 2], [3, 4]]
    Add result is 2 4 6 8
    client received: RPC OK
    ```
The client code consists of the following parts:
1. Implement the client based on MSService::Stub and create a client instance.

    ```cpp
    class MSClient {
     public:
      explicit MSClient(std::shared_ptr<Channel> channel) : stub_(MSService::NewStub(channel)) {}

     private:
      std::unique_ptr<MSService::Stub> stub_;
    };

    MSClient client(grpc::CreateChannel(target_str, grpc::InsecureChannelCredentials()));
    ```
2. Build the request input parameter `Request`, output parameter `Reply`, and gRPC client `Context` based on the actual network input.

    ```cpp
    PredictRequest request;
    PredictReply reply;
    ClientContext context;

    // Construct the input tensor.
    Tensor data;

    // Set the shape to [2, 2] to match the exported model.
    TensorShape shape;
    shape.add_dims(2);
    shape.add_dims(2);
    *data.mutable_tensor_shape() = shape;

    // Set the data type.
    data.set_tensor_type(ms_serving::MS_FLOAT32);

    // Set the data; the length is given in bytes.
    std::vector<float> input_data{1, 2, 3, 4};
    data.set_data(input_data.data(), input_data.size() * sizeof(float));

    // The Add network takes two inputs, so add the tensor to the request twice.
    *request.add_data() = data;
    *request.add_data() = data;
    ```
3. Call the gRPC API to communicate with the Serving that has been started, and obtain the return value; a sketch of checking that return value follows this list.

    ```cpp
    Status status = stub_->Predict(&context, request, &reply);
    ```
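The returned `Status` should be checked before the reply is used. The fragment below is a minimal sketch of that step; `Status::ok()`, `Status::error_code()`, and `Status::error_message()` are standard gRPC C++ APIs, while the repeated `result` field on `PredictReply` is an assumption mirroring the request's `data` field:

```cpp
if (status.ok()) {
  std::cout << "Add result is ";
  // Assumption: reply.result() is the repeated output tensor field.
  for (const auto &tensor : reply.result()) {
    // Reinterpret the raw tensor bytes as float32 values and print them.
    const float *values = reinterpret_cast<const float *>(tensor.data().data());
    size_t count = tensor.data().size() / sizeof(float);
    for (size_t i = 0; i < count; ++i) {
      std::cout << values[i] << " ";
    }
  }
  std::cout << std::endl << "client received: RPC OK" << std::endl;
} else {
  std::cout << status.error_code() << ": " << status.error_message() << std::endl;
}
```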
For details about the complete code, see [ms_client](https://gitee.com/mindspore/mindspore/blob/master/serving/example/cpp_client/ms_client.cc).