Unverified · Commit 734f39fb authored by Jiawei Wang, committed by GitHub

Merge pull request #806 from PaddlePaddle/wangjiawei04-patch-1

Update INFERENCE_TO_SERVING_CN.md
@@ -2,6 +2,20 @@
([简体中文](./INFERENCE_TO_SERVING_CN.md)|English)
Before converting an inference model to a Serving model, we should know the following:

**inference_model_dir**: the directory of the Paddle inference model

**serving_server_dir**: the directory where the server-side configuration is saved

**serving_client_dir**: the directory where the client-side configuration is saved

**model_filename**: the model description file, whose default name is `__model__`; if your model file does not use the default name, set `model_filename` explicitly

**params_filename**: during `save_inference_model`, each Variable is saved as a separate file by default. If your inference model's parameters are compressed into a single file, set `params_filename` explicitly
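These two layouts come from how the model was exported in the first place. A minimal sketch of both export modes, assuming the Paddle 1.x `fluid` API (the network, directory names, and file names below are illustrative, not part of this document):

``` python
import paddle.fluid as fluid

# Build a trivial network so the sketch is self-contained.
x = fluid.data(name="x", shape=[None, 13], dtype="float32")
y = fluid.layers.fc(input=x, size=1)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

# Default layout: the description is saved as `__model__` and every
# Variable gets its own file, so the converter needs no extra arguments.
fluid.io.save_inference_model("your_inference_model", ["x"], [y], exe)

# Combined layout: custom description name plus all parameters in a single
# file; converting this model requires model_filename/params_filename.
fluid.io.save_inference_model("your_inference_model_combined", ["x"], [y], exe,
                              model_filename="model",
                              params_filename="params")
```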
## Example
``` python
from paddle_serving_client.io import inference_model_to_serving
inference_model_dir = "your_inference_model"
serving_client_dir = "serving_client_dir"
serving_server_dir = "serving_server_dir"
feed_var_names, fetch_var_names = inference_model_to_serving(
    inference_model_dir, serving_client_dir, serving_server_dir)
```
If your model description and parameters are each saved as a single standalone file, use the following API:
```
feed_var_names, fetch_var_names = inference_model_to_serving(
    inference_model_dir, serving_client_dir, serving_server_dir,
    model_filename="model", params_filename="params")
```
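Once converted, the client-side configuration can be consumed by the Serving client. A minimal sketch, assuming a server is already serving `serving_server_dir` at `127.0.0.1:9393` and that the feed/fetch names below are placeholders for the ones returned by `inference_model_to_serving`:

``` python
from paddle_serving_client import Client

client = Client()
# serving_client_conf.prototxt is the client configuration written into
# serving_client_dir by the conversion step above.
client.load_client_config("serving_client_dir/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])

# The feed keys and fetch names must match the feed_var_names /
# fetch_var_names returned by inference_model_to_serving; "x" and "y"
# are placeholders here.
fetch_map = client.predict(feed={"x": [0.0] * 13}, fetch=["y"])
print(fetch_map)
```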
@@ -4,6 +4,19 @@
## Example

For the code below, we need to know the following information.

**Model folder**: the folder where the Paddle inference_model is located

**serving_server_dir**: the path where the server-side configuration is saved after the inference_model is converted to a Serving model

**serving_client_dir**: the path where the client-side configuration is saved after the inference_model is converted to a Serving model

**Model description file**: the model description file, i.e. `model_filename`, defaults to `__model__` and is a pb2 text file; if it has a different name, it must be specified explicitly

**Model parameters file**: during `save_inference_model`, the default behavior is to save each Variable as a separate binary file, in which case nothing needs to be specified. If all parameters are saved compressed into a single file, `params_filename` must be specified explicitly
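A quick way to tell which layout your inference model uses is to list its directory. A minimal sketch (the directory name is illustrative):

``` python
import os

# `__model__` plus one file per Variable means the default layout was used
# and no extra arguments are needed. A `model` file plus a single `params`
# file means model_filename and params_filename must be passed explicitly.
print(sorted(os.listdir("your_inference_model")))
```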
``` python
from paddle_serving_client.io import inference_model_to_serving
inference_model_dir = "your_inference_model"
serving_client_dir = "serving_client_dir"
serving_server_dir = "serving_server_dir"
feed_var_names, fetch_var_names = inference_model_to_serving(
    inference_model_dir, serving_client_dir, serving_server_dir)
```
If your model has a model description file `model_filename` and a model parameters file `params_filename`, use:
```
feed_var_names, fetch_var_names = inference_model_to_serving(
    inference_model_dir, serving_client_dir, serving_server_dir,
    model_filename="model", params_filename="params")
```