diff --git a/doc/INFERENCE_TO_SERVING.md b/doc/INFERENCE_TO_SERVING.md
index 8334159ea255ca65241a2b567e43682a148bb775..e10ee976fb455c8cc49a0d5fa44ed4cc1f300ba9 100644
--- a/doc/INFERENCE_TO_SERVING.md
+++ b/doc/INFERENCE_TO_SERVING.md
@@ -2,6 +2,20 @@
 ([简体中文](./INFERENCE_TO_SERVING_CN.md)|English)
 
+Before converting an inference model into a serving model, you should know the following:
+
+**inference_model_dir**: the directory of the Paddle inference model
+
+**serving_server_dir**: the directory where the server-side configuration is saved
+
+**serving_client_dir**: the directory where the client-side configuration is saved
+
+**model_filename**: the name of the model description file, which defaults to `__model__`. If your model file has a different name, set `model_filename` explicitly.
+
+**params_filename**: by default, `save_inference_model` saves every Variable as a separate file. If the parameters of your inference model are combined into a single file, set `params_filename` explicitly.
+
 ## Example
 
 ``` python
@@ -12,3 +26,11 @@ serving_server_dir = "serving_server_dir"
 feed_var_names, fetch_var_names = inference_model_to_serving(
     inference_model_dir, serving_client_dir, serving_server_dir)
 ```
+
+If the model description and the parameters are each stored in a single, custom-named file, use the following API:
+
+``` python
+feed_var_names, fetch_var_names = inference_model_to_serving(
+    inference_model_dir, serving_client_dir, serving_server_dir,
+    model_filename="model", params_filename="params")
+```
diff --git a/doc/INFERENCE_TO_SERVING_CN.md b/doc/INFERENCE_TO_SERVING_CN.md
index 94d1def424db467e200020c69fbd6d1599a5ffde..e7e909ac04be3b1a0885b3390d99a153dfbd170e 100644
--- a/doc/INFERENCE_TO_SERVING_CN.md
+++ b/doc/INFERENCE_TO_SERVING_CN.md
@@ -4,6 +4,19 @@
 
 ## 示例
 
+Before running the code below, we need to know the following:
+
+**inference_model_dir**: the directory that contains the Paddle inference model
+
+**serving_server_dir**: the directory where the server-side configuration is saved after the inference model is converted to a Serving model
+
+**serving_client_dir**: the directory where the client-side configuration is saved after the inference model is converted to a Serving model
+
+**model_filename**: the model description file, a pb2 text file whose default name is `__model__`; if the file has a different name, it must be specified explicitly
+
+**params_filename**: during `save_inference_model`, each Variable is saved as a separate binary file by default, in which case nothing needs to be specified. If all parameters are combined into a single file, `params_filename` must be specified explicitly.
+
 ``` python
 from paddle_serving_client.io import inference_model_to_serving
 inference_model_dir = "your_inference_model"
@@ -12,3 +25,9 @@ serving_server_dir = "serving_server_dir"
 feed_var_names, fetch_var_names = inference_model_to_serving(
     inference_model_dir, serving_client_dir, serving_server_dir)
 ```
+If your model has a custom model description file (`model_filename`) and a combined parameter file (`params_filename`), use:
+
+``` python
+feed_var_names, fetch_var_names = inference_model_to_serving(
+    inference_model_dir, serving_client_dir, serving_server_dir,
+    model_filename="model", params_filename="params")
+```
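+
+For reference, the kind of inference model that needs `model_filename` and `params_filename` can be produced with `fluid.io.save_inference_model`. The snippet below is a minimal sketch: the toy network, the directory name, and the file names "model"/"params" are illustrative assumptions, not requirements of the API.
+
+``` python
+import paddle.fluid as fluid
+
+# A toy network with one input and one fully connected layer,
+# just so there is something to save.
+x = fluid.data(name="x", shape=[None, 13], dtype="float32")
+y = fluid.layers.fc(input=x, size=1)
+
+exe = fluid.Executor(fluid.CPUPlace())
+exe.run(fluid.default_startup_program())
+
+# Passing model_filename/params_filename writes the model description to a
+# single file "model" and all parameters to a single file "params",
+# instead of the default __model__ plus one file per Variable.
+fluid.io.save_inference_model(
+    "your_inference_model", ["x"], [y], exe,
+    model_filename="model", params_filename="params")
+```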
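+
+After the conversion, the directory passed as `serving_client_dir` holds the configuration that the Serving client loads before sending requests. A minimal sketch of that usage follows; the file name `serving_client_conf.prototxt` and the endpoint `127.0.0.1:9393` are assumptions based on Paddle Serving defaults, so adjust them to your deployment.
+
+``` python
+from paddle_serving_client import Client
+
+client = Client()
+# Load the client-side configuration generated by inference_model_to_serving.
+client.load_client_config("serving_client_dir/serving_client_conf.prototxt")
+# Connect to a serving server assumed to be listening on port 9393.
+client.connect(["127.0.0.1:9393"])
+```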