From 63cdba51300e2f74b5f064ff85caaf5aa48aa135 Mon Sep 17 00:00:00 2001
From: Jiawei Wang
Date: Wed, 19 Aug 2020 12:07:50 +0800
Subject: [PATCH] Update INFERENCE_TO_SERVING.md

---
 doc/INFERENCE_TO_SERVING.md | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/doc/INFERENCE_TO_SERVING.md b/doc/INFERENCE_TO_SERVING.md
index 8334159e..e10ee976 100644
--- a/doc/INFERENCE_TO_SERVING.md
+++ b/doc/INFERENCE_TO_SERVING.md
@@ -2,6 +2,20 @@
 
 ([简体中文](./INFERENCE_TO_SERVING_CN.md)|English)
 
+Before converting an inference model to a serving model, you should know the following parameters:
+
+**inference_model_dir**: the directory of the Paddle inference model
+
+**serving_server_dir**: the directory of the server-side configuration
+
+**serving_client_dir**: the directory of the client-side configuration
+
+**model_filename**: the model description file, whose default name is `__model__`. If your model file has a non-default name, set `model_filename` explicitly
+
+**params_filename**: during `save_inference_model`, every Variable is saved as a separate file by default. If your inference model's params are compressed into a single file, set `params_filename` explicitly
+
+
+
 ## Example
 
 ``` python
@@ -12,3 +26,11 @@ serving_server_dir = "serving_server_dir"
 feed_var_names, fetch_var_names = inference_model_to_serving(
     inference_model_dir, serving_client_dir, serving_server_dir)
 ```
+
+If your model file and params file are both standalone files, use the following API:
+
+``` python
+feed_var_names, fetch_var_names = inference_model_to_serving(
+    inference_model_dir, serving_client_dir, serving_server_dir,
+    model_filename="model", params_filename="params")
+```
--
GitLab
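The optional-argument semantics the patch documents (`model_filename` and `params_filename` are only passed when the model or params files have non-default names) can be sketched as a small standalone helper. `build_conversion_kwargs` is a hypothetical name and is not part of Paddle Serving; it only illustrates which keyword arguments would reach `inference_model_to_serving`:

``` python
def build_conversion_kwargs(inference_model_dir,
                            serving_client_dir,
                            serving_server_dir,
                            model_filename=None,
                            params_filename=None):
    """Collect keyword arguments for inference_model_to_serving.

    Per the document: the model description file defaults to `__model__`,
    so model_filename is only needed for a non-default name, and
    params_filename is only needed when all params are compressed into
    a single file.
    """
    kwargs = {
        "inference_model_dir": inference_model_dir,
        "serving_client_dir": serving_client_dir,
        "serving_server_dir": serving_server_dir,
    }
    if model_filename is not None:
        kwargs["model_filename"] = model_filename
    if params_filename is not None:
        kwargs["params_filename"] = params_filename
    return kwargs
```

With default filenames the optional keys are omitted entirely, matching the first example in the document; with standalone `model`/`params` files both keys are included, matching the second.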