From f8752ca98e66571143ad407cf2bbdaba37d14527 Mon Sep 17 00:00:00 2001
From: WangXi
Date: Tue, 1 Sep 2020 20:31:27 +0800
Subject: [PATCH] refine doc

---
 doc/SAVE.md    | 16 +++++++---------
 doc/SAVE_CN.md | 17 +++++++----------
 2 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/doc/SAVE.md b/doc/SAVE.md
index 35a70eb6..11c3f557 100644
--- a/doc/SAVE.md
+++ b/doc/SAVE.md
@@ -38,17 +38,15 @@ If you have saved model files using Paddle's `save_inference_model` API, you can
 import paddle_serving_client.io as serving_io
 serving_io.inference_model_to_serving(dirname, serving_server="serving_server", serving_client="serving_client", model_filename=None, params_filename=None)
 ```
-dirname (str) - Path of saved model files. Program file and parameter files are saved in this directory.
-
-serving_server (str, optional) - The path of model files and configuration files for server. Default: "serving_server".
-
-serving_client (str, optional) - The path of configuration files for client. Default: "serving_client".
-
-model_filename (str, optional) - The name of file to load the inference program. If it is None, the default filename `__model__` will be used. Default: None.
-
-paras_filename (str, optional) - The name of file to load all parameters. It is only used for the case that all parameters were saved in a single binary file. If parameters were saved in separate files, set it as None. Default: None.
 Or you can use a build-in python module called `paddle_serving_client.io.convert` to convert it.
 ```python
 python -m paddle_serving_client.io.convert --dirname ./your_inference_model_dir
 ```
 Arguments are the same as `inference_model_to_serving` API.
+| Argument | Type | Default | Description |
+|--------------|------|-----------|--------------------------------|
+| `dirname` | str | - | Path of saved model files. Program file and parameter files are saved in this directory. |
+| `serving_server` | str | `"serving_server"` | The path of model files and configuration files for server. |
+| `serving_client` | str | `"serving_client"` | The path of configuration files for client. |
+| `model_filename` | str | None | The name of file to load the inference program. If it is None, the default filename `__model__` will be used. |
+| `params_filename` | str | None | The name of file to load all parameters. It is only used for the case that all parameters were saved in a single binary file. If parameters were saved in separate files, set it as None. |
diff --git a/doc/SAVE_CN.md b/doc/SAVE_CN.md
index 0b10c32f..037cb0a8 100644
--- a/doc/SAVE_CN.md
+++ b/doc/SAVE_CN.md
@@ -39,18 +39,15 @@ for line in sys.stdin:
 import paddle_serving_client.io as serving_io
 serving_io.inference_model_to_serving(dirname, serving_server="serving_server", serving_client="serving_client", model_filename=None, params_filename=None)
 ```
-dirname (str) – 需要转换的模型文件存储路径,Program结构文件和参数文件均保存在此目录。
-
-serving_server (str, 可选) - 转换后的模型文件和配置文件的存储路径。默认值为serving_server。
-
-serving_client (str, 可选) - 转换后的客户端配置文件存储路径。默认值为serving_client。
-
-model_filename (str,可选) – 存储需要转换的模型Inference Program结构的文件名称。如果设置为None,则使用 `__model__` 作为默认的文件名。默认值为None。
-
-params_filename (str,可选) – 存储需要转换的模型所有参数的文件名称。当且仅当所有模型参数被保存在一个单独的二进制文件中,它才需要被指定。如果模型参数是存储在各自分离的文件中,设置它的值为None。默认值为None。
-
 或者你可以使用Paddle Serving提供的名为`paddle_serving_client.io.convert`的内置模块进行转换。
 ```python
 python -m paddle_serving_client.io.convert --dirname ./your_inference_model_dir
 ```
 模块参数与`inference_model_to_serving`接口参数相同。
+| 参数 | 类型 | 默认值 | 描述 |
+|--------------|------|-----------|--------------------------------|
+| `dirname` | str | - | 需要转换的模型文件存储路径,Program结构文件和参数文件均保存在此目录。 |
+| `serving_server` | str | `"serving_server"` | 转换后的模型文件和配置文件的存储路径。 |
+| `serving_client` | str | `"serving_client"` | 转换后的客户端配置文件存储路径。 |
+| `model_filename` | str | None | 存储需要转换的模型Inference Program结构的文件名称。如果设置为None,则使用 `__model__` 作为默认的文件名。 |
+| `params_filename` | str | None | 存储需要转换的模型所有参数的文件名称。当且仅当所有模型参数被保存在一个单独的二进制文件中,它才需要被指定。如果模型参数是存储在各自分离的文件中,设置它的值为None。 |
--
GitLab
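For reference, a minimal usage sketch of the `inference_model_to_serving` API documented in the tables above. The input directory name is a placeholder taken from the doc's own CLI example, and the call assumes the model was saved with Paddle's `save_inference_model` API using the default `__model__` program file, with parameters stored as separate files:

```python
import paddle_serving_client.io as serving_io

# Convert an inference model into Paddle Serving server/client files.
# "./your_inference_model_dir" is a placeholder path.
serving_io.inference_model_to_serving(
    dirname="./your_inference_model_dir",
    serving_server="serving_server",  # output: model files + server config
    serving_client="serving_client",  # output: client config
    model_filename=None,   # None -> load the default `__model__` program file
    params_filename=None,  # None -> parameters were saved as separate files
)
```

If all parameters were instead saved into a single binary file, pass that file's name as `params_filename`; per the doc, the `paddle_serving_client.io.convert` module takes the same arguments on the command line.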