diff --git a/docs/export_for_infer.md b/docs/export_for_infer.md
index 9a3e0aaffbbb1fabd26880c136ab3128f6b65e55..960ca42fc0f280c292ddc3a5b52f1dc5a0baaa92 100644
--- a/docs/export_for_infer.md
+++ b/docs/export_for_infer.md
@@ -2,7 +2,7 @@
 
 Usually, the models saved by the PaddlePaddle large-scale classification library during training contain only the model parameters,
 not the inference model structure. To deploy the PLSC inference library, the pretrained model must first be exported as an inference model.
-An inference model contains both the parameters and the model structure needed for inference, and is used for subsequent inference tasks (see [Using the C++ inference library]).
+An inference model contains both the parameters and the model structure needed for inference, and is used for subsequent inference tasks (see [Using the C++ inference library](./serving.md)).
 
 A pretrained model can be exported as an inference model with the following code:
 
diff --git a/docs/serving.md b/docs/serving.md
index da7a379376cd68f66afdc3a4772834182dabb84d..f19d8fd1191939bf7f6d63eab8f33455df6085f5 100644
--- a/docs/serving.md
+++ b/docs/serving.md
@@ -6,9 +6,9 @@
 Server side
 
 A Python 3 environment is required; download the whl package:
 
-https://paddle-serving.bj.bcebos.com/paddle-gpu-serving/wheel/plsc_serving-0.1.4-py3-none-any.whl
+https://paddle-serving.bj.bcebos.com/paddle-gpu-serving/wheel/plsc_serving-0.1.6-py3-none-any.whl
 
-pip3 install plsc_serving-0.1.4-py3-none-any.whl
+pip3 install plsc_serving-0.1.6-py3-none-any.whl
 
 Client side
@@ -25,7 +25,8 @@ Server side
 ```python
 from plsc_serving.run import PLSCServer
 fs = PLSCServer()
-fs.with_model(model_name = 'face_resnet50')
+# Set the path of the model file to use; str type, absolute path
+fs.with_model(model_path = '/XXX/XXX')
 # Run a single process; gpu_index selects the GPU to use (int, default 0); port sets the port to serve on (int, default 8866)
 fs.run(gpu_index = 0, port = 8010)
 ```
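
The server started in the `docs/serving.md` example above listens on the port passed to `fs.run(...)` (8010 here). Before a client connects, it can help to wait until that port actually accepts connections. The sketch below is a generic, library-agnostic readiness check written for this note; `wait_for_port` is a hypothetical helper (not part of `plsc_serving`), and it assumes only that the server accepts TCP connections once it is up:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to (host, port) succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection completes the TCP handshake; success means the
            # server socket is listening and reachable.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            # Connection refused or timed out: server not ready yet, retry.
            time.sleep(0.5)
    return False
```

For example, after calling `fs.run(gpu_index = 0, port = 8010)`, a client-side script could call `wait_for_port('127.0.0.1', 8010)` before sending its first request.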