diff --git a/doc/FAQ_CN.md b/doc/FAQ_CN.md
index 94020bcc2477151839c521474cde6d73ea388ad2..7713f0eef92ed0b1bc07e7e1c8a7a9121df70a36 100644
--- a/doc/FAQ_CN.md
+++ b/doc/FAQ_CN.md
@@ -1,6 +1,12 @@
 # FAQ
 
+## Version Upgrade Issues
+#### Q: After upgrading from v0.6.x to v0.7.0, running a Python Pipeline program reports the following error:
+```
+Failed to predict: (data_id=1 log_id=0) [det|0] Failed to postprocess: postprocess() takes 4 positional arguments but 5 were given
+```
+**A:** Add the parameter data_id to the postprocess function definition in the server-side program (e.g. web_service.py), changing it to def postprocess(self, input_dicts, fetch_dict, **data_id**, log_id).
 
 ## Basics
 
@@ -10,7 +16,7 @@
 
 #### Q: Does paddle-serving support Int32?
 
-**A:** In protobuf, the feed_type and fetch_type numbers map to data types as follows
+**A:** In protobuf, the feed_type and fetch_type numbers map to data types as follows; for complete details see [Serving Configuration and Startup Parameters](./Serving_Configure_CN.md#模型配置文件)
 
 0-int64
 
diff --git a/doc/Install_CN.md b/doc/Install_CN.md
index a1bad3cadaa9ddda8f46eafb41fa0164cd78c624..3be6bee69e50f68f6f72857e8a6d902da8e6ce25 100644
--- a/doc/Install_CN.md
+++ b/doc/Install_CN.md
@@ -77,6 +77,8 @@ The paddle-serving-server and paddle-serving-server-gpu packages support Centos 6/7, Ubun
 The paddle-serving-client and paddle-serving-app packages support Linux and Windows; paddle-serving-client only supports python3.6/3.7/3.8.
 
+**If you previously used the Cuda10.2 environment of paddle serving 0.5.X or 0.6.X, note that in version 0.7.0, paddle-serving-server-gpu==0.7.0.post102 uses Cudnn7 and TensorRT6, while 0.6.0.post102 uses Cudnn8 and TensorRT7. If you are a 0.6.0 Cuda10.2 user upgrading, please use paddle-serving-server-gpu==0.7.0.post1028.**
+
 ## 3. Install Paddle related Python libraries
 **You only need to install these when you use the `paddle_serving_client.convert` command or the `Python Pipeline framework`.**
 ```
diff --git a/doc/Install_EN.md b/doc/Install_EN.md
index 4e58c54ac6ecf196ff04856c10e529ee2dcb6e48..1dbf63b8460fe4cc1f19c0262675afb37b9dc3ca 100644
--- a/doc/Install_EN.md
+++ b/doc/Install_EN.md
@@ -78,6 +78,8 @@ The paddle-serving-server and paddle-serving-server-gpu installation packages su
 The paddle-serving-client and paddle-serving-app installation packages support Linux and Windows, and paddle-serving-client only supports python3.6/3.7/3.8.
 
+**If you previously used the Cuda10.2 environment of paddle serving 0.5.X or 0.6.X, note that in version 0.7.0, paddle-serving-server-gpu==0.7.0.post102 uses Cudnn7 and TensorRT6, while 0.6.0.post102 uses Cudnn8 and TensorRT7. If you are a 0.6.0 Cuda10.2 user upgrading, please use paddle-serving-server-gpu==0.7.0.post1028.**
+
 ## 3. Install Paddle related Python libraries
 **You only need to install these when you use the `paddle_serving_client.convert` command or the `Python Pipeline framework`.**
 ```
diff --git a/doc/Latest_Packages_CN.md b/doc/Latest_Packages_CN.md
index e32065ef458806ce3b89ac8cab8d7d2612773e31..c43c9266f29593b92facdec45dbce64deb104af2 100644
--- a/doc/Latest_Packages_CN.md
+++ b/doc/Latest_Packages_CN.md
@@ -2,6 +2,8 @@
 ## Paddle-Serving-Server (x86 CPU/GPU)
 
+Check the following table, copy the hyperlink address, then run `pip3 install`. For example, to install `paddle_serving_server-0.0.0-py3-none-any.whl`, right-click the hyperlink, copy the link address, and run `pip3 install https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl`.
+
 | | develop whl | develop bin | stable whl | stable bin |
 |---------------------------|----------------------------|----------------------------|----------------------------|----------------------------|
 | cpu-avx-mkl | [paddle_serving_server-0.0.0-py3-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl) | [serving-cpu-avx-mkl-0.0.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-mkl-0.0.0.tar.gz) | [paddle_serving_server-0.7.0-py3-none-any.whl](https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.7.0-py3-none-any.whl) | [serving-cpu-avx-mkl-0.7.0.tar.gz](https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-mkl-0.7.0.tar.gz) |
@@ -44,9 +46,15 @@ for kunlun user who uses arm-xpu or x86-xpu can download the wheel packages as f
 
 for arm kunlun user
 ```
-https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_server_xpu-0.7.0.post2-cp36-cp36m-linux_aarch64.whl
-https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_client-0.7.0-cp36-cp36m-linux_aarch64.whl
-https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_app-0.7.0-cp36-cp36m-linux_aarch64.whl
+# paddle-serving-server
+https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_server_xpu-0.0.0.post2-py3-none-any.whl
+# paddle-serving-client
+https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_client-0.0.0-cp36-none-any.whl
+# paddle-serving-app
+https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_app-0.0.0-py3-none-any.whl
+
+# SERVING BIN
+https://paddle-serving.bj.bcebos.com/bin/serving-xpu-aarch64-0.0.0.tar.gz
 ```
 
 for x86 kunlun user
diff --git a/doc/images/wechat_group_1.jpeg b/doc/images/wechat_group_1.jpeg
index 518d0d42d2faa90df5253b853917666ba87c33ea..0ba95a794b0246a092a7f6e38146311e159c6f4d 100644
Binary files a/doc/images/wechat_group_1.jpeg and b/doc/images/wechat_group_1.jpeg differ
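
The FAQ hunk above describes the v0.7.0 signature change for postprocess. A minimal sketch of the adjusted method is shown below; in a real service this method is defined on your paddle_serving_server WebService or pipeline Op subclass (e.g. in web_service.py), so the plain `DetOp` stand-in class and the pass-through body here are illustrative assumptions, not the library's API.

```python
# Sketch of the postprocess signature change described in the FAQ above.
# DetOp is a hypothetical stand-in so the example runs on its own; a real
# service would subclass the serving framework's WebService / Op instead.

class DetOp:
    # v0.6.x signature was: postprocess(self, input_dicts, fetch_dict, log_id)
    # v0.7.0 inserts data_id before log_id:
    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        # Pass the fetch results through unchanged; real code would
        # transform fetch_dict here before returning it.
        return fetch_dict


op = DetOp()
out = op.postprocess({}, {"score": [0.9]}, data_id=1, log_id=0)
print(out)  # {'score': [0.9]}
```

With the old four-argument definition, the framework's fifth positional argument triggers exactly the "takes 4 positional arguments but 5 were given" error quoted above.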