@@ -33,6 +33,10 @@ If you want to customize your Serving based on source code, use the version with
If you need to develop and compile based on the source code, please use the version with the suffix -devel.
**In the TAG column, 0.8.0 can also be replaced with another version number, such as 0.5.0 or 0.4.1. Note, however, that some development environments were only introduced in later releases, so not every environment is available for every version number.**
**Development Docker Images:**
The development images come with a variety of development tools for debugging and compiling code, and are provided for two GCC versions and multiple CUDA environments; as a result, the images are relatively large.
The Runtime Docker Images are lighter than the Develop Images: they contain only the Serving whl and bin, and omit development tools such as cmake to keep the image size down. For details, please check the document [Paddle Serving on Kubernetes](./Run_On_Kubernetes_CN.md).
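A minimal sketch of pulling and starting the two kinds of images, assuming the CPU tags `0.8.0-devel` and `0.8.0-runtime`; check the tables below for the exact tag that matches your environment:

```
# Develop image: includes cmake, GCC and other build tools for compiling and debugging
docker pull registry.baidubce.com/paddlepaddle/serving:0.8.0-devel
docker run --rm -it registry.baidubce.com/paddlepaddle/serving:0.8.0-devel bash

# Runtime image: only the Serving whl and bin, suitable for deployment
docker pull registry.baidubce.com/paddlepaddle/serving:0.8.0-runtime
```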
| Env | Version | Docker images tag | OS | Gcc Version | Size |
registry.baidubce.com/paddlepaddle/serving:xpu-arm # for arm xpu user
registry.baidubce.com/paddlepaddle/serving:xpu-x86 # for x86 xpu user
...
@@ -61,31 +77,3 @@ The machine running the CUDA container **only requires the NVIDIA driver**, the
For the relationship between CUDA toolkit version, Driver version and GPU architecture, please refer to [nvidia-docker wiki](https://github.com/NVIDIA/nvidia-docker/wiki/CUDA).
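A quick sketch for checking that the host driver and the NVIDIA container runtime are in place before starting a GPU container (the GPU image tag here is only an example; see the tables below for the tag that matches your CUDA environment):

```
# The host only needs the NVIDIA driver installed; this prints its version
nvidia-smi

# Verify that Docker can expose the GPU to containers (requires the NVIDIA container toolkit)
docker run --rm --gpus all registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda10.2-cudnn7-devel nvidia-smi
```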
# (Attachment) The List of All the Docker images
Develop Images:
| Env | Version | Docker images tag | OS | Gcc Version |
The Running Images are lighter than the Develop Images: they contain only the Serving whl and bin, and omit development tools such as cmake to keep the image size down. For details, please check the document [Paddle Serving on Kubernetes](./Run_On_Kubernetes_CN.md).
| Env | Version | Docker images tag | OS | Gcc Version | Size |
Install the Serving whl packages. There are three types of packages: client, app, and server; the server package is further divided into CPU and GPU versions. Choose the installation that matches your environment, as in the sketch below.
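A minimal sketch of the corresponding pip commands, assuming release 0.8.0 and a `post102` suffix for the CUDA10.2 build; pick the server package that matches one of the environments listed after this block:

```
# Client and app packages (the same for CPU and GPU deployments)
pip3 install paddle-serving-client==0.8.0 paddle-serving-app==0.8.0

# Server package: install exactly ONE of the following
pip3 install paddle-serving-server==0.8.0              # CPU
pip3 install paddle-serving-server-gpu==0.8.0.post102  # GPU build for CUDA10.2 + TensorRT6 (suffix assumed)
```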
- GPU with CUDA10.2 + Cudnn7 + TensorRT6 (Recommended)