# Latest Wheel Packages

## CPU server
### Python 3
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server-0.0.0-py3-none-any.whl
```

### Python 2
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server-0.0.0-py2-none-any.whl
```
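These are direct wheel links, so they can be passed straight to pip once the right py2/py3 tag is chosen. A minimal sketch, assuming `python3` is on PATH (the install line is commented out because it needs network access):

```shell
# Sketch: choose the CPU server wheel matching the local Python major
# version (the two wheels above differ only in their py2/py3 tag).
py_major="$(python3 -c 'import sys; print(sys.version_info[0])' 2>/dev/null || echo 3)"
url="https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server-0.0.0-py${py_major}-none-any.whl"
# pip install "$url"   # uncomment to install (requires network access)
```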

## GPU server
### Python 3
```
#cuda 9.0
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post9-py3-none-any.whl
#cuda 10.0
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post10-py3-none-any.whl
#cuda 10.1 with TensorRT 6
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl
#cuda 10.2 with TensorRT 7
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl
#cuda 11.0 with TensorRT 7 (beta)
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post11-py3-none-any.whl
```
### Python 2
```
#cuda 9.0
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post9-py2-none-any.whl
#cuda 10.0
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post10-py2-none-any.whl
#cuda 10.1 with TensorRT 6
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post101-py2-none-any.whl
#cuda 10.2 with TensorRT 7
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post102-py2-none-any.whl
#cuda 11.0 with TensorRT 7 (beta)
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post11-py2-none-any.whl
```
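The `post` suffix in the GPU wheel filenames encodes the CUDA version. A small lookup sketch (a hypothetical helper, not part of Paddle Serving) that reproduces the URLs listed above:

```shell
# Hypothetical helper: print the GPU server wheel URL for a given CUDA
# version and Python major version, following the naming pattern above.
gpu_server_wheel_url() {
  cuda="$1"; py="$2"
  case "$cuda" in
    9.0)  suffix=post9 ;;
    10.0) suffix=post10 ;;
    10.1) suffix=post101 ;; # with TensorRT 6
    10.2) suffix=post102 ;; # with TensorRT 7
    11.0) suffix=post11 ;;  # with TensorRT 7 (beta)
    *) echo "unsupported CUDA version: $cuda" >&2; return 1 ;;
  esac
  echo "https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.${suffix}-py${py}-none-any.whl"
}
```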

## Client
### Python 3.8
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp38-none-any.whl
```
### Python 3.7
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
```
### Python 3.6
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp36-none-any.whl
```
### Python 3.5
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp35-none-any.whl
```
### Python 2.7
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp27-none-any.whl
```
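Unlike the server wheels, the client wheels are tagged per CPython minor version (cp27/cp35/cp36/cp37/cp38). A tiny sketch (a hypothetical helper, not part of Paddle Serving) that expands such a tag into the matching URL:

```shell
# Hypothetical helper: expand a CPython tag (e.g. cp36) into the
# client wheel URL following the pattern listed above.
client_wheel_url() {
  echo "https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-${1}-none-any.whl"
}
client_wheel_url cp36
```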

## App
### Python 3
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_app-0.0.0-py3-none-any.whl
```

### Python 2
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_app-0.0.0-py2-none-any.whl
```

## ARM user
ARM users who use [PaddleLite](https://github.com/PaddlePaddle/PaddleLite) can download the wheel packages below. ARM users should also use the xpu-beta docker image; see [DOCKER IMAGES](./DOCKER_IMAGES.md).
**Only Python 3.6 is supported for ARM users.**

### Wheel Package Links
```
# Server 
https://paddle-serving.bj.bcebos.com/whl/xpu/paddle_serving_server_gpu-0.0.0.postarm_xpu-py3-none-any.whl
# Client
https://paddle-serving.bj.bcebos.com/whl/xpu/paddle_serving_client-0.0.0-cp36-none-any.whl 
# App
https://paddle-serving.bj.bcebos.com/whl/xpu/paddle_serving_app-0.0.0-py3-none-any.whl 
```

### Binary Package
Most users do not need to read this section. However, if you deploy Paddle Serving on a machine without network access, the binary executable tar file cannot be downloaded automatically. Therefore, we provide the download links for each environment below.

#### Bin links
```
# CPU AVX MKL
https://paddle-serving.bj.bcebos.com/bin/serving-cpu-avx-mkl-0.0.0.tar.gz
# CPU AVX OPENBLAS
https://paddle-serving.bj.bcebos.com/bin/serving-cpu-avx-openblas-0.0.0.tar.gz
# CPU NOAVX OPENBLAS
https://paddle-serving.bj.bcebos.com/bin/serving-cpu-noavx-openblas-0.0.0.tar.gz
# Cuda 9
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-cuda9-0.0.0.tar.gz
# Cuda 10
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-cuda10-0.0.0.tar.gz
# Cuda 10.1
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-101-0.0.0.tar.gz
# Cuda 10.2
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-102-0.0.0.tar.gz
# Cuda 11
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-cuda11-0.0.0.tar.gz
```
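The tarball names follow the pattern above but are not fully uniform: CUDA 9, 10, and 11 use a `cudaN` tag, while CUDA 10.1 and 10.2 use the bare `101`/`102`. A lookup sketch (a hypothetical helper, not a Paddle Serving API) capturing that mapping:

```shell
# Hypothetical helper: map an environment tag to its bin tarball URL.
# Note the irregular naming: "cuda9"/"cuda10"/"cuda11" but bare "101"/"102".
serving_bin_url() {
  case "$1" in
    cpu-avx-mkl)        name=serving-cpu-avx-mkl ;;
    cpu-avx-openblas)   name=serving-cpu-avx-openblas ;;
    cpu-noavx-openblas) name=serving-cpu-noavx-openblas ;;
    cuda9)              name=serving-gpu-cuda9 ;;
    cuda10)             name=serving-gpu-cuda10 ;;
    cuda10.1)           name=serving-gpu-101 ;;
    cuda10.2)           name=serving-gpu-102 ;;
    cuda11)             name=serving-gpu-cuda11 ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
  echo "https://paddle-serving.bj.bcebos.com/bin/${name}-0.0.0.tar.gz"
}
```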

#### How to set up SERVING_BIN offline?

- Download the serving server whl package and the bin package, and make sure they are built for the same environment.
- Download the serving client whl and the serving app whl, paying attention to the Python version.
- `pip install` the whl packages and `tar xf` the bin package, then `export SERVING_BIN=$PWD/serving-gpu-cuda10-0.0.0/serving` (taking Cuda 10.0 as an example).
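The steps above can be sketched as a script. Assumptions: a Cuda 10.0, Python 3 environment, with the whl and tar files (the ones listed in this document) already copied into the current directory:

```shell
# Offline SERVING_BIN setup sketch (Cuda 10.0 example). The files are
# assumed to have been downloaded elsewhere and copied here beforehand.
SERVER_WHL=paddle_serving_server_gpu-0.0.0.post10-py3-none-any.whl
BIN_TAR=serving-gpu-cuda10-0.0.0.tar.gz

# Install the wheel and unpack the bin package when present.
if [ -f "$SERVER_WHL" ]; then pip install "$SERVER_WHL"; fi
if [ -f "$BIN_TAR" ]; then tar xf "$BIN_TAR"; fi

# Point Paddle Serving at the extracted binary executable.
export SERVING_BIN="$PWD/serving-gpu-cuda10-0.0.0/serving"
```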