<h2 align="center">Installation</h2>
We **highly recommend** you **run Paddle Serving in Docker**; please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md). See the [document](doc/DOCKER_IMAGES.md) for more Docker images.
It is recommended to use Docker for compilation. We have prepared the Paddle Serving compilation environment for you; see [this document](DOCKER_IMAGES.md).
This document takes Python2 as an example to show how to compile Paddle Serving. If you want to compile with Python3, just adjust the Python options of cmake accordingly.
In the default centos7 image we provide, the Python path is `/usr/bin/python`. If you want to use our centos6 image, you need to set it to `export PYTHONROOT=/usr/local/python2.7/`.
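For example, with the default centos7 image, `/usr/bin/python` implies a prefix of `/usr`; a minimal sketch, with the value inferred from the paths above:

```shell
# Default centos7 image: Python lives at /usr/bin/python, so the prefix is /usr
export PYTHONROOT=/usr
# For the centos6 image, use the path given above instead:
# export PYTHONROOT=/usr/local/python2.7/
```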
## Install Python dependencies
```shell
pip install -r python/requirements.txt
```
If Python3 is used, replace `pip` with `pip3`.
## Compile Server
### Integrated CPU version paddle inference library
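The cmake invocation for this step is elided in the source; below is a minimal sketch, assuming the standard CMake Python variables plus a `-DSERVER=ON` switch (the flag names, Python2 paths, and build directory name are assumptions to adapt to your setup):

```shell
mkdir server-build-cpu && cd server-build-cpu
# Point cmake at the Python from $PYTHONROOT (Python2 here; adjust for Python3)
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
      -DSERVER=ON ..
make -j10
```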
Execute `make install` to put the targets under the `./output` directory.
**Attention:** After the compilation is successful, you need to set the path of `SERVING_BIN`. See [Note](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE.md#Note) for details.
## Compile Client
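The original build commands here are elided except for `make -j10`; a sketch along the same lines as the server build, assuming a `-DCLIENT=ON` switch (an assumption, as is the directory name):

``` shell
mkdir client-build && cd client-build
# Same Python options as the server build; -DCLIENT=ON selects the client target
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
      -DCLIENT=ON ..
make -j10
```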
Execute `make install` to put the targets under the `./output` directory.
Whether you compile the client, the server, or the App part, install the resulting whl package from `python/dist/` after compiling.
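For example (the exact wheel filename under `python/dist/` depends on the component and version, so the wildcard here is only illustrative):

```shell
# Installs whatever wheel(s) the build produced; replace the glob with the
# concrete filename if several wheels are present
pip install python/dist/*.whl
```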
## Note
When running the Python server, it checks the `SERVING_BIN` environment variable. If you want to use your own compiled binary, set this variable to the path of the corresponding binary, usually `export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`.
For example, to start a GPU runtime container and enter it:

```shell
nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
nvidia-docker exec -it test bash
```
## Attention
The images provided in this document are all runtime images and cannot be used for compilation. If you want to compile from source, refer to [COMPILE](COMPILE.md).