Commit b61add60 authored by fengjiayi, committed by GitHub

Merge pull request #416 from PaddlePaddle/tmp

Tmp
```diff
-FROM paddlepaddle/paddle:latest-gpu
+FROM paddlepaddle/paddle
 ENV PARAMETER_TAR_PATH=/data/param.tar \
     TOPOLOGY_FILE_PATH=/data/inference_topology.pkl
```

```diff
+FROM paddlepaddle/paddle:latest-gpu
+ENV PARAMETER_TAR_PATH=/data/param.tar \
+    TOPOLOGY_FILE_PATH=/data/inference_topology.pkl
+ADD requirements.txt /root
+ADD main.py /root
+RUN pip install -r /root/requirements.txt
+CMD ["python", "/root/main.py"]
```
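The two `ENV` paths point at the model artifacts the serving script is expected to load. A minimal sketch of how `main.py` might pick them up — the variable handling below is an assumption for illustration, not the actual contents of `main.py`:

```python
import os

# Defaults mirror the ENV values baked into the image; the real main.py
# may read these differently -- this is an illustrative sketch only.
PARAMETER_TAR_PATH = os.environ.get("PARAMETER_TAR_PATH", "/data/param.tar")
TOPOLOGY_FILE_PATH = os.environ.get("TOPOLOGY_FILE_PATH",
                                    "/data/inference_topology.pkl")

def artifact_paths():
    """Return the (parameters, topology) paths the server should load."""
    return PARAMETER_TAR_PATH, TOPOLOGY_FILE_PATH
```

Mounting the host directory with ``-v `pwd`:/data`` (as in the run commands below) places the files where these defaults expect them.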
@@ -95,7 +95,7 @@
To run the inference server with GPU support, please make sure you have nvidia-docker installed first, and run:
```diff
-nvidia-docker run --name paddle_serve -v `pwd`:/data -d -p 8000:80 -e WITH_GPU=1 paddlepaddle/book:serve
+nvidia-docker run --name paddle_serve -v `pwd`:/data -d -p 8000:80 -e WITH_GPU=1 paddlepaddle/book:serve-gpu
```
This command starts a server on port `8000`.
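Once the container is up, clients talk to it over plain HTTP on the mapped port `8000`. A hedged sketch of building such a request with the Python standard library — the endpoint path and the `"img"` payload key are assumptions about the server's API, not something documented in this diff:

```python
import json
import urllib.request

def build_inference_request(features, url="http://localhost:8000/"):
    # JSON body with a single (hypothetical) "img" field holding the
    # flattened feature vector; adjust the key to whatever main.py expects.
    payload = json.dumps({"img": features}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_inference_request([0.0] * 784)
# urllib.request.urlopen(req) would send it to the running container.
```

Because the request carries a `data` body, `urllib` issues it as a `POST`.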
@@ -215,4 +215,5 @@ the docker image again.
```diff
 docker build -t paddlepaddle/book:serve .
+docker build -t paddlepaddle/book:serve-gpu -f Dockerfile.gpu .
```