Commit e23bc324 authored by Helin Wang

Update the tutorial of the end-to-end demo

Parent 0f7f0244
# MNIST classification by PaddlePaddle
![screencast](https://cloud.githubusercontent.com/assets/80381/11339453/f04f885e-923c-11e5-8845-33c16978c54d.gif)
## Usage

This MNIST classification demo consists of two parts: a PaddlePaddle
inference server and a JavaScript front end. We will start them
separately.

We will use Docker to run the demo. If you are not familiar with
Docker, please check out this
[tutorial](https://github.com/PaddlePaddle/Paddle/wiki/TLDR-for-new-docker-user).
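If Docker is already installed, a quick sanity check before continuing might look like this (a minimal sketch; the exact output differs from machine to machine):

```bash
# Confirm the Docker CLI is available and the daemon is reachable.
docker --version
docker info > /dev/null && echo "Docker daemon is running"
```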
### Start the Inference Server
The inference server can serve any model trained by
PaddlePaddle. Please see [here](TODO) for more details.
1. Download the MNIST inference model topology and parameters to the
current working directory.
```bash
wget https://s3.us-east-2.amazonaws.com/models.paddlepaddle/inference_topology.pkl
wget https://s3.us-east-2.amazonaws.com/models.paddlepaddle/param.tar
```
1. Run the following command to start the inference server:
```bash
docker run --name paddle_serve -v `pwd`:/data -d -p 8000:80 -e WITH_GPU=0 paddlepaddle/book:serve
```
The above command mounts the current working directory to the
`/data` directory inside the Docker container. The inference
server will load the model topology and parameters that we just
downloaded from that directory.

After you are done with the demo, you can run `docker stop
paddle_serve` to stop this Docker container.
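To confirm that the server container started correctly, you can inspect it with standard Docker commands (a minimal sketch; the log output depends on the image, and the `curl` line is only a reachability check against port 8000, not a documented API call):

```bash
# Check that the container is running and look at its startup logs.
docker ps --filter name=paddle_serve
docker logs paddle_serve

# The server is published on host port 8000; this only verifies it is reachable.
curl -s -o /dev/null -w "HTTP %{http_code}\n" http://localhost:8000/
```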
### Start the Front End
1. Run the following command:
```bash
docker run -it -p 5000:5000 paddlepaddle/book:mnist
```
1. Visit http://localhost:5000 and you will see the PaddlePaddle MNIST demo.
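If the page does not load, a quick check that the front-end container is actually serving on port 5000 (a minimal, assumption-level check; it does not exercise the demo itself) is:

```bash
# Verify that something is answering on host port 5000.
curl -I http://localhost:5000
```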
## Build
We have already prepared the pre-built Docker image
`paddlepaddle/book:mnist`. Here are the steps if you want to build the
image yourself:
1. Download `inference_topology.pkl` and `param.tar` to the current directory
1. Run the following command:
```bash
docker build -t paddlepaddle/book:mnist .
```
## Acknowledgement
Thanks to the great project
https://github.com/sugyan/tensorflow-mnist. Most of the code in this
project comes from there.