Commit a5acad11 authored by typhoonzero

update docs

Parent 0bbd7bc3
#FROM python:2.7.14
FROM nvidia/cuda:8.0-runtime-ubuntu16.04
FROM nvidia/cuda:8.0-cudnn5-runtime-ubuntu16.04
RUN apt-get update && apt-get install -y python
RUN pip install -U kubernetes opencv-python && apt-get update -y && apt-get install -y iputils-ping libgtk2.0-dev
# NOTE: By default, CI-built wheel packages are built with WITH_DISTRIBUTE=OFF,
......
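
The `kubernetes` Python client installed in the image above is presumably used by the cluster tooling to talk to the Kubernetes API. As a minimal sketch (not part of this commit; the `default` namespace and the `paddle-job` label selector are assumptions), checking that the pserver and trainer pods are up could look like this:

```python
# A minimal sketch, not part of this commit: use the `kubernetes` client
# installed in the image above to check that the benchmark pods are running.
# The namespace ("default") and label selector ("paddle-job") are assumptions;
# adjust them to match the labels used in pserver.yaml / trainer.yaml.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod("default", label_selector="paddle-job")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```
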
# Performance for distributed vgg16
# Performance for Distributed vgg16
## Test Result
@@ -50,7 +50,7 @@
- Trainer Count: 60
- Batch Size: 128
- Metrics: mini-batch / sec
- Metrics: samples/sec
| PServer Count | 3 | 6 | 10 | 20 |
| -- | -- | -- | -- | -- |
@@ -61,7 +61,7 @@
*The performance gap between Fluid and v2 comes from network interference.*
## Steps to run the performance test
## Steps to Run the Performance Test
1. You must re-compile PaddlePaddle with `-DWITH_DISTRIBUTE` enabled to build it with distributed support.
1. When the build finishes, copy the generated `whl` package located under `build/python/dist` to the current directory.
@@ -71,6 +71,6 @@
Check the logs for the distributed training progress and analyze the performance.
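
As a hedged sketch of how this log check could be automated with the same `kubernetes` client (the pod name `vgg16-trainer-0`, the namespace, and the wording of the throughput lines are all assumptions, not part of this document):

```python
# A minimal sketch, assuming the kubernetes Python client and a trainer pod
# named "vgg16-trainer-0" in the "default" namespace (both placeholders).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
log_text = v1.read_namespaced_pod_log(name="vgg16-trainer-0", namespace="default")
# Print only lines that look like throughput reports; the exact wording of the
# training log lines is an assumption.
for line in log_text.splitlines():
    if "speed" in line.lower() or "pass" in line.lower():
        print(line)
```
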
## Enable verbose logs
## Enable Verbose Logs
Edit `pserver.yaml` and `trainer.yaml` and add the environment variable `GLOG_v=3` to see what happened in detail.
Edit `pserver.yaml` and `trainer.yaml` and add the environment variables `GLOG_v=3` and `GLOG_logtostderr=1` to see what happened in detail.
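
For reference, the same variables can also be exported when running a trainer process directly outside Kubernetes. The snippet below is only an illustrative sketch; the script name `vgg16_fluid.py` is assumed from this benchmark directory:

```python
# A minimal local sketch: run the trainer with verbose glog output. The script
# name "vgg16_fluid.py" is an assumption; inside Kubernetes these variables go
# into the env section of pserver.yaml / trainer.yaml instead.
import os
import subprocess

env = dict(os.environ, GLOG_v="3", GLOG_logtostderr="1")
subprocess.check_call(["python", "vgg16_fluid.py"], env=env)
```
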