diff --git a/benchmark/fluid/README.md b/benchmark/fluid/README.md
index d3e15779458a507bebc3509a392f46b06e050bfc..33d2228ca5f65d104360e22bc281fad2d3dd9d0e 100644
--- a/benchmark/fluid/README.md
+++ b/benchmark/fluid/README.md
@@ -29,7 +29,7 @@ Currently supported `--model` argument include:
 You can choose to use GPU/CPU training. With GPU training, you can specify
 `--gpus ` to run multi GPU training.
 * Run distributed training with parameter servers:
-  * see run_fluid_benchmark.sh as an example.
+  * see [run_fluid_benchmark.sh](https://github.com/PaddlePaddle/Paddle/blob/develop/benchmark/fluid/run_fluid_benchmark.sh) as an example.
   * start parameter servers:
     ```bash
     PADDLE_TRAINING_ROLE=PSERVER PADDLE_PSERVER_PORT=7164 PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=1 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model mnist --device GPU --update_method pserver