From 1666cf96bfed426be8debdefa9a8a561f5a8a13f Mon Sep 17 00:00:00 2001
From: Yancey1989
Date: Mon, 4 Jun 2018 14:53:24 +0800
Subject: [PATCH] update by comment

---
 doc/fluid/howto/cluster/fluid_recordio.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/doc/fluid/howto/cluster/fluid_recordio.md b/doc/fluid/howto/cluster/fluid_recordio.md
index 0e8b98542d1..55ce63ec193 100644
--- a/doc/fluid/howto/cluster/fluid_recordio.md
+++ b/doc/fluid/howto/cluster/fluid_recordio.md
@@ -23,7 +23,10 @@ as follows:
     fluid.recordio_writer.convert_reader_to_recordio_file('./mnist.recordio', reader, feeder)
 ```
 
-The above codes would generate a RecordIO `./mnist.recordio` on your host.
+The above code snippet would generate a RecordIO file `./mnist.recordio` on your host.
+
+**NOTE**: we recommend setting `batch_size=1` when generating the RecordIO files so that users can
+adjust the batch size flexibly while reading them.
 
 ## Use the RecordIO file in a Local Training Job
 
@@ -96,7 +99,7 @@ The above codes would generate multiple RecordIO files on your host like:
   |-mnist-00004.recordio
 ```
 
-1. open multiple RecordIO files by `fluid.layers.io.open_files`
+2. open multiple RecordIO files with `fluid.layers.io.open_files`
 
 For a distributed training job, the distributed operator system will schedule trainer process on multiple nodes,
 each trainer process reads parts of the whole training data, we usually take the following approach to make the training
-- 
GitLab
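The first hunk only shows the final convert call. For context, here is a minimal, hypothetical sketch of the complete generation step that call belongs to, assuming the circa-2018 `paddle` / `paddle.fluid` APIs that `fluid_recordio.md` documents; the MNIST reader, the `feed_list` layout, and the `build_mnist_recordio` helper are illustrative assumptions, not part of the patch.

```python
# Minimal sketch: generate './mnist.recordio' with batch_size=1, as the NOTE
# added by this patch recommends, so the batch size can be adjusted freely at
# read time. Assumes the circa-2018 paddle / paddle.fluid APIs; the MNIST
# reader and feed_list layout are illustrative, not taken from the patch.
import paddle
import paddle.fluid as fluid

def build_mnist_recordio():
    # batch_size=1 keeps one example per record, so readers can re-batch later.
    reader = paddle.batch(paddle.dataset.mnist.train(), batch_size=1)
    feeder = fluid.DataFeeder(
        feed_list=[
            fluid.layers.data(name='image', shape=[784]),
            fluid.layers.data(name='label', shape=[1], dtype='int64'),
        ],
        place=fluid.CPUPlace())
    # The call shown in the first hunk's context:
    fluid.recordio_writer.convert_reader_to_recordio_file(
        './mnist.recordio', reader, feeder)

if __name__ == '__main__':
    build_mnist_recordio()
```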