.. _api_guide_executor:
########
Executor
########
:code:`Executor`, i.e. the executor. PaddlePaddle Fluid offers two executors to
choose from. :code:`Executor` implements a simple executor in which all operators
are executed in sequence, driven by a Python script. By default, :code:`Executor`
is single-threaded; if you want data parallelism, please refer to the other
executor, :ref:`api_guide_parallel_executor`.

The code logic of :code:`Executor` is very simple. We recommend getting your model
running with :code:`Executor` first while debugging, and only then switching to
multi-device or even multi-machine training.

For the :code:`Program` that :code:`Executor` runs, please refer to
:ref:`api_guide_low_level_program`.

For simple usage, please refer to :ref:`quick_start_fit_a_line`; for the API
reference, see :ref:`api_fluid_Executor`.
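
A minimal sketch of driving a toy network with :code:`Executor` follows; the layer
names, sizes, and random input are illustrative, not prescribed by the API:

.. code-block:: python

    import numpy
    import paddle.fluid as fluid

    # A trivial network: a single fully-connected layer.
    x = fluid.layers.data(name="x", shape=[13], dtype="float32")
    y = fluid.layers.fc(input=x, size=1)

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())  # initialize parameters

    # Feed one mini-batch of random data and fetch the layer output.
    out, = exe.run(fluid.default_main_program(),
                   feed={"x": numpy.random.random(size=(8, 13)).astype("float32")},
                   fetch_list=[y])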

.. _api_guide_parallel_executor:

################
ParallelExecutor
################

################
Input and Output
################

.. _api_guide_reader:

Reader-related APIs
###################

.. _api_guide_lod_tensor:

#########
LoDTensor
#########

##############
RecordIO Files
##############

RecordIO Conversion APIs
########################

.. _api_guide_recordio_file_format:

RecordIO File Format
####################

Overview
########

Data Preprocessing
##################

Configure a Simple Network
##########################

Training
########

Debugging
#########

Model Evaluation
################

.. toctree::
    :maxdepth: 2

    prepare_data/index

.. _user_guide_use_numpy_array_as_train_data:

#################################
Use Numpy Arrays as Training Data
#################################
PaddlePaddle Fluid supports configuring data layers with
:ref:`api_fluid_layers_data`. You can then use a Numpy Array, or directly create
the C++ :ref:`api_guide_lod_tensor` from Python, and pass it to
:ref:`api_guide_executor` or :ref:`api_guide_parallel_executor` through
:code:`Executor.run(feed=...)`.

Configuring Data Layers
#######################

The data layers needed by the neural network can be configured with
:ref:`api_fluid_layers_data`, as follows:

.. code-block:: python

    import paddle.fluid as fluid

    image = fluid.layers.data(name="image", shape=[3, 224, 224])
    label = fluid.layers.data(name="label", shape=[1], dtype="int64")

    # use image/label as layer input
    prediction = fluid.layers.fc(input=image, size=1000, act="softmax")
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    ...

In the code above, :code:`image` and :code:`label` are two input data layers created
by :code:`fluid.layers.data`. :code:`image` is float data of shape
:code:`[3, 224, 224]`; :code:`label` is integer data of shape :code:`[1]`. Two
things to note:

1. Fluid uses :code:`-1` to denote the batch size dimension, and by default a
   :code:`-1` is prepended as the first dimension of :code:`shape`. Hence the code
   above can accept a numpy array of shape :code:`[32, 3, 224, 224]` for
   :code:`image`. To put the batch size dimension in a custom position, set
   :code:`fluid.layers.data(append_batch_size=False)`; see
   :ref:`user_guide_customize_batch_size_rank` under Advanced Usage.

2. The data type Fluid uses for class labels is :code:`int64`, and labels start
   from 0.

Passing Training Data to the Executor
#####################################

Both :code:`Executor.run` and :code:`ParallelExecutor.run` accept a :code:`feed`
argument. This argument is a Python dict whose keys are the names of the data
layers, such as :code:`image` in the code above, and whose values are the
corresponding numpy arrays.

For example:

.. code-block:: python

    import numpy

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(feed={
        "image": numpy.random.random(size=(32, 3, 224, 224)).astype('float32'),
        "label": numpy.random.random(size=(32, 1)).astype('int64')
    })

Advanced Usage
##############

How to Feed Sequence Data
-------------------------

Sequence data is a special data type supported by PaddlePaddle Fluid;
:code:`LoDTensor` can be used as the input data type. It requires the user to
provide:

1. all the data of the mini-batch to be trained on, and
2. the length of each sequence.

You can use :code:`fluid.create_lod_tensor` to create a :code:`LoDTensor`. When
passing in sequence information, you need to set :code:`lod_level`, the nesting
depth of the sequences. For example, if the training data consists of sentences
made up of words, :code:`lod_level=1`; if words make up sentences and sentences
make up paragraphs, then :code:`lod_level=2`.

For example:

.. code-block:: python

    sentence = fluid.layers.data(name="sentence", dtype="int64", shape=[1], lod_level=1)

    ...

    exe.run(feed={
        "sentence": fluid.create_lod_tensor(
            data=numpy.array([1, 3, 4, 5, 3, 6, 8], dtype='int64').reshape(-1, 1),
            lod=[4, 1, 2],
            place=fluid.CPUPlace()
        )
    })

The training data :code:`sentence` contains three samples whose lengths are
:code:`4, 1, 2`; they are :code:`data[0:4]`, :code:`data[4:5]`, and
:code:`data[5:7]` respectively.
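
For the nested case, a hedged sketch follows. It assumes the same length-based
:code:`lod` argument as above, extended to one list per nesting level; the data
values are illustrative:

.. code-block:: python

    # Words make up sentences, sentences make up paragraphs: lod_level=2.
    # Two paragraphs: the first holds 2 sentences (of lengths 3 and 1),
    # the second holds 1 sentence (of length 2).
    paragraph = fluid.layers.data(name="paragraph", dtype="int64", shape=[1], lod_level=2)

    ...

    exe.run(feed={
        "paragraph": fluid.create_lod_tensor(
            data=numpy.array([1, 2, 3, 4, 5, 6], dtype='int64').reshape(-1, 1),
            lod=[[2, 1], [3, 1, 2]],  # assumed: one length list per nesting level
            place=fluid.CPUPlace()
        )
    })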

How to Set Per-Device Training Data in ParallelExecutor
-------------------------------------------------------

When passing data with :code:`ParallelExecutor.run(feed=...)`, you can explicitly
specify the data for every training device (for example, every GPU). Pass a list
to the :code:`feed` argument; each element of the list is a dict whose keys are
the names of the data layers and whose values are the values of those layers.

For example:

.. code-block:: python

    parallel_executor = fluid.ParallelExecutor(use_cuda=True)
    parallel_executor.run(
        feed=[
            {
                "image": numpy.random.random(size=(32, 3, 224, 224)).astype('float32'),
                "label": numpy.random.random(size=(32, 1)).astype('int64')
            },
            {
                "image": numpy.random.random(size=(16, 3, 224, 224)).astype('float32'),
                "label": numpy.random.random(size=(16, 1)).astype('int64')
            },
        ]
    )

In the code above, GPU0 trains with 32 samples while GPU1 trains with 16 samples.

.. _user_guide_customize_batch_size_rank:

Customizing the Batch Size Dimension
------------------------------------

By default, PaddlePaddle Fluid takes the first dimension of the data as the batch
size, denoted by :code:`-1`. In advanced usage, however, the batch size can be
fixed, or represented by a different dimension or by multiple dimensions. All of
this is done by setting :code:`fluid.layers.data(append_batch_size=False)`.

1. Fixing the batch size dimension

   .. code-block:: python

      image = fluid.layers.data(name="image", shape=[32, 784], append_batch_size=False)

   Here, :code:`image` is always a matrix of size :code:`[32, 784]`.

2. Using a different dimension as the batch size

   .. code-block:: python

      sentence = fluid.layers.data(name="sentence",
                                   shape=[80, -1, 1],
                                   append_batch_size=False,
                                   dtype="int64")

   Here, the middle dimension of :code:`sentence` is the batch size. This data
   layout is used in fixed-length recurrent neural networks.

.. _user_guide_prepare_data:

############
Prepare Data
############

PaddlePaddle Fluid supports two ways of feeding data in:

1. Configure the data input layers with :code:`fluid.layers.data`, and pass the
   training data in with :code:`executor.run(feed=...)` of
   :ref:`api_guide_executor` or :ref:`api_guide_parallel_executor`.

2. First convert the training data into the Paddle-recognized
   :ref:`api_guide_recordio_file_format`, then configure data reading with
   :code:`fluid.layers.open_files` and :ref:`api_guide_reader`.

The two approaches compare as follows:

.. _user_guide_prepare_data_comparision:

+--------------------+--------------------------------+----------------------------------------+
|                    | Feed data                      | Use Reader                             |
+====================+================================+========================================+
| API                | :code:`executor.run(feed=...)` | :ref:`api_guide_reader`                |
+--------------------+--------------------------------+----------------------------------------+
| Data format        | Numpy Array                    | :ref:`api_guide_recordio_file_format`  |
+--------------------+--------------------------------+----------------------------------------+
| Data augmentation  | with other Python libraries    | with Fluid operators                   |
+--------------------+--------------------------------+----------------------------------------+
| Speed              | slow                           | fast                                   |
+--------------------+--------------------------------+----------------------------------------+
| Recommended use    | model debugging                | industrial training                    |
+--------------------+--------------------------------+----------------------------------------+

For detailed usage of these data preparation methods, please refer to:

.. toctree::
    :maxdepth: 2

    feeding_data
    use_recordio_reader

Python Reader
#############

To make it convenient to define data processing pipelines in Python, PaddlePaddle
Fluid supports Python Readers. For details, please refer to:

.. toctree::
    :maxdepth: 2

    reader.md
```eval_rst
.. _user_guide_reader:
```
# Python Reader
During the training and testing phases, PaddlePaddle programs need to read data. To help users write code that reads input data, we define the following:
- A *reader*: A function that reads data (from file, network, random number generator, etc) and yields the data items.
- A *reader creator*: A function that returns a reader function.
- A *reader decorator*: A function that takes in one or more readers and returns a reader.
- A *batch reader*: A function that reads data (from *reader*, file, network, random number generator, etc) and yields a batch of data items.
We also provide a function to convert a reader into a batch reader, along with frequently used reader creators and reader decorators.
## Data Reader Interface
*Data reader* doesn't have to be a function that reads and yields data items. It can just be any function without any parameters that creates an iterable (anything can be used in `for x in iterable`) as follows:
```
iterable = data_reader()
```
The item produced by the iterable should be a **single** entry of data, **not** a mini batch. The entry could be a single item or a tuple of items. Each item should be of one of the [supported types](http://www.paddlepaddle.org/doc/ui/data_provider/pydataprovider2.html?highlight=dense_vector#input-types) (e.g., a numpy 1-d array of float32, an int, a list of ints, etc.)
An example implementation for single item data reader creator is as follows:
```python
def reader_creator_random_image(width, height):
    def reader():
        while True:
            yield numpy.random.uniform(-1, 1, size=width*height)
    return reader
```
An example implementation for multiple item data reader creator is as follows:
```python
def reader_creator_random_image_and_label(width, height, label):
    def reader():
        while True:
            yield numpy.random.uniform(-1, 1, size=width*height), label
    return reader
```
## Batch Reader Interface
*Batch reader* can be any function without any parameters that creates an iterable (anything can be used in `for x in iterable`). The output of the iterable should be a batch (list) of data items. Each item inside the list should be a tuple.
Here are some valid outputs:
```python
# a mini batch of three data items. Each data item consists of three columns,
# each of which is 1.
[(1, 1, 1),
 (2, 2, 2),
 (3, 3, 3)]

# a mini batch of three data items, each of which is a list (single column).
[([1, 1, 1],),
 ([2, 2, 2],),
 ([3, 3, 3],)]
```
Please note that each item inside the list must be a tuple. The following is an invalid output:
```python
# wrong, [1, 1, 1] needs to be inside a tuple: ([1, 1, 1],).
# Otherwise it is ambiguous whether [1, 1, 1] means a single column of data [1, 1, 1],
# or three columns of data, each of which is 1.
[[1, 1, 1],
 [2, 2, 2],
 [3, 3, 3]]
```
It is easy to convert from a reader to a batch reader:
```python
mnist_train = paddle.dataset.mnist.train()
mnist_train_batch_reader = paddle.batch(mnist_train, 128)
```
It is also straightforward to create a custom batch reader:
```python
def custom_batch_reader():
    while True:
        batch = []
        for i in range(128):
            batch.append((numpy.random.uniform(-1, 1, 28*28),))  # note that it's a tuple being appended.
        yield batch

mnist_random_image_batch_reader = custom_batch_reader
```
## Usage
Following is how we can use the reader with PaddlePaddle:
The batch reader, a mapping from item(s) to data layer, the batch size, and the total number of passes will be passed into `paddle.train` as follows:
```python
# two data layers are created:
image_layer = paddle.layer.data("image", ...)
label_layer = paddle.layer.data("label", ...)
# ...
batch_reader = paddle.batch(paddle.dataset.mnist.train(), 128)
paddle.train(batch_reader, {"image":0, "label":1}, 128, 10, ...)
```
## Data Reader Decorator
The *Data reader decorator* takes in a single reader or multiple data readers and returns a new data reader. It is similar to a [python decorator](https://wiki.python.org/moin/PythonDecorators), but it does not use `@` in the syntax.
Since we have a strict interface for data readers (no parameters; returns a single data item), a data reader can be used flexibly via data reader decorators. Following are a few examples:
### Prefetch Data
Since reading data may take some time and training cannot proceed without data, it is generally a good idea to prefetch the data.
Use `paddle.reader.buffered` to prefetch data:
```python
buffered_reader = paddle.reader.buffered(paddle.dataset.mnist.train(), 100)
```
`buffered_reader` will try to buffer (prefetch) `100` data entries.
### Compose Multiple Data Readers
For example, suppose we want to use a source of real images (say, reusing the mnist dataset) and a source of random images as input for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661). We can do the following:
```python
def reader_creator_random_image(width, height):
    def reader():
        while True:
            yield numpy.random.uniform(-1, 1, size=width*height)
    return reader

def reader_creator_bool(t):
    def reader():
        while True:
            yield t
    return reader

true_reader = reader_creator_bool(True)
false_reader = reader_creator_bool(False)

reader = paddle.reader.compose(paddle.dataset.mnist.train(),
                               reader_creator_random_image(20, 20),
                               true_reader, false_reader)
# Skipped 1 because paddle.dataset.mnist.train() produces two items per data entry.
# And we don't care about the second item at this time.
paddle.train(paddle.batch(reader, 128),
             {"true_image": 0, "fake_image": 2, "true_label": 3, "false_label": 4}, ...)
```
### Shuffle
Given the shuffle buffer size `n`, `paddle.reader.shuffle` returns a data reader that buffers `n` data entries and shuffles them before a data entry is read.
Example:
```python
reader = paddle.reader.shuffle(paddle.dataset.mnist.train(), 512)
```
## Q & A
### Why does a reader return only a single entry, and not a mini batch?
Returning a single entry makes it much easier to reuse existing data readers (for example, if an existing reader returns 3 entries instead of a single entry, the training code would be more complicated, because it would need to handle cases like a batch size of 2).
We provide the function `paddle.batch` to turn a (single entry) reader into a batch reader.
### Why do we need a batch reader? Isn't it sufficient to give the reader and batch_size as arguments during training?
In most cases, it is sufficient to give the reader and batch_size as arguments to the train method. However, sometimes the user wants to customize the order of data entries inside a mini batch, or even change the batch size dynamically. For these cases a batch reader is very efficient and helpful.
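For instance, here is a hedged sketch of a batch reader that doubles its batch size after every batch, something a fixed `(reader, batch_size)` pair cannot express. The function name and growth policy are illustrative:

```python
def growing_batch_reader(reader, initial_batch_size=16, max_batch_size=256):
    def batch_reader():
        batch_size = initial_batch_size
        batch = []
        for item in reader():
            batch.append(item)
            if len(batch) == batch_size:
                yield batch
                batch = []
                batch_size = min(batch_size * 2, max_batch_size)
        if batch:  # flush the last, possibly partial, batch
            yield batch
    return batch_reader

# e.g., wrap an existing single-entry reader:
batch_reader = growing_batch_reader(paddle.dataset.mnist.train())
```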
### Why use a dictionary instead of a list to provide mapping?
Using a dictionary (`{"image":0, "label":1}`) instead of a list (`["image", "label"]`) gives the advantage that the user can easily reuse the items (e.g., using `{"image_a":0, "image_b":0, "label":1}`) or even skip an item (e.g., using `{"image_a":0, "label":2}`).
### How to create a custom data reader creator?
```python
def image_reader_creator(image_path, label_path, n):
    def reader():
        f = open(image_path)
        l = open(label_path)
        images = numpy.fromfile(
            f, 'ubyte', count=n * 28 * 28).reshape((n, 28 * 28)).astype('float32')
        images = images / 255.0 * 2.0 - 1.0
        labels = numpy.fromfile(l, 'ubyte', count=n).astype("int")
        for i in range(n):
            yield images[i, :], labels[i]  # a single entry of data is created each time
        f.close()
        l.close()
    return reader

# image_reader_creator creates a reader
reader = image_reader_creator("/path/to/image_file", "/path/to/label_file", 1024)
paddle.train(paddle.batch(reader, 128), {"image": 0, "label": 1}, ...)
```
### How is `paddle.train` implemented?
An example implementation of `paddle.train` is:
```python
def train(batch_reader, mapping, batch_size, total_pass):
    for pass_idx in range(total_pass):
        for mini_batch in batch_reader():  # this loop will never end in online learning.
            do_forward_backward(mini_batch, mapping)
```

.. _user_guide_use_recordio_as_train_data:

###################################
Use RecordIO Files as Training Data
###################################

Compared with :ref:`user_guide_use_numpy_array_as_train_data`,
:ref:`user_guide_use_recordio_as_train_data` performs better. However, the user
needs to convert the training dataset into the RecordIO file format first, and
then import the RecordIO files into the network configuration with the
:ref:`api_fluid_layers_open_files` layer. The user can also use
:ref:`api_fluid_layers_double_buffer` to speed up copying data from main memory
to GPU memory, and the :ref:`api_fluid_layers_Preprocessor` utility for data
augmentation.

Converting Training Data into the RecordIO File Format
######################################################

In the :ref:`api_guide_recordio_file_format`, every record is a
:code:`vector<LoDTensor>`, i.e. an array of tensors with sequence-information
support. This array contains all the features needed for training; for image
classification, for example, it can contain the image and the class label.

You can use :ref:`api_fluid_recordio_writer_convert_reader_to_recordio_file` to
convert a :ref:`user_guide_reader` into a single RecordIO file, or
:ref:`api_fluid_recordio_writer_convert_reader_to_recordio_files` to convert a
:ref:`user_guide_reader` into multiple RecordIO files.

The usage is as follows:

.. code-block:: python

    import paddle
    import paddle.fluid as fluid
    import numpy

    def reader_creator():
        def __impl__():
            for i in range(1000):
                yield [
                    numpy.random.random(size=[3, 224, 224]).astype("float32"),
                    numpy.random.random(size=[1]).astype("int64")
                ]
        return __impl__

    img = fluid.layers.data(name="image", shape=[3, 224, 224])
    label = fluid.layers.data(name="label", shape=[1], dtype="int64")
    feeder = fluid.DataFeeder(feed_list=[img, label], place=fluid.CPUPlace())

    BATCH_SIZE = 32
    reader = paddle.batch(reader_creator(), batch_size=BATCH_SIZE)
    fluid.recordio_writer.convert_reader_to_recordio_file(
        "train.recordio", feeder=feeder, reader_creator=reader)

Here :code:`reader_creator` creates a :code:`Reader`, and
:ref:`api_fluid_data_feeder_DataFeeder` is a utility that converts a
:code:`Reader` into a :code:`LoDTensor`. For details, please refer to
:ref:`user_guide_reader`.

The program above converts the data produced by :code:`reader_creator` into the
file :code:`train.recordio`, in which every record holds 32 samples. If the batch
size will be adjusted during training, set the number of samples per record to 1
and refer to :ref:`user_guide_use_recordio_as_train_data_use_op_create_batch`;
a sketch follows.
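
A hedged sketch of this record-per-sample conversion, reusing the
:code:`reader_creator` and :code:`feeder` from the example above:

.. code-block:: python

    # One sample per record; batches are then formed during training
    # with fluid.layers.batch (see the section referenced above).
    reader = paddle.batch(reader_creator(), batch_size=1)
    fluid.recordio_writer.convert_reader_to_recordio_file(
        "train.recordio", feeder=feeder, reader_creator=reader)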

Configuring the Network and Opening RecordIO Files
##################################################

Once the RecordIO files have been converted, you can open them with
:ref:`api_fluid_layers_open_files` and read their contents with
:ref:`api_fluid_layers_read_file`. Basic usage is as follows:

.. code-block:: python

    import paddle.fluid as fluid

    file_obj = fluid.layers.open_files(
        filenames=["train.recordio"],
        shapes=[[3, 224, 224], [1]],
        lod_levels=[0, 0],
        dtypes=["float32", "int64"],
        pass_num=100
    )

    image, label = fluid.layers.read_file(file_obj)

If :code:`pass_num` is set, then after all the data has been read, reading starts
over from the beginning, until the data has been read :code:`pass_num` times.

Advanced Usage
##############

Using :ref:`api_fluid_layers_double_buffer`
-------------------------------------------

:code:`double_buffer` uses double buffering to copy training data from main
memory to GPU memory. To configure it, wrap the file object with
:ref:`api_fluid_layers_double_buffer`. For example:

.. code-block:: python

    import paddle.fluid as fluid

    file_obj = fluid.layers.open_files(...)
    file_obj = fluid.layers.double_buffer(file_obj)
    image, label = fluid.layers.read_file(file_obj)

For background on the technique, see
`Multiple buffering <https://en.wikipedia.org/wiki/Multiple_buffering>`_.

Configuring Data Augmentation
-----------------------------

Use :ref:`api_fluid_layers_Preprocessor` to configure data augmentation for a
file. For example:

.. code-block:: python

    import paddle.fluid as fluid

    file_obj = fluid.layers.open_files(...)
    preprocessor = fluid.layers.Preprocessor(reader=file_obj)
    with preprocessor.block():
        image, label = preprocessor.inputs()
        image = image / 2
        label = label + 1
        preprocessor.outputs(image, label)

As the code above shows, :code:`Preprocessor` defines a data augmentation module,
and the concrete augmentation operations are defined inside
:code:`with preprocessor.block()`. The user obtains the fields of the data file
through :code:`preprocessor.inputs()` and marks the preprocessed outputs with
:code:`preprocessor.outputs()`.

.. _user_guide_use_recordio_as_train_data_use_op_create_batch:

Batching with an Op
-------------------

Use :ref:`api_fluid_layers_batch` to form batches dynamically during training.
For example:

.. code-block:: python

    import paddle.fluid as fluid

    file_obj = fluid.layers.open_files(...)
    file_obj = fluid.layers.batch(file_obj, batch_size=32)
    img, label = fluid.layers.read_file(file_obj)

Note that if the last few samples of the dataset cannot fill a batch of
:code:`batch_size`, they form a smaller batch of their own for training.

Shuffling the Input Data
------------------------

Use :ref:`api_fluid_layers_shuffle` to reshuffle the training data dynamically
during training. For example:

.. code-block:: python

    import paddle.fluid as fluid

    file_obj = fluid.layers.open_files(...)
    file_obj = fluid.layers.shuffle(file_obj, buffer_size=8192)
    img, label = fluid.layers.read_file(file_obj)

Note that:

1. :code:`shuffle` works by first reading in :code:`buffer_size` samples and then
   randomly picking samples from that buffer for training.
2. The :code:`buffer_size` of :code:`shuffle` consumes training memory; make sure
   enough memory is available to cache :code:`buffer_size` samples during
   training.