PaddlePaddle / PaddleDetection
Commit e05abab6, authored May 31, 2018 by Yancey1989

use recordio in dist train

Parent: ccf61b30

Showing 6 changed files with 178 additions and 17 deletions
doc/v2/howto/recordio/README.md (+122, -0)
paddle/fluid/framework/details/threaded_ssa_graph_executor.cc (+4, -2)
paddle/fluid/operators/reader/create_recordio_file_reader_op.cc (+7, -5)
python/paddle/fluid/layers/io.py (+11, -9)
python/paddle/fluid/recordio_writer.py (+34, -1)
tools/codestyle/docstring_checker.pyc (+0, -0)
doc/v2/howto/recordio/README.md (new file, mode 100644)
# How to use RecordIO in Fluid
If you want to use RecordIO as your training data format, you need to convert your training data
to RecordIO files and read them during training. PaddlePaddle Fluid provides
interfaces for working with RecordIO files.
## Generate RecordIO File
Before starting training with RecordIO files, convert your training data
to the RecordIO format with `fluid.recordio_writer.convert_reader_to_recordio_file`, as in the following sample:
```python
reader = paddle.batch(mnist.train(), batch_size=1)
feeder = fluid.DataFeeder(
    feed_list=[  # order is image and label
        fluid.layers.data(name='image', shape=[784]),
        fluid.layers.data(name='label', shape=[1], dtype='int64'),
    ],
    place=fluid.CPUPlace())
fluid.recordio_writer.convert_reader_to_recordio_file(
    './mnist.recordio', reader, feeder)
```
The above code generates a RecordIO file `./mnist.recordio` on your host.
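For context, a "reader creator" such as `mnist.train` above is simply a zero-argument callable that returns an iterator over samples. A minimal stand-in (with hypothetical zero-valued data, not the real MNIST reader) looks like:

```python
# A minimal stand-in for a reader creator like mnist.train():
# a zero-argument callable that returns an iterator of samples.
def fake_train():
    def reader():
        for i in range(4):
            # each sample: (784-dim image, int label), matching the
            # fluid.layers.data declarations above
            yield [0.0] * 784, i % 10
    return reader

samples = list(fake_train()())
print(len(samples))        # 4
print(len(samples[0][0]))  # 784
```

`paddle.batch(reader, batch_size=N)` then wraps such a reader to yield lists of `N` samples at a time.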
## Use the RecordIO file in a Local Training Job
PaddlePaddle Fluid provides the interface `fluid.layers.io.open_recordio_file` to load your RecordIO file,
and you can then use it as a layer in your network configuration, as in the following sample:
```python
data_file = fluid.layers.io.open_recordio_file(
    filename="./mnist.recordio",
    shapes=[(-1, 784), (-1, 1)],
    lod_levels=[0, 0],
    dtypes=["float32", "int64"])
data_file = fluid.layers.io.batch(data_file, batch_size=4)

img, label = fluid.layers.io.read_file(data_file)
hidden = fluid.layers.fc(input=img, size=100, act='tanh')
prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
loss = fluid.layers.cross_entropy(input=prediction, label=label)
avg_loss = fluid.layers.mean(loss)

fluid.optimizer.Adam(learning_rate=1e-3).minimize(avg_loss)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
avg_loss_np = []

# train a pass
batch_id = 0
while True:
    tmp, = exe.run(fetch_list=[avg_loss])
    avg_loss_np.append(tmp)
    print(batch_id)
    batch_id += 1
```
## Use the RecordIO files in Distributed Training

1. Generate multiple RecordIO files.

For a distributed training job you may have multiple trainer nodes, with one or more RecordIO files
per trainer node. Use the interface `fluid.recordio_writer.convert_reader_to_recordio_files` to convert
your training data into multiple RecordIO files, as in the following sample:
```python
reader = paddle.batch(mnist.train(), batch_size=1)
feeder = fluid.DataFeeder(
    feed_list=[  # order is image and label
        fluid.layers.data(name='image', shape=[784]),
        fluid.layers.data(name='label', shape=[1], dtype='int64'),
    ],
    place=fluid.CPUPlace())
fluid.recordio_writer.convert_reader_to_recordio_files(
    filename_suffix='./mnist.recordio',
    batch_per_file=100,
    reader_creator=reader,
    feeder=feeder)
```
The above code generates multiple RecordIO files on your host, like:

```bash
.
|-mnist.recordio-00000
|-mnist.recordio-00001
|-mnist.recordio-00002
|-mnist.recordio-00003
|-mnist.recordio-00004
```
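Each shard name is the given `filename_suffix` plus a zero-padded five-digit index; the naming pattern can be reproduced standalone:

```python
# Reproduce the shard naming scheme: filename_suffix plus a "-%05d"
# zero-padded index, matching the listing above.
suffix = './mnist.recordio'
names = ["%s-%05d" % (suffix, i) for i in range(5)]
print(names[0])  # ./mnist.recordio-00000
print(names[4])  # ./mnist.recordio-00004
```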
1. Read these RecordIO files with `fluid.layers.io.open_recordio_file`.

For a distributed training job, the distributed operator system schedules trainer processes on multiple nodes,
and each trainer process reads a part of the whole training data. We usually take the following approach to
distribute the training data across trainer processes as uniformly as possible:
```python
import glob
import os

def gen_train_list(file_pattern, trainers, trainer_id):
    file_list = glob.glob(file_pattern)
    ret_list = []
    for idx, f in enumerate(file_list):
        if (idx + trainers) % trainers == trainer_id:
            ret_list.append(f)
    return ret_list

trainers = int(os.getenv("TRAINERS"))
trainer_id = int(os.getenv("PADDLE_INIT_TRAINER_ID"))

data_file = fluid.layers.io.open_recordio_file(
    filename=gen_train_list("./mnist.recordio*", trainers, trainer_id),
    shapes=[(-1, 784), (-1, 1)],
    lod_levels=[0, 0],
    dtypes=["float32", "int64"])
data_file = fluid.layers.io.batch(data_file, batch_size=4)
```
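As a standalone sanity check (no Fluid required), the round-robin assignment in `gen_train_list` can be exercised with hypothetical shard names; note that `(idx + trainers) % trainers` is equivalent to `idx % trainers`:

```python
# Standalone sketch of the round-robin shard assignment above,
# using hypothetical shard names instead of glob.glob() results.
def assign_files(file_list, trainers, trainer_id):
    return [f for idx, f in enumerate(file_list)
            if (idx + trainers) % trainers == trainer_id]

files = ["mnist.recordio-%05d" % i for i in range(5)]
print(assign_files(files, 2, 0))  # shards 0, 2, 4
print(assign_files(files, 2, 1))  # shards 1, 3
```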
paddle/fluid/framework/details/threaded_ssa_graph_executor.cc
```diff
@@ -189,9 +189,11 @@ void ThreadedSSAGraphExecutor::RunOp(
     BlockingQueue<VarHandleBase *> *ready_var_q, details::OpHandleBase *op) {
   auto op_run = [ready_var_q, op, this] {
     try {
-      VLOG(10) << op << " " << op->Name() << " : " << op->DebugString();
+      VLOG(10) << "PE start "
+               << " " << op->Name() << " : " << op->DebugString();
       op->Run(strategy_.use_event_);
-      VLOG(10) << op << " " << op->Name() << " Done ";
+      VLOG(10) << "PE end "
+               << " " << op->Name() << " Done ";
       running_ops_--;
       ready_var_q->Extend(op->Outputs());
       VLOG(10) << op << " " << op->Name() << "Signal posted";
```
paddle/fluid/operators/reader/create_recordio_file_reader_op.cc
```diff
@@ -65,20 +65,22 @@ class CreateRecordIOReaderOp : public framework::OperatorBase {
                       static_cast<int>(shape_concat.size()),
                       "The accumulate of all ranks should be equal to the "
                       "shape concat's length.");
-    std::string filename = Attr<std::string>("filename");
+    auto filenames = Attr<std::vector<std::string>>("filenames");
     auto* out = scope.FindVar(Output("Out"))
                     ->template GetMutable<framework::ReaderHolder>();
-    out->Reset(new RecordIOFileReader<true>(
-        filename, RestoreShapes(shape_concat, ranks)));
+    for (auto& fn : filenames) {
+      out->Reset(new RecordIOFileReader<true>(
+          fn, RestoreShapes(shape_concat, ranks)));
+    }
   }
 };

 class CreateRecordIOReaderOpMaker : public FileReaderMakerBase {
  protected:
   void Apply() override {
-    AddAttr<std::string>("filename",
-                         "The filename of record io reader");
+    AddAttr<std::vector<std::string>>("filenames",
+                                      "The filenames of record io reader");
     AddComment(R"DOC(
       CreateRecordIOReader Operator
```
python/paddle/fluid/layers/io.py
```diff
@@ -21,7 +21,7 @@ from ..layer_helper import LayerHelper
 from ..executor import global_scope

 __all__ = [
-    'data', 'BlockGuardServ', 'ListenAndServ', 'Send', 'open_recordio_file',
+    'data', 'BlockGuardServ', 'ListenAndServ', 'Send', 'open_recordio_files',
     'open_files', 'read_file', 'shuffle', 'batch', 'double_buffer',
     'random_data_generator', 'Preprocessor'
 ]
@@ -291,7 +291,7 @@ def _copy_reader_create_op_(block, op):
     return new_op


-def open_recordio_file(filename,
+def open_recordio_files(filenames,
                        shapes,
                        lod_levels,
                        dtypes,
@@ -304,7 +304,7 @@ def open_recordio_file(filename,
     Via the Reader Variable, we can get data from the given RecordIO file.

     Args:
-       filename(str): The RecordIO file's name.
+       filename(str) or list(str): The RecordIO file's name.
        shapes(list): List of tuples which declaring data shapes.
        lod_levels(list): List of ints which declaring data lod_level.
        dtypes(list): List of strs which declaring data type.
@@ -336,6 +336,8 @@ def open_recordio_file(filename,
         ranks.append(len(shape))

     var_name = unique_name('open_recordio_file')
+    if isinstance(filenames, str):
+        filenames = [filenames]

     startup_blk = default_startup_program().current_block()
     startup_var = startup_blk.create_var(name=var_name)
@@ -345,7 +347,7 @@ def open_recordio_file(filename,
         attrs={
             'shape_concat': shape_concat,
             'lod_levels': lod_levels,
-            'filename': filename,
+            'filenames': filenames,
             'ranks': ranks
         })
```
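The `isinstance` check added in the hunk above keeps the old single-filename call style working. A standalone sketch of just that normalization:

```python
# Sketch of the backward-compatible normalization added to
# open_recordio_files: a single filename string becomes a one-element list.
def normalize_filenames(filenames):
    if isinstance(filenames, str):
        filenames = [filenames]
    return filenames

print(normalize_filenames('./mnist.recordio'))            # ['./mnist.recordio']
print(normalize_filenames(['a.recordio', 'b.recordio']))  # list passes through unchanged
```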
python/paddle/fluid/recordio_writer.py
```diff
@@ -14,7 +14,7 @@
 import core
 import contextlib
 from ..batch import batch

 __all__ = ['convert_reader_to_recordio_file']
@@ -46,3 +46,36 @@ def convert_reader_to_recordio_file(
             writer.complete_append_tensor()
             counter += 1
     return counter
+
+
+import paddle
+
+
+def convert_reader_to_recordio_files(
+        filename_suffix,
+        batch_per_file,
+        reader_creator,
+        feeder,
+        compressor=core.RecordIOWriter.Compressor.Snappy,
+        max_num_records=1000,
+        feed_order=None):
+    if feed_order is None:
+        feed_order = feeder.feed_names
+    lines = []
+    f_idx = 0
+    counter = 0
+    for idx, batch in enumerate(reader_creator()):
+        lines.append(batch)
+        if idx >= batch_per_file and idx % batch_per_file == 0:
+            filename = "%s-%05d" % (filename_suffix, f_idx)
+            with create_recordio_writer(filename, compressor,
+                                        max_num_records) as writer:
+                for l in lines:
+                    res = feeder.feed(l)
+                    for each in feed_order:
+                        writer.append_tensor(res[each])
+                    writer.complete_append_tensor()
+                    counter += 1
+            lines = []
+            f_idx += 1
+    return counter
```
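The rollover logic in `convert_reader_to_recordio_files` flushes the accumulated batches each time `idx` reaches a multiple of `batch_per_file`. A pure-Python sketch of just that chunking (no Fluid) shows two side effects of the condition as written: the first shard receives one extra batch, and any trailing batches after the last flush are never written:

```python
# Pure-Python sketch of the shard-rollover logic in
# convert_reader_to_recordio_files: batches accumulate in `lines` and are
# flushed whenever idx >= batch_per_file and idx is a multiple of it.
def shard_batches(num_batches, batch_per_file):
    lines, shards = [], []
    for idx in range(num_batches):
        lines.append(idx)
        if idx >= batch_per_file and idx % batch_per_file == 0:
            shards.append(lines)  # stand-in for writing one RecordIO shard
            lines = []
    return shards, lines  # `lines` holds any unflushed trailing batches

shards, leftover = shard_batches(10, 3)
print([len(s) for s in shards])  # [4, 3, 3] -- first shard gets one extra
print(leftover)                  # [] -- nothing trails in this case
```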
tools/codestyle/docstring_checker.pyc
(Cannot preview this file type.)