PaddlePaddle / PaddleDetection

Commit 80642bee
Authored Jun 28, 2017 by wanghaoshuang

fix_xmap and refine flowers dataset

Parent: 633082ad
Showing 5 changed files with 72 additions and 67 deletions (+72 −67)
python/paddle/v2/dataset/__init__.py             +2  −1
python/paddle/v2/dataset/flowers.py              +35 −32
python/paddle/v2/dataset/tests/flowers_test.py   +2  −2
python/paddle/v2/reader/decorator.py             +23 −24
python/paddle/v2/reader/tests/decorator_test.py  +10 −8
python/paddle/v2/dataset/__init__.py

```diff
@@ -25,8 +25,9 @@ import uci_housing
 import sentiment
 import wmt14
 import mq2007
+import flowers

 __all__ = [
     'mnist', 'imikolov', 'imdb', 'cifar', 'movielens', 'conll05', 'sentiment'
-    'uci_housing', 'wmt14', 'mq2007'
+    'uci_housing', 'wmt14', 'mq2007', 'flowers'
 ]
```
python/paddle/v2/dataset/flowers.py

```diff
@@ -13,18 +13,18 @@
 # limitations under the License.
 """
 This module will download dataset from
 http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html
 and parse train/test set intopaddle reader creators.
 This set contains images of flowers belonging to 102 different categories.
 The images were acquired by searching the web and taking pictures. There are a
 minimum of 40 images for each category.
 The database was used in:
 Nilsback, M-E. and Zisserman, A. Automated flower classification over a large
 number of classes.Proceedings of the Indian Conference on Computer Vision,
 Graphics and Image Processing (2008)
 http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback08.{pdf,ps.gz}.
 """
@@ -34,9 +34,9 @@ from common import download
 import tarfile
 import scipy.io as scio
 from paddle.v2.image import *
+from paddle.v2.reader import *
 import os
 import numpy as np
-import paddle.v2 as paddle
 from multiprocessing import cpu_count
 __all__ = ['train', 'test', 'valid']
@@ -53,8 +53,8 @@ def default_mapper(sample):
     map image bytes data to type needed by model input layer
     '''
     img, label = sample
-    img = paddle.image.load_image_bytes(img)
-    img = paddle.image.simple_transform(img, 256, 224, True)
+    img = load_image_bytes(img)
+    img = simple_transform(img, 256, 224, True)
     return img.flatten().astype('float32'), label
@@ -63,22 +63,23 @@ def reader_creator(data_file,
                    setid_file,
                    dataset_name,
                    mapper=default_mapper,
-                   buffered_size=1024):
+                   buffered_size=1024,
+                   useXmap=True):
     '''
     1. read images from tar file and
         merge images into batch files in 102flowers.tgz_batch/
     2. get a reader to read sample from batch file
     :param data_file: downloaded data file
     :type data_file: string
     :param label_file: downloaded label file
     :type label_file: string
     :param setid_file: downloaded setid file containing information
                        about how to split dataset
     :type setid_file: string
     :param dataset_name: data set name (tstid|trnid|valid)
     :type dataset_name: string
     :param mapper: a function to map image bytes data to type
                    needed by model input layer
     :type mapper: callable
     :param buffered_size: the size of buffer used to process images
@@ -105,15 +106,17 @@ def reader_creator(data_file,
         for sample, label in itertools.izip(data, batch['label']):
             yield sample, int(label)

-    return paddle.reader.xmap_readers(mapper, reader,
-                                      cpu_count(), buffered_size)
+    if useXmap:
+        return xmap_readers(mapper, reader, cpu_count(), buffered_size)
+    else:
+        return map_readers(mapper, reader)


-def train(mapper=default_mapper, buffered_size=1024):
+def train(mapper=default_mapper, buffered_size=1024, useXmap=True):
     '''
     Create flowers training set reader.
     It returns a reader, each sample in the reader is
     image pixels in [0, 1] and label in [1, 102]
     translated from original color image by steps:
     1. resize to 256*256
     2. random crop to 224*224
@@ -128,15 +131,15 @@ def train(mapper=default_mapper, buffered_size=1024):
     return reader_creator(
         download(DATA_URL, 'flowers', DATA_MD5),
         download(LABEL_URL, 'flowers', LABEL_MD5),
-        download(SETID_URL, 'flowers', SETID_MD5), 'trnid', mapper,
-        buffered_size)
+        download(SETID_URL, 'flowers', SETID_MD5), 'tstid', mapper,
+        buffered_size, useXmap)


-def test(mapper=default_mapper, buffered_size=1024):
+def test(mapper=default_mapper, buffered_size=1024, useXmap=True):
     '''
     Create flowers test set reader.
     It returns a reader, each sample in the reader is
     image pixels in [0, 1] and label in [1, 102]
     translated from original color image by steps:
     1. resize to 256*256
     2. random crop to 224*224
@@ -151,15 +154,15 @@ def test(mapper=default_mapper, buffered_size=1024):
     return reader_creator(
         download(DATA_URL, 'flowers', DATA_MD5),
         download(LABEL_URL, 'flowers', LABEL_MD5),
-        download(SETID_URL, 'flowers', SETID_MD5), 'tstid', mapper,
-        buffered_size)
+        download(SETID_URL, 'flowers', SETID_MD5), 'trnid', mapper,
+        buffered_size, useXmap)


-def valid(mapper=default_mapper, buffered_size=1024):
+def valid(mapper=default_mapper, buffered_size=1024, useXmap=True):
     '''
     Create flowers validation set reader.
     It returns a reader, each sample in the reader is
     image pixels in [0, 1] and label in [1, 102]
     translated from original color image by steps:
     1. resize to 256*256
     2. random crop to 224*224
@@ -175,7 +178,7 @@ def valid(mapper=default_mapper, buffered_size=1024):
         download(DATA_URL, 'flowers', DATA_MD5),
         download(LABEL_URL, 'flowers', LABEL_MD5),
         download(SETID_URL, 'flowers', SETID_MD5), 'valid', mapper,
-        buffered_size)
+        buffered_size, useXmap)


 def fetch():
```
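The new `useXmap` switch lets callers bypass the threaded `xmap_readers` and fall back to a plain, single-threaded `map_readers`. A minimal Python 3 sketch of that dispatch follows; the names mirror `paddle.v2.reader`, but the bodies are simplified stand-ins (a thread pool instead of the hand-rolled queue pipeline), not the real implementations:

```python
# Sketch of the useXmap dispatch added to reader_creator in this commit.
# map_readers applies the mapper lazily in the calling thread; xmap_readers
# maps samples concurrently (here via a thread pool for brevity).
from concurrent.futures import ThreadPoolExecutor


def map_readers(mapper, reader):
    def new_reader():
        for sample in reader():
            yield mapper(sample)
    return new_reader


def xmap_readers(mapper, reader, process_num, buffer_size):
    def new_reader():
        with ThreadPoolExecutor(max_workers=process_num) as pool:
            # Executor.map preserves input order; the real xmap_readers
            # only guarantees order when called with order=True.
            yield from pool.map(mapper, reader())
    return new_reader


def make_reader(mapper, reader, use_xmap=True, process_num=4, buffer_size=1024):
    # The branch introduced by this commit: threaded vs. plain mapping.
    if use_xmap:
        return xmap_readers(mapper, reader, process_num, buffer_size)
    return map_readers(mapper, reader)


if __name__ == "__main__":
    square = lambda x: x * x
    numbers = lambda: iter(range(5))
    print(list(make_reader(square, numbers)()))         # [0, 1, 4, 9, 16]
    print(list(make_reader(square, numbers, False)()))  # [0, 1, 4, 9, 16]
```

Either path yields the same samples; `useXmap=False` is useful when the mapper is not thread-safe or when deterministic, single-threaded iteration is wanted.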
python/paddle/v2/dataset/tests/flowers_test.py

```diff
@@ -31,13 +31,13 @@ class TestFlowers(unittest.TestCase):
     def test_train(self):
         instances, max_label_value = self.check_reader(
             paddle.v2.dataset.flowers.train())
-        self.assertEqual(instances, 1020)
+        self.assertEqual(instances, 6149)
         self.assertEqual(max_label_value, 102)

     def test_test(self):
         instances, max_label_value = self.check_reader(
             paddle.v2.dataset.flowers.test())
-        self.assertEqual(instances, 6149)
+        self.assertEqual(instances, 1020)
         self.assertEqual(max_label_value, 102)

     def test_valid(self):
```
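The swapped expectations mirror the set-id swap in `flowers.py`: `train()` now reads the 6149-sample `tstid` split and `test()` the 1020-sample `trnid` split. The tests count samples through a `check_reader` helper defined elsewhere in `flowers_test.py`; a hypothetical Python 3 reconstruction of that counting pattern, using a fake reader in place of the real flowers dataset:

```python
# Assumed reconstruction of the check_reader pattern used by these tests:
# walk the reader once, counting samples and tracking the largest label.
def check_reader(reader):
    instances = 0
    max_label_value = 0
    for _, label in reader():
        instances += 1
        max_label_value = max(max_label_value, label)
    return instances, max_label_value


def fake_flowers_reader():
    # Stand-in for paddle.v2.dataset.flowers.train(): yields
    # (pixels, label) pairs without downloading anything.
    def reader():
        for label in [1, 102, 57]:
            yield ([0.0] * 10, label)
    return reader


if __name__ == "__main__":
    print(check_reader(fake_flowers_reader()))  # (3, 102)
```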
python/paddle/v2/reader/decorator.py

```diff
@@ -166,12 +166,12 @@ def buffered(reader, size):
     The buffered data reader will read and save data entries into a
     buffer. Reading from the buffered data reader will proceed as long
     as the buffer is not empty.

     :param reader: the data reader to read from.
     :type reader: callable
     :param size: max buffer size.
     :type size: int

     :returns: the buffered data reader.
     """
@@ -238,7 +238,7 @@ def xmap_readers(mapper, reader, process_num, buffer_size, order=False):
     :type mapper: callable
     :param reader: the data reader to read from
     :type reader: callable
     :param process_num: process number to handle original sample
     :type process_num: int
     :param buffer_size: max buffer size
     :type buffer_size: int
@@ -248,9 +248,6 @@ def xmap_readers(mapper, reader, process_num, buffer_size, order=False):
     :rtype: callable
     """
     end = XmapEndSignal()
-    in_queue = Queue(buffer_size)
-    out_queue = Queue(buffer_size)
-    out_order = [0]

     # define a worker to read samples from reader to in_queue
     def read_worker(reader, in_queue):
@@ -266,12 +263,6 @@ def xmap_readers(mapper, reader, process_num, buffer_size, order=False):
             in_order += 1
         in_queue.put(end)

-    # start a read worker in a thread
-    target = order_read_worker if order else read_worker
-    t = Thread(target=target, args=(reader, in_queue))
-    t.daemon = True
-    t.start()
-
     # define a worker to handle samples from in_queue by mapper
     # and put mapped samples into out_queue
     def handle_worker(in_queue, out_queue, mapper):
@@ -298,19 +289,27 @@ def xmap_readers(mapper, reader, process_num, buffer_size, order=False):
             in_queue.put(end)
         out_queue.put(end)

-    # start several handle_workers
-    target = order_handle_worker if order else handle_worker
-    args = (in_queue, out_queue, mapper, out_order) if order else (
-        in_queue, out_queue, mapper)
-    workers = []
-    for i in xrange(process_num):
-        worker = Thread(target=target, args=args)
-        worker.daemon = True
-        workers.append(worker)
-    for w in workers:
-        w.start()
-
     def xreader():
+        in_queue = Queue(buffer_size)
+        out_queue = Queue(buffer_size)
+        out_order = [0]
+        # start a read worker in a thread
+        target = order_read_worker if order else read_worker
+        t = Thread(target=target, args=(reader, in_queue))
+        t.daemon = True
+        t.start()
+        # start several handle_workers
+        target = order_handle_worker if order else handle_worker
+        args = (in_queue, out_queue, mapper, out_order) if order else (
+            in_queue, out_queue, mapper)
+        workers = []
+        for i in xrange(process_num):
+            worker = Thread(target=target, args=args)
+            worker.daemon = True
+            workers.append(worker)
+        for w in workers:
+            w.start()
+
         sample = out_queue.get()
         while not isinstance(sample, XmapEndSignal):
             yield sample
```
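This hunk is the `fix_xmap` part of the commit: queues and worker threads used to be created once when `xmap_readers` was called, so after one pass the queues were drained, the workers were gone, and the returned reader could not be iterated again. Moving all of that setup inside `xreader()` gives every call fresh state. A simplified, unordered Python 3 re-implementation of the pattern (single read thread, end-marker protocol as in the diff; the real version also supports an `order` mode):

```python
# Sketch of the re-readability fix: all queues and threads are created
# inside xreader(), so each call to the returned reader starts clean.
from queue import Queue
from threading import Thread


class XmapEndSignal:
    """Sentinel marking the end of the sample stream."""


def xmap_readers(mapper, reader, process_num, buffer_size):
    end = XmapEndSignal()

    def read_worker(in_queue):
        # Feed raw samples into in_queue, then signal completion.
        for sample in reader():
            in_queue.put(sample)
        in_queue.put(end)

    def handle_worker(in_queue, out_queue):
        # Map samples until the end marker arrives.
        sample = in_queue.get()
        while not isinstance(sample, XmapEndSignal):
            out_queue.put(mapper(sample))
            sample = in_queue.get()
        in_queue.put(end)   # let sibling workers see the signal too
        out_queue.put(end)  # one end marker per worker

    def xreader():
        # Fresh queues and threads per call -- this is the fix.
        in_queue = Queue(buffer_size)
        out_queue = Queue(buffer_size)
        Thread(target=read_worker, args=(in_queue,), daemon=True).start()
        for _ in range(process_num):
            Thread(target=handle_worker, args=(in_queue, out_queue),
                   daemon=True).start()
        finished = 0
        while finished < process_num:
            sample = out_queue.get()
            if isinstance(sample, XmapEndSignal):
                finished += 1
            else:
                yield sample

    return xreader


if __name__ == "__main__":
    r = xmap_readers(lambda x: x * x, lambda: range(10), 4, 16)
    for _ in range(3):  # readable multiple times now
        assert sorted(r()) == [x * x for x in range(10)]
```

With the pre-commit structure the second pass over `r()` would have hung or returned nothing, because the shared `out_queue` had already been drained; that is exactly the behavior the updated `decorator_test.py` below guards against by iterating the reader three times.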
python/paddle/v2/reader/tests/decorator_test.py

```diff
@@ -132,15 +132,17 @@ class TestXmap(unittest.TestCase):
         for order in orders:
             for tNum in thread_nums:
                 for size in buffered_size:
-                    result = []
-                    for i in paddle.v2.reader.xmap_readers(mapper,
-                                                           reader_creator_10(0),
-                                                           tNum, size, order)():
-                        result.append(i)
-                    if not order:
-                        result.sort()
-                    for idx, e in enumerate(result):
-                        self.assertEqual(e, mapper(idx))
+                    reader = paddle.v2.reader.xmap_readers(mapper,
+                                                           reader_creator_10(0),
+                                                           tNum, size, order)
+                    for n in xrange(3):
+                        result = []
+                        for i in reader():
+                            result.append(i)
+                        if not order:
+                            result.sort()
+                        for idx, e in enumerate(result):
+                            self.assertEqual(e, mapper(idx))


 if __name__ == '__main__':
```