PaddlePaddle / PaddleDetection
Commit 1e29b124
Commit 1e29b124
Authored Apr 13, 2017 by qijun
Commit message: follow comments
Parent: 5dd7586e
Showing 8 changed files with 55 additions and 45 deletions (+55 -45)
python/paddle/v2/dataset/cifar.py        +9   -6
python/paddle/v2/dataset/conll05.py      +9   -7
python/paddle/v2/dataset/imdb.py         +10  -10
python/paddle/v2/dataset/imikolov.py     +7   -5
python/paddle/v2/dataset/movielens.py    +7   -6
python/paddle/v2/dataset/uci_housing.py  +4   -4
python/paddle/v2/dataset/wmt14.py        +8   -6
python/paddle/v2/trainer.py              +1   -1
python/paddle/v2/dataset/cifar.py
@@ -14,14 +14,17 @@
 """
 CIFAR dataset.
-This module will download dataset from https://www.cs.toronto.edu/~kriz/cifar.html and
-parse train/test set into paddle reader creators.
+This module will download dataset from
+https://www.cs.toronto.edu/~kriz/cifar.html and parse train/test set into
+paddle reader creators.
-The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000
-images per class. There are 50000 training images and 10000 test images.
+The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes,
+with 6000 images per class. There are 50000 training images and 10000 test
+images.
-The CIFAR-100 dataset is just like the CIFAR-10, except it has 100 classes containing
-600 images each. There are 500 training images and 100 testing images per class.
+The CIFAR-100 dataset is just like the CIFAR-10, except it has 100 classes
+containing 600 images each. There are 500 training images and 100 testing
+images per class.
 """
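The "paddle reader creators" these docstrings refer to can be sketched without paddle itself. This is a minimal illustration, assuming only that a reader creator is a zero-argument callable that returns a fresh iterator over samples; `make_reader` and the toy data below are hypothetical names for illustration, not part of the module:

```python
def make_reader(samples):
    """Return a reader creator: a zero-argument callable that, when
    called, yields each (features, label) sample in turn."""
    def reader():
        for sample in samples:
            yield sample
    return reader

# A toy stand-in for a parsed CIFAR batch: (flattened image, label) pairs.
toy_data = [([0.1, 0.2], 3), ([0.4, 0.5], 7)]
train_reader = make_reader(toy_data)

# Each call to the creator produces a fresh pass over the samples.
first_pass = list(train_reader())
second_pass = list(train_reader())
```

Returning a creator rather than a bare iterator lets a trainer re-open the data stream once per epoch.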
python/paddle/v2/dataset/conll05.py
@@ -13,10 +13,11 @@
 # limitations under the License.
 """
 Conll05 dataset.
-Paddle semantic role labeling Book and demo use this dataset as an example. Because
-Conll05 is not free in public, the default downloaded URL is test set of
-Conll05 (which is public). Users can change URL and MD5 to their Conll dataset.
-And a pre-trained word vector model based on Wikipedia corpus is used to initialize SRL model.
+Paddle semantic role labeling Book and demo use this dataset as an example.
+Because Conll05 is not free in public, the default downloaded URL is test set
+of Conll05 (which is public). Users can change URL and MD5 to their Conll
+dataset. And a pre-trained word vector model based on Wikipedia corpus is used
+to initialize SRL model.
 """
 import tarfile
@@ -198,9 +199,10 @@ def test():
 """
 Conll05 test set creator.
-Because the train dataset is not free, the test dataset is used for training.
-It returns a reader creator, each sample in the reader is nine features, including sentence
-sequence, predicate, predicate context, predicate context flag and tagged sequence.
+Because the train dataset is not free, the test dataset is used for
+training. It returns a reader creator, each sample in the reader is nine
+features, including sentence sequence, predicate, predicate context,
+predicate context flag and tagged sequence.
 :return: Train reader creator
 :rtype: callable
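The nine-field sample shape described in the `test()` docstring can be sketched as follows. This is a guess at the layout, assuming the "predicate context" is a five-word window around the predicate and the flag sequence marks positions inside that window; `make_sample` and all values are hypothetical:

```python
def make_sample(words, predicate_idx, tags, window=2):
    """Assemble a nine-field SRL sample: word sequence, predicate, five
    predicate-context words, context-flag sequence and tag sequence."""
    n = len(words)
    # Five context words centred on the predicate, clamped at the edges.
    ctx = [words[min(max(predicate_idx + d, 0), n - 1)]
           for d in range(-window, window + 1)]
    # Flag each position that falls inside the predicate context window.
    mark = [1 if abs(i - predicate_idx) <= window else 0 for i in range(n)]
    return (words, words[predicate_idx]) + tuple(ctx) + (mark, tags)

words = ["the", "cat", "chased", "the", "mouse"]
sample = make_sample(words, predicate_idx=2,
                     tags=["B-A0", "I-A0", "B-V", "B-A1", "I-A1"])
```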
python/paddle/v2/dataset/imdb.py
@@ -14,11 +14,10 @@
 """
 IMDB dataset.
-This module download IMDB dataset from
-http://ai.stanford.edu/%7Eamaas/data/sentiment/, which contains a set of 25,000
-highly polar movie reviews for training, and 25,000 for testing. Besides, this
-module also provides API for build dictionary and parse train set and test set
-into paddle reader creators.
+This module downloads IMDB dataset from
+http://ai.stanford.edu/%7Eamaas/data/sentiment/. This dataset contains a set
+of 25,000 highly polar movie reviews for training, and 25,000 for testing.
+Besides, this module also provides API for building dictionary.
 """
 import paddle.v2.dataset.common
@@ -37,7 +36,7 @@ MD5 = '7c2ac02c03563afcf9b574c7e56c153a'
 def tokenize(pattern):
     """
-    Read files that match pattern. Tokenize and yield each file.
+    Read files that match the given pattern. Tokenize and yield each file.
     """
     with tarfile.open(paddle.v2.dataset.common.download(URL, 'imdb',
@@ -57,7 +56,8 @@ def tokenize(pattern):
 def build_dict(pattern, cutoff):
     """
-    Build a word dictionary, the key is word, and the value is index.
+    Build a word dictionary from the corpus. Keys of the dictionary are words,
+    and values are zero-based IDs of these words.
     """
     word_freq = collections.defaultdict(int)
     for doc in tokenize(pattern):
@@ -123,7 +123,7 @@ def train(word_idx):
 """
 IMDB train set creator.
-It returns a reader creator, each sample in the reader is an index
+It returns a reader creator, each sample in the reader is an zero-based ID
 sequence and label in [0, 1].
 :param word_idx: word dictionary
@@ -140,7 +140,7 @@ def test(word_idx):
 """
 IMDB test set creator.
-It returns a reader creator, each sample in the reader is an index
+It returns a reader creator, each sample in the reader is an zero-based ID
 sequence and label in [0, 1].
 :param word_idx: word dictionary
@@ -155,7 +155,7 @@ def test(word_idx):
 def word_dict():
     """
-    Build word dictionary.
+    Build a word dictionary from the corpus.
     :return: Word dictionary
     :rtype: dict
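The `build_dict` semantics spelled out by the new docstring (keys are words, values are zero-based IDs, rare words dropped below a cutoff) can be sketched like this. The corpus here is an in-memory list of tokenized documents rather than the tarball the real module reads, and the frequency-ordered ID assignment is an assumption for illustration:

```python
import collections

def build_dict(docs, cutoff):
    # Count word frequencies across all tokenized documents.
    word_freq = collections.defaultdict(int)
    for doc in docs:
        for word in doc:
            word_freq[word] += 1
    # Drop rare words, then assign zero-based IDs, most frequent first
    # (ties broken alphabetically so the mapping is deterministic).
    words = [w for w, f in word_freq.items() if f > cutoff]
    words.sort(key=lambda w: (-word_freq[w], w))
    return {w: i for i, w in enumerate(words)}

docs = [["good", "movie"], ["good", "plot"], ["good", "movie"]]
word_idx = build_dict(docs, cutoff=0)
```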
python/paddle/v2/dataset/imikolov.py
@@ -14,8 +14,9 @@
 """
 imikolov's simple dataset.
-This module will download dataset from http://www.fit.vutbr.cz/~imikolov/rnnlm/ and
-parse train/test set into paddle reader creators.
+This module will download dataset from
+http://www.fit.vutbr.cz/~imikolov/rnnlm/ and parse train/test set into paddle
+reader creators.
 """
 import paddle.v2.dataset.common
 import collections
@@ -42,7 +43,8 @@ def word_count(f, word_freq=None):
 def build_dict():
     """
-    Build a word dictionary, the key is word, and the value is index.
+    Build a word dictionary from the corpus, Keys of the dictionary are words,
+    and values are zero-based IDs of these words.
     """
     train_filename = './simple-examples/data/ptb.train.txt'
     test_filename = './simple-examples/data/ptb.valid.txt'
@@ -91,7 +93,7 @@ def train(word_idx, n):
 """
 imikolov train set creator.
-It returns a reader creator, each sample in the reader is a n index
+It returns a reader creator, each sample in the reader is a word ID
 tuple.
 :param word_idx: word dictionary
@@ -108,7 +110,7 @@ def test(word_idx, n):
 """
 imikolov test set creator.
-It returns a reader creator, each sample in the reader is a n index
+It returns a reader creator, each sample in the reader is a word ID
 tuple.
 :param word_idx: word dictionary
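The word-ID tuple samples described for `train(word_idx, n)` are n-grams over a sentence. A minimal sketch, assuming an in-memory corpus and a pre-built word dictionary (both hypothetical stand-ins for the downloaded PTB files):

```python
def ngram_reader(sentences, word_idx, n):
    """Return a reader creator yielding one word-ID tuple per n-gram,
    matching the sample shape the imikolov docstring describes."""
    def reader():
        for sentence in sentences:
            ids = [word_idx[w] for w in sentence]
            # Slide a window of width n over the ID sequence.
            for i in range(len(ids) - n + 1):
                yield tuple(ids[i:i + n])
    return reader

word_idx = {"the": 0, "cat": 1, "sat": 2}
grams = list(ngram_reader([["the", "cat", "sat"]], word_idx, n=2)())
```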
python/paddle/v2/dataset/movielens.py
@@ -14,10 +14,11 @@
 """
 Movielens 1-M dataset.
-Movielens 1-M dataset contains 1 million ratings from 6000 users on 4000 movies, which was
-collected by GroupLens Research. This module will download Movielens 1-M dataset from
-http://files.grouplens.org/datasets/movielens/ml-1m.zip and parse train/test set
-into paddle reader creators.
+Movielens 1-M dataset contains 1 million ratings from 6000 users on 4000
+movies, which was collected by GroupLens Research. This module will download
+Movielens 1-M dataset from
+http://files.grouplens.org/datasets/movielens/ml-1m.zip and parse train/test
+set into paddle reader creators.
 """
@@ -50,7 +51,7 @@ class MovieInfo(object):
     def value(self):
         """
-        Get information of a movie.
+        Get information from a movie.
         """
         return [
             self.index, [CATEGORIES_DICT[c] for c in self.categories],
@@ -78,7 +79,7 @@ class UserInfo(object):
     def value(self):
         """
-        Get information of a user.
+        Get information from a user.
         """
         return [self.index, 0 if self.is_male else 1, self.age, self.job_id]
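The `MovieInfo.value()` shape visible in the diff (movie index followed by a list of category IDs) can be sketched in isolation. The class below is a cut-down stand-in, truncated to the fields the diff actually shows, and the `CATEGORIES_DICT` contents are made up for illustration:

```python
# Hypothetical category dictionary; the real one is built from the dataset.
CATEGORIES_DICT = {"Comedy": 0, "Drama": 1}

class MovieInfo(object):
    def __init__(self, index, categories):
        self.index = index
        self.categories = categories

    def value(self):
        # Encode a movie as [movie index, list of category IDs], the
        # leading fields of the list returned in the diff above.
        return [self.index, [CATEGORIES_DICT[c] for c in self.categories]]

movie = MovieInfo(index=5, categories=["Comedy", "Drama"])
```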
python/paddle/v2/dataset/uci_housing.py
@@ -75,8 +75,8 @@ def train():
 """
 UCI_HOUSING train set creator.
-It returns a reader creator, each sample in the reader is features after normalization
-and price number.
+It returns a reader creator, each sample in the reader is features after
+normalization and price number.
 :return: Train reader creator
 :rtype: callable
@@ -95,8 +95,8 @@ def test():
 """
 UCI_HOUSING test set creator.
-It returns a reader creator, each sample in the reader is features after normalization
-and price number.
+It returns a reader creator, each sample in the reader is features after
+normalization and price number.
 :return: Test reader creator
 :rtype: callable
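The docstring says each sample is "features after normalization and price number" but the diff does not show the formula. A common choice is column-wise (x - mean) / std standardization, which is an assumption here; `normalize` is a hypothetical helper:

```python
def normalize(rows):
    """Column-wise (x - mean) / std normalization of a feature matrix."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((x - m) ** 2 for x in c) / len(c)) ** 0.5
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(row, means, stds)]
            for row in rows]

# Two toy houses with two features each; prices would be kept separate.
features = [[1.0, 10.0], [3.0, 30.0]]
normed = normalize(features)
```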
python/paddle/v2/dataset/wmt14.py
@@ -13,8 +13,8 @@
 # limitations under the License.
 """
 WMT14 dataset.
 The original WMT14 dataset is too large and a small set of data for set is
 provided.
 This module will download dataset from
 http://paddlepaddle.cdn.bcebos.com/demo/wmt_shrinked_data/wmt14.tgz and
 parse train/test set into paddle reader creators.
@@ -107,8 +107,9 @@ def train(dict_size):
 """
 WMT14 train set creator.
-It returns a reader creator, each sample in the reader is source language word index
-sequence, target language word index sequence and next word index sequence.
+It returns a reader creator, each sample in the reader is source language
+word ID sequence, target language word ID sequence and next word ID
+sequence.
 :return: Train reader creator
 :rtype: callable
@@ -121,8 +122,9 @@ def test(dict_size):
 """
 WMT14 test set creator.
-It returns a reader creator, each sample in the reader is source language word index
-sequence, target language word index sequence and next word index sequence.
+It returns a reader creator, each sample in the reader is source language
+word ID sequence, target language word ID sequence and next word ID
+sequence.
 :return: Train reader creator
 :rtype: callable
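Each WMT14 sample is described as a source word-ID sequence, a target word-ID sequence, and a next-word-ID sequence. In sequence-to-sequence training the "next word" sequence is typically the target shifted left by one, ending on the end-of-sentence ID; that shift, and the `make_sample`/`end_id` names, are assumptions for illustration:

```python
def make_sample(src_ids, trg_ids, end_id):
    # Decoder input is the target sequence; the "next word" sequence is
    # the same sequence shifted left by one, closed with the end token.
    next_ids = trg_ids[1:] + [end_id]
    return src_ids, trg_ids, next_ids

sample = make_sample([4, 7, 9], [2, 5, 8], end_id=1)
```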
...
python/paddle/v2/trainer.py
浏览文件 @
1e29b124
"""
"""
Trainer package
Module Trainer
"""
"""
import
collections
import
collections
...
...