Commit 871ac282 (unverified)

Merge pull request #15085 from haowang101779990/enapi_improve_dec27

en api improve format Dec 27

Authored on Dec 28, 2018 by Cheerego; committed by GitHub on Dec 28, 2018
Parents: 7ab50126, 66ea7184

Showing 9 changed files with 379 additions and 291 deletions (+379 -291)
python/paddle/fluid/data_feeder.py                         +1    -2
python/paddle/fluid/framework.py                           +2    -2
python/paddle/fluid/layers/control_flow.py                 +5    -4
python/paddle/fluid/layers/detection.py                    +62   -58
python/paddle/fluid/layers/io.py                           +5    -6
python/paddle/fluid/layers/nn.py                           +266  -201
python/paddle/fluid/layers/tensor.py                       +8    -3
python/paddle/fluid/metrics.py                             +14   -8
python/paddle/fluid/transpiler/distribute_transpiler.py    +16   -7
python/paddle/fluid/data_feeder.py

@@ -272,8 +272,7 @@ class DataFeeder(object):
             dict: the result of conversion.
         Raises:
-            ValueError: If drop_last is False and the data batch which cannot
-                fit for devices.
+            ValueError: If drop_last is False and the data batch which cannot fit for devices.
         """
         def __reader_creator__():
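Not part of this diff: a minimal sketch of the conversion the DataFeeder docstring above describes; the shapes and sample values are illustrative.

    import paddle.fluid as fluid

    x = fluid.layers.data(name='x', shape=[3], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='int64')
    feeder = fluid.DataFeeder(feed_list=[x, y], place=fluid.CPUPlace())
    # feed() converts a mini-batch of Python samples into the dict of
    # LoDTensors ("the result of conversion") expected by the executor.
    feed_dict = feeder.feed([([0.1, 0.2, 0.3], [1]), ([0.4, 0.5, 0.6], [0])])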
python/paddle/fluid/framework.py

@@ -1638,8 +1638,8 @@ class Program(object):
             parameters, e.g., :code:`trainable`, :code:`optimize_attr`, need
             to print.
-        Returns
-            (str): The debug string.
+        Returns:
+            str: The debug string.
         Raises:
             ValueError: If any of required fields is not set and throw_on_error is
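Not part of this diff: a short sketch of the call whose Returns section is reformatted above; the program contents are arbitrary.

    import paddle.fluid as fluid

    x = fluid.layers.data(name='x', shape=[1], dtype='float32')
    # to_string() returns the debug string (str) for the whole Program.
    print(fluid.default_main_program().to_string(throw_on_error=True, with_details=False))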
python/paddle/fluid/layers/control_flow.py

@@ -1452,6 +1452,7 @@ class DynamicRNN(object):
     def step_input(self, x):
         """
         Mark a sequence as a dynamic RNN input.
+
         Args:
             x(Variable): The input sequence.

@@ -1505,6 +1506,7 @@ class DynamicRNN(object):
         """
         Mark a variable as a RNN input. The input will not be scattered into
         time steps.
+
         Args:
             x(Variable): The input variable.

@@ -1629,13 +1631,11 @@ class DynamicRNN(object):
         Args:
             init(Variable|None): The initialized variable.
-            shape(list|tuple): The memory shape. NOTE the shape does not contain
-                batch_size.
+            shape(list|tuple): The memory shape. NOTE the shape does not contain batch_size.
             value(float): the initalized value.
-            need_reorder(bool): True if the initialized memory depends on the
-                input sample.
+            need_reorder(bool): True if the initialized memory depends on the input sample.
             dtype(str|numpy.dtype): The data type of the initialized memory.

@@ -1714,6 +1714,7 @@ class DynamicRNN(object):
         """
         Update the memory from ex_mem to new_mem. NOTE that the shape and data
         type of :code:`ex_mem` and :code:`new_mem` must be same.
+
         Args:
             ex_mem(Variable): the memory variable.
             new_mem(Variable): the plain variable generated in RNN block.
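Not part of this diff: a minimal DynamicRNN sketch exercising the step_input, memory, and update_memory methods whose docstrings are touched above; layer sizes and variable names are illustrative.

    import paddle.fluid as fluid

    sentence = fluid.layers.data(name='sentence', shape=[32], dtype='float32', lod_level=1)
    boot = fluid.layers.data(name='boot', shape=[32], dtype='float32')

    drnn = fluid.layers.DynamicRNN()
    with drnn.block():
        word = drnn.step_input(sentence)                  # scattered into time steps
        prev = drnn.memory(init=boot, need_reorder=True)  # shape taken from init, excludes batch_size
        hidden = fluid.layers.fc(input=[word, prev], size=32, act='tanh')
        drnn.update_memory(ex_mem=prev, new_mem=hidden)   # ex_mem/new_mem shapes and dtypes must match
        drnn.output(hidden)

    last = fluid.layers.sequence_last_step(drnn())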
python/paddle/fluid/layers/detection.py

@@ -65,7 +65,7 @@ def rpn_target_assign(bbox_pred,
                       rpn_negative_overlap=0.3,
                       use_random=True):
     """
     **Target Assign Layer for region proposal network (RPN) in Faster-RCNN detection.**
     This layer can be, for given the Intersection-over-Union (IoU) overlap
     between anchors and ground truth boxes, to assign classification and

@@ -135,19 +135,20 @@ def rpn_target_assign(bbox_pred,
     Examples:
         .. code-block:: python
+
            bbox_pred = layers.data(name='bbox_pred', shape=[100, 4],
                append_batch_size=False, dtype='float32')
            cls_logits = layers.data(name='cls_logits', shape=[100, 1],
                append_batch_size=False, dtype='float32')
            anchor_box = layers.data(name='anchor_box', shape=[20, 4],
                append_batch_size=False, dtype='float32')
            gt_boxes = layers.data(name='gt_boxes', shape=[10, 4],
                append_batch_size=False, dtype='float32')
            loc_pred, score_pred, loc_target, score_target, bbox_inside_weight =
                fluid.layers.rpn_target_assign(bbox_pred=bbox_pred,
                                               cls_logits=cls_logits,
                                               anchor_box=anchor_box,
                                               gt_boxes=gt_boxes)
     """
     helper = LayerHelper('rpn_target_assign', **locals())

@@ -1519,27 +1520,30 @@ def anchor_generator(input,
     Args:
         input(Variable): The input feature map, the format is NCHW.
         anchor_sizes(list|tuple|float): The anchor sizes of generated anchors,
             given in absolute pixels e.g. [64., 128., 256., 512.].
             For instance, the anchor size of 64 means the area of this anchor equals to 64**2.
         aspect_ratios(list|tuple|float): The height / width ratios of generated
             anchors, e.g. [0.5, 1.0, 2.0].
         variance(list|tuple): The variances to be used in box regression deltas.
             Default:[0.1, 0.1, 0.2, 0.2].
-        stride(list|turple): The anchors stride across width and height,
-            e.g. [16.0, 16.0]
+        stride(list|turple): The anchors stride across width and height,e.g. [16.0, 16.0]
         offset(float): Prior boxes center offset. Default: 0.5
         name(str): Name of the prior box op. Default: None.
     Returns:
-        Anchors(Variable): The output anchors with a layout of [H, W, num_anchors, 4].
-            H is the height of input, W is the width of input,
-            num_anchors is the box count of each position.
-            Each anchor is in (xmin, ymin, xmax, ymax) format an unnormalized.
-        Variances(Variable): The expanded variances of anchors
-            with a layout of [H, W, num_priors, 4].
-            H is the height of input, W is the width of input
-            num_anchors is the box count of each position.
-            Each variance is in (xcenter, ycenter, w, h) format.
+        Anchors(Variable),Variances(Variable):
+
+        two variables:
+
+        - Anchors(Variable): The output anchors with a layout of [H, W, num_anchors, 4]. \
+          H is the height of input, W is the width of input, \
+          num_anchors is the box count of each position. \
+          Each anchor is in (xmin, ymin, xmax, ymax) format an unnormalized.
+        - Variances(Variable): The expanded variances of anchors \
+          with a layout of [H, W, num_priors, 4]. \
+          H is the height of input, W is the width of input \
+          num_anchors is the box count of each position. \
+          Each variance is in (xcenter, ycenter, w, h) format.
     Examples:

@@ -1748,35 +1752,35 @@ def generate_proposals(scores,
                        eta=1.0,
                        name=None):
     """
     **Generate proposal Faster-RCNN**
     This operation proposes RoIs according to each box with their probability to be a foreground object and
     the box can be calculated by anchors. Bbox_deltais and scores to be an object are the output of RPN. Final proposals
     could be used to train detection net.
     For generating proposals, this operation performs following steps:
     1. Transposes and resizes scores and bbox_deltas in size of (H*W*A, 1) and (H*W*A, 4)
     2. Calculate box locations as proposals candidates.
     3. Clip boxes to image
     4. Remove predicted boxes with small area.
     5. Apply NMS to get final proposals as output.
     Args:
         scores(Variable): A 4-D Tensor with shape [N, A, H, W] represents the probability for each box to be an object.
             N is batch size, A is number of anchors, H and W are height and width of the feature map.
         bbox_deltas(Variable): A 4-D Tensor with shape [N, 4*A, H, W] represents the differece between predicted box locatoin and anchor location.
         im_info(Variable): A 2-D Tensor with shape [N, 3] represents origin image information for N batch. Info contains height, width and scale
             between origin image size and the size of feature map.
         anchors(Variable): A 4-D Tensor represents the anchors with a layout of [H, W, A, 4]. H and W are height and width of the feature map,
             num_anchors is the box count of each position. Each anchor is in (xmin, ymin, xmax, ymax) format an unnormalized.
         variances(Variable): The expanded variances of anchors with a layout of [H, W, num_priors, 4]. Each variance is in (xcenter, ycenter, w, h) format.
         pre_nms_top_n(float): Number of total bboxes to be kept per image before NMS. 6000 by default.
         post_nms_top_n(float): Number of total bboxes to be kept per image after NMS. 1000 by default.
         nms_thresh(float): Threshold in NMS, 0.5 by default.
         min_size(float): Remove predicted boxes with either height or width < min_size. 0.1 by default.
         eta(float): Apply in adaptive NMS, if adaptive threshold > 0.5, adaptive_threshold = adaptive_threshold * eta in each iteration.
     """
     helper = LayerHelper('generate_proposals', **locals())
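Not part of this diff: a sketch chaining anchor_generator and generate_proposals as the docstrings above describe; the feature-map shape and anchor counts are illustrative.

    import paddle.fluid as fluid

    conv = fluid.layers.data(name='conv', shape=[48, 16, 16], dtype='float32')
    anchors, variances = fluid.layers.anchor_generator(
        input=conv,
        anchor_sizes=[64.0, 128.0, 256.0, 512.0],
        aspect_ratios=[0.5, 1.0, 2.0],
        variance=[0.1, 0.1, 0.2, 0.2],
        stride=[16.0, 16.0],
        offset=0.5)

    num_anchors = 12  # 4 sizes * 3 aspect ratios per position
    scores = fluid.layers.data(name='scores', shape=[num_anchors, 16, 16], dtype='float32')
    bbox_deltas = fluid.layers.data(name='bbox_deltas', shape=[4 * num_anchors, 16, 16], dtype='float32')
    im_info = fluid.layers.data(name='im_info', shape=[3], dtype='float32')

    # Returns the proposed RoIs and their objectness probabilities.
    rois, roi_probs = fluid.layers.generate_proposals(
        scores=scores, bbox_deltas=bbox_deltas, im_info=im_info,
        anchors=anchors, variances=variances,
        pre_nms_top_n=6000, post_nms_top_n=1000, nms_thresh=0.5, min_size=0.1)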
python/paddle/fluid/layers/io.py

@@ -949,12 +949,11 @@ def shuffle(reader, buffer_size):
     is determined by argument buf_size.
     Args:
-        param reader: the original reader whose output will be shuffled.
-        type reader: callable
-        param buf_size: shuffle buffer size.
-        type buf_size: int
+        reader(callable): the original reader whose output will be shuffled.
+        buf_size(int): shuffle buffer size.
     Returns:
-        return: the new reader whose output is shuffled.
-        rtype: callable
+        callable: the new reader whose output is shuffled.
     """
     return __create_unshared_decorated_reader__(
         'create_shuffle_reader', reader, {'buffer_size': int(buffer_size)})
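Not part of this diff: a sketch of the reader-decoration chain in which this shuffle layer is typically used; file names, shapes, and buffer sizes are illustrative.

    import paddle.fluid as fluid

    reader = fluid.layers.open_files(
        filenames=['./data1.recordio', './data2.recordio'],
        shapes=[(3, 224, 224), (1,)],
        lod_levels=[0, 0],
        dtypes=['float32', 'int64'])
    reader = fluid.layers.shuffle(reader, buffer_size=8192)  # buffer size controls the shuffle window
    reader = fluid.layers.batch(reader, batch_size=32)
    img, label = fluid.layers.read_file(reader)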
python/paddle/fluid/layers/nn.py

(This diff is collapsed; +266 -201 not shown.)
python/paddle/fluid/layers/tensor.py

@@ -393,9 +393,6 @@ def fill_constant_batch_size_like(input,
     It also sets *stop_gradient* to True.
-
-    >>> data = fluid.layers.fill_constant_batch_size_like(
-    >>>     input=like, shape=[1], value=0, dtype='int64')
     Args:
         input(${input_type}): ${input_comment}.

@@ -411,6 +408,14 @@ def fill_constant_batch_size_like(input,
     Returns:
         ${out_comment}.
+
+    Examples:
+
+        .. code-block:: python
+
+            data = fluid.layers.fill_constant_batch_size_like(
+                input=like, shape=[1], value=0, dtype='int64')
     """
     helper = LayerHelper("fill_constant_batch_size_like", **locals())
     out = helper.create_variable_for_type_inference(dtype=dtype)
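Not part of this diff: a self-contained version of the example added above; 'like' stands for any tensor whose batch dimension the output should copy.

    import paddle.fluid as fluid

    like = fluid.layers.data(name='like', shape=[13], dtype='float32')
    # 'data' gets shape [batch_size, 1], filled with 0, with stop_gradient=True.
    data = fluid.layers.fill_constant_batch_size_like(
        input=like, shape=[1], value=0, dtype='int64')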
python/paddle/fluid/metrics.py

@@ -361,8 +361,8 @@ class ChunkEvaluator(MetricBase):
     Accumulate counter numbers output by chunk_eval from mini-batches and
     compute the precision recall and F1-score using the accumulated counter
     numbers.
     For some basics of chunking, please refer to
-    'Chunking with Support Vector Machines <https://aclanthology.info/pdf/N/N01/N01-1025.pdf>'.
+    `Chunking with Support Vector Machines <https://aclanthology.info/pdf/N/N01/N01-1025.pdf>`_ .
     ChunkEvalEvaluator computes the precision, recall, and F1-score of chunk detection,
     and supports IOB, IOE, IOBES and IO (also known as plain) tagging schemes.

@@ -391,6 +391,7 @@ class ChunkEvaluator(MetricBase):
     def update(self, num_infer_chunks, num_label_chunks, num_correct_chunks):
         """
         Update the states based on the layers.chunk_eval() ouputs.
+
         Args:
             num_infer_chunks(int|numpy.array): The number of chunks in Inference on the given minibatch.
             num_label_chunks(int|numpy.array): The number of chunks in Label on the given mini-batch.

@@ -450,9 +451,9 @@ class EditDistance(MetricBase):
         distance, instance_error = distance_evaluator.eval()
     In the above example:
-    'distance' is the average of the edit distance in a pass.
-    'instance_error' is the instance error rate in a pass.
+    - 'distance' is the average of the edit distance in a pass.
+    - 'instance_error' is the instance error rate in a pass.
     """

@@ -567,12 +568,15 @@ class DetectionMAP(object):
     Calculate the detection mean average precision (mAP).
+
     The general steps are as follows:
+
     1. calculate the true positive and false positive according to the input
        of detection and labels.
     2. calculate mAP value, support two versions: '11 point' and 'integral'.
+
     Please get more information from the following articles:
       https://sanchom.wordpress.com/tag/average-precision/
       https://arxiv.org/abs/1512.02325
     Args:

@@ -613,10 +617,12 @@ class DetectionMAP(object):
         for data in batches:
             loss, cur_map_v, accum_map_v = exe.run(fetch_list=fetch)
     In the above example:
-    'cur_map_v' is the mAP of current mini-batch.
-    'accum_map_v' is the accumulative mAP of one pass.
+
+    - 'cur_map_v' is the mAP of current mini-batch.
+    - 'accum_map_v' is the accumulative mAP of one pass.
+
     """
     def __init__(self,
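Not part of this diff: a sketch of the accumulation loop the ChunkEvaluator docstring describes; the counter values are fabricated stand-ins for the outputs of fluid.layers.chunk_eval().

    import numpy as np
    import paddle.fluid as fluid

    metric = fluid.metrics.ChunkEvaluator()
    for num_infer, num_label, num_correct in [(10, 8, 7), (12, 11, 9)]:
        # update() accumulates the three counters from each mini-batch.
        metric.update(np.array([num_infer]), np.array([num_label]), np.array([num_correct]))
    precision, recall, f1 = metric.eval()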
python/paddle/fluid/transpiler/distribute_transpiler.py

@@ -125,14 +125,23 @@ def slice_variable(var_list, slice_count, min_block_size):
 class DistributeTranspilerConfig(object):
     """
-    Args:
-        slice_var_up (bool): Do Tensor slice for pservers, default is True.
-        split_method (PSDispatcher): RoundRobin or HashName can be used
-            try to choose the best method to balance loads for pservers.
-        min_block_size (int): Minimum splitted element number in block.
-            According:https://github.com/PaddlePaddle/Paddle/issues/8638#issuecomment-369912156
-            We can use bandwidth effiently when data size is larger than 2MB.If you
-            want to change it, please be sure you see the slice_variable function.
+    .. py:attribute:: slice_var_up (bool)
+
+          Do Tensor slice for pservers, default is True.
+
+    .. py:attribute:: split_method (PSDispatcher)
+
+          RoundRobin or HashName can be used.
+          Try to choose the best method to balance loads for pservers.
+
+    .. py:attribute:: min_block_size (int)
+
+          Minimum number of splitted elements in block.
+
+          According to : https://github.com/PaddlePaddle/Paddle/issues/8638#issuecomment-369912156
+          We can use bandwidth effiently when data size is larger than 2MB.If you
+          want to change it, please be sure you have read the slice_variable function.
     """
     slice_var_up = True
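Not part of this diff: a sketch of how the attributes documented above are consumed; the endpoints and trainer count are placeholders.

    import paddle.fluid as fluid

    config = fluid.DistributeTranspilerConfig()
    config.slice_var_up = True     # slice tensors across pservers
    config.min_block_size = 8192   # minimum number of elements per sliced block

    t = fluid.DistributeTranspiler(config=config)
    t.transpile(trainer_id=0,
                pservers="192.168.0.1:6174,192.168.0.2:6174",
                trainers=4)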