BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)

Commit 5be454bf: polish docs
Authored Jun 08, 2018 by yi.wu
Parent: 8deff48d
Showing 7 changed files with 86 additions and 42 deletions (+86, -42).
paddle/fluid/framework/details/fuse_vars_op_handle.cc   +1  -1
paddle/fluid/operators/crf_decoding_op.cc               +8  -8
paddle/fluid/operators/roi_pool_op.cc                   +10 -0
paddle/fluid/operators/scale_op.cc                      +1  -0
python/paddle/fluid/layers/io.py                        +28 -3
python/paddle/fluid/layers/nn.py                        +37 -30
python/paddle/fluid/layers/ops.py                       +1  -0
paddle/fluid/framework/details/fuse_vars_op_handle.cc

@@ -42,7 +42,7 @@ void FuseVarsOpHandle::RunImpl() {
     out_t->ShareDataWith(out_tensor->Slice(s, s + numel));
     s += numel;
   }
-  this->RunAndRecordEvent([this] {});
+  this->RunAndRecordEvent([] {});
 }

 std::string FuseVarsOpHandle::Name() const { return "fuse vars"; }
paddle/fluid/operators/crf_decoding_op.cc

@@ -54,20 +54,20 @@ The output of this operator changes according to whether Input(Label) is given:
 1. Input(Label) is given:
-   This happens in training. This operator is used to co-work with the chunk_eval
-   operator.
+   This happens in training. This operator is used to co-work with the chunk_eval
+   operator.
-   When Input(Label) is given, the crf_decoding operator returns a row vector
-   with shape [N x 1] whose values are fixed to be 0, indicating an incorrect
-   prediction, or 1 indicating a tag is correctly predicted. Such an output is the
-   input to chunk_eval operator.
+   When Input(Label) is given, the crf_decoding operator returns a row vector
+   with shape [N x 1] whose values are fixed to be 0, indicating an incorrect
+   prediction, or 1 indicating a tag is correctly predicted. Such an output is the
+   input to chunk_eval operator.
 2. Input(Label) is not given:
-   This is the standard decoding process.
+   This is the standard decoding process.
 The crf_decoding operator returns a row vector with shape [N x 1] whose values
-range from 0 to maximum tag number - 1. Each element indicates an index of a
+range from 0 to maximum tag number - 1, Each element indicates an index of a
 predicted tag.
 )DOC");
}
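The semantics described above can be illustrated with a few lines of numpy. This is only a sketch of the documented output modes, not the operator's Viterbi decoding; the decoded path and labels below are made up.

    import numpy as np

    # Hypothetical decoded tag indices for a sequence of N = 5 steps; this is
    # what the operator outputs when Input(Label) is not given.
    decoded_path = np.array([[2], [0], [1], [1], [3]], dtype=np.int64)  # [N x 1]

    # Ground-truth tags, available only in training.
    label = np.array([[2], [0], [2], [1], [3]], dtype=np.int64)         # [N x 1]

    # When Input(Label) is given, the output instead marks each position with
    # 1 (tag predicted correctly) or 0 (incorrect); chunk_eval consumes this.
    correctness = (decoded_path == label).astype(np.int64)
    print(correctness.ravel())  # [1 1 0 1 1]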
paddle/fluid/operators/roi_pool_op.cc

@@ -141,6 +141,16 @@ class ROIPoolOpMaker : public framework::OpProtoAndCheckerMaker {
   AddComment(R"DOC(
+ROIPool operator
+
+Region of interest pooling (also known as RoI pooling) is to perform
+is to perform max pooling on inputs of nonuniform sizes to obtain
+fixed-size feature maps (e.g. 7*7).
+The operator has three steps:
+    1. Dividing each region proposal into equal-sized sections with
+       the pooled_width and pooled_height
+    2. Finding the largest value in each section
+    3. Copying these max values to the output buffer
 ROI Pooling for Faster-RCNN. The link below is a further introduction:
 https://stackoverflow.com/questions/43430056/what-is-roi-layer-in-fast-rcnn
 )DOC");
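The three steps read naturally as a short numpy sketch. It only illustrates the description above on a made-up single-channel feature map and one region proposal; it is not the operator's kernel.

    import numpy as np

    np.random.seed(0)
    feature_map = np.random.rand(8, 8)       # input of nonuniform size
    x1, y1, x2, y2 = 0, 0, 5, 7              # one region proposal (inclusive coords)
    pooled_height, pooled_width = 2, 2       # fixed output size (7*7 in practice)

    roi = feature_map[y1:y2 + 1, x1:x2 + 1]
    out = np.zeros((pooled_height, pooled_width))
    # 1. Divide the proposal into (approximately) equal-sized sections.
    h_edges = np.linspace(0, roi.shape[0], pooled_height + 1).astype(int)
    w_edges = np.linspace(0, roi.shape[1], pooled_width + 1).astype(int)
    for i in range(pooled_height):
        for j in range(pooled_width):
            section = roi[h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1]]
            # 2. Find the largest value in each section.
            # 3. Copy it to the output buffer.
            out[i, j] = section.max()
    print(out)                               # fixed-size 2 x 2 feature map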
paddle/fluid/operators/scale_op.cc

@@ -42,6 +42,7 @@ class ScaleOpMaker : public framework::OpProtoAndCheckerMaker {
   AddOutput("Out", "(Tensor) Output tensor of scale operator.");
   AddComment(R"DOC(
 Scale operator
+
 Multiply the input tensor with a float scalar to scale the input tensor.
 $$Out = scale*X$$
 )DOC");
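The formula is the whole behavior; a one-line numpy check with arbitrary values:

    import numpy as np

    scale = 10.0
    X = np.array([[1.0, 2.0], [3.0, 4.0]])
    Out = scale * X        # Out = scale*X, applied elementwise
    print(Out)             # [[10. 20.] [30. 40.]]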
python/paddle/fluid/layers/io.py

@@ -108,10 +108,35 @@ class BlockGuardServ(BlockGuard):

 class ListenAndServ(object):
     """
-    ListenAndServ class.
+    ListenAndServ layer.

-    ListenAndServ class is used to wrap listen_and_serv op to create a server
-    which can receive variables from clients and run a block.
+    ListenAndServ is used to create a rpc server bind and listen
+    on specific TCP port, this server will run the sub-block when
+    received variables from clients.
+
+    Args:
+        endpoint(string): IP:port string which the server will listen on.
+        inputs(list): a list of variables that the server will get from clients.
+        fan_in(int): how many client are expected to report to this server, default: 1.
+        optimizer_mode(bool): whether to run the server as a parameter server, default: True.
+
+    Examples:
+        .. code-block:: python
+
+            with fluid.program_guard(main):
+                serv = layers.ListenAndServ(
+                    "127.0.0.1:6170", ["X"], optimizer_mode=False)
+                with serv.do():
+                    x = layers.data(
+                        shape=[32, 32],
+                        dtype='float32',
+                        name="X",
+                        append_batch_size=False)
+                    fluid.initializer.Constant(value=1.0)(x, main.global_block())
+                    layers.scale(x=x, scale=10.0, out=out_var)
+
+            self.server_exe = fluid.Executor(place)
+            self.server_exe.run(main)
     """

     def __init__(self, endpoint, inputs, fan_in=1, optimizer_mode=True):
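The example added above references main, out_var, and place without defining them. A hedged sketch of the surrounding setup it appears to assume (my reading, not part of this commit; the variable name "scale_out" is made up):

    import paddle.fluid as fluid
    from paddle.fluid import layers

    place = fluid.CPUPlace()
    main = fluid.Program()
    # Destination variable for layers.scale(..., out=out_var) in the example.
    out_var = main.global_block().create_var(
        name="scale_out", dtype="float32", shape=[32, 32])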
python/paddle/fluid/layers/nn.py
@@ -869,10 +869,17 @@ def crf_decoding(input, param_attr, label=None):
     return viterbi_path


+@templatedoc()
 def cos_sim(X, Y):
     """
-    This function performs the cosine similarity between two tensors
-    X and Y and returns that as the output.
+    ${comment}
+
+    Args:
+        X(${X_type}): ${X_comment}
+        Y(${Y_type}): ${Y_comment}
+
+    Returns:
+        A Variable contains the output of this layer.
     """
     helper = LayerHelper('cos_sim', **locals())
     out = helper.create_tmp_variable(dtype=X.dtype)
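A minimal graph-construction sketch of the signature above; names and shapes are assumptions, and nothing is executed:

    import paddle.fluid as fluid

    x = fluid.layers.data(name="x", shape=[16], dtype="float32")
    y = fluid.layers.data(name="y", shape=[16], dtype="float32")
    # Cosine similarity between corresponding rows of x and y.
    sim = fluid.layers.cos_sim(X=x, Y=y)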
@@ -1059,14 +1066,25 @@ def square_error_cost(input, label):
     return square_out


+@templatedoc()
 def chunk_eval(input, label, chunk_scheme, num_chunk_types, excluded_chunk_types=None):
     """
-    This function computes and outputs the precision, recall and
-    F1-score of chunk detection.
+    ${comment}
+
+    Args:
+        input(Variable): ${Inference_comment}
+        label(Variable): ${Label_comment}
+        chunk_scheme(${chunk_scheme_type}): ${chunk_scheme_comment}
+        num_chunk_types(${num_chunk_types_type}): ${num_chunk_types_comment}
+        excluded_chunk_types(${excluded_chunk_types_type}): ${excluded_chunk_types_comment}
+
+    Returns(typle): a tuple of variables:
+        (precision, recall, f1_score, num_infer_chunks, num_label_chunks, num_correct_chunks)
     """
     helper = LayerHelper("chunk_eval", **locals())
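A hedged graph-construction sketch of how this signature is typically wired up; the LoD level, tag layout, and "IOB" scheme are assumptions for illustration only:

    import paddle.fluid as fluid

    # Predicted and ground-truth chunk tags as 1-level LoD sequences of int64.
    pred = fluid.layers.data(name="pred", shape=[1], dtype="int64", lod_level=1)
    label = fluid.layers.data(name="label", shape=[1], dtype="int64", lod_level=1)
    (precision, recall, f1_score, num_infer_chunks, num_label_chunks,
     num_correct_chunks) = fluid.layers.chunk_eval(
         input=pred, label=label, chunk_scheme="IOB", num_chunk_types=3)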
@@ -1737,6 +1755,7 @@ def beam_search_decode(ids, scores, name=None):
     return sentence_ids, sentence_scores


+@templatedoc()
 def conv2d_transpose(input,
                      num_filters,
                      output_size=None,

@@ -1760,7 +1779,7 @@ def conv2d_transpose(input,
     Parameters(dilations, strides, paddings) are two elements. These two elements
     represent height and width, respectively. The details of convolution transpose
-    layer, please refer to the following explanation and references `therein <http://www.matthewzeiler.com/wp-content/uploads/2017/07/cvpr2010.pdf>`_.
+    layer, please refer to the following explanation and references `here <http://www.matthewzeiler.com/wp-content/uploads/2017/07/cvpr2010.pdf>`_.

     For each input :math:`X`, the equation is:
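For intuition about how the height/width entries of strides, paddings, and dilations determine the output shape, here is the standard transposed-convolution size relation (a general formula, not text quoted from this diff):

    def conv2d_transpose_out_dim(in_dim, kernel, stride=1, padding=0, dilation=1):
        # Output size along one spatial axis of a transposed convolution.
        return (in_dim - 1) * stride - 2 * padding + dilation * (kernel - 1) + 1

    # e.g. a 16x16 input with a 4x4 kernel, stride 2, padding 1 -> 32x32 output
    print(conv2d_transpose_out_dim(16, kernel=4, stride=2, padding=1))  # 32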
@@ -2781,6 +2800,7 @@ def edit_distance(input, label, normalized=True, ignored_tokens=None,

 def ctc_greedy_decoder(input, blank, name=None):
     """
     This op is used to decode sequences by greedy policy by below steps:
+
     1. Get the indexes of max value for each row in input. a.k.a.
        numpy.argmax(input, axis=0).
     2. For each sequence in result of step1, merge repeated tokens between two
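The greedy policy is easy to trace in numpy. The probability matrix is made up, and the final blank-removal step is the usual CTC convention implied by the blank argument rather than text shown in this hunk:

    import numpy as np

    blank = 0
    probs = np.array([[0.6, 0.1, 0.3],   # rows: time steps, columns: tokens
                      [0.1, 0.7, 0.2],
                      [0.1, 0.8, 0.1],
                      [0.7, 0.2, 0.1],
                      [0.1, 0.1, 0.8]])

    # 1. Index of the max value in each row.
    best_path = probs.argmax(axis=1)                     # [0, 1, 1, 0, 2]
    # 2. Merge repeated tokens between consecutive time steps.
    merged = [t for i, t in enumerate(best_path)
              if i == 0 or t != best_path[i - 1]]        # [0, 1, 0, 2]
    # 3. Drop the blank token.
    decoded = [t for t in merged if t != blank]
    print(decoded)                                       # [1, 2]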
@@ -3451,8 +3471,9 @@ def one_hot(input, depth):

 def autoincreased_step_counter(counter_name=None, begin=1, step=1):
     """
-    NOTE: The counter will be automatically increased by 1 every mini-batch
-    Return the run counter of the main program, which is started with 1.
+    Create an auto-increase variable
+    which will be automatically increased by 1 every mini-batch
+    Return the run counter of the main program, default is started from 1.

     Args:
         counter_name(str): The counter name, default is '@STEP_COUNTER@'.
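A minimal usage sketch (graph construction only; per the docstring, the returned counter starts at begin and grows by step every mini-batch the main program runs):

    import paddle.fluid as fluid

    global_step = fluid.layers.autoincreased_step_counter(begin=1, step=1)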
@@ -3866,34 +3887,20 @@ def label_smooth(label,
     return smooth_label


+@templatedoc()
 def roi_pool(input, rois, pooled_height=1, pooled_width=1, spatial_scale=1.0):
     """
-    Region of interest pooling (also known as RoI pooling) is to perform
-        is to perform max pooling on inputs of nonuniform sizes to obtain
-        fixed-size feature maps (e.g. 7*7).
-    The operator has three steps:
-        1. Dividing each region proposal into equal-sized sections with
-           the pooled_width and pooled_height
-        2. Finding the largest value in each section
-        3. Copying these max values to the output buffer
+    ${comment}

     Args:
-        input (Variable): The input for ROI pooling.
-        rois (Variable): ROIs (Regions of Interest) to pool over. It should
-                         be a 2-D one level LoTensor of shape [num_rois, 4].
-                         The layout is [x1, y1, x2, y2], where (x1, y1)
-                         is the top left coordinates, and (x2, y2) is the
-                         bottom right coordinates. The num_rois is the
-                         total number of ROIs in this batch data.
-        pooled_height (integer): The pooled output height. Default: 1
-        pooled_width (integer): The pooled output width. Default: 1
-        spatial_scale (float): Multiplicative spatial scale factor. To
-                               translate ROI coords from their input scale
-                               to the scale used when pooling. Default: 1.0
+        input (Variable): ${X_comment}
+        rois (Variable): ${ROIs_comment}
+        pooled_height (integer): ${pooled_height_comment} Default: 1
+        pooled_width (integer): ${pooled_width_comment} Default: 1
+        spatial_scale (float): ${spatial_scale_comment} Default: 1.0

     Returns:
-        pool_out (Variable): The output is a 4-D tensor of the shape
-                             (num_rois, channels, pooled_h, pooled_w).
+        pool_out (Variable): ${Out_comment}.

     Examples:
         .. code-block:: python
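A hedged graph-construction sketch of the Python wrapper. The ROI shape follows the removed docstring (a 1-level LoD tensor of [num_rois, 4] boxes); the 256x14x14 feature map and the 1/16 spatial scale are assumptions for illustration:

    import paddle.fluid as fluid

    feat = fluid.layers.data(name="feat", shape=[256, 14, 14], dtype="float32")
    rois = fluid.layers.data(name="rois", shape=[4], dtype="float32", lod_level=1)
    pooled = fluid.layers.roi_pool(
        input=feat, rois=rois, pooled_height=7, pooled_width=7,
        spatial_scale=1.0 / 16)
    # pooled: (num_rois, 256, 7, 7), per the docstring text removed above.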
python/paddle/fluid/layers/ops.py

@@ -73,6 +73,7 @@ __all__ = [
     'sum',
     'polygon_box_transform',
     'shape',
+    'iou_similarity',
 ] + __activations__

 for _OP in set(__all__):