Commit e8f64889 (unverified)

fix typo word (#22765)

Authored on Feb 28, 2020 by tianshuo78520a; committed via GitHub on Feb 28, 2020.
Parent: fd945075

Showing 152 changed files with 519 additions and 520 deletions (+519 -520):
paddle/fluid/framework/data_set.h +1 -1
paddle/fluid/framework/details/build_strategy.h +2 -2
paddle/fluid/framework/ir/conv_elementwise_add2_act_fuse_pass.cc +1 -1
paddle/fluid/framework/ir/conv_elementwise_add_act_fuse_pass.cc +1 -1
paddle/fluid/framework/ir/fuse_optimizer_ops_pass/fuse_optimizer_op_pass.cc +1 -1
paddle/fluid/framework/ir/graph_pattern_detector.cc +1 -1
paddle/fluid/framework/ir/multi_batch_merge_pass.cc +1 -1
paddle/fluid/framework/ir/multi_devices_graph_pass/multi_devices_graph_pass.cc +1 -1
paddle/fluid/framework/op_desc.cc +3 -3
paddle/fluid/framework/operator.cc +1 -1
paddle/fluid/framework/operator.h +2 -2
paddle/fluid/framework/operator_test.cc +4 -4
paddle/fluid/inference/analysis/ir_passes/subgraph_util.cc +1 -1
paddle/fluid/inference/analysis/ir_passes/tensorrt_subgraph_pass.cc +1 -1
paddle/fluid/inference/io.cc +1 -1
paddle/fluid/inference/tensorrt/convert/ut_helper.h +1 -1
paddle/fluid/inference/tensorrt/engine.cc +3 -4
paddle/fluid/operators/array_to_lod_tensor_op.cc +1 -1
paddle/fluid/operators/average_accumulates_op.cc +4 -5
paddle/fluid/operators/bilinear_tensor_product_op.h +1 -1
paddle/fluid/operators/conv_transpose_op.cc +2 -2
paddle/fluid/operators/crop_op.cc +1 -1
paddle/fluid/operators/crop_tensor_op.cc +1 -1
paddle/fluid/operators/crop_tensor_op.h +1 -1
paddle/fluid/operators/cross_entropy_op.cc +1 -1
paddle/fluid/operators/ctc_align_op.cc +1 -1
paddle/fluid/operators/cumsum_op.cc +2 -2
paddle/fluid/operators/deformable_psroi_pooling_op.cc +1 -1
paddle/fluid/operators/detection/box_coder_op.cc +2 -2
paddle/fluid/operators/detection/generate_mask_labels_op.cc +2 -2
paddle/fluid/operators/detection/generate_proposal_labels_op.cc +2 -2
paddle/fluid/operators/detection/iou_similarity_op.cc +1 -1
paddle/fluid/operators/detection/locality_aware_nms_op.cc +1 -1
paddle/fluid/operators/detection/multiclass_nms_op.cc +1 -1
paddle/fluid/operators/detection/target_assign_op.cc +1 -1
paddle/fluid/operators/detection/yolov3_loss_op.cc +3 -3
paddle/fluid/operators/elementwise/test_elementwise_mul_op_dim.cc +1 -1
paddle/fluid/operators/fused/fusion_group_op.cc +1 -1
paddle/fluid/operators/fused/fusion_transpose_flatten_concat_op.cu.cc +1 -1
paddle/fluid/operators/grid_sampler_op.cc +4 -4
paddle/fluid/operators/gru_op.cc +1 -1
paddle/fluid/operators/gru_unit_op.cc +1 -1
paddle/fluid/operators/hierarchical_sigmoid_op.cc +1 -1
paddle/fluid/operators/interpolate_op.cc +2 -2
paddle/fluid/operators/lrn_op.cc +1 -1
paddle/fluid/operators/math/matrix_bit_code.h +1 -1
paddle/fluid/operators/nce_op.cc +3 -3
paddle/fluid/operators/pad_constant_like_op.cc +2 -2
paddle/fluid/operators/prroi_pool_op.cu +1 -1
paddle/fluid/operators/prroi_pool_op.h +1 -1
paddle/fluid/operators/reader/read_op.cc +2 -2
paddle/fluid/operators/reduce_ops/reduce_op.h +1 -1
paddle/fluid/operators/reshape_op.cc +7 -6
paddle/fluid/operators/scatter_op.cc +1 -1
paddle/fluid/operators/select_input_op.cc +1 -1
paddle/fluid/operators/sequence_ops/sequence_pad_op.cc +1 -1
paddle/fluid/operators/sequence_ops/sequence_pool_op.cc +2 -2
paddle/fluid/operators/sequence_ops/sequence_topk_avg_pooling_op.cc +1 -1
paddle/fluid/operators/sequence_ops/sequence_unpad_op.cc +1 -1
paddle/fluid/operators/shard_index_op.cc +2 -2
paddle/fluid/operators/shrink_rnn_memory_op.cc +3 -3
paddle/fluid/operators/softmax_with_cross_entropy_op.cc +2 -2
paddle/fluid/operators/softmax_with_cross_entropy_op.cu +1 -1
paddle/fluid/operators/spectral_norm_op.cc +2 -2
paddle/fluid/operators/tensorrt/tensorrt_engine_op.h +1 -1
paddle/fluid/operators/unfold_op.cc +1 -1
paddle/fluid/operators/uniform_random_op.cc +1 -1
paddle/fluid/operators/unsqueeze_op.cc +1 -1
paddle/fluid/operators/warpctc_op.cc +2 -2
paddle/fluid/platform/device_tracer.cc +1 -1
paddle/fluid/pybind/imperative.cc +4 -4
python/paddle/dataset/movielens.py +1 -1
python/paddle/dataset/mq2007.py +6 -6
python/paddle/distributed/launch.py +7 -7
python/paddle/distributed/launch_ps.py +1 -1
python/paddle/fluid/backward.py +1 -1
python/paddle/fluid/contrib/layers/nn.py +7 -7
python/paddle/fluid/contrib/layers/rnn_impl.py +2 -2
python/paddle/fluid/contrib/memory_usage_calc.py +4 -3
python/paddle/fluid/contrib/quantize/quantize_transpiler.py +1 -1
python/paddle/fluid/contrib/slim/core/compressor.py +2 -2
python/paddle/fluid/contrib/slim/graph/graph_wrapper.py +6 -6
python/paddle/fluid/contrib/slim/nas/controller_server.py +1 -1
python/paddle/fluid/contrib/slim/prune/auto_prune_strategy.py +1 -1
python/paddle/fluid/contrib/slim/prune/prune_strategy.py +1 -1
python/paddle/fluid/contrib/slim/prune/pruner.py +1 -1
python/paddle/fluid/contrib/slim/quantization/quantization_pass.py +6 -6
python/paddle/fluid/contrib/trainer.py +8 -8
python/paddle/fluid/contrib/utils/hdfs_utils.py +2 -2
python/paddle/fluid/contrib/utils/lookup_table_utils.py +2 -2
python/paddle/fluid/data.py +2 -2
python/paddle/fluid/data_feed_desc.py +1 -1
python/paddle/fluid/data_feeder.py +4 -4
python/paddle/fluid/dataset.py +2 -2
python/paddle/fluid/debugger.py +1 -1
python/paddle/fluid/distributed/downpour.py +1 -1
python/paddle/fluid/distributed/ps_instance.py +1 -1
python/paddle/fluid/dygraph/learning_rate_scheduler.py +9 -9
python/paddle/fluid/dygraph/nn.py +9 -9
python/paddle/fluid/dygraph/varbase_patch_methods.py +4 -4
python/paddle/fluid/dygraph_grad_clip.py +1 -1
python/paddle/fluid/executor.py +4 -4
python/paddle/fluid/framework.py +32 -32
python/paddle/fluid/incubate/data_generator/__init__.py +3 -3
python/paddle/fluid/incubate/fleet/base/role_maker.py +2 -2
python/paddle/fluid/incubate/fleet/collective/__init__.py +2 -2
python/paddle/fluid/incubate/fleet/parameter_server/pslib/__init__.py +1 -1
python/paddle/fluid/incubate/fleet/parameter_server/pslib/optimizer_factory.py +1 -1
python/paddle/fluid/incubate/fleet/utils/fleet_util.py +4 -4
python/paddle/fluid/incubate/fleet/utils/hdfs.py +2 -2
python/paddle/fluid/initializer.py +1 -1
python/paddle/fluid/input.py +2 -2
python/paddle/fluid/install_check.py +1 -1
python/paddle/fluid/io.py +11 -12
python/paddle/fluid/layers/control_flow.py +13 -13
python/paddle/fluid/layers/detection.py +28 -28
python/paddle/fluid/layers/distributions.py +1 -1
python/paddle/fluid/layers/io.py +3 -3
python/paddle/fluid/layers/learning_rate_scheduler.py +4 -4
python/paddle/fluid/layers/loss.py +6 -6
python/paddle/fluid/layers/nn.py +65 -65
python/paddle/fluid/layers/ops.py +1 -1
python/paddle/fluid/layers/rnn.py +23 -23
python/paddle/fluid/layers/sequence_lod.py +8 -8
python/paddle/fluid/layers/tensor.py +6 -6
python/paddle/fluid/log_helper.py +1 -1
python/paddle/fluid/metrics.py +8 -8
python/paddle/fluid/nets.py +6 -6
python/paddle/fluid/optimizer.py +6 -6
python/paddle/fluid/param_attr.py +1 -1
python/paddle/fluid/profiler.py +1 -1
python/paddle/fluid/tests/demo/pipeline_train.py +1 -1
python/paddle/fluid/tests/unittests/dist_transformer.py +5 -5
python/paddle/fluid/tests/unittests/test_activation_nn_grad.py +1 -1
python/paddle/fluid/tests/unittests/test_elementwise_nn_grad.py +8 -8
python/paddle/fluid/tests/unittests/test_generate_proposals_op.py +1 -1
python/paddle/fluid/tests/unittests/test_imperative_transformer_sorted_gradient.py +1 -1
python/paddle/fluid/tests/unittests/test_linear_chain_crf_op.py +1 -1
python/paddle/fluid/tests/unittests/test_nce.py +3 -3
python/paddle/fluid/tests/unittests/test_nn_grad.py +1 -1
python/paddle/fluid/tests/unittests/test_reshape_op.py +1 -1
python/paddle/fluid/tests/unittests/test_static_save_load.py +20 -20
python/paddle/fluid/tests/unittests/transformer_model.py +4 -4
python/paddle/fluid/transpiler/details/program_utils.py +1 -1
python/paddle/fluid/transpiler/distribute_transpiler.py +3 -3
python/paddle/fluid/transpiler/geo_sgd_transpiler.py +1 -1
python/paddle/fluid/transpiler/ps_dispatcher.py +2 -2
python/paddle/reader/decorator.py +2 -2
python/paddle/utils/image_util.py +1 -1
python/paddle/utils/plotcurve.py +1 -1
python/paddle/utils/preprocess_img.py +1 -1
python/paddle/utils/preprocess_util.py +4 -4
paddle/fluid/framework/data_set.h
@@ -126,7 +126,7 @@ class Dataset {
   virtual void DestroyPreLoadReaders() = 0;
   // set preload thread num
   virtual void SetPreLoadThreadNum(int thread_num) = 0;
-  // seperate train thread and dataset thread
+  // separate train thread and dataset thread
   virtual void DynamicAdjustChannelNum(int channel_num) = 0;
   virtual void DynamicAdjustReadersNum(int thread_num) = 0;
   // set fleet send sleep seconds
paddle/fluid/framework/details/build_strategy.h
@@ -132,10 +132,10 @@ struct BuildStrategy {
   // The picture is here:
   // https://github.com/PaddlePaddle/Paddle/pull/17263#discussion_r285411396
   bool use_hierarchical_allreduce_{false};
-  // Nccl ranks in a node when use hierarchical allreduce, it's setted to gpu
+  // Nccl ranks in a node when use hierarchical allreduce, it's set to gpu
   // cards' number in most cases.
   size_t hierarchical_allreduce_inter_nranks_{0};
-  // Nccl ranks bewteen nodes when use hierarchical allreduce, it's setted to
+  // Nccl ranks bewteen nodes when use hierarchical allreduce, it's set to
   // nodes number.
   size_t hierarchical_allreduce_exter_nranks_{0};
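For context, the two counters above factor the world size of a hierarchical allreduce. A minimal sketch of the relationship the comments describe, with hypothetical numbers and variable names (this is not the Paddle API):

# Hypothetical job layout; only the relationship between the two
# nranks fields is taken from the comments above.
nodes = 4            # machines in the job
gpus_per_node = 8    # GPU cards per machine

# Ranks inside one node: "set to gpu cards' number in most cases".
inter_nranks = gpus_per_node   # 8
# Ranks between nodes: "set to nodes number".
exter_nranks = nodes           # 4

assert inter_nranks * exter_nranks == nodes * gpus_per_node  # 32 total ranks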
paddle/fluid/framework/ir/conv_elementwise_add2_act_fuse_pass.cc
@@ -33,7 +33,7 @@ namespace ir {
   GET_IR_NODE(act_op);         \
   GET_IR_NODE(act_out);

-// Inherient the basic infomation from `base_desc`, and modify some fields.
+// Inherient the basic information from `base_desc`, and modify some fields.
 framework::proto::OpDesc PrepareOpDesc(
     const framework::proto::OpDesc& base_desc, const std::string& bias,
     const std::string& bias1, const std::string& activation,
paddle/fluid/framework/ir/conv_elementwise_add_act_fuse_pass.cc
@@ -31,7 +31,7 @@ namespace ir {
   GET_IR_NODE(act_op);         \
   GET_IR_NODE(act_out);

-// Inherient the basic infomation from `base_desc`, and modify some fields.
+// Inherient the basic information from `base_desc`, and modify some fields.
 framework::proto::OpDesc PrepareOpDesc(
     const framework::proto::OpDesc& base_desc, const std::string& bias,
     const std::string& activation, const std::string& output) {
paddle/fluid/framework/ir/fuse_optimizer_ops_pass/fuse_optimizer_op_pass.cc
@@ -382,7 +382,7 @@ const VarDesc *FuseOptimizerOpPass::GetVarDescFromVarsInfo(
     const std::string &var_name) const {
   auto grad_iter = vars_info.find(var_name);
   PADDLE_ENFORCE_EQ(grad_iter != vars_info.end(), true,
-                    "The gradient varibale %s is not found.", var_name);
+                    "The gradient variable %s is not found.", var_name);
   PADDLE_ENFORCE_EQ(!grad_iter->second.empty(), true,
                     "The gradient var node %s is not found.", var_name);
   PADDLE_ENFORCE_NOT_NULL(grad_iter->second.front()->Var(),
paddle/fluid/framework/ir/graph_pattern_detector.cc
@@ -131,7 +131,7 @@ bool GraphPatternDetector::MarkPDNodesInGraph(const ir::Graph &graph) {
 }

 // The intermediate Nodes can only link to the nodes inside the pattern, or this
-// subgraph will be droped.
+// subgraph will be dropped.
 void GraphPatternDetector::ValidateByNodeRole(
     std::vector<GraphPatternDetector::subgraph_t> *subgraphs) {
   std::vector<GraphPatternDetector::subgraph_t> result;
paddle/fluid/framework/ir/multi_batch_merge_pass.cc
@@ -179,7 +179,7 @@ void BatchMergePass::ApplyImpl(ir::Graph* graph) const {
       ir::Node* var = nullptr;
       auto updated_var = UpdateGradVarDesc(in_node->Var(), i, grad_names,
                                            bn_vars_need_rename);
-      // should be initialized by startup, how to initilize tensor in the
+      // should be initialized by startup, how to initialize tensor in the
       // scope?
       if (node->Name() == "batch_norm" &&
           bn_vars_need_rename.find(in_node->Name()) !=
paddle/fluid/framework/ir/multi_devices_graph_pass/multi_devices_graph_pass.cc
@@ -1041,7 +1041,7 @@ void DistSSAGraphBuilder::InsertPostprocessOps(ir::Graph *result) const {
   // There are 4 conditions:
   // 1. GPU && Reduce: Reduce gradient then broadcast gradient to other GPUS.
   //    Need to broadcast received parameters to other GPU.
-  // 2. GPU && AllReduce: AllReduce all graident to each GPU. Need to
+  // 2. GPU && AllReduce: AllReduce all gradient to each GPU. Need to
   //    broadcast received parameters to other GPU.
   // 3. CPU && AllReduce: AllReduce all gradient to each thread. Need to
   //    broadcast received parameters to other scope.
paddle/fluid/framework/op_desc.cc
@@ -80,7 +80,7 @@ class CompileTimeInferShapeContext : public InferShapeContext {
     PADDLE_ENFORCE_EQ(
         in_var_names.size(), out_var_names.size(),
         platform::errors::PreconditionNotMet(
-            "Op [%s]: Input var number shoule be equal with output var number",
+            "Op [%s]: Input var number should be equal with output var number",
             op_.Type()));
     for (size_t i = 0; i < in_var_names.size(); ++i) {
@@ -663,7 +663,7 @@ void OpDesc::Flush() {
 void OpDesc::CheckAttrs() {
   PADDLE_ENFORCE(!Type().empty(),
-                 "CheckAttr() can not be called before type is setted.");
+                 "CheckAttr() can not be called before type is set.");
   auto *checker = OpInfoMap::Instance().Get(Type()).Checker();
   if (checker == nullptr) {
     // checker is not configured. That operator could be generated by Paddle,
@@ -706,7 +706,7 @@ void OpDesc::InferShape(const BlockDesc &block) const {
 void OpDesc::InferVarType(BlockDesc *block) const {
   // There are a few places that var type can be set.
   // When VarDesc is created, default set to LOD_TENSOR.
-  // When output variable is created, default is defaut set to LOD_TENSOR.
+  // When output variable is created, default is default set to LOD_TENSOR.
   // We limit here to be the only place that operator defines its customized
   // var type inference. Hence, we don't do any "default" setting here.
   auto &info = OpInfoMap::Instance().Get(this->Type());
paddle/fluid/framework/operator.cc
@@ -648,7 +648,7 @@ class RuntimeInferShapeContext : public InferShapeContext {
     PADDLE_ENFORCE_EQ(
         in_var_list.size(), out_var_list.size(),
         platform::errors::PreconditionNotMet(
-            "Op [%s]: Input var size should be equal with ouput var size",
+            "Op [%s]: Input var size should be equal with output var size",
             op_.Type()));
     auto &out_var_names = op_.Outputs(out);
paddle/fluid/framework/operator.h
@@ -53,8 +53,8 @@ constexpr char kEmptyVarName[] = "@EMPTY@";
 constexpr char kTempVarName[] = "@TEMP@";

 /// If a variable's name has a certain suffix, it means that the
-/// variable is the gradient of another varibale.
-/// e.g. Variable "x@GRAD" is the gradient of varibale "x".
+/// variable is the gradient of another variable.
+/// e.g. Variable "x@GRAD" is the gradient of variable "x".
 constexpr char kGradVarSuffix[] = "@GRAD";
 constexpr size_t kGradVarSuffixSize = 5U;
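The naming rule documented in this header is plain string concatenation; a one-function sketch (standalone Python mirroring kGradVarSuffix, not a Paddle API call):

GRAD_VAR_SUFFIX = "@GRAD"  # mirrors kGradVarSuffix in operator.h

def grad_var_name(var_name):
    # The gradient of a variable is the variable's name plus the suffix.
    return var_name + GRAD_VAR_SUFFIX

assert grad_var_name("x") == "x@GRAD"  # "x@GRAD" is the gradient of "x"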
paddle/fluid/framework/operator_test.cc
@@ -340,7 +340,7 @@ class IndicateLoDTensorDataTypeTestProtoMaker : public OpProtoAndCheckerMaker {
  public:
   void Make() {
     AddInput("LoDTensor", "Input of Tensor type Variable.");
-    AddComment("This Op is only for IndicateVarDataType inferface test.");
+    AddComment("This Op is only for IndicateVarDataType interface test.");
   }
 };
@@ -362,7 +362,7 @@ class IndicateSelectedRowsDataTypeTestProtoMaker
  public:
   void Make() {
     AddInput("SelectedRows", "Input of SelectedRows type Variable.");
-    AddComment("This Op is only for IndicateVarDataType inferface test.");
+    AddComment("This Op is only for IndicateVarDataType interface test.");
   }
 };
@@ -382,7 +382,7 @@ class IndicateOtherDataTypeTestProtoMaker : public OpProtoAndCheckerMaker {
  public:
   void Make() {
     AddInput("Other", "Input of Other type Variable");
-    AddComment("This Op is only for IndicateVarDataType inferface test.");
+    AddComment("This Op is only for IndicateVarDataType interface test.");
   }
 };
@@ -572,7 +572,7 @@ class GetSetLoDLevelTestMaker : public OpProtoAndCheckerMaker {
   void Make() {
     AddInput("X", "(LoDTensor) Input Variable.");
     AddOutput("Out", "(LoDTensor) Output Variable.");
-    AddComment("This Op is only for Get/SetLoDLevel inferface test.");
+    AddComment("This Op is only for Get/SetLoDLevel interface test.");
   }
 };
paddle/fluid/inference/analysis/ir_passes/subgraph_util.cc
@@ -112,7 +112,7 @@ void RenameAndGetOutputs(
     std::unordered_map<std::string, std::string> *output_name_map,
     const std::unordered_map<std::string, framework::ir::Node *> &graph_var_map,
     bool trt_and_not_int8) {
-  //// In the normal case, the paddle-trt exists bug when runing the googlenet.
+  //// In the normal case, the paddle-trt exists bug when running the googlenet.
   // When there are more than two convolutions of 1 * 1 with the same input, the
   // paddle-tensorrt will do the merging optimization, which fuse those conv
   // into one conv, and then trigger bug. So, We should use strategy to avoid
paddle/fluid/inference/analysis/ir_passes/tensorrt_subgraph_pass.cc
@@ -223,7 +223,7 @@ void TensorRtSubgraphPass::CreateTensorRTOp(
   auto use_static_engine = Get<bool>("use_static_engine");
   // TODO(NHZlX)
   // There are models with the same structure but the different parameters,
-  // when runing in the 'use_serialize' mode, there is a bug.
+  // when running in the 'use_serialize' mode, there is a bug.
   auto engine_key = GenerateEngineKey(input_names_with_id, output_names_with_id,
                                       std::to_string(0));
   auto predictor_id = Get<int>("predictor_id");
paddle/fluid/inference/io.cc
@@ -137,7 +137,7 @@ std::unique_ptr<framework::ProgramDesc> Load(framework::Executor* executor,
                     "model version %ld is not supported.",
                     main_program->Version());
-  // model_from_memory is false in seperate parameters.
+  // model_from_memory is false in separate parameters.
   LoadPersistables(executor, scope, *main_program, dirname, "",
                    false /* model_from_memory */);
   return main_program;
paddle/fluid/inference/tensorrt/convert/ut_helper.h
@@ -101,7 +101,7 @@ class TRTConvertValidation {
     DeclVar(name, dim_vec);
   }

-  // Declare a parameter varaible in the scope.
+  // Declare a parameter variable in the scope.
   void DeclParamVar(const std::string& name, const nvinfer1::Dims& dims) {
     DeclVar(name, dims, true);
   }
paddle/fluid/inference/tensorrt/engine.cc
@@ -104,10 +104,9 @@ void TensorRTEngine::FreezeNetwork() {
       for (auto &t : all_t) {
         if (!quant_dynamic_range_.count(t)) {
-          VLOG(3)
-              << "We are in trt int8 mode(not calibration), scale not setted"
-              << " for tensor " << t->getName()
-              << ", this might be ok when trt does not need this range";
+          VLOG(3) << "We are in trt int8 mode(not calibration), scale not set"
+                  << " for tensor " << t->getName()
+                  << ", this might be ok when trt does not need this range";
         }
       }
       std::unordered_set<std::string> all_out_t_name;
paddle/fluid/operators/array_to_lod_tensor_op.cc
@@ -172,7 +172,7 @@ class ArrayToLoDTensorOpProtoMaker : public framework::OpProtoAndCheckerMaker {
              "(std::vector<LodTensor>) A vector of tensors that is going to "
              "be casted to a big LoDTensor.");
     AddInput("RankTable",
-             "(LoDRankTable) RankTable provides the coarse lod infomation to "
+             "(LoDRankTable) RankTable provides the coarse lod information to "
              "build the output LoDTensor. See "
              "'paddle/framework/lod_rank_table.h' for more details.");
     AddOutput("Out", "(LoDTensor) The LoDTensor formed by input tensor array.");
paddle/fluid/operators/average_accumulates_op.cc
@@ -132,7 +132,7 @@ class AverageAccumulatesOpMaker : public framework::OpProtoAndCheckerMaker {
              "(Tensor<int64_t>), The accumulating times of previous window with "
              "shape [1].");
     AddInput("in_num_updates",
-             "(Tensor<int64_t>), The total number of batches used by trainning "
+             "(Tensor<int64_t>), The total number of batches used by training "
              "before this batch with shape [1].");
     AddOutput("out_sum_1",
@@ -155,10 +155,9 @@ class AverageAccumulatesOpMaker : public framework::OpProtoAndCheckerMaker {
         "out_old_num_accumulates",
         "(Tensor<int64_t>) The accumulating times of previous window with "
         "shape [1].");
-    AddOutput(
-        "out_num_updates",
-        "(Tensor<int64_t>), The total number of batches used by trainning "
-        "before this batch with shape [1].");
+    AddOutput("out_num_updates",
+              "(Tensor<int64_t>), The total number of batches used by training "
+              "before this batch with shape [1].");
     AddAttr<float>("average_window",
                    "(float, default 0) "
paddle/fluid/operators/bilinear_tensor_product_op.h
@@ -49,7 +49,7 @@ class BilinearTensorProductKernel : public framework::OpKernel<T> {
     auto& place = *ctx.template device_context<DeviceContext>().eigen_device();
     auto& dev_ctx = ctx.template device_context<DeviceContext>();

-    // Create the intermediate variable to caculate the result of
+    // Create the intermediate variable to calculate the result of
     // Input(X) multiplied by Input(Weight_i), the formula is:
     // left_mul = X Weight_i.
     Tensor left_mul;
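The comment's formula, left_mul = X · Weight_i, is one slice of a bilinear tensor product in which each output channel i is (X · Weight_i) reduced against Y row by row. A NumPy sketch of that computation under assumed shapes (illustrative only, not the kernel):

import numpy as np

batch, dx, dy, out_channels = 4, 3, 5, 2
X = np.random.rand(batch, dx)
Y = np.random.rand(batch, dy)
W = np.random.rand(out_channels, dx, dy)  # one weight matrix per output channel

out = np.empty((batch, out_channels))
for i in range(out_channels):
    left_mul = X @ W[i]                   # the intermediate from the comment
    out[:, i] = np.sum(left_mul * Y, axis=1)

# The same result as a single einsum.
assert np.allclose(out, np.einsum('bx,oxy,by->bo', X, W, Y))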
paddle/fluid/operators/conv_transpose_op.cc
@@ -267,7 +267,7 @@ void Conv2DTransposeOpMaker::Make() {
                  "workspace is a section of GPU memory which will be "
                  "allocated/freed each time the operator runs, larger "
                  "workspace size can increase performance but also requires "
-                 "better hardward. This size should be carefully setted.")
+                 "better hardward. This size should be carefully set.")
         .SetDefault(platform::GetDefaultConvWorkspaceSizeLimitMB());
     AddComment(R"DOC(
 Convolution2D Transpose Operator.
@@ -368,7 +368,7 @@ void Conv3DTransposeOpMaker::Make() {
                  "workspace is a section of GPU memory which will be "
                  "allocated/freed each time the operator runs, larger "
                  "workspace size can increase performance but also requires "
-                 "better hardward. This size should be carefully setted.")
+                 "better hardward. This size should be carefully set.")
         .SetDefault(platform::GetDefaultConvWorkspaceSizeLimitMB());
     AddComment(R"DOC(
 Convolution3D Transpose Operator.
paddle/fluid/operators/crop_op.cc
@@ -36,7 +36,7 @@ class CropOp : public framework::OperatorWithKernel {
       auto shape = ctx->Attrs().Get<std::vector<int>>("shape");
       PADDLE_ENFORCE_EQ(
           int64_t(shape.size()), x_dim.size(),
-          "Shape size should be equal to dimention size of input tensor.");
+          "Shape size should be equal to dimension size of input tensor.");
       std::vector<int64_t> tensor_shape(shape.size());
       for (size_t i = 0; i < shape.size(); ++i) {
         tensor_shape[i] = static_cast<int64_t>(shape[i]);
paddle/fluid/operators/crop_tensor_op.cc
@@ -82,7 +82,7 @@ class CropTensorOp : public framework::OperatorWithKernel {
       }
       PADDLE_ENFORCE_EQ(int64_t(shape.size()), x_dim.size(),
                         "Attr(shape)'size of Op(crop_tensor) should be equal to "
-                        "dimention size of input tensor.");
+                        "dimension size of input tensor.");
       std::vector<int64_t> out_shape(shape.size(), -1);
       for (size_t i = 0; i < shape.size(); ++i) {
         if (shape[i] > 0) {
paddle/fluid/operators/crop_tensor_op.h
@@ -157,7 +157,7 @@ void CropTensorFunction(const framework::ExecutionContext& context) {
   // get shape from Input(ShapeTensor) of Input(Shape)
   std::vector<int> shape = GetShape(context);
-  // out_dims setted by arrt(shape)
+  // out_dims set by arrt(shape)
   if (shape.size() == 0) {
     for (int i = 0; i < out_dims.size(); ++i) {
       shape.push_back(out_dims[i]);
paddle/fluid/operators/cross_entropy_op.cc
@@ -203,7 +203,7 @@ class CrossEntropyOpMaker : public framework::OpProtoAndCheckerMaker {
              "represents the cross entropy loss.");
     AddAttr<bool>("soft_label",
                   "(bool, default false), a flag indicating whether to "
-                  "interpretate the given labels as soft labels.")
+                  "interpretant the given labels as soft labels.")
         .SetDefault(false);
     AddAttr<int>("ignore_index",
                  "(int, default -100), Specifies a target value that is"
paddle/fluid/operators/ctc_align_op.cc
@@ -63,7 +63,7 @@ class CTCAlignOpMaker : public framework::OpProtoAndCheckerMaker {
              "sequence in Output.")
         .AsDispensable();
     AddAttr<int>("blank",
-                 "(int, default: 0), the blank label setted in Connectionist "
+                 "(int, default: 0), the blank label set in Connectionist "
                  "Temporal Classification (CTC) op.")
         .SetDefault(0);
     AddAttr<bool>("merge_repeated",
paddle/fluid/operators/cumsum_op.cc
@@ -33,8 +33,8 @@ class CumsumOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("X", "Input of cumsum operator");
     AddOutput("Out", "Output of cumsum operator");
     AddAttr<int>("axis",
-                 "The dimenstion to accumulate along. -1 means the last "
-                 "dimenstion [default -1].")
+                 "The dimension to accumulate along. -1 means the last "
+                 "dimension [default -1].")
         .SetDefault(-1)
         .EqualGreaterThan(-1);
     AddAttr<bool>("exclusive",
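The attribute semantics fixed above (accumulate along axis, with -1 meaning the last dimension, plus a separate exclusive flag) behave like NumPy's cumsum; a small illustration, not Paddle code:

import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

# axis=-1 accumulates along the last dimension (the attribute's default).
inclusive = np.cumsum(x, axis=-1)   # [[1 3 6], [4 9 15]]

# An exclusive scan excludes the current element at each position,
# which is the inclusive result shifted by the input itself.
exclusive = inclusive - x           # [[0 1 3], [0 4 9]]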
paddle/fluid/operators/deformable_psroi_pooling_op.cc
@@ -67,7 +67,7 @@ class DeformablePSROIPoolOpMaker : public framework::OpProtoAndCheckerMaker {
              "the number of groups which input channels are divided."
              "(eg.number of input channels is k1*k2*(C+1), which k1 and k2 "
              "are group width and height and C+1 is number of output "
-             "chanels. eg.(4, 6), which 4 is height of group and 6 is "
+             "channels. eg.(4, 6), which 4 is height of group and 6 is "
              "width of group");
     AddAttr<int>("pooled_height",
                  "(int), "
paddle/fluid/operators/detection/box_coder_op.cc
@@ -117,7 +117,7 @@ class BoxCoderOpMaker : public framework::OpProtoAndCheckerMaker {
         .InEnum({"encode_center_size", "decode_center_size"});
     AddAttr<bool>("box_normalized",
                   "(bool, default true) "
-                  "whether treat the priorbox as a noramlized box")
+                  "whether treat the priorbox as a normalized box")
         .SetDefault(true);
     AddAttr<int>("axis",
                  "(int, default 0)"
@@ -140,7 +140,7 @@ class BoxCoderOpMaker : public framework::OpProtoAndCheckerMaker {
              "box_coder_op with shape [N, M, 4] representing the result of N "
              "target boxes encoded with M Prior boxes and variances. When "
              "code_type is 'decode_center_size', N represents the batch size "
-             "and M represents the number of deocded boxes.");
+             "and M represents the number of decoded boxes.");
     AddComment(R"DOC(
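For background on the encode_center_size/decode_center_size code types named above, the usual center-size encoding of a target box against a prior box looks roughly as follows (a generic sketch with no variance handling; the helper name is hypothetical, not the operator's kernel):

import math

def encode_center_size(target, prior):
    # Boxes are (xmin, ymin, xmax, ymax); offsets are relative to the prior.
    pw, ph = prior[2] - prior[0], prior[3] - prior[1]
    pcx, pcy = prior[0] + pw / 2, prior[1] + ph / 2
    tw, th = target[2] - target[0], target[3] - target[1]
    tcx, tcy = target[0] + tw / 2, target[1] + th / 2
    return [(tcx - pcx) / pw, (tcy - pcy) / ph,
            math.log(tw / pw), math.log(th / ph)]

print(encode_center_size([1, 1, 3, 3], [0, 0, 2, 2]))  # [0.5, 0.5, 0.0, 0.0]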
paddle/fluid/operators/detection/generate_mask_labels_op.cc
@@ -403,7 +403,7 @@ class GenerateMaskLabelsOpMaker : public framework::OpProtoAndCheckerMaker {
              "each element is a bounding box with (xmin, ymin, xmax, ymax) format.");
     AddInput("LabelsInt32",
              "(LoDTensor), This intput is a 2D LoDTensor with shape [R, 1], "
-             "each element repersents a class label of a roi");
+             "each element represents a class label of a roi");
     AddOutput(
         "MaskRois",
         "(LoDTensor), This output is a 2D LoDTensor with shape [P, 4]. "
@@ -411,7 +411,7 @@ class GenerateMaskLabelsOpMaker : public framework::OpProtoAndCheckerMaker {
              "each element is a bounding box with [xmin, ymin, xmax, ymax] format.");
     AddOutput("RoiHasMaskInt32",
               "(LoDTensor), This output is a 2D LoDTensor with shape [P, 1], "
-              "each element repersents the output mask rois index with regard "
+              "each element represents the output mask rois index with regard "
               "to input rois");
     AddOutput("MaskInt32",
               "(LoDTensor), This output is a 4D LoDTensor with shape [P, Q], "
paddle/fluid/operators/detection/generate_proposal_labels_op.cc
@@ -521,11 +521,11 @@ class GenerateProposalLabelsOpMaker : public framework::OpProtoAndCheckerMaker {
              "each element is a bounding box with [xmin, ymin, xmax, ymax] format.");
     AddOutput("LabelsInt32",
               "(LoDTensor), This output is a 2D LoDTensor with shape [P, 1], "
-              "each element repersents a class label of a roi");
+              "each element represents a class label of a roi");
     AddOutput("BboxTargets",
               "(LoDTensor), This output is a 2D LoDTensor with shape [P, 4 * "
               "class_nums], "
-              "each element repersents a box label of a roi");
+              "each element represents a box label of a roi");
     AddOutput(
         "BboxInsideWeights",
         "(LoDTensor), This output is a 2D LoDTensor with shape [P, 4 * "
paddle/fluid/operators/detection/iou_similarity_op.cc
@@ -63,7 +63,7 @@ class IOUSimilarityOpMaker : public framework::OpProtoAndCheckerMaker {
              "bottom coordinate of the box.");
     AddAttr<bool>("box_normalized",
                   "(bool, default true) "
-                  "whether treat the priorbox as a noramlized box")
+                  "whether treat the priorbox as a normalized box")
         .SetDefault(true);
     AddOutput("Out",
               "(LoDTensor, the lod is same as input X) The output of "
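As a reference for what this operator measures, the intersection-over-union of two (xmin, ymin, xmax, ymax) boxes can be computed as below; a generic sketch, not the operator's kernel:

def iou(a, b):
    # Intersection rectangle; non-overlapping boxes clamp to zero area.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7, about 0.143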
paddle/fluid/operators/detection/locality_aware_nms_op.cc
@@ -393,7 +393,7 @@ class LocalityAwareNMSOpMaker : public framework::OpProtoAndCheckerMaker {
     AddAttr<int>("nms_top_k",
                  "(int64_t) "
                  "Maximum number of detections to be kept according to the "
-                 "confidences aftern the filtering detections based on "
+                 "confidences after the filtering detections based on "
                  "score_threshold");
     AddAttr<float>("nms_threshold",
                    "(float, default: 0.3) "
paddle/fluid/operators/detection/multiclass_nms_op.cc
@@ -424,7 +424,7 @@ class MultiClassNMSOpMaker : public framework::OpProtoAndCheckerMaker {
     AddAttr<int>("nms_top_k",
                  "(int64_t) "
                  "Maximum number of detections to be kept according to the "
-                 "confidences aftern the filtering detections based on "
+                 "confidences after the filtering detections based on "
                  "score_threshold");
     AddAttr<float>("nms_threshold",
                    "(float, default: 0.3) "
paddle/fluid/operators/detection/target_assign_op.cc
@@ -44,7 +44,7 @@ class TargetAssignOp : public framework::OperatorWithKernel {
       PADDLE_ENFORCE_EQ(neg_dims.size(), 2,
                         "The rank of Input(NegIndices) must be 2.");
       PADDLE_ENFORCE_EQ(neg_dims[1], 1,
-                        "The last dimenstion of Out(NegIndices) must be 1.");
+                        "The last dimension of Out(NegIndices) must be 1.");
     }
     auto n = mi_dims[0];
paddle/fluid/operators/detection/yolov3_loss_op.cc
@@ -111,15 +111,15 @@ class Yolov3LossOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("X",
              "The input tensor of YOLOv3 loss operator, "
              "This is a 4-D tensor with shape of [N, C, H, W]."
-             "H and W should be same, and the second dimention(C) stores"
+             "H and W should be same, and the second dimension(C) stores"
              "box locations, confidence score and classification one-hot"
              "keys of each anchor box");
     AddInput("GTBox",
              "The input tensor of ground truth boxes, "
              "This is a 3-D tensor with shape of [N, max_box_num, 5], "
              "max_box_num is the max number of boxes in each image, "
-             "In the third dimention, stores x, y, w, h coordinates, "
-             "x, y is the center cordinate of boxes and w, h is the "
+             "In the third dimension, stores x, y, w, h coordinates, "
+             "x, y is the center coordinate of boxes and w, h is the "
              "width and height and x, y, w, h should be divided by "
              "input image height to scale to [0, 1].");
     AddInput("GTLabel",
paddle/fluid/operators/elementwise/test_elementwise_mul_op_dim.cc
@@ -79,7 +79,7 @@ TEST(ElementwiseMulOpTester, correct_dims) {
   MainTest(test_data);
 }

-// Checks if AreDimsAndFormatCorrect fails when channel_num is not divisable by
+// Checks if AreDimsAndFormatCorrect fails when channel_num is not devisable by
 // 16
 TEST(ElementwiseMulOpTester, incorrect_channel_num) {
   TestData test_data;
paddle/fluid/operators/fused/fusion_group_op.cc
@@ -76,7 +76,7 @@ class FusionGroupOpMaker : public framework::OpProtoAndCheckerMaker {
 fusion_group Operator.

 It is used to execute a generated CUDA kernel which fuse the computation of
-multiple operators into one. It supports serveral types:
+multiple operators into one. It supports several types:
 0, fused computation of elementwise operations in which all the dims of inputs
 and outputs should be exactly the same.
 )DOC");
paddle/fluid/operators/fused/fusion_transpose_flatten_concat_op.cu.cc
@@ -76,7 +76,7 @@ class TransposeFlattenConcatFusionKernel : public framework::OpKernel<T> {
       }
     }

-    // Since concat is aftern flatten, the output is 2D tensor.
+    // Since concat is after flatten, the output is 2D tensor.
     // If concat_axis is 0, each input's permutated tensor is continuous.
     // If concat_axis is 1, the stride of 0-th dim of each input's
     // permutated tensor is odims()[1].
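The fixed comment states that concat runs after flatten, so the fused output is a 2-D tensor; a NumPy sketch of the unfused transpose-flatten-concat pattern under assumed shapes (illustrative only, not the fused kernel):

import numpy as np

# Two 4-D inputs of the same shape, each transposed, then flattened to 2-D.
a = np.random.rand(2, 3, 4, 5)
b = np.random.rand(2, 3, 4, 5)
perm = (0, 2, 3, 1)  # an example permutation

fa = a.transpose(perm).reshape(a.shape[0], -1)  # keeps dim 0, flattens the rest
fb = b.transpose(perm).reshape(b.shape[0], -1)

out0 = np.concatenate([fa, fb], axis=0)  # concat_axis = 0 -> shape (4, 60)
out1 = np.concatenate([fa, fb], axis=1)  # concat_axis = 1 -> shape (2, 120)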
paddle/fluid/operators/grid_sampler_op.cc
@@ -84,7 +84,7 @@ class GridSampleOpMaker : public framework::OpProtoAndCheckerMaker {
         "Grid",
         "(Tensor) The input grid of GridSampleOp generated by AffineGridOp, "
         "This is a 4-D tensor with shape of [N, H, W, 2] is the concatenation "
-        "of x and y coordinates with shape [N, H, W] in last dimention");
+        "of x and y coordinates with shape [N, H, W] in last dimension");
     AddOutput("Output", "(Tensor) Output tensor with shape [N, C, H, W]");
     AddAttr<bool>(
         "use_cudnn",
@@ -93,11 +93,11 @@ class GridSampleOpMaker : public framework::OpProtoAndCheckerMaker {
     AddComment(R"DOC(
   This operation samples input X by using bilinear interpolation based on
-  flow field grid, which is usually gennerated by affine_grid. The grid of
+  flow field grid, which is usually generated by affine_grid. The grid of
   shape [N, H, W, 2] is the concatenation of (grid_x, grid_y) coordinates
   with shape [N, H, W] each, where grid_x is indexing the 4th dimension
-  (in width dimension) of input data x and grid_y is indexng the 3rd
-  dimention (in height dimension), finally results is the bilinear
+  (in width dimension) of input data x and grid_y is indexing the 3rd
+  dimension (in height dimension), finally results is the bilinear
   interpolation value of 4 nearest corner points.

   Step 1:
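A minimal NumPy sketch of the bilinear sampling this doc comment describes; it assumes the grid already holds pixel-space (grid_x, grid_y) coordinates, whereas the op itself first maps grid values from [-1, 1] to pixel space:

    import numpy as np

    def grid_sample_bilinear(x, grid):
        # x: [N, C, H, W] input; grid: [N, H_out, W_out, 2] holding (grid_x, grid_y)
        n, c, h, w = x.shape
        out = np.zeros((n, c, grid.shape[1], grid.shape[2]), dtype=x.dtype)
        for i in range(n):
            for yo in range(grid.shape[1]):
                for xo in range(grid.shape[2]):
                    gx, gy = grid[i, yo, xo]
                    x0, y0 = int(np.floor(gx)), int(np.floor(gy))
                    wx, wy = gx - x0, gy - y0
                    # blend the 4 nearest corner points, skipping out-of-range ones
                    for xi, yi, wgt in ((x0, y0, (1 - wx) * (1 - wy)),
                                        (x0 + 1, y0, wx * (1 - wy)),
                                        (x0, y0 + 1, (1 - wx) * wy),
                                        (x0 + 1, y0 + 1, wx * wy)):
                        if 0 <= xi < w and 0 <= yi < h:
                            out[i, :, yo, xo] += wgt * x[i, :, yi, xi]
        return out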
paddle/fluid/operators/gru_op.cc
@@ -112,7 +112,7 @@ class GRUOpMaker : public framework::OpProtoAndCheckerMaker {
         .AsIntermediate();
     AddOutput(
         "BatchResetHiddenPrev",
-        "(LoDTensor) The reseted hidden state LoDTensor organized in batches. "
+        "(LoDTensor) The reset hidden state LoDTensor organized in batches. "
         "This LoDTensor is a matrix with shape (T X D) and has the same LoD "
         "with `BatchGate`.")
         .AsIntermediate();
paddle/fluid/operators/gru_unit_op.cc
@@ -97,7 +97,7 @@ class GRUUnitOpMaker : public framework::OpProtoAndCheckerMaker {
         .AsIntermediate();
     AddOutput("ResetHiddenPrev",
               "(Tensor) Matrix with shape [batch_size, frame_size] for the "
-              "reseted hidden state of previous time step.")
+              "reset hidden state of previous time step.")
         .AsIntermediate();
     AddOutput("Hidden",
               "(Tensor) The GRU hidden state of the current time step "
paddle/fluid/operators/hierarchical_sigmoid_op.cc
@@ -144,7 +144,7 @@ class HierarchicalSigmoidOpMaker : public framework::OpProtoAndCheckerMaker {
         .AsIntermediate();
     AddOutput(
         "W_Out",
-        "(LoDTensor, optinal) using input 'W' as Output to make it mutable"
+        "(LoDTensor, optional) using input 'W' as Output to make it mutable"
         "When we are using prefetch")
         .AsIntermediate();
     AddAttr<AttrType>("num_classes", "(int, optional), The number of classes")
paddle/fluid/operators/interpolate_op.cc
@@ -285,7 +285,7 @@ class InterpolateOpMaker : public framework::OpProtoAndCheckerMaker {
           interpolation.
           Nearest neighbor interpolation is to perform nearest neighbor interpolation
-          in both the 3rd dimention(in height direction) and the 4th dimention(in width
+          in both the 3rd dimension(in height direction) and the 4th dimension(in width
           direction) on input tensor.
           Bilinear interpolation is an extension of linear interpolation for
@@ -299,7 +299,7 @@ class InterpolateOpMaker : public framework::OpProtoAndCheckerMaker {
           H-direction and W-direction in this op) on a rectilinear 3D grid.
           The linear interpolation is performed on three directions.
-          Align_corners and align_mode are optinal parameters,the calculation method
+          Align_corners and align_mode are optional parameters,the calculation method
           of interpolation can be selected by them.
           Example:
paddle/fluid/operators/lrn_op.cc
@@ -296,7 +296,7 @@ $$
 Function implementation:
-Inputs and outpus are in NCHW or NHWC format, while input.shape.ndims() equals 4.
+Inputs and outputs are in NCHW or NHWC format, while input.shape.ndims() equals 4.
 If NCHW, the dimensions 0 ~ 3 represent batch size, feature maps, rows,
 and columns, respectively.
paddle/fluid/operators/math/matrix_bit_code.h
@@ -105,7 +105,7 @@ class SimpleCode {
   SimpleCode(size_t code, size_t num_classes, const int64_t* ids)
       : c_(static_cast<size_t>(ids[code]) + num_classes) {}
   /**
-   * Here the id of root shoud be 1 rather than 0, thus the encoding of class c
+   * Here the id of root should be 1 rather than 0, thus the encoding of class c
   * is `c + num_classes` and all siblings can get the same weight indice using
   * prefixes.
   * Weight index is the prefixes of encoding, thus leave out the right most
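The prefix convention in this comment can be made concrete with a small sketch; the helper below is illustrative only, not the actual class API:

    def simple_code(c, num_classes):
        # With the root numbered 1, class c is encoded as c + num_classes; the
        # bits below the leading 1 form the path to the leaf, and each strict
        # prefix of the code is a weight index shared by all siblings.
        code = c + num_classes
        length = code.bit_length() - 1
        bits = [(code >> d) & 1 for d in range(length)]       # right-most bit first
        prefixes = [code >> (d + 1) for d in range(length)]   # weight indices
        return bits, prefixes

    # num_classes=4, c=1 -> code 5 (0b101): bits [1, 0], weight indices [2, 1]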
paddle/fluid/operators/nce_op.cc
@@ -129,19 +129,19 @@ class NCEOpMaker : public framework::OpProtoAndCheckerMaker {
         "CustomDistProbs",
         "(Tensor) It is used in 'CostumDist' sampler. "
         "It is a tensor with shape [num_total_classes]."
-        "The i-th element is the probsbility of the i-th class being sampled.")
+        "The i-th element is the probability of the i-th class being sampled.")
         .AsDispensable();
     AddInput(
         "CustomDistAlias",
         "(Tensor) It is used in 'CostumDist' sampler. "
         "It is a tensor with shape [num_total_classes]."
-        "The i-th element is the probsbility of the i-th class being sampled.")
+        "The i-th element is the probability of the i-th class being sampled.")
         .AsDispensable();
     AddInput(
         "CustomDistAliasProbs",
         "(Tensor) It is used in 'CostumDist' sampler. "
         "It is a tensor with shape [num_total_classes]."
-        "The i-th element is the probsbility of the i-th class being sampled.")
+        "The i-th element is the probability of the i-th class being sampled.")
         .AsDispensable();
     AddOutput("Cost",
paddle/fluid/operators/pad_constant_like_op.cc
@@ -36,7 +36,7 @@ class PadConstantLikeOp : public framework::OperatorWithKernel {
     auto y_dim = ctx->GetInputDim("Y");
     PADDLE_ENFORCE_EQ(x_dim.size(), y_dim.size(),
-                      "The dimention of X and Y should be the same.");
+                      "The dimension of X and Y should be the same.");
     for (int i = 0; i < x_dim.size(); ++i) {
       if ((!ctx->IsRuntime()) && ((x_dim[i] == -1) || (y_dim[i] == -1))) {
@@ -164,7 +164,7 @@ class PadConstantLikeOpGrad : public framework::OperatorWithKernel {
     auto dout_dim = ctx->GetInputDim(framework::GradVarName("Out"));
     PADDLE_ENFORCE_EQ(dout_dim.size(), y_dim.size(),
-                      "The dimention of X and Y should be the same.");
+                      "The dimension of X and Y should be the same.");
     auto y_grad_name = framework::GradVarName("Y");
     if (ctx->HasOutput(y_grad_name)) {
paddle/fluid/operators/prroi_pool_op.cu
@@ -325,7 +325,7 @@ class GPUPRROIPoolGradOpKernel : public framework::OpKernel<T> {
     } else {
       PADDLE_ENFORCE_EQ(rois->lod().empty(), false,
                         platform::errors::InvalidArgument(
-                            "the lod of Input ROIs shoule not be empty when "
+                            "the lod of Input ROIs should not be empty when "
                             "BatchRoINums is None!"));
       auto rois_lod = rois->lod().back();
       int rois_batch_size = rois_lod.size() - 1;
paddle/fluid/operators/prroi_pool_op.h
@@ -293,7 +293,7 @@ class CPUPRROIPoolOpKernel : public framework::OpKernel<T> {
     } else {
       PADDLE_ENFORCE_EQ(rois->lod().empty(), false,
                         platform::errors::InvalidArgument(
-                            "the lod of Input ROIs shoule not be empty when "
+                            "the lod of Input ROIs should not be empty when "
                             "BatchRoINums is None!"));
       auto rois_lod = rois->lod().back();
       int rois_batch_size = rois_lod.size() - 1;
paddle/fluid/operators/reader/read_op.cc
@@ -24,8 +24,8 @@ namespace operators {
 // Returns true if the two dimensions are compatible.
 // A dimension is compatible with the other if:
 // 1. The length of the dimensions are same.
-// 2. Each non-negative number of the two dimentions are same.
-// 3. For negative number in a dimention, it means unknown so it is compatible
+// 2. Each non-negative number of the two dimensions are same.
+// 3. For negative number in a dimension, it means unknown so it is compatible
 // with any number.
 bool DimensionIsCompatibleWith(const framework::DDim& first,
                                const framework::DDim& second) {
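The compatibility rule spelled out in this comment is short enough to state directly; a rough Python equivalent of the check, for illustration:

    def dims_compatible(first, second):
        # Same rank, equal non-negative entries; a negative entry means
        # unknown and therefore matches anything.
        if len(first) != len(second):
            return False
        return all(a == b or a < 0 or b < 0 for a, b in zip(first, second))

    assert dims_compatible([-1, 3, 224], [8, 3, 224])
    assert not dims_compatible([2, 3], [4, 3])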
paddle/fluid/operators/reduce_ops/reduce_op.h
@@ -174,7 +174,7 @@ class ReduceOp : public framework::OperatorWithKernel {
     PADDLE_ENFORCE_GT(
         dims.size(), 0,
         "ShapeError: The input dim dimensions of Reduce "
-        "shoud be greater than 0. But received the dim dimesions of Reduce "
+        "should be greater than 0. But received the dim dimesions of Reduce "
         " = %d",
         dims.size());
paddle/fluid/operators/reshape_op.cc
@@ -155,10 +155,11 @@ class ReshapeOp : public framework::OperatorWithKernel {
     } else {
       PADDLE_ENFORCE_GT(
           shape[i], 0,
-          "ShapeError: Each dimension value of 'shape' in ReshapeOp must not "
-          "be negtive except one unknown dimension. "
-          "But received shape = [%s], shape[%d] = %d.",
-          framework::make_ddim(shape), i, shape[i]);
+          platform::errors::InvalidArgument(
+              "Each dimension value of 'shape' in ReshapeOp must not "
+              "be negative except one unknown dimension. "
+              "But received shape = [%s], shape[%d] = %d.",
+              framework::make_ddim(shape), i, shape[i]));
     }
     capacity *= (shape[i] ? shape[i] : in_dims[i]);
@@ -228,7 +229,7 @@ class ReshapeOpMaker : public framework::OpProtoAndCheckerMaker {
              "(Tensor<int32>, optional). Target shape of reshape operator. "
              "It has a higher priority than Attr(shape) but a lower priority "
              "than Input(ShapeTensor). The Attr(shape) still should be "
-             "set correctly to gurantee shape inference in compile time.")
+             "set correctly to guarantee shape inference in compile time.")
         .AsDispensable();
     AddInput(
         "ShapeTensor",
@@ -282,7 +283,7 @@ dimension value will be copied from Input(X) at runtime. Note that the index of
 [2, 3, 4], Attr(shape) = [2, 3, 2, 0] is an invalid input.
 3. Input(Shape) has a higher priority than Attr(shape) if it is provided, while
-Attr(shape) still should be set correctly to gurantee shape inference in
+Attr(shape) still should be set correctly to guarantee shape inference in
 compile-time.
 )DOC");
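The 0/-1 conventions for Attr(shape) described here can be summarized by a small sketch of the inference rule; this is a simplified model that ignores the Shape/ShapeTensor inputs and the validity checks the op performs:

    from math import prod

    def infer_shape(in_shape, shape_attr):
        # 0 copies the corresponding input dimension (its index must be in
        # range); a single -1 is inferred so the element count is preserved.
        out = [in_shape[i] if s == 0 else s for i, s in enumerate(shape_attr)]
        if -1 in out:
            i = out.index(-1)
            out[i] = prod(in_shape) // prod(s for j, s in enumerate(out) if j != i)
        return out

    # infer_shape([2, 3, 4], [0, -1]) -> [2, 12]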
paddle/fluid/operators/scatter_op.cc
@@ -86,7 +86,7 @@ class ScatterOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("Updates", "The updated value of scatter op");
     AddOutput("Out", "The output of scatter op");
     AddAttr<bool>("overwrite",
-                  "(bool, defalut: True) "
+                  "(bool, default: True) "
                   "The mode that updating the output when has same index,"
                   "If True, use the overwrite mode to update the output"
                   "of the same index, if False, use the accumulate mode to"
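The two update modes controlled by this attribute differ only when Ids contains duplicates; a small NumPy illustration of the intended semantics:

    import numpy as np

    x = np.zeros((3, 2))
    ids = np.array([1, 1])
    updates = np.array([[1., 1.], [2., 2.]])

    overwrite = x.copy()
    overwrite[ids] = updates             # overwrite mode: last write wins -> row 1 is [2, 2]

    accumulate = x.copy()
    np.add.at(accumulate, ids, updates)  # accumulate mode: duplicates add up -> row 1 is [3, 3]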
paddle/fluid/operators/select_input_op.cc
@@ -67,7 +67,7 @@ class SelectInputOpProtoMaker : public framework::OpProtoAndCheckerMaker {
     // Because this op is blocking whole control flow. I am implementing MVP
     // (minimal viable product) here.
     AddComment(R"DOC(
-Merge branches of LoDTensor into a single Output with a mask interger
+Merge branches of LoDTensor into a single Output with a mask integer
 specifying the output branchi.
 )DOC");
   }
paddle/fluid/operators/sequence_ops/sequence_pad_op.cc
@@ -118,7 +118,7 @@ class SequencePadOpMaker : public framework::OpProtoAndCheckerMaker {
               "sequences before padding.");
     AddAttr<int>(
         "padded_length",
-        "The length of padded sequences. It can be setted to -1 or "
+        "The length of padded sequences. It can be set to -1 or "
         "any positive int. When it is -1, all sequences will be padded up to "
         "the length of the longest one among them; when it a certain positive "
         "value, it must be greater than the length of the longest original "
paddle/fluid/operators/sequence_ops/sequence_pool_op.cc
@@ -54,7 +54,7 @@ class SequencePoolOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("X", "(LoDTensor) The variable-length input of SequencePoolOp");
     AddOutput("Out",
               "(Tensor) The output of SequencePoolOp does not contain LoD "
-              "infomation.");
+              "information.");
     AddOutput("MaxIndex",
               "(Tensor<int>) This tensor is used for the sequence max-pooling "
               "to record the max indexes.")
@@ -93,7 +93,7 @@ Assume X is a [7,M,N] LoDTensor, and X->lod()[0] = [0, 2, 5, 7], 7=2+3+2.
 Besides, for the sake of simplicity, we assume M=1 and N=1,
 and the value of X = [[1, 3], [2, 4, 6], [5, 1]].
-Thus, Out is a [3,1,1] Tensor without LoD infomation.
+Thus, Out is a [3,1,1] Tensor without LoD information.
 And for different pooltype, the value of Out is as follows:
 - AVERAGE: [2, 4, 3], where 2=(1+3)/2, 4=(2+4+6)/3, 3=(5+1)/2
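The worked example in this doc comment is easy to reproduce; a NumPy sketch over the same LoD offsets [0, 2, 5, 7]:

    import numpy as np

    x = np.array([1., 3., 2., 4., 6., 5., 1.])   # the flattened X above (M=N=1)
    lod = [0, 2, 5, 7]
    segments = [x[s:e] for s, e in zip(lod[:-1], lod[1:])]
    average = [seg.mean() for seg in segments]   # [2.0, 4.0, 3.0]
    maximum = [seg.max() for seg in segments]    # [3.0, 6.0, 5.0]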
paddle/fluid/operators/sequence_ops/sequence_topk_avg_pooling_op.cc
@@ -63,7 +63,7 @@ class SequenceTopkAvgPoolingOpMaker : public framework::OpProtoAndCheckerMaker {
     AddOutput("Out",
               "(Tensor) The output of SequenceTopkPoolingOp does not contain LoD "
-              "infomation.");
+              "information.");
     AddOutput("pos", "(Tensor<int>) store the topk index ").AsIntermediate();
     AddAttr<std::vector<int>>("topks", "topks");
     AddAttr<int>("channel_num", "channel number");
paddle/fluid/operators/sequence_ops/sequence_unpad_op.cc
@@ -96,7 +96,7 @@ class SequenceUnpadOpMaker : public framework::OpProtoAndCheckerMaker {
                   [ 6.0,  7.0,  8.0,  9.0, 10.0],
                   [11.0, 12.0, 13.0, 14.0, 15.0]],
 `
-     in which there are 3 sequences padded to length 5, and the acutal length
+     in which there are 3 sequences padded to length 5, and the actual length
      specified by Input(Length):
          Length.data = [2, 3, 4],
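The unpad step in this example is a direct slice-and-concatenate; a NumPy sketch with the same data:

    import numpy as np

    padded = np.arange(1., 16.).reshape(3, 5)   # the 3 x 5 padded batch above
    lengths = [2, 3, 4]                         # Length.data
    unpadded = np.concatenate([row[:n] for row, n in zip(padded, lengths)])
    # -> [1, 2, 6, 7, 8, 11, 12, 13, 14]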
paddle/fluid/operators/shard_index_op.cc
@@ -63,7 +63,7 @@ class ShardIndexOpMaker : public framework::OpProtoAndCheckerMaker {
     AddAttr<int>("nshards",
                  "A positive integer to specify the number of shards.");
     AddAttr<int>("shard_id", "The current shard id");
-    AddAttr<int>("ignore_value", "An ingeter value out of sharded range")
+    AddAttr<int>("ignore_value", "An integer value out of sharded range")
         .SetDefault(-1);
     AddComment(R"DOC(
 This layer creates the sharded index for input. This layers is used in
@@ -80,7 +80,7 @@ to
   y = x % shard_size if x / shard_size == shard_id else ignore_value
 We take the distributed one-hot representation to show what this layer is
-used for. The distributed one-hot representation is seperated into multiple
+used for. The distributed one-hot representation is separated into multiple
 shards, and each shard is filling zeros except the one with the index
 inside. In order to create these sharded representation in each trainer,
 the original index should be recalculated (i.e. sharded) before.
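The formula above translates directly to Python; this sketch assumes shard_size = index_num // nshards, with index_num divisible by nshards:

    def shard_index(x, index_num, nshards, shard_id, ignore_value=-1):
        shard_size = index_num // nshards
        return x % shard_size if x // shard_size == shard_id else ignore_value

    # index_num=20, nshards=2: shard_index(17, 20, 2, shard_id=1) -> 7
    #                          shard_index(17, 20, 2, shard_id=0) -> -1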
paddle/fluid/operators/shrink_rnn_memory_op.cc
@@ -73,12 +73,12 @@ class ShrinkRNNMemoryOp : public ArrayOp {
 class ShrinkRNNMemoryOpProtoMaker : public framework::OpProtoAndCheckerMaker {
  public:
   void Make() override {
-    AddInput("X", "(LoDTensor) The RNN step memory to be shrinked.");
+    AddInput("X", "(LoDTensor) The RNN step memory to be shrank.");
     AddInput("RankTable", "(LoDRankTable) The lod_rank_table of dynamic RNN.");
     AddInput("I",
              "(LoDTensor) The step index. The RNN step memory 'X' will be "
-             "shrinked to match the size of the input of the index'th step.");
-    AddOutput("Out", "(LoDTensor) The shrinked RNN step memory.");
+             "shrank to match the size of the input of the index'th step.");
+    AddOutput("Out", "(LoDTensor) The shrank RNN step memory.");
     AddComment(R"DOC(
 This operator is used to shrink output batch of memory defined in dynamic RNN.
paddle/fluid/operators/softmax_with_cross_entropy_op.cc
@@ -31,7 +31,7 @@ class SoftmaxWithCrossEntropyOpMaker
              "by softmax.");
     AddInput(
         "Label",
-        "(Tensor) The input tesnor of groud truth label. If :attr:`soft_label` "
+        "(Tensor) The input tensor of groud truth label. If :attr:`soft_label` "
         "is set to false, Label is a Tensor<int64> in same shape with "
         "Input(Logits) except the shape in dimension :attr:`axis` as 1. If "
         "soft_label is set to true, Label is a Tensor<float/double> in same "
@@ -50,7 +50,7 @@ class SoftmaxWithCrossEntropyOpMaker
              "entropy loss.");
     AddAttr<bool>(
         "soft_label",
-        "(bool, default: false), A flag to indicate whether to interpretate "
+        "(bool, default: false), A flag to indicate whether to interpretant "
         "the given labels as soft labels.")
         .SetDefault(false);
     AddAttr<bool>(
paddle/fluid/operators/softmax_with_cross_entropy_op.cu
@@ -100,7 +100,7 @@ where:
 Therefore, the calculation can be separated into 3 steps:
 Step 1: row-wise operation to calculate max_i
 Step 2: row-wise operation to calculate logDiffMaxSum_i
-Step 3: caculate tmp_i_j, and finally get softmax_i_j and cross\_entropy_i
+Step 3: calculate tmp_i_j, and finally get softmax_i_j and cross\_entropy_i
 To save memory, we can share memory among max_i, logDiffMaxSum_i and
 cross\_entropy_i.
 In this way, the 3 steps should be changed to:
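The three row-wise steps enumerated above amount to a numerically stable log-softmax; a NumPy sketch for one row of logits and a hard label:

    import numpy as np

    def softmax_cross_entropy_row(logits, label):
        m = logits.max()                                     # Step 1: max_i
        log_diff_max_sum = np.log(np.exp(logits - m).sum())  # Step 2: logDiffMaxSum_i
        tmp = logits - m - log_diff_max_sum                  # Step 3: tmp_i_j = log softmax
        return np.exp(tmp), -tmp[label]                      # softmax_i_j, cross_entropy_i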
paddle/fluid/operators/spectral_norm_op.cc
@@ -93,7 +93,7 @@ class SpectralNormOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("U",
              "The weight_u tensor of spectral_norm operator, "
              "This can be a 1-D tensor in shape [H, 1],"
-             "H is the 1st dimentions of Weight after reshape"
+             "H is the 1st dimensions of Weight after reshape"
              "corresponding by Attr(dim). As for Attr(dim) = 1"
              "in conv2d layer with weight shape [M, C, K1, K2]"
              "Weight will be reshape to [C, M*K1*K2], U will"
@@ -101,7 +101,7 @@ class SpectralNormOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("V",
              "The weight_v tensor of spectral_norm operator, "
              "This can be a 1-D tensor in shape [W, 1], "
-             "W is the 2nd dimentions of Weight after reshape "
+             "W is the 2nd dimensions of Weight after reshape "
              "corresponding by Attr(dim). As for Attr(dim) = 1 "
             "in conv2d layer with weight shape [M, C, K1, K2] "
             "Weight will be reshape to [C, M*K1*K2], V will "
paddle/fluid/operators/tensorrt/tensorrt_engine_op.h
@@ -276,7 +276,7 @@ class TensorRTEngineOp : public framework::OperatorBase {
         "size(%d).\n"
         "There are two possible causes for this problem:\n"
         "1. Check whether the runtime batch is larger than the max_batch "
-        "setted by EnableTensorrtEngine()\n"
+        "set by EnableTensorrtEngine()\n"
         "2. Check whether the model you are running has multiple trt "
         "subgraphs:\n"
         "\tIf there are multiple trt subgraphs, you need to ensure that "
paddle/fluid/operators/unfold_op.cc
@@ -51,7 +51,7 @@ class UnfoldOpMaker : public framework::OpProtoAndCheckerMaker {
 This Operator is used to extract sliding local blocks from a batched input tensor, also known
 as im2col when operated on batched 2D image tensor. For each block under the convolution filter,
-all element will be rearranged as a column. While the convolution filter silding over the input
+all element will be rearranged as a column. While the convolution filter sliding over the input
 feature map, a series of such columns will be formed.
 )DOC");
   }
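The block-to-column rearrangement described here (im2col) can be sketched for a single 2-D image with unit stride and no padding:

    import numpy as np

    def im2col(img, kh, kw):
        # Each kh x kw block under the filter becomes one column of the output.
        h, w = img.shape
        cols = [img[i:i + kh, j:j + kw].reshape(-1)
                for i in range(h - kh + 1)
                for j in range(w - kw + 1)]
        return np.stack(cols, axis=1)   # shape [kh * kw, number_of_blocks]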
paddle/fluid/operators/uniform_random_op.cc
@@ -177,7 +177,7 @@ class UniformRandomOpMaker : public framework::OpProtoAndCheckerMaker {
              "according to "
              "this given shape. It means that it has a higher priority than "
              "the shape attribute, while the shape attribute still should be "
-             "set correctly to gurantee shape inference in compile time.")
+             "set correctly to guarantee shape inference in compile time.")
         .AsDispensable();
     AddInput("ShapeTensorList",
              "(vector<Tensor<int64_t>> or vector<Tensor<int32_t>>, optional). "
paddle/fluid/operators/unsqueeze_op.cc
@@ -153,7 +153,7 @@ class UnsqueezeOpMaker : public framework::OpProtoAndCheckerMaker {
     PADDLE_ENFORCE_LT(static_cast<int>(axes.size()), 6,
                       "Invalid dimensions, dynamic dimensions should be "
                       "within [1, 6] dimensions (Eigen limit).");
-    // Validity Check: the range of unsqueeze aixs.
+    // Validity Check: the range of unsqueeze axis.
     for (int axis : axes) {
       PADDLE_ENFORCE_LT(axis, 6,
                         "Invalid dimensions, input axis should be"
paddle/fluid/operators/warpctc_op.cc
@@ -123,10 +123,10 @@ An operator integrating the open-source
 https://arxiv.org/pdf/1512.02595v1.pdf),
 to compute Connectionist Temporal Classification (CTC) loss.
 It can be aliased as softmax with ctc, since a native softmax activation is
-interated to the warp-ctc library, to to normlize values for each row of the
+interated to the warp-ctc library, to to normalize values for each row of the
 input tensor.
-More detail of CTC loss can be found by refering to
+More detail of CTC loss can be found by referring to
 [Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with
 Recurrent Neural Networks](
 http://machinelearning.wustl.edu/mlpapers/paper_files/icml2006_GravesFGS06.pdf).
paddle/fluid/platform/device_tracer.cc
@@ -50,7 +50,7 @@ void PrintCuptiHint() {
   static bool showed = false;
   if (showed) return;
   showed = true;
-  LOG(WARNING) << "Invalid timestamp occured. Please try increasing the "
+  LOG(WARNING) << "Invalid timestamp occurred. Please try increasing the "
                   "FLAGS_multiple_of_cupti_buffer_size.";
 }
paddle/fluid/pybind/imperative.cc
@@ -226,7 +226,7 @@ void BindImperative(py::module *m_ptr) {
     BackwardStrategy is a descriptor of how to run the backward process.
     **Note**:
-        **This API is only avaliable in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **Mode**
+        **This API is only available in** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **Mode**
     Attribute:
         **sort_sum_gradient**:
@@ -339,7 +339,7 @@ void BindImperative(py::module *m_ptr) {
           },
           R"DOC(
         **Notes**:
-            **This API is ONLY avaliable in Dygraph mode**
+            **This API is ONLY available in Dygraph mode**
        Returns a numpy array shows the value of current :ref:`api_guide_Variable_en`
@@ -375,7 +375,7 @@ void BindImperative(py::module *m_ptr) {
           },
           py::return_value_policy::copy, R"DOC(
        **Notes**:
-            **This API is ONLY avaliable in Dygraph mode**
+            **This API is ONLY available in Dygraph mode**
        Returns a new Variable, detached from the current graph.
@@ -402,7 +402,7 @@ void BindImperative(py::module *m_ptr) {
      .def("clear_gradient", &imperative::VarBase::ClearGradient, R"DOC(
        **Notes**:
-            **1. This API is ONLY avaliable in Dygraph mode**
+            **1. This API is ONLY available in Dygraph mode**
        **2. Use it only Variable has gradient, normally we use this for Parameters since other temporal Variable will be deleted by Python's GC**
python/paddle/dataset/movielens.py
@@ -224,7 +224,7 @@ def max_job_id():
 def movie_categories():
     """
-    Get movie categoriges dictionary.
+    Get movie categories dictionary.
     """
     __initialize_meta_info__()
     return CATEGORIES_DICT
python/paddle/dataset/mq2007.py
@@ -150,7 +150,7 @@ def gen_plain_txt(querylist):
     gen plain text in list for other usage
     Paramters:
     --------
-    querylist : querylist, one query match many docment pairs in list, see QueryList
+    querylist : querylist, one query match many document pairs in list, see QueryList
     return :
     ------
@@ -171,7 +171,7 @@ def gen_point(querylist):
     gen item in list for point-wise learning to rank algorithm
     Paramters:
     --------
-    querylist : querylist, one query match many docment pairs in list, see QueryList
+    querylist : querylist, one query match many document pairs in list, see QueryList
     return :
     ------
@@ -190,9 +190,9 @@ def gen_pair(querylist, partial_order="full"):
     gen pair for pair-wise learning to rank algorithm
     Paramters:
     --------
-    querylist : querylist, one query match many docment pairs in list, see QueryList
+    querylist : querylist, one query match many document pairs in list, see QueryList
     pairtial_order : "full" or "neighbour"
-        there is redudant in all possiable pair combinations, which can be simplifed
+        there is redundant in all possible pair combinations, which can be simplified
         gen pairs for neighbour items or the full partial order pairs
     return :
@@ -233,7 +233,7 @@ def gen_list(querylist):
     gen item in list for list-wise learning to rank algorithm
     Paramters:
     --------
-    querylist : querylist, one query match many docment pairs in list, see QueryList
+    querylist : querylist, one query match many document pairs in list, see QueryList
     return :
     ------
@@ -268,7 +268,7 @@ def query_filter(querylists):
 def load_from_text(filepath, shuffle=False, fill_missing=-1):
     """
-    parse data file into querys
+    parse data file into queries
     """
     prev_query_id = -1
     querylists = []
python/paddle/distributed/launch.py
@@ -13,18 +13,18 @@
 # limitations under the License.
 """
 paddle.distributed.launch is a module that spawns multiple distributed
-process on each trainning node for gpu trainning.
+process on each training node for gpu training.
 Usage:
         In both of single node training or multiple node training, this module
 launch a process on each of the given gpu card.
-        1. for single node trainning with all visible gpu cards:
+        1. for single node training with all visible gpu cards:
            python -m paddle.distributed.launch \
              your_training_py (arg1 arg2 and all others)
-        2. for single node trainning with [0,4) cards
+        2. for single node training with [0,4) cards
           python -m paddle.distributed.launch --selected_gpus="0,1,2,3" \
             your_training_py (arg1 arg2 and all others)
-        3. for mulitple node training such as two node:192.168.0.16, 192.168.0.17
+        3. for multiple node training such as two node:192.168.0.16, 192.168.0.17
           on 192.168.0.16:
             python -m paddle.distributed.launch --cluster_node_ips="192.168.0.16,192.168.0.17" \
               --node_ip=192.168.0.16 \
@@ -114,14 +114,14 @@ POD_IP (current node ip address, not needed for local training)
         "--selected_gpus",
         type=str,
         default=None,
-        help="It's for gpu trainning and the trainning process will run on the selected_gpus,"
-        "each process is bound to a single GPU. And if it's not setted, this module will use all the gpu cards for training."
+        help="It's for gpu training and the training process will run on the selected_gpus,"
+        "each process is bound to a single GPU. And if it's not set, this module will use all the gpu cards for training."
     )
     parser.add_argument(
         "--log_dir",
         type=str,
-        help="The path for each process's log.If it's not setted, the log will printed to default pipe."
+        help="The path for each process's log.If it's not set, the log will printed to default pipe."
    )
    #positional
python/paddle/distributed/launch_ps.py
@@ -61,7 +61,7 @@ def parse_args():
         "--log_dir",
         default="logs",
         type=str,
-        help="The path for each process's log.If it's not setted, the log will printed to default pipe."
+        help="The path for each process's log.If it's not set, the log will printed to default pipe."
     )
     # positional
python/paddle/fluid/backward.py
@@ -832,7 +832,7 @@ def _append_backward_ops_(block,
         target_block(Block): the block which is going to hold new generated grad ops
         no_grad_dict(dict):
             key(int) block index
-            val(set) a set of varibale names. These varibales have no gradient
+            val(set) a set of variable names. These variables have no gradient
         grad_to_var(dict)(output argument):
             key(str): grad variable name
             val(str): corresponding forward variable name
python/paddle/fluid/contrib/layers/nn.py
@@ -116,7 +116,7 @@ def var_conv_2d(input,
     """
     The var_conv_2d layer calculates the output base on the :attr:`input` with variable length,
     row, col, input channel, filter size and strides. Both :attr:`input`, :attr:`row`,
-    and :attr:`col` are 1-level LodTensor. The covolution operation is same as conv2d layer with
+    and :attr:`col` are 1-level LodTensor. The convolution operation is same as conv2d layer with
     padding. Besides, input.dims[1] should be 1.
     .. code-block:: text
@@ -133,9 +133,9 @@ def var_conv_2d(input,
             output.dims = [174, 1]     # where 174 = 90 + 84
     Args:
-        input (Variable): The input shoud be 1-level LodTensor with dims[1] equals 1.
-        row (Variable): The row shoud be 1-level LodTensor to provide height information.
-        col (Variable): The col shoud be 1-level LodTensor to provide width information.
+        input (Variable): The input should be 1-level LodTensor with dims[1] equals 1.
+        row (Variable): The row should be 1-level LodTensor to provide height information.
+        col (Variable): The col should be 1-level LodTensor to provide width information.
         input_channel (int): The number of input channel.
         output_channel (int): The number of output channel.
         filter_size (int|tuple|None): The filter size. If filter_size is a tuple,
@@ -325,9 +325,9 @@ def sequence_topk_avg_pooling(input, row, col, topks, channel_num):
     Args:
         input (Variable): The input should be 2D LodTensor with dims[1] equals 1.
-        row (Variable): The row shoud be 1-level LodTensor to provide the height information
+        row (Variable): The row should be 1-level LodTensor to provide the height information
                         of the input tensor data.
-        col (Variable): The col shoud be 1-level LodTensor to provide the width information
+        col (Variable): The col should be 1-level LodTensor to provide the width information
                         of the input tensor data.
         topks (list): A list of incremental value to average the topk feature.
         channel_num (int): The number of input channel.
@@ -555,7 +555,7 @@ def multiclass_nms2(bboxes,
                                  low confidence score. If not provided,
                                  consider all boxes.
         nms_top_k (int): Maximum number of detections to be kept according to
-                         the confidences aftern the filtering detections based
+                         the confidences after the filtering detections based
                          on score_threshold.
         nms_threshold (float): The threshold to be used in NMS. Default: 0.3
         nms_eta (float): The threshold to be used in NMS. Default: 1.0
python/paddle/fluid/contrib/layers/rnn_impl.py
@@ -181,7 +181,7 @@ def basic_gru(input,
         sequence_length (Variabe|None): A Tensor (shape [batch_size]) stores each real length of each instance,
                         This tensor will be convert to a mask to mask the padding ids
                         If it's None means NO padding ids
-        dropout_prob(float|0.0): Dropout prob, dropout ONLY works after rnn output of earch layers,
+        dropout_prob(float|0.0): Dropout prob, dropout ONLY works after rnn output of each layers,
                                  NOT between time steps
         bidirectional (bool|False): If it is bidirectional
         batch_first (bool|True): The shape format of the input and output tensors. If true,
@@ -411,7 +411,7 @@ def basic_lstm(input,
         sequence_length (Variabe|None): A tensor (shape [batch_size]) stores each real length of each instance,
                         This tensor will be convert to a mask to mask the padding ids
                         If it's None means NO padding ids
-        dropout_prob(float|0.0): Dropout prob, dropout ONLY work after rnn output of earch layers,
+        dropout_prob(float|0.0): Dropout prob, dropout ONLY work after rnn output of each layers,
                                  NOT between time steps
         bidirectional (bool|False): If it is bidirectional
         batch_first (bool|True): The shape format of the input and output tensors. If true,
python/paddle/fluid/contrib/memory_usage_calc.py
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 """
-This module privides a memory usage calculate function for user.
+This module provides a memory usage calculate function for user.
 The purpose of this API is to allow users to estimate memory usage of
 a program under a special batch size, then user can set appropriate
 batch size to fully utilize a GPU.
@@ -91,8 +91,9 @@ def memory_usage(program, batch_size):
         for x in var.shape:
             if x < 0:
                 if neg_dim_count >= 1:
-                    raise ValueError("Var %s has more than one negtive dim."
-                                     % (var_name))
+                    raise ValueError(
+                        "Var %s has more than one negative dim." % (var_name))
                 neg_dim_count += 1
                 data_count *= batch_size * (-x)
             else:
python/paddle/fluid/contrib/quantize/quantize_transpiler.py
@@ -147,7 +147,7 @@ class QuantizeTranspiler(object):
         """Rewrites a training input program in place for simulated
         quantization. Insert fake quantization and de-quantization ops into
         program to simulate the error introduced by quantization. And change
-        the graident ops' input by using the faked quantization weights and
+        the gradient ops' input by using the faked quantization weights and
         activation. Since the program is transformed in place, the graph
         connection will change.
python/paddle/fluid/contrib/slim/core/compressor.py
@@ -302,7 +302,7 @@ class Compressor(object):
                              this optimizer is used to minimize the combined loss of student-net and
                              teacher-net while train_optimizer is used to minimize loss of
                              student-net in fine-tune stage.
-            search_space(slim.nas.SearchSpace): The instance that define the searching space. It must inherite
+            search_space(slim.nas.SearchSpace): The instance that define the searching space. It must inherit
                              slim.nas.SearchSpace class and overwrite the abstract methods.
             log_period(int): The period of print log of training.
@@ -551,7 +551,7 @@ class Compressor(object):
     def run(self):
         """
-        Execute compressiong pass.
+        Execute compressing pass.
         """
         context = Context(
             place=self.place,
python/paddle/fluid/contrib/slim/graph/graph_wrapper.py
@@ -63,7 +63,7 @@ class VarWrapper(object):
     def shape(self):
         """
-        Get the shape of the varibale.
+        Get the shape of the variable.
         """
         return self._var.shape
@@ -152,13 +152,13 @@ class OpWrapper(object):
     def inputs(self, name):
         """
-        Get all the varibales by the input name.
+        Get all the variables by the input name.
         """
         return [self._graph.var(var_name) for var_name in self._op.input(name)]

     def outputs(self, name):
         """
-        Get all the varibales by the output name.
+        Get all the variables by the output name.
         """
         return [self._graph.var(var_name) for var_name in self._op.output(name)]
@@ -233,7 +233,7 @@ class GraphWrapper(object):
         """
         Whether the given variable is parameter.
         Args:
-            var(VarWrapper): The given varibale.
+            var(VarWrapper): The given variable.
         """
         return isinstance(var._var, Parameter)
@@ -241,7 +241,7 @@ class GraphWrapper(object):
         """
         Whether the given variable is persistable.
         Args:
-            var(VarWrapper): The given varibale.
+            var(VarWrapper): The given variable.
         """
         return var._var.persistable
@@ -397,7 +397,7 @@ class GraphWrapper(object):
         """
         Get a new graph for training by appending some backward operators and optimization operators.
         Args:
-            optimizer: The optimzier used to generate training graph.
+            optimizer: The optimizer used to generate training graph.
             place: The place to run the graph.
             scope: The scope used to run the graph. Some new variable will be added into this scope.
             no_grad_var_names(list<str>): Names of variables that should be ignored while computing gradients. default: [].
python/paddle/fluid/contrib/slim/nas/controller_server.py
@@ -27,7 +27,7 @@ _logger = get_logger(
 class ControllerServer(object):
     """
-    The controller wrapper with a socket server to handle the request of search agentt.
+    The controller wrapper with a socket server to handle the request of search agent.
     """

     def __init__(self,
python/paddle/fluid/contrib/slim/prune/auto_prune_strategy.py
@@ -53,7 +53,7 @@ class AutoPruneStrategy(PruneStrategy):
             metric_name(str): The metric used to evaluate the model.
                              It should be one of keys in out_nodes of graph wrapper. Default: 'top1_acc'
             pruned_params(str): The pattern str to match the parameter names to be pruned. Default: 'conv.*_weights'
-            retrain_epoch(int): The training epochs in each seaching step. Default: 0
+            retrain_epoch(int): The training epochs in each searching step. Default: 0
             uniform_range(int): The token range in each position of tokens generated by controller. None means getting the range automatically. Default: None.
             init_tokens(list<int>): The initial tokens. None means getting the initial tokens automatically. Default: None.
         """
python/paddle/fluid/contrib/slim/prune/prune_strategy.py
@@ -741,7 +741,7 @@ class SensitivePruneStrategy(PruneStrategy):
     def _format_sensitivities(self, sensitivities):
         """
-        Print formated sensitivities in debug log level.
+        Print formatted sensitivities in debug log level.
         """
         tb = pt.PrettyTable()
         tb.field_names = ["parameter", "size"] + [
python/paddle/fluid/contrib/slim/prune/pruner.py
浏览文件 @
e8f64889
...
@@ -42,7 +42,7 @@ class StructurePruner(Pruner):
...
@@ -42,7 +42,7 @@ class StructurePruner(Pruner):
pruning_axis(dict): The key is the name of parameter to be pruned,
pruning_axis(dict): The key is the name of parameter to be pruned,
'*' means all the parameters.
'*' means all the parameters.
The value is the axis to be used. Given a parameter
The value is the axis to be used. Given a parameter
with shape [3, 4], the result of pruning 50% on a
ix
s 1
with shape [3, 4], the result of pruning 50% on a
xi
s 1
is a parameter with shape [3, 2].
is a parameter with shape [3, 2].
criterions(dict): The key is the name of parameter to be pruned,
criterions(dict): The key is the name of parameter to be pruned,
'*' means all the parameters.
'*' means all the parameters.
...
...
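The shape arithmetic in the `pruning_axis` docstring is easy to verify with numpy. A sketch that prunes 50% of axis 1 by keeping the columns with the largest L1 norm (the selection criterion here is an assumption, purely for illustration):

.. code-block:: python

    import numpy as np

    param = np.random.rand(3, 4)
    ratio, axis = 0.5, 1
    keep = int(param.shape[axis] * (1 - ratio))   # keep 2 of 4 columns
    scores = np.abs(param).sum(axis=0)            # L1 norm of each column
    kept_idx = sorted(np.argsort(-scores)[:keep])
    pruned = np.take(param, kept_idx, axis=axis)
    print(pruned.shape)                           # (3, 2)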
python/paddle/fluid/contrib/slim/quantization/quantization_pass.py
View file @ e8f64889
...
@@ -666,10 +666,10 @@ class QuantizationFreezePass(object):
                 quantizable_op_type=['conv2d', 'depthwise_conv2d', 'mul']):
        """
        The freeze pass is used to adjust the quantize operator order, for example:
-            1) `activation -> quant -> dequant -> conv2d` will be freezed into
+            1) `activation -> quant -> dequant -> conv2d` will be frozen into
               `activation -> quant -> conv2d -> dequant`
-            2) `weight -> quant -> dequant -> conv2d` will be freezed into `weight -> conv2d`,
-               and weight will be sacled offline.
+            2) `weight -> quant -> dequant -> conv2d` will be frozen into `weight -> conv2d`,
+               and weight will be scaled offline.
        Args:
            scope(fluid.Scope): scope is used to get the weight tensor values.
...
@@ -994,8 +994,8 @@ class ConvertToInt8Pass(object):
    def apply(self, graph):
        """
-        Convert weights' tpye of the graph. After that, the data type of the
-        graph weigths is int8_t.
+        Convert weights' type of the graph. After that, the data type of the
+        graph weights is int8_t.
        Args:
            graph(IrGraph): the applied graph.
...
@@ -1065,7 +1065,7 @@ class ConvertToInt8Pass(object):
class TransformForMobilePass(object):
    def __init__(self):
        """
-        This pass is used to convert the freezed graph for paddle-mobile execution.
+        This pass is used to convert the frozen graph for paddle-mobile execution.
        """
        self._fake_quant_op_names = _fake_quant_op_list
        self._fake_dequant_op_names = _fake_dequant_op_list
...
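These passes are applied to an `IrGraph` one after another. A hedged usage sketch; the constructor arguments follow the docstrings above, but the exact signatures are an assumption, and the graph here is assumed to come from a program already instrumented by the quantization transform passes:

.. code-block:: python

    import paddle.fluid as fluid
    from paddle.fluid import core
    from paddle.fluid.framework import IrGraph
    from paddle.fluid.contrib.slim.quantization import (
        QuantizationFreezePass, ConvertToInt8Pass, TransformForMobilePass)

    place = fluid.CPUPlace()
    scope = fluid.global_scope()
    main = fluid.default_main_program()
    graph = IrGraph(core.Graph(main.desc), for_test=False)
    QuantizationFreezePass(scope=scope, place=place).apply(graph)
    ConvertToInt8Pass(scope=scope, place=place).apply(graph)
    TransformForMobilePass().apply(graph)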
python/paddle/fluid/contrib/trainer.py
View file @ e8f64889
...
@@ -673,11 +673,11 @@ def save_checkpoint(executor,
    main_program and then saves these variables to the `checkpoint_dir`
    directory.
-    In the training precess, we generally save a checkpoint in each
+    In the training process, we generally save a checkpoint in each
    iteration. So there might be a lot of checkpoints in the
    `checkpoint_dir`. To avoid them taking too much disk space, the
    `max_num_checkpoints` are introduced to limit the total number of
-    checkpoints. If the number of existing checkpints is greater than
+    checkpoints. If the number of existing checkpoints is greater than
    the `max_num_checkpoints`, oldest ones will be scroll deleted.
    A variable is a checkpoint variable and will be saved if it meets
...
@@ -689,7 +689,7 @@ def save_checkpoint(executor,
    Args:
        executor(Executor): The executor to run for save checkpoint.
        checkpoint_dir(str): The folder where to save checkpoints.
-        trainer_id(int): currect trainer id, if id is equal to 0, the trainer
+        trainer_id(int): current trainer id, if id is equal to 0, the trainer
            is chief.
        trainer_args(dict|None): Current training arguments. Such as 'epoch_id'
            and 'step_id'.
...
@@ -772,7 +772,7 @@ def load_checkpoint(executor,
    main_program and then try to load these variables from the
    `checkpoint_dir` directory.
-    In the training precess, we generally save a checkpoint in each
+    In the training process, we generally save a checkpoint in each
    iteration. So there are more than one checkpoint in the
    `checkpoint_dir` (each checkpoint has its own sub folder), use
    `serial` to specify which serial of checkpoint you would like to
...
@@ -867,7 +867,7 @@ def _load_persist_vars_without_grad(executor,
                                    has_model_dir=False):
    """
    This function filters out all checkpoint variables from the give
-    program and then trys to load these variables from the given directory.
+    program and then tries to load these variables from the given directory.
    A variable is a checkpoint variable if it meets all following
    conditions:
...
@@ -898,7 +898,7 @@ def _load_persist_vars_without_grad(executor,
        # In this example, `_load_persist_vars_without_grad` function
        # will first filters out all checkpoint variables in the default
-        # main program, and then trys to load these variables form the
+        # main program, and then tries to load these variables form the
        # folder "./my_paddle_model/__model__".
    """
...
@@ -1135,12 +1135,12 @@ def _is_checkpoint_var(var):
def _make_chekcpoint_dirs(dirs):
    """
-    _make_chekcpoint_dirs will makdir local directory directly, when the directory is exist, it will igore it.
+    _make_chekcpoint_dirs will makedir local directory directly, when the directory is exist, it will ignore it.
    """
    assert dirs is not None
    if os.path.isfile(dirs):
-        raise OSError(errno.ENOTDIR, "dirs path shoule be a Directory.", dirs)
+        raise OSError(errno.ENOTDIR, "dirs path should be a Directory.", dirs)
    if not os.path.isdir(dirs):
        try:
...
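`_make_chekcpoint_dirs` is the usual "create the directory unless it already exists" idiom. A self-contained sketch of the same logic (the helper name here is generic):

.. code-block:: python

    import errno
    import os

    def make_dirs(dirs):
        # Refuse a path that points at a regular file.
        if os.path.isfile(dirs):
            raise OSError(errno.ENOTDIR, "dirs path should be a Directory.", dirs)
        if not os.path.isdir(dirs):
            try:
                os.makedirs(dirs)
            except OSError as e:
                # Another process may have created it in the meantime.
                if e.errno != errno.EEXIST:
                    raise

    make_dirs('./checkpoints/step_0')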
python/paddle/fluid/contrib/utils/hdfs_utils.py
View file @ e8f64889
...
@@ -312,9 +312,9 @@ class HDFSClient(object):
    @staticmethod
    def make_local_dirs(local_path):
        """
-        create a directiory local, is same to mkdir
+        create a directory local, is same to mkdir
        Args:
-            local_path: local path that wants to create a directiory.
+            local_path: local path that wants to create a directory.
        """
        try:
            os.makedirs(local_path)
...
python/paddle/fluid/contrib/utils/lookup_table_utils.py
View file @ e8f64889
...
@@ -137,7 +137,7 @@ def load_persistables_for_increment(dirname, executor, program,
                                    lookup_table_var, lookup_table_var_path):
    """
    WARNING: this function will only be used for distributed training with distributed lookup table.
-    for increment trainning, the pserver will not only load dense variables,
+    for increment training, the pserver will not only load dense variables,
    but also load the suitable lookup table var. Because of sliced lookup table
    var with HASH, we must load the correct sliced var.
...
@@ -417,7 +417,7 @@ def get_inference_model(main_program, feeded_var_names, target_vars):
    Args:
        main_program(Program|None): The original program, which will be pruned to
-                                    build the inference model. If is setted None,
+                                    build the inference model. If is set None,
                                    the default main program will be used.
                                    Default: None.
        feeded_var_names(list[str]): Names of variables that need to be feeded data
...
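`get_inference_model` mirrors the stock `fluid.io.save_inference_model` flow, which prunes a program down to the given feed/fetch interface. A runnable sketch of that standard API:

.. code-block:: python

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    x = fluid.data(name='x', shape=[None, 13], dtype='float32')
    pred = fluid.layers.fc(input=x, size=1)
    exe.run(fluid.default_startup_program())
    # Prune the default main program to the inference interface and save it.
    fluid.io.save_inference_model(dirname='./infer_model',
                                  feeded_var_names=['x'],
                                  target_vars=[pred],
                                  executor=exe)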
python/paddle/fluid/data.py
View file @ e8f64889
...
@@ -54,7 +54,7 @@ def data(name, shape, dtype='float32', lod_level=0):
        for more details.
    shape (list|tuple): List|Tuple of integers declaring the shape. You can
        set "None" at a dimension to indicate the dimension can be of any
-        size. For example, it is useful to set changable batch size as "None"
+        size. For example, it is useful to set changeable batch size as "None"
    dtype (np.dtype|VarType|str, optional): The type of the data. Supported
        dtype: bool, float16, float32, float64, int8, int16, int32, int64,
        uint8. Default: float32
...
@@ -75,7 +75,7 @@ def data(name, shape, dtype='float32', lod_level=0):
    # User can only feed data of the same shape to x
    x = fluid.data(name='x', shape=[3, 2, 1], dtype='float32')
-    # Creates a variable with changable batch size.
+    # Creates a variable with changeable batch size.
    # Users can feed data of any batch size into y,
    # but size of each data sample has to be [2, 1]
    y = fluid.data(name='y', shape=[None, 2, 1], dtype='float32')
...
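To see the `None` dimension in action, feed two different batch sizes into `y`. A runnable sketch:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    y = fluid.data(name='y', shape=[None, 2, 1], dtype='float32')
    out = fluid.layers.reduce_sum(y)
    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())
    # Batches of 4 and 7 are both accepted because the first
    # dimension was declared as None.
    for bs in (4, 7):
        res, = exe.run(feed={'y': np.ones((bs, 2, 1), dtype='float32')},
                       fetch_list=[out])
        print(bs, res)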
python/paddle/fluid/data_feed_desc.py
View file @ e8f64889
...
@@ -53,7 +53,7 @@ class DataFeedDesc(object):
        data_feed = fluid.DataFeedDesc('data.proto')
    However, users usually shouldn't care about the message format; instead,
-    they are encouragd to use :code:`Data Generator` as a tool to generate a
+    they are encouraged to use :code:`Data Generator` as a tool to generate a
    valid data description, in the process of converting their raw log files to
    training files acceptable to AsyncExecutor.
...
python/paddle/fluid/data_feeder.py
View file @ e8f64889
...
@@ -334,10 +334,10 @@ class DataFeeder(object):
        """
        Similar with feed function, feed_parallel is used with multiple devices (CPU|GPU).
        Here :code:`iterable` is a list of python generators. The data return by each
-        generator in the list will be fed into a seperate device.
+        generator in the list will be fed into a separate device.
        Parameters:
-            iterable (list|tuple): list of user-defined python geneators. The element
+            iterable (list|tuple): list of user-defined python generators. The element
                number should match the :code:`num_places`.
            num_places (int, optional): the number of devices. If not provided (None),
                all available devices on the machine will be used. Default None.
...
@@ -374,7 +374,7 @@ class DataFeeder(object):
            exe.run(fluid.default_startup_program())
            program = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel(places=places)
-            # print sample feed_parallel r resultt
+            # print sample feed_parallel r result
            # for item in list(feeder.feed_parallel([generate_reader(5, 0, 1), generate_reader(3, 10, 2)], 2)):
            #     print(item['x'])
            #     print(item['y'])
...
@@ -428,7 +428,7 @@ class DataFeeder(object):
        Parameters:
            reader(generator): a user defined python generator used to get :code:`mini-batch` of data.
-                A :code:`mini-batch` can be regarded as a python generator that returns batchs of input
+                A :code:`mini-batch` can be regarded as a python generator that returns batches of input
                entities, just like the below :code:`_mini_batch` in the code example.
            multi_devices(bool): indicate whether to use multiple devices or not.
            num_places(int, optional): if :code:`multi_devices` is True, you can specify the number
...
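The single-device counterpart, `DataFeeder.feed`, converts plain Python samples into the tensors a program expects. A minimal sketch:

.. code-block:: python

    import paddle.fluid as fluid

    x = fluid.data(name='x', shape=[None, 2], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')
    feeder = fluid.DataFeeder(feed_list=[x, y], place=fluid.CPUPlace())
    # One mini-batch of two samples; feed() returns a dict that maps
    # variable names to LoDTensors, ready for Executor.run(feed=...).
    batch = [([0.1, 0.2], [1.0]), ([0.3, 0.4], [0.0])]
    print(feeder.feed(batch))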
python/paddle/fluid/dataset.py
View file @ e8f64889
...
@@ -100,7 +100,7 @@ class DatasetBase(object):
        Args:
            record_candidate_size(int): size of instances candidate to shuffle
                one slot
-            fea_eval(bool): wheather enable fea eval mode to enable slots shuffle.
+            fea_eval(bool): whether enable fea eval mode to enable slots shuffle.
                default is True.
        Examples:
...
@@ -822,7 +822,7 @@ class BoxPSDataset(InMemoryDataset):
    def wait_preload_done(self):
        """
-        Wait async proload done
+        Wait async preload done
        Wait Until Feed Pass Done
        Examples:
            .. code-block:: python
...
python/paddle/fluid/debugger.py
View file @ e8f64889
...
@@ -338,7 +338,7 @@ def run_fast_nan_inf_debug(executor,
                           use_program_cache=False,
                           dump_core=True):
    """
-    Run a program by the given executor. Catch the exception of NAN and INF, and save persistbales into the dumped core.
+    Run a program by the given executor. Catch the exception of NAN and INF, and save persistables into the dumped core.
    """
    assert (executor is not None)
...
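The core of such a NaN/INF debugger is simply inspecting fetched outputs after each run. A hedged, framework-agnostic sketch with numpy:

.. code-block:: python

    import numpy as np

    def check_nan_inf(name, array):
        # Raise as soon as a fetched tensor contains NaN or INF so the
        # failing iteration can be dumped and inspected.
        arr = np.asarray(array)
        if np.isnan(arr).any() or np.isinf(arr).any():
            raise FloatingPointError("NaN/INF detected in '%s'" % name)

    check_nan_inf('loss', [0.5, 1.2])           # passes silently
    # check_nan_inf('loss', [float('nan')])     # would raise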
python/paddle/fluid/distributed/downpour.py
View file @ e8f64889
...
@@ -59,7 +59,7 @@ class DownpourSGD(object):
    """
    DownpounSGD is a distributed optimizer so
    that user can call minimize to generate backward
-    operators and optimization operators within minmize function
+    operators and optimization operators within minimize function
    Args:
        loss(Variable): loss variable defined by user
        startup_program(Program): startup program that defined by user
...
python/paddle/fluid/distributed/ps_instance.py
View file @ e8f64889
...
@@ -110,7 +110,7 @@ class PaddlePSInstance(object):
    def gather_ips(self):
        """
-        Return all servers and workers ip throught mpi allgather
+        Return all servers and workers ip through mpi allgather
        """
        self._ips = self.dh.comm.allgather(self._ip)
        return self._ips
...
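`comm.allgather` is the standard MPI collective in which every rank contributes one value and every rank receives the whole list. A sketch with `mpi4py` (run under `mpiexec`):

.. code-block:: python

    # Run with: mpiexec -n 4 python gather_ips.py
    import socket
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    my_ip = socket.gethostbyname(socket.gethostname())
    # Every rank sends its IP and receives the IPs of all ranks, in rank order.
    ips = comm.allgather(my_ip)
    print(comm.Get_rank(), ips)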
python/paddle/fluid/dygraph/learning_rate_scheduler.py
View file @ e8f64889
...
@@ -88,9 +88,9 @@ class PiecewiseDecay(LearningRateDecay):
        boundaries(list): A list of steps numbers. The type of element in the list is python int.
        values(list): A list of learning rate values that will be picked during
            different step boundaries. The type of element in the list is python float.
-        begin(int): The begin step to initilize the global_step in the description above.
+        begin(int): The begin step to initialize the global_step in the description above.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be set as
            'float32', 'float64'. The default value is 'float32'.
...
@@ -158,7 +158,7 @@ class NaturalExpDecay(LearningRateDecay):
            default value is False.
        begin(int, optional): The begin step. The initial value of global_step described above. The default value is 0.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be set as
            'float32', 'float64'. The default value is 'float32'.
...
@@ -238,7 +238,7 @@ class ExponentialDecay(LearningRateDecay):
            default value is False.
        begin(int, optional): The begin step. The initial value of global_step described above. The default value is 0.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be set as
            'float32', 'float64'. The default value is 'float32'.
...
@@ -312,7 +312,7 @@ class InverseTimeDecay(LearningRateDecay):
            default value is False.
        begin(int, optional): The begin step. The initial value of global_step described above. The default value is 0.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be
            'float32', 'float64'. The default value is 'float32'.
...
@@ -393,7 +393,7 @@ class PolynomialDecay(LearningRateDecay):
        cycle(bool, optional): If set true, decay the learning rate every decay_steps. The default value is False.
        begin(int, optional): The begin step. The initial value of global_step described above. The default value is 0.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be set as
            'float32', 'float64'. The default value is 'float32'.
...
@@ -471,7 +471,7 @@ class CosineDecay(LearningRateDecay):
        epochs(int): The number of epochs.
        begin(int, optional): The begin step. The initial value of global_step described above. The default value is 0.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be set as
            'float32', 'float64'. The default value is 'float32'.
...
@@ -528,7 +528,7 @@ class NoamDecay(LearningRateDecay):
        it's a tensor with shape [1] and the data type can be int32 or int64. The type can also be python int.
        begin(int, optional): The begin step. The initial value of global_step described above. The default value is 0.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be set as
            'float32', 'float64'. The default value is 'float32'.
...
@@ -592,7 +592,7 @@ class LinearLrWarmup(LearningRateDecay):
        end_lr (float): Final learning rate of warm up.
        begin(int, optional): The begin step. The initial value of global_step described above. The default value is 0.
        step(int, optional): The step size used to calculate the new global_step in the description above.
-            The defalult value is 1.
+            The default value is 1.
        dtype(str, optional): The data type used to create the learning rate variable. The data type can be set as
            'float32', 'float64'. The default value is 'float32'.
...
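Each of these schedulers is handed to an optimizer as its `learning_rate` in dygraph mode. A hedged sketch with `PiecewiseDecay` (the layer and data are placeholders):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.Linear(10, 1)
        # lr is 1.0 for steps [0, 10000), 0.5 for [10000, 20000), then 0.1.
        sgd = fluid.optimizer.SGD(
            learning_rate=fluid.dygraph.PiecewiseDecay(
                [10000, 20000], [1.0, 0.5, 0.1], begin=0),
            parameter_list=linear.parameters())
        x = fluid.dygraph.to_variable(np.ones((2, 10), dtype='float32'))
        loss = fluid.layers.reduce_mean(linear(x))
        loss.backward()
        sgd.minimize(loss)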
python/paddle/fluid/dygraph/nn.py
View file @ e8f64889
...
@@ -49,7 +49,7 @@ class Conv2D(layers.Layer):
    C will equal the number of input feature map divided by the groups.
    Please refer to UFLDL's `convolution
    <http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/>`_
-    for more detials.
+    for more details.
    If bias attribution and activation type are provided, bias is added to the
    output of the convolution, and the corresponding activation function is
    applied to the final result.
...
@@ -1003,7 +1003,7 @@ class BatchNorm(layers.Layer):
    Parameters:
        num_channels(int): Indicate the number of channels of the input ``Tensor``.
-        act(str, optional): Activation to be applied to the output of batch normalizaiton. Default: None.
+        act(str, optional): Activation to be applied to the output of batch normalization. Default: None.
        is_test (bool, optional): A flag indicating whether it is in test phrase or not. Default: False.
        momentum(float, optional): The value used for the moving_mean and moving_var computation. Default: 0.9.
        epsilon(float, optional): The small value added to the variance to prevent division by zero. Default: 1e-5.
...
@@ -1242,7 +1242,7 @@ class Embedding(layers.Layer):
        default weight parameter property is used. See usage for details in :ref:`api_fluid_ParamAttr` . In addition,
        user-defined or pre-trained word vectors can be loaded with the :attr:`param_attr` parameter.
        The local word vector needs to be transformed into numpy format, and the shape of local word
-        vector shoud be consistent with :attr:`size` . Then :ref:`api_fluid_initializer_NumpyArrayInitializer`
+        vector should be consistent with :attr:`size` . Then :ref:`api_fluid_initializer_NumpyArrayInitializer`
        is used to load custom or pre-trained word vectors. See code example 2 for details.
        dtype(np.dtype|core.VarDesc.VarType|str): It refers to the data type of output Tensor.
        It must be "float32" or "float64". Default: "float32".
...
@@ -1382,7 +1382,7 @@ class LayerNorm(layers.Layer):
        omitted. If :attr:`shift` is True and :attr:`param_attr` is None,
        a default :code:`ParamAttr` would be added as bias. The
        :attr:`bias_attr` is initialized as 0 if it is added. Default: None.
-        act(str, optional): Activation to be applied to the output of layer normalizaiton.
+        act(str, optional): Activation to be applied to the output of layer normalization.
        Default: None.
        dtype (str, optional): Data type, it can be "float32" or "float64". Default: "float32".
...
@@ -1435,7 +1435,7 @@ class LayerNorm(layers.Layer):
                default_initializer=Constant(1.0))
        else:
            if self._param_attr:
-                logging.warn("param_attr are only avaliable with scale is True")
+                logging.warn("param_attr are only available with scale is True")
        if self._shift:
            assert self._bias_attr is not False
...
@@ -1446,7 +1446,7 @@ class LayerNorm(layers.Layer):
                is_bias=True)
        else:
            if self._bias_attr:
-                logging.warn("bias_attr are only avaliable with shift is True")
+                logging.warn("bias_attr are only available with shift is True")
    def forward(self, input):
        input_shape = list(input.shape)
...
@@ -1702,7 +1702,7 @@ class NCE(layers.Layer):
        will create ParamAttr as bias_attr. If the Initializer of the bias_attr
        is not set, the bias is initialized zero. Default: None.
        num_neg_samples (int, optional): The number of negative classes. The default value is 10.
-        sampler (str, optional): The sampler used to sample class from negtive classes.
+        sampler (str, optional): The sampler used to sample class from negative classes.
            It can be 'uniform', 'log_uniform' or 'custom_dist'.
            default: 'uniform'.
        custom_dist (float[], optional): A float[] with size=num_total_classes.
...
@@ -2544,7 +2544,7 @@ class GroupNorm(layers.Layer):
        bias_attr(ParamAttr, optional): The parameter attribute for the learnable
            bias :math:`b`. If it is set to False, no bias will be added to the output units.
            If it is set to None, the bias is initialized zero. Default: None.
-        act(str, optional): Activation to be applied to the output of group normalizaiton. Default: None.
+        act(str, optional): Activation to be applied to the output of group normalization. Default: None.
        data_layout(str, optional): Specify the input data format. Only NCHW is supported. Default: NCHW.
    Returns:
...
@@ -2640,7 +2640,7 @@ class SpectralNorm(layers.Layer):
    and W is the product result of remaining dimensions.
    Step 2:
-    :attr:`power_iters` shoule be a positive interger, do following
+    :attr:`power_iters` should be a positive integer, do following
    calculations with U and V for :attr:`power_iters` rounds.
    .. math::
...
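The Embedding docstring above describes loading pre-trained word vectors through `param_attr` with `NumpyArrayInitializer`. A hedged sketch of that pattern in dygraph mode (the vectors here are random stand-ins):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    # Pretend these are pre-trained vectors: 128 words, 64 dimensions.
    weight = np.random.rand(128, 64).astype('float32')

    with fluid.dygraph.guard():
        emb = fluid.dygraph.Embedding(
            size=[128, 64],
            param_attr=fluid.ParamAttr(
                initializer=fluid.initializer.NumpyArrayInitializer(weight)))
        ids = fluid.dygraph.to_variable(np.array([[3], [7]], dtype='int64'))
        print(emb(ids).shape)  # [2, 1, 64]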
python/paddle/fluid/dygraph/varbase_patch_methods.py
View file @ e8f64889
...
@@ -27,7 +27,7 @@ def monkey_patch_varbase():
    def set_value(self, value):
        """
        **Notes**:
-            **This API is ONLY avaliable in Dygraph mode**
+            **This API is ONLY available in Dygraph mode**
        Set a new value for this Variable.
...
@@ -76,7 +76,7 @@ def monkey_patch_varbase():
    def backward(self, backward_strategy=None):
        """
        **Notes**:
-            **This API is ONLY avaliable in Dygraph mode**
+            **This API is ONLY available in Dygraph mode**
        Run backward of current Graph which starts from current Variable
...
@@ -116,13 +116,13 @@ def monkey_patch_varbase():
            self._run_backward(backward_strategy, framework._dygraph_tracer())
        else:
            raise ValueError(
-                "Variable.backward() is only avaliable in DyGraph mode")
+                "Variable.backward() is only available in DyGraph mode")
    @framework.dygraph_only
    def gradient(self):
        """
        **Notes**:
-            **This API is ONLY avaliable in Dygraph mode**
+            **This API is ONLY available in Dygraph mode**
        Get the Gradient of Current Variable
...
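`backward()` and `gradient()` are used together in dygraph mode: run autograd from a scalar, then read the accumulated gradient back as numpy. A hedged sketch:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(np.ones((2, 2), dtype='float32'))
        x.stop_gradient = False
        loss = fluid.layers.reduce_sum(x * x)
        loss.backward()        # only available in dygraph mode
        print(x.gradient())    # d(sum(x^2))/dx = 2*x -> all entries 2.0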
python/paddle/fluid/dygraph_grad_clip.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/executor.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/framework.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/incubate/data_generator/__init__.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/incubate/fleet/base/role_maker.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/incubate/fleet/collective/__init__.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/incubate/fleet/parameter_server/pslib/__init__.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/incubate/fleet/parameter_server/pslib/optimizer_factory.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/incubate/fleet/utils/fleet_util.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/incubate/fleet/utils/hdfs.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/initializer.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/input.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/install_check.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/io.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/control_flow.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/detection.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/distributions.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/io.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/learning_rate_scheduler.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/loss.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/nn.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/ops.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/rnn.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/sequence_lod.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/layers/tensor.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/log_helper.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/metrics.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/nets.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/optimizer.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/param_attr.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/profiler.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/demo/pipeline_train.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/dist_transformer.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_activation_nn_grad.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_elementwise_nn_grad.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_generate_proposals_op.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_imperative_transformer_sorted_gradient.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_linear_chain_crf_op.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_nce.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_nn_grad.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_reshape_op.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/test_static_save_load.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/tests/unittests/transformer_model.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/transpiler/details/program_utils.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/transpiler/distribute_transpiler.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/transpiler/geo_sgd_transpiler.py
View file @ e8f64889 (diff collapsed)
python/paddle/fluid/transpiler/ps_dispatcher.py
View file @ e8f64889 (diff collapsed)
python/paddle/reader/decorator.py
View file @ e8f64889 (diff collapsed)
python/paddle/utils/image_util.py
View file @ e8f64889 (diff collapsed)
python/paddle/utils/plotcurve.py
View file @ e8f64889 (diff collapsed)
python/paddle/utils/preprocess_img.py
View file @ e8f64889 (diff collapsed)
python/paddle/utils/preprocess_util.py
View file @ e8f64889 (diff collapsed)