BaiXuePrincess / Paddle, a fork of PaddlePaddle / Paddle (in sync with the fork source).
Commit 6e310e2d
Authored on June 19, 2019 by 翟飞跃; committed by Tao Luo on June 20, 2019.
Commit message:
Fix spelling errors (#18213)
Parent commit: 91fc03d2
Showing 30 changed files with 86 additions and 87 deletions (+86, -87). Changed files:
benchmark/fluid/README.md (+1, -1)
paddle/.common_test_util.sh (+1, -1)
paddle/fluid/API.spec (+9, -10)
paddle/fluid/inference/api/demo_ci/run.sh (+1, -1)
paddle/fluid/operators/attention_lstm_op.cc (+1, -1)
paddle/fluid/operators/batch_norm_op.cu (+1, -1)
paddle/fluid/operators/conv_cudnn_op.cu.cc (+5, -5)
paddle/fluid/operators/conv_fusion_op.cu.cc (+1, -1)
paddle/fluid/operators/conv_op.cc (+2, -2)
paddle/fluid/operators/detection/bipartite_match_op.cc (+3, -3)
paddle/fluid/operators/detection/multiclass_nms_op.cc (+2, -2)
paddle/fluid/operators/detection_map_op.cc (+1, -1)
paddle/fluid/operators/fused/fused_embedding_fc_lstm_op.cc (+4, -4)
paddle/fluid/operators/fused/fusion_conv_inception_op.cc (+2, -2)
paddle/fluid/operators/fused/fusion_gru_op.cc (+2, -2)
paddle/fluid/operators/fused/fusion_lstm_op.cc (+4, -4)
paddle/fluid/operators/gaussian_random_batch_size_like_op.cc (+1, -1)
paddle/fluid/operators/gru_op.cc (+1, -1)
paddle/fluid/operators/lstm_op.cc (+5, -5)
paddle/fluid/operators/lstmp_op.cc (+7, -7)
paddle/fluid/operators/pool_op.cc (+2, -2)
paddle/fluid/operators/unpool_op.cc (+1, -1)
python/paddle/fluid/contrib/slim/quantization/quantization_strategy.py (+6, -6)
python/paddle/fluid/contrib/slim/tests/quantization/compress.yaml (+4, -4)
python/paddle/fluid/evaluator.py (+3, -3)
python/paddle/fluid/layers/detection.py (+9, -9)
python/paddle/fluid/layers/nn.py (+2, -2)
python/paddle/fluid/metrics.py (+3, -3)
python/paddle/fluid/tests/unittests/dist_transformer.py (+1, -1)
python/paddle/fluid/tests/unittests/transformer_model.py (+1, -1)
benchmark/fluid/README.md

@@ -59,7 +59,7 @@ python -c 'from recordio_converter import *; prepare_mnist("data", 1)'
 ## Run Distributed Benchmark on Kubernetes Cluster
 You may need to build a Docker image before submitting a cluster job onto Kubernetes, or you will
-have to start all those processes mannually on each node, which is not recommended.
+have to start all those processes manually on each node, which is not recommended.
 To build the Docker image, you need to choose a paddle "whl" package to run with, you may either
 download it from
paddle/.common_test_util.sh

@@ -26,7 +26,7 @@ chmod a+rw $PORT_FILE $PORT_LOCK_FILE 2>/dev/null
 #
 # There are two parameter of this method
 # param 1: the begin of port range
-# param 2: the lenght of port range.
+# param 2: the length of port range.
 # so, the port range is [param1, param1+param2)
 acquire_ports(){
     (
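For context, the comment above documents the contract of acquire_ports: given a starting port and a range length, it hands out ports from the half-open range [param1, param1+param2). The following is a minimal Python illustration of that contract only; it is not part of this commit and is not the shell implementation.

# Hypothetical sketch of the documented port-range contract: scan
# [begin, begin + length) and return the first locally bindable port.
import socket

def acquire_port(begin, length):
    for port in range(begin, begin + length):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port          # this port is currently free
            except OSError:
                continue             # port busy, try the next one
    raise RuntimeError("no free port in [%d, %d)" % (begin, begin + length))

if __name__ == "__main__":
    print(acquire_port(8000, 10))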
paddle/fluid/API.spec

@@ -73,8 +73,8 @@ paddle.fluid.initializer.init_on_cpu (ArgSpec(args=[], varargs=None, keywords=No
 paddle.fluid.initializer.NumpyArrayInitializer.__init__ (ArgSpec(args=['self', 'value'], varargs=None, keywords=None, defaults=None), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
 paddle.fluid.layers.fc (ArgSpec(args=['input', 'size', 'num_flatten_dims', 'param_attr', 'bias_attr', 'act', 'is_test', 'name'], varargs=None, keywords=None, defaults=(1, None, None, None, False, None)), ('document', '424e898365195e3ccbc2e7dc8b63605e'))
 paddle.fluid.layers.embedding (ArgSpec(args=['input', 'size', 'is_sparse', 'is_distributed', 'padding_idx', 'param_attr', 'dtype'], varargs=None, keywords=None, defaults=(False, False, None, None, 'float32')), ('document', '6f9f96d2a1517cd1affebc960c3526f7'))
-paddle.fluid.layers.dynamic_lstm (ArgSpec(args=['input', 'size', 'h_0', 'c_0', 'param_attr', 'bias_attr', 'use_peepholes', 'is_reverse', 'gate_activation', 'cell_activation', 'candidate_activation', 'dtype', 'name'], varargs=None, keywords=None, defaults=(None, None, None, None, True, False, 'sigmoid', 'tanh', 'tanh', 'float32', None)), ('document', '8e35ca26adbe44eb631d71045c8d64d5'))
+paddle.fluid.layers.dynamic_lstm (ArgSpec(args=['input', 'size', 'h_0', 'c_0', 'param_attr', 'bias_attr', 'use_peepholes', 'is_reverse', 'gate_activation', 'cell_activation', 'candidate_activation', 'dtype', 'name'], varargs=None, keywords=None, defaults=(None, None, None, None, True, False, 'sigmoid', 'tanh', 'tanh', 'float32', None)), ('document', '246ff18abc877dd576653006991918e9'))
-paddle.fluid.layers.dynamic_lstmp (ArgSpec(args=['input', 'size', 'proj_size', 'param_attr', 'bias_attr', 'use_peepholes', 'is_reverse', 'gate_activation', 'cell_activation', 'candidate_activation', 'proj_activation', 'dtype', 'name', 'h_0', 'c_0', 'cell_clip', 'proj_clip'], varargs=None, keywords=None, defaults=(None, None, True, False, 'sigmoid', 'tanh', 'tanh', 'tanh', 'float32', None, None, None, None, None)), ('document', 'b4b608b986eb9617aa0525e1be21d32d'))
+paddle.fluid.layers.dynamic_lstmp (ArgSpec(args=['input', 'size', 'proj_size', 'param_attr', 'bias_attr', 'use_peepholes', 'is_reverse', 'gate_activation', 'cell_activation', 'candidate_activation', 'proj_activation', 'dtype', 'name', 'h_0', 'c_0', 'cell_clip', 'proj_clip'], varargs=None, keywords=None, defaults=(None, None, True, False, 'sigmoid', 'tanh', 'tanh', 'tanh', 'float32', None, None, None, None, None)), ('document', '4f63053354bcc6c743b4d2f4e7104e25'))
 paddle.fluid.layers.dynamic_gru (ArgSpec(args=['input', 'size', 'param_attr', 'bias_attr', 'is_reverse', 'gate_activation', 'candidate_activation', 'h_0', 'origin_mode'], varargs=None, keywords=None, defaults=(None, None, False, 'sigmoid', 'tanh', None, False)), ('document', '83617c165827e030636c80486d5de6f3'))
 paddle.fluid.layers.gru_unit (ArgSpec(args=['input', 'hidden', 'size', 'param_attr', 'bias_attr', 'activation', 'gate_activation', 'origin_mode'], varargs=None, keywords=None, defaults=(None, None, 'tanh', 'sigmoid', False)), ('document', '33974b9bfa69f2f1eb85e6f956dff04e'))
 paddle.fluid.layers.linear_chain_crf (ArgSpec(args=['input', 'label', 'param_attr'], varargs=None, keywords=None, defaults=(None,)), ('document', '34f96be41684b0959897a9e735997e20'))

@@ -118,7 +118,7 @@ paddle.fluid.layers.dropout (ArgSpec(args=['x', 'dropout_prob', 'is_test', 'seed
 paddle.fluid.layers.split (ArgSpec(args=['input', 'num_or_sections', 'dim', 'name'], varargs=None, keywords=None, defaults=(-1, None)), ('document', '59b28903ce8fb6a7e3861ff355592eb4'))
 paddle.fluid.layers.ctc_greedy_decoder (ArgSpec(args=['input', 'blank', 'name'], varargs=None, keywords=None, defaults=(None,)), ('document', '2bc3a59efa9d52b628a6255422d9f0e8'))
 paddle.fluid.layers.edit_distance (ArgSpec(args=['input', 'label', 'normalized', 'ignored_tokens'], varargs=None, keywords=None, defaults=(True, None)), ('document', 'f2c252aa2f83f8e503ffaf79668eaa28'))
-paddle.fluid.layers.l2_normalize (ArgSpec(args=['x', 'axis', 'epsilon', 'name'], varargs=None, keywords=None, defaults=(1e-12, None)), ('document', '35c6a241bcc1a1fc89508860d82ad62b'))
+paddle.fluid.layers.l2_normalize (ArgSpec(args=['x', 'axis', 'epsilon', 'name'], varargs=None, keywords=None, defaults=(1e-12, None)), ('document', 'd0484a1f85b40009a794d45a1a298c12'))
 paddle.fluid.layers.matmul (ArgSpec(args=['x', 'y', 'transpose_x', 'transpose_y', 'alpha', 'name'], varargs=None, keywords=None, defaults=(False, False, 1.0, None)), ('document', 'aa27ca4405e70c6a733cb9806a76af30'))
 paddle.fluid.layers.topk (ArgSpec(args=['input', 'k', 'name'], varargs=None, keywords=None, defaults=(None,)), ('document', '2a1e9ea041ff4d6a9948bb8d03b743ea'))
 paddle.fluid.layers.warpctc (ArgSpec(args=['input', 'label', 'blank', 'norm_by_times', 'use_cudnn'], varargs=None, keywords=None, defaults=(0, False, False)), ('document', '4aa9df890b47eb67d5442f04aaf9eeec'))

@@ -195,7 +195,7 @@ paddle.fluid.layers.elementwise_floordiv (ArgSpec(args=['x', 'y', 'axis', 'act',
 paddle.fluid.layers.uniform_random_batch_size_like (ArgSpec(args=['input', 'shape', 'dtype', 'input_dim_idx', 'output_dim_idx', 'min', 'max', 'seed'], varargs=None, keywords=None, defaults=('float32', 0, 0, -1.0, 1.0, 0)), ('document', 'c8c7518358cfbb3822a019e6b5fbea52'))
 paddle.fluid.layers.gaussian_random (ArgSpec(args=['shape', 'mean', 'std', 'seed', 'dtype'], varargs=None, keywords=None, defaults=(0.0, 1.0, 0, 'float32')), ('document', '8c78ccb77e291e4a0f0673d34823ce4b'))
 paddle.fluid.layers.sampling_id (ArgSpec(args=['x', 'min', 'max', 'seed', 'dtype'], varargs=None, keywords=None, defaults=(0.0, 1.0, 0, 'float32')), ('document', '35428949368cad5121dd37f8522ef8b0'))
-paddle.fluid.layers.gaussian_random_batch_size_like (ArgSpec(args=['input', 'shape', 'input_dim_idx', 'output_dim_idx', 'mean', 'std', 'seed', 'dtype'], varargs=None, keywords=None, defaults=(0, 0, 0.0, 1.0, 0, 'float32')), ('document', '9e520987168f8ddb7dd71ffd68aa352c'))
+paddle.fluid.layers.gaussian_random_batch_size_like (ArgSpec(args=['input', 'shape', 'input_dim_idx', 'output_dim_idx', 'mean', 'std', 'seed', 'dtype'], varargs=None, keywords=None, defaults=(0, 0, 0.0, 1.0, 0, 'float32')), ('document', '7536418f4cf0360a1a897c265f06e77e'))
 paddle.fluid.layers.sum (ArgSpec(args=['x'], varargs=None, keywords=None, defaults=None), ('document', '4527fd90e222f67b5f7451fb0cf7c845'))
 paddle.fluid.layers.slice (ArgSpec(args=['input', 'axes', 'starts', 'ends'], varargs=None, keywords=None, defaults=None), ('document', '3ca6a761570d86e303e473afba99bb49'))
 paddle.fluid.layers.shape (ArgSpec(args=['input'], varargs=None, keywords=None, defaults=None), ('document', 'bf61c8f79d795a8371bdb3b5468aa82b'))

@@ -339,18 +339,17 @@ paddle.fluid.layers.uniform_random (ArgSpec(args=['shape', 'dtype', 'min', 'max'
 paddle.fluid.layers.hard_shrink (ArgSpec(args=['x', 'threshold'], varargs=None, keywords=None, defaults=(None,)), ('document', 'c142f5884f3255e0d6075c286bbd531e'))
 paddle.fluid.layers.cumsum (ArgSpec(args=['x', 'axis', 'exclusive', 'reverse'], varargs=None, keywords=None, defaults=(None, None, None)), ('document', '944d7c03057f5fc88bc78acd4d82f926'))
 paddle.fluid.layers.thresholded_relu (ArgSpec(args=['x', 'threshold'], varargs=None, keywords=None, defaults=(None,)), ('document', '90566ea449ea4c681435546e2f70610a'))
-paddle.fluid.layers.prior_box (ArgSpec(args=['input', 'image', 'min_sizes', 'max_sizes', 'aspect_ratios', 'variance', 'flip', 'clip', 'steps', 'offset', 'name', 'min_max_aspect_ratios_order'], varargs=None, keywords=None, defaults=(None, [1.0], [0.1, 0.1, 0.2, 0.2], False, False, [0.0, 0.0], 0.5, None, False)), ('document', 'a00d43a08ec664454e8e685bc54e9e78'))
+paddle.fluid.layers.prior_box (ArgSpec(args=['input', 'image', 'min_sizes', 'max_sizes', 'aspect_ratios', 'variance', 'flip', 'clip', 'steps', 'offset', 'name', 'min_max_aspect_ratios_order'], varargs=None, keywords=None, defaults=(None, [1.0], [0.1, 0.1, 0.2, 0.2], False, False, [0.0, 0.0], 0.5, None, False)), ('document', 'b351a05b758f7e5370898cc7d7d40dca'))
-paddle.fluid.layers.density_prior_box (ArgSpec(args=['input', 'image', 'densities', 'fixed_sizes', 'fixed_ratios', 'variance', 'clip', 'steps', 'offset', 'flatten_to_2d', 'name'], varargs=None, keywords=None, defaults=(None, None, None, [0.1, 0.1, 0.2, 0.2], False, [0.0, 0.0], 0.5, False, None)), ('document', '7e62e12ce8b127f2c7ce8db79299c3c3'))
+paddle.fluid.layers.density_prior_box (ArgSpec(args=['input', 'image', 'densities', 'fixed_sizes', 'fixed_ratios', 'variance', 'clip', 'steps', 'offset', 'flatten_to_2d', 'name'], varargs=None, keywords=None, defaults=(None, None, None, [0.1, 0.1, 0.2, 0.2], False, [0.0, 0.0], 0.5, False, None)), ('document', '05c43e8fd25efe34f75e35a2c045ded3'))
 paddle.fluid.layers.multi_box_head (ArgSpec(args=['inputs', 'image', 'base_size', 'num_classes', 'aspect_ratios', 'min_ratio', 'max_ratio', 'min_sizes', 'max_sizes', 'steps', 'step_w', 'step_h', 'offset', 'variance', 'flip', 'clip', 'kernel_size', 'pad', 'stride', 'name', 'min_max_aspect_ratios_order'], varargs=None, keywords=None, defaults=(None, None, None, None, None, None, None, 0.5, [0.1, 0.1, 0.2, 0.2], True, False, 1, 0, 1, None, False)), ('document', 'fd58078fdfffd899b91f992ba224628f'))
 paddle.fluid.layers.bipartite_match (ArgSpec(args=['dist_matrix', 'match_type', 'dist_threshold', 'name'], varargs=None, keywords=None, defaults=(None, None, None)), ('document', '3ddb9b966f193900193a95a3df77c3c1'))
 paddle.fluid.layers.target_assign (ArgSpec(args=['input', 'matched_indices', 'negative_indices', 'mismatch_value', 'name'], varargs=None, keywords=None, defaults=(None, None, None)), ('document', 'e9685f32d21bec8c013626c0254502c5'))
 paddle.fluid.layers.detection_output (ArgSpec(args=['loc', 'scores', 'prior_box', 'prior_box_var', 'background_label', 'nms_threshold', 'nms_top_k', 'keep_top_k', 'score_threshold', 'nms_eta'], varargs=None, keywords=None, defaults=(0, 0.3, 400, 200, 0.01, 1.0)), ('document', 'efae414c1137c7944d6174dd08c5347a'))
-paddle.fluid.layers.ssd_loss (ArgSpec(args=['location', 'confidence', 'gt_box', 'gt_label', 'prior_box', 'prior_box_var', 'background_label', 'overlap_threshold', 'neg_pos_ratio', 'neg_overlap', 'loc_loss_weight', 'conf_loss_weight', 'match_type', 'mining_type', 'normalize', 'sample_size'], varargs=None, keywords=None, defaults=(None, 0, 0.5, 3.0, 0.5, 1.0, 1.0, 'per_prediction', 'max_negative', True, None)), ('document', '6d5028fd09d01ab82d296adc0ea95aee'))
+paddle.fluid.layers.ssd_loss (ArgSpec(args=['location', 'confidence', 'gt_box', 'gt_label', 'prior_box', 'prior_box_var', 'background_label', 'overlap_threshold', 'neg_pos_ratio', 'neg_overlap', 'loc_loss_weight', 'conf_loss_weight', 'match_type', 'mining_type', 'normalize', 'sample_size'], varargs=None, keywords=None, defaults=(None, 0, 0.5, 3.0, 0.5, 1.0, 1.0, 'per_prediction', 'max_negative', True, None)), ('document', '055bd5070ad72dccc0949b4ed036f39c'))
-paddle.fluid.layers.detection_map (ArgSpec(args=['detect_res', 'label', 'class_num', 'background_label', 'overlap_threshold', 'evaluate_difficult', 'has_state', 'input_states', 'out_states', 'ap_version'], varargs=None, keywords=None, defaults=(0, 0.3, True, None, None, None, 'integral')), ('document', '1467d91b50c22cd52103b4aa1ee9d0a1'))
-paddle.fluid.layers.rpn_target_assign (ArgSpec(args=['bbox_pred', 'cls_logits', 'anchor_box', 'anchor_var', 'gt_boxes', 'is_crowd', 'im_info', 'rpn_batch_size_per_im', 'rpn_straddle_thresh', 'rpn_fg_fraction', 'rpn_positive_overlap', 'rpn_negative_overlap', 'use_random'], varargs=None, keywords=None, defaults=(256, 0.0, 0.5, 0.7, 0.3, True)), ('document', '70d0109c864bced99b6b0aca4574af5e'))
+paddle.fluid.layers.rpn_target_assign (ArgSpec(args=['bbox_pred', 'cls_logits', 'anchor_box', 'anchor_var', 'gt_boxes', 'is_crowd', 'im_info', 'rpn_batch_size_per_im', 'rpn_straddle_thresh', 'rpn_fg_fraction', 'rpn_positive_overlap', 'rpn_negative_overlap', 'use_random'], varargs=None, keywords=None, defaults=(256, 0.0, 0.5, 0.7, 0.3, True)), ('document', '1e164a56fe9376e18a56d22563d9f801'))
 paddle.fluid.layers.retinanet_target_assign (ArgSpec(args=['bbox_pred', 'cls_logits', 'anchor_box', 'anchor_var', 'gt_boxes', 'gt_labels', 'is_crowd', 'im_info', 'num_classes', 'positive_overlap', 'negative_overlap'], varargs=None, keywords=None, defaults=(1, 0.5, 0.4)), ('document', 'fa1d1c9d5e0111684c0db705f86a2595'))
 paddle.fluid.layers.sigmoid_focal_loss (ArgSpec(args=['x', 'label', 'fg_num', 'gamma', 'alpha'], varargs=None, keywords=None, defaults=(2, 0.25)), ('document', 'aeac6aae100173b3fc7f102cf3023a3d'))
-paddle.fluid.layers.anchor_generator (ArgSpec(args=['input', 'anchor_sizes', 'aspect_ratios', 'variance', 'stride', 'offset', 'name'], varargs=None, keywords=None, defaults=(None, None, [0.1, 0.1, 0.2, 0.2], None, 0.5, None)), ('document', '82b2aefeeb1b706bc4afec70928a259a'))
+paddle.fluid.layers.anchor_generator (ArgSpec(args=['input', 'anchor_sizes', 'aspect_ratios', 'variance', 'stride', 'offset', 'name'], varargs=None, keywords=None, defaults=(None, None, [0.1, 0.1, 0.2, 0.2], None, 0.5, None)), ('document', 'acc23232f4c8c03791598500b5bf7790'))
 paddle.fluid.layers.roi_perspective_transform (ArgSpec(args=['input', 'rois', 'transformed_height', 'transformed_width', 'spatial_scale'], varargs=None, keywords=None, defaults=(1.0,)), ('document', 'd1ddc75629fedee46f82e631e22c79dc'))
 paddle.fluid.layers.generate_proposal_labels (ArgSpec(args=['rpn_rois', 'gt_classes', 'is_crowd', 'gt_boxes', 'im_info', 'batch_size_per_im', 'fg_fraction', 'fg_thresh', 'bg_thresh_hi', 'bg_thresh_lo', 'bbox_reg_weights', 'class_nums', 'use_random', 'is_cls_agnostic', 'is_cascade_rcnn'], varargs=None, keywords=None, defaults=(256, 0.25, 0.25, 0.5, 0.0, [0.1, 0.1, 0.2, 0.2], None, True, False, False)), ('document', 'e87c1131e98715d3657a96c44db1b910'))
 paddle.fluid.layers.generate_proposals (ArgSpec(args=['scores', 'bbox_deltas', 'im_info', 'anchors', 'variances', 'pre_nms_top_n', 'post_nms_top_n', 'nms_thresh', 'min_size', 'eta', 'name'], varargs=None, keywords=None, defaults=(6000, 1000, 0.5, 0.1, 1.0, None)), ('document', 'b7d707822b6af2a586bce608040235b1'))
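The API.spec entries above only record signatures and docstring hashes (the hashes change because the docstrings were respelled). For context, a minimal sketch of how the paddle.fluid.layers.dynamic_lstm signature listed above is typically used, assuming the Paddle 1.x fluid API; shapes and variable names are illustrative only.

# Sketch only: uses the dynamic_lstm signature listed in API.spec above.
import paddle.fluid as fluid

hidden_dim = 64
# A variable-length (LoD) sequence of 128-dimensional features.
data = fluid.layers.data(name='x', shape=[128], dtype='float32', lod_level=1)
# dynamic_lstm expects an input already projected to 4 * hidden_dim.
proj = fluid.layers.fc(input=data, size=hidden_dim * 4, bias_attr=False)
hidden, cell = fluid.layers.dynamic_lstm(
    input=proj, size=hidden_dim * 4, use_peepholes=True,
    gate_activation='sigmoid', cell_activation='tanh')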
paddle/fluid/inference/api/demo_ci/run.sh

@@ -4,7 +4,7 @@ PADDLE_ROOT=$1
 TURN_ON_MKL=$2 # use MKL or Openblas
 TEST_GPU_CPU=$3 # test both GPU/CPU mode or only CPU mode
 DATA_DIR=$4 # dataset
-TENSORRT_INCLUDE_DIR=$5 # TensorRT header file dir, defalut to /usr/local/TensorRT/include
+TENSORRT_INCLUDE_DIR=$5 # TensorRT header file dir, default to /usr/local/TensorRT/include
 TENSORRT_LIB_DIR=$6 # TensorRT lib file dir, default to /usr/local/TensorRT/lib
 inference_install_dir=${PADDLE_ROOT}/build/fluid_inference_install_dir
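The comments fixed above document the script's six positional parameters. As a hedged illustration only (paths and flag values are placeholders, not taken from the commit), an invocation wired up from Python could look like:

# Hypothetical driver for run.sh; argument order follows the comments above.
import subprocess

subprocess.check_call([
    'bash', 'paddle/fluid/inference/api/demo_ci/run.sh',
    '/path/to/paddle_root',          # $1 PADDLE_ROOT
    'ON',                            # $2 TURN_ON_MKL: use MKL or Openblas
    'OFF',                           # $3 TEST_GPU_CPU: GPU/CPU or CPU-only tests
    '/path/to/data',                 # $4 DATA_DIR: dataset
    '/usr/local/TensorRT/include',   # $5 TENSORRT_INCLUDE_DIR
    '/usr/local/TensorRT/lib',       # $6 TENSORRT_LIB_DIR
])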
paddle/fluid/operators/attention_lstm_op.cc

@@ -207,7 +207,7 @@ void AttentionLSTMOpMaker::Make() {
       .InEnum({"sigmoid", "tanh", "relu", "identity"});
   AddAttr<std::string>("cell_activation",
                        "(string, default: tanh)"
-                       "The activation for cell output, `tanh` by defalut.")
+                       "The activation for cell output, `tanh` by default.")
       .SetDefault("tanh")
       .InEnum({"sigmoid", "tanh", "relu", "identity"});
   AddAttr<std::string>("candidate_activation",
paddle/fluid/operators/batch_norm_op.cu

@@ -31,7 +31,7 @@ limitations under the License. */
 // input data range.
 DEFINE_bool(cudnn_batchnorm_spatial_persistent, false,
             "Whether enable CUDNN_BATCHNORM_SPATIAL_PERSISTENT mode for cudnn "
-            "batch_norm, defalut is False.");
+            "batch_norm, default is False.");
 namespace paddle {
 namespace operators {
paddle/fluid/operators/conv_cudnn_op.cu.cc

@@ -33,7 +33,7 @@ DEFINE_uint64(conv_workspace_size_limit,
              "cuDNN convolution workspace limit in MB unit.");
 DEFINE_bool(cudnn_exhaustive_search, false,
             "Whether enable exhaustive search for cuDNN convolution or "
-            "not, defalut is False.");
+            "not, default is False.");
 namespace paddle {
 namespace operators {

@@ -102,7 +102,7 @@ class CUDNNConvOpKernel : public framework::OpKernel<T> {
     conv_desc.descriptor<T>(paddings, strides, dilations);
 #if CUDNN_VERSION_MIN(7, 0, 1)
-    // cudnn 7 can support groups, no need to do it mannually
+    // cudnn 7 can support groups, no need to do it manually
     // FIXME(typhoonzero): find a better way to disable groups
     // rather than setting it to 1.
     CUDNN_ENFORCE(platform::dynload::cudnnSetConvolutionGroupCount(

@@ -300,7 +300,7 @@ class CUDNNConvGradOpKernel : public framework::OpKernel<T> {
         FLAGS_cudnn_exhaustive_search || ctx.Attr<bool>("exhaustive_search");
     if (exhaustive_search && FLAGS_cudnn_deterministic) {
       PADDLE_THROW(
-          "Cann't set exhaustive_search True and "
+          "Can't set exhaustive_search True and "
           "FLAGS_cudnn_deterministic True at same time.");
     }

@@ -320,7 +320,7 @@ class CUDNNConvGradOpKernel : public framework::OpKernel<T> {
     conv_desc.descriptor<T>(paddings, strides, dilations);
 #if CUDNN_VERSION_MIN(7, 0, 1)
-    // cudnn 7 can support groups, no need to do it mannually
+    // cudnn 7 can support groups, no need to do it manually
     // FIXME(typhoonzero): find a better way to disable groups
     // rather than setting it to 1.
     CUDNN_ENFORCE(platform::dynload::cudnnSetConvolutionGroupCount(

@@ -665,7 +665,7 @@ class CUDNNConvDoubleGradOpKernel : public framework::OpKernel<T> {
     bool deterministic = FLAGS_cudnn_deterministic;
     if (exhaustive_search && deterministic) {
       PADDLE_THROW(
-          "Cann't set exhaustive_search True and "
+          "Can't set exhaustive_search True and "
          "FLAGS_cudnn_deterministic True at same time.");
     }
paddle/fluid/operators/conv_fusion_op.cu.cc

@@ -18,7 +18,7 @@ limitations under the License. */
 DEFINE_int64(cudnn_exhaustive_search_times, -1,
              "Exhaustive search times for cuDNN convolution, "
-             "defalut is -1, not exhaustive search");
+             "default is -1, not exhaustive search");
 namespace paddle {
 namespace operators {
paddle/fluid/operators/conv_op.cc

@@ -259,7 +259,7 @@ void Conv2DOpMaker::Make() {
   AddAttr<bool>("exhaustive_search",
                 "(bool, default false) cuDNN has many algorithm to calculation "
                 "convolution, whether enable exhaustive search "
-                "for cuDNN convolution or not, defalut is False.")
+                "for cuDNN convolution or not, default is False.")
       .SetDefault(false);
   AddComment(R"DOC(
 Convolution Operator.

@@ -378,7 +378,7 @@ void Conv3DOpMaker::Make() {
   AddAttr<bool>("exhaustive_search",
                 "(bool, default false) cuDNN has many algorithm to calculation "
                 "convolution, whether enable exhaustive search "
-                "for cuDNN convolution or not, defalut is False.")
+                "for cuDNN convolution or not, default is False.")
       .SetDefault(false);
   AddComment(R"DOC(
 Convolution3D Operator.
paddle/fluid/operators/detection/bipartite_match_op.cc

@@ -231,14 +231,14 @@ class BipartiteMatchOpMaker : public framework::OpProtoAndCheckerMaker {
              "entities.");
    AddAttr<std::string>(
        "match_type",
-       "(string, defalut: per_prediction) "
+       "(string, default: per_prediction) "
        "The type of matching method, should be 'bipartite' or "
-       "'per_prediction', 'bipartite' by defalut.")
+       "'per_prediction', 'bipartite' by default.")
        .SetDefault("bipartite")
        .InEnum({"bipartite", "per_prediction"});
    AddAttr<float>(
        "dist_threshold",
-       "(float, defalut: 0.5) "
+       "(float, default: 0.5) "
        "If `match_type` is 'per_prediction', this threshold is to determine "
        "the extra matching bboxes based on the maximum distance.")
        .SetDefault(0.5);
paddle/fluid/operators/detection/multiclass_nms_op.cc

@@ -463,7 +463,7 @@ class MultiClassNMSOpMaker : public framework::OpProtoAndCheckerMaker {
              "Input BBoxes should be the second case with shape [M, C, 4].");
    AddAttr<int>(
        "background_label",
-       "(int, defalut: 0) "
+       "(int, default: 0) "
        "The index of background label, the background label will be ignored. "
        "If set to -1, then all categories will be considered.")
        .SetDefault(0);

@@ -477,7 +477,7 @@ class MultiClassNMSOpMaker : public framework::OpProtoAndCheckerMaker {
              "confidences aftern the filtering detections based on "
              "score_threshold");
    AddAttr<float>("nms_threshold",
-                  "(float, defalut: 0.3) "
+                  "(float, default: 0.3) "
                   "The threshold to be used in NMS.")
        .SetDefault(0.3);
    AddAttr<float>("nms_eta",
paddle/fluid/operators/detection_map_op.cc

@@ -150,7 +150,7 @@ class DetectionMAPOpMaker : public framework::OpProtoAndCheckerMaker {
              "The class number.");
    AddAttr<int>(
        "background_label",
-       "(int, defalut: 0) "
+       "(int, default: 0) "
        "The index of background label, the background label will be ignored. "
        "If set to -1, then all categories will be considered.")
        .SetDefault(0);
paddle/fluid/operators/fused/fused_embedding_fc_lstm_op.cc

@@ -174,15 +174,15 @@ void FusedEmbeddingFCLSTMOpMaker::Make() {
   AddOutput("ReorderedH0", "(LoDTensor) (N x D).").AsIntermediate();
   AddOutput("ReorderedC0", "(LoDTensor) (N x D).").AsIntermediate();
   AddAttr<bool>("use_peepholes",
-                "(bool, defalut: True) "
+                "(bool, default: True) "
                 "whether to enable diagonal/peephole connections.")
       .SetDefault(true);
   AddAttr<bool>("is_reverse",
-                "(bool, defalut: False) "
+                "(bool, default: False) "
                 "whether to compute reversed LSTM.")
       .SetDefault(false);
   AddAttr<bool>("use_seq",
-                "(bool, defalut: True) "
+                "(bool, default: True) "
                 "whether to use seq mode to compute.")
       .SetDefault(true);
   AddAttr<std::string>("gate_activation",

@@ -193,7 +193,7 @@ void FusedEmbeddingFCLSTMOpMaker::Make() {
       .InEnum({"sigmoid", "tanh", "relu", "identity"});
   AddAttr<std::string>("cell_activation",
                        "(string, default: tanh)"
-                       "The activation for cell output, `tanh` by defalut.")
+                       "The activation for cell output, `tanh` by default.")
       .SetDefault("tanh")
       .InEnum({"sigmoid", "tanh", "relu", "identity"});
   AddAttr<std::string>("candidate_activation",
paddle/fluid/operators/fused/fusion_conv_inception_op.cc

@@ -67,7 +67,7 @@ class ConvInceptionFusionOpMaker : public framework::OpProtoAndCheckerMaker {
   void Make() override {
     AddInput("Input", "(Tensor) NCHW layout.");
     AddInput("Filter", "(vector<Tensor>) 4 aggregated filters").AsDuplicable();
-    AddInput("Bias", "(vector<Tensor>) it's lenght is equal to Filter")
+    AddInput("Bias", "(vector<Tensor>) it's length is equal to Filter")
         .AsDuplicable();
     AddOutput("Output",
               "(Tensor) The output tensor of convolution operator. "

@@ -82,7 +82,7 @@ class ConvInceptionFusionOpMaker : public framework::OpProtoAndCheckerMaker {
         "exclusive",
         "(bool, default True) When true, will exclude the zero-padding in the "
         "averaging calculating, otherwise, include the zero-padding. Note, it "
-        "is only used when pooling_type is avg. The defalut is True.")
+        "is only used when pooling_type is avg. The default is True.")
         .SetDefault(true);
     AddAttr<std::string>(
         "activation",
paddle/fluid/operators/fused/fusion_gru_op.cc

@@ -147,11 +147,11 @@ void FusionGRUOpMaker::Make() {
                        "The activation type used in update gate and reset gate.")
       .SetDefault("sigmoid");
   AddAttr<bool>("is_reverse",
-                "(bool, defalut: False) "
+                "(bool, default: False) "
                 "whether to compute reversed GRU.")
       .SetDefault(false);
   AddAttr<bool>("use_seq",
-                "(bool, defalut: True) "
+                "(bool, default: True) "
                 "whether to use seq mode to compute GRU.")
       .SetDefault(true);
   AddComment(R"DOC(
paddle/fluid/operators/fused/fusion_lstm_op.cc

@@ -179,15 +179,15 @@ void FusionLSTMOpMaker::Make() {
   AddOutput("CheckedCell", "(Tensor) (2 x D) only for peephole.")
       .AsIntermediate();
   AddAttr<bool>("use_peepholes",
-                "(bool, defalut: True) "
+                "(bool, default: True) "
                 "whether to enable diagonal/peephole connections.")
       .SetDefault(true);
   AddAttr<bool>("is_reverse",
-                "(bool, defalut: False) "
+                "(bool, default: False) "
                 "whether to compute reversed LSTM.")
      .SetDefault(false);
   AddAttr<bool>("use_seq",
-                "(bool, defalut: True) "
+                "(bool, default: True) "
                 "whether to use seq mode to compute.")
       .SetDefault(true);
   AddAttr<std::string>("gate_activation",

@@ -198,7 +198,7 @@ void FusionLSTMOpMaker::Make() {
       .InEnum({"sigmoid", "tanh", "relu", "identity"});
   AddAttr<std::string>("cell_activation",
                        "(string, default: tanh)"
-                       "The activation for cell output, `tanh` by defalut.")
+                       "The activation for cell output, `tanh` by default.")
       .SetDefault("tanh")
       .InEnum({"sigmoid", "tanh", "relu", "identity"});
   AddAttr<std::string>("candidate_activation",
paddle/fluid/operators/gaussian_random_batch_size_like_op.cc

@@ -58,7 +58,7 @@ class GaussianRandomBatchSizeLikeOpMaker : public BatchSizeLikeOpMaker {
   AddComment(R"DOC(
 Used to initialize tensors with gaussian random generator.
-The defalut mean of the distribution is 0. and defalut standard
+The default mean of the distribution is 0. and default standard
 deviation (std) of the distribution is 1.. Uers can set mean and std
 by input arguments.
 )DOC");
paddle/fluid/operators/gru_op.cc

@@ -137,7 +137,7 @@ class GRUOpMaker : public framework::OpProtoAndCheckerMaker {
                   "The activation type used in update gate and reset gate.")
        .SetDefault("sigmoid");
    AddAttr<bool>("is_reverse",
-                 "(bool, defalut: False) "
+                 "(bool, default: False) "
                  "whether to compute reversed GRU.")
        .SetDefault(false);
    AddAttr<bool>("origin_mode",
paddle/fluid/operators/lstm_op.cc

@@ -153,11 +153,11 @@ class LSTMOpMaker : public framework::OpProtoAndCheckerMaker {
              "in the backward.")
        .AsIntermediate();
    AddAttr<bool>("use_peepholes",
-                 "(bool, defalut: True) "
+                 "(bool, default: True) "
                  "whether to enable diagonal/peephole connections.")
        .SetDefault(true);
    AddAttr<bool>("is_reverse",
-                 "(bool, defalut: False) "
+                 "(bool, default: False) "
                  "whether to compute reversed LSTM.")
        .SetDefault(false);
    AddAttr<std::string>(

@@ -169,7 +169,7 @@ class LSTMOpMaker : public framework::OpProtoAndCheckerMaker {
        .InEnum({"sigmoid", "tanh", "relu", "identity"});
    AddAttr<std::string>("cell_activation",
                         "(string, default: tanh)"
-                        "The activation for cell output, `tanh` by defalut.")
+                        "The activation for cell output, `tanh` by default.")
        .SetDefault("tanh")
        .InEnum({"sigmoid", "tanh", "relu", "identity"});
    AddAttr<std::string>("candidate_activation",

@@ -181,7 +181,7 @@ class LSTMOpMaker : public framework::OpProtoAndCheckerMaker {
    AddComment(R"DOC(
 Long-Short Term Memory (LSTM) Operator.
-The defalut implementation is diagonal/peephole connection
+The default implementation is diagonal/peephole connection
 (https://arxiv.org/pdf/1402.1128.pdf), the formula is as follows:
 $$ i_t = \\sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + W_{ic}c_{t-1} + b_i) $$

@@ -199,7 +199,7 @@ $$ h_t = o_t \\odot act_h(c_t) $$
 - W terms denote weight matrices (e.g. $W_{xi}$ is the matrix
   of weights from the input gate to the input), $W_{ic}, W_{fc}, W_{oc}$
   are diagonal weight matrices for peephole connections. In our implementation,
-  we use vectors to reprenset these diagonal weight matrices.
+  we use vectors to represent these diagonal weight matrices.
 - The b terms denote bias vectors ($b_i$ is the input gate bias vector).
 - $\sigma$ is the non-line activations, such as logistic sigmoid function.
 - $i, f, o$ and $c$ are the input gate, forget gate, output gate,
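For reference, the operator comment edited above cites https://arxiv.org/pdf/1402.1128.pdf; the full peephole LSTM recurrence it summarizes, written out here for readability (standard form from that paper; only the $i_t$ and $h_t$ lines appear verbatim in this diff), is:

$$ i_t = \sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + W_{ic}c_{t-1} + b_i) $$
$$ f_t = \sigma(W_{fx}x_{t} + W_{fh}h_{t-1} + W_{fc}c_{t-1} + b_f) $$
$$ \tilde{c}_t = act_g(W_{cx}x_{t} + W_{ch}h_{t-1} + b_c) $$
$$ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t $$
$$ o_t = \sigma(W_{ox}x_{t} + W_{oh}h_{t-1} + W_{oc}c_{t} + b_o) $$
$$ h_t = o_t \odot act_h(c_t) $$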
paddle/fluid/operators/lstmp_op.cc

@@ -177,20 +177,20 @@ class LSTMPOpMaker : public framework::OpProtoAndCheckerMaker {
              "backward.")
        .AsIntermediate();
    AddAttr<bool>("use_peepholes",
-                 "(bool, defalut: True) "
+                 "(bool, default: True) "
                  "whether to enable diagonal/peephole connections.")
        .SetDefault(true);
    AddAttr<bool>("is_reverse",
-                 "(bool, defalut: False) "
+                 "(bool, default: False) "
                  "whether to compute reversed LSTMP.")
        .SetDefault(false);
    AddAttr<float>("cell_clip",
-                  "(float, defalut: 0.0) "
+                  "(float, default: 0.0) "
                   "Clip for Tensor for cell state tensor when clip value is "
                   "greater than 0.0")
        .SetDefault(0.0);
    AddAttr<float>("proj_clip",
-                  "(float, defalut: 0.0) "
+                  "(float, default: 0.0) "
                   "Clip for Tensor for projection tensor when clip value is "
                   "greater than 0.0")
        .SetDefault(0.0);

@@ -203,7 +203,7 @@ class LSTMPOpMaker : public framework::OpProtoAndCheckerMaker {
        .InEnum({"sigmoid", "tanh", "relu", "identity"});
    AddAttr<std::string>("cell_activation",
                         "(string, default: tanh)"
-                        "The activation for cell output, `tanh` by defalut.")
+                        "The activation for cell output, `tanh` by default.")
        .SetDefault("tanh")
        .InEnum({"sigmoid", "tanh", "relu", "identity"});
    AddAttr<std::string>("candidate_activation",

@@ -215,7 +215,7 @@ class LSTMPOpMaker : public framework::OpProtoAndCheckerMaker {
    AddAttr<std::string>("proj_activation",
                         "(string, default: tanh)"
                         "The activation for projection output, "
-                        "`tanh` by defalut.")
+                        "`tanh` by default.")
        .SetDefault("tanh")
        .InEnum({"sigmoid", "tanh", "relu", "identity"});
    AddComment(R"DOC(

@@ -248,7 +248,7 @@
 where the W terms denote weight matrices (e.g. $W_{xi}$ is the matrix
 of weights from the input gate to the input), $W_{ic}, W_{fc}, W_{oc}$
 are diagonal weight matrices for peephole connections. In our implementation,
-we use vectors to reprenset these diagonal weight matrices. The b terms
+we use vectors to represent these diagonal weight matrices. The b terms
 denote bias vectors ($b_i$ is the input gate bias vector), $\sigma$
 is the activation, such as logistic sigmoid function, and
 $i, f, o$ and $c$ are the input gate, forget gate, output gate,
paddle/fluid/operators/pool_op.cc

@@ -190,7 +190,7 @@ void Pool2dOpMaker::Make() {
       "exclusive",
       "(bool, default True) When true, will exclude the zero-padding in the "
       "averaging calculating, otherwise, include the zero-padding. Note, it "
-      "is only used when pooling_type is avg. The defalut is True.")
+      "is only used when pooling_type is avg. The default is True.")
       .SetDefault(true);
   AddAttr<bool>(
       "adaptive",

@@ -360,7 +360,7 @@ void Pool3dOpMaker::Make() {
       "exclusive",
       "(bool, default True) When true, will exclude the zero-padding in the "
       "averaging calculating, otherwise, include the zero-padding. Note, it "
-      "is only used when pooling_type is avg. The defalut is True.")
+      "is only used when pooling_type is avg. The default is True.")
       .SetDefault(true);
   AddAttr<bool>(
       "adaptive",
paddle/fluid/operators/unpool_op.cc

@@ -46,7 +46,7 @@ class Unpool2dOpMaker : public framework::OpProtoAndCheckerMaker {
                   "strides (height, width) of unpooling operator.")
        .SetDefault({1, 1});
    AddAttr<std::vector<int>>("paddings",
-                             "(vector defalut:{0,0}), "
+                             "(vector default:{0,0}), "
                              "paddings (height, width) of unpooling operator.")
        .SetDefault({0, 0});
    AddAttr<std::string>(
python/paddle/fluid/contrib/slim/quantization/quantization_strategy.py
浏览文件 @
6e310e2d
...
@@ -53,11 +53,11 @@ class QuantizationStrategy(Strategy):
        start_epoch(int): The 'on_epoch_begin' function will be called in start_epoch. default: 0
        end_epoch(int): The 'on_epoch_end' function will be called in end_epoch. default: 0
        float_model_save_path(str): The path to save model with float weights.
-                           None means it doesn't save float model. defalut: None.
+                           None means it doesn't save float model. default: None.
        mobile_model_save_path(str): The path to save model for paddle-mobile execution.
-                           None means it doesn't save mobile model. defalut: None.
+                           None means it doesn't save mobile model. default: None.
        int8_model_save_path(str): The path to save model with int8_t weight.
-                           None means it doesn't save int8 model. defalut: None.
+                           None means it doesn't save int8 model. default: None.
        activation_bits(int): quantization bit number for activation. default: 8.
        weight_bits(int): quantization bit number for weights. The bias is not quantized.
                           default: 8.
...
@@ -90,7 +90,7 @@ class QuantizationStrategy(Strategy):
    def restore_from_checkpoint(self, context):
        """
-       Restore graph when the compressoin task is inited from checkpoint.
+       Restore graph when the compression task is inited from checkpoint.
        """
        # It is inited from checkpoint and has missed start epoch.
        if context.epoch_id != 0 and context.epoch_id > self.start_epoch:
...
@@ -100,7 +100,7 @@ class QuantizationStrategy(Strategy):
    def _modify_graph_for_quantization(self, context):
        """
-       Insert fake_quantize_op and fake_dequantize_op before trainging and testing.
+       Insert fake_quantize_op and fake_dequantize_op before training and testing.
        """
        train_ir_graph = IrGraph(
            core.Graph(context.optimize_graph.program.clone().desc),
...
@@ -151,7 +151,7 @@ class QuantizationStrategy(Strategy):
    def on_epoch_begin(self, context):
        """
-       Insert fake_quantize_op and fake_dequantize_op before trainging and testing.
+       Insert fake_quantize_op and fake_dequantize_op before training and testing.
        """
        super(QuantizationStrategy, self).on_epoch_begin(context)
        if self.start_epoch == context.epoch_id:
...
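For orientation, the arguments corrected above are documented as constructor parameters of `QuantizationStrategy`. A minimal sketch of instantiating it is below, assuming the class is importable from the module shown in the file path and that the constructor accepts exactly the documented keyword arguments; the epochs and save paths are placeholders.

# Sketch only: the import path and keyword set are taken from the docstring
# above; they are assumptions, not verified against the full class definition.
from paddle.fluid.contrib.slim.quantization.quantization_strategy import (
    QuantizationStrategy)

strategy = QuantizationStrategy(
    start_epoch=0,
    end_epoch=10,
    float_model_save_path='./output/float',
    int8_model_save_path='./output/int8',
    activation_bits=8,
    weight_bits=8)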
python/paddle/fluid/contrib/slim/tests/quantization/compress.yaml
View file @ 6e310e2d
#start_epoch(int): The epoch to insert quantization operators. default: 0
#
-#end_epoch(int): The epoch to save inferecne model. default: 0
+#end_epoch(int): The epoch to save inference model. default: 0
#
#float_model_save_path(str): The path to save model with float weights.
-#                            None means it doesn't save float model. defalut: None.
+#                            None means it doesn't save float model. default: None.
#
#mobile_model_save_path(str): The path to save model for paddle-mobile execution.
-#                            None means it doesn't save mobile model. defalut: None.
+#                            None means it doesn't save mobile model. default: None.
#
#int8_model_save_path(str): The path to save model with int8_t weight.
-#                            None means it doesn't save int8 model. defalut: None.
+#                            None means it doesn't save int8 model. default: None.
#
#activation_bits(int): quantization bit number for activation. default: 8.
#
...
python/paddle/fluid/evaluator.py
View file @ 6e310e2d
...
@@ -323,11 +323,11 @@ class DetectionMAP(Evaluator):
        class_num (int): The class number.
        background_label (int): The index of background label, the background
            label will be ignored. If set to -1, then all categories will be
-           considered, 0 by defalut.
+           considered, 0 by default.
        overlap_threshold (float): The threshold for deciding true/false
-           positive, 0.5 by defalut.
+           positive, 0.5 by default.
        evaluate_difficult (bool): Whether to consider difficult ground truth
-           for evaluation, True by defalut. This argument does not work when
+           for evaluation, True by default. This argument does not work when
            gt_difficult is None.
        ap_version (string): The average precision calculation ways, it must be
            'integral' or '11point'. Please check
...
python/paddle/fluid/layers/detection.py
View file @ 6e310e2d
...
@@ -266,7 +266,7 @@ def rpn_target_assign(bbox_pred,
            coordinate of the anchor box.
        anchor_var(Variable): A 2-D Tensor with shape [M,4] holds expanded
            variances of anchors.
-       gt_boxes (Variable): The ground-truth boudding boxes (bboxes) are a 2D
+       gt_boxes (Variable): The ground-truth bounding boxes (bboxes) are a 2D
            LoDTensor with shape [Ng, 4], Ng is the total number of ground-truth
            bboxes of mini-batch input.
        is_crowd (Variable): A 1-D LoDTensor which indicates groud-truth is crowd.
...
@@ -1258,8 +1258,8 @@ def ssd_loss(location,
    """
    **Multi-box loss layer for object detection algorithm of SSD**
-   This layer is to compute dection loss for SSD given the location offset
-   predictions, confidence predictions, prior boxes and ground-truth boudding
+   This layer is to compute detection loss for SSD given the location offset
+   predictions, confidence predictions, prior boxes and ground-truth bounding
    boxes and labels, and the type of hard example mining. The returned loss
    is a weighted sum of the localization loss (or regression loss) and
    confidence loss (or classification loss) by performing the following steps:
...
@@ -1303,7 +1303,7 @@ def ssd_loss(location,
        confidence (Variable): The confidence predictions are a 3D Tensor
            with shape [N, Np, C], N and Np are the same as they are in
            `location`, C is the class number.
-       gt_box (Variable): The ground-truth boudding boxes (bboxes) are a 2D
+       gt_box (Variable): The ground-truth bounding boxes (bboxes) are a 2D
            LoDTensor with shape [Ng, 4], Ng is the total number of ground-truth
            bboxes of mini-batch input.
        gt_label (Variable): The ground-truth labels are a 2D LoDTensor
...
@@ -1316,14 +1316,14 @@ def ssd_loss(location,
            `overlap_threshold` to determine the extra matching bboxes when
            finding matched boxes. 0.5 by default.
        neg_pos_ratio (float): The ratio of the negative boxes to the positive
-           boxes, used only when mining_type is 'max_negative', 3.0 by defalut.
+           boxes, used only when mining_type is 'max_negative', 3.0 by default.
        neg_overlap (float): The negative overlap upper bound for the unmatched
            predictions. Use only when mining_type is 'max_negative',
            0.5 by default.
        loc_loss_weight (float): Weight for localization loss, 1.0 by default.
        conf_loss_weight (float): Weight for confidence loss, 1.0 by default.
        match_type (str): The type of matching method during training, should
-           be 'bipartite' or 'per_prediction', 'per_prediction' by defalut.
+           be 'bipartite' or 'per_prediction', 'per_prediction' by default.
        mining_type (str): The hard example mining type, should be 'hard_example'
            or 'max_negative', now only support `max_negative`.
        normalize (bool): Whether to normalize the SSD loss by the total number
...
@@ -1507,7 +1507,7 @@ def prior_box(input,
            Default:[0.1, 0.1, 0.2, 0.2].
        flip(bool): Whether to flip aspect ratios. Default:False.
        clip(bool): Whether to clip out-of-boundary boxes. Default: False.
-       step(list|turple): Prior boxes step across width and height, If
+       step(list|tuple): Prior boxes step across width and height, If
            step[0] == 0.0/step[1] == 0.0, the prior boxes step across
            height/weight of the input will be automatically calculated.
            Default: [0., 0.]
...
@@ -1636,7 +1636,7 @@ def density_prior_box(input,
        variance(list|tuple): the variances to be encoded in density prior boxes.
            Default:[0.1, 0.1, 0.2, 0.2].
        clip(bool): Whether to clip out-of-boundary boxes. Default: False.
-       step(list|turple): Prior boxes step across width and height, If
+       step(list|tuple): Prior boxes step across width and height, If
            step[0] == 0.0/step[1] == 0.0, the density prior boxes step across
            height/weight of the input will be automatically calculated.
            Default: [0., 0.]
...
@@ -2003,7 +2003,7 @@ def anchor_generator(input,
            anchors, e.g. [0.5, 1.0, 2.0].
        variance(list|tuple): The variances to be used in box regression deltas.
            Default:[0.1, 0.1, 0.2, 0.2].
-       stride(list|turple): The anchors stride across width and height,e.g. [16.0, 16.0]
+       stride(list|tuple): The anchors stride across width and height,e.g. [16.0, 16.0]
        offset(float): Prior boxes center offset. Default: 0.5
        name(str): Name of the prior box op. Default: None.
...
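Several of the corrected docstrings above belong to `fluid.layers.ssd_loss`. The sketch below ties the documented arguments together; the prior-box count, class count, and data shapes are illustrative assumptions, and the keyword names follow the public fluid 1.x signature rather than this diff.

import paddle.fluid as fluid

# Illustrative setup: Np = 2097 prior boxes, C = 21 classes.
loc = fluid.layers.data(name='loc', shape=[2097, 4], dtype='float32')
scores = fluid.layers.data(name='scores', shape=[2097, 21], dtype='float32')
gt_box = fluid.layers.data(
    name='gt_box', shape=[4], lod_level=1, dtype='float32')
gt_label = fluid.layers.data(
    name='gt_label', shape=[1], lod_level=1, dtype='float32')
prior_box = fluid.layers.data(
    name='prior_box', shape=[2097, 4], append_batch_size=False, dtype='float32')
prior_var = fluid.layers.data(
    name='prior_var', shape=[2097, 4], append_batch_size=False, dtype='float32')

loss = fluid.layers.ssd_loss(
    location=loc,
    confidence=scores,
    gt_box=gt_box,
    gt_label=gt_label,
    prior_box=prior_box,
    prior_box_var=prior_var,
    neg_pos_ratio=3.0,             # documented default
    match_type='per_prediction',   # documented default
    mining_type='max_negative')    # only supported mining type per the docstring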
python/paddle/fluid/layers/nn.py
View file @ 6e310e2d
...
@@ -769,7 +769,7 @@ def dynamic_lstmp(input,
          the matrix of weights from the input gate to the input).
        * :math:`W_{ic}`, :math:`W_{fc}`, :math:`W_{oc}`: Diagonal weight \
          matrices for peephole connections. In our implementation, \
-         we use vectors to represnet these diagonal weight matrices.
+         we use vectors to represent these diagonal weight matrices.
        * :math:`b`: Denotes bias vectors (e.g. :math:`b_i` is the input gate \
          bias vector).
        * :math:`\sigma`: The activation, such as logistic sigmoid function.
...
@@ -5067,7 +5067,7 @@ def l2_normalize(x, axis, epsilon=1e-12, name=None):
            the dimension to normalization is rank(X) + axis. -1 is the
            last dimension.
        epsilon(float): The epsilon value is used to avoid division by zero, \
-           the defalut value is 1e-12.
+           the default value is 1e-12.
        name(str|None): A name for this layer(optional). If set None, the layer \
            will be named automatically.
...
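The `l2_normalize` hunk above corrects the epsilon description. A small sketch of the call, using the signature visible in the hunk header; the input shape is an illustrative assumption.

import paddle.fluid as fluid

# Normalize each [128]-dim feature row to unit L2 norm along the last axis.
x = fluid.layers.data(name='feature', shape=[128], dtype='float32')
normed = fluid.layers.l2_normalize(x=x, axis=-1, epsilon=1e-12)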
python/paddle/fluid/metrics.py
View file @ 6e310e2d
...
@@ -713,11 +713,11 @@ class DetectionMAP(object):
        class_num (int): The class number.
        background_label (int): The index of background label, the background
            label will be ignored. If set to -1, then all categories will be
-           considered, 0 by defalut.
+           considered, 0 by default.
        overlap_threshold (float): The threshold for deciding true/false
-           positive, 0.5 by defalut.
+           positive, 0.5 by default.
        evaluate_difficult (bool): Whether to consider difficult ground truth
-           for evaluation, True by defalut. This argument does not work when
+           for evaluation, True by default. This argument does not work when
            gt_difficult is None.
        ap_version (string): The average precision calculation ways, it must be
            'integral' or '11point'. Please check
...
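The same `DetectionMAP` docstring is corrected here in fluid.metrics. A hedged sketch of wiring the documented arguments is below; the detection-output and ground-truth variables (and their shapes) are illustrative assumptions, since only the keyword arguments from class_num onward appear in the hunk above.

import paddle.fluid as fluid

# Assumed inputs: detection results and ground truth as LoD tensors.
detect_res = fluid.layers.data(
    name='detect_res', shape=[6], lod_level=1, dtype='float32')
gt_label = fluid.layers.data(
    name='gt_label', shape=[1], lod_level=1, dtype='float32')
gt_box = fluid.layers.data(
    name='gt_box', shape=[4], lod_level=1, dtype='float32')

map_eval = fluid.metrics.DetectionMAP(
    detect_res, gt_label, gt_box,
    class_num=21,
    background_label=0,        # documented default
    overlap_threshold=0.5,     # documented default
    evaluate_difficult=True,   # documented default
    ap_version='integral')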
python/paddle/fluid/tests/unittests/dist_transformer.py
View file @ 6e310e2d
...
@@ -1563,7 +1563,7 @@ def fast_decode(
            } for cache in caches]
            pre_pos = layers.elementwise_mul(
                x=layers.fill_constant_batch_size_like(
-                   input=pre_enc_output,  # cann't use pre_ids here since it has lod
+                   input=pre_enc_output,  # can't use pre_ids here since it has lod
                    value=1,
                    shape=[-1, 1, 1],
                    dtype=pre_ids.dtype),
...
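The corrected comment explains why the shape reference is pre_enc_output rather than pre_ids (which carries LoD). For context, a hedged sketch of how fill_constant_batch_size_like copies only the batch dimension from its input at run time; the reference tensor here is an illustrative placeholder.

import paddle.fluid as fluid

# Create a [batch, 1, 1] tensor of ones whose batch size is taken from `ref`
# when the graph runs. `ref` stands in for pre_enc_output in the decoder above.
ref = fluid.layers.data(name='ref', shape=[8, 512], dtype='float32')
ones = fluid.layers.fill_constant_batch_size_like(
    input=ref, shape=[-1, 1, 1], value=1, dtype='int64')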
python/paddle/fluid/tests/unittests/transformer_model.py
View file @ 6e310e2d
...
@@ -136,7 +136,7 @@ def multi_head_attention(queries,
        # The current implementation of softmax_op only supports 2D tensor,
        # consequently it cannot be directly used here.
        # If to use the reshape_op, Besides, the shape of product inferred in
-       # compile-time is not the actual shape in run-time. It cann't be used
+       # compile-time is not the actual shape in run-time. It can't be used
        # to set the attribute of reshape_op.
        # So, here define the softmax for temporary solution.
...
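The comment above motivates a hand-rolled softmax because softmax_op handled only 2-D tensors at the time. Below is a hedged sketch of composing a last-axis softmax from elementwise primitives; it mirrors the idea, not necessarily the file's actual helper.

import paddle.fluid as fluid

def softmax_last_dim(x):
    # Softmax over the last axis, built from exp / reduce_sum / elementwise_div.
    # A max-shift could be subtracted first (with the same axis=0 broadcast)
    # for extra numerical stability.
    exp_x = fluid.layers.exp(x)
    denom = fluid.layers.reduce_sum(exp_x, dim=-1, keep_dim=False)
    return fluid.layers.elementwise_div(x=exp_x, y=denom, axis=0)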