PaddlePaddle / Paddle
Commit 5fd2ff0a
Authored May 28, 2020 by yangqingyou
Parents: 3a1cfa9d, c0911fdd

add crypto api for python, test=develop
Changes: 69 changed files with 470 additions and 224 deletions (+470, -224)
CMakeLists.txt  +1 -1
paddle/fluid/framework/CMakeLists.txt  +1 -0
paddle/fluid/framework/grad_op_desc_maker.h  +18 -2
paddle/fluid/framework/op_call_stack.cc  +9 -3
paddle/fluid/framework/op_call_stack.h  +7 -0
paddle/fluid/framework/op_call_stack_test.cc  +61 -0
paddle/fluid/imperative/dygraph_grad_maker.h  +2 -0
paddle/fluid/inference/tensorrt/convert/hard_sigmoid_op.cc  +1 -1
paddle/fluid/operators/activation_op.cc  +16 -16
paddle/fluid/operators/argsort_op.cc  +2 -2
paddle/fluid/operators/batch_fc_op.cc  +2 -2
paddle/fluid/operators/batch_size_like.h  +1 -1
paddle/fluid/operators/concat_op.cc  +2 -2
paddle/fluid/operators/conv_cudnn_helper.h  +3 -1
paddle/fluid/operators/conv_transpose_cudnn_op.cu  +4 -2
paddle/fluid/operators/crop_op.cc  +2 -2
paddle/fluid/operators/cvm_op.cc  +4 -4
paddle/fluid/operators/elementwise/elementwise_add_op.cc  +4 -4
paddle/fluid/operators/elementwise/elementwise_div_op.cc  +1 -1
paddle/fluid/operators/elementwise/elementwise_mul_op.cc  +1 -1
paddle/fluid/operators/elementwise/elementwise_op.h  +8 -8
paddle/fluid/operators/elementwise/elementwise_sub_op.cc  +4 -4
paddle/fluid/operators/fill_constant_batch_size_like_op.cc  +1 -1
paddle/fluid/operators/fill_zeros_like_op.cc  +2 -2
paddle/fluid/operators/flatten_op.cc  +8 -8
paddle/fluid/operators/gather_nd_op.cc  +2 -2
paddle/fluid/operators/gather_op.cc  +2 -2
paddle/fluid/operators/gaussian_random_batch_size_like_op.cc  +1 -1
paddle/fluid/operators/group_norm_op.cc  +4 -4
paddle/fluid/operators/gru_op.cc  +2 -2
paddle/fluid/operators/gru_unit_op.cc  +2 -2
paddle/fluid/operators/reduce_ops/reduce_mean_op.cc  +2 -3
paddle/fluid/operators/reduce_ops/reduce_sum_op.cc  +2 -2
paddle/fluid/operators/roll_op.cc  +2 -2
paddle/fluid/operators/scatter_nd_add_op.cc  +2 -2
paddle/fluid/operators/scatter_op.cc  +2 -2
paddle/fluid/operators/sequence_ops/sequence_concat_op.cc  +2 -3
paddle/fluid/operators/sequence_ops/sequence_expand_as_op.cc  +4 -4
paddle/fluid/operators/sequence_ops/sequence_expand_op.cc  +5 -5
paddle/fluid/operators/sequence_ops/sequence_pad_op.cc  +2 -2
paddle/fluid/operators/sequence_ops/sequence_pool_op.cc  +2 -2
paddle/fluid/operators/sequence_ops/sequence_scatter_op.cc  +3 -3
paddle/fluid/operators/sequence_ops/sequence_slice_op.cc  +2 -2
paddle/fluid/operators/sequence_ops/sequence_softmax_cudnn_op.cu.cc  +10 -5
paddle/fluid/operators/sequence_ops/sequence_unpad_op.cc  +3 -3
paddle/fluid/operators/slice_op.cc  +2 -2
paddle/fluid/operators/space_to_depth_op.cc  +2 -2
paddle/fluid/operators/squared_l2_distance_op.cc  +3 -2
paddle/fluid/operators/squeeze_op.cc  +2 -2
paddle/fluid/operators/strided_slice_op.cc  +2 -2
paddle/fluid/operators/trace_op.cc  +2 -3
paddle/fluid/operators/unfold_op.cc  +2 -2
paddle/fluid/operators/uniform_random_batch_size_like_op.cc  +1 -1
paddle/fluid/operators/unsqueeze_op.cc  +2 -3
paddle/fluid/operators/warpctc_op.cc  +2 -2
paddle/fluid/operators/where_op.cc  +2 -3
paddle/fluid/platform/device_tracer.cc  +23 -20
paddle/fluid/platform/device_tracer.h  +1 -1
paddle/fluid/platform/profiler.cc  +2 -1
python/paddle/fluid/contrib/slim/quantization/quantization_pass.py  +1 -1
python/paddle/fluid/dygraph/dygraph_to_static/program_translator.py  +1 -0
python/paddle/fluid/dygraph/math_op_patch.py  +5 -7
python/paddle/fluid/tests/unittests/dygraph_to_static/test_mnist.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_save_inference_model.py  +6 -1
python/paddle/fluid/tests/unittests/test_conv2d_transpose_op.py  +149 -25
python/paddle/fluid/tests/unittests/test_math_op_patch_var_base.py  +24 -2
python/paddle/fluid/tests/unittests/white_list/op_accuracy_white_list.py  +2 -1
tools/check_ut.py  +3 -2
tools/manylinux1/Dockerfile.cuda10_cudnn7_gcc8_ubuntu16  +9 -17
CMakeLists.txt

@@ -88,7 +88,7 @@ option(WITH_DGC "Use DGC(Deep Gradient Compression) or not" ${WITH_DISTRIBUTE}
 option(SANITIZER_TYPE "Choose the type of sanitizer, options are: Address, Leak, Memory, Thread, Undefined" OFF)
 option(WITH_LITE "Compile Paddle Fluid with Lite Engine" OFF)
 option(WITH_NCCL "Compile PaddlePaddle with NCCL support" ON)
-option(WITH_CRYPTO "Compile PaddlePaddle with crypto support" ON)
+option(WITH_CRYPTO "Compile PaddlePaddle with paddle_crypto lib" ON)
 # PY_VERSION
 if(NOT PY_VERSION)

paddle/fluid/framework/CMakeLists.txt

@@ -148,6 +148,7 @@ cc_library(proto_desc SRCS var_desc.cc op_desc.cc block_desc.cc program_desc.cc
 cc_library(op_registry SRCS op_registry.cc DEPS op_proto_maker op_info operator glog proto_desc)
 cc_library(op_call_stack SRCS op_call_stack.cc DEPS op_proto_maker enforce)
+cc_test(op_call_stack_test SRCS op_call_stack_test.cc DEPS op_call_stack)
 nv_test(op_registry_test SRCS op_registry_test.cc DEPS op_registry)

paddle/fluid/framework/grad_op_desc_maker.h

@@ -18,7 +18,9 @@ limitations under the License. */
 #include <string>
 #include <unordered_map>
 #include <unordered_set>
+#include <utility>
 #include <vector>
+#include "paddle/fluid/framework/op_call_stack.h"
 #include "paddle/fluid/framework/op_desc.h"
 #include "paddle/fluid/framework/operator.h"
 #include "paddle/fluid/imperative/dygraph_grad_maker.h"

@@ -195,7 +197,14 @@ class SingleGradOpMaker<OpDesc> : public GradOpDescMakerBase {
   std::vector<std::unique_ptr<OpDesc>> operator()() const final {
     std::vector<std::unique_ptr<OpDesc>> retv;
     retv.emplace_back(new OpDesc());
-    this->Apply(retv.front().get());
+    try {
+      this->Apply(retv.front().get());
+    } catch (platform::EnforceNotMet& exception) {
+      framework::AppendErrorOpHint(retv.front().get()->Type(), &exception);
+      throw std::move(exception);
+    } catch (...) {
+      std::rethrow_exception(std::current_exception());
+    }
     return retv;
   }

@@ -213,7 +222,14 @@ class SingleGradOpMaker<imperative::OpBase>
     auto node = this->NewGradNode();
     {
       imperative::TracedGradOp traced_grad_op(node);
-      this->Apply(&traced_grad_op);
+      try {
+        this->Apply(&traced_grad_op);
+      } catch (platform::EnforceNotMet& exception) {
+        framework::AppendErrorOpHint(traced_grad_op.Type(), &exception);
+        throw std::move(exception);
+      } catch (...) {
+        std::rethrow_exception(std::current_exception());
+      }
     }
     return node->empty() ? nullptr : node;
   }

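Note: both hunks above apply the same catch, enrich, rethrow pattern around Apply(). Below is a self-contained sketch of that pattern in plain C++, not Paddle code; EnforceNotMet here is a hypothetical stand-in for platform::EnforceNotMet, and the message format only mirrors the "[operator < type > error]" hint seen in the diff.

#include <exception>
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for platform::EnforceNotMet.
struct EnforceNotMet : std::runtime_error {
  using std::runtime_error::runtime_error;
};

// Stand-in for a grad-op maker's Apply() that fails.
void Apply() { throw EnforceNotMet("InvalidArgument: shape mismatch"); }

void MakeGradOp(const std::string& op_type) {
  try {
    Apply();
  } catch (EnforceNotMet& exception) {
    // Enrich the known exception type with the operator being processed.
    throw EnforceNotMet(std::string(exception.what()) + "  [operator < " +
                        op_type + " > error]");
  } catch (...) {
    // Anything else propagates unchanged.
    std::rethrow_exception(std::current_exception());
  }
}

int main() {
  try {
    MakeGradOp("relu_grad");
  } catch (const EnforceNotMet& e) {
    // Prints: InvalidArgument: shape mismatch  [operator < relu_grad > error]
    std::cout << e.what() << "\n";
  }
  return 0;
}

The catch-all branch rethrows the active exception untouched, so only EnforceNotMet gains the operator hint.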
paddle/fluid/framework/op_call_stack.cc

@@ -56,9 +56,15 @@ void InsertCallStackInfo(const std::string &type, const AttributeMap &attrs,
   }
   // Step 3. Construct final call stack & append error op name
   sout << exception->err_str_;
-  sout << "  [operator < " << type << " > error]";
+  if (callstack) {
+    sout << "  [operator < " << type << " > error]";
+  }
   exception->err_str_ = sout.str();
 }
+
+void AppendErrorOpHint(const std::string &type,
+                       platform::EnforceNotMet *exception) {
+  std::ostringstream sout;
+  sout << exception->err_str_;
+  sout << "  [operator < " << type << " > error]";
+  exception->err_str_ = sout.str();
+}

paddle/fluid/framework/op_call_stack.h

@@ -20,7 +20,14 @@ limitations under the License. */
 namespace paddle {
 namespace framework {

+// insert python call stack & append error op for exception message
 void InsertCallStackInfo(const std::string &type, const AttributeMap &attrs,
                          platform::EnforceNotMet *exception);
+
+// only append error op for exception message
+void AppendErrorOpHint(const std::string &type,
+                       platform::EnforceNotMet *exception);
+
 }  // namespace framework
 }  // namespace paddle

paddle/fluid/framework/op_call_stack_test.cc (new file, mode 100644)

/* Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/fluid/framework/op_call_stack.h"

#include <string>
#include <vector>

#include "gtest/gtest.h"

namespace paddle {
namespace framework {
namespace details {

static void ThrowEnforceNotMet() {
  PADDLE_THROW(platform::errors::InvalidArgument(
      "\n----------------------\nError Message "
      "Summary:\n----------------------\n"
      "Created error."));
}

}  // namespace details
}  // namespace framework
}  // namespace paddle

TEST(OpCallStack, InsertCallStackInfo) {
  try {
    paddle::framework::details::ThrowEnforceNotMet();
  } catch (paddle::platform::EnforceNotMet &exception) {
    paddle::framework::AttributeMap attr_map;
    std::string stack_test_str = "test for op callstack";
    std::vector<std::string> stack_test_vec;
    stack_test_vec.emplace_back(stack_test_str);
    attr_map["op_callstack"] = stack_test_vec;
    paddle::framework::InsertCallStackInfo("test", attr_map, &exception);
    std::string ex_msg = exception.what();
    EXPECT_TRUE(ex_msg.find(stack_test_str) != std::string::npos);
    EXPECT_TRUE(ex_msg.find("[operator < test > error]") != std::string::npos);
  }
}

TEST(OpCallStack, AppendErrorOpHint) {
  try {
    paddle::framework::details::ThrowEnforceNotMet();
  } catch (paddle::platform::EnforceNotMet &exception) {
    paddle::framework::AppendErrorOpHint("test", &exception);
    std::string ex_msg = exception.what();
    EXPECT_TRUE(ex_msg.find("[operator < test > error]") != std::string::npos);
  }
}

paddle/fluid/imperative/dygraph_grad_maker.h

@@ -258,6 +258,8 @@ class TracedGradOp {
     }
   }

+  std::string Type() const { return op_->Type(); }
+
   void SetType(const std::string& type) { op_->SetType(type); }

   void SetAttrMap(const framework::AttributeMap& attrs) {

paddle/fluid/inference/tensorrt/convert/hard_sigmoid_op.cc

@@ -25,7 +25,7 @@ class HardSigmoidOpConverter : public OpConverter {
  public:
  void operator()(const framework::proto::OpDesc& op,
                  const framework::Scope& scope, bool test_mode) override {
-#if IS_TRT_VERSION_GE(5000)
+#if IS_TRT_VERSION_GE(5130)
    VLOG(3) << "convert a fluid HardSigmoid op to tensorrt IActivationLayer "
               "layer without bias";
    framework::OpDesc op_desc(op, nullptr);

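Note: the new guard value reads as TensorRT 5.1.3 if IS_TRT_VERSION_GE packs versions as major*1000 + minor*100 + patch*10; that packing is an assumption here, sketched below for reference, not a copy of Paddle's macro.

// Sketch: encode a TensorRT version x.y.z as one integer so a single >=
// comparison can gate version-dependent conversions. Assumes the
// major*1000 + minor*100 + patch*10 convention.
constexpr int PackTrtVersion(int major, int minor, int patch) {
  return major * 1000 + minor * 100 + patch * 10;
}
static_assert(PackTrtVersion(5, 0, 0) == 5000, "5000 would mean TRT 5.0.0");
static_assert(PackTrtVersion(5, 1, 3) == 5130, "5130 would mean TRT 5.1.3");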
paddle/fluid/operators/activation_op.cc

@@ -822,10 +822,10 @@ class SquareDoubleGradMaker : public ::paddle::framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_INPLACE_OP_INFERER(ActivationGradOpInplaceInference,
+DECLARE_INPLACE_OP_INFERER(ActivationGradOpInplaceInferer,
                            {framework::GradVarName("Out"),
                             framework::GradVarName("X")});
-DECLARE_INPLACE_OP_INFERER(ActivationDoubleGradOpInplaceInference,
+DECLARE_INPLACE_OP_INFERER(ActivationDoubleGradOpInplaceInferer,
                            {"DDX", "DDOut"});

 template <typename T>

@@ -913,7 +913,7 @@ namespace plat = paddle::platform;
       std::conditional<ops::CanInplaceAct<ops::grad_functor<float>>(), \
                        ops::ActFwdInplaceInferer, void>::type);        \
   REGISTER_OPERATOR(KERNEL_TYPE##_grad, ops::ActivationOpGrad,         \
-                    ops::ActivationGradOpInplaceInference);
+                    ops::ActivationGradOpInplaceInferer);

 #define REGISTER_ACTIVATION_CPU_KERNEL(act_type, op_name, functor, \
                                        grad_functor)               \

@@ -941,13 +941,13 @@ REGISTER_OPERATOR(
                       paddle::imperative::OpBase>,
     ops::ActFwdInplaceInferer);
 REGISTER_OPERATOR(relu_grad, ops::ActivationOpGrad,
-                  ops::ActivationGradOpInplaceInference,
+                  ops::ActivationGradOpInplaceInferer,
                   ops::ReluDoubleGradMaker<paddle::framework::OpDesc>,
                   ops::ReluDoubleGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(
     relu_grad_grad,
     ops::ActivationOpDoubleGrad2<ops::ReluGradFunctor<float>::FwdDeps()>,
-    ops::ActivationDoubleGradOpInplaceInference);
+    ops::ActivationDoubleGradOpInplaceInferer);

 REGISTER_ACTIVATION_CPU_KERNEL(relu, Relu, ReluFunctor, ReluGradFunctor);

@@ -971,13 +971,13 @@ REGISTER_OPERATOR(
                       paddle::imperative::OpBase>,
     ops::ActFwdInplaceInferer);
 REGISTER_OPERATOR(leaky_relu_grad, ops::ActivationOpGrad,
-                  ops::ActivationGradOpInplaceInference,
+                  ops::ActivationGradOpInplaceInferer,
                   ops::LeakyReluDoubleGradMaker<paddle::framework::OpDesc>,
                   ops::LeakyReluDoubleGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(
     leaky_relu_grad_grad,
     ops::ActivationOpDoubleGrad2<ops::LeakyReluGradFunctor<float>::FwdDeps()>,
-    ops::ActivationDoubleGradOpInplaceInference);
+    ops::ActivationDoubleGradOpInplaceInferer);

 REGISTER_ACTIVATION_CPU_KERNEL(leaky_relu, LeakyRelu, LeakyReluFunctor,
                                LeakyReluGradFunctor);

@@ -1000,13 +1000,13 @@ REGISTER_OPERATOR(
                       paddle::imperative::OpBase>,
     ops::ActFwdInplaceInferer);
 REGISTER_OPERATOR(elu_grad, ops::ActivationOpGrad,
-                  ops::ActivationGradOpInplaceInference,
+                  ops::ActivationGradOpInplaceInferer,
                   ops::ELUDoubleGradMaker<paddle::framework::OpDesc>,
                   ops::ELUDoubleGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(
     elu_grad_grad,
     ops::ActivationOpDoubleGrad<ops::ELUGradFunctor<float>::FwdDeps()>,
-    ops::ActivationDoubleGradOpInplaceInference);
+    ops::ActivationDoubleGradOpInplaceInferer);

 REGISTER_ACTIVATION_CPU_KERNEL(elu, ELU, ELUFunctor, ELUGradFunctor);
 REGISTER_OP_CPU_KERNEL(

@@ -1028,13 +1028,13 @@ REGISTER_OPERATOR(
                       paddle::imperative::OpBase>,
     ops::ActFwdInplaceInferer);
 REGISTER_OPERATOR(sqrt_grad, ops::ActivationOpGrad,
-                  ops::ActivationGradOpInplaceInference,
+                  ops::ActivationGradOpInplaceInferer,
                   ops::SqrtDoubleGradMaker<paddle::framework::OpDesc>,
                   ops::SqrtDoubleGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(
     sqrt_grad_grad,
     ops::ActivationOpDoubleGrad<ops::SqrtGradGradFunctor<float>::FwdDeps()>,
-    ops::ActivationDoubleGradOpInplaceInference);
+    ops::ActivationDoubleGradOpInplaceInferer);

 REGISTER_ACTIVATION_CPU_KERNEL(sqrt, Sqrt, SqrtFunctor, SqrtGradFunctor);
 REGISTER_OP_CPU_KERNEL(

@@ -1056,13 +1056,13 @@ REGISTER_OPERATOR(
                       paddle::imperative::OpBase>,
     ops::ActFwdInplaceInferer);
 REGISTER_OPERATOR(square_grad, ops::ActivationOpGrad,
-                  ops::ActivationGradOpInplaceInference,
+                  ops::ActivationGradOpInplaceInferer,
                   ops::SquareDoubleGradMaker<paddle::framework::OpDesc>,
                   ops::SquareDoubleGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(
     square_grad_grad,
     ops::ActivationOpDoubleGrad<ops::SquareGradGradFunctor<float>::FwdDeps()>,
-    ops::ActivationDoubleGradOpInplaceInference);
+    ops::ActivationDoubleGradOpInplaceInferer);

 REGISTER_OP_CPU_KERNEL(square,
                        ops::ActivationKernel<paddle::platform::CPUDeviceContext,

@@ -1106,7 +1106,7 @@ REGISTER_OPERATOR(
     std::conditional<ops::CanInplaceAct<ops::PowGradFunctor<float>>(),
                      ops::ActFwdInplaceInferer, void>::type);
 REGISTER_OPERATOR(pow_grad, ops::PowOpGrad,
-                  ops::ActivationGradOpInplaceInference);
+                  ops::ActivationGradOpInplaceInferer);
 REGISTER_OP_CPU_KERNEL(
     pow, ops::PowKernel<plat::CPUDeviceContext, ops::PowFunctor<float>>,

@@ -1131,7 +1131,7 @@ REGISTER_OPERATOR(
     std::conditional<ops::CanInplaceAct<ops::ExpGradFunctor<float>>(),
                      ops::ActFwdInplaceInferer, void>::type);
 REGISTER_OPERATOR(exp_grad, ops::ActivationOpGrad,
-                  ops::ActivationGradOpInplaceInference);
+                  ops::ActivationGradOpInplaceInferer);
 REGISTER_OP_CPU_KERNEL(exp,
                        ops::ActivationKernel<paddle::platform::CPUDeviceContext,

@@ -1163,7 +1163,7 @@ REGISTER_OPERATOR(
     std::conditional<ops::CanInplaceAct<ops::AbsGradFunctor<float>>(),
                      ops::ActFwdInplaceInferer, void>::type);
 REGISTER_OPERATOR(abs_grad, ops::ActivationOpGrad,
-                  ops::ActivationGradOpInplaceInference);
+                  ops::ActivationGradOpInplaceInferer);
 REGISTER_OP_CPU_KERNEL(abs,
                        ops::ActivationKernel<paddle::platform::CPUDeviceContext,

paddle/fluid/operators/argsort_op.cc

@@ -116,7 +116,7 @@ class ArgsortGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ArgsortGradNoNeedBufferVarInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ArgsortGradNoNeedBufferVarsInferer, "X");

 }  // namespace operators
 }  // namespace paddle

@@ -126,7 +126,7 @@ REGISTER_OPERATOR(argsort, ops::ArgsortOp, ops::ArgsortOpMaker,
                   ops::ArgsortGradOpMaker<paddle::framework::OpDesc>,
                   ops::ArgsortGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(argsort_grad, ops::ArgsortGradOp,
-                  ops::ArgsortGradNoNeedBufferVarInference);
+                  ops::ArgsortGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(argsort,
                        ops::ArgsortKernel<paddle::platform::CPUPlace, float>,
                        ops::ArgsortKernel<paddle::platform::CPUPlace, double>,

paddle/fluid/operators/batch_fc_op.cc

@@ -136,7 +136,7 @@ class BatchFCGradOpMaker : public framework::SingleGradOpMaker<T> {
     op->SetAttrMap(this->Attrs());
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(BatchFCGradOpNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(BatchFCGradOpNoNeedBufferVarsInferer,
                                     "Bias");

 }  // namespace operators

@@ -148,7 +148,7 @@ REGISTER_OPERATOR(batch_fc, ops::BatchFCOp, ops::BatchFCOpMaker,
                   ops::BatchFCGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(batch_fc_grad, ops::BatchFCGradOp,
-                  ops::BatchFCGradOpNoNeedBufferVarsInference);
+                  ops::BatchFCGradOpNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     batch_fc, ops::BatchFCKernel<paddle::platform::CPUDeviceContext, float>,

paddle/fluid/operators/batch_size_like.h

@@ -74,7 +74,7 @@ class BatchSizeLikeOpMaker : public framework::OpProtoAndCheckerMaker {
   virtual void Apply() = 0;
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(BatchSizeLikeNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(BatchSizeLikeNoNeedBufferVarsInferer,
                                     "Input");

 }  // namespace operators

paddle/fluid/operators/concat_op.cc

@@ -175,7 +175,7 @@ class ConcatOpGrad : public framework::OperatorWithKernel {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ConcatOpGradNoNeedBufferVarInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ConcatOpGradNoNeedBufferVarInferer, "X");

 template <typename T>
 class ConcatGradOpMaker : public framework::SingleGradOpMaker<T> {

@@ -203,7 +203,7 @@ REGISTER_OPERATOR(concat, ops::ConcatOp, ops::ConcatOpMaker,
                   ops::ConcatGradOpMaker<paddle::framework::OpDesc>,
                   ops::ConcatGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(concat_grad, ops::ConcatOpGrad,
-                  ops::ConcatOpGradNoNeedBufferVarInference);
+                  ops::ConcatOpGradNoNeedBufferVarInferer);
 REGISTER_OP_CPU_KERNEL(
     concat, ops::ConcatKernel<paddle::platform::CPUDeviceContext, double>,
     ops::ConcatKernel<paddle::platform::CPUDeviceContext, float>,

paddle/fluid/operators/conv_cudnn_helper.h

@@ -148,7 +148,7 @@ struct SearchAlgorithm<cudnnConvolutionFwdAlgoPerf_t> {
     }
 #endif
-    if (!exhaustive) {
+    if (!exhaustive && !deterministic) {
 #if CUDNN_VERSION >= 7001
       int perf_count;
       int best_algo_idx = 0;

@@ -185,6 +185,8 @@ struct SearchAlgorithm<cudnnConvolutionFwdAlgoPerf_t> {
           workspace_size_limit, &algo));
 #endif
       VLOG(3) << "choose algo " << algo;
+    } else if (deterministic) {
+      algo = static_cast<cudnnConvolutionFwdAlgo_t>(1);
     } else {
       auto& dev_ctx =
           ctx.template device_context<platform::CUDADeviceContext>();

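Note: taken together, the two hunks reorder algorithm selection so that deterministic mode no longer falls into either search path. A compact sketch of the resulting control flow follows; it is illustrative only, not the real SearchAlgorithm code, which queries cuDNN to fill algo.

// Illustrative control flow only; the real code calls into cuDNN.
enum class FwdAlgoChoice { kHeuristic, kFixedDeterministic, kExhaustiveSearch };

FwdAlgoChoice ChooseFwdAlgo(bool exhaustive, bool deterministic) {
  if (!exhaustive && !deterministic) {
    // cuDNN heuristic query path (elided in the hunk above).
    return FwdAlgoChoice::kHeuristic;
  } else if (deterministic) {
    // Pin one algorithm (the diff uses static_cast<cudnnConvolutionFwdAlgo_t>(1))
    // so repeated runs select the same kernel.
    return FwdAlgoChoice::kFixedDeterministic;
  } else {
    // Cached exhaustive search over all available algorithms.
    return FwdAlgoChoice::kExhaustiveSearch;
  }
}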
paddle/fluid/operators/conv_transpose_cudnn_op.cu

@@ -245,7 +245,8 @@ class CUDNNConvTransposeOpKernel : public framework::OpKernel<T> {
     int output_offset =
         transformed_output.numel() / transformed_output.dims()[0] / groups;
     int filter_offset = filter->numel() / groups;
-    T alpha = static_cast<T>(1.0), beta = static_cast<T>(0.0);
+    ScalingParamType<T> alpha = 1.0f;
+    ScalingParamType<T> beta = 0.0f;
     auto workspace_handle = dev_ctx.cudnn_workspace_handle();
     for (int g = 0; g < groups; g++) {
       auto cudnn_func = [&](void* cudnn_workspace) {

@@ -493,7 +494,8 @@ class CUDNNConvTransposeGradOpKernel : public framework::OpKernel<T> {
     int output_grad_offset = transformed_output_grad.numel() /
                              transformed_output_grad.dims()[0] / groups;
     int filter_offset = filter->numel() / groups;
-    T alpha = static_cast<T>(1.0), beta = static_cast<T>(0.0);
+    ScalingParamType<T> alpha = 1.0f;
+    ScalingParamType<T> beta = 0.0f;
     auto workspace_handle = dev_ctx.cudnn_workspace_handle();
     if (input_grad) {
       // Because beta is zero, it is unnecessary to reset input_grad.

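Note: the switch from T to ScalingParamType<T> matters once T is half precision, since cuDNN expects float alpha/beta scaling factors for half tensors rather than half-typed values. A minimal sketch of the idea behind such a trait follows; the names and the float16 stand-in are illustrative assumptions, not Paddle's actual definitions.

#include <cstdint>
#include <type_traits>

// Illustrative stand-in for a half-precision element type.
struct float16 { uint16_t bits; };

// Map the tensor element type to the type used for cuDNN alpha/beta:
// float for half-precision tensors, the element type itself otherwise.
template <typename T>
struct ScalingParam { using type = T; };
template <>
struct ScalingParam<float16> { using type = float; };

template <typename T>
using ScalingParamType = typename ScalingParam<T>::type;

static_assert(std::is_same<ScalingParamType<float16>, float>::value,
              "half-precision kernels scale with float alpha/beta");
static_assert(std::is_same<ScalingParamType<double>, double>::value,
              "double kernels keep double alpha/beta");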
paddle/fluid/operators/crop_op.cc

@@ -203,7 +203,7 @@ class CropGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(GropNoNeedBufferVarInference, "Y");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(GropNoNeedBufferVarInferer, "Y");

 }  // namespace operators
 }  // namespace paddle

@@ -212,7 +212,7 @@ namespace ops = paddle::operators;
 REGISTER_OPERATOR(crop, ops::CropOp, ops::CropOpMaker,
                   ops::CropGradOpMaker<paddle::framework::OpDesc>,
                   ops::CropGradOpMaker<paddle::imperative::OpBase>,
-                  ops::GropNoNeedBufferVarInference);
+                  ops::GropNoNeedBufferVarInferer);
 REGISTER_OPERATOR(crop_grad, ops::CropOpGrad);
 REGISTER_OP_CPU_KERNEL(
     crop, ops::CropKernel<paddle::platform::CPUDeviceContext, float>,

paddle/fluid/operators/cvm_op.cc

@@ -153,8 +153,8 @@ class CVMGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(CVMNoNeedBufferVarInference, "CVM");
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(CVMGradNoNeedBufferVarInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(CVMNoNeedBufferVarInferer, "CVM");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(CVMGradNoNeedBufferVarInferer, "X");

 }  // namespace operators
 }  // namespace paddle

@@ -163,10 +163,10 @@ namespace ops = paddle::operators;
 REGISTER_OPERATOR(cvm, ops::CVMOp, ops::CVMOpMaker,
                   ops::CVMGradOpMaker<paddle::framework::OpDesc>,
                   ops::CVMGradOpMaker<paddle::imperative::OpBase>,
-                  ops::CVMNoNeedBufferVarInference);
+                  ops::CVMNoNeedBufferVarInferer);

 REGISTER_OPERATOR(cvm_grad, ops::CVMGradientOp,
-                  ops::CVMGradNoNeedBufferVarInference);
+                  ops::CVMGradNoNeedBufferVarInferer);

 REGISTER_OP_CPU_KERNEL(cvm, ops::CVMOpKernel<float>, ops::CVMOpKernel<double>);

paddle/fluid/operators/elementwise/elementwise_add_op.cc

@@ -97,15 +97,15 @@ REGISTER_ELEMWISE_EXPLICIT_OP_WITHOUT_GRAD(elementwise_add, Add);
 namespace ops = paddle::operators;

 REGISTER_OPERATOR(
-    elementwise_add_grad, ops::ElementwiseOpGrad, ops::ElementwiseGradOpInplace,
-    ops::ElementwiseGradNoBufVarsInference,
+    elementwise_add_grad, ops::ElementwiseOpGrad,
+    ops::ElementwiseGradOpInplaceInferer, ops::ElementwiseGradNoBufVarsInferer,
     ops::ElementwiseAddDoubleGradMaker<paddle::framework::OpDesc>,
     ops::ElementwiseAddDoubleGradMaker<paddle::imperative::OpBase>);

 REGISTER_OPERATOR(elementwise_add_grad_grad,
                   ops::ElementwiseOpDoubleGradWithoutDXDY,
-                  ops::ElementwiseDoubleGradOpInplace,
-                  ops::ElementwiseDoubleGradNoBufVarsInference);
+                  ops::ElementwiseDoubleGradOpInplaceInferer,
+                  ops::ElementwiseDoubleGradNoBufVarsInferer);

 REGISTER_OP_CPU_KERNEL(
     elementwise_add,

paddle/fluid/operators/elementwise/elementwise_div_op.cc

@@ -123,7 +123,7 @@ REGISTER_OPERATOR(
     ops::ElementwiseDivDoubleGradMaker<paddle::imperative::OpBase>);

 REGISTER_OPERATOR(elementwise_div_grad_grad, ops::ElementwiseDivOpDoubleGrad,
-                  ops::ElementwiseDoubleGradOpInplace);
+                  ops::ElementwiseDoubleGradOpInplaceInferer);

 REGISTER_OP_CPU_KERNEL(
     elementwise_div,

paddle/fluid/operators/elementwise/elementwise_mul_op.cc

@@ -123,7 +123,7 @@ REGISTER_OPERATOR(
     ops::ElementwiseMulDoubleGradMaker<paddle::imperative::OpBase>);

 REGISTER_OPERATOR(elementwise_mul_grad_grad, ops::ElementwiseOpDoubleGrad,
-                  ops::ElementwiseDoubleGradOpInplace);
+                  ops::ElementwiseDoubleGradOpInplaceInferer);

 REGISTER_OP_CPU_KERNEL(
     elementwise_mul,

paddle/fluid/operators/elementwise/elementwise_op.h

@@ -348,16 +348,16 @@ class ElemwiseGradKernel : public framework::OpKernel<T> {
   }
 };

-DECLARE_INPLACE_OP_INFERER(ElementwiseOpInplace, {"X", "Out"});
-DECLARE_INPLACE_OP_INFERER(ElementwiseGradOpInplace,
+DECLARE_INPLACE_OP_INFERER(ElementwiseOpInplaceInferer, {"X", "Out"});
+DECLARE_INPLACE_OP_INFERER(ElementwiseGradOpInplaceInferer,
                            {framework::GradVarName("Out"),
                             framework::GradVarName("X")});
-DECLARE_INPLACE_OP_INFERER(ElementwiseDoubleGradOpInplace, {"DDX", "DDOut"});
+DECLARE_INPLACE_OP_INFERER(ElementwiseDoubleGradOpInplaceInferer,
+                           {"DDX", "DDOut"});

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ElementwiseGradNoBufVarsInference, "X",
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ElementwiseGradNoBufVarsInferer, "X",
                                     "Y");
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ElementwiseDoubleGradNoBufVarsInference,
-                                    "DOut");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ElementwiseDoubleGradNoBufVarsInferer, "Y",
+                                    "DOut");

 }  // namespace operators
 }  // namespace paddle

@@ -389,4 +389,4 @@
       ::paddle::operators::ElementwiseOpInferVarType,    \
       op_type##GradMaker<::paddle::framework::OpDesc>,   \
       op_type##GradMaker<::paddle::imperative::OpBase>,  \
-      ::paddle::operators::ElementwiseOpInplace);
+      ::paddle::operators::ElementwiseOpInplaceInferer);

paddle/fluid/operators/elementwise/elementwise_sub_op.cc

@@ -97,14 +97,14 @@ REGISTER_ELEMWISE_EXPLICIT_OP_WITHOUT_GRAD(elementwise_sub, Sub);
 namespace ops = paddle::operators;

 REGISTER_OPERATOR(
-    elementwise_sub_grad, ops::ElementwiseOpGrad, ops::ElementwiseGradOpInplace,
-    ops::ElementwiseGradNoBufVarsInference,
+    elementwise_sub_grad, ops::ElementwiseOpGrad,
+    ops::ElementwiseGradOpInplaceInferer, ops::ElementwiseGradNoBufVarsInferer,
     ops::ElementwiseSubDoubleGradMaker<paddle::framework::OpDesc>,
     ops::ElementwiseSubDoubleGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(elementwise_sub_grad_grad,
                   ops::ElementwiseOpDoubleGradWithoutDXDY,
-                  ops::ElementwiseDoubleGradOpInplace,
-                  ops::ElementwiseDoubleGradNoBufVarsInference);
+                  ops::ElementwiseDoubleGradOpInplaceInferer,
+                  ops::ElementwiseDoubleGradNoBufVarsInferer);

 REGISTER_OP_CPU_KERNEL(
     elementwise_sub,

paddle/fluid/operators/fill_constant_batch_size_like_op.cc

@@ -63,7 +63,7 @@ REGISTER_OPERATOR(
     paddle::framework::EmptyGradOpMaker<paddle::framework::OpDesc>,
     paddle::framework::EmptyGradOpMaker<paddle::imperative::OpBase>,
     ops::FillConstantBatchSizeLikeOpMaker,
-    ops::BatchSizeLikeNoNeedBufferVarsInference);
+    ops::BatchSizeLikeNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     fill_constant_batch_size_like,
     ops::FillConstantBatchSizeLikeOpKernel<paddle::platform::CPUDeviceContext,

paddle/fluid/operators/fill_zeros_like_op.cc

@@ -71,7 +71,7 @@ class FillZerosLikeOp2Maker : public FillZerosLikeOpMaker {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(FillZerosLikeOp2NoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(FillZerosLikeOp2NoNeedBufferVarsInferer,
                                     "X");

 }  // namespace operators

@@ -83,7 +83,7 @@ REGISTER_OP_WITHOUT_GRADIENT(fill_zeros_like, ops::FillZerosLikeOp,
 REGISTER_OPERATOR(
     fill_zeros_like2, ops::FillZerosLikeOp2, ops::FillZerosLikeOp2Maker,
-    ops::FillZerosLikeOp2NoNeedBufferVarsInference,
+    ops::FillZerosLikeOp2NoNeedBufferVarsInferer,
     paddle::framework::EmptyGradOpMaker<paddle::framework::OpDesc>,
     paddle::framework::EmptyGradOpMaker<paddle::imperative::OpBase>);

paddle/fluid/operators/flatten_op.cc

@@ -241,11 +241,11 @@ class Flatten2GradOp : public framework::OperatorWithKernel {
   }
 };

-DECLARE_INPLACE_OP_INFERER(FlattenOpInplaceInToOut, {"X", "Out"});
-DECLARE_INPLACE_OP_INFERER(FlattenGradInplaceinToOut,
+DECLARE_INPLACE_OP_INFERER(FlattenOpInplaceInferer, {"X", "Out"});
+DECLARE_INPLACE_OP_INFERER(FlattenGradInplaceInferer,
                            {framework::GradVarName("Out"),
                             framework::GradVarName("X")});
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(FlattenGradNoNeedBufferVarsInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(FlattenGradNoNeedBufferVarsInferer, "X");

 }  // namespace operators
 }  // namespace paddle

@@ -254,17 +254,17 @@ namespace ops = paddle::operators;
 REGISTER_OPERATOR(flatten, ops::FlattenOp, ops::FlattenOpMaker,
                   ops::FlattenGradOpMaker<paddle::framework::OpDesc>,
                   ops::FlattenGradOpMaker<paddle::imperative::OpBase>,
-                  ops::FlattenOpInplaceInToOut);
+                  ops::FlattenOpInplaceInferer);
 REGISTER_OPERATOR(flatten_grad, ops::FlattenGradOp,
-                  ops::FlattenGradInplaceinToOut,
-                  ops::FlattenGradNoNeedBufferVarsInference);
+                  ops::FlattenGradInplaceInferer,
+                  ops::FlattenGradNoNeedBufferVarsInferer);
 REGISTER_OPERATOR(flatten2, ops::Flatten2Op, ops::Flatten2OpMaker,
                   ops::Flatten2GradOpMaker<paddle::framework::OpDesc>,
                   ops::Flatten2GradOpMaker<paddle::imperative::OpBase>,
-                  ops::FlattenOpInplaceInToOut);
+                  ops::FlattenOpInplaceInferer);
 REGISTER_OPERATOR(flatten2_grad, ops::Flatten2GradOp,
-                  ops::FlattenGradInplaceinToOut);
+                  ops::FlattenGradInplaceInferer);
 REGISTER_OP_CPU_KERNEL(
     flatten, ops::FlattenKernel<paddle::platform::CPUDeviceContext, float>,

paddle/fluid/operators/gather_nd_op.cc

@@ -166,7 +166,7 @@ class GatherNdGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(GatherNdGradNoNeedBufferVarInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(GatherNdGradNoNeedBufferVarInferer, "X");

 }  // namespace operators
 }  // namespace paddle

@@ -178,7 +178,7 @@ REGISTER_OPERATOR(gather_nd, ops::GatherNdOp, ops::GatherNdOpMaker,
                   ops::GatherNdGradOpMaker<paddle::imperative::OpBase>);

 REGISTER_OPERATOR(gather_nd_grad, ops::GatherNdGradOp,
-                  ops::GatherNdGradNoNeedBufferVarInference);
+                  ops::GatherNdGradNoNeedBufferVarInferer);

 REGISTER_OP_CPU_KERNEL(gather_nd, ops::GatherNdOpKernel<float>,
                        ops::GatherNdOpKernel<double>,

paddle/fluid/operators/gather_op.cc

@@ -127,7 +127,7 @@ class GatherGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(GatherGradNoNeedBufferVarInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(GatherGradNoNeedBufferVarInferer, "X");

 }  // namespace operators
 }  // namespace paddle

@@ -137,7 +137,7 @@ REGISTER_OPERATOR(gather, ops::GatherOp, ops::GatherOpMaker,
                   ops::GatherGradOpMaker<paddle::framework::OpDesc>,
                   ops::GatherGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(gather_grad, ops::GatherGradOp,
-                  ops::GatherGradNoNeedBufferVarInference);
+                  ops::GatherGradNoNeedBufferVarInferer);
 REGISTER_OP_CPU_KERNEL(gather, ops::GatherOpKernel<float>,
                        ops::GatherOpKernel<double>, ops::GatherOpKernel<int>,
                        ops::GatherOpKernel<uint8_t>,

paddle/fluid/operators/gaussian_random_batch_size_like_op.cc

@@ -74,6 +74,6 @@ REGISTER_OPERATOR(
     paddle::operators::GaussianRandomBatchSizeLikeOpMaker,
     paddle::framework::EmptyGradOpMaker<paddle::framework::OpDesc>,
     paddle::framework::EmptyGradOpMaker<paddle::imperative::OpBase>,
-    paddle::operators::BatchSizeLikeNoNeedBufferVarsInference);
+    paddle::operators::BatchSizeLikeNoNeedBufferVarsInferer);

 // Kernels are registered in gaussian_random_op.cc and gaussian_random_op.cu

paddle/fluid/operators/group_norm_op.cc

@@ -216,8 +216,8 @@ class GroupNormGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_INPLACE_OP_INFERER(GroupNormInplaceInToOut, {"X", "Y"});
-DECLARE_INPLACE_OP_INFERER(GroupNormGradInplaceInToOut,
+DECLARE_INPLACE_OP_INFERER(GroupNormInplaceInferer, {"X", "Y"});
+DECLARE_INPLACE_OP_INFERER(GroupNormGradInplaceInferer,
                            {framework::GradVarName("Y"),
                             framework::GradVarName("X")});

@@ -239,9 +239,9 @@ REGISTER_OPERATOR(group_norm, ops::GroupNormOp, ops::GroupNormOpMaker,
                   ops::GroupNormOpInferVarType,
                   ops::GroupNormGradMaker<paddle::framework::OpDesc>,
                   ops::GroupNormGradMaker<paddle::imperative::OpBase>,
-                  ops::GroupNormInplaceInToOut);
+                  ops::GroupNormInplaceInferer);
 REGISTER_OPERATOR(group_norm_grad, ops::GroupNormGradOp,
-                  ops::GroupNormGradInplaceInToOut);
+                  ops::GroupNormGradInplaceInferer);
 REGISTER_OP_CPU_KERNEL(
     group_norm, ops::GroupNormKernel<paddle::platform::CPUDeviceContext, float>,
     ops::GroupNormKernel<paddle::platform::CPUDeviceContext, double>);

paddle/fluid/operators/gru_op.cc

@@ -456,7 +456,7 @@ class GRUGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(GRUGradOpNoNeedBufferVarInference, "Input",
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(GRUGradOpNoNeedBufferVarInferer, "Input",
                                     "Bias");

 }  // namespace operators

@@ -467,7 +467,7 @@ REGISTER_OPERATOR(gru, ops::GRUOp, ops::GRUOpMaker,
                   ops::GRUGradOpMaker<paddle::framework::OpDesc>,
                   ops::GRUGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(gru_grad, ops::GRUGradOp,
-                  ops::GRUGradOpNoNeedBufferVarInference);
+                  ops::GRUGradOpNoNeedBufferVarInferer);
 REGISTER_OP_CPU_KERNEL(gru, ops::GRUCPUKernel<float>,
                        ops::GRUCPUKernel<double>);
 REGISTER_OP_CPU_KERNEL(

paddle/fluid/operators/gru_unit_op.cc

@@ -234,7 +234,7 @@ class GRUUnitGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(GRUUnitGradOpNoNeedBufferVarInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(GRUUnitGradOpNoNeedBufferVarInferer,
                                     "Bias");

 }  // namespace operators

@@ -246,7 +246,7 @@ REGISTER_OPERATOR(gru_unit, ops::GRUUnitOp, ops::GRUUnitOpMaker,
                   ops::GRUUnitGradOpMaker<paddle::framework::OpDesc>,
                   ops::GRUUnitGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(gru_unit_grad, ops::GRUUnitGradOp,
-                  ops::GRUUnitGradOpNoNeedBufferVarInference);
+                  ops::GRUUnitGradOpNoNeedBufferVarInferer);
 REGISTER_OP_CPU_KERNEL(
     gru_unit, ops::GRUUnitKernel<paddle::platform::CPUDeviceContext, float>,

paddle/fluid/operators/reduce_ops/reduce_mean_op.cc

@@ -82,8 +82,7 @@ class ReduceMeanDoubleGradOpBaseMaker : public imperative::GradOpBaseMakerBase {
     }
   }
 };

-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ReduceMeanGradNoNeedBufferVarInference,
-                                    "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ReduceMeanGradNoNeedBufferVarInferer, "X");

 }  // namespace operators
 }  // namespace paddle

@@ -99,7 +98,7 @@ REGISTER_OPERATOR(reduce_mean, ops::ReduceOp, __reduce_meanMaker__,
 REGISTER_OPERATOR(reduce_mean_grad, ops::ReduceGradOp,
                   ops::ReduceMeanDoubleGradDescMaker,
                   ops::ReduceMeanDoubleGradOpBaseMaker,
-                  ops::ReduceMeanGradNoNeedBufferVarInference);
+                  ops::ReduceMeanGradNoNeedBufferVarInferer);
 REGISTER_OP_CPU_KERNEL(reduce_mean,
                        ops::ReduceKernel<paddle::platform::CPUDeviceContext,
                                          float, ops::MeanFunctor>,

paddle/fluid/operators/reduce_ops/reduce_sum_op.cc  View file @ 5fd2ff0a
...
@@ -51,7 +51,7 @@ class ReduceSumOpGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ReduceSumGradNoNeedBufferVarInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ReduceSumGradNoNeedBufferVarInferer, "X");
 
 class ReduceSumVarTypeInference : public paddle::framework::VarTypeInference {
  public:
  void operator()(paddle::framework::InferVarTypeContext* ctx) const override {
...
@@ -77,7 +77,7 @@ REGISTER_OPERATOR(reduce_sum, ops::ReduceOp, ReduceSumOpMaker,
                   ops::ReduceSumOpGradMaker<paddle::framework::OpDesc>,
                   ops::ReduceSumOpGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(reduce_sum_grad, ops::ReduceGradOp,
-                  ops::ReduceSumGradNoNeedBufferVarInference);
+                  ops::ReduceSumGradNoNeedBufferVarInferer);
 REGISTER_OP_CPU_KERNEL(
     reduce_sum, ops::ReduceKernel<paddle::platform::CPUDeviceContext, float,
...
paddle/fluid/operators/roll_op.cc  View file @ 5fd2ff0a
...
@@ -121,7 +121,7 @@ class RollGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(RollGradNoNeedBufferVarsInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(RollGradNoNeedBufferVarsInferer, "X");
 
 }  // namespace operators
 }  // namespace paddle
...
@@ -130,7 +130,7 @@ REGISTER_OPERATOR(roll, ops::RollOp, ops::RollOpMaker,
                   ops::RollGradMaker<paddle::framework::OpDesc>,
                   ops::RollGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(roll_grad, ops::RollGradOp,
-                  ops::RollGradNoNeedBufferVarsInference);
+                  ops::RollGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     roll, ops::RollKernel<paddle::platform::CPUDeviceContext, float>,
     ops::RollKernel<paddle::platform::CPUDeviceContext, double>,
...
paddle/fluid/operators/scatter_nd_add_op.cc  View file @ 5fd2ff0a
...
@@ -170,7 +170,7 @@ class ScatterNdAddGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ScatterNdAddGradNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ScatterNdAddGradNoNeedBufferVarsInferer,
                                     "Updates");
 
 }  // namespace operators
...
@@ -183,7 +183,7 @@ REGISTER_OPERATOR(scatter_nd_add, ops::ScatterNdAddOp, ops::ScatterNdAddOpMaker,
                   ops::ScatterNdAddGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(scatter_nd_add_grad, ops::ScatterNdAddGradOp,
-                  ops::ScatterNdAddGradNoNeedBufferVarsInference);
+                  ops::ScatterNdAddGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(scatter_nd_add, ops::ScatterNdAddOpKernel<float>,
                        ops::ScatterNdAddOpKernel<double>,
...
paddle/fluid/operators/scatter_op.cc  View file @ 5fd2ff0a
...
@@ -134,7 +134,7 @@ class ScatterGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(ScatterGradNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(ScatterGradNoNeedBufferVarsInferer,
                                     "Updates");
 
 DECLARE_INPLACE_OP_INFERER(ScatterInplaceInferer, {"X", "Out"});
...
@@ -151,7 +151,7 @@ REGISTER_OPERATOR(scatter, ops::ScatterOp, ops::ScatterOpMaker,
                   ops::ScatterGradMaker<paddle::imperative::OpBase>,
                   ops::ScatterInplaceInferer);
 REGISTER_OPERATOR(scatter_grad, ops::ScatterGradOp,
-                  ops::ScatterGradNoNeedBufferVarsInference,
+                  ops::ScatterGradNoNeedBufferVarsInferer,
                   ops::ScatterGradInplaceInferer);
 REGISTER_OP_CPU_KERNEL(scatter, ops::ScatterOpKernel<float>,
                        ops::ScatterOpKernel<double>, ops::ScatterOpKernel<int>,
...
paddle/fluid/operators/sequence_ops/sequence_concat_op.cc  View file @ 5fd2ff0a
...
@@ -123,8 +123,7 @@ class SeqConcatGradOp : public framework::OperatorWithKernel {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SeqConcatGradNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SeqConcatGradNoNeedBufferVarsInferer,
                                     "X");
 }  // namespace operators
 }  // namespace paddle
...
@@ -140,7 +139,7 @@ REGISTER_OP_CPU_KERNEL(sequence_concat, Kernel<float>, Kernel<double>,
                        Kernel<int>, Kernel<int64_t>);
 REGISTER_OPERATOR(sequence_concat_grad, op::SeqConcatGradOp,
-                  op::SeqConcatGradNoNeedBufferVarsInference);
+                  op::SeqConcatGradNoNeedBufferVarsInferer);
 template <typename T>
 using GradKernel =
     op::SeqConcatGradKernel<paddle::platform::CPUDeviceContext, T>;
...
paddle/fluid/operators/sequence_ops/sequence_expand_as_op.cc  View file @ 5fd2ff0a
...
@@ -181,10 +181,10 @@ class SequenceExpandAsOpGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequenceExpandAsOpNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequenceExpandAsOpNoNeedBufferVarsInferer,
                                     "Y");
 DECLARE_NO_NEED_BUFFER_VARS_INFERER(
-    SequenceExpandAsGradOpNoNeedBufferVarsInference, "X", "Y");
+    SequenceExpandAsGradOpNoNeedBufferVarsInferer, "X", "Y");
 
 }  // namespace operators
 }  // namespace paddle
...
@@ -194,9 +194,9 @@ REGISTER_OPERATOR(
     sequence_expand_as, ops::SequenceExpandAsOp, ops::SequenceExpandAsOpMaker,
     ops::SequenceExpandAsOpGradOpMaker<paddle::framework::OpDesc>,
     ops::SequenceExpandAsOpGradOpMaker<paddle::imperative::OpBase>,
-    ops::SequenceExpandAsOpNoNeedBufferVarsInference);
+    ops::SequenceExpandAsOpNoNeedBufferVarsInferer);
 REGISTER_OPERATOR(sequence_expand_as_grad, ops::SequenceExpandAsOpGrad,
-                  ops::SequenceExpandAsGradOpNoNeedBufferVarsInference);
+                  ops::SequenceExpandAsGradOpNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     sequence_expand_as,
     ops::SequenceExpandAsKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/sequence_ops/sequence_expand_op.cc  View file @ 5fd2ff0a
...
@@ -247,10 +247,10 @@ class SequenceExpandOpGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequenceExpandOpNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequenceExpandOpNoNeedBufferVarsInferer,
                                     "Y");
 DECLARE_NO_NEED_BUFFER_VARS_INFERER(
-    SequenceExpandGradOpNoNeedBufferVarsInference, "X", "Y");
+    SequenceExpandGradOpNoNeedBufferVarsInferer, "X", "Y");
 
 }  // namespace operators
 }  // namespace paddle
...
@@ -260,9 +260,9 @@ REGISTER_OPERATOR(sequence_expand, ops::SequenceExpandOp,
                   ops::SequenceExpandOpMaker,
                   ops::SequenceExpandOpGradMaker<paddle::framework::OpDesc>,
                   ops::SequenceExpandOpGradMaker<paddle::imperative::OpBase>,
-                  ops::SequenceExpandOpNoNeedBufferVarsInference);
+                  ops::SequenceExpandOpNoNeedBufferVarsInferer);
 REGISTER_OPERATOR(sequence_expand_grad, ops::SequenceExpandOpGrad,
-                  ops::SequenceExpandGradOpNoNeedBufferVarsInference);
+                  ops::SequenceExpandGradOpNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     sequence_expand,
     ops::SequenceExpandKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/sequence_ops/sequence_pad_op.cc  View file @ 5fd2ff0a
...
@@ -251,7 +251,7 @@ class SequencePadGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequencePadGradOpNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequencePadGradOpNoNeedBufferVarsInferer,
                                     "X");
 
 }  // namespace operators
...
@@ -262,7 +262,7 @@ REGISTER_OPERATOR(sequence_pad, ops::SequencePadOp, ops::SequencePadOpMaker,
                   ops::SequencePadGradOpMaker<paddle::framework::OpDesc>,
                   ops::SequencePadGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(sequence_pad_grad, ops::SequencePadGradOp,
-                  ops::SequencePadGradOpNoNeedBufferVarsInference);
+                  ops::SequencePadGradOpNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     sequence_pad,
     ops::SequencePadOpKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/sequence_ops/sequence_pool_op.cc  View file @ 5fd2ff0a
...
@@ -166,7 +166,7 @@ class SequencePoolGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequencePoolGradOpNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequencePoolGradOpNoNeedBufferVarsInferer,
                                     "X");
 
 }  // namespace operators
...
@@ -177,7 +177,7 @@ REGISTER_OPERATOR(sequence_pool, ops::SequencePoolOp, ops::SequencePoolOpMaker,
                   ops::SequencePoolGradOpMaker<paddle::framework::OpDesc>,
                   ops::SequencePoolGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(sequence_pool_grad, ops::SequencePoolGradOp,
-                  ops::SequencePoolGradOpNoNeedBufferVarsInference);
+                  ops::SequencePoolGradOpNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     sequence_pool,
     ops::SequencePoolKernel<paddle::platform::CPUDeviceContext, float>);
...
paddle/fluid/operators/sequence_ops/sequence_scatter_op.cc  View file @ 5fd2ff0a
...
@@ -168,8 +168,8 @@ class SequenceScatterGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
 DECLARE_NO_NEED_BUFFER_VARS_INFERER(
-    SequenceScatterGradNoNeedBufferVarsInference,
+    SequenceScatterGradNoNeedBufferVarsInferer,
     "Updates");
 
 }  // namespace operators
 }  // namespace paddle
...
@@ -180,7 +180,7 @@ REGISTER_OPERATOR(sequence_scatter, ops::SequenceScatterOp,
                   ops::SequenceScatterGradMaker<paddle::framework::OpDesc>,
                   ops::SequenceScatterGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(sequence_scatter_grad, ops::SequenceScatterGradOp,
-                  ops::SequenceScatterGradNoNeedBufferVarsInference);
+                  ops::SequenceScatterGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(sequence_scatter, ops::SequenceScatterOpKernel<float>,
                        ops::SequenceScatterOpKernel<double>,
                        ops::SequenceScatterOpKernel<int>,
...
paddle/fluid/operators/sequence_ops/sequence_slice_op.cc  View file @ 5fd2ff0a
...
@@ -137,7 +137,7 @@ class SequenceSliceGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequenceSliceGradNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SequenceSliceGradNoNeedBufferVarsInferer,
                                     "X");
 
 }  // namespace operators
...
@@ -149,7 +149,7 @@ REGISTER_OPERATOR(sequence_slice, ops::SequenceSliceOp,
                   ops::SequenceSliceGradOpMaker<paddle::framework::OpDesc>,
                   ops::SequenceSliceGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(sequence_slice_grad, ops::SequenceSliceGradOp,
-                  ops::SequenceSliceGradNoNeedBufferVarsInference);
+                  ops::SequenceSliceGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     sequence_slice,
     ops::SequenceSliceOpKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/sequence_ops/sequence_softmax_cudnn_op.cu.cc  View file @ 5fd2ff0a
...
@@ -33,12 +33,17 @@ class SequenceSoftmaxCUDNNKernel : public framework::OpKernel<T> {
     auto &dims = x->dims();
 
     const size_t level = lod.size() - 1;
-    PADDLE_ENFORCE_EQ(dims[0], static_cast<int64_t>(lod[level].back()),
-                      "The first dimension of Input(X) should be equal to the "
-                      "sum of all sequences' lengths.");
-    PADDLE_ENFORCE_EQ(dims[0], x->numel(),
-                      "The width of each timestep in Input(X) of "
-                      "SequenceSoftmaxOp should be 1.");
+    PADDLE_ENFORCE_EQ(
+        dims[0], static_cast<int64_t>(lod[level].back()),
+        platform::errors::InvalidArgument(
+            "The first dimension of Input(X) should be equal to the sum of all "
+            "sequences' lengths. But received first dimension of Input(X) is "
+            "%d, the sum of all sequences' lengths is %d.",
+            dims[0], static_cast<int64_t>(lod[level].back())));
+    PADDLE_ENFORCE_EQ(dims[0], x->numel(),
+                      platform::errors::InvalidArgument(
+                          "The width of each timestep in Input(X) of "
+                          "SequenceSoftmaxOp should be 1."));
 
     out->mutable_data<T>(ctx.GetPlace());
     for (int i = 0; i < static_cast<int>(lod[level].size()) - 1; ++i) {
...
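The sequence_softmax hunk above shows the error-reporting style this codebase is migrating to: the bare message string in PADDLE_ENFORCE_EQ is replaced by a typed error object that also reports the values that were actually received. The fragment below is a sketch of that pattern only, not code from this commit; CheckFirstDim, x_dim and expected are hypothetical names, and it assumes the Paddle build environment where these headers and the macro are available.

// Sketch of the new enforcement style, assuming Paddle's platform headers.
#include "paddle/fluid/platform/enforce.h"
#include "paddle/fluid/platform/errors.h"

void CheckFirstDim(int64_t x_dim, int64_t expected) {
  // The typed error carries a format string plus the offending values, so the
  // raised message tells the user both what was expected and what was given.
  PADDLE_ENFORCE_EQ(
      x_dim, expected,
      paddle::platform::errors::InvalidArgument(
          "The first dimension of Input(X) should be %d, but received %d.",
          expected, x_dim));
}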
paddle/fluid/operators/sequence_ops/sequence_unpad_op.cc  View file @ 5fd2ff0a
...
@@ -169,8 +169,8 @@ class SequenceUnpadGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
 DECLARE_NO_NEED_BUFFER_VARS_INFERER(
-    SequenceUnpadGradOpNoNeedBufferVarsInference,
+    SequenceUnpadGradOpNoNeedBufferVarsInferer,
     "X");
 
 }  // namespace operators
 }  // namespace paddle
...
@@ -181,7 +181,7 @@ REGISTER_OPERATOR(sequence_unpad, ops::SequenceUnpadOp,
                   ops::SequenceUnpadGradOpMaker<paddle::framework::OpDesc>,
                   ops::SequenceUnpadGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(sequence_unpad_grad, ops::SequenceUnpadGradOp,
-                  ops::SequenceUnpadGradOpNoNeedBufferVarsInference);
+                  ops::SequenceUnpadGradOpNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     sequence_unpad,
     ops::SequenceUnpadOpKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/slice_op.cc  View file @ 5fd2ff0a
...
@@ -377,7 +377,7 @@ class SliceDoubleOpGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SliceOpGradNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SliceOpGradNoNeedBufferVarsInferer,
                                     "Input");
 
 }  // namespace operators
...
@@ -391,7 +391,7 @@ REGISTER_OPERATOR(slice, ops::SliceOp, ops::SliceOpMaker,
 REGISTER_OPERATOR(slice_grad, ops::SliceOpGrad,
                   ops::SliceDoubleOpGradMaker<paddle::framework::OpDesc>,
                   ops::SliceDoubleOpGradMaker<paddle::imperative::OpBase>,
-                  ops::SliceOpGradNoNeedBufferVarsInference,
+                  ops::SliceOpGradNoNeedBufferVarsInferer,
                   ops::SliceOpGradVarTypeInference);
 REGISTER_OP_CPU_KERNEL(
...
paddle/fluid/operators/space_to_depth_op.cc  View file @ 5fd2ff0a
...
@@ -131,7 +131,7 @@ class SpaceToDepthOpMaker : public framework::OpProtoAndCheckerMaker {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SpaceToDepthGradOpNoBuffer, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SpaceToDepthGradOpNoBufferVarsInferer, "X");
 
 template <typename T>
 class SpaceToDepthGradOpMaker : public framework::SingleGradOpMaker<T> {
...
@@ -179,7 +179,7 @@ REGISTER_OPERATOR(space_to_depth, ops::SpaceToDepthOp, ops::SpaceToDepthOpMaker,
                   ops::SpaceToDepthGradOpMaker<paddle::framework::OpDesc>,
                   ops::SpaceToDepthGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(space_to_depth_grad, ops::SpaceToDepthGradOp,
-                  ops::SpaceToDepthGradOpNoBuffer);
+                  ops::SpaceToDepthGradOpNoBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     space_to_depth,
     ops::SpaceToDepthKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/squared_l2_distance_op.cc  View file @ 5fd2ff0a
...
@@ -88,7 +88,8 @@ class SquaredL2DistanceOp : public framework::OperatorWithKernel {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SquaredL2DistanceGradOpNoBuffer, "X", "Y");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SquaredL2DistanceGradOpNoBufferVarsInferer,
+                                    "X", "Y");
 
 template <typename T>
 class SquaredL2DistanceGradOpMaker : public framework::SingleGradOpMaker<T> {
...
@@ -192,7 +193,7 @@ REGISTER_OPERATOR(
     ops::SquaredL2DistanceGradOpMaker<paddle::framework::OpDesc>,
     ops::SquaredL2DistanceGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(squared_l2_distance_grad, ops::SquaredL2DistanceGradOp,
-                  ops::SquaredL2DistanceGradOpNoBuffer);
+                  ops::SquaredL2DistanceGradOpNoBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     squared_l2_distance,
     ops::SquaredL2DistanceKernel<paddle::platform::CPUDeviceContext, float>);
...
paddle/fluid/operators/squeeze_op.cc  View file @ 5fd2ff0a
...
@@ -275,7 +275,7 @@ DECLARE_INPLACE_OP_INFERER(SequeezeInplaceInferer, {"X", "Out"});
 DECLARE_INPLACE_OP_INFERER(SequeezeGradInplaceInferer,
                            {framework::GradVarName("Out"),
                             framework::GradVarName("X")});
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(SqueezeGradNoNeedBufferVarsInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(SqueezeGradNoNeedBufferVarsInferer, "X");
 }  // namespace operators
 }  // namespace paddle
...
@@ -284,7 +284,7 @@ REGISTER_OPERATOR(squeeze, ops::SqueezeOp, ops::SqueezeOpMaker,
                   ops::SqueezeGradOpMaker<paddle::framework::OpDesc>,
                   ops::SqueezeGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(squeeze_grad, ops::SqueezeGradOp,
-                  ops::SqueezeGradNoNeedBufferVarsInference);
+                  ops::SqueezeGradNoNeedBufferVarsInferer);
 REGISTER_OPERATOR(squeeze2, ops::Squeeze2Op, ops::Squeeze2OpMaker,
                   ops::Squeeze2GradOpMaker<paddle::framework::OpDesc>,
...
paddle/fluid/operators/strided_slice_op.cc  View file @ 5fd2ff0a
...
@@ -304,7 +304,7 @@ class StridedSliceOpGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(StridedSliceOpGradNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(StridedSliceOpGradNoNeedBufferVarsInferer,
                                     "Input");
 
 }  // namespace operators
...
@@ -315,7 +315,7 @@ REGISTER_OPERATOR(strided_slice, ops::StridedSliceOp, ops::StridedSliceOpMaker,
                   ops::StridedSliceOpGradMaker<paddle::framework::OpDesc>,
                   ops::StridedSliceOpGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(strided_slice_grad, ops::StridedSliceOpGrad,
-                  ops::StridedSliceOpGradNoNeedBufferVarsInference);
+                  ops::StridedSliceOpGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     strided_slice,
...
paddle/fluid/operators/trace_op.cc  View file @ 5fd2ff0a
...
@@ -147,8 +147,7 @@ class TraceGradOpMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(TraceGradNoNeedBufferVarsInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(TraceGradNoNeedBufferVarsInferer,
                                     "Input");
 }  // namespace operators
 }  // namespace paddle
...
@@ -159,7 +158,7 @@ REGISTER_OPERATOR(trace, ops::TraceOp, ops::TraceOpMaker,
                   ops::TraceGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(trace_grad, ops::TraceOpGrad,
-                  ops::TraceGradNoNeedBufferVarsInference);
+                  ops::TraceGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     trace, ops::TraceKernel<paddle::platform::CPUDeviceContext, int>,
     ops::TraceKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/unfold_op.cc  View file @ 5fd2ff0a
...
@@ -174,7 +174,7 @@ class UnfoldGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(UnfoldGradOpNoNeedBufferVarsInference, "X");
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(UnfoldGradOpNoNeedBufferVarsInferer, "X");
 }  // namespace operators
 }  // namespace paddle
...
@@ -184,7 +184,7 @@ REGISTER_OPERATOR(unfold, ops::UnfoldOp, ops::UnfoldOpMaker,
                   ops::UnfoldGradMaker<paddle::framework::OpDesc>,
                   ops::UnfoldGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(unfold_grad, ops::UnfoldGradOp,
-                  ops::UnfoldGradOpNoNeedBufferVarsInference);
+                  ops::UnfoldGradOpNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     unfold, ops::UnfoldOpKernel<paddle::platform::CPUDeviceContext, float>,
...
paddle/fluid/operators/uniform_random_batch_size_like_op.cc  View file @ 5fd2ff0a
...
@@ -78,5 +78,5 @@ REGISTER_OPERATOR(
     paddle::operators::UniformRandomBatchSizeLikeOpMaker,
     paddle::framework::EmptyGradOpMaker<paddle::framework::OpDesc>,
     paddle::framework::EmptyGradOpMaker<paddle::imperative::OpBase>,
-    paddle::operators::BatchSizeLikeNoNeedBufferVarsInference);
+    paddle::operators::BatchSizeLikeNoNeedBufferVarsInferer);
 // Kernels are registered in uniform_random_op.cc and uniform_random_op.cu
paddle/fluid/operators/unsqueeze_op.cc  View file @ 5fd2ff0a
...
@@ -306,8 +306,7 @@ DECLARE_INPLACE_OP_INFERER(UnsqueezeInplaceInferer, {"X", "Out"});
 DECLARE_INPLACE_OP_INFERER(UnsqueezeGradInplaceInferer,
                            {framework::GradVarName("Out"),
                             framework::GradVarName("X")});
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(UnsqueezeGradOpNoNeedBufferVarInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(UnsqueezeGradOpNoNeedBufferVarInferer,
                                     "X");
 }  // namespace operators
 }  // namespace paddle
...
@@ -316,7 +315,7 @@ REGISTER_OPERATOR(unsqueeze, ops::UnsqueezeOp, ops::UnsqueezeOpMaker,
                   ops::UnsqueezeGradOpMaker<paddle::framework::OpDesc>,
                   ops::UnsqueezeGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(unsqueeze_grad, ops::UnsqueezeGradOp,
-                  ops::UnsqueezeGradOpNoNeedBufferVarInference);
+                  ops::UnsqueezeGradOpNoNeedBufferVarInferer);
 REGISTER_OPERATOR(unsqueeze2, ops::Unsqueeze2Op, ops::Unsqueeze2OpMaker,
                   ops::Unsqueeze2GradOpMaker<paddle::framework::OpDesc>,
...
paddle/fluid/operators/warpctc_op.cc  View file @ 5fd2ff0a
...
@@ -184,7 +184,7 @@ class WarpCTCGradOp : public framework::OperatorWithKernel {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(WarpCTCGradOpNoNeedBufferVarInference,
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(WarpCTCGradOpNoNeedBufferVarInferer,
                                     "Logits");
 
 }  // namespace operators
...
@@ -195,7 +195,7 @@ REGISTER_OPERATOR(warpctc, ops::WarpCTCOp, ops::WarpCTCOpMaker,
                   ops::WarpCTCGradOpMaker<paddle::framework::OpDesc>,
                   ops::WarpCTCGradOpMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(warpctc_grad, ops::WarpCTCGradOp,
-                  ops::WarpCTCGradOpNoNeedBufferVarInference);
+                  ops::WarpCTCGradOpNoNeedBufferVarInferer);
 REGISTER_OP_CPU_KERNEL(
     warpctc, ops::WarpCTCKernel<paddle::platform::CPUDeviceContext, float>);
 REGISTER_OP_CPU_KERNEL(
...
paddle/fluid/operators/where_op.cc  View file @ 5fd2ff0a
...
@@ -135,8 +135,7 @@ class WhereOpGradMaker : public framework::SingleGradOpMaker<T> {
   }
 };
 
-DECLARE_NO_NEED_BUFFER_VARS_INFERER(WhereGradNoNeedBufferVarsInference, "X",
+DECLARE_NO_NEED_BUFFER_VARS_INFERER(WhereGradNoNeedBufferVarsInferer, "X",
                                     "Y");
 }  // namespace operators
 }  // namespace paddle
...
@@ -146,7 +145,7 @@ REGISTER_OPERATOR(where, ops::WhereOp, ops::WhereOpMaker,
                   ops::WhereOpGradMaker<paddle::imperative::OpBase>);
 REGISTER_OPERATOR(where_grad, ops::WhereGradOp,
-                  ops::WhereGradNoNeedBufferVarsInference);
+                  ops::WhereGradNoNeedBufferVarsInferer);
 REGISTER_OP_CPU_KERNEL(
     where, ops::WhereKernel<paddle::platform::CPUDeviceContext, float>,
     ops::WhereKernel<paddle::platform::CPUDeviceContext, double>,
...
paddle/fluid/platform/device_tracer.cc  View file @ 5fd2ff0a
...
@@ -641,22 +641,24 @@ DeviceTracer *GetDeviceTracer() {
   return tracer;
 }
 
-std::string SetCurAnnotation(Event *event) {
+// In order to record PE time, we add main_thread_annotation_stack
+// for all event between PE run, we treat it as PE's child Event,
+// so when event is not in same thread of PE event, we need add
+// father event(PE::run event) for this event
+void SetCurAnnotation(Event *event) {
   std::string ret;
-  if (!annotation_stack.empty() && event->role() != EventRole::kSpecial) {
+  if (!annotation_stack.empty()) {
     event->set_parent(annotation_stack.back());
     event->set_name(annotation_stack.back()->name() + "/" + event->name());
   }
+  if (annotation_stack.empty() && !main_thread_annotation_stack.empty() &&
+      main_thread_annotation_stack.back()->thread_id() != event->thread_id()) {
+    event->set_parent(main_thread_annotation_stack.back());
+    event->set_name(main_thread_annotation_stack.back()->name() + "/" +
+                    event->name());
+  }
   annotation_stack.push_back(event);
-  if (!main_thread_annotation_stack_name.empty() && !annotation_stack.empty() &&
-      main_thread_annotation_stack.back()->thread_id() !=
-          annotation_stack.back()->thread_id()) {
-    ret = main_thread_annotation_stack_name.back() + "/" + event->name();
-  } else {
-    ret = event->name();
-  }
+
   if (event->role() == EventRole::kSpecial) {
     std::string name = event->name();
     if (!main_thread_annotation_stack_name.empty()) {
...
@@ -665,22 +667,23 @@ std::string SetCurAnnotation(Event *event) {
     main_thread_annotation_stack_name.push_back(name);
     main_thread_annotation_stack.push_back(event);
   }
-  return ret;
 }
 
 void ClearCurAnnotation() {
-  if (!main_thread_annotation_stack_name.empty() && !annotation_stack.empty() &&
-      main_thread_annotation_stack.back()->thread_id() !=
-          annotation_stack.back()->thread_id()) {
-    annotation_stack.back()->set_name(main_thread_annotation_stack_name.back() +
-                                      "/" + annotation_stack.back()->name());
-  }
   if (!main_thread_annotation_stack.empty() &&
       main_thread_annotation_stack.back()->name() ==
           annotation_stack.back()->name()) {
-    main_thread_annotation_stack_name.pop_back();
-    main_thread_annotation_stack.pop_back();
+    std::string name = annotation_stack.back()->name();
+    std::string main_name = main_thread_annotation_stack.back()->name();
+    int main_name_len = main_name.length();
+    int name_len = name.length();
+    int prefix_len = main_name_len - name_len;
+
+    if (prefix_len >= 0 && main_name.at(prefix_len) == '/' &&
+        name == main_name.substr(prefix_len, name_len)) {
+      main_thread_annotation_stack_name.pop_back();
+      main_thread_annotation_stack.pop_back();
+    }
   }
   annotation_stack.pop_back();
 }
...
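The two functions above maintain a per-thread annotation stack in which every nested profiling event is named "parent/child", and the new ParallelExecutor handling grafts events from worker threads under the main-thread event. The fragment below is a toy model of that stack only, written for this note; it is not Paddle code and the function names are invented, but it shows the naming scheme the diff preserves.

// Toy sketch of a "parent/child" annotation stack (assumption: illustration only).
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> annotation_stack;

// Pushes an event name, prefixing it with the current parent's full name.
std::string PushAnnotation(const std::string &name) {
  std::string full =
      annotation_stack.empty() ? name : annotation_stack.back() + "/" + name;
  annotation_stack.push_back(full);
  return full;
}

void PopAnnotation() { annotation_stack.pop_back(); }

int main() {
  std::cout << PushAnnotation("ParallelExecutor::Run") << "\n";  // ParallelExecutor::Run
  std::cout << PushAnnotation("conv2d") << "\n";  // ParallelExecutor::Run/conv2d
  PopAnnotation();
  PopAnnotation();
  return 0;
}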
paddle/fluid/platform/device_tracer.h  View file @ 5fd2ff0a
...
@@ -137,7 +137,7 @@ class DeviceTracer {
 DeviceTracer *GetDeviceTracer();
 
 // Set a name for the cuda kernel operation being launched by the thread.
-std::string SetCurAnnotation(Event *event);
+void SetCurAnnotation(Event *event);
 // Clear the name after the operation is done.
 void ClearCurAnnotation();
 // Current name of the operation being run in the thread.
...
paddle/fluid/platform/profiler.cc  View file @ 5fd2ff0a
...
@@ -73,7 +73,8 @@ RecordEvent::RecordEvent(const std::string &name, const EventRole role) {
   // lock is not needed, the code below is thread-safe
   Event *e = PushEvent(name, role);
   // Maybe need the same push/pop behavior.
-  name_ = SetCurAnnotation(e);
+  SetCurAnnotation(e);
+  name_ = e->name();
 }
 
 RecordEvent::~RecordEvent() {
...
python/paddle/fluid/contrib/slim/quantization/quantization_pass.py  View file @ 5fd2ff0a
...
@@ -83,7 +83,7 @@ _op_real_in_out_name = {
     "swish": [["X"], ["Out"]],
     "dropout": [["X"], ["Out"]],
     "batch_norm": [["X"], ["Y"]],
-    "sigmoid": [["X"], ["Y"]],
+    "sigmoid": [["X"], ["Out"]],
 }
...
python/paddle/fluid/dygraph/dygraph_to_static/program_translator.py  View file @ 5fd2ff0a
...
@@ -550,6 +550,7 @@ class ProgramTranslator(object):
         source_code = ast_to_source_code(root_wrapper.node)
         return source_code
 
+    @switch_to_static_graph
     def save_inference_model(self, dirname, feed=None, fetch=None):
         """
         Saves current model as the inference model. It will prune the main_program
...
python/paddle/fluid/dygraph/math_op_patch.py  View file @ 5fd2ff0a
...
@@ -37,9 +37,6 @@ def monkey_patch_math_varbase():
     The difference is, in dygraph mode, use auto-generated op functions for better performance.
     """
 
-    def safe_get_dtype(var):
-        return var.dtype
-
     @no_grad
     def create_tensor(value, dtype, shape):
         out = _varbase_creator(dtype=dtype)
...
@@ -96,8 +93,9 @@ def monkey_patch_math_varbase():
             print("new var's dtype is: {}, numpy dtype is {}".format(new_variable.dtype, new_variable.numpy().dtype))
         """
-        return core.ops.cast(self, 'in_dtype', self.dtype, 'out_dtype',
-                             convert_np_dtype_to_dtype_(dtype))
+        if not isinstance(dtype, core.VarDesc.VarType):
+            dtype = convert_np_dtype_to_dtype_(dtype)
+        return core.ops.cast(self, 'in_dtype', self.dtype, 'out_dtype', dtype)
 
     def _scalar_elementwise_op_(var, scale, bias):
         return core.ops.scale(var, 'scale', scale, 'bias', bias)
...
@@ -175,7 +173,7 @@ def monkey_patch_math_varbase():
         elif isinstance(other_var, int):
             return scalar_method(self, float(other_var))
 
-        lhs_dtype = safe_get_dtype(self)
+        lhs_dtype = self.dtype
 
         if not isinstance(other_var, core.VarBase):
             if reverse:
...
@@ -185,7 +183,7 @@ def monkey_patch_math_varbase():
                 # add fill_op
                 other_var = create_scalar(value=other_var, dtype=lhs_dtype)
 
-        rhs_dtype = safe_get_dtype(other_var)
+        rhs_dtype = other_var.dtype
         if lhs_dtype != rhs_dtype:
             other_var = astype(other_var, lhs_dtype)
         if reverse:
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_mnist.py  View file @ 5fd2ff0a
...
@@ -200,7 +200,6 @@ class TestMNISTWithDeclarative(TestMNIST):
                     break
         return loss_data
 
-    @switch_to_static_graph
     def check_save_inference_model(self, inputs, prog_trans, to_static, gt_out):
         if to_static:
             infer_model_path = "./test_mnist_inference_model"
...
@@ -208,6 +207,7 @@ class TestMNISTWithDeclarative(TestMNIST):
             infer_out = self.load_and_run_inference(infer_model_path, inputs)
             self.assertTrue(np.allclose(gt_out.numpy(), infer_out))
 
+    @switch_to_static_graph
     def load_and_run_inference(self, model_path, inputs):
         exe = fluid.Executor(self.place)
         [inference_program, feed_target_names,
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_save_inference_model.py  View file @ 5fd2ff0a
...
@@ -30,6 +30,7 @@ np.random.seed(SEED)
 place = fluid.CUDAPlace(0) if fluid.is_compiled_with_cuda() else fluid.CPUPlace(
 )
+program_translator = ProgramTranslator()
 
 
 class SimpleFcLayer(fluid.dygraph.Layer):
...
@@ -63,6 +64,10 @@ class TestDyToStaticSaveInferenceModel(unittest.TestCase):
                 loss.backward()
                 adam.minimize(loss)
                 layer.clear_gradients()
+                # test for saving model in dygraph.guard
+                infer_model_dir = "./test_dy2stat_save_inference_model"
+                program_translator.save_inference_model(
+                    infer_model_dir, feed=[0], fetch=[1])
             # Check the correctness of the inference
             dygraph_out, _ = layer(x)
             self.check_save_inference_model(layer, [x_data], dygraph_out.numpy())
...
@@ -77,7 +82,7 @@ class TestDyToStaticSaveInferenceModel(unittest.TestCase):
                                    gt_out,
                                    feed=None,
                                    fetch=None):
-        program_translator = ProgramTranslator()
         expected_persistable_vars = set([p.name for p in model.parameters()])
         infer_model_dir = "./test_dy2stat_save_inference_model"
...
python/paddle/fluid/tests/unittests/test_conv2d_transpose_op.py  View file @ 5fd2ff0a
...
@@ -109,6 +109,7 @@ class TestConv2dTransposeOp(OpTest):
     def setUp(self):
         # init as conv transpose
         self.dtype = np.float64
+        self.need_check_grad = True
         self.is_test = False
         self.use_cudnn = False
         self.use_mkldnn = False
...
@@ -152,35 +153,40 @@ class TestConv2dTransposeOp(OpTest):
             self.check_output(check_dygraph=(self.use_mkldnn == False))
 
     def test_check_grad_no_input(self):
-        if self.use_cudnn:
-            place = core.CUDAPlace(0)
-            self.check_grad_with_place(
-                place, ['Filter'],
-                'Output',
-                max_relative_error=0.02,
-                no_grad_set=set(['Input']))
-        else:
-            self.check_grad(['Filter'], 'Output', no_grad_set=set(['Input']))
+        if self.need_check_grad:
+            if self.use_cudnn:
+                place = core.CUDAPlace(0)
+                self.check_grad_with_place(
+                    place, ['Filter'],
+                    'Output',
+                    max_relative_error=0.02,
+                    no_grad_set=set(['Input']))
+            else:
+                self.check_grad(
+                    ['Filter'], 'Output', no_grad_set=set(['Input']))
 
     def test_check_grad_no_filter(self):
-        if self.use_cudnn:
-            place = core.CUDAPlace(0)
-            self.check_grad_with_place(
-                place, ['Input'], 'Output', no_grad_set=set(['Filter']))
-        else:
-            self.check_grad(['Input'], 'Output', no_grad_set=set(['Filter']))
+        if self.need_check_grad:
+            if self.use_cudnn:
+                place = core.CUDAPlace(0)
+                self.check_grad_with_place(
+                    place, ['Input'], 'Output', no_grad_set=set(['Filter']))
+            else:
+                self.check_grad(
+                    ['Input'], 'Output', no_grad_set=set(['Filter']))
 
     def test_check_grad(self):
-        if self.use_cudnn:
-            place = core.CUDAPlace(0)
-            self.check_grad_with_place(
-                place,
-                set(['Input', 'Filter']),
-                'Output',
-                max_relative_error=0.02)
-        else:
-            self.check_grad(
-                set(['Input', 'Filter']), 'Output', max_relative_error=0.02)
+        if self.need_check_grad:
+            if self.use_cudnn:
+                place = core.CUDAPlace(0)
+                self.check_grad_with_place(
+                    place,
+                    set(['Input', 'Filter']),
+                    'Output',
+                    max_relative_error=0.02)
+            else:
+                self.check_grad(
+                    set(['Input', 'Filter']),
+                    'Output',
+                    max_relative_error=0.02)
 
     def init_test_case(self):
         self.pad = [0, 0]
...
@@ -708,6 +714,124 @@ class TestDepthwiseConvTransposeAsymmetricPad_NHWC(TestConv2dTransposeOp):
         self.data_format = 'NHWC'
 
 
+@unittest.skipIf(not core.is_compiled_with_cuda(),
+                 "core is not compiled with CUDA")
+class TestCUDNN_FP16(TestConv2dTransposeOp):
+    def init_test_case(self):
+        self.dtype = np.float16
+        self.pad = [1, 1]
+        self.stride = [1, 1]
+        self.groups = 1
+        self.dilations = [1, 1]
+        self.input_size = [2, 3, 5, 5]  # NCHW
+        f_c = self.input_size[1]
+        self.filter_size = [f_c, 6, 3, 3]
+
+    def init_op_type(self):
+        self.need_check_grad = False
+        self.use_cudnn = True
+        self.op_type = "conv2d_transpose"
+
+    def test_check_output(self):
+        if self.use_cudnn:
+            place = core.CUDAPlace(0)
+            self.check_output_with_place(
+                place, atol=0.02, check_dygraph=(self.use_mkldnn == False))
+        else:
+            self.check_output(check_dygraph=(self.use_mkldnn == False))
+
+
+@unittest.skipIf(not core.is_compiled_with_cuda(),
+                 "core is not compiled with CUDA")
+class TestCUDNN_NHWC_FP16(TestCUDNN_FP16):
+    def init_test_case(self):
+        self.dtype = np.float16
+        self.pad = [0, 0]
+        self.stride = [1, 1]
+        self.dilations = [1, 1]
+        self.groups = 1
+        self.input_size = [2, 5, 5, 3]  # NHWC
+        f_c = self.input_size[-1]
+        self.filter_size = [f_c, 6, 3, 3]
+        self.data_format = 'NHWC'
+
+
+@unittest.skipIf(not core.is_compiled_with_cuda(),
+                 "core is not compiled with CUDA")
+class TestCUDNNWithSymmetricPad_NHWC_FP16(TestCUDNN_FP16):
+    def init_test_case(self):
+        self.dtype = np.float16
+        self.pad = [1, 1]
+        self.stride = [1, 1]
+        self.groups = 1
+        self.dilations = [1, 1]
+        self.input_size = [2, 5, 5, 3]  # NHWC
+        f_c = self.input_size[-1]
+        self.filter_size = [f_c, 6, 3, 3]
+        self.data_format = 'NHWC'
+
+
+@unittest.skipIf(not core.is_compiled_with_cuda(),
+                 "core is not compiled with CUDA")
+class TestCUDNNWithAsymmetricPad_NHWC_FP16(TestCUDNN_FP16):
+    def init_test_case(self):
+        self.dtype = np.float16
+        self.pad = [1, 0, 2, 3]
+        self.stride = [2, 2]
+        self.groups = 1
+        self.dilations = [1, 1]
+        self.input_size = [2, 5, 5, 3]  # NHWC
+        f_c = self.input_size[-1]
+        self.filter_size = [f_c, 6, 3, 3]
+        self.data_format = 'NHWC'
+
+
+@unittest.skipIf(not core.is_compiled_with_cuda(),
+                 "core is not compiled with CUDA")
+class TestCUDNNWithStride_NHWC_FP16(TestCUDNN_FP16):
+    def init_test_case(self):
+        self.dtype = np.float16
+        self.pad = [1, 1]
+        self.stride = [2, 2]
+        self.groups = 1
+        self.dilations = [1, 1]
+        self.input_size = [2, 5, 5, 3]  # NHWC
+        f_c = self.input_size[-1]
+        self.filter_size = [f_c, 6, 3, 3]
+        self.data_format = 'NHWC'
+
+
+@unittest.skipIf(not core.is_compiled_with_cuda(),
+                 "core is not compiled with CUDA")
+class TestCUDNNWithGroups_NHWC_FP16(TestCUDNN_FP16):
+    def init_test_case(self):
+        self.dtype = np.float16
+        self.pad = [1, 1]
+        self.stride = [1, 1]
+        self.dilations = [1, 1]
+        self.groups = 2
+        self.input_size = [2, 5, 5, 4]  # NCHW
+        f_c = self.input_size[-1]
+        self.filter_size = [f_c, 3, 3, 3]
+        self.data_format = 'NHWC'
+
+
+@unittest.skipIf(not core.is_compiled_with_cuda(),
+                 "core is not compiled with CUDA")
+class TestCUDNNWithEvenUpsample_NHWC_FP16(TestCUDNN_FP16):
+    def init_test_case(self):
+        self.dtype = np.float16
+        self.pad = [2, 2]
+        self.stride = [2, 2]
+        self.groups = 1
+        self.dilations = [1, 1]
+        self.output_size = [14, 14]
+        self.input_size = [2, 7, 7, 3]  # NHWC
+        f_c = self.input_size[-1]
+        self.filter_size = [f_c, 6, 5, 5]
+        self.data_format = 'NHWC'
+
+
 class TestConv2dTransposeAPI(unittest.TestCase):
     def test_case1(self):
         data1 = fluid.layers.data(
...
python/paddle/fluid/tests/unittests/test_math_op_patch_var_base.py
浏览文件 @
5fd2ff0a
...
@@ -15,7 +15,6 @@
...
@@ -15,7 +15,6 @@
from
__future__
import
print_function
from
__future__
import
print_function
import
unittest
import
unittest
from
decorator_helper
import
prog_scope
import
paddle.fluid
as
fluid
import
paddle.fluid
as
fluid
import
numpy
as
np
import
numpy
as
np
import
six
import
six
...
@@ -23,7 +22,7 @@ import six
...
@@ -23,7 +22,7 @@ import six
class
TestMathOpPatchesVarBase
(
unittest
.
TestCase
):
class
TestMathOpPatchesVarBase
(
unittest
.
TestCase
):
def
setUp
(
self
):
def
setUp
(
self
):
self
.
shape
=
[
10
,
10
]
self
.
shape
=
[
10
,
10
24
]
         self.dtype = np.float32

     def test_add(self):
@@ -251,6 +250,29 @@ class TestMathOpPatchesVarBase(unittest.TestCase):
                 rtol=1e-05,
                 atol=0.0))

+    def test_add_different_dtype(self):
+        a_np = np.random.random(self.shape).astype(np.float32)
+        b_np = np.random.random(self.shape).astype(np.float16)
+        with fluid.dygraph.guard():
+            a = fluid.dygraph.to_variable(a_np)
+            b = fluid.dygraph.to_variable(b_np)
+            res = a + b
+            self.assertTrue(np.array_equal(res.numpy(), a_np + b_np))
+
+    def test_astype(self):
+        a_np = np.random.uniform(-1, 1, self.shape).astype(self.dtype)
+        with fluid.dygraph.guard():
+            a = fluid.dygraph.to_variable(a_np)
+            res1 = a.astype(np.float16)
+            res2 = a.astype('float16')
+            res3 = a.astype(fluid.core.VarDesc.VarType.FP16)
+            self.assertEqual(res1.dtype, res2.dtype)
+            self.assertEqual(res1.dtype, res3.dtype)
+            self.assertTrue(np.array_equal(res1.numpy(), res2.numpy()))
+            self.assertTrue(np.array_equal(res1.numpy(), res3.numpy()))
+
 if __name__ == '__main__':
     unittest.main()
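The two tests added above cover mixed-dtype addition and VarBase.astype under dygraph. A short standalone usage sketch of the same behaviors follows; the shapes are illustrative and not taken from the commit.

import numpy as np
import paddle.fluid as fluid

# Usage sketch of the dygraph behaviors exercised by the tests added above.
# Shapes are illustrative only.
with fluid.dygraph.guard():
    a = fluid.dygraph.to_variable(np.random.random([10, 24]).astype(np.float32))
    b = fluid.dygraph.to_variable(np.random.random([10, 24]).astype(np.float16))

    # Adding VarBases of different dtypes matches the NumPy result elementwise.
    res = a + b
    print(res.numpy().shape)

    # astype accepts a NumPy dtype, a dtype string, or a VarDesc.VarType value.
    half1 = a.astype(np.float16)
    half2 = a.astype('float16')
    half3 = a.astype(fluid.core.VarDesc.VarType.FP16)
    assert half1.dtype == half2.dtype == half3.dtype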
python/paddle/fluid/tests/unittests/white_list/op_accuracy_white_list.py
View file @ 5fd2ff0a

@@ -80,5 +80,6 @@ NO_FP16_CHECK_GRAD_OP_LIST = [
     'fused_elemwise_activation', \
     'pool2d', \
     'pool3d', \
-    'softmax'
+    'softmax', \
+    'conv2d_transpose'
 ]
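The new entry exempts conv2d_transpose from the FP16 gradient-accuracy check. How the white list is consulted is not shown in this excerpt; the helper below is a hypothetical sketch of the typical lookup, not code from the repository.

# Hypothetical sketch: how a test harness might consult the white list before
# running an FP16 gradient-accuracy check. Only the list contents come from
# the commit; the helper function is invented for illustration.
NO_FP16_CHECK_GRAD_OP_LIST = [
    'fused_elemwise_activation',
    'pool2d',
    'pool3d',
    'softmax',
    'conv2d_transpose',
]


def needs_fp16_grad_check(op_type):
    # Ops on the white list skip the FP16 gradient comparison.
    return op_type not in NO_FP16_CHECK_GRAD_OP_LIST


assert not needs_fp16_grad_check('conv2d_transpose')
assert needs_fp16_grad_check('matmul')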
tools/check_ut.py
View file @ 5fd2ff0a

@@ -28,7 +28,7 @@ class PRChecker(object):
         self.repo = None

     def check(self):
-        """ check pr """
+        """ check pr. """
         filename = 'block.txt'
         pr_id = os.getenv('GIT_PR_ID')
         if not pr_id:
@@ -44,7 +44,8 @@ class PRChecker(object):
         with open(filename) as f:
             for l in f:
                 if l.rstrip('\r\n') == user:
-                    print('{} has UT to be fixed, so CI failed.'.format(user))
+                    print('{} has unit-test to be fixed, so CI failed.'.format(
+                        user))
                     exit(1)
         exit(0)
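Only fragments of PRChecker.check() appear in the hunks above. The sketch below reconstructs the gating logic they imply as a standalone function; how the PR author (user) is resolved is not visible in this excerpt, so the environment-variable lookup used here is an assumption, not code from the repository.

import os
import sys


def check(filename='block.txt'):
    # Sketch of the CI gate modified above; not the actual PRChecker code.
    pr_id = os.getenv('GIT_PR_ID')
    if not pr_id:
        # Not running against a PR, nothing to gate.
        sys.exit(0)
    # Assumption for illustration: the PR author is taken from an env var.
    user = os.getenv('GIT_PR_AUTHOR', '')
    with open(filename) as f:
        for l in f:
            if l.rstrip('\r\n') == user:
                print('{} has unit-test to be fixed, so CI failed.'.format(user))
                sys.exit(1)
    sys.exit(0)


if __name__ == '__main__':
    check()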
tools/manylinux1/Dockerfile.cuda10_cudnn7_gcc8_ubuntu16
View file @ 5fd2ff0a

@@ -19,6 +19,15 @@ ENV HOME /root
 # Add bash enhancements
 COPY ./paddle/scripts/docker/root/ /root/

+ENV PATH=/usr/local/gcc-8.2/bin:$PATH
+RUN rm -rf /temp_gcc82 && rm -rf /gcc-8.2.0.tar.xz && rm -rf /gcc-8.2.0
+
+# Prepare packages for Python
+RUN apt-get update && \
+    apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
+    libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
+    xz-utils tk-dev libffi-dev liblzma-dev
+
 # gcc8.2
 RUN wget -q https://paddle-docker-tar.bj.bcebos.com/home/users/tianshuo/bce-python-sdk-0.8.27/gcc-8.2.0.tar.xz && \
     tar -xvf gcc-8.2.0.tar.xz && \
@@ -33,23 +42,6 @@ RUN wget -q https://paddle-docker-tar.bj.bcebos.com/home/users/tianshuo/bce-pyth
 ENV PATH=/usr/local/gcc-8.2/bin:$PATH
 RUN rm -rf /temp_gcc82 && rm -rf /gcc-8.2.0.tar.xz && rm -rf /gcc-8.2.0

-# Prepare packages for Python
-RUN apt-get update && \
-    apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
-    libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
-    xz-utils tk-dev libffi-dev liblzma-dev
-
-# Downgrade gcc&&g++
-RUN apt-get update
-WORKDIR /usr/bin
-RUN apt install -y gcc-4.8 g++-4.8
-RUN cp gcc gcc.bak
-RUN cp g++ g++.bak
-RUN rm gcc
-RUN rm g++
-RUN ln -s gcc-4.8 gcc
-RUN ln -s g++-4.8 g++
-
 # Install Python3.6
 RUN mkdir -p /root/python_build/ && wget -q https://www.sqlite.org/2018/sqlite-autoconf-3250300.tar.gz && \
     tar -zxf sqlite-autoconf-3250300.tar.gz && cd sqlite-autoconf-3250300 && \