s920243400 / PaddleDetection (forked from PaddlePaddle/PaddleDetection)

Commit 1348c20e
Authored Sep 06, 2017 by Liu Yiqun
Merge branch 'develop' into core_add_fc_op
Parents: f196ad02, ba43904a
Showing 21 changed files with 441 additions and 134 deletions (+441 −134):
doc/howto/dev/new_op_cn.md                               +41  −14
doc/howto/dev/use_eigen_cn.md                            +146 −0
paddle/framework/attribute.cc                            +12  −0
paddle/framework/attribute.h                             +2   −1
paddle/framework/framework.proto                         +7   −0
paddle/framework/op_registry_test.cc                     +1   −33
paddle/framework/operator_test.cc                        +34  −0
paddle/gserver/layers/Conv3DLayer.cpp                    +17  −6
paddle/gserver/layers/DeConv3DLayer.cpp                  +16  −6
paddle/operators/fc_op.cc                                +20  −23
paddle/operators/lookup_table_op.h                       +7   −7
paddle/operators/mul_op.cc                               +2   −2
paddle/operators/mul_op.h                                +20  −16
paddle/operators/rowwise_add_op.cc                       +4   −2
paddle/operators/rowwise_add_op.h                        +14  −10
python/paddle/trainer/PyDataProvider2.py                 +36  −0
python/paddle/v2/framework/op.py                         +6   −1
python/paddle/v2/framework/tests/gradient_checker.py     +3   −1
python/paddle/v2/framework/tests/test_fc_op.py           +14  −2
python/paddle/v2/framework/tests/test_mul_op.py          +26  −5
python/paddle/v2/framework/tests/test_rowwise_add_op.py  +13  −5
doc/howto/dev/new_op_cn.md

@@ -169,6 +169,8 @@ class MulKernel : public framework::OpKernel {

`MulKernel` must override the `Compute` interface, whose parameter is `const framework::ExecutionContext& context`. Compared with `InferShapeContext`, `ExecutionContext` additionally carries the device type; inputs, outputs, and attribute parameters can likewise be obtained from it. The concrete implementation is written inside the `Compute` function.

Note that all devices (CPU, GPU) share one Op definition; whether they share the same `OpKernel` depends on whether the functions called by `Compute` support both devices. The CPU and GPU implementations of `MulOp` share the same `Kernel`; for an example where the `OpKernel` is not shared, see [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cross_entropy_op.h#L43).

To keep the `OpKernel` computation simple to write, and to let CPU and GPU code be reused, we usually implement it with Eigen's unsupported Tensor module. For how to use the Eigen library in Paddle, see the corresponding usage [document](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/dev/use_eigen_cn.md).

This completes the forward Op; the op and its kernel must then be registered in the `.cc` file. The definition of the backward Op class and its kernels is similar to the forward Op and is not repeated here. Note, however, that the backward Op has no `ProtoMaker`.
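For orientation, the general shape of such a `Compute` override is shown below — a minimal sketch modeled on the element-wise AddOp example from the Eigen usage document added in this commit (the operator and variable names are illustrative, not part of this diff):

```c++
template <typename Place, typename T>
class AddKernel : public framework::OpKernel {
 public:
  void Compute(const framework::ExecutionContext& context) const override {
    // Inputs and outputs are looked up by the names declared in the ProtoMaker.
    auto* input0 = context.Input<Tensor>("X");
    auto* input1 = context.Input<Tensor>("Y");
    auto* output = context.Output<Tensor>("Out");
    output->mutable_data<T>(context.GetPlace());

    auto x = EigenVector<T>::Flatten(*input0);
    auto y = EigenVector<T>::Flatten(*input1);
    auto z = EigenVector<T>::Flatten(*output);
    // The same expression runs on CPU or GPU; the device is selected by Place.
    z.device(context.GetEigenDevice<Place>()) = x + y;
  }
};
```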
@@ -188,9 +190,12 @@ REGISTER_OP_CPU_KERNEL(mul_grad,

- `REGISTER_OP_WITHOUT_GRADIENT`: registers an Op that has no backward Op.
- `REGISTER_OP_CPU_KERNEL`: registers the `ops::MulKernel` class and specializes its template parameters to `paddle::platform::CPUPlace` and the `float` type; the kernel of the gradient Op `mul_grad` is registered the same way.

Register the GPU Kernel in the `.cu` file. Note that if the GPU Kernel implementation is based on Eigen's unsupported modules, the macro definition `#define EIGEN_USE_GPU` must appear at the very top of the `.cu` file:

```c++
// if use Eigen unsupported module before include head files
#define EIGEN_USE_GPU

namespace ops = paddle::operators;
REGISTER_OP_GPU_KERNEL(mul, ops::MulKernel<paddle::platform::GPUPlace, float>);
REGISTER_OP_GPU_KERNEL(mul_grad,
                       ...
```
@@ -286,28 +291,50 @@ class TestMulOp(unittest.TestCase):

The backward Op unit test inherits from `GradientChecker`, which in turn inherits from `unittest.TestCase`, so backward test methods must start with `test_`.

```
-class MulGradOpTest(GradientChecker):
-    def test_mul(self):
-        op = create_op("mul")
-        inputs = {
+class TestMulGradOp(GradientChecker):
+    def setUp(self):
+        self.op = create_op("mul")
+        self.inputs = {
             'X': np.random.random((32, 84)).astype("float32"),
             'Y': np.random.random((84, 100)).astype("float32")
         }
-        self.compare_grad(op, inputs)
+
+    def test_cpu_gpu_compare(self):
+        self.compare_grad(self.op, self.inputs)
+
+    def test_normal(self):
         # mul op will enlarge the relative error
         self.check_grad(
-            op, inputs, set(["X", "Y"]), "Out", max_relative_error=0.5)
+            self.op, self.inputs, ["X", "Y"], "Out", max_relative_error=0.5)
+
+    def test_ignore_x(self):
+        self.check_grad(
+            self.op,
+            self.inputs, ["Y"],
+            "Out",
+            max_relative_error=0.5,
+            no_grad_set={"X"})
+
+    def test_ignore_y(self):
+        self.check_grad(
+            self.op,
+            self.inputs, ["X"],
+            "Out",
+            max_relative_error=0.5,
+            no_grad_set={"Y"})
```
Some key points:

 - Call `create_op("mul")` to create the forward Op corresponding to the backward Op.
-- Define the input `inputs`.
-- Call the `compare_grad` function to compare the CPU and GPU results.
-- Call `check_grad` to check gradient stability; gradient correctness is verified numerically.
-- The first argument, `op`: the forward Op.
-- The second argument, `inputs`: the input dict, whose keys must match the names defined in the `ProtoMaker`.
-- The third argument, `set(["X", "Y"])`: run the gradient check on the input variables `X` and `Y`.
+- In `test_normal`, call `check_grad` to check gradient stability; gradient correctness is verified numerically.
+- The first argument, `self.op`: the forward Op.
+- The second argument, `self.inputs`: the input dict, whose keys must match the names defined in the `ProtoMaker`.
+- The third argument, `["X", "Y"]`: run the gradient check on the input variables `X` and `Y`.
 - The fourth argument, `"Out"`: the final output target variable `Out` of the forward network.
+- `test_ignore_x` and `test_ignore_y` test the branch where only one input's gradient needs to be computed.

### Compiling and Running
...
...
doc/howto/dev/use_eigen_cn.md
new file (0 → 100644)
## How to Use Eigen in Paddle

Essentially, a neural network is a compute graph. The data required by the computation is stored in `Tensor`s, and the computation itself is described by `Operator`s. At execution time, an `Operator` calls the `Compute` interface of its corresponding `OpKernel` to operate on the `Tensor`s.
### The Eigen Tensor module

The Eigen Tensor module provides strong support for element-wise computation: you write the code once and it runs on both CPU and GPU. However, Eigen Tensor is a module still under development, so its tests may be incomplete and its documentation is sparse.

For a detailed introduction to the Eigen Tensor module, see [document 1](https://github.com/RLovelett/eigen/blob/master/unsupported/Eigen/CXX11/src/Tensor/README.md) and [document 2](https://bitbucket.org/eigen/eigen/src/default/unsupported/Eigen/CXX11/src/Tensor/README.md).
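To get a feel for the module, here is a standalone sketch that uses only stock Eigen, independent of Paddle (nothing in it comes from this commit):

```c++
#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main() {
  // Two rank-2 tensors; arithmetic between them is element-wise and lazy.
  Eigen::Tensor<float, 2> a(2, 3), b(2, 3);
  a.setConstant(1.0f);
  b.setConstant(2.0f);
  Eigen::Tensor<float, 2> c = a + b * 0.5f;  // evaluated on assignment
  std::cout << c << std::endl;
  return 0;
}
```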
### paddle::framework::Tensor

The Paddle Tensor is defined under the framework directory; its main interface is as follows:

```cpp
class Tensor {
 public:
  /*! Return a pointer to mutable memory block. */
  template <typename T>
  inline T* data();

  /**
   * @brief   Return a pointer to mutable memory block.
   * @note    If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(platform::Place place);

  /**
   * @brief   Return a pointer to mutable memory block.
   *
   * @param[in] dims    The dimensions of the memory block.
   * @param[in] place   The place of the memory block.
   *
   * @note      If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(DDim dims, platform::Place place);

  /*! Resize the dimensions of the memory block. */
  inline Tensor& Resize(const DDim& dims);

  /*! Return the dimensions of the memory block. */
  inline const DDim& dims() const;

 private:
  /*! holds the memory block if allocated. */
  std::shared_ptr<Placeholder> holder_;

  /*! points to dimensions of memory block. */
  DDim dim_;
};
```
The purpose of `Placeholder` is to defer memory allocation: we can first define a Tensor, then set its size via the `Resize` interface, and only allocate the actual memory later by calling `mutable_data`:

```cpp
paddle::framework::Tensor t;
paddle::platform::CPUPlace place;
// set size first
t.Resize({2, 3});
// allocate memory on CPU later
t.mutable_data(place);
```
### Usage example of paddle::framework::Tensor

The following uses AddOp to illustrate how Tensor is used:

- InferShape

When the compute graph of a neural network is run, the `InferShape` interface of each `Operator` is called first to set the size of the output Tensor from the sizes of the input Tensors; this is where `Resize` gets called.

```cpp
void InferShape(const framework::InferShapeContext &ctx) const override {
  PADDLE_ENFORCE_EQ(ctx.Input<Tensor>("X")->dims(),
                    ctx.Input<Tensor>("Y")->dims(),
                    "Two input of Add Op's dimension must be same.");
  ctx.Output<Tensor>("Out")->Resize(ctx.Input<Tensor>("X")->dims());
}
```
- Run

The `Run` interface of an `Operator` eventually calls the `Compute` interface of the corresponding `OpKernel`. This is where memory is actually allocated, via `mutable_data`.

```cpp
void Compute(const framework::ExecutionContext& context) const override {
  auto* input0 = context.Input<Tensor>("X");
  auto* input1 = context.Input<Tensor>("Y");
  auto* output = context.Output<Tensor>("Out");

  output->mutable_data<T>(context.GetPlace());

  auto x = EigenVector<T>::Flatten(*input0);
  auto y = EigenVector<T>::Flatten(*input1);
  auto z = EigenVector<T>::Flatten(*output);

  auto place = context.GetEigenDevice<Place>();

  z.device(place) = x + y;
}
```
### Converting paddle::framework::Tensor to EigenTensor

As shown in the previous section, before the actual computation we must convert the input and output Tensors into a format Eigen supports. In [eigen.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen.h) we provide a set of global functions that convert a paddle::framework::Tensor into an EigenTensor/EigenMatrix/EigenVector/EigenScalar.

Taking EigenTensor as an example:

```cpp
Tensor t;
float* p = t.mutable_data<float>(make_ddim({1, 2, 3}), platform::CPUPlace());
for (int i = 0; i < 1 * 2 * 3; i++) {
  p[i] = static_cast<float>(i);
}

EigenTensor<float, 3>::Type et = EigenTensor<float, 3>::From(t);
```
`From` is an interface provided by the EigenTensor template that converts a paddle::framework::Tensor into an EigenTensor. Because the Tensor's rank is a template parameter, it must be specified explicitly at conversion time.

In Eigen, Tensors of different ranks are different types, and a Vector is a Tensor of rank 1. Note in particular that `EigenVector<T>::From` converts a one-dimensional Paddle Tensor into a one-dimensional Eigen Tensor (represented here by EigenVector), whereas `EigenVector<T>::Flatten` reshapes a Paddle Tensor of any rank, flattening it into a one-dimensional Eigen Tensor; the result type is again EigenVector.
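A short sketch of the difference (the shapes are illustrative; both calls create views over the Tensor's memory rather than copies):

```cpp
Tensor t1d;
t1d.mutable_data<float>(make_ddim({6}), platform::CPUPlace());
// From requires the Tensor to already be one-dimensional.
auto v_from = EigenVector<float>::From(t1d);

Tensor t2d;
t2d.mutable_data<float>(make_ddim({2, 3}), platform::CPUPlace());
// Flatten accepts any rank and views the 2 x 3 block as a length-6 vector.
auto v_flat = EigenVector<float>::Flatten(t2d);
```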
For more conversion methods, see the [unit tests](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen_test.cc) in eigen_test.cc.
### Implementing the computation

To perform the computation, the EigenTensor on the left-hand side of the assignment must call the `device` interface. Note that the operations between EigenTensors here only modify the data inside the original Tensors; they never change the shape information of the original Tensors.

```cpp
auto x = EigenVector<T>::Flatten(*input0);
auto y = EigenVector<T>::Flatten(*input1);
auto z = EigenVector<T>::Flatten(*output);
auto place = context.GetEigenDevice<Place>();
z.device(place) = x + y;
```
In this snippet, input0/input1/output may be Tensors of arbitrary rank. We call EigenVector's `Flatten` interface to view a Tensor of any rank as a one-dimensional EigenVector, and after the computation finishes the original shape information of input0/input1/output is unchanged. To change a Tensor's shape information, call the `Resize` interface.
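A small sketch of this behavior (the shapes are illustrative):

```cpp
// Suppose output was allocated as 2 x 3. Flatten only changes the Eigen view,
// so after the assignment the Tensor still reports dims() == {2, 3}.
auto z = EigenVector<T>::Flatten(*output);
z.device(place) = x + y;
// Changing the shape recorded in the Tensor requires an explicit Resize:
output->Resize({3, 2});
```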
Since the Eigen Tensor module is sparsely documented, the computation code of the relevant `OpKernel`s under TensorFlow's [kernels](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/kernels) module is a useful reference.
paddle/framework/attribute.cc

@@ -43,6 +43,10 @@ template <>
 AttrType AttrTypeID<std::vector<std::string>>() {
   return STRINGS;
 }
+template <>
+AttrType AttrTypeID<std::vector<std::pair<int, int>>>() {
+  return INT_PAIRS;
+}
 
 Attribute GetAttrValue(const OpDesc::Attr& attr_desc) {
   switch (attr_desc.type()) {
...
@@ -76,6 +80,14 @@ Attribute GetAttrValue(const OpDesc::Attr& attr_desc) {
     }
     return val;
   }
+  case paddle::framework::AttrType::INT_PAIRS: {
+    std::vector<std::pair<int, int>> val(attr_desc.int_pairs_size());
+    for (int i = 0; i < attr_desc.int_pairs_size(); ++i) {
+      val[i].first = attr_desc.int_pairs(i).first();
+      val[i].second = attr_desc.int_pairs(i).second();
+    }
+    return val;
+  }
   }
   PADDLE_ENFORCE(false, "Unknown OpDesc::AttrDesc::type !");
   return boost::blank();
...
paddle/framework/attribute.h

@@ -28,7 +28,8 @@ namespace paddle {
 namespace framework {
 
 typedef boost::variant<boost::blank, int, float, std::string, std::vector<int>,
-                       std::vector<float>, std::vector<std::string>>
+                       std::vector<float>, std::vector<std::string>,
+                       std::vector<std::pair<int, int>>>
     Attribute;
 
 typedef std::unordered_map<std::string, Attribute> AttributeMap;
...
paddle/framework/framework.proto

@@ -22,8 +22,14 @@ enum AttrType {
   INTS = 3;
   FLOATS = 4;
   STRINGS = 5;
+  INT_PAIRS = 6;
 }
 
+message IntPair {
+  required int32 first = 1;
+  required int32 second = 2;
+};
+
 // OpDesc describes an instance of a C++ framework::OperatorBase
 // derived class type.
 message OpDesc {
...
@@ -37,6 +43,7 @@ message OpDesc {
     repeated int32 ints = 6;
     repeated float floats = 7;
     repeated string strings = 8;
+    repeated IntPair int_pairs = 9;
   };
 
   message Var {
...
paddle/framework/op_registry_test.cc

@@ -174,36 +174,4 @@ TEST(OpRegistry, CustomChecker) {
   op->Run(scope, dev_ctx);
   int test_attr = op->GetAttr<int>("test_attr");
   ASSERT_EQ(test_attr, 4);
 }
-
-class TestAttrProtoMaker : public pd::OpProtoAndCheckerMaker {
- public:
-  TestAttrProtoMaker(pd::OpProto* proto, pd::OpAttrChecker* op_checker)
-      : OpProtoAndCheckerMaker(proto, op_checker) {
-    AddAttr<float>("scale", "scale of test op");
-    AddAttr<float>("scale", "scale of test op");
-  }
-};
-
-TEST(ProtoMaker, DuplicatedAttr) {
-  pd::OpProto op_proto;
-  pd::OpAttrChecker op_checker;
-  auto proto_maker = TestAttrProtoMaker(&op_proto, &op_checker);
-  ASSERT_THROW(proto_maker.Validate(), paddle::platform::EnforceNotMet);
-}
-
-class TestInOutProtoMaker : public pd::OpProtoAndCheckerMaker {
- public:
-  TestInOutProtoMaker(pd::OpProto* proto, pd::OpAttrChecker* op_checker)
-      : OpProtoAndCheckerMaker(proto, op_checker) {
-    AddInput("input", "input of test op");
-    AddInput("input", "input of test op");
-  }
-};
-
-TEST(ProtoMaker, DuplicatedInOut) {
-  pd::OpProto op_proto;
-  pd::OpAttrChecker op_checker;
-  auto proto_maker = TestInOutProtoMaker(&op_proto, &op_checker);
-  ASSERT_THROW(proto_maker.Validate(), paddle::platform::EnforceNotMet);
-}
 }
\ No newline at end of file
paddle/framework/operator_test.cc

@@ -263,4 +263,38 @@ TEST(Operator, Clone) {
   OperatorClone a("ABC", {}, {}, {});
   auto b = a.Clone();
   ASSERT_EQ(a.Type(), b->Type());
 }
+
+class TestAttrProtoMaker : public paddle::framework::OpProtoAndCheckerMaker {
+ public:
+  TestAttrProtoMaker(paddle::framework::OpProto* proto,
+                     paddle::framework::OpAttrChecker* op_checker)
+      : OpProtoAndCheckerMaker(proto, op_checker) {
+    AddAttr<float>("scale", "scale of test op");
+    AddAttr<float>("scale", "scale of test op");
+  }
+};
+
+TEST(ProtoMaker, DuplicatedAttr) {
+  paddle::framework::OpProto op_proto;
+  paddle::framework::OpAttrChecker op_checker;
+  auto proto_maker = TestAttrProtoMaker(&op_proto, &op_checker);
+  ASSERT_THROW(proto_maker.Validate(), paddle::platform::EnforceNotMet);
+}
+
+class TestInOutProtoMaker : public paddle::framework::OpProtoAndCheckerMaker {
+ public:
+  TestInOutProtoMaker(paddle::framework::OpProto* proto,
+                      paddle::framework::OpAttrChecker* op_checker)
+      : OpProtoAndCheckerMaker(proto, op_checker) {
+    AddInput("input", "input of test op");
+    AddInput("input", "input of test op");
+  }
+};
+
+TEST(ProtoMaker, DuplicatedInOut) {
+  paddle::framework::OpProto op_proto;
+  paddle::framework::OpAttrChecker op_checker;
+  auto proto_maker = TestInOutProtoMaker(&op_proto, &op_checker);
+  ASSERT_THROW(proto_maker.Validate(), paddle::platform::EnforceNotMet);
+}
\ No newline at end of file
paddle/gserver/layers/Conv3DLayer.cpp

@@ -42,10 +42,10 @@ bool Conv3DLayer::init(const LayerMap &layerMap,
     if (sharedBiases_) {
       CHECK_EQ((size_t)numFilters_, biasParameter_->getSize());
       biases_ =
-          std::unique_ptr<Weight>(new Weight(1, numFilters_, biasParameter_));
+          std::unique_ptr<Weight>(new Weight(numFilters_, 1, biasParameter_));
     } else {
       biases_ =
-          std::unique_ptr<Weight>(new Weight(1, getSize(), biasParameter_));
+          std::unique_ptr<Weight>(new Weight(getSize(), 1, biasParameter_));
     }
   }
   return true;
...
@@ -224,20 +224,31 @@ void Conv3DLayer::bpropData(int i) {
 }
 
 void Conv3DLayer::bpropBiases() {
+  MatrixPtr biases = Matrix::create(biases_->getWGrad()->getData(),
+                                    1,
+                                    biases_->getWGrad()->getElementCnt(),
+                                    false,
+                                    useGpu_);
   MatrixPtr outGradMat = getOutputGrad();
   if (this->sharedBiases_) {
-    biases_->getWGrad()->collectSharedBias(*outGradMat, 1.0f);
+    biases->collectSharedBias(*outGradMat, 1.0f);
   } else {
-    biases_->getWGrad()->collectBias(*outGradMat, 1.0f);
+    biases->collectBias(*outGradMat, 1.0f);
   }
 }
 
 void Conv3DLayer::addBias() {
   MatrixPtr outMat = getOutputValue();
+  MatrixPtr bias = Matrix::create(biases_->getW()->getData(),
+                                  1,
+                                  biases_->getW()->getElementCnt(),
+                                  false,
+                                  useGpu_);
   if (this->sharedBiases_) {
-    outMat->addSharedBias(*(biases_->getW()), 1.0f);
+    outMat->addSharedBias(*(bias), 1.0f);
   } else {
-    outMat->addBias(*(biases_->getW()), 1.0f);
+    outMat->addBias(*(bias), 1.0f);
   }
 }
...
paddle/gserver/layers/DeConv3DLayer.cpp

@@ -42,10 +42,10 @@ bool DeConv3DLayer::init(const LayerMap &layerMap,
     if (sharedBiases_) {
      CHECK_EQ((size_t)numFilters_, biasParameter_->getSize());
       biases_ =
-          std::unique_ptr<Weight>(new Weight(1, numFilters_, biasParameter_));
+          std::unique_ptr<Weight>(new Weight(numFilters_, 1, biasParameter_));
     } else {
       biases_ =
-          std::unique_ptr<Weight>(new Weight(1, getSize(), biasParameter_));
+          std::unique_ptr<Weight>(new Weight(getSize(), 1, biasParameter_));
     }
   }
   return true;
...
@@ -191,21 +191,31 @@ void DeConv3DLayer::bpropWeights(int i) {}
 void DeConv3DLayer::bpropData(int i) {}
 
 void DeConv3DLayer::bpropBiases() {
+  MatrixPtr biases = Matrix::create(biases_->getWGrad()->getData(),
+                                    1,
+                                    biases_->getWGrad()->getElementCnt(),
+                                    false,
+                                    useGpu_);
   const MatrixPtr &outGradMat = getOutputGrad();
   if (this->sharedBiases_) {
-    biases_->getWGrad()->collectSharedBias(*outGradMat, 1.0f);
+    biases->collectSharedBias(*outGradMat, 1.0f);
   } else {
-    biases_->getWGrad()->collectBias(*outGradMat, 1.0f);
+    biases->collectBias(*outGradMat, 1.0f);
   }
 }
 
 void DeConv3DLayer::addBias() {
   MatrixPtr outMat = getOutputValue();
+  MatrixPtr bias = Matrix::create(biases_->getW()->getData(),
+                                  1,
+                                  biases_->getW()->getElementCnt(),
+                                  false,
+                                  useGpu_);
   if (this->sharedBiases_) {
-    outMat->addSharedBias(*(biases_->getW()), 1.0f);
+    outMat->addSharedBias(*(bias), 1.0f);
   } else {
-    outMat->addBias(*(biases_->getW()), 1.0f);
+    outMat->addBias(*(bias), 1.0f);
   }
 }
...
paddle/operators/fc_op.cc

@@ -31,12 +31,23 @@ class FCOp : public NetOp {
     if (b != framework::kEmptyVarName) {
       AppendOp(framework::OpRegistry::CreateOp(
           "rowwise_add", {{"X", {Output("mul_out")}}, {"b", {Input("b")}}},
-          {{"Out", {Output("mul_out")}}}, {}));
+          {{"Out", {Output("add_out")}}}, {}));
+    } else {
+      AppendOp(framework::OpRegistry::CreateOp(
+          "identity", {{"X", {Output("mul_out")}}},
+          {{"Out", {Output("add_out")}}}, {}));
     }
 
     auto activation = GetAttr<std::string>("activation");
-    AppendOp(framework::OpRegistry::CreateOp(
-        activation, {{"X", {Output("mul_out")}}}, {{"Y", {Output("Y")}}}, {}));
+    if (activation == "identity") {
+      AppendOp(framework::OpRegistry::CreateOp(
+          activation, {{"X", {Output("add_out")}}}, {{"Out", {Output("Out")}}},
+          {}));
+    } else {
+      AppendOp(framework::OpRegistry::CreateOp(
+          activation, {{"X", {Output("add_out")}}}, {{"Y", {Output("Out")}}},
+          {}));
+    }
     CompleteAddOp(false);
   }
 };
...
@@ -49,8 +60,10 @@ class FCOpMaker : public framework::OpProtoAndCheckerMaker {
     AddInput("W", "The 2D weight matrix of FC operator.");
     AddInput("b", "The 1D bias vector of FC operator");
 
-    AddOutput("Y", "The activated output matrix of FC operator");
-    AddOutput("mul_out", "The non-actived output of FC operator, X * W + b")
+    AddOutput("Out", "The activated output matrix of FC operator");
+    AddOutput("mul_out", "The non-actived output of FC operator, X * W")
         .AsIntermediate();
+    AddOutput("add_out", "The non-actived output of FC operator, X * W + b")
+        .AsIntermediate();
     AddAttr<std::string>("activation", "The activation type of FC operator.")
         .SetDefault("identity")
...
@@ -65,7 +78,7 @@ learned weights with a matrix multiplication followed by a bias offset
 (optionally).
 
 Equation:
-    Y = Act(sum_n{X_i * W_i} + b)
+    Out = Act(sum_n{X_i * W_i} + b)
 
 where X_i is a 2D matrix of size (M x K), usually M is the minibatch size and
 K is the number of features. W_i is also a 2D matrix of size (K x N),
...
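As a side note, a minimal sketch of what this equation computes for a single input X with the sigmoid activation, written against plain Eigen (the sizes and the helper name are illustrative; this is not part of the diff):

```c++
#include <Eigen/Dense>

// Out = Act(X * W + b): X is M x K, W is K x N, b has length N.
Eigen::MatrixXf FCForward(const Eigen::MatrixXf& X, const Eigen::MatrixXf& W,
                          const Eigen::VectorXf& b) {
  Eigen::MatrixXf add_out = (X * W).rowwise() + b.transpose();  // M x N
  // sigmoid(v) = 1 / (1 + exp(-v)), applied element-wise
  return ((-add_out.array()).exp() + 1.0f).inverse().matrix();
}
```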
@@ -78,22 +91,6 @@ Activation type can be set to `identity` (default), `sigmoid` or `softmax`.
   }
 };
 
-class FCGradOp : public NetOp {
- public:
-  FCGradOp(const std::string& type, const framework::VariableNameMap& inputs,
-           const framework::VariableNameMap& outputs,
-           const framework::AttributeMap& attrs)
-      : NetOp(type, inputs, outputs, attrs) {
-    auto y_grad = Input(framework::GradVarName("Y"));
-    auto mul_out_grad = Input(framework::GradVarName("mul_out"));
-
-    auto x_grad = Output(framework::GradVarName("X"));
-    auto w_grad = Output(framework::GradVarName("W"));
-    auto b_grad = Output(framework::GradVarName("b"));
-
-    CompleteAddOp(false);
-  }
-};
-
 }  // namespace operators
 }  // namespace paddle
...
@@ -104,4 +101,4 @@ USE_OP(sigmoid);
 USE_OP(softmax);
 
 namespace ops = paddle::operators;
-REGISTER_OP(fc, ops::FCOp, ops::FCOpMaker, fc_grad, ops::FCGradOp);
+REGISTER_OP_WITHOUT_GRADIENT(fc, ops::FCOp, ops::FCOpMaker);
paddle/operators/lookup_table_op.h

@@ -30,12 +30,12 @@ class LookupTableKernel : public framework::OpKernel {
     auto ids_t = context.Input<Tensor>("Ids");      // int tensor
     auto output_t = context.Output<Tensor>("Out");  // float tensor
 
-    size_t N = table_t->dims()[0];
-    size_t D = table_t->dims()[1];
+    int N = table_t->dims()[0];
+    int D = table_t->dims()[1];
     auto ids = ids_t->data<int32_t>();
     auto table = table_t->data<T>();
     auto output = output_t->mutable_data<T>(context.GetPlace());
-    for (size_t i = 0; i < product(ids_t->dims()); ++i) {
+    for (ssize_t i = 0; i < product(ids_t->dims()); ++i) {
       PADDLE_ENFORCE_LT(ids[i], N);
       PADDLE_ENFORCE_GE(ids[i], 0);
       memcpy(output + i * D, table + ids[i] * D, D * sizeof(T));
...
@@ -51,8 +51,8 @@ class LookupTableGradKernel : public framework::OpKernel {
     auto d_output_t = context.Input<Tensor>(framework::GradVarName("Out"));
     auto d_table_t = context.Output<Tensor>(framework::GradVarName("W"));
 
-    size_t N = d_table_t->dims()[0];
-    size_t D = d_table_t->dims()[1];
+    int N = d_table_t->dims()[0];
+    int D = d_table_t->dims()[1];
     auto ids = ids_t->data<int32_t>();
     const T* d_output = d_output_t->data<T>();
     T* d_table = d_table_t->mutable_data<T>(context.GetPlace());
...
@@ -61,10 +61,10 @@ class LookupTableGradKernel : public framework::OpKernel {
     t.device(context.GetEigenDevice<platform::CPUPlace>()) =
         t.constant(static_cast<T>(0));
 
-    for (size_t i = 0; i < product(ids_t->dims()); ++i) {
+    for (ssize_t i = 0; i < product(ids_t->dims()); ++i) {
       PADDLE_ENFORCE_LT(ids[i], N);
       PADDLE_ENFORCE_GE(ids[i], 0);
-      for (size_t j = 0; j < D; ++j) {
+      for (int j = 0; j < D; ++j) {
         d_table[ids[i] * D + j] += d_output[i * D + j];
       }
     }
...
paddle/operators/mul_op.cc

@@ -75,8 +75,8 @@ class MulOpGrad : public framework::OperatorWithKernel {
     PADDLE_ENFORCE(y_dims[1] == out_dims[1],
                    "Out@GRAD M X N must equal to Y dims 1, N ");
 
-    x_grad->Resize(x_dims);
-    y_grad->Resize(y_dims);
+    if (x_grad) x_grad->Resize(x_dims);
+    if (y_grad) y_grad->Resize(y_dims);
   }
 };
...
paddle/operators/mul_op.h

@@ -31,13 +31,13 @@ template <typename Place, typename T>
 class MulKernel : public framework::OpKernel {
  public:
   void Compute(const framework::ExecutionContext& context) const override {
-    auto* X = context.Input<Tensor>("X");
-    auto* Y = context.Input<Tensor>("Y");
-    auto* Z = context.Output<Tensor>("Out");
-    Z->mutable_data<T>(context.GetPlace());
+    auto* x = context.Input<Tensor>("X");
+    auto* y = context.Input<Tensor>("Y");
+    auto* z = context.Output<Tensor>("Out");
+    z->mutable_data<T>(context.GetPlace());
     auto* device_context =
         const_cast<platform::DeviceContext*>(context.device_context_);
-    math::matmul<Place, T>(*X, false, *Y, false, 1, Z, 0, device_context);
+    math::matmul<Place, T>(*x, false, *y, false, 1, z, 0, device_context);
   }
 };
...
@@ -45,20 +45,24 @@ template <typename Place, typename T>
 class MulGradKernel : public framework::OpKernel {
  public:
   void Compute(const framework::ExecutionContext& ctx) const override {
-    auto* X = ctx.Input<Tensor>("X");
-    auto* Y = ctx.Input<Tensor>("Y");
-    auto* dOut = ctx.Input<Tensor>(framework::GradVarName("Out"));
+    auto* x = ctx.Input<Tensor>("X");
+    auto* y = ctx.Input<Tensor>("Y");
+    auto* dout = ctx.Input<Tensor>(framework::GradVarName("Out"));
 
-    auto* dX = ctx.Output<Tensor>(framework::GradVarName("X"));
-    auto* dY = ctx.Output<Tensor>(framework::GradVarName("Y"));
-    dX->mutable_data<T>(ctx.GetPlace());
-    dY->mutable_data<T>(ctx.GetPlace());
+    auto* dx = ctx.Output<Tensor>(framework::GradVarName("X"));
+    auto* dy = ctx.Output<Tensor>(framework::GradVarName("Y"));
     auto* device_context =
         const_cast<platform::DeviceContext*>(ctx.device_context_);
-    // dX = dOut * Y'. dX: M x K, dOut : M x N, Y : K x N
-    math::matmul<Place, T>(*dOut, false, *Y, true, 1, dX, 0, device_context);
-    // dY = X' * dOut. dY: K x N, dOut : M x N, X : M x K
-    math::matmul<Place, T>(*X, true, *dOut, false, 1, dY, 0, device_context);
+    if (dx) {
+      dx->mutable_data<T>(ctx.GetPlace());
+      // dx = dout * y'. dx: M x K, dout : M x N, y : K x N
+      math::matmul<Place, T>(*dout, false, *y, true, 1, dx, 0, device_context);
+    }
+    if (dy) {
+      dy->mutable_data<T>(ctx.GetPlace());
+      // dy = x' * dout. dy K x N, dout : M x N, x : M x K
+      math::matmul<Place, T>(*x, true, *dout, false, 1, dy, 0, device_context);
+    }
   }
 };
...
paddle/operators/rowwise_add_op.cc

@@ -64,8 +64,10 @@ class RowwiseAddGradOp : public framework::OperatorWithKernel {
     auto dims0 = ctx.Input<Tensor>("X")->dims();
     auto dims1 = ctx.Input<Tensor>("b")->dims();
     PADDLE_ENFORCE_EQ(1, dims1.size(), "b dims should be 1")
-    ctx.Output<Tensor>(framework::GradVarName("X"))->Resize(dims0);
-    ctx.Output<Tensor>(framework::GradVarName("b"))->Resize(dims1);
+    auto *dx = ctx.Output<Tensor>(framework::GradVarName("X"));
+    auto *db = ctx.Output<Tensor>(framework::GradVarName("b"));
+    if (dx) dx->Resize(dims0);
+    if (db) db->Resize(dims1);
   }
 };
...
paddle/operators/rowwise_add_op.h

@@ -51,20 +51,24 @@ template <typename Place, typename T>
 class RowwiseAddGradKernel : public framework::OpKernel {
  public:
   void Compute(const framework::ExecutionContext& context) const override {
-    auto* dOut = context.Input<Tensor>(framework::GradVarName("Out"));
-    auto* dX = context.Output<Tensor>(framework::GradVarName("X"));
+    auto* dout = context.Input<Tensor>(framework::GradVarName("Out"));
+    auto* dx = context.Output<Tensor>(framework::GradVarName("X"));
     auto* db = context.Output<Tensor>(framework::GradVarName("b"));
-    dX->mutable_data<T>(context.GetPlace());
-    db->mutable_data<T>(context.GetPlace());
 
-    auto OutGrad = EigenMatrix<T>::From(*dOut);
+    auto out_grad = EigenMatrix<T>::From(*dout);
     auto place = context.GetEigenDevice<Place>();
-    EigenMatrix<T>::From(*dX).device(place) = OutGrad;
+    if (dx) {
+      dx->mutable_data<T>(context.GetPlace());
+      EigenMatrix<T>::From(*dx).device(place) = out_grad;
+    }
 
-    // https://eigen.tuxfamily.org/dox/unsupported/TensorBase_8h_source.html
-    // colwise add
-    Eigen::array<int, 1> dims{{0}}; /* dimension to reduce */
-    EigenVector<T>::Flatten(*db).device(place) = OutGrad.sum(dims);
+    if (db) {
+      db->mutable_data<T>(context.GetPlace());
+      // https://eigen.tuxfamily.org/dox/unsupported/TensorBase_8h_source.html
+      // colwise add
+      Eigen::array<int, 1> dims{{0}}; /* dimension to reduce */
+      EigenVector<T>::Flatten(*db).device(place) = out_grad.sum(dims);
+    }
   }
 };
 
 }  // namespace operators
...
python/paddle/trainer/PyDataProvider2.py

@@ -27,6 +27,14 @@ class SequenceType(object):
     SEQUENCE = 1
     SUB_SEQUENCE = 2
 
+    @classmethod
+    def tostring(cls, value):
+        for k in cls.__dict__:
+            if not k.startswith('__'):
+                if getattr(cls, k) == value:
+                    return cls.__name__ + '.' + k
+        return 'INVALID(' + str(value) + ')'
+
 
 # TODO(yuyang18): Add string data type here.
 class DataType(object):
...
@@ -35,6 +43,14 @@ class DataType(object):
     SparseValue = 2
     Index = 3
 
+    @classmethod
+    def tostring(cls, value):
+        for k in cls.__dict__:
+            if not k.startswith('__'):
+                if getattr(cls, k) == value:
+                    return cls.__name__ + '.' + k
+        return 'INVALID(' + str(value) + ')'
+
 
 class CacheType(object):
     NO_CACHE = 0  # No cache at all
...
@@ -69,6 +85,26 @@ class InputType(object):
         self.seq_type = seq_type
         self.type = tp
 
+    def __repr__(self):
+        """
+        Return a human readable representation like 'InputType(dim=25921,
+        seq_type=SequenceType.NO_SEQUENCE, type=DataType.Dense)'
+        """
+        repr_str = type(self).__name__
+        repr_str += '('
+        serialize_func_map = {
+            'dim': repr,
+            'seq_type': SequenceType.tostring,
+            'type': DataType.tostring
+        }
+        for idx, k in enumerate(self.__slots__):
+            if idx != 0:
+                repr_str += ', '
+            repr_str += (
+                k + '=' + serialize_func_map.get(k, repr)(getattr(self, k)))
+        repr_str += ')'
+        return repr_str
+
 
 def dense_slot(dim, seq_type=SequenceType.NO_SEQUENCE):
     """
...
python/paddle/v2/framework/op.py

@@ -94,9 +94,14 @@ class OpDescCreationMethod(object):
                     new_attr.floats.extend(user_defined_attr)
                 elif attr.type == framework_pb2.STRINGS:
                     new_attr.strings.extend(user_defined_attr)
+                elif attr.type == framework_pb2.INT_PAIRS:
+                    for p in user_defined_attr:
+                        pair = new_attr.pairs.add()
+                        pair.first = p[0]
+                        pair.second = p[1]
                 else:
                     raise NotImplementedError("Not support attribute type " +
-                                              attr.type)
+                                              str(attr.type))
 
         return op_desc
...
python/paddle/v2/framework/tests/gradient_checker.py

@@ -286,6 +286,9 @@ class GradientChecker(unittest.TestCase):
         for no_grad in no_grad_set:
             if no_grad not in in_names:
                 raise ValueError("no_grad should be in in_names")
+            if no_grad in inputs_to_check:
+                raise ValueError("no_grad should not be in inputs_to_check")
+
         backward_op = core.Operator.backward(forward_op, no_grad_set)
 
         places = [core.CPUPlace()]
...
@@ -301,7 +304,6 @@ class GradientChecker(unittest.TestCase):
         check_names = [grad_var_name(name) for name in inputs_to_check]
         for place in places:
             # get analytical gradients according to different device
             analytic_grads = self.__get_gradient(forward_op, backward_op,
                                                  input_vars, check_names, place)
             self.__assert_is_close(numeric_grads, analytic_grads, check_names,
...
python/paddle/v2/framework/tests/test_fc_op.py

@@ -2,6 +2,7 @@ import unittest
 import numpy as np
 from gradient_checker import GradientChecker, create_op
 from op_test_util import OpTestMeta
+from paddle.v2.framework.op import Operator
 
 
 class TestFCOp(unittest.TestCase):
...
@@ -18,12 +19,23 @@ class TestFCOp(unittest.TestCase):
         mul_out = np.dot(self.inputs["X"], self.inputs["W"])
         add_out = np.add(mul_out, self.inputs["b"])
         sigmoid_out = 1 / (1 + np.exp(-add_out))
-        self.outputs = {"mul_out": add_out, "Y": sigmoid_out}
+        self.outputs = {
+            "mul_out": mul_out,
+            "add_out": add_out,
+            "Out": sigmoid_out
+        }
 
 
 class TestFCGradOp(GradientChecker):
     def test_normal(self):
-        print "nothing"
+        self.inputs = {
+            "X": np.random.random((4, 4)).astype("float32"),
+            "W": np.random.random((4, 4)).astype("float32"),
+            "b": np.random.random(4).astype("float32")
+        }
+        op = Operator(
+            "fc", X="X", W="W", b="b", Out="Out", activation="sigmoid")
+        #self.check_grad(op, self.inputs, ["X", "W", "b"], "Out")
 
 
 if __name__ == '__main__':
...
python/paddle/v2/framework/tests/test_mul_op.py

@@ -16,16 +16,37 @@ class TestMulOp(unittest.TestCase):
         self.outputs = {'Out': np.dot(self.inputs['X'], self.inputs['Y'])}
 
 
-class MulGradOpTest(GradientChecker):
-    def test_mul(self):
-        op = create_op("mul")
-        inputs = {
+class TestMulGradOp(GradientChecker):
+    def setUp(self):
+        self.op = create_op("mul")
+        self.inputs = {
             'X': np.random.random((32, 84)).astype("float32"),
             'Y': np.random.random((84, 100)).astype("float32")
         }
 
+    def test_cpu_gpu_compare(self):
+        self.compare_grad(self.op, self.inputs)
+
+    def test_normal(self):
         # mul op will enlarge the relative error
         self.check_grad(
-            op, inputs, set(["X", "Y"]), "Out", max_relative_error=0.5)
+            self.op, self.inputs, ["X", "Y"], "Out", max_relative_error=0.5)
+
+    def test_ignore_x(self):
+        self.check_grad(
+            self.op,
+            self.inputs, ["Y"],
+            "Out",
+            max_relative_error=0.5,
+            no_grad_set={"X"})
+
+    def test_ignore_y(self):
+        self.check_grad(
+            self.op,
+            self.inputs, ["X"],
+            "Out",
+            max_relative_error=0.5,
+            no_grad_set={"Y"})
 
 
 # TODO(dzh,qijun) : mulgrad test case need transpose feature of blas library
...
python/paddle/v2/framework/tests/test_rowwise_add_op.py

@@ -16,14 +16,22 @@ class TestRowwiseAddOp(unittest.TestCase):
         self.outputs = {'Out': np.add(self.inputs['X'], self.inputs['b'])}
 
 
-class RowwiseAddGradOpTest(GradientChecker):
-    def test_rowwise_add(self):
-        op = create_op("rowwise_add")
-        inputs = {
+class TestRowwiseAddGradOp(GradientChecker):
+    def setUp(self):
+        self.op = create_op("rowwise_add")
+        self.inputs = {
             "X": np.random.uniform(0.1, 1, [5, 10]).astype("float32"),
             "b": np.random.uniform(0.1, 1, [10]).astype("float32")
         }
-        self.check_grad(op, inputs, set(["X", "b"]), "Out")
+
+    def test_normal(self):
+        self.check_grad(self.op, self.inputs, ["X", "b"], "Out")
+
+    def test_ignore_b(self):
+        self.check_grad(self.op, self.inputs, ["X"], "Out", no_grad_set={"b"})
+
+    def test_ignore_x(self):
+        self.check_grad(self.op, self.inputs, ["b"], "Out", no_grad_set={"X"})
 
 
 if __name__ == '__main__':
...