Commit e61485e0, authored by caoying03

Merge branch 'develop' into refine_softmax_op

...@@ -23,17 +23,20 @@
- `framework::OperatorWithKernel`: inherits from `OperatorBase`; an Op of this kind has a compute function and is said to have a Kernel.
- `class OpProtoAndCheckerMaker`: describes the Op's inputs, outputs, attributes, and documentation, and is mainly used to generate the Python API.

Depending on whether it has a kernel, an Op falls into one of two kinds: Ops with a Kernel and Ops without one. The former are derived from `OperatorWithKernel`, the latter from `OperatorBase`. This tutorial focuses on how to write an Op with a Kernel. In short, an Op consists of the following pieces:
Content | Where it is defined
-------------- | :----------------------
OpProtoMaker definition | `.cc` file; a backward Op does not need an OpProtoMaker
Op definition | `.cc` file
Kernel implementation | a Kernel shared by CPU and GPU goes in the `.h` file; otherwise the CPU implementation goes in the `.cc` file and the GPU implementation in the `.cu` file
Op registration | the Op is registered in the `.cc` file; the CPU Kernel is registered in the `.cc` file and the GPU Kernel in the `.cu` file
New Ops are all added under the directory [paddle/operators](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/operators), with file names ending in `*_op.h` (if present), `*_op.cc`, and `*_op.cu` (if present).

The rest of this tutorial uses the matrix multiplication operator, [MulOp](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/mul_op.cc), as an example of how to write an Operator with a Kernel.
...@@ -42,9 +45,11 @@
### 1. Define the ProtoMaker class

The formula for matrix multiplication is $Out = X * Y$; the computation takes two inputs and produces one output.

First, define a `ProtoMaker` to describe the Op's inputs and outputs and to add its documentation:

```cpp
class MulOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  MulOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
...@@ -59,20 +64,20 @@ The equation is: Out = X * Y
  }
};
```
[`MulOpMaker`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/mul_op.cc#L43) inherits from `framework::OpProtoAndCheckerMaker`. Its constructor takes two arguments:

- `framework::OpProto`: stores the Op's inputs, outputs, and attributes, and is later used to generate the Python API.
- `framework::OpAttrChecker`: checks that the attribute values are valid.
In the constructor, `AddInput` adds an input, `AddOutput` adds an output, and `AddComment` adds the Op's documentation; each call records the corresponding entry in the `OpProto`.

The code above adds two inputs, `X` and `Y`, and one output, `Out`, to `MulOp`, and explains what each of them means; please follow the naming conventions when choosing these names. A sketch of the collapsed constructor body is shown below.
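For reference, the part of the `MulOpMaker` constructor collapsed in the diff above usually looks like the following sketch; the exact documentation strings are assumptions, not the committed text:

```cpp
MulOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
    : OpProtoAndCheckerMaker(proto, op_checker) {
  // Each call below records an entry in the OpProto.
  AddInput("X", "The first input of mul op");
  AddInput("Y", "The second input of mul op");
  AddOutput("Out", "The output of mul op");
  AddComment(R"DOC(
Two Element Mul Operator.

The equation is: Out = X * Y
)DOC");
}
```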
Take [`ScaleOp`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/scale_op.cc#L37) as another example:

```cpp
template <typename AttrType>
class ScaleOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
...@@ -87,17 +92,19 @@ The equation is: Out = scale*X
  }
};
```
This example differs from `MulOpMaker` in two ways (see the sketch after this list):

- `AddInput("X","...").NotInGradient()`: the input `X` takes no part in the gradient Op of `ScaleOp`. If one of your Op's inputs is not involved in the backward gradient computation, mark it by calling `.NotInGradient()` explicitly.
- `AddAttr<AttrType>("scale", "...").SetDefault(1.0);`: adds a `scale` coefficient as an attribute and sets its default value to 1.0.
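A minimal sketch of how the collapsed `ScaleOpMaker` constructor combines the two calls above; the documentation strings are assumptions:

```cpp
ScaleOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
    : OpProtoAndCheckerMaker(proto, op_checker) {
  // X is excluded from the gradient Op of ScaleOp.
  AddInput("X", "The input tensor of scale operator.").NotInGradient();
  AddOutput("Out", "The output tensor of scale operator.");
  AddComment(R"DOC(Scale operator

The equation is: Out = scale*X
)DOC");
  // "scale" becomes an attribute with default value 1.0.
  AddAttr<AttrType>("scale", "The scaling factor of the scale operator.")
      .SetDefault(1.0);
}
```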
### 2. Define the Operator class

The following code defines `MulOp`:

```cpp
class MulOp : public framework::OperatorWithKernel {
 public:
  using framework::OperatorWithKernel::OperatorWithKernel;
...@@ -121,33 +128,46 @@ class MulOp : public framework::OperatorWithKernel {
```
[`MulOp`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/mul_op.cc#L22) inherits from `OperatorWithKernel`. Its `public` member

```cpp
using framework::OperatorWithKernel::OperatorWithKernel;
```

reuses the constructors of the base class `OperatorWithKernel`; it could equivalently be written as:

```cpp
MulOp(const std::string &type, const framework::VariableNameMap &inputs,
      const framework::VariableNameMap &outputs,
      const framework::AttributeMap &attrs)
    : OperatorWithKernel(type, inputs, outputs, attrs) {}
```
`MulOp` also needs to override the `InferShape` interface. `InferShape` is a const member function and must not modify the Op's member variables; its argument is a `const framework::InferShapeContext &ctx`, through which the inputs, outputs, and attributes can be accessed. Its job (see the sketch below) is to:

1. Validate and fail early: check that the input dimensions, types, and so on are legal.
2. Set the shapes of the output Tensors.
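A minimal sketch of what `MulOp::InferShape` does for the lines collapsed above; the exact checks and error messages are assumptions based on this tutorial rather than the committed code:

```cpp
void InferShape(const framework::InferShapeContext &ctx) const override {
  // 1) validate: both inputs must be 2-D and the inner dimensions must match
  auto x_dims = ctx.Input<Tensor>("X")->dims();
  auto y_dims = ctx.Input<Tensor>("Y")->dims();
  PADDLE_ENFORCE_EQ(x_dims.size(), 2UL, "Input(X) of MulOp must be a matrix.");
  PADDLE_ENFORCE_EQ(y_dims.size(), 2UL, "Input(Y) of MulOp must be a matrix.");
  PADDLE_ENFORCE_EQ(x_dims[1], y_dims[0],
                    "The width of Input(X) must equal the height of Input(Y).");
  // 2) set the output shape
  ctx.Output<Tensor>("Out")->Resize({x_dims[0], y_dims[1]});
}
```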
The `OpProtoMaker` and `Op` class definitions are usually written in the `.cc` file, together with the registration code introduced below.

### 3. Define the OpKernel class
`MulKernel` inherits from `framework::OpKernel` and has two template parameters:

- `typename Place`: the device type. Add this template parameter when different devices (CPU, GPU) share the same Kernel; leave it out when they do not. An example of a Kernel that is not shared is [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cross_entropy_op.h#L43).

- `typename T`: the data type, such as `float` or `double`.

`MulKernel` must override the `Compute` interface.

- `Compute` takes a single argument, `const framework::ExecutionContext& context`.
- Compared with `InferShapeContext`, `ExecutionContext` additionally carries the device type; inputs, outputs, and attributes can be accessed through it in the same way.
- The body of `Compute` implements the actual computation of the `OpKernel`.

Below is the implementation of `MulKernel::Compute`:

```cpp
template <typename Place, typename T>
class MulKernel : public framework::OpKernel {
 public:
  void Compute(const framework::ExecutionContext& context) const override {
    auto* X = context.Input<Tensor>("X");
    auto* Y = context.Input<Tensor>("Y");
...@@ -157,141 +177,136 @@ class MulKernel : public framework::OpKernel {
        const_cast<platform::DeviceContext*>(context.device_context_);
    math::matmul<Place, T>(*X, false, *Y, false, 1, Z, 0, device_context);
  }
};
```
Note: **different devices (CPU, GPU) share a single Op definition; whether they can share the same `OpKernel` depends on whether the functions called inside `Compute` support both devices.**

The CPU and GPU implementations of `MulOp` share the same `Kernel`. For an example where the `OpKernel` is not shared, see [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cross_entropy_op.h#L43).

To keep the `Compute` implementation simple and to let CPU and GPU reuse the same code, we usually implement `Compute` with the Eigen unsupported Tensor module; a device-independent sketch of this style follows below. For how to use Eigen inside PaddlePaddle, see the [usage guide](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/dev/use_eigen_cn.md).
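As an illustration of the Eigen-based style, here is a sketch of a hypothetical element-wise `SquareKernel` (not part of this commit), written with the same patterns as the kernels in this commit (for example `cos_sim_op.h`), so that the CPU and GPU kernels are instantiated from the same code simply by choosing `Place`:

```cpp
// Hypothetical element-wise kernel: Out = X * X, written with Eigen so the
// same expression runs on CPU and GPU depending on the Place template argument.
template <typename Place, typename T>
class SquareKernel : public framework::OpKernel {
 public:
  void Compute(const framework::ExecutionContext& context) const override {
    auto* input = context.Input<framework::Tensor>("X");
    auto* output = context.Output<framework::Tensor>("Out");
    output->mutable_data<T>(context.GetPlace());

    auto x = framework::EigenVector<T>::Flatten(*input);
    auto out = framework::EigenVector<T>::Flatten(*output);
    auto place = context.GetEigenDevice<Place>();
    // The device object decides where the expression is evaluated.
    out.device(place) = x * x;
  }
};
```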
At this point the forward Op is complete. Next, the Op and its Kernels must be registered in the `.cc` file.

The backward Op class and the backward OpKernel are defined in the same way as their forward counterparts, so they are not repeated here. **Note, however, that a backward Op has no `ProtoMaker`.**
### 4. Register the Operator

- Register the forward and backward Op classes, and the CPU Kernels, in the `.cc` file:
```cpp
namespace ops = paddle::operators;
REGISTER_OP(mul, ops::MulOp, ops::MulOpMaker, mul_grad, ops::MulOpGrad);
REGISTER_OP_CPU_KERNEL(mul, ops::MulKernel<paddle::platform::CPUPlace, float>);
REGISTER_OP_CPU_KERNEL(mul_grad,
ops::MulGradKernel<paddle::platform::CPUPlace, float>);
```
In the code above:

- `REGISTER_OP`: registers the `ops::MulOp` class under the type name `mul`, with `ops::MulOpMaker` as its `ProtoMaker`, and registers `ops::MulOpGrad` under the type name `mul_grad`.
- `REGISTER_OP_WITHOUT_GRADIENT`: registers an Op that has no backward pass.
- `REGISTER_OP_CPU_KERNEL`: registers the `ops::MulKernel` class, specializing its template parameters to `paddle::platform::CPUPlace` and `float`; `ops::MulGradKernel` is registered in the same way.
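For completeness, an Op without a gradient is registered with `REGISTER_OP_WITHOUT_GRADIENT`; the example below is the registration of `identity` taken from the `identity_op.cc` added elsewhere in this commit:

```cpp
REGISTER_OP_WITHOUT_GRADIENT(identity, ops::IdentityOp<float>,
                             ops::IdentityOpMaker<float>);
```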
- Register the GPU Kernels in the `.cu` file.
  - Note that if the GPU Kernel is implemented with the Eigen unsupported module, the macro `#define EIGEN_USE_GPU` must appear at the very beginning of the `.cu` file, as in the example below:

```cpp
// if use Eigen unsupported module before include head files
#define EIGEN_USE_GPU
namespace ops = paddle::operators;
REGISTER_OP_GPU_KERNEL(mul, ops::MulKernel<paddle::platform::GPUPlace, float>);
REGISTER_OP_GPU_KERNEL(mul_grad,
                       ops::MulGradKernel<paddle::platform::GPUPlace, float>);
```
### 5. Compile

- Simple Ops **without extra dependencies** need no change to the CMakeLists.txt file: [paddle/operators/CMakeLists.txt](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/CMakeLists.txt) automatically adds every new `*_op.cc` file under `paddle/operators` to the build.
- More complex Ops **with extra dependencies** still require editing [paddle/operators/CMakeLists.txt](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/CMakeLists.txt). For example, `mul_op` depends on `math_function`, so the following line must be added to `CMakeLists.txt`:
```
op_library(mul_op SRCS mul_op.cc mul_op.cu DEPS math_function)
```

- Run the following command to build the Op:

```
make mul_op
```
## Bind to Python

- Bind to Python

  In [`paddle/pybind/pybind.cc`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/pybind/pybind.cc), use `USE_OP` to tell the compiler which Op object files need to be linked; see the [code comments](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/op_registry.h#L81) for details.
```
USE_OP(mul);
```

If only the CPU Kernel is implemented, use `USE_CPU_ONLY_OP`:

```
USE_CPU_ONLY_OP(gather);
```

If the Op has no Kernel, use `USE_NO_KENREL_OP`:

```
USE_NO_KENREL_OP(recurrent);
```
- Generate the library

  No change to [`paddle/pybind/CMakeLists.txt`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/pybind/CMakeLists.txt) is needed; new `*_op.cc` files under `paddle/operators` are linked into the generated library automatically.
## Write unit tests

The unit tests cover comparing the forward Op's CPU and GPU implementations, comparing the backward Op's CPU and GPU implementations, and checking the backward Op's gradients. Below we walk through the [unit tests for `MulOp`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/tests/test_mul_op.py).

### Unit-testing the forward Operator

The forward-Op unit test inherits from `unittest.TestCase` and sets the metaclass `__metaclass__ = OpTestMeta`; the detailed checks are carried out inside `OpTestMeta`. Testing a forward Operator requires:

1. Defining the inputs, outputs, and relevant attributes in the `setUp` function.
2. Generating random input data.
3. Implementing the same computation as the forward operator in the Python script to obtain a reference output, which is then compared against the operator's forward output.
```python
import unittest
import numpy as np
from gradient_checker import GradientChecker, create_op
from op_test_util import OpTestMeta


class TestMulOp(unittest.TestCase):
    __metaclass__ = OpTestMeta

    def setUp(self):
        self.type = "mul"
        self.inputs = {
            'X': np.random.random((32, 84)).astype("float32"),
            'Y': np.random.random((84, 100)).astype("float32")
        }
        self.outputs = {'Out': np.dot(self.inputs['X'], self.inputs['Y'])}
```

The code above first imports the required packages. The key fields set in `setUp` are:

- `self.type = "mul"`: the Op type, which must match the type used at registration.
- `self.inputs`: the inputs, of type `numpy.array`, initialized here.
- `self.outputs`: the expected outputs, computed in the Python script with the same logic as the operator, giving the Python-side reference result.

### Unit-testing the backward Operator

The backward-Op unit test inherits from `GradientChecker`, which in turn inherits from `unittest.TestCase`, so **each backward test function must start with `test_`**.

```python
class TestMulGradOp(GradientChecker):
    def setUp(self):
        self.op = create_op("mul")
...@@ -325,33 +340,34 @@ class TestMulGradOp(GradientChecker):
            no_grad_set={"Y"})
```
Some key points in the code above (a sketch of the collapsed test methods follows this list):

- `create_op("mul")` creates the forward Op that the backward Op corresponds to.
- The `compare_grad` call compares the CPU and GPU results.
- `test_normal` calls `check_grad`, which checks the correctness and stability of the gradients numerically.
  - The first argument, `self.op`: the forward Op.
  - The second argument, `self.inputs`: the input dict; its keys must match the names defined in the `ProtoMaker`.
  - The third argument, `["X", "Y"]`: gradient checks are performed for the input variables `X` and `Y`.
  - The fourth argument, `"Out"`: the final output variable of the forward network.
- `test_ignore_x` and `test_ignore_y` cover the case where only one input's gradient needs to be computed.
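The part of `TestMulGradOp` collapsed in the diff above typically contains test methods along the following lines, continuing the class shown before; the method names mirror the bullets above, while the tolerance value is an assumption:

```python
    def test_cpu_gpu_compare(self):
        # compare the CPU and GPU results
        self.compare_grad(self.op, self.inputs)

    def test_normal(self):
        # numerically check the gradients of X and Y w.r.t. Out
        self.check_grad(
            self.op, self.inputs, ["X", "Y"], "Out", max_relative_error=0.5)

    def test_ignore_x(self):
        # only Y's gradient is computed; X is excluded via no_grad_set
        self.check_grad(
            self.op,
            self.inputs, ["Y"],
            "Out",
            max_relative_error=0.5,
            no_grad_set={"X"})
```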
### Build and run the unit tests

Once the unit test is written, add the following line to [`python/paddle/v2/framework/tests/CMakeLists.txt`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/tests/CMakeLists.txt) so that the test becomes part of the project:
```
py_test(test_mul_op SRCS test_mul_op.py)
```
Note that **unlike building a single Op, running the unit tests requires building the whole project** with `WITH_TESTING` enabled, i.e. `cmake paddle_dir -DWITH_TESTING=ON`. After a successful build, run the test with:
```bash
make test ARGS="-R test_mul_op -V"
```
or:
```bash
ctest -R test_mul_op
```
...@@ -21,16 +21,16 @@ namespace framework { ...@@ -21,16 +21,16 @@ namespace framework {
/// @cond HIDDEN /// @cond HIDDEN
template <int i> template <int i>
Dim<i> make_dim(const int* d) { Dim<i> make_dim(const int64_t* d) {
return Dim<i>(*d, make_dim<i - 1>(d + 1)); return Dim<i>(*d, make_dim<i - 1>(d + 1));
} }
template <> template <>
Dim<1> make_dim<1>(const int* d) { Dim<1> make_dim<1>(const int64_t* d) {
return Dim<1>(*d); return Dim<1>(*d);
} }
void make_ddim(DDim& ddim, const int* dims, int n) { void make_ddim(DDim& ddim, const int64_t* dims, int n) {
switch (n) { switch (n) {
case 1: case 1:
ddim = make_dim<1>(dims); ddim = make_dim<1>(dims);
...@@ -67,13 +67,13 @@ void make_ddim(DDim& ddim, const int* dims, int n) { ...@@ -67,13 +67,13 @@ void make_ddim(DDim& ddim, const int* dims, int n) {
/// @endcond /// @endcond
DDim make_ddim(std::initializer_list<int> dims) { DDim make_ddim(std::initializer_list<int64_t> dims) {
DDim result(make_dim(0)); DDim result(make_dim(0));
make_ddim(result, dims.begin(), dims.size()); make_ddim(result, dims.begin(), dims.size());
return result; return result;
} }
DDim make_ddim(const std::vector<int>& dims) { DDim make_ddim(const std::vector<int64_t>& dims) {
DDim result(make_dim(0)); DDim result(make_dim(0));
make_ddim(result, &dims[0], dims.size()); make_ddim(result, &dims[0], dims.size());
return result; return result;
...@@ -81,12 +81,12 @@ DDim make_ddim(const std::vector<int>& dims) { ...@@ -81,12 +81,12 @@ DDim make_ddim(const std::vector<int>& dims) {
/// @cond HIDDEN /// @cond HIDDEN
// XXX For some reason, putting this in an anonymous namespace causes errors // XXX For some reason, putting this in an anonymous namespace causes errors
class DynamicMutableIndexer : public boost::static_visitor<int&> { class DynamicMutableIndexer : public boost::static_visitor<int64_t&> {
public: public:
explicit DynamicMutableIndexer(int idx) : idx_(idx) {} explicit DynamicMutableIndexer(int idx) : idx_(idx) {}
template <int D> template <int D>
int& operator()(Dim<D>& dim) const { int64_t& operator()(Dim<D>& dim) const {
return dim[idx_]; return dim[idx_];
} }
...@@ -94,12 +94,12 @@ class DynamicMutableIndexer : public boost::static_visitor<int&> { ...@@ -94,12 +94,12 @@ class DynamicMutableIndexer : public boost::static_visitor<int&> {
int idx_; int idx_;
}; };
class DynamicConstIndexer : public boost::static_visitor<int> { class DynamicConstIndexer : public boost::static_visitor<int64_t> {
public: public:
explicit DynamicConstIndexer(int idx) : idx_(idx) {} explicit DynamicConstIndexer(int idx) : idx_(idx) {}
template <int D> template <int D>
int operator()(const Dim<D>& dim) const { int64_t operator()(const Dim<D>& dim) const {
return dim[idx_]; return dim[idx_];
} }
...@@ -109,22 +109,22 @@ class DynamicConstIndexer : public boost::static_visitor<int> { ...@@ -109,22 +109,22 @@ class DynamicConstIndexer : public boost::static_visitor<int> {
/// @endcond /// @endcond
int& DDim::operator[](int idx) { int64_t& DDim::operator[](int idx) {
return boost::apply_visitor(DynamicMutableIndexer(idx), var); return boost::apply_visitor(DynamicMutableIndexer(idx), var);
} }
int DDim::operator[](int idx) const { int64_t DDim::operator[](int idx) const {
return boost::apply_visitor(DynamicConstIndexer(idx), var); return boost::apply_visitor(DynamicConstIndexer(idx), var);
} }
ssize_t DDim::size() const { return arity(*this); } int64_t DDim::size() const { return arity(*this); }
bool DDim::operator==(DDim d) const { bool DDim::operator==(DDim d) const {
if (var.which() != d.getVar().which()) { if (var.which() != d.getVar().which()) {
return false; return false;
} else { } else {
std::vector<int> v1 = vectorize(*this); std::vector<int64_t> v1 = vectorize(*this);
std::vector<int> v2 = vectorize(d); std::vector<int64_t> v2 = vectorize(d);
for (unsigned int i = 0; i < v1.size(); i++) { for (unsigned int i = 0; i < v1.size(); i++) {
if (v1[i] != v2[i]) { if (v1[i] != v2[i]) {
...@@ -139,10 +139,10 @@ bool DDim::operator==(DDim d) const { ...@@ -139,10 +139,10 @@ bool DDim::operator==(DDim d) const {
bool DDim::operator!=(DDim d) const { return !(*this == d); } bool DDim::operator!=(DDim d) const { return !(*this == d); }
DDim DDim::operator+(DDim d) const { DDim DDim::operator+(DDim d) const {
std::vector<int> v1 = vectorize(*this); std::vector<int64_t> v1 = vectorize(*this);
std::vector<int> v2 = vectorize(d); std::vector<int64_t> v2 = vectorize(d);
std::vector<int> v3; std::vector<int64_t> v3;
assert(v1.size() == v2.size()); assert(v1.size() == v2.size());
...@@ -154,10 +154,10 @@ DDim DDim::operator+(DDim d) const { ...@@ -154,10 +154,10 @@ DDim DDim::operator+(DDim d) const {
} }
DDim DDim::operator*(DDim d) const { DDim DDim::operator*(DDim d) const {
std::vector<int> v1 = vectorize(*this); std::vector<int64_t> v1 = vectorize(*this);
std::vector<int> v2 = vectorize(d); std::vector<int64_t> v2 = vectorize(d);
std::vector<int> v3; std::vector<int64_t> v3;
assert(v1.size() == v2.size()); assert(v1.size() == v2.size());
...@@ -168,15 +168,15 @@ DDim DDim::operator*(DDim d) const { ...@@ -168,15 +168,15 @@ DDim DDim::operator*(DDim d) const {
return make_ddim(v3); return make_ddim(v3);
} }
int get(const DDim& ddim, int idx) { return ddim[idx]; } int64_t get(const DDim& ddim, int idx) { return ddim[idx]; }
void set(DDim& ddim, int idx, int value) { ddim[idx] = value; } void set(DDim& ddim, int idx, int value) { ddim[idx] = value; }
/// @cond HIDDEN /// @cond HIDDEN
struct VectorizeVisitor : public boost::static_visitor<> { struct VectorizeVisitor : public boost::static_visitor<> {
std::vector<int>& vector; std::vector<int64_t>& vector;
explicit VectorizeVisitor(std::vector<int>& v) : vector(v) {} explicit VectorizeVisitor(std::vector<int64_t>& v) : vector(v) {}
template <typename T> template <typename T>
void operator()(const T& t) { void operator()(const T& t) {
...@@ -188,31 +188,31 @@ struct VectorizeVisitor : public boost::static_visitor<> { ...@@ -188,31 +188,31 @@ struct VectorizeVisitor : public boost::static_visitor<> {
}; };
/// @endcond /// @endcond
std::vector<int> vectorize(const DDim& ddim) { std::vector<int64_t> vectorize(const DDim& ddim) {
std::vector<int> result; std::vector<int64_t> result;
VectorizeVisitor visitor(result); VectorizeVisitor visitor(result);
boost::apply_visitor(visitor, ddim); boost::apply_visitor(visitor, ddim);
return result; return result;
} }
struct ProductVisitor : public boost::static_visitor<ssize_t> { struct ProductVisitor : public boost::static_visitor<int64_t> {
template <int D> template <int D>
ssize_t operator()(const Dim<D>& dim) { int64_t operator()(const Dim<D>& dim) {
return product(dim); return product(dim);
} }
}; };
ssize_t product(const DDim& ddim) { int64_t product(const DDim& ddim) {
ProductVisitor visitor; ProductVisitor visitor;
return boost::apply_visitor(visitor, ddim); return boost::apply_visitor(visitor, ddim);
} }
struct SliceVectorizeVisitor : public boost::static_visitor<> { struct SliceVectorizeVisitor : public boost::static_visitor<> {
std::vector<int>& vector; std::vector<int64_t>& vector;
int begin; int begin;
int end; int end;
SliceVectorizeVisitor(std::vector<int>& v, int b, int e) SliceVectorizeVisitor(std::vector<int64_t>& v, int b, int e)
: vector(v), begin(b), end(e) { : vector(v), begin(b), end(e) {
PADDLE_ENFORCE(begin < end, PADDLE_ENFORCE(begin < end,
"Begin index must be less than end index in ddim slice."); "Begin index must be less than end index in ddim slice.");
...@@ -240,7 +240,7 @@ struct SliceVectorizeVisitor : public boost::static_visitor<> { ...@@ -240,7 +240,7 @@ struct SliceVectorizeVisitor : public boost::static_visitor<> {
}; };
DDim slice_ddim(const DDim& dim, int begin, int end) { DDim slice_ddim(const DDim& dim, int begin, int end) {
std::vector<int> vec; std::vector<int64_t> vec;
vec.reserve(end - begin); vec.reserve(end - begin);
SliceVectorizeVisitor visitor(vec, begin, end); SliceVectorizeVisitor visitor(vec, begin, end);
boost::apply_visitor(visitor, dim); boost::apply_visitor(visitor, dim);
...@@ -280,7 +280,7 @@ std::ostream& operator<<(std::ostream& os, const DDim& ddim) { ...@@ -280,7 +280,7 @@ std::ostream& operator<<(std::ostream& os, const DDim& ddim) {
return os; return os;
} }
DDim::DDim(std::initializer_list<int> init_list) { DDim::DDim(std::initializer_list<int64_t> init_list) {
*this = make_ddim(init_list); *this = make_ddim(init_list);
} }
} // namespace framework } // namespace framework
......
...@@ -40,7 +40,7 @@ struct DDim { ...@@ -40,7 +40,7 @@ struct DDim {
template <int D> template <int D>
explicit DDim(const Dim<D>& in) : var(in) {} explicit DDim(const Dim<D>& in) : var(in) {}
/*implicit*/ DDim(std::initializer_list<int> init_list); /*implicit*/ DDim(std::initializer_list<int64_t> init_list);
template <int D> template <int D>
DDim& operator=(const Dim<D>& in) { DDim& operator=(const Dim<D>& in) {
...@@ -48,8 +48,8 @@ struct DDim { ...@@ -48,8 +48,8 @@ struct DDim {
return *this; return *this;
} }
int& operator[](int idx); int64_t& operator[](int idx);
int operator[](int idx) const; int64_t operator[](int idx) const;
template <typename Visitor> template <typename Visitor>
typename Visitor::result_type apply_visitor(Visitor& visitor) { typename Visitor::result_type apply_visitor(Visitor& visitor) {
...@@ -71,15 +71,15 @@ struct DDim { ...@@ -71,15 +71,15 @@ struct DDim {
DDim operator*(DDim d) const; DDim operator*(DDim d) const;
ssize_t size() const; int64_t size() const;
}; };
/** /**
* \brief Make a DDim from std::vector<int> * \brief Make a DDim from std::vector<int64_t>
* *
* \param dims An vector of ints. Must be sized between [1, 9] * \param dims An vector of ints. Must be sized between [1, 9]
*/ */
DDim make_ddim(const std::vector<int>& dims); DDim make_ddim(const std::vector<int64_t>& dims);
/** /**
* \brief Make a DDim from an initializer list * \brief Make a DDim from an initializer list
...@@ -87,14 +87,14 @@ DDim make_ddim(const std::vector<int>& dims); ...@@ -87,14 +87,14 @@ DDim make_ddim(const std::vector<int>& dims);
* \param dims An initializer list of ints. Must be sized between [1, 9] * \param dims An initializer list of ints. Must be sized between [1, 9]
* *
*/ */
DDim make_ddim(std::initializer_list<int> dims); DDim make_ddim(std::initializer_list<int64_t> dims);
int get(const DDim& dim, int idx); int64_t get(const DDim& dim, int idx);
void set(DDim& dim, int idx, int val); void set(DDim& dim, int idx, int val);
std::vector<int> vectorize(const DDim& ddim); std::vector<int64_t> vectorize(const DDim& ddim);
ssize_t product(const DDim& ddim); int64_t product(const DDim& ddim);
/** /**
* \brief Slice a ddim * \brief Slice a ddim
......
...@@ -12,7 +12,7 @@ TEST(DDim, Equality) { ...@@ -12,7 +12,7 @@ TEST(DDim, Equality) {
EXPECT_EQ(ddim[2], 5); EXPECT_EQ(ddim[2], 5);
// construct a DDim from a vector // construct a DDim from a vector
std::vector<int> vec({9, 1, 5}); std::vector<int64_t> vec({9, 1, 5});
paddle::framework::DDim vddim = paddle::framework::make_ddim(vec); paddle::framework::DDim vddim = paddle::framework::make_ddim(vec);
EXPECT_EQ(ddim[0], 9); EXPECT_EQ(ddim[0], 9);
EXPECT_EQ(ddim[1], 1); EXPECT_EQ(ddim[1], 1);
...@@ -25,7 +25,7 @@ TEST(DDim, Equality) { ...@@ -25,7 +25,7 @@ TEST(DDim, Equality) {
EXPECT_EQ(paddle::framework::get(ddim, 0), 6); EXPECT_EQ(paddle::framework::get(ddim, 0), 6);
// vectorize a DDim // vectorize a DDim
std::vector<int> res_vec = paddle::framework::vectorize(vddim); std::vector<int64_t> res_vec = paddle::framework::vectorize(vddim);
EXPECT_EQ(res_vec[0], 9); EXPECT_EQ(res_vec[0], 9);
EXPECT_EQ(res_vec[1], 1); EXPECT_EQ(res_vec[1], 1);
EXPECT_EQ(res_vec[2], 5); EXPECT_EQ(res_vec[2], 5);
......
...@@ -17,13 +17,13 @@ struct Dim { ...@@ -17,13 +17,13 @@ struct Dim {
static constexpr int dimensions = i; static constexpr int dimensions = i;
template <typename... Args> template <typename... Args>
HOSTDEVICE Dim(int _head, Args... _tail) : head(_head), tail(_tail...) { HOSTDEVICE Dim(int64_t _head, Args... _tail) : head(_head), tail(_tail...) {
static_assert(sizeof...(_tail) == i - 1, static_assert(sizeof...(_tail) == i - 1,
"Dim initialized with the wrong number of parameters"); "Dim initialized with the wrong number of parameters");
} }
HOSTDEVICE HOSTDEVICE
Dim(int _head, const Dim<i - 1>& _tail) : head(_head), tail(_tail) {} Dim(int64_t _head, const Dim<i - 1>& _tail) : head(_head), tail(_tail) {}
HOSTDEVICE HOSTDEVICE
Dim() : head(0), tail() {} Dim() : head(0), tail() {}
...@@ -31,12 +31,12 @@ struct Dim { ...@@ -31,12 +31,12 @@ struct Dim {
/** Construct a Dim from a linear index and size. Uses Fortran order /** Construct a Dim from a linear index and size. Uses Fortran order
* indexing. */ * indexing. */
HOSTDEVICE HOSTDEVICE
Dim(int idx, const Dim<i>& size) Dim(int64_t idx, const Dim<i>& size)
: head(idx % size.head), tail(idx / size.head, size.tail) {} : head(idx % size.head), tail(idx / size.head, size.tail) {}
/** Construct a Dim with each dimension set to the given index */ /** Construct a Dim with each dimension set to the given index */
HOSTDEVICE HOSTDEVICE
Dim(int idx) : head(idx), tail(idx) {} Dim(int64_t idx) : head(idx), tail(idx) {}
HOSTDEVICE HOSTDEVICE
bool operator==(const Dim<i>& o) const { bool operator==(const Dim<i>& o) const {
...@@ -47,13 +47,13 @@ struct Dim { ...@@ -47,13 +47,13 @@ struct Dim {
bool operator!=(const Dim<i>& o) const { return !(*this == o); } bool operator!=(const Dim<i>& o) const { return !(*this == o); }
HOSTDEVICE HOSTDEVICE
int& operator[](int idx); int64_t& operator[](int idx);
HOSTDEVICE HOSTDEVICE
int operator[](int idx) const; int64_t operator[](int idx) const;
HOST std::string to_string() const; HOST std::string to_string() const;
int head; int64_t head;
Dim<i - 1> tail; Dim<i - 1> tail;
}; };
...@@ -63,7 +63,7 @@ struct Dim<1> { ...@@ -63,7 +63,7 @@ struct Dim<1> {
static constexpr int dimensions = 1; static constexpr int dimensions = 1;
HOSTDEVICE HOSTDEVICE
Dim(int _head) : head(_head) {} Dim(int64_t _head) : head(_head) {}
HOSTDEVICE HOSTDEVICE
Dim() : head(0) {} Dim() : head(0) {}
...@@ -86,11 +86,11 @@ struct Dim<1> { ...@@ -86,11 +86,11 @@ struct Dim<1> {
bool operator!=(const Dim<1>& o) const { return !(*this == o); } bool operator!=(const Dim<1>& o) const { return !(*this == o); }
HOSTDEVICE HOSTDEVICE
int& operator[](int idx); int64_t& operator[](int idx);
HOSTDEVICE HOSTDEVICE
int operator[](int idx) const; int64_t operator[](int idx) const;
int head; int64_t head;
}; };
namespace { namespace {
...@@ -100,12 +100,12 @@ template <int i> ...@@ -100,12 +100,12 @@ template <int i>
struct DimGetter { struct DimGetter {
// Return a copy if Dim is const // Return a copy if Dim is const
template <typename D> template <typename D>
HOSTDEVICE static int impl(const D& d) { HOSTDEVICE static int64_t impl(const D& d) {
return DimGetter<i - 1>::impl(d.tail); return DimGetter<i - 1>::impl(d.tail);
} }
// Return a reference if Dim is mutable // Return a reference if Dim is mutable
template <typename D> template <typename D>
HOSTDEVICE static int& impl(D& d) { HOSTDEVICE static int64_t& impl(D& d) {
return DimGetter<i - 1>::impl(d.tail); return DimGetter<i - 1>::impl(d.tail);
} }
}; };
...@@ -115,18 +115,18 @@ template <> ...@@ -115,18 +115,18 @@ template <>
struct DimGetter<0> { struct DimGetter<0> {
// Return a copy if Dim is const // Return a copy if Dim is const
template <typename D> template <typename D>
HOSTDEVICE static int impl(const D& d) { HOSTDEVICE static int64_t impl(const D& d) {
return d.head; return d.head;
} }
// Return a reference if Dim is mutable // Return a reference if Dim is mutable
template <typename D> template <typename D>
HOSTDEVICE static int& impl(D& d) { HOSTDEVICE static int64_t& impl(D& d) {
return d.head; return d.head;
} }
}; };
template <int D> template <int D>
HOSTDEVICE int& indexer(Dim<D>& dim, int idx) { HOSTDEVICE int64_t& indexer(Dim<D>& dim, int idx) {
#ifndef __CUDA_ARCH__ #ifndef __CUDA_ARCH__
if (idx < 0) { if (idx < 0) {
throw std::invalid_argument("Tried to access a negative dimension"); throw std::invalid_argument("Tried to access a negative dimension");
...@@ -141,7 +141,7 @@ HOSTDEVICE int& indexer(Dim<D>& dim, int idx) { ...@@ -141,7 +141,7 @@ HOSTDEVICE int& indexer(Dim<D>& dim, int idx) {
} }
template <> template <>
HOSTDEVICE int& indexer<1>(Dim<1>& dim, int idx) { HOSTDEVICE int64_t& indexer<1>(Dim<1>& dim, int idx) {
#ifndef __CUDA_ARCH__ #ifndef __CUDA_ARCH__
if (idx != 0) { if (idx != 0) {
throw std::invalid_argument("Invalid index"); throw std::invalid_argument("Invalid index");
...@@ -153,7 +153,7 @@ HOSTDEVICE int& indexer<1>(Dim<1>& dim, int idx) { ...@@ -153,7 +153,7 @@ HOSTDEVICE int& indexer<1>(Dim<1>& dim, int idx) {
} }
template <int D> template <int D>
HOSTDEVICE int indexer(const Dim<D>& dim, int idx) { HOSTDEVICE int64_t indexer(const Dim<D>& dim, int idx) {
#ifndef __CUDA_ARCH__ #ifndef __CUDA_ARCH__
if (idx < 0) { if (idx < 0) {
throw std::invalid_argument("Tried to access a negative dimension"); throw std::invalid_argument("Tried to access a negative dimension");
...@@ -168,7 +168,7 @@ HOSTDEVICE int indexer(const Dim<D>& dim, int idx) { ...@@ -168,7 +168,7 @@ HOSTDEVICE int indexer(const Dim<D>& dim, int idx) {
} }
template <> template <>
HOSTDEVICE int indexer<1>(const Dim<1>& dim, int idx) { HOSTDEVICE int64_t indexer<1>(const Dim<1>& dim, int idx) {
#ifndef __CUDA_ARCH__ #ifndef __CUDA_ARCH__
if (idx != 0) { if (idx != 0) {
throw std::invalid_argument("Invalid index"); throw std::invalid_argument("Invalid index");
...@@ -182,73 +182,76 @@ HOSTDEVICE int indexer<1>(const Dim<1>& dim, int idx) { ...@@ -182,73 +182,76 @@ HOSTDEVICE int indexer<1>(const Dim<1>& dim, int idx) {
} // namespace } // namespace
// Static access to constant Dim // Static access to constant Dim
template <int i, int l> template <int i, int l>
HOSTDEVICE int get(const Dim<l>& d) { HOSTDEVICE int64_t get(const Dim<l>& d) {
return DimGetter<i>::impl(d); return DimGetter<i>::impl(d);
} }
// Static access to mutable Dim // Static access to mutable Dim
template <int i, int l> template <int i, int l>
HOSTDEVICE int& get(Dim<l>& d) { HOSTDEVICE int64_t& get(Dim<l>& d) {
return DimGetter<i>::impl(d); return DimGetter<i>::impl(d);
} }
// Dynamic access to constant Dim // Dynamic access to constant Dim
template <int l> template <int l>
HOSTDEVICE int Dim<l>::operator[](int i) const { HOSTDEVICE int64_t Dim<l>::operator[](int i) const {
return indexer(*this, i); return indexer(*this, i);
} }
// Dynamic access to mutable Dim // Dynamic access to mutable Dim
template <int l> template <int l>
HOSTDEVICE int& Dim<l>::operator[](int i) { HOSTDEVICE int64_t& Dim<l>::operator[](int i) {
return indexer(*this, i); return indexer(*this, i);
} }
// Dynamic access to constant Dim // Dynamic access to constant Dim
inline HOSTDEVICE int Dim<1>::operator[](int i) const { inline HOSTDEVICE int64_t Dim<1>::operator[](int i) const {
return indexer(*this, i); return indexer(*this, i);
} }
// Dynamic access to mutable Dim // Dynamic access to mutable Dim
inline HOSTDEVICE int& Dim<1>::operator[](int i) { return indexer(*this, i); } inline HOSTDEVICE int64_t& Dim<1>::operator[](int i) {
return indexer(*this, i);
}
// Dynamic access to constant Dim // Dynamic access to constant Dim
// without std::enable_if will try to instantiate this on get<0>(d) // without std::enable_if will try to instantiate this on get<0>(d)
template <int l> template <int l>
HOSTDEVICE typename std::enable_if<(l > 0), int>::type get(const Dim<l>& d, HOSTDEVICE typename std::enable_if<(l > 0), int64_t>::type get(const Dim<l>& d,
int i) { int i) {
return d[i]; return d[i];
} }
// Dynamic access to mutable Dim // Dynamic access to mutable Dim
template <int l> template <int l>
HOSTDEVICE typename std::enable_if<(l > 0), int&>::type get(Dim<l>& d, int i) { HOSTDEVICE typename std::enable_if<(l > 0), int64_t&>::type get(Dim<l>& d,
int i) {
return d[i]; return d[i];
} }
// Dot product of two dims // Dot product of two dims
template <int i> template <int i>
HOSTDEVICE int linearize(const Dim<i>& a, const Dim<i>& b) { HOSTDEVICE int64_t linearize(const Dim<i>& a, const Dim<i>& b) {
return a.head * b.head + linearize(a.tail, b.tail); return a.head * b.head + linearize(a.tail, b.tail);
} }
// Base case dot product of two Dims // Base case dot product of two Dims
// Notice it is inline because it is no longer a template // Notice it is inline because it is no longer a template
template <> template <>
HOSTDEVICE inline int linearize(const Dim<1>& a, const Dim<1>& b) { HOSTDEVICE inline int64_t linearize(const Dim<1>& a, const Dim<1>& b) {
return a.head * b.head; return a.head * b.head;
} }
// Product of a Dim // Product of a Dim
template <int i> template <int i>
HOSTDEVICE int product(const Dim<i>& a, int prod = 1) { HOSTDEVICE int64_t product(const Dim<i>& a, int prod = 1) {
return prod * a.head * product(a.tail); return prod * a.head * product(a.tail);
} }
// Base case product of a Dim // Base case product of a Dim
// Notice it is inline because it is no longer a template // Notice it is inline because it is no longer a template
template <> template <>
HOSTDEVICE inline int product(const Dim<1>& a, int prod) { HOSTDEVICE inline int64_t product(const Dim<1>& a, int prod) {
return prod * a.head; return prod * a.head;
} }
......
...@@ -8,7 +8,7 @@ __global__ void test(paddle::framework::Dim<2>* o) { ...@@ -8,7 +8,7 @@ __global__ void test(paddle::framework::Dim<2>* o) {
o[0] = paddle::framework::make_dim(5, 6); o[0] = paddle::framework::make_dim(5, 6);
} }
__global__ void dyn_idx_gpu(int* o) { __global__ void dyn_idx_gpu(int64_t* o) {
auto d = paddle::framework::make_dim(5, 6); auto d = paddle::framework::make_dim(5, 6);
o[0] = d[1]; o[0] = d[1];
} }
...@@ -47,9 +47,9 @@ TEST(Dim, Equality) { ...@@ -47,9 +47,9 @@ TEST(Dim, Equality) {
EXPECT_EQ(b[1], 11); EXPECT_EQ(b[1], 11);
// dynamic access on GPU // dynamic access on GPU
thrust::device_vector<int> r(1); thrust::device_vector<int64_t> r(1);
dyn_idx_gpu<<<1, 1>>>(thrust::raw_pointer_cast(r.data())); dyn_idx_gpu<<<1, 1>>>(thrust::raw_pointer_cast(r.data()));
int res = r[0]; int64_t res = r[0];
EXPECT_EQ(res, 6); EXPECT_EQ(res, 6);
// ex_prefix_mul // ex_prefix_mul
......
...@@ -28,7 +28,7 @@ struct EigenDim { ...@@ -28,7 +28,7 @@ struct EigenDim {
static Type From(const DDim& dims) { static Type From(const DDim& dims) {
PADDLE_ENFORCE(arity(dims) == D, "D must match arity(DDim)"); PADDLE_ENFORCE(arity(dims) == D, "D must match arity(DDim)");
Type ret; Type ret;
for (int d = 0; d < arity(dims); d++) { for (int64_t d = 0; d < arity(dims); d++) {
ret[d] = dims[d]; ret[d] = dims[d];
} }
return ret; return ret;
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
#include "paddle/framework/op_registry.h" #include "paddle/framework/op_registry.h"
#include "paddle/framework/operator.h" #include "paddle/framework/operator.h"
USE_OP(add_two); USE_OP(add);
namespace paddle { namespace paddle {
namespace framework { namespace framework {
...@@ -41,7 +41,7 @@ namespace f = paddle::framework; ...@@ -41,7 +41,7 @@ namespace f = paddle::framework;
TEST(GradOpBuilder, AddTwo) { TEST(GradOpBuilder, AddTwo) {
std::shared_ptr<f::OperatorBase> add_op(f::OpRegistry::CreateOp( std::shared_ptr<f::OperatorBase> add_op(f::OpRegistry::CreateOp(
"add_two", {{"X", {"x"}}, {"Y", {"y"}}}, {{"Out", {"out"}}}, {})); "add", {{"X", {"x"}}, {"Y", {"y"}}}, {{"Out", {"out"}}}, {}));
std::shared_ptr<f::OperatorBase> grad_add_op = std::shared_ptr<f::OperatorBase> grad_add_op =
f::OpRegistry::CreateGradOp(*add_op); f::OpRegistry::CreateGradOp(*add_op);
EXPECT_EQ(grad_add_op->Inputs().size(), 4UL); EXPECT_EQ(grad_add_op->Inputs().size(), 4UL);
......
...@@ -58,7 +58,7 @@ inline T* Tensor::mutable_data(platform::Place place) { ...@@ -58,7 +58,7 @@ inline T* Tensor::mutable_data(platform::Place place) {
"Tensor's numel must be larger than zero to call " "Tensor's numel must be larger than zero to call "
"Tensor::mutable_data. Call Tensor::set_dim first."); "Tensor::mutable_data. Call Tensor::set_dim first.");
/* some versions of boost::variant don't have operator!= */ /* some versions of boost::variant don't have operator!= */
size_t size = product(dims_) * sizeof(T); int64_t size = product(dims_) * sizeof(T);
if (holder_ == nullptr || !(holder_->place() == place) || if (holder_ == nullptr || !(holder_->place() == place) ||
holder_->size() < size + offset_) { holder_->size() < size + offset_) {
if (platform::is_cpu_place(place)) { if (platform::is_cpu_place(place)) {
...@@ -131,7 +131,7 @@ inline Tensor Tensor::Slice(const int& begin_idx, const int& end_idx) const { ...@@ -131,7 +131,7 @@ inline Tensor Tensor::Slice(const int& begin_idx, const int& end_idx) const {
PADDLE_ENFORCE_LT(begin_idx, end_idx, PADDLE_ENFORCE_LT(begin_idx, end_idx,
"Begin index must be less than end index."); "Begin index must be less than end index.");
PADDLE_ENFORCE_NE(dims_[0], 1, "Can not slice a tensor with dims_[0] = 1."); PADDLE_ENFORCE_NE(dims_[0], 1, "Can not slice a tensor with dims_[0] = 1.");
int base = product(dims_) / dims_[0]; size_t base = product(dims_) / dims_[0];
Tensor dst; Tensor dst;
dst.holder_ = holder_; dst.holder_ = holder_;
DDim dst_dims = dims_; DDim dst_dims = dims_;
......
...@@ -14,27 +14,31 @@ function(op_library TARGET) ...@@ -14,27 +14,31 @@ function(op_library TARGET)
cmake_parse_arguments(op_library "${options}" "${oneValueArgs}" cmake_parse_arguments(op_library "${options}" "${oneValueArgs}"
"${multiValueArgs}" ${ARGN}) "${multiValueArgs}" ${ARGN})
foreach(src ${op_library_SRCS}) list(LENGTH op_library_SRCS op_library_SRCS_len)
if (${src} MATCHES ".*\\.cu$") if (${op_library_SRCS_len} EQUAL 0)
list(APPEND cu_srcs ${src}) if (EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/${TARGET}.cc)
elseif(${src} MATCHES ".*\\.cc$") list(APPEND cc_srcs ${TARGET}.cc)
list(APPEND cc_srcs ${src})
else()
message(FATAL_ERROR "${TARGET} Source file ${src} should only be .cc or .cu")
endif() endif()
endforeach() if (EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/${TARGET}.cu)
list(APPEND cu_srcs ${TARGET}.cu)
endif()
else()
foreach(src ${op_library_SRCS})
if (${src} MATCHES ".*\\.cu$")
list(APPEND cu_srcs ${src})
elseif(${src} MATCHES ".*\\.cc$")
list(APPEND cc_srcs ${src})
else()
message(FATAL_ERROR "${TARGET} Source file ${src} should only be .cc or .cu")
endif()
endforeach()
endif()
list(LENGTH cc_srcs cc_srcs_len) list(LENGTH cc_srcs cc_srcs_len)
if (${cc_srcs_len} EQUAL 0) if (${cc_srcs_len} EQUAL 0)
message(FATAL_ERROR "The op library ${TARGET} should contains at least one .cc file") message(FATAL_ERROR "The op library ${TARGET} should contains at least one .cc file")
endif() endif()
list(LENGTH cu_srcs cu_srcs_len)
list(LENGTH op_library_DEPS dep_len)
if (${cu_srcs_len} EQUAL 0 AND ${dep_len} EQUAL 0)
message(WARNING "The op library ${TARGET} not support GPU!")
endif()
if (WITH_GPU) if (WITH_GPU)
nv_library(${TARGET} SRCS ${cc_srcs} ${cu_srcs} DEPS ${op_library_DEPS} nv_library(${TARGET} SRCS ${cc_srcs} ${cu_srcs} DEPS ${op_library_DEPS}
${op_common_deps}) ${op_common_deps})
...@@ -46,22 +50,22 @@ endfunction() ...@@ -46,22 +50,22 @@ endfunction()
add_subdirectory(math) add_subdirectory(math)
list(REMOVE_ITEM GENERAL_OPS set(DEPS_OPS
net_op identity_op
minus_op minus_op
mul_op mul_op
recurrent_op recurrent_op
scale_op) scale_op)
op_library(identity_op DEPS scale_op)
op_library(net_op SRCS net_op.cc) op_library(minus_op DEPS scale_op)
op_library(minus_op SRCS minus_op.cc minus_op.cu DEPS scale_op) op_library(mul_op DEPS math_function)
op_library(mul_op SRCS mul_op.cc mul_op.cu DEPS math_function)
op_library(recurrent_op SRCS recurrent_op.cc rnn/recurrent_op_utils.cc op_library(recurrent_op SRCS recurrent_op.cc rnn/recurrent_op_utils.cc
DEPS framework_proto tensor operator net_op) DEPS framework_proto tensor operator net_op)
op_library(scale_op SRCS scale_op.cc scale_op.cu DEPS net_op) op_library(scale_op DEPS net_op)
list(REMOVE_ITEM GENERAL_OPS ${DEPS_OPS})
foreach(src ${GENERAL_OPS}) foreach(src ${GENERAL_OPS})
op_library(${src} SRCS ${src}.cc ${src}.cu) op_library(${src})
endforeach() endforeach()
set(GLOB_OP_LIB ${OP_LIBRARY} CACHE INTERNAL "Global OP library") set(GLOB_OP_LIB ${OP_LIBRARY} CACHE INTERNAL "Global OP library")
......
...@@ -57,7 +57,6 @@ class AddOpGrad : public framework::OperatorWithKernel { ...@@ -57,7 +57,6 @@ class AddOpGrad : public framework::OperatorWithKernel {
} // namespace paddle } // namespace paddle
namespace ops = paddle::operators; namespace ops = paddle::operators;
REGISTER_OP(add_two, ops::AddOp, ops::AddOpMaker, add_two_grad, ops::AddOpGrad); REGISTER_OP(add, ops::AddOp, ops::AddOpMaker, add_grad, ops::AddOpGrad);
REGISTER_OP_CPU_KERNEL(add_two, REGISTER_OP_CPU_KERNEL(add, ops::AddKernel<paddle::platform::CPUPlace, float>);
ops::AddKernel<paddle::platform::CPUPlace, float>);
...@@ -12,10 +12,7 @@ ...@@ -12,10 +12,7 @@
See the License for the specific language governing permissions and See the License for the specific language governing permissions and
limitations under the License. */ limitations under the License. */
#define EIGEN_USE_GPU
#include "paddle/framework/op_registry.h"
#include "paddle/operators/add_op.h" #include "paddle/operators/add_op.h"
namespace ops = paddle::operators; namespace ops = paddle::operators;
REGISTER_OP_GPU_KERNEL(add_two, REGISTER_OP_GPU_KERNEL(add, ops::AddKernel<paddle::platform::GPUPlace, float>);
ops::AddKernel<paddle::platform::GPUPlace, float>);
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/operators/cos_sim_op.h"
namespace paddle {
namespace operators {
using framework::Tensor;
class CosSimOp : public framework::OperatorWithKernel {
public:
using framework::OperatorWithKernel::OperatorWithKernel;
protected:
void InferShape(const framework::InferShapeContext &ctx) const override {
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("X"), "Input(X) must not be null.");
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("Y"), "Input(Y) must not be null.");
PADDLE_ENFORCE_EQ(ctx.Input<Tensor>("X")->dims(),
ctx.Input<Tensor>("Y")->dims(),
"Dimensions of Input(X) and Input(Y) must be the same.");
auto dims = ctx.Input<Tensor>("X")->dims();
ctx.Output<Tensor>("Out")->Resize({dims[0], 1});
ctx.Output<Tensor>("XNorm")->Resize({dims[0], 1});
ctx.Output<Tensor>("YNorm")->Resize({dims[0], 1});
}
};
class CosSimOpMaker : public framework::OpProtoAndCheckerMaker {
public:
CosSimOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The first input of cos_sim op.");
AddInput("Y", "The second input of cos_sim op.");
AddOutput("Out", "The output of cos_sim op.");
AddOutput("XNorm", "Row norm of the first input.").AsIntermediate();
AddOutput("YNorm", "Row norm of the second input.").AsIntermediate();
AddComment(R"DOC(
Cosine Similarity Operator.
The equation is: Out = X^T * Y / (sqrt(X^T * X) * sqrt(Y^T * Y))
)DOC");
}
};
class CosSimOpGrad : public framework::OperatorWithKernel {
public:
using framework::OperatorWithKernel::OperatorWithKernel;
protected:
void InferShape(const framework::InferShapeContext &ctx) const override {
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("X"), "Input(X) must not be null.");
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("Y"), "Input(Y) must not be null.");
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("XNorm"),
"Input(XNorm) must not be null.");
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("YNorm"),
"Input(YNorm) must not be null.");
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar(framework::GradVarName("Out")),
"Input(Out@GRAD) must not be null.");
auto x_dims = ctx.Input<Tensor>("X")->dims();
auto y_dims = ctx.Input<Tensor>("Y")->dims();
auto xnorm_dims = ctx.Input<Tensor>("XNorm")->dims();
auto ynorm_dims = ctx.Input<Tensor>("YNorm")->dims();
auto out_dims = ctx.Input<Tensor>(framework::GradVarName("Out"))->dims();
PADDLE_ENFORCE_EQ(x_dims, y_dims,
"Dimensions of Input(X) and Input(Y) must be the same.");
PADDLE_ENFORCE_EQ(xnorm_dims[0], x_dims[0],
"1st dimension of XNorm must equal that of Input(X).");
PADDLE_ENFORCE_EQ(xnorm_dims[1], 1, "2st dimension of XNorm must be one.");
PADDLE_ENFORCE_EQ(ynorm_dims[0], y_dims[0],
"1st dimension of YNorm must equal that of Input(Y).");
PADDLE_ENFORCE_EQ(ynorm_dims[1], 1, "2st dimension of YNorm must be one.");
PADDLE_ENFORCE_EQ(out_dims[0], x_dims[0],
"1st dimension of Out@GRAD must equal that of Input(X)");
PADDLE_ENFORCE_EQ(out_dims[1], 1, "1st dimension of Out@GRAD must be one.");
auto *x_grad = ctx.Output<Tensor>(framework::GradVarName("X"));
auto *y_grad = ctx.Output<Tensor>(framework::GradVarName("Y"));
if (x_grad) x_grad->Resize(x_dims);
if (y_grad) y_grad->Resize(y_dims);
}
};
} // namespace operators
} // namespace paddle
namespace ops = paddle::operators;
REGISTER_OP(cos_sim, ops::CosSimOp, ops::CosSimOpMaker, cos_sim_grad,
ops::CosSimOpGrad);
REGISTER_OP_CPU_KERNEL(cos_sim,
ops::CosSimKernel<paddle::platform::CPUPlace, float>);
REGISTER_OP_CPU_KERNEL(
cos_sim_grad, ops::CosSimGradKernel<paddle::platform::CPUPlace, float>);
...@@ -13,8 +13,10 @@ ...@@ -13,8 +13,10 @@
limitations under the License. */ limitations under the License. */
#define EIGEN_USE_GPU #define EIGEN_USE_GPU
#include "paddle/operators/gather_op.h" #include "paddle/operators/cos_sim_op.h"
namespace ops = paddle::operators; namespace ops = paddle::operators;
REGISTER_OP_GPU_KERNEL(gather, REGISTER_OP_GPU_KERNEL(cos_sim,
ops::GatherOpKernel<paddle::platform::GPUPlace, float>); ops::CosSimKernel<paddle::platform::GPUPlace, float>);
REGISTER_OP_GPU_KERNEL(
cos_sim_grad, ops::CosSimGradKernel<paddle::platform::GPUPlace, float>);
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
#include "paddle/framework/eigen.h"
#include "paddle/framework/op_registry.h"
namespace paddle {
namespace operators {
using Tensor = framework::Tensor;
template <typename T, int MajorType = Eigen::RowMajor,
typename IndexType = Eigen::DenseIndex>
using EigenMatrix = framework::EigenMatrix<T, MajorType, IndexType>;
template <typename T, int MajorType = Eigen::RowMajor,
typename IndexType = Eigen::DenseIndex>
using EigenVector = framework::EigenVector<T, MajorType, IndexType>;
template <typename Place, typename T>
class CosSimKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& context) const override {
auto* input_x = context.Input<Tensor>("X");
auto* input_y = context.Input<Tensor>("Y");
auto* output_z = context.Output<Tensor>("Out");
auto* output_x_norm = context.Output<Tensor>("XNorm");
auto* output_y_norm = context.Output<Tensor>("YNorm");
output_z->mutable_data<T>(context.GetPlace());
output_x_norm->mutable_data<T>(context.GetPlace());
output_y_norm->mutable_data<T>(context.GetPlace());
auto dims = input_x->dims();
int size = static_cast<int>(framework::product(dims));
auto new_dims = framework::make_ddim({dims[0], size / dims[0]});
auto x = EigenMatrix<T>::From(*input_x, new_dims);
auto y = EigenMatrix<T>::From(*input_y, new_dims);
auto z = EigenVector<T>::Flatten(*output_z);
auto x_norm = EigenVector<T>::Flatten(*output_x_norm);
auto y_norm = EigenVector<T>::Flatten(*output_y_norm);
auto place = context.GetEigenDevice<Place>();
auto xy = (x * y).sum(Eigen::array<int, 1>({{1}}));
x_norm.device(place) = x.square().sum(Eigen::array<int, 1>({{1}})).sqrt();
y_norm.device(place) = y.square().sum(Eigen::array<int, 1>({{1}})).sqrt();
z.device(place) = xy / x_norm / y_norm;
}
};
template <typename Place, typename T>
class CosSimGradKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& context) const override {
auto* input_x = context.Input<Tensor>("X");
auto* input_y = context.Input<Tensor>("Y");
auto* input_z = context.Input<Tensor>("Out");
auto* input_x_norm = context.Input<Tensor>("XNorm");
auto* input_y_norm = context.Input<Tensor>("YNorm");
auto* output_grad_x = context.Output<Tensor>(framework::GradVarName("X"));
auto* output_grad_y = context.Output<Tensor>(framework::GradVarName("Y"));
auto* input_grad_z = context.Input<Tensor>(framework::GradVarName("Out"));
auto dims = input_x->dims();
int size = static_cast<int>(framework::product(dims));
auto new_dims = framework::make_ddim({dims[0], size / dims[0]});
auto x = EigenMatrix<T>::From(*input_x, new_dims);
auto y = EigenMatrix<T>::From(*input_y, new_dims);
auto z = EigenMatrix<T>::From(*input_z);
auto x_norm = EigenMatrix<T>::From(*input_x_norm);
auto y_norm = EigenMatrix<T>::From(*input_y_norm);
auto dz = EigenMatrix<T>::From(*input_grad_z);
Eigen::DSizes<int, 2> bcast(1, new_dims[1]);
auto z_bcast = z.broadcast(bcast);
auto dz_bcast = dz.broadcast(bcast);
auto place = context.GetEigenDevice<Place>();
auto x_snorm_bcast = x_norm.square().eval().broadcast(bcast);
auto y_snorm_bcast = y_norm.square().eval().broadcast(bcast);
auto norm_prod_bcast = (x_norm * y_norm).eval().broadcast(bcast);
if (output_grad_x) {
output_grad_x->mutable_data<T>(context.GetPlace());
auto dx = EigenMatrix<T>::From(*output_grad_x, new_dims);
dx.device(place) =
dz_bcast * (y / norm_prod_bcast - z_bcast * x / x_snorm_bcast);
}
if (output_grad_y) {
output_grad_y->mutable_data<T>(context.GetPlace());
auto dy = EigenMatrix<T>::From(*output_grad_y, new_dims);
dy.device(place) =
dz_bcast * (x / norm_prod_bcast - z_bcast * y / y_snorm_bcast);
}
}
};
} // namespace operators
} // namespace paddle
@@ -31,8 +31,8 @@ class CPUGaussianRandomKernel : public framework::OpKernel {
     }
     engine.seed(seed);
     std::normal_distribution<T> dist(mean, std);
-    ssize_t size = framework::product(tensor->dims());
-    for (ssize_t i = 0; i < size; ++i) {
+    int64_t size = framework::product(tensor->dims());
+    for (int64_t i = 0; i < size; ++i) {
       data[i] = dist(engine);
     }
   }
@@ -46,9 +46,14 @@ class GaussianRandomOp : public framework::OperatorWithKernel {
   void InferShape(const framework::InferShapeContext& context) const override {
     auto* tensor = context.Output<framework::Tensor>("Out");
     auto dims = GetAttr<std::vector<int>>("dims");
+    std::vector<int64_t> temp;
+    temp.reserve(dims.size());
+    for (auto dim : dims) {
+      temp.push_back(static_cast<int64_t>(dim));
+    }
     PADDLE_ENFORCE(dims.size() > 0UL,
                    "dims can be one int or array. dims must be set.");
-    tensor->Resize(framework::make_ddim(dims));
+    tensor->Resize(framework::make_ddim(temp));
   }
 };
...
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/operators/net_op.h"
#include "paddle/operators/scale_op.h"
namespace paddle {
namespace operators {
// The identity operator is an alias of the scale operator. This is also an
// example for creating the alias for an existing operator.
template <typename AttrType>
class IdentityOpMaker : public framework::OpProtoAndCheckerMaker {
public:
IdentityOpMaker(framework::OpProto *proto,
framework::OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input tensor of identity operator.");
AddOutput("Out", "The output tensor of identity operator.");
AddComment(R"DOC(
The identity operator is an alias of the scale operator
with the attribute scale fixed to 1.0
)DOC");
}
};
template <typename AttrType>
class IdentityOp : public NetOp {
public:
IdentityOp(const std::string &type, const framework::VariableNameMap &inputs,
const framework::VariableNameMap &outputs,
const framework::AttributeMap &attrs)
: NetOp(type, inputs, outputs, attrs) {
AppendOp(framework::OpRegistry::CreateOp(
"scale", {{"X", {Input("X")}}}, {{"Out", {Output("Out")}}},
{{"scale", static_cast<AttrType>(1)}}));
}
};
} // namespace operators
} // namespace paddle
namespace ops = paddle::operators;
REGISTER_OP_WITHOUT_GRADIENT(identity, ops::IdentityOp<float>,
ops::IdentityOpMaker<float>);
@@ -61,7 +61,7 @@ void ConcatOutputs(const std::vector<Scope*>& step_scopes,
     PADDLE_ENFORCE(step_scope_var != nullptr, "%s not in scope",
                    outlinks[i].internal);
     f::DDim step_dims = step_scope_var->template GetMutable<Tensor>()->dims();
-    std::vector<int> dims_vec = vectorize(step_dims);
+    std::vector<int64_t> dims_vec = vectorize(step_dims);
     dims_vec.insert(dims_vec.begin(), seq_len);
     output->Resize(f::make_ddim(dims_vec));
   } else {
...
@@ -49,7 +49,7 @@ The equation is: Out = scale*X
   }
 };

-// IdentityOp's gradient is IdentityOp, too.
+// The gradients of a scale operator is just the scale operator itself.
 // Grad(Out=scale(X)) => Grad(X) = scale(Grad(Out))
 template <typename AttrType>
 class ScaleGradOp : public NetOp {
@@ -66,36 +66,6 @@ class ScaleGradOp : public NetOp {
   }
 };

-// IdentityOp is an alias of the ScaleOp. This is also an example for creating
-// an alias of an existing operator.
-template <typename AttrType>
-class IdentityOpMaker : public framework::OpProtoAndCheckerMaker {
- public:
-  IdentityOpMaker(framework::OpProto *proto,
-                  framework::OpAttrChecker *op_checker)
-      : OpProtoAndCheckerMaker(proto, op_checker) {
-    AddInput("X", "The input tensor of identity op.");
-    AddOutput("Out", "The output tensor of identity op.");
-    AddComment(R"DOC(
-The identity operator is just an alias of the scale operator with the
-attribute scale is fixed to 1.0.
-)DOC");
-  }
-};
-
-template <typename AttrType>
-class IdentityOp : public NetOp {
- public:
-  IdentityOp(const std::string &type, const framework::VariableNameMap &inputs,
-             const framework::VariableNameMap &outputs,
-             const framework::AttributeMap &attrs)
-      : NetOp(type, inputs, outputs, attrs) {
-    AppendOp(framework::OpRegistry::CreateOp(
-        "scale", {{"X", {Input("X")}}}, {{"Out", {Output("Out")}}},
-        {{"scale", static_cast<AttrType>(1)}}));
-  }
-};
-
 }  // namespace operators
 }  // namespace paddle
@@ -105,5 +75,3 @@ REGISTER_OP(scale, ops::ScaleOp, ops::ScaleOpMaker<float>, scale_grad,
             ops::ScaleGradOp<float>);
 REGISTER_OP_CPU_KERNEL(scale,
                        ops::ScaleKernel<paddle::platform::CPUPlace, float>);
-REGISTER_OP_WITHOUT_GRADIENT(identity, ops::IdentityOp<float>,
-                             ops::IdentityOpMaker<float>);
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#define EIGEN_USE_GPU
#include "paddle/operators/scatter_op.h"
namespace ops = paddle::operators;
REGISTER_OP_GPU_KERNEL(scatter,
ops::ScatterOpKernel<paddle::platform::GPUPlace, float>);
@@ -23,9 +23,9 @@ class SoftmaxOp : public framework::OperatorWithKernel {
  protected:
   void InferShape(const framework::InferShapeContext &ctx) const override {
-    PADDLE_ENFORCE(ctx.Input<Tensor>("logits")->dims().size() == 2UL,
+    PADDLE_ENFORCE(ctx.Input<Tensor>("Logits")->dims().size() == 2UL,
                    "The input of softmax op must be a matrix.");
-    ctx.Output<Tensor>("softmax")->Resize(ctx.Input<Tensor>("logits")->dims());
+    ctx.Output<Tensor>("Out")->Resize(ctx.Input<Tensor>("Logits")->dims());
   }
 };
@@ -34,10 +34,10 @@ class SoftmaxOpMaker : public framework::OpProtoAndCheckerMaker {
   SoftmaxOpMaker(framework::OpProto *proto,
                  framework::OpAttrChecker *op_checker)
       : OpProtoAndCheckerMaker(proto, op_checker) {
-    AddInput("logits",
+    AddInput("Logits",
              "The input tensor of softmax. "
              "2-D with shape [batch_size, input_feature_dimensions].");
-    AddOutput("softmax", "The normalized values with the same shape as X.");
+    AddOutput("Out", "The normalized values with the same shape as the input.");
     AddComment(R"DOC(
 The input of softmax operator is a 2-D tensor with shape N x K (N is the
 batch_size, K is the dimension of input feature). The output tensor has the
@@ -51,8 +51,8 @@ the other dimensions in the K-dimensional vector input. Then the ratio of the
 exponential of the given dimension and the sum of exponential values of all
 the other dimensions is the output of the softmax operator.

-For each row `i` and each column `j` in X, we have:
-    Y[i, j] = exp(X[i, j]) / sum_j(exp(X[i, j]))
+For each row `i` and each column `j` in the input: Logits, we have:
+    Out[i, j] = exp(Logits[i, j]) / sum_j(exp(Logits[i, j]))
 )DOC");
   }
@@ -64,17 +64,16 @@ class SoftmaxOpGrad : public framework::OperatorWithKernel {
  protected:
   void InferShape(const framework::InferShapeContext &ctx) const override {
-    PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("softmax"),
-                            "Input(softmax) should be not null.");
-    PADDLE_ENFORCE_NOT_NULL(ctx.InputVar(framework::GradVarName("softmax")),
-                            "Input(softmax@GRAD) should be not null.");
-    PADDLE_ENFORCE_EQ(
-        ctx.Input<Tensor>("softmax")->dims(),
-        ctx.Input<Tensor>(framework::GradVarName("softmax"))->dims(),
-        "Input(softmax) and its gradients should have a same shape.");
-
-    ctx.Output<Tensor>(framework::GradVarName("logits"))
-        ->Resize(ctx.Input<Tensor>("logits")->dims());
+    PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("Out"),
+                            "Input(Out) should be not null.");
+    PADDLE_ENFORCE_NOT_NULL(ctx.InputVar(framework::GradVarName("Out")),
+                            "Input(Out@GRAD) should be not null.");
+    PADDLE_ENFORCE_EQ(ctx.Input<Tensor>("Out")->dims(),
+                      ctx.Input<Tensor>(framework::GradVarName("Out"))->dims(),
+                      "Input(Out) and its gradients should have a same shape.");
+
+    ctx.Output<Tensor>(framework::GradVarName("Logits"))
+        ->Resize(ctx.Input<Tensor>("Logits")->dims());
   }
 };
...
@@ -28,12 +28,12 @@ template <typename Place, typename T>
 class SoftmaxKernel : public framework::OpKernel {
  public:
   void Compute(const framework::ExecutionContext& context) const override {
-    auto X = context.Input<Tensor>("logits");
-    auto Y = context.Output<Tensor>("softmax");
+    auto X = context.Input<Tensor>("Logits");
+    auto Y = context.Output<Tensor>("Out");
     Y->mutable_data<T>(context.GetPlace());

     auto logits = EigenMatrix<T>::From(*X);
-    auto softmax = EigenMatrix<T>::From(*Y);
+    auto out = EigenMatrix<T>::From(*Y);

     const int kBatchDim = 0;
     const int kClassDim = 1;
@@ -51,11 +51,11 @@ class SoftmaxKernel : public framework::OpKernel {
                                    .reshape(batch_by_one)
                                    .broadcast(one_by_class));

-    softmax.device(context.GetEigenDevice<Place>()) = shifted_logits.exp();
-    softmax.device(context.GetEigenDevice<Place>()) =
-        (softmax *
-         softmax.sum(along_class)
+    out.device(context.GetEigenDevice<Place>()) = shifted_logits.exp();
+    out.device(context.GetEigenDevice<Place>()) =
+        (out *
+         out.sum(along_class)
              .inverse()
              .eval()
              .reshape(batch_by_one)
@@ -69,9 +69,9 @@ class SoftmaxGradKernel : public framework::OpKernel {
   void Compute(const framework::ExecutionContext& context) const override {
     std::shared_ptr<Tensor> scale_ = std::make_shared<Tensor>();

-    auto Y = context.Input<Tensor>("softmax");
-    auto dY = context.Input<Tensor>(framework::GradVarName("softmax"));
-    auto dX = context.Output<Tensor>(framework::GradVarName("logits"));
+    auto Y = context.Input<Tensor>("Out");
+    auto dY = context.Input<Tensor>(framework::GradVarName("Out"));
+    auto dX = context.Output<Tensor>(framework::GradVarName("Logits"));
     dX->mutable_data<T>(context.GetPlace());
     const int batch_size = Y->dims()[0];
...
@@ -35,8 +35,8 @@ class CPUUniformRandomKernel : public framework::OpKernel {
     std::uniform_real_distribution<T> dist(
         static_cast<T>(context.GetAttr<float>("min")),
         static_cast<T>(context.GetAttr<float>("max")));
-    ssize_t size = framework::product(tensor->dims());
-    for (ssize_t i = 0; i < size; ++i) {
+    int64_t size = framework::product(tensor->dims());
+    for (int64_t i = 0; i < size; ++i) {
       data[i] = dist(engine);
     }
   }
@@ -52,7 +52,12 @@ class UniformRandomOp : public framework::OperatorWithKernel {
                    "uniform_random's min must less then max");
     auto* tensor = ctx.Output<framework::Tensor>("Out");
     auto dims = GetAttr<std::vector<int>>("dims");
-    tensor->Resize(framework::make_ddim(dims));
+    std::vector<int64_t> temp;
+    temp.reserve(dims.size());
+    for (auto dim : dims) {
+      temp.push_back(static_cast<int64_t>(dim));
+    }
+    tensor->Resize(framework::make_ddim(temp));
   }
 };
...
@@ -30,7 +30,7 @@ limitations under the License. */
 namespace py = pybind11;

-USE_OP(add_two);
+USE_OP(add);
 USE_OP(onehot_cross_entropy);
 USE_OP(sgd);
 USE_OP(mul);
@@ -46,6 +46,7 @@ USE_OP(lookup_table);
 USE_OP(scale);
 USE_NO_KERNEL_OP(identity);
 USE_OP(minus);
+USE_OP(cos_sim);
 USE_CPU_ONLY_OP(gather);
 USE_CPU_ONLY_OP(scatter);
@@ -76,7 +77,7 @@ PYBIND11_PLUGIN(core) {
       .def("get_dims",
            [](const Tensor &self) { return vectorize(self.dims()); })
       .def("set_dims",
-           [](Tensor &self, const std::vector<int> &dim) {
+           [](Tensor &self, const std::vector<int64_t> &dim) {
              self.Resize(make_ddim(dim));
            })
       .def("alloc_float",
...
@@ -85,7 +85,7 @@ void PyCPUTensorSetFromArray(
     framework::Tensor &self,
     py::array_t<T, py::array::c_style | py::array::forcecast> array,
     paddle::platform::CPUPlace &place) {
-  std::vector<int> dims;
+  std::vector<int64_t> dims;
   dims.reserve(array.ndim());
   for (size_t i = 0; i < array.ndim(); ++i) {
     dims.push_back((int)array.shape()[i]);
@@ -102,7 +102,7 @@ void PyCUDATensorSetFromArray(
     framework::Tensor &self,
     py::array_t<T, py::array::c_style | py::array::forcecast> array,
     paddle::platform::GPUPlace &place) {
-  std::vector<int> dims;
+  std::vector<int64_t> dims;
   dims.reserve(array.ndim());
   for (size_t i = 0; i < array.ndim(); ++i) {
     dims.push_back((int)array.shape()[i]);
...
@@ -4,7 +4,7 @@ import paddle.v2.framework.proto.framework_pb2 as framework_pb2

 def get_all_op_protos():
     """
-    Get all registered op proto from Paddle C++
+    Get all registered op proto from PaddlePaddle C++ end.
     :return: list of OpProto
     """
     protostrs = core.get_all_op_protos()
@@ -21,8 +21,8 @@ def is_str(s):

 class OpDescCreationMethod(object):
     """
-    A Functor object to convert user input(use key word args) to OpDesc based on
-    OpProto.
+    A Functor object converting the user's input(only keyword arguments are
+    supported) to OpDesc based on the OpProto.

     :param op_proto: The OpProto object.
     :type op_proto: op_proto_pb2.OpProto
@@ -30,17 +30,18 @@ class OpDescCreationMethod(object):

     def __init__(self, op_proto):
         if not isinstance(op_proto, framework_pb2.OpProto):
-            raise TypeError("Argument should be OpProto")
+            raise TypeError(
+                "Type of op_proto should be OpProto in PaddlePaddle.")
         self.__op_proto__ = op_proto

     def __call__(self, *args, **kwargs):
         """
-        Convert user input to OpDesc. Only key-word args are supported.
+        Convert user's input to OpDesc. Only keyword arguments are supported.
         :return: OpDesc based on user input
         :rtype: op_desc_pb2.OpDesc
         """
         if len(args) != 0:
-            raise ValueError("Only keyword arguments is supported by Paddle")
+            raise ValueError("Only keyword arguments are supported.")
         op_desc = framework_pb2.OpDesc()

         for input_parameter in self.__op_proto__.inputs:
@@ -49,10 +50,11 @@ class OpDescCreationMethod(object):
                 input_arguments = [input_arguments]

             if not input_parameter.duplicable and len(input_arguments) > 1:
-                raise ValueError("Input %s only accepts one input, but give %d"
-                                 % (input_parameter.name, len(input_arguments)))
+                raise ValueError(
+                    "Input %s expects only one input, but %d are given." %
+                    (input_parameter.name, len(input_arguments)))

             ipt = op_desc.inputs.add()
             ipt.parameter = input_parameter.name
             ipt.arguments.extend(input_arguments)
@@ -63,10 +65,10 @@ class OpDescCreationMethod(object):
             if not output_parameter.duplicable and len(output_arguments) > 1:
                 raise ValueError(
-                    "Output %s only accepts one output, but give %d" %
+                    "Output %s expects only one output, but %d are given." %
                     (output_parameter.name, len(output_arguments)))

             out = op_desc.outputs.add()
             out.parameter = output_parameter.name
             out.arguments.extend(output_arguments)
@@ -100,10 +102,11 @@ class OpDescCreationMethod(object):
                     pair.first = p[0]
                     pair.second = p[1]
             else:
-                raise NotImplementedError("Not support attribute type " +
-                                          str(attr.type))
+                raise NotImplementedError(
+                    "A not supported attribute type: %s." % (
+                        str(attr.type)))

         return op_desc

     @staticmethod
     def any_is_true(generator):
@@ -142,10 +145,10 @@ def create_op_creation_method(op_proto):
         outputs=[var.name for var in op_proto.outputs],
         attrs=[attr.name for attr in op_proto.attrs])

 class OperatorFactory(object):
     def __init__(self):
         self.op_methods = dict()
         for op_proto in get_all_op_protos():
             method = create_op_creation_method(op_proto)
             self.op_methods[method.name] = method
@@ -153,14 +156,16 @@ class OperatorFactory(object):
     def __call__(self, *args, **kwargs):
         if 'type' in kwargs:
             if len(args) != 0:
-                raise ValueError("All Paddle argument should be key-word "
-                                 "argument except type")
-            t = kwargs.pop('type')
+                raise ValueError(
+                    ("All PaddlePaddle arguments should be keyword "
+                     "arguments except the argument \"type\"."))
+            t = kwargs.pop('type')
         else:
             if len(args) != 1:
-                raise ValueError("All Paddle argument should be key-word "
-                                 "argument except type")
-            t = args[0]
+                raise ValueError(
+                    ("All PaddlePaddle arguments should be keyword "
+                     "arguments except the argument \"type\"."))
+            t = args[0]
         return self.get_op_info(t).method(**kwargs)
@@ -169,7 +174,7 @@ class OperatorFactory(object):

     def get_op_info(self, t):
         if t not in self.op_methods:
-            raise ValueError("operator %s is not registered", t)
+            raise ValueError("The operator: %s is not registered." % t)
         return self.op_methods.get(t)

     def get_op_input_names(self, type):
...
@@ -4,6 +4,7 @@ py_test(test_scope SRCS test_scope.py)
 py_test(test_tensor SRCS test_tensor.py)
 py_test(test_mul_op SRCS test_mul_op.py)
+py_test(test_cos_sim_op SRCS test_cos_sim_op.py)

 py_test(test_mean_op SRCS test_mean_op.py)
...
@@ -36,13 +36,13 @@ def get_numeric_gradient(op,
                          in_place=False):
     """
     Get Numeric Gradient for an operator's input.

     :param op: C++ operator instance, could be an network
     :param input_values: The input variables. Should be an dictionary, key is
     variable name. Value is numpy array.
     :param output_name: The final output variable name.
     :param input_to_check: The input variable need to get gradient.
     :param delta: The perturbation value for numeric gradient method. The
     smaller delta is, the more accurate result will get. But if that delta is
     too small, it could occur numerical stability problem.
     :param local_scope: The local scope used for get_numeric_gradient.
@@ -229,9 +229,9 @@ class GradientChecker(unittest.TestCase):
         """Use relative error for the comparison.

         :param numeric_grads: the numerical graidents.
         :type numeric_grads: a list of numpy.array
         :param analytic_grads: the analytical graidents.
         :type analytic_grads: a list of numpy.array
         :param name: the names of gradients, used to print for debug.
         :type names: a list of string
         :param msg_prefix: string info, used to print for debug.
...
@@ -38,9 +38,9 @@ def feed_data(name, data):
     assert isinstance(data, numpy.ndarray)
     tensor = scope.find_var(name).get_tensor()
     tensor.set_dims(data.shape)
-    if data.dtype == numpy.dtype('int32'):
+    if data.dtype == numpy.dtype("int32"):
         tensor.alloc_int(place)
-    elif data.dtype == numpy.dtype('float32'):
+    elif data.dtype == numpy.dtype("float32"):
         tensor.alloc_float(place)
     else:
         raise ValueError("data type not supported")
@@ -74,22 +74,25 @@ def init_param(net, param_name, dims):

 # fc_layer
 def fc_layer(net, input, size, act="softmax", bias=True, param=None, name=None):
     """
-    Add a fc layer to net
+    The fully connected layer.

-    :param input: input variable name.
+    :param input: The name of input variable.
     :type input: str
-    :param size: fully connected layer size.
-    :param act: activation name
-    :param param: parameter attribute, used for initialize parameters.
-    :param bias: bias attribute. False will not have a bias.
-    :param name: the name of fc layer. If not set, model will generate a
-    readable name
-    :return: output variable name.
+    :param size: The size of fully connected layer.
+    :param act: The name of activation.
+    :param param: The attribute of learnable parameter which can be used to
+    modify initialization mean and std of the parameter.
+    :param bias: The attribute of bias. If set False, this layer does not have
+    a bias.
+    :param name: The name of this layer. If it is not set explictly, a name
+    will be generated automatically.
+    :return: The name of the output variable.
     """
     if name is None:
-        name = 'fc_%d' % uniq_id()
+        name = "fc_%d" % uniq_id()
     if not isinstance(name, str):
-        raise ValueError("name should be string")
+        raise ValueError("The name of a layer should be a string.")

     input_dims = scope.find_var(input).get_tensor().get_dims()
@@ -123,7 +126,7 @@ def fc_layer(net, input, size, act="softmax", bias=True, param=None, name=None):

 def cross_entropy_layer(net, input, label):
-    cost_name = 'cross_entropy_%d' % uniq_id()
+    cost_name = "cross_entropy_%d" % uniq_id()
     cross_entropy_op = Operator(
         "onehot_cross_entropy", X=input, label=label, Y=cost_name)
     net.append_op(cross_entropy_op)
@@ -177,8 +180,8 @@ def error_rate(predict, label):
     return error_num / float(len(label))

-images = data_layer(name='pixel', dims=[BATCH_SIZE, 784])
-labels = data_layer(name='label', dims=[BATCH_SIZE])
+images = data_layer(name="pixel", dims=[BATCH_SIZE, 784])
+labels = data_layer(name="label", dims=[BATCH_SIZE])
 fc1 = fc_layer(net=forward_net, input=images, size=100, act="sigmoid")
 fc2 = fc_layer(net=forward_net, input=fc1, size=100, act="sigmoid")
 predict = fc_layer(net=forward_net, input=fc2, size=10, act="softmax")
...
@@ -6,13 +6,13 @@ from paddle.v2.framework.op import Operator

 class OpTestMeta(type):
     """
     Operator Test ClassMeta.

     It injects `test_all` method into user's OperatorTest class, to make Python
     unittest module run that method.

     The `test_all` read what value is stored in `self`. It use self's values to
     create and run a operator, and check whether that op is OK or not.

     See `test_add_two_op` for example usage.
     """
...
@@ -11,7 +11,7 @@ class TestAddOp(unittest.TestCase):
     __metaclass__ = OpTestMeta

     def setUp(self):
-        self.type = "add_two"
+        self.type = "add"
         self.inputs = {
             'X': numpy.random.random((102, 105)).astype("float32"),
             'Y': numpy.random.random((102, 105)).astype("float32")
...
import unittest
import numpy as np
from gradient_checker import GradientChecker, create_op
from op_test_util import OpTestMeta


class TestCosSimOp(unittest.TestCase):
    __metaclass__ = OpTestMeta

    def setUp(self):
        self.type = "cos_sim"
        self.inputs = {
            'X': np.random.random((32, 64)).astype("float32"),
            'Y': np.random.random((32, 64)).astype("float32")
        }
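        # Compute the expected outputs with numpy as the reference results.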
        expect_x_norm = np.linalg.norm(self.inputs['X'], axis=1)
        expect_y_norm = np.linalg.norm(self.inputs['Y'], axis=1)
        expect_out = (self.inputs['X'] * self.inputs['Y']).sum(axis=1) / \
            expect_x_norm / expect_y_norm
        self.outputs = {
            'XNorm': np.expand_dims(expect_x_norm, 1),
            'YNorm': np.expand_dims(expect_y_norm, 1),
            'Out': np.expand_dims(expect_out, 1)
        }
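
# Gradient checks for the cos_sim operator; no_grad_set skips the gradient of
# one input in the ignore_x / ignore_y cases.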
class TestCosSimGradOp(GradientChecker):
    def setUp(self):
        self.op = create_op("cos_sim")
        self.inputs = {
            'X': np.random.random((10, 5)).astype("float32"),
            'Y': np.random.random((10, 5)).astype("float32")
        }

    def test_cpu_gpu_compare(self):
        self.compare_grad(self.op, self.inputs)

    def test_normal(self):
        self.check_grad(
            self.op, self.inputs, ["X", "Y"], "Out", max_relative_error=0.05)

    def test_ignore_x(self):
        self.check_grad(
            self.op,
            self.inputs, ["Y"],
            "Out",
            max_relative_error=0.05,
            no_grad_set={"X"})

    def test_ignore_y(self):
        self.check_grad(
            self.op,
            self.inputs, ["X"],
            "Out",
            max_relative_error=0.05,
            no_grad_set={"Y"})


if __name__ == '__main__':
    unittest.main()
...@@ -7,11 +7,11 @@ from gradient_checker import get_numeric_gradient ...@@ -7,11 +7,11 @@ from gradient_checker import get_numeric_gradient
class GetNumericGradientTest(unittest.TestCase): class GetNumericGradientTest(unittest.TestCase):
def test_add_op(self): def test_add_op(self):
add_op = Operator('add_two', X="X", Y="Y", Out="Z") add_op = Operator("add", X="X", Y="Y", Out="Z")
x = numpy.random.random((10, 1)).astype("float32") x = numpy.random.random((10, 1)).astype("float32")
y = numpy.random.random((10, 1)).astype("float32") y = numpy.random.random((10, 1)).astype("float32")
arr = get_numeric_gradient(add_op, {'X': x, "Y": y}, 'Z', 'X') arr = get_numeric_gradient(add_op, {"X": x, "Y": y}, "Z", "X")
self.assertAlmostEqual(arr.mean(), 1.0, delta=1e-4) self.assertAlmostEqual(arr.mean(), 1.0, delta=1e-4)
def test_softmax_op(self): def test_softmax_op(self):
...@@ -28,16 +28,16 @@ class GetNumericGradientTest(unittest.TestCase): ...@@ -28,16 +28,16 @@ class GetNumericGradientTest(unittest.TestCase):
dX[i, :] = Y[i, :] * (dY[i, :] - d) dX[i, :] = Y[i, :] * (dY[i, :] - d)
return dX return dX
softmax_op = Operator("softmax", X="X", Y="Y") softmax_op = Operator("softmax", Logits="Logits", Out="Out")
X = numpy.random.random((2, 2)).astype("float32") X = numpy.random.random((2, 2)).astype("float32")
Y = numpy.apply_along_axis(stable_softmax, 1, X) Y = numpy.apply_along_axis(stable_softmax, 1, X)
dY = numpy.ones(Y.shape) dY = numpy.ones(Y.shape)
dX = label_softmax_grad(Y, dY) dX = label_softmax_grad(Y, dY)
arr = get_numeric_gradient(softmax_op, {"X": X}, 'Y', 'X') arr = get_numeric_gradient(softmax_op, {"Logits": X}, "Out", "Logits")
numpy.testing.assert_almost_equal(arr, dX, decimal=1e-2) numpy.testing.assert_almost_equal(arr, dX, decimal=1e-2)
if __name__ == '__main__': if __name__ == "__main__":
unittest.main() unittest.main()
@@ -15,7 +15,7 @@ def fc(X, W, Y):
 class TestNet(unittest.TestCase):
     def test_net_all(self):
         net = core.Net.create()
-        op1 = Operator("add_two", X="X", Y="Y", Out="Out")
+        op1 = Operator("add", X="X", Y="Y", Out="Out")
         net.append_op(op1)

         net2 = core.Net.create()
@@ -26,7 +26,7 @@ class TestNet(unittest.TestCase):
         expected = '''
 Op(plain_net), inputs:{all[W, X, Y]}, outputs:{all[Out, fc.out, pre_activation]}.
-    Op(add_two), inputs:{X[X], Y[Y]}, outputs:{Out[Out]}.
+    Op(add), inputs:{X[X], Y[Y]}, outputs:{Out[Out]}.
     Op(plain_net), inputs:{all[W, X]}, outputs:{all[fc.out, pre_activation]}.
     Op(plain_net), inputs:{all[W, X]}, outputs:{all[fc.out, pre_activation]}.
     Op(mul), inputs:{X[X], Y[W]}, outputs:{Out[pre_activation]}.
...
@@ -193,10 +193,10 @@ class TestOpDescCreationMethod(unittest.TestCase):

 class TestOpCreations(unittest.TestCase):
     def test_all(self):
-        add_op = op.Operator("add_two", X="a", Y="b", Out="z")
+        add_op = op.Operator("add", X="a", Y="b", Out="z")
         self.assertIsNotNone(add_op)
         # Invoke C++ DebugString()
-        self.assertEqual('Op(add_two), inputs:{X[a], Y[b]}, outputs:{Out[z]}.',
+        self.assertEqual('Op(add), inputs:{X[a], Y[b]}, outputs:{Out[z]}.',
                          str(add_op))
...
@@ -146,7 +146,7 @@ class TestRecurrentOp(unittest.TestCase):
         stepnet = core.Net.create()
         x_fc_op = Operator("mul", X="x@alias", Y="W", Out="Wx")
         h_fc_op = Operator("mul", X="h@pre", Y="U", Out="Uh")
-        sum_op = Operator("add_two", X="Wx", Y="Uh", Out="sum")
+        sum_op = Operator("add", X="Wx", Y="Uh", Out="sum")
         sig_op = Operator("sigmoid", X="sum", Y="h@alias")

         for op in [x_fc_op, h_fc_op, sum_op, sig_op]:
...
@@ -18,10 +18,9 @@ class TestSoftmaxOp(unittest.TestCase):

     def setUp(self):
         self.type = "softmax"
-        self.inputs = {"logits": np.random.random((10, 10)).astype("float32")}
+        self.inputs = {"Logits": np.random.random((10, 10)).astype("float32")}
         self.outputs = {
-            "softmax":
-            np.apply_along_axis(stable_softmax, 1, self.inputs["logits"])
+            "Out": np.apply_along_axis(stable_softmax, 1, self.inputs["Logits"])
         }
@@ -29,11 +28,11 @@ class TestSoftmaxGradOp(GradientChecker):
     def setUp(self):
         self.op = create_op("softmax")
         self.inputs = {
-            "logits": np.random.uniform(0.1, 1, [10, 10]).astype("float32")
+            "Logits": np.random.uniform(0.1, 1, [10, 10]).astype("float32")
         }

     def test_softmax_grad(self):
-        self.check_grad(self.op, self.inputs, ["logits"], "softmax")
+        self.check_grad(self.op, self.inputs, ["Logits"], "Out")

 if __name__ == "__main__":
...