diff --git a/doc/howto/dev/new_op_cn.md b/doc/howto/dev/new_op_cn.md
index 58665e9f2b6299ec3959ed6858ab01d459f64dd8..e3892849abe21fc207d2fcbe4adc65184ba771f4 100644
--- a/doc/howto/dev/new_op_cn.md
+++ b/doc/howto/dev/new_op_cn.md
@@ -262,7 +262,7 @@ MulOp(const std::string &type, const framework::VariableNameMap &inputs,
- Building the library
- No need to modify the [`paddle/pybind/CMakeLists.txt`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/pybind/CMakeLists.txt) file; newly added `*_op.cc` files under the `paddle/operators` directory are automatically compiled and linked into the generated library.
+ Newly added `*_op.cc` files under the `paddle/operators` directory are automatically compiled and linked into the generated library.
## Implementing Unit Tests
@@ -354,11 +354,7 @@ class TestMulGradOp(GradientChecker):
### Building and Running Unit Tests
-After writing the unit test, add the following to [`python/paddle/v2/framework/tests/CMakeLists.txt`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/tests/CMakeLists.txt) to register it with the build:
-
-```
-py_test(test_mul_op SRCS test_mul_op.py)
-```
+Newly added `test_*.py` unit tests under the `python/paddle/v2/framework/tests` directory are automatically picked up and added to the build.
Note that, **unlike building a single Op, running the unit tests requires building the entire project**, and `WITH_TESTING` must be enabled at build time, i.e. `cmake paddle_dir -DWITH_TESTING=ON`. After the build succeeds, run the following command to execute the unit tests:
diff --git a/paddle/framework/backward.md b/paddle/framework/backward.md
index c762811dfc190b255e0a3389885a081ce8315caf..0a6d762bc8be5201ac196b4bc6107c06d07a31d7 100644
--- a/paddle/framework/backward.md
+++ b/paddle/framework/backward.md
@@ -2,11 +2,22 @@
## Motivation
-In Neural Network, the backpropagation algorithm follows the chain rule, so we need to compound the gradient operators/expressions together with the chain rule. Every forward network needs a backward network to construct the full computation graph, the operator/expression's backward pass will be generated respect to forward pass.
+In neural networks, many models are currently solved by the backpropagation algorithm (known as BP). Technically, it calculates the gradient of the loss function and then distributes that gradient back through the network. Since this follows the chain rule, we need a module that chains the gradient operators/expressions together to construct the backward pass. Every forward network needs a backward network to form the full computation graph; the operator/expression's backward pass is generated with respect to the forward pass.
-## Backward Operator Registry
+## Implementation
-A backward network is built up with several backward operators. Backward operators take forward operators' inputs outputs, and output gradients and then calculate its input gradients.
+In this design doc, we expose only one API for generating the backward pass.
+
+```c++
+std::unique_ptr<OperatorBase> Backward(const OperatorBase& forwardOp,
+                                       const std::unordered_set<std::string>& no_grad_vars);
+```
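+
+A minimal usage sketch follows; it is illustrative only (`forward_net` stands for any forward `OperatorBase`, e.g. a `NetOp` built elsewhere, and the variable name `"b"` is a placeholder):
+
+```c++
+// No gradient operators are generated for variables listed in no_grad_vars.
+std::unordered_set<std::string> no_grad_vars{"b"};
+// The returned operator holds the complete backward pass of forward_net.
+std::unique_ptr<OperatorBase> backward_op = Backward(forward_net, no_grad_vars);
+```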
+
+The implementation behind it can be divided into two parts: **Backward Operator Creating** and **Backward Network Building**.
+
+### Backward Operator Registry
+
+A backward network is built up with several backward operators. Backward operators take the forward operators' inputs, outputs, and output gradients, and then calculate their input gradients.
| | forward operator | backward operator
| ---------------------- | ---------------- |------------------------- |
@@ -25,7 +36,7 @@ REGISTER_OP(mul, MulOp, MulOpMaker, mul_grad, MulOpGrad);
`mul_grad` is the type of backward operator, and `MulOpGrad` is its class name.
-## Backward Opeartor Creating
+### Backward Operator Creating
Given a certain forward operator, we can get its corresponding backward operator by calling:
@@ -43,40 +54,47 @@ The function `BuildGradOp` will sequentially execute following processes:
4. Building backward operator with `inputs`, `outputs` and forward operator's attributes.
-## Backward Network Building
-
-A backward network is a series of backward operators. The main idea of building a backward network is creating backward operators in the inverted sequence and put them together.
+### Backward Network Building
-In our design, the network itself is also a kind of operator. So the operators contained by a big network may be some small network.
-
-given a forward network, it generates the backward network. We only care about the Gradients—`OutputGradients`, `InputGradients`.
+A backward network is a series of backward operators. The main idea of building a backward network is to create backward operators in the inverted sequence and append them one by one. There are some corner cases that need special handling.
1. Op
- when the input forward network is an Op, return its gradient Operator Immediately.
+ When the input forward network is an Op, return its gradient operator immediately. If all of its outputs are in the no-gradient set, return a special `NOP` instead.
2. NetOp
- when the input forward network is a NetOp, it needs to call the sub NetOp/Operators backward function recursively. During the process, we need to collect the `OutputGradients` name according to the forward NetOp.
+ In our design, the network itself is also a kind of operator (**NetOp**), so the operators contained in a big network may themselves be small networks. When the input forward network is a NetOp, we call the backward functions of its sub NetOps/Operators recursively. During this process, we need to collect the `OutputGradients` names according to the forward NetOp.
+
+3. RnnOp
+
+ RnnOp is a nested stepnet operator. The backward module needs to recursively call `Backward` for every stepnet.
+
+4. Sharing Variables
+
+ **Sharing variables**. As illustrated in the pictures below, two operators share the same variable name `W@GRAD`, which will overwrite their shared input variable.
+
+
+
- **shared variable**. As illustrated in the pictures, two operator's `Output` `Gradient` will overwrite their shared input variable.
+ pic 1. Sharing variables in operators.
-
-
+
- 1. Shared variable in operators.
+ Sharing a variable between operators, or using the same input variable in multiple operators, leads to a duplicate gradient variable. As the demo above shows, we need to rename the gradient names recursively and add a generic add operator to replace the overwriting links.
-
+
+
- Share variable between operators or same input variable used in multiple operators leads to a duplicate gradient variable. As demo show above, we need to rename gradient name recursively and add a generic add operator replace the overwrite links.
+ pic 2. Replace the sharing variable's gradient with an `Add` operator.
-
-
+
- 2. Replace shared variable's gradient with `Add` operator.
+ Because our framework looks variables up by name, we need to rename the output links. We add a numeric suffix to represent its position in clockwise order.
-
+5. Part of the Gradient is Zero.
+
+ In the whole graph, there are cases where one operator's gradient is not needed but the gradient of its input is a dependency link of another operator; in that position we need to fill in a gradient matrix of the same shape. In our implementation, we insert a special `fillZeroLike` operator.
- Then collect the sub graph `OutputGradients`/`InputGradients` as the NetOp's and return it.
+Following the rules above, we then collect the sub-graph's `OutputGradients`/`InputGradients` as the NetOp's and return it. A pseudocode sketch of the whole procedure is given below.
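+
+The sketch below summarizes the control flow described above. It is pseudocode under simplifying assumptions: the helper names `BackwardRecursive`, `AllOutputsInNoGradSet`, `CreateGradOp`, `MakeNOP`, `MakeNet`, `SubOps` and `FixSharedAndZeroGradients` are illustrative and do not denote the actual implementation.
+
+```c++
+// Hedged sketch; every helper function used here is hypothetical.
+std::unique_ptr<OperatorBase> BackwardRecursive(
+    const OperatorBase& forward_op,
+    const std::unordered_set<std::string>& no_grad_vars) {
+  // Rule 1: a plain Op maps to its registered gradient op, or to a NOP when
+  // all of its outputs are in the no-gradient set.
+  if (!forward_op.IsNetOp()) {
+    if (AllOutputsInNoGradSet(forward_op, no_grad_vars)) return MakeNOP();
+    return CreateGradOp(forward_op);
+  }
+  // Rules 2 and 3: a NetOp (or a stepnet inside an RnnOp) recurses over its
+  // sub-operators in reverse order.
+  auto net = MakeNet();
+  const auto& sub_ops = SubOps(forward_op);
+  for (auto it = sub_ops.rbegin(); it != sub_ops.rend(); ++it) {
+    net->AppendOp(BackwardRecursive(**it, no_grad_vars));
+  }
+  // Rules 4 and 5: rename duplicated gradient variables, insert Add operators
+  // for shared gradients and fill-zeros-like operators for unused ones.
+  FixSharedAndZeroGradients(net.get(), no_grad_vars);
+  return net;
+}
+```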
diff --git a/paddle/framework/images/duplicate_op2.graffle b/paddle/framework/images/duplicate_op2.graffle
index ede3bca30ae17d5af52505fd94dc2f79b23b57e0..5cec3bc64dbd44dc99e348485969f29bd128ceb1 100644
Binary files a/paddle/framework/images/duplicate_op2.graffle and b/paddle/framework/images/duplicate_op2.graffle differ
diff --git a/paddle/framework/images/duplicate_op2.png b/paddle/framework/images/duplicate_op2.png
index 4e872dc2caf3b0cbd0d5176f11a14801b538dc86..21cdd5cabf1b5203e1435a75b57770d2f702fa92 100644
Binary files a/paddle/framework/images/duplicate_op2.png and b/paddle/framework/images/duplicate_op2.png differ
diff --git a/paddle/framework/tensor.h b/paddle/framework/tensor.h
index ce938b21437195fed8c1adad4329fd139f3f96ab..4b5a2ae523f2f7fde5445f0534cd99969ad9d59e 100644
--- a/paddle/framework/tensor.h
+++ b/paddle/framework/tensor.h
@@ -81,6 +81,9 @@ class Tensor {
/*! Return the dimensions of the memory block. */
inline const DDim& dims() const;
+ /*! Return the numel of the memory block. */
+ inline int64_t numel() const;
+
/*! Resize the dimensions of the memory block. */
inline Tensor& Resize(const DDim& dims);
@@ -162,6 +165,12 @@ class Tensor {
/*! points to dimensions of memory block. */
DDim dims_;
+ /**
+ * A cache of the number of elements in a tensor.
+ * Would be 0 for an uninitialized tensor.
+ */
+ int64_t numel_{0};
+
/**
* @brief A PlaceHolder may be shared by more than one tensor.
*
diff --git a/paddle/framework/tensor_impl.h b/paddle/framework/tensor_impl.h
index 637f04ae0037bd402d855b8bcde8087bfe8328d1..642b53efc7095d25712ca324638f5fe9b8316c0c 100644
--- a/paddle/framework/tensor_impl.h
+++ b/paddle/framework/tensor_impl.h
@@ -24,7 +24,7 @@ inline void Tensor::check_memory_size() const {
PADDLE_ENFORCE_NOT_NULL(
holder_, "Tenosr holds no memory. Call Tensor::mutable_data first.");
PADDLE_ENFORCE_GE(
- holder_->size(), product(dims_) * sizeof(T) + offset_,
+ holder_->size(), numel() * sizeof(T) + offset_,
"Tensor's dims_ is out of bound. Call Tensor::mutable_data "
"first to re-allocate memory.\n"
"or maybe the required data-type mismatches the data already stored.");
@@ -54,11 +54,11 @@ inline T* Tensor::mutable_data(DDim dims, platform::Place place) {
template <typename T>
inline T* Tensor::mutable_data(platform::Place place) {
static_assert(std::is_pod<T>::value, "T must be POD");
- PADDLE_ENFORCE_GT(product(dims_), 0,
+ PADDLE_ENFORCE_GT(numel(), 0,
"Tensor's numel must be larger than zero to call "
"Tensor::mutable_data. Call Tensor::set_dim first.");
/* some versions of boost::variant don't have operator!= */
- int64_t size = product(dims_) * sizeof(T);
+ int64_t size = numel() * sizeof(T);
if (holder_ == nullptr || !(holder_->place() == place) ||
holder_->size() < size + offset_) {
if (platform::is_cpu_place(place)) {
@@ -97,7 +97,7 @@ inline void Tensor::CopyFrom(const Tensor& src,
auto dst_ptr = static_cast<void*>(mutable_data<T>(dst_place));
- auto size = product(src.dims_) * sizeof(T);
+ auto size = src.numel() * sizeof(T);
if (platform::is_cpu_place(src_place) && platform::is_cpu_place(dst_place)) {
memory::Copy(boost::get<platform::CPUPlace>(dst_place), dst_ptr,
@@ -131,7 +131,7 @@ inline Tensor Tensor::Slice(const int& begin_idx, const int& end_idx) const {
PADDLE_ENFORCE_LT(begin_idx, end_idx,
"Begin index must be less than end index.");
PADDLE_ENFORCE_NE(dims_[0], 1, "Can not slice a tensor with dims_[0] = 1.");
- size_t base = product(dims_) / dims_[0];
+ size_t base = numel() / dims_[0];
Tensor dst;
dst.holder_ = holder_;
DDim dst_dims = dims_;
@@ -143,11 +143,14 @@ inline Tensor Tensor::Slice(const int& begin_idx, const int& end_idx) const {
inline Tensor& Tensor::Resize(const DDim& dims) {
dims_ = dims;
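+ // Keep the cached element count in sync with the new dimensions.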
+ numel_ = product(dims_);
return *this;
}
inline const DDim& Tensor::dims() const { return dims_; }
+inline int64_t Tensor::numel() const { return numel_; }
+
template <typename T>
inline Tensor ReshapeToMatrix(const Tensor& src, int num_col_dims) {
Tensor res;
diff --git a/paddle/gserver/layers/SwitchOrderLayer.cpp b/paddle/gserver/layers/SwitchOrderLayer.cpp
index d7eee6eaf078dab8d48adc4c7ee758a433672ac6..e97809141a93106f9e6ebaf40c7e8aa9c6010557 100644
--- a/paddle/gserver/layers/SwitchOrderLayer.cpp
+++ b/paddle/gserver/layers/SwitchOrderLayer.cpp
@@ -83,8 +83,7 @@ void SwitchOrderLayer::forward(PassType passType) {
setOutDims();
resetOutput(outDims_[0], outDims_[1] * outDims_[2] * outDims_[3]);
if (heightAxis_.size() > 0) {
- getOutputValue()->reshape(reshapeHeight_, reshapeWidth_);
- getOutputGrad()->reshape(reshapeHeight_, reshapeWidth_);
+ resetOutput(reshapeHeight_, reshapeWidth_);
}
// switch NCHW to NHWC
diff --git a/paddle/operators/cos_sim_op.h b/paddle/operators/cos_sim_op.h
index 9e2bcebe3b5432c157fac895a9bbab5164193dbb..0dc509952578497671a128374f77ce616a520909 100644
--- a/paddle/operators/cos_sim_op.h
+++ b/paddle/operators/cos_sim_op.h
@@ -42,7 +42,7 @@ class CosSimKernel : public framework::OpKernel {
output_y_norm->mutable_data(context.GetPlace());
auto dims = input_x->dims();
- int size = static_cast(framework::product(dims));
+ int64_t size = input_x->numel();
auto new_dims = framework::make_ddim({dims[0], size / dims[0]});
auto x = EigenMatrix::From(*input_x, new_dims);
auto y = EigenMatrix::From(*input_y, new_dims);
@@ -72,7 +72,7 @@ class CosSimGradKernel : public framework::OpKernel {
auto* input_grad_z = context.Input(framework::GradVarName("Out"));
auto dims = input_x->dims();
- int size = static_cast(framework::product(dims));
+ int64_t size = input_x->numel();
auto new_dims = framework::make_ddim({dims[0], size / dims[0]});
auto x = EigenMatrix::From(*input_x, new_dims);
auto y = EigenMatrix::From(*input_y, new_dims);
diff --git a/paddle/operators/elementwise_mul_op.cc b/paddle/operators/elementwise_mul_op.cc
new file mode 100644
index 0000000000000000000000000000000000000000..1742925545d29df5d7df719faaea3b754680ab61
--- /dev/null
+++ b/paddle/operators/elementwise_mul_op.cc
@@ -0,0 +1,109 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#include "paddle/operators/elementwise_mul_op.h"
+
+namespace paddle {
+namespace operators {
+
+using Tensor = framework::Tensor;
+
+class ElementWiseMulOp : public framework::OperatorWithKernel {
+ public:
+ using framework::OperatorWithKernel::OperatorWithKernel;
+
+ protected:
+ void InferShape(const framework::InferShapeContext &ctx) const override {
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("X"), "Input(X) should not be null");
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("Y"), "Input(Y) should not be null");
+ auto x_dim = ctx.Input<Tensor>("X")->dims();
+ auto y_dim = ctx.Input<Tensor>("Y")->dims();
+ PADDLE_ENFORCE_GE(x_dim.size(), y_dim.size(),
+ "Rank of first input must >= rank of second input.")
+ ctx.Output("Out")->Resize(x_dim);
+ }
+};
+
+class ElementWiseMulOpMaker : public framework::OpProtoAndCheckerMaker {
+ public:
+ ElementWiseMulOpMaker(framework::OpProto *proto,
+ framework::OpAttrChecker *op_checker)
+ : OpProtoAndCheckerMaker(proto, op_checker) {
+ AddInput("X", "The first input of elementwise mul op");
+ AddInput("Y", "The second input of elementwise mul op");
+ AddAttr("axis",
+ R"DOC(
+When shape(Y) does not equal shape(X),Y will be broadcasted
+to match the shape of X and axis should be dimension index Y in X
+ )DOC")
+ .SetDefault(-1)
+ .EqualGreaterThan(-1);
+
+ AddOutput("Out", "The output of elementwise mul op");
+ AddComment(R"DOC(
+Limited elementwise multiply operator. The equation is: Out = X ⊙ Y.
+1. The shape of Y should be the same as X, or
+2. Y's shape is a subset of X's shape.
+ Y will be broadcast to match the shape of X, and axis is the start dimension index for broadcasting Y onto X.
+ example:
+ shape(X) = (2, 3, 4, 5), shape(Y) = (,)
+ shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
+ shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
+ shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
+ shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
+)DOC");
+ }
+};
+
+class ElementWiseMulOpGrad : public framework::OperatorWithKernel {
+ public:
+ using framework::OperatorWithKernel::OperatorWithKernel;
+
+ protected:
+ void InferShape(const framework::InferShapeContext &ctx) const override {
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("X"), "Input(X) should not be null");
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("Y"), "Input(Y) should not be null");
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar(framework::GradVarName("Out")),
+ "Input(Out@GRAD) should not be null");
+
+ auto x_dims = ctx.Input<Tensor>("X")->dims();
+ auto y_dims = ctx.Input<Tensor>("Y")->dims();
+ auto out_dims = ctx.Input<Tensor>(framework::GradVarName("Out"))->dims();
+ auto *x_grad = ctx.Output<Tensor>(framework::GradVarName("X"));
+ auto *y_grad = ctx.Output<Tensor>(framework::GradVarName("Y"));
+
+ PADDLE_ENFORCE_GE(x_dims.size(), y_dims.size(),
+ "Rank of first input must >= rank of second input.")
+
+ if (x_grad) {
+ x_grad->Resize(x_dims);
+ }
+
+ if (y_grad) {
+ y_grad->Resize(y_dims);
+ }
+ }
+};
+} // namespace operators
+} // namespace paddle
+
+namespace ops = paddle::operators;
+REGISTER_OP(elementwise_mul, ops::ElementWiseMulOp, ops::ElementWiseMulOpMaker,
+ elementwise_mul_grad, ops::ElementWiseMulOpGrad);
+REGISTER_OP_CPU_KERNEL(
+ elementwise_mul,
+ ops::ElementWiseMulKernel<paddle::platform::CPUPlace, float>);
+REGISTER_OP_CPU_KERNEL(
+ elementwise_mul_grad,
+ ops::ElementWiseMulGradKernel<paddle::platform::CPUPlace, float>);
diff --git a/paddle/operators/elementwise_mul_op.cu b/paddle/operators/elementwise_mul_op.cu
new file mode 100644
index 0000000000000000000000000000000000000000..56f2087c22c6c599a3c5aef36eb0fe3eac295bef
--- /dev/null
+++ b/paddle/operators/elementwise_mul_op.cu
@@ -0,0 +1,25 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#define EIGEN_USE_GPU
+#include "paddle/operators/elementwise_mul_op.h"
+
+namespace ops = paddle::operators;
+
+REGISTER_OP_GPU_KERNEL(
+ elementwise_mul,
+ ops::ElementWiseMulKernel<paddle::platform::GPUPlace, float>);
+REGISTER_OP_GPU_KERNEL(
+ elementwise_mul_grad,
+ ops::ElementWiseMulGradKernel<paddle::platform::GPUPlace, float>);
diff --git a/paddle/operators/elementwise_mul_op.h b/paddle/operators/elementwise_mul_op.h
new file mode 100644
index 0000000000000000000000000000000000000000..e9ed6791799240039f9af42c1a4339be7126ee65
--- /dev/null
+++ b/paddle/operators/elementwise_mul_op.h
@@ -0,0 +1,185 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#pragma once
+#include
+#include "paddle/framework/eigen.h"
+#include "paddle/framework/op_registry.h"
+#include "paddle/operators/math/math_function.h"
+
+namespace paddle {
+namespace operators {
+/*
+ * Out = X ⊙ Y
+ * 1. shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
+ * pre=2, n=3*4, post=5
+ * 2. shape(X) = (2, 3, 4, 5), shape(Y) = (4,5)
+ * pre=2*3, n=4*5, post=1
+ */
+
+inline void get_mid_dims(const framework::DDim& x_dims,
+ const framework::DDim& y_dims, const int axis,
+ int& pre, int& n, int& post) {
+ pre = 1;
+ n = 1;
+ post = 1;
+ for (int i = 0; i < axis; ++i) {
+ pre *= x_dims[i];
+ }
+
+ for (int i = 0; i < y_dims.size(); ++i) {
+ PADDLE_ENFORCE_EQ(x_dims[i + axis], y_dims[i],
+ "Broadcast dimension mismatch.");
+ n *= y_dims[i];
+ }
+
+ for (int i = axis + y_dims.size(); i < x_dims.size(); ++i) {
+ post *= x_dims[i];
+ }
+}
+
+template <typename Place, typename T>
+class ElementWiseMulKernel : public framework::OpKernel {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const override {
+ using Tensor = framework::Tensor;
+
+ auto* x = ctx.Input<Tensor>("X");
+ auto* y = ctx.Input<Tensor>("Y");
+ auto* z = ctx.Output<Tensor>("Out");
+ z->mutable_data<T>(ctx.GetPlace());
+
+ auto x_e = framework::EigenVector<T>::Flatten(*x);
+ auto y_e = framework::EigenVector<T>::Flatten(*y);
+ auto z_e = framework::EigenVector<T>::Flatten(*z);
+
+ auto x_dims = x->dims();
+ auto y_dims = y->dims();
+ PADDLE_ENFORCE_GE(x_dims.size(), y_dims.size(),
+ "Rank of first input must >= rank of second input.")
+
+ if (x_dims == y_dims || product(y_dims) == 1) {
+ z_e.device(ctx.GetEigenDevice<Place>()) = x_e * y_e;
+ return;
+ }
+
+ int axis = ctx.Attr<int>("axis");
+ axis = (axis == -1 ? x_dims.size() - y_dims.size() : axis);
+ PADDLE_ENFORCE(axis >= 0 && axis < x_dims.size(),
+ "Axis should be in range [0, x_dims)");
+
+ int pre, n, post;
+ get_mid_dims(x_dims, y_dims, axis, pre, n, post);
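+ // post == 1 means y lines up with the trailing dimensions of x, so a 2-D
+ // (pre, n) broadcast is enough; otherwise broadcast over (pre, n, post).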
+ if (post == 1) {
+ auto y_bcast = y_e.reshape(Eigen::DSizes<int, 2>(1, n))
+ .broadcast(Eigen::DSizes<int, 2>(pre, 1))
+ .reshape(Eigen::DSizes<int, 1>(x_e.size()));
+ z_e.device(ctx.GetEigenDevice<Place>()) = x_e * y_bcast;
+ return;
+ } else {
+ auto y_bcast = y_e.reshape(Eigen::DSizes<int, 3>(1, n, 1))
+ .broadcast(Eigen::DSizes<int, 3>(pre, 1, post))
+ .reshape(Eigen::DSizes<int, 1>(x_e.size()));
+ z_e.device(ctx.GetEigenDevice<Place>()) = x_e * y_bcast;
+ return;
+ }
+ }
+};
+
+template <typename Place, typename T>
+class ElementWiseMulGradKernel : public framework::OpKernel {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const override {
+ using Tensor = framework::Tensor;
+
+ auto* x = ctx.Input<Tensor>("X");
+ auto* y = ctx.Input<Tensor>("Y");
+ auto* dout = ctx.Input<Tensor>(framework::GradVarName("Out"));
+
+ auto x_e = framework::EigenVector<T>::Flatten(*x);
+ auto y_e = framework::EigenVector<T>::Flatten(*y);
+ auto dout_e = framework::EigenVector<T>::Flatten(*dout);
+
+ auto x_dims = x->dims();
+ auto y_dims = y->dims();
+
+ auto* dx = ctx.Output<Tensor>(framework::GradVarName("X"));
+ auto* dy = ctx.Output<Tensor>(framework::GradVarName("Y"));
+ if (dx) {
+ dx->mutable_data<T>(ctx.GetPlace());
+ }
+ if (dy) {
+ dy->mutable_data<T>(ctx.GetPlace());
+ }
+
+ if (x_dims == y_dims || product(y_dims) == 1) {
+ if (dx) {
+ auto dx_e = framework::EigenVector<T>::Flatten(*dx);
+ dx_e.device(ctx.GetEigenDevice<Place>()) = dout_e * y_e;
+ }
+
+ if (dy) {
+ auto dy_e = framework::EigenVector<T>::Flatten(*dy);
+ dy_e.device(ctx.GetEigenDevice<Place>()) = x_e * dout_e;
+ }
+ return;
+ }
+
+ int axis = ctx.Attr<int>("axis");
+ axis = (axis == -1 ? x_dims.size() - y_dims.size() : axis);
+
+ int pre, n, post;
+ get_mid_dims(x_dims, y_dims, axis, pre, n, post);
+
+ // TODO(gongweibao): wrap reshape to a function.
+ if (post == 1) {
+ auto y_e_bcast = y_e.reshape(Eigen::DSizes<int, 2>(1, n))
+ .broadcast(Eigen::DSizes<int, 2>(pre, 1))
+ .reshape(Eigen::DSizes<int, 1>(x_e.size()));
+ if (dx) {
+ auto dx_e = framework::EigenVector<T>::Flatten(*dx);
+ dx_e.device(ctx.GetEigenDevice<Place>()) = dout_e * y_e_bcast;
+ }
+
+ if (dy) {
+ auto dy_e = framework::EigenVector<T>::Flatten(*dy);
+ dy_e.device(ctx.GetEigenDevice<Place>()) =
+ (x_e * dout_e)
+ .reshape(Eigen::DSizes<int, 2>(pre, n))
+ .sum(Eigen::array<int, 1>{{0}});
+ }
+ return;
+ } else {
+ auto y_e_bcast = y_e.reshape(Eigen::DSizes<int, 3>(1, n, 1))
+ .broadcast(Eigen::DSizes<int, 3>(pre, 1, post))
+ .reshape(Eigen::DSizes<int, 1>(x_e.size()));
+ if (dx) {
+ auto dx_e = framework::EigenVector<T>::Flatten(*dx);
+ dx_e.device(ctx.GetEigenDevice<Place>()) = dout_e * y_e_bcast;
+ }
+
+ if (dy) {
+ auto dy_e = framework::EigenVector<T>::Flatten(*dy);
+ dy_e.device(ctx.GetEigenDevice<Place>()) =
+ (x_e * dout_e)
+ .reshape(Eigen::DSizes<int, 3>(pre, n, post))
+ .sum(Eigen::array<int, 2>{{0, 2}});
+ }
+ return;
+ }
+ }
+};
+
+} // namespace operators
+} // namespace paddle
diff --git a/paddle/operators/gaussian_random_op.cc b/paddle/operators/gaussian_random_op.cc
index 6574880c0eb6324b2dd175e39a364d2ef46e735e..3d76516405960c502a46997108049b2db5cab6bf 100644
--- a/paddle/operators/gaussian_random_op.cc
+++ b/paddle/operators/gaussian_random_op.cc
@@ -31,7 +31,7 @@ class CPUGaussianRandomKernel : public framework::OpKernel {
}
engine.seed(seed);
std::normal_distribution dist(mean, std);
- int64_t size = framework::product(tensor->dims());
+ int64_t size = tensor->numel();
for (int64_t i = 0; i < size; ++i) {
data[i] = dist(engine);
}
diff --git a/paddle/operators/gaussian_random_op.cu b/paddle/operators/gaussian_random_op.cu
index d9dbc1dcfe6a6676938d64be93c879ea69148018..2d63b3049988cfc3135a87a57dad56b970df3eab 100644
--- a/paddle/operators/gaussian_random_op.cu
+++ b/paddle/operators/gaussian_random_op.cu
@@ -50,8 +50,8 @@ class GPUGaussianRandomKernel : public framework::OpKernel {
T mean = static_cast(context.Attr("mean"));
T std = static_cast(context.Attr("std"));
thrust::counting_iterator index_sequence_begin(0);
- ssize_t N = framework::product(tensor->dims());
- thrust::transform(index_sequence_begin, index_sequence_begin + N,
+ int64_t size = tensor->numel();
+ thrust::transform(index_sequence_begin, index_sequence_begin + size,
thrust::device_ptr(data),
GaussianGenerator(mean, std, seed));
}
diff --git a/paddle/operators/lookup_table_op.cu b/paddle/operators/lookup_table_op.cu
index 27eee3436af8107cef2aa3577ea238be49edf1af..708344046760691aa2da562eb1ee3d8b130c5f18 100644
--- a/paddle/operators/lookup_table_op.cu
+++ b/paddle/operators/lookup_table_op.cu
@@ -70,7 +70,7 @@ class LookupTableCUDAKernel : public framework::OpKernel {
size_t N = table_t->dims()[0];
size_t D = table_t->dims()[1];
- size_t K = product(ids_t->dims());
+ size_t K = ids_t->numel();
auto ids = ids_t->data();
auto table = table_t->data();
auto output = output_t->mutable_data(context.GetPlace());
@@ -91,7 +91,7 @@ class LookupTableGradCUDAKernel : public framework::OpKernel {
int N = d_table_t->dims()[0];
int D = d_table_t->dims()[1];
- int K = product(ids_t->dims());
+ int K = ids_t->numel();
const int32_t* ids = ids_t->data();
const T* d_output = d_output_t->data();
T* d_table = d_table_t->mutable_data(context.GetPlace());
diff --git a/paddle/operators/lookup_table_op.h b/paddle/operators/lookup_table_op.h
index 877b36cef4ea9cdaaaf37c97d5e5bfce55b91436..a1298906dd4b4209644fe06584f70169519de01c 100644
--- a/paddle/operators/lookup_table_op.h
+++ b/paddle/operators/lookup_table_op.h
@@ -35,7 +35,7 @@ class LookupTableKernel : public framework::OpKernel {
auto ids = ids_t->data();
auto table = table_t->data();
auto output = output_t->mutable_data(context.GetPlace());
- for (ssize_t i = 0; i < product(ids_t->dims()); ++i) {
+ for (int64_t i = 0; i < ids_t->numel(); ++i) {
PADDLE_ENFORCE_LT(ids[i], N);
PADDLE_ENFORCE_GE(ids[i], 0);
memcpy(output + i * D, table + ids[i] * D, D * sizeof(T));
@@ -61,7 +61,7 @@ class LookupTableGradKernel : public framework::OpKernel {
t.device(context.GetEigenDevice()) =
t.constant(static_cast(0));
- for (ssize_t i = 0; i < product(ids_t->dims()); ++i) {
+ for (int64_t i = 0; i < ids_t->numel(); ++i) {
PADDLE_ENFORCE_LT(ids[i], N);
PADDLE_ENFORCE_GE(ids[i], 0);
for (int j = 0; j < D; ++j) {
diff --git a/paddle/operators/mean_op.h b/paddle/operators/mean_op.h
index 9848af280b62729bef9243052ceae0b7d8f4c6f5..ce31e178d8e375dc59be80a6c05133201308da70 100644
--- a/paddle/operators/mean_op.h
+++ b/paddle/operators/mean_op.h
@@ -49,12 +49,11 @@ class MeanGradKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& context) const override {
auto OG = context.Input(framework::GradVarName("Out"));
- PADDLE_ENFORCE(framework::product(OG->dims()) == 1,
- "Mean Gradient should be scalar");
+ PADDLE_ENFORCE(OG->numel() == 1, "Mean Gradient should be scalar");
auto IG = context.Output(framework::GradVarName("X"));
IG->mutable_data(context.GetPlace());
- T ig_size = (T)framework::product(IG->dims());
+ T ig_size = static_cast(IG->numel());
Eigen::DSizes bcast(ig_size);
EigenVector::Flatten(*IG).device(context.GetEigenDevice()) =
diff --git a/paddle/operators/minus_op.cc b/paddle/operators/minus_op.cc
index 069fb5e1abc657aa02a50fde352ce88d078c36e1..a4876feb2edf77bd422fa2a7687b0fa7d55dae47 100644
--- a/paddle/operators/minus_op.cc
+++ b/paddle/operators/minus_op.cc
@@ -31,8 +31,7 @@ class MinusOp : public framework::OperatorWithKernel {
auto *right_tensor = ctx.Input("Y");
PADDLE_ENFORCE_EQ(
- framework::product(left_tensor->dims()),
- framework::product(right_tensor->dims()),
+ left_tensor->numel(), right_tensor->numel(),
"Minus operator must take two tensor with same num of elements");
ctx.Output("Out")->Resize(left_tensor->dims());
}
diff --git a/paddle/operators/name_convention.md b/paddle/operators/name_convention.md
new file mode 100644
index 0000000000000000000000000000000000000000..a090e0b5450509affdd739f63df618595f204f97
--- /dev/null
+++ b/paddle/operators/name_convention.md
@@ -0,0 +1,59 @@
+## Operator's Parameter Name Convention
+
+To make the operator documentation self-contained and clear, we recommend that operator names obey the conventions listed below.
+
+### OpProtoMaker names
+
+When defining an operator in Paddle, a corresponding [OpProtoMaker](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/operator.h#L170) (TODO: OpProtoMaker Doc) needs to be defined. All the Inputs/Outputs and Attributes are written into the [OpProto](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/framework.proto#L61), which is then used in the client language to create the operator.
+
+- Input/Output.
+ - Input/Output names follow **CamelCase**, e.g. `X`, `Y`, `Matrix`, `LastAxisInMatrix`. Inputs/Outputs behave much like Variables, so we prefer meaningful English words.
+ - If an operator's Inputs/Outputs are tensors in math that do not correspond to any meaningful word, input names should start from `X`, e.g. `X`, `Y`, and output names should start from `Out`, e.g. `Out`. This rule intends to keep operators with only a few inputs/outputs unified.
+
+- Attribute.
+ - Attribute names follow **camelCase**, e.g. `x`, `y`, `axis`, `rowwiseMatrix`. Again, meaningful English words are preferred.
+
+- Comments.
+ - Input/Output/Attr comments follow the format **(type, default value) usage**, describing which type it can be and how it is used in the operator, e.g. the attribute in Accumulator: `"gamma"`, `(float, default 1.0) Accumulation multiplier`.
+ - The operator comment uses the format `R"DOC(your comment here)DOC"`. You should explain the operator's input/output first. If there is a math calculation in this operator, write the equation in the comment, e.g. `Out = X + Y`.
+
+- Order.
+ - Follow the order of Input/Output, then Attribute, then Comments. See the example in the best practice section below.
+
+### Best Practice
+
+Here we give some examples to show how these rules will be used.
+
+- The operator has one input, one output, e.g. `relu`, inputs: `X`, outputs: `Out`.
+
+- The operator has two inputs, one output, e.g. `rowwise_add`, inputs: `X`, `Y`, outputs: `Out`.
+
+- The operator contains an attribute, e.g. `cosine`, inputs: `X`, attribute: `axis`, outputs: `Out`.
+
+ We give a full example of the Accumulate operator below.
+
+```c++
+class AccumulateOpMaker : public framework::OpProtoAndCheckerMaker {
+public:
+ AccumulateOpMaker(framework::OpProto *proto,
+ framework::OpAttrChecker *op_checker)
+ : OpProtoAndCheckerMaker(proto, op_checker) {
+ AddInput("X", "(Tensor) The input tensor that has to be accumulated to the output tensor. If the output size is not the same as input size, the output tensor is first reshaped and initialized to zero, and only then, accumulation is done.");
+ AddOutput("Out", "(Tensor) Accumulated output tensor");
+ AddAttr("gamma", "(float, default 1.0) Accumulation multiplier");
+ AddComment(R"DOC(
+Accumulate operator accumulates the input tensor to the output tensor. If the
+output tensor already has the right size, we add to it; otherwise, we first
+initialize the output tensor to all zeros, and then do accumulation. Any
+further calls to the operator, given that no one else fiddles with the output
+in the interim, will do simple accumulations.
+Accumulation is done as shown:
+
+Out = 1*X + gamma*Out
+
+where X is the input tensor, Out is the output tensor and gamma is the multiplier
+argument.
+)DOC");
+ }
+};
+```
diff --git a/paddle/operators/reshape_op.cc b/paddle/operators/reshape_op.cc
new file mode 100644
index 0000000000000000000000000000000000000000..b7061153d2bf13982f14f233e87a87daeeebf5fd
--- /dev/null
+++ b/paddle/operators/reshape_op.cc
@@ -0,0 +1,107 @@
+
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#include "paddle/operators/reshape_op.h"
+
+namespace paddle {
+namespace operators {
+
+class ReshapeOp : public framework::OperatorWithKernel {
+ public:
+ ReshapeOp(const std::string &type, const framework::VariableNameMap &inputs,
+ const framework::VariableNameMap &outputs,
+ const framework::AttributeMap &attrs)
+ : OperatorWithKernel(type, inputs, outputs, attrs) {}
+
+ protected:
+ void InferShape(const framework::InferShapeContext &ctx) const override {
+ // input check
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("X"), "Input(X) shouldn't be null");
+ auto shape = ctx.Attr<std::vector<int>>("shape");
+ PADDLE_ENFORCE(shape.size() > 0, "Attr(shape) shouldn't be empty.");
+ for (auto dim : shape) {
+ PADDLE_ENFORCE(dim > 0, "Each dimension of shape must be positive.");
+ }
+ // capacity check
+ int64_t capacity =
+ std::accumulate(shape.begin(), shape.end(), 1, std::multiplies<int>());
+ auto *in = ctx.Input<framework::Tensor>("X");
+ int64_t in_size = framework::product(in->dims());
+ PADDLE_ENFORCE_EQ(capacity, in_size,
+ "The size of Input(X) mismatches with Attr(shape).");
+ // resize output
+ std::vector<int64_t> shape_int64(shape.size(), 0);
+ std::transform(shape.begin(), shape.end(), shape_int64.begin(),
+ [](int a) { return static_cast<int64_t>(a); });
+ auto out_dims = framework::make_ddim(shape_int64);
+ ctx.Output("Out")->Resize(out_dims);
+ }
+};
+
+class ReshapeOpMaker : public framework::OpProtoAndCheckerMaker {
+ public:
+ ReshapeOpMaker(framework::OpProto *proto,
+ framework::OpAttrChecker *op_checker)
+ : OpProtoAndCheckerMaker(proto, op_checker) {
+ AddInput("X", "The input tensor of reshape operator.");
+ AddOutput("Out", "The output tensor of reshape operator.");
+ AddAttr>("shape", "Target shape of reshape operator.");
+ AddComment(R"DOC(Reshape operator
+
+Reshape Input(X) into the shape specified by Attr(shape).
+
+An example:
+Given a 2-D tensor X with 2 rows and 2 columns
+
+ [[1, 2], [3, 4]]
+
+with target shape = [1, 4], the reshape operator will transform
+the tensor X into a 1-D tensor:
+
+ [1, 2, 3, 4]
+
+)DOC");
+ }
+};
+
+class ReshapeGradOp : public framework::OperatorWithKernel {
+ public:
+ ReshapeGradOp(const std::string &type,
+ const framework::VariableNameMap &inputs,
+ const framework::VariableNameMap &outputs,
+ const framework::AttributeMap &attrs)
+ : OperatorWithKernel(type, inputs, outputs, attrs) {}
+
+ protected:
+ void InferShape(const framework::InferShapeContext &ctx) const override {
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar("X"), "Input(X) shouldn't be null.");
+ PADDLE_ENFORCE_NOT_NULL(ctx.InputVar(framework::GradVarName("Out")),
+ "Input(Out@GRAD) shouldn't be null.");
+ auto dims = ctx.Input("X")->dims();
+ auto *d_in = ctx.Output(framework::GradVarName("X"));
+ d_in->Resize(dims);
+ }
+};
+
+} // namespace operators
+} // namespace paddle
+namespace ops = paddle::operators;
+
+REGISTER_OP(reshape, ops::ReshapeOp, ops::ReshapeOpMaker, reshape_grad,
+ ops::ReshapeGradOp);
+REGISTER_OP_CPU_KERNEL(reshape,
+ ops::ReshapeKernel<paddle::platform::CPUPlace, float>);
+REGISTER_OP_CPU_KERNEL(
+ reshape_grad, ops::ReshapeGradKernel<paddle::platform::CPUPlace, float>);
diff --git a/paddle/operators/reshape_op.cu b/paddle/operators/reshape_op.cu
new file mode 100644
index 0000000000000000000000000000000000000000..23dbe089d3b37aabedf9ef166f7bbfbf67da7e0a
--- /dev/null
+++ b/paddle/operators/reshape_op.cu
@@ -0,0 +1,22 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#include "paddle/operators/reshape_op.h"
+
+REGISTER_OP_GPU_KERNEL(
+ reshape,
+ paddle::operators::ReshapeKernel<paddle::platform::GPUPlace, float>);
+REGISTER_OP_GPU_KERNEL(
+ reshape_grad,
+ paddle::operators::ReshapeGradKernel<paddle::platform::GPUPlace, float>);
diff --git a/paddle/operators/reshape_op.h b/paddle/operators/reshape_op.h
new file mode 100644
index 0000000000000000000000000000000000000000..873acf30782d390cdca5e7e864c76e1f743f9a7c
--- /dev/null
+++ b/paddle/operators/reshape_op.h
@@ -0,0 +1,55 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#pragma once
+
+#include "paddle/framework/eigen.h"
+#include "paddle/framework/op_registry.h"
+
+namespace paddle {
+namespace operators {
+
+template <typename Place, typename T>
+class ReshapeKernel : public framework::OpKernel {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const {
+ auto* out = ctx.Output<framework::Tensor>("Out");
+ auto* in = ctx.Input<framework::Tensor>("X");
+ out->mutable_data<T>(ctx.GetPlace());
+
+ auto shape = ctx.Attr<std::vector<int>>("shape");
+ std::vector<int64_t> shape_int64(shape.size(), 0);
+ std::transform(shape.begin(), shape.end(), shape_int64.begin(),
+ [](int a) { return static_cast<int64_t>(a); });
+ auto out_dims = framework::make_ddim(shape_int64);
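+ // CopyFrom keeps the data (and the source dims); Resize then relabels the
+ // buffer with the target dims, which hold the same number of elements.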
+ out->CopyFrom(*in, ctx.GetPlace());
+ out->Resize(out_dims);
+ }
+};
+
+template <typename Place, typename T>
+class ReshapeGradKernel : public framework::OpKernel {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const {
+ auto* d_out = ctx.Input<framework::Tensor>(framework::GradVarName("Out"));
+ auto* d_x = ctx.Output<framework::Tensor>(framework::GradVarName("X"));
+ d_x->mutable_data<T>(ctx.GetPlace());
+
+ auto in_dims = d_x->dims();
+ d_x->CopyFrom(*d_out, ctx.GetPlace());
+ d_x->Resize(in_dims);
+ }
+};
+} // namespace operators
+} // namespace paddle
diff --git a/paddle/operators/squared_l2_distance_op.cc b/paddle/operators/squared_l2_distance_op.cc
index dc30644a5e7e33d4289e48cac093aa5fde7e75e7..9f51d3efa8ecba894a1023b9de2df451ca85916c 100644
--- a/paddle/operators/squared_l2_distance_op.cc
+++ b/paddle/operators/squared_l2_distance_op.cc
@@ -41,8 +41,7 @@ class SquaredL2DistanceOp : public framework::OperatorWithKernel {
int rank = framework::arity(x_dims);
PADDLE_ENFORCE_GE(rank, 2, "Tensor rank should be at least equal to 2.");
- PADDLE_ENFORCE_EQ(framework::product(x_dims) / x_dims[0],
- framework::product(y_dims) / y_dims[0],
+ PADDLE_ENFORCE_EQ(x->numel() / x_dims[0], y->numel() / y_dims[0],
"Product of dimensions expcet the first dimension of "
"input and target must be equal.");
PADDLE_ENFORCE(y_dims[0] == 1 || y_dims[0] == x_dims[0],
@@ -50,8 +49,7 @@ class SquaredL2DistanceOp : public framework::OperatorWithKernel {
"or to 1.");
ctx.Output("sub_result")
- ->Resize({static_cast(x_dims[0]),
- static_cast(framework::product(x_dims) / x_dims[0])});
+ ->Resize({x_dims[0], x->numel() / x_dims[0]});
ctx.Output("Out")->Resize({x_dims[0], 1});
}
};
diff --git a/paddle/operators/squared_l2_distance_op.h b/paddle/operators/squared_l2_distance_op.h
index ad3347a0b35f3385c5adbcd7ceaa94fe134105e3..097ac04fc09a10b3b624f491a847e281e41a802c 100644
--- a/paddle/operators/squared_l2_distance_op.h
+++ b/paddle/operators/squared_l2_distance_op.h
@@ -39,7 +39,7 @@ class SquaredL2DistanceKernel : public framework::OpKernel {
auto in0_dims = in0->dims();
auto in1_dims = in1->dims();
- int cols = framework::product(in0_dims) / in0_dims[0];
+ int cols = in0->numel() / in0_dims[0];
// reduce dimensions except the first
auto x =
EigenMatrix::From(*in0, framework::make_ddim({in0_dims[0], cols}));
@@ -82,7 +82,7 @@ class SquaredL2DistanceGradKernel : public framework::OpKernel {
auto x_dims = x_g->dims();
auto y_dims = y_g->dims();
- int cols = framework::product(x_dims) / x_dims[0];
+ int cols = x_g->numel() / x_dims[0];
// calculate gradient
auto grad_mat = 2 *
(out_grad.broadcast(Eigen::array({{1, cols}}))) *
diff --git a/paddle/operators/uniform_random_op.cc b/paddle/operators/uniform_random_op.cc
index f2aeef6c310df8535e67fa3906301a87f8ec4694..b8fbc9b52aecdb5c8d985b5de9bcd7cb85835b60 100644
--- a/paddle/operators/uniform_random_op.cc
+++ b/paddle/operators/uniform_random_op.cc
@@ -35,7 +35,7 @@ class CPUUniformRandomKernel : public framework::OpKernel {
std::uniform_real_distribution dist(
static_cast(context.Attr("min")),
static_cast(context.Attr("max")));
- int64_t size = framework::product(tensor->dims());
+ int64_t size = tensor->numel();
for (int64_t i = 0; i < size; ++i) {
data[i] = dist(engine);
}
diff --git a/paddle/operators/uniform_random_op.cu b/paddle/operators/uniform_random_op.cu
index c2c041b144b6ca1f019f972e1301b756ec1c9301..6614b53b3f990d10c82633f3c1f079acea0cd827 100644
--- a/paddle/operators/uniform_random_op.cu
+++ b/paddle/operators/uniform_random_op.cu
@@ -53,8 +53,8 @@ class GPUUniformRandomKernel : public framework::OpKernel {
T min = static_cast(context.Attr("min"));
T max = static_cast(context.Attr("max"));
thrust::counting_iterator index_sequence_begin(0);
- ssize_t N = framework::product(tensor->dims());
- thrust::transform(index_sequence_begin, index_sequence_begin + N,
+ int64_t size = tensor->numel();
+ thrust::transform(index_sequence_begin, index_sequence_begin + size,
thrust::device_ptr(data),
UniformGenerator(min, max, seed));
}
diff --git a/paddle/pybind/pybind.cc b/paddle/pybind/pybind.cc
index ea09287f95e1f1e5500dfe799dc307965d7caace..a27160f6e5c410a76caee94cec76ffc6c8fa904a 100644
--- a/paddle/pybind/pybind.cc
+++ b/paddle/pybind/pybind.cc
@@ -35,6 +35,7 @@ USE_OP(add);
USE_OP(onehot_cross_entropy);
USE_OP(sgd);
USE_OP(mul);
+USE_OP(elementwise_mul);
USE_OP(mean);
USE_OP(sigmoid);
USE_OP(softmax);
@@ -54,6 +55,7 @@ USE_CPU_ONLY_OP(concat);
USE_OP(top_k);
USE_OP(squared_l2_distance);
USE_OP(sum);
+USE_OP(reshape);
USE_OP(expand);
namespace paddle {
diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py
index 356e1d8b6fa9173db33a340744afd8d513a83a96..4f68a8953446ffa0510df65c5b214d09b913cff8 100644
--- a/python/paddle/trainer/config_parser.py
+++ b/python/paddle/trainer/config_parser.py
@@ -2034,6 +2034,7 @@ class ParameterReluLayer(LayerBase):
config_assert(input_layer.size % partial_sum == 0,
"a wrong setting for partial_sum")
self.set_layer_size(input_layer.size)
+ self.config.partial_sum = partial_sum
self.create_input_parameter(0, input_layer.size / partial_sum)
diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr
index 64d227565f2b21ff43d4391c682ca90c0f47908e..94ad56cab063df9e6a11bb1c293727fb9dec810f 100644
--- a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr
+++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_prelu_layer.protostr
@@ -14,6 +14,29 @@ layers {
input_layer_name: "input"
input_parameter_name: "___prelu_layer_0__.w0"
}
+ partial_sum: 1
+}
+layers {
+ name: "__prelu_layer_1__"
+ type: "prelu"
+ size: 300
+ active_type: ""
+ inputs {
+ input_layer_name: "input"
+ input_parameter_name: "___prelu_layer_1__.w0"
+ }
+ partial_sum: 1
+}
+layers {
+ name: "__prelu_layer_2__"
+ type: "prelu"
+ size: 300
+ active_type: ""
+ inputs {
+ input_layer_name: "input"
+ input_parameter_name: "___prelu_layer_2__.w0"
+ }
+ partial_sum: 5
}
parameters {
name: "___prelu_layer_0__.w0"
@@ -23,14 +46,32 @@ parameters {
initial_strategy: 0
initial_smart: true
}
+parameters {
+ name: "___prelu_layer_1__.w0"
+ size: 300
+ initial_mean: 0.0
+ initial_std: 0.057735026919
+ initial_strategy: 0
+ initial_smart: true
+}
+parameters {
+ name: "___prelu_layer_2__.w0"
+ size: 60
+ initial_mean: 0.0
+ initial_std: 0.129099444874
+ initial_strategy: 0
+ initial_smart: true
+}
input_layer_names: "input"
-output_layer_names: "__prelu_layer_0__"
+output_layer_names: "__prelu_layer_2__"
sub_models {
name: "root"
layer_names: "input"
layer_names: "__prelu_layer_0__"
+ layer_names: "__prelu_layer_1__"
+ layer_names: "__prelu_layer_2__"
input_layer_names: "input"
- output_layer_names: "__prelu_layer_0__"
+ output_layer_names: "__prelu_layer_2__"
is_recurrent_layer_group: false
}
diff --git a/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py b/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py
index 2e3057f323db22ffc3911cce30ec2e8bb95e3dbe..aae90fab32db78a70c2169ed8fafb930433f4136 100644
--- a/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py
+++ b/python/paddle/trainer_config_helpers/tests/configs/test_prelu_layer.py
@@ -2,5 +2,7 @@ from paddle.trainer_config_helpers import *
data = data_layer(name='input', size=300)
prelu = prelu_layer(input=data)
+prelu = prelu_layer(input=data, partial_sum=1)
+prelu = prelu_layer(input=data, partial_sum=5)
outputs(prelu)
diff --git a/python/paddle/v2/framework/tests/CMakeLists.txt b/python/paddle/v2/framework/tests/CMakeLists.txt
index e141013a693a6e145023dde4adb4815f5df6e1ba..4d7664469e481344cf9eea84688f068b4fb99dee 100644
--- a/python/paddle/v2/framework/tests/CMakeLists.txt
+++ b/python/paddle/v2/framework/tests/CMakeLists.txt
@@ -1,38 +1,5 @@
-py_test(test_net SRCS test_net.py)
-
-py_test(test_scope SRCS test_scope.py)
-
-py_test(test_tensor SRCS test_tensor.py)
-py_test(test_mul_op SRCS test_mul_op.py)
-py_test(test_cos_sim_op SRCS test_cos_sim_op.py)
-
-py_test(test_mean_op SRCS test_mean_op.py)
-
-py_test(test_protobuf SRCS test_protobuf.py)
-
-py_test(test_add_two_op SRCS test_add_two_op.py)
-py_test(test_sigmoid_op SRCS test_sigmoid_op.py)
-py_test(test_softmax_op SRCS test_softmax_op.py)
-py_test(test_cross_entropy_op SRCS test_cross_entropy_op.py)
-py_test(test_gather_op SRCS test_gather_op.py)
-py_test(test_scatter_op SRCS test_scatter_op.py)
-py_test(test_fill_zeros_like_op SRCS test_fill_zeros_like_op.py)
-py_test(test_top_k_op SRCS test_top_k_op.py)
-
-py_test(test_rowwise_add_op SRCS test_rowwise_add_op.py)
-
-py_test(test_default_scope_funcs SRCS test_default_scope_funcs.py)
-
-py_test(test_operator SRCS test_operator.py)
-py_test(test_gaussian_random_op SRCS test_gaussian_random_op.py)
-py_test(test_uniform_random_op SRCS test_uniform_random_op.py)
-py_test(test_recurrent_op SRCS test_recurrent_op.py)
-py_test(test_sgd_op SRCS test_sgd_op.py)
-py_test(test_gradient_checker SRCS test_gradient_checker.py)
-py_test(test_lookup_table SRCS test_lookup_table.py)
-py_test(test_scale_and_identity_op SRCS test_scale_and_identity_op.py)
-py_test(test_sum_op SRCS test_sum_op.py)
-py_test(mnist SRCS mnist.py)
-py_test(test_concat_op SRCS test_concat_op.py)
-py_test(test_squared_l2_distance_op SRCS test_squared_l2_distance_op.py)
-py_test(test_expand_op SRCS test_expand_op.py)
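+# Every test_*.py in this directory is discovered and registered as a py_test
+# target automatically, so new tests no longer need to be listed here by hand.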
+file(GLOB TEST_OPS RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}" "test_*.py")
+string(REPLACE ".py" "" TEST_OPS "${TEST_OPS}")
+foreach(src ${TEST_OPS})
+ py_test(${src} SRCS ${src}.py)
+endforeach()
diff --git a/python/paddle/v2/framework/tests/test_elementwise_mul_op.py b/python/paddle/v2/framework/tests/test_elementwise_mul_op.py
new file mode 100644
index 0000000000000000000000000000000000000000..e268cfddb26721a35ddd2d2cc18f526ff7b2f6d9
--- /dev/null
+++ b/python/paddle/v2/framework/tests/test_elementwise_mul_op.py
@@ -0,0 +1,157 @@
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestElementwiseMulOp_Matrix(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ """ Warning
+ CPU gradient check error!
+ 'X': np.random.random((32,84)).astype("float32"),
+ 'Y': np.random.random((32,84)).astype("float32")
+ """
+ self.inputs = {
+ 'X': np.random.uniform(0.1, 1, [13, 17]).astype("float32"),
+ 'Y': np.random.uniform(0.1, 1, [13, 17]).astype("float32")
+ }
+ self.outputs = {'Out': np.multiply(self.inputs['X'], self.inputs['Y'])}
+
+ def test_check_output(self):
+ self.check_output()
+
+ def test_check_grad_normal(self):
+ self.check_grad(['X', 'Y'], 'Out', max_relative_error=0.1)
+
+ def test_check_grad_ingore_x(self):
+ self.check_grad(
+ ['Y'], 'Out', max_relative_error=0.1, no_grad_set=set("X"))
+
+ def test_check_grad_ingore_y(self):
+ self.check_grad(
+ ['X'], 'Out', max_relative_error=0.1, no_grad_set=set('Y'))
+
+
+class TestElementwiseMulOp_Vector(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ self.inputs = {
+ 'X': np.random.random((32, )).astype("float32"),
+ 'Y': np.random.random((32, )).astype("float32")
+ }
+ self.outputs = {'Out': np.multiply(self.inputs['X'], self.inputs['Y'])}
+
+ def test_check_output(self):
+ self.check_output()
+
+ def test_check_grad_normal(self):
+ self.check_grad(['X', 'Y'], 'Out', max_relative_error=0.1)
+
+ def test_check_grad_ingore_x(self):
+ self.check_grad(
+ ['Y'], 'Out', max_relative_error=0.1, no_grad_set=set("X"))
+
+ def test_check_grad_ingore_y(self):
+ self.check_grad(
+ ['X'], 'Out', max_relative_error=0.1, no_grad_set=set('Y'))
+
+
+class TestElementwiseMulOp_broadcast_0(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ self.inputs = {
+ 'X': np.random.rand(2, 3, 4).astype(np.float32),
+ 'Y': np.random.rand(2).astype(np.float32)
+ }
+
+ self.attrs = {'axis': 0}
+ self.outputs = {
+ 'Out': self.inputs['X'] * self.inputs['Y'].reshape(2, 1, 1)
+ }
+
+ def test_check_output(self):
+ self.check_output()
+
+ def test_check_grad_normal(self):
+ self.check_grad(['X', 'Y'], 'Out', max_relative_error=0.1)
+
+ def test_check_grad_ingore_x(self):
+ self.check_grad(
+ ['Y'], 'Out', max_relative_error=0.1, no_grad_set=set("X"))
+
+ def test_check_grad_ingore_y(self):
+ self.check_grad(
+ ['X'], 'Out', max_relative_error=0.1, no_grad_set=set('Y'))
+
+
+class TestElementwiseMulOp_broadcast_1(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ self.inputs = {
+ 'X': np.random.rand(2, 3, 4).astype(np.float32),
+ 'Y': np.random.rand(3).astype(np.float32)
+ }
+
+ self.attrs = {'axis': 1}
+ self.outputs = {
+ 'Out': self.inputs['X'] * self.inputs['Y'].reshape(1, 3, 1)
+ }
+
+ def test_check_output(self):
+ self.check_output()
+
+ def test_check_grad_normal(self):
+ self.check_grad(['X', 'Y'], 'Out', max_relative_error=0.1)
+
+ def test_check_grad_ingore_x(self):
+ self.check_grad(
+ ['Y'], 'Out', max_relative_error=0.1, no_grad_set=set("X"))
+
+ def test_check_grad_ingore_y(self):
+ self.check_grad(
+ ['X'], 'Out', max_relative_error=0.1, no_grad_set=set('Y'))
+
+
+class TestElementwiseMulOp_broadcast_2(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ self.inputs = {
+ 'X': np.random.rand(2, 3, 4).astype(np.float32),
+ 'Y': np.random.rand(4).astype(np.float32)
+ }
+
+ self.outputs = {
+ 'Out': self.inputs['X'] * self.inputs['Y'].reshape(1, 1, 4)
+ }
+
+ def test_check_output(self):
+ self.check_output()
+
+ def test_check_grad_normal(self):
+ self.check_grad(['X', 'Y'], 'Out', max_relative_error=0.1)
+
+ def test_check_grad_ingore_x(self):
+ self.check_grad(
+ ['Y'], 'Out', max_relative_error=0.1, no_grad_set=set("X"))
+
+ def test_check_grad_ingore_y(self):
+ self.check_grad(
+ ['X'], 'Out', max_relative_error=0.1, no_grad_set=set('Y'))
+
+
+class TestElementwiseMulOp_broadcast_3(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ self.inputs = {
+ 'X': np.random.rand(2, 3, 4, 5).astype(np.float32),
+ 'Y': np.random.rand(3, 4).astype(np.float32)
+ }
+
+ self.attrs = {'axis': 1}
+ self.outputs = {
+ 'Out': self.inputs['X'] * self.inputs['Y'].reshape(1, 3, 4, 1)
+ }
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/python/paddle/v2/framework/tests/mnist.py b/python/paddle/v2/framework/tests/test_mnist.py
similarity index 100%
rename from python/paddle/v2/framework/tests/mnist.py
rename to python/paddle/v2/framework/tests/test_mnist.py
diff --git a/python/paddle/v2/framework/tests/test_reshape_op.py b/python/paddle/v2/framework/tests/test_reshape_op.py
new file mode 100644
index 0000000000000000000000000000000000000000..16bb6bb2af67f7d32a2fafc1cb37412084ec0829
--- /dev/null
+++ b/python/paddle/v2/framework/tests/test_reshape_op.py
@@ -0,0 +1,21 @@
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestReshapeOp(OpTest):
+ def setUp(self):
+ self.op_type = "reshape"
+ self.inputs = {'X': np.random.random((10, 20)).astype("float32")}
+ self.attrs = {'shape': [10 * 20]}
+ self.outputs = {'Out': self.inputs['X'].reshape(self.attrs['shape'])}
+
+ def test_check_output(self):
+ self.check_output()
+
+ def test_check_grad(self):
+ self.check_grad(["X"], "Out")
+
+
+if __name__ == '__main__':
+ unittest.main()