diff --git a/doc/api/v2/fluid/layers.rst b/doc/api/v2/fluid/layers.rst
index 92ca1cf0f836a376387f3e6f2b5a24c78109323d..842f3b18007a55fb538fbe5d5fefc3f4b75ebe14 100644
--- a/doc/api/v2/fluid/layers.rst
+++ b/doc/api/v2/fluid/layers.rst
@@ -312,3 +312,9 @@ sequence_softmax
.. autofunction:: paddle.v2.fluid.layers.sequence_softmax
:noindex:
+
+reduce_sum
+----------
+.. autofunction:: paddle.v2.fluid.layers.reduce_sum
+ :noindex:
+
diff --git a/doc/design/refactor/multi_cpu.md b/doc/design/refactor/multi_cpu.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8d8ee0422acc84835170a44eb83f9b5f0c6bb40
--- /dev/null
+++ b/doc/design/refactor/multi_cpu.md
@@ -0,0 +1,43 @@
+# Design Doc: Execute the Program with Multi CPU
+
+## Abstract
+
+This design doc proposes an approach to run a user-defined Op graph
+on multiple CPUs: an auto transpiler converts the user-defined Op graph
+into a multi-CPU Op graph, and a `ParallelDo` Op executes that graph.
+
+## Transpiler
+
+<img src="src/multi-threads/single-thread@3x.png"/>
+
+After conversion:
+
+<img src="src/multi-threads/multi-threads@3x.png"/>
+
+## Implementation
+
+- `Multi-CPU Transpiler` will convert the graph to a multi-CPU graph,
+  which will be executed with multiple threads.
+- `BlockingCounter` will `Init/Decrement` an atomic counter, and `Wait`
+  blocks until the atomic counter becomes `0`:
+  ```cpp
+  BlockingCounter bc(thread_count);
+  for (int i = 0; i < thread_count; ++i) {
+    thread_pool->Start([&bc] { bc.DecrementCount(); });
+  }
+  bc.Wait();
+  ```
+- `ParallelDo` Operator (a sketch follows this list)
+  - Initialize a thread pool, which is a Singleton.
+  - Take a block id as the input, and run the specified Block on an independent
+    scope in each thread.
+  - Initialize a `BlockingCounter` instance and wait until all threads are done.
+- `Split` Operator will split the input Tensor into a TensorArray.
+- `Merge` merges all the gradients calculated in the different threads
+  with a `mean/sum/max/min...` method, and then runs the Optimizer Op to optimize `W`.
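+
+A minimal sketch of how `ParallelDo` could tie these pieces together, assuming
+the thread pool and `BlockingCounter` above (`thread_count`, `SplitIntoScopes`,
+and `executor` are hypothetical names used only for illustration):
+
+```cpp
+void ParallelDoOp::Run(const Scope &scope, int block_id) {
+  // One independent scope per worker thread.
+  std::vector<Scope *> sub_scopes = SplitIntoScopes(scope, thread_count);
+
+  BlockingCounter bc(thread_count);
+  for (int i = 0; i < thread_count; ++i) {
+    thread_pool->Start([&, i] {
+      executor->Run(block_id, sub_scopes[i]);  // run the Block in this scope
+      bc.DecrementCount();
+    });
+  }
+  bc.Wait();  // all workers done; gradients are ready for `Merge`
+}
+```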
+
+## TODO
+
+- Improve the optimizer stage with multi-threads: we could assign
+  the parameters to different threads and execute the optimizer
+  concurrently.
diff --git a/doc/design/refactor/src/multi-threads.graffle b/doc/design/refactor/src/multi-threads.graffle
new file mode 100644
index 0000000000000000000000000000000000000000..e71173715fff92a0a933d0c7d83599ba948552c6
Binary files /dev/null and b/doc/design/refactor/src/multi-threads.graffle differ
diff --git a/doc/design/refactor/src/multi-threads/multi-threads@3x.png b/doc/design/refactor/src/multi-threads/multi-threads@3x.png
new file mode 100644
index 0000000000000000000000000000000000000000..e40a869987dbbf5019d4cb03c1dab55b74d6c9f9
Binary files /dev/null and b/doc/design/refactor/src/multi-threads/multi-threads@3x.png differ
diff --git a/doc/design/refactor/src/multi-threads/single-thread@3x.png b/doc/design/refactor/src/multi-threads/single-thread@3x.png
new file mode 100644
index 0000000000000000000000000000000000000000..4083aebfdd45af5fbac25fa2c4176bc08c3cb44a
Binary files /dev/null and b/doc/design/refactor/src/multi-threads/single-thread@3x.png differ
diff --git a/doc/design/switch_kernel.md b/doc/design/switch_kernel.md
new file mode 100644
index 0000000000000000000000000000000000000000..1846e5d9f99dd433b44ac6b5ae52893ec8f0d451
--- /dev/null
+++ b/doc/design/switch_kernel.md
@@ -0,0 +1,66 @@
+## Background
+Every operator has many kernels because Fluid supports multiple data types, places, and data layouts. We use the `KernelType` to describe the kernel types that operators can hold.
+
+The `KernelType` is as follows.
+
+```cpp
+struct KernelType {
+ Place place_;
+ DataType data_type_;
+ LayoutType layout_;
+};
+```
+
+The `place_` is a descriptor of the device and the computational library, e.g., `MKLDNNPlace`, `CUDAPlace`.
+
+The `data_type_` is the data type that this kernel performs on, e.g., `FP32`, `INT64`. Note that one kernel may have inputs with different data types; however, it will have one major `data_type`. For example, `cross_entropy` takes `int64` as its label, and `double`/`float` as its input logit and output cost. The major `data_type` of `cross_entropy` is `float`/`double`.
+
+The `layout_` is useful for some computational libraries. One example is that MKLDNN uses many kinds of layouts, such as `nChw8c`. Each kind of layout invokes a different kernel.
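+
+For example, a `conv` operator might hold kernel types that differ in only one
+field; the enum names below are illustrative stand-ins, not the real Fluid
+definitions:
+
+```cpp
+KernelType plain_cpu{CPUPlace(), DataType::FP32, LayoutType::kNCHW};
+KernelType mkl_dnn{MKLDNNPlace(), DataType::FP32, LayoutType::kNChw8c};
+```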
+
+## Problem
+
+Ideally, we would register a kernel for every operator and every kernel type. However, this is impracticable in the following situations.
+
+1. Some operators, like CRF, are complicated and inefficient to implement on GPU. The CRF operator will only have a CPU kernel.
+2. Some operators take too much memory. It is better to force them onto the CPU. However, the rest of the operators in the neural network will be performed on GPU, i.e., a model-parallel problem.
+3. Some layouts and places are particular. One example is that MKLDNN uses `nChw8c`, and no other library uses `nChw8c`.
+
+The problems in these situations are similar. We can formalize them as follows.
+
+We register kernels with types $KT = \{kt_1, kt_2, kt_3, ...\}$ for one operator. The inputs of this operator arrive with kernel type $kt_{?}$, where $kt_{?} \notin KT$. How do we cast the inputs of this operator from $kt_{?}$ to one of the kernel types in $KT$? For example, the CRF operator registers only a CPU kernel; if its inputs live on GPU, they must first be cast to the CPU kernel type.
+
+## Solution
+
+It is clear that transforming the inputs of an operator to adapt to another kernel type is not related to the particular operator. So we should register these transformation methods as global methods.
+
+We can infer a kernel type from the inputs of an operator. We call this kernel type the `actual kernel type`, meaning it is the kernel type implied by the data the operator actually receives.
+
+We can also get a kernel type from 1) the configuration in the operator description (users may want to force `MKL` for the `conv` operator), or 2) the place of the current executor (e.g., the executor is running on GPU). This kernel type is the one on which we expect the operator to be performed, so we call it the `expected kernel type`.
+
+We transform the input data from `actual` to `expected` if the expected kernel type is not the same as the actual kernel type.
+
+The algorithm is described as follows:
+
+```cpp
+using DataTransformationFN = std::function<vec<Tensor>(vec<Tensor>)>;
+using KernelTypePair = std::pair<KernelType, KernelType>;
+
+map<KernelTypePair, DataTransformationFN> g_data_transformation_;
+
+void OpWithKernel::Run() {
+  vec<Tensor> inputs = ...
+ auto actual_kernel_type = GetActualKernelType(inputs);
+
+ // The expected kernel type is related to actual kernel type.
+ // For the most operators, the expected kernel type is as same as
+ // actual kernel type.
+ //
+ // So we pass `actual_kernel_type` as a parameter of
+ // GetExpectedKernelType
+ auto expect_kernel_type = GetExpectedKernelType(actual_kernel_type);
+
+ auto trans = g_data_transformation_[{actual_kernel_type, expect_kernel_type}];
+
+ kernel.run(trans(inputs));
+}
+```
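+
+Since the transformation methods are global, registering one is just inserting
+into the table keyed by an `(actual, expected)` pair. Below is a minimal,
+self-contained C++ sketch of that registry pattern; every type and name here is
+a hypothetical stand-in for the pseudocode above, not the real Fluid API:
+
+```cpp
+#include <functional>
+#include <map>
+#include <utility>
+#include <vector>
+
+using Tensors = std::vector<std::vector<float>>;  // stand-in for vec<Tensor>
+enum class KT { kPlainCPU, kMKLDNN };             // stand-in for KernelType
+using KernelTypePair = std::pair<KT, KT>;
+using DataTransformationFN = std::function<Tensors(const Tensors&)>;
+
+std::map<KernelTypePair, DataTransformationFN> g_data_transformation_;
+
+// Register once at startup; OpWithKernel::Run() then looks the function up
+// by the {actual, expected} pair and applies it to the inputs.
+static bool registered = [] {
+  g_data_transformation_[{KT::kPlainCPU, KT::kMKLDNN}] =
+      [](const Tensors& in) { return in; /* reorder NCHW -> nChw8c here */ };
+  return true;
+}();
+```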
diff --git a/doc/getstarted/build_and_install/docker_install_cn.rst b/doc/getstarted/build_and_install/docker_install_cn.rst
index 1eb06e4182d40c3be20d71e37b34009905eaf9d6..fa1b6a372728ccac128d2e6e79a6514b8884ea3f 100644
--- a/doc/getstarted/build_and_install/docker_install_cn.rst
+++ b/doc/getstarted/build_and_install/docker_install_cn.rst
@@ -128,7 +128,7 @@ PaddlePaddle Book是为用户和开发者制作的一个交互式的Jupyter Note
AVX是一种CPU指令集,可以加速PaddlePaddle的计算。最新的PaddlePaddle Docker镜像默认
是开启AVX编译的,所以,如果您的电脑不支持AVX,需要单独
-`编译 <./build_from_source_cn.rst>`_ PaddlePaddle为no-avx版本。
+`编译 <./build_from_source_cn.html>`_ PaddlePaddle为no-avx版本。
以下指令能检查Linux电脑是否支持AVX:
diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst
index 5a46c598f2248c7912169a9e77b16851230c1d2e..06012bf65e75c32957516f6b7f62e09480871b84 100644
--- a/doc/getstarted/build_and_install/docker_install_en.rst
+++ b/doc/getstarted/build_and_install/docker_install_en.rst
@@ -137,7 +137,7 @@ GPU driver installed before move on.
AVX is a kind of CPU instruction that can accelerate PaddlePaddle's calculations.
The latest PaddlePaddle Docker image turns AVX on by default, so if your
computer doesn't support AVX, you'll probably need to
-`build <./build_from_source_en.rst>`_ with :code:`WITH_AVX=OFF`.
+`build <./build_from_source_en.html>`_ with :code:`WITH_AVX=OFF`.
The following command will tell you whether your computer supports AVX.
diff --git a/doc/howto/dev/new_op_cn.md b/doc/howto/dev/new_op_cn.md
index 757a5840bca4c8028e362789ec95bb03d261d2c1..3109d72001f13a38a93b9ca39d3f8525c8cea9f1 100644
--- a/doc/howto/dev/new_op_cn.md
+++ b/doc/howto/dev/new_op_cn.md
@@ -53,7 +53,7 @@ Kernel实现 | CPU、CUDA共享Kernel实现在`.h`文件中,否则,CPU
```cpp
class MulOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MulOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ MulOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor), 2D tensor of size (M x K)");
AddInput("Y", "(Tensor), 2D tensor of size (K x N)");
@@ -82,7 +82,7 @@ The equation is: Out = X * Y
template <typename AttrType>
class ScaleOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ScaleOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ ScaleOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input tensor of scale operator.").NotInGradient();
AddOutput("Out", "The output tensor of scale operator.").NotInGradient();
diff --git a/doc/howto/dev/new_op_en.md b/doc/howto/dev/new_op_en.md
index fe86936bc12cc2fb88d653429e250f71a478dfb6..7175d8370d6ce08c6d502eb42b8e53252db89bbb 100644
--- a/doc/howto/dev/new_op_en.md
+++ b/doc/howto/dev/new_op_en.md
@@ -50,7 +50,7 @@ First, define `ProtoMaker` to describe the Operator's input, output, and additio
```cpp
class MulOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MulOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ MulOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor), 2D tensor of size (M x K)");
AddInput("Y", "(Tensor), 2D tensor of size (K x N)");
@@ -79,7 +79,7 @@ An additional example [`ScaleOp`](https://github.com/PaddlePaddle/Paddle/blob/de
template <typename AttrType>
class ScaleOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ScaleOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ ScaleOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input tensor of scale operator.").NotInGradient();
AddOutput("Out", "The output tensor of scale operator.").NotInGradient();
diff --git a/doc/howto/usage/cluster/k8s_distributed_cn.md b/doc/howto/usage/cluster/k8s_distributed_cn.md
index 701a9a75d78b53d7dab94529dbd1be382ff0d04e..167089b8074b33e3b094fa3ec8e377630cec42ac 100644
--- a/doc/howto/usage/cluster/k8s_distributed_cn.md
+++ b/doc/howto/usage/cluster/k8s_distributed_cn.md
@@ -2,8 +2,6 @@
前一篇文章介绍了如何在Kubernetes集群上启动一个单机PaddlePaddle训练作业 (Job)。在这篇文章里,我们介绍如何在Kubernetes集群上进行分布式PaddlePaddle训练作业。关于PaddlePaddle的分布式训练,文章 [Cluster Training](http://www.paddlepaddle.org/docs/develop/documentation/zh/howto/usage/cluster/cluster_train_cn.html)介绍了一种通过SSH远程分发任务,进行分布式训练的方法,与此不同的是,本文将介绍在Kubernetes容器管理平台上快速构建PaddlePaddle容器集群,进行分布式训练的方案。
-有关Kubernetes相关概念以及如何搭建和配置Kubernetes集群,可以参考[k8s_basis](./k8s_basis_cn.md)。
-
## 整体方案
在训练之前,用户将配置与训练数据切分好放在分布式文件系统预先分配好的目录中(不同的分布式文件系统,需要使用其制定的方式挂载后并导入数据),训练时,程序从此目录拷贝文件到容器内进行训练,将结果保存到此目录里。整体的结构图如下:
diff --git a/paddle/framework/attribute.cc b/paddle/framework/attribute.cc
index b1e17936417e4ce09bace1d1a5d346d1c9cfa710..b0fd4d2750eb2529706d871947332d39494505cd 100644
--- a/paddle/framework/attribute.cc
+++ b/paddle/framework/attribute.cc
@@ -19,42 +19,42 @@ limitations under the License. */
namespace paddle {
namespace framework {
-Attribute GetAttrValue(const OpDesc::Attr& attr_desc) {
+Attribute GetAttrValue(const proto::OpDesc::Attr& attr_desc) {
switch (attr_desc.type()) {
- case framework::AttrType::BOOLEAN: {
+ case proto::AttrType::BOOLEAN: {
return attr_desc.b();
}
- case framework::AttrType::INT: {
+ case proto::AttrType::INT: {
return attr_desc.i();
}
- case framework::AttrType::FLOAT: {
+ case proto::AttrType::FLOAT: {
return attr_desc.f();
}
- case framework::AttrType::STRING: {
+ case proto::AttrType::STRING: {
return attr_desc.s();
}
- case framework::AttrType::BOOLEANS: {
+ case proto::AttrType::BOOLEANS: {
std::vector<bool> val(attr_desc.bools_size());
for (int i = 0; i < attr_desc.bools_size(); ++i) {
val[i] = attr_desc.bools(i);
}
return val;
}
- case framework::AttrType::INTS: {
+ case proto::AttrType::INTS: {
std::vector<int> val(attr_desc.ints_size());
for (int i = 0; i < attr_desc.ints_size(); ++i) {
val[i] = attr_desc.ints(i);
}
return val;
}
- case framework::AttrType::FLOATS: {
+ case proto::AttrType::FLOATS: {
std::vector<float> val(attr_desc.floats_size());
for (int i = 0; i < attr_desc.floats_size(); ++i) {
val[i] = attr_desc.floats(i);
}
return val;
}
- case framework::AttrType::STRINGS: {
+ case proto::AttrType::STRINGS: {
std::vector<std::string> val(attr_desc.strings_size());
for (int i = 0; i < attr_desc.strings_size(); ++i) {
val[i] = attr_desc.strings(i);
diff --git a/paddle/framework/attribute.h b/paddle/framework/attribute.h
index 0641907d6ff7546df1601d3b0263ff42f4186968..c1c63d9cb13acb195b3bc3b30088f5fa7daf2a3d 100644
--- a/paddle/framework/attribute.h
+++ b/paddle/framework/attribute.h
@@ -27,12 +27,12 @@ limitations under the License. */
namespace paddle {
namespace framework {
template <typename T>
-inline AttrType AttrTypeID() {
+inline proto::AttrType AttrTypeID() {
Attribute tmp = T();
- return static_cast<AttrType>(tmp.which() - 1);
+ return static_cast<proto::AttrType>(tmp.which() - 1);
}
-Attribute GetAttrValue(const OpDesc::Attr& attr_desc);
+Attribute GetAttrValue(const proto::OpDesc::Attr& attr_desc);
class AttrReader {
public:
diff --git a/paddle/framework/backward.cc b/paddle/framework/backward.cc
index faf6e60cbd1bcda9864c12696b336998ea7606b7..f1a577325f1b1ead6b1e08ee35a0ea30ce003bb6 100644
--- a/paddle/framework/backward.cc
+++ b/paddle/framework/backward.cc
@@ -341,7 +341,7 @@ static void CreateGradVarInBlock(
auto* param = block_desc->FindVarRecursive(pname);
auto* grad = block_desc->FindVar(arg);
if (param == nullptr) {
- grad->SetDataType(DataType::FP32);
+ grad->SetDataType(proto::DataType::FP32);
} else {
grad->SetDataType(param->GetDataType());
}
diff --git a/paddle/framework/backward_test.cc b/paddle/framework/backward_test.cc
index 9fe49881d5b740655432f6e83a7886878ceb17e8..1099fffab3129876b3fd3a93cbc4d8a5bd29bea1 100644
--- a/paddle/framework/backward_test.cc
+++ b/paddle/framework/backward_test.cc
@@ -166,7 +166,7 @@ class FillZeroOpMaker : public OpProtoAndCheckerMaker {
class SumOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SumOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ SumOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "the input tensors of sum operator.").AsDuplicable();
AddOutput("Out", "the output tensor of sum operator.");
diff --git a/paddle/framework/block_desc.cc b/paddle/framework/block_desc.cc
index 6a7a07d5cf471a32822cdccf5c616d8748fd1bd7..6b961caebd3c0e2af732ce5ff8f1ae5ef2b042aa 100644
--- a/paddle/framework/block_desc.cc
+++ b/paddle/framework/block_desc.cc
@@ -128,22 +128,22 @@ BlockDescBind *BlockDescBind::ParentBlock() const {
return prog_->MutableBlock(static_cast<size_t>(this->desc_->parent_idx()));
}
-BlockDesc *BlockDescBind::Proto() {
+proto::BlockDesc *BlockDescBind::Proto() {
Flush();
return desc_;
}
-BlockDescBind::BlockDescBind(ProgramDescBind *prog, BlockDesc *desc)
+BlockDescBind::BlockDescBind(ProgramDescBind *prog, proto::BlockDesc *desc)
: prog_(prog), desc_(desc), need_update_(false) {
- for (const VarDesc &var_desc : desc_->vars()) {
+ for (const proto::VarDesc &var_desc : desc_->vars()) {
vars_[var_desc.name()].reset(new VarDescBind(var_desc));
}
- for (const OpDesc &op_desc : desc_->ops()) {
+ for (const proto::OpDesc &op_desc : desc_->ops()) {
ops_.emplace_back(new OpDescBind(op_desc, prog));
}
}
-BlockDescBind::BlockDescBind(const BlockDescBind &other, BlockDesc *desc,
+BlockDescBind::BlockDescBind(const BlockDescBind &other, proto::BlockDesc *desc,
ProgramDescBind *prog)
: prog_(prog), desc_(desc) {
need_update_ = true;
diff --git a/paddle/framework/block_desc.h b/paddle/framework/block_desc.h
index 8e967e5378eb47a7869efb59cc96a271f1cbb9a1..592fe49e075a947956c6e34f03068d4c34530a46 100644
--- a/paddle/framework/block_desc.h
+++ b/paddle/framework/block_desc.h
@@ -36,9 +36,9 @@ class ProgramDescBind;
class BlockDescBind {
public:
- BlockDescBind(ProgramDescBind *prog, BlockDesc *desc);
+ BlockDescBind(ProgramDescBind *prog, proto::BlockDesc *desc);
- BlockDescBind(const BlockDescBind &other, BlockDesc *desc,
+ BlockDescBind(const BlockDescBind &other, proto::BlockDesc *desc,
ProgramDescBind *prog);
~BlockDescBind() {
@@ -88,7 +88,7 @@ class BlockDescBind {
void Flush();
- BlockDesc *Proto();
+ proto::BlockDesc *Proto();
ProgramDescBind *Program() { return this->prog_; }
@@ -97,8 +97,8 @@ class BlockDescBind {
void ClearPBVars();
private:
- ProgramDescBind *prog_; // not_own
- BlockDesc *desc_; // not_own
+ ProgramDescBind *prog_; // not_own
+ proto::BlockDesc *desc_; // not_own
bool need_update_;
std::deque<std::unique_ptr<OpDescBind>> ops_;
diff --git a/paddle/framework/data_type.h b/paddle/framework/data_type.h
index c54d2d4ddf09c445fb25c1fbe8a7498f233d8212..e94ee2ed52bc40f52caf783f971dd0b560534e08 100644
--- a/paddle/framework/data_type.h
+++ b/paddle/framework/data_type.h
@@ -20,7 +20,8 @@
namespace paddle {
namespace framework {
-inline DataType ToDataType(std::type_index type) {
+inline proto::DataType ToDataType(std::type_index type) {
+ using namespace paddle::framework::proto;
if (typeid(float).hash_code() == type.hash_code()) {
return DataType::FP32;
} else if (typeid(double).hash_code() == type.hash_code()) {
@@ -36,7 +37,8 @@ inline DataType ToDataType(std::type_index type) {
}
}
-inline std::type_index ToTypeIndex(DataType type) {
+inline std::type_index ToTypeIndex(proto::DataType type) {
+ using namespace paddle::framework::proto;
switch (type) {
case DataType::FP32:
return typeid(float);
@@ -54,7 +56,8 @@ inline std::type_index ToTypeIndex(DataType type) {
}
template <typename Visitor>
-inline void VisitDataType(DataType type, Visitor visitor) {
+inline void VisitDataType(proto::DataType type, Visitor visitor) {
+ using namespace paddle::framework::proto;
switch (type) {
case DataType::FP32:
visitor.template operator()<float>();
diff --git a/paddle/framework/details/op_registry.h b/paddle/framework/details/op_registry.h
index f91e0e03410c95f84a65f02beed38b7bbfdcaa86..435f0b6b78b19dc433f0b761d9ac800565684b7c 100644
--- a/paddle/framework/details/op_registry.h
+++ b/paddle/framework/details/op_registry.h
@@ -90,7 +90,7 @@ struct OpInfoFiller {
template <typename T>
struct OpInfoFiller<T, kOpProtoAndCheckerMaker> {
void operator()(const char* op_type, OpInfo* info) const {
- info->proto_ = new OpProto;
+ info->proto_ = new proto::OpProto;
info->checker_ = new OpAttrChecker();
auto maker = T(info->proto_, info->checker_);
maker.Validate();
diff --git a/paddle/framework/executor.cc b/paddle/framework/executor.cc
index a8b8a6f8e82525bd9a1f709516483de6f44142dc..ea6b259c09012ab1e5576cedeac54e46d21e40dc 100644
--- a/paddle/framework/executor.cc
+++ b/paddle/framework/executor.cc
@@ -41,20 +41,20 @@ Executor::Executor(const std::vector<platform::Place>& places) {
device_contexts_.swap(borrowed_contexts);
}
-static void CreateTensor(Variable* var, VarDesc::VarType var_type) {
- if (var_type == VarDesc::LOD_TENSOR) {
+static void CreateTensor(Variable* var, proto::VarDesc::VarType var_type) {
+ if (var_type == proto::VarDesc::LOD_TENSOR) {
var->GetMutable<LoDTensor>();
- } else if (var_type == VarDesc::SELECTED_ROWS) {
+ } else if (var_type == proto::VarDesc::SELECTED_ROWS) {
var->GetMutable<SelectedRows>();
- } else if (var_type == VarDesc::FEED_MINIBATCH) {
+ } else if (var_type == proto::VarDesc::FEED_MINIBATCH) {
var->GetMutable<FeedFetchList>();
- } else if (var_type == VarDesc::FETCH_LIST) {
+ } else if (var_type == proto::VarDesc::FETCH_LIST) {
var->GetMutable<FeedFetchList>();
- } else if (var_type == VarDesc::STEP_SCOPES) {
+ } else if (var_type == proto::VarDesc::STEP_SCOPES) {
var->GetMutable<std::vector<framework::Scope>>();
- } else if (var_type == VarDesc::LOD_RANK_TABLE) {
+ } else if (var_type == proto::VarDesc::LOD_RANK_TABLE) {
var->GetMutable<LoDRankTable>();
- } else if (var_type == VarDesc::LOD_TENSOR_ARRAY) {
+ } else if (var_type == proto::VarDesc::LOD_TENSOR_ARRAY) {
var->GetMutable<LoDTensorArray>();
} else {
PADDLE_THROW(
diff --git a/paddle/framework/framework.proto b/paddle/framework/framework.proto
index f1fc4529e15502927560eefd74110f6ca7eab4a9..4f2746e4b86ee5fe095897ff6ef9d3f6473e8a14 100644
--- a/paddle/framework/framework.proto
+++ b/paddle/framework/framework.proto
@@ -14,7 +14,7 @@ limitations under the License. */
syntax = "proto2";
option optimize_for = LITE_RUNTIME;
-package paddle.framework;
+package paddle.framework.proto;
enum AttrType {
INT = 0;
diff --git a/paddle/framework/lod_rank_table.cc b/paddle/framework/lod_rank_table.cc
index 1c2fba70c8ab0827ba6d1563f08cd0820650822e..17d524c09276fc0eb166925bd79bc0bdfcead195 100644
--- a/paddle/framework/lod_rank_table.cc
+++ b/paddle/framework/lod_rank_table.cc
@@ -46,4 +46,13 @@ void LoDRankTable::Reset(const LoD& lod, size_t level) {
}
} // namespace framework
+
+std::ostream& operator<<(std::ostream& out,
+ const framework::LoDRankTable& table) {
+ out << "NumOfSequence " << table.items().size() << "\n";
+ for (auto& each_item : table.items()) {
+ out << "\tSeq #" << each_item.index << ", Len=" << each_item.length << "\n";
+ }
+ return out;
+}
} // namespace paddle
diff --git a/paddle/framework/lod_rank_table.h b/paddle/framework/lod_rank_table.h
index 9faa3a4d7bdc55ab7b24e31f5e5434dacc0a4b36..d3007d3d7379a59b32465cbd55780c6268e0e4a8 100644
--- a/paddle/framework/lod_rank_table.h
+++ b/paddle/framework/lod_rank_table.h
@@ -13,6 +13,7 @@
limitations under the License. */
#pragma once
+#include <iostream>
#include "paddle/framework/lod_tensor.h"
namespace paddle {
@@ -52,4 +53,8 @@ class LoDRankTable {
};
} // namespace framework
+
+std::ostream& operator<<(std::ostream& out,
+ const framework::LoDRankTable& table);
+
} // namespace paddle
diff --git a/paddle/framework/lod_tensor.cc b/paddle/framework/lod_tensor.cc
index fdf6de4babff3bb3c253aaf516636882237e6faf..465f8c62b5fe2efd549f68bb3a9823d299ba5393 100644
--- a/paddle/framework/lod_tensor.cc
+++ b/paddle/framework/lod_tensor.cc
@@ -197,7 +197,7 @@ void SerializeToStream(std::ostream &os, const LoDTensor &tensor,
{ // the 2nd field, tensor description
// int32_t size
// void* protobuf message
- framework::TensorDesc desc;
+ proto::TensorDesc desc;
desc.set_data_type(framework::ToDataType(tensor.type()));
auto dims = framework::vectorize(tensor.dims());
auto *pb_dims = desc.mutable_dims();
@@ -262,7 +262,7 @@ void DeserializeFromStream(std::istream &is, LoDTensor *tensor) {
uint32_t version;
is.read(reinterpret_cast<char *>(&version), sizeof(version));
PADDLE_ENFORCE_EQ(version, 0U, "Only version 0 is supported");
- framework::TensorDesc desc;
+ proto::TensorDesc desc;
{ // int32_t size
// proto buffer
int32_t size;
@@ -281,16 +281,16 @@ void DeserializeFromStream(std::istream &is, LoDTensor *tensor) {
void *buf;
platform::Place cpu = platform::CPUPlace();
switch (desc.data_type()) {
- case framework::FP32:
+ case proto::FP32:
buf = tensor->mutable_data<float>(cpu);
break;
- case framework::FP64:
+ case proto::FP64:
buf = tensor->mutable_data<double>(cpu);
break;
- case framework::INT32:
+ case proto::INT32:
buf = tensor->mutable_data<int>(cpu);
break;
- case framework::INT64:
+ case proto::INT64:
buf = tensor->mutable_data<int64_t>(cpu);
break;
default:
diff --git a/paddle/framework/lod_tensor.h b/paddle/framework/lod_tensor.h
index 9411c96aea4c10ebf921cc3e3b442769c8acbefa..0923c52a0ad2fe10cea760df20c99021984ad39d 100644
--- a/paddle/framework/lod_tensor.h
+++ b/paddle/framework/lod_tensor.h
@@ -184,6 +184,18 @@ LoDTensor LodExpand(const LoDTensor& source, const LoD& lod, size_t level,
return tensor;
}
+// Get the absolute offset of a lod[start_level][start_idx:end_idx] and
+// relative length of details for every levels(i.e., [start_level: ]).
+//
+// For example,
+// lod = [[0, 3, 4, 8], [0, 9, 10, 11, 13, 17, 19, 22, 24]]
+// start_level = 0
+// start_idx = 1
+// end_idx = 3
+//
+// Returns:
+// LoD = [[1, 4], [2, 4, 2, 3, 2]]
+// pair = {11, 24}
std::pair<LoD, std::pair<size_t, size_t>> GetSubLoDAndAbsoluteOffset(
const LoD& lod, size_t start_idx, size_t end_idx, size_t start_level);
diff --git a/paddle/framework/op_desc.cc b/paddle/framework/op_desc.cc
index 7ba1e3e4e3270f4cd88e41e245f24c3cfc8aaab7..7af5b687273d846628e9e7e2c07107a50d47acb3 100644
--- a/paddle/framework/op_desc.cc
+++ b/paddle/framework/op_desc.cc
@@ -58,11 +58,11 @@ class CompileTimeInferShapeContext : public InferShapeContext {
PADDLE_ENFORCE_LT(j, Outputs(out).size());
auto *in_var = block_.FindVarRecursive(Inputs(in)[i]);
auto *out_var = block_.FindVarRecursive(Outputs(out)[j]);
- if (in_var->GetType() != VarDesc::LOD_TENSOR) {
+ if (in_var->GetType() != proto::VarDesc::LOD_TENSOR) {
VLOG(3) << "input " << in << " is not LodTensor";
return;
}
- PADDLE_ENFORCE_EQ(in_var->GetType(), VarDesc::LOD_TENSOR,
+ PADDLE_ENFORCE_EQ(in_var->GetType(), proto::VarDesc::LOD_TENSOR,
"The %d-th output of Output(%s) must be LoDTensor.", j,
out);
out_var->SetLoDLevel(in_var->GetLodLevel());
@@ -70,7 +70,7 @@ class CompileTimeInferShapeContext : public InferShapeContext {
bool IsRuntime() const override;
protected:
- VarDesc::VarType GetVarType(const std::string &name) const override;
+ proto::VarDesc::VarType GetVarType(const std::string &name) const override;
DDim GetDim(const std::string &name) const override;
@@ -90,12 +90,12 @@ OpDescBind::OpDescBind(const std::string &type, const VariableNameMap &inputs,
need_update_ = true;
}
-OpDescBind::OpDescBind(const OpDesc &desc, ProgramDescBind *prog)
+OpDescBind::OpDescBind(const proto::OpDesc &desc, ProgramDescBind *prog)
: desc_(desc), need_update_(false) {
// restore inputs_
int input_size = desc_.inputs_size();
for (int i = 0; i < input_size; ++i) {
- const OpDesc::Var &var = desc_.inputs(i);
+ const proto::OpDesc::Var &var = desc_.inputs(i);
std::vector<std::string> &args = inputs_[var.parameter()];
int argu_size = var.arguments_size();
args.reserve(argu_size);
@@ -106,7 +106,7 @@ OpDescBind::OpDescBind(const OpDesc &desc, ProgramDescBind *prog)
// restore outputs_
int output_size = desc_.outputs_size();
for (int i = 0; i < output_size; ++i) {
- const OpDesc::Var &var = desc_.outputs(i);
+ const proto::OpDesc::Var &var = desc_.outputs(i);
std::vector<std::string> &args = outputs_[var.parameter()];
int argu_size = var.arguments_size();
args.reserve(argu_size);
@@ -115,9 +115,9 @@ OpDescBind::OpDescBind(const OpDesc &desc, ProgramDescBind *prog)
}
}
// restore attrs_
- for (const OpDesc::Attr &attr : desc_.attrs()) {
+ for (const proto::OpDesc::Attr &attr : desc_.attrs()) {
std::string attr_name = attr.name();
- if (attr.type() != AttrType::BLOCK) {
+ if (attr.type() != proto::AttrType::BLOCK) {
attrs_[attr_name] = GetAttrValue(attr);
} else {
auto bid = attr.block_idx();
@@ -126,7 +126,7 @@ OpDescBind::OpDescBind(const OpDesc &desc, ProgramDescBind *prog)
}
}
-OpDesc *OpDescBind::Proto() {
+proto::OpDesc *OpDescBind::Proto() {
Flush();
return &desc_;
}
@@ -175,10 +175,10 @@ void OpDescBind::SetOutput(const std::string ¶m_name,
this->outputs_[param_name] = args;
}
-AttrType OpDescBind::GetAttrType(const std::string &name) const {
+proto::AttrType OpDescBind::GetAttrType(const std::string &name) const {
auto it = attrs_.find(name);
PADDLE_ENFORCE(it != attrs_.end(), "Attribute %s is not found", name);
- return static_cast<AttrType>(it->second.which() - 1);
+ return static_cast<proto::AttrType>(it->second.which() - 1);
}
std::vector<std::string> OpDescBind::AttrNames() const {
@@ -253,8 +253,8 @@ void OpDescBind::RenameInput(const std::string &old_name,
}
struct SetAttrDescVisitor : public boost::static_visitor<void> {
- explicit SetAttrDescVisitor(OpDesc::Attr *attr) : attr_(attr) {}
- mutable OpDesc::Attr *attr_;
+ explicit SetAttrDescVisitor(proto::OpDesc::Attr *attr) : attr_(attr) {}
+ mutable proto::OpDesc::Attr *attr_;
void operator()(int v) const { attr_->set_i(v); }
void operator()(float v) const { attr_->set_f(v); }
void operator()(const std::string &v) const { attr_->set_s(v); }
@@ -272,7 +272,9 @@ struct SetAttrDescVisitor : public boost::static_visitor<void> {
void operator()(const std::vector<bool> &v) const {
VectorToRepeated(v, attr_->mutable_bools());
}
- void operator()(BlockDesc *desc) const { attr_->set_block_idx(desc->idx()); }
+ void operator()(proto::BlockDesc *desc) const {
+ attr_->set_block_idx(desc->idx());
+ }
void operator()(boost::blank) const { PADDLE_THROW("Unexpected branch"); }
};
@@ -297,7 +299,7 @@ void OpDescBind::Flush() {
auto *attr_desc = desc_.add_attrs();
attr_desc->set_name(attr.first);
attr_desc->set_type(
- static_cast<AttrType>(attr.second.which() - 1));
+ static_cast<proto::AttrType>(attr.second.which() - 1));
SetAttrDescVisitor visitor(attr_desc);
boost::apply_visitor(visitor, attr.second);
}
@@ -375,7 +377,7 @@ void OpDescBind::InferVarType(BlockDescBind *block) const {
for (auto &out_pair : this->outputs_) {
for (auto &out_var_name : out_pair.second) {
block->FindRecursiveOrCreateVar(out_var_name)
- ->SetType(VarDesc::LOD_TENSOR);
+ ->SetType(proto::VarDesc::LOD_TENSOR);
}
}
}
@@ -484,7 +486,7 @@ void CompileTimeInferShapeContext::SetDim(const std::string &name,
}
bool CompileTimeInferShapeContext::IsRuntime() const { return false; }
-VarDesc::VarType CompileTimeInferShapeContext::GetVarType(
+proto::VarDesc::VarType CompileTimeInferShapeContext::GetVarType(
const std::string &name) const {
return block_.FindVarRecursive(name)->GetType();
}
diff --git a/paddle/framework/op_desc.h b/paddle/framework/op_desc.h
index da032319afa775571d3942bf6ae415db7d233735..0f0f126f9859e729aaf83f8bd13c60f551f0c1d5 100644
--- a/paddle/framework/op_desc.h
+++ b/paddle/framework/op_desc.h
@@ -33,9 +33,9 @@ class OpDescBind {
OpDescBind(const std::string &type, const VariableNameMap &inputs,
const VariableNameMap &outputs, const AttributeMap &attrs);
- OpDescBind(const OpDesc &desc, ProgramDescBind *prog);
+ OpDescBind(const proto::OpDesc &desc, ProgramDescBind *prog);
- OpDesc *Proto();
+ proto::OpDesc *Proto();
std::string Type() const { return desc_.type(); }
@@ -59,7 +59,7 @@ class OpDescBind {
return attrs_.find(name) != attrs_.end();
}
- AttrType GetAttrType(const std::string &name) const;
+ proto::AttrType GetAttrType(const std::string &name) const;
std::vector<std::string> AttrNames() const;
@@ -126,7 +126,7 @@ class OpDescBind {
return ret_val;
}
- OpDesc desc_;
+ proto::OpDesc desc_;
VariableNameMap inputs_;
VariableNameMap outputs_;
AttributeMap attrs_;
diff --git a/paddle/framework/op_info.h b/paddle/framework/op_info.h
index d3b1a3b5fa2cf8f6a9571e92a319f3757666657e..7772d6e745c2207024863d3dd5cbef052358272e 100644
--- a/paddle/framework/op_info.h
+++ b/paddle/framework/op_info.h
@@ -34,7 +34,7 @@ class InferShapeBase {
struct OpInfo {
OpCreator creator_;
GradOpMakerFN grad_op_maker_;
- OpProto* proto_{nullptr};
+ proto::OpProto* proto_{nullptr};
OpAttrChecker* checker_{nullptr};
InferVarTypeFN infer_var_type_;
InferShapeFN infer_shape_;
@@ -43,7 +43,7 @@ struct OpInfo {
return proto_ != nullptr && checker_ != nullptr;
}
- const OpProto& Proto() const {
+ const proto::OpProto& Proto() const {
PADDLE_ENFORCE_NOT_NULL(proto_, "Operator Proto has not been registered");
PADDLE_ENFORCE(proto_->IsInitialized(),
"Operator Proto must be initialized in op info");
diff --git a/paddle/framework/op_proto_maker.h b/paddle/framework/op_proto_maker.h
index 44e8ab16895cc604f85bb83e240eab55739f8ba0..efd3a5ca535403d8d46a73adc899d914623b53e4 100644
--- a/paddle/framework/op_proto_maker.h
+++ b/paddle/framework/op_proto_maker.h
@@ -22,6 +22,8 @@ namespace framework {
// this class not only make proto but also init attribute checkers.
class OpProtoAndCheckerMaker {
public:
+ using OpProto = proto::OpProto;
+ using OpAttrChecker = framework::OpAttrChecker;
OpProtoAndCheckerMaker(OpProto* proto, OpAttrChecker* op_checker)
: proto_(proto), op_checker_(op_checker) {}
@@ -80,7 +82,7 @@ class OpProtoAndCheckerMaker {
class NOPMaker : public OpProtoAndCheckerMaker {
public:
- NOPMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ NOPMaker(OpProto* proto, framework::OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {}
};
diff --git a/paddle/framework/op_proto_maker_test.cc b/paddle/framework/op_proto_maker_test.cc
index 988a14cf4de8fdf052ca7e8c41bff0c05ba2daaa..f16cb6fa3aa095a6d9737d84c7ce58f385a7072b 100644
--- a/paddle/framework/op_proto_maker_test.cc
+++ b/paddle/framework/op_proto_maker_test.cc
@@ -18,7 +18,7 @@ limitations under the License. */
class TestAttrProtoMaker : public paddle::framework::OpProtoAndCheckerMaker {
public:
- TestAttrProtoMaker(paddle::framework::OpProto* proto,
+ TestAttrProtoMaker(paddle::framework::proto::OpProto* proto,
paddle::framework::OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddAttr("scale", "scale of test op");
@@ -27,7 +27,7 @@ class TestAttrProtoMaker : public paddle::framework::OpProtoAndCheckerMaker {
};
TEST(ProtoMaker, DuplicatedAttr) {
- paddle::framework::OpProto op_proto;
+ paddle::framework::proto::OpProto op_proto;
paddle::framework::OpAttrChecker op_checker;
auto proto_maker = TestAttrProtoMaker(&op_proto, &op_checker);
ASSERT_THROW(proto_maker.Validate(), paddle::platform::EnforceNotMet);
@@ -35,7 +35,7 @@ TEST(ProtoMaker, DuplicatedAttr) {
class TestInOutProtoMaker : public paddle::framework::OpProtoAndCheckerMaker {
public:
- TestInOutProtoMaker(paddle::framework::OpProto* proto,
+ TestInOutProtoMaker(paddle::framework::proto::OpProto* proto,
paddle::framework::OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("input", "input of test op");
@@ -44,7 +44,7 @@ class TestInOutProtoMaker : public paddle::framework::OpProtoAndCheckerMaker {
};
TEST(ProtoMaker, DuplicatedInOut) {
- paddle::framework::OpProto op_proto;
+ paddle::framework::proto::OpProto op_proto;
paddle::framework::OpAttrChecker op_checker;
auto proto_maker = TestInOutProtoMaker(&op_proto, &op_checker);
ASSERT_THROW(proto_maker.Validate(), paddle::platform::EnforceNotMet);
diff --git a/paddle/framework/op_registry.cc b/paddle/framework/op_registry.cc
index 8dedd873aad648174b770b84e5232cd17b577e72..f202c0b27a7d4d7e7d8fe7b578d2d98bfa978f9b 100644
--- a/paddle/framework/op_registry.cc
+++ b/paddle/framework/op_registry.cc
@@ -31,7 +31,8 @@ std::unique_ptr<OperatorBase> OpRegistry::CreateOp(
}
static VariableNameMap ConvertOpDescVarsToVarNameMap(
- const google::protobuf::RepeatedPtrField<OpDesc::Var>& op_desc_vars) {
+ const google::protobuf::RepeatedPtrField<proto::OpDesc::Var>&
+ op_desc_vars) {
VariableNameMap ret_val;
for (auto& var : op_desc_vars) {
auto& var_names = ret_val[var.parameter()];
@@ -43,7 +44,8 @@ static VariableNameMap ConvertOpDescVarsToVarNameMap(
return ret_val;
}
-std::unique_ptr<OperatorBase> OpRegistry::CreateOp(const OpDesc& op_desc) {
+std::unique_ptr<OperatorBase> OpRegistry::CreateOp(
+ const proto::OpDesc& op_desc) {
VLOG(1) << "CreateOp directly from OpDesc is deprecated. It should only be"
"used in unit tests. Use CreateOp(const OpDescBind& op_desc) "
"instead.";
diff --git a/paddle/framework/op_registry.h b/paddle/framework/op_registry.h
index b29238432b05d81e984e1f4c269a00b01a4229cc..7367e0e637a6d308043f35de628913d060fb379c 100644
--- a/paddle/framework/op_registry.h
+++ b/paddle/framework/op_registry.h
@@ -77,7 +77,7 @@ class OpRegistry {
const VariableNameMap& outputs,
AttributeMap attrs);
- static std::unique_ptr<OperatorBase> CreateOp(const OpDesc& op_desc);
+ static std::unique_ptr<OperatorBase> CreateOp(const proto::OpDesc& op_desc);
static std::unique_ptr<OperatorBase> CreateOp(const OpDescBind& op_desc);
};
diff --git a/paddle/framework/op_registry_test.cc b/paddle/framework/op_registry_test.cc
index b860fe6cac773d1e85adecc43f5dfec42b6c7661..27713e5cbffe95e0ae31ac94a70c64deb53c4ffb 100644
--- a/paddle/framework/op_registry_test.cc
+++ b/paddle/framework/op_registry_test.cc
@@ -51,7 +51,7 @@ class MyTestOpProtoAndCheckerMaker : public OpProtoAndCheckerMaker {
static void BuildVar(const std::string& param_name,
std::initializer_list<const char*> arguments,
- paddle::framework::OpDesc::Var* var) {
+ paddle::framework::proto::OpDesc::Var* var) {
var->set_parameter(param_name);
for (auto& arg_name : arguments) {
var->add_arguments(arg_name);
@@ -63,7 +63,7 @@ REGISTER_OP_WITHOUT_GRADIENT(my_test_op, paddle::framework::MyTestOp,
paddle::framework::MyTestOpProtoAndCheckerMaker);
TEST(OpRegistry, CreateOp) {
- paddle::framework::OpDesc op_desc;
+ paddle::framework::proto::OpDesc op_desc;
op_desc.set_type("cos_sim");
BuildVar("input", {"aa"}, op_desc.add_inputs());
BuildVar("output", {"bb"}, op_desc.add_outputs());
@@ -71,7 +71,7 @@ TEST(OpRegistry, CreateOp) {
float scale = 3.3;
auto attr = op_desc.mutable_attrs()->Add();
attr->set_name("scale");
- attr->set_type(paddle::framework::AttrType::FLOAT);
+ attr->set_type(paddle::framework::proto::AttrType::FLOAT);
attr->set_f(scale);
auto op = paddle::framework::OpRegistry::CreateOp(op_desc);
@@ -83,14 +83,14 @@ TEST(OpRegistry, CreateOp) {
}
TEST(OpRegistry, IllegalAttr) {
- paddle::framework::OpDesc op_desc;
+ paddle::framework::proto::OpDesc op_desc;
op_desc.set_type("cos_sim");
BuildVar("input", {"aa"}, op_desc.add_inputs());
BuildVar("output", {"bb"}, op_desc.add_outputs());
auto attr = op_desc.mutable_attrs()->Add();
attr->set_name("scale");
- attr->set_type(paddle::framework::AttrType::FLOAT);
+ attr->set_type(paddle::framework::proto::AttrType::FLOAT);
attr->set_f(-2.0);
bool caught = false;
@@ -108,7 +108,7 @@ TEST(OpRegistry, IllegalAttr) {
}
TEST(OpRegistry, DefaultValue) {
- paddle::framework::OpDesc op_desc;
+ paddle::framework::proto::OpDesc op_desc;
op_desc.set_type("cos_sim");
BuildVar("input", {"aa"}, op_desc.add_inputs());
BuildVar("output", {"bb"}, op_desc.add_outputs());
@@ -123,7 +123,7 @@ TEST(OpRegistry, DefaultValue) {
}
TEST(OpRegistry, CustomChecker) {
- paddle::framework::OpDesc op_desc;
+ paddle::framework::proto::OpDesc op_desc;
op_desc.set_type("my_test_op");
BuildVar("input", {"ii"}, op_desc.add_inputs());
BuildVar("output", {"oo"}, op_desc.add_outputs());
@@ -145,7 +145,7 @@ TEST(OpRegistry, CustomChecker) {
// set 'test_attr' set to an illegal value
auto attr = op_desc.mutable_attrs()->Add();
attr->set_name("test_attr");
- attr->set_type(paddle::framework::AttrType::INT);
+ attr->set_type(paddle::framework::proto::AttrType::INT);
attr->set_i(3);
caught = false;
try {
@@ -164,7 +164,7 @@ TEST(OpRegistry, CustomChecker) {
op_desc.mutable_attrs()->Clear();
attr = op_desc.mutable_attrs()->Add();
attr->set_name("test_attr");
- attr->set_type(paddle::framework::AttrType::INT);
+ attr->set_type(paddle::framework::proto::AttrType::INT);
attr->set_i(4);
auto op = paddle::framework::OpRegistry::CreateOp(op_desc);
paddle::platform::CPUDeviceContext dev_ctx;
diff --git a/paddle/framework/operator.cc b/paddle/framework/operator.cc
index e83d7547831744333d6a9c36e842d840a2a0dc03..0e58c0b5707516bd1274181df568d08ff504c152 100644
--- a/paddle/framework/operator.cc
+++ b/paddle/framework/operator.cc
@@ -377,7 +377,7 @@ class RuntimeInferShapeContext : public InferShapeContext {
}
}
- VarDesc::VarType GetVarType(const std::string& name) const override {
+ proto::VarDesc::VarType GetVarType(const std::string& name) const override {
auto* var = scope_.FindVar(name);
return ToVarType(var->Type());
}
@@ -417,7 +417,7 @@ OpKernelType OperatorWithKernel::GetKernelType(
const ExecutionContext& ctx) const {
return OpKernelType(IndicateDataType(ctx), ctx.GetPlace());
}
-DataType OperatorWithKernel::IndicateDataType(
+proto::DataType OperatorWithKernel::IndicateDataType(
const ExecutionContext& ctx) const {
auto& scope = ctx.scope();
int data_type = -1;
@@ -443,7 +443,7 @@ DataType OperatorWithKernel::IndicateDataType(
}
}
PADDLE_ENFORCE(data_type != -1, "DataType should be indicated by input");
- return static_cast<DataType>(data_type);
+ return static_cast<proto::DataType>(data_type);
}
} // namespace framework
diff --git a/paddle/framework/operator.h b/paddle/framework/operator.h
index e60dbfc313f732120f6879fd6fd19ca8abc06813..3207360cbaca4e3b96dfe933c67aaa70c59a6044 100644
--- a/paddle/framework/operator.h
+++ b/paddle/framework/operator.h
@@ -358,12 +358,13 @@ struct OpKernelType {
};
platform::Place place_;
- DataType data_type_;
+ proto::DataType data_type_;
- OpKernelType(DataType data_type, platform::Place place)
+ OpKernelType(proto::DataType data_type, platform::Place place)
: place_(place), data_type_(data_type) {}
- OpKernelType(DataType data_type, const platform::DeviceContext& dev_ctx)
+ OpKernelType(proto::DataType data_type,
+ const platform::DeviceContext& dev_ctx)
: place_(dev_ctx.GetPlace()), data_type_(data_type) {}
bool operator==(const OpKernelType& o) const {
@@ -409,7 +410,7 @@ class OperatorWithKernel : public OperatorBase {
private:
// indicate kernel DataType by input data. Defaultly all input data must be
// same.
- DataType IndicateDataType(const ExecutionContext& ctx) const;
+ proto::DataType IndicateDataType(const ExecutionContext& ctx) const;
};
std::ostream& operator<<(std::ostream& os, const OpKernelType& kernel_key);
diff --git a/paddle/framework/operator_test.cc b/paddle/framework/operator_test.cc
index b678178454ff63e4217f0be7a9938a9ba183cda4..05a465152204c8e9f9dbd75d0bfb21ea44d25cf1 100644
--- a/paddle/framework/operator_test.cc
+++ b/paddle/framework/operator_test.cc
@@ -58,7 +58,7 @@ class OpeWithoutKernelTestProtoAndCheckerMaker : public OpProtoAndCheckerMaker {
static void BuildVar(const std::string& param_name,
std::initializer_list<const char*> arguments,
- paddle::framework::OpDesc::Var* var) {
+ paddle::framework::proto::OpDesc::Var* var) {
var->set_parameter(param_name);
for (auto& arg_name : arguments) {
*var->mutable_arguments()->Add() = arg_name;
@@ -70,14 +70,14 @@ REGISTER_OP_WITHOUT_GRADIENT(
paddle::framework::OpeWithoutKernelTestProtoAndCheckerMaker);
TEST(OperatorBase, all) {
- paddle::framework::OpDesc op_desc;
+ paddle::framework::proto::OpDesc op_desc;
op_desc.set_type("test_operator");
BuildVar("input", {"IN1"}, op_desc.add_inputs());
BuildVar("output", {"OUT1"}, op_desc.add_outputs());
auto attr = op_desc.mutable_attrs()->Add();
attr->set_name("scale");
- attr->set_type(paddle::framework::AttrType::FLOAT);
+ attr->set_type(paddle::framework::proto::AttrType::FLOAT);
attr->set_f(3.14);
paddle::platform::CPUDeviceContext device_context;
@@ -115,7 +115,7 @@ class OpWithKernelTest : public OperatorWithKernel {
protected:
void InferShape(framework::InferShapeContext* ctx) const override {}
OpKernelType GetKernelType(const ExecutionContext& ctx) const override {
- return OpKernelType(DataType::FP32, ctx.GetPlace());
+ return OpKernelType(proto::DataType::FP32, ctx.GetPlace());
}
};
@@ -195,14 +195,14 @@ REGISTER_OP_CPU_KERNEL(op_with_kernel,
// test with single input
TEST(OpKernel, all) {
- paddle::framework::OpDesc op_desc;
+ paddle::framework::proto::OpDesc op_desc;
op_desc.set_type("op_with_kernel");
BuildVar("x", {"IN1"}, op_desc.add_inputs());
BuildVar("y", {"OUT1"}, op_desc.add_outputs());
auto attr = op_desc.mutable_attrs()->Add();
attr->set_name("scale");
- attr->set_type(paddle::framework::AttrType::FLOAT);
+ attr->set_type(paddle::framework::proto::AttrType::FLOAT);
attr->set_f(3.14);
paddle::platform::CPUDeviceContext cpu_device_context;
@@ -224,7 +224,7 @@ REGISTER_OP_CPU_KERNEL(op_multi_inputs_with_kernel,
TEST(OpKernel, multi_inputs) {
using namespace paddle::framework;
- OpDesc op_desc;
+ proto::OpDesc op_desc;
op_desc.set_type("op_multi_inputs_with_kernel");
BuildVar("xs", {"x0", "x1", "x2"}, op_desc.add_inputs());
BuildVar("k", {"k0"}, op_desc.add_inputs());
@@ -232,7 +232,7 @@ TEST(OpKernel, multi_inputs) {
auto attr = op_desc.mutable_attrs()->Add();
attr->set_name("scale");
- attr->set_type(paddle::framework::AttrType::FLOAT);
+ attr->set_type(paddle::framework::proto::AttrType::FLOAT);
attr->set_f(3.14);
paddle::platform::CPUDeviceContext cpu_device_context;
diff --git a/paddle/framework/program_desc.cc b/paddle/framework/program_desc.cc
index 4af8d94563ad0ecf6fcc6fe0575b0f69006a9a2d..30a265ccac1d4287d5cb0d83386860425a1af4c0 100644
--- a/paddle/framework/program_desc.cc
+++ b/paddle/framework/program_desc.cc
@@ -26,7 +26,7 @@ BlockDescBind *ProgramDescBind::AppendBlock(const BlockDescBind &parent) {
return blocks_.back().get();
}
-ProgramDesc *ProgramDescBind::Proto() {
+proto::ProgramDesc *ProgramDescBind::Proto() {
for (auto &block : blocks_) {
block->Flush();
}
@@ -49,7 +49,7 @@ ProgramDescBind::ProgramDescBind(const ProgramDescBind &o) {
}
}
-ProgramDescBind::ProgramDescBind(const ProgramDesc &desc) {
+ProgramDescBind::ProgramDescBind(const proto::ProgramDesc &desc) {
desc_ = desc;
for (auto &block_desc : *desc_.mutable_blocks()) {
blocks_.emplace_back(new BlockDescBind(this, &block_desc));
diff --git a/paddle/framework/program_desc.h b/paddle/framework/program_desc.h
index b1cb086de4345902482d8254b8aeec041ecf81bc..affec491ca598a66129f224a35cbf6019859738c 100644
--- a/paddle/framework/program_desc.h
+++ b/paddle/framework/program_desc.h
@@ -29,7 +29,7 @@ class ProgramDescBind {
public:
ProgramDescBind();
- explicit ProgramDescBind(const ProgramDesc &desc);
+ explicit ProgramDescBind(const proto::ProgramDesc &desc);
ProgramDescBind(const ProgramDescBind &o);
@@ -43,10 +43,10 @@ class ProgramDescBind {
size_t Size() const { return blocks_.size(); }
- ProgramDesc *Proto();
+ proto::ProgramDesc *Proto();
private:
- ProgramDesc desc_;
+ proto::ProgramDesc desc_;
std::vector<std::unique_ptr<BlockDescBind>> blocks_;
};
diff --git a/paddle/framework/program_desc_test.cc b/paddle/framework/program_desc_test.cc
index 83e7286e0ec3639fa589b0958922543a3ba16a00..c4fb28f2cc9bd35439690b1a97692bd13671d3e1 100644
--- a/paddle/framework/program_desc_test.cc
+++ b/paddle/framework/program_desc_test.cc
@@ -22,15 +22,15 @@ TEST(ProgramDesc, copy_ctor) {
ProgramDescBind program;
auto* global_block = program.MutableBlock(0);
auto* x = global_block->Var("X");
- x->SetType(VarDesc_VarType_LOD_TENSOR);
+ x->SetType(proto::VarDesc_VarType_LOD_TENSOR);
x->SetLoDLevel(0);
- x->SetDataType(FP32);
+ x->SetDataType(proto::FP32);
x->SetShape({1000, 784});
auto* y = global_block->Var("Y");
- y->SetType(VarDesc_VarType_LOD_TENSOR);
+ y->SetType(proto::VarDesc_VarType_LOD_TENSOR);
y->SetLoDLevel(0);
- y->SetDataType(FP32);
+ y->SetDataType(proto::FP32);
y->SetShape({784, 100});
auto* op = global_block->AppendOp();
@@ -39,7 +39,7 @@ TEST(ProgramDesc, copy_ctor) {
op->SetInput("Y", {y->Name()});
auto* out = global_block->Var("Out");
- out->SetType(VarDesc_VarType_LOD_TENSOR);
+ out->SetType(proto::VarDesc_VarType_LOD_TENSOR);
op->SetOutput("Y", {out->Name()});
ProgramDescBind program_copy(program);
@@ -84,15 +84,15 @@ TEST(ProgramDescBind, serialize_and_deserialize) {
ProgramDescBind program_origin;
auto* global_block = program_origin.MutableBlock(0);
auto* x = global_block->Var("X");
- x->SetType(VarDesc_VarType_LOD_TENSOR);
+ x->SetType(proto::VarDesc_VarType_LOD_TENSOR);
x->SetLoDLevel(0);
- x->SetDataType(FP32);
+ x->SetDataType(proto::FP32);
x->SetShape({1000, 784});
auto* y = global_block->Var("Y");
- y->SetType(VarDesc_VarType_LOD_TENSOR);
+ y->SetType(proto::VarDesc_VarType_LOD_TENSOR);
y->SetLoDLevel(0);
- y->SetDataType(FP32);
+ y->SetDataType(proto::FP32);
y->SetShape({784, 100});
auto* op = global_block->AppendOp();
@@ -101,7 +101,7 @@ TEST(ProgramDescBind, serialize_and_deserialize) {
op->SetInput("Y", {y->Name()});
auto* out = global_block->Var("Out");
- out->SetType(VarDesc_VarType_LOD_TENSOR);
+ out->SetType(proto::VarDesc_VarType_LOD_TENSOR);
op->SetOutput("Y", {out->Name()});
std::string binary_str;
diff --git a/paddle/framework/prune.cc b/paddle/framework/prune.cc
index da76052eb4d3067214841af72a35cebb26477e7f..25eb813ffb96e9b1e13299421ead9f85c02da59f 100644
--- a/paddle/framework/prune.cc
+++ b/paddle/framework/prune.cc
@@ -29,7 +29,7 @@ const std::string kFetchOpType = "fetch";
const std::string kDropOutOpType = "dropout";
const std::string kBatchNormOpType = "batch_norm";
-bool HasDependentVar(const OpDesc& op_desc,
+bool HasDependentVar(const proto::OpDesc& op_desc,
const std::set<std::string>& dependent_vars) {
for (auto& var : op_desc.outputs()) {
for (auto& argu : var.arguments()) {
@@ -41,14 +41,15 @@ bool HasDependentVar(const OpDesc& op_desc,
return false;
}
-bool IsTarget(const OpDesc& op_desc) {
+bool IsTarget(const proto::OpDesc& op_desc) {
if (op_desc.has_is_target()) {
return op_desc.is_target();
}
return false;
}
-void prune_impl(const ProgramDesc& input, ProgramDesc* output, int block_id) {
+void prune_impl(const proto::ProgramDesc& input, proto::ProgramDesc* output,
+ int block_id) {
// TODO(tonyyang-svail):
// - will change to use multiple blocks for RNN op and Cond Op
@@ -104,12 +105,12 @@ void prune_impl(const ProgramDesc& input, ProgramDesc* output, int block_id) {
}
// TODO(fengjiayi): Prune() could be inplaced to avoid unnecessary copies
-void Prune(const ProgramDesc& input, ProgramDesc* output) {
+void Prune(const proto::ProgramDesc& input, proto::ProgramDesc* output) {
prune_impl(input, output, 0);
}
-void inference_optimize_impl(const ProgramDesc& input, ProgramDesc* output,
- int block_id) {
+void inference_optimize_impl(const proto::ProgramDesc& input,
+ proto::ProgramDesc* output, int block_id) {
*output = input;
auto* op_field = output->mutable_blocks(block_id)->mutable_ops();
for (auto& op_desc : *op_field) {
@@ -125,7 +126,8 @@ void inference_optimize_impl(const ProgramDesc& input, ProgramDesc* output,
}
}
-void InferenceOptimize(const ProgramDesc& input, ProgramDesc* output) {
+void InferenceOptimize(const proto::ProgramDesc& input,
+ proto::ProgramDesc* output) {
inference_optimize_impl(input, output, 0);
}
diff --git a/paddle/framework/prune.h b/paddle/framework/prune.h
index 23db014894348094a98e043aa744c6f0d27b2640..593292523d0c14136791bb804a4721a0740b47ba 100644
--- a/paddle/framework/prune.h
+++ b/paddle/framework/prune.h
@@ -20,9 +20,10 @@ limitations under the License. */
namespace paddle {
namespace framework {
-void Prune(const ProgramDesc& input, ProgramDesc* output);
+void Prune(const proto::ProgramDesc& input, proto::ProgramDesc* output);
-void InferenceOptimize(const ProgramDesc& input, ProgramDesc* output);
+void InferenceOptimize(const proto::ProgramDesc& input,
+ proto::ProgramDesc* output);
} // namespace framework
} // namespace paddle
diff --git a/paddle/framework/prune_test.cc b/paddle/framework/prune_test.cc
index f21df37a292fd1e039ee8f8fa26244e26c978cae..47fe4b0636c14c5a4f0d4e000a4da8f8951c5d62 100644
--- a/paddle/framework/prune_test.cc
+++ b/paddle/framework/prune_test.cc
@@ -34,7 +34,7 @@ void AddOp(const std::string &type, const f::VariableNameMap &inputs,
for (auto kv : outputs) {
for (auto v : kv.second) {
auto var = block->Var(v);
- var->SetDataType(paddle::framework::DataType::FP32);
+ var->SetDataType(paddle::framework::proto::DataType::FP32);
}
}
@@ -57,14 +57,14 @@ TEST(Prune, one_operator) {
AddOp("one_one", {{"input", {"a"}}}, {{"output", {"b"}}}, f::AttributeMap{},
block);
- f::ProgramDesc *pdesc = program.Proto();
- f::ProgramDesc pruned;
+ f::proto::ProgramDesc *pdesc = program.Proto();
+ f::proto::ProgramDesc pruned;
- Prune(*pdesc, &pruned);
+ f::Prune(*pdesc, &pruned);
PADDLE_ENFORCE_EQ(pruned.blocks(0).ops_size(), 0);
pdesc->mutable_blocks(0)->mutable_ops(0)->set_is_target(true);
- Prune(*pdesc, &pruned);
+ f::Prune(*pdesc, &pruned);
PADDLE_ENFORCE_EQ(pruned.blocks(0).ops_size(), 1);
}
@@ -81,12 +81,12 @@ TEST(Prune, forward) {
AddOp("one_one", {{"input", {"d"}}}, {{"output", {"e"}}}, f::AttributeMap{},
block);
- f::ProgramDesc *pdesc = program.Proto();
+ f::proto::ProgramDesc *pdesc = program.Proto();
for (int i = 0; i < pdesc->blocks(0).ops_size(); ++i) {
- f::ProgramDesc pruned;
+ f::proto::ProgramDesc pruned;
pdesc->mutable_blocks(0)->mutable_ops(i)->set_is_target(true);
- Prune(*pdesc, &pruned);
+ f::Prune(*pdesc, &pruned);
PADDLE_ENFORCE_EQ(pruned.blocks(0).ops_size(), i + 1);
}
}
@@ -104,11 +104,11 @@ TEST(Prune, multi_input_op) {
AddOp("three_one", {{"input", {"b0", "b1", "b2"}}}, {{"output", {"c"}}},
f::AttributeMap{}, block);
- f::ProgramDesc *pdesc = program.Proto();
+ f::proto::ProgramDesc *pdesc = program.Proto();
pdesc->mutable_blocks(0)->mutable_ops(3)->set_is_target(true);
- f::ProgramDesc pruned;
- Prune(*pdesc, &pruned);
+ f::proto::ProgramDesc pruned;
+ f::Prune(*pdesc, &pruned);
PADDLE_ENFORCE_EQ(pruned.blocks(0).ops_size(), 4);
}
@@ -123,11 +123,11 @@ TEST(Prune, multi_output_op) {
AddOp("one_one", {{"input", {"c"}}}, {{"output", {"c1"}}}, f::AttributeMap{},
block);
- f::ProgramDesc *pdesc = program.Proto();
+ f::proto::ProgramDesc *pdesc = program.Proto();
pdesc->mutable_blocks(0)->mutable_ops(2)->set_is_target(true);
- f::ProgramDesc pruned;
- Prune(*pdesc, &pruned);
+ f::proto::ProgramDesc pruned;
+ f::Prune(*pdesc, &pruned);
PADDLE_ENFORCE_EQ(pruned.blocks(0).ops_size(), 2);
}
@@ -142,11 +142,11 @@ TEST(Prune, multi_target) {
AddOp("one_one", {{"input", {"c"}}}, {{"output", {"c1"}}}, f::AttributeMap{},
block);
- f::ProgramDesc *pdesc = program.Proto();
+ f::proto::ProgramDesc *pdesc = program.Proto();
pdesc->mutable_blocks(0)->mutable_ops(1)->set_is_target(true);
pdesc->mutable_blocks(0)->mutable_ops(2)->set_is_target(true);
- f::ProgramDesc pruned;
- Prune(*pdesc, &pruned);
+ f::proto::ProgramDesc pruned;
+ f::Prune(*pdesc, &pruned);
PADDLE_ENFORCE_EQ(pruned.blocks(0).ops_size(), 3);
}
diff --git a/paddle/framework/shape_inference.cc b/paddle/framework/shape_inference.cc
index 7dac1cfd5ee0c320c67bc0b2448417d258d6862b..86dc01665bda5e7f988e60780c0600b049d737ef 100644
--- a/paddle/framework/shape_inference.cc
+++ b/paddle/framework/shape_inference.cc
@@ -57,17 +57,17 @@ void InferShapeContext::SetDims(const std::vector<std::string> &names,
SetDim(names[i], dims[i]);
}
}
-std::vector<VarDesc::VarType> InferShapeContext::GetInputsVarType(
+std::vector<proto::VarDesc::VarType> InferShapeContext::GetInputsVarType(
const std::string &name) const {
return GetVarTypes(Inputs(name));
}
-std::vector<VarDesc::VarType> InferShapeContext::GetOutputsVarType(
+std::vector<proto::VarDesc::VarType> InferShapeContext::GetOutputsVarType(
const std::string &name) const {
return GetVarTypes(Outputs(name));
}
-std::vector<VarDesc::VarType> InferShapeContext::GetVarTypes(
+std::vector<proto::VarDesc::VarType> InferShapeContext::GetVarTypes(
const std::vector<std::string> &names) const {
- std::vector<VarDesc::VarType> retv;
+ std::vector<proto::VarDesc::VarType> retv;
retv.resize(names.size());
std::transform(names.begin(), names.end(), retv.begin(),
std::bind(std::mem_fn(&InferShapeContext::GetVarType), this,
diff --git a/paddle/framework/shape_inference.h b/paddle/framework/shape_inference.h
index 46f2ea84b4b64292cc9026ef9864621efba79c7a..f93319d8f2fd4c5d388bd57fd595a6a5edd51775 100644
--- a/paddle/framework/shape_inference.h
+++ b/paddle/framework/shape_inference.h
@@ -27,8 +27,9 @@ class InferShapeContext {
virtual bool HasInput(const std::string &name) const = 0;
virtual bool HasOutput(const std::string &name) const = 0;
- std::vector<VarDesc::VarType> GetInputsVarType(const std::string &name) const;
- std::vector<VarDesc::VarType> GetOutputsVarType(
+ std::vector<proto::VarDesc::VarType> GetInputsVarType(
+ const std::string &name) const;
+ std::vector<proto::VarDesc::VarType> GetOutputsVarType(
const std::string &name) const;
virtual bool HasInputs(const std::string &name) const = 0;
@@ -65,10 +66,10 @@ class InferShapeContext {
std::vector<DDim> GetDims(
const std::vector<std::string> &names) const;
- std::vector<VarDesc::VarType> GetVarTypes(
+ std::vector<proto::VarDesc::VarType> GetVarTypes(
const std::vector<std::string> &names) const;
- virtual VarDesc::VarType GetVarType(const std::string &name) const = 0;
+ virtual proto::VarDesc::VarType GetVarType(const std::string &name) const = 0;
};
} // namespace framework
diff --git a/paddle/framework/var_desc.cc b/paddle/framework/var_desc.cc
index 0babec29f6f4412ed29deeafe24470e86b30a636..2180827767e73b7ce51e05e402de8b0dd68571bd 100644
--- a/paddle/framework/var_desc.cc
+++ b/paddle/framework/var_desc.cc
@@ -18,15 +18,17 @@ limitations under the License. */
namespace paddle {
namespace framework {
-VarDesc::VarType VarDescBind::GetType() const { return desc_.type(); }
+proto::VarDesc::VarType VarDescBind::GetType() const { return desc_.type(); }
-void VarDescBind::SetType(VarDesc::VarType type) { desc_.set_type(type); }
+void VarDescBind::SetType(proto::VarDesc::VarType type) {
+ desc_.set_type(type);
+}
void VarDescBind::SetShape(const std::vector<int64_t> &dims) {
VectorToRepeated(dims, mutable_tensor_desc()->mutable_dims());
}
-void VarDescBind::SetDataType(DataType data_type) {
+void VarDescBind::SetDataType(proto::DataType data_type) {
mutable_tensor_desc()->set_data_type(data_type);
}
@@ -34,14 +36,16 @@ std::vector<int64_t> VarDescBind::Shape() const {
return RepeatedToVector(tensor_desc().dims());
}
-DataType VarDescBind::GetDataType() const { return tensor_desc().data_type(); }
+proto::DataType VarDescBind::GetDataType() const {
+ return tensor_desc().data_type();
+}
void VarDescBind::SetLoDLevel(int32_t lod_level) {
switch (desc_.type()) {
- case VarDesc::LOD_TENSOR:
+ case proto::VarDesc::LOD_TENSOR:
desc_.mutable_lod_tensor()->set_lod_level(lod_level);
break;
- case VarDesc::LOD_TENSOR_ARRAY:
+ case proto::VarDesc::LOD_TENSOR_ARRAY:
desc_.mutable_tensor_array()->set_lod_level(lod_level);
break;
default:
@@ -52,9 +56,9 @@ void VarDescBind::SetLoDLevel(int32_t lod_level) {
int32_t VarDescBind::GetLodLevel() const {
switch (desc_.type()) {
- case VarDesc::LOD_TENSOR:
+ case proto::VarDesc::LOD_TENSOR:
return desc_.lod_tensor().lod_level();
- case VarDesc::LOD_TENSOR_ARRAY:
+ case proto::VarDesc::LOD_TENSOR_ARRAY:
return desc_.tensor_array().lod_level();
default:
PADDLE_THROW("Tensor type=%d does not support LoDLevel",
@@ -62,29 +66,29 @@ int32_t VarDescBind::GetLodLevel() const {
}
}
-const TensorDesc &VarDescBind::tensor_desc() const {
+const proto::TensorDesc &VarDescBind::tensor_desc() const {
PADDLE_ENFORCE(desc_.has_type(), "invoke TensorDesc must after set type");
switch (desc_.type()) {
- case VarDesc::SELECTED_ROWS:
+ case proto::VarDesc::SELECTED_ROWS:
return desc_.selected_rows();
- case VarDesc::LOD_TENSOR:
+ case proto::VarDesc::LOD_TENSOR:
return desc_.lod_tensor().tensor();
- case VarDesc::LOD_TENSOR_ARRAY:
+ case proto::VarDesc::LOD_TENSOR_ARRAY:
return desc_.tensor_array().tensor();
default:
PADDLE_THROW("Unexpected branch.");
}
}
-TensorDesc *VarDescBind::mutable_tensor_desc() {
+proto::TensorDesc *VarDescBind::mutable_tensor_desc() {
PADDLE_ENFORCE(desc_.has_type(),
"invoke MutableTensorDesc must after set type");
switch (desc_.type()) {
- case VarDesc::SELECTED_ROWS:
+ case proto::VarDesc::SELECTED_ROWS:
return desc_.mutable_selected_rows();
- case VarDesc::LOD_TENSOR:
+ case proto::VarDesc::LOD_TENSOR:
return desc_.mutable_lod_tensor()->mutable_tensor();
- case VarDesc::LOD_TENSOR_ARRAY:
+ case proto::VarDesc::LOD_TENSOR_ARRAY:
return desc_.mutable_tensor_array()->mutable_tensor();
default:
PADDLE_THROW("Unexpected branch.");
diff --git a/paddle/framework/var_desc.h b/paddle/framework/var_desc.h
index 5cf4608944c5011d798fbde060002a57be8f6102..335a864cabfe575e153cd5c44b5cd30c312914a2 100644
--- a/paddle/framework/var_desc.h
+++ b/paddle/framework/var_desc.h
@@ -57,40 +57,40 @@ class VarDescBind {
public:
explicit VarDescBind(const std::string &name) {
desc_.set_name(name);
- desc_.set_type(VarDesc::LOD_TENSOR);
+ desc_.set_type(proto::VarDesc::LOD_TENSOR);
}
- explicit VarDescBind(const VarDesc &desc) : desc_(desc) {}
+ explicit VarDescBind(const proto::VarDesc &desc) : desc_(desc) {}
- VarDesc *Proto() { return &desc_; }
+ proto::VarDesc *Proto() { return &desc_; }
std::string Name() const { return desc_.name(); }
  void SetShape(const std::vector<int64_t> &dims);
- void SetDataType(DataType data_type);
+ void SetDataType(proto::DataType data_type);
  std::vector<int64_t> Shape() const;
- DataType GetDataType() const;
+ proto::DataType GetDataType() const;
void SetLoDLevel(int32_t lod_level);
int32_t GetLodLevel() const;
- VarDesc::VarType GetType() const;
+ proto::VarDesc::VarType GetType() const;
- void SetType(VarDesc::VarType type);
+ void SetType(proto::VarDesc::VarType type);
bool Persistable() const { return desc_.persistable(); }
void SetPersistable(bool persistable) { desc_.set_persistable(persistable); }
private:
- const TensorDesc &tensor_desc() const;
- TensorDesc *mutable_tensor_desc();
+ const proto::TensorDesc &tensor_desc() const;
+ proto::TensorDesc *mutable_tensor_desc();
- VarDesc desc_;
+ proto::VarDesc desc_;
};
} // namespace framework
} // namespace paddle
diff --git a/paddle/framework/var_type.h b/paddle/framework/var_type.h
index 0f19870bec3e69d07278507cc556a86bbd25d12d..43a72276408bdefc329e8ddcd901ba346aba35f3 100644
--- a/paddle/framework/var_type.h
+++ b/paddle/framework/var_type.h
@@ -20,15 +20,15 @@
namespace paddle {
namespace framework {
-inline VarDesc::VarType ToVarType(std::type_index type) {
+inline proto::VarDesc::VarType ToVarType(std::type_index type) {
if (type.hash_code() == typeid(LoDTensor).hash_code()) {
- return VarDesc_VarType_LOD_TENSOR;
+ return proto::VarDesc_VarType_LOD_TENSOR;
} else if (type.hash_code() == typeid(LoDRankTable).hash_code()) {
- return VarDesc_VarType_LOD_RANK_TABLE;
+ return proto::VarDesc_VarType_LOD_RANK_TABLE;
} else if (type.hash_code() == typeid(LoDTensorArray).hash_code()) {
- return VarDesc_VarType_LOD_TENSOR_ARRAY;
+ return proto::VarDesc_VarType_LOD_TENSOR_ARRAY;
} else if (type.hash_code() == typeid(SelectedRows).hash_code()) {
- return VarDesc_VarType_SELECTED_ROWS;
+ return proto::VarDesc_VarType_SELECTED_ROWS;
} else {
PADDLE_THROW("ToVarType:Unsupported type %s", type.name());
}
@@ -37,16 +37,16 @@ inline VarDesc::VarType ToVarType(std::type_index type) {
template <typename Visitor>
inline void VisitVarType(const Variable& var, Visitor visitor) {
switch (ToVarType(var.Type())) {
- case VarDesc_VarType_LOD_TENSOR:
+ case proto::VarDesc_VarType_LOD_TENSOR:
      visitor(var.Get<LoDTensor>());
return;
- case VarDesc_VarType_LOD_RANK_TABLE:
+ case proto::VarDesc_VarType_LOD_RANK_TABLE:
      visitor(var.Get<LoDRankTable>());
return;
- case VarDesc_VarType_LOD_TENSOR_ARRAY:
+ case proto::VarDesc_VarType_LOD_TENSOR_ARRAY:
      visitor(var.Get<LoDTensorArray>());
return;
- case VarDesc_VarType_SELECTED_ROWS:
+ case proto::VarDesc_VarType_SELECTED_ROWS:
      visitor(var.Get<SelectedRows>());
return;
default:
diff --git a/paddle/framework/var_type_inference_test.cc b/paddle/framework/var_type_inference_test.cc
index 9035e63fa48ffdf7c72061b0a4248538d7a357e4..8b465cbc59c50fbbe48036eb7dc09847d21d9d0b 100644
--- a/paddle/framework/var_type_inference_test.cc
+++ b/paddle/framework/var_type_inference_test.cc
@@ -36,14 +36,14 @@ class SumOpVarTypeInference : public VarTypeInference {
void operator()(const OpDescBind &op_desc,
BlockDescBind *block) const override {
auto &inputs = op_desc.Input("X");
- auto default_var_type = VarDesc::SELECTED_ROWS;
+ auto default_var_type = proto::VarDesc::SELECTED_ROWS;
bool any_input_is_lod_tensor = std::any_of(
inputs.begin(), inputs.end(), [block](const std::string &name) {
- return block->Var(name)->GetType() == VarDesc::LOD_TENSOR;
+ return block->Var(name)->GetType() == proto::VarDesc::LOD_TENSOR;
});
if (any_input_is_lod_tensor) {
- default_var_type = VarDesc::LOD_TENSOR;
+ default_var_type = proto::VarDesc::LOD_TENSOR;
}
auto out_var_name = op_desc.Output("Out").front();
@@ -68,19 +68,19 @@ TEST(InferVarType, sum_op) {
op->SetInput("X", {"test_a", "test_b", "test_c"});
op->SetOutput("Out", {"test_out"});
- prog.MutableBlock(0)->Var("test_a")->SetType(VarDesc::SELECTED_ROWS);
- prog.MutableBlock(0)->Var("test_b")->SetType(VarDesc::SELECTED_ROWS);
- prog.MutableBlock(0)->Var("test_c")->SetType(VarDesc::SELECTED_ROWS);
+ prog.MutableBlock(0)->Var("test_a")->SetType(proto::VarDesc::SELECTED_ROWS);
+ prog.MutableBlock(0)->Var("test_b")->SetType(proto::VarDesc::SELECTED_ROWS);
+ prog.MutableBlock(0)->Var("test_c")->SetType(proto::VarDesc::SELECTED_ROWS);
prog.MutableBlock(0)->Var("test_out");
op->InferVarType(prog.MutableBlock(0));
- ASSERT_EQ(VarDesc::SELECTED_ROWS,
+ ASSERT_EQ(proto::VarDesc::SELECTED_ROWS,
prog.MutableBlock(0)->Var("test_out")->GetType());
- prog.MutableBlock(0)->Var("test_b")->SetType(VarDesc::LOD_TENSOR);
+ prog.MutableBlock(0)->Var("test_b")->SetType(proto::VarDesc::LOD_TENSOR);
op->InferVarType(prog.MutableBlock(0));
- ASSERT_EQ(VarDesc::LOD_TENSOR,
+ ASSERT_EQ(proto::VarDesc::LOD_TENSOR,
prog.MutableBlock(0)->Var("test_out")->GetType());
}
@@ -91,14 +91,14 @@ TEST(InferVarType, sum_op_without_infer_var_type) {
op->SetInput("X", {"test2_a", "test2_b", "test2_c"});
op->SetOutput("Out", {"test2_out"});
- prog.MutableBlock(0)->Var("test2_a")->SetType(VarDesc::SELECTED_ROWS);
- prog.MutableBlock(0)->Var("test2_b")->SetType(VarDesc::SELECTED_ROWS);
- prog.MutableBlock(0)->Var("test2_c")->SetType(VarDesc::SELECTED_ROWS);
+ prog.MutableBlock(0)->Var("test2_a")->SetType(proto::VarDesc::SELECTED_ROWS);
+ prog.MutableBlock(0)->Var("test2_b")->SetType(proto::VarDesc::SELECTED_ROWS);
+ prog.MutableBlock(0)->Var("test2_c")->SetType(proto::VarDesc::SELECTED_ROWS);
prog.MutableBlock(0)->Var("test2_out");
op->InferVarType(prog.MutableBlock(0));
- ASSERT_EQ(VarDesc_VarType_LOD_TENSOR,
+ ASSERT_EQ(proto::VarDesc_VarType_LOD_TENSOR,
prog.MutableBlock(0)->Var("test2_out")->GetType());
}
diff --git a/paddle/operators/accuracy_op.cc b/paddle/operators/accuracy_op.cc
index 76da21c4726a1245241c1cf61860f9c8b62ea452..b8ed93f4eb549fbd76bf360d4b843c1fa9635b40 100644
--- a/paddle/operators/accuracy_op.cc
+++ b/paddle/operators/accuracy_op.cc
@@ -63,8 +63,7 @@ class AccuracyOp : public framework::OperatorWithKernel {
class AccuracyOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AccuracyOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ AccuracyOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
// TODO(typhoonzero): support both inference value and indices.
AddInput("Out", "The network output of topk (inferences)");
diff --git a/paddle/operators/activation_op.cc b/paddle/operators/activation_op.cc
index 63490f0ec9f4852a3ead574b9d52c807d8ba6d89..2b4c7e5f0de8347d4789136a3a45408ada439f02 100644
--- a/paddle/operators/activation_op.cc
+++ b/paddle/operators/activation_op.cc
@@ -38,9 +38,8 @@ class ActivationOpGrad : public framework::OperatorWithKernel {
class SigmoidOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SigmoidOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SigmoidOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Sigmoid operator");
AddOutput("Y", "Output of Sigmoid operator");
AddComment(R"DOC(
@@ -54,9 +53,8 @@ $$y = \frac{1}{1 + e^{-x}}$$
class LogSigmoidOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LogSigmoidOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ LogSigmoidOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of LogSigmoid operator");
AddOutput("Y", "Output of LogSigmoid operator");
AddComment(R"DOC(
@@ -70,8 +68,8 @@ $$y = \log \frac{1}{1 + e^{-x}}$$
class ExpOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ExpOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ ExpOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Exp operator");
AddOutput("Y", "Output of Exp operator");
AddComment(R"DOC(
@@ -85,8 +83,8 @@ $y = e^x$
class ReluOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ReluOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ ReluOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Relu operator");
AddOutput("Y", "Output of Relu operator");
AddComment(R"DOC(
@@ -100,9 +98,8 @@ $y = \max(x, 0)$
class LeakyReluOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LeakyReluOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ LeakyReluOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of LeakyRelu operator");
AddOutput("Y", "Output of LeakyRelu operator");
AddAttr("alpha", "The small negative slope").SetDefault(0.02f);
@@ -117,9 +114,8 @@ $y = \max(x, \alpha * x)$
class SoftShrinkOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SoftShrinkOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SoftShrinkOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Softshrink operator");
AddOutput("Y", "Output of Softshrink operator");
AddAttr("lambda", "non-negative offset").SetDefault(0.5f);
@@ -140,8 +136,8 @@ $$
class TanhOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- TanhOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ TanhOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Tanh operator");
AddOutput("Y", "Output of Tanh operator");
AddComment(R"DOC(
@@ -155,9 +151,8 @@ $$y = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
class TanhShrinkOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- TanhShrinkOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ TanhShrinkOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of TanhShrink operator");
AddOutput("Y", "Output of TanhShrink operator");
AddComment(R"DOC(
@@ -171,9 +166,8 @@ $$y = x - \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
class HardShrinkOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- HardShrinkOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ HardShrinkOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of HardShrink operator");
AddOutput("Y", "Output of HardShrink operator");
AddAttr("threshold", "The value of threshold for HardShrink")
@@ -195,8 +189,8 @@ $$
class SqrtOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SqrtOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SqrtOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Sqrt operator");
AddOutput("Y", "Output of Sqrt operator");
AddComment(R"DOC(
@@ -210,8 +204,8 @@ $y = \sqrt{x}$
class AbsOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AbsOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ AbsOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Abs operator");
AddOutput("Y", "Output of Abs operator");
AddComment(R"DOC(
@@ -225,8 +219,8 @@ $y = |x|$
class CeilOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- CeilOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ CeilOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Ceil operator");
AddOutput("Y", "Output of Ceil operator");
AddComment(R"DOC(
@@ -240,8 +234,8 @@ $y = ceil(x)$
class FloorOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- FloorOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ FloorOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Floor operator");
AddOutput("Y", "Output of Floor operator");
AddComment(R"DOC(
@@ -255,8 +249,8 @@ $y = floor(x)$
class RoundOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- RoundOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ RoundOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Round operator");
AddOutput("Y", "Output of Round operator");
AddComment(R"DOC(
@@ -270,9 +264,8 @@ $y = [x]$
class ReciprocalOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ReciprocalOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ ReciprocalOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Reciprocal operator");
AddOutput("Y", "Output of Reciprocal operator");
AddComment(R"DOC(
@@ -286,8 +279,8 @@ $$y = \frac{1}{x}$$
class LogOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LogOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ LogOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Log operator");
AddOutput("Y", "Output of Log operator");
AddComment(R"DOC(
@@ -303,8 +296,8 @@ Natural logarithm of x.
class SquareOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SquareOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SquareOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Square operator");
AddOutput("Y", "Output of Square operator");
AddComment(R"DOC(
@@ -318,9 +311,8 @@ $y = x^2$
class SoftplusOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SoftplusOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SoftplusOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Softplus operator");
AddOutput("Y", "Output of Softplus operator");
AddComment(R"DOC(
@@ -334,9 +326,8 @@ $y = \ln(1 + e^{x})$
class SoftsignOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SoftsignOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SoftsignOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Softsign operator");
AddOutput("Y", "Output of Softsign operator");
AddComment(R"DOC(
@@ -350,8 +341,8 @@ $$y = \frac{x}{1 + |x|}$$
class BReluOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- BReluOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ BReluOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of BRelu operator");
AddOutput("Y", "Output of BRelu operator");
AddAttr("t_min", "The min marginal value of BRelu")
@@ -369,9 +360,8 @@ $y = \max(\min(x, t_{min}), t_{max})$
class SoftReluOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SoftReluOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SoftReluOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of SoftRelu operator");
AddOutput("Y", "Output of SoftRelu operator");
AddAttr("threshold", "The threshold value of SoftRelu")
@@ -387,8 +377,8 @@ $y = \ln(1 + \exp(\max(\min(x, threshold), threshold))$
class ELUOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ELUOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ ELUOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of ELU operator");
AddOutput("Y", "Output of ELU operator");
AddAttr("alpha", "The alpha value of ELU").SetDefault(1.0f);
@@ -406,8 +396,8 @@ $y = \max(0, x) + \min(0, \alpha * (e^x - 1))$
class Relu6OpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Relu6OpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ Relu6OpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Relu6 operator");
AddOutput("Y", "Output of Relu6 operator");
AddAttr("threshold", "The threshold value of Relu6")
@@ -423,8 +413,8 @@ $y = \min(\max(0, x), 6)$
class PowOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- PowOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ PowOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Pow operator");
AddOutput("Y", "Output of Pow operator");
AddAttr("factor", "The exponential factor of Pow").SetDefault(1.0f);
@@ -439,8 +429,8 @@ $y = x^{factor}$
class STanhOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- STanhOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ STanhOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of STanh operator");
AddOutput("Y", "Output of STanh operator");
AddAttr("scale_a", "The scale parameter of a for the input")
@@ -458,9 +448,8 @@ $$y = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$
class ThresholdedReluOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ThresholdedReluOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ ThresholdedReluOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of ThresholdedRelu operator");
AddOutput("Y", "Output of ThresholdedRelu operator");
AddAttr("threshold", "The threshold location of activation")
@@ -481,9 +470,8 @@ $$
class HardSigmoidOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- HardSigmoidOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ HardSigmoidOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of HardSigmoid operator");
AddOutput("Y", "Output of HardSigmoid operator");
AddAttr("slope", "Slope for linear approximation of sigmoid")
@@ -508,8 +496,8 @@ It is recommended to use the defaults for this activation.
class SwishOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SwishOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ SwishOpMaker(OpProto *proto, OpAttrChecker *op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input of Swish operator");
AddOutput("Y", "Output of Swish operator");
AddAttr("beta", "Constant beta of swish operator").SetDefault(1.0f);
diff --git a/paddle/operators/adadelta_op.cc b/paddle/operators/adadelta_op.cc
index 507811e7b59b9426c599570ead9b42f8d02380fd..d8a9491c8247ac463e01606dac248780d5284236 100644
--- a/paddle/operators/adadelta_op.cc
+++ b/paddle/operators/adadelta_op.cc
@@ -59,8 +59,7 @@ class AdadeltaOp : public framework::OperatorWithKernel {
class AdadeltaOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AdadeltaOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ AdadeltaOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param", "(Tensor) Input parameter");
AddInput("Grad", "(Tensor) Input gradient");
diff --git a/paddle/operators/adagrad_op.cc b/paddle/operators/adagrad_op.cc
index 5d007163161cd4bf4a9fd46eda57f7984c6a414f..052c793a01907abdc7784d1290f43543ae81bdb1 100644
--- a/paddle/operators/adagrad_op.cc
+++ b/paddle/operators/adagrad_op.cc
@@ -59,8 +59,7 @@ class AdagradOp : public framework::OperatorWithKernel {
class AdagradOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AdagradOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ AdagradOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param", "(Tensor) Input parameter");
AddInput("Grad", "(Tensor) Input gradient");
diff --git a/paddle/operators/adam_op.cc b/paddle/operators/adam_op.cc
index cf6ef6dd53979b23de125014b8d5150d8ce4c053..03527de936bf736d572fb0140033bde4db990981 100644
--- a/paddle/operators/adam_op.cc
+++ b/paddle/operators/adam_op.cc
@@ -73,7 +73,7 @@ class AdamOp : public framework::OperatorWithKernel {
class AdamOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AdamOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ AdamOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param", "(Tensor) Input parameter");
AddInput("Grad", "(Tensor) Input gradient");
diff --git a/paddle/operators/adamax_op.cc b/paddle/operators/adamax_op.cc
index 49ce497bb710de24b198fb4b5f56ff6d277c6f52..3b0b71418477ea128dbb31a8d7cd44cf6bf023a1 100644
--- a/paddle/operators/adamax_op.cc
+++ b/paddle/operators/adamax_op.cc
@@ -67,7 +67,7 @@ class AdamaxOp : public framework::OperatorWithKernel {
class AdamaxOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AdamaxOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ AdamaxOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param", "(Tensor) Input parameter");
AddInput("Grad", "(Tensor) Input gradient");
diff --git a/paddle/operators/array_to_lod_tensor_op.cc b/paddle/operators/array_to_lod_tensor_op.cc
index faeba7f3ed26d05de16775a1de4d42f802111207..aafdb8fb248397d78360745270d390ff4d60e7eb 100644
--- a/paddle/operators/array_to_lod_tensor_op.cc
+++ b/paddle/operators/array_to_lod_tensor_op.cc
@@ -114,8 +114,7 @@ class ArrayToLoDTensorOp : public framework::OperatorBase {
class ArrayToLoDTensorOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- ArrayToLoDTensorOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ArrayToLoDTensorOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(std::vector) A vector of tensors that is going to "
diff --git a/paddle/operators/assign_op.cc b/paddle/operators/assign_op.cc
index 0a37f18729a93b15623c0a17e3689e518c38b844..0d98755aa07e4b655e74211ffecb039254e50608 100644
--- a/paddle/operators/assign_op.cc
+++ b/paddle/operators/assign_op.cc
@@ -86,8 +86,7 @@ class AssignOp : public framework::OperatorBase {
class AssignOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- AssignOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ AssignOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(LoDTensor, SelectedRows or LoDTensorArray) The input variable "
@@ -109,8 +108,8 @@ class AssignInferShape : public framework::InferShapeBase {
void operator()(framework::InferShapeContext *context) const override {
if (context->HasInput("X")) {
auto type = context->GetInputsVarType("X")[0];
- if (type == framework::VarDesc_VarType_SELECTED_ROWS ||
- type == framework::VarDesc_VarType_LOD_TENSOR) {
+ if (type == framework::proto::VarDesc_VarType_SELECTED_ROWS ||
+ type == framework::proto::VarDesc_VarType_LOD_TENSOR) {
context->SetOutputDim("Out", context->GetInputDim("X"));
}
}
diff --git a/paddle/operators/auc_op.cc b/paddle/operators/auc_op.cc
index 6c3f67ec32fb1b942241997e87a1e9c4752e707d..811c487089fcf4044f129ad6bf95b46535d4fcd6 100644
--- a/paddle/operators/auc_op.cc
+++ b/paddle/operators/auc_op.cc
@@ -49,7 +49,7 @@ class AucOp : public framework::OperatorWithKernel {
class AucOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AucOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ AucOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Out",
"A floating point 2D tensor, values are in the range [0, 1]."
diff --git a/paddle/operators/batch_norm_op.cc b/paddle/operators/batch_norm_op.cc
index 94a972b7ab56f41f8b6a203b6bf0330a69f84e54..f545da22d74f4758a099d249db922de28c926ec2 100644
--- a/paddle/operators/batch_norm_op.cc
+++ b/paddle/operators/batch_norm_op.cc
@@ -85,8 +85,7 @@ class BatchNormOp : public framework::OperatorWithKernel {
class BatchNormOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- BatchNormOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ BatchNormOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddAttr("is_test", "").SetDefault(false);
AddAttr("momentum", "").SetDefault(0.9);
diff --git a/paddle/operators/beam_search_decode_op.cc b/paddle/operators/beam_search_decode_op.cc
index c796a0c5d089499e7858c7a427825fdbeb05cb7f..ceb20cbe18445cc6fbaea6ba1c64f143e88fb9c9 100644
--- a/paddle/operators/beam_search_decode_op.cc
+++ b/paddle/operators/beam_search_decode_op.cc
@@ -83,9 +83,8 @@ class BeamSearchDecodeOp : public framework::OperatorBase {
class BeamSearchDecodeOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- BeamSearchDecodeOpProtoMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
- : OpProtoAndCheckerMaker(proto, op_checker) {
+ BeamSearchDecodeOpProtoMaker(OpProto* proto, OpAttrChecker* op_checker)
+ : framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Ids",
"(LodTensorArray)"
"score of the candidate words in each step");
@@ -123,10 +122,10 @@ class BeamSearchDecodeInferVarType : public framework::VarTypeInference {
void operator()(const framework::OpDescBind& op_desc,
framework::BlockDescBind* block) const override {
for (auto& o : op_desc.Output("SentenceIds")) {
- block->Var(o)->SetType(framework::VarDesc::LOD_TENSOR);
+ block->Var(o)->SetType(framework::proto::VarDesc::LOD_TENSOR);
}
for (auto& o : op_desc.Output("SentenceScores")) {
- block->Var(o)->SetType(framework::VarDesc::LOD_TENSOR);
+ block->Var(o)->SetType(framework::proto::VarDesc::LOD_TENSOR);
}
}
};
diff --git a/paddle/operators/beam_search_op.cc b/paddle/operators/beam_search_op.cc
index 8c3e2a303fb8f12a8886c11cf112b859a6db7bcf..69ddc52035ae78dd2d1926b66fcbbe36737e87aa 100644
--- a/paddle/operators/beam_search_op.cc
+++ b/paddle/operators/beam_search_op.cc
@@ -153,8 +153,7 @@ bool BeamSearch::NextItemSet(std::vector<BeamSearch::Item> *items) {
class BeamSearchProtoAndCheckerMaker
: public framework::OpProtoAndCheckerMaker {
public:
- BeamSearchProtoAndCheckerMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ BeamSearchProtoAndCheckerMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
// inputs and outputs stored in proto
AddInput("pre_ids", "ids in previous step");
diff --git a/paddle/operators/bilinear_tensor_product_op.cc b/paddle/operators/bilinear_tensor_product_op.cc
index 217fd523667777f7d250295d2a036867dac94f04..7640147a12d66a924f16eaf168227b6ce6a96040 100644
--- a/paddle/operators/bilinear_tensor_product_op.cc
+++ b/paddle/operators/bilinear_tensor_product_op.cc
@@ -65,8 +65,7 @@ class BilinearTensorProductOp : public framework::OperatorWithKernel {
class BilinearTensorProductOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- BilinearTensorProductOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ BilinearTensorProductOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The first input of bilinear_tensor_product operator.");
AddInput("Y", "The second input of bilinear_tensor_product operator.");
diff --git a/paddle/operators/cast_op.cc b/paddle/operators/cast_op.cc
index d641b8fc9fea81d1e364ae05de98ed7760a32648..927a32645ccb6cd8aa225f3ce1ba78045bd8fd19 100644
--- a/paddle/operators/cast_op.cc
+++ b/paddle/operators/cast_op.cc
@@ -20,8 +20,7 @@ namespace operators {
class CastOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- CastOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ CastOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input tensor of cast op");
AddOutput("Out", "The output tensor of cast op");
diff --git a/paddle/operators/cast_op.h b/paddle/operators/cast_op.h
index a6773f13a8deb443b022c6045f1b3b976b3e6607..0c72d809e67e8f3f25be5643041d89da3d04d95e 100644
--- a/paddle/operators/cast_op.h
+++ b/paddle/operators/cast_op.h
@@ -55,7 +55,7 @@ class CastOpKernel : public framework::OpKernel<T> {
    auto* in = context.Input<framework::Tensor>("X");
    auto* out = context.Output<framework::Tensor>("Out");
    framework::VisitDataType(
-        static_cast<framework::DataType>(context.Attr<int>("out_dtype")),
+        static_cast<framework::proto::DataType>(context.Attr<int>("out_dtype")),
        CastOpFunctor<DeviceContext, T>(
            in, out, context.template device_context<DeviceContext>()));
}
diff --git a/paddle/operators/chunk_eval_op.cc b/paddle/operators/chunk_eval_op.cc
index 894f355deb9d764ef72d452f362e6b42f8831667..f1f274a7af079d68c7c1bcd8ec07962e18b0ea60 100644
--- a/paddle/operators/chunk_eval_op.cc
+++ b/paddle/operators/chunk_eval_op.cc
@@ -57,15 +57,14 @@ class ChunkEvalOp : public framework::OperatorWithKernel {
protected:
framework::OpKernelType GetKernelType(
const framework::ExecutionContext &ctx) const override {
- return framework::OpKernelType(framework::DataType::FP32,
+ return framework::OpKernelType(framework::proto::DataType::FP32,
ctx.device_context());
}
};
class ChunkEvalOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ChunkEvalOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ChunkEvalOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Inference",
"(Tensor, default: Tensor). "
diff --git a/paddle/operators/clip_by_norm_op.cc b/paddle/operators/clip_by_norm_op.cc
index 0b7975a63f7d364bf9b0ce529e2dd72d9f3cd2e9..05c79d0e25deea84463f0b67ac4dc9a8dd43f2cb 100644
--- a/paddle/operators/clip_by_norm_op.cc
+++ b/paddle/operators/clip_by_norm_op.cc
@@ -37,8 +37,7 @@ class ClipByNormOp : public framework::OperatorWithKernel {
class ClipByNormOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ClipByNormOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ClipByNormOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor) The input of clip_by_norm op."
diff --git a/paddle/operators/clip_op.cc b/paddle/operators/clip_op.cc
index 6092212de4635e2ada81f8383a0ccf64a8116158..e34ba0a8f4757e45db58270dfd6191157f6e226a 100644
--- a/paddle/operators/clip_op.cc
+++ b/paddle/operators/clip_op.cc
@@ -38,7 +38,7 @@ class ClipOp : public framework::OperatorWithKernel {
template <typename AttrType>
class ClipOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ClipOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ ClipOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor)The input of clip op."
diff --git a/paddle/operators/compare_op.cc b/paddle/operators/compare_op.cc
index bf7e88368157d29e627c3c06384f28b6e5e4ecc1..1148172f3a2cc9b3f849ee04cefc19f16742d3eb 100644
--- a/paddle/operators/compare_op.cc
+++ b/paddle/operators/compare_op.cc
@@ -20,8 +20,7 @@ namespace operators {
template <typename OpComment>
class CompareOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- CompareOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ CompareOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
OpComment comment;
AddInput("X",
diff --git a/paddle/operators/concat_op.cc b/paddle/operators/concat_op.cc
index cf522d6921ee746d03d8082b8fc4d051f4d504e6..6151e2e73fb33f01794f81bd176fde7e5579a5c8 100644
--- a/paddle/operators/concat_op.cc
+++ b/paddle/operators/concat_op.cc
@@ -58,7 +58,7 @@ class ConcatOp : public framework::OperatorWithKernel {
class ConcatOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ConcatOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ ConcatOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "Input tensors of concat operator.").AsDuplicable();
AddOutput("Out", "Output tensor of concat operator.");
diff --git a/paddle/operators/cond_op.cc b/paddle/operators/cond_op.cc
index b809bdc3a0fea727f2fb6ea0a55672ee9b0bbd04..8c860676e06de5dac9570d2a6f7271ff451eebee 100644
--- a/paddle/operators/cond_op.cc
+++ b/paddle/operators/cond_op.cc
@@ -205,8 +205,7 @@ void CondOp::Run(const Scope& scope,
class CondOpProtoAndCheckerMaker : public framework::OpProtoAndCheckerMaker {
public:
- CondOpProtoAndCheckerMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ CondOpProtoAndCheckerMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Cond", "The condition, which is a bool vector");
AddInput("Xs", "Inputs of Subnets").AsDuplicable();
diff --git a/paddle/operators/conditional_block_op.cc b/paddle/operators/conditional_block_op.cc
index 6f2ef9174e84a0c0ae096956c04039435e6583c6..5fe362c1b630867d68b566cb9660b57860368d70 100644
--- a/paddle/operators/conditional_block_op.cc
+++ b/paddle/operators/conditional_block_op.cc
@@ -74,8 +74,7 @@ class ConditionalBlockOp : public ConditionalOp {
class ConditionalBlockOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- ConditionalBlockOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ConditionalBlockOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"The conditional variable of this operator. If X is empty, the "
diff --git a/paddle/operators/conv_cudnn_op.cc b/paddle/operators/conv_cudnn_op.cc
index 008bf01885ecddd1fee76a33c43370d07a8988a2..5b27ada55d737c31f8e65dc9b460a3a2ea11b869 100644
--- a/paddle/operators/conv_cudnn_op.cc
+++ b/paddle/operators/conv_cudnn_op.cc
@@ -19,8 +19,7 @@ namespace operators {
class CudnnConv2DOpMaker : public Conv2DOpMaker {
public:
- CudnnConv2DOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ CudnnConv2DOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: Conv2DOpMaker(proto, op_checker) {
AddAttr("workspace_size_MB",
"workspace size for cudnn, in MB, "
@@ -34,8 +33,7 @@ class CudnnConv2DOpMaker : public Conv2DOpMaker {
class CudnnConv3DOpMaker : public Conv3DOpMaker {
public:
- CudnnConv3DOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ CudnnConv3DOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: Conv3DOpMaker(proto, op_checker) {
AddAttr("workspace_size_MB",
"workspace size for cudnn, in MB, "
diff --git a/paddle/operators/conv_op.cc b/paddle/operators/conv_op.cc
index 7ef805fd44bf94d3279ffa50f86993b3f2b64412..abe82e124121a6c57d1c3ca7337804f5a4ab3d38 100644
--- a/paddle/operators/conv_op.cc
+++ b/paddle/operators/conv_op.cc
@@ -66,8 +66,7 @@ void ConvOp::InferShape(framework::InferShapeContext* ctx) const {
ctx->SetOutputDim("Output", framework::make_ddim(output_shape));
}
-Conv2DOpMaker::Conv2DOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+Conv2DOpMaker::Conv2DOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"Input",
@@ -138,8 +137,7 @@ $$
)DOC");
}
-Conv3DOpMaker::Conv3DOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+Conv3DOpMaker::Conv3DOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"Input",
diff --git a/paddle/operators/conv_op.h b/paddle/operators/conv_op.h
index d2de4e80f751d4938ac9cad60871b470fccf225c..83786e2329e7ae3c2908fdfdaeb1f79d19a53f47 100644
--- a/paddle/operators/conv_op.h
+++ b/paddle/operators/conv_op.h
@@ -50,14 +50,12 @@ inline bool IsExpand(std::vector<int64_t>& filter_dim,
// operator implementations can reuse the code.
class Conv2DOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Conv2DOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker);
+ Conv2DOpMaker(OpProto* proto, OpAttrChecker* op_checker);
};
class Conv3DOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Conv3DOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker);
+ Conv3DOpMaker(OpProto* proto, OpAttrChecker* op_checker);
};
class ConvOp : public framework::OperatorWithKernel {
diff --git a/paddle/operators/conv_shift_op.cc b/paddle/operators/conv_shift_op.cc
index a4150a5664690e750d2501a1849767c23209186b..ac2f80625935e14189d27bf738e9b9985a7f42c2 100644
--- a/paddle/operators/conv_shift_op.cc
+++ b/paddle/operators/conv_shift_op.cc
@@ -75,8 +75,7 @@ class ConvShiftGradOp : public framework::OperatorWithKernel {
class ConvShiftOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ConvShiftOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ConvShiftOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor, default Tensor), a 2-D tensor with shape B x M, "
diff --git a/paddle/operators/conv_transpose_cudnn_op.cc b/paddle/operators/conv_transpose_cudnn_op.cc
index 4cb6a2ccffc76066ea0868f76ba2a3bfb9e5e450..2348bed4ff1c658e2b5614a3bdd94dfb554ad705 100644
--- a/paddle/operators/conv_transpose_cudnn_op.cc
+++ b/paddle/operators/conv_transpose_cudnn_op.cc
@@ -19,8 +19,7 @@ namespace operators {
class CudnnConv2DTransposeOpMaker : public Conv2DTransposeOpMaker {
public:
- CudnnConv2DTransposeOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ CudnnConv2DTransposeOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: Conv2DTransposeOpMaker(proto, op_checker) {
AddAttr>("dilations", "dilations of convolution operator.")
.SetDefault({1, 1});
@@ -36,8 +35,7 @@ class CudnnConv2DTransposeOpMaker : public Conv2DTransposeOpMaker {
class CudnnConv3DTransposeOpMaker : public Conv3DTransposeOpMaker {
public:
- CudnnConv3DTransposeOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ CudnnConv3DTransposeOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: Conv3DTransposeOpMaker(proto, op_checker) {
AddAttr>("dilations", "dilations of convolution operator.")
.SetDefault({1, 1, 1});
diff --git a/paddle/operators/conv_transpose_op.cc b/paddle/operators/conv_transpose_op.cc
index ca063e94bbe64817567a298c3b1ad9306667536d..cae0e2ca2b472818463e109cb700da7f62180926 100644
--- a/paddle/operators/conv_transpose_op.cc
+++ b/paddle/operators/conv_transpose_op.cc
@@ -53,8 +53,8 @@ void ConvTransposeOp::InferShape(framework::InferShapeContext* ctx) const {
ctx->SetOutputDim("Output", framework::make_ddim(output_shape));
}
-Conv2DTransposeOpMaker::Conv2DTransposeOpMaker(
- framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+Conv2DTransposeOpMaker::Conv2DTransposeOpMaker(OpProto* proto,
+ OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"Input",
@@ -112,8 +112,8 @@ Example:
)DOC");
}
-Conv3DTransposeOpMaker::Conv3DTransposeOpMaker(
- framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+Conv3DTransposeOpMaker::Conv3DTransposeOpMaker(OpProto* proto,
+ OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Input",
"(Tensor) The input tensor of convolution transpose operator."
diff --git a/paddle/operators/conv_transpose_op.h b/paddle/operators/conv_transpose_op.h
index 1171b0435fd2b1abe541043e8283a8fc09dc13c7..e81651f417a5b96a1f25ae9ca9228d332a16bc6d 100644
--- a/paddle/operators/conv_transpose_op.h
+++ b/paddle/operators/conv_transpose_op.h
@@ -30,14 +30,12 @@ using DDim = framework::DDim;
// operator implementations can reuse the code.
class Conv2DTransposeOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Conv2DTransposeOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker);
+ Conv2DTransposeOpMaker(OpProto* proto, OpAttrChecker* op_checker);
};
class Conv3DTransposeOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Conv3DTransposeOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker);
+ Conv3DTransposeOpMaker(OpProto* proto, OpAttrChecker* op_checker);
};
class ConvTransposeOp : public framework::OperatorWithKernel {
diff --git a/paddle/operators/cos_sim_op.cc b/paddle/operators/cos_sim_op.cc
index 440c427cba9396ec6d0ebf7814d671e45f45412d..a4d4a78d3200259403695a73ed9cfabe9baf8876 100644
--- a/paddle/operators/cos_sim_op.cc
+++ b/paddle/operators/cos_sim_op.cc
@@ -62,7 +62,7 @@ class CosSimOp : public framework::OperatorWithKernel {
class CosSimOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- CosSimOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ CosSimOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The 1st input of cos_sim op.");
AddInput("Y", "The 2nd input of cos_sim op.");
diff --git a/paddle/operators/crf_decoding_op.cc b/paddle/operators/crf_decoding_op.cc
index 1ce189fa6ebba3712467572c55d599975bbe7534..27d0871f82beed4ceb3a4439be097a580631d4c6 100644
--- a/paddle/operators/crf_decoding_op.cc
+++ b/paddle/operators/crf_decoding_op.cc
@@ -18,8 +18,7 @@ namespace paddle {
namespace operators {
class CRFDecodingOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- CRFDecodingOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ CRFDecodingOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Emission",
"(LoDTensor, default: LoDTensor). A LoDTensor with shape "
diff --git a/paddle/operators/crop_op.cc b/paddle/operators/crop_op.cc
index 5c973fbb3cf9513d82a5b87719cb947466082424..87fcab4cca669a356ced8951fbdc3c3ee3a24f3d 100644
--- a/paddle/operators/crop_op.cc
+++ b/paddle/operators/crop_op.cc
@@ -52,7 +52,7 @@ class CropOp : public framework::OperatorWithKernel {
class CropOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- CropOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ CropOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"The input of pad op. "
diff --git a/paddle/operators/cross_entropy_op.cc b/paddle/operators/cross_entropy_op.cc
index 2b06012b690c6725fd150cd99e992912655dc9c6..1ab7c0a06f85f332b290cb6cac82d0cfbe8f3242 100644
--- a/paddle/operators/cross_entropy_op.cc
+++ b/paddle/operators/cross_entropy_op.cc
@@ -111,8 +111,7 @@ class CrossEntropyGradientOp : public framework::OperatorWithKernel {
class CrossEntropyOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- CrossEntropyOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ CrossEntropyOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor, default Tensor), a 2-D tensor with shape N x D, "
diff --git a/paddle/operators/decayed_adagrad_op.cc b/paddle/operators/decayed_adagrad_op.cc
index fd29c7270b0442da740a74f83fdfeed8f47f830d..739a8d881c35817756421a3299901c9e5e7d96ba 100644
--- a/paddle/operators/decayed_adagrad_op.cc
+++ b/paddle/operators/decayed_adagrad_op.cc
@@ -55,8 +55,7 @@ class DecayedAdagradOp : public framework::OperatorWithKernel {
class DecayedAdagradOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- DecayedAdagradOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ DecayedAdagradOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param", "(Tensor) Input parameter");
AddInput("Grad", "(Tensor) Input gradient");
diff --git a/paddle/operators/dropout_op.cc b/paddle/operators/dropout_op.cc
index acd526ae8047292ce6c6756f174c80053dca0d9f..c4bee44e3e5a16334fb9070165eab5c7cdf0141c 100644
--- a/paddle/operators/dropout_op.cc
+++ b/paddle/operators/dropout_op.cc
@@ -40,8 +40,7 @@ class DropoutOp : public framework::OperatorWithKernel {
template <typename AttrType>
class DropoutOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- DropoutOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ DropoutOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of dropout op.");
AddOutput("Out", "The output of dropout op.");
diff --git a/paddle/operators/elementwise_add_op.cc b/paddle/operators/elementwise_add_op.cc
index a62eeeeb95fef77c00258403ca1cae11c2db7173..b6bd794a74665cef546347015be25ab989e852b2 100644
--- a/paddle/operators/elementwise_add_op.cc
+++ b/paddle/operators/elementwise_add_op.cc
@@ -19,8 +19,7 @@ namespace paddle {
namespace operators {
class ElementwiseAddOpMaker : public ElementwiseOpMaker {
public:
- ElementwiseAddOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ElementwiseAddOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: ElementwiseOpMaker(proto, op_checker) {
SetComment("Add", "$Out = X + Y$");
AddComment(comment_);
diff --git a/paddle/operators/elementwise_div_op.cc b/paddle/operators/elementwise_div_op.cc
index 1c3e9e70eef0c1adfb89cf1a58437092f8d536d7..78eae53f53593e5fd3a20daad09098190b4b59f6 100644
--- a/paddle/operators/elementwise_div_op.cc
+++ b/paddle/operators/elementwise_div_op.cc
@@ -19,8 +19,7 @@ namespace paddle {
namespace operators {
class ElementwiseDivOpMaker : public ElementwiseOpMaker {
public:
- ElementwiseDivOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ElementwiseDivOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: ElementwiseOpMaker(proto, op_checker) {
SetComment("Div", "$Out = X / Y$");
AddComment(comment_);
diff --git a/paddle/operators/elementwise_mul_op.cc b/paddle/operators/elementwise_mul_op.cc
index aadb95cbe35fe565cf1009f0f9765def921d0906..f0a61b8b081f5675b1684022e61876ed4d1d4aca 100644
--- a/paddle/operators/elementwise_mul_op.cc
+++ b/paddle/operators/elementwise_mul_op.cc
@@ -20,8 +20,7 @@ namespace operators {
class ElementwiseMulOpMaker : public ElementwiseOpMaker {
public:
- ElementwiseMulOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ElementwiseMulOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: ElementwiseOpMaker(proto, op_checker) {
SetComment("Mul", "$Out = X \\odot\\ Y$");
AddComment(comment_);
diff --git a/paddle/operators/elementwise_op.h b/paddle/operators/elementwise_op.h
index ea533503e4916cae7e1157ed34da9629dcff3513..f308ee05e11210540e41cda4b9a896f9f96c4730 100644
--- a/paddle/operators/elementwise_op.h
+++ b/paddle/operators/elementwise_op.h
@@ -43,8 +43,7 @@ class ElementwiseOp : public framework::OperatorWithKernel {
class ElementwiseOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ElementwiseOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ElementwiseOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) The first input tensor of elementwise op");
AddInput("Y", "(Tensor) The second input tensor of elementwise op");
diff --git a/paddle/operators/elementwise_sub_op.cc b/paddle/operators/elementwise_sub_op.cc
index 3e4d19361ead0100e45e50880d402e3d2b8557ff..1c4168621c343f14d603b18dd6c518052f83ad0d 100644
--- a/paddle/operators/elementwise_sub_op.cc
+++ b/paddle/operators/elementwise_sub_op.cc
@@ -19,8 +19,7 @@ namespace paddle {
namespace operators {
class ElementwiseSubOpMaker : public ElementwiseOpMaker {
public:
- ElementwiseSubOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ElementwiseSubOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: ElementwiseOpMaker(proto, op_checker) {
SetComment("Sub", "$Out = X - Y$");
AddComment(comment_);
diff --git a/paddle/operators/expand_op.cc b/paddle/operators/expand_op.cc
index 8b3cddbb944de250d5754a2be64dd8e7ec53003a..08fa91ed72aa41ed2f513c090b9085410bb5cc47 100644
--- a/paddle/operators/expand_op.cc
+++ b/paddle/operators/expand_op.cc
@@ -55,7 +55,7 @@ class ExpandOp : public framework::OperatorWithKernel {
class ExpandOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ExpandOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ ExpandOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor, default Tensor) A tensor with rank in [1, 6]."
diff --git a/paddle/operators/feed_op.cc b/paddle/operators/feed_op.cc
index ee43c22fb13e203c7de1a7e6d1586423fcbfb25a..66b8080c26192a74cc27bce9a00107de89822717 100644
--- a/paddle/operators/feed_op.cc
+++ b/paddle/operators/feed_op.cc
@@ -54,8 +54,7 @@ class FeedOp : public framework::OperatorBase {
class FeedOpInfoMaker : public framework::OpProtoAndCheckerMaker {
public:
- FeedOpInfoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ FeedOpInfoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of feed op");
AddOutput("Out", "The output of feed op");
diff --git a/paddle/operators/fetch_op.cc b/paddle/operators/fetch_op.cc
index 1ae07194c235ce6724f59c9c60df80f957787cda..616590f2001be3bea4e50c0c1755a80eb20e9348 100644
--- a/paddle/operators/fetch_op.cc
+++ b/paddle/operators/fetch_op.cc
@@ -61,8 +61,7 @@ class FetchOp : public framework::OperatorBase {
class FetchOpInfoMaker : public framework::OpProtoAndCheckerMaker {
public:
- FetchOpInfoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ FetchOpInfoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of fetch op");
AddOutput("Out", "The output of fetch op");
diff --git a/paddle/operators/fill_constant_batch_size_like_op.cc b/paddle/operators/fill_constant_batch_size_like_op.cc
index 7fb74e2b950338fbd05515f844959862504eddce..7a7e280e78309582a627087bdbdfea358c37b9eb 100644
--- a/paddle/operators/fill_constant_batch_size_like_op.cc
+++ b/paddle/operators/fill_constant_batch_size_like_op.cc
@@ -52,7 +52,7 @@ class FillConstantBatchSizeLikeOp : public framework::OperatorWithKernel {
framework::OpKernelType GetKernelType(
const framework::ExecutionContext &ctx) const override {
return framework::OpKernelType(
-        static_cast<framework::DataType>(ctx.Attr<int>("dtype")),
+        static_cast<framework::proto::DataType>(ctx.Attr<int>("dtype")),
ctx.device_context());
}
};
@@ -60,13 +60,12 @@ class FillConstantBatchSizeLikeOp : public framework::OperatorWithKernel {
class FillConstantBatchSizeLikeOpMaker
: public framework::OpProtoAndCheckerMaker {
public:
- FillConstantBatchSizeLikeOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ FillConstantBatchSizeLikeOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddAttr("dtype",
"(int, default 5 (FP32)) "
"Output data type")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
AddInput("Input",
"(Tensor) Tensor "
"whose dim_idx th dimension is used to specify the batch_size");
diff --git a/paddle/operators/fill_constant_op.cc b/paddle/operators/fill_constant_op.cc
index 3d5f84bc239615797a5cf01a74150fdb7dfc1b80..3489079eaa3e8f04e27941de942ce9e14f8434f9 100644
--- a/paddle/operators/fill_constant_op.cc
+++ b/paddle/operators/fill_constant_op.cc
@@ -34,7 +34,8 @@ class FillConstantOp : public framework::OperatorBase {
using framework::OperatorBase::OperatorBase;
void Run(const framework::Scope &scope,
const platform::DeviceContext &dev_ctx) const override {
-    auto data_type = static_cast<framework::DataType>(Attr<int>("dtype"));
+    auto data_type =
+        static_cast<framework::proto::DataType>(Attr<int>("dtype"));
    auto value = Attr<float>("value");
    auto force_cpu = Attr<bool>("force_cpu");
auto &out =
@@ -52,13 +53,12 @@ class FillConstantOp : public framework::OperatorBase {
class FillConstantOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- FillConstantOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ FillConstantOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddAttr("dtype",
"(int, default 5 (FP32)) "
"Output data type")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
AddAttr>("shape", "(vector) The shape of the output");
AddAttr("value", "(float, default 0) The value to be filled")
.SetDefault(0.0f);
diff --git a/paddle/operators/fill_op.cc b/paddle/operators/fill_op.cc
index 382e161c5d83ba560411b1f231aa896028b709b8..f0c6cff8e34c9038c2321c0326bd2ef728d665ba 100644
--- a/paddle/operators/fill_op.cc
+++ b/paddle/operators/fill_op.cc
@@ -48,7 +48,7 @@ class FillOp : public framework::OperatorBase {
"Cannot find variable %s", Output("Out"))
.GetMutable<framework::LoDTensor>());
out.Resize(framework::make_ddim(Attr<std::vector<int>>("shape")));
- auto dtype = static_cast<framework::DataType>(Attr<int>("dtype"));
+ auto dtype = static_cast<framework::proto::DataType>(Attr<int>("dtype"));
platform::CPUPlace cpu;
auto force_cpu = Attr<bool>("force_cpu");
out.mutable_data(force_cpu ? cpu : dev_ctx.GetPlace(),
@@ -76,7 +76,7 @@ class FillOp : public framework::OperatorBase {
class FillOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- FillOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ FillOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddComment(R"DOC(Fill operator
@@ -88,7 +88,7 @@ Fill an tensor with `value` and `shape`. The type of the tensor is specify by
"value", "The float values of tensor, which are flatten in row major");
AddAttr>("shape", "The shape of output tensor");
AddAttr("dtype", "The data type of output tensor, Default is float")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
AddAttr("force_cpu",
"Whether the output tensor must be at CPU memory or not. "
"Default is false.")
diff --git a/paddle/operators/fill_zeros_like_op.cc b/paddle/operators/fill_zeros_like_op.cc
index 720c11f5f12a8dea971fe82db6afe8f6b0d9ee1a..3e828f84d076f1afc841db2c1664f5f5d9c90dc6 100644
--- a/paddle/operators/fill_zeros_like_op.cc
+++ b/paddle/operators/fill_zeros_like_op.cc
@@ -33,8 +33,7 @@ class FillZerosLikeOp : public framework::OperatorWithKernel {
class FillZerosLikeOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- FillZerosLikeOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ FillZerosLikeOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of fill-zeros-like op.");
AddOutput("Y", "The variable will be filled up with zeros.");
diff --git a/paddle/operators/ftrl_op.cc b/paddle/operators/ftrl_op.cc
index b14913ff213c84051b5a945f4a470cea4039a289..d00700823d48eb2ea4fc64d1fa2989f18c7c5f18 100644
--- a/paddle/operators/ftrl_op.cc
+++ b/paddle/operators/ftrl_op.cc
@@ -57,7 +57,7 @@ class FTRLOp : public framework::OperatorWithKernel {
class FTRLOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- FTRLOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ FTRLOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param",
"(Tensor, default Tensor) "
diff --git a/paddle/operators/gather_op.cc b/paddle/operators/gather_op.cc
index 8f80fb162519f60fcce897b3c31a3507bbf6ba6d..47af222314c40a2c77ee422ccc70602078b3f1fb 100644
--- a/paddle/operators/gather_op.cc
+++ b/paddle/operators/gather_op.cc
@@ -67,7 +67,7 @@ class GatherGradOp : public framework::OperatorWithKernel {
class GatherOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- GatherOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ GatherOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The source input of gather op");
AddInput("Index", "The index input of gather op");
diff --git a/paddle/operators/gaussian_random_op.cc b/paddle/operators/gaussian_random_op.cc
index 254c83e1378a121d99c89d9d8705935b5f06edc8..5eab1d5f4ee067db602ab81a9df1854bcfaf78a8 100644
--- a/paddle/operators/gaussian_random_op.cc
+++ b/paddle/operators/gaussian_random_op.cc
@@ -60,15 +60,14 @@ class GaussianRandomOp : public framework::OperatorWithKernel {
framework::OpKernelType GetKernelType(
const framework::ExecutionContext& ctx) const override {
return framework::OpKernelType(
- static_cast<framework::DataType>(ctx.Attr<int>("dtype")),
+ static_cast<framework::proto::DataType>(ctx.Attr<int>("dtype")),
ctx.device_context());
}
};
class GaussianRandomOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- GaussianRandomOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ GaussianRandomOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddOutput("Out", "Output matrix of gaussian random op");
@@ -91,7 +90,7 @@ class GaussianRandomOpMaker : public framework::OpProtoAndCheckerMaker {
AddAttr("dtype",
"(int, default 5(FP32)) "
"Output data type.")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
AddComment(R"DOC(
GaussianRandom Operator.
diff --git a/paddle/operators/gru_op.cc b/paddle/operators/gru_op.cc
index 311e7edcf1519bc706a51e4d9242a1ebee5168ca..8e7000654c62b50a3ca130e2ffed4a0f5880de91 100644
--- a/paddle/operators/gru_op.cc
+++ b/paddle/operators/gru_op.cc
@@ -67,7 +67,7 @@ class GRUOp : public framework::OperatorWithKernel {
class GRUOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- GRUOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ GRUOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Input",
"(LoDTensor) The first input is a LodTensor, which supports "
diff --git a/paddle/operators/gru_unit_op.cc b/paddle/operators/gru_unit_op.cc
index 705de87be5b67fbc343a89eeba2282941b264c8a..7e5f674a8c020d931fd375ff5994da18052aa8fa 100644
--- a/paddle/operators/gru_unit_op.cc
+++ b/paddle/operators/gru_unit_op.cc
@@ -71,8 +71,7 @@ class GRUUnitOp : public framework::OperatorWithKernel {
class GRUUnitOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- GRUUnitOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ GRUUnitOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Input",
"(Tensor) Matrix with shape [batch_size, frame_size * 3] for the "
diff --git a/paddle/operators/hinge_loss_op.cc b/paddle/operators/hinge_loss_op.cc
index 373b4d99b47f2a8ab06c7584a25acee59b6f3e3b..19d2e9dc56fe11f9dfb13e8cb271a23e128bf91b 100644
--- a/paddle/operators/hinge_loss_op.cc
+++ b/paddle/operators/hinge_loss_op.cc
@@ -46,8 +46,7 @@ class HingeLossOp : public framework::OperatorWithKernel {
template <typename AttrType>
class HingeLossOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- HingeLossOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ HingeLossOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Logits",
"The input value (Logits) of Hinge loss op."
diff --git a/paddle/operators/huber_loss_op.cc b/paddle/operators/huber_loss_op.cc
index 11828d083a55f0a38cf3b8513b7395bbb5592581..5c92f2c7b2d2f701bcc487716db41a0cce91002f 100644
--- a/paddle/operators/huber_loss_op.cc
+++ b/paddle/operators/huber_loss_op.cc
@@ -45,8 +45,7 @@ class HuberLossOp : public framework::OperatorWithKernel {
template <typename AttrType>
class HuberLossOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- HuberLossOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ HuberLossOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"The input value of huber loss op."
diff --git a/paddle/operators/increment_op.cc b/paddle/operators/increment_op.cc
index 54911267e36dfdbc62d533f40f0b754e7d2cb7bf..3a53ea89dc9a73d170ca5a9fe6cffde02a9f283b 100644
--- a/paddle/operators/increment_op.cc
+++ b/paddle/operators/increment_op.cc
@@ -70,8 +70,7 @@ class IncrementOp : public framework::OperatorBase {
class IncrementOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- IncrementOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ IncrementOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) The input tensor of increment operator");
AddOutput("Out", "(Tensor) The output tensor of increment operator.");
diff --git a/paddle/operators/is_empty_op.cc b/paddle/operators/is_empty_op.cc
index 54fecf44e881b5c283c81580fd161da9808d253e..3616a0414f9e889376f8ba46e7567d7171eff3bf 100644
--- a/paddle/operators/is_empty_op.cc
+++ b/paddle/operators/is_empty_op.cc
@@ -47,8 +47,7 @@ class IsEmptyOp : public framework::OperatorBase {
class IsEmptyOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- IsEmptyOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ IsEmptyOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(kInput, "(Tensor) Tensor which is to be checked.");
AddOutput(kOutput, "(Tensor) a boolean Tensor that indicate empty or not.");
diff --git a/paddle/operators/l1_norm_op.cc b/paddle/operators/l1_norm_op.cc
index c0b51202c6bb708a682568175c56583394961535..3d1da79763102c876de3b45e56438da909b00394 100644
--- a/paddle/operators/l1_norm_op.cc
+++ b/paddle/operators/l1_norm_op.cc
@@ -48,7 +48,7 @@ class L1NormGradOp : public framework::OperatorWithKernel {
class L1NormOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- L1NormOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ L1NormOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) The input of l1_norm op.");
AddOutput("Out", "(Scalar) The output of l1_norm op.");
diff --git a/paddle/operators/linear_chain_crf_op.cc b/paddle/operators/linear_chain_crf_op.cc
index 896e3657d4406c5a1fe07f1712abb2ff0370fd3c..ad15e8ebd2b323929a4448e98a18c5cad6f5ed12 100644
--- a/paddle/operators/linear_chain_crf_op.cc
+++ b/paddle/operators/linear_chain_crf_op.cc
@@ -19,8 +19,7 @@ namespace operators {
class LinearChainCRFOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LinearChainCRFOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ LinearChainCRFOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Emission",
"(LoDTensor, default LoDTensor) "
diff --git a/paddle/operators/load_op.cc b/paddle/operators/load_op.cc
index 4e58b84430f2a8697bbbc1acf971fd063120f563..6c51dad27a4d9cd9e48b8591b1f14472c83ceaf1 100644
--- a/paddle/operators/load_op.cc
+++ b/paddle/operators/load_op.cc
@@ -58,8 +58,7 @@ class LoadOp : public framework::OperatorBase {
class LoadOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- LoadOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ LoadOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddOutput("Out", "(Tensor) The tensor need to be loaded");
AddAttr("file_path",
diff --git a/paddle/operators/lod_array_length_op.cc b/paddle/operators/lod_array_length_op.cc
index b2f4ec57fadd2ba3dc8708abbfebaaeb67100f1e..cc8593810baf83e12368e67ceaeef0631e35c051 100644
--- a/paddle/operators/lod_array_length_op.cc
+++ b/paddle/operators/lod_array_length_op.cc
@@ -38,8 +38,7 @@ class LoDArrayLengthOp : public framework::OperatorBase {
class LoDArrayLengthProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- LoDArrayLengthProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ LoDArrayLengthProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(LoDTensorArray) The input tensor array.");
AddOutput("Out", "(Tensor) 1x1 CPU Tensor of length, int64_t");
diff --git a/paddle/operators/lod_rank_table_op.cc b/paddle/operators/lod_rank_table_op.cc
index f7d4db1947b83fecf57575e17fafe26795c92bdd..46577d0c5821a1738bd050815f46776591bdfdde 100644
--- a/paddle/operators/lod_rank_table_op.cc
+++ b/paddle/operators/lod_rank_table_op.cc
@@ -30,13 +30,13 @@ class LoDRankTableOp : public framework::OperatorBase {
scope.FindVar(Output("Out"))->GetMutable<framework::LoDRankTable>();
VLOG(10) << "Level = " << static_cast<size_t>(Attr<int>("level"));
out->Reset(x.lod(), static_cast<size_t>(Attr<int>("level")));
+ VLOG(10) << Input("X") << "'s lod information is " << *out;
}
};
class LoDRankTableOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- LoDRankTableOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ LoDRankTableOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(LoDTensor) input lod tensor, must contain lod information.");
@@ -67,7 +67,7 @@ class LoDRankTableInferVarType : public framework::VarTypeInference {
framework::BlockDescBind *block) const override {
for (auto &o : op_desc.Output("Out")) {
block->FindRecursiveOrCreateVar(o)->SetType(
- framework::VarDesc::LOD_RANK_TABLE);
+ framework::proto::VarDesc::LOD_RANK_TABLE);
}
}
};
diff --git a/paddle/operators/lod_reset_op.cc b/paddle/operators/lod_reset_op.cc
index 32831cb1e2cf188a507773ef1e00b22de98d82ab..ccb87258c6b8629cd18d08185bfcc84c247070dd 100644
--- a/paddle/operators/lod_reset_op.cc
+++ b/paddle/operators/lod_reset_op.cc
@@ -48,8 +48,7 @@ class LoDResetOp : public framework::OperatorWithKernel {
class LoDResetOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LoDResetOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ LoDResetOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(LoDTensor) The input tensor of lod_reset operator.");
AddInput("TargetLoD",
diff --git a/paddle/operators/lod_tensor_to_array_op.cc b/paddle/operators/lod_tensor_to_array_op.cc
index b970bf31773f4c6feb0010bd40ba906b388ec310..33af0e819f757b574b1a4cc3426a8375bdd702c6 100644
--- a/paddle/operators/lod_tensor_to_array_op.cc
+++ b/paddle/operators/lod_tensor_to_array_op.cc
@@ -97,8 +97,7 @@ class LoDTensorToArrayOp : public framework::OperatorBase {
class LoDTensorToArrayOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- LoDTensorToArrayOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ LoDTensorToArrayOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "");
AddInput("RankTable", "");
@@ -131,7 +130,7 @@ class LoDTensorToArrayInferVarType : public framework::VarTypeInference {
void operator()(const framework::OpDescBind &op_desc,
framework::BlockDescBind *block) const override {
for (auto &out_var : op_desc.Output("Out")) {
- block->Var(out_var)->SetType(framework::VarDesc::LOD_TENSOR_ARRAY);
+ block->Var(out_var)->SetType(framework::proto::VarDesc::LOD_TENSOR_ARRAY);
}
}
};
diff --git a/paddle/operators/log_loss_op.cc b/paddle/operators/log_loss_op.cc
index 4524229a330a0ceddca673e2b2a6d836a15a2e3f..f714945354c5668f58e273dc8d6c7c16d51ac17d 100644
--- a/paddle/operators/log_loss_op.cc
+++ b/paddle/operators/log_loss_op.cc
@@ -46,8 +46,7 @@ class LogLossOp : public framework::OperatorWithKernel {
template <typename AttrType>
class LogLossOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LogLossOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ LogLossOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Predicted",
"The input value (Predicted) of Log loss op."
diff --git a/paddle/operators/logical_op.cc b/paddle/operators/logical_op.cc
index c818d5e9c19abab15ebdc2b3485e03ab66cf649d..2bd6c6efae38d6d8d49cc9f3fd97cf316fbbdd0a 100644
--- a/paddle/operators/logical_op.cc
+++ b/paddle/operators/logical_op.cc
@@ -20,8 +20,7 @@ namespace operators {
template <typename OpComment>
class BinaryLogicalOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- BinaryLogicalOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ BinaryLogicalOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
OpComment comment;
AddInput("X",
@@ -45,8 +44,7 @@ Each element of Out is calculated by %s
template <typename OpComment>
class UnaryLogicalOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- UnaryLogicalOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ UnaryLogicalOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
OpComment comment;
AddInput("X", string::Sprintf("(LoDTensor) Operand of %s operator",
diff --git a/paddle/operators/lookup_table_op.cc b/paddle/operators/lookup_table_op.cc
index 93e812ac5be5aea6bf3ab353d31480322c51ccbc..606b44808edf11a72eb154cbdc8356bb1fc9b9d5 100644
--- a/paddle/operators/lookup_table_op.cc
+++ b/paddle/operators/lookup_table_op.cc
@@ -51,8 +51,7 @@ class LookupTableOp : public framework::OperatorWithKernel {
class LookupTableOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LookupTableOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ LookupTableOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("W",
"An input represents embedding tensors, "
@@ -117,11 +116,12 @@ class LookupTableOpGradVarTypeInference : public framework::VarTypeInference {
if (is_sparse) {
VLOG(3) << "lookup_table_grad op " << framework::GradVarName("W")
<< " is set to SelectedRows";
- block->Var(out_var_name)->SetType(framework::VarDesc::SELECTED_ROWS);
+ block->Var(out_var_name)
+ ->SetType(framework::proto::VarDesc::SELECTED_ROWS);
} else {
VLOG(3) << "lookup_table_grad op " << framework::GradVarName("W")
<< " is set to LoDTensor";
- block->Var(out_var_name)->SetType(framework::VarDesc::LOD_TENSOR);
+ block->Var(out_var_name)->SetType(framework::proto::VarDesc::LOD_TENSOR);
}
}
};
diff --git a/paddle/operators/lrn_op.cc b/paddle/operators/lrn_op.cc
index b5b7bc940a85ac2bbb6c6b303284777df714b7d6..3b77b27b72d7079c10695da43a4fcfed9b4c855c 100644
--- a/paddle/operators/lrn_op.cc
+++ b/paddle/operators/lrn_op.cc
@@ -140,7 +140,7 @@ class LRNOp : public framework::OperatorWithKernel {
template <typename T>
class LRNOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LRNOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ LRNOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor) The input of LRN operator. "
diff --git a/paddle/operators/lstm_op.cc b/paddle/operators/lstm_op.cc
index 2db7da30db416e03cf473c8e65b023d9265e9193..f82156170e672b5e590ddb8e0e6e8a2a24ea6868 100644
--- a/paddle/operators/lstm_op.cc
+++ b/paddle/operators/lstm_op.cc
@@ -102,7 +102,7 @@ class LSTMOp : public framework::OperatorWithKernel {
class LSTMOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LSTMOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ LSTMOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Input",
"(LoDTensor) the first input is a LodTensor, which support "
diff --git a/paddle/operators/lstm_unit_op.cc b/paddle/operators/lstm_unit_op.cc
index b6eb33bafe50548502a0478d37842fd2dfdebda4..34da75c00d336d3f540a9472ee2e6c4b224add09 100644
--- a/paddle/operators/lstm_unit_op.cc
+++ b/paddle/operators/lstm_unit_op.cc
@@ -48,8 +48,7 @@ class LstmUnitOp : public framework::OperatorWithKernel {
class LstmUnitOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- LstmUnitOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ LstmUnitOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"Lstm unit only applies non-linear activations, please make sure"
diff --git a/paddle/operators/margin_rank_loss_op.cc b/paddle/operators/margin_rank_loss_op.cc
index 42e8961c0ea57650a823ee4b58516f66a455b385..fddc72aec0aa7fa17ef585388c53da640d3c1837 100644
--- a/paddle/operators/margin_rank_loss_op.cc
+++ b/paddle/operators/margin_rank_loss_op.cc
@@ -42,8 +42,7 @@ class MarginRankLossOp : public framework::OperatorWithKernel {
template <typename T>
class MarginRankLossOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MarginRankLossOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ MarginRankLossOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X1",
"(2-D tensor with shape [batch_size x 1]) The score for "
diff --git a/paddle/operators/matmul_op.cc b/paddle/operators/matmul_op.cc
index ee0bc0c3708ac20ad00e3222060244d42dbd6f2f..fd65d894d5749c97f860d614de354e89f6d9441d 100644
--- a/paddle/operators/matmul_op.cc
+++ b/paddle/operators/matmul_op.cc
@@ -130,7 +130,7 @@ class MatMulOp : public framework::OperatorWithKernel {
class MatMulOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MatMulOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ MatMulOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The first input of MatMul op");
AddInput("Y", "The second input of MatMul op");
diff --git a/paddle/operators/max_sequence_len_op.cc b/paddle/operators/max_sequence_len_op.cc
index 798022c9dd904a0ac189b4b550a94264a433ebf2..dec2874a1fd13c1379e37d7b9755d465ffb1a6f7 100644
--- a/paddle/operators/max_sequence_len_op.cc
+++ b/paddle/operators/max_sequence_len_op.cc
@@ -40,8 +40,7 @@ class MaxSeqenceLenOp : public framework::OperatorBase {
class MaxSeqenceLenOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- MaxSeqenceLenOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ MaxSeqenceLenOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("RankTable", "The lod_rank_table.");
AddOutput("Out", "The max sequence length.");
diff --git a/paddle/operators/maxout_op.cc b/paddle/operators/maxout_op.cc
index 011616e615a36efa0efe9ff15e678f1486c5177a..3ee32269417e80cd14a6ff0f8e52c0b2dec4b8be 100644
--- a/paddle/operators/maxout_op.cc
+++ b/paddle/operators/maxout_op.cc
@@ -20,7 +20,7 @@ using framework::Tensor;
class MaxOutOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MaxOutOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ MaxOutOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"X",
diff --git a/paddle/operators/mean_op.cc b/paddle/operators/mean_op.cc
index 8932d700c2ae17eefe919eefae2282ae4a5a80a8..e27f9eeac6e7cdc0615f1368b21df252866d9b7e 100644
--- a/paddle/operators/mean_op.cc
+++ b/paddle/operators/mean_op.cc
@@ -32,7 +32,7 @@ class MeanOp : public framework::OperatorWithKernel {
class MeanOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MeanOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ MeanOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of mean op");
AddOutput("Out", "The output of mean op");
diff --git a/paddle/operators/merge_lod_tensor_op.cc b/paddle/operators/merge_lod_tensor_op.cc
index adc688dbd5e13a2203d6842a12acdb8625288275..ec76cfdf279c93f28cf54c744eafe07b3957c06b 100644
--- a/paddle/operators/merge_lod_tensor_op.cc
+++ b/paddle/operators/merge_lod_tensor_op.cc
@@ -114,8 +114,7 @@ class MergeLoDTensorOp : public framework::OperatorBase {
class MergeLoDTensorOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- MergeLoDTensorOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ MergeLoDTensorOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"The input LoDTensor, contains complete lod information to "
diff --git a/paddle/operators/minus_op.cc b/paddle/operators/minus_op.cc
index 27f0c8de2053064e65d9984ec9bd4242fee48e5f..eb65fededfd633de51b0c418b78aab0726e70320 100644
--- a/paddle/operators/minus_op.cc
+++ b/paddle/operators/minus_op.cc
@@ -46,7 +46,7 @@ class MinusOp : public framework::OperatorWithKernel {
class MinusOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MinusOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ MinusOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The left tensor of minus operator.");
AddInput("Y", "The right tensor of minus operator.");
diff --git a/paddle/operators/modified_huber_loss_op.cc b/paddle/operators/modified_huber_loss_op.cc
index f0a42491bf04a5bbe2de10de2f702877c9a2f839..dbb28f8466b141502fbba8ae5d8a511a6b1d74c3 100644
--- a/paddle/operators/modified_huber_loss_op.cc
+++ b/paddle/operators/modified_huber_loss_op.cc
@@ -39,8 +39,7 @@ class ModifiedHuberLossOp : public framework::OperatorWithKernel {
class ModifiedHuberLossOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ModifiedHuberLossOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ModifiedHuberLossOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"The input tensor of modified huber loss op. "
diff --git a/paddle/operators/momentum_op.cc b/paddle/operators/momentum_op.cc
index 2ab48fedecf0cce95dcf4d0593dcd4b30bc1f505..15b8b80776732f43c3ef4f8b80cffedf5c2a76fd 100644
--- a/paddle/operators/momentum_op.cc
+++ b/paddle/operators/momentum_op.cc
@@ -54,8 +54,7 @@ class MomentumOp : public framework::OperatorWithKernel {
class MomentumOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MomentumOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ MomentumOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param",
"(Tensor, default Tensor) "
diff --git a/paddle/operators/mul_op.cc b/paddle/operators/mul_op.cc
index bc4a5fdf0b37ce07b4c07bba9e1af5611d2be7e3..a4bf0711de0efe0967b1211b7f32e5e2245860bc 100644
--- a/paddle/operators/mul_op.cc
+++ b/paddle/operators/mul_op.cc
@@ -71,7 +71,7 @@ class MulOpShapeInference : public framework::InferShapeBase {
class MulOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MulOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ MulOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The first input of mul op");
AddInput("Y", "The second input of mul op");
diff --git a/paddle/operators/multiplex_op.cc b/paddle/operators/multiplex_op.cc
index b1ee8051c4c48f575690b38142ae082930fe2070..f524de60dbb3c652aa2a74478af6c0e38fb3cb43 100644
--- a/paddle/operators/multiplex_op.cc
+++ b/paddle/operators/multiplex_op.cc
@@ -61,8 +61,7 @@ class MultiplexOp : public framework::OperatorWithKernel {
class MultiplexOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MultiplexOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ MultiplexOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Ids", "The index tensor of multiplex operator.");
AddInput("X", "The candidate tensors of multiplex operator.")
diff --git a/paddle/operators/name_convention.md b/paddle/operators/name_convention.md
index b5cb176e003b4584321142ac9f1c3380b7010936..a02b356f058da68442516c2705d0bac140f8ef18 100644
--- a/paddle/operators/name_convention.md
+++ b/paddle/operators/name_convention.md
@@ -35,8 +35,8 @@ Here we give some examples to show how these rules will be used.
```c++
class AccumulateOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- AccumulateOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ AccumulateOpMaker(OpProto *proto,
+ OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) The input tensor that has to be accumulated to the output tensor.
If the output size is not the same as input size,
diff --git a/paddle/operators/nccl_op.cc b/paddle/operators/nccl_op.cc
index 22a37ff1bbf6b8cfb2cbc3c3dbbb20a87c5ea4e7..e19f534f8a2d05cd9b569a0eebb287db3d3321ba 100644
--- a/paddle/operators/nccl_op.cc
+++ b/paddle/operators/nccl_op.cc
@@ -43,8 +43,7 @@ class NCCLInitOp : public framework::OperatorBase {
class NCCLInitOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- NCCLInitOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ NCCLInitOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddOutput("Communicator",
"Create Communicator for communicating between gpus");
@@ -52,7 +51,7 @@ class NCCLInitOpMaker : public framework::OpProtoAndCheckerMaker {
AddAttr("dtype",
"(int, default 5 (FP32)) "
"Output data type")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
AddComment(R"DOC(
NCCLInit Operator.
@@ -141,8 +140,7 @@ class NCCLBcastOp : public framework::OperatorWithKernel {
// AllreduceOp
class NCCLAllReduceOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- NCCLAllReduceOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ NCCLAllReduceOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of AllReduce op");
AddInput("Communicator", "Communicator for communicating between gpus");
@@ -163,8 +161,7 @@ AllReduce the input tensors.
// ReduceOp
class NCCLReduceOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- NCCLReduceOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ NCCLReduceOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of Reduce op");
AddInput("Communicator", "Communicator for communicating between gpus");
@@ -190,8 +187,7 @@ Reduce the tensors.
// BcastOp
class NCCLBcastOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- NCCLBcastOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ NCCLBcastOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input of BcastSend op");
AddInput("Communicator", "Communicator for communicating between gpus");
diff --git a/paddle/operators/nce_op.cc b/paddle/operators/nce_op.cc
index 5ad1610fde041ee934486ef98ba41dca42559100..6dd457f7a2e410b65680004599ab753acbb34f71 100644
--- a/paddle/operators/nce_op.cc
+++ b/paddle/operators/nce_op.cc
@@ -73,7 +73,7 @@ class NCEOp : public framework::OperatorWithKernel {
class NCEOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- NCEOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ NCEOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Input", "(Tensor) A tensor of shape [batch_size, dim].");
AddInput(
diff --git a/paddle/operators/pad_op.cc b/paddle/operators/pad_op.cc
index 936dde22c34a30c5a50e2ac8a76f0f91dfb328ab..8d2d031fcdb6bc4ee955b6db6df2892aa7ee5d53 100644
--- a/paddle/operators/pad_op.cc
+++ b/paddle/operators/pad_op.cc
@@ -48,7 +48,7 @@ class PadOp : public framework::OperatorWithKernel {
class PadOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- PadOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ PadOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"The input of pad op. "
diff --git a/paddle/operators/pool_op.cc b/paddle/operators/pool_op.cc
index 45fa20280c1ad20f63d6542d5199e002ff60495f..50057eb6483e9c9e745bc07dee26a0bbbbb5a48c 100644
--- a/paddle/operators/pool_op.cc
+++ b/paddle/operators/pool_op.cc
@@ -67,8 +67,7 @@ void PoolOpGrad::InferShape(framework::InferShapeContext *ctx) const {
ctx->SetOutputDim(framework::GradVarName("X"), ctx->GetInputDim("X"));
}
-Pool2dOpMaker::Pool2dOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+Pool2dOpMaker::Pool2dOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"X",
@@ -136,8 +135,7 @@ Example:
)DOC");
}
-Pool3dOpMaker::Pool3dOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+Pool3dOpMaker::Pool3dOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor) The input tensor of pooling operator. "
diff --git a/paddle/operators/pool_op.h b/paddle/operators/pool_op.h
index ab85d587a3131237d7a9ec774a11193c70220c7c..3860e295f4b4dbeb2d60cfb304847de39083f1e1 100644
--- a/paddle/operators/pool_op.h
+++ b/paddle/operators/pool_op.h
@@ -40,14 +40,12 @@ class PoolOpGrad : public framework::OperatorWithKernel {
class Pool2dOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Pool2dOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker);
+ Pool2dOpMaker(OpProto* proto, OpAttrChecker* op_checker);
};
class Pool3dOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Pool3dOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker);
+ Pool3dOpMaker(OpProto* proto, OpAttrChecker* op_checker);
};
template <typename Place, typename T>
diff --git a/paddle/operators/pool_with_index_op.cc b/paddle/operators/pool_with_index_op.cc
index 1a2383f8b80357d2927c3b6a8c57c787ba7e366d..980e9dc08b2ac160e6e06dfb11ff8f3e1279be46 100644
--- a/paddle/operators/pool_with_index_op.cc
+++ b/paddle/operators/pool_with_index_op.cc
@@ -100,8 +100,7 @@ class MaxPoolWithIndexOpGrad : public framework::OperatorWithKernel {
class MaxPool2dWithIndexOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MaxPool2dWithIndexOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ MaxPool2dWithIndexOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"X",
@@ -178,8 +177,7 @@ Example:
class MaxPool3dWithIndexOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- MaxPool3dWithIndexOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ MaxPool3dWithIndexOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor) The input tensor of pooling operator. "
diff --git a/paddle/operators/positive_negative_pair_op.cc b/paddle/operators/positive_negative_pair_op.cc
index 4ba40a62ec5f696ad980c2913f7e162879a557e2..ab9f67bfe6b3d6f59b35a57cb8135e9c6d00636e 100644
--- a/paddle/operators/positive_negative_pair_op.cc
+++ b/paddle/operators/positive_negative_pair_op.cc
@@ -95,8 +95,7 @@ class PositiveNegativePairOp : public framework::OperatorWithKernel {
class PositiveNegativePairOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- PositiveNegativePairOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ PositiveNegativePairOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Score",
"(Tensor, float) Model Score on an item (with "
diff --git a/paddle/operators/precision_recall_op.cc b/paddle/operators/precision_recall_op.cc
index 1ace4f2a5935dcb4239526c42599a42d288ff552..21dcd28c67bb5eb1d3af0ac8ba16f1d5df1958a8 100644
--- a/paddle/operators/precision_recall_op.cc
+++ b/paddle/operators/precision_recall_op.cc
@@ -90,8 +90,7 @@ class PrecisionRecallOp : public framework::OperatorWithKernel {
class PrecisionRecallOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- PrecisionRecallOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ PrecisionRecallOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("MaxProbs",
"(Tensor, default Tensor) A 2-D tensor with shape N x 1, "
diff --git a/paddle/operators/prelu_op.cc b/paddle/operators/prelu_op.cc
index 317a2a40154f92f2e13a3012d2f7a63df9a69afb..4af8f85277ddb2262aa534f8d81be30449ccf8da 100644
--- a/paddle/operators/prelu_op.cc
+++ b/paddle/operators/prelu_op.cc
@@ -38,7 +38,7 @@ class PReluOp : public framework::OperatorWithKernel {
class PReluOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- PReluOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ PReluOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input tensor of prelu operator.");
AddInput("Alpha", "The alpha weight of prelu operator.");
diff --git a/paddle/operators/proximal_adagrad_op.cc b/paddle/operators/proximal_adagrad_op.cc
index cc350f6d26e6d8bd6e59f2fda74a3b734df55247..b92f46b5bd4e48a25f8c87873c5df53f1753b71b 100644
--- a/paddle/operators/proximal_adagrad_op.cc
+++ b/paddle/operators/proximal_adagrad_op.cc
@@ -59,8 +59,7 @@ class ProximalAdagradOp : public framework::OperatorWithKernel {
class ProximalAdagradOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ProximalAdagradOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ProximalAdagradOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param",
"(Tensor, default Tensor) "
diff --git a/paddle/operators/proximal_gd_op.cc b/paddle/operators/proximal_gd_op.cc
index 0b26beb3ac3803c78f45cc2ce0a8f444bdc313b6..2d3bbdaf320a4d6bdf18ec92230a81ad98371498 100644
--- a/paddle/operators/proximal_gd_op.cc
+++ b/paddle/operators/proximal_gd_op.cc
@@ -47,8 +47,7 @@ class ProximalGDOp : public framework::OperatorWithKernel {
class ProximalGDOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ProximalGDOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ProximalGDOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param",
"(Tensor, default Tensor) "
diff --git a/paddle/operators/rank_loss_op.cc b/paddle/operators/rank_loss_op.cc
index b80b175792f3fc56d689c187b7182198542d7345..b5a9949d236bfa6910227092f0682a599543a425 100644
--- a/paddle/operators/rank_loss_op.cc
+++ b/paddle/operators/rank_loss_op.cc
@@ -45,8 +45,7 @@ class RankLossOp : public framework::OperatorWithKernel {
class RankLossOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- RankLossOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ RankLossOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Label",
"(2-D Tensor with shape [batch_size x 1]) "
diff --git a/paddle/operators/recurrent_op.cc b/paddle/operators/recurrent_op.cc
index 232d926f7b975c3b8ebecad983d0f1cc54b9486f..ca3a063553d5c8eae3d62d63061bb13a4b3f8365 100644
--- a/paddle/operators/recurrent_op.cc
+++ b/paddle/operators/recurrent_op.cc
@@ -497,8 +497,7 @@ class RecurrentGradOp : public RecurrentBase {
class RecurrentOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- RecurrentOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ RecurrentOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(kInputs, "rnn inputs").AsDuplicable();
AddInput(kInitialStates, "rnn initial states").AsDuplicable();
diff --git a/paddle/operators/recv_op.cc b/paddle/operators/recv_op.cc
index eed482c1b458cd442ede523838b400d85c23a155..2cc6cf6947b601e326ed563082800946b3ff221d 100644
--- a/paddle/operators/recv_op.cc
+++ b/paddle/operators/recv_op.cc
@@ -97,7 +97,7 @@ class RecvOp : public framework::OperatorBase {
class RecvOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- RecvOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ RecvOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("RX", "(Tensor) Input tensor to be saved");
AddComment(R"DOC(
diff --git a/paddle/operators/reduce_op.cc b/paddle/operators/reduce_op.cc
index fedc2a5c37ff84ffdf8ebd2f19296db92e256e5b..19220f2f59d218e9b4d57b260b35df64b4abd2cb 100644
--- a/paddle/operators/reduce_op.cc
+++ b/paddle/operators/reduce_op.cc
@@ -83,7 +83,7 @@ class ReduceGradOp : public framework::OperatorWithKernel {
class ReduceOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ReduceOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ ReduceOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor) The input tensor. Tensors with rank at most 6 are "
@@ -135,8 +135,7 @@ If reduce_all is true, just reduce along all dimensions and output a scalar.
class ReduceSumOpMaker : public ReduceOpMaker {
public:
- ReduceSumOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ReduceSumOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: ReduceOpMaker(proto, op_checker) {
SetComment("ReduceSum", "sum");
AddComment(comment_);
@@ -145,8 +144,7 @@ class ReduceSumOpMaker : public ReduceOpMaker {
class ReduceMeanOpMaker : public ReduceOpMaker {
public:
- ReduceMeanOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ReduceMeanOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: ReduceOpMaker(proto, op_checker) {
SetComment("ReduceMean", "mean");
AddComment(comment_);
@@ -155,8 +153,7 @@ class ReduceMeanOpMaker : public ReduceOpMaker {
class ReduceMaxOpMaker : public ReduceOpMaker {
public:
- ReduceMaxOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ReduceMaxOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: ReduceOpMaker(proto, op_checker) {
SetComment("ReduceMax", "max");
AddComment(comment_);
@@ -165,8 +162,7 @@ class ReduceMaxOpMaker : public ReduceOpMaker {
class ReduceMinOpMaker : public ReduceOpMaker {
public:
- ReduceMinOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ReduceMinOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: ReduceOpMaker(proto, op_checker) {
SetComment("ReduceMin", "min");
AddComment(comment_);
diff --git a/paddle/operators/reshape_op.cc b/paddle/operators/reshape_op.cc
index d82d828747c0c822195b699359b8e62d1cf7e92d..2c5167295d8546358b09e018ee02f0941f2897d1 100644
--- a/paddle/operators/reshape_op.cc
+++ b/paddle/operators/reshape_op.cc
@@ -77,8 +77,7 @@ class ReshapeOp : public framework::OperatorWithKernel {
class ReshapeOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ReshapeOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ReshapeOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input tensor of reshape operator.");
AddOutput("Out", "The output tensor of reshape operator.");
diff --git a/paddle/operators/rmsprop_op.cc b/paddle/operators/rmsprop_op.cc
index fc3f9b8988ec7fe0093ef6b09a105747b0025ec1..f7c250bf913b9213e7d7e2cca9ecadf74cac91a1 100644
--- a/paddle/operators/rmsprop_op.cc
+++ b/paddle/operators/rmsprop_op.cc
@@ -63,8 +63,7 @@ class RmspropOp : public framework::OperatorWithKernel {
class RmspropOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- RmspropOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ RmspropOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param",
"(Tensor, default Tensor) "
diff --git a/paddle/operators/rnn_memory_helper_op.cc b/paddle/operators/rnn_memory_helper_op.cc
index 3a035f0b9acb94bab60659938e11b4996b8eaa0f..795bdf3e51a2dd323e85c497fcf203ad3ed54183 100644
--- a/paddle/operators/rnn_memory_helper_op.cc
+++ b/paddle/operators/rnn_memory_helper_op.cc
@@ -57,15 +57,14 @@ class RNNMemoryHelperOpShapeInference : public framework::InferShapeBase {
class RNNMemoryHelperOpInfoMaker : public framework::OpProtoAndCheckerMaker {
public:
- RNNMemoryHelperOpInfoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ RNNMemoryHelperOpInfoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "");
AddOutput("Out", "");
AddAttr("dtype",
"(int, default 5 (FP32)) "
"Output data type")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
AddComment("");
}
};
@@ -114,8 +113,7 @@ class RNNMemoryHelperGradOp : public framework::OperatorBase {
class RNNMemoryHelperGradOpInfoMaker
: public framework::OpProtoAndCheckerMaker {
public:
- RNNMemoryHelperGradOpInfoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ RNNMemoryHelperGradOpInfoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(framework::GradVarName("Out"), "");
AddInput("X", "");
@@ -124,7 +122,7 @@ class RNNMemoryHelperGradOpInfoMaker
AddAttr("dtype",
"(int, default 5 (FP32)) "
"Output data type")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
AddComment("");
}
};
diff --git a/paddle/operators/roi_pool_op.cc b/paddle/operators/roi_pool_op.cc
index 75fcea8401fbbc2943c0d6a50ca81288268823d8..85b6a8e15160d0c259a270f5e12ca9e67a6508ab 100644
--- a/paddle/operators/roi_pool_op.cc
+++ b/paddle/operators/roi_pool_op.cc
@@ -99,8 +99,7 @@ class ROIPoolGradOp : public framework::OperatorWithKernel {
class ROIPoolOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ROIPoolOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ROIPoolOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor), "
diff --git a/paddle/operators/row_conv_op.cc b/paddle/operators/row_conv_op.cc
index 5203a5079c8b125f8dc156202f70ce76711a1e30..6b116a9fe704e6ddf18c22455c06346ea14909d2 100644
--- a/paddle/operators/row_conv_op.cc
+++ b/paddle/operators/row_conv_op.cc
@@ -76,8 +76,7 @@ class RowConvGradOp : public framework::OperatorWithKernel {
class RowConvOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- RowConvOpMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ RowConvOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(LoDTensor), the input(X) is a LodTensor, which supports "
diff --git a/paddle/operators/save_op.cc b/paddle/operators/save_op.cc
index d4921cb80c8d78c52ae1887c36819b52621470eb..eae1146d6c61fe56ebc48ac50e1eacd62e3fa7d0 100644
--- a/paddle/operators/save_op.cc
+++ b/paddle/operators/save_op.cc
@@ -94,8 +94,7 @@ class SaveOp : public framework::OperatorBase {
class SaveOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- SaveOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ SaveOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor ) Input tensor to be saved");
AddComment(R"DOC(
diff --git a/paddle/operators/scale_op.cc b/paddle/operators/scale_op.cc
index d848be823e602e595f66138f4b5dfd6e38dd85a1..98170c0d1b22fd2cbc57c29295e6887c1bb1fa68 100644
--- a/paddle/operators/scale_op.cc
+++ b/paddle/operators/scale_op.cc
@@ -38,7 +38,7 @@ class ScaleOp : public framework::OperatorWithKernel {
template <typename AttrType>
class ScaleOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ScaleOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ ScaleOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) Input tensor of scale operator.");
AddOutput("Out", "(Tensor) Output tensor of scale operator.");
diff --git a/paddle/operators/scatter_op.cc b/paddle/operators/scatter_op.cc
index 573bbcd1875c86a2d843b6c5e9c1af4d48a5cb18..173c9582557eb4e020824d5830731e3e2312dc3c 100644
--- a/paddle/operators/scatter_op.cc
+++ b/paddle/operators/scatter_op.cc
@@ -78,8 +78,7 @@ class ScatterGradOp : public framework::OperatorWithKernel {
class ScatterOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- ScatterOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ ScatterOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Ref", "The source input of scatter op");
AddInput("Index",
diff --git a/paddle/operators/send_op.cc b/paddle/operators/send_op.cc
index a3059847f2d420359b347e3a5d514d8a3829a4e2..0d121fb48dc2dc8bd0312aa2091f4e058caa4515 100644
--- a/paddle/operators/send_op.cc
+++ b/paddle/operators/send_op.cc
@@ -59,7 +59,7 @@ class SendOp : public framework::OperatorBase {
class SendOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SendOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ SendOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) Input tensor to be saved");
AddOutput("Out", "(Tensor) Output fetched from server");
diff --git a/paddle/operators/sequence_concat_op.cc b/paddle/operators/sequence_concat_op.cc
index 9c7e5456e8238af70f920aaaa9cc652d5d12d3e9..54e8989f256e61ce9d4f76b631a3906773d04a2e 100644
--- a/paddle/operators/sequence_concat_op.cc
+++ b/paddle/operators/sequence_concat_op.cc
@@ -43,8 +43,7 @@ class SequenceConcatOp : public framework::OperatorWithKernel {
class SequenceConcatOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SequenceConcatOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SequenceConcatOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(LodTensorArray) Input is a vector of LoDTensor, "
diff --git a/paddle/operators/sequence_conv_op.cc b/paddle/operators/sequence_conv_op.cc
index f5c4f1c13331f45183d2810a95f773ad52aca13b..c5b7c81bd7c6e1110aa9e2ced629bea5d88832d1 100644
--- a/paddle/operators/sequence_conv_op.cc
+++ b/paddle/operators/sequence_conv_op.cc
@@ -100,8 +100,7 @@ class SequenceConvGradOp : public framework::OperatorWithKernel {
class SequenceConvOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SequenceConvOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SequenceConvOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"X",
diff --git a/paddle/operators/sequence_expand_op.cc b/paddle/operators/sequence_expand_op.cc
index 770161b593e232f2f2cf4a2ccb952391557b9a3d..6227408be0529e63318bddcf6fa4f1927bb05eca 100644
--- a/paddle/operators/sequence_expand_op.cc
+++ b/paddle/operators/sequence_expand_op.cc
@@ -37,8 +37,7 @@ class SequenceExpandOp : public framework::OperatorWithKernel {
class SequenceExpandOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SequenceExpandOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SequenceExpandOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor or LoDTensor) The input(X) of this operator can be a "
diff --git a/paddle/operators/sequence_pool_op.cc b/paddle/operators/sequence_pool_op.cc
index 3526e45a1b6565bc21413d381d15c02f08c587bd..0eb675caaddf1274a941cbfe29017cb9ea11f40f 100644
--- a/paddle/operators/sequence_pool_op.cc
+++ b/paddle/operators/sequence_pool_op.cc
@@ -37,8 +37,7 @@ class SequencePoolOp : public framework::OperatorWithKernel {
class SequencePoolOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SequencePoolOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SequencePoolOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(LoDTensor) The variable-length input of SequencePoolOp");
AddOutput("Out",
diff --git a/paddle/operators/sequence_slice_op.cc b/paddle/operators/sequence_slice_op.cc
index 481db8f9e548de68c102210035d4ff037ab56261..309ee1f3a82c35104db74084c4ef761bd4b06695 100644
--- a/paddle/operators/sequence_slice_op.cc
+++ b/paddle/operators/sequence_slice_op.cc
@@ -79,8 +79,7 @@ class SequenceSliceGradOp : public framework::OperatorWithKernel {
class SequenceSliceOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SequenceSliceOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SequenceSliceOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(LoDTensor), "
diff --git a/paddle/operators/sequence_softmax_op.cc b/paddle/operators/sequence_softmax_op.cc
index 37d5452e6ba59411f9ab2e1460fc8584583f0321..fe1832a36fa7fd91cf1fccddfc09918e4698e09c 100644
--- a/paddle/operators/sequence_softmax_op.cc
+++ b/paddle/operators/sequence_softmax_op.cc
@@ -33,8 +33,7 @@ class SequenceSoftmaxOp : public framework::OperatorWithKernel {
class SequenceSoftmaxOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SequenceSoftmaxOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SequenceSoftmaxOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(LoDTensor) 1-D or 2-D input LoDTensor with the 2-nd dimension "
diff --git a/paddle/operators/sgd_op.cc b/paddle/operators/sgd_op.cc
index 121bf60b27c62c1b0dd4c34c12962b7098e29ae2..fb4b43e472f86f2fa30a7013731c4621cb2b3e0e 100644
--- a/paddle/operators/sgd_op.cc
+++ b/paddle/operators/sgd_op.cc
@@ -43,7 +43,7 @@ class SGDOp : public framework::OperatorWithKernel {
class SGDOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SGDOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ SGDOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Param", "(Tensor) Input parameter");
AddInput("LearningRate", "(Tensor) Learning rate of SGD");
diff --git a/paddle/operators/shrink_rnn_memory_op.cc b/paddle/operators/shrink_rnn_memory_op.cc
index c380e606869fd2c559c7d5f378857ca74fa8d8d3..92dbe126bc084a4e1ead48f2e2334447480362bb 100644
--- a/paddle/operators/shrink_rnn_memory_op.cc
+++ b/paddle/operators/shrink_rnn_memory_op.cc
@@ -54,8 +54,7 @@ class ShrinkRNNMemoryOp : public ArrayOp {
class ShrinkRNNMemoryOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- ShrinkRNNMemoryOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ShrinkRNNMemoryOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(LoDTensor) The RNN step memory to be shrinked.");
AddInput("RankTable", "(LoDRankTable) The lod_rank_table of dynamic RNN.");
diff --git a/paddle/operators/sigmoid_cross_entropy_with_logits_op.cc b/paddle/operators/sigmoid_cross_entropy_with_logits_op.cc
index b8a1bf122a78df1e0d8291c77a61b3f917d40960..9b5227d92d1cfd7d6ac7e28186fbba6d887475b1 100644
--- a/paddle/operators/sigmoid_cross_entropy_with_logits_op.cc
+++ b/paddle/operators/sigmoid_cross_entropy_with_logits_op.cc
@@ -86,8 +86,8 @@ class SigmoidCrossEntropyWithLogitsGradOp
class SigmoidCrossEntropyWithLogitsOpMaker
: public framework::OpProtoAndCheckerMaker {
public:
- SigmoidCrossEntropyWithLogitsOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SigmoidCrossEntropyWithLogitsOpMaker(OpProto* proto,
+ OpAttrChecker* op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor, default Tensor), a 2-D tensor with shape N x D, "
diff --git a/paddle/operators/sign_op.cc b/paddle/operators/sign_op.cc
index d5a7ccb77e7d9ad3a93702861dbab295c4ab5bce..b2bfce71a6c3b7153aa10521cbb55cedbc0d7940 100644
--- a/paddle/operators/sign_op.cc
+++ b/paddle/operators/sign_op.cc
@@ -34,7 +34,7 @@ class SignOp : public framework::OperatorWithKernel {
template <typename AttrType>
class SignOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SignOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ SignOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) Input tensor of sign operator.");
AddOutput("Out", "(Tensor) Output tensor of sign operator.");
diff --git a/paddle/operators/smooth_l1_loss_op.cc b/paddle/operators/smooth_l1_loss_op.cc
index 56e8d9058fcc035c28e74daff778c4e034f46b44..42a53cfa06f7724000ff59c69504629890f6ec87 100644
--- a/paddle/operators/smooth_l1_loss_op.cc
+++ b/paddle/operators/smooth_l1_loss_op.cc
@@ -47,8 +47,7 @@ class SmoothL1LossOp : public framework::OperatorWithKernel {
template <typename AttrType>
class SmoothL1LossOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SmoothL1LossOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SmoothL1LossOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"(Tensor, default Tensor) A tensor with rank at least 2. "
diff --git a/paddle/operators/softmax_op.cc b/paddle/operators/softmax_op.cc
index 0988c83d43535d7ee1bcef87bf506e5db1a3ecc0..6b3f19bb46c45b7dd8ec6c23ee449521b36d759c 100644
--- a/paddle/operators/softmax_op.cc
+++ b/paddle/operators/softmax_op.cc
@@ -36,8 +36,7 @@ class SoftmaxOp : public framework::OperatorWithKernel {
class SoftmaxOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SoftmaxOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SoftmaxOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X",
"The input tensor of softmax. "
diff --git a/paddle/operators/softmax_with_cross_entropy_op.cc b/paddle/operators/softmax_with_cross_entropy_op.cc
index 0c302288637ad1713e133d37faa0fb338e1f7022..bca3ff1562d881be1d31e56270d22252e5e491e1 100644
--- a/paddle/operators/softmax_with_cross_entropy_op.cc
+++ b/paddle/operators/softmax_with_cross_entropy_op.cc
@@ -20,8 +20,7 @@ namespace operators {
class SoftmaxWithCrossEntropyOpMaker
: public framework::OpProtoAndCheckerMaker {
public:
- SoftmaxWithCrossEntropyOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SoftmaxWithCrossEntropyOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Logits",
"(Tensor, default: Tensor), The unscaled log probabilities "
diff --git a/paddle/operators/split_lod_tensor_op.cc b/paddle/operators/split_lod_tensor_op.cc
index f164a4771186635232fea46327ca1fb8b86f2852..c83b0cbad7f7e9115e7ce47a1823987830c0c086 100644
--- a/paddle/operators/split_lod_tensor_op.cc
+++ b/paddle/operators/split_lod_tensor_op.cc
@@ -118,8 +118,7 @@ class SplitLoDTensorOp : public framework::OperatorBase {
class SplitLoDTensorOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- SplitLoDTensorOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ SplitLoDTensorOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "The input LoDTensor");
AddInput("Mask", "A bool column vector which mask the input");
diff --git a/paddle/operators/split_op.cc b/paddle/operators/split_op.cc
index 275b25e96aa75fdbcb7275e272c49ea8d278d2c8..e8c5fffcd2cdf9f1b31a6c343e4820f155d595f7 100644
--- a/paddle/operators/split_op.cc
+++ b/paddle/operators/split_op.cc
@@ -65,7 +65,7 @@ class SplitOp : public framework::OperatorWithKernel {
class SplitOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SplitOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ SplitOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) Input tensor of the split operator.");
AddOutput("Out", "(Tensor) Output tensors of the split operator.")
diff --git a/paddle/operators/spp_op.cc b/paddle/operators/spp_op.cc
index b1807b62616b80ea8a9e48409e0760c1c7b36a38..c0aa87b0f06ca9c7d156dfdf8df188da68ac1450 100644
--- a/paddle/operators/spp_op.cc
+++ b/paddle/operators/spp_op.cc
@@ -18,7 +18,7 @@ namespace operators {
class SppOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SppOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ SppOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"X",
diff --git a/paddle/operators/squared_l2_distance_op.cc b/paddle/operators/squared_l2_distance_op.cc
index 50bc6da196e642e3860874cfb883390dd2e93215..9e097176f3434e81e31f2ecf4093af47b654e816 100644
--- a/paddle/operators/squared_l2_distance_op.cc
+++ b/paddle/operators/squared_l2_distance_op.cc
@@ -56,8 +56,7 @@ class SquaredL2DistanceOp : public framework::OperatorWithKernel {
class SquaredL2DistanceOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SquaredL2DistanceOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SquaredL2DistanceOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) Input of SquaredL2DistanceOp.");
AddInput("Y", "(Tensor) Target of SquaredL2DistanceOp.");
diff --git a/paddle/operators/squared_l2_norm_op.cc b/paddle/operators/squared_l2_norm_op.cc
index 3cff61a02f71fadf99f73787e2b2c179f7d441a8..9c239042cb5127af7eebc0e534da7a7705388de8 100644
--- a/paddle/operators/squared_l2_norm_op.cc
+++ b/paddle/operators/squared_l2_norm_op.cc
@@ -48,8 +48,7 @@ class SquaredL2NormGradOp : public framework::OperatorWithKernel {
class SquaredL2NormOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SquaredL2NormOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ SquaredL2NormOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) The input of squared_l2_norm op.");
AddOutput("Out", "(Scalar) The output of squared_l2_norm op.");
diff --git a/paddle/operators/sum_op.cc b/paddle/operators/sum_op.cc
index cd52672f78e3e5826e8dfff92fb8e4668c5c6dcd..c56fc1f10b58c1678743aff277f28064dff0f28d 100644
--- a/paddle/operators/sum_op.cc
+++ b/paddle/operators/sum_op.cc
@@ -29,7 +29,7 @@ class SumOp : public framework::OperatorWithKernel {
"Output(Out) of SumOp should not be null.");
if (ctx->IsRuntime() &&
ctx->GetOutputsVarType("Out")[0] ==
- framework::VarDesc::LOD_TENSOR_ARRAY) {
+ framework::proto::VarDesc::LOD_TENSOR_ARRAY) {
return; // skip runtime infershape when is tensor array;
}
@@ -72,8 +72,8 @@ class SumOp : public framework::OperatorWithKernel {
PADDLE_ENFORCE_NE(dtype, -1,
"Sum operator should have at least one tensor");
- return framework::OpKernelType(static_cast<framework::DataType>(dtype),
- ctx.device_context());
+ return framework::OpKernelType(
+ static_cast<framework::proto::DataType>(dtype), ctx.device_context());
} else if (x_vars[0]->IsType<framework::SelectedRows>()) {
return framework::OpKernelType(
framework::ToDataType(
@@ -98,7 +98,7 @@ class SumOp : public framework::OperatorWithKernel {
class SumOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- SumOpMaker(framework::OpProto* proto, framework::OpAttrChecker* op_checker)
+ SumOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(vector) The input tensors of sum operator.")
.AsDuplicable();
@@ -118,7 +118,7 @@ class SumOpVarTypeInference : public framework::VarTypeInference {
void operator()(const framework::OpDescBind& op_desc,
framework::BlockDescBind* block) const override {
auto& inputs = op_desc.Input("X");
- auto var_type = framework::VarDesc::SELECTED_ROWS;
+ auto var_type = framework::proto::VarDesc::SELECTED_ROWS;
for (auto& name : op_desc.Input("X")) {
VLOG(10) << name << " "
@@ -128,12 +128,12 @@ class SumOpVarTypeInference : public framework::VarTypeInference {
bool any_input_is_lod_tensor = std::any_of(
inputs.begin(), inputs.end(), [block](const std::string& name) {
return block->FindRecursiveOrCreateVar(name)->GetType() ==
- framework::VarDesc::LOD_TENSOR;
+ framework::proto::VarDesc::LOD_TENSOR;
});
auto is_tensor_array = [block](const std::string& name) {
return detail::Ref(block->FindRecursiveOrCreateVar(name)).GetType() ==
- framework::VarDesc::LOD_TENSOR_ARRAY;
+ framework::proto::VarDesc::LOD_TENSOR_ARRAY;
};
bool any_input_is_tensor_array =
@@ -152,9 +152,9 @@ class SumOpVarTypeInference : public framework::VarTypeInference {
PADDLE_ENFORCE(all_inputs_are_tensor_array,
"Not all inputs are tensor array:\n%s", os.str());
}
- var_type = framework::VarDesc::LOD_TENSOR_ARRAY;
+ var_type = framework::proto::VarDesc::LOD_TENSOR_ARRAY;
} else if (any_input_is_lod_tensor) {
- var_type = framework::VarDesc::LOD_TENSOR;
+ var_type = framework::proto::VarDesc::LOD_TENSOR;
}
auto out_var_name = op_desc.Output("Out").front();
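
For reference, the variable-type rule that `SumOpVarTypeInference` implements in the hunk above can be restated as a minimal Python sketch (a hypothetical helper for illustration only, not part of this patch):

```python
def infer_sum_output_type(input_types):
    """Mirrors SumOpVarTypeInference: all-array inputs yield an array,
    any LoDTensor input promotes the result, otherwise SELECTED_ROWS."""
    if any(t == 'LOD_TENSOR_ARRAY' for t in input_types):
        # The C++ code PADDLE_ENFORCEs that the inputs are *all* arrays.
        if not all(t == 'LOD_TENSOR_ARRAY' for t in input_types):
            raise ValueError('Not all inputs are tensor array')
        return 'LOD_TENSOR_ARRAY'
    if any(t == 'LOD_TENSOR' for t in input_types):
        return 'LOD_TENSOR'
    return 'SELECTED_ROWS'
```
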
diff --git a/paddle/operators/tensor_array_read_write_op.cc b/paddle/operators/tensor_array_read_write_op.cc
index 2835b84f75cad6c8fb01d02b93bb9ff79edb1088..337b7555c7f077bdebed46ef7b5d8b6fa0ce0363 100644
--- a/paddle/operators/tensor_array_read_write_op.cc
+++ b/paddle/operators/tensor_array_read_write_op.cc
@@ -51,8 +51,7 @@ class WriteToArrayOp : public ArrayOp {
class WriteToArrayOpProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- WriteToArrayOpProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ WriteToArrayOpProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(LoDTensor) the tensor will be written to tensor array");
AddInput(
@@ -104,7 +103,7 @@ class WriteToArrayInferVarType : public framework::VarTypeInference {
VLOG(10) << "Set Variable " << out_name << " as LOD_TENSOR_ARRAY";
auto &out = detail::Ref(block->FindRecursiveOrCreateVar(out_name),
"Cannot found %s", out_name);
- out.SetType(framework::VarDesc::LOD_TENSOR_ARRAY);
+ out.SetType(framework::proto::VarDesc::LOD_TENSOR_ARRAY);
auto *x = block->FindVarRecursive(x_name);
if (x != nullptr) {
out.SetDataType(x->GetDataType());
@@ -140,8 +139,7 @@ class ReadFromArrayOp : public ArrayOp {
class ReadFromArrayProtoMaker : public framework::OpProtoAndCheckerMaker {
public:
- ReadFromArrayProtoMaker(framework::OpProto *proto,
- framework::OpAttrChecker *op_checker)
+ ReadFromArrayProtoMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(TensorArray) the array will be read from.");
AddInput("I",
diff --git a/paddle/operators/top_k_op.cc b/paddle/operators/top_k_op.cc
index 16ae925eb5cab1c05f3bc376972cabadc4367d20..bb72210bb67f925af3e450961069f0737dbde35e 100644
--- a/paddle/operators/top_k_op.cc
+++ b/paddle/operators/top_k_op.cc
@@ -46,7 +46,7 @@ class TopkOp : public framework::OperatorWithKernel {
class TopkOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- TopkOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ TopkOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(Tensor) The input of Topk op");
AddOutput("Out", "(Tensor) The output tensor of Topk op");
diff --git a/paddle/operators/transpose_op.cc b/paddle/operators/transpose_op.cc
index de5ff561add6183828f6bb4c44e30f6bb13079fa..0109b8bc5c30e0fe3e4ff9d741cd76b741e17926 100644
--- a/paddle/operators/transpose_op.cc
+++ b/paddle/operators/transpose_op.cc
@@ -55,8 +55,7 @@ class TransposeOp : public framework::OperatorWithKernel {
class TransposeOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- TransposeOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ TransposeOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"X",
diff --git a/paddle/operators/uniform_random_op.cc b/paddle/operators/uniform_random_op.cc
index 2a49ee471f67cda87415db0e1440a4992c0cd088..3c705cb3396f68f88882388675ab145660e13070 100644
--- a/paddle/operators/uniform_random_op.cc
+++ b/paddle/operators/uniform_random_op.cc
@@ -66,15 +66,14 @@ class UniformRandomOp : public framework::OperatorWithKernel {
framework::OpKernelType GetKernelType(
const framework::ExecutionContext& ctx) const override {
return framework::OpKernelType(
- static_cast<framework::DataType>(ctx.Attr<int>("dtype")),
+ static_cast<framework::proto::DataType>(ctx.Attr<int>("dtype")),
ctx.GetPlace());
}
};
class UniformRandomOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- UniformRandomOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ UniformRandomOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: framework::OpProtoAndCheckerMaker(proto, op_checker) {
AddOutput("Out", "(Tensor) The output tensor of uniform random op");
AddComment(R"DOC(
@@ -100,7 +99,7 @@ uniform distribution.
"0 means use a seed generated by the system.")
.SetDefault(0);
AddAttr("dtype", "(int, default 5(FP32)) Output tensor data type")
- .SetDefault(framework::DataType::FP32);
+ .SetDefault(framework::proto::DataType::FP32);
}
};
} // namespace operators
diff --git a/paddle/operators/unpool_op.cc b/paddle/operators/unpool_op.cc
index 49df2a530cd0c5c13f08db4b1e7db62679081e9b..7c035c0b48ebb0d7115e1499c03f8f40f2ca7282 100644
--- a/paddle/operators/unpool_op.cc
+++ b/paddle/operators/unpool_op.cc
@@ -18,8 +18,7 @@ namespace operators {
class Unpool2dOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- Unpool2dOpMaker(framework::OpProto* proto,
- framework::OpAttrChecker* op_checker)
+ Unpool2dOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(
"X",
diff --git a/paddle/operators/while_op.cc b/paddle/operators/while_op.cc
index 9a092a570ff1f3f529413cd44dff55147adbaadc..56a01e56d75a125a86e3d3a3702bf0bb64552b65 100644
--- a/paddle/operators/while_op.cc
+++ b/paddle/operators/while_op.cc
@@ -64,7 +64,7 @@ class WhileOp : public framework::OperatorBase {
class WhileOpMaker : public framework::OpProtoAndCheckerMaker {
public:
- WhileOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
+ WhileOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput(kParameters,
"A set of variables, which are required by operators inside the "
@@ -321,10 +321,10 @@ class WhileGradOpShapeInference : public framework::InferShapeBase {
continue;
}
auto dims = ctx->GetInputsElementDim(kParameters, i);
- if (var_types[i] == framework::VarDesc::LOD_TENSOR) {
+ if (var_types[i] == framework::proto::VarDesc::LOD_TENSOR) {
names_to_set.push_back(pg_names[i]);
dims_to_set.push_back(dims);
- } else if (var_types[i] == framework::VarDesc::LOD_TENSOR_ARRAY) {
+ } else if (var_types[i] == framework::proto::VarDesc::LOD_TENSOR_ARRAY) {
// not sure how to set the dim of LOD_TENSOR_ARRAY
names_to_set.push_back(pg_names[i]);
dims_to_set.push_back(dims);
diff --git a/paddle/pybind/print_operators_doc.cc b/paddle/pybind/print_operators_doc.cc
index 24f2a9383f7a069f1a8c7ed2bf3da46720470efa..f4f281229e611a6c9c8e9ecd54e0097ab683bbf3 100644
--- a/paddle/pybind/print_operators_doc.cc
+++ b/paddle/pybind/print_operators_doc.cc
@@ -31,31 +31,32 @@ std::string Escape(const std::string& s) {
return r;
}
-std::string AttrType(paddle::framework::AttrType at) {
+std::string AttrType(paddle::framework::proto::AttrType at) {
switch (at) {
- case paddle::framework::INT:
+ case paddle::framework::proto::INT:
return "int";
- case paddle::framework::FLOAT:
+ case paddle::framework::proto::FLOAT:
return "float";
- case paddle::framework::STRING:
+ case paddle::framework::proto::STRING:
return "string";
- case paddle::framework::BOOLEAN:
+ case paddle::framework::proto::BOOLEAN:
return "bool";
- case paddle::framework::INTS:
+ case paddle::framework::proto::INTS:
return "int array";
- case paddle::framework::FLOATS:
+ case paddle::framework::proto::FLOATS:
return "float array";
- case paddle::framework::STRINGS:
+ case paddle::framework::proto::STRINGS:
return "string array";
- case paddle::framework::BOOLEANS:
+ case paddle::framework::proto::BOOLEANS:
return "bool array";
- case paddle::framework::BLOCK:
+ case paddle::framework::proto::BLOCK:
return "block id";
}
return "UNKNOWN"; // not possible
}
-void PrintVar(const paddle::framework::OpProto::Var& v, std::stringstream& ss) {
+void PrintVar(const paddle::framework::proto::OpProto::Var& v,
+ std::stringstream& ss) {
ss << " { "
<< "\n"
<< " \"name\" : \"" << Escape(v.name()) << "\",\n"
@@ -65,7 +66,7 @@ void PrintVar(const paddle::framework::OpProto::Var& v, std::stringstream& ss) {
<< " },";
}
-void PrintAttr(const paddle::framework::OpProto::Attr& a,
+void PrintAttr(const paddle::framework::proto::OpProto::Attr& a,
std::stringstream& ss) {
ss << " { "
<< "\n"
@@ -81,7 +82,7 @@ void PrintOpProto(const std::string& type,
std::stringstream& ss) {
std::cerr << "Processing " << type << "\n";
- const paddle::framework::OpProto* p = opinfo.proto_;
+ const paddle::framework::proto::OpProto* p = opinfo.proto_;
if (p == nullptr) {
return; // It is possible that an operator doesn't have OpProto.
}
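
The `AttrType` switch above is a plain enum-to-string table; as a quick reference, here is the same mapping in Python (a hypothetical helper for documentation tooling, not part of this patch):

```python
# proto::AttrType enum name -> human-readable type name, as printed by
# print_operators_doc.cc.
ATTR_TYPE_NAMES = {
    'INT': 'int',
    'FLOAT': 'float',
    'STRING': 'string',
    'BOOLEAN': 'bool',
    'INTS': 'int array',
    'FLOATS': 'float array',
    'STRINGS': 'string array',
    'BOOLEANS': 'bool array',
    'BLOCK': 'block id',
}
```
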
diff --git a/paddle/pybind/protobuf.cc b/paddle/pybind/protobuf.cc
index 6c8f06cccb92fa9cd22fdb89a9d410e6853895cc..de26184d0102574164a0e06da5fd58ef052d2559 100644
--- a/paddle/pybind/protobuf.cc
+++ b/paddle/pybind/protobuf.cc
@@ -144,7 +144,7 @@ void BindProgramDesc(py::module &m) {
.def("serialize_to_string", SerializeMessage)
.def("parse_from_string",
[](ProgramDescBind &program_desc, const std::string &data) {
- ProgramDesc *desc = program_desc.Proto();
+ proto::ProgramDesc *desc = program_desc.Proto();
PADDLE_ENFORCE(desc->ParseFromString(data),
"Fail to parse ProgramDesc from string. This could "
"be a bug of Paddle.");
@@ -184,14 +184,14 @@ void BindBlockDesc(py::module &m) {
}
void BindVarDsec(py::module &m) {
- py::enum_(m, "DataType", "")
- .value("BOOL", DataType::BOOL)
- .value("INT16", DataType::INT16)
- .value("INT32", DataType::INT32)
- .value("INT64", DataType::INT64)
- .value("FP16", DataType::FP16)
- .value("FP32", DataType::FP32)
- .value("FP64", DataType::FP64);
+ py::enum_(m, "DataType", "")
+ .value("BOOL", proto::DataType::BOOL)
+ .value("INT16", proto::DataType::INT16)
+ .value("INT32", proto::DataType::INT32)
+ .value("INT64", proto::DataType::INT64)
+ .value("FP16", proto::DataType::FP16)
+ .value("FP32", proto::DataType::FP32)
+ .value("FP64", proto::DataType::FP64);
py::class_<VarDescBind> var_desc(m, "VarDesc", "");
var_desc
@@ -213,27 +213,27 @@ void BindVarDsec(py::module &m) {
.def("persistable", &VarDescBind::Persistable)
.def("set_persistable", &VarDescBind::SetPersistable);
- py::enum_<VarDesc::VarType>(var_desc, "VarType", "")
- .value("LOD_TENSOR", VarDesc::LOD_TENSOR)
- .value("SELECTED_ROWS", VarDesc::SELECTED_ROWS)
- .value("FEED_MINIBATCH", VarDesc::FEED_MINIBATCH)
- .value("FETCH_LIST", VarDesc::FETCH_LIST)
- .value("STEP_SCOPES", VarDesc::STEP_SCOPES)
- .value("LOD_RANK_TABLE", VarDesc::LOD_RANK_TABLE)
- .value("LOD_TENSOR_ARRAY", VarDesc::LOD_TENSOR_ARRAY);
+ py::enum_<proto::VarDesc::VarType>(var_desc, "VarType", "")
+ .value("LOD_TENSOR", proto::VarDesc::LOD_TENSOR)
+ .value("SELECTED_ROWS", proto::VarDesc::SELECTED_ROWS)
+ .value("FEED_MINIBATCH", proto::VarDesc::FEED_MINIBATCH)
+ .value("FETCH_LIST", proto::VarDesc::FETCH_LIST)
+ .value("STEP_SCOPES", proto::VarDesc::STEP_SCOPES)
+ .value("LOD_RANK_TABLE", proto::VarDesc::LOD_RANK_TABLE)
+ .value("LOD_TENSOR_ARRAY", proto::VarDesc::LOD_TENSOR_ARRAY);
}
void BindOpDesc(py::module &m) {
- py::enum_(m, "AttrType", "")
- .value("INT", AttrType::INT)
- .value("INTS", AttrType::INTS)
- .value("FLOAT", AttrType::FLOAT)
- .value("FLOATS", AttrType::FLOATS)
- .value("STRING", AttrType::STRING)
- .value("STRINGS", AttrType::STRINGS)
- .value("BOOL", AttrType::BOOLEAN)
- .value("BOOLS", AttrType::BOOLEANS)
- .value("BLOCK", AttrType::BLOCK);
+ py::enum_(m, "AttrType", "")
+ .value("INT", proto::AttrType::INT)
+ .value("INTS", proto::AttrType::INTS)
+ .value("FLOAT", proto::AttrType::FLOAT)
+ .value("FLOATS", proto::AttrType::FLOATS)
+ .value("STRING", proto::AttrType::STRING)
+ .value("STRINGS", proto::AttrType::STRINGS)
+ .value("BOOL", proto::AttrType::BOOLEAN)
+ .value("BOOLS", proto::AttrType::BOOLEANS)
+ .value("BLOCK", proto::AttrType::BLOCK);
py::class_<OpDescBind> op_desc(m, "OpDesc", "");
op_desc.def("type", &OpDescBind::Type)
diff --git a/paddle/pybind/pybind.cc b/paddle/pybind/pybind.cc
index 4a82f1596eb0b7b3cfe9b9bbce32549f58efdbc9..31f802d4d2489dda7fbe1d773abfc14433db3d45 100644
--- a/paddle/pybind/pybind.cc
+++ b/paddle/pybind/pybind.cc
@@ -288,12 +288,12 @@ All parameter, weight, gradient are variables in Paddle.
for (const auto &t : targets) {
prog_with_targets.MutableBlock(t[0])->Op(t[1])->MarkAsTarget();
}
- ProgramDesc pruned_desc;
+ proto::ProgramDesc pruned_desc;
Prune(*prog_with_targets.Proto(), &pruned_desc);
return new ProgramDescBind(pruned_desc);
});
m.def("inference_optimize", [](ProgramDescBind &origin) {
- ProgramDesc pruned_desc;
+ proto::ProgramDesc pruned_desc;
InferenceOptimize(*(origin.Proto()), &pruned_desc);
return new ProgramDescBind(pruned_desc);
});
@@ -345,7 +345,7 @@ All parameter, weight, gradient are variables in Paddle.
py::class_(m, "Operator")
.def_static("create",
[](py::bytes protobin) {
- OpDesc desc;
+ proto::OpDesc desc;
PADDLE_ENFORCE(desc.ParsePartialFromString(protobin),
"Cannot parse user input to OpDesc");
PADDLE_ENFORCE(desc.IsInitialized(),
@@ -398,7 +398,7 @@ All parameter, weight, gradient are variables in Paddle.
py::class_(m, "CondOp")
.def_static("create",
[](py::bytes protobin) -> operators::CondOp * {
- OpDesc desc;
+ proto::OpDesc desc;
PADDLE_ENFORCE(desc.ParsePartialFromString(protobin),
"Cannot parse user input to OpDesc");
PADDLE_ENFORCE(desc.IsInitialized(),
diff --git a/paddle/scripts/travis/build_doc.sh b/paddle/scripts/travis/build_doc.sh
index ff0bac6a0740111dfa1a1440daaf1ceaf3a7b0d8..0db8d33bbcb5278ed0dd5584b5822502b719ede9 100755
--- a/paddle/scripts/travis/build_doc.sh
+++ b/paddle/scripts/travis/build_doc.sh
@@ -14,9 +14,8 @@ make -j `nproc` print_operators_doc
paddle/pybind/print_operators_doc > doc/en/html/operators.json
# check websites for broken links
-# It will be failed now!
-#linkchecker doc/en/html/index.html
-#linkchecker doc/cn/html/index.html
+linkchecker doc/en/html/index.html
+linkchecker doc/cn/html/index.html
# Parse Github URL
REPO=`git config remote.origin.url`
diff --git a/python/paddle/trainer_config_helpers/networks.py b/python/paddle/trainer_config_helpers/networks.py
index 8bfe56d795e394efffabb61f145b1a20d806447d..b5cde7bac779ee1d54395b68941df2693e1ed0f5 100644
--- a/python/paddle/trainer_config_helpers/networks.py
+++ b/python/paddle/trainer_config_helpers/networks.py
@@ -25,10 +25,10 @@ from paddle.trainer.config_parser import *
__all__ = [
'sequence_conv_pool', 'simple_lstm', "simple_img_conv_pool",
"img_conv_bn_pool", 'lstmemory_group', 'lstmemory_unit', 'small_vgg',
- 'img_conv_group', 'vgg_16_network', 'gru_unit', 'gru_group', 'simple_gru',
- 'simple_attention', 'dot_product_attention', 'multi_head_attention',
- 'simple_gru2', 'bidirectional_gru', 'text_conv_pool', 'bidirectional_lstm',
- 'inputs', 'outputs'
+ 'img_conv_group', 'img_separable_conv', 'vgg_16_network', 'gru_unit',
+ 'gru_group', 'simple_gru', 'simple_attention', 'dot_product_attention',
+ 'multi_head_attention', 'simple_gru2', 'bidirectional_gru',
+ 'text_conv_pool', 'bidirectional_lstm', 'inputs', 'outputs'
]
######################################################
@@ -251,13 +251,13 @@ def img_conv_bn_pool(input,
pool_layer_attr=None):
"""
Convolution, batch normalization, pooling group.
-
+
Img input => Conv => BN => Pooling => Output.
:param name: group name.
:type name: basestring
:param input: input layer.
- :type input: LayerOutput
+ :type input: LayerOutput
:param filter_size: see img_conv_layer for details.
:type filter_size: int
:param num_filters: see img_conv_layer for details.
@@ -435,6 +435,85 @@ def img_conv_group(input,
input=tmp, stride=pool_stride, pool_size=pool_size, pool_type=pool_type)
+@wrap_name_default("separable_conv")
+def img_separable_conv(input,
+ num_channels,
+ num_out_channels,
+ filter_size,
+ stride=1,
+ padding=0,
+ depth_multiplier=1,
+ act=None,
+ bias_attr=None,
+ param_attr=None,
+ shared_bias=True,
+ layer_type='exconv',
+ name=None):
+ """
+ Separable Convolution.
+
+ The separable convolution module consists of a depthwise convolution
+ that acts separately on each input channel, followed by a pointwise
+ convolution with 1*1 kernels that mixes the channels. It is used in
+ Xception: https://arxiv.org/pdf/1610.02357.pdf
+
+ :param input: input layer.
+ :type input: LayerOutput
+ :param num_channels: the number of input channels.
+ :type num_channels: int
+ :param num_out_channels: the number of output channels.
+ :type num_out_channels: int
+ :param filter_size: the filter size for the depthwise convolution.
+ :type filter_size: int|tuple
+ :param stride: the stride size for the depthwise convolution.
+ :type stride: int|tuple
+ :param padding: the padding size for the depthwise convolution.
+ :type padding: int|tuple
+ :param depth_multiplier: the number of filters for each input channel
+ in the depthwise convolution.
+ :type depth_multiplier: int
+ :param act: the activation function for the output.
+ :type act: BaseActivation
+ :param bias_attr: see img_conv_layer for details.
+ :type bias_attr: ParameterAttribute
+ :param param_attr: see img_conv_layer for details.
+ :type param_attr: ParameterAttribute
+ :param shared_bias: see img_conv_layer for details.
+ :type shared_bias: bool
+ :param layer_type: see img_conv_layer for details.
+ :type layer_type: basestring
+ :return: layer's output
+ :rtype: LayerOutput
+ """
+ __depthwise_conv__ = img_conv_layer(
+ name="%s_depthwise_conv" % name,
+ input=input,
+ num_channels=num_channels,
+ num_filters=num_channels * depth_multiplier,
+ groups=num_channels,
+ filter_size=filter_size,
+ stride=stride,
+ padding=padding,
+ act=LinearActivation(),
+ bias_attr=bias_attr,
+ param_attr=param_attr,
+ shared_biases=shared_bias,
+ layer_type=layer_type)
+ __pointwise_conv__ = img_conv_layer(
+ name="%s_pointwise_conv" % name,
+ input=__depthwise_conv__,
+ num_channels=num_channels * depth_multiplier,
+ num_filters=num_out_channels,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ act=act,
+ bias_attr=bias_attr,
+ param_attr=param_attr,
+ shared_biases=shared_bias)
+ return __pointwise_conv__
+
+
def small_vgg(input_image, num_channels, num_classes):
def __vgg__(ipt, num_filter, times, dropouts, num_channels_=None):
return img_conv_group(
@@ -648,7 +727,7 @@ def lstmemory_unit(input,
lstm_bias_attr=None,
lstm_layer_attr=None):
"""
- lstmemory_unit defines the caculation process of a LSTM unit during a
+ lstmemory_unit defines the calculation process of a LSTM unit during a
single time step. This function is not a recurrent layer, so it can not be
directly used to process sequence input. This function is always used in
recurrent_group (see layers.py for more details) to implement attention
@@ -869,7 +948,7 @@ def gru_unit(input,
gru_layer_attr=None,
naive=False):
"""
- gru_unit defines the calculation process of a gated recurrent unit during a single
+ gru_unit defines the calculation process of a gated recurrent unit during a single
time step. This function is not a recurrent layer, so it can not be
directly used to process sequence input. This function is always used in
the recurrent_group (see layers.py for more details) to implement attention
@@ -1012,7 +1091,7 @@ def simple_gru(input,
simple_gru in network.py. The reason why there are so many interfaces is
that we have two ways to implement recurrent neural network. One way is to
use one complete layer to implement rnn (including simple rnn, gru and lstm)
- with multiple time steps, such as recurrent_layer, lstmemory, grumemory. But
+ with multiple time steps, such as recurrent_layer, lstmemory, grumemory. But
the multiplication operation :math:`W x_t` is not computed in these layers.
See details in their interfaces in layers.py.
The other implementation is to use an recurrent group which can ensemble a
@@ -1116,7 +1195,7 @@ def simple_gru2(input,
:type act: BaseActivation
:param gate_act: gate activiation type of gru
:type gate_act: BaseActivation
- :param gru_bias_attr: bias parameter attribute of gru layer,
+ :param gru_bias_attr: bias parameter attribute of gru layer,
False means no bias, None means default bias.
:type gru_bias_attr: ParameterAttribute|False|None
:param gru_param_attr: param parameter attribute of gru layer,
@@ -1189,7 +1268,7 @@ def bidirectional_gru(input,
:type size: int
:param return_seq: If set False, the last time step of output are
concatenated and returned.
- If set True, the entire output sequences in forward
+ If set True, the entire output sequences in forward
and backward directions are concatenated and returned.
:type return_seq: bool
:return: LayerOutput object.
@@ -1278,7 +1357,7 @@ def bidirectional_lstm(input,
:type size: int
:param return_seq: If set False, the last time step of output are
concatenated and returned.
- If set True, the entire output sequences in forward
+ If set True, the entire output sequences in forward
and backward directions are concatenated and returned.
:type return_seq: bool
:return: LayerOutput object.
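
A hedged usage sketch for the new `img_separable_conv` helper; the layer name, input size, and hyperparameters below are illustrative assumptions, not taken from this patch:

```python
from paddle.trainer_config_helpers import *

# A 32-channel 28x28 input image, flattened to a single data layer.
img = data_layer(name='image', size=32 * 28 * 28)

# Depthwise 3x3 convolution per channel, then a 1x1 pointwise convolution
# mixing the 32 channels up to 64 output channels.
sep = img_separable_conv(
    input=img,
    num_channels=32,
    num_out_channels=64,
    filter_size=3,
    stride=1,
    padding=1,
    act=ReluActivation())
```
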
diff --git a/python/paddle/v2/fluid/layers/nn.py b/python/paddle/v2/fluid/layers/nn.py
index 2c38c232240fbe3541ca5e0efc51d8f47c6e4190..73f68466da78021778d2860f24c1b699081fb95b 100644
--- a/python/paddle/v2/fluid/layers/nn.py
+++ b/python/paddle/v2/fluid/layers/nn.py
@@ -13,7 +13,7 @@ __all__ = [
'crf_decoding', 'cos_sim', 'cross_entropy', 'square_error_cost', 'accuracy',
'chunk_eval', 'sequence_conv', 'conv2d', 'sequence_pool', 'pool2d',
'batch_norm', 'beam_search_decode', 'conv2d_transpose', 'sequence_expand',
- 'lstm_unit'
+ 'lstm_unit', 'reduce_sum'
]
@@ -402,8 +402,8 @@ def chunk_eval(input,
},
attrs={
"num_chunk_types": num_chunk_types,
- 'chunk_scheme': chunk_scheme,
- 'excluded_chunk_types': excluded_chunk_types or []
+ "chunk_scheme": chunk_scheme,
+ "excluded_chunk_types": excluded_chunk_types or []
})
return precision, recall, f1_score, num_infer_chunks, num_label_chunks, num_correct_chunks
@@ -935,3 +935,47 @@ def lstm_unit(x_t,
attrs={"forget_bias": forget_bias})
return h, c
+
+
+def reduce_sum(input, dim=None, keep_dim=False):
+ """
+ Computes the sum of tensor elements over the given dimension.
+
+ Args:
+ input (Variable): The input variable which is a Tensor or LoDTensor.
+ dim (int|None): The dimension along which the sum is performed. If
+ :attr:`None`, sum all elements of :attr:`input` and return a
+ Tensor variable with a single element, otherwise must be in the
+ range :math:`[-rank(input), rank(input))`. If :math:`dim < 0`,
+ the dimension to reduce is :math:`rank + dim`.
+ keep_dim (bool): Whether to retain the reduced dimension in the
+ output Tensor. The result tensor will have one fewer dimension
+ than the :attr:`input` unless :attr:`keep_dim` is true.
+
+ Returns:
+ Variable: The reduced Tensor variable.
+
+ Examples:
+ .. code-block:: python
+
+ # x is a Tensor variable with following elements:
+ # [[0.2, 0.3, 0.5, 0.9]
+ # [0.1, 0.2, 0.6, 0.7]]
+ # Each example is followed by the corresponding output tensor.
+ fluid.layers.reduce_sum(x) # [3.5]
+ fluid.layers.reduce_sum(x, dim=0) # [0.3, 0.5, 1.1, 1.6]
+ fluid.layers.reduce_sum(x, dim=-1) # [1.9, 1.6]
+ fluid.layers.reduce_sum(x, dim=1, keep_dim=True) # [[1.9], [1.6]]
+ """
+ helper = LayerHelper('reduce_sum', **locals())
+ out = helper.create_tmp_variable(dtype=helper.input_dtype())
+ helper.append_op(
+ type='reduce_sum',
+ inputs={'X': input},
+ outputs={'Out': out},
+ attrs={
+ 'dim': dim if dim is not None else 0,
+ 'keep_dim': keep_dim,
+ 'reduce_all': dim is None
+ })
+ return out
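
The attribute mapping in `reduce_sum` above is easy to mis-read, so here is a standalone restatement (a hypothetical helper for illustration, not part of this patch):

```python
def _reduce_sum_attrs(dim, keep_dim):
    # dim=None means "reduce over all elements": reduce_all is set and the
    # 'dim' attribute falls back to 0, which the op then ignores.
    return {
        'dim': dim if dim is not None else 0,
        'keep_dim': keep_dim,
        'reduce_all': dim is None,
    }

assert _reduce_sum_attrs(None, False) == {
    'dim': 0, 'keep_dim': False, 'reduce_all': True}
assert _reduce_sum_attrs(-1, True) == {
    'dim': -1, 'keep_dim': True, 'reduce_all': False}
```
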