# How to write a new operator

 - [Background](#background)
 - [Implementing C++ Types](#implementing-c-types)
   - [Defining ProtoMaker](#defining-protomaker)
   - [Defining Operator](#defining-operator)
   - [Defining OpKernel](#defining-opkernel)
   - [Registering Operator](#registering-operator)
   - [Compilation](#compilation)
 - [Python Binding](#python-binding)
 - [Unit Tests](#unit-tests)
   - [Testing Forward Operators](#testing-forward-operators)
   - [Testing Backward Operators](#testing-backward-operators)
   - [Compiling and Running](#compiling-and-running)
 - [Remarks](#remarks)
## Background

Here are the base types needed. For details, please refer to the design docs.

- `framework::OperatorBase`: Operator (Op) base class.
- `framework::OpKernel`: Base class for Op computation.
- `framework::OperatorWithKernel`: Inherited from OperatorBase, describing an operator with computation.
- `framework::OpProtoAndCheckerMaker`: Describes an Operator's input, output, attributes, and description, mainly used to interface with the Python API.

An operator can be differentiated by whether it has kernel methods. An operator with a kernel inherits from `OperatorWithKernel`, while the ones without inherit from `OperatorBase`. This tutorial focuses on implementing operators with kernels. In short, an operator includes the following information:


 Information           | Where is it defined
--------------  | :----------------------
OpProtoMaker definition | `.cc` files; a backward Op does not need an OpProtoMaker interface.
Op definition           | `.cc` files
Kernel implementation       | The kernel methods shared between CPU and CUDA are defined in `.h` files. CPU-specific kernels live in `.cc` files, while CUDA-specific kernels are implemented in `.cu` files.
Registering the Op           | Ops are registered in `.cc` files; for kernel registration, `.cc` files contain the CPU implementation, while `.cu` files contain the CUDA implementation.


New Operator implementations are added to the list [paddle/operators](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/operators), with file names in the format `*_op.h` (if applicable), `*_op.cc`, `*_op.cu` (if applicable). **The system will use the naming scheme to automatically build operators and their corresponding Python extensions.**


Let's take the matrix multiplication operator, [MulOp](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/mul_op.cc), as an example to introduce writing an operator with a kernel.


## Implementing C++ Types


### Defining ProtoMaker

Matrix multiplication can be written as $Out = X * Y$, meaning that the operation consists of two inputs and one output.

First, define `ProtoMaker` to describe the Operator's input, output, and additional comments:

```cpp
class MulOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  MulOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X", "(Tensor), 2D tensor of size (M x K)");
    AddInput("Y", "(Tensor), 2D tensor of size (K x N)");
    AddOutput("Out", "(Tensor), 2D tensor of size (M x N)");
    AddComment(R"DOC(
Two Element Mul Operator.
The equation is: Out = X * Y
)DOC");
  }
};
```

[`MulOpMaker`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/mul_op.cc#L43) is inherited from `framework::OpProtoAndCheckerMaker`, whose constructor takes two parameters:

   - `framework::OpProto` stores the Operator's inputs, outputs, and attribute descriptions, used for generating Python API interfaces.
   - `framework::OpAttrChecker` is used to validate variable attributes.

The constructor utilizes `AddInput`, `AddOutput`, and `AddComment`, so that the corresponding information will be added to `OpProto`.

The code above adds two inputs `X` and `Y` to `MulOp`, an output `Out`, and their corresponding descriptions, in accordance with Paddle's [naming convention](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/name_convention.md).


An additional example [`ScaleOp`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/scale_op.cc#L37) is implemented as follows:

```cpp
template <typename AttrType>
class ScaleOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  ScaleOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X", "The input tensor of scale operator.").NotInGradient();
    AddOutput("Out", "The output tensor of scale operator.").NotInGradient();
    AddComment(R"DOC(Scale operator
The equation is: Out = scale*X
)DOC");
    AddAttr<AttrType>("scale", "scale of scale operator.").SetDefault(1.0);
  }
};
```

There are two changes in this example:

- `AddInput("X","...").NotInGradient()` expresses that input `X` is not involved in `ScaleOp`'s corresponding computation. If an input to an operator is not participating in back-propagation, please explicitly set `.NotInGradient()`.

- `AddAttr<AttrType>("scale", "...").SetDefault(1.0);`  adds `scale`constant as an attribute, and sets the default value to 1.0.


### Defining Operator

The following code defines the interface of `MulOp`:

```cpp
class MulOp : public framework::OperatorWithKernel {
 public:
  using framework::OperatorWithKernel::OperatorWithKernel;

 protected:
  void InferShape(const framework::InferShapeContext &ctx) const override {
    auto dim0 = ctx.Input<Tensor>("X")->dims();
    auto dim1 = ctx.Input<Tensor>("Y")->dims();
    PADDLE_ENFORCE_EQ(dim0.size(), 2,
                      "input X(%s) should be a tensor with 2 dims, a matrix",
                      ctx.op_.Input("X"));
    PADDLE_ENFORCE_EQ(dim1.size(), 2,
                      "input Y(%s) should be a tensor with 2 dims, a matrix",
                      ctx.op_.Input("Y"));
    PADDLE_ENFORCE_EQ(
        dim0[1], dim1[0],
        "First matrix's width must be equal with second matrix's height.");
    ctx.Output<Tensor>("Out")->Resize({dim0[0], dim1[1]});
  }
};
```

[`MulOp`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/mul_op.cc#L22) is inherited from `OperatorWithKernel`. Its `public` member

```cpp
using framework::OperatorWithKernel::OperatorWithKernel;
```

expresses that the operator inherits the constructor of the base class `OperatorWithKernel`; it is equivalent to writing

```cpp
MulOp(const std::string &type, const framework::VariableNameMap &inputs,
      const framework::VariableNameMap &outputs,
      const framework::AttributeMap &attrs)
  : OperatorWithKernel(type, inputs, outputs, attrs) {}
```

The `InferShape` interface needs to be overridden. `InferShape` is a constant method and cannot modify the Op's member variables; its constant argument `const framework::InferShapeContext &ctx` can be used to extract the inputs, outputs, and attributes. Its functions are to

  - 1). validate and error out early: it checks input data dimensions and types.
  - 2). configure the shape of the output tensor.

Usually `OpProtoMaker` and `Op`'s type definitions are written in `.cc` files, which also include the registration methods introduced later.

### Defining OpKernel

`MulKernel` inherits `framework::OpKernel`, which includes the following template parameters:

- `typename DeviceContext` denotes the device context type. When different devices, namely `CPUDeviceContext` and `CUDADeviceContext`, share the same kernel, this template parameter needs to be added. If they don't share kernels, it must not be added. An example of a non-sharing kernel is [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cross_entropy_op.h#L43).

- `typename T` denotes the data type, such as `float` or `double`.

`MulKernel` needs to override the `Compute` interface.
- `Compute` takes one argument, `const framework::ExecutionContext& context`.
- Compared with `InferShapeContext`, `ExecutionContext` includes device types, and can similarly extract input, output, and attribute variables.
- `Compute` implements the computation logic of an `OpKernel`.

`MulKernel`'s implementation of `Compute` is as follows:

  ```cpp
  template <typename DeviceContext, typename T>
  class MulKernel : public framework::OpKernel {
   public:
    void Compute(const framework::ExecutionContext& context) const override {
      auto* X = context.Input<Tensor>("X");
      auto* Y = context.Input<Tensor>("Y");
      auto* Z = context.Output<Tensor>("Out");
      Z->mutable_data<T>(context.GetPlace());
      auto& device_context = context.template device_context<DeviceContext>();
      math::matmul<DeviceContext, T>(*X, false, *Y, false, 1, Z, 0, device_context);
    }
  };
  ```

Note that **different devices (CPU, CUDA) share an Op definition; whether or not they share the same `OpKernel` depends on whether `Compute` calls functions that support both devices.**

`MulOp`'s CPU and CUDA implementations share the same `Kernel`. A non-sharing `OpKernel` example can be seen in [`OnehotCrossEntropyOpKernel`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cross_entropy_op.h#L43).

To ease the writing of an `OpKernel`'s `Compute` method, and to reuse code across devices, the [`Eigen unsupported Tensor`](https://bitbucket.org/eigen/eigen/src/default/unsupported/Eigen/CXX11/src/Tensor/README.md?fileviewer=file-view-default) module is used to implement the `Compute` interface. To learn how the Eigen library is used in PaddlePaddle, please see the [usage document](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/dev/use_eigen_cn.md).
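
As a minimal sketch of this pattern (assuming the `framework::EigenVector` helper and the device context's `eigen_device()` accessor used by the codebase's Eigen-based kernels), a scale kernel could be written once and run on both devices:

```cpp
template <typename DeviceContext, typename T>
class ScaleKernel : public framework::OpKernel {
 public:
  void Compute(const framework::ExecutionContext& context) const override {
    auto* in = context.Input<Tensor>("X");
    auto* out = context.Output<Tensor>("Out");
    out->mutable_data<T>(context.GetPlace());
    auto scale = static_cast<T>(context.Attr<float>("scale"));

    // Map both tensors to flat Eigen vectors.
    auto eigen_in = framework::EigenVector<T>::Flatten(*in);
    auto eigen_out = framework::EigenVector<T>::Flatten(*out);

    // The same Eigen expression runs on CPU or CUDA, depending on which
    // Eigen device the context provides.
    auto& place =
        *context.template device_context<DeviceContext>().eigen_device();
    eigen_out.device(place) = scale * eigen_in;
  }
};
```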


This concludes the forward implementation of an operator. Next, the operator and its kernel need to be registered in a `.cc` file.

The definition of its corresponding backward operator, if applicable, is similar to that of a forward operator. **Note that a backward operator does not include a `ProtoMaker`**.
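
For illustration, here is a minimal sketch of what `MulOp`'s backward operator might look like (assuming the `framework::GradVarName` helper for naming gradient variables); note the absence of a `ProtoMaker`:

```cpp
class MulOpGrad : public framework::OperatorWithKernel {
 public:
  using framework::OperatorWithKernel::OperatorWithKernel;

 protected:
  void InferShape(const framework::InferShapeContext &ctx) const override {
    // Each gradient has the same shape as its corresponding input.
    auto x_dims = ctx.Input<Tensor>("X")->dims();
    auto y_dims = ctx.Input<Tensor>("Y")->dims();
    auto *x_grad = ctx.Output<Tensor>(framework::GradVarName("X"));
    auto *y_grad = ctx.Output<Tensor>(framework::GradVarName("Y"));
    // A gradient output may be absent when it is not required.
    if (x_grad) x_grad->Resize(x_dims);
    if (y_grad) y_grad->Resize(y_dims);
  }
};
```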

### Registering Operator

- In `.cc` files, register forward and backward operator classes and the CPU kernel.

    ```cpp
    namespace ops = paddle::operators;
    REGISTER_OP(mul, ops::MulOp, ops::MulOpMaker, mul_grad, ops::MulOpGrad);
    REGISTER_OP_CPU_KERNEL(mul, ops::MulKernel<paddle::platform::CPUDeviceContext, float>);
    REGISTER_OP_CPU_KERNEL(mul_grad,
                  ops::MulGradKernel<paddle::platform::CPUDeviceContext, float>);
    ```

   In that code block,

    - `REGISTER_OP` registers the `ops::MulOp` class under the type name `mul`, with `ops::MulOpMaker` as its `ProtoMaker`, and registers `ops::MulOpGrad` as the gradient operator `mul_grad`.
    - `REGISTER_OP_WITHOUT_GRADIENT` registers an operator without a gradient operator (a sketch follows this list).
    - `REGISTER_OP_CPU_KERNEL` registers the `ops::MulKernel` class specialized with template types `paddle::platform::CPUDeviceContext` and `float`, and similarly registers `ops::MulGradKernel`.
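
    As a sketch of the gradient-free case (using `fill_zeros_like` purely as an illustrative, hypothetical example), `REGISTER_OP_WITHOUT_GRADIENT` takes only the type name, the operator class, and its maker:

    ```cpp
    namespace ops = paddle::operators;
    // No gradient operator is registered for this op.
    REGISTER_OP_WITHOUT_GRADIENT(fill_zeros_like, ops::FillZerosLikeOp,
                                 ops::FillZerosLikeOpMaker);
    ```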


- Registering a CUDA Kernel in `.cu` files
    - Note that if the CUDA kernel is implemented using the `Eigen unsupported` module, then at the top of the `.cu` file, the macro definition `#define EIGEN_USE_GPU` is needed, for example

    ```cpp
    // If the Eigen unsupported module is used, define this before including header files.
    #define EIGEN_USE_GPU

    namespace ops = paddle::operators;
    REGISTER_OP_CUDA_KERNEL(mul, ops::MulKernel<paddle::platform::CUDADeviceContext, float>);
    REGISTER_OP_CUDA_KERNEL(mul_grad,
                           ops::MulGradKernel<paddle::platform::CUDADeviceContext, float>);
    ```

### Compilation

Run the following command to compile.

```bash
make mul_op
```

## Python Binding

The system will automatically bind the new operator to Python and link it into the generated library.

## Unit Tests

Unit tests for an operator include

1. comparing a forward operator's implementations on different devices,

2. comparing a backward operator's implementation on different devices, and

3. a gradient check for the backward operator.

Here, we introduce the [unit tests for `MulOp`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/tests/test_mul_op.py).

### Testing Forward Operators

A forward operator unit test inherits `OpTest` (which in turn inherits `unittest.TestCase`); the concrete checks are implemented in the `OpTest` base class. Testing a forward operator requires the following:

1. Defining input, output, and relevant attributes in the `setUp` method.

2. Generating random input data.

3. Implementing the same computation logic in a Python script, getting its output, and comparing it with the forward operator's own output.

4. Calling the gradient-checking functions to test the backward operator.

  ```python
  import unittest
  import numpy as np
  from op_test import OpTest


  class TestMulOp(OpTest):
      def setUp(self):
          self.op_type = "mul"
          self.inputs = {
              'X': np.random.random((32, 84)).astype("float32"),
              'Y': np.random.random((84, 100)).astype("float32")
          }
          self.outputs = {'Out': np.dot(self.inputs['X'], self.inputs['Y'])}

      def test_check_output(self):
          self.check_output()

      def test_check_grad_normal(self):
          self.check_grad(['X', 'Y'], 'Out', max_relative_error=0.5)

      def test_check_grad_ingore_x(self):
          self.check_grad(
              ['Y'], 'Out', max_relative_error=0.5, no_grad_set=set("X"))

      def test_check_grad_ingore_y(self):
          self.check_grad(
              ['X'], 'Out', max_relative_error=0.5, no_grad_set=set('Y'))
  ```

The code above first loads required packages. In addition, we have

- `self.op_type = "mul"` defines the type, which must be identical to the operator's registered type.
- `self.inputs` defines the inputs as `numpy.array`s and initializes them.
- `self.outputs` defines the outputs, completing the same computation as the operator in a Python script, and returns its result.

### Testing Backward Operators

Some key points in the gradient checks above include:

- `test_check_grad_normal` calls `check_grad` to validate the gradient's correctness and stability through numerical methods.
  - The first argument, `["X", "Y"]`, specifies the inputs with respect to which gradients are checked.
  - The second argument, `"Out"`, specifies the network's final output target `Out`.
  - The third argument, `max_relative_error`, specifies the maximum relative error tolerated during gradient checking.
- `test_check_grad_ingore_x` and `test_check_grad_ingore_y` test the cases where the gradient check is performed with respect to only one input.

### Compiling and Running


Any new unit testing file of the format `test_*.py` added to the directory `python/paddle/v2/framework/tests` is automatically picked up by the project's build.

Note that **unlike compiling an Op by itself, running its unit tests requires compiling the entire project** with the `WITH_TESTING` flag on, i.e. `cmake paddle_dir -DWITH_TESTING=ON`.

After successfully compiling the project, run the following command to run unit tests:

```bash
make test ARGS="-R test_mul_op -V"
```

Or,

```bash
ctest -R test_mul_op
```

## Remarks

- A unique set of files `*_op.h` (if applicable), `*_op.cc`, and `*_op.cu` (if applicable) must be created for each Op. Compiling will fail if multiple operators are included per file.
- The type with which an operator is registered needs to be identical to the Op's name. Calling `REGISTER_OP(B, ...)` in `A_op.cc` will cause unit testing failures.
- If the operator does not implement a CUDA kernel, please refrain from creating an empty `*_op.cu` file, or else unit tests will fail.
- If multiple operators rely on some shared methods, a file NOT named `*_op.*` can be created to store them, such as `gather.h`.