# Design Doc: Supporting new Device/Library

## Background

Deep learning has a high demand for computing resources. New high-performance devices and computing libraries appear frequently, so deep learning frameworks have to integrate them in a flexible and efficient manner.

On one hand, hardware and computing libraries usually do not have a one-to-one correspondence. For example, Intel CPUs support the Eigen and MKL computing libraries, while Nvidia GPUs support the Eigen and cuDNN computing libraries. We have to implement operator-specific kernels for each computing library.

On the other hand, users usually do not want to care about the low-level hardware and computing libraries when writing a neural network configuration. In Fluid, `Layer` is exposed in `Python`, and `Operator` is exposed in `C++`. Both `Layer` and `Operator` are hardware independent.

So, supporting a new Device/Library in Fluid becomes a challenge.


## Basic: Integrate A New Device/Library

For a general overview of Fluid, please refer to the [overview doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/read_source.md).

There are mainly three parts that we have to consider while integrating a new device/library:

- Place and DeviceContext: indicate the device id and manage hardware resources

- Memory and Tensor: malloc/free data on a certain device

- Math Functor and OpKernel: implement computing units on certain devices/libraries

### Place and DeviceContext

Please note that devices and computing libraries do not have a one-to-one correspondence. A device can support many computing libraries, and a computing library can also support several devices.

#### Place
Fluid uses class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/place.h#L55) to represent the device memory where data is located. If we add another device, we have to add the corresponding `DevicePlace`.

```
        |   CPUPlace
Place --|   CUDAPlace
        |   FPGAPlace
```

And `Place` is defined as follows:

```
typedef boost::variant<CUDAPlace, CPUPlace, FPGAPlace> Place;
```
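
To add a new device, we would first define its place type. A minimal sketch, assuming a hypothetical FPGA device (this is illustrative, not Fluid's actual code):

```cpp
// Hedged sketch, not Fluid's actual code: a new place type only needs to
// identify a device and provide value semantics before joining the variant.
struct FPGAPlace {
  FPGAPlace() : device(0) {}
  explicit FPGAPlace(int dev_id) : device(dev_id) {}

  bool operator==(const FPGAPlace& o) const { return device == o.device; }
  bool operator!=(const FPGAPlace& o) const { return !(*this == o); }

  int device;  // index of the FPGA card this place refers to
};
```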

#### DeviceContext

Fluid uses class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/device_context.h#L30) to manage the resources in different libraries, such as the CUDA stream in `CUDADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`.


```
                /->  CPUDeviceContext   
DeviceContext ---->  CUDADeviceContext  
                \->  FPGADeviceContext
```

An example for an Nvidia GPU is as follows:

- DeviceContext


```
class DeviceContext {
  virtual Place GetPlace() const = 0;
};  
```


- CUDADeviceContext


```
class CUDADeviceContext : public DeviceContext {
  Place GetPlace() const override { return place_; }
private:
  CUDAPlace place_;
  cudaStream_t stream_;
  cublasHandle_t cublas_handle_;
  std::unique_ptr<Eigen::GpuDevice> eigen_device_;  // binds with stream_
};
```
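
A new device's context follows the same pattern: it binds the device's place together with whatever runtime resources the vendor library needs. A minimal sketch for the hypothetical FPGA device (`fpga_stream_t` is an assumed vendor handle, not a real Fluid type):

```cpp
// Hedged sketch, not Fluid's actual code: a DeviceContext for FPGAPlace.
// fpga_stream_t stands in for whatever handle the vendor runtime exposes.
class FPGADeviceContext : public DeviceContext {
 public:
  explicit FPGADeviceContext(FPGAPlace place) : place_(place) {}
  Place GetPlace() const override { return place_; }

 private:
  FPGAPlace place_;
  fpga_stream_t stream_;  // created in the constructor, released in the destructor
};
```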

### Memory and Tensor


#### Memory module

Fluid provides the following [memory interfaces](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/memory/memory.h#L36):

```
template <typename Place>
void* Alloc(Place place, size_t size);

template <typename Place>
void Free(Place place, void* ptr);

template <typename Place>
size_t Used(Place place);
```

To implement these interfaces, we have to implement a `MemoryAllocator` for each device.
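
As a hedged sketch of the shape these implementations take for our hypothetical FPGA device (the `fpga_*` calls stand in for a vendor runtime API and are not real functions):

```cpp
// Hedged sketch, not Fluid's actual code: specializing the memory
// interfaces above for FPGAPlace. fpga_malloc/fpga_free/fpga_used
// stand in for the vendor's runtime API.
template <>
void* Alloc<platform::FPGAPlace>(platform::FPGAPlace place, size_t size) {
  return fpga_malloc(place.device, size);
}

template <>
void Free<platform::FPGAPlace>(platform::FPGAPlace place, void* ptr) {
  fpga_free(place.device, ptr);
}

template <>
size_t Used<platform::FPGAPlace>(platform::FPGAPlace place) {
  return fpga_used(place.device);
}
```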


#### Tensor

[Tensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/tensor.h#L36) holds data with some shape in a specific Place.

```cpp
class Tensor {
 public:
  /*! Return a pointer to mutable memory block. */
  template <typename T>
  inline T* data();

  /**
   * @brief   Return a pointer to mutable memory block.
   * @note    If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(platform::Place place);

  /**
   * @brief     Return a pointer to mutable memory block.
   *
   * @param[in] dims    The dimensions of the memory block.
   * @param[in] place   The place of the memory block.
   *
   * @note      If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(DDim dims, platform::Place place);

  /*! Resize the dimensions of the memory block. */
  inline Tensor& Resize(const DDim& dims);

  /*! Return the dimensions of the memory block. */
  inline const DDim& dims() const;

 private:
  /*! holds the memory block if allocated. */
  std::shared_ptr<Placeholder> holder_;

  /*! points to dimensions of memory block. */
  DDim dim_;
};
```

`Placeholder` is used to delay memory allocation; that is, we can first define a tensor, use `Resize` to configure its shape, and then call `mutable_data` to allocate the actual memory.

```cpp
paddle::framework::Tensor t;
paddle::platform::CPUPlace place;
// set size first
t.Resize({2, 3});
// allocate memory on CPU later
t.mutable_data<float>(place);
```



### Math Functor and OpKernel

Fluid implements computing units based on different DeviceContexts. Computing units that are shared between operators are put in the `operators/math` directory as basic functors.

Let's take [MaxOutFunctor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/math/maxouting.h#L27) as an example:

The interface is defined in the header file.

```
template <typename DeviceContext, typename T>
class MaxOutFunctor {
 public:
  void operator()(const DeviceContext& context, const framework::Tensor& input,
                  framework::Tensor* output, int groups);
};
```

The CPU implementation is in the `.cc` file:

```
template <typename T>
class MaxOutFunctor<platform::CPUDeviceContext, T> {
 public:
  void operator()(const platform::CPUDeviceContext& context,
                  const framework::Tensor& input, framework::Tensor* output,
                  int groups) {
    ...
  }
};
```

The CUDA implementation is in the `.cu` file:

```
template <typename T>
class MaxOutFunctor<platform::CUDADeviceContext, T> {
 public:
  void operator()(const platform::CUDADeviceContext& context,
                  const framework::Tensor& input, framework::Tensor* output,
                  int groups) {
    ...
  }
};
```


We first obtain the computing handle from a concrete DeviceContext and then compute on tensors.
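
For example, a CUDA functor body can fetch the cuBLAS handle held by the concrete `CUDADeviceContext` shown earlier (a hedged sketch; the `cublas_handle()` accessor name is an assumption for illustration):

```cpp
// Hedged sketch: library handles come from the concrete context rather
// than being created per call; cublas_handle() is an assumed accessor.
template <typename T>
void GemmOnContext(const platform::CUDADeviceContext& context,
                   const T* a, const T* b, T* c, int m, int n, int k) {
  cublasHandle_t handle = context.cublas_handle();
  // ... launch cublas<t>gemm on the stream bound to this context ...
}
```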

The implementation of an `OpKernel` is similar to that of the math functors; the extra thing we need to do is register the `OpKernel` in a global map.
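
A hedged sketch of such a kernel, modeled on the `MaxOutFunctor` above (the input/output/attribute names "X", "Out", and "groups" are illustrative):

```cpp
// Hedged sketch of an OpKernel that dispatches to the math functor.
template <typename DeviceContext, typename T>
class MaxOutKernel : public framework::OpKernel<T> {
 public:
  void Compute(const framework::ExecutionContext& context) const override {
    const auto* in = context.Input<framework::Tensor>("X");
    auto* out = context.Output<framework::Tensor>("Out");
    int groups = context.Attr<int>("groups");
    out->mutable_data<T>(context.GetPlace());

    math::MaxOutFunctor<DeviceContext, T> functor;
    functor(context.template device_context<DeviceContext>(), *in, out,
            groups);
  }
};
```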

Fluid provides different registration interfaces in `op_registry.h`.


Let's take [Crop](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/crop_op.cc#L134) operator as an example:

In the `.cc` file:

```
REGISTER_OP_CPU_KERNEL(crop, ops::CropKernel<float>);
REGISTER_OP_CPU_KERNEL(
    crop_grad, ops::CropGradKernel<paddle::platform::CPUDeviceContext, float>);
```

In the `.cu` file:

```
REGISTER_OP_CUDA_KERNEL(crop, ops::CropKernel<float>);
REGISTER_OP_CUDA_KERNEL(
    crop_grad, ops::CropGradKernel<paddle::platform::CUDADeviceContext, float>);
```


## Advanced topics: How to switch between different Device/Library

Generally, we will implement an OpKernel for every Device/Library an Operator supports, so we can easily train a convolutional neural network on a GPU. However, some OpKernels are not suitable for a specific device. For example, the `crf` operator can only run on the CPU, whereas most other operators can run on the GPU. To achieve high performance in such circumstances, we have to switch between different Devices/Libraries.
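
Kernel selection hinges on the key each kernel is registered under. A simplified sketch of that key (the authoritative definition is in the operator kernel type doc linked below):

```cpp
// Simplified sketch of the key identifying a registered kernel; switching
// between devices/libraries means choosing a kernel with a different key.
struct OpKernelType {
  proto::VarType::Type data_type_;  // e.g. FP32
  DataLayout data_layout_;          // e.g. NCHW
  platform::Place place_;           // e.g. CPUPlace, CUDAPlace
  LibraryType library_type_;        // e.g. kPlain, kMKLDNN, kCUDNN
};
```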


For more details, please refer to the following docs:

- operator kernel type [doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/multi_devices/operator_kernel_type.md)
- switch kernel [doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/execution/switch.md)