# Design Doc: Supporting new Device/Library

## Background

Deep learning has a high demand for computing resources. New high-performance devices and computing libraries appear frequently. A deep learning framework has to integrate these devices and libraries flexibly and efficiently.
On the one hand, hardware and computing libraries usually do not have a one-to-one correspondence. For example, Intel CPUs support the Eigen and MKL computing libraries, while Nvidia GPUs support the Eigen and cuDNN computing libraries. We have to implement operator-specific kernels for each computing library.
On the other hand, users usually do not want to care about the low-level hardware and computing libraries when writing a neural network configuration. In Fluid, `Layer` is exposed in `Python`, and `Operator` is exposed in `C++`. Both `Layer` and `Operator` are hardware independent.
So, how to support a new Device/Library in Fluid becomes a challenge.
## Basic: Integrate A New Device/Library
For a general overview of Fluid, please refer to the [overview doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md).
There are mainly three parts that we have to consider while integrating a new device/library:
- Place and DeviceContext: indicates the device id and manages hardware resources
- Memory and Tensor: malloc/free data on a certain device
- Math Functor and OpKernel: implement computing units on certain devices/libraries
### Place and DeviceContext
#### Place
Fluid uses class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55) to represent different devices and computing libraries. There are inheritance relationships between different kinds of `Place`.
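The exact definition lives in place.h; as a minimal sketch (assuming a virtual base with one subclass per device kind, which may differ from the real code), the hierarchy could look like:

```cpp
// Minimal sketch of a Place hierarchy; illustrative, not the real definition.
struct Place {
  virtual ~Place() = default;
};

// Host memory; CPU computing libraries such as Eigen/MKL run here.
struct CPUPlace : public Place {};

// One CUDA device, identified by its device id.
struct CUDAPlace : public Place {
  explicit CUDAPlace(int d) : device(d) {}
  int device;
};
```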
#### DeviceContext

Fluid uses class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30) to manage the resources of a specific kind of hardware, such as the CUDA stream in `CUDADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`.
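A hedged sketch of this hierarchy (the `CUDNNDeviceContext : public CUDADeviceContext` relationship is from the original; the member details are illustrative):

```cpp
#include <cuda_runtime.h>
#include <cudnn.h>

// Hedged sketch of the DeviceContext hierarchy; members are illustrative.
class DeviceContext {
 public:
  virtual ~DeviceContext() {}
};

class CPUDeviceContext : public DeviceContext {};

class CUDADeviceContext : public DeviceContext {
 public:
  // Each CUDA context owns the stream its kernels are launched on.
  cudaStream_t stream() const { return stream_; }

 private:
  cudaStream_t stream_;
};

class CUDNNDeviceContext : public CUDADeviceContext {
 private:
  cudnnHandle_t cudnn_handle_;  // cuDNN handle bound to this context
};
```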
### Memory and Tensor

#### memory module
Fluid provides the following [memory interfaces](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36):
```cpp
template <typename Place>
void* Alloc(Place place, size_t size);

template <typename Place>
void Free(Place place, void* ptr);

template <typename Place>
size_t Used(Place place);
```
To implement these interfaces, we have to implement a MemoryAllocator for each Device.
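Assuming the interfaces live in the `paddle::memory` namespace as in the linked header, a usage sketch looks like:

```cpp
#include "paddle/memory/memory.h"

// Hedged usage sketch of the place-templated memory interface.
void Example() {
  paddle::platform::CPUPlace cpu;

  // Allocate a raw buffer on the CPU place.
  void* buf = paddle::memory::Alloc(cpu, 1024 * sizeof(float));

  // ... use the buffer ...

  // Return the buffer to the allocator of the same place.
  paddle::memory::Free(cpu, buf);
}
```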
#### Tensor
[Tensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36) holds data with some shape in a specific Place.
```cpp
class Tensor {
 public:
  /*! Resize the dimensions of the memory block. */
  inline Tensor& Resize(const DDim& dims);

  /*! Return a pointer to mutable memory block. */
  template <typename T>
  inline T* mutable_data(platform::Place place);

  ...
};
```
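A typical call sequence, reconstructed as a hedged sketch around the `t.mutable_data(place)` fragment in the original: set the shape first, then let `mutable_data` allocate memory on the given `Place`.

```cpp
paddle::framework::Tensor t;
paddle::platform::CPUPlace place;
// set size first
t.Resize({2, 3});
// allocate memory on CPU later
t.mutable_data(place);
```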
### Math Functor and OpKernel
Fluid implements computing units based on different DeviceContexts. Some computing units are shared between operators. These common parts are put in the `operators/math` directory as basic Functors.
Let's take [MaxOutFunctor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/math/maxouting.h#L27) as an example:
```cpp
template <typename DeviceContext, typename T>
class MaxOutFunctor {
 public:
  void operator()(const DeviceContext& context, const framework::Tensor& input,
                  framework::Tensor* output, int groups);
};
```
The CPU implementation is in the .cc file:
```cpp
template <typename T>
class MaxOutFunctor<platform::CPUDeviceContext, T> {
 public:
  void operator()(const platform::CPUDeviceContext& context,
                  const framework::Tensor& input, framework::Tensor* output,
                  int groups);
};
```
The CUDA implementation is in the .cu file:
```cpp
template <typename T>
class MaxOutFunctor<platform::CUDADeviceContext, T> {
 public:
  void operator()(const platform::CUDADeviceContext& context,
                  const framework::Tensor& input, framework::Tensor* output,
                  int groups);
};
```
We get the computing handle from a concrete DeviceContext and perform the computation on tensors.
The implementation of `OpKernel` is similar to the math functors; the extra thing we need to do is to register the OpKernel in a global map.
Fluid provides different register interfaces in `op_registry.h`.
Let's take [Crop](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/crop_op.cc#L134) operator as an example:
The CPU kernel is registered in the .cc file:

```cpp
REGISTER_OP_CPU_KERNEL(crop, ops::CropKernel<float>);
```

and the CUDA kernel in the .cu file:

```cpp
REGISTER_OP_CUDA_KERNEL(crop, ops::CropOpCUDAKernel<float>);
```
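The registered kernel class follows Fluid's `OpKernel` interface and reads its inputs and outputs through an `ExecutionContext`. A hedged sketch of its shape (the real `CropKernel` implements the actual cropping logic):

```cpp
template <typename T>
class CropKernel : public framework::OpKernel<T> {
 public:
  void Compute(const framework::ExecutionContext& ctx) const override {
    auto* x = ctx.Input<framework::Tensor>("X");     // input tensor
    auto* out = ctx.Output<framework::Tensor>("Out");
    out->mutable_data<T>(ctx.GetPlace());            // allocate on the kernel's place
    // ... cropping math mapping x to out, driven by the DeviceContext ...
  }
};
```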
## Advanced topics: How to switch between different Device/Library
Generally, we will implement OpKernels for all Devices/Libraries of an Operator. We can easily train a Convolutional Neural Network on a GPU. However, some OpKernels are not suitable for a specific Device. For example, the crf operator can only run on the CPU, whereas most other operators can also run on the GPU. To achieve high performance in such circumstances, we have to switch between different Devices/Libraries.
We will discuss how to implement an efficient OpKernel switch policy.
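As a rough illustration of what such a policy has to decide (a purely hypothetical sketch; none of these names are Fluid APIs): prefer the kernel registered for the requested place, and fall back to the CPU kernel, with data transfers around it, when no such kernel exists.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Purely hypothetical sketch of a kernel-switch policy; these names are
// illustrative and not part of Fluid's real API.
enum class PlaceKind { kCPU, kCUDA };
using KernelFn = std::function<void()>;
using KernelKey = std::pair<std::string, PlaceKind>;
using KernelMap = std::map<KernelKey, KernelFn>;

KernelFn SelectKernel(const KernelMap& kernels, const std::string& op_type,
                      PlaceKind preferred) {
  auto it = kernels.find({op_type, preferred});
  if (it != kernels.end()) return it->second;
  // Fall back to the CPU kernel (e.g. crf registers only one); the runtime
  // must insert device-to-host / host-to-device copies around this kernel.
  return kernels.at({op_type, PlaceKind::kCPU});
}
```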
<spanid="design-doc-support-new-device-library"></span><h1>Design Doc: Support new Device/Library<aclass="headerlink"href="#design-doc-support-new-device-library"title="Permalink to this headline">¶</a></h1>
<spanid="design-doc-supporting-new-device-library"></span><h1>Design Doc: Supporting new Device/Library<aclass="headerlink"href="#design-doc-supporting-new-device-library"title="Permalink to this headline">¶</a></h1>
<divclass="section"id="background">
<spanid="background"></span><h2>Background<aclass="headerlink"href="#background"title="Permalink to this headline">¶</a></h2>
<p>Deep learning has a high demand for computing resources. New high-performance device and computing library are coming constantly. The deep learning framework has to integrate these high-performance device and computing library flexibly.</p>
<p>On the one hand, hardware and computing library are not usually one-to-one coresponding relations. For example, in Intel CPU, there are Eigen and MKL computing library. And in Nvidia GPU, there are Eigen and cuDNN computing library. We have to implement specific kernels for an operator for each computing library.</p>
<p>On the other hand, users usually do not want to care about the low-level hardware and computing library when writing a neural network configuration. In Fluid, <codeclass="docutils literal"><spanclass="pre">Layer</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">Python</span></code>, and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">C++</span></code>. Both <codeclass="docutils literal"><spanclass="pre">Layer</span></code> and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> are independent on hardwares.</p>
<p>Deep learning has a high demand for computing resources. New high-performance devices and computing libraries are appearing very frequently. Deep learning frameworks have to integrate these high-performance devices and computing libraries flexibly and efficiently.</p>
<p>On one hand, hardware and computing libraries usually do not have a one-to-one correspondence. For example,Intel CPUs support Eigen and MKL computing libraries while Nvidia GPUs support Eigen and cuDNN computing libraries. We have to implement operator specific kernels for each computing library.</p>
<p>On the other hand, users usually do not want to care about the low-level hardware and computing libraries when writing a neural network configuration. In Fluid, <codeclass="docutils literal"><spanclass="pre">Layer</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">Python</span></code>, and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">C++</span></code>. Both <codeclass="docutils literal"><spanclass="pre">Layer</span></code> and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> are hardware independent.</p>
<p>So, how to support a new Device/Library in Fluid becomes a challenge.</p>
<spanid="basic-integrate-a-new-device-library"></span><h2>Basic: Integrate A New Device/Library<aclass="headerlink"href="#basic-integrate-a-new-device-library"title="Permalink to this headline">¶</a></h2>
<p>For a general overview of fluid, please refer to <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md">overview doc</a>.</p>
<p>There are mainly there parts we have to consider in integrating a new device/library:</p>
<p>For a general overview of fluid, please refer to the <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md">overview doc</a>.</p>
<p>There are mainly three parts that we have to consider while integrating a new device/library:</p>
<ulclass="simple">
<li>Place and DeviceContext: indicates the device id and manages hardware resources</li>
<li>Memory and Tensor: malloc/free data on certain device</li>
<li>Math Functor and OpKernel: implement computing unit on certain device/library</li>
<li>Math Functor and OpKernel: implement computing unit on certain devices/libraries</li>
</ul>
<divclass="section"id="place-and-devicecontext">
<spanid="place-and-devicecontext"></span><h3>Place and DeviceContext<aclass="headerlink"href="#place-and-devicecontext"title="Permalink to this headline">¶</a></h3>
<divclass="section"id="place">
<spanid="place"></span><h4>Place<aclass="headerlink"href="#place"title="Permalink to this headline">¶</a></h4>
<p>Fluid use class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55">Place</a> to represent specific device and computing library. There are inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">Place</span></code>.</p>
<p>Fluid uses class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55">Place</a> to represent different devices and computing libraries. There are inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">Place</span></code>.</p>
<spanid="devicecontext"></span><h4>DeviceContext<aclass="headerlink"href="#devicecontext"title="Permalink to this headline">¶</a></h4>
<p>Fluid use class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30">DeviceContext</a> to manage the resources in certain hardware, such as CUDA stream in <codeclass="docutils literal"><spanclass="pre">CDUADeviceContext</span></code>. There are also inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">DeviceContext</span></code>.</p>
<p>Fluid uses class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30">DeviceContext</a> to manage the resources in different hardwares, such as CUDA stream in <codeclass="docutils literal"><spanclass="pre">CDUADeviceContext</span></code>. There are also inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">DeviceContext</span></code>.</p>
<spanid="memory-and-tensor"></span><h3>Memory and Tensor<aclass="headerlink"href="#memory-and-tensor"title="Permalink to this headline">¶</a></h3>
<divclass="section"id="memory-module">
<spanid="memory-module"></span><h4>memory module<aclass="headerlink"href="#memory-module"title="Permalink to this headline">¶</a></h4>
<p>Fluid provide following <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36">memory interfaces</a>:</p>
<p>Fluid provides the following <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36">memory interfaces</a>:</p>
<p>To implementing these interfaces, we have to implement MemoryAllocator for specific Device</p>
<p>To implementing these interfaces, we have to implement MemoryAllocator for different Devices</p>
</div>
<divclass="section"id="tensor">
<spanid="tensor"></span><h4>Tensor<aclass="headerlink"href="#tensor"title="Permalink to this headline">¶</a></h4>
<p><aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36">Tensor</a> holds data with some shape in certain Place.</p>
<p><aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36">Tensor</a> holds data with some shape in a specific Place.</p>
<spanid="math-functor-and-opkernel"></span><h3>Math Functor and OpKernel<aclass="headerlink"href="#math-functor-and-opkernel"title="Permalink to this headline">¶</a></h3>
<p>Fluid implements computing unit based on different DeviceContext. Some computing unit is shared between operators. These common part will be put in operators/math directory as basic Functors.</p>
<p>Fluid implements computing units based on different DeviceContexts. Some computing units are shared between operators. This common part will be put in operators/math directory as basic Functors.</p>
<p>Let’s take <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/math/maxouting.h#L27">MaxOutFunctor</a> as an example:</p>
<p>We get computing handle from concrete DeviceContext, and make compution on tensors.</p>
<p>The implement of <codeclass="docutils literal"><spanclass="pre">OpKernel</span></code> is similar to math functors, the extra thing we need to do is registering the OpKernel to global map.</p>
<p>Fluid provides different register interface in op_registry.h</p>
<p>We get computing handle from a concrete DeviceContext, and make compution on tensors.</p>
<p>The implemention of <codeclass="docutils literal"><spanclass="pre">OpKernel</span></code> is similar to math functors, the extra thing we need to do is to register the OpKernel in a global map.</p>
<p>Fluid provides different register interfaces in op_registry.h</p>
<p>Let’s take <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/crop_op.cc#L134">Crop</a> operator as an example:</p>
<spanid="advanced-topics-how-to-switch-between-different-device-library"></span><h2>Advanced topics: How to switch between different Device/Library<aclass="headerlink"href="#advanced-topics-how-to-switch-between-different-device-library"title="Permalink to this headline">¶</a></h2>
<p>Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale in a specific Device. For example, crf operator can be only run at CPU, whereas most other operators can be run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library.</p>
<p>Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale on a specific Device. For example, crf operator can only run on CPU, whereas most other operators can run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library.</p>
<p>We will discuss how to implement an efficient OpKernel switch policy.</p>
Deep learning has a high demand for computing resources. New high-performance device and computing library are coming constantly. The deep learning framework has to integrate these high-performance device and computing library flexibly.
Deep learning has a high demand for computing resources. New high-performance devices and computing libraries are appearing very frequently. Deep learning frameworks have to integrate these high-performance devices and computing libraries flexibly and efficiently.
On the one hand, hardware and computing library are not usually one-to-one coresponding relations. For example, in Intel CPU, there are Eigen and MKL computing library. And in Nvidia GPU, there are Eigen and cuDNN computing library. We have to implement specific kernels for an operator for each computing library.
On one hand, hardware and computing libraries usually do not have a one-to-one correspondence. For example,Intel CPUs support Eigen and MKL computing libraries while Nvidia GPUs support Eigen and cuDNN computing libraries. We have to implement operator specific kernels for each computing library.
On the other hand, users usually do not want to care about the low-level hardware and computing library when writing a neural network configuration. In Fluid, `Layer` is exposed in `Python`, and `Operator` is exposed in `C++`. Both `Layer` and `Operator` are independent on hardwares.
On the other hand, users usually do not want to care about the low-level hardware and computing libraries when writing a neural network configuration. In Fluid, `Layer` is exposed in `Python`, and `Operator` is exposed in `C++`. Both `Layer` and `Operator` are hardware independent.
So, how to support a new Device/Library in Fluid becomes a challenge.
## Basic: Integrate A New Device/Library
For a general overview of fluid, please refer to [overview doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md).
For a general overview of fluid, please refer to the [overview doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md).
There are mainly there parts we have to consider in integrating a new device/library:
There are mainly three parts that we have to consider while integrating a new device/library:
- Place and DeviceContext: indicates the device id and manages hardware resources
- Memory and Tensor: malloc/free data on certain device
- Math Functor and OpKernel: implement computing unit on certain device/library
- Math Functor and OpKernel: implement computing unit on certain devices/libraries
### Place and DeviceContext
#### Place
Fluid use class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55) to represent specific device and computing library. There are inheritance relationships between different kinds of `Place`.
Fluid uses class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55) to represent different devices and computing libraries. There are inheritance relationships between different kinds of `Place`.
Fluid use class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30) to manage the resources in certain hardware, such as CUDA stream in `CDUADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`.
Fluid uses class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30) to manage the resources in different hardwares, such as CUDA stream in `CDUADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`.
@@ -93,7 +93,7 @@ class CUDNNDeviceContext : public CUDADeviceContext {
#### memory module
Fluid provide following [memory interfaces](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36):
Fluid provides the following [memory interfaces](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36):
```
template <typename Place>
...
...
@@ -106,12 +106,12 @@ template <typename Place>
size_t Used(Place place);
```
To implementing these interfaces, we have to implement MemoryAllocator for specific Device
To implementing these interfaces, we have to implement MemoryAllocator for different Devices
#### Tensor
[Tensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36) holds data with some shape in certain Place.
[Tensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36) holds data with some shape in a specific Place.
```cpp
class Tensor {
...
...
@@ -168,7 +168,7 @@ t.mutable_data(place);
### Math Functor and OpKernel
Fluid implements computing unit based on different DeviceContext. Some computing unit is shared between operators. These common part will be put in operators/math directory as basic Functors.
Fluid implements computing units based on different DeviceContexts. Some computing units are shared between operators. This common part will be put in operators/math directory as basic Functors.
Let's take [MaxOutFunctor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/math/maxouting.h#L27) as an example:
...
...
@@ -183,7 +183,7 @@ class MaxOutFunctor {
};
```
CPU implement in .cc file
CPU implemention is in .cc file
```
template <typename T>
...
...
@@ -197,7 +197,7 @@ class MaxOutFunctor<platform::CPUDeviceContext, T> {
};
```
CUDA implement in .cu file
CUDA implemention is in .cu file
```
template <typename T>
...
...
@@ -212,11 +212,11 @@ class MaxOutFunctor<platform::CUDADeviceContext, T> {
```
We get computing handle from concrete DeviceContext, and make compution on tensors.
We get computing handle from a concrete DeviceContext, and make compution on tensors.
The implement of `OpKernel` is similar to math functors, the extra thing we need to do is registering the OpKernel to global map.
The implemention of `OpKernel` is similar to math functors, the extra thing we need to do is to register the OpKernel in a global map.
Fluid provides different register interface in op_registry.h
Fluid provides different register interfaces in op_registry.h
Let's take [Crop](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/crop_op.cc#L134) operator as an example:
...
...
@@ -240,7 +240,7 @@ REGISTER_OP_CUDA_KERNEL(
## Advanced topics: How to switch between different Device/Library
Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale in a specific Device. For example, crf operator can be only run at CPU, whereas most other operators can be run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library.
Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale on a specific Device. For example, crf operator can only run on CPU, whereas most other operators can run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library.
We will discuss how to implement an efficient OpKernel switch policy.
<spanid="design-doc-support-new-device-library"></span><h1>Design Doc: Support new Device/Library<aclass="headerlink"href="#design-doc-support-new-device-library"title="永久链接至标题">¶</a></h1>
<spanid="design-doc-supporting-new-device-library"></span><h1>Design Doc: Supporting new Device/Library<aclass="headerlink"href="#design-doc-supporting-new-device-library"title="永久链接至标题">¶</a></h1>
<p>Deep learning has a high demand for computing resources. New high-performance device and computing library are coming constantly. The deep learning framework has to integrate these high-performance device and computing library flexibly.</p>
<p>On the one hand, hardware and computing library are not usually one-to-one coresponding relations. For example, in Intel CPU, there are Eigen and MKL computing library. And in Nvidia GPU, there are Eigen and cuDNN computing library. We have to implement specific kernels for an operator for each computing library.</p>
<p>On the other hand, users usually do not want to care about the low-level hardware and computing library when writing a neural network configuration. In Fluid, <codeclass="docutils literal"><spanclass="pre">Layer</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">Python</span></code>, and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">C++</span></code>. Both <codeclass="docutils literal"><spanclass="pre">Layer</span></code> and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> are independent on hardwares.</p>
<p>Deep learning has a high demand for computing resources. New high-performance devices and computing libraries are appearing very frequently. Deep learning frameworks have to integrate these high-performance devices and computing libraries flexibly and efficiently.</p>
<p>On one hand, hardware and computing libraries usually do not have a one-to-one correspondence. For example,Intel CPUs support Eigen and MKL computing libraries while Nvidia GPUs support Eigen and cuDNN computing libraries. We have to implement operator specific kernels for each computing library.</p>
<p>On the other hand, users usually do not want to care about the low-level hardware and computing libraries when writing a neural network configuration. In Fluid, <codeclass="docutils literal"><spanclass="pre">Layer</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">Python</span></code>, and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> is exposed in <codeclass="docutils literal"><spanclass="pre">C++</span></code>. Both <codeclass="docutils literal"><spanclass="pre">Layer</span></code> and <codeclass="docutils literal"><spanclass="pre">Operator</span></code> are hardware independent.</p>
<p>So, how to support a new Device/Library in Fluid becomes a challenge.</p>
<spanid="basic-integrate-a-new-device-library"></span><h2>Basic: Integrate A New Device/Library<aclass="headerlink"href="#basic-integrate-a-new-device-library"title="永久链接至标题">¶</a></h2>
<p>For a general overview of fluid, please refer to <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md">overview doc</a>.</p>
<p>There are mainly there parts we have to consider in integrating a new device/library:</p>
<p>For a general overview of fluid, please refer to the <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md">overview doc</a>.</p>
<p>There are mainly three parts that we have to consider while integrating a new device/library:</p>
<ulclass="simple">
<li>Place and DeviceContext: indicates the device id and manages hardware resources</li>
<li>Memory and Tensor: malloc/free data on certain device</li>
<li>Math Functor and OpKernel: implement computing unit on certain device/library</li>
<li>Math Functor and OpKernel: implement computing unit on certain devices/libraries</li>
</ul>
<divclass="section"id="place-and-devicecontext">
<spanid="place-and-devicecontext"></span><h3>Place and DeviceContext<aclass="headerlink"href="#place-and-devicecontext"title="永久链接至标题">¶</a></h3>
<p>Fluid use class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55">Place</a> to represent specific device and computing library. There are inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">Place</span></code>.</p>
<p>Fluid uses class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55">Place</a> to represent different devices and computing libraries. There are inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">Place</span></code>.</p>
<p>Fluid use class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30">DeviceContext</a> to manage the resources in certain hardware, such as CUDA stream in <codeclass="docutils literal"><spanclass="pre">CDUADeviceContext</span></code>. There are also inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">DeviceContext</span></code>.</p>
<p>Fluid uses class <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30">DeviceContext</a> to manage the resources in different hardwares, such as CUDA stream in <codeclass="docutils literal"><spanclass="pre">CDUADeviceContext</span></code>. There are also inheritance relationships between different kinds of <codeclass="docutils literal"><spanclass="pre">DeviceContext</span></code>.</p>
<p>Fluid provide following <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36">memory interfaces</a>:</p>
<p>Fluid provides the following <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36">memory interfaces</a>:</p>
<p><aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36">Tensor</a> holds data with some shape in certain Place.</p>
<p><aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36">Tensor</a> holds data with some shape in a specific Place.</p>
<spanid="math-functor-and-opkernel"></span><h3>Math Functor and OpKernel<aclass="headerlink"href="#math-functor-and-opkernel"title="永久链接至标题">¶</a></h3>
<p>Fluid implements computing unit based on different DeviceContext. Some computing unit is shared between operators. These common part will be put in operators/math directory as basic Functors.</p>
<p>Fluid implements computing units based on different DeviceContexts. Some computing units are shared between operators. This common part will be put in operators/math directory as basic Functors.</p>
<p>Let’s take <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/math/maxouting.h#L27">MaxOutFunctor</a> as an example:</p>
<p>We get computing handle from concrete DeviceContext, and make compution on tensors.</p>
<p>The implement of <codeclass="docutils literal"><spanclass="pre">OpKernel</span></code> is similar to math functors, the extra thing we need to do is registering the OpKernel to global map.</p>
<p>Fluid provides different register interface in op_registry.h</p>
<p>We get computing handle from a concrete DeviceContext, and make compution on tensors.</p>
<p>The implemention of <codeclass="docutils literal"><spanclass="pre">OpKernel</span></code> is similar to math functors, the extra thing we need to do is to register the OpKernel in a global map.</p>
<p>Fluid provides different register interfaces in op_registry.h</p>
<p>Let’s take <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/crop_op.cc#L134">Crop</a> operator as an example:</p>
<spanid="advanced-topics-how-to-switch-between-different-device-library"></span><h2>Advanced topics: How to switch between different Device/Library<aclass="headerlink"href="#advanced-topics-how-to-switch-between-different-device-library"title="永久链接至标题">¶</a></h2>
<p>Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale in a specific Device. For example, crf operator can be only run at CPU, whereas most other operators can be run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library.</p>
<p>Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale on a specific Device. For example, crf operator can only run on CPU, whereas most other operators can run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library.</p>
<p>We will discuss how to implement an efficient OpKernel switch policy.</p>