Commit c3c30bb0 authored by Travis CI

Deploy to GitHub Pages: ea5d6eae

Parent 8f6e1b74
@@ -25,13 +25,14 @@ There are mainly three parts that we have to consider while integrating a new device/library:
### Place and DeviceContext
+Please note that devices and computing libraries do not correspond one to one. A device can have many computing libraries, and a computing library can also support several devices.
#### Place
-Fluid uses class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55) to represent different devices and computing libraries. There are inheritance relationships between different kinds of `Place`.
+Fluid uses class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55) to represent the device memory where data is located. If we add another device, we have to add the corresponding `DevicePlace`.
```
-        | CPUPlace  --> MKLDNNPlace
-Place --| CUDAPlace --> CUDNNPlace
+        | CPUPlace
+Place --| CUDAPlace
        | FPGAPlace
```
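The next hunk shows that `Place` is defined as a `boost::variant` over the concrete place types. For illustration, here is a minimal, self-contained sketch (not Paddle's actual headers; the struct members are simplified assumptions) of how device-specific behaviour can be dispatched over such a variant with a visitor:

```cpp
// Minimal sketch of a variant-based Place; the members are illustrative only.
#include <boost/variant.hpp>
#include <iostream>

struct CPUPlace {};
struct CUDAPlace { int device; };  // hypothetical: a real CUDAPlace would store a device id
struct FPGAPlace {};

typedef boost::variant<CUDAPlace, CPUPlace, FPGAPlace> Place;

// A visitor runs the branch that matches whichever alternative the Place holds.
struct PlacePrinter : public boost::static_visitor<void> {
  void operator()(const CPUPlace&) const { std::cout << "CPUPlace\n"; }
  void operator()(const CUDAPlace& p) const { std::cout << "CUDAPlace(" << p.device << ")\n"; }
  void operator()(const FPGAPlace&) const { std::cout << "FPGAPlace\n"; }
};

int main() {
  Place place = CUDAPlace{0};
  boost::apply_visitor(PlacePrinter(), place);  // prints "CUDAPlace(0)"
  return 0;
}
```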
@@ -43,7 +44,7 @@ typedef boost::variant<CUDAPlace, CPUPlace, FPGAPlace> Place;
#### DeviceContext
-Fluid uses class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30) to manage the resources in different hardwares, such as the CUDA stream in `CUDADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`.
+Fluid uses class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30) to manage the resources in different libraries, such as the CUDA stream in `CUDADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`.
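To make this concrete, here is a hedged sketch (not the actual Paddle classes; the method names are assumptions) of a `DeviceContext` hierarchy in which the CUDA context owns the stream mentioned above; it needs the CUDA toolkit to compile:

```cpp
// Sketch of per-library contexts that own their resources (e.g. a CUDA stream).
#include <cuda_runtime.h>

class DeviceContext {
 public:
  virtual ~DeviceContext() = default;
  virtual void Wait() const {}  // block until all work queued on this context finishes
};

class CPUDeviceContext : public DeviceContext {
  // CPU work is synchronous in this sketch, so there is nothing extra to wait for.
};

class CUDADeviceContext : public DeviceContext {
 public:
  CUDADeviceContext() { cudaStreamCreate(&stream_); }
  ~CUDADeviceContext() override { cudaStreamDestroy(stream_); }
  void Wait() const override { cudaStreamSynchronize(stream_); }
  cudaStream_t stream() const { return stream_; }

 private:
  cudaStream_t stream_;  // the per-context resource the paragraph above refers to
};
```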
@@ -106,7 +107,7 @@ template <typename Place>
size_t Used(Place place);
-To implementing these interfaces, we have to implement MemoryAllocator for different Devices
+To implement these interfaces, we have to implement MemoryAllocator for different Devices.
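As an illustration of a Place-parameterized memory interface: only `Used` appears in the hunk above, so the `Alloc`/`Free` names below are assumptions, and the CUDA side is only indicated in comments. This is a sketch, not the real allocator:

```cpp
// Sketch: the Place argument selects which device-specific allocator is used.
#include <cstddef>
#include <cstdlib>

struct CPUPlace {};
struct CUDAPlace { int device; };

void* Alloc(CPUPlace, size_t size) { return std::malloc(size); }
void  Free(CPUPlace, void* ptr)    { std::free(ptr); }

// A hypothetical CUDA overload would call cudaMalloc/cudaFree instead:
// void* Alloc(CUDAPlace place, size_t size);
// void  Free(CUDAPlace place, void* ptr);

int main() {
  CPUPlace cpu;
  void* buf = Alloc(cpu, 1024);  // overload resolution picks the CPU allocator
  Free(cpu, buf);
  return 0;
}
```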
#### Tensor
@@ -243,6 +244,7 @@ REGISTER_OP_CUDA_KERNEL(
Generally, we will implement an OpKernel for every Device/Library of an Operator, so we can easily train a Convolutional Neural Network on a GPU. However, some OpKernels are not suitable for a specific Device. For example, the crf operator can only run on CPU, whereas most other operators can also run on GPU. To achieve high performance in such circumstances, we have to switch between different Devices/Libraries.
-We will discuss how to implement an efficient OpKernel switch policy.
+For more details, please refer to the following docs:
-- TBD
+- operator kernel type [doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/operator_kernel_type.md)
+- switch kernel [doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/switch_kernel.md)
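As a toy illustration of the switching idea (a hedged sketch, not the policy described in the linked docs): kernels can be registered per (operator, place), and the runtime can fall back to the CPU kernel when the preferred place, for example CUDA for a crf-like operator, has no implementation:

```cpp
// Sketch of a per-(op, place) kernel registry with CPU fallback.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>

enum class PlaceKind { kCPU, kCUDA };
using Kernel = std::function<void()>;
using KernelKey = std::pair<std::string, PlaceKind>;

std::map<KernelKey, Kernel>& Registry() {
  static std::map<KernelKey, Kernel> registry;
  return registry;
}

void Run(const std::string& op, PlaceKind preferred) {
  auto it = Registry().find({op, preferred});
  if (it == Registry().end()) {
    // Switch: fall back to the CPU kernel when the preferred place has none.
    it = Registry().find({op, PlaceKind::kCPU});
  }
  if (it == Registry().end()) {
    std::cerr << "no kernel registered for " << op << "\n";
    return;
  }
  it->second();
}

int main() {
  Registry()[{"mul", PlaceKind::kCPU}]  = [] { std::cout << "mul on CPU\n"; };
  Registry()[{"mul", PlaceKind::kCUDA}] = [] { std::cout << "mul on CUDA\n"; };
  Registry()[{"crf", PlaceKind::kCPU}]  = [] { std::cout << "crf on CPU\n"; };

  Run("mul", PlaceKind::kCUDA);  // runs the CUDA kernel
  Run("crf", PlaceKind::kCUDA);  // no CUDA kernel, switches to the CPU one
  return 0;
}
```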
@@ -2376,7 +2376,7 @@
{
"name" : "pooltype",
"type" : "string",
-"comment" : "(int, default AVERAGE) the pooling pooltype of SequencePoolOp.",
+"comment" : "(string, default 'AVERAGE') the pooling pooltype of SequencePoolOp.",
"generated" : 0
} ]
},{
......
The source diff for this file is too large to display. You can view the blob instead.