Commit 7045fad1 authored by: T Travis CI

Deploy to GitHub Pages: 35572355

Parent 8d8d2493
......@@ -28,6 +28,51 @@ The goal of float16 is to serve as a key for the executor to find and run the co
- [Eigen](https://github.com/RLovelett/eigen) >= 3.3 supports float16 calculation on both GPU and CPU using the `Eigen::half` class. It is mostly useful for Nvidia GPUs because of the overloaded arithmetic operators using cuda intrinsics. It falls back to using software emulation on CPU for calculation and there is no special treatment to ARM processors.
- [ARM compute library](https://github.com/ARM-software/ComputeLibrary) >= 17.02.01 supports NEON FP16 kernels (requires ARMv8.2-A CPU).
### CUDA version issue
There are currently three versions of CUDA that support the `__half` data type, namely CUDA 7.5, 8.0, and 9.0.
CUDA 7.5 and 8.0 define `__half` as a simple struct holding a single `uint16_t` value (see [`cuda_fp16.h`](https://github.com/ptillet/isaac/blob/9212ab5a3ddbe48f30ef373f9c1fb546804c7a8c/include/isaac/external/CUDA/cuda_fp16.h)) as follows:
```
typedef struct __align__(2) {
  unsigned short x;
} __half;

typedef __half half;
```
This struct does not define any overloaded arithmetic operators, so you must call the `__hadd` intrinsic directly instead of using `+` to add two half values:
```
__global__ void Add() {
  half a, b, c;
  c = __hadd(a, b);  // correct
  c = a + b;         // compiler error: no operator "+" matches these operands
}
```
CUDA 9.0 provides a major update to the half data type. The related code can be found in the updated [`cuda_fp16.h`](https://github.com/ptillet/isaac/blob/master/include/isaac/external/CUDA/cuda_fp16.h) and the newly added [`cuda_fp16.hpp`](https://github.com/ptillet/isaac/blob/master/include/isaac/external/CUDA/cuda_fp16.hpp).
Essentially, CUDA 9.0 renames the original `__half` type of 7.5 and 8.0 to `__half_raw`, and defines a new `__half` class type that has constructors, conversion operators, and overloaded arithmetic operators, as follows:
```
typedef struct __CUDA_ALIGN__(2) {
  unsigned short x;
} __half_raw;

struct __CUDA_ALIGN__(2) __half {
 protected:
  unsigned short __x;

 public:
  // constructors and conversion operators from/to
  // __half_raw and other built-in data types
};

typedef __half half;

__device__ __forceinline__
__half operator+(const __half &lh, const __half &rh) {
  return __hadd(lh, rh);
}

// Other overloaded operators
```
This new design makes `c = a + b` work correctly for the CUDA half data type.
## Implementation
The float16 class holds its 16-bit data internally in a `uint16_t`.
......