慢慢CG / Mace (forked from Xiaomi / Mace)
Commit a6c8ff43
Authored May 14, 2018 by 刘琦 (Liu Qi)
Merge branch 'docs' into 'master'
Update docs

See merge request !468
Parents: 9fb0d67e, af3b4630

11 changed files with 293 additions and 255 deletions (+293 −255)
docs/conf.py (+0 −3)
docs/development/adding_a_new_op.md (+5 −1)
docs/development/contributing.md (+2 −1)
docs/development/memory_layout.rst (+109 −54)
docs/faq.md (+1 −1)
docs/getting_started/create_a_model_deployment.rst (+59 −96)
docs/getting_started/docker.md (+27 −0)
docs/getting_started/docker.rst (+0 −19)
docs/getting_started/models/demo_app_models.yaml (+46 −0)
docs/getting_started/op_lists.rst (+43 −79)
docs/index.rst (+1 −1)
docs/conf.py

@@ -10,9 +10,6 @@ project = u'MiAI Compute Engine'
 author = u'%s Developers' % project
 copyright = u'2018, %s' % author
 version = u'0.6'
 release = u'0.6.0 alpha'
-source_parsers = {
-    '.md': recommonmark.parser.CommonMarkParser,
-}
docs/development/adding_a_new_op.md

@@ -3,7 +3,7 @@ Adding a new Op
 You can create a custom op if it is not supported yet.

-To add a custom op, you need to finish the following steps.
+To add a custom op, you need to finish the following steps:

 Define the Op class
 --------------------

@@ -95,3 +95,7 @@ Add test and benchmark
 ----------------------
 It's strongly recommended to add unit test and micro benchmark for your
 new Op. If you wish to contribute back, it's required.
+
+Document the new Op
+---------------------
+Finally, add an entry in the operator table in the document.
docs/development/contributing.md

@@ -34,7 +34,8 @@ clang-format -style="{BasedOnStyle: google, \
 C++ logging guideline
 ---------------------
-The rule of VLOG level:
+VLOG is used for verbose logging, which is configured by the environment variable
+`MACE_CPP_MIN_VLOG_LEVEL`. The guideline for VLOG levels is as follows:
 ```
 0. Ad hoc debug logging, should only be added in test or temporary ad hoc
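Because the level is read from the environment, verbosity can be raised for a single run without rebuilding. A minimal sketch of that scoping (the `sh -c` echo stands in for an actual MACE binary, which is an illustrative substitution):

```python
import os
import subprocess

# MACE reads MACE_CPP_MIN_VLOG_LEVEL from its environment; set it for one
# child process only, leaving the parent environment untouched.
env = dict(os.environ, MACE_CPP_MIN_VLOG_LEVEL="2")
out = subprocess.run(
    ["sh", "-c", "echo level=$MACE_CPP_MIN_VLOG_LEVEL"],  # stand-in for a MACE binary
    env=env, capture_output=True, text=True,
).stdout.strip()
print(out)  # the child sees the raised level
```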
docs/development/memory_layout.rst
@@ -5,17 +5,21 @@ CPU runtime memory layout
-------------------------
The CPU tensor buffer is organized in the following order:
+-----------------------------+--------------+
| Tensor type | Buffer |
+=============================+==============+
| Intermediate input/output | NCHW |
+-----------------------------+--------------+
| Convolution Filter | OIHW |
+-----------------------------+--------------+
| Depthwise Convolution Filter| MIHW |
+-----------------------------+--------------+
| 1-D Argument, length = W | W |
+-----------------------------+--------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Tensor type
     - Buffer
   * - Intermediate input/output
     - NCHW
   * - Convolution Filter
     - OIHW
   * - Depthwise Convolution Filter
     - MIHW
   * - 1-D Argument, length = W
     - W
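The row-major orderings in the table above determine how a logical coordinate maps to a flat buffer offset; a quick sketch (function names are illustrative, not MACE APIs):

```python
def nchw_offset(n, c, h, w, C, H, W):
    """Flat offset of element (n, c, h, w) in a row-major NCHW buffer."""
    return ((n * C + c) * H + h) * W + w

def oihw_offset(o, i, h, w, I, H, W):
    """Flat offset of element (o, i, h, w) in a row-major OIHW filter buffer."""
    return ((o * I + i) * H + h) * W + w
```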
OpenCL runtime memory layout
-----------------------------
@@ -34,66 +38,117 @@ Input/Output Tensor
The Input/Output Tensor is stored in NHWC format:
+---------------------------+--------+----------------------------+-----------------------------+
|Tensor type | Buffer | Image size [width, height] | Explanation |
+===========================+========+============================+=============================+
|Channel-Major Input/Output | NHWC | [W * (C+3)/4, N * H] | Default Input/Output format |
+---------------------------+--------+----------------------------+-----------------------------+
|Height-Major Input/Output | NHWC | [W * C, N * (H+3)/4] | Winograd Convolution format |
+---------------------------+--------+----------------------------+-----------------------------+
|Width-Major Input/Output | NHWC | [(W+3)/4 * C, N * H] | Winograd Convolution format |
+---------------------------+--------+----------------------------+-----------------------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Tensor type
     - Buffer
     - Image size [width, height]
     - Explanation
   * - Channel-Major Input/Output
     - NHWC
     - [W * (C+3)/4, N * H]
     - Default Input/Output format
   * - Height-Major Input/Output
     - NHWC
     - [W * C, N * (H+3)/4]
     - Winograd Convolution format
   * - Width-Major Input/Output
     - NHWC
     - [(W+3)/4 * C, N * H]
     - Winograd Convolution format
Each pixel of **Image** contains 4 elements. The table below lists the
coordinate relationship between **Image** and **Buffer**.
+---------------------------+-------------------------------------------------------------------------+-------------+
|Tensor type | Pixel coordinate relationship | Explanation |
+===========================+=========================================================================+=============+
|Channel-Major Input/Output | P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=i%W, c=[i/W * 4 + k])} | k=[0, 4) |
+---------------------------+-------------------------------------------------------------------------+-------------+
|Height-Major Input/Output | P[i, j] = {E[n, h, w, c] | (n=j%N, h=[j/H*4 + k], w=i%W, c=i/W)} | k=[0, 4) |
+---------------------------+-------------------------------------------------------------------------+-------------+
|Width-Major Input/Output | P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=[i%W*4 + k], c=i/W)} | k=[0, 4) |
+---------------------------+-------------------------------------------------------------------------+-------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Tensor type
     - Pixel coordinate relationship
     - Explanation
   * - Channel-Major Input/Output
     - P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=i%W, c=[i/W * 4 + k])}
     - k=[0, 4)
   * - Height-Major Input/Output
     - P[i, j] = {E[n, h, w, c] | (n=j%N, h=[j/H*4 + k], w=i%W, c=i/W)}
     - k=[0, 4)
   * - Width-Major Input/Output
     - P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=[i%W*4 + k], c=i/W)}
     - k=[0, 4)
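The Channel-Major row above can be sanity-checked with a short sketch (integer division throughout; the function name is illustrative):

```python
def channel_major_pixel(i, j, H, W):
    """Return the 4 NHWC buffer coordinates packed into image pixel (i, j)
    for the Channel-Major layout (image is [W * ceil(C/4), N * H])."""
    n, h = j // H, j % H    # j runs over the N * H image rows
    w = i % W               # i runs over the W * ceil(C/4) image columns
    base_c = (i // W) * 4   # each pixel packs 4 consecutive channels
    return [(n, h, w, base_c + k) for k in range(4)]
```

The Height- and Width-Major rows follow the same pattern with the packed-by-4 axis moved to H or W.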
Filter Tensor
~~~~~~~~~~~~~
+----------------------------+------+---------------------------------+------------------------------------------------------------------------------+
| Tensor |Buffer| Image size [width, height] | Explanation |
+============================+======+=================================+==============================================================================+
|Convolution Filter | HWOI | [RoundUp<4>(I), H * W * (O+3)/4]|Convolution filter format,There is no difference compared to [H*w*I, (O+3)/4]|
+----------------------------+------+---------------------------------+------------------------------------------------------------------------------+
|Depthwise Convlution Filter | HWIM | [H * W * M, (I+3)/4] |Depthwise-Convolution filter format |
+----------------------------+------+---------------------------------+------------------------------------------------------------------------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Tensor
     - Buffer
     - Image size [width, height]
     - Explanation
   * - Convolution Filter
     - HWOI
     - [RoundUp<4>(I), H * W * (O+3)/4]
     - Convolution filter format; there is no difference compared to [H*W*I, (O+3)/4]
   * - Depthwise Convolution Filter
     - HWIM
     - [H * W * M, (I+3)/4]
     - Depthwise-Convolution filter format
Each pixel of **Image** contains 4 elements. The table below lists the
coordinate relationship between **Image** and **Buffer**.
+----------------------------+-------------------------------------------------------------------+---------------------------------------+
|Tensor type | Pixel coordinate relationship | Explanation |
+============================+===================================================================+=======================================+
|Convolution Filter | P[m, n] = {E[h, w, o, i] | (h=T/W, w=T%W, o=[n/HW*4+k], i=m)}| HW= H * W, T=n%HW, k=[0, 4) |
+----------------------------+-------------------------------------------------------------------+---------------------------------------+
|Depthwise Convlution Filter | P[m, n] = {E[h, w, i, 0] | (h=m/W, w=m%W, i=[n*4+k])} | only support multiplier == 1, k=[0, 4)|
+----------------------------+-------------------------------------------------------------------+---------------------------------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Tensor type
     - Pixel coordinate relationship
     - Explanation
   * - Convolution Filter
     - P[m, n] = {E[h, w, o, i] | (h=T/W, w=T%W, o=[n/HW*4+k], i=m)}
     - HW = H * W, T = n%HW, k=[0, 4)
   * - Depthwise Convolution Filter
     - P[m, n] = {E[h, w, i, 0] | (h=m/W, w=m%W, i=[n*4+k])}
     - only multiplier == 1 is supported, k=[0, 4)
1-D Argument Tensor
~~~~~~~~~~~~~~~~~~~
+----------------+----------+------------------------------+---------------------------------+
| Tensor type | Buffer | Image size [width, height] | Explanation |
+================+==========+==============================+=================================+
| 1-D Argument | W | [(W+3)/4, 1] | 1D argument format, e.g. Bias |
+----------------+----------+------------------------------+---------------------------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Tensor type
     - Buffer
     - Image size [width, height]
     - Explanation
   * - 1-D Argument
     - W
     - [(W+3)/4, 1]
     - 1D argument format, e.g. Bias
Each pixel of **Image** contains 4 elements. The table below lists the
coordinate relationship between **Image** and **Buffer**.
+--------------+---------------------------------+-------------+
| Tensor type | Pixel coordinate relationship | Explanation |
+==============+=================================+=============+
|1-D Argument | P[i, 0] = {E[w] | w=i*4+k} | k=[0, 4) |
+--------------+---------------------------------+-------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Tensor type
     - Pixel coordinate relationship
     - Explanation
   * - 1-D Argument
     - P[i, 0] = {E[w] | w=i*4+k}
     - k=[0, 4)
docs/faq.md
-Frequently Asked Questions
+Frequently asked questions
 ==========================

 Why is the generated static library file size so huge?
docs/getting_started/create_a_model_deployment.rst
-Create a model deployment
-=========================
+Create a model deployment file
+==============================

-Each YAML deployment script describes a case of deployments (for example,
-a smart camera application may contains face recognition, object recognition,
-and voice recognition models, which can be defined in one deployment file),
-which will generate one static library (if more than one ABIs specified,
-there will be one static library for each). Each YAML scripts can contains one
-or more models.
+The first step to deploy your models is to create a YAML model deployment
+file. One deployment file describes a case of model deployment. Each file
+will generate one static library (if more than one ABI is specified, there
+will be one static library for each). A deployment file can contain one or
+more models; for example, a smart camera application may contain face
+recognition, object recognition, and voice recognition models, which can all
+be defined in one deployment file.

-Model deployment file example
--------------------------------
-
-TODO: change to a link to a standalone file with comments.
-
-.. code:: yaml
+Example
+----------
+
+Here is a deployment file example used by the Android demo application.
-# The config file name will be used as the generated library's name: libmace-$(unknown).a
-target_abis: [armeabi-v7a, arm64-v8a]
-# The SoC of the target device; it can be obtained with
-# `adb shell getprop | grep ro.board.platform | cut -d [ -f3 | cut -d ] -f1`
-target_socs: [msm8998]
-embed_model_data: 1
-vlog_level: 0
-models: # One config file can contain multiple models; the generated library will contain all of them
-  first_net: # The model's tag; it is used to refer to the model when running it
-    platform: tensorflow
-    model_file_path: path/to/model64.pb # also support http:// and https://
-    model_sha256_checksum: 7f7462333406e7dea87222737590ebb7d94490194d2f21a7d72bafa87e64e9f9
-    input_nodes: input_node
-    output_nodes: output_node
-    input_shapes: 1,64,64,3
-    output_shapes: 1,64,64,2
-    runtime: gpu
-    limit_opencl_kernel_time: 0
-    dsp_mode: 0
-    obfuscate: 1
-    fast_conv: 0
-    input_files:
-      - path/to/input_files # support http://
-  second_net:
-    platform: caffe
-    model_file_path: path/to/model.prototxt
-    weight_file_path: path/to/weight.caffemodel
-    model_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
-    weight_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
-    input_nodes:
-      - input_node0
-      - input_node1
-    output_nodes:
-      - output_node0
-      - output_node1
-    input_shapes:
-      - 1,256,256,3
-      - 1,128,128,3
-    output_shapes:
-      - 1,256,256,2
-      - 1,1,1,2
-    runtime: cpu
-    limit_opencl_kernel_time: 1
-    dsp_mode: 0
-    obfuscate: 1
-    fast_conv: 0
-    input_files:
-      - path/to/input_files # support http://
-TODO: change this example file to the demo deployment file
-(reuse the same file) and rename to a reasonable name.
+.. literalinclude:: models/demo_app_models.yaml
+   :language: yaml
Configurations
--------------------
+--------------------------+----------------------------------------------------------------------------------------+
| Configuration key | Description |
+==========================+========================================================================================+
| target_abis | The target ABI to build, can be one or more of 'host', 'armeabi-v7a' or 'arm64-v8a' |
+--------------------------+----------------------------------------------------------------------------------------+
| embed_model_data | Whether embedding model weights as the code, default to 1 |
+--------------------------+----------------------------------------------------------------------------------------+
| platform | The source framework, tensorflow or caffe |
+--------------------------+----------------------------------------------------------------------------------------+
| model_file_path | The path of the model file, can be local or remote |
+--------------------------+----------------------------------------------------------------------------------------+
| weight_file_path | The path of the model weights file, used by Caffe model |
+--------------------------+----------------------------------------------------------------------------------------+
| model_sha256_checksum | The SHA256 checksum of the model file |
+--------------------------+----------------------------------------------------------------------------------------+
| weight_sha256_checksum | The SHA256 checksum of the weight file, used by Caffe model |
+--------------------------+----------------------------------------------------------------------------------------+
| input_nodes | The input node names, one or more strings |
+--------------------------+----------------------------------------------------------------------------------------+
| output_nodes | The output node names, one or more strings |
+--------------------------+----------------------------------------------------------------------------------------+
| input_shapes | The shapes of the input nodes, in NHWC order |
+--------------------------+----------------------------------------------------------------------------------------+
| output_shapes | The shapes of the output nodes, in NHWC order |
+--------------------------+----------------------------------------------------------------------------------------+
| runtime | The running device, one of CPU, GPU or DSP |
+--------------------------+----------------------------------------------------------------------------------------+
| limit_opencl_kernel_time | Whether splitting the OpenCL kernel within 1 ms to keep UI responsiveness, default to 0|
+--------------------------+----------------------------------------------------------------------------------------+
| dsp_mode | Control the DSP precision and performance, default to 0 usually works for most cases |
+--------------------------+----------------------------------------------------------------------------------------+
| obfuscate | Whether to obfuscate the model operator name, default to 0 |
+--------------------------+----------------------------------------------------------------------------------------+
| fast_conv | Whether to enable Winograd convolution, **will increase memory consumption** |
+--------------------------+----------------------------------------------------------------------------------------+
| input_files | Specify Numpy validation inputs. When not provided, [-1, 1] random values will be used |
+--------------------------+----------------------------------------------------------------------------------------+
.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Configuration key
     - Description
   * - target_abis
     - The target ABI to build, can be one or more of 'host', 'armeabi-v7a' or 'arm64-v8a'
   * - embed_model_data
     - Whether to embed model weights in the code, defaults to 1
   * - platform
     - The source framework, tensorflow or caffe
   * - model_file_path
     - The path of the model file, can be local or remote
   * - weight_file_path
     - The path of the model weights file, used by Caffe models
   * - model_sha256_checksum
     - The SHA256 checksum of the model file
   * - weight_sha256_checksum
     - The SHA256 checksum of the weight file, used by Caffe models
   * - input_nodes
     - The input node names, one or more strings
   * - output_nodes
     - The output node names, one or more strings
   * - input_shapes
     - The shapes of the input nodes, in NHWC order
   * - output_shapes
     - The shapes of the output nodes, in NHWC order
   * - runtime
     - The running device, one of CPU, GPU or DSP
   * - limit_opencl_kernel_time
     - Whether to split OpenCL kernels so each runs within 1 ms, keeping the UI responsive; defaults to 0
   * - dsp_mode
     - Controls DSP precision and performance; the default 0 usually works for most cases
   * - obfuscate
     - Whether to obfuscate the model operator names, defaults to 0
   * - fast_conv
     - Whether to enable Winograd convolution, **will increase memory consumption**
   * - input_files
     - Specify Numpy validation inputs. When not provided, random values in [-1, 1] will be used
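The `model_sha256_checksum` and `weight_sha256_checksum` values are plain SHA256 hex digests of the referenced files. A minimal sketch for producing one (the helper name is illustrative, not a MACE tool):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 16):
    """Return the hex SHA256 digest of a file, reading it in chunks
    so large model files do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```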
docs/getting_started/docker.md (new file)
Docker Images
=============

* Log in to [Xiaomi Docker Registry](http://docs.api.xiaomi.net/docker-registry/)

  ```
  docker login cr.d.xiaomi.net
  ```

* Build with `Dockerfile`

  ```
  docker build -t cr.d.xiaomi.net/mace/mace-dev .
  ```

* Pull image from docker registry

  ```
  docker pull cr.d.xiaomi.net/mace/mace-dev
  ```

* Create container

  ```
  # Set 'host' network to use ADB
  docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash
  ```
docs/getting_started/docker.rst (deleted)
Docker Images
=============
- Login in `Xiaomi Docker
Registry <http://docs.api.xiaomi.net/docker-registry/>`__
``docker login cr.d.xiaomi.net``
- Build with ``Dockerfile``
``docker build -t cr.d.xiaomi.net/mace/mace-dev .``
- Pull image from docker registry
``docker pull cr.d.xiaomi.net/mace/mace-dev``
- Create container

  ::

      # Set 'host' network to use ADB
      docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash
docs/getting_started/models/demo_app_models.yaml (new file)
# The config file name will be used as the generated library's name: libmace-$(unknown).a
target_abis: [armeabi-v7a, arm64-v8a]
# The SoC of the target device; it can be obtained with
# `adb shell getprop | grep ro.board.platform | cut -d [ -f3 | cut -d ] -f1`
target_socs: [msm8998]
embed_model_data: 1
models: # One config file can contain multiple models; the generated library will contain all of them
  first_net: # The model's tag; it is used to refer to the model when running it
    platform: tensorflow
    model_file_path: path/to/model64.pb # also support http:// and https://
    model_sha256_checksum: 7f7462333406e7dea87222737590ebb7d94490194d2f21a7d72bafa87e64e9f9
    input_nodes: input_node
    output_nodes: output_node
    input_shapes: 1,64,64,3
    output_shapes: 1,64,64,2
    runtime: gpu
    limit_opencl_kernel_time: 0
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
  second_net:
    platform: caffe
    model_file_path: path/to/model.prototxt
    weight_file_path: path/to/weight.caffemodel
    model_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    weight_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    input_nodes:
      - input_node0
      - input_node1
    output_nodes:
      - output_node0
      - output_node1
    input_shapes:
      - 1,256,256,3
      - 1,128,128,3
    output_shapes:
      - 1,256,256,2
      - 1,1,1,2
    runtime: cpu
    limit_opencl_kernel_time: 1
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
docs/getting_started/op_lists.rst
Operator lists
==============
+----------------------------------+--------------+--------+-------------------------------------------------------+
| Operator | Android NN | Status | Remark |
+==================================+==============+========+=======================================================+
| ADD | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| AVERAGE\_POOL\_2D | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| BATCH\_NORM | | Y | Fusion with activation is supported |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| BIAS\_ADD | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| CHANNEL\_SHUFFLE | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| CONCATENATION | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| CONV\_2D | Y | Y | Fusion with BN and activation layer is supported |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| DEPTHWISE\_CONV\_2D | Y | Y | Only multiplier = 1 is supported; Fusion is supported |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| DEPTH\_TO\_SPACE | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| DEQUANTIZE | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| EMBEDDING\_LOOKUP | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| FLOOR | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| FULLY\_CONNECTED | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| GROUP\_CONV\_2D | | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| HASHTABLE\_LOOKUP | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| L2\_NORMALIZATION | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| L2\_POOL\_2D | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LOCAL\_RESPONSE\_NORMALIZATION | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LOGISTIC | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LSH\_PROJECTION | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LSTM | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| MATMUL | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| MAX\_POOL\_2D | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| MUL | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| PSROI\_ALIGN | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| PRELU | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELU | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELU1 | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELU6 | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELUX | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RESHAPE | Y | Y | Limited support |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RESIZE\_BILINEAR | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RNN | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RPN\_PROPOSAL\_LAYER | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| SOFTMAX | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| SPACE\_TO\_DEPTH | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| SVDF | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| TANH | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
.. Please keep in chronological order when editing
.. csv-table::
   :widths: auto
   :header: "Operator","Android NN","Supported","Remark"

   "ADD","Y","Y",""
   "AVERAGE_POOL_2D","Y","Y",""
   "BATCH_NORM","","Y","Fusion with activation is supported"
   "BIAS_ADD","","Y",""
   "CHANNEL_SHUFFLE","","Y",""
   "CONCATENATION","Y","Y",""
   "CONV_2D","Y","Y","Fusion with BN and activation layer is supported"
   "DEPTHWISE_CONV_2D","Y","Y","Only multiplier = 1 is supported; Fusion is supported"
   "DEPTH_TO_SPACE","Y","Y",""
   "DEQUANTIZE","Y","",""
   "EMBEDDING_LOOKUP","Y","",""
   "FLOOR","Y","",""
   "FULLY_CONNECTED","Y","Y",""
   "GROUP_CONV_2D","","",""
   "HASHTABLE_LOOKUP","Y","",""
   "L2_NORMALIZATION","Y","",""
   "L2_POOL_2D","Y","",""
   "LOCAL_RESPONSE_NORMALIZATION","Y","Y",""
   "LOGISTIC","Y","Y",""
   "LSH_PROJECTION","Y","",""
   "LSTM","Y","",""
   "MATMUL","","Y",""
   "MAX_POOL_2D","Y","Y",""
   "MUL","Y","",""
   "PSROI_ALIGN","","Y",""
   "PRELU","","Y",""
   "RELU","Y","Y",""
   "RELU1","Y","Y",""
   "RELU6","Y","Y",""
   "RELUX","","Y",""
   "RESHAPE","Y","Y","Limited support"
   "RESIZE_BILINEAR","Y","Y",""
   "RNN","Y","",""
   "RPN_PROPOSAL_LAYER","","Y",""
   "SOFTMAX","Y","Y",""
   "SPACE_TO_DEPTH","Y","Y",""
   "SVDF","Y","",""
   "TANH","Y","Y",""
docs/index.rst
@@ -11,8 +11,8 @@ The main documentation is organized into the following sections:
    getting_started/introduction
    getting_started/create_a_model_deployment
-   getting_started/how_to_build
    getting_started/docker
+   getting_started/how_to_build
    getting_started/op_lists
.. toctree::