Commit 272f923e authored by Liangliang He

Update docs

Parent 22676b19
@@ -10,9 +10,6 @@ project = u'MiAI Compute Engine'
 author = u'%s Developers' % project
 copyright = u'2018, %s' % author
-version = u'0.6'
-release = u'0.6.0 alpha'
 source_parsers = {
     '.md': recommonmark.parser.CommonMarkParser,
 }
......
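The conf.py hunk above registers recommonmark's CommonMarkParser so that Markdown pages build alongside the reStructuredText sources. As a rough sketch of how to preview the result locally (assuming conf.py lives in a docs/ directory and that any theme the project selects is installed separately; neither detail comes from this commit):

```
# Install Sphinx plus the Markdown parser referenced in conf.py,
# then build the HTML docs next to the sources.
pip install sphinx recommonmark
cd docs
sphinx-build -b html . _build/html
```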
@@ -3,7 +3,7 @@ Adding a new Op
 You can create a custom op if it is not supported yet.
-To add a custom op, you need to finish the following steps.
+To add a custom op, you need to finish the following steps:
 Define the Op class
 --------------------
......
@@ -34,7 +34,8 @@ clang-format -style="{BasedOnStyle: google, \
 C++ logging guideline
 ---------------------
-The rule of VLOG level:
+VLOG is used for verbose logging, which is configured by the environment
+variable `MACE_CPP_MIN_VLOG_LEVEL`. The guideline for VLOG levels is as follows:
 ```
 0. Ad hoc debug logging, should only be added in test or temporary ad hoc
......
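The hunk above points at `MACE_CPP_MIN_VLOG_LEVEL` as the switch for verbose logging. A minimal sketch of using it, assuming the usual glog-style convention that messages at or below the configured level are printed, and with `mace_run` standing in for whichever MACE binary or test you actually execute:

```
# Show VLOG messages up to level 2 for a single run.
MACE_CPP_MIN_VLOG_LEVEL=2 ./mace_run

# Or keep it for the whole shell session.
export MACE_CPP_MIN_VLOG_LEVEL=2
```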
-Frequently Asked Questions
+Frequently asked questions
 ==========================
 Why is the generated static library file size so huge?
......
-Create a model deployment
-=========================
+Create a model deployment file
+==============================
-Each YAML deployment script describes a case of deployments (for example,
-a smart camera application may contains face recognition, object recognition,
-and voice recognition models, which can be defined in one deployment file),
-which will generate one static library (if more than one ABIs specified,
-there will be one static library for each). Each YAML scripts can contains one
-or more models.
+The first step to deploy your models is to create a YAML model deployment
+file.
+
+One deployment file describes a case of model deployment; each file will
+generate one static library (if more than one ABI is specified, there will be
+one static library for each). A deployment file can contain one or more
+models. For example, a smart camera application may contain face recognition,
+object recognition, and voice recognition models, which can all be defined in
+one deployment file.
-Model deployment file example
---------------------------------
-TODO: change to a link to a standalone file with comments.
-
-.. code:: yaml
-
-    # The deployment file name will be used as the name of the generated library: libmace-${filename}.a
-    target_abis: [armeabi-v7a, arm64-v8a]
-    # SoC of the target device; it can be obtained with `adb shell getprop | grep ro.board.platform | cut -d [ -f3 | cut -d ] -f1`
-    target_socs: [msm8998]
-    embed_model_data: 1
-    vlog_level: 0
-    models: # one deployment file can contain multiple models; the generated library will include all of them
-      first_net: # model tag, used to refer to the model when it is invoked
-        platform: tensorflow
-        model_file_path: path/to/model64.pb # also support http:// and https://
-        model_sha256_checksum: 7f7462333406e7dea87222737590ebb7d94490194d2f21a7d72bafa87e64e9f9
-        input_nodes: input_node
-        output_nodes: output_node
-        input_shapes: 1,64,64,3
-        output_shapes: 1,64,64,2
-        runtime: gpu
-        limit_opencl_kernel_time: 0
-        dsp_mode: 0
-        obfuscate: 1
-        fast_conv: 0
-        input_files:
-          - path/to/input_files # support http://
-      second_net:
-        platform: caffe
-        model_file_path: path/to/model.prototxt
-        weight_file_path: path/to/weight.caffemodel
-        model_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
-        weight_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
-        input_nodes:
-          - input_node0
-          - input_node1
-        output_nodes:
-          - output_node0
-          - output_node1
-        input_shapes:
-          - 1,256,256,3
-          - 1,128,128,3
-        output_shapes:
-          - 1,256,256,2
-          - 1,1,1,2
-        runtime: cpu
-        limit_opencl_kernel_time: 1
-        dsp_mode: 0
-        obfuscate: 1
-        fast_conv: 0
-        input_files:
-          - path/to/input_files # support http://
+Example
+-------
+Here is a deployment file example used by the Android demo application.
+
+TODO: change this example file to the demo deployment file
+(reuse the same file) and rename to a reasonable name.
+
+.. literalinclude:: models/demo_app_models.yaml
+   :language: yaml
 Configurations
 --------------------
......
Docker Images
=============
* Log in to [Xiaomi Docker Registry](http://docs.api.xiaomi.net/docker-registry/)
```
docker login cr.d.xiaomi.net
```
* Build with `Dockerfile`
```
docker build -t cr.d.xiaomi.net/mace/mace-dev
```
* Pull image from docker registry
```
docker pull cr.d.xiaomi.net/mace/mace-dev
```
* Create container
```
# Set 'host' network to use ADB
docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash
```
Docker Images
=============
- Log in to `Xiaomi Docker
  Registry <http://docs.api.xiaomi.net/docker-registry/>`__:
  ``docker login cr.d.xiaomi.net``
- Build with ``Dockerfile``
``docker build -t cr.d.xiaomi.net/mace/mace-dev .``
- Pull image from docker registry
``docker pull cr.d.xiaomi.net/mace/mace-dev``
- Create container
``# Set 'host' network to use ADB docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash``
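The container above is started with ``--net=host`` specifically so ADB can reach attached devices. A quick sanity check after entering it (assuming adb is installed in the image, which this commit does not state):

```
# Inside the container: the device list should match what the host sees.
adb devices
```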
# The deployment file name will be used as the name of the generated library: libmace-${filename}.a
target_abis: [armeabi-v7a, arm64-v8a]
# SoC of the target device; it can be obtained with `adb shell getprop | grep ro.board.platform | cut -d [ -f3 | cut -d ] -f1`
target_socs: [msm8998]
embed_model_data: 1
models: # one deployment file can contain multiple models; the generated library will include all of them
  first_net: # model tag, used to refer to the model when it is invoked
    platform: tensorflow
    model_file_path: path/to/model64.pb # also support http:// and https://
    model_sha256_checksum: 7f7462333406e7dea87222737590ebb7d94490194d2f21a7d72bafa87e64e9f9
    input_nodes: input_node
    output_nodes: output_node
    input_shapes: 1,64,64,3
    output_shapes: 1,64,64,2
    runtime: gpu
    limit_opencl_kernel_time: 0
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
  second_net:
    platform: caffe
    model_file_path: path/to/model.prototxt
    weight_file_path: path/to/weight.caffemodel
    model_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    weight_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    input_nodes:
      - input_node0
      - input_node1
    output_nodes:
      - output_node0
      - output_node1
    input_shapes:
      - 1,256,256,3
      - 1,128,128,3
    output_shapes:
      - 1,256,256,2
      - 1,1,1,2
    runtime: cpu
    limit_opencl_kernel_time: 1
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
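The `model_sha256_checksum` and `weight_sha256_checksum` fields above hold the SHA-256 digests of the referenced files. A straightforward way to compute them, assuming the standard `sha256sum` tool (or `shasum -a 256` on macOS):

```
# Print the SHA-256 digest of each file referenced in the deployment file,
# then paste the hex string into the matching *_sha256_checksum field.
sha256sum path/to/model64.pb
sha256sum path/to/model.prototxt path/to/weight.caffemodel
```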
@@ -11,8 +11,8 @@ The main documentation is organized into the following sections:
    getting_started/introduction
    getting_started/create_a_model_deployment
-   getting_started/how_to_build
    getting_started/docker
+   getting_started/how_to_build
    getting_started/op_lists
 .. toctree::
......