Commit 1c874957 authored by 刘琦

Merge branch 'docs' into 'master'

Add introduction into docs

See merge request !476
......@@ -25,7 +25,8 @@ docs:
- cd docs
- make html
- CI_LATEST_OUTPUT_PATH=/mace-build-output/$CI_PROJECT_NAME/latest
- CI_JOB_OUTPUT_PATH=/mace-build-output/$CI_PROJECT_NAME/$CI_BUILD_ID
- CI_JOB_OUTPUT_PATH=/mace-build-output/$CI_PROJECT_NAME/$CI_PIPELINE_ID
- rm -rf $CI_JOB_OUTPUT_PATH
- mkdir -p $CI_JOB_OUTPUT_PATH
- cp -r _build/html $CI_JOB_OUTPUT_PATH/docs
- rm -rf $CI_LATEST_OUTPUT_PATH
......
Frequently asked questions
==========================

Does the tensor data consume extra memory when compiled into C++ code?
----------------------------------------------------------------------
When compiled into C++ code, the data will be mmaped by the system loader.
For CPU runtime, the tensor data are used without memory copy.
For GPU and DSP runtime, the tensor data is used once during model
initialization. The operating system is free to swap the pages out, however,
it still consumes virtual memory space. So generally speaking, it takes
no extra physical memory. If you are short of virtual memory space (this
should be very rare), you can choose to load the tensor data from a file, which
can be unmapped after initialization.
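The paging behavior described above can be sketched with Python's `mmap` module. This is an illustration only (it maps a temporary stand-in file, since the real parameter blob is produced by the model converter): the mapping consumes virtual address space but pages are faulted in on demand, and the mapping can be released after initialization.

```python
import mmap
import os
import tempfile

# Create a stand-in "tensor data" file; in a real deployment this would be
# the parameter blob produced by the model converter.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x01\x02\x03\x04" * 1024)
os.close(fd)

with open(path, "rb") as f:
    # The OS faults pages in lazily: the mapping reserves virtual address
    # space but consumes no physical memory until bytes are touched.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_bytes = bytes(mm[:4])
    # After initialization (e.g. on GPU/DSP runtimes), closing the mapping
    # also frees the virtual address range.
    mm.close()

os.remove(path)
print(first_bytes)  # b'\x01\x02\x03\x04'
```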

Why is the generated static library file size so huge?
-------------------------------------------------------
The static library is simply an archive of a set of object files which are
......
Docker Images
=============
* Log in to [Xiaomi Docker Registry](http://docs.api.xiaomi.net/docker-registry/)
```
docker login cr.d.xiaomi.net
```
* Build with `Dockerfile`
```
docker build -t cr.d.xiaomi.net/mace/mace-dev .
```
* Pull image from docker registry
```
docker pull cr.d.xiaomi.net/mace/mace-dev
```
* Create container
```
# Set 'host' network to use ADB
docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash
```
......@@ -48,6 +48,36 @@ How to build
| docker(for caffe) | >= 17.09.0-ce | `install doc <https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-up-the-repository>`__ |
+---------------------+-----------------+---------------------------------------------------------------------------------------------------+
Docker Images
----------------

* Log in to `Xiaomi Docker Registry <http://docs.api.xiaomi.net/docker-registry/>`__

  .. code:: sh

     docker login cr.d.xiaomi.net

* Build with Dockerfile

  .. code:: sh

     docker build -t cr.d.xiaomi.net/mace/mace-dev .

* Pull image from docker registry

  .. code:: sh

     docker pull cr.d.xiaomi.net/mace/mace-dev

* Create container

  .. code:: sh

     # Set 'host' network to use ADB
     docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash
Usage Introduction
------------------
......
Introduction
============
TODO: describe the conceptions and workflow with diagram.
![alt text](workflow.jpg "MiAI workflow")
TODO: describe the runtime.
Introduction
============
MiAI Compute Engine is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The following figure shows the
overall architecture.

.. image:: mace-arch.png
   :scale: 40 %
   :align: center

Model format
------------
MiAI Compute Engine defines a customized model format which is similar to
Caffe2. The MiAI model can be converted from models exported by TensorFlow
and Caffe. We define a YAML schema to describe the model deployment. In the
next chapter, there is a detailed guide showing how to create this YAML file.
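Purely as an illustration of what such a deployment description could contain, a sketch follows; every field name below is hypothetical, and the authoritative schema is the one described in the next chapter.

```yaml
# Hypothetical deployment description -- field names are illustrative only.
model_name: mobilenet_v1
platform: tensorflow          # or: caffe
model_file_path: models/mobilenet_v1.pb
input_nodes: [input]
output_nodes: [output]
runtime: cpu                  # cpu / gpu / dsp
```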

Model conversion
----------------

Currently, we provide model converters for TensorFlow and Caffe. More
frameworks will be supported in the future.

Model loading
-------------
The MiAI model format contains two parts: the model graph definition and
the model parameter tensors. The graph part utilizes Protocol Buffers
for serialization. All the model parameter tensors are concatenated
together into a continuous array, and we call this array tensor data in
the following paragraphs. In the model graph, the tensor data offsets
and lengths are recorded.
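The offset/length scheme above can be sketched in a few lines of Python. This is an illustration only: the tensor names and values are made up, and the real byte layout is produced by the MiAI model converter.

```python
import struct

# Hypothetical parameter tensors (names and values are made up).
tensors = {
    "conv1/weight": [0.0, 1.0, 2.0, 3.0],
    "conv1/bias": [1.0, 1.0, 1.0],
}

# Concatenate all parameters into one continuous byte array ("tensor data")
# and record each tensor's offset and length, as the model graph does.
tensor_data = bytearray()
index = {}
for name, values in tensors.items():
    offset = len(tensor_data)
    raw = struct.pack(f"{len(values)}f", *values)
    tensor_data += raw
    index[name] = (offset, len(raw))

# At load time, a tensor is recovered by slicing the blob at its offset.
off, length = index["conv1/bias"]
bias = list(struct.unpack(f"{length // 4}f", bytes(tensor_data[off:off + length])))
print(bias)  # [1.0, 1.0, 1.0]
```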

The models can be loaded in three ways:

1. Both model graph and tensor data are dynamically loaded externally
(by default, from the file system, but users are free to choose their own
implementations, for example, with compression or encryption). This
approach provides the most flexibility but the weakest model protection.
2. Both model graph and tensor data are converted into C++ code and loaded
by executing the compiled code. This approach provides the strongest
model protection and the simplest deployment.
3. The model graph is converted into C++ code and constructed as in the second
approach, while the tensor data is loaded externally as in the first approach.
......@@ -11,7 +11,6 @@ The main documentation is organized into the following sections:
getting_started/introduction
getting_started/create_a_model_deployment
getting_started/docker
getting_started/how_to_build
getting_started/op_lists
......