diff --git a/README.md b/README.md index ebd893a793f8d783b20882c5404e6687f217ff08..9558548c319318e96749dec64e13802b3d6d0e10 100644 --- a/README.md +++ b/README.md @@ -4,9 +4,9 @@ [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) +[![Build Status](https://travis-ci.org/travis-ci/travis-web.svg?branch=master)](https://travis-ci.org/travis-ci/travis-web) [![pipeline status](https://gitlab.com/llhe/mace/badges/master/pipeline.svg)](https://gitlab.com/llhe/mace/pipelines) [![doc build status](https://readthedocs.org/projects/mace/badge/?version=latest)](https://readthedocs.org/projects/mace/badge/?version=latest) -[![Build Status](https://travis-ci.org/travis-ci/travis-web.svg?branch=master)](https://travis-ci.org/travis-ci/travis-web) [Documentation](https://mace.readthedocs.io) | [FAQ](https://mace.readthedocs.io/en/latest/faq.html) | diff --git a/docs/conf.py b/docs/conf.py index d69fded5d15632627e3621c483fce170000c8ad5..aff476bd04532e7132acb5c5857a5743122df5a9 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -6,7 +6,7 @@ import recommonmark.parser import sphinx_rtd_theme -project = u'Mobile AI Compute Engine (MACE)' +project = u'MACE' author = u'%s Developers' % project copyright = u'2018, %s' % author diff --git a/docs/installation/env_requirement.rst b/docs/installation/env_requirement.rst index bc7adfc8b8598c0a4e8ea92313b33a5f5c314461..c482712418b1e0afb92970ce0a55bb7226e82ed9 100644 --- a/docs/installation/env_requirement.rst +++ b/docs/installation/env_requirement.rst @@ -1,64 +1,67 @@ -Environment Requirement +Environment requirement ======================== MACE requires the following dependencies: -Necessary Dependencies: ------------------------- +Required dependencies +--------------------- .. 
list-table:: :header-rows: 1 - * - software - - version - - install command - * - bazel - - >= 0.13.0 + * - Software + - Installation command + - Tested version + * - Python + - + - 2.7 + * - Bazel - `bazel installation guide `__ - * - android-ndk - - r15c/r16b - - `NDK installation guide `__ - * - adb - - >= 1.0.32 - - apt-get install android-tools-adb - * - cmake - - >= 3.11.3 + - 0.13.0 + * - CMake - apt-get install cmake - * - numpy - - >= 1.14.0 - - pip install -I numpy==1.14.0 - * - scipy - - >= 1.0.0 - - pip install -I scipy==1.0.0 - * - jinja2 - - >= 2.10 + - >= 3.11.3 + * - Jinja2 - pip install -I jinja2==2.10 + - 2.10 * - PyYaml - - >= 3.12.0 - pip install -I pyyaml==3.12 + - 3.12.0 * - sh - - >= 1.12.14 - pip install -I sh==1.12.14 - * - filelock - - >= 3.0.0 - - pip install -I filelock==3.0.0 - -.. note:: + - 1.12.14 - ``export ANDROID_NDK_HOME=/path/to/ndk`` to specify ANDROID_NDK_HOME - -Optional Dependencies: ------------------------ +Optional dependencies +--------------------- .. list-table:: :header-rows: 1 - * - software - - version - - install command - * - tensorflow - - >= 1.6.0 - - pip install -I tensorflow==1.6.0 (if you use tensorflow model) - * - docker (for caffe) - - >= 17.09.0-ce + * - Software + - Installation command + - Remark + * - Android NDK + - `NDK installation guide `__ + - Required by Android build, r15b, r15c, r16b + * - ADB + - apt-get install android-tools-adb + - Required by Android run, >= 1.0.32 + * - TensorFlow + - pip install -I tensorflow==1.6.0 + - Required by TensorFlow model + * - Docker - `docker installation guide `__ + - Required by docker mode for Caffe model + * - Numpy + - pip install -I numpy==1.14.0 + - Required by model validation + * - Scipy + - pip install -I scipy==1.0.0 + - Required by model validation + * - FileLock + - pip install -I filelock==3.0.0 + - Required by Android run + +.. 
note:: + + For Android build, ``ANDROID_NDK_HOME`` must be configured by using ``export ANDROID_NDK_HOME=/path/to/ndk`` diff --git a/docs/installation/manual_setup.rst b/docs/installation/manual_setup.rst index dbff94d689dfa49c8c67b2f2dafde39594da1105..58d7025147b590aa0e39b2def020be4ad5457e07 100644 --- a/docs/installation/manual_setup.rst +++ b/docs/installation/manual_setup.rst @@ -1,13 +1,12 @@ Manual setup ============= -The setup steps are based on ``Ubuntu``. And dependencies to install can refer to :doc:`env_requirement`. - -Install Necessary Dependencies ------------------------------- +The setup steps are based on ``Ubuntu``; you can change the commands +correspondingly for other systems. +For the detailed installation dependencies, please refer to :doc:`env_requirement`. Install Bazel -~~~~~~~~~~~~~~ +------------- Recommend bazel with version larger than ``0.13.0`` (Refer to `Bazel documentation `__). @@ -22,10 +21,11 @@ Recommend bazel with version larger than ``0.13.0`` (Refer to `Bazel documentati cd / && \ rm -f /bazel/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh -Install NDK -~~~~~~~~~~~~ +Install Android NDK +-------------------- -Recommend NDK with version r15c or r16 (Refer to `NDK installation guide `__). +The recommended Android NDK versions include r15b, r15c and r16b (refer to +`NDK installation guide `__). .. code:: sh @@ -43,7 +43,7 @@ Recommend NDK with version r15c or r16 (Refer to `NDK installation guide `__. .. literalinclude:: models/demo_app_models.yml :language: yaml @@ -84,8 +83,6 @@ in one deployment file. - [optional] The data type used for specified runtime. [fp16_fp32, fp32_fp32] for GPU, default is fp16_fp32, [fp32] for CPU and [uint8] for DSP. * - limit_opencl_kernel_time - [optional] Whether splitting the OpenCL kernel within 1 ms to keep UI responsiveness, default is 0. * - nnlib_graph_mode - - [optional] Control the DSP precision and performance, default to 0 usually works for most cases. 
* - obfuscate - [optional] Whether to obfuscate the model operator name, default to 0. * - winograd @@ -105,13 +102,14 @@ in one deployment file. sha256sum /path/to/your/file ---------------- -Advanced Usage ---------------- +Advanced usage +-------------- -There are two common advanced use cases: 1. convert a model to CPP code. 2. tuning for specific SOC if use GPU. +There are two common advanced use cases: + - converting a model to C++ code. + - tuning GPU kernels for a specific SoC. -* **Convert model(s) to CPP code** +* **Convert model(s) to C++ code** .. warning:: @@ -119,7 +117,7 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni * **1. Change the model deployment file(.yml)** - If you want to protect your model, you can convert model to CPP code. there are also two cases: + If you want to protect your model, you can convert the model to C++ code. There are also two cases: * convert model graph to code and model weight to file with below model configuration. @@ -197,10 +195,10 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni // ... Same with the code in basic usage -* **Tuning for specific SOC's GPU** +* **Tuning for specific SoC's GPU** If you want to use the GPU of a specific device, you can just specify the ``target_socs`` in your YAML file and - then tune the MACE lib for it, which may get 1~10% performance improvement. + then tune the MACE lib for it (OpenCL kernels), which may yield a 1~10% performance improvement. * **1. Change the model deployment file(.yml)** @@ -253,7 +251,7 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni used for your models, which could accelerate the initialization stage. Details please refer to `OpenCL Specification `__. * **mobilenet-v2-tuned_opencl_parameter.MI6.msm8998.bin** stands for the tuned OpenCL parameters - for the SOC. + for the SoC. * **4. 
Deployment** * Change the names of the files generated above to avoid collisions, and push them to **your own device's directory**. @@ -274,9 +272,8 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni // ... Same with the code in basic usage. ----------------- Useful Commands ----------------- +--------------- * **run the model** .. code:: sh diff --git a/docs/user_guide/basic_usage.rst b/docs/user_guide/basic_usage.rst index 5c5f03301f9b74a42aa5d5de3b22942efbe9b9e7..b5f2a784ff3f552845bd217da4d4a40831597b6e 100644 --- a/docs/user_guide/basic_usage.rst +++ b/docs/user_guide/basic_usage.rst @@ -7,13 +7,14 @@ Build and run an example model At first, make sure the environment has been set up correctly already (refer to :doc:`../installation/env_requirement`). -The followings are instructions about how to quickly build and run a provided model in *MACE Model Zoo*. +The following are instructions about how to quickly build and run a provided model in +`MACE Model Zoo `__. Here we use the mobilenet-v2 model as an example. **Commands** - 1. Pull *MACE* project. + 1. Pull `MACE `__ project. .. code:: sh @@ -29,14 +30,14 @@ Here we use the mobilenet-v2 model as an example. It's highly recommended to use a release version instead of the master branch. - 2. Pull *MACE Model Zoo* project. + 2. Pull `MACE Model Zoo `__ project. .. code:: sh git clone https://github.com/XiaoMi/mace-models.git - 3. Build a general MACE library. + 3. Build a generic MACE library. .. code:: sh @@ -46,7 +47,7 @@ Here we use the mobilenet-v2 model as an example. bash tools/build-standalone-lib.sh - 4. Convert the model to MACE format model. + 4. Convert the pre-trained mobilenet-v2 model to the MACE format. .. code:: sh @@ -57,7 +58,7 @@ Here we use the mobilenet-v2 model as an example. 5. Run the model. - .. warning:: + .. note:: If you want to run on device/phone, please plug in at least one device/phone. @@ -77,13 +78,13 @@ Here we use the mobilenet-v2 model as an example. 
Build your own model --------------------- -This part will show you how to use your pre-trained model in MACE. +This part will show you how to use your own pre-trained model in MACE. ====================== 1. Prepare your model ====================== -Mace now supports models from TensorFlow and Caffe (more frameworks will be supported). +MACE now supports models from TensorFlow and Caffe (more frameworks will be supported). - TensorFlow @@ -338,4 +339,4 @@ Please refer to \ ``mace/examples/example.cc``\ for full usage. The following li // 6. Run the model MaceStatus status = engine.Run(inputs, &outputs); -More details are in :doc:`advanced_usage`. \ No newline at end of file +More details are in :doc:`advanced_usage`. diff --git a/docs/user_guide/models/demo_app_models.yml b/docs/user_guide/models/demo_app_models.yml index 7e8b7e08bee9f479605b3dfd558855bee48632df..ad0527a9a61698eb4ea401ffa12ad70b11c0079f 100644 --- a/docs/user_guide/models/demo_app_models.yml +++ b/docs/user_guide/models/demo_app_models.yml @@ -26,7 +26,6 @@ models: - https://cnbj1.fds.api.xiaomi.com/mace/inputs/dog.npy runtime: cpu+gpu limit_opencl_kernel_time: 0 - nnlib_graph_mode: 0 obfuscate: 0 winograd: 0 squeezenet_v11: @@ -46,6 +45,5 @@ models: - 1,1,1,1000 runtime: cpu+gpu limit_opencl_kernel_time: 0 - nnlib_graph_mode: 0 obfuscate: 0 - winograd: 0 \ No newline at end of file + winograd: 0
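The deployment files touched by this patch carry a `model_sha256_checksum` field, which the docs say to fill using `sha256sum /path/to/your/file`. On systems without GNU coreutils, the same hex digest can be computed with Python's standard `hashlib`; this is a minimal sketch (the helper name `sha256_of` is hypothetical, not part of MACE's tooling):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream a file through SHA-256; the hex digest matches `sha256sum <file>`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files do not need to fit in memory.
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()
```

The returned hex string is what goes into the `model_sha256_checksum` (and `weight_sha256_checksum`) entries of the deployment YAML.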