diff --git a/docs/development/how_to_run_tests.md b/docs/development/how_to_run_tests.md
index bf481dad3e1c173ed9b8fdf40f90efa1c897e8bc..203363685544c217096e3c09fae21554c45ea599 100644
--- a/docs/development/how_to_run_tests.md
+++ b/docs/development/how_to_run_tests.md
@@ -1,7 +1,7 @@
 How to run tests
 =================
 
-To run a test, you need to first cross compile the code, push the binary
+To run tests, you first need to cross-compile the code, push the binary
 into the device and then execute the binary. To automate this process,
 MACE provides `tools/bazel_adb_run.py` tool.
 
diff --git a/docs/installation/env_requirement.rst b/docs/installation/env_requirement.rst
index 3efaf8f9f0c90c26971907ee9f68b8b6b5193fed..bc7adfc8b8598c0a4e8ea92313b33a5f5c314461 100644
--- a/docs/installation/env_requirement.rst
+++ b/docs/installation/env_requirement.rst
@@ -17,7 +17,7 @@ Necessary Dependencies:
       - `bazel installation guide `__
     * - android-ndk
       - r15c/r16b
-      - `NDK installation guide `__ or refers to the docker file
+      - `NDK installation guide `__
     * - adb
       - >= 1.0.32
       - apt-get install android-tools-adb
@@ -42,9 +42,6 @@ Necessary Dependencies:
     * - filelock
       - >= 3.0.0
       - pip install -I filelock==3.0.0
-    * - docker (for caffe)
-      - >= 17.09.0-ce
-      - `docker installation guide `__
 
 .. note::
 
@@ -62,3 +59,6 @@ Optional Dependencies:
     * - tensorflow
       - >= 1.6.0
       - pip install -I tensorflow==1.6.0 (if you use tensorflow model)
+    * - docker (for caffe)
+      - >= 17.09.0-ce
+      - `docker installation guide `__
diff --git a/docs/installation/manual_setup.rst b/docs/installation/manual_setup.rst
index bb283ac4928ba23338f28795d3e598ddead107f4..dbff94d689dfa49c8c67b2f2dafde39594da1105 100644
--- a/docs/installation/manual_setup.rst
+++ b/docs/installation/manual_setup.rst
@@ -65,4 +65,4 @@ Install Optional Dependencies
 
 .. code:: sh
 
-    pip install -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com tensorflow==1.8.0
+    pip install -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com tensorflow==1.6.0
diff --git a/docs/introduction.rst b/docs/introduction.rst
index cc4a4a7f279a6e91b3ba9e32196bbb25468985ec..433f96665ecce87e239c763746b830546dcd0fa0 100644
--- a/docs/introduction.rst
+++ b/docs/introduction.rst
@@ -3,7 +3,7 @@ Introduction
 Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for
 mobile heterogeneous computing platforms. MACE cover common mobile computing devices(CPU, GPU and DSP),
-and supplies tools and document to help users to deploy NN model to mobile devices. MACE has been
+and supplies tools and documents to help users deploy neural network models to mobile devices. MACE has been
 widely used in Xiaomi and proved with industry leading performance and stability.
 
 Framework
@@ -26,7 +26,7 @@ and Caffe.
 MACE Interpreter
 ================
 
-Mace Interpreter mainly parse the NN graph and manage the tensors in the graph.
+Mace Interpreter mainly parses the NN graph and manages the tensors in the graph.
 
 =======
 Runtime
diff --git a/docs/user_guide/advanced_usage.rst b/docs/user_guide/advanced_usage.rst
index bd2b5ab1c0c10e8feb4238c44e4190279e2c4341..938f6b02ae2c3de5cd031ecfb3e93db0776c1347 100644
--- a/docs/user_guide/advanced_usage.rst
+++ b/docs/user_guide/advanced_usage.rst
@@ -1,5 +1,6 @@
+==============
 Advanced usage
-===============
+==============
 
 This part contains the full usage of MACE.
 
@@ -41,7 +42,7 @@ in one deployment file.
       - Library name.
     * - target_abis
       - The target ABI(s) to build, could be 'host', 'armeabi-v7a' or 'arm64-v8a'.
-        If more than one ABIs will be used, seperate them by comas.
+        If more than one ABI will be used, separate them with commas.
     * - target_socs
       - [optional] Build for specific SoCs.
     * - model_graph_format
@@ -49,12 +50,12 @@ in one deployment file.
     * - model_data_format
       - model data format, could be 'file' or 'code'. 'file' for converting model weight to data file(.data) and 'code' for converting model weight to c++ code.
     * - model_name
-      - model name, should be unique if there are more than one models.
+      - model name, which should be unique if there are multiple models.
         **LIMIT: if build_type is code, model_name will be used in c++ code so that model_name must comply with c++ name specification.**
     * - platform
       - The source framework, tensorflow or caffe.
     * - model_file_path
-      - The path of your model file, can be local path or remote url.
+      - The path of your model file, which can be a local path or a remote URL.
     * - model_sha256_checksum
       - The SHA256 checksum of the model file.
     * - weight_file_path
@@ -108,7 +109,7 @@ in one deployment file.
 Advanced Usage
 ==============
 
-There are two common advanced use cases: 1. convert model to CPP code. 2. tuning for specific SOC if use GPU.
+There are two common advanced use cases: 1. converting a model to C++ code; 2. tuning for a specific SoC when using the GPU.
 
 * **Convert model(s) to CPP code**
 
@@ -120,14 +121,14 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
 
     If you want to protect your model, you can convert model to CPP code. there are also two cases:
 
-    * convert model graph to code and model weight to file with blow model configuration.
+    * convert model graph to code and model weight to file with the model configuration below.
 
     .. code:: sh
 
        model_graph_format: code
        model_data_format: file
 
-    * convert both model graph and model weight to code with blow model configuration.
+    * convert both model graph and model weight to code with the model configuration below.
 
     .. code:: sh
 
@@ -145,7 +146,7 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
         python tools/converter.py convert --config=/path/to/model_deployment_file.yml
 
     The command will generate **${library_name}.a** in **builds/${library_name}/model** directory and
-    ** *.h ** in **builds/${library_name}/include** like blow dir-tree.
+    ** *.h ** in **builds/${library_name}/include** like the directory tree below.
 
     .. code::
 
@@ -230,7 +231,7 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
 
         python tools/converter.py run --config=/path/to/model_deployment_file.yml --validate
 
-    The command will generate two files in `builds/${library_name}/opencl`, like blow.
+    The command will generate two files in `builds/${library_name}/opencl`, as shown below.
 
     .. code::
 
@@ -252,8 +253,8 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
       for the SOC.
 
   * **4. Deployment**
-    * Change the names of files generated above for not collision and push them to **your own device' directory**.
-    * Usage like the previous procedure, blow list the key steps different.
+    * Rename the files generated above to avoid collisions and push them to **your own device's directory**.
+    * Usage is similar to the previous procedure; the key differences are listed below.
 
     .. code:: cpp
 
diff --git a/docs/user_guide/basic_usage.rst b/docs/user_guide/basic_usage.rst
index 692ea0163aa200a9ba724ac4a0276578913b07f0..6a410c2120804c9e9a22f0a115a4df5caff16ed8 100644
--- a/docs/user_guide/basic_usage.rst
+++ b/docs/user_guide/basic_usage.rst
@@ -185,7 +185,7 @@ The above command will generate dynamic library ``builds/lib/${ABI}/libmace.so``
 
 .. warning::
 
-    1. Please verify that the target_abis param in the above command and your deployment file are the same.
+    Please verify that the target_abis param in the above command and your deployment file are the same.
 
 ==================