From bf4627797249f815716319c130bf3c89f67fbf22 Mon Sep 17 00:00:00 2001
From: liuqi
Date: Tue, 10 Jul 2018 14:01:43 +0800
Subject: [PATCH] Fix document typo.

---
 docs/development/how_to_run_tests.md  |  2 +-
 docs/installation/env_requirement.rst |  8 ++++----
 docs/installation/manual_setup.rst    |  2 +-
 docs/introduction.rst                 |  4 ++--
 docs/user_guide/advanced_usage.rst    | 23 ++++++++++++-----------
 docs/user_guide/basic_usage.rst       |  2 +-
 6 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/docs/development/how_to_run_tests.md b/docs/development/how_to_run_tests.md
index bf481dad..20336368 100644
--- a/docs/development/how_to_run_tests.md
+++ b/docs/development/how_to_run_tests.md
@@ -1,7 +1,7 @@
 How to run tests
 =================
 
-To run a test, you need to first cross compile the code, push the binary
+To run tests, you need to first cross compile the code, push the binary
 into the device and then execute the binary. To automate this process,
 MACE provides `tools/bazel_adb_run.py` tool.
 
diff --git a/docs/installation/env_requirement.rst b/docs/installation/env_requirement.rst
index 3efaf8f9..bc7adfc8 100644
--- a/docs/installation/env_requirement.rst
+++ b/docs/installation/env_requirement.rst
@@ -17,7 +17,7 @@ Necessary Dependencies:
      - `bazel installation guide `__
    * - android-ndk
      - r15c/r16b
-     - `NDK installation guide `__ or refers to the docker file
+     - `NDK installation guide `__
    * - adb
      - >= 1.0.32
      - apt-get install android-tools-adb
@@ -42,9 +42,6 @@ Necessary Dependencies:
    * - filelock
      - >= 3.0.0
      - pip install -I filelock==3.0.0
-   * - docker (for caffe)
-     - >= 17.09.0-ce
-     - `docker installation guide `__
 
 .. note::
@@ -62,3 +59,6 @@ Optional Dependencies:
    * - tensorflow
      - >= 1.6.0
      - pip install -I tensorflow==1.6.0 (if you use tensorflow model)
+   * - docker (for caffe)
+     - >= 17.09.0-ce
+     - `docker installation guide `__
diff --git a/docs/installation/manual_setup.rst b/docs/installation/manual_setup.rst
index bb283ac4..dbff94d6 100644
--- a/docs/installation/manual_setup.rst
+++ b/docs/installation/manual_setup.rst
@@ -65,4 +65,4 @@ Install Optional Dependencies
 
 .. code:: sh
 
-    pip install -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com tensorflow==1.8.0
+    pip install -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com tensorflow==1.6.0
diff --git a/docs/introduction.rst b/docs/introduction.rst
index cc4a4a7f..433f9666 100644
--- a/docs/introduction.rst
+++ b/docs/introduction.rst
@@ -3,7 +3,7 @@ Introduction
 Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for
 mobile heterogeneous computing platforms. MACE cover common mobile computing devices(CPU, GPU and DSP),
-and supplies tools and document to help users to deploy NN model to mobile devices. MACE has been
+and supplies tools and documents to help users deploy neural network models to mobile devices. MACE has been
 widely used in Xiaomi and proved with industry leading performance and stability.
 
 Framework
@@ -26,7 +26,7 @@ and Caffe.
 
 MACE Interpreter
 ================
-Mace Interpreter mainly parse the NN graph and manage the tensors in the graph.
+Mace Interpreter mainly parses the NN graph and manages the tensors in the graph.
 
 =======
 Runtime
diff --git a/docs/user_guide/advanced_usage.rst b/docs/user_guide/advanced_usage.rst
index bd2b5ab1..938f6b02 100644
--- a/docs/user_guide/advanced_usage.rst
+++ b/docs/user_guide/advanced_usage.rst
@@ -1,5 +1,6 @@
+==============
 Advanced usage
-===============
+==============
 
 This part contains the full usage of MACE.
 
@@ -41,7 +42,7 @@ in one deployment file.
      - Library name.
   * - target_abis
     - The target ABI(s) to build, could be 'host', 'armeabi-v7a' or 'arm64-v8a'.
-       If more than one ABIs will be used, seperate them by comas.
+       If more than one ABI is used, separate them with commas.
    * - target_socs
      - [optional] Build for specific SoCs.
    * - model_graph_format
@@ -49,12 +50,12 @@ in one deployment file.
    * - model_data_format
      - model data format, could be 'file' or 'code'. 'file' for converting model weight to data file(.data) and 'code' for converting model weight to c++ code.
    * - model_name
-     - model name, should be unique if there are more than one models.
+     - model name, which should be unique if there is more than one model.
        **LIMIT: if build_type is code, model_name will be used in c++ code so that model_name must comply with c++ name specification.**
    * - platform
      - The source framework, tensorflow or caffe.
    * - model_file_path
-     - The path of your model file, can be local path or remote url.
+     - The path of your model file, which can be a local path or a remote URL.
    * - model_sha256_checksum
      - The SHA256 checksum of the model file.
    * - weight_file_path
@@ -108,7 +109,7 @@ in one deployment file.
 
 Advanced Usage
 ==============
-There are two common advanced use cases: 1. convert model to CPP code. 2. tuning for specific SOC if use GPU.
+There are two common advanced use cases: 1. converting a model to CPP code; 2. tuning for a specific SOC when using the GPU.
 
 * **Convert model(s) to CPP code**
 
@@ -120,14 +121,14 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
 
     If you want to protect your model, you can convert model to CPP code. there are also two cases:
 
-    * convert model graph to code and model weight to file with blow model configuration.
+    * convert model graph to code and model weight to file with the model configuration below.
 
     .. code:: sh
 
         model_graph_format: code
         model_data_format: file
 
-    * convert both model graph and model weight to code with blow model configuration.
+    * convert both model graph and model weight to code with the model configuration below.
 
     .. code:: sh
 
         model_graph_format: code
         model_data_format: code
 
@@ -145,7 +146,7 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
         python tools/converter.py convert --config=/path/to/model_deployment_file.yml
 
     The command will generate **${library_name}.a** in **builds/${library_name}/model** directory and
-    ** *.h ** in **builds/${library_name}/include** like blow dir-tree.
+    ** *.h ** in **builds/${library_name}/include** as in the directory tree below.
 
     .. code::
 
@@ -230,7 +231,7 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
 
         python tools/converter.py run --config=/path/to/model_deployment_file.yml --validate
 
-    The command will generate two files in `builds/${library_name}/opencl`, like blow.
+    The command will generate two files in `builds/${library_name}/opencl`, as below.
 
     .. code::
 
@@ -252,8 +253,8 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning
       for the SOC.
 
 * **4. Deployment**
-    * Change the names of files generated above for not collision and push them to **your own device' directory**.
-    * Usage like the previous procedure, blow list the key steps different.
+    * Rename the files generated above to avoid collisions and push them to **your own device's directory**.
+    * Usage is like the previous procedure; the key differences are listed below.
 
     .. code:: cpp
 
diff --git a/docs/user_guide/basic_usage.rst b/docs/user_guide/basic_usage.rst
index 692ea016..6a410c21 100644
--- a/docs/user_guide/basic_usage.rst
+++ b/docs/user_guide/basic_usage.rst
@@ -185,7 +185,7 @@ The above command will generate dynamic library ``builds/lib/${ABI}/libmace.so``
 
 .. warning::
 
-    1. Please verify that the target_abis param in the above command and your deployment file are the same.
+    Please verify that the target_abis param in the above command and your deployment file are the same.
 ==================
--
GitLab
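For reference, the deployment-file keys touched by the advanced_usage.rst hunks (library_name, target_abis, model_graph_format, model_data_format, model_name, platform, model_file_path, model_sha256_checksum) can be sketched together in one file. This is a hypothetical example, not taken from the patch: the library name, model name, URL, and checksum placeholder are invented, and the exact nesting under a `models:` section is an assumption.

```yaml
# Hypothetical deployment file; the key names come from the table in the
# patch above, but every concrete value here is a placeholder.
library_name: mobilenet
target_abis: [armeabi-v7a, arm64-v8a]   # multiple ABIs, separated by commas
model_graph_format: code                # 'file' or 'code'
model_data_format: file                 # graph as code, weights in a .data file
models:
  mobilenet_v1:                         # model_name; must be a valid C++ identifier when built as code
    platform: tensorflow                # or caffe
    model_file_path: https://example.com/mobilenet_v1.pb   # local path or remote URL
    model_sha256_checksum: <sha256-of-model-file>
```

With `model_graph_format: code` and `model_data_format: file`, this corresponds to the first of the two model-protection cases described in the patch.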