diff --git a/lite/tutorials/source_en/build.md b/lite/tutorials/source_en/build.md
index 42810e4c11564b448dcbe00c88c900f40056fcea..976003f037a8e2055b908d6f6134d9e9a6e5bb1e 100644
--- a/lite/tutorials/source_en/build.md
+++ b/lite/tutorials/source_en/build.md
@@ -143,7 +143,7 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
 │   └── third_party # Header files and libraries of third party libraries
 │       ├── flatbuffers # Header files of FlatBuffers
 │   └── include # Header files of inference framework
-│   └── time_profiler # Model network layer time-consuming analysis tool
+│   └── time_profile # Model network layer time-consuming analysis tool
 ```

@@ -158,7 +158,7 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
 │   └── third_party # Header files and libraries of third party libraries
 │       ├── flatbuffers # Header files of FlatBuffers
 │   └── include # Header files of inference framework
-│   └── time_profiler # Model network layer time-consuming analysis tool
+│   └── time_profile # Model network layer time-consuming analysis tool
 ```

@@ -172,10 +172,10 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
 │   └── third_party # Header files and libraries of third party libraries
 │       ├── flatbuffers # Header files of FlatBuffers
 │   └── include # Header files of inference framework
-│   └── time_profiler # Model network layer time-consuming analysis tool
+│   └── time_profile # Model network layer time-consuming analysis tool
 ```

 > 1. `liboptimize.so` only exists in the output package of runtime-arm64 and is only used on ARMv8.2 and CPUs that support fp16.
 > 2. Compile ARM64 to get the inference framework output of arm64-cpu by default, if you add `-e gpu`, you will get the inference framework output of arm64-gpu, and the package name is `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`, compiling ARM32 is in the same way.
-> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables, and configure the path where the dynamic libraries of MindSpore Lite and Protobuf are located to the path where the system searches for dynamic libraries. Take the compiled under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and time_profiler: `export LD_LIBRARY_PATH= ./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
+> 3. Before running the tools in the converter, benchmark or time_profile directory, you need to configure environment variables, adding the paths of the MindSpore Lite and Protobuf dynamic libraries to the system's dynamic library search path. Taking a build under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and timeprofiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
diff --git a/lite/tutorials/source_zh_cn/build.md b/lite/tutorials/source_zh_cn/build.md
index dc384f4987d0220c6ec1325fb85dcb55f751967c..602d405298aaeb3ccef3e156adf70c16e8b2f626 100644
--- a/lite/tutorials/source_zh_cn/build.md
+++ b/lite/tutorials/source_zh_cn/build.md
@@ -144,7 +144,7 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
 │   └── third_party # Header files and libraries of third party libraries
 │       ├── flatbuffers # Header files of FlatBuffers
 │   └── include # Header files of inference framework
-│   └── time_profiler # Model network layer time-consuming analysis tool
+│   └── time_profile # Model network layer time-consuming analysis tool
 ```

@@ -159,7 +159,7 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
 │   └── third_party # Header files and libraries of third party libraries
 │       ├── flatbuffers # Header files of FlatBuffers
 │   └── include # Header files of inference framework
-│   └── time_profiler # Model network layer time-consuming analysis tool
+│   └── time_profile # Model network layer time-consuming analysis tool
 ```

@@ -173,10 +173,10 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
 │   └── third_party # Header files and libraries of third party libraries
 │       ├── flatbuffers # Header files of FlatBuffers
 │   └── include # Header files of inference framework
-│   └── time_profiler # Model network layer time-consuming analysis tool
+│   └── time_profile # Model network layer time-consuming analysis tool
 ```

 > 1. `liboptimize.so` only exists in the runtime-arm64 output package and is only used on ARMv8.2 and CPUs that support the fp16 feature.
 > 2. Compiling ARM64 produces the arm64-cpu inference framework output by default; adding `-e gpu` produces the arm64-gpu output, in which case the package is named `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`. Compiling ARM32 works the same way.
-> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables, adding the paths of the MindSpore Lite and Protobuf dynamic libraries to the system's dynamic library search path. Taking a build under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and time_profiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`
+> 3. Before running the tools in the converter, benchmark or time_profile directory, you need to configure environment variables, adding the paths of the MindSpore Lite and Protobuf dynamic libraries to the system's dynamic library search path. Taking a build under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and timeprofiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`
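For reference, a minimal shell sketch of the environment-variable setup described in note 3 of both files, assuming a 0.7.0-beta build unpacked under `./output` as in the examples above:

```bash
# Sketch only: paths follow the 0.7.0-beta examples in note 3 and assume
# the build output packages were unpacked under ./output.

# converter needs the Protobuf dynamic libraries on the search path.
export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}

# benchmark and time_profile need the MindSpore Lite dynamic library on the search path.
export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}
```

Both exports prepend to any existing `LD_LIBRARY_PATH`, so they can be set in the same shell session before running the tools.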