Commit 4d0dbda5 authored by M meng_chunyang

fix some bugs

Parent 7339173f
......@@ -11,6 +11,7 @@
- [Description of Converter's Directory Structure](#description-of-converter-directory-structure)
- [Description of Runtime and Other Tools' Directory Structure](#description-of-runtime-and-other-tools-directory-structure)
<!-- /TOC -->
<a href="https://gitee.com/mindspore/docs/blob/r0.7/lite/tutorials/source_en/build.md" target="_blank"><img src="./_static/logo_source.png"></a>
......@@ -177,4 +178,4 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
> 1. `liboptimize.so` exists only in the output package of runtime-arm64 and is used only on ARMv8.2 CPUs that support fp16.
> 2. Compiling for ARM64 yields the arm64-cpu inference framework output by default; if you add `-e gpu`, you get the arm64-gpu output instead, and the package name is `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`. The same applies when compiling for ARM32.
> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are added to the system's dynamic-library search path. Take the CPU build under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and time_profiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are added to the system's dynamic-library search path. Take the build under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and time_profiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
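Putting note 3 together, a minimal shell sketch of the environment setup; the paths assume the 0.7.0-beta output packages extracted under `./output`, exactly as in the commands above:

```bash
# Let the converter find the bundled Protobuf dynamic libraries.
export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}

# Let benchmark and time_profiler find the MindSpore Lite dynamic libraries.
export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}
```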
......@@ -20,16 +20,16 @@ The TimeProfiler tool can be used to analyze the time consumption of forward inf
To use the TimeProfiler tool, you need to prepare the environment as follows:
- Compilation: Install build dependencies and perform build. The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profiler` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/r0.7/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/r0.7/build.html#compilation-example) in the build document.
- Compilation: Install build dependencies and perform build. The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profile` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/r0.7/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/r0.7/build.html#compilation-example) in the build document.
- Run: Obtain the `time_profiler` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/tutorial/en/r0.7/build.html#output-description) in the build document.
- Run: Obtain the `timeprofile` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/tutorial/en/r0.7/build.html#output-description) in the build document.
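For orientation, a hedged sketch of the compilation step referenced above, assuming MindSpore's top-level `build.sh` script and the `-I` option named elsewhere in this document; see the linked build document for the authoritative steps:

```bash
# Hypothetical build sketch: from the MindSpore source root, build the
# x86_64 inference framework output, which includes the TimeProfiler tool.
bash build.sh -I x86_64
```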
## Parameter Description
The command used for analyzing the time consumption of forward inference at the network layer based on the compiled TimeProfiler tool is as follows:
```bash
./timeprofiler --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
./timeprofile --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
```
The following describes the parameters in detail.
......@@ -49,7 +49,7 @@ The following describes the parameters in detail.
Take the `test_timeprofiler.ms` model as an example and set the number of model inference cycles to 10. The command for using TimeProfiler to analyze the time consumption at the network layer is as follows:
```bash
./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10
./timeprofile --modelPath=./models/test_timeprofiler.ms --loopCount=10
```
After this command is executed, the TimeProfiler tool outputs statistics on the running time of the model at the network layer. In this example, the command output is grouped in two ways: by `opName` and by `optype`. `opName` indicates the operator name, `optype` indicates the operator type, `avg` indicates the average running time of the operator per single run, `percent` indicates the ratio of the operator's running time to the total running time of all operators, `calledTimes` indicates the number of times the operator is run, and `opTotalTime` indicates the total time the operator takes over the specified number of runs. Finally, `total time` and `kernel cost` show the average time consumed by a single inference of the model and the sum of the average times consumed by all operators in the model inference, respectively.
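To make the fields concrete with hypothetical numbers: an operator with an `avg` of 0.5 ms and `calledTimes` of 10 has an `opTotalTime` of 5 ms, and its `percent` is that 5 ms divided by the total running time of all operators.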
......
......@@ -129,7 +129,7 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ ├── protobuf # Protobuf dynamic libraries
```
#### Description of the Directory Structure of the Model Inference Framework Runtime and Other Tools
The inference framework can be obtained under the `-I x86_64`, `-I arm64`, and `-I arm32` compile options, and it includes the following parts:
......@@ -179,4 +179,4 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
> 1. `liboptimize.so` exists only in the output package of runtime-arm64 and is used only on ARMv8.2 CPUs that support the fp16 feature.
> 2. Compiling for ARM64 yields the arm64-cpu inference framework output by default; adding `-e gpu` yields the arm64-gpu output instead, in which case the package name is `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`. The same applies when compiling for ARM32.
> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are added to the system's dynamic-library search path. Take the CPU build under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and time_profiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are added to the system's dynamic-library search path. Take the build under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; configure benchmark and time_profiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
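As a quick sanity check on a Linux host (the benchmark binary location below is an assumption about the extracted package layout, not taken from this document):

```bash
# Hypothetical check: after exporting LD_LIBRARY_PATH as above, confirm
# that the MindSpore Lite dynamic libraries resolve; any "not found"
# entry means the search path is still wrong.
ldd ./output/mindspore-lite-0.7.0-runtime-x86-cpu/benchmark/benchmark | grep -i 'mindspore\|not found'
```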
......@@ -38,7 +38,7 @@ Image classification models in the MindSpore Model Zoo can be [downloaded here](https://download.minds
## Converting the Model
If the preset model already meets your requirements, skip this section. If you need to retrain a model provided by MindSpore, then after retraining is complete, export the model in the [.mindir format](https://www.mindspore.cn/tutorial/zh-CN/r.07/use/saving_and_loading_model_parameters.html#mindir). Then use the MindSpore Lite [model conversion tool](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/use/converter_tool.html) to convert the .mindir model into the .ms format.
If the preset model already meets your requirements, skip this section. If you need to retrain a model provided by MindSpore, then after retraining is complete, export the model in the [.mindir format](https://www.mindspore.cn/tutorial/zh-CN/r0.7/use/saving_and_loading_model_parameters.html#mindir). Then use the MindSpore Lite [model conversion tool](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/use/converter_tool.html) to convert the .mindir model into the .ms format.
Taking the mobilenetv2 model as an example, the following script converts it into a MindSpore Lite model for on-device inference.
```bash
......
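The conversion script itself is collapsed in this diff. As a hedged sketch only — the `--fmk` value and file names below are assumptions, not taken from the collapsed script:

```bash
# Hypothetical conversion sketch: convert a retrained mobilenetv2 .mindir
# model to the .ms format. The framework flag value has varied across
# MindSpore Lite versions (MS vs. MINDIR); check ./converter_lite --help
# for the build in use.
./converter_lite --fmk=MS --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2
```

On success the converter writes `mobilenetv2.ms`, the format used for on-device inference.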
......@@ -20,16 +20,16 @@ The TimeProfiler tool can analyze the time consumption of forward inference at the network layer of MindSpore Lite models
To use the TimeProfiler tool, you need to prepare the environment as follows.
- Compilation: The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profiler` directory of the MindSpore source code. Refer to [Environment Requirements](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/build.html#id2) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/build.html#id4) in the build document to perform the compilation.
- Compilation: The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profile` directory of the MindSpore source code. Refer to [Environment Requirements](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/build.html#id2) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/build.html#id4) in the build document to perform the compilation.
- Run: Refer to [Output Output](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/build.html#id4) in the deployment document to obtain the `time_profiler` tool, and configure environment variables.
- Run: Refer to [Compilation Output](https://www.mindspore.cn/lite/tutorial/zh-CN/r0.7/build.html#id5) in the deployment document to obtain the `timeprofile` tool, and configure environment variables.
## Parameter Description
When using the compiled TimeProfiler tool to analyze the time consumption of the model's network layers, the command format is as follows.
```bash
./timeprofiler --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
./timeprofile --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
```
The following describes the parameters in detail.
......@@ -49,7 +49,7 @@ The TimeProfiler tool can analyze the time consumption of forward inference at the network layer of MindSpore Lite models
To use TimeProfiler to analyze the time consumption of the network layers of the `test_timeprofiler.ms` model, with the number of model inference loop runs set to 10, the command is as follows:
```bash
./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10
./timeprofile --modelPath=./models/test_timeprofiler.ms --loopCount=10
```
After this command is executed, the TimeProfiler tool outputs statistics on the model's network-layer running time. For this example command, the output statistics are shown below, grouped in two ways: by `opName` and by `optype`. `opName` indicates the operator name, `optype` indicates the operator type, `avg` indicates the operator's average single-run time, `percent` indicates the ratio of the operator's running time to the total running time of all operators, `calledTimes` indicates the number of times the operator is run, and `opTotalTime` indicates the operator's total time over the specified number of runs. Finally, `total time` and `kernel cost` show the average time of a single model inference and the sum of the average times of all operators in the model inference, respectively.
......