Commit 66d2a71d authored by mindspore-ci-bot, committed by Gitee

!913 change timeprofiler to timeprofile

Merge pull request !913 from mengchunyang/r0.7
@@ -143,7 +143,7 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I arm32`
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
-│ └── time_profiler # Model network layer time-consuming analysis tool
+│ └── time_profile # Model network layer time-consuming analysis tool
```
@@ -158,7 +158,7 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I arm32`
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
-│ └── time_profiler # Model network layer time-consuming analysis tool
+│ └── time_profile # Model network layer time-consuming analysis tool
```
@@ -172,10 +172,10 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I arm32`
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
-│ └── time_profiler # Model network layer time-consuming analysis tool
+│ └── time_profile # Model network layer time-consuming analysis tool
```
> 1. `liboptimize.so` exists only in the runtime-arm64 output package and is used only on ARMv8.2 CPUs that support fp16.
> 2. Compiling for ARM64 produces the arm64-cpu inference framework output by default; adding `-e gpu` produces the arm64-gpu output, and the package is then named `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`. The same applies when compiling for ARM32 (see the build sketch after these notes).
-> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are on the system's search path for dynamic libraries. Taking a 0.7.0-beta build as an example: for converter, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; for benchmark and time_profiler, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
+> 3. Before running the tools in the converter, benchmark or time_profile directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are on the system's search path for dynamic libraries. Taking a 0.7.0-beta build as an example: for converter, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; for benchmark and time_profile, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
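As a concrete illustration of notes 2 and 3, here is a minimal shell sketch. The `-I` and `-e` flags are the ones quoted in the documentation above; the assumption that the top-level build script is `build.sh`, and the exact output paths (taken from the 0.7.0-beta example), may need adjusting to your actual build:

```bash
# Note 2: build the arm64-gpu inference framework output; dropping
# "-e gpu" yields the default arm64-cpu package instead.
bash build.sh -I arm64 -e gpu

# Note 3: put the Protobuf libraries on the search path before running converter.
export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}

# Note 3: put the MindSpore Lite runtime library on the search path
# before running benchmark or time_profile.
export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}
```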
@@ -144,7 +144,7 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
-│ └── time_profiler # Model network layer time-consuming analysis tool
+│ └── time_profile # Model network layer time-consuming analysis tool
```
@@ -159,7 +159,7 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
-│ └── time_profiler # Model network layer time-consuming analysis tool
+│ └── time_profile # Model network layer time-consuming analysis tool
```
@@ -173,10 +173,10 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
-│ └── time_profiler # Model network layer time-consuming analysis tool
+│ └── time_profile # Model network layer time-consuming analysis tool
```
> 1. `liboptimize.so` exists only in the runtime-arm64 output package and is used only on ARMv8.2 CPUs that support the fp16 feature.
> 2. Compiling for ARM64 produces the arm64-cpu inference framework output by default; adding `-e gpu` produces the arm64-gpu output, and the package is then named `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`. The same applies when compiling for ARM32.
-> 3. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are on the system's search path for dynamic libraries. Taking a 0.7.0-beta build as an example: for converter, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; for benchmark and time_profiler, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
+> 3. Before running the tools in the converter, benchmark or time_profile directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are on the system's search path for dynamic libraries. Taking a 0.7.0-beta build as an example: for converter, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:${LD_LIBRARY_PATH}`; for benchmark and time_profile, run `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
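To check that note 3's `LD_LIBRARY_PATH` configuration took effect, a quick sketch using the standard Linux `ldd` tool; the executable path below is an assumption based on the directory layout shown above, so substitute the actual binary name found in your `time_profile` directory:

```bash
# Print the search path that was just configured.
echo "${LD_LIBRARY_PATH}"

# Ask the dynamic linker to resolve the tool's dependencies; any entry
# reported as "not found" means the search path is still incomplete.
# The binary name "timeprofile" is an assumed placeholder, not taken from the docs.
ldd ./output/mindspore-lite-0.7.0-runtime-x86-cpu/time_profile/timeprofile
```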