Commit fdcaad4c authored by: meng_chunyang

update README for mindspore lite

Parent 8d419314
...@@ -57,7 +57,6 @@ ...@@ -57,7 +57,6 @@
* Add 93 TFLite ops.
* Add 24 Caffe ops.
* Add 62 ONNX ops.
* Add support for Windows.
* Add 11 optimization passes, including fusion and constant folding.
* Support quantization-aware training and post-training quantization.
* CPU
...
@@ -54,3 +54,14 @@ For more details please check out our [MindSpore Lite Architecture Guide](https:
Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html) is the process of running input data through the model to get output.
MindSpore provides a series of pre-trained models that can be deployed on mobile devices; see the [example](#TODO).
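
To make the load-and-run flow above concrete, here is a minimal sketch using the MindSpore Lite C++ runtime as it looked around the r0.7/r1.0 releases. The model path `model.ms`, the thread count of 4 (chosen to match the benchmark below), and the exact headers, class names, and method signatures (`Model::Import`, `LiteSession::CreateSession`, `CompileGraph`, `RunGraph`) are assumptions and may differ in your version; the runtime tutorial linked above is the authoritative reference.

```cpp
// Minimal sketch: load a converted .ms model and run inference with the
// MindSpore Lite C++ runtime. Names follow the r0.7-era API and are
// assumptions -- check the runtime tutorial for the version you use.
// Error handling is omitted for brevity.
#include <fstream>
#include <iostream>
#include <vector>

#include "include/context.h"
#include "include/lite_session.h"
#include "include/model.h"

int main() {
  // Read the converted .ms model file into memory ("model.ms" is a placeholder path).
  std::ifstream ifs("model.ms", std::ios::binary | std::ios::ate);
  size_t size = ifs.tellg();
  std::vector<char> buf(size);
  ifs.seekg(0);
  ifs.read(buf.data(), size);

  // Parse the flatbuffer model from the in-memory buffer.
  auto *model = mindspore::lite::Model::Import(buf.data(), size);

  // Create a session bound to a context (CPU, 4 threads to match the benchmark table).
  mindspore::lite::Context context;
  context.thread_num_ = 4;  // member name is an assumption for this API generation
  auto *session = mindspore::session::LiteSession::CreateSession(&context);

  // Compile the graph, fill the input tensors, and run inference.
  session->CompileGraph(model);
  auto inputs = session->GetInputs();
  // ... copy preprocessed input data into inputs[i]->MutableData() here ...
  session->RunGraph();

  // Retrieve outputs (the exact output-retrieval API varies across versions).
  auto outputs = session->GetOutputs();
  std::cout << "inference done, output tensors: " << outputs.size() << std::endl;

  delete session;
  delete model;
  return 0;
}
```
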
## MindSpore Lite benchmark results

Based on MindSpore r0.7, we tested a set of common networks on a HUAWEI Mate30 (HiSilicon Kirin990) mobile phone; the results below are provided for reference.

| Network             | Thread Number | Average Run Time (ms) |
| ------------------- | ------------- | -------------------- |
| basic_squeezenet | 4 | 9.10 |
| inception_v3 | 4 | 69.361 |
| mobilenet_v1_10_224 | 4 | 7.137 |
| mobilenet_v2_10_224 | 4 | 5.569 |
| resnet_v2_50 | 4 | 48.691 |
@@ -64,3 +64,14 @@ MindSpore Lite is MindSpore's device-cloud collaborative, lightweight, high-performance AI inference …
It mainly performs model inference, that is, it loads the model and carries out all model-related computation. [Inference](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html) is the process of running input data through the model to obtain predictions.
MindSpore provides a series of [examples](#TODO) of deploying pre-trained models on smart devices.

## MindSpore Lite performance reference data

Based on MindSpore r0.7, we measured the performance of a set of common on-device networks on a HUAWEI Mate30 (HiSilicon Kirin990) phone; the data below is provided for your reference:

| Network             | Thread Number | Average Inference Time (ms) |
| ------------------- | ------------- | --------------------------- |
| basic_squeezenet | 4 | 9.10 |
| inception_v3 | 4 | 69.361 |
| mobilenet_v1_10_224 | 4 | 7.137 |
| mobilenet_v2_10_224 | 4 | 5.569 |
| resnet_v2_50 | 4 | 48.691 |