diff --git a/RELEASE.md b/RELEASE.md
index ee54c5658b6ae782836ac15eaf73727d60597184..13118481559e58e48752c149cda27d68f5bd70cd 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -70,7 +70,7 @@
     * add optimized convolution1X1/3X3/depthwise/convolution_transposed for OpenCL.
 * Tool & example
     * Add benchmark and TimeProfile tools.
-    * Add image classification and object detection Android Demo.
+    * Add image classification Android Demo.
 
 ## Bugfixes
 * Models
diff --git a/mindspore/lite/README.md b/mindspore/lite/README.md
index 3a8ae9b59959d464e0c7c389852fc53fc01f5d28..cacebb6a9f1e764209d2e76da56f2c0371c5bb9a 100644
--- a/mindspore/lite/README.md
+++ b/mindspore/lite/README.md
@@ -35,7 +35,7 @@ For more details please check out our [MindSpore Lite Architecture Guide](https:
 
     The MindSpore team provides a series of pre-training models used for image classification, object detection. You can use these pre-trained models in your application.
 
-    The pre-trained models provided by MindSpore include: [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/) and [Object Detection](https://download.mindspore.cn/model_zoo/official/lite/). More models will be provided in the feature.
+    The pre-trained model provided by MindSpore is [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/). More models will be provided in the future.
 
     MindSpore allows you to retrain pre-trained models to perform other tasks.
 
@@ -53,7 +53,7 @@ For more details please check out our [MindSpore Lite Architecture Guide](https:
 
     Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html) is the process of running input data through the model to get output.
 
-    MindSpore provides a series of pre-trained models that can be deployed on mobile device [example](#TODO).
+    MindSpore provides a pre-trained model that can be deployed on mobile devices; see the [example](https://www.mindspore.cn/lite/examples/en).
 
 ## MindSpore Lite benchmark test result
 Base on MindSpore r0.7, we test a couple of networks on HUAWEI Mate30 (Hisilicon Kirin990) mobile phone, and get the test results below for your reference.
diff --git a/mindspore/lite/README_CN.md b/mindspore/lite/README_CN.md
index 30474c1186e0ce32aeb1aa2429bc1ea5e88213eb..7e9d2db64ccfc333b31b28d0c657d46d80e6fe75 100644
--- a/mindspore/lite/README_CN.md
+++ b/mindspore/lite/README_CN.md
@@ -43,7 +43,7 @@ MindSpore Lite is MindSpore's device-cloud collaborative, lightweight, high-performance AI inference
 
     The MindSpore team provides a series of pre-trained models for learning problems such as image classification and object detection. You can use the on-device models corresponding to these pre-trained models in your application.
 
-    The pre-trained models provided by MindSpore include [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/) and [Object Detection](https://download.mindspore.cn/model_zoo/official/lite/). The MindSpore team will add more preset models later.
+    The pre-trained model provided by MindSpore is [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/). The MindSpore team will add more preset models later.
 
     MindSpore allows you to retrain pre-trained models to perform other tasks. For example, a pre-trained image classification model can be retrained to recognize new image categories.
 
@@ -63,15 +63,15 @@ MindSpore Lite is MindSpore's device-cloud collaborative, lightweight, high-performance AI inference
     Performs model inference: it loads the model and completes all model-related computation. [Inference](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html) is the process of running input data through the model to obtain predictions.
 
-    MindSpore provides a series of [examples](#TODO) of deploying pre-trained models on smart devices.
+    MindSpore provides an [example](https://www.mindspore.cn/lite/examples) of deploying a pre-trained model on smart devices.
 
 ## MindSpore Lite performance reference data
 
 Based on MindSpore r0.7, we tested the performance of a set of common on-device networks on a HUAWEI Mate30 (Hisilicon Kirin990) phone, for your reference:
-
-  | Network             | Threads | Average inference time (ms) |
-  | ------------------- | ------- | --------------------------- |
-  | basic_squeezenet    | 4       | 9.10                        |
-  | inception_v3        | 4       | 69.361                      |
-  | mobilenet_v1_10_224 | 4       | 7.137                       |
-  | mobilenet_v2_10_224 | 4       | 5.569                       |
-  | resnet_v2_50        | 4       | 48.691                      |
+
+| Network             | Threads | Average inference time (ms) |
+| ------------------- | ------- | --------------------------- |
+| basic_squeezenet    | 4       | 9.10                        |
+| inception_v3        | 4       | 69.361                      |
+| mobilenet_v1_10_224 | 4       | 7.137                       |
+| mobilenet_v2_10_224 | 4       | 5.569                       |
+| resnet_v2_50        | 4       | 48.691                      |
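
The README text touched above describes the Runtime step: load a model, run input data through it, and read the outputs. Below is a minimal C++ sketch of that flow. It assumes the MindSpore Lite 1.x-era C++ API (`mindspore::lite::Model::Import`, `mindspore::session::LiteSession::CreateSession`, `CompileGraph`, `RunGraph`) and a `Context` with a `thread_num_` field; header paths, context fields, and the exact shape of the output map differ between releases, so treat it as an outline rather than the documented r0.7 interface — the Runtime tutorial linked in the diff is authoritative for a given release.

```cpp
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

#include "include/context.h"      // assumed header layout of the lite runtime package
#include "include/errorcode.h"
#include "include/lite_session.h"
#include "include/model.h"

// Read a converted .ms model file into memory.
static std::vector<char> ReadFile(const std::string &path) {
  std::ifstream ifs(path, std::ios::binary | std::ios::ate);
  std::vector<char> buf(static_cast<size_t>(ifs.tellg()));
  ifs.seekg(0);
  ifs.read(buf.data(), buf.size());
  return buf;
}

int main(int argc, char **argv) {
  if (argc < 2) {
    std::cerr << "usage: infer <model.ms>" << std::endl;
    return 1;
  }

  // 1. Load: parse the flatbuffer model produced by the converter.
  auto buf = ReadFile(argv[1]);
  auto *model = mindspore::lite::Model::Import(buf.data(), buf.size());

  // 2. Compile: create a session with 4 CPU threads (the setting used in the
  //    benchmark table) and build the graph for the target backend.
  mindspore::lite::Context context;
  context.thread_num_ = 4;  // field name assumed; it varies across releases
  auto *session = mindspore::session::LiteSession::CreateSession(&context);
  if (session == nullptr || session->CompileGraph(model) != mindspore::lite::RET_OK) {
    std::cerr << "failed to build session" << std::endl;
    return 1;
  }

  // 3. Fill inputs: zeros here as a placeholder for real image data.
  for (auto *tensor : session->GetInputs()) {
    std::memset(tensor->MutableData(), 0, tensor->Size());
  }

  // 4. Run inference and inspect the outputs.
  if (session->RunGraph() != mindspore::lite::RET_OK) {
    std::cerr << "inference failed" << std::endl;
    return 1;
  }
  auto outputs = session->GetOutputs();
  std::cout << "inference done, " << outputs.size() << " output entr(ies)" << std::endl;

  delete session;
  delete model;
  return 0;
}
```

For an image classification model such as `mobilenet_v2_10_224.ms`, step 3 would instead copy the preprocessed image into the input tensor, and step 4 would argmax over the output scores; the pattern of import, compile, fill, run, read stays the same.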