@@ -54,3 +54,14 @@ For more details please check out our [MindSpore Lite Architecture Guide](https:
Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html) is the process of running input data through the model to obtain outputs.
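As a rough sketch of that flow, the snippet below loads a converted `.ms` model and runs it with the C++ runtime. It follows the r0.7-era `LiteSession` API; the header paths, the file name `model.ms`, and the exact signatures are assumptions and may differ between releases.

```cpp
// Minimal MindSpore Lite inference sketch (r0.7-era C++ API; details may vary).
#include <fstream>
#include <iostream>
#include <vector>

#include "include/context.h"
#include "include/lite_session.h"
#include "include/model.h"

int main() {
  // Read the converted .ms model file into memory ("model.ms" is a placeholder).
  std::ifstream ifs("model.ms", std::ios::binary | std::ios::ate);
  size_t size = ifs.tellg();
  std::vector<char> buf(size);
  ifs.seekg(0);
  ifs.read(buf.data(), size);

  // Parse the flatbuffer model and create a session bound to a context
  // (CPU by default; thread number etc. are configurable on the context).
  auto model = mindspore::lite::Model::Import(buf.data(), size);
  mindspore::lite::Context context;
  auto session = mindspore::session::LiteSession::CreateSession(&context);
  session->CompileGraph(model);

  // Fill the input tensor(s) with preprocessed application data, then run.
  auto inputs = session->GetInputs();
  auto *in_data = reinterpret_cast<float *>(inputs[0]->MutableData());
  (void)in_data;  // ... copy your input values into in_data here ...
  session->RunGraph();

  // Retrieve output tensors (the exact accessor varies slightly by version).
  auto outputs = session->GetOutputs();
  std::cout << "inference done, outputs: " << outputs.size() << std::endl;

  delete session;
  delete model;
  return 0;
}
```

Error handling is omitted for brevity; in real code, check the return values of `Import`, `CreateSession`, `CompileGraph`, and `RunGraph`.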
MindSpore provides a series of pre-trained models that can be deployed on mobile devices; see the [example](#TODO).
## MindSpore Lite benchmark test results
Based on MindSpore r0.7, we tested several networks on a HUAWEI Mate 30 (HiSilicon Kirin 990) mobile phone; the results below are provided for reference.
| Network | Thread Number | Average Run Time (ms) |