diff --git a/docs/demo_guides/cpp_demo.md b/docs/demo_guides/cpp_demo.md
index 55abd3a70fe23dd0e8798d6a772ee216140c2875..5f3a2757b21cffb90ebd214ea6d9525dc3fb6dbd 100644
--- a/docs/demo_guides/cpp_demo.md
+++ b/docs/demo_guides/cpp_demo.md
@@ -32,14 +32,26 @@ tar zxf mobilenet_v1.tar.gz
 
 ![image](https://paddlelite-data.bj.bcebos.com/doc_images/cxx_demo/3inference_model.png)
 
-(2) Download the [opt tool](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.3.0/opt), place it in the same folder, and run the following commands in a terminal to convert the model:
+(2) Model conversion
 
-```shell
-wget https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.3.0/opt
-chmod +x opt
-./opt --model_dir=./mobilenet_v1 --optimize_out_type=naive_buffer --optimize_out=./mobilenet_v1_opt
-```
+ - Before v2.6.0
+
+   Download the [opt tool](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.3.0/opt), place it in the same folder, and run the following commands in a terminal to convert the model:
+
+   ```shell
+   wget https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.3.0/opt
+   chmod +x opt
+   ./opt --model_dir=./mobilenet_v1 --optimize_out_type=naive_buffer --optimize_out=./mobilenet_v1_opt
+   ```
+ - v2.6.0 and later
+
+   Install paddlelite with pip, then run the following commands in a terminal to convert the model:
+
+   ```shell
+   python -m pip install paddlelite
+   paddle_lite_opt --model_dir=./mobilenet_v1 --optimize_out_type=naive_buffer --optimize_out=./mobilenet_v1_opt
+   ```
 
 **The result is shown in the figure below:**
 
 ![image](https://paddlelite-data.bj.bcebos.com/doc_images/cxx_demo/2opt_model.png)
diff --git a/docs/demo_guides/python_demo.md b/docs/demo_guides/python_demo.md
index 36ff11751dee43f4b148c21bc603784b7f60f88a..24ce217e57684791e5bce3cf8b1295faf7740c45 100644
--- a/docs/demo_guides/python_demo.md
+++ b/docs/demo_guides/python_demo.md
@@ -55,6 +55,11 @@ a.set_valid_places("x86")
 a.run()
 ```
 
+
+- macOS environment
+
+The opt tool is used the same way as on Linux. (Python inference is not yet supported in the macOS environment; this will be fixed in the next release.)
+
 ## 3. Write the inference program
 
 With the inference library and the model ready, we can write a program to run inference. We provide C++ example demos covering image classification, object detection, and other application scenarios for reference. Create a file named mobilenetV1_light_api.py,
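
The python_demo.md hunk above ends just before the body of mobilenetV1_light_api.py. For reviewers, here is a minimal sketch of what such a script typically contains, assuming the opt step above produced ./mobilenet_v1_opt.nb and using the paddlelite Python API (MobileConfig, create_paddle_predictor); the tensor helpers from_numpy/numpy are taken from later Paddle-Lite releases and may differ in older versions.

```python
# Sketch of mobilenetV1_light_api.py (filename taken from the doc above).
# Assumes the optimized model produced by opt is ./mobilenet_v1_opt.nb.
import numpy as np
from paddlelite.lite import MobileConfig, create_paddle_predictor

# 1. Point the light-weight runtime at the optimized .nb model.
config = MobileConfig()
config.set_model_from_file("./mobilenet_v1_opt.nb")

# 2. Create the predictor.
predictor = create_paddle_predictor(config)

# 3. Fill the input tensor with dummy data shaped for MobileNetV1 (1x3x224x224).
input_tensor = predictor.get_input(0)
input_tensor.from_numpy(np.ones((1, 3, 224, 224), dtype=np.float32))

# 4. Run inference and read back the output scores.
predictor.run()
output_tensor = predictor.get_output(0)
print(output_tensor.numpy()[0][:10])  # first 10 class scores
```

Running `python mobilenetV1_light_api.py` after the conversion step should print the first few scores of the classification output; this is only an illustration of the light API flow, not the exact script shipped in the repository.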