Paddle Lite is an updated version of Paddle-Mobile, an open source deep learning framework designed to make it easy to perform inference on mobile, embedded, and IoT devices. It is compatible with PaddlePaddle and pre-trained models from other sources.
For tutorials, please see [PaddleLite Document](https://paddlepaddle.github.io/Paddle-Lite/).
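
Below is a minimal sketch of what loading an optimized model and running inference with Paddle Lite's light-weight (mobile) C++ API can look like. The model file name is a placeholder, and the exact header, class, and method names (`MobileConfig`, `CreatePaddlePredictor`, `GetInput`, and so on) should be checked against the documentation for the release you use.

```cpp
// Minimal inference sketch (assumes a model already converted to .nb format
// by Paddle Lite's opt tool; the file name below is a placeholder).
#include <iostream>
#include <memory>
#include <vector>

#include "paddle_api.h"  // Paddle Lite C++ API

using namespace paddle::lite_api;  // NOLINT

int main() {
  // Configure the light-weight runtime with an optimized model file.
  MobileConfig config;
  config.set_model_from_file("mobilenet_v1.nb");

  // Build a predictor from the config.
  std::shared_ptr<PaddlePredictor> predictor =
      CreatePaddlePredictor<MobileConfig>(config);

  // Fill a dummy 1x3x224x224 input tensor.
  std::unique_ptr<Tensor> input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* in_data = input->mutable_data<float>();
  for (int i = 0; i < 1 * 3 * 224 * 224; ++i) in_data[i] = 1.0f;

  // Run inference and read the first value of the first output.
  predictor->Run();
  std::unique_ptr<const Tensor> output = predictor->GetOutput(0);
  std::cout << "output[0] = " << output->data<float>()[0] << std::endl;
  return 0;
}
```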
## Key Features
...
...
It also supports INT8 quantization with [PaddleSlim model compression tools](ht
Performance is also boosted on Huawei NPU and FPGA.
The latest benchmark is located at [benchmark](https://paddlepaddle.github.io/Paddle-Lite/develop/benchmark/)