[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) is an open-source deep learning framework designed by PaddlePaddle to make it easy to perform inference on mobile, embedded, and IoT devices.
Its light weight is reflected in the use of fewer bits to represent the weights and activations of the neural network,
which can greatly reduce the model size
and ease the limited storage space of terminal devices;
for example, quantizing 32-bit floating-point weights to 8-bit integers shrinks a model to roughly a quarter of its original size.
Its inference performance is also generally better than that of other frameworks.
[PaddleClas](https://github.com/PaddlePaddle/PaddleClas) has used Paddle-Lite to evaluate [the performance of the mobile model](../models/Mobile.md).
For more details on the process, please refer to the [Paddle-Lite documentation](https://paddle-lite.readthedocs.io/zh/latest/).
[Paddle Serving](https://github.com/PaddlePaddle/Serving) aims to help deep-learning researchers easily deploy online inference services. It supports one-click deployment for industrial use, high concurrency and efficient communication between client and server, and clients developed in multiple programming languages.
This section takes the deployment of an HTTP inference service as an example to introduce how to use Paddle Serving to deploy model services in PaddleClas.
## II. Serving Install
The Serving official website recommends using Docker to install and deploy the Serving environment. First, pull the Docker image and create a Serving-based container, as sketched below.
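A minimal sketch of these two steps is shown below; the image tag and the container name `serving_dev` are illustrative (check the Serving official website for the currently recommended image):

```shell
# Pull the Serving development image (tag is illustrative; use the one
# recommended on the Serving official website).
docker pull registry.baidubce.com/paddlepaddle/serving:latest-devel

# Create and start a container, mapping port 9292 used by the HTTP service.
docker run -p 9292:9292 --name serving_dev -dit \
    registry.baidubce.com/paddlepaddle/serving:latest-devel bash

# Enter the container to install Paddle Serving and deploy the service.
docker exec -it serving_dev bash
```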
`9292` is the port for sending the request, which must be consistent with the port specified when starting the Serving service, and `./docs/images/logo.png` is the test image; the top-1 label and its probability are returned.
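For illustration, such a request could be sent with `curl` as sketched below; the endpoint path `/image/prediction` and the `feed`/`fetch` keys are assumptions modeled on typical Serving web-service examples, so adjust them to your actual service configuration:

```shell
# Hypothetical endpoint and feed/fetch keys; adjust to your service config.
# The test image is base64-encoded and sent as JSON to port 9292.
curl -H "Content-Type:application/json" -X POST \
     -d "{\"feed\": [{\"image\": \"$(base64 -w 0 ./docs/images/logo.png)\"}], \"fetch\": [\"prediction\"]}" \
     http://127.0.0.1:9292/image/prediction
```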
* For more Serving deployment options, such as the RPC inference service, please refer to the Serving official website: [https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet)