# Embed Paddle Inference in Your Application

Paddle Inference offers APIs in the `C` and `C++` languages. You can deploy a model trained with Paddle by following two steps:

1. Optimize the native model.
2. Write some code for deployment.
## The APIs
All released APIs are declared in the `paddle_inference_api.h` header file. Stable APIs live in `namespace paddle`; unstable APIs are kept separate in `namespace paddle::contrib`.
## Write Some Code

Read `paddle_inference_api.h` for more information.
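As a minimal sketch of the deployment flow, the snippet below configures a predictor, feeds one input tensor, and runs inference. It assumes the `NativeConfig`/`CreatePaddlePredictor` API exposed by `paddle_inference_api.h`; the model directory, input name, and input shape are illustrative placeholders, and all names should be checked against the header shipped with your Paddle version.

```cpp
#include <iostream>
#include <vector>

#include "paddle_inference_api.h"  // released Paddle Inference APIs

int main() {
  // Configure the predictor. The model directory is a hypothetical path
  // to a model produced by the optimization step above.
  paddle::NativeConfig config;
  config.model_dir = "./my_optimized_model";
  config.use_gpu = false;

  // Stable APIs live in namespace paddle.
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Prepare one float input tensor (name, shape, and data are illustrative).
  std::vector<float> data(1 * 3 * 224 * 224, 0.f);
  paddle::PaddleTensor input;
  input.name = "image";
  input.shape = {1, 3, 224, 224};
  input.data = paddle::PaddleBuf(data.data(), data.size() * sizeof(float));
  input.dtype = paddle::PaddleDType::FLOAT32;

  // Run inference and inspect the outputs.
  std::vector<paddle::PaddleTensor> outputs;
  if (predictor->Run({input}, &outputs)) {
    std::cout << "got " << outputs.size() << " output tensor(s)\n";
  }
  return 0;
}
```

Compiling this requires linking against the Paddle Inference library, so treat it as a starting template rather than a standalone program.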