# Embed Paddle Inference in Your Application

Paddle inference offers APIs in `C` and `C++`. You can deploy a model trained by Paddle by following the steps below:

1. Optimize the native model;
2. Write code for deployment (see the sketch at the end of this document).

## The APIs

All the released APIs are located in the `paddle_inference_api.h` header file. The stable APIs live in `namespace paddle`, while the unstable APIs are kept in `namespace paddle::contrib`.

## Write code

Read `paddle_inference_api.h` for more information.
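
The snippet below is a minimal sketch of a typical deployment flow, assuming the `NativeConfig`/`CreatePaddlePredictor` interface declared in `paddle_inference_api.h`. The model directory, input name, and input shape are placeholders, and exact types and signatures may vary between releases, so check them against the header shipped with your build.

```c++
#include <vector>
#include "paddle_inference_api.h"

int main() {
  // Configure the predictor; `model_dir` is a placeholder path to an
  // optimized (saved) inference model.
  paddle::NativeConfig config;
  config.model_dir = "./my_inference_model";  // placeholder path
  config.use_gpu = false;

  // Create a predictor from the config (stable API in namespace paddle).
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Prepare one input tensor; the name, shape, and values are illustrative only.
  std::vector<float> input_data(1 * 3 * 224 * 224, 0.f);
  paddle::PaddleTensor input;
  input.name = "image";  // placeholder input name
  input.shape = {1, 3, 224, 224};
  input.data = paddle::PaddleBuf(input_data.data(),
                                 input_data.size() * sizeof(float));
  input.dtype = paddle::PaddleDType::FLOAT32;

  // Run inference and collect the outputs.
  std::vector<paddle::PaddleTensor> inputs{input};
  std::vector<paddle::PaddleTensor> outputs;
  if (!predictor->Run(inputs, &outputs)) {
    return 1;  // inference failed
  }

  // outputs[0].data now holds the raw result buffer.
  return 0;
}
```

To build such a program, add the header's include path and link against the Paddle inference library distributed with your build.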