From 0d73f4c2655bcef0d2c037b9b67f9279ea4fab23 Mon Sep 17 00:00:00 2001
From: Yu Yang
Date: Sun, 26 Mar 2017 15:41:20 +0800
Subject: [PATCH] Add usage documentation of C-API.

---
 paddle/capi/examples/README.md                |  3 ++
 .../capi/examples/model_inference/README.md   | 42 +++++++++++++++++++
 2 files changed, 45 insertions(+)
 create mode 100644 paddle/capi/examples/README.md
 create mode 100644 paddle/capi/examples/model_inference/README.md

diff --git a/paddle/capi/examples/README.md b/paddle/capi/examples/README.md
new file mode 100644
index 00000000000..14013e281ff
--- /dev/null
+++ b/paddle/capi/examples/README.md
@@ -0,0 +1,3 @@
+# C-API Example Usage
+
+* [Model Inference](./model_inference/README.md)
diff --git a/paddle/capi/examples/model_inference/README.md b/paddle/capi/examples/model_inference/README.md
new file mode 100644
index 00000000000..58e6c83140b
--- /dev/null
+++ b/paddle/capi/examples/model_inference/README.md
@@ -0,0 +1,42 @@
+# Use C-API for Model Inference
+
+This directory contains several examples of how to use the Paddle C-API for model inference.
+
+## Convert the configuration file to a protobuf binary
+
+First, convert Paddle's model configuration file into a protobuf binary file. Each example directory contains a script named `convert_protobin.sh`, which converts `trainer_config.conf` into `trainer_config.bin`.
+
+`convert_protobin.sh` is very simple; it just invokes the `paddle.utils.dump_config` Python module to dump the binary file. The command line usage is:
+
+```bash
+python -m paddle.utils.dump_config YOUR_CONFIG_FILE 'CONFIG_EXTRA_ARGS' --binary > YOUR_CONFIG_FILE.bin
+```
+
+## Initialize Paddle
+
+```c
+char* argv[] = {"--use_gpu=False"};
+paddle_init(1, (char**)argv);
+```
+
+We must initialize the global context before invoking any other Paddle interface. The initialization arguments are the same as the `paddle_trainer` command line arguments; run `paddle train --help` to see the full list. The most important argument is `use_gpu`, which selects between CPU and GPU execution.
+
+## Load network and parameters
+
+```c
+paddle_gradient_machine machine;
+paddle_gradient_machine_create_for_inference(&machine, config_file_content, content_size);
+paddle_gradient_machine_load_parameter_from_disk(machine, "./some_where_to_params");
+```
+
+A gradient machine is the Paddle concept that represents a neural network which can be run forward and backward. For inference, we create a gradient machine and load the parameter files from disk.
+
+Moreover, to run inference from multiple threads, we can create thread-local gradient machines that share the same parameters by using the `paddle_gradient_machine_create_shared_param` API. See the `multi_thread` example, and the sketch at the end of this README.
+
+## Create input
+
+The input of a neural network is an `arguments` object. The examples in this directory show how to construct the different types of inputs for prediction; see `dense`, `sparse_binary`, and `sequence` for details.
+
+## Get the inference result
+
+After invoking `paddle_gradient_machine_forward`, we can read the output of the neural network from the `value` matrix of the output arguments. If the output layer uses `SoftmaxActivation`, the `value` matrix holds the probability of each category for each input sample: the height of the output matrix is the number of samples, and the width is the number of categories.
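+
+## Putting it all together
+
+The following is a minimal sketch that combines the steps above into a single program. It is illustrative only: it assumes a hypothetical network with one 784-dimensional dense input and a 10-category softmax output, fills the input with random numbers, uses placeholder file paths, and omits error checking (every C-API call returns a `paddle_error`) and cleanup for brevity.
+
+```c
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <paddle/capi.h>
+
+int main() {
+  // Initialize Paddle in CPU mode.
+  char* argv[] = {"--use_gpu=False"};
+  paddle_init(1, (char**)argv);
+
+  // Read the protobuf binary produced by convert_protobin.sh into memory.
+  FILE* fp = fopen("./trainer_config.bin", "rb");
+  fseek(fp, 0, SEEK_END);
+  long size = ftell(fp);
+  fseek(fp, 0, SEEK_SET);
+  void* buf = malloc(size);
+  fread(buf, 1, size, fp);
+  fclose(fp);
+
+  // Create the gradient machine and load the trained parameters.
+  paddle_gradient_machine machine;
+  paddle_gradient_machine_create_for_inference(&machine, buf, (int)size);
+  paddle_gradient_machine_load_parameter_from_disk(machine, "./some_where_to_params");
+
+  // Create the input arguments: one dense matrix of shape 1 x 784.
+  paddle_arguments in_args = paddle_arguments_create_none();
+  paddle_arguments_resize(in_args, 1);
+  paddle_matrix mat = paddle_matrix_create(/* height */ 1, /* width */ 784, /* useGpu */ false);
+  paddle_real* row;
+  paddle_matrix_get_row(mat, 0, &row);
+  for (int i = 0; i < 784; ++i) {
+    row[i] = rand() / (paddle_real)RAND_MAX;
+  }
+  paddle_arguments_set_value(in_args, 0, mat);
+
+  // Forward the network and fetch the output value matrix.
+  paddle_arguments out_args = paddle_arguments_create_none();
+  paddle_gradient_machine_forward(machine, in_args, out_args, /* isTrain */ false);
+  paddle_matrix prob = paddle_matrix_create_none();
+  paddle_arguments_get_value(out_args, 0, prob);
+
+  // With a softmax output layer, each row holds one probability per category.
+  paddle_matrix_get_row(prob, 0, &row);
+  for (int i = 0; i < 10; ++i) {
+    printf("%.4f ", row[i]);
+  }
+  printf("\n");
+  return 0;
+}
+```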
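+
+## Multi-threaded inference sketch
+
+Below is a sketch of the thread-local setup mentioned in "Load network and parameters", assuming POSIX threads; check the exact API behavior against `paddle/capi/gradient_machine.h`. The master gradient machine owns the parameters, and each worker thread gets its own machine created with `paddle_gradient_machine_create_shared_param`, so the parameters are kept in memory only once. Each worker would build inputs and call `paddle_gradient_machine_forward` on its local machine exactly as in the example above.
+
+```c
+#include <pthread.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <paddle/capi.h>
+
+#define NUM_THREADS 4
+
+// Each worker runs forward passes on its own thread-local machine;
+// the body is elided here (see the complete example above).
+void* worker(void* arg) {
+  paddle_gradient_machine local = (paddle_gradient_machine)arg;
+  // ... build in_args and call paddle_gradient_machine_forward(local, ...) ...
+  return NULL;
+}
+
+int main() {
+  char* argv[] = {"--use_gpu=False"};
+  paddle_init(1, (char**)argv);
+
+  // Read the protobuf binary, as in the previous example.
+  FILE* fp = fopen("./trainer_config.bin", "rb");
+  fseek(fp, 0, SEEK_END);
+  long size = ftell(fp);
+  fseek(fp, 0, SEEK_SET);
+  void* buf = malloc(size);
+  fread(buf, 1, size, fp);
+  fclose(fp);
+
+  // The master machine owns the parameters.
+  paddle_gradient_machine master;
+  paddle_gradient_machine_create_for_inference(&master, buf, (int)size);
+  paddle_gradient_machine_load_parameter_from_disk(master, "./some_where_to_params");
+
+  // Each thread gets a local machine that shares the master's parameters.
+  pthread_t threads[NUM_THREADS];
+  for (int i = 0; i < NUM_THREADS; ++i) {
+    paddle_gradient_machine local;
+    paddle_gradient_machine_create_shared_param(master, buf, (int)size, &local);
+    pthread_create(&threads[i], NULL, worker, local);
+  }
+  for (int i = 0; i < NUM_THREADS; ++i) {
+    pthread_join(threads[i], NULL);
+  }
+  return 0;
+}
+```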