Python Prediction
=================

PaddlePaddle offers a set of clean prediction interfaces for Python with the help of SWIG. The main steps of predicting values in Python are:

* Parse the training configuration
* Construct a GradientMachine
* Prepare the input data
* Predict

Here is a sample Python script that shows the typical prediction process for the MNIST classification problem. The complete sample code can be found at :code:`src_root/doc/ui/predict/predict_sample.py`.

.. literalinclude:: src/predict_sample.py
    :language: python
    :lines: 15-18,90-100,101-104

The module that does most of the work is py_paddle.swig_paddle. It is generated by SWIG and is fully documented; for more details you can use Python's :code:`help()` function. Let's walk through the above Python script:

* At the beginning, use :code:`swig_paddle.initPaddle()` to initialize PaddlePaddle with command line arguments. For more about command line arguments, see `Command Line Arguments <../cmd_argument/detail_introduction.html>`_.
* Parse the configuration file that was used in training with :code:`parse_config()`. Because the data to predict with usually has no label, and the output of prediction is normally the output layer rather than the cost layer, you should modify the configuration file accordingly before using it for prediction.
* Create the neural network with :code:`swig_paddle.GradientMachine.createFromConfigProto()`, which takes the parsed configuration :code:`conf.model_config` as its argument. Then load the trained parameters from the model with :code:`network.loadParameters()`.
* Create a data converter object with the utility class :code:`DataProviderConverter`.

  - Note: As swig_paddle can only accept C++ matrices, we offer the utility class DataProviderConverter, which accepts the same input data as PyDataProvider2. For more information, please refer to the documentation of `PyDataProvider2 <../data_provider/pydataprovider2.html>`_.

* Do the prediction with :code:`forwardTest()`, which takes the converted input data and outputs the activations of the output layer.

A minimal end-to-end sketch of these steps is given at the end of this section.

Here is a typical output:

.. code-block:: text

    [{'id': None, 'value': array(
        [[  5.53018653e-09,   1.12194102e-05,   1.96644767e-09,
            1.43630644e-02,   1.51111044e-13,   9.85625684e-01,
            2.08823112e-10,   2.32777140e-08,   2.00186201e-09,
            1.15501715e-08],
         [  9.99982715e-01,   1.27787406e-10,   1.72296313e-05,
            1.49316648e-09,   1.36540484e-11,   6.93137714e-10,
            2.70634608e-08,   3.48565123e-08,   5.25639710e-09,
            4.48684503e-08]], dtype=float32)}]

:code:`value` is the output of the output layer. Each row represents the result for the corresponding row in the input data, and each element represents the activation of the corresponding neuron in the output layer.
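To make the workflow concrete, below is a minimal, self-contained sketch that strings the steps above together. It uses only the calls named in this walkthrough (:code:`initPaddle`, :code:`parse_config`, :code:`createFromConfigProto`, :code:`loadParameters`, :code:`DataProviderConverter`, :code:`forwardTest`); the config path :code:`./mnist_model/trainer_config.py`, the parameter directory :code:`./mnist_model/`, the 784-dimensional :code:`dense_vector` input slot, and the all-zero placeholder samples are assumptions for illustration and should be adapted to your own model.

.. code-block:: python

    from py_paddle import swig_paddle, DataProviderConverter
    from paddle.trainer.PyDataProvider2 import dense_vector
    from paddle.trainer.config_parser import parse_config

    # Two placeholder MNIST samples: each sample is a list with one slot,
    # and that slot is a 784-dimensional dense vector (all zeros here).
    TEST_DATA = [[[0.0] * 784], [[0.0] * 784]]


    def main():
        # Parse the configuration file used during training.
        # The path below is an assumed example; point it to your own config.
        conf = parse_config("./mnist_model/trainer_config.py", "")

        # Build the network from the parsed model configuration and load
        # the trained parameters (assumed to live in ./mnist_model/).
        network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config)
        network.loadParameters("./mnist_model/")

        # Convert plain Python lists into the C++ matrices swig_paddle expects.
        converter = DataProviderConverter([dense_vector(784)])
        in_args = converter(TEST_DATA)

        # Forward pass only; returns the activations of the output layer.
        print(network.forwardTest(in_args))


    if __name__ == "__main__":
        # Initialize PaddlePaddle with command line arguments (CPU mode here).
        swig_paddle.initPaddle("--use_gpu=0")
        main()

The structure mirrors the sample referenced above: initialization happens once before any prediction, and the converter is reused for every batch of input data you want to predict.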