..  _api_guide_inference_en:

#################
Inference Engine
#################

The inference engine provides interfaces to save an inference model, :ref:`api_fluid_io_save_inference_model` , and to load an inference model, :ref:`api_fluid_io_load_inference_model` .

Format of Saved Inference Model
=====================================

There are two formats for a saved inference model, controlled by the :code:`model_filename` and :code:`params_filename` parameters of the two interfaces above.

- Parameters are saved into separate files, which is the case when both :code:`model_filename` and :code:`params_filename` are set to :code:`None` (the default)

  .. code-block:: bash

      ls recognize_digits_conv.inference.model/*
      __model__ conv2d_1.w_0 conv2d_2.w_0 fc_1.w_0 conv2d_1.b_0 conv2d_2.b_0 fc_1.b_0

- Parameters are saved into a single file, e.g. with :code:`model_filename` set to :code:`None` and :code:`params_filename` set to :code:`__params__` , as in the sketch after this list

  .. code-block:: bash

      ls recognize_digits_conv.inference.model/*
      __model__ __params__
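
For example, to produce the single-file layout above, pass :code:`params_filename` when calling :code:`fluid.io.save_inference_model` . This is a minimal sketch that assumes an executor :code:`exe` and a network output :code:`predict_var` have already been created, as in the next section:

.. code-block:: python

    import paddle.fluid as fluid

    # Save the program as __model__ and pack all parameters into a single
    # file named __params__ under ./infer_model.
    fluid.io.save_inference_model(dirname="./infer_model",
                                  feeded_var_names=['img'],
                                  target_vars=[predict_var],
                                  executor=exe,
                                  model_filename=None,
                                  params_filename='__params__')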

Save Inference Model
===============================

To save an inference model, we normally use :code:`fluid.io.save_inference_model` , which prunes the default :code:`fluid.Program` and keeps only the parts needed to compute :code:`predict_var` .
After pruning, the :code:`program` is saved under :code:`./infer_model/__model__` , while the parameters are saved into separate files under :code:`./infer_model` .

Sample Code:

.. code-block:: python

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    path = "./infer_model"
    # 'img' names the network's input variable; predict_var is the output
    # variable of the network defined earlier.
    fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
                                  target_vars=[predict_var], executor=exe)
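
For context, here is a minimal, hypothetical sketch of how the :code:`img` input and :code:`predict_var` used above might be defined. The single-layer network is an illustrative assumption, not part of the original example:

.. code-block:: python

    import paddle.fluid as fluid

    # A hypothetical network: 'img' is the variable fed at inference time,
    # predict_var is the variable to be fetched as the prediction.
    img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')
    predict_var = fluid.layers.fc(input=img, size=10, act='softmax')

    exe = fluid.Executor(fluid.CPUPlace())
    # Parameters must be initialized (or trained) before they can be saved.
    exe.run(fluid.default_startup_program())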


Load Inference Model
=====================

.. code-block:: python

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    path = "./infer_model"
    [inference_program, feed_target_names, fetch_targets] = (
        fluid.io.load_inference_model(dirname=path, executor=exe))
    # tensor_img holds the input data, prepared in advance as a numpy array.
    results = exe.run(inference_program,
                      feed={feed_target_names[0]: tensor_img},
                      fetch_list=fetch_targets)

In this example, we first call :code:`fluid.io.load_inference_model` to obtain the inference :code:`inference_program` , the names of the input data ( :code:`feed_target_names` ) and the output variables to fetch ( :code:`fetch_targets` );
then we run :code:`inference_program` with the :code:`executor` to obtain the inference results.
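
The :code:`tensor_img` fed above must match the shape and dtype of the input the model was saved with. A minimal sketch of preparing it with NumPy, assuming the hypothetical 28x28 single-channel input from the earlier sketch:

.. code-block:: python

    import numpy as np

    # A batch of one 28x28 single-channel image (hypothetical shape);
    # random data stands in for a real preprocessed image.
    tensor_img = np.random.rand(1, 1, 28, 28).astype('float32')

:code:`results` is a list containing one NumPy array for each variable in :code:`fetch_targets` .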