From 75d77721cd826fcfdaacbb7e13e4b192761f4a18 Mon Sep 17 00:00:00 2001
From: HongyingG <44694098+HongyingG@users.noreply.github.com>
Date: Wed, 20 Feb 2019 19:43:22 +0800
Subject: [PATCH] inference_en (#599)

* inference_en

* Review
---
 .../api_guides/low_level/inference_en.rst     | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100755 doc/fluid/api_guides/low_level/inference_en.rst

diff --git a/doc/fluid/api_guides/low_level/inference_en.rst b/doc/fluid/api_guides/low_level/inference_en.rst
new file mode 100755
index 000000000..956229375
--- /dev/null
+++ b/doc/fluid/api_guides/low_level/inference_en.rst
@@ -0,0 +1,87 @@
+.. _api_guide_inference_en:
+
+#################
+Inference Engine
+#################
+
+The inference engine provides two interfaces: :ref:`api_fluid_io_save_inference_model` to save an inference model and :ref:`api_fluid_io_load_inference_model` to load one back.
+
+Format of Saved Inference Model
+=====================================
+
+A saved inference model can take one of two formats, which are controlled by the :code:`model_filename` and :code:`params_filename` parameters of the two interfaces above:
+
+- Parameters are saved into separate files, e.g. when :code:`model_filename` is set to :code:`None` and :code:`params_filename` is set to :code:`None` :
+
+  .. code-block:: bash
+
+      ls recognize_digits_conv.inference.model/*
+      __model__ conv2d_1.w_0 conv2d_2.w_0 fc_1.w_0 conv2d_1.b_0 conv2d_2.b_0 fc_1.b_0
+
+- Parameters are saved into a single file, e.g. when :code:`model_filename` is set to :code:`None` and :code:`params_filename` is set to :code:`__params__` :
+
+  .. code-block:: bash
+
+      ls recognize_digits_conv.inference.model/*
+      __model__ __params__
+
+Save Inference Model
+===============================
+
+.. code-block:: python
+
+    import paddle.fluid as fluid
+
+    exe = fluid.Executor(fluid.CPUPlace())
+    path = "./infer_model"
+    # `predict_var` is the output variable of a network built beforehand.
+    fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
+                                  target_vars=[predict_var], executor=exe)
+
+In this example, :code:`fluid.io.save_inference_model` prunes the default :code:`fluid.Program` down to the parts needed to predict :code:`predict_var` .
+The pruned :code:`program` is saved under :code:`./infer_model/__model__` , while the parameters are saved into separate files under :code:`./infer_model` .
+
+Load Inference Model
+=====================
+
+.. code-block:: python
+
+    import paddle.fluid as fluid
+
+    exe = fluid.Executor(fluid.CPUPlace())
+    path = "./infer_model"
+    [inference_program, feed_target_names, fetch_targets] = \
+        fluid.io.load_inference_model(dirname=path, executor=exe)
+    # `tensor_img` is the input data, e.g. a numpy ndarray holding a batch of images.
+    results = exe.run(inference_program,
+                      feed={feed_target_names[0]: tensor_img},
+                      fetch_list=fetch_targets)
+
+In this example, we first call :code:`fluid.io.load_inference_model` to obtain the inference :code:`program` , the names of the input variables to feed, and the output variables to fetch; then we run the inference :code:`program` with the :code:`executor` to get the prediction results.
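+
+Save and Load Parameters in a Single File
+=========================================
+
+The snippet below is a minimal sketch of the single-file format described above, assuming the Fluid 1.x :code:`fluid.layers` API; the one-layer network is only a stand-in for whatever network was trained beforehand. Passing :code:`params_filename` makes :code:`save_inference_model` write all parameters into one :code:`__params__` file, and the same value must be passed to :code:`load_inference_model` when reading the model back:
+
+.. code-block:: python
+
+    import paddle.fluid as fluid
+
+    # A minimal stand-in network: a single fully connected layer over an
+    # 'img' input; any previously trained network can be saved the same way.
+    img = fluid.layers.data(name='img', shape=[784], dtype='float32')
+    predict_var = fluid.layers.fc(input=img, size=10, act='softmax')
+
+    exe = fluid.Executor(fluid.CPUPlace())
+    exe.run(fluid.default_startup_program())  # initialize the parameters
+
+    path = "./infer_model"
+    # Save all parameters into a single `__params__` file.
+    fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
+                                  target_vars=[predict_var], executor=exe,
+                                  params_filename='__params__')
+    # The same filename must be supplied when loading the model back.
+    [inference_program, feed_target_names, fetch_targets] = \
+        fluid.io.load_inference_model(dirname=path, executor=exe,
+                                      params_filename='__params__')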