# Export model

PaddlePaddle supports exporting an inference model for deployment. Compared with the checkpoints saved during training, inference model files store the network weights and network structure persistently, and PaddlePaddle provides faster prediction engines that load the inference model for deployment.

---

## Contents

- [1. Environmental preparation](#1)
- [2. Export classification model](#2)
- [3. Export mainbody detection model](#3)
- [4. Export recognition model](#4)
- [5. Parameter description](#5)


<a name="1"></a>

## 1. Environmental preparation

First, refer to [Installing PaddlePaddle](../installation/install_paddle_en.md) and [Installing PaddleClas](../installation/install_paddleclas_en.md) to prepare the environment.

<a name="2"></a>

## 2. Export classification model

Change the working directory to PaddleClas:

```shell
cd /path/to/PaddleClas
```

Taking the classification model ResNet50_vd as an example, download the pre-trained model:

```shell
wget -P ./cls_pretrain/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/ResNet50_vd_pretrained.pdparams
```

The above weights were trained with the ResNet50_vd model on the ImageNet1k dataset, and the training configuration file is `ppcls/configs/ImageNet/ResNet/ResNet50_vd.yaml`. To export the inference model, just run the following command:

```shell
python tools/export_model.py \
    -c ./ppcls/configs/ImageNet/ResNet/ResNet50_vd.yaml \
    -o Global.pretrained_model=./cls_pretrain/ResNet50_vd_pretrained \
    -o Global.save_inference_dir=./deploy/models/class_ResNet50_vd_ImageNet_infer
```
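
Each `-o key=value` pair in the command above overrides the corresponding dotted key in the YAML configuration. As a rough illustration of how such dotted overrides map onto a nested configuration, here is a minimal sketch (a hypothetical helper, not PaddleClas's actual implementation):

```python
def apply_override(config, override):
    """Apply a single 'Dotted.Key=value' override to a nested config dict."""
    key, _, value = override.partition("=")
    parts = key.split(".")
    node = config
    for part in parts[:-1]:
        # Descend into (or create) intermediate sections such as "Global".
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return config

# Hypothetical config mirroring the export command above.
cfg = {"Global": {"pretrained_model": None, "save_inference_dir": None}}
apply_override(cfg, "Global.pretrained_model=./cls_pretrain/ResNet50_vd_pretrained")
apply_override(cfg, "Global.save_inference_dir=./deploy/models/class_ResNet50_vd_ImageNet_infer")
print(cfg["Global"]["pretrained_model"])
```

Later overrides simply overwrite earlier values for the same key, which is why the same `-o` flag can be repeated on the command line.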

<a name="3"></a>

## 3. Export mainbody detection model

For details on exporting the mainbody detection model, please refer to [mainbody detection](../image_recognition_pipeline/mainbody_detection_en.md).

<a name="4"></a>

## 4. Export recognition model

Change the working directory to PaddleClas:

```shell
cd /path/to/PaddleClas
```

Taking the feature extraction model for product recognition as an example, download the pretrained model:

```shell
wget -P ./product_pretrain/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/pretrain/product_ResNet50_vd_Aliproduct_v1.0_pretrained.pdparams
```

The above weights were trained with the ResNet50_vd model on the AliProduct dataset, and the training configuration file is `ppcls/configs/Products/ResNet50_vd_Aliproduct.yaml`. The command to export the inference model is as follows:

```shell
python3 tools/export_model.py \
    -c ./ppcls/configs/Products/ResNet50_vd_Aliproduct.yaml \
    -o Global.pretrained_model=./product_pretrain/product_ResNet50_vd_Aliproduct_v1.0_pretrained \
    -o Global.save_inference_dir=./deploy/models/product_ResNet50_vd_aliproduct_v1.0_infer
```

Note that the inference model exported by the above command is truncated at the embedding layer, so the model outputs an n-dimensional embedding feature rather than class predictions.
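
Because the recognition model outputs an embedding vector, downstream retrieval typically compares a query embedding against gallery embeddings by cosine similarity. A minimal sketch of that comparison (plain Python with made-up vectors; in a real pipeline the vectors would come from the prediction engine's output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.7, 0.2]      # hypothetical query embedding
gallery = [0.1, 0.68, 0.22]  # hypothetical gallery embedding
print(cosine_similarity(query, gallery))
```

Similar images yield a similarity close to 1.0; retrieval then amounts to ranking gallery embeddings by this score.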

<a name="5"></a>

## 5. Parameter description

In the above export commands, the configuration file used must be the same as the one used for training. The following fields in the configuration file control the model export.

* `Global.image_shape`: The input data size of the model, excluding the batch dimension.
* `Global.save_inference_dir`: The directory where the exported inference model files are saved.
* `Global.pretrained_model`: The path of the model weight file saved during training. This path does not need to include the `.pdparams` suffix of the weight file.

The export command generates the following three files:

* `inference.pdmodel`: Stores the model network structure.
* `inference.pdiparams`: Stores the model network weights.
* `inference.pdiparams.info`: Stores extra parameter information of the model, which can be ignored for classification and recognition models.
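
Before wiring the exported directory into a deployment pipeline, it can be worth checking that the expected files are actually present. A small sketch of such a check (a hypothetical helper, plain Python):

```python
import os

EXPECTED_FILES = ("inference.pdmodel", "inference.pdiparams", "inference.pdiparams.info")

def missing_inference_files(model_dir):
    """Return the expected inference files that are absent from model_dir."""
    return [name for name in EXPECTED_FILES
            if not os.path.isfile(os.path.join(model_dir, name))]

# Hypothetical usage against the directory from the export command above:
# missing = missing_inference_files("./deploy/models/class_ResNet50_vd_ImageNet_infer")
# if missing:
#     raise FileNotFoundError(f"export incomplete, missing: {missing}")
```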

The exported inference model can be deployed with a prediction engine. You can refer to the following docs according to your deployment mode / platform:

* [Python inference](./python_deploy.md)
* [C++ inference](./cpp_deploy.md) (classification only)
* [Python Whl inference](./whl_deploy.md) (classification only)
* [PaddleHub Serving inference](./paddle_hub_serving_deploy.md) (classification only)
* [PaddleServing inference](./paddle_serving_deploy.md)
* [PaddleLite inference](./paddle_lite_deploy.md) (classification only)