diff --git a/README_ch.md b/README_ch.md index b0898ddb6c40ead98336f598ce53e6fdcd7e0941..2ca73fdc5b2c1b1e504cf4ec8eef2d0dcb13deb4 100644 --- a/README_ch.md +++ b/README_ch.md @@ -78,7 +78,7 @@ PP-ShiTu图像识别快速体验:[点击这里](./docs/zh_CN/quick_start/quick - 推理部署 - [基于python预测引擎推理](docs/zh_CN/inference_deployment/python_deploy.md#1) - [基于C++预测引擎推理](docs/zh_CN/inference_deployment/cpp_deploy.md) - - [服务化部署](docs/zh_CN/inference_deployment/paddle_serving_deploy.md) + - [服务化部署](docs/zh_CN/inference_deployment/classification_serving_deploy.md) - [端侧部署](docs/zh_CN/inference_deployment/paddle_lite_deploy.md) - [Paddle2ONNX模型转化与预测](deploy/paddle2onnx/readme.md) - [模型压缩](deploy/slim/README.md) @@ -93,7 +93,7 @@ PP-ShiTu图像识别快速体验:[点击这里](./docs/zh_CN/quick_start/quick - 推理部署 - [基于python预测引擎推理](docs/zh_CN/inference_deployment/python_deploy.md#2) - [基于C++预测引擎推理](deploy/cpp_shitu/readme.md) - - [服务化部署](docs/zh_CN/inference_deployment/paddle_serving_deploy.md) + - [服务化部署](docs/zh_CN/inference_deployment/recognition_serving_deploy.md) - [端侧部署](deploy/lite_shitu/README.md) - PP系列骨干网络模型 - [PP-HGNet](docs/zh_CN/models/PP-HGNet.md) diff --git a/deploy/configs/inference_cls_based_action.yaml b/deploy/configs/inference_cls_based_action.yaml new file mode 100644 index 0000000000000000000000000000000000000000..005301c2ab395277020ef34db644cb1ffc26c2c3 --- /dev/null +++ b/deploy/configs/inference_cls_based_action.yaml @@ -0,0 +1,33 @@ +Global: + infer_imgs: "./images/ImageNet/ILSVRC2012_val_00000010.jpeg" + inference_model_dir: "./models/PPHGNet_tiny_calling_halfbody/" + batch_size: 1 + use_gpu: True + enable_mkldnn: True + cpu_num_threads: 10 + enable_benchmark: True + use_fp16: False + ir_optim: True + use_tensorrt: False + gpu_mem: 8000 + enable_profile: False + +PreProcess: + transform_ops: + - ResizeImage: + resize_short: 224 + - NormalizeImage: + scale: 0.00392157 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + channel_num: 3 + - ToCHWImage: + +PostProcess: + main_indicator: Topk + Topk: + topk: 2 + class_id_map_file: "../dataset/data/phone_label_list.txt" + SavePreLabel: + save_dir: ./pre_label/ diff --git a/deploy/hubserving/readme.md b/deploy/hubserving/readme.md index 6b2b2dd4dd703965f52fa7d16cd6be41672186a9..8506c9e4144b4792a06ff36de6c0f6d4698b40cf 100644 --- a/deploy/hubserving/readme.md +++ b/deploy/hubserving/readme.md @@ -1,83 +1,117 @@ -[English](readme_en.md) | 简体中文 +简体中文 | [English](readme_en.md) -# 基于PaddleHub Serving的服务部署 +# 基于 PaddleHub Serving 的服务部署 -hubserving服务部署配置服务包`clas`下包含3个必选文件,目录如下: -``` -hubserving/clas/ - └─ __init__.py 空文件,必选 - └─ config.json 配置文件,可选,使用配置启动服务时作为参数传入 - └─ module.py 主模块,必选,包含服务的完整逻辑 - └─ params.py 参数文件,必选,包含模型路径、前后处理参数等参数 +PaddleClas 支持通过 PaddleHub 快速进行服务化部署。目前支持图像分类的部署,图像识别的部署敬请期待。 + + +## 目录 +- [1. 简介](#1-简介) +- [2. 准备环境](#2-准备环境) +- [3. 下载推理模型](#3-下载推理模型) +- [4. 安装服务模块](#4-安装服务模块) +- [5. 启动服务](#5-启动服务) + - [5.1 命令行启动](#51-命令行启动) + - [5.2 配置文件启动](#52-配置文件启动) +- [6. 发送预测请求](#6-发送预测请求) +- [7. 自定义修改服务模块](#7-自定义修改服务模块) + + + +## 1. 简介 + +hubserving 服务部署配置服务包 `clas` 下包含 3 个必选文件,目录如下: + +```shell +deploy/hubserving/clas/ +├── __init__.py # 空文件,必选 +├── config.json # 配置文件,可选,使用配置启动服务时作为参数传入 +├── module.py # 主模块,必选,包含服务的完整逻辑 +└── params.py # 参数文件,必选,包含模型路径、前后处理参数等参数 ``` -## 快速启动服务 -### 1. 准备环境 + + +## 2. 
准备环境 ```shell -# 安装paddlehub,请安装2.0版本 -pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple +# 安装 paddlehub,建议安装 2.1.0 版本 +python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple ``` -### 2. 下载推理模型 + + +## 3. 下载推理模型 + 安装服务模块前,需要准备推理模型并放到正确路径,默认模型路径为: -``` -分类推理模型结构文件:PaddleClas/inference/inference.pdmodel -分类推理模型权重文件:PaddleClas/inference/inference.pdiparams -``` + +* 分类推理模型结构文件:`PaddleClas/inference/inference.pdmodel` +* 分类推理模型权重文件:`PaddleClas/inference/inference.pdiparams` **注意**: -* 模型文件路径可在`PaddleClas/deploy/hubserving/clas/params.py`中查看和修改: +* 模型文件路径可在 `PaddleClas/deploy/hubserving/clas/params.py` 中查看和修改: + ```python "inference_model_dir": "../inference/" ``` - 需要注意,模型文件(包括.pdmodel与.pdiparams)名称必须为`inference`。 -* 我们也提供了大量基于ImageNet-1k数据集的预训练模型,模型列表及下载地址详见[模型库概览](../../docs/zh_CN/models/models_intro.md),也可以使用自己训练转换好的模型。 +* 模型文件(包括 `.pdmodel` 与 `.pdiparams`)的名称必须为 `inference`。 +* 我们提供了大量基于 ImageNet-1k 数据集的预训练模型,模型列表及下载地址详见[模型库概览](../../docs/zh_CN/algorithm_introduction/ImageNet_models.md),也可以使用自己训练转换好的模型。 -### 3. 安装服务模块 -针对Linux环境和Windows环境,安装命令如下。 -* 在Linux环境下,安装示例如下: -```shell -cd PaddleClas/deploy -# 安装服务模块: -hub install hubserving/clas/ -``` + +## 4. 安装服务模块 + +* 在 Linux 环境下,安装示例如下: + ```shell + cd PaddleClas/deploy + # 安装服务模块: + hub install hubserving/clas/ + ``` + +* 在 Windows 环境下(文件夹的分隔符为`\`),安装示例如下: + + ```shell + cd PaddleClas\deploy + # 安装服务模块: + hub install hubserving\clas\ + ``` + -* 在Windows环境下(文件夹的分隔符为`\`),安装示例如下: + +## 5. 启动服务 + + + +### 5.1 命令行启动 + +该方式仅支持使用 CPU 预测。启动命令: ```shell -cd PaddleClas\deploy -# 安装服务模块: -hub install hubserving\clas\ +hub serving start \ +--modules clas_system +--port 8866 ``` +这样就完成了一个服务化 API 的部署,使用默认端口号 8866。 -### 4. 启动服务 -#### 方式1. 命令行命令启动(仅支持CPU) -**启动命令:** -```shell -$ hub serving start --modules Module1==Version1 \ - --port XXXX \ - --use_multiprocess \ - --workers \ -``` +**参数说明**: +| 参数 | 用途 | +| ------------------ | ----------------------------------------------------------------------------------------------------------------------------- | +| --modules/-m | [**必选**] PaddleHub Serving 预安装模型,以多个 Module==Version 键值对的形式列出
*`当不指定 Version 时,默认选择最新版本`* | +| --port/-p | [**可选**] 服务端口,默认为 8866 | +| --use_multiprocess | [**可选**] 是否启用并发方式,默认为单进程方式,推荐多核 CPU 机器使用此方式
*`Windows 操作系统只支持单进程方式`* | +| --workers | [**可选**] 在并发方式下指定的并发任务数,默认为 `2*cpu_count-1`,其中 `cpu_count` 为 CPU 核数 | +更多部署细节详见 [PaddleHub Serving模型一键服务部署](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html) -**参数:** -|参数|用途| -|-|-| -|--modules/-m| [**必选**] PaddleHub Serving预安装模型,以多个Module==Version键值对的形式列出
*`当不指定Version时,默认选择最新版本`*| -|--port/-p| [**可选**] 服务端口,默认为8866| -|--use_multiprocess| [**可选**] 是否启用并发方式,默认为单进程方式,推荐多核CPU机器使用此方式
*`Windows操作系统只支持单进程方式`*| -|--workers| [**可选**] 在并发方式下指定的并发任务数,默认为`2*cpu_count-1`,其中`cpu_count`为CPU核数| + +### 5.2 配置文件启动 -如按默认参数启动服务: ```hub serving start -m clas_system``` +该方式仅支持使用 CPU 或 GPU 预测。启动命令: -这样就完成了一个服务化API的部署,使用默认端口号8866。 +```shell +hub serving start -c config.json +``` -#### 方式2. 配置文件启动(支持CPU、GPU) -**启动命令:** -```hub serving start -c config.json``` +其中,`config.json` 格式如下: -其中,`config.json`格式如下: ```json { "modules_info": { @@ -97,92 +131,109 @@ $ hub serving start --modules Module1==Version1 \ } ``` -- `init_args`中的可配参数与`module.py`中的`_initialize`函数接口一致。其中, - - 当`use_gpu`为`true`时,表示使用GPU启动服务。 - - 当`enable_mkldnn`为`true`时,表示使用MKL-DNN加速。 -- `predict_args`中的可配参数与`module.py`中的`predict`函数接口一致。 +**参数说明**: +* `init_args` 中的可配参数与 `module.py` 中的 `_initialize` 函数接口一致。其中, + - 当 `use_gpu` 为 `true` 时,表示使用 GPU 启动服务。 + - 当 `enable_mkldnn` 为 `true` 时,表示使用 MKL-DNN 加速。 +* `predict_args` 中的可配参数与 `module.py` 中的 `predict` 函数接口一致。 -**注意:** -- 使用配置文件启动服务时,其他参数会被忽略。 -- 如果使用GPU预测(即,`use_gpu`置为`true`),则需要在启动服务之前,设置CUDA_VISIBLE_DEVICES环境变量,如:```export CUDA_VISIBLE_DEVICES=0```,否则不用设置。 -- **`use_gpu`不可与`use_multiprocess`同时为`true`**。 -- **`use_gpu`与`enable_mkldnn`同时为`true`时,将忽略`enable_mkldnn`,而使用GPU**。 +**注意**: +* 使用配置文件启动服务时,将使用配置文件中的参数设置,其他命令行参数将被忽略; +* 如果使用 GPU 预测(即,`use_gpu` 置为 `true`),则需要在启动服务之前,设置 `CUDA_VISIBLE_DEVICES` 环境变量来指定所使用的 GPU 卡号,如:`export CUDA_VISIBLE_DEVICES=0`; +* **`use_gpu` 不可与 `use_multiprocess` 同时为 `true`**; +* **`use_gpu` 与 `enable_mkldnn` 同时为 `true` 时,将忽略 `enable_mkldnn`,而使用 GPU**。 + +如使用 GPU 3 号卡启动服务: -如,使用GPU 3号卡启动串联服务: ```shell cd PaddleClas/deploy export CUDA_VISIBLE_DEVICES=3 hub serving start -c hubserving/clas/config.json -``` +``` -## 发送预测请求 -配置好服务端,可使用以下命令发送预测请求,获取预测结果: + +## 6. 发送预测请求 + +配置好服务端后,可使用以下命令发送预测请求,获取预测结果: ```shell cd PaddleClas/deploy -python hubserving/test_hubserving.py server_url image_path -``` - -需要给脚本传递2个必须参数: -- **server_url**:服务地址,格式为 -`http://[ip_address]:[port]/predict/[module_name]` -- **image_path**:测试图像路径,可以是单张图片路径,也可以是图像集合目录路径。 -- **batch_size**:[**可选**] 以`batch_size`大小为单位进行预测,默认为`1`。 -- **resize_short**:[**可选**] 预处理时,按短边调整大小,默认为`256`。 -- **crop_size**:[**可选**] 预处理时,居中裁剪的大小,默认为`224`。 -- **normalize**:[**可选**] 预处理时,是否进行`normalize`,默认为`True`。 -- **to_chw**:[**可选**] 预处理时,是否调整为`CHW`顺序,默认为`True`。 +python3.7 hubserving/test_hubserving.py \ +--server_url http://127.0.0.1:8866/predict/clas_system \ +--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \ +--batch_size 8 +``` +**预测输出** +```log +The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285] +The average time of prediction cost: 2.970 s/image +The average time cost: 3.014 s/image +The average top-1 score: 0.110 +``` -**注意**:如果使用`Transformer`系列模型,如`DeiT_***_384`, `ViT_***_384`等,请注意模型的输入数据尺寸,需要指定`--resize_short=384 --crop_size=384`。 +**脚本参数说明**: +* **server_url**:服务地址,格式为`http://[ip_address]:[port]/predict/[module_name]`。 +* **image_path**:测试图像路径,可以是单张图片路径,也可以是图像集合目录路径。 +* **batch_size**:[**可选**] 以 `batch_size` 大小为单位进行预测,默认为 `1`。 +* **resize_short**:[**可选**] 预处理时,按短边调整大小,默认为 `256`。 +* **crop_size**:[**可选**] 预处理时,居中裁剪的大小,默认为 `224`。 +* **normalize**:[**可选**] 预处理时,是否进行 `normalize`,默认为 `True`。 +* **to_chw**:[**可选**] 预处理时,是否调整为 `CHW` 顺序,默认为 `True`。 +**注意**:如果使用 `Transformer` 系列模型,如 `DeiT_***_384`, `ViT_***_384` 等,请注意模型的输入数据尺寸,需要指定`--resize_short=384 --crop_size=384`。 -访问示例: 
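+
+访问示例(这里假设服务端部署的是输入尺寸为 384 的 Transformer 系列模型,以下参数取值仅作示意,请根据实际部署的模型调整):
+
+```shell
+python3.7 hubserving/test_hubserving.py \
+--server_url http://127.0.0.1:8866/predict/clas_system \
+--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \
+--resize_short=384 \
+--crop_size=384
+```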
+**返回结果格式说明**: +返回结果为列表(list),包含 top-k 个分类结果,以及对应的得分,还有此图片预测耗时,具体如下: ```shell -python hubserving/test_hubserving.py --server_url http://127.0.0.1:8866/predict/clas_system --image_file ./hubserving/ILSVRC2012_val_00006666.JPEG --batch_size 8 -``` - -### 返回结果格式说明 -返回结果为列表(list),包含top-k个分类结果,以及对应的得分,还有此图片预测耗时,具体如下: -``` list: 返回结果 -└─ list: 第一张图片结果 - └─ list: 前k个分类结果,依score递减排序 - └─ list: 前k个分类结果对应的score,依score递减排序 - └─ float: 该图分类耗时,单位秒 +└──list: 第一张图片结果 + ├── list: 前 k 个分类结果,依 score 递减排序 + ├── list: 前 k 个分类结果对应的 score,依 score 递减排序 + └── float: 该图分类耗时,单位秒 ``` -**说明:** 如果需要增加、删除、修改返回字段,可对相应模块进行修改,完整流程参考下一节自定义修改服务模块。 -## 自定义修改服务模块 -如果需要修改服务逻辑,你一般需要操作以下步骤: -- 1、 停止服务 -```hub serving stop --port/-p XXXX``` + +## 7. 自定义修改服务模块 -- 2、 到相应的`module.py`和`params.py`等文件中根据实际需求修改代码。`module.py`修改后需要重新安装(`hub install hubserving/clas/`)并部署。在进行部署前,可通过`python hubserving/clas/module.py`测试已安装服务模块。 +如果需要修改服务逻辑,需要进行以下操作: -- 3、 卸载旧服务包 -```hub uninstall clas_system``` +1. 停止服务 + ```shell + hub serving stop --port/-p XXXX + ``` -- 4、 安装修改后的新服务包 -```hub install hubserving/clas/``` +2. 到相应的 `module.py` 和 `params.py` 等文件中根据实际需求修改代码。`module.py` 修改后需要重新安装(`hub install hubserving/clas/`)并部署。在进行部署前,可先通过 `python3.7 hubserving/clas/module.py` 命令来快速测试准备部署的代码。 -- 5、重新启动服务 -```hub serving start -m clas_system``` +3. 卸载旧服务包 + ```shell + hub uninstall clas_system + ``` + +4. 安装修改后的新服务包 + ```shell + hub install hubserving/clas/ + ``` + +5. 重新启动服务 + ```shell + hub serving start -m clas_system + ``` **注意**: -常用参数可在[params.py](./clas/params.py)中修改: +常用参数可在 `PaddleClas/deploy/hubserving/clas/params.py` 中修改: * 更换模型,需要修改模型文件路径参数: ```python "inference_model_dir": ``` - * 更改后处理时返回的`top-k`结果数量: + * 更改后处理时返回的 `top-k` 结果数量: ```python 'topk': ``` - * 更改后处理时的lable与class id对应映射文件: + * 更改后处理时的 lable 与 class id 对应映射文件: ```python 'class_id_map_file': ``` -为了避免不必要的延时以及能够以batch_size进行预测,数据预处理逻辑(包括resize、crop等操作)在客户端完成,因此需要在[test_hubserving.py](./test_hubserving.py#L35-L52)中修改。 +为了避免不必要的延时以及能够以 batch_size 进行预测,数据预处理逻辑(包括 `resize`、`crop` 等操作)均在客户端完成,因此需要在 [PaddleClas/deploy/hubserving/test_hubserving.py#L41-L47](./test_hubserving.py#L41-L47) 以及 [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](./test_hubserving.py#L51-L76) 中修改数据预处理逻辑相关代码。 diff --git a/deploy/hubserving/readme_en.md b/deploy/hubserving/readme_en.md index bb0ddbd2c3a994b164d8781767b8de38d484b420..6dce5cc52cc32ef41b8f18d5eb772cc44a1661ad 100644 --- a/deploy/hubserving/readme_en.md +++ b/deploy/hubserving/readme_en.md @@ -1,83 +1,116 @@ English | [简体中文](readme.md) -# Service deployment based on PaddleHub Serving +# Service deployment based on PaddleHub Serving + +PaddleClas supports rapid service deployment through PaddleHub. Currently, the deployment of image classification is supported. Please look forward to the deployment of image recognition. + +## Catalogue +- [1 Introduction](#1-introduction) +- [2. Prepare the environment](#2-prepare-the-environment) +- [3. Download the inference model](#3-download-the-inference-model) +- [4. Install the service module](#4-install-the-service-module) +- [5. Start service](#5-start-service) + - [5.1 Start with command line parameters](#51-start-with-command-line-parameters) + - [5.2 Start with configuration file](#52-start-with-configuration-file) +- [6. Send prediction requests](#6-send-prediction-requests) +- [7. 
User defined service module modification](#7-user-defined-service-module-modification) -HubServing service pack contains 3 files, the directory is as follows: -``` -hubserving/clas/ - └─ __init__.py Empty file, required - └─ config.json Configuration file, optional, passed in as a parameter when using configuration to start the service - └─ module.py Main module file, required, contains the complete logic of the service - └─ params.py Parameter file, required, including parameters such as model path, pre- and post-processing parameters -``` -## Quick start service -### 1. Prepare the environment + +## 1 Introduction + +The hubserving service deployment configuration service package `clas` contains 3 required files, the directories are as follows: + ```shell -# Install version 2.0 of PaddleHub -pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple +deploy/hubserving/clas/ +├── __init__.py # Empty file, required +├── config.json # Configuration file, optional, passed in as a parameter when starting the service with configuration +├── module.py # The main module, required, contains the complete logic of the service +└── params.py # Parameter file, required, including model path, pre- and post-processing parameters and other parameters ``` -### 2. Download inference model -Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is: -``` -Model structure file: PaddleClas/inference/inference.pdmodel -Model parameters file: PaddleClas/inference/inference.pdiparams + +## 2. Prepare the environment +```shell +# Install paddlehub, version 2.1.0 is recommended +python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple ``` -* The model file path can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`. - It should be noted that the prefix of model structure file and model parameters file must be `inference`. + +## 3. Download the inference model -* More models provided by PaddleClas can be obtained from the [model library](../../docs/en/models/models_intro_en.md). You can also use models trained by yourself. +Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is: -### 3. Install Service Module +* Classification inference model structure file: `PaddleClas/inference/inference.pdmodel` +* Classification inference model weight file: `PaddleClas/inference/inference.pdiparams` -* On Linux platform, the examples are as follows. -```shell -cd PaddleClas/deploy -hub install hubserving/clas/ -``` +**Notice**: +* Model file paths can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`: + + ```python + "inference_model_dir": "../inference/" + ``` +* Model files (including `.pdmodel` and `.pdiparams`) must be named `inference`. +* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download address, see [Model Library Overview](../../docs/en/algorithm_introduction/ImageNet_models_en.md), or you can use your own trained and converted models. + + + +## 4. 
Install the service module + +* In the Linux environment, the installation example is as follows: + ```shell + cd PaddleClas/deploy + # Install the service module: + hub install hubserving/clas/ + ``` + +* In the Windows environment (the folder separator is `\`), the installation example is as follows: + + ```shell + cd PaddleClas\deploy + # Install the service module: + hub install hubserving\clas\ + ``` + + + +## 5. Start service + + + +### 5.1 Start with command line parameters + +This method only supports prediction using CPU. Start command: -* On Windows platform, the examples are as follows. ```shell -cd PaddleClas\deploy -hub install hubserving\clas\ +hub serving start \ +--modules clas_system +--port 8866 ``` +This completes the deployment of a serviced API, using the default port number 8866. -### 4. Start service -#### Way 1. Start with command line parameters (CPU only) +**Parameter Description**: +| parameters | uses | +| ------------------ | ------------------- | +| --modules/-m | [**required**] PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs
*`When no Version is specified, the latest is selected by default version`* | +| --port/-p | [**OPTIONAL**] Service port, default is 8866 | +| --use_multiprocess | [**Optional**] Whether to enable the concurrent mode, the default is single-process mode, it is recommended to use this mode for multi-core CPU machines
*`Windows operating system only supports single-process mode`* | +| --workers | [**Optional**] The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores | +For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html) -**start command:** -```shell -$ hub serving start --modules Module1==Version1 \ - --port XXXX \ - --use_multiprocess \ - --workers \ -``` -**parameters:** - -|parameters|usage| -|-|-| -|--modules/-m|PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs
*`When Version is not specified, the latest version is selected by default`*| -|--port/-p|Service port, default is 8866| -|--use_multiprocess|Enable concurrent mode, the default is single-process mode, this mode is recommended for multi-core CPU machines
*`Windows operating system only supports single-process mode`*| -|--workers|The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores| - -For example, start the 2-stage series service: -```shell -hub serving start -m clas_system -``` + +### 5.2 Start with configuration file -This completes the deployment of a service API, using the default port number 8866. +This method only supports prediction using CPU or GPU. Start command: -#### Way 2. Start with configuration file(CPU、GPU) -**start command:** ```shell -hub serving start --config/-c config.json -``` -Wherein, the format of `config.json` is as follows: +hub serving start -c config.json +``` + +Among them, the format of `config.json` is as follows: + ```json { "modules_info": { @@ -96,104 +129,110 @@ Wherein, the format of `config.json` is as follows: "workers": 2 } ``` -- The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them, - - when `use_gpu` is `true`, it means that the GPU is used to start the service. - - when `enable_mkldnn` is `true`, it means that use MKL-DNN to accelerate. -- The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`. - -**Note:** -- When using the configuration file to start the service, other parameters will be ignored. -- If you use GPU prediction (that is, `use_gpu` is set to `true`), you need to set the environment variable CUDA_VISIBLE_DEVICES before starting the service, such as: ```export CUDA_VISIBLE_DEVICES=0```, otherwise you do not need to set it. -- **`use_gpu` and `use_multiprocess` cannot be `true` at the same time.** -- **When both `use_gpu` and `enable_mkldnn` are set to `true` at the same time, GPU is used to run and `enable_mkldnn` will be ignored.** - -For example, use GPU card No. 3 to start the 2-stage series service: + +**Parameter Description**: +* The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. in, + - When `use_gpu` is `true`, it means to use GPU to start the service. + - When `enable_mkldnn` is `true`, it means to use MKL-DNN acceleration. +* The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`. + +**Notice**: +* When using the configuration file to start the service, the parameter settings in the configuration file will be used, and other command line parameters will be ignored; +* If you use GPU prediction (ie, `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card number used before starting the service, such as: `export CUDA_VISIBLE_DEVICES=0`; +* **`use_gpu` cannot be `true`** at the same time as `use_multiprocess`; +* ** When both `use_gpu` and `enable_mkldnn` are `true`, `enable_mkldnn` will be ignored and GPU** will be used. + +If you use GPU No. 
3 card to start the service: + ```shell cd PaddleClas/deploy export CUDA_VISIBLE_DEVICES=3 hub serving start -c hubserving/clas/config.json -``` - -## Send prediction requests -After the service starts, you can use the following command to send a prediction request to obtain the prediction result: -```shell -cd PaddleClas/deploy -python hubserving/test_hubserving.py server_url image_path ``` -Two required parameters need to be passed to the script: -- **server_url**: service address,format of which is -`http://[ip_address]:[port]/predict/[module_name]` -- **image_path**: Test image path, can be a single image path or an image directory path -- **batch_size**: [**Optional**] batch_size. Default by `1`. -- **resize_short**: [**Optional**] In preprocessing, resize according to short size. Default by `256`。 -- **crop_size**: [**Optional**] In preprocessing, centor crop size. Default by `224`。 -- **normalize**: [**Optional**] In preprocessing, whether to do `normalize`. Default by `True`。 -- **to_chw**: [**Optional**] In preprocessing, whether to transpose to `CHW`. Default by `True`。 + +## 6. Send prediction requests -**Notice**: -If you want to use `Transformer series models`, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input size of model, and need to set `--resize_short=384`, `--crop_size=384`. +After configuring the server, you can use the following command to send a prediction request to get the prediction result: -**Eg.** ```shell -python hubserving/test_hubserving.py --server_url http://127.0.0.1:8866/predict/clas_system --image_file ./hubserving/ILSVRC2012_val_00006666.JPEG --batch_size 8 -``` - -### Returned result format -The returned result is a list, including the `top_k`'s classification results, corresponding scores and the time cost of prediction, details as follows. - -``` -list: The returned results -└─ list: The result of first picture - └─ list: The top-k classification results, sorted in descending order of score - └─ list: The scores corresponding to the top-k classification results, sorted in descending order of score - └─ float: The time cost of predicting the picture, unit second +cd PaddleClas/deploy +python3.7 hubserving/test_hubserving.py \ +--server_url http://127.0.0.1:8866/predict/clas_system \ +--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \ +--batch_size 8 +``` +**Predicted output** +```log +The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes' , 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285] +The average time of prediction cost: 2.970 s/image +The average time cost: 3.014 s/image +The average top-1 score: 0.110 +``` + +**Script parameter description**: +* **server_url**: Service address, the format is `http://[ip_address]:[port]/predict/[module_name]`. +* **image_path**: The test image path, which can be a single image path or an image collection directory path. +* **batch_size**: [**OPTIONAL**] Make predictions in `batch_size` size, default is `1`. +* **resize_short**: [**optional**] When preprocessing, resize by short edge, default is `256`. +* **crop_size**: [**Optional**] The size of the center crop during preprocessing, the default is `224`. +* **normalize**: [**Optional**] Whether to perform `normalize` during preprocessing, the default is `True`. 
+* **to_chw**: [**Optional**] Whether to adjust to `CHW` order when preprocessing, the default is `True`. + +**Note**: If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input data size of the model, you need to specify `--resize_short=384 -- crop_size=384`. + +**Return result format description**: +The returned result is a list (list), including the top-k classification results, the corresponding scores, and the time-consuming prediction of this image, as follows: +```shell +list: return result +└──list: first image result + ├── list: the top k classification results, sorted in descending order of score + ├── list: the scores corresponding to the first k classification results, sorted in descending order of score + └── float: The image classification time, in seconds ``` -**Note:** If you need to add, delete or modify the returned fields, you can modify the corresponding module. For the details, refer to the user-defined modification service module in the next section. -## User defined service module modification -If you need to modify the service logic, the following steps are generally required: -1. Stop service -```shell -hub serving stop --port/-p XXXX -``` + +## 7. User defined service module modification -2. Modify the code in the corresponding files, like `module.py` and `params.py`, according to the actual needs. You need re-install(hub install hubserving/clas/) and re-deploy after modifing `module.py`. -After modifying and installing and before deploying, you can use `python hubserving/clas/module.py` to test the installed service module. +If you need to modify the service logic, you need to do the following: -For example, if you need to replace the model used by the deployed service, you need to modify model path parameters `cfg.model_file` and `cfg.params_file` in `params.py`. Of course, other related parameters may need to be modified at the same time. Please modify and debug according to the actual situation. - -3. Uninstall old service module -```shell -hub uninstall clas_system -``` +1. Stop the service + ```shell + hub serving stop --port/-p XXXX + ``` -4. Install modified service module -```shell -hub install hubserving/clas/ -``` +2. Go to the corresponding `module.py` and `params.py` and other files to modify the code according to actual needs. `module.py` needs to be reinstalled after modification (`hub install hubserving/clas/`) and deployed. Before deploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code ready for deployment. -5. Restart service -```shell -hub serving start -m clas_system -``` +3. Uninstall the old service pack + ```shell + hub uninstall clas_system + ``` -**Note**: +4. Install the new modified service pack + ```shell + hub install hubserving/clas/ + ``` -Common parameters can be modified in params.py: -* Directory of model files(include model structure file and model parameters file): - ```python - "inference_model_dir": - ``` -* The number of Top-k results returned during post-processing: - ```python - 'topk': - ``` -* Mapping file corresponding to label and class ID during post-processing: - ```python - 'class_id_map_file': - ``` +5. Restart the service + ```shell + hub serving start -m clas_system + ``` -In order to avoid unnecessary delay and be able to predict in batch, the preprocessing (include resize, crop and other) is completed in the client, so modify [test_hubserving.py](./test_hubserving.py#L35-L52) if necessary. 
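+
+For reference, the kind of edit step 2 refers to usually only touches a few fields in `params.py`. A minimal sketch is shown below; the values are placeholders for illustration only and should be replaced with the settings of your own model:
+
+```python
+# hypothetical example values, for illustration only
+"inference_model_dir": "../inference/",   # directory containing inference.pdmodel / inference.pdiparams
+'topk': 5,                                # number of top-k results returned by post-processing
+'class_id_map_file': "../ppcls/utils/imagenet1k_label_list.txt",   # label / class id mapping file
+```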
+**Notice**: +Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py`: + * To replace the model, you need to modify the model file path parameters: + ```python + "inference_model_dir": + ``` + * Change the number of `top-k` results returned when postprocessing: + ```python + 'topk': + ``` + * The mapping file corresponding to the lable and class id when changing the post-processing: + ```python + 'class_id_map_file': + ``` + +In order to avoid unnecessary delay and be able to predict with batch_size, data preprocessing logic (including `resize`, `crop` and other operations) is completed on the client side, so it needs to modify data preprocessing logic related code in [PaddleClas/deploy/hubserving/test_hubserving.py# L41-L47](./test_hubserving.py#L41-L47) and [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](./test_hubserving.py#L51-L76). diff --git a/deploy/lite_shitu/README.md b/deploy/lite_shitu/README.md index 52871c3c16dc9990f9cf23de24b24cb54067cac6..e2a03caedd0d4bf63af96d3541d1a8d021206e52 100644 --- a/deploy/lite_shitu/README.md +++ b/deploy/lite_shitu/README.md @@ -92,9 +92,9 @@ PaddleClas 提供了转换并优化后的推理模型,可以直接参考下方 ```shell # 进入lite_ppshitu目录 cd $PaddleClas/deploy/lite_shitu -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/lite/ppshitu_lite_models_v1.1.tar -tar -xf ppshitu_lite_models_v1.1.tar -rm -f ppshitu_lite_models_v1.1.tar +wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/lite/ppshitu_lite_models_v1.2.tar +tar -xf ppshitu_lite_models_v1.2.tar +rm -f ppshitu_lite_models_v1.2.tar ``` #### 2.1.2 使用其他模型 @@ -162,7 +162,7 @@ git clone https://github.com/PaddlePaddle/PaddleDetection.git # 进入PaddleDetection根目录 cd PaddleDetection # 将预训练模型导出为inference模型 -python tools/export_model.py -c configs/picodet/application/mainbody_detection/picodet_lcnet_x2_5_640_mainbody.yml -o weights=https://paddledet.bj.bcebos.com/models/picodet_lcnet_x2_5_640_mainbody.pdparams --output_dir=inference +python tools/export_model.py -c configs/picodet/application/mainbody_detection/picodet_lcnet_x2_5_640_mainbody.yml -o weights=https://paddledet.bj.bcebos.com/models/picodet_lcnet_x2_5_640_mainbody.pdparams export_post_process=False --output_dir=inference # 将inference模型转化为Paddle-Lite优化模型 paddle_lite_opt --model_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdmodel --param_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdiparams --optimize_out=inference/picodet_lcnet_x2_5_640_mainbody/mainbody_det # 将转好的模型复制到lite_shitu目录下 @@ -183,24 +183,56 @@ cd deploy/lite_shitu **注意**:`--optimize_out` 参数为优化后模型的保存路径,无需加后缀`.nb`;`--model_file` 参数为模型结构信息文件的路径,`--param_file` 参数为模型权重信息文件的路径,请注意文件名。 -### 2.2 将yaml文件转换成json文件 +### 2.2 生成新的检索库 + +由于lite 版本的检索库用的是`faiss1.5.3`版本,与新版本不兼容,因此需要重新生成index库 + +#### 2.2.1 数据及环境配置 + +```shell +# 进入上级目录 +cd .. 
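+# 即回到 PaddleClas/deploy 目录,本小节后续命令默认在该目录下执行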
+# 下载瓶装饮料数据集 +wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar +rm -rf drink_dataset_v1.0.tar +rm -rf drink_dataset_v1.0/index + +# 安装1.5.3版本的faiss +pip install faiss-cpu==1.5.3 + +# 下载通用识别模型,可替换成自己的inference model +wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar +tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar +rm -rf general_PPLCNet_x2_5_lite_v1.0_infer.tar +``` + +#### 2.2.2 生成新的index文件 + +```shell +# 生成新的index库,注意指定好识别模型的路径,同时将index_mothod修改成Flat,HNSW32和IVF在此版本中可能存在bug,请慎重使用。 +# 如果使用自己的识别模型,对应的修改inference model的目录 +python python/build_gallery.py -c configs/inference_drink.yaml -o Global.rec_inference_model_dir=general_PPLCNet_x2_5_lite_v1.0_infer -o IndexProcess.index_method=Flat + +# 进入到lite_shitu目录 +cd lite_shitu +mv ../drink_dataset_v1.0 . +``` + +### 2.3 将yaml文件转换成json文件 ```shell # 如果测试单张图像 -python generate_json_config.py --det_model_path ppshitu_lite_models_v1.1/mainbody_PPLCNet_x2_5_640_quant_v1.1_lite.nb --rec_model_path ppshitu_lite_models_v1.1/general_PPLCNet_x2_5_lite_v1.1_infer.nb --img_path images/demo.jpg +python generate_json_config.py --det_model_path ppshitu_lite_models_v1.2/mainbody_PPLCNet_x2_5_640_v1.2_lite.nb --rec_model_path ppshitu_lite_models_v1.2/general_PPLCNet_x2_5_lite_v1.2_infer.nb --img_path images/demo.jpeg # or # 如果测试多张图像 -python generate_json_config.py --det_model_path ppshitu_lite_models_v1.1/mainbody_PPLCNet_x2_5_640_quant_v1.1_lite.nb --rec_model_path ppshitu_lite_models_v1.1/general_PPLCNet_x2_5_lite_v1.1_infer.nb --img_dir images +python generate_json_config.py --det_model_path ppshitu_lite_models_v1.2/mainbody_PPLCNet_x2_5_640_v1.2_lite.nb --rec_model_path ppshitu_lite_models_v1.2/general_PPLCNet_x2_5_lite_v1.2_infer.nb --img_dir images # 执行完成后,会在lit_shitu下生成shitu_config.json配置文件 ``` -### 2.3 index字典转换 +### 2.4 index字典转换 由于python的检索库字典,使用`pickle`进行的序列化存储,导致C++不方便读取,因此需要进行转换 ```shell -# 下载瓶装饮料数据集 -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar -rm -rf drink_dataset_v1.0.tar # 转化id_map.pkl为id_map.txt python transform_id_map.py -c ../configs/inference_drink.yaml @@ -208,7 +240,7 @@ python transform_id_map.py -c ../configs/inference_drink.yaml 转换成功后,会在`IndexProcess.index_dir`目录下生成`id_map.txt`。 -### 2.4 与手机联调 +### 2.5 与手机联调 首先需要进行一些准备工作。 1. 
准备一台arm8的安卓手机,如果编译的预测库是armv7,则需要arm7的手机,并修改Makefile中`ARM_ABI=arm7`。 @@ -308,8 +340,9 @@ chmod 777 pp_shitu 运行效果如下: ``` -images/demo.jpg: - result0: bbox[253, 275, 1146, 872], score: 0.974196, label: 伊藤园_果蔬汁 +images/demo.jpeg: + result0: bbox[344, 98, 527, 593], score: 0.811656, label: 红牛-强化型 + result1: bbox[0, 0, 600, 600], score: 0.729664, label: 红牛-强化型 ``` ## FAQ diff --git a/deploy/lite_shitu/images/demo.jpeg b/deploy/lite_shitu/images/demo.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..2ef10aae5f7f5ce515cb51f857b66c6195f6664b Binary files /dev/null and b/deploy/lite_shitu/images/demo.jpeg differ diff --git a/deploy/lite_shitu/images/demo.jpg b/deploy/lite_shitu/images/demo.jpg deleted file mode 100644 index 075dc31d4e6b407b792cc8abca82dcd541be8d11..0000000000000000000000000000000000000000 Binary files a/deploy/lite_shitu/images/demo.jpg and /dev/null differ diff --git a/deploy/lite_shitu/include/config_parser.h b/deploy/lite_shitu/include/config_parser.h index dca0e5a6898a219932ee978591d89c7624988f3f..2bed92059c41be344b863d5b8a81f9367dfa48fc 100644 --- a/deploy/lite_shitu/include/config_parser.h +++ b/deploy/lite_shitu/include/config_parser.h @@ -29,16 +29,16 @@ namespace PPShiTu { -void load_jsonf(std::string jsonfile, Json::Value& jsondata); +void load_jsonf(std::string jsonfile, Json::Value &jsondata); // Inference model configuration parser class ConfigPaser { - public: +public: ConfigPaser() {} ~ConfigPaser() {} - bool load_config(const Json::Value& config) { + bool load_config(const Json::Value &config) { // Get model arch : YOLO, SSD, RetinaNet, RCNN, Face if (config["Global"].isMember("det_arch")) { @@ -89,4 +89,4 @@ class ConfigPaser { std::vector fpn_stride_; }; -} // namespace PPShiTu +} // namespace PPShiTu diff --git a/deploy/lite_shitu/include/feature_extractor.h b/deploy/lite_shitu/include/feature_extractor.h index 1961459ecfab149695890df60cef550ed5177b52..6ef5cae13fe8b9a3724d88a30e562f5d091efc89 100644 --- a/deploy/lite_shitu/include/feature_extractor.h +++ b/deploy/lite_shitu/include/feature_extractor.h @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -48,10 +49,6 @@ public: config_file["Global"]["rec_model_path"].as()); this->predictor = CreatePaddlePredictor(config); - if (config_file["Global"]["rec_label_path"].as().empty()) { - std::cout << "Please set [rec_label_path] in config file" << std::endl; - exit(-1); - } SetPreProcessParam(config_file["RecPreProcess"]["transform_ops"]); printf("feature extract model create!\n"); } @@ -68,24 +65,29 @@ public: this->mean.emplace_back(tmp.as()); } for (auto tmp : item["std"]) { - this->std.emplace_back(1 / tmp.as()); + this->std.emplace_back(tmp.as()); } this->scale = item["scale"].as(); } } } - void RunRecModel(const cv::Mat &img, double &cost_time, std::vector &feature); - //void PostProcess(std::vector &feature); - cv::Mat ResizeImage(const cv::Mat &img); - void NeonMeanScale(const float *din, float *dout, int size); + void RunRecModel(const cv::Mat &img, double &cost_time, + std::vector &feature); + // void PostProcess(std::vector &feature); + void FeatureNorm(std::vector &featuer); private: std::shared_ptr predictor; - //std::vector label_list; + // std::vector label_list; std::vector mean = {0.485f, 0.456f, 0.406f}; - std::vector std = {1 / 0.229f, 1 / 0.224f, 1 / 0.225f}; + std::vector std = {0.229f, 0.224f, 0.225f}; double scale = 0.00392157; - float size = 224; + int size = 224; + + // pre-process + Resize resize_op_; + NormalizeImage normalize_op_; + 
Permute permute_op_; }; } // namespace PPShiTu diff --git a/deploy/lite_shitu/include/object_detector.h b/deploy/lite_shitu/include/object_detector.h index 779cc89d197ece9115788ac25b9fa5d4c862e2fe..83212d7ea3ec2a93cf410c5930951709ab6d031d 100644 --- a/deploy/lite_shitu/include/object_detector.h +++ b/deploy/lite_shitu/include/object_detector.h @@ -16,24 +16,24 @@ #include #include +#include #include #include #include -#include +#include "json/json.h" #include #include #include -#include "json/json.h" -#include "paddle_api.h" // NOLINT +#include "paddle_api.h" // NOLINT #include "include/config_parser.h" +#include "include/picodet_postprocess.h" #include "include/preprocess_op.h" #include "include/utils.h" -#include "include/picodet_postprocess.h" -using namespace paddle::lite_api; // NOLINT +using namespace paddle::lite_api; // NOLINT namespace PPShiTu { @@ -41,53 +41,51 @@ namespace PPShiTu { std::vector GenerateColorMap(int num_class); // Visualiztion Detection Result -cv::Mat VisualizeResult(const cv::Mat& img, - const std::vector& results, - const std::vector& lables, - const std::vector& colormap, - const bool is_rbox); +cv::Mat VisualizeResult(const cv::Mat &img, + const std::vector &results, + const std::vector &lables, + const std::vector &colormap, const bool is_rbox); class ObjectDetector { - public: - explicit ObjectDetector(const Json::Value& config, - const std::string& model_dir, - int cpu_threads = 1, +public: + explicit ObjectDetector(const Json::Value &config, + const std::string &model_dir, int cpu_threads = 1, const int batch_size = 1) { config_.load_config(config); printf("config created\n"); preprocessor_.Init(config_.preprocess_info_); printf("before object detector\n"); - if(config["Global"]["det_model_path"].as().empty()){ - std::cout << "Please set [det_model_path] in config file" << std::endl; - exit(-1); + if (config["Global"]["det_model_path"].as().empty()) { + std::cout << "Please set [det_model_path] in config file" << std::endl; + exit(-1); } - LoadModel(config["Global"]["det_model_path"].as(), cpu_threads); - printf("create object detector\n"); } + LoadModel(config["Global"]["det_model_path"].as(), + cpu_threads); + printf("create object detector\n"); + } // Load Paddle inference model void LoadModel(std::string model_file, int num_theads); // Run predictor - void Predict(const std::vector& imgs, - const int warmup = 0, + void Predict(const std::vector &imgs, const int warmup = 0, const int repeats = 1, - std::vector* result = nullptr, - std::vector* bbox_num = nullptr, - std::vector* times = nullptr); + std::vector *result = nullptr, + std::vector *bbox_num = nullptr, + std::vector *times = nullptr); // Get Model Label list - const std::vector& GetLabelList() const { + const std::vector &GetLabelList() const { return config_.label_list_; } - private: +private: // Preprocess image and copy data to input buffer - void Preprocess(const cv::Mat& image_mat); + void Preprocess(const cv::Mat &image_mat); // Postprocess result void Postprocess(const std::vector mats, - std::vector* result, - std::vector bbox_num, - bool is_rbox); + std::vector *result, + std::vector bbox_num, bool is_rbox); std::shared_ptr predictor_; Preprocessor preprocessor_; @@ -96,7 +94,6 @@ class ObjectDetector { std::vector out_bbox_num_data_; float threshold_; ConfigPaser config_; - }; -} // namespace PPShiTu +} // namespace PPShiTu diff --git a/deploy/lite_shitu/include/picodet_postprocess.h b/deploy/lite_shitu/include/picodet_postprocess.h index 
758795b5fbfba459ce6b7ea2d24c04c40eddd543..72dfd1b88af8296d81b56cbbc220dac61e8fa2b9 100644 --- a/deploy/lite_shitu/include/picodet_postprocess.h +++ b/deploy/lite_shitu/include/picodet_postprocess.h @@ -14,25 +14,23 @@ #pragma once -#include -#include -#include -#include #include +#include #include +#include +#include +#include #include "include/utils.h" namespace PPShiTu { -void PicoDetPostProcess(std::vector* results, - std::vector outs, - std::vector fpn_stride, - std::vector im_shape, - std::vector scale_factor, - float score_threshold = 0.3, - float nms_threshold = 0.5, - int num_class = 80, - int reg_max = 7); +void PicoDetPostProcess(std::vector *results, + std::vector outs, + std::vector fpn_stride, + std::vector im_shape, + std::vector scale_factor, + float score_threshold = 0.3, float nms_threshold = 0.5, + int num_class = 80, int reg_max = 7); -} // namespace PPShiTu +} // namespace PPShiTu diff --git a/deploy/lite_shitu/include/preprocess_op.h b/deploy/lite_shitu/include/preprocess_op.h index f7050fa86951bfe80aa4030adabc11ff43f82371..b7ed5e878d77c6aae15e399e57f8384e7a6d19fb 100644 --- a/deploy/lite_shitu/include/preprocess_op.h +++ b/deploy/lite_shitu/include/preprocess_op.h @@ -21,16 +21,16 @@ #include #include +#include "json/json.h" #include #include #include -#include "json/json.h" namespace PPShiTu { // Object for storing all preprocessed data class ImageBlob { - public: +public: // image width and height std::vector im_shape_; // Buffer for image data after preprocessing @@ -45,20 +45,20 @@ class ImageBlob { // Abstraction of preprocessing opration class class PreprocessOp { - public: - virtual void Init(const Json::Value& item) = 0; - virtual void Run(cv::Mat* im, ImageBlob* data) = 0; +public: + virtual void Init(const Json::Value &item) = 0; + virtual void Run(cv::Mat *im, ImageBlob *data) = 0; }; class InitInfo : public PreprocessOp { - public: - virtual void Init(const Json::Value& item) {} - virtual void Run(cv::Mat* im, ImageBlob* data); +public: + virtual void Init(const Json::Value &item) {} + virtual void Run(cv::Mat *im, ImageBlob *data); }; class NormalizeImage : public PreprocessOp { - public: - virtual void Init(const Json::Value& item) { +public: + virtual void Init(const Json::Value &item) { mean_.clear(); scale_.clear(); for (auto tmp : item["mean"]) { @@ -70,9 +70,11 @@ class NormalizeImage : public PreprocessOp { is_scale_ = item["is_scale"].as(); } - virtual void Run(cv::Mat* im, ImageBlob* data); + virtual void Run(cv::Mat *im, ImageBlob *data); + void Run_feature(cv::Mat *im, const std::vector &mean, + const std::vector &std, float scale); - private: +private: // CHW or HWC std::vector mean_; std::vector scale_; @@ -80,14 +82,15 @@ class NormalizeImage : public PreprocessOp { }; class Permute : public PreprocessOp { - public: - virtual void Init(const Json::Value& item) {} - virtual void Run(cv::Mat* im, ImageBlob* data); +public: + virtual void Init(const Json::Value &item) {} + virtual void Run(cv::Mat *im, ImageBlob *data); + void Run_feature(const cv::Mat *im, float *data); }; class Resize : public PreprocessOp { - public: - virtual void Init(const Json::Value& item) { +public: + virtual void Init(const Json::Value &item) { interp_ = item["interp"].as(); // max_size_ = item["target_size"].as(); keep_ratio_ = item["keep_ratio"].as(); @@ -98,11 +101,13 @@ class Resize : public PreprocessOp { } // Compute best resize scale for x-dimension, y-dimension - std::pair GenerateScale(const cv::Mat& im); + std::pair GenerateScale(const cv::Mat &im); - 
virtual void Run(cv::Mat* im, ImageBlob* data); + virtual void Run(cv::Mat *im, ImageBlob *data); + void Run_feature(const cv::Mat &img, cv::Mat &resize_img, int max_size_len, + int size = 0); - private: +private: int interp_; bool keep_ratio_; std::vector target_size_; @@ -111,46 +116,43 @@ class Resize : public PreprocessOp { // Models with FPN need input shape % stride == 0 class PadStride : public PreprocessOp { - public: - virtual void Init(const Json::Value& item) { +public: + virtual void Init(const Json::Value &item) { stride_ = item["stride"].as(); } - virtual void Run(cv::Mat* im, ImageBlob* data); + virtual void Run(cv::Mat *im, ImageBlob *data); - private: +private: int stride_; }; class TopDownEvalAffine : public PreprocessOp { - public: - virtual void Init(const Json::Value& item) { +public: + virtual void Init(const Json::Value &item) { trainsize_.clear(); for (auto tmp : item["trainsize"]) { trainsize_.emplace_back(tmp.as()); } } - virtual void Run(cv::Mat* im, ImageBlob* data); + virtual void Run(cv::Mat *im, ImageBlob *data); - private: +private: int interp_ = 1; std::vector trainsize_; }; -void CropImg(cv::Mat& img, - cv::Mat& crop_img, - std::vector& area, - std::vector& center, - std::vector& scale, +void CropImg(cv::Mat &img, cv::Mat &crop_img, std::vector &area, + std::vector ¢er, std::vector &scale, float expandratio = 0.15); class Preprocessor { - public: - void Init(const Json::Value& config_node) { +public: + void Init(const Json::Value &config_node) { // initialize image info at first ops_["InitInfo"] = std::make_shared(); - for (const auto& item : config_node) { + for (const auto &item : config_node) { auto op_name = item["type"].as(); ops_[op_name] = CreateOp(op_name); @@ -158,7 +160,7 @@ class Preprocessor { } } - std::shared_ptr CreateOp(const std::string& name) { + std::shared_ptr CreateOp(const std::string &name) { if (name == "DetResize") { return std::make_shared(); } else if (name == "DetPermute") { @@ -176,13 +178,13 @@ class Preprocessor { return nullptr; } - void Run(cv::Mat* im, ImageBlob* data); + void Run(cv::Mat *im, ImageBlob *data); - public: +public: static const std::vector RUN_ORDER; - private: +private: std::unordered_map> ops_; }; -} // namespace PPShiTu +} // namespace PPShiTu diff --git a/deploy/lite_shitu/include/utils.h b/deploy/lite_shitu/include/utils.h index a3b57c882561577defff97e384fb775b78204f36..b23cae31898f92cb55cf240ecc0bb544dba6bb05 100644 --- a/deploy/lite_shitu/include/utils.h +++ b/deploy/lite_shitu/include/utils.h @@ -38,6 +38,23 @@ struct ObjectResult { std::vector rec_result; }; -void nms(std::vector &input_boxes, float nms_threshold, bool rec_nms=false); +void nms(std::vector &input_boxes, float nms_threshold, + bool rec_nms = false); +template +static inline bool SortScorePairDescend(const std::pair &pair1, + const std::pair &pair2) { + return pair1.first > pair2.first; +} + +float RectOverlap(const ObjectResult &a, const ObjectResult &b); + +inline void +GetMaxScoreIndex(const std::vector &det_result, + const float threshold, + std::vector> &score_index_vec); + +void NMSBoxes(const std::vector det_result, + const float score_threshold, const float nms_threshold, + std::vector &indices); } // namespace PPShiTu diff --git a/deploy/lite_shitu/include/vector_search.h b/deploy/lite_shitu/include/vector_search.h index 89ef7733ab86c534a5c507cb4f87c9d4597dba15..49c95cc35edf9248d56b7a3a660285698cd6df8b 100644 --- a/deploy/lite_shitu/include/vector_search.h +++ b/deploy/lite_shitu/include/vector_search.h @@ -70,4 +70,4 @@ 
private: std::vector I; SearchResult sr; }; -} +} // namespace PPShiTu diff --git a/deploy/lite_shitu/src/config_parser.cc b/deploy/lite_shitu/src/config_parser.cc index d98b2f90f0a860189b8b3b12e9ffd5646dae1d24..09f09f782c93cdfc6fd5d41b97a630cbbafa5917 100644 --- a/deploy/lite_shitu/src/config_parser.cc +++ b/deploy/lite_shitu/src/config_parser.cc @@ -29,4 +29,4 @@ void load_jsonf(std::string jsonfile, Json::Value &jsondata) { } } -} // namespace PPShiTu +} // namespace PPShiTu diff --git a/deploy/lite_shitu/src/feature_extractor.cc b/deploy/lite_shitu/src/feature_extractor.cc index aca5c1cbbe5c70cd214c922609831e9350be28a0..67940f011eb9399aadc0aa5f38ad8d8dde197aa0 100644 --- a/deploy/lite_shitu/src/feature_extractor.cc +++ b/deploy/lite_shitu/src/feature_extractor.cc @@ -13,24 +13,29 @@ // limitations under the License. #include "include/feature_extractor.h" +#include +#include namespace PPShiTu { -void FeatureExtract::RunRecModel(const cv::Mat &img, - double &cost_time, +void FeatureExtract::RunRecModel(const cv::Mat &img, double &cost_time, std::vector &feature) { // Read img - cv::Mat resize_image = ResizeImage(img); - cv::Mat img_fp; - resize_image.convertTo(img_fp, CV_32FC3, scale); + this->resize_op_.Run_feature(img, img_fp, this->size, this->size); + this->normalize_op_.Run_feature(&img_fp, this->mean, this->std, this->scale); + std::vector input(1 * 3 * img_fp.rows * img_fp.cols, 0.0f); + this->permute_op_.Run_feature(&img_fp, input.data()); // Prepare input data from image std::unique_ptr input_tensor(std::move(this->predictor->GetInput(0))); - input_tensor->Resize({1, 3, img_fp.rows, img_fp.cols}); + input_tensor->Resize({1, 3, this->size, this->size}); auto *data0 = input_tensor->mutable_data(); - const float *dimg = reinterpret_cast(img_fp.data); - NeonMeanScale(dimg, data0, img_fp.rows * img_fp.cols); + // const float *dimg = reinterpret_cast(img_fp.data); + // NeonMeanScale(dimg, data0, img_fp.rows * img_fp.cols); + for (int i = 0; i < input.size(); ++i) { + data0[i] = input[i]; + } auto start = std::chrono::system_clock::now(); // Run predictor @@ -38,7 +43,7 @@ void FeatureExtract::RunRecModel(const cv::Mat &img, // Get output and post process std::unique_ptr output_tensor( - std::move(this->predictor->GetOutput(0))); //only one output + std::move(this->predictor->GetOutput(0))); // only one output auto end = std::chrono::system_clock::now(); auto duration = std::chrono::duration_cast(end - start); @@ -46,7 +51,7 @@ void FeatureExtract::RunRecModel(const cv::Mat &img, std::chrono::microseconds::period::num / std::chrono::microseconds::period::den; - //do postprocess + // do postprocess int output_size = 1; for (auto dim : output_tensor->shape()) { output_size *= dim; @@ -54,63 +59,15 @@ void FeatureExtract::RunRecModel(const cv::Mat &img, feature.resize(output_size); output_tensor->CopyToCpu(feature.data()); - //postprocess include sqrt or binarize. - //PostProcess(feature); + // postprocess include sqrt or binarize. 
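+  // FeatureNorm L2-normalizes the output embedding (each element is divided
+  // by the square root of the vector's inner product with itself); see its
+  // definition at the end of this file.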
+ FeatureNorm(feature); return; } -// void FeatureExtract::PostProcess(std::vector &feature){ -// float feature_sqrt = std::sqrt(std::inner_product( -// feature.begin(), feature.end(), feature.begin(), 0.0f)); -// for (int i = 0; i < feature.size(); ++i) -// feature[i] /= feature_sqrt; -// } - -void FeatureExtract::NeonMeanScale(const float *din, float *dout, int size) { - - if (this->mean.size() != 3 || this->std.size() != 3) { - std::cerr << "[ERROR] mean or scale size must equal to 3\n"; - exit(1); - } - float32x4_t vmean0 = vdupq_n_f32(mean[0]); - float32x4_t vmean1 = vdupq_n_f32(mean[1]); - float32x4_t vmean2 = vdupq_n_f32(mean[2]); - float32x4_t vscale0 = vdupq_n_f32(std[0]); - float32x4_t vscale1 = vdupq_n_f32(std[1]); - float32x4_t vscale2 = vdupq_n_f32(std[2]); - - float *dout_c0 = dout; - float *dout_c1 = dout + size; - float *dout_c2 = dout + size * 2; - - int i = 0; - for (; i < size - 3; i += 4) { - float32x4x3_t vin3 = vld3q_f32(din); - float32x4_t vsub0 = vsubq_f32(vin3.val[0], vmean0); - float32x4_t vsub1 = vsubq_f32(vin3.val[1], vmean1); - float32x4_t vsub2 = vsubq_f32(vin3.val[2], vmean2); - float32x4_t vs0 = vmulq_f32(vsub0, vscale0); - float32x4_t vs1 = vmulq_f32(vsub1, vscale1); - float32x4_t vs2 = vmulq_f32(vsub2, vscale2); - vst1q_f32(dout_c0, vs0); - vst1q_f32(dout_c1, vs1); - vst1q_f32(dout_c2, vs2); - - din += 12; - dout_c0 += 4; - dout_c1 += 4; - dout_c2 += 4; - } - for (; i < size; i++) { - *(dout_c0++) = (*(din++) - this->mean[0]) * this->std[0]; - *(dout_c1++) = (*(din++) - this->mean[1]) * this->std[1]; - *(dout_c2++) = (*(din++) - this->mean[2]) * this->std[2]; - } -} - -cv::Mat FeatureExtract::ResizeImage(const cv::Mat &img) { - cv::Mat resize_img; - cv::resize(img, resize_img, cv::Size(this->size, this->size)); - return resize_img; -} +void FeatureExtract::FeatureNorm(std::vector &feature) { + float feature_sqrt = std::sqrt(std::inner_product( + feature.begin(), feature.end(), feature.begin(), 0.0f)); + for (int i = 0; i < feature.size(); ++i) + feature[i] /= feature_sqrt; } +} // namespace PPShiTu diff --git a/deploy/lite_shitu/src/main.cc b/deploy/lite_shitu/src/main.cc index 3f278dc778701a7a7591e74336e0f86fe52105ea..fb516c297d83c438b1b5df88732ad386377c781f 100644 --- a/deploy/lite_shitu/src/main.cc +++ b/deploy/lite_shitu/src/main.cc @@ -27,6 +27,7 @@ #include "include/feature_extractor.h" #include "include/object_detector.h" #include "include/preprocess_op.h" +#include "include/utils.h" #include "include/vector_search.h" #include "json/json.h" @@ -158,6 +159,11 @@ int main(int argc, char **argv) { << " [image_dir]>" << std::endl; return -1; } + + float rec_nms_threshold = 0.05; + if (RT_Config["Global"]["rec_nms_thresold"].isDouble()) + rec_nms_threshold = RT_Config["Global"]["rec_nms_thresold"].as(); + // Load model and create a object detector PPShiTu::ObjectDetector det( RT_Config, RT_Config["Global"]["det_model_path"].as(), @@ -174,6 +180,7 @@ int main(int argc, char **argv) { // for vector search std::vector feature; std::vector features; + std::vector indeices; double rec_time; if (!RT_Config["Global"]["infer_imgs"].as().empty() || !img_dir.empty()) { @@ -208,9 +215,9 @@ int main(int argc, char **argv) { RT_Config["Global"]["max_det_results"].as(), false, &det); // add the whole image for recognition to improve recall -// PPShiTu::ObjectResult result_whole_img = { -// {0, 0, srcimg.cols, srcimg.rows}, 0, 1.0}; -// det_result.push_back(result_whole_img); + PPShiTu::ObjectResult result_whole_img = { + {0, 0, srcimg.cols, srcimg.rows}, 0, 1.0}; + 
det_result.push_back(result_whole_img); // get rec result PPShiTu::SearchResult search_result; @@ -225,10 +232,18 @@ int main(int argc, char **argv) { // do vectore search search_result = searcher.Search(features.data(), det_result.size()); + for (int i = 0; i < det_result.size(); ++i) { + det_result[i].confidence = search_result.D[search_result.return_k * i]; + } + NMSBoxes(det_result, searcher.GetThreshold(), rec_nms_threshold, + indeices); PrintResult(img_path, det_result, searcher, search_result); batch_imgs.clear(); det_result.clear(); + features.clear(); + feature.clear(); + indeices.clear(); } } return 0; diff --git a/deploy/lite_shitu/src/object_detector.cc b/deploy/lite_shitu/src/object_detector.cc index ffea31bb9d76b1dd90eed2a90cd066b0edb20057..18388f7a5b0a4fd7b63c37269bd4eea81aad6db1 100644 --- a/deploy/lite_shitu/src/object_detector.cc +++ b/deploy/lite_shitu/src/object_detector.cc @@ -13,9 +13,9 @@ // limitations under the License. #include // for setprecision +#include "include/object_detector.h" #include #include -#include "include/object_detector.h" namespace PPShiTu { @@ -30,10 +30,10 @@ void ObjectDetector::LoadModel(std::string model_file, int num_theads) { } // Visualiztion MaskDetector results -cv::Mat VisualizeResult(const cv::Mat& img, - const std::vector& results, - const std::vector& lables, - const std::vector& colormap, +cv::Mat VisualizeResult(const cv::Mat &img, + const std::vector &results, + const std::vector &lables, + const std::vector &colormap, const bool is_rbox = false) { cv::Mat vis_img = img.clone(); for (int i = 0; i < results.size(); ++i) { @@ -75,24 +75,18 @@ cv::Mat VisualizeResult(const cv::Mat& img, origin.y = results[i].rect[1]; // Configure text background - cv::Rect text_back = cv::Rect(results[i].rect[0], - results[i].rect[1] - text_size.height, - text_size.width, - text_size.height); + cv::Rect text_back = + cv::Rect(results[i].rect[0], results[i].rect[1] - text_size.height, + text_size.width, text_size.height); // Draw text, and background cv::rectangle(vis_img, text_back, roi_color, -1); - cv::putText(vis_img, - text, - origin, - font_face, - font_scale, - cv::Scalar(255, 255, 255), - thickness); + cv::putText(vis_img, text, origin, font_face, font_scale, + cv::Scalar(255, 255, 255), thickness); } return vis_img; } -void ObjectDetector::Preprocess(const cv::Mat& ori_im) { +void ObjectDetector::Preprocess(const cv::Mat &ori_im) { // Clone the image : keep the original mat for postprocess cv::Mat im = ori_im.clone(); // cv::cvtColor(im, im, cv::COLOR_BGR2RGB); @@ -100,7 +94,7 @@ void ObjectDetector::Preprocess(const cv::Mat& ori_im) { } void ObjectDetector::Postprocess(const std::vector mats, - std::vector* result, + std::vector *result, std::vector bbox_num, bool is_rbox = false) { result->clear(); @@ -156,12 +150,11 @@ void ObjectDetector::Postprocess(const std::vector mats, } } -void ObjectDetector::Predict(const std::vector& imgs, - const int warmup, +void ObjectDetector::Predict(const std::vector &imgs, const int warmup, const int repeats, - std::vector* result, - std::vector* bbox_num, - std::vector* times) { + std::vector *result, + std::vector *bbox_num, + std::vector *times) { auto preprocess_start = std::chrono::steady_clock::now(); int batch_size = imgs.size(); @@ -180,29 +173,29 @@ void ObjectDetector::Predict(const std::vector& imgs, scale_factor_all[bs_idx * 2 + 1] = inputs_.scale_factor_[1]; // TODO: reduce cost time - in_data_all.insert( - in_data_all.end(), inputs_.im_data_.begin(), inputs_.im_data_.end()); + 
in_data_all.insert(in_data_all.end(), inputs_.im_data_.begin(), + inputs_.im_data_.end()); } auto preprocess_end = std::chrono::steady_clock::now(); std::vector output_data_list_; // Prepare input tensor auto input_names = predictor_->GetInputNames(); - for (const auto& tensor_name : input_names) { + for (const auto &tensor_name : input_names) { auto in_tensor = predictor_->GetInputByName(tensor_name); if (tensor_name == "image") { int rh = inputs_.in_net_shape_[0]; int rw = inputs_.in_net_shape_[1]; in_tensor->Resize({batch_size, 3, rh, rw}); - auto* inptr = in_tensor->mutable_data(); + auto *inptr = in_tensor->mutable_data(); std::copy_n(in_data_all.data(), in_data_all.size(), inptr); } else if (tensor_name == "im_shape") { in_tensor->Resize({batch_size, 2}); - auto* inptr = in_tensor->mutable_data(); + auto *inptr = in_tensor->mutable_data(); std::copy_n(im_shape_all.data(), im_shape_all.size(), inptr); } else if (tensor_name == "scale_factor") { in_tensor->Resize({batch_size, 2}); - auto* inptr = in_tensor->mutable_data(); + auto *inptr = in_tensor->mutable_data(); std::copy_n(scale_factor_all.data(), scale_factor_all.size(), inptr); } } @@ -216,7 +209,7 @@ void ObjectDetector::Predict(const std::vector& imgs, if (config_.arch_ == "PicoDet") { for (int j = 0; j < output_names.size(); j++) { auto output_tensor = predictor_->GetTensor(output_names[j]); - const float* outptr = output_tensor->data(); + const float *outptr = output_tensor->data(); std::vector output_shape = output_tensor->shape(); output_data_list_.push_back(outptr); } @@ -242,7 +235,7 @@ void ObjectDetector::Predict(const std::vector& imgs, if (config_.arch_ == "PicoDet") { for (int i = 0; i < output_names.size(); i++) { auto output_tensor = predictor_->GetTensor(output_names[i]); - const float* outptr = output_tensor->data(); + const float *outptr = output_tensor->data(); std::vector output_shape = output_tensor->shape(); if (i == 0) { num_class = output_shape[2]; @@ -268,16 +261,15 @@ void ObjectDetector::Predict(const std::vector& imgs, std::cerr << "[WARNING] No object detected." 
<< std::endl; } output_data_.resize(output_size); - std::copy_n( - output_tensor->mutable_data(), output_size, output_data_.data()); + std::copy_n(output_tensor->mutable_data(), output_size, + output_data_.data()); int out_bbox_num_size = 1; for (int j = 0; j < out_bbox_num_shape.size(); ++j) { out_bbox_num_size *= out_bbox_num_shape[j]; } out_bbox_num_data_.resize(out_bbox_num_size); - std::copy_n(out_bbox_num->mutable_data(), - out_bbox_num_size, + std::copy_n(out_bbox_num->mutable_data(), out_bbox_num_size, out_bbox_num_data_.data()); } // Postprocessing result @@ -285,9 +277,8 @@ void ObjectDetector::Predict(const std::vector& imgs, result->clear(); if (config_.arch_ == "PicoDet") { PPShiTu::PicoDetPostProcess( - result, output_data_list_, config_.fpn_stride_, - inputs_.im_shape_, inputs_.scale_factor_, - config_.nms_info_["score_threshold"].as(), + result, output_data_list_, config_.fpn_stride_, inputs_.im_shape_, + inputs_.scale_factor_, config_.nms_info_["score_threshold"].as(), config_.nms_info_["nms_threshold"].as(), num_class, reg_max); bbox_num->push_back(result->size()); } else { @@ -326,4 +317,4 @@ std::vector GenerateColorMap(int num_class) { return colormap; } -} // namespace PPShiTu +} // namespace PPShiTu diff --git a/deploy/lite_shitu/src/picodet_postprocess.cc b/deploy/lite_shitu/src/picodet_postprocess.cc index cde914c26db21813b1a52137385fa1509cb825f7..04054efa752ca2c2c1ffce504e0a0d48f259eff3 100644 --- a/deploy/lite_shitu/src/picodet_postprocess.cc +++ b/deploy/lite_shitu/src/picodet_postprocess.cc @@ -47,9 +47,9 @@ int activation_function_softmax(const _Tp *src, _Tp *dst, int length) { } // PicoDet decode -PPShiTu::ObjectResult -disPred2Bbox(const float *&dfl_det, int label, float score, int x, int y, - int stride, std::vector im_shape, int reg_max) { +PPShiTu::ObjectResult disPred2Bbox(const float *&dfl_det, int label, + float score, int x, int y, int stride, + std::vector im_shape, int reg_max) { float ct_x = (x + 0.5) * stride; float ct_y = (y + 0.5) * stride; std::vector dis_pred; diff --git a/deploy/lite_shitu/src/preprocess_op.cc b/deploy/lite_shitu/src/preprocess_op.cc index 9c74d6ee7241c93b9fb206317f634e523425793e..974dcbfc6366590c790314599ae3cbe446dafd86 100644 --- a/deploy/lite_shitu/src/preprocess_op.cc +++ b/deploy/lite_shitu/src/preprocess_op.cc @@ -20,7 +20,7 @@ namespace PPShiTu { -void InitInfo::Run(cv::Mat* im, ImageBlob* data) { +void InitInfo::Run(cv::Mat *im, ImageBlob *data) { data->im_shape_ = {static_cast(im->rows), static_cast(im->cols)}; data->scale_factor_ = {1., 1.}; @@ -28,10 +28,10 @@ void InitInfo::Run(cv::Mat* im, ImageBlob* data) { static_cast(im->cols)}; } -void NormalizeImage::Run(cv::Mat* im, ImageBlob* data) { +void NormalizeImage::Run(cv::Mat *im, ImageBlob *data) { double e = 1.0; if (is_scale_) { - e *= 1./255.0; + e *= 1. 
/ 255.0; } (*im).convertTo(*im, CV_32FC3, e); for (int h = 0; h < im->rows; h++) { @@ -46,35 +46,61 @@ void NormalizeImage::Run(cv::Mat* im, ImageBlob* data) { } } -void Permute::Run(cv::Mat* im, ImageBlob* data) { +void NormalizeImage::Run_feature(cv::Mat *im, const std::vector &mean, + const std::vector &std, float scale) { + (*im).convertTo(*im, CV_32FC3, scale); + for (int h = 0; h < im->rows; h++) { + for (int w = 0; w < im->cols; w++) { + im->at(h, w)[0] = + (im->at(h, w)[0] - mean[0]) / std[0]; + im->at(h, w)[1] = + (im->at(h, w)[1] - mean[1]) / std[1]; + im->at(h, w)[2] = + (im->at(h, w)[2] - mean[2]) / std[2]; + } + } +} + +void Permute::Run(cv::Mat *im, ImageBlob *data) { (*im).convertTo(*im, CV_32FC3); int rh = im->rows; int rw = im->cols; int rc = im->channels(); (data->im_data_).resize(rc * rh * rw); - float* base = (data->im_data_).data(); + float *base = (data->im_data_).data(); for (int i = 0; i < rc; ++i) { cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i); } } -void Resize::Run(cv::Mat* im, ImageBlob* data) { +void Permute::Run_feature(const cv::Mat *im, float *data) { + int rh = im->rows; + int rw = im->cols; + int rc = im->channels(); + for (int i = 0; i < rc; ++i) { + cv::extractChannel(*im, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i); + } +} + +void Resize::Run(cv::Mat *im, ImageBlob *data) { auto resize_scale = GenerateScale(*im); data->im_shape_ = {static_cast(im->cols * resize_scale.first), static_cast(im->rows * resize_scale.second)}; data->in_net_shape_ = {static_cast(im->cols * resize_scale.first), static_cast(im->rows * resize_scale.second)}; - cv::resize( - *im, *im, cv::Size(), resize_scale.first, resize_scale.second, interp_); + cv::resize(*im, *im, cv::Size(), resize_scale.first, resize_scale.second, + interp_); data->im_shape_ = { - static_cast(im->rows), static_cast(im->cols), + static_cast(im->rows), + static_cast(im->cols), }; data->scale_factor_ = { - resize_scale.second, resize_scale.first, + resize_scale.second, + resize_scale.first, }; } -std::pair Resize::GenerateScale(const cv::Mat& im) { +std::pair Resize::GenerateScale(const cv::Mat &im) { std::pair resize_scale; int origin_w = im.cols; int origin_h = im.rows; @@ -101,7 +127,30 @@ std::pair Resize::GenerateScale(const cv::Mat& im) { return resize_scale; } -void PadStride::Run(cv::Mat* im, ImageBlob* data) { +void Resize::Run_feature(const cv::Mat &img, cv::Mat &resize_img, + int resize_short_size, int size) { + int resize_h = 0; + int resize_w = 0; + if (size > 0) { + resize_h = size; + resize_w = size; + } else { + int w = img.cols; + int h = img.rows; + + float ratio = 1.f; + if (h < w) { + ratio = float(resize_short_size) / float(h); + } else { + ratio = float(resize_short_size) / float(w); + } + resize_h = round(float(h) * ratio); + resize_w = round(float(w) * ratio); + } + cv::resize(img, resize_img, cv::Size(resize_w, resize_h)); +} + +void PadStride::Run(cv::Mat *im, ImageBlob *data) { if (stride_ <= 0) { return; } @@ -110,48 +159,44 @@ void PadStride::Run(cv::Mat* im, ImageBlob* data) { int rw = im->cols; int nh = (rh / stride_) * stride_ + (rh % stride_ != 0) * stride_; int nw = (rw / stride_) * stride_ + (rw % stride_ != 0) * stride_; - cv::copyMakeBorder( - *im, *im, 0, nh - rh, 0, nw - rw, cv::BORDER_CONSTANT, cv::Scalar(0)); + cv::copyMakeBorder(*im, *im, 0, nh - rh, 0, nw - rw, cv::BORDER_CONSTANT, + cv::Scalar(0)); data->in_net_shape_ = { - static_cast(im->rows), static_cast(im->cols), + static_cast(im->rows), + static_cast(im->cols), }; } -void 
TopDownEvalAffine::Run(cv::Mat* im, ImageBlob* data) { +void TopDownEvalAffine::Run(cv::Mat *im, ImageBlob *data) { cv::resize(*im, *im, cv::Size(trainsize_[0], trainsize_[1]), 0, 0, interp_); // todo: Simd::ResizeBilinear(); data->in_net_shape_ = { - static_cast(trainsize_[1]), static_cast(trainsize_[0]), + static_cast(trainsize_[1]), + static_cast(trainsize_[0]), }; } // Preprocessor op running order -const std::vector Preprocessor::RUN_ORDER = {"InitInfo", - "DetTopDownEvalAffine", - "DetResize", - "DetNormalizeImage", - "DetPadStride", - "DetPermute"}; - -void Preprocessor::Run(cv::Mat* im, ImageBlob* data) { - for (const auto& name : RUN_ORDER) { +const std::vector Preprocessor::RUN_ORDER = { + "InitInfo", "DetTopDownEvalAffine", "DetResize", + "DetNormalizeImage", "DetPadStride", "DetPermute"}; + +void Preprocessor::Run(cv::Mat *im, ImageBlob *data) { + for (const auto &name : RUN_ORDER) { if (ops_.find(name) != ops_.end()) { ops_[name]->Run(im, data); } } } -void CropImg(cv::Mat& img, - cv::Mat& crop_img, - std::vector& area, - std::vector& center, - std::vector& scale, +void CropImg(cv::Mat &img, cv::Mat &crop_img, std::vector &area, + std::vector ¢er, std::vector &scale, float expandratio) { int crop_x1 = std::max(0, area[0]); int crop_y1 = std::max(0, area[1]); int crop_x2 = std::min(img.cols - 1, area[2]); int crop_y2 = std::min(img.rows - 1, area[3]); - + int center_x = (crop_x1 + crop_x2) / 2.; int center_y = (crop_y1 + crop_y2) / 2.; int half_h = (crop_y2 - crop_y1) / 2.; @@ -182,4 +227,4 @@ void CropImg(cv::Mat& img, scale.emplace_back((crop_y2 - crop_y1)); } -} // namespace PPShiTu +} // namespace PPShiTu diff --git a/deploy/lite_shitu/src/utils.cc b/deploy/lite_shitu/src/utils.cc index 3bc461770e2d79e33e4de91a3f4cea8c131eb7ad..a687f071c15ebe97915cddf98950042ab9cf8b4d 100644 --- a/deploy/lite_shitu/src/utils.cc +++ b/deploy/lite_shitu/src/utils.cc @@ -54,4 +54,53 @@ void nms(std::vector &input_boxes, float nms_threshold, } } +float RectOverlap(const ObjectResult &a, const ObjectResult &b) { + float Aa = (a.rect[2] - a.rect[0] + 1) * (a.rect[3] - a.rect[1] + 1); + float Ab = (b.rect[2] - b.rect[0] + 1) * (b.rect[3] - b.rect[1] + 1); + + int iou_w = max(min(a.rect[2], b.rect[2]) - max(a.rect[0], b.rect[0]) + 1, 0); + int iou_h = max(min(a.rect[3], b.rect[3]) - max(a.rect[1], b.rect[1]) + 1, 0); + float Aab = iou_w * iou_h; + return Aab / (Aa + Ab - Aab); +} + +inline void +GetMaxScoreIndex(const std::vector &det_result, + const float threshold, + std::vector> &score_index_vec) { + // Generate index score pairs. + for (size_t i = 0; i < det_result.size(); ++i) { + if (det_result[i].confidence > threshold) { + score_index_vec.push_back(std::make_pair(det_result[i].confidence, i)); + } + } + + // Sort the score pair according to the scores in descending order + std::stable_sort(score_index_vec.begin(), score_index_vec.end(), + SortScorePairDescend); +} + +void NMSBoxes(const std::vector det_result, + const float score_threshold, const float nms_threshold, + std::vector &indices) { + int a = 1; + // Get top_k scores (with corresponding indices). 
+ std::vector> score_index_vec; + GetMaxScoreIndex(det_result, score_threshold, score_index_vec); + + // Do nms + indices.clear(); + for (size_t i = 0; i < score_index_vec.size(); ++i) { + const int idx = score_index_vec[i].second; + bool keep = true; + for (int k = 0; k < (int)indices.size() && keep; ++k) { + const int kept_idx = indices[k]; + float overlap = RectOverlap(det_result[idx], det_result[kept_idx]); + keep = overlap <= nms_threshold; + } + if (keep) + indices.push_back(idx); + } +} + } // namespace PPShiTu diff --git a/deploy/lite_shitu/src/vector_search.cc b/deploy/lite_shitu/src/vector_search.cc index ea848959b651eb04effc25ad9efb7eb497ef2025..f9c06a83d2abc0401d8e480a57244a43ba6fc7aa 100644 --- a/deploy/lite_shitu/src/vector_search.cc +++ b/deploy/lite_shitu/src/vector_search.cc @@ -64,4 +64,4 @@ const SearchResult &VectorSearch::Search(float *feature, int query_number) { const std::string &VectorSearch::GetLabel(faiss::Index::idx_t ind) { return this->id_map.at(ind); } -} \ No newline at end of file +} // namespace PPShiTu diff --git a/deploy/paddleserving/readme.md b/deploy/paddleserving/readme.md index a2fdec2de5ac3f468ff7ed63b04ebf7bb7b2f574..321d6e0f68e11b3eb9a47ad45076b8a2e3aa771a 120000 --- a/deploy/paddleserving/readme.md +++ b/deploy/paddleserving/readme.md @@ -1 +1 @@ -../../docs/zh_CN/inference_deployment/paddle_serving_deploy.md \ No newline at end of file +../../docs/zh_CN/inference_deployment/classification_serving_deploy.md \ No newline at end of file diff --git a/deploy/paddleserving/readme_en.md b/deploy/paddleserving/readme_en.md new file mode 120000 index 0000000000000000000000000000000000000000..80b5fede2d9809db34b1f28a1141262865e042e0 --- /dev/null +++ b/deploy/paddleserving/readme_en.md @@ -0,0 +1 @@ +../../docs/en/inference_deployment/classification_serving_deploy_en.md \ No newline at end of file diff --git a/deploy/paddleserving/recognition/readme.md b/deploy/paddleserving/recognition/readme.md new file mode 120000 index 0000000000000000000000000000000000000000..116873ea2d00c750a36c3ebe2a727b34ccb11e4c --- /dev/null +++ b/deploy/paddleserving/recognition/readme.md @@ -0,0 +1 @@ +../../../docs/zh_CN/inference_deployment/recognition_serving_deploy.md \ No newline at end of file diff --git a/deploy/paddleserving/recognition/readme_en.md b/deploy/paddleserving/recognition/readme_en.md new file mode 120000 index 0000000000000000000000000000000000000000..2250088e12a4f6a3e9b41889f1fc9d00a983dfe7 --- /dev/null +++ b/deploy/paddleserving/recognition/readme_en.md @@ -0,0 +1 @@ +../../../docs/en/inference_deployment/recognition_serving_deploy_en.md \ No newline at end of file diff --git a/deploy/paddleserving/recognition/run_cpp_serving.sh b/deploy/paddleserving/recognition/run_cpp_serving.sh index affca99c63da9c70fd7c5dd4eb6079fe8ba6b7e6..e1deb1148b1705031c0e92522e7eaf7cf4679a45 100644 --- a/deploy/paddleserving/recognition/run_cpp_serving.sh +++ b/deploy/paddleserving/recognition/run_cpp_serving.sh @@ -1,7 +1,14 @@ -nohup python3 -m paddle_serving_server.serve \ ---model ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving \ - --port 9293 >>log_mainbody_detection.txt 1&>2 & +gpu_id=$1 -nohup python3 -m paddle_serving_server.serve \ ---model ../../models/general_PPLCNet_x2_5_lite_v1.0_serving \ ---port 9294 >>log_feature_extraction.txt 1&>2 & +# PP-ShiTu CPP serving script +if [[ -n "${gpu_id}" ]]; then + nohup python3.7 -m paddle_serving_server.serve \ + --model ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving 
../../models/general_PPLCNet_x2_5_lite_v1.0_serving \
+  --op GeneralPicodetOp GeneralFeatureExtractOp \
+  --port 9400 --gpu_id="${gpu_id}" > log_PPShiTu.txt 2>&1 &
+else
+  nohup python3.7 -m paddle_serving_server.serve \
+  --model ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving ../../models/general_PPLCNet_x2_5_lite_v1.0_serving \
+  --op GeneralPicodetOp GeneralFeatureExtractOp \
+  --port 9400 > log_PPShiTu.txt 2>&1 &
+fi
diff --git a/deploy/paddleserving/run_cpp_serving.sh b/deploy/paddleserving/run_cpp_serving.sh
index 05794b7d953e578880dcc9cb87e91be0c031a415..4aecab368663968ad285372be05371b6a6c0138c 100644
--- a/deploy/paddleserving/run_cpp_serving.sh
+++ b/deploy/paddleserving/run_cpp_serving.sh
@@ -1,2 +1,14 @@
-#run cls server:
-nohup python3 -m paddle_serving_server.serve --model ResNet50_vd_serving --port 9292 &
+gpu_id=$1
+
+# ResNet50_vd CPP serving script
+if [[ -n "${gpu_id}" ]]; then
+    nohup python3.7 -m paddle_serving_server.serve \
+    --model ./ResNet50_vd_serving \
+    --op GeneralClasOp \
+    --port 9292 --gpu_id="${gpu_id}" &
+else
+    nohup python3.7 -m paddle_serving_server.serve \
+    --model ./ResNet50_vd_serving \
+    --op GeneralClasOp \
+    --port 9292 &
+fi
diff --git a/docs/en/inference_deployment/classification_serving_deploy_en.md b/docs/en/inference_deployment/classification_serving_deploy_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..120871edddbe1ca7b6ac1b3a72a3e89e8d1de39a
--- /dev/null
+++ b/docs/en/inference_deployment/classification_serving_deploy_en.md
@@ -0,0 +1,239 @@
+English | [简体中文](../../zh_CN/inference_deployment/classification_serving_deploy.md)
+
+# Classification model service deployment
+
+## Table of contents
+
+- [1 Introduction](#1-introduction)
+- [2. Serving installation](#2-serving-installation)
+- [3. Image Classification Service Deployment](#3-image-classification-service-deployment)
+  - [3.1 Model conversion](#31-model-conversion)
+  - [3.2 Service deployment and request](#32-service-deployment-and-request)
+    - [3.2.1 Python Serving](#321-python-serving)
+    - [3.2.2 C++ Serving](#322-c-serving)
+- [4. FAQ](#4-faq)
+
+
+## 1 Introduction
+
+[Paddle Serving](https://github.com/PaddlePaddle/Serving) aims to help deep learning developers easily deploy online prediction services. It supports one-click deployment of industrial-grade serving capabilities, efficient and highly concurrent communication between client and server, and client development in multiple programming languages.
+
+This section takes HTTP prediction service deployment as an example to introduce how to use PaddleServing to deploy model services in PaddleClas. Currently, only deployment on the Linux platform is supported; the Windows platform is not supported yet.
+
+
+## 2. Serving installation
+
+The Serving official documentation recommends using Docker to install and deploy the Serving environment. First, pull the Docker image and create a Serving-based container.
+
+```shell
+# start GPU docker
+docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
+nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
+nvidia-docker exec -it test bash
+
+# start CPU docker
+docker pull paddlepaddle/serving:0.7.0-devel
+docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
+docker exec -it test bash
+```
+
+After entering the Docker container, you need to install the Serving-related Python packages.
+```shell
+python3.7 -m pip install paddle-serving-client==0.7.0
+python3.7 -m pip install paddle-serving-app==0.7.0
+python3.7 -m pip install faiss-cpu==1.7.1post2
+
+#If it is a CPU deployment environment:
+python3.7 -m pip install paddle-serving-server==0.7.0 #CPU
+python3.7 -m pip install paddlepaddle==2.2.0 # CPU
+
+#If it is a GPU deployment environment
+python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
+python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
+
+#Other GPU environments need to confirm the environment and then choose which one to execute
+python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
+python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
+```
+
+* If the installation is too slow, you can switch to a mirror with `-i https://pypi.tuna.tsinghua.edu.cn/simple` to speed up the process.
+* For other environment configurations and installation options, please refer to: [Install Paddle Serving with Docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_EN.md)
+
+
+## 3. Image Classification Service Deployment
+
+The following takes the classic ResNet50_vd model as an example to introduce how to deploy the image classification service.
+
+
+### 3.1 Model conversion
+
+When using PaddleServing for service deployment, you need to convert the saved inference model into a Serving model.
+- Go to the working directory:
+  ```shell
+  cd deploy/paddleserving
+  ```
+- Download and unzip the inference model of ResNet50_vd:
+  ```shell
+  # Download the ResNet50_vd inference model
+  wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
+  # Decompress the ResNet50_vd inference model
+  tar xf ResNet50_vd_infer.tar
+  ```
+- Use the paddle_serving_client command to convert the downloaded inference model into a format that is easy to deploy on the server:
+  ```shell
+  # Convert the ResNet50_vd model
+  python3.7 -m paddle_serving_client.convert \
+  --dirname ./ResNet50_vd_infer/ \
+  --model_filename inference.pdmodel \
+  --params_filename inference.pdiparams \
+  --serving_server ./ResNet50_vd_serving/ \
+  --serving_client ./ResNet50_vd_client/
+  ```
+  The specific meaning of the parameters in the above command is shown in the following table:
+  | parameter | type | default value | description |
+  | --------- | ---- | ------------- | ----------- |
+  | `dirname` | str | - | The storage path of the model files to be converted. The program structure file and parameter files are saved in this directory. |
+  | `model_filename` | str | None | The name of the file that stores the Inference Program structure of the model to be converted. If set to None, `__model__` is used as the default filename. |
+  | `params_filename` | str | None | The name of the file that stores all parameters of the model to be converted. It needs to be specified if and only if all model parameters are stored in a single binary file. If the model parameters are stored in separate files, set it to None. |
+  | `serving_server` | str | `"serving_server"` | The storage path of the converted model files and configuration files. Default is serving_server. |
+  | `serving_client` | str | `"serving_client"` | The storage path of the converted client configuration files. Default is serving_client. |
+
+  After the ResNet50_vd inference model conversion is completed, there will be additional `ResNet50_vd_serving` and `ResNet50_vd_client` folders in the current directory, with the following structure:
+  ```shell
+  ├── ResNet50_vd_serving/
+  │   ├── inference.pdiparams
+  │   ├── inference.pdmodel
+  │   ├── serving_server_conf.prototxt
+  │   └── serving_server_conf.stream.prototxt
+  │
+  └── ResNet50_vd_client/
+      ├── serving_client_conf.prototxt
+      └── serving_client_conf.stream.prototxt
+  ```
+
+- Serving provides input/output renaming so that different models can be deployed without code changes: when deploying a different model for inference, you only need to modify the `alias_name` in its configuration file. Therefore, after the conversion, modify the alias names in the `serving_server_conf.prototxt` files under `ResNet50_vd_serving` and `ResNet50_vd_client` respectively, changing the `alias_name` in `fetch_var` to `prediction`. The modified serving_server_conf.prototxt is as follows:
+  ```log
+  feed_var {
+    name: "inputs"
+    alias_name: "inputs"
+    is_lod_tensor: false
+    feed_type: 1
+    shape: 3
+    shape: 224
+    shape: 224
+  }
+  fetch_var {
+    name: "save_infer_model/scale_0.tmp_1"
+    alias_name: "prediction"
+    is_lod_tensor: false
+    fetch_type: 1
+    shape: 1000
+  }
+  ```
+
+### 3.2 Service deployment and request
+
+The paddleserving directory contains the code for starting the pipeline service, starting the C++ serving service, and sending prediction requests, mainly including:
+```shell
+__init__.py
+classification_web_service.py # Script to start the pipeline server
+config.yml # Configuration file for starting the pipeline service
+pipeline_http_client.py # Script for sending pipeline prediction requests over http
+pipeline_rpc_client.py # Script for sending pipeline prediction requests over rpc
+readme.md # Classification model service deployment document
+run_cpp_serving.sh # Script to start the C++ Serving deployment
+test_cpp_serving_client.py # Script for sending C++ serving prediction requests over rpc
+```
+
+#### 3.2.1 Python Serving
+
+- Start the service:
+  ```shell
+  # Start the service and save the running log in log.txt
+  python3.7 classification_web_service.py &>log.txt &
+  ```
+
+- Send a request:
+  ```shell
+  # send service request
+  python3.7 pipeline_http_client.py
+  ```
+  After a successful run, the model prediction results will be printed in the terminal, as follows:
+  ```log
+  {'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []}
+  ```
+- Shut down the service:
+  If the service program is running in the foreground, you can press `Ctrl+C` to terminate the server program; if it is running in the background, you can use the kill command to close the related processes, or you can execute the following command in the path where the service program was started to terminate the server program:
+  ```bash
+  python3.7 -m paddle_serving_server.serve stop
+  ```
+  After execution, the `Process stopped` message appears, indicating that the service was successfully shut down.
+
+
+#### 3.2.2 C++ Serving
+
+Different from Python Serving, the C++ Serving client calls C++ OPs to predict, so before starting the service, you need to compile and install the serving server package and set `SERVING_BIN`.
+
+- Compile and install the Serving server package
+  ```shell
+  # Enter the working directory
+  cd PaddleClas/deploy/paddleserving
+  # One-click compile and install Serving server, set SERVING_BIN
+  source ./build_server.sh python3.7
+  ```
+  **Note**: The paths set in [build_server.sh](./build_server.sh#L55-L62) may need to be modified according to the actual machine environment, such as the CUDA and Python versions, before compiling.
+
+- Modify the client file `ResNet50_client/serving_client_conf.prototxt`: change the value after `feed_type:` to 20, change the value after the first `shape:` to 1, and delete the remaining `shape` fields.
+  ```log
+  feed_var {
+    name: "inputs"
+    alias_name: "inputs"
+    is_lod_tensor: false
+    feed_type: 20
+    shape: 1
+  }
+  ```
+- Modify part of the code of [`test_cpp_serving_client`](./test_cpp_serving_client.py)
+  1. Modify the [`load_client_config`](./test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt`.
+  2. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) part of the code, and change `inputs` to match the `name` field of `feed_var` in `ResNet50_client/serving_client_conf.prototxt`. Since `name` is `x` rather than `inputs` in some model client files, you need to pay attention to this when using those models for C++ Serving deployment.
+
+- Start the service:
+  ```shell
+  # Start the service; it runs in the background and the running log is saved in nohup.txt
+  # CPU deployment
+  sh run_cpp_serving.sh
+  # GPU deployment, specifying card 0
+  sh run_cpp_serving.sh 0
+  ```
+
+- Send a request:
+  ```shell
+  # send service request
+  python3.7 test_cpp_serving_client.py
+  ```
+  After a successful run, the model prediction results will be printed in the terminal, as follows:
+  ```log
+  prediction: daisy, probability: 0.9341399073600769
+  ```
+- Shut down the service:
+  If the service program is running in the foreground, you can press `Ctrl+C` to terminate the server program; if it is running in the background, you can use the kill command to close the related processes, or you can execute the following command in the path where the service program was started to terminate the server program:
+  ```bash
+  python3.7 -m paddle_serving_server.serve stop
+  ```
+  After execution, the `Process stopped` message appears, indicating that the service was successfully shut down.
+
+
+## 4. FAQ
+
+**Q1**: No result is returned after the request is sent, or an output decoding error is reported.
+
+**A1**: Do not set a proxy when starting the service or sending requests. You can disable the proxy before starting the service and before sending requests.
The command to close the proxy is: +```shell +unset https_proxy +unset http_proxy +``` + +**Q2**: nothing happens after starting the service + +**A2**: You can check whether the path corresponding to `model_config` in `config.yml` exists, and whether the folder name is correct + +For more service deployment types, such as `RPC prediction service`, you can refer to Serving's [github official website](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples) diff --git a/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md b/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md index c89142911f12ffcb2622fb8b5912cd9c960e56c4..4dddc94bd8456a882e42000b640870155f46da7c 100644 --- a/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md +++ b/docs/en/inference_deployment/paddle_hub_serving_deploy_en.md @@ -1,11 +1,10 @@ -# Service deployment based on PaddleHub Serving +English | [简体中文](../../zh_CN/inference_deployment/paddle_hub_serving_deploy.md) -PaddleClas supports rapid service deployment through Paddlehub. At present, it supports the deployment of image classification. Please look forward to the deployment of image recognition. +# Service deployment based on PaddleHub Serving ---- +PaddleClas supports rapid service deployment through PaddleHub. Currently, the deployment of image classification is supported. Please look forward to the deployment of image recognition. ## Catalogue - - [1. Introduction](#1) - [2. Prepare the environment](#2) - [3. Download inference model](#3) @@ -16,97 +15,101 @@ PaddleClas supports rapid service deployment through Paddlehub. At present, it s - [6. Send prediction requests](#6) - [7. User defined service module modification](#7) + -## 1. Introduction +## 1 Introduction -HubServing service pack contains 3 files, the directory is as follows: +The hubserving service deployment configuration service package `clas` contains 3 required files, the directories are as follows: +```shell +deploy/hubserving/clas/ +├── __init__.py # Empty file, required +├── config.json # Configuration file, optional, passed in as a parameter when starting the service with configuration +├── module.py # The main module, required, contains the complete logic of the service +└── params.py # Parameter file, required, including model path, pre- and post-processing parameters and other parameters ``` -hubserving/clas/ - └─ __init__.py Empty file, required - └─ config.json Configuration file, optional, passed in as a parameter when using configuration to start the service - └─ module.py Main module file, required, contains the complete logic of the service - └─ params.py Parameter file, required, including parameters such as model path, pre- and post-processing parameters -``` + ## 2. Prepare the environment - ```shell -# Install version 2.0 of PaddleHub -pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple +# Install paddlehub, version 2.1.0 is recommended +python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple ``` + -## 3. Download inference model +## 3. Download the inference model -Before installing the service module, you need to prepare the inference model and put it in the correct path. The default model path is: +Before installing the service module, you need to prepare the inference model and put it in the correct path. 
The default model path is: -* Model structure file: `PaddleClas/inference/inference.pdmodel` -* Model parameters file: `PaddleClas/inference/inference.pdiparams` +* Classification inference model structure file: `PaddleClas/inference/inference.pdmodel` +* Classification inference model weight file: `PaddleClas/inference/inference.pdiparams` **Notice**: -* The model file path can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`. -* It should be noted that the prefix of model structure file and model parameters file must be `inference`. -* More models provided by PaddleClas can be obtained from the [model library](../algorithm_introduction/ImageNet_models_en.md). You can also use models trained by yourself. +* Model file paths can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`: + + ```python + "inference_model_dir": "../inference/" + ``` +* Model files (including `.pdmodel` and `.pdiparams`) must be named `inference`. +* We provide a large number of pre-trained models based on the ImageNet-1k dataset. For the model list and download address, see [Model Library Overview](../algorithm_introduction/ImageNet_models.md), or you can use your own trained and converted models. + -## 4. Install Service Module +## 4. Install the service module -* On Linux platform, the examples are as follows. -```shell -cd PaddleClas/deploy -hub install hubserving/clas/ -``` +* In the Linux environment, the installation example is as follows: + ```shell + cd PaddleClas/deploy + # Install the service module: + hub install hubserving/clas/ + ``` + +* In the Windows environment (the folder separator is `\`), the installation example is as follows: + + ```shell + cd PaddleClas\deploy + # Install the service module: + hub install hubserving\clas\ + ``` -* On Windows platform, the examples are as follows. -```shell -cd PaddleClas\deploy -hub install hubserving\clas\ -``` ## 5. Start service + ### 5.1 Start with command line parameters -This method only supports CPU. Command as follow: +This method only supports prediction using CPU. Start command: ```shell -$ hub serving start --modules Module1==Version1 \ - --port XXXX \ - --use_multiprocess \ - --workers \ -``` - -**parameters:** - -|parameters|usage| -|-|-| -|--modules/-m|PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs
*`When Version is not specified, the latest version is selected by default`*| -|--port/-p|Service port, default is 8866| -|--use_multiprocess|Enable concurrent mode, the default is single-process mode, this mode is recommended for multi-core CPU machines
*`Windows operating system only supports single-process mode`*| -|--workers|The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores| - -For example, start service: - -```shell -hub serving start -m clas_system -``` +hub serving start \ +--modules clas_system +--port 8866 +``` +This completes the deployment of a serviced API, using the default port number 8866. -This completes the deployment of a service API, using the default port number 8866. +**Parameter Description**: +|parameters|uses| +|-|-| +|--modules/-m| [**required**] PaddleHub Serving pre-installed model, listed in the form of multiple Module==Version key-value pairs
*`When no Version is specified, the latest is selected by default version `*| +|--port/-p| [**OPTIONAL**] Service port, default is 8866| +|--use_multiprocess| [**Optional**] Whether to enable the concurrent mode, the default is single-process mode, it is recommended to use this mode for multi-core CPU machines
*`Windows operating system only supports single-process mode`*|
+|--workers| [**Optional**] The number of concurrent tasks specified in concurrent mode, the default is `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|
+For more deployment details, see [PaddleHub Serving Model One-Click Service Deployment](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html)
 
 
 ### 5.2 Start with configuration file
 
-This method supports CPU and GPU. Command as follow:
+This method supports prediction using either CPU or GPU. Start command:
 
 ```shell
-hub serving start --config/-c config.json
-```
+hub serving start -c config.json
+```
 
-Wherein, the format of `config.json` is as follows:
+The format of `config.json` is as follows:
 
 ```json
 {
@@ -127,18 +130,19 @@ Wherein, the format of `config.json` is as follows:
 }
 ```
 
-- The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them,
-  - when `use_gpu` is `true`, it means that the GPU is used to start the service.
-  - when `enable_mkldnn` is `true`, it means that use MKL-DNN to accelerate.
-- The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.
+**Parameter Description**:
+* The configurable parameters in `init_args` are consistent with the `_initialize` function interface in `module.py`. Among them,
+  - When `use_gpu` is `true`, the service is started on GPU.
+  - When `enable_mkldnn` is `true`, MKL-DNN acceleration is used.
+* The configurable parameters in `predict_args` are consistent with the `predict` function interface in `module.py`.
 
-**Note:**
-- When using the configuration file to start the service, other parameters will be ignored.
-- If you use GPU prediction (that is, `use_gpu` is set to `true`), you need to set the environment variable CUDA_VISIBLE_DEVICES before starting the service, such as: ```export CUDA_VISIBLE_DEVICES=0```, otherwise you do not need to set it.
-- **`use_gpu` and `use_multiprocess` cannot be `true` at the same time.**
-- **When both `use_gpu` and `enable_mkldnn` are set to `true` at the same time, GPU is used to run and `enable_mkldnn` will be ignored.**
+**Notice**:
+* When using the configuration file to start the service, the parameter settings in the configuration file are used and other command line parameters are ignored;
+* If you use GPU prediction (i.e., `use_gpu` is set to `true`), you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify the GPU card number before starting the service, for example: `export CUDA_VISIBLE_DEVICES=0`;
+* **`use_gpu` and `use_multiprocess` cannot both be `true` at the same time;**
+* **When both `use_gpu` and `enable_mkldnn` are `true`, `enable_mkldnn` will be ignored and the GPU will be used.**
 
-For example, use GPU card No. 3 to start the 2-stage series service:
+For example, to start the service using GPU card No. 3:
 
 ```shell
 cd PaddleClas/deploy
@@ -149,88 +153,86 @@ hub serving start -c hubserving/clas/config.json
 
 ## 6.
Send prediction requests -After the service starting, you can use the following command to send a prediction request to obtain the prediction result: +After configuring the server, you can use the following command to send a prediction request to get the prediction result: ```shell cd PaddleClas/deploy -python hubserving/test_hubserving.py server_url image_path +python3.7 hubserving/test_hubserving.py \ +--server_url http://127.0.0.1:8866/predict/clas_system \ +--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \ +--batch_size 8 +``` +**Predicted output** +```log +The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes' , 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285] +The average time of prediction cost: 2.970 s/image +The average time cost: 3.014 s/image +The average top-1 score: 0.110 ``` -Two required parameters need to be passed to the script: - -- **server_url**: service address,format of which is -`http://[ip_address]:[port]/predict/[module_name]` -- **image_path**: Test image path, can be a single image path or an image directory path -- **batch_size**: [**Optional**] batch_size. Default by `1`. -- **resize_short**: [**Optional**] In preprocessing, resize according to short size. Default by `256`。 -- **crop_size**: [**Optional**] In preprocessing, centor crop size. Default by `224`。 -- **normalize**: [**Optional**] In preprocessing, whether to do `normalize`. Default by `True`。 -- **to_chw**: [**Optional**] In preprocessing, whether to transpose to `CHW`. Default by `True`。 +**Script parameter description**: +* **server_url**: Service address, the format is `http://[ip_address]:[port]/predict/[module_name]`. +* **image_path**: The test image path, which can be a single image path or an image collection directory path. +* **batch_size**: [**OPTIONAL**] Make predictions in `batch_size` size, default is `1`. +* **resize_short**: [**optional**] When preprocessing, resize by short edge, default is `256`. +* **crop_size**: [**Optional**] The size of the center crop during preprocessing, the default is `224`. +* **normalize**: [**Optional**] Whether to perform `normalize` during preprocessing, the default is `True`. +* **to_chw**: [**Optional**] Whether to adjust to `CHW` order when preprocessing, the default is `True`. -**Notice**: -If you want to use `Transformer series models`, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input size of model, and need to set `--resize_short=384`, `--crop_size=384`. +**Note**: If you use `Transformer` series models, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input data size of the model, you need to specify `--resize_short=384 -- crop_size=384`. 
-**Eg.** +**Return result format description**: +The returned result is a list (list), including the top-k classification results, the corresponding scores, and the time-consuming prediction of this image, as follows: ```shell -python hubserving/test_hubserving.py --server_url http://127.0.0.1:8866/predict/clas_system --image_file ./hubserving/ILSVRC2012_val_00006666.JPEG --batch_size 8 +list: return result +└──list: first image result + ├── list: the top k classification results, sorted in descending order of score + ├── list: the scores corresponding to the first k classification results, sorted in descending order of score + └── float: The image classification time, in seconds ``` -The returned result is a list, including the `top_k`'s classification results, corresponding scores and the time cost of prediction, details as follows. - -``` -list: The returned results -└─ list: The result of first picture - └─ list: The top-k classification results, sorted in descending order of score - └─ list: The scores corresponding to the top-k classification results, sorted in descending order of score - └─ float: The time cost of predicting the picture, unit second -``` -**Note:** If you need to add, delete or modify the returned fields, you can modify the corresponding module. For the details, refer to the user-defined modification service module in the next section. ## 7. User defined service module modification -If you need to modify the service logic, the following steps are generally required: - -1. Stop service -```shell -hub serving stop --port/-p XXXX -``` - -2. Modify the code in the corresponding files, like `module.py` and `params.py`, according to the actual needs. You need re-install(hub install hubserving/clas/) and re-deploy after modifing `module.py`. -After modifying and installing and before deploying, you can use `python hubserving/clas/module.py` to test the installed service module. +If you need to modify the service logic, you need to do the following: -For example, if you need to replace the model used by the deployed service, you need to modify model path parameters `cfg.model_file` and `cfg.params_file` in `params.py`. Of course, other related parameters may need to be modified at the same time. Please modify and debug according to the actual situation. - -3. Uninstall old service module -```shell -hub uninstall clas_system -``` +1. Stop the service + ```shell + hub serving stop --port/-p XXXX + ``` -4. Install modified service module -```shell -hub install hubserving/clas/ -``` +2. Go to the corresponding `module.py` and `params.py` and other files to modify the code according to actual needs. `module.py` needs to be reinstalled after modification (`hub install hubserving/clas/`) and deployed. Before deploying, you can use the `python3.7 hubserving/clas/module.py` command to quickly test the code ready for deployment. -5. Restart service -```shell -hub serving start -m clas_system -``` +3. Uninstall the old service pack + ```shell + hub uninstall clas_system + ``` -**Note**: +4. Install the new modified service pack + ```shell + hub install hubserving/clas/ + ``` -Common parameters can be modified in params.py: -* Directory of model files(include model structure file and model parameters file): - ```python - "inference_model_dir": - ``` -* The number of Top-k results returned during post-processing: - ```python - 'topk': - ``` -* Mapping file corresponding to label and class ID during post-processing: - ```python - 'class_id_map_file': - ``` +5. 
Restart the service + ```shell + hub serving start -m clas_system + ``` -In order to avoid unnecessary delay and be able to predict in batch, the preprocessing (include resize, crop and other) is completed in the client, so modify [test_hubserving.py](../../../deploy/hubserving/test_hubserving.py#L35-L52) if necessary. +**Notice**: +Common parameters can be modified in `PaddleClas/deploy/hubserving/clas/params.py`: + * To replace the model, you need to modify the model file path parameters: + ```python + "inference_model_dir": + ``` + * Change the number of `top-k` results returned when postprocessing: + ```python + 'topk': + ``` + * The mapping file corresponding to the lable and class id when changing the post-processing: + ```python + 'class_id_map_file': + ``` + +In order to avoid unnecessary delay and be able to predict with batch_size, data preprocessing logic (including `resize`, `crop` and other operations) is completed on the client side, so it needs to be in [PaddleClas/deploy/hubserving/test_hubserving.py# L41-L47](../../../deploy/hubserving/test_hubserving.py#L41-L47) and [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](../../../deploy/hubserving/test_hubserving.py#L51-L76) Modify the data preprocessing logic related code. diff --git a/docs/en/inference_deployment/paddle_serving_deploy_en.md b/docs/en/inference_deployment/paddle_serving_deploy_en.md deleted file mode 100644 index 7a602920f0271abb35cc532d83af97a98f7a6310..0000000000000000000000000000000000000000 --- a/docs/en/inference_deployment/paddle_serving_deploy_en.md +++ /dev/null @@ -1,280 +0,0 @@ -# Model Service Deployment - -## Catalogue - -- [1. Introduction](#1) -- [2. Installation of Serving](#2) -- [3. Service Deployment for Image Classification](#3) - - [3.1 Model Transformation](#3.1) - - [3.2 Service Deployment and Request](#3.2) -- [4. Service Deployment for Image Recognition](#4) - - [4.1 Model Transformation](#4.1) - - [4.2 Service Deployment and Request](#4.2) -- [5. FAQ](#5) - - -## 1. Introduction - -[Paddle Serving](https://github.com/PaddlePaddle/Serving) is designed to provide easy deployment of on-line prediction services for deep learning developers, it supports one-click deployment of industrial-grade services, highly concurrent and efficient communication between client and server, and multiple programming languages for client development. - -This section, exemplified by HTTP deployment of prediction service, describes how to deploy model services in PaddleClas with PaddleServing. Currently, only deployment on Linux platform is supported. Windows platform is not supported. - - -## 2. Installation of Serving - -It is officially recommended to use docker for the installation and environment deployment of Serving. First, pull the docker and create a Serving-based one. - -``` -docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel -nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash -nvidia-docker exec -it test bash -``` - -Once you are in docker, install the Serving-related python packages. 
- -``` -pip3 install paddle-serving-client==0.7.0 -pip3 install paddle-serving-server==0.7.0 # CPU -pip3 install paddle-serving-app==0.7.0 -pip3 install paddle-serving-server-gpu==0.7.0.post102 #GPU with CUDA10.2 + TensorRT6 -# For other GPU environemnt, confirm the environment before choosing which one to execute -pip3 install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6 -pip3 install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8 -``` - -- Speed up the installation process by replacing the source with `-i https://pypi.tuna.tsinghua.edu.cn/simple`. -- For other environment configuration and installation, please refer to [Install Paddle Serving using docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_EN.md) -- To deploy CPU services, please install the CPU version of serving-server with the following command. - -``` -pip install paddle-serving-server -``` - - -## 3. Service Deployment for Image Classification - - -### 3.1 Model Transformation - -When adopting PaddleServing for service deployment, the saved inference model needs to be converted to a Serving model. The following part takes the classic ResNet50_vd model as an example to introduce the deployment of image classification service. - -- Enter the working directory: - -``` -cd deploy/paddleserving -``` - -- Download the inference model of ResNet50_vd: - -``` -# Download and decompress the ResNet50_vd model -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar -``` - -- Convert the downloaded inference model into a format that is readily deployable by Server with the help of paddle_serving_client. - -``` -# Convert the ResNet50_vd model -python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \ - --model_filename inference.pdmodel \ - --params_filename inference.pdiparams \ - --serving_server ./ResNet50_vd_serving/ \ - --serving_client ./ResNet50_vd_client/ -``` - -After the transformation, `ResNet50_vd_serving` and `ResNet50_vd_client` will be added to the current folder in the following format: - -``` -|- ResNet50_vd_server/ - |- __model__ - |- __params__ - |- serving_server_conf.prototxt - |- serving_server_conf.stream.prototxt -|- ResNet50_vd_client - |- serving_client_conf.prototxt - |- serving_client_conf.stream.prototxt -``` - -Having obtained the model file, modify the alias name in `serving_server_conf.prototxt` under directory `ResNet50_vd_server` by changing `alias_name` in `fetch_var` to `prediction`. - -**Notes**: Serving supports input and output renaming to ensure its compatibility with the deployment of different models. In this case, modifying the alias_name of the configuration file is the only step needed to complete the inference and deployment of all kinds of models. 
The modified serving_server_conf.prototxt is shown below: - -``` -feed_var { - name: "inputs" - alias_name: "inputs" - is_lod_tensor: false - feed_type: 1 - shape: 3 - shape: 224 - shape: 224 -} -fetch_var { - name: "save_infer_model/scale_0.tmp_1" - alias_name: "prediction" - is_lod_tensor: true - fetch_type: 1 - shape: -1 -} -``` - - -### 3.2 Service Deployment and Request - -Paddleserving's directory contains the code to start the pipeline service and send prediction requests, including: - -``` -__init__.py -config.yml # Configuration file for starting the service -pipeline_http_client.py # Script for sending pipeline prediction requests by http -pipeline_rpc_client.py # Script for sending pipeline prediction requests by rpc -classification_web_service.py # Script for starting the pipeline server -``` - -- Start the service: - -``` -# Start the service and the run log is saved in log.txt -python3 classification_web_service.py &>log.txt & -``` - -Once the service is successfully started, a log will be printed in log.txt similar to the following ![img](../../../deploy/paddleserving/imgs/start_server.png) - -- Send request: - -``` -# Send service request -python3 pipeline_http_client.py -``` - -Once the service is successfully started, the prediction results will be printed in the cmd window, see the following example:![img](../../../deploy/paddleserving/imgs/results.png) - - - -## 4. Service Deployment for Image Recognition - -When using PaddleServing for service deployment, the saved inference model needs to be converted to a Serving model. The following part, exemplified by the ultra-lightweight model for image recognition in PP-ShiTu, details the deployment of image recognition service. - - - -## 4.1 Model Transformation - -- Download inference models for general detection and general recognition - -``` -cd deploy -# Download and decompress general recogntion models -wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar -cd models -tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar -# Download and decompress general detection models -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar -tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar -``` - -- Convert the inference model for recognition into a Serving model: - -``` -# Convert the recognition model -python3 -m paddle_serving_client.convert --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \ - --model_filename inference.pdmodel \ - --params_filename inference.pdiparams \ - --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \ - --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/ -``` - -After the transformation, `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_serving/` will be added to the current folder. Modify the alias name in serving_server_conf.prototxt under the directory `general_PPLCNet_x2_5_lite_v1.0_serving/` by changing `alias_name` to `features` in `fetch_var`. 
The modified serving_server_conf.prototxt is similar to the following: - -``` -feed_var { - name: "x" - alias_name: "x" - is_lod_tensor: false - feed_type: 1 - shape: 3 - shape: 224 - shape: 224 -} -fetch_var { - name: "save_infer_model/scale_0.tmp_1" - alias_name: "features" - is_lod_tensor: true - fetch_type: 1 - shape: -1 -} -``` - -- Convert the inference model for detection into a Serving model: - -``` -# Convert the general detection model -python3 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \ - --model_filename inference.pdmodel \ - --params_filename inference.pdiparams \ - --serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \ - --serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/ -``` - -After the transformation, `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_ mainbody_lite_v1.0_client/` will be added to the current folder. - -**Note:** The alias name in the serving_server_conf.prototxt under the directory`picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` requires no modification. - -- Download and decompress the constructed search library index - -``` -cd ../ -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar -``` - - -## 4.2 Service Deployment and Request - -**Note:** Since the recognition service involves multiple models, PipeLine is adopted for better performance. This deployment method does not support the windows platform for now. - -- Enter the working directory - -``` -cd ./deploy/paddleserving/recognition -``` - -Paddleserving's directory contains the code to start the pipeline service and send prediction requests, including: - -``` -__init__.py -config.yml # Configuration file for starting the service -pipeline_http_client.py # Script for sending pipeline prediction requests by http -pipeline_rpc_client.py # Script for sending pipeline prediction requests by rpc -recognition_web_service.py # Script for starting the pipeline server -``` - -- Start the service: - -``` -# Start the service and the run log is saved in log.txt -python3 recognition_web_service.py &>log.txt & -``` - -Once the service is successfully started, a log will be printed in log.txt similar to the following ![img](../../../deploy/paddleserving/imgs/start_server_shitu.png) - -- Send request: - -``` -python3 pipeline_http_client.py -``` - -Once the service is successfully started, the prediction results will be printed in the cmd window, see the following example: ![img](../../../deploy/paddleserving/imgs/results_shitu.png) - - - -## 5.FAQ - -**Q1**: After sending a request, no result is returned or the output is prompted with a decoding error. - -**A1**: Please turn off the proxy before starting the service and sending requests, try the following command: - -``` -unset https_proxy -unset http_proxy -``` - -For more types of service deployment, such as `RPC prediction services`, you can refer to the [github official website](https://github.com/PaddlePaddle/Serving/tree/v0.7.0/examples) of Serving. 
diff --git a/docs/en/inference_deployment/recognition_serving_deploy_en.md b/docs/en/inference_deployment/recognition_serving_deploy_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf8061376a6db8fb6cb8c256c8cc5a74c0fb1326
--- /dev/null
+++ b/docs/en/inference_deployment/recognition_serving_deploy_en.md
@@ -0,0 +1,282 @@
+English | [简体中文](../../zh_CN/inference_deployment/recognition_serving_deploy.md)
+
+# Recognition model service deployment
+
+## Table of contents
+
+- [1. Introduction](#1-introduction)
+- [2. Serving installation](#2-serving-installation)
+- [3. Image recognition service deployment](#3-image-recognition-service-deployment)
+  - [3.1 Model conversion](#31-model-conversion)
+  - [3.2 Service deployment and request](#32-service-deployment-and-request)
+    - [3.2.1 Python Serving](#321-python-serving)
+    - [3.2.2 C++ Serving](#322-c-serving)
+- [4. FAQ](#4-faq)
+
+
+## 1. Introduction
+
+[Paddle Serving](https://github.com/PaddlePaddle/Serving) aims to help deep learning developers easily deploy online prediction services. It supports one-click deployment of industrial-grade serving capabilities, high-concurrency and efficient communication between client and server, and client development in multiple programming languages.
+
+This section uses an HTTP prediction service as an example to introduce how to deploy a PaddleClas model service with PaddleServing. Currently only Linux is supported; Windows is not supported yet.
+
+
+## 2. Serving installation
+
+The Serving official website recommends using docker to install and deploy the Serving environment. First pull the docker image and create a Serving-based docker container.
+
+```shell
+# start GPU docker
+docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel
+nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash
+nvidia-docker exec -it test bash
+
+# start CPU docker
+docker pull paddlepaddle/serving:0.7.0-devel
+docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
+docker exec -it test bash
+```
+
+After entering docker, install the Serving-related python packages.
+```shell
+python3.7 -m pip install paddle-serving-client==0.7.0
+python3.7 -m pip install paddle-serving-app==0.7.0
+python3.7 -m pip install faiss-cpu==1.7.1post2
+
+# For a CPU deployment environment:
+python3.7 -m pip install paddle-serving-server==0.7.0 # CPU
+python3.7 -m pip install paddlepaddle==2.2.0 # CPU
+
+# For a GPU deployment environment
+python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6
+python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2
+
+# For other GPU environments, confirm the environment first and then choose which one to execute
+python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6
+python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8
+```
+
+* If the installation is too slow, you can switch the source with `-i https://pypi.tuna.tsinghua.edu.cn/simple` to speed up the process.
+* For other environment configurations, please refer to: [Install Paddle Serving with Docker](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md)
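+* Optionally, you can run a quick sanity check inside the container before moving on. The short Python sketch below is not part of the official workflow; it only verifies that the packages installed above are importable and prints their versions:
+
+  ```python
+  # check_serving_env.py -- optional, illustrative sanity check for the Serving environment
+  import pkg_resources
+
+  # paddlepaddle exposes its version directly
+  import paddle
+  print("paddlepaddle:", paddle.__version__)
+
+  # query the installed versions of the Serving packages via their distribution metadata
+  for dist in ("paddle-serving-client", "paddle-serving-app", "faiss-cpu"):
+      try:
+          print(dist + ":", pkg_resources.get_distribution(dist).version)
+      except pkg_resources.DistributionNotFound:
+          print(dist + ": NOT installed")
+  ```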
+
+
+## 3. Image recognition service deployment
+
+When deploying the image recognition service with PaddleServing, **all of the saved inference models involved need to be converted to Serving models**. The following takes the ultra-lightweight image recognition model in PP-ShiTu as an example to introduce the deployment of the image recognition service.
+
+### 3.1 Model conversion
+
+- Go to the working directory:
+  ```shell
+  cd deploy/
+  ```
+- Download the generic detection inference model and the generic recognition inference model
+  ```shell
+  # Create and enter the models folder
+  mkdir models
+  cd models
+  # Download and unzip the generic recognition model
+  wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
+  tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
+  # Download and unzip the generic detection model
+  wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
+  tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
+  ```
+- Convert the generic recognition inference model to a Serving model:
+  ```shell
+  # Convert the generic recognition model
+  python3.7 -m paddle_serving_client.convert \
+  --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
+  --model_filename inference.pdmodel \
+  --params_filename inference.pdiparams \
+  --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
+  --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
+  ```
+  The parameters of the above command are explained in the table at the end of this section.
+
+  After the recognition inference model is converted, the additional folders `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` will appear in the current folder. Modify `serving_server_conf.prototxt` in both the `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` directories: change `alias_name` under `fetch_var` to `features`, either by hand or with the small script sketched below.
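+  The following Python sketch is one possible way to apply this edit programmatically. It is only an illustration: the folder names are the ones produced by the conversion command above, the file name `patch_fetch_alias.py` is hypothetical, and the script is assumed to be run from the `models` directory.
+  ```python
+  # patch_fetch_alias.py -- illustrative sketch, run from the `models` directory created above
+  import re
+  from pathlib import Path
+
+  def set_fetch_alias(conf_path: Path, new_alias: str) -> None:
+      """Rewrite alias_name inside the fetch_var block of a Serving conf prototxt."""
+      text = conf_path.read_text()
+      patched = re.sub(
+          r'(fetch_var\s*\{[^}]*?alias_name:\s*")[^"]*(")',
+          lambda m: m.group(1) + new_alias + m.group(2),
+          text,
+          flags=re.S,
+      )
+      conf_path.write_text(patched)
+
+  for folder in ("general_PPLCNet_x2_5_lite_v1.0_serving",
+                 "general_PPLCNet_x2_5_lite_v1.0_client"):
+      # serving_server_conf.prototxt lives in the *_serving folder,
+      # serving_client_conf.prototxt in the *_client folder
+      for conf in Path(folder).glob("serving_*_conf.prototxt"):
+          set_fetch_alias(conf, "features")
+          print("patched", conf)
+  ```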
+  The content of the modified `serving_server_conf.prototxt` is as follows:
+
+  ```log
+  feed_var {
+    name: "x"
+    alias_name: "x"
+    is_lod_tensor: false
+    feed_type: 1
+    shape: 3
+    shape: 224
+    shape: 224
+  }
+  fetch_var {
+    name: "save_infer_model/scale_0.tmp_1"
+    alias_name: "features"
+    is_lod_tensor: false
+    fetch_type: 1
+    shape: 512
+  }
+  ```
+
+  After the conversion of the generic recognition inference model is completed, the additional folders `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` in the current folder have the following structure:
+  ```shell
+  ├── general_PPLCNet_x2_5_lite_v1.0_serving/
+  │   ├── inference.pdiparams
+  │   ├── inference.pdmodel
+  │   ├── serving_server_conf.prototxt
+  │   └── serving_server_conf.stream.prototxt
+  │
+  └── general_PPLCNet_x2_5_lite_v1.0_client/
+      ├── serving_client_conf.prototxt
+      └── serving_client_conf.stream.prototxt
+  ```
+- Convert the generic detection inference model to a Serving model:
+  ```shell
+  # Convert the generic detection model
+  python3.7 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
+  --model_filename inference.pdmodel \
+  --params_filename inference.pdiparams \
+  --serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
+  --serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
+  ```
+  The parameters of the above command have the same meaning as in the recognition model conversion and are explained in the table at the end of this section.
+
+  After the conversion of the generic detection inference model is completed, there will be additional folders `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` in the current folder, with the following structure:
+  ```shell
+  ├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
+  │   ├── inference.pdiparams
+  │   ├── inference.pdmodel
+  │   ├── serving_server_conf.prototxt
+  │   └── serving_server_conf.stream.prototxt
+  │
+  └── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
+      ├── serving_client_conf.prototxt
+      └── serving_client_conf.stream.prototxt
+  ```
+  **Note:** The `alias_name` in `serving_server_conf.prototxt` under the detection model directories does not need to be modified.
+
+  The specific meaning of the parameters in the above conversion commands is shown in the following table:
+  | parameter | type | default value | description |
+  | ----------------- | ---- | ------------------ | ----------------------------------------------------- |
+  | `dirname` | str | - | The storage path of the model files to be converted. The program structure file and the parameter file are saved in this directory. |
+  | `model_filename` | str | None | The name of the file storing the Inference Program structure of the model to be converted. If set to None, `__model__` is used as the default filename. |
+  | `params_filename` | str | None | The name of the file storing all parameters of the model to be converted. It needs to be specified if and only if all model parameters are stored in a single binary file. If the model parameters are stored in separate files, set it to None. |
+  | `serving_server` | str | `"serving_server"` | The storage path of the converted model files and configuration files. Default is `serving_server` |
+  | `serving_client` | str | `"serving_client"` | The storage path of the converted client configuration files. Default is `serving_client` |
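+- (Optional) The two conversions above can also be scripted. A minimal sketch that wraps the same converter command line is shown below; the model names are taken from the commands above, the file name `convert_models.py` is hypothetical, and the script is assumed to be run from the `deploy/models` directory:
+  ```python
+  # convert_models.py -- illustrative helper, run from the deploy/models directory
+  import subprocess
+
+  MODELS = ["general_PPLCNet_x2_5_lite_v1.0", "picodet_PPLCNet_x2_5_mainbody_lite_v1.0"]
+
+  for name in MODELS:
+      # call the same converter CLI used in the shell commands above
+      subprocess.run([
+          "python3.7", "-m", "paddle_serving_client.convert",
+          "--dirname", f"./{name}_infer/",
+          "--model_filename", "inference.pdmodel",
+          "--params_filename", "inference.pdiparams",
+          "--serving_server", f"./{name}_serving/",
+          "--serving_client", f"./{name}_client/",
+      ], check=True)
+  ```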
+
+- Download and unzip the index of the retrieval library that has already been built
+  ```shell
+  # Go back to the deploy directory
+  cd ../
+  # Download the built retrieval library index
+  wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar
+  # Decompress the built retrieval library index
+  tar -xf drink_dataset_v1.0.tar
+  ```
+
+### 3.2 Service deployment and request
+
+**Note:** The recognition service involves multiple models, so the PipeLine deployment method is used for performance reasons. The Pipeline deployment method currently does not support the Windows platform.
+- Go to the working directory
+  ```shell
+  cd ./deploy/paddleserving/recognition
+  ```
+  The paddleserving directory contains the code to start the Python Pipeline service, the C++ Serving service, and to send prediction requests, including:
+  ```shell
+  __init__.py
+  config.yml                    # Configuration file for starting the python pipeline service
+  pipeline_http_client.py       # Script for sending pipeline prediction requests in http mode
+  pipeline_rpc_client.py        # Script for sending pipeline prediction requests in rpc mode
+  recognition_web_service.py    # Script to start the pipeline server
+  readme.md                     # Recognition model service deployment documentation
+  run_cpp_serving.sh            # Script to start the C++ Pipeline Serving deployment
+  test_cpp_serving_client.py    # Script for sending C++ Pipeline serving prediction requests by rpc
+  ```
+
+#### 3.2.1 Python Serving
+
+- Start the service:
+  ```shell
+  # Start the service and save the running log in log.txt
+  python3.7 recognition_web_service.py &>log.txt &
+  ```
+
+- Send a request:
+  ```shell
+  python3.7 pipeline_http_client.py
+  ```
+  After a successful run, the results of the model prediction are printed in the client window, as follows:
+  ```log
+  {'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': 'Red Bull-Enhanced', 'rec_scores': 0.79903316}]"], 'tensors': []}
+  ```
+
+#### 3.2.2 C++ Serving
+
+Unlike Python Serving, the C++ Serving client calls C++ OPs to predict, so before starting the service you need to compile and install the serving server package and set `SERVING_BIN`.
+- Compile and install the Serving server package
+  ```shell
+  # Enter the working directory
+  cd PaddleClas/deploy/paddleserving
+  # One-click compile and install Serving server, set SERVING_BIN
+  source ./build_server.sh python3.7
+  ```
+  **Note:** The paths set in [build_server.sh](../build_server.sh#L55-L62) may need to be adjusted to the actual machine environment (CUDA version, python version, etc.) before compiling.
+
+- C++ Serving uses an input/output format different from Python Serving, so the 4 prototxt files generated in [3.1 Model conversion](#31-model-conversion) need to be overwritten with the pre-configured versions provided under `paddleserving/recognition/preprocess/`, by copying them as follows:
+ ```shell + # Enter PaddleClas/deploy directory + cd PaddleClas/deploy/ + + # Overwrite prototxt file + \cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_serving/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_serving/ + \cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_client/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_client/ + \cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/ + \cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ + ``` + +- Start the service: + ```shell + # Enter the working directory + cd PaddleClas/deploy/paddleserving/recognition + + # The default port number is 9400; the running log is saved in log_PPShiTu.txt by default + # CPU deployment + sh run_cpp_serving.sh + # GPU deployment, and specify card 0 + sh run_cpp_serving.sh 0 + ``` + +- send request: + ```shell + # send service request + python3.7 test_cpp_serving_client.py + ``` + After a successful run, the results of the model predictions are printed in the client's terminal window as follows: + ```log + WARNING: Logging before InitGoogleLogging() is written to STDERR + I0614 03:01:36.273097 6084 naming_service_thread.cpp:202] brpc::policy::ListNamingService("127.0.0.1:9400"): added 1 + I0614 03:01:37.393564 6084 general_model.cpp:490] [client]logid=0,client_cost=1107.82ms,server_cost=1101.75ms. + [{'bbox': [345, 95, 524, 585], 'rec_docs': 'Red Bull-Enhanced', 'rec_scores': 0.8073724}] + ``` + +- close the service: + If the service program is running in the foreground, you can press `Ctrl+C` to terminate the server program; if it is running in the background, you can use the kill command to close related processes, or you can execute the following command in the path where the service program is started to terminate the server program: + ```bash + python3.7 -m paddle_serving_server.serve stop + ``` + After the execution is completed, the `Process stopped` message appears, indicating that the service was successfully shut down. + + +## 4. FAQ + +**Q1**: No result is returned after the request is sent or an output decoding error is prompted + +**A1**: Do not set the proxy when starting the service and sending the request. You can close the proxy before starting the service and sending the request. 
The command to close the proxy is: +```shell +unset https_proxy +unset http_proxy +``` +**Q2**: nothing happens after starting the service + +**A2**: You can check whether the path corresponding to `model_config` in `config.yml` exists, and whether the folder name is correct + +For more service deployment types, such as `RPC prediction service`, you can refer to Serving's [github official website](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples) diff --git a/docs/images/action_rec_by_classification.gif b/docs/images/action_rec_by_classification.gif new file mode 100644 index 0000000000000000000000000000000000000000..52046b249b145d2099c7360d3c56abc3b51764bd Binary files /dev/null and b/docs/images/action_rec_by_classification.gif differ diff --git a/docs/zh_CN/algorithm_introduction/action_rec_by_classification.md b/docs/zh_CN/algorithm_introduction/action_rec_by_classification.md new file mode 100644 index 0000000000000000000000000000000000000000..bf8272be99128eef40a4bdbfbc6a0273b2f51c8e --- /dev/null +++ b/docs/zh_CN/algorithm_introduction/action_rec_by_classification.md @@ -0,0 +1,201 @@ +# 基于图像分类的打电话行为识别模型 + +------ + +## 目录 +- [1. 模型和应用场景介绍](#1) +- [2. 模型训练、评估和预测](#2) + - [2.1 PaddleClas 环境安装](#2.1) + - [2.2 数据准备](#2.2) + - [2.2.1 数据集下载](#2.2.1) + - [2.2.2 训练及测试图像处理](#2.2.2) + - [2.2.3 标注文件准备](#2.2.3) + - [2.3 模型训练](#2.3) + - [2.4 模型评估](#2.4) + - [2.5 模型预测](#2.5) +- [3. 模型推理部署](#3) + - [3.1 模型导出](#3.1) + - [3.2 执行模型预测](#3.2) +- [4. 在PP-Human中使用该模型](#4) + +
+<div align="center">
+  <img src="../../images/action_rec_by_classification.gif"/>
+  <center>数据来源及版权归属:天覆科技,感谢提供并开源实际场景数据,仅限学术研究使用</center>
+</div>
+ + + +## 1. 模型和应用场景介绍 +行为识别在智慧社区,安防监控等方向具有广泛应用。根据行为的不同,一些行为可以通过图像直接进行行为判断(例如打电话)。这里我们提供了基于图像分类的打电话行为识别模型,对人物图像进行是否打电话的二分类识别。 + +| 任务 | 算法 | 精度 | 预测速度(ms) | 模型权重 | +| ---- | ---- | ---- | ---- | ------ | +| 打电话行为识别 | PP-HGNet-tiny | 准确率: 86.85 | 单人 2.94ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.pdparams) | + +注: +1. 该模型使用[UAV-Human](https://github.com/SUTDCV/UAV-Human)的打电话行为部分进行训练和测试。 +2. 预测速度为NVIDIA T4 机器上使用TensorRT FP16时的速度, 速度包含数据预处理、模型预测、后处理全流程。 + +该模型为实时行人分析工具[PP-Human](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/pipeline)中行为识别功能的一部分,欢迎体验PP-Human的完整功能。 + + +## 2. 模型训练、评估和预测 + + + +### 2.1 PaddleClas 环境安装 +请根据[环境准备](../installation/install_paddleclas.md)完成PaddleClas的环境依赖准备。 + + + +### 2.2 数据准备 + + + +#### 2.2.1 数据集下载 +打电话的行为识别是基于公开数据集[UAV-Human](https://github.com/SUTDCV/UAV-Human)进行训练的。请通过该链接填写相关数据集申请材料后获取下载链接。 + +在`UAVHuman/ActionRecognition/RGBVideos`路径下包含了该数据集中RGB视频数据集,每个视频的文件名即为其标注信息。 + + + +#### 2.2.2 训练及测试图像处理 +根据视频文件名,其中与行为识别相关的为`A`相关的字段(即action),我们可以找到期望识别的动作类型数据。 +- 正样本视频:以打电话为例,我们只需找到包含`A024`的文件。 +- 负样本视频:除目标动作以外所有的视频。 + +鉴于视频数据转化为图像会有较多冗余,对于正样本视频,我们间隔8帧进行采样,并使用行人检测模型处理为半身图像(取检测框的上半部分,即`img = img[:H/2, :, :]`)。正样本视频中的采样得到的图像即视为正样本,负样本视频中采样得到的图像即为负样本。 + +**注意**: 正样本视频中并不完全符合打电话这一动作,在视频开头结尾部分会出现部分冗余动作,需要移除。 + + + +#### 2.2.3 标注文件准备 +根据[PaddleClas数据集格式说明](../data_preparation/classification_dataset.md),标注文件样例如下,其中`0`,`1`分别是图片对应所属的类别: +``` + # 每一行采用"空格"分隔图像路径与标注 + train/000001.jpg 0 + train/000002.jpg 0 + train/000003.jpg 1 + ... +``` + +此外,标签文件`phone_label_list.txt`,帮助将分类序号映射到具体的类型名称: +``` +0 make_a_phone_call # 类型0 +1 normal # 类型1 +``` + +完成上述内容后,放置于`dataset`目录下,文件结构如下: +``` +data/ +├── images # 放置所有图片 +├── phone_label_list.txt # 标签文件 +├── phone_train_list.txt # 训练列表,包含图片及其对应类型 +└── phone_val_list.txt # 测试列表,包含图片及其对应类型 +``` + + +### 2.3 模型训练 + +通过如下命令启动训练: +```bash +export CUDA_VISIBLE_DEVICES=0,1,2,3 +python3 -m paddle.distributed.launch \ + --gpus="0,1,2,3" \ + tools/train.py \ + -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \ + -o Arch.pretrained=True +``` +其中 `Arch.pretrained` 为 `True`表示使用预训练权重帮助训练。 + + + +### 2.4 模型评估 +训练好模型之后,可以通过以下命令实现对模型指标的评估。 + +```bash +python3 tools/eval.py \ + -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \ + -o Global.pretrained_model=output/PPHGNet_tiny/best_model +``` + +其中 `-o Global.pretrained_model="output/PPHGNet_tiny/best_model"` 指定了当前最佳权重所在的路径,如果指定其他权重,只需替换对应的路径即可。 + + +### 2.5 模型预测 + +模型训练完成之后,可以加载训练得到的预训练模型,进行模型预测。在模型库的 `tools/infer.py` 中提供了完整的示例,只需执行下述命令即可完成模型预测: + +```bash +python3 tools/infer.py \ + -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \ + -o Global.pretrained_model=output/PPHGNet_tiny/best_model + -o Infer.infer_imgs={your test image} +``` + + + +## 3. 
模型推理部署 +Paddle Inference 是飞桨的原生推理库,作用于服务器端和云端,提供高性能的推理能力。相比于直接基于预训练模型进行预测,Paddle Inference可使用 MKLDNN、CUDNN、TensorRT 进行预测加速,从而实现更优的推理性能。更多关于 Paddle Inference 推理引擎的介绍,可以参考 [Paddle Inference官网教程](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/infer/inference/inference_cn.html)。 + + + +### 3.1 模型导出 +```bash +python3 tools/export_model.py \ + -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \ + -o Global.pretrained_model=output/PPHGNet_tiny/best_model \ + -o Global.save_inference_dir=deploy/models//PPHGNet_tiny_calling_halfbody/ +``` +执行完该脚本后会在 `deploy/models/` 下生成 `PPHGNet_tiny_calling_halfbody` 文件夹,文件结构如下: + +``` +├── PPHGNet_tiny_calling_halfbody +│ ├── inference.pdiparams +│ ├── inference.pdiparams.info +│ └── inference.pdmodel +``` + + + +### 3.2 执行模型预测 +在`deploy`下,执行下列命令: + +```bash +# Current path is {root of PaddleClas}/deploy + +python3 python/predict_cls.py -c configs/inference_cls_based_action.yaml +``` + + + +## 4. 在PP-Human中使用该模型 +[PP-Human](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/pipeline)是基于飞桨深度学习框架的业界首个开源产业级实时行人分析工具,具有功能丰富,应用广泛和部署高效三大优势。该模型可以应用于PP-Human中,实现实时视频的打电话行为识别功能。 + +由于当前的PP-Human功能集成在[PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)中,需要按以下步骤实现该模型在PP-Human中的调用适配。 + +1. 完成模型导出 +2. 重命名模型 +```bash +cd deploy/models/PPHGNet_tiny_calling_halfbody + +mv inference.pdiparams model.pdiparams +mv inference.pdiparams.info model.pdiparams.info +mv inference.pdmodel model.pdmodel +``` +3. 下载[预测配置文件](https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_configs/PPHGNet_tiny_calling_halfbody/infer_cfg.yml) + +``` bash +wget https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_configs/PPHGNet_tiny_calling_halfbody/infer_cfg.yml +``` +完成后文件结构如下,即可在PP-Human中使用: +``` +PPHGNet_tiny_calling_halfbody +├── infer_cfg.yml +├── model.pdiparams +├── model.pdiparams.info +└── model.pdmodel +``` + +详细请参考[基于图像分类的行为识别——打电话识别](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/docs/tutorials/action.md#%E5%9F%BA%E4%BA%8E%E5%9B%BE%E5%83%8F%E5%88%86%E7%B1%BB%E7%9A%84%E8%A1%8C%E4%B8%BA%E8%AF%86%E5%88%AB%E6%89%93%E7%94%B5%E8%AF%9D%E8%AF%86%E5%88%AB)。 diff --git a/docs/zh_CN/inference_deployment/classification_serving_deploy.md b/docs/zh_CN/inference_deployment/classification_serving_deploy.md new file mode 100644 index 0000000000000000000000000000000000000000..3d9c999625535b9a70c2ba443512717bcb3a975c --- /dev/null +++ b/docs/zh_CN/inference_deployment/classification_serving_deploy.md @@ -0,0 +1,240 @@ +简体中文 | [English](../../en/inference_deployment/classification_serving_deploy_en.md) + +# 分类模型服务化部署 + +## 目录 + +- [1. 简介](#1-简介) +- [2. Serving 安装](#2-serving-安装) +- [3. 图像分类服务部署](#3-图像分类服务部署) +- [3.1 模型转换](#31-模型转换) +- [3.2 服务部署和请求](#32-服务部署和请求) + - [3.2.1 Python Serving](#321-python-serving) + - [3.2.2 C++ Serving](#322-c-serving) +- [4.FAQ](#4faq) + + +## 1. 简介 + +[Paddle Serving](https://github.com/PaddlePaddle/Serving) 旨在帮助深度学习开发者轻松部署在线预测服务,支持一键部署工业级的服务能力、客户端和服务端之间高并发和高效通信、并支持多种编程语言开发客户端。 + +该部分以 HTTP 预测服务部署为例,介绍怎样在 PaddleClas 中使用 PaddleServing 部署模型服务。目前只支持 Linux 平台部署,暂不支持 Windows 平台。 + + +## 2. 
Serving 安装 + +Serving 官网推荐使用 docker 安装并部署 Serving 环境。首先需要拉取 docker 环境并创建基于 Serving 的 docker。 + +```shell +# 启动GPU docker +docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel +nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash +nvidia-docker exec -it test bash + +# 启动CPU docker +docker pull paddlepaddle/serving:0.7.0-devel +docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash +docker exec -it test bash +``` + +进入 docker 后,需要安装 Serving 相关的 python 包。 +```shell +python3.7 -m pip install paddle-serving-client==0.7.0 +python3.7 -m pip install paddle-serving-app==0.7.0 +python3.7 -m pip install faiss-cpu==1.7.1post2 + +#若为CPU部署环境: +python3.7 -m pip install paddle-serving-server==0.7.0 # CPU +python3.7 -m pip install paddlepaddle==2.2.0 # CPU + +#若为GPU部署环境 +python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6 +python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2 + +#其他GPU环境需要确认环境再选择执行哪一条 +python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6 +python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8 +``` + +* 如果安装速度太慢,可以通过 `-i https://pypi.tuna.tsinghua.edu.cn/simple` 更换源,加速安装过程。 +* 其他环境配置安装请参考:[使用Docker安装Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md) + + + +## 3. 图像分类服务部署 + +下面以经典的 ResNet50_vd 模型为例,介绍如何部署图像分类服务。 + + +### 3.1 模型转换 + +使用 PaddleServing 做服务化部署时,需要将保存的 inference 模型转换为 Serving 模型。 +- 进入工作目录: + ```shell + cd deploy/paddleserving + ``` +- 下载并解压 ResNet50_vd 的 inference 模型: + ```shell + # 下载 ResNet50_vd inference 模型 + wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar + # 解压 ResNet50_vd inference 模型 + tar xf ResNet50_vd_infer.tar + ``` +- 用 paddle_serving_client 命令把下载的 inference 模型转换成易于 Server 部署的模型格式: + ```shell + # 转换 ResNet50_vd 模型 + python3.7 -m paddle_serving_client.convert \ + --dirname ./ResNet50_vd_infer/ \ + --model_filename inference.pdmodel \ + --params_filename inference.pdiparams \ + --serving_server ./ResNet50_vd_serving/ \ + --serving_client ./ResNet50_vd_client/ + ``` + 上述命令中参数具体含义如下表所示 + | 参数 | 类型 | 默认值 | 描述 | + | ----------------- | ---- | ------------------ | ------------------------------------------------------------ | + | `dirname` | str | - | 需要转换的模型文件存储路径,Program结构文件和参数文件均保存在此目录。 | + | `model_filename` | str | None | 存储需要转换的模型Inference Program结构的文件名称。如果设置为None,则使用 `__model__` 作为默认的文件名 | + | `params_filename` | str | None | 存储需要转换的模型所有参数的文件名称。当且仅当所有模型参数被保>存在一个单独的二进制文件中,它才需要被指定。如果模型参数是存储在各自分离的文件中,设置它的值为None | + | `serving_server` | str | `"serving_server"` | 转换后的模型文件和配置文件的存储路径。默认值为serving_server | + | `serving_client` | str | `"serving_client"` | 转换后的客户端配置文件存储路径。默认值为serving_client | + + ResNet50_vd 推理模型转换完成后,会在当前文件夹多出 `ResNet50_vd_serving` 和 `ResNet50_vd_client` 的文件夹,具备如下结构: + ```shell + ├── ResNet50_vd_serving/ + │ ├── inference.pdiparams + │ ├── inference.pdmodel + │ ├── serving_server_conf.prototxt + │ └── serving_server_conf.stream.prototxt + │ + └── ResNet50_vd_client/ + ├── serving_client_conf.prototxt + └── serving_client_conf.stream.prototxt + ``` + +- Serving 为了兼容不同模型的部署,提供了输入输出重命名的功能。让不同的模型在推理部署时,只需要修改配置文件的 `alias_name` 即可,无需修改代码即可完成推理部署。因此在转换完毕后需要分别修改 `ResNet50_vd_serving` 下的文件 `serving_server_conf.prototxt` 和 `ResNet50_vd_client` 下的文件 `serving_client_conf.prototxt`,将 `fetch_var` 中 `alias_name:` 后的字段改为 `prediction`,修改后的 
`serving_server_conf.prototxt` 和 `serving_client_conf.prototxt` 如下所示: + ```log + feed_var { + name: "inputs" + alias_name: "inputs" + is_lod_tensor: false + feed_type: 1 + shape: 3 + shape: 224 + shape: 224 + } + fetch_var { + name: "save_infer_model/scale_0.tmp_1" + alias_name: "prediction" + is_lod_tensor: false + fetch_type: 1 + shape: 1000 + } + ``` + +### 3.2 服务部署和请求 + +paddleserving 目录包含了启动 pipeline 服务、C++ serving服务和发送预测请求的代码,主要包括: +```shell +__init__.py +classification_web_service.py # 启动pipeline服务端的脚本 +config.yml # 启动pipeline服务的配置文件 +pipeline_http_client.py # http方式发送pipeline预测请求的脚本 +pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本 +readme.md # 分类模型服务化部署文档 +run_cpp_serving.sh # 启动C++ Serving部署的脚本 +test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本 +``` + +#### 3.2.1 Python Serving + +- 启动服务: + ```shell + # 启动服务,运行日志保存在 log.txt + python3.7 classification_web_service.py &>log.txt & + ``` + +- 发送请求: + ```shell + # 发送服务请求 + python3.7 pipeline_http_client.py + ``` + 成功运行后,模型预测的结果会打印在客户端中,如下所示: + ```log + {'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []} + ``` +- 关闭服务 +如果服务程序在前台运行,可以按下`Ctrl+C`来终止服务端程序;如果在后台运行,可以使用kill命令关闭相关进程,也可以在启动服务程序的路径下执行以下命令来终止服务端程序: + ```bash + python3.7 -m paddle_serving_server.serve stop + ``` + 执行完毕后出现`Process stopped`信息表示成功关闭服务。 + + +#### 3.2.2 C++ Serving + +与Python Serving不同,C++ Serving客户端调用 C++ OP来预测,因此在启动服务之前,需要编译并安装 serving server包,并设置 `SERVING_BIN`。 + +- 编译并安装Serving server包 + ```shell + # 进入工作目录 + cd PaddleClas/deploy/paddleserving + # 一键编译安装Serving server、设置 SERVING_BIN + source ./build_server.sh python3.7 + ``` + **注:**[build_server.sh](./build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。 + +- 修改客户端文件 `ResNet50_vd_client/serving_client_conf.prototxt` ,将 `feed_type:` 后的字段改为20,将第一个 `shape:` 后的字段改为1并删掉其余的 `shape` 字段。 + ```log + feed_var { + name: "inputs" + alias_name: "inputs" + is_lod_tensor: false + feed_type: 20 + shape: 1 + } + ``` +- 修改 [`test_cpp_serving_client`](./test_cpp_serving_client.py) 的部分代码 + 1. 修改 [`load_client_config`](./test_cpp_serving_client.py#L28) 处的代码,将 `load_client_config` 后的路径改为 `ResNet50_vd_client/serving_client_conf.prototxt` 。 + 2. 
修改 [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) 处的代码,将 `inputs` 改为与 `ResNet50_vd_client/serving_client_conf.prototxt` 中 `feed_var` 字段下面的 `name` 一致。由于部分模型client文件中的 `name` 为 `x` 而不是 `inputs` ,因此使用这些模型进行C++ Serving部署时需要注意这一点。 + +- 启动服务: + ```shell + # 启动服务, 服务在后台运行,运行日志保存在 nohup.txt + # CPU部署 + bash run_cpp_serving.sh + # GPU部署并指定0号卡 + bash run_cpp_serving.sh 0 + ``` + +- 发送请求: + ```shell + # 发送服务请求 + python3.7 test_cpp_serving_client.py + ``` + 成功运行后,模型预测的结果会打印在客户端中,如下所示: + ```log + prediction: daisy, probability: 0.9341399073600769 + ``` +- 关闭服务: + 如果服务程序在前台运行,可以按下`Ctrl+C`来终止服务端程序;如果在后台运行,可以使用kill命令关闭相关进程,也可以在启动服务程序的路径下执行以下命令来终止服务端程序: + ```bash + python3.7 -m paddle_serving_server.serve stop + ``` + 执行完毕后出现`Process stopped`信息表示成功关闭服务。 + +## 4.FAQ + +**Q1**: 发送请求后没有结果返回或者提示输出解码报错 + +**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是: +```shell +unset https_proxy +unset http_proxy +``` + +**Q2**: 启动服务后没有任何反应 + +**A2**: 可以检查`config.yml`中`model_config`对应的路径是否存在,文件夹命名是否正确 + +更多的服务部署类型,如 `RPC 预测服务` 等,可以参考 Serving 的[github 官网](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples) diff --git a/docs/zh_CN/inference_deployment/cpp_deploy_on_windows.md b/docs/zh_CN/inference_deployment/cpp_deploy_on_windows.md old mode 100755 new mode 100644 index b7089cbdb072b7365d44e13e9a0abd3d6f056483..03bf54348d0b48b64843eb89699dce4ffe64ce8a --- a/docs/zh_CN/inference_deployment/cpp_deploy_on_windows.md +++ b/docs/zh_CN/inference_deployment/cpp_deploy_on_windows.md @@ -5,13 +5,13 @@ PaddleClas 在 Windows 平台下基于 `Visual Studio 2019 Community` 进行了 ----- ## 目录 * [1. 前置条件](#1) - * [1.1 下载 PaddlePaddle C++ 预测库 paddle_inference_install_dir](#1.1) - * [1.2 安装配置 OpenCV](#1.2) + * [1.1 下载 PaddlePaddle C++ 预测库 paddle_inference_install_dir](#1.1) + * [1.2 安装配置 OpenCV](#1.2) * [2. 使用 Visual Studio 2019 编译](#2) * [3. 预测](#3) - * [3.1 准备 inference model](#3.1) - * [3.2 运行预测](#3.2) - * [3.3 注意事项](#3.3) + * [3.1 准备 inference model](#3.1) + * [3.2 运行预测](#3.2) + * [3.3 注意事项](#3.3) ## 1. 
前置条件 diff --git a/docs/zh_CN/inference_deployment/export_model.md b/docs/zh_CN/inference_deployment/export_model.md index 5e7d204c5f3e9755d2c97428c040fe7c2aa328e2..4e2d98e9310602b4df7c0bedee32be88b7cf8fef 100644 --- a/docs/zh_CN/inference_deployment/export_model.md +++ b/docs/zh_CN/inference_deployment/export_model.md @@ -91,9 +91,9 @@ python3 tools/export_model.py \ 导出的 inference 模型文件可用于预测引擎进行推理部署,根据不同的部署方式/平台,可参考: -* [Python 预测](./python_deploy.md) -* [C++ 预测](./cpp_deploy.md)(目前仅支持分类模型) -* [Python Whl 预测](./whl_deploy.md)(目前仅支持分类模型) -* [PaddleHub Serving 部署](./paddle_hub_serving_deploy.md)(目前仅支持分类模型) -* [PaddleServing 部署](./paddle_serving_deploy.md) -* [PaddleLite 部署](./paddle_lite_deploy.md)(目前仅支持分类模型) +* [Python 预测](./inference/python_deploy.md) +* [C++ 预测](./inference/cpp_deploy.md)(目前仅支持分类模型) +* [Python Whl 预测](./inference/whl_deploy.md)(目前仅支持分类模型) +* [PaddleHub Serving 部署](./deployment/paddle_hub_serving_deploy.md)(目前仅支持分类模型) +* [PaddleServing 部署](./deployment/paddle_serving_deploy.md) +* [PaddleLite 部署](./deployment/paddle_lite_deploy.md)(目前仅支持分类模型) diff --git a/docs/zh_CN/inference_deployment/paddle_hub_serving_deploy.md b/docs/zh_CN/inference_deployment/paddle_hub_serving_deploy.md index e3892e9a96810c418ec508a555a9d276b3ba73ae..37d688b32051af8fe5a44dcd245c5340e5baafe2 100644 --- a/docs/zh_CN/inference_deployment/paddle_hub_serving_deploy.md +++ b/docs/zh_CN/inference_deployment/paddle_hub_serving_deploy.md @@ -1,9 +1,9 @@ +简体中文 | [English](../../en/inference_deployment/paddle_hub_serving_deploy_en.md) + # 基于 PaddleHub Serving 的服务部署 PaddleClas 支持通过 PaddleHub 快速进行服务化部署。目前支持图像分类的部署,图像识别的部署敬请期待。 ---- - ## 目录 - [1. 简介](#1) @@ -22,20 +22,20 @@ PaddleClas 支持通过 PaddleHub 快速进行服务化部署。目前支持图 hubserving 服务部署配置服务包 `clas` 下包含 3 个必选文件,目录如下: -``` -hubserving/clas/ - └─ __init__.py 空文件,必选 - └─ config.json 配置文件,可选,使用配置启动服务时作为参数传入 - └─ module.py 主模块,必选,包含服务的完整逻辑 - └─ params.py 参数文件,必选,包含模型路径、前后处理参数等参数 +```shell +deploy/hubserving/clas/ +├── __init__.py # 空文件,必选 +├── config.json # 配置文件,可选,使用配置启动服务时作为参数传入 +├── module.py # 主模块,必选,包含服务的完整逻辑 +└── params.py # 参数文件,必选,包含模型路径、前后处理参数等参数 ``` ## 2. 准备环境 ```shell -# 安装 paddlehub,请安装 2.0 版本 -pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple +# 安装 paddlehub,建议安装 2.1.0 版本 +python3.7 -m pip install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple ``` @@ -53,30 +53,27 @@ pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/sim ```python "inference_model_dir": "../inference/" ``` -需要注意, - * 模型文件(包括 `.pdmodel` 与 `.pdiparams`)名称必须为 `inference`。 - * 我们也提供了大量基于 ImageNet-1k 数据集的预训练模型,模型列表及下载地址详见[模型库概览](../algorithm_introduction/ImageNet_models.md),也可以使用自己训练转换好的模型。 +* 模型文件(包括 `.pdmodel` 与 `.pdiparams`)的名称必须为 `inference`。 +* 我们提供了大量基于 ImageNet-1k 数据集的预训练模型,模型列表及下载地址详见[模型库概览](../algorithm_introduction/ImageNet_models.md),也可以使用自己训练转换好的模型。 ## 4. 
安装服务模块 -针对 Linux 环境和 Windows 环境,安装命令如下。 - * 在 Linux 环境下,安装示例如下: -```shell -cd PaddleClas/deploy -# 安装服务模块: -hub install hubserving/clas/ -``` + ```shell + cd PaddleClas/deploy + # 安装服务模块: + hub install hubserving/clas/ + ``` * 在 Windows 环境下(文件夹的分隔符为`\`),安装示例如下: -```shell -cd PaddleClas\deploy -# 安装服务模块: -hub install hubserving\clas\ -``` + ```shell + cd PaddleClas\deploy + # 安装服务模块: + hub install hubserving\clas\ + ``` @@ -84,36 +81,34 @@ hub install hubserving\clas\ -### 5.1 命令行命令启动 +### 5.1 命令行启动 该方式仅支持使用 CPU 预测。启动命令: ```shell -$ hub serving start --modules Module1==Version1 \ - --port XXXX \ - --use_multiprocess \ - --workers \ -``` +hub serving start \ +--modules clas_system +--port 8866 +``` +这样就完成了一个服务化 API 的部署,使用默认端口号 8866。 **参数说明**: -|参数|用途| -|-|-| -|--modules/-m| [**必选**] PaddleHub Serving 预安装模型,以多个 Module==Version 键值对的形式列出
*`当不指定 Version 时,默认选择最新版本`*| -|--port/-p| [**可选**] 服务端口,默认为 8866| +|参数|用途| +|-|-| +|--modules/-m| [**必选**] PaddleHub Serving 预安装模型,以多个 Module==Version 键值对的形式列出
*`当不指定 Version 时,默认选择最新版本`*| +|--port/-p| [**可选**] 服务端口,默认为 8866| |--use_multiprocess| [**可选**] 是否启用并发方式,默认为单进程方式,推荐多核 CPU 机器使用此方式
*`Windows 操作系统只支持单进程方式`*| -|--workers| [**可选**] 在并发方式下指定的并发任务数,默认为 `2*cpu_count-1`,其中 `cpu_count` 为 CPU 核数| - -如按默认参数启动服务:```hub serving start -m clas_system``` - -这样就完成了一个服务化 API 的部署,使用默认端口号 8866。 - +|--workers| [**可选**] 在并发方式下指定的并发任务数,默认为 `2*cpu_count-1`,其中 `cpu_count` 为 CPU 核数| +更多部署细节详见 [PaddleHub Serving模型一键服务部署](https://paddlehub.readthedocs.io/zh_CN/release-v2.1/tutorial/serving.html) ### 5.2 配置文件启动 该方式仅支持使用 CPU 或 GPU 预测。启动命令: -```hub serving start -c config.json``` +```shell +hub serving start -c config.json +``` 其中,`config.json` 格式如下: @@ -163,12 +158,21 @@ hub serving start -c hubserving/clas/config.json ```shell cd PaddleClas/deploy -python hubserving/test_hubserving.py server_url image_path -``` +python3.7 hubserving/test_hubserving.py \ +--server_url http://127.0.0.1:8866/predict/clas_system \ +--image_file ./hubserving/ILSVRC2012_val_00006666.JPEG \ +--batch_size 8 +``` +**预测输出** +```log +The result(s): class_ids: [57, 67, 68, 58, 65], label_names: ['garter snake, grass snake', 'diamondback, diamondback rattlesnake, Crotalus adamanteus', 'sidewinder, horned rattlesnake, Crotalus cerastes', 'water snake', 'sea snake'], scores: [0.21915, 0.15631, 0.14794, 0.13177, 0.12285] +The average time of prediction cost: 2.970 s/image +The average time cost: 3.014 s/image +The average top-1 score: 0.110 +``` **脚本参数说明**: -* **server_url**:服务地址,格式为 -`http://[ip_address]:[port]/predict/[module_name]` +* **server_url**:服务地址,格式为`http://[ip_address]:[port]/predict/[module_name]`。 * **image_path**:测试图像路径,可以是单张图片路径,也可以是图像集合目录路径。 * **batch_size**:[**可选**] 以 `batch_size` 大小为单位进行预测,默认为 `1`。 * **resize_short**:[**可选**] 预处理时,按短边调整大小,默认为 `256`。 @@ -178,41 +182,44 @@ python hubserving/test_hubserving.py server_url image_path **注意**:如果使用 `Transformer` 系列模型,如 `DeiT_***_384`, `ViT_***_384` 等,请注意模型的输入数据尺寸,需要指定`--resize_short=384 --crop_size=384`。 -访问示例: - -```shell -python hubserving/test_hubserving.py --server_url http://127.0.0.1:8866/predict/clas_system --image_file ./hubserving/ILSVRC2012_val_00006666.JPEG --batch_size 8 -``` - **返回结果格式说明**: 返回结果为列表(list),包含 top-k 个分类结果,以及对应的得分,还有此图片预测耗时,具体如下: -``` +```shell list: 返回结果 -└─ list: 第一张图片结果 - └─ list: 前 k 个分类结果,依 score 递减排序 - └─ list: 前 k 个分类结果对应的 score,依 score 递减排序 - └─ float: 该图分类耗时,单位秒 +└──list: 第一张图片结果 + ├── list: 前 k 个分类结果,依 score 递减排序 + ├── list: 前 k 个分类结果对应的 score,依 score 递减排序 + └── float: 该图分类耗时,单位秒 ``` + ## 7. 自定义修改服务模块 -如果需要修改服务逻辑,需要进行以下操作: +如果需要修改服务逻辑,需要进行以下操作: -1. 停止服务 -```hub serving stop --port/-p XXXX``` +1. 停止服务 + ```shell + hub serving stop --port/-p XXXX + ``` -2. 到相应的 `module.py` 和 `params.py` 等文件中根据实际需求修改代码。`module.py` 修改后需要重新安装(`hub install hubserving/clas/`)并部署。在进行部署前,可通过 `python hubserving/clas/module.py` 测试已安装服务模块。 +2. 到相应的 `module.py` 和 `params.py` 等文件中根据实际需求修改代码。`module.py` 修改后需要重新安装(`hub install hubserving/clas/`)并部署。在进行部署前,可先通过 `python3.7 hubserving/clas/module.py` 命令来快速测试准备部署的代码。 -3. 卸载旧服务包 -```hub uninstall clas_system``` +3. 卸载旧服务包 + ```shell + hub uninstall clas_system + ``` -4. 安装修改后的新服务包 -```hub install hubserving/clas/``` +4. 安装修改后的新服务包 + ```shell + hub install hubserving/clas/ + ``` -5.重新启动服务 -```hub serving start -m clas_system``` +5. 
重新启动服务 + ```shell + hub serving start -m clas_system + ``` **注意**: 常用参数可在 `PaddleClas/deploy/hubserving/clas/params.py` 中修改: @@ -229,4 +236,4 @@ list: 返回结果 'class_id_map_file': ``` -为了避免不必要的延时以及能够以 batch_size 进行预测,数据预处理逻辑(包括 `resize`、`crop` 等操作)均在客户端完成,因此需要在 `PaddleClas/deploy/hubserving/test_hubserving.py#L35-L52` 中修改。 +为了避免不必要的延时以及能够以 batch_size 进行预测,数据预处理逻辑(包括 `resize`、`crop` 等操作)均在客户端完成,因此需要在 [PaddleClas/deploy/hubserving/test_hubserving.py#L41-L47](../../../deploy/hubserving/test_hubserving.py#L41-L47) 以及 [PaddleClas/deploy/hubserving/test_hubserving.py#L51-L76](../../../deploy/hubserving/test_hubserving.py#L51-L76) 中修改数据预处理逻辑相关代码。 diff --git a/docs/zh_CN/inference_deployment/paddle_lite_deploy.md b/docs/zh_CN/inference_deployment/paddle_lite_deploy.md index 68480f769a67aae33ca614b0eede2581fcf57392..bdfa89a2d8af904d5d0532053d09a2257ca83333 100644 --- a/docs/zh_CN/inference_deployment/paddle_lite_deploy.md +++ b/docs/zh_CN/inference_deployment/paddle_lite_deploy.md @@ -231,9 +231,9 @@ adb push imgs/tabby_cat.jpg /data/local/tmp/arm_cpu/ ```shell clas_model_file ./MobileNetV3_large_x1_0.nb # 模型文件地址 -label_path ./imagenet1k_label_list.txt # 类别映射文本文件 +label_path ./imagenet1k_label_list.txt # 类别映射文本文件 resize_short_size 256 # resize之后的短边边长 -crop_size 224 # 裁剪后用于预测的边长 +crop_size 224 # 裁剪后用于预测的边长 visualize 0 # 是否进行可视化,如果选择的话,会在当前文件夹下生成名为clas_result.png的图像文件 num_threads 1 # 线程数,默认是1。 precision FP32 # 精度类型,可以选择 FP32 或者 INT8,默认是 FP32。 @@ -263,4 +263,3 @@ A1:如果已经走通了上述步骤,更换模型只需要替换 `.nb` 模 Q2:换一个图测试怎么做? A2:替换 debug 下的测试图像为你想要测试的图像,使用 ADB 再次 push 到手机上即可。 - diff --git a/docs/zh_CN/inference_deployment/paddle_serving_deploy.md b/docs/zh_CN/inference_deployment/paddle_serving_deploy.md deleted file mode 100644 index 18faeb3655e78b04394aa5f64caca91f5f1ae630..0000000000000000000000000000000000000000 --- a/docs/zh_CN/inference_deployment/paddle_serving_deploy.md +++ /dev/null @@ -1,294 +0,0 @@ -# 模型服务化部署 --------- -## 目录 -- [1. 简介](#1) -- [2. Serving 安装](#2) -- [3. 图像分类服务部署](#3) - - [3.1 模型转换](#3.1) - - [3.2 服务部署和请求](#3.2) - - [3.2.1 Python Serving](#3.2.1) - - [3.2.2 C++ Serving](#3.2.2) -- [4. 图像识别服务部署](#4) - - [4.1 模型转换](#4.1) - - [4.2 服务部署和请求](#4.2) - - [4.2.1 Python Serving](#4.2.1) - - [4.2.2 C++ Serving](#4.2.2) -- [5. FAQ](#5) - - -## 1. 简介 -[Paddle Serving](https://github.com/PaddlePaddle/Serving) 旨在帮助深度学习开发者轻松部署在线预测服务,支持一键部署工业级的服务能力、客户端和服务端之间高并发和高效通信、并支持多种编程语言开发客户端。 - -该部分以 HTTP 预测服务部署为例,介绍怎样在 PaddleClas 中使用 PaddleServing 部署模型服务。目前只支持 Linux 平台部署,暂不支持 Windows 平台。 - - -## 2. 
Serving 安装 -Serving 官网推荐使用 docker 安装并部署 Serving 环境。首先需要拉取 docker 环境并创建基于 Serving 的 docker。 - -```shell -# 启动GPU docker -docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel -nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash -nvidia-docker exec -it test bash - -# 启动CPU docker -docker pull paddlepaddle/serving:0.7.0-devel -docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash -docker exec -it test bash -``` - -进入 docker 后,需要安装 Serving 相关的 python 包。 -```shell -pip3 install paddle-serving-client==0.7.0 -pip3 install paddle-serving-app==0.7.0 -pip3 install faiss-cpu==1.7.1post2 - -#若为CPU部署环境: -pip3 install paddle-serving-server==0.7.0 # CPU -pip3 install paddlepaddle==2.2.0 # CPU - -#若为GPU部署环境 -pip3 install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6 -pip3 install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2 - -#其他GPU环境需要确认环境再选择执行哪一条 -pip3 install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6 -pip3 install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8 -``` - -* 如果安装速度太慢,可以通过 `-i https://pypi.tuna.tsinghua.edu.cn/simple` 更换源,加速安装过程。 -* 其他环境配置安装请参考: [使用Docker安装Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md) - - - -## 3. 图像分类服务部署 - -### 3.1 模型转换 -使用 PaddleServing 做服务化部署时,需要将保存的 inference 模型转换为 Serving 模型。下面以经典的 ResNet50_vd 模型为例,介绍如何部署图像分类服务。 -- 进入工作目录: -```shell -cd deploy/paddleserving -``` -- 下载 ResNet50_vd 的 inference 模型: -```shell -# 下载并解压 ResNet50_vd 模型 -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar -``` -- 用 paddle_serving_client 把下载的 inference 模型转换成易于 Server 部署的模型格式: -``` -# 转换 ResNet50_vd 模型 -python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \ - --model_filename inference.pdmodel \ - --params_filename inference.pdiparams \ - --serving_server ./ResNet50_vd_serving/ \ - --serving_client ./ResNet50_vd_client/ -``` -ResNet50_vd 推理模型转换完成后,会在当前文件夹多出 `ResNet50_vd_serving` 和 `ResNet50_vd_client` 的文件夹,具备如下格式: -``` -|- ResNet50_vd_serving/ - |- inference.pdiparams - |- inference.pdmodel - |- serving_server_conf.prototxt - |- serving_server_conf.stream.prototxt -|- ResNet50_vd_client - |- serving_client_conf.prototxt - |- serving_client_conf.stream.prototxt -``` -得到模型文件之后,需要分别修改 `ResNet50_vd_serving` 和 `ResNet50_vd_client` 下文件 `serving_server_conf.prototxt` 中的 alias 名字:将 `fetch_var` 中的 `alias_name` 改为 `prediction` - -**备注**: Serving 为了兼容不同模型的部署,提供了输入输出重命名的功能。这样,不同的模型在推理部署时,只需要修改配置文件的 alias_name 即可,无需修改代码即可完成推理部署。 -修改后的 serving_server_conf.prototxt 如下所示: -``` -feed_var { - name: "inputs" - alias_name: "inputs" - is_lod_tensor: false - feed_type: 1 - shape: 3 - shape: 224 - shape: 224 -} -fetch_var { - name: "save_infer_model/scale_0.tmp_1" - alias_name: "prediction" - is_lod_tensor: false - fetch_type: 1 - shape: 1000 -} -``` - -### 3.2 服务部署和请求 -paddleserving 目录包含了启动 pipeline 服务、C++ serving服务和发送预测请求的代码,包括: -```shell -__init__.py -config.yml # 启动pipeline服务的配置文件 -pipeline_http_client.py # http方式发送pipeline预测请求的脚本 -pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本 -classification_web_service.py # 启动pipeline服务端的脚本 -run_cpp_serving.sh # 启动C++ Serving部署的脚本 -test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本 -``` - -#### 3.2.1 Python Serving -- 启动服务: -```shell -# 启动服务,运行日志保存在 log.txt -python3 classification_web_service.py &>log.txt & -``` - -- 发送请求: -```shell -# 发送服务请求 -python3 pipeline_http_client.py 
-``` -成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下: -``` -{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []} -``` - - -#### 3.2.2 C++ Serving -- 启动服务: -```shell -# 启动服务, 服务在后台运行,运行日志保存在 nohup.txt -sh run_cpp_serving.sh -``` - -- 发送请求: -```shell -# 发送服务请求 -python3 test_cpp_serving_client.py -``` -成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下: -``` -prediction: daisy, probability: 0.9341399073600769 -``` - - -## 4.图像识别服务部署 -使用 PaddleServing 做服务化部署时,需要将保存的 inference 模型转换为 Serving 模型。 下面以 PP-ShiTu 中的超轻量图像识别模型为例,介绍图像识别服务的部署。 - -## 4.1 模型转换 -- 下载通用检测 inference 模型和通用识别 inference 模型 -``` -cd deploy -# 下载并解压通用识别模型 -wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar -cd models -tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar -# 下载并解压通用检测模型 -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar -tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar -``` -- 转换识别 inference 模型为 Serving 模型: -``` -# 转换识别模型 -python3 -m paddle_serving_client.convert --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \ - --model_filename inference.pdmodel \ - --params_filename inference.pdiparams \ - --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \ - --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/ -``` -识别推理模型转换完成后,会在当前文件夹多出 `general_PPLCNet_x2_5_lite_v1.0_serving/` 和 `general_PPLCNet_x2_5_lite_v1.0_client/` 的文件夹。分别修改 `general_PPLCNet_x2_5_lite_v1.0_serving/` 和 `general_PPLCNet_x2_5_lite_v1.0_client/` 目录下的 serving_server_conf.prototxt 中的 alias 名字: 将 `fetch_var` 中的 `alias_name` 改为 `features`。 -修改后的 serving_server_conf.prototxt 内容如下: -``` -feed_var { - name: "x" - alias_name: "x" - is_lod_tensor: false - feed_type: 1 - shape: 3 - shape: 224 - shape: 224 -} -fetch_var { - name: "save_infer_model/scale_0.tmp_1" - alias_name: "features" - is_lod_tensor: false - fetch_type: 1 - shape: 512 -} -``` -- 转换通用检测 inference 模型为 Serving 模型: -``` -# 转换通用检测模型 -python3 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \ - --model_filename inference.pdmodel \ - --params_filename inference.pdiparams \ - --serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \ - --serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/ -``` -检测 inference 模型转换完成后,会在当前文件夹多出 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` 和 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` 的文件夹。 - -**注意:** 此处不需要修改 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` 目录下的 serving_server_conf.prototxt 中的 alias 名字。 - -- 下载并解压已经构建后的检索库 index -``` -cd ../ -wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar -``` - -## 4.2 服务部署和请求 -**注意:** 识别服务涉及到多个模型,出于性能考虑采用 PipeLine 部署方式。Pipeline 部署方式当前不支持 windows 平台。 -- 进入到工作目录 -```shell -cd ./deploy/paddleserving/recognition -``` -paddleserving 目录包含启动 Python Pipeline 服务、C++ Serving 服务和发送预测请求的代码,包括: -``` -__init__.py -config.yml # 启动python pipeline服务的配置文件 -pipeline_http_client.py # http方式发送pipeline预测请求的脚本 -pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本 -recognition_web_service.py # 启动pipeline服务端的脚本 -run_cpp_serving.sh # 启动C++ Pipeline Serving部署的脚本 -test_cpp_serving_client.py # rpc方式发送C++ Pipeline serving预测请求的脚本 -``` - - -#### 4.2.1 Python Serving -- 启动服务: -``` -# 启动服务,运行日志保存在 log.txt -python3 recognition_web_service.py &>log.txt & -``` - -- 发送请求: -``` -python3 pipeline_http_client.py 
-``` -成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下: -``` -{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': '红牛-强化型', 'rec_scores': 0.79903316}]"], 'tensors': []} -``` - - -#### 4.2.2 C++ Serving -- 启动服务: -```shell -# 启动服务: 此处会在后台同时启动主体检测和特征提取服务,端口号分别为9293和9294; -# 运行日志分别保存在 log_mainbody_detection.txt 和 log_feature_extraction.txt中 -sh run_cpp_serving.sh -``` - -- 发送请求: -```shell -# 发送服务请求 -python3 test_cpp_serving_client.py -``` -成功运行后,模型预测的结果会打印在 cmd 窗口中,结果如下所示: -``` -[{'bbox': [345, 95, 524, 586], 'rec_docs': '红牛-强化型', 'rec_scores': 0.8016462}] -``` - - -## 5.FAQ -**Q1**: 发送请求后没有结果返回或者提示输出解码报错 - -**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是: -``` -unset https_proxy -unset http_proxy -``` - -更多的服务部署类型,如 `RPC 预测服务` 等,可以参考 Serving 的[github 官网](https://github.com/PaddlePaddle/Serving/tree/v0.7.0/examples) diff --git a/docs/zh_CN/inference_deployment/python_deploy.md b/docs/zh_CN/inference_deployment/python_deploy.md index 9d4f254fdde8400b369dc54a4437dcc5f6929126..22b871344b782098ef9ded562cc7f2ce4277f790 100644 --- a/docs/zh_CN/inference_deployment/python_deploy.md +++ b/docs/zh_CN/inference_deployment/python_deploy.md @@ -8,9 +8,9 @@ - [1. 图像分类模型推理](#1) - [2. PP-ShiTu模型推理](#2) - - [2.1 主体检测模型推理](#2.1) - - [2.2 特征提取模型推理](#2.2) - - [2.3 PP-ShiTu PipeLine推理](#2.3) + - [2.1 主体检测模型推理](#2.1) + - [2.2 特征提取模型推理](#2.2) + - [2.3 PP-ShiTu PipeLine推理](#2.3) ## 1. 图像分类推理 diff --git a/docs/zh_CN/inference_deployment/recognition_serving_deploy.md b/docs/zh_CN/inference_deployment/recognition_serving_deploy.md new file mode 100644 index 0000000000000000000000000000000000000000..5311fe997269aecc1f956e8ebcdbcb628b3ed23c --- /dev/null +++ b/docs/zh_CN/inference_deployment/recognition_serving_deploy.md @@ -0,0 +1,281 @@ +简体中文 | [English](../../en/inference_deployment/recognition_serving_deploy_en.md) + +# 识别模型服务化部署 + +## 目录 + +- [1. 简介](#1-简介) +- [2. Serving 安装](#2-serving-安装) +- [3. 图像识别服务部署](#3-图像识别服务部署) + - [3.1 模型转换](#31-模型转换) + - [3.2 服务部署和请求](#32-服务部署和请求) + - [3.2.1 Python Serving](#321-python-serving) + - [3.2.2 C++ Serving](#322-c-serving) +- [4. FAQ](#4-faq) + + +## 1. 简介 + +[Paddle Serving](https://github.com/PaddlePaddle/Serving) 旨在帮助深度学习开发者轻松部署在线预测服务,支持一键部署工业级的服务能力、客户端和服务端之间高并发和高效通信、并支持多种编程语言开发客户端。 + +该部分以 HTTP 预测服务部署为例,介绍怎样在 PaddleClas 中使用 PaddleServing 部署模型服务。目前只支持 Linux 平台部署,暂不支持 Windows 平台。 + + +## 2. 
Serving 安装 + +Serving 官网推荐使用 docker 安装并部署 Serving 环境。首先需要拉取 docker 环境并创建基于 Serving 的 docker。 + +```shell +# 启动GPU docker +docker pull paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel +nvidia-docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-cuda10.2-cudnn7-devel bash +nvidia-docker exec -it test bash + +# 启动CPU docker +docker pull paddlepaddle/serving:0.7.0-devel +docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash +docker exec -it test bash +``` + +进入 docker 后,需要安装 Serving 相关的 python 包。 +```shell +python3.7 -m pip install paddle-serving-client==0.7.0 +python3.7 -m pip install paddle-serving-app==0.7.0 +python3.7 -m pip install faiss-cpu==1.7.1post2 + +#若为CPU部署环境: +python3.7 -m pip install paddle-serving-server==0.7.0 # CPU +python3.7 -m pip install paddlepaddle==2.2.0 # CPU + +#若为GPU部署环境 +python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102 # GPU with CUDA10.2 + TensorRT6 +python3.7 -m pip install paddlepaddle-gpu==2.2.0 # GPU with CUDA10.2 + +#其他GPU环境需要确认环境再选择执行哪一条 +python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post101 # GPU with CUDA10.1 + TensorRT6 +python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUDA11.2 + TensorRT8 +``` + +* 如果安装速度太慢,可以通过 `-i https://pypi.tuna.tsinghua.edu.cn/simple` 更换源,加速安装过程。 +* 其他环境配置安装请参考:[使用Docker安装Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md) + + + + +## 3. 图像识别服务部署 + +使用 PaddleServing 做图像识别服务化部署时,**需要将保存的多个 inference 模型都转换为 Serving 模型**。 下面以 PP-ShiTu 中的超轻量图像识别模型为例,介绍图像识别服务的部署。 + + +### 3.1 模型转换 + +- 进入工作目录: + ```shell + cd deploy/ + ``` +- 下载通用检测 inference 模型和通用识别 inference 模型 + ```shell + # 创建并进入models文件夹 + mkdir models + cd models + # 下载并解压通用识别模型 + wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar + tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar + # 下载并解压通用检测模型 + wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar + tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar + ``` +- 转换通用识别 inference 模型为 Serving 模型: + ```shell + # 转换通用识别模型 + python3.7 -m paddle_serving_client.convert \ + --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \ + --model_filename inference.pdmodel \ + --params_filename inference.pdiparams \ + --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \ + --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/ + ``` + 上述命令的参数含义与[#3.1 模型转换](#3.1)相同 + 通用识别 inference 模型转换完成后,会在当前文件夹多出 `general_PPLCNet_x2_5_lite_v1.0_serving/` 和 `general_PPLCNet_x2_5_lite_v1.0_client/` 的文件夹,具备如下结构: + ```shell + ├── general_PPLCNet_x2_5_lite_v1.0_serving/ + │ ├── inference.pdiparams + │ ├── inference.pdmodel + │ ├── serving_server_conf.prototxt + │ └── serving_server_conf.stream.prototxt + │ + └── general_PPLCNet_x2_5_lite_v1.0_client/ + ├── serving_client_conf.prototxt + └── serving_client_conf.stream.prototxt + ``` +- 转换通用检测 inference 模型为 Serving 模型: + ```shell + # 转换通用检测模型 + python3.7 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \ + --model_filename inference.pdmodel \ + --params_filename inference.pdiparams \ + --serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \ + --serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/ + ``` + 上述命令的参数含义与[#3.1 模型转换](#3.1)相同 + + 识别推理模型转换完成后,会在当前文件夹多出 `general_PPLCNet_x2_5_lite_v1.0_serving/` 和 `general_PPLCNet_x2_5_lite_v1.0_client/` 的文件夹。分别修改 
`general_PPLCNet_x2_5_lite_v1.0_serving/` 和 `general_PPLCNet_x2_5_lite_v1.0_client/` 目录下的 `serving_server_conf.prototxt` 中的 `alias` 名字: 将 `fetch_var` 中的 `alias_name` 改为 `features`。 修改后的 `serving_server_conf.prototxt` 内容如下 + + ```log + feed_var { + name: "x" + alias_name: "x" + is_lod_tensor: false + feed_type: 1 + shape: 3 + shape: 224 + shape: 224 + } + fetch_var { + name: "save_infer_model/scale_0.tmp_1" + alias_name: "features" + is_lod_tensor: false + fetch_type: 1 + shape: 512 + } + ``` + 通用检测 inference 模型转换完成后,会在当前文件夹多出 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` 和 `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` 的文件夹,具备如下结构: + ```shell + ├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ + │ ├── inference.pdiparams + │ ├── inference.pdmodel + │ ├── serving_server_conf.prototxt + │ └── serving_server_conf.stream.prototxt + │ + └── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/ + ├── serving_client_conf.prototxt + └── serving_client_conf.stream.prototxt + ``` + 上述命令中参数具体含义如下表所示 + | 参数 | 类型 | 默认值 | 描述 | + | ----------------- | ---- | ------------------ | ------------------------------------------------------------ | + | `dirname` | str | - | 需要转换的模型文件存储路径,Program结构文件和参数文件均保存在此目录。 | + | `model_filename` | str | None | 存储需要转换的模型Inference Program结构的文件名称。如果设置为None,则使用 `__model__` 作为默认的文件名 | + | `params_filename` | str | None | 存储需要转换的模型所有参数的文件名称。当且仅当所有模型参数被保>存在一个单独的二进制文件中,它才需要被指定。如果模型参数是存储在各自分离的文件中,设置它的值为None | + | `serving_server` | str | `"serving_server"` | 转换后的模型文件和配置文件的存储路径。默认值为serving_server | + | `serving_client` | str | `"serving_client"` | 转换后的客户端配置文件存储路径。默认值为serving_client | + +- 下载并解压已经构建后完成的检索库 index + ```shell + # 回到deploy目录 + cd ../ + # 下载构建完成的检索库 index + wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar + # 解压构建完成的检索库 index + tar -xf drink_dataset_v1.0.tar + ``` + +### 3.2 服务部署和请求 + +**注意:** 识别服务涉及到多个模型,出于性能考虑采用 PipeLine 部署方式。Pipeline 部署方式当前不支持 windows 平台。 +- 进入到工作目录 + ```shell + cd ./deploy/paddleserving/recognition + ``` + paddleserving 目录包含启动 Python Pipeline 服务、C++ Serving 服务和发送预测请求的代码,包括: + ```shell + __init__.py + config.yml # 启动python pipeline服务的配置文件 + pipeline_http_client.py # http方式发送pipeline预测请求的脚本 + pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本 + recognition_web_service.py # 启动pipeline服务端的脚本 + readme.md # 识别模型服务化部署文档 + run_cpp_serving.sh # 启动C++ Pipeline Serving部署的脚本 + test_cpp_serving_client.py # rpc方式发送C++ Pipeline serving预测请求的脚本 + ``` + + +#### 3.2.1 Python Serving + +- 启动服务: + ```shell + # 启动服务,运行日志保存在 log.txt + python3.7 recognition_web_service.py &>log.txt & + ``` + +- 发送请求: + ```shell + python3.7 pipeline_http_client.py + ``` + 成功运行后,模型预测的结果会打印在客户端中,如下所示: + ```log + {'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': '红牛-强化型', 'rec_scores': 0.79903316}]"], 'tensors': []} + ``` + + +#### 3.2.2 C++ Serving + +与Python Serving不同,C++ Serving客户端调用 C++ OP来预测,因此在启动服务之前,需要编译并安装 serving server包,并设置 `SERVING_BIN`。 +- 编译并安装Serving server包 + ```shell + # 进入工作目录 + cd PaddleClas/deploy/paddleserving + # 一键编译安装Serving server、设置 SERVING_BIN + source ./build_server.sh python3.7 + ``` + **注:**[build_server.sh](../build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。 + +- C++ Serving使用的输入输出格式与Python不同,因此需要执行以下命令,将4个文件复制到下的文件覆盖掉[3.1](#31-模型转换)得到文件夹中的对应4个prototxt文件。 + ```shell + # 进入PaddleClas/deploy目录 + cd PaddleClas/deploy/ + + # 覆盖prototxt文件 + \cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_serving/*.prototxt 
./models/general_PPLCNet_x2_5_lite_v1.0_serving/ + \cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_client/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_client/ + \cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/ + \cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ + ``` + +- 启动服务: + ```shell + # 进入工作目录 + cd PaddleClas/deploy/paddleserving/recognition + + # 端口号默认为9400;运行日志默认保存在 log_PPShiTu.txt 中 + # CPU部署 + bash run_cpp_serving.sh + # GPU部署,并指定第0号卡 + bash run_cpp_serving.sh 0 + ``` + +- 发送请求: + ```shell + # 发送服务请求 + python3.7 test_cpp_serving_client.py + ``` + 成功运行后,模型预测的结果会打印在客户端中,如下所示: + ```log + WARNING: Logging before InitGoogleLogging() is written to STDERR + I0614 03:01:36.273097 6084 naming_service_thread.cpp:202] brpc::policy::ListNamingService("127.0.0.1:9400"): added 1 + I0614 03:01:37.393564 6084 general_model.cpp:490] [client]logid=0,client_cost=1107.82ms,server_cost=1101.75ms. + [{'bbox': [345, 95, 524, 585], 'rec_docs': '红牛-强化型', 'rec_scores': 0.8073724}] + ``` + +- 关闭服务 +如果服务程序在前台运行,可以按下`Ctrl+C`来终止服务端程序;如果在后台运行,可以使用kill命令关闭相关进程,也可以在启动服务程序的路径下执行以下命令来终止服务端程序: + ```bash + python3.7 -m paddle_serving_server.serve stop + ``` + 执行完毕后出现`Process stopped`信息表示成功关闭服务。 + + +## 4. FAQ + +**Q1**: 发送请求后没有结果返回或者提示输出解码报错 + +**A1**: 启动服务和发送请求时不要设置代理,可以在启动服务前和发送请求前关闭代理,关闭代理的命令是: +```shell +unset https_proxy +unset http_proxy +``` +**Q2**: 启动服务后没有任何反应 + +**A2**: 可以检查`config.yml`中`model_config`对应的路径是否存在,文件夹命名是否正确 + +更多的服务部署类型,如 `RPC 预测服务` 等,可以参考 Serving 的[github 官网](https://github.com/PaddlePaddle/Serving/tree/v0.9.0/examples) diff --git a/ppcls/arch/backbone/__init__.py b/ppcls/arch/backbone/__init__.py index e957358479cb98d8bde3dac0d4b2785b8965c7bf..d3bb4541981fb4c01befc82b3b569a2e098ac92b 100644 --- a/ppcls/arch/backbone/__init__.py +++ b/ppcls/arch/backbone/__init__.py @@ -67,6 +67,9 @@ from ppcls.arch.backbone.model_zoo.pvt_v2 import PVT_V2_B0, PVT_V2_B1, PVT_V2_B2 from ppcls.arch.backbone.model_zoo.mobilevit import MobileViT_XXS, MobileViT_XS, MobileViT_S from ppcls.arch.backbone.model_zoo.repvgg import RepVGG_A0, RepVGG_A1, RepVGG_A2, RepVGG_B0, RepVGG_B1, RepVGG_B2, RepVGG_B1g2, RepVGG_B1g4, RepVGG_B2g4, RepVGG_B3g4 from ppcls.arch.backbone.model_zoo.van import VAN_tiny +from ppcls.arch.backbone.model_zoo.peleenet import PeleeNet +from ppcls.arch.backbone.model_zoo.convnext import ConvNeXt_tiny + from ppcls.arch.backbone.variant_models.resnet_variant import ResNet50_last_stage_stride1 from ppcls.arch.backbone.variant_models.vgg_variant import VGG19Sigmoid from ppcls.arch.backbone.variant_models.pp_lcnet_variant import PPLCNet_x2_5_Tanh diff --git a/ppcls/arch/backbone/model_zoo/convnext.py b/ppcls/arch/backbone/model_zoo/convnext.py new file mode 100644 index 0000000000000000000000000000000000000000..f30894eab526b8deb5e61a964dc287415f1b1a02 --- /dev/null +++ b/ppcls/arch/backbone/model_zoo/convnext.py @@ -0,0 +1,240 @@ +# MIT License +# +# Copyright (c) Meta Platforms, Inc. and affiliates. 
+# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# +# Code was heavily based on https://github.com/facebookresearch/ConvNeXt + +import paddle +import paddle.nn as nn +from paddle.nn.initializer import TruncatedNormal, Constant + +from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url + +MODEL_URLS = { + "ConvNeXt_tiny": "", # TODO +} + +__all__ = list(MODEL_URLS.keys()) + +trunc_normal_ = TruncatedNormal(std=.02) +zeros_ = Constant(value=0.) +ones_ = Constant(value=1.) + + +def drop_path(x, drop_prob=0., training=False): + """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). + the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... + See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... + """ + if drop_prob == 0. or not training: + return x + keep_prob = paddle.to_tensor(1 - drop_prob) + shape = (paddle.shape(x)[0], ) + (1, ) * (x.ndim - 1) + random_tensor = keep_prob + paddle.rand(shape, dtype=x.dtype) + random_tensor = paddle.floor(random_tensor) # binarize + output = x.divide(keep_prob) * random_tensor + return output + + +class DropPath(nn.Layer): + """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). + """ + + def __init__(self, drop_prob=None): + super(DropPath, self).__init__() + self.drop_prob = drop_prob + + def forward(self, x): + return drop_path(x, self.drop_prob, self.training) + + +class ChannelsFirstLayerNorm(nn.Layer): + r""" LayerNorm that supports two data formats: channels_last (default) or channels_first. + The ordering of the dimensions in the inputs. channels_last corresponds to inputs with + shape (batch_size, height, width, channels) while channels_first corresponds to inputs + with shape (batch_size, channels, height, width). + """ + + def __init__(self, normalized_shape, epsilon=1e-5): + super().__init__() + self.weight = self.create_parameter( + shape=[normalized_shape], default_initializer=ones_) + self.bias = self.create_parameter( + shape=[normalized_shape], default_initializer=zeros_) + self.epsilon = epsilon + self.normalized_shape = [normalized_shape] + + def forward(self, x): + u = x.mean(1, keepdim=True) + s = (x - u).pow(2).mean(1, keepdim=True) + x = (x - u) / paddle.sqrt(s + self.epsilon) + x = self.weight[:, None, None] * x + self.bias[:, None, None] + return x + + +class Block(nn.Layer): + r""" ConvNeXt Block. 
There are two equivalent implementations: + (1) DwConv -> LayerNorm (channels_first) -> 1x1 Conv -> GELU -> 1x1 Conv; all in (N, C, H, W) + (2) DwConv -> Permute to (N, H, W, C); LayerNorm (channels_last) -> Linear -> GELU -> Linear; Permute back + We use (2) as we find it slightly faster in PyTorch + + Args: + dim (int): Number of input channels. + drop_path (float): Stochastic depth rate. Default: 0.0 + layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. + """ + + def __init__(self, dim, drop_path=0., layer_scale_init_value=1e-6): + super().__init__() + self.dwconv = nn.Conv2D( + dim, dim, 7, padding=3, groups=dim) # depthwise conv + self.norm = nn.LayerNorm(dim, epsilon=1e-6) + # pointwise/1x1 convs, implemented with linear layers + self.pwconv1 = nn.Linear(dim, 4 * dim) + self.act = nn.GELU() + self.pwconv2 = nn.Linear(4 * dim, dim) + if layer_scale_init_value > 0: + self.gamma = self.create_parameter( + shape=[dim], + default_initializer=Constant(value=layer_scale_init_value)) + else: + self.gamma = None + self.drop_path = DropPath( + drop_path) if drop_path > 0. else nn.Identity() + + def forward(self, x): + input = x + x = self.dwconv(x) + x = x.transpose([0, 2, 3, 1]) # (N, C, H, W) -> (N, H, W, C) + x = self.norm(x) + x = self.pwconv1(x) + x = self.act(x) + x = self.pwconv2(x) + if self.gamma is not None: + x = self.gamma * x + x = x.transpose([0, 3, 1, 2]) # (N, H, W, C) -> (N, C, H, W) + + x = input + self.drop_path(x) + return x + + +class ConvNeXt(nn.Layer): + r""" ConvNeXt + A PyTorch impl of : `A ConvNet for the 2020s` - + https://arxiv.org/pdf/2201.03545.pdf + + Args: + in_chans (int): Number of input image channels. Default: 3 + class_num (int): Number of classes for classification head. Default: 1000 + depths (tuple(int)): Number of blocks at each stage. Default: [3, 3, 9, 3] + dims (int): Feature dimension at each stage. Default: [96, 192, 384, 768] + drop_path_rate (float): Stochastic depth rate. Default: 0. + layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. + head_init_scale (float): Init scaling value for classifier weights and biases. Default: 1. 
+ """ + + def __init__(self, + in_chans=3, + class_num=1000, + depths=[3, 3, 9, 3], + dims=[96, 192, 384, 768], + drop_path_rate=0., + layer_scale_init_value=1e-6, + head_init_scale=1.): + super().__init__() + + # stem and 3 intermediate downsampling conv layers + self.downsample_layers = nn.LayerList() + stem = nn.Sequential( + nn.Conv2D( + in_chans, dims[0], 4, stride=4), + ChannelsFirstLayerNorm( + dims[0], epsilon=1e-6)) + self.downsample_layers.append(stem) + for i in range(3): + downsample_layer = nn.Sequential( + ChannelsFirstLayerNorm( + dims[i], epsilon=1e-6), + nn.Conv2D( + dims[i], dims[i + 1], 2, stride=2), ) + self.downsample_layers.append(downsample_layer) + + # 4 feature resolution stages, each consisting of multiple residual blocks + self.stages = nn.LayerList() + dp_rates = [ + x.item() for x in paddle.linspace(0, drop_path_rate, sum(depths)) + ] + cur = 0 + for i in range(4): + stage = nn.Sequential(*[ + Block( + dim=dims[i], + drop_path=dp_rates[cur + j], + layer_scale_init_value=layer_scale_init_value) + for j in range(depths[i]) + ]) + self.stages.append(stage) + cur += depths[i] + + self.norm = nn.LayerNorm(dims[-1], epsilon=1e-6) # final norm layer + self.head = nn.Linear(dims[-1], class_num) + + self.apply(self._init_weights) + self.head.weight.set_value(self.head.weight * head_init_scale) + self.head.bias.set_value(self.head.bias * head_init_scale) + + def _init_weights(self, m): + if isinstance(m, (nn.Conv2D, nn.Linear)): + trunc_normal_(m.weight) + if m.bias is not None: + zeros_(m.bias) + + def forward_features(self, x): + for i in range(4): + x = self.downsample_layers[i](x) + x = self.stages[i](x) + # global average pooling, (N, C, H, W) -> (N, C) + return self.norm(x.mean([-2, -1])) + + def forward(self, x): + x = self.forward_features(x) + x = self.head(x) + return x + + +def _load_pretrained(pretrained, model, model_url, use_ssld=False): + if pretrained is False: + pass + elif pretrained is True: + load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld) + elif isinstance(pretrained, str): + load_dygraph_pretrain(model, pretrained) + else: + raise RuntimeError( + "pretrained type is not available. Please use `string` or `boolean` type." + ) + + +def ConvNeXt_tiny(pretrained=False, use_ssld=False, **kwargs): + model = ConvNeXt(depths=[3, 3, 9, 3], dims=[96, 192, 384, 768], **kwargs) + _load_pretrained( + pretrained, model, MODEL_URLS["ConvNeXt_tiny"], use_ssld=use_ssld) + return model diff --git a/ppcls/arch/backbone/model_zoo/peleenet.py b/ppcls/arch/backbone/model_zoo/peleenet.py new file mode 100644 index 0000000000000000000000000000000000000000..a09091af23d7d2a67c2f8303b4f8c119f77e8593 --- /dev/null +++ b/ppcls/arch/backbone/model_zoo/peleenet.py @@ -0,0 +1,239 @@ +# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. +# +# MIT License +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. 
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# +# Code was heavily based on https://github.com/Robert-JunWang/PeleeNet +# reference: https://arxiv.org/pdf/1804.06882.pdf + +import math + +import paddle +import paddle.nn as nn +import paddle.nn.functional as F +from paddle.nn.initializer import Normal, Constant + +from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url + +MODEL_URLS = { + "peleenet": "" # TODO +} + +__all__ = MODEL_URLS.keys() + +normal_ = lambda x, mean=0, std=1: Normal(mean, std)(x) +constant_ = lambda x, value=0: Constant(value)(x) +zeros_ = Constant(value=0.) +ones_ = Constant(value=1.) + + +class _DenseLayer(nn.Layer): + def __init__(self, num_input_features, growth_rate, bottleneck_width, drop_rate): + super(_DenseLayer, self).__init__() + + growth_rate = int(growth_rate / 2) + inter_channel = int(growth_rate * bottleneck_width / 4) * 4 + + if inter_channel > num_input_features / 2: + inter_channel = int(num_input_features / 8) * 4 + print('adjust inter_channel to ', inter_channel) + + self.branch1a = BasicConv2D( + num_input_features, inter_channel, kernel_size=1) + self.branch1b = BasicConv2D( + inter_channel, growth_rate, kernel_size=3, padding=1) + + self.branch2a = BasicConv2D( + num_input_features, inter_channel, kernel_size=1) + self.branch2b = BasicConv2D( + inter_channel, growth_rate, kernel_size=3, padding=1) + self.branch2c = BasicConv2D( + growth_rate, growth_rate, kernel_size=3, padding=1) + + def forward(self, x): + branch1 = self.branch1a(x) + branch1 = self.branch1b(branch1) + + branch2 = self.branch2a(x) + branch2 = self.branch2b(branch2) + branch2 = self.branch2c(branch2) + + return paddle.concat([x, branch1, branch2], 1) + + +class _DenseBlock(nn.Sequential): + def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate): + super(_DenseBlock, self).__init__() + for i in range(num_layers): + layer = _DenseLayer(num_input_features + i * + growth_rate, growth_rate, bn_size, drop_rate) + setattr(self, 'denselayer%d' % (i + 1), layer) + + +class _StemBlock(nn.Layer): + def __init__(self, num_input_channels, num_init_features): + super(_StemBlock, self).__init__() + + num_stem_features = int(num_init_features/2) + + self.stem1 = BasicConv2D( + num_input_channels, num_init_features, kernel_size=3, stride=2, padding=1) + self.stem2a = BasicConv2D( + num_init_features, num_stem_features, kernel_size=1, stride=1, padding=0) + self.stem2b = BasicConv2D( + num_stem_features, num_init_features, kernel_size=3, stride=2, padding=1) + self.stem3 = BasicConv2D( + 2*num_init_features, num_init_features, kernel_size=1, stride=1, padding=0) + self.pool = nn.MaxPool2D(kernel_size=2, stride=2) + + def forward(self, x): + out = self.stem1(x) + + branch2 = self.stem2a(out) + branch2 = self.stem2b(branch2) + branch1 = self.pool(out) + + out = paddle.concat([branch1, branch2], 1) + out = self.stem3(out) + + return out + + +class BasicConv2D(nn.Layer): + + def __init__(self, in_channels, out_channels, activation=True, **kwargs): + super(BasicConv2D, self).__init__() + self.conv = 
nn.Conv2D(in_channels, out_channels, + bias_attr=False, **kwargs) + self.norm = nn.BatchNorm2D(out_channels) + self.activation = activation + + def forward(self, x): + x = self.conv(x) + x = self.norm(x) + if self.activation: + return F.relu(x) + else: + return x + + +class PeleeNetDY(nn.Layer): + r"""PeleeNet model class, based on + `"Densely Connected Convolutional Networks" and + "Pelee: A Real-Time Object Detection System on Mobile Devices" ` + + Args: + growth_rate (int or list of 4 ints) - how many filters to add each layer (`k` in paper) + block_config (list of 4 ints) - how many layers in each pooling block + num_init_features (int) - the number of filters to learn in the first convolution layer + bottleneck_width (int or list of 4 ints) - multiplicative factor for number of bottle neck layers + (i.e. bn_size * k features in the bottleneck layer) + drop_rate (float) - dropout rate after each dense layer + class_num (int) - number of classification classes + """ + + def __init__(self, growth_rate=32, block_config=[3, 4, 8, 6], + num_init_features=32, bottleneck_width=[1, 2, 4, 4], + drop_rate=0.05, class_num=1000): + + super(PeleeNetDY, self).__init__() + + self.features = nn.Sequential(*[ + ('stemblock', _StemBlock(3, num_init_features)), + ]) + + if type(growth_rate) is list: + growth_rates = growth_rate + assert len(growth_rates) == 4, \ + 'The growth rate must be the list and the size must be 4' + else: + growth_rates = [growth_rate] * 4 + + if type(bottleneck_width) is list: + bottleneck_widths = bottleneck_width + assert len(bottleneck_widths) == 4, \ + 'The bottleneck width must be the list and the size must be 4' + else: + bottleneck_widths = [bottleneck_width] * 4 + + # Each denseblock + num_features = num_init_features + for i, num_layers in enumerate(block_config): + block = _DenseBlock(num_layers=num_layers, + num_input_features=num_features, + bn_size=bottleneck_widths[i], + growth_rate=growth_rates[i], + drop_rate=drop_rate) + setattr(self.features, 'denseblock%d' % (i + 1), block) + num_features = num_features + num_layers * growth_rates[i] + + setattr(self.features, 'transition%d' % (i + 1), BasicConv2D( + num_features, num_features, kernel_size=1, stride=1, padding=0)) + + if i != len(block_config) - 1: + setattr(self.features, 'transition%d_pool' % + (i + 1), nn.AvgPool2D(kernel_size=2, stride=2)) + num_features = num_features + + # Linear layer + self.classifier = nn.Linear(num_features, class_num) + self.drop_rate = drop_rate + + self.apply(self._initialize_weights) + + def forward(self, x): + features = self.features(x) + out = F.avg_pool2d(features, kernel_size=features.shape[2:4]).flatten(1) + if self.drop_rate > 0: + out = F.dropout(out, p=self.drop_rate, training=self.training) + out = self.classifier(out) + return out + + def _initialize_weights(self, m): + if isinstance(m, nn.Conv2D): + n = m._kernel_size[0] * m._kernel_size[1] * m._out_channels + normal_(m.weight, std=math.sqrt(2. / n)) + if m.bias is not None: + zeros_(m.bias) + elif isinstance(m, nn.BatchNorm2D): + ones_(m.weight) + zeros_(m.bias) + elif isinstance(m, nn.Linear): + normal_(m.weight, std=0.01) + zeros_(m.bias) + + +def _load_pretrained(pretrained, model, model_url, use_ssld): + if pretrained is False: + pass + elif pretrained is True: + load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld) + elif isinstance(pretrained, str): + load_dygraph_pretrain(model, pretrained) + else: + raise RuntimeError( + "pretrained type is not available. Please use `string` or `boolean` type." 
+ ) + + +def PeleeNet(pretrained=False, use_ssld=False, **kwargs): + model = PeleeNetDY(**kwargs) + _load_pretrained(pretrained, model, MODEL_URLS["peleenet"], use_ssld) + return model diff --git a/ppcls/configs/ImageNet/ConvNeXt/ConvNeXt_tiny.yaml b/ppcls/configs/ImageNet/ConvNeXt/ConvNeXt_tiny.yaml new file mode 100644 index 0000000000000000000000000000000000000000..fb6e3cbdbb2dc648e4ef0bd1cad59106efbf91db --- /dev/null +++ b/ppcls/configs/ImageNet/ConvNeXt/ConvNeXt_tiny.yaml @@ -0,0 +1,170 @@ +# global configs +Global: + checkpoints: null + pretrained_model: null + output_dir: ./output/ + device: gpu + save_interval: 1 + eval_during_train: True + eval_interval: 1 + epochs: 300 + print_batch_step: 10 + use_visualdl: False + # used for static mode and model export + image_shape: [3, 224, 224] + save_inference_dir: ./inference + # training model under @to_static + to_static: False + update_freq: 4 # for 8 cards + +# model ema +EMA: + decay: 0.9999 + + +# model architecture +Arch: + name: ConvNeXt_tiny + class_num: 1000 + drop_path_rate: 0.1 + layer_scale_init_value: 1e-6 + head_init_scale: 1.0 + + +# loss function config for traing/eval process +Loss: + Train: + - CELoss: + weight: 1.0 + epsilon: 0.1 + Eval: + - CELoss: + weight: 1.0 + + +Optimizer: + name: AdamW + beta1: 0.9 + beta2: 0.999 + epsilon: 1e-8 + weight_decay: 0.05 + one_dim_param_no_weight_decay: True + lr: + # for 8 cards + name: Cosine + learning_rate: 4e-3 # lr 4e-3 for total_batch_size 4096 + eta_min: 1e-6 + warmup_epoch: 20 + warmup_start_lr: 0 + + +# data loader for train and eval +DataLoader: + Train: + dataset: + name: ImageNetDataset + image_root: ./dataset/ILSVRC2012/ + cls_label_path: ./dataset/ILSVRC2012/train_list.txt + transform_ops: + - DecodeImage: + to_rgb: True + channel_first: False + - RandCropImage: + size: 224 + interpolation: bicubic + backend: pil + - RandFlipImage: + flip_code: 1 + - TimmAutoAugment: + config_str: rand-m9-mstd0.5-inc1 + interpolation: bicubic + img_size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + - RandomErasing: + EPSILON: 0.25 + sl: 0.02 + sh: 1.0/3.0 + r1: 0.3 + attempt: 10 + use_log_aspect: True + mode: pixel + batch_transform_ops: + - OpSampler: + MixupOperator: + alpha: 0.8 + prob: 0.5 + CutmixOperator: + alpha: 1.0 + prob: 0.5 + sampler: + name: DistributedBatchSampler + batch_size: 128 + drop_last: True + shuffle: True + loader: + num_workers: 4 + use_shared_memory: True + + Eval: + dataset: + name: ImageNetDataset + image_root: ./dataset/ILSVRC2012/ + cls_label_path: ./dataset/ILSVRC2012/val_list.txt + transform_ops: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + resize_short: 256 + interpolation: bicubic + backend: pil + - CropImage: + size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + sampler: + name: DistributedBatchSampler + batch_size: 128 + drop_last: False + shuffle: False + loader: + num_workers: 4 + use_shared_memory: True + + +Infer: + infer_imgs: docs/images/inference_deployment/whl_demo.jpg + batch_size: 10 + transforms: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + resize_short: 256 + interpolation: bicubic + backend: pil + - CropImage: + size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + - ToCHWImage: + PostProcess: + name: Topk + topk: 5 + class_id_map_file: ppcls/utils/imagenet1k_label_list.txt + 
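+# NOTE: with batch_size 128 per card, 8 cards and update_freq 4 (gradient
+# accumulation), the effective global batch size is 128*8*4=4096, which the
+# 4e-3 base learning rate configured above is tuned for.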
+ +Metric: + Eval: + - TopkAcc: + topk: [1, 5] diff --git a/ppcls/configs/ImageNet/PeleeNet/PeleeNet.yaml b/ppcls/configs/ImageNet/PeleeNet/PeleeNet.yaml new file mode 100644 index 0000000000000000000000000000000000000000..648f97040e36c135d4896386ced20b23d328d746 --- /dev/null +++ b/ppcls/configs/ImageNet/PeleeNet/PeleeNet.yaml @@ -0,0 +1,130 @@ +# global configs +Global: + checkpoints: null + pretrained_model: null + output_dir: ./output/ + device: gpu + save_interval: 1 + eval_during_train: True + eval_interval: 1 + epochs: 120 + print_batch_step: 10 + use_visualdl: False + # used for static mode and model export + image_shape: [3, 224, 224] + save_inference_dir: ./inference + # training model under @to_static + to_static: False + +# model architecture +Arch: + name: PeleeNet + class_num: 1000 + +# loss function config for traing/eval process +Loss: + Train: + - CELoss: + weight: 1.0 + Eval: + - CELoss: + weight: 1.0 + + +Optimizer: + name: Momentum + momentum: 0.9 + lr: + name: Cosine + learning_rate: 0.18 # for total batch size 512 + regularizer: + name: 'L2' + coeff: 0.0001 + + +# data loader for train and eval +DataLoader: + Train: + dataset: + name: ImageNetDataset + image_root: ./dataset/ILSVRC2012/ + cls_label_path: ./dataset/ILSVRC2012/train_list.txt + transform_ops: + - DecodeImage: + to_rgb: True + channel_first: False + - RandCropImage: + size: 224 + - RandFlipImage: + flip_code: 1 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + + sampler: + name: DistributedBatchSampler + batch_size: 128 + drop_last: False + shuffle: True + loader: + num_workers: 4 + use_shared_memory: True + + Eval: + dataset: + name: ImageNetDataset + image_root: ./dataset/ILSVRC2012/ + cls_label_path: ./dataset/ILSVRC2012/val_list.txt + transform_ops: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + resize_short: 256 + - CropImage: + size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + sampler: + name: DistributedBatchSampler + batch_size: 256 # for 2 cards + drop_last: False + shuffle: False + loader: + num_workers: 4 + use_shared_memory: True + +Infer: + infer_imgs: docs/images/inference_deployment/whl_demo.jpg + batch_size: 10 + transforms: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + resize_short: 256 + - CropImage: + size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + - ToCHWImage: + PostProcess: + name: Topk + topk: 5 + class_id_map_file: ppcls/utils/imagenet1k_label_list.txt + +Metric: + Train: + - TopkAcc: + topk: [1, 5] + Eval: + - TopkAcc: + topk: [1, 5] diff --git a/ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml b/ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c6415cd47ca04ebb6a89e786b7f63ee85b12ac07 --- /dev/null +++ b/ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml @@ -0,0 +1,150 @@ +# global configs +Global: + checkpoints: null + pretrained_model: null + output_dir: ./output/ + device: gpu + save_interval: 1 + eval_during_train: True + eval_interval: 1 + epochs: 50 + print_batch_step: 10 + use_visualdl: False + # used for static mode and model export + image_shape: [3, 224, 224] + save_inference_dir: ./inference + # training model under @to_static + to_static: False + use_dali: False + +# model architecture +Arch: + 
name: PPHGNet_tiny + class_num: 2 + +# loss function config for traing/eval process +Loss: + Train: + - CELoss: + weight: 1.0 + epsilon: 0.1 + Eval: + - CELoss: + weight: 1.0 + +Optimizer: + name: Momentum + momentum: 0.9 + lr: + name: Cosine + learning_rate: 0.05 + warmup_epoch: 3 + regularizer: + name: 'L2' + coeff: 0.00004 + + +# data loader for train and eval +DataLoader: + Train: + dataset: + name: ImageNetDataset + image_root: ./dataset/ + cls_label_path: ./dataset/phone_train_list_halfbody.txt + transform_ops: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + size: 224 + - RandFlipImage: + flip_code: 1 + - TimmAutoAugment: + config_str: rand-m7-mstd0.5-inc1 + interpolation: bicubic + img_size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + - RandomErasing: + EPSILON: 0.25 + sl: 0.02 + sh: 1.0/3.0 + r1: 0.3 + attempt: 10 + use_log_aspect: True + mode: pixel + batch_transform_ops: + - OpSampler: + MixupOperator: + alpha: 0.2 + prob: 0.5 + CutmixOperator: + alpha: 1.0 + prob: 0.5 + + sampler: + name: DistributedBatchSampler + batch_size: 128 + drop_last: False + shuffle: True + loader: + num_workers: 2 + use_shared_memory: False + + Eval: + dataset: + name: ImageNetDataset + image_root: ./dataset/ + cls_label_path: ./dataset/phone_val_list_halfbody.txt + transform_ops: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + size: 224 + interpolation: bicubic + backend: pil + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + sampler: + name: DistributedBatchSampler + batch_size: 128 + drop_last: False + shuffle: False + loader: + num_workers: 2 + use_shared_memory: False + +Infer: + infer_imgs: docs/images/inference_deployment/whl_demo.jpg + batch_size: 1 + transforms: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + - ToCHWImage: + PostProcess: + name: Topk + topk: 2 + class_id_map_file: dataset/phone_label_list.txt + +Metric: + Train: + - TopkAcc: + topk: [1, 1] + Eval: + - TopkAcc: + topk: [1, 1] diff --git a/ppcls/data/preprocess/ops/operators.py b/ppcls/data/preprocess/ops/operators.py index d87960e93fe7bc7e2e67f7c30d1b58d811153905..e617b8a71afffeb9e18e4be412f5a3374bd387ec 100644 --- a/ppcls/data/preprocess/ops/operators.py +++ b/ppcls/data/preprocess/ops/operators.py @@ -18,6 +18,7 @@ from __future__ import print_function from __future__ import unicode_literals from functools import partial +import io import six import math import random @@ -138,28 +139,53 @@ class OperatorParamError(ValueError): class DecodeImage(object): """ decode image """ - def __init__(self, to_rgb=True, to_np=False, channel_first=False): - self.to_rgb = to_rgb + def __init__(self, + to_np=True, + to_rgb=True, + channel_first=False, + backend="cv2"): self.to_np = to_np # to numpy + self.to_rgb = to_rgb # only enabled when to_np is True self.channel_first = channel_first # only enabled when to_np is True + if backend.lower() not in ["cv2", "pil"]: + logger.warning( + f"The backend of DecodeImage only support \"cv2\" or \"PIL\". \"f{backend}\" is unavailable. Use \"cv2\" instead." + ) + backend = "cv2" + self.backend = backend.lower() + + if not to_np: + logger.warning( + f"\"to_rgb\" and \"channel_first\" are only enabled when to_np is True. \"to_np\" is now {to_np}." 
+ ) + def __call__(self, img): - if not isinstance(img, np.ndarray): - if six.PY2: - assert type(img) is str and len( - img) > 0, "invalid input 'img' in DecodeImage" + if isinstance(img, Image.Image): + assert self.backend == "pil", "invalid input 'img' in DecodeImage" + elif isinstance(img, np.ndarray): + assert self.backend == "cv2", "invalid input 'img' in DecodeImage" + elif isinstance(img, bytes): + if self.backend == "pil": + data = io.BytesIO(img) + img = Image.open(data) else: - assert type(img) is bytes and len( - img) > 0, "invalid input 'img' in DecodeImage" - data = np.frombuffer(img, dtype='uint8') - img = cv2.imdecode(data, 1) - if self.to_rgb: - assert img.shape[2] == 3, 'invalid shape of image[%s]' % ( - img.shape) - img = img[:, :, ::-1] - - if self.channel_first: - img = img.transpose((2, 0, 1)) + data = np.frombuffer(img, dtype="uint8") + img = cv2.imdecode(data, 1) + else: + raise ValueError("invalid input 'img' in DecodeImage") + + if self.to_np: + if self.backend == "pil": + assert img.mode == "RGB", f"invalid shape of image[{img.shape}]" + img = np.asarray(img)[:, :, ::-1] # BRG + + if self.to_rgb: + assert img.shape[2] == 3, f"invalid shape of image[{img.shape}]" + img = img[:, :, ::-1] + + if self.channel_first: + img = img.transpose((2, 0, 1)) return img diff --git a/ppcls/engine/engine.py b/ppcls/engine/engine.py index 884a05bb141947d70d2a20c2d88967bcbe6626ea..1aa0a1e05c306f46c77ff09b3fb6af344d3e01e3 100644 --- a/ppcls/engine/engine.py +++ b/ppcls/engine/engine.py @@ -34,6 +34,7 @@ from ppcls.arch import apply_to_static from ppcls.loss import build_loss from ppcls.metric import build_metrics from ppcls.optimizer import build_optimizer +from ppcls.utils.ema import ExponentialMovingAverage from ppcls.utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url from ppcls.utils.save_load import init_model from ppcls.utils import save_load @@ -99,6 +100,9 @@ class Engine(object): logger.info('train with paddle {} and device {}'.format( paddle.__version__, self.device)) + # gradient accumulation + self.update_freq = self.config["Global"].get("update_freq", 1) + if "class_num" in config["Global"]: global_class_num = config["Global"]["class_num"] if "class_num" not in config["Arch"]: @@ -203,7 +207,7 @@ class Engine(object): if self.mode == 'train': self.optimizer, self.lr_sch = build_optimizer( self.config["Optimizer"], self.config["Global"]["epochs"], - len(self.train_dataloader), + len(self.train_dataloader) // self.update_freq, [self.model, self.train_loss_func]) # AMP training and evaluating @@ -277,6 +281,12 @@ class Engine(object): level=self.amp_level, save_dtype='float32') + # build EMA model + self.ema = "EMA" in self.config and self.mode == "train" + if self.ema: + self.model_ema = ExponentialMovingAverage( + self.model, self.config['EMA'].get("decay", 0.9999)) + # check the gpu num world_size = dist.get_world_size() self.config["Global"]["distributed"] = world_size != 1 @@ -311,6 +321,10 @@ class Engine(object): "metric": -1.0, "epoch": 0, } + ema_module = None + if self.ema: + best_metric_ema = 0.0 + ema_module = self.model_ema.module # key: # val: metrics list word self.output_info = dict() @@ -325,12 +339,14 @@ class Engine(object): if self.config.Global.checkpoints is not None: metric_info = init_model(self.config.Global, self.model, - self.optimizer, self.train_loss_func) + self.optimizer, self.train_loss_func, + ema_module) if metric_info is not None: best_metric.update(metric_info) self.max_iter = len(self.train_dataloader) - 1 if 
platform.system( ) == "Windows" else len(self.train_dataloader) + self.max_iter = self.max_iter // self.update_freq * self.update_freq for epoch_id in range(best_metric["epoch"] + 1, self.config["Global"]["epochs"] + 1): @@ -361,6 +377,7 @@ class Engine(object): self.optimizer, best_metric, self.output_dir, + ema=ema_module, model_name=self.config["Arch"]["name"], prefix="best_model", loss=self.train_loss_func, @@ -375,6 +392,32 @@ class Engine(object): self.model.train() + if self.ema: + ori_model, self.model = self.model, ema_module + acc_ema = self.eval(epoch_id) + self.model = ori_model + ema_module.eval() + + if acc_ema > best_metric_ema: + best_metric_ema = acc_ema + save_load.save_model( + self.model, + self.optimizer, + {"metric": acc_ema, + "epoch": epoch_id}, + self.output_dir, + ema=ema_module, + model_name=self.config["Arch"]["name"], + prefix="best_model_ema", + loss=self.train_loss_func) + logger.info("[Eval][Epoch {}][best metric ema: {}]".format( + epoch_id, best_metric_ema)) + logger.scaler( + name="eval_acc_ema", + value=acc_ema, + step=epoch_id, + writer=self.vdl_writer) + # save model if epoch_id % save_interval == 0: save_load.save_model( @@ -382,6 +425,7 @@ class Engine(object): self.optimizer, {"metric": acc, "epoch": epoch_id}, self.output_dir, + ema=ema_module, model_name=self.config["Arch"]["name"], prefix="epoch_{}".format(epoch_id), loss=self.train_loss_func) @@ -391,6 +435,7 @@ class Engine(object): self.optimizer, {"metric": acc, "epoch": epoch_id}, self.output_dir, + ema=ema_module, model_name=self.config["Arch"]["name"], prefix="latest", loss=self.train_loss_func) diff --git a/ppcls/engine/train/train.py b/ppcls/engine/train/train.py index 14db79e73e9e51d16d5784b7aa48a6afb12a7e0f..a41674da70c167959c2515ec696ca2a6686cf0f8 100644 --- a/ppcls/engine/train/train.py +++ b/ppcls/engine/train/train.py @@ -53,25 +53,33 @@ def train_epoch(engine, epoch_id, print_batch_step): out = forward(engine, batch) loss_dict = engine.train_loss_func(out, batch[1]) + # loss + loss = loss_dict["loss"] / engine.update_freq + # backward & step opt if engine.amp: - scaled = engine.scaler.scale(loss_dict["loss"]) + scaled = engine.scaler.scale(loss) scaled.backward() - for i in range(len(engine.optimizer)): - engine.scaler.minimize(engine.optimizer[i], scaled) + if (iter_id + 1) % engine.update_freq == 0: + for i in range(len(engine.optimizer)): + engine.scaler.minimize(engine.optimizer[i], scaled) else: - loss_dict["loss"].backward() - for i in range(len(engine.optimizer)): - engine.optimizer[i].step() + loss.backward() + if (iter_id + 1) % engine.update_freq == 0: + for i in range(len(engine.optimizer)): + engine.optimizer[i].step() - # clear grad - for i in range(len(engine.optimizer)): - engine.optimizer[i].clear_grad() - - # step lr(by step) - for i in range(len(engine.lr_sch)): - if not getattr(engine.lr_sch[i], "by_epoch", False): - engine.lr_sch[i].step() + if (iter_id + 1) % engine.update_freq == 0: + # clear grad + for i in range(len(engine.optimizer)): + engine.optimizer[i].clear_grad() + # step lr(by step) + for i in range(len(engine.lr_sch)): + if not getattr(engine.lr_sch[i], "by_epoch", False): + engine.lr_sch[i].step() + # update ema + if engine.ema: + engine.model_ema.update(engine.model) # below code just for logging # update metric_for_logger diff --git a/ppcls/engine/train/utils.py b/ppcls/engine/train/utils.py index ca211ff932f19ca63804a5a1ff52def5eb89477f..44e54660b6453b713b2325e26b1bd5590b23c933 100644 --- a/ppcls/engine/train/utils.py +++ 
b/ppcls/engine/train/utils.py @@ -54,12 +54,12 @@ def log_info(trainer, batch_size, epoch_id, iter_id): ips_msg = "ips: {:.5f} samples/s".format( batch_size / trainer.time_info["batch_cost"].avg) eta_sec = ((trainer.config["Global"]["epochs"] - epoch_id + 1 - ) * len(trainer.train_dataloader) - iter_id + ) * trainer.max_iter - iter_id ) * trainer.time_info["batch_cost"].avg eta_msg = "eta: {:s}".format(str(datetime.timedelta(seconds=int(eta_sec)))) logger.info("[Train][Epoch {}/{}][Iter: {}/{}]{}, {}, {}, {}, {}".format( epoch_id, trainer.config["Global"]["epochs"], iter_id, - len(trainer.train_dataloader), lr_msg, metric_msg, time_msg, ips_msg, + trainer.max_iter, lr_msg, metric_msg, time_msg, ips_msg, eta_msg)) for i, lr in enumerate(trainer.lr_sch): diff --git a/ppcls/utils/download.py b/ppcls/utils/download.py index 51d45438880c940929acf2f8542eec937052f21b..3aeda0ced44f7cc5916b918e5980b86b977a23fe 100644 --- a/ppcls/utils/download.py +++ b/ppcls/utils/download.py @@ -77,21 +77,6 @@ def _map_path(url, root_dir): return osp.join(root_dir, fpath) -def _get_unique_endpoints(trainer_endpoints): - # Sorting is to avoid different environmental variables for each card - trainer_endpoints.sort() - ips = set() - unique_endpoints = set() - for endpoint in trainer_endpoints: - ip = endpoint.split(":")[0] - if ip in ips: - continue - ips.add(ip) - unique_endpoints.add(endpoint) - logger.info("unique_endpoints {}".format(unique_endpoints)) - return unique_endpoints - - def get_path_from_url(url, root_dir, md5sum=None, @@ -118,20 +103,20 @@ def get_path_from_url(url, # parse path after download to decompress under root_dir fullpath = _map_path(url, root_dir) # Mainly used to solve the problem of downloading data from different - # machines in the case of multiple machines. Different ips will download - # data, and the same ip will only download data once. - unique_endpoints = _get_unique_endpoints(ParallelEnv() - .trainer_endpoints[:]) + # machines in the case of multiple machines. Different nodes will download + # data, and the same node will only download data once. + rank_id_curr_node = int(os.environ.get("PADDLE_RANK_IN_NODE", 0)) + if osp.exists(fullpath) and check_exist and _md5check(fullpath, md5sum): logger.info("Found {}".format(fullpath)) else: - if ParallelEnv().current_endpoint in unique_endpoints: + if rank_id_curr_node == 0: fullpath = _download(url, root_dir, md5sum) else: while not os.path.exists(fullpath): time.sleep(1) - if ParallelEnv().current_endpoint in unique_endpoints: + if rank_id_curr_node == 0: if decompress and (tarfile.is_tarfile(fullpath) or zipfile.is_zipfile(fullpath)): fullpath = _decompress(fullpath) diff --git a/ppcls/utils/ema.py b/ppcls/utils/ema.py index b54cdb1b2030dc0a70394816a433e7e715e12996..8292781955210d68cea119b2fd887b534b3a6c04 100644 --- a/ppcls/utils/ema.py +++ b/ppcls/utils/ema.py @@ -1,10 +1,10 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. +# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# http://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, @@ -12,52 +12,31 @@ # See the License for the specific language governing permissions and # limitations under the License. 
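+# The EMA weights are kept as a deep copy of the model; every call to `update()`
+# applies `ema_param = decay * ema_param + (1 - decay) * model_param` to each
+# entry of the state dict (parameters and buffers such as BN statistics).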
+from copy import deepcopy + import paddle -import numpy as np class ExponentialMovingAverage(): """ Exponential Moving Average - Code was heavily based on https://github.com/Wanger-SJTU/SegToolbox.Pytorch/blob/master/lib/utils/ema.py + Code was heavily based on https://github.com/rwightman/pytorch-image-models/blob/master/timm/utils/model_ema.py """ - def __init__(self, model, decay, thres_steps=True): - self._model = model - self._decay = decay - self._thres_steps = thres_steps - self._shadow = {} - self._backup = {} - - def register(self): - self._update_step = 0 - for name, param in self._model.named_parameters(): - if param.stop_gradient is False: - self._shadow[name] = param.numpy().copy() - - def update(self): - decay = min(self._decay, (1 + self._update_step) / ( - 10 + self._update_step)) if self._thres_steps else self._decay - for name, param in self._model.named_parameters(): - if param.stop_gradient is False: - assert name in self._shadow - new_val = np.array(param.numpy().copy()) - old_val = np.array(self._shadow[name]) - new_average = decay * old_val + (1 - decay) * new_val - self._shadow[name] = new_average - self._update_step += 1 - return decay - - def apply(self): - for name, param in self._model.named_parameters(): - if param.stop_gradient is False: - assert name in self._shadow - self._backup[name] = np.array(param.numpy().copy()) - param.set_value(np.array(self._shadow[name])) - - def restore(self): - for name, param in self._model.named_parameters(): - if param.stop_gradient is False: - assert name in self._backup - param.set_value(self._backup[name]) - self._backup = {} + def __init__(self, model, decay=0.9999): + super().__init__() + # make a copy of the model for accumulating moving average of weights + self.module = deepcopy(model) + self.module.eval() + self.decay = decay + + @paddle.no_grad() + def _update(self, model, update_fn): + for ema_v, model_v in zip(self.module.state_dict().values(), model.state_dict().values()): + ema_v.set_value(update_fn(ema_v, model_v)) + + def update(self, model): + self._update(model, update_fn=lambda e, m: self.decay * e + (1. 
- self.decay) * m) + + def set(self, model): + self._update(model, update_fn=lambda e, m: m) diff --git a/ppcls/utils/save_load.py b/ppcls/utils/save_load.py index 04486cc273bbfe9e3d9863b4c4ded6a8d283eee3..31323e9ae11b3245c898f412057a15fb56734b0a 100644 --- a/ppcls/utils/save_load.py +++ b/ppcls/utils/save_load.py @@ -95,7 +95,11 @@ def load_distillation_model(model, pretrained_model): pretrained_model)) -def init_model(config, net, optimizer=None, loss: paddle.nn.Layer=None): +def init_model(config, + net, + optimizer=None, + loss: paddle.nn.Layer=None, + ema=None): """ load model from checkpoint or pretrained_model """ @@ -115,6 +119,11 @@ def init_model(config, net, optimizer=None, loss: paddle.nn.Layer=None): for i in range(len(optimizer)): optimizer[i].set_state_dict(opti_dict[i] if isinstance( opti_dict, list) else opti_dict) + if ema is not None: + assert os.path.exists(checkpoints + ".ema.pdparams"), \ + "Given dir {}.ema.pdparams not exist.".format(checkpoints) + para_ema_dict = paddle.load(checkpoints + ".ema.pdparams") + ema.set_state_dict(para_ema_dict) logger.info("Finish load checkpoints from {}".format(checkpoints)) return metric_dict @@ -133,6 +142,7 @@ def save_model(net, optimizer, metric_info, model_path, + ema=None, model_name="", prefix='ppcls', loss: paddle.nn.Layer=None, @@ -161,6 +171,8 @@ def save_model(net, paddle.save(s_params, model_path + "_student.pdparams") paddle.save(params_state_dict, model_path + ".pdparams") + if ema is not None: + paddle.save(ema.state_dict(), model_path + ".ema.pdparams") paddle.save([opt.state_dict() for opt in optimizer], model_path + ".pdopt") paddle.save(metric_info, model_path + ".pdstates") logger.info("Already save model in {}".format(model_path)) diff --git a/test_tipc/benchmark_train.sh b/test_tipc/benchmark_train.sh index 5c4d4112ad691569914ccf9b84480db9b76fa024..bbec7c62aab656b2b1076ad478ecf6094b20e9b6 100644 --- a/test_tipc/benchmark_train.sh +++ b/test_tipc/benchmark_train.sh @@ -179,6 +179,22 @@ for batch_size in ${batch_size_list[*]}; do func_sed_params "$FILENAME" "${line_epoch}" "$epoch" gpu_id=$(set_gpu_id $device_num) + # if bs is big, then copy train_list.txt to generate more train log + # There are 5w image in train_list. And the train log printed interval is 10 iteration. + # At least 25 log number would be good to calculate ips for benchmark system. 
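+    # With copy_num copies of the 5w-image list, one epoch runs about
+    # 5w*copy_num/total_batch_size iterations and prints 5000*copy_num/total_batch_size
+    # logs (one every 10 iterations), so copy_num=total_batch_size/200 gives ~25 logs.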
+ # So the copy number for train_list is as follows: + total_batch_size=`echo $[$batch_size*${device_num:1:1}*${device_num:3:3}]` + copy_num=`echo $[$total_batch_size/200]` + if [ $copy_num -gt 1 ];then + cd dataset/ILSVRC2012 + rm -rf train_list.txt + for ((i=1; i <=$copy_num; i++));do + cat val_list.txt >> train_list.txt + done + cd ../../ + fi + + if [ ${#gpu_id} -le 1 ];then log_path="$SAVE_LOG/profiling_log" mkdir -p $log_path diff --git a/test_tipc/config/ConvNeXt/ConvNeXt_tiny_train_infer_python.txt b/test_tipc/config/ConvNeXt/ConvNeXt_tiny_train_infer_python.txt new file mode 100644 index 0000000000000000000000000000000000000000..11b4007ef9fbaef563b028a10bf9f42eb2581f94 --- /dev/null +++ b/test_tipc/config/ConvNeXt/ConvNeXt_tiny_train_infer_python.txt @@ -0,0 +1,54 @@ +===========================train_params=========================== +model_name:ConvNeXt_tiny +python:python3.7 +gpu_list:0|0,1 +-o Global.device:gpu +-o Global.auto_cast:null +-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120 +-o Global.output_dir:./output/ +-o DataLoader.Train.sampler.batch_size:8 +-o Global.pretrained_model:null +train_model_name:latest +train_infer_img_dir:./dataset/ILSVRC2012/val +null:null +## +trainer:norm_train +norm_train:tools/train.py -c ppcls/configs/ImageNet/ConvNeXt/ConvNeXt_tiny.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +pact_train:null +fpgm_train:null +distill_train:null +null:null +null:null +## +===========================eval_params=========================== +eval:tools/eval.py -c ppcls/configs/ImageNet/ConvNeXt/ConvNeXt_tiny.yaml +null:null +## +===========================infer_params========================== +-o Global.save_inference_dir:./inference +-o Global.pretrained_model: +norm_export:tools/export_model.py -c ppcls/configs/ImageNet/ConvNeXt/ConvNeXt_tiny.yaml +quant_export:null +fpgm_export:null +distill_export:null +kl_quant:null +export2:null +inference_dir:null +infer_model:../inference/ +infer_export:True +infer_quant:Fasle +inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=256 -o PreProcess.transform_ops.1.CropImage.size=224 +-o Global.use_gpu:True|False +-o Global.enable_mkldnn:True|False +-o Global.cpu_num_threads:1|6 +-o Global.batch_size:1|16 +-o Global.use_tensorrt:True|False +-o Global.use_fp16:True|False +-o Global.inference_model_dir:../inference +-o Global.infer_imgs:../dataset/ILSVRC2012/val +-o Global.save_log_path:null +-o Global.benchmark:True +null:null +null:null +===========================infer_benchmark_params========================== +random_infer_input:[{float32,[3,224,224]}] \ No newline at end of file diff --git a/test_tipc/config/PeleeNet/PeleeNet_train_infer_python.txt b/test_tipc/config/PeleeNet/PeleeNet_train_infer_python.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2c3a82a1dea3bc226940ac711790d32939dc541 --- /dev/null +++ b/test_tipc/config/PeleeNet/PeleeNet_train_infer_python.txt @@ -0,0 +1,54 @@ +===========================train_params=========================== +model_name:PeleeNet +python:python3 +gpu_list:0|0,1 +-o Global.device:gpu +-o Global.auto_cast:null +-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120 +-o Global.output_dir:./output/ +-o DataLoader.Train.sampler.batch_size:8 +-o Global.pretrained_model:null +train_model_name:latest +train_infer_img_dir:./dataset/ILSVRC2012/val 
+null:null +## +trainer:norm_train +norm_train:tools/train.py -c ppcls/configs/ImageNet/PeleeNet/PeleeNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +pact_train:null +fpgm_train:null +distill_train:null +null:null +null:null +## +===========================eval_params=========================== +eval:tools/eval.py -c ppcls/configs/ImageNet/PeleeNet/PeleeNet.yaml +null:null +## +===========================infer_params========================== +-o Global.save_inference_dir:./inference +-o Global.pretrained_model: +norm_export:tools/export_model.py -c ppcls/configs/ImageNet/PeleeNet/PeleeNet.yaml +quant_export:null +fpgm_export:null +distill_export:null +kl_quant:null +export2:null +inference_dir:null +infer_model:../inference/ +infer_export:True +infer_quant:Fasle +inference:python/predict_cls.py -c configs/inference_cls.yaml +-o Global.use_gpu:True|False +-o Global.enable_mkldnn:True|False +-o Global.cpu_num_threads:1|6 +-o Global.batch_size:1|16 +-o Global.use_tensorrt:True|False +-o Global.use_fp16:True|False +-o Global.inference_model_dir:../inference +-o Global.infer_imgs:../dataset/ILSVRC2012/val +-o Global.save_log_path:null +-o Global.benchmark:True +null:null +null:null +===========================infer_benchmark_params========================== +random_infer_input:[{float32,[3,224,224]}] diff --git a/test_tipc/configs/AlexNet/AlexNet_train_amp_infer_python.txt b/test_tipc/configs/AlexNet/AlexNet_train_amp_infer_python.txt index c0cf046186576e77776c71699611bb541ddf1aaf..c4a085d28b361709a679752e1f87b5e0f582eff4 100644 --- a/test_tipc/configs/AlexNet/AlexNet_train_amp_infer_python.txt +++ b/test_tipc/configs/AlexNet/AlexNet_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/AlexNet/AlexNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/AlexNet/AlexNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/AlexNet/AlexNet_train_infer_python.txt b/test_tipc/configs/AlexNet/AlexNet_train_infer_python.txt index e5703326974a5d2b0c814d93e4b88042d5f17cfc..990b67f8e3010f50f092e3f02866740e88be0577 100644 --- a/test_tipc/configs/AlexNet/AlexNet_train_infer_python.txt +++ b/test_tipc/configs/AlexNet/AlexNet_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/AlexNet/AlexNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/AlexNet/AlexNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o 
Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/CSPNet/CSPDarkNet53_train_amp_infer_python.txt b/test_tipc/configs/CSPNet/CSPDarkNet53_train_amp_infer_python.txt index 534cb09f64aa706c7b1d6a225fac7fcf74d8fdef..d1e48a6743f3d9e0d8a823ed1e5fcd98c9b8b813 100644 --- a/test_tipc/configs/CSPNet/CSPDarkNet53_train_amp_infer_python.txt +++ b/test_tipc/configs/CSPNet/CSPDarkNet53_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/CSPNet/CSPDarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/CSPNet/CSPDarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/CSPNet/CSPDarkNet53_train_infer_python.txt b/test_tipc/configs/CSPNet/CSPDarkNet53_train_infer_python.txt index fc15806209a3b6ba7062d19b25c07ae86c2a36a5..e9869d6f40f8cdef6c8741ff7d9596b195601c52 100644 --- a/test_tipc/configs/CSPNet/CSPDarkNet53_train_infer_python.txt +++ b/test_tipc/configs/CSPNet/CSPDarkNet53_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/CSPNet/CSPDarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/CSPNet/CSPDarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_224_train_infer_python.txt b/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_224_train_infer_python.txt index 808b37a857a317079492a4c9a7ab39e5d5741719..198cdeef4462b4f94cedd12464b2499dca29efad 100644 --- a/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_224_train_infer_python.txt +++ b/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_base_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_base_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_384_train_infer_python.txt b/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_384_train_infer_python.txt index 2441975dd96ec2bb68f5f2e5d8a1fd5571f347b4..2e3957ec2f15bf65aabb2a50be4dd3fb8e8786ba 100644 --- a/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_384_train_infer_python.txt +++ b/test_tipc/configs/CSWinTransformer/CSWinTransformer_base_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_base_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_base_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_224_train_infer_python.txt b/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_224_train_infer_python.txt index 5bcebfb7b49cede306d3a165228f1d0839660729..c1ba81846c001b6d1c046f24bd9b053d2602a1b9 100644 --- a/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_224_train_infer_python.txt +++ b/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_large_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_large_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_384_train_infer_python.txt b/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_384_train_infer_python.txt index 67cba41bb18e9d295d0ba7903465fdefe45b80b0..aa58c136b86fb452245e1123d13df0564f1a42fb 100644 --- a/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_384_train_infer_python.txt +++ b/test_tipc/configs/CSWinTransformer/CSWinTransformer_large_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_large_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_large_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/CSWinTransformer/CSWinTransformer_small_224_train_infer_python.txt b/test_tipc/configs/CSWinTransformer/CSWinTransformer_small_224_train_infer_python.txt index c6d66e1504ce915bc3a91b1dfbfd591a12a90b1d..e33402d35c178198ece466bc7a189209e46ca41c 100644 --- a/test_tipc/configs/CSWinTransformer/CSWinTransformer_small_224_train_infer_python.txt +++ b/test_tipc/configs/CSWinTransformer/CSWinTransformer_small_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_small_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_small_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/CSWinTransformer/CSWinTransformer_tiny_224_train_infer_python.txt b/test_tipc/configs/CSWinTransformer/CSWinTransformer_tiny_224_train_infer_python.txt index b869e2b201b573fc1ddbd236e95221e5114606de..b8b95f2f43a32e017e7535d5a4bf84347d63e994 100644 --- a/test_tipc/configs/CSWinTransformer/CSWinTransformer_tiny_224_train_infer_python.txt +++ b/test_tipc/configs/CSWinTransformer/CSWinTransformer_tiny_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_tiny_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 +norm_train:tools/train.py -c ppcls/configs/ImageNet/CSWinTransformer/CSWinTransformer_tiny_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA102_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA102_train_amp_infer_python.txt index bdc23ffa35fde038393a59546e3607573ee27ff8..794a10bb9d51a82dffd4200c0a90b825ae9e7d94 100644 --- a/test_tipc/configs/DLA/DLA102_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA102_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff 
--git a/test_tipc/configs/DLA/DLA102_train_infer_python.txt b/test_tipc/configs/DLA/DLA102_train_infer_python.txt index 238b06848ef7c860703983a61ba73de11e812e11..6de8b653d76345668836bc576d96778024b80fd1 100644 --- a/test_tipc/configs/DLA/DLA102_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA102_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA102x2_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA102x2_train_amp_infer_python.txt index 2552690f31120311edb7dee5c4b334bc131d01d2..c4df9e4f383c0889909c70248bc43e87b7e4595d 100644 --- a/test_tipc/configs/DLA/DLA102x2_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA102x2_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA102x2_train_infer_python.txt b/test_tipc/configs/DLA/DLA102x2_train_infer_python.txt index 2fbe7d4fd302e087a96c4a1b7f1bafd29a92cae5..eadc385aea11b804851314fe899459c3e58ee53d 100644 --- a/test_tipc/configs/DLA/DLA102x2_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA102x2_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA102x_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA102x_train_amp_infer_python.txt index f1c19957bde3d68021ae83056a7ef5d429bdade4..e1251f5699d5abea774e380e602715c9d0cecb40 100644 --- a/test_tipc/configs/DLA/DLA102x_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA102x_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null 
## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA102x_train_infer_python.txt b/test_tipc/configs/DLA/DLA102x_train_infer_python.txt index c67a35e1fea3babb966db260a2064644e8817e94..fc269e225fbc5a97672e634e7fedfdc1e149422b 100644 --- a/test_tipc/configs/DLA/DLA102x_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA102x_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA102x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA169_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA169_train_amp_infer_python.txt index 8ba21d09a359c1e86faf4557762edeaf4d911274..8e6e92549a535e6ef9ad08c5cb516658518e0f95 100644 --- a/test_tipc/configs/DLA/DLA169_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA169_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA169_train_infer_python.txt b/test_tipc/configs/DLA/DLA169_train_infer_python.txt index 51ddf32d65939b698d54814c199b79be641ecdfb..e01a74cc3928be9bbc44d14bedc9b30d8231062d 100644 --- a/test_tipc/configs/DLA/DLA169_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA169_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c 
ppcls/configs/ImageNet/DLA/DLA169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA34_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA34_train_amp_infer_python.txt index 5e63e494dcd60d83d21bc9f2050fe3e7f2d545de..fd2645506f92b23f3cac7f19d1a72f15fd3c0c6e 100644 --- a/test_tipc/configs/DLA/DLA34_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA34_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA34_train_infer_python.txt b/test_tipc/configs/DLA/DLA34_train_infer_python.txt index c9ba5e39724f9042a5c95059f09f82a31e53c0aa..02fa85932c539c7b28de277fbbfa4933b64d5334 100644 --- a/test_tipc/configs/DLA/DLA34_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA34_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA46_c_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA46_c_train_amp_infer_python.txt index a9ec71f73cb1e5c60d5b4a77da4be21f8f69e305..f51270db1523f23239c3398f768020265121244b 100644 --- a/test_tipc/configs/DLA/DLA46_c_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA46_c_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null 
distill_train:null diff --git a/test_tipc/configs/DLA/DLA46_c_train_infer_python.txt b/test_tipc/configs/DLA/DLA46_c_train_infer_python.txt index fa8d2fbaacb727d5580bbec6879bb836f10b275b..63dcd3c1f8a8548db62484a22a4f3e06edbf9c4a 100644 --- a/test_tipc/configs/DLA/DLA46_c_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA46_c_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA46x_c_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA46x_c_train_amp_infer_python.txt index 0325c27822419e75cb4c882749383528ac76bed9..d355e917b8ee0c3fdbb569c43b1f8b963310922b 100644 --- a/test_tipc/configs/DLA/DLA46x_c_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA46x_c_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA46x_c_train_infer_python.txt b/test_tipc/configs/DLA/DLA46x_c_train_infer_python.txt index 274bb9d0c4eba6049f31079295863a5c0f9ce2f7..d11ced7979e45c70b3d3a726057a0d28cc6915d7 100644 --- a/test_tipc/configs/DLA/DLA46x_c_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA46x_c_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA46x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA60_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA60_train_amp_infer_python.txt index ac7b55c22f25c1a14c7f5a61350b86611c776922..328980d0d6d3a8be154712ddbdcdbca3c0159cdd 100644 --- a/test_tipc/configs/DLA/DLA60_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA60_train_amp_infer_python.txt @@ -13,7 +13,7 @@ 
train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA60_train_infer_python.txt b/test_tipc/configs/DLA/DLA60_train_infer_python.txt index 85d80f6f63d9e6c2e0dd4c7ccbba423839a9bbbe..1a214c1954e2e41f9869ccb70ee8bfe598afee80 100644 --- a/test_tipc/configs/DLA/DLA60_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA60_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA60x_c_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA60x_c_train_amp_infer_python.txt index 7b66c715e6c95cdd30223f4efb61a846837d1e25..25934061117571dba16dfa6bcb3ed8e8bd15e66e 100644 --- a/test_tipc/configs/DLA/DLA60x_c_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA60x_c_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA60x_c_train_infer_python.txt b/test_tipc/configs/DLA/DLA60x_c_train_infer_python.txt index 62d9b02d5725991ac8a5aa7dfe53f0f777e19d5a..8cff0a4ff39594c3b5c1681b05c2751930368640 100644 --- a/test_tipc/configs/DLA/DLA60x_c_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA60x_c_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o 
DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x_c.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA60x_train_amp_infer_python.txt b/test_tipc/configs/DLA/DLA60x_train_amp_infer_python.txt index 4f775c9431e67ac4494f20771f8ba7800c327c3f..3b47ade74f9f8c6aaed2e93797ce59b52bc11ae0 100644 --- a/test_tipc/configs/DLA/DLA60x_train_amp_infer_python.txt +++ b/test_tipc/configs/DLA/DLA60x_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DLA/DLA60x_train_infer_python.txt b/test_tipc/configs/DLA/DLA60x_train_infer_python.txt index 2efc778d22f94a88720385486d45c6f2d32ce306..37bd0bbb38e2aa34aa1294449eab16faa5f1e27f 100644 --- a/test_tipc/configs/DLA/DLA60x_train_infer_python.txt +++ b/test_tipc/configs/DLA/DLA60x_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DLA/DLA60x.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN107_train_amp_infer_python.txt b/test_tipc/configs/DPN/DPN107_train_amp_infer_python.txt index 41102a8b14fe0c3e9c2923e88f19e193f68df6c2..595569ef41df77454a5cbeb3fa070ea2e3e2437c 100644 --- a/test_tipc/configs/DPN/DPN107_train_amp_infer_python.txt +++ b/test_tipc/configs/DPN/DPN107_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN107.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN107.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o 
Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN107_train_infer_python.txt b/test_tipc/configs/DPN/DPN107_train_infer_python.txt index c7161c187a9d57b589c0f769faa23e806268a14e..110bf85dd71fec781168a477acabb8dae3a40f0a 100644 --- a/test_tipc/configs/DPN/DPN107_train_infer_python.txt +++ b/test_tipc/configs/DPN/DPN107_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN107.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN107.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN131_train_amp_infer_python.txt b/test_tipc/configs/DPN/DPN131_train_amp_infer_python.txt index 1423a3e324df402f5e4cebb584d4616dd047e2ae..c19861de923e6d04158eec27c836b544f4ded60c 100644 --- a/test_tipc/configs/DPN/DPN131_train_amp_infer_python.txt +++ b/test_tipc/configs/DPN/DPN131_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN131.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN131.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN131_train_infer_python.txt b/test_tipc/configs/DPN/DPN131_train_infer_python.txt index 3ac518596e5fb241be0efbbf10bb61f640514bd9..15fc1e1808e96ccc6dff5e76925299e7637d6568 100644 --- a/test_tipc/configs/DPN/DPN131_train_infer_python.txt +++ b/test_tipc/configs/DPN/DPN131_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN131.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN131.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN68_train_amp_infer_python.txt b/test_tipc/configs/DPN/DPN68_train_amp_infer_python.txt index 1a8ae3d6168c6f34023c266ef6d413bc9db683c6..73a1939288616657728e26911e556282c65b09c1 100644 --- a/test_tipc/configs/DPN/DPN68_train_amp_infer_python.txt +++ 
b/test_tipc/configs/DPN/DPN68_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN68_train_infer_python.txt b/test_tipc/configs/DPN/DPN68_train_infer_python.txt index ed58c7ff4311b495ff2fa051c343a590a326319d..43b3f3ac8baa3121c9d24b016425abe16ec4377e 100644 --- a/test_tipc/configs/DPN/DPN68_train_infer_python.txt +++ b/test_tipc/configs/DPN/DPN68_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN92_train_amp_infer_python.txt b/test_tipc/configs/DPN/DPN92_train_amp_infer_python.txt index bccd5ffd03d320ca11d4105ce2055a0bf9177565..2b8702a71b6568474662245685fc98b3025bef9a 100644 --- a/test_tipc/configs/DPN/DPN92_train_amp_infer_python.txt +++ b/test_tipc/configs/DPN/DPN92_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN92.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN92.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN92_train_infer_python.txt b/test_tipc/configs/DPN/DPN92_train_infer_python.txt index 587ebc637ed54dbb2b2d253754658b9c5134fb0b..667c50676a744c72e4ea28ae4bddab66a338b8a7 100644 --- a/test_tipc/configs/DPN/DPN92_train_infer_python.txt +++ b/test_tipc/configs/DPN/DPN92_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN92.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN92.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN98_train_amp_infer_python.txt b/test_tipc/configs/DPN/DPN98_train_amp_infer_python.txt index 0e488de5e3d8502b53b8b0148e7769a21df0b4ad..50b9a7849bc632d403522760aead13fefa8f8a8b 100644 --- a/test_tipc/configs/DPN/DPN98_train_amp_infer_python.txt +++ b/test_tipc/configs/DPN/DPN98_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN98.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN98.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DPN/DPN98_train_infer_python.txt b/test_tipc/configs/DPN/DPN98_train_infer_python.txt index 54f07744dcc5fb113cc384f1849b1dc93dc25217..c91cd98f2a4de69a11a6a0c8a8c11a5011598579 100644 --- a/test_tipc/configs/DPN/DPN98_train_infer_python.txt +++ b/test_tipc/configs/DPN/DPN98_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN98.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DPN/DPN98.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DarkNet/DarkNet53_train_amp_infer_python.txt b/test_tipc/configs/DarkNet/DarkNet53_train_amp_infer_python.txt index 907c8369786e4cfe29526d2a2418bf055e20a6c8..523314069c78593bba679812dee96541f7f58dd0 100644 --- a/test_tipc/configs/DarkNet/DarkNet53_train_amp_infer_python.txt +++ b/test_tipc/configs/DarkNet/DarkNet53_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DarkNet/DarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DarkNet/DarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o 
AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DarkNet/DarkNet53_train_infer_python.txt b/test_tipc/configs/DarkNet/DarkNet53_train_infer_python.txt index 348b58c9d518f5a33276e4fd2b3e99b842f99a82..44dd10505b56c81954db4cfe73b0b7d20f9d6f37 100644 --- a/test_tipc/configs/DarkNet/DarkNet53_train_infer_python.txt +++ b/test_tipc/configs/DarkNet/DarkNet53_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DarkNet/DarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DarkNet/DarkNet53.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_amp_infer_python.txt b/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_amp_infer_python.txt index 1c52e7bb689f81d40b90c2f3c274fbe6f538c4e3..735c918ca124d398b7972f7ea5fd037817051d01 100644 --- a/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_amp_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_infer_python.txt b/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_infer_python.txt index b0f92bef2f3b741b17824145faacc8a4e597cd7f..fb44f6a7380239828bcdfa6bca91aab87af2c0a3 100644 --- a/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_base_patch16_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_amp_infer_python.txt b/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_amp_infer_python.txt index 964e8eadbcda29764c28d646a0f1984cf85f8195..821d97de81708a321ce07733b55745e40cbd6755 100644 --- a/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_amp_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_infer_python.txt b/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_infer_python.txt index 592d3165fc4fc630ba15b93a2c8952c03811b015..b6091739d7530fcb86d8d8b7acc0c2e781515c47 100644 --- a/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_base_patch16_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_amp_infer_python.txt b/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_amp_infer_python.txt index 32b63d774f447937d0863d7b8ffb85b290e15a2e..04bafcac45a3c3231baec6ebe406faf92321d66d 100644 --- a/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_amp_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null 
distill_train:null diff --git a/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_infer_python.txt b/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_infer_python.txt index 910708091a59a1462c4e6f4903e109fbb0d0a880..d6e4b3a277e6178f6b788730a00379cc1d86d4f2 100644 --- a/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_small_patch16_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_amp_infer_python.txt b/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_amp_infer_python.txt index 769f086946997e78bd0f8b0bfd83512d855c6ed3..dad76ab80912f8248b0295735fcd84fed8f01a7d 100644 --- a/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_amp_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_tiny_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_tiny_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_infer_python.txt b/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_infer_python.txt index b09ef91cee4f68ca9ce91e2bf973dba92ee61830..b0d4efd1c4d1fdf45216b9092f1c16b117f55a43 100644 --- a/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_infer_python.txt +++ b/test_tipc/configs/DeiT/DeiT_tiny_patch16_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_tiny_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DeiT/DeiT_tiny_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet121_train_amp_infer_python.txt 
b/test_tipc/configs/DenseNet/DenseNet121_train_amp_infer_python.txt index e66857b9d6f5527efa213faf2489c45d52a7eef6..557791cff6b26a81d251e8011350e689ecfe484b 100644 --- a/test_tipc/configs/DenseNet/DenseNet121_train_amp_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet121_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet121.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet121.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet121_train_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet121_train_infer_python.txt index d1a0decae7da2bccaf61fef70cd2c0d75ce39609..a1532be0a281bf7dd57bab92c406b9ff76516df1 100644 --- a/test_tipc/configs/DenseNet/DenseNet121_train_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet121_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet121.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet121.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet161_train_amp_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet161_train_amp_infer_python.txt index fa1a0a7e33973a575d379ec0b02b9fd6c7eef217..e8bd9feaa9844cc2291cfbb428c6100738805a9f 100644 --- a/test_tipc/configs/DenseNet/DenseNet161_train_amp_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet161_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet161.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet161.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet161_train_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet161_train_infer_python.txt index 
8175194cdf7cfee04f87c37e5d71854e311bfea4..34e744efd7df47ab775ecd2941b9f78cf5c56e42 100644 --- a/test_tipc/configs/DenseNet/DenseNet161_train_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet161_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet161.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet161.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet169_train_amp_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet169_train_amp_infer_python.txt index 55c4d6ec86540c175832c61af03006c5813ee184..177993bd3c27d325ec925e13c27e1d8aba523149 100644 --- a/test_tipc/configs/DenseNet/DenseNet169_train_amp_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet169_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet169_train_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet169_train_infer_python.txt index a14e5504d2e5ba1d1bc8be3f6c49e91b0dc52b74..85e996fbfda4b3c86084475c43f2251d27906e73 100644 --- a/test_tipc/configs/DenseNet/DenseNet169_train_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet169_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet169.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet201_train_amp_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet201_train_amp_infer_python.txt index 58250ce15a9227e9adcb8c7f2faaf89ded4b7062..3d85e34ee9120cb870574f14e54b682711b78ca8 100644 --- a/test_tipc/configs/DenseNet/DenseNet201_train_amp_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet201_train_amp_infer_python.txt @@ -13,7 +13,7 @@ 
train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet201.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet201.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet201_train_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet201_train_infer_python.txt index 9bb1a1031bd63f3ef580d23a3544c2bedc3d6a80..623d3f079c0038dbfaa856510903f206a0849f1b 100644 --- a/test_tipc/configs/DenseNet/DenseNet201_train_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet201_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet201.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet201.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet264_train_amp_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet264_train_amp_infer_python.txt index 26138926314320c8c9b5ee3831a06e036142d689..f24a6e7706f6053702db2835e9984a4a60ef247f 100644 --- a/test_tipc/configs/DenseNet/DenseNet264_train_amp_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet264_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet264.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet264.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/DenseNet/DenseNet264_train_infer_python.txt b/test_tipc/configs/DenseNet/DenseNet264_train_infer_python.txt index 29e9148171ba4974057876ca74f0e13f5c37b552..561348f693d50f0aa23d6c687916e2d5aca943fe 100644 --- a/test_tipc/configs/DenseNet/DenseNet264_train_infer_python.txt +++ b/test_tipc/configs/DenseNet/DenseNet264_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c 
ppcls/configs/ImageNet/DenseNet/DenseNet264.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/DenseNet/DenseNet264.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_amp_infer_python.txt b/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_amp_infer_python.txt index ab94039471ba1abaf035600a3351656a3f4e0f25..64c2383f0fd9f3725b79e5fa4e20d4b6d7e141c9 100644 --- a/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_amp_infer_python.txt +++ b/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dkd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dkd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Optimizer.lr.learning_rate=0.02 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_infer_python.txt b/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_infer_python.txt index a6a2c239a18bdde1c8cc9ef1c28524e0defcddad..081146d98ea52ea8a8c0e3fbdc7d93cd2b700137 100644 --- a/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_infer_python.txt +++ b/test_tipc/configs/Distillation/resnet34_distill_resnet18_dkd_train_infer_python.txt @@ -13,14 +13,14 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dkd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dkd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Optimizer.lr.learning_rate=0.02 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null null:null null:null ## -===========================eval_params=========================== +===========================eval_params=========================== eval:tools/eval.py -c ppcls/configs/ImageNet/Distillation/resnet34_distill_resnet18_dkd.yaml null:null ## diff --git a/test_tipc/configs/ESNet/ESNet_x0_25_train_amp_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x0_25_train_amp_infer_python.txt index 
7e9abcf47db884882e23008c9909dd3f334f4773..ae8ac4a664a541e653484771c3b9fe45c0316e0b 100644 --- a/test_tipc/configs/ESNet/ESNet_x0_25_train_amp_infer_python.txt +++ b/test_tipc/configs/ESNet/ESNet_x0_25_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ESNet/ESNet_x0_25_train_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x0_25_train_infer_python.txt index 03758fd6a4f772befbd9027a6a21bb6c9afcf0c9..1dbbe13aba69d1692fd27a35ff609ced2afe075b 100644 --- a/test_tipc/configs/ESNet/ESNet_x0_25_train_infer_python.txt +++ b/test_tipc/configs/ESNet/ESNet_x0_25_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ESNet/ESNet_x0_5_train_amp_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x0_5_train_amp_infer_python.txt index 7cd576052d101c9dbc065cfe6dae6a290a5f1490..5f57de7ad5c595ff41cfec8cb4738bb9eb541830 100644 --- a/test_tipc/configs/ESNet/ESNet_x0_5_train_amp_infer_python.txt +++ b/test_tipc/configs/ESNet/ESNet_x0_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ESNet/ESNet_x0_5_train_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x0_5_train_infer_python.txt index e7bd281d3c3f088d7beae77a76ba07a71deceff0..0347d06e2edf1e2b1d31b24258bd4d3f1bf4eaf8 100644 --- a/test_tipc/configs/ESNet/ESNet_x0_5_train_infer_python.txt +++ 
b/test_tipc/configs/ESNet/ESNet_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ESNet/ESNet_x0_75_train_amp_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x0_75_train_amp_infer_python.txt index aa715a5d0165b93e5d4808353ac4a75cda974064..fd5a5cdb49a16805dc75932dfffed8b9c6d51d99 100644 --- a/test_tipc/configs/ESNet/ESNet_x0_75_train_amp_infer_python.txt +++ b/test_tipc/configs/ESNet/ESNet_x0_75_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ESNet/ESNet_x0_75_train_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x0_75_train_infer_python.txt index 3823531c4e0e07ac7a2d9abe146655f607ecacb2..7dddf978cf8b806145d93b154833869543752f94 100644 --- a/test_tipc/configs/ESNet/ESNet_x0_75_train_infer_python.txt +++ b/test_tipc/configs/ESNet/ESNet_x0_75_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ESNet/ESNet_x1_0_train_amp_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x1_0_train_amp_infer_python.txt index 8ce5c4ca603eca59b8f4d7df3deb2fa31ca0ad3f..040c6a6576dd5398ff70335359ff73ad141472fe 100644 --- a/test_tipc/configs/ESNet/ESNet_x1_0_train_amp_infer_python.txt +++ b/test_tipc/configs/ESNet/ESNet_x1_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ESNet/ESNet_x1_0_train_infer_python.txt b/test_tipc/configs/ESNet/ESNet_x1_0_train_infer_python.txt index 3f44444c99ea223fedaff424348a7e0452bbd1ce..a4e958f885196303a8a43fbe78f6c1b2688b6890 100644 --- a/test_tipc/configs/ESNet/ESNet_x1_0_train_infer_python.txt +++ b/test_tipc/configs/ESNet/ESNet_x1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ESNet/ESNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB0_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB0_train_amp_infer_python.txt index effea01493bf4ee309feea2ab71947117621df74..3562d1c147823f52b0a6c52f30d3811db4dfae21 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB0_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB0_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB0_train_infer_python.txt index b3fdabf5f265ecbd47ce5c742d584a90353db513..65b88ebf8d754a8ab3b31f7cd316da39a5da97ea 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB0_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o 
DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB1_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB1_train_amp_infer_python.txt index ba5fa4d2067ddd98058ecc0b762292bf27219cb7..8518e8b8cdc02f2c031538c76c700551c41e243e 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB1_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB1_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB1_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB1_train_infer_python.txt index 058496deb6159dcc35dfe4bfd52714d89d2882f6..54bae3aeb0e7b53cceb039b7cceb5478e1e059d9 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB1_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB1_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB2_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB2_train_amp_infer_python.txt index a045ea9fc1ff69b039b78f92dd448e3fc53927cb..6a74b4d103dd9fc8576ba2e7441091da1bf89f3e 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB2_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB2_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 
+amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB2_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB2_train_infer_python.txt index 790d6809c4d901d655de79b0439969d5260fca35..128ad101a8bea6e6d8300ae6393ee33aa6ba8dc8 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB2_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB2_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB3_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB3_train_amp_infer_python.txt index dc3a03297aee923f0476c6193b14482da89292bb..202fe8449805a56511281725f55eeb37307fc3bc 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB3_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB3_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB3_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB3_train_infer_python.txt index 835b6d2e1121eff4d057639e5239b5e44c9b418e..38e06d913bb64945fdfa1ff39bdcd9bc392298e0 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB3_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB3_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c 
ppcls/configs/ImageNet/EfficientNet/EfficientNetB3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB4_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB4_train_amp_infer_python.txt index 1c63f3b742101adff3015614782065a3a6f64094..a3f529f2ff1290dd1a3a1c752059f7752df83986 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB4_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB4_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB4_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB4_train_infer_python.txt index 2f1e84f679a176e549e03e49a493d33e50a2a18b..175d10d31acf6ddd2dca9e206fdc163cbd32837c 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB4_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB4_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB5_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB5_train_amp_infer_python.txt index 60d0eeac26259f0a986332dfb19b849a25d2ae08..66512927b9e39948ed191f42d9d8fcfc37cc345e 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB5_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c 
ppcls/configs/ImageNet/EfficientNet/EfficientNetB5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB5_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB5_train_infer_python.txt index 7399a866031a07543c4f60888abfb1c73c66493f..8c765f48be8ee77f18e39daf24665e1f199ccded 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB5_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB6_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB6_train_amp_infer_python.txt index ce07cce099b56e668e54672a4783ebaa8041fe25..ff0daf28e432b91527a6425a1232d459ffae00fd 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB6_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB6_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB6.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB6.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB6_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB6_train_infer_python.txt index b0f7b2709d562f289d816158147933850c818246..f22679b22596116d5729298ce8f6b7c56baead73 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB6_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB6_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB6.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB6.yaml -o 
Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB7_train_amp_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB7_train_amp_infer_python.txt index d8eff7add7411b105b0996c5d91166e4689f6ef4..abbdf2c4e09935187fdf87eaf1623d0851e6fc81 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB7_train_amp_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB7_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB7.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB7.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/EfficientNet/EfficientNetB7_train_infer_python.txt b/test_tipc/configs/EfficientNet/EfficientNetB7_train_infer_python.txt index a45d95aa1a6ec2f7a0d1df9c06725c7f4018629c..49ee48e40b7c2439086d8ad3d0bee3581be610ab 100644 --- a/test_tipc/configs/EfficientNet/EfficientNetB7_train_infer_python.txt +++ b/test_tipc/configs/EfficientNet/EfficientNetB7_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB7.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/EfficientNet/EfficientNetB7.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5_train_infer_python.txt b/test_tipc/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5_train_infer_python.txt index 1f0979b19e4cde9570f5003cc36a0a3e069b2ffe..b76d5d1b626106d4a1420a74148621eace5c09fd 100644 --- a/test_tipc/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5_train_infer_python.txt +++ b/test_tipc/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml -o Global.seed=1234 
-o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/GhostNet/GhostNet_x0_5_train_amp_infer_python.txt b/test_tipc/configs/GhostNet/GhostNet_x0_5_train_amp_infer_python.txt index 0861f243efb67150067ff694cc7be2fe11ef080c..22a418d8c066e76eb9d0c59b7b399d823dfb8e68 100644 --- a/test_tipc/configs/GhostNet/GhostNet_x0_5_train_amp_infer_python.txt +++ b/test_tipc/configs/GhostNet/GhostNet_x0_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/GhostNet/GhostNet_x0_5_train_infer_python.txt b/test_tipc/configs/GhostNet/GhostNet_x0_5_train_infer_python.txt index 1ecf72dcab888491c67a3162352c5332fa746b23..10ed44102759c32a5293b15c9070ba07112621b6 100644 --- a/test_tipc/configs/GhostNet/GhostNet_x0_5_train_infer_python.txt +++ b/test_tipc/configs/GhostNet/GhostNet_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/GhostNet/GhostNet_x1_0_train_amp_infer_python.txt b/test_tipc/configs/GhostNet/GhostNet_x1_0_train_amp_infer_python.txt index 3132830ca27910adce290b231257ebd4e1506f79..e537c5181481b8e254f4eda919c282d67c75f8f0 100644 --- a/test_tipc/configs/GhostNet/GhostNet_x1_0_train_amp_infer_python.txt +++ b/test_tipc/configs/GhostNet/GhostNet_x1_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o 
AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/GhostNet/GhostNet_x1_0_train_infer_python.txt b/test_tipc/configs/GhostNet/GhostNet_x1_0_train_infer_python.txt index 0bba3a6ee7313efa49004cdbf83bc4bebbb85685..804eaaa5159323d011f1cd7c1877b561ba746651 100644 --- a/test_tipc/configs/GhostNet/GhostNet_x1_0_train_infer_python.txt +++ b/test_tipc/configs/GhostNet/GhostNet_x1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/GhostNet/GhostNet_x1_3_train_amp_infer_python.txt b/test_tipc/configs/GhostNet/GhostNet_x1_3_train_amp_infer_python.txt index e5ad93bc1fc381add7241943744f731f6cb75bf4..92c7071c77574214c471ecb86cc8787c3ab9280c 100644 --- a/test_tipc/configs/GhostNet/GhostNet_x1_3_train_amp_infer_python.txt +++ b/test_tipc/configs/GhostNet/GhostNet_x1_3_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/GhostNet/GhostNet_x1_3_train_infer_python.txt b/test_tipc/configs/GhostNet/GhostNet_x1_3_train_infer_python.txt index d3639ec2e9dbf4a9a054d6574e79085d76ac4a38..769761b16207bc190b335ac2ebf7cb60b46a1259 100644 --- a/test_tipc/configs/GhostNet/GhostNet_x1_3_train_infer_python.txt +++ b/test_tipc/configs/GhostNet/GhostNet_x1_3_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/GhostNet/GhostNet_x1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/HRNet/HRNet_W18_C_train_amp_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W18_C_train_amp_infer_python.txt index 72014b2741530ca16aa508bc61e1f6d6253f0d20..d5c6d572cc811652f92030a1f4c2a47cb42c303a 100644 --- a/test_tipc/configs/HRNet/HRNet_W18_C_train_amp_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W18_C_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W18_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W18_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W18_C_train_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W18_C_train_infer_python.txt index 543634ee4b970abd9b0b698866ab6f9f4484ac7d..adda057a351ed80af464d5d08ff9c86f0753ed37 100644 --- a/test_tipc/configs/HRNet/HRNet_W18_C_train_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W18_C_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W18_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W18_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W30_C_train_amp_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W30_C_train_amp_infer_python.txt index 0a38ecc036167f4d35592d78ae9b51c63e096cce..68bff780a562f5308938a54d55abf8f8abdf5fba 100644 --- a/test_tipc/configs/HRNet/HRNet_W30_C_train_amp_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W30_C_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W30_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W30_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W30_C_train_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W30_C_train_infer_python.txt index 
bd9523869b84221d0b5a62b3da18a686c47d92c4..6bd6103855191e0ee2684fb0b5970279cb159d56 100644 --- a/test_tipc/configs/HRNet/HRNet_W30_C_train_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W30_C_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W30_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W30_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W32_C_train_amp_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W32_C_train_amp_infer_python.txt index bcdd374618799e8f7cc519f77f40b36ad3a57e51..3913f4c43b19db9ae8ed81967f4524d4c5316c0d 100644 --- a/test_tipc/configs/HRNet/HRNet_W32_C_train_amp_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W32_C_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W32_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W32_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W32_C_train_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W32_C_train_infer_python.txt index 38b80137af763784c031cc758acf6e20acab64c8..17f3f7869dfb800572ab3605518bdcad959f591c 100644 --- a/test_tipc/configs/HRNet/HRNet_W32_C_train_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W32_C_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W32_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W32_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W40_C_train_amp_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W40_C_train_amp_infer_python.txt index 3825c10286046c98b06aadc17d1581e293967d59..b4f4fc45ec6f7797e185e5172b9c28167831a724 100644 --- a/test_tipc/configs/HRNet/HRNet_W40_C_train_amp_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W40_C_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## 
trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W40_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W40_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W40_C_train_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W40_C_train_infer_python.txt index a9cb9b06002fb6d056009cfb3d51a33936956df2..e86fcf6b5e88651c0c05d81ad22589a975b7c1d4 100644 --- a/test_tipc/configs/HRNet/HRNet_W40_C_train_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W40_C_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W40_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W40_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W44_C_train_amp_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W44_C_train_amp_infer_python.txt index f7bdb981a921b7971969b25e28c2ab56535546f6..05c2e0d0d5a44ed4c4b72cce43e67d01303b7d8d 100644 --- a/test_tipc/configs/HRNet/HRNet_W44_C_train_amp_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W44_C_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W44_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W44_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W44_C_train_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W44_C_train_infer_python.txt index aa57ca4f574cd87e35c441c9f9d80b73be8a1c2f..9abc3eef09499643dd2ab66733d93773d798a9aa 100644 --- a/test_tipc/configs/HRNet/HRNet_W44_C_train_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W44_C_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W44_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W44_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W48_C_train_amp_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W48_C_train_amp_infer_python.txt index 56cc623d73b357c873d3f75535cfc62882030db5..cd8e05db12674e46d61cde372920d56262c9b64c 100644 --- a/test_tipc/configs/HRNet/HRNet_W48_C_train_amp_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W48_C_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W48_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W48_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W48_C_train_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W48_C_train_infer_python.txt index 7f906e959d6462a5c0d43a6aa93c67d378ae0814..c83d9a6e8b3adb5a091a4d72c23f22c004a44acb 100644 --- a/test_tipc/configs/HRNet/HRNet_W48_C_train_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W48_C_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W48_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W48_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W64_C_train_amp_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W64_C_train_amp_infer_python.txt index daf5d0ab12fc1c0fd3a29a677dd178e5a16c1432..2784cc488f64425b857e43175beabc21d1a2209c 100644 --- a/test_tipc/configs/HRNet/HRNet_W64_C_train_amp_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W64_C_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W64_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W64_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False 
-o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HRNet/HRNet_W64_C_train_infer_python.txt b/test_tipc/configs/HRNet/HRNet_W64_C_train_infer_python.txt index 0c8e97dec541f12eab0120975d26c9cad9c1ebe2..cb7b082574906433c947ca3b6ecc619076641c96 100644 --- a/test_tipc/configs/HRNet/HRNet_W64_C_train_infer_python.txt +++ b/test_tipc/configs/HRNet/HRNet_W64_C_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W64_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HRNet/HRNet_W64_C.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HarDNet/HarDNet39_ds_train_amp_infer_python.txt b/test_tipc/configs/HarDNet/HarDNet39_ds_train_amp_infer_python.txt index 122dafe1a411b40210ae74864aee4e1e2b1f0f3e..ae9c892c38dccde60e4fc6eb9cac24c14a422e4c 100644 --- a/test_tipc/configs/HarDNet/HarDNet39_ds_train_amp_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet39_ds_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet39_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet39_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HarDNet/HarDNet39_ds_train_infer_python.txt b/test_tipc/configs/HarDNet/HarDNet39_ds_train_infer_python.txt index 4b9cc1106e09a9c96cdb4b0c95d74c769a63b103..f57b052cfa3726ac5716f521038b5a03867f4e6a 100644 --- a/test_tipc/configs/HarDNet/HarDNet39_ds_train_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet39_ds_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet39_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet39_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null 
diff --git a/test_tipc/configs/HarDNet/HarDNet68_ds_train_amp_infer_python.txt b/test_tipc/configs/HarDNet/HarDNet68_ds_train_amp_infer_python.txt index 8d27c482d6496430a0f982e0031cb503036caab4..811a037c223f4ce658dce1c03ca32d4793b3d14c 100644 --- a/test_tipc/configs/HarDNet/HarDNet68_ds_train_amp_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet68_ds_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HarDNet/HarDNet68_ds_train_infer_python.txt b/test_tipc/configs/HarDNet/HarDNet68_ds_train_infer_python.txt index 13daded49d482ca57a9af3380340950b983f0fab..098150b779eefdb9ba8bb3ff8badf40766618111 100644 --- a/test_tipc/configs/HarDNet/HarDNet68_ds_train_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet68_ds_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68_ds.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HarDNet/HarDNet68_train_amp_infer_python.txt b/test_tipc/configs/HarDNet/HarDNet68_train_amp_infer_python.txt index 88c82cc41e2c09ee53ca6999a8c14a64125db6e3..abc5ed5c449f190db51f4a217b49c59ff29f163f 100644 --- a/test_tipc/configs/HarDNet/HarDNet68_train_amp_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet68_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HarDNet/HarDNet68_train_infer_python.txt 
b/test_tipc/configs/HarDNet/HarDNet68_train_infer_python.txt index df2aecbfaa60f77cf527931f07f04008fda9089a..7e01a15cc05a3fc5442ded814d7587d19760cfd7 100644 --- a/test_tipc/configs/HarDNet/HarDNet68_train_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet68_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet68.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HarDNet/HarDNet85_train_amp_infer_python.txt b/test_tipc/configs/HarDNet/HarDNet85_train_amp_infer_python.txt index 5c3a73dfa2eacb6b35c2dd296dc3126d921acd75..1bd92977606039a2d94f7dc98296246e6d41fcec 100644 --- a/test_tipc/configs/HarDNet/HarDNet85_train_amp_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet85_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet85.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet85.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/HarDNet/HarDNet85_train_infer_python.txt b/test_tipc/configs/HarDNet/HarDNet85_train_infer_python.txt index a596cc7b804ac892af9d429b202dc0b038bb75ae..963469c374bec3af0000f1c4bf3b1033381ed415 100644 --- a/test_tipc/configs/HarDNet/HarDNet85_train_infer_python.txt +++ b/test_tipc/configs/HarDNet/HarDNet85_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet85.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/HarDNet/HarDNet85.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Inception/GoogLeNet_train_amp_infer_python.txt b/test_tipc/configs/Inception/GoogLeNet_train_amp_infer_python.txt index 0108dff5d22379555de2c0e973d03459c4fff04a..3b436eb15b85b2bb5d9c08621c0b214450bab838 100644 --- a/test_tipc/configs/Inception/GoogLeNet_train_amp_infer_python.txt +++ b/test_tipc/configs/Inception/GoogLeNet_train_amp_infer_python.txt @@ 
-13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Inception/GoogLeNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Optimizer.lr.learning_rate=0.0001 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Inception/GoogLeNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Optimizer.lr.learning_rate=0.0001 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Inception/GoogLeNet_train_infer_python.txt b/test_tipc/configs/Inception/GoogLeNet_train_infer_python.txt index 611977af019d0ebeadf1f542d3cf42d4565a09a4..0479d983608aba0991ad62ff80538fbdfa991176 100644 --- a/test_tipc/configs/Inception/GoogLeNet_train_infer_python.txt +++ b/test_tipc/configs/Inception/GoogLeNet_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Inception/GoogLeNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Optimizer.lr.learning_rate=0.0001 +norm_train:tools/train.py -c ppcls/configs/ImageNet/Inception/GoogLeNet.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Optimizer.lr.learning_rate=0.0001 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Inception/InceptionV3_train_amp_infer_python.txt b/test_tipc/configs/Inception/InceptionV3_train_amp_infer_python.txt index ce9c3477bef4287532d6169b6e1f8780160d38ae..3aa8d82319fdf2df7d13a0488355dc703c67b35a 100644 --- a/test_tipc/configs/Inception/InceptionV3_train_amp_infer_python.txt +++ b/test_tipc/configs/Inception/InceptionV3_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Inception/InceptionV3_train_infer_python.txt b/test_tipc/configs/Inception/InceptionV3_train_infer_python.txt index 3a8b69d9b4e684ad4a7df8775e848d33b627c9d4..ef4ce7f09f0d5f0c090ef0d4d39e72e490de9952 100644 --- a/test_tipc/configs/Inception/InceptionV3_train_infer_python.txt +++ 
b/test_tipc/configs/Inception/InceptionV3_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Inception/InceptionV4_train_amp_infer_python.txt b/test_tipc/configs/Inception/InceptionV4_train_amp_infer_python.txt index 853c1b7da138ecdbea17492e2e5d83515f9887fc..058da3dd226ee92a0687a29c76de8d92e1fdd721 100644 --- a/test_tipc/configs/Inception/InceptionV4_train_amp_infer_python.txt +++ b/test_tipc/configs/Inception/InceptionV4_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Inception/InceptionV4_train_infer_python.txt b/test_tipc/configs/Inception/InceptionV4_train_infer_python.txt index 396577867a17c3208cfbdd085e6891b8fdd16f3d..7c6018e22b529b8cfd52582ef72bcc8739f1ea72 100644 --- a/test_tipc/configs/Inception/InceptionV4_train_infer_python.txt +++ b/test_tipc/configs/Inception/InceptionV4_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Inception/InceptionV4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_128S_train_amp_infer_python.txt b/test_tipc/configs/LeViT/LeViT_128S_train_amp_infer_python.txt index d973ec5174940d92c629bf5be745a74c4d73a3d7..78120bc8e00b2ecfe7626662a236f3826bdd5cb4 100644 --- a/test_tipc/configs/LeViT/LeViT_128S_train_amp_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_128S_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_128S.yaml -o 
Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_128S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_128S_train_infer_python.txt b/test_tipc/configs/LeViT/LeViT_128S_train_infer_python.txt index 6e8f1c62c4912e9ba5d36b5c2520f44c35e594c9..682f806c8e1d1bf7503faa6ff2818b7f7bf1a57d 100644 --- a/test_tipc/configs/LeViT/LeViT_128S_train_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_128S_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_128S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_128S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_128_train_amp_infer_python.txt b/test_tipc/configs/LeViT/LeViT_128_train_amp_infer_python.txt index fb884b24b97cec70913380aee85f28e0c1d66f5f..34ff908a633f8d4fad0d9f59aa1fdd7289f521f7 100644 --- a/test_tipc/configs/LeViT/LeViT_128_train_amp_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_128_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_128.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_128.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_128_train_infer_python.txt b/test_tipc/configs/LeViT/LeViT_128_train_infer_python.txt index 1c2ce7026f84fad1af455fff827fccbdd055d65f..05dc2c62f48c584cd3c99a6a2ff8254ba454f89b 100644 --- a/test_tipc/configs/LeViT/LeViT_128_train_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_128_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_128.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c 
ppcls/configs/ImageNet/LeViT/LeViT_128.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_192_train_amp_infer_python.txt b/test_tipc/configs/LeViT/LeViT_192_train_amp_infer_python.txt index 0ecb7e096a8983b714d489a15176df92ebd2b1fa..7c4e6feea186ca22e29b9a410743c43df4de63b4 100644 --- a/test_tipc/configs/LeViT/LeViT_192_train_amp_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_192_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_192.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_192.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_192_train_infer_python.txt b/test_tipc/configs/LeViT/LeViT_192_train_infer_python.txt index b266545f0e197f65dbfdf82da9a44a18f80f723b..b05156d40f0b20b33e70e4d78b82b36d53740c9b 100644 --- a/test_tipc/configs/LeViT/LeViT_192_train_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_192_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_192.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_192.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_256_train_amp_infer_python.txt b/test_tipc/configs/LeViT/LeViT_256_train_amp_infer_python.txt index dbea83a768e91287e98bfdf6125bf1d786f0ded5..65d718e2ae26b15ec802c9f624ed8867f2be15a6 100644 --- a/test_tipc/configs/LeViT/LeViT_256_train_amp_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_256_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_256.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_256.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o 
AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_256_train_infer_python.txt b/test_tipc/configs/LeViT/LeViT_256_train_infer_python.txt index e67d338830b6ef94559dc76e3e20e554bdadbd16..5236931f04adcb4af3340cab54410670311a5e19 100644 --- a/test_tipc/configs/LeViT/LeViT_256_train_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_256_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_256.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_256.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_384_train_amp_infer_python.txt b/test_tipc/configs/LeViT/LeViT_384_train_amp_infer_python.txt index c7a243e79f07e62cc3c39680398889101bd2cb1d..14ea9bf91db668f1f3316a4484d27162433d5918 100644 --- a/test_tipc/configs/LeViT/LeViT_384_train_amp_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/LeViT/LeViT_384_train_infer_python.txt b/test_tipc/configs/LeViT/LeViT_384_train_infer_python.txt index 06d82caed7c312036ae9ec87e0015109d91c3e0f..cb423513063857dda73000b196f04fb3f710f40a 100644 --- a/test_tipc/configs/LeViT/LeViT_384_train_infer_python.txt +++ b/test_tipc/configs/LeViT/LeViT_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/LeViT/LeViT_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MixNet/MixNet_L_train_amp_infer_python.txt b/test_tipc/configs/MixNet/MixNet_L_train_amp_infer_python.txt index a70575726f4690e0abe8c698c7674918a3c64dc4..946f448c64aa473864aafa1b1db3e12f8677cd62 
100644 --- a/test_tipc/configs/MixNet/MixNet_L_train_amp_infer_python.txt +++ b/test_tipc/configs/MixNet/MixNet_L_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_L.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_L.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MixNet/MixNet_L_train_infer_python.txt b/test_tipc/configs/MixNet/MixNet_L_train_infer_python.txt index 47a16275ddfdcc44d31409829be850ae5350b4ff..66e9170f25a7301ecbbcf6404a6a3f155091fdfe 100644 --- a/test_tipc/configs/MixNet/MixNet_L_train_infer_python.txt +++ b/test_tipc/configs/MixNet/MixNet_L_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_L.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_L.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MixNet/MixNet_M_train_amp_infer_python.txt b/test_tipc/configs/MixNet/MixNet_M_train_amp_infer_python.txt index 66c5e83d7d6a64281e3c8c48f3a75641dd2ba826..2304e5dcbe9261162bcdce183ef2523cf6509a85 100644 --- a/test_tipc/configs/MixNet/MixNet_M_train_amp_infer_python.txt +++ b/test_tipc/configs/MixNet/MixNet_M_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_M.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_M.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MixNet/MixNet_M_train_infer_python.txt b/test_tipc/configs/MixNet/MixNet_M_train_infer_python.txt index 3c2ab6218c88563b81c2143f0bfde4ed46adc737..4cc7b49e50e91ca5b23b1b423fac815cf8b57685 100644 --- a/test_tipc/configs/MixNet/MixNet_M_train_infer_python.txt +++ b/test_tipc/configs/MixNet/MixNet_M_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val 
null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_M.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_M.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MixNet/MixNet_S_train_amp_infer_python.txt b/test_tipc/configs/MixNet/MixNet_S_train_amp_infer_python.txt index ac256e161cf75a7dc1d3978af7ba2a64fd1c2cc9..94fa68ab2f8bf433656bdaecd924541b24266202 100644 --- a/test_tipc/configs/MixNet/MixNet_S_train_amp_infer_python.txt +++ b/test_tipc/configs/MixNet/MixNet_S_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MixNet/MixNet_S_train_infer_python.txt b/test_tipc/configs/MixNet/MixNet_S_train_infer_python.txt index c5e55ed5b26948e7620914b0fa15a9adc95b5053..158f012086e8997311d9ff34165df151be826e6e 100644 --- a/test_tipc/configs/MixNet/MixNet_S_train_infer_python.txt +++ b/test_tipc/configs/MixNet/MixNet_S_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MixNet/MixNet_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_train_amp_infer_python.txt index db98a5f6ed047a7236d0a7a1bbfa91e0f5093995..78401c81858400c8107db9fc0567a0fcf5aa18de 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o 
AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_train_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_train_infer_python.txt index d6876ee614411904267d2a4567b00f5d8c7f2a8e..3353451fca1667de83a84b4b9137763caa1a3e81 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_amp_infer_python.txt index 68106952dfb6612916963c5c1f6d7c932eeffe23..02b8e8a1dc1c7f69ab0a57da62013a0083fd51fd 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_infer_python.txt index 38e005d3fe911e98519db56e154400de5b977b74..0ad8d79eb654a349d3eee01c9819ec086e7a3253 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_25_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False 
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_amp_infer_python.txt index 0552171764cdaeb448267cf5c95988354d0691bc..d1e25d31fccaaf6edd40b5a07965fbf4450ab36d 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_infer_python.txt index 20abac7b74dd248c4f224b7541fcc651e11ebad0..f2f01610ea08e6c19443a07a585fb1dc839ae5f5 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_amp_infer_python.txt index 4d876f8fb50e93fdfd53a9bcbb2f8c8f13a687e4..d518b311b9621b6f3b67c3b0b1b1cb46c0aaa2b1 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 
+amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_infer_python.txt b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_infer_python.txt index 1d02f6682b5b93834dc5b162b0fa1dfe71d71547..643eaf800094d44291f7edc946bcb6781d437b94 100644 --- a/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV1/MobileNetV1_x0_75_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV1/MobileNetV1_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_train_amp_infer_python.txt index 840a0f598abf0bd047f7ed8cf7085974fb856e5f..e60caec71107e4a174e9fb2c2d5a62f595c5df61 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_train_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_train_infer_python.txt index 22150db578442772c8cf6c08a5d7be8ad1241e1d..6ff01e8aaa8a371f50486afffacbaac083a567be 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2.yaml -o 
Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_amp_infer_python.txt index 1c2718f405a059c9eebdd7e0ca60edd17e065a8a..53e4e37dce425aa76b76c011e07c55a471def269 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_infer_python.txt index 8774da111647bd95692574353e42045240692a1c..202e33996a55e096355a43f1911c224a23501c6f 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_25_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_amp_infer_python.txt index 03f8073e5e1047be5024dc04c7a34ba8943246c0..2373929e35fdf9426fd95ad6660e0aa9341bdbaf 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_5.yaml -o Global.seed=1234 
-o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_infer_python.txt index 0e58ffeb0a60b8ae992e98ca7ce4e4bfed2d0c1e..38419a97f0ae73cb5676139c9b7a33b5dca64b42 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_amp_infer_python.txt index 93547f8dc4e19ffa13bb4662619e9fb908a67dbb..4f35a9554f899a13487bff08b25a9b0d6645ba29 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_infer_python.txt index 8705c9dd82a2b9e17dfdeafc73ee58787cb4f8d7..2ccfad745b1994a919602781f03107552da6e242 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x0_75_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x0_75.yaml -o Global.seed=1234 -o 
DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_amp_infer_python.txt index 948db736c5d1e287a1d8c9378f1106365a30a3ad..82739e8e6dd1496f17afca834e7beeaa18b9b6a0 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_infer_python.txt index 0bbc833a19a8e828710d5c3abb481059196dac6c..d67569b4611ad8664b1cc2724ecbac9c9492d08d 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x1_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_amp_infer_python.txt index 6b1b20096b72760432ee59dc6ebc21028eee1b86..66a074ddb15f7995a509d0316e3d46c753d7428f 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x2_0.yaml -o Global.seed=1234 -o 
DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_infer_python.txt b/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_infer_python.txt index 48679714d663484a5ff34cb8b6e443defde1425c..c86f61c97a83024f3739aaba7f3883e02da0df87 100644 --- a/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV2/MobileNetV2_x2_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV2/MobileNetV2_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_amp_infer_python.txt index 72c0b5f4de86fd39d1c634f8d7f1afee36ec74ff..f1fc43b3253310ec80110df3596f37f053797364 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_infer_python.txt index 383b3690fc00f726d72017228ef85aef49f8b37c..c7b570a3d532b3a431ccf4ad276de396616bdf00 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_35_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c 
ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_amp_infer_python.txt index 28cd39dd69bfbb065cbfae257892ad9540d5a5e3..98baf3c51b6830e7a17195c5af3357bce2e16e4c 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_infer_python.txt index 5dcaf223d09f827aeaceee5102331e8959f06620..1fe9714aa20e77eec532736c02cacb34cc159f2d 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_amp_infer_python.txt index f3749084578921829b31a896adc2906d0afc2ca4..f6bababdb07b46d73f655e6926501102bdfe96fd 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o 
AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_infer_python.txt index b1a735382051ea6297f239dbc44eb36da8b68830..0d2c14488f7e3d1d7607e6ecd90a198b01005135 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x0_75_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_FPGM_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_FPGM_train_amp_infer_python.txt index 5c6dd683c4dfde02de4747e9429bdb6a9f37a1f2..c460be0bec309203cc0331701247eacae3969dbc 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_FPGM_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_FPGM_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/slim/MobileNetV3_large_x1_0_prune.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/slim/MobileNetV3_large_x1_0_prune.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_PACT_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_PACT_train_amp_infer_python.txt index 20f2d7b947f54744be0789742b5196495f7816e0..54af86fc31128b9c2a24cd02fa2ffa403a2c720c 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_PACT_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_PACT_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c 
ppcls/configs/slim/MobileNetV3_large_x1_0_quantization.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/slim/MobileNetV3_large_x1_0_quantization.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_amp_infer_python.txt index 61e864c79e0b905e55d0c83e90980f90e1ae28db..87c5087581dd66fa44df1834303e89d6e27a5f1c 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_infer_python.txt index c1ef5d1c9422a631a4dffdef56e0c4252974894f..1f90b986281a1da9090f5f0170f1c5d9886e7fbb 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_amp_infer_python.txt index 3b3ee3adad01e951615f10d57ce1e7e43cd64315..c874477f76a11c46df79b65067df02e6c32be778 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_amp_infer_python.txt +++ 
b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_infer_python.txt index 449a41a46d28cbc12770343c97a5379702b1e0c0..324f872715a3e16a531f7836b28d13fb9e25ac6b 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_large_x1_25_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_amp_infer_python.txt index b42def4c8408d4570f03ddfe3337f704227d5fe4..01cf34916c90548750fd516d3bc50578804f39eb 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_infer_python.txt index 
a3e9c1807fc62c722b9e1487967cc644f8d95a87..6e48831c6b6e0ddc78214c43dbcfc050abc7455d 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_35_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_amp_infer_python.txt index 1382431826b34703e0e59fd1524390a1a9b4aeb3..590b5fd2c14156c2143a387b5ddc3d134f03e52a 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_infer_python.txt index 6f5e8e0d20c890fcaabf8a9accd419821cf40c11..24c40a1ddc83665fcbc986938278f1d4e277000a 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_amp_infer_python.txt 
index 489b9408fdf2df35159e4e349b21f8c307a5a7b7..1dfbfc2d7d97a4e607e5d57785bae6dfab3593d4 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_infer_python.txt index cb2fb6078ab1972637a314fb99402d957b0aba76..4d99cabf8bae4b3ea1211da5d274adc140055fad 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x0_75_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_amp_infer_python.txt index baa1546daa565741884f93121739b361dff4ad88..da2e44c3a7476089f4231025f62184b652e9a02e 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_infer_python.txt index 2f77738dc6cb3cd90974d97849003a6e53d64cd4..c07de3e7fb1b2570b1888595d3ff1ad16cb56a1b 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_amp_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_amp_infer_python.txt index e34e17696b728c9071cf3bd6d65e4cd404b7c348..5c1c2536d8f668427d01c11d5949182e33e9b74d 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_amp_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_infer_python.txt b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_infer_python.txt index af05805fe5ac542a32c937199d65e6e3e07a1998..318b1e51ccc69e669e2cf7755922a7460af5fbbd 100644 --- a/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_infer_python.txt +++ b/test_tipc/configs/MobileNetV3/MobileNetV3_small_x1_25_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_small_x1_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff 
--git a/test_tipc/configs/MobileViT/MobileViT_S_train_infer_python.txt b/test_tipc/configs/MobileViT/MobileViT_S_train_infer_python.txt index 23f17c907f099cdab0159fb53237a47f430dd0a0..0c54292e2cf7f95a261bc5eb6fcf7cd9d8f1ba26 100644 --- a/test_tipc/configs/MobileViT/MobileViT_S_train_infer_python.txt +++ b/test_tipc/configs/MobileViT/MobileViT_S_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViT/MobileViT_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViT/MobileViT_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileViT/MobileViT_XS_train_infer_python.txt b/test_tipc/configs/MobileViT/MobileViT_XS_train_infer_python.txt index af52268b4d0dc7820d7ab841c8f3bf10e4411cc2..d3ea3c90a4da80d1be85606d945f8aca4eded157 100644 --- a/test_tipc/configs/MobileViT/MobileViT_XS_train_infer_python.txt +++ b/test_tipc/configs/MobileViT/MobileViT_XS_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViT/MobileViT_XS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViT/MobileViT_XS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/MobileViT/MobileViT_XXS_train_infer_python.txt b/test_tipc/configs/MobileViT/MobileViT_XXS_train_infer_python.txt index af8baf43b84a156764ec03ff593c647cc1d6050e..944c90098ebd6bd8f9c00b4cd73dbbabd37f8069 100644 --- a/test_tipc/configs/MobileViT/MobileViT_XXS_train_infer_python.txt +++ b/test_tipc/configs/MobileViT/MobileViT_XXS_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViT/MobileViT_XXS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViT/MobileViT_XXS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt b/test_tipc/configs/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt index 1b970569a745c6f115b581fd2d020258d3d5014d..bfd24bb4106245d7b279d0e7c07ffbc39f28fe83 100644 --- 
a/test_tipc/configs/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt +++ b/test_tipc/configs/PP-ShiTu/PPShiTu_mainbody_det_linux_gpu_normal_normal_paddle2onnx_python_linux_cpu.txt @@ -2,15 +2,15 @@ model_name:PP-ShiTu_mainbody_det python:python3.7 2onnx: paddle2onnx ---model_dir:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ +--model_dir:./deploy/models/picodet_lcnet_x2_5_640_mainbody_infer/ --model_filename:inference.pdmodel --params_filename:inference.pdiparams ---save_file:./deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/inference.onnx +--save_file:./deploy/models/picodet_lcnet_x2_5_640_mainbody_infer/inference.onnx --opset_version:11 --enable_onnx_checker:True -inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar -inference:./python/predict_cls.py -Global.use_onnx:True -Global.inference_model_dir:./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer -Global.use_gpu:False --c:configs/inference_cls.yaml \ No newline at end of file +inference_model_url:https://paddledet.bj.bcebos.com/models/picodet_lcnet_x2_5_640_mainbody_infer.tar +inference:null +Global.use_onnx:null +Global.inference_model_dir:null +Global.use_gpu:null +-c:null \ No newline at end of file diff --git a/test_tipc/configs/PPHGNet/PPHGNet_small_train_infer_python.txt b/test_tipc/configs/PPHGNet/PPHGNet_small_train_infer_python.txt index c2b66e7f7c99673f43a287a98c7292158707d632..204503e49b27f532c777a8bcb832e21a2d794e39 100644 --- a/test_tipc/configs/PPHGNet/PPHGNet_small_train_infer_python.txt +++ b/test_tipc/configs/PPHGNet/PPHGNet_small_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPHGNet/PPHGNet_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPHGNet/PPHGNet_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPHGNet/PPHGNet_tiny_train_infer_python.txt b/test_tipc/configs/PPHGNet/PPHGNet_tiny_train_infer_python.txt index abe870dd14810e973dec09ff38227364b8a376f7..4bf0d64319e46c2b8a4f7e0b7db112ad0ae548a5 100644 --- a/test_tipc/configs/PPHGNet/PPHGNet_tiny_train_infer_python.txt +++ b/test_tipc/configs/PPHGNet/PPHGNet_tiny_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPHGNet/PPHGNet_tiny.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPHGNet/PPHGNet_tiny.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x0_25_train_infer_python.txt 
b/test_tipc/configs/PPLCNet/PPLCNet_x0_25_train_infer_python.txt index b86ca4ecd801776f3c816ecf78844c9b3a1e542b..b5d91c06c06ac476cce150347f273c50f5903f8f 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x0_25_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x0_25_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x0_35_train_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x0_35_train_infer_python.txt index a13fcaccbd700898b6b89311d3cde07d9544870c..dbd78bdc6ae67ac8bdabc2ff3cdb4fe5b5de48a0 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x0_35_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x0_35_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_35.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x0_5_train_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x0_5_train_infer_python.txt index 7b0e108beba0845a84dc4f615960a8a02a458625..4764b6bf6ca0df0a4885a92d1293ab763e9e574a 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x0_5_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x0_75_train_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x0_75_train_infer_python.txt index eb34925e9b8ad7b38ae76e46b7ded8d6689bc53a..8efc1299a953256968e69d1f5560b0e1a9bf52d8 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x0_75_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x0_75_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train 
-norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x1_0_train_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x1_0_train_infer_python.txt index fbbcba9c85554d044dcdd2ce6e0ee1ab7071f808..5bbe58d082c854bdfd5345c472e13deb27579bfa 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x1_0_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x1_5_train_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x1_5_train_infer_python.txt index 5254a58da29911c2e751df724692992ab3801e0a..17ef3baaba0e532aeff386704e87a7022ae7f166 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x1_5_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x1_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x2_0_train_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x2_0_train_infer_python.txt index 07f41282bcb7996c710847704c2fc0d24de53765..f37d1e41c66e43932b9e817edc8306975388e9b2 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x2_0_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x2_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o 
DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_amp_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_amp_infer_python.txt index 26e95cb7897d83a45355300bf1fdd4d1cf4aa3b5..d73a29fd4d330164332391bcc7f9b194d6f9d40c 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_amp_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_infer_python.txt b/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_infer_python.txt index 6c2f627b421b9c388b0da1ff9d732d816749f24a..06f089948e387291117a3ee050bf487792016295 100644 --- a/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_infer_python.txt +++ b/test_tipc/configs/PPLCNet/PPLCNet_x2_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNet/PPLCNet_x2_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PPLCNetV2/PPLCNetV2_base_train_infer_python.txt b/test_tipc/configs/PPLCNetV2/PPLCNetV2_base_train_infer_python.txt index b1913eca5d3aa162232e8e4f37adbde20525ac64..4931593872beeb1bafa54551637d7f28d3700ece 100644 --- a/test_tipc/configs/PPLCNetV2/PPLCNetV2_base_train_infer_python.txt +++ b/test_tipc/configs/PPLCNetV2/PPLCNetV2_base_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNetV2/PPLCNetV2_base.yaml -o Global.seed=1234 -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PPLCNetV2/PPLCNetV2_base.yaml -o Global.seed=1234 -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PVTV2/PVT_V2_B0_train_infer_python.txt b/test_tipc/configs/PVTV2/PVT_V2_B0_train_infer_python.txt index 
b0db1cf5482e55173f06213683004ef52053b62e..eb029ff5a8d094073d8ab51c713484484e458bb9 100644 --- a/test_tipc/configs/PVTV2/PVT_V2_B0_train_infer_python.txt +++ b/test_tipc/configs/PVTV2/PVT_V2_B0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PVTV2/PVT_V2_B1_train_infer_python.txt b/test_tipc/configs/PVTV2/PVT_V2_B1_train_infer_python.txt index 505b811f08a72c2eaa364c6b7c84d9a13d9f562b..c289563f11e84df7cd2829f209553d0aed6c68f7 100644 --- a/test_tipc/configs/PVTV2/PVT_V2_B1_train_infer_python.txt +++ b/test_tipc/configs/PVTV2/PVT_V2_B1_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PVTV2/PVT_V2_B2_Linear_train_infer_python.txt b/test_tipc/configs/PVTV2/PVT_V2_B2_Linear_train_infer_python.txt index 2460ddd9ed3d39d8678d5ab46d9fd9b1fad21039..cc412d47cc56da877fd63775d322787e6878cd71 100644 --- a/test_tipc/configs/PVTV2/PVT_V2_B2_Linear_train_infer_python.txt +++ b/test_tipc/configs/PVTV2/PVT_V2_B2_Linear_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B2_Linear.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 +norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B2_Linear.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PVTV2/PVT_V2_B2_train_infer_python.txt b/test_tipc/configs/PVTV2/PVT_V2_B2_train_infer_python.txt index ac05176acb0a8afb9518324a39a7b4bbd9c2c050..d607cfb8acaa6168be0ece49a6c1506cdc800614 100644 --- a/test_tipc/configs/PVTV2/PVT_V2_B2_train_infer_python.txt +++ b/test_tipc/configs/PVTV2/PVT_V2_B2_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B2.yaml -o Global.seed=1234 -o 
DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PVTV2/PVT_V2_B3_train_infer_python.txt b/test_tipc/configs/PVTV2/PVT_V2_B3_train_infer_python.txt index 7f76080d2bd53667a759a148643515a4e482693b..58585012d816314ce6ecb7edc50eb11f57e374a1 100644 --- a/test_tipc/configs/PVTV2/PVT_V2_B3_train_infer_python.txt +++ b/test_tipc/configs/PVTV2/PVT_V2_B3_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PVTV2/PVT_V2_B4_train_infer_python.txt b/test_tipc/configs/PVTV2/PVT_V2_B4_train_infer_python.txt index a3dacfb807625a13cf493f3dea83a6c6287fa9f7..7dccac5fb2002704cf7c66c8fef763637beac8c1 100644 --- a/test_tipc/configs/PVTV2/PVT_V2_B4_train_infer_python.txt +++ b/test_tipc/configs/PVTV2/PVT_V2_B4_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B4.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/PVTV2/PVT_V2_B5_train_infer_python.txt b/test_tipc/configs/PVTV2/PVT_V2_B5_train_infer_python.txt index bfff184435d6507be33cfdcd685b28c76775173e..4dc143d00ddf8b2516d019fa224c1b2704b60347 100644 --- a/test_tipc/configs/PVTV2/PVT_V2_B5_train_infer_python.txt +++ b/test_tipc/configs/PVTV2/PVT_V2_B5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/PVTV2/PVT_V2_B5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/ReXNet/ReXNet_1_0_train_amp_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_1_0_train_amp_infer_python.txt index 9ae14e503dbd6a13a0eddf3a29bbbc13d3e73fd8..2c349d563c80a93f34c431211520e9aa172c548c 100644 --- a/test_tipc/configs/ReXNet/ReXNet_1_0_train_amp_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_1_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_1_0_train_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_1_0_train_infer_python.txt index b54a6fc7ab23b3e6a1e0b75db2ca3e9130444f76..63045f7d263cbd3d2bbfeee893795e927b953fb7 100644 --- a/test_tipc/configs/ReXNet/ReXNet_1_0_train_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_1_3_train_amp_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_1_3_train_amp_infer_python.txt index f5c7aed52a243a03ff63269c5312b021786a7d13..afe48af41ef1b288e7dca550ba39ce01d561abb9 100644 --- a/test_tipc/configs/ReXNet/ReXNet_1_3_train_amp_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_1_3_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_1_3_train_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_1_3_train_infer_python.txt index 
b26c7e466d6098c402944ad4d6a2adf28da64483..cbfa9ceaeb6352edbc00239edebbc42dd19f7a56 100644 --- a/test_tipc/configs/ReXNet/ReXNet_1_3_train_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_1_3_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_3.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_1_5_train_amp_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_1_5_train_amp_infer_python.txt index 3399aad8da871d85b4b3c37acfcb657dcfa58c04..32186517c1814b855dcb659a523758a745a1fe70 100644 --- a/test_tipc/configs/ReXNet/ReXNet_1_5_train_amp_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_1_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_1_5_train_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_1_5_train_infer_python.txt index 2c7dfaa7f5d18eca1db74fa81a0d1bd1054870e3..8a10bad8b67e76a6654388f0d7750f48ca6f8127 100644 --- a/test_tipc/configs/ReXNet/ReXNet_1_5_train_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_1_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_2_0_train_amp_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_2_0_train_amp_infer_python.txt index f036e87b9c7671150f537ae3d458218e83748115..194cda7ada3891efee36a222be109fd61bc8a79e 100644 --- a/test_tipc/configs/ReXNet/ReXNet_2_0_train_amp_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_2_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## 
trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_2_0_train_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_2_0_train_infer_python.txt index 2f56e112b1aee6943b33642fa35fbf3e6757b770..34d0649be23f4cc453b513be98d0c94702d217b6 100644 --- a/test_tipc/configs/ReXNet/ReXNet_2_0_train_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_2_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_3_0_train_amp_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_3_0_train_amp_infer_python.txt index 3263e6e16ad1a259e2804c40f6ac6a4319eca726..e0a2d325829ed930aa43f1874a8cb205cbf7ce62 100644 --- a/test_tipc/configs/ReXNet/ReXNet_3_0_train_amp_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_3_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_3_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_3_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ReXNet/ReXNet_3_0_train_infer_python.txt b/test_tipc/configs/ReXNet/ReXNet_3_0_train_infer_python.txt index b6f2447721d2b2adce34c24e4432b05fec6b9452..862660e91e4150493c775befd22e1e156ee48296 100644 --- a/test_tipc/configs/ReXNet/ReXNet_3_0_train_infer_python.txt +++ b/test_tipc/configs/ReXNet/ReXNet_3_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_3_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ReXNet/ReXNet_3_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet101_train_amp_infer_python.txt b/test_tipc/configs/RedNet/RedNet101_train_amp_infer_python.txt index 8255130c48d2b5083c2e629d8a4381203b98806f..97092296af0cb09e83fdfc6bf192d6f39b94a6de 100644 --- a/test_tipc/configs/RedNet/RedNet101_train_amp_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet101_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet101_train_infer_python.txt b/test_tipc/configs/RedNet/RedNet101_train_infer_python.txt index 84df64de82a6d6706bf85dbe41ccfa91a2ffcb1c..908f822f6440078c3287584f5c60ab065d7ea269 100644 --- a/test_tipc/configs/RedNet/RedNet101_train_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet101_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet152_train_amp_infer_python.txt b/test_tipc/configs/RedNet/RedNet152_train_amp_infer_python.txt index 5b208005b8357165d130d4e74e1cc943efe36b7c..aae976c3f53fbe43e286e28bb5bfedd0489c1b7c 100644 --- a/test_tipc/configs/RedNet/RedNet152_train_amp_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet152_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet152_train_infer_python.txt b/test_tipc/configs/RedNet/RedNet152_train_infer_python.txt index 1f6ca5dc8b45a0036f2cdbb22575f41ece6aefc2..b56b3cdc12b001d1d75ca093162203eb1e7bd1c6 100644 --- a/test_tipc/configs/RedNet/RedNet152_train_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet152_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet26_train_amp_infer_python.txt b/test_tipc/configs/RedNet/RedNet26_train_amp_infer_python.txt index 1aa6bbd4d0ca15b7b07852b73b8c4c58175542b4..52f2a82e3d8f92a601deae826e98ffd6c94b9fb6 100644 --- a/test_tipc/configs/RedNet/RedNet26_train_amp_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet26_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet26.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet26.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet26_train_infer_python.txt b/test_tipc/configs/RedNet/RedNet26_train_infer_python.txt index fdfc9a09ab41c2f438e3038483da901933b34057..0d303cded7a1572fdc6233fd17a1710ae1de477c 100644 --- a/test_tipc/configs/RedNet/RedNet26_train_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet26_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet26.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet26.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/RedNet/RedNet38_train_amp_infer_python.txt b/test_tipc/configs/RedNet/RedNet38_train_amp_infer_python.txt index 05715a860b2f21b30d5c869c1115d98dcf41cd84..fbe7a4279a17d165da0613343c6968ddd52918a4 100644 --- a/test_tipc/configs/RedNet/RedNet38_train_amp_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet38_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet38.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet38.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet38_train_infer_python.txt b/test_tipc/configs/RedNet/RedNet38_train_infer_python.txt index d8347979350493bb5794dac4cb90e9aa991f772e..94ab7e71d017ee73e2034a7b8037fb1437f7f7f9 100644 --- a/test_tipc/configs/RedNet/RedNet38_train_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet38_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet38.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet38.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet50_train_amp_infer_python.txt b/test_tipc/configs/RedNet/RedNet50_train_amp_infer_python.txt index 3ea2e0d316e239af4eab2f93d21b53e28d88fe12..e6c3886994c4fba7e20a7d5de6f4bf51eb893ea7 100644 --- a/test_tipc/configs/RedNet/RedNet50_train_amp_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet50_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/RedNet/RedNet50_train_infer_python.txt b/test_tipc/configs/RedNet/RedNet50_train_infer_python.txt index 
d9d8b9254b8b2c5f47b6e60567da7c21f6ce1655..85f743fbd95b23fb2ee47492ba109e57a2ee27e7 100644 --- a/test_tipc/configs/RedNet/RedNet50_train_infer_python.txt +++ b/test_tipc/configs/RedNet/RedNet50_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/RedNet/RedNet50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_amp_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_amp_infer_python.txt index 6a9f287429be801d96be984048bd0be5fedb4c68..d84fededc0c1c56f570a947ab8f7186e31b6733d 100644 --- a/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_amp_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net101_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net101_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_infer_python.txt index f77836ab0dccff16eb1bf0da1ff5aa87d5b2a09d..9e34d43edd055570095a78fd72cebea36875eec2 100644 --- a/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net101_vd_26w_4s_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net101_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net101_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_amp_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_amp_infer_python.txt index c8157cae60a41cfa15458d4b697000cd2f9f20d0..0eedea3c5d0be6a6b1bf9dfcf58f154769a205df 100644 --- 
a/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_amp_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net200_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net200_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_infer_python.txt index 68caab3a8b660cd61bbec10806dbaf120a25b10b..6b656359e083a7fb0f647e027e2365c13303237e 100644 --- a/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net200_vd_26w_4s_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net200_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net200_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_amp_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_amp_infer_python.txt index 6bfee99062699c581a38bb2fe14ed6ad986750c2..0dd1128ddb8a39271aa4aaa9cc0a5bb4266e754c 100644 --- a/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_amp_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_14w_8s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_14w_8s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_infer_python.txt index 4001e9f11b6815b8fd6387a09b7bcb3d966cc96f..d2951f84ba5e3a3b272381a2e6d869337f454755 100644 
--- a/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net50_14w_8s_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_14w_8s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_14w_8s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_amp_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_amp_infer_python.txt index 4a33c79c4c7f4b216e9974376fedc6024f269c8e..db8eaeac6709d421ee923bd8da65295ad8f044df 100644 --- a/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_amp_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_infer_python.txt index ac4facc90c8098c59c1880974865212887f678d9..5430c97b61e83e52587d27b488979948db2d1c46 100644 --- a/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net50_26w_4s_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_amp_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_amp_infer_python.txt index cd3409a333240e4026960a089c7720b97f772eba..8ea5320763246e51a934b70e4c2ebb36a8c8c4c4 100644 --- a/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_amp_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_amp_infer_python.txt @@ -13,7 +13,7 @@ 
train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_infer_python.txt b/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_infer_python.txt index b62de2ca87ba306532ad891f48ec0c543286f4ad..6792698d78fb986244ab59475077bb66e3af934d 100644 --- a/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_infer_python.txt +++ b/test_tipc/configs/Res2Net/Res2Net50_vd_26w_4s_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Res2Net/Res2Net50_vd_26w_4s.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_amp_infer_python.txt b/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_amp_infer_python.txt index 0a8e0d0a97e8fbb24579e0d87f720169f1d5e722..230fd565c31c82ea2b5acdead5d40ab46854c6d6 100644 --- a/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50_fast_1s1x64d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50_fast_1s1x64d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_infer_python.txt b/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_infer_python.txt index e6894c3cdcf30a6c2501e4b45975b2060110783c..427dff09071c7bd164f1811767b642616e3658b5 100644 --- a/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_infer_python.txt +++ 
b/test_tipc/configs/ResNeSt/ResNeSt50_fast_1s1x64d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50_fast_1s1x64d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50_fast_1s1x64d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeSt/ResNeSt50_train_amp_infer_python.txt b/test_tipc/configs/ResNeSt/ResNeSt50_train_amp_infer_python.txt index 1d0e59216c17dacaf255f3ce7a4d3020d4d48c48..add120cf64fa158817f139072fafe0cfa8d2bef0 100644 --- a/test_tipc/configs/ResNeSt/ResNeSt50_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeSt/ResNeSt50_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeSt/ResNeSt50_train_infer_python.txt b/test_tipc/configs/ResNeSt/ResNeSt50_train_infer_python.txt index 8829e4ad55607e35f0931b4c23e38317b7a37ccf..1c7c85608d35b78c9ad4732aef245f79cfd24272 100644 --- a/test_tipc/configs/ResNeSt/ResNeSt50_train_infer_python.txt +++ b/test_tipc/configs/ResNeSt/ResNeSt50_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeSt/ResNeSt50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_amp_infer_python.txt index 1ca28d104c9b4f9d1e43869c4e3fb4b1cf6c0019..785ff5444132e3a821d1a3c7f45a21d88a147df6 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c 
ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_infer_python.txt index 3550429cc511ef35043f7f6ea44768d6a237a48e..25af7d0fe70366eef2c4eab6f5f9de7b321c1dbc 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_amp_infer_python.txt index 69db3f17bf2b6e142cbeaecf4f5e131571966676..ddf7b281b937ce6308dd12d5a37629fed1fdb353 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_infer_python.txt index 3632870323f45cc8ca3a6e9c831e335694ea5b86..ca6c83ca9fb79a195f23e0a550a28bc33291e3c9 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_64x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c 
ppcls/configs/ImageNet/ResNeXt/ResNeXt101_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_amp_infer_python.txt index 22edd3b99ed6355199d1cfcf13f481be3d76cc04..476527eb4a8880f5561af21ac1f2ac89a3fe5c33 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_infer_python.txt index 1c86f0a1381757bdcce679c3af7c1a63f7621da2..aa36f0230f9eb9f1b58e41ffe4ea0f23af8e88d7 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_vd_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_amp_infer_python.txt index 73f786f113a566bde9a1fb75e49b223c6701756d..9599b092632de6b6dc0f2c8d793154de7e25f795 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_infer_python.txt index aefd20549f0323a9d9afbf4bce3a8b8ed5d0f83a..6639eeda89af3445768a5014049781ab9c0ba27e 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt101_vd_64x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_amp_infer_python.txt index 1ca28d104c9b4f9d1e43869c4e3fb4b1cf6c0019..785ff5444132e3a821d1a3c7f45a21d88a147df6 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_infer_python.txt index 3550429cc511ef35043f7f6ea44768d6a237a48e..25af7d0fe70366eef2c4eab6f5f9de7b321c1dbc 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_amp_infer_python.txt index 3841f2b7e72e3d6c82a879e833f65758a2f4a934..91d1de89dd9374c8bac2ed6436967ddb237962ee 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_infer_python.txt index a7d4678c19eeb768a714072a8fcd747cd0da1249..e378c2665bdfcacb130bfc6a8612765f267ff9de 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_64x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_amp_infer_python.txt index b0baf65150176639f60200ebd73b3b708aff41ea..65036bd7ce009760658a91b81a0369feec8e3541 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o 
AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_infer_python.txt index 77b38caca0b08e64e923305ec1cc525e6eefce2c..bf25de03496f9affe2e5d53a2e01c6ed22f61eb1 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_vd_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_amp_infer_python.txt index 573b579ff161aca2741867bece064ff10fd5bc22..bff639bc82eed88b9aa967114e93233a23cd9de0 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_infer_python.txt index 5b630c1a73d06e0b860d0c496d6679f9cf9b9765..f881c4a0739b376d0891f4267f5e12011656e13d 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt152_vd_64x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c 
ppcls/configs/ImageNet/ResNeXt/ResNeXt152_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_amp_infer_python.txt index b28bd26fc7f2af8996722593f41129d3898b3e93..243e52ee692a6f89a430a7b381e8f31304eac015 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_infer_python.txt index 9d44af9e2c9f12e0dd4e6803d0a4ef829a1dc4ed..50a2884e621d0568649428ca3ff993ce2d960938 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_amp_infer_python.txt index 9d04710ab9b93546f4ff235d48c95eaf0cf25e40..785be2abe4e26dcc18317e1498abb92ee106ea14 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_infer_python.txt index bb5caee658ac1d5c8e12d85db66db675a7a076ea..1a616cd4e608e1b6773591340aa7a5532e545f02 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_64x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_amp_infer_python.txt index b502731f745810f6108c697a8b082e4805a55a89..f1e63692ccde03df9b0ebb36d53557377fa40e79 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_infer_python.txt index b7c77f9144942105fd9e9e4c7909f06ca1d254be..2e3d4eefccd38a4bc2850fb12657eeb9bcf87b9c 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_vd_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o 
Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_amp_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_amp_infer_python.txt index b0b0b3fe272ca5254fe59faa504cb3905b8c22fc..ed4b57e4d655090312ef34a4f04867f0bc2d4859 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_infer_python.txt b/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_infer_python.txt index 5bd046e3c3882d679fc4f716f1c4ae2ece1a063b..dfa8229ecb94c3fd306bd59e45429be9d07ecf12 100644 --- a/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_infer_python.txt +++ b/test_tipc/configs/ResNeXt/ResNeXt50_vd_64x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNeXt/ResNeXt50_vd_64x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet101_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet101_train_amp_infer_python.txt index 9172ef8cf54d76ffe17079a17b6bf796df2a3351..d18b20d09b5fdb29648ed27b410ce291d53404bb 100644 --- a/test_tipc/configs/ResNet/ResNet101_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet101_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o 
Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet101_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet101_train_infer_python.txt index 9564d41eecc81611ba6d0f8fe03c01587dc26e0e..7fb1980d3f9c37a006caa6eaec225649376d66ee 100644 --- a/test_tipc/configs/ResNet/ResNet101_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet101_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet101_vd_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet101_vd_train_amp_infer_python.txt index eb8238584b3828a12c7c8d75118dcc78da8a0562..342f9830be42157968c87d74695cf4040d168744 100644 --- a/test_tipc/configs/ResNet/ResNet101_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet101_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet101_vd_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet101_vd_train_infer_python.txt index 2bae824020573387a7924ce4097180cfd145b6e8..87f553159d87e51c8403e0f7566af80e4311e82b 100644 --- a/test_tipc/configs/ResNet/ResNet101_vd_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet101_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet101_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet152_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet152_train_amp_infer_python.txt index 
8549cc07cde50b4f51da729aacebed08470c2187..74f20ff3ef7981fce96ca4a66c14064b244b3ea3 100644 --- a/test_tipc/configs/ResNet/ResNet152_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet152_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet152_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet152_train_infer_python.txt index 0b319af8dc6cc9f8a7cf4bdfd89f1f7165544d2a..2bd9c756152c1ebd889002c1f7cf2488845c122f 100644 --- a/test_tipc/configs/ResNet/ResNet152_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet152_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet152_vd_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet152_vd_train_amp_infer_python.txt index a8e8c630516cd8ce1c5918a60e0feffdfd746270..267735350dc4e1bd590ce4cad721188f1846b6f0 100644 --- a/test_tipc/configs/ResNet/ResNet152_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet152_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet152_vd_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet152_vd_train_infer_python.txt index 8ee972427cc2860a8306f45f0e76871d3b9fd82c..bd0f45b97aeba5ae1b83791d1097c1835496f61f 100644 --- 
a/test_tipc/configs/ResNet/ResNet152_vd_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet152_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet152_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet18_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet18_train_amp_infer_python.txt index 4f2badaf2ce2f61ffe7676825411fdb0f955cad8..a38c72c0c3bab0dc55a7927778a790554b9d97c6 100644 --- a/test_tipc/configs/ResNet/ResNet18_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet18_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet18_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet18_train_infer_python.txt index a4e2a520e799cf93a2289a6b3b82cb025e7b9304..ce5977cd829eeae3c1644397fd0206d57963db66 100644 --- a/test_tipc/configs/ResNet/ResNet18_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet18_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet18_vd_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet18_vd_train_amp_infer_python.txt index 4dc479177c33e078023dcd352cba51b74b5417b2..d0adc2b7f7f5edf68dfd3e68704aedac90cc1a57 100644 --- a/test_tipc/configs/ResNet/ResNet18_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet18_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18_vd.yaml -o 
Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet18_vd_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet18_vd_train_infer_python.txt index 1698bb6e64fd4adc8d477b9ae77bb5dbcbf1f9e4..18138b9c6cef68b4104d79455fb8f4adffddea92 100644 --- a/test_tipc/configs/ResNet/ResNet18_vd_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet18_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet18_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet200_vd_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet200_vd_train_amp_infer_python.txt index a829b20fa8c5588eaea68cf705bf06fd6fd5fef7..f1b33fcee7764d7d24609bdc9c1eed7d0350e137 100644 --- a/test_tipc/configs/ResNet/ResNet200_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet200_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet200_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet200_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet200_vd_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet200_vd_train_infer_python.txt index a4c92e2ced2c2486dee35c8463a7596024c4bf26..62c4aa6fa6d461845099402cd3adce79f84dd901 100644 --- a/test_tipc/configs/ResNet/ResNet200_vd_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet200_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet200_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o 
DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet200_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet34_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet34_train_amp_infer_python.txt index 301ca3d91f6130a49b66ba551e9cabe8a426d0b1..f35bb23fd69196a0266ff8a1bb91e8653e0e682b 100644 --- a/test_tipc/configs/ResNet/ResNet34_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet34_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet34_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet34_train_infer_python.txt index 37985180e9730ff3381a24bfa06d0fdb5f192a76..e29da50ebba22e89465ba46c54cbaf05d34ad00e 100644 --- a/test_tipc/configs/ResNet/ResNet34_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet34_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet34_vd_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet34_vd_train_amp_infer_python.txt index 62473756e7f8b5195a5782e639254643a44444ce..0253ab83b7bbd12a25a8cca582a18eb5db7b45e6 100644 --- a/test_tipc/configs/ResNet/ResNet34_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet34_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o 
DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet34_vd_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet34_vd_train_infer_python.txt index fef19a4fee99edca00584ecac47fa39e4c62a6ed..26740a1877bda6d9bc88cc745b9c8bf2c336a9b7 100644 --- a/test_tipc/configs/ResNet/ResNet34_vd_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet34_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet50_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet50_train_infer_python.txt index f8808359e743fcd6197af17361503a8d28b97585..2870be8321fb7cd5e136054fb34e05c4eac5ae0e 100644 --- a/test_tipc/configs/ResNet/ResNet50_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet50_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet50.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet50_vd_FPGM_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet50_vd_FPGM_train_amp_infer_python.txt index 0b00effeb535ecefd85507f869815dc5cde1253a..8bb612aee9c4ba1afa57f311a4d80016c041de0f 100644 --- a/test_tipc/configs/ResNet/ResNet50_vd_FPGM_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet50_vd_FPGM_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/slim/ResNet50_vd_prune.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/slim/ResNet50_vd_prune.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/ResNet/ResNet50_vd_PACT_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet50_vd_PACT_train_amp_infer_python.txt index e4c65b010a1dd36f4a5b34cf1d03980913c11c18..1811dc6910a96fd43c6614bc18fd005dda4478b2 100644 --- a/test_tipc/configs/ResNet/ResNet50_vd_PACT_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet50_vd_PACT_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/slim/ResNet50_vd_quantization.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/slim/ResNet50_vd_quantization.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet50_vd_train_amp_infer_python.txt b/test_tipc/configs/ResNet/ResNet50_vd_train_amp_infer_python.txt index a9cc2d027752db2880ff1655b69c1adc7d82bb19..21ba8068fdf86fbc2be970a19082e41f72004230 100644 --- a/test_tipc/configs/ResNet/ResNet50_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet50_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet50_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet50_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ResNet/ResNet50_vd_train_infer_python.txt b/test_tipc/configs/ResNet/ResNet50_vd_train_infer_python.txt index 7ef5b4c8a3984c17947b054f56fc69e96cac4dec..7980b378e0883d54e13b8eef8547b6a6b2bb2278 100644 --- a/test_tipc/configs/ResNet/ResNet50_vd_train_infer_python.txt +++ b/test_tipc/configs/ResNet/ResNet50_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet50_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ResNet/ResNet50_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SENet154_vd_train_amp_infer_python.txt 
b/test_tipc/configs/SENet/SENet154_vd_train_amp_infer_python.txt index 2bd7b3c5bf2c8ec7784268ece10e3f35c7c199ee..993514b564718da3f4733e481e043a1e4b3e8bce 100644 --- a/test_tipc/configs/SENet/SENet154_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/SENet/SENet154_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SENet154_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SENet154_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SENet154_vd_train_infer_python.txt b/test_tipc/configs/SENet/SENet154_vd_train_infer_python.txt index 7486371bf36efb7134396347ab3be411dad172af..56f709d3438a767d6bdb46a68cab69cecb351fae 100644 --- a/test_tipc/configs/SENet/SENet154_vd_train_infer_python.txt +++ b/test_tipc/configs/SENet/SENet154_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SENet154_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SENet154_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_amp_infer_python.txt b/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_amp_infer_python.txt index 57afa0d0a207fee2a4d5029205eb13d4154a86ae..3b019043d7f8611b831de507f63db836e4b1ffbf 100644 --- a/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_infer_python.txt b/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_infer_python.txt index 
1f5d3d7446f9e5ac6dffaaace99a784a66a6ca2a..feb04229f313f7f60a9a9a2db883c87823e1c01b 100644 --- a/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNeXt101_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt101_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_amp_infer_python.txt b/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_amp_infer_python.txt index ae0a334de05c91a628099674da99d9bf0d73fc29..5f04b1febb45c2ccc7c1cc174338c037c45547af 100644 --- a/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_amp_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_infer_python.txt b/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_infer_python.txt index 1b6c630f07a6cc31477977c8506e02bb75e4f699..38b7ae8879ed24d6202c9e716890ff9be8082c7a 100644 --- a/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNeXt50_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_amp_infer_python.txt b/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_amp_infer_python.txt index 3cea2c6afdf16bd8e05d5a8e3a7b3d464149402d..0aeb092de314023ff7630e062a409e9606e574c6 100644 --- a/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_amp_infer_python.txt +++ 
b/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_infer_python.txt b/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_infer_python.txt index c988bbe16f9afa78b38c09c037cc75971ef34a1f..5e71729d33f603d022fd99ac18006c5f5237a4ea 100644 --- a/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNeXt50_vd_32x4d_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNeXt50_vd_32x4d.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNet18_vd_train_amp_infer_python.txt b/test_tipc/configs/SENet/SE_ResNet18_vd_train_amp_infer_python.txt index 872e1ce72d015723c570a19e7ca849631e050e36..ed2aae84984997261b0be4906acadc8f92fcbcaa 100644 --- a/test_tipc/configs/SENet/SE_ResNet18_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNet18_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet18_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet18_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNet18_vd_train_infer_python.txt b/test_tipc/configs/SENet/SE_ResNet18_vd_train_infer_python.txt index 632592512d663992b9b91f8daa8ceec2ff3345c9..bd7f8526d5b76c55789b6565be4203e6db8eb160 100644 --- a/test_tipc/configs/SENet/SE_ResNet18_vd_train_infer_python.txt +++ 
b/test_tipc/configs/SENet/SE_ResNet18_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet18_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet18_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNet34_vd_train_amp_infer_python.txt b/test_tipc/configs/SENet/SE_ResNet34_vd_train_amp_infer_python.txt index 4ac5c06b4334a727bd8981cb6fd54e0af4d5b413..1c0c3ffa06c55a41e20ca2d2ad41b8d0962bbf85 100644 --- a/test_tipc/configs/SENet/SE_ResNet34_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNet34_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNet34_vd_train_infer_python.txt b/test_tipc/configs/SENet/SE_ResNet34_vd_train_infer_python.txt index 65080d8846a56f344f7ad8469ebead42d6c2470f..d5766c7b44489cee332c171ee0c47f9fb1eb0949 100644 --- a/test_tipc/configs/SENet/SE_ResNet34_vd_train_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNet34_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet34_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNet50_vd_train_amp_infer_python.txt b/test_tipc/configs/SENet/SE_ResNet50_vd_train_amp_infer_python.txt index d9bc19b579704acce852c651d60d6410596ac497..3ce39a969fb3a3b133a72d7cc3f90231725abedf 100644 --- a/test_tipc/configs/SENet/SE_ResNet50_vd_train_amp_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNet50_vd_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet50_vd.yaml -o 
Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet50_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SENet/SE_ResNet50_vd_train_infer_python.txt b/test_tipc/configs/SENet/SE_ResNet50_vd_train_infer_python.txt index f244bc552f04b2815426283a1ec16e704c911219..ccea1da26c6560d2c543a931e60959c462057b21 100644 --- a/test_tipc/configs/SENet/SE_ResNet50_vd_train_infer_python.txt +++ b/test_tipc/configs/SENet/SE_ResNet50_vd_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet50_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SENet/SE_ResNet50_vd.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_amp_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_amp_infer_python.txt index 9822328e323c36d56a7fec860f6a55474c446a81..f447fc0ee1746a435cfdeb06d79166b0dc9ea8ab 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_amp_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_swish.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_swish.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_infer_python.txt index a31b5235d45115b95c58f37fb4a854e7cf0813fe..0c34a8e6c4ff0df297a8d574729022584124a997 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_swish_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_swish.yaml -o 
Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_swish.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_amp_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_amp_infer_python.txt index e2e6a4c995ebc9739328a8dc3b83f17a4545633b..56919a43fb4ddf4479972c322dcb4cf26f39887a 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_amp_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_infer_python.txt index c94136796d59a994ddebb1fdaa7f373b274e1f50..c471555dee270f8439b6fcd124049e2ea998a95b 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_25_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_25.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_amp_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_amp_infer_python.txt index 97b59cc65db4a81779a9429961858ecc41790cc3..fbc01f9d9e4860cf52739cb79f2079afb6f2e07c 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_amp_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_33.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_33.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_infer_python.txt index 0a65fa5ac93c59dd105a155ed5a405c01cb55488..4d0f547a3dabdff73bc680f05ce6af55406827ec 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_33_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_33.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_33.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_amp_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_amp_infer_python.txt index 4dc126ab690428a4ba9af8cda59ffaf445dc0021..cea9fa8242c05d137743fe3adc3d0932dc760daa 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_amp_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_infer_python.txt index e8e66567a47270b5a0ed87de739740400993ff41..2e7c1e23c5d5808467fbe9a4c04f536d8737812e 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x0_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_5.yaml -o Global.seed=1234 -o 
DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_amp_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_amp_infer_python.txt index e738f37653716ca6a06764c80645ee0e200a8ce7..b453a93bab76e0b8ebfb6eae82518479fca19a6b 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_amp_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_infer_python.txt index cbf2d0475975c44e5ab754e15277e1bfe75cab85..7eebc33185fece9cb56366bcf0c8dd328b945ac4 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_amp_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_amp_infer_python.txt index 7317c4780f60296247cf96d2753e7e761328fd39..460df9e5afa55fe951948748f6a93b9abf1c9c25 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_amp_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o 
DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_infer_python.txt index c3a67fa25fdb0b275579aadde76dc49535320bed..1dfafb9988c3d6b6a6895a4f49677fff95f137a7 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x1_5_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x1_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_amp_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_amp_infer_python.txt index 020b44b68fd5e6ae25270097a6ce5f3126ae5dd5..e2d89d92edff3d763238bd660235b1b010e89390 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_amp_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_infer_python.txt b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_infer_python.txt index dfaf945c8be48099a4fccd53c9b99fffef70a8a8..1f9bc867d11b248bdf8ff36bcac223e53b696e9f 100644 --- a/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_infer_python.txt +++ b/test_tipc/configs/ShuffleNet/ShuffleNetV2_x2_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/ShuffleNet/ShuffleNetV2_x2_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_amp_infer_python.txt b/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_amp_infer_python.txt index 33746bcd1b837a67dcc69364a93d7353fcf0cd95..b52b2d4f7624c8339e32a4897910737ac63262a2 100644 --- a/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_amp_infer_python.txt +++ b/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_infer_python.txt b/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_infer_python.txt index 1d0df071cdae93fabc3b8e1856eb31affd11b6a0..8e0e3f4549cde8e48cc09ee2a5e98b8d7be221d3 100644 --- a/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_infer_python.txt +++ b/test_tipc/configs/SqueezeNet/SqueezeNet1_0_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_amp_infer_python.txt b/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_amp_infer_python.txt index 943a05e46808bbc60acb956899e710c2fba9710c..984ab50474112c1e7a076e0b1b3a44e6d4caf501 100644 --- a/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_amp_infer_python.txt +++ b/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 
+amp_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_infer_python.txt b/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_infer_python.txt index 341ab206686b33b7f04f740f9d4897c87b7c0b17..68d4e8b2ebbf52e13bdbeb064b9ec70352a34924 100644 --- a/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_infer_python.txt +++ b/test_tipc/configs/SqueezeNet/SqueezeNet1_1_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SqueezeNet/SqueezeNet1_1.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_amp_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_amp_infer_python.txt index d4e0a4122e027329a88c3890e5adea5f684f8123..6a1d5f7a53cde1169a3f54513fe857b9088e92f9 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_amp_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt index a54568f5a5e74de0ae57647ab33d8f5ebb5327ab..ec50de2810851556aa0cf717ff3ea23fc9259f51 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window12_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c 
ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_amp_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_amp_infer_python.txt index 8e686d07753b1ed2a9f441f7417908196518e24a..527d016ed0567c2b52bd003b7c787998037c46a4 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_amp_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_infer_python.txt index 7422e4cc92585edd2c0be1b8ccb1378c1981a5cf..1ceef499a3247a586deca6e0a2583f9314a2ee3e 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_base_patch4_window7_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_base_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_amp_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_amp_infer_python.txt index 
6ed1e29b0fb27171f4745c223e471e0b9d9a5153..3b74e40a6da1cba8b506b865123b4699a321fda9 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_amp_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt index 67d48b17600531ea5ead004362914780697282a6..00b510c64844c458cd9fbe4967c26e73d0a067a7 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window12_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window12_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_amp_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_amp_infer_python.txt index 46434bdb9c7b9023eec8a8a426ea1ec9e9a21cf6..a451044f7d7db7dc177f4aa98968a5959a0d77ec 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_amp_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window7_224.yaml -o 
Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt index d0a9e572c12b6132fb727650d2f097deebdd1985..c23cf2ae6b60c904fd9783b3619e004057c3457f 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_large_patch4_window7_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_large_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_amp_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_amp_infer_python.txt index 0475fa099927aaccdfd3c039d06606e173f445a5..8f69c4af2e3aec726bbe3003654f18e1b9dbc6a1 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_amp_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_small_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_small_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_infer_python.txt index bb53a4f6310578e33aa8671eef2352b44a3e75d4..9424b2c711f60c74a5872adfa7e8e3b3d04699b1 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_small_patch4_window7_224_train_infer_python.txt @@ -13,7 +13,7 @@ 
train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_small_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_small_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_train_infer_python.txt b/test_tipc/configs/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_train_infer_python.txt index 397e00613a34ab3c537c69684e51e4eb9ebf681f..1f8dbe26a1a7ada3cd09090a395dd159736e0c14 100644 --- a/test_tipc/configs/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_train_infer_python.txt +++ b/test_tipc/configs/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_tiny_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/SwinTransformer/SwinTransformer_tiny_patch4_window7_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/TNT/TNT_small_train_amp_infer_python.txt b/test_tipc/configs/TNT/TNT_small_train_amp_infer_python.txt index 1966ac184caecd85cb1475bfddad368fbb1ac085..749043a2d560e1253cee9c884bf2da06ca79a312 100644 --- a/test_tipc/configs/TNT/TNT_small_train_amp_infer_python.txt +++ b/test_tipc/configs/TNT/TNT_small_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/TNT/TNT_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/TNT/TNT_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/TNT/TNT_small_train_infer_python.txt b/test_tipc/configs/TNT/TNT_small_train_infer_python.txt index 93ca3cd288a47a90570910c134c9ab052379b271..6fe44c0473277e19acb49d78a5ecc62d5a7447eb 100644 --- a/test_tipc/configs/TNT/TNT_small_train_infer_python.txt +++ b/test_tipc/configs/TNT/TNT_small_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val 
null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/TNT/TNT_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/TNT/TNT_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/alt_gvt_base_train_amp_infer_python.txt b/test_tipc/configs/Twins/alt_gvt_base_train_amp_infer_python.txt index e14b8ddc9ab1fc9a1605057afa8c3905d47814f9..b8e00798bc6dced253cca37b8e53bd1cea86211a 100644 --- a/test_tipc/configs/Twins/alt_gvt_base_train_amp_infer_python.txt +++ b/test_tipc/configs/Twins/alt_gvt_base_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/alt_gvt_base_train_infer_python.txt b/test_tipc/configs/Twins/alt_gvt_base_train_infer_python.txt index b49938d33d4ab5c8d4d5bff5af63153d919958b0..510e3300c79aaef99d399b5514e97a847a8e21ce 100644 --- a/test_tipc/configs/Twins/alt_gvt_base_train_infer_python.txt +++ b/test_tipc/configs/Twins/alt_gvt_base_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/alt_gvt_large_train_amp_infer_python.txt b/test_tipc/configs/Twins/alt_gvt_large_train_amp_infer_python.txt index 436bb75dabd38c2d66ff93cac9244467bd2e7525..b88badd3100fcbf44f1ef9290ebcc3ccf6541066 100644 --- a/test_tipc/configs/Twins/alt_gvt_large_train_amp_infer_python.txt +++ b/test_tipc/configs/Twins/alt_gvt_large_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o 
AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/alt_gvt_large_train_infer_python.txt b/test_tipc/configs/Twins/alt_gvt_large_train_infer_python.txt index 30a9bf6a1f45ff191cee2da930c0bc021b24e8aa..f33ed8b342e5716ab2644851cedd8d3818ef7d0b 100644 --- a/test_tipc/configs/Twins/alt_gvt_large_train_infer_python.txt +++ b/test_tipc/configs/Twins/alt_gvt_large_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/alt_gvt_small_train_amp_infer_python.txt b/test_tipc/configs/Twins/alt_gvt_small_train_amp_infer_python.txt index f82314ba6e1d7283b0e5f49da739e332768d8a74..c6fe05e86e04f47cbc6a4088c1b249fde4db8e23 100644 --- a/test_tipc/configs/Twins/alt_gvt_small_train_amp_infer_python.txt +++ b/test_tipc/configs/Twins/alt_gvt_small_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/alt_gvt_small_train_infer_python.txt b/test_tipc/configs/Twins/alt_gvt_small_train_infer_python.txt index 8a49f99150a2f820bee269e9596da7adb0372d05..930eeee78615a9274f6508be243cf2adb34c7c8e 100644 --- a/test_tipc/configs/Twins/alt_gvt_small_train_infer_python.txt +++ b/test_tipc/configs/Twins/alt_gvt_small_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/alt_gvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/pcpvt_base_train_amp_infer_python.txt b/test_tipc/configs/Twins/pcpvt_base_train_amp_infer_python.txt index cf85959c01a2c50f5d2636585d388d3a3fa966a2..52ff321b7cc9e9871080af33062d9116b31e3713 100644 --- a/test_tipc/configs/Twins/pcpvt_base_train_amp_infer_python.txt +++ b/test_tipc/configs/Twins/pcpvt_base_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/pcpvt_base_train_infer_python.txt b/test_tipc/configs/Twins/pcpvt_base_train_infer_python.txt index 1091bf22b517612ff0760e6b1bfcc4257d41db6a..92b13842cb6cb8923b2903b475aa4c4528fecbf8 100644 --- a/test_tipc/configs/Twins/pcpvt_base_train_infer_python.txt +++ b/test_tipc/configs/Twins/pcpvt_base_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_base.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/pcpvt_large_train_amp_infer_python.txt b/test_tipc/configs/Twins/pcpvt_large_train_amp_infer_python.txt index 622a62991c3c3332b18ac14238c3320b1ee7441d..abd2b13006c1003d8f00c9919a6ad53d9f117cb9 100644 --- a/test_tipc/configs/Twins/pcpvt_large_train_amp_infer_python.txt +++ b/test_tipc/configs/Twins/pcpvt_large_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null 
fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/pcpvt_large_train_infer_python.txt b/test_tipc/configs/Twins/pcpvt_large_train_infer_python.txt index eea591ed2e3028559a6563876d6800f8673e7537..71faf213630a39eb1bdc567fec2593bfa66b39f9 100644 --- a/test_tipc/configs/Twins/pcpvt_large_train_infer_python.txt +++ b/test_tipc/configs/Twins/pcpvt_large_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_large.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/pcpvt_small_train_amp_infer_python.txt b/test_tipc/configs/Twins/pcpvt_small_train_amp_infer_python.txt index 6d84c8f5eb10a53ea052fa7af303703b59235750..f82807b77c6c213bafff8f7f856dad7bdeca296c 100644 --- a/test_tipc/configs/Twins/pcpvt_small_train_amp_infer_python.txt +++ b/test_tipc/configs/Twins/pcpvt_small_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Twins/pcpvt_small_train_infer_python.txt b/test_tipc/configs/Twins/pcpvt_small_train_infer_python.txt index e230864972a7faec113bcc5710401b8ebc38ef0d..61e25436ef3f0400748ba311efbc0247d3ccb66d 100644 --- a/test_tipc/configs/Twins/pcpvt_small_train_infer_python.txt +++ b/test_tipc/configs/Twins/pcpvt_small_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Twins/pcpvt_small.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VAN/VAN_tiny_train_infer_python.txt b/test_tipc/configs/VAN/VAN_tiny_train_infer_python.txt index bdf53e4076a8308b08536fd754321533292ea804..82fc3845d7fa6ac6cf5929bf82932a8d2a09a2ea 100644 --- a/test_tipc/configs/VAN/VAN_tiny_train_infer_python.txt 
+++ b/test_tipc/configs/VAN/VAN_tiny_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VAN/VAN_tiny.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VAN/VAN_tiny.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VGG/VGG11_train_amp_infer_python.txt b/test_tipc/configs/VGG/VGG11_train_amp_infer_python.txt index cf0fe60c48e929ffbe4e188c54c557180dabf649..277faf43d80ed5aa68abe84e839e34d021210877 100644 --- a/test_tipc/configs/VGG/VGG11_train_amp_infer_python.txt +++ b/test_tipc/configs/VGG/VGG11_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG11.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG11.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VGG/VGG11_train_infer_python.txt b/test_tipc/configs/VGG/VGG11_train_infer_python.txt index fd0819869266c655b71d3acb0b8e5bcbdab9906b..149234c7ba4d84694a5a713e1dde02acd9dce8ee 100644 --- a/test_tipc/configs/VGG/VGG11_train_infer_python.txt +++ b/test_tipc/configs/VGG/VGG11_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG11.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG11.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VGG/VGG13_train_amp_infer_python.txt b/test_tipc/configs/VGG/VGG13_train_amp_infer_python.txt index 083a690c264a3441bafce83e4f9e9db398f17c29..c74c089fe976192796499201ba064c14c6d93aa8 100644 --- a/test_tipc/configs/VGG/VGG13_train_amp_infer_python.txt +++ b/test_tipc/configs/VGG/VGG13_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG13.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o 
AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG13.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VGG/VGG13_train_infer_python.txt b/test_tipc/configs/VGG/VGG13_train_infer_python.txt index 4f19a1dc19d1715c6f08262ae9ac8792b4a1342b..5b8486071ef1243122186f406fe5cf8576483c70 100644 --- a/test_tipc/configs/VGG/VGG13_train_infer_python.txt +++ b/test_tipc/configs/VGG/VGG13_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG13.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG13.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VGG/VGG16_train_amp_infer_python.txt b/test_tipc/configs/VGG/VGG16_train_amp_infer_python.txt index 30b422af895cbec8cbc89f05090e058609e04342..bc21033eb7dfb095c093af73010633187439c840 100644 --- a/test_tipc/configs/VGG/VGG16_train_amp_infer_python.txt +++ b/test_tipc/configs/VGG/VGG16_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG16.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG16.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VGG/VGG16_train_infer_python.txt b/test_tipc/configs/VGG/VGG16_train_infer_python.txt index 0630047210f84560910b8aa2e8da27bcd69b5767..74a0c6af81faddd0f5ddfc973f49ea26be3c82a8 100644 --- a/test_tipc/configs/VGG/VGG16_train_infer_python.txt +++ b/test_tipc/configs/VGG/VGG16_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG16.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG16.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/VGG/VGG19_train_amp_infer_python.txt b/test_tipc/configs/VGG/VGG19_train_amp_infer_python.txt index ccbce1489852ce36701c7690abeaffe3e17221c5..d76977edebe859d65def298128d8cf0c90f02440 100644 --- a/test_tipc/configs/VGG/VGG19_train_amp_infer_python.txt +++ b/test_tipc/configs/VGG/VGG19_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG19.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG19.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VGG/VGG19_train_infer_python.txt b/test_tipc/configs/VGG/VGG19_train_infer_python.txt index ce62cddcc53019d1b33448e4d6157416b4c0e499..a1518deb53fccf0beb75b1fcaf1e51859633921b 100644 --- a/test_tipc/configs/VGG/VGG19_train_infer_python.txt +++ b/test_tipc/configs/VGG/VGG19_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG19.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VGG/VGG19.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_amp_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_amp_infer_python.txt index 74155b1f9215aebdb98ceb2f25dd430b462cfa82..22fd9e2eca3eff132224dc3b3a1d44601c3bd287 100644 --- a/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_amp_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_infer_python.txt 
b/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_infer_python.txt index 8cd2e7b5141765fb0246716d5535094835e4f08c..7edde6d29c2db05537ebaa7151c30d8db9a05466 100644 --- a/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_base_patch16_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_amp_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_amp_infer_python.txt index aa23031bf79f193c060451e7615c670f31f834b0..ad27bc3fa76ba4f27c01fd58b8a2f5b49f10728b 100644 --- a/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_amp_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt index 19830855e5239bc5dfbbb5065093bf7b966a9d33..92a0ebf1e8adb8afba3c9731fcbd19380805c80e 100644 --- a/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_base_patch16_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git 
a/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_amp_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_amp_infer_python.txt index a5370b670ff3bcc0b2020f52f1e0ad8bbfd10850..c4d1c13ce710cffe98256ee43cce58385cac7506 100644 --- a/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_amp_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt index d4451ac693fa3c0fee9230b71ac1a4daf565d1da..1347eaa3d5d7692e40fb934911612f88c9b75e2d 100644 --- a/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_base_patch32_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_base_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_amp_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_amp_infer_python.txt index b86428b9fe077bcaeeff2ff6aa35161c977d141a..2b5221f7bcb08836118bdc1bd11959a20564913a 100644 --- a/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_amp_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o 
DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_infer_python.txt index b2c19016eb8ce7b003001c1cb86c78942620a4b0..293e83d3ad1870932c2c8111084915d6423d6b42 100644 --- a/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_large_patch16_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_amp_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_amp_infer_python.txt index c733d7e8274fc08dd782351397c65884a63d8763..547596aee9580db7f462bf1a5a9b9ecebfc27758 100644 --- a/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_amp_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt index 6e01d3a36df7dd9f3dd4882f1cbffb434a25142b..bd228e38106ee4bdacecf30732aa4fa552f65b43 100644 --- a/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_large_patch16_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c 
ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch16_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_amp_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_amp_infer_python.txt index f6ead75b81e298c9684d2b1847356203afce3f26..7c667e953b14e8212de5bfe3f8f02fa5be8dada2 100644 --- a/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_amp_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt index 4ae7aa934d23584ff96720efef3b93815bd2b6da..c97c77b2fb97c16a107447e1197a8e74346b2cad 100644 --- a/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_large_patch32_384_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_large_patch32_384.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_amp_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_amp_infer_python.txt index 34fcaf290cddab226ee70b4be3caae2c10739f7b..6004106ee3e2a78e9f38d4525afc89b65d0ab941 100644 --- a/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_amp_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o 
DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_infer_python.txt b/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_infer_python.txt index c933fb38fe1a378420529b5ddea99f13b8260155..46dc2b767b48e66bd1f033218a7e2429f74a111d 100644 --- a/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_infer_python.txt +++ b/test_tipc/configs/VisionTransformer/ViT_small_patch16_224_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/VisionTransformer/ViT_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception41_deeplab_train_amp_infer_python.txt b/test_tipc/configs/Xception/Xception41_deeplab_train_amp_infer_python.txt index 325e0ad83092e10098496fc3224ff444aa291298..f37bc8395d261777fddd811aa205044794400f7d 100644 --- a/test_tipc/configs/Xception/Xception41_deeplab_train_amp_infer_python.txt +++ b/test_tipc/configs/Xception/Xception41_deeplab_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception41_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception41_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception41_deeplab_train_infer_python.txt b/test_tipc/configs/Xception/Xception41_deeplab_train_infer_python.txt index a845326daa022f3817f7a3d10c8a5c0b0222316c..9cf7e7e2e9959d871376edb61dd44b98d81609d4 100644 --- a/test_tipc/configs/Xception/Xception41_deeplab_train_infer_python.txt +++ b/test_tipc/configs/Xception/Xception41_deeplab_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c 
ppcls/configs/ImageNet/Xception/Xception41_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception41_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception41_train_amp_infer_python.txt b/test_tipc/configs/Xception/Xception41_train_amp_infer_python.txt index f4569e71a7d75aa1dc2a9fd09c923ac68a4aec7c..f5c875bcaa29e04d45665ec95f31159d59b21d21 100644 --- a/test_tipc/configs/Xception/Xception41_train_amp_infer_python.txt +++ b/test_tipc/configs/Xception/Xception41_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception41.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception41.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception41_train_infer_python.txt b/test_tipc/configs/Xception/Xception41_train_infer_python.txt index afd3c607dfdc37fa073050bc46e59ae36d29575c..55bf6b9f002c4f313ade9fdfac0ae0553d84046b 100644 --- a/test_tipc/configs/Xception/Xception41_train_infer_python.txt +++ b/test_tipc/configs/Xception/Xception41_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception41.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception41.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception65_deeplab_train_amp_infer_python.txt b/test_tipc/configs/Xception/Xception65_deeplab_train_amp_infer_python.txt index 709423e618cf263d739c9b0eb797893b917c0f93..9d524cb6d65a91633f96f30e52bf88fc6e02f970 100644 --- a/test_tipc/configs/Xception/Xception65_deeplab_train_amp_infer_python.txt +++ b/test_tipc/configs/Xception/Xception65_deeplab_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o 
AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception65_deeplab_train_infer_python.txt b/test_tipc/configs/Xception/Xception65_deeplab_train_infer_python.txt index ee74e4918551d11f036a5ded5028510bcc0ca47e..e84160070a815e6bd261e8a5a02ba05e5369c3b5 100644 --- a/test_tipc/configs/Xception/Xception65_deeplab_train_infer_python.txt +++ b/test_tipc/configs/Xception/Xception65_deeplab_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65_deeplab.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception65_train_amp_infer_python.txt b/test_tipc/configs/Xception/Xception65_train_amp_infer_python.txt index ed1d042d3a2cee34b1e03ef834e3084d6ab2d006..228e5bf07725d6b1c5ccd95555e9c27698bc821c 100644 --- a/test_tipc/configs/Xception/Xception65_train_amp_infer_python.txt +++ b/test_tipc/configs/Xception/Xception65_train_amp_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:amp_train -amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 +amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2 pact_train:null fpgm_train:null distill_train:null diff --git a/test_tipc/configs/Xception/Xception65_train_infer_python.txt b/test_tipc/configs/Xception/Xception65_train_infer_python.txt index ad241e0da4004a99ab379b7740f0dca5d881f8e0..bf27339bc2066f887ba78deb7f7be2f7ff09afe0 100644 --- a/test_tipc/configs/Xception/Xception65_train_infer_python.txt +++ b/test_tipc/configs/Xception/Xception65_train_infer_python.txt @@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val null:null ## trainer:norm_train -norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False +norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception65.yaml 
-o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2
 pact_train:null
 fpgm_train:null
 distill_train:null
diff --git a/test_tipc/configs/Xception/Xception71_train_amp_infer_python.txt b/test_tipc/configs/Xception/Xception71_train_amp_infer_python.txt
index 60867581f598b68c487673b71f47a94df1a318f0..472d4bae3645df758e6ca28e69f26d055d06b596 100644
--- a/test_tipc/configs/Xception/Xception71_train_amp_infer_python.txt
+++ b/test_tipc/configs/Xception/Xception71_train_amp_infer_python.txt
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
 null:null
 ##
 trainer:amp_train
-amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception71.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2
+amp_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception71.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o AMP.scale_loss=128 -o AMP.use_dynamic_loss_scaling=True -o AMP.level=O2 -o Global.eval_during_train=False -o Global.save_interval=2
 pact_train:null
 fpgm_train:null
 distill_train:null
diff --git a/test_tipc/configs/Xception/Xception71_train_infer_python.txt b/test_tipc/configs/Xception/Xception71_train_infer_python.txt
index 092d2aeb6a1a5542cfb8a7d8ed2484bcb5b6f971..eeada259ff0cd20b472e9400555da222daf2baf5 100644
--- a/test_tipc/configs/Xception/Xception71_train_infer_python.txt
+++ b/test_tipc/configs/Xception/Xception71_train_infer_python.txt
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
 null:null
 ##
 trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception71.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False
+norm_train:tools/train.py -c ppcls/configs/ImageNet/Xception/Xception71.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.eval_during_train=False -o Global.save_interval=2
 pact_train:null
 fpgm_train:null
 distill_train:null
diff --git a/test_tipc/test_paddle2onnx.sh b/test_tipc/test_paddle2onnx.sh
index a8c6914ec641503c68566d8693cca1f6f93fbf66..d025fb2efd672baab42e4617a13dd127d90a73bc 100644
--- a/test_tipc/test_paddle2onnx.sh
+++ b/test_tipc/test_paddle2onnx.sh
@@ -59,13 +59,15 @@ function func_paddle2onnx(){
     status_check $last_status "${trans_model_cmd}" "${status_log}" "${model_name}"
 
     # python inference
-    set_model_dir=$(func_set_params "${inference_model_dir_key}" "${inference_model_dir_value}")
-    set_use_onnx=$(func_set_params "${use_onnx_key}" "${use_onnx_value}")
-    set_hardware=$(func_set_params "${inference_hardware_key}" "${inference_hardware_value}")
-    set_inference_config=$(func_set_params "${inference_config_key}" "${inference_config_value}")
-    infer_model_cmd="cd deploy && ${python} ${inference_py} -o ${set_model_dir} -o ${set_use_onnx} -o ${set_hardware} ${set_inference_config} > ${_save_log_path} 2>&1 && cd ../"
-    eval $infer_model_cmd
-    status_check $last_status "${infer_model_cmd}" "${status_log}" "${model_name}"
+    if [[ ${inference_py} != "null" ]]; then
+        set_model_dir=$(func_set_params "${inference_model_dir_key}" "${inference_model_dir_value}")
+        set_use_onnx=$(func_set_params "${use_onnx_key}" "${use_onnx_value}")
+        set_hardware=$(func_set_params "${inference_hardware_key}" "${inference_hardware_value}")
+        set_inference_config=$(func_set_params "${inference_config_key}" "${inference_config_value}")
+        infer_model_cmd="cd deploy && ${python} ${inference_py} -o ${set_model_dir} -o ${set_use_onnx} -o ${set_hardware} ${set_inference_config} > ${_save_log_path} 2>&1 && cd ../"
+        eval $infer_model_cmd
+        status_check $last_status "${infer_model_cmd}" "${status_log}" "${model_name}"
+    fi
 }