diff --git a/configs/rotate/README.md b/configs/rotate/README.md
index 72d52014f066fd7456906b7345d22f87a3b882f4..db3a6b0ddc12e50627f8d42805ede3b1817b2c46 100644
--- a/configs/rotate/README.md
+++ b/configs/rotate/README.md
@@ -15,11 +15,12 @@
 | 模型 | mAP | 学习率策略 | 角度表示 | 数据增广 | GPU数目 | 每GPU图片数目 | 模型下载 | 配置文件 |
 |:---:|:----:|:---------:|:-----:|:--------:|:-----:|:------------:|:-------:|:------:|
-| [S2ANet](./s2anet/README.md) | 74.0 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/dota/s2anet_alignconv_2x_dota.yml) |
+| [S2ANet](./s2anet/README.md) | 73.84 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml) |

 **注意:**

 - 如果**GPU卡数**或者**batch size**发生了改变,你需要按照公式 **lrnew = lrdefault * (batch_sizenew * GPU_numbernew) / (batch_sizedefault * GPU_numberdefault)** 调整学习率。
+- 模型库中的模型默认使用单尺度训练单尺度测试。如果数据增广一栏标明MS,意味着使用多尺度训练和多尺度测试。如果数据增广一栏标明RR,意味着使用RandomRotate数据增广进行训练。

 ## 数据准备

 ### DOTA数据准备
@@ -36,8 +37,15 @@ ${DOTA_ROOT}
 └── labelTxt
 ```

-DOTA数据集分辨率较高,因此一般在训练和测试之前对图像进行切图,使用单尺度进行切图可以使用以下命令:
+对于有标注的数据,每一张图片会对应一个同名的txt文件,文件中每一行为一个旋转框的标注,其格式如下:
 ```
+x1 y1 x2 y2 x3 y3 x4 y4 class_name difficult
+```
+
+### 单尺度切图
+DOTA数据集分辨率较高,因此一般在训练和测试之前对图像进行离线切图,使用单尺度进行切图可以使用以下命令:
+``` bash
+# 对于有标注的数据进行切图
 python configs/rotate/tools/prepare_data.py \
     --input_dirs ${DOTA_ROOT}/train/ ${DOTA_ROOT}/val/ \
     --output_dir ${OUTPUT_DIR}/trainval1024/ \
@@ -45,26 +53,39 @@ python configs/rotate/tools/prepare_data.py \
     --subsize 1024 \
     --gap 200 \
     --rates 1.0
+
+# 对于无标注的数据进行切图需要设置--image_only
+python configs/rotate/tools/prepare_data.py \
+    --input_dirs ${DOTA_ROOT}/test/ \
+    --output_dir ${OUTPUT_DIR}/test1024/ \
+    --coco_json_file DOTA_test1024.json \
+    --subsize 1024 \
+    --gap 200 \
+    --rates 1.0 \
+    --image_only
+
 ```
+
+### 多尺度切图
 使用多尺度进行切图可以使用以下命令:
-```
+``` bash
+# 对于有标注的数据进行切图
 python configs/rotate/tools/prepare_data.py \
     --input_dirs ${DOTA_ROOT}/train/ ${DOTA_ROOT}/val/ \
     --output_dir ${OUTPUT_DIR}/trainval/ \
     --coco_json_file DOTA_trainval1024.json \
     --subsize 1024 \
     --gap 500 \
-    --rates 0.5 1.0 1.5 \
-```
-对于无标注的数据可以设置`--image_only`进行切图,如下所示:
-```
+    --rates 0.5 1.0 1.5
+
+# 对于无标注的数据进行切图需要设置--image_only
 python configs/rotate/tools/prepare_data.py \
     --input_dirs ${DOTA_ROOT}/test/ \
     --output_dir ${OUTPUT_DIR}/test1024/ \
     --coco_json_file DOTA_test1024.json \
     --subsize 1024 \
-    --gap 200 \
-    --rates 1.0 \
+    --gap 500 \
+    --rates 0.5 1.0 1.5 \
     --image_only
 ```
diff --git a/configs/rotate/README_en.md b/configs/rotate/README_en.md
index a91f66a9d61c576070b954b312676aea54a2d4ec..03c4d2cee3ff61dc1001c8adcd81864a597935d2 100644
--- a/configs/rotate/README_en.md
+++ b/configs/rotate/README_en.md
@@ -14,11 +14,12 @@ Rotated object detection is used to detect rectangular bounding boxes with angle
 ## Model Zoo
 | Model | mAP | Lr Scheduler | Angle | Aug | GPU Number | images/GPU | download | config |
 |:---:|:----:|:---------:|:-----:|:--------:|:-----:|:------------:|:-------:|:------:|
-| [S2ANet](./s2anet/README.md) | 74.0 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/dota/s2anet_alignconv_2x_dota.yml) |
+| [S2ANet](./s2anet/README_en.md) | 73.84 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml) |

 **Notes:**

 - if **GPU number** or **mini-batch size** is changed, **learning rate** should be adjusted according to the formula **lrnew = lrdefault * (batch_sizenew * GPU_numbernew) / (batch_sizedefault * GPU_numberdefault)**.
+- Models in the model zoo are trained and tested with a single scale by default. If `MS` is indicated in the data augmentation column, it means that multi-scale training and multi-scale testing are used. If `RR` is indicated in the data augmentation column, it means that RandomRotate data augmentation is used for training.

 ## Data Preparation

 ### DOTA Dataset preparation
@@ -35,8 +36,16 @@ ${DOTA_ROOT}
 └── labelTxt
 ```

-The image resolution of DOTA dataset is relatively high, so we usually slice the images before training and testing. To slice the images with a single scale, you can use the command below
+For labeled data, each image corresponds to a txt file with the same name, and each row in the txt file represents a rotated bounding box. The format is as follows:
+
+```
+x1 y1 x2 y2 x3 y3 x4 y4 class_name difficult
 ```
+
+### Slicing data with a single scale
+The image resolution of DOTA dataset is relatively high, so we usually slice the images before training and testing. To slice the images with a single scale, you can use the command below
+``` bash
+# slicing labeled data
 python configs/rotate/tools/prepare_data.py \
     --input_dirs ${DOTA_ROOT}/train/ ${DOTA_ROOT}/val/ \
     --output_dir ${OUTPUT_DIR}/trainval1024/ \
@@ -44,26 +53,37 @@ python configs/rotate/tools/prepare_data.py \
     --subsize 1024 \
     --gap 200 \
     --rates 1.0
+# slicing unlabeled data by setting --image_only
+python configs/rotate/tools/prepare_data.py \
+    --input_dirs ${DOTA_ROOT}/test/ \
+    --output_dir ${OUTPUT_DIR}/test1024/ \
+    --coco_json_file DOTA_test1024.json \
+    --subsize 1024 \
+    --gap 200 \
+    --rates 1.0 \
+    --image_only
+
 ```
+
+### Slicing data with multiple scales
 To slice the images with multiple scales, you can use the command below
-```
+``` bash
+# slicing labeled data
 python configs/rotate/tools/prepare_data.py \
     --input_dirs ${DOTA_ROOT}/train/ ${DOTA_ROOT}/val/ \
     --output_dir ${OUTPUT_DIR}/trainval/ \
     --coco_json_file DOTA_trainval1024.json \
     --subsize 1024 \
     --gap 500 \
-    --rates 0.5 1.0 1.5 \
-```
-For data without annotations, you should set `--image_only` as follows
-```
+    --rates 0.5 1.0 1.5
+# slicing unlabeled data by setting --image_only
 python configs/rotate/tools/prepare_data.py \
     --input_dirs ${DOTA_ROOT}/test/ \
     --output_dir ${OUTPUT_DIR}/test1024/ \
     --coco_json_file DOTA_test1024.json \
     --subsize 1024 \
-    --gap 200 \
-    --rates 1.0 \
+    --gap 500 \
+    --rates 0.5 1.0 1.5 \
     --image_only
 ```
diff --git a/configs/rotate/s2anet/README.md b/configs/rotate/s2anet/README.md
index c76364d8e2b8158638bdca393f4f4a8864759c2b..faabe96b2319fc42615a9e257681209ffb731abd 100644
--- a/configs/rotate/s2anet/README.md
+++ b/configs/rotate/s2anet/README.md
@@ -1,16 +1,35 @@
-# S2ANet模型
+简体中文 | [English](README_en.md)
+
+# S2ANet

 ## 内容
 - [简介](#简介)
-- [开始训练](#开始训练)
 - [模型库](#模型库)
+- [使用说明](#使用说明)
 - [预测部署](#预测部署)
+- [引用](#引用)

 ## 简介

-[S2ANet](https://arxiv.org/pdf/2008.09397.pdf)是用于检测旋转框的模型,在DOTA 1.0数据集上单尺度训练能达到74.0的mAP.
+[S2ANet](https://arxiv.org/pdf/2008.09397.pdf)是用于检测旋转框的模型.
+
+## 模型库
+
+| 模型 | Conv类型 | mAP | 学习率策略 | 角度表示 | 数据增广 | GPU数目 | 每GPU图片数目 | 模型下载 | 配置文件 |
+|:---:|:------:|:----:|:---------:|:-----:|:--------:|:-----:|:------------:|:-------:|:------:|
+| S2ANet | Conv | 71.45 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_conv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_conv_2x_dota.yml) |
+| S2ANet | AlignConv | 73.84 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml) |
+
+**注意:**
+
+- 如果**GPU卡数**或者**batch size**发生了改变,你需要按照公式 **lrnew = lrdefault * (batch_sizenew * GPU_numbernew) / (batch_sizedefault * GPU_numberdefault)** 调整学习率。
+- 模型库中的模型默认使用单尺度训练单尺度测试。如果数据增广一栏标明MS,意味着使用多尺度训练和多尺度测试。如果数据增广一栏标明RR,意味着使用RandomRotate数据增广进行训练。
+- 这里使用`multiclass_nms`,与原作者使用nms略有不同。
+
+
+## 使用说明

-## 开始训练
+参考[数据准备](../README.md#数据准备)准备数据。

 ### 1. 训练

 GPU单卡训练
@@ -22,13 +41,13 @@ python tools/train.py -c configs/rotate/s2anet/s2anet_1x_spine.yml

 GPU多卡训练
 ```bash
-export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
-python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/rotate/s2anet/s2anet_1x_spine.yml
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/rotate/s2anet/s2anet_1x_spine.yml
 ```

 可以通过`--eval`开启边训练边测试。

-### 3. 评估
+### 2. 评估

 ```bash
 python tools/eval.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=output/s2anet_1x_spine/model_final.pdparams
@@ -36,7 +55,7 @@ python tools/eval.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=out
 python tools/eval.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams
 ```

-### 4. 预测
+### 3. 预测
 执行如下命令,会将图像预测结果保存到`output`文件夹下。
 ```bash
 python tools/infer.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=output/s2anet_1x_spine/model_final.pdparams --infer_img=demo/39006.jpg --draw_threshold=0.3
@@ -46,37 +65,26 @@ python tools/infer.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=ou
 python tools/infer.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams --infer_img=demo/39006.jpg --draw_threshold=0.3
 ```

-### 5. DOTA数据评估
+### 4. DOTA数据评估
 执行如下命令,会在`output`文件夹下将每个图像预测结果保存到同文件夹名的txt文本中。
 ```
-python tools/infer.py -c configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml -o weights=./weights/s2anet_alignconv_2x_dota.pdparams --infer_dir=/path/to/test/images --output_dir=output --visualize=False --save_results=True
+python tools/infer.py -c configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams --infer_dir=/path/to/test/images --output_dir=output --visualize=False --save_results=True
 ```
 参考[DOTA Task](https://captain-whu.github.io/DOTA/tasks.html), 评估DOTA数据集需要生成一个包含所有检测结果的zip文件,每一类的检测结果储存在一个txt文件中,txt文件中每行格式为:`image_name score x1 y1 x2 y2 x3 y3 x4 y4`。将生成的zip文件提交到[DOTA Evaluation](https://captain-whu.github.io/DOTA/evaluation.html)的Task1进行评估。你可以执行以下命令生成评估文件
 ```
 python configs/rotate/tools/generate_result.py --pred_txt_dir=output/ --output_dir=submit/ --data_type=dota10
+zip -r submit.zip submit
 ```

-## 模型库
-
-### S2ANet模型
-
-| 模型 | Conv类型 | mAP | 模型下载 | 配置文件 |
-|:-----------:|:----------:|:--------:| :----------:| :---------: |
-| S2ANet | Conv | 71.42 | [model](https://paddledet.bj.bcebos.com/models/s2anet_conv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_conv_2x_dota.yml) |
-| S2ANet | AlignConv | 74.0 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml) |
-
-**注意:** 这里使用`multiclass_nms`,与原作者使用nms略有不同。
-
-
 ## 预测部署

 Paddle中`multiclass_nms`算子的输入支持四边形输入,因此部署时可以不需要依赖旋转框IOU计算算子。

-部署教程请参考[预测部署](../../deploy/README.md)
+部署教程请参考[预测部署](../../../deploy/README.md)

-## Citations
+## 引用
 ```
 @article{han2021align,
   author={J. {Han} and J. {Ding} and J. {Li} and G. -S. {Xia}},
diff --git a/configs/rotate/s2anet/README_en.md b/configs/rotate/s2anet/README_en.md
index 70da7660b8b4aca16cdce5f9f8acc1ab4bc1f17b..9ec48753a445e6eba2223a80d69b0860d379c2ef 100644
--- a/configs/rotate/s2anet/README_en.md
+++ b/configs/rotate/s2anet/README_en.md
@@ -1,26 +1,34 @@
-# S2ANet Model
+English | [简体中文](README.md)
+
+# S2ANet

 ## Content

-- [S2ANet Model](#s2anet-model)
-  - [Content](#content)
-  - [Introduction](#introduction)
-  - [Start Training](#start-training)
-    - [1. Train](#1-train)
-    - [2. Evaluation](#2-evaluation)
-    - [3. Prediction](#3-prediction)
-    - [4. DOTA Data evaluation](#4-dota-data-evaluation)
-  - [Model Library](#model-library)
-    - [S2ANet Model](#s2anet-model-1)
-  - [Predict Deployment](#predict-deployment)
-  - [Citations](#citations)
+- [Introduction](#Introduction)
+- [Model Zoo](#Model-Zoo)
+- [Getting Started](#Getting-Started)
+- [Deployment](#Deployment)
+- [Citations](#Citations)

 ## Introduction

-[S2ANet](https://arxiv.org/pdf/2008.09397.pdf) is used to detect rotated objects and acheives 74.0 mAP on DOTA 1.0 dataset.
+[S2ANet](https://arxiv.org/pdf/2008.09397.pdf) is used to detect rotated objects.
+
+## Model Zoo
+| Model | Conv Type | mAP | Lr Scheduler | Angle | Aug | GPU Number | images/GPU | download | config |
+|:---:|:------:|:----:|:---------:|:-----:|:--------:|:-----:|:------------:|:-------:|:------:|
+| S2ANet | Conv | 71.45 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_conv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_conv_2x_dota.yml) |
+| S2ANet | AlignConv | 73.84 | 2x | le135 | - | 4 | 2 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml) |
+
+**Notes:**
+- if **GPU number** or **mini-batch size** is changed, **learning rate** should be adjusted according to the formula **lrnew = lrdefault * (batch_sizenew * GPU_numbernew) / (batch_sizedefault * GPU_numberdefault)**.
+- Models in the model zoo are trained and tested with a single scale by default. If `MS` is indicated in the data augmentation column, it means that multi-scale training and multi-scale testing are used. If `RR` is indicated in the data augmentation column, it means that RandomRotate data augmentation is used for training.
+- `multiclass_nms` is used here, which is slightly different from the original author's use of NMS.

-## Start Training
+## Getting Started

-### 2. Train
+Refer to [Data-Preparation](../README_en.md#Data-Preparation) to prepare data.
+
+### 1. Train

 Single GPU Training
 ```bash
@@ -30,13 +38,13 @@ python tools/train.py -c configs/rotate/s2anet/s2anet_1x_spine.yml

 Multiple GPUs Training
 ```bash
-export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
-python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/rotate/s2anet/s2anet_1x_spine.yml
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/rotate/s2anet/s2anet_1x_spine.yml
 ```

 You can use `--eval` to enable evaluation during training.

-### 3. Evaluation
+### 2. Evaluation

 ```bash
 python tools/eval.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=output/s2anet_1x_spine/model_final.pdparams
@@ -44,7 +52,7 @@ python tools/eval.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=out
 python tools/eval.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams
 ```

-### 4. Prediction
+### 3. Prediction
 Executing the following command will save the image prediction results to the `output` folder.
 ```bash
 python tools/infer.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=output/s2anet_1x_spine/model_final.pdparams --infer_img=demo/39006.jpg --draw_threshold=0.3
@@ -54,35 +62,24 @@ Prediction using models that provide training:
 python tools/infer.py -c configs/rotate/s2anet/s2anet_1x_spine.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_1x_spine.pdparams --infer_img=demo/39006.jpg --draw_threshold=0.3
 ```

-### 5. DOTA Data evaluation
+### 4. DOTA Data evaluation
 Execute the following command, which will save each image's prediction results in the `output` folder as a txt file with the same name.
 ```
-python tools/infer.py -c configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml -o weights=./weights/s2anet_alignconv_2x_dota.pdparams --infer_dir=/path/to/test/images --output_dir=output --visualize=False --save_results=True
+python tools/infer.py -c configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml -o weights=https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams --infer_dir=/path/to/test/images --output_dir=output --visualize=False --save_results=True
 ```
 Referring to [DOTA Task](https://captain-whu.github.io/DOTA/tasks.html), you need to submit a zip file containing the results for all test images for evaluation. The detection results of each category are stored in a txt file, each line of which is in the following format `image_id score x1 y1 x2 y2 x3 y3 x4 y4`. To evaluate, you should submit the generated zip file to the Task1 of [DOTA Evaluation](https://captain-whu.github.io/DOTA/evaluation.html). You can execute the following command to generate the file
 ```
 python configs/rotate/tools/generate_result.py --pred_txt_dir=output/ --output_dir=submit/ --data_type=dota10
+zip -r submit.zip submit
 ```

-## Model Library
-
-### S2ANet Model
-
-| Model | Conv Type | mAP | Model Download | Configuration File |
-|:-----------:|:----------:|:--------:| :----------:| :---------: |
-| S2ANet | Conv | 71.42 | [model](https://paddledet.bj.bcebos.com/models/s2anet_conv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_conv_2x_dota.yml) |
-| S2ANet | AlignConv | 74.0 | [model](https://paddledet.bj.bcebos.com/models/s2anet_alignconv_2x_dota.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rotate/s2anet/s2anet_alignconv_2x_dota.yml) |
-
-**Attention:** `multiclass_nms` is used here, which is slightly different from the original author's use of NMS.
-
-
-## Predict Deployment
+## Deployment
 The inputs of the `multiclass_nms` operator in Paddle support quadrilateral inputs, so deployment can be done without relying on the rotating frame IOU operator.

-Please refer to the deployment tutorial[Predict deployment](../../deploy/README_en.md)
+Please refer to the deployment tutorial [Predict deployment](../../../deploy/README_en.md)

 ## Citations
diff --git a/configs/rotate/tools/slicebase.py b/configs/rotate/tools/slicebase.py
index 515dd5f8c36d9be3769bcd2050620b976a302840..5514b7e27c7de4047eab750fd6e1e811728a5139 100644
--- a/configs/rotate/tools/slicebase.py
+++ b/configs/rotate/tools/slicebase.py
@@ -222,8 +222,10 @@ class SliceBase(object):
         windows = self.get_windows(height, width)
         self.slice_image_single(resize_img, windows, output_dir, base_name)
         if not self.image_only:
-            self.slice_anno_single(info['annotation'], windows, output_dir,
-                                   base_name)
+            annos = info['annotation']
+            for anno in annos:
+                anno['poly'] = list(map(lambda x: rate * x, anno['poly']))
+            self.slice_anno_single(annos, windows, output_dir, base_name)

     def check_or_mkdirs(self, path):
         if not os.path.exists(path):
diff --git a/test_tipc/configs/dota/s2anet_alignconv_2x_spine.yml b/test_tipc/configs/dota/s2anet_alignconv_2x_spine.yml
index 89f3b5f8aa44be7140f26416b92a7a47b7ec81b7..07a91225ebd9f7663ed4e301ad15b95bd2003ad2 100644
--- a/test_tipc/configs/dota/s2anet_alignconv_2x_spine.yml
+++ b/test_tipc/configs/dota/s2anet_alignconv_2x_spine.yml
@@ -1,9 +1,9 @@
 _BASE_: [
   '../../../configs/datasets/spine_coco.yml',
   '../../../configs/runtime.yml',
-  '../../../configs/dota/_base_/s2anet_optimizer_2x.yml',
-  '../../../configs/dota/_base_/s2anet.yml',
-  '../../../configs/dota/_base_/s2anet_reader.yml',
+  '../../../configs/rotate/s2anet/_base_/s2anet_optimizer_2x.yml',
+  '../../../configs/rotate/s2anet/_base_/s2anet.yml',
+  '../../../configs/rotate/s2anet/_base_/s2anet_reader.yml',
 ]

 pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_vd_ssld_v2_pretrained.pdparams
diff --git a/test_tipc/configs/dota/s2anet_conv_2x_spine.yml b/test_tipc/configs/dota/s2anet_conv_2x_spine.yml
index 746ef0cc90a79e08033c48267f3e3118167fd938..23610b08ab9782f741634597eff28575d3e5dafa 100644
--- a/test_tipc/configs/dota/s2anet_conv_2x_spine.yml
+++ b/test_tipc/configs/dota/s2anet_conv_2x_spine.yml
@@ -1,9 +1,9 @@
 _BASE_: [
   '../../../configs/datasets/spine_coco.yml',
   '../../../configs/runtime.yml',
-  '../../../configs/dota/_base_/s2anet_optimizer_2x.yml',
-  '../../../configs/dota/_base_/s2anet.yml',
-  '../../../configs/dota/_base_/s2anet_reader.yml',
+  '../../../configs/rotate/s2anet/_base_/s2anet_optimizer_2x.yml',
+  '../../../configs/rotate/s2anet/_base_/s2anet.yml',
+  '../../../configs/rotate/s2anet/_base_/s2anet_reader.yml',
 ]
 weights: output/s2anet_conv_1x_dota/model_final
 pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_cos_pretrained.pdparams
diff --git a/test_tipc/prepare.sh b/test_tipc/prepare.sh
index adf8d2dbad34d5f6f55cfe4d05e68d56fcc90a23..f006d984e08197e916288f8285940f96f4a07dfd 100644
--- a/test_tipc/prepare.sh
+++ b/test_tipc/prepare.sh
@@ -152,8 +152,8 @@ else
     cd ./dataset/wider_face/ && tar -xvf wider_tipc.tar && mv -n wider_tipc/* .
     rm -rf wider_tipc/ && cd ../../
     # download spine_coco lite data
-    wget -nc -P ./dataset/spine_coco/ https://paddledet.bj.bcebos.com/data/tipc/spine_tipc.tar --no-check-certificate
-    cd ./dataset/spine_coco/ && tar -xvf spine_tipc.tar && mv -n spine_tipc/* .
+    wget -nc -P ./dataset/spine_coco/ https://paddledet.bj.bcebos.com/data/tipc/spine_coco_tipc.tar --no-check-certificate
+    cd ./dataset/spine_coco/ && tar -xvf spine_coco_tipc.tar && mv -n spine_coco_tipc/* .
     rm -rf spine_tipc/ && cd ../../
     if [[ ${model_name} =~ "s2anet" ]]; then
         cd ./ppdet/ext_op && eval "${python} setup.py install"
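The slicebase.py hunk above fixes a real bug: when an image is resized by `rate` before slicing, its polygon annotations must be scaled by the same factor so the boxes still line up with the resized pixels. A minimal standalone sketch of that transform (the `rescale_annotations` helper and the annotation dict layout are ours for illustration, not part of the actual `SliceBase` class):

```python
# Sketch of the annotation rescaling added in slicebase.py. Each annotation
# stores its rotated box as a flat polygon [x1, y1, x2, y2, x3, y3, x4, y4];
# resizing the image by `rate` means every coordinate is multiplied by `rate`.

def rescale_annotations(annos, rate):
    """Scale every polygon coordinate by the image resize rate, in place."""
    for anno in annos:
        anno['poly'] = [rate * coord for coord in anno['poly']]
    return annos

annos = [{'poly': [0, 0, 100, 0, 100, 50, 0, 50], 'name': 'plane'}]
rescale_annotations(annos, 0.5)
print(annos[0]['poly'])  # [0.0, 0.0, 50.0, 0.0, 50.0, 25.0, 0.0, 25.0]
```

Without this scaling, boxes written by `slice_anno_single` would be placed on the original-resolution coordinates even though the sliced windows come from the resized image, which is exactly what the patch corrects.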
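Both READMEs repeat the same linear learning-rate scaling rule for changed GPU counts or batch sizes. It can be sanity-checked with a one-line helper (a sketch; the function name and the example learning-rate value are illustrative, not taken from the configs):

```python
def scale_lr(lr_default, batch_size_default, gpu_number_default,
             batch_size_new, gpu_number_new):
    """lr_new = lr_default * (bs_new * gpus_new) / (bs_default * gpus_default)."""
    return lr_default * (batch_size_new * gpu_number_new) / (
        batch_size_default * gpu_number_default)

# The S2ANet rows above assume 4 GPUs x 2 images/GPU. Training on 2 GPUs with
# the same per-GPU batch size halves the total batch, so the lr halves too.
print(scale_lr(0.01, 2, 4, 2, 2))  # 0.005
```

The rule keeps the ratio of learning rate to total batch size constant, which is why the PR's switch from 8-GPU to 4-GPU launch commands leaves the configured rate valid only for that default 4x2 setup.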