Commit 52f8a2b7 authored by Zhi Tian

update configs

Parent c93ebbbb
......
@@ -55,6 +55,8 @@ Please note that:
For your convenience, we provide the following trained models (more models are coming soon).
**ResNe(x)ts:**
Model | Total training mem (GB) | Multi-scale training | Testing time / im | AP (minival) | AP (test-dev) | Link
--- |:---:|:---:|:---:|:---:|:--:|:---:
FCOS_R_50_FPN_1x | 29.3 | No | 71ms | 37.1 | 37.4 | [download](https://cloudstor.aarnet.edu.au/plus/s/dDeDPBLEAt19Xrl/download)
......
@@ -62,6 +64,13 @@ FCOS_R_101_FPN_2x | 44.1 | Yes | 74ms | 41.4 | 41.5 | [download](https://cloudst
FCOS_X_101_32x8d_FPN_2x | 72.9 | Yes | 122ms | 42.5 | 42.7 | [download](https://cloudstor.aarnet.edu.au/plus/s/U5myBfGF7MviZ97/download)
FCOS_X_101_64x4d_FPN_2x | 77.7 | Yes | 140ms | 43.0 | 43.2 | [download](https://cloudstor.aarnet.edu.au/plus/s/wpwoCi4S8iajFi9/download)
*All ResNe(x)t models are trained with 16 images in a mini-batch.*
**MobileNets:**
Model | Training batch size | Multi-scale training | Testing time / im | AP (minival) | Link
--- |:---:|:---:|:---:|:---:|:---:
FCOS_R_50_FPN_1x | 29.3 | No | 71ms | 37.1 | [download](https://cloudstor.aarnet.edu.au/plus/s/dDeDPBLEAt19Xrl/download)
[1] *1x and 2x mean the model is trained for 90K and 180K iterations, respectively.* \
[2] *We report total training memory footprint on all GPUs instead of the memory footprint per GPU as in maskrcnn-benchmark*. \
[3] *All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing and flipping.*
......
......
@@ -5,10 +5,12 @@ MODEL:
  FCOS_ON: True
  BACKBONE:
    CONV_BODY: "MNV2-FPN-RETINANET"
    FREEZE_CONV_BODY_AT: 0
  RESNETS:
    BACKBONE_OUT_CHANNELS: 256
  RETINANET:
    USE_C5: False # FCOS uses P5 instead of C5
  USE_SYNCBN: False
DATASETS:
  TRAIN: ("coco_2014_train", "coco_2014_valminusminival")
  TEST: ("coco_2014_minival",)
......
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHT: "https://cloudstor.aarnet.edu.au/plus/s/xtixKaxLWmbcyf7/download#mobilenet_v2-ecbe2b5.pth"
  RPN_ONLY: True
  FCOS_ON: True
  BACKBONE:
    CONV_BODY: "MNV2-FPN-RETINANET"
    FREEZE_CONV_BODY_AT: 0
  RESNETS:
    BACKBONE_OUT_CHANNELS: 256
  RETINANET:
    USE_C5: False # FCOS uses P5 instead of C5
  USE_SYNCBN: True
DATASETS:
  TRAIN: ("coco_2014_train", "coco_2014_valminusminival")
  TEST: ("coco_2014_minival",)
INPUT:
  MIN_SIZE_TRAIN: (800,)
  MAX_SIZE_TRAIN: 1333
  MIN_SIZE_TEST: 800
  MAX_SIZE_TEST: 1333
DATALOADER:
  SIZE_DIVISIBILITY: 32
SOLVER:
  BASE_LR: 0.01
  WEIGHT_DECAY: 0.0001
  STEPS: (60000, 80000)
  MAX_ITER: 90000
  IMS_PER_BATCH: 32
  WARMUP_METHOD: "constant"
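The SOLVER block above defines a step schedule with "constant" warmup. As a rough sketch of how these fields interact (the decay factor `gamma=0.1`, the 500-iteration warmup length, and the 1/3 warmup factor are assumptions taken from common maskrcnn-benchmark defaults, not from this diff):

```python
def lr_at_iter(it, base_lr=0.01, steps=(60000, 80000), gamma=0.1,
               warmup_iters=500, warmup_factor=1.0 / 3):
    """Step LR schedule with "constant" warmup, mirroring the SOLVER block."""
    if it < warmup_iters:
        # WARMUP_METHOD: "constant" holds a fixed fraction of BASE_LR.
        return base_lr * warmup_factor
    # The rate is multiplied by gamma at each milestone in STEPS.
    decays = sum(1 for s in steps if it >= s)
    return base_lr * gamma ** decays
```

Under these assumptions the rate is 0.01 after warmup, 0.001 from iteration 60000, and 0.0001 from iteration 80000 until MAX_ITER.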
......
@@ -34,6 +34,7 @@ _C.MODEL.CLS_AGNOSTIC_BBOX_REG = False
# the path in paths_catalog. Else, it will use it as the specified absolute
# path
_C.MODEL.WEIGHT = ""
# Whether to convert BatchNorm layers to SyncBatchNorm during training
_C.MODEL.USE_SYNCBN = False
# -----------------------------------------------------------------------------
......
# Set up custom environment before nearly anything else is imported
# NOTE: this should be the first import (do not reorder)
from maskrcnn_benchmark.utils.env import setup_environment # noqa F401 isort:skip
import argparse
import os
import torch
def main():
    parser = argparse.ArgumentParser(
        description="Remove the solver states stored in a trained model"
    )
    parser.add_argument(
        "model",
        default="models/FCOS_R_50_FPN_1x.pth",
        help="path to the input model file",
    )
    args = parser.parse_args()

    model = torch.load(args.model)
    # Drop the solver states; pop() avoids a KeyError if a key is absent.
    model.pop("optimizer", None)
    model.pop("scheduler", None)

    filename_wo_ext, ext = os.path.splitext(args.model)
    output_file = filename_wo_ext + "_wo_solver_states" + ext
    torch.save(model, output_file)
    print("Done. The model without solver states is saved to {}".format(output_file))


if __name__ == "__main__":
    main()
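A minimal stand-alone sketch of the same idea, with `pickle` standing in for `torch.save`/`torch.load` (the helper name and the use of `pickle` are illustrative assumptions, not part of the repository):

```python
import os
import pickle


def strip_solver_states(path):
    # Load the checkpoint dict (pickle stands in for torch.load here).
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    # pop() with a default silently ignores keys that are already absent.
    ckpt.pop("optimizer", None)
    ckpt.pop("scheduler", None)
    # Write the stripped checkpoint next to the original.
    root, ext = os.path.splitext(path)
    out = root + "_wo_solver_states" + ext
    with open(out, "wb") as f:
        pickle.dump(ckpt, f)
    return out
```

Only the model weights survive, which is all that is needed for inference or for publishing a trained model.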
......
@@ -31,7 +31,8 @@ def train(cfg, local_rank, distributed):
     device = torch.device(cfg.MODEL.DEVICE)
     model.to(device)
 
-    model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
+    if cfg.MODEL.USE_SYNCBN:
+        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
 
     optimizer = make_optimizer(cfg, model)
     scheduler = make_lr_scheduler(cfg, optimizer)
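The new flag gates SyncBatchNorm conversion. A small sketch of what `convert_sync_batchnorm` does to a model (the toy network is illustrative; the conversion itself runs on CPU, and a distributed process group is only needed for the forward pass):

```python
import torch
from torch import nn

# A toy model with an ordinary BatchNorm layer.
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())

# Recursively replace every BatchNorm*d module with SyncBatchNorm,
# copying its weights and running statistics.
net = nn.SyncBatchNorm.convert_sync_batchnorm(net)

print(type(net[1]).__name__)  # -> SyncBatchNorm
```

With SyncBN, batch statistics are aggregated across all GPUs in the group, which matters for small per-GPU batch sizes such as the MobileNet config above.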
......