Commit 63770918 authored by: P pengmian
......@@ -12,31 +12,19 @@ PaddleSeg offers high performance, rich data augmentation, industrial-grade deployment, and an end-to-end
- **Rich data augmentation**
Drawing on real-world business experience from Baidu's Vision Technology Department, PaddleSeg ships with 10+ built-in data augmentation strategies that can be combined and customized for specific scenarios to improve model generalization and robustness.
- **Coverage of mainstream models**
Supports three mainstream segmentation networks, U-Net, DeepLabv3+, and ICNet, which, combined with pre-trained models and configurable backbones, cover a wide range of performance and accuracy requirements.
- **High performance**
PaddleSeg supports training acceleration strategies such as multi-process I/O, multi-GPU parallelism, and cross-GPU synchronized Batch Norm. Together with the memory optimization features of the PaddlePaddle core framework, this greatly reduces the GPU memory footprint of segmentation models and speeds up training.
- **Industrial-grade deployment**
Based on [Paddle Serving](https://github.com/PaddlePaddle/Serving) and the high-performance PaddlePaddle inference engine, and combined with Baidu's open AI capabilities, you can easily build portrait segmentation and lane line segmentation services.
</br>
......@@ -64,28 +52,39 @@ PaddleSeg provides a variety of pre-trained models and offers Notebook-based online
### Inference and deployment
* [Model export](./docs/model_export.md)
* [Using the C++ inference library](./inference)
* [Serving-based deployment with PaddleSeg Serving](./serving)
### Advanced features
* [Data augmentation in PaddleSeg](./docs/data_aug.md)
* [Using the specialized vertical-domain models](./contrib)
</br>
</br>
## FAQ
#### Q: How should data augmentation for image segmentation be configured, and how do unpadding, step-scaling, and range-scaling work?
A: For a more detailed description of data augmentation, see the [data augmentation](./docs/data_aug.md) documentation.
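As a quick orientation, here is a minimal sketch of the `AUG` section of a config file. It only uses key names that appear elsewhere in this commit, except for `AUG_METHOD` and `MAX_SCALE_FACTOR`, which are assumed from the step-scaling comments and are not shown in this diff; the values are illustrative, not recommendations:

```yaml
AUG:
    # One of the three resize modes from the question: unpadding, stepscaling, rangescaling.
    # AUG_METHOD is an assumed key name; check docs/data_aug.md for the exact spelling.
    AUG_METHOD: "stepscaling"
    MIN_SCALE_FACTOR: 0.75   # for stepscaling
    MAX_SCALE_FACTOR: 1.25   # for stepscaling (assumed counterpart of MIN_SCALE_FACTOR)
    SCALE_STEP_SIZE: 0.25    # for stepscaling
    MIRROR: True             # random horizontal mirroring
    FLIP: False              # vertical flip switch added in this commit
    FLIP_RATIO: 0.5          # probability of a vertical flip when FLIP is True
```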
#### Q: What should I do when images are too large at inference time and the GPU runs out of memory?
A: Reduce the batch size, switch to the Group Norm strategy, and so on.
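A minimal sketch of those two mitigations in config terms, using the `BATCH_SIZE` and `MODEL.DEFAULT_NORM_TYPE` keys that appear elsewhere in this repository (values are illustrative):

```yaml
BATCH_SIZE: 1                  # a smaller batch size lowers peak GPU memory usage
MODEL:
    MODEL_NAME: "deeplabv3p"
    DEFAULT_NORM_TYPE: "gn"    # Group Norm instead of Batch Norm
```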
</br>
## Try it online
PaddleSeg provides a variety of pre-trained models and offers Notebook-based tutorials you can try online:
|Tutorial|Link|
|-|-|
|U-Net pet segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectDetail/102889)|
|PaddleSeg portrait segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectDetail/100798)|
|DeepLabv3+ image segmentation|[Try it](https://aistudio.baidu.com/aistudio/projectDetail/101696)|
|PaddleSeg vertical-domain models|[Try it](https://aistudio.baidu.com/aistudio/projectdetail/115541)|
</br>
## Changelog
......@@ -94,11 +93,10 @@ A: Reduce the batch size, switch to the Group Norm strategy, and so on.
**`v0.1.0`**
* Initial release of the PaddleSeg segmentation library, including the DeepLabv3+, U-Net, and ICNet segmentation models; DeepLabv3+ supports Xception and MobileNet as configurable backbones.
* Released [ACE2P](./contrib/ACE2P), the champion inference model of the CVPR19 LIP human part segmentation challenge
* Released built-in DeepLabv3+-based inference models for [portrait segmentation](./contrib/HumanSeg/) and [lane line segmentation](./contrib/RoadLine)
</br>
</br>
## How to contribute
......
......@@ -10,18 +10,6 @@ AUG:
MIN_SCALE_FACTOR: 0.5 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
RICH_CROP:
ENABLE: False
ASPECT_RATIO: 0.33
BLUR: True
BLUR_RATIO: 0.1
FLIP: True
FLIP_RATIO: 0.2
MAX_ROTATION: 15
MIN_AREA_RATIO: 0.5
BRIGHTNESS_JITTER_RATIO: 0.5
CONTRAST_JITTER_RATIO: 0.5
SATURATION_JITTER_RATIO: 0.5
BATCH_SIZE: 4
DATASET:
DATA_DIR: "./dataset/cityscapes/"
......@@ -44,8 +32,7 @@ TEST:
TEST_MODEL: "snapshots/cityscape_v5/final/"
TRAIN:
MODEL_SAVE_DIR: "snapshots/cityscape_v7/"
PRETRAINED_MODEL: u"pretrain/deeplabv3plus_gn_init"
RESUME: False
PRETRAINED_MODEL_DIR: "pretrain/deeplabv3plus_gn_init"
SNAPSHOT_EPOCH: 10
SOLVER:
LR: 0.001
......
......@@ -12,18 +12,6 @@ AUG:
MIN_SCALE_FACTOR: 0.75 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
RICH_CROP:
ENABLE: False
ASPECT_RATIO: 0.33
BLUR: True
BLUR_RATIO: 0.1
FLIP: True
FLIP_RATIO: 0.2
MAX_ROTATION: 15
MIN_AREA_RATIO: 0.5
BRIGHTNESS_JITTER_RATIO: 0.5
CONTRAST_JITTER_RATIO: 0.5
SATURATION_JITTER_RATIO: 0.5
BATCH_SIZE: 4
DATASET:
DATA_DIR: "./dataset/mini_pet/"
......@@ -45,8 +33,7 @@ TEST:
TEST_MODEL: "./test/saved_model/unet_pet/final/"
TRAIN:
MODEL_SAVE_DIR: "./test/saved_models/unet_pet/"
PRETRAINED_MODEL: "./test/models/unet_coco/"
RESUME: False
PRETRAINED_MODEL_DIR: "./test/models/unet_coco/"
SNAPSHOT_EPOCH: 10
SOLVER:
NUM_EPOCHS: 500
......
......@@ -11,7 +11,7 @@ The TRAIN group holds all training-related configuration
<br/>
<br/>
## `PRETRAINED_MODEL_DIR`
Path to the pre-trained model directory
## Default value
......@@ -28,19 +28,15 @@ The TRAIN group holds all training-related configuration
<br/>
<br/>
## `RESUME_MODEL_DIR`
Restore parameters from the specified directory and resume training
## Default value
'' (empty string)
## Notes
* When `RESUME_MODEL_DIR` is set, PaddleSeg resumes from the most recent epoch of the previous training run and restores the transient training state (for example, the already-decayed learning rate and the optimizer's momentum buffers). The last directory component of `RESUME_MODEL_DIR` must be an integer or the string final: an integer is used as the starting epoch for continued training, while final means training will not be continued. If the directory does not satisfy these conditions, PaddleSeg raises an error.
<br/>
<br/>
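A minimal sketch combining the two fields above (the key names come from this page; the paths and the epoch number are made up for illustration):

```yaml
TRAIN:
    MODEL_SAVE_DIR: "snapshots/my_model/"
    # Used to initialize weights when starting a fresh training run.
    PRETRAINED_MODEL_DIR: "pretrain/deeplabv3plus_gn_init"
    # Set this instead to resume an interrupted run; the last path component
    # must be an integer epoch number or the string "final".
    RESUME_MODEL_DIR: "snapshots/my_model/10"
```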
......@@ -57,4 +53,4 @@ False
* This switch only takes effect for multi-GPU training (Windows does not support multi-GPU training, so there is no need to enable it there)
* For multi-GPU training, enabling this switch is recommended, as it can improve the trained model's quality; a minimal config sketch follows below
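A one-line sketch of the corresponding config entry, assuming this section documents `TRAIN.SYNC_BATCH_NORM` (its heading sits outside this hunk; the key itself appears in the training command example elsewhere in this commit):

```yaml
TRAIN:
    SYNC_BATCH_NORM: True   # synchronize Batch Norm statistics across GPUs; multi-GPU training only
```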
......@@ -35,9 +35,6 @@ rich crop applies a variety of transformations to the image to keep the training data
- blur
Adds blur to the image. Controlled by the `AUG.RICH_CROP.BLUR` switch; when it is False this feature is disabled. `AUG.RICH_CROP.BLUR_RATIO` controls the probability of applying blur.
- flip
Flips the image vertically. Controlled by the `AUG.RICH_CROP.FLIP` switch; when it is False this feature is disabled. `AUG.RICH_CROP.FLIP_RATIO` controls the probability of a vertical flip.
- rotation
Rotates the image. `AUG.RICH_CROP.MAX_ROTATION` controls the maximum rotation angle. The extra area produced by rotation is filled with the mean value. A combined configuration sketch of these rich-crop options follows below.
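A sketch of the corresponding `AUG.RICH_CROP` keys, using names and defaults that appear in this commit's config files (illustrative only):

```yaml
AUG:
    RICH_CROP:
        ENABLE: True                   # master switch (assumed to gate the options below)
        ASPECT_RATIO: 0.33
        MIN_AREA_RATIO: 0.5
        BLUR: True                     # random blur
        BLUR_RATIO: 0.1                # probability of blurring
        MAX_ROTATION: 15               # maximum rotation angle in degrees
        BRIGHTNESS_JITTER_RATIO: 0.5
        CONTRAST_JITTER_RATIO: 0.5
        SATURATION_JITTER_RATIO: 0.5
```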
......
......@@ -4,7 +4,7 @@ For all built-in segmentation models, PaddleSeg provides pre-trained
## ImageNet pre-trained models
All ImageNet pre-trained models come from the PaddlePaddle image classification library; for more details, see [here](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/image_classification)
| Model | Dataset | Depth multiplier | Download link | Top-1/Top-5 Error |
|---|---|---|---|---|
......
......@@ -76,7 +76,7 @@ python pdseg/train.py --use_gpu \
--tb_log_dir train_log \
--cfg configs/unet_pet.yaml \
BATCH_SIZE 4 \
TRAIN.PRETRAINED_MODEL_DIR pretrained_model/unet_bn_coco \
TRAIN.SYNC_BATCH_NORM True \
SOLVER.LR 5e-5
```
......
......@@ -261,17 +261,18 @@ class SegDataset(object):
SATURATION_JITTER_RATIO,
contrast_jitter_ratio=cfg.AUG.RICH_CROP.
CONTRAST_JITTER_RATIO)
if cfg.AUG.FLIP:
if cfg.AUG.FLIP_RATIO <= 0:
n = 0
elif cfg.AUG.FLIP_RATIO >= 1:
n = 1
else:
n = int(1.0 / cfg.AUG.FLIP_RATIO)
if n > 0:
if np.random.randint(0, n) == 0:
img = img[::-1, :, :]
grt = grt[::-1, :]
if cfg.AUG.MIRROR:
if np.random.randint(0, 2) == 1:
......
......@@ -152,15 +152,15 @@ def load_checkpoint(exe, program):
Load a checkpoint from the resume model directory to resume training
"""
print('Resume model training from:', cfg.TRAIN.RESUME_MODEL_DIR)
if not os.path.exists(cfg.TRAIN.RESUME_MODEL_DIR):
raise ValueError("TRAIN.PRETRAIN_MODEL {} not exist!".format(
cfg.TRAIN.PRETRAINED_MODEL))
cfg.TRAIN.RESUME_MODEL_DIR))
fluid.io.load_persistables(
exe, cfg.TRAIN.RESUME_MODEL_DIR, main_program=program)
model_path = cfg.TRAIN.RESUME_MODEL_DIR
# Check whether the path ends with a path separator
if model_path[-1] == os.sep:
model_path = model_path[0:-1]
......@@ -255,11 +255,11 @@ def train(cfg):
# Resume training
begin_epoch = cfg.SOLVER.BEGIN_EPOCH
if cfg.TRAIN.RESUME_MODEL_DIR:
begin_epoch = load_checkpoint(exe, train_prog)
# Load pretrained model
elif os.path.exists(cfg.TRAIN.PRETRAINED_MODEL_DIR):
print('Pretrained model dir:', cfg.TRAIN.PRETRAINED_MODEL_DIR)
load_vars = []
load_fail_vars = []
......@@ -268,10 +268,10 @@ def train(cfg):
Check whether the persistable variable shape matches the current network
"""
var_exist = os.path.exists(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
if var_exist:
var_shape = parse_shape_from_file(
os.path.join(cfg.TRAIN.PRETRAINED_MODEL_DIR, var.name))
return var_shape == shape
return False
......@@ -285,10 +285,10 @@ def train(cfg):
load_fail_vars.append(x)
if cfg.MODEL.FP16:
# If FP16 training mode is enabled, load the FP16 variables separately
load_fp16_vars(exe, cfg.TRAIN.PRETRAINED_MODEL_DIR, train_prog)
else:
fluid.io.load_vars(
exe, dirname=cfg.TRAIN.PRETRAINED_MODEL_DIR, vars=load_vars)
for var in load_vars:
print("Parameter[{}] loaded sucessfully!".format(var.name))
for var in load_fail_vars:
......@@ -299,7 +299,7 @@ def train(cfg):
len(load_vars) + len(load_fail_vars)))
else:
print('Pretrained model dir {} does not exist, training from scratch...'.
format(cfg.TRAIN.PRETRAINED_MODEL_DIR))
fetch_list = [avg_loss.name, lr.name]
if args.debug:
......
......@@ -98,6 +98,16 @@ class SegConfig(dict):
'DATASET.IMAGE_TYPE config error, only support `rgb`, `gray` and `rgba`'
)
if not self.TRAIN_CROP_SIZE:
raise ValueError(
'TRAIN_CROP_SIZE is empty! Please set a pair of values in format (width, height)'
)
if not self.EVAL_CROP_SIZE:
raise ValueError(
'EVAL_CROP_SIZE is empty! Please set a pair of values in format (width, height)'
)
if reset_dataset:
# Ensure the file list uses UTF-8 encoding
train_sets = codecs.open(self.DATASET.TRAIN_FILE_LIST, 'r',
......
......@@ -69,6 +69,10 @@ cfg.DATASET.IGNORE_INDEX = 255
########################### Data augmentation configuration ######################################
# Horizontal (left-right) mirror flip of the image
cfg.AUG.MIRROR = True
# Vertical (up-down) flip switch, True/False
cfg.AUG.FLIP = False
# Probability of applying a vertical flip, 0-1
cfg.AUG.FLIP_RATIO = 0.5
# Fixed resize size of the image (width, height), non-negative
cfg.AUG.FIX_RESIZE_SIZE = tuple()
# There are three image resize modes:
......@@ -107,18 +111,14 @@ cfg.AUG.RICH_CROP.CONTRAST_JITTER_RATIO = 0.5
cfg.AUG.RICH_CROP.BLUR = False
# Probability of applying blur to the image, 0-1
cfg.AUG.RICH_CROP.BLUR_RATIO = 0.1
# Vertical (up-down) flip switch, True/False
cfg.AUG.RICH_CROP.FLIP = False
# Probability of applying a vertical flip, 0-1
cfg.AUG.RICH_CROP.FLIP_RATIO = 0.2
########################### Training configuration ##########################################
# Directory where model checkpoints are saved
cfg.TRAIN.MODEL_SAVE_DIR = ''
# Pre-trained model directory
cfg.TRAIN.PRETRAINED_MODEL_DIR = ''
# Checkpoint directory to resume training from
cfg.TRAIN.RESUME_MODEL_DIR = ''
# Whether to synchronize Batch Norm mean and variance across GPUs
cfg.TRAIN.SYNC_BATCH_NORM = False
# Epoch interval at which model parameters are saved; can be used to continue training an interrupted model
......
......@@ -46,7 +46,7 @@ model_urls = {
"unet_bn_coco": "https://paddleseg.bj.bcebos.com/models/unet_coco_v3.tgz",
# Cityscapes pretrained
"deeplabv3plus_mobilenetv2-1-0_bn_cityscapes":
"deeplabv3p_mobilenetv2-1-0_bn_cityscapes":
"https://paddleseg.bj.bcebos.com/models/mobilenet_cityscapes.tgz",
"deeplabv3p_xception65_gn_cityscapes":
"https://paddleseg.bj.bcebos.com/models/deeplabv3p_xception65_cityscapes.tgz",
......
......@@ -10,18 +10,6 @@ AUG:
MIN_SCALE_FACTOR: 0.5 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
RICH_CROP:
ENABLE: False
ASPECT_RATIO: 0.33
BLUR: True
BLUR_RATIO: 0.1
FLIP: True
FLIP_RATIO: 0.2
MAX_ROTATION: 15
MIN_AREA_RATIO: 0.5
BRIGHTNESS_JITTER_RATIO: 0.5
CONTRAST_JITTER_RATIO: 0.5
SATURATION_JITTER_RATIO: 0.5
BATCH_SIZE: 4
DATASET:
DATA_DIR: "./dataset/cityscapes/"
......@@ -46,8 +34,7 @@ TEST:
TEST_MODEL: "snapshots/cityscape_v5/final/"
TRAIN:
MODEL_SAVE_DIR: "snapshots/cityscape_v5/"
PRETRAINED_MODEL: "pretrain/deeplabv3plus_gn_init"
RESUME: False
PRETRAINED_MODEL_DIR: "pretrain/deeplabv3plus_gn_init"
SNAPSHOT_EPOCH: 10
SOLVER:
LR: 0.001
......
......@@ -12,18 +12,6 @@ AUG:
MIN_SCALE_FACTOR: 0.75 # for stepscaling
SCALE_STEP_SIZE: 0.25 # for stepscaling
MIRROR: True
RICH_CROP:
ENABLE: False
ASPECT_RATIO: 0.33
BLUR: True
BLUR_RATIO: 0.1
FLIP: True
FLIP_RATIO: 0.2
MAX_ROTATION: 15
MIN_AREA_RATIO: 0.5
BRIGHTNESS_JITTER_RATIO: 0.5
CONTRAST_JITTER_RATIO: 0.5
SATURATION_JITTER_RATIO: 0.5
BATCH_SIZE: 6
DATASET:
DATA_DIR: "./dataset/pet/"
......@@ -45,8 +33,7 @@ TEST:
TEST_MODEL: "./test/saved_model/unet_pet/final/"
TRAIN:
MODEL_SAVE_DIR: "./test/saved_models/unet_pet/"
PRETRAINED_MODEL: "./test/models/unet_coco/"
RESUME: False
PRETRAINED_MODEL_DIR: "./test/models/unet_coco/"
SNAPSHOT_EPOCH: 10
SOLVER:
NUM_EPOCHS: 500
......
......@@ -45,7 +45,8 @@ if __name__ == "__main__":
saved_model = os.path.join(LOCAL_PATH, "saved_model", model_name)
parser = argparse.ArgumentParser(description="PaddleSeg loacl test")
parser.add_argument("--devices",
parser.add_argument(
"--devices",
dest="devices",
help="GPU id of running. if more than one, use spacing to separate.",
nargs="+",
......@@ -75,7 +76,7 @@ if __name__ == "__main__":
train(
flags=["--cfg", cfg, "--use_gpu", "--log_steps", "10"],
options=[
"SOLVER.NUM_EPOCHS", "1", "TRAIN.PRETRAINED_MODEL", test_model,
"SOLVER.NUM_EPOCHS", "1", "TRAIN.PRETRAINED_MODEL_DIR", test_model,
"TRAIN.MODEL_SAVE_DIR", saved_model
],
devices=devices)
......@@ -46,7 +46,8 @@ if __name__ == "__main__":
saved_model = os.path.join(LOCAL_PATH, "saved_model", model_name)
parser = argparse.ArgumentParser(description="PaddleSeg loacl test")
parser.add_argument("--devices",
parser.add_argument(
"--devices",
dest="devices",
help="GPU id of running. if more than one, use spacing to separate.",
nargs="+",
......@@ -59,7 +60,7 @@ if __name__ == "__main__":
train(
flags=["--cfg", cfg, "--use_gpu", "--log_steps", "10"],
options=[
"SOLVER.NUM_EPOCHS", "1", "TRAIN.PRETRAINED_MODEL", test_model,
"SOLVER.NUM_EPOCHS", "1", "TRAIN.PRETRAINED_MODEL_DIR", test_model,
"TRAIN.MODEL_SAVE_DIR", saved_model, "DATASET.TRAIN_FILE_LIST",
os.path.join(DATASET_PATH, "mini_pet", "file_list",
"train_list.txt"), "DATASET.VAL_FILE_LIST",
......
......@@ -84,7 +84,7 @@ def _uncompress_file(filepath, extrapath, delete_file, print_progress):
else:
handler = functools.partial(_uncompress_file_tar, mode="r")
for total_num, index, rootpath in handler(filepath, extrapath):
if print_progress:
done = int(50 * float(index) / total_num)
progress(
......@@ -95,27 +95,31 @@ def _uncompress_file(filepath, extrapath, delete_file, print_progress):
if delete_file:
os.remove(filepath)
return rootpath
def _uncompress_file_zip(filepath, extrapath):
files = zipfile.ZipFile(filepath, 'r')
filelist = files.namelist()
rootpath = filelist[0]
total_num = len(filelist)
for index, file in enumerate(filelist):
files.extract(file, extrapath)
yield total_num, index, rootpath
files.close()
yield total_num, index, rootpath
def _uncompress_file_tar(filepath, extrapath, mode="r:gz"):
files = tarfile.open(filepath, mode)
filelist = files.getnames()
total_num = len(filelist)
rootpath = filelist[0]
for index, file in enumerate(filelist):
files.extract(file, extrapath)
yield total_num, index, rootpath
files.close()
yield total_num, index, rootpath
def download_file_and_uncompress(url,
......@@ -150,7 +154,9 @@ def download_file_and_uncompress(url,
if not os.path.exists(savename):
if not os.path.exists(savepath):
_download_file(url, savepath, print_progress)
savename = _uncompress_file(savepath, extrapath, delete_file,
print_progress)
savename = os.path.join(extrapath, savename)
shutil.move(savename, extraname)
......
......@@ -67,7 +67,7 @@ MODEL:
DEEPLAB:
BACKBONE: "xception_65"
TRAIN:
PRETRAINED_MODEL: "./pretrained_model/deeplabv3p_xception65_bn_pet/"
PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_xception65_bn_pet/"
# Other settings
......@@ -79,7 +79,6 @@ AUG:
BATCH_SIZE: 4
TRAIN:
MODEL_SAVE_DIR: "./finetune/deeplabv3p_xception65_bn_pet/"
RESUME: False
SNAPSHOT_EPOCH: 10
TEST:
TEST_MODEL: "./finetune/deeplabv3p_xception65_bn_pet/final"
......@@ -128,6 +127,6 @@ python pdseg/eval.py --use_gpu --cfg ./configs/test_pet.yaml
|xception65_imagenet|-|bn|ImageNet|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|deeplabv3p_mobilnetv2-1-0_bn_coco|MobileNet V2|bn|COCO|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenet <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEFAULT_NORM_TYPE: bn|
|deeplabv3p_xception65_bn_coco|Xception|bn|COCO|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn |
|deeplabv3p_mobilnetv2-1-0_bn_cityscapes|MobileNet V2|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: mobilenet <br> MODEL.DEEPLAB.DEPTH_MULTIPLIER: 1.0 <br> MODEL.DEEPLAB.ENCODER_WITH_ASPP: False <br> MODEL.DEEPLAB.ENABLE_DECODER: False <br> MODEL.DEFAULT_NORM_TYPE: bn|
|deeplabv3p_xception65_gn_cityscapes|Xception|gn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: gn|
|**deeplabv3p_xception65_bn_cityscapes**|Xception|bn|Cityscapes|MODEL.MODEL_NAME: deeplabv3p <br> MODEL.DEEPLAB.BACKBONE: xception_65 <br> MODEL.DEFAULT_NORM_TYPE: bn|
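For example, a sketch of a `MODEL` section for fine-tuning from `deeplabv3p_mobilnetv2-1-0_bn_cityscapes`, assembled from the configuration hints in that row of the table above (an illustration only, not a verified config; the pre-trained model path is made up):

```yaml
MODEL:
    MODEL_NAME: "deeplabv3p"
    DEFAULT_NORM_TYPE: "bn"
    DEEPLAB:
        BACKBONE: "mobilenet"
        DEPTH_MULTIPLIER: 1.0
        ENCODER_WITH_ASPP: False
        ENABLE_DECODER: False
TRAIN:
    # Hypothetical local path where the downloaded weights were extracted.
    PRETRAINED_MODEL_DIR: "./pretrained_model/deeplabv3p_mobilnetv2-1-0_bn_cityscapes/"
```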
......@@ -64,8 +64,9 @@ DATASET:
MODEL:
MODEL_NAME: "icnet"
DEFAULT_NORM_TYPE: "bn"
MULTI_LOSS_WEIGHT: "[1.0, 0.4, 0.16]"
TRAIN:
PRETRAINED_MODEL: "./pretrained_model/icnet_bn_cityscapes/"
PRETRAINED_MODEL_DIR: "./pretrained_model/icnet_bn_cityscapes/"
# Other settings
......@@ -77,7 +78,6 @@ AUG:
BATCH_SIZE: 4
TRAIN:
MODEL_SAVE_DIR: "./finetune/icnet_pet/"
RESUME: False
SNAPSHOT_EPOCH: 10
TEST:
TEST_MODEL: "./finetune/icnet_pet/final"
......@@ -117,4 +117,4 @@ python pdseg/eval.py --use_gpu --cfg ./configs/test_pet.yaml
|Pre-trained model name|Backbone|Norm|Dataset|Configuration|
|-|-|-|-|-|
|icnet_bn_cityscapes|-|bn|Cityscapes|MODEL.MODEL_NAME: icnet <br> MODEL.DEFAULT_NORM_TYPE: bn <br> MULTI_LOSS_WEIGHT: [1.0, 0.4, 0.16]|
......@@ -65,7 +65,7 @@ MODEL:
MODEL_NAME: "unet"
DEFAULT_NORM_TYPE: "bn"
TRAIN:
PRETRAINED_MODEL: "./pretrained_model/unet_bn_coco/"
PRETRAINED_MODEL_DIR: "./pretrained_model/unet_bn_coco/"
# Other settings
......@@ -77,7 +77,6 @@ AUG:
BATCH_SIZE: 4
TRAIN:
MODEL_SAVE_DIR: "./finetune/unet_pet/"
RESUME: False
SNAPSHOT_EPOCH: 10
TEST:
TEST_MODEL: "./finetune/unet_pet/final"
......