Unverified commit 30a031d8, authored by qingqing01, committed via GitHub

Support Python3 in DeepLab v3+ (#1321)

Parent commit: 6e239c97
Included in 1 merge request: !4047 add danet
deeplabv3plus_xception65_initialize.params
deeplabv3plus.params
deeplabv3plus.tar.gz
DeepLab: running the sample code in this directory requires the latest PaddlePaddle develop version. If your installed PaddlePaddle is below this requirement, please update it following the instructions in the [installation guide](http://www.paddlepaddle.org/docs/develop/documentation/zh/build_and_install/pip_install_cn.html).
DeepLab: running the sample code in this directory requires PaddlePaddle Fluid v1.0.0 or above. If your installed PaddlePaddle is below this requirement, please update it following the instructions in the installation guide. If a GPU is used, the program also requires cuDNN v7.
## Code structure
......@@ -41,10 +41,12 @@ data/cityscape/
To train the model from scratch, download our initialization model first:
```
wget http://paddlemodels.cdn.bcebos.com/deeplab/deeplabv3plus_xception65_initialize.tar.gz
tar -xf deeplabv3plus_xception65_initialize.tar.gz && rm deeplabv3plus_xception65_initialize.tar.gz
```
To fine-tune from the final trained model, or to use it directly for inference, download our final model:
```
wget http://paddlemodels.cdn.bcebos.com/deeplab/deeplabv3plus.tar.gz
tar -xf deeplabv3plus.tar.gz && rm deeplabv3plus.tar.gz
```
......@@ -70,11 +72,11 @@ python train.py --help
```
python ./train.py \
--batch_size=8 \
--parallel=true
--parallel=true \
--train_crop_size=769 \
--total_step=90000 \
--init_weights_path=$INIT_WEIGHTS_PATH \
--save_weights_path=$SAVE_WEIGHTS_PATH \
--init_weights_path=deeplabv3plus_xception65_initialize.params \
--save_weights_path=output \
--dataset_path=$DATASET_PATH
```
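As a reading aid, here is a minimal `argparse` sketch of the flags used above. This is hypothetical: the actual `train.py` defines its arguments through the repository's own utilities, so the names and defaults below are only assumptions taken from the command line shown.
```
# Hypothetical sketch of the training flags shown above; not the repository's code.
import argparse

parser = argparse.ArgumentParser(description="DeepLab v3+ training (sketch)")
parser.add_argument("--batch_size", type=int, default=8)
parser.add_argument("--parallel", type=lambda s: s.lower() == "true", default=False)
parser.add_argument("--train_crop_size", type=int, default=769)
parser.add_argument("--total_step", type=int, default=90000)
parser.add_argument("--init_weights_path", type=str, default=None)
parser.add_argument("--save_weights_path", type=str, default=None)
parser.add_argument("--dataset_path", type=str, default=None)
args = parser.parse_args()
```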
......@@ -82,11 +84,10 @@ python ./train.py \
Run the following command to evaluate on the `Cityscape` test set:
```
python ./eval.py \
--init_weights_path=$INIT_WEIGHTS_PATH \
--init_weights=deeplabv3plus.params \
--dataset_path=$DATASET_PATH
```
The model file must be specified via the `--model_path` option.
The evaluation metric reported by the test script is [mean IoU]().
The model file must be specified via the `--model_path` option. The evaluation metric reported by the test script is mean IoU.
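For clarity, here is a small sketch of how mean IoU can be computed from per-class correct/wrong pixel counts, mirroring the accumulation done by the evaluation script shown in the diff below; the helper name is illustrative:
```
import numpy as np

def mean_iou(right, wrong):
    # right[c]: correctly classified pixels of class c (intersection)
    # wrong[c]: mismatched pixels involving class c (union minus intersection)
    right = np.asarray(right, dtype=np.float64)
    wrong = np.asarray(wrong, dtype=np.float64)
    seen = (right + wrong) != 0          # skip classes that never occur
    return np.mean(right[seen] / (right[seen] + wrong[seen]))
```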
## Experimental results
......
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
os.environ['FLAGS_fraction_of_gpu_memory_to_use'] = '0.98'
......@@ -91,7 +94,7 @@ exe = fluid.Executor(place)
exe.run(sp)
if args.init_weights_path:
print "load from:", args.init_weights_path
print("load from:", args.init_weights_path)
load_model()
dataset = CityscapeDataset(args.dataset_path, 'val')
......@@ -118,7 +121,7 @@ for i, imgs, labels, names in batches:
    mp = (wrong + right) != 0
    miou2 = np.mean((right[mp] * 1.0 / (right[mp] + wrong[mp])))
    if args.verbose:
        print 'step: %s, mIoU: %s' % (i + 1, miou2)
        print('step: %s, mIoU: %s' % (i + 1, miou2))
    else:
        print '\rstep: %s, mIoU: %s' % (i + 1, miou2),
        print('\rstep: %s, mIoU: %s' % (i + 1, miou2))
        sys.stdout.flush()
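One subtlety of this conversion: the Python 2 trailing comma on the `\r` progress print suppressed the newline, so the line was overwritten in place. A Python 3 form that preserves that behavior (a sketch, not what this commit does) would pass `end=''`:
```
# Stay on the same line; the carriage return rewrites it on the next step.
print('\rstep: %s, mIoU: %s' % (i + 1, miou2), end='')
sys.stdout.flush()
```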
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.fluid as fluid
......@@ -50,7 +53,7 @@ def append_op_result(result, name):
def conv(*args, **kargs):
    kargs['param_attr'] = name_scope + 'weights'
    if kargs.has_key('bias_attr') and kargs['bias_attr']:
    if 'bias_attr' in kargs and kargs['bias_attr']:
        kargs['bias_attr'] = name_scope + 'biases'
    else:
        kargs['bias_attr'] = False
......@@ -62,7 +65,7 @@ def group_norm(input, G, eps=1e-5, param_attr=None, bias_attr=None):
    N, C, H, W = input.shape
    if C % G != 0:
        print "group can not divide channle:", C, G
        print("group can not divide channle:", C, G)
        for d in range(10):
            for t in [d, -d]:
                if G + t <= 0: continue
......@@ -70,7 +73,7 @@ def group_norm(input, G, eps=1e-5, param_attr=None, bias_attr=None):
                    G = G + t
                    break
            if C % G == 0:
                print "use group size:", G
                print("use group size:", G)
                break
    assert C % G == 0
    param_shape = (G, )
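The loop above nudges `G` toward a nearby value that divides the channel count `C`, so that the group count is valid. The same logic as a standalone sketch (hypothetical helper, for illustration only):
```
def nearest_divisible_groups(C, G, search=10):
    # Mirror of the adjustment in group_norm: try G, then G+1, G-1, G+2, ...
    if C % G == 0:
        return G
    for d in range(search):
        for t in (d, -d):
            if G + t > 0 and C % (G + t) == 0:
                return G + t
    raise ValueError("no divisor of %d near %d" % (C, G))
```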
......@@ -139,7 +142,7 @@ def seq_conv(input, channel, stride, filter, dilation=1, act=None):
            filter,
            stride,
            groups=input.shape[1],
            padding=(filter / 2) * dilation,
            padding=(filter // 2) * dilation,
            dilation=dilation)
        input = bn(input)
        if act: input = act(input)
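The `//` change matters because `/` is true division in Python 3, so `filter / 2` becomes a float, while the convolution expects an integer padding; a quick illustration:
```
# Python 3: "/" is true division, "//" is floor division.
filter_size, dilation = 3, 2
print(filter_size / 2)                # 1.5 -> float, not usable as padding
print((filter_size // 2) * dilation)  # 2   -> integer padding, as intended
```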
......
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import cv2
import numpy as np
import os
import six
default_config = {
"shuffle": True,
......@@ -30,7 +35,7 @@ def slice_with_pad(a, s, value=0):
                pr = 0
            pads.append([pl, pr])
            slices.append([l, r])
    slices = map(lambda x: slice(x[0], x[1], 1), slices)
    slices = list(map(lambda x: slice(x[0], x[1], 1), slices))
    a = a[slices]
    a = np.pad(a, pad_width=pads, mode='constant', constant_values=value)
    return a
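The `list(...)` wrapper is needed because `map` returns a lazy, single-pass iterator in Python 3 (it returned a list in Python 2), and the result here is used to index the array; for example:
```
# Python 3: wrap map() in list() when the result is indexed or reused.
slices = map(lambda x: slice(x[0], x[1], 1), [[0, 2], [1, 3]])
print(list(slices))  # [slice(0, 2, 1), slice(1, 3, 1)]
```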
......@@ -38,11 +43,17 @@ def slice_with_pad(a, s, value=0):
class CityscapeDataset:
    def __init__(self, dataset_dir, subset='train', config=default_config):
        import commands
        label_dirname = dataset_dir + 'gtFine/' + subset
        label_files = commands.getoutput(
            "find %s -type f | grep labelTrainIds | sort" %
            label_dirname).splitlines()
        label_dirname = os.path.join(dataset_dir, 'gtFine/' + subset)
        if six.PY2:
            import commands
            label_files = commands.getoutput(
                "find %s -type f | grep labelTrainIds | sort" %
                label_dirname).splitlines()
        else:
            import subprocess
            label_files = subprocess.getstatusoutput(
                "find %s -type f | grep labelTrainIds | sort" %
                label_dirname)[-1].splitlines()
        self.label_files = label_files
        self.label_dirname = label_dirname
        self.index = 0
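This branch keeps Python 2 compatibility: the `commands` module was removed in Python 3, where `subprocess.getstatusoutput` is the equivalent call but returns an `(exit_status, output)` tuple, hence the `[-1]` index. A minimal comparison (assuming a POSIX shell with `find` available):
```
import subprocess

# Python 3 replacement for commands.getoutput(cmd):
status, output = subprocess.getstatusoutput("find . -type f | sort")
files = output.splitlines()  # same text the Python 2 branch produces
```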
......@@ -50,7 +61,7 @@ class CityscapeDataset:
        self.dataset_dir = dataset_dir
        self.config = config
        self.reset()
        print "total number", len(label_files)
        print("total number", len(label_files))

    def reset(self, shuffle=False):
        self.index = 0
......@@ -66,13 +77,14 @@ class CityscapeDataset:
        shape = self.config["crop_size"]
        while True:
            ln = self.label_files[self.index]
            img_name = self.dataset_dir + 'leftImg8bit/' + self.subset + ln[len(
                self.label_dirname):]
            img_name = os.path.join(
                self.dataset_dir,
                'leftImg8bit/' + self.subset + ln[len(self.label_dirname):])
            img_name = img_name.replace('gtFine_labelTrainIds', 'leftImg8bit')
            label = cv2.imread(ln)
            img = cv2.imread(img_name)
            if img is None:
                print "load img failed:", img_name
                print("load img failed:", img_name)
                self.next_img()
            else:
                break
......@@ -128,5 +140,7 @@ class CityscapeDataset:
            from prefetch_generator import BackgroundGenerator
            batches = BackgroundGenerator(batches, 100)
        except:
            print "You can install 'prefetch_generator' for acceleration of data reading."
            print(
                "You can install 'prefetch_generator' for acceleration of data reading."
            )
        return batches
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
os.environ['FLAGS_fraction_of_gpu_memory_to_use'] = '0.98'
......@@ -126,13 +129,12 @@ exe = fluid.Executor(place)
exe.run(sp)
if args.init_weights_path:
print "load from:", args.init_weights_path
print("load from:", args.init_weights_path)
load_model()
dataset = CityscapeDataset(args.dataset_path, 'train')
if args.parallel:
print "Using ParallelExecutor."
exe_p = fluid.ParallelExecutor(
use_cuda=True, loss_name=loss_mean.name, main_program=tp)
......@@ -149,9 +151,9 @@ for i, imgs, labels, names in batches:
                             'label': labels},
                       fetch_list=[pred, loss_mean])
    if i % 100 == 0:
        print "Model is saved to", args.save_weights_path
        print("Model is saved to", args.save_weights_path)
        save_model()
    print "step %s, loss: %s" % (i, np.mean(retv[1]))
    print("step %s, loss: %s" % (i, np.mean(retv[1])))
print "Training done. Model is saved to", args.save_weights_path
print("Training done. Model is saved to", args.save_weights_path)
save_model()
......@@ -10,3 +10,4 @@ output*
pred
eval_tools
box*
PyramidBox_WiderFace*
......@@ -14,7 +14,7 @@
## Installation
Running the sample code in this directory requires PaddlePaddle Fluid v0.13.0 or above. If the PaddlePaddle in your environment is below this version, please update it following the instructions in the [installation guide](http://www.paddlepaddle.org/docs/develop/documentation/zh/build_and_install/pip_install_cn.html).
Running the sample code in this directory requires PaddlePaddle Fluid v0.13.0 or above. If the PaddlePaddle in your environment is below this version, please update it following the instructions in the installation guide.
## Data preparation
......
......@@ -20,3 +20,4 @@ data/pascalvoc/trainval.txt
log*
*.log
ssd_mobilenet_v1_pascalvoc*