CTCN model: loss becomes NaN after training for a while
Created by: linrjing
I am using the CTCN model code; the input data is my own dataset.
-
Test 1: img_size=512, concept_size=512, batch_size=16. The first epoch trains normally; NaN appears partway through the second epoch.

[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 0 Loss = 8.644265174865723, loc_loss = 5.305370330810547, cls_loss = 3.338895320892334
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 10 Loss = 6.298323631286621, loc_loss = 3.9766969680786133, cls_loss = 2.3216264247894287
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 20 Loss = 7.81887149810791, loc_loss = 4.746278762817383, cls_loss = 3.0725927352905273
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 30 Loss = 9.531373977661133, loc_loss = 5.976119041442871, cls_loss = 3.555255174636841
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 40 Loss = 8.562569618225098, loc_loss = 4.903314590454102, cls_loss = 3.659255027770996
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 50 Loss = 7.116206169128418, loc_loss = 3.607116222381592, cls_loss = 3.509089946746826
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 60 Loss = 8.900976181030273, loc_loss = 4.741877555847168, cls_loss = 4.1590986251831055
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 70 Loss = 7.006552219390869, loc_loss = 4.304408550262451, cls_loss = 2.702143430709839
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 80 Loss = 7.706241607666016, loc_loss = 4.250545978546143, cls_loss = 3.455695867538452
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 90 Loss = 8.307282447814941, loc_loss = 4.759175777435303, cls_loss = 3.5481057167053223
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 100 Loss = 7.775359630584717, loc_loss = 4.987478733062744, cls_loss = 2.7878808975219727
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 110 Loss = 8.040206909179688, loc_loss = 4.403718948364258, cls_loss = 3.6364874839782715
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 120 Loss = 9.691923141479492, loc_loss = 4.130779266357422, cls_loss = 5.561143398284912
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 130 Loss = 7.779775619506836, loc_loss = 4.071199417114258, cls_loss = 3.708575963973999
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 140 Loss = 7.102745532989502, loc_loss = 4.228488922119141, cls_loss = 2.8742568492889404
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 150 Loss = 6.970609188079834, loc_loss = 3.9636733531951904, cls_loss = 3.006936550140381
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 160 Loss = nan, loc_loss = 21.078128814697266, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 170 Loss = nan, loc_loss = 20.18567657470703, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 180 Loss = nan, loc_loss = 20.06766700744629, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 190 Loss = nan, loc_loss = 20.610614776611328, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 200 Loss = nan, loc_loss = 19.23656463623047, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 210 Loss = nan, loc_loss = 19.809160232543945, cls_loss = nan
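In this run loc_loss jumps from ~4 to ~20 at the same iteration where cls_loss first becomes nan, which looks like the loss/gradients blowing up on one batch rather than drifting slowly. As a first experiment I have been considering global-norm gradient clipping. The snippet below is only a sketch under the assumption that the CTCN sample runs on the PaddlePaddle 1.x fluid API; the clip threshold (5.0) and the optimizer settings are illustrative assumptions, not values from the repository.

```python
# Hedged sketch: global-norm gradient clipping, assuming the PaddlePaddle 1.x
# fluid API. clip_norm=5.0, the learning rate, and the Momentum optimizer are
# placeholder assumptions for illustration.
import paddle.fluid as fluid

def build_optimizer(loss, learning_rate=0.001):
    # Register the clip before calling minimize(); gradients whose global
    # norm exceeds clip_norm are rescaled, so a single bad batch cannot
    # drive loc_loss/cls_loss to inf/nan.
    fluid.clip.set_gradient_clip(
        fluid.clip.GradientClipByGlobalNorm(clip_norm=5.0))
    optimizer = fluid.optimizer.Momentum(
        learning_rate=learning_rate, momentum=0.9)
    optimizer.minimize(loss)
    return optimizer
```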
-
Test 2: img_size=512, concept_size=256, batch_size=16. Again the first epoch trains normally and NaN appears partway through the second epoch.

[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 0 Loss = 8.087115287780762, loc_loss = 4.917365074157715, cls_loss = 3.169750213623047
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 10 Loss = 7.561893463134766, loc_loss = 4.704628944396973, cls_loss = 2.857264280319214
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 20 Loss = 7.202031135559082, loc_loss = 4.352816581726074, cls_loss = 2.8492140769958496
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 30 Loss = 7.938170433044434, loc_loss = 5.153167247772217, cls_loss = 2.7850029468536377
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 40 Loss = 8.291640281677246, loc_loss = 5.546207904815674, cls_loss = 2.7454323768615723
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 50 Loss = 7.575397968292236, loc_loss = 4.737273216247559, cls_loss = 2.838124990463257
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 60 Loss = 7.889821529388428, loc_loss = 4.824648857116699, cls_loss = 3.0651731491088867
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 70 Loss = 8.803262710571289, loc_loss = 4.728842735290527, cls_loss = 4.074419975280762
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 80 Loss = 9.253735542297363, loc_loss = 4.693952560424805, cls_loss = 4.559782981872559
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 90 Loss = 8.738704681396484, loc_loss = 4.8759965896606445, cls_loss = 3.862708568572998
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 100 Loss = 7.218425750732422, loc_loss = 4.249882698059082, cls_loss = 2.96854305267334
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 110 Loss = 7.345929145812988, loc_loss = 4.364291191101074, cls_loss = 2.981637716293335
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 120 Loss = 7.921400547027588, loc_loss = 4.798431873321533, cls_loss = 3.122968912124634
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 130 Loss = 8.251079559326172, loc_loss = 4.602048873901367, cls_loss = 3.6490306854248047
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 140 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 150 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 160 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 170 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 180 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 190 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 200 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 210 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 220 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 230 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 240 Loss = nan, loc_loss = nan, cls_loss = nan
[INFO: metrics_util.py: 282]: [TRAIN] Epoch 1, iter 250 Loss = nan, loc_loss = nan, cls_loss = nan
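Since the data is my own rather than the reference dataset, I have also been checking whether some samples contain NaN/Inf feature values or degenerate ground-truth segments, which is a common cause of this failure pattern. The sketch below is only a NumPy sanity check; the feature/label layout it assumes (a feature array plus [start, end] segments normalized to [0, 1]) describes my dataset, not the CTCN reader's required format.

```python
# Hedged sketch: scan custom samples for NaN/Inf features and bad segments.
# The (features, segments) layout is an assumption about my own data files.
import numpy as np

def check_sample(features, segments, name=""):
    """Return False if the sample has non-finite features or bad segments."""
    ok = True
    if not np.isfinite(features).all():
        print(f"{name}: features contain NaN/Inf")
        ok = False
    segments = np.asarray(segments, dtype=np.float32)
    if segments.size:
        starts, ends = segments[:, 0], segments[:, 1]
        if (ends <= starts).any():
            print(f"{name}: segment with end <= start")
            ok = False
        if (segments < 0).any() or (segments > 1).any():
            print(f"{name}: segment outside [0, 1]")
            ok = False
    return ok

# Usage sketch: iterate over whatever reader feeds the trainer and stop at
# the first bad sample (my_reader is a placeholder for my data pipeline).
# for idx, (feat, segs) in enumerate(my_reader()):
#     if not check_sample(feat, segs, name=f"sample {idx}"):
#         break
```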