Summary of debugging WarpCTCLayer
Created by: pkuyym
I often encounter `inf` cost when training Mandarin data using DeepSpeech2 (GPU version). It seems that `WarpCTCLayer` may have potential numerical problems, so @qingqing01 and I have been trying to figure out what leads to `inf`. Since `inf` doesn't appear regularly, we first save the two inputs of `WarpCTCLayer` using `printValueString`, then parse and load the saved context in the debugging phase. However, loading the exception context only increases the probability of `inf`, which means that regular reproduction is not assured.
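For reference, here is a minimal sketch of this save-and-replay approach. `dumpBuffer` and `loadBuffer` are hypothetical helpers for illustration only; the actual dump uses `printValueString`, whose text format is not shown here.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Write a raw float buffer to disk so the failing inputs can be replayed.
bool dumpBuffer(const char* path, const float* data, size_t n) {
  std::FILE* f = std::fopen(path, "wb");
  if (f == nullptr) return false;
  size_t written = std::fwrite(data, sizeof(float), n, f);
  std::fclose(f);
  return written == n;
}

// Reload a previously dumped buffer in the standalone debugging run.
std::vector<float> loadBuffer(const char* path, size_t n) {
  std::vector<float> data(n, 0.0f);
  std::FILE* f = std::fopen(path, "rb");
  if (f != nullptr) {
    std::fread(data.data(), sizeof(float), n, f);
    std::fclose(f);
  }
  return data;
}
```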
For `inf`, we find two suspicious snippets:
`seq2batchPadding`
Please go to `seq2batchPadding` to see details. We detected `-inf` in `batchValue_` just before calling `hl_warpctc_compute_loss`. Since `seq2batchPadding` is the only function in which `batchValue_` is modified other than `resizeOrCreate`, we consider `seq2batchPadding` a suspicious cause.
status: Fixed by #3105
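For anyone reproducing this, here is a minimal sketch of the kind of finiteness check we ran on `batchValue_` before the loss call. `checkFinite` and the names in the usage comment are illustrative, not the actual layer code.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>

// Returns false (and logs the first offender) if the buffer contains
// inf, -inf, or NaN. For GPU buffers, copy to host memory first.
bool checkFinite(const float* data, size_t n, const char* name) {
  for (size_t i = 0; i < n; ++i) {
    if (!std::isfinite(data[i])) {
      std::fprintf(stderr, "%s[%zu] = %g is not finite\n", name, i, data[i]);
      return false;
    }
  }
  return true;
}

// Usage just before the loss computation (names are hypothetical):
//   checkFinite(batchValueData, batchValueSize, "batchValue_");
//   hl_warpctc_compute_loss(...);
```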
`compute_probs_kernel`
We also dug into the warp-ctc kernel and found that `compute_probs_kernel` can produce `0` after the exponent operation (the exponent snippet is at `ctc_helper::exponential`; the exponential of a large negative activation underflows to zero in single precision). This leads to `0` entries in `probs_`. Unfortunately, `probs_` is then passed into `compute_alpha_kernel`, and the illegal operation `log(0)` is detected at line 167 and line 191 in `compute_alpha_kernel`. We also consider this a suspicious cause.
status: Fixed by #5
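The failure mode can be demonstrated in isolation. The following self-contained C++ program (not the warp-ctc source) shows single-precision `exp` underflowing to zero for a large negative input and the subsequent `log(0)` yielding `-inf`; the epsilon clamp at the end is one generic guard, not necessarily the fix applied in #5.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
  float activation = -110.0f;           // a large negative log-space value
  float prob = std::exp(activation);    // underflows to 0.0f in float
  std::printf("exp(%g) = %g\n", activation, prob);

  float alpha = std::log(prob);         // log(0) -> -inf, poisoning alpha
  std::printf("log(%g) = %g\n", prob, alpha);

  // One common guard (illustrative): clamp the probability away from
  // zero before taking the log.
  const float kEps = 1e-30f;
  float safeAlpha = std::log(std::max(prob, kEps));
  std::printf("log(max(prob, eps)) = %g\n", safeAlpha);
  return 0;
}
```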
Besides, we also encountered a validation error; details are listed below:
F0726 17:18:41.516580 15765 hl_warpctc_wrap.cc:131] Check failed: CTC_STATUS_SUCCESS == dynload::compute_ctc_loss(batchInput, batchGrad, cpuLabels, cpuLabelLengths, cpuInputLengths, numClasses, numSequences, cpuCosts, workspace, *options) (0 vs. 3) warp-ctc [version 2] Error: execution failed
This fatal exception is thrown here. The reason hasn't been figured out yet.
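The `(0 vs. 3)` in the check means `compute_ctc_loss` returned status code 3, which is `CTC_STATUS_EXECUTION_FAILED` in warp-ctc's `ctc.h`. A small sketch, assuming warp-ctc's public API, of logging a readable message for the status before the fatal check fires:

```cpp
#include <cstdio>
#include <ctc.h>  // public header from baidu-research/warp-ctc

// Translate a warp-ctc status code into a readable message;
// ctcGetStatusString is part of the public warp-ctc API.
void reportCtcStatus(ctcStatus_t status) {
  if (status != CTC_STATUS_SUCCESS) {
    std::fprintf(stderr, "warp-ctc failed: %s (code %d)\n",
                 ctcGetStatusString(status), static_cast<int>(status));
  }
}
```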