crnn-ctc training: strange awk command and len(img_label_lines) != actual dataset size
Created by: sfraczek
Hi,
I have briefly analysed the code of DataGenerator in ctc_reader.py and have a few minor comments:
At https://github.com/PaddlePaddle/models/blob/f93838a4258c2a197cfa9e14c244b4da7a042a88/fluid/ocr_recognition/ctc_reader.py#L67, `sizes` equals 0 when batch_size equals the total number of images in the dataset. I found this by setting batch_size to the dataset size: no iterations were produced at all (instead of the expected one), because the loop on the next line is never executed.
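To illustrate the arithmetic (a hypothetical sketch — the exact formula in ctc_reader.py may differ, and the grouping factor of 4 is my assumption, not quoted code): nested integer division truncates to 0 once batch_size reaches the dataset size, so a loop over the result runs zero times:

```shell
# Hypothetical sketch of the batch-count arithmetic; the factor of 4
# (samples grouped per shuffled line) is an assumption, not quoted code.
num_samples=100
for batch_size in 25 100; do
  num_lines=$(( num_samples / 4 ))      # lines after grouping
  sizes=$(( num_lines / batch_size ))   # loop iteration count
  echo "batch_size=$batch_size -> sizes=$sizes"
done
```

With batch_size=25 this yields sizes=1, but with batch_size=100 (the full dataset) sizes truncates to 0 and the reader produces no batches.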
This led me to the next issue:
At https://github.com/PaddlePaddle/models/blob/f93838a4258c2a197cfa9e14c244b4da7a042a88/fluid/ocr_recognition/ctc_reader.py#L51 I have been testing what the command `sed 1,$((1 + RANDOM % 100))d` is supposed to do.
I kept only 5 lines in the img_label_list file and then ran `cat /root/.cache/paddle/dataset/ctc_data/data/train.list | awk '{printf("%04d%.4f %s\n", $1, rand(), $0)}' | sort | sed 1,$((RANDOM % 5))d`, and it turns out it outputs at most 4 lines — one line is always missing. When I tried `sed 1,$((RANDOM % 6))d` instead of `sed 1,$((RANDOM % 5))d`, I still got 4 or fewer lines. The conclusion is that at least one line is always absent from the output, which makes sense: the sed range `1,N` deletes lines 1 through N inclusive, so any N ≥ 1 removes at least the first line. I wonder whether that was intended? Does this lead to the first problem above? Also, if I increase the modulus to 100, the probability of printing an empty list increases drastically (I guess up to 95%). Is this intended behavior?
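The always-missing line can be reproduced deterministically (the file path and contents below are made up for illustration): `sed "1,Nd"` deletes lines 1 through N inclusive, so for every N ≥ 1 at least the first line is removed:

```shell
# A 5-line sample file (contents are illustrative only).
printf '%s\n' one two three four five > /tmp/sample_list.txt

# For each N, "1,Nd" deletes lines 1..N, so at most 4 lines ever survive.
for N in 1 2 3 4 5; do
  remaining=$(sed "1,${N}d" /tmp/sample_list.txt | wc -l)
  echo "N=$N -> $remaining lines remain"
done
```

And since `RANDOM % 100` almost always exceeds the length of a short file, the whole list is usually deleted, matching the near-empty output described above.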
I also found that the awk command at https://github.com/PaddlePaddle/models/blob/f93838a4258c2a197cfa9e14c244b4da7a042a88/fluid/ocr_recognition/ctc_reader.py#L43 applies no change to the output of the previous command, since the file contains only 4 columns.
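For illustration (the awk program at that line is not quoted here, so this is a hypothetical stand-in): an awk that reprints the first four fields is an identity transform on input that already has exactly four columns:

```shell
# Hypothetical 4-column label line and a field-reprinting awk; with only
# 4 fields present, the output is identical to the input.
printf '48 123 img_0001.jpg 7435\n' | awk '{print $1, $2, $3, $4}'
```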
I couldn't analyse this part of the code any further because I had to return to my duties. I hope you find this useful. :)