Created by: sidgoyal78
As mentioned in https://github.com/PaddlePaddle/Paddle/issues/11057, we get non-deterministic outputs for a couple of examples.
I tried debugging the sentiment analysis model: https://github.com/sidgoyal78/Paddle/blob/a801e7bcb2f4f1e131f6a640ecd84a03d21588ff/test_sa_conv.py
by saving the weights after each iteration and printing the inputs and outputs of individual CUDA kernels. Here are my findings so far:
- The sparse SGD kernel uses `cudaAtomicAdd`, so the float values are accumulated in a non-deterministic order.
- The `im2colOCF` kernel (used by the sequence convolution layer) is also implemented with `cudaAtomicAdd`, and is hence the most probable cause of the inconsistent results.
To get consistent results, I had to change the launch configuration of both kernels to use only a single thread.
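Roughly, the workaround looks like this (a sketch with a hypothetical kernel name; the actual kernels and grid/block computations in Paddle differ):

```cuda
// Hypothetical sketch -- not Paddle's actual code.
// Before: one thread per element; the atomicAdd order varies across runs.
dim3 grid((n + 255) / 256), block(256);
someKernel<<<grid, block>>>(data, n);

// After: a single thread serializes every atomicAdd, fixing the
// accumulation order. Deterministic, but slow -- and also incorrect
// here, because the kernel body still assumes many threads each
// handle their own slice of the work.
someKernel<<<1, 1>>>(data, n);
```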
So, in summary, the issue is with CUDA atomics. Greg pointed this out in a recent discussion, and there is a related comment here: https://github.com/tensorflow/tensorflow/issues/3103#issuecomment-286302405
With these changes, we get consistent results but at the cost of performance.
Moreover, if we change the optimizer to Adam in the above example, the results are again inconsistent, so the Adam optimizer probably has a similar problem.
Note: with only one thread launched, the results are not correct, since only part of the actual work gets done. This PR is meant to highlight the consistency problem; to be clear, launching one thread does produce incorrect outputs, as the failing unit tests show.
@dzhwinter @chengduoZH : What do you guys think?