1. 27 Oct, 2020 5 commits
  2. 26 Oct, 2020 9 commits
  3. 23 Oct, 2020 1 commit
    •
      Fix test_parallel_executor_test_while_train Random Failure by Decreasing GPU Usage (#28213) · a1e7fd4a
      Committed by Huihuang Zheng
      Recently, test_parallel_executor_test_while_train randomly failed on CI. All CI logs showed that either NCCL initialization or cusolver initialization failed. I found online that those failures are usually caused by GPU memory shortage. Those calls invoke CUDA APIs directly, so the problem shouldn't be in the allocator; something in PaddlePaddle may be increasing GPU usage.
      
      However, I ran this test 1000 times on both my machine and the CI machine, and neither of them could reproduce the random failure. Maybe something related to the environment happens only in the test env.
      
      To verify my assumption that something in PaddlePaddle increases GPU usage, and also to fix this CI failure, I decreased the batch_size to see whether the random failure disappears in the test env.
      a1e7fd4a
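      The fix works because activation memory on the GPU scales roughly linearly with batch size. A minimal arithmetic sketch (not from the PR; the helper name and shapes are hypothetical) of that relationship:

      ```python
      def feature_map_bytes(batch_size, channels, height, width, dtype_bytes=4):
          """Rough activation-memory footprint of one float32 feature map.

          Illustrative only: real usage also includes weights, workspace,
          and allocator overhead, but the batch-dependent part is linear.
          """
          return batch_size * channels * height * width * dtype_bytes

      # Halving batch_size roughly halves the batch-dependent GPU memory.
      large = feature_map_bytes(32, 64, 56, 56)
      small = feature_map_bytes(16, 64, 56, 56)
      assert large == 2 * small
      ```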
  4. 22 Oct, 2020 5 commits
  5. 21 Oct, 2020 9 commits
  6. 20 Oct, 2020 10 commits
  7. 19 Oct, 2020 1 commit
    •
      xpu adam op (#28031) · 6f0c3d1f
      Committed by yinhaofeng
      * lookup_table_xpu op report errors;test=kunlun
      
      * add adam xpu op;test=kunlun
      
      * reset lookup
      
      * change adam wrong;test=kunlun
      6f0c3d1f
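      This commit ports the Adam optimizer op to Kunlun XPU. For reference, a generic NumPy sketch of the standard Adam update rule (Kingma & Ba) that such a kernel implements; this is an assumption-level illustration, not Paddle's actual XPU kernel code:

      ```python
      import numpy as np

      def adam_step(param, grad, m, v, t, lr=1e-3,
                    beta1=0.9, beta2=0.999, eps=1e-8):
          """One Adam update step with bias correction.

          m, v: first/second moment estimates; t: 1-based step count.
          Returns the updated (param, m, v).
          """
          m = beta1 * m + (1 - beta1) * grad          # EMA of gradients
          v = beta2 * v + (1 - beta2) * grad * grad   # EMA of squared gradients
          m_hat = m / (1 - beta1 ** t)                # bias correction
          v_hat = v / (1 - beta2 ** t)
          param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
          return param, m, v
      ```

      A positive gradient should move the parameter down by roughly lr on the first step, since the bias-corrected ratio m_hat / sqrt(v_hat) starts near the gradient's sign.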