- 29 December 2020, 1 commit
Committed by liuyuhui
* [Kunlun] PR1: Support one Kunlun card training in parallel executor (#29337)
* [Kunlun] PR2: Support MultiDevicePass and BKCL in parallel executor (#29574)
* [Kunlun] Bug fix of PR2: Support MultiDevicePass and BKCL in parallel executor (#29926)
* Add bkcl.so in whl for Kunlun (#29947)
* [Kunlun] Bug fix of PR2: Support MultiDevicePass and BKCL in parallel executor (#29961)

Co-authored-by: QingshuChen <qingshu.chen714@gmail.com>
- 01 June 2020, 1 commit
Committed by Huihuang Zheng
The PR https://github.com/PaddlePaddle/Paddle/pull/24651 seems to have introduced a new random failure in the unit test test_parallel_executor_seresnext_base_cpu. The reason is that the smaller batch size makes the network's optimization results random. I distinguished the CPU and GPU batch sizes to fix the unittest.
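A minimal sketch of the idea, not the PR's actual code: pick the batch size per device so the CPU variant of the test keeps a batch large enough to stay deterministic. The helper name and the concrete numbers below are assumptions for illustration.

```python
# Hypothetical sketch only: choose the batch size by device. The values and the
# helper name are illustrative; the real unittest defines its own constants.
import paddle.fluid as fluid


def seresnext_batch_size(use_cuda):
    # GPU builds can use the smaller batch; the CPU run keeps a larger one to
    # avoid the batch-size-dependent randomness described in the commit message.
    if use_cuda and fluid.is_compiled_with_cuda():
        return 8
    return 12
```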
- 22 August 2019, 1 commit
Committed by chengduo
* Increase the test_parallel_executor_seresnext time limit. test=develop
* Split test_parallel_executor_seresnext. test=develop
* Temporarily disable the reduce_and_allreduce test because of its random failure. test=develop
* Split the GPU and CPU tests. test=develop
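A sketch of one way such a GPU/CPU split can look, assuming a plain unittest layout; the class and method names below are hypothetical and not the actual Paddle test files.

```python
# Hypothetical sketch only: separate CPU and GPU cases so the GPU half is
# skipped automatically on CPU-only builds instead of sharing one long test.
import unittest

import paddle.fluid as fluid


class TestSeResNeXtCPU(unittest.TestCase):
    def test_seresnext_cpu(self):
        # Build and run the model on fluid.CPUPlace() here.
        pass


@unittest.skipUnless(fluid.is_compiled_with_cuda(),
                     "GPU test skipped on CPU-only builds")
class TestSeResNeXtGPU(unittest.TestCase):
    def test_seresnext_gpu(self):
        # Build and run the model on fluid.CUDAPlace(0) here.
        pass


if __name__ == '__main__':
    unittest.main()
```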