- 31 Mar 2023, 1 commit
  Committed by Ainavo
- 08 Dec 2022, 1 commit
  Committed by 姜永久
  * rm dygraph_to_static eager guard tests part2 minst2ptb_lm
- 29 Nov 2022, 1 commit
  Committed by Nyakku Shigure
- 23 Oct 2022, 1 commit
  Committed by Nyakku Shigure
  * update config
  * re-blacken python code
  * temporarily disable date and diff_py_file
  * skip a format
- 12 Oct 2022, 1 commit
  Committed by Nyakku Shigure
  * [CodeStyle][F401] remove unused imports in unittests/dygraph_to_static
  * [CodeStyle][F401] remove unused imports in unittests/ir
  * add noqa after required imports
  (a sketch of the noqa pattern follows this entry)
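The "noqa after required imports" item refers to keeping an import that flake8 would otherwise flag as unused (rule F401). A minimal, hypothetical sketch of the pattern; the module names are illustrative, not the ones touched by this commit:

```python
import numpy as np  # actually used below, so no marker is needed

# An import kept only for its import-time side effects (e.g. registering
# custom ops) would be retained with flake8's F401 warning silenced:
# import some_custom_ops  # noqa: F401

print(np.arange(3))
```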
- 17 Aug 2022, 1 commit
  Committed by Nyakku Shigure
  [CodeStyle][NPU] use np.testing.assert_allclose instead of self.assertTrue(np.allclose(...)) (part 1) (#44988)
  * autofix
  * try resolve precision issues
  * revert some changes
  * clean some `err_msg`
  * 0.0001 -> 1e-4
  * update commented assert code
  * try to fix some shape errors
  * `numpy` -> `np`
  * empty commit, trigger kunlun ci, test=kunlun
  * empty commit, retrigger kunlun ci, test=kunlun
  * empty commit, trigger kunlun ci, try fix npu memcpy_h2d, test=kunlun
  * try fix npu import error, test=kunlun
  (a sketch of the assertion change follows this entry)
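The assertion rewrite in #44988 swaps boolean checks for numpy's testing helper, which reports the offending values when a comparison fails. A minimal sketch of the before/after pattern; the arrays and tolerances here are illustrative:

```python
import numpy as np

actual = np.array([1.0, 2.00005])
expected = np.array([1.0, 2.0])

# Old pattern: on failure the test only reports "False is not true".
# self.assertTrue(np.allclose(actual, expected, atol=1e-4))

# New pattern: np.testing.assert_allclose prints both arrays and the
# mismatch ratio on failure, which makes precision issues easier to debug.
np.testing.assert_allclose(actual, expected, rtol=1e-5, atol=1e-4)
```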
- 05 Jun 2022, 1 commit
  Committed by Sing_chan
  * use yapf to format all python file
  * yapf exclude two unittests file for they rely on writing and reading file, and format will break them
  * disable diff_py_file because too many diff files cause command following failed
  (a sketch of yapf formatting follows this entry)
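The yapf pass described above reformatted the whole Python tree from the command line; as a rough illustration, yapf can also be driven from Python. The snippet and style name below are assumptions for illustration, not the repository's actual configuration:

```python
# Illustrative use of yapf's Python API; the style and snippet are made up.
from yapf.yapflib.yapf_api import FormatCode

source = "def add ( a,b ):\n    return a+b\n"
formatted, changed = FormatCode(source, style_config="pep8")
print(changed)    # True when yapf rewrote the source
print(formatted)  # whitespace normalized, e.g. "def add(a, b): ..."
```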
- 30 Mar 2022, 1 commit
  Committed by 0x45f
  * Switch some dy2st UT to eager mode
  * Add UT
- 17 Jan 2022, 1 commit
  Committed by 0x45f
  * close enable_inplace PASS for PE, and test dy2st pure fp16 training stability
  * add some comment
  * enlarge atol
- 20 Dec 2021, 1 commit
  Committed by 0x45f
- 02 Dec 2021, 1 commit
  Committed by 0x45f
  * fix test_mnist_pure_fp16
  * change batch_id
- 24 Nov 2021, 1 commit
  Committed by 0x45f
  * run dy2stat pure fp16 in Linear model
  * no use self._pure_fp16_inputs
  * add test and fix Adam error in dy2stat pure fp16 training
  * use paddle.optimizer.Adam
  * run test in gpu
  * change test time for CI
  * enlarge atol for test_resnet_pure_fp16
  * refine code and enlarge atol
  * make custom_white_list and custom_black_list take effect for AMP and pure fp16
  * check tracer is not None
  * use default atol
  * change filter_size
  * change atol and add some NOTE
- 05 Aug 2021, 1 commit
  Committed by Aurelius84
  * Support Mixed Precision training in @to_static
  * fix block.vars logic
  * fix GPU training loss diff
  * remove unused code
  (a sketch of AMP training under @to_static follows this entry)
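As a rough illustration of the mixed-precision-plus-@to_static training this commit enables, the sketch below combines paddle.jit.to_static with paddle.amp.auto_cast and a GradScaler. It assumes Paddle 2.x APIs; the layer sizes and loss-scaling value are made up rather than taken from the commit:

```python
import paddle


class SimpleNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.linear = paddle.nn.Linear(16, 4)

    @paddle.jit.to_static
    def forward(self, x):
        return self.linear(x)


net = SimpleNet()
opt = paddle.optimizer.Adam(parameters=net.parameters())
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

x = paddle.randn([8, 16], dtype="float32")
with paddle.amp.auto_cast():     # AMP (O1) region: selected ops run in fp16
    loss = net(x).mean()
scaled = scaler.scale(loss)      # scale the loss to avoid fp16 gradient underflow
scaled.backward()
scaler.minimize(opt, scaled)     # unscale gradients and apply the update
opt.clear_grad()
```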