- 19 Aug 2022, 1 commit

Committed by Nyakku Shigure:
[CodeStyle] use np.testing.assert_allclose instead of self.assertTrue(np.allclose(...)) (part 2) (#45213)

* autofix (get ci log)
* retrigger ci
* fix test_gather_nd_op, wrong expected in dygraph
* fix test_activation_op, unpack static graph result
* fix test_auc_op, unpack static graph result
* fix test_bce_loss, unpack static graph result
* fix test_bce_with_logits_loss, unpack static graph result
* fix test_cond, unpack static graph result
* fix test_dygraph_weight_norm, wrong numpy reference when `axis=None`
* fix test_einsum, wrong matmul inputs
* fix test_elementwise_heaviside_op, unpack static graph result
* fix test_frac_api, unpack static graph result
* skip test_group_norm_op_v2, probably the wrong numpy reference
* fix test_imperative_double_grad, wrong subscript
* skip test_imperative_tensor_clear_gradient, ???
* skip test_layer_norm_op, probably the wrong numpy reference
* fix test_math_op_patch, unpack static graph results
* fix test_masked_select_op, unpack static graph results
* fix test_mse_loss, unpack static graph results
* fix test_multi_label_soft_margin_loss, unpack static graph results
* fix test_multi_dot_op, unpack static graph results
* fix test_nll_loss, unpack static graph results
* fix test_normalization_wrapper, unpack static graph results
* fix test_pass_builder, unpack static graph results
* fix test_prelu_op, possibly an extra comma
* fix test_psroi_pool_op, unpack static graph results
* fix test_queue, unpack static graph results
* fix test_reorder_lod_tensor, compare an item with a list
* fix test_rrelu_op, unpack static graph results
* fix test_searchsorted_op, unpack static graph results
* fix test_sigmoid_focal_loss, unpack static graph results
* fix test_smooth_l1_loss, unpack static graph results
* fix test_soft_margin_loss, unpack static graph results
* fix test_softmax2d, unpack static graph results
* fix test_square_error_cost, unpack static graph results
* fix test_tril_indices_op, unpack static graph results
* fix test_unsqueeze_op, mismatch numpy reference (axis)
* skip test_layers, `static_rlt` is missing an axis
* fix test_mnist, unpack PredictorTools result (also a list)
* fix test_build_strategy, unpack PredictorTools result
* fix test_mobile_net, unpack PredictorTools result
* fix test_resnet_v2, unpack PredictorTools result
* revert some changes
  * revert test_layers
  * revert test_group_norm_op_v2
  * revert test_layer_norm_op
  * revert test_imperative_tensor_clear_gradient
* fix test_normal, use flatten instead of reshape (PR-CI-Windows-OPENBLAS)
* empty commit, trigger CI
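A minimal sketch of the two recurring fixes in this commit: switching from `self.assertTrue(np.allclose(...))` to `np.testing.assert_allclose`, and unpacking the list that `Executor.run(..., fetch_list=...)` returns before comparing. The test class and tensor values below are invented for illustration, not taken from the PR.

```python
import unittest
import numpy as np


class TestAllcloseMigration(unittest.TestCase):
    def test_compare(self):
        expected = np.array([1.0, 2.0, 3.0])
        # Stand-in for what exe.run(..., fetch_list=[out]) returns:
        # a list with one array per fetched variable.
        fetches = [np.array([1.0, 2.0, 3.0000001])]

        # Old style: on failure this only reports "False is not true".
        self.assertTrue(np.allclose(fetches[0], expected))

        # New style: unpack the static-graph result first, then use
        # assert_allclose, which prints the mismatching elements on failure.
        (result,) = fetches
        np.testing.assert_allclose(result, expected, rtol=1e-05)


if __name__ == '__main__':
    unittest.main()
```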
- 17 Aug 2022, 1 commit

Committed by Nyakku Shigure:
[CodeStyle][NPU] use np.testing.assert_allclose instead of self.assertTrue(np.allclose(...)) (part 1) (#44988)

* autofix
* try resolve precision issues
* revert some changes
* clean some `err_msg`
* 0.0001 -> 1e-4
* update commented assert code
* try to fix some shape errors
* `numpy` -> `np`
* empty commit, trigger kunlun ci, test=kunlun
* empty commit, retrigger kunlun ci, test=kunlun
* empty commit, trigger kunlun ci, try fix npu memcpy_h2d, test=kunlun
* try fix npu import error, test=kunlun
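For illustration only, here is how a tolerance and message might carry over in this kind of migration (the `0.0001 -> 1e-4` and `err_msg` items above); the arrays and message text are hypothetical, not from the PR.

```python
import numpy as np

actual = np.array([0.99995, 2.00003])
expected = np.array([1.0, 2.0])

# Before: tolerance spelled 0.0001 and a hand-rolled assertion message.
assert np.allclose(actual, expected, atol=0.0001), 'outputs differ'

# After: the same check via assert_allclose; the tolerance is written 1e-4
# and err_msg replaces the manual assertion message.
np.testing.assert_allclose(actual, expected, atol=1e-4, err_msg='outputs differ')
```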
- 06 Jul 2022, 1 commit

Committed by xiongkun:
* add support for control flow block analysis
* move FunctionNameLivenessAnalysis into utils
* pass test_ifelse.py
* remove duplicate data_layer_not_check
* pass the test_ifelse.py
* fix unittest error
* fix all CI errors in the first version
* temporarily disable CreateVariableTransformer
* fix CI errors
* fix function name liveness analysis bugs
* modify def cond
* fix
* fix CI error - v2
* fix by code review
* change return_name_ids -> return_name
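Not from the PR itself, just a hedged sketch of the kind of function this dynamic-to-static control flow analysis deals with: a Python `if` over a tensor condition that `paddle.jit.to_static` rewrites into a conditional block, where variables used in both branches are what the liveness analysis has to track. The function name and values are invented.

```python
import paddle


@paddle.jit.to_static
def branchy(x):
    # Under dy2static conversion this Python if/else over a tensor
    # condition is analyzed and rewritten into a cond op; `y` lives
    # across both branches and must be returned from each of them.
    if paddle.mean(x) > 0:
        y = x + 1
    else:
        y = x - 1
    return y


x = paddle.to_tensor([0.5, -0.2, 1.0])
print(branchy(x))
```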
- 05 Jul 2022, 1 commit

Committed by zhiboniu:
* change fluid.mean to paddle.mean
* revert some old code examples
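A tiny before/after sketch of the rename (hedged; the tensor is made up):

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])

# before (1.x-style fluid API):
#   m = fluid.layers.mean(x)
# after (2.x API):
m = paddle.mean(x)
print(m.numpy())  # 2.5
```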
- 05 Jun 2022, 1 commit

Committed by Sing_chan:
* use yapf to format all Python files
* exclude two unit test files from yapf, because they rely on writing and reading files and formatting would break them
* disable diff_py_file, because too many diff files caused the subsequent command to fail
- 03 May 2022, 1 commit

Committed by Huihuang Zheng:
This PR hotfixed `test_cond.py` on CUDA 11.2. The cause of the bug is that the `fill_constant` op returns a wrong value in the modified test case `test_extremely_simple_net_with_op_in_condition`. SWEs can add `layers.Print(a)` and `layers.Print(b)` to the test case to reproduce it; they will see `fill_constant` returning a tiny value on the order of `e-50` instead of `1.23` and `1.25`. This PR hotfixes the bug by comparing against the fetched value of `b` instead of the literal number, which still makes sure the `cond` logic is right. **However, this PR didn't fix `fill_constant`.** We leave it to the SWEs working in this area to find and fix the op bug.
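A hedged sketch (not the actual test code) of the workaround described above: fetch `b` alongside the cond output and compare against the fetched value rather than the literal 1.25, so the check no longer depends on `fill_constant` producing the exact constant.

```python
import numpy as np
import paddle
import paddle.fluid as fluid

paddle.enable_static()

main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program, startup_program):
    a = fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.23)
    b = fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.25)
    # an op inside the condition: the comparison itself
    out = fluid.layers.cond(a > b, lambda: a, lambda: b)

exe = fluid.Executor(fluid.CPUPlace())
out_val, b_val = exe.run(main_program, fetch_list=[out, b])

# Compare against the fetched value of b instead of hard-coding 1.25;
# the cond logic is still verified even if fill_constant misbehaves.
np.testing.assert_allclose(out_val, b_val)
```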
- 08 May 2021, 1 commit

Committed by Huihuang Zheng:
Remove the NumPy deprecation warning, since `np.bool` is an alias of `bool`. The warning reported by the test:

```
2021-04-30 15:29:32 /workspace/Paddle/build/python/paddle/fluid/framework.py:689: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
2021-04-30 15:29:32 Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
2021-04-30 15:29:32   elif dtype == np.bool:
2021-04-30 15:29:32 /workspace/Paddle/build/python/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
2021-04-30 15:29:32   return (isinstance(seq, collections.Sequence) and
2021-04-30 15:29:32 /workspace/Paddle/build/python/paddle/fluid/tests/unittests/test_cond.py:99: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
```
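For context, a small stand-alone illustration (plain NumPy, not Paddle code) of the replacement the warning asks for:

```python
import numpy as np

dtype = np.array([True, False]).dtype

# Deprecated spelling (removed entirely in NumPy >= 1.24):
#   if dtype == np.bool: ...
# np.bool was only an alias for the builtin, so the builtin behaves the same:
assert dtype == bool

# np.bool_ is the NumPy scalar type and is only needed when that type
# specifically is wanted; it is not the same object as the builtin bool.
assert np.bool_ is not bool
```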
- 16 Jun 2020, 1 commit

Committed by Huihuang Zheng:
As the title
- 20 May 2020, 1 commit

Committed by Huihuang Zheng:
In the past, test_cond would fail with about 2% probability and the failure was easy to reproduce. Now I have re-run it 300 times and no failure occurred. If the failure probability were still 2%, the chance of 300 consecutive passes would be (1 - 0.02) ^ 300, roughly 0.002, so we can say the random failure has disappeared. Maybe someone fixed some bugs in PE (ParallelExecutor).
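The estimate above, spelled out (simple arithmetic, not project code):

```python
# If a single run fails with probability 0.02, the chance that 300 consecutive
# runs all pass while the flake is still present is (1 - 0.02) ** 300.
p_all_pass = (1 - 0.02) ** 300
print(f"{p_all_pass:.5f}")  # ~0.00233, i.e. roughly 0.2%
```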
- 12 Apr 2020, 1 commit

Committed by Huihuang Zheng:
This PR enhances the error messages of several APIs/OPs:

* ParallelExecutor (Python && C++)
* Executor (Python && C++)
* StaticRNN (Python)
* IfElse (Python)
* cond (Python)
* split_lod_tensor (Python && C++)
- 11 Apr 2020, 1 commit

Committed by Huihuang Zheng:
The flaky Windows test is hard to debug. It just exits with code 0xc0000374 without any log, so we don't know where or why it fails. The probability of failure is about 1/50. I spent 3 days on it and found it happens only when using PE + control flow + Windows. Exit code 0xc0000374 indicates heap corruption or an access violation, but I found the memory was sufficient during debugging. There were no failures across 500+ Linux test runs. I suspect the reason is a multithreading difference between Windows and Linux, but I don't have time to debug it completely now. I will temporarily disable the test and fix it in the coming days.
- 26 Feb 2020, 1 commit

Committed by songyouwei:
* dygraph support cond op test=develop
* unittest coverage test=develop
* fix coverage test=develop
* fix for coverage test=develop
* refine TypeError msg test=develop
* remove restrict test=develop
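A hedged sketch of what this looks like from the user side, assuming the 1.x-style `fluid.dygraph` API of that era (the values are invented; in imperative mode the predicate is concrete, so only the chosen branch function runs):

```python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid import layers

with fluid.dygraph.guard():
    a = fluid.dygraph.to_variable(np.array([1.23], dtype='float32'))
    b = fluid.dygraph.to_variable(np.array([1.25], dtype='float32'))
    # In dygraph mode cond evaluates the predicate eagerly and calls
    # only the selected branch function.
    out = layers.cond(a < b, lambda: a + b, lambda: a - b)
    print(out.numpy())  # ~[2.48]
```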
- 06 Jan 2020, 1 commit

Committed by Huihuang Zheng

- 18 Dec 2019, 1 commit

Committed by Huihuang Zheng:
The fixed bugs:

1. The condition sub-graph was not pruned.
2. When the backward graph is extremely simple, all of the backward ops were pruned.
- 06 Dec 2019, 1 commit

Committed by Huihuang Zheng:
Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward pass are correct, and fix the bugs detected by those tests.

Fixed bugs:

1. Unlike sum_op, optimizer ops don't allow an uninitialized input tensor. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes the optimizer op to error. To fix it, we should either let optimizer ops support uninitialized input like sum_op does, or assign the uninitialized gradient to 0 when conditional_block_grad_op doesn't run. I found there are about 10+ optimizer ops. **To be simpler, I just assign the output gradient of conditional_block_grad_op to 0 in this PR.** It can be further explored whether optimizer ops can be made to support uninitialized input tensors like sum_op, because in theory we could then skip the assignment in conditional_block_grad_op and speed things up.
2. Infer parameter shapes during append_backward. I didn't know that all our parameters are in the global block. When an op_desc infers shapes in a sub-block, it may not know the shape of gradients of parameters whose shape information is in the global block. I fixed it by inferring the shapes of those gradients from the forward vars.

This PR also did some code cleanup:

1. Print the var name when sgd_op catches a shape error, so that it is easier to debug.
2. Fix a typo: dicta -> dict.
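A hedged sketch of the kind of dy/dx check described above, computing the gradient of a tiny cond network with `fluid.gradients` (the program and values are invented for illustration and are not the PR's actual tests):

```python
import numpy as np
import paddle
import paddle.fluid as fluid

paddle.enable_static()

main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program, startup_program):
    x = fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.0)
    x.stop_gradient = False
    five = fluid.layers.fill_constant(shape=[1], dtype='float32', value=5.0)
    # y = 3x in the true branch, y = x*x in the false branch
    y = fluid.layers.cond(x < five, lambda: 3.0 * x, lambda: x * x)
    (dy_dx,) = fluid.gradients([y], [x])

exe = fluid.Executor(fluid.CPUPlace())
y_val, grad_val = exe.run(main_program, fetch_list=[y, dy_dx])

# x = 2 takes the true branch, so y = 6 and dy/dx = 3.
np.testing.assert_allclose(y_val, [6.0])
np.testing.assert_allclose(grad_val, [3.0])
```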
- 29 Nov 2019, 1 commit

Committed by Huihuang Zheng:
* Commit before merging develop test=develop
* Backup after working with Huihuang logs
* Commit before deleting Huihuang debug loggings
* Commit before debug test=develop
* Fix bug commit test=develop
* Backup of fixing bugs test=develop
* Clean up code test=develop
* Fix a bug in sum_op test=develop
- 11 Nov 2019, 1 commit

Committed by Huihuang Zheng