- 19 Oct 2022, 1 commit
  Committed by Nyakku Shigure
- 11 Oct 2022, 1 commit
  Committed by Nyakku Shigure
- 27 Sep 2022, 1 commit
  Committed by Nyakku Shigure
  * [CodeStyle] remove all future imports
  * revert test_error.py
  * restore future import in example code
- 19 Sep 2022, 1 commit
  Committed by Charles-hit
  * support cast op backward refuse forward
  * Fix the bug of the high-order unit test framework
- 14 Sep 2022, 1 commit
  Committed by Nyakku Shigure
  * trim trailing whitespace
  * fix `.cmake-format.py`
  * revert NPU UT changes to avoid NPU CI errors
- 08 Sep 2022, 1 commit
  Committed by Charles-hit
  * support more ops for high order
  * add unit tests for high-order ops
  * remove unnecessary comments
- 05 Jun 2022, 1 commit
  Committed by Sing_chan
  * use yapf to format all Python files
  * exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
  * disable diff_py_file because too many diff files cause the following command to fail
- 29 Apr 2022, 1 commit
  Committed by YuanRisheng
  * add double yaml
  * add inline func
- 28 Apr 2022, 1 commit
  Committed by pangyoki
  * fix collections.Sequence in Python 3.10
  * fix format
- 22 Apr 2022, 2 commits
  Committed by YuanRisheng
  * support 3rd-order gradient
  * change code format
  Committed by YuanRisheng
  * Support double grad check of ops in Eager mode
  * fix bugs in backward yaml
  * adjust code format
- 01 Nov 2021, 1 commit
  Committed by Weilong Wu
  * native commit for triple grad of sigmoid
  * Updated unittest files
  * init functional jacobian api
  * Updated triple test func
  * Updated gradient_checker & test_script
  * finish test with dtype float32
  * add float64 test case
  * polish code
  * use atol=1e-5 with dtype float64
  * fix for CI
  * set timeout for test_jacobian
  * fix dygraph grad to support high-order differentiation
  * polish API docstring
  * Updated gradient checker and some related files
  * fix double grad strip error for high-order differentiation
  * Add Sigmoid triple grad tests
  * fix dygraph double grad dtype error when calling for high-order differentiation scenario
  * Updated triple grad tests func
  * Use np.random to initialize ddx
  * Updated triple_grad_check func
  * add todo for gradient checker and refine some comments
  * remove additional code
  * add test for warning in backward.py
  * format python code
  * support multiple inputs in triple gradient checker
  * Add matmul triple grad kernel
  * Updated comments of TODO
  * Supported some special tests
  * Change code format to follow CI std
  * Updated gradient_checker.py
  * Fix conflicts
  * Removed unnecessary printing log
  * Change code style to follow CI std
  Co-authored-by: levi131 <limaolin01@baidu.com>
  Co-authored-by: Jiabin Yang <360788950@qq.com>
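For context on what a "triple grad of sigmoid" test has to verify, here is a minimal NumPy sketch (not Paddle code) that checks the analytic first, second, and third derivatives of the sigmoid against central finite differences, which is essentially what a triple-grad check automates:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Analytic derivatives of sigmoid s(x):
#   s'   = s * (1 - s)
#   s''  = s' * (1 - 2s)
#   s''' = s'' * (1 - 2s) - 2 * s'**2
def d1(x):
    s = sigmoid(x)
    return s * (1 - s)

def d2(x):
    s = sigmoid(x)
    return d1(x) * (1 - 2 * s)

def d3(x):
    s = sigmoid(x)
    return d2(x) * (1 - 2 * s) - 2 * d1(x) ** 2

def central_diff(f, x, eps=1e-5):
    # Second-order accurate finite-difference estimate of f'(x).
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = np.linspace(-3, 3, 7)
assert np.allclose(central_diff(sigmoid, x), d1(x), atol=1e-6)
assert np.allclose(central_diff(d1, x), d2(x), atol=1e-6)
assert np.allclose(central_diff(d2, x), d3(x), atol=1e-6)
```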
- 19 Oct 2021, 1 commit
  Committed by Weilong Wu
  * Support elementwise_add triple grad kernel
  * Change code format to follow CI std
- 13 Oct 2021, 1 commit
  Committed by Jiabin Yang
  * native commit for triple grad of sigmoid
  * Updated unittest files
  * init functional jacobian api
  * Updated triple test func
  * Updated gradient_checker & test_script
  * finish test with dtype float32
  * add float64 test case
  * polish code
  * use atol=1e-5 with dtype float64
  * fix for CI
  * set timeout for test_jacobian
  * fix dygraph grad to support high-order differentiation
  * polish API docstring
  * Updated gradient checker and some related files
  * fix double grad strip error for high-order differentiation
  * Add Sigmoid triple grad tests
  * fix dygraph double grad dtype error when calling for high-order differentiation scenario
  * Updated triple grad tests func
  * Use np.random to initialize ddx
  * Updated triple_grad_check func
  * add todo for gradient checker and refine some comments
  * remove additional code
  * add test for warning in backward.py
  * format python code
  Co-authored-by: veyron95 <veyron_wu@163.com>
  Co-authored-by: levi131 <limaolin01@baidu.com>
- 13 Mar 2020, 1 commit
  Committed by tianshuo78520a
  * fix the Travis CI problem
- 01 Nov 2019, 1 commit
  Committed by Leo Chen
  * don't expose numerous Tensor.set(), test=develop
  * fix condition, test=develop
  * fix float16 bug, test=develop
  * feed should be Tensor or np.array, not Variable or number, test=develop
  * use forcecast to copy numpy slice to a new array, test=develop
  * remove float16-uint16 hacking, test=develop
- 13 Aug 2019, 1 commit
  Committed by Jiawei Wang
  * instag LoD tensor impl
  * First PR for instag
  * Before adding SelectedRows.
  * Change name from instag to filter_instag, and upgrade the impl of filter_instag
  * Fix yapf error in gradient_checker.py to pass Travis CI
  * Fix Filter Instag Grad test=develop
  * 1) Fix API.spec, add filter_instag Op. 2) Add Vector support for CUDA. test=develop
  * Impl Loss_weight and empty output handler
  * change Loss Weight datatype to Float32, and add Loss Weight as 2nd output
  * 1) Support Tensor input (without LoD) 2) Add unit test
  * Filter By Instag final test=develop
  * Update API.spec for filter_by_instag test=develop
  * Update API.spec for filter_by_instag 2 test=develop
  * Add Filter By Instag coverage
  * code format of test_layers.py test=develop
  * Make API args more readable and pass code format test=develop
  * Filter By Instag Op, rename Map to Index Map test=develop
  * Filter By Instag Op, fix code format err in filter_by_instag_op.cc test=develop
  * Filter By Instag Op: code format of cpp files test=develop
  * Filter By Instag Op: API.spec modification test=develop
  * Filter By Instag Op: API.spec doc id modification test=develop
  * Filter By Instag Op: API.spec and doc preview test=develop test=document_preview
  * Filter By Instag Op, fix doc error test=document_preview test=develop
  * Filter By Instag Op, fix doc error and API.spec test=document_preview test=develop
  * Filter By Instag Op, fix API.spec test=document_preview test=develop
  * Filter By Instag Op, fix PADDLE_ENFORCE deprecated warning test=document_preview test=develop
  * Filter By Instag Op, fix PADDLE_ENFORCE deprecated and code format warning test=document_preview test=develop
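To make the filtering semantics sketched in these commit notes (filter rows by instance tag, emit a loss weight and an index map, handle empty output) more concrete, here is a hypothetical NumPy illustration; the function name `filter_rows_by_tag` and the zero-filler behaviour for empty output are assumptions for illustration, not the actual operator implementation:

```python
import numpy as np

def filter_rows_by_tag(rows, row_tags, filter_tags):
    """Hypothetical sketch: keep rows whose tag list intersects filter_tags.

    rows        -- 2-D float array, one instance per row
    row_tags    -- list of tag lists, one per row
    filter_tags -- iterable of tags to keep
    Returns (filtered rows, loss_weight, index_map).
    """
    filter_tags = set(filter_tags)
    keep = [i for i, tags in enumerate(row_tags) if filter_tags & set(tags)]
    if not keep:
        # Empty-output handling: emit one zero filler row with zero loss weight.
        filler = np.zeros((1, rows.shape[1]), dtype=rows.dtype)
        return filler, np.zeros(1, dtype=np.float32), np.array([[0, 0]])
    out = rows[keep]
    loss_weight = np.ones(len(keep), dtype=np.float32)   # 1.0 for every kept row
    index_map = np.array([[new, old] for new, old in enumerate(keep)])
    return out, loss_weight, index_map

rows = np.arange(12, dtype=np.float32).reshape(4, 3)
out, w, idx = filter_rows_by_tag(rows, [[1], [2], [1, 3], [4]], filter_tags=[1])
print(out)   # rows 0 and 2
print(w)     # [1. 1.]
print(idx)   # [[0 0] [1 2]]
```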
- 16 Jun 2019, 1 commit
  Committed by qingqing01
  * Update backward.py:
    - If there is no input grad var in all outputs of previous ops, do not append this op into the graph.
    - Only apply this strategy for double backward.
  * Update some double backward ops.
  * Update sum_op to judge whether a tensor is empty by numel or IsInitialized().
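The pruning rule in the first bullet can be illustrated with a short, purely hypothetical Python sketch (not the real backward.py code): an op is appended only if at least one of its input grad vars was produced by an op already appended or was fed in to begin with.

```python
# Hypothetical sketch of the pruning rule, not the real backward.py logic.
def prune_double_backward_ops(ops, initial_grad_vars):
    """Keep an op only if some input grad var is already available.

    ops               -- list of (op_name, input_vars, output_vars) tuples,
                         in the order they would be appended
    initial_grad_vars -- grad vars available before any op runs
    """
    available = set(initial_grad_vars)
    kept = []
    for name, inputs, outputs in ops:
        if available & set(inputs):       # some input grad var exists
            kept.append(name)
            available |= set(outputs)     # its outputs become available
        # else: drop the op, since no previous op produced its input grads
    return kept

ops = [
    ("scale_grad", ["y@GRAD"], ["x@GRAD"]),
    ("dangling_grad", ["z@GRAD"], ["w@GRAD"]),   # never fed, gets pruned
]
print(prune_double_backward_ops(ops, initial_grad_vars=["y@GRAD"]))
# ['scale_grad']
```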
- 14 May 2019, 1 commit
  Committed by Kaipeng Deng
  * add elementwise_add_grad_grad op. test=develop
  * use defined GradMaker. test=develop
- 10 May 2019, 1 commit
  Committed by qingqing01
  * Add conv2d_grad_grad_op
  * Extract the cuDNN conv algo searching code into conv_cudnn_helper.h.
    - Now use it in conv2d_grad_grad.
    - Will simplify the searching code in conv2d and conv2d_grad in the next PR.
  * Enhance and fix a bug in the unit testing of gradient_checker.
  * Support fetching empty variables, returning None in Python.
- 23 Apr 2019, 1 commit
  Committed by qingqing01
  Support backward of backward for Relu and add a new gradient checker by comparing theoretical and numerical Jacobian. (#16862)
  * Support backward of backward and a new gradient checker
  * Rename decorators.py to decorator_helper.py, since Python on Windows CI has a decorators package.
  1. Add ReluDoubleGradMaker when registering relu_grad.
  2. Add a new gradient checker that compares the theoretical and numerical Jacobian. Check double gradients with double_grad_check.
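The core idea of the checker introduced here, comparing a theoretical Jacobian against a numerically estimated one, can be shown with a small generic NumPy sketch (this is the underlying technique only, not the Paddle gradient_checker or double_grad_check API):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Estimate J[i, j] = d f(x)[i] / d x[j] by central differences."""
    y = f(x)
    jac = np.zeros((y.size, x.size))
    for j in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[j] += eps
        xm[j] -= eps
        jac[:, j] = (f(xp) - f(xm)) / (2 * eps)
    return jac

# Example: f(x) = relu(W @ x); its theoretical Jacobian is diag(mask) @ W.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

f = lambda v: np.maximum(W @ v, 0.0)
mask = (W @ x > 0).astype(float)     # relu derivative evaluated at W @ x
theoretical = mask[:, None] * W      # d relu(Wx) / dx

assert np.allclose(numerical_jacobian(f, x), theoretical, atol=1e-5)
```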