- 31 December 2019, 1 commit

Submitted by Leo Chen
* update layers used in mnist dygraph model, test=develop
* fix import issue, test=develop
* add dygraph utils, test=develop
* add unittest, test=develop

- 26 December 2019, 1 commit

Submitted by hutuxian
* fix stat shape back in global auc scenario
* add UT to cover global auc

- 24 December 2019, 3 commits

- 23 December 2019, 1 commit

Submitted by Leo Chen
* add unittests, test=develop
* set dtype of compare op to bool, test=develop
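
A minimal sketch of the behavior this change targets, assuming the fluid 1.x API of that period; the tensor names and values are illustrative. Comparison layers such as less_than are now expected to produce a bool tensor.

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[3], dtype='float32')
y = fluid.data(name='y', shape=[3], dtype='float32')
out = fluid.layers.less_than(x, y)  # result dtype is expected to be bool after this change

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(feed={'x': np.array([1, 2, 3], dtype='float32'),
                     'y': np.array([2, 2, 2], dtype='float32')},
               fetch_list=[out])
print(res, res.dtype)  # e.g. [ True False False] bool
```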

- 20 December 2019, 1 commit

Submitted by hong

- 18 December 2019, 1 commit

Submitted by Leo Chen
Adds the unary operator __neg__ for VarBase in dygraph mode and for Variable in static graph mode.
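
A minimal usage sketch, assuming the fluid 1.x API of that period; the input values are made up:

```python
import numpy as np
import paddle.fluid as fluid

# dygraph mode: __neg__ on VarBase
with fluid.dygraph.guard():
    x = fluid.dygraph.to_variable(np.array([1.0, -2.0, 3.0], dtype='float32'))
    y = -x                 # unary negation, roughly a scale by -1.0
    print(y.numpy())       # [-1.  2. -3.]

# static graph mode: __neg__ on Variable
a = fluid.data(name='a', shape=[3], dtype='float32')
b = -a                     # adds the corresponding op to the default program
```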

- 17 December 2019, 1 commit

Submitted by zhouwei25
* add more example code for py_func and fix some incorrect descriptions in the English API doc
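
A small py_func sketch along the lines of the examples this commit expands; the tanh wrapper and variable names are hypothetical:

```python
import numpy as np
import paddle.fluid as fluid

def my_tanh(x):
    # runs as plain Python at execution time; x arrives as a tensor convertible to an ndarray
    return np.tanh(np.array(x))

x = fluid.data(name='x', shape=[4], dtype='float32')
# py_func does not infer output shape/dtype, so the output variable is created up front
out = fluid.default_main_program().current_block().create_var(
    name='my_tanh_out', dtype='float32', shape=[4])
fluid.layers.py_func(func=my_tanh, x=x, out=out)
```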

- 16 December 2019, 3 commits

Submitted by Kaipeng Deng
* add Attr(clip_bbox) to the yolo_box OP. test=develop
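
A hedged sketch of the new attribute, assuming the fluid 1.x yolo_box signature; the anchors, shapes and thresholds below are illustrative:

```python
import paddle.fluid as fluid

feat = fluid.data(name='feat', shape=[None, 255, 13, 13], dtype='float32')
img_size = fluid.data(name='img_size', shape=[None, 2], dtype='int32')
boxes, scores = fluid.layers.yolo_box(
    x=feat,
    img_size=img_size,
    anchors=[10, 13, 16, 30, 33, 23],
    class_num=80,
    conf_thresh=0.01,
    downsample_ratio=32,
    clip_bbox=True)  # new attribute: clip predicted boxes to the image boundary
```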

Submitted by zhouwei25

Submitted by Leo Chen
* patch math method for varbase using auto-generated op functions, test=develop
* clean code that handles batch_size, test=develop
* follow comments, test=develop
* follow comments, test=develop
* code clean, test=develop
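
A short sketch of what the patched math methods should allow in dygraph mode; the values are illustrative:

```python
import numpy as np
import paddle.fluid as fluid

with fluid.dygraph.guard():
    a = fluid.dygraph.to_variable(np.array([[1., 2.], [3., 4.]], dtype='float32'))
    b = fluid.dygraph.to_variable(np.array([[2., 2.], [2., 2.]], dtype='float32'))
    # Python operators on VarBase dispatch to the auto-generated op functions
    c = a + b * 2 - a / b
    print(c.numpy())
```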

- 11 December 2019, 1 commit

Submitted by Huihuang Zheng

- 09 December 2019, 2 commits

Submitted by Huihuang Zheng
As the title

Submitted by guofei
Add basic while_loop
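
A minimal while_loop sketch in the spirit of this addition, assuming the fluid 1.x API; the counter and limit names are illustrative:

```python
import paddle.fluid as fluid

def cond(i, limit):
    return fluid.layers.less_than(i, limit)

def body(i, limit):
    i = fluid.layers.increment(i, value=1, in_place=True)
    return [i, limit]

i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
i, limit = fluid.layers.while_loop(cond=cond, body=body, loop_vars=[i, limit])

exe = fluid.Executor(fluid.CPUPlace())
print(exe.run(fetch_list=[i]))  # the counter runs up to 10
```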

- 06 December 2019, 3 commits

Submitted by Huihuang Zheng
Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward pass are correct, and fix the bugs those tests uncovered.

Bug fixes:
1. Unlike sum_op, optimizer ops don't allow an uninitialized input tensor. In conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which makes the optimizer op fail. To fix this, we could either let optimizer ops accept uninitialized input like sum_op does, or assign the uninitialized gradient to 0 when conditional_block_grad_op doesn't run. There are about 10+ optimizer ops, so **to keep it simple, this PR just assigns the output gradient of conditional_block_grad_op to 0**. It can be further explored whether optimizer ops could support uninitialized input tensors like sum_op, because theoretically that would avoid the extra assignment in conditional_block_grad_op and speed things up.
2. Infer parameter shapes during append_backward. All our parameters live in the global block, so when op_desc is inferring shapes in a sub-block it may not know the shapes of gradients of parameters whose shape information is in the global block. This is fixed by inferring the gradient shapes from the forward var.

This PR also did some code cleanup:
1. Print the var name when sgd_op catches a shape error, so it is easier to debug.
2. Fix a typo: dicta -> dict.
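
A sketch of the kind of dy/dx check described above, assuming the fluid 1.x control-flow API; the branch functions and input value are made up:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[1], dtype='float32')
x.stop_gradient = False

zero = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)
# y = 3*x when x > 0, otherwise y = -2*x, so dy/dx should be 3 or -2
y = fluid.layers.cond(fluid.layers.greater_than(x, zero),
                      lambda: 3.0 * x,
                      lambda: -2.0 * x)
grad_x, = fluid.gradients(y, x)

exe = fluid.Executor(fluid.CPUPlace())
g, = exe.run(feed={'x': np.array([1.0], dtype='float32')}, fetch_list=[grad_x])
print(g)  # expect [3.]
```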

Submitted by Feiyu Chan
Add a python interface for Gelu. Add documentation for fluid.layers.gelu.
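
A minimal usage sketch of the new interface; the shape and data are illustrative:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 4], dtype='float32')
y = fluid.layers.gelu(x)

exe = fluid.Executor(fluid.CPUPlace())
out, = exe.run(feed={'x': np.random.randn(2, 4).astype('float32')},
               fetch_list=[y])
```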

Submitted by wangchaochaohu

- 05 December 2019, 4 commits

Submitted by danleifeng

Submitted by hong
* dygraph mode support linear lr warm up; test=develop
* add unittest for linear warmup; test=develop
* add input type check; test=develop
* fix type check assert error; test=develop
* change type error; test=develop
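
For reference, a static-graph sketch of linear_lr_warmup; per this commit the same layer should also work under dygraph mode. The boundaries and values below are illustrative:

```python
import paddle.fluid as fluid

boundaries = [100, 200]
lr_values = [1.0, 0.5, 0.1]
warmed_lr = fluid.layers.linear_lr_warmup(
    learning_rate=fluid.layers.piecewise_decay(boundaries, lr_values),  # a plain float also works
    warmup_steps=50,
    start_lr=0.0,
    end_lr=1.0)
sgd = fluid.optimizer.SGD(learning_rate=warmed_lr)
```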

Submitted by lilong12

Submitted by Leo Chen
* test=develop, fix docker with paddle nccl problem
* don't expose numerous Tensor.set(), test=develop
* fix condition, test=develop
* fix float16 bug, test=develop
* feed should be Tensor or np.array, not Variable or number, test=develop
* use forcecast to copy numpy slice to new array, test=develop
* remove float16-uint16 hacking, test=develop
* add variable method to varbase and refactor to_variable to support return varbase
* support kwargs in varbase constructor
* add VarBase constructor to support default python args
* refine varbase initial method
* reset branch
* fix ut for change VarBase error info to PaddleEnforce
* cherry is parameter change before
* overload isinstance to replace too many change of is_variable
* rm useless files
* rm useless code merged by git
* test=develop, fix some ut failed error
* test=develop, fix test_graph_wrapper
* add some tests, test=develop
* refine __getitem__, test=develop
* add tests, test=develop
* fix err_msg, test=develop
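
A small dygraph sketch touching the pieces this refactor is about (to_variable returning a VarBase, the refined __getitem__ slicing); the data is illustrative:

```python
import numpy as np
import paddle.fluid as fluid

with fluid.dygraph.guard():
    data = np.arange(12).reshape(3, 4).astype('float32')
    var = fluid.dygraph.to_variable(data)  # returns a VarBase in dygraph mode
    print(var.shape)                       # [3, 4]
    print(var[0:2].numpy())                # slicing via __getitem__
```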

- 04 December 2019, 1 commit

Submitted by wangchaochaohu
* fix fill_constant_batch_size_like_op precision problem test=develop

- 03 December 2019, 5 commits

Submitted by ruri

Submitted by lilong12
* set dim[0] to -1 if dim[0] < 0 and remove assertion to runtime, test=develop
* modify ENFORCE message, test=develop
* add validation for x.shape[0] > 0, test=develop
* add ut, test=develop

Submitted by Aurelius84
* fix adam sample code bug test=document_fix
* fix sample code bug in scale test=document_fix

Submitted by ruri

- 02 December 2019, 4 commits

Submitted by liym27

Submitted by hutuxian
* refactor AUC OP and add its CUDA Kernel
* the layout of global auc doesn't change

Submitted by wawltor
* fix the supported devices of the unique and unique_with_counts ops. test=develop test=document_fix
* fix the precision of the test for the unique and unique_with_counts ops. test=develop test=document_fix
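
A usage sketch of the two ops involved, assuming the fluid 1.x signatures; the input values are made up:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[6], dtype='int64')
u_out, u_index = fluid.layers.unique(x, dtype='int32')
c_out, c_index, c_count = fluid.layers.unique_with_counts(x, dtype='int32')

exe = fluid.Executor(fluid.CPUPlace())  # run on CPU here
res = exe.run(feed={'x': np.array([2, 3, 3, 1, 5, 3], dtype='int64')},
              fetch_list=[c_out, c_index, c_count])
```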

Submitted by Huihuang Zheng
Add English doc for cond
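
A minimal cond sketch matching the documented usage; the constants are illustrative:

```python
import paddle.fluid as fluid

a = fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.23)
b = fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.25)
# returns a + b when a < b, otherwise a - b
out = fluid.layers.cond(fluid.layers.less_than(a, b),
                        lambda: fluid.layers.elementwise_add(a, b),
                        lambda: fluid.layers.elementwise_sub(a, b))

exe = fluid.Executor(fluid.CPUPlace())
print(exe.run(fetch_list=[out]))  # [array([2.48], dtype=float32)]
```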

- 01 December 2019, 1 commit

Submitted by Jie Fang

- 29 November 2019, 2 commits

Submitted by Huihuang Zheng
* Commit before merging develop test=develop
* Backup after working with Huihuang logs
* Commit before deleting Huihuang debug loggings
* Commit before debug test=develop
* Fix bug commit test=develop
* Backup of fixing bugs test=develop
* Clean up code test=develop
* Fix a bug in sum_op test=develop

Submitted by zhaoyuchen2018
* Add ascending/descending ordering for argsort
* Refine api doc description.
* Refine descending description
* Add int32 logic to speed up when the data size is small.
* Remove the int32 optimization as it is not supported in Python
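
A sketch of the new ordering switch (exposed as descending in the argsort API); the data is illustrative:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 4], dtype='float32')
sorted_out, indices = fluid.layers.argsort(x, axis=-1, descending=True)

exe = fluid.Executor(fluid.CPUPlace())
vals, idx = exe.run(feed={'x': np.array([[3., 1., 2., 4.]], dtype='float32')},
                    fetch_list=[sorted_out, indices])
print(vals)  # [[4. 3. 2. 1.]]
print(idx)   # [[3 0 2 1]]
```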

- 28 November 2019, 4 commits

Submitted by Kaipeng Deng
* add Adam beta1/beta2 support Variable. test=develop
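
A sketch of passing a Variable as beta1, assuming the fluid 1.x optimizer API; the toy regression network is made up:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 10], dtype='float32')
y = fluid.data(name='y', shape=[None, 1], dtype='float32')
pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.reduce_mean(fluid.layers.square_error_cost(pred, y))

# beta1 may now be a Variable, e.g. a global var that can be updated during training
beta1 = fluid.layers.create_global_var(
    shape=[1], value=0.9, dtype='float32', persistable=True, name='adam_beta1')
adam = fluid.optimizer.Adam(learning_rate=0.01, beta1=beta1)
adam.minimize(loss)
```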

Submitted by ruri

Submitted by Kaipeng Deng
* batch_norm momentum support variable. test=develop
* fix format. test=develop
* add batch_norm momentum variable example. test=develop
* move MomentumTensor to training branch. test=develop
* split example. test=develop
* fix doc. test=develop
* fix PADDLE_ENFORCE ci. test=develop
* fix format. test=develop
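
A sketch of passing a Variable as the batch_norm momentum; the shapes and initial value are illustrative:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 16, 8, 8], dtype='float32')
# momentum may now be a Variable instead of a plain float
momentum = fluid.layers.create_global_var(
    shape=[1], value=0.9, dtype='float32', persistable=True, name='bn_momentum')
out = fluid.layers.batch_norm(input=x, momentum=momentum)
```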

Submitted by Zeng Jinle

- 27 November 2019, 1 commit

Submitted by hutuxian
* support data_norm_op run in CUDA
* add two parameters sync_stats & summary_decay_rate
* add UT
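
A basic data_norm sketch; the sync_stats and summary_decay_rate parameter names come from the commit message above, and the exact signature should be checked against the current API:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 32], dtype='float32')
# plain usage; after this change the op can also run on CUDA and exposes
# sync_stats / summary_decay_rate (names taken from the commit message)
out = fluid.layers.data_norm(input=x)
```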