- 25 Dec 2019, 3 commits

By Aurelius84:
* add register op_data_type (test=develop)
* fix register bug in isfinite op (test=develop)
* remove int/int64_t from the pad2d GradKernel (test=develop)

By hong

By zhouwei25

- 24 Dec 2019, 3 commits

By Aurelius84:
* optimize adam speed by removing _finish_update (test=develop)
* fix SparseAdamFunctor param list (test=develop)
* remove scale_op from the expect_list of adam_op (test=develop)
* fix test optimizer loss assert error (test=develop)
* modify PADDLE_ENFORCE usage (test=develop)
* fix op_type in lamb_op.cc (test=develop)
* fix ostream format bug in errors (test=develop)
* add betaPowOut in the ngraph op (test=develop)
* fix ngraph::op API for gcc8 (test=develop)
* clean code (test=develop)
* modify struct into class (test=develop)
* remove beta1Tensor code from lamb_op (test=develop)

By FDInSky:
Update iou_similarity op to support non-normalized bbox
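A minimal usage sketch of the non-normalized mode; the `box_normalized` parameter name and the surrounding fluid calls are assumptions for illustration, not taken from the commit:

```python
import numpy as np
import paddle.fluid as fluid

# Two sets of boxes in absolute pixel coordinates (xmin, ymin, xmax, ymax).
x = fluid.data(name='x', shape=[None, 4], dtype='float32')
y = fluid.data(name='y', shape=[None, 4], dtype='float32')
# box_normalized=False (assumed attribute name) marks the inputs as raw pixel
# coordinates rather than boxes normalized to [0, 1].
iou = fluid.layers.iou_similarity(x, y, box_normalized=False)

exe = fluid.Executor(fluid.CPUPlace())
out, = exe.run(feed={'x': np.array([[0., 0., 10., 10.]], dtype='float32'),
                     'y': np.array([[5., 5., 15., 15.]], dtype='float32')},
               fetch_list=[iou])
print(out)  # pairwise IoU matrix of shape [1, 1]
```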
By guofei

- 23 Dec 2019, 2 commits

- 20 Dec 2019, 1 commit

By Chen Weihang

- 19 Dec 2019, 4 commits

By Chengmo:
* speed up dense calculation & communication (test=develop)

By Wojciech Uss (test=develop)

By guofei:
1. Make while_op accept GPU conditional data.
2. Add more complex test cases for the while_loop API.
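For reference, a small sketch of the `fluid.layers.while_loop` API these tests exercise (the loop itself is an illustrative assumption, not one of the commit's test cases):

```python
import paddle.fluid as fluid

def cond(i, limit):
    # The boolean condition tensor; after this change it may live on the GPU.
    return fluid.layers.less_than(i, limit)

def body(i, limit):
    i = fluid.layers.increment(i, value=1.0)
    return [i, limit]

i = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)
limit = fluid.layers.fill_constant(shape=[1], dtype='float32', value=10.0)
i_out, _ = fluid.layers.while_loop(cond, body, loop_vars=[i, limit])

place = fluid.CUDAPlace(0) if fluid.is_compiled_with_cuda() else fluid.CPUPlace()
exe = fluid.Executor(place)
print(exe.run(fetch_list=[i_out])[0])  # [10.]
```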
By WangXi

- 17 Dec 2019, 1 commit

By Huihuang Zheng

- 16 Dec 2019, 3 commits

By zhaoyuchen2018:
* fix softmax CUDA bug
* refine multihead log and softmax logic

By Kaipeng Deng:
* add Attr(clip_bbox) to the yolo_box OP (test=develop)
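A sketch of how the new attribute is exposed on the Python side; the shapes and anchor values below are illustrative assumptions:

```python
import paddle.fluid as fluid

# A YOLOv3 head output (3 anchors x (5 + 80 classes) = 255 channels) and the image sizes.
x = fluid.data(name='x', shape=[None, 255, 13, 13], dtype='float32')
img_size = fluid.data(name='img_size', shape=[None, 2], dtype='int32')

# clip_bbox=True (the new attribute) clips decoded boxes to the image boundary;
# passing False keeps the raw decoded coordinates.
boxes, scores = fluid.layers.yolo_box(
    x=x,
    img_size=img_size,
    anchors=[116, 90, 156, 198, 373, 326],
    class_num=80,
    conf_thresh=0.01,
    downsample_ratio=32,
    clip_bbox=True)
```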
By Leo Chen:
* fix elementwise_pow bug on integer inputs (test=develop)
* use llrint to support elementwise_pow_grad (test=develop)
* add some tests (test=develop)
* revert grad functor (test=develop)
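A small check of the integer path of the kind the added tests cover (the concrete values are assumptions, not the commit's tests): integer bases and exponents should now give exact results.

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[3], dtype='int64')
y = fluid.data(name='y', shape=[3], dtype='int64')
out = fluid.layers.elementwise_pow(x, y)

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(feed={'x': np.array([2, 3, 10], dtype='int64'),
                     'y': np.array([10, 4, 3], dtype='int64')},
               fetch_list=[out])
print(res)  # exact integer powers expected: [1024, 81, 1000]
```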
- 15 Dec 2019, 1 commit

By Chen Weihang:
* rename the paddle throw error macro (test=develop)
* fix new error use cases (test=develop)

- 12 Dec 2019, 2 commits

By joanna.wozna.intel:
* add reshape int8 op (test=develop)
* change test to CPUPlace (test=develop)
* correct tests (test=develop)

By tangwei12:
* add fake init for the trainer to fix the large memory held in the trainer
* do not merge recv vars from a remote endpoint (test=develop)
* add recv and save op, merge sliced vars in one op to save memory
* remove hsigmoid with pull sparse (test=develop)

- 11 Dec 2019, 1 commit

By GaoWei8 (test=develop)

- 10 Dec 2019, 5 commits

By wangchaochaohu
By Zeng Jinle

By mapingshuo:
* add seed op

By Adam:
* MKLDNN v1.0 rebase to Paddle 1.6 (test=develop)
* add hacky paddle::string::to_string() implementation
* vectorize<int64_t>() -> vectorize() cleanup (test=develop)
* PADDLE_ENFORCE and void_cast fixes (test=develop)
* rebase changes (test=develop)
* cosmetics (test=develop)
* delete MKL from mkldnn.cmake (test=develop)
* CMake debug commands (test=develop)
* delete MKLDNN_VERBOSE and rebase fixes (test=develop)
* rebase fixes (test=develop)
* temporarily disable int8 resnet101, vgg16 and vgg19 tests (test=develop)
* add libmkldnn.so.1 to python setup (test=develop)
* add libmkldnn.so.1 to inference_lib cmake after rebase (test=develop)
* post-rebase fixes + FC int8 changes (test=develop)
* fix LRN NHWC (test=develop)
* fix NHWC conv3d (test=develop)
* Windows build fix + next conv3d fix (test=develop)
* fix conv2d on AVX2 machines (test=develop)

By wangchaochaohu:
* accelerate mean op (test=develop)

- 06 Dec 2019, 5 commits

By Zeng Jinle:
* polish the infer shape registry (test=develop)
* modify some operators' registry (test=develop)

By Aurelius84

By Huihuang Zheng:
Add tests that use dy/dx to make sure the gradient values calculated by the control-flow backward pass are correct, and fix the bugs those tests exposed.
Fixed bugs:
1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes the optimizer op to fail. To fix it, we should either let optimizer ops support uninitialized inputs like sum_op does, or assign the uninitialized gradient to 0 when the conditional_block_grad_op doesn't run. There are about 10+ optimizer ops, so **to keep it simple, this PR just assigns the output gradient of conditional_block_grad_op to 0**. Whether optimizer ops should accept uninitialized input tensors can be explored further, since theoretically that would avoid the extra assignment in conditional_block_grad_op.
2. Infer parameter shapes during append_backward. All parameters live in the global block, so when an op_desc infers shapes in a sub-block, it may not know the shape of gradients of parameters whose shape information sits in the global block. This is fixed by inferring the shapes of those gradients from the forward variables.
This PR also does some code cleanup:
1. Print the var name when sgd_op catches a shape error so that it is easier to debug.
2. Fix a typo: dicta -> dict.
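A rough sketch of the kind of dy/dx check described above; the program below is an assumption for illustration, not the PR's actual test:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[1], dtype='float32')
x.stop_gradient = False

zero = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)
pred = fluid.layers.less_than(x, zero)

# Only one branch actually runs, so the backward pass of the conditional block
# must still produce well-defined (zero-filled) gradients for the untaken branch.
out = fluid.layers.cond(pred, lambda: x * 2.0, lambda: x * 3.0)
dx = fluid.gradients([out], [x])

exe = fluid.Executor(fluid.CPUPlace())
res = exe.run(feed={'x': np.array([1.0], dtype='float32')}, fetch_list=dx)
print(res[0])  # [3.], since x >= 0 the else-branch (x * 3.0) was taken
```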
By Jacek Czaja (test=develop)

By Jacek Czaja:
* Batch Norm MKL-DNN NHWC (test=develop)
  - compilation fix (test=develop)
  - UT fix
  - cosmetics (test=develop)
  - fix to the Batch Norm MKL-DNN NHWC UT (test=develop)
  - conflicts: paddle/fluid/operators/batch_norm_op.h
* lint fixes (test=develop)

- 04 Dec 2019, 3 commits

By Youwei Song:
* dygraph Embedding layer uses lookup_table_v2 (test=develop)
* fix test_nce (test=develop)
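A minimal dygraph sketch of the Embedding layer in question (assuming the 1.7-era dygraph API; the vocabulary size and ids are illustrative):

```python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Embedding, to_variable

with fluid.dygraph.guard():
    # Vocabulary of 128 tokens, 16-dimensional embeddings.
    emb = Embedding(size=[128, 16])
    # With lookup_table_v2 the ids no longer need a trailing dimension of 1.
    ids = to_variable(np.array([3, 7, 42], dtype='int64'))
    out = emb(ids)
    print(out.shape)  # [3, 16]
```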
By wangchaochaohu:
* fix fill_constant_batch_size_like_op precision problem (test=develop)
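For context, a short usage sketch of the op (an assumed call pattern, not the commit's test); the value attribute is where a precision issue would show up:

```python
import paddle.fluid as fluid

# 'like' supplies the runtime batch size; dim 0 of the output is copied from it.
like = fluid.data(name='like', shape=[None, 10], dtype='float32')
out = fluid.layers.fill_constant_batch_size_like(
    input=like,
    shape=[1, 5],        # dim 0 here is a placeholder replaced by like's batch size
    dtype='float32',
    value=1.5)           # the constant value whose precision the fix concerns
```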
By WangXi

- 03 Dec 2019, 6 commits

By Zeng Jinle

By Jacek Czaja

By Tao Luo (test=develop)

By lilong12:
* set dim[0] to -1 if dim[0] < 0 and move the assertion to runtime (test=develop)
* modify the ENFORCE message (test=develop)
* add validation for x.shape[0] > 0 (test=develop)
* add unit tests (test=develop)

By tangwei12

By Leo Chen