- Sep 26, 2019: 16 commits
-
-
Committed by liuwei1031
* improve error message when passing an ndarray with object dtype
* improve message format
* change assert to raise TypeError
* remind the user how to locate the irregular data instead of printing it
* add unittest for input array type check
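For context, a minimal NumPy sketch (not part of the patch) of the input this check targets: a ragged nested list silently becomes an object-dtype ndarray, which, per this change, is rejected with a TypeError and a locating hint instead of a bare assert.

```python
import numpy as np

# Rows of different lengths make NumPy fall back to dtype=object
# instead of a numeric dtype.
ragged = np.array([[1.0, 2.0, 3.0], [4.0, 5.0]], dtype=object)
print(ragged.dtype)  # object

# A well-formed rectangular array keeps a numeric dtype and can be fed safely.
dense = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(dense.dtype)   # float64

# After this change, feeding `ragged` to a Paddle executor is expected to
# raise TypeError with a hint on how to locate the irregular rows,
# rather than failing on an assert (behavior described by the commit).
```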
-
Committed by joanna.wozna.intel
* Fix conv2d+dequantize squash for residual fusion test=develop
* Correct int8 input test=develop
* Add option to exclude or include padding in pool2d mkldnn test=develop
-
Committed by chengduo
test=develop
-
Committed by Aurelius84
* x.dims == y.dims test=develop
* refine comment
-
Committed by Yang Zhang
* Expose `mutable_data` as python binding test=develop
* Add test for device pointer binding test=develop
* Make test compatible with Python 2
-
Committed by Aurelius84
* fix input shape check test=develop
* move PADDLE_ENFORCE test=develop
-
Committed by chengduo
Add dtype for coalesce_tensor_op
-
Committed by Zhaolong Xing
test=develop test=document_fix
-
Committed by gongweibao
Polish the elementwise max/min/pow documentation to add more examples
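The documented operators compute simple elementwise results over two tensors; below is a minimal sketch of the kind of example the docs gain, assuming the Fluid 1.x API (`fluid.data`, `fluid.layers.elementwise_max/min/pow`).

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[3], dtype='float32')
y = fluid.data(name='y', shape=[3], dtype='float32')

out_max = fluid.layers.elementwise_max(x, y)  # elementwise maximum
out_min = fluid.layers.elementwise_min(x, y)  # elementwise minimum
out_pow = fluid.layers.elementwise_pow(x, y)  # elementwise x ** y

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
res = exe.run(feed={'x': np.array([1., 5., 2.], dtype='float32'),
                    'y': np.array([4., 3., 2.], dtype='float32')},
              fetch_list=[out_max, out_min, out_pow])
print(res)  # [4, 5, 2], [1, 3, 2], [1, 125, 4]
```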
-
Committed by Aurelius84
-
Committed by Tao Luo
test=develop
-
Committed by mapingshuo
* fix doc of apply_optimize test=document_fix test=document_preview
* modify doc of backward test=develop test=document_fix
* modify document hash test=develop test=document_preview
-
Committed by Chen Weihang
* add lod check for sequence op, test=develop
* delete unnecessary check in expand op, test=develop
-
Committed by Huihuang Zheng
The new "fluid.data" changes the old "fluid.layers.data":
1. Add shape and dtype check.
2. Remove the "append_batch_size" parameter. We won't offer this in the new data layer because other deep learning platforms don't have this kind of data-layer pre-processing; it may confuse users.
3. Remove the "stop_gradient" parameter because the data layer doesn't do back-propagation.
TODO: Data fed by the executor is now checked; do we also want to check the feed data of readers in the future?
-
Committed by Zeng Jinle
-
Committed by qingqing01
-
- Sep 25, 2019: 17 commits
-
-
Committed by xujiaqi01
fix memory leak in HogwildWorker; its ops are now explicitly deleted in the destructor
-
Committed by Zeng Jinle
-
Committed by Zeng Jinle
* add AdadeltaOptimizer doc, test=develop
* refine doc, test=develop
* follow lanxiang's comments, test=develop, test=document_fix
-
Committed by Zeng Jinle
* expose set_gradient_clip, test=develop, test=document_preview, test=preview
* expose gradient clip, test=develop, test=document_fix
* refine doc, test=develop
* follow lanxiang's comments, test=develop, test=document_fix
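A minimal sketch of the newly exposed interface, assuming the Fluid 1.x `fluid.clip` module (`set_gradient_clip`, `GradientClipByGlobalNorm`); the toy network around it is illustrative only.

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 13], dtype='float32')
y = fluid.data(name='y', shape=[None, 1], dtype='float32')
pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))

# Clip all parameter gradients by a global L2 norm before the update step.
fluid.clip.set_gradient_clip(
    fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0))

sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
```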
-
Committed by chengjuntao
* refine doc, test=develop, test=document_preview
-
Committed by zhongpu
* add kernel for fill_op, test=develop
* modify PADDLE_ENFORCE to PADDLE_ENFORCE_EQ, test=develop
* add op test for fill_op, test=develop
* register op CUDA kernel, test=develop
* update test_fill_op.py, test=develop
* change FillConstantOpVarTypeInference to FillOpVarTypeInference, test=develop
* fix op test, test=develop
* add header file, test=develop
-
Committed by wangchaochaohu
* add support for Tensor and TensorList arguments in the strided_slice OP test=develop
* fix the comment test=develop
* fix test=develop
* fix the bug test=develop
* delete log test=develop
* fix API.spec test=develop
* fix test=develop
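A hedged sketch of what the extension allows, assuming the Fluid 1.x `fluid.layers.strided_slice(input, axes, starts, ends, strides)` signature; per this change, `starts`/`ends`/`strides` may also be Tensors, or lists containing Tensors, instead of plain Python ints.

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 10], dtype='float32')

# Plain Python ints, as before.
out1 = fluid.layers.strided_slice(
    x, axes=[1], starts=[0], ends=[8], strides=[2])

# End position supplied as a 1-D int32 Tensor (a TensorList element),
# so it can be computed at runtime instead of being fixed at build time.
end = fluid.layers.fill_constant(shape=[1], dtype='int32', value=8)
out2 = fluid.layers.strided_slice(
    x, axes=[1], starts=[0], ends=[end], strides=[2])
```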
-
Committed by lvmengsi
* update API.spec
-
Committed by lvmengsi
* fix bn
-
Committed by ShenLiang
* treat broadcast as non-initial, test=develop
* rename the class name
* rename the class name, test=develop
-
Committed by Bob Zhu
* add support for matmul with multiple heads even with different width and height. The original multi-head matmul supports only mat_a.width == mat_b.height; in that case mat_b is split horizontally. This patch extends the support to mat_a.width != mat_b.height as long as mat_a.width / head_number == mat_b.height; in this case mat_b is split vertically. One example: A is [3, 8], B is [2, 16], head_number is 4. A is split into 4 pieces of [3, 2], B is (vertically) split into 4 pieces of [2, 4]. The final result is 4 matrices of [3, 4], i.e. [3, 16]. test=develop
* refactor the code of matmul with multiple heads even with different width and height test=develop
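A NumPy sketch of the arithmetic in the example above (illustrative only, not the kernel implementation): A [3, 8] and B [2, 16] with head_number 4 give four [3, 2] x [2, 4] products concatenated into [3, 16].

```python
import numpy as np

head_number = 4
A = np.random.rand(3, 8)   # mat_a: width 8, so 8 / 4 heads = 2 columns per head
B = np.random.rand(2, 16)  # mat_b: height 2 == mat_a.width / head_number

A_heads = np.split(A, head_number, axis=1)   # 4 blocks of shape [3, 2]
B_heads = np.split(B, head_number, axis=1)   # 4 vertical blocks of shape [2, 4]

# Each head multiplies a [3, 2] block with a [2, 4] block -> [3, 4];
# the four results are concatenated back along the columns.
out = np.concatenate(
    [np.matmul(a, b) for a, b in zip(A_heads, B_heads)], axis=1)
print(out.shape)  # (3, 16)
```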
-
Committed by Liufang Sang
* refine ctc align op with padding
* refine api sample code
-
Committed by Tao Luo
* add input type and dtype check for softmax_op test=develop
* refine error message test=develop
-
Committed by Zhaolong Xing
Fix C++ inference bug: enabling memory optimization and the TRT subgraph engine at the same time triggers a bug (#19969)
* fix memory optimization type test=develop
* 1. fix bug: enabling TRT and memory optim together triggers it. 2. Clean up memory optim bug. test=develop
-
Committed by Wojciech Uss
* Add support for new QAT models test=develop
  Co-Authored-By: Michał Gallus <michal.gallus@intel.com>
  Co-Authored-By: Wojciech Uss <wojciech.uss@intel.com>
* fixed fps results test=develop
* fix top5 accuracy drop problem
* updated for new QAT models
* skip quantizing average pooling - dirty but working
* add missing pass
* added missing conv+brelu fuse pass
* removed a call to non-existent pass test=develop
* renamed pass test=develop
* Adjust finding pooling scale to newest QAT models
* Remove unnecessary code from quantization_mkldnn_pass
* Copy Pooling input scale to output scale in QAT
* Refactor & remove unused code in QAT
* Incorporate fp32 FC into QAT test=develop
* Enable graph drawing with debug flag test=develop
* Add tests for QATv2
* Fix paths for QATv2 models test=develop
* Add option to save transformed int8 qat model test=develop
* Remove redundant lines from qat mkldnn pass test=develop
* Delegate disablement of avg pooling to qat test=develop
* fix CI bug, test=develop
* Follow Wangzhen's review, test=develop
* Update API.spec test=develop
* Name False in (is_unsigned, TensorScale) tuple test=develop
-
Committed by Aurelius84
* Remove last-dim constraints of seq_pad and seq_unpad test=develop
* fix test_layer api code test=develop
* fix sequence_pad_op.cc conflict test=develop
* remove test_analyzer_mm_dnn test=develop
* fix vectorize bug test=develop
* fix vectorize<int> test=develop
-
Committed by chengduo
test=develop
-
- Sep 24, 2019: 7 commits
-
-
Committed by Yi Liu
test=develop test=document_fix
-
Committed by jhjiangcs
-
Committed by Zeng Jinle
-
Committed by Yang Zhang
* Add float16 support to `sync_batch_norm_op` test=develop
* Add test for sync_bn with FP16 input test=develop
-
Committed by Aurelius84
* Remove the constraint that the last dimension is forced to be 1 by adding lookup_table_v2 test=develop
* modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop
* Revert "modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop" This reverts commit 8a960bfc61e51aa27c3c529df8fb90b93ebd19f9.
* move api into fluid.embedding test=develop
* fix example code test=develop
* move one_hot into fluid.one_hot
* modify api.spec test=develop
* fix loss shape test=develop
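A hedged sketch of the relocated APIs, assuming `fluid.embedding` and `fluid.one_hot` take plain int64 ids without the old trailing dimension of 1 (names and sizes are illustrative):

```python
import paddle.fluid as fluid

# Old fluid.layers.embedding required ids shaped [..., 1];
# the new fluid.embedding takes ids of shape [batch, seq_len] directly.
ids = fluid.data(name='ids', shape=[None, 20], dtype='int64')
emb = fluid.embedding(input=ids, size=[10000, 128])  # vocab 10000, dim 128

# one_hot moved alongside it into the fluid namespace.
onehot = fluid.one_hot(input=ids, depth=10000)
```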
-
Committed by Zeng Jinle
-
Committed by whs
1. Support a customized eval function instead of an eval program.
2. Fix loading checkpoints in the quantization strategy.
3. Support saving the eval model when saving a checkpoint.
4. Fix the decoder of the loading context in PaddleSlim.
5. Fix restoring from the checkpoint of the uniform prune strategy.
6. Support saving the eval model and infer model during training.
7. Add unit tests for saving the eval model, saving the infer model, and restoring uniform pruning from a checkpoint.
8. Fix pruning of the depthwise_conv_grad op by updating the groups.
-