- 27 September 2019, 8 commits
-
-
Committed by wangchaochaohu
* codegen code for reconstruction test=develop
* fix the cmake test=develop
* fix review advice test=develop
-
Committed by lvmengsi
-
Committed by tangwei12
* add a base class for the Communicator
* add AsyncCommunicator Impl for async distributed training
-
Committed by WangXi
Refine document of DGCMomentumOptimizer
-
Committed by danleifeng
Polish English docs of elementwise_add/sub/mul/div
-
Committed by Chen Weihang
* shape and optimize paddle error message stack, test=develop
* limit exception type & add unittest, test=develop
* fix multi-platform problem, test=develop
* fix related unittest failures, test=develop
* add doc & fix unittest errors, test=develop
* fix function name error, test=develop
* update tensor test exception msg compare, test=develop
* remove unittest on win32, the dir format is different, test=develop
* remove useless package, test=develop
* add paddle enforce handler unittest, test=develop
* add exception checkout, test=develop
* fix coverage failed, test=develop
* fix op registry test failed, test=develop
* refactor whole pr, test=develop
* remove test in CMakelist, test=develop
* fix coverage, test=develop
-
Committed by Li Fuchen
Use AllocateTmpTensor() for creating temporary tensors in warpctc.
-
Committed by Huihuang Zheng
Set output type to LoDTensor only. After a code experiment, I found data doesn't support other types.
-
- 26 September 2019, 17 commits
-
-
Committed by 123malin
* fix DistributeTranspilerConfig document, test=develop
-
Committed by wangchaochaohu
-
Committed by whs
* Make PaddleSlim support PyReader.
* Fix unittest of sensitive pruning.
* Add some asserts.
-
Committed by Adam
test=develop
-
Committed by joanna.wozna.intel
* Fix conv2d+dequantize squash for residual fusion test=develop
* Correct int8 input test=develop
* Add option to exclude or include padding in pool2d mkldnn test=develop
-
Committed by chengduo
test=develop
-
Committed by Aurelius84
* x.dims == y.dims test=develop
* refine comment
-
Committed by Yang Zhang
* Expose `mutable_data` as python binding test=develop
* Add test for device pointer binding test=develop
* Make test compatible with python 2
-
Committed by Aurelius84
* fix input shape check test=develop
* move PADDLE_ENFORCE test=develop
-
Committed by chengduo
Add dtype for coalesce_tensor_op
-
Committed by Zhaolong Xing
test=develop test=document_fix
-
Committed by gongweibao
Polish elementwise max/min/pow documents to add more examples
-
Committed by Aurelius84
-
Committed by Tao Luo
test=develop
-
Committed by mapingshuo
* fix doc of apply_optimize test=document_fix test=document_preview
* modify doc of backward test=develop test=document_fix
* modify document hash test=develop test=document_preview
-
Committed by Chen Weihang
* add lod check for sequence op, test=develop
* delete unnecessary check in expand op, test=develop
-
Committed by Huihuang Zheng
The new "fluid.data" changes the old "fluid.layers.data":
1. Add shape and dtype checks.
2. Remove the "append_batch_size" parameter. We won't offer this in the new data layer because other deep learning platforms don't have this kind of data layer pre-processing; it may confuse users.
3. Remove the "stop_gradient" parameter because the data layer doesn't do back-propagation.
TODO: data fed through the executor is now checked; do we also want to check the feed data of readers in the future? (A minimal usage sketch follows below.)
-
- 25 September 2019, 15 commits
-
-
Committed by xujiaqi01
fix memory leak in HogwildWorker, whose ops are explicitly deleted in the destructor
-
Committed by Zeng Jinle
-
Committed by Zeng Jinle
* add AdadeltaOptimizer doc, test=develop
* refine doc, test=develop
* follow lanxiang's comments, test=develop, test=document_fix
-
Committed by Zeng Jinle
* expose set_gradient_clip, test=develop, test=document_preview, test=preview
* expose gradient clip, test=develop, test=document_fix
* refine doc, test=develop
* follow lanxiang's comments, test=develop, test=document_fix
-
Committed by chengjuntao
* refine doc, test=develop, test=document_preview
-
Committed by zhongpu
* add kernel for fill_op, test=develop
* modify PADDLE_ENFORCE to PADDLE_ENFORCE_EQ, test=develop
* add op test for fill_op, test=develop
* register op CUDA kernel, test=develop
* update test_fill_op.py, test=develop
* change FillConstantOpVarTypeInference to FillOpVarTypeInference, test=develop
* fix op test, test=develop
* add header file, test=develop
-
Committed by wangchaochaohu
* add support for tensor and tensorlist for strided_slice OP test=develop
* fix the comment test=develop
* fix test=develop
* fix the bug test=develop
* delete log test=develop
* fix API.spec test=develop
* fix test=develop
-
Committed by lvmengsi
* update API.spec
-
Committed by lvmengsi
* fix bn
-
Committed by Bob Zhu
* add support of matmul with multiple head even with different width and height, test=develop
  The original matmul with multiple head supports only mat_a.width == mat_b.height; in that case, mat_b is horizontally split. This patch extends the support to mat_a.width != mat_b.height as long as mat_a.width / head_number == mat_b.height; in this case, mat_b is vertically split. One example: A is [3, 8], B is [2, 16], head_number is 4. Then A is split into [3, 2] chunks, B is (vertically) split into [2, 4] chunks, and the final result is 4 matrices of [3, 4], i.e. [3, 16]. (See the NumPy sketch below.)
* refactor the code of matmul with multiple head even with different width and height, test=develop
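The following NumPy sketch (added here for illustration, not code from the patch; the helper name multi_head_matmul is hypothetical) reproduces the splitting arithmetic described above:

```python
import numpy as np

# Multi-head matmul for the case mat_a.width != mat_b.height
# but mat_a.width / head_number == mat_b.height.
def multi_head_matmul(a, b, head_number):
    m, k = a.shape          # a: [m, k]
    h, n = b.shape          # b: [h, n]
    assert k % head_number == 0 and k // head_number == h
    # Split A along its width and B along its columns (vertically), one slice per head.
    a_parts = np.split(a, head_number, axis=1)   # slices of [m, k / head_number]
    b_parts = np.split(b, head_number, axis=1)   # slices of [h, n / head_number]
    # Multiply per head and concatenate the per-head results along columns.
    return np.concatenate([ap @ bp for ap, bp in zip(a_parts, b_parts)], axis=1)

# Shapes follow the example in the commit message.
a = np.random.rand(3, 8)
b = np.random.rand(2, 16)
out = multi_head_matmul(a, b, head_number=4)
print(out.shape)  # (3, 16)
```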
-
Committed by Liufang Sang
* refine ctc align op with padding
* refine api sample code
-
Committed by Zhaolong Xing
Fix C++ inference bug: when memory optim and the TRT subgraph engine are enabled at the same time, there is a bug (#19969)
* fix memory optimization type test=develop
* 1. fix BUG: open trt and memory optim will trigger bug. 2. Clean memory optim bug. test=develop
-
Committed by Wojciech Uss
* Add support for new QAT models test=develop Co-Authored-By: Michał Gallus <michal.gallus@intel.com> Co-Authored-By: Wojciech Uss <wojciech.uss@intel.com>
* fixed fps results test=develop
* fix top5 accuracy drop problem
* updated for new QAT models
* skip quantizing average pooling - dirty but working
* add missing pass
* added missing conv+brelu fuse pass
* removed a call to non-existent pass test=develop
* renamed pass test=develop
* Adjust finding pooling scale to newest QAT models
* Remove unnecessary code from quantization_mkldnn_pass
* Copy Pooling input scale to output scale in QAT
* Refactor & remove unused code in QAT
* Incorporate fp32 FC into QAT test=develop
* Enable graph drawing with debug flag test=develop
* Add tests for QATv2
* Fix paths for QATv2 models test=develop
* Add option to save transformed int8 qat model test=develop
* Remove redundant lines from qat mkldnn pass test=develop
* Delegate disablement of avg pooling to qat test=develop
* fix CI bug, test=develop
* Follow Wangzhen's Review, test=develop
* Update API.spec test=develop
* Name False in (is_unsigned, TensorScale) tuple test=develop
-
Committed by Aurelius84
* Removing last dims constraints of seq_pad and seq_unpad test=develop
* fix test_layer api code test=develop
* fix sequence_pad_op.cc conflict test=develop
* remove test_analyzer_mm_dnn test=develop
* fix vectorize bug test=develop
* fix vectorize<int> test=develop
-
Committed by chengduo
test=develop
-