- 27 Sep 2019, 5 commits
- Committed by chengduo
* make pad and split support fp16 test=develop
- Committed by lvmengsi
- Committed by tangwei12
* add a base class for the Communicator
* add AsyncCommunicator Impl for async distributed training
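As a rough illustration of the structure this commit describes (the real implementation is C++ inside Paddle's distributed runtime; the class and method names below, including `push_fn`, are illustrative assumptions only), a base `Communicator` interface with an asynchronous subclass might look like:

```python
# Minimal Python sketch, not Paddle's actual code: a base Communicator interface
# plus an async implementation that pushes gradients from a background thread.
import queue
import threading
from abc import ABC, abstractmethod


class Communicator(ABC):
    """Base interface: manage the send service and hand over gradients."""

    @abstractmethod
    def start(self): ...

    @abstractmethod
    def stop(self): ...

    @abstractmethod
    def send(self, var_name, grad): ...


class AsyncCommunicator(Communicator):
    """Queues gradients and pushes them from a background thread (async training)."""

    def __init__(self, push_fn):
        self._push_fn = push_fn            # placeholder for whatever transmits the gradient
        self._queue = queue.Queue()
        self._thread = None

    def start(self):
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._queue.put(None)              # sentinel: tell the worker to exit
        self._thread.join()

    def send(self, var_name, grad):
        self._queue.put((var_name, grad))  # non-blocking for the trainer loop

    def _loop(self):
        while True:
            item = self._queue.get()
            if item is None:
                break
            self._push_fn(*item)
```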
- Committed by danleifeng
Polish English docs of elementwise_add/sub/mul/div
- Committed by Li Fuchen
Use AllocateTmpTensor() for creating temporary tensors in warpctc.
- 26 Sep 2019, 11 commits
- Committed by wangchaochaohu
- Committed by Adam
test=develop
- Committed by joanna.wozna.intel
* Fix conv2d+dequantize squash for residual fusion test=develop
* Correct int8 input test=develop
* Add choice of excluding or including padding in pool2d mkldnn test=develop
- Committed by Aurelius84
* x.dims == y.dims test=develop
* refine comment
- Committed by Aurelius84
* fix input shape check test=develop
* move PADDLE_ENFORCE test=develop
- Committed by chengduo
Add dtype for coalesce_tensor_op
- Committed by Zhaolong Xing
test=develop test=document_fix
- Committed by gongweibao
Polish elementwise max/min/pow documentation to add more examples
- Committed by Aurelius84
- Committed by Tao Luo
test=develop
- Committed by Chen Weihang
* add lod check for sequence op, test=develop
* delete unnecessary check in expand op, test=develop
- 25 Sep 2019, 6 commits
- Committed by zhongpu
* add kernel for fill_op, test=develop
* modify PADDLE_ENFORCE to PADDLE_ENFORCE_EQ, test=develop
* add op test for fill_op, test=develop
* REGISTER OP CUDA KERNEL, test=develop
* update test_fill_op.py, test=develop
* change FillConstantOpVarTypeInference to FillOpVarTypeInference, test=develop
* fix op test, test=develop
* add header file, test=develop
- Committed by wangchaochaohu
* add support for Tensor and TensorList inputs to strided_slice OP test=develop
* fix the comment test=develop
* fix test=develop
* fix the bug test=develop
* delete log test=develop
* fix API.spec test=develop
* fix test=develop
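For context, a small NumPy sketch of what strided_slice computes; the commit's change is that starts/ends/strides may now also arrive as Tensors/TensorLists at runtime. The helper below is illustrative, not Paddle's API; only the attribute names mirror the op.

```python
# NumPy sketch of strided_slice semantics: slice the given axes with
# start/end/stride, leave all other axes untouched.
import numpy as np

def strided_slice(x, axes, starts, ends, strides):
    slices = [slice(None)] * x.ndim
    for axis, start, end, stride in zip(axes, starts, ends, strides):
        slices[axis] = slice(start, end, stride)
    return x[tuple(slices)]

x = np.arange(24).reshape(2, 3, 4)
out = strided_slice(x, axes=[1, 2], starts=[0, 1], ends=[3, 4], strides=[2, 2])
print(out.shape)  # (2, 2, 2)
```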
- Committed by lvmengsi
* fix bn
- Committed by Bob Zhu
* add support for matmul with multiple heads even when width and height differ.
  The original multi-head matmul only supports mat_a.width == mat_b.height, in which case mat_b is split horizontally.
  This patch extends support to the case mat_a.width != mat_b.height but mat_a.width / head_number == mat_b.height, in which case mat_b is split vertically.
  For example, if A is [3, 8], B is [2, 16] and head_number is 4, A is split into [3, 2] blocks and B is (vertically) split into [2, 4] blocks; the final result is 4 matrices of [3, 4], i.e. [3, 16]. test=develop
* refactor the code of matmul with multiple heads even with different width and height test=develop
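For reference, a minimal NumPy sketch of the split scheme described in this commit message (not the actual Paddle kernel); the shapes follow the [3, 8] x [2, 16], head_number=4 example above.

```python
# NumPy sketch of the extended multi-head matmul: when mat_a.width != mat_b.height
# but mat_a.width / head_number == mat_b.height, split A along its width, split B
# vertically (along its width), multiply per head, and concatenate the results.
import numpy as np

def multihead_matmul(a, b, head_number):
    assert a.shape[1] // head_number == b.shape[0]
    a_blocks = np.split(a, head_number, axis=1)   # each [3, 2] in the example
    b_blocks = np.split(b, head_number, axis=1)   # each [2, 4] in the example
    return np.concatenate(
        [a_i @ b_i for a_i, b_i in zip(a_blocks, b_blocks)], axis=1)

a = np.random.rand(3, 8)    # mat_a
b = np.random.rand(2, 16)   # mat_b
out = multihead_matmul(a, b, head_number=4)
print(out.shape)  # (3, 16): four [3, 4] blocks concatenated
```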
- Committed by Liufang Sang
* refine ctc align op with padding
* refine api sample code
- Committed by Aurelius84
* Removing last dims constraints of seq_pad and seq_unpad test=develop
* fix test_layer api code test=develop
* fix sequence_pad_op.cc conflict test=develop
* remove test_analyzer_mm_dnn test=develop
* fix vectorize bug test=develop
* fix vectorize<int> test=develop
- 24 Sep 2019, 9 commits
- Committed by jhjiangcs
- Committed by Yang Zhang
* Add float16 support to `sync_batch_norm_op` test=develop
* Add test for sync_bn with FP16 input test=develop
- Committed by Aurelius84
* Remove constraint that the last dimension is forced to be 1 by adding lookup_table_v2 test=develop
* modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop
* Revert "modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop" (reverts commit 8a960bfc61e51aa27c3c529df8fb90b93ebd19f9)
* move api into fluid.embedding test=develop
* fix example code test=develop
* move one_hot into fluid.one_hot
* modify api.spec test=develop
* fix loss shape test=develop
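To illustrate the shape change behind lookup_table_v2 / fluid.embedding, here is a NumPy sketch of the lookup only (not the Paddle API itself); the point is that the id tensor no longer needs the trailing dimension of 1.

```python
# NumPy sketch of an embedding table lookup. The old lookup_table required ids
# shaped [..., 1]; lookup_table_v2 accepts ids without that trailing 1.
import numpy as np

vocab_size, emb_dim = 100, 8
table = np.random.rand(vocab_size, emb_dim)

ids_v1 = np.array([[[3], [7]], [[1], [0]]])   # old style: [batch, seq, 1]
ids_v2 = np.array([[3, 7], [1, 0]])           # new style: [batch, seq]

out_v1 = table[ids_v1.squeeze(-1)]            # [2, 2, 8] after dropping the 1
out_v2 = table[ids_v2]                        # [2, 2, 8] with no reshape needed
assert out_v1.shape == out_v2.shape == (2, 2, 8)
```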
- Committed by xujiaqi01
* support changing the shuffle thread num
* support changing the train thread num
* fix receiving shuffle data of each channel
* data norm stop gradient
* add check of thread_tensor type and root_tensor type when merging metrics
* remove sleep in shuffle, add config
* add config of pslib client-to-client communication
* fix xbox str
* add data norm op testcase
* add flush in trainer finalize
- Committed by Kaipeng Deng
- Committed by Jacek Czaja
* First implementation of BWD and FWD of pooling mkl-dnn
* Combined AcquireBackward with Fwd
* assorted compilation and crash fixes test=develop
- Committed by Zeng Jinle
- Committed by Zeng Jinle
- Committed by Leo Chen
* make OpTest check grad inplace even if forward has no inplace, test=develop
* do not run PE when enable_inplace is False, test=develop
* add conv3d cuda kernel for float16 type, test=develop
* refactor OpTest for inplace, test=develop
* add comments, test=develop
- 23 Sep 2019, 3 commits
- Committed by Zhang Ting
- Committed by Kaipeng Deng
* fix softmax CE time limit check failure. test=develop
* refine softmax calc. test=develop
- Committed by 石晓伟
- 22 Sep 2019, 1 commit
- Committed by lvmengsi
* add instance norm op
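A minimal NumPy sketch of the math the new instance_norm op implements, assuming NCHW input: each sample and channel is normalized over its own spatial dimensions, with a per-channel affine scale and bias. The actual op and its parameter layout live in the Paddle kernel.

```python
# NumPy sketch of instance normalization for NCHW input.
import numpy as np

def instance_norm(x, scale, bias, eps=1e-5):
    # x: [N, C, H, W]; scale, bias: [C]
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return scale.reshape(1, -1, 1, 1) * x_hat + bias.reshape(1, -1, 1, 1)

x = np.random.rand(2, 3, 4, 4).astype("float32")
y = instance_norm(x, scale=np.ones(3, "float32"), bias=np.zeros(3, "float32"))
print(y.shape)  # (2, 3, 4, 4)
```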
- 21 Sep 2019, 2 commits
- 20 Sep 2019, 3 commits
- Committed by Aurelius84
* support 2-level lod of input in sequence_pool test=develop
* fix lod level bug in .cu test=develop
- Committed by Zhang Ting
1. group_norm supports data_layout=NHWC
2. modified doc of group_norm
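A NumPy sketch of group normalization showing what data_layout=NHWC changes: only the position of the channel axis differs, while statistics are still taken per sample over each group of channels plus the spatial dimensions. The helper is illustrative (affine scale/bias omitted for brevity), not the fluid API.

```python
# NumPy sketch of group_norm with NCHW or NHWC input layouts.
import numpy as np

def group_norm(x, groups, eps=1e-5, data_layout="NCHW"):
    if data_layout == "NHWC":
        x = np.transpose(x, (0, 3, 1, 2))      # move channels to axis 1
    n, c, h, w = x.shape
    g = x.reshape(n, groups, c // groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    y = ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
    if data_layout == "NHWC":
        y = np.transpose(y, (0, 2, 3, 1))      # restore NHWC
    return y

x_nhwc = np.random.rand(2, 8, 8, 6)            # [N, H, W, C]
print(group_norm(x_nhwc, groups=3, data_layout="NHWC").shape)  # (2, 8, 8, 6)
```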
- Committed by Jacek Czaja
* LRN mkl-dnn kernel refactor test=develop
* optional LRN mkldnn workspace
* Added mid allocation
* Workaround for tests
* Removed gradient from is_test ut
* Removed mid for inference
* Reverted LRN mid removal for is_test
* PADDLE_ENFORCE adjusted
* Rebase to templatization commit and to recent codebase
* assorted compilation, lint, and crash fixes