- 28 Nov 2020, 5 commits
- Committed by LielinJiang
* add ReduceLROnPlateau
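A minimal usage sketch of the new scheduler, assuming the Paddle 2.x API `paddle.optimizer.lr.ReduceLROnPlateau` (the argument values below are illustrative, not from this commit):

```python
import paddle

# Sketch: ReduceLROnPlateau lowers the learning rate when the monitored
# metric stops improving; hyperparameter values here are illustrative only.
linear = paddle.nn.Linear(10, 1)
scheduler = paddle.optimizer.lr.ReduceLROnPlateau(
    learning_rate=0.1, factor=0.5, patience=5)
opt = paddle.optimizer.SGD(learning_rate=scheduler,
                           parameters=linear.parameters())

loss = paddle.mean(linear(paddle.rand([4, 10])))
loss.backward()
opt.step()
opt.clear_grad()
scheduler.step(loss)  # pass the monitored metric to the scheduler
```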
- Committed by Huihuang Zheng
test_mnist failed on CUDA 11. After debugging, we found it is caused by PaddleInference IR optimization. We disable it in this PR and will re-enable it once PaddleInference fixes the issue.
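For reference, a hedged sketch of switching IR optimization off in a PaddleInference config; the model file paths are placeholders, not taken from this PR:

```python
import paddle.inference as paddle_infer

# Sketch only: skip the IR graph optimization passes during inference,
# analogous to the workaround described above. Paths are placeholders.
config = paddle_infer.Config("./model/__model__", "./model/__params__")
config.switch_ir_optim(False)
predictor = paddle_infer.create_predictor(config)
```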
- Committed by liym27
- Committed by liym27
[Dy2Stat] Fix bug: a return statement should be transformed into an equivalent Paddle/Python if statement, depending on the if conditions guarding the return statement. (#29165)
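An illustrative (non-literal) example of the transformation this fix targets: a conditional return is rewritten into a single-exit if/else so the whole body can be converted to a static graph.

```python
# Before: early return guarded by a condition (dynamic-graph style).
def forward(x):
    if x > 0:
        return x * 2
    return x - 1

# After (conceptually): the return is folded into an equivalent if/else
# whose branches assign to one output, mirroring what Dy2Stat generates.
def forward_transformed(x):
    if x > 0:
        out = x * 2
    else:
        out = x - 1
    return out
```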
- Committed by Huihuang Zheng
The GridGenerator model failed because the output shape of `linspace` is (-1). The reason is that the C++ InferShape fixes the shape to (-1) (https://github.com/PaddlePaddle/Paddle/blob/5da3d514ebaa6fffd48c4a2e6bb5b16268dae92e/paddle/fluid/operators/linspace_op.cc#L49). We cannot set the shape in C++ InferShape because the tensor may not be initialized at compile time, but when the input `num` of `linspace` is an integer, the shape is known at compile time. This PR simply sets the shape in Python and adds GridGenerator as a unittest.
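A small sketch of the behavior after this fix, assuming the standard `paddle.linspace` API: when `num` is a Python int, the Python layer can set the static shape directly.

```python
import paddle

paddle.enable_static()
# `num` is a plain integer, so the output shape is known at compile time
# and the Python API can set it to [5] instead of the (-1) placeholder.
out = paddle.linspace(0, 1, 5, dtype='float32')
print(out.shape)  # expected to report 5 rather than -1 after this change
```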
- 27 Nov 2020, 27 commits
- Committed by Aurelius84
- Committed by ShenLiang
* add reducer
* refine event for memory copy
* add concat & split for allreduce
* apply concat & split for fused tensors (see the sketch below)
* fix nccl dependency
* fix the unittest, compile problem and DDP initialization problem
* fix unittest for mac & add some comments & solve the repeated param in sublayers
* fix unittest for windows & fix document
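A conceptual sketch (not the reducer's actual implementation) of the concat & split fusion for allreduce mentioned above: several gradients are packed into one flat buffer, reduced with a single collective call, then unpacked into the original shapes.

```python
import numpy as np
import paddle

# Fuse gradients into one buffer so a single collective covers all of them.
grads = [paddle.rand([4]), paddle.rand([2, 3])]
shapes = [g.shape for g in grads]
sizes = [int(np.prod(s)) for s in shapes]

fused = paddle.concat([paddle.flatten(g) for g in grads])
# paddle.distributed.all_reduce(fused)  # one allreduce instead of len(grads)
pieces = paddle.split(fused, sizes)
restored = [paddle.reshape(p, s) for p, s in zip(pieces, shapes)]
```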
- Committed by lilong12
Update the expand_as op to use the shape of the target tensor instead of the target tensor itself. (#29020)
* update, test=develop
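A minimal sketch of the updated semantics, assuming `paddle.expand_as`: only the target tensor's shape is consumed, so its values never matter.

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0]])  # shape [1, 3]
y = paddle.zeros([4, 3])                 # only y's shape is used
out = paddle.expand_as(x, y)
print(out.shape)  # [4, 3]
```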
- Committed by Kaipeng Deng
* alias yolo_loss & decode_yolo_box to paddle.vision. test=develop
- Committed by Shibo Tao
- Committed by LutaoChu
add paddle.subtract, optimize paddle.maximum and paddle.minimum
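Quick usage of the element-wise APIs touched here (a sketch; the comments give the expected values):

```python
import paddle

x = paddle.to_tensor([5.0, 3.0, 1.0])
y = paddle.to_tensor([2.0, 4.0, 1.0])
print(paddle.subtract(x, y))  # [3., -1., 0.]
print(paddle.maximum(x, y))   # [5., 4., 1.]
print(paddle.minimum(x, y))   # [2., 3., 1.]
```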
- Committed by 徐铭远
* fix doc example, test=develop, test=document_fix
- Committed by Chen Long
- Committed by yukavio
* resolve the prettytable dependency in the flops API
* add the unittest dependency
* temp
- Committed by pangyoki
* fix Categorical en doc
* fix doc for apis
* remove numpy in sample code
- Committed by LielinJiang
* enhance logger callback for benchmark
- Committed by Jack Zhou
Add Eigen GRU and fix the dropout bug in the RNN
- Committed by yaoxuefeng
- Committed by lilong12
- Committed by liym27
[Dynamic-to-Static] Support **kwargs as input of the function which is decorated by `jit.save.to_static` (#29098)
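A hedged sketch of the newly supported pattern, assuming the decorator in question is `paddle.jit.to_static`; the function and keyword names below are illustrative, not from the PR:

```python
import paddle

@paddle.jit.to_static
def scale_fn(x, **kwargs):
    # **kwargs is now accepted on a to_static-decorated function.
    factor = kwargs.get('factor', 2.0)
    return x * factor

x = paddle.to_tensor([1.0, 2.0])
print(scale_fn(x, factor=3.0))  # [3., 6.]
```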
- Committed by YUNSHEN XIE
- Committed by GaoWei8
- Committed by lilong12
- Committed by Chen Weihang
- Committed by guofei
* Optimize the unittest test_imperative_out_scale, test=develop
- Committed by Shang Zhizhou
* remove -DSUPPORTS_CUDA_FP16 in cuda.cmake
* compile with cuda9
* add some unittests
* notest;test=coverage
* add unittest for trt plugins swish && split
* update ernie unittest
* fix some error messages
* remove repeated judgement of CUDA version in mbEltwiseLayerNormOpConverter
* fix compile error when CUDA_ARCH_NAME < Pascal
* fix compile error
* update unittest timeout
* compile with cuda9
* update error msg
* fix code style
* add some comments
* add define IF_CUDA_ARCH_SUPPORT_FP16
* rename IF_CUDA_ARCH_SUPPORT_FP16 to CUDA_ARCH_FP16_SUPPORTED
- Committed by Chen Weihang
* add symlink force for unittest
* open unittest
- Committed by xiaoting
* fix interpolate example, test=develop, test=document_fix
* fix format, test=develop, test=document_fix
* update upsample doc, test=develop, test=document_fix
- Committed by whs
* 1. grid_sample: fix has_print
* 2. conv1d_transpose: fix code_example error
* 3. conv1d
* 4. affine_grid: has_print, has_disable_static
* 5. Conv1DTranspose: fix code_example error, has_disable_static
* 6. Conv1d: code_example, has_disable_static
- Committed by Guanghua Yu
- Committed by Chen Weihang
- Committed by Chen Weihang
- 26 Nov 2020, 8 commits
- Committed by lilong12
* update, test=develop
- Committed by Noel
Fix the docs for some ops
- Committed by Leo Chen
* split train_mode and has_grad (see the sketch below)
* fix format
* fix CI problems
* fix sample code
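For context, the two switches this commit separates roughly correspond to the user-facing controls below (an illustrative sketch, not the internal tracer code):

```python
import paddle

model = paddle.nn.Sequential(paddle.nn.Linear(4, 4), paddle.nn.Dropout(0.5))
model.eval()            # train/eval mode: affects layers such as Dropout and BatchNorm
with paddle.no_grad():  # gradient switch: stops autograd recording only
    out = model(paddle.rand([2, 4]))
```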
- Committed by ShenLiang
* add InMemoryDataset
- Committed by ShenLiang
- Committed by YUNSHEN XIE
- Committed by JZ-LIANG
* add lars to fleet meta optimizer (see the sketch after this list)
* add lamb to proto
* add lamb to fleet meta optimizer
* fixed syntax bug
* fixed syntax bug
* fixed syntax error in lamb, add config setter of lamb in distributed_strategy
* trigger unittest to rerun
* add new unittest func for lamb
* revise unittest for lars and lamb
* revise dgc meta unittest
* revise lars document in distributed_strategy
* revise lars lamb document in distributed_strategy.py
* revise lars lamb document in distributed_strategy.py
* add weight decay exclude logic to lars
* restore optimizer.py
* restore optimizer.py as develop except lars
* add epsilon and exclude fn to distributed_strategy
* add lars epsilon
* revise unittest for fleet lars and lamb
* revise lars lamb unittest for CI coverage
* revise lars argument api
* revise lars argument api
* revise lars argument api
* revise api doc of lars
* fix op role
* add sharding save and add_sync_comm_for_test function
* add comm_analyse to utils
* revise sharding_utils
* add sharding saving unittest
* revise sharding utils for unittest
* revise sharding en doc
* update sharding utils api
* add doc for sharding
* fixed bug in sharding var size count
* update varsize count in sharding
* fix sharding num_nccl_comm
* Revert "fix sharding num_nccl_comm" (this reverts commit d51587c15e9323acf226ddd36154275f0d1daf76)
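A hedged sketch of enabling the LARS meta optimizer through `paddle.distributed.fleet.DistributedStrategy`; the config keys follow the fields this commit mentions (epsilon, weight-decay exclusion), and the values are illustrative:

```python
import paddle.distributed.fleet as fleet

# Illustrative configuration only; key names and defaults may differ by version.
strategy = fleet.DistributedStrategy()
strategy.lars = True
strategy.lars_configs = {
    "lars_coeff": 0.001,
    "lars_weight_decay": 0.0005,
    "epsilon": 0.0,
    "exclude_from_weight_decay": ["batch_norm", ".b_0"],
}
# The strategy is then passed to fleet.distributed_optimizer(optimizer, strategy).
```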