- 29 Jul 2020, 12 commits
-
-
Committed by cc
* Remove the output for moving_average_abs_max_scale op, test=develop
-
Committed by 石晓伟
-
Committed by Dong Daxiang
* refine the strategy compiler and meta optimizers; rename async to a_sync
-
Committed by zhupengyang
-
Committed by Chen Weihang
* unify the signal error format
* refine the signal error message
-
Committed by tianshuo78520a
-
Committed by Zhou Wei
-
Committed by Chen Weihang
* remove ProgramTranslator.save_inference_model
* adapt save_quantized_model
* revert buffer check implementation
* remove useless import function
-
Committed by Chen Weihang
* simplify the buffered reader to improve DataLoader performance
* fix 22 failed unittests
* fix cuda pinned context condition
* fix test_reader_reset failure
* fix two failed unittests
* change unittest place
* polish error message
* polish cast op GetExpectedKernelType
* remove debug info in unittest
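The buffered-reader idea behind this commit can be illustrated with a minimal generic sketch: a background thread prefetches items from an underlying reader into a bounded queue, so reading overlaps with downstream computation. This is an assumption-laden illustration of the technique, not Paddle's actual BufferedReader implementation.

```python
import threading
import queue

def buffered_reader(reader, buffer_size=2):
    """Wrap an iterable so items are prefetched in a background
    thread into a bounded queue (generic sketch only)."""
    q = queue.Queue(maxsize=buffer_size)
    _END = object()  # sentinel marking exhaustion of the reader

    def producer():
        for item in reader:
            q.put(item)      # blocks when the buffer is full
        q.put(_END)

    threading.Thread(target=producer, daemon=True).start()

    def consumer():
        while True:
            item = q.get()
            if item is _END:
                return
            yield item

    return consumer()

# Iteration order is preserved while reading overlaps with compute.
print(list(buffered_reader(range(5))))  # -> [0, 1, 2, 3, 4]
```

The bounded queue caps memory use: the producer stalls once `buffer_size` items are waiting, rather than reading the whole dataset ahead.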
-
Committed by Pei Yang
-
Committed by Zhou Wei
-
Committed by Huihuang Zheng
Enhance the TracedLayer error message. Note: this PR uses assert for type checks in some places and check_type in others, because check_type skips checking under dygraph mode.
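The assert-vs-check_type distinction can be sketched as follows. All names here are hypothetical stand-ins for illustration, not Paddle's actual internals: the point is that a mode-aware checker silently returns under one mode, so a plain assert is needed where the check must always fire.

```python
# Hypothetical sketch: a `check_type`-style helper returns early when a
# dygraph/imperative-mode flag is set, so it can miss type errors there.
_in_dygraph_mode = True  # imagine the framework toggles this flag

def check_type(value, name, expected_type):
    if _in_dygraph_mode:
        return  # skipped entirely under dygraph mode
    if not isinstance(value, expected_type):
        raise TypeError(f"{name} must be {expected_type.__name__}")

def strict_check(value, name, expected_type):
    # A plain assert runs regardless of mode (unless Python -O strips it).
    assert isinstance(value, expected_type), (
        f"{name} must be {expected_type.__name__}, "
        f"got {type(value).__name__}")

check_type(123, "layer", str)        # silently passes: dygraph mode skips it
try:
    strict_check(123, "layer", str)  # always raises
except AssertionError as e:
    print("caught:", e)
```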
-
- 28 Jul 2020, 12 commits
-
-
Committed by yukavio
* save the inference model for user-defined quantization models
* fixed ci coverage
-
Committed by Pei Yang
-
Committed by tianshuo78520a
-
Committed by mapingshuo
-
Committed by zhangchunle
-
Committed by Pei Yang
-
Committed by Dong Daxiang
* add more settings for distributed strategy. Basically, DistributedStrategy has several parts of configuration:
  - BuildStrategy: the same as paddle.fluid.BuildStrategy, but the distributed arguments are moved out of BuildStrategy
  - ExecutionStrategy: the same as paddle.fluid.ExecutionStrategy
  - collective communication configs: nccl_comm_num, hierarchical allreduce, and so on
  - distributed algorithms: async update (mainly used in PS), lars, lamb, and so on
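The four-part grouping described above can be sketched with plain dataclasses. Every field name below is illustrative only, loosely mirroring the commit message; this is not the real fleet API, just a picture of how the configuration splits into build, execution, communication, and algorithm options.

```python
from dataclasses import dataclass, field

@dataclass
class BuildStrategyCfg:
    # graph-build options; distributed args are moved out of here
    fuse_elewise_add_act_ops: bool = False

@dataclass
class ExecutionStrategyCfg:
    # executor options
    num_threads: int = 1

@dataclass
class DistributedStrategyCfg:
    build_strategy: BuildStrategyCfg = field(default_factory=BuildStrategyCfg)
    execution_strategy: ExecutionStrategyCfg = field(
        default_factory=ExecutionStrategyCfg)
    # collective communication configs
    nccl_comm_num: int = 1
    use_hierarchical_allreduce: bool = False
    # distributed algorithms
    a_sync: bool = False   # async update, mainly used in parameter-server mode
    lars: bool = False
    lamb: bool = False

strategy = DistributedStrategyCfg(nccl_comm_num=2, a_sync=True)
print(strategy.nccl_comm_num, strategy.a_sync)  # -> 2 True
```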
-
Committed by Sylwester Fraczek
-
Committed by Chen Weihang
* polish framework error messages, part 3
* polish details
* fix error message print error
-
Committed by arlesniak
* Added DNNL cache management for DyGraph
* move FLAGS_use_mkldnn to a more general CMakeLists; make use of the flag in ClearGradients
* add missing file
* fixes after review
* bring back the original idea of a place for the 'use_mkldnn' flag so it is accessible from platform and imperative
* removed a duplicate and added docs
* fixes for CI
-
Committed by cc
-
Committed by zhupengyang
-
- 27 Jul 2020, 6 commits
-
-
Committed by wangchaochaohu
-
Committed by Wojciech Uss
test=develop
-
Committed by LutaoChu
fix the unexpected behavior of the cross op when axis=0
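The expected semantics of a cross product taken along axis 0 can be stated with a small plain-Python reference: for a 3xN matrix, each column is a 3-vector, and the result should pair up columns element-wise. This is only a sketch of the intended behavior, not Paddle's implementation.

```python
def cross3(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def cross_axis0(x, y):
    """x and y are 3xN matrices (lists of rows); the 3-vectors lie
    along axis 0, i.e. each column is one vector."""
    n = len(x[0])
    cols = [cross3([x[r][c] for r in range(3)],
                   [y[r][c] for r in range(3)]) for c in range(n)]
    # transpose back so the result is again 3xN
    return [[cols[c][r] for c in range(n)] for r in range(3)]

x = [[1, 0], [0, 1], [0, 0]]   # columns: e1, e2
y = [[0, 0], [1, 0], [0, 1]]   # columns: e2, e3
# e1 x e2 = e3 and e2 x e3 = e1, column by column:
print(cross_axis0(x, y))       # -> [[0, 1], [0, 0], [1, 0]]
```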
-
Committed by tianshuo78520a
-
Committed by mapingshuo
-
Committed by Yi Liu
test=develop
-
- 24 Jul 2020, 10 commits
-
-
Committed by Huihuang Zheng
Based on the comment at https://github.com/PaddlePaddle/Paddle/blob/b5f8784cab94eae785659787fc529870c87b254c/paddle/fluid/framework/details/build_strategy.h#L49 , the unit test comparing Reduce and AllReduce must have some diff. PR_CI_Night runs on a P40 machine with 8GB of GPU memory, smaller than the 16GB on the normal CI machines, so we previously decreased the batch size to make it runnable: https://github.com/PaddlePaddle/Paddle/pull/24651/files . Decreasing the batch size makes the difference occur more often, so this PR replaces the absolute delta with a relative delta. Before this PR the unit test failed with probability of about 1/100; after this PR it doesn't happen.
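The tolerance change described above boils down to a simple difference in how two losses are compared: an absolute delta allows a fixed gap, while a relative delta lets the allowed gap scale with the magnitude of the values. A minimal sketch of the two checks (the numbers are made up for illustration, not taken from the test):

```python
def within_abs_delta(a, b, delta):
    """Pass iff |a - b| is within a fixed absolute tolerance."""
    return abs(a - b) <= delta

def within_rel_delta(a, b, rel):
    """Pass iff |a - b| is within a fraction of the larger magnitude."""
    return abs(a - b) <= rel * max(abs(a), abs(b))

reduce_loss, allreduce_loss = 105.0, 104.0
print(within_abs_delta(reduce_loss, allreduce_loss, 0.5))   # False: gap 1.0 > 0.5
print(within_rel_delta(reduce_loss, allreduce_loss, 0.01))  # True: 1% of 105 is 1.05
```

For large loss values a fixed absolute delta becomes too strict, which is why switching to a relative delta removed the flaky failures.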
-
Committed by Zhen Wang
* fix the double grad bug for StarGAN. test=develop
* update the retain_graph parameter doc. test=develop
* add a unit test for the retain_graph parameter. test=develop
-
Committed by Chen Weihang
* fix jit.save input type change error
* add unittest
-
Committed by Chen Weihang
* polish framework error messages, part 2
* polish details
-
Committed by Dong Daxiang
-
Committed by Dong Daxiang
-
Committed by 123malin
* test=develop, add logo
-
Committed by mapingshuo
-
Committed by qingqing01
* Refine Model
  1. Take the network (an instance of Layer) as the input of Model.
  2. Refine set_dict/load_dict of Layer.
  3. Refine the Input interface, and update the code sample about Input accordingly.
-
Committed by xujiaqi01
* add fleet distributed metrics
* test=develop
-