- 24 Aug 2021, 11 commits

Committed by wanghuancoder:
* add fetch, test=develop
* fix fetch2op, test=develop
* fix fetch2op, test=develop
* refine, test=develop
* fix fetch ctx, test=develop
* add wait, test=develop
* rename fetch2 to fetch_v2, test=develop
* merge, test=develop

Committed by Haohongxiang:
* Add no_sync in data parallel for dynamic graph
* modify UT of no_sync
* delete test_parallel_dygraph_dataparallel_no_sync.py
* add test_parallel_dygraph_no_sync.py
* modify run_trainer_with_spawn in UTs
* Add UT of complex control flow in no_sync
* add specific descriptions and notes for no_sync
* check code style
* modify UT's TIMEOUT in CMakeLists.txt
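
Since this entry adds the user-facing `no_sync` API on `paddle.DataParallel`, here is a minimal usage sketch for the usual gradient-accumulation pattern. It assumes a multi-card run started via `paddle.distributed.launch` or `spawn`; the model, optimizer, and data are placeholders.

```python
# Hedged sketch of DataParallel.no_sync for gradient accumulation; assumes the
# script is launched with paddle.distributed.launch or paddle.distributed.spawn.
import paddle
import paddle.distributed as dist

dist.init_parallel_env()
model = paddle.DataParallel(paddle.nn.Linear(10, 1))
opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())

with model.no_sync():                        # skip the gradient all-reduce here
    loss = model(paddle.rand([4, 10])).mean()
    loss.backward()

loss = model(paddle.rand([4, 10])).mean()
loss.backward()                              # gradients are synchronized on this step
opt.step()
opt.clear_grad()
```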

Committed by duanboqiang:
* fix bmm bug
* bmm style
* fix bmm

Committed by Jacek Czaja:
* concat refactoring draft
* compilation fixes
* yet another compilation fix
* fix
* compilation fix
* fixes to compilation
* another compilation fix
* fix
* Added overloaded AcquirePrimitiveDesc for concat
* fix
* reserve introduced
* UT fixes
* test concat int8 improved
* fixes
* fix to crash
* lint fixes
* fixes after review
* some other fixes from review

Committed by wanghuancoder

Committed by 王明冬

Committed by Zeng Jinle

Committed by ronnywang:
* add conv_op_npu and test
* add more tests
* clean headers & support fp16
* update
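
As a rough illustration of what the new NPU conv kernel enables, below is a hedged sketch of running a plain conv2d on an Ascend device. The `"npu:0"` device string and the availability of an NPU-enabled build are assumptions; without them, drop the `set_device` call and the op runs on the default device.

```python
# Hedged sketch: conv2d dispatched to the new NPU kernel on an Ascend build.
import paddle
import paddle.nn.functional as F

paddle.set_device("npu:0")                          # assumption: NPU-enabled build
x = paddle.rand([1, 3, 32, 32], dtype="float32")    # NCHW input
w = paddle.rand([8, 3, 3, 3], dtype="float32")      # 8 filters of size 3x3
y = F.conv2d(x, w, stride=1, padding=1)
print(y.shape)                                      # [1, 8, 32, 32]
```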

Committed by ronnywang:
* add pool2d_op_npu and test
* update
* update pool2d_backward_navie
* clean headers

Committed by TeslaZhao

Committed by Yulong Ao:
* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* add dist
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update
* update
* delete unused proto
* restore op_desc
* restore type_defs
* update var_desc
* remove dims_mapping for proto_pybind
* update interface.py
* update framework.py
* update
* update
* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* [WIP] Add the auto completion feature and related codes
* [WIP] Improve the auto completion and related codes
* [WIP] Make the auto completion to support data-parallel
* [WIP] Make the completion support mp and dp+mp
* [WIP] Refactor auto completion unit test for MLP
* [WIP] Refactor the implementation of DistributedOperatorImpl
* [WIP] Improve dims_mapping update rule and fix a bug
* [WIP] Support auto completion for one transformer decoder layer
* [WIP] Add a minor change
* [WIP] Fix a bug within the unit test
* Shard XShape tensor, add embedding completion and refactor code
* Add the distributed_operators dir to setup.py.in
* Improve the completion process and add the unittest for gpt
* fix process_mesh ut
* fix process_mesh ut
* update
* update, test=develop
* Add support for automatically completing distributed attrs of special ops
* update
* update
* update
* fix doc sample codes, test=develop
* improve coverage, test=develop
* add static_mode check, test=develop
* Model the cluster for cost model and physical mapping
* update, test=develop
* add set_placement, test=develop
* Add the check to make sure the candidate tensors' size is greater than zero
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update, test=develop
* Auto mark dist attrs annotated by user
* update ndarray to nested list, test=develop
* update, test=develop
* Add auto-completion module for auto-parallel (based on PR#33804)
* Remove unnecessary files
* Remove unrelated files for the auto completion pr
* Update the unit test to improve the coverage
* Modify codes based on reviews
* Minor changes for CI
* Improve some codes based on new comments
* Fix bugs caused by shallow copy in attributes.py
* Improve amend_distributed_attr_for_program in context.py
* Other changes for weihang's comments

Co-authored-by: Nsandyhouse <lilong12@baidu.com>
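
The entry above introduces the shard_xx annotation API, process meshes, and dims_mapping attributes. The snippet below is only a hypothetical sketch of how such sharding annotations are typically expressed: the module path, the `ProcessMesh` class, and the `shard_tensor` signature are assumptions inferred from the commit titles and may not match the interface that actually landed.

```python
# Hypothetical sketch only: ProcessMesh, shard_tensor, and the dims_mapping
# argument below are assumptions based on the commit titles, not a verified
# paddle.distributed.auto_parallel API.
import paddle
import paddle.distributed as dist

paddle.enable_static()
mesh = dist.ProcessMesh([[0, 1], [2, 3]])        # assumed 2x2 logical process mesh
x = paddle.static.data(name="x", shape=[8, 1024], dtype="float32")

# Shard dim 0 of x across the first mesh axis; -1 would mean "replicated".
dist.shard_tensor(x, mesh, [0, -1])              # assumed signature
```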

- 23 Aug 2021, 13 commits

Committed by Bo Liu

Committed by Wilber

Committed by wenbin

Committed by Jacek Czaja:
* disabled interpolate onednn
* compilation fix
* draft of batch_norm cache disabling
* fixes to UT

Committed by Peihan:
* enable infer_ut on windows
* remove lib calculation & time
* unset http_proxy when download bos file on windows

Committed by Li Min:
Refactor the organization of the layer_norm CUDA impl so that it can be reused in the fused attention op. Extract the layer_norm CUDA impl from layer_norm_op.cu to layer_norm_kernel.cu.h. Define fused/attention_layer_norm.h, which can be used in the fused attention op in the next PR.

Committed by zyfncg:
* Support getitem by Bool index
* delete some debug info of bool index
* support the case that the shape of bool index is different from indexed tensor
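
A small example of the boolean-mask indexing described above; this is a sketch, not taken from the PR's unit tests.

```python
# Boolean-mask __getitem__: the mask's shape may differ from the tensor's.
import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
mask = paddle.to_tensor([True, False, True])   # 1-D mask over the rows of a 2-D tensor
print(x[mask])                                 # selects rows 0 and 2 -> shape [2, 2]
```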

Committed by wanghuancoder:
This reverts commit 6bacfb0e.

Committed by pangyoki

Committed by pangyoki

Committed by TeslaZhao

Committed by seemingwang

Committed by zhaoyingli:
* adamw support cuda
* adamw support cuda
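
The CUDA port does not change the Python-side optimizer interface; for reference, a minimal AdamW step looks like the sketch below, assuming a GPU-enabled build.

```python
# Minimal AdamW step; on a CUDA build the parameter update now runs in the
# dedicated GPU kernel, but the user-facing API is unchanged.
import paddle

paddle.set_device("gpu")                     # assumption: CUDA-enabled build
model = paddle.nn.Linear(10, 10)
opt = paddle.optimizer.AdamW(learning_rate=1e-3,
                             parameters=model.parameters(),
                             weight_decay=0.01)
loss = model(paddle.rand([4, 10])).mean()
loss.backward()
opt.step()
opt.clear_grad()
```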

- 22 Aug 2021, 1 commit

Committed by Zhang Zheng

- 20 Aug 2021, 10 commits

Committed by Hao Lin

Committed by Yuang Liu

Committed by lzzyzlbb:
* add rmsprop npu
* add argsort npu
* add argsort npu
* modify according to review
* modify sharedatawith according to review
* modify reshape according to review
* rm dygraph=false

Committed by Sing_chan:
* [NPU] Support npu kernel for pad3d op
* fix for comment of zhouwei25
* fix some bugs according to qili93's comments
* add support and test for paddings in input
* delete VLOG used for debug
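
For context, pad3d is the kernel behind padding of 5-D (NCDHW) tensors through the public `F.pad` API. A hedged sketch follows; actually running it on the new NPU kernel assumes an Ascend build plus `paddle.set_device("npu")`, otherwise it executes on the default device.

```python
# Padding a 5-D NCDHW tensor; this maps onto the pad3d kernel.
import paddle
import paddle.nn.functional as F

x = paddle.rand([1, 2, 3, 4, 5])                   # N, C, D, H, W
# For 5-D input, pad is [left, right, top, bottom, front, back].
y = F.pad(x, pad=[1, 1, 2, 2, 0, 0], mode="constant", value=0.0,
          data_format="NCDHW")
print(y.shape)                                     # [1, 2, 3, 8, 7]
```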

Committed by wanghuancoder:
* use spin lock in auto growth allocator, test=develop
* use pthread spin lock, test=develop
* use lock guard, test=develop
* use malloc spin lock, test=develop
* use lock_guard, test=develop

Committed by wangguanqun:
* add trainer desc config to distributed strategy
* code style modified
* data_feed set lod

Committed by zhaoyingli:
* add depthwise_conv2d npu
* add some tests
* Delete test_unique_op_npu.py
* delete trans input

Committed by zhaoyingli:
* [NPU] Support npu op where and where grad
* fix use const_cast
* delete a test

Committed by Peihan

Committed by JYChen:
* add (N,C,*) input support for GroupNorm
* --amend
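
To illustrate the (N, C, *) support, the same GroupNorm layer can now be applied to inputs with any number of trailing spatial dims, for example 3-D (N, C, L) or 5-D (N, C, D, H, W). A sketch, not taken from the PR's tests:

```python
# GroupNorm over (N, C, *) inputs: all dims after N and C are treated as the
# normalization extent, so 3-D and 5-D inputs work with the same layer.
import paddle

gn = paddle.nn.GroupNorm(num_groups=4, num_channels=16)
x3d = paddle.rand([2, 16, 32])            # N, C, L
x5d = paddle.rand([2, 16, 4, 8, 8])       # N, C, D, H, W
print(gn(x3d).shape, gn(x5d).shape)       # [2, 16, 32] and [2, 16, 4, 8, 8]
```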

- 19 Aug 2021, 5 commits

Committed by JingZhuangzhuang:
* add npu sin op
* [NPU] Support npu kernel for sin op
* modify support npu kernel for sin op
* modify support npu kernel for sin op
* modify npu sin op
* modify npu sin op
* add sin op npu

Committed by Peihan:
* add slim resnet50 quant model in pr-ci-inference
* enable resnet50_quant multi_thread4_trt_int8_bz1
* remove LOG(FATAL)

Committed by Yiqun Liu:
Add dimension check for inverse to avoid a divide-by-zero error when the input's shape is [0, 0, 0]. (#34996)
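
For reference, the guarded op is the batched matrix inverse; well-formed inputs behave as before, while a zero-sized input such as shape [0, 0, 0] should now fail the added shape check instead of dividing by zero. A quick sketch:

```python
# Batched matrix inverse on a well-formed input; a [0, 0, 0]-shaped input is
# now rejected by the added dimension check rather than hitting divide-by-zero.
import paddle

mats = paddle.to_tensor([[[2.0, 0.0],
                          [0.0, 4.0]]])      # batch of one 2x2 matrix
print(paddle.inverse(mats))                   # [[[0.5, 0.0], [0.0, 0.25]]]
```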

Committed by ceci3:
* fix batch_norm and instance norm when input is []

Committed by tianshuo78520a:
* notest;test=gpu-inference
* notest;test=gpu-inference
* notest;test=gpu-inference
* notest;test=gpu-inference
* fix error
* notest;test=gpu-inference
* notest;test=gpu-inference
* notest;test=gpu-inference
* test=gpu-inference