- 04 Nov 2021, 1 commit
Submitted by huangxu96
Add Static CostModel. Static data is based on the op benchmark system.
- 29 Oct 2021, 1 commit
Submitted by Ming-Xu Huang
- 28 Oct 2021, 1 commit
Submitted by pangyoki
* add doc for show() in paddle.version
* fix format
* print cuda and cudnn in show API
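For reference, a minimal sketch of calling the documented API (the printed fields and values are illustrative and depend on the local build):

```python
import paddle

# show() prints build metadata; this commit adds the CUDA and cuDNN
# versions to its output.
paddle.version.show()
# Illustrative output (values vary per build):
#   commit: 1f8f246...
#   cuda: 10.2
#   cudnn: 7.6.5
```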
- 27 Oct 2021, 2 commits
Submitted by pangyoki
* add paddle.version.cuda and paddle.version.cudnn API
* fix little bug
* fix bug
* add doc string
* fix mkdir error
* fix windows path
* fix new paddle/version path
* fix unittest
* fix format
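A short usage sketch of the two new APIs (the return values shown are illustrative; a CPU-only build returns the string 'False'):

```python
import paddle

# Query the CUDA / cuDNN versions this Paddle wheel was compiled against.
print(paddle.version.cuda())   # e.g. '10.2'
print(paddle.version.cudnn())  # e.g. '7.6.5'
```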
Submitted by zhangkaihuo
This PR adds the layer-level code for fused_transformer, including the FusedFeedForward layer and the FusedTransformerEncoderLayer.
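A minimal sketch of the two layers, assuming the paddle.incubate.nn module path used by later releases (both require a GPU build; shapes are illustrative):

```python
import paddle
from paddle.incubate.nn import FusedFeedForward, FusedTransformerEncoderLayer

# Illustrative input: (batch, seq_len, d_model).
x = paddle.rand([2, 16, 128])

ffn = FusedFeedForward(d_model=128, dim_feedforward=512)
enc = FusedTransformerEncoderLayer(d_model=128, nhead=8, dim_feedforward=512)

print(ffn(x).shape)  # [2, 16, 128]
print(enc(x).shape)  # [2, 16, 128]
```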
- 22 Sep 2021, 2 commits
Submitted by JingZhuangzhuang
Submitted by zhouweiwei2014
* support external third_party lapack on Linux/Windows/Mac
* fix ci
- 16 Sep 2021, 2 commits
Submitted by Zhong Hui
Submitted by Shang Zhizhou
- 31 Aug 2021, 1 commit
Submitted by Zhanlue Yang
[Background] Code-size expansion can be irreversible in the long run, leading to huge release packages that not only hamper user experience but also exceed a hard size limit on PyPI. The NV_FATBIN section takes up 86% of the compiled dylib size, owing to the vast number of GPU arches supported. This PR prunes the NV_FATBIN.
[Solution] The new release strategy involves two types of whl packages:
* Cubin PIP package: maintains a smaller window of GPU arch support, containing sm_60, sm_70, sm_75, and sm_80 cubins, covering Pascal through Ampere arches.
* JIT release package: a backup for the Cubin PIP package, containing compute_35, compute_50, compute_60, compute_70, compute_75, and compute_80, with the best performance and GPU arch coverage; however, it takes around 10 minutes to install due to JIT compilation.
[How to use] The new release strategy is disabled by default. To compile the Cubin PIP package, add -DCUBIN_RELEASE_PIP to the cmake flags; to compile the JIT release package, add -DJIT_RELEASE_WHL.
- 24 Aug 2021, 1 commit
Submitted by Yulong Ao
* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* add dist
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update, test=develop
* update
* update
* update
* update
* update
* update, test=develop
* update, test=develop
* update
* update
* delete unused proto
* restore op_desc
* restore type_defs
* update var_desc
* remove dims_mapping for proto_pybind
* update interface.py
* update framework.py
* update
* update
* add auto_parallel dir
* mv to paddle.distributed
* add shard_xx api
* add distributed attrs for var
* add ut, test=develop
* [WIP] Add the auto completion feature and related codes
* [WIP] Improve the auto completion and related codes
* [WIP] Make the auto completion support data-parallel
* [WIP] Make the completion support mp and dp+mp
* [WIP] Refactor auto completion unit test for MLP
* [WIP] Refactor the implementation of DistributedOperatorImpl
* [WIP] Improve dims_mapping update rule and fix a bug
* [WIP] Support auto completion for one transformer decoder layer
* [WIP] Add a minor change
* [WIP] Fix a bug within the unit test
* Shard XShape tensor, add embedding completion and refactor code
* Add the distributed_operators dir to setup.py.in
* Improve the completion process and add the unittest for gpt
* fix process_mesh ut
* fix process_mesh ut
* update
* update, test=develop
* Add support for automatically completing distributed attrs of special ops
* update
* update
* update
* fix doc sample codes, test=develop
* improve coverage, test=develop
* add static_mode check, test=develop
* Model the cluster for cost model and physical mapping
* update, test=develop
* add set_placement, test=develop
* Add the check to make sure the candidate tensors' size is greater than zero
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update doc, test=develop
* update, test=develop
* Auto mark dist attrs annotated by user
* update ndarray to nested list, test=develop
* update, test=develop
* Add auto-completion module for auto-parallel (based on PR#33804)
* Remove unnecessary files
* Remove unrelated files for the auto completion pr
* Update the unit test to improve the coverage
* Modify codes based on reviews
* Minor changes for CI
* Improve some codes based on new comments
* Fix bugs caused by shallow copy in attributes.py
* Improve amend_distributed_attr_for_program in context.py
* Other changes for weihang's comments
Co-authored-by: sandyhouse <lilong12@baidu.com>
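A rough sketch of the shard_xx-style annotation this PR adds. The module path and the (mesh, dims_mapping) signature are assumptions based on the early auto_parallel interface and were reworked in later releases:

```python
import paddle
import paddle.distributed as dist

# Hypothetical early auto_parallel annotation; the exact signature is an
# assumption, not the final released API.
mesh = dist.ProcessMesh([[0, 1], [2, 3]])  # 2x2 logical device mesh

x = paddle.ones([4, 6])
# Shard axis 0 of x along the first mesh dimension; -1 marks a
# replicated (non-sharded) tensor axis.
dist.shard_tensor(x, mesh, [0, -1])
```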
- 11 Aug 2021, 1 commit
Submitted by lilong12
* add auto_parallel apis
- 10 Aug 2021, 1 commit
Submitted by chentianyu03
* add any.hpp to utils and replace boost::any with self-defined paddle::any
* add copy any.hpp to custom op depends
* modify any.hpp include path
* remove boost from setup.py.in
* add copy any.hpp to custom op depends
* move any.hpp to the paddle/utils/ dir
* move any.h to the extension/include directory
* copy utils to the right directories
- 05 Aug 2021, 1 commit
Submitted by 0x45f
* integrated gast library
* integrated gast library
* fix unittest and remove ast2.py
* remove 'gast' from __all__ in __init__.py
* add copyright in other files
* fix copyright
- 04 Aug 2021, 1 commit
Submitted by kuizhiqing
- 19 Jul 2021, 1 commit
Submitted by chentianyu03
* add cuda event and stream api
* add cuda event and stream api
* add get_current_stream api
* add get_current_stream api
* init streams
* modify get_current_stream
* modify get_current_stream
* add synchronize func
* add current_stream doc and test file
* move get_current_stream into CUDA macro
* move CudaEvent into CUDA macro
* move _get_current_stream and _device_synchronize into cuda macro
* modify the macro of cuda stream and event
* add test case for synchronize
* add paddle.devices.cuda module
* event and stream support hip
* add doc for stream and event class
* move cuda stream and event into single pybind
* add cuda_streams_py.cc to cmakelist
* add _device_synchronize and _get_current_stream to core module
* add test case for cudastream and cudaevent
* move __all__ in streams.py
* fix test fail
* add cuda to devices __all__
* fix current_stream doc writing error
* move devices to the device directory, and merge device.py into __init__.py
* add required:gpu to sample codes
* remove the cuda directory from device/__init__.py
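A minimal sketch of the stream/event APIs added here, assuming the paddle.device.cuda module path (GPU builds only):

```python
import paddle

if paddle.is_compiled_with_cuda():
    # Stream bound to the current CUDA device.
    s = paddle.device.cuda.current_stream()

    # Record an event on the current stream.
    e = paddle.device.cuda.Event()
    e.record()

    # Block the host until all device work has finished.
    paddle.device.cuda.synchronize()
```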
- 12 Jul 2021, 1 commit
Submitted by Yuang Liu
* softmax mask fuse upper triangle
* cover the not-implemented CPU code path
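A brief sketch of the fused op, assuming the paddle.incubate entry point it was exposed under (GPU-only; input shaped (batch, heads, seq_len, seq_len)):

```python
import paddle

# GPU-only fused kernel: applies an upper-triangular (causal) mask,
# then softmax over the last axis, in a single pass.
x = paddle.rand([1, 8, 32, 32])  # (batch, heads, seq_len, seq_len)
out = paddle.incubate.softmax_mask_fuse_upper_triangle(x)
print(out.shape)  # [1, 8, 32, 32]
```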
- 15 Jun 2021, 1 commit
Submitted by Wilber
- 09 Jun 2021, 1 commit
Submitted by cc
* Add wrap for functional api
* Refine the wrapped api
* Add unit test for quant functional layers
* Update all unit tests for dygraph qat
- 01 Jun 2021, 2 commits
Submitted by chentianyu03
Submitted by chentianyu03
* replace and remove complex64/128 types in custom OP and other files
* fix custom_tensor_test failure
* fix custom_conj_test failure
* fix dispatch_test_op build failure
- 27 May 2021, 1 commit
Submitted by Zhou Wei
* Unify the error message mechanism for all external APIs and enhance third-party API error messages
* fix some comments
* fix some comments
- 25 May 2021, 1 commit
Submitted by Ming-Xu Huang
- 07 May 2021, 1 commit
Submitted by Zhou Wei
* Remove paddle_custom_op dynamic libraries, change link to FLUID_CORE on Windows, and check copy_to
* fix CI
- 30 Apr 2021, 1 commit
Submitted by tianshuo78520a
* revert data_generator
* test
* add setup.py
- 25 Apr 2021, 2 commits
- 23 Apr 2021, 1 commit
Submitted by Chen Weihang
* remove useless ext headers
* fix boost header compilation failure
- 22 Apr 2021, 2 commits
Submitted by wuhuanzhou
Submitted by tianshuo78520a
- 21 Apr 2021, 1 commit
Submitted by xiemoyuan
* remove fluid for auto_checkpoint
* fix bug
- 19 Apr 2021, 1 commit
Submitted by ShenLiang
* support dp & mp
- 17 Apr 2021, 1 commit
Submitted by ShenLiang
* add model parallel support in dygraph
- 09 Apr 2021, 1 commit
Submitted by Aurelius84
* Remove old custom OP to reduce whl package volume
* [Custom OP] Remove old custom OP to reduce whl package volume
* support macos
- 07 Apr 2021, 1 commit
Submitted by zhang wenhui
* Ascend rc (#30483)
* Fix compilation on CANN 20.1 and older (#30494)
* Add distribution support (#30578)
* Build parser for Hcom* operators (#30627)
* Pass device_ids info from launch to trainer (#30632)
* Add Hccl program group (#30642)
* Add startup bash files of test_ascend_group (#30645)
* cleanup test_ascend_group.py (#30646)
* [Feature] Build parser to support distributed training (#30658)
* fix compilation on ascend-20.1 (#30722)
* Dev/fix ascend string (#30749)
* code style (#30781)
* Merge ascend_optimizer and ascend_parser (#30776)
* Ascendrc add converted ops: [range/equal/range/uniform_random/expand/squeeze], fix cast op bug (#30797)
* Add paddle ascend distribution training support (#30796)
* pass cxx_flags to gloo cmake (#30857)
* Destroy session first (#30954)
* merge
* fix, test=develop
* fix, test=develop
* fix style, test=develop
* fix, test=develop
* fix
* fix log fatal, test=develop
* fix enforce style, test=develop
* fix, test=develop
* fix, test=develop
* fix rccl, test=develop
* fix test, test=develop
* fix, test=develop
* fix, test=develop
* fix, test=develop
* fix node_num, test=develop
* fix ids str, test=develop
* fix ids str, test=develop
* fix ids str, test=develop
* fix, test=develop
* fix, test=develop
* fix, test=develop
* fix, test=develop
* fix, test=develop
* fix, test=develop
* fix, test=develop
* fix, test=develop
* fix style code, test=develop
* fix style code, test=develop
* fix style code, test=develop
* fix style code, test=develop
Co-authored-by: hutuxian <hutuxian2011@sina.cn>
Co-authored-by: gongweibao <weibao.gong@gmail.com>
Co-authored-by: Void Main <voidmain1313113@gmail.com>
Co-authored-by: Leo Chen <chenqiuliang@baidu.com>
Co-authored-by: dingsiyu <18369187719@163.com>
Co-authored-by: OleNet <olenet@126.com>
- 01 Apr 2021, 1 commit
Submitted by chentianyu03
* add custom init grad for backward function
* add custom init grad for backward function
* handle when the grad_tensor is none
* handle when the grad_tensor is none
* fix the args type error on windows platform
* modify the args order and doc
* format code
* add grad_tensor to xpu
* modify the grad_tensor type check
* add paddle.backward api to support multi tensors gradient compute
* add paddle.backward api to support multi tensors gradient compute
* add paddle.autograd module and backward api
* change tensor.backward func args
* modify tensor backward api
* remove create_graph inputs args
* add doc and example code for backward api
* when have the same tensor, throw error
* modify test Init func args
* modify the execute.Init func args in test files
* add paddle.autograd package in setup.py.in
* modify error msg, remove _run_backward method in class Tensor
* add test cases for backward api
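A small sketch of the multi-tensor backward API described above (the gradient seeds are illustrative):

```python
import paddle

x = paddle.to_tensor([1.0, 2.0], stop_gradient=False)
y = 3.0 * x
z = x * x

# Back-propagate through several outputs at once; None falls back to
# a ones-like seed for that output.
paddle.autograd.backward([y, z], grad_tensors=[paddle.ones_like(y), None])

print(x.grad)  # dy/dx + dz/dx = 3 + 2*x -> [5., 7.]
```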
- 31 Mar 2021, 1 commit
Submitted by tianshuo78520a
* fix whl package push to PyPI
* add rst
- 30 Mar 2021, 1 commit
Submitted by Zhou Wei
* Remove old custom OP to reduce whl package volume
* [Custom OP] Remove old custom OP to reduce whl package volume
- 23 Mar 2021, 1 commit
Submitted by Wilber
- 22 Mar 2021, 1 commit
Submitted by arlesniak