- 25 Mar 2022 (1 commit)

Committed by Jiabin Yang

* refactor eager flags
* fix flags error when we switch from eager to dygraph
* fix ci problem
* fix ci
* fix ci
* merge develop and fix code style
* merge develop and fix code style
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* fix op test error
* merge develop
- 22 Mar 2022 (1 commit)

Committed by Siming Dai

* add out_size shape for graph_send_recv
* fix bug in register kernel: no const int& support
* add out_size in infermeta
* change unittest
* fix unittest
* fix out_size default value
* fix doc
* delete arg mapping
* add sig
* move -1 to 0
* move -1 to 0
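For orientation, a minimal sketch of what the new out_size argument controls, written against paddle.incubate.graph_send_recv as documented for this release; the keyword names and the row-dropping behaviour are assumptions taken from that documentation:

```python
import paddle

# Edge i sends x[src_index[i]] into output row dst_index[i], reduced by pool_type.
x = paddle.to_tensor([[0., 2., 3.], [1., 4., 5.], [2., 6., 7.]])
src_index = paddle.to_tensor([0, 1, 2, 0], dtype="int32")
dst_index = paddle.to_tensor([1, 2, 1, 0], dtype="int32")

# out_size pins the first dimension of the result instead of letting the op
# infer it; here only destination rows 0 and 1 are kept.
out_sized = paddle.incubate.graph_send_recv(
    x, src_index, dst_index, pool_type="sum", out_size=2)
print(out_sized.shape)  # [2, 3]
```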
- 16 Mar 2022 (1 commit)

Committed by Zhong Hui

* segment pool support for int/int64 kernels
* add support in Python API
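A short sketch of the segment pooling API this touches, assuming the paddle.incubate.segment_sum / segment_max entry points; the integer dtype support is what this commit adds, so the example uses int64 data:

```python
import paddle

# Rows sharing a segment id are pooled together; integer inputs are
# what the new int/int64 kernels enable.
data = paddle.to_tensor([[1, 2, 3], [3, 2, 1], [4, 5, 6]], dtype="int64")
segment_ids = paddle.to_tensor([0, 0, 1], dtype="int64")

print(paddle.incubate.segment_sum(data, segment_ids))  # [[4, 4, 4], [4, 5, 6]]
print(paddle.incubate.segment_max(data, segment_ids))  # [[3, 2, 3], [4, 5, 6]]
```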
- 14 Mar 2022 (1 commit)

Committed by Zhong Hui

[multiprocessing] Add paddle.incubate.multiprocessing for sharing tensors between Python processes. (#37302)
* Add support for paddle.multiprocessing
* move multiprocessing to incubate
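A rough usage sketch, assuming paddle.incubate.multiprocessing re-exports the standard multiprocessing interface (Process, Queue) and registers reducers so tensors put on a queue are shared rather than serialized; treat those assumptions as unverified:

```python
import paddle
import paddle.incubate.multiprocessing as mp

def worker(queue):
    # The tensor arrives through the module's custom reducers instead of
    # a plain pickle round trip.
    t = queue.get()
    queue.put(t + 1)

if __name__ == "__main__":
    paddle.set_device("cpu")
    queue = mp.Queue()
    queue.put(paddle.zeros([2, 2]))
    p = mp.Process(target=worker, args=(queue,))
    p.start()
    p.join()
    print(queue.get())  # all ones
```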
- 11 Mar 2022 (1 commit)

Committed by Yuang Liu
- 01 Mar 2022 (1 commit)

Committed by sneaxiy

* vectorize lamb kernel
* remove flags, add ut
* remove useless code
* refine code, add param order
- 25 Feb 2022 (1 commit)

Committed by sneaxiy

* add multi tensor apply L2 norm
* add multi_tensor_apply code
* make sizeof(TensorMeta) smaller
* move code to distributed_fused_lamb_op.cu
* remove useless FLAGS
- 24 Feb 2022 (1 commit)

Committed by Leo Chen

* fix 'invalid escape sequence'
* fix assert error
- 19 Feb 2022 (1 commit)

Committed by sneaxiy

* add DistributedFusedLamb op
* polish code
* fix compile error
* compatible with pten changes
* fix rocm compile error
* improve coverage
* update upstream/develop
* fix cast_with_ptr.h
* add FLAGS_distributed_lamb_divide_nranks_when_allreduce=1
* fix clip before allreduce
* add use_master_param_norm
* code polish
* fix bug
* fix ROCM ci
- 28 Jan 2022 (1 commit)

Committed by zhangkaihuo
- 27 Jan 2022 (2 commits)

Committed by Siming Dai

* add the test case for the UVA
* add the context load for the uva
* Add graph_sample kernel
* Add graph_sample commit
* add new commit for graph_sample
* add unsigned long long int
* delete some remarks
* add cpu version
* add cuda eids
* add cpu eids
* delete _uva
* optimize speed: emplace_back, last_layer
* add to_uva_tensor
* add cpu return_eids choice
* add gpu return_eids choice
* add cpu reindex_nodes
* add gpu reindex_nodes
* rename op and add OMP for cpu
* add incubate api
* fix the compile problem for the PADDLE_ENFORCE and different device
* fix the rocm and windows compile problem
* add unittest for graph_sample_neighbors
* fix cpu unittest and unique problem
* fix uva unittest, fix cuda unique problem
* fix the windows compile problem
* fix the windows rand_r compile problem
* add correct unittest, add src_eids dispensable
* delete black
* combine uva unittest
* mv Sample_index to Sample_Index; check input shape; fix random sample func
* delete memset & cudaMemset
* fix according to PR comments
* fix rocm ci
* modify function names according to the specification
* fix windows_openblas ci
* refine annotations, fix windows unittest, add default value for uva device_id, fix bug for input nodes with empty neighbors
* fix rocm ci
* rename graph_sample_neighbors as graph_khop_sampler, add incubate api doc
* add data type
* fix conflict

Co-authored-by: Nwawltor <fangzeyang0904@hotmail.com>
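A rough sketch of the resulting incubate API; the argument order (CSC row / colptr, the seed nodes, and per-hop sample sizes) and the four return values are assumptions based on the commit notes and may not match the exact signature:

```python
import paddle

# Graph in CSC form: `row` lists the source node of every incoming edge,
# `colptr` marks where each node's neighbor segment starts and ends.
row = paddle.to_tensor([3, 7, 0, 9, 1, 4, 2, 9, 3, 9, 1, 9, 7], dtype="int64")
colptr = paddle.to_tensor([0, 2, 4, 5, 6, 7, 9, 11, 11, 13, 13], dtype="int64")
nodes = paddle.to_tensor([0, 8, 1, 2], dtype="int64")

# Sample up to 2 neighbors per node for 2 hops; the op also reindexes the
# sampled subgraph so downstream layers see compact node ids.
edge_src, edge_dst, sample_index, reindex_nodes = paddle.incubate.graph_khop_sampler(
    row, colptr, nodes, [2, 2])
```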
Committed by zhangkaihuo

* fix bug: 1. atten: set the default value of attn_dropout_rate to None 2. ffn: add activation parameter
* for pure fp16
* Add a SparseCsrTensor
* remove unused functional
* remove const
* remove SetMemoberTensor
* remove non_zero_nums_, the number of non zero elements of each batch can be obtained from the crows
* SparseCooTensor
* add SetMember
* merge upstream; add SetMember
* merge upstream
* merge upstream; add newline at end of file
* add newline at end of file
* remove newline at end of file
* remove newline at end of file
* stash
* use pten::framework::make_ddim
* use pten::framework::make_ddim
* merge upstream; use the latest mutable_data
* merge upstream; use the latest mutable_data
* return mutable dense tensor
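For readers unfamiliar with the two layouts these classes hold, a small conceptual illustration of the same matrix in COO and CSR form (plain NumPy, not the new C++ API):

```python
import numpy as np

dense = np.array([[0., 2., 0.],
                  [3., 0., 0.]])

# COO, the layout behind SparseCooTensor: one (row, col) pair per non-zero,
# plus the matching values.
coo_indices = np.array([[0, 1],   # rows of the non-zeros
                        [1, 0]])  # cols of the non-zeros
coo_values = np.array([2., 3.])

# CSR, the layout behind SparseCsrTensor: `crows` holds, per row, the offset
# into cols/values where that row's non-zeros begin (which is also why the
# per-batch non-zero counts can be read off crows directly).
crows = np.array([0, 1, 2])
cols = np.array([1, 0])
csr_values = np.array([2., 3.])
```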
- 22 Dec 2021 (1 commit)

Committed by Zhanlue Yang
- 26 Nov 2021 (1 commit)

Committed by Li Min

* Fix bugs when bias is None for static graph for fused_attention op.
- 23 Nov 2021 (1 commit)

Committed by Li Min

Add support for bias is None for the fused_attention op.
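The op is usually reached through paddle.incubate.nn.FusedMultiHeadAttention; a minimal sketch of the bias-free path, assuming a CUDA build and assuming the layer's bias_attr keyword (later releases split it into per-projection attrs):

```python
import paddle
from paddle.incubate.nn import FusedMultiHeadAttention

paddle.set_device("gpu")  # the fused kernel is GPU-only

# bias_attr=False skips the projection biases, i.e. the "bias is None"
# path this commit adds to the op.
attn = FusedMultiHeadAttention(embed_dim=128, num_heads=8,
                               dropout_rate=0.0, attn_dropout_rate=0.0,
                               bias_attr=False)
x = paddle.rand([2, 16, 128])  # [batch, seq_len, embed_dim]
out = attn(x)                  # same shape as x
```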
- 19 Nov 2021 (2 commits)

Committed by wuhuanzhou

* GeneratePass support attr condition and mapping, test=develop
* fix coverage, test=develop
* Add fuse_resnet_unit pass, test=develop
* fix CI errors, test=develop
* fix CI errors, test=develop
* fix unittest error when compiling without CUDA, test=develop
* fix static ci error, test=develop
* limit kernel size must equal 1, test=develop
Committed by Siming Dai

* add cpu version, using set: sum, min, max
* add cpu version: mean
* improve cpu code and fix dynamic memory allocation problem
* fix arg error, add index judge, delete fp16
* fix bug in CudaAtomicMax and CudaAtomicMin
* add CUDA version
* fix grad_op bug for index
* add op test, add correct cpu grad op
* Add correct CUDA Mean grad
* [Add] Successful MEAN and SUM
* [Add] Successful MIN and MAX in CPU
* [Add] Successful MIN and MAX in CUDA
* fix windows dtype ci
* fix ROCM ci by adding HIP flag
* rename fused_gather_scatter to send_recv
* unify name as send and recv
* change zero index return time
* add send_recv incubate api
* fix index data type, add unittest case for API
* delete redundant input tensor
* fix en example and docs, add default value in pool_type
* add shape judge and max grid judge
* fix comment
* fix index type bug
* add const &
* fix en docs
* delete numpy in examples
* add unittest for int input
* fix send_recv comment
* change send_recv to graph_send_recv
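A minimal sketch of the resulting incubate API and its gather-scatter semantics, using the toy numbers from the public docs; keyword names are assumed from that documentation:

```python
import paddle

# Edge i gathers feature row x[src_index[i]] and reduces it into output
# row dst_index[i] with pool_type (sum / mean / min / max).
x = paddle.to_tensor([[0., 2., 3.], [1., 4., 5.], [2., 6., 7.]])
src_index = paddle.to_tensor([0, 1, 2, 0], dtype="int32")
dst_index = paddle.to_tensor([1, 2, 1, 0], dtype="int32")

out = paddle.incubate.graph_send_recv(x, src_index, dst_index, pool_type="sum")
# Row 0 receives x[0]; row 1 receives x[0] + x[2]; row 2 receives x[1].
print(out)  # [[0., 2., 3.], [2., 8., 10.], [1., 4., 5.]]
```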
- 16 Nov 2021 (1 commit)

Committed by Li Min

The implementation of fused_attention_op uses bias_add, which was implemented with the kernel primitives. The WriteData API of the kernel primitives and its internal implementation were later changed: the out-of-bounds check was moved into a template parameter, so the call took the wrong branch and produced out-of-bounds writes that corrupted memory belonging to other GPU buffers. Concretely, a single run of test_fused_attention_op_api.py almost never fails, but running it repeatedly in a loop over inputs of different shapes produces wrong results; the failure is sporadic and the bug is easy to miss.
- 12 Nov 2021 (1 commit)

Committed by zhangkaihuo

* fix bug: 1. atten: set the default value of attn_dropout_rate to None 2. ffn: add activation parameter
- 28 Oct 2021 (1 commit)

Committed by Li Min

* Fix fused_attention English doc test=document_fix
- 27 Oct 2021 (1 commit)

Committed by zhangkaihuo

This PR adds the layer-level code for fused_transformer, covering the FusedFeedForward layer and the FusedTransformerEncoderLayer.
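A brief usage sketch of the two layers, assuming a CUDA build and the paddle.incubate.nn entry points; constructor arguments mirror paddle.nn.TransformerEncoderLayer and are passed positionally because keyword names may differ slightly between releases:

```python
import paddle
from paddle.incubate.nn import FusedFeedForward, FusedTransformerEncoderLayer

paddle.set_device("gpu")  # the fused layers target GPU

x = paddle.rand([2, 16, 128])  # [batch, seq_len, d_model]

# (d_model, dim_feedforward) and (d_model, nhead, dim_feedforward),
# mirroring the non-fused TransformerEncoderLayer pieces.
ffn = FusedFeedForward(128, 512)
encoder_layer = FusedTransformerEncoderLayer(128, 8, 512)

print(ffn(x).shape, encoder_layer(x).shape)  # both [2, 16, 128]
```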
- 26 Oct 2021 (2 commits)
- 17 Oct 2021 (1 commit)

Committed by Zeng Jinle

This reverts commit 0452f27c.
- 16 Oct 2021 (1 commit)

Committed by Zhang Zheng

* fix the initializer of resnet unit op
* fix the initializer of resnet unit op
- 15 Oct 2021 (1 commit)

Committed by Zhang Zheng
- 26 Sep 2021 (1 commit)

Committed by Yuang Liu
- 17 Sep 2021 (1 commit)

Committed by Zhong Hui
- 16 Sep 2021 (1 commit)

Committed by Zhong Hui
- 16 Jul 2021 (1 commit)

Committed by Yuang Liu
- 15 Jul 2021 (1 commit)

Committed by wanghuancoder

* cache core.ops, test=develop
* refine, test=develop
- 14 Jul 2021 (1 commit)

Committed by Yuang Liu
- 12 Jul 2021 (1 commit)

Committed by Yuang Liu

* softmax mask fuse upper triangle
* cover not implemented cpu code
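A minimal sketch of the fused op's intended use, assuming the paddle.incubate.softmax_mask_fuse_upper_triangle entry point and a CUDA build (the commit notes the CPU path is not implemented):

```python
import paddle

paddle.set_device("gpu")  # only the GPU kernel exists

# Attention scores shaped [batch, num_heads, seq_len, seq_len].
scores = paddle.rand([1, 8, 32, 32])

# Applies the causal (upper-triangle) mask and the softmax in one fused
# kernel, avoiding an explicit mask tensor.
probs = paddle.incubate.softmax_mask_fuse_upper_triangle(scores)
print(probs.shape)  # [1, 8, 32, 32]
```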
- 11 Jun 2021 (1 commit)

Committed by zhiboniu

* update 2.0 public API in all remaining files
* reverse device.py all list; fix some flake8 errors
- 22 Apr 2021 (1 commit)

Committed by tianshuo78520a
- 21 Apr 2021 (1 commit)

Committed by xiemoyuan

* remove fluid for auto_checkpoint.
* fix bug.
- 30 Mar 2021 (1 commit)

Committed by Zhou Wei

* Remove old custom OP to reduce whl package volume
* [Custom OP] Remove old custom OP to reduce whl package volume
- 25 Jan 2021 (1 commit)

Committed by 123malin

* test=develop, fix test_lookahead
- 13 Jan 2021 (1 commit)

Committed by WeiXin
- 07 Jan 2021 (1 commit)

Committed by 123malin

* test=develop, add model_average and lookahead
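A short sketch of the LookAhead wrapper added here (ModelAverage follows the same optimizer-wrapper pattern); the paddle.incubate.LookAhead entry point and its alpha/k arguments are assumed from the public docs:

```python
import paddle

layer = paddle.nn.Linear(10, 1)
inner_opt = paddle.optimizer.SGD(learning_rate=0.1,
                                 parameters=layer.parameters())

# Slow weights are pulled toward the fast (inner) weights every k steps
# with interpolation factor alpha.
lookahead = paddle.incubate.LookAhead(inner_opt, alpha=0.5, k=5)

x = paddle.rand([4, 10])
loss = layer(x).mean()
loss.backward()
lookahead.step()
lookahead.clear_grad()
```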