- 06 Dec 2022, 1 commit

Committed by kangguangli
* remove layers.matmul in nets.py
* remove layers.matmul in rnn_impl/test_quantization_pass/auto_parallel_gpt_model/test_auto_parallel_completion_gpt
* remove layers.matmul in other files
* fix
* fix
* remove layers.matmul itself
* remove ref in CMakeLists.txt and tools directory
* remove matmul in fluid.layers.nn.py
* remove matmul in fluid.dygraph.rnn.py && restore test_matmul_op.py
* replace matmul in fluid.dygraph.rnn.py && clean api_test in test_matmul_op.py
* fix error && restore empty test_auto_search_dist_matmul_op.py
* fix check in test_auto_parallel_partitioner.py
* fix test_dist_matmul && test_flags_mkldnn_ops_on_off
* fix test_fused_attention_op_xpu.py && test_matmul_op_xpu.py
* remove test_auto_search_dist_matmul_op.py
* remove layers.matmul in auto_parallel_gpt_model.py && fix doc in fluid/io.py
* fix for matmul_grad
* fix codestyle
* fix codestyle
* resolve conflicts error
* restore unit test file but do not compile it, for later removal
* fix codestyle
* fix wrong unittest skip
* fix unittest delete
* fix scale cost
* fix scale cost
* resolve conflicts error
* resolve conflicts error

Co-authored-by: Njakpiase <jakpia21@gmail.com>
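Note: the commits above replace the deprecated fluid.layers.matmul API with paddle.matmul throughout the codebase. The snippet below is only a rough sketch of the kind of call-site change involved, using made-up tensors rather than any actual code from the repository:

```python
import paddle

x = paddle.randn([2, 3])
y = paddle.randn([3, 4])

# Old-style call being removed across the codebase (kept as a comment,
# since fluid.layers.matmul is the API this change deletes):
#   out = fluid.layers.matmul(x, y, transpose_x=False, transpose_y=False)

# Replacement used after this change:
out = paddle.matmul(x, y, transpose_x=False, transpose_y=False)
print(out.shape)  # [2, 4]
```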
- 29 Nov 2022, 1 commit

Committed by Nyakku Shigure
* isort all files
* revert conflicting files
* revert conflicting files
* revert conflicting files
* revert conflicting files
* revert conflicting files
- 23 Oct 2022, 1 commit

Committed by Nyakku Shigure
* update config
* re-blacken python code
* temporarily disable date and diff_py_file
* skip a format
- 11 Oct 2022, 1 commit

Committed by Nyakku Shigure
- 07 Jul 2022, 1 commit

Committed by wanghuancoder
* fused_gate_attention manual code in eager
- 05 Jun 2022, 1 commit

Committed by Sing_chan
* use yapf to format all python files
* yapf excludes two unittest files because they rely on writing and reading files, and formatting would break them
* disable diff_py_file because too many diff files caused the subsequent command to fail
- 31 May 2022, 1 commit

Committed by Li Min
* replace dropout_is_test with is_test.
* improve atol on a100.
- 13 May 2022, 1 commit

Committed by Weilong Wu
- 11 Mar 2022, 1 commit

Committed by Yuang Liu
- 23 Nov 2021, 1 commit

Committed by Li Min
Add support for bias being None in the fused_attention op.
- 12 Nov 2021, 1 commit

Committed by zhangkaihuo
* fix bug:
  1. attention: set the default value of attn_dropout_rate to None
  2. ffn: add an activation parameter
- 10 Nov 2021, 1 commit

Committed by Li Min
att, bug fix
- 08 Nov 2021, 1 commit

Committed by Li Min
The current fused_attention_op does not support attn_mask=None as an input; this PR adds support for it, along with the corresponding unit-test logic.
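As a plain illustration of what accepting attn_mask=None means, the NumPy sketch below shows scaled dot-product attention with an optional additive mask; when the mask is None, the masking step is simply skipped. This only models the semantics and is not the op's actual C++/CUDA implementation:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, attn_mask=None):
    # q, k, v: [batch, heads, seq_len, head_dim]
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(q.shape[-1])
    if attn_mask is not None:  # attn_mask=None just skips the additive mask
        scores = scores + attn_mask
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = k = v = np.random.randn(1, 2, 4, 8)
out_no_mask = scaled_dot_product_attention(q, k, v)  # mask omitted
out_masked = scaled_dot_product_attention(q, k, v, np.zeros((1, 2, 4, 4)))
assert out_no_mask.shape == out_masked.shape == (1, 2, 4, 8)
```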
- 28 Oct 2021, 1 commit

Committed by Li Min
* Fix bug when pre_layer_norm is false.
- 26 Oct 2021, 2 commits
- 22 Oct 2021, 1 commit

Committed by Li Min
Feature: the goal of this PR is to improve the computational performance of the attention module. To reduce the framework-level op scheduling overhead, the attention module is implemented manually at the C++ level and exposed as a single large attention op. To reduce memory-access overhead, two optimizations are applied: (1) when computing q, k, and v, the input X is shared, so the gemm, transpose, and bias add there are reduced from three calls to one; (2) kernel fusion is used, so intermediate data is passed through registers rather than being written out between separate CUDA kernels.
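A minimal NumPy sketch of optimization (1), the shared-input q/k/v projection: concatenating the three projection weights lets a single GEMM plus one bias add produce q, k, and v in one pass over X, instead of three separate calls. This only illustrates the idea; the real op performs this (plus the transpose and the kernel fusion) in C++/CUDA:

```python
import numpy as np

batch, seq_len, hidden = 2, 4, 8
x = np.random.randn(batch, seq_len, hidden)   # shared input X

w_q, w_k, w_v = (np.random.randn(hidden, hidden) for _ in range(3))
b_q, b_k, b_v = (np.random.randn(hidden) for _ in range(3))

# Unfused: three GEMMs and three bias adds over the same input X.
q_ref, k_ref, v_ref = x @ w_q + b_q, x @ w_k + b_k, x @ w_v + b_v

# Fused: one GEMM and one bias add over concatenated weights, then a split.
w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)   # [hidden, 3*hidden]
b_qkv = np.concatenate([b_q, b_k, b_v])
q, k, v = np.split(x @ w_qkv + b_qkv, 3, axis=-1)

assert np.allclose(q, q_ref) and np.allclose(k, k_ref) and np.allclose(v, v_ref)
```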