1. 26 Oct 2021 (10 commits)
  2. 25 Oct 2021 (7 commits)
    • [NPU] modifications for model ernie-1.0 (#36642) · 19b02d95
      Committed by Aganlengzi
      * [NPU] modifications for model ernie-1.0
      
      * rollback 503003 and change cast to dtype
    • add op: fused_feedforward(backward) (#35611) · 2dd0a46a
      Committed by zhangkaihuo
      This PR adds the backward code for fused_feedforward.
      
      Related kernel implementations: fused_dropout_act_bias, fused_residual_dropout_bias, fused_layernorm_residual_dropout_bias.
      
      fused_feedforward is a fused operator: it fuses and wraps the individual ops of the transformer feed-forward layer so that the frontend exposes a single interface; the fusion cuts part of the memory-access and kernel-launch overhead and thereby improves performance (see the sketch after this entry).
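      A minimal sketch, in ordinary Paddle Python ops, of the feed-forward block that fused_feedforward collapses into one fused call (linear, activation, dropout, residual add, layer norm). Shapes and dropout/activation settings are illustrative, not taken from the PR; the final backward() exercises the gradient path this backward PR provides in fused form.

      # The transformer feed-forward block written with separate ops; every call
      # below is its own kernel launch that the fused operator avoids.
      import paddle
      import paddle.nn.functional as F

      x = paddle.randn([2, 8, 64])              # [batch, seq_len, d_model]
      w1 = paddle.randn([64, 256])              # d_model -> dim_feedforward
      w2 = paddle.randn([256, 64])              # dim_feedforward -> d_model
      w1.stop_gradient = w2.stop_gradient = False

      h = F.relu(F.linear(x, w1))               # linear + activation (bias omitted here)
      h = F.dropout(h, p=0.1)
      h = F.dropout(F.linear(h, w2), p=0.1)
      out = F.layer_norm(x + h, normalized_shape=[64])   # residual add + layer_norm

      out.sum().backward()                      # unfused backward; the PR fuses this path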
    • Add bincount op (#36317) · 39f19127
      Committed by smallv0221
      * Add bincount op
      
      * upload cpu version
      
* fix unittest
      
      * fix unittest
      
      * fix unittest
      
      * fix en doc
      
      * add more test
      
      * fix en doc
      
      * add more test case
      
      * fix test
      
* fix input validation
      
      * fix input check
      
      * fix unittest
      
      * fix test
      
      * fix en doc
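      A hedged usage sketch for the bincount op added in the entry above; it assumes the Python API mirrors numpy.bincount (an integer tensor plus optional weights and minlength arguments), which is the usual shape of this op but is not spelled out in the commit text.

      import paddle

      x = paddle.to_tensor([1, 2, 1, 4, 5])
      print(paddle.bincount(x))                # occurrences of each integer value: [0, 2, 1, 0, 1, 1]
      print(paddle.bincount(x, minlength=8))   # pad the counts out to length 8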
    • Create CinnCompiler class for compiling subgraphs found by build_cinn_pass. (#36562) · 4c460378
      Committed by Zhen Wang
      * Init the functions of CinnCompiler.
      
      * Add the unit test for CinnCompiler.
      
      * Fix some compilation errors.
      
      * Update the UT of cinn_compiler.
      
      * Use Decomposer&OpFusion passes in CinnCompiler::CompileGraph.
      
      * Update some comments.
      
      * Uncomment some includes in build_cinn_pass.cc.
      
* Use refs instead of ptrs as returned types of FindGraph & Compile in CinnCompiler.
      
      * Use the merged CinnGraphSymbolization functions in CinnCompiler.
    • add some ops to train ssd on kunlun (#36407) · 50778ad6
      Committed by TTerror
      * add some ops to train ssd on kunlun
      
      * add some ops to train ssd on kunlun
      
      * add some ops to train ssd on kunlun
      
      * update cast op unittest
      
      * update cast op unittest
      
      * update cast op unittest
      
      * update xpu cmake
      
      * update cast unittest
    • add op: fused_feedforward(forward) (#35843) · b18cbfb2
      Committed by zhangkaihuo
      This PR contains only the forward code for fused_feedforward.
      
      Related kernel implementations: fused_dropout_act_bias, fused_residual_dropout_bias, fused_layernorm_residual_dropout_bias.
      
      fused_feedforward is a fused operator: it fuses and wraps the individual ops of the transformer feed-forward layer so that the frontend exposes a single interface; the fusion cuts part of the memory-access and kernel-launch overhead and thereby improves performance.
    • fix pool2d convert case (#36667) · 087c3abe
      Committed by JingZhuangzhuang
  3. 24 Oct 2021 (1 commit)
  4. 22 Oct 2021 (3 commits)
    • correct slice serialize data (#36588) · 5e880840
      Committed by wenbin
      * slice
      
      * add UT
    • add fp16 kernel for clip_op (#36577) · 1962d3af
      Committed by zhangbo9674
    • Fused attention op forward (#35905) · d4906214
      Committed by Li Min
      Goal: this PR improves the compute performance of the attention module.
      To reduce the framework's op-scheduling overhead, the attention module is implemented by hand at the C++ level and exposed as a single large attention op.
      To reduce memory-access overhead, two optimizations are applied (see the sketch after this entry for the first):
      (1) in the q, k, v computation, the shared input X lets the gemm, transpose, and bias add be issued once instead of three times;
      (2) kernel-fusion techniques pass data between different CUDA kernels through registers.
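      Optimization (1) can be pictured with a small NumPy sketch: the three q/k/v GEMMs over the shared input X collapse into a single GEMM against a concatenated weight (the real op also folds in the bias adds and transposes). Names and shapes here are illustrative only.

      import numpy as np

      d = 64
      x = np.random.rand(8, d)                          # [tokens, d_model], shared input X
      wq, wk, wv = (np.random.rand(d, d) for _ in range(3))

      # Unfused: three separate GEMMs (plus three bias adds / transposes in the full op).
      q, k, v = x @ wq, x @ wk, x @ wv

      # Fused: one GEMM against the concatenated [d, 3d] weight, then a split.
      qkv = x @ np.concatenate([wq, wk, wv], axis=1)
      q2, k2, v2 = np.split(qkv, 3, axis=1)

      assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)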
  5. 21 Oct 2021 (9 commits)
    • [NPU] Add p_norm_grad (#36497) · ed478a3e
      Committed by zhulei
    • add swish_op for npu (#36579) · 7eab0fa6
      Committed by ronnywang
    • Added matmul_v2+transpose+reshape fuse pass (#36481) · 856cb9c5
      Committed by jakpiase
      * added base changes for matmul_v2+trans+resh fuse pass
      
      * added full matmul_v2+transpose+reshape pass
      
      * removed a file added by mistake
      
      * added reviewers suggestions
      
* Changed ops type in checking compatibility version
      
      * Deleted one statement
    • [NPU] Add sync_batch_norm and sync_batch_norm_grad NPU Kernel (#36320) · 0ca2807c
      Committed by furnace
* add sync_batch_norm (supports train and infer, fp32 and fp16, NCHW and NHWC)
      
      * [NPU] Delete debug codes
      
      * [NPU] Remove FP16
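      For context, a hedged sketch of the Python-side layer this NPU kernel backs: paddle.nn.SyncBatchNorm and its convert_sync_batchnorm helper are the existing API, while the toy model below is purely illustrative.

      import paddle.nn as nn

      model = nn.Sequential(nn.Conv2D(3, 8, kernel_size=3), nn.BatchNorm2D(8))
      # Swap every BatchNorm layer for SyncBatchNorm before distributed training so
      # mean/variance statistics are synchronized across devices (now including NPU).
      model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
      print(model)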
    • Add viterbi decode (#35778) · 6072aecb
      Committed by Jack Zhou
      * add viterbi decode cpu kernel
      
      * add viterbi decoder api in paddle.text
      
* allocate a data buffer once to avoid frequently creating many small data buffers
      
      * fix viterbi max_seq_length bug
      
      * fix seq_len=1 bug
      
      * fix device context
      
      * move split out of for loop
      
      * remove INVERSE_SUB
      
      * remove 2 GET_CAST_MASK
      
      * remove 1 loop
      
      * remove Functor
      
      * add to_static deploy code
      
      * use MAX_FUNC instead of ELE_MAX
      
      * add MaxFunctor
      
      * impl max_func
      
      * remove MaxFunctor
      
      * remove cast op
      
      * use REGISTER_OP_WITHOUT_GRADIENT
      
      * add viterbi cuda kernel
      
      * add FIX_BLOCKDIM_CASE macro
      
      * add MKL add, mul; add get data mask
      
      * add arange mkl impl
      
      * add CPU Argmax
      
      * add cpu gather
      
      * use EXECUTE_MKL_ELEMENT_BINARY_OP instead of some ADD, MUL
      
      * use SameDimsBinaryOP instead of EXECUTE_MKL_ELEMENT_BINARY_OP
      
      * use SAME_DIMS_ELEMENT_BINARY_OP
      
      * add SimpleBroadcastBinaryOP
      
      * use int instead of int64_t to accelerate
      
      * optimize SimpleBroadcastBinaryOP
      
      * optimize SimpleBroadcastBinaryOP
      
      * optimize performance in both single thread and multithread situation
      
      * remove useless line
      
      * remove useless code
      
      * add CREATE_TENSOR_BUFFER macro
      
      * add INIT_REQUIRED_TENSOR macro
      
      * add comment
      
      * fix windows ci
      
      * add viterbi unittest
      
      * remove cuda add functor
      
      * remove cuda equal
      
      * remove a template function
      
      * fix windows ci
      
      * fix windows dtype
      
      * remove some template instance
      
      * remove useless header file
      
      * remove some blockdim
      
      * remove transpose impl
      
      * accelerate cpu performance on single thread situation
      
      * viterbi_decode->crf_decode
      
      * rename crf params name
      
      * add viterbi api test
      
      * remove useless import
      
      * add enable_static
      
      * use viterbi decoder
      
      * fix viterbi len=1
      
* fix viterbi unittest
      
      * remove useless comments
      
      * reconstruct viterbi decode
      
      * remove ADD,SUB,MUL structure
      
      * fix coverage
      
      * remove CREATE_TENSOR
      
      * add name args
      
      * crf.py->ops.py; with_start_stop_tag->include_start_end_tag
      
      * update crf_decode en docs
      
      * fix viterbi decode en docs
      
      * fix some review comments
      
      * add FIXED_BLOCK_DIM_CASE in cuda
      
      * push_back->emplace_back
      
      * crf_decode->viterbi_decode; include_start_end_tag->include_bos_eos_tag
      
      * paddle.text.ops.viterbi_decode->paddle.text.viterbi_decode
      
      * fix viterbi_decode en docs
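      As a readability aid for the change list above, here is a NumPy reference of the standard Viterbi (max-sum) decoding over a CRF lattice that the viterbi_decode op implements; it is a sketch of the algorithm, not the CPU/CUDA kernel code from this PR.

      import numpy as np

      def viterbi_decode(potentials, transitions):
          """potentials: [seq_len, num_tags] unary scores; transitions: [num_tags, num_tags]."""
          seq_len, num_tags = potentials.shape
          score = potentials[0]                     # best score ending at each tag for step 0
          backpointers = []
          for t in range(1, seq_len):
              # total[i, j]: best path ending in tag i at t-1, then transitioning to tag j.
              total = score[:, None] + transitions + potentials[t][None, :]
              backpointers.append(total.argmax(axis=0))
              score = total.max(axis=0)
          best_last = int(score.argmax())
          path = [best_last]
          for bp in reversed(backpointers):         # walk the backpointers to recover the path
              path.append(int(bp[path[-1]]))
          return float(score.max()), path[::-1]

      score, path = viterbi_decode(np.random.rand(5, 3), np.random.rand(3, 3))
      print(score, path)                            # best path score and its tag sequence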
    • fix hdfs download_dir (#36590) · 66f4b292
      Committed by danleifeng
    • add fill_any_like/flatten ops to train ssd on kunlun (#36550) · 7bf2aa38
      Committed by TTerror
      * add some ops to train ssd on kunlun
      
      * update test_fill_any_like_op_xpu.py
    • User specified backend (#35745) · b6e7f8e9
      Committed by xiongkun
    • Fixed unit test for auto parallel cost model (#36574) · f6985774
      Committed by YipZLF
  6. 20 Oct 2021 (9 commits)
    • fix bugs of ClipGradByGlobalNorm in HybridParallel (#36555) · 6a3941e3
      Committed by Haohongxiang
      * fix bugs of ClipGradByGlobalNorm
      
      * add unittests
      
      * add unittests
    • [NPU] Add kldiv_loss_op for npu (#36494) · 6a572a19
      Committed by ronnywang
    • Add FasterTokenizer Operator (#34491) · 3f2d6a3f
      Committed by Steffy-zxf
Add tokenizer-related functionality for the Transformer model so that preprocessing is consistent between training and prediction.
      
      * support the text string as an input Tensor
      * support the "VOCAB" unordered_map<wstring, int> as an input Tensor to look up tokens
      * Tokenizer used for BERT: it applies end-to-end tokenization from a raw text string to wordpieces.
      * It first applies basic tokenization, followed by wordpiece tokenization (see the sketch after this entry).
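      The wordpiece stage described above is the standard greedy longest-match-first lookup. Below is a toy Python sketch of that step with a hypothetical mini vocab, as opposed to the operator's in-graph C++ implementation.

      vocab = {"un", "##aff", "##able", "aff", "able", "[UNK]"}   # toy vocab, illustrative only

      def wordpiece(word, vocab, max_chars=100):
          if len(word) > max_chars:
              return ["[UNK]"]
          pieces, start = [], 0
          while start < len(word):
              end, match = len(word), None
              while start < end:                    # try the longest remaining substring first
                  piece = word[start:end]
                  if start > 0:
                      piece = "##" + piece          # mark continuation pieces
                  if piece in vocab:
                      match = piece
                      break
                  end -= 1
              if match is None:
                  return ["[UNK]"]                  # no piece matched: whole word is unknown
              pieces.append(match)
              start = end
          return pieces

      print(wordpiece("unaffable", vocab))          # ['un', '##aff', '##able']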
    • fix pow2 decay (#36559) · 605e7f08
      Committed by Zeng Jinle
    • add unittest (#36371) · 7325c9fb
      Committed by Wilber
    • update for trt convert ut. (#36549) · 06bd348d
      Committed by Wilber
    • [FIX] Extend time for test_activation_nn_grad to avoid its timeout issue (#36527) · c285c719
      Committed by Jiabin Yang
* native commit for triple grad of sigmoid
      
      * Updated unittests files
      
      * init functional jacobian api
      
      * Updated trible_test func
      
      * Updated gradient_checker & test_script
      
      * finish test with dtype float32
      
      * add float64 test case
      
      * polish code
      
      * use atol=1e-5 with dtype float64
      
      * fix for ci
      
      * set timeout for test_jacobian
      
      * fix dygraph grad to support high differential
      
      * polish API docstring
      
      * Updated gradient checker and some related files
      
      * fix double grad strip error for high differential
      
      * fix double grad strip error for high differential
      
      * Add Sigmoid triple grad tests
      
* fix dygraph double grad dtype error when calling for high differential scenario
      
* Updated triple grad tests func
      
      * Use np.random to initialize ddx
      
      * Updated triple_grad_check func
      
      * add todo for gradient checker and refine some comments
      
      * remove additional code
      
* add test for warning in backward.py
      
      * add tanh triple grad
      
      * format python code
      
      * refine code
      
      * make test_activation_nn_grad test time to 150s
      Co-authored-by: Nveyron95 <veyron_wu@163.com>
      Co-authored-by: Nlevi131 <limaolin01@baidu.com>
    • [Auto Parallel] Generalization for Partition and Completion (#35735) · 797bd40d
      Committed by JZ-LIANG
      * default dist op
      
      * add dist_attr for dist op
      
* add unittest
      
      * update inputname
      
      * update function name
      
* add unittest
      
      * update CMakeLists.txt for CI
      
      * fix dis_matmul
      
      * fix compile error
      
      * update matmul to matmul_v2
      
      * unify api
      
      * unify api
      
      * todo
      
      * update distop forward func
      
      * update distop forward func
      
      * auto parallel backward
      
      * update dist op
      
      * autoparallel backward
      
      * add backward for embedding
      
      * temp1
      
      * temp2
      
      * temp3
      
      * temp4
      
      * backward done1
      
      * backward done2
      
      * backward done3
      
      * dist embedding remove mp mode
      
      * dist matmul remove mp mode
      
      * update dist embedding
      
      * dist op init1
      
      * dist op init 2
      
* update unittest
      
      * context remove parallel mode
      
      * partitioner remove parallel mode
      
* update unittest
      
      * a more general method to support varying mesh in pipeline parallel
      
      * support varying mesh in pipeline parallel
      
      * embedding support varying mesh in pipeline parallel
      
      * matmul support varying mesh in pipeline parallel
      
      * default dist op support varying mesh in pipeline parallel
      
      * dist attribute for startup program
      
      * default dist op support varying mesh in pipeline parallel 2
      
      * partitoner support varying mesh in pipeline parallel
      
* revise logic for auto completion
      
      * revise framework.py
      
* revise reshard unittest
      
* revise unittest for parallelize
      
      * chmod
      
      * fixed bug for dist embedding name mapping
      Co-authored-by: Nzhaoyingli <zhaoyingli@baidu.com>
    • remove no_value using var.name (#36513) · fe01ba6a
      Committed by 0x45f
      * remove no_value using var.name
      
      * fix unit test for CI
      
      * fix unit test
      
      * add test case
      
      * fix test case
      
      * add more test case
  7. 19 Oct 2021 (1 commit)