1. 18 February 2022, 2 commits
  2. 17 February 2022, 5 commits
  3. 16 February 2022, 4 commits
  4. 15 February 2022, 5 commits
    • [Paddle-Inference] support preln_ernie: add... · 2bc91cc5
      Committed by Wangzheee
      [Paddle-Inference] support preln_ernie: add preln_embedding_eltwise_layernorm_fuse_pass, preln_skip_layernorm_fuse_pass (#39508)
      
      * support preln_ernie
      
      * support preln_ernie
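      A minimal Paddle Inference sketch of the setup these passes target, assuming a pre-LN ERNIE
      model saved at the hypothetical paths below; the two new fuse passes are expected to apply
      once the TensorRT engine with dynamic shapes is enabled (that wiring is an assumption, not
      confirmed here):

      ```python
      import paddle.inference as paddle_infer

      # Hypothetical model files for a pre-LN ERNIE model.
      config = paddle_infer.Config("preln_ernie.pdmodel", "preln_ernie.pdiparams")
      config.enable_use_gpu(1000, 0)  # 1000 MB initial GPU memory on device 0

      # Run the TensorRT subgraph engine; the preln_* fuse passes are assumed
      # to be part of this pass pipeline.
      config.enable_tensorrt_engine(
          workspace_size=1 << 30,
          max_batch_size=1,
          min_subgraph_size=3,
          precision_mode=paddle_infer.PrecisionType.Half,
          use_static=False,
          use_calib_mode=False)

      # Dynamic shape ranges for the token-id input (name and shapes are illustrative).
      config.set_trt_dynamic_shape_info(
          {"input_ids": [1, 1]}, {"input_ids": [8, 128]}, {"input_ids": [4, 64]})

      predictor = paddle_infer.create_predictor(config)
      ```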
    • [PluggableDevice] Add custom runtime support (#38740) · 3e7825f3
      Committed by ronnywang
      * [CustomRuntime] Add DeviceManager
      
      * [CustomRuntime] Add DeviceInterface
      
      * [CustomRuntime] Add Stream, Event, DeviceGuard, CallbackManager
      
      * [CustomRuntime] Add plug-in device
      
      * [CustomRuntime] Memory module support PluggableDevice
      
      * [CustomRuntime] Add WITH_PLUGGABLE_DEVICE cmake option
      
      * update
      
      * [API] update API doc based on comments, test=develop
      Co-authored-by: Nqili93 <qili93@qq.com>
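      On the Python side the intent is that a device registered through this plug-in mechanism
      behaves like a built-in place; a rough sketch, assuming a plug-in has already registered a
      device type called "custom_cpu" (the name is hypothetical):

      ```python
      import paddle

      # "custom_cpu" stands in for whatever device type a plug-in registers via the
      # new DeviceManager / DeviceInterface machinery; it is not a built-in name.
      paddle.device.set_device("custom_cpu")

      x = paddle.ones([2, 2])
      y = x + 1  # allocation and the add kernel are expected to run on the plug-in device
      print(y)
      ```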
    • Add cinn_instruction_run_op for launching execution of a cinn instruction (#39435) · 9d0baeab
      Committed by TeFeng Chen
      * add cinn_instruction_run_op for launching execution of a cinn instruction
      
      * fix multi definition compilation error
      
      * update cmake
      
      * fix bug at infershape
      
      * fix compile error due to lacking header file
    • move histogram to pten (#39496) · 556f6eb0
      Committed by hong
      * move histogram to pten; test=develop
      
      * fix format error; test=develop
      
      * fix histogram kernel format; test=develop
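      The user-facing API is unchanged by the port; a quick usage reminder (the kernel now lives
      under pten, while paddle.histogram behaves as before):

      ```python
      import paddle

      x = paddle.to_tensor([1.0, 2.0, 1.0, 4.0, 5.0])
      # Five equal-width bins over [0, 5].
      hist = paddle.histogram(x, bins=5, min=0, max=5)
      print(hist)  # expected counts per bin: [0, 2, 1, 0, 2]
      ```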
    • [PTen] Migrate proto::VarType outside of Pten (#39411) · 7e7e9404
      Committed by Aurelius84
      * #1 migrate dist-related type()-> dtype()
      
      * move datatype function from pten -> fluid/framework
      
      * change type() in imperative into convert(dtype())
      
      * modify xx_tensor->type into xx_tensor->dtype
      
      * change the set_type interface and the caller
      
      * modify xx_tensor.type into xx_tensor.dtype
      
      * fix mutable_data(place, dtype())
      
      * change caller of mutable_data in pten and distributed
      
      * change the caller of mutable_data in fluid/framework
      
      * change the caller of mutable_data in imperative directory
      
      * mutable_data: inference
      
      * update the call of mutable_data
      
      * transfer MakePenScalarArray MakePtenScalar ResetHolderWithType
      
      * pass the compile. the next step is remove VarType in Pten
      
      * fix all and remove VarType from pten. success in linux. Next task is other platform
      
      * fix conflict with develop
      
      * fix compiled error
      
      * Fix reset conversion
      
      * fix conflict
      
      * fix compiled problem
      
      * fix typo
      
      * Fix << in tensor_utils.cc
      
      * fix type->dtype
      
      * fix unittest
      
      * fix tensor init constructor
      
      * fix DataTypeSize for BFloat16
      
      * fix code style
      
      * fix npu compiled error
      
      * fix npu
      
      * compile npu successfully
      
      * fix conflict
      
      * fix conflict
      Co-authored-by: Nxiongkun <xiongkun03@baidu.com>
  5. 14 February 2022, 5 commits
    • [Bug fix] prevent squashing pair u8 dequantize -> s8 quantize (#39346) · 66b5348e
      Committed by Sylwester Fraczek
      * prevent squashing pair u8 dequantize -> s8 quantize
      
      * add relu op to check for uint8
      
      * fix ptq fc attr name fuse_activation->activation_type
      
      * fix
      
      * add unit test
      
      * remove unused variable
      
      * test fix unsuccessful
      
      * fix test and logic
      
      * multiline comment
      
      * remove cout
      
      * Revert "fix ptq fc attr name fuse_activation->activation_type"
      
      This reverts commit ffd023353a5e9b0fd15e81b9e9f9fe1794035017.
      
      * fix ptq fc attr name fuse_activation->activation_type
    • context add generator (#39475) · 463e31f4
      Committed by Wilber
      * context add generator
      
      * update
    • [NewExe] Ignore eof Exception (#39487) · 2f642159
      Committed by liutiexing
      * add align for WorkQueue
      
      * add spinlock
      
      * merge develop
      
      * merge
      
      * Add EventsWaiter
      
      * Revert "Add EventsWaiter"
      
      This reverts commit e206173aa9be7401b83a53581627bfaf557c8fb2.
      
      * add log for Executor
      
      * Avoid thread reconstruction when EOF
      Co-authored-by: Nliutiexing <liutiexing@google.com>
    • [PTen] Add HasAttr for ArgumentMappingContext (#39464) · ddb1e23f
      Committed by Chen Weihang
      * add has_attr for arg map context
      
      * skip useless attr now
      
      * skip attr if not exists
      
      * fix typo
    • [pten] add split kernel (#39060) · d0df5632
      Committed by chentianyu03
      * add split kernel
      
      * add split kernel signature
      
      * fix split bug
      
      * modify MakePtenScalarArrayFromVarList
      
      * modify MakePtenScalarArrayFromVarList
      
      * fix split windows register error
      
      * add test case for split kernel
      
      * replace raw split kernel with pten kernel
      
      * fix makeScalar/ScalarArray bug
      
      * remove debug log
      
      * remove int64_t type in buildPtcontext
      
      * update by code review
      
      * fix split dev test failed
      
      * change DenseTensorMeta to MetaTensor
      
      * change split api code from auto gen to manual
      
      * split cuda kernel support bfloat16 type
      
      * fix conflict
      
      * rm raw split kernel
      
      * merge develop branch
      
      * change to pten::errors
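      The raw split op is swapped for the pten kernel underneath, while the Python-level
      paddle.split keeps its signature; a small usage sketch:

      ```python
      import paddle

      x = paddle.rand([3, 9, 5])

      # Three equal sections along axis 1.
      a, b, c = paddle.split(x, num_or_sections=3, axis=1)
      print(a.shape)  # [3, 3, 5]

      # Explicit, uneven section sizes are also accepted.
      p, q, r = paddle.split(x, num_or_sections=[2, 3, 4], axis=1)
      print(r.shape)  # [3, 4, 5]
      ```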
  6. 11 February 2022, 6 commits
  7. 10 February 2022, 2 commits
  8. 09 February 2022, 6 commits
  9. 08 February 2022, 5 commits
    • Make Embedding layer support more int ids type (#39381) · 60f1461a
      Committed by sneaxiy
      * add more int id type support for embedding
      
      * add ut
      
      * add more ut
      
      * fix ci error
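      A short sketch of what the change enables at the API level, assuming int32 is among the
      newly supported id dtypes:

      ```python
      import paddle

      emb = paddle.nn.Embedding(num_embeddings=10, embedding_dim=4)

      # Lookup ids were generally expected to be int64 before; with this change
      # other integer id types (int32 here) should be accepted as well.
      ids = paddle.to_tensor([[1, 3], [2, 7]], dtype='int32')
      out = emb(ids)
      print(out.shape)  # [2, 2, 4]
      ```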
    • Add FuseOptimizerPass and test_dist_fuse_adam_pass unittest. (#39208) · ccdcfa2d
      Committed by hlygit66666
      * add fuse_relu_depthwise_conv_pass unittest
      
      * fix atol and rtol
      
      * fix according to review
      
      * Add FuseOptimizerPass and fuse_adam_pass unittest
      
      * add sgd and momentum unittest
      
      * add fuse_optimizer_pass
      
      * close amp
      
      * close amp
      
      * update
      
      * fix run on two cards
      
      * Update test_dist_fuse_adam_pass.py
      
      * Update test_dist_fuse_momentum_pass.py
      
      * Update test_dist_fuse_sgd_pass.py
      
      * Create test_dist_fuse_sgd_pass.py
      
      * Create test_dist_fuse_sgd_pass.py
      
      * Create test_dist_fuse_sgd_pass.py
      
      * Update test_dist_fuse_adam_pass.py
      
      * Update test_dist_fuse_momentum_pass.py
      
      * Update test_dist_fuse_sgd_pass.py
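      A rough sketch of how a fused-optimizer pass is typically driven through the distributed
      pass framework; the pass name string below is an assumption based on the PR title, not
      something confirmed from the source:

      ```python
      import paddle
      from paddle.distributed.passes import PassManager, new_pass

      paddle.enable_static()

      main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()
      with paddle.static.program_guard(main_prog, startup_prog):
          x = paddle.static.data(name="x", shape=[None, 8], dtype="float32")
          loss = paddle.mean(paddle.static.nn.fc(x, size=1))
          paddle.optimizer.Adam(learning_rate=1e-3).minimize(loss)

      # "fuse_optimizer" is assumed to be the registered name of the new
      # FuseOptimizerPass; test_dist_fuse_adam_pass exercises the Adam case.
      pass_manager = PassManager([new_pass("fuse_optimizer")])
      pass_manager.apply([main_prog], [startup_prog])
      ```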
    • [Bug fix] Fixed handling of one of the cases in the quantization process (#39342) · e4d475ea
      Committed by joanna.wozna.intel
      * Fix quantization next op findings
      
      * Corrections according to the review
    • Fix to #38126 (#39097) · f884edb9
      Committed by Jacek Czaja
      * - 38126 potential fix
      
      * - fix
      
      * - build fix
      
      * - another candidate fix
      
      * - compilation fix
      
      * - another fix
      
      * - Fix to activation of NHWC being first oneDNN op in chain of oneDNN ops
      
      * - compilation fix
      
      * - added NHWC rotating for elementwise being first op
      
      * - compilation fix
      
      * - compilation fix
      
      * - Added UT
      
      * - cosmetic fixes
    • Update op support gpu impl (#39386) · ba882657
      Committed by hong
      * find gpu kernel in pten factory; test=develop
      
      * check in functional kernel first; test=develop