1. 29 December 2021, 8 commits
2. 28 December 2021, 12 commits
• Support multi-output feature for elementwise (#38410) · 48f061fb
  Committed by limingshu
      * first commit
      
* pass ctest of elementwise_div_grad
• Utilize StreamSafeCUDAAllocator to support fast GC in new executor (#37642) · 0c7153a4
  Committed by From00
      * fix reshape move storage error
      
      * remove needless set type
      
      * alloc tensor by shared storage
      
      * Utilize StreamSafeCUDAAllocator to support fast GC in new executor
      
      * Fix compile error for Windows and ROCm
      
      * Fix compile error for Windows
      
      * Modify UT stream_safe_cuda_alloc_test
      
      * Modify UT stream_safe_cuda_alloc_test
      
      * Rewrite fast GC
      
      * Rewrite fast GC
      
      * Fix compile error for BOOST_GET_CONST
      
      * Fix compile error for BOOST_GET_CONST
      
      * Changes default stream for StreamSafeCUDAAllocator
      
      * Fix a small CI error
      
      * Remove some redundant code
      
      * Fix conflict
      
      * Fix compile error for ROCm
      
* Fix Windows CI error
      
      * Fix CI error
      
      * Remove some unnecessary code
      
      * Fix CI error
      
      * Add UT for fast GC
      
      * Fix CI error
      
      * add device-agnostic stream class
      
      * add stream.h
      
      * fix ut
      
      * fix cpu compile
      
      * Use RWLock in GetAllocator
      
      * Fix CI error
Co-authored-by: Chen Weihang <chenweihang@baidu.com>
Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
• Support test basic of Var and Layer (#38426) · 1fb80a6a
  Committed by Jiabin Yang
      * Rearranged Eager AutoCodeGen directory structure
      
      * Removed USE_OP in Eager AutoCodeGen
      
      * Enabled generation for Operators without Grad/Inputs/Outputs
      
      * Resolved operators without input
      
      * Fixed merge conflicts
      
      * Enabled Eager AutoCodeGen for 10+ more operators
      
      * Refactored Eager AutoCodeGen with more organized helper objects
      
      * Enabled Eager AutoCodeGen for operators with multiple OpBases
      
      * Adjusted Eager AutoCodeGen to Enable Passing Output Tensor as Input Argument
      
      * Handled Dispensable Inputs/Outputs in Eager AutoCodeGen
      
      * Adjusted function generation/call between Python-C API & Dygraph API
      
      * Synchronized auto-generated Python-C API with Dygraph Forward Functions
      
      * support more eager tensor api
      
      * fix merge compile error
      
      * fix compile error and fit develop code
      
      * support pure CPU
      
      * fix some logic error in eager_mode
      
      * support _varbase_creator in eager mode
      
      * Added safe_initialized interface to EagerTensor for use in processing dispensable inputs
      
      * for eager mode
      
      * refine
      
      * support multiple constructor for eager tensor
      
      * add place related code
      
      * polish code
      
      * specific randint with dtype of int64
      
      * Support pure cpu test
      
      * eager logic
      
      * refine test in pure cpu
      
      * eager logic
      
      * eager logic
      
      * eager logic, test=develop
      
      * skip core.eager when in inference, test=develop
      
      * refine, test=develop
      
      * refine, test=develop
      
      * call RetainGrad after run forward kernel, test=develop
      
      * refine, test=develop
      
      * support dygraph util, meta, guard test
      
      * support inference test
      
      * refine test and fix initializer failed
      
      * support create varbase and fix retain grad error
      
      * fix windows error
      
      * support test code coverage
      
      * support test code coverage
      
      * support test code coverage
Co-authored-by: jim19930609 <jim19930609@gmail.com>
Co-authored-by: Wang Huan <wanghuan29@baidu.com>
• refactor matmul directory in pten (#38227) · 982bf444
  Committed by zyfncg
      * refactor matmul directory in pten
      
      * fix merge conflict
• Add API and op for take_along_axis (#38396) · 3310f519
  Committed by huangxu96 (usage sketch after this entry)
      * add API and op for take_along_axis
      
      * fix compile dependency problem and add example code and doc
      
* add unit test
      
      * delete some code for CI coverage
      
      * fix code style problem
      
      * fix as review
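A minimal usage sketch of the take_along_axis API added by the entry above; the call form paddle.take_along_axis(arr, indices, axis) follows the published Paddle documentation and may differ in detail from this initial PR.

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0],
                      [7.0, 8.0, 9.0]])
index = paddle.to_tensor([[0]])
# Gather the row selected by `index` along axis 0; the index tensor is
# broadcast across the non-indexed dimensions, as in numpy.take_along_axis.
out = paddle.take_along_axis(x, index, axis=0)
print(out)  # Tensor of shape [1, 3]: [[1., 2., 3.]]
```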
• fix adamw epsilon in cuda kernel (#37746) · 6f1bb3d6
  Committed by Guoxia Wang (context sketch after this entry)
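The fix above is confined to the fused CUDA kernel; the Python-side interface is unchanged. For context only, a brief sketch of where epsilon enters the standard paddle.optimizer.AdamW configuration (signature taken from the public docs, not from this PR):

```python
import paddle

model = paddle.nn.Linear(8, 2)
# epsilon is the small constant added to the denominator of the Adam update;
# the kernel fix above only changes how the fused CUDA path applies this value.
opt = paddle.optimizer.AdamW(learning_rate=1e-3,
                             parameters=model.parameters(),
                             epsilon=1e-8,
                             weight_decay=0.01)
```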
• Add Amax and Amin API (#38417) · 340dfb26
  Committed by Tao Luo (usage sketch after this entry)
      * add amax/amin
      
      * support axis is list
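A short sketch of the new reduction APIs, including the list-valued axis mentioned in the commit message; argument names assume the documented signatures paddle.amax(x, axis=None, keepdim=False) and paddle.amin(...).

```python
import paddle

x = paddle.to_tensor([[0.1, 0.9, 0.9, 0.9],
                      [0.9, 0.9, 0.6, 0.7]])
print(paddle.amax(x))               # global maximum: 0.9
print(paddle.amax(x, axis=0))       # per-column maximum, shape [4]
print(paddle.amin(x, axis=[0, 1]))  # axis passed as a list, as added by this PR
```

The documented difference from paddle.max/min is that amax/amin distribute the gradient evenly among tied extreme elements rather than routing it to a single one.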
• [pten] remove in_type arg in cast kernel (#38486) · 0637b9a6
  Committed by chentianyu03
      * remove intype arg in cast kernel
      
      * modify conj config in api.yaml by dictionary order
      
      * rm unused code in cast_kernel.cu
• add reduce_prod_xpu. fix reduce_mean_xpu bug. (#38481) · 78836bb7
  Committed by houj04
      * add reduce_prod_xpu. fix reduce_mean_xpu bug.
      
* add reduce_prod_xpu. fix reduce_mean_xpu bug. test=kunlun
• [new-exec] add completion_notifier (#38447) · 404a4a6a
  Committed by Leo Chen
* add completion_notifier
      
      * fix bug
      
* unregister event waiter
• add mul_lstm_fuse_pass ut (#37795) · 1db61c3e
  Committed by baoachun
      * add mul_lstm_fuse_pass ut
      
      * update mul_lstm_fuse_pass ut
      
      * update ut
      
      * update ut
      
      * update ut
      
      * add CPU ut cmake setting
      
      * update ut
• f9e8a775
3. 27 December 2021, 10 commits
4. 26 December 2021, 4 commits
5. 24 December 2021, 6 commits
• renorm op (#38130) · 6982871d
  Committed by seemingwang (usage sketch after this entry)
      * graph engine demo
      
      * upload unsaved changes
      
      * fix dependency error
      
      * fix shard_num problem
      
      * py client
      
      * remove lock and graph-type
      
      * add load direct graph
      
      * add load direct graph
      
      * add load direct graph
      
      * batch random_sample
      
      * batch_sample_k
      
      * fix num_nodes size
      
      * batch brpc
      
      * batch brpc
      
      * add test
      
      * add test
      
      * add load_nodes; change add_node function
      
      * change sample return type to pair
      
      * resolve conflict
      
      * resolved conflict
      
      * resolved conflict
      
      * separate server and client
      
      * merge pair type
      
      * fix
      
      * resolved conflict
      
* fixed segmentation fault; high-level VLOG for load edges and load nodes
      
      * random_sample return 0
      
      * rm useless loop
      
      * test:load edge
      
      * fix ret -1
      
      * test: rm sample
      
      * rm sample
      
      * random_sample return future
      
      * random_sample return int
      
      * test fake node
      
      * fixed here
      
      * memory leak
      
      * remove test code
      
      * fix return problem
      
      * add common_graph_table
      
* random sample node & test & change data structure from linked list to vector
      
      * add common_graph_table
      
      * sample with srand
      
      * add node_types
      
      * optimize nodes sample
      
      * recover test
      
      * random sample
      
      * destruct weighted sampler
      
      * GraphEdgeBlob
      
      * WeightedGraphEdgeBlob to GraphEdgeBlob
      
      * WeightedGraphEdgeBlob to GraphEdgeBlob
      
      * pybind sample nodes api
      
      * pull nodes with step
      
      * fixed pull_graph_list bug; add test for pull_graph_list by step
      
      * add graph table;name
      
      * add graph table;name
      
      * add pybind
      
      * add pybind
      
      * add FeatureNode
      
      * add FeatureNode
      
      * add FeatureNode Serialize
      
      * add FeatureNode Serialize
      
      * get_feat_node
      
      * avoid local rpc
      
      * fix get_node_feat
      
      * fix get_node_feat
      
      * remove log
      
* get_node_feat returns py::bytes
      
      * merge develop with graph_engine
      
      * fix threadpool.h head
      
      * fix
      
      * fix typo
      
      * resolve conflict
      
      * fix conflict
      
      * recover lost content
      
      * fix pybind of FeatureNode
      
      * recover cmake
      
      * recover tools
      
      * resolve conflict
      
      * resolve linking problem
      
      * code style
      
      * change test_server port
      
      * fix code problems
      
      * remove shard_num config
      
* remove redundant threads
      
      * optimize start server
      
      * remove logs
      
      * fix code problems by reviewers' suggestions
      
      * move graph files into a folder
      
      * code style change
      
      * remove graph operations from base table
      
      * optimize get_feat function of graph engine
      
      * fix long long count problem
      
* remove redundant graph files
      
      * remove unused shell
      
      * recover dropout_op_pass.h
      
      * fix potential stack overflow when request number is too large & node add & node clear & node remove
      
* when sample k is larger than neighbor num, return directly
      
      * using random seed generator of paddle to speed up
      
      * fix bug of random sample k
      
      * fix code style
      
      * fix code style
      
      * add remove graph to fleet_py.cc
      
      * fix blocking_queue problem
      
      * fix style
      
      * fix
      
      * recover capacity check
      
      * add remove graph node; add set_feature
      
      * add remove graph node; add set_feature
      
      * add remove graph node; add set_feature
      
      * add remove graph node; add set_feature
      
      * fix distributed op combining problems
      
      * optimize
      
      * remove logs
      
      * fix MultiSlotDataGenerator error
      
      * cache for graph engine
      
      * fix type compare error
      
      * more test&fix thread terminating problem
      
      * remove header
      
      * change time interval of shrink
      
      * use cache when sample nodes
      
      * remove unused function
      
      * change unique_ptr to shared_ptr
      
      * simplify cache template
      
      * cache api on client
      
      * fix
      
      * reduce sample threads when cache is not used
      
      * reduce cache memory
      
      * cache optimization
      
      * remove test function
      
      * remove extra fetch function
      
      * graph-engine data transfer optimization
      
      * support graph_split load&query
      
      * remove logs
      
      * change shards to pointer vector
      
      * use inference
      
      * remove test code
      
      * renorm op
      
      * simplify renorm op
      
      * recover local changes
      
      * recover renorm op kernel
      
      * fix init
      
      * add blanklines in renorm doc
      
      * fix import
      
      * fix import
Co-authored-by: Huang Zhengjie <270018958@qq.com>
Co-authored-by: Weiyue Su <weiyue.su@gmail.com>
Co-authored-by: suweiyue <suweiyue@baidu.com>
Co-authored-by: luobin06 <luobin06@baidu.com>
Co-authored-by: liweibin02 <liweibin02@baidu.com>
Co-authored-by: tangwei12 <tangwei12@baidu.com>
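The final commits in this long entry add the renorm op itself. A minimal usage sketch, assuming the public signature paddle.renorm(x, p, axis, max_norm) with torch.renorm-style semantics:

```python
import paddle

x = paddle.to_tensor([[[ 2.0,  2.0, -2.0],
                       [ 3.0,  0.3,  3.0]],
                      [[ 2.0, -8.0,  2.0],
                       [ 3.1,  3.7,  3.0]]])
# Split x into slices along axis 0 and rescale every slice whose L2 norm
# exceeds max_norm so that its norm equals max_norm; other slices are left as-is.
y = paddle.renorm(x, p=2.0, axis=0, max_norm=1.0)
print(y.shape)  # [2, 2, 3]
```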
• [AMP] Add multi_precision for sgd (#38231) · a4d07bb9
  Committed by zhangbo9674 (usage sketch after this entry)
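A hedged sketch of how the new option is likely used, assuming it is exposed as the multi_precision keyword of paddle.optimizer.SGD, mirroring the existing AMP-aware optimizers (Momentum, Adam):

```python
import paddle

model = paddle.nn.Linear(8, 2)
# With multi_precision=True the optimizer keeps float32 master weights for any
# float16 parameters, which stabilizes AMP training; for float32 parameters the
# flag is effectively a no-op.
opt = paddle.optimizer.SGD(learning_rate=0.01,
                           parameters=model.parameters(),
                           multi_precision=True)
```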
• [pten] combine reduce_cuda codes (#38328) · 08941eda
  Committed by chentianyu03
      * combine reduce_cuda codes
      
* support float16 in pten reduce_mean
      
      * replace ReduceCudaKernel impl with pten reduce impl
      
      * mv reduce funcs into reduce_cuda_impl
      
* rm unused codes and headers
      
      * mv GetReduceDim into reduce_cuda_impl
      
      * recover GetReduceDim in reduce_op.h
      
      * add new dispatch macro
      
* fix pool op output not initialized, which caused a transform-to-pten::DenseTensor error
      
      * fix output tensor not initialized error
      
      * rename new dispatch macro and format code style
      
      * rm reduce_functor_op.h file
• [heterps] move pre-init id logic from common_sparse_table to sparse_geo_table (#38173) · 52329f6f
  Committed by zmxdream
* remove pre-init id in common_sparse_table.cc
• add new API/OP: paddle.Tensor.exponential_ (#38256) · 33185000
  Committed by zhouweiwei2014 (usage sketch after this entry)
      * add new API/OP:paddle.Tensor.exponential_
      
      * fix CI
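A minimal sketch of the new in-place API, assuming the documented signature Tensor.exponential_(lam=1.0), where lam is the rate parameter of the exponential distribution:

```python
import paddle

paddle.seed(2021)
x = paddle.empty([2, 3])   # uninitialized float32 tensor
x.exponential_(lam=2.0)    # fill x in place with Exponential(rate=2.0) samples
print(x)
```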
• [MLU] add mlu op interface (#38241) · c396ee65
  Committed by 努力努力在努力丶
      * [MLU]add mlu op interface
      
      * [MLU]fix alpha of activation op