1. 19 Apr 2021, 1 commit
  2. 17 Apr 2021, 1 commit
  3. 15 Apr 2021, 7 commits
    • tree-based-model (#31696) · a8c3a902
      Committed by 123malin
      * add index_dataset and index_sampler for tree-based model
      a8c3a902
    • [ROCM] bugfix for unit tests (#32258) · 90133d24
      Committed by furnace
      * [ROCM] bugfix for test_conv_transpose_nn_grad
      
      * [ROCM] bugfix for test_batch_norm_op_v2
      
      * [ROCM] bugfix for test_empty_like_op
      
      * [ROCM] bugfix for test_conv_transpose_nn_grad
      90133d24
    • heterps support pscore (#32093) · 9f8c8f96
      Committed by Thunderbrook
      * pscore support heterps
      
      * fleet cmake
      
      * fleet wrapper
      
      * macro
      
      * solve conflict
      
      * solve conflict
      
      * add unitest
      
      * paddle enforce
      
      * unitest
      
      * unitest
      
      * unitest
      9f8c8f96
    • support int for nearest_interp, test=develop (#32270) · 668a0d3b
      Committed by xiaoting
      668a0d3b
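      The commit above only notes that nearest_interp now accepts integer inputs. A minimal sketch of what that enables, assuming the new dtype is reachable through paddle.nn.functional.interpolate:
      ```python
      import paddle
      import paddle.nn.functional as F

      # NCHW int32 input; previously nearest_interp only accepted float dtypes.
      x = paddle.arange(16, dtype='int32').reshape([1, 1, 4, 4])
      y = F.interpolate(x, size=[8, 8], mode='nearest')
      print(y.dtype, y.shape)  # dtype preserved, spatial dims upsampled to 8x8
      ```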
    • 【Deepmd Support】add IsInitialized and tanh double grad (#32188) · cfdde0ec
      Committed by Jiabin Yang
      * add IsInitialized
      
      * rm additional log and add tanh double grad
      
      * rename is_initialized
      cfdde0ec
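      A short sketch of exercising the tanh double grad added above, using paddle.grad with create_graph=True (dygraph mode assumed):
      ```python
      import paddle

      x = paddle.rand([4])
      x.stop_gradient = False
      y = paddle.tanh(x)

      # First-order gradient, keeping the graph so it can be differentiated again.
      (dx,) = paddle.grad(y, x, create_graph=True)
      # Second-order (double) gradient of tanh, the kernel this commit adds.
      (ddx,) = paddle.grad(dx, x)
      print(ddx)
      ```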
    • Customizable Python Layer in Dygraph (#32130) · 29f65225
      Committed by WeiXin
      * custom python backward
      
      * polish up the code
      
      * polish up the code
      
      * polish up the code.
      
      * Fix code format and comments.
      
      * Delete redundant files.
      
      * add unnittest.
      
      * edit unnittest.
      
      * edit unnittest.
      
      * Remove redundant header files.
      
      * Improve coverage and remove redundant code.
      
      * support saving for backward.
      
      * polish code according to comments.
      
      * Add support type for PyLayer.
      
      * Modify the DOC.
      
      * polish Doc.
      
      * polish Doc.
      
      * polish Doc.
      
      * polish Doc.
      
      * polish Doc.
      
      * polish Doc.
      
      * polish code and make the code robust.
      
      * Modify the code format.
      29f65225
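      A minimal sketch of the customizable PyLayer described above; the CusTanh class is illustrative only, and dygraph mode is assumed:
      ```python
      import paddle
      from paddle.autograd import PyLayer

      class CusTanh(PyLayer):
          @staticmethod
          def forward(ctx, x):
              y = paddle.tanh(x)
              ctx.save_for_backward(y)   # the "saving for backward" feature from the commit log
              return y

          @staticmethod
          def backward(ctx, dy):
              y, = ctx.saved_tensor()
              return dy * (1 - paddle.square(y))

      x = paddle.rand([2, 3])
      x.stop_gradient = False
      out = CusTanh.apply(x)
      out.sum().backward()
      print(x.grad)
      ```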
    • 【NPU】Cherry-pick ascendrc ops code by 0325 to develop (#32197) · e6bc358d
      Committed by zhang wenhui
      * merge 31065
      
      * Fix typo of selected_npus (#31230)
      
      * merge 31249
      
      * [NPU] Support npu op pow and pow grad (#31247)
      
      * [NPU] Support npu op: (1) pow (2) pow_grad
      
      * Support fp16
      
      * Fix pow npu fp16 test (#31256)
      
      * support list of list attribute for NPU (#31299)
      
      * support list of list attribute for NPU
      
      * fix compile problem
      
      * fix reference
      
      * [NPU] Support npu op: (1) slice (2) slice_grad (#31275)
      
      * fix reading flags from env (#31329)
      
      * merge 31347
      
      * [NPU] Support npu op layer_norm and layer_norm_grad (#31310)
      
      * init commit, add layer_norm npu kernel
      
      * fix typo
      
      * add unittest
      
      * add unittest
      
      * fix bug
      
      * fix bug
      
      * refine ut
      
      * [NPU] add npu kernel for equal op (#31393)
      
      * add npu kernel for equal op
      
      * refine code
      
      * add more ut
      
      * update year
      
      * [NPU] Support npu kernel for shape op  (#31427)
      
      * add shape npu
      
      * fix
      
      * fix
      
      * fix endif (#31431)
      
      * Fix pow, use fillD instead of broadcast (#31433)
      
      * Fix pow, refine code (#31440)
      
      * fix cmake of cryptopp to avoid downloading every time (#31451)
      
      * [NPU] squeeze and unsqueeze op for ascend (#31452)
      Co-authored-by: root <xiayanming@baidu.com>
      
      * Support npu kernel for gather op (#31458)
      
      * add gather npu op
      
      * code review done
      
      * update python new line
      
      * precommit
      
      * fix review
      
      * del commit
      
      * 【NPU】add scale op for npu (#31499)
      
      * add scale npu
      
      * fix
      
      * fix
      
      * Support TensorFormVector, TensorToVector of bool type (#31518)
      
      * support TensorFormVector, TensorToVector of bool type
      
      * add ut
      
      * fix compile problem
      
      * 【NPU】support npu kernel for fill_constant op (#31521)
      
      * add fill_constant npu
      
      * add fill_constant npu
      
      * fix
      
      * cherry-pick 31422, solve conflict
      
      * 【NPU】Support npu kernel for matmul op (#31544)
      
      * add matmulv2_npu
      
      * add matmul
      
      * add matmul
      
      * [NPU] Support npu op elementwise_mul and elementwise_mul_grad (#31571)
      
      * [NPU] Support npu op elementwise_max (#31574)
      
      * 【NPU】add relu op for  npu (#31515)
      
      * add relu npu
      
      * fixed
      
      * fix
      
      * 【NPU】Suppert npu kernel for reshape2 op (#31524)
      
      * add reshape2 npu
      
      * add reshpe2
      
      * [NPU] Support npu kernel for gather op fix bug (#31541)
      
      * add gather npu op
      
      * code review done
      
      * update python new line
      
      * precommit
      
      * fix review
      
      * del commit
      
      * update gather_grad
      
      * fix bug
      
      * fix bug
      
      * [NPU] Support npu kernel for amp_check_finite_and_unscale_npu op (#31457)
      
      * Support npu kernel for amp_check_finite_and_unscale_npu op
      
      * support EnforceNotMet exception
      
      * fix exception bug
      
      * modify python unittest
      
      * precommit
      
      * update c++ unittest
      
      * fix review
      
      * fix review
      
      * [NPU] accuracy op (#31492)
      
      * accuracy op
      
      * fix license
      
      * fix
      
      * add test and fix bug
      
      * [NPU] add Assign OP (#31561)
      
      * add assign op
      
      * add test assign npu test
      
      * dele if def
      Co-authored-by: oyjxer <1728722986@qq.com>
      
      * [NPU] fix npu op elementwise_mul_grad (#31592)
      
      * 【NPU】Support npu op gelu and gelu_grad (#31530)
      
      * Support npu op gelu and gelu_grad
      
      * Support npu op gelu and gelu_grad
      
      * [NPU] fix assgin cmake (#31595)
      
      * fix gather_grad bug (#31607)
      
      * [NPU] add range op (#31560)
      
      * add range op
      
      * fix codestyle; call GetSize directly
      Co-authored-by: oyjxer <1728722986@qq.com>
      
      * 【NPU】Support npu op elementwise_div and elementwise_div_grad (#31573)
      
      * Support npu op elementwise_div and elementwise_div_grad
      
      * Support npu op elementwise_div and elementwise_div_grad
      
      * Support npu op elementwise_div and elementwise_div_grad
      
      * [NPU] Support npu op log, log_grad, sqrt, sqrt_grad, square, tanh and tanh_grad (#31600)
      
      * [NPU] Support npu op logicalnot_op (#31534)
      
      * [NPU] Support npu op elementwise_min (#31575)
      
      * [NPU] Support npu op elementwise_pow (#31576)
      
      * [NPU] Support npu op table_lookup_v2 and table_lookup_v2_grad (#31399)
      
      * [npu] support npu kernel `table_lookup_v2`
      
      * clean up
      
      * +python test
      
      * +cmake
      
      * clean up
      
      * remove int8 kernel
      + python unitest for fp16
      
      * clean up
      
      * [NPU] support npu kernel for `less_than` (#31327)
      
      * [npu] support npu kernel for `less than`
      
      * remove int* kernel
      
      * cleanup
      
      * [NPU] Support npu kernel scatter op (#31624)
      
      * Support npu kernel scatter op
      
      * Add more test
      
      * [NPU] fix allocator min chunk size (#31632)
      
      * [NPU] Support NPU kernel cast op (#31635)
      Co-authored-by: frankwhzhang <frankwhzhang@126.com>
      
      * [NPU] add npu kernel for sgd (#31639)
      
      * 【NPU】Support NPU kernel for reduce_sum op v2 (#31620)
      
      * add reduce_sum
      
      * fix broadcastd
      
      * fix test
      
      * fix
      
      * add unsqueeze in reduce_sum
      
      * add template
      
      * add unittest for keep_dim
      
      * test reduce_all
      Co-authored-by: frankwhzhang <frankwhzhang@126.com>
      
      * [NPU] add npu kernel for adam (#31644)
      
      * add npu kernel for adam
      
      * refine code
      
      * disable test
      
      * modify atol
      
      * 【NPU】Support npu kernel for mul op (#31584)
      
      * add mul
      
      * add test mul
      
      * [NPU] add npu kernel for softmax_with_cross_entropy (#31656)
      
      * init
      
      * fix bugs
      
      * [NPU] add npu kernel for mean Op (#31562)
      
      * update mean op
      
      * update mean op
      
      * give a better test activation
      Co-authored-by: oyjxer <1728722986@qq.com>
      
      * Revert "[NPU] add npu kernel for mean Op (#31562)" (#31665)
      
      This reverts commit 468ac699.
      
      * 【NPU】Add TensorCopy to NPU kernel for reduce_sum op  (#31667)
      
      * update unittest
      
      * add TensorCopy in npu grad kernel
      
      * [NPU] Support npu op `expand` (#31405)
      
      * [npu] support npu kernel  for `expand`
      
      * [NPU] fix shape of dx in mul_grad (#31675)
      
      * fix shape of dx
      
      * refine code
      
      * [NPU] add Increment op (#31563)
      
      * add increment
      
      * fix
      
      * update test increment op inplace
      
      * update increment op
      
      * increment b = 2
      Co-authored-by: oyjxer <1728722986@qq.com>
      
      * [NPU] add NPU add topk  (#31596)
      
      * add topk op
      
      * add cmake
      
      * update topk npu op
      
      * refactor func
      
      * fix test not go npu TopKD bug
      
      * NPUPlace(4) to NPUPlace(0)
      
      * update comment
      Co-authored-by: oyjxer <1728722986@qq.com>
      
      * [NPU] Support NPU kernel sum op (#31671)
      
      * [NPU] npu support `transpose` (#31486)
      
      * cherry-pick 31564, solve conflict
      
      * [NPU] Fix bug: Fix calculation errors of pow grad npu kernel (#31699)
      
      * [NPU] Support testing grad of NPU ops in OpTest (#31697)
      
      * [NPU] Support NPU kernel of stack op (#31711)
      
      * [NPU] Remove redundant ctest of top_k_op_npu_test (#31718)
      
      * [NPU] fix reshape npu op kernel (#31726)
      
      * rename npu op file
      
      * fix reshape
      
      * [NPU] change transpose to transpose2 (#31734)
      
      * change transpose to transpose2
      
      * fix bug
      
      * [NPU] Support  mean npu kernel (#31729)
      
      * [NPU] fix some bugs of npu op (#31739)
      
      * fix softmax
      
      * fix mean
      
      * fix lookup_table_v2
      
      * 【NPU】Fix npu kernel elementwise_div_grad  (#31753)
      
      * [NPU] fix the grad kernel diff bug of gather op (#31757)
      
      * fix gather grad kernel diff
      
      * fix gather grad kernel diff
      
      * fix gather review bug
      
      * 【NPU】Fix reshape test & add grad test (#31776)
      
      * fix
      
      * fix
      
      * [NPU] support fp16 for npu accuracy op (#31797)
      
      * [NPU] support list of tensor input (#31801)
      
      * support list of tensor as npu input
      
      * add comment
      
      * fix typo
      
      * fix typo
      
      * [NPU] add npu kernel for concat op (#31695)
      
      * add npu kernel for concat op
      
      * add npu kernel for concat op
      
      * refine code
      
      * update
      
      * refine concat_grad
      
      * [NPU] Support npu kernel for op elementwise_floordiv (#31822)
      
      * [NPU] fix bug of lookup_table_v2_grad (#31834)
      
      * [NPU] support default stream (#31510)
      
      * [NPU] support mixed precision input for npu layer norm (#31847)
      
      * support mixed precision input for npu layer norm
      
      * fix layer_norm npu kernel
      Co-authored-by: zhiqiu <chenqiuliang@baidu.com>
      
      * 【NPU】Support npu kernel for update_loss_scaling op (#31830)
      
      * add update_loss_scaling_npu NPU kernel
      
      * change TensorFromVec to Memset
      
      * fix compile problem (#31850)
      
      * [NPU] support npu for conditional_block op (#31854)
      
      * 【NPU】Add int dtype kernel for reshape2 op (#31864)
      
      * fix
      
      * fix
      
      * [NPU] fix some op bugs (#31855)
      
      * fix some op bugs
      
      * fix some bugs
      
      * follow comments
      
      * fix log level
      
      * add ut
      
      * [NPU] support fp16 of input for api pow (#31871)
      
      * [NPU] add npu kernel for truncated_gaussian_random op (#31654)
      
      * init
      
      * add todo
      
      * add npu kernel for truncated_gaussian_random
      
      * add sync
      
      * fix concat_grad
      
      * fix typo
      
      * fix compile
      
      * fix compile
      
      * fix compile
      
      * fix compile
      
      * fix compile
      
      * fix compile
      
      * fix code style
      
      * fix code style
      
      * fix code
      
      * Fix op test (#32231)
      
      * fix conditional block (#32243)
      
      * fix style code
      Co-authored-by: xiayanming <41795079@qq.com>
      Co-authored-by: Leo Chen <chenqiuliang@baidu.com>
      Co-authored-by: liym27 <33742067+liym27@users.noreply.github.com>
      Co-authored-by: Reventon_L <luyuxiang1994@qq.com>
      Co-authored-by: root <xiayanming@baidu.com>
      Co-authored-by: oyjxer <1728722986@qq.com>
      Co-authored-by: yinhaofeng <66763551+yinhaofeng@users.noreply.github.com>
      Co-authored-by: OleNet <olenet@126.com>
      Co-authored-by: Meiyim <chen_xuyi@outlook.com>
      Co-authored-by: oyxuan-11 <963650125@qq.com>
      Co-authored-by: pangyoki <pangyoki@126.com>
      e6bc358d
  4. 14 Apr 2021, 6 commits
  5. 13 Apr 2021, 3 commits
    • extend multiclass_nms unittest timeout threshold (#32214) · cb81826a
      Committed by Pei Yang
      * extend multiclass_nms unittest timeout threshold
      
      * adjust timeout to 200s
      
      * temporarily disable multiclass_nms trt op teller
      cb81826a
    • add layer.to api (#32040) · 6e946e9d
      Committed by chentianyu03
      * add layer.to api
      
      * add layer.to api
      
      * add layer.to api
      
      * add the doc for Layer.to
      
      * add input type checking
      
      * modify assert and import bug
      
      * format code style
      
      * format code style
      
      * make place support str type
      
      * add SetGradVarBase method to set the gradient after conversion
      
      * modify argument palce to device
      
      * modify argument palce to device
      
      * modify doc of layers.to API
      
      * add xpuplace to device argument
      6e946e9d
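      A brief sketch of the Layer.to API added above, assuming the string-style device argument mentioned in the commit notes:
      ```python
      import paddle

      linear = paddle.nn.Linear(4, 4)
      # Cast parameters and buffers in place; per the commit, `device` also accepts
      # strings such as "cpu", "gpu:0" or "xpu:0".
      linear.to(device='cpu', dtype='float64')
      print(linear.weight.dtype, linear.weight.place)
      ```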
    • [ROCM] fix depth conv2d in rocm, test=develop (#32170) · 693c7629
      Committed by Qi Li
      693c7629
  6. 12 Apr 2021, 1 commit
    • [ROCM] fix some unittests (#32129) · bd2a4e23
      Committed by ronnywang
      * [ROCM] fix test_gru_rnn_op
      
      * [ROCM] fix test_expand_op
      
      * [ROCM] fix test_cross_entropy_loss
      
      * [ROCM] fix test_conv_nn_grad
      
      * [ROCM] fix test_bilinear_tensor_product_op
      
      * [ROCM] fix elementwise_op_function
      
      * [ROCM] fix test_lstm_cudnn_op
      
      * [ROCM] fix test_gpu_package_without_gpu_device
      
      * [ROCM] fix test_gru_unit_op
      
      * [ROCM] fix test_imperative_optimizer
      
      * [ROCM] fix rnn
      
      * [ROCM] fix group_norm_op
      
      * [ROCM] fix test_pool3d_api
      
      * [ROCM] fix test_pool3d_op
      bd2a4e23
  7. 09 Apr 2021, 4 commits
    • [NPU] cherry-pick basic NPU components/allocator/operator/executor supports from ascendrc (#32144) · ccf5709d
      Committed by Leo Chen
      * [feature] support npu allocator (#30840)
      
      [feature] support npu allocator
      
      * [feature] support npu operator (#30951)
      
      [feature] support npu operator
      
      * [feature] support npu allocator, part 2 (#30972)
      
      * support npu allocator
      
      * add npu device context
      
      * fix some compile problem
      
      * fix some compile problem
      
      * add npu info
      
      * compile ok
      
      * fix include dir
      
      * support naive_best_fit_allocator
      
      * run ut ok, bug failed to exit
      
      * call aclrtResetDevice before exit
      
      * fix aclFinilize
      
      * add system allocatot test
      
      * add selected_gpus in gtest
      
      * add tensor_test for npu
      
      * support npu op, initial commit
      
      * add npu stream
      
      * add elementwise_add_op
      
      * compile ok
      
      * fix typo
      
      * fix elementwise_add_op_npu_test
      
      * support op run
      
      * test can run but failed
      
      * change aclopExecuteV2 to aclopCompileAndExecute
      
      * support parsing ascend rank table file (#31000)
      
      support parsing ascend rank table file
      
      * Fix reshape on GE graph. (#31084)
      
      Fix reshape on GE graph
      
      * add npu kernel for elementwise_sub and elementwise_sub_grad (#30973)
      
      * add npu sub op
      
      * fix typo
      
      * rename test
      
      * fix bug
      
      * fix bug
      
      * add fp16 kernel
      
      * fix typo
      
      * support sub grad op
      
      * support elementwise_sub_grad op
      Co-authored-by: frankwhzhang <frankwhzhang@126.com>
      
      * Fix compilation problem (#31100)
      
      Fix compilation problem (#31100)
      
      * fix compile
      
      * fix code stype
      
      * remove const_cast
      
      * support adding correct npu op in pybind.h (#31143)
      
      * support adding correct npu op in pybind.h
      
      * refine code
      
      * [NPU] Support executor with NPU (#31057)
      
      * [NPU] Support executor with NPU
      
      * Fix code according to reviews
      
      * Fix code
      
      * Add unittest for sub op npu
      
      * refactor npu device manager (#31154)
      
      refactor npu device manager (#31154)
      
      * fix selected npus
      
      * fix compile
      
      * fix reading flags from env
      
      * format
      Co-authored-by: xiayanming <41795079@qq.com>
      Co-authored-by: gongweibao <weibao.gong@gmail.com>
      Co-authored-by: frankwhzhang <frankwhzhang@126.com>
      Co-authored-by: liym27 <33742067+liym27@users.noreply.github.com>
      ccf5709d
    • fix unittest timeout (#32161) · a73cb679
      Committed by Shang Zhizhou
      a73cb679
    • [Dy2Stat] Fix undefined var used in For (#32153) · 4636d136
      Committed by Aurelius84
      * fix undefind var in For
      
      * fix code style
      4636d136
    • [Dy2Stat] Support DictCmp and zip grammar (#32159) · 55730d95
      Committed by Aurelius84
      * support DictCmp and zip grammar
      
      * fix code style
      55730d95
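      A toy dygraph-to-static sketch touching the two grammar points named above, zip iteration and dict comparison; the function fuse is illustrative only:
      ```python
      import paddle

      @paddle.jit.to_static
      def fuse(x, y):
          out = paddle.zeros_like(x)
          for a, b in zip([x, y], [y, x]):   # zip grammar handled by Dy2Stat
              out = out + a * b
          if {'k': 1} == {'k': 1}:           # dict comparison (DictCmp)
              out = out + 1
          return out

      print(fuse(paddle.ones([2]), paddle.full([2], 2.0)))
      ```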
  8. 08 Apr 2021, 3 commits
  9. 07 Apr 2021, 4 commits
    • add uint8 type for flatten op (#32120) · 297290a8
      Committed by danleifeng
      * add uint8 type for flatten;test=develop
      297290a8
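      The uint8 path added above, shown as a short usage sketch:
      ```python
      import numpy as np
      import paddle

      x = paddle.to_tensor(np.arange(24, dtype='uint8').reshape(2, 3, 4))
      y = paddle.flatten(x, start_axis=1, stop_axis=2)   # uint8 input is now accepted
      print(y.shape, y.dtype)                            # [2, 12], paddle.uint8
      ```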
    • 【NPU】Merge ascend GE&distributed code by 0208 from ascendrc (#31957) · 8c7c53b3
      Committed by zhang wenhui
      * Ascend rc (#30483)
      
      * Fix compilcation on CANN20.1 and older (#30494)
      
      Fix compilcation on CANN20.1 and older
      
      * Add distribution supported (#30578)
      
      Add distribution supported
      
      * Build praser for Hcom* operators (#30627)
      
      Build praser for Hcom* operators
      
      * Pass device_ids info from launch to trainer. (#30632)
      
      Pass device_ids info from launch to trainer
      
      * Add Hccl program group (#30642)
      
      Add Hccl program group
      
      * Add startup bash files of test_ascend_group. (#30645)
      
      Add startup bash files of test_ascend_group
      
      * cleanup (#30646)
      
      cleanup test_ascend_group.py
      
      * [Feature] Build parser to support distributed training (#30658)
      
      [Feature] Build parser to support distributed training
      
      * fix compilation on ascend-20.1 (#30722)
      
      fix compilation on ascend-20.1
      
      * Dev/fix ascend string (#30749)
      
      Dev/fix ascend string
      
      * code style (#30781)
      
      code style
      
      * Merge ascend_optimizer and ascend_parser. (#30776)
      
      Merge ascend_optimizer and ascend_parser.
      
      * Ascendrc add converted op : [range/equal/range/uniform_random/expand/squeeze], fix cast op bug  (#30797)
      
      Ascendrc add converted op : [range/equal/range/uniform_random/expand/squeeze], fix cast op bug
      
      * Add paddle ascend distribution training supported (#30796)
      
      Add paddle ascend distribution training supported
      
      * pass cxx_flags to gloo cmake (#30857)
      
      * Destroy session first. (#30954)
      
      Destroy session first.
      
      * merge
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix style, test=develop
      
      * fix, test=develop
      
      * fix
      
      * fix log fatal, test=develop
      
      * fix enforce style, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix rccl, test=develop
      
      * fix test, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix node_num, test=develop
      
      * fix ids str, test=develop
      
      * fix ids str, test=develop
      
      * fix ids str, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix, test=develop
      
      * fix style code, test=develop
      
      * fix style code, test=develop
      
      * fix style code, test=develop
      
      * fix style code, test=develop
      Co-authored-by: hutuxian <hutuxian2011@sina.cn>
      Co-authored-by: gongweibao <weibao.gong@gmail.com>
      Co-authored-by: Void Main <voidmain1313113@gmail.com>
      Co-authored-by: Leo Chen <chenqiuliang@baidu.com>
      Co-authored-by: dingsiyu <18369187719@163.com>
      Co-authored-by: OleNet <olenet@126.com>
      8c7c53b3
    • [3D-parallelism] Hybrid Model Parallelism (#32074) · 1e60a0c4
      Committed by JZ-LIANG
      1e60a0c4
  10. 06 Apr 2021, 3 commits
  11. 02 Apr 2021, 3 commits
    • support save/load single tensor (#31756) · 43367e4b
      Committed by WeiXin
      * support save/load single tensor
      
      * compatibility modification according to unnittest
      
      * Some python2.7 don't have 'copyreg' modules
      
      * Handle a syntax error.
      
      * Dealing with compatibility problems on Mac.
      
      * Dealing with compatibility problems on Mac.
      
      * edit unittest to improve coverage.
      
      * Modify the code according to the review comments
      
      * Reduce redundant code.
      
      * support for static graph loading dygraph state_dict
      
      * edit code according to CI
      
      * edit unittest
      
      * edit unnittest
      
      * delete redundant file
      
      * edit code according to Comments
      
      * edit english doc
      
      * edit english doc
      
      * edit English DOC.
      
      * get/set_tensor->get/set_value; return_numpy=False
      
      * get/set_tensor->get/set_value; return_numpy=False
      
      * edit unnittest
      
      * edit unnittest
      
      * polish code.
      43367e4b
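      A minimal sketch of the single-tensor save/load flow this commit enables; the file name is arbitrary here:
      ```python
      import paddle

      t = paddle.rand([3, 4])
      paddle.save(t, 'single_tensor.pdtensor')   # a bare Tensor, not a state_dict

      loaded = paddle.load('single_tensor.pdtensor')
      print(paddle.allclose(t, loaded))
      ```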
    • graph engine (#31226) · 94736d60
      Committed by seemingwang
      * graph engine demo
      
      * upload unsaved changes
      
      * fix dependency error
      
      * fix shard_num problem
      
      * py client
      
      * remove lock and graph-type
      
      * add load direct graph
      
      * add load direct graph
      
      * add load direct graph
      
      * batch random_sample
      
      * batch_sample_k
      
      * fix num_nodes size
      
      * batch brpc
      
      * batch brpc
      
      * add test
      
      * add test
      
      * add load_nodes; change add_node function
      
      * change sample return type to pair
      
      * resolve conflict
      
      * resolved conflict
      
      * resolved conflict
      
      * separate server and client
      
      * merge pair type
      
      * fix
      
      * resolved conflict
      
      * fixed segment fault; high-level VLOG for load edges and load nodes
      
      * random_sample return 0
      
      * rm useless loop
      
      * test:load edge
      
      * fix ret -1
      
      * test: rm sample
      
      * rm sample
      
      * random_sample return future
      
      * random_sample return int
      
      * test fake node
      
      * fixed here
      
      * memory leak
      
      * remove test code
      
      * fix return problem
      
      * add common_graph_table
      
      * random sample node &test & change data-structure from linkedList to vector
      
      * add common_graph_table
      
      * sample with srand
      
      * add node_types
      
      * optimize nodes sample
      
      * recover test
      
      * random sample
      
      * destruct weighted sampler
      
      * GraphEdgeBlob
      
      * WeightedGraphEdgeBlob to GraphEdgeBlob
      
      * WeightedGraphEdgeBlob to GraphEdgeBlob
      
      * pybind sample nodes api
      
      * pull nodes with step
      
      * fixed pull_graph_list bug; add test for pull_graph_list by step
      
      * add graph table;name
      
      * add graph table;name
      
      * add pybind
      
      * add pybind
      
      * add FeatureNode
      
      * add FeatureNode
      
      * add FeatureNode Serialize
      
      * add FeatureNode Serialize
      
      * get_feat_node
      
      * avoid local rpc
      
      * fix get_node_feat
      
      * fix get_node_feat
      
      * remove log
      
      * get_node_feat return  py:bytes
      
      * merge develop with graph_engine
      
      * fix threadpool.h head
      
      * fix
      
      * fix typo
      
      * resolve conflict
      
      * fix conflict
      
      * recover lost content
      
      * fix pybind of FeatureNode
      
      * recover cmake
      
      * recover tools
      
      * resolve conflict
      
      * resolve linking problem
      
      * code style
      
      * change test_server port
      
      * fix code problems
      
      * remove shard_num config
      
      * remove redundent threads
      
      * optimize start server
      
      * remove logs
      
      * fix code problems by reviewers' suggestions
      Co-authored-by: Huang Zhengjie <270018958@qq.com>
      Co-authored-by: Weiyue Su <weiyue.su@gmail.com>
      Co-authored-by: suweiyue <suweiyue@baidu.com>
      Co-authored-by: luobin06 <luobin06@baidu.com>
      Co-authored-by: liweibin02 <liweibin02@baidu.com>
      94736d60
  12. 01 Apr 2021, 4 commits
    • Support control flow in DataParallel (#31625) · 8460698b
      Committed by ShenLiang
      * support control flow
      
      * supoort sync_parameters_buffers
      
      * fix the bug of sparse embedding
      8460698b
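      A sketch of the data-dependent control flow case this commit targets. The Branchy layer is illustrative, the find_unused_parameters switch is assumed to accompany this feature, and the script is meant to be run under paddle.distributed.launch or spawn:
      ```python
      import paddle
      import paddle.distributed as dist

      class Branchy(paddle.nn.Layer):
          def __init__(self):
              super(Branchy, self).__init__()
              self.fc_a = paddle.nn.Linear(8, 8)
              self.fc_b = paddle.nn.Linear(8, 8)

          def forward(self, x):
              # Only one branch runs per step, so some parameters get no gradient.
              if paddle.mean(x) > 0:
                  return self.fc_a(x)
              return self.fc_b(x)

      dist.init_parallel_env()
      model = paddle.DataParallel(Branchy(), find_unused_parameters=True)
      loss = model(paddle.randn([4, 8])).mean()
      loss.backward()
      ```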
    • add custom init grad for backward function (#31540) · 83b953f5
      Committed by chentianyu03
      * add custom init grad for backward function
      
      * add custom init grad for backward function
      
      * handle when the grad_tensor is none
      
      * handle when the grad_tensor is none
      
      * fix the args type error on windows platform
      
      * modify the args order and doc
      
      * format code
      
      * add grad_tensor to xpu
      
      * modify the grad_tensor type check
      
      * add paddle.backward api to support multi tensors gradient compute
      
      * add paddle.backward api to support multi tensors gradient compute
      
      * add paddle.atuograd module and backward api
      
      * change tensor.backward func args
      
      * modify tensor backward api
      
      * remove create_graph intputs args
      
      * add doc and examplex code for backward api
      
      * when have the same tensor, throw error
      
      * modify test Init func args
      
      * modify the execute.Init func args in test files
      
      * add paddle.autograd package in setup.py.in
      
      * modify error msg, remove _run_backward method in class Tensor
      
      * add test cases for backward api
      83b953f5
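      A sketch of seeding backward with a custom initial gradient through the paddle.autograd.backward API this commit introduces (keyword names follow the commit description):
      ```python
      import paddle

      x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
      y = x * 2

      # One seed gradient per output tensor instead of the default all-ones.
      seed = paddle.to_tensor([1.0, 2.0, 3.0])
      paddle.autograd.backward([y], grad_tensors=[seed])
      print(x.grad)   # expected: 2 * seed
      ```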
    • LOG CLEAN (#31819) · 0589ed21
      Committed by tangwei12
      * upgrade vlog
      
      * train from dataset fetch optimize
      0589ed21
    • [Paddle-TRT] add anchor generator op plugin (#31730) · b807e408
      Committed by zlsh80826
      * add anchor generator op plugin
      
      * add anchor generator unit_test
      
      * remove dbg info
      
      * remove redundant line
      
      * replace assertion with paddle enforce
      
      * dynamic plugin replaces assertion with paddle enforce
      
      * anchor generator support dynamic shape on spatial axis
      
      * anchor generator test with fp16, dynamic shape
      
      * add anchor generator test all
      
      * add back main
      
      * reduce test input size to not exceed the timelimit of ci
      
      * change super to InferencePassTest for python2 compatibility
      
      * reuse paddle operator anchor generator
      
      * move creator construct to header with default
      
      * add cuda ifdef
      
      * reduce line
      
      * change super to InferencePassTest for python2 compatibility
      
      * fix anchor generator fp16 serialize setting
      
      * split unittest from test_all
      
      * restrict anchor generator input format before version 7234
      
      * anchor generator only support greater than trt7.1
      
      * change min_graph_size to 2
      
      * min_graph size to 3 if dynamic shape
      
      * reduce dynamic shape size to avoid trt search tactic too long to exceed time limit
      
      * remove anchor from fetch list
      
      * anchor generator support all trt version
      
      * fix memory not allocated but if serialized
      b807e408