1. 21 October 2020 (1 commit)
    • 2.0rc api rename (#28088) · 7c1aa0d6
      Committed by cnn
      * rename manual_seed to seed
      
      * rename xxx1d-->xxx1D, xxx2d-->xxx2D, xxx3d-->xxx3D
      
      * rename manual_seed --> seed
      
      * do not rename .cc, .cu and .h file
      
      * disable_static on doc example code
      
      * do not change manual_seed on the generator
      
      * add enable_static on sample code
      
      * convert python/paddle/fluid/layers/nn.py to bak
      
      * fix typo
      
      * fix code style
      
      * change seed back to manual_seed when calling functions of Generator()
      
      * fix bug
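      For readers updating code to the 2.0 RC API, a minimal before/after sketch
      of the rename (illustrative only, not taken from the PR; the post-rename
      names paddle.seed, paddle.nn.Conv2D and paddle.nn.MaxPool2D are assumed
      from the rename rules in the commit title):

      ```python
      import paddle

      # Global RNG seeding: the old spelling was paddle.manual_seed(2020);
      # after this PR the same call is written as:
      paddle.seed(2020)

      # Layer names follow the xxx2d --> xxx2D rule, e.g.:
      conv = paddle.nn.Conv2D(in_channels=3, out_channels=8, kernel_size=3)
      pool = paddle.nn.MaxPool2D(kernel_size=2)
      ```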
  2. 04 September 2020 (1 commit)
  3. 02 September 2020 (1 commit)
  4. 31 August 2020 (1 commit)
  5. 23 August 2020 (1 commit)
    • add alpha_dropout in nn.functional and nn.layer, test=develop (#26365) · 7bd7b188
      Committed by huangjun12
      * add alpha_dropout in nn.functional and nn.layer, test=develop
      
      * refine Interface and assertion, test=develop
      
      * fix ci import error, test=develop
      
      * fix alias and use layers.scale, test=develop
      
      * fix doc and training params, test=develop
      
      * refine details in doc, test=develop
      
      * fix doc details, test=develop
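      A minimal usage sketch of the two new entry points (assuming the Paddle 2.x
      names paddle.nn.functional.alpha_dropout and paddle.nn.AlphaDropout that the
      PR title suggests; illustrative, not taken from the PR):

      ```python
      import paddle
      import paddle.nn.functional as F

      x = paddle.randn([4, 16])

      # Functional form: dropout is applied only when training=True.
      y = F.alpha_dropout(x, p=0.5, training=True)

      # Layer form: follows the layer's train/eval mode.
      drop = paddle.nn.AlphaDropout(p=0.5)
      drop.eval()          # in eval mode the input passes through unchanged
      z = drop(x)
      ```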
  6. 22 August 2020 (1 commit)
  7. 20 December 2019 (1 commit)
  8. 17 December 2019 (1 commit)
  9. 12 December 2019 (1 commit)
  10. 10 December 2019 (1 commit)
  11. 02 December 2019 (1 commit)
  12. 07 October 2019 (1 commit)
  13. 28 April 2019 (1 commit)
    • Refine dropout gpu memory (#17095) · 28d69d71
      Committed by Zeng Jinle
      * refine_dropout_mem,test=develop
      
      * # This is a combination of 14 commits.
      # The first commit's message is:
      remove ut test_dist_word2vec in mac ci, will fix it in private, test=develop (#17066)
      
      # This is the 2nd commit message:
      
      Fleet unify distributed training (#16791)
      
      * implement distributed transpiler with fleet
      # This is the 3rd commit message:
      
      ParallelDyGraph with GPU collective mode (#16827)
      
      implement dygraph.parallel.DataParallel to hook reduce op.
      
      # This is the 4th commit message:
      
      Init mixed precision training interface (#16856)
      
      * Init mixed precision training interface
      
      * Add fp16 test script
      
      test=develop
      
      * All initializers support float16
      
      test=develop
      
      * Code cleanup & add more code annotations
      
      test=develop
      
      * Update API spec
      
      test=develop
      
      * Add usage example in doc
      
      test=develop
      
      # This is the 5th commit message:
      
      fix reference_count_pass,test=develop (#17060)
      
      test=develop
      # This is the 6th commit message:
      
      Speedup roi_perspective_transform op by caching the information of linear interpolation in forward (#17090)
      
      * Cache the information of linear interpolation in forward and use it in backward.
      test=develop
      
      * Fix cuda kernel.
      test=develop
      
      # This is the 7th commit message:
      
      remove unnecessary prepare_data (#17080)
      
      test=develop
      # This is the 8th commit message:
      
      fix interpolate cu. test=develop (#17101)
      
      # This is the 9th commit message:
      
      test=develop, double backward leaky_relu (#17067)
      
      backward of backward: leaky_relu
      # This is the 10th commit message:
      
      fix fuse optimizer ops (#17102)
      
      test=develop
      # This is the 11th commit message:
      
      truncated_gaussian_random supported in distributed training, test=develop (#17091)
      
      # This is the 12th commit message:
      
       Detailed coordinate description for yolov3 loss (#17007)
      
      * Detailed coordinate description for yolov3 loss
      
      test=develop
      
      * modified api.spec
      
      test=develop
      
      * modified loss name
      
      * fix api.spec
      
      test=develop
      
      * polish description
      
      test=develop
      
      * modified api.spec
      
      test=develop
      
      # This is the 13th commit message:
      
      fix test_weight_decay (#17109)
      
      test=develop
      # This is the 14th commit message:
      
      Path flag (#17105)
      
      * fix python/paddle/fluid/__init__.py detecting problems
  14. 24 October 2018 (1 commit)
  15. 23 October 2018 (1 commit)
  16. 15 August 2018 (1 commit)
  17. 26 July 2018 (2 commits)
  18. 20 March 2018 (4 commits)
  19. 24 February 2018 (1 commit)
  20. 13 February 2018 (1 commit)
    • Run Python OP tests in a single Python process to improve test time. (#8362) · cde6241a
      Committed by Xin Pan
      Currently, our tests run with 2 GPUs and the init time is absurdly long:
      about 4s for each process. Until now, we have run each OP test in a
      separate process. This PR:

      1. creates a cmake function py_test_modules, which generates the
      Makefile targets that run a list of Python unittest modules in a single
      Python process.

      2. moves all "python unittest compatible" tests (i.e., tests that use the
      unittest package rather than being plain regular python files) from
      fluid/tests to fluid/tests/unittests.

      3. cmake now runs all OP tests in fluid/tests/unittests in a single
      process, except the time-consuming tests, which are kept in separate
      processes to utilize parallelism. Please make sure to use the unittest
      package if you put a python test file in fluid/tests/unittests.

      4. removes all exit(0) calls from fluid/tests/unittests/*.py. exit(0) was
      used to disable a unittest, but we cannot do that when running all tests
      in a single process, since it would terminate the process without running
      the other tests. Instead, the test is disabled in
      fluid/tests/unittests/CMakeLists.txt, and a FIXME is added for each
      disabled item. For any Python file in fluid/tests/unittests/, please
      disable the unittest from fluid/tests/unittests/CMakeLists.txt instead of
      adding exit(0) to the Python file.

      5. adds an option WITH_FAST_BUNDLE_TEST. When OFF, the unit tests run in
      separate processes so that they can be tested individually.
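      As a reference for point 4 above, a minimal sketch of a unittest-compatible
      test module (the test case itself is hypothetical and only illustrates the
      module layout; real OP tests in fluid/tests/unittests build on the OpTest
      helper):

      ```python
      import unittest

      import numpy as np


      class TestBroadcastAddShape(unittest.TestCase):
          # Hypothetical check: verifies numpy broadcasting shape, standing in
          # for a real OP test body.
          def test_broadcast_shape(self):
              x = np.ones((2, 3), dtype="float32")
              y = np.ones((1, 3), dtype="float32")
              self.assertEqual((x + y).shape, (2, 3))


      if __name__ == "__main__":
          # No exit(0) here: disabling a test is done in
          # fluid/tests/unittests/CMakeLists.txt instead.
          unittest.main()
      ```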
  21. 12 February 2018 (1 commit)
  22. 30 January 2018 (1 commit)
  23. 21 January 2018 (1 commit)
    • "fix decode bug" (#7711) · e983cc90
      Committed by dzhwinter
      * "fix decode bug"
      
      * "follow commnet"
      
      * "fix error"
      
      * "fix hook bug"
      
      * fix based comment
      
      * fix copyright
      
      * fix based on comment
  24. 15 January 2018 (1 commit)
    • Feature/hooks (#7513) · b9b75377
      Committed by dzhwinter
      * add copyright hook
      
      * refine copyright hook
      
      * "test copyright hook"
      
      * fix check style
      
      * fix ci
  25. 21 December 2017 (1 commit)
  26. 24 November 2017 (1 commit)
  27. 14 November 2017 (1 commit)
  28. 27 October 2017 (1 commit)
    • Gradient check use graph (#5027) · be00b0c4
      Committed by Yu Yang
      * Simplify Gradient Check
      
      * Stash
      
      * Extract apply_backward_pass to backward.py
      
      Rename apply_backward_pass to append_backward_ops
      
      * Use graph API to check gradient
      
      * Fix ci
      
      * Fix CI
      
      * Fix backward for double precision
      
      * Stash
      
      * Fix CI
      
      * Fix ci
      
      * Ignore GRU test
      
      * Ignore xe op
      
      * Fix CI
      
      * Fix softmax with xe gradient
      
      The correct equation should be IG = OG * (d_softmax_with_xe())
      
      * Fix typo
      
      * Fix merge error
      
      * Disable LRN
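      The idea behind the gradient check reworked here can be illustrated with a
      generic finite-difference comparison. The sketch below uses plain NumPy on
      softmax with cross-entropy (the op mentioned in the message); it is an
      illustration of the technique, not the fluid graph API this commit adds:

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max())
          return e / e.sum()

      def softmax_xe_loss(x, label):
          # Cross-entropy of softmax(x) against a single hard label.
          return -np.log(softmax(x)[label])

      def numeric_grad(f, x, eps=1e-5):
          # Central-difference estimate of df/dx for a scalar-valued f.
          g = np.zeros_like(x)
          for i in range(x.size):
              orig = x[i]
              x[i] = orig + eps
              fp = f(x)
              x[i] = orig - eps
              fm = f(x)
              x[i] = orig
              g[i] = (fp - fm) / (2 * eps)
          return g

      x = np.random.rand(5)
      label = 2
      analytic = softmax(x)
      analytic[label] -= 1.0   # analytic gradient: softmax(x) - one_hot(label)
      numeric = numeric_grad(lambda v: softmax_xe_loss(v, label), x)
      assert np.allclose(analytic, numeric, atol=1e-6)
      ```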
  29. 20 September 2017 (1 commit)
  30. 19 September 2017 (2 commits)
  31. 03 September 2017 (2 commits)