1. 11 Sep 2019 (1 commit)
  2. 27 Aug 2019 (1 commit)
    • Support Tensor input with padding for warpctc op (#19322) · 482ce818
      Committed by vincentXiyu
      * support tensor input with padding for warpctc op
      
      * merge with develop
      
      * test=develop
      
      * modified python API examples test=develop
      
      * nn.py is modified for code coverage test=develop
      
      * update documentation for the warpctc op in API.spec test=develop
      
      * add test_warpctc_with_padding in test_layers test=develop
      
      * add warning log for cuda_version back to warpctc_op.cc
      
      * modify API.spec for warpctc op test=develop
      
      * modify API.spec
      
      * update warpctc test to new CompiledProgram API test=develop
      
      * modify code examples for warpctc op test=develop
      
      * modify API.spec for warpctc op test=develop
      
      * modify API.spec for warpctc op test=develop
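      The bullets above add a padded-Tensor code path to the warpctc layer. A rough
      usage sketch follows, assuming the fluid 1.x layers API; the input_length and
      label_length parameter names are an assumption based on this PR's description,
      and the released signature may differ.

          import paddle.fluid as fluid

          max_seq_len, batch_size, num_classes, max_label_len = 5, 2, 8, 3

          # Time-major padded logits instead of a LoDTensor.
          logits = fluid.layers.data(name='logits',
                                     shape=[max_seq_len, batch_size, num_classes],
                                     dtype='float32', append_batch_size=False)
          label = fluid.layers.data(name='label', shape=[batch_size, max_label_len],
                                    dtype='int32', append_batch_size=False)
          # Per-sequence valid lengths make the padding explicit.
          logits_len = fluid.layers.data(name='logits_len', shape=[batch_size],
                                         dtype='int64', append_batch_size=False)
          label_len = fluid.layers.data(name='label_len', shape=[batch_size],
                                        dtype='int64', append_batch_size=False)

          cost = fluid.layers.warpctc(input=logits, label=label, blank=0,
                                      input_length=logits_len,
                                      label_length=label_len)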
  3. 12 Jun 2019 (1 commit)
    • Cherry-pick: fix random CI failure. (#18011) · 0bf25351
      Committed by Huihuang Zheng
      * Cherry-pick the fix for a random Python 3 CI failure.
      
      In some tests, engineers wrote "print('xxx').format('xxx')". That syntax
      only works in Python 2, not Python 3. However, since those lines run only
      during data download, they are skipped on CI machines that already have
      the data and the tests pass, which causes intermittent failures.
      
      * Cherry-pick: disable CUDNN case of test_warpctc_op
      
      Also temporarily disable a unit test. The test will be fixed with high priority.
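      For context, the construct quoted above behaves very differently on the two
      interpreters: under Python 2, print is a statement, so the trailing .format()
      call applies to the string; under Python 3, print() is a function that returns
      None, so the same line raises AttributeError. A minimal reproduction (the
      'downloading {}' / 'data.tar' strings are illustrative, not from the repo):

          try:
              # Python 2: parsed as the print statement applied to
              #   ('downloading {}').format('data.tar')  -> prints fine.
              # Python 3: print(...) returns None, so .format() raises here.
              print('downloading {}').format('data.tar')
          except AttributeError:
              # Portable form that works on both interpreters.
              print('downloading {}'.format('data.tar'))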
  4. 10 Jun 2019 (1 commit)
  5. 16 Nov 2018 (1 commit)
    • Add cudnn ctc loss (#12366) · b32c13dc
      Committed by Wu Yi
      * add cudnn ctc loss
      
      * wip add test test=develop
      
      * wip
      
      * wip
      
      * done test=develop
      
      * move include cudnn test=develop
      
      * test test=develop
      
      * fix build test=develop
      
      * fix build test=develop
      
      * fix build on cudnn5 test=develop
      
      * fix cudnn5 build test=develop
      
      * fix cudnn5 build test=develop
      
      * merge develop softmax functor change test=develop
  6. 15 Aug 2018 (1 commit)
  7. 07 Aug 2018 (1 commit)
    • Fix pybind11 problem · 6abe819f
      Committed by minqiyang
      Fix str and bytes problem
      Fix sorted problem
      Fix math problem
      Fix CI problem
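      The commit message does not say which call sites were touched; as a rough
      illustration of the kinds of Python 2 to 3 incompatibilities it names (str vs
      bytes, sorted, and arithmetic), consider the following hypothetical snippets:

          import functools

          # str/bytes: strings read from files or returned from C++ may arrive
          # as bytes on Python 3; decode before treating them as text.
          raw = b"op_name"
          name = raw.decode('utf-8') if isinstance(raw, bytes) else raw

          # sorted: Python 3 removed cmp=; wrap an old comparator with
          # functools.cmp_to_key, or pass key= directly.
          cmp_fn = lambda a, b: (a > b) - (a < b)
          ops = sorted(["relu", "conv2d", "warpctc"], key=functools.cmp_to_key(cmp_fn))

          # math: '/' returns a float on Python 3; use '//' for integer division.
          half = 7 // 2

          print(name, ops, half)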
  8. 26 Jul 2018 (2 commits)
  9. 15 Jun 2018 (1 commit)
    • Modify Pybind LoDTensor API according to length-based LoD (#11106) · 417fcf4f
      Committed by Kexin Zhao
      * add lod_tensor util and modify pybind
      
      * refine pybind LoDTensor API and modify LoDTensor and DataFeeder test
      
      * fix test error
      
      * fix detection map op test
      
      * fix reorder_lod_tensor test
      
      * fix seq_concat_op
      
      * fix chunk eval op test
      
      * fix target assign op
      
      * fix warp ctc op
      
      * address comments step 1: reverse reset_lod op
      
      * step 2: modify op test
      
      * add warning message
      
      * remove has_valid_lod
      
      * add back has_valid_lod
      
      * address comments
      
      * add exception catching trial
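      For readers unfamiliar with the two conventions involved here: this PR moves
      the pybind LoDTensor API from offset-based LoD (cumulative start offsets) to
      length-based LoD (the number of rows in each sequence). A small standalone
      sketch of how the two forms relate (the fluid calls themselves are omitted,
      since their exact names vary across versions):

          # 3 sequences with lengths 3, 1 and 2 (6 rows of data in total).
          recursive_seq_lens = [3, 1, 2]          # length-based form

          # The offset-based form is the cumulative sum with a leading 0.
          offsets = [0]
          for length in recursive_seq_lens:
              offsets.append(offsets[-1] + length)
          assert offsets == [0, 3, 4, 6]          # rows 0-2, 3, and 4-5

          # Recovering lengths from offsets:
          lengths = [offsets[i + 1] - offsets[i] for i in range(len(offsets) - 1)]
          assert lengths == recursive_seq_lens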
  10. 22 May 2018 (1 commit)
  11. 21 May 2018 (1 commit)
  12. 24 Feb 2018 (1 commit)
  13. 13 Feb 2018 (1 commit)
    • Run Python OP tests in a single Python process to improve test time. (#8362) · cde6241a
      Committed by Xin Pan
      Currently, our tests run with 2 GPUs and the init time is absurdly long:
      about 4s for each process, and each OP test runs in a different process.
      This PR:
      
      1. create a cmake function py_test_modules which generates the Makefile
      rules that run a list of Python unittest modules in a single Python
      process.
      
      2. move all "python unittest compatible" files (i.e., those that use the
      unittest package rather than being plain Python scripts) from fluid/tests
      to fluid/tests/unittests.
      
      3. cmake now runs all OP tests in fluid/tests/unittests in a single
      process (see the sketch after this list), except for the time-consuming
      tests, which are kept in separate processes to exploit parallelism.
      Please make sure to use the unittest package if you put a Python test
      file in fluid/tests/unittests.
      
      4. remove all exit(0) calls from fluid/tests/unittests/*.py. exit(0) was
      used to disable a unittest, but we cannot do that when running all tests
      in a single process, since it would terminate the process without running
      the remaining tests. Instead, each such test is disabled in
      fluid/tests/unittests/CMakeLists.txt, with a FIXME added for every
      disabled item. For all Python files in fluid/tests/unittests/, please
      disable a unittest from fluid/tests/unittests/CMakeLists.txt instead of
      adding exit(0) to the Python file.
      
      5. add an option WITH_FAST_BUNDLE_TEST. When OFF, the unit tests run in
      separate processes so that they can be tested individually.
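      The sketch below shows the core idea behind point 1: load a list of unittest
      modules and run them in one interpreter process. It is a hypothetical
      stand-alone runner, not the actual py_test_modules implementation (which is
      a cmake function):

          import importlib
          import sys
          import unittest

          def run_modules_in_one_process(module_names):
              loader = unittest.TestLoader()
              suite = unittest.TestSuite()
              for name in module_names:
                  module = importlib.import_module(name)
                  suite.addTests(loader.loadTestsFromModule(module))
              result = unittest.TextTestRunner(verbosity=1).run(suite)
              # A non-zero exit code tells the build system the bundle failed.
              sys.exit(0 if result.wasSuccessful() else 1)

          if __name__ == "__main__":
              # e.g. python runner.py test_mul_op test_relu_op
              run_modules_in_one_process(sys.argv[1:])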
  14. 12 Feb 2018 (1 commit)
  15. 21 Jan 2018 (1 commit)
    • "fix decode bug" (#7711) · e983cc90
      Committed by dzhwinter
      * "fix decode bug"
      
      * "follow commnet"
      
      * "fix error"
      
      * "fix hook bug"
      
      * fix based comment
      
      * fix copyright
      
      * fix based on comment
  16. 15 Jan 2018 (2 commits)
  17. 13 Jan 2018 (1 commit)
  18. 11 Jan 2018 (2 commits)
  19. 09 Jan 2018 (1 commit)
    • Port WarpCTC Operator (#5107) · b5fda272
      Committed by Yiqun Liu
      * Add Seq2BatchFunctor, which will be used in WarpCTCOp.
      
      * Implement WarpCTCFunctor and WarpCTCKernel.
      
      * Add unittest of warpctc_op.
      
      * Modify the check_output interface in the Python unittest framework to allow checking a subset of the outputs.
      
      * Use absolute offset lod in warpctc_op and related functors.
      
      * Refine the comments of warpctc_op.
      
      * The new Python unittest framework supports checking a subset of the outputs, so revert the previous change.
      
      * Rename the transform from LoDTensor to Tensor with shape [max_sequence_length, num_sequences, sequence_width] to PaddingSequenceFunctor.
      
      * Update to the newest codes.
      
      * Rename PaddingSequenceFunctor to PaddingLoDTensorFunctor and move the computation of dimensions out of the functors.
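      The padding transform described above (eventually PaddingLoDTensorFunctor)
      takes the rows of all sequences stored contiguously with an absolute-offset
      LoD and lays them out as a dense, zero-padded tensor of shape
      [max_sequence_length, num_sequences, sequence_width]. A numpy sketch of the
      same reshaping, independent of the C++ functor:

          import numpy as np

          def pad_sequences(data, offsets):
              """data: [total_rows, width] rows of all sequences, concatenated.
              offsets: absolute offset LoD, e.g. [0, 3, 4, 6] for lengths 3, 1, 2.
              Returns a zero-padded [max_seq_len, num_sequences, width] array."""
              num_seq = len(offsets) - 1
              lengths = [offsets[i + 1] - offsets[i] for i in range(num_seq)]
              max_len, width = max(lengths), data.shape[1]
              padded = np.zeros((max_len, num_seq, width), dtype=data.dtype)
              for i in range(num_seq):
                  # Copy sequence i into column i, time-major, leaving the
                  # trailing rows as zero padding.
                  padded[:lengths[i], i, :] = data[offsets[i]:offsets[i + 1]]
              return padded

          data = np.arange(12, dtype=np.float32).reshape(6, 2)   # 6 rows, width 2
          print(pad_sequences(data, [0, 3, 4, 6]).shape)         # (3, 3, 2)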