1. 24 Feb 2018 (1 commit)
  2. 13 Feb 2018 (1 commit)
    • Run Python OP tests in a single Python process to improve test time. (#8362) · cde6241a
      Committed by Xin Pan
      Currently our tests run with 2 GPUs, and the init time is absurdly
      long: about 4 s for each process. Today we run each OP test in a
      separate process. This PR:
      
      1. Creates the cmake function py_test_modules, which generates the
      Makefile rules that run a list of Python unittest modules in a single
      Python process (a rough sketch of the idea follows this list).
      
      2. Moves all "unittest-compatible" tests (i.e., tests that use the
      unittest package rather than being plain Python scripts) from
      fluid/tests to fluid/tests/unittests.
      
      3. cmake now runs all OP tests in fluid/tests/unittests in a single
      process, except for the time-consuming tests, which are split into
      separate processes to exploit parallelism. Please make sure to use the
      unittest package if you put a Python test file in fluid/tests/unittests.
      
      4. Removes all exit(0) calls from fluid/tests/unittests/*.py. exit(0)
      was used to disable a unittest, but we cannot do that when running all
      tests in a single process, since it terminates the process before the
      other tests run. Instead, each such test is disabled in
      fluid/tests/unittests/CMakeLists.txt, with a FIXME added for every
      disabled item. For any Python file in fluid/tests/unittests/, please
      disable the unittest in fluid/tests/unittests/CMakeLists.txt rather
      than adding exit(0) to the Python file.
      
      5. Adds an option WITH_FAST_BUNDLE_TEST. When it is OFF, the unit tests
      run in separate processes so that they can be tested individually.
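      As a rough illustration of the single-process idea only (not the actual
      py_test_modules implementation; the driver script name and the module
      names below are hypothetical), a small runner like this can collect
      several unittest modules and execute them in one interpreter:
      
      ```python
      # run_modules.py -- minimal sketch: run several unittest modules in one
      # Python process so interpreter/GPU initialization is paid only once.
      import sys
      import unittest
      
      def run_modules(module_names):
          loader = unittest.TestLoader()
          suite = unittest.TestSuite()
          for name in module_names:
              # Import each test module and merge its tests into one suite.
              suite.addTests(loader.loadTestsFromName(name))
          result = unittest.TextTestRunner(verbosity=1).run(suite)
          return 0 if result.wasSuccessful() else 1
      
      if __name__ == "__main__":
          sys.exit(run_modules(sys.argv[1:]))
      ```
      
      Invoked as, e.g., `python run_modules.py test_foo_op test_bar_op`, every
      module shares one process, which is also why a stray exit(0) in any of
      them would abort the whole run (see item 4 above).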
      cde6241a
  3. 12 Feb 2018 (1 commit)
  4. 21 Jan 2018 (1 commit)
    • "fix decode bug" (#7711) · e983cc90
      Committed by dzhwinter
      * "fix decode bug"
      
      * "follow commnet"
      
      * "fix error"
      
      * "fix hook bug"
      
      * fix based on comment
      
      * fix copyright
      
      * fix based on comment
  5. 15 Jan 2018 (1 commit)
    • Feature/hooks (#7513) · b9b75377
      Committed by dzhwinter
      * add copyright hook
      
      * add copyright hook
      
      * refine copyright hook
      
      * "test copyright hook"
      
      * fix check style
      
      * fix ci
  6. 25 Dec 2017 (1 commit)
  7. 19 Dec 2017 (1 commit)
  8. 27 Nov 2017 (2 commits)
  9. 24 Nov 2017 (1 commit)
  10. 14 Nov 2017 (1 commit)
  11. 09 Nov 2017 (1 commit)
    • Add grad for lodtensor array ops (#5461) · b698d19b
      Committed by fengjiayi
      * Add LoDRankTable
      
      LoD Rank Table stores the `level` of the `lod`, ordered by sequence
      length in descending order. It is useful when implementing a dynamic
      RNN, and it is shared by the dynamic RNN memory, dynamic RNN
      slice-input, and dynamic RNN slice-output operators (a rough sketch of
      the ordering follows this commit's change list).
      
      * Add skeleton for array_to_lod_tensor and lod_tensor_to_array
      
      * Add VarType::LoDTensorArray
      
      * Add PyBind of LoDTensorArray
      
      * Add InferVarType
      
      * Add first unittest
      
      * Add ut
      
      * Add unittest
      
      * Add unittest
      
      * Add unittests
      
      * update
      
      * init
      
      * add infershape for lod_tensor_to_array_op
      
      * complete array_to_lod_tensor_op
      
      * copy data
      
      * clean code
      
      * clean code
      
      * Fix unittest data
      
      * fix bugs
      
      * fix compile error
      
      * Refine TensorToArrayOp
      
      * refactor array_to_lod_tensor
      
      * Unittest
      
      * fix bugs
      
      * Fix unittest
      
      * Fix unittest
      
      * debug
      
      * Debug
      
      * Fix unittest
      
      * Add grad for ops
      
      * Debug
      
      * Fix a bug
      
      * fix a bug
      
      * fix a bug
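      For intuition about the ordering that the rank table captures, here is
      a minimal pure-Python sketch under the assumption that a LoD level is a
      list of offsets; it is only an illustration, not the LoDRankTable
      implementation, and the helper name build_rank_table is made up:
      
      ```python
      # Sketch of the ordering a LoD rank table describes: original sequence
      # indices paired with their lengths, sorted longest-first.
      def build_rank_table(lod_level):
          # lod_level holds offsets, e.g. [0, 3, 5, 9] -> lengths [3, 2, 4].
          lengths = [lod_level[i + 1] - lod_level[i]
                     for i in range(len(lod_level) - 1)]
          return sorted(enumerate(lengths), key=lambda item: item[1],
                        reverse=True)
      
      if __name__ == "__main__":
          print(build_rank_table([0, 3, 5, 9]))  # -> [(2, 4), (0, 3), (1, 2)]
      ```
      
      The descending order is what lets a dynamic RNN drop the shorter,
      already-finished sequences from the batch as the time steps advance.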
  12. 08 Nov 2017 (1 commit)
    • Feature/rnn to array to lod tensor (#5411) · f72729d4
      Committed by Yu Yang
      * Add LoDRankTable
      
      LoD Rank Table stores the `level` of the `lod`, ordered by sequence
      length in descending order. It is useful when implementing a dynamic
      RNN, and it is shared by the dynamic RNN memory, dynamic RNN
      slice-input, and dynamic RNN slice-output operators (a rough round-trip
      sketch follows this commit's change list).
      
      * Add skeleton for array_to_lod_tensor and lod_tensor_to_array
      
      * Add VarType::LoDTensorArray
      
      * Add PyBind of LoDTensorArray
      
      * Add InferVarType
      
      * Add first unittest
      
      * Add ut
      
      * Add unittest
      
      * Add unittest
      
      * Add unittests
      
      * update
      
      * init
      
      * add infershape for lod_tensor_to_array_op
      
      * complete array_to_lod_tensor_op
      
      * copy data
      
      * clean code
      
      * clean code
      
      * Fix unittest data
      
      * fix bugs
      
      * fix compile error
      
      * Refine TensorToArrayOp
      
      * refactor array_to_lod_tensor
      
      * Unittest
      
      * fix bugs
      
      * Fix unittest
      
      * Fix unittest
      
      * debug
      
      * Debug
      
      * Fix unittest
      
      * clean code
      
      * refactor
      
      * use ostream
      
      * update test
      
      * fix gpu build error
      
      * make gpu test pass
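      To make the lod_tensor_to_array / array_to_lod_tensor pair concrete,
      here is a conceptual pure-Python round trip; it is only a sketch of the
      per-time-step slicing driven by the rank-table order, not Paddle's
      operator API, and the names to_array / to_lod_tensor are invented for
      the example:
      
      ```python
      # Conceptual round trip: split variable-length sequences into
      # per-time-step slices (longest-first order), then rebuild them.
      def to_array(seqs):
          # Rank-table order: sequence indices sorted by length, descending.
          order = sorted(range(len(seqs)), key=lambda i: len(seqs[i]),
                         reverse=True)
          max_len = len(seqs[order[0]]) if order else 0
          # Slice t holds the t-th element of every sequence still "alive".
          array = [[seqs[i][t] for i in order if len(seqs[i]) > t]
                   for t in range(max_len)]
          return order, array
      
      def to_lod_tensor(order, array):
          # Undo the split: scatter each slice back to its source sequence.
          seqs = [[] for _ in order]
          for slice_t in array:
              for rank, value in enumerate(slice_t):
                  seqs[order[rank]].append(value)
          return seqs
      
      if __name__ == "__main__":
          seqs = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
          order, array = to_array(seqs)
          assert to_lod_tensor(order, array) == seqs
      ```
      
      Because slices only shrink from the tail of the rank-table order, the
      first len(slice) entries of `order` are exactly the sequences present
      in each slice, which is what makes the reconstruction well defined.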