1. 03 May 2018, 1 commit
  2. 30 Apr 2018, 1 commit
  3. 29 Apr 2018, 1 commit
  4. 16 Apr 2018, 1 commit
  5. 12 Apr 2018, 2 commits
    • Remove the use of ARCHIVE_START/END (#9844) · e90e7ab2
      Committed by Yiqun Liu
      * Add USE_OP for all operators and kernels and remove ARCHIVE_START/END from the CMakeLists.txt of the inference unittests.
      * Remove ARCHIVE_START/END when linking the inference shared library.
      * Disable some fluid-related cmake operations for cross-compiling.
      (A sketch of the linking mechanism USE_OP relies on follows after this date's commits.)
    • update grpc version · d798e325
      Committed by typhoonzero
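      The ARCHIVE_START/END change above replaces whole-archive linking with explicit USE_OP declarations in the unittests. Below is a minimal, self-contained sketch of that linking mechanism, with illustrative names rather than Paddle's actual macro definitions:

          // Operator registration via a static object; this part would live in
          // the operator's .cc file inside the static operator library.
          #include <cstdio>

          struct OpRegistrar {
            explicit OpRegistrar(const char* name) {
              std::printf("registered %s\n", name);
            }
          };
          static OpRegistrar mul_registrar("mul");
          int TouchOpRegistrar_mul() { return 0; }  // anchor symbol

          // A USE_OP-style macro; this part would live in the inference
          // unittest that links the static library. Referencing the anchor
          // symbol forces the linker to keep the registration object even
          // without -Wl,--whole-archive (i.e. without ARCHIVE_START/END).
          #define USE_OP(op)                    \
            extern int TouchOpRegistrar_##op(); \
            static int use_op_##op = TouchOpRegistrar_##op();

          USE_OP(mul);

          int main() { return 0; }  // "registered mul" prints during static init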
  6. 11 Apr 2018, 1 commit
  7. 10 Apr 2018, 1 commit
  8. 08 Apr 2018, 1 commit
  9. 07 Apr 2018, 1 commit
  10. 04 Apr 2018, 1 commit
  11. 22 Mar 2018, 1 commit
  12. 20 Mar 2018, 1 commit
    • CMake refine for HIP support. · e50205e7
      Committed by sabreshao
      1. Add option WITH_AMD_GPU.
      2. Add cmake/hip.cmake for the HIP toolchain.
      3. Some external modules such as eigen may need a HIP port.
      4. Add the macros hip_library/hip_binary/hip_test to cmake/generic.cmake.
      5. Add one HIP source, concat.hip.cu, as an example. Each .cu may have its corresponding .hip.cu. (A generic HIP sketch follows below.)
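      As a rough illustration of what a .hip.cu source built through the new hip_library/hip_binary/hip_test macros looks like, here is a minimal generic HIP kernel and launch; it is not the actual concat.hip.cu, and all names are illustrative:

          #include <hip/hip_runtime.h>
          #include <cstdio>
          #include <vector>

          // Simple element-wise scaling kernel, standing in for a ported .cu file.
          __global__ void Scale(const float* x, float* y, float alpha, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = alpha * x[i];
          }

          int main() {
            const int n = 1024;
            std::vector<float> h_x(n, 1.0f), h_y(n, 0.0f);
            float *d_x = nullptr, *d_y = nullptr;
            hipMalloc(reinterpret_cast<void**>(&d_x), n * sizeof(float));
            hipMalloc(reinterpret_cast<void**>(&d_y), n * sizeof(float));
            hipMemcpy(d_x, h_x.data(), n * sizeof(float), hipMemcpyHostToDevice);
            hipLaunchKernelGGL(Scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                               d_x, d_y, 2.0f, n);
            hipMemcpy(h_y.data(), d_y, n * sizeof(float), hipMemcpyDeviceToHost);
            std::printf("y[0] = %f\n", h_y[0]);  // expect 2.0
            hipFree(d_x);
            hipFree(d_y);
            return 0;
          }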
  13. 16 Mar 2018, 2 commits
    • Demonstration of cmake refine for HIP support. · 45c988d8
      Committed by sabreshao
      1. Add option WITH_AMD_GPU.
      2. Add cmake/hip.cmake for the HIP toolchain.
      3. Some external modules such as eigen may need a HIP port.
      4. Add the macros hip_library/hip_binary/hip_test to cmake/generic.cmake.
      5. Add one HIP source, concat.hip.cu, as an example. Each .cu may have its corresponding .hip.cu.
    • Single GPU ParallelExecutor complete · 6f0dfd89
      Committed by Yu Yang
  14. 13 Mar 2018, 1 commit
  15. 08 Mar 2018, 2 commits
    • add MKL for fluid static and shared library · 5030681c
      Committed by Luo Tao
    • compile and install the static library of fluid inference (#7827) · 6f50dee4
      Committed by Tao Luo
      * compile and install the static library of fluid inference
      * fix dynload_cuda not in CPU mode
      * update the shared library and adjust the deployment of openblas
      * adjust the deployment of openblas
      * auto add all fluid modules for the static library
      * use libprotobuf.a instead of libprotobuf-lite.a for the profiler
      * use set_property to set the global variable instead of ENV
      * add gpu dependencies of fluid modules; auto add inference_lib_dist depends
      * change the condition of openblas_lib, and fix a typo
  16. 06 Mar 2018, 2 commits
  17. 05 Mar 2018, 1 commit
  18. 01 Mar 2018, 1 commit
  19. 15 Feb 2018, 1 commit
  20. 14 Feb 2018, 1 commit
  21. 12 Feb 2018, 2 commits
  22. 07 Feb 2018, 1 commit
  23. 06 Feb 2018, 1 commit
  24. 05 Feb 2018, 1 commit
  25. 30 Jan 2018, 1 commit
  26. 27 Jan 2018, 1 commit
  27. 25 Jan 2018, 1 commit
  28. 22 Jan 2018, 2 commits
  29. 20 Jan 2018, 1 commit
  30. 19 Jan 2018, 1 commit
  31. 16 Jan 2018, 2 commits
  32. 15 Jan 2018, 1 commit
  33. 09 Jan 2018, 1 commit
    • Port WarpCTC Operator (#5107) · b5fda272
      Committed by Yiqun Liu
      * Add Seq2BatchFunctor, which will be used in WarpCTCOp.
      * Implement WarpCTCFunctor and WarpCTCKernel.
      * Add a unittest of warpctc_op.
      * Modify the check_output interface in the python unittest framework to allow checking a subset of outputs.
      * Use absolute-offset lod in warpctc_op and the related functors.
      * Refine the comments of warpctc_op.
      * The new python unittest supports checking a subset of the outputs, so revoke the previous change.
      * Rename the transform from LoDTensor to a Tensor with shape [max_sequence_length, num_sequences, sequence_width] to PaddingSequenceFunctor. (A sketch of this padding layout follows below.)
      * Update to the newest code.
      * Rename PaddingSequenceFunctor to PaddingLoDTensorFunctor and move the computation of dimensions out of the functors.
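      The padding functor described above turns variable-length sequences into a dense, time-major tensor of shape [max_sequence_length, num_sequences, sequence_width]. The following is a small self-contained sketch of that layout, using illustrative names rather than the actual PaddingLoDTensorFunctor:

          #include <algorithm>
          #include <cstdio>
          #include <vector>

          // Zero-pads sequences (each step is `width` floats) into a buffer
          // laid out as [max_sequence_length, num_sequences, width].
          std::vector<float> PadSequences(
              const std::vector<std::vector<float>>& seqs, size_t width,
              size_t* max_len_out) {
            size_t num_seqs = seqs.size(), max_len = 0;
            for (const auto& s : seqs) max_len = std::max(max_len, s.size() / width);
            std::vector<float> padded(max_len * num_seqs * width, 0.0f);
            for (size_t i = 0; i < num_seqs; ++i) {
              size_t len = seqs[i].size() / width;
              for (size_t t = 0; t < len; ++t) {
                // copy element (t, i, :) of the padded tensor
                std::copy(seqs[i].begin() + t * width,
                          seqs[i].begin() + (t + 1) * width,
                          padded.begin() + (t * num_seqs + i) * width);
              }
            }
            if (max_len_out) *max_len_out = max_len;
            return padded;
          }

          int main() {
            // two sequences of lengths 3 and 1, with sequence_width = 2
            std::vector<std::vector<float>> seqs = {{1, 1, 2, 2, 3, 3}, {9, 9}};
            size_t max_len = 0;
            auto padded = PadSequences(seqs, 2, &max_len);
            std::printf("max_len=%zu total=%zu\n", max_len, padded.size());  // 3, 12
            return 0;
          }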