  1. 19 October 2017 — 4 commits
  2. 18 October 2017 — 2 commits
    • LSTM Operator forward implementation. · 2a8dbd13
      Committed by dangqingqing
    • MatMul operator (#4856) · 16489827
      Committed by Markus Kliegl
      * initial matmul operator
      
      Similar to np.matmul, but also has transpose_X and transpose_Y flags,
      and only supports tensors from rank 1 to 3 inclusive.
      
      For GPU, uses cublas?gemmStridedBatched. For CPU, uses
      cblas_?gemm_batch if available via MKL; otherwise a simple serial
      implementation that loops over the batch dimension is employed for now.
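      The description above maps cleanly onto a naive batched loop. Below is a minimal, illustrative C++ sketch of the serial CPU fallback it mentions — not the operator's actual code; the function name, the row-major [batch, rows, cols] layout, and the signature are assumptions made for this example.
      
      ```cpp
      #include <cstddef>
      #include <vector>
      
      // Naive batched matmul: Out[b] = op(X[b]) * op(Y[b]) for each batch b,
      // where op() is a transpose when the corresponding flag is set.
      // X[b] is M x K (K x M if transpose_X); Y[b] is K x N (N x K if transpose_Y).
      void NaiveBatchedMatMul(const std::vector<float>& X,
                              const std::vector<float>& Y,
                              std::vector<float>* Out,
                              std::size_t batch, std::size_t M, std::size_t N,
                              std::size_t K, bool transpose_X, bool transpose_Y) {
        Out->assign(batch * M * N, 0.0f);
        for (std::size_t b = 0; b < batch; ++b) {
          const float* x = X.data() + b * M * K;
          const float* y = Y.data() + b * K * N;
          float* out = Out->data() + b * M * N;
          for (std::size_t i = 0; i < M; ++i) {
            for (std::size_t j = 0; j < N; ++j) {
              float sum = 0.0f;
              for (std::size_t k = 0; k < K; ++k) {
                // The transpose flags only change where the logical element lives.
                const float xv = transpose_X ? x[k * M + i] : x[i * K + k];
                const float yv = transpose_Y ? y[j * K + k] : y[k * N + j];
                sum += xv * yv;
              }
              out[i * N + j] = sum;
            }
          }
        }
      }
      ```
      In the MKL path the whole batch loop collapses into a single cblas_?gemm_batch call, and on GPU into a single cublas?gemmStridedBatched call, which is why the serial version is described as a fallback "for now".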
  3. 16 October 2017 — 3 commits
  4. 15 October 2017 — 4 commits
  5. 14 October 2017 — 4 commits
  6. 12 October 2017 — 4 commits
  7. 11 October 2017 — 5 commits
  8. 10 October 2017 — 3 commits
  9. 09 October 2017 — 2 commits
  10. 05 October 2017 — 2 commits
    • Use PADDLE_WITH_CUDA instead of PADDLE_WITH_GPU · 4558807c
      Committed by Yi Wang
    • Change `PADDLE_ONLY_CPU` to `PADDLE_WITH_GPU` · 84500f94
      Committed by Yu Yang
      By shell command:
      
      ```bash
      sed -i 's#ifdef PADDLE_ONLY_CPU#ifndef PADDLE_WITH_GPU#g' `find ./paddle/ -name '*.h' -o -name '*.cc' -o -name '*.cpp' -o -name '*.c' -o -name '*.cu'`
      sed -i 's#ifndef PADDLE_ONLY_CPU#ifdef PADDLE_WITH_GPU#g' `find ./paddle/ -name '*.h' -o -name '*.cc' -o -name '*.cpp' -o -name '*.c' -o -name '*.cu'`
      ```
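      The two substitutions invert the sense of the guard while preserving its meaning: code that used to compile only in the CPU-only build stays CPU-only under the new macro name. A minimal C++ illustration of the mapping (the guarded bodies are placeholders):
      
      ```cpp
      // Old guard: block compiled only when building without GPU support.
      #ifdef PADDLE_ONLY_CPU
      //   ... CPU-only code path ...
      #endif
      
      // New guard after the sed rewrite: same condition, negated macro.
      #ifndef PADDLE_WITH_GPU
      //   ... CPU-only code path ...
      #endif
      
      // The reverse case is handled by the second command:
      // #ifndef PADDLE_ONLY_CPU becomes #ifdef PADDLE_WITH_GPU.
      ```
      (The commit listed just above this one then renames PADDLE_WITH_GPU to PADDLE_WITH_CUDA.)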
  11. 30 September 2017 — 3 commits
  12. 29 September 2017 — 4 commits