1. 18 Oct, 2017 (1 commit)
    • MatMul operator (#4856) · 16489827
      Authored by Markus Kliegl
      * initial matmul operator
      
      Similar to np.matmul, but also has transpose_X and transpose_Y flags,
      and only supports tensors from rank 1 to 3 inclusive.
      
      For GPU, uses cublas?gemmStridedBatched. For CPU, uses
      cblas_?gemm_batch when available via MKL; otherwise it falls back, for
      now, to a simple serial implementation that loops over the batch
      dimension.
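The semantics described above (np.matmul behavior plus optional transposition of the last two dimensions, with batching handled by a loop in the serial CPU fallback) can be sketched in numpy. This is an illustrative model of the operator's math, not PaddlePaddle's actual API; the function name and signature here are hypothetical.

```python
import numpy as np

def matmul_op(X, Y, transpose_X=False, transpose_Y=False):
    # Hypothetical sketch: behaves like np.matmul, but with transpose_X /
    # transpose_Y flags that swap the last two axes (a no-op for rank-1
    # inputs, which np.matmul already treats as vectors).
    if transpose_X and X.ndim >= 2:
        X = np.swapaxes(X, -1, -2)
    if transpose_Y and Y.ndim >= 2:
        Y = np.swapaxes(Y, -1, -2)
    return np.matmul(X, Y)

def matmul_serial_batched(X, Y):
    # Model of the serial CPU fallback for rank-3 inputs: a plain loop
    # over the batch dimension, one GEMM per batch element.
    return np.stack([X[b] @ Y[b] for b in range(X.shape[0])])
```

For example, multiplying a (2, 3, 4) batch by a (2, 5, 4) batch with `transpose_Y=True` yields a (2, 3, 5) result, matching the flag-plus-batching behavior the commit describes.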
  2. 06 Oct, 2017 (1 commit)
    • Adding Adadelta optimization operator (#4576) · 828c5b3e
      Authored by Abhinav Arora
      * Adding Adadelta optimization operator
      * Making inputs and outputs conform to naming convention
      * Removing type alias from header files
      * Fixing Adadelta documentation in comments
      * Addressing code review feedback
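The Adadelta update rule this operator implements is the standard one from Zeiler (2012): maintain decaying averages of squared gradients and squared updates, and scale each step by their ratio. A minimal numpy sketch follows; the function and variable names are illustrative, not the operator's actual input/output names.

```python
import numpy as np

def adadelta_step(param, grad, avg_sq_grad, avg_sq_update,
                  rho=0.95, eps=1e-6):
    # Standard Adadelta update (Zeiler, 2012); names are illustrative.
    # Accumulate decaying average of squared gradients.
    avg_sq_grad = rho * avg_sq_grad + (1 - rho) * grad ** 2
    # Step size is the ratio of RMS(previous updates) to RMS(gradients).
    update = -np.sqrt(avg_sq_update + eps) / np.sqrt(avg_sq_grad + eps) * grad
    # Accumulate decaying average of squared updates.
    avg_sq_update = rho * avg_sq_update + (1 - rho) * update ** 2
    return param + update, avg_sq_grad, avg_sq_update
```

Note there is no learning-rate hyperparameter: the accumulated RMS of past updates plays that role, which is Adadelta's distinguishing design choice relative to Adagrad.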
  3. 07 Aug, 2017 (1 commit)
  4. 04 Aug, 2017 (1 commit)
  5. 31 Jul, 2017 (1 commit)
  6. 25 Jul, 2017 (1 commit)
  7. 19 Jul, 2017 (1 commit)