1. 24 Oct, 2017 (2 commits)
  2. 23 Oct, 2017 (1 commit)
  3. 22 Oct, 2017 (1 commit)
  4. 21 Oct, 2017 (3 commits)
  5. 20 Oct, 2017 (8 commits)
    • Adding Nesterov Momentum (#4948) · 5380a547
      kavyasrinet committed
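The commit above adds a Nesterov Momentum operator. As a rough illustration, here is one common formulation of the Nesterov momentum update in NumPy; the exact update rule, parameter names, and defaults used by the Paddle operator are assumptions, not taken from the commit:

```python
import numpy as np

def nesterov_momentum_step(param, grad, velocity, lr=0.01, mu=0.9):
    """One Nesterov-momentum update in a common formulation:

        v_new = mu * v + g
        p_new = p - lr * (g + mu * v_new)
    """
    velocity = mu * velocity + grad
    param = param - lr * (grad + mu * velocity)
    return param, velocity
```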
    • Adding increment op (#4940) · 09c0c82e
      Abhinav Arora committed
      * Adding increment op
      
      * Fixing comment about step attribute
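The increment op above, with its step attribute, presumably computes out = x + step elementwise. A minimal sketch; the attribute name `step` appears in the commit message, but the default value and exact semantics are assumptions:

```python
import numpy as np

def increment_op(x, step=1.0):
    """Add the scalar `step` attribute to every element of the input tensor."""
    return np.asarray(x, dtype=float) + step
```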
    • add test_fit_a_line (#4936) · 9903e49f
      QI JUN committed
      * add test_fit_a_line
      
      * Update
      
      * fix persistable bug
      
      * fix elementwise add bug
      
      * set correct attr for bias op in fc layer
      
      * set correct attr for bias op in fc layer
      
      * Update
      
      1. Add init_program to hold initializers
      2. bug fix
      
      * add test_fit_a_line
      
      * fix persistable bug
      
      * fix elementwise add bug
      
      * fix type
      
      * add gitignore
      
      * Complete fit_a_line test
      
      * revert code
      
      * Clean up
      
      * Revert "revert code"
      
      This reverts commit eb1aa015.
      
      * Refine
      
      * Fix unit test
    • Remove template parameter for Tensor methods (#4937) · c532b967
      Yu Yang committed
      * Remove template parameter for Tensor methods
      
      * Also check that the type is correct in data()
      * Simplify holder_
      
      * Fix accuracy_op
      
      * Register Code
    • fix elementwise add bug · 9e640444
      qijun committed
    • Feature/py executor test (#4922) · 3db52783
      Yu Yang committed
      * Implement FC layer with helper
      
      * Update LayerHelper
      
      * Add debug string for Python ProtoBuf
      
      and rename `Sync` to `Flush`
      
      * Add check of ProtoBuf initialization
      
      * Layer wrapper for FC
      
      * Fix unittest
      
      * Fix CI
      
      * Add code generator
      
      * AttributeChecker: better error log and specialize bool
      
      Since lots of types can be cast to bool
      
      * Complete mlp, fit_a_line
      
      * Expose get global scope
      
      * Make global scope not thread-safe
      
      1. There is no need to make the global scope thread-safe, since it will
      only be invoked from the Python main thread.
      2. Do not free the global scope when the C++ program exits. Let the OS
      reclaim the memory; otherwise we would need to handle the
      destruction-order dependencies.
      
      See
      https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables
      
      * Fix
      
      * Implementation of simple conv_2d layer
      
      * Stash
      
      * Remove private data members in OpRegister
      
      * Fix bugs
      
      * Stash
      
      * Expose FeedFetchList as VarType
      
      * Change ProgramDesc not a global variable
      
      * Polish code style
      
      * Stash
      
      * Correctly implement BlockDesc destructor
      
      * Correctly implement BlockDesc destructor
      
      * Unify program as parameter name
      
      * Fix bugs
      
      * Add unittest
      
      * Fix unit test error
      
      * Remove unused functions
      
      * Add clone for Python Program
      
      * Working on executor
      
      * Stash
      
      * Add glog as dependencies of ops
      
      * Use VLOG to log information that is helpful when debugging Paddle
      
      * Expose VarDesc::persistable to Python
      
      * Test executor
      
      * Complete unittest
      
      * Polish code
      
      * Fix merge error
      
      * Follow comment
      
      * Polish Python Code
  6. 19 Oct, 2017 (7 commits)
  7. 18 Oct, 2017 (7 commits)
    • LSTM Operator forward implementation. · 2a8dbd13
      dangqingqing committed
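The LSTM forward pass added above can be sketched with the standard cell equations; the gate ordering, activation functions, and any peephole connections in the actual operator are assumptions here, not taken from the commit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM cell.

    W: (4*H, D) input weights, U: (4*H, H) recurrent weights, b: (4*H,) bias,
    with gate blocks ordered [input, forget, cell-candidate, output].
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    g = np.tanh(z[2 * H:3 * H])  # cell candidate
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c
```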
    • fix conflict (#4883) · 521514da
      QI JUN committed
    • MatMul operator (#4856) · 16489827
      Markus Kliegl committed
      * initial matmul operator
      
      Similar to np.matmul, but also has transpose_X and transpose_Y flags,
      and only supports tensors from rank 1 to 3 inclusive.
      
      For GPU, uses cublas?gemmStridedBatched. For CPU, uses
      cblas_?gemm_batch if available via MKL; otherwise a simple serial
      implementation that loops over the batch dimension is employed for now.
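The semantics described above (np.matmul plus transpose_X and transpose_Y flags, tensors of rank 1 to 3) can be sketched in NumPy. This illustrates the stated behavior rather than the operator's actual implementation, and the handling of rank-1 inputs under the transpose flags is an assumption:

```python
import numpy as np

def matmul_op(x, y, transpose_X=False, transpose_Y=False):
    """Emulate the described matmul semantics for tensors of rank 1 to 3.

    Rank-1 inputs are promoted to row/column vectors as np.matmul does;
    rank-3 inputs are multiplied batch-wise over the leading dimension.
    """
    if transpose_X and x.ndim >= 2:
        # Transpose the last two axes; rank-1 vectors are left unchanged.
        x = np.swapaxes(x, -1, -2)
    if transpose_Y and y.ndim >= 2:
        y = np.swapaxes(y, -1, -2)
    return np.matmul(x, y)
```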
    • fix gpu build error · f9681459
      qijun committed
    • add sparse sgd operator unittest · ab8cc401
      qijun committed
    • add sparse kernel of sgd operator · 182ce51c
      qijun committed
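A sparse SGD kernel like the one added above typically updates only the parameter rows referenced by the gradient (Paddle represents such gradients as selected rows). A minimal NumPy sketch under that assumption; the function and argument names are illustrative:

```python
import numpy as np

def sparse_sgd_update(param, grad_rows, grad_values, lr=0.1):
    """Apply SGD only to the parameter rows that have gradients.

    grad_rows:   indices of the rows with non-zero gradient
    grad_values: gradient values for those rows, shape (len(grad_rows), width)
    """
    param = param.copy()
    # np.subtract.at accumulates correctly even when a row index repeats.
    np.subtract.at(param, grad_rows, lr * np.asarray(grad_values))
    return param
```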
    • Impl optimizer (#4734) · df0946eb
      Qiao Longfei committed
      * init parameter base class
      
      * optimize the Comments of optimizer
      
      * basic implementation of optimizer
      
      * add test_optimizer
      
      * add no_grad_set to interface
      
      * update optimizer.py
      
      * python code can run
      
      * fix some problem
      
      * add sync_with_cpp to Python Program and Block
      
      * sync vars and ops in block from cpp
      
      * optimize code and add some comment
      
      * add more check for sync
      
      * update optimizer with return value of Backward
      
      * rm unused code
      
      * infer shape when creating gradient variable
      
      * update test_optimizer
      
      * update test_program.py
      
      * update backward test
      
      * follow comment
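The optimizer work above (a parameter base class, a no_grad_set argument, updates driven by the return value of Backward) suggests an interface along these lines. The class and method names below are illustrative assumptions, not the commit's actual API:

```python
class Optimizer:
    """Minimal optimizer base class: subclasses define the per-parameter update."""

    def __init__(self, learning_rate):
        self.learning_rate = learning_rate

    def _apply_update(self, param, grad):
        raise NotImplementedError

    def minimize(self, params_and_grads, no_grad_set=None):
        """Update every (name, param, grad) triple not listed in no_grad_set."""
        no_grad_set = no_grad_set or set()
        return [
            self._apply_update(p, g) if name not in no_grad_set else p
            for name, p, g in params_and_grads
        ]

class SGD(Optimizer):
    def _apply_update(self, param, grad):
        return param - self.learning_rate * grad
```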
  8. 17 Oct, 2017 (4 commits)
  9. 16 Oct, 2017 (4 commits)
  10. 15 Oct, 2017 (3 commits)