- 24 Oct 2017, 3 commits
-
Committed by Yang Yang(Tony)
Pass all forward op tests
-
Committed by Yu Yang
* Correct mul_op implementation
* Restore the original shape after mul
* Fix mul op
* Do not touch math_function
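The shape handling mentioned in the commit above can be illustrated with a small NumPy sketch: a mul-style op flattens both inputs to 2-D matrices, multiplies them, and then restores the original leading/trailing dimensions on the output. The function name and the `num_col_dims` arguments are assumptions made for this sketch, not the operator's actual code.

```python
import numpy as np

def mul_sketch(x, y, x_num_col_dims=1, y_num_col_dims=1):
    """Hypothetical mul-style op: flatten to 2-D, matmul, then restore the shape."""
    x_mat = x.reshape(int(np.prod(x.shape[:x_num_col_dims])), -1)  # rows from leading dims
    y_mat = y.reshape(int(np.prod(y.shape[:y_num_col_dims])), -1)
    out = x_mat @ y_mat
    # Restore the original shape: leading dims of x followed by trailing dims of y.
    return out.reshape(*x.shape[:x_num_col_dims], *y.shape[y_num_col_dims:])

x = np.random.rand(2, 3, 4)            # flattened to (2, 12)
y = np.random.rand(12, 5)
print(mul_sketch(x, y).shape)          # (2, 5)
```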
-
Committed by QI JUN
* Ensure that the ids input of the lookup table op is a column vector
* Follow review comments
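A quick illustration of the column-vector requirement above (shapes and variable names here are assumptions for the sketch, not the op's real interface): the ids tensor has shape (N, 1) rather than (N,).

```python
import numpy as np

table = np.random.rand(100, 8)        # 100-word vocabulary, 8-dim embeddings
ids = np.array([[3], [17], [42]])     # column vector of ids, shape (3, 1)
assert ids.ndim == 2 and ids.shape[1] == 1, "ids must be a column vector"
out = table[ids.reshape(-1)]          # gather rows -> shape (3, 8)
print(out.shape)
```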
-
- 23 Oct 2017, 1 commit
-
Committed by dangqingqing
-
- 22 Oct 2017, 1 commit
-
Committed by Qiao Longfei
-
- 21 Oct 2017, 3 commits
-
Committed by Yu Yang
-
Committed by qijun
-
Committed by Yan Chunwei
-
- 20 Oct 2017, 8 commits
-
Committed by kavyasrinet
-
Committed by Abhinav Arora
* Adding increment op
* Fixing comment about step attribute
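For reference, the semantics of an increment op are simply out = x + step, where step is an attribute. A minimal sketch follows; the default value and signature are assumptions, not the actual operator definition.

```python
import numpy as np

def increment_sketch(x, step=1.0):
    """Hypothetical increment op: add a scalar `step` attribute to the input."""
    return x + np.asarray(step, dtype=x.dtype)

print(increment_sketch(np.array([1.0, 2.0, 3.0], dtype=np.float32), step=2.0))  # [3. 4. 5.]
```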
-
Committed by QI JUN
* Add test_fit_a_line
* Update
* Fix persistable bug
* Fix elementwise add bug
* Set correct attr for bias op in fc layer
* Set correct attr for bias op in fc layer
* Update: 1. Add init_program to hold initializers; 2. Bug fix
* Add test_fit_a_line
* Fix persistable bug
* Fix elementwise add bug
* Fix type
* Add gitignore
* Complete fit_a_line test
* Revert code
* Clean up
* Revert "revert code" (this reverts commit eb1aa015)
* Refine
* Fix unit test
-
Committed by Yu Yang
* Remove template parameter for Tensor methods
* Also check that the type is correct in data()
* Simplify holder_
* Fix accuracy_op
* Register code
-
Committed by qijun
-
Committed by Abhinav Arora
-
Committed by Abhinav Arora
-
Committed by Yu Yang
* Implement FC layer with helper
* Update LayerHelper
* Add debug string for Python ProtoBuf and rename `Sync` to `Flush`
* Add check of ProtoBuf initialization
* Layer wrapper for FC
* Fix unittest
* Fix CI
* Add code generator
* AttributeChecker: better error log and specialize bool, since lots of types can be cast to bool
* Complete mlp, fit_a_line
* Expose get global scope
* Make global scope not thread-safe: 1. There is no need to make the global scope thread-safe, since it will be invoked in the Python main thread. 2. Do not free the global scope when C++ exits; let the OS free the memory, otherwise we need to handle the destruction dependencies. See https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables
* Fix
* Implementation of simple conv_2d layer
* Stash
* Remove private data members in OpRegister
* Fix bugs
* Stash
* Expose FeedFetchList as VarType
* Change ProgramDesc not a global variable
* Polish code style
* Stash
* Correct implement BlockDesc destructor
* Correct implement BlockDesc destructor
* Unify program as parameter name
* Fix bugs
* Add unittest
* Fix unit test error
* Remove unused functions
* Add clone for Python Program
* Working on executor
* Stash
* Add glog as a dependency of ops
* Use VLOG to log some information, which is helpful when debugging Paddle
* Expose VarDesc::persistable to Python
* Test executor
* Complete unittest
* Polish code
* Fix merge error
* Follow comment
* Polish Python code
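To make the layer-helper pattern above concrete, here is a toy, self-contained sketch of how an fc layer built on a helper could work: the helper owns parameters and appends ops to a program, and the layer function only wires them together. All class, method, and op names here are illustrative assumptions, not the actual PaddlePaddle API.

```python
import numpy as np

class ToyLayerHelper:
    """Toy stand-in for a layer helper: it owns parameters and the program's op list."""
    def __init__(self):
        self.params = {}
        self.ops = []

    def create_parameter(self, name, shape):
        self.params[name] = np.random.randn(*shape) * 0.01
        return name

    def append_op(self, op_type, inputs, output):
        self.ops.append((op_type, inputs, output))
        return output

def fc(helper, input_name, in_dim, size):
    """Sketch of an fc layer: weight + mul op, then bias + elementwise_add op."""
    w = helper.create_parameter(input_name + ".w", (in_dim, size))
    b = helper.create_parameter(input_name + ".b", (size,))
    mul_out = helper.append_op("mul", [input_name, w], input_name + ".mul_out")
    return helper.append_op("elementwise_add", [mul_out, b], input_name + ".out")

helper = ToyLayerHelper()
out = fc(helper, "x", in_dim=13, size=1)   # e.g. the single fc layer of fit_a_line
print(out, helper.ops)
```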
-
- 19 Oct 2017, 7 commits
-
Committed by dangqingqing
-
Committed by dangqingqing
-
Committed by dangqingqing
-
Committed by dangqingqing
-
Committed by kavyasrinet
* Adding Proximal Gradient Descent
* Fixing review comments
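For context, a proximal gradient descent update takes a plain gradient step and then applies the proximal operator of the regularizer; with L1/L2 regularization this amounts to soft-thresholding plus shrinkage. The sketch below follows the common formulation and is an assumption, not necessarily the exact update rule of the op added here.

```python
import numpy as np

def proximal_gd_step(param, grad, lr, l1=0.0, l2=0.0):
    """One proximal gradient descent step with L1/L2 regularization (sketch)."""
    prox = param - lr * grad                                           # gradient step
    shrunk = np.sign(prox) * np.maximum(np.abs(prox) - lr * l1, 0.0)   # L1 soft-threshold
    return shrunk / (1.0 + lr * l2)                                    # L2 shrinkage

w = np.array([0.5, -0.2, 0.05])
g = np.array([0.1, -0.3, 0.2])
print(proximal_gd_step(w, g, lr=0.1, l1=0.05, l2=0.01))
```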
-
Committed by fengjiayi
* Add design doc of batch_norm_op
* Move batch_norm_op.png to operator/images
* Refine batch_norm_op design doc
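For reference, the transformation a batch_norm op describes is the standard batch normalization formula (the design doc itself is not reproduced here; this is just the well-known definition):

```latex
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i,\quad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i-\mu_B)^2,\quad
\hat{x}_i = \frac{x_i-\mu_B}{\sqrt{\sigma_B^2+\epsilon}},\quad
y_i = \gamma\,\hat{x}_i + \beta
```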
-
Committed by Yu Yang
* Change ProgramDesc not a global variable
* Polish code style
* Correct implement BlockDesc destructor
* Unify program as parameter name
-
- 18 Oct 2017, 7 commits
-
Committed by dangqingqing
-
Committed by QI JUN
-
Committed by Markus Kliegl
* Initial matmul operator. Similar to np.matmul, but it also has transpose_X and transpose_Y flags, and it only supports tensors of rank 1 to 3 inclusive. For GPU, it uses cublas?gemmStridedBatched. For CPU, it uses cblas_?gemm_batch if available via MKL; otherwise, a simple serial implementation that loops over the batch dimension is used for now.
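A rough NumPy reference for the semantics described in this commit message (a sketch under the stated rank restriction, not the operator's code): the transpose flags swap the last two axes before multiplication, and a leading batch dimension is handled by a serial loop.

```python
import numpy as np

def matmul_sketch(x, y, transpose_x=False, transpose_y=False):
    """Reference semantics for a matmul op on rank-2/3 tensors (sketch only)."""
    if transpose_x:
        x = np.swapaxes(x, -1, -2)
    if transpose_y:
        y = np.swapaxes(y, -1, -2)
    if x.ndim == 2 and y.ndim == 2:
        return x @ y
    # Rank-3 case: a simple serial loop over the batch dimension.
    return np.stack([xb @ yb for xb, yb in zip(x, y)])

a = np.random.rand(4, 2, 3)
b = np.random.rand(4, 5, 3)
print(matmul_sketch(a, b, transpose_y=True).shape)   # (4, 2, 5)
```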
-
Committed by qijun
-
Committed by qijun
-
Committed by qijun
-
Committed by Qiao Longfei
* Init parameter base class
* Optimize the comments of optimizer
* Basic implementation of optimizer
* Add test_optimizer
* Add no_grad_set to interface
* Update optimizer.py
* Python code can run
* Fix some problems
* Add sync_with_cpp to Python Program and Block
* Sync vars and ops in block from cpp
* Optimize code and add some comments
* Add more checks for sync
* Update optimizer with return value of Backward
* Remove unused code
* Infer shape when creating gradient variable
* Update test_optimizer
* Update test_program.py
* Update backward test
* Follow comment
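As an illustration of the optimizer base-class pattern described above (all names and signatures here are assumptions for the sketch, not the module's actual interface), a base class can hold the shared bookkeeping, honor a no_grad_set, and leave the per-parameter update rule to subclasses:

```python
import numpy as np

class Optimizer:
    """Toy base class: shared logic lives here, the update rule is deferred to subclasses."""
    def __init__(self, learning_rate):
        self.lr = learning_rate

    def apply_gradients(self, params, grads, no_grad_set=()):
        for name in params:
            if name in no_grad_set or grads.get(name) is None:
                continue
            params[name] = self._update(params[name], grads[name])
        return params

    def _update(self, param, grad):
        raise NotImplementedError

class SGDOptimizer(Optimizer):
    def _update(self, param, grad):
        return param - self.lr * grad

params = {"w": np.ones(3), "b": np.zeros(1)}
grads = {"w": np.full(3, 0.5), "b": None}          # b has no gradient
print(SGDOptimizer(0.1).apply_gradients(params, grads, no_grad_set={"b"}))
```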
-
- 17 Oct 2017, 4 commits
-
Committed by Yu Yang
They are public now
-
Committed by Yu Yang
* Change dataType to data_type (follow PEP8)
* Change name_convention to fit PEP8
-
Committed by qijun
-
Committed by Yu Yang
* Feed/Fetch op is just a plain operator, not an OpWithKernel
* Do not register OpInfoMaker, since Feed/Fetch will never be configured by users
* Feed/Fetch op has an empty gradient
* Feed/Fetch op does not hard-code `feed_variable`, `fetch_variable` as its input and output; make them plain Operator inputs/outputs
-
- 16 Oct 2017, 4 commits
-
Committed by Luo Tao
-
Committed by qijun
-
Committed by qijun
-
Committed by dangqingqing
-
- 15 Oct 2017, 2 commits