- 15 Nov 2017, 4 commits

Committed by dzhwinter

Committed by Dong Zhihong

Committed by Helin Wang
* Removed all main_program and startup_program from the demo.
* Using paddle.default_main_program() hides the implementation detail (e.g., the underlying g_main_program global) from the user, so the implementation can be changed much more easily in the future (the pattern is sketched below).
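The second point is the usual accessor-over-a-module-global pattern: user code calls a function instead of importing the global object, so the backing storage can be swapped without touching callers. The sketch below shows only that pattern; the `Program` class and `_g_main_program` name are illustrative stand-ins, not Paddle's actual internals.

```python
# Illustrative sketch of the accessor-over-global pattern; `Program` and
# `_g_main_program` are stand-in names, not Paddle's real implementation.

class Program(object):
    """Placeholder container for operators/blocks."""
    def __init__(self):
        self.ops = []

_g_main_program = Program()  # module-level default, treated as private

def default_main_program():
    """Public accessor; callers never touch the global directly."""
    return _g_main_program

# User code depends only on the accessor, so the storage behind it can change:
prog = default_main_program()
prog.ops.append("fc")
```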
Committed by Abhinav Arora
* Add greater-than and less-than-or-equal ops to the compare op (elementwise semantics sketched below)
* Rename the less_than_equal and greater_than_equal ops
* Also rename the functors
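For context, compare ops of this kind perform elementwise comparisons that yield boolean tensors. A NumPy stand-in showing only the intended semantics, not Paddle's API:

```python
import numpy as np

# NumPy stand-in for the comparison semantics; not Paddle's compare op API.
x = np.array([1, 2, 3, 4])
y = np.array([4, 3, 3, 1])

print(x > y)   # greater-than, elementwise: [False False False  True]
print(x <= y)  # less-than-or-equal, elementwise: [ True  True  True False]
```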
- 14 Nov 2017, 12 commits

Committed by Qiao Longfei
* Init commit
* Change some dir names

Committed by xzl

Committed by peterzhang2029

Committed by Dong Zhihong

Committed by Qiao Longfei
* Fix lod_tensor_array
* Init test for the beam search decode op
* Add test_beam_search_decode_op

Committed by peterzhang2029

Committed by Yu Yang
* Conditional Block forward
* Assign operator: Out = X when the type is LoDTensor, SelectedRows, or LoDTensorArray
* Stash
* Add Scope::Rename; it is useful in the gradient phase of an operator with a block (see the sketch after this list)
* ConditionalBlock grad done
* Add comments
* Format the code with yapf
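To illustrate the Scope::Rename item: a renamed variable keeps its contents and simply becomes reachable under a new name, which is convenient when the gradient pass of a block operator needs to remap variable names. Below is a toy, dict-backed Python stand-in, not the real C++ Scope; the `x@RENAMED` name is made up for the example.

```python
class ToyScope(object):
    """Dict-backed stand-in for a variable scope, for illustration only."""
    def __init__(self):
        self._vars = {}

    def set_var(self, name, value):
        self._vars[name] = value

    def find_var(self, name):
        return self._vars.get(name)

    def rename(self, old_name, new_name):
        # The value is untouched; only the key it is found under changes.
        self._vars[new_name] = self._vars.pop(old_name)

scope = ToyScope()
scope.set_var("x", [1.0, 2.0])
scope.rename("x", "x@RENAMED")          # hypothetical new name
assert scope.find_var("x") is None
assert scope.find_var("x@RENAMED") == [1.0, 2.0]
```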
Committed by peterzhang2029

Committed by xzl

Committed by QI JUN
* Add the split LoD tensor operator
* Add more test cases
* Clean up code
* Add the merge LoD tensor operator (the split/merge routing is sketched after this list)
* Fix bug
* Clean up code
* Add grad operator
* Make the mask support GPU
* Add comments
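Conceptually, the split operator routes input rows into a true branch and a false branch according to a boolean mask, and the merge operator scatters the two branches back into the original order. A minimal NumPy sketch of that routing only; it ignores the LoD bookkeeping the real operators do.

```python
import numpy as np

def split_by_mask(x, mask):
    """Route rows of x into (true_branch, false_branch) according to mask."""
    mask = mask.astype(bool)
    return x[mask], x[~mask]

def merge_by_mask(true_part, false_part, mask):
    """Scatter the two parts back into the original row order."""
    mask = mask.astype(bool)
    out = np.empty((mask.size,) + true_part.shape[1:], dtype=true_part.dtype)
    out[mask] = true_part
    out[~mask] = false_part
    return out

x = np.arange(8, dtype=np.float32).reshape(4, 2)
mask = np.array([1, 0, 1, 0])
t, f = split_by_mask(x, mask)
assert np.array_equal(merge_by_mask(t, f, mask), x)
```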
Committed by Yu Yang
* Assign operator: Out = X when the type is LoDTensor, SelectedRows, or LoDTensorArray
* Follow comments

Committed by Helin Wang

- 13 Nov 2017, 4 commits

Committed by Yibing Liu

Committed by Yu Yang

Committed by QI JUN
* Add LSTM layer
* Set hidden shape
* Rename the input parameter
* Add dynamic LSTM
* Refine the dynamic LSTM layer
* Change parameters to use XavierInitializer by default
* Refine the dynamic LSTM layer

Committed by QI JUN
* Create a learning rate variable for every parameter
* Fix CI
* Set each parameter's learning rate relative to the global learning rate (see the sketch after this list)
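The last item amounts to deriving each parameter's effective learning rate from the global one, i.e. effective LR = global LR times a per-parameter factor. A tiny illustrative sketch; the parameter names and factors below are made up.

```python
# Illustration only: per-parameter learning rates scaled off a global rate.
global_lr = 0.01
param_lr_factor = {"fc.w": 1.0, "fc.b": 2.0, "emb.w": 0.1}  # hypothetical values

effective_lr = {name: global_lr * factor for name, factor in param_lr_factor.items()}
print(effective_lr)  # fc.w: 0.01, fc.b: 0.02, emb.w: 0.001 (up to float rounding)
```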
- 11 Nov 2017, 1 commit

Committed by dzhwinter
* "fix ci failed"
* "comment out seq_concate op to unblock PRs"

- 10 Nov 2017, 12 commits

Committed by guosheng

Committed by guosheng

Committed by Yancey
* Fix seq_concat by refactoring LoD
* Fix a failed unit test
* Rename a function

Committed by yangyaming

Committed by Yang Yang(Tony)
* First commit
* Python API for the while op
* Python unittest for simple while_op forward
* Fix out to be a list
* Fix UT
* VarType
* Fix several bugs
* Fix bug
* Fix bug
* Fix bug
* Fix bug
* Fix unittest
* Remove debug log
* Add comments
* Add PADDLE_ENFORCE
* while_grad_op first commit
* Add `BlockDescBind::FindRecursiveOrCreateVar()` and fix bugs
* Refine code
* Fix unittest bug

Committed by Dong Zhihong

Committed by Dong Zhihong

Committed by kavyasrinet
* Add operator assignment
* Add documentation to layers.py
* Remove a file from another PR

Committed by Dong Zhihong

Committed by Dong Zhihong

Committed by Siddharth Goyal
* Fix attribute naming for momentum_op
* Fix a minor typo in a comment
* Fix attribute name
* Fix names in test_optimizer
* Fix the Python wrapper

Committed by Dong Zhihong

- 09 Nov 2017, 7 commits

Committed by dangqingqing

Committed by peterzhang2029

Committed by yangyaming

Committed by ranqiu

Committed by Yang Yu
* increment is inplace by default

Committed by Helin Wang

Committed by fengjiayi
* Add LoDRankTable. The LoD rank table stores the `level` of a `lod`, ordered by sequence length in descending order. It is useful when implementing dynamic RNN and is shared by the dynamic RNN memory, dynamic RNN slice-input, and dynamic RNN slice-output operators (a conceptual sketch follows this list).
* Add skeleton for array_to_lod_tensor and lod_tensor_to_array
* Add VarType::LoDTensorArray
* Add PyBind of LoDTensorArray
* Add InferVarType
* Add first unittest
* Add ut
* Add unittest
* Add unittest
* Add unittests
* Update
* Init
* Add infershape for lod_tensor_to_array_op
* Complete array_to_lod_tensor_op
* Copy data
* Clean code
* Clean code
* Fix unittest data
* Fix bugs
* Fix compile error
* Refine TensorToArrayOp
* Refactor array_to_lod_tensor
* Unittest
* Fix bugs
* Fix unittest
* Fix unittest
* Debug
* Debug
* Fix unittest
* Add grad for ops
* Debug
* Fix a bug
* Fix a bug
* Fix a bug
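A rough illustration of the first item: given one LoD level stored as offsets, the rank table pairs each sequence's original index with its length and sorts the pairs by length in descending order. Conceptual Python only, not the C++ LoDRankTable.

```python
def build_rank_table(lod_level):
    """lod_level holds offsets, e.g. [0, 3, 5, 9] means sequence lengths 3, 2, 4.
    Returns (original_index, length) pairs ordered by length, descending."""
    lengths = [lod_level[i + 1] - lod_level[i] for i in range(len(lod_level) - 1)]
    return sorted(enumerate(lengths), key=lambda item: item[1], reverse=True)

print(build_rank_table([0, 3, 5, 9]))  # [(2, 4), (0, 3), (1, 2)]
```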