- 09 Nov 2017, 5 commits

Committed by dangqingqing

Committed by yangyaming

Committed by ranqiu

Committed by fengjiayi
* Add LoDRankTable. An LoD rank table stores the `level` of an `lod`, ordered by sequence length in descending order. It is useful when implementing dynamic RNN and is shared by the dynamic RNN memory, dynamic RNN slice-input, and dynamic RNN slice-output operators.
* Add skeleton for array_to_lod_tensor and lod_tensor_to_array
* Add VarType::LoDTensorArray and its PyBind binding
* Add InferVarType
* Add infershape for lod_tensor_to_array_op
* Complete array_to_lod_tensor_op; copy data
* Refine TensorToArrayOp; refactor array_to_lod_tensor
* Add gradients for the ops
* Add unittests; fix bugs and compile errors; clean up code
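
To make the rank-table idea concrete, here is a minimal, framework-free Python sketch (the function name and data layout are illustrative, not Paddle's LoDRankTable API): it pairs each sequence of one LoD level with its length and stable-sorts by length in descending order, which is the ordering dynamic RNN relies on when slicing its inputs.

```python
# Illustrative sketch only; not PaddlePaddle's LoDRankTable implementation.

def build_lod_rank_table(lod_level):
    """lod_level holds offsets into the sequence axis, e.g. [0, 2, 6, 7]
    describes three sequences of lengths 2, 4 and 1. Returns
    (original_index, length) pairs sorted by length, descending."""
    lengths = [end - begin for begin, end in zip(lod_level[:-1], lod_level[1:])]
    # Python's sort is stable, so equal-length sequences keep their order.
    return sorted(enumerate(lengths), key=lambda item: item[1], reverse=True)

print(build_lod_rank_table([0, 2, 6, 7]))  # [(1, 4), (0, 2), (2, 1)]
```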

Committed by Yang Yu

- 08 Nov 2017, 15 commits

Committed by yangyaming

Committed by caoying03

Committed by ranqiu

Committed by yangyaming

Committed by Luo Tao

Committed by Yang Yang(Tony)
* Add fill_constant_batch_size_like_op to rnn h_boot (see the sketch below)
* First commit
* Merge develop; fix conflicts
* Update to main_program
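
As a rough illustration of what a fill-constant-batch-size-like operator provides (a NumPy sketch under assumed semantics, not the operator's implementation): it builds a tensor of a requested shape, but takes the batch dimension from a reference input and fills the result with a constant, which is how an initial RNN state such as h_boot can be sized to the current batch.

```python
# Assumed semantics, sketched with NumPy; not the Paddle operator itself.
import numpy as np

def fill_constant_batch_size_like(reference, shape, value, batch_dim=0):
    """Return a tensor of `shape`, except that dimension `batch_dim`
    is copied from `reference`, filled with `value`."""
    out_shape = list(shape)
    out_shape[batch_dim] = reference.shape[batch_dim]
    return np.full(out_shape, value, dtype=reference.dtype)

x = np.zeros((5, 3), dtype=np.float32)                    # batch of 5 samples
h_boot = fill_constant_batch_size_like(x, shape=(1, 8), value=0.0)
print(h_boot.shape)                                       # (5, 8)
```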

Committed by typhoonzero

Committed by Yang Yu
Follow comments

Committed by Yang Yu

Committed by Wang,Jeff

Committed by chengduoZH

Committed by Yu Yang
* Add LoDRankTable. An LoD rank table stores the `level` of an `lod`, ordered by sequence length in descending order. It is useful when implementing dynamic RNN and is shared by the dynamic RNN memory, dynamic RNN slice-input, and dynamic RNN slice-output operators.
* Add skeleton for array_to_lod_tensor and lod_tensor_to_array
* Add VarType::LoDTensorArray and its PyBind binding
* Add InferVarType
* Add infershape for lod_tensor_to_array_op
* Complete array_to_lod_tensor_op; copy data
* Refine TensorToArrayOp; refactor array_to_lod_tensor
* Add unittests; fix bugs and compile errors; clean up code
* Refactor; use ostream
* Fix GPU build error; make the GPU test pass

Committed by Yang Yu
Used for shrinking the memory state in DyRNN: the height of the state can be shrunk after running a step block, since the shorter sequences have already finished (see the sketch below).
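
A minimal NumPy sketch of that shrinking step, assuming the batch is ordered by descending sequence length as the rank table guarantees; the function name is illustrative, not the actual operator.

```python
# Illustrative sketch only, assuming sequences sorted by descending length.
import numpy as np

def shrink_state(state, seq_lengths, t):
    """Keep only the rows of `state` whose sequence is still running
    at time step `t`. Because `seq_lengths` is sorted descending,
    the surviving rows are always a prefix of the state."""
    still_running = sum(1 for length in seq_lengths if length > t)
    return state[:still_running]

state = np.random.rand(3, 4).astype(np.float32)   # 3 sequences, hidden size 4
print(shrink_state(state, seq_lengths=[4, 2, 1], t=2).shape)  # (1, 4)
```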

Committed by Yu Yang

Committed by Yu Yang
* Compare Operator
* Follow comments

- 07 Nov 2017, 6 commits

Committed by ranqiu

Committed by typhoonzero

Committed by dangqingqing
1. The user can disable peephole connections (a sketch of the peephole terms follows below).
2. Do not calculate some gradients.
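
For context on what peephole connections are, here is a small NumPy sketch of one LSTM step with the peepholes behind a flag; this is the generic textbook formulation given as an illustration, not the kernel the commit above modifies.

```python
# Generic LSTM step with optional peephole connections; illustration only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b, peep, use_peepholes=True):
    """W, U, b hold the stacked input/forget/output/cell parameters;
    peep holds the diagonal peephole weights (w_ci, w_cf, w_co)."""
    z = x @ W + h_prev @ U + b                # shape: (batch, 4 * hidden)
    zi, zf, zo, zc = np.split(z, 4, axis=1)
    if use_peepholes:                         # input/forget gates peek at c_prev
        zi = zi + c_prev * peep[0]
        zf = zf + c_prev * peep[1]
    i, f = sigmoid(zi), sigmoid(zf)
    c = f * c_prev + i * np.tanh(zc)
    if use_peepholes:                         # output gate peeks at the new cell
        zo = zo + c * peep[2]
    o = sigmoid(zo)
    return o * np.tanh(c), c
```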

Committed by Yu Yang
* Use stable_sort in lod_rank_table. It is easier to debug and test with `stable_sort`, and the time complexity is unchanged.
* Add LoDTensorArray
* Better debug message for IsInitialized
* Complete array read/write op unittests
* Add unittest for the gradient of array read/write (see the sketch below)
* Follow comments
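
For intuition, the array read/write operations behave like indexed access into a growable list of tensors; in backprop the natural pairing is that the gradient of a write at index i is a read at the same index, and vice versa. A minimal Python sketch with illustrative names (not Paddle's operators):

```python
# Illustrative sketch of array read/write semantics; not Paddle's operators.
import numpy as np

class TensorArray:
    """A growable list of tensors addressed by integer index."""
    def __init__(self):
        self._data = []

    def write(self, index, tensor):
        if index >= len(self._data):                     # grow on demand
            self._data.extend([None] * (index + 1 - len(self._data)))
        self._data[index] = tensor

    def read(self, index):
        return self._data[index]

arr = TensorArray()
arr.write(0, np.ones((2, 3)))
arr.write(2, np.zeros((2, 3)))
print(arr.read(0).sum(), arr.read(2).sum())              # 6.0 0.0
```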

Committed by Yang Yang(Tony)

Committed by Yu Yang
* Use stable_sort in lod_rank_table. It is easier to debug and test with `stable_sort`, and the time complexity is unchanged.
* Add LoDTensorArray
* Better debug message for IsInitialized
* Complete array read/write op unittests

- 06 Nov 2017, 10 commits

Committed by dangqingqing

Committed by chengduoZH

Committed by caoying03

Committed by dangqingqing

Committed by Yiqun Liu
The engine library needs to link against paddle_pserver and paddle_network on Linux.

Committed by dangqingqing

Committed by chengduoZH

Committed by yangyaming

Committed by Yu Yang

Committed by Yu Yang
* Use stable_sort in lod_rank_table. It is easier to debug and test with `stable_sort`, and the time complexity is unchanged.
* Add LoDTensorArray

- 05 Nov 2017, 2 commits

Committed by yangyaming

Committed by Yu Yang

- 04 Nov 2017, 2 commits

Committed by Cao Ying
* Project init
* Add unittest and implementation

Committed by Qiao Longfei
* Add accuracy layer
* Change memory log level from 3 to 10
* Use Gaussian random to initialize conv parameters
* Use initializers
* Fix import
* batch_norm uses helper to create persistable variables
* Refine code
* Train only 2 batches for the test
* Use g_program and g_init_program
* Use XavierInitializer to initialize fc parameters (see the sketch below)
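
For reference, Xavier (Glorot) initialization draws weights at a scale set by the layer's fan-in and fan-out, for example uniformly from [-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))]. Below is a small NumPy sketch of that rule; it illustrates the idea and is not Paddle's XavierInitializer class.

```python
# Xavier/Glorot uniform initialization, sketched with NumPy for illustration.
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=np.random.default_rng(0)):
    """Sample a (fan_in, fan_out) weight matrix from U(-limit, +limit)
    with limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out)).astype(np.float32)

W = xavier_uniform(256, 128)      # e.g. an fc layer's weight
print(W.shape, float(W.std()))    # std is close to sqrt(2 / (fan_in + fan_out))
```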