- 14 November 2017, 4 commits
-
-
Committed by QI JUN
* add split lod tensor operator
* add more test cases
* clean code
* add merge lod tensor operator
* fix bug
* clean code
* add grad operator
* make mask support GPU
* add comments
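For context, the split/merge pair routes rows of a LoDTensor into a true branch and a false branch according to a boolean mask and later restores the original row order, which is the building block for IfElse-style control flow. Below is a rough C++ sketch of that row-partitioning idea only, using plain vectors instead of real LoDTensors; it is an illustration, not the operators' implementation.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Partition rows by a boolean mask (one flag per row). Rows whose flag is
// true go to the first output, the rest to the second; the relative order
// inside each branch is preserved, mirroring a split-by-mask op.
template <typename Row>
std::pair<std::vector<Row>, std::vector<Row>> SplitByMask(
    const std::vector<Row>& rows, const std::vector<bool>& mask) {
  std::vector<Row> true_branch, false_branch;
  for (std::size_t i = 0; i < rows.size(); ++i) {
    (mask[i] ? true_branch : false_branch).push_back(rows[i]);
  }
  return {true_branch, false_branch};
}

// Merge is the inverse: walk the mask again and pull the next row from the
// matching branch, restoring the original row order.
template <typename Row>
std::vector<Row> MergeByMask(const std::vector<Row>& true_branch,
                             const std::vector<Row>& false_branch,
                             const std::vector<bool>& mask) {
  std::vector<Row> merged;
  std::size_t t = 0, f = 0;
  for (bool m : mask) {
    merged.push_back(m ? true_branch[t++] : false_branch[f++]);
  }
  return merged;
}
```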
-
Committed by Yu Yang
* Assign Operator: Out = X, when the type is one of [LoDTensor/SelectedRows/LoDTensorArray]
* Follow comments
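The Assign operator's semantics are simply Out = X for the listed variable types. Below is a hedged C++ sketch of a type-dispatched copy with stand-in types; the real LoDTensor, SelectedRows and LoDTensorArray are far richer than these placeholders.

```cpp
#include <variant>
#include <vector>

// Stand-in types for illustration only.
struct Tensor { std::vector<float> data; };
struct Rows { std::vector<int> indices; Tensor value; };
using TensorArray = std::vector<Tensor>;

// A variable can hold any of the supported types.
using Variable = std::variant<Tensor, Rows, TensorArray>;

// Assign: the output becomes a copy of the input, whichever of the
// supported types it currently holds.
void Assign(const Variable& x, Variable* out) { *out = x; }
```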
-
Committed by xuwei06
The rank of the tensor returned by the chip() function is changed. In release mode, eigen_assert is not enabled, so the dimension mismatch is not detected.
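For reference, Eigen's chip() drops the chipped dimension, so its result has rank one lower than its input, and the shape checks on assignment go through eigen_assert, which compiles away when NDEBUG is defined. A small standalone illustration:

```cpp
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  Eigen::Tensor<float, 3> t(2, 3, 4);
  t.setZero();
  // chip(offset, dim) removes dimension `dim`: chipping a rank-3 tensor
  // yields a rank-2 tensor, here of shape (3, 4).
  Eigen::Tensor<float, 2> slice = t.chip(0, 0);
  // If the destination's dimensions do not match, the mismatch is caught
  // only by eigen_assert, which is a no-op when NDEBUG is defined, so
  // release builds silently miss it.
  return slice.size() == 12 ? 0 : 1;
}
```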
-
Committed by xuwei06
The dimension is not set correctly and is not checked in release mode because eigen_assert is not enabled.
-
- 13 November 2017, 6 commits
-
-
Committed by peterzhang2029
-
Committed by peterzhang2029
-
Committed by Qiao Longfei
* init trieconcat_op
* add basic implementation
* add test
* add more tests
* update unit test
* add PackAllSteps test
* fix PackAllSteps
* all tests passed
* clean code
* remove state inside helper
* rename prob to score
* optimize RemoveFromEnd
* use destructor to delete BeamNode recursively
* optimize interface
* add comments to interface
* optimize data structure
* use template to define the type of score
* use template parameter for BeamHelper
* change father to parent
* rename TrieConcat to BeamSearchOutConcat
* use LoDTensorArray
* rename BeamSearchOutConcat to BeamSearchDecode
* refine code
* retain all candidate sentences in beam_search_decode_op; do not consider endid
* use unique_ptr
* fix compare bug
* fix lod compile problem
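The decode step keeps, for each candidate at the final step, a chain of parent pointers back to the start of the beam and reconstructs each sentence by walking that chain. A toy C++ sketch of that backtracking with a simplified BeamNode (scores and node ownership are left out of the sketch):

```cpp
#include <algorithm>
#include <vector>

// Simplified beam node: each candidate remembers its token id and a pointer
// to its parent candidate at the previous step.
struct BeamNode {
  int word_id;
  const BeamNode* parent;
};

// Walk parent pointers from a last-step candidate back to the start of the
// beam, then reverse to recover the decoded sentence in reading order.
std::vector<int> Backtrack(const BeamNode* end_node) {
  std::vector<int> sentence;
  for (const BeamNode* n = end_node; n != nullptr; n = n->parent) {
    sentence.push_back(n->word_id);
  }
  std::reverse(sentence.begin(), sentence.end());
  return sentence;
}
```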
-
Committed by Yibing Liu
-
Committed by Yibing Liu
-
Committed by peterzhang2029
-
- 11 November 2017, 4 commits
-
-
Committed by dzhwinter
* "fix ci failed"
* "comment out seq_concate op to unblock PRs"
-
Committed by emailweixu
The TensorSetConstant struct is used in both math_function.cc and math_function.cu. Somehow the release build handles it correctly, but in the debug build, set_constant_with_place() in math_function.cu uses the TensorSetConstant from math_function.cc and causes a crash.
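The symptom described, where code in one translation unit ends up binding to a same-named helper from another file, is characteristic of a one-definition-rule clash. The sketch below shows the general avoidance pattern of giving each per-file helper internal linkage; it is an illustration of that pattern, not necessarily the fix this commit applied.

```cpp
#include <cstddef>

// When a helper functor must be defined in both a .cc and a .cu file,
// giving each copy internal linkage (an unnamed namespace) keeps the two
// definitions from clashing across translation units.
namespace {
struct TensorSetConstant {
  explicit TensorSetConstant(float value) : value_(value) {}
  void operator()(float* data, std::size_t n) const {
    for (std::size_t i = 0; i < n; ++i) data[i] = value_;
  }
  float value_;
};
}  // namespace

int main() {
  float buf[4];
  TensorSetConstant set_to_one(1.0f);
  set_to_one(buf, 4);
  return buf[0] == 1.0f ? 0 : 1;
}
```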
-
Committed by Yu Yang
It is useful in the gradient phase of an operator with a block.
-
Committed by emailweixu
It is caused by a bug in std::call_once described in https://stackoverflow.com/questions/41717579/stdcall-once-hangs-on-second-call-after-callable-threw-on-first-call, which is likely caused by a deeper bug in pthread_once discussed in https://patchwork.ozlabs.org/patch/482350/
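A minimal standalone illustration of the standard behavior being relied on: an exception thrown inside the callable leaves the once_flag unset, so a later call retries the initializer. The linked reports describe pthread_once-based implementations hanging at that second call instead.

```cpp
#include <iostream>
#include <mutex>
#include <stdexcept>

std::once_flag flag;

void InitOnce(bool fail) {
  std::call_once(flag, [fail] {
    if (fail) throw std::runtime_error("first attempt fails");
    std::cout << "initialized" << std::endl;
  });
}

int main() {
  try {
    InitOnce(true);  // the callable throws; the flag must stay unset
  } catch (const std::exception&) {
  }
  // Per the standard this second call retries the initializer; with the
  // pthread_once bug discussed in the links above it hangs instead.
  InitOnce(false);
  return 0;
}
```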
-
- 10 November 2017, 6 commits
-
-
Committed by yangyaming
-
Committed by Yancey
* fix seq_concat by refactoring LoD
* fix failed unit test
* rename function name
-
Committed by yangyaming
-
Committed by yangyaming
-
Committed by Yang Yang(Tony)
* first commit
* Python API for while op
* Python unittest for simple while_op forward
* fix out to be list
* Fix UT
* VarType
* Fix several bugs
* Fix bug
* Fix bug
* Fix Bug
* Fix bug
* Fix unittest
* Remove debug log
* Add comments
* add PADDLE_ENFORCE
* while_grad_op first commit
* Add `BlockDescBind::FindRecursiveOrCreateVar()` and fix bugs (see the sketch after this list)
* refine code
* fix unittest bug
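One item above adds `BlockDescBind::FindRecursiveOrCreateVar()`. Below is a hedged sketch of what a "find recursively, otherwise create" lookup over nested blocks can look like, using simplified stand-in types rather than the framework's BlockDescBind.

```cpp
#include <string>
#include <unordered_map>

struct Var { std::string name; };

// Simplified nested-block lookup: search the current block, then walk up
// through parent blocks; only if no ancestor defines the variable is it
// created in the current (innermost) block.
struct Block {
  Block* parent = nullptr;
  std::unordered_map<std::string, Var> vars;

  Var* FindRecursive(const std::string& name) {
    for (Block* b = this; b != nullptr; b = b->parent) {
      auto it = b->vars.find(name);
      if (it != b->vars.end()) return &it->second;
    }
    return nullptr;
  }

  Var& FindRecursiveOrCreate(const std::string& name) {
    if (Var* found = FindRecursive(name)) return *found;
    return vars.emplace(name, Var{name}).first->second;
  }
};
```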
-
Committed by Siddharth Goyal
* Fix attribute naming for momentum_op
* Fix minor typo in comment
* Fix attribute name
* Fix names in test_optimizer
* Fix python wrapper
-
- 09 November 2017, 10 commits
-
-
Committed by dangqingqing
-
Committed by dangqingqing
-
Committed by Luo Tao
-
Committed by peterzhang2029
-
Committed by yangyaming
-
Committed by fengjiayi
* Add LoDRankTable. The LoD rank table stores the `level` of `lod`, ordered by sequence length in descending order. It is useful when implementing dynamic RNN and is shared by the dynamic RNN memory, dynamic RNN slice input, and dynamic RNN slice output operators.
* Add skeleton for array_to_lod_tensor and lod_tensor_to_array
* Add VarType::LoDTensorArray
* Add PyBind of LoDTensorArray
* Add InferVarType
* Add first unittest
* Add ut
* Add unittest
* Add unittest
* Add unittests
* update
* init
* add infershape for lod_tensor_to_array_op
* complete array_to_lod_tensor_op
* copy data
* clean code
* clean code
* Fix unittest data
* fix bugs
* fix compile error
* Refine TensorToArrayOp
* refactor array_to_lod_tensor
* Unittest
* fix bugs
* Fix unittest
* Fix unittest
* debug
* Debug
* Fix unittest
* Add grad for ops
* Debug
* Fix a bug
* fix a bug
* fix a bug
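As described above, the rank table orders a batch's sequences by length, longest first, while remembering each sequence's original index so results can be mapped back. A minimal C++ sketch of that idea (an illustration, not the framework's LoDRankTable class):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Given the length of each sequence in a batch, produce
// (original_index, length) pairs sorted by length in descending order.
// Dynamic-RNN style ops can then process sequences longest-first and map
// outputs back via original_index.
std::vector<std::pair<std::size_t, std::size_t>> BuildRankTable(
    const std::vector<std::size_t>& seq_lens) {
  std::vector<std::pair<std::size_t, std::size_t>> items;
  items.reserve(seq_lens.size());
  for (std::size_t i = 0; i < seq_lens.size(); ++i) {
    items.emplace_back(i, seq_lens[i]);
  }
  std::stable_sort(
      items.begin(), items.end(),
      [](const auto& a, const auto& b) { return a.second > b.second; });
  return items;
}
```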
-
Committed by Yang Yu
-
Committed by Yang Yu
-
Committed by Yang Yu
-
Committed by guosheng
-
- 08 November 2017, 10 commits
-
-
Committed by wwhu
-
Committed by dangqingqing
-
Committed by Yang Yang(Tony)
* add fill_constant_batch_size_like_op to rnn h_boot
* first commit
* merge develop; fix conflict
* update to main_program
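fill_constant_batch_size_like creates a constant-filled tensor whose leading (batch) dimension is taken from a reference input at run time, which lets the RNN's initial state h_boot match a variable batch size. A rough C++ sketch of that shape rule with plain vectors, not the op's real interface:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Build the output shape by copying the batch dimension from a reference
// shape, keep the remaining dimensions as configured, and fill the whole
// buffer with one constant value.
std::vector<float> FillConstantBatchSizeLike(
    const std::vector<std::int64_t>& ref_shape,  // e.g. {batch, ...}
    std::vector<std::int64_t> out_shape,         // out_shape[0] is replaced
    float value) {
  out_shape[0] = ref_shape[0];  // inherit the batch size at run time
  std::int64_t numel = 1;
  for (std::int64_t d : out_shape) numel *= d;
  return std::vector<float>(static_cast<std::size_t>(numel), value);
}
```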
-
Committed by typhoonzero
-
Committed by peterzhang2029
-
Committed by typhoonzero
-
Committed by Yang Yu
CompareOp can run on the CPU even when other operators run on the GPU, since operations such as comparing control flags should be performed only on the CPU.
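A hedged sketch of the dispatch rule being described, with a hypothetical place enum and example op-type names standing in for the framework's own abstractions:

```cpp
#include <string>

// Hypothetical device tag for illustration only.
enum class Place { kCPU, kGPU };

// Operators whose outputs are control-flow flags are pinned to the CPU even
// when the rest of the program runs on the GPU, because those flags are
// consumed by host-side control flow. The op-type names are examples only.
Place ExpectedPlace(const std::string& op_type, Place global_place) {
  const bool produces_control_flag =
      op_type == "less_than" || op_type == "equal";
  return produces_control_flag ? Place::kCPU : global_place;
}
```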
-
Committed by Yang Yu
Follow comments
-
Committed by chengduoZH
-
Committed by Yu Yang
* Add LoDRankTable. The LoD rank table stores the `level` of `lod`, ordered by sequence length in descending order. It is useful when implementing dynamic RNN and is shared by the dynamic RNN memory, dynamic RNN slice input, and dynamic RNN slice output operators.
* Add skeleton for array_to_lod_tensor and lod_tensor_to_array
* Add VarType::LoDTensorArray
* Add PyBind of LoDTensorArray
* Add InferVarType
* Add first unittest
* Add ut
* Add unittest
* Add unittest
* Add unittests
* update
* init
* add infershape for lod_tensor_to_array_op
* complete array_to_lod_tensor_op
* copy data
* clean code
* clean code
* Fix unittest data
* fix bugs
* fix compile error
* Refine TensorToArrayOp
* refactor array_to_lod_tensor
* Unittest
* fix bugs
* Fix unittest
* Fix unittest
* debug
* Debug
* Fix unittest
* clean code
* refactor
* use ostream
* update test
* fix gpu build error
* make gpu test pass
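This entry repeats the LoDRankTable description above and completes the lod_tensor_to_array / array_to_lod_tensor pair. Below is a rough sketch of the packing idea behind lod_tensor_to_array, using ints in place of tensor rows and assuming the sequences are already in rank-table order; it is an illustration only, not the ops' actual code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sequences are visited in rank-table order (longest first); step t gathers
// the t-th element of every sequence that still has one. The inverse,
// array_to_lod_tensor, scatters the steps back into sequences.
std::vector<std::vector<int>> PackToSteps(
    const std::vector<std::vector<int>>& sorted_seqs) {
  std::size_t max_len = 0;
  for (const auto& s : sorted_seqs) max_len = std::max(max_len, s.size());
  std::vector<std::vector<int>> steps(max_len);
  for (std::size_t t = 0; t < max_len; ++t) {
    for (const auto& s : sorted_seqs) {
      if (t < s.size()) steps[t].push_back(s[t]);
    }
  }
  return steps;
}
```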
-