- 04 Nov 2017 (5 commits)
  - Committed by kexinzhao: polish operator doc
  - Committed by Qiao Longfei:
    - add acc layer
    - memory log level change from 3 to 10
    - use gaussian random to init conv parameters
    - use initializer
    - fix import
    - batch_norm use helper to create persistable var
    - refine code
    - train only 2 batches for test
    - use g_program and g_init_program
    - use XavierInitializer to init fc parameter
  - Committed by Yu Yang:
    - Add LoDRankTable: the LoD Rank Table stores the `level` of `lod`, ordered by sequence length in descending order. It is useful when implementing dynamic RNN and is shared by the dynamic RNN memory, dynamic RNN slice input, and dynamic RNN slice output operators (an illustrative sketch follows this date section).
    - Add InferVarType
  - Committed by Abhinav Arora
  - Committed by Kexin Zhao
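The LoDRankTable entry above describes a small data structure: for one LoD level it records each sequence's index and length, sorted by length in descending order, so the dynamic RNN operators can slice inputs and restore outputs in a consistent order. Below is a minimal Python sketch of that idea; the function name `build_rank_table` and the offset-style `lod_level` argument are illustrative assumptions, not Paddle's actual API.

```python
# Illustrative sketch only: build a rank table for one LoD level by sorting
# sequences by length in descending order while remembering their original
# indices. `lod_level` uses offset notation, e.g. [0, 3, 5, 9] describes
# three sequences of lengths 3, 2 and 4.
def build_rank_table(lod_level):
    lengths = [end - start
               for start, end in zip(lod_level[:-1], lod_level[1:])]
    # Pair each original sequence index with its length so slice input /
    # slice output operators can map results back to the original order.
    items = list(enumerate(lengths))
    items.sort(key=lambda item: item[1], reverse=True)
    return items

print(build_rank_table([0, 3, 5, 9]))  # [(2, 4), (0, 3), (1, 2)]
```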
- 03 Nov 2017 (18 commits)
  - Committed by Wang Meng: Check whether param name is manually set when input is a sequence in fc layer
  - Committed by ranqiu92: Update FAQ
  - Committed by wangmeng28
  - Committed by ranqiu
  - Committed by kexinzhao: Polish activation operator documentation
  - Committed by Tao Luo: Refine activation function pointer for LSTM operator
  - Committed by Yi Wang
  - Committed by Yi Wang:
    - Add English version of Android cross-compiling document
    - Add English version of Android cross-compiling document
    - Follow comments from Yi-qun and Kavya
  - Committed by wangmeng28
  - Committed by Kexin Zhao
  - Committed by fengjiayi:
    - init
    - Fix bug
    - rename test_filw
    - refine test
  - Committed by daminglu: fix build doc
  - Committed by daming-lu
  - Committed by Yu Yang:
    - Fix bug in lookup_table_op & layers
    - Missing Act in layers
    - Should += in CPU
    - Remove check in python
    - Fix bug in sequence_conv_pool()
    - Fix a bug in test_recommender_system.py
    - Just skip test_evaluator
  - Committed by daming-lu
  - Committed by Abhinav Arora:
    - Adding the Xavier Initializer (see the sketch after this date section)
    - Addressing code review feedback
  - Committed by Kexin Zhao
  - Committed by daming-lu
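The Xavier Initializer added above draws weights so that activation variance stays roughly constant across layers. A minimal NumPy sketch of the uniform variant follows; the helper name `xavier_uniform` is hypothetical and is not Paddle's actual API.

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    # Xavier/Glorot uniform: draw from U(-limit, limit) with
    # limit = sqrt(6 / (fan_in + fan_out)) to keep activation variance
    # roughly constant across layers.
    rng = rng or np.random.RandomState(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Example: initialize a fully connected weight of shape [784, 256].
w = xavier_uniform(784, 256)
```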
- 02 Nov 2017 (17 commits)
  - Committed by Tao Luo: Update IntelOptimizedPaddle.md
  - Committed by wangmeng28
  - Committed by wangmeng28
  - Committed by Peng LI: Support saving figure to file
  - Committed by daminglu: add deploy script for website
  - Committed by 武毅:
    - refine evaluator op types
    - update
    - follow comments
    - update
    - fix v2 mnist case
    - fix v2 mnist case
    - update
    - update
    - hack auc evaluator for dense vec
    - follow comments
  - Committed by dangqingqing
  - Committed by kavyasrinet:
    - Adding design doc for model average (now called parameter_average)
    - Updating title
    - Updating image tag
    - Updating review comments
  - Committed by Yang Yang(Tony)
  - Committed by tensor-tang
  - Committed by Zhuoyuan: deconv cudnn forward passed
  - Committed by dzhwinter:
    - "add net drawer for visualizing the graph"
    - "fix"
    - "add dep"
  - Committed by Yu Yang:
    - Init commit
    - Make executor use ProgramDescBind
    - Change Attribute from BlockDesc to BlockDescBind
      - Since we will get the program desc in RNN, just BlockDesc is not enough.
    - Add DeviceContext to Executor API
    - Rewrite RNN
    - Pass Python
    - AddBiasOp does not care num_flatten_dims
    - Stash
    - Fix MacOS Compile
    - Pass RNN forward
    - add python test
    - refactor test
    - Make compile pass
    - add gradopmaker
    - First draft done
    - Polish code
    - add grad op maker and grad infershape
    - Polish code
    - Fix backward.cc bug
    - Fix infershape
    - Rename function
    - add backward test
    - simplify recurrent test
    - Update
    - Pass unittest
    - Add comments & refine test
    - Add comments
    - refactor test
    - Complete Unittest
    - fix StepScopes enforce
    - Remove unused unittest
    - no type error
    - Update
    - Make RNN Pass unittest
  - Committed by Yang yaming: Add PrecisionRecall Op
  - Committed by dzhwinter:
    - "add sequence conv layer"
    - "add book recommender_system testing"
    - "add training loop"
    - "add sequence layer"
    - "add recommender system training data"
    - "fix conv2d layer bug"
    - add sequence_conv_pool
    - "fix input is Null"
    - add networks
    - "fix based comment"
    - "add sum op layer"
    - "merge layers"
    - Update layers.py
    - "fix input is NULL bug"
    - "debug embedding table"
    - "modify layers.py"
    - "fix pool interface"
    - "add export type to layers"
    - "fix based on comment"
    - "need lod info support in all operator"
    - "remove accuracy layer"
    - "tuning learning rate"
    - "add sparse test"
    - "add gpu test"
    - Update test_recommender_system.py
  - Committed by Qiao Longfei:
    - optimizer use init_program
    - create persistable variable
    - add create_persistable_var to block
    - optimizer use create_persistable_var
    - fix prefix
    - move create_global_persistable_var from Block to LayerHelper
    - Polish Optimizer initialization code
    - Using the LayerHelper to create initialize operator and variables
    - add_accumulator should use an independent data type
    - default use param data type for accumulator
  - Committed by Yang Yang(Tony)