- 18 October 2017, 4 commits

- Committed by wanghaoshuang
- Committed by Yu Yang
- Committed by Yu Yang
- Committed by Qiao Longfei
  * init parameter base class
  * optimize the comments of optimizer
  * basic implementation of optimizer
  * add test_optimizer
  * add no_grad_set to the interface
  * update optimizer.py
  * python code can run
  * fix some problems
  * add sync_with_cpp to Python Program and Block
  * sync vars and ops in block from cpp
  * optimize code and add some comments
  * add more checks for sync
  * update optimizer with the return value of Backward
  * rm unused code
  * infer shape when creating a gradient variable
  * update test_optimizer
  * update test_program.py
  * update backward test
  * follow comments
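
The optimizer commit above builds a Python-side Optimizer base class on top of the parameter-to-gradient mapping returned by Backward. Below is a minimal, hypothetical sketch of that shape only; the names `Optimizer`, `_append_optimize_op`, `create_backward_pass`, and the `@GRAD` suffix are illustrative placeholders, not the framework's actual API.

```python
# Minimal sketch of an optimizer base class driven by the backward pass's
# return value. All names are illustrative, not the framework's real API.

def create_backward_pass(loss, no_grad_set):
    # Stand-in for the real Backward: pretend two parameters got gradients.
    params = ["fc.w", "fc.b"]
    return {p: p + "@GRAD" for p in params if p not in no_grad_set}


class Optimizer(object):
    """Base class: subclasses decide how a single parameter is updated."""

    def __init__(self, learning_rate):
        self.learning_rate = learning_rate

    def _append_optimize_op(self, param, grad):
        raise NotImplementedError

    def minimize(self, loss, no_grad_set=None):
        # Backward returns {param_name: grad_name}; emit one update op per entry.
        param_to_grad = create_backward_pass(loss, no_grad_set or set())
        return [self._append_optimize_op(p, g)
                for p, g in sorted(param_to_grad.items())]


class SGDOptimizer(Optimizer):
    def _append_optimize_op(self, param, grad):
        # Conceptually: param -= learning_rate * grad, recorded as an "op".
        return ("sgd", param, grad, self.learning_rate)


print(SGDOptimizer(learning_rate=0.01).minimize(loss="cross_entropy"))
```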

- 17 October 2017, 11 commits

- Committed by Yu Yang
  * Make global scope not thread-safe
    1. There is no need to make the global scope thread-safe, since it is only invoked from the Python main thread.
    2. Do not free the global scope when C++ exits. Let the OS free the memory; otherwise we would need to handle destruction-order dependencies. See https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables
  * Revert "FIX: Release CPU/GPU memory via deleter"
    This reverts commit 8f80f5bc.
- Committed by Qiao Longfei
- Committed by Yang Yang
- Committed by Qiao Longfei
  * rm cpp executor_test, rewrite it in python later
  * remove the executor_test code from CMakeLists.txt
- Committed by Yu Yang
  * Change dataType to data_type, following PEP8
  * Change name_convention to fit PEP8
- Committed by Yu Yang
  * AttributeChecker: better error log and specialize bool, since lots of types can be cast to bool
  * add FIXME comment
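
The bool specialization mentioned above guards against a pitfall that exists in most languages: almost anything converts to bool, so a generic "cast and see" check would silently accept wrong attribute types. Here is a small Python analogue of the idea (the actual checker is C++, and `check_attr` is a made-up helper):

```python
# Python analogue of the bool-checking pitfall; illustrative only.

def check_attr(name, value, expected_type):
    if expected_type is bool:
        # Special-case bool: do NOT fall through to a generic conversion,
        # because bool(x) succeeds for nearly every x.
        if not isinstance(value, bool):
            raise TypeError("attribute '%s' expects bool, got %s (%r)"
                            % (name, type(value).__name__, value))
        return value
    try:
        return expected_type(value)
    except (TypeError, ValueError) as exc:
        raise TypeError("attribute '%s' expects %s: %s"
                        % (name, expected_type.__name__, exc))


check_attr("use_gpu", True, bool)   # passes
check_attr("axis", "1", int)        # coerced to 1
# check_attr("use_gpu", 1, bool)    # raises TypeError: int is not bool
```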
- Committed by qijun
- Committed by qijun
- Committed by qijun
- Committed by qijun
- Committed by Yu Yang
  * Feed/Fetch op is just a plain operator, not an OpWithKernel
  * Do not register an OpInfoMaker, since Feed/Fetch will never be configured by users
  * Feed/Fetch op has an empty gradient
  * Feed/Fetch op does not hard-code `feed_variable`, `fetch_variable` as its input and output; it takes them as plain Operator inputs/outputs

- 16 October 2017, 2 commits

- 15 October 2017, 11 commits

- Committed by qiaolongfei
- Committed by qiaolongfei
- Committed by qijun
- Committed by Yu Yang
  * Final step of backward: return a map from param_name to grad
  * Complete the final step of backward: return the param_name-to-grad_info map
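
A hypothetical sketch of the return value described here: backward hands the caller a map from parameter name to gradient info, so an optimizer can look gradients up by parameter. The `@GRAD` suffix and the `(grad_name, position)` layout below are guesses for illustration, not the framework's exact structure.

```python
# Illustrative only: what a param_name -> grad_info map could look like.

def append_backward(loss_name, no_grad_set=frozenset()):
    params = ["fc.w", "fc.b"]  # pretend these were collected from the program
    param_to_grad_info = {}
    for op_idx, param in enumerate(p for p in params if p not in no_grad_set):
        # Record the gradient's name and where (block, op) it was created.
        param_to_grad_info[param] = (param + "@GRAD",
                                     {"block_idx": 0, "op_idx": op_idx})
    return param_to_grad_info


print(append_backward("cross_entropy"))
```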
- Committed by qijun
- Committed by Qiao Longfei
  * add a target to Backward; generate vars in the block when backward is called
  * modify backward_test
  * fix executor_test
  * set the var desc default type to LOD_TENSOR
  * update backward_test
  * insert loss at the top level of backward
  * create grad vars for all blocks in the current program
  * optimize code
  * update test_program.py
  * only create vars for newly created blocks during backward
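
To make the "create grad vars for all blocks" and "default type LOD_TENSOR" points concrete, here is a toy stand-in sketch; `Block`, `Var`, and `create_grad_vars` below are invented for illustration and are not the framework's desc classes.

```python
# Toy stand-ins for block/variable descriptions, for illustration only.

class Var(object):
    def __init__(self, name, var_type="LOD_TENSOR"):  # default var type
        self.name, self.var_type = name, var_type


class Block(object):
    def __init__(self, idx):
        self.idx, self.vars = idx, {}

    def create_var_if_missing(self, name):
        # Only newly introduced names get a fresh (LOD_TENSOR) variable.
        return self.vars.setdefault(name, Var(name))


def create_grad_vars(blocks, grad_names_per_block):
    """grad_names_per_block: {block_idx: [gradient var names created there]}."""
    for idx, names in grad_names_per_block.items():
        for name in names:
            blocks[idx].create_var_if_missing(name)


blocks = {0: Block(0), 1: Block(1)}
create_grad_vars(blocks, {0: ["fc.w@GRAD"], 1: ["rnn.h@GRAD"]})
print(sorted(blocks[0].vars), sorted(blocks[1].vars))
```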
- Committed by Dong Zhihong
- Committed by qijun
- Committed by Dong Zhihong
- Committed by Yu Yang
- Committed by Dong Zhihong

- 14 October 2017, 6 commits

- Committed by qijun
- Committed by fengjiayi
  * Add debug string for Python ProtoBuf and rename `Sync` to `Flush`
  * Add check of ProtoBuf initialization
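
The pattern described in this commit, a Python wrapper that holds a protobuf desc, prints a readable debug string, and verifies the proto when it is flushed, can be sketched as below. `DescWrapper` and `flush` are hypothetical names, and `FileDescriptorProto` is only a stand-in message so the snippet runs on its own.

```python
from google.protobuf import text_format
from google.protobuf.descriptor_pb2 import FileDescriptorProto  # stand-in message


class DescWrapper(object):
    """Hypothetical wrapper; the real ones are the Program/Block/Op wrappers."""

    def __init__(self, desc):
        self.desc = desc

    def flush(self):
        # Formerly "sync": push pending Python-side state into the proto,
        # then verify the proto is fully initialized before it is used.
        if not self.desc.IsInitialized():
            raise ValueError("proto not fully initialized:\n" + str(self.desc))

    def __str__(self):
        # Debug string: the human-readable text form of the underlying proto.
        return text_format.MessageToString(self.desc)


w = DescWrapper(FileDescriptorProto(name="demo.proto"))
w.flush()
print(w)
```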
- Committed by fengjiayi
  * Add grad_name_map
  * Fix bug
  * Fix bug
  * Follow comments
- Committed by Yu Yang
- Committed by Yu Yang
  * Update VarDesc from design doc
  * Fix GCC compile
  * Fix unittest
- Committed by Yu Yang

- 13 October 2017, 6 commits