- 17 Oct 2017, 16 commits

- Committed by tensor-tang
- Committed by tensor-tang
- Committed by Yu Yang
  * Make global scope not thread-safe:
    1. There is no need to make the global scope thread-safe, since it is only invoked from the Python main thread.
    2. Do not free the global scope at C++ exit. Let the OS reclaim the memory; otherwise we would have to handle destruction-order dependencies. See https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables
  * Revert "FIX: Release CPU/GPU memory via deleter" (reverts commit 8f80f5bc)
- Committed by Yu Yang
  * They are public now
- Committed by Qiao Longfei
- Committed by Qiao Longfei
  * Remove the C++ executor_test (to be rewritten in Python later)
  * Remove the executor_test code from CMakeLists.txt
- Committed by qijun
- Committed by Yu Yang
  * Change dataType to data_type to follow PEP8
  * Change name_convention to fit PEP8
- Committed by Yu Yang
  * AttributeChecker: better error log, and specialize bool, since lots of types can be cast to bool
  * Add a FIXME comment
- Committed by qijun
- Committed by qijun
- Committed by qijun
- Committed by qijun
- Committed by qijun
- Committed by qijun
- Committed by Yu Yang
  * Make Feed/Fetch ops plain operators, not OpWithKernel
  * Do not register an OpInfoMaker, since Feed/Fetch will never be configured by users
  * Feed/Fetch ops have empty gradients
  * Feed/Fetch ops do not hard-code `feed_variable`/`fetch_variable` as their input and output; they are plain operator inputs/outputs
- 16 Oct 2017, 10 commits

- Committed by tensor-tang
  * Add activation in unit tests
- Committed by Luo Tao
- Committed by 武毅
- Committed by 武毅
  * Fix gometalinter versioning
  * Stop gometalinter
- Committed by qijun
- Committed by qijun
- Committed by tensor-tang
- Committed by qijun
- Committed by fengjiayi
- Committed by fengjiayi
  * Expose Executor to Python
  * Follow review comments
- 15 Oct 2017, 13 commits

- Committed by qiaolongfei
- Committed by qiaolongfei
- Committed by qijun
- Committed by Yu Yang
  * Final step of backward: return a map from param_name to grad
  * Complete the final step of backward: return param_name to grad_info
- Committed by qijun
- Committed by qijun
- Committed by Qiao Longfei
  * Add a target to Backward; generate vars in the block when backward is called
  * Modify backward_test
  * Fix executor_test
  * Set the default var desc type to LOD_TENSOR
  * Update backward_test
  * Insert loss at the top level of backward
  * Create grad vars for all blocks in the current program
  * Optimize code
  * Update test_program.py
  * Only create vars for newly created blocks during backward
- Committed by Dong Zhihong
- Committed by qijun
- Committed by Dong Zhihong
- Committed by Yu Yang
- Committed by Dong Zhihong
- Committed by Dong Zhihong
- 14 Oct 2017, 1 commit

- Committed by qijun