- Oct 18, 2017 (5 commits)
-
-
Committed by QI JUN
-
Committed by Markus Kliegl
* Initial matmul operator. Similar to np.matmul, but it also has transpose_X and transpose_Y flags, and it only supports tensors of rank 1 to 3 inclusive. For GPU, it uses cublas?gemmStridedBatched. For CPU, it uses cblas_?gemm_batch if available via MKL; otherwise a simple serial implementation that loops over the batch dimension is used for now.
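The matmul semantics described in this commit can be sketched with NumPy (a hedged illustration: the `transpose_X`/`transpose_Y` flag names come from the commit message, but the helper function itself is hypothetical and not the operator's real implementation):

```python
import numpy as np

def matmul_with_flags(x, y, transpose_x=False, transpose_y=False):
    """Sketch of the described semantics: np.matmul, plus optional
    transposition of the last two dimensions of each operand."""
    if transpose_x and x.ndim >= 2:
        x = np.swapaxes(x, -1, -2)  # transpose only the last two dims
    if transpose_y and y.ndim >= 2:
        y = np.swapaxes(y, -1, -2)
    return np.matmul(x, y)

# Rank-3 inputs exercise the batched path: the leading dim is the batch.
a = np.random.rand(4, 2, 3)
b = np.random.rand(4, 2, 3)
out = matmul_with_flags(a, b, transpose_y=True)  # (4,2,3) x (4,3,2) -> (4,2,2)
print(out.shape)
```

On GPU the per-batch multiplications map naturally onto a single strided batched GEMM call, which is what cublas?gemmStridedBatched provides.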
-
Committed by Yu Yang
-
Committed by Qiao Longfei
-
Committed by Qiao Longfei
* Init parameter base class
* Optimize the comments of optimizer
* Basic implementation of optimizer
* Add test_optimizer
* Add no_grad_set to interface
* Update optimizer.py
* Python code can run
* Fix some problems
* Add sync_with_cpp to Python Program and Block
* Sync vars and ops in block from cpp
* Optimize code and add some comments
* Add more checks for sync
* Update optimizer with return value of Backward
* Remove unused code
* Infer shape when creating gradient variable
* Update test_optimizer
* Update test_program.py
* Update backward test
* Follow comments
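The optimizer flow these commits describe (a base class whose minimize step consumes the return value of Backward, honoring a `no_grad_set`) might be sketched like this. Everything except the `no_grad_set` name is a hypothetical stand-in, not Paddle's actual API:

```python
class Optimizer:
    """Hypothetical optimizer base class: backward yields
    (param_name, gradient) pairs; subclasses apply the update rule."""
    def minimize(self, loss, no_grad_set=None):
        params_and_grads = self.backward(loss, no_grad_set or set())
        return [self.apply_update(p, g) for p, g in params_and_grads]

    def backward(self, loss, no_grad_set):
        # Stand-in for the framework's Backward pass, which (per the
        # commit message) returns parameter -> gradient information.
        raise NotImplementedError

    def apply_update(self, param, grad):
        raise NotImplementedError

class SGD(Optimizer):
    def __init__(self, lr):
        self.lr = lr
    def backward(self, loss, no_grad_set):
        # Toy gradients: pretend every parameter's gradient equals the loss.
        return [(name, loss) for name in ("w", "b") if name not in no_grad_set]
    def apply_update(self, param, grad):
        return param, -self.lr * grad

updates = SGD(lr=0.1).minimize(loss=2.0, no_grad_set={"b"})
print(updates)  # "b" was excluded via no_grad_set
```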
-
- Oct 17, 2017 (16 commits)
-
-
Committed by tensor-tang
-
Committed by tensor-tang
-
Committed by Yu Yang
* Make global scope not thread-safe
  1. There is no need to make the global scope thread-safe, since it will only be invoked from the Python main thread.
  2. Do not free the global scope when C++ exits. Let the OS free the memory; otherwise we would need to handle the destruction dependencies. See https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables
* Revert "FIX: Release CPU/GPU memory via deleter". This reverts commit 8f80f5bc.
-
Committed by Yu Yang
They are public now
-
Committed by Qiao Longfei
-
Committed by Qiao Longfei
* Remove the C++ executor_test; it will be rewritten in Python later
* Remove the executor_test code from CMakeLists.txt
-
Committed by qijun
-
Committed by Yu Yang
* Change dataType to data_type to follow PEP8
* Change name_convention to fit PEP8
-
Committed by Yu Yang
* AttributeChecker: better error log, and specialize bool, since lots of types can be cast to bool
* Add FIXME comment
-
Committed by qijun
-
Committed by qijun
-
Committed by qijun
-
Committed by qijun
-
Committed by qijun
-
Committed by qijun
-
Committed by Yu Yang
* Feed/Fetch op is just a plain operator, not an OpWithKernel
* Do not register an OpInfoMaker, since Feed/Fetch will never be configured by users
* Feed/Fetch op has an empty gradient
* Feed/Fetch op does not hard-code `feed_variable`/`fetch_variable` as its input and output; make them plain Operator inputs/outputs
-
- Oct 16, 2017 (10 commits)
-
-
Committed by tensor-tang
and add activation in unit tests
-
Committed by Luo Tao
-
Committed by 武毅
-
Committed by 武毅
* Fix gometalinter versioning
* Stop gometalinter
-
Committed by qijun
-
Committed by qijun
-
Committed by tensor-tang
-
Committed by qijun
-
Committed by fengjiayi
-
Committed by fengjiayi
* Expose Executor to Python
* Follow comments
-
- Oct 15, 2017 (9 commits)
-
-
Committed by qiaolongfei
-
Committed by qiaolongfei
-
Committed by qijun
-
Committed by Yu Yang
* Final step of backward: return a map from param_name to grad
* Complete the final step of backward; return the param_name to grad_info map
-
Committed by qijun
-
Committed by qijun
-
Committed by Qiao Longfei
* Add target to Backward; generate vars in the block when backward is called
* Modify backward_test
* Fix executor_test
* Set var desc default type to LOD_TENSOR
* Update backward_test
* Insert loss at the top level of backward
* Create grad vars for all blocks in the current program
* Optimize code
* Update test_program.py
* Only create vars for newly created blocks during backward
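The contract these backward-related commits converge on (Backward returns a map from each parameter name to its gradient info, skipping anything in a no-grad set) can be illustrated with a toy sketch. The function, the "@GRAD" naming convention, and all names here are hypothetical stand-ins:

```python
def backward(target, params, no_grad_set=frozenset()):
    """Toy sketch of the described contract: given a target (loss) and
    parameter names, return a map from param_name to the name of its
    gradient variable, omitting parameters in no_grad_set."""
    return {p: p + "@GRAD" for p in params if p not in no_grad_set}

grad_map = backward("loss", ["fc.w", "fc.b"], no_grad_set={"fc.b"})
print(grad_map)  # {'fc.w': 'fc.w@GRAD'}
```

An optimizer can then look up each parameter's gradient by name in this map when building its update ops.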
-
Committed by Dong Zhihong
-
Committed by qijun
-