- 04 Mar, 2019 (1 commit)
By Yihua Xu
test=develop
-
- 27 Feb, 2019 (1 commit)
By xiaolil1
* Optimize key creation of the INT8 pool kernel to improve the performance of ResNet-50 and MobileNet, especially for latency. test=develop
* Optimize key creation of pool fp32 grad. test=develop
-
- 25 Feb, 2019 (7 commits)
By peizhilin
-
By peizhilin
test=develop
-
By chengduo
* Refile profiler. test=develop
* Follow comment. test=develop
-
By Zhen Wang
-
By Zhen Wang
-
By Jacek Czaja
* Implemented a draft of primitive desc keeping in Tensor. test=develop
  - Reimplemented TransposeMKLDNNHandler::AcquireSrcMemory.
  - Added nchw and nc format setting for the sake of compatibility; fixed unit tests.
  - Workaround for a problem with 5D data in conv.
  - Added 3D and 1D MKL-DNN formats for tensor name handles. test=develop
  - Fix to UTs. test=develop
  - Updated conv fp32 op; cosmetic fixes. test=develop
  - Tensor mkldnn cosmetics. test=develop
  - Moved most of the MKL-DNN specific code from Tensor to mkl-dnn utils.
* Lint fixes. test=develop
* Setting prim desc in Tensor also sets layout to kMKLDNN. test=develop
* Moved creation of prim desc totally out of Tensor. test=develop
* Cosmetic fixes after review. test=develop
-
By peizhilin
test=develop
-
- 24 Feb, 2019 (1 commit)
By Dun
-
- 22 Feb, 2019 (4 commits)
By Sylwester Fraczek
Reason: dereferencing a smart pointer yields the same object as dereferencing the underlying raw pointer. test=develop
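A minimal, self-contained illustration of the reasoning above (hypothetical names, not the actual PaddlePaddle change): dereferencing a smart pointer and dereferencing the raw pointer returned by `get()` refer to the same object, so extra `.get()` calls can simply be dropped.

```cpp
#include <cassert>
#include <memory>

struct OpDesc {
  int id = 42;
};

int main() {
  std::unique_ptr<OpDesc> op(new OpDesc);
  // The dereferenced smart pointer and the raw pointer from get()
  // point at the exact same object.
  assert(&(*op) == op.get());
  // So op->id is equivalent to (and simpler than) op.get()->id.
  return (op->id == op.get()->id) ? 0 : 1;
}
```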
-
By tensor-tang
* Revert "Optimze Gelu with MKL Erf function (#15770)". This reverts commit 676995c8.
* test=develop
-
By chengduo
test=develop
-
By Yihua Xu
* Optimize the gelu operator.
* Set up the low-accuracy mode of the MKL ERF function. test=develop
* Only enable MKLML ERF when the OS is Linux.
* Use the special mklml version that includes the vmsErf function to verify the gelu MKL kernel. test=develop
* Add the CUDA macro to avoid NVCC's compile issue. test=develop
* Add TODO comments for the mklml library modification. test=develop
* Clean code. test=develop
* Add the comment of the macro for the NVCC compiler. test=develop
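For context, a hedged scalar sketch of the erf-based GELU such a kernel computes (standard formula; not Paddle's implementation). The commit's point is to replace a per-element loop like the one below with one batched MKL VML erf call (vmsErf, in low-accuracy mode) over the whole buffer.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))  -- scalar reference version.
void gelu_reference(const std::vector<float>& x, std::vector<float>* y) {
  const float kRSqrt2 = 0.70710678f;  // 1 / sqrt(2)
  y->resize(x.size());
  for (std::size_t i = 0; i < x.size(); ++i) {
    (*y)[i] = 0.5f * x[i] * (1.0f + std::erf(x[i] * kRSqrt2));
  }
}
```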
-
- 21 Feb, 2019 (6 commits)
By Tao Luo
test=develop
-
By Dun Liang
-
By Dun Liang
-
By Dun Liang
-
By Xin Pan
test=develop
-
By Dun
* Refine profiler and add a runtime tracer. test=develop
* Fix bug. test=develop
* Add thread id map. test=develop
* Remove cuda event and refine code. test=develop
* Fix windows temp file. test=develop
* Fix windows bug. test=develop
* Fix start-up issue. test=develop
* Code polish. test=develop
* Remove unused code. test=develop
* Add some cupti cbid. test=develop
* Add FLAGS_multiple_of_cupti_buffer_size. test=develop
* Fix compile error. test=develop
* Add keyword. test=develop
-
- 20 Feb, 2019 (2 commits)
By mozga-intel
* Enable the momentum operator for the nGraph engine. test=develop
* Update tests. test=develop
* Removed an unnecessary line of code, as intended. test=develop
-
By Tao Luo
-
- 19 Feb, 2019 (3 commits)
By tensor-tang
* Fix warnings. test=develop
* Fix enforce test. test=develop
-
By sneaxiy
test=develop
-
By sneaxiy
test=develop
-
- 14 Feb, 2019 (1 commit)
By sneaxiy
test=develop
-
- 11 Feb, 2019 (2 commits)
By dzhwinter
-
By mozga-intel
test=develop
-
- 03 Feb, 2019 (1 commit)
By peizhilin
test=develop
-
- 02 Feb, 2019 (2 commits)
- 31 Jan, 2019 (2 commits)
By liuwei1031
* Expose the peak GPU memory API to Python. test=develop
* Add a unittest for peak GPU memory monitoring. test=develop
* Add pybind change. test=develop
* Add a mutex to the GPU memory usage monitor. test=develop
* Update the benchmark flag definition file. test=develop
* Tweak the unittest for memory monitoring. test=develop
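A minimal sketch of the idea behind the mutex change (hypothetical names; not the Paddle implementation): the peak-usage counter is read-modify-written from multiple threads, so its update has to be guarded.

```cpp
#include <cstddef>
#include <mutex>

// Hypothetical peak-usage monitor: Update() may be called concurrently
// from allocation paths, so the read-modify-write of peak_bytes_ is
// protected by a mutex.
class GpuMemUsageMonitor {
 public:
  void Update(std::size_t current_bytes) {
    std::lock_guard<std::mutex> guard(mutex_);
    if (current_bytes > peak_bytes_) peak_bytes_ = current_bytes;
  }
  std::size_t Peak() const {
    std::lock_guard<std::mutex> guard(mutex_);
    return peak_bytes_;
  }

 private:
  mutable std::mutex mutex_;
  std::size_t peak_bytes_ = 0;
};
```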
-
By guoshengCS
test=develop
-
- 28 Jan, 2019 (2 commits)
By tensor-tang
test=develop
-
By sneaxiy
test=develop
-
- 24 Jan, 2019 (2 commits)
By Yiqun Liu
* Refine the beam_search op and test.
* A basic CUDA implementation of beam_search for small batch_size.
* Implement the CUDA kernel for beam_search_op.
* Use multiple CUDA threads in the same block to select the top beam.
* Update the Python API of the beam_search op.
* Enable the extend function in the CPU kernel of the beam_search op.
* Unify the CUDA codes. test=develop
* Unify the CPU kernel of the beam_search op.
* Ensure the selected items of beam_search_op's CPU kernel are sorted by scores.
* Update the description of beam_search in API.spec.
* Enable the use of the CUDA kernel in beam_search op.
* Exclude the beam_search's CUDA unittest when there is no CUDA GPU, and delete some debugging statements. test=develop
* Follow comments. test=develop
* Call the CPU kernel for beam_search op when batch_size > 4. test=develop
* Remove the except of is_empty op in PrepareData. test=develop
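For context, a minimal CPU sketch of the per-step selection a beam_search kernel performs (standalone, hypothetical code; not Paddle's implementation): keep the beam_size highest-scoring candidates, returned in descending score order, matching the "selected items sorted by scores" point above.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Keep the beam_size highest-scoring (score, id) candidates, sorted by
// descending score, like one selection step of beam search.
std::vector<std::pair<float, int>> TopBeam(
    std::vector<std::pair<float, int>> candidates, std::size_t beam_size) {
  const std::size_t k = std::min(beam_size, candidates.size());
  std::partial_sort(
      candidates.begin(), candidates.begin() + k, candidates.end(),
      [](const std::pair<float, int>& a, const std::pair<float, int>& b) {
        return a.first > b.first;
      });
  candidates.resize(k);
  return candidates;
}
```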
-
By sneaxiy
test=develop
-
- 23 Jan, 2019 (1 commit)
By tangwei12
checkpoint for distributed training.
-
- 16 Jan, 2019 (1 commit)
By minqiyang
-
- 14 Jan, 2019 (1 commit)
By peizhilin
test=develop
-