- 05 March 2019: 3 commits

- 04 March 2019: 13 commits
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by dengkaipeng
- Committed by baojun

- 01 March 2019: 2 commits

- 28 February 2019: 2 commits

- 27 February 2019: 9 commits
- Committed by Yiqun Liu
  test=develop
- Committed by xiaolil1
  test=develop
- Committed by luotao1
  test=develop
- Committed by baojun
- Committed by mozga-intel
  test=develop
- Committed by dzhwinter
  * staged.
  * polish code
  * polish code. test=develop
  * polish code. test=develop
  * api change. test=develop
  * fix default value. test=develop
  * fix default value. test=develop
- Committed by Yiqun Liu
  * Rewrite is_empty op to avoid unnecessary data transform. test=develop
  * Add the implementation of InferShape and InferVarType for is_empty op. test=develop
  * Rewrite is_empty op to avoid directly inheriting OperatorBase. test=develop
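The reasoning behind the is_empty rewrite in the commit above can be illustrated with a minimal sketch (hypothetical helper, not Paddle's actual code): whether a tensor is empty follows from its shape alone, so the check only needs shape metadata and never has to move the tensor's data between devices.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: a tensor is empty iff the product of its
// dimensions is zero. Only shape metadata is inspected, so no data
// transform or device-to-host copy is required.
bool IsEmptyTensor(const std::vector<int64_t>& dims) {
  int64_t numel = 1;
  for (int64_t d : dims) numel *= d;
  return numel == 0;
}
```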
- Committed by xiaolil1
  * Optimize key creation of the INT8 pool kernel to improve the performance of ResNet-50 and MobileNet, especially for latency. test=develop
  * Optimize key creation of pool fp32 grad. test=develop
- Committed by baojun-nervana

- 26 February 2019: 11 commits
- Committed by tensor-tang
  test=develop
- Committed by tensor-tang
  test=develop
- Committed by tensor-tang
  test=develop
- Committed by Jacek Czaja
  - MKLDNN ops revisited
  - disabled softmax modifications
  - disabled elementwise_add
  - reverted LRN modifications
  - reverted SUM primitive
  - Partial revising of softmax
  - Enable softmax
  - Softmax changes
  - LRN is back
  - LRN partially disabled
  - LRN is back
  - LRN fix
  - compilation fixes
  - Sum fixed (hopefully)
  - Enabling (partially) elementwise_add
  - Fixes to elementwise_add
  - Lint fixes, quantize fix
  - compilation fix. test=develop
  - Disabling pooling
  - Disabled quantize op. test=develop
- Committed by qingqing01
- Committed by qingqing01
  * Loosely check in cross_entropy_op when soft_label is True
  * Add runtime assertion in backward infer_shape check.
  * Skip InferShape check when the input dimensions are unknown
- Committed by Yihua Xu
  test=develop
- Committed by xiaoli.liu@intel.com
  test=develop
- Committed by Yiqun Liu
  Optimize the CUDA implementation of sequence_expand op by reducing the number of times lod data is copied from CPU to GPU. (#15493)
  * Optimize the CUDA implementation of sequence_expand op by reducing the number of times lod data is copied from CPU to GPU. test=develop
  * Refine the op benchmark to support setting lod in config. test=develop
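A generic sketch of the copy-reduction pattern this commit describes, assuming hypothetical names rather than the actual sequence_expand kernel: copy the LoD offsets to the device once and reuse the device buffer across kernel launches, instead of issuing a host-to-device copy before every launch.

```cpp
#include <cuda_runtime.h>
#include <cstddef>
#include <vector>

// Hypothetical cache of LoD offsets on the GPU: the host-to-device
// copy happens once in the constructor, and later kernel launches
// reuse data() instead of re-copying the offsets each time.
class DeviceLoD {
 public:
  explicit DeviceLoD(const std::vector<size_t>& lod)
      : bytes_(lod.size() * sizeof(size_t)) {
    cudaMalloc(reinterpret_cast<void**>(&data_), bytes_);
    cudaMemcpy(data_, lod.data(), bytes_, cudaMemcpyHostToDevice);
  }
  ~DeviceLoD() { cudaFree(data_); }
  const size_t* data() const { return data_; }

 private:
  size_t* data_ = nullptr;
  size_t bytes_ = 0;
};
```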
- Committed by guomingz
  * This PR improves the performance of the prior_box op by about 1.25x on CPU.
  * Test env: SKX 8180 with fake data on 28 threads (bs=1).
  * The table below shows the ~25% improvement, generated by [eval_tp_fake_data.py](https://github.com/PaddlePaddle/Paddle/issues/15618#issuecomment-464613976).

    | Type             | Event              | Calls | Total   | Min.     | Max.     | Ave.         | Ratio.   |
    | ---------------- | ------------------ | ----- | ------- | -------- | -------- | ------------ | -------- |
    | w/ optimization  | thread0::prior_box | 6000  | 921.201 | 0.110572 | 0.383402 | **0.153533** | 0.084585 |
    | w/o optimization | thread0::prior_box | 6000  | 1151.85 | 0.102276 | 0.426702 | **0.191976** | 0.103337 |

    test=develop
  * Fix the style issue. test=develop
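As a quick check on the table above: the average per-call time drops from 0.191976 to 0.153533, and 0.191976 / 0.153533 ≈ 1.25, consistent with the stated ~1.25x speedup (about 20% less time per call).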
- Committed by chengduo
  * add alloc_continuous_space_op. test=develop
  * Polish code. test=develop
  * follow comment. test=develop