- 13 Jan 2021, 2 commits

Submitted by alncat
* Added support for inference using quantization-aware trained dygraph
* Corrected boost get usage
* Delete incorrect warning message (#30196)
* Fix warning and no_grad
* Clean redundant API alias in 2.0 - part 2 (#30013): delete paddle.nn.functional.assign
* Fix dynamic-to-static error
* Add the op error message for the matmul xpu (#30246)
* Add Static Variable Clone (#30208): add a clone method for static Variable so that this interface matches dygraph; it also fixed some bugs in dy2stat (see the sketch after this list)
* Use wget instead of curl to download the lcov file (#30229); add a cache for lcov
* Fix test_pool3d_op timeout issue (#30248)
* Fix unittest bugs (#30250)
* Modify error messages based on comments (#30189); edit code and correct spelling according to review
* Fix bug for 'save multiple method' (#30218); add a unittest and further edits to pass coverage
* Alias from paddle.fluid.layers.auc to paddle.static.auc (#30206); update __init__.py
* Corrected naming issues and enforced zero checks
* Corrected paddle enforce messages and added more error checking
* Corrected error report messages and optimized code
* Corrected FindVar usage
* Corrected PADDLE_ENFORCE in scope
* Corrected error messages and error reporting format
Co-authored-by: LielinJiang <50691816+LielinJiang@users.noreply.github.com>
Co-authored-by: XiaoguangHu <46782768+XiaoguangHu01@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: Huihuang Zheng <zhhsplendid@gmail.com>
Co-authored-by: YUNSHEN XIE <1084314248@qq.com>
Co-authored-by: Bai Yifan <me@ethanbai.com>
Co-authored-by: gongweibao <weibao.gong@gmail.com>
Co-authored-by: WeiXin <weixin10@baidu.com>
Co-authored-by: Jiaqi Liu <liujiaqi06@baidu.com>
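
A minimal sketch of the static `Variable.clone` interface added in #30208, assuming Paddle 2.0-style static APIs; the toy graph (a relu over a data placeholder) is made up for illustration and is not taken from the commit.

```python
import paddle

# Static-graph mode is required for paddle.static Variables.
paddle.enable_static()

main_prog = paddle.static.Program()
startup_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name="x", shape=[None, 8], dtype="float32")
    y = paddle.nn.functional.relu(x)
    # After #30208 a static Variable exposes clone(), mirroring the dygraph
    # Tensor interface: the result is a new Variable with the same shape
    # and dtype as y.
    y_copy = y.clone()
    print(y_copy.shape, y_copy.dtype)
```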

Submitted by Zhang Jun
* Fix bug on compiling the inference shared lib with crypto; test=develop
* Fix cmake bug when building the inference lib with -DWITH_CRYPTO=OFF
* Update cmake
* Remove unnecessary enforce message

- 12 Jan 2021, 2 commits
- 11 Jan 2021, 3 commits
- 08 Jan 2021, 3 commits

Submitted by Wilber

Submitted by Wilber

Submitted by joanna.wozna.intel
* Add a necessary condition
* Remove test for white list and add header

- 07 Jan 2021, 1 commit

Submitted by weihaoji
[XPU] Remove the lite_xpu unit test lite_resnet50_test since fusion pass changes introduced a precision diff. test=develop (#30122)

- 06 Jan 2021, 1 commit

Submitted by Shang Zhizhou
* snap
* Add inference API: DisableTensorRtOPs (see the sketch after this list)
* Fix code style
* Update the API to experimental
* Update variable name
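
A hedged sketch of how the disable-TensorRT-ops switch above might be used from the Python inference API. The model file names are placeholders, and the Python spelling `exp_disable_tensorrt_ops` is an assumption for illustration; the commit itself adds the C++-side experimental DisableTensorRtOPs API.

```python
import paddle.inference as paddle_infer

# Placeholder model files; substitute real paths.
config = paddle_infer.Config("model.pdmodel", "model.pdiparams")
config.enable_use_gpu(1000, 0)  # 1000 MB initial GPU memory pool, device 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=paddle_infer.PrecisionType.Float32,
    use_static=False,
    use_calib_mode=False,
)
# Assumed Python counterpart of the experimental DisableTensorRtOPs API:
# keep the listed ops out of TensorRT subgraphs so they run on native
# GPU kernels instead.
config.exp_disable_tensorrt_ops(["concat"])

predictor = paddle_infer.create_predictor(config)
```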

- 04 Jan 2021, 3 commits
- 30 Dec 2020, 2 commits
- 29 Dec 2020, 2 commits
- 28 Dec 2020, 2 commits
- 25 Dec 2020, 1 commit

Submitted by YUNSHEN XIE

- 24 Dec 2020, 3 commits

Submitted by tangwei12
* oneps (3/4)
Co-authored-by: MrChengmo <cmchengmo@163.com>
Co-authored-by: malin10 <malin10@baidu.com>
Co-authored-by: chengmo <chengmo@baidu.com>

Submitted by jakpiase

Submitted by Wilber

- 23 Dec 2020, 1 commit

Submitted by Wilber

- 21 Dec 2020, 1 commit

Submitted by Zhang Jun

- 17 Dec 2020, 1 commit

Submitted by Wilber
* enable_use_gpu has higher priority than FLAGS (see the sketch after this list)
* Update
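
To make the priority note above concrete, here is a small sketch under assumptions: the model paths are placeholders, and the point is only that an explicit `enable_use_gpu` call on the inference config now decides the device, regardless of any device-related gflags (FLAGS_*) the process inherited.

```python
import paddle.inference as paddle_infer

# Placeholder model files for illustration.
config = paddle_infer.Config("model.pdmodel", "model.pdiparams")

# Even if device-related FLAGS_* settings were picked up by gflags at
# startup, the explicit call below takes precedence after this change.
config.enable_use_gpu(100, 0)  # 100 MB initial GPU memory pool, device 0
print(config.use_gpu())        # reflects the explicit call, not the flags
```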

- 11 Dec 2020, 2 commits

Submitted by Wilber

Submitted by Jacek Czaja
* Added infrastructure for the new test
* Added UT for multiple-models prediction
* Cosmetic fixes, lint fixes
* Removed timeout for the MMP test

- 08 Dec 2020, 1 commit

Submitted by Pei Yang
* Change hard_swish from plugin to layer (see the reference sketch after this list)
* Add a unit test for the case threshold != scale
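
For context on the hard_swish converter above, this is a small reference sketch of the activation as Paddle's operator defines it, out = x * min(max(0, x + offset), threshold) / scale with defaults threshold = scale = 6 and offset = 3; the non-default values below only mimic the threshold != scale case the new unit test covers and are not taken from the commit.

```python
import numpy as np

def hard_swish(x, threshold=6.0, scale=6.0, offset=3.0):
    # Reference formula of Paddle's hard_swish op:
    #   out = x * min(max(0, x + offset), threshold) / scale
    return x * np.clip(x + offset, 0.0, threshold) / scale

x = np.linspace(-5, 5, 5).astype("float32")
print(hard_swish(x))                            # default: threshold == scale
print(hard_swish(x, threshold=5.0, scale=6.0))  # the threshold != scale case
```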

- 07 Dec 2020, 2 commits

Submitted by Shang Zhizhou
* Fix TensorRT unittest precision error
* Fix unittest precision error in test_trt_subgraph_pass and test_trt_dynamic_shape_transformer_prune

Submitted by Pei Yang

- 03 Dec 2020, 1 commit

Submitted by Shang Zhizhou
* Fix TensorRT output shape error
* Fix unittest tensorrt_engine_op_test
* Fix code style for the unittest

- 02 Dec 2020, 2 commits

Submitted by Wilber

Submitted by Shang Zhizhou

- 01 Dec 2020, 1 commit

Submitted by Wilber

- 30 Nov 2020, 2 commits

Submitted by joanna.wozna.intel

Submitted by Wilber

- 27 Nov 2020, 1 commit

Submitted by Shang Zhizhou
* Remove -DSUPPORTS_CUDA_FP16 in cuda.cmake
* Compile with CUDA 9
* Add some unittests
* notest;test=coverage
* Add unittests for the TensorRT plugins swish and split
* Update the ernie unittest
* Fix some error messages
* Remove the repeated judgement of the CUDA version in mbEltwiseLayerNormOpConverter
* Fix compile error when CUDA_ARCH_NAME < Pascal
* Update unittest timeout
* Update error messages
* Fix code style
* Add some comments
* Add define IF_CUDA_ARCH_SUPPORT_FP16
* Rename IF_CUDA_ARCH_SUPPORT_FP16 to CUDA_ARCH_FP16_SUPPORTED