- 21 Sep 2020, 1 commit

Submitted by huangxu96
* Finished ChannelWiseQuantDequantAbsMaxOp and passed unit tests.
* Finished the channel-wise quantization strategy in imperative quantization.
* Added CUDA code for ChannelWiseQuantDequantMaxAbsOp.
* Added quant_axis for channel-wise quantization.
* Fixed a bug in the unit tests that did not trigger the axis = 1 case and therefore could not meet the coverage requirement.
* Added some assert information and fixed some coding style mistakes.
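For context on the op named above, here is a minimal NumPy sketch of the channel-wise abs-max quantize-dequantize idea behind ChannelWiseQuantDequantAbsMaxOp and the quant_axis attribute. The function name, the int8 range, and the shapes are illustrative assumptions, not Paddle's actual kernel.

```python
import numpy as np

def channel_wise_quant_dequant_abs_max(x, quant_axis=0, bit_length=8):
    """Fake-quantize x per channel along quant_axis, then dequantize back (illustrative sketch)."""
    qmax = (1 << (bit_length - 1)) - 1                        # 127 for 8 bits
    reduce_axes = tuple(i for i in range(x.ndim) if i != quant_axis)
    scales = np.abs(x).max(axis=reduce_axes, keepdims=True)   # one abs-max scale per channel
    scales = np.where(scales == 0.0, 1.0, scales)             # guard against all-zero channels
    q = np.clip(np.round(x / scales * qmax), -qmax, qmax)     # snap to the integer grid
    return q * scales / qmax, np.squeeze(scales)              # back to float + per-channel scales

# Example: a conv weight of shape [out_channels, in_channels, kh, kw].
# quant_axis=0 gives one scale per output channel; quant_axis=1 (the case the
# fixed unit test exercises) gives one scale per input channel.
w = np.random.randn(4, 3, 3, 3).astype("float32")
w_qdq, per_channel_scales = channel_wise_quant_dequant_abs_max(w, quant_axis=0)
```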
-
- 18 Sep 2020, 1 commit

Submitted by Zhen Wang
-
- 07 Sep 2020, 1 commit

Submitted by Sylwester Fraczek
* Fix a dimensions error for mobilenetv1_KL_quant; fixes "AssertionError: The size of weight scales vector (1000) does not match the number of output channels (1024) in the weights tensor fc7_weights". Add a mul test.
* Remove a comment.
* Add a third-case unit test.
-
- 02 Sep 2020, 1 commit

Submitted by YUNSHEN XIE
-
- 27 Aug 2020, 1 commit

Submitted by chalsliu
* Add the option to execute unit tests only at night.
* Set the UT nightly label for 3 cases.
-
- 25 Aug 2020, 1 commit

Submitted by YUNSHEN XIE
-
- 24 Aug 2020, 2 commits

Submitted by cc
* Add an MNIST test for post-training quantization, test=develop
-
Submitted by YUNSHEN XIE
* Find unit tests that time out.
* Set the timeout value.
* Fix some errors.
* Fix a missing newline at the end of a file.
-
- 19 Aug 2020, 1 commit

Submitted by cc
* Conv2d_transpose and mul support channel-wise quantization, test=develop
* Skip collecting the output threshold for output tensors whose type is not fp32 or fp64, test=develop
* Fix an error in test_user_defined_quantization, test=develop
* Add depthwise_conv_bn_fuse, test=develop
* Add conv_transpose_bn_fuse_pass for post_training_quant, test=develop
-
- 31 Jul 2020, 2 commits

Submitted by yukavio
-
Submitted by Bai Yifan
* Remove slim from the Paddle framework, test=develop
Co-authored-by: wanghaoshuang <wanghaoshuang@baidu.com>
-
- 28 Jul 2020, 2 commits
- 27 Jul 2020, 1 commit

Submitted by Wojciech Uss
test=develop
-
- 13 Jul 2020, 2 commits

Submitted by YUNSHEN XIE
-
Submitted by Zhen Wang
-
- 11 Jul 2020, 1 commit

Submitted by Zhen Wang
* Add imperative quantization-aware training.
* This is the Python part of imperative QAT. test=develop
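As context for the entry above, here is a minimal sketch of the fake quantize-dequantize transform that quantization-aware training applies to weights and activations in the forward pass, so the float model is trained with int8 rounding error already present. It uses plain NumPy and a per-tensor abs-max scale; the function name and details are illustrative assumptions, not the imperative QAT API.

```python
import numpy as np

def fake_quant_dequant(x, bit_length=8):
    """Quantize x to an integer grid and map it straight back to float (illustrative sketch)."""
    qmax = (1 << (bit_length - 1)) - 1            # 127 for 8 bits
    scale = max(float(np.abs(x).max()), 1e-8)     # per-tensor abs-max scale
    q = np.clip(np.round(x / scale * qmax), -qmax, qmax)
    return q * scale / qmax                       # float values restricted to the int8 grid

# During QAT the layer computes with fake_quant_dequant(w) instead of w; the
# backward pass typically treats the rounding as identity (straight-through estimator).
w = np.random.randn(64, 128).astype("float32")
w_sim = fake_quant_dequant(w)
```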
-
- 06 Jul 2020, 1 commit

Submitted by Wojciech Uss
-
- 30 Jun 2020, 1 commit

Submitted by Wojciech Uss
test=develop
-
- 29 Jun 2020, 1 commit

Submitted by Wojciech Uss
test=develop
-
- 19 Jun 2020, 1 commit

Submitted by iducn
-
- 04 Jun 2020, 1 commit

Submitted by Liufang Sang
* Add a user-defined function, test=develop
* Update, test=develop
* Fix name conflicts, test=develop
* Add a unit test, test=develop
* Change 2018 to 2020, test=develop
* Add a comment, test=develop
* Add a comment for the function, test=develop
* Fix details, test=develop
-
- 02 Jun 2020, 2 commits

Submitted by cc
* Post_training_quantization supports optimizing the model by fusing, test=develop
-
Submitted by Wojciech Uss
-
- 14 May 2020, 1 commit

Submitted by lidanqing
Update the DNNL QAT document for 2.0-alpha
-
- 13 May 2020, 1 commit

Submitted by cc
[Fix bug] Init the scale node in OutScaleForTrainingPass and enable the test_quantization_scale_pass UT (#24393)
* Init the scale node in OutScaleForTrainingPass, test=develop
* Enable test_quantization_scale, test=develop
-
- 12 May 2020, 1 commit

Submitted by joanna.wozna.intel
-
- 08 May 2020, 1 commit

Submitted by Wojciech Uss
* Enabled quantize all and skip missing in QAT.
-
- 28 Apr 2020, 1 commit

Submitted by lidanqing
* Update the local data preprocessing doc.
* Update for 2.0 QAT, test=develop test=document_fix
* Update benchmark data, test=develop test=document_fix
Co-authored-by: Wojciech Uss <wojciech.uss@intel.com>
-
- 23 Apr 2020, 1 commit

Submitted by Wojciech Uss
* QAT: support range-based quantization and scales from attributes.
* Added support for channel-wise quantization.
-
- 17 Apr 2020, 1 commit

Submitted by cc
* Weight quantization supports the channel_wise_abs_max method to achieve higher accuracy.
-
- 11 Apr 2020, 1 commit

Submitted by Wojciech Uss
* Updated the QAT INT8 MKL-DNN document: added info on VNNI on Windows, added and updated benchmark results.
-
- 10 Apr 2020, 1 commit

Submitted by Wojciech Uss
-
- 07 Apr 2020, 2 commits
- 03 Apr 2020, 3 commits
- 02 Apr 2020, 1 commit

Submitted by Wojciech Uss
-
- 28 Mar 2020, 1 commit

Submitted by lidanqing
-