- 19 Aug 2020, 1 commit

  Committed by cc

  * Conv2d_transpose and mul support channelwise quantization, test=develop
  * Skip collecting the output threshold for output tensors whose type is not fp32 or fp64, test=develop
  * Fix error in test_user_defined_quantization, test=develop
  * Add depthwise_conv_bn_fuse, test=develop
  * Add conv_transpose_bn_fuse_pass for post_training_quant, test=develop
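Channelwise quantization, mentioned in the commit above, computes one abs-max scale per output channel instead of a single scale for the whole weight tensor. A minimal NumPy sketch of the idea (the `quant_axis` convention and helper names are illustrative, not Paddle's exact kernel):

```python
import numpy as np

def channelwise_abs_max_scales(weight, quant_axis=0):
    """One abs-max quantization scale per channel along quant_axis.

    quant_axis is typically the output-channel axis of the weight
    (a convention assumed here for illustration).
    """
    # Move the channel axis to the front, then reduce over the rest.
    moved = np.moveaxis(weight, quant_axis, 0)
    return np.abs(moved).reshape(moved.shape[0], -1).max(axis=1)

def channelwise_quantize(weight, scales, bits=8, quant_axis=0):
    """Quantize each channel with its own scale to signed integers."""
    bnt = (1 << (bits - 1)) - 1  # 127 for 8 bits
    shape = [1] * weight.ndim
    shape[quant_axis] = -1
    s = scales.reshape(shape)    # broadcast scales along quant_axis
    return np.round(weight / s * bnt).clip(-bnt, bnt).astype(np.int8)
```

Per-channel scales matter most for depthwise and transposed convolutions, where channel magnitudes can differ widely and a single tensor-wide scale wastes precision on the small channels.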
- 29 Jul 2020, 1 commit

  Committed by cc

  * Remove the output for moving_average_abs_max_scale op, test=develop
- 09 Jul 2020, 1 commit

  Committed by Zhen Wang
- 19 Mar 2020, 1 commit

  Committed by Liufang Sang

  * Fix division by zero, test=develop
  * Fix division by zero, test=develop
  * Add hostdevice function, test=develop
  * Add eps when the divisor is zero, test=develop
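The "add eps when the divisor is zero" fix guards the scale reciprocal against a zero divisor, which can occur when a tensor is all zeros and its abs-max scale is therefore zero. A minimal sketch of the idea (the eps value and helper name are assumptions, not the actual patch):

```python
import numpy as np

def safe_inverse(scale, eps=1e-6):
    """Return 1/scale, substituting eps where scale is zero so the
    quantization kernel never divides by zero."""
    scale = np.where(scale == 0.0, eps, scale)  # replace exact zeros
    return 1.0 / scale
```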
- 01 Nov 2019, 1 commit

  Committed by Leo Chen

  * Don't expose numerous Tensor.set(), test=develop
  * Fix condition, test=develop
  * Fix float16 bug, test=develop
  * feed should be Tensor or np.array, not Variable or number, test=develop
  * Use forcecast to copy a numpy slice to a new array, test=develop
  * Remove float16-uint16 hacking, test=develop
- 21 May 2019, 1 commit

  Committed by Zhaolong Xing

  * Add quant_dequant_moving_avg_max_abs op, test=develop
  * Add more notes for the quant_dequant op, test=develop
- 07 May 2019, 1 commit

  Committed by Zhen Wang

  * Add MovingAverageAbsMaxScale operator, which is only used for calculating the quantization scale.
  * test=develop
  * Change the output into inplace. test=develop
  * Revert "test=develop". This reverts commit 696cf62699ba1e1c98f61f7345ac7060010eb29a.
  * Revert "change the output into inplace. test=develop". This reverts commit a19acd20f07eee82622701a3015e6e9c073a5e0b.
  * test=develop.
  * Update the MovingAverageAbsMaxScaleOp test. test=develop
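A moving-average abs-max scale tracks a smoothed maximum over batches instead of the raw per-batch abs-max, so one outlier batch does not blow up the quantization range. A sketch of one update step, assuming the common state/accum exponential-moving-average form (illustrative, not the exact MovingAverageAbsMaxScaleOp kernel):

```python
import numpy as np

def moving_average_abs_max_scale(x, state, accum, rate=0.9):
    """One update of a moving-average abs-max scale.

    state and accum are running statistics carried across steps:
      state = rate * state + 1
      accum = rate * accum + max(abs(x))
      scale = accum / state
    Returns (scale, new_state, new_accum).
    """
    abs_max = np.abs(x).max()
    state = rate * state + 1.0   # effective (decayed) sample count
    accum = rate * accum + abs_max
    return accum / state, state, accum
```

Dividing the decayed sum `accum` by the decayed count `state` gives a bias-corrected average even in the first few steps, when a plain EMA would be dragged toward its initial value.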
- 19 Mar 2019, 1 commit

  Committed by Zhen Wang
- 15 Mar 2019, 1 commit

  Committed by 视言

  * Add moving average absmax op for quantization-aware training.
- 04 Mar 2019, 1 commit

  Committed by Zhen Wang
- 26 Feb 2019, 1 commit

  Committed by qingqing01
- 03 Sep 2018, 1 commit

  Committed by qingqing01

  * Improve and fix fake_quantize_op.
- 30 Aug 2018, 1 commit

  Committed by Dang Qingqing
- 15 Aug 2018, 1 commit

  Committed by minqiyang
- 26 Jul 2018, 2 commits
- 11 Jul 2018, 1 commit

  Committed by 视言

  * Add a fake_quantize_op, which quantizes an input tensor to a tensor with lower bits.