- 20 Feb 2022, 1 commit

Committed by Chen Weihang
* rename pten dir to phi
* rename namespace to phi
* rename infrt pten dir to phi
* resolve conflict
* rename pten to phi in cmake
* revert all infrt changes
* change needed files
* fix infrt failure
* fix inference failure
- 18 Feb 2022, 1 commit

Committed by Feiyu Chan
* move blas related files
* move lapack related files
- 21 Jan 2022, 1 commit

Committed by Aurelius84
* Migrate Dim and DDim from paddle::framework into pten namespace
* fix paddle::framework::Array
* fix framework::Array
- 27 Oct 2021, 1 commit

Committed by whs
- 26 Mar 2021, 1 commit

Committed by cc
* Use layer to calculate output scale
* add backward for moving_average_abs_max_scale and save output scales to op's attr
- 12 Oct 2020, 1 commit

Committed by cc
* Add test attribute in channelwise_quant op, test=develop
- 21 Sep 2020, 1 commit

Committed by huangxu96
* Finished ChannelWiseQuantDequantAbsMaxOp and passed unittests.
* Finished channel-wise quantize strategy in imperative quantization.
* Added CUDA code of ChannelWiseQuantDequantMaxAbsOp.
* Add quant_axis for channel_wise quant.
* Fixed a bug in unittests, which would not trigger the axis = 1 case and could not meet the coverage rate requirement.
* Added some assert information and fixed some coding style mistakes.
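The channel-wise strategy above computes one abs-max scale per slice along `quant_axis` instead of a single tensor-wide scale. Below is a minimal NumPy sketch of that idea; the function name, arguments, and epsilon guard are illustrative assumptions rather than the operator's actual interface:

```python
import numpy as np

def channel_wise_fake_quant_dequant(x, quant_axis=0, bit_length=8):
    """Fake quantize-dequantize x with one abs-max scale per channel along quant_axis."""
    qmax = (1 << (bit_length - 1)) - 1                        # 127 for 8 bits
    reduce_axes = tuple(i for i in range(x.ndim) if i != quant_axis)
    scales = np.abs(x).max(axis=reduce_axes, keepdims=True)   # one scale per channel
    scales = np.maximum(scales, 1e-8)                         # guard against all-zero channels
    q = np.clip(np.round(x / scales * qmax), -qmax, qmax)     # quantize to an integer grid
    return q * scales / qmax                                  # dequantize back to float

# e.g. a conv weight tensor, quantized per output channel (quant_axis=0)
w = np.random.randn(16, 3, 3, 3).astype("float32")
w_qdq = channel_wise_fake_quant_dequant(w, quant_axis=0)
```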
- 19 Aug 2020, 1 commit

Committed by cc
* Conv2d_transpose and mul support channel-wise quantization, test=develop
* Skip collecting the output threshold for output tensors whose type is not fp32 or fp64, test=develop
* Fix error in test_user_defined_quantization, test=develop
* Add depthwise_conv_bn_fuse, test=develop
* Add conv_transpose_bn_fuse_pass for post_training_quant, test=develop
- 29 Jul 2020, 1 commit

Committed by cc
* Remove the output for moving_average_abs_max_scale op, test=develop
- 09 Jul 2020, 1 commit

Committed by Zhen Wang
- 19 Mar 2020, 1 commit

Committed by Liufang Sang
* fix div zero, test=develop
* add hostdevice function, test=develop
* add eps when the scale is zero, test=develop
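The eps fix guards against dividing by a zero scale, which happens when a tensor is entirely zero. A tiny sketch of the idea, using a hypothetical `safe_scale` helper rather than the actual kernel code:

```python
import numpy as np

def safe_scale(x, eps=1e-6):
    """Abs-max scale that is never zero, so x / scale stays finite."""
    s = float(np.abs(x).max())
    return s if s > 0.0 else eps

x = np.zeros(4, dtype="float32")
print(x / safe_scale(x))   # [0. 0. 0. 0.] instead of NaN/Inf
```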
- 11 Sep 2019, 1 commit

Committed by Huihuang Zheng
TemporaryAllocator is a singleton used for allocating memory for cuDNN. Since it is a singleton, removing it improves memory behavior. We replace TemporaryAllocator with CUDADeviceContextAllocator and CUDADeviceContextAllocation, which use a stream callback to delete the memory allocated for the stream and thus avoid the singleton. Also added data_feed_proto to operator to fix CI in CPU compilation.
- 21 May 2019, 1 commit

Committed by Zhaolong Xing
* add quant_dequant_moving_avg_max_abs op, test=develop
* add more notes for quantdequant op, test=develop
- 07 May 2019, 1 commit

Committed by Zhen Wang
* Add MovingAverageAbsMaxScale operator which is only used for calculating the quantization scale.
* test=develop
* change the output into inplace. test=develop
* Revert "test=develop". This reverts commit 696cf626.
* Revert "change the output into inplace. test=develop". This reverts commit a19acd20.
* test=develop.
* update the MovingAverageAbsMaxScaleOp test. test=develop
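MovingAverageAbsMaxScale tracks a smoothed abs-max scale across training steps rather than the abs-max of a single batch. A minimal sketch of one common formulation of such an update; the class name, momentum value, and the exact accum/state rule are assumptions for illustration, not the operator's code:

```python
import numpy as np

class MovingAverageAbsMaxScale:
    """Track a moving-average abs-max quantization scale across batches."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.accum = 0.0   # decayed sum of per-batch abs-max values
        self.state = 0.0   # decayed count of updates

    def update(self, x):
        batch_abs_max = float(np.abs(x).max())
        self.accum = self.momentum * self.accum + batch_abs_max
        self.state = self.momentum * self.state + 1.0
        return self.accum / self.state   # current scale estimate

tracker = MovingAverageAbsMaxScale()
for _ in range(10):
    activations = np.random.randn(32, 64).astype("float32")
    scale = tracker.update(activations)
```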
- 21 Mar 2019, 1 commit

Committed by Zhen Wang
- 19 Mar 2019, 1 commit

Committed by Zhen Wang
- 15 Mar 2019, 1 commit

Committed by 视言
* Add moving average absmax op in quantize-aware training.
- 04 Mar 2019, 1 commit

Committed by Zhen Wang
- 03 Sep 2018, 1 commit

Committed by qingqing01
* Improve and fix fake_quantize_op.
- 30 Aug 2018, 1 commit

Committed by Dang Qingqing
- 28 Aug 2018, 1 commit

Committed by Dang Qingqing
- 24 Aug 2018, 1 commit

Committed by Dang Qingqing
- 11 Jul 2018, 1 commit

Committed by 视言
* Add a fake_quantize_op, which quantizes an input tensor to a tensor with lower bits.
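fake_quantize_op maps a float tensor onto a lower-bit integer grid using an abs-max scale, and during quantization-aware training it is usually paired with an immediate dequantize step. A minimal per-tensor sketch of that mapping; the names and details below are illustrative, not the op's actual implementation:

```python
import numpy as np

def fake_quantize_abs_max(x, bit_length=8):
    """Quantize x to bit_length-bit integers with a single abs-max scale."""
    qmax = (1 << (bit_length - 1)) - 1                  # 127 for 8 bits
    scale = max(float(np.abs(x).max()), 1e-8)           # avoid division by zero
    q = np.clip(np.round(x / scale * qmax), -qmax, qmax).astype("int32")
    return q, scale

x = np.random.randn(2, 4).astype("float32")
q, scale = fake_quantize_abs_max(x)
x_recovered = q.astype("float32") * scale / 127         # approximate dequantization
```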