- 09 Jul 2020, 1 commit

  Submitted by Zhen Wang
- 11 May 2020, 1 commit

  Submitted by Chen Weihang
  * add new macro BOOST_GET_SAFELY & unittests, test=develop
  * add different macro type, test=develop
  * fix get macro type in executor, test=develop
  * four macro part change backup
  * using one macro for all case, test=develop
  * revert attribute change, test=develop
  * change to three func to solve gcc4.8 bug, test=develop
  * polish some details, test=develop
- 19 Mar 2020, 1 commit

  Submitted by Liufang Sang
  * fix div zero test=develop
  * fix div zero test=develop
  * add hostdevice function test=develop
  * add eps when is zero test=develop
- 27 May 2019, 1 commit

  Submitted by Zeng Jinle
  * Revert "Revert "Fix allocator bug"" This reverts commit 174d0d0b.
  * Revert "fix travis ci" This reverts commit 5656fa9f. test=develop
  * add inlined_vector.h, test=develop
  * add inlined_vector_test, test=develop
  * clean code of allocator, test=develop
  * delete zero_size_allocator.h, test=develop
  * fix failed unittest, test=develop
- 21 May 2019, 1 commit

  Submitted by Zhaolong Xing
  * add quant_dequant_moving_avg_max_abs op test=develop
  * add more note for quantdequant op test=develop
- 07 May 2019, 1 commit

  Submitted by Zhen Wang
  * Add MovingAverageAbsMaxScale operator, which is only used for calculating the quantization scale.
  * test=develop
  * change the output into inplace. test=develop
  * Revert "test=develop" This reverts commit 696cf626.
  * Revert "change the output into inplace. test=develop" This reverts commit a19acd20.
  * test=develop.
  * update the MovingAverageAbsMaxScaleOp test. test=develop
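The moving-average abs-max scale described above can be sketched as follows. This is a minimal NumPy illustration of the idea (a decayed sum of per-step abs-max values divided by a decayed step count), not Paddle's actual kernel API; the function and parameter names are hypothetical.

```python
import numpy as np

def moving_average_abs_max_scale(x, state, accum, decay=0.9):
    """Update a running quantization-scale estimate from one batch.

    state  - decayed count of updates seen so far
    accum  - decayed sum of per-batch abs-max values
    Returns (scale, state, accum); scale = accum / state is the
    moving-average estimate of the tensor's abs-max.
    """
    abs_max = np.abs(x).max()
    state = decay * state + 1.0        # decayed update count
    accum = decay * accum + abs_max    # decayed abs-max accumulator
    scale = accum / state
    return scale, state, accum

# Usage: feed successive activation batches to track a smooth scale.
state, accum = 0.0, 0.0
for _ in range(3):
    batch = np.random.randn(4, 4)
    scale, state, accum = moving_average_abs_max_scale(batch, state, accum)
```

Averaging over many batches makes the scale robust to a single outlier batch, which is why this op is preferred over a raw per-batch abs-max during quantization-aware training.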
- 13 Apr 2019, 1 commit

  Submitted by Zhen Wang
- 21 Mar 2019, 1 commit

  Submitted by Zhen Wang
- 15 Mar 2019, 1 commit

  Submitted by 视言
  * Add moving average absmax op in quantize-aware training.
- 04 Mar 2019, 1 commit

  Submitted by Zhen Wang
- 04 Sep 2018, 1 commit

  Submitted by minqiyang
- 03 Sep 2018, 1 commit

  Submitted by qingqing01
  * Improve and fix fake_quantize_op.
- 30 Aug 2018, 1 commit

  Submitted by Dang Qingqing
- 28 Aug 2018, 1 commit

  Submitted by Dang Qingqing
- 11 Jul 2018, 1 commit

  Submitted by 视言
  * Add a fake_quantize_op, which quantizes an input tensor to a tensor with lower bits.
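"Fake" quantization of the kind this op introduces rounds a float tensor to a low-bit grid and immediately dequantizes it back to float, so training sees the quantization error while the graph stays in floating point. A minimal NumPy sketch of abs-max fake quantization follows; the function name and signature are illustrative, not the op's actual interface.

```python
import numpy as np

def fake_quantize_abs_max(x, num_bits=8):
    """Quantize x to num_bits using its abs-max as the scale,
    then dequantize back to float ("fake" quantization)."""
    bnt = (1 << (num_bits - 1)) - 1   # max quantized magnitude, 127 for 8 bits
    scale = np.abs(x).max()
    # Map to [-bnt, bnt], round to integers, then map back to floats.
    q = np.round(x / scale * bnt)
    return q * scale / bnt

x = np.array([0.5, -1.0, 0.25])
y = fake_quantize_abs_max(x)  # y stays float but lies on the 8-bit grid
```

The round trip keeps every value within one quantization step (`scale / bnt`) of the original, which is the error the training process learns to tolerate.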