- 21 May 2019, 1 commit

Submitted by Zhaolong Xing

* add quant_dequant_moving_avg_max_abs op. test=develop
* add more notes for the quant_dequant op. test=develop

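The quant_dequant_moving_avg_max_abs op added here combines fake quantization and dequantization while tracking the tensor's range with a moving average of its abs-max. The numpy sketch below is only a rough illustration of that idea under assumed details (the rho decay factor, the accum/state bookkeeping, and the function name are assumptions), not the operator's actual implementation.

```python
import numpy as np

def quant_dequant_moving_avg_max_abs(x, accum, state, bits=8, rho=0.9):
    """Rough sketch of a quant-dequant op driven by a moving-average abs-max scale.

    accum/state hold running statistics across batches (assumed layout); the
    function returns the fake-quantized tensor plus the updated statistics.
    """
    # Update the moving average of the per-batch abs-max (assumed formulation).
    state = rho * state + 1.0
    accum = rho * accum + float(np.max(np.abs(x)))
    scale = accum / state

    # Quantize to `bits` bits, then immediately dequantize (fake quantization).
    qmax = (1 << (bits - 1)) - 1              # e.g. 127 for 8 bits
    q = np.clip(np.round(x / scale * qmax), -qmax, qmax)
    return q * scale / qmax, accum, state

# Example: run two "batches" through the sketch and let the scale adapt.
accum, state = 0.0, 0.0
for batch in (np.random.randn(4, 8), 2.0 * np.random.randn(4, 8)):
    out, accum, state = quant_dequant_moving_avg_max_abs(batch, accum, state)
```
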
- 07 May 2019, 1 commit

Submitted by Zhen Wang

* Add MovingAverageAbsMaxScale operator, which is only used for calculating the quantization scale.
* test=develop
* change the output into inplace. test=develop
* Revert "test=develop". This reverts commit 696cf626.
* Revert "change the output into inplace. test=develop". This reverts commit a19acd20.
* test=develop.
* update the MovingAverageAbsMaxScaleOp test. test=develop

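The MovingAverageAbsMaxScale operator described in this commit only computes the quantization scale and otherwise passes its input through (the commit history even shows an experiment with making the output in-place). A minimal Python sketch of such a scale observer, with rho and the accum/state bookkeeping assumed rather than taken from the operator's definition:

```python
import numpy as np

class MovingAverageAbsMaxScale:
    """Sketch of a scale observer: tracks a moving-average abs-max scale
    without modifying the tensor that flows through it (assumed semantics)."""

    def __init__(self, rho=0.9):
        self.rho = rho
        self.accum = 0.0   # weighted sum of per-step abs-max values
        self.state = 0.0   # weighted count of steps

    def __call__(self, x):
        self.state = self.rho * self.state + 1.0
        self.accum = self.rho * self.accum + float(np.max(np.abs(x)))
        return x           # the data itself passes through unchanged

    @property
    def scale(self):
        return self.accum / max(self.state, 1e-12)

# Example: observe an activation for a few steps, then read off the scale.
obs = MovingAverageAbsMaxScale()
for _ in range(3):
    _ = obs(np.random.randn(16))
print(obs.scale)
```
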
- 21 March 2019, 1 commit

Submitted by Zhen Wang

- 19 March 2019, 1 commit

Submitted by Zhen Wang

- 15 March 2019, 1 commit

Submitted by 视言

* Add moving average absmax op in quantization-aware training.

- 05 March 2019, 1 commit

Submitted by Zhen Wang

- 04 March 2019, 1 commit

Submitted by Zhen Wang

- 26 February 2019, 1 commit

Submitted by qingqing01

- 14 February 2019, 1 commit

Submitted by qingqing01

* Fix debug mode in fake_quantize_op
* Remove template specialization

- 12 December 2018, 1 commit

Submitted by Yu Yang

test=develop

- 15 November 2018, 1 commit

Submitted by Sylwester Fraczek

* add is_test to pooling and activations; add prop_kind support for activation, conv and pooling layers; add a pass that sets is_test to true; add a transpiler version of the is_test pass. test=develop
* patch test and pass. test=develop
* add pass to analyzer.h. test=develop
* add is_test attr description & pass only on mkldnn in: activation_op.cc, batch_norm_op.cc, conv_op.cc, dropout_op.cc, lrn_op.cc, pool_op.cc, sequence_pool_op.cc, softmax_op.cc
* fix is_test handling for activation, pool and conv
* change description of is_test for all layers again
* remove GetAttr(use_mkldnn) from pass
* rename correct_mkldnn_test_phase to is_test and remove dependency on MKLDNN. test=develop
* review fix: magic number
* merge two if(..)s into one
* Check is_test once and pass mkldnn forward prop kind
* dereference shared_ptr with * (without get()). test=develop
* add is_test_pass back. test=develop

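The commit above adds an is_test attribute to several operators and a pass that flips it to true for inference graphs. The toy sketch below shows the general shape of such a pass; the Op class, the op-type set, and apply_is_test_pass are hypothetical stand-ins for illustration, not Paddle's real IR or pass API.

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    """Toy stand-in for a graph operator with an attribute map (hypothetical)."""
    type: str
    attrs: dict = field(default_factory=dict)

# Hypothetical set of op types that carry an is_test attribute, loosely
# mirroring the operator files touched by the commit above.
OPS_WITH_IS_TEST = {
    "relu", "batch_norm", "conv2d", "dropout", "lrn",
    "pool2d", "sequence_pool", "softmax",
}

def apply_is_test_pass(graph):
    """Set is_test=True on every op that supports it, for inference-only graphs."""
    for op in graph:
        if op.type in OPS_WITH_IS_TEST:
            op.attrs["is_test"] = True
    return graph

# Example: a two-op "graph" prepared for inference.
graph = [Op("conv2d"), Op("dropout", {"dropout_prob": 0.5})]
apply_is_test_pass(graph)
```
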
- 03 September 2018, 1 commit

Submitted by qingqing01

* Improve and fix fake_quantize_op.

- 30 August 2018, 1 commit

Submitted by Dang Qingqing

- 28 August 2018, 1 commit

Submitted by Dang Qingqing

- 24 August 2018, 1 commit

Submitted by Dang Qingqing

- 11 July 2018, 1 commit

Submitted by 视言

* Add a fake_quantize_op, which quantizes an input tensor to a tensor with lower bits.
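The original fake_quantize_op turns a float tensor into a lower-bit tensor using the tensor's abs-max as the quantization scale. A rough numpy sketch of that abs-max scheme follows; the function name and the returned (tensor, scale) pair are illustrative assumptions rather than the op's exact interface.

```python
import numpy as np

def fake_quantize_abs_max(x, bits=8):
    """Quantize x to `bits`-bit integers using the tensor's abs-max as the scale."""
    scale = float(np.max(np.abs(x)))          # dynamic range of this tensor
    qmax = (1 << (bits - 1)) - 1              # 127 for 8 bits
    q = np.round(x / scale * qmax).astype(np.int8 if bits <= 8 else np.int32)
    return q, scale                           # quantized tensor plus its scale

# Example: an 8-bit quantization of a small activation tensor.
q, scale = fake_quantize_abs_max(np.array([0.5, -1.2, 0.03]))
# Approximate floats can be recovered as q * scale / 127 when needed.
```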