- 17 April 2020 (1 commit)
Committed by dingminghui
When the filter weight is quantized elementwise, use the maximum of weight_scale instead of the minimum to keep the quantized weights within the int8 range.
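A minimal sketch of the reasoning, assuming a symmetric int8 scheme and made-up weight values (this is not Paddle-Lite code): when the elementwise scales are collapsed into a single scale, dividing by the smallest scale pushes the larger weights far outside [-127, 127] after rounding, while dividing by the largest scale keeps every quantized value in range.

```python
import numpy as np

# Illustration only: symmetric int8 quantization with one scale chosen
# from elementwise weight scales. Weight values are made up.
weight = np.array([0.02, -0.5, 1.2, -1.27], dtype=np.float32)
weight_scale = np.abs(weight) / 127.0  # elementwise scales

q_with_min = np.round(weight / weight_scale.min())  # magnitudes up to ~8064: overflows int8
q_with_max = np.round(weight / weight_scale.max())  # [2, -50, 120, -127]: fits in int8

print(q_with_min, q_with_max)
```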
- 9 April 2020 (1 commit)
Committed by jackzhang235
[MLU] Add basic support for MLU, including related passes, kernels, gtests, and some APIs in paddle_api.h. Passes: mlu_subgraph_pass, mlu_postprocess_pass. Kernels: act, batch_norm, concat, conv, elementwise, fc, interpolate, pool, scale, softmax. A rough sketch of the subgraph idea follows.
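As a conceptual sketch only (the graph representation, op names, and partition rule below are assumptions for illustration, not the actual mlu_subgraph_pass), a target-specific subgraph pass of this kind groups maximal runs of ops the MLU supports so each run can be offloaded as a single subgraph, leaving unsupported ops on the host:

```python
# Conceptual sketch of target-specific subgraph partitioning; not the
# actual mlu_subgraph_pass. The op list and supported set are made up.
MLU_SUPPORTED = {"conv", "batch_norm", "act", "pool", "fc", "softmax",
                 "elementwise", "concat", "interpolate", "scale"}

def partition(ops):
    """Group consecutive supported ops into subgraphs to offload;
    unsupported ops stay on the host as single-op segments."""
    segments, current = [], []
    for op in ops:
        if op in MLU_SUPPORTED:
            current.append(op)
        else:
            if current:
                segments.append(("mlu_subgraph", current))
                current = []
            segments.append(("host", [op]))
    if current:
        segments.append(("mlu_subgraph", current))
    return segments

print(partition(["conv", "batch_norm", "act", "shape", "fc", "softmax"]))
# -> [('mlu_subgraph', ['conv', 'batch_norm', 'act']),
#     ('host', ['shape']),
#     ('mlu_subgraph', ['fc', 'softmax'])]
```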
- 1 April 2020 (1 commit)
Committed by zhaoying
1. Disable the conv activation pass by default.
2. Set the fc_fuser param with_relu to false, since the MLU fc kernel does not support relu.
3. Change the fc filter shape from 2-dim to 4-dim when the input has 4 dims (see the sketch after this list).
4. Add a ToFile function to the MLU tensor for debugging convenience.
5. Enable 4-dim input in elementwise_ops.
6. Add transpose2d in utility.cc.
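For item 3, a minimal numpy sketch of the likely motivation (the shapes, the layout, and the conv-equivalence reading are assumptions; the layout the MLU fc kernel actually expects may differ): an fc applied to a flattened 4-D input is equivalent to applying the same weight, viewed as a 4-D filter, over the full spatial extent, which is a natural reason to hand the kernel a 4-D filter when the input has 4 dims.

```python
import numpy as np

# Conceptual sketch, not Paddle-Lite code.
N, C, H, W, M = 2, 3, 4, 4, 5  # batch, channels, height, width, output units
x = np.random.randn(N, C, H, W).astype(np.float32)
w2d = np.random.randn(C * H * W, M).astype(np.float32)  # fc weight, 2-D

# fc on the flattened input: [N, C*H*W] x [C*H*W, M] -> [N, M]
fc_out = x.reshape(N, -1) @ w2d

# The same weight viewed as a 4-D filter [M, C, H, W], applied over the
# full spatial extent (one output position), gives the same result.
w4d = w2d.T.reshape(M, C, H, W)
conv_out = np.einsum("nchw,mchw->nm", x, w4d)

assert np.allclose(fc_out, conv_out, atol=1e-4)
```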
- 28 March 2020 (1 commit)
Committed by jackzhang235
- 6 March 2020 (1 commit)
Committed by zhangshijin
* [MLU] Support resnet50 on MLU.