Unverified · Commit 48da34a8 authored by zhouzj, committed by GitHub

add description about the batch_size (#870)

Parent 347644b0
@@ -93,3 +93,5 @@ quanter.save_quantized_model(net, 'save_dir', input_spec=[paddle.static.InputSpe
| ----------- | --------------------------- | ------------ | --------------------------- |
| MobileNetV1 | 70.99/89.65 | Standard QAT | 70.63/89.65 |
| MobileNetV3 | 78.96/94.48 | PACT QAT | 77.52/93.77 |
Note: the quantization accuracy shown in the table above is reached with batch_size=256, but that setting requires more GPU memory, so batch_size is set to 128 in train.py.
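For context, here is a minimal sketch of where the batch size enters a typical Paddle training pipeline. This is illustrative only: it assumes a standard `paddle.io.DataLoader` setup and is not the actual code in train.py, and Cifar10 is used here purely as a stand-in for the ImageNet data the demo trains on.

```python
import paddle
from paddle.io import DataLoader
from paddle.vision import transforms
from paddle.vision.datasets import Cifar10  # stand-in dataset for illustration only

# The accuracies in the table above were obtained with batch_size=256;
# train.py defaults to 128 to lower GPU memory usage.
BATCH_SIZE = 256  # drop back to 128 (or lower) if you hit out-of-memory errors

train_dataset = Cifar10(mode='train', transform=transforms.ToTensor())
train_loader = DataLoader(
    train_dataset,
    batch_size=BATCH_SIZE,  # the knob that trades GPU memory for the reported accuracy
    shuffle=True,
    num_workers=4,
    drop_last=True,
)
```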