Unverified commit 12134b2e authored by Candy2Tang, committed by GitHub

[xdoctest][task 124] Reformat example code with google style in python/paddle/quantization/quantize.py (#56235)

* [xdoctest][task 124] test=docs_preview

* test=document_fix

* fix indent

---------
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
Parent 80d3a20b
@@ -41,29 +41,30 @@ class Quantization(metaclass=abc.ABCMeta):
        pass

    def convert(self, model: Layer, inplace=False):
        r"""Convert the quantization model to ONNX style. And the converted
        model can be saved as inference model by calling paddle.jit.save.

        Args:
            model(Layer) - The quantized model to be converted.
            inplace(bool) - Whether to modify the model in-place.

        Return: The converted model

        Examples:
            .. code-block:: python

                >>> import paddle
                >>> from paddle.quantization import QAT, QuantConfig
                >>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
                >>> from paddle.vision.models import LeNet

                >>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
                >>> q_config = QuantConfig(activation=quanter, weight=quanter)
                >>> qat = QAT(q_config)
                >>> model = LeNet()
                >>> quantized_model = qat.quantize(model)
                >>> converted_model = qat.convert(quantized_model)
                >>> dummy_data = paddle.rand([1, 1, 32, 32], dtype="float32")
                >>> paddle.jit.save(converted_model, "./quant_deploy", [dummy_data])
        """
        _model = model if inplace else copy.deepcopy(model)
        replaced = {}
...
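The `inplace` handling at the end of the hunk (`_model = model if inplace else copy.deepcopy(model)`) is a common API pattern: mutate the caller's object only on request, otherwise work on an independent deep copy. A minimal self-contained sketch of that pattern, using a hypothetical `DummyModel` class in place of a real paddle `Layer` (names here are for illustration only, not part of the paddle API):

```python
import copy


class DummyModel:
    """Stand-in for a paddle Layer (hypothetical, for illustration)."""

    def __init__(self):
        self.layers = ["conv", "quant_stub"]


def convert(model, inplace=False):
    # Mutate the original object only when inplace=True;
    # otherwise operate on an independent deep copy.
    _model = model if inplace else copy.deepcopy(model)
    _model.layers = [l.replace("quant_stub", "onnx_quant") for l in _model.layers]
    return _model


m = DummyModel()

out = convert(m, inplace=False)
print(out.layers)  # copy was converted
print(m.layers)    # original left untouched

out2 = convert(m, inplace=True)
print(out2 is m)   # same object, modified in place
```

The deep copy matters for nested structures: a shallow copy would still share the inner containers with the original, so "converting the copy" would silently mutate the caller's model.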