Unverified commit 210790d8 authored by Tao Luo, committed by GitHub

Merge pull request #11521 from luotao1/inference_doc

add doc for inference_transpiler
@@ -19,16 +19,30 @@ from ..executor import global_scope
class InferenceTranspiler:
    '''
    Convert the fluid program to an optimized inference program.

    There are several optimizations; only fusing batch normalization is supported now.

    Examples:

    .. code-block:: python

        # As InferenceTranspiler will modify the original program,
        # please clone it before use.
        inference_transpiler_program = program.clone()
        t = fluid.InferenceTranspiler()
        t.transpile(inference_transpiler_program, place)
    '''
    def transpile(self, program, place, scope=None):
        '''
        Run the transpiler.

        Args:
            program (Program): program to transpile
            place (Place): inference place
            scope (Scope|None): inference Scope
        '''
        if not isinstance(program, Program):
            raise TypeError("program should be as Program type")
@@ -49,36 +63,43 @@ class InferenceTranspiler:
        can be integrated with them. Doing so will give us a forward acceleration,
        especially in environments like mobile or embedded.

        For input :math:`X`:

        - Conv process:        :math:`X = input * W + bias`
        - Batch norm process:  :math:`X' = (X - mean) / std`
        - Scale Process:       :math:`Y = a * X' + b`

        After fuse into one operation:

        .. math::

            Y &= (input * W + bias - mean) / std * a + b \\\\
              &= input * a * W / std + ((bias - mean) / std * a + b)
        The operator transformation is:

        - before:

          - conv->batch_norm->any_other_op (bias == 0)
          - conv->elementwise_add->batch_norm->any_other_op (bias != 0)

        - after:

          - conv->elementwise_add->any_other_op

        The transpile stages are:

        1. insert elementwise_add op when bias == 0.
        2. fuse the batch_norm's parameters to conv and elementwise_add operators.
        3. remove batch_norm ops which are not used in any other ops.
        4. adjust the input of any_other_op to be the output of elementwise_add operator.
        5. remove unused variables.
        Args:
            program (Program): program to transpile
            place (Place): inference place
            scope (Scope): inference Scope
        '''
        self.scope = scope
        self.place = place
......
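A side note on the fusion math above: step 2 of the transpile stages folds the batch norm statistics into the convolution, yielding a new weight `a * W / std` and a new bias `(bias - mean) / std * a + b`, exactly as derived in the docstring. The short NumPy sketch below is an editor's illustration (not part of this patch), with the conv modeled as a plain matmul; it checks that the fused form matches running conv and batch norm separately:

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(8, 4)          # x plays the role of the conv input (conv modeled as a matmul)
W = np.random.randn(4, 4)          # conv weight, one column per output channel
bias = np.random.randn(4)
mean, var = np.random.randn(4), np.random.rand(4) + 0.5   # batch norm statistics
a, b = np.random.randn(4), np.random.randn(4)             # batch norm scale and shift
std = np.sqrt(var + 1e-5)

# Conv followed by batch norm scale/shift, computed separately.
y_separate = ((x @ W + bias) - mean) / std * a + b

# Fused form: fold the batch norm statistics into the conv weight and bias.
W_fused = W * (a / std)                       # scales each output channel's column
bias_fused = (bias - mean) / std * a + b
y_fused = x @ W_fused + bias_fused

assert np.allclose(y_separate, y_fused)       # both paths agree
```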
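For readers who want to try the documented API end to end, here is a minimal usage sketch (an editor's addition, not part of the patch; the model directory, the saved-model format, and the choice of CPUPlace are assumptions for illustration):

```python
import paddle.fluid as fluid

place = fluid.CPUPlace()            # assumption: CPU inference; a CUDAPlace also works
exe = fluid.Executor(place)

# Assumption: an inference model was previously saved to ./model_dir
# with fluid.io.save_inference_model().
[program, feed_names, fetch_targets] = fluid.io.load_inference_model(
    "./model_dir", exe)

# Clone first: the transpiler rewrites the program it is given.
inference_program = program.clone()
t = fluid.InferenceTranspiler()
t.transpile(inference_program, place)

# The transpiled program is then run as usual, e.g.:
# results = exe.run(inference_program,
#                   feed={feed_names[0]: input_data},
#                   fetch_list=fetch_targets)
```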