Commit cd1467da authored by acosta123, committed by Cheerego

Update native_infer_en.md (#787)

Parent: 87aadd24
@@ -96,7 +96,7 @@ There are two modes in terms of memory management in `PaddleBuf`:
Of the two modes, the first is more convenient, while the second gives strict control over memory management to facilitate integration with `tcmalloc` and other libraries.
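As a rough illustration of the two modes, the sketch below first lets `PaddleBuf` allocate its own buffer and then wraps caller-owned memory. The constructor signatures, the `Resize` call, and the `paddle` namespace qualification are assumptions based on the description above, not code taken from this commit.

```c++
#include <cstdlib>

#include "paddle_inference_api.h"  // header name/path may differ per installation

void PaddleBufModesSketch() {
  // Mode 1: PaddleBuf allocates and owns the memory (the convenient mode).
  paddle::PaddleBuf owned(1024);   // assumed size-only constructor: allocates 1024 bytes
  owned.Resize(2048);              // assumed Resize: re-allocates library-owned memory

  // Mode 2: wrap memory that the caller (or tcmalloc, a custom pool, ...) manages.
  void* external = std::malloc(4096);
  paddle::PaddleBuf wrapped(external, 4096);  // assumed: the buffer is not owned by PaddleBuf
  // ... use `wrapped` as the `data` field of a PaddleTensor ...
  std::free(external);             // the caller stays responsible for releasing the memory
}
```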
### Upgrade performance based on contrib::AnalysisConfig
AnalysisConfig is at the pre-release stage and is protected by `namespace contrib`; it may be adjusted in the future.
@@ -106,9 +106,11 @@ The usage of `AnalysisConfig` is similar to that of `NativeConfig`, but the following configuration is used:
```c++
AnalysisConfig config;
config.SetModel(dirname);                // set the directory of the model
config.EnableUseGpu(100, 0 /*gpu id*/);  // use the GPU, or
config.DisableGpu();                     // use the CPU
config.SwitchSpecifyInputNames(true);    // the names of the inputs need to be specified
config.SwitchIrOptim();                  // turn on the optimization switch; a sequence of optimization passes will be executed during inference
```
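With the configuration above, a predictor is created from it before any inference can run. The helper below is a minimal sketch and is not part of this commit's diff: the `MakePredictor` name is invented, and the templated `paddle::CreatePaddlePredictor` factory and `paddle::contrib` namespace qualification are assumptions about the surrounding API.

```c++
#include <memory>
#include <string>

#include "paddle_inference_api.h"

// Hypothetical helper: build an AnalysisConfig and ask the factory for a predictor.
std::unique_ptr<paddle::PaddlePredictor> MakePredictor(const std::string& dirname) {
  paddle::contrib::AnalysisConfig config;
  config.SetModel(dirname);
  config.DisableGpu();                   // or: config.EnableUseGpu(100, 0);
  config.SwitchSpecifyInputNames(true);
  config.SwitchIrOptim();
  // The templated factory call is an assumption; check the exact signature of your release.
  return paddle::CreatePaddlePredictor<paddle::contrib::AnalysisConfig>(config);
}
```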
Note that the input `PaddleTensor` needs to be allocated. Previous examples need to be revised as follows:
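The revised example itself is elided from this diff hunk. The snippet below is only a hedged sketch of what pre-allocating an input `PaddleTensor` and running the predictor can look like: the `RunOnce` helper, the tensor name `"data"`, and the shape are invented for illustration, and the `PaddleTensor`/`PaddleBuf` fields and `Run` signature are assumed from the rest of the document.

```c++
#include <vector>

#include "paddle_inference_api.h"

// Hypothetical helper: pre-allocate one FLOAT32 input tensor, then run the predictor.
void RunOnce(paddle::PaddlePredictor* predictor) {
  paddle::PaddleTensor input;
  input.name = "data";                         // must match the model's input name when
                                               // SwitchSpecifyInputNames(true) is set
  input.shape = {1, 3, 224, 224};              // example shape; depends on the model
  input.dtype = paddle::PaddleDType::FLOAT32;
  input.data.Resize(1 * 3 * 224 * 224 * sizeof(float));  // allocate the buffer up front
  // ... fill input.data.data() with real feature values ...

  std::vector<paddle::PaddleTensor> inputs{input};
  std::vector<paddle::PaddleTensor> outputs;
  predictor->Run(inputs, &outputs);            // the predictor resizes and fills the outputs
}
```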
@@ -147,7 +149,7 @@ For more specific examples, please refer to [LoD-Tensor Instructions](../../../us...)
1. If the CPU type permits, it's best to use the versions with support for AVX and MKL.
2. Reuse input and output `PaddleTensor` to avoid the frequent memory allocation that results in low performance (see the sketch after this list).
3. Try to replace `NativeConfig` with `AnalysisConfig` to perform optimization for CPU or GPU inference.
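As a rough illustration of the second tip, the loop below keeps the same input and output vectors alive across calls so the input buffer is allocated once rather than once per `Run`. The `RunManyBatches` helper, tensor name, and shape are invented, and the API usage is assumed as in the earlier sketches.

```c++
#include <vector>

#include "paddle_inference_api.h"

// Sketch: reuse the same PaddleTensor objects for every batch so the input
// buffer is allocated a single time instead of once per Run call.
void RunManyBatches(paddle::PaddlePredictor* predictor, int num_batches) {
  std::vector<paddle::PaddleTensor> inputs(1);
  std::vector<paddle::PaddleTensor> outputs;

  inputs[0].name = "data";
  inputs[0].shape = {1, 3, 224, 224};
  inputs[0].dtype = paddle::PaddleDType::FLOAT32;
  inputs[0].data.Resize(1 * 3 * 224 * 224 * sizeof(float));  // allocated once, reused below

  for (int i = 0; i < num_batches; ++i) {
    // ... overwrite inputs[0].data.data() with the data of the i-th batch ...
    predictor->Run(inputs, &outputs);  // the outputs vector is also reused across iterations
  }
}
```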
## Code Demo
......