Unverified commit 08c8da53, authored by: Y yunyaoXYY, committed by: GitHub

[FastDeploy] Improve fastdeploy code and readme (#9379)

* Fix padding value in rec model, and box sort in det model

* Add FastDeploy support to deploy PaddleOCR models.

* Improve readme

* improve readme

* improve readme

* improve code
Parent 36ec3a40
@@ -152,9 +152,9 @@ int main(int argc, char *argv[]) {
     option.UsePaddleBackend(); // Paddle Inference
   } else if (flag == 5) {
     option.UseGpu();
-    option.UseTrtBackend();
-    option.EnablePaddleTrtCollectShape();
-    option.EnablePaddleToTrt(); // Paddle-TensorRT
+    option.UsePaddleInferBackend();
+    option.paddle_infer_option.collect_trt_shape = true;
+    option.paddle_infer_option.enable_trt = true; // Paddle-TensorRT
   } else if (flag == 6) {
     option.UseGpu();
     option.UseOrtBackend(); // ONNX Runtime
...
@@ -70,7 +70,7 @@ infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v3.0_cls_infer ./ch_PP-OCRv
 ## 5. Brief Introduction to the PP-OCRv3 C# API
 A brief introduction to the PP-OCRv3 C# API is given below.
-- To change the deployment backend or perform other customizations, see the [C# Runtime API](https://github.com/PaddlePaddle/FastDeploy/blob/develop/csharp/fastdeploy/runtime_option.cs).
+- To change the deployment backend or perform other customizations, see the [C# Runtime API](https://baidu-paddle.github.io/fastdeploy-api/csharp/html/classfastdeploy_1_1RuntimeOption.html).
 - For more PP-OCR C# APIs, see the [C# PP-OCR API](https://github.com/PaddlePaddle/FastDeploy/blob/develop/csharp/fastdeploy/vision/ocr/model.cs)
 ### Models
...
@@ -105,17 +105,17 @@ def build_option(args):
     elif args.backend.lower() == "pptrt":
         assert args.device.lower(
         ) == "gpu", "Paddle-TensorRT backend require inference on device GPU."
-        det_option.use_trt_backend()
-        det_option.enable_paddle_trt_collect_shape()
-        det_option.enable_paddle_to_trt()
-        cls_option.use_trt_backend()
-        cls_option.enable_paddle_trt_collect_shape()
-        cls_option.enable_paddle_to_trt()
-        rec_option.use_trt_backend()
-        rec_option.enable_paddle_trt_collect_shape()
-        rec_option.enable_paddle_to_trt()
+        det_option.use_paddle_infer_backend()
+        det_option.paddle_infer_option.collect_trt_shape = True
+        det_option.paddle_infer_option.enable_trt = True
+        cls_option.use_paddle_infer_backend()
+        cls_option.paddle_infer_option.collect_trt_shape = True
+        cls_option.paddle_infer_option.enable_trt = True
+        rec_option.use_paddle_infer_backend()
+        rec_option.paddle_infer_option.collect_trt_shape = True
+        rec_option.paddle_infer_option.enable_trt = True
         # If use TRT backend, the dynamic shape will be set as follow.
         # We recommend that users set the length and height of the detection model to a multiple of 32.
...
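The hunks above all perform the same migration: the removed `use_trt_backend()` / `enable_paddle_trt_collect_shape()` / `enable_paddle_to_trt()` calls are replaced by selecting the Paddle Inference backend and setting attributes on its `paddle_infer_option`. The sketch below illustrates that mapping with minimal mock classes standing in for FastDeploy's real `RuntimeOption` (the `fastdeploy` package itself is not imported here, and the mock attribute set is only a subset chosen for illustration):

```python
# Mock sketch of the new FastDeploy RuntimeOption pattern used in this commit.
# These stand-in classes only mirror the shape of the API migration; the real
# implementation lives in the fastdeploy package.

class PaddleInferOption:
    """Backend-specific options, now exposed as plain attributes."""
    def __init__(self):
        self.collect_trt_shape = False  # was: enable_paddle_trt_collect_shape()
        self.enable_trt = False         # was: enable_paddle_to_trt()

class RuntimeOption:
    def __init__(self):
        self.backend = None
        self.device = "cpu"
        self.paddle_infer_option = PaddleInferOption()

    def use_gpu(self):
        self.device = "gpu"

    def use_paddle_infer_backend(self):
        # Replaces the old use_trt_backend() call: Paddle-TensorRT now runs
        # through the Paddle Inference backend with enable_trt switched on.
        self.backend = "paddle_infer"

def build_pptrt_option():
    """Build a Paddle-TensorRT option the new way (one model's option shown)."""
    opt = RuntimeOption()
    opt.use_gpu()
    opt.use_paddle_infer_backend()
    opt.paddle_infer_option.collect_trt_shape = True
    opt.paddle_infer_option.enable_trt = True
    return opt

if __name__ == "__main__":
    opt = build_pptrt_option()
    print(opt.backend, opt.device, opt.paddle_infer_option.enable_trt)
```

In the actual demo above, the same four calls are applied to each of `det_option`, `cls_option`, and `rec_option`.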