Unverified · Commit 35e02553 · authored by: G George Ni · committed by: GitHub

[MOT] update install dependency (#3726)

Parent 0c2b74f9
...@@ -60,6 +60,7 @@ PP-YOLO improved performance and speed of YOLOv3 with the following methods: ...
- PP-YOLO inference speed is tested on a single Tesla V100 with batch size 1, CUDA 10.2, CUDNN 7.5.1, and TensorRT 5.1.2.2 in TensorRT mode.
- PP-YOLO FP32 inference speed testing uses the inference model exported by `tools/export_model.py`, benchmarked by running `deploy/python/infer.py` with `--run_benchmark`. All testing results exclude the time cost of data reading and post-processing (NMS), following the same testing method as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet).
- TensorRT FP16 inference speed testing additionally excludes the time cost of bounding-box decoding (`yolo_box`) compared with the FP32 testing above, which means that data reading, bounding-box decoding, and post-processing (NMS) are all excluded (testing method same as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)).
- If you set `--run_benchmark=True`, you should install these dependencies first: `pip install pynvml psutil GPUtil`.
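The timing methodology above (time only the forward pass; data reading and post-processing are excluded) can be sketched as follows. This is a minimal illustration, not PaddleDetection's actual benchmark code; `run_model` and `prepare_input` are hypothetical stand-ins:

```python
import time

def benchmark(run_model, prepare_input, warmup=10, repeats=100):
    """Average per-iteration latency in ms, timing only the model call.

    Data reading and warm-up iterations are kept outside the timed
    region, matching the testing method described above.
    """
    x = prepare_input()          # data reading: not timed
    for _ in range(warmup):      # warm-up: not timed
        run_model(x)
    start = time.perf_counter()
    for _ in range(repeats):
        run_model(x)             # only this loop is timed
    elapsed = time.perf_counter() - start
    return elapsed / repeats * 1000.0

# Hypothetical stand-ins for a real predictor and input batch.
avg_ms = benchmark(lambda x: sum(x), lambda: list(range(1000)))
print(avg_ms > 0)
```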
### PP-YOLO for mobile
......
...@@ -31,6 +31,8 @@ config->EnableTensorRtEngine(1 << 20 /*workspace_size*/, ...
false /*use_calib_mode*/);
```
**Note:**
If `--run_benchmark` is set to True, install the dependencies first: `pip install pynvml psutil GPUtil`.
### 3.2 TensorRT fixed-size prediction
......
...@@ -168,7 +168,9 @@ CUDNN_LIB=/usr/lib/aarch64-linux-gnu/ ...
| --use_mkldnn | Whether to enable MKLDNN acceleration for CPU inference |
| --cpu_threads | Number of CPU threads, default 1 |
**Note**:
- Priority order: `camera_id` > `video_file` > `image_dir` > `image_file`
- If `--run_benchmark` is set to True, install the dependencies first: `pip install pynvml psutil GPUtil`
`Example 1`
......
...@@ -110,8 +110,9 @@ make ...
| --use_mkldnn | Whether to enable MKLDNN acceleration for CPU inference |
| --cpu_threads | Number of CPU threads, default 1 |
**Note**:
- Priority order: `camera_id` > `video_file` > `image_dir` > `image_file`
- If `--run_benchmark` is set to True, install the dependencies first: `pip install pynvml psutil GPUtil`
`Example 1`
```shell
......
...@@ -108,6 +108,7 @@ cd D:\projects\PaddleDetection\deploy\cpp\out\build\x64-Release ...
**Note**:
(1) Priority order: `camera_id` > `video_file` > `image_dir` > `image_file`.
(2) If `opencv_world346.dll` cannot be found, copy `opencv_world346.dll` from the folder `D:\projects\packages\opencv3_4_6\build\x64\vc14\bin` into the folder containing `main.exe`.
(3) If `--run_benchmark` is set to True, install the dependencies first: `pip install pynvml psutil GPUtil`.
`Example 1`:
......
...@@ -48,3 +48,4 @@ python deploy/python/infer.py --model_dir=./inference/yolov3_mobilenet_v1_roadsi ...
- Parameter priority order: `camera_id` > `video_file` > `image_dir` > `image_file`
- run_mode: `fluid` means inference with AnalysisPredictor at float32 precision; the other values mean inference with AnalysisPredictor using TensorRT at different precisions.
- If the installed PaddlePaddle does not support TensorRT-based inference, you need to compile it yourself; see the [inference library compilation tutorial](https://paddleinference.paddlepaddle.org.cn/user_guides/source_compile.html) for details.
- If `--run_benchmark` is set to True, install the dependencies first: `pip install pynvml psutil GPUtil`
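The parameter priority order above can be expressed as a small resolver. This is an illustrative sketch of the selection rule only, not the actual flag handling inside `deploy/python/infer.py`; the function name and defaults are hypothetical:

```python
def resolve_input_source(camera_id=-1, video_file="", image_dir="", image_file=""):
    """Pick one input source by priority:
    camera_id > video_file > image_dir > image_file."""
    if camera_id != -1:
        return ("camera", camera_id)
    if video_file:
        return ("video", video_file)
    if image_dir:
        return ("image_dir", image_dir)
    if image_file:
        return ("image", image_file)
    raise ValueError("no input source given")

# video_file outranks image_file, so the video is chosen.
print(resolve_input_source(video_file="test.mp4", image_file="a.jpg"))
# → ('video', 'test.mp4')
```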