Unverified commit 1e6090d8, authored by: W wangguanzhong, committed by: GitHub

refine doc (#163)

Parent 1a843c55
@@ -25,24 +25,24 @@ python tools/export_model.py -c configs/faster_rcnn_r50_1x.yml \
## Setting the input size of the exported model
When running inference with Fluid-TensorRT, TensorRT versions <= 5.1 only support fixed-size inputs, so the image size of the `data` layer in the saved model must match the actual input image size (the Fluid C++ inference engine has no such restriction). The input image size of the saved model can be changed by setting `image_shape` in the TestReader. Examples:
```bash
# Export a FasterRCNN model with a 3x640x640 input
python tools/export_model.py -c configs/faster_rcnn_r50_1x.yml \
    --output_dir=./inference_model \
    -o weights=https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_1x.tar \
    TestReader.inputs_def.image_shape=[3,640,640]

# Export a YOLOv3 model with a 3x320x320 input
python tools/export_model.py -c configs/yolov3_darknet.yml \
    --output_dir=./inference_model \
    -o weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar \
    TestReader.inputs_def.image_shape=[3,320,320]

# Export an SSD model with a 3x300x300 input
python tools/export_model.py -c configs/ssd/ssd_mobilenet_v1_voc.yml \
    --output_dir=./inference_model \
    -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ssd_mobilenet_v1_voc.tar \
    TestReader.inputs_def.image_shape=[3,300,300]
```
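After export, it can be worth confirming that the fixed input shape actually ended up in the saved model. The following is a minimal sketch using the Fluid 1.x Python API; the output directory, the `__model__`/`__params__` filenames, and the feed variable names are assumptions and may differ depending on your configuration.

```python
# Sketch: inspect the feed variables of an exported model (PaddlePaddle 1.x / Fluid).
# Paths, filenames, and variable names below are assumptions, not guaranteed by the docs.
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname='./inference_model/faster_rcnn_r50_1x',
    executor=exe,
    model_filename='__model__',
    params_filename='__params__')

for name in feed_names:
    shape = program.global_block().var(name).shape
    print(name, shape)  # the image input should report the exported [3, 640, 640] shape
```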
@@ -6,7 +6,7 @@
```bash
export CUDA_VISIBLE_DEVICES=0
python tools/cpp_infer.py --model_path=inference_model/faster_rcnn_r50_1x/ --config_path=tools/cpp_demo.yml --infer_img=demo/000000570688.jpg --visualize
```
@@ -18,7 +18,7 @@ python tools/cpp_infer.py --model_path=output/yolov3_mobilenet_v1/ --config_path
4. visualize: whether to save the visualized result; the default save path is ```output/```
More parameters can be found in ```tools/cpp_demo.yml```. **The shape set there must match the shape used when the model was exported.**
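Because the shape is fixed at export time, images have to be resized to exactly that shape before they are fed to the predictor. Below is a small, illustrative preprocessing sketch (OpenCV + NumPy); the mean/std values and the target shape are placeholders, and the real ones come from the model's config.

```python
# Illustrative preprocessing: resize to the exported shape, normalize, HWC -> CHW.
# Mean/std values here are common ImageNet defaults, used only as placeholders.
import cv2
import numpy as np

def preprocess(img_path, target_shape=(3, 640, 640)):
    _, h, w = target_shape
    img = cv2.imread(img_path)[:, :, ::-1].astype('float32') / 255.0  # BGR -> RGB, scale to [0, 1]
    img = cv2.resize(img, (w, h))                                     # must match the export shape
    mean = np.array([0.485, 0.456, 0.406], dtype='float32')
    std = np.array([0.229, 0.224, 0.225], dtype='float32')
    img = (img - mean) / std
    return img.transpose((2, 0, 1))[np.newaxis, :]                    # (1, 3, h, w)

batch = preprocess('demo/000000570688.jpg')
print(batch.shape)  # (1, 3, 640, 640)
```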
## Setting up the Paddle environment
@@ -37,6 +37,6 @@ cmake .. -DWITH_MKL=ON \
make -j20
make install
export LD_LIBRARY_PATH=${PATH_TO_TensorRT}:$LD_LIBRARY_PATH
```
# demo for cpp_infer.py
mode: trt_fp32 # trt_fp32, trt_fp16, trt_int8, fluid
arch: RCNN # YOLO, SSD, RCNN, RetinaNet
min_subgraph_size: 20 # need 3 for YOLO arch
use_python_inference: False # whether to use python inference
# visualize the predicted image
metric: COCO # COCO, VOC
draw_threshold: 0.5
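For orientation, the `mode` and `min_subgraph_size` fields above roughly correspond to how a Fluid `AnalysisConfig` would be set up for TensorRT. The actual mapping is implemented in `tools/cpp_infer.py`, so the sketch below (including the `__model__`/`__params__` filenames) is only an assumption of the typical pattern, not the project's own code.

```python
# Rough sketch of how mode / min_subgraph_size could map onto Fluid's AnalysisConfig.
# This is NOT the code in tools/cpp_infer.py, just an illustration of the usual pattern.
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

def build_config(model_dir, mode='trt_fp32', min_subgraph_size=20):
    config = AnalysisConfig(model_dir + '/__model__', model_dir + '/__params__')
    config.enable_use_gpu(100, 0)  # 100 MB initial GPU memory pool on device 0
    precision = {
        'trt_fp32': AnalysisConfig.Precision.Float32,
        'trt_fp16': AnalysisConfig.Precision.Half,
        'trt_int8': AnalysisConfig.Precision.Int8,
    }
    if mode in precision:  # mode == 'fluid' runs the plain Fluid engine, no TensorRT
        config.enable_tensorrt_engine(
            workspace_size=1 << 30,
            max_batch_size=1,
            min_subgraph_size=min_subgraph_size,
            precision_mode=precision[mode])
    return config

predictor = create_paddle_predictor(build_config('inference_model/faster_rcnn_r50_1x'))
```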