Unverified commit a9fe5d72, authored by Kaipeng Deng, committed by GitHub

Merge pull request #2110 from tink2123/cherry-pick

[cherry-pick]modified infer speed
@@ -155,7 +155,9 @@ Inference is used to get prediction score or image features based on trained mod
     --image_name=000000000139.jpg \
     --draw_threshold=0.5
-Inference speed:
+- Set ```export CUDA_VISIBLE_DEVICES=0``` to specify one GPU to infer.
+
+Inference speed(Tesla P40):
 | input size | 608x608 | 416x416 | 320x320 |
......
@@ -157,7 +157,9 @@ Train Loss
     --image_name=000000000139.jpg \
     --draw_threshold=0.5
-Model inference speed:
+- Set export CUDA_VISIBLE_DEVICES=0 to run inference on a single GPU.
+
+Model inference speed(Tesla P40):
 | input size | 608x608 | 416x416 | 320x320 |
......
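To make the documented tip concrete, below is a minimal sketch of launching single-GPU inference with the environment variable this commit documents. The ```infer.py``` entry point is an assumption for illustration (the full command is truncated in the diff above); only ```CUDA_VISIBLE_DEVICES```, ```--image_name```, and ```--draw_threshold``` appear in the original docs.

```
# Sketch only, not the full documented command.
# Expose only GPU 0 to the process before the framework initializes,
# so inference runs on a single card.
export CUDA_VISIBLE_DEVICES=0

# "infer.py" and any flags other than --image_name / --draw_threshold
# are assumptions; the complete invocation is truncated in the diff above.
python infer.py \
    --image_name=000000000139.jpg \
    --draw_threshold=0.5
```

Note that ```CUDA_VISIBLE_DEVICES``` is read when the CUDA runtime initializes in the process, so it must be exported before the inference script starts rather than changed afterwards.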