Created by: wangjiawei04
To allow models exported from PaddleDetection to be deployed directly with Paddle Serving, tools/export_serving_model.py has been added. It is used the same way as export_model.py:
python tools/export_serving_model.py -c configs/faster_rcnn_r50_1x.yml --output_dir=./inference_model -o weights=output/faster_rcnn_r50_1x/best_model
The exported files are placed under inference_model/faster_rcnn_r50_1x:
infer_cfg.yml   # inference configuration file
serving_client  # Serving client-side model configuration
serving_server  # Serving server-side model configuration; the model parameters are also stored in this folder
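For reference, the exported directory typically looks roughly like this (a sketch; the exact file names inside serving_client and serving_server, such as the *_conf.prototxt configs and the saved program/parameter files, depend on the Paddle Serving version):

```
inference_model/faster_rcnn_r50_1x/
├── infer_cfg.yml
├── serving_client/
│   └── serving_client_conf.prototxt   # feed/fetch variable definitions used by the client
└── serving_server/
    ├── serving_server_conf.prototxt   # feed/fetch variable definitions used by the server
    ├── __model__                      # serialized inference program
    └── ...                            # model parameter files
```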
Prerequisites for starting Serving
pip install -U paddle-serving-server-gpu paddle-serving-client paddle-serving-app
Run the server:
GLOG_v=2 python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_id 0
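GLOG_v=2 only raises the logging verbosity and can be omitted. If you prefer to start the server from Python instead of the CLI, a minimal sketch using the paddle-serving-server-gpu 0.x API might look like the following (the general_reader/general_infer/general_response op pipeline is the stock one; exact APIs vary across versions):

```python
# start_server.py -- a minimal sketch, assuming the paddle-serving-server-gpu 0.x API
from paddle_serving_server_gpu import OpMaker, OpSeqMaker, Server

# Build the stock read -> infer -> respond op pipeline.
op_maker = OpMaker()
op_seq_maker = OpSeqMaker()
op_seq_maker.add_op(op_maker.create('general_reader'))
op_seq_maker.add_op(op_maker.create('general_infer'))
op_seq_maker.add_op(op_maker.create('general_response'))

server = Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.load_model_config("serving_server")  # folder produced by export_serving_model.py
server.set_gpuid(0)
server.prepare_server(workdir="workdir", port=9494, device="gpu")
server.run_server()
```

Then send a test request with the provided client: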
python test_client.py serving_client/serving_client_conf.prototxt infer_cfg.yml 000000570688.jpg
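test_client.py comes from the PaddleDetection serving example; a hand-rolled client follows the same pattern. The sketch below is an illustration using paddle-serving-client and paddle-serving-app, mirroring the argument order of the command above; the feed names (image, im_info, im_shape), the fetch name (multiclass_nms), and the preprocessing values are assumptions that must match serving_client_conf.prototxt and infer_cfg.yml for your exported model (the real script reads sys.argv[2], i.e. infer_cfg.yml, to configure preprocessing; here typical Faster R-CNN values are hardcoded):

```python
# client sketch -- NOT the shipped test_client.py; feed/fetch names are assumptions
import sys
import numpy as np
from paddle_serving_client import Client
from paddle_serving_app.reader import (
    Sequential, File2Image, BGR2RGB, Div, Normalize, Resize, Transpose)

# Typical Faster R-CNN preprocessing; in the real script these values
# should come from infer_cfg.yml (sys.argv[2]).
preprocess = Sequential([
    File2Image(), BGR2RGB(), Div(255.0),
    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
    Resize(640, 640), Transpose((2, 0, 1)),
])

client = Client()
client.load_client_config(sys.argv[1])  # serving_client/serving_client_conf.prototxt
client.connect(["127.0.0.1:9494"])

im = preprocess(sys.argv[3])            # the test image, e.g. 000000570688.jpg
h, w = im.shape[1], im.shape[2]
fetch_map = client.predict(
    feed={
        "image": im,                       # assumed feed names -- check
        "im_info": np.array([h, w, 1.0]),  # serving_client_conf.prototxt
        "im_shape": np.array([h, w, 1.0]),
    },
    fetch=["multiclass_nms"],              # assumed fetch name
)
print(fetch_map)
```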
For more details, see Faster RCNN on Paddle Serving.