Commit 8e0f18be authored by tripleMu, committed by 深度眸

[Feature] Add deploy dockerfile (#224)

* Add dockerfile for deploy

* Fix

* Update docker/Dockerfile_deployment
Co-authored-by: HinGwenWoong <peterhuang0323@qq.com>

* Update docker/Dockerfile_deployment
Co-authored-by: HinGwenWoong <peterhuang0323@qq.com>

* Add opencv old version

* Add dockerfile and fix some typo

* Remove repeat packages

* Fix undefined symbol bug

* Update docker/Dockerfile_deployment
Co-authored-by: Haian Huang(深度眸) <1286304229@qq.com>

* Update docker/Dockerfile_deployment
Co-authored-by: HinGwenWoong <peterhuang0323@qq.com>

* Add docs for deploy dockerfile

* Fix typo

* Update docs/zh_cn/advanced_guides/yolov5_deploy.md
Co-authored-by: HinGwenWoong <peterhuang0323@qq.com>

* Add profiler

* Update docs/zh_cn/advanced_guides/yolov5_deploy.md
Co-authored-by: HinGwenWoong <peterhuang0323@qq.com>

* Update docker/Dockerfile_deployment
Co-authored-by: Range King <RangeKingHZ@gmail.com>

* Remove mmcv-full
Co-authored-by: HinGwenWoong <peterhuang0323@qq.com>
Co-authored-by: Haian Huang(深度眸) <1286304229@qq.com>
Co-authored-by: Range King <RangeKingHZ@gmail.com>
Parent 9c250571
@@ -20,7 +20,7 @@ RUN rm /etc/apt/sources.list.d/cuda.list \
# pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
RUN apt-get update \
&& apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 \
&& apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
FROM nvcr.io/nvidia/pytorch:22.04-py3
WORKDIR /openmmlab
ARG ONNXRUNTIME_VERSION=1.8.1
ENV DEBIAN_FRONTEND=noninteractive \
APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=DontWarn \
FORCE_CUDA="1"
RUN apt-key del 7fa2af80 \
&& apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub \
&& apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
# (Optional)
# RUN sed -i 's/http:\/\/archive.ubuntu.com\/ubuntu\//http:\/\/mirrors.aliyun.com\/ubuntu\//g' /etc/apt/sources.list \
# && pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
RUN apt-get update \
&& apt-get install -y ffmpeg git libgl1-mesa-glx libopencv-dev \
libsm6 libspdlog-dev libssl-dev ninja-build libxext6 libxrender-dev \
libglib2.0-0 vim wget --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# get onnxruntime
RUN wget -q https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRUNTIME_VERSION}/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
&& tar -zxvf onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
&& pip install --no-cache-dir onnxruntime-gpu==${ONNXRUNTIME_VERSION} \
&& pip install pycuda
# Install OPENMIM MMENGINE MMCV MMDET
RUN pip install --no-cache-dir openmim \
&& mim install --no-cache-dir "mmengine>=0.3.0" "mmcv>=2.0.0rc1,<2.1.0" "mmdet>=3.0.0rc2,<3.1.0"
# Install MMYOLO
RUN git clone https://github.com/open-mmlab/mmyolo.git -b dev mmyolo \
&& cd mmyolo \
&& mim install --no-cache-dir -e . \
&& cd ..
# Install MMDEPLOY
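# The nvcr.io base image already ships TensorRT and cuDNN under
# /usr/lib/x86_64-linux-gnu, so the variables below point MMDeploy's build
# at those libraries and at the ONNXRuntime package extracted above.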
ENV ONNXRUNTIME_DIR=/openmmlab/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION} \
TENSORRT_DIR=/usr/lib/x86_64-linux-gnu \
CUDNN_DIR=/usr/lib/x86_64-linux-gnu \
BACKUP_LD_LIBRARY_PATH=$LD_LIBRARY_PATH \
LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$TENSORRT_DIR/lib:$CUDNN_DIR/lib64:$BACKUP_LD_LIBRARY_PATH
RUN git clone https://github.com/open-mmlab/mmdeploy -b dev-1.x mmdeploy \
&& cd mmdeploy \
&& git submodule update --init --recursive \
&& mkdir -p build \
&& cd build \
&& cmake -DMMDEPLOY_TARGET_BACKENDS="ort;trt" -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} -DTENSORRT_DIR=${TENSORRT_DIR} -DCUDNN_DIR=${CUDNN_DIR} .. \
&& make -j$(nproc) \
&& make install \
&& cd .. \
&& mim install --no-cache-dir -e .
# Fix undefined symbol bug
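# (writing `ldconfig` into /root/.bashrc makes each interactive shell refresh
# the dynamic linker cache, so the backend libraries installed above resolve)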
RUN echo -e "\nldconfig" > /root/.bashrc
@@ -209,7 +209,7 @@ use_efficientnms = False
Set the `MMDeploy` root directory as the environment variable `MMDEPLOY_DIR`, e.g. `export MMDEPLOY_DIR=/the/root/path/of/MMDeploy`
```shell
python3 &(MMDEPLOY_DIR)/tools/deploy.py \
python3 ${MMDEPLOY_DIR}/tools/deploy.py \
${DEPLOY_CFG_PATH} \
${MODEL_CFG_PATH} \
${MODEL_CHECKPOINT_PATH} \
@@ -239,10 +239,10 @@ python3 &(MMDEPLOY_DIR)/tools/deploy.py \
# Model Evaluation
After converting the PyTorch model to a backend-supported model, you may need to verify its accuracy, using `&(MMDEPLOY_DIR)/tools/test.py`
After converting the PyTorch model to a backend-supported model, you may need to verify its accuracy, using `${MMDEPLOY_DIR}/tools/test.py`
```shell
python3 &(MMDEPLOY_DIR)/tools/test.py \
python3 ${MMDEPLOY_DIR}/tools/test.py \
${DEPLOY_CFG} \
${MODEL_CFG} \
--model ${BACKEND_MODEL_FILES} \
@@ -282,31 +282,4 @@ python3 &(MMDEPLOY_DIR)/tools/test.py \
- `--warmup`: warm up before the inference time is measured; `speed-test` must be enabled first
- `--log-interval`: the interval between logs; `speed-test` must be set first
Note: the other parameters in `&(MMDEPLOY_DIR)/tools/test.py` are used for speed testing. They do not affect the evaluation.
# Model Inference
After you have evaluated the accuracy of the deployed model, you can use `mmyolo/demo/image_demo_deploy.py` to run inference on local images and choose to display or save the visualization results.
```shell
python3 demo/image_demo_deploy.py \
${IMG} \
${MODEL_CFG} \
${DEPLOY_CFG} \
${CHECKPOINT} \
[--out-dir ${OUTPUT_DIR}] \
[--device ${DEVICE}] \
[--show] \
[--score-thr ${SHOW_SCORE_THR}]
```
## Parameter Description
- `img`: the path to the image to detect, a folder path, or an image URL.
- `model_cfg`: the MMYOLO model config file.
- `deploy_cfg`: the deployment config file.
- `CHECKPOINT`: the exported backend model. For example, if we exported a TensorRT model, we need to pass in the path of the file with the ".engine" suffix.
- `--out-dir`: the folder in which to save the detection visualization results.
- `--device`: the device to run the model on. Note that some backends restrict the device; for example, TensorRT must run on cuda.
- `--show`: whether to display the detection visualization on screen. When this option is enabled, saving of the visualization is disabled.
- `--score-thr`: the threshold that decides whether a detection bounding box is displayed.
Note: the other parameters in `${MMDEPLOY_DIR}/tools/test.py` are used for speed testing. They do not affect the evaluation.
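For reference, a speed test that combines these flags could look like the following minimal sketch (reusing the placeholder config, model, and device values from the evaluation command above):
```shell
python3 ${MMDEPLOY_DIR}/tools/test.py \
${DEPLOY_CFG} \
${MODEL_CFG} \
--model ${BACKEND_MODEL_FILES} \
--speed-test \
--warmup 10 \
--log-interval 20 \
--device cuda:0
```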
@@ -215,7 +215,7 @@ export PATH_TO_CHECKPOINTS=/home/openmmlab/dev/mmdeploy/yolov5s.pth
#### ONNXRuntime
```shell
python3 &(MMDEPLOY_DIR)/tools/deploy.py \
python3 ${MMDEPLOY_DIR}/tools/deploy.py \
configs/deploy/detection_onnxruntime_static.py \
configs/deploy/model/yolov5_s-static.py \
${PATH_TO_CHECKPOINTS} \
@@ -228,7 +228,7 @@ python3 &(MMDEPLOY_DIR)/tools/deploy.py \
#### TensorRT
```bash
python3 &(MMDEPLOY_DIR)/tools/deploy.py \
python3 ${MMDEPLOY_DIR}/tools/deploy.py \
configs/deploy/detection_tensorrt_static-640x640.py \
configs/deploy/model/yolov5_s-static.py \
${PATH_TO_CHECKPOINTS} \
@@ -243,7 +243,7 @@ python3 &(MMDEPLOY_DIR)/tools/deploy.py \
#### ONNXRuntime
```shell
python3 &(MMDEPLOY_DIR)/tools/deploy.py \
python3 ${MMDEPLOY_DIR}/tools/deploy.py \
configs/deploy/detection_onnxruntime_dynamic.py \
configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py \
${PATH_TO_CHECKPOINTS} \
@@ -256,7 +256,7 @@ python3 &(MMDEPLOY_DIR)/tools/deploy.py \
#### TensorRT
```shell
python3 &(MMDEPLOY_DIR)/tools/deploy.py \
python3 ${MMDEPLOY_DIR}/tools/deploy.py \
configs/deploy/detection_tensorrt_dynamic-192x192-960x960.py \
configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py \
${PATH_TO_CHECKPOINTS} \
@@ -280,12 +280,12 @@ python3 &(MMDEPLOY_DIR)/tools/deploy.py \
## Model Evaluation
After the model is converted successfully, you can use the `&(MMDEPLOY_DIR)/tools/test.py` tool to evaluate the converted model. The following evaluates the `ONNXRuntime` and `TensorRT` static models; for dynamic model evaluation, simply change the model config passed in.
After the model is converted successfully, you can use the `${MMDEPLOY_DIR}/tools/test.py` tool to evaluate the converted model. The following evaluates the `ONNXRuntime` and `TensorRT` static models; for dynamic model evaluation, simply change the model config passed in.
### ONNXRuntime
```shell
python3 &(MMDEPLOY_DIR)/tools/test.py \
python3 ${MMDEPLOY_DIR}/tools/test.py \
configs/deploy/detection_onnxruntime_static.py \
configs/deploy/model/yolov5_s-static.py \
--model work_dir/end2end.onnx \
@@ -302,7 +302,7 @@ python3 &(MMDEPLOY_DIR)/tools/test.py \
**Note**: TensorRT requires the execution device to be `cuda`
```shell
python3 &(MMDEPLOY_DIR)/tools/test.py \
python3 ${MMDEPLOY_DIR}/tools/test.py \
configs/deploy/detection_tensorrt_static-640x640.py \
configs/deploy/model/yolov5_s-static.py \
--model work_dir/end2end.engine \
@@ -316,6 +316,116 @@ python3 &(MMDEPLOY_DIR)/tools/test.py \
**In the future, we will support more practical scripts such as model speed testing**
# Deployment Testing with Docker
`MMYOLO` provides a [`Dockerfile`](docker/Dockerfile_deployment) for building a deployment image. Please make sure your `docker` version is `19.03` or later.
Tip: users in mainland China are advised to uncomment the two lines after `Optional` in the [`Dockerfile`](docker/Dockerfile_deployment) to get rocket-fast download speeds:
```dockerfile
# (Optional)
RUN sed -i 's/http:\/\/archive.ubuntu.com\/ubuntu\//http:\/\/mirrors.aliyun.com\/ubuntu\//g' /etc/apt/sources.list && \
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
```
Build command:
```shell
# build an image with PyTorch 1.12, CUDA 11.6, TensorRT 8.2.4, ONNXRuntime 1.8.1
docker build -f docker/Dockerfile_deployment -t mmyolo:v1 .
```
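Once the build finishes, you can optionally sanity-check the image before mounting any data (a minimal smoke test, assuming the `mmyolo:v1` tag from the build command above):
```shell
# check that the key packages inside the image import cleanly
docker run --rm --gpus all mmyolo:v1 \
python3 -c "import mmdeploy, tensorrt; print(mmdeploy.__version__, tensorrt.__version__)"
```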
Run the Docker image with the following command:
```shell
export DATA_DIR=/path/to/your/dataset
docker run --gpus all --shm-size=8g -it --name mmyolo -v ${DATA_DIR}:/openmmlab/mmyolo/data/coco mmyolo:v1
```
`DATA_DIR` is the path to the COCO dataset.
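The evaluation steps below assume the standard COCO `val2017` layout under `DATA_DIR` (a typical arrangement; adjust it to your own dataset):
```shell
# expected structure of ${DATA_DIR}, mounted as data/coco inside the container
# ├── annotations
# │   └── instances_val2017.json
# └── val2017
#     └── *.jpg
```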
Copy the following script into the `docker` container as `/openmmlab/mmyolo/script.sh`:
```bash
#!/bin/bash
wget -q https://download.openmmlab.com/mmyolo/v0/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco/yolov5_s-v61_syncbn_fast_8xb16-300e_coco_20220918_084700-86e02187.pth \
-O yolov5s.pth
export MMDEPLOY_DIR=/openmmlab/mmdeploy
export PATH_TO_CHECKPOINTS=/openmmlab/mmyolo/yolov5s.pth
python3 ${MMDEPLOY_DIR}/tools/deploy.py \
configs/deploy/detection_tensorrt_static-640x640.py \
configs/deploy/model/yolov5_s-static.py \
${PATH_TO_CHECKPOINTS} \
demo/demo.jpg \
--work-dir work_dir_trt \
--device cuda:0
python3 ${MMDEPLOY_DIR}/tools/test.py \
configs/deploy/detection_tensorrt_static-640x640.py \
configs/deploy/model/yolov5_s-static.py \
--model work_dir_trt/end2end.engine \
--device cuda:0 \
--work-dir work_dir_trt
python3 ${MMDEPLOY_DIR}/tools/deploy.py \
configs/deploy/detection_onnxruntime_static.py \
configs/deploy/model/yolov5_s-static.py \
${PATH_TO_CHECKPOINTS} \
demo/demo.jpg \
--work-dir work_dir_ort \
--device cpu
python3 ${MMDEPLOY_DIR}/tools/test.py \
configs/deploy/detection_onnxruntime_static.py \
configs/deploy/model/yolov5_s-static.py \
--model work_dir_ort/end2end.onnx \
--device cpu \
--work-dir work_dir_ort
```
Run it under `/openmmlab/mmyolo`:
```shell
sh script.sh
```
The script automatically downloads the `YOLOv5` pre-trained weights of `MMYOLO` and uses `MMDeploy` to convert and test the model. You will see the following output:
- TensorRT:
![image](https://user-images.githubusercontent.com/92794867/199657349-1bad9196-c00b-4a65-84f5-80f51e65a2bd.png)
- ONNXRuntime:
![image](https://user-images.githubusercontent.com/92794867/199657283-95412e84-3ba4-463f-b4b2-4bf52ec4acbd.png)
As you can see, the mAP of the models deployed via `MMDeploy` is within 1% of the 37.7 mAP of [MMYOLO-YOLOv5](https://github.com/open-mmlab/mmyolo/tree/main/configs/yolov5).
If you need to test your model's inference speed, you can use the following commands:
- TensorRT
```shell
python3 ${MMDEPLOY_DIR}/tools/profiler.py \
configs/deploy/detection_tensorrt_static-640x640.py \
configs/deploy/model/yolov5_s-static.py \
data/coco/val2017 \
--model work_dir_trt/end2end.engine \
--device cuda:0
```
- ONNXRuntime
```shell
python3 ${MMDEPLOY_DIR}/tools/profiler.py \
configs/deploy/detection_onnxruntime_static.py \
configs/deploy/model/yolov5_s-static.py \
data/coco/val2017 \
--model work_dir_ort/end2end.onnx \
--device cpu
```
## Model Inference
TODO