Commit 1b7ac282 authored by Eric.Lee2021 🚴🏻

add person detect

Parent b467d33c
# YOLO V3
- Object detection, including hand detection and face detection; because the datasets are independent, these are separate models.
+ Object detection, including hand detection, face detection, and person detection; because the datasets are independent, these are three separate models.
## Project Introduction
### 1. Hand Detection
@@ -14,6 +14,11 @@
* Video example:
![videoface](https://codechina.csdn.net/EricLee/yolo_v3/-/raw/master/samples/face.gif)
### 3. Person Detection
A person detection example is shown below:
* Video example:
![videoPerson](https://codechina.csdn.net/EricLee/yolo_v3/-/raw/master/samples/person.gif)
## Project Setup
* Author's development environment:
* Python 3.7
@@ -25,16 +30,20 @@
* [Dataset download link (Baidu Netdisk, password: c680)](https://pan.baidu.com/s/1H0YH8jMEXeIcubLEv0W_yw)
### 2. Face Detection Dataset
- This project uses the open-source WIDERFACE dataset; its download address is http://shuoyang1213.me/WIDERFACE/
+ This project uses the open-source WIDERFACE dataset, available at http://shuoyang1213.me/WIDERFACE/
```
@inproceedings{yang2016wider,
  Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
  Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  Title = {WIDER FACE: A Face Detection Benchmark},
  Year = {2016}}
```
* [Download link for the training set prepared by this project (Baidu Netdisk, password: r77x)](https://pan.baidu.com/s/1Jsm1qPPzAW46LRW5nUClzQ)
### 3. Person Detection Dataset
This project uses the open-source COCO dataset, available at https://cocodataset.org/
* [Download link for the training set prepared by this project (Baidu Netdisk, password: )]()
### Data Format
size is the resolution of the full image; (x, y) is the normalized center of the target object relative to the full image; w and h are the normalized width and height of the target's bounding box relative to the full image.
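For concreteness, here is a minimal sketch (not code from this repo) of converting a pixel-space box into this normalized format:
```python
# sketch: convert a pixel-space box (xmin, ymin, xmax, ymax) into the
# normalized (x, y, w, h) label format described above
def to_yolo_format(img_size, box):
    """img_size = (img_w, img_h) in pixels; box = (xmin, ymin, xmax, ymax) in pixels."""
    img_w, img_h = img_size
    xmin, ymin, xmax, ymax = box
    x = (xmin + xmax) / 2.0 / img_w   # normalized center x
    y = (ymin + ymax) / 2.0 / img_h   # normalized center y
    w = (xmax - xmin) / float(img_w)  # normalized width
    h = (ymax - ymin) / float(img_h)  # normalized height
    return x, y, w, h

# example: a 100x200-pixel box with top-left corner (50, 80) in a 416x416 image
print(to_yolo_format((416, 416), (50, 80, 150, 280)))
```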
@@ -65,6 +74,9 @@ Contextual Attention for Hand Detection in the Wild. S. Narasimhaswamy, Z. Wei,
### 2. Face Detection Pretrained Model
* [Pretrained model download link (Baidu Netdisk, password: l2a3)](https://pan.baidu.com/s/1xVtZUMD94DiT9FQQ66xG1A)
### 3. Person Detection Pretrained Model
* [Pretrained model download link (Baidu Netdisk, password: ise9)](https://pan.baidu.com/s/1mxiI-tOpE3sU-9TVPJmPWw)
## How to Use the Project
......
cfg_model=yolo
classes=1
gpus = 0
num_workers = 8
batch_size = 8
img_size = 416
multi_scale = True
epochs = 100
train=./yolo_person_train/anno/train.txt
valid=./yolo_person_train/anno/train.txt
names=./cfg/person.names
#finetune_model=./coco_model/yolov3_coco.pt
finetune_model = ./weights-yolov3-person/latest_416.pt
lr_step = 10,20,30
lr0 = 0.0001
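As a rough illustration only (this parser is a sketch, not necessarily the project's own utility), a key=value .data config like the one above could be read as follows:
```python
# sketch: read a simple key=value .data config such as cfg/person.data
def parse_data_cfg(path):
    options = {}
    with open(path, "r") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):   # skip blank lines and comments
                continue
            key, value = line.split("=", 1)
            options[key.strip()] = value.strip()
    return options

# e.g. options = parse_data_cfg("cfg/person.data")
# options["classes"] -> "1", options["train"] -> "./yolo_person_train/anno/train.txt"
```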
@@ -93,7 +93,7 @@ def detect(
colors = [(v // 32 * 64 + 64, (v // 8) % 4 * 64, v % 8 * 32) for v in range(1, num_classes + 1)][::-1]
- video_capture = cv2.VideoCapture("./video/bean.mp4")
+ video_capture = cv2.VideoCapture("./video/bean_1.mp4")
# url="http://admin:admin@192.168.43.1:8081"
# video_capture=cv2.VideoCapture(url)
@@ -151,7 +151,7 @@ def detect(
# print(conf, cls_conf)
# xyxy = refine_hand_bbox(xyxy,im0.shape)
- plot_one_box(xyxy, im0, label=label, color=(155,55,255),line_thickness = 3)
+ plot_one_box(xyxy, im0, label=label, color=(15,255,95),line_thickness = 3)
s2 = time.time()
print("detect time: {} \n".format(s2 - t))
@@ -177,8 +177,8 @@ def detect(
if __name__ == '__main__':
- voc_config = 'cfg/face.data'  # model config file
- model_path = './weights-yolov3-face/latest_416.pt'  # detection model path
+ voc_config = 'cfg/person.data'  # model config file
+ model_path = './weights-yolov3-person/latest_416.pt'  # detection model path
model_cfg = 'yolo' # yolo / yolo-tiny
img_size = 416 # input image size
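Switching between the face and person detectors here is done by editing these constants. As a purely optional sketch (not code from this repo), the same switch could be exposed on the command line, reusing the paths shown above:
```python
# optional sketch: choose the detector from the command line
# instead of editing the constants in __main__
import argparse

parser = argparse.ArgumentParser(description="YOLO v3 detector launcher (sketch)")
parser.add_argument("--task", choices=["face", "person"], default="person")
parser.add_argument("--img_size", type=int, default=416)
args = parser.parse_args()

# hypothetical mapping; the paths mirror the ones shown in the snippet above
CONFIGS = {
    "face":   ("cfg/face.data",   "./weights-yolov3-face/latest_416.pt"),
    "person": ("cfg/person.data", "./weights-yolov3-person/latest_416.pt"),
}
voc_config, model_path = CONFIGS[args.task]
model_cfg = "yolo"          # yolo / yolo-tiny
img_size = args.img_size    # input image size
```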
......
@@ -12,8 +12,11 @@ if __name__ == "__main__":
# path='./datasets_fusion_hand_train/anno/train.txt'
# path_voc_names = './cfg/hand.names'
- path='./yolo_widerface_open_train/anno/train.txt'
- path_voc_names = './cfg/face.names'
+ # path='./yolo_widerface_open_train/anno/train.txt'
+ # path_voc_names = './cfg/face.names'
+ path='./yolo_person_train/anno/train.txt'
+ path_voc_names = './cfg/person.names'
with open(path_voc_names, 'r') as f:
label_map = f.readlines()
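Note that readlines() keeps the trailing newline on each class name, so the labels are usually cleaned before use; a minimal, hypothetical helper (not part of the repo) might look like this:
```python
# hypothetical helper, not from the repo: load one class name per line,
# stripping newlines and empty lines
def load_classes(path):
    with open(path, "r") as f:
        return [line.strip() for line in f if line.strip()]

# label_map = load_classes(path_voc_names)   # e.g. ["person"] for cfg/person.names
```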
......
@@ -203,7 +203,8 @@ if __name__ == '__main__':
# train(data_cfg="cfg/hand.data")
- train(data_cfg = "cfg/face.data")
+ # train(data_cfg = "cfg/face.data")
+ train(data_cfg = "cfg/person.data")
print('well done ~ ')