Commit 1d00041c authored by Bubbliiiing

create code

Parent ed437dbe
## YOLOV5: Implementation of the You Only Look Once object detection model in PyTorch (edition v6.1 in Ultralytics)
## YOLOV7: Implementation of the You Only Look Once object detection model in PyTorch
---
## Table of Contents
@@ -13,7 +13,7 @@
9. [Reference](#Reference)
## Top News
**`2022-05`**: **Repository created. Supports training models of different sizes (the n, s, m, l, and x versions of YOLOv5), step and cos learning-rate decay, a choice of Adam or SGD optimizers, learning rate adapted automatically to batch_size, image cropping, multi-GPU training, per-class object counting, heatmaps, and EMA.**
**`2022-07`**: **Repository created. Supports step and cos learning-rate decay, a choice of Adam or SGD optimizers, learning rate adapted automatically to batch_size, image cropping, multi-GPU training, per-class object counting, heatmaps, and EMA.**
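The learning-rate features listed above can be illustrated with a short sketch. This is not the repository's actual train.py: the nominal batch size (nbs = 64), the initial and minimum learning rates, and the step-decay rule below are assumptions for illustration only.

```python
import math

def get_lr_scheduler(lr_decay_type, init_lr, min_lr, total_epochs):
    """Return a function epoch -> learning rate for 'cos' or 'step' decay (sketch)."""
    if lr_decay_type == "cos":
        # half-cosine from init_lr down to min_lr over the whole training run
        return lambda epoch: min_lr + 0.5 * (init_lr - min_lr) * (1 + math.cos(math.pi * epoch / total_epochs))
    # simple step decay: halve the learning rate every tenth of training (assumed rule)
    step = max(1, total_epochs // 10)
    return lambda epoch: init_lr * (0.5 ** (epoch // step))

batch_size = 16
nbs        = 64                           # nominal batch size used for scaling (assumption)
init_lr    = 1e-2 * batch_size / nbs      # learning rate adapted to the actual batch size
scheduler  = get_lr_scheduler("cos", init_lr, init_lr * 0.01, total_epochs=300)
print([round(scheduler(e), 5) for e in (0, 150, 299)])
```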
## Related Repositories
| Model | Path |
@@ -26,15 +26,12 @@ Mobilenet-Yolov4 | https://github.com/bubbliiiing/mobilenet-yolov4-pytorch
YoloV5-V5.0 | https://github.com/bubbliiiing/yolov5-pytorch
YoloV5-V6.1 | https://github.com/bubbliiiing/yolov5-v6.1-pytorch
YoloX | https://github.com/bubbliiiing/yolox-pytorch
YoloV7 | https://github.com/bubbliiiing/yolov7-pytorch
## Performance
| Training Dataset | Weight File | Test Dataset | Input Image Size | mAP 0.5:0.95 | mAP 0.5 |
| :-----: | :-----: | :------: | :------: | :------: | :-----: |
| COCO-Train2017 | [yolov5_n_v6.1.pth](https://github.com/bubbliiiing/yolov5-v6.1-pytorch/releases/download/v1.0/yolov5_n_v6.1.pth) | COCO-Val2017 | 640x640 | 27.6 | 45.0
| COCO-Train2017 | [yolov5_s_v6.1.pth](https://github.com/bubbliiiing/yolov5-v6.1-pytorch/releases/download/v1.0/yolov5_s_v6.1.pth) | COCO-Val2017 | 640x640 | 37.0 | 56.2
| COCO-Train2017 | [yolov5_m_v6.1.pth](https://github.com/bubbliiiing/yolov5-v6.1-pytorch/releases/download/v1.0/yolov5_m_v6.1.pth) | COCO-Val2017 | 640x640 | 44.7 | 63.4
| COCO-Train2017 | [yolov5_l_v6.1.pth](https://github.com/bubbliiiing/yolov5-v6.1-pytorch/releases/download/v1.0/yolov5_l_v6.1.pth) | COCO-Val2017 | 640x640 | 48.4 | 66.6
| COCO-Train2017 | [yolov5_x_v6.1.pth](https://github.com/bubbliiiing/yolov5-v6.1-pytorch/releases/download/v1.0/yolov5_x_v6.1.pth) | COCO-Val2017 | 640x640 | 50.1 | 68.3
| COCO-Train2017 | [yolov7_weights.pth](https://github.com/bubbliiiing/yolov7-pytorch/releases/download/v1.0/yolov7_weights.pth) | COCO-Val2017 | 640x640 | 27.6 | 45.0
## Environment
torch==1.2.0
@@ -171,7 +168,4 @@ img/street.jpg
5. Run get_map.py to obtain the evaluation results, which will be saved in the map_out folder.
## Reference
https://github.com/qqwweee/keras-yolo3/
https://github.com/Cartucho/mAP
https://github.com/Ma-Dan/keras-yolo4
https://github.com/ultralytics/yolov5
https://github.com/WongKinYiu/yolov7
@@ -7,7 +7,7 @@ from nets.CSPdarknet import CSPDarknet, Conv, MP, RCSPDark_Block, RCSPDark_Trans
class SPPCSPC(nn.Module):
# CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 20)):
super(SPPCSPC, self).__init__()
c_ = int(2 * c2 * e) # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
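The hunk above shows only the first lines of SPPCSPC. For context, below is a minimal sketch of how a CSP-wrapped SPP block of this kind is usually assembled, following the layout of the upstream WongKinYiu/yolov7 code; the layer names cv2 through cv7 and the forward path are not taken from this commit and may differ from the repository's actual file. `Conv` is the convolution block imported from nets.CSPdarknet at the top of nets/yolo.py.

```python
import torch
import torch.nn as nn
from nets.CSPdarknet import Conv  # same import as at the top of nets/yolo.py

class SPPCSPCSketch(nn.Module):
    """Sketch of an SPP block wrapped in a CSP split (assumption, modeled on upstream yolov7)."""
    def __init__(self, c1, c2, e=0.5, k=(5, 9, 13)):
        super().__init__()
        c_ = int(2 * c2 * e)                       # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)              # main branch: 1x1 -> 3x3 -> 1x1
        self.cv2 = Conv(c1, c_, 1, 1)              # shortcut branch of the CSP split
        self.cv3 = Conv(c_, c_, 3, 1)
        self.cv4 = Conv(c_, c_, 1, 1)
        # parallel max-poolings with different kernel sizes (the SPP part)
        self.m   = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
        self.cv5 = Conv(4 * c_, c_, 1, 1)          # fuse the 4 concatenated pooling outputs
        self.cv6 = Conv(c_, c_, 3, 1)
        self.cv7 = Conv(2 * c_, c2, 1, 1)          # merge main and shortcut branches

    def forward(self, x):
        x1 = self.cv4(self.cv3(self.cv1(x)))
        y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], dim=1)))
        y2 = self.cv2(x)
        return self.cv7(torch.cat((y1, y2), dim=1))
```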
@@ -242,14 +242,13 @@ class YoloBody(nn.Module):
#-----------------------------------------------#
#---------------------------------------------------#
#   Build the CSPdarknet53 backbone model
#   Build the backbone model
#   Obtain three effective feature layers whose shapes are:
# 52,52,256
# 26,26,512
# 13,13,1024
# 80,80,512
# 40,40,1024
# 20,20,1024
#---------------------------------------------------#
# self.backbone = CSPDarknet(model, base_channels, base_depth)
self.backbone = CSPDarknet(base_channels)
self.backbone = CSPDarknet(base_channels, pretrained=pretrained)
self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
@@ -316,17 +315,17 @@ class YoloBody(nn.Module):
P5 = self.rep_conv_3(P5)
#---------------------------------------------------#
#   The third feature layer
# y3=(batch_size,75,52,52)
# y3=(batch_size,75,80,80)
#---------------------------------------------------#
out2 = self.yolo_head_P3(P3)
#---------------------------------------------------#
#   The second feature layer
# y2=(batch_size,75,26,26)
# y2=(batch_size,75,40,40)
#---------------------------------------------------#
out1 = self.yolo_head_P4(P4)
#---------------------------------------------------#
#   The first feature layer
# y1=(batch_size,75,13,13)
# y1=(batch_size,75,20,20)
#---------------------------------------------------#
out0 = self.yolo_head_P5(P5)
return out0, out1, out2
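A quick check of the head output shapes quoted in the comments above: the figure 75 corresponds to 3 anchors per level times (num_classes + 5) box attributes, which implies a 20-class (VOC-style) configuration; for the 80-class COCO dataset the channel count would be 255. This is an inference from the comments, not something stated in the commit.

```python
# Sanity check of the head output shapes quoted in the comments above (assumptions noted inline).
num_classes       = 20   # assumed VOC-style class count; COCO would use 80
anchors_per_level = 3    # anchors assigned to each of the three feature levels
box_attributes    = num_classes + 5          # x, y, w, h, objectness + class scores
channels          = anchors_per_level * box_attributes
assert channels == 75                        # matches y1/y2/y3 = (batch_size, 75, ...)

input_size = 640
strides    = (8, 16, 32)                     # P3, P4, P5 downsampling factors
print([input_size // s for s in strides])    # -> [80, 40, 20], the spatial sizes above
```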