YOLOv7 Improvement: ConvNeXt (replacing backbone modules with CNeB)


    1. Introduction

    Since ViT (Vision Transformer) made its mark in computer vision, more and more researchers have embraced Transformers. Over the past year, the vast majority of CV papers have been Transformer-based (for example Swin Transformer, the ICCV 2021 best paper), while convolutional networks have slowly faded from center stage. Will CNNs be replaced by Transformers? Perhaps in the near future. In January 2022, Facebook AI Research and UC Berkeley jointly published "A ConvNet for the 2020s", proposing ConvNeXt, a pure convolutional network positioned against the then-dominant Swin Transformer. Across a series of comparison experiments, at equal FLOPs ConvNeXt delivers faster inference and higher accuracy than Swin Transformer; pretrained on ImageNet-22K, ConvNeXt-XL reaches 87.8% top-1 accuracy on ImageNet-1K (see Table 12 of the original paper). ConvNeXt has, in effect, given convolutional networks a new lease on life.

    ConvNeXt is a convolutional network model proposed jointly by Facebook AI Research and UC Berkeley. It is a pure ConvNet built from standard convolutional modules, and it is accurate, efficient, scalable, and very simple in design. The accompanying paper, "A ConvNet for the 2020s", was published at CVPR 2022. ConvNeXt has been trained on the ImageNet-1K and ImageNet-22K datasets and performs strongly across a range of tasks; the training code and pretrained models are publicly available on GitHub.
    ConvNeXt is derived from ResNet-50 and, like Swin Transformer, has four stages. The difference is that ConvNeXt changes the per-stage block ratio from ResNet's 3:4:6:3 to Swin Transformer's 1:1:3:1. For downsampling, it also adopts Swin Transformer's "patchify" stem: a 4×4 convolution with stride 4.
    Advantages of ConvNeXt:
    It is a pure convolutional network built from standard modules, with high accuracy, high efficiency, good scalability, and a very simple design.
    Trained on ImageNet-1K and ImageNet-22K, it achieves excellent results on multiple tasks.
    It back-ports a number of ideas from Transformer networks into the classic ResNet-50/200 design, combining the strengths of both families to improve CNN performance.
    Drawbacks of ConvNeXt:
    It introduces no major innovation in the overall network framework; it merely adjusts the classic ResNet-50/200 design following ideas borrowed from Transformer networks.
    Compared with some other CNN models, it can require more computational resources in certain settings.
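
    The 4×4, stride-4 "patchify" stem mentioned above can be verified in a few lines. The stem width of 96 below is ConvNeXt-T's value, chosen here purely for illustration:

```python
import torch
import torch.nn as nn

# ConvNeXt's "patchify" stem: a 4x4 convolution with stride 4,
# so a 224x224 input is downsampled 4x in a single step.
stem = nn.Conv2d(3, 96, kernel_size=4, stride=4)
x = torch.randn(1, 3, 224, 224)
print(tuple(stem(x).shape))  # (1, 96, 56, 56)
```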

     

    2. Code to add the CNeB module to YOLOv7

    2.1 Add the following code to common.py

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    # (torch / nn / F are already imported at the top of common.py)

    def drop_path_f(x, drop_prob: float = 0., training: bool = False):
        """Stochastic-depth helper: randomly zeros whole samples in the batch."""
        if drop_prob == 0. or not training:
            return x
        keep_prob = 1 - drop_prob
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # broadcast over all non-batch dims
        random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
        random_tensor.floor_()  # binarize
        return x.div(keep_prob) * random_tensor

    class DropPath(nn.Module):
        """Drop paths (stochastic depth) per sample, applied in the main path of residual blocks."""
        def __init__(self, drop_prob=None):
            super().__init__()
            self.drop_prob = drop_prob

        def forward(self, x):
            return drop_path_f(x, self.drop_prob, self.training)

    class LayerNorm_s(nn.Module):
        """LayerNorm supporting channels_last (N, H, W, C) or channels_first (N, C, H, W) input."""
        def __init__(self, normalized_shape, eps=1e-6, data_format="channels_last"):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(normalized_shape))
            self.bias = nn.Parameter(torch.zeros(normalized_shape))
            self.eps = eps
            self.data_format = data_format
            if self.data_format not in ["channels_last", "channels_first"]:
                raise NotImplementedError
            self.normalized_shape = (normalized_shape,)

        def forward(self, x):
            if self.data_format == "channels_last":
                return F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
            elif self.data_format == "channels_first":
                u = x.mean(1, keepdim=True)
                s = (x - u).pow(2).mean(1, keepdim=True)
                x = (x - u) / torch.sqrt(s + self.eps)
                x = self.weight[:, None, None] * x + self.bias[:, None, None]
                return x

    class ConvNextBlock(nn.Module):
        """ConvNeXt block: 7x7 depthwise conv -> LayerNorm -> 1x1 expand -> GELU -> 1x1 project."""
        def __init__(self, dim, drop_path=0., layer_scale_init_value=1e-6):
            super().__init__()
            self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # depthwise conv
            self.norm = LayerNorm_s(dim, eps=1e-6)
            self.pwconv1 = nn.Linear(dim, 4 * dim)  # pointwise conv as Linear on channels-last
            self.act = nn.GELU()
            self.pwconv2 = nn.Linear(4 * dim, dim)
            self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(dim),
                                      requires_grad=True) if layer_scale_init_value > 0 else None
            self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()

        def forward(self, x):
            input = x
            x = self.dwconv(x)
            x = x.permute(0, 2, 3, 1)  # (N, C, H, W) -> (N, H, W, C)
            x = self.norm(x)
            x = self.pwconv1(x)
            x = self.act(x)
            x = self.pwconv2(x)
            if self.gamma is not None:
                x = self.gamma * x  # layer scale
            x = x.permute(0, 3, 1, 2)  # (N, H, W, C) -> (N, C, H, W)
            x = input + self.drop_path(x)
            return x

    class CNeB(nn.Module):
        # CSP ConvNextBlock with 3 convolutions, by iscyy/yoloair
        def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
            super().__init__()
            c_ = int(c2 * e)  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c1, c_, 1, 1)
            self.cv3 = Conv(2 * c_, c2, 1)
            self.m = nn.Sequential(*(ConvNextBlock(c_) for _ in range(n)))

        def forward(self, x):
            return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
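
    As a quick sanity check of the data flow inside ConvNextBlock, here is a minimal standalone sketch of one forward pass (layer scale and DropPath omitted; nn.LayerNorm stands in for LayerNorm_s, which behaves identically on channels_last input):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One ConvNeXt inverted-bottleneck pass, written out step by step.
dim = 64
dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # depthwise 7x7
norm = nn.LayerNorm(dim)            # applied on the last (channel) dim
pwconv1 = nn.Linear(dim, 4 * dim)   # 1x1 expand, as Linear on channels-last
pwconv2 = nn.Linear(4 * dim, dim)   # 1x1 project back

x = torch.randn(2, dim, 32, 32)
y = dwconv(x).permute(0, 2, 3, 1)   # (N, C, H, W) -> (N, H, W, C)
y = pwconv2(F.gelu(pwconv1(norm(y))))
y = y.permute(0, 3, 1, 2)           # back to (N, C, H, W)
out = x + y                         # residual add preserves the input shape
print(tuple(out.shape))  # (2, 64, 32, 32)
```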

    2.2 Create the yolov7CNEB.yaml configuration file

    # YOLOv7 🚀, GPL-3.0 license

    # parameters
    nc: 80  # number of classes
    depth_multiple: 0.33  # model depth multiple
    width_multiple: 1.0  # layer channel multiple

    # anchors
    anchors:
      - [12,16, 19,36, 40,28]  # P3/8
      - [36,75, 76,55, 72,146]  # P4/16
      - [142,110, 192,243, 459,401]  # P5/32

    # yolov7 backbone by yoloair
    backbone:
      # [from, number, module, args]
      [[-1, 1, Conv, [32, 3, 1]],  # 0
       [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2
       [-1, 1, Conv, [64, 3, 1]],
       [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4
       [-1, 1, CNeB, [128]],
       [-1, 1, Conv, [256, 3, 2]],
       [-1, 1, MP, []],
       [-1, 1, Conv, [128, 1, 1]],
       [-3, 1, Conv, [128, 1, 1]],
       [-1, 1, Conv, [128, 3, 2]],
       [[-1, -3], 1, Concat, [1]],  # 16-P3/8
       [-1, 1, Conv, [128, 1, 1]],
       [-2, 1, Conv, [128, 1, 1]],
       [-1, 1, Conv, [128, 3, 1]],
       [-1, 1, Conv, [128, 3, 1]],
       [-1, 1, Conv, [128, 3, 1]],
       [-1, 1, Conv, [128, 3, 1]],
       [[-1, -3, -5, -6], 1, Concat, [1]],
       [-1, 1, Conv, [512, 1, 1]],
       [-1, 1, MP, []],
       [-1, 1, Conv, [256, 1, 1]],
       [-3, 1, Conv, [256, 1, 1]],
       [-1, 1, Conv, [256, 3, 2]],
       [[-1, -3], 1, Concat, [1]],
       [-1, 1, Conv, [256, 1, 1]],
       [-2, 1, Conv, [256, 1, 1]],
       [-1, 1, Conv, [256, 3, 1]],
       [-1, 1, Conv, [256, 3, 1]],
       [-1, 1, Conv, [256, 3, 1]],
       [-1, 1, Conv, [256, 3, 1]],
       [[-1, -3, -5, -6], 1, Concat, [1]],
       [-1, 1, Conv, [1024, 1, 1]],
       [-1, 1, MP, []],
       [-1, 1, Conv, [512, 1, 1]],
       [-3, 1, Conv, [512, 1, 1]],
       [-1, 1, Conv, [512, 3, 2]],
       [[-1, -3], 1, Concat, [1]],
       [-1, 1, CNeB, [1024]],
       [-1, 1, Conv, [256, 3, 1]],
      ]

    # yolov7 head by yoloair
    head:
      [[-1, 1, SPPCSPC, [512]],
       [-1, 1, Conv, [256, 1, 1]],
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [31, 1, Conv, [256, 1, 1]],
       [[-1, -2], 1, Concat, [1]],
       [-1, 1, CNeB, [128]],
       [-1, 1, Conv, [128, 1, 1]],
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [18, 1, Conv, [128, 1, 1]],
       [[-1, -2], 1, Concat, [1]],
       [-1, 1, CNeB, [128]],
       [-1, 1, MP, []],
       [-1, 1, Conv, [128, 1, 1]],
       [-3, 1, Conv, [128, 1, 1]],
       [-1, 1, Conv, [128, 3, 2]],
       [[-1, -3, 44], 1, Concat, [1]],
       [-1, 1, CNeB, [256]],
       [-1, 1, MP, []],
       [-1, 1, Conv, [256, 1, 1]],
       [-3, 1, Conv, [256, 1, 1]],
       [-1, 1, Conv, [256, 3, 2]],
       [[-1, -3, 39], 1, Concat, [1]],
       [-1, 3, CNeB, [512]],

       # detection heads -----------------------------
       [49, 1, RepConv, [256, 3, 1]],
       [55, 1, RepConv, [512, 3, 1]],
       [61, 1, RepConv, [1024, 3, 1]],

       [[62,63,64], 1, IDetect, [nc, anchors]],  # Detect(P3, P4, P5)
      ]

    2.3 In models/yolo.py, find the parse_model function and add the following branch

    elif m is CNeB:
        c1, c2 = ch[f], args[0]
        if c2 != no:
            c2 = make_divisible(c2 * gw, 8)
        args = [c1, c2, *args[1:]]
        if m is CNeB:
            args.insert(2, n)
            n = 1
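
    To see what this branch does, here is a hypothetical trace for the backbone entry [-1, 1, CNeB, [128]] with width_multiple: 1.0, assuming the incoming layer has 64 channels. make_divisible is reproduced inline for the sketch; in yolo.py it is imported from the repo's utils:

```python
import math

def make_divisible(x, divisor):
    # Same rounding helper YOLOv7 uses: round up to the nearest multiple of divisor.
    return math.ceil(x / divisor) * divisor

gw = 1.0      # width_multiple from the yaml
n = 1         # repeat count ("number" column of the yaml row)
ch_f = 64     # channels of the layer this CNeB reads from (assumed value)
args = [128]  # args column of the yaml row

c1, c2 = ch_f, args[0]
c2 = make_divisible(c2 * gw, 8)  # scale output channels by width_multiple
args = [c1, c2, *args[1:]]
args.insert(2, n)                # repeat count becomes CNeB's n argument
print(args)  # [64, 128, 1] -> CNeB(64, 128, 1)
```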

    The modification is complete.

    Original article: https://blog.csdn.net/weixin_45303602/article/details/133350107