
SSD Object Detection Algorithm: Paper Walkthrough


Collected paper translations: https://github.com/SnailTyan/deep-learning-papers-translation

SSD: Single Shot MultiBox Detector

Abstract

We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and the subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has accuracy competitive with methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For a 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X; for a 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at: https://github.com/weiliu89/caffe/tree/ssd.


1. Introduction

Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective Search work [1] through the current leading results on PASCAL VOC, COCO, and ILSVRC detection, all based on Faster R-CNN [2], albeit with deeper features such as [3]. While accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time applications. Often detection speed for these approaches is measured in seconds per frame (SPF), and even the fastest high-accuracy detector, Faster R-CNN, operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline (see related work in Sec. 4), but so far, significantly increased speed comes only at the cost of significantly decreased detection accuracy.


This paper presents the first deep network based object detector that does not resample pixels or features for bounding box hypotheses and is as accurate as approaches that do. This results in a significant improvement in speed for high-accuracy detection (59 FPS with 74.3% mAP on VOC2007 test, vs. Faster R-CNN at 7 FPS with 73.2% mAP or YOLO at 45 FPS with 63.4% mAP). The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. We are not the first to do this (cf. [4,5]), but by adding a series of improvements, we manage to increase the accuracy significantly over previous attempts. Our improvements include using small convolutional filters to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these modifications, especially using multiple layers for prediction at different scales, we can achieve high accuracy using relatively low resolution input, further increasing detection speed. While these contributions may seem small independently, we note that the resulting system improves accuracy on real-time detection for PASCAL VOC from 63.4% mAP for YOLO to 74.3% mAP for our SSD. This is a larger relative improvement in detection accuracy than that of the recent, very high-profile work on residual networks [3]. Furthermore, significantly improving the speed of high-quality detection can broaden the range of settings where computer vision is useful.


We summarize our contributions as follows:

We introduce SSD, a single-shot detector for multiple categories that is faster than the previous state-of-the-art for single shot detectors (YOLO), and significantly more accurate, in fact as accurate as slower techniques that perform explicit region proposals and pooling (including Faster R-CNN).

The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps.

To achieve high detection accuracy we produce predictions of different scales from feature maps of different scales, and explicitly separate predictions by aspect ratio.

These design features lead to simple end-to-end training and high accuracy, even on low resolution input images, further improving the speed vs. accuracy trade-off.

Experiments include timing and accuracy analysis on models with varying input sizes evaluated on PASCAL VOC, COCO, and ILSVRC, compared to a range of recent state-of-the-art approaches.


2. The Single Shot Detector (SSD)

This section describes our proposed SSD framework for detection (Sec. 2.1) and the associated training methodology (Sec. 2.2). Afterwards, Sec. 3 presents dataset-specific model details and experimental results.


2.1 Model

The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network. We then add auxiliary structure to the network to produce detections with the following key features:


Multi-scale feature maps for detection. We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf. Overfeat [4] and YOLO [5], which operate on a single scale feature map).


Convolutional predictors for detection. Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters. These are indicated on top of the SSD network architecture in Fig. 2. For a feature layer of size $m \times n$ with $p$ channels, the basic element for predicting parameters of a potential detection is a $3 \times 3 \times p$ small kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the $m \times n$ locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position at each feature map location (cf. the architecture of YOLO [5], which uses an intermediate fully connected layer instead of a convolutional filter for this step).

Fig. 2: A comparison between two single shot detection models: SSD and YOLO [5]. Our SSD model adds several feature layers to the end of a base network, which predict the offsets to default boxes of different scales and aspect ratios and their associated confidences. SSD with a 300×300 input size significantly outperforms its 448×448 YOLO counterpart in accuracy on VOC2007 test while also improving the speed.

Default boxes and aspect ratios. We associate a set of default bounding boxes with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of $k$ at a given location, we compute $c$ class scores and the $4$ offsets relative to the original default box shape. This results in a total of $(c+4)k$ filters that are applied around each location in the feature map, yielding $(c+4)kmn$ outputs for an $m \times n$ feature map. For an illustration of default boxes, please refer to Fig. 1. Our default boxes are similar to the anchor boxes used in Faster R-CNN [2], however we apply them to several feature maps of different resolutions. Allowing different default box shapes in several feature maps lets us efficiently discretize the space of possible output box shapes.

Fig. 1: SSD framework. (a) SSD only needs an input image and ground truth boxes for each object during training. In a convolutional fashion, we evaluate a small set (e.g. 4) of default boxes of different aspect ratios at each location in several feature maps with different scales (e.g. 8×8 and 4×4 in (b) and (c)). For each default box, we predict both the shape offsets and the confidences for all object categories ($(c_1, c_2, \dots, c_p)$). At training time, we first match these default boxes to the ground truth boxes. For example, we have matched two default boxes with the cat and one with the dog, which are treated as positives and the rest as negatives. The model loss is a weighted sum between localization loss (e.g. Smooth L1 [6]) and confidence loss (e.g. Softmax).
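
To make the $(c+4)k$ bookkeeping concrete, here is a short Python sketch that counts default boxes and predictor outputs for the SSD300 layout described later in the paper (square feature maps of size 38, 19, 10, 5, 3, 1 with 4 or 6 boxes per location); the helper itself is only illustrative.

```python
# Illustrative sketch: count default boxes and per-location predictor filters
# for an SSD300-style layout (feature map sizes and boxes-per-location as in
# the SSD300 model; 21 classes = 20 VOC classes + background).

feature_maps = [38, 19, 10, 5, 3, 1]   # spatial size of each prediction layer
boxes_per_loc = [4, 6, 6, 6, 4, 4]     # k for each layer
num_classes = 21                       # c (including background)

total_boxes = 0
for f, k in zip(feature_maps, boxes_per_loc):
    # (c + 4) * k small 3x3 filters are applied at every one of the f*f locations
    filters = (num_classes + 4) * k
    outputs = filters * f * f          # (c + 4) * k * m * n outputs for this layer
    total_boxes += k * f * f
    print(f"{f:>2}x{f:<2} map: {k} boxes/loc, {filters} filters, {outputs} outputs")

print("total default boxes:", total_boxes)  # 8732 for this layout
```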

2.2 Training

The key difference between training SSD and training a typical detector that uses region proposals, is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs. Some version of this is also required for training in YOLO[5] and for the region proposal stage of Faster R-CNN[2] and MultiBox[7]. Once this assignment is determined, the loss function and back propagation are applied end-to-end. Training also involves choosing the set of default boxes and scales for detection as well as the hard negative mining and data augmentation strategies.


Matching strategy. During training we need to determine which default boxes correspond to a ground truth detection and train the network accordingly. For each ground truth box we are selecting from default boxes that vary over location, aspect ratio, and scale. We begin by matching each ground truth box to the default box with the best jaccard overlap (as in MultiBox [7]). Unlike MultiBox, we then match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5). This simplifies the learning problem, allowing the network to predict high scores for multiple overlapping default boxes rather than requiring it to pick only the one with maximum overlap. (Translator's note: jaccard overlap is the same as IoU.)

Training objective. The SSD training objective is derived from the MultiBox objective [7,8] but is extended to handle multiple object categories. Let $x_{ij}^p = \{1, 0\}$ be an indicator for matching the $i$-th default box to the $j$-th ground truth box of category $p$. In the matching strategy above, we can have $\sum_i x_{ij}^p \geq 1$. The overall objective loss function is a weighted sum of the localization loss (loc) and the confidence loss (conf):

$$L(x, c, l, g) = \frac{1}{N}\big(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\big) \tag{1}$$

where $N$ is the number of matched default boxes. If $N = 0$, we set the loss to 0. The localization loss is a Smooth L1 loss [6] between the predicted box ($l$) and the ground truth box ($g$) parameters. Similar to Faster R-CNN [2], we regress to offsets for the center ($cx, cy$) of the default bounding box ($d$) and for its width ($w$) and height ($h$).

$$L_{loc}(x,l,g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^k \, \mathrm{smooth}_{L1}(l_i^m - \hat{g}_j^m)$$
$$\hat{g}_j^{cx} = (g_j^{cx} - d_i^{cx}) / d_i^w \qquad \hat{g}_j^{cy} = (g_j^{cy} - d_i^{cy}) / d_i^h$$
$$\hat{g}_j^{w} = \log\Big(\frac{g_j^w}{d_i^w}\Big) \qquad \hat{g}_j^{h} = \log\Big(\frac{g_j^h}{d_i^h}\Big) \tag{2}$$

The confidence loss is the softmax loss over multiple classes confidences ($c$):

$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^p \log(\hat{c}_i^p) - \sum_{i \in Neg} \log(\hat{c}_i^0) \quad \text{where} \quad \hat{c}_i^p = \frac{\exp(c_i^p)}{\sum_p \exp(c_i^p)} \tag{3}$$

and the weight term $\alpha$ is set to 1 by cross validation.
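
As a small worked example of the localization targets $\hat{g}$ in Eq. (2), the sketch below encodes one ground truth box against one matched default box and evaluates the Smooth L1 term; the box representation (center form, normalized coordinates) follows the equations above, and the variable names are mine.

```python
import math

def encode_offsets(gt, default):
    """Encode a ground truth box against a matched default box (Eq. 2).
    Boxes are (cx, cy, w, h) in normalized image coordinates."""
    g_cx, g_cy, g_w, g_h = gt
    d_cx, d_cy, d_w, d_h = default
    return (
        (g_cx - d_cx) / d_w,   # g-hat^{cx}
        (g_cy - d_cy) / d_h,   # g-hat^{cy}
        math.log(g_w / d_w),   # g-hat^{w}
        math.log(g_h / d_h),   # g-hat^{h}
    )

def smooth_l1(x):
    """Smooth L1 loss [6] applied element-wise to (l - g-hat)."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

# Example: a predicted offset l compared against the encoded target
target = encode_offsets(gt=(0.52, 0.48, 0.30, 0.40), default=(0.50, 0.50, 0.25, 0.35))
pred = (0.05, -0.02, 0.15, 0.10)
loc_loss = sum(smooth_l1(l - g) for l, g in zip(pred, target))
```
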
Choosing scales and aspect ratios for default boxes. To handle different object scales, some methods [4,9] suggest processing the image at different sizes and combining the results afterwards. However, by utilizing feature maps from several different layers in a single network for prediction we can mimic the same effect, while also sharing parameters across all object scales. Previous works [10,11] have shown that using feature maps from the lower layers can improve semantic segmentation quality because the lower layers capture more fine details of the input objects. Similarly, [12] showed that adding global context pooled from a feature map can help smooth the segmentation results. Motivated by these methods, we use both the lower and upper feature maps for detection. Figure 1 shows two exemplar feature maps (8×8 and 4×4) which are used in the framework. In practice, we can use many more with small computational overhead.

Feature maps from different levels within a network are known to have different (empirical) receptive field sizes [13]. Fortunately, within the SSD framework, the default boxes do not necessarily need to correspond to the actual receptive fields of each layer. We design the tiling of default boxes so that specific feature maps learn to be responsive to particular scales of the objects. Suppose we want to use $m$ feature maps for prediction. The scale of the default boxes for each feature map is computed as:

$$s_k = s_{\min} + \frac{s_{\max} - s_{\min}}{m - 1}(k - 1), \quad k \in [1, m]$$

where $s_{\min}$ is 0.2 and $s_{\max}$ is 0.9, meaning the lowest layer has a scale of 0.2 and the highest layer has a scale of 0.9, and all layers in between are regularly spaced. We impose different aspect ratios for the default boxes, and denote them as $a_r \in \{1, 2, 3, \frac{1}{2}, \frac{1}{3}\}$. We can compute the width ($w_k^a = s_k\sqrt{a_r}$) and height ($h_k^a = s_k / \sqrt{a_r}$) for each default box. For the aspect ratio of 1, we also add a default box whose scale is $s'_k = \sqrt{s_k s_{k+1}}$, resulting in 6 default boxes per feature map location. We set the center of each default box to $(\frac{i+0.5}{|f_k|}, \frac{j+0.5}{|f_k|})$, where $|f_k|$ is the size of the $k$-th square feature map and $i, j \in [0, |f_k|)$. In practice, one can also design a distribution of default boxes to best fit a specific dataset. How to design the optimal tiling is an open question as well.

By combining predictions for all default boxes with different scales and aspect ratios from all locations of many feature maps, we have a diverse set of predictions, covering various input object sizes and shapes. For example, in Fig. 1, the dog is matched to a default box in the 4×4 feature map, but not to any default boxes in the 8×8 feature map. This is because those boxes have different scales and do not match the dog box, and therefore are considered as negatives during training.
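
A minimal sketch of this tiling, assuming $m = 6$ prediction layers with the square feature map sizes used for SSD300 and a simplified, uniform scheme that places all 6 boxes at every layer (the released SSD300 uses only 4 at some layers); coordinates are normalized to [0, 1] and the helper names are mine.

```python
import itertools
import math

def default_boxes(feature_map_sizes, s_min=0.2, s_max=0.9,
                  aspect_ratios=(1, 2, 3, 1/2, 1/3)):
    """Generate (cx, cy, w, h) default boxes for m square feature maps."""
    m = len(feature_map_sizes)
    scales = [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]
    scales.append(1.0)  # s_{m+1} for the last layer's extra ratio-1 box (an assumption)
    boxes = []
    for k, f in enumerate(feature_map_sizes):
        for i, j in itertools.product(range(f), repeat=2):
            cx, cy = (j + 0.5) / f, (i + 0.5) / f
            for ar in aspect_ratios:
                boxes.append((cx, cy, scales[k] * math.sqrt(ar), scales[k] / math.sqrt(ar)))
            # extra box for aspect ratio 1 with scale s'_k = sqrt(s_k * s_{k+1})
            s_extra = math.sqrt(scales[k] * scales[k + 1])
            boxes.append((cx, cy, s_extra, s_extra))
    return boxes

boxes = default_boxes([38, 19, 10, 5, 3, 1])
print(len(boxes))  # 6 boxes per location under this simplified, uniform tiling
```
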
Hard negative mining. After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples. Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones so that the ratio between the negatives and positives is at most 3:1. We found that this leads to faster optimization and a more stable training.

Data augmentation. To make the model more robust to various input object sizes and shapes, each training image is randomly sampled by one of the following options:

Use the entire original input image.

Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9.

Randomly sample a patch.

The size of each sampled patch is [0.1, 1] of the original image size, and the aspect ratio is between $\frac{1}{2}$ and 2. We keep the overlapped part of the ground truth box if the center of it is in the sampled patch. After the aforementioned sampling step, each sampled patch is resized to a fixed size and is horizontally flipped with probability 0.5, in addition to applying some photo-metric distortions similar to those described in [14].
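A rough sketch of the 3:1 hard negative mining described above, assuming per-box confidence losses have already been computed; the array names are illustrative.

```python
import numpy as np

def hard_negative_mining(conf_loss, is_positive, neg_pos_ratio=3):
    """Select negatives with the highest confidence loss so that
    negatives : positives is at most neg_pos_ratio : 1."""
    conf_loss = np.asarray(conf_loss, dtype=float)
    is_positive = np.asarray(is_positive, dtype=bool)
    num_pos = int(is_positive.sum())
    num_neg = min(neg_pos_ratio * num_pos, int((~is_positive).sum()))
    neg_loss = np.where(is_positive, -np.inf, conf_loss)   # exclude positives
    selected_neg = np.argsort(-neg_loss)[:num_neg]          # highest losses first
    keep = is_positive.copy()
    keep[selected_neg] = True
    return keep  # boolean mask of default boxes contributing to the conf loss

# e.g. 8732 default boxes, a handful of positives
mask = hard_negative_mining(conf_loss=np.random.rand(8732),
                            is_positive=np.random.rand(8732) < 0.001)
```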

3. Experimental Results

Base network. Our experiments are all based on VGG16 [15], which is pre-trained on the ILSVRC CLS-LOC dataset [16]. Similar to DeepLab-LargeFOV [17], we convert fc6 and fc7 to convolutional layers, subsample parameters from fc6 and fc7, change pool5 from $2\times 2-s2$ to $3\times 3-s1$, and use the atrous algorithm [18] to fill the "holes". We remove all the dropout layers and the fc8 layer. We fine-tune the resulting model using SGD with initial learning rate $10^{-3}$, 0.9 momentum, 0.0005 weight decay, and batch size 32. The learning rate decay policy is slightly different for each dataset, and we will describe details later. The full training and testing code is built on Caffe [19] and is open source at: https://github.com/weiliu89/caffe/tree/ssd.
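A hedged PyTorch-style illustration of this base network surgery (the paper's implementation is in Caffe); the 1024-channel width and dilation of 6 below follow common SSD reimplementations rather than being stated in this text.

```python
# Sketch only: convert VGG16's fc6/fc7 into (atrous) convolutions and change
# pool5 from 2x2, stride 2 to 3x3, stride 1, as described above.
import torch.nn as nn

pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)            # was 2x2, stride 2
conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)  # fc6 as atrous conv
conv7 = nn.Conv2d(1024, 1024, kernel_size=1)                        # fc7 as 1x1 conv
# fc8 and the dropout layers are removed; conv6/conv7 would be initialized by
# subsampling the pretrained fully connected weights of fc6/fc7.
```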


3.1 PASCAL VOC2007

On this dataset, we compare against Fast R-CNN [6] and Faster R-CNN [2] on VOC2007 test (4952 images). All methods fine-tune on the same pre-trained VGG16 network.


Figure 2 shows the architecture details of the SSD300 model. We use conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 to predict both locations and confidences. We set the default box with scale 0.1 on conv4_3. We initialize the parameters for all the newly added convolutional layers with the "xavier" method [20]. For conv4_3, conv10_2 and conv11_2, we only associate 4 default boxes at each feature map location, omitting the aspect ratios of $\frac{1}{3}$ and 3. For all other layers, we put 6 default boxes as described in Sec. 2.2. Since, as pointed out in [12], conv4_3 has a different feature scale compared to the other layers, we use the L2 normalization technique introduced in [12] to scale the feature norm at each location in the feature map to 20 and learn the scale during back propagation. We use the $10^{-3}$ learning rate for 40k iterations, then continue training for 10k iterations with $10^{-4}$ and $10^{-5}$. When training on VOC2007 trainval, Table 1 shows that our low resolution SSD300 model is already more accurate than Fast R-CNN. When we train SSD on a larger $512\times 512$ input image, it is even more accurate, surpassing Faster R-CNN by 1.7% mAP. If we train SSD with more (i.e. 07+12) data, we see that SSD300 is already better than Faster R-CNN by 1.1% and that SSD512 is 3.6% better. If we take models trained on COCO trainval35k as described in Sec. 3.4 and fine-tune them on the 07+12 dataset with SSD512, we achieve the best results: 81.6% mAP.

Table 1: PASCAL VOC2007 test detection results. Both Fast and Faster R-CNN use input images whose minimum dimension is 600. The two SSD models have exactly the same settings except that they have different input sizes (300×300 vs. 512×512). It is obvious that larger input size leads to better results, and more data always helps. Data: "07": VOC2007 trainval, "07+12": union of VOC2007 and VOC2012 trainval, "07+12+COCO": first train on COCO trainval35k then fine-tune on 07+12.
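
A small numpy sketch of the conv4_3 L2 normalization step mentioned above, assuming a feature tensor of shape (channels, height, width); the per-channel scale is initialized to 20 and, in the actual model, learned during back propagation.

```python
import numpy as np

def l2norm_conv4_3(features, init_scale=20.0, eps=1e-10):
    """L2-normalize a (C, H, W) feature map across channels at every location,
    then multiply by a per-channel scale (learnable in training)."""
    norm = np.sqrt((features ** 2).sum(axis=0, keepdims=True)) + eps
    scale = np.full((features.shape[0], 1, 1), init_scale)
    return scale * features / norm

x = np.random.randn(512, 38, 38)   # conv4_3 output for a 300x300 input
y = l2norm_conv4_3(x)
```
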
To understand the performance of our two SSD models in more detail, we used the detection analysis tool from [21]. Figure 3 shows that SSD can detect various object categories with high quality (large white area). The majority of its confident detections are correct. The recall is around 85-90%, and is much higher with the "weak" (0.1 jaccard overlap) criteria. Compared to R-CNN [22], SSD has less localization error, indicating that SSD can localize objects better because it directly learns to regress the object shape and classify object categories instead of using two decoupled steps. However, SSD has more confusions with similar object categories (especially for animals), partly because we share locations for multiple categories. Figure 4 shows that SSD is very sensitive to the bounding box size. In other words, it has much worse performance on smaller objects than bigger objects. This is not surprising because those small objects may not even have any information at the very top layers. Increasing the input size (e.g. from 300×300 to 512×512) can help improve detecting small objects, but there is still a lot of room to improve. On the positive side, we can clearly see that SSD performs really well on large objects. And it is very robust to different object aspect ratios because we use default boxes of various aspect ratios per feature map location.

Fig. 3: Visualization of performance for SSD512 on animals, vehicles, and furniture from VOC2007 test. The top row shows the cumulative fraction of detections that are correct (Cor) or false positive due to poor localization (Loc), confusion with similar categories (Sim), with others (Oth), or with background (BG). The solid red line reflects the change of recall with strong criteria (0.5 jaccard overlap) as the number of detections increases. The dashed red line is using the weak criteria (0.1 jaccard overlap). The bottom row shows the distribution of top-ranked false positive types.

Fig. 4: Sensitivity and impact of different object characteristics on VOC2007 test set using [21]. The plot on the left shows the effects of BBox Area per category, and the right plot shows the effect of Aspect Ratio. Key: BBox Area: XS=extra-small; S=small; M=medium; L=large; XL=extra-large. Aspect Ratio: XT=extra-tall/narrow; T=tall; M=medium; W=wide; XW=extra-wide.

3.2 Model analysis

To understand SSD better, we carried out controlled experiments to examine how each component affects performance. For all the experiments, we use the same settings and input size (300 × 300), except for specified changes to the settings or component(s).


Data augmentation is crucial. Fast and Faster R-CNN use the original image and the horizontal flip to train. We use a more extensive sampling strategy, similar to YOLO [5]. Table 2 shows that we can improve by 8.8% mAP with this sampling strategy. We do not know how much our sampling strategy will benefit Fast and Faster R-CNN, but they are likely to benefit less because they use a feature pooling step during classification that is relatively robust to object translation by design.

Table 2: Effects of various design choices and components on SSD performance.

More default box shapes is better. As described in Sec. 2.2, by default we use 6 default boxes per location. If we remove the boxes with $\frac{1}{3}$ and 3 aspect ratios, the performance drops by 0.6%. By further removing the boxes with $\frac{1}{2}$ and 2 aspect ratios, the performance drops another 2.1%. Using a variety of default box shapes seems to make the task of predicting boxes easier for the network.

Atrous is faster. As described in Sec. 3, we used the atrous version of a subsampled VGG16, following DeepLab-LargeFOV [17]. If we use the full VGG16, keeping pool5 with 2×2-s2 and not subsampling parameters from fc6 and fc7, and add conv5_3 for prediction, the result is about the same while the speed is about 20% slower.

Multiple output layers at different resolutions is better. A major contribution of SSD is using default boxes of different scales on different output layers. To measure the advantage gained, we progressively remove layers and compare results. For a fair comparison, every time we remove a layer, we adjust the default box tiling to keep the total number of boxes similar to the original (8732). This is done by stacking more scales of boxes on remaining layers and adjusting scales of boxes if needed. We do not exhaustively optimize the tiling for each setting. Table 3 shows a decrease in accuracy with fewer layers, dropping monotonically from 74.3 to 62.4. When we stack boxes of multiple scales on a layer, many are on the image boundary and need to be handled carefully. We tried the strategy used in Faster R-CNN [2], ignoring boxes which are on the boundary. We observe some interesting trends. For example, it hurts the performance by a large margin if we use very coarse feature maps (e.g. conv11_2 (1×1) or conv10_2 (3×3)). The reason might be that we do not have enough large boxes to cover large objects after the pruning. When we use primarily finer resolution maps, the performance starts increasing again because even after pruning a sufficient number of large boxes remains. If we only use conv7 for prediction, the performance is the worst, reinforcing the message that it is critical to spread boxes of different scales over different layers. Besides, since our predictions do not rely on ROI pooling as in [6], we do not have the collapsing bins problem in low-resolution feature maps [23].
The SSD architecture combines predictions from feature maps of various resolutions to achieve comparable accuracy to Faster R-CNN, while using lower resolution input images.

Table 3: Effects of using multiple output layers.


3.3 PASCAL VOC2012

We use the same settings as those used for our basic VOC2007 experiments above, except that we use VOC2012 trainval and VOC2007 trainval and test (21503 images) for training, and test on VOC2012 test (10991 images). We train the models with $10^{-3}$ learning rate for 60k iterations, then $10^{-4}$ for 20k iterations. Table 4 shows the results of our SSD300 and SSD512 models. We see the same performance trend as we observed on VOC2007 test. Our SSD300 improves accuracy over Fast/Faster R-CNN. By increasing the training and testing image size to 512×512, we are 4.5% more accurate than Faster R-CNN. Compared to YOLO, SSD is significantly more accurate, likely due to the use of convolutional default boxes from multiple feature maps and our matching strategy during training. When fine-tuned from models trained on COCO, our SSD512 achieves 80.0% mAP, which is 4.1% higher than Faster R-CNN.

Table 4: PASCAL VOC2012 test detection results. Fast and Faster R-CNN use images with minimum dimension 600, while the image size for YOLO is 448×448. Data: "07++12": union of VOC2007 trainval and test and VOC2012 trainval, "07++12+COCO": first train on COCO trainval35k then fine-tune on 07++12.


3.4 COCO

To further validate the SSD framework, we trained our SSD300 and SSD512 architectures on the COCO dataset. Since objects in COCO tend to be smaller than PASCAL VOC, we use smaller default boxes for all layers. We follow the strategy mentioned in Sec. 2.2, but now our smallest default box has a scale of 0.15 instead of 0.2, and the scale of the default box on conv4_3 is 0.07 (e.g. 21 pixels for a 300 × 300 image).


We use the trainval35k [24] for training. We first train the model with $10^{-3}$ learning rate for 160k iterations, and then continue training for 40k iterations with $10^{-4}$ and 40k iterations with $10^{-5}$. Table 5 shows the results on test-dev2015. Similar to what we observed on the PASCAL VOC dataset, SSD300 is better than Fast R-CNN in both mAP@0.5 and mAP@[0.5:0.95]. SSD300 has a similar mAP@0.75 as ION [24] and Faster R-CNN [25], but is worse in mAP@0.5. By increasing the image size to 512×512, our SSD512 is better than Faster R-CNN [25] in both criteria. Interestingly, we observe that SSD512 is 5.3% better in mAP@0.75, but is only 1.2% better in mAP@0.5. We also observe that it has much better AP (4.8%) and AR (4.6%) for large objects, but has relatively less improvement in AP (1.3%) and AR (2.0%) for small objects. Compared to ION, the improvement in AR for large and small objects is more similar (5.4% vs. 3.9%). We conjecture that Faster R-CNN is more competitive on smaller objects than SSD because it performs two box refinement steps, in both the RPN part and in the Fast R-CNN part. In Fig. 5, we show some detection examples on COCO test-dev with the SSD512 model.

Table 5: COCO test-dev2015 detection results.

Fig. 5: Detection examples on COCO test-dev with the SSD512 model. We show detections with scores higher than 0.6. Each color corresponds to an object category.

3.5 Preliminary ILSVRC results

We applied the same network architecture we used for COCO to the ILSVRC DET dataset [16]. We train a SSD300 model using the ILSVRC2014 DET train and val1 as used in [22]. We first train the model with $10^{-3}$ learning rate for 320k iterations, and then continue training for 80k iterations with $10^{-4}$ and 40k iterations with $10^{-5}$. We can achieve 43.4 mAP on the val2 set [22]. Again, it validates that SSD is a general framework for high quality real-time detection.


3.6 Data Augmentation for Small Object Accuracy

Without a follow-up feature resampling step as in Faster R-CNN, the classification task for small objects is relatively hard for SSD, as demonstrated in our analysis (see Fig. 4). The data augmentation strategy described in Sec. 2.2 helps to improve the performance dramatically, especially on small datasets such as PASCAL VOC. The random crops generated by the strategy can be thought of as a "zoom in" operation and can generate many larger training examples. To implement a "zoom out" operation that creates more small training examples, we first randomly place an image on a canvas of 16× of the original image size filled with mean values before we do any random crop operation. Because we have more training images by introducing this new "expansion" data augmentation trick, we have to double the training iterations. We have seen a consistent increase of 2%-3% mAP across multiple datasets, as shown in Table 6. In particular, Figure 6 shows that the new augmentation trick significantly improves the performance on small objects. This result underscores the importance of the data augmentation strategy for the final model accuracy.

Table 6: Results on multiple datasets when we add the image expansion data augmentation trick. SSD300* and SSD512* are the models that are trained with the new data augmentation.

Fig. 6: Sensitivity and impact of object size with the new data augmentation on VOC2007 test set using [21]. The top row shows the effects of BBox Area per category for the original SSD300 and SSD512 models, and the bottom row corresponds to the SSD300* and SSD512* models trained with the new data augmentation trick. It is obvious that the new data augmentation trick helps detecting small objects significantly.
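A hedged sketch of the "zoom out" expansion step described above, assuming numpy image arrays of shape (H, W, 3); whether the 16× refers to area or side length is an implementation detail, so this sketch draws a ratio of up to 4 per side (16× the original area) and fills the canvas with a per-channel mean before the usual random crop would be applied.

```python
import numpy as np

def expand(image, mean, max_ratio=4.0, rng=np.random):
    """Randomly place `image` on a larger mean-filled canvas (a "zoom out"
    that creates more small training examples)."""
    h, w, c = image.shape
    ratio = rng.uniform(1.0, max_ratio)          # up to 4x per side (16x area)
    canvas_h, canvas_w = int(h * ratio), int(w * ratio)
    canvas = np.empty((canvas_h, canvas_w, c), dtype=image.dtype)
    canvas[...] = mean                            # fill with the dataset mean
    top = rng.randint(0, canvas_h - h + 1)
    left = rng.randint(0, canvas_w - w + 1)
    canvas[top:top + h, left:left + w] = image
    return canvas  # ground truth boxes would be shifted by (left, top) accordingly

# mean values below are the common VGG BGR means, used here only as an example
out = expand(np.zeros((300, 300, 3), dtype=np.float32), mean=[104, 117, 123])
```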


An alternative way of improving SSD is to design a better tiling of default boxes so that their positions and scales are better aligned with the receptive field of each position on a feature map. We leave this for future work.

3.7 Inference time

Considering the large number of boxes generated from our method, it is essential to perform non-maximum suppression (nms) efficiently during inference. By using a confidence threshold of 0.01, we can filter out most boxes. We then apply nms with jaccard overlap of 0.45 per class and keep the top 200 detections per image. This step costs about 1.7 msec per image for SSD300 and 20 VOC classes, which is close to the total time (2.4 msec) spent on all newly added layers. We measure the speed with batch size 8 using Titan X and cuDNN v4 with Intel Xeon E5-2667v3@3.20GHz.
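A plain-Python sketch of the per-class NMS step described above (confidence threshold 0.01, jaccard/IoU overlap threshold 0.45, top 200 detections kept); the box format and helper names are mine, and top_k is applied per call here rather than per image across classes.

```python
def iou(a, b):
    """Jaccard overlap of two boxes in (xmin, ymin, xmax, ymax) form."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thresh=0.01, iou_thresh=0.45, top_k=200):
    """Greedy non-maximum suppression as applied per class at inference time."""
    candidates = [(s, b) for s, b in zip(scores, boxes) if s > conf_thresh]
    candidates.sort(key=lambda x: -x[0])          # highest confidence first
    kept = []
    for score, box in candidates:
        if all(iou(box, k[1]) <= iou_thresh for k in kept):
            kept.append((score, box))
        if len(kept) == top_k:
            break
    return kept
```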


Table 7 shows the comparison between SSD, Faster R-CNN [2], and YOLO [5]. Both our SSD300 and SSD512 methods outperform Faster R-CNN in both speed and accuracy. Although Fast YOLO [5] can run at 155 FPS, it has lower accuracy by almost 22% mAP. To the best of our knowledge, SSD300 is the first real-time method to achieve above 70% mAP. Note that about 80% of the forward time is spent on the base network (VGG16 in our case). Therefore, using a faster base network could even further improve the speed, which can possibly make the SSD512 model real-time as well.

Table 7: Results on Pascal VOC2007 test. SSD300 is the only real-time detection method that can achieve above 70% mAP. By using a larger input image, SSD512 outperforms all methods on accuracy while maintaining a close to real-time speed.

4. Related Work

There are two established classes of methods for object detection in images, one based on sliding windows and the other based on region proposal classification. Before the advent of convolutional neural networks, the state of the art for those two approaches —— Deformable Part Model (DPM) [26] and Selective Search [1] —— had comparable performance. However, after the dramatic improvement brought on by R-CNN [22], which combines selective search region proposals and convolutional network based post-classification, region proposal object detection methods became prevalent.


The original R-CNN approach has been improved in a variety of ways. The first set of approaches improve the quality and speed of post-classification, since it requires the classification of thousands of image crops, which is expensive and time-consuming. SPPnet [9] speeds up the original R-CNN approach significantly. It introduces a spatial pyramid pooling layer that is more robust to region size and scale and allows the classification layers to reuse features computed over feature maps generated at several image resolutions. Fast R-CNN [6] extends SPPnet so that it can fine-tune all layers end-to-end by minimizing a loss for both confidences and bounding box regression, which was first introduced in MultiBox [7] for learning objectness.

The second set of approaches improve the quality of proposal generation using deep neural networks. In the most recent works like MultiBox [7,8], the Selective Search region proposals, which are based on low-level image features, are replaced by proposals generated directly from a separate deep neural network. This further improves the detection accuracy but results in a somewhat complex setup, requiring the training of two neural networks with a dependency between them. Faster R-CNN [2] replaces selective search proposals by ones learned from a region proposal network (RPN), and introduces a method to integrate the RPN with Fast R-CNN by alternating between fine-tuning shared convolutional layers and prediction layers for these two networks. This way region proposals are used to pool mid-level features and the final classification step is less expensive. Our SSD is very similar to the region proposal network (RPN) in Faster R-CNN in that we also use a fixed set of (default) boxes for prediction, similar to the anchor boxes in the RPN. But instead of using these to pool features and evaluate another classifier, we simultaneously produce a score for each object category in each box. Thus, our approach avoids the complication of merging RPN with Fast R-CNN and is easier to train, faster, and straightforward to integrate into other tasks.

Another set of methods, which are directly related to our approach, skip the proposal step altogether and predict bounding boxes and confidences for multiple categories directly. OverFeat [4], a deep version of the sliding window method, predicts a bounding box directly from each location of the topmost feature map after knowing the confidences of the underlying object categories. YOLO [5] uses the whole topmost feature map to predict both confidences for multiple categories and bounding boxes (which are shared for these categories).
Our SSD method falls in this category because we do not have the proposal step but use the default boxes. However, our approach is more flexible than the existing methods because we can use default boxes of different aspect ratios on each feature location from multiple feature maps at different scales. If we only use one default box per location from the topmost feature map, our SSD would have a similar architecture to OverFeat [4]; if we use the whole topmost feature map and add a fully connected layer for predictions instead of our convolutional predictors, and do not explicitly consider multiple aspect ratios, we can approximately reproduce YOLO [5].

5. Conclusions

This paper introduces SSD, a fast single-shot object detector for multiple categories. A key feature of our model is the use of multi-scale convolutional bounding box outputs attached to multiple feature maps at the top of the network. This representation allows us to efficiently model the space of possible box shapes. We experimentally validate that given appropriate training strategies, a larger number of carefully chosen default bounding boxes results in improved performance. We build SSD models with at least an order of magnitude more box predictions sampling location, scale, and aspect ratio, than existing methods [5,7]. We demonstrate that given the same VGG-16 base architecture, SSD compares favorably to its state-of-the-art object detector counterparts in terms of both accuracy and speed. Our SSD512 model significantly outperforms the state-of-the-art Faster R-CNN [2] in terms of accuracy on PASCAL VOC and COCO, while being 3× faster. Our real time SSD300 model runs at 59 FPS, which is faster than the current real time YOLO [5] alternative, while producing markedly superior detection accuracy.


Apart from its standalone utility, we believe that our monolithic and relatively simple SSD model provides a useful building block for larger systems that employ an object detection component. A promising future direction is to explore its use as part of a system using recurrent neural networks to detect and track objects in video simultaneously.

6. Acknowledgment

This work was started as an internship project at Google and continued at UNC. We would like to thank Alex Toshev for helpful discussions and are indebted to the Image Understanding and DistBelief teams at Google. We also thank Philip Ammirato and Patrick Poirson for helpful comments. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1446631, 1526367, 1533771.


References

1. Uijlings, J.R., van de Sande, K.E., Gevers, T., Smeulders, A.W.: Selective search for object recognition. IJCV (2013)
2. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: NIPS (2015)
3. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
4. Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y.: OverFeat: Integrated recognition, localization and detection using convolutional networks. In: ICLR (2014)
5. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: CVPR (2016)
6. Girshick, R.: Fast R-CNN. In: ICCV (2015)
7. Erhan, D., Szegedy, C., Toshev, A., Anguelov, D.: Scalable object detection using deep neural networks. In: CVPR (2014)
8. Szegedy, C., Reed, S., Erhan, D., Anguelov, D.: Scalable, high-quality object detection. arXiv preprint arXiv:1412.1441 v3 (2015)
9. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. In: ECCV (2014)
10. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR (2015)
11. Hariharan, B., Arbeláez, P., Girshick, R., Malik, J.: Hypercolumns for object segmentation and fine-grained localization. In: CVPR (2015)
12. Liu, W., Rabinovich, A., Berg, A.C.: ParseNet: Looking wider to see better. In: ICLR (2016)
13. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Object detectors emerge in deep scene CNNs. In: ICLR (2015)
14. Howard, A.G.: Some improvements on deep convolutional neural network based image classification. arXiv preprint arXiv:1312.5402 (2013)
15. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: NIPS (2015)
16. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. IJCV (2015)
17. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: ICLR (2015)
18. Holschneider, M., Kronland-Martinet, R., Morlet, J., Tchamitchian, P.: A real-time algorithm for signal analysis with the help of the wavelet transform. In: Wavelets. Springer (1990) 286-297
19. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: MM (2014)
20. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: AISTATS (2010)
21. Hoiem, D., Chodpathumwan, Y., Dai, Q.: Diagnosing error in object detectors. In: ECCV (2012)
22. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: CVPR (2014)
23. Zhang, L., Lin, L., Liang, X., He, K.: Is Faster R-CNN doing well for pedestrian detection? In: ECCV (2016)
24. Bell, S., Zitnick, C.L., Bala, K., Girshick, R.: Inside-Outside Net: Detecting objects in context with skip pooling and recurrent neural networks. In: CVPR (2016)
25. COCO: Common Objects in Context. http://mscoco.org/dataset/#detections-leaderboard (2016) [Online; accessed 25-July-2016]
26. Felzenszwalb, P., McAllester, D., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: CVPR (2008)

This concludes the walkthrough of the SSD object detection paper; I hope it is helpful.
