The loss function of Faster R-CNN has the following form:

L({p(i)}, {t(i)}) = (1/N(cls)) * Σ(i) L(cls)(p(i), p(i)*) + λ * (1/N(reg)) * Σ(i) p(i)* * L(reg)(t(i), t(i)*)    (1)
p(i): the predicted classification probability of Anchor[i];
p(i)* = 1 when Anchor[i] is a positive sample; p(i)* = 0 when Anchor[i] is a negative sample.
What makes an Anchor a positive or a negative sample? An Anchor is a positive sample if it meets either condition: it has the largest IoU (Intersection-over-Union) overlap with a Ground Truth Box, or its IoU overlap with a Ground Truth Box is > 0.7. An Anchor is a negative sample if its IoU overlap with every Ground Truth Box is < 0.3. Anchors that are neither positive nor negative samples do not participate in training.
t(i): the parameterized coordinates (parameterized coordinates) of the Bounding Box predicted for Anchor[i];
t(i)*: the parameterized coordinates of the Ground Truth Bounding Box associated with Anchor[i];
N(cls): the mini-batch size;
N(reg): the number of Anchor locations;
L(reg)(t(i), t(i)*) = R(t(i) - t(i)*), where R is the Smooth L1 function;
the factor p(i)* means the Bounding Box regression is performed only for positive samples.
Smooth L1 neatly avoids the drawbacks of both the L1 and L2 losses: when x is small, the gradient with respect to x also becomes small, and when x is large, the absolute value of the gradient is capped at 1, so a very large prediction error cannot produce a huge gradient and destabilize training.
L(cls): the log loss over the two classes (object vs. not object);
λ: a weight-balancing parameter. The authors set λ = 10 in the paper, but their experiments show the result is insensitive to λ: as λ varies from 1 to 100, the effect on the final result stays within 1%.
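The parameterized coordinates t(i) above follow the box parameterization from the Faster R-CNN paper: tx = (x - xa)/wa, ty = (y - ya)/ha, tw = log(w/wa), th = log(h/ha), where (x, y, w, h) is the box center and size and (xa, ya, wa, ha) is the Anchor's. A minimal NumPy sketch of that transform and its inverse (function names are illustrative, not from the repo):

```python
import numpy as np

def bbox_transform(anchor, box):
    """Parameterize `box` relative to `anchor`; both are (x_center, y_center, w, h)."""
    xa, ya, wa, ha = anchor
    x, y, w, h = box
    return np.array([(x - xa) / wa, (y - ya) / ha, np.log(w / wa), np.log(h / ha)])

def bbox_transform_inv(anchor, t):
    """Invert the parameterization: recover the box from the anchor plus offsets t."""
    xa, ya, wa, ha = anchor
    tx, ty, tw, th = t
    return np.array([tx * wa + xa, ty * ha + ya, wa * np.exp(tw), ha * np.exp(th)])

anchor = np.array([50.0, 50.0, 20.0, 20.0])
gt = np.array([55.0, 48.0, 30.0, 24.0])
t = bbox_transform(anchor, gt)        # t[:2] == [0.25, -0.1]
recovered = bbox_transform_inv(anchor, t)  # equals gt
```

Because the regression targets are offsets normalized by the anchor size, the loss is scale-invariant with respect to the anchor.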
Smooth L1 Loss
```python
def _smooth_l1_loss(self, bbox_pred, bbox_targets, bbox_inside_weights, bbox_outside_weights, sigma=1.0, dim=[1]):
    sigma_2 = sigma ** 2
    box_diff = bbox_pred - bbox_targets
    in_box_diff = bbox_inside_weights * box_diff
    abs_in_box_diff = tf.abs(in_box_diff)
    smoothL1_sign = tf.stop_gradient(tf.to_float(tf.less(abs_in_box_diff, 1. / sigma_2)))
    in_loss_box = tf.pow(in_box_diff, 2) * (sigma_2 / 2.) * smoothL1_sign \
                  + (abs_in_box_diff - (0.5 / sigma_2)) * (1. - smoothL1_sign)
    out_loss_box = bbox_outside_weights * in_loss_box
    loss_box = tf.reduce_mean(tf.reduce_sum(
        out_loss_box,
        axis=dim
    ))
    return loss_box
```
The Smooth L1 Loss in the code is more general than the one in the paper: the sigma parameter moves the switch point between the quadratic and linear regions from |x| = 1 to |x| = 1/sigma², with the quadratic part rescaled so the function stays continuous.
bbox_inside_weights corresponds to p* in Equation (1) (the Faster R-CNN loss): its value is 1 when the Anchor is a positive sample and 0 when it is a negative sample. bbox_outside_weights absorbs the N(reg), λ, and N(cls) terms of Equation (1). In the paper, N(reg) = 2400, λ = 10, and N(cls) = 256, so the classification and regression losses carry roughly equal weight.
In the code, N(reg) = N(cls) and λ = 1, which again gives the two losses roughly equal weight.
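The piecewise form implemented by _smooth_l1_loss can be sanity-checked with a plain-NumPy sketch (illustrative only, not from the repo):

```python
import numpy as np

def smooth_l1(x, sigma=1.0):
    """Smooth L1 with the sigma generalization used in the TF code:
    0.5 * sigma^2 * x^2   if |x| <  1/sigma^2
    |x| - 0.5/sigma^2     otherwise
    """
    s2 = sigma ** 2
    quad = np.abs(x) < 1.0 / s2
    return np.where(quad, 0.5 * s2 * x ** 2, np.abs(x) - 0.5 / s2)

# With sigma=1 this is the classic Smooth L1: quadratic inside |x| < 1, linear outside.
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
vals = smooth_l1(x)   # [1.5, 0.125, 0.0, 0.125, 1.5]
```

With sigma_rpn = 3.0 (as in _add_losses below), the switch point shrinks to |x| = 1/9, so the RPN regression loss behaves almost like plain L1 except very close to zero.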
Loss
```python
def _add_losses(self, sigma_rpn=3.0):
    with tf.variable_scope('LOSS_' + self._tag) as scope:
        # RPN, class loss
        rpn_cls_score = tf.reshape(self._predictions['rpn_cls_score_reshape'], [-1, 2])
        rpn_label = tf.reshape(self._anchor_targets['rpn_labels'], [-1])
        rpn_select = tf.where(tf.not_equal(rpn_label, -1))
        rpn_cls_score = tf.reshape(tf.gather(rpn_cls_score, rpn_select), [-1, 2])
        rpn_label = tf.reshape(tf.gather(rpn_label, rpn_select), [-1])
        rpn_cross_entropy = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(logits=rpn_cls_score, labels=rpn_label))
        # RPN, bbox loss
        rpn_bbox_pred = self._predictions['rpn_bbox_pred']
        rpn_bbox_targets = self._anchor_targets['rpn_bbox_targets']
        rpn_bbox_inside_weights = self._anchor_targets['rpn_bbox_inside_weights']
        rpn_bbox_outside_weights = self._anchor_targets['rpn_bbox_outside_weights']
        rpn_loss_box = self._smooth_l1_loss(rpn_bbox_pred, rpn_bbox_targets, rpn_bbox_inside_weights,
                                            rpn_bbox_outside_weights, sigma=sigma_rpn, dim=[1, 2, 3])
        # RCNN, class loss
        cls_score = self._predictions["cls_score"]
        label = tf.reshape(self._proposal_targets["labels"], [-1])
        cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=cls_score, labels=label))
        # RCNN, bbox loss
        bbox_pred = self._predictions['bbox_pred']
        bbox_targets = self._proposal_targets['bbox_targets']
        bbox_inside_weights = self._proposal_targets['bbox_inside_weights']
        bbox_outside_weights = self._proposal_targets['bbox_outside_weights']
        loss_box = self._smooth_l1_loss(bbox_pred, bbox_targets, bbox_inside_weights, bbox_outside_weights)
        self._losses['cross_entropy'] = cross_entropy
        self._losses['loss_box'] = loss_box
        self._losses['rpn_cross_entropy'] = rpn_cross_entropy
        self._losses['rpn_loss_box'] = rpn_loss_box
        loss = cross_entropy + loss_box + rpn_cross_entropy + rpn_loss_box
        regularization_loss = tf.add_n(tf.losses.get_regularization_losses(), 'regu')
        self._losses['total_loss'] = loss + regularization_loss
        self._event_summaries.update(self._losses)
        return loss
```
The total loss comprises the RPN cross-entropy, the RPN Box regression loss, the RCNN cross-entropy, the RCNN Box regression loss, and the parameter regularization loss.
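Note how the tf.where/tf.gather pair in the RPN branch drops anchors labeled -1 (the "don't care" anchors, neither positive nor negative) before the cross-entropy is taken. A NumPy sketch of that filtering plus the softmax cross-entropy it feeds (illustrative, with fabricated toy logits):

```python
import numpy as np

rpn_label = np.array([1, -1, 0, -1, 1, 0])      # -1 = not sampled, excluded from the loss
rpn_cls_score = np.arange(12, dtype=np.float64).reshape(6, 2)  # toy (N, 2) logits

keep = rpn_label != -1                          # same role as tf.where(tf.not_equal(rpn_label, -1))
labels = rpn_label[keep]                        # [1, 0, 1, 0]
scores = rpn_cls_score[keep]

# numerically stable softmax cross-entropy, matching
# tf.nn.sparse_softmax_cross_entropy_with_logits + tf.reduce_mean
shifted = scores - scores.max(axis=1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(len(labels)), labels].mean()
```

Without the filtering step, the unassigned anchors (the vast majority on a feature map) would dominate the classification loss.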
Computing the IoU
```cython
def bbox_overlaps(
        np.ndarray[DTYPE_t, ndim=2] boxes,
        np.ndarray[DTYPE_t, ndim=2] query_boxes):
    """
    Parameters
    ----------
    boxes: (N, 4) ndarray of float
    query_boxes: (K, 4) ndarray of float
    Returns
    -------
    overlaps: (N, K) ndarray of overlap between boxes and query_boxes
    """
    cdef unsigned int N = boxes.shape[0]
    cdef unsigned int K = query_boxes.shape[0]
    cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)
    cdef DTYPE_t iw, ih, box_area
    cdef DTYPE_t ua
    cdef unsigned int k, n
    for k in range(K):
        box_area = (
            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *
            (query_boxes[k, 3] - query_boxes[k, 1] + 1)
        )
        for n in range(N):
            iw = (
                min(boxes[n, 2], query_boxes[k, 2]) -
                max(boxes[n, 0], query_boxes[k, 0]) + 1
            )
            if iw > 0:
                ih = (
                    min(boxes[n, 3], query_boxes[k, 3]) -
                    max(boxes[n, 1], query_boxes[k, 1]) + 1
                )
                if ih > 0:
                    ua = float(
                        (boxes[n, 2] - boxes[n, 0] + 1) *
                        (boxes[n, 3] - boxes[n, 1] + 1) +
                        box_area - iw * ih
                    )
                    overlaps[n, k] = iw * ih / ua
    return overlaps
```
The IoU (overlap ratio) is computed as IoU = C / (A + B - C), where A and B are the areas of the two boxes and C is the area of their intersection.
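The double loop in bbox_overlaps can also be written as a vectorized NumPy sketch that keeps the same +1 pixel convention for box width and height (the function name here is just for illustration):

```python
import numpy as np

def bbox_overlaps_np(boxes, query_boxes):
    """IoU between every pair: boxes (N, 4), query_boxes (K, 4), each row (x1, y1, x2, y2)."""
    # box areas, with the +1 convention used in the Cython version
    area_b = (boxes[:, 2] - boxes[:, 0] + 1) * (boxes[:, 3] - boxes[:, 1] + 1)
    area_q = (query_boxes[:, 2] - query_boxes[:, 0] + 1) * (query_boxes[:, 3] - query_boxes[:, 1] + 1)
    # intersection widths/heights for all N*K pairs via broadcasting; clip at 0 for disjoint boxes
    iw = (np.minimum(boxes[:, None, 2], query_boxes[None, :, 2])
          - np.maximum(boxes[:, None, 0], query_boxes[None, :, 0]) + 1).clip(min=0)
    ih = (np.minimum(boxes[:, None, 3], query_boxes[None, :, 3])
          - np.maximum(boxes[:, None, 1], query_boxes[None, :, 1]) + 1).clip(min=0)
    inter = iw * ih                                   # C
    union = area_b[:, None] + area_q[None, :] - inter  # A + B - C
    return inter / union

a = np.array([[0., 0., 9., 9.]])                      # one 10x10 box
b = np.array([[0., 0., 9., 9.], [5., 5., 14., 14.]])
ov = bbox_overlaps_np(a, b)                           # [[1.0, 25/175]]
```

For the second pair, the intersection is a 5x5 patch (C = 25) and both boxes have area 100, so IoU = 25 / (100 + 100 - 25) ≈ 0.143.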