曾经的那一瞬间 / Models
Commit f0b597ef
Authored on Nov 30, 2022 by A. Unique TensorFlower

Internal change

PiperOrigin-RevId: 492000777
Parent: f5dddb7b

Showing 2 changed files with 140 additions and 32 deletions (+140 −32):

- official/vision/modeling/layers/edgetpu.py (+60 −5)
- official/vision/modeling/layers/edgetpu_test.py (+80 −27)
official/vision/modeling/layers/edgetpu.py
```diff
@@ -158,7 +158,8 @@ _RECOMMENDED_NMS_MEMORY = 360000
 def non_max_suppression_padded(boxes: tf.Tensor,
                                scores: tf.Tensor,
                                output_size: int,
-                               iou_threshold: float = 0.5) -> tf.Tensor:
+                               iou_threshold: float = 0.5,
+                               refinements: int = 0) -> tf.Tensor:
   """Selects a subset of boxes which have highest score among IOU-similar boxes.
 
   Prunes away boxes that have high intersection-over-union (IOU) overlap
```
```diff
@@ -188,8 +189,10 @@ def non_max_suppression_padded(boxes: tf.Tensor,
       representing a single score corresponding to each box (each row of boxes).
     output_size: A scalar integer `Tensor` representing the maximum number of
       boxes to be selected by non-max suppression.
-    iou_threshold: A 0-D float tensor representing the threshold for deciding
-      whether boxes overlap too much with respect to IOU.
+    iou_threshold: A float representing the threshold for deciding whether boxes
+      overlap too much with respect to IOU.
+    refinements: A number of extra refinement steps to make result closer to
+      original sequential NMS.
 
   Returns:
     A 1-D+ integer `Tensor` of shape `[...batch_dims, output_size]` representing
```
```diff
@@ -206,15 +209,62 @@ def non_max_suppression_padded(boxes: tf.Tensor,
   for boxes_i, scores_i in shard_tensors(0, block, boxes, scores):
     indices.append(
         _non_max_suppression_as_is(boxes_i, scores_i, output_size,
-                                   iou_threshold))
+                                   iou_threshold, refinements))
   indices = tf.concat(indices, axis=0)
   return tf.reshape(indices, batch_shape + [output_size])
 
 
+def _refine_nms_graph_to_original_algorithm(better: tf.Tensor) -> tf.Tensor:
+  """Refines the relationship graph, bringing it closer to the iterative NMS.
+
+  See the `test_refinement_sample` unit test for an example, and the comments
+  in the body of the algorithm for the intuition.
+
+  Args:
+    better: a tensor with zeros and ones so that
+      [batch dims ..., box_1, box_2] represents the
+      [adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix)
+      for the [relation](https://en.wikipedia.org/wiki/Relation_(mathematics))
+      `better` between boxes box_1 and box_2.
+
+  Returns:
+    Modification of the tensor encoding the adjacency matrix of the `better`
+    relation.
+  """
+  # good_box: a tensor with zeros and ones so that [batch dims ..., box_i]
+  # represents membership of box_i in the `good` subset, defined as exactly
+  # those boxes that do not have any `better` boxes.
+  # INTUITION: In terms of the oriented graph, this is the subset of nodes
+  # nobody points to as "I'm better than you". These nodes will never be
+  # suppressed in the original NMS algorithm.
+  good_box = tf.constant(1.) - _reduce_or(better, axis=-1)
+  # good_better: a tensor with zeros and ones so that
+  # [batch dims ..., box_1, box_2] represents the adjacency matrix for the
+  # `good_better` relation on the set of all boxes, defined as the relation
+  # between a good box and the boxes it is better than.
+  # INTUITION: In terms of the oriented graph, this is the subset of edges
+  # whose source has no inbound edges. These edges represent suppression
+  # actions in the original NMS algorithm.
+  good_better = _and(tf.expand_dims(good_box, axis=-2), better)
+  # not_bad_box: a tensor with zeros and ones so that [batch dims ..., box_i]
+  # represents membership of box_i in the `not_bad` subset, defined as exactly
+  # those boxes that do not have any `good_better` boxes.
+  # INTUITION: These are the nodes which are not suppressed by `good` boxes
+  # in the original NMS algorithm.
+  not_bad_box = tf.constant(1.) - _reduce_or(good_better, axis=-1)
+  # return: a tensor with zeros and ones so that
+  # [batch dims ..., box_1, box_2] represents the adjacency matrix for a
+  # `better` relation on the set of all boxes which comes closer to
+  # representing the suppression procedure of the original NMS algorithm.
+  return _and(tf.expand_dims(not_bad_box, axis=-2), better)
+
+
 def _non_max_suppression_as_is(boxes: tf.Tensor,
                                scores: tf.Tensor,
                                output_size: int,
-                               iou_threshold: float = 0.5) -> tf.Tensor:
+                               iou_threshold: float = 0.5,
+                               refinements: int = 0) -> tf.Tensor:
   """Selects a subset of boxes which have highest score among IOU-similar boxes.
 
   Args:
```
```diff
@@ -225,6 +275,8 @@ def _non_max_suppression_as_is(boxes: tf.Tensor,
       boxes to be selected by non-max suppression.
     iou_threshold: A 0-D float tensor representing the threshold for deciding
       whether boxes overlap too much with respect to IOU.
+    refinements: A number of extra refinement steps to make result closer to
+      original sequential NMS.
 
   Returns:
     A 1-D+ integer `Tensor` of shape `[...batch_dims, output_size]` representing
```
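The docstring above says `refinements` brings the result closer to the "original sequential NMS". As a reference point, here is a minimal pure-Python sketch of that greedy sequential procedure; the `similar` pairs and scores are hypothetical inputs, chosen to match the four chained boxes used in `test_refinement_sample` in the test file:

```python
def sequential_nms(similar_pairs, scores):
  # Greedy sequential NMS over a precomputed similarity relation:
  # repeatedly keep the highest-scoring unsuppressed box and suppress
  # every box that is IOU-similar to it.
  # similar_pairs: set of frozensets {i, j} of IOU-similar box indices
  # (assumed precomputed with iou > threshold).
  order = sorted(range(len(scores)), key=lambda i: -scores[i])
  kept, suppressed = [], set()
  for i in order:
    if i in suppressed:
      continue
    kept.append(i)
    for j in order:
      if j != i and frozenset((i, j)) in similar_pairs:
        suppressed.add(j)
  return kept

# Four chained boxes: only neighbours are similar (IOU 7/13 > 0.5).
similar = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3)]}
print(sequential_nms(similar, [1.0, 0.9, 0.8, 0.7]))  # [0, 2]
```

Box 0 suppresses box 1; box 2 is then the best remaining box and suppresses box 3, so boxes 0 and 2 survive. This is the behaviour the refinement steps approximate.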
```diff
@@ -246,6 +298,9 @@ def _non_max_suppression_as_is(boxes: tf.Tensor,
   worse = _greater(relative_scores)
   same_later = _and(_same(relative_scores), _greater(relative_order))
   similar_worse_or_same_later = _and(similar, _or(worse, same_later))
+  for _ in range(refinements):
+    similar_worse_or_same_later = _refine_nms_graph_to_original_algorithm(
+        similar_worse_or_same_later)
   prunable = _reduce_or(similar_worse_or_same_later, axis=-1)
   remaining = tf.constant(1.) - prunable
   scores = tf.reshape(tf.exp(scores), [1, 1, batch_size, boxes_size])
```
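The effect of one refinement pass can be traced on a toy adjacency matrix. The sketch below re-implements `_refine_nms_graph_to_original_algorithm` in NumPy, using `max()` and `*` as stand-ins for the module's `_reduce_or` and `_and` helpers; the `better` matrix is a hypothetical input encoding the four chained boxes from the test, where `better[i, j] == 1` means box `j` suppresses box `i`:

```python
import numpy as np

# Chain 0 -> 1 -> 2 -> 3: each box is suppressed only by its left neighbour,
# since only neighbours are IOU-similar and scores are descending.
better = np.array([
    [0., 0., 0., 0.],
    [1., 0., 0., 0.],
    [0., 1., 0., 0.],
    [0., 0., 1., 0.],
])

def refine(better):
  """NumPy sketch of one _refine_nms_graph_to_original_algorithm pass."""
  # Boxes with no better box ("good"): never suppressed by sequential NMS.
  good_box = 1. - better.max(axis=-1)
  # Edges whose suppressing box is good: suppressions that really happen.
  good_better = np.expand_dims(good_box, axis=-2) * better
  # Boxes not suppressed by any good box survive this round.
  not_bad_box = 1. - good_better.max(axis=-1)
  # Keep only edges whose suppressing box itself survived.
  return np.expand_dims(not_bad_box, axis=-2) * better

# Without refinement, every box with any better neighbour is pruned,
# leaving only box 0 (matches refinements=0 in test_refinement_sample).
print(1. - better.max(axis=-1))          # [1. 0. 0. 0.]
# One refinement drops the 1 -> 2 suppression (box 1 is itself suppressed
# by box 0), so boxes 0 and 2 survive (matches refinements=1).
print(1. - refine(better).max(axis=-1))  # [1. 0. 1. 0.]
```

Each extra pass removes suppression edges originating from boxes that would themselves have been suppressed earlier, which is why more refinements converge toward the sequential result.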
official/vision/modeling/layers/edgetpu_test.py
```diff
@@ -44,16 +44,75 @@ def _maximum_activation_size(model):
   return max_size
 
 
+def _deviation_and_margin(reference, valid, optimized):
+  """Returns deviation and margin between two batched sets of indices."""
+  deviation_rate = 0
+  min_union = reference.shape[1] + optimized.shape[1]
+  runs = reference.shape[0]
+  for run in range(runs):
+    reference_slice = {*reference[run, :valid[run]].numpy().tolist()}
+    optimized_slice = {*optimized[run].numpy().astype(int).tolist()} - {-1}
+    union_size = len(optimized_slice | reference_slice)
+    symdiff_size = len(optimized_slice ^ reference_slice)
+    deviation_rate += symdiff_size / union_size
+    min_union = min(min_union, union_size)
+  deviation_rate = deviation_rate / runs
+  # six sigma estimate via LLN theorem
+  margin = 6 * (deviation_rate / np.sqrt(runs) + 1 / (runs * min_union))
+  return deviation_rate, margin
+
+
 class NonMaxSuppressionTest(parameterized.TestCase, tf.test.TestCase):
 
+  def setUp(self):
+    super().setUp()
+    tf.random.set_seed(42)
+
-  @parameterized.parameters((16, 8, 200, 0.009), (31, 17, 100, 0.013),
-                            (71, 41, 100, 0.045), (150, 100, 100, 0.129),
-                            (300, 300, 100, 0.116), (600, 600, 50, 0.176))
-  def test_reference_match(self, n, top, runs, max_deviation):
+  def test_refinement_sample(self):
+    """Tests the difference in NMS behaviours.
+
+    Runs on four boxes with the following IOU table (only neighbours qualify
+    as similar boxes):
+
+    box |  0   |  1   |  2   |  3
+    --- | ---- | ---- | ---- | ----
+     0  |  1   | 7/13 | 1/4  | 1/19
+     1  | 7/13 |  1   | 7/13 | 1/4
+     2  | 1/4  | 7/13 |  1   | 7/13
+     3  | 1/19 | 1/4  | 7/13 |  1
+
+    So 0 is the best box; it eliminates 1. Next is box 2, which is eliminated
+    by 1 if that is allowed (depending on the number of refinements).
+    """
+    boxes: tf.Tensor = tf.constant(
+        [
+            # y1, x1, y2, x2
+            [0.0, 0.0, 1.0, 1.0],
+            [0.0, 0.3, 1.0, 1.3],
+            [0.0, 0.6, 1.0, 1.6],
+            [0.0, 0.9, 1.0, 1.9],
+        ],
+        dtype=tf.float32)
+    scores: tf.Tensor = tf.constant([1.0, 0.9, 0.8, 0.7], dtype=tf.float32)
+    self.assertAllEqual(
+        edgetpu.non_max_suppression_padded(boxes, scores, 4, refinements=0),
+        tf.constant([0.0, -1.0, -1.0, -1.0], dtype=tf.float32))
+    self.assertAllEqual(
+        edgetpu.non_max_suppression_padded(boxes, scores, 4, refinements=1),
+        tf.constant([0.0, 2.0, -1.0, -1.0], dtype=tf.float32))
+
+  @parameterized.parameters((16, 8, 200, [0.009, 0.004, 0.004]),
+                            (31, 17, 100, [0.013, 0.004, 0.004]),
+                            (71, 41, 100, [0.045, 0.003, 0.002]),
+                            (150, 100, 100, [0.129, 0.010, 0.001]),
+                            (300, 300, 100, [0.116, 0.016, 0.002]),
+                            (600, 600, 50, [0.176, 0.032, 0.003]))
+  def test_reference_match(self, n, top, runs, max_devs):
     """Compares that new optimized method is close to reference method.
 
     Runs two algorithms with same sets of input boxes and scores, and measures
```
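The 7/13, 1/4 and 1/19 entries in the IOU table above follow directly from the box coordinates in the test: each unit-square box is shifted 0.3 to the right of the previous one. A standalone check, in plain Python and independent of the module under test:

```python
def iou(a, b):
  # a, b: boxes as [y1, x1, y2, x2], matching the test's coordinate order.
  iy = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection height
  ix = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection width
  inter = iy * ix
  area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
  return inter / (area(a) + area(b) - inter)

boxes = [[0.0, 0.0, 1.0, 1.0],
         [0.0, 0.3, 1.0, 1.3],
         [0.0, 0.6, 1.0, 1.6],
         [0.0, 0.9, 1.0, 1.9]]
print(round(iou(boxes[0], boxes[1]), 6))  # 0.538462 = 7/13 (neighbours)
print(round(iou(boxes[0], boxes[2]), 6))  # 0.25     = 1/4
print(round(iou(boxes[0], boxes[3]), 6))  # 0.052632 = 1/19
```

With the default `iou_threshold=0.5`, only the 7/13 neighbour pairs count as similar, which is what makes the chain structure of the test work.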
```diff
@@ -71,32 +130,26 @@ class NonMaxSuppressionTest(parameterized.TestCase, tf.test.TestCase):
       top: limit of output boxes count.
       runs: number of runs to perform for statistical testing, to avoid test
         flakiness.
-      max_deviation: mean limit on deviation between optimized and reference
-        algorithms. Please read notes why this number may be set higher to
-        avoid flaky testing.
+      max_devs: series of mean limits on deviation between optimized and
+        reference algorithms with different numbers of refinements. (Indexes
+        of elements correspond to the number of refinements.) Please use the
+        margin-based values proposed by a failed test to avoid flaky testing.
     """
-    deviation_rate = 0
-    min_union = 2 * n
     boxes = random_boxes([runs, n])
     scores = tf.random.uniform(shape=[runs, n])
-    test = edgetpu.non_max_suppression_padded(boxes, scores, top)
-    for run in range(runs):
-      reference = tf.image.non_max_suppression(boxes[run], scores[run], top)
-      reference = {*reference.numpy().tolist()}
-      optimized = {*test[run].numpy().astype(int).tolist()} - {-1}
-      union_size = len(optimized | reference)
-      deviation_rate += len(optimized ^ reference) / union_size
-      min_union = min(min_union, union_size)
-    deviation_rate = deviation_rate / runs
-    # six sigma estimate via LLN theorem
-    safe_margin = 6 * (deviation_rate / np.sqrt(runs) + 1 / (runs * min_union))
-    self.assertLess(
-        deviation_rate,
-        max_deviation,
-        msg='Deviation rate between optimized and reference implementations is '
-        'higher than expected. If you are tuning the test, recommended safe '
-        'deviation rate is '
-        f'{deviation_rate} + {safe_margin} = {deviation_rate + safe_margin}')
+    reference, valid = tf.image.non_max_suppression_padded(
+        boxes, scores, top, pad_to_max_output_size=True)
+    for refinements, max_deviation in enumerate(max_devs):
+      optimized = edgetpu.non_max_suppression_padded(
+          boxes, scores, top, refinements=refinements)
+      deviation, margin = _deviation_and_margin(reference, valid, optimized)
+      self.assertLess(
+          deviation,
+          max_deviation,
+          msg='Deviation rate between optimized and reference implementations '
+          'is higher than expected. If you are tuning the test, recommended '
+          'safe deviation rate is '
+          f'{deviation} + {margin} = {deviation + margin}')
 
   @parameterized.parameters(([16], 8), ([91, 150], 100), ([20, 20, 200], 10))
   def test_sharded_match(self, shape: list[int], top: int):
```
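The deviation metric used by `_deviation_and_margin` is the mean, over runs, of symmetric difference divided by union of the two index sets; the six-sigma margin then bounds how much that mean can wobble between test runs. A small sketch on hypothetical per-run index sets (plain Python, no TensorFlow):

```python
import math

# Hypothetical per-run index sets: reference NMS picks vs optimized picks.
reference = [{0, 2, 5}, {1, 3}, {0, 4, 7}]
optimized = [{0, 2, 7}, {1, 3}, {0, 4, 7}]

deviation_rate, min_union = 0.0, 10**9
for ref, opt in zip(reference, optimized):
  union = len(ref | opt)
  deviation_rate += len(ref ^ opt) / union  # symmetric difference / union
  min_union = min(min_union, union)
runs = len(reference)
deviation_rate /= runs
# Same six-sigma margin formula as _deviation_and_margin: shrinks as the
# number of runs grows, per the law of large numbers.
margin = 6 * (deviation_rate / math.sqrt(runs) + 1 / (runs * min_union))
print(deviation_rate)  # only run 0 deviates: (2/4) / 3 runs ≈ 0.167
```

A test tuned this way asserts `deviation < max_deviation`, and on failure suggests `deviation + margin` as a safe new bound, which is exactly what the `msg` in `test_reference_match` prints.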