commit 4d81db41
Author: Scott Feldman
Committer: David S. Miller

rocker: fix neigh tbl index increment race

rocker->neigh_tbl_next_index is used to generate unique indices for neigh
entries programmed into the device.  The way new indices were generated was
racy with the new prepare-commit transaction model.  A simple fix here
removes the race.  The race was between two processes getting the same index,
one using prepare-commit, the other not:

Proc A					Proc B

PREPARE phase
get neigh_tbl_next_index

					NONE phase
					get neigh_tbl_next_index
					neigh_tbl_next_index++

COMMIT phase
neigh_tbl_next_index++

Both A and B got the same index.  The fix is to store and increment
neigh_tbl_next_index in the PREPARE (or NONE) phase and use the stashed value
in the COMMIT phase (see the sketch after the diagram below):

Proc A					Proc B

PREPARE phase
get neigh_tbl_next_index
neigh_tbl_next_index++

					NONE phase
					get neigh_tbl_next_index
					neigh_tbl_next_index++

COMMIT phase
// use value stashed in PREPARE phase
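
For reference, this is how _rocker_neigh_add reads after the change (same code
as in the diff below; the comments are added here only to tie each branch back
to the phases above, and the surrounding driver context is omitted):

static void _rocker_neigh_add(struct rocker *rocker,
			      enum switchdev_trans trans,
			      struct rocker_neigh_tbl_entry *entry)
{
	/* PREPARE and NONE both take and bump the index here, so two
	 * callers can no longer observe the same value.
	 */
	if (trans != SWITCHDEV_TRANS_COMMIT)
		entry->index = rocker->neigh_tbl_next_index++;
	/* PREPARE stops here; COMMIT falls through and reuses the index
	 * stashed in the entry during PREPARE.
	 */
	if (trans == SWITCHDEV_TRANS_PREPARE)
		return;
	entry->ref_count++;
	hash_add(rocker->neigh_tbl, &entry->entry,
		 be32_to_cpu(entry->ip_addr));
}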
Reported-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent a0720310
@@ -2901,10 +2901,10 @@ static void _rocker_neigh_add(struct rocker *rocker,
 			      enum switchdev_trans trans,
 			      struct rocker_neigh_tbl_entry *entry)
 {
-	entry->index = rocker->neigh_tbl_next_index;
+	if (trans != SWITCHDEV_TRANS_COMMIT)
+		entry->index = rocker->neigh_tbl_next_index++;
 	if (trans == SWITCHDEV_TRANS_PREPARE)
 		return;
-	rocker->neigh_tbl_next_index++;
 	entry->ref_count++;
 	hash_add(rocker->neigh_tbl, &entry->entry,
 		 be32_to_cpu(entry->ip_addr));
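
As a quick illustration of the interleaving above, here is a small standalone
C model.  It is not driver code: the struct, the phase enum, and the helper
names are hypothetical stand-ins; neigh_add_old transcribes the pre-fix
behaviour as diagrammed above, and neigh_add_new follows the lines added in
the diff.  Replaying the Proc A / Proc B schedule shows the collision
disappear:

#include <stdio.h>

/* Hypothetical stand-ins for the driver state involved in the race. */
enum trans_phase { TRANS_NONE, TRANS_PREPARE, TRANS_COMMIT };

struct fake_rocker {
	unsigned int neigh_tbl_next_index;
};

struct fake_entry {
	unsigned int index;
};

/* Pre-fix behaviour as diagrammed above: the index is read in the
 * PREPARE (or NONE) phase but only incremented in the COMMIT (or NONE)
 * phase, leaving a window where another caller reads the same value.
 */
static void neigh_add_old(struct fake_rocker *r, enum trans_phase trans,
			  struct fake_entry *e)
{
	if (trans != TRANS_COMMIT)
		e->index = r->neigh_tbl_next_index;	/* get */
	if (trans == TRANS_PREPARE)
		return;
	r->neigh_tbl_next_index++;			/* ++  */
}

/* Post-fix behaviour, following the lines added in the diff: read and
 * increment together in PREPARE (or NONE); COMMIT reuses the value
 * already stashed in the entry during PREPARE.
 */
static void neigh_add_new(struct fake_rocker *r, enum trans_phase trans,
			  struct fake_entry *e)
{
	if (trans != TRANS_COMMIT)
		e->index = r->neigh_tbl_next_index++;
}

static void replay(const char *name,
		   void (*add)(struct fake_rocker *, enum trans_phase,
			       struct fake_entry *))
{
	struct fake_rocker rocker = { .neigh_tbl_next_index = 1 };
	struct fake_entry a = { 0 }, b = { 0 };

	add(&rocker, TRANS_PREPARE, &a);	/* Proc A: PREPARE phase */
	add(&rocker, TRANS_NONE, &b);		/* Proc B: NONE phase    */
	add(&rocker, TRANS_COMMIT, &a);		/* Proc A: COMMIT phase  */

	printf("%s: A=%u B=%u (%s)\n", name, a.index, b.index,
	       a.index == b.index ? "collision" : "unique");
}

int main(void)
{
	replay("old rule", neigh_add_old);	/* A=1 B=1: collision */
	replay("new rule", neigh_add_new);	/* A=1 B=2: unique    */
	return 0;
}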