Commit df7274eb, authored by Florian Westphal, committed by Steffen Klassert

xfrm: state: delay freeing until rcu grace period has elapsed

The hash table backend memory and the state structs are freed via
kfree/vfree.

Once lookups rely only on RCU, we have to make sure no other CPU is
still accessing this memory before doing the free.

Free operations already happen from a worker, so we can use
synchronize_rcu() to wait until concurrent readers are done.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Parent 02efdff7
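For readers unfamiliar with the technique, below is a minimal userspace sketch of the same "publish the new pointer, wait an RCU grace period, then free the old memory" pattern. It uses liburcu rather than the in-kernel RCU API, and the names here (struct cfg, read_cfg, update_cfg) are hypothetical, chosen only to illustrate what the patch does.

/*
 * Minimal sketch of delayed freeing with RCU, using liburcu.
 * Build with: cc rcu_demo.c -lurcu
 */
#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>	/* liburcu: rcu_read_lock(), synchronize_rcu(), ... */

struct cfg {
	int value;
};

static struct cfg *global_cfg;

/* Reader: the pointer is only valid inside an RCU read-side section. */
static int read_cfg(void)
{
	int v;

	rcu_read_lock();
	v = rcu_dereference(global_cfg)->value;
	rcu_read_unlock();
	return v;
}

/* Updater: swap in the new object, then delay the free until every
 * reader that might still see the old pointer has finished. */
static void update_cfg(int value)
{
	struct cfg *new_cfg = malloc(sizeof(*new_cfg));
	struct cfg *old_cfg = global_cfg;

	new_cfg->value = value;
	rcu_assign_pointer(global_cfg, new_cfg);

	synchronize_rcu();	/* wait until concurrent readers are done */
	free(old_cfg);		/* no CPU can still be accessing it */
}

int main(void)
{
	rcu_register_thread();	/* each thread using RCU must register */

	global_cfg = malloc(sizeof(*global_cfg));
	global_cfg->value = 1;

	printf("before: %d\n", read_cfg());
	update_cfg(2);
	printf("after:  %d\n", read_cfg());

	free(global_cfg);
	rcu_unregister_thread();
	return 0;
}

Note that synchronize_rcu() may block, which is why the patch can rely on it: as the commit message says, both free paths already run from a worker, i.e. from process context where sleeping is allowed.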
@@ -146,6 +146,9 @@ static void xfrm_hash_resize(struct work_struct *work)
 	spin_unlock_bh(&net->xfrm.xfrm_state_lock);
 
 	osize = (ohashmask + 1) * sizeof(struct hlist_head);
+
+	synchronize_rcu();
+
 	xfrm_hash_free(odst, osize);
 	xfrm_hash_free(osrc, osize);
 	xfrm_hash_free(ospi, osize);
@@ -369,6 +372,8 @@ static void xfrm_state_gc_task(struct work_struct *work)
 	hlist_move_list(&net->xfrm.state_gc_list, &gc_list);
 	spin_unlock_bh(&xfrm_state_gc_lock);
 
+	synchronize_rcu();
+
 	hlist_for_each_entry_safe(x, tmp, &gc_list, gclist)
 		xfrm_state_gc_destroy(x);
 }