Commit 68a4b481 authored by Daniel Jordan, committed by Xie XiuQi

vfio: remove unnecessary mmap_sem writer acquisition around locked_vm

hulk inclusion
category: feature
bugzilla: 13228
CVE: NA
---------------------------

Now that mmap_sem is no longer required for modifying locked_vm, remove
it in the VFIO code.

[XXX Can be sent separately, along with similar conversions in the other
places mmap_sem was taken for locked_vm.  While at it, could make
similar changes to pinned_vm.]
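For reference, here is a minimal userspace sketch of the accounting pattern this patch switches to: charge the pages with an atomic add-and-return, compare the result against the memlock limit, and roll the charge back on failure, all without taking mmap_sem. C11 atomics stand in for the kernel's atomic_long_t; demo_mm, demo_lock_acct and DEMO_LIMIT are illustrative names, not kernel APIs.

/*
 * Minimal userspace sketch of the lock-free accounting pattern adopted by
 * this patch.  C11 atomics stand in for the kernel's atomic_long_t; the
 * names demo_mm, demo_lock_acct and DEMO_LIMIT are illustrative only.
 */
#include <stdatomic.h>
#include <stdio.h>

#define DEMO_LIMIT 4096	/* stands in for task_rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT */

struct demo_mm {
	atomic_long locked_vm;	/* pages accounted as locked */
};

/* Charge npage pages against mm; undo the charge and fail if over the limit. */
static int demo_lock_acct(struct demo_mm *mm, long npage, int lock_cap)
{
	long locked_vm;

	if (!npage)
		return 0;

	/* Add first, check afterwards: no mmap_sem-style lock is required. */
	locked_vm = atomic_fetch_add(&mm->locked_vm, npage) + npage;

	if (npage > 0 && !lock_cap && locked_vm > DEMO_LIMIT) {
		atomic_fetch_sub(&mm->locked_vm, npage);	/* roll back the charge */
		return -1;	/* -ENOMEM in the kernel */
	}

	return 0;
}

int main(void)
{
	struct demo_mm mm = { .locked_vm = 0 };

	printf("charge 4000 pages: %d\n", demo_lock_acct(&mm, 4000, 0));	/* succeeds */
	printf("charge 200 more:   %d\n", demo_lock_acct(&mm, 200, 0));	/* exceeds limit, fails */
	printf("locked_vm is now:  %ld\n", (long)atomic_load(&mm.locked_vm));	/* still 4000 */
	return 0;
}

Because the charge happens before the limit check, a concurrent reader of locked_vm can briefly observe a value above the limit; the rollback keeps the final accounting correct without holding any lock.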
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Tested-by: Hongbo Yao <yaohongbo@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent 53f4e528
@@ -274,7 +274,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
 {
 	struct mm_struct *mm;
-	int ret;
+	long locked_vm;
+	int ret = 0;
 
 	if (!npage)
 		return 0;
@@ -283,25 +284,15 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
 	if (!mm)
 		return -ESRCH; /* process exited */
 
-	ret = down_write_killable(&mm->mmap_sem);
-	if (!ret) {
-		if (npage > 0) {
-			if (!dma->lock_cap) {
-				unsigned long limit;
-
-				limit = task_rlimit(dma->task,
-						RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
-				if (atomic_long_read(&mm->locked_vm) + npage >
-								limit)
-					ret = -ENOMEM;
-			}
+	locked_vm = atomic_long_add_return(npage, &mm->locked_vm);
+
+	if (npage > 0 && !dma->lock_cap) {
+		unsigned long limit = task_rlimit(dma->task, RLIMIT_MEMLOCK) >>
+				      PAGE_SHIFT;
+		if (locked_vm > limit) {
+			atomic_long_sub(npage, &mm->locked_vm);
+			ret = -ENOMEM;
 		}
-
-		if (!ret)
-			atomic_long_add(npage, &mm->locked_vm);
-
-		up_write(&mm->mmap_sem);
 	}
 
 	if (async)