Commit 8bf54d76 authored by Miaohe Lin, committed by Zheng Zengkai

mm/memory.c: fix potential pte_unmap_unlock pte error

stable inclusion
from stable-5.10.20
commit 6c074ae0a482d97828522ceae0618a63bc1ab3aa
bugzilla: 50608

--------------------------------

[ Upstream commit 90a3e375 ]

Since commit 42e4089c ("x86/speculation/l1tf: Disallow non privileged
high MMIO PROT_NONE mappings"), when modification of the first pfn is not
allowed, we break out of the loop with pte unchanged.  The wrong pointer,
pte - 1, is then passed to pte_unmap_unlock.

Andi said:

 "While the fix is correct, I'm not sure if it actually is a real bug.
  Is there any architecture that would do something else than unlocking
  the underlying page? If it's just the underlying page then it should
  be always the same page, so no bug"

Link: https://lkml.kernel.org/r/20210109080118.20885-1-linmiaohe@huawei.com
Fixes: 42e4089c ("x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings")
Signed-off-by: Hongxiang Lou <louhongxiang@huawei.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent 8b68c0a4
@@ -2165,11 +2165,11 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			unsigned long addr, unsigned long end,
 			unsigned long pfn, pgprot_t prot)
 {
-	pte_t *pte;
+	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 	int err = 0;
 
-	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+	mapped_pte = pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
 	arch_enter_lazy_mmu_mode();
@@ -2183,7 +2183,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
-	pte_unmap_unlock(pte - 1, ptl);
+	pte_unmap_unlock(mapped_pte, ptl);
 	return err;
 }