Commit 93b3037a authored by Peter Zijlstra, committed by Dave Hansen

mm: Update ptep_get_lockless()'s comment

Improve the comment.
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221022114424.515572025%40infradead.org
Parent 60463628
...
@@ -300,15 +300,12 @@ static inline pte_t ptep_get(pte_t *ptep)
 
 #ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
 /*
- * WARNING: only to be used in the get_user_pages_fast() implementation.
- *
- * With get_user_pages_fast(), we walk down the pagetables without taking any
- * locks.  For this we would like to load the pointers atomically, but sometimes
- * that is not possible (e.g. without expensive cmpxchg8b on x86_32 PAE).  What
- * we do have is the guarantee that a PTE will only either go from not present
- * to present, or present to not present or both -- it will not switch to a
- * completely different present page without a TLB flush in between; something
- * that we are blocking by holding interrupts off.
+ * For walking the pagetables without holding any locks.  Some architectures
+ * (eg x86-32 PAE) cannot load the entries atomically without using expensive
+ * instructions.  We are guaranteed that a PTE will only either go from not
+ * present to present, or present to not present -- it will not switch to a
+ * completely different present page without a TLB flush inbetween; which we
+ * are blocking by holding interrupts off.
  *
  * Setting ptes from not present to present goes:
  *
......
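For context, the comment being updated describes the lockless read that CONFIG_GUP_GET_PTE_LOW_HIGH enables: on architectures where a 64-bit PTE can only be loaded as two 32-bit halves, the reader retries until the half containing the present bit is seen to be stable, and the caller keeps interrupts disabled so that a TLB flush (and therefore a switch to a different present page) cannot complete underneath it. The following is only a minimal sketch of that retry pattern, not the kernel's ptep_get_lockless() itself; the pae_pte type, the field names, and the use of compiler fences are assumptions made to keep the example self-contained.

/*
 * Illustrative sketch only (not the exact kernel code): reading a 64-bit
 * PTE that must be loaded as two 32-bit halves, without taking any locks.
 * The caller is assumed to have interrupts disabled, which blocks the TLB
 * flush that any present->present change would require.
 */
struct pae_pte {
	unsigned int pte_low;	/* half containing the present bit */
	unsigned int pte_high;
};

static struct pae_pte pte_read_lockless(volatile struct pae_pte *ptep)
{
	struct pae_pte pte;

	do {
		pte.pte_low = ptep->pte_low;
		__atomic_thread_fence(__ATOMIC_ACQUIRE);	/* read low before high */
		pte.pte_high = ptep->pte_high;
		__atomic_thread_fence(__ATOMIC_ACQUIRE);	/* read high before re-check */
		/*
		 * If pte_low changed, the entry went present<->not-present
		 * under us; retry until both halves form a consistent pair.
		 */
	} while (pte.pte_low != ptep->pte_low);

	return pte;
}

Because the guarantee quoted in the comment rules out a present PTE being replaced by a different present PTE while interrupts are off, seeing the same pte_low twice is enough to know the pte_high read in between belongs to the same entry.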