Commit d45ff356 authored by Will Deacon, committed by Xie XiuQi

arm64: tlb: Rewrite stale comment in asm/tlbflush.h

mainline inclusion
from mainline-4.20-rc1
commit: 7f08872774eb971693ba79eeb2d4db364c9f5bfb
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

Peter Z asked me to justify the barrier usage in asm/tlbflush.h, but
actually that whole block comment needs to be rewritten.
Reported-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent f51b9fe4
@@ -70,43 +70,73 @@
 })
 
 /*
- *	TLB Management
- *	==============
- *
- *	The TLB specific code is expected to perform whatever tests it needs
- *	to determine if it should invalidate the TLB for each call.  Start
- *	addresses are inclusive and end addresses are exclusive; it is safe to
- *	round these addresses down.
- *
- *	flush_tlb_all()
- *
- *		Invalidate the entire TLB.
- *
- *	flush_tlb_mm(mm)
- *
- *		Invalidate all TLB entries in a particular address space.
- *		- mm	- mm_struct describing address space
- *
- *	flush_tlb_range(mm,start,end)
- *
- *		Invalidate a range of TLB entries in the specified address
- *		space.
- *		- mm	- mm_struct describing address space
- *		- start - start address (may not be aligned)
- *		- end	- end address (exclusive, may not be aligned)
- *
- *	flush_tlb_page(vaddr,vma)
- *
- *		Invalidate the specified page in the specified address range.
- *		- vaddr - virtual address (may not be aligned)
- *		- vma	- vma_struct describing address range
- *
- *	flush_kern_tlb_page(kaddr)
- *
- *		Invalidate the TLB entry for the specified page.  The address
- *		will be in the kernels virtual memory space.  Current uses
- *		only require the D-TLB to be invalidated.
- *		- kaddr - Kernel virtual memory address
+ *	TLB Invalidation
+ *	================
+ *
+ *	This header file implements the low-level TLB invalidation routines
+ *	(sometimes referred to as "flushing" in the kernel) for arm64.
+ *
+ *	Every invalidation operation uses the following template:
+ *
+ *	DSB ISHST	// Ensure prior page-table updates have completed
+ *	TLBI ...	// Invalidate the TLB
+ *	DSB ISH		// Ensure the TLB invalidation has completed
+ *	if (invalidated kernel mappings)
+ *		ISB	// Discard any instructions fetched from the old mapping
+ *
+ *
+ *	The following functions form part of the "core" TLB invalidation API,
+ *	as documented in Documentation/core-api/cachetlb.rst:
+ *
+ *	flush_tlb_all()
+ *		Invalidate the entire TLB (kernel + user) on all CPUs
+ *
+ *	flush_tlb_mm(mm)
+ *		Invalidate an entire user address space on all CPUs.
+ *		The 'mm' argument identifies the ASID to invalidate.
+ *
+ *	flush_tlb_range(vma, start, end)
+ *		Invalidate the virtual-address range '[start, end)' on all
+ *		CPUs for the user address space corresponding to 'vma->mm'.
+ *		Note that this operation also invalidates any walk-cache
+ *		entries associated with translations for the specified address
+ *		range.
+ *
+ *	flush_tlb_kernel_range(start, end)
+ *		Same as flush_tlb_range(..., start, end), but applies to
+ *		kernel mappings rather than a particular user address space.
+ *		Whilst not explicitly documented, this function is used when
+ *		unmapping pages from vmalloc/io space.
+ *
+ *	flush_tlb_page(vma, addr)
+ *		Invalidate a single user mapping for address 'addr' in the
+ *		address space corresponding to 'vma->mm'.  Note that this
+ *		operation only invalidates a single, last-level page-table
+ *		entry and therefore does not affect any walk-caches.
+ *
+ *
+ *	Next, we have some undocumented invalidation routines that you probably
+ *	don't want to call unless you know what you're doing:
+ *
+ *	local_flush_tlb_all()
+ *		Same as flush_tlb_all(), but only applies to the calling CPU.
+ *
+ *	__flush_tlb_kernel_pgtable(addr)
+ *		Invalidate a single kernel mapping for address 'addr' on all
+ *		CPUs, ensuring that any walk-cache entries associated with the
+ *		translation are also invalidated.
+ *
+ *	__flush_tlb_range(vma, start, end, stride, last_level)
+ *		Invalidate the virtual-address range '[start, end)' on all
+ *		CPUs for the user address space corresponding to 'vma->mm'.
+ *		The invalidation operations are issued at a granularity
+ *		determined by 'stride' and only affect any walk-cache entries
+ *		if 'last_level' is false.
+ *
+ *
+ *	Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
+ *	on top of these routines, since that is our interface to the mmu_gather
+ *	API as used by munmap() and friends.
  */
 static inline void local_flush_tlb_all(void)
 {
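A note on the template documented above: the four-step DSB ISHST / TLBI / DSB ISH / ISB sequence maps onto instructions roughly as sketched below. This is an illustrative sketch only, not code from the patch; the helper name is hypothetical, and the choice of the VALE1IS operation (invalidate by VA+ASID, last level only, inner-shareable broadcast) is just one possible instantiation of the "TLBI ..." step, but the barrier sequence is exactly the one the comment documents.

/*
 * Hypothetical helper (not part of this patch): invalidate one
 * last-level user TLB entry following the documented template.
 * 'tlbi_arg' is assumed to be a pre-encoded TLBI operand, i.e. the
 * page-aligned VA bits with the ASID in the upper bits, as built by
 * __TLBI_VADDR() earlier in this header.
 */
static inline void example_flush_user_page(unsigned long tlbi_arg)
{
	/* DSB ISHST: ensure prior page-table updates have completed */
	asm volatile("dsb ishst" : : : "memory");

	/* TLBI: drop the last-level entry for this VA+ASID on all CPUs */
	asm volatile("tlbi vale1is, %0" : : "r" (tlbi_arg));

	/* DSB ISH: wait for the invalidation to complete everywhere */
	asm volatile("dsb ish" : : : "memory");

	/*
	 * No trailing ISB: per the template, that step is only required
	 * after invalidating kernel mappings, to discard any instructions
	 * speculatively fetched through the old translation.
	 */
}

On top of this pattern, __flush_tlb_range() in the same header essentially repeats the TLBI step at 'stride'-sized intervals across [start, end), and in this patch series selects a non-last-level TLBI operation when 'last_level' is false so that walk-cache entries are invalidated as well.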