- 10 January 2014, 1 commit
Submitted by Kevin Hao
The e500v1 doesn't implement MAS7, so we should avoid accessing this register on that implementation. In the current kernel, accesses to MAS7 are protected by either CONFIG_PHYS_64BIT or MMU_FTR_BIG_PHYS. Since some of this code runs before code patching, we have to use CONFIG_PHYS_64BIT in those cases.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
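A minimal kernel-style sketch of the compile-time guard described above (not the actual patch; the helper name set_mas7() is hypothetical). Builds without CONFIG_PHYS_64BIT target 32-bit-physical parts such as e500v1, which have no MAS7, so the write must be compiled out entirely; the MMU_FTR_BIG_PHYS runtime check is only usable in paths that run after feature fixups (code patching) have been applied.

```c
#include <asm/reg.h>		/* mtspr(), SPRN_MAS7 on Book E kernels */

/* Hypothetical helper: only touch MAS7 on kernels built for 36-bit physical. */
static inline void set_mas7(unsigned long rpn_hi)
{
#ifdef CONFIG_PHYS_64BIT
	mtspr(SPRN_MAS7, rpn_hi);	/* upper bits of the real page number */
#else
	(void)rpn_hi;			/* e500v1: no MAS7, nothing to write */
#endif
}
```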
- 23 November 2013, 1 commit
Submitted by Scott Wood
In flush_hugetlb_page(), don't check whether vma is NULL after we've already dereferenced it. This was found by Dan using static analysis, as described here: https://lists.ozlabs.org/pipermail/linuxppc-dev/2013-November/113161.html

We currently get away with this because the callers that pass NULL for vma appear to be 32-bit-only (e.g. highmem, and CONFIG_DEBUG_PGALLOC in pgtable_32.c), while hugetlb is currently 64-bit only, so we never actually saw a NULL vma here.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
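For illustration only, a self-contained sketch of the class of bug being removed (the types and helper names below are stand-ins, not the kernel's actual code): a pointer is checked for NULL only after it has already been dereferenced, which static analyzers such as the one Dan used will flag as a dead or contradictory check.

```c
#include <stddef.h>

/* Minimal stand-ins for the kernel types, just so the sketch compiles. */
struct mm_struct { int dummy; };
struct vm_area_struct { struct mm_struct *vm_mm; };

static void do_flush(struct mm_struct *mm, unsigned long vmaddr)
{
	(void)mm; (void)vmaddr;		/* placeholder for the real flush */
}

/* Buggy shape: vma is dereferenced first ... */
static void flush_buggy(struct vm_area_struct *vma, unsigned long vmaddr)
{
	struct mm_struct *mm = vma->vm_mm;

	if (vma == NULL)		/* ... so this NULL check can never help */
		return;

	do_flush(mm, vmaddr);
}

/* Fixed shape: callers never pass NULL here, so the check is simply dropped. */
static void flush_fixed(struct vm_area_struct *vma, unsigned long vmaddr)
{
	do_flush(vma->vm_mm, vmaddr);
}
```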
- 07 December 2011, 2 commits
Submitted by Becky Bruce
This avoids an extra find_vma() and is less error-prone.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
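A small sketch of the pattern, under the assumption that the change passes an already-known vma down instead of re-deriving it (all names here are illustrative stand-ins, not the actual patch):

```c
#include <stddef.h>

/* Stand-in types, purely to make the pattern concrete. */
struct vma { unsigned long start, end; };
struct mm  { struct vma *only_vma; };

static struct vma *find_vma_stub(struct mm *mm, unsigned long addr)
{
	struct vma *v = mm->only_vma;
	return (v && addr >= v->start && addr < v->end) ? v : NULL;
}

/* Before: the callee re-derives the vma from (mm, addr), a second lookup
 * that costs time and can disagree with the caller's view. */
static unsigned long size_before(struct mm *mm, unsigned long addr)
{
	struct vma *v = find_vma_stub(mm, addr);
	return v ? v->end - v->start : 0;
}

/* After: the caller, which already holds the vma, passes it straight in. */
static unsigned long size_after(struct vma *v)
{
	return v->end - v->start;
}
```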
Submitted by Becky Bruce
This patch does two things: it corrects the code that determines the size to write into MAS1 for the PPC_MM_SLICES case (this originally came from David Gibson and I had incorrectly altered it), and it changes the methodology used to calculate the size for !PPC_MM_SLICES so that it works for 64-bit as well as 32-bit.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
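For context, a hedged sketch of the size arithmetic involved (the helper name and the standalone test are illustrative, not the kernel's exact code): on FSL Book E, the MAS1 TSIZE field encodes the page size as 4^TSIZE KB, so a page shift of 22 (a 4M page) corresponds to TSIZE = (22 - 10) / 2 = 6. Getting this mapping wrong writes the wrong size into MAS1.

```c
#include <assert.h>

/* size = 2^shift bytes = 4^tsize KB  =>  tsize = (shift - 10) / 2 */
static unsigned int shift_to_tsize(unsigned int shift)
{
	return (shift - 10) >> 1;
}

int main(void)
{
	assert(shift_to_tsize(12) == 1);	/* 4K */
	assert(shift_to_tsize(22) == 6);	/* 4M */
	assert(shift_to_tsize(30) == 10);	/* 1G */
	return 0;
}
```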
- 23 September 2011, 1 commit
Submitted by Paul Mackerras
Commit 41151e77 ("powerpc: Hugetlb for BookE") added some #ifdef CONFIG_MM_SLICES conditionals to hugetlb_get_unmapped_area() and vma_mmu_pagesize(). Unfortunately this is not the correct config symbol; it should be CONFIG_PPC_MM_SLICES. The result is that attempting to use hugetlbfs on 64-bit Power server processors results in an infinite stack recursion between get_unmapped_area() and hugetlb_get_unmapped_area().

This fixes it by changing the #ifdef to use CONFIG_PPC_MM_SLICES in those functions and also in book3e_hugetlb_preload().

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
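A tiny, self-contained illustration of why this kind of typo goes unnoticed at build time (not the kernel's code): the preprocessor treats an unknown CONFIG_ macro as simply undefined and silently drops the guarded block, so the slice-aware path never gets compiled in.

```c
#include <stdio.h>

int main(void)
{
#ifdef CONFIG_MM_SLICES		/* typo'd symbol: defined nowhere, no warning */
	puts("slice-aware path");
#else
	puts("generic path, taken by mistake on 64-bit server CPUs");
#endif
	return 0;
}
```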
- 20 September 2011, 1 commit
Submitted by Becky Bruce
Enable hugepages on Freescale BookE processors. This allows the kernel to use huge TLB entries to map pages, which can greatly reduce the number of TLB misses and the amount of TLB thrashing experienced by applications with large memory footprints. Care should be taken when using this on FSL processors, as the number of large TLB entries supported by the core is low (16-64) on current processors.

The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g. Page sizes larger than the max zone size are called "gigantic" pages and must be allocated on the command line (and cannot be deallocated).

This is currently only fully implemented for Freescale 32-bit BookE processors, but there is some infrastructure in the code for 64-bit BookE.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
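A hedged userspace usage sketch, not part of the patch: assuming huge pages have been reserved (e.g. via the standard hugepagesz=/hugepages= boot parameters, which is how the "gigantic" sizes must be set up) and the default hugepage size is 4M, an application can ask for a huge-page-backed anonymous mapping with MAP_HUGETLB and so consume one of the large TLB entries this series enables.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4UL << 20;		/* one 4M page, assuming 4M is the default hugepage size */

	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");	/* e.g. no huge pages reserved */
		return 1;
	}
	memset(p, 0, len);		/* touch the mapping so a huge TLB entry is actually used */
	munmap(p, len);
	return 0;
}
```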