- 28 Oct 2015, 1 commit
-
-
Submitted by Scott Wood
The SMP release mechanism for FSL book3e is different from when booting with normal hardware. In theory we could simulate the normal spin table mechanism, but not at the addresses U-Boot put in the device tree -- so there'd need to be even more communication between the kernel and kexec to set that up. Instead, kexec-tools will set a boolean property linux,booted-from-kexec in the /chosen node. Signed-off-by: Scott Wood <scottwood@freescale.com> Cc: devicetree@vger.kernel.org
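A minimal sketch of how platform code could consume that property; the helper name is made up for illustration, and the actual consumer in the FSL book3e SMP release path is not shown in this log:

```c
#include <linux/of.h>
#include <linux/types.h>

/* Illustrative helper: true when kexec-tools marked /chosen accordingly */
static bool booted_from_kexec(void)
{
	return of_property_read_bool(of_chosen, "linux,booted-from-kexec");
}
```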
-
- 23 Oct 2015, 1 commit
-
-
Submitted by Scott Wood
Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries to be loaded at once. This avoids the need to keep the translation that code is executing from in the same TLB entry in the final TLB configuration as during early boot, which in turn is helpful for relocatable kernels (e.g. kdump) where the kernel is not running from what would be the first TLB entry. On e6500, we limit map_mem_in_cams() to the primary hwthread of a core (the boot cpu is always considered primary, as a kdump kernel can be entered on any cpu). Each TLB only needs to be set up once, and when we do, we don't want another thread to be running when we create a temporary trampoline TLB1 entry. Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 11 Jun 2015, 1 commit
-
-
Submitted by Alexey Kardashevskiy
We are adding support for DMA memory pre-registration to be used in conjunction with VFIO. The idea is that the userspace which is going to run a guest may want to pre-register a user space memory region so it all gets pinned once and never goes away. Having this done, a hypervisor will not have to pin/unpin pages on every DMA map/unmap request. This is going to help with multiple pinning of the same memory. Another use of it is in-kernel real mode (mmu off) acceleration of DMA requests where real time translation of guest physical to host physical addresses is non-trivial and may fail as linux ptes may be temporarily invalid. Also, having cached host physical addresses (compared to just pinning at the start and then walking the page table again on every H_PUT_TCE), we can be sure that the addresses which we put into the TCE table are the ones we already pinned. This adds a list of memory regions to mm_context_t. Each region consists of a header and a list of physical addresses. This adds an API to: 1. register/unregister memory regions; 2. do final cleanup (which puts all pre-registered pages); 3. do userspace to physical address translation; 4. manage usage counters; multiple registration of the same memory is allowed (once per container). This implements 2 counters per registered memory region: - @mapped: incremented on every DMA mapping; decremented on unmapping; initialized to 1 when a region is just registered; once it becomes zero, no more mappings are allowed; - @used: incremented on every "register" ioctl; decremented on "unregister"; unregistration is allowed for DMA mapped regions unless it is the very last reference. For the very last reference this checks that the region is still mapped and returns -EBUSY so the userspace gets to know that memory is still pinned and unregistration needs to be retried; @used remains 1. Host physical addresses are stored in a vmalloc'ed array. In order to access these in real mode (mmu off), there is a real_vmalloc_addr() helper. The in-kernel acceleration patchset will move it from KVM to MMU code. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
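A hypothetical sketch of the per-region bookkeeping described above; the structure and field names here are illustrative, not the exact kernel definitions:

```c
#include <linux/list.h>
#include <linux/types.h>
#include <linux/atomic.h>

struct mm_iommu_region {		/* one entry on the mm_context_t list */
	struct list_head next;
	atomic64_t mapped;	/* 1 at registration; ++ on DMA map, -- on unmap */
	u64 used;		/* one count per "register" ioctl (per container) */
	u64 ua;			/* userspace address of the registered region */
	u64 entries;		/* number of pinned pages */
	u64 *hpas;		/* vmalloc'ed array of host physical addresses */
};
```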
-
- 11 May 2015, 1 commit
-
-
Submitted by Michael Ellerman
Currently we print "Starting Linux PPC64" at boot. But we don't mention anywhere whether the kernel is big or little endian. If we print the utsname->machine value instead, we get either "ppc64" or "ppc64le", which is much more informative, eg: Starting Linux ppc64le #1 SMP Wed Apr 15 12:12:20 AEST 2015 Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
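A sketch of the resulting banner code, assuming the usual utsname accessors; the helper name and exact format string are illustrative:

```c
#include <linux/printk.h>
#include <linux/utsname.h>
#include <linux/init.h>

static void __init print_kernel_banner(void)	/* illustrative helper name */
{
	/* machine is "ppc64" or "ppc64le"; version carries the build/date info */
	printk(KERN_INFO "Starting Linux %s %s\n",
	       init_utsname()->machine, init_utsname()->version);
}
```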
-
- 11 Apr 2015, 1 commit
-
-
Submitted by Anton Blanchard
The hard lockup detector uses a PMU event as a periodic NMI to detect if we are stuck (where stuck means no timer interrupts have occurred). Ben's rework of the ppc64 soft disable code has made ppc64 PMU exceptions a partial NMI. They can get disabled if an external interrupt comes in, but otherwise PMU interrupts will fire in interrupt disabled regions. We disable the hard lockup detector by default for a few reasons: - It breaks userspace event based branches on POWER8. - It is likely to produce false positives on KVM guests. - Since PMCs can only count to 2^31, counting cycles means we might take multiple PMU exceptions per second per hardware thread even if our hard lockup timeout is 10 seconds. It can be enabled via a boot option, or via procfs. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
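A back-of-the-envelope illustration of the 2^31 PMC limit mentioned above; the clock frequency is an assumed example, not a value from the commit:

```c
#include <stdio.h>

int main(void)
{
	const double hz = 4.0e9;		/* assume a ~4 GHz hardware thread */
	const double pmc_max = 2147483648.0;	/* a PMC overflows at 2^31 cycles */

	/* ~0.54 s between overflows, i.e. roughly two PMU exceptions per second
	 * per hardware thread, regardless of the 10 s hard lockup timeout */
	printf("seconds per PMU exception: %.2f\n", pmc_max / hz);
	return 0;
}
```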
-
- 19 Nov 2014, 1 commit
-
-
Submitted by Michael Ellerman
Although we are now selecting NO_BOOTMEM, we still have some traces of bootmem lying around. That is because even with NO_BOOTMEM there is still a shim that converts bootmem calls into memblock calls, but ultimately we want to remove all traces of bootmem. Most of the patch is conversions from alloc_bootmem() to memblock_virt_alloc(). In general a call such as: p = (struct foo *)alloc_bootmem(x); Becomes: p = memblock_virt_alloc(x, 0); We don't need the cast because memblock_virt_alloc() returns a void *. The alignment value of zero tells memblock to use the default alignment, which is SMP_CACHE_BYTES, the same value alloc_bootmem() uses. We remove a number of NULL checks on the result of memblock_virt_alloc(). That is because memblock_virt_alloc() will panic if it can't allocate, in exactly the same way as alloc_bootmem(), so the NULL checks are and always have been redundant. The memory returned by memblock_virt_alloc() is already zeroed, so we remove several memsets of the result of memblock_virt_alloc(). Finally we convert a few uses of __alloc_bootmem(x, y, MAX_DMA_ADDRESS) to just plain memblock_virt_alloc(). We don't use memblock_alloc_base() because MAX_DMA_ADDRESS is ~0ul on powerpc, so limiting the allocation to that is pointless; 16EB ought to be enough for anyone. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
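The shape of the conversion, expanded from the commit's own example (struct foo and x are placeholders from the message, not real kernel symbols):

```c
/* Before: bootmem, with a cast and an implicit panic on failure */
p = (struct foo *)alloc_bootmem(x);

/* After: memblock_virt_alloc() returns void * (no cast), panics on failure
 * (no NULL check), returns zeroed memory (no memset), and an alignment of 0
 * means SMP_CACHE_BYTES -- the same default alloc_bootmem() used. */
p = memblock_virt_alloc(x, 0);
```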
-
- 10 Nov 2014, 2 commits
-
-
Submitted by Anton Blanchard
We did part of sparse initialisation in setup_arch and part in initmem_init. Put them together. Signed-off-by: Anton Blanchard <anton@samba.org> Tested-by: Emil Medve <Emilian.Medve@Freescale.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Submitted by Anton Blanchard
At the moment we transition from the memblock allocator to the bootmem allocator. Getting rid of the bootmem allocator removes a bunch of complicated code (most of which I owe the dubious honour of being responsible for writing). Signed-off-by: Anton Blanchard <anton@samba.org> Tested-by: Emil Medve <Emilian.Medve@Freescale.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 05 Nov 2014, 1 commit
-
-
Submitted by Anton Blanchard
ppc64_boot_msg is meant to be a boot debug aid, but is only used in one spot. Get rid of it, and save ourselves a couple of lines in the kernel log buffer. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 16 Oct 2014, 1 commit
-
-
Submitted by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 02 Oct 2014, 1 commit
-
-
Submitted by Anton Blanchard
There is no need for yet another copy of the command line, just use boot_command_line like everyone else. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 25 Sep 2014, 2 commits
-
-
Submitted by Michael Ellerman
"Helps debug funky firmware issues". After: Starting Linux PPC64 #108 SMP Wed Aug 6 19:04:51 EST 2014 ----------------------------------------------------- ppc64_pft_size = 0x1a phys_mem_size = 0x200000000 cpu_features = 0x17fc7a6c18500249 possible = 0x1fffffff18700649 always = 0x0000000000000040 cpu_user_features = 0xdc0065c2 0xee000000 mmu_features = 0x5a000001 firmware_features = 0x00000001405a440b htab_hash_mask = 0x7ffff ----------------------------------------------------- Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Submitted by Michael Ellerman
At boot we display a bunch of low level settings which can be useful to know, and can help to spot bugs when things are fundamentally misconfigured. At the moment they are very widely spaced, so that we can accommodate the line: ppc64_caches.dcache_line_size = 0xYY But we only print that line when the cache line size is not 128, ie. almost never, so it just makes the display look odd usually. The ppc64_caches prefix is redundant so remove it, which means we can align things a bit closer for the common case. While we're there, replace the last use of camelCase (physicalMemorySize) with phys_mem_size. Before: Starting Linux PPC64 #104 SMP Wed Aug 6 18:41:34 EST 2014 ----------------------------------------------------- ppc64_pft_size = 0x1a physicalMemorySize = 0x200000000 ppc64_caches.dcache_line_size = 0xf0 ppc64_caches.icache_line_size = 0xf0 htab_address = 0xdeadbeef htab_hash_mask = 0x7ffff physical_start = 0xf000bar ----------------------------------------------------- After: Starting Linux PPC64 #103 SMP Wed Aug 6 18:38:04 EST 2014 ----------------------------------------------------- ppc64_pft_size = 0x1a phys_mem_size = 0x200000000 dcache_line_size = 0xf0 icache_line_size = 0xf0 htab_address = 0xdeadbeef htab_hash_mask = 0x7ffff physical_start = 0xf000bar ----------------------------------------------------- This patch is final, no bike shedding ;) Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 09 Aug 2014, 1 commit
-
-
Submitted by Daniel Walter
Replace strict_strto calls with more appropriate kstrto calls. Signed-off-by: Daniel Walter <dwalter@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
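A minimal sketch of the substitution pattern; the wrapper and parameter names are made up for illustration, and kstrtoul() keeps the same argument order as the old strict_strtoul():

```c
#include <linux/kernel.h>

static int parse_ulong_param(const char *str, unsigned long *val)
{
	/* was: return strict_strtoul(str, 0, val); */
	return kstrtoul(str, 0, val);
}
```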
-
- 30 Jul 2014, 1 commit
-
-
Submitted by Andy Fleming
The general idea is that each core will release all of its threads into the secondary thread startup code, which will eventually wait in the secondary core holding area for the appropriate bit in the PACA to be set. The kick_cpu function pointer will set that bit in the PACA, and thus "release" the core/thread to boot. We also need to do a few things that U-Boot normally does for CPUs (like enable branch prediction). Signed-off-by: Andy Fleming <afleming@freescale.com> [scottwood@freescale.com: various changes, including only enabling threads if Linux wants to kick them] Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 28 Jul 2014, 2 commits
-
-
Submitted by Michael Ellerman
I spent ten minutes scratching my head, trying to work out where we enabled relocation on interrupts for guest kernels. Expand the doco to make it clear. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Michael Ellerman
Old cpus didn't have a Segment Lookaside Buffer (SLB); instead they had a Segment Table (STAB). Now that we've dropped support for those cpus, we can remove the STAB support entirely. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 05 Jun 2014, 1 commit
-
-
Submitted by Anton Blanchard
The pseries platform code unconditionally overrides memory_block_size_bytes regardless of the running platform. Create a ppc_md hook so that each platform can choose to do what it wants. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
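A rough sketch of what such a hook could look like, assuming a ppc_md.memory_block_size callback and a generic fallback constant; the names are illustrative rather than the exact diff:

```c
#include <linux/memory.h>
#include <asm/machdep.h>

unsigned long memory_block_size_bytes(void)
{
	if (ppc_md.memory_block_size)
		return ppc_md.memory_block_size();	/* platform override */

	return MIN_MEMORY_BLOCK_SIZE;			/* generic default */
}
```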
-
- 23 Apr 2014, 1 commit
-
-
Submitted by Anton Blanchard
There is no need to put a function descriptor in __secondary_hold_spinloop. Use ppc_function_entry to get the instruction address and put it in __secondary_hold_spinloop instead. Also fix an issue where we assumed cur_cpu_spec held a function descriptor. Signed-off-by: Anton Blanchard <anton@samba.org>
-
- 13 Apr 2014, 1 commit
-
-
Submitted by Paul Mackerras
Commit 8f619b54 ("powerpc/ppc64: Do not turn AIL (reloc-on interrupts) too early") added code to set the AIL bit in the LPCR without checking whether the kernel is running in hypervisor mode. The result is that when the kernel is running as a guest (i.e., under PowerKVM or PowerVM), the processor takes a privileged instruction interrupt at that point, causing a panic. The visible result is that the kernel hangs after printing "returning from prom_init". This fixes it by checking for hypervisor mode being available before setting LPCR. If we are not in hypervisor mode, we enable relocation-on interrupts later in pSeries_setup_arch using the H_SET_MODE hcall. Signed-off-by: Paul Mackerras <paulus@samba.org> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
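A sketch of the guard described above, using the CPU feature and SPR names commonly seen in arch/powerpc; treat it as illustrative rather than the exact fix:

```c
#include <linux/init.h>
#include <asm/reg.h>
#include <asm/cputable.h>

static void __init enable_reloc_on_interrupts(void)	/* illustrative name */
{
	if (cpu_has_feature(CPU_FTR_HVMODE)) {
		/* Bare metal: writing LPCR directly is permitted */
		mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_AIL_3);
	}
	/* Otherwise, as a guest, request the mode later from
	 * pSeries_setup_arch() via the H_SET_MODE hcall. */
}
```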
-
- 07 Apr 2014, 3 commits
-
-
Submitted by Benjamin Herrenschmidt
Turn them on at the same time as we allow MSR_IR/DR in the paca kernel MSR, i.e. after the MMU has been set up enough to be able to handle relocated access to the linear mapping. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Benjamin Herrenschmidt
If we take an interrupt such as a trap caused by a BUG_ON before the MMU has been set up, the interrupt handlers try to enable virtual mode and cause a recursive crash, making the original problem very hard to debug. This fixes it by adjusting the "kernel_msr" value in the PACA so that it only has MSR_IR and MSR_DR (translation for instruction and data) set after the MMU has been initialized for the processor. We may still not have a console yet but at least we don't get into a recursive fault (and an early debug console or memory dump via JTAG of the kernel buffer *will* give us the proper error). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
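An illustrative sketch of the PACA adjustment described above; the real code lives in the 64-bit setup path and may differ in detail, and the helper names here are invented:

```c
#include <asm/paca.h>
#include <asm/reg.h>

/* Early boot, before the MMU is set up: take exceptions in real mode */
static void early_restrict_kernel_msr(void)
{
	get_paca()->kernel_msr = MSR_KERNEL & ~(MSR_IR | MSR_DR);
}

/* Once the MMU has been initialized for this processor */
static void cpu_ready_for_relocated_interrupts(void)
{
	get_paca()->kernel_msr = MSR_KERNEL;	/* handlers may now run relocated */
}
```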
-
Submitted by Benjamin Herrenschmidt
Move the definition to setup-common.c and set the init value to -1 on both 32 and 64-bit (it was 0 on 64-bit). Additionally, add a check to prom.c to guarantee that the init value has been updated after the DT scan. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 20 Mar 2014, 2 commits
-
-
Submitted by Scott Wood
Once special level interrupts are supported, we may take nested TLB misses -- so allow the same thread to acquire the lock recursively. The lock will not be effective against the nested TLB miss handler trying to write the same entry as the interrupted TLB miss handler, but that's also a problem on non-threaded CPUs that lack TLB write conditional. This will be addressed in the patch that enables crit/mc support, by invalidating the TLB on return from level exceptions. Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Submitted by Tiejun Chen
We already allocated critical/machine/debug check exceptions, but we also should initialize those associated kernel stack pointers for use by special exceptions in the PACA. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 10 Jan 2014, 1 commit
-
-
Submitted by Scott Wood
There are a few things that make the existing hw tablewalk handlers unsuitable for e6500: - Indirect entries go in TLB1 (though the resulting direct entries go in TLB0). - It has threads, but no "tlbsrx." -- so we need a spinlock and a normal "tlbsx". Because we need this lock, hardware tablewalk is mandatory on e6500 unless we want to add spinlock+tlbsx to the normal bolted TLB miss handler. - TLB1 has no HES (nor next-victim hint), so we need software round robin (TODO: integrate this round robin data with hugetlb/KVM). - The existing tablewalk handlers map half of a page table at a time, because IBM hardware has a fixed 1MiB indirect page size. e6500 has variable size indirect entries, with a minimum of 2MiB. So we can't do the half-page indirect mapping, and even if we could it would be less efficient than mapping the full page. - Like on e5500, the linear mapping is bolted, so we don't need the overhead of supporting nested tlb misses. Note that hardware tablewalk does not work in rev1 of e6500. We do not expect to support e6500 rev1 in mainline Linux. Signed-off-by: Scott Wood <scottwood@freescale.com> Cc: Mihai Caraman <mihai.caraman@freescale.com>
-
- 05 Dec 2013, 1 commit
-
-
Submitted by Mahesh Salgaonkar
This patch introduces an exclusive emergency stack for the machine check exception. We use the emergency stack to handle machine check exceptions so that we can save MCE information (srr1, srr0, dar and dsisr) before turning on the ME bit and be ready for re-entrancy. This helps us to prevent clobbering of MCE information in case of nested machine checks. The reason for using the emergency stack over the normal kernel stack is that the machine check might occur in the middle of setting up a stack frame, which may result in improper use of the kernel stack. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 02 Dec 2013, 1 commit
-
-
Submitted by Kevin Hao
Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 26 Nov 2013, 1 commit
-
-
Submitted by Jason Baron
Default CONFIG_PANIC_TIMEOUT to 180 seconds on powerpc. The pSeries platforms continue to set the timeout to 10 seconds at run-time. Thus, there's a small window where we don't have the correct value on pSeries, but if this is only run-time discoverable we don't have a better option. In any case, if the user changes the default setting of 180 seconds, we honor that user setting. Signed-off-by: Jason Baron <jbaron@akamai.com> Cc: benh@kernel.crashing.org Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: mpe@ellerman.id.au Cc: felipe.contreras@gmail.com Cc: linuxppc-dev@lists.ozlabs.org Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/705bbe0f70fb20759151642ba0176a6414ec9f7a.1385418410.git.jbaron@akamai.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 30 Oct 2013, 1 commit
-
-
Submitted by Robert Jennings
Move the few declarations from arch/powerpc/kernel/setup.h into arch/powerpc/include/asm/setup.h. This resolves a sparse warning for arch/powerpc/mm/numa.c, which defines do_init_bootmem() but can't include the setup.h header in the prior path. Resolves: arch/powerpc/mm/numa.c:998:13: warning: symbol 'do_init_bootmem' was not declared. Should it be static? Signed-off-by: Robert C Jennings <rcj@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 14 Aug 2013, 4 commits
-
-
Submitted by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Benjamin Herrenschmidt
Remove the generic PPC_INDIRECT_IO and ensure we only add overhead to the right accessors. I.e. if only CONFIG_PPC_INDIRECT_PIO is set, we don't add overhead to all MMIO accessors. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Benjamin Herrenschmidt
We have a bunch of CONFIG_PPC_EARLY_DEBUG_* options that are intended for bringup/debug only. They hard wire a machine specific udbg backend very early on (before we even probe the platform), and use whatever tricks are available on each machine/cpu to be able to get some kind of output out there early on. So far, on powermac with no serial ports, we have CONFIG_PPC_EARLY_DEBUG_BOOTX to use the low-level btext engine on the screen, but it doesn't do much, at least on 64-bit. It only really gets enabled after the platform has been probed and the MMU enabled. This adds a way to enable it much earlier. From prom_init.c (while still running with Open Firmware), we grab the screen details and set things up using the physical address of the frame buffer. Then btext itself uses the "rm_ci" feature of the 970 processor (Real Mode Cache Inhibited) to access it while in real mode. We need to do a little bit of reorg of the btext code to inline things better, in order to limit how much we touch memory while in this mode as the consequences might be ... interesting. This successfully allowed me to debug problems early on with the G5 (related to gold being broken vs. ppc64 kernels). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Address some of the trivial sparse warnings in arch/powerpc. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 08 Aug 2013, 1 commit
-
-
Submitted by Laurentiu TUDOR
At console init, when the kernel tries to flush the log buffer, the ePAPR byte-channel based console write fails silently, losing the buffered messages. This happens because the ePAPR para-virtualization init isn't done early enough for the hcall instruction to be set up, causing the byte-channel write hcall to be a nop. To fix this, change the ePAPR para-virt init to use early device tree functions and move it into early init. Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 08 Jul 2013, 2 commits
-
-
Submitted by Aneesh Kumar K.V
Older versions of the Power architecture use the Real Mode Offset register and Real Mode Limit Selector for mapping the guest Real Mode Area. The guest RMA should be physically contiguous since we use the range when address translation is not enabled. This patch switches the RMA allocation code to use the contiguous memory allocator. The patch also removes the linear allocator, which is not used any more. Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Submitted by Aneesh Kumar K.V
The Powerpc architecture uses a hash based page table mechanism for mapping virtual addresses to physical addresses. The architecture requires this hash page table to be physically contiguous. With KVM on Powerpc we currently use an early reservation mechanism for allocating the guest hash page table. This implies that we need to reserve a big memory region to ensure we can create a large number of guests simultaneously with KVM on Power. Another disadvantage is that the reserved memory is not available to the rest of the subsystems, which implies we limit the total available memory in the host. This patch series switches the guest hash page table allocation to use the contiguous memory allocator. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
- 01 Jul 2013, 1 commit
-
-
Submitted by Chen Gang
smp_release_cpus() is a normal function called in normal environments, but it references the __initdata variable spinning_secondaries. We need to modify spinning_secondaries to match smp_release_cpus(). The related warning (the linker reports boot_paca.33377, but it should be spinning_secondaries): ----------------------------------------------------------------------------- WARNING: arch/powerpc/kernel/built-in.o(.text+0x23176): Section mismatch in reference from the function .smp_release_cpus() to the variable .init.data:boot_paca.33377 The function .smp_release_cpus() references the variable __initdata boot_paca.33377. This is often because .smp_release_cpus lacks a __initdata annotation or the annotation of boot_paca.33377 is wrong. WARNING: arch/powerpc/kernel/built-in.o(.text+0x231fe): Section mismatch in reference from the function .smp_release_cpus() to the variable .init.data:boot_paca.33377 The function .smp_release_cpus() references the variable __initdata boot_paca.33377. This is often because .smp_release_cpus lacks a __initdata annotation or the annotation of boot_paca.33377 is wrong. ----------------------------------------------------------------------------- Signed-off-by: Chen Gang <gang.chen@asianux.com> CC: <stable@vger.kernel.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
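A sketch of the implied fix, assuming the resolution is to drop the __initdata annotation so the non-__init smp_release_cpus() can reference the variable without a section mismatch; the variable's type here is illustrative:

```c
/* before: __initdata placement triggers the section mismatch above */
/*   int __initdata spinning_secondaries; */

/* after: plain data, safe to reference from non-__init code */
int spinning_secondaries;
```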
-
- 30 Apr 2013, 1 commit
-
-
Submitted by Aneesh Kumar K.V
We allocate one page for the last level of the linux page table. With THP and a large page size of 16MB, that would mean we are wasting a large part of that page. To map a 16MB area, we only need a PTE space of 2K with a 64K page size. This patch reduces the space wastage by sharing the page allocated for the last level of the linux page table with multiple pmd entries. We call these smaller chunks PTE page fragments and the allocated page, a PTE page. In order to support systems which don't have 64K HPTE support, we also add another 2K to the PTE page fragment. The second half of the PTE fragment is used for storing slot and secondary bit information of an HPTE. With this we now have a 4K PTE fragment. We use a simple approach to share the PTE page. On allocation, we bump the PTE page refcount to 16 and share the PTE page with the next 16 pte alloc requests. This should help with the node locality of the PTE page fragment, assuming that the immediate pte alloc requests will mostly come from the same NUMA node. We don't try to reuse the freed PTE page fragments, hence we could be wasting some space. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
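The arithmetic behind the fragment size, written out with made-up macro names; only the numbers come from the commit message:

```c
#define PAGE_SIZE_64K		(64 * 1024)
#define PTE_TABLE_BYTES		(2 * 1024)	/* 2K of real PTEs maps 16MB with 64K pages */
#define HPTE_SLOT_BYTES		(2 * 1024)	/* 2K for HPTE slot/secondary-bit tracking */
#define PTE_FRAG_BYTES		(PTE_TABLE_BYTES + HPTE_SLOT_BYTES)	/* 4K per fragment */
#define PTE_FRAGS_PER_PAGE	(PAGE_SIZE_64K / PTE_FRAG_BYTES)	/* 16 fragments */
/* On allocation the PTE page's refcount is bumped to PTE_FRAGS_PER_PAGE (16),
 * and the page serves the next 16 pte alloc requests. */
```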
-
- 15 Feb 2013, 1 commit
-
-
Submitted by Michael Ellerman
In commit 466921c5 we added a hack to set the paca data_offset to zero so that per-cpu accesses would work on the boot cpu prior to per-cpu areas being set up. This fixed a problem with lockdep touching per-cpu areas very early in boot. However, if we combine CONFIG_LOCK_STAT=y with any of the PPC_EARLY_DEBUG options, we can hit the same problem in udbg_early_init(). To avoid that we need to set the data_offset of the boot_paca also. So factor out the fixup logic and call it for both the boot_paca and "the paca of the boot cpu". Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Tested-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-