- 14 Aug 2013, 3 commits
-
-
Committed by Benjamin Herrenschmidt
Remove the generic PPC_INDIRECT_IO and ensure we only add overhead to the right accessors. I.e., if only CONFIG_PPC_INDIRECT_PIO is set, we don't add overhead to all MMIO accessors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
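A minimal sketch of the resulting split, assuming illustrative accessor and hook names (not the kernel's actual macros):

    /* With only CONFIG_PPC_INDIRECT_PIO set, port I/O goes through a
     * per-platform hook table while MMIO stays a direct load/store.
     * All names below are illustrative. */
    #ifdef CONFIG_PPC_INDIRECT_PIO
    static inline u8 pio_read8(unsigned long port)
    {
            return pio_ops->inb(port);      /* indirect: pays the hook cost */
    }
    #else
    static inline u8 pio_read8(unsigned long port)
    {
            return *(volatile u8 *)(pci_io_base + port);    /* direct */
    }
    #endif

    /* MMIO no longer depends on the PIO option at all */
    static inline u8 mmio_read8(const volatile void *addr)
    {
            return *(const volatile u8 *)addr;
    }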
-
Committed by Benjamin Herrenschmidt
We have a bunch of CONFIG_PPC_EARLY_DEBUG_* options that are intended for bringup/debug only. They hard-wire a machine-specific udbg backend very early on (before we even probe the platform), and use whatever tricks are available on each machine/cpu to be able to get some kind of output out there early on.

So far, on powermac with no serial ports, we have CONFIG_PPC_EARLY_DEBUG_BOOTX to use the low-level btext engine on the screen, but it doesn't do much, at least on 64-bit. It only really gets enabled after the platform has been probed and the MMU enabled.

This adds a way to enable it much earlier. From prom_init.c (while still running with Open Firmware), we grab the screen details and set things up using the physical address of the frame buffer. Then btext itself uses the "rm_ci" feature of the 970 processor (Real Mode Cache Inhibited) to access it while in real mode.

We need to do a little bit of reorg of the btext code to inline things better, in order to limit how much we touch memory while in this mode, as the consequences might be ... interesting.

This successfully allowed me to debug problems early on with the G5 (related to gold being broken vs. ppc64 kernels).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Address some of the trivial sparse warnings in arch/powerpc.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 08 Aug 2013, 1 commit
-
-
Committed by Laurentiu TUDOR
At console init, when the kernel tries to flush the log buffer, the ePAPR byte-channel based console write fails silently, losing the buffered messages. This happens because the ePAPR para-virtualization init isn't done early enough for the hcall instruction to be set up, causing the byte-channel write hcall to be a nop.

To fix this, change the ePAPR para-virt init to use early device tree functions and move it into early init.

Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
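A rough sketch of the early flattened-device-tree approach; the property name follows ePAPR, but the function bodies are illustrative rather than the exact kernel code:

    #include <linux/of_fdt.h>

    extern u32 epapr_hypercall_start[];     /* the hcall trampoline */

    static int __init early_init_dt_scan_epapr(unsigned long node,
                                               const char *uname,
                                               int depth, void *data)
    {
            const u32 *insts;
            int len, i;

            /* ePAPR hypervisors publish their hcall sequence here */
            insts = of_get_flat_dt_prop(node, "hcall-instructions", &len);
            if (!insts)
                    return 0;

            /* patch the trampoline before the first console write */
            for (i = 0; i < len / 4; i++)
                    patch_instruction(epapr_hypercall_start + i, insts[i]);

            return 1;
    }

    static int __init epapr_paravirt_early_init(void)
    {
            /* runs from early init, i.e. before console init */
            of_scan_flat_dt(early_init_dt_scan_epapr, NULL);
            return 0;
    }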
-
- 01 Jul 2013, 1 commit
-
-
Committed by Chen Gang
smp_release_cpus() is a normal function called in normal environments, but it references spinning_secondaries, which is marked __initdata. spinning_secondaries needs to be changed to match smp_release_cpus().

The related warning (the linker reports boot_paca.33377, but it should be spinning_secondaries):

    WARNING: arch/powerpc/kernel/built-in.o(.text+0x23176): Section mismatch
    in reference from the function .smp_release_cpus() to the variable
    .init.data:boot_paca.33377
    The function .smp_release_cpus() references the variable __initdata
    boot_paca.33377. This is often because .smp_release_cpus lacks a
    __initdata annotation or the annotation of boot_paca.33377 is wrong.

    WARNING: arch/powerpc/kernel/built-in.o(.text+0x231fe): Section mismatch
    in reference from the function .smp_release_cpus() to the variable
    .init.data:boot_paca.33377
    The function .smp_release_cpus() references the variable __initdata
    boot_paca.33377. This is often because .smp_release_cpus lacks a
    __initdata annotation or the annotation of boot_paca.33377 is wrong.

Signed-off-by: Chen Gang <gang.chen@asianux.com>
CC: <stable@vger.kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
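A minimal sketch of the fix the warning points at, assuming the variable lives in arch/powerpc/kernel/setup_64.c (illustrative):

    /* before: placed in .init.data and discarded after boot, yet still
     * read by the non-init smp_release_cpus() */
    static u32 __initdata spinning_secondaries;

    /* after: ordinary .data, safe to read at any time */
    u32 spinning_secondaries;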
-
- 30 Apr 2013, 1 commit
-
-
Committed by Aneesh Kumar K.V
We allocate one page for the last level of the Linux page table. With THP and a large page size of 16MB, that means we waste a large part of that page: to map a 16MB area, we only need a PTE space of 2K with a 64K page size. This patch reduces the wastage by sharing the page allocated for the last level of the Linux page table among multiple pmd entries. We call these smaller chunks PTE page fragments, and the allocated page a PTE page.

In order to support systems which don't have 64K HPTE support, we also add another 2K to the PTE page fragment. The second half of the PTE fragment is used for storing the slot and secondary bit information of an HPTE. With this we now have a 4K PTE fragment.

We use a simple approach to share the PTE page. On allocation, we bump the PTE page refcount to 16 and share the PTE page with the next 16 pte alloc requests. This should help the node locality of the PTE page fragment, assuming that the immediate pte alloc requests will mostly come from the same NUMA node. We don't try to reuse the freed PTE page fragment, hence we could be wasting some space.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
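A minimal sketch of the sharing scheme described above; the names, locking, and refcount manipulation are illustrative simplifications, not the exact kernel code:

    #define PTE_FRAG_SIZE   (4 * 1024)  /* 2K of PTEs + 2K of HPTE slot info */
    #define PTE_FRAG_NR     (PAGE_SIZE / PTE_FRAG_SIZE)   /* 16 with 64K pages */

    static pte_t *pte_frag_alloc(struct mm_struct *mm)
    {
            void *frag;
            struct page *page;

            /* fast path: hand out the next fragment of the current PTE page */
            spin_lock(&mm->page_table_lock);
            frag = mm->context.pte_frag;
            if (frag) {
                    void *next = frag + PTE_FRAG_SIZE;

                    /* was that the last fragment? start fresh next time */
                    mm->context.pte_frag =
                            ((unsigned long)next & (PAGE_SIZE - 1)) ? next : NULL;
                    spin_unlock(&mm->page_table_lock);
                    return (pte_t *)frag;
            }
            spin_unlock(&mm->page_table_lock);

            /* slow path: new PTE page, with one reference per fragment so
             * the page is only freed once all 16 users have dropped theirs */
            page = alloc_page(GFP_KERNEL | __GFP_ZERO);
            if (!page)
                    return NULL;
            set_page_count(page, PTE_FRAG_NR);

            spin_lock(&mm->page_table_lock);
            mm->context.pte_frag = page_address(page) + PTE_FRAG_SIZE;
            spin_unlock(&mm->page_table_lock);
            return (pte_t *)page_address(page);
    }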
-
- 15 Feb 2013, 2 commits
-
-
Committed by Michael Ellerman
In commit 466921c5 we added a hack to set the paca data_offset to zero so that per-cpu accesses would work on the boot cpu prior to per-cpu areas being set up. This fixed a problem with lockdep touching per-cpu areas very early in boot.

However if we combine CONFIG_LOCK_STAT=y with any of the PPC_EARLY_DEBUG options, we can hit the same problem in udbg_early_init(). To avoid that we need to set the data_offset of the boot_paca also. So factor out the fixup logic and call it for both the boot_paca and "the paca of the boot cpu".

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Tested-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
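A sketch of the factored-out fixup; the helper name follows the description above, but the bodies are illustrative:

    static void __init fixup_boot_paca(void)
    {
            /* allow per-cpu accesses before per-cpu areas are set up */
            get_paca()->data_offset = 0;
    }

    void __init early_setup(unsigned long dt_ptr)
    {
            /* first the static boot_paca, so even udbg_early_init()
             * under CONFIG_LOCK_STAT=y sees a sane data_offset */
            setup_paca(&boot_paca);
            fixup_boot_paca();

            udbg_early_init();

            /* ... probe the platform, determine boot_cpuid ... */

            /* then the real paca of the boot cpu */
            setup_paca(&paca[boot_cpuid]);
            fixup_boot_paca();
    }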
-
Committed by Geoff Levand
The powerpc boot_paca symbol is now only used within the early_setup() routine, so move it from its global definition into early_setup().

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 15 Nov 2012, 1 commit
-
-
Committed by Michael Neuling
If we change load_handler() to use an ori instead of an addi, we can load handlers up to 64k away, provided we are still 64k aligned.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
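The range difference comes from how the 16-bit immediate is treated: addi sign-extends it, while ori zero-extends it. A small self-contained C illustration of that arithmetic (not kernel code):

    #include <assert.h>
    #include <stdint.h>

    /* addi rD,rA,imm adds a sign-extended 16-bit immediate */
    static uint64_t addi(uint64_t base, uint16_t imm)
    {
            return base + (int64_t)(int16_t)imm;
    }

    /* ori rD,rA,imm ORs in a zero-extended 16-bit immediate */
    static uint64_t ori(uint64_t base, uint16_t imm)
    {
            return base | imm;
    }

    int main(void)
    {
            uint64_t handler_base = 0xc000000000010000ULL;  /* 64k aligned */

            /* offsets >= 0x8000 go backwards with addi ... */
            assert(addi(handler_base, 0x9000) == handler_base - 0x7000);

            /* ... but ori reaches the full 0..0xffff range, as long as
             * the base's low 16 bits are zero so OR behaves like add */
            assert(ori(handler_base, 0x9000) == handler_base + 0x9000);
            return 0;
    }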
-
- 27 Sep 2012, 1 commit
-
-
Committed by Michael Ellerman
In commit 407821a3 we assigned a poison value to paca->data_offset. Unfortunately with CONFIG_LOCK_STAT=y lockdep will read & write to percpu data very early in boot, prior to us initialising the percpu areas, leading to a crash.

We have been getting away with this because the data_offset was previously set to zero. This causes lockdep to read & write to the initial copy of the percpu variables, which are discarded later in boot. Although that is "fishy", it does work, and for lock statistics it is no big deal to discard the counts from early boot.

So set paca->data_offset = 0 for the boot cpu paca only.

Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 Mar 2012, 1 commit
-
-
Committed by David Howells
Disintegrate asm/system.h for PowerPC.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
cc: linuxppc-dev@lists.ozlabs.org
-
- 05 Mar 2012, 1 commit
-
-
Committed by Alexander Graf
We have code to allocate big chunks of linear memory on bootup for later use. This code is currently used for RMA allocation, but can be useful beyond that. Make it generic so we can reuse it for other things later.

Signed-off-by: Alexander Graf <agraf@suse.de>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 07 Dec 2011, 1 commit
-
-
Committed by Becky Bruce
For 64-bit FSL_BOOKE implementations, gigantic pages need to be reserved at boot time by the memblock code based on the command line. This adds the call that handles the reservation, and fixes some code comments.

It also removes the previous pr_err when reserve_hugetlb_gpages is called on a system without hugetlb enabled - the way the code is structured, the call is unconditional and the resulting error message is spurious and confusing.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 16 Nov 2011, 1 commit
-
-
Committed by Anton Blanchard
kdump fails because we try to execute an HV-only instruction. Feature fixups are being applied after we copy the exception vectors down to 0, so they miss out on any updates.

We have always had this issue but it only became critical in v3.0 when we added CFAR support (breaks POWER5) and v3.1 when we added POWERNV (breaks everyone).

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: <stable@kernel.org> [v3.0+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 01 Nov 2011, 1 commit
-
-
Committed by Paul Gortmaker
All these files were including module.h just for the basic EXPORT_SYMBOL infrastructure. We can shift them off to the export.h header, which has a much smaller footprint, and thus realize some compile-time gains.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 20 Sep 2011, 2 commits
-
-
Committed by Anton Blanchard
While converting code to use for_each_node_by_type I noticed a number of coding style issues.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Use for_each_node_by_type instead of open coding it.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
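For reference, a minimal use of the iterator (a standard helper from <linux/of.h>; the node type and body here are illustrative):

    struct device_node *np;
    int ncpus = 0;

    /* visits every device-tree node of type "cpu"; reference counting
     * on the nodes is handled by the iterator as it advances */
    for_each_node_by_type(np, "cpu")
            ncpus++;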
-
- 12 Jul 2011, 1 commit
-
-
Committed by Paul Mackerras
This adds infrastructure which will be needed to allow book3s_hv KVM to run on older POWER processors, including PPC970, which don't support the Virtual Real Mode Area (VRMA) facility, but only the Real Mode Offset (RMO) facility. These processors require a physically contiguous, aligned area of memory for each guest. When the guest does an access in real mode (MMU off), the address is compared against a limit value, and if it is lower, the address is ORed with an offset value (from the Real Mode Offset Register (RMOR)) and the result becomes the real address for the access. The size of the RMA has to be one of a set of supported values, which usually includes 64MB, 128MB, 256MB and some larger powers of 2.

Since we are unlikely to be able to allocate 64MB or more of physically contiguous memory after the kernel has been running for a while, we allocate a pool of RMAs at boot time using the bootmem allocator. The size and number of the RMAs can be set using the kvm_rma_size=xx and kvm_rma_count=xx kernel command line options.

KVM exports a new capability, KVM_CAP_PPC_RMA, to signal the availability of the pool of preallocated RMAs. The capability value is 1 if the processor can use an RMA but doesn't require one (because it supports the VRMA facility), or 2 if the processor requires an RMA for each guest.

This adds a new ioctl, KVM_ALLOCATE_RMA, which allocates an RMA from the pool and returns a file descriptor which can be used to map the RMA. It also returns the size of the RMA in the argument structure.

Having an RMA means we will get multiple KVM_SET_USER_MEMORY_REGION ioctl calls from userspace. To cope with this, we now preallocate the kvm->arch.ram_pginfo array when the VM is created, with a size sufficient for up to 64GB of guest memory. Subsequently we will get rid of this array and use memory associated with each memslot instead.

This moves most of the code that translates the user addresses into host pfns (page frame numbers) out of kvmppc_prepare_vrma up one level to kvmppc_core_prepare_memory_region. Also, instead of having to look up the VMA for each page in order to check the page size, we now check that the pages we get are compound pages of 16MB. However, if we are adding memory that is mapped to an RMA, we don't bother with calling get_user_pages_fast and instead just offset from the base pfn for the RMA.

Typically the RMA gets added after vcpus are created, which makes it inconvenient to have the LPCR (logical partition control register) value in the vcpu->arch struct, since the LPCR controls whether the processor uses RMA or VRMA for the guest. This moves the LPCR value into the kvm->arch struct and arranges for the MER (mediated external request) bit, which is the only bit that varies between vcpus, to be set in assembly code when going into the guest if there is a pending external interrupt request.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
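A sketch of how userspace would consume the new interface as described above; error handling is elided and the field name is per the text (treat the exact structure layout as illustrative):

    #include <sys/ioctl.h>
    #include <sys/mman.h>

    struct kvm_allocate_rma rma;
    void *rma_host_va;
    int rma_fd;

    /* take one RMA from the boot-time pool; the returned fd is mmapable */
    rma_fd = ioctl(vm_fd, KVM_ALLOCATE_RMA, &rma);
    if (rma_fd >= 0) {
            rma_host_va = mmap(NULL, rma.rma_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, rma_fd, 0);
            /* then hand rma_host_va to KVM via KVM_SET_USER_MEMORY_REGION */
    }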
-
- 17 Jun 2011, 1 commit
-
-
Committed by Matt Evans
smp_release_cpus() waits for all cpus (including the boot cpu) due to an off-by-one count on boot_cpu_count (which is all CPUs). This patch replaces that with spinning_secondaries (which is all secondary CPUs).

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
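A simplified sketch of the corrected wait; the real loop also has memory barriers and a timeout message (details here are illustrative):

    extern u32 spinning_secondaries;  /* set to "all secondaries" early on */

    void smp_release_cpus(void)
    {
            int i;

            /* ... open the holding pen for the secondaries ... */

            /* wait for the secondaries only; also counting the boot cpu
             * is the off-by-one this commit removes */
            for (i = 0; i < 100000; i++) {
                    if (spinning_secondaries == 0)
                            break;
                    udelay(1);
            }
    }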
-
- 19 May 2011, 1 commit
-
-
Committed by Kumar Gala
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 06 May 2011, 1 commit
-
-
Committed by Benjamin Herrenschmidt
slb0_limit() wasn't a very descriptive name. This changes it, along with a comment explaining what it's used for, and provides a 64-bit BookE implementation.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 Apr 2011, 1 commit
-
-
Committed by Matt Evans
Some of the 64-bit PPC CPU features are MMU-related, so this patch moves them to MMU_FTR_ bits. All cpu_has_feature()-style tests are moved to mmu_has_feature(), and seven feature bits are freed as a result.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
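A sketch of the resulting call-site change; the SLB feature bit is used as an example and its exact name is illustrative:

    /* before: an MMU property tested through the CPU feature mask */
    if (cpu_has_feature(CPU_FTR_SLB))
            slb_initialize();

    /* after: the same property queried from the MMU feature mask */
    if (mmu_has_feature(MMU_FTR_SLB))
            slb_initialize();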
-
- 20 Apr 2011, 1 commit
-
-
Committed by Benjamin Herrenschmidt
We need to wait a bit for the secondary threads to have done their CPU setup, or we might end up with translation and EE on with different LPCR values between threads.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 Dec 2010, 1 commit
-
-
Committed by Anton Blanchard
We now allow interrupt stacks anywhere in the first segment, which can be 256M or 1TB. Fix the comment.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 18 Nov 2010, 1 commit
-
-
Committed by Alessio Igor Bogani
Commit 5e3d20a6 removed the BKL from the startup code, so setup_arch() is no longer called with the BKL held. Update the comment on top of that function, and also fix a typo.

This work was supported by a hardware donation from the CE Linux Forum.

Signed-off-by: Alessio Igor Bogani <abogani@texware.it>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 24 Aug 2010, 1 commit
-
-
Committed by Nathan Fontenot
The 'smt_enabled=X' boot option does not handle values of X > 2, which does not work for POWER7 processors with SMT modes of 0, 1, 2, 3, and 4. This patch allows the smt_enabled option to be set to any value, limited to a maximum equal to the number of threads per core.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
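A sketch of the clamping behaviour described above (variable names are illustrative; the boot option would be e.g. smt_enabled=4 on a POWER7):

    /* parse 'smt_enabled=X' from the kernel command line */
    smt = simple_strtoul(val, NULL, 0);

    /* accept any X, but never more threads than the core provides */
    if (smt > threads_per_core)
            smt = threads_per_core;
    smt_enabled_at_boot = smt;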
-
- 05 Aug 2010, 1 commit
-
-
Committed by Benjamin Herrenschmidt
The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact server ppc64, though I hijack it on embedded ppc64 for similar purposes) and represents the area of memory that can be accessed in real mode (aka with MMU off), or on embedded, from the exception vectors (which are bolted in the TLB), which pretty much boils down to the same thing.

We take that out of the generic MEMBLOCK data structure and move it into arch/powerpc where it belongs, renaming it to "RMA" while at it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 31 Jul 2010, 1 commit
-
-
Committed by Matt Evans
With dynamic PACAs, the kexecing CPU's PACA won't lie within the kernel static data, and there is a chance that something may stomp it when preparing to kexec. This patch switches this final CPU to a static PACA just before we pull the switch.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 14 Jul 2010, 1 commit
-
-
Committed by Yinghai Lu
Renamed lmb to memblock via the following script:

    FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

    sed -i \
        -e 's/lmb/memblock/g' \
        -e 's/LMB/MEMBLOCK/g' \
        $FILES

    for N in $(find . -name lmb.[ch]); do
        M=$(echo $N | sed 's/lmb/memblock/g')
        mv $N $M
    done

then reverted some incorrect changes, such as lmbench and dlmb etc. Also move memblock.c from lib/ to mm/.

Suggested-by: Ingo Molnar <mingo@elte.hu>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 Jul 2010, 1 commit
-
-
Committed by Anton Blanchard
Now that we dynamically allocate the paca array, it takes an extra load whenever we want to access another cpu's paca. One place we do that a lot is per cpu variables. A simple example:

    DEFINE_PER_CPU(unsigned long, vara);

    unsigned long test4(int cpu)
    {
            return per_cpu(vara, cpu);
    }

This takes 4 loads, 5 if you include the actual load of the per cpu variable:

    ld   r11,-32760(r30)   # load address of paca pointer
    ld   r9,-32768(r30)    # load link address of percpu variable
    sldi r3,r29,9          # get offset into paca (each entry is 512 bytes)
    ld   r0,0(r11)         # load paca pointer
    add  r3,r0,r3          # paca + offset
    ld   r11,64(r3)        # load paca[cpu].data_offset
    ldx  r3,r9,r11         # load per cpu variable

If we remove the ppc64-specific per_cpu_offset(), we get the generic one which indexes into a statically allocated array. This removes one load and one add:

    ld   r11,-32760(r30)   # load address of __per_cpu_offset
    ld   r9,-32768(r30)    # load link address of percpu variable
    sldi r3,r29,3          # get offset into __per_cpu_offset (each entry 8 bytes)
    ldx  r11,r11,r3        # load __per_cpu_offset[cpu]
    ldx  r3,r9,r11         # load per cpu variable

Having all the offsets in one array also helps when iterating over a per cpu variable across a number of cpus, such as in the scheduler. Before, we would need to load one paca cacheline when calculating each per cpu offset. Now we have 16 (128 / sizeof(long)) per cpu offsets in each cacheline.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 15 Jun 2010, 1 commit
-
-
Committed by Christoph Hellwig
Irq stacks provide essential protection from stack overflows through external interrupts, at the cost of two additional stacks per CPU. Enable them unconditionally to simplify the kernel build and prevent people from accidentally disabling them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 21 May 2010, 2 commits
-
-
Committed by Anton Blanchard
Author: Milton Miller <miltonm@bga.com>

On large machines we are running out of room below 256MB. In some cases we only need to ensure the allocation is in the first segment, which may be 256MB or 1TB.

Add slb0_limit and use it to specify the upper limit for the irqstack and emergency stacks.

On a large ppc64 box, this fixes a panic at boot when the crashkernel= option is specified (previously we would run out of memory below 256MB).

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Milton Miller
Configuring a powerpc 32-bit kernel for both SMP and SUSPEND turns on CPU_HOTPLUG to enable disable_nonboot_cpus to be called by the common suspend code. Previously the definition of cpu_die for ppc32 was in the powermac platform code, causing it to be undefined if that platform was not selected:

    arch/powerpc/kernel/built-in.o: In function 'cpu_idle':
    arch/powerpc/kernel/idle.c:98: undefined reference to 'cpu_die'

Move the code from setup_64 to smp.c and rename the powermac versions to their specific names.

Note that this does not set up the cpu_die pointers in either smp_ops (request a given cpu die) or ppc_md (make this cpu die) for other platforms, but there are generic versions in smp.c.

Reported-by: Matt Sealey <matt@genesi-usa.com>
Reported-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Anton Vorontsov <avorontsov@mvista.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 19 Mar 2010, 1 commit
-
-
Committed by FUJITA Tomonori
powerpc initializes swiotlb before parsing the kernel boot options, so swiotlb options (e.g. specifying the swiotlb buffer size) are ignored. Any time before freeing bootmem works for swiotlb, so this patch moves powerpc's swiotlb initialization to after the kernel boot options are parsed, into mem_init (as x86 does).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Becky Bruce <beckyb@kernel.crashing.org>
Tested-by: Albert Herranz <albert_herranz@yahoo.es>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 Mar 2010, 1 commit
-
-
Committed by Michael Ellerman
On 64-bit kernels we currently have a 512 byte struct paca_struct for each cpu (usually just called "the paca"). Currently they are statically allocated, which means a kernel built for a large number of cpus will waste a lot of space if it's booted on a machine with few cpus.

We can avoid that by only allocating the number of pacas we need at boot. However this is complicated by the fact that we need to access the paca before we know how many cpus there are in the system.

The solution is to dynamically allocate enough space for NR_CPUS pacas, but then later in boot, when we know how many cpus we have, we free any unused pacas.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
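A simplified sketch of that allocate-then-trim scheme; the function names follow the description, but the allocator calls and limits are illustrative:

    static u64 paca_size;

    void __init allocate_pacas(void)
    {
            /* worst case first: room for every possible cpu */
            paca_size = PAGE_ALIGN(sizeof(struct paca_struct) * NR_CPUS);
            paca = __va(memblock_alloc_base(paca_size, PAGE_SIZE,
                                            0x10000000 /* keep it low */));
            memset(paca, 0, paca_size);
    }

    void __init free_unused_pacas(void)
    {
            u64 new_size;

            /* by now nr_cpu_ids reflects the machine we booted on */
            new_size = PAGE_ALIGN(sizeof(struct paca_struct) * nr_cpu_ids);
            if (new_size < paca_size)
                    memblock_free(__pa(paca) + new_size,
                                  paca_size - new_size);
            paca_size = new_size;
    }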
-
- 10 Nov 2009, 1 commit
-
-
Committed by FUJITA Tomonori
This enables us to avoid printing swiotlb memory info when we initialize swiotlb. After swiotlb initialization, we could find that we don't need swiotlb. This patch removes the code to print swiotlb memory info in swiotlb_init() and exports the function to do that.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: chrisw@sous-sol.org
Cc: dwmw2@infradead.org
Cc: joerg.roedel@amd.com
Cc: muli@il.ibm.com
Cc: tony.luck@intel.com
Cc: benh@kernel.crashing.org
LKML-Reference: <1257849980-22640-9-git-send-email-fujita.tomonori@lab.ntt.co.jp>
[ -v2: merge up conflict ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 30 Oct 2009, 1 commit
-
-
Committed by Michael Ellerman
Defining CONFIG_SPARSE_IRQ enables generic code that gets rid of the static irq_desc array and replaces it with an array of pointers to irq_descs. It also allows node-local allocation of irq_descs; however, we currently don't have the information available to do that, so we just allocate them all on node 0.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 Oct 2009, 1 commit
-
-
Committed by Kumar Gala
Fix the following 3 issues:

    arch/powerpc/kernel/process.c: In function 'arch_randomize_brk':
    arch/powerpc/kernel/process.c:1183: error: 'mmu_highuser_ssize' undeclared (first use in this function)
    arch/powerpc/kernel/process.c:1183: error: (Each undeclared identifier is reported only once
    arch/powerpc/kernel/process.c:1183: error: for each function it appears in.)
    arch/powerpc/kernel/process.c:1183: error: 'MMU_SEGSIZE_1T' undeclared (first use in this function)

    In file included from arch/powerpc/kernel/setup_64.c:60:
    arch/powerpc/include/asm/mmu-hash64.h:132: error: redefinition of 'struct mmu_psize_def'
    arch/powerpc/include/asm/mmu-hash64.h:159: error: expected identifier or '(' before numeric constant
    arch/powerpc/include/asm/mmu-hash64.h:396: error: conflicting types for 'mm_context_t'
    arch/powerpc/include/asm/mmu-book3e.h:184: error: previous declaration of 'mm_context_t' was here

    cc1: warnings being treated as errors
    arch/powerpc/kernel/pci_64.c: In function 'pcibios_unmap_io_space':
    arch/powerpc/kernel/pci_64.c:100: error: unused variable 'res'

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 20 Aug 2009, 2 commits
-
-
Committed by Benjamin Herrenschmidt
This contains all the bits that didn't fit in previous patches :-) This includes the actual exception handler assembly, the changes to the kernel entry, other misc bits, and wiring it all up in Kconfig.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
This adds the TLB miss handler assembly and the low-level TLB flush routines, along with the necessary hooks for dealing with our virtual page tables or indirect TLB entries that need to be flushed when PTE pages are freed.

There is currently no support for hugetlbfs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-