- 29 Oct 2014, 2 commits
-
-
By Nishanth Aravamudan
We received a report of a warning in kernel/sched/core.c where the sched group was NULL on an LPAR after a topology update. This seems to occur because, after the topology update has moved the CPUs, cpu_to_node() still returns the old value, which ends up breaking the consistency of the NUMA topology in the per-cpu maps. Ensure that we update the per-cpu fields when we re-map CPUs. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
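A minimal sketch of the idea, assuming the generic set_cpu_numa_node()/set_cpu_numa_mem() helpers; the wrapper name and call site are illustrative, not the exact patch:

    #include <linux/topology.h>
    #include <linux/mmzone.h>

    /* Hypothetical helper: called when a topology update moves @cpu to
     * @new_nid.  The point is that the per-cpu numa_node/numa_mem fields
     * must be updated along with the arch-level cpu-to-node map, so that
     * cpu_to_node() and the per-cpu view stay consistent. */
    static void remap_cpu_numa_fields(int cpu, int new_nid)
    {
        set_cpu_numa_node(cpu, new_nid);                   /* per-cpu numa_node */
        set_cpu_numa_mem(cpu, local_memory_node(new_nid)); /* per-cpu numa_mem  */
    }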
-
By Nishanth Aravamudan
There isn't any need to keep referring to update->cpu, as we've already checked cpu == update->cpu at this point. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 28 Oct 2014, 3 commits
-
-
By Peter Zijlstra
Andy reported that the current state of event_idx is rather confused. So remove all but the x86_pmu implementation and change the default to return 0 (the safe option). Reported-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Christoph Lameter <cl@linux.com> Cc: Cody P Schafer <cody@linux.vnet.ibm.com> Cc: Cody P Schafer <dev@codyps.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Himangi Saraogi <himangi774@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Paul Mackerras <paulus@samba.org> Cc: sukadev@linux.vnet.ibm.com <sukadev@linux.vnet.ibm.com> Cc: Thomas Huth <thuth@linux.vnet.ibm.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: linux390@de.ibm.com Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-s390@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
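For illustration, a hedged sketch of what the "return 0" default for the ->event_idx callback looks like (the exact callback name in kernel/events/core.c is assumed here):

    /* Safe default: report no user-visible counter index.  Only PMUs that
     * really support userspace counter reads (e.g. x86) override
     * ->event_idx with their own implementation. */
    static int perf_event_idx_default(struct perf_event *event)
    {
        return 0;
    }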
-
By Ian Munsie
This patch makes copro_calculate_slb() mask the ESID by the correct mask for 1T vs 256M segments. This has no effect by itself, as the extra bits were previously ignored, but it makes debugging the segment table entries easier and means that we can directly compare ESID values for duplicates without needing to worry about masking in the comparison. This will be used to simplify a comparison in the following patch. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
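A minimal sketch of the masking, assuming the standard ESID_MASK/ESID_MASK_1T definitions from the hash MMU headers (the helper is illustrative, not the patch itself):

    #include <asm/mmu-hash64.h>

    /* Mask an effective address down to its ESID, using the mask that
     * matches the segment size, so 1T and 256M ESIDs compare directly. */
    static inline unsigned long ea_to_esid(unsigned long ea, int ssize)
    {
        return ea & (ssize == MMU_SEGSIZE_1T ? ESID_MASK_1T : ESID_MASK);
    }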
-
By Michael Ellerman
This reverts commit bf7588a0. Ben says that although the code is not correct, "[this] fix was completely wrong and does more damages than it fixes things." Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 23 Oct 2014, 1 commit
-
-
By Jeremy Kerr
Currently, we can't call opal wrappers from modules when using the LE ABIv2, which requires a TOC init. If we do, we'll try to load the opal entry point using the wrong TOC and probably explode, or worse, jump to the wrong address. Nothing in upstream is making opal calls from a module, but we do export one of the wrappers, so we should fix this anyway. This change uses the _GLOBAL_TOC() macro (rather than _GLOBAL) for the opal wrappers, so that we can do non-local calls to them. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 22 Oct 2014, 3 commits
-
-
By Pranith Kumar
This patch wires up the new syscall sys_bpf() on powerpc. It passes the tests in samples/bpf:

    #0 add+sub+mul OK
    #1 unreachable OK
    #2 unreachable2 OK
    #3 out of range jump OK
    #4 out of range jump2 OK
    #5 test1 ld_imm64 OK
    #6 test2 ld_imm64 OK
    #7 test3 ld_imm64 OK
    #8 test4 ld_imm64 OK
    #9 test5 ld_imm64 OK
    #10 no bpf_exit OK
    #11 loop (back-edge) OK
    #12 loop2 (back-edge) OK
    #13 conditional loop OK
    #14 read uninitialized register OK
    #15 read invalid register OK
    #16 program doesn't init R0 before exit OK
    #17 stack out of bounds OK
    #18 invalid call insn1 OK
    #19 invalid call insn2 OK
    #20 invalid function call OK
    #21 uninitialized stack1 OK
    #22 uninitialized stack2 OK
    #23 check valid spill/fill OK
    #24 check corrupted spill/fill OK
    #25 invalid src register in STX OK
    #26 invalid dst register in STX OK
    #27 invalid dst register in ST OK
    #28 invalid src register in LDX OK
    #29 invalid dst register in LDX OK
    #30 junk insn OK
    #31 junk insn2 OK
    #32 junk insn3 OK
    #33 junk insn4 OK
    #34 junk insn5 OK
    #35 misaligned read from stack OK
    #36 invalid map_fd for function call OK
    #37 don't check return value before access OK
    #38 access memory with incorrect alignment OK
    #39 sometimes access memory with incorrect alignment OK
    #40 jump test 1 OK
    #41 jump test 2 OK
    #42 jump test 3 OK
    #43 jump test 4 OK

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> [mpe: test using samples/bpf] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Aneesh Kumar K.V
Remove the check of CONFIG_PPC_SUBPAGE_PROT when deciding whether is_hugepage_only_range() is extern or inline. The extern version is in slice.c and is built if CONFIG_PPC_MM_SLICES=y. There was no build break possible, because CONFIG_PPC_SUBPAGE_PROT is only selectable under conditions which also mean CONFIG_PPC_MM_SLICES will be selected. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Aneesh Kumar K.V
Fix the following build error:

    arch/powerpc/mm/slice.c:704:5: error: expected identifier or ‘(’ before numeric constant
     int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
         ^
    make[1]: *** [arch/powerpc/mm/slice.o] Error 1
    make: *** [arch/powerpc/mm/slice.o] Error 2

This got introduced via 1217d34b ("powerpc: Ensure global functions include their prototype"). We started including linux/hugetlb.h with that patch, and with hugetlbfs disabled we now have:

    #define is_hugepage_only_range(mm, addr, len) 0

Fixes: 1217d34b ("powerpc: Ensure global functions include their prototype") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 17 Oct 2014, 1 commit
-
-
By Michael Ellerman
Scott's patch 1c98025c ("Dynamic DMA zone limits") changed dma_direct_alloc_coherent() to start using dev->coherent_dma_mask. That seems fair enough, but it exposes the fact that some of the drivers we care about on IBM platforms aren't setting the coherent mask. The proper fix is to have drivers set the coherent mask and also have the platform code honor it. For now, just restrict the dynamic DMA zone limits to the platforms that need it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Scott Wood <scottwood@freescale.com>
-
- 16 Oct 2014, 4 commits
-
-
By Anton Blanchard
Now that KVM is working on LE, enable it. Also enable transparent hugepages, which have already been enabled on BE. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Michael Ellerman
Commit 0b0b0893 ("of/pci: Fix the conversion of IO ranges into IO resources") changed the behaviour of of_pci_range_to_resource(). Previously it simply populated the resource based on the arguments. Now it calls pci_register_io_range() and pci_address_to_pio(). These both have two implementations depending on whether PCI_IOBASE is defined, which it is not for powerpc. Further complicating matters, both routines are weak, and powerpc implements its own version of one of them - pci_address_to_pio(). However powerpc's implementation depends on other initialisations which are done later in boot. The end result is incorrectly initialised IO space. Often we can get away with that, because we don't make much use of IO space. However virtio requires it, so we see e.g.:

    pci_bus 0000:00: root bus resource [io 0xffff] (bus address [0xffffffffffffffff-0xffffffffffffffff])
    PCI: Cannot allocate resource region 0 of device 0000:00:01.0, will remap
    virtio-pci 0000:00:01.0: can't enable device: BAR 0 [io size 0x0020] not assigned

The simplest fix for now is to just stop using of_pci_range_to_resource(), and open-code the original implementation; that's all we want it to do. Fixes: 0b0b0893 ("of/pci: Fix the conversion of IO ranges into IO resources") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Greg Kurz
The associativity domain numbers are obtained from the hypervisor through registers and written into memory by the guest: the packed array passed to vphn_unpack_associativity() is then native-endian, unlike what was assumed in the following commit:

    commit b08a2a12
    Author: Alistair Popple <alistair@popple.id.au>
    Date:   Wed Aug 7 02:01:44 2013 +1000

        powerpc: Make NUMA device node code endian safe

This issue fills the topology with bogus data and makes it unusable. It may lead to severe performance breakdowns. We should ideally patch the vphn_unpack_associativity() function to do 64-bit loads, but this requires some more brainstorming. In the meantime, let's go for a suboptimal and temporary bug fix: this patch converts each 64-bit value of the packed array to big endian, as expected by the current parsing code in vphn_unpack_associativity(). Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
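A hedged sketch of the temporary fix described above: convert each native-endian 64-bit element of the packed array to big endian before handing it to the existing parser (the helper name and array length are illustrative):

    #include <asm/byteorder.h>

    /* The hypervisor returns the packed associativity data in registers,
     * so it lands in memory native-endian; vphn_unpack_associativity()
     * expects big endian, so convert in place first. */
    static void vphn_packed_to_be64(u64 *packed, int n)
    {
        int i;

        for (i = 0; i < n; i++)
            packed[i] = cpu_to_be64(packed[i]);
    }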
-
- 15 Oct 2014, 13 commits
-
-
By Michael Ellerman
As demonstrated in the previous commit, the failure message from the msi bitmap selftests is a bit subtle; it's easy to miss a failure in a busy boot log. So drop our check() macro and use WARN_ON() instead. This necessitates inverting all the conditions as well. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
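A hedged before/after sketch of the change in style (the check() macro and the test condition are paraphrased, not copied from the patch):

    /* Before: a quiet printk-style failure, easy to miss in the boot log. */
    check(rc >= 0);

    /* After: WARN_ON() produces a loud backtrace; note the condition is
     * inverted, since WARN_ON() fires when its argument is true. */
    WARN_ON(rc < 0);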
-
By Michael Ellerman
When we added the alignment tests recently, we failed to check that they were actually passing - oops. They weren't passing, because the bitmap was full. We should also be a bit more careful when checking the return code; a negative error return could be divisible by our alignment value. Fixes: b0345bbc ("powerpc/msi: Improve IRQ bitmap allocator") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
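A minimal sketch of the more careful check (the bitmap and the alignment of 16 are illustrative; msi_bitmap_alloc_hwirqs() returns a hwirq offset or a negative errno):

    /* struct msi_bitmap bmp has been initialised by the selftest. */
    int rc = msi_bitmap_alloc_hwirqs(&bmp, 16);

    /* A bare "rc % 16" test would let an error such as -EBUSY (-16)
     * slip through, since it happens to be divisible by the alignment.
     * Reject errors first, then check the alignment of the result. */
    WARN_ON(rc < 0 || rc % 16 != 0);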
-
By Gavin Shan
The Broadcom Shiner 2-port 10G ethernet adapter has the same problem that commit 6f20bda0 ("powerpc/eeh: Block PCI config access upon frozen PE") fixes. Put it on the black list as well.

    # lspci -s 0004:01:00.0
    0004:01:00.0 Ethernet controller: Broadcom Corporation \
                 NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
    # lspci -n -s 0004:01:00.0
    0004:01:00.0 0200: 14e4:168e (rev 10)

Reported-by: John Walthour <jwalthour@us.ibm.com> Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan
When the PE's config space is marked as blocked, PCI config read requests always return 0xFF's. It's pointless to collect logs in this case. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
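A hedged sketch of the early bail-out; the function name is hypothetical and the flag comes from the related patches in this series:

    /* Hypothetical log-collection path: while the PE's config space is
     * blocked, every config read comes back as 0xFF, so the dump would
     * contain no useful information - skip it. */
    static void eeh_pe_collect_config_log(struct eeh_pe *pe, char *buf, size_t len)
    {
        if (pe->state & EEH_PE_CFG_BLOCKED)
            return;

        /* ... dump the PE's PCI config space into buf ... */
    }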
-
By Gavin Shan
The problem was found when I tried to inject a PCI config error via the PHB3 PAPR error injection registers into a Broadcom Austin 4-port NIC adapter. The frozen PE was reported successfully and the EEH core started to recover it. However, I ran into a fenced PHB when dumping PCI config space for the EEH logs. I was told that PCI config requests should not be propagated to the adapter until the PE reset is done successfully. Otherwise, we would run out of PHB internal credits and trigger a PCT (PCIe Completion Timeout), which leads to the fenced PHB. The patch introduces another PE flag, EEH_PE_CFG_RESTRICTED, which is set at PE initialization time if the PE includes the specific PCI devices that need PCI config access blocked until the PE reset is done. When the PE becomes frozen for the first time, EEH_PE_CFG_BLOCKED is set if the PE has the EEH_PE_CFG_RESTRICTED flag. PCI config access to the PE is then dropped by the platform PCI accessors until the PE reset completes successfully. The mechanism is shared by PowerNV platform owned PEs and userland owned ones. It's not used on the pSeries platform yet. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
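A hedged sketch of the two-flag interaction described above; the helper name and call sites are assumptions, only the flag names come from the series:

    /* At PE initialization: devices known to misbehave get the
     * "restricted" flag (pe_has_blacklisted_device() is hypothetical). */
    if (pe_has_blacklisted_device(pe))
        pe->state |= EEH_PE_CFG_RESTRICTED;

    /* When the PE is first reported frozen: actually block config access,
     * and keep it blocked until the PE reset completes successfully. */
    if (pe->state & EEH_PE_CFG_RESTRICTED)
        pe->state |= EEH_PE_CFG_BLOCKED;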
-
By Gavin Shan
The pSeries EEH config accessors rely on rtas_{read,write}_config(), so the condition that checks whether the PE's config space is blocked should be moved into those two functions. That way config requests from the kernel, userland and the EEH core can all be dropped when necessary, avoiding recursive EEH errors. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan
It's a bad idea to access the PCI config registers of an adapter which is undergoing reset; it invariably leads to recursive EEH errors. The patch drops PCI config requests in the EEH accessors if the PE has been marked as not accepting PCI config requests, for example during PE reset time. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan
The flag EEH_PE_RESET indicates that the PE's config space is blocked during reset time. We potentially need to block the PE's config space at times other than reset, so it's reasonable to replace it with EEH_PE_CFG_BLOCKED to reflect its usage. There are no substantial code or logic changes in this patch. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan
Function eeh_pe_state_mark() can be passed a combination of multiple EEH PE state flags as its argument. The patch fixes the condition used to check whether EEH_PE_ISOLATED is included. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
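A minimal sketch of the kind of fix described: when the argument may be a combination of flags, the inclusion test must be a bitwise AND, not an equality (function body is illustrative only):

    static void eeh_pe_state_mark_one(struct eeh_pe *pe, int state)
    {
        pe->state |= state;

        /* state may be e.g. (EEH_PE_ISOLATED | EEH_PE_RECOVERING), so
         * "state == EEH_PE_ISOLATED" would miss it; test for inclusion. */
        if (state & EEH_PE_ISOLATED) {
            /* ... ISOLATED-specific handling ... */
        }
    }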
-
By Bharata B Rao
- ibm,rtas-configure-connector should treat the RTAS data as big endian.
- Treat ibm,ppc-interrupt-server#s as big endian when setting smp_processor_id during hotplug.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Acked-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
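A hedged sketch of the endian handling for the second point: device-tree/RTAS data is big endian and must be converted before use on LE (dn is the hotplugged CPU's device_node; the surrounding hotplug code is assumed):

    const __be32 *intserv;
    int len, hwid;

    intserv = of_get_property(dn, "ibm,ppc-interrupt-server#s", &len);
    if (intserv)
        hwid = be32_to_cpu(intserv[0]);  /* convert before using the thread id */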
-
By Anton Blanchard
We can use the simpler dump_stack() instead of show_stack(current, __get_SP()). Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Anton Blanchard
Michael points out that __get_SP() is a pretty horrible function name. Let's give it a better name. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Anton Blanchard
Li Zhong points out an issue with our current __get_SP() implementation. If ftrace function tracing is enabled (i.e. -pg profiling using _mcount) we spill a stack frame on 64bit all the time. If a function calls __get_SP() and later calls a function that is tail call optimised, we will pop the stack frame and the value returned by __get_SP() is no longer valid. An example from Li can be found in save_stack_trace -> save_context_stack:

    c0000000000432c0 <.save_stack_trace>:
    c0000000000432c0:   mflr    r0
    c0000000000432c4:   std     r0,16(r1)
    c0000000000432c8:   stdu    r1,-128(r1)   <-- stack frame for _mcount
    c0000000000432cc:   std     r3,112(r1)
    c0000000000432d0:   bl      <._mcount>
    c0000000000432d4:   nop
    c0000000000432d8:   mr      r4,r1         <-- __get_SP()
    c0000000000432dc:   ld      r5,632(r13)
    c0000000000432e0:   ld      r3,112(r1)
    c0000000000432e4:   li      r6,1
    c0000000000432e8:   addi    r1,r1,128     <-- pop stack frame
    c0000000000432ec:   ld      r0,16(r1)
    c0000000000432f0:   mtlr    r0
    c0000000000432f4:   b       <.save_context_stack>   <-- tail call optimized

save_context_stack ends up with a stack pointer below the current one, and it is likely to be scribbled over. Fix this by making __get_SP() a function which returns the caller's stack frame. Also replace the inline assembly which grabs the stack pointer in save_stack_trace and show_stack with __get_SP(). This also fixes an issue with perf_arch_fetch_caller_regs(). It currently unwinds the stack once, which will skip a valid stack frame on a leaf function. With the __get_SP() fixes in this patch, we never need to unwind the stack frame to get to the first interesting frame. We have to export __get_SP() because perf_arch_fetch_caller_regs() (which is used in modules) calls it from a header file. Reported-by: Li Zhong <zhong@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
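A hedged sketch of the caller side of the change (the save_context_stack() parameters are assumed from context, not copied from the patch):

    void save_stack_trace(struct stack_trace *trace)
    {
        unsigned long sp;

        /* Previously: inline asm grabbing r1 directly, roughly
         *     asm("mr %0,1" : "=r" (sp));
         * Now: a real out-of-line call, which by definition returns the
         * stack pointer of *this* frame, even with -pg/_mcount profiling
         * and tail-call optimisation in play. */
        sp = __get_SP();

        save_context_stack(trace, sp, current, 1);
    }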
-
- 13 Oct 2014, 3 commits
-
-
By Nishanth Aravamudan
We have hit a few customer issues with the topology update code (VPHN and PRRN). It would be nice to be able to debug the notifications coming from the hypervisor to the LPAR in both cases, as well as to disable responding to the notifications at boot time, to narrow down the source of the problems. Add a basic level of such functionality, similar to the numa= command-line parameter. We already have a toggle in /proc/powerpc/topology_updates that allows run-time enabling/disabling, so the updates can be started at run time if desired. But the bugs we've run into have occurred during boot or very shortly after reaching login, and have resulted in a broken NUMA topology. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
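A minimal sketch of a boot-time toggle in the style of numa=; the parameter name is taken from the description above, while the parsing details are assumptions:

    static bool topology_updates_enabled = true;

    static int __init early_topology_updates(char *p)
    {
        if (!p)
            return 0;

        if (!strcmp(p, "off")) {
            pr_info("Disabling topology updates\n");
            topology_updates_enabled = false;
        }

        return 0;
    }
    early_param("topology_updates", early_topology_updates);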
-
By Nishanth Aravamudan
proc_create() can fail, so we should check the return value and pass up the failure. Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
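A minimal sketch of the check; the proc file name, mode and file_operations name are illustrative:

    static int topology_update_init(void)
    {
        if (!proc_create("powerpc/topology_updates", 0644, NULL,
                         &topology_ops))
            return -ENOMEM;

        return 0;
    }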
-
By Mahesh Salgaonkar
Recently we moved HMI handling into the Linux kernel instead of taking HMIs directly in OPAL. This new change depends on a new OPAL call for HMI recovery which was introduced in newer firmware. While this works fine with the latest OPAL firmware, we broke HMI handling when running a newer kernel on old OPAL firmware, resulting in a system hang. This patch fixes the issue by falling back to the old HMI behaviour on older OPAL firmware. It introduces a check for the opal token OPAL_HANDLE_HMI to see whether we are running on newer or old firmware. On newer firmware this check returns OPAL_TOKEN_PRESENT; otherwise we are running on old firmware and fall back to the old HMI behaviour.

    Old firmware:   POWER8 System Firmware Release as of today <= SV810_087.
                    Action: Let OPAL handle HMIs.
    Newer firmware: in development / yet to be released.
                    Action: Let the Linux host handle HMIs.

This patch depends on the opal check-token patch posted to linuxppc-dev: https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-August/120224.html Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> [mpe: Minor comment and printk rewording] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
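A hedged sketch of the firmware-capability check described above (the flag variable is illustrative; the token name and return value come from the description):

    /* Newer firmware implements OPAL_HANDLE_HMI, so the Linux host handles
     * HMIs; on older firmware, fall back to letting OPAL handle them. */
    if (opal_check_token(OPAL_HANDLE_HMI) == OPAL_TOKEN_PRESENT)
        hmi_handled_by_host = true;    /* illustrative flag */
    else
        pr_info("opal: Old firmware detected, OPAL handles HMIs.\n");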
-
- 10 Oct 2014, 4 commits
-
-
By Mahesh Salgaonkar
In the HMI interrupt handler we don't touch SRR0/SRR1; instead we touch HSRR0/HSRR1. Hence we don't need to clear the MSR_RI bit. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Romeo Cane
Declaring sys_call_table as a pointer causes the compiler to generate the wrong lookup code in arch_syscall_addr():

    <arch_syscall_addr>:
            lis     r9,-16384
            rlwinm  r3,r3,2,0,29
    -       lwz     r11,30640(r9)
    -       lwzx    r3,r11,r3
    +       addi    r9,r9,30640
    +       lwzx    r3,r9,r3
            blr

The actual sys_call_table symbol, declared in assembler, is an array. If we lie about that to the compiler, we get the wrong code generated, as above. This definition seems only to be used by the syscall tracing code in kernel/trace/trace_syscalls.c. With this patch I can successfully use the syscall tracepoints:

    bash-3815  [002] ....   333.239082: sys_write -> 0x2
    bash-3815  [002] ....   333.239087: sys_dup2(oldfd: a, newfd: 1)
    bash-3815  [002] ....   333.239088: sys_dup2 -> 0x1
    bash-3815  [002] ....   333.239092: sys_fcntl(fd: a, cmd: 1, arg: 0)
    bash-3815  [002] ....   333.239093: sys_fcntl -> 0x1
    bash-3815  [002] ....   333.239094: sys_close(fd: a)
    bash-3815  [002] ....   333.239094: sys_close -> 0x0

Signed-off-by: Romeo Cane <romeo.cane.ext@coriant.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
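A minimal sketch of the declaration side of the fix: tell the compiler the symbol is an array, so it takes the array's address instead of loading a non-existent pointer (the element type is illustrative):

    /* Wrong - makes the compiler load a pointer value from the symbol:
     *     extern const unsigned long *sys_call_table;
     * Right - sys_call_table is an array defined in assembler: */
    extern const unsigned long sys_call_table[];

    unsigned long arch_syscall_addr(int nr)
    {
        return sys_call_table[nr];   /* indexes the array directly */
    }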
-
By Geert Uytterhoeven
The different architectures used their own (and different) declarations:

    extern __visible const void __nosave_begin, __nosave_end;
    extern const void __nosave_begin, __nosave_end;
    extern long __nosave_begin, __nosave_end;

Consolidate them using the first variant in <asm/sections.h>. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Mel Gorman
ARCH_USES_NUMA_PROT_NONE was defined for architectures that implemented _PAGE_NUMA using _PROT_NONE. This saved using an additional PTE bit and relied on the fact that PROT_NONE vmas were skipped by the NUMA hinting fault scanner. This was found to be conceptually confusing with a lot of implicit assumptions, and it was asked that an alternative be found. Commit c46a7c81 ("x86: define _PAGE_NUMA by reusing software bits on the PMD and PTE levels") redefined _PAGE_NUMA on x86 to be one of the swap PTE bits and shrank the maximum possible swap size, but it did not go far enough. There are no architectures that reuse _PROT_NONE as _PROT_NUMA, but the relics still exist. This patch removes ARCH_USES_NUMA_PROT_NONE and removes some unnecessary duplication in powerpc vs the generic implementation by defining the types the core NUMA helpers expect to exist from x86 with their ppc64 equivalents. This necessitated creating a PTE bit mask that identifies the bits distinguishing present from NUMA pte entries, but it is expected this will only differ between arches based on _PAGE_PROTNONE. The naming for the generic helpers was taken from x86 originally, but ppc64 has types that are equivalent for the purposes of the helpers, so they are mapped instead of duplicating code. Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Rik van Riel <riel@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 Oct 2014, 6 commits
-
-
By Ian Munsie
This adds hooks into the core powerpc mm code for cxl. The core powerpc code sometimes uses local tlbie. Unfortunately this won't work with the current cxl driver, as it relies on snooping tlbie broadcasts. The cxl hardware can have TLB entries invalidated via MMIO, but this is not currently supported by the driver. In future we can make local tlbie smarter so that it invalidates cxl contexts via MMIO when it needs to, but for now we have this workaround. The workaround checks for any active cxl contexts and, if so, disables local tlbie. This also adds a hook for when SLBs are invalidated, which ensures any corresponding SLBs in cxl are invalidated at the same time. This is required for segment demotion. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
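A hedged sketch of the workaround in the tlbie path; the cxl_ctx_in_use() hook name is assumed from the cxl driver, and the surrounding flush code is paraphrased:

    /* In the hash TLB flush path: if any cxl context is active, fall back
     * to broadcast tlbie, since cxl relies on snooping the broadcasts. */
    if (unlikely(cxl_ctx_in_use()))
        local = 0;

    /* ... issue tlbie/tlbiel based on 'local' as before ... */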
-
By Ian Munsie
This adds the OPAL call to change a PHB into cxl mode. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Ian Munsie
This adds a new function, hash_page_mm(), based on the existing hash_page(). This version allows any struct mm to be passed in, rather than assuming current. This is useful for servicing co-processor faults which are not in the context of the currently running process. We need to be careful here, as the existing hash_page() assumes current in a few places. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
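A hedged sketch of the resulting split: the old entry point presumably becomes a thin wrapper that passes current->mm, while callers servicing co-processor faults call hash_page_mm() with their own mm (signature assumed from the description; a flags argument added in later kernels is omitted):

    /* New: takes an explicit mm, so co-processor faults can be serviced
     * outside the context of the currently running process. */
    int hash_page_mm(struct mm_struct *mm, unsigned long ea,
                     unsigned long access, unsigned long trap);

    /* Existing entry point, now a wrapper for the common case. */
    int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
    {
        return hash_page_mm(current->mm, ea, access, trap);
    }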
-
By Ian Munsie
This adds a number of functions for allocating IRQs under powernv PCIe for cxl. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Ian Munsie
Some of the MSI IRQ code in pnv_pci_ioda_msi_setup() is generically useful, so split it out. This will be used by some of the cxl PCIe code later. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Ian Munsie
Export mmu_kernel_ssize and mmu_linear_psize. These are needed by the cxl driver, which has its own MMU; to set up that MMU, cxl needs access to them. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
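The export itself is a one-liner per symbol; a minimal sketch next to the existing definitions (the GPL-only export variant is assumed here):

    /* arch/powerpc/mm/hash_utils_64.c (sketch) */
    EXPORT_SYMBOL_GPL(mmu_kernel_ssize);
    EXPORT_SYMBOL_GPL(mmu_linear_psize);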
-