- 15 Jan 2014, 19 commits
-
-
Committed by Paul Mackerras
Currently, when we have a process using the transactional memory facilities on POWER8 (that is, the processor is in transactional or suspended state), and the process enters the kernel and the kernel then uses the floating-point or vector (VMX/Altivec) facility, we end up corrupting the user-visible FP/VMX/VSX state. This happens, for example, if a page fault causes a copy-on-write operation, because the copy_page function will use VMX to do the copy on POWER8. The test program below demonstrates the bug.

The bug happens because when FP/VMX state for a transactional process is stored in the thread_struct, we store the checkpointed state in .fp_state/.vr_state and the transactional (current) state in .transact_fp/.transact_vr. However, when the kernel wants to use FP/VMX, it calls enable_kernel_fp() or enable_kernel_altivec(), which saves the current state in .fp_state/.vr_state. Furthermore, when we return to the user process we return with FP/VMX/VSX disabled. The next time the process uses FP/VMX/VSX, we don't know which set of state (the current register values, .fp_state/.vr_state, or .transact_fp/.transact_vr) we should be using, since we have no way to tell if we are still in the same transaction, and if not, whether the previous transaction succeeded or failed.

Thus it is necessary to strictly adhere to the rule that if FP has been enabled at any point in a transaction, we must keep FP enabled for the user process with the current transactional state in the FP registers, until we detect that it is no longer in a transaction. Similarly for VMX; once enabled it must stay enabled until the process is no longer transactional.

In order to keep this rule, we add a new thread_info flag which we test when returning from the kernel to userspace, called TIF_RESTORE_TM. This flag indicates that there is FP/VMX/VSX state to be restored before entering userspace, and when it is set the .tm_orig_msr field in the thread_struct indicates what state needs to be restored. The restoration is done by restore_tm_state(). The TIF_RESTORE_TM bit is set by new giveup_fpu/altivec_maybe_transactional helpers, which are called from enable_kernel_fp/altivec, giveup_vsx, and flush_fp/altivec_to_thread instead of giveup_fpu/altivec.

The other thing to be done is to get the transactional FP/VMX/VSX state from .fp_state/.vr_state when doing reclaim, if that state has been saved there by giveup_fpu/altivec_maybe_transactional. Having done this, we set the FP/VMX bit in the thread's MSR after reclaim to indicate that that part of the state is now valid (having been reclaimed from the processor's checkpointed state).

Finally, in the signal handling code, we move the clearing of the transactional state bits in the thread's MSR a bit earlier, before calling flush_fp_to_thread(), so that we don't unnecessarily set the TIF_RESTORE_TM bit.

This is the test program:

/* Michael Neuling 4/12/2013
 *
 * See if the altivec state is leaked out of an aborted transaction due to
 * kernel vmx copy loops.
 *
 *   gcc -m64 htm_vmxcopy.c -o htm_vmxcopy
 *
 */
/* We don't use all of these, but for reference: */
int main(int argc, char *argv[])
{
	long double vecin = 1.3;
	long double vecout;
	unsigned long pgsize = getpagesize();
	int i;
	int fd;
	int size = pgsize*16;
	char tmpfile[] = "/tmp/page_faultXXXXXX";
	char buf[pgsize];
	char *a;
	uint64_t aborted = 0;

	fd = mkstemp(tmpfile);
	assert(fd >= 0);

	memset(buf, 0, pgsize);
	for (i = 0; i < size; i += pgsize)
		assert(write(fd, buf, pgsize) == pgsize);

	unlink(tmpfile);

	a = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
	assert(a != MAP_FAILED);

	asm __volatile__(
		"lxvd2x 40,0,%[vecinptr] ; "	// set 40 to initial value
		TBEGIN
		"beq	3f ;"
		TSUSPEND
		"xxlxor 40,40,40 ; "		// set 40 to 0
		"std	5, 0(%[map]) ;"		// cause kernel vmx copy page
		TABORT
		TRESUME
		TEND
		"li	%[res], 0 ;"
		"b	5f ;"
		"3: ;"				// Abort handler
		"li	%[res], 1 ;"
		"5: ;"
		"stxvd2x 40,0,%[vecoutptr] ; "
		: [res]"=r"(aborted)
		: [vecinptr]"r"(&vecin),
		  [vecoutptr]"r"(&vecout),
		  [map]"r"(a)
		: "memory", "r0", "r3", "r4", "r5", "r6", "r7");

	if (aborted && (vecin != vecout)){
		printf("FAILED: vector state leaked on abort %f != %f\n",
		       (double)vecin, (double)vecout);
		exit(1);
	}

	munmap(a, size);
	close(fd);

	printf("PASSED!\n");

	return 0;
}

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
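A minimal sketch of the helper idea described above, using the names mentioned in the message (giveup_fpu(), MSR_TM_ACTIVE(), TIF_RESTORE_TM, .tm_orig_msr); the exact body in the kernel may differ:

static void giveup_fpu_maybe_transactional(struct task_struct *tsk)
{
	/*
	 * If the task is in a transaction and we have not already
	 * flagged it, remember which state must be restored on the
	 * way back to userspace (sketch, not the exact kernel code).
	 */
	if (tsk == current && tsk->thread.regs &&
	    MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
	    !test_thread_flag(TIF_RESTORE_TM)) {
		tsk->thread.tm_orig_msr = tsk->thread.regs->msr;
		set_thread_flag(TIF_RESTORE_TM);
	}

	giveup_fpu(tsk);
}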
-
Committed by Paul Mackerras
TIF_PERFMON_WORK and TIF_PERFMON_CTXSW are completely unused. They appear to be related to the old perfmon2 code, which has been superseded by the perf_event infrastructure. This removes their definitions so that the bits can be used for other purposes. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
If we queue irq_work on a processor and then, before the irq work has had a chance to be processed, we change the decrementer value, we can seriously delay the handling of that irq_work. Fix it by checking for pending irq work in a few places: first before changing the decrementer in decrementer_set_next_event(), then after changing it in the same function, and also in timer_interrupt(). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
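As a rough illustration of the shape of such a check (a sketch only: set_dec() and test_irq_work_pending() are helpers from arch/powerpc/kernel/time.c, but the exact call sites and the surrounding code here are assumptions):

static int decrementer_set_next_event(unsigned long evt,
				      struct clock_event_device *dev)
{
	/* Program the decrementer for the requested interval. */
	set_dec(evt);

	/*
	 * If irq work was queued while we were reprogramming, make the
	 * decrementer fire almost immediately so the work is not
	 * delayed by a full tick interval.
	 */
	if (test_irq_work_pending())
		set_dec(1);

	return 0;
}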
-
Committed by Mahesh Salgaonkar
Hugh Dickins reported an issue that b5ff4211 "powerpc/book3s: Queue up and process delayed MCE events" breaks the PowerMac G5 boot. This patch fixes it by moving the MCE event processing away from syscall exit, which was the wrong place for it in the first place, and using the irq work framework to delay processing of the MCE event. Reported-by: Hugh Dickins <hughd@google.com> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
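A minimal sketch of deferring work via the irq_work framework, as described above; the handler and queueing-site names here are illustrative, not the exact kernel symbols:

#include <linux/irq_work.h>

static void mce_process_work(struct irq_work *work)
{
	/* Drain and report the queued machine check events here. */
}

static struct irq_work mce_event_irq_work = {
	.func = mce_process_work,
};

void machine_check_queue_event(void)
{
	/* ... save the event into a per-CPU queue ... */

	/* Ask for a callback in a safe context instead of syscall exit. */
	irq_work_queue(&mce_event_irq_work);
}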
-
Committed by Preeti U Murthy
Commit fbd7740f (powerpc: Simplify pSeries idle loop) switched pseries cpu idle handling from complete idle loops to ppc_md.powersave functions. Prior to this switch, ppc64_runlatch_off() had to be called in each of the idle routines. After the switch, this call is handled in arch_cpu_idle(), just before the call to ppc_md.powersave, where the platform-specific idle routines are called. As a consequence, the call to ppc64_runlatch_off() got duplicated in the arch_cpu_idle() routine as well as in some of the idle routines in pseries, and commit fbd7740f failed to get rid of these redundant calls. These calls were then carried over through subsequent enhancements to the pseries cpuidle routines. Although multiple calls to ppc64_runlatch_off() are harmless, there is still some overhead due to them. Besides that, these calls could also give the misleading impression that it is *necessary* to call ppc64_runlatch_off() multiple times, when that is not the case. Hence this patch takes care of eliminating this redundancy. Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
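For orientation, a sketch of where the single remaining call sits, assuming the generic powerpc arch_cpu_idle() shape of the time and the ppc_md.power_save hook (written ppc_md.powersave in the message above); the body is illustrative, not the exact kernel code:

void arch_cpu_idle(void)
{
	/* Drop the runlatch once, for whatever idle path follows. */
	ppc64_runlatch_off();

	if (ppc_md.power_save) {
		/* Platform idle routine: no further runlatch_off() needed. */
		ppc_md.power_save();
	} else {
		/* Fallback: simply wait with interrupts enabled. */
		local_irq_enable();
	}

	ppc64_runlatch_on();
}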
-
Committed by Geert Uytterhoeven
add_system_ram_resources() is a subsys_initcall. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Olof Johansson
This makes ppc64_defconfig bootable without an initrd on pasemi systems, most of which have MV SATA controllers. Some have SIL24, but that driver is already enabled. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Vasant Hegde
At present we assume the candidate image is <= 256MB. But on P8, the candidate image size can go up to 750MB. Hence increase the maximum candidate image size to 1GB. Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Srivatsa S. Bhat
There have been some weird bugs in the past where the kernel tried to associate threads of the same core with different NUMA nodes, and things went haywire after that point (as expected). But unfortunately, root-causing such issues has been quite challenging, due to the lack of appropriate debug checks in the kernel. These bugs usually lead to some odd soft-lockups in the scheduler's build-sched-domain code in the CPU hotplug path, which makes it very hard to trace them back to the incorrect cpu-to-node mappings. So add appropriate debug checks to catch such invalid cpu-to-node mappings as early as possible. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
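A sketch of the kind of debug check meant here (not the exact patch): warn as soon as sibling threads of one core disagree about their node, instead of debugging a scheduler soft-lockup much later. cpu_sibling_mask() and cpu_to_node() are existing kernel helpers; the function itself is illustrative.

static void check_cpu_node_mapping(int cpu, int node)
{
	int sibling;

	/* All threads of a core must live on the same NUMA node. */
	for_each_cpu(sibling, cpu_sibling_mask(cpu)) {
		int sibling_node = cpu_to_node(sibling);

		WARN(sibling_node != NUMA_NO_NODE && sibling_node != node,
		     "CPU %d (node %d) and sibling %d (node %d) disagree\n",
		     cpu, node, sibling, sibling_node);
	}
}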
-
Committed by Srivatsa S. Bhat
On POWER platforms, the hypervisor can notify the guest kernel about dynamic changes in the cpu-numa associativity (VPHN topology update). Hence the cpu-to-node mappings that we got from the firmware during boot may no longer be valid after such updates. This is handled using the arch_update_cpu_topology() hook in the scheduler, and the sched-domains are rebuilt according to the new mappings.

But unfortunately, at the moment, CPU hotplug ignores these updated mappings and instead queries the firmware for the cpu-to-numa relationships and uses them during CPU online. So the kernel can end up assigning wrong NUMA nodes to CPUs during subsequent CPU hotplug online operations (after booting).

Further, a particularly problematic scenario can result from this bug: On POWER platforms, the SMT mode can be switched between 1, 2, 4 (and even 8) threads per core. The switch to Single-Threaded (ST) mode is performed by offlining all except the first CPU thread in each core. Switching back to SMT mode involves onlining those other threads back, in each core. Now consider this scenario:

1. During boot, the kernel gets the cpu-to-node mappings from the firmware and assigns the CPUs to NUMA nodes appropriately, during CPU online.

2. Later on, the hypervisor updates the cpu-to-node mappings dynamically and communicates this update to the kernel. The kernel in turn updates its cpu-to-node associations and rebuilds its sched domains. Everything is fine so far.

3. Now, the user switches the machine from SMT to ST mode (say, by running ppc64_cpu --smt=1). This involves offlining all except 1 thread in each core.

4. The user then tries to switch back from ST to SMT mode (say, by running ppc64_cpu --smt=4), and this involves onlining those threads back. Since CPU hotplug ignores the new mappings, it queries the firmware and tries to associate the newly onlined sibling threads to the old NUMA nodes. This results in sibling threads within the same core getting associated with different NUMA nodes, which is incorrect.

The scheduler's build-sched-domains code gets thoroughly confused with this and enters an infinite loop and causes soft-lockups, as explained in detail in commit 3be7db6a (powerpc: VPHN topology change updates all siblings).

So to fix this, use the numa_cpu_lookup_table to remember the updated cpu-to-node mappings, and use them during CPU hotplug online operations. Further, we also need to ensure that all threads in a core are assigned to a common NUMA node, irrespective of whether all those threads were online during the topology update. To achieve this, we take care not to use cpu_sibling_mask() since it is not hotplug invariant. Instead, we use cpu_first_sibling_thread() and set up the mappings manually using the 'threads_per_core' value for that particular platform. This helps us ensure that we don't hit this bug with any combination of CPU hotplug and SMT mode switching.

Cc: stable@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
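A compressed sketch of the idea under stated assumptions: derive the node for a CPU being onlined from the saved numa_cpu_lookup_table entry of the first thread of its core, so every sibling lands on the same node even if it was offline during the VPHN update. cpu_first_thread_sibling() (the spelling used by asm/cputhreads.h), numa_cpu_lookup_table and first_online_node are existing symbols; the surrounding function and the map_cpu_to_node() helper are illustrative.

static int numa_setup_cpu_node(unsigned long cpu)
{
	int first = cpu_first_thread_sibling(cpu);
	int nid = numa_cpu_lookup_table[first];

	/* Fall back to a sane node if we have no saved mapping. */
	if (nid < 0 || !node_online(nid))
		nid = first_online_node;

	/* Record the mapping so the whole core ends up on one node. */
	map_cpu_to_node(cpu, nid);

	return nid;
}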
-
Committed by Gavin Shan
Some devices, for example the PCI root port, don't have an IOMMU table and group. We needn't detach them from their IOMMU group. Otherwise, it potentially causes a kernel crash because of dereferencing a NULL IOMMU group, as the following backtrace indicates:

  .iommu_group_remove_device+0x74/0x1b0
  .iommu_bus_notifier+0x94/0xb4
  .notifier_call_chain+0x78/0xe8
  .__blocking_notifier_call_chain+0x7c/0xbc
  .blocking_notifier_call_chain+0x38/0x48
  .device_del+0x50/0x234
  .pci_remove_bus_device+0x88/0x138
  .pci_stop_and_remove_bus_device+0x2c/0x40
  .pcibios_remove_pci_devices+0xcc/0xfc
  .pcibios_remove_pci_devices+0x3c/0xfc

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
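A minimal sketch of the guard described above, assuming the check lives in the generic powerpc iommu_del_device() path (the exact location is an assumption):

void iommu_del_device(struct device *dev)
{
	/*
	 * Devices such as PCI root ports were never added to an IOMMU
	 * group; skipping them avoids dereferencing a NULL group.
	 */
	if (!dev->iommu_group)
		return;

	iommu_group_remove_device(dev);
}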
-
Committed by Gavin Shan
When an EEH error hits one specific PCI device before its driver has been loaded, we apply hotplug to recover from the error. During the plug phase, the PCI device is probed and its driver is loaded. We then wrongly call the error handlers if the driver supports EEH explicitly. The patch fixes this by introducing the flag EEH_DEV_NO_HANDLER and setting it before we remove the PCI device. In turn, we can avoid wrongly calling the error handlers of the PCI device after its driver is loaded. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
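How such a flag is typically consumed, as a sketch only: the bit value, the edev->mode field usage and the report helper shown are assumptions modelled on the EEH driver code, not the exact patch.

#define EEH_DEV_NO_HANDLER	(1 << 4)	/* bit position is an assumption */

static void *eeh_report_error(void *data, void *userdata)
{
	struct eeh_dev *edev = (struct eeh_dev *)data;

	/* The driver was loaded after the error hit: skip its handlers. */
	if (edev->mode & EEH_DEV_NO_HANDLER)
		return NULL;

	/* ... otherwise invoke the driver's ->error_detected() callback ... */
	return NULL;
}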
-
Committed by Gavin Shan
The patch implements the EEH operation backend restore_config() for the PowerNV platform. It relies on the OPAL API opal_pci_reinit(), with which we reinitialize error reporting properly after a PE or PHB reset. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Gavin Shan
After a reset of a specific PE or PHB, we never configure AER correctly on the PowerNV platform. We needn't care about it on the pSeries platform. The patch introduces the additional EEH operation eeh_ops::restore_config() so that we have a chance to configure AER correctly for the PowerNV platform. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
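A sketch of what such an optional backend hook looks like and how it might be called after a reset; the struct layout, parameter type and call-site name are assumptions here, only the hook name comes from the message above.

struct eeh_ops {
	char *name;
	/* ... existing probe/reset/wait callbacks ... */
	int (*restore_config)(struct device_node *dn);
};

static void eeh_restore_device_config(struct device_node *dn)
{
	/*
	 * Platforms that need post-reset fixups (e.g. AER on PowerNV)
	 * supply the hook; others simply leave it NULL.
	 */
	if (eeh_ops && eeh_ops->restore_config)
		eeh_ops->restore_config(dn);
}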
-
Committed by Gavin Shan
We don't have IO ports on PHB3 and the assignment of variable "iomap_off" on PHB3 is meaningless. The patch just removes the unnecessary assignment to the variable. The code change should have been part of commit c35d2a8c ("powerpc/powernv: Needn't IO segment map for PHB3"). Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nishanth Aravamudan
After reverting 25ebc45b ("powerpc/pseries/iommu: remove default window before attempting DDW manipulation"), we no longer remove the base window in enable_ddw. Therefore, we no longer need to reset the DMA window state in find_existing_ddw_windows(). We can instead go back to what was done before, which simply reuses the previous configuration, if any. Further, this removes the final caller of the reset-pe-dma-windows call, so remove those functions. This fixes an EEH on kdump with the ipr driver. The EEH occurs because the initcall removes the DDW configuration (64-bit DMA window) but doesn't ensure the ops are via the IOMMU -- a DMA operation occurs during probe (still investigating this) and we EEH. This reverts commit 14b6f00f. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nishanth Aravamudan
Ben rightfully pointed out that there is a race in the "newer" DDW code. Presuming we are running on recent enough firmware that supports the "reset" DDW manipulation call, we currently always remove the base 32-bit DMA window in order to maximize the resources for Phyp when creating the 64-bit window. However, this can be problematic for the case where multiple functions are in the same PE (partitionable endpoint), where some functions might be 32-bit DMA only. All of a sudden, the only functional DMA window for such functions is gone. We will have serious errors in such situations. The best solution is simply to revert the extension to the DDW code where we ever remove the base DMA window. This reverts commit 25ebc45b. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Paul Gortmaker
None of these files are actually using any __init type directives and hence don't need to include <linux/init.h>. Most are just a left over from __devinit and __cpuinit removal, or simply due to code getting copied from one driver to the next. The one instance where we add an include for init.h covers off a case where that file was implicitly getting it from another header which itself didn't need it. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Andreas Schwab
GCC 4.8 now generates out-of-line vr save/restore functions when optimizing for size. They are needed for the raid6 altivec support. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 30 Dec 2013, 15 commits
-
-
Committed by Benjamin Herrenschmidt
Merge a pile of fixes that went into the "merge" branch (3.13-rc's), such as Anton's little-endian fixes. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
The SLB save area is shared with the hypervisor and is defined as big endian, so we need to byte swap on little endian builds. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
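A sketch of what such a byte swap looks like when updating the SLB shadow save area; the helper names (get_slb_shadow(), mk_esid_data(), mk_vsid_data()) follow the powerpc SLB code, but their exact use here is an assumption.

static inline void slb_shadow_update(unsigned long ea, int ssize,
				     unsigned long flags, int index)
{
	struct slb_shadow *p = get_slb_shadow();

	/* Invalidate the entry while it is being rewritten. */
	p->save_area[index].esid = 0;
	/* The shadow buffer is architected big-endian, so swap on LE. */
	p->save_area[index].vsid =
		cpu_to_be64(mk_vsid_data(ea, ssize, flags));
	p->save_area[index].esid =
		cpu_to_be64(mk_esid_data(ea, ssize, index));
}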
-
Committed by Benjamin Herrenschmidt
Anatolij writes: Please pull two DTS fixes for MPC5125 tower board. Without them the v3.13-rcX kernels do not boot.
-
Committed by Alistair Popple
This patch updates the generic iommu backend code to use the it_page_shift field to determine the iommu page size instead of using hardcoded values. Signed-off-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Alistair Popple
This patch adds an it_page_shift field to struct iommu_table and initialises it to 4K for all platforms. Signed-off-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
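For context, a sketch of the new field and a derived helper macro; the struct layout shown is abridged and the macro form is an assumption (later kernels expose a per-table IOMMU_PAGE_SIZE(tbl)):

struct iommu_table {
	unsigned long it_size;		/* size of table in entries */
	unsigned long it_offset;	/* offset into global table */
	unsigned long it_page_shift;	/* iommu page size, 12 => 4K */
	/* ... remaining fields elided ... */
};

#define IOMMU_PAGE_SIZE(tbl)	(1UL << (tbl)->it_page_shift)
#define IOMMU_PAGE_MASK(tbl)	(~(IOMMU_PAGE_SIZE(tbl) - 1))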
-
Committed by Alistair Popple
The powerpc iommu uses a hardcoded page size of 4K. This patch changes the name of the IOMMU_PAGE_* macros to reflect the hardcoded values. A future patch will use the existing names to support dynamic page sizes. Signed-off-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Opdenacker
This removes the REDBOOT Kconfig parameter, which was no longer used anywhere in the source code and Makefiles. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Mahesh Salgaonkar
With the recent machine check patch series changes, the exception vectors starting from 0x4300 are now overflowing with allyesconfig. Fix that by moving the machine_check_common and machine_check_handle_early code out of that region to make enough room for the exception vector area.

Fixes this build error reported by Stephen:

  arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
  arch/powerpc/kernel/exceptions-64s.S:958: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:959: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:983: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:984: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:1003: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:1013: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:1014: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:1015: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:1016: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:1017: Error: attempt to move .org backwards
  arch/powerpc/kernel/exceptions-64s.S:1018: Error: attempt to move .org backwards

[Moved the code further down as it introduced link errors due to too long relative branches to the masked interrupt handlers from the exception prologs. Also removed the useless feature section --BenH]

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Olof Johansson
Commit 5c0484e2 ('powerpc: Endian safe trampoline') resulted in losing proper alignment of the spinlock variables used when booting secondary CPUs, causing some quite odd issues with failing to boot on PA Semi-based systems. This showed itself on ppc64_defconfig, but not on pasemi_defconfig, so it had gone unnoticed when I initially tested the LE patch set. The fix is to add explicit alignment instead of relying on good luck. :) [ It appears that there is a different issue with PA Semi systems, however this fix is definitely correct so applying anyway -- BenH ] Fixes: 5c0484e2 ('powerpc: Endian safe trampoline') Reported-by: Christian Zigotzky <chzigotzky@xenosoft.de> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=67811 Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
p_end is an 8 byte value embedded in the text section. This means it is only 4 byte aligned when it should be 8 byte aligned. Fix this by adding an explicit alignment. This fixes an issue where POWER7 little endian builds with CONFIG_RELOCATABLE=y fail to boot. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Brian W Hart
Prevent ioda_eeh_hub_diag() from clobbering itself when called, by supplying a per-PHB buffer for P7IOC hub diagnostic data. Take care to inform OPAL of the correct size for the buffer. [Small style change to the use of sizeof -- BenH] Signed-off-by: Brian W Hart <hartb@linux.vnet.ibm.com> Acked-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Brian W Hart
The PHB diagnostic buffer may be smaller than PAGE_SIZE, especially when PAGE_SIZE > 4KB. Signed-off-by: Brian W Hart <hartb@linux.vnet.ibm.com> Acked-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Paul E. McKenney
The powerpc 64-bit __copy_tofrom_user() function uses shifts to handle unaligned invocations. However, these shifts were designed for big-endian systems: On little-endian systems, they must shift in the opposite direction. This commit relies on the C preprocessor to insert the correct shifts into the assembly code. [ This is a rare but nasty LE issue. Most of the time we use the POWER7 optimised __copy_tofrom_user_power7 loop, but when it hits an exception we fall back to the base __copy_tofrom_user loop. - Anton ] Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
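The preprocessor trick has roughly this shape (a sketch: the macro names sLd/sHd mirror what the copy loop uses, but treat them as illustrative here):

#ifdef __BIG_ENDIAN__
#define sLd sld		/* shift towards the low-numbered address */
#define sHd srd		/* shift towards the high-numbered address */
#else
#define sLd srd		/* on LE the shift directions are swapped */
#define sHd sld
#endif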
-
Committed by Rajesh B Prathipati
The generic put_unaligned/get_unaligned macros were made endian-safe by calling the appropriate endian dependent macros based on the endian type of the powerpc processor. Signed-off-by: Rajesh B Prathipati <rprathip@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
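A sketch of the selection, assuming the usual arch unaligned.h pattern of mapping get_unaligned/put_unaligned onto the __get_unaligned_le/__get_unaligned_be helpers from linux/unaligned/; the exact include list is an assumption.

#include <linux/unaligned/le_byteshift.h>
#include <linux/unaligned/be_byteshift.h>
#include <linux/unaligned/generic.h>

#ifdef __LITTLE_ENDIAN__
#define get_unaligned	__get_unaligned_le
#define put_unaligned	__put_unaligned_le
#else
#define get_unaligned	__get_unaligned_be
#define put_unaligned	__put_unaligned_be
#endif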
-
Committed by Michael Neuling
In EXCEPTION_PROLOG_COMMON() we check to see if the stack pointer (r1) is valid when coming from the kernel. If it's not valid, we die but with a nice oops message. Currently we allocate a stack frame (subtract INT_FRAME_SIZE) before we check to see if the stack pointer is negative. Unfortunately, this won't detect a bad stack where r1 is less than INT_FRAME_SIZE. This patch fixes the check to compare the modified r1 with -INT_FRAME_SIZE. With this, bad kernel stack pointers (including NULL pointers) are correctly detected again. Kudos to Paulus for finding this. Signed-off-by: Michael Neuling <mikey@neuling.org> Cc: stable@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 21 Dec 2013, 1 commit
-
-
Committed by Matteo Facchinetti
At the moment the USB controller's pin muxing is not set up correctly and causes a kernel panic upon system startup, so disable the USB1 device tree node in the MPC5125 tower board dts file. The USB controller is connected to a USB3320 ULPI transceiver, and the device tree should receive an update to reflect correct dependencies and required initialization data before the USB1 node can get re-enabled. Signed-off-by: Matteo Facchinetti <matteo.facchinetti@sirius-es.it> Signed-off-by: Anatolij Gustschin <agust@denx.de>
-
- 19 Dec 2013, 1 commit
-
-
Committed by Gerhard Sittig
The 'soc' node in the MPC5125 "tower" board .dts has an '#interrupt-cells' property although this node is not an interrupt controller. Remove this erroneously placed property, because starting with v3.13-rc1 the lookup and resolution of 'interrupts' specs for peripherals gets misled (it tries to use the 'soc' as the interrupt parent, which fails), emits 'no irq domain found' WARN() messages, and breaks the boot process. [ best viewed with 'git diff -U5' to have DT node names in the context ] Cc: Anatolij Gustschin <agust@denx.de> Cc: linuxppc-dev@lists.ozlabs.org Cc: devicetree@vger.kernel.org Signed-off-by: Gerhard Sittig <gsi@denx.de> Signed-off-by: Anatolij Gustschin <agust@denx.de>
-
- 13 Dec 2013, 4 commits
-
-
Committed by Benjamin Herrenschmidt
We are passing pointers to the firmware for reads, so we need to properly convert the result, as OPAL is always BE. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
opal_xscom_read uses a pointer to return the data so we need to byteswap it on LE builds. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
A couple more device tree properties that need byte swapping. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
The MSI code is miscalculating quotas in little endian mode. Add the required byteswaps to fix this. Before, we claimed a quota of 65536; after the patch we see the correct value of 256. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-