- 13 April 2014, 1 commit
By Paul Mackerras
Commit 8f619b54 ("powerpc/ppc64: Do not turn AIL (reloc-on interrupts) too early") added code to set the AIL bit in the LPCR without checking whether the kernel is running in hypervisor mode. The result is that when the kernel runs as a guest (i.e., under PowerKVM or PowerVM), the processor takes a privileged instruction interrupt at that point, causing a panic. The visible symptom is that the kernel hangs after printing "returning from prom_init". Fix this by checking that hypervisor mode is available before setting the LPCR. If we are not in hypervisor mode, relocation-on interrupts are enabled later in pSeries_setup_arch using the H_SET_MODE hcall.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
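For illustration, a minimal C sketch of the fix's logic, not the literal patch: the LPCR is hypervisor-privileged, so it is only touched when MSR[HV] is set. The function name below is invented, and LPCR_AIL_3 stands in for the AIL field value (check arch/powerpc/include/asm/reg.h for the exact define):

    /* Only the hypervisor may write the LPCR; guests defer to H_SET_MODE. */
    static void __init setup_reloc_on_interrupts(void)
    {
            if (!(mfmsr() & MSR_HV))
                    return; /* guest: enabled later in pSeries_setup_arch */
            mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_AIL_3);
    }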
-
- 09 April 2014, 2 commits
By Anton Blanchard
Recent CPUs support quad-word load and store instructions. Add support for them to the alignment handler.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Neuling
In commit 742415d6 ("powerpc: Turn syscall handler into macros") we converted the syscall entry code to macros, but in doing so we introduced some cruft that is never run and should never have been added. Remove that code.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 07 April 2014, 7 commits
By Benjamin Herrenschmidt
Turn them on at the same time as we allow MSR_IR/DR in the paca kernel MSR, i.e., after the MMU has been set up enough to handle relocated accesses to the linear mapping.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Benjamin Herrenschmidt
If we take an interrupt, such as a trap caused by a BUG_ON, before the MMU has been set up, the interrupt handlers try to enable virtual mode and cause a recursive crash, making the original problem very hard to debug. Fix this by adjusting the "kernel_msr" value in the PACA so that it only has MSR_IR and MSR_DR (translation for instruction and data) set after the MMU has been initialized for the processor. We may still not have a console at that point, but at least we don't get into a recursive fault (and an early debug console, or a memory dump of the kernel buffer via JTAG, *will* give us the proper error).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
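A rough sketch of the idea, assuming the PACA's kernel_msr field as consumed by the exception entry code (surrounding setup code omitted):

    /* Early boot: interrupt handlers must stay in real mode. */
    get_paca()->kernel_msr = MSR_KERNEL & ~(MSR_IR | MSR_DR);

    /* ... MMU initialization for this processor ... */

    /* From here on, interrupt handlers may run with translation on. */
    get_paca()->kernel_msr = MSR_KERNEL;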
-
By Benjamin Herrenschmidt
All our CPU feature updates were done for every CPU in the device tree, thus overwriting the cputable bits over and over again. Instead, do them only for the boot CPU.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Benjamin Herrenschmidt
Move the definition to setup-common.c and set the init value to -1 on both 32- and 64-bit (it was 0 on 64-bit). Additionally, add a check to prom.c to guarantee that the init value has been updated after the DT scan.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Benjamin Herrenschmidt
For historical reasons that code was under #ifdef CONFIG_PPC_PSERIES, but it applies equally to all 64-bit platforms.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Neuling
We can't take an IRQ when we're about to do a trechkpt, as our GPR state is set to user GPR values. We've hit this when running some IBM Java stress tests in the lab, resulting in the following dump:

    cpu 0x3f: Vector: 700 (Program Check) at [c000000007eb3d40]
        pc: c000000000050074: restore_gprs+0xc0/0x148
        lr: 00000000b52a8184
        sp: ac57d360
       msr: 8000000100201030
      current = 0xc00000002c500000
      paca    = 0xc000000007dbfc00
        softe: 0        irq_happened: 0x00
      pid   = 34535, comm = Pooled Thread #
    R00 = 00000000b52a8184   R16 = 00000000b3e48fda
    R01 = 00000000ac57d360   R17 = 00000000ade79bd8
    R02 = 00000000ac586930   R18 = 000000000fac9bcc
    R03 = 00000000ade60000   R19 = 00000000ac57f930
    R04 = 00000000f6624918   R20 = 00000000ade79be8
    R05 = 00000000f663f238   R21 = 00000000ac218a54
    R06 = 0000000000000002   R22 = 000000000f956280
    R07 = 0000000000000008   R23 = 000000000000007e
    R08 = 000000000000000a   R24 = 000000000000000c
    R09 = 00000000b6e69160   R25 = 00000000b424cf00
    R10 = 0000000000000181   R26 = 00000000f66256d4
    R11 = 000000000f365ec0   R27 = 00000000b6fdcdd0
    R12 = 00000000f66400f0   R28 = 0000000000000001
    R13 = 00000000ada71900   R29 = 00000000ade5a300
    R14 = 00000000ac2185a8   R30 = 00000000f663f238
    R15 = 0000000000000004   R31 = 00000000f6624918
    pc  = c000000000050074   restore_gprs+0xc0/0x148
    cfar= c00000000004fe28   dont_restore_vec+0x1c/0x1a4
    lr  = 00000000b52a8184
    msr = 8000000100201030
    cr  = 24804888
    ctr = 0000000000000000
    xer = 0000000000000000
    trap = 700

This moves tm_recheckpoint to a C function and moves tm_restore_sprs into that function. It then adds IRQ disabling around the trechkpt critical section. It also sets the TEXASR FS bit in the signals code to ensure this is always set, now that we explicitly write the TM SPRs in tm_recheckpoint.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
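The shape of the resulting C function, approximately; the signature and the low-level __tm_recheckpoint entry point named here are assumptions, not the exact patch:

    void tm_recheckpoint(struct thread_struct *thread, unsigned long orig_msr)
    {
            unsigned long flags;

            /* GPRs hold user values across trechkpt; we really cannot
             * be interrupted here, not even by a lazily-masked IRQ. */
            local_irq_save(flags);
            hard_irq_disable();

            tm_restore_sprs(thread);              /* TEXASR/TFHAR/TFIAR first */
            __tm_recheckpoint(thread, orig_msr);  /* asm helper (assumed name) */

            local_irq_restore(flags);
    }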
-
By Greg Kurz
The current kernel code assumes big endian and parses RTAS events all wrong. The most visible effect is that we cannot honor EPOW events, meaning, for example, that we cannot shut down a guest properly from the hypervisor. This new patch is largely inspired by Nathan's work: we get rid of all the bit fields in the RTAS event structures (even the unused ones, for consistency). We also introduce endian-safe accessors for the fields used by the kernel (a trivial rtas_error_type() accessor is added for consistency).
Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
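The flavor of accessor this introduces, illustratively; the struct and field names are simplified, not the exact kernel definitions:

    /* Bitfields are endian-unsafe; explicit byte-swapping accessors are not. */
    static inline uint32_t rtas_error_ext_log_length(const struct rtas_error_log *elog)
    {
            return be32_to_cpu(elog->extended_log_length);
    }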
-
- 24 March 2014, 4 commits
By Mahesh Salgaonkar
While checking powersaving mode in the machine check handler at 0x200, we clobber the CFAR register. Fix it by saving and restoring the CFAR across the beq/bgt.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Greg Kurz
The ppc_rtas() syscall allows userspace to interact directly with RTAS. For the moment, it assumes everything is big endian and returns either EINVAL or EFAULT when called in a little endian environment. As suggested by Benjamin, to avoid bugs when userspace wants to pass a non-32-bit value to RTAS, it is far better to stick with a simple rationale: ppc_rtas() should be called with a big endian rtas_args structure. With this patch, it is now up to userspace to forge big endian arguments, as expected by RTAS.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
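A hedged userspace sketch of what "forging big endian arguments" means; the struct below is a simplified mirror of the kernel's rtas_args, and the token and argument values are placeholders (real callers get tokens from the device tree):

    #include <endian.h>
    #include <stdint.h>

    struct rtas_args {
            uint32_t token;
            uint32_t nargs;
            uint32_t nret;
            uint32_t args[16];
    };

    static void fill_rtas_args(struct rtas_args *a, uint32_t token, uint32_t arg0)
    {
            a->token   = htobe32(token);   /* every 32-bit field big endian */
            a->nargs   = htobe32(1);
            a->nret    = htobe32(1);
            a->args[0] = htobe32(arg0);
    }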
-
By Michael Neuling
The facility unavailable exception can be triggered from userspace by accessing PMU registers when EBB is not enabled. This causes the included pr_err() to run, spamming the kernel log buffer. Avoid this by rate-limiting these messages.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
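One way to do this is the kernel's ratelimited printk helpers; whether the patch uses this exact helper is an assumption, and the message text below is illustrative:

    pr_err_ratelimited("Facility '%s' unavailable, exception at 0x%lx\n",
                       facility, regs->nip);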
-
By Michael Ellerman
Some POWER8 revisions have a hardware bug where we can lose a Performance Monitor (PMU) exception under certain circumstances. We will be adding a workaround for this case; see the next commit for details. The observed behaviour is that writing PMAO doesn't cause an exception as we would expect, hence the name of the feature.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 20 March 2014, 9 commits
By Srivatsa S. Bhat
Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

    get_online_cpus();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations). Instead, the correct and race-free way of performing the callback registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

Fix the sysfs code in powerpc by using this latter form of callback registration.
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Wang Dongsheng <dongsheng.wang@freescale.com>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Scott Wood
Add special state saving for critical and machine check exceptions. Most of this code could be used to handle debug exceptions taken from kernel space, but actually doing so is outside the scope of this patch. The various critical and machine check exceptions now point to their real handlers, rather than hanging the kernel.
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
By Scott Wood
Use the proper scratch SPRG and PACA region. Introduce level-specific macros to simplify usage and avoid needing to do a bunch of token pasting throughout EXCEPTION_COMMON(). Now that EXCEPTION_COMMON_DBG() properly uses the debug scratch register, there is no longer any need for the caller to move the value to the GEN scratch first.
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
By Scott Wood
The ints parameter was used to optionally insert RECONCILE_IRQ_STATE into EXCEPTION_COMMON. However, since it came at the end of EXCEPTION_COMMON, there was no real benefit for it to be there as opposed to being called separately by the caller of EXCEPTION_COMMON. The ints parameter was causing some hassle when trying to add an extra macro layer. Besides avoiding that, moving "ints" to the caller makes the code simpler by:

- avoiding the asymmetry where INTS_RESTORE_HARD is called separately by the individual exception, but INTS_DISABLE was not
- removing the no-op INTS_KEEP
- not having an unnecessary macro parameter

It also turned out to be necessary to delay the INTS_DISABLE in the case of special level exceptions until after we saved the old value of PACAIRQHAPPENED.
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
By Scott Wood
Previously SPRG3 was marked for use by both the VDSO and critical interrupts (though critical interrupts were not fully implemented). In commit 8b64a9df ("powerpc/booke64: Use SPRG0/3 scratch for bolted TLB miss & crit int"), Mihai Caraman made an attempt to resolve this conflict by restoring the VDSO value early in the critical interrupt, but this has some issues:

- It's incompatible with EXCEPTION_COMMON, which restores r13 from the by-then-overwritten scratch (this cost me some debugging time).
- It forces critical exceptions to be a special case handled differently from even machine check and debug level exceptions.
- It didn't occur to me that it was possible to make this work at all (by doing a final "ld r13, PACA_EXCRIT+EX_R13(r13)") until after I made (most of) this patch. :-)

It might be worth investigating using a load rather than SPRG on return from all exceptions (except TLB misses, where the scratch never leaves the SPRG) -- it could save a few cycles. Until then, let's stick with SPRG for all exceptions. Since we cannot use SPRG4-7 for scratch without corrupting the state of a KVM guest, move the VDSO to SPRG7 on book3e. Since neither SPRG4-7 nor critical interrupts exist on book3s, SPRG3 is still used for the VDSO there.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Mihai Caraman <mihai.caraman@freescale.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: kvm-ppc@vger.kernel.org
-
By Scott Wood
Once special level interrupts are supported, we may take nested TLB misses -- so allow the same thread to acquire the lock recursively. The lock will not be effective against the nested TLB miss handler trying to write the same entry as the interrupted TLB miss handler, but that's also a problem on non-threaded CPUs that lack TLB write conditional. This will be addressed in the patch that enables crit/mc support, by invalidating the TLB on return from level exceptions.
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
By Scott Wood
Fix several exception labels whose comments did not match the code:

- altivec_unavailable was commented as 0xf20 but the code uses 0x200. Note that 0xf20 is also used by ap_unavailable.
- altivec_assist was commented as 0x1700 but the code uses 0x220.
- critical_input was commented as 0x580 but the code uses 0x100.
- machine_check was commented and implemented as 0x200, which conflicts with altivec_assist (it only builds because MC_EXCEPTION_PROLOG is commented out). Changed to the fixed IVOR value of 0x000.

Signed-off-by: Scott Wood <scottwood@freescale.com>
-
By Tiejun Chen
We need to store the thread info in the thread info area of these exception stacks, as we already do for PPC32.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
By Tiejun Chen
We already allocate the critical/machine-check/debug exception stacks, but we should also initialize the associated kernel stack pointers in the PACA for use by these special exceptions.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 13 March 2014, 1 commit
By Marek Szyprowski
Enable reserved memory initialization from device tree.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
-
- 07 March 2014, 7 commits
By Jiri Slaby
As the data parameter is not actually used by any ftrace_dyn_arch_init() implementation, remove it. This also removes the addr local variable from ftrace_init(), which is now unused. Note that the documentation was imprecise, as it did not suggest setting (*data) to 0.
Link: http://lkml.kernel.org/r/1393268401-24379-4-git-send-email-jslaby@suse.cz
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
By Jiri Slaby
No architecture uses the "data" parameter of ftrace_dyn_arch_init() in any way; each one just sets the value to 0. The caller, ftrace_init(), then only checks that value against zero. Note there is also a "return 0" in every ftrace_dyn_arch_init(). So it is enough to check the return value and remove all the indirect setting of data on all architectures.
Link: http://lkml.kernel.org/r/1393268401-24379-3-git-send-email-jslaby@suse.cz
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
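The resulting simplification, roughly (a C sketch of the before/after signatures):

    /* before: every arch had to zero *data and return 0 */
    int __init ftrace_dyn_arch_init(void *data)
    {
            *(unsigned long *)data = 0;
            return 0;
    }

    /* after: the return value alone reports success */
    int __init ftrace_dyn_arch_init(void)
    {
            return 0;
    }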
-
By Haren Myneni
pHyp can change cache nodes during a suspend/resume operation. Currently, the device tree is updated by drmgr in userspace after all non-boot CPUs are enabled; hence, we do not modify the cache list based on the latest cache nodes. We also do not remove cache entries for the primary CPU. This patch removes the cache list for the boot CPU, updates the device tree before enabling non-boot CPUs, and then rebuilds the cache list for the boot CPU. This patch also has the side effect that older versions of drmgr will perform a second device tree update from userspace. While this is a redundant waste of a couple of cycles, it is harmless, since firmware returns the same data for the subsequent update-nodes/properties RTAS calls.
Signed-off-by: Haren Myneni <hbabu@us.ibm.com>
Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Mahesh Salgaonkar
Detect and recover from a machine check that occurs inside OPAL on special SCOM load instructions. On a specific SCOM read via MMIO we may get a machine check exception with SRR0 pointing inside OPAL. To recover from an MC in this scenario, get a recovery instruction address and return to it from the MC. OPAL exports the machine check recoverable ranges through the device tree node mcheck-recoverable-ranges under ibm,opal:

    # hexdump /proc/device-tree/ibm,opal/mcheck-recoverable-ranges
    0000000 0000 0000 3000 2804 0000 000c 0000 0000
    0000010 3000 2814 0000 0000 3000 27f0 0000 000c
    0000020 0000 0000 3000 2814 xxxx xxxx xxxx xxxx
    0000030 llll llll yyyy yyyy yyyy yyyy
    ...
    ...
    #

where:

    xxxx xxxx xxxx xxxx = starting instruction address
    llll llll           = length of the address range
    yyyy yyyy yyyy yyyy = recovery address

Each recoverable address range entry is (start address, len, recovery address): 2 cells each for the start and recovery addresses and 1 cell for the length, totalling 5 cells per entry. During kernel boot, build up the recovery table from the list of recovery ranges in the device tree node; it will be used during a machine check exception to recover from an MMIO SCOM UE.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
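A sketch of walking the property, given the 5-cells-per-entry layout above (the table-insertion helper at the end is hypothetical):

    static void __init parse_recoverable_ranges(const __be32 *prop, int len)
    {
            int i, entries = len / (5 * sizeof(__be32));

            for (i = 0; i < entries; i++, prop += 5) {
                    u64 start = ((u64)be32_to_cpu(prop[0]) << 32) |
                                be32_to_cpu(prop[1]);
                    u32 size  = be32_to_cpu(prop[2]);
                    u64 recov = ((u64)be32_to_cpu(prop[3]) << 32) |
                                be32_to_cpu(prop[4]);

                    mc_add_recoverable_range(start, size, recov); /* assumed */
            }
    }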
-
By Li Zhong
This patch reverts my previous "fix" and replaces it with the correct fix from Russell. As Russell pointed out, dma_set_mask_and_coherent() (and the other dma_set_mask() functions) are really supposed to be used by drivers only.
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
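For reference, the driver-side idiom the DMA API expects (a generic example; pdev stands in for the driver's own device):

    if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
            dev_warn(&pdev->dev, "no 64-bit DMA, staying with 32-bit mask\n");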
-
By Anton Blanchard
The 64-bit relocation code places a few symbols in the text segment. These symbols are only 4-byte aligned where they need to be 8-byte aligned. Add an explicit alignment.
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: stable@vger.kernel.org
Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Neuling
When we fork/clone we currently don't copy any of the TM state to the new thread. This results in a TM bad thing (program check) when the new process is switched in, as the kernel does a tm_recheckpoint with TEXASR FS not set. Also, since R1 comes from userspace, we trigger the bad kernel stack pointer detection. So we end up with something like this:

    Bad kernel stack pointer 0 at c0000000000404fc
    cpu 0x2: Vector: 700 (Program Check) at [c00000003ffefd40]
        pc: c0000000000404fc: restore_gprs+0xc0/0x148
        lr: 0000000000000000
        sp: 0
       msr: 9000000100201030
      current = 0xc000001dd1417c30
      paca    = 0xc00000000fe00800
        softe: 0        irq_happened: 0x01
      pid   = 0, comm = swapper/2
    WARNING: exception is not recoverable, can't continue

The below fixes this by flushing the TM state before we copy the task_struct to the clone. To do this we go through the tm_reclaim path, which removes the checkpointed registers from the CPU and transitions the CPU out of TM suspend mode. Hence we need to call tm_recheckpoint afterwards to restore the checkpointed state and the TM mode for the current task. To make this fail from userspace is simply:

    tbegin
    li r0, 2
    sc
    <boom>

Kudos to Adhemerval Zanella Neto for finding this.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Cc: Adhemerval Zanella Neto <azanella@br.ibm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 05 March 2014, 9 commits
By Preeti U Murthy
Fast sleep is one of the deep idle states on POWER8, in which the local timers of the CPUs stop. On PowerPC we do not have an external clock device that can handle wakeup of such CPUs. Now that we have support in the tick broadcast framework for archs that do not sport such a device, and the low-level support for fast sleep, enable it in the cpuidle framework on PowerNV.
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Vaidyanathan Srinivasan
During fast sleep and deeper power-saving states, the decrementer and timebase could be stopped, making them out of sync with the rest of the cores in the system. Add a firmware call to request that the platform resync the timebase using low-level platform methods.
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Vaidyanathan Srinivasan
Before adding fast sleep into the cpuidle framework, some low-level support needs to be added to enable it. This includes saving and restoring certain registers at entry and exit time of this state, respectively, just as we do in the NAP idle state.
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
[Changelog modified by Preeti U. Murthy <preeti@linux.vnet.ibm.com>]
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Preeti U Murthy
Split timer_interrupt(), which is the local timer interrupt handler on ppc, into the routine called during regular interrupt handling and __timer_interrupt(), which takes care of running local timers and collecting time-related stats. This enables callers interested only in running expired local timers to call directly into __timer_interrupt(). One use case for this is the tick broadcast IPI handling, in which the sleeping CPUs need to handle their expired local timers.
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
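The shape of the split, much simplified; the per-CPU decrementers clock event device lives in arch/powerpc/kernel/time.c, but the bodies here are schematic rather than the actual patch:

    static void __timer_interrupt(void)
    {
            struct clock_event_device *evt = this_cpu_ptr(&decrementers);

            if (evt->event_handler)
                    evt->event_handler(evt);   /* run expired local timers */
    }

    void timer_interrupt(struct pt_regs *regs)
    {
            irq_enter();            /* regular interrupt entry accounting */
            __timer_interrupt();    /* broadcast IPI handling can call
                                       this part directly */
            irq_exit();
    }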
-
By Srivatsa S. Bhat
For scalability and performance reasons, we want the tick broadcast IPIs to be handled as efficiently as possible. Fixed IPI messages are one of the most efficient mechanisms available: they are faster than the smp_call_function mechanism because the IPI handlers are fixed, and hence they don't involve costly operations such as adding IPI handlers to the target CPU's function queue, acquiring locks for synchronization, etc. Luckily we have an unused IPI message slot, so use that to implement tick broadcast IPIs efficiently.
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
[Functions renamed to tick_broadcast* and changelog modified by Preeti U. Murthy <preeti@linux.vnet.ibm.com>]
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Geoff Levand <geoff@infradead.org> [For the PS3 part]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Srivatsa S. Bhat
The IPI handlers for both PPC_MSG_CALL_FUNC and PPC_MSG_CALL_FUNC_SINGLE map to a common implementation, generic_smp_call_function_single_interrupt(), so we can consolidate them and save one of the IPI message slots (which are precious on powerpc, since only 4 of those slots are available). So, implement the functionality of PPC_MSG_CALL_FUNC_SINGLE using PPC_MSG_CALL_FUNC itself and release its IPI message slot, so that it can be used for something else in the future, if desired.
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Geoff Levand <geoff@infradead.org> [For the PS3 part]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
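Schematically, the surviving handler serves both kinds of requests, since the generic dispatcher runs every queued call-function entry (whether the consolidated handler looks exactly like this is an assumption):

    static irqreturn_t call_function_action(int irq, void *data)
    {
            /* handles both multi- and single-CPU call-function entries */
            generic_smp_call_function_interrupt();
            return IRQ_HANDLED;
    }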
-
By Thomas Gleixner
Commit b8a9a11b ("powerpc: eeh: Kill another abuse of irq_desc") is missing some brackets..... It's not a good idea to write patches in grumpy mode and then forget to at least compile test them, or to rely on the few eyeballs discussing that patch to spot it.....
Reported-by: fengguang.wu@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Gavin Shan <shangw@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: ppc <linuxppc-dev@lists.ozlabs.org>
-
By Thomas Gleixner
Commit 91150af3 ("powerpc/eeh: Fix unbalanced enable for IRQ") is another brilliant example of trainwreck engineering. The patch "fixes" the issue of an unbalanced call to irq_enable(), which causes a prominent warning, by checking the disabled state of the interrupt line and conditionally calling into the core code. This is wrong in two aspects:

1) The warning is there to tell users that they need to fix their asymmetric enable/disable patterns by finding the root cause and solving it there. It's definitely not meant to be worked around by conditionally calling into the core code depending on the random state of the irq line. Asymmetric irq_disable/enable calls are a clear sign of wrong usage of the interfaces, which has to be cured at the root and not by somehow hacking around it.

2) The abuse of core internal data structures instead of using the proper interfaces for retrieving the information needed for the 'hack around'. irq_desc is core internal, and that is clearly stated.

Replace at least the irq_desc abuse with the proper functions and add a big fat comment why this is absurd and completely wrong.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Gavin Shan <shangw@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: ppc <linuxppc-dev@lists.ozlabs.org>
Link: http://lkml.kernel.org/r/20140223212736.562906212@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Thomas Gleixner
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: ppc <linuxppc-dev@lists.ozlabs.org>
Link: http://lkml.kernel.org/r/20140223212736.333718121@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-