1. 02 Jun, 2017 1 commit
  2. 11 Apr, 2017 1 commit
  3. 07 Apr, 2017 1 commit
    • B
      powerpc/smp: Remove migrate_irq() custom implementation · a978e139
      Benjamin Herrenschmidt authored
      Some powerpc platforms use this to move IRQs away from a CPU being
      unplugged. This function has several bugs such as not taking the right
      locks or failing to NULL check pointers.
      
      There's a new generic function doing exactly the same thing without all
      the bugs, so let's use it instead.
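
      A minimal sketch of the switch-over (assuming the generic helper is
      irq_migrate_all_off_this_cpu() from kernel/irq/cpuhotplug.c, which is
      what GENERIC_IRQ_MIGRATION builds; the wrapper name is illustrative):
      
      	#include <linux/irq.h>
      
      	/* arch CPU-offline path, simplified sketch (not the exact diff) */
      	static void teardown_cpu_irqs(void)
      	{
      		/* was: migrate_irqs() -- the custom, buggy implementation */
      		irq_migrate_all_off_this_cpu();	/* generic, properly locked */
      	}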
      
      mpe: The obvious place for the select of GENERIC_IRQ_MIGRATION is on
      HOTPLUG_CPU, but that doesn't work. On some configs PM_SLEEP_SMP will
      select HOTPLUG_CPU even though its dependencies are not met, which means
      the select of GENERIC_IRQ_MIGRATION doesn't happen. That leads to the
      build breaking. Fix it by moving the select of GENERIC_IRQ_MIGRATION to
      SMP.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a978e139
  4. 25 Dec, 2016 1 commit
  5. 20 Sep, 2016 2 commits
    • M
      powerpc: Remove all usages of NO_IRQ · ef24ba70
      Michael Ellerman authored
      NO_IRQ has been == 0 on powerpc for just over ten years (since commit
      0ebfff14 ("[POWERPC] Add new interrupt mapping core and change
      platforms to use it")). It's also 0 on most other arches.
      
      Although it's fairly harmless, every now and then it causes confusion
      when a driver is built on powerpc and another arch which doesn't define
      NO_IRQ. There are at least 6 definitions of NO_IRQ in drivers/, at least
      some of which exist to work around that problem.
      
      So we'd like to remove it. This is fairly trivial in the arch code, we
      just convert:
      
          if (irq == NO_IRQ)	to	if (!irq)
          if (irq != NO_IRQ)	to	if (irq)
          irq = NO_IRQ;	to	irq = 0;
          return NO_IRQ;	to	return 0;
      
      And a few other odd cases as well.
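
      In a typical driver probe path the pattern looks like this (illustrative
      sketch; irq_of_parse_and_map() already returns 0 on failure, which is
      why the bare truth test works):
      
      	/* needs <linux/of_irq.h>; probe_get_irq() is a made-up name */
      	static int probe_get_irq(struct device_node *np)
      	{
      		unsigned int irq = irq_of_parse_and_map(np, 0);
      
      		if (!irq)		/* was: if (irq == NO_IRQ) */
      			return -ENODEV;
      
      		return irq;
      	}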
      
      At least for now we keep the #define NO_IRQ, because there is driver
      code that uses NO_IRQ and the fixes to remove those will go via other
      trees.
      
      Note we also change some occurrences in PPC sound drivers, drivers/ps3,
      and drivers/macintosh.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      ef24ba70
    • N
      powerpc/64: Replay hypervisor maintenance interrupt first · e0e0d6b7
      Nicholas Piggin authored
      The HMI (Hypervisor Maintenance Interrupt) is defined by the
      architecture to be higher priority than other maskable interrupts, so
      replay it first, as a best-effort to replay according to hardware
      priorities.
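      
      Concretely, this amounts to testing the HMI bit before the other causes
      in the replay path, along these lines (sketch only; PACA_IRQ_HMI and
      vector 0xe60 are the expected names, surrounding code elided):
      
      	/* in __check_irq_replay(): check HMI ahead of EE/DEC/doorbell */
      	if (happened & PACA_IRQ_HMI) {
      		local_paca->irq_happened &= ~PACA_IRQ_HMI;
      		return 0xe60;	/* hypervisor maintenance vector */
      	}
      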
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      e0e0d6b7
  6. 13 Sep, 2016 1 commit
  7. 01 Aug, 2016 1 commit
  8. 17 Jul, 2016 1 commit
  9. 08 Jul, 2016 1 commit
  10. 14 Apr, 2016 1 commit
  11. 15 Sep, 2015 1 commit
  12. 10 Nov, 2014 1 commit
  13. 03 Nov, 2014 1 commit
    • C
      powerpc: Replace __get_cpu_var uses · 69111bac
      Christoph Lameter authored
      This still has not been merged and now powerpc is the only arch that does
      not have this change. Sorry about missing linuxppc-dev before.
      
      V1->V2
        - Fix up to work against 3.18-rc1
      
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as:
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() only ever determines an address. Store and retrieve
      operations, however, could use a segment prefix (or a global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations that
      use the offset.  Thereby address calculations are avoided and fewer registers
      are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these operations
      are used throughout, specialized macros can be defined on non-x86
      arches as well in order to optimize per cpu access by e.g. using a global
      register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var() (a combined sketch follows the list)
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processor's instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y);
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
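      
      Taken together, the difference the conversion makes can be seen in one
      place (illustrative sketch; the accessors are the real percpu API, the
      function names are ours):
      
      	DEFINE_PER_CPU(int, counter);
      
      	/* old style: compute the address, then dereference it */
      	static int read_old(void)
      	{
      		return *this_cpu_ptr(&counter);	/* what __get_cpu_var expanded to */
      	}
      
      	/* new style: one fused access; x86 can use a %gs-prefixed mov */
      	static int read_new(void)
      	{
      		return __this_cpu_read(counter);
      	}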
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      [mpe: Fix build errors caused by set/or_softirq_pending(), and rework
            assignment in __set_breakpoint() to use memcpy().]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      69111bac
  14. 15 Oct, 2014 1 commit
  15. 02 Oct, 2014 1 commit
  16. 27 Aug, 2014 2 commits
    • T
      Revert "powerpc: Replace __get_cpu_var uses" · 23f66e2d
      Tejun Heo authored
      This reverts commit 5828f666 due to
      build failure after merging with pending powerpc changes.
      
      Link: http://lkml.kernel.org/g/20140827142243.6277eaff@canb.auug.org.au
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      23f66e2d
    • C
      powerpc: Replace __get_cpu_var uses · 5828f666
      Christoph Lameter authored
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as:
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() only ever determines an address. Store and retrieve
      operations, however, could use a segment prefix (or a global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations that
      use the offset.  Thereby address calculations are avoided and fewer registers
      are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these operations
      are used throughout, specialized macros can be defined on non-x86
      arches as well in order to optimize per cpu access by e.g. using a global
      register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processor's instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y);
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
      
      tj: Folded a fix patch.
          http://lkml.kernel.org/g/alpine.DEB.2.11.1408172143020.9652@gentwo.org
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      5828f666
  17. 05 Aug, 2014 1 commit
  18. 07 Jun, 2014 1 commit
  19. 05 Mar, 2014 1 commit
  20. 11 Feb, 2014 1 commit
  21. 02 Dec, 2013 1 commit
  22. 08 Oct, 2013 1 commit
  23. 01 Oct, 2013 1 commit
    • F
      irq: Consolidate do_softirq() arch overridden implementations · 7d65f4a6
      Frederic Weisbecker authored
      All arch-overridden implementations of do_softirq() share the following
      common code: disable irqs (to avoid races with the pending check),
      check if there are softirqs pending, then execute __do_softirq() on
      a specific stack.
      
      Consolidate the common parts such that archs only worry about the
      stack switch.
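      
      After the consolidation, the generic do_softirq() carries the common
      logic and each arch only supplies do_softirq_own_stack(); roughly
      (sketch from the description above, not a verbatim hunk):
      
      	asmlinkage void do_softirq(void)
      	{
      		unsigned long flags;
      
      		if (in_interrupt())
      			return;
      
      		local_irq_save(flags);	/* avoid races with the pending check */
      		if (local_softirq_pending())
      			do_softirq_own_stack();	/* arch-specific stack switch */
      		local_irq_restore(flags);
      	}
      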
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@au1.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      7d65f4a6
  24. 25 Sep, 2013 2 commits
    • B
      powerpc: Remove ksp_limit on ppc64 · cbc9565e
      Benjamin Herrenschmidt authored
      We've been keeping that field in thread_struct for a while; it contains
      the "limit" of the current stack pointer and is meant to be used for
      detecting stack overflows.
      
      It has a few problems however:
      
       - First, it was never actually *used* on 64-bit: it was set and updated
      but never actually exploited.
      
       - When switching stacks to/from the irq and softirq stacks, its update
      is racy unless we hard-disable interrupts, which is costly. This
      is fine on 32-bit, as we don't soft-disable there, but not on 64-bit.
      
      Thus, rather than fixing 2 in order to implement 1 in some hypothetical
      future, let's remove the code completely from 64-bit. In order to avoid
      a clutter of ifdefs, we remove the updates from the C interrupt stack
      switching code completely, and instead maintain the field from the
      asm helper that is used to do the stack switching in the first place.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      cbc9565e
    • B
      powerpc/irq: Run softirqs off the top of the irq stack · 0366a1c7
      Benjamin Herrenschmidt authored
      Nowadays, irq_exit() calls __do_softirq() pretty much directly
      instead of calling do_softirq(), which switches to the dedicated
      softirq stack.
      
      This has led to observed stack overflows on powerpc since we call
      irq_enter() and irq_exit() outside of the scope that switches to
      the irq stack.
      
      This fixes it by moving the stack switching up a level, making
      irq_enter() and irq_exit() run off the irq stack.
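      
      The resulting shape is roughly the following (sketch; __do_irq() is the
      new inner function this patch introduces, the asm helper name and the
      stack-pointer variable are illustrative, details elided):
      
      	void do_IRQ(struct pt_regs *regs)
      	{
      		/* switch to the irq stack first (asm helper) ... */
      		call_do_irq(regs, irq_stack_ptr);
      	}
      
      	void __do_irq(struct pt_regs *regs)
      	{
      		irq_enter();	/* now runs on the irq stack ... */
      		/* ... handle the interrupt ... */
      		irq_exit();	/* ... so __do_softirq() from here has room */
      	}
      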
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      0366a1c7
  25. 01 Aug, 2013 1 commit
  26. 20 Jun, 2013 1 commit
  27. 15 Jun, 2013 1 commit
    • B
      powerpc: Fix missing/delayed calls to irq_work · 230b3034
      Benjamin Herrenschmidt authored
      When replaying interrupts (as a result of the interrupt occurring
      while soft-disabled), in the case of the decrementer, we are exclusively
      testing for a pending timer target. However, we also use decrementer
      interrupts to trigger the new "irq_work", which in this case would
      be missed.
      
      This changes the logic to force a replay in both cases: when a timer
      boundary has been reached, and when a decrementer interrupt actually
      occurred while disabled. The former test is still useful to catch cases
      where a CPU that has been hard-disabled for a long time completely misses
      the interrupt due to a decrementer rollover.
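      
      In code, the replay test becomes roughly (sketch; simplified from the
      description, exact flag handling elided):
      
      	/* in __check_irq_replay(): replay the decrementer if it really
      	 * fired while soft-disabled, OR if a timer target has passed */
      	if ((happened & PACA_IRQ_DEC) || decrementer_check_overflow())
      		return 0x900;	/* decrementer vector */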
      
      CC: <stable@vger.kernel.org> [v3.4+]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Tested-by: Steven Rostedt <rostedt@goodmis.org>
      230b3034
  28. 18 Apr, 2013 1 commit
    • I
      powerpc: Add accounting for Doorbell interrupts · a6a058e5
      Ian Munsie authored
      This patch adds a new line to /proc/interrupts to account for the
      doorbell interrupts that each hardware thread has received. The total
      interrupt count in /proc/stat will now also include doorbells.
      
       # cat /proc/interrupts
                 CPU0       CPU1       CPU2       CPU3
       16:        551       1267        281        175      XICS Level     IPI
      LOC:       2037       1503       1688       1625   Local timer interrupts
      SPU:          0          0          0          0   Spurious interrupts
      CNT:          0          0          0          0   Performance monitoring interrupts
      MCE:          0          0          0          0   Machine check exceptions
      DBL:         42        550         20         91   Doorbell interrupts
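      
      The accounting behind that row follows the usual per-cpu counter
      pattern, roughly (sketch; the field name doorbell_irqs is our assumption
      for illustration):
      
      	/* in the doorbell handler: bump the current CPU's counter */
      	__this_cpu_inc(irq_stat.doorbell_irqs);
      
      	/* in arch_show_interrupts(): print one row for /proc/interrupts */
      	seq_printf(p, "%*s: ", prec, "DBL");
      	for_each_online_cpu(j)
      		seq_printf(p, "%10u ", per_cpu(irq_stat, j).doorbell_irqs);
      	seq_printf(p, "  Doorbell interrupts\n");
      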
      Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      a6a058e5
  29. 10 Jan, 2013 1 commit
    • I
      powerpc: Add code to handle soft-disabled doorbells on server · fe9e1d54
      Ian Munsie authored
      This patch adds the logic to properly handle doorbells that come in when
      interrupts have been soft disabled and to replay them when interrupts
      are re-enabled:
      
      - masked_##_H##interrupt is modified to leave interrupts enabled when a
        doorbell has come in, since doorbells are edge-sensitive and as such
        won't be automatically re-raised.
      
      - __check_irq_replay now tests if a doorbell happened on book3s, and
        returns either 0xe80 or 0xa00 depending on whether we are the
        hypervisor or not.
      
      - restore_check_irq_replay now tests for the two possible server
        doorbell vector numbers to replay.
      
      - __replay_interrupt also adds tests for the two server doorbell vector
        numbers, and is modified to use a compare instruction rather than an
        andi. on the single bit difference between 0x500 and 0x900.
      
      The last two use a CPU feature section to avoid needlessly testing
      against the hypervisor vector if it is not the hypervisor, and vice
      versa.
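      
      The replay test, sketched from the description above (PACA_IRQ_DBELL and
      CPU_FTR_HVMODE are the expected names; the feature-section trick is
      expressed here as a plain conditional):
      
      	/* in __check_irq_replay(): server doorbells replay at 0xe80 when
      	 * we are the hypervisor, 0xa00 otherwise */
      	if (happened & PACA_IRQ_DBELL)
      		return cpu_has_feature(CPU_FTR_HVMODE) ? 0xe80 : 0xa00;
      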
      Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      fe9e1d54
  30. 17 Sep, 2012 1 commit
    • L
      powerpc/trace: Fix interrupt tracepoints vs. RCU · e72bbbab
      Li Zhong authored
      There are a few tracepoints in the interrupt code path that sit before
      irq_enter() or after irq_exit(), such as
      trace_irq_entry()/trace_irq_exit() in do_IRQ(), and
      trace_timer_interrupt_entry()/trace_timer_interrupt_exit() in
      timer_interrupt().
      
      If the interrupt comes in from idle(), then because tracepoints contain
      RCU read-side critical sections, we could see the following suspicious
      RCU usage reported:
      
      [  145.127743] ===============================
      [  145.127747] [ INFO: suspicious RCU usage. ]
      [  145.127752] 3.6.0-rc3+ #1 Not tainted
      [  145.127755] -------------------------------
      [  145.127759] /root/.workdir/linux/arch/powerpc/include/asm/trace.h:33 suspicious rcu_dereference_check() usage!
      [  145.127765]
      [  145.127765] other info that might help us debug this:
      [  145.127765]
      [  145.127771]
      [  145.127771] RCU used illegally from idle CPU!
      [  145.127771] rcu_scheduler_active = 1, debug_locks = 0
      [  145.127777] RCU used illegally from extended quiescent state!
      [  145.127781] no locks held by swapper/0/0.
      [  145.127785]
      [  145.127785] stack backtrace:
      [  145.127789] Call Trace:
      [  145.127796] [c00000000108b530] [c000000000013c40] .show_stack+0x70/0x1c0 (unreliable)
      [  145.127806] [c00000000108b5e0] [c0000000000f59d8] .lockdep_rcu_suspicious+0x118/0x150
      [  145.127813] [c00000000108b680] [c00000000000fc58] .do_IRQ+0x498/0x500
      [  145.127820] [c00000000108b750] [c000000000003950] hardware_interrupt_common+0x150/0x180
      [  145.127828] --- Exception: 501 at .plpar_hcall_norets+0x84/0xd4
      [  145.127828]     LR = .check_and_cede_processor+0x38/0x70
      [  145.127836] [c00000000108bab0] [c0000000000665dc] .shared_cede_loop+0x5c/0x100
      [  145.127844] [c00000000108bb70] [c000000000588ab0] .cpuidle_enter+0x30/0x50
      [  145.127850] [c00000000108bbe0] [c000000000588b0c] .cpuidle_enter_state+0x3c/0xb0
      [  145.127857] [c00000000108bc60] [c000000000589730] .cpuidle_idle_call+0x150/0x6c0
      [  145.127863] [c00000000108bd30] [c000000000058440] .pSeries_idle+0x10/0x40
      [  145.127870] [c00000000108bda0] [c00000000001683c] .cpu_idle+0x18c/0x2d0
      [  145.127876] [c00000000108be60] [c00000000000b434] .rest_init+0x124/0x1b0
      [  145.127884] [c00000000108bef0] [c0000000009d0d28] .start_kernel+0x568/0x588
      [  145.127890] [c00000000108bf90] [c000000000009660] .start_here_common+0x20/0x40
      
      This is because RCU usage in interrupt context must occur within the
      region marked by rcu_irq_enter()/rcu_irq_exit(), which are called from
      irq_enter()/irq_exit() respectively.
      
      Move the tracepoints into the irq_enter()/irq_exit() region to avoid the
      report.
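      
      In other words, the do_IRQ() path changes shape roughly like this
      (illustrative sketch of the ordering, not the actual diff):
      
      	void do_IRQ(struct pt_regs *regs)
      	{
      		irq_enter();		/* calls rcu_irq_enter(): RCU is watching */
      		trace_irq_entry(regs);	/* tracepoint now inside the RCU-safe region */
      		/* ... handle the interrupt ... */
      		trace_irq_exit(regs);
      		irq_exit();		/* calls rcu_irq_exit() */
      	}
      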
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      e72bbbab
  31. 10 Jul, 2012 2 commits
    • B
      powerpc: Fix build of some debug irq code · 21b2de34
      Benjamin Herrenschmidt authored
      There was a typo: the code checked for CONFIG_TRACE_IRQFLAG instead of
      CONFIG_TRACE_IRQFLAGS, causing some useful debug code not to be
      built.
      
      This in turn caused a build error on 64-bit BookE due to incorrect
      semicolons at the end of a couple of macros, so let's fix that too.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: stable@vger.kernel.org [v3.4]
      21b2de34
    • B
      powerpc: More fixes for lazy IRQ vs. idle · be2cf20a
      Benjamin Herrenschmidt authored
      Looks like we still have issues with pSeries and Cell idle code
      vs. the lazy irq state. In fact, the reset fixes that went upstream
      are exposing the problem more by causing BUG_ON() to trigger (which
      this patch turns into a WARN_ON instead).
      
      We need to be careful, when using a variant of low power state that
      has the side effect of turning interrupts back on, to properly set
      all the SW & lazy state to look as if everything is enabled before
      we enter the low power state with MSR:EE off, as we will return with
      MSR:EE on. If not, we have a state discrepancy which can cause
      things to go very wrong later on.
      
      This patch moves the logic into a helper and uses it from the
      pseries and cell idle code. The power4/970 idle code already got
      things right (in assembly, even!) so I'm not touching it. The power7
      "bare metal" idle code is subtly different and correct. That leaves PA6T
      and some hypervisor-based Cell platforms, which have questionable
      code in there; but they are mostly dead platforms, so I'll fix them
      when I manage to get final answers from the respective maintainers
      about how the low power state actually works on them.
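      
      A sketch of the helper's contract (the name prep_irq_for_idle() matches
      this patch as far as we can tell; the body is simplified from memory,
      not verbatim):
      
      	bool prep_irq_for_idle(void)
      	{
      		/* hard-disable first so the lazy state cannot change under us */
      		__hard_irq_disable();
      
      		/* if an interrupt already came in while soft-disabled,
      		 * abort: the caller must not enter the low power state */
      		if (lazy_irq_pending())
      			return false;
      
      		/* make the SW & lazy state say "fully enabled", since the
      		 * low power state will return with MSR:EE on */
      		trace_hardirqs_on();
      		local_paca->irq_happened = 0;
      		local_paca->soft_enabled = 1;
      		return true;
      	}
      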
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: stable@vger.kernel.org [v3.4]
      be2cf20a
  32. 29 Jun, 2012 1 commit
    • S
      powerpc/ftrace: Do not trace restore_interrupts() · 2d773aa4
      Steven Rostedt authored
      As I was adding code that affects all archs, I started testing the
      function tracer against PPC64 and found that it currently locks up with
      the 3.4 kernel. I figured it was due to tracing a function that
      shouldn't be traced, so I went through the following bisection process
      to find the culprit:
      
       cat /debug/tracing/available_filter_functions > t
       num=`wc -l < t`
       sed -ne "1,${num}p" t > t1
       let num=num+1
       sed -ne "${num},\$p" t > t2
       cat t1 > /debug/tracing/set_ftrace_filter
       echo function > /debug/tracing/current_tracer
       <failed? bisect t1, if not bisect t2>
      
      It finally came down to this function: restore_interrupts()
      
      I'm not sure why this locks up the system. It just seems to prevent
      scheduling from occurring. Interrupts seem to still work, as I can ping
      the box. But all user processes freeze.
      
      When restore_interrupts() is not traced, function tracing works fine.
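      
      The fix itself boils down to a one-word annotation (sketched in diff
      form; exact placement of the attribute may differ, the body is
      untouched):
      
      	-void restore_interrupts(void)
      	+notrace void restore_interrupts(void)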
      
      Cc: stable@kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      2d773aa4
  33. 22 May, 2012 1 commit
  34. 12 May, 2012 1 commit
    • B
      powerpc/irq: Fix another case of lazy IRQ state getting out of sync · 7c0482e3
      Benjamin Herrenschmidt authored
      So we have another case of paca->irq_happened getting out of
      sync with the HW irq state. This can happen when a perfmon
      interrupt occurs while soft disabled, as it will return to a
      soft disabled but hard enabled context while leaving a stale
      PACA_IRQ_HARD_DIS flag set.
      
      This patch fixes it, and also adds a test for the condition
      of those flags being out of sync in arch_local_irq_restore()
      when CONFIG_TRACE_IRQFLAGS is enabled.
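      
      The sanity check amounts to something like this (sketch; the exact test
      in arch_local_irq_restore() may be arranged differently):
      
      	#ifdef CONFIG_TRACE_IRQFLAGS
      	/* if the lazy flag claims we are hard-disabled, the real
      	 * MSR:EE bit had better be off too */
      	WARN_ON((local_paca->irq_happened & PACA_IRQ_HARD_DIS) &&
      		(mfmsr() & MSR_EE));
      	#endif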
      
      This helps catch those gremlins faster (and so far I
      can't seem to see any anymore, so that's good news).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      7c0482e3
  35. 09 May, 2012 1 commit
  36. 30 Apr, 2012 1 commit
    • G
      powerpc/irqdomain: Fix broken NR_IRQ references · 4013369f
      Grant Likely authored
      The switch from using irq_map to irq_alloc_desc*() for managing irq
      number allocations introduced new bugs in some of the powerpc
      interrupt code.  Several functions rely on the value of NR_IRQS to
      determine the maximum irq number that could get allocated.  However,
      with sparse_irq and using irq_alloc_desc*() the maximum possible irq
      number is now specified with 'nr_irqs' which may be a number larger
      than NR_IRQS.  This has caused breakage on powermac when
      CONFIG_NR_IRQS is set to 32.
      
      This patch removes most of the direct references to NR_IRQS in the
      powerpc code, replacing them either with a nr_irqs reference or with
      the common for_each_irq_desc() macro.  The powerpc-specific
      for_each_irq() macro is removed at the same time.
      
      Also, the Cell axon_msi driver is refactored to remove the global
      build assumption on the size of NR_IRQS and instead add a limit to the
      maximum irq number when calling irq_domain_add_nomap().
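      
      The conversion pattern, sketched (illustrative loop bodies; fixup_one()
      is a hypothetical helper, for_each_irq_desc() is the real common macro):
      
      	/* before: bounded by compile-time NR_IRQS, misses irqs above it */
      	for (irq = 0; irq < NR_IRQS; irq++) {
      		struct irq_desc *desc = irq_to_desc(irq);
      		if (desc)
      			fixup_one(desc);
      	}
      
      	/* after: walks every allocated descriptor, whatever nr_irqs is */
      	for_each_irq_desc(irq, desc)
      		fixup_one(desc);
      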
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      4013369f