1. 18 Feb 2010, 3 commits
  2. 11 Feb 2010, 1 commit
    • x86: Avoid race condition in pci_enable_msix() · ced5b697
      Brandon Philips committed
      Keep chip_data in create_irq_nr and destroy_irq.
      
      When two drivers are setting up MSI-X at the same time via
      pci_enable_msix() there is a race.  See this dmesg excerpt:
      
      [   85.170610] ixgbe 0000:02:00.1: irq 97 for MSI/MSI-X
      [   85.170611]   alloc irq_desc for 99 on node -1
      [   85.170613] igb 0000:08:00.1: irq 98 for MSI/MSI-X
      [   85.170614]   alloc kstat_irqs on node -1
      [   85.170616] alloc irq_2_iommu on node -1
      [   85.170617]   alloc irq_desc for 100 on node -1
      [   85.170619]   alloc kstat_irqs on node -1
      [   85.170621] alloc irq_2_iommu on node -1
      [   85.170625] ixgbe 0000:02:00.1: irq 99 for MSI/MSI-X
      [   85.170626]   alloc irq_desc for 101 on node -1
      [   85.170628] igb 0000:08:00.1: irq 100 for MSI/MSI-X
      [   85.170630]   alloc kstat_irqs on node -1
      [   85.170631] alloc irq_2_iommu on node -1
      [   85.170635]   alloc irq_desc for 102 on node -1
      [   85.170636]   alloc kstat_irqs on node -1
      [   85.170639] alloc irq_2_iommu on node -1
      [   85.170646] BUG: unable to handle kernel NULL pointer dereference
      at 0000000000000088
      
      As you can see, igb and ixgbe are alternating calls to create_irq_nr()
      via pci_enable_msix() in their probe functions.
      
      ixgbe: While looping through irq_desc_ptrs[] in create_irq_nr(), ixgbe
      chooses irq_desc_ptrs[102], exits the loop, drops vector_lock and
      calls dynamic_irq_init(), which sets irq_desc_ptrs[102]->chip_data =
      NULL.
      
      igb: Grabs the vector_lock now and starts looping over irq_desc_ptrs[]
      via create_irq_nr(). It gets to irq_desc_ptrs[102] and does this:
      
      	cfg_new = irq_desc_ptrs[102]->chip_data;
      	if (cfg_new->vector != 0)
      		continue;
      
      This hits the NULL deref.
      
      Another possible race exists via pci_disable_msix() in a driver or in
      a number of error paths that call free_msi_irqs():
      
      destroy_irq()
      dynamic_irq_cleanup() which sets desc->chip_data = NULL
      ...race window...
      desc->chip_data = cfg;
      
      Remove the save and restore code for cfg in create_irq_nr() and
      destroy_irq() and take the desc->lock when checking the irq_cfg.
      Reported-and-analyzed-by: Brandon Philips <bphilips@suse.de>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <1265793639-15071-3-git-send-email-yinghai@kernel.org>
      Signed-off-by: Brandon Philips <bphilips@suse.de>
      Cc: stable@kernel.org
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      ced5b697
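
      A minimal sketch of the race-free check, assuming the 2.6.33-era x86
      create_irq_nr() structure (irq_to_desc_alloc_node(), __assign_irq_vector()
      and the surrounding locking are paraphrased here, not the verbatim patch):
      chip_data stays valid and the irq_cfg is only examined under desc->lock,
      so the NULL chip_data window shown in the dmesg above cannot be hit.

      unsigned int create_irq_nr(unsigned int irq_want, int node)
      {
              unsigned int new, irq = 0;
              unsigned long flags;
              struct irq_desc *desc_new;
              struct irq_cfg *cfg_new;

              raw_spin_lock_irqsave(&vector_lock, flags);
              for (new = irq_want; new < nr_irqs; new++) {
                      desc_new = irq_to_desc_alloc_node(new, node);
                      if (!desc_new)
                              continue;

                      /* Look at the descriptor under its own lock so a racing
                       * dynamic_irq_init()/destroy_irq() cannot pull chip_data
                       * away while we check it. */
                      raw_spin_lock(&desc_new->lock);
                      cfg_new = desc_new->chip_data;
                      if (cfg_new->vector != 0) {
                              raw_spin_unlock(&desc_new->lock);
                              continue;
                      }
                      if (__assign_irq_vector(new, cfg_new, apic->target_cpus()) == 0)
                              irq = new;
                      raw_spin_unlock(&desc_new->lock);
                      break;
              }
              raw_spin_unlock_irqrestore(&vector_lock, flags);

              return irq;
      }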
  3. 15 Dec 2009, 1 commit
  4. 04 Dec 2009, 2 commits
  5. 20 Nov 2009, 1 commit
  6. 18 Nov 2009, 1 commit
  7. 08 Nov 2009, 2 commits
    • irq: Do not attempt to create subdirectories if /proc/irq/<irq> failed · c82a43d4
      Cyrill Gorcunov committed
      If a parent directory (i.e. /proc/irq/<irq>) could not be created, we
      should not attempt to create subdirectories. Otherwise the
      "smp_affinity" and "spurious" entries may end up registered under the
      /proc root instead of their proper place.
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <20091026202811.GD5321@lenovo>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c82a43d4
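
      A minimal sketch of the resulting guard, assuming the 2.6.32-era
      kernel/irq/proc.c layout (root_irq_dir and desc->dir are the fields
      used there): the per-IRQ entries are only registered once proc_mkdir()
      for /proc/irq/<irq> has actually succeeded.

      void register_irq_proc(unsigned int irq, struct irq_desc *desc)
      {
              char name[12];

              if (!root_irq_dir || (desc->chip == &no_irq_chip) || desc->dir)
                      return;

              sprintf(name, "%d", irq);

              /* create /proc/irq/<irq> */
              desc->dir = proc_mkdir(name, root_irq_dir);
              if (!desc->dir)
                      return; /* no parent: skip smp_affinity, spurious, ... */

              /* ... per-IRQ proc entries are created below this point ... */
      }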
    • genirq: try_one_irq() must be called with irq disabled · e7e7e0c0
      Yong Zhang committed
      Prarit reported:
      =================================
      [ INFO: inconsistent lock state ]
      2.6.32-rc5 #1
      ---------------------------------
      inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
      swapper/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
       (&irq_desc_lock_class){?.-...}, at: [<ffffffff810c264e>] try_one_irq+0x32/0x138
      {IN-HARDIRQ-W} state was registered at:
       [<ffffffff81095160>] __lock_acquire+0x2fc/0xd5d
       [<ffffffff81095cb4>] lock_acquire+0xf3/0x12d
       [<ffffffff814cdadd>] _spin_lock+0x40/0x89
       [<ffffffff810c3389>] handle_level_irq+0x30/0x105
       [<ffffffff81014e0e>] handle_irq+0x95/0xb7
       [<ffffffff810141bd>] do_IRQ+0x6a/0xe0
       [<ffffffff81012813>] ret_from_intr+0x0/0x16
      irq event stamp: 195096
      hardirqs last  enabled at (195096): [<ffffffff814cd7f7>] _spin_unlock_irq+0x3a/0x5c
      hardirqs last disabled at (195095): [<ffffffff814cdbdd>] _spin_lock_irq+0x29/0x95
      softirqs last  enabled at (195088): [<ffffffff81068c92>] __do_softirq+0x1c1/0x1ef
      softirqs last disabled at (195093): [<ffffffff8101304c>] call_softirq+0x1c/0x30
      
      other info that might help us debug this:
      1 lock held by swapper/0:
       #0:  (kernel/irq/spurious.c:21){+.-...}, at: [<ffffffff81070cf2>]
      run_timer_softirq+0x1a9/0x315
      
      stack backtrace:
      Pid: 0, comm: swapper Not tainted 2.6.32-rc5 #1
      Call Trace:
       <IRQ>  [<ffffffff81093e94>] valid_state+0x187/0x1ae
       [<ffffffff81093fe4>] mark_lock+0x129/0x253
       [<ffffffff810951d4>] __lock_acquire+0x370/0xd5d
       [<ffffffff81095cb4>] lock_acquire+0xf3/0x12d
       [<ffffffff814cdadd>] _spin_lock+0x40/0x89
       [<ffffffff810c264e>] try_one_irq+0x32/0x138
       [<ffffffff810c2795>] poll_all_shared_irqs+0x41/0x6d
       [<ffffffff810c27dd>] poll_spurious_irqs+0x1c/0x49
       [<ffffffff81070d82>] run_timer_softirq+0x239/0x315
       [<ffffffff81068bd3>] __do_softirq+0x102/0x1ef
       [<ffffffff8101304c>] call_softirq+0x1c/0x30
       [<ffffffff81014b65>] do_softirq+0x59/0xca
       [<ffffffff810686ad>] irq_exit+0x58/0xae
       [<ffffffff81029b84>] smp_apic_timer_interrupt+0x94/0xba
       [<ffffffff81012a33>] apic_timer_interrupt+0x13/0x20
      
      The reason is that try_one_irq() is called from hardirq context with
      interrupts disabled and from softirq context (poll_all_shared_irqs())
      with interrupts enabled.
      
      Disable interrupts before calling it from poll_all_shared_irqs().
      Reported-and-tested-by: Prarit Bhargava <prarit@redhat.com>
      Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
      LKML-Reference: <1257563773-4620-1-git-send-email-yong.zhang0@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e7e7e0c0
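
      A minimal sketch of the fix, assuming the 2.6.32-era
      kernel/irq/spurious.c loop: try_one_irq() now sees the same context it
      gets from the hardirq path, because interrupts are disabled around the
      call made from the timer softirq.

      static void poll_all_shared_irqs(void)
      {
              struct irq_desc *desc;
              int i;

              for_each_irq_desc(i, desc) {
                      unsigned int status;

                      if (!i)
                              continue;

                      /* Racy but it doesn't matter */
                      status = desc->status;
                      barrier();
                      if (!(status & IRQ_SPURIOUS_DISABLED))
                              continue;

                      local_irq_disable();
                      try_one_irq(i, desc);
                      local_irq_enable();
              }
      }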
  8. 04 Nov 2009, 2 commits
  9. 12 Oct 2009, 1 commit
  10. 29 Aug 2009, 1 commit
  11. 27 Aug 2009, 1 commit
    • genirq: Do not mask oneshot edge type interrupts · 4dbc9ca2
      Thomas Gleixner committed
      Masking oneshot edge type interrupts is wrong, as we might lose an
      interrupt that is issued while the threaded handler is handling the
      device. We can safely keep the irq unmasked, as with edge type
      interrupts there is no danger of interrupt floods. If the threaded
      handler has not yet finished, IRQTF_RUNTHREAD is set, which keeps the
      handler thread active.
      
      Debugged and verified in preempt-rt.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4dbc9ca2
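
      Illustration only (not the actual diff), assuming the 2.6.31-era
      desc->status trigger flags: the policy above boils down to masking a
      oneshot interrupt only when it is not edge type.

      static void oneshot_mask_if_needed(struct irq_desc *desc, unsigned int irq)
      {
              /*
               * Edge type: leave the line unmasked. An edge arriving while the
               * thread handler runs sets IRQTF_RUNTHREAD and keeps the thread
               * going, so nothing is lost and no interrupt flood is possible.
               * Level type: keep the line masked until the thread is done.
               */
              if ((desc->status & IRQ_ONESHOT) &&
                  !(desc->status & IRQ_TYPE_EDGE_BOTH))
                      desc->chip->mask(irq);
      }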
  12. 18 Aug 2009, 1 commit
  13. 17 Aug 2009, 3 commits
    • genirq: Support nested threaded irq handling · 399b5da2
      Thomas Gleixner committed
      Interrupt chips which are behind a slow bus (i2c, spi ...) and
      demultiplex other interrupt sources need to run their interrupt
      handler in a thread. 
      
      The demultiplexed interrupt handlers need to run in thread context as
      well and need to finish before the demux handler thread can reenable
      the interrupt line. So the easiest way is to run the sub device
      handlers in the context of the demultiplexing handler thread.
      
      To avoid that a separate thread is created for the subdevices the
      function set_nested_irq_thread() is provided which sets the
      IRQ_NESTED_THREAD flag in the interrupt descriptor.
      
      A driver which calls request_threaded_irq() does not need to be aware
      that its threaded handler is called in the context of the
      demultiplexing handler thread. The setup code checks the
      IRQ_NESTED_THREAD flag, which was set by the irq chip setup code, and
      does not set up a separate thread for the interrupt. The primary
      handler provided by the device driver is replaced by an internal dummy
      function which warns when it is called.
      
      For the demultiplexing handler a helper function, handle_nested_irq(),
      is provided. It calls the demultiplexed interrupt's thread function in
      the context of the caller, does the proper interrupt accounting, and
      takes the interrupt-disabled status of the demultiplexed subdevice
      into account.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
      399b5da2
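
      A hedged usage sketch for the demultiplexing side: the expander chip,
      its fields and gpio_expander_read_pending() are made up for
      illustration, while handle_nested_irq() is the helper added by this
      patch. The demux thread handler reads the pending bits over the slow
      bus and hands each one to its nested handler in thread context.

      struct expander_chip {          /* illustrative only */
              unsigned int irq_base;  /* first nested (sub device) irq */
              unsigned int nr_irqs;
              /* i2c client, locking, ... */
      };

      static irqreturn_t expander_demux_thread(int irq, void *data)
      {
              struct expander_chip *chip = data;
              unsigned long pending;
              int bit;

              /* Slow bus access is fine here: we run in thread context. */
              pending = gpio_expander_read_pending(chip);

              for (bit = 0; bit < chip->nr_irqs; bit++) {
                      if (pending & BIT(bit))
                              /* runs the sub device thread_fn in our context */
                              handle_nested_irq(chip->irq_base + bit);
              }

              return IRQ_HANDLED;
      }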
    • genirq: Add buslock support · 70aedd24
      Thomas Gleixner committed
      Some interrupt chips are connected to a "slow" bus (i2c, spi ...). The
      bus access needs to sleep and therefore cannot be performed in atomic
      context.
      
      Some of the generic interrupt management functions like disable_irq(),
      enable_irq() ... call interrupt chip functions with the irq_desc->lock
      held and interrupts disabled. This does not work for such devices.
      
      Provide a separate synchronization mechanism for such interrupt
      chips. The irq_chip structure is extended by two optional callbacks,
      bus_lock and bus_sync_unlock.
      
      The idea is to serialize the bus access for those operations in the
      core code so that drivers which sit behind such a bus-operated
      interrupt controller do not have to worry about it and can just use
      the normal interfaces.
      
      bus_lock() is called to serialize access to the interrupt controller
      bus.
      
      Now the core code can issue chip->mask/unmask ... commands without
      changing the fast path code at all. The chip implementation merely
      stores that information in a chip-private data structure and
      returns. There is no bus interaction, as these functions are called
      from atomic context.
      
      After that bus_sync_unlock() is called outside the atomic context. Now
      the chip implementation issues the bus commands, waits for completion
      and unlocks the interrupt controller bus.
      
      The irq_chip implementation as pseudo code:
      
      struct irq_chip_data {
             struct mutex   mutex;
             unsigned int   irq_offset;
             unsigned long  mask;
             unsigned long  mask_status;
       };
      
      static void bus_lock(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              mutex_lock(&data->mutex);
      }
      
      static void mask(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              irq -= data->irq_offset;
              data->mask |= (1 << irq);
      }
      
      static void unmask(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              irq -= data->irq_offset;
              data->mask &= ~(1 << irq);
      }
      
      static void bus_sync_unlock(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              if (data->mask != data->mask_status) {
                      do_bus_magic_to_set_mask(data->mask);
                      data->mask_status = data->mask;
              }
              mutex_unlock(&data->mutex);
      }
      
      The device drivers can use request_threaded_irq, free_irq, disable_irq
      and enable_irq as usual, with the only restriction that the calls need
      to come from non-atomic context.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
      70aedd24
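
      As a hedged follow-up to the pseudo code above, wiring the new
      callbacks into the chip is just a matter of filling in the two new
      irq_chip members next to the existing mask/unmask ones (the function
      names are the pseudo-code ones, not a real driver):

      static struct irq_chip expander_irq_chip = {
              .name            = "expander",
              .bus_lock        = bus_lock,        /* taken before mask/unmask */
              .bus_sync_unlock = bus_sync_unlock, /* flushes cached mask to HW */
              .mask            = mask,
              .unmask          = unmask,
      };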
    • genirq: Add oneshot support · b25c340c
      Thomas Gleixner committed
      For threaded interrupt handlers we expect the hard interrupt handler
      part to mask the interrupt on the originating device. The interrupt
      line itself is reenabled after the hard interrupt handler has
      executed.
      
      This requires access to the originating device from hard interrupt
      context which is not always possible. There are devices which can only
      be accessed via a bus (i2c, spi, ...). The bus access requires thread
      context. For such devices we need to keep the interrupt line masked
      until the threaded handler has executed.
      
      Add a new flag IRQF_ONESHOT which allows drivers to request that the
      interrupt is not unmasked after the hard interrupt context handler has
      been executed and the thread has been woken. The interrupt line is
      unmasked after the thread handler function has been executed.
      
      Note that for now IRQF_ONESHOT cannot be used with IRQF_SHARED to
      avoid complex accounting mechanisms.
      
      For oneshot interrupts the primary handler simply returns
      IRQ_WAKE_THREAD and does nothing else. A generic implementation
      irq_default_primary_handler() is provided to avoid useless copies all
      over the place. It is automatically installed when
      request_threaded_irq() is called with handler=NULL and
      thread_fn!=NULL.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
      b25c340c
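
      A hedged usage sketch: a driver whose device sits behind I2C requests
      a oneshot threaded interrupt. The foo_* names and the clear-status
      helper are illustrative; request_threaded_irq() with handler == NULL
      plus IRQF_ONESHOT is the combination this patch enables, so the line
      stays masked until foo_irq_thread() returns.

      static irqreturn_t foo_irq_thread(int irq, void *dev_id)
      {
              struct foo_device *foo = dev_id;

              foo_clear_irq_status(foo);   /* may sleep: I2C access is fine here */
              return IRQ_HANDLED;
      }

      /* in the probe path; the NULL primary handler is replaced by
       * irq_default_primary_handler(), which just returns IRQ_WAKE_THREAD */
      err = request_threaded_irq(client->irq, NULL, foo_irq_thread,
                                 IRQF_TRIGGER_LOW | IRQF_ONESHOT, "foo", foo);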
  14. 14 Aug 2009, 1 commit
    • genirq: prevent wakeup of freed irq thread · 2d860ad7
      Linus Torvalds committed
      free_irq() can remove an irqaction while the corresponding interrupt
      is in progress, but free_irq() sets action->thread to NULL
      unconditionally, which might lead to a NULL pointer dereference in
      handle_IRQ_event() when the hard interrupt context tries to wake up
      the handler thread.
      
      Prevent this by moving the thread stop after synchronize_irq(). No
      need to set action->thread to NULL either as action is going to be
      freed anyway.
      
      This fixes a boot crash reported against preempt-rt which uses the
      mainline irq threads code to implement full irq threading.
      
      [ tglx: removed local irqthread variable ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      2d860ad7
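
      A hedged sketch of the resulting ordering at the tail of __free_irq(),
      paraphrasing the 2.6.31-era code: the action is unhooked under
      desc->lock, the thread is only stopped after synchronize_irq(), and
      action->thread is left as is because the whole action is freed right
      after.

      /* action has already been removed from desc->action under desc->lock */
      spin_unlock_irqrestore(&desc->lock, flags);

      unregister_handler_proc(irq, action);

      /* Make sure it's not being used on another CPU: */
      synchronize_irq(irq);

      /*
       * Only now is it safe to stop the handler thread; a hard interrupt
       * still in flight above may have tried to wake it up.
       */
      if (action->thread) {
              if (!test_bit(IRQTF_DIED, &action->thread_flags))
                      kthread_stop(action->thread);
              put_task_struct(action->thread);
      }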
  15. 09 Aug 2009, 1 commit
  16. 08 Aug 2009, 1 commit
  17. 04 Aug 2009, 1 commit
  18. 23 Jul 2009, 1 commit
    • genirq: Fix UP compile failure caused by irq_thread_check_affinity · 61f38261
      Bruno Premont committed
      Since "genirq: Delegate irq affinity setting to the irq thread"
      (591d2fb0), compilation with CONFIG_SMP=n fails with the following
      error:
      
      /usr/src/linux-2.6/kernel/irq/manage.c:
         In function 'irq_thread_check_affinity':
      /usr/src/linux-2.6/kernel/irq/manage.c:475:
         error: 'struct irq_desc' has no member named 'affinity'
      make[4]: *** [kernel/irq/manage.o] Error 1
      
      That commit adds a new function, irq_thread_check_affinity(), which
      uses struct irq_desc.affinity, a member that is only available for
      CONFIG_SMP=y. Move that function under #ifdef CONFIG_SMP.
      
      [ tglx@brownpaperbag: compile and boot tested on UP and SMP ]
      Signed-off-by: Bruno Premont <bonbons@linux-vserver.org>
      LKML-Reference: <20090722222232.2eb3e1c4@neptune.home>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      61f38261
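
      A hedged sketch of the shape of the fix: the SMP implementation stays
      as introduced by 591d2fb0 (its body is elided here, see the next entry
      below), and UP builds get an empty inline stub so the caller needs no
      #ifdef of its own.

      #ifdef CONFIG_SMP
      /* uses desc->affinity, which only exists for CONFIG_SMP=y */
      static void irq_thread_check_affinity(struct irq_desc *desc,
                                            struct irqaction *action)
      {
              /* body unchanged, as added by 591d2fb0 */
      }
      #else
      static inline void
      irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
      {
      }
      #endif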
  19. 21 Jul 2009, 1 commit
    • genirq: Delegate irq affinity setting to the irq thread · 591d2fb0
      Thomas Gleixner committed
      irq_set_thread_affinity() calls set_cpus_allowed_ptr() which might
      sleep, but irq_set_thread_affinity() is called with desc->lock held
      and can be called from hard interrupt context as well. The code has
      another bug as it does not hold a ref on the task struct as required
      by set_cpus_allowed_ptr().
      
      Just set the IRQTF_AFFINITY bit in action->thread_flags. The next time
      the thread runs it migrates itself. Solves all of the above problems
      nicely.
      
      Add kerneldoc to irq_set_thread_affinity() while at it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      591d2fb0
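
      A hedged sketch of the delegation, paraphrasing the 2.6.31-era
      manage.c (locking and the cpumask snapshot are simplified): the
      affinity setter only flags the irq threads, and each thread migrates
      itself the next time it runs, i.e. in sleepable context.

      /* Called with desc->lock held, possibly from hard interrupt context. */
      void irq_set_thread_affinity(struct irq_desc *desc)
      {
              struct irqaction *action = desc->action;

              while (action) {
                      if (action->thread)
                              set_bit(IRQTF_AFFINITY, &action->thread_flags);
                      action = action->next;
              }
      }

      /* Called from the irq thread loop: sleepable, no locks held. */
      static void irq_thread_check_affinity(struct irq_desc *desc,
                                            struct irqaction *action)
      {
              if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
                      return;

              /* the real code snapshots desc->affinity under desc->lock first */
              set_cpus_allowed_ptr(current, desc->affinity);
      }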
  20. 05 Jul 2009, 1 commit
  21. 12 Jun 2009, 3 commits
    • irq: slab alloc for default irq_affinity · 28be225b
      Yinghai Lu committed
      Ingo hit the following bootmem warning:
      
      [    0.000000] ------------[ cut here ]------------
      [    0.000000] WARNING: at mm/bootmem.c:537 alloc_arch_preferred_bootmem+0x2b/0x71()
      [    0.000000] Hardware name: System Product Name
      [    0.000000] Modules linked in:
      [    0.000000] Pid: 0, comm: swapper Tainted: G        W  2.6.30-tip-03087-g0bb2618-dirty #52506
      [    0.000000] Call Trace:
      [    0.000000]  [<81032588>] warn_slowpath_common+0x60/0x90
      [    0.000000]  [<810325c5>] warn_slowpath_null+0xd/0x10
      [    0.000000]  [<819d1bc0>] alloc_arch_preferred_bootmem+0x2b/0x71
      [    0.000000]  [<819d1c31>] ___alloc_bootmem_nopanic+0x2b/0x9a
      [    0.000000]  [<81050a0a>] ? lock_release+0xac/0xb2
      [    0.000000]  [<819d1d4c>] ___alloc_bootmem+0xe/0x2d
      [    0.000000]  [<819d1e9f>] __alloc_bootmem+0xa/0xc
      [    0.000000]  [<819d7c63>] alloc_bootmem_cpumask_var+0x21/0x26
      [    0.000000]  [<819d0cc8>] early_irq_init+0x15/0x10d
      [    0.000000]  [<819bb75a>] start_kernel+0x167/0x326
      [    0.000000]  [<819bb06b>] __init_begin+0x6b/0x70
      [    0.000000] ---[ end trace 4eaa2a86a8e2da23 ]---
      [    0.000000] NR_IRQS:2304 nr_irqs:424
      [    0.000000] CPU 0 irqstacks, hard=821e6000 soft=821e7000
      
      We need to update init_irq_default_affinity() to use the slab
      allocator as well.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      28be225b
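
      A hedged sketch of what the update amounts to, assuming the same
      GFP_NOWAIT slab approach used by the sibling patches in this series
      (the exact code may differ): the default affinity cpumask comes from
      the slab allocator instead of bootmem.

      static void __init init_irq_default_affinity(void)
      {
              /* slab is already up this early with the new slab bootstrap,
               * but interrupts are still off, hence GFP_NOWAIT */
              alloc_cpumask_var(&irq_default_affinity, GFP_NOWAIT);
              cpumask_setall(irq_default_affinity);
      }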
    • irq: use kcalloc() instead of the bootmem allocator · 22fb4e71
      Pekka Enberg committed
      Fixes the following problem:
      
      [    0.000000] Experimental hierarchical RCU init done.
      [    0.000000] NR_IRQS:4352 nr_irqs:256
      [    0.000000] ------------[ cut here ]------------
      [    0.000000] WARNING: at mm/bootmem.c:537 alloc_arch_preferred_bootmem+0x40/0x7e()
      [    0.000000] Hardware name: To Be Filled By O.E.M.
      [    0.000000] Pid: 0, comm: swapper Not tainted 2.6.30-tip-02161-g7a74539-dirty #59709
      [    0.000000] Call Trace:
      [    0.000000]  [<ffffffff823f8c8e>] ? alloc_arch_preferred_bootmem+0x40/0x7e
      [    0.000000]  [<ffffffff81067168>] warn_slowpath_common+0x88/0xcb
      [    0.000000]  [<ffffffff810671d2>] warn_slowpath_null+0x27/0x3d
      [    0.000000]  [<ffffffff823f8c8e>] alloc_arch_preferred_bootmem+0x40/0x7e
      [    0.000000]  [<ffffffff823f9307>] ___alloc_bootmem_nopanic+0x4e/0xec
      [    0.000000]  [<ffffffff823f93c5>] ___alloc_bootmem+0x20/0x61
      [    0.000000]  [<ffffffff823f962e>] __alloc_bootmem+0x1e/0x34
      [    0.000000]  [<ffffffff823f757c>] early_irq_init+0x6d/0x118
      [    0.000000]  [<ffffffff823e0140>] ? early_idt_handler+0x0/0x71
      [    0.000000]  [<ffffffff823e0cf7>] start_kernel+0x192/0x394
      [    0.000000]  [<ffffffff823e0140>] ? early_idt_handler+0x0/0x71
      [    0.000000]  [<ffffffff823e02ad>] x86_64_start_reservations+0xb4/0xcf
      [    0.000000]  [<ffffffff823e0000>] ? __init_begin+0x0/0x140
      [    0.000000]  [<ffffffff823e0420>] x86_64_start_kernel+0x158/0x17b
      [    0.000000] ---[ end trace a7919e7f17c0a725 ]---
      [    0.000000] Fast TSC calibration using PIT
      [    0.000000] Detected 2002.510 MHz processor.
      [    0.004000] Console: colour VGA+ 80x25
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      22fb4e71
    • irq/cpumask: make memoryless node zero happy · dad213ae
      Yinghai Lu committed
      Don't hardcode to node zero for early boot IRQ setup memory allocations.
      
      [ penberg@cs.helsinki.fi: minor cleanups ]
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      dad213ae
  22. 23 May 2009, 1 commit
    • sparseirq: Allow early irq_desc allocation · 948cd529
      Paul Mundt committed
      Presently non-legacy IRQs have their irq_desc allocated with
      kzalloc_node(). This assumes that all callers of irq_to_desc_alloc_node()
      will be sufficiently late in the boot process that kmalloc is available.
      
      While porting sparseirq support to sh this blew up immediately, as at the
      time that we register the CPU's interrupt vector map only bootmem is
      available. Check slab_is_available() to work out which path to use.
      
      [ Impact: fix SH early boot crash with sparseirq enabled ]
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      LKML-Reference: <20090522014008.GA2806@linux-sh.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      948cd529
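
      A hedged sketch of the check described above; the helper name is
      illustrative and the bootmem call follows the 2.6.30-era interface,
      but the shape matches the description: fall back to bootmem whenever
      the slab allocator is not up yet.

      static struct irq_desc *alloc_one_irq_desc(int node)
      {
              struct irq_desc *desc;

              if (slab_is_available())
                      desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
              else
                      desc = alloc_bootmem_node(NODE_DATA(node), sizeof(*desc));

              return desc;
      }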
  23. 13 May 2009, 1 commit
  24. 02 May 2009, 1 commit
    • x86/irq: use move_irq_desc() in create_irq_nr() · 15e957d0
      Yinghai Lu committed
      In create_irq_nr(), move_irq_desc() will try to move the irq_desc to
      its home node if the allocated one is not on the correct node.
      
      ( This can happen on devices that are on different nodes that
        are using MSI, when drivers are loaded and unloaded randomly. )
      
      v2: fix non-smp build
      v3: add NUMA_IRQ_DESC to eliminate #ifdefs
      
      [ Impact: improve irq descriptor locality on NUMA systems ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      LKML-Reference: <49F95EAE.2050903@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      15e957d0
  25. 01 May 2009, 1 commit
  26. 29 Apr 2009, 1 commit
    • tracing: fix build failure on s390 · a0e39ed3
      Heiko Carstens committed
      "tracing: create automated trace defines" causes this compile error on s390,
      as reported by Sachin Sant against linux-next:
      
       kernel/built-in.o: In function `__do_softirq':
       (.text+0x1c680): undefined reference to `__tracepoint_softirq_entry'
      
      This happens because the definitions of the softirq tracepoints were moved
      from kernel/softirq.c to kernel/irq/handle.c. Since s390 doesn't support
      generic hardirqs, handle.c doesn't get compiled and the definitions are
      missing.
      
      So move the tracepoints to softirq.c again.
      
      [ Impact: fix build failure on s390 ]
      Reported-by: Sachin Sant <sachinp@in.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: fweisbec@gmail.com
      LKML-Reference: <20090429135139.5fac79b8@osiris.boeblingen.de.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a0e39ed3
  27. 28 Apr 2009, 4 commits
    • x86/irq: change irq_desc_alloc() to take node instead of cpu · 85ac16d0
      Yinghai Lu committed
      This simplifies the node awareness of the code. All our allocators
      only deal with a NUMA node ID (locality), not with CPU ids anyway, so
      there's no need to maintain (and transform) a CPU id all across the
      IRQ layer.
      
      v2: keep move_irq_desc related
      
      [ Impact: cleanup, prepare IRQ code to be NUMA-aware ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      LKML-Reference: <49F65536.2020300@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      85ac16d0
    • irq: only update affinity if ->set_affinity() is successful · 57b150cc
      Yinghai Lu committed
      irq_set_affinity() and move_masked_irq() try to assign affinity
      before calling chip set_affinity(). Some archs are assigning it
      in ->set_affinity() again.
      
      We do something like:
      
       cpumask_copy(desc->affinity, mask);
       desc->chip->set_affinity(mask);
      
      But in the failure path, affinity should not be touched - otherwise
      we'll end up with a different affinity mask despite the failure to
      migrate the IRQ.
      
      So try to update the affinity only if set_affinity() returns 0.
      Also call irq_set_thread_affinity() accordingly.
      
      v2: update after "irq, x86: Remove IRQ_DISABLED check in process context IRQ move"
      v3: per Ingo, change set_affinity() in irq_chip to return int.
      v4: update comments by removing moving irq_desc code.
      
      [ Impact: fix /proc/irq/*/smp_affinity setting corner case bug ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      LKML-Reference: <49F65509.60307@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      57b150cc
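
      A hedged sketch of the corrected ordering (the int-returning chip
      ->set_affinity() is what v3 introduces; the thread-affinity helper's
      signature follows the commit-era code and may differ):

      if (!desc->chip->set_affinity(irq, cpumask)) {
              /* success: only now record the new mask */
              cpumask_copy(desc->affinity, cpumask);
              irq_set_thread_affinity(desc, cpumask);
      }
      /* on failure desc->affinity is left untouched */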
    • x86/irq: remove leftover code from NUMA_MIGRATE_IRQ_DESC · fcef5911
      Yinghai Lu committed
      The original feature of migrating irq_desc dynamically was too fragile
      and was causing problems: it caused crashes on systems with lots of
      cards with MSI-X when the user-space irq balancer was enabled.
      
      We now have new patches that create irq_desc according to the device's
      NUMA node. This patch removes the leftover bits of the dynamic balancer.
      
      [ Impact: remove dead code ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      LKML-Reference: <49F654AF.8000808@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fcef5911
    • irq, cpumask: correct CPUMASKS_OFFSTACK typo and fix fallout · 9ec4fa27
      Yinghai Lu committed
      CPUMASKS_OFFSTACK is not defined anywhere (it is CPUMASK_OFFSTACK).
      It is a typo, and init_alloc_desc_masks() is called before it sets the
      affinity to all cpus...
      
      Split init_alloc_desc_masks() into alloc_desc_masks() and init_desc_masks().
      
      Also use CPUMASK_OFFSTACK in alloc_desc_masks().
      
      [ Impact: fix smp_affinity copying/setup when moving irq_desc between CPUs ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      LKML-Reference: <49F6546E.3040406@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ec4fa27
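
      A hedged sketch of the split, assuming the 2.6.30-era cpumask API
      (exact signatures may differ): alloc_desc_masks() only allocates when
      CONFIG_CPUMASK_OFFSTACK is set, while init_desc_masks() does the
      "all cpus" initialization unconditionally, which avoids the ordering
      problem described above.

      static inline bool alloc_desc_masks(struct irq_desc *desc, int node, bool boot)
      {
      #ifdef CONFIG_CPUMASK_OFFSTACK
              if (boot) {
                      /* too early for the slab allocator */
                      alloc_bootmem_cpumask_var(&desc->affinity);
      #ifdef CONFIG_GENERIC_PENDING_IRQ
                      alloc_bootmem_cpumask_var(&desc->pending_mask);
      #endif
                      return true;
              }

              if (!alloc_cpumask_var_node(&desc->affinity, GFP_ATOMIC, node))
                      return false;
      #ifdef CONFIG_GENERIC_PENDING_IRQ
              if (!alloc_cpumask_var_node(&desc->pending_mask, GFP_ATOMIC, node)) {
                      free_cpumask_var(desc->affinity);
                      return false;
              }
      #endif
      #endif
              return true;
      }

      static inline void init_desc_masks(struct irq_desc *desc)
      {
              cpumask_setall(desc->affinity);
      #ifdef CONFIG_GENERIC_PENDING_IRQ
              cpumask_clear(desc->pending_mask);
      #endif
      }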
  28. 23 Apr 2009, 1 commit