1. 04 October 2010, 3 commits
  2. 29 July 2010, 1 commit
    • irq: Add new IRQ flag IRQF_NO_SUSPEND · 685fd0b4
      Committed by Ian Campbell
      A small number of users of IRQF_TIMER are using it for the implied no
      suspend behaviour on interrupts which are not timer interrupts.
      
      Therefore add a new IRQF_NO_SUSPEND flag, rename IRQF_TIMER to
      __IRQF_TIMER and redefine IRQF_TIMER in terms of these new flags.
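
      For illustration only, a minimal driver-side sketch (hypothetical
      device and names, not part of this commit) of requesting an
      interrupt that has to stay enabled across suspend:

      #include <linux/interrupt.h>

      static irqreturn_t evtchn_handler(int irq, void *dev_id)
      {
              /* non-timer event that must keep firing during suspend */
              return IRQ_HANDLED;
      }

      static int evtchn_setup(unsigned int irq, void *dev_id)
      {
              /* IRQF_NO_SUSPEND replaces the former abuse of IRQF_TIMER */
              return request_irq(irq, evtchn_handler, IRQF_NO_SUSPEND,
                                 "evtchn", dev_id);
      }
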
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: xen-devel@lists.xensource.com
      Cc: linux-input@vger.kernel.org
      Cc: linuxppc-dev@ozlabs.org
      Cc: devicetree-discuss@lists.ozlabs.org
      LKML-Reference: <1280398595-29708-1-git-send-email-ian.campbell@citrix.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  3. 09 June 2010, 1 commit
    • genirq: Deal with desc->set_type() changing desc->chip · 46732475
      Committed by Thomas Gleixner
      The set_type() function can change the chip implementation when the
      trigger mode changes. That might result in using a non-initialized
      irq chip when called from __setup_irq() or when called via
      set_irq_type() on an already enabled irq.

      The set_irq_type() function should not be called on an enabled irq,
      but because we forgot to put a check into it, a bunch of users grew
      the habit of doing exactly that. It never blew up, because the
      function is serialized via desc->lock against all users of
      desc->chip, so they never hit the non-initialized irq chip issue.
      
      The easy fix for the __setup_irq() issue would be to move the
      irq_chip_set_defaults(desc->chip) call after the trigger setting to
      make sure that a chip change is covered.
      
      But as we have already users, which do the type setting after
      request_irq(), the safe fix for now is to call irq_chip_set_defaults()
      from __irq_set_trigger() when desc->set_type() changed the irq chip.
      
      It needs a deeper analysis whether we should refuse to change the chip
      on an already enabled irq, but that'd be a large scale change to fix
      all the existing users. So that's neither stable nor 2.6.35 material.
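
      The idea in __irq_set_trigger(), sketched (simplified, not the
      literal hunk of this commit):

      struct irq_chip *chip = desc->chip;
      int ret;

      ret = chip->set_type(irq, flags & IRQF_TRIGGER_MASK);
      if (!ret) {
              /* set_type() may have installed a different irq chip */
              if (desc->chip != chip)
                      irq_chip_set_defaults(desc->chip);
      }
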
      Reported-by: Esben Haabendal <eha@doredevelopment.dk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@ozlabs.org>
      Cc: stable@kernel.org
  4. 03 May 2010, 1 commit
    • genirq: Add CPU mask affinity hint · e7a297b0
      Committed by Peter P Waskiewicz Jr
      This patch adds a cpumask affinity hint to the irq_desc structure,
      along with a registration function and a read-only proc entry for each
      interrupt.
      
      The affinity_hint for each interrupt can be used by underlying
      drivers that need a better mechanism to control interrupt affinity.
      The driver registers a cpumask for the interrupt, which lets it
      expose the preferred CPU mask for the interrupt to anything that
      requests it. The intent is to extend the userspace irqbalance
      daemon so that it can take this hint into account when balancing
      the interrupt.
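
      A possible driver-side use (a sketch with hypothetical multiqueue
      NIC helpers; the registration function added here is
      irq_set_affinity_hint()):

      #include <linux/interrupt.h>
      #include <linux/cpumask.h>

      /* hint that queue i prefers CPU i; userspace can read it back
       * via /proc/irq/<irq>/affinity_hint */
      static void nic_hint_queue_cpu(unsigned int irq, unsigned int queue)
      {
              irq_set_affinity_hint(irq, cpumask_of(queue % num_online_cpus()));
      }

      static void nic_teardown_queue(unsigned int irq, void *dev_id)
      {
              /* clear the hint before freeing the interrupt */
              irq_set_affinity_hint(irq, NULL);
              free_irq(irq, dev_id);
      }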
      
      [ tglx: Fixed compile warnings, added WARN_ON, made SMP only ]
      Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
      Cc: davem@davemloft.net
      Cc: arjan@linux.jf.intel.com
      Cc: bhutchings@solarflare.com
      LKML-Reference: <20100430214445.3992.41647.stgit@ppwaskie-hc2.jf.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  5. 13 April 2010, 2 commits
    • genirq: Remove IRQF_DISABLED from core code · 6932bf37
      Committed by Thomas Gleixner
      Remove all code which is related to IRQF_DISABLED from the core kernel
      code. IRQF_DISABLED still exists as a flag, but becomes a NOOP and
      will be removed after a grace period. That way we can easily revert to
      the previous behaviour by just restoring the core code.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Linus Torvalds <torvalds@osdl.org>
      LKML-Reference: <20100326000405.991244690@linutronix.de>
    • genirq: Introduce request_any_context_irq() · ae731f8d
      Committed by Marc Zyngier
      Now that we enjoy threaded interrupts, we're starting to see irq_chip
      implementations (wm831x, pca953x) that make use of threaded interrupts
      for the controller, and nested interrupts for the client interrupt. It
      all works very well, with one drawback:
      
      Drivers requesting an IRQ must now know whether the handler will
      run in a thread context or not, and call request_threaded_irq() or
      request_irq() accordingly.
      
      The problem is that the requesting driver sometimes doesn't know
      about the nature of the interrupt, especially when the interrupt
      controller is a discrete chip (typically a GPIO expander connected
      over I2C) that can be connected to a wide variety of otherwise
      perfectly supported hardware.
      
      This patch introduces the request_any_context_irq() function that mostly
      mimics the usual request_irq(), except that it checks whether the irq
      level is configured as nested or not, and calls the right backend.
      On success, it also returns either IRQC_IS_HARDIRQ or IRQC_IS_NESTED.
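
      A sketch of the intended use (hypothetical GPIO-expander client
      driver, not part of this commit):

      #include <linux/interrupt.h>

      static irqreturn_t client_handler(int irq, void *dev_id)
      {
              /* may run in hardirq or in thread context; we don't care */
              return IRQ_HANDLED;
      }

      static int client_request(unsigned int irq, void *dev_id)
      {
              int ret;

              ret = request_any_context_irq(irq, client_handler,
                                            IRQF_TRIGGER_FALLING,
                                            "client", dev_id);
              if (ret < 0)
                      return ret;

              /* ret is IRQC_IS_HARDIRQ or IRQC_IS_NESTED on success */
              return 0;
      }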
      
      [ tglx: Made return value an enum, simplified code and made the export
        	of request_any_context_irq GPL ]
      Signed-off-by: Marc Zyngier <maz@misterjones.org>
      Cc: <joachim.eastwood@jotron.com>
      LKML-Reference: <927ea285bd0c68934ddae1a47e44a9ba@localhost>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  6. 31 March 2010, 1 commit
    • genirq: Force MSI irq handlers to run with interrupts disabled · 753649db
      Committed by Thomas Gleixner
      Network folks reported that directing all MSI-X vectors of their multi
      queue NICs to a single core can cause interrupt stack overflows when
      enough interrupts fire at the same time.
      
      This is caused by the fact that we run interrupt handlers by default
      with interrupts enabled unless the driver requests the interrupt
      with the IRQF_DISABLED flag set. The NIC handlers do not set this
      flag, so simultaneous interrupts can nest without limit and overflow
      the stack.

      The only safe countermeasure is to run the interrupt handlers with
      interrupts disabled. We can't switch to this mode in general right
      now, but it is safe to do so for MSI interrupts.
      
      Force IRQF_DISABLED for MSI interrupt handlers.
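
      The enforcement boils down to a check along these lines in
      __setup_irq() (a sketch, not necessarily the literal hunk of this
      commit):

      /*
       * MSI based interrupts are per se not shared, so there is no
       * reason to let their handlers run with interrupts enabled.
       */
      if (desc->msi_desc)
              new->flags |= IRQF_DISABLED;
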
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Linus Torvalds <torvalds@osdl.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: David Miller <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: stable@kernel.org
  7. 24 March 2010, 1 commit
  8. 11 March 2010, 1 commit
    • genirq: Prevent oneshot irq thread race · 0b1adaa0
      Committed by Thomas Gleixner
      Lars-Peter pointed out that the oneshot threaded interrupt handler
      code has the following race:
      
       CPU0                            CPU1
       handle_level_irq(irq X)
         mask_ack_irq(irq X)
         handle_IRQ_event(irq X)
           wake_up(thread_handler)
                                       thread handler(irq X) runs
                                       finalize_oneshot(irq X)
                                         does not unmask due to
                                         !(desc->status & IRQ_MASKED)

       return from irq
       does not unmask due to
       (desc->status & IRQ_ONESHOT)

      This leaves the interrupt line masked forever.
      
      The reason for this is the inconsistent handling of the IRQ_MASKED
      flag. Instead of setting it in the mask function the oneshot support
      sets the flag after waking up the irq thread.
      
      The solution for this is to set/clear the IRQ_MASKED status whenever
      we mask/unmask an interrupt line. That's the easy part, but that
      cleanup opens another race:
      
       CPU0                            CPU1
       handle_level_irq(irq)
         mask_ack_irq(irq)
         handle_IRQ_event(irq)
           wake_up(thread_handler)
                                       thread handler(irq) runs
                                       finalize_oneshot_irq(irq)
                                       unmask(irq)
           irq triggers again
           handle_level_irq(irq)
             mask_ack_irq(irq)
           return from irq due to IRQ_INPROGRESS

       return from irq
       does not unmask due to
       (desc->status & IRQ_ONESHOT)
      
      This requires that we synchronize finalize_oneshot_irq() with the
      primary handler. If IRQ_INPROGESS is set we wait until the primary
      handler on the other CPU has returned before unmasking the interrupt
      line again.
      
      We probably have never seen that problem because it does not happen on
      UP and on SMP the irqbalancer protects us by pinning the primary
      handler and the thread to the same CPU.
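
      The resulting finalize/unmask path, sketched and simplified from
      the actual fix:

      static void irq_finalize_oneshot(unsigned int irq, struct irq_desc *desc)
      {
      again:
              raw_spin_lock_irq(&desc->lock);
              /*
               * Wait until the primary handler on the other CPU has
               * returned, otherwise we would unmask behind its back.
               */
              if (desc->status & IRQ_INPROGRESS) {
                      raw_spin_unlock_irq(&desc->lock);
                      cpu_relax();
                      goto again;
              }
              if (!(desc->status & IRQ_DISABLED) && (desc->status & IRQ_MASKED)) {
                      desc->status &= ~IRQ_MASKED;
                      desc->chip->unmask(irq);
              }
              raw_spin_unlock_irq(&desc->lock);
      }
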
      Reported-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@kernel.org
  9. 15 December 2009, 1 commit
  10. 18 August 2009, 1 commit
  11. 17 August 2009, 3 commits
    • genirq: Support nested threaded irq handling · 399b5da2
      Committed by Thomas Gleixner
      Interrupt chips which are behind a slow bus (i2c, spi ...) and
      demultiplex other interrupt sources need to run their interrupt
      handler in a thread. 
      
      The demultiplexed interrupt handlers need to run in thread context as
      well and need to finish before the demux handler thread can reenable
      the interrupt line. So the easiest way is to run the sub device
      handlers in the context of the demultiplexing handler thread.
      
      To avoid creating a separate thread for the subdevices, the
      function set_irq_nested_thread() is provided, which sets the
      IRQ_NESTED_THREAD flag in the interrupt descriptor.
      
      A driver which calls request_threaded_irq() need not be aware of
      the fact that the threaded handler is called in the context of the
      demultiplexing handler thread. The setup code checks the
      IRQ_NESTED_THREAD flag, which was set by the irq chip setup code,
      and does not set up a separate thread for the interrupt. The
      primary handler function provided by the device driver is replaced
      by an internal dummy function which warns when it is called.
      
      For the demultiplexing handler a helper function, handle_nested_irq(),
      is provided. It calls the thread function of the demultiplexed
      interrupt in the context of the caller, does the proper interrupt
      accounting, and takes the disabled state of the demultiplexed
      subdevice interrupt into account.
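
      A sketch of a demultiplexing thread handler using the new helper
      (hypothetical I2C expander driver; expander_read_status() and the
      struct fields are made-up names):

      static irqreturn_t expander_demux_thread(int irq, void *data)
      {
              struct expander *chip = data;
              unsigned long pending = expander_read_status(chip); /* may sleep */
              unsigned int bit;

              for (bit = 0; bit < chip->nr_irqs; bit++)
                      if (pending & (1UL << bit))
                              handle_nested_irq(chip->irq_base + bit);

              return IRQ_HANDLED;
      }
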
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
    • genirq: Add buslock support · 70aedd24
      Committed by Thomas Gleixner
      Some interrupt chips are connected to a "slow" bus (i2c, spi ...).
      The bus access needs to sleep and therefore cannot happen in atomic
      context.
      
      Some of the generic interrupt management functions like disable_irq(),
      enable_irq() ... call interrupt chip functions with the irq_desc->lock
      held and interrupts disabled. This does not work for such devices.
      
      Provide a separate synchronization mechanism for such interrupt
      chips. The irq_chip structure is extended by two optional callbacks,
      bus_lock and bus_sync_unlock.

      The idea is to serialize the bus access for those operations in the
      core code, so that drivers behind such a bus-operated interrupt
      controller do not have to worry about it and can just use the
      normal interfaces.
      
      bus_lock() is called to serialize access to the interrupt controller
      bus.
      
      Now the core code can issue chip->mask/unmask ... commands without
      changing the fast path code at all. The chip implementation merely
      stores that information in a chip private data structure and
      returns. There is no bus interaction, as these functions are called
      from atomic context.
      
      After that bus_sync_unlock() is called outside the atomic context. Now
      the chip implementation issues the bus commands, waits for completion
      and unlocks the interrupt controller bus.
      
      The irq_chip implementation as pseudo code:
      
      struct irq_chip_data {
             struct mutex   mutex;
             unsigned int   irq_offset;
             unsigned long  mask;
             unsigned long  mask_status;
      };
      
      static void bus_lock(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              mutex_lock(&data->mutex);
      }
      
      static void mask(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              irq -= data->irq_offset;
              data->mask |= (1 << irq);
      }
      
      static void unmask(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              irq -= data->irq_offset;
              data->mask &= ~(1 << irq);
      }
      
      static void bus_sync_unlock(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              if (data->mask != data->mask_status) {
                      do_bus_magic_to_set_mask(data->mask);
                      data->mask_status = data->mask;
              }
              mutex_unlock(&data->mutex);
      }
      
      The device drivers can use request_threaded_irq(), free_irq(),
      disable_irq() and enable_irq() as usual, with the only restriction
      that the calls need to come from non-atomic context.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
    • genirq: Add oneshot support · b25c340c
      Committed by Thomas Gleixner
      For threaded interrupt handlers we expect the hard interrupt handler
      part to mask the interrupt on the originating device. The interrupt
      line itself is reenabled after the hard interrupt handler has
      executed.
      
      This requires access to the originating device from hard interrupt
      context which is not always possible. There are devices which can only
      be accessed via a bus (i2c, spi, ...). The bus access requires thread
      context. For such devices we need to keep the interrupt line masked
      until the threaded handler has executed.
      
      Add a new flag IRQF_ONESHOT which allows drivers to request that the
      interrupt is not unmasked after the hard interrupt context handler has
      been executed and the thread has been woken. The interrupt line is
      unmasked after the thread handler function has been executed.
      
      Note that for now IRQF_ONESHOT cannot be used with IRQF_SHARED to
      avoid complex accounting mechanisms.
      
      For oneshot interrupts the primary handler simply returns
      IRQ_WAKE_THREAD and does nothing else. A generic implementation
      irq_default_primary_handler() is provided to avoid useless copies all
      over the place. It is automatically installed when
      request_threaded_irq() is called with handler=NULL and
      thread_fn!=NULL.
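
      A typical request for such a device (a sketch with hypothetical
      names):

      #include <linux/interrupt.h>

      /* runs in thread context and may sleep for i2c/spi access */
      static irqreturn_t sensor_thread_fn(int irq, void *dev_id)
      {
              /* read and acknowledge the interrupt source on the device */
              return IRQ_HANDLED;
      }

      static int sensor_setup_irq(unsigned int irq, void *dev_id)
      {
              /*
               * handler == NULL installs irq_default_primary_handler(),
               * IRQF_ONESHOT keeps the line masked until the thread returns.
               */
              return request_threaded_irq(irq, NULL, sensor_thread_fn,
                                          IRQF_ONESHOT | IRQF_TRIGGER_LOW,
                                          "sensor", dev_id);
      }
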
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
  12. 14 August 2009, 1 commit
    • genirq: prevent wakeup of freed irq thread · 2d860ad7
      Committed by Linus Torvalds
      free_irq() can remove an irqaction while the corresponding interrupt
      is in progress, but free_irq() sets action->thread to NULL
      unconditionally, which might lead to a NULL pointer dereference in
      handle_IRQ_event() when the hard interrupt context tries to wake up
      the handler thread.
      
      Prevent this by moving the thread stop after synchronize_irq(). No
      need to set action->thread to NULL either as action is going to be
      freed anyway.
      
      This fixes a boot crash reported against preempt-rt which uses the
      mainline irq threads code to implement full irq threading.
      
      [ tglx: removed local irqthread variable ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  13. 23 July 2009, 1 commit
    • genirq: Fix UP compile failure caused by irq_thread_check_affinity · 61f38261
      Committed by Bruno Premont
      Since commit 591d2fb0 ("genirq: Delegate irq affinity setting to
      the irq thread"), compilation with CONFIG_SMP=n fails with the
      following error:
      
      /usr/src/linux-2.6/kernel/irq/manage.c:
         In function 'irq_thread_check_affinity':
      /usr/src/linux-2.6/kernel/irq/manage.c:475:
         error: 'struct irq_desc' has no member named 'affinity'
      make[4]: *** [kernel/irq/manage.o] Error 1
      
      That commit adds a new function, irq_thread_check_affinity(), which
      uses struct irq_desc.affinity, a member that is only available with
      CONFIG_SMP=y. Move the function under #ifdef CONFIG_SMP.
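
      The shape of the fix (a sketch; the SMP body is elided here):

      #ifdef CONFIG_SMP
      static void irq_thread_check_affinity(struct irq_desc *desc,
                                            struct irqaction *action)
      {
              /* migrate the thread to desc->affinity, which only
               * exists with CONFIG_SMP=y */
      }
      #else
      static inline void irq_thread_check_affinity(struct irq_desc *desc,
                                                   struct irqaction *action) { }
      #endif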
      
      [ tglx@brownpaperbag: compile and boot tested on UP and SMP ]
      Signed-off-by: Bruno Premont <bonbons@linux-vserver.org>
      LKML-Reference: <20090722222232.2eb3e1c4@neptune.home>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  14. 21 July 2009, 1 commit
    • genirq: Delegate irq affinity setting to the irq thread · 591d2fb0
      Committed by Thomas Gleixner
      irq_set_thread_affinity() calls set_cpus_allowed_ptr() which might
      sleep, but irq_set_thread_affinity() is called with desc->lock held
      and can be called from hard interrupt context as well. The code has
      another bug as it does not hold a ref on the task struct as required
      by set_cpus_allowed_ptr().
      
      Just set the IRQTF_AFFINITY bit in action->thread_flags. The next time
      the thread runs it migrates itself. Solves all of the above problems
      nicely.
      
      Add kerneldoc to irq_set_thread_affinity() while at it.
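
      The mechanism, sketched and simplified from the actual patch:

      /* called with desc->lock held, possibly from hard interrupt context */
      void irq_set_thread_affinity(struct irq_desc *desc)
      {
              struct irqaction *action = desc->action;

              while (action) {
                      if (action->thread)
                              set_bit(IRQTF_AFFINITY, &action->thread_flags);
                      action = action->next;
              }
      }

      /* executed by the irq thread itself, outside any atomic section */
      static void irq_thread_check_affinity(struct irq_desc *desc,
                                            struct irqaction *action)
      {
              if (test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
                      set_cpus_allowed_ptr(current, desc->affinity);
      }
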
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
  15. 13 May 2009, 1 commit
  16. 28 April 2009, 1 commit
    • irq: only update affinity if ->set_affinity() is successful · 57b150cc
      Committed by Yinghai Lu
      irq_set_affinity() and move_masked_irq() try to assign affinity
      before calling chip set_affinity(). Some archs are assigning it
      in ->set_affinity() again.
      
      We do something like:
      
       cpumask_copy(desc->affinity, mask);
       desc->chip->set_affinity(mask);
      
      But in the failure path, affinity should not be touched - otherwise
      we'll end up with a different affinity mask despite the failure to
      migrate the IRQ.
      
      So try to update the affinity only if set_affinity() returns 0.
      Also call irq_set_thread_affinity() accordingly.
      
      v2: update after "irq, x86: Remove IRQ_DISABLED check in process context IRQ move"
      v3: according to Ingo, change set_affinity() in irq_chip should return int.
      v4: update comments by removing moving irq_desc code.
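
      The fixed sequence, sketched (only commit the new mask when the
      chip callback succeeds):

       if (!desc->chip->set_affinity(irq, mask)) {
               cpumask_copy(desc->affinity, mask);
               irq_set_thread_affinity(desc, mask);
       }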
      
      [ Impact: fix /proc/irq/*/smp_affinity setting corner case bug ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      LKML-Reference: <49F65509.60307@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  17. 23 April 2009, 1 commit
  18. 14 April 2009, 1 commit
    • x86, irq: Remove IRQ_DISABLED check in process context IRQ move · 6ec3cfec
      Committed by Pallipadi, Venkatesh
      As discussed in the thread here:
      
        http://marc.info/?l=linux-kernel&m=123964468521142&w=2
      
      Eric W. Biederman observed:
      
      > It looks like some additional bugs have slipped in since last I looked.
      >
      > set_irq_affinity does this:
      > #ifdef CONFIG_GENERIC_PENDING_IRQ
      >        if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
      >                cpumask_copy(desc->affinity, cpumask);
      >                desc->chip->set_affinity(irq, cpumask);
      >        } else {
      >                desc->status |= IRQ_MOVE_PENDING;
      >                cpumask_copy(desc->pending_mask, cpumask);
      >        }
      > #else
      >
      > That IRQ_DISABLED case is a software state and as such it has nothing to
      > do with how safe it is to move an irq in process context.
      
      [...]
      
      >
      > The only reason we migrate MSIs in interrupt context today is that there
      > wasn't infrastructure for support migration both in interrupt context
      > and outside of it.
      
      Yes. The idea here was to force the MSI migration to happen in process
      context. One of the patches in the series did
      
              disable_irq(dev->irq);
              irq_set_affinity(dev->irq, cpumask_of(dev->cpu));
              enable_irq(dev->irq);
      
      with the above patch adding irq/manage code check for interrupt disabled
      and moving the interrupt in process context.
      
      IIRC, there was no IRQ_MOVE_PCNTXT when we were developing this HPET
      code and we ended up having this ugly hack. IRQ_MOVE_PCNTXT was there
      when we eventually submitted the patch upstream. But, looks like I did a
      blind rebasing instead of using IRQ_MOVE_PCNTXT in hpet MSI code.
      
      The patch below fixes this, i.e., it reverts commit 932775a4 and
      adds IRQ_MOVE_PCNTXT to the HPET MSI setup. It also removes the
      copying of desc->affinity in the generic code, as the set_affinity
      routines do that internally.
      Reported-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: NVenkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Acked-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Cc: "Li Shaohua" <shaohua.li@intel.com>
      Cc: Gary Hade <garyhade@us.ibm.com>
      Cc: "lcm@us.ibm.com" <lcm@us.ibm.com>
      Cc: suresh.b.siddha@intel.com
      LKML-Reference: <20090413222058.GB8211@linux-os.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 31 March 2009, 1 commit
    • PM: Introduce functions for suspending and resuming device interrupts · 0a0c5168
      Committed by Rafael J. Wysocki
      Introduce helper functions allowing us to prevent device drivers from
      getting any interrupts (without disabling interrupts on the CPU)
      during suspend (or hibernation) and to make them start to receive
      interrupts again during the subsequent resume.  These functions make it
      possible to keep timer interrupts enabled while the "late" suspend and
      "early" resume callbacks provided by device drivers are being
      executed.  In turn, this allows device drivers' "late" suspend and
      "early" resume callbacks to sleep, execute ACPI callbacks etc.
      
      The functions introduced here will be used to rework the handling of
      interrupts during suspend (hibernation) and resume.  Namely,
      interrupts will only be disabled on the CPU right before suspending
      sysdevs, while device drivers will be prevented from receiving
      interrupts, with the help of the new helper function, before their
      "late" suspend callbacks run (and analogously during resume).
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Ingo Molnar <mingo@elte.hu>
  20. 24 March 2009, 2 commits
    • genirq: threaded irq handlers review fixups · f48fe81e
      Committed by Thomas Gleixner
      Delta patch to address the review comments.
      
            - Implement warning when IRQ_WAKE_THREAD is requested and no
              thread handler installed
            - coding style fixes
      Pointed-out-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • genirq: add threaded interrupt handler support · 3aa551c9
      Committed by Thomas Gleixner
      Add support for threaded interrupt handlers:
      
      A device driver can request that its main interrupt handler run in
      a thread. To achieve this, the device driver requests the interrupt
      with request_threaded_irq() and provides, in addition to the
      handler, a thread function. The handler function is called in hard
      interrupt context and needs to check whether the interrupt
      originated from the device. If the interrupt originated from the
      device, the handler can return either IRQ_HANDLED or
      IRQ_WAKE_THREAD. IRQ_HANDLED is returned when no further action is
      required. IRQ_WAKE_THREAD causes the genirq code to invoke the
      threaded (main) handler. When IRQ_WAKE_THREAD is returned, the
      handler must have disabled the interrupt at the device level. This
      is mandatory for shared interrupt handlers, but we need to do it as
      well for obscure x86 hardware where disabling an interrupt at the
      IO_APIC level redirects the interrupt to the legacy PIC interrupt
      lines.
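
      A usage sketch (hypothetical device; the foo_* helpers are made-up
      names):

      #include <linux/interrupt.h>

      static irqreturn_t foo_quick_check(int irq, void *dev_id)
      {
              struct foo *dev = dev_id;

              if (!foo_irq_pending(dev))
                      return IRQ_NONE;        /* not ours (shared line) */

              foo_mask_device_irq(dev);       /* disable at device level */
              return IRQ_WAKE_THREAD;         /* run foo_thread_fn() */
      }

      static irqreturn_t foo_thread_fn(int irq, void *dev_id)
      {
              struct foo *dev = dev_id;

              foo_do_slow_work(dev);          /* may sleep */
              foo_unmask_device_irq(dev);
              return IRQ_HANDLED;
      }

      /* in the driver's probe path: */
      err = request_threaded_irq(irq, foo_quick_check, foo_thread_fn,
                                 IRQF_SHARED, "foo", dev);
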
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
  21. 13 March 2009, 2 commits
  22. 12 March 2009, 3 commits
  23. 18 February 2009, 2 commits
  24. 15 February 2009, 2 commits
  25. 13 February 2009, 1 commit
  26. 09 February 2009, 1 commit
  27. 28 January 2009, 2 commits
  28. 12 January 2009, 1 commit
    • cpumask: update irq_desc to use cpumask_var_t · 7f7ace0c
      Committed by Mike Travis
      Impact: reduce memory usage, use new cpumask API.
      
      Replace the affinity and pending_masks with cpumask_var_t's.  This adds
      to the significant size reduction done with the SPARSE_IRQS changes.
      
      The added functions (init_alloc_desc_masks & init_copy_desc_masks)
      are in the include file so they can be inlined (and optimized out
      for the !CONFIG_CPUMASK_OFFSTACK case).  [Naming chosen to be
      consistent with the other init*irq functions, as well as the
      backwards arg order of "from, to" instead of the more common
      "to, from" standard.]
      
      Includes a slight change to the declaration of struct irq_desc to
      embed the pending_mask within #ifdef CONFIG_SMP, to be consistent
      with other references, and some small changes to Xen.
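
      The allocation pattern irq_desc moves to, shown as a generic
      cpumask_var_t sketch (not the irq-specific helpers themselves):

      #include <linux/cpumask.h>
      #include <linux/slab.h>

      struct some_desc {
              cpumask_var_t affinity;  /* pointer if CPUMASK_OFFSTACK=y,
                                          embedded array otherwise */
      };

      static int some_desc_init(struct some_desc *d, int node)
      {
              if (!alloc_cpumask_var_node(&d->affinity, GFP_KERNEL, node))
                      return -ENOMEM;
              cpumask_setall(d->affinity);
              return 0;
      }

      static void some_desc_free(struct some_desc *d)
      {
              free_cpumask_var(d->affinity);
      }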
      
      Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: virtualization@lists.osdl.org
      Cc: xen-devel@lists.xensource.com
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>