1. 31 Mar, 2010 (1 commit)
    • genirq: Force MSI irq handlers to run with interrupts disabled · 753649db
      Thomas Gleixner committed
      Network folks reported that directing all MSI-X vectors of their multi
      queue NICs to a single core can cause interrupt stack overflows when
      enough interrupts fire at the same time.
      
      This is caused by the fact that we run interrupt handlers by default
      with interrupts enabled unless the driver requests the interrupt with
      the IRQF_DISABLED flag set. The NIC handlers do not set this flag, so
      simultaneous interrupts can nest without limit and cause the stack to
      overflow.
      
      The only safe countermeasure is to run the interrupt handlers with
      interrupts disabled. We can't switch to this mode in general right
      now, but it is safe to do so for MSI interrupts.
      
      Force IRQF_DISABLED for MSI interrupt handlers.
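      
      A minimal sketch of the idea, assuming the descriptor carries an
      msi_desc pointer as in this era's struct irq_desc (the exact hunk may
      differ):
      
             /* In __setup_irq(): MSI backed interrupts always run with
              * interrupts disabled. */
             if (desc->msi_desc)
                     new->flags |= IRQF_DISABLED;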
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Linus Torvalds <torvalds@osdl.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: David Miller <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: stable@kernel.org
      753649db
  2. 24 Mar, 2010 (1 commit)
  3. 11 Mar, 2010 (1 commit)
    • genirq: Prevent oneshot irq thread race · 0b1adaa0
      Thomas Gleixner committed
      Lars-Peter pointed out that the oneshot threaded interrupt handler
      code has the following race:
      
       CPU0                            CPU1
       handle_level_irq(irq X)
         mask_ack_irq(irq X)
         handle_IRQ_event(irq X)
           wake_up(thread_handler)
                                       thread handler(irq X) runs
                                       finalize_oneshot(irq X)
      				  does not unmask due to 
      				  !(desc->status & IRQ_MASKED)
      
       return from irq
       does not unmask due to
       (desc->status & IRQ_ONESHOT)
        				  
      This leaves the interrupt line masked forever. 
      
      The reason for this is the inconsistent handling of the IRQ_MASKED
      flag. Instead of setting it in the mask function the oneshot support
      sets the flag after waking up the irq thread.
      
      The solution for this is to set/clear the IRQ_MASKED status whenever
      we mask/unmask an interrupt line. That's the easy part, but that
      cleanup opens another race:
      
       CPU0                            CPU1
       handle_level_irq(irq)
         mask_ack_irq(irq)
         handle_IRQ_event(irq)
           wake_up(thread_handler)
                                       thread handler(irq) runs
                                       finalize_oneshot_irq(irq)
      				  unmask(irq)
           irq triggers again
           handle_level_irq(irq)
             mask_ack_irq(irq)
           return from irq due to IRQ_INPROGRESS				  
      
       return from irq
       does not unmask due to
       (desc->status & IRQ_ONESHOT)
      
      This requires that we synchronize finalize_oneshot_irq() with the
      primary handler. If IRQ_INPROGESS is set we wait until the primary
      handler on the other CPU has returned before unmasking the interrupt
      line again.
      
      We probably have never seen that problem because it does not happen on
      UP and on SMP the irqbalancer protects us by pinning the primary
      handler and the thread to the same CPU.
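      
      A rough sketch of the resulting finalize_oneshot() logic, assuming
      the desc->status / desc->chip->unmask() layout described above (the
      real code differs in detail, e.g. it also handles the buslock case):
      
             static void finalize_oneshot(unsigned int irq, struct irq_desc *desc)
             {
                     if (!(desc->status & IRQ_ONESHOT))
                             return;
      
                     raw_spin_lock_irq(&desc->lock);
      
                     /* Wait until the primary handler on the other CPU is done. */
                     while (desc->status & IRQ_INPROGRESS) {
                             raw_spin_unlock_irq(&desc->lock);
                             cpu_relax();
                             raw_spin_lock_irq(&desc->lock);
                     }
      
                     if (desc->status & IRQ_MASKED) {
                             desc->status &= ~IRQ_MASKED;
                             desc->chip->unmask(irq);
                     }
                     raw_spin_unlock_irq(&desc->lock);
             }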
      Reported-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@kernel.org
      0b1adaa0
  4. 15 Dec, 2009 (1 commit)
  5. 18 Aug, 2009 (1 commit)
  6. 17 Aug, 2009 (3 commits)
    • genirq: Support nested threaded irq handling · 399b5da2
      Thomas Gleixner committed
      Interrupt chips which are behind a slow bus (i2c, spi ...) and
      demultiplex other interrupt sources need to run their interrupt
      handler in a thread. 
      
      The demultiplexed interrupt handlers need to run in thread context as
      well and need to finish before the demux handler thread can reenable
      the interrupt line. So the easiest way is to run the sub device
      handlers in the context of the demultiplexing handler thread.
      
      To avoid that a separate thread is created for the subdevices the
      function set_nested_irq_thread() is provided which sets the
      IRQ_NESTED_THREAD flag in the interrupt descriptor.
      
      A driver which calls request_threaded_irq() need not be aware that
      the threaded handler is called in the context of the
      demultiplexing handler thread. The setup code checks the
      IRQ_NESTED_THREAD flag which was set from the irq chip setup code and
      does not setup a separate thread for the interrupt. The primary
      function which is provided by the device driver is replaced by an
      internal dummy function which warns when it is called.
      
      For the demultiplexing handler a helper function handle_nested_irq()
      is provided which calls the demux interrupt thread function in the
      context of the caller and does the proper interrupt accounting and
      takes the interrupt disabled status of the demultiplexed subdevice
      into account.
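      
      A hedged sketch of what such a demultiplexing thread handler might
      look like; the chip accessor (my_chip_read_pending()) and the
      irq_base bookkeeping are invented for illustration:
      
             static irqreturn_t demux_irq_thread(int irq, void *dev_id)
             {
                     struct my_chip *chip = dev_id;
                     unsigned int pending, bit;
      
                     /* Reading the status register sleeps (i2c/spi access). */
                     pending = my_chip_read_pending(chip);
      
                     for (bit = 0; bit < chip->nr_irqs; bit++) {
                             if (pending & (1 << bit))
                                     handle_nested_irq(chip->irq_base + bit);
                     }
      
                     return IRQ_HANDLED;
             }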
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
      399b5da2
    • genirq: Add buslock support · 70aedd24
      Thomas Gleixner committed
      Some interrupt chips are connected to a "slow" bus (i2c, spi ...). The
      bus access needs to sleep and therefore cannot be called in atomic
      contexts.
      
      Some of the generic interrupt management functions like disable_irq(),
      enable_irq() ... call interrupt chip functions with the irq_desc->lock
      held and interrupts disabled. This does not work for such devices.
      
      Provide a separate synchronization mechanism for such interrupt
      chips. The irq_chip structure is extended by two optional functions
      (bus_lock and bus_sync_unlock).
      
      The idea is to serialize the bus access for those operations in the
      core code so that drivers which are behind that bus operated interrupt
      controller do not have to worry about it and just can use the normal
      interfaces. To achieve this we add two function pointers to the
      irq_chip: bus_lock and bus_sync_unlock.
      
      bus_lock() is called to serialize access to the interrupt controller
      bus.
      
      Now the core code can issue chip->mask/unmask ... commands without
      changing the fast path code at all. The chip implementation merely
      stores that information in a chip private data structure and
      returns. No bus interaction as these functions are called from atomic
      context.
      
      After that bus_sync_unlock() is called outside the atomic context. Now
      the chip implementation issues the bus commands, waits for completion
      and unlocks the interrupt controller bus.
      
      The irq_chip implementation as pseudo code:
      
      struct irq_chip_data {
             struct mutex   mutex;
             unsigned int   irq_offset;
             unsigned long  mask;
             unsigned long  mask_status;
       };
      
      static void bus_lock(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              mutex_lock(&data->mutex);
      }
      
      static void mask(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              irq -= data->irq_offset;
              data->mask |= (1 << irq);
      }
      
      static void unmask(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              irq -= data->irq_offset;
              data->mask &= ~(1 << irq);
      }
      
      static void bus_sync_unlock(unsigned int irq)
      {
              struct irq_chip_data *data = get_irq_desc_chip_data(irq);
      
              if (data->mask != data->mask_status) {
                      do_bus_magic_to_set_mask(data->mask);
                      data->mask_status = data->mask;
              }
              mutex_unlock(&data->mutex);
      }
      
      The device drivers can use request_threaded_irq, free_irq, disable_irq
      and enable_irq as usual with the only restriction that the calls need
      to come from non-atomic context.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
      70aedd24
    • genirq: Add oneshot support · b25c340c
      Thomas Gleixner committed
      For threaded interrupt handlers we expect the hard interrupt handler
      part to mask the interrupt on the originating device. The interrupt
      line itself is reenabled after the hard interrupt handler has
      executed.
      
      This requires access to the originating device from hard interrupt
      context which is not always possible. There are devices which can only
      be accessed via a bus (i2c, spi, ...). The bus access requires thread
      context. For such devices we need to keep the interrupt line masked
      until the threaded handler has executed.
      
      Add a new flag IRQF_ONESHOT which allows drivers to request that the
      interrupt is not unmasked after the hard interrupt context handler has
      been executed and the thread has been woken. The interrupt line is
      unmasked after the thread handler function has been executed.
      
      Note that for now IRQF_ONESHOT cannot be used with IRQF_SHARED to
      avoid complex accounting mechanisms.
      
      For oneshot interrupts the primary handler simply returns
      IRQ_WAKE_THREAD and does nothing else. A generic implementation
      irq_default_primary_handler() is provided to avoid useless copies all
      over the place. It is automatically installed when
      request_threaded_irq() is called with handler=NULL and
      thread_fn!=NULL.
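      
      A hedged usage sketch; the client/chip names are illustrative:
      
             /* handler == NULL: the default primary handler is installed and
              * the line stays masked until my_chip_irq_thread() has run. */
             ret = request_threaded_irq(client->irq, NULL, my_chip_irq_thread,
                                        IRQF_ONESHOT | IRQF_TRIGGER_LOW,
                                        "my-chip", chip);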
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Trilok Soni <soni.trilok@gmail.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Brian Swetland <swetland@google.com>
      Cc: Joonyoung Shim <jy0922.shim@samsung.com>
      Cc: m.szyprowski@samsung.com
      Cc: t.fujak@samsung.com
      Cc: kyungmin.park@samsung.com,
      Cc: David Brownell <david-b@pacbell.net>
      Cc: Daniel Ribeiro <drwyrm@gmail.com>
      Cc: arve@android.com
      Cc: Barry Song <21cnbao@gmail.com>
      b25c340c
  7. 14 Aug, 2009 (1 commit)
    • genirq: prevent wakeup of freed irq thread · 2d860ad7
      Linus Torvalds committed
      free_irq() can remove an irqaction while the corresponding interrupt
      is in progress, but free_irq() sets action->thread to NULL
      unconditionally, which might lead to a NULL pointer dereference in
      handle_IRQ_event() when the hard interrupt context tries to wake up
      the handler thread.
      
      Prevent this by moving the thread stop after synchronize_irq(). No
      need to set action->thread to NULL either as action is going to be
      freed anyway.
      
      This fixes a boot crash reported against preempt-rt which uses the
      mainline irq threads code to implement full irq threading.
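      
      A simplified sketch of the fixed ordering (the real free_irq() path
      does more bookkeeping; only the part relevant to the race is shown):
      
             /* irqaction already unlinked from desc->action, desc->lock dropped */
             synchronize_irq(irq);                   /* wait for in-flight handlers */
      
             if (action->thread)
                     kthread_stop(action->thread);   /* only now safe to stop it */
      
             /* action is freed right after this, so there is no need to
              * clear action->thread */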
      
      [ tglx: removed local irqthread variable ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      2d860ad7
  8. 23 Jul, 2009 (1 commit)
    • genirq: Fix UP compile failure caused by irq_thread_check_affinity · 61f38261
      Bruno Premont committed
      Since "genirq: Delegate irq affinity setting to the irq thread"
      (591d2fb0), compilation with
      CONFIG_SMP=n fails with the following error:
      
      /usr/src/linux-2.6/kernel/irq/manage.c:
         In function 'irq_thread_check_affinity':
      /usr/src/linux-2.6/kernel/irq/manage.c:475:
         error: 'struct irq_desc' has no member named 'affinity'
      make[4]: *** [kernel/irq/manage.o] Error 1
      
      That commit adds a new function irq_thread_check_affinity() which
      uses struct irq_desc.affinity, which is only available for CONFIG_SMP=y.
      Move that function under #ifdef CONFIG_SMP.
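      
      The shape of the fix, sketched (the SMP body is elided):
      
             #ifdef CONFIG_SMP
             static void irq_thread_check_affinity(struct irq_desc *desc,
                                                   struct irqaction *action)
             {
                     /* ... uses desc->affinity, which only exists with CONFIG_SMP ... */
             }
             #else
             static inline void irq_thread_check_affinity(struct irq_desc *desc,
                                                          struct irqaction *action)
             {
             }
             #endif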
      
      [ tglx@brownpaperbag: compile and boot tested on UP and SMP ]
      Signed-off-by: Bruno Premont <bonbons@linux-vserver.org>
      LKML-Reference: <20090722222232.2eb3e1c4@neptune.home>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      61f38261
  9. 21 Jul, 2009 (1 commit)
    • genirq: Delegate irq affinity setting to the irq thread · 591d2fb0
      Thomas Gleixner committed
      irq_set_thread_affinity() calls set_cpus_allowed_ptr() which might
      sleep, but irq_set_thread_affinity() is called with desc->lock held
      and can be called from hard interrupt context as well. The code has
      another bug as it does not hold a ref on the task struct as required
      by set_cpus_allowed_ptr().
      
      Just set the IRQTF_AFFINITY bit in action->thread_flags. The next time
      the thread runs it migrates itself. Solves all of the above problems
      nicely.
      
      Add kerneldoc to irq_set_thread_affinity() while at it.
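      
      A hedged sketch of the two halves of the scheme (flag and field names
      follow the changelog, signatures are approximate):
      
             /* Hard-irq safe side: only flag the threads, never sleep here. */
             void irq_set_thread_affinity(struct irq_desc *desc)
             {
                     struct irqaction *action = desc->action;
      
                     while (action) {
                             if (action->thread)
                                     set_bit(IRQTF_AFFINITY, &action->thread_flags);
                             action = action->next;
                     }
             }
      
             /* Thread side: apply the new mask the next time the thread runs. */
             static void irq_thread_check_affinity(struct irq_desc *desc,
                                                   struct irqaction *action)
             {
                     if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
                             return;
      
                     set_cpus_allowed_ptr(current, desc->affinity);  /* may sleep */
             }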
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      591d2fb0
  10. 13 May, 2009 (1 commit)
  11. 28 Apr, 2009 (1 commit)
    • irq: only update affinity if ->set_affinity() is successful · 57b150cc
      Yinghai Lu committed
      irq_set_affinity() and move_masked_irq() try to assign affinity
      before calling chip set_affinity(). Some archs are assigning it
      in ->set_affinity() again.
      
      We do something like:
      
       cpumask_copy(desc->affinity, mask);
       desc->chip->set_affinity(mask);
      
      But in the failure path, affinity should not be touched - otherwise
      we'll end up with a different affinity mask despite the failure to
      migrate the IRQ.
      
      So try to update the affinity only if set_affinity returns 0.
      Also call irq_set_thread_affinity accordingly.
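      
      The fixed ordering, roughly (sketch; the real code also covers the
      GENERIC_PENDING_IRQ path):
      
             if (!desc->chip->set_affinity(irq, cpumask)) {
                     cpumask_copy(desc->affinity, cpumask);
                     irq_set_thread_affinity(desc, cpumask);
             }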
      
      v2: update after "irq, x86: Remove IRQ_DISABLED check in process context IRQ move"
      v3: per Ingo's suggestion, change set_affinity() in irq_chip to return int.
      v4: update comments by removing moving irq_desc code.
      
      [ Impact: fix /proc/irq/*/smp_affinity setting corner case bug ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      LKML-Reference: <49F65509.60307@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      57b150cc
  12. 23 Apr, 2009 (1 commit)
  13. 14 Apr, 2009 (1 commit)
    • x86, irq: Remove IRQ_DISABLED check in process context IRQ move · 6ec3cfec
      Pallipadi, Venkatesh committed
      As discussed in the thread here:
      
        http://marc.info/?l=linux-kernel&m=123964468521142&w=2
      
      Eric W. Biederman observed:
      
      > It looks like some additional bugs have slipped in since last I looked.
      >
      > set_irq_affinity does this:
      > #ifdef CONFIG_GENERIC_PENDING_IRQ
      >        if (desc->status & IRQ_MOVE_PCNTXT || desc->status & IRQ_DISABLED) {
      >                cpumask_copy(desc->affinity, cpumask);
      >                desc->chip->set_affinity(irq, cpumask);
      >        } else {
      >                desc->status |= IRQ_MOVE_PENDING;
      >                cpumask_copy(desc->pending_mask, cpumask);
      >        }
      > #else
      >
      > That IRQ_DISABLED case is a software state and as such it has nothing to
      > do with how safe it is to move an irq in process context.
      
      [...]
      
      >
      > The only reason we migrate MSIs in interrupt context today is that there
      > wasn't infrastructure for support migration both in interrupt context
      > and outside of it.
      
      Yes. The idea here was to force the MSI migration to happen in process
      context. One of the patches in the series did
      
              disable_irq(dev->irq);
              irq_set_affinity(dev->irq, cpumask_of(dev->cpu));
              enable_irq(dev->irq);
      
      with the above patch adding a check in the irq/manage code for the
      interrupt being disabled and moving the interrupt in process context.
      
      IIRC, there was no IRQ_MOVE_PCNTXT when we were developing this HPET
      code and we ended up having this ugly hack. IRQ_MOVE_PCNTXT was there
      when we eventually submitted the patch upstream. But, looks like I did a
      blind rebasing instead of using IRQ_MOVE_PCNTXT in hpet MSI code.
      
      The patch below fixes this, i.e. it reverts commit 932775a4
      and adds IRQ_MOVE_PCNTXT to the HPET MSI setup. It also removes the
      copying of desc->affinity in the generic code, as the set_affinity
      routines do it internally.
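      
      The HPET side of the fix, roughly sketched:
      
             /* In the HPET MSI setup: mark the irq as movable in process
              * context so the generic code no longer needs the IRQ_DISABLED
              * special case. */
             desc = irq_to_desc(irq);
             desc->status |= IRQ_MOVE_PCNTXT;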
      Reported-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: NVenkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Acked-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Cc: "Li Shaohua" <shaohua.li@intel.com>
      Cc: Gary Hade <garyhade@us.ibm.com>
      Cc: "lcm@us.ibm.com" <lcm@us.ibm.com>
      Cc: suresh.b.siddha@intel.com
      LKML-Reference: <20090413222058.GB8211@linux-os.sc.intel.com>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
      6ec3cfec
  14. 31 Mar, 2009 (1 commit)
    • PM: Introduce functions for suspending and resuming device interrupts · 0a0c5168
      Rafael J. Wysocki committed
      Introduce helper functions allowing us to prevent device drivers from
      getting any interrupts (without disabling interrupts on the CPU)
      during suspend (or hibernation) and to make them start to receive
      interrupts again during the subsequent resume.  These functions make it
      possible to keep timer interrupts enabled while the "late" suspend and
      "early" resume callbacks provided by device drivers are being
      executed.  In turn, this allows device drivers' "late" suspend and
      "early" resume callbacks to sleep, execute ACPI callbacks etc.
      
      The functions introduced here will be used to rework the handling of
      interrupts during suspend (hibernation) and resume.  Namely,
      interrupts will only be disabled on the CPU right before suspending
      sysdevs, while device drivers will be prevented from receiving
      interrupts, with the help of the new helper function, before their
      "late" suspend callbacks run (and analogously during resume).
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      0a0c5168
  15. 24 Mar, 2009 (2 commits)
    • genirq: threaded irq handlers review fixups · f48fe81e
      Thomas Gleixner committed
      Delta patch to address the review comments.
      
            - Implement warning when IRQ_WAKE_THREAD is requested and no
              thread handler installed
            - coding style fixes
      Pointed-out-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f48fe81e
    • genirq: add threaded interrupt handler support · 3aa551c9
      Thomas Gleixner committed
      Add support for threaded interrupt handlers:
      
      A device driver can request that its main interrupt handler runs in a
      thread. To achieve this the device driver requests the interrupt with
      request_threaded_irq() and provides additionally to the handler a
      thread function. The handler function is called in hard interrupt
      context and needs to check whether the interrupt originated from the
      device. If the interrupt originated from the device then the handler
      can either return IRQ_HANDLED or IRQ_WAKE_THREAD. IRQ_HANDLED is
      returned when no further action is required. IRQ_WAKE_THREAD causes
      the genirq code to invoke the threaded (main) handler. When
      IRQ_WAKE_THREAD is returned, the handler must have disabled the interrupt
      at the device level. This is mandatory for shared interrupt handlers,
      but we need to do it as well for obscure x86 hardware where disabling
      an interrupt on the IO_APIC level redirects the interrupt to the
      legacy PIC interrupt lines.
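      
      A hedged usage sketch; the device, register offsets and bit masks are
      invented for illustration:
      
             static irqreturn_t mydev_quick_check(int irq, void *dev_id)
             {
                     struct mydev *dev = dev_id;
      
                     if (!(readl(dev->regs + MYDEV_STAT) & MYDEV_IRQ_PENDING))
                             return IRQ_NONE;                /* not our interrupt */
      
                     writel(0, dev->regs + MYDEV_IRQ_EN);    /* mask at device level */
                     return IRQ_WAKE_THREAD;
             }
      
             static irqreturn_t mydev_thread_fn(int irq, void *dev_id)
             {
                     struct mydev *dev = dev_id;
      
                     mydev_process_events(dev);              /* may sleep */
                     writel(1, dev->regs + MYDEV_IRQ_EN);    /* unmask again */
                     return IRQ_HANDLED;
             }
      
             ret = request_threaded_irq(dev->irq, mydev_quick_check, mydev_thread_fn,
                                        IRQF_SHARED, "mydev", dev);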
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
      3aa551c9
  16. 13 Mar, 2009 (2 commits)
  17. 12 Mar, 2009 (3 commits)
  18. 18 Feb, 2009 (2 commits)
  19. 15 Feb, 2009 (2 commits)
  20. 13 Feb, 2009 (1 commit)
  21. 09 Feb, 2009 (1 commit)
  22. 28 Jan, 2009 (2 commits)
  23. 12 Jan, 2009 (1 commit)
    • cpumask: update irq_desc to use cpumask_var_t · 7f7ace0c
      Mike Travis committed
      Impact: reduce memory usage, use new cpumask API.
      
      Replace the affinity and pending_masks with cpumask_var_t's.  This adds
      to the significant size reduction done with the SPARSE_IRQS changes.
      
      The added functions (init_alloc_desc_masks & init_copy_desc_masks) are
      in the include file so they can be inlined (and optimized out for the
      !CONFIG_CPUMASK_OFFSTACK case).  [Naming chosen to be consistent with
      the other init*irq functions, as well as the backwards arg declaration
      of "from, to" instead of the more common "to, from" standard.]
      
      Includes a slight change to the declaration of struct irq_desc to embed
      the pending_mask within #ifdef CONFIG_SMP to be consistent with other
      references, and some small changes to Xen.
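      
      For reference, the cpumask_var_t pattern this change moves irq_desc
      to (a generic sketch, not the irq code itself): with
      CONFIG_CPUMASK_OFFSTACK=y the mask is allocated off-stack, otherwise
      the alloc/free calls compile away:
      
             cpumask_var_t mask;
      
             if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                     return -ENOMEM;
             cpumask_copy(mask, cpu_online_mask);
             /* ... use mask ... */
             free_cpumask_var(mask);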
      
      Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: virtualization@lists.osdl.org
      Cc: xen-devel@lists.xensource.com
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      7f7ace0c
  24. 01 Jan, 2009 (1 commit)
  25. 29 Dec, 2008 (2 commits)
    • sparseirq: move __weak symbols into separate compilation unit · 43a25632
      Yinghai Lu committed
      GCC has a bug with __weak alias functions: if the functions are in
      the same compilation unit as their call site, GCC can decide to
      inline them - and thus rob the linker of the opportunity to override
      the weak alias with the real thing.
      
      So move all the IRQ handling related __weak symbols to kernel/irq/chip.c.
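      
      A sketch of the __weak override pattern at issue (arch_init_chip_data()
      is one such symbol; its parameter list here is approximate):
      
             /* Generic weak default: must live in a compilation unit that
              * does not also contain its callers, otherwise GCC may inline
              * it and the arch override below never takes effect. */
             void __weak arch_init_chip_data(struct irq_desc *desc, int cpu)
             {
             }
      
             /* Architecture code: the strong definition that should win at
              * link time. */
             void arch_init_chip_data(struct irq_desc *desc, int cpu)
             {
                     /* arch specific chip_data setup */
             }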
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      43a25632
    • sparseirq: work around __weak alias bug · b2e2fe99
      Ingo Molnar committed
      Impact: fix boot crash if the kernel is built with certain GCC versions
      
      GCC has a bug with __weak alias functions: if the functions are in
      the same compilation unit as their call site, GCC can decide to
      inline them - and thus rob the linker of the opportunity to override
      the weak alias with the real thing.
      
      This can lead to the boot crash reported by Kamalesh Babulal:
      
       ACPI: Core revision 20080926
       Setting APIC routing to flat
       BUG: unable to handle kernel NULL pointer dereference at
       0000000000000000
       IP: [<ffffffff8021f9a8>] add_pin_to_irq_cpu+0x14/0x74
       PGD 0
       Oops: 0000 [#1] SMP
       [...]
      
      So move the arch_init_chip_data() function from handle.c to manage.c.
      Reported-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b2e2fe99
  26. 13 Dec, 2008 (1 commit)
  27. 02 Dec, 2008 (2 commits)
  28. 13 Nov, 2008 (1 commit)
    • genirq: __irq_set_trigger: change pr_warning to pr_debug · 3ff68a6a
      Mark Nelson committed
      Commit 0c5d1eb7 (genirq: record trigger
      type) caused powerpc platforms that had no set_type() function in their
      struct irq_chip to spew out warnings about "No set_type function for
      IRQ...". This warning isn't necessarily justified though because the
      generic powerpc platform code calls set_irq_type() (which in turn calls
      __irq_set_trigger) with information from the device tree to establish
      the interrupt mappings, regardless of whether the PIC can actually set
      a type.
      
      A platform's irq_chip might not have a set_type function for a variety
      of reasons, for example: the platform may have the type essentially
      hard-coded; or, as in the case of Cell, interrupts are just messages
      passed around that have no real concept of type; or the platform
      could even have a virtual PIC, as on the PS3.
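      
      The change itself is small; roughly (sketch of the relevant hunk in
      __irq_set_trigger(), details approximate):
      
             if (!chip || !chip->set_type) {
                     /* Was pr_warning(); platforms like the ones above hit
                      * this for every interrupt, so make it a debug message. */
                     pr_debug("No set_type function for IRQ %d (%s)\n", irq,
                              chip ? (chip->name ? : "unknown") : "unknown");
                     return 0;
             }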
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3ff68a6a
  29. 10 Nov, 2008 (1 commit)