1. 31 Aug 2007 (1 commit)
    • request_irq: fix DEBUG_SHIRQ handling · 59845b1f
      Authored by Jarek Poplawski
      Mariusz Kozlowski reported lockdep's warning:
      
      > =================================
      > [ INFO: inconsistent lock state ]
      > 2.6.23-rc2-mm1 #7
      > ---------------------------------
      > inconsistent {in-hardirq-W} -> {hardirq-on-W} usage.
      > ifconfig/5492 [HC0[0]:SC0[0]:HE1:SE1] takes:
      >  (&tp->lock){+...}, at: [<de8706e0>] rtl8139_interrupt+0x27/0x46b [8139too]
      > {in-hardirq-W} state was registered at:
      >   [<c0138eeb>] __lock_acquire+0x949/0x11ac
      >   [<c01397e7>] lock_acquire+0x99/0xb2
      >   [<c0452ff3>] _spin_lock+0x35/0x42
      >   [<de8706e0>] rtl8139_interrupt+0x27/0x46b [8139too]
      >   [<c0147a5d>] handle_IRQ_event+0x28/0x59
      >   [<c01493ca>] handle_level_irq+0xad/0x10b
      >   [<c0105a13>] do_IRQ+0x93/0xd0
      >   [<c010441e>] common_interrupt+0x2e/0x34
      ...
      > other info that might help us debug this:
      > 1 lock held by ifconfig/5492:
      >  #0:  (rtnl_mutex){--..}, at: [<c0451778>] mutex_lock+0x1c/0x1f
      >
      > stack backtrace:
      ...
      >  [<c0452ff3>] _spin_lock+0x35/0x42
      >  [<de8706e0>] rtl8139_interrupt+0x27/0x46b [8139too]
      >  [<c01480fd>] free_irq+0x11b/0x146
      >  [<de871d59>] rtl8139_close+0x8a/0x14a [8139too]
      >  [<c03bde63>] dev_close+0x57/0x74
      ...
      
      This shows that a driver's irq handler was running in both hard-interrupt
      and process context with irqs enabled. The latter happened during a
      free_irq() call and was possible only with CONFIG_DEBUG_SHIRQ enabled.
      That case was fixed by another patch.
      
      But a similar problem is possible with request_irq(): any locks taken in
      the irq handler could be vulnerable, especially against soft interrupts.
      This patch fixes it by disabling local interrupts while the handler runs
      (see the sketch after this entry). It seems disabling softirqs alone
      should be enough, but that needs more checking for possible races and
      other special cases.
      Reported-by: Mariusz Kozlowski <m.kozlowski@tuxland.pl>
      Signed-off-by: Jarek Poplawski <jarkao2@o2.pl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      59845b1f
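      A minimal sketch of the pattern this fix describes (illustrative, not
      necessarily the exact patch): with CONFIG_DEBUG_SHIRQ, request_irq()
      fires one test invocation of a shared handler before the IRQ goes live,
      and the fix wraps that call in local_irq_save()/local_irq_restore() so
      the handler never runs in process context with interrupts enabled. The
      fragment below lives inside request_irq(), using its usual parameters.

          #ifdef CONFIG_DEBUG_SHIRQ
          	if (irqflags & IRQF_SHARED) {
          		/*
          		 * Shared IRQ: the driver must be ready for its handler
          		 * to fire at any moment, so exercise it once up front,
          		 * but with local interrupts off, matching the conditions
          		 * the handler sees in real hardirq context.
          		 */
          		unsigned long flags;

          		local_irq_save(flags);
          		handler(irq, dev_id);
          		local_irq_restore(flags);
          	}
          #endif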
  2. 23 Aug 2007 (1 commit)
  3. 09 May 2007 (1 commit)
  4. 17 Feb 2007 (2 commits)
  5. 15 Feb 2007 (1 commit)
  6. 13 Feb 2007 (2 commits)
  7. 12 Feb 2007 (1 commit)
    • [PATCH] sort the devres mess out · 5ea81769
      Authored by Al Viro
      * Split the implementation-agnostic stuff into separate files.
      * Make sure that targets using a non-default request_irq() pull in
        kernel/irq/devres.o.
      * Introduce new symbols (HAS_IOPORT and HAS_IOMEM) defaulting to y;
        allow architectures to turn them off (we needed these symbols anyway
        for the dependencies of quite a few drivers).
      * Protect the ioport-related parts of lib/devres.o with
        CONFIG_HAS_IOPORT (see the sketch after this entry).
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ea81769
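      A sketch of what the last bullet means in practice, assuming the
      prototypes below match that era's lib/devres.c: the port-I/O helpers
      are only declared and built where the architecture provides port I/O.

          /* Port-I/O devres helpers exist only on architectures with
           * port I/O, i.e. where Kconfig leaves HAS_IOPORT enabled. */
          #ifdef CONFIG_HAS_IOPORT
          void __iomem *devm_ioport_map(struct device *dev, unsigned long port,
          			      unsigned int nr);
          void devm_ioport_unmap(struct device *dev, void __iomem *addr);
          #endif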
  8. 10 Feb 2007 (1 commit)
    • devres: device resource management · 9ac7849e
      Authored by Tejun Heo
      Implement device resource management, in short, devres.  A device
      driver can allocate an arbitrarily sized piece of devres data that is
      associated with a release function.  On driver detach, the release
      function is invoked on the devres data, and then the data is freed.
      
      devres entries are typed by their associated release functions.  Some
      are best represented by a single instance of the type, while others
      need multiple instances sharing the same release function.  Both usages
      are supported.
      
      devres entries can be grouped using devres groups, so that a device
      driver can easily release acquired resources halfway through
      initialization, or release resources selectively (e.g. the resources
      for port 1 out of 4 ports).
      
      This patch adds the devres core, including documentation and the
      following managed interfaces (a usage sketch follows this entry).
      
      * alloc/free	: devm_kzalloc(), devm_kzfree()
      * IO region	: devm_request_region(), devm_release_region()
      * IRQ		: devm_request_irq(), devm_free_irq()
      * DMA		: dmam_alloc_coherent(), dmam_free_coherent(),
      		  dmam_declare_coherent_memory(), dmam_pool_create(),
      		  dmam_pool_destroy()
      * PCI		: pcim_enable_device(), pcim_pin_device(), pci_is_managed()
      * iomap		: devm_ioport_map(), devm_ioport_unmap(), devm_ioremap(),
      		  devm_ioremap_nocache(), devm_iounmap(), pcim_iomap_table(),
      		  pcim_iomap(), pcim_iounmap()
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      9ac7849e
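      A hedged usage sketch of the managed interfaces: a hypothetical
      platform driver probe in which every devm_* acquisition is unwound
      automatically on probe failure or driver detach, so no explicit
      error-handling ladder is needed. All "foo" names are illustrative.

          #include <linux/device.h>
          #include <linux/interrupt.h>
          #include <linux/platform_device.h>

          struct foo_priv {
          	int irq;
          };

          static irqreturn_t foo_interrupt(int irq, void *dev_id)
          {
          	/* Illustrative handler; a real driver would poll hardware. */
          	return IRQ_HANDLED;
          }

          static int foo_probe(struct platform_device *pdev)
          {
          	struct foo_priv *priv;
          	int ret;

          	/* Freed automatically when the device is detached. */
          	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
          	if (!priv)
          		return -ENOMEM;

          	priv->irq = platform_get_irq(pdev, 0);
          	if (priv->irq < 0)
          		return priv->irq;

          	/*
          	 * Released automatically too. If this fails, the devm_kzalloc
          	 * above is not leaked: devres unwinds it on probe error.
          	 */
          	ret = devm_request_irq(&pdev->dev, priv->irq, foo_interrupt, 0,
          			       "foo", priv);
          	if (ret)
          		return ret;

          	return 0;
          }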
  9. 24 Jan 2007 (1 commit)
  10. 15 Nov 2006 (1 commit)
  11. 05 Oct 2006 (1 commit)
  12. 01 Aug 2006 (1 commit)
    • [PATCH] genirq: {en,dis}able_irq_wake() need refcounting too · 15a647eb
      Authored by David Brownell
      IRQs need refcounting and a state flag to track whether the IRQ should
      be enabled or disabled as a "normal IRQ" source after a series of calls
      to {en,dis}able_irq().  For shared IRQs, the IRQ must stay enabled as
      long as at least one driver needs it active.
      
      Likewise, IRQs need the same support to track whether the IRQ should be
      enabled or disabled as a "wakeup event" source after a series of calls
      to {en,dis}able_irq_wake().  For shared IRQs, the IRQ must stay enabled
      as a wakeup source during sleep as long as at least one driver needs
      it.  But right now they _don't have_ that refcounting, which means
      sharing a wakeup-capable IRQ can't work correctly in some
      configurations.
      
      This patch adds the refcount and flag mechanisms to set_irq_wake() --
      which is what {en,dis}able_irq_wake() call -- and minimal documentation
      of what the irq wake mechanism does.  (A balanced-usage sketch follows
      this entry.)
      
      Drivers relying on the older (broken) "toggle" semantics will trigger a
      warning; that'll be a handful of drivers on ARM systems.
      Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      15a647eb
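      A sketch of the balanced usage this refcounting enables, assuming
      hypothetical "foo" suspend/resume callbacks: each sharer of the IRQ
      calls enable_irq_wake() and disable_irq_wake() symmetrically, and the
      line stays armed for wakeup while at least one user wants it.

          #include <linux/device.h>
          #include <linux/interrupt.h>
          #include <linux/pm_wakeup.h>

          /* Hypothetical driver state; only the IRQ number matters here. */
          struct foo_priv {
          	int irq;
          };

          static int foo_suspend(struct device *dev)
          {
          	struct foo_priv *priv = dev_get_drvdata(dev);

          	/* First sharer to call this arms the line; later calls
          	 * just bump the refcount inside set_irq_wake(). */
          	if (device_may_wakeup(dev))
          		enable_irq_wake(priv->irq);
          	return 0;
          }

          static int foo_resume(struct device *dev)
          {
          	struct foo_priv *priv = dev_get_drvdata(dev);

          	/* Balanced call: the line is disarmed only when the last
          	 * sharer drops its reference. */
          	if (device_may_wakeup(dev))
          		disable_irq_wake(priv->irq);
          	return 0;
          }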
  13. 04 Jul 2006 (1 commit)
    • [PATCH] lockdep: core · fbb9ce95
      Authored by Ingo Molnar
      Do 'make oldconfig' and accept all the defaults for new config options,
      then reboot into the kernel.  If everything goes well it should boot up
      fine, and you should have /proc/lockdep and /proc/lockdep_stats files.
      
      Typically, if the lock validator finds a problem it prints voluminous
      debug output beginning with "BUG: ..."; kernel developers can use that
      syslog output to figure out the precise locking scenario.
      
      What does the lock validator do?  It "observes" and maps all locking rules as
      they occur dynamically (as triggered by the kernel's natural use of spinlocks,
      rwlocks, mutexes and rwsems).  Whenever the lock validator subsystem detects a
      new locking scenario, it validates this new rule against the existing set of
      rules.  If this new rule is consistent with the existing set of rules then the
      new rule is added transparently and the kernel continues as normal.  If the
      new rule could create a deadlock scenario then this condition is printed out.
      
      When determining the validity of locking, all possible "deadlock
      scenarios" are considered: an arbitrary number of CPUs, arbitrary irq-
      context and task-context constellations, running arbitrary combinations
      of all the existing locking scenarios.  In a typical system this means
      millions of separate scenarios.  This is why we call it a "locking
      correctness" validator: for all observed rules, the lock validator
      proves with mathematical certainty that a deadlock cannot occur
      (assuming the lock validator implementation itself is correct and its
      internal data structures are not corrupted by some other kernel
      subsystem).  [See more details and conditionals of this statement in
      include/linux/lockdep.h and Documentation/lockdep-design.txt.]
      
      Furthermore, this "all possible scenarios" property of the validator
      also enables finding complex, highly unlikely multi-CPU multi-context
      races via individual single-context rules, drastically increasing the
      likelihood of finding bugs.  In practical terms: the lock validator has
      already found a bug in the upstream kernel that could only occur on
      systems with 3 or more CPUs, and which needed 3 very unlikely code
      sequences to occur at once on those 3 CPUs.  That bug was found and
      reported on a single-CPU system (!).  So in essence a race is found
      "piecemeal-wise", by triggering all the necessary components of the
      race without having to reproduce the race scenario itself!  In its
      short existence the lock validator has found and reported many bugs
      before they actually caused a real deadlock.
      
      To further increase the efficiency of the validator, the mapping is not
      per "lock instance" but per "lock-class".  For example, all struct
      inode objects in the kernel have inode->inotify_mutex.  If there are
      10,000 inodes cached, there are 10,000 lock objects.  But
      ->inotify_mutex is a single "lock type", and all locking activity
      against ->inotify_mutex is "unified" into this single lock-class.  The
      advantage of the lock-class approach is that all historical
      ->inotify_mutex uses are mapped into a single (and as narrow as
      possible) set of locking rules, regardless of how many different tasks
      or inode structures it took to build this set of rules.  The set of
      rules persists for the lifetime of the kernel.
      
      To see the rough magnitude of checking that the lock validator does, here's a
      portion of /proc/lockdep_stats, fresh after bootup:
      
       lock-classes:                          694 [max: 2048]
       direct dependencies:                  1598 [max: 8192]
       indirect dependencies:               17896
       all direct dependencies:             16206
       dependency chains:                    1910 [max: 8192]
       in-hardirq chains:                      17
       in-softirq chains:                     105
       in-process chains:                    1065
       stack-trace entries:                 38761 [max: 131072]
       combined max dependencies:         2033928
       hardirq-safe locks:                     24
       hardirq-unsafe locks:                  176
       softirq-safe locks:                     53
       softirq-unsafe locks:                  137
       irq-safe locks:                         59
       irq-unsafe locks:                      176
      
      The lock validator has observed 1598 actual single-thread locking
      patterns, and has validated all 2033928 possible distinct locking
      scenarios.  (An illustrative example of such a rule follows this
      entry.)
      
      More details about the design of the lock validator can be found in
      Documentation/lockdep-design.txt, which can also found at:
      
         http://redhat.com/~mingo/lockdep-patches/lockdep-design.txt
      
      [bunk@stusta.de: cleanups]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fbb9ce95
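      To make the "locking rules" concrete, here is a minimal illustrative
      pattern (not from the patch; the "foo" names are hypothetical) that the
      validator flags. It has exactly the shape of the rtl8139 report at the
      top of this log: a lock taken in hardirq context must never be taken
      elsewhere with interrupts enabled.

          #include <linux/interrupt.h>
          #include <linux/spinlock.h>

          static DEFINE_SPINLOCK(foo_lock);

          /* Runs in hardirq context: lockdep registers the foo_lock class
           * as {in-hardirq-W} the first time this fires. */
          static irqreturn_t foo_irq(int irq, void *dev_id)
          {
          	spin_lock(&foo_lock);
          	/* ... touch shared state ... */
          	spin_unlock(&foo_lock);
          	return IRQ_HANDLED;
          }

          /* Runs in process context with irqs enabled: a {hardirq-on-W}
           * usage of the same class. If foo_irq() fires on this CPU while
           * the lock is held, the CPU deadlocks on foo_lock. */
          static void foo_shutdown(void)
          {
          	spin_lock(&foo_lock);	/* BAD: use spin_lock_irqsave() here */
          	/* ... */
          	spin_unlock(&foo_lock);
          }

      The validator records each usage of the foo_lock class the first time
      it occurs and flags the inconsistent combination, which is how the
      2.6.23-rc2-mm1 warning above was produced without any actual deadlock
      happening.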
  14. 03 Jul 2006 (2 commits)
  15. 02 Jul 2006 (2 commits)
  16. 01 Jul 2006 (1 commit)
  17. 30 Jun 2006 (17 commits)
  18. 28 Apr 2006 (1 commit)
  19. 27 Mar 2006 (1 commit)
  20. 26 Mar 2006 (1 commit)