1. 10 December 2009, 1 commit
  2. 06 December 2009, 1 commit
  3. 13 November 2009, 1 commit
  4. 09 October 2009, 1 commit
  5. 23 September 2009, 1 commit
  6. 29 August 2009, 1 commit
  7. 02 August 2009, 6 commits
    • M
      lockdep: Fix memory usage info of BFS · 90629209
      Committed by Ming Lei
      The unit is KB, so sizeof(struct circular_queue) should be
      divided by 1024.
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Cc: akpm@linux-foundation.org
      Cc: torvalds@linux-foundation.org
      Cc: davem@davemloft.net
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: a.p.zijlstra@chello.nl
      LKML-Reference: <1249220616-7190-1-git-send-email-tom.leiming@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      90629209
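      A minimal sketch of the kind of change described above (the seq_file line and its label are illustrative, not the exact diff): the /proc/lockdep_stats output is in kilobytes, so the BFS queue size has to be scaled accordingly.

       /* hypothetical excerpt from a lockdep_stats_show()-style printout */
       seq_printf(m, " memory used by the BFS queue: %zu kB\n",
                  sizeof(struct circular_queue) / 1024);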
    • M
      lockdep: Reintroduce generation count to make BFS faster · e351b660
      Committed by Ming Lei
      We can still apply DaveM's generation count optimization to
      BFS, based on the following idea:
      
       - before doing each BFS, increase the global generation id
         by 1
      
       - if one node in the graph has been visited, mark it as
         visited by storing the current global generation id into
         the node's dep_gen_id field
      
       - so we can decide if one node has been visited already, by
         comparing the node's dep_gen_id with the global generation id.
      
      By applying DaveM's generation count optimization to current
      implementation of BFS, we gain the following advantages:
      
       - we save MAX_LOCKDEP_ENTRIES/8 bytes of memory;

       - we remove the bitmap_zero(bfs_accessed, MAX_LOCKDEP_ENTRIES)
         call from each BFS, which is very time-consuming since
         MAX_LOCKDEP_ENTRIES may be very large (16384UL).
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "David S. Miller" <davem@davemloft.net>
      LKML-Reference: <1248274089-6358-1-git-send-email-tom.leiming@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e351b660
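      A minimal sketch of the generation-count scheme described above; dep_gen_id comes from the commit text, while graph_node and the helper names are made up for illustration.

       static unsigned int bfs_generation;        /* global generation id */

       static void bfs_start(void)
       {
               bfs_generation++;  /* one bump per BFS run: all old marks go stale */
       }

       static void mark_visited(struct graph_node *node)
       {
               node->dep_gen_id = bfs_generation;
       }

       static int was_visited(struct graph_node *node)
       {
               /* visited iff it was marked during *this* BFS run */
               return node->dep_gen_id == bfs_generation;
       }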
    • P
      lockdep: Deal with many similar locks · bb97a91e
      Committed by Peter Zijlstra
      spin_lock_nest_lock() allows taking many instances of the same
      class; this can easily lead to overflowing MAX_LOCK_DEPTH.
      
      To avoid this overflow, we'll stop accounting instances but
      start reference counting the class in the held_lock structure.
      
      [ We could maintain a list of instances, if we'd move the hlock
        stuff into __lock_acquired(), but that would require
        significant modifications to the current code. ]
      
      We restrict this mode to spin_lock_nest_lock() only, because it
      degrades lockdep quality due to the loss of per-instance information.
      
      For lockstat this means we don't track lock statistics for any
      but the first lock in the series.
      
      Currently nesting is limited to 11 bits because that was the
      spare space available in held_lock. This yields a maximum of
      2048 instances.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bb97a91e
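      A rough sketch of the reference-counting idea, assuming an 11-bit field as described in the commit text; everything here apart from held_lock and the 2048 limit is illustrative.

       struct held_lock {
               /* ... existing fields ... */
               unsigned int references:11;  /* 11 spare bits => at most 2048 nestings */
       };

       /* on acquire under a nest_lock: reuse the existing entry for this
        * class instead of consuming another MAX_LOCK_DEPTH slot */
       if (nest_lock && hlock->class_idx == class_idx) {
               hlock->references++;
               return 1;
       }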
    • P
      lockdep: Introduce lockdep_assert_held() · f607c668
      Committed by Peter Zijlstra
      Add a lockdep helper to validate that we are indeed the owner
      of a lock.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f607c668
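      A brief usage sketch; my_dev and update_stats() are made-up names, while lockdep_assert_held() is the helper added by this commit.

       static void update_stats(struct my_dev *dev)
       {
               /* warn if the caller does not actually hold dev->lock */
               lockdep_assert_held(&dev->lock);
               dev->stats.count++;
       }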
    • P
      lockdep: Fix style nits · 98c33edd
      Committed by Peter Zijlstra
      Fixes a few comments and whitespace issues that annoyed me.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      98c33edd
    • P
      lockdep: Fix backtraces · 4f84f433
      Committed by Peter Zijlstra
      Truncate stupid -1 entries in backtraces.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1248096665.15751.8816.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4f84f433
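      Roughly, the fix drops the trailing -1 (ULONG_MAX) sentinel that some architectures append to a saved trace; a sketch of the idea (the exact placement inside lockdep's trace-saving code may differ):

       save_stack_trace(trace);

       /* some arches terminate a full trace with a -1 entry; don't report it */
       if (trace->nr_entries != 0 &&
           trace->entries[trace->nr_entries - 1] == ULONG_MAX)
               trace->nr_entries--;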
  8. 24 July 2009, 10 commits
  9. 18 April 2009, 1 commit
    • P
      lockdep: more robust lockdep_map init sequence · c8a25005
      Committed by Peter Zijlstra
      Steven Rostedt reported:
      
      > OK, I think I figured this bug out. This is a lockdep issue with respect
      > to tracepoints.
      >
      > The trace points in lockdep are called all the time. Outside the lockdep
      > logic. But if lockdep were to trigger an error / warning (which this run
      > did) we might be in trouble. For new locks, like the dentry->d_lock, that
      > are created, they will not get a name:
      >
      > void lockdep_init_map(struct lockdep_map *lock, const char *name,
      >                       struct lock_class_key *key, int subclass)
      > {
      >         if (unlikely(!debug_locks))
      >                 return;
      >
      > When a problem is found by lockdep, debug_locks becomes false. Thus we
      > stop allocating names for locks. This dentry->d_lock I had, now has no
      > name. Worse yet, I have CONFIG_DEBUG_VM set, that scrambles non
      > initialized memory. Thus, when the trace point was hit, it had junk for
      > the lock->name, and the machine crashed.
      
      Ah, nice catch. I think we should at least set the name regardless.

      Ensure we at least initialize the trivial entries of the dep_map so
      that they can be relied upon, even when lockdep itself has decided
      to pack up and go home.
      
      [ Impact: fix lock tracing after lockdep warnings. ]
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1239954049.23397.4156.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c8a25005
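      A simplified sketch of the reordering described above: initialize the trivially safe fields before the debug_locks bail-out, so tracepoints always see a valid name. The exact set of fields handled this way is illustrative.

       void lockdep_init_map(struct lockdep_map *lock, const char *name,
                             struct lock_class_key *key, int subclass)
       {
               /* these must stay valid even after lockdep has shut itself down */
               lock->name = name ? name : "NULL";
               lock->key  = key;

               if (unlikely(!debug_locks))
                       return;

               /* ... full class setup only while lockdep is still alive ... */
       }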
  10. 15 April 2009, 2 commits
    • S
      tracing/events: move trace point headers into include/trace/events · ad8d75ff
      Committed by Steven Rostedt
      Impact: clean up
      
      Create a subdirectory in include/trace called events to keep the
      tracepoint headers in their own separate directory. Only headers that
      declare trace points should be defined in this directory.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Zhao Lei <zhaolei@cn.fujitsu.com>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ad8d75ff
    • S
      tracing: create automated trace defines · a8d154b0
      Committed by Steven Rostedt
      This patch lowers the number of places a developer must modify to add
      new tracepoints. The current method for adding a new tracepoint to an
      existing system is to write the tracepoint macro in the trace header
      with one of the macros TRACE_EVENT, TRACE_FORMAT or DECLARE_TRACE,
      then add an identically named DEFINE_TRACE(name) to a C file, and
      finally add the actual trace point call.

      This change removes the need to add the DEFINE_TRACE(name).
      Every file that uses the tracepoint must still include the
      trace/<type>.h file, but exactly one C file must also add a define
      before including that file:
      
       #define CREATE_TRACE_POINTS
       #include <trace/mytrace.h>
      
      This will cause the trace/mytrace.h file to also produce the C code
      necessary to implement the trace point.
      
      Note: if more than one trace/<type>.h is used to create the C code,
      it is best to list them all together:
      
       #define CREATE_TRACE_POINTS
       #include <trace/foo.h>
       #include <trace/bar.h>
       #include <trace/fido.h>
      
      Thanks to Mathieu Desnoyers and Christoph Hellwig for coming up with
      the cleaner solution of putting the define above the includes, rather
      than my first design of having the C code include a "special" header.

      This patch converts sched, irq, lockdep and skb to use this new
      method.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Zhao Lei <zhaolei@cn.fujitsu.com>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a8d154b0
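      A minimal sketch of what such a trace header looks like; mytrace and my_event are made-up names, and the macro layout follows the include/trace/events convention.

       /* include/trace/events/mytrace.h (illustrative) */
       #undef TRACE_SYSTEM
       #define TRACE_SYSTEM mytrace

       #if !defined(_TRACE_MYTRACE_H) || defined(TRACE_HEADER_MULTI_READ)
       #define _TRACE_MYTRACE_H

       #include <linux/tracepoint.h>

       TRACE_EVENT(my_event,
               TP_PROTO(int value),
               TP_ARGS(value),
               TP_STRUCT__entry(__field(int, value)),
               TP_fast_assign(__entry->value = value;),
               TP_printk("value=%d", __entry->value)
       );

       #endif /* _TRACE_MYTRACE_H */

       /* When the including C file has defined CREATE_TRACE_POINTS, this
        * also emits the tracepoint definitions, so no separate
        * DEFINE_TRACE(name) line is needed anymore. */
       #include <trace/define_trace.h>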
  11. 10 April 2009, 1 commit
    • F
      tracing/lockdep: report the time waited for a lock · 2062501a
      Committed by Frederic Weisbecker
      While trying to optimize the new reiserfs lock that replaces
      the BKL, I found the lock tracing very useful, but it lacks
      something important for performance (and latency) instrumentation:
      the time a task waits for a lock.
      
      That's what this patch implements:
      
        bash-4816  [000]   202.652815: lock_contended: lock_contended: &sb->s_type->i_mutex_key
        bash-4816  [000]   202.652819: lock_acquired: &rq->lock (0.000 us)
       <...>-4787  [000]   202.652825: lock_acquired: &rq->lock (0.000 us)
       <...>-4787  [000]   202.652829: lock_acquired: &rq->lock (0.000 us)
        bash-4816  [000]   202.652833: lock_acquired: &sb->s_type->i_mutex_key (16.005 us)
      
      As shown above, the lock_acquired entry is followed by the time
      the task has been waiting for the lock. Usually, a lock_contended
      entry is followed shortly afterwards by a lock_acquired entry with a
      non-zero waited time.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1238975373-15739-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2062501a
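      A rough sketch of where the reported time comes from; the field and function names here are illustrative, not the exact diff. The contention hook stamps the start of the wait, and the acquired hook passes the delta to the tracepoint.

       /* in the lock_contended() path: remember when the wait started */
       hlock->waittime_stamp = now();

       /* in the lock_acquired() path: hand the waited time to the tracepoint */
       waittime = now() - hlock->waittime_stamp;
       trace_lock_acquired(lock, ip, waittime);   /* shows up as "(16.005 us)" above */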
  12. 31 March 2009, 2 commits
  13. 05 March 2009, 4 commits
    • D
      lockdep: remove duplicate CONFIG_DEBUG_LOCKDEP definitions · 03d78913
      Committed by David Rientjes
      Impact: cleanup
      
      The atomic debug modifiers are already defined in
      kernel/lockdep_internals.h.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <alpine.DEB.2.00.0903050222160.30401@chino.kir.corp.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      03d78913
    • P
      tracing: add lockdep tracepoints for lock acquire/release · efed792d
      Committed by Peter Zijlstra
      Augment the traces with lock names when lockdep is available:
      
       1)               |  down_read_trylock() {
       1)               |    _spin_lock_irqsave() {
       1)               |      /* lock_acquire: &sem->wait_lock */
       1)   4.201 us    |    }
       1)               |    _spin_unlock_irqrestore() {
       1)               |      /* lock_release: &sem->wait_lock */
       1)   3.523 us    |    }
       1)               |  /* lock_acquire: try read &mm->mmap_sem */
       1) + 13.386 us   |  }
       1)   1.635 us    |  find_vma();
       1)               |  handle_mm_fault() {
       1)               |    __do_fault() {
       1)               |      filemap_fault() {
       1)               |        find_lock_page() {
       1)               |          find_get_page() {
       1)               |            /* lock_acquire: read rcu_read_lock */
       1)               |            /* lock_release: rcu_read_lock */
       1)   5.697 us    |          }
       1)   8.158 us    |        }
       1) + 11.079 us   |      }
       1)               |      _spin_lock() {
       1)               |        /* lock_acquire: __pte_lockptr(page) */
       1)   3.949 us    |      }
       1)   1.460 us    |      page_add_file_rmap();
       1)               |      _spin_unlock() {
       1)               |        /* lock_release: __pte_lockptr(page) */
       1)   3.115 us    |      }
       1)               |      unlock_page() {
       1)   1.421 us    |        page_waitqueue();
       1)   1.220 us    |        __wake_up_bit();
       1)   6.519 us    |      }
       1) + 34.328 us   |    }
       1) + 37.452 us   |  }
       1)               |  up_read() {
       1)               |  /* lock_release: &mm->mmap_sem */
       1)               |    _spin_lock_irqsave() {
       1)               |      /* lock_acquire: &sem->wait_lock */
       1)   3.865 us    |    }
       1)               |    _spin_unlock_irqrestore() {
       1)               |      /* lock_release: &sem->wait_lock */
       1)   8.562 us    |    }
       1) + 17.370 us   |  }
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Török Edwin <edwintorok@gmail.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1236166375.5330.7209.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      efed792d
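      Roughly, the annotations above come from tracepoints fired inside lockdep's own entry points, so every lock name lockdep knows about becomes visible to the tracer. A sketch of the placement (treat the tracepoint argument lists as illustrative):

       void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
                         int trylock, int read, int check,
                         struct lockdep_map *nest_lock, unsigned long ip)
       {
               trace_lock_acquire(lock, subclass, trylock, read, check, nest_lock, ip);
               /* ... existing lockdep bookkeeping ... */
       }

       void lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
       {
               trace_lock_release(lock, nested, ip);
               /* ... existing lockdep bookkeeping ... */
       }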
    • P
      lockdep: remove extra "irq" string · 26575e28
      Committed by Peter Zijlstra
      Impact: clarify lockdep printk text
      
      print_irq_inversion_bug() gets handed state strings of the form
      
        "HARDIRQ", "SOFTIRQ", "RECLAIM_FS"
      
      and appends "-irq-{un,}safe" to them, which is either redundant for *IRQ or
      confusing in the RECLAIM_FS case.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1236175192.5330.7585.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      26575e28
    • P
      lockdep: fix incorrect state name · 1c21f14e
      Committed by Peter Zijlstra
      In the recent mark_lock_irq() rework, a bug snuck in that would report
      the state of write locks causing an irq inversion under a read lock as
      a read lock.
      
      Fix this by masking the read bit of the state when validating write
      dependencies.
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1236172646.5330.7450.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1c21f14e
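      A minimal sketch of the masking described, assuming the read variant of a usage bit is encoded in the low bit; the LOCK_USAGE_READ_MASK name is an assumption, not necessarily the identifier used by this patch.

       #define LOCK_USAGE_READ_MASK 1      /* low bit set => the _READ variant */

       /* when validating a write dependency, use the write state,
        * not its read variant */
       excl_bit = exclusive_bit(new_bit) & ~LOCK_USAGE_READ_MASK;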
  14. 15 February 2009, 8 commits