1. 14 Feb, 2010 (1 commit)
  2. 10 Feb, 2010 (1 commit)
  3. 04 Feb, 2010 (2 commits)
  4. 03 Feb, 2010 (6 commits)
  5. 02 Feb, 2010 (1 commit)
    • tracing: Fix circular deadlock in stack trace · 4f48f8b7
      Committed by Lai Jiangshan
      When we cat <debugfs>/tracing/stack_trace, we may hit a circular lock dependency:
      sys_read()
        t_start()
           arch_spin_lock(&max_stack_lock);
      
        t_show()
           seq_printf(), vsnprintf() .... /* they are all trace-able,
             when they are traced, max_stack_lock may be required again. */
      
      The following script can trigger this circular deadlock very easily:
      #!/bin/bash
      
      echo 1 > /proc/sys/kernel/stack_tracer_enabled
      
      mount -t debugfs xxx /mnt > /dev/null 2>&1
      
      (
      # make check_stack() zealous to require max_stack_lock
      for ((; ;))
      {
      	echo 1 > /mnt/tracing/stack_max_size
      }
      ) &
      
      for ((; ;))
      {
      	cat /mnt/tracing/stack_trace > /dev/null
      }
      
      To fix this bug, we increment the per-cpu trace_active counter before
      taking the lock.
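      A minimal sketch of the pattern, assuming hypothetical wrapper names
      (stack_trace_iter_lock/unlock) and the generic per-cpu accessors; it is
      not the verbatim upstream diff. Raising trace_active before taking
      max_stack_lock makes check_stack() bail out on this CPU, so the traced
      seq_printf()/vsnprintf() path can never re-acquire the lock:

      static DEFINE_PER_CPU(int, trace_active);
      static arch_spinlock_t max_stack_lock =
              (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;

      static void stack_trace_iter_lock(void)
      {
              local_irq_disable();
              /* check_stack() sees trace_active != 0 and returns early */
              __this_cpu_inc(trace_active);
              arch_spin_lock(&max_stack_lock);
      }

      static void stack_trace_iter_unlock(void)
      {
              arch_spin_unlock(&max_stack_lock);
              __this_cpu_dec(trace_active);
              local_irq_enable();
      }
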
      Reported-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      LKML-Reference: <4B67D4F9.9080905@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  6. 01 Feb, 2010 (1 commit)
    • softlockup: Add sched_clock_tick() to avoid kernel warning on kgdb resume · d6ad3e28
      Committed by Jason Wessel
      When CONFIG_HAVE_UNSTABLE_SCHED_CLOCK is set, sched_clock() gets
      the time from hardware such as the TSC on x86. In this
      configuration, kgdb will report a softlockup warning message when
      resuming from or detaching from a debug session.
      
      Sequence of events in the problem case:
      
       1) "cpu sched clock" and "hardware time" are at 100 sec prior
          to a call to kgdb_handle_exception()
      
       2) Debugger waits in kgdb_handle_exception() for 80 sec and on
          exit the following is called ...  touch_softlockup_watchdog() -->
          __raw_get_cpu_var(touch_timestamp) = 0;
      
       3) "cpu sched clock" = 100s (it was not updated, because the
          interrupt was disabled in kgdb) but the "hardware time" = 180 sec
      
       4) The first timer interrupt after resuming from
          kgdb_handle_exception updates the watchdog from the "cpu sched clock"
      
      update_process_times() {
          ...
          run_local_timers()
            --> softlockup_tick()
            --> check (touch_timestamp == 0)   /* "YES" here: we set
                "touch_timestamp = 0" at kgdb */
            --> __touch_softlockup_watchdog() ***(A)
                --> resets "touch_timestamp" to "get_timestamp()"
                    (here "touch_timestamp" will still be set to 100s)
          ...
          scheduler_tick() ***(B)
            --> sched_clock_tick()
                (updates "cpu sched clock" to "hardware time" = 180s)
          ...
      }
      
       5) The Second timer interrupt handler appears to have a large
          jump and trips the softlockup warning.
      
      update_process_times() {
          ...
          run_local_timers()
            --> softlockup_tick()
            --> "cpu sched clock" - "touch_timestamp" = 180s - 100s > 60s
            --> printk "soft lockup error messages"
          ...
      }
      
      Note: ***(A) resets "touch_timestamp" to "get_timestamp(this_cpu)".
      
      Why is "touch_timestamp" 100 sec, instead of 180 sec?
      
      When CONFIG_HAVE_UNSTABLE_SCHED_CLOCK is set, the call trace of
      get_timestamp() is:
      
      get_timestamp(this_cpu)
       -->cpu_clock(this_cpu)
       -->sched_clock_cpu(this_cpu)
       -->__update_sched_clock(sched_clock_data, now)
      
      The __update_sched_clock() function uses the GTOD tick value to
      create a window to normalize the "now" values.  So if "now"
      value is too big for sched_clock_data, it will be ignored.
      
      The fix is to invoke sched_clock_tick() to update "cpu sched
      clock" in order to recover from this state.  This is done by
      introducing the function touch_softlockup_watchdog_sync(). This
      allows kgdb to request that the sched clock be updated when the
      watchdog thread runs for the first time after a resume from kgdb.
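      A simplified sketch of that hook, assuming the per-cpu flag name
      softlock_touch_sync and a helper softlockup_check_sync() invented here
      for illustration (the real patch folds this check into
      softlockup_tick()):

      static DEFINE_PER_CPU(bool, softlock_touch_sync);
      static DEFINE_PER_CPU(unsigned long, touch_timestamp);

      /* Called by kgdb on resume: also request a sched clock resync. */
      void touch_softlockup_watchdog_sync(void)
      {
              __raw_get_cpu_var(softlock_touch_sync) = true;
              __raw_get_cpu_var(touch_timestamp) = 0;
      }

      /* In softlockup_tick(): honor the flag before trusting timestamps. */
      static void softlockup_check_sync(int this_cpu)
      {
              if (per_cpu(touch_timestamp, this_cpu) == 0) {
                      if (unlikely(per_cpu(softlock_touch_sync, this_cpu))) {
                              per_cpu(softlock_touch_sync, this_cpu) = false;
                              /* pull "cpu sched clock" up to hardware time */
                              sched_clock_tick();
                      }
                      __touch_softlockup_watchdog();
              }
      }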
      
      [yong.zhang0@gmail.com: Use per cpu instead of an array]
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      Signed-off-by: Dongdong Deng <Dongdong.Deng@windriver.com>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Cc: peterz@infradead.org
      LKML-Reference: <1264631124-4837-2-git-send-email-jason.wessel@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 30 Jan, 2010 (2 commits)
    • perf, hw_breakpoint, kgdb: Do not take mutex for kernel debugger · 5352ae63
      Committed by Jason Wessel
      This patch fixes a functional regression where the kernel
      debugger and the perf API did not nicely share hw breakpoint
      reservations.
      
      The kernel debugger cannot use any mutex_lock() calls because it
      can start the kernel running from an invalid context.
      
      A mutex-free version of the reservation API had to be created
      so that the kernel debugger can safely update hw breakpoint
      reservations.
      
      The possibility for a breakpoint reservation to be concurrently
      processed at the time that kgdb interrupts the system is
      improbable. Should this corner case occur, the end user is
      warned, and the kernel debugger will prohibit updating the
      hardware breakpoint reservations.
      
      Any time the kernel debugger reserves a hardware breakpoint it
      will be a system wide reservation.
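      The mutex-free entry points look roughly like the following sketch
      (the function and mutex names mirror the reservation code described
      above, but treat the exact bodies as an approximation rather than the
      verbatim patch):

      /* Reservation for the kernel debugger: if the reservation mutex is
       * already held, refuse instead of sleeping, since kgdb may have
       * stopped the kernel in a context where mutex_lock() is invalid. */
      int dbg_reserve_bp_slot(struct perf_event *bp)
      {
              if (mutex_is_locked(&nr_bp_mutex))
                      return -1;

              return __reserve_bp_slot(bp);
      }

      int dbg_release_bp_slot(struct perf_event *bp)
      {
              if (mutex_is_locked(&nr_bp_mutex))
                      return -1;

              __release_bp_slot(bp);
              return 0;
      }
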
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: torvalds@linux-foundation.org
      LKML-Reference: <1264719883-7285-3-git-send-email-jason.wessel@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, hw_breakpoints, kgdb: Fix kgdb to use hw_breakpoint API · cc096749
      Committed by Jason Wessel
      In the 2.6.33 kernel, the hw_breakpoint API is now used for the
      performance event counters.  The hw_breakpoint_handler() now
      consumes the hw breakpoints that were previously set by kgdb
      arch specific code.  In order for kgdb to work in conjunction
      with this core API change, kgdb must use some of the low level
      functions of the hw_breakpoint API to install, uninstall, and
      deal with hw breakpoint reservations.
      
      The kgdb core required a change to call kgdb_disable_hw_debug
      anytime a slave cpu enters kgdb_wait() in order to keep all the
      hw breakpoints in sync as well as to prevent hitting a hw
      breakpoint while kgdb is active.
      
      During the architecture specific initialization of kgdb, it will
      pre-allocate 4 disabled (struct perf_event **) structures.  Kgdb
      will use these to manage the capabilities for the 4 hw
      breakpoint registers, per cpu.  Right now the hw_breakpoint API
      does not have a way to ask how many breakpoints are available
      on each CPU, so it is possible that the install of a breakpoint
      might fail when kgdb restores the system to the run state.  The
      intent of this patch is to first get the basic functionality of
      hw breakpoints working and leave it to the person debugging the
      kernel to understand what hw breakpoints are in use and what
      restrictions have been imposed as a result.  Breakpoint
      constraints will be dealt with in a future patch.
      
      While atomic, the x86 specific kgdb code will call
      arch_uninstall_hw_breakpoint() and arch_install_hw_breakpoint()
      to manage the cpu specific hw breakpoints.
      
      The net result of these changes allows kgdb to use the same pool
      of hw_breakpoints that is used by the perf event API, but
      neither knows about future reservations for the available hw
      breakpoint slots.
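      A condensed sketch of how the per-cpu events are used (the breakinfo
      bookkeeping and the kgdb_correct_hw_break() shape are simplified
      assumptions; arch_install_hw_breakpoint()/arch_uninstall_hw_breakpoint()
      are the low-level API calls referred to above):

      #define HBP_NUM 4

      static struct {
              int enabled;
              struct perf_event **pev;  /* pre-allocated, disabled, per cpu */
      } breakinfo[HBP_NUM];

      /* Runs while the system is stopped in kgdb (atomic context): bring
       * this CPU's debug registers in line with breakinfo[]. */
      static void kgdb_correct_hw_break(void)
      {
              int i, cpu = raw_smp_processor_id();

              for (i = 0; i < HBP_NUM; i++) {
                      struct perf_event *bp = *per_cpu_ptr(breakinfo[i].pev, cpu);

                      if (breakinfo[i].enabled && bp->attr.disabled) {
                              if (!arch_install_hw_breakpoint(bp))
                                      bp->attr.disabled = 0;
                      } else if (!breakinfo[i].enabled && !bp->attr.disabled) {
                              arch_uninstall_hw_breakpoint(bp);
                              bp->attr.disabled = 1;
                      }
              }
      }
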
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: torvalds@linux-foundation.org
      LKML-Reference: <1264719883-7285-2-git-send-email-jason.wessel@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 28 Jan, 2010 (3 commits)
  9. 27 Jan, 2010 (4 commits)
    • lockdep: Fix check_usage_backwards() error message · 48d50674
      Committed by Oleg Nesterov
      Lockdep has found the real bug, but the output doesn't look right to me:
      
      > =========================================================
      > [ INFO: possible irq lock inversion dependency detected ]
      > 2.6.33-rc5 #77
      > ---------------------------------------------------------
      > emacs/1609 just changed the state of lock:
      >  (&(&tty->ctrl_lock)->rlock){+.....}, at: [<ffffffff8127c648>] tty_fasync+0xe8/0x190
      > but this lock took another, HARDIRQ-unsafe lock in the past:
      >  (&(&sighand->siglock)->rlock){-.....}
      
      "HARDIRQ-unsafe" and "this lock took another" looks wrong, afaics.
      
      >   ... key      at: [<ffffffff81c054a4>] __key.46539+0x0/0x8
      >   ... acquired at:
      >    [<ffffffff81089af6>] __lock_acquire+0x1056/0x15a0
      >    [<ffffffff8108a0df>] lock_acquire+0x9f/0x120
      >    [<ffffffff81423012>] _raw_spin_lock_irqsave+0x52/0x90
      >    [<ffffffff8127c1be>] __proc_set_tty+0x3e/0x150
      >    [<ffffffff8127e01d>] tty_open+0x51d/0x5e0
      
      The stack-trace shows that this lock (ctrl_lock) was taken under
      ->siglock (which is hopefully irq-safe).
      
      This is a clear typo in check_usage_backwards(): we tell the fancy
      printing routine that we are going forwards.
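      In sketch form, the forwards flag selects which sentence the report
      prints, and check_usage_backwards() was passing the forwards value by
      mistake (the exact strings below are approximated from the lockdep
      output quoted above, not copied from the source):

      /* Illustrative only: 'forwards' picks the wording of the report. */
      static void print_irq_inversion_msg(int forwards, const char *irqclass)
      {
              if (forwards)
                      printk("but this lock took another, %s-unsafe lock in the past:\n",
                             irqclass);
              else
                      printk("but this lock was taken by another, %s-safe lock in the past:\n",
                             irqclass);
      }

      /* The fix: check_usage_backwards() now passes forwards = 0. */
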
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100126181641.GA10460@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/documentation: Cover new frame pointer semantics · 03688970
      Committed by Mike Frysinger
      Update the graph tracer examples to cover the new frame pointer semantics
      (in terms of passing it along).  Move the HAVE_FUNCTION_GRAPH_FP_TEST docs
      out of the Kconfig, into the right place, and expand on the details.
      Signed-off-by: Mike Frysinger <vapier@gentoo.org>
      LKML-Reference: <1264165967-18938-1-git-send-email-vapier@gentoo.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: Check for end of page in iterator · 3c05d748
      Committed by Steven Rostedt
      If the iterator comes to an empty page for some reason, or if
      the page is emptied by a consuming read, the iterator code currently
      does not check whether the iterator is past the contents, and may
      return a false entry.
      
      This patch adds a check to the ring buffer iterator to test if the
      current page has been completely read and sets the iterator to the
      next page if necessary.
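      A sketch of the check, using a hypothetical helper name and the ring
      buffer's internal accessors (rb_page_size() for the bytes committed to
      a page, rb_inc_iter() to step to the next page):

      /* Skip pages the iterator has fully consumed before peeking, instead
       * of returning a false entry from past the page contents. */
      static void rb_iter_skip_empty_pages(struct ring_buffer_iter *iter)
      {
              struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;

              while (iter->head_page != cpu_buffer->commit_page &&
                     iter->head >= rb_page_size(iter->head_page))
                      rb_inc_iter(iter);
      }
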
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: Check if ring buffer iterator has stale data · 492a74f4
      Committed by Steven Rostedt
      Usually, reads of the ring buffer are performed by a single task.
      There are two types of reads from the ring buffer.
      
      One is a consuming read which will consume the entry that was read
      and the next read will be the entry that follows.
      
      The other is an iterator that will let the user read the contents of
      the ring buffer without modifying it. When an iterator is allocated,
      writes to the ring buffer are disabled to protect the iterator.
      
      The problem exists when consuming reads happen while an iterator is
      allocated. Specifically, the kind of read that swaps out an entire
      page (used by splice) and replaces it with a new one. If the iterator
      is on the page that is swapped out, then the next read may read
      from this swapped out page and return garbage.
      
      This patch adds a check when reading the iterator to make sure that
      the iterator contents are still valid. If a consuming read has taken
      place, the iterator is reset.
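      A sketch of the validity test (the cache_* field names follow the
      description above but are assumptions here): the iterator caches the
      consuming-read state when it is reset, and a later mismatch means a
      consuming read has occurred.

      /* Cached in rb_iter_reset():
       *   iter->cache_read        = cpu_buffer->read;
       *   iter->cache_reader_page = cpu_buffer->reader_page;
       */
      static void rb_iter_check_stale(struct ring_buffer_iter *iter)
      {
              struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;

              /* A consuming read swapped the reader page out from under us;
               * the iterator may point at recycled memory, so start over. */
              if (unlikely(iter->cache_read != cpu_buffer->read ||
                           iter->cache_reader_page != cpu_buffer->reader_page))
                      rb_iter_reset(iter);
      }
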
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  10. 26 Jan, 2010 (2 commits)
    • clocksource: Prevent potential kgdb deadlock · 7b7422a5
      Committed by Thomas Gleixner
      commit 0f8e8ef7 (clocksource: Simplify clocksource watchdog resume
      logic) introduced a potential kgdb deadlock. When the kernel is
      stopped by kgdb inside code which holds watchdog_lock, kgdb
      deadlocks in clocksource_resume_watchdog().
      
      clocksource_resume_watchdog() is called from kgdb via
      clocksource_touch_watchdog() to keep the clocksource watchdog from
      marking TSC unstable after the kernel has been stopped.
      
      Solve this by replacing the spin_lock with a spin_trylock and simply
      returning when the lock is held. Not resetting the watchdog might result in
      TSC becoming marked unstable, but that's an acceptable penalty for
      using kgdb.
      
      The timekeeping is anyway easily screwed up by kgdb when the system
      uses either jiffies or a clock source which wraps in short intervals
      (e.g. pm_timer wraps about every 4.6s), so we really do not have to
      worry about that occasional TSC marked unstable side effect.
      
      The second caller of clocksource_resume_watchdog() is
      clocksource_resume(). The trylock is safe here as well because the
      system is UP at this point, interrupts are disabled and nothing else
      can hold watchdog_lock.
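      In sketch form (close to, though not necessarily verbatim, the
      resulting helper):

      static inline void clocksource_resume_watchdog(void)
      {
              unsigned long flags;

              /* Trylock: if kgdb stopped the kernel while watchdog_lock was
               * held, give up instead of deadlocking. Worst case the TSC is
               * marked unstable, which is acceptable when using kgdb. */
              if (!spin_trylock_irqsave(&watchdog_lock, flags))
                      return;

              clocksource_reset_watchdog();
              spin_unlock_irqrestore(&watchdog_lock, flags);
      }
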
      Reported-by: Jason Wessel <jason.wessel@windriver.com>
      LKML-Reference: <1264480000-6997-4-git-send-email-jason.wessel@windriver.com>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • tracing: Prevent kernel oops with corrupted buffer · 74bf4076
      Committed by Steven Rostedt
      If the contents of the ftrace ring buffer get corrupted and the trace
      file is read, it could cause a kernel oops (usually just killing the user
      task thread). This is caused by the checking of the pid in the buffer.
      If the pid is negative, it still references the cmdline cache array,
      which could point to an invalid address.
      
      The simple fix is to test for negative PIDs.
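      A sketch of the guard in trace_find_cmdline() (the placeholder comm
      string for a corrupted entry is an assumption):

      void trace_find_cmdline(int pid, char comm[])
      {
              if (pid == 0) {
                      strcpy(comm, "<idle>");
                      return;
              }

              /* A corrupted buffer can hand us a negative pid; never use it
               * to index the cmdline cache array. */
              if (unlikely(pid < 0)) {
                      strcpy(comm, "<XXX>");
                      return;
              }

              /* ... normal cmdline cache lookup follows ... */
      }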
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 22 Jan, 2010 (1 commit)
    • sched: Fix fork vs hotplug vs cpuset namespaces · fabf318e
      Committed by Peter Zijlstra
      There are a number of issues:
      
      1) TASK_WAKING vs cgroup_clone (cpusets)
      
      copy_process():
      
        sched_fork()
          child->state = TASK_WAKING; /* waiting for wake_up_new_task() */
        if (current->nsproxy != p->nsproxy)
           ns_cgroup_clone()
             cgroup_clone()
               mutex_lock(inode->i_mutex)
               mutex_lock(cgroup_mutex)
               cgroup_attach_task()
      	   ss->can_attach()
                 ss->attach() [ -> cpuset_attach() ]
                   cpuset_attach_task()
                     set_cpus_allowed_ptr();
                       while (child->state == TASK_WAKING)
                         cpu_relax();
      will deadlock the system.
      
      
      2) cgroup_clone (cpusets) vs copy_process
      
      So even if the above would work we still have:
      
      copy_process():
      
        if (current->nsproxy != p->nsproxy)
           ns_cgroup_clone()
             cgroup_clone()
               mutex_lock(inode->i_mutex)
               mutex_lock(cgroup_mutex)
               cgroup_attach_task()
      	   ss->can_attach()
                 ss->attach() [ -> cpuset_attach() ]
                   cpuset_attach_task()
                     set_cpus_allowed_ptr();
        ...
      
        p->cpus_allowed = current->cpus_allowed
      
      over-writing the modified cpus_allowed.
      
      
      3) fork() vs hotplug
      
        if we unplug the child's cpu after the sanity check, when the child
        has been attached to the task_list but before wake_up_new_task(), the
        shit will hit the fan.
      
      Solve all these issues by moving fork cpu selection into
      wake_up_new_task().
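      Conceptually (a rough sketch only; locking details and the exact
      select_task_rq() signature of that kernel version are glossed over),
      sched_fork() stops choosing a CPU and wake_up_new_task() picks it once
      the cpuset/cgroup attach and any hotplug events have settled:

      void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
      {
              unsigned long flags;
              struct rq *rq;
              int cpu;

              /* The child's cpus_allowed is final here, so the pick cannot
               * be overwritten or race with cpuset_attach_task(). */
              cpu = select_task_rq(p, SD_BALANCE_FORK, 0);
              set_task_cpu(p, cpu);

              rq = task_rq_lock(p, &flags);
              /* ... activate the task and check for preemption ... */
              task_rq_unlock(rq, &flags);
      }
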
      Reported-by: Serge E. Hallyn <serue@us.ibm.com>
      Tested-by: Serge E. Hallyn <serue@us.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1264106190.4283.1314.camel@laptop>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  12. 21 Jan, 2010 (4 commits)
  13. 18 Jan, 2010 (1 commit)
  14. 17 Jan, 2010 (5 commits)
  15. 15 Jan, 2010 (6 commits)