1. 14 Sep 2008 (3 commits)
• timers: fix itimer/many thread hang, fix · 430b5294
Committed by Ingo Molnar
      fix:
      
       kernel/fork.c:843: error: ‘struct signal_struct’ has no member named ‘sum_sched_runtime’
       kernel/irq/handle.c:117: warning: ‘sparse_irq_lock’ defined but not used
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• timers: fix itimer/many thread hang · f06febc9
Committed by Frank Mayhar
      Overview
      
      This patch reworks the handling of POSIX CPU timers, including the
      ITIMER_PROF, ITIMER_VIRT timers and rlimit handling.  It was put together
      with the help of Roland McGrath, the owner and original writer of this code.
      
      The problem we ran into, and the reason for this rework, has to do with using
      a profiling timer in a process with a large number of threads.  It appears
      that the performance of the old implementation of run_posix_cpu_timers() was
      at least O(n*3) (where "n" is the number of threads in a process) or worse.
      Everything is fine with an increasing number of threads until the time taken
      for that routine to run becomes the same as or greater than the tick time, at
      which point things degrade rather quickly.
      
      This patch fixes bug 9906, "Weird hang with NPTL and SIGPROF."
      
      Code Changes
      
      This rework corrects the implementation of run_posix_cpu_timers() to make it
      run in constant time for a particular machine.  (Performance may vary between
      one machine and another depending upon whether the kernel is built as single-
      or multiprocessor and, in the latter case, depending upon the number of
      running processors.)  To do this, at each tick we now update fields in
      signal_struct as well as task_struct.  The run_posix_cpu_timers() function
      uses those fields to make its decisions.
      
      We define a new structure, "task_cputime," to contain user, system and
      scheduler times and use these in appropriate places:
      
      struct task_cputime {
      	cputime_t utime;
      	cputime_t stime;
      	unsigned long long sum_exec_runtime;
      };
      
      This is included in the structure "thread_group_cputime," which is a new
      substructure of signal_struct and which varies for uniprocessor versus
      multiprocessor kernels.  For uniprocessor kernels, it uses "task_cputime" as
      a simple substructure, while for multiprocessor kernels it is a pointer:
      
      struct thread_group_cputime {
      	struct task_cputime totals;
      };
      
      struct thread_group_cputime {
      	struct task_cputime *totals;
      };
      
      We also add a new task_cputime substructure directly to signal_struct, to
      cache the earliest expiration of process-wide timers, and task_cputime also
      replaces the it_*_expires fields of task_struct (used for earliest expiration
      of thread timers).  The "thread_group_cputime" structure contains process-wide
      timers that are updated via account_user_time() and friends.  In the non-SMP
      case the structure is a simple aggregator; unfortunately in the SMP case that
      simplicity was not achievable due to cache-line contention between CPUs (in
      one measured case performance was actually _worse_ on a 16-cpu system than
      the same test on a 4-cpu system, due to this contention).  For SMP, the
      thread_group_cputime counters are maintained as a per-cpu structure allocated
      using alloc_percpu().  The timer functions update only the timer field in
      the structure corresponding to the running CPU, obtained using per_cpu_ptr().
      
      We define a set of inline functions in sched.h that we use to maintain the
      thread_group_cputime structure and hide the differences between UP and SMP
      implementations from the rest of the kernel.  The thread_group_cputime_init()
      function initializes the thread_group_cputime structure for the given task.
thread_group_cputime_alloc() is a no-op for UP; for SMP it calls the
      out-of-line function thread_group_cputime_alloc_smp() to allocate and fill
      in the per-cpu structures and fields.  The thread_group_cputime_free()
      function, also a no-op for UP, in SMP frees the per-cpu structures.  The
      thread_group_cputime_clone_thread() function (also a UP no-op) for SMP calls
      thread_group_cputime_alloc() if the per-cpu structures haven't yet been
      allocated.  The thread_group_cputime() function fills the task_cputime
      structure it is passed with the contents of the thread_group_cputime fields;
in UP it's that simple, but in SMP it must also safely check that tsk->signal
is non-NULL (if it is NULL, it just uses the appropriate fields of task_struct)
and, if not, sums the per-cpu values for each online CPU.  Finally, the three
      functions account_group_user_time(), account_group_system_time() and
      account_group_exec_runtime() are used by timer functions to update the
      respective fields of the thread_group_cputime structure.
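
To make the SMP bookkeeping concrete, here is a small userspace analogy
(a sketch only: NR_CPUS, the static array and the function names are
illustrative stand-ins for the alloc_percpu()/per_cpu_ptr() machinery).
Writers touch only their own CPU's slot and a reader sums the slots,
which is the shape of the update and snapshot paths described above:

#include <stdio.h>

#define NR_CPUS 4	/* illustrative stand-in for the per-cpu allocation */

struct task_cputime {
	unsigned long long utime;
	unsigned long long stime;
	unsigned long long sum_exec_runtime;
};

static struct task_cputime totals[NR_CPUS];	/* one slot per CPU */

/* a tick updates only the running CPU's slot: no cache-line contention */
static void account_group_user(int cpu, unsigned long long delta)
{
	totals[cpu].utime += delta;
}

/* a reader snapshots the process-wide total by summing every CPU's slot */
static unsigned long long group_utime(void)
{
	unsigned long long sum = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += totals[cpu].utime;
	return sum;
}

int main(void)
{
	account_group_user(0, 1000);
	account_group_user(2, 3000);
	printf("group utime: %llu\n", group_utime());	/* prints 4000 */
	return 0;
}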
      
      Non-SMP operation is trivial and will not be mentioned further.
      
      The per-cpu structure is always allocated when a task creates its first new
      thread, via a call to thread_group_cputime_clone_thread() from copy_signal().
      It is freed at process exit via a call to thread_group_cputime_free() from
      cleanup_signal().
      
All functions that formerly summed utime/stime/sum_sched_runtime values from
all threads in the thread group now use thread_group_cputime() to
      snapshot the values in the thread_group_cputime structure or the values in
      the task structure itself if the per-cpu structure hasn't been allocated.
      
      Finally, the code in kernel/posix-cpu-timers.c has changed quite a bit.
      The run_posix_cpu_timers() function has been split into a fast path and a
      slow path; the former safely checks whether there are any expired thread
      timers and, if not, just returns, while the slow path does the heavy lifting.
      With the dedicated thread group fields, timers are no longer "rebalanced" and
the process_timer_rebalance() function and related code have gone away.  All
      summing loops are gone and all code that used them now uses the
      thread_group_cputime() inline.  When process-wide timers are set, the new
      task_cputime structure in signal_struct is used to cache the earliest
      expiration; this is checked in the fast path.
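
With the cached expirations in place, the fast-path test reduces to a
handful of compares, independent of the number of threads.  A userspace
sketch of the idea (the struct mirrors task_cputime above; treating a
zero field as "no timer armed" is an assumption of this sketch):

#include <stdbool.h>
#include <stdio.h>

struct task_cputime {
	unsigned long long utime;
	unsigned long long stime;
	unsigned long long sum_exec_runtime;
};

/* fast path: compare a snapshot of the accumulated times against the
 * cached earliest expirations; zero means no timer of that kind armed */
static bool task_cputime_expired(const struct task_cputime *sample,
				 const struct task_cputime *expires)
{
	if (expires->utime && sample->utime >= expires->utime)
		return true;
	if (expires->stime && sample->stime >= expires->stime)
		return true;
	if (expires->sum_exec_runtime &&
	    sample->sum_exec_runtime >= expires->sum_exec_runtime)
		return true;
	return false;
}

int main(void)
{
	struct task_cputime sample  = { 500, 100, 900 };
	struct task_cputime expires = { 1000, 0, 0 };	/* only a utime timer armed */

	printf("expired: %d\n", task_cputime_expired(&sample, &expires));	/* 0 */
	return 0;
}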
      
      Performance
      
      The fix appears not to add significant overhead to existing operations.  It
      generally performs the same as the current code except in two cases, one in
      which it performs slightly worse (Case 5 below) and one in which it performs
      very significantly better (Case 2 below).  Overall it's a wash except in those
      two cases.
      
      I've since done somewhat more involved testing on a dual-core Opteron system.
      
      Case 1: With no itimer running, for a test with 100,000 threads, the fixed
      	kernel took 1428.5 seconds, 513 seconds more than the unfixed system,
      	all of which was spent in the system.  There were twice as many
      	voluntary context switches with the fix as without it.
      
      Case 2: With an itimer running at .01 second ticks and 4000 threads (the most
      	an unmodified kernel can handle), the fixed kernel ran the test in
      	eight percent of the time (5.8 seconds as opposed to 70 seconds) and
      	had better tick accuracy (.012 seconds per tick as opposed to .023
      	seconds per tick).
      
      Case 3: A 4000-thread test with an initial timer tick of .01 second and an
      	interval of 10,000 seconds (i.e. a timer that ticks only once) had
      	very nearly the same performance in both cases:  6.3 seconds elapsed
      	for the fixed kernel versus 5.5 seconds for the unfixed kernel.
      
      With fewer threads (eight in these tests), the Case 1 test ran in essentially
      the same time on both the modified and unmodified kernels (5.2 seconds versus
      5.8 seconds).  The Case 2 test ran in about the same time as well, 5.9 seconds
      versus 5.4 seconds but again with much better tick accuracy, .013 seconds per
      tick versus .025 seconds per tick for the unmodified kernel.
      
      Since the fix affected the rlimit code, I also tested soft and hard CPU limits.
      
      Case 4: With a hard CPU limit of 20 seconds and eight threads (and an itimer
      	running), the modified kernel was very slightly favored in that while
      	it killed the process in 19.997 seconds of CPU time (5.002 seconds of
      	wall time), only .003 seconds of that was system time, the rest was
      	user time.  The unmodified kernel killed the process in 20.001 seconds
      	of CPU (5.014 seconds of wall time) of which .016 seconds was system
      	time.  Really, though, the results were too close to call.  The results
      	were essentially the same with no itimer running.
      
      Case 5: With a soft limit of 20 seconds and a hard limit of 2000 seconds
      	(where the hard limit would never be reached) and an itimer running,
      	the modified kernel exhibited worse tick accuracy than the unmodified
      	kernel: .050 seconds/tick versus .028 seconds/tick.  Otherwise,
      	performance was almost indistinguishable.  With no itimer running this
      	test exhibited virtually identical behavior and times in both cases.
      
In times past I did some limited performance testing.  Those results are below.
      
      On a four-cpu Opteron system without this fix, a sixteen-thread test executed
      in 3569.991 seconds, of which user was 3568.435s and system was 1.556s.  On
      the same system with the fix, user and elapsed time were about the same, but
system time dropped to 0.007 seconds.  Performance with eight, four and one
thread was comparable.  Interestingly, the timer ticks with the fix seemed
more accurate:  The sixteen-thread test with the fix received 149543 ticks
for 0.024 seconds per tick, while the same test without the fix received
58720 ticks for 0.061 seconds per tick.  Both cases were configured for an
interval of
      0.01 seconds.  Again, the other tests were comparable.  Each thread in this
      test computed the primes up to 25,000,000.
      
      I also did a test with a large number of threads, 100,000 threads, which is
      impossible without the fix.  In this case each thread computed the primes only
      up to 10,000 (to make the runtime manageable).  System time dominated, at
      1546.968 seconds out of a total 2176.906 seconds (giving a user time of
      629.938s).  It received 147651 ticks for 0.015 seconds per tick, still quite
      accurate.  There is obviously no comparable test without the fix.
Signed-off-by: Frank Mayhar <fmayhar@google.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• cpuset: avoid changing cpuset's cpus when -errno returned · 4e74339a
Committed by Li Zefan
      After the patch:
      
      commit 0b2f630a
      Author: Miao Xie <miaox@cn.fujitsu.com>
      Date:   Fri Jul 25 01:47:21 2008 -0700
      
          cpusets: restructure the function update_cpumask() and update_nodemask()
      
It might happen that 'echo 0 > /cpuset/sub/cpus' returned failure while
'cpus' had nevertheless been changed, because cpus was updated before the
call to heap_init(), which may return -ENOMEM.

This patch restores the original behavior.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Paul Menage <menage@google.com>
Cc: Paul Jackson <pj@sgi.com>
Cc: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2. 10 Sep 2008 (1 commit)
• clockevents: remove WARN_ON which was used to gather information · 61c22c34
Committed by Thomas Gleixner
      The issue of the endless reprogramming loop due to a too small
      min_delta_ns was fixed with the previous updates of the clock events
      code, but we had no information about the spread of this problem. I
      added a WARN_ON to get automated information via kerneloops.org and to
      get some direct reports, which allowed me to analyse the affected
      machines.
      
      The WARN_ON has served its purpose and would be annoying for a release
      kernel. Remove it and just keep the information about the increase of
      the min_delta_ns value.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
3. 07 Sep 2008 (1 commit)
• sched: arch_reinit_sched_domains() must destroy domains to force rebuild · dfb512ec
Committed by Max Krasnyansky
      What I realized recently is that calling rebuild_sched_domains() in
      arch_reinit_sched_domains() by itself is not enough when cpusets are enabled.
The partition_sched_domains() code tries to avoid unnecessary domain rebuilds
and will not actually rebuild anything if the new domain masks match the old ones.
      
      What this means is that doing
           echo 1 > /sys/devices/system/cpu/sched_mc_power_savings
on a system with cpusets enabled will not take effect until something changes
      in the cpuset setup (ie new sets created or deleted).
      
This patch restores the correct behaviour: domains must be rebuilt in
order for the MC power-saving flags to take effect.
      
Tested on a quad-core Core2 box with both CONFIG_CPUSETS and !CONFIG_CPUSETS.
Also tested on a dual-core Core2 laptop. Lockdep is happy and things are working
      as expected.
Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
Tested-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
4. 06 Sep 2008 (4 commits)
• ntp: fix calculation of the next jiffie to trigger RTC sync · 4ff4b9e1
Committed by Maciej W. Rozycki
      We have a bug in the calculation of the next jiffie to trigger the RTC
      synchronisation.  The aim here is to run sync_cmos_clock() as close as
      possible to the middle of a second.  Which means we want this function to
      be called less than or equal to half a jiffie away from when now.tv_nsec
      equals 5e8 (500000000).
      
If this is not the case for a given call to the function, then instead
of updating the RTC we calculate the offset in nanoseconds to the
      next point in time where now.tv_nsec will be equal 5e8.  The calculated
      offset is then converted to jiffies as these are the unit used by the
      timer.
      
However, timespec_to_jiffies() used here uses a ceil()-type rounding mode,
      where the resulting value is rounded up.  As a result the range of
      now.tv_nsec when the timer will trigger is from 5e8 to 5e8 + TICK_NSEC
      rather than the desired 5e8 - TICK_NSEC / 2 to 5e8 + TICK_NSEC / 2.
      
As a result, if for example sync_cmos_clock() happens to be called at a
time when now.tv_nsec is between 5e8 + TICK_NSEC / 2 and 5e8 + TICK_NSEC,
it will simply be rescheduled HZ jiffies later, falling into the same
range of now.tv_nsec again.  Similarly for cases offset by an integer
multiple of TICK_NSEC.
      
      This change addresses the problem by subtracting TICK_NSEC / 2 from the
      nanosecond offset to the next point in time where now.tv_nsec will be
      equal 5e8, effectively shifting the following rounding in
      timespec_to_jiffies() so that it produces a rounded-to-nearest result.
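
A standalone sketch of the corrected arithmetic (HZ, the helper name and
the handling of the rescheduling path are illustrative; the ceiling
division only imitates what timespec_to_jiffies() does):

#include <stdio.h>

#define NSEC_PER_SEC 1000000000L
#define HZ 250				/* illustrative */
#define TICK_NSEC (NSEC_PER_SEC / HZ)

/* jiffies until now.tv_nsec next hits 5e8, with the fix applied:
 * subtracting half a tick before the ceil()-style conversion turns
 * the rounding into round-to-nearest */
static long next_sync_jiffies(long now_nsec)
{
	long offset_ns = (NSEC_PER_SEC / 2 - now_nsec + NSEC_PER_SEC) % NSEC_PER_SEC;

	offset_ns -= TICK_NSEC / 2;	/* the fix */
	if (offset_ns < 0)
		offset_ns += NSEC_PER_SEC;

	return (offset_ns + TICK_NSEC - 1) / TICK_NSEC;	/* ceil() */
}

int main(void)
{
	/* called three quarters of a tick past the middle of the second */
	long j = next_sync_jiffies(NSEC_PER_SEC / 2 + TICK_NSEC * 3 / 4);

	printf("fire in %ld jiffies\n", j);	/* lands within half a tick of 5e8 */
	return 0;
}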
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• clockevents: broadcast fixup possible waiters · 7300711e
Committed by Thomas Gleixner
Until the C1E patches arrived there were no users of periodic broadcast
before switching to oneshot mode. Now we need to trigger a possible
waiter for a periodic broadcast when switching to oneshot mode.
Otherwise we can starve them forever.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
• sched: fix process time monotonicity · 49048622
Committed by Balbir Singh
      Spencer reported a problem where utime and stime were going negative despite
      the fixes in commit b27f03d4. The suspected
reason for the problem is that signal_struct maintains its own utime and
stime (of exited tasks); these are not updated using the new task_utime()
routine, hence sig->utime can go backwards and cause the same problem
to occur (sig->utime adds tsk->utime, not task_utime()). This patch
fixes the problem.
      
      TODO: using max(task->prev_utime, derived utime) works for now, but a more
      generic solution is to implement cputime_max() and use the cputime_gt()
      function for comparison.
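
A minimal sketch of that interim guard (the struct and the derive_utime()
stub are hypothetical; only the clamping is the point):

#include <stdio.h>

struct task_model {
	unsigned long prev_utime;	/* last value handed out */
};

/* stand-in for task_utime(); successive results may jitter downward */
static unsigned long derive_utime(int call)
{
	return call == 0 ? 105 : 103;
}

/* monotonicity guard: never report less than previously reported */
static unsigned long task_utime_monotonic(struct task_model *t, int call)
{
	unsigned long utime = derive_utime(call);

	if (utime < t->prev_utime)
		utime = t->prev_utime;
	t->prev_utime = utime;
	return utime;
}

int main(void)
{
	struct task_model t = { 0 };

	printf("%lu\n", task_utime_monotonic(&t, 0));	/* 105 */
	printf("%lu\n", task_utime_monotonic(&t, 1));	/* still 105, not 103 */
	return 0;
}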
      
      Reported-by: spencer@bluehost.com
Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• sched_clock: fix NOHZ interaction · 56c7426b
Committed by Peter Zijlstra
      If HLT stops the TSC, we'll fail to account idle time, thereby inflating the
      actual process times. Fix this by re-calibrating the clock against GTOD when
      leaving nohz mode.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
5. 05 Sep 2008 (6 commits)
• clockevents: prevent endless loop lockup · 1fb9b7d2
Committed by Thomas Gleixner
The C1E/HPET bug reports on AMDX2/RS690 systems were tracked down to a
too small value of the HPET minimum delta for programming an event.

The clockevents code needs to enforce an interrupt event on the clock event
device in some cases. The enforcement code was stupid and naive, as it just
added the minimum delta to the current time and tried to reprogram the device.
When the minimum delta is too small, this loops forever.

Add a sanity check. Allow reprogramming to fail 3 times, then print a warning
and double the minimum delta value to make sure that this does not happen again.
      Use the same function for both tick-oneshot and tick-broadcast code.
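
A toy model of the safeguard (the fake device and its thresholds are
invented for illustration; -ETIME handling and locking are omitted):

#include <stdio.h>

struct fake_dev {
	long long min_delta_ns;
	long long hw_min_ns;	/* the device silently needs this much lead time */
	int failures;
};

static int program_event(struct fake_dev *dev, long long delta_ns)
{
	return delta_ns >= dev->hw_min_ns ? 0 : -1;
}

/* allow reprogramming to fail three times, then warn and double
 * min_delta_ns, which guarantees the retry loop terminates */
static void force_reprogram(struct fake_dev *dev)
{
	for (;;) {
		if (program_event(dev, dev->min_delta_ns) == 0)
			return;
		if (++dev->failures % 3 == 0) {
			printf("min_delta_ns too small, doubling to %lld\n",
			       dev->min_delta_ns * 2);
			dev->min_delta_ns *= 2;
		}
	}
}

int main(void)
{
	struct fake_dev dev = { .min_delta_ns = 1000, .hw_min_ns = 30000 };

	force_reprogram(&dev);	/* terminates after a few doublings */
	return 0;
}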
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• clockevents: prevent multiple init/shutdown · 9c17bcda
Committed by Thomas Gleixner
While chasing the C1E/HPET bug reports I went through the clock events
code inch by inch and found that the broadcast device can be initialized
and shut down multiple times. Multiple shutdowns are not critical, but a
useless waste of time. Multiple initializations are simply broken. Another
      CPU might have the device in use already after the first initialization and
      the second init could just render it unusable again.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• clockevents: enforce reprogram in oneshot setup · 7205656a
Committed by Thomas Gleixner
In tick_setup_oneshot() we program the device to the given next_event,
but we do not check the return value. We need to force the programming
of the device so that the interrupt handling engine starts working.
Split out the reprogramming function from tick_program_event() and call
it with the device that was handed in to tick_setup_oneshot(). Set the
force argument so the device fires an interrupt.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• clockevents: prevent endless loop in periodic broadcast handler · d4496b39
Committed by Thomas Gleixner
The reprogramming of the periodic broadcast handler was broken when the
first programming returned -ETIME. The clockevents code stores the new
expiry value in the clock events device next_event field only when the
programmed time has not yet elapsed. The loop in question calculates the
new expiry value from the next_event value, which therefore never
increases.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• clockevents: prevent clockevent event_handler ending up handler_noop · 7c1e7689
Committed by Venkatesh Pallipadi
There is an ordering-related problem in the clockevents code, due to which
a clockevents_register_device() call made after the tickless/highres switch
will not work. The new clockevent ends up with clockevents_handle_noop as
      event handler, resulting in no timer activity.
      
      The problematic path seems to be
      
      * old device already has hrtimer_interrupt as the event_handler
      * new clockevent device registers with a higher rating
      * tick_check_new_device() is called
        * clockevents_exchange_device() gets called
          * old->event_handler is set to clockevents_handle_noop
        * tick_setup_device() is called for the new device
          * which sets new->event_handler using the old->event_handler which is noop.
      
Change the ordering so that the new device inherits the proper handler.
      
This is not an issue in the normal case, as most likely all the clockevent
devices are set up before the highres switch. But it can potentially affect
some corner case where HPET force-detect happens after the highres switch.
This was a problem with the HPET-in-MSI-mode code that we have been
experimenting with.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• forgotten refcount on sysctl root table · b380b0d4
Committed by Al Viro
      We should've set refcount on the root sysctl table; otherwise we'll blow
      up the first time we get down to zero dynamically registered sysctl
      tables.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6. 03 Sep 2008 (5 commits)
7. 02 Sep 2008 (1 commit)
8. 30 Aug 2008 (2 commits)
9. 29 Aug 2008 (1 commit)
10. 28 Aug 2008 (3 commits)
• sched: rt-bandwidth accounting fix · cc2991cf
Committed by Peter Zijlstra
This fixes an accounting bug where we would continue accumulating runtime
      even though the bandwidth control is disabled. This would lead to very long
      throttle periods once bandwidth control gets turned on again.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• sched: fix sched_rt_rq_enqueue() resched idle · f3ade837
Committed by John Blackwood
When sysctl_sched_rt_runtime is set to something other than -1 and the
CONFIG_RT_GROUP_SCHED kernel option is NOT enabled, we get into a state
where we see one or more CPUs idling forever even though there are
real-time tasks in their rt runqueue that are able to run (no longer
throttled).
      
      The sequence is:
      
      - A real-time task is running when the timer sets the rt runqueue
          to throttled, and the rt task is resched_task()ed and switched
          out, and idle is switched in since there are no non-rt tasks to
          run on that cpu.
      
      - Eventually the do_sched_rt_period_timer() runs and un-throttles
          the rt runqueue, but we just exit the timer interrupt and go back
          to executing the idle task in the idle loop forever.
      
      If we change the sched_rt_rq_enqueue() routine to use some of the code
      from the CONFIG_RT_GROUP_SCHED enabled version of this same routine and
      resched_task() the currently executing task (idle in our case) if it is
      a lower priority task than the higher rt task in the now un-throttled
      runqueue, the problem is no longer observed.
Signed-off-by: John Blackwood <john.blackwood@ccur.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• ftrace: disable tracing for suspend to ram · f42ac38c
Committed by Steven Rostedt
      I've been painstakingly debugging the issue with suspend to ram and
      ftraced. The 2.6.28 code does not have this issue, but since the mcount
      recording is not going to be in 27, this must be solved for the ftrace
      daemon version.
      
      The resume from suspend to ram would reboot because it was triple
      faulting. Debugging further, I found that calling the mcount function
      itself was not an issue, but it would fault when it incremented
preempt_count. preempt_count lives in the task's thread_info structure,
at the low memory address of the task's stack.  For some reason, it could not
      write to it. Resuming out of suspend to ram does quite a lot of funny
      tricks to get to work, so it is not surprising at all that simply doing a
      preempt_disable() would cause a fault.
      
      Thanks to Rafael for suggesting to add a "while (1);" to find the place in
      resuming that is causing the fault. I would place the loop somewhere in
      the code, compile and reboot and see if it would either reboot (hit the
      fault) or simply hang (hit the loop).  Doing this over and over again, I
      narrowed it down that it was happening in enable_nonboot_cpus.
      
      At this point, I found that it is easier to simply disable tracing around
the suspend code, instead of searching for the particular function that
cannot handle doing a preempt_disable().
      
      This patch disables the tracer as it suspends and reenables it on resume.
      
      I tested this patch on my Laptop, and it can resume fine with the patch.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
11. 27 Aug 2008 (2 commits)
• exit signals: use of uninitialized field notify_count · 2633f0e5
Committed by Steve VanDeBogart
      task->signal->notify_count is only initialized if
      task->signal->group_exit_task is not NULL.  Reorder a conditional so
      that uninitialised memory is not used.  Found by Valgrind.
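
The reorder amounts to letting && short-circuit.  A userspace sketch
(the struct is a stand-in for the relevant signal_struct fields and the
predicate is illustrative):

#include <stdbool.h>
#include <stdio.h>

struct signal_model {
	void *group_exit_task;	/* NULL unless a group exit is in progress */
	int notify_count;	/* only meaningful when group_exit_task is set */
};

/* test group_exit_task first, so the possibly-uninitialized
 * notify_count is never examined */
static bool exit_should_notify(const struct signal_model *sig)
{
	return sig->group_exit_task && sig->notify_count <= 0;
}

int main(void)
{
	struct signal_model sig = { .group_exit_task = NULL };

	printf("%d\n", exit_should_notify(&sig));	/* 0; notify_count not consulted */
	return 0;
}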
Signed-off-by: Steve VanDeBogart <vandebo-lkml@nerdbox.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• lockdep: fix invalid list_del_rcu in zap_class · 74870172
Committed by Zhu Yi
      The problem is found during iwlagn driver testing on
      v2.6.27-rc4-176-gb8e6c91c kernel, but it turns out to be a lockdep bug.
      In our testing, we frequently load and unload the iwlagn driver
      (>50 times). Then the MAX_STACK_TRACE_ENTRIES is reached (expected
      behaviour?). The error message with the call trace is as below.
      
      BUG: MAX_STACK_TRACE_ENTRIES too low!
      turning off the locking correctness validator.
      Pid: 4895, comm: iwlagn Not tainted 2.6.27-rc4 #13
      
      Call Trace:
       [<ffffffff81014aa1>] save_stack_trace+0x22/0x3e
       [<ffffffff8105390a>] save_trace+0x8b/0x91
       [<ffffffff81054e60>] mark_lock+0x1b0/0x8fa
       [<ffffffff81056f71>] __lock_acquire+0x5b9/0x716
       [<ffffffffa00d818a>] ieee80211_sta_work+0x0/0x6ea [mac80211]
       [<ffffffff81057120>] lock_acquire+0x52/0x6b
       [<ffffffff81045f0e>] run_workqueue+0x97/0x1ed
       [<ffffffff81045f5e>] run_workqueue+0xe7/0x1ed
       [<ffffffff81045f0e>] run_workqueue+0x97/0x1ed
       [<ffffffff81046ae4>] worker_thread+0xd8/0xe3
       [<ffffffff81049503>] autoremove_wake_function+0x0/0x2e
       [<ffffffff81046a0c>] worker_thread+0x0/0xe3
       [<ffffffff810493ec>] kthread+0x47/0x73
       [<ffffffff8128e3ab>] trace_hardirqs_on_thunk+0x3a/0x3f
       [<ffffffff8100cea9>] child_rip+0xa/0x11
       [<ffffffff8100c4df>] restore_args+0x0/0x30
       [<ffffffff810316e1>] finish_task_switch+0x0/0xcc
       [<ffffffff810493a5>] kthread+0x0/0x73
       [<ffffffff8100ce9f>] child_rip+0x0/0x11
      
Although the above is harmless, when the iwlagn module is removed
later, lockdep will trigger a kernel oops as below.
      
      BUG: unable to handle kernel NULL pointer dereference at
      0000000000000008
      IP: [<ffffffff810531e1>] zap_class+0x24/0x82
      PGD 73128067 PUD 7448c067 PMD 0
      Oops: 0002 [1] SMP
      CPU 0
      Modules linked in: rfcomm l2cap bluetooth autofs4 sunrpc
      nf_conntrack_ipv6 xt_state nf_conntrack xt_tcpudp ip6t_ipv6header
      ip6t_REJECT ip6table_filter ip6_tables x_tables ipv6 cpufreq_ondemand
      acpi_cpufreq dm_mirror dm_log dm_multipath dm_mod snd_hda_intel sr_mod
      snd_seq_dummy snd_seq_oss snd_seq_midi_event battery snd_seq
      snd_seq_device cdrom button snd_pcm_oss snd_mixer_oss snd_pcm
      snd_timer snd_page_alloc e1000e snd_hwdep sg iTCO_wdt
      iTCO_vendor_support ac pcspkr i2c_i801 i2c_core snd soundcore video
      output ata_piix ata_generic libata sd_mod scsi_mod ext3 jbd mbcache
      uhci_hcd ohci_hcd ehci_hcd [last unloaded: mac80211]
      Pid: 4941, comm: modprobe Not tainted 2.6.27-rc4 #10
      RIP: 0010:[<ffffffff810531e1>]  [<ffffffff810531e1>]
      zap_class+0x24/0x82
      RSP: 0000:ffff88007bcb3eb0  EFLAGS: 00010046
      RAX: 0000000000068ee8 RBX: ffffffff8192a0a0 RCX: 0000000000000000
      RDX: 0000000000000000 RSI: 0000000000001dfb RDI: ffffffff816e70b0
      RBP: ffffffffa00cd000 R08: ffffffff816818f8 R09: ffff88007c923558
      R10: ffffe20002ad2408 R11: ffffffff811028ec R12: ffffffff8192a0a0
      R13: 000000000002bd90 R14: 0000000000000000 R15: 0000000000000296
      FS:  00007f9d1cee56f0(0000) GS:ffffffff814a58c0(0000)
      knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 0000000000000008 CR3: 0000000073047000 CR4: 00000000000006e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process modprobe (pid: 4941, threadinfo ffff88007bcb2000, task
      ffff8800758d1fc0)
      Stack:  ffffffff81057376 0000000000000000 ffffffffa00f7b00
      0000000000000000
       0000000000000080 0000000000618278 00007fff24f16720 0000000000000000
       ffffffff8105d37a ffffffffa00f7b00 ffffffff8105d591 313132303863616d
      Call Trace:
       [<ffffffff81057376>] ? lockdep_free_key_range+0x61/0xf5
       [<ffffffff8105d37a>] ? free_module+0xd4/0xe4
       [<ffffffff8105d591>] ? sys_delete_module+0x1de/0x1f9
       [<ffffffff8106dbfa>] ? audit_syscall_entry+0x12d/0x160
       [<ffffffff8100be2b>] ? system_call_fastpath+0x16/0x1b
      
      Code: b2 00 01 00 00 00 c3 31 f6 49 c7 c0 10 8a 61 81 eb 32 49 39 38
      75 26 48 98 48 6b c0 38 48 8b 90 08 8a 61 81 48 8b 88 00 8a 61 81 <48>
      89 51 08 48 89 0a 48 c7 80 08 8a 61 81 00 02 20 00 48 ff c6
      RIP  [<ffffffff810531e1>] zap_class+0x24/0x82
       RSP <ffff88007bcb3eb0>
      CR2: 0000000000000008
      ---[ end trace a1297e0c4abb0f2e ]---
      
The root cause for this oops is in add_lock_to_list(): when
save_trace() fails because MAX_STACK_TRACE_ENTRIES has been reached,
entry->class is assigned but the entry is never added to any lock list.
      This makes the list_del_rcu() in zap_class() oops later when the
      module is unloaded. This patch fixes the problem by assigning
      entry->class after save_trace() returns success.
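
A compilable sketch of that reordering (list handling elided;
save_trace() is stubbed to fail, standing in for an exhausted
MAX_STACK_TRACE_ENTRIES pool):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct lock_list {
	void *class;	/* must stay untouched unless the entry is fully added */
};

/* stand-in for save_trace() with the stack-trace pool exhausted */
static bool save_trace(void)
{
	return false;
}

/* the fix: publish entry->class only after save_trace() succeeds, so a
 * half-initialized entry can never be seen by zap_class()'s list_del_rcu() */
static int add_lock_to_list(struct lock_list *entry, void *this_class)
{
	if (!save_trace())
		return 0;	/* entry left untouched and unlisted */

	entry->class = this_class;
	/* list_add_tail_rcu(&entry->entry, head) would follow here */
	return 1;
}

int main(void)
{
	struct lock_list entry = { NULL };

	printf("added: %d, class: %p\n",
	       add_lock_to_list(&entry, (void *)0x1), entry.class);
	return 0;
}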
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
12. 26 Aug 2008 (4 commits)
• lockstat: repair erroneous contention statistics · 04148b73
Committed by Joe Korty
      Fix bad contention counting in /proc/lock_stat.
      
/proc/lock_stat tries to gather per-ip contention
      statistics per-lock.  This was failing due to
      a garbage per-ip index selector being used.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• lockstat: fix numerical output rounding error · 2189459d
Committed by Joe Korty
      Fix rounding error in /proc/lock_stat numerical output.
      
      On occasion the two digit fractional part contains the three
      digit value '100'.  This is due to a bug in the rounding algorithm
      which pushes values in the range '95..99' to '100' rather than
      to '00' + an increment to the integer part.  For example,
      
      	- 123456.100      old display
      	+ 123457.00	  new display
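
A standalone illustration of the corrected rounding (the fixed-point
input format is invented for the example; the carry into the integer
part is the point):

#include <stdio.h>

/* round a value given in thousandths to two fractional digits, carrying
 * into the integer part when the fraction rounds up to 100, so that
 * 123456.995 prints as 123457.00 rather than 123456.100 */
static void print_rounded(unsigned long long thousandths)
{
	unsigned long long integer = thousandths / 1000;
	unsigned long long frac = (thousandths % 1000 + 5) / 10;

	if (frac == 100) {	/* the carry the old code was missing */
		integer++;
		frac = 0;
	}
	printf("%llu.%02llu\n", integer, frac);
}

int main(void)
{
	print_rounded(123456995ULL);	/* -> 123457.00 */
	print_rounded(123456100ULL);	/* -> 123456.10 */
	return 0;
}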
Signed-off-by: Ingo Molnar <mingo@elte.hu>
• smp: have smp_call_function_single() detect invalid CPUs · f73be6de
Committed by H. Peter Anvin
Have smp_call_function_single() detect invalid CPU indices and return
-ENXIO.  This function is already executed inside a
get_cpu()..put_cpu() pair, which locks out CPU removal, so rather than
have the higher layers do another layer of locking to guard against
unplugged CPUs, do the test here.
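
A simplified userspace model of the change (the online map and the
direct func(info) call are stand-ins; real cross-CPU delivery and the
get_cpu()/put_cpu() pinning are only indicated in comments):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4
static bool cpu_online_map[NR_CPUS] = { true, true, false, false };

static bool cpu_online(int cpu)
{
	return cpu >= 0 && cpu < NR_CPUS && cpu_online_map[cpu];
}

/* validate the target CPU inside the call, under the section that pins
 * hotplug, and return -ENXIO for bad CPUs instead of making every
 * caller take its own hotplug lock first */
static int smp_call_function_single(int cpu, void (*func)(void *), void *info)
{
	/* get_cpu() here would disable preemption / CPU removal */
	if (!cpu_online(cpu))
		return -ENXIO;
	func(info);	/* stand-in for "run func on that cpu" */
	/* put_cpu() */
	return 0;
}

static void hello(void *info)
{
	(void)info;
	printf("ran on requested cpu\n");
}

int main(void)
{
	printf("%d\n", smp_call_function_single(3, hello, NULL));	/* -ENXIO */
	return 0;
}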
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
• [module] Don't let gcc inline load_module() · ffb4ba76
Committed by Linus Torvalds
      'load_module()' is a complex function that contains all the ELF section
      logic, and inlining it is utterly insane.  But gcc will do it, simply
      because there is only one call-site.  As a result, all the stack space
      that is allocated for all the work to load the module will still be
      active when we actually call the module init sequence, and the deep call
      chain makes stack overflows happen.
      
      And stack overflows are really hard to debug, because they not only
      corrupt random pages below the stack, but also corrupt the thread_info
      structure that is allocated under the stack.
      
      In this case, Alan Brunelle reported some crazy oopses at bootup, after
      loading the processor module that ends up doing complex ACPI stuff and
      has quite a deep callchain.  This should fix it, and is the sane thing
      to do regardless.
      
      Cc: Alan D. Brunelle <Alan.Brunelle@hp.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
13. 25 Aug 2008 (1 commit)
• sched_clock: fix cpu_clock() · 354879bb
Committed by Peter Zijlstra
      This patch fixes 3 issues:
      
      a) it removes the dependency on jiffies, because jiffies are incremented
         by a single CPU, and the tick is not synchronized between CPUs. Therefore
         relying on it to calculate a window to clip whacky TSC values doesn't work
         as it can drift around.
      
   So instead use [GTOD, GTOD+TICK_NSEC) as the window (see the clamping
   sketch after this list).
      
      b) __update_sched_clock() did (roughly speaking):
      
         delta = sched_clock() - scd->tick_raw;
         clock += delta;
      
         Which gives exponential growth, instead of linear.
      
c) allows the sched_clock_cpu() value to wrap the u64 without breaking.
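
A sketch of the clamping from (a) (the names and the handling of the
interval edge are illustrative):

#include <stdio.h>

/* clip a raw sched_clock()-derived value into [gtod, gtod + tick_nsec)
 * so a whacky TSC cannot drag the per-cpu clock more than one tick
 * away from GTOD */
static unsigned long long clamp_clock(unsigned long long clock,
				      unsigned long long gtod,
				      unsigned long long tick_nsec)
{
	if (clock < gtod)
		return gtod;
	if (clock >= gtod + tick_nsec)
		return gtod + tick_nsec - 1;
	return clock;
}

int main(void)
{
	printf("%llu\n", clamp_clock(999ULL, 5000ULL, 1000ULL));	/* 5000 */
	printf("%llu\n", clamp_clock(9999ULL, 5000ULL, 1000ULL));	/* 5999 */
	printf("%llu\n", clamp_clock(5500ULL, 5000ULL, 1000ULL));	/* 5500 */
	return 0;
}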
      
The results are more reliable sched_clock() deltas:
      
                 before       after   sched_clock
      
      cpu_clock: 15750        51312   51488
      cpu_clock: 59719        51052   50947
      cpu_clock: 15879        51249   51061
      cpu_clock: 1            50933   51198
      cpu_clock: 1            50931   51039
      cpu_clock: 1            51093   50981
      cpu_clock: 1            51043   51040
      cpu_clock: 1            50959   50938
      cpu_clock: 1            50981   51011
      cpu_clock: 1            51364   51212
      cpu_clock: 1            51219   51273
      cpu_clock: 1            51389   51048
      cpu_clock: 1            51285   51611
      cpu_clock: 1            50964   51137
      cpu_clock: 1            50973   50968
      cpu_clock: 1            50967   50972
      cpu_clock: 1            58910   58485
      cpu_clock: 1            51082   51025
      cpu_clock: 1            50957   50958
      cpu_clock: 1            50958   50957
      cpu_clock: 1006128      51128   50971
      cpu_clock: 1            51107   51155
      cpu_clock: 1            51371   51081
      cpu_clock: 1            51104   51365
      cpu_clock: 1            51363   51309
      cpu_clock: 1            51107   51160
      cpu_clock: 1            51139   51100
      cpu_clock: 1            51216   51136
      cpu_clock: 1            51207   51215
      cpu_clock: 1            51087   51263
      cpu_clock: 1            51249   51177
      cpu_clock: 1            51519   51412
      cpu_clock: 1            51416   51255
      cpu_clock: 1            51591   51594
      cpu_clock: 1            50966   51374
      cpu_clock: 1            50966   50966
      cpu_clock: 1            51291   50948
      cpu_clock: 1            50973   50867
      cpu_clock: 1            50970   50970
      cpu_clock: 998306       50970   50971
      cpu_clock: 1            50971   50970
      cpu_clock: 1            50970   50970
      cpu_clock: 1            50971   50971
      cpu_clock: 1            50970   50970
      cpu_clock: 1            51351   50970
      cpu_clock: 1            50970   51352
      cpu_clock: 1            50971   50970
      cpu_clock: 1            50970   50970
      cpu_clock: 1            51321   50971
      cpu_clock: 1            50974   51324
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
14. 24 Aug 2008 (1 commit)
15. 21 Aug 2008 (4 commits)
16. 20 Aug 2008 (1 commit)