1. 11 Aug 2008 (1 commit)
  2. 31 Jul 2008 (1 commit)
    • sched clock: revert various sched_clock() changes · e4e4e534
      Ingo Molnar committed
      Found an interactivity problem on a quad core test-system - simple
      CPU loops would occasionally delay the system in an unacceptable way.
      
      After much debugging with Peter Zijlstra it turned out that the problem
      is caused by the string of sched_clock() changes - they caused the CPU
      clock to jump backwards a bit - which confuses the scheduler arithmetic
      (which is unsigned for performance reasons).
      
      So revert:
      
       # c300ba25: sched_clock: and multiplier for TSC to gtod drift
       # c0c87734: sched_clock: only update deltas with local reads.
       # af52a90a: sched_clock: stop maximum check on NO HZ
       # f7cce27f: sched_clock: widen the max and min time
      
      This solves the interactivity problems.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Mike Galbraith <efault@gmx.de>
      e4e4e534
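      The effect of a backwards clock jump on unsigned scheduler arithmetic is
      easy to demonstrate with a stand-alone sketch (illustrative userspace C,
      not kernel code):

      	#include <stdio.h>

      	int main(void)
      	{
      		unsigned long long prev  = 1000000100ULL; /* last clock reading (ns)     */
      		unsigned long long now   = 1000000000ULL; /* clock jumped back by 100 ns */
      		unsigned long long delta = now - prev;    /* wraps to ~1.8e19, not -100  */

      		printf("delta = %llu ns\n", delta);
      		return 0;
      	}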
  3. 28 Jul 2008 (2 commits)
  4. 27 Jul 2008 (4 commits)
  5. 26 Jul 2008 (9 commits)
    • per-task-delay-accounting: add memory reclaim delay · 873b4771
      Keika Kobayashi committed
      Sometimes application responses become bad under heavy memory load,
      because applications take extra time to reclaim memory.  Statistics on
      how long memory reclaim takes are useful for analysing memory usage.

      This patch adds memory reclaim accounting to per-task delay accounting,
      recording the time spent in do_try_to_free_pages().
      
      For example:
      
      - When the system is under low memory load,
        memory reclaim may not occur.
      
      $ free
                   total       used       free     shared    buffers     cached
      Mem:       8197800    1577300    6620500          0       4808    1516724
      -/+ buffers/cache:      55768    8142032
      Swap:     16386292          0   16386292
      
      $ vmstat 1
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0      0 5069748  10612 3014060    0    0     0     0    3   26  0  0 100  0
       0  0      0 5069748  10612 3014060    0    0     0     0    4   22  0  0 100  0
       0  0      0 5069748  10612 3014060    0    0     0     0    3   18  0  0 100  0
      
      Measure the time of a tar command.
      
      $ ls -s test.dat
      1501472 test.dat
      
      $ time tar cvf test.tar test.dat
      real    0m13.388s
      user    0m0.116s
      sys     0m5.304s
      
      $ ./delayget -d -p <pid>
      CPU             count     real total  virtual total    delay total
                        428     5528345500     5477116080       62749891
      IO              count    delay total
                        338     8078977189
      SWAP            count    delay total
                          0              0
      RECLAIM         count    delay total
                          0              0
      
      - When the system is under heavy memory load,
        memory reclaim may occur.
      
      $ vmstat 1
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0 7159032  49724   1812   3012    0    0     0     0    3   24  0  0 100  0
       0  0 7159032  49724   1812   3012    0    0     0     0    4   24  0  0 100  0
       0  0 7159032  49848   1812   3012    0    0     0     0    3   22  0  0 100  0
      
      In this case, one process uses more than 8 GB of memory
      through malloc() and memset().
      
      $ time tar cvf test.tar test.dat
      real    1m38.563s        <-  increased by 85 sec
      user    0m0.140s
      sys     0m7.060s
      
      $ ./delayget -d -p <pid>
      CPU             count     real total  virtual total    delay total
                       9021     7140446250     7315277975      923201824
      IO              count    delay total
                       8965    90466349669
      SWAP            count    delay total
                          3       21036367
      RECLAIM         count    delay total
                        740    61011951153
      
      In the latter case, the RECLAIM value is increasing, so taskstats can
      show how much memory reclaim influences the turnaround time (TAT).
      Signed-off-by: Keika Kobayashi <kobayashi.kk@ncos.nec.co.jp>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujistu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      873b4771
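      Conceptually, the direct reclaim path is bracketed with delay-accounting
      hooks; a simplified sketch (assumed helper names following the existing
      delayacct_*_start/end pattern, surrounding code abridged):

      	delayacct_freepages_start();     /* note reclaim start for current task */
      	nr_reclaimed = do_try_to_free_pages(zonelist, sc);
      	delayacct_freepages_end();       /* accumulate the elapsed reclaim delay */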
    • task IO accounting: provide distinct tgid/tid I/O statistics · 297c5d92
      Andrea Righi committed
      Report per-thread I/O statistics in /proc/pid/task/tid/io and aggregate
      parent I/O statistics in /proc/pid/io.  This approach follows the same
      model used to account per-process and per-thread CPU times.
      
      As a practical application, this makes it possible, for example, to
      quickly find the top I/O consumer when a process spawns many child
      threads that perform the actual I/O work, because the aggregated I/O
      statistics can always be found in /proc/pid/io.
      
      [ Oleg Nesterov points out that we should check that the task is still
        alive before we iterate over the threads, but also says that we can do
        that fixup on top of this later.  - Linus ]
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
      Cc: Matt Heaton <matt@hostmonster.com>
      Cc: Shailabh Nagar <nagar@watson.ibm.com>
      Acked-by-with-comments: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      297c5d92
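      A minimal userspace sketch of how the two files can be compared to spot a
      heavy writer thread (assumes a kernel with this patch; write_bytes is one
      of the fields already exposed by the /proc/<pid>/io format):

      	#include <stdio.h>
      	#include <string.h>

      	/* return the write_bytes counter from an io file, or -1 on error */
      	static long long io_write_bytes(const char *path)
      	{
      		char key[64];
      		long long val, write_bytes = -1;
      		FILE *f = fopen(path, "r");

      		if (!f)
      			return -1;
      		while (fscanf(f, " %63[^:]: %lld", key, &val) == 2)
      			if (!strcmp(key, "write_bytes"))
      				write_bytes = val;
      		fclose(f);
      		return write_bytes;
      	}

      	int main(int argc, char **argv)
      	{
      		char path[128];

      		if (argc < 3)	/* usage: ./iocmp <pid> <tid> */
      			return 1;
      		snprintf(path, sizeof(path), "/proc/%s/io", argv[1]);
      		printf("whole process : %lld bytes written\n", io_write_bytes(path));
      		snprintf(path, sizeof(path), "/proc/%s/task/%s/io", argv[1], argv[2]);
      		printf("single thread : %lld bytes written\n", io_write_bytes(path));
      		return 0;
      	}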
    • accounting: account for user time when updating memory integrals · 49b5cf34
      Jonathan Lim committed
      Adapt acct_update_integrals() to include user time when calculating the
      time difference.  The units of acct_rss_mem1 and acct_vm_mem1 are also
      changed from pages-jiffies to pages-usecs, to avoid calling
      jiffies_to_usecs() in xacct_add_tsk(), which might overflow.
      Signed-off-by: Jonathan Lim <jlim@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      49b5cf34
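      The accumulation this describes, as a rough sketch (cputime_to_usecs() is
      a hypothetical stand-in for the conversion, which the patch performs via
      jiffies_to_timeval() precisely to sidestep the overflow):

      	/* sketch: advance the rss/vm integrals in pages-usecs using both
      	 * system and user time (simplified; the real code also guards
      	 * against a NULL mm and remembers the last expansion time) */
      	u64 delta = cputime_to_usecs((tsk->stime + tsk->utime) - tsk->acct_timexpd);

      	tsk->acct_rss_mem1 += delta * get_mm_rss(tsk->mm);  /* pages-usecs */
      	tsk->acct_vm_mem1  += delta * tsk->mm->total_vm;    /* pages-usecs */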
    • pidns: remove find_task_by_pid, unused for a long time · dbda0de5
      Pavel Emelyanov committed
      It seems to me that it was a mistake marking this function as deprecated
      and scheduling it for removal, rather than resolutely removing it after
      the last caller's death.
      
      Anyway - better late than never.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dbda0de5
    • pidns: remove now unused find_pid function. · e49859e7
      Pavel Emelyanov committed
      Its only remaining user so far was kill_proc, which has been removed, so
      drop this call (invalid in a namespaced world) too.

      And of course - erase all references to it from comments.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e49859e7
    • pidns: remove now unused kill_proc function · 19b0cfcc
      Pavel Emelyanov committed
      This function operated on a pid_t to kill a task, which is no longer valid
      in a containerized system.
      
      It has finally lost all its users and we can safely remove it from the
      tree.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      19b0cfcc
    • kill PF_BORROWED_MM in favour of PF_KTHREAD · 246bb0b1
      Oleg Nesterov committed
      Kill PF_BORROWED_MM.  Change use_mm/unuse_mm to not play with ->flags, and
      do s/PF_BORROWED_MM/PF_KTHREAD/ for a couple of other users.
      
      No functional changes yet.  But this allows us to do further
      fixes/cleanups.
      
      oom_kill/ptrace/etc often check "p->mm != NULL" to filter out kthreads,
      but this is wrong because of use_mm().  The problem with PF_BORROWED_MM
      is that we need task_lock() to avoid races.  With this patch we can check
      PF_KTHREAD directly, or use a simple lockless helper:
      
      	/* The result must not be dereferenced !!! */
      	struct mm_struct *__get_task_mm(struct task_struct *tsk)
      	{
      		if (tsk->flags & PF_KTHREAD)
      			return NULL;
      		return tsk->mm;
      	}
      
      Note also ecard_task().  It runs with ->mm != NULL, but it is a kernel
      thread without PF_BORROWED_MM.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Roland McGrath <roland@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      246bb0b1
    • introduce PF_KTHREAD flag · 7b34e428
      Oleg Nesterov committed
      Introduce the new PF_KTHREAD flag to mark kernel threads.  It is set by
      INIT_TASK() and copied to forked children (we could set it in kthreadd()
      along with PF_NOFREEZE instead).
      
      daemonize() was changed as well.  In that case testing of PF_KTHREAD is
      racy, but daemonize() is hopeless anyway.
      
      This flag is cleared in do_execve(), before search_binary_handler().
      This is probably not the best place; we could do it in exec_mmap() or in
      start_thread(), or clear it along with PF_FORKNOEXEC.  But I think this
      doesn't matter in practice, and if do_execve() fails the kthread should
      die soon anyway.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Roland McGrath <roland@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7b34e428
    • ptrace: give more respect to SIGKILL · 364d3c13
      Oleg Nesterov committed
      ptrace_stop() has some complicated checks to prevent the scheduling in the
      TASK_TRACED state with the pending SIGKILL, but these checks are racy, and
      they depend on arch_ptrace_stop_needed().
      
      This patch assumes that the traced task should die asap if it was killed
      by SIGKILL; in that case schedule()->signal_pending_state() has no reason
      to ignore the TASK_WAKEKILL part of TASK_TRACED, and we can kill this
      nasty special case.
      
      Note: do_exit()->ptrace_notify() is special; the killed task can already
      have dequeued SIGKILL at this point - another indication that
      fatal_signal_pending() is not exactly right.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: Roland McGrath <roland@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      364d3c13
  6. 25 Jul 2008 (1 commit)
  7. 18 Jul 2008 (1 commit)
  8. 17 Jul 2008 (2 commits)
    • ptrace children revamp · f470021a
      Roland McGrath committed
      ptrace no longer fiddles with the children/sibling links, and the
      old ptrace_children list is gone.  Now ptrace, whether of one's own
      children or another's via PTRACE_ATTACH, just uses the new ptraced
      list instead.
      
      There should be no user-visible difference that matters.  The only
      change is the order in which do_wait() sees multiple stopped
      children and stopped ptrace attachees.  Since wait_task_stopped()
      was changed earlier so it no longer reorders the children list, we
      already know this won't cause any new problems.
      Signed-off-by: Roland McGrath <roland@redhat.com>
      f470021a
    • Freezer: Introduce PF_FREEZER_NOSIG · ebb12db5
      Rafael J. Wysocki committed
      The freezer currently attempts to distinguish kernel threads from user
      space tasks by checking whether their mm pointer is unset, and it does
      not send fake signals to kernel threads.  However, there are kernel
      threads, mostly related to networking, that behave like user space tasks
      and may want to be sent a fake signal in order to be frozen.
      
      Introduce the new process flag PF_FREEZER_NOSIG that will be set
      by default for all kernel threads and make the freezer only send
      fake signals to the tasks having PF_FREEZER_NOSIG unset.  Provide
      the set_freezable_with_signal() function to be called by the kernel
      threads that want to be sent a fake signal for freezing.
      
      This patch should not change the freezer's observable behavior.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Pavel Machek <pavel@suse.cz>
      Signed-off-by: Len Brown <len.brown@intel.com>
      ebb12db5
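      A sketch of how an affected kernel thread would opt back in to
      signal-based freezing (set_freezable_with_signal() is the function this
      patch provides; the loop shape itself is illustrative):

      	static int netlike_kthread(void *unused)
      	{
      		set_freezable_with_signal();	/* clear PF_FREEZER_NOSIG for this task */

      		while (!kthread_should_stop()) {
      			try_to_freeze();	/* honour the freezer's fake signal */
      			/* ... wait for and process work ... */
      		}
      		return 0;
      	}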
  9. 11 Jul 2008 (1 commit)
    • sched_clock: stop maximum check on NO HZ · af52a90a
      Steven Rostedt committed
      Working with ftrace I would get large jumps of 11 millisecs or more with
      the clock tracer.  This killed the latency timings of ftrace and also
      caused the irqsoff self tests to fail.
      
      What was happening is that with NO_HZ, idle would stop the jiffy counter,
      and before the jiffy counter was updated sched_clock would have a bad
      jiffies delta to compare with the gtod in the maximum check.
      
      The jiffies would stop and the last sched_tick would record the last
      gtod.  On wakeup, the sched clock update would take the gtod + delta
      jiffies (which would be zero) and compare it to the TSC.  The TSC would
      have correctly (with a stable TSC) moved forward several jiffies.  But
      because jiffies had not been updated yet, the clock would be prevented
      from moving forward because it would appear that the TSC had jumped too
      far ahead.
      
      The clock would then virtually stop until jiffies was updated.  The next
      sched clock update would then see that the clock was very much behind,
      since the delta jiffies was now correct, and this would jump the clock
      forward by several jiffies.
      
      This caused ftrace to report several milliseconds of interrupts off
      latency at every resume from NO_HZ idle.
      
      This patch adds hooks into the nohz code to disable the checking of the
      maximum clock update when nohz is in effect. It resumes the max check
      when nohz has updated the jiffies again.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      af52a90a
  10. 27 Jun 2008 (3 commits)
  11. 24 Jun 2008 (1 commit)
    • sched: add new API sched_setscheduler_nocheck: add a flag to control access checks · 961ccddd
      Rusty Russell committed
      Hidehiro Kawai noticed that sched_setscheduler() can fail in
      stop_machine: it calls sched_setscheduler() from insmod, which can
      have CAP_SYS_MODULE without CAP_SYS_NICE.
      
      Two cases could have failed, so they are changed to sched_setscheduler_nocheck:
        kernel/softirq.c:cpu_callback()
      	- CPU hotplug callback
        kernel/stop_machine.c:__stop_machine_run()
      	- Called from various places, including modprobe()
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mm@kvack.org
      Cc: sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi OSHIMA <satoshi.oshima.fk@hitachi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      961ccddd
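      An in-kernel caller then looks roughly like this (the priority value and
      the task pointer p are illustrative):

      	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

      	/* no CAP_SYS_NICE / rlimit checks are applied for kernel callers */
      	sched_setscheduler_nocheck(p, SCHED_FIFO, &param);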
  12. 10 Jun 2008 (2 commits)
    • sched: prevent bound kthreads from changing cpus_allowed · 9985b0ba
      David Rientjes committed
      Kthreads that have called kthread_bind() are bound to specific cpus, so
      other tasks should not be able to change their cpus_allowed from under
      them.  Otherwise, it is possible to move kthreads, such as the migration
      or software watchdog threads, so that they no longer have access to the
      cpu they are supposed to work on.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Paul Jackson <pj@sgi.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9985b0ba
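      The resulting check in the affinity-setting path might look like this
      sketch (PF_THREAD_BOUND is an assumed name for the bound-kthread marker;
      the flag actually added by the patch may be spelled differently):

      	/* sketch: refuse to move a kthread that was pinned via kthread_bind() */
      	if ((p->flags & PF_THREAD_BOUND) && !cpus_equal(p->cpus_allowed, *new_mask))
      		return -EINVAL;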
    • sched: fix TASK_WAKEKILL vs SIGKILL race · 16882c1e
      Oleg Nesterov committed
      schedule() has the special "TASK_INTERRUPTIBLE && signal_pending()" case,
      which allows us to do

      	current->state = TASK_INTERRUPTIBLE;
      	schedule();

      without fear of sleeping with a pending signal.
      
      However, the code like
      
      	current->state = TASK_KILLABLE;
      	schedule();
      
      is not right: schedule() doesn't take TASK_WAKEKILL into account.  This
      means that mutex_lock_killable(), wait_for_completion_killable(),
      down_killable(), and schedule_timeout_killable() can miss SIGKILL (and,
      btw, a second SIGKILL has no effect).
      
      Introduce the new helper, signal_pending_state(), and change schedule()
      to use it.  Hopefully it will gain more users; that is why the task's
      state is passed separately.
      
      Note this "__TASK_STOPPED | __TASK_TRACED" check in signal_pending_state().
      This is needed to preserve the current behaviour (ptrace_notify). I hope
      this check will be removed soon, but this (afaics good) change needs the
      separate discussion.
      
      The fast path is "(state & (INTERRUPTIBLE | WAKEKILL)) + signal_pending(p)",
      basically the same that schedule() does now. However, this patch of course
      bloats schedule().
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      16882c1e
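      Based on the description above, the helper looks essentially like this
      (a sketch; the in-tree version may differ in detail):

      	static inline int signal_pending_state(long state, struct task_struct *p)
      	{
      		if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
      			return 0;
      		if (!signal_pending(p))
      			return 0;

      		/* the __TASK_STOPPED | __TASK_TRACED check noted above,
      		 * kept to preserve the current ptrace_notify() behaviour */
      		if (state & (__TASK_STOPPED | __TASK_TRACED))
      			return 0;

      		return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
      	}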
  13. 06 Jun 2008 (3 commits)
    • sched: fix cpupri hotplug support · 1f11eb6a
      Gregory Haskins committed
      The RT folks over at RedHat found an issue w.r.t. hotplug support which
      was traced to problems with the cpupri infrastructure in the scheduler:
      
      https://bugzilla.redhat.com/show_bug.cgi?id=449676
      
      This bug affects 23-rt12+, 24-rtX, 25-rtX, and sched-devel.  This patch
      applies to 25.4-rt4, though it should trivially apply to most cpupri enabled
      kernels mentioned above.
      
      It turned out that the issue was that offline cpus could get inadvertently
      registered with cpupri so that they were erroneously selected during
      migration decisions.  The end result would be an OOPS as the offline cpu
      had tasks routed to it.
      
      This patch generalizes the old join/leave domain interface into an
      online/offline interface, and adjusts the root-domain/hotplug code to
      utilize it.
      
      I was able to easily reproduce the issue prior to this patch, and am no
      longer able to reproduce it after this patch.  I can offline cpus
      indefinitely and everything seems to be in working order.
      
      Thanks to Arnaldo (acme), Thomas, and Peter for doing the legwork to point
      me in the right direction.  Also thank you to Peter for reviewing the
      early iterations of this patch.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1f11eb6a
    • sched: reorder task_struct to reduce padding on 64bit builds · c7aceaba
      Richard Kennedy committed
      This patch removes 24 bytes of padding and allows 1 extra object per
      slab on my Fedora-based config.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c7aceaba
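      The mechanism behind such savings is ordinary struct padding; an
      illustrative example (not the actual task_struct fields):

      	/* on a typical 64-bit ABI: 24 bytes (two 7-byte holes) vs 16 bytes */
      	struct padded    { char a; void *p; char b; };
      	struct reordered { void *p; char a; char b; };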
    • namespacecheck: more sched.c fixes · 554ec22f
      Ingo Molnar committed
      [ Stephen Rothwell <sfr@canb.auug.org.au>: build fix ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      554ec22f
  14. 29 May 2008 (1 commit)
  15. 27 May 2008 (1 commit)
  16. 25 May 2008 (2 commits)
  17. 24 May 2008 (5 commits)