1. 04 Dec 2008 (1 commit)
    • ftrace: graph of a single function · ea4e2bc4
      Committed by Steven Rostedt
      This patch adds the file:
      
         /debugfs/tracing/set_graph_function
      
      which can be used along with the function graph tracer.
      
      When this file is empty, the function graph tracer will act as
      usual. When the file has a function in it, the function graph
      tracer will only trace that function.
      
      For example:
      
       # echo blk_unplug > /debugfs/tracing/set_graph_function
       # cat /debugfs/tracing/trace
       [...]
       ------------------------------------------
       | 2)  make-19003  =>  kjournald-2219
       ------------------------------------------
      
       2)               |  blk_unplug() {
       2)               |    dm_unplug_all() {
       2)               |      dm_get_table() {
       2)      1.381 us |        _read_lock();
       2)      0.911 us |        dm_table_get();
       2)      1. 76 us |        _read_unlock();
       2) +   12.912 us |      }
       2)               |      dm_table_unplug_all() {
       2)               |        blk_unplug() {
       2)      0.778 us |          generic_unplug_device();
       2)      2.409 us |        }
       2)      5.992 us |      }
       2)      0.813 us |      dm_table_put();
       2) +   29. 90 us |    }
       2) +   34.532 us |  }
      
      You can add up to 32 functions into this file. Currently we limit it
      to 32, but this may change with later improvements.
      
      To add another function, use the append '>>':
      
        # echo sys_read >> /debugfs/tracing/set_graph_function
        # cat /debugfs/tracing/set_graph_function
        blk_unplug
        sys_read
      
      Using '>' will clear out the existing functions and write anew:
      
        # echo sys_write > /debug/tracing/set_graph_function
        # cat /debug/tracing/set_graph_function
        sys_write
      
      Note: if the function graph tracer is running while you do this, the
      short window between clearing the file and updating it will cause the
      graph to record all functions. This should not be an issue, because
      once the filter is set, only those functions will be recorded from
      then on. If you need to record only a particular function, set this
      file before starting the function graph tracer. This side effect may
      be corrected in the future.
      
      The set_graph_function file is similar to set_ftrace_filter, but it
      does not accept wildcards, nor does it allow more than one function
      to be set with a single write. There is no technical reason why this
      is the case; I just have not had the time to implement that yet.
      
      Note: dynamic ftrace must be enabled for this file to appear, because
      it uses the dynamic ftrace records to match function names to their
      mcount call sites.
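
      The filtering itself is conceptually simple; a minimal sketch of the
      idea (the names graph_funcs/graph_count/graph_addr_wanted are
      illustrative, not necessarily the kernel's own):

      /*
       * Hedged sketch of the set_graph_function idea, not the actual kernel
       * code: keep a small fixed array (up to 32 entries) of function
       * addresses resolved from the names written to the file, and only
       * start a graph trace when the called function is in that array (or
       * when the array is empty, meaning "trace everything").
       */
      #define GRAPH_MAX_FUNCS 32

      static unsigned long graph_funcs[GRAPH_MAX_FUNCS];
      static int graph_count;        /* 0 means no filter: trace everything */

      static int graph_addr_wanted(unsigned long addr)
      {
              int i;

              if (!graph_count)
                      return 1;
              for (i = 0; i < graph_count; i++)
                      if (graph_funcs[i] == addr)
                              return 1;
              return 0;
      }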
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ea4e2bc4
  2. 02 Dec 2008 (1 commit)
    • epoll: introduce resource usage limits · 7ef9964e
      Committed by Davide Libenzi
      It has been thought that the per-user file descriptor limit would also
      limit the resources that a normal user can request via the epoll
      interface.  Vegard Nossum reported a very simple program (a modified
      version attached) that allows a normal user to request a pretty large
      amount of kernel memory while staying well within its maximum number of
      fds.  To solve this problem, default limits are now imposed, and /proc
      based configuration has been introduced.  A new directory named
      /proc/sys/fs/epoll/ has been created, and inside it there are two
      configuration points:
      
        max_user_instances = Maximum number of devices - per user
      
        max_user_watches   = Maximum number of "watched" fds - per user
      
      The current default for "max_user_watches" limits the memory used by
      epoll to store "watches" to 1/32 of the low RAM.  For example, a 256MB
      32-bit machine will have "max_user_watches" set to roughly 90000.
      That should be enough not to break existing heavy epoll users.  The
      default value for "max_user_instances" is 128, which should also be
      enough.
      
      This also changes userspace behaviour, because a new error code
      (-ENOSPC) can now come out of EPOLL_CTL_ADD.  The EMFILE from
      epoll_create() was already listed, so that should be OK.
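
      For userspace, the visible change is the new failure mode of
      EPOLL_CTL_ADD; a minimal sketch of handling it (error message text and
      helper name are illustrative) could look like:

      #include <sys/epoll.h>
      #include <errno.h>
      #include <stdio.h>

      /* Sketch: add a watch and report when the per-user watch limit
       * (/proc/sys/fs/epoll/max_user_watches) has been hit via ENOSPC. */
      static int add_watch(int epfd, int fd)
      {
              struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };

              if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
                      if (errno == ENOSPC)
                              fprintf(stderr, "epoll watch limit reached for this user\n");
                      else
                              perror("epoll_ctl");
                      return -1;
              }
              return 0;
      }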
      
      [akpm@linux-foundation.org: use get_current_user()]
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: <stable@kernel.org>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Reported-by: Vegard Nossum <vegardno@ifi.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7ef9964e
  3. 26 Nov 2008 (3 commits)
  4. 23 Nov 2008 (1 commit)
  5. 18 Nov 2008 (1 commit)
    • tracing/function-return-tracer: add the overrun field · 0231022c
      Committed by Frederic Weisbecker
      Impact: help find a better depth of trace
      
      We decided to arbitrarily define the depth of the function return trace
      as 20. Perhaps this is not enough. To help find an optimal depth, we
      now measure the overrun: the number of functions that have been missed
      for the current thread. By default this is not displayed; a particular
      flag has to be set on the return tracer:

        echo overrun > /debug/tracing/trace_options

      The overrun will then be printed on the right.

      As the trace below shows, the current depth of 20 is not enough.
      
      update_wall_time+0x37f/0x8c0 -> update_xtime_cache (345 ns) (Overruns: 2838)
      update_wall_time+0x384/0x8c0 -> clocksource_get_next (1141 ns) (Overruns: 2838)
      do_timer+0x23/0x100 -> update_wall_time (3882 ns) (Overruns: 2838)
      tick_do_update_jiffies64+0xbf/0x160 -> do_timer (5339 ns) (Overruns: 2838)
      tick_sched_timer+0x6a/0xf0 -> tick_do_update_jiffies64 (7209 ns) (Overruns: 2838)
      vgacon_set_cursor_size+0x98/0x120 -> native_io_delay (2613 ns) (Overruns: 274)
      vgacon_cursor+0x16e/0x1d0 -> vgacon_set_cursor_size (33151 ns) (Overruns: 274)
      set_cursor+0x5f/0x80 -> vgacon_cursor (36432 ns) (Overruns: 274)
      con_flush_chars+0x34/0x40 -> set_cursor (38790 ns) (Overruns: 274)
      release_console_sem+0x1ec/0x230 -> up (721 ns) (Overruns: 274)
      release_console_sem+0x225/0x230 -> wake_up_klogd (316 ns) (Overruns: 274)
      con_flush_chars+0x39/0x40 -> release_console_sem (2996 ns) (Overruns: 274)
      con_write+0x22/0x30 -> con_flush_chars (46067 ns) (Overruns: 274)
      n_tty_write+0x1cc/0x360 -> con_write (292670 ns) (Overruns: 274)
      smp_apic_timer_interrupt+0x2a/0x90 -> native_apic_mem_write (330 ns) (Overruns: 274)
      irq_enter+0x17/0x70 -> idle_cpu (413 ns) (Overruns: 274)
      smp_apic_timer_interrupt+0x2f/0x90 -> irq_enter (1525 ns) (Overruns: 274)
      ktime_get_ts+0x40/0x70 -> getnstimeofday (465 ns) (Overruns: 274)
      ktime_get_ts+0x60/0x70 -> set_normalized_timespec (436 ns) (Overruns: 274)
      ktime_get+0x16/0x30 -> ktime_get_ts (2501 ns) (Overruns: 274)
      hrtimer_interrupt+0x77/0x1a0 -> ktime_get (3439 ns) (Overruns: 274)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0231022c
  6. 11 Nov 2008 (2 commits)
    • tracing, x86: add low level support for ftrace return tracing · caf4b323
      Committed by Frederic Weisbecker
      Impact: add infrastructure for function-return tracing
      
      Add low level support for ftrace return tracing.
      
      This plug-in stores return addresses on the thread_info structure of
      the current task.
      
      The index of the current return address is initialized when the task
      is the first one (init) and when a process forks (the child). It does
      not need to be reset when a task does a sys_execve, because after this
      syscall it still has to return from the kernel functions it called.
      
      Note that the code of return_to_handler was suggested by Steven
      Rostedt, as were almost all of the ideas for improvements in this V3.
      
      For safety, arch/x86/kernel/process_32.c is not traced, because
      __switch_to() changes the current task during its execution. That
      could cause inconsistency in the stored return address of this
      function, even though I did not see any crash after testing with
      tracing enabled on this function.
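
      A heavily simplified sketch of the per-task return stack described
      above (the struct and field names are illustrative; the real x86
      implementation lives in assembly plus arch code):

      /*
       * Hedged sketch, not the actual kernel code: each task carries a small
       * array of saved frames.  On function entry the mcount hook pushes the
       * real return address and substitutes a trampoline; when the function
       * returns through the trampoline, the frame is popped, the duration is
       * computed, and control jumps back to the original caller.
       */
      #define RET_STACK_DEPTH 20      /* the arbitrary depth discussed above */

      struct ret_frame {
              unsigned long ret;              /* original return address */
              unsigned long func;             /* traced function entry   */
              unsigned long long calltime;    /* timestamp at entry      */
      };

      struct task_trace_state {
              int curr_ret_index;             /* -1 when the stack is empty */
              struct ret_frame stack[RET_STACK_DEPTH];
      };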
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      caf4b323
    • fix for account_group_exec_runtime(), make sure ->signal can't be freed under rq->lock · ad474cac
      Committed by Oleg Nesterov
      Impact: fix hang/crash on ia64 under high load
      
      This is ugly, but the simplest patch by far.
      
      Unlike other similar routines, account_group_exec_runtime() can be
      called "implicitly" from within the scheduler after exit_notify(). This
      means we can race with the parent doing release_task(); we can't just
      check ->signal != NULL.

      Change __exit_signal() to do spin_unlock_wait(&task_rq(tsk)->lock)
      before __cleanup_signal() to make sure ->signal can't be freed under
      task_rq(tsk)->lock. Note that task_rq_unlock_wait() doesn't care
      about the case when tsk changes cpu/rq under us; this should be OK.
      
      Thanks to Ingo who nacked my previous buggy patch.
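
      The helper the changelog refers to is small; a hedged, kernel-style
      sketch of what task_rq_unlock_wait() does (not guaranteed to match the
      final code exactly):

      /* Hedged sketch: wait until nobody holds this task's runqueue lock, so
       * that ->signal cannot be freed while a remote CPU is still accounting
       * into it. */
      void task_rq_unlock_wait(struct task_struct *tsk)
      {
              struct rq *rq = task_rq(tsk);

              smp_mb();                       /* spin_unlock_wait() is not a full barrier */
              spin_unlock_wait(&rq->lock);    /* spin until the lock is released */
      }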
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Reported-by: Doug Chapman <doug.chapman@hp.com>
      ad474cac
  7. 07 Nov 2008 (2 commits)
    • net: Fix recursive descent in __scm_destroy(). · 3b53fbf4
      Committed by David S. Miller
      __scm_destroy() walks the list of file descriptors in the scm_fp_list
      pointed to by the scm_cookie argument.
      
      Those, in turn, can close sockets and invoke __scm_destroy() again.
      
      There is nothing which limits how deeply this can occur.
      
      The idea for how to fix this is from Linus.  Basically, we do all of
      the fput()s at the top level by collecting all of the scm_fp_list
      objects hit by an fput().  Inside of the initial __scm_destroy() we
      keep running the list until it is empty.
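
      In other words, the recursion is converted into iteration over a work
      list. A simplified, self-contained sketch of that pattern (the names
      fp_list, work_list and destroy_fp_list are illustrative, not the socket
      code itself):

      /*
       * Hedged sketch of the fix's shape: the outermost destroy call owns a
       * work list; nested destroy calls triggered by fput() append their
       * fp-list to it instead of recursing, and the outer call drains it.
       */
      struct fp_list {
              struct fp_list *next;
              /* file pointers to fput() would live here */
      };

      static struct fp_list *work_list;
      static int destroy_in_progress;         /* set by the outermost call */

      static void destroy_fp_list(struct fp_list *fpl)
      {
              if (destroy_in_progress) {
                      /* nested call (a socket closed by fput()): queue, don't recurse */
                      fpl->next = work_list;
                      work_list = fpl;
                      return;
              }

              destroy_in_progress = 1;
              while (fpl) {
                      /* fput() each file held by fpl here; closing a socket may
                       * re-enter destroy_fp_list(), which only queues above */
                      fpl = work_list;
                      if (fpl)
                              work_list = fpl->next;
              }
              destroy_in_progress = 0;
      }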
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3b53fbf4
    • net: Fix recursive descent in __scm_destroy(). · f8d570a4
      Committed by David Miller
      __scm_destroy() walks the list of file descriptors in the scm_fp_list
      pointed to by the scm_cookie argument.
      
      Those, in turn, can close sockets and invoke __scm_destroy() again.
      
      There is nothing which limits how deeply this can occur.
      
      The idea for how to fix this is from Linus.  Basically, we do all of
      the fput()s at the top level by collecting all of the scm_fp_list
      objects hit by an fput().  Inside of the initial __scm_destroy() we
      keep running the list until it is empty.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f8d570a4
  8. 23 Oct 2008 (1 commit)
  9. 22 Oct 2008 (1 commit)
  10. 20 Oct 2008 (3 commits)
    • add CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS · 656eb2cd
      Committed by Roland McGrath
      This adds a kconfig option to change the /proc/PID/coredump_filter
      default.  Fedora has been carrying a trivial patch to change the
      hard-wired value for this default since Fedora 8.  The built-in default
      can't be changed safely because there are old GDB versions out there
      (all before 6.7) that are confused by the core dump files created by
      the MMF_DUMP_ELF_HEADERS setting.
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Kawai Hidehiro <hidehiro.kawai.ez@hitachi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Jones <davej@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      656eb2cd
    • coredump_filter: add hugepage dumping · e575f111
      Committed by KOSAKI Motohiro
      Presently a hugepage vma has the VM_RESERVED flag set so that it is not
      swapped.  But a VM_RESERVED vma isn't core dumped, because this flag is
      often used for kernel vmas (e.g. vmalloc, sound related).

      Thus hugepages are never dumped and can't be debugged easily.  Many
      developers want hugepages to be included in core dumps.

      However, we can't read a generic VM_RESERVED area, because such an area
      is often an IO-mapping area, and reading it may change device state,
      which is a definitely undesirable side effect.

      So adding a hugepage-specific bit to the coredump filter is better: it
      enables hugepage core dumping without causing any side effect on I/O
      devices.

      In addition, libhugetlbfs uses hugetlb private mapping pages as
      anonymous pages, so hugepage private mapping pages should be core
      dumped by default.

      Therefore, /proc/[pid]/coredump_filter gets two new bits:

       - bit 5 means hugetlb private mapping pages are dumped or not (default: yes)
       - bit 6 means hugetlb shared mapping pages are dumped or not (default: no)
      
      I tested with the following method.
      
      % ulimit -c unlimited
      % ./crash_hugepage  50
      % ./crash_hugepage  50  -p
      % ls -lh
      % gdb ./crash_hugepage core
      %
      % echo 0x43 > /proc/self/coredump_filter
      % ./crash_hugepage  50
      % ./crash_hugepage  50  -p
      % ls -lh
      % gdb ./crash_hugepage core
      
      #include <stdlib.h>
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/mman.h>
      #include <string.h>
      
      #include "hugetlbfs.h"
      
      int main(int argc, char** argv){
      	char* p;
      	int ch;
      	int mmap_flags = MAP_SHARED;
      	int fd;
      	int nr_pages;
      
      	while((ch = getopt(argc, argv, "p")) != -1) {
      		switch (ch) {
      		case 'p':
      			mmap_flags &= ~MAP_SHARED;
      			mmap_flags |= MAP_PRIVATE;
      			break;
      		default:
      			/* nothing*/
      			break;
      		}
      	}
      	argc -= optind;
      	argv += optind;
      
      	if (argc == 0){
      		printf("need # of pages\n");
      		exit(1);
      	}
      
      	nr_pages = atoi(argv[0]);
      	if (nr_pages < 2) {
      		printf("nr_pages must >2\n");
      		exit(1);
      	}
      
      	fd = hugetlbfs_unlinked_fd();
      	p = mmap(NULL, nr_pages * gethugepagesize(),
      		 PROT_READ|PROT_WRITE, mmap_flags, fd, 0);
      	if (p == MAP_FAILED) {
      		perror("mmap");
      		exit(1);
      	}
      
      	sleep(2);
      
      	*(p + gethugepagesize()) = 1; /* COW */
      	sleep(2);
      
      	/* crash! */
      	*(int*)0 = 1;
      
      	return 0;
      }
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Kawai Hidehiro <hidehiro.kawai.ez@hitachi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: William Irwin <wli@holomorphy.com>
      Cc: Adam Litke <agl@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e575f111
    • sched: optimize group load balancer · ffda12a1
      Committed by Peter Zijlstra
      I noticed that tg_shares_up() unconditionally takes rq-locks for all cpus
      in the sched_domain. This hurts.
      
      We need the rq-locks whenever we change the weight of the per-cpu group
      sched entities.  To alleviate this a little, only change the weight when
      the new weight is at least shares_thresh away from the old value.
      
      This avoids the rq-lock for the top level entries, since those will never
      be re-weighted, and fuzzes the lower level entries a little to gain performance
      in semi-stable situations.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ffda12a1
  11. 17 Oct 2008 (1 commit)
  12. 09 Oct 2008 (1 commit)
    • sched debug: add name to sched_domain sysctl entries · a5d8c348
      Committed by Ingo Molnar
      Add /proc/sys/kernel/sched_domain/cpu0/domain0/name to make it easier
      to see which specific scheduler domain remained at that entry.

      Since we process the scheduler domain tree and simplify it, it's not
      always immediately clear during debugging which domain came from where.

      This depends on CONFIG_SCHED_DEBUG=y.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a5d8c348
  13. 28 Sep 2008 (1 commit)
  14. 23 Sep 2008 (1 commit)
    • timers: fix itimer/many thread hang, v2 · bb34d92f
      Committed by Frank Mayhar
      This is the second resubmission of the posix timer rework patch, posted
      a few days ago.
      
      This includes the changes from the previous resubmission, which
      addressed Oleg Nesterov's comments, removing the RCU stuff from the
      patch and un-inlining the thread_group_cputime() function for SMP.
      
      In addition, per Ingo Molnar it simplifies the UP code, consolidating much
      of it with the SMP version and depending on lower-level SMP/UP handling to
      take care of the differences.
      
      It also cleans up some UP compile errors, moves the scheduler stats-related
      macros into kernel/sched_stats.h, cleans up a merge error in
      kernel/fork.c and has a few other minor fixes and cleanups as suggested
      by Oleg and Ingo. Thanks for the review, guys.
      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bb34d92f
  15. 22 Sep 2008 (1 commit)
  16. 14 Sep 2008 (3 commits)
    • timers: fix itimer/many thread hang, cleanups · 5ce73a4a
      Committed by Ingo Molnar
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5ce73a4a
    • timers: fix itimer/many thread hang, fix #2 · 0a8eaa4f
      Committed by Ingo Molnar
      fix the UP build:
      
      In file included from arch/x86/kernel/asm-offsets_32.c:9,
                       from arch/x86/kernel/asm-offsets.c:3:
      include/linux/sched.h: In function ‘thread_group_cputime_clone_thread’:
      include/linux/sched.h:2272: warning: no return statement in function returning non-void
      include/linux/sched.h: In function ‘thread_group_cputime_account_user’:
      include/linux/sched.h:2284: error: invalid type argument of ‘->’ (have ‘struct task_cputime’)
      include/linux/sched.h:2284: error: invalid type argument of ‘->’ (have ‘struct task_cputime’)
      include/linux/sched.h: In function ‘thread_group_cputime_account_system’:
      include/linux/sched.h:2291: error: invalid type argument of ‘->’ (have ‘struct task_cputime’)
      include/linux/sched.h:2291: error: invalid type argument of ‘->’ (have ‘struct task_cputime’)
      include/linux/sched.h: In function ‘thread_group_cputime_account_exec_runtime’:
      include/linux/sched.h:2298: error: invalid type argument of ‘->’ (have ‘struct task_cputime’)
      distcc[14501] ERROR: compile arch/x86/kernel/asm-offsets.c on a/30 failed
      make[1]: *** [arch/x86/kernel/asm-offsets.s] Error 1
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0a8eaa4f
    • timers: fix itimer/many thread hang · f06febc9
      Committed by Frank Mayhar
      Overview
      
      This patch reworks the handling of POSIX CPU timers, including the
      ITIMER_PROF, ITIMER_VIRT timers and rlimit handling.  It was put together
      with the help of Roland McGrath, the owner and original writer of this code.
      
      The problem we ran into, and the reason for this rework, has to do with using
      a profiling timer in a process with a large number of threads.  It appears
      that the performance of the old implementation of run_posix_cpu_timers() was
      at least O(n*3) (where "n" is the number of threads in a process) or worse.
      Everything is fine with an increasing number of threads until the time taken
      for that routine to run becomes the same as or greater than the tick time, at
      which point things degrade rather quickly.
      
      This patch fixes bug 9906, "Weird hang with NPTL and SIGPROF."
      
      Code Changes
      
      This rework corrects the implementation of run_posix_cpu_timers() to make it
      run in constant time for a particular machine.  (Performance may vary between
      one machine and another depending upon whether the kernel is built as single-
      or multiprocessor and, in the latter case, depending upon the number of
      running processors.)  To do this, at each tick we now update fields in
      signal_struct as well as task_struct.  The run_posix_cpu_timers() function
      uses those fields to make its decisions.
      
      We define a new structure, "task_cputime," to contain user, system and
      scheduler times and use these in appropriate places:
      
      struct task_cputime {
      	cputime_t utime;
      	cputime_t stime;
      	unsigned long long sum_exec_runtime;
      };
      
      This is included in the structure "thread_group_cputime," which is a new
      substructure of signal_struct and which varies for uniprocessor versus
      multiprocessor kernels.  For uniprocessor kernels, it uses "task_cputime" as
      a simple substructure, while for multiprocessor kernels it is a pointer:
      
      struct thread_group_cputime {
      	struct task_cputime totals;
      };
      
      struct thread_group_cputime {
      	struct task_cputime *totals;
      };
      
      We also add a new task_cputime substructure directly to signal_struct, to
      cache the earliest expiration of process-wide timers, and task_cputime also
      replaces the it_*_expires fields of task_struct (used for earliest expiration
      of thread timers).  The "thread_group_cputime" structure contains process-wide
      timers that are updated via account_user_time() and friends.  In the non-SMP
      case the structure is a simple aggregator; unfortunately in the SMP case that
      simplicity was not achievable due to cache-line contention between CPUs (in
      one measured case performance was actually _worse_ on a 16-cpu system than
      the same test on a 4-cpu system, due to this contention).  For SMP, the
      thread_group_cputime counters are maintained as a per-cpu structure allocated
      using alloc_percpu().  The timer functions update only the timer field in
      the structure corresponding to the running CPU, obtained using per_cpu_ptr().
      
      We define a set of inline functions in sched.h that we use to maintain the
      thread_group_cputime structure and hide the differences between UP and SMP
      implementations from the rest of the kernel.  The thread_group_cputime_init()
      function initializes the thread_group_cputime structure for the given task.
      The thread_group_cputime_alloc() is a no-op for UP; for SMP it calls the
      out-of-line function thread_group_cputime_alloc_smp() to allocate and fill
      in the per-cpu structures and fields.  The thread_group_cputime_free()
      function, also a no-op for UP, in SMP frees the per-cpu structures.  The
      thread_group_cputime_clone_thread() function (also a UP no-op) for SMP calls
      thread_group_cputime_alloc() if the per-cpu structures haven't yet been
      allocated.  The thread_group_cputime() function fills the task_cputime
      structure it is passed with the contents of the thread_group_cputime fields;
      in UP it's that simple but in SMP it must also safely check that tsk->signal
      is non-NULL (if it is it just uses the appropriate fields of task_struct) and,
      if so, sums the per-cpu values for each online CPU.  Finally, the three
      functions account_group_user_time(), account_group_system_time() and
      account_group_exec_runtime() are used by timer functions to update the
      respective fields of the thread_group_cputime structure.
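
      As an illustration of the SMP accounting path, here is a hedged,
      kernel-style sketch of what one of these helpers might look like (field
      names such as sig->cputime.totals are my reading of the description,
      not verified against the final code):

      /* Hedged sketch: account user time only into the slot of the CPU the
       * task is currently running on, avoiding cross-CPU cache contention. */
      static inline void account_group_user_time(struct task_struct *tsk,
                                                 cputime_t cputime)
      {
              struct signal_struct *sig = tsk->signal;
              struct task_cputime *times;

              if (unlikely(!sig))             /* task may be past exit_notify() */
                      return;

              times = per_cpu_ptr(sig->cputime.totals, get_cpu());
              times->utime = cputime_add(times->utime, cputime);
              put_cpu();
      }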
      
      Non-SMP operation is trivial and will not be mentioned further.
      
      The per-cpu structure is always allocated when a task creates its first new
      thread, via a call to thread_group_cputime_clone_thread() from copy_signal().
      It is freed at process exit via a call to thread_group_cputime_free() from
      cleanup_signal().
      
      All functions that formerly summed utime/stime/sum_sched_runtime values
      from all threads in the thread group now use thread_group_cputime() to
      snapshot the values in the thread_group_cputime structure or the values
      in the task structure itself if the per-cpu structure hasn't been allocated.
      
      Finally, the code in kernel/posix-cpu-timers.c has changed quite a bit.
      The run_posix_cpu_timers() function has been split into a fast path and a
      slow path; the former safely checks whether there are any expired thread
      timers and, if not, just returns, while the slow path does the heavy lifting.
      With the dedicated thread group fields, timers are no longer "rebalanced" and
      the process_timer_rebalance() function and related code have gone away.  All
      summing loops are gone and all code that used them now uses the
      thread_group_cputime() inline.  When process-wide timers are set, the new
      task_cputime structure in signal_struct is used to cache the earliest
      expiration; this is checked in the fast path.
      
      Performance
      
      The fix appears not to add significant overhead to existing operations.  It
      generally performs the same as the current code except in two cases, one in
      which it performs slightly worse (Case 5 below) and one in which it performs
      very significantly better (Case 2 below).  Overall it's a wash except in those
      two cases.
      
      I've since done somewhat more involved testing on a dual-core Opteron system.
      
      Case 1: With no itimer running, for a test with 100,000 threads, the fixed
      	kernel took 1428.5 seconds, 513 seconds more than the unfixed system,
      	all of which was spent in the system.  There were twice as many
      	voluntary context switches with the fix as without it.
      
      Case 2: With an itimer running at .01 second ticks and 4000 threads (the most
      	an unmodified kernel can handle), the fixed kernel ran the test in
      	eight percent of the time (5.8 seconds as opposed to 70 seconds) and
      	had better tick accuracy (.012 seconds per tick as opposed to .023
      	seconds per tick).
      
      Case 3: A 4000-thread test with an initial timer tick of .01 second and an
      	interval of 10,000 seconds (i.e. a timer that ticks only once) had
      	very nearly the same performance in both cases:  6.3 seconds elapsed
      	for the fixed kernel versus 5.5 seconds for the unfixed kernel.
      
      With fewer threads (eight in these tests), the Case 1 test ran in essentially
      the same time on both the modified and unmodified kernels (5.2 seconds versus
      5.8 seconds).  The Case 2 test ran in about the same time as well, 5.9 seconds
      versus 5.4 seconds but again with much better tick accuracy, .013 seconds per
      tick versus .025 seconds per tick for the unmodified kernel.
      
      Since the fix affected the rlimit code, I also tested soft and hard CPU limits.
      
      Case 4: With a hard CPU limit of 20 seconds and eight threads (and an itimer
      	running), the modified kernel was very slightly favored in that while
      	it killed the process in 19.997 seconds of CPU time (5.002 seconds of
      	wall time), only .003 seconds of that was system time, the rest was
      	user time.  The unmodified kernel killed the process in 20.001 seconds
      	of CPU (5.014 seconds of wall time) of which .016 seconds was system
      	time.  Really, though, the results were too close to call.  The results
      	were essentially the same with no itimer running.
      
      Case 5: With a soft limit of 20 seconds and a hard limit of 2000 seconds
      	(where the hard limit would never be reached) and an itimer running,
      	the modified kernel exhibited worse tick accuracy than the unmodified
      	kernel: .050 seconds/tick versus .028 seconds/tick.  Otherwise,
      	performance was almost indistinguishable.  With no itimer running this
      	test exhibited virtually identical behavior and times in both cases.
      
      In times past I did some limited performance testing.  Those results are below.
      
      On a four-cpu Opteron system without this fix, a sixteen-thread test executed
      in 3569.991 seconds, of which user was 3568.435s and system was 1.556s.  On
      the same system with the fix, user and elapsed time were about the same, but
      system time dropped to 0.007 seconds.  Performance with eight, four and one
      thread were comparable.  Interestingly, the timer ticks with the fix seemed
      more accurate:  The sixteen-thread test with the fix received 149543 ticks
      for 0.024 seconds per tick, while the same test without the fix received 58720
      ticks for 0.061 seconds per tick.  Both cases were configured for an interval of
      0.01 seconds.  Again, the other tests were comparable.  Each thread in this
      test computed the primes up to 25,000,000.
      
      I also did a test with a large number of threads, 100,000 threads, which is
      impossible without the fix.  In this case each thread computed the primes only
      up to 10,000 (to make the runtime manageable).  System time dominated, at
      1546.968 seconds out of a total 2176.906 seconds (giving a user time of
      629.938s).  It received 147651 ticks for 0.015 seconds per tick, still quite
      accurate.  There is obviously no comparable test without the fix.
      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f06febc9
  17. 10 Sep 2008 (1 commit)
  18. 06 Sep 2008 (2 commits)
    • hrtimer: create a "timer_slack" field in the task struct · 6976675d
      Committed by Arjan van de Ven
      We want to be able to control the default "rounding" that is used by
      select(), poll() and friends. This is a per-process property (so that we
      can have a "nice"-like program to start certain programs with a looser
      or stricter rounding) that can be set/get via a prctl().

      For this purpose, a field called "timer_slack_ns" is added to the task
      struct. In addition, a field called "default_timer_slack_ns" is added so
      that tasks can easily switch temporarily to a more/less accurate slack
      and then back to the default.

      The default value of the slack is set to 50 usec; this is significantly
      less than 2.6.27's average select() and poll() timing error but still
      allows the kernel to group timers somewhat to preserve power behavior.
      Applications and admins can override this via the prctl().
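
      A short example of how a process might use the interface (assuming the
      PR_SET_TIMERSLACK/PR_GET_TIMERSLACK prctl constants from the eventual
      <linux/prctl.h>; values are in nanoseconds):

      #include <sys/prctl.h>
      #include <stdio.h>

      /* Sketch: tighten this process's timer slack to 10 usec, then read it
       * back.  The default is 50 usec (50000 ns). */
      int main(void)
      {
              if (prctl(PR_SET_TIMERSLACK, 10000UL, 0, 0, 0) < 0)
                      perror("PR_SET_TIMERSLACK");

              printf("timer slack is now %d ns\n",
                     prctl(PR_GET_TIMERSLACK, 0, 0, 0, 0));
              return 0;
      }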
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      6976675d
    • sched: fix process time monotonicity · 49048622
      Committed by Balbir Singh
      Spencer reported a problem where utime and stime were going negative despite
      the fixes in commit b27f03d4. The suspected
      reason for the problem is that signal_struct maintains its own utime and
      stime (of exited tasks); these are not updated using the new task_utime()
      routine, hence sig->utime can go backwards and cause the same problem
      to occur (sig->utime adds tsk->utime and not task_utime()). This patch
      fixes the problem.
      
      TODO: using max(task->prev_utime, derived utime) works for now, but a more
      generic solution is to implement cputime_max() and use the cputime_gt()
      function for comparison.
      
      Reported-by: spencer@bluehost.com
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      49048622
  19. 15 Aug 2008 (2 commits)
  20. 14 Aug 2008 (1 commit)
    • CRED: Introduce credential access wrappers · 9e2b2dc4
      Committed by David Howells
      The patches that are intended to introduce copy-on-write credentials for 2.6.28
      require abstraction of access to some fields of the task structure,
      particularly for the case of one task accessing another's credentials where RCU
      will have to be observed.
      
      Introduced here are trivial no-op versions of the desired accessors for current
      and other tasks so that other subsystems can start to be converted over more
      easily.
      
      Wrappers are introduced into a new header (linux/cred.h) for UID/GID,
      EUID/EGID, SUID/SGID, FSUID/FSGID, cap_effective and current's subscribed
      user_struct.  These wrappers are macros because the ordering between header
      files militates against making them inline functions.
      
      linux/cred.h is #included from linux/sched.h.
      
      Further, XFS is modified such that it no longer defines and uses parameterised
      versions of current_fs[ug]id(), thus getting rid of the namespace collision
      otherwise incurred.
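
      For example, code that previously open-coded current->fsuid can go
      through the wrappers instead; a hedged, kernel-style sketch of the
      intended usage (the function may_touch_inode_owner() is hypothetical,
      current_fsuid() is one of the FSUID wrappers this patch describes):

      #include <linux/cred.h>

      /* Hedged sketch: use the accessor instead of dereferencing the task. */
      static int may_touch_inode_owner(uid_t inode_uid)
      {
              if (current_fsuid() == inode_uid)   /* filesystem uid of the caller */
                      return 1;
              return capable(CAP_FOWNER);         /* otherwise require CAP_FOWNER */
      }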
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: James Morris <jmorris@namei.org>
      9e2b2dc4
  21. 11 Aug 2008 (1 commit)
  22. 31 Jul 2008 (1 commit)
    • sched clock: revert various sched_clock() changes · e4e4e534
      Committed by Ingo Molnar
      Found an interactivity problem on a quad core test-system: simple
      CPU loops would occasionally delay the system in an unacceptable way.

      After much debugging with Peter Zijlstra it turned out that the problem
      is caused by the string of sched_clock() changes: they caused the CPU
      clock to jump backwards a bit, which confuses the scheduler arithmetic
      (which is unsigned for performance reasons).
      
      So revert:
      
       # c300ba25: sched_clock: and multiplier for TSC to gtod drift
       # c0c87734: sched_clock: only update deltas with local reads.
       # af52a90a: sched_clock: stop maximum check on NO HZ
       # f7cce27f: sched_clock: widen the max and min time
      
      This solves the interactivity problems.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Mike Galbraith <efault@gmx.de>
      e4e4e534
  23. 28 Jul 2008 (2 commits)
  24. 27 Jul 2008 (4 commits)
  25. 26 Jul 2008 (2 commits)
    • per-task-delay-accounting: add memory reclaim delay · 873b4771
      Committed by Keika Kobayashi
      Sometimes application response becomes bad under heavy memory load.
      Applications take some time to reclaim memory. Statistics on how long
      memory reclaim takes are useful for measuring memory usage.

      This patch adds memory reclaim to per-task delay accounting, accounting
      the time spent in do_try_to_free_pages().
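
      Conceptually the accounting just brackets the reclaim path; a hedged
      sketch of the pattern (the delayacct_freepages_start/end helper names
      are my reading of the patch and should be treated as illustrative):

              /* inside try_to_free_pages(), hedged sketch */
              delayacct_freepages_start();            /* note reclaim start time          */
              progress = do_try_to_free_pages(zonelist, &sc);
              delayacct_freepages_end();              /* fold elapsed time into RECLAIM   */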
      
      <i.e>
      
      - When System is under low memory load,
        memory reclaim may not occur.
      
      $ free
                   total       used       free     shared    buffers     cached
      Mem:       8197800    1577300    6620500          0       4808    1516724
      -/+ buffers/cache:      55768    8142032
      Swap:     16386292          0   16386292
      
      $ vmstat 1
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0      0 5069748  10612 3014060    0    0     0     0    3   26  0  0 100  0
       0  0      0 5069748  10612 3014060    0    0     0     0    4   22  0  0 100  0
       0  0      0 5069748  10612 3014060    0    0     0     0    3   18  0  0 100  0
      
      Measure the time of a tar command.
      
      $ ls -s test.dat
      1501472 test.dat
      
      $ time tar cvf test.tar test.dat
      real    0m13.388s
      user    0m0.116s
      sys     0m5.304s
      
      $ ./delayget -d -p <pid>
      CPU             count     real total  virtual total    delay total
                        428     5528345500     5477116080       62749891
      IO              count    delay total
                        338     8078977189
      SWAP            count    delay total
                          0              0
      RECLAIM         count    delay total
                          0              0
      
      - When the system is under heavy memory load,
        memory reclaim may occur.
      
      $ vmstat 1
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0 7159032  49724   1812   3012    0    0     0     0    3   24  0  0 100  0
       0  0 7159032  49724   1812   3012    0    0     0     0    4   24  0  0 100  0
       0  0 7159032  49848   1812   3012    0    0     0     0    3   22  0  0 100  0
      
      In this case, one process uses more than 8GB of memory
      through malloc() and memset().
      
      $ time tar cvf test.tar test.dat
      real    1m38.563s        <-  increased by 85 sec
      user    0m0.140s
      sys     0m7.060s
      
      $ ./delayget -d -p <pid>
      CPU             count     real total  virtual total    delay total
                       9021     7140446250     7315277975      923201824
      IO              count    delay total
                       8965    90466349669
      SWAP            count    delay total
                          3       21036367
      RECLAIM         count    delay total
                        740    61011951153
      
      In the latter case, the value of RECLAIM increases.
      So taskstats can show how much memory reclaim influences turnaround time (TAT).
      Signed-off-by: Keika Kobayashi <kobayashi.kk@ncos.nec.co.jp>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujistu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      873b4771
    • task IO accounting: provide distinct tgid/tid I/O statistics · 297c5d92
      Committed by Andrea Righi
      Report per-thread I/O statistics in /proc/pid/task/tid/io and aggregate
      parent I/O statistics in /proc/pid/io.  This approach follows the same
      model used to account per-process and per-thread CPU times.

      As a practical application, this allows, for example, quickly finding the
      top I/O consumer when a process spawns many child threads that perform the
      actual I/O work, because the aggregated I/O statistics can always be found
      in /proc/pid/io.
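
      Reading the aggregated statistics is plain procfs text; a small
      illustrative reader (the file exposes counters such as rchar, wchar,
      read_bytes and write_bytes):

      #include <stdio.h>

      /* Sketch: dump this process's aggregated I/O counters.  For a single
       * thread, /proc/<pid>/task/<tid>/io has the same format. */
      int main(void)
      {
              char line[256];
              FILE *f = fopen("/proc/self/io", "r");

              if (!f) {
                      perror("/proc/self/io");
                      return 1;
              }
              while (fgets(line, sizeof(line), f))    /* rchar, wchar, read_bytes, ... */
                      fputs(line, stdout);
              fclose(f);
              return 0;
      }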
      
      [ Oleg Nesterov points out that we should check that the task is still
        alive before we iterate over the threads, but also says that we can do
        that fixup on top of this later.  - Linus ]
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
      Cc: Matt Heaton <matt@hostmonster.com>
      Cc: Shailabh Nagar <nagar@watson.ibm.com>
      Acked-by-with-comments: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      297c5d92