1. 24 Sep 2014: 6 commits
  2. 21 Sep 2014: 1 commit
  3. 19 Sep 2014: 21 commits
  4. 16 Sep 2014: 5 commits
  5. 09 Sep 2014: 2 commits
  6. 08 Sep 2014: 5 commits
    • sched, time: Atomically increment stime & utime · eb1b4af0
      Rik van Riel committed
      The functions task_cputime_adjusted() and thread_group_cputime_adjusted()
      can be called locklessly, as well as concurrently on many different CPUs.
      
      This can occasionally lead to the utime and stime reported by times(), and
      other syscalls like it, going backward. The cause for this appears to be
      multiple threads racing in cputime_adjust(), each with a value for utime or
      stime that is larger than the original, but different from one another.
      
      Sometimes the larger value gets saved first, only to be immediately
      overwritten with a smaller value by another thread.
      
      Using atomic exchange prevents that problem, and ensures time
      progresses monotonically (a sketch of the pattern follows this entry).
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: umgwanakikbuti@gmail.com
      Cc: fweisbec@gmail.com
      Cc: akpm@linux-foundation.org
      Cc: srao@redhat.com
      Cc: lwoodman@redhat.com
      Cc: atheurer@redhat.com
      Cc: oleg@redhat.com
      Link: http://lkml.kernel.org/r/1408133138-22048-4-git-send-email-riel@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      eb1b4af0
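
      As a rough sketch of the forward-only update described above (the helper
      name, the cputime_t type and the exact primitives are illustrative
      assumptions, not a quote of the merged patch), the idea is a
      compare-and-exchange loop that only ever moves the stored value forward:

          /* Advance *counter to new, but never let it move backwards. */
          static void cputime_advance(cputime_t *counter, cputime_t new)
          {
                  cputime_t old;

                  /*
                   * Re-read the counter; retry the cmpxchg only while our
                   * value is still ahead of whatever is currently stored.
                   */
                  while (new > (old = ACCESS_ONCE(*counter)))
                          cmpxchg(counter, old, new);
          }

      A caller that loses the race simply observes that the counter has already
      moved past its own value and returns, so readers never see utime or stime
      go backward.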
    • time, signal: Protect resource use statistics with seqlock · e78c3496
      Rik van Riel committed
      Both times() and clock_gettime(CLOCK_PROCESS_CPUTIME_ID) have scalability
      issues on large systems, due to both functions being serialized with a
      lock.
      
      The lock protects against reporting a wrong value when a thread in the
      task group exits, its statistics are propagated up to the signal struct,
      and that exited task's statistics end up counted twice (or not at all).
      
      Protecting that with a lock results in times() and clock_gettime() being
      completely serialized on large systems.
      
      This can be fixed by using a seqlock around the events that gather and
      propagate statistics. As an additional benefit, the protection code can
      be moved into thread_group_cputime(), slightly simplifying the calling
      functions.
      
      In the case of posix_cpu_clock_get_task() things can be simplified a
      lot, because the calling function already ensures that the task sticks
      around, and the rest is now taken care of in thread_group_cputime().
      
      This way the statistics reporting code can run lockless (a sketch of the
      seqlock read/write pattern follows this entry).
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alex Thorlton <athorlton@sgi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Daeseok Youn <daeseok.youn@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guillaume Morin <guillaume@morinfr.org>
      Cc: Ionut Alexa <ionut.m.alexa@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Michal Schmidt <mschmidt@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: umgwanakikbuti@gmail.com
      Cc: fweisbec@gmail.com
      Cc: srao@redhat.com
      Cc: lwoodman@redhat.com
      Cc: atheurer@redhat.com
      Link: http://lkml.kernel.org/r/20140816134010.26a9b572@annuminas.surriel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e78c3496
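
      As a simplified sketch of the seqlock protection described above (the
      merged code uses the kernel's seqlock helpers in a more elaborate form;
      the stats_lock field and the function names here are illustrative
      assumptions), a writer folds statistics in under the seqlock while
      readers retry until they observe an unchanged sequence count:

          /* Writer: fold an exiting thread's times into the signal struct. */
          static void account_exit_times(struct signal_struct *sig,
                                         cputime_t utime, cputime_t stime)
          {
                  write_seqlock(&sig->stats_lock);
                  sig->utime += utime;
                  sig->stime += stime;
                  write_sequnlock(&sig->stats_lock);
          }

          /* Reader: take a consistent snapshot without serializing readers. */
          static void snapshot_times(struct signal_struct *sig,
                                     cputime_t *utime, cputime_t *stime)
          {
                  unsigned int seq;

                  do {
                          seq = read_seqbegin(&sig->stats_lock);
                          *utime = sig->utime;
                          *stime = sig->stime;
                  } while (read_seqretry(&sig->stats_lock, seq));
          }

      Readers never block one another, which is what removes the times() and
      clock_gettime() serialization on large systems.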
    • exit: Always reap resource stats in __exit_signal() · 90ed9cbe
      Rik van Riel committed
      Oleg pointed out that wait_task_zombie adds a task's usage statistics
      to the parent's signal struct, but the task's own signal struct should
      also propagate the statistics at exit time.
      
      This allows thread_group_cputime(reaped_zombie) to get the statistics
      after __unhash_process() has made the task invisible to for_each_thread,
      but before the thread has actually been RCU-freed, making sure no
      non-monotonic results are returned inside that window (a sketch of the
      resulting accumulation follows this entry).
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Guillaume Morin <guillaume@morinfr.org>
      Cc: Ionut Alexa <ionut.m.alexa@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Michal Schmidt <mschmidt@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: umgwanakikbuti@gmail.com
      Cc: fweisbec@gmail.com
      Cc: srao@redhat.com
      Cc: lwoodman@redhat.com
      Cc: atheurer@redhat.com
      Link: http://lkml.kernel.org/r/1408133138-22048-2-git-send-email-riel@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90ed9cbe
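
      The resulting accumulation can be pictured roughly like this (a sketch of
      the shape of __exit_signal() after the change; field names are simplified
      and the surrounding locking is elided): the exiting task's times are
      always folded into its own signal struct, not only for non-leader
      threads, before __unhash_process() hides the task:

          /* Always reap the exiting task's stats into its signal struct. */
          task_cputime(tsk, &utime, &stime);
          sig->utime += utime;
          sig->stime += stime;
          sig->min_flt += tsk->min_flt;
          sig->maj_flt += tsk->maj_flt;

          __unhash_process(tsk, group_dead);

      With the statistics already in the signal struct, a concurrent
      thread_group_cputime() that can no longer see the task via
      for_each_thread still counts them exactly once.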
    • Merge tag 'v3.17-rc4' into sched/core, to prevent conflicts with upcoming patches, and to refresh the tree · e2627dce
      Ingo Molnar committed
      
      Linux 3.17-rc4
      e2627dce
    • Linux 3.17-rc4 · 2ce7598c
      Linus Torvalds committed
      2ce7598c