1. 23 Feb 2014 (6 commits)
  2. 22 Feb 2014 (4 commits)
  3. 11 Feb 2014 (4 commits)
  4. 10 Feb 2014 (8 commits)
  5. 09 Feb 2014 (3 commits)
  6. 28 Jan 2014 (11 commits)
  7. 24 Jan 2014 (1 commit)
  8. 23 Jan 2014 (2 commits)
    • sched/clock: Fixup early initialization · d375b4e0
      Committed by Peter Zijlstra
      The code would assume sched_clock_stable() and switch to !stable
      later; this switch brings a discontinuity in time.
      
      The discontinuity on switching from stable to unstable was always
      present, but previously we would set stable/unstable before
      initializing TSC and usually stick to the one we start out with.
      
      So the static_key bits brought an extra switch where there previously
      wasn't one.
      
      Things are further complicated by the fact that we cannot use
      static_key as early as we usually call set_sched_clock_stable().
      
      Fix things by tracking the stable state in a regular variable and
      only setting the static_key to the right state in sched_clock_init(),
      which is run right after late_time_init->tsc_init().
      
      Before this we would not be using the TSC anyway.
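      
      In that spirit, a minimal C sketch of the described pattern (not the
      actual patch; __sched_clock_stable_early and sched_clock_running are
      names assumed here purely for illustration):
      
          #include <linux/jump_label.h>
          
          static struct static_key __sched_clock_stable = STATIC_KEY_INIT;
          
          /* Plain variable: safe to write before jump-label patching works. */
          static int __sched_clock_stable_early;
          static int sched_clock_running;
          
          void set_sched_clock_stable(void)
          {
          	/*
          	 * Called very early in boot, before static_key updates are
          	 * possible: just record the request in the plain variable.
          	 * (The real fix also handles calls arriving after init.)
          	 */
          	__sched_clock_stable_early = 1;
          }
          
          void sched_clock_init(void)
          {
          	sched_clock_running = 1;
          
          	/*
          	 * Runs right after late_time_init->tsc_init(), so jump
          	 * labels work now; set the key once to its final state.
          	 * No later stable -> unstable flip, hence no clock
          	 * discontinuity.
          	 */
          	if (__sched_clock_stable_early)
          		static_key_slow_inc(&__sched_clock_stable);
          }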
      Reported-and-Tested-by: Sasha Levin <sasha.levin@oracle.com>
      Reported-by: dyoung@redhat.com
      Fixes: 35af99e6 ("sched/clock, x86: Use a static_key for sched_clock_stable")
      Cc: jacob.jun.pan@linux.intel.com
      Cc: Mike Galbraith <bitbucket@online.de>
      Cc: hpa@zytor.com
      Cc: paulmck@linux.vnet.ibm.com
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: lenb@kernel.org
      Cc: rjw@rjwysocki.net
      Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
      Cc: rui.zhang@intel.com
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140122115918.GG3694@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Revert "sched: Fix sleep time double accounting in enqueue entity" · 9390675a
      Committed by Vincent Guittot
      This reverts commit 282cf499.
      
      With the current implementation, the load average statistics of a
      sched entity change according to other activity on the CPU, even if
      that activity occurs between the running windows of the sched entity
      and has no influence on the running duration of the task.
      
      When a task wakes up on the same CPU, we currently update
      last_runnable_update with the return value of
      __synchronize_entity_decay() without updating runnable_avg_sum and
      runnable_avg_period accordingly. In fact, we have to sync the
      load_contrib of the se with the rq's blocked_load_contrib before
      removing it from the latter (with __synchronize_entity_decay()), but
      we must keep last_runnable_update unchanged so that
      runnable_avg_sum/period are updated correctly during the next
      update_entity_load_avg().
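      
      A hedged sketch of the invariant the revert restores (pseudo-kernel C
      using 3.13-era fair.c names; simplified, not the literal diff):
      
          /* On waking a sleeper on the same CPU: */
          static void enqueue_sleeper_sketch(struct cfs_rq *cfs_rq,
          				   struct sched_entity *se)
          {
          	/* Catch the entity's contrib up with the blocked-load decay... */
          	__synchronize_entity_decay(se);
          
          	/* ...and take it out of the rq's blocked load. */
          	cfs_rq->blocked_load_avg -= se->avg.load_avg_contrib;
          
          	/*
          	 * Do NOT advance se->avg.last_runnable_update here. Leaving
          	 * it at the point the task went to sleep lets the next
          	 * update_entity_load_avg() see the whole sleep period when it
          	 * decays runnable_avg_sum/runnable_avg_period. Advancing it
          	 * (what the reverted commit did) made the stats depend on
          	 * unrelated CPU activity during the sleep.
          	 */
          }
      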
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Reviewed-by: Ben Segall <bsegall@google.com>
      Cc: pjt@google.com
      Cc: alex.shi@linaro.org
      Link: http://lkml.kernel.org/r/1390376734-6800-1-git-send-email-vincent.guittot@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 22 Jan 2014 (1 commit)
    • sched: add tracepoints related to NUMA task migration · 286549dc
      Committed by Mel Gorman
      This patch adds three tracepoints
       o trace_sched_move_numa	when a task is moved to a node
       o trace_sched_swap_numa	when a task is swapped with another task
       o trace_sched_stick_numa	when a numa-related migration fails
      
      The tracepoints allow NUMA scheduler activity to be monitored, and
      the following high-level metrics can be calculated:
      
       o NUMA migrated stuck	 nr trace_sched_stick_numa
       o NUMA migrated idle	 nr trace_sched_move_numa
       o NUMA migrated swapped nr trace_sched_swap_numa
       o NUMA local swapped	 trace_sched_swap_numa src_nid == dst_nid (should never happen)
       o NUMA remote swapped	 trace_sched_swap_numa src_nid != dst_nid (should == NUMA migrated swapped)
       o NUMA group swapped	 trace_sched_swap_numa src_ngid == dst_ngid
      			 Maybe a small number of these are acceptable
      			 but a high number would be a major surprise.
      			 It would be even worse if bounces are frequent.
       o NUMA avg task migs.	 Average number of migrations for tasks
       o NUMA stddev task mig	 Self-explanatory
       o NUMA max task migs.	 Maximum number of migrations for a single task
      
      In general the intent of the tracepoints is to help diagnose problems
      where automatic NUMA balancing appears to be doing an excessive amount
      of useless work.
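      
      For reference, a sketch of how one of these tracepoints might be
      declared (illustrative only; the field selection is an assumption,
      and the usual TRACE_SYSTEM/header boilerplate surrounding a
      TRACE_EVENT definition is omitted):
      
          #include <linux/tracepoint.h>
          
          TRACE_EVENT(sched_move_numa,
          
          	TP_PROTO(struct task_struct *tsk, int src_cpu, int dst_cpu),
          
          	TP_ARGS(tsk, src_cpu, dst_cpu),
          
          	TP_STRUCT__entry(
          		__field(pid_t,	pid)
          		__field(int,	src_cpu)
          		__field(int,	src_nid)
          		__field(int,	dst_cpu)
          		__field(int,	dst_nid)
          	),
          
          	TP_fast_assign(
          		__entry->pid	 = task_pid_nr(tsk);
          		__entry->src_cpu = src_cpu;
          		__entry->src_nid = cpu_to_node(src_cpu);
          		__entry->dst_cpu = dst_cpu;
          		__entry->dst_nid = cpu_to_node(dst_cpu);
          	),
          
          	TP_printk("pid=%d src_cpu=%d src_nid=%d dst_cpu=%d dst_nid=%d",
          		  __entry->pid, __entry->src_cpu, __entry->src_nid,
          		  __entry->dst_cpu, __entry->dst_nid)
          );
      
      Once merged, an event like this can be enabled at runtime through
      /sys/kernel/debug/tracing/events/sched/sched_move_numa/enable and
      read back from the trace buffer.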
      
      [akpm@linux-foundation.org: remove semicolon-after-if, repair coding-style]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Alex Thorlton <athorlton@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>