1. 26 January 2008 (19 commits)
    • sched: do not do cond_resched() when CONFIG_PREEMPT · 02b67cc3
      Committed by Herbert Xu
      Why do we even have cond_resched when real preemption
      is on? It seems to be a waste of space and time.
      
      Remove cond_resched() when CONFIG_PREEMPT is on.
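      
      A minimal illustrative sketch (assuming a helper named _cond_resched()
      for the non-preemptible case; this is not the literal kernel diff) of how
      cond_resched() can become a no-op once CONFIG_PREEMPT already provides
      preemption points everywhere:
      
      	#ifdef CONFIG_PREEMPT
      	/* full preemption reschedules at any preemption point, so an
      	 * explicit voluntary reschedule adds nothing */
      	static inline int cond_resched(void) { return 0; }
      	#else
      	extern int _cond_resched(void);
      	#define cond_resched() _cond_resched()
      	#endif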
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      02b67cc3
    • sched: SCHED_FIFO/SCHED_RR watchdog timer · 78f2c7db
      Committed by Peter Zijlstra
      Introduce a new rlimit that allows the user to set a runtime timeout on
      a real-time task's slice. Once this limit is exceeded, the task will
      receive SIGXCPU.
      
      So it measures runtime since the last sleep.
      
      Input and ideas by Thomas Gleixner and Lennart Poettering.
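      
      For illustration, the rlimit introduced here is the one known to
      userspace as RLIMIT_RTTIME (specified in microseconds); a hedged usage
      sketch, assuming that name:
      
      	#include <sys/resource.h>
      
      	/* Cap an RT task's runtime-since-last-sleep at 5 ms (soft) / 10 ms
      	 * (hard); exceeding the soft limit delivers SIGXCPU. */
      	static int limit_rt_runtime(void)
      	{
      		struct rlimit rl = { .rlim_cur = 5000, .rlim_max = 10000 };
      
      		return setrlimit(RLIMIT_RTTIME, &rl);
      	}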
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Lennart Poettering <mzxreary@0pointer.de>
      CC: Michael Kerrisk <mtk.manpages@googlemail.com>
      CC: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      78f2c7db
    • sched: sched_rt_entity · fa717060
      Committed by Peter Zijlstra
      Move the task_struct members specific to rt scheduling together.
      A future optimization could be to put sched_entity and sched_rt_entity
      into a union.
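      
      A hedged sketch of the resulting task_struct layout (fields abbreviated,
      not the exact kernel structure):
      
      	struct task_struct {
      		/* ... */
      		struct sched_entity	se;	/* CFS scheduling state */
      		struct sched_rt_entity	rt;	/* RT scheduling state, now grouped */
      		/* a later optimization could place se and rt in a union, since
      		 * a task is queued on only one class at a time */
      		/* ... */
      	};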
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fa717060
    • Preempt-RCU: implementation · e260be67
      Committed by Paul E. McKenney
      This patch implements a new version of RCU which allows its read-side
      critical sections to be preempted. It uses a set of counter pairs
      to keep track of the read-side critical sections and flips them
      when all tasks have exited their read-side critical sections. The
      details of this implementation can be found in this paper:
      
      	http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf
      
      and in the article:
      
      	http://lwn.net/Articles/253651/
      
      This patch was developed as a part of the -rt kernel development and is
      meant to provide better latencies when read-side critical sections of
      RCU don't disable preemption.  As a consequence of keeping track of RCU
      readers, the readers have a slight overhead (possible optimizations are
      discussed in the paper).  This implementation co-exists with the
      "classic" RCU implementations and can be switched to at compile time.
      
      Also includes RCU tracing summarized in debugfs.
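      
      A heavily simplified, hedged sketch of the counter-pair idea (the real
      implementation uses per-CPU counters and handles ordering, migration and
      nesting; the names below are illustrative only):
      
      	static atomic_t rcu_flipctr[2];	/* one counter per grace-period phase */
      	static int rcu_cur_idx;		/* phase that new readers join */
      
      	void demo_rcu_read_lock(void)
      	{
      		int idx = rcu_cur_idx;
      
      		atomic_inc(&rcu_flipctr[idx]);
      		current->demo_flip_idx = idx;	/* hypothetical per-task field */
      	}
      
      	void demo_rcu_read_unlock(void)
      	{
      		atomic_dec(&rcu_flipctr[current->demo_flip_idx]);
      	}
      
      	/* grace period: flip rcu_cur_idx, then wait for the old phase's
      	 * counter to drain to zero before invoking callbacks */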
      
      [ akpm@linux-foundation.org: build fixes on non-preempt architectures ]
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@us.ibm.com>
      Reviewed-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e260be67
    • Preempt-RCU: reorganize RCU code into rcuclassic.c and rcupdate.c · 01c1c660
      Committed by Paul E. McKenney
      This patch re-organizes the RCU code to enable multiple implementations
      of RCU. Users of RCU continue to include rcupdate.h and the
      RCU interfaces remain the same. This is in preparation for
      subsequently merging the preemptible RCU implementation.
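      
      For illustration, callers keep including rcupdate.h and using the same
      interfaces regardless of which implementation is built in; a minimal
      sketch (gptr and consume() are placeholders):
      
      	#include <linux/rcupdate.h>
      
      	static void reader(void)
      	{
      		struct foo *p;			/* hypothetical type */
      
      		rcu_read_lock();
      		p = rcu_dereference(gptr);
      		if (p)
      			consume(p);
      		rcu_read_unlock();
      	}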
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      01c1c660
    • Preempt-RCU: Use softirq instead of tasklets for · c2d727aa
      Committed by Dipankar Sarma
      This patch makes RCU use softirq instead of tasklets.
      
      It also adds a memory barrier after raising the softirq
      in order to ensure that the cpu sees the most recently updated
      value of rcu->cur while processing callbacks.
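      
      A hedged sketch of the pattern being described (illustrative helper only;
      the actual patch wires this into the kernel's RCU_SOFTIRQ handling):
      
      	static void demo_rcu_raise(void)
      	{
      		raise_softirq(RCU_SOFTIRQ);
      		/* make the latest rcu->cur value visible to the cpu that
      		 * will process the callbacks */
      		smp_mb();
      	}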
      The discussion of the related theoretical race pointed out
      by James Huang can be found here: http://lkml.org/lkml/2007/11/20/603
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
      Reviewed-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c2d727aa
    • sched: RT-balance, add new methods to sched_class · cb469845
      Committed by Steven Rostedt
      Dmitry Adamushko found that the current implementation of the RT
      balancing code left out changes to the sched_setscheduler and
      rt_mutex_setprio.
      
      This patch addresses this issue by adding methods to the scheduler classes
      to handle being switched out of (switched_from) and being switched into
      (switched_to) a sched_class. A method for priority changes (prio_changed)
      is also added.
      
      This patch also removes some duplicate logic between rt_mutex_setprio and
      sched_setscheduler.
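      
      A hedged sketch of the shape of the new sched_class hooks (signatures
      approximate, not copied from the kernel source):
      
      	struct sched_class {
      		/* existing methods: enqueue_task, dequeue_task, ... */
      
      		/* task is leaving this class (e.g. via sched_setscheduler) */
      		void (*switched_from)(struct rq *rq, struct task_struct *p,
      				      int running);
      		/* task has just entered this class */
      		void (*switched_to)(struct rq *rq, struct task_struct *p,
      				    int running);
      		/* task's priority changed while staying in this class */
      		void (*prio_changed)(struct rq *rq, struct task_struct *p,
      				     int oldprio, int running);
      	};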
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cb469845
    • sched: RT-balance, replace hooks with pre/post schedule and wakeup methods · 9a897c5a
      Committed by Steven Rostedt
      This makes the main sched.c code more agnostic to the scheduler classes:
      instead of having RT-class-specific balancing hooks in the schedule code,
      they are replaced with pre_schedule, post_schedule and task_wake_up
      methods. These methods may be used by any of the classes, but currently
      only the sched_rt class implements them.
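      
      A hedged sketch of how such optional per-class hooks could be invoked
      from the core scheduler (illustrative call sites, not the exact code):
      
      	/* in schedule(), before picking the next task */
      	if (prev->sched_class->pre_schedule)
      		prev->sched_class->pre_schedule(rq, prev);
      
      	/* ... pick_next_task(), context_switch() ... */
      
      	/* after the switch, let the class push surplus tasks away */
      	if (rq->curr->sched_class->post_schedule)
      		rq->curr->sched_class->post_schedule(rq);
      
      	/* in try_to_wake_up(), once the task is queued */
      	if (p->sched_class->task_wake_up)
      		p->sched_class->task_wake_up(rq, p);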
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9a897c5a
    • sched: whitespace cleanups in topology.h · 32525d02
      Committed by Ingo Molnar
      whitespace cleanups in topology.h.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      32525d02
    • sched: reactivate fork balancing · 52d85343
      Committed by Ingo Molnar
      reactivate fork balancing.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      52d85343
    • sched: add sched-domain roots · 57d885fe
      Committed by Gregory Haskins
      We add the notion of a root-domain which will be used later to rescope
      global variables to per-domain variables.  Each exclusive cpuset
      essentially defines an island domain by fully partitioning the member cpus
      from any other cpuset.  However, we currently still maintain some
      policy/state as global variables which transcend all cpusets.  Consider,
      for instance, rt-overload state.
      
      Whenever a new exclusive cpuset is created, we also create a new
      root-domain object and move each cpu member to the root-domain's span.
      By default the system creates a single root-domain with all cpus as
      members (mimicking the global state we have today).
      
      We add some plumbing for storing class specific data in our root-domain.
      Whenever a RQ is switching root-domains (because of repartitioning) we
      give each sched_class the opportunity to remove any state from its old
      domain and add state to the new one.  This logic doesn't have any clients
      yet, but it will gain them later in the series.
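      
      A hedged sketch of the kind of object this introduces (field set
      abbreviated; the class-specific state mentioned above is attached to it
      by later patches):
      
      	struct root_domain {
      		atomic_t	refcount;
      		cpumask_t	span;		/* cpus covered by this root-domain */
      		cpumask_t	online;		/* subset of span that is online */
      		/* per-class state, e.g. rt-overload tracking, goes here */
      	};
      
      	struct rq {
      		/* ... */
      		struct root_domain *rd;		/* root-domain this rq belongs to */
      	};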
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Christoph Lameter <clameter@sgi.com>
      CC: Paul Jackson <pj@sgi.com>
      CC: Simon Derr <simon.derr@bull.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      57d885fe
    • sched: de-SCHED_OTHER-ize the RT path · e7693a36
      Committed by Gregory Haskins
      The current wake-up code path tries to determine if it can optimize the
      wake-up to "this_cpu" by computing load calculations.  The problem is that
      these calculations are only relevant to SCHED_OTHER tasks where load is king.
      For RT tasks, priority is king.  So the load calculation is completely wasted
      bandwidth.
      
      Therefore, we create a new sched_class interface to help with
      pre-wakeup routing decisions and move the load calculation into the
      CFS class, since it only applies to CFS tasks.
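      
      A hedged sketch of what such a pre-wakeup routing hook can look like (the
      method that appeared in the scheduler is select_task_rq(); treat the
      signature here as approximate):
      
      	struct sched_class {
      		/* ... */
      		/* pick a target cpu/runqueue for a task that is waking up */
      		int (*select_task_rq)(struct task_struct *p, int sync);
      	};
      
      	/* CFS keeps the load-based heuristics in its own implementation,
      	 * while an RT class can route purely on priority instead. */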
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e7693a36
    • sched: add RT-balance cpu-weight · 73fe6aae
      Committed by Gregory Haskins
      Some RT tasks (particularly kthreads) are bound to one specific CPU.
      It is fairly common for two or more bound tasks to get queued up at the
      same time.  Consider, for instance, softirq_timer and softirq_sched.  A
      timer goes off in an ISR which schedules softirq_thread to run at RT50.
      Then the timer handler determines that it's time to smp-rebalance the
      system so it schedules softirq_sched to run.  So we are in a situation
      where we have two RT50 tasks queued, and the system will go into
      rt-overload condition to request other CPUs for help.
      
      This causes two problems in the current code:
      
      1) If a high-priority bound task and a low-priority unbound task queue
         up behind the running task, we will fail to ever relocate the unbound
         task because we terminate the search on the first unmovable task.
      
      2) We spend precious futile cycles in the fast-path trying to pull
         overloaded tasks over.  It is therefore optimal to strive to avoid the
         overhead altogether if we can cheaply detect the condition before
         overload even occurs.
      
      This patch tries to achieve this optimization by utilizing the Hamming
      weight of the task->cpus_allowed mask.  A weight of 1 indicates that
      the task cannot be migrated.  We will then utilize this information to
      skip non-migratable tasks and to eliminate unnecessary rebalance attempts.
      
      We introduce a per-rq variable to count the number of migratable tasks
      that are currently running.  We only go into overload if we have more
      than one rt task, AND at least one of them is migratable.
      
      In addition, we introduce a per-task variable to cache the cpus_allowed
      weight, since the Hamming calculation is probably relatively expensive.
      We only update the cached value when the mask is updated which should be
      relatively infrequent, especially compared to scheduling frequency
      in the fast path.
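      
      A hedged sketch of the caching and the overload test being described
      (names approximate, not the literal patch):
      
      	/* refresh the cached Hamming weight whenever affinity changes */
      	static void demo_set_cpus_allowed(struct task_struct *p,
      					  cpumask_t *new_mask)
      	{
      		p->cpus_allowed = *new_mask;
      		p->nr_cpus_allowed = cpus_weight(*new_mask); /* 1 => pinned */
      	}
      
      	/* only declare rt-overload when pulling/pushing could actually help */
      	static inline int demo_rt_overloaded(struct rq *rq)
      	{
      		return rq->rt.rt_nr_running > 1 && rq->rt.rt_nr_migratory > 0;
      	}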
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      73fe6aae
    • softlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks · 82a1fcb9
      Committed by Ingo Molnar
      this patch extends the soft-lockup detector to automatically
      detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
      printed the following way:
      
       ------------------>
       INFO: task prctl:3042 blocked for more than 120 seconds.
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
       prctl         D fd5e3793     0  3042   2997
              f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
              f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
              f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
       Call Trace:
        [<c04883a5>] schedule_timeout+0x6d/0x8b
        [<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
        [<c0133a76>] msleep+0x10/0x16
        [<c0138974>] sys_prctl+0x30/0x1e2
        [<c0104c52>] sysenter_past_esp+0x5f/0xa5
        =======================
       2 locks held by prctl/3042:
       #0:  (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
       #1:  (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9
       <------------------
      
      the current default timeout is 120 seconds. Such messages are printed
      up to 10 times per bootup. If the system has crashed already then the
      messages are not printed.
      
      if lockdep is enabled then all held locks are printed as well.
      
      this feature is a natural extension to the softlockup-detector (kernel
      locked up without scheduling) and to the NMI watchdog (kernel locked up
      with IRQs disabled).
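      
      A hedged sketch of the check itself (simplified; the real detector runs
      from a watchdog thread, rate-limits reports and dumps the stack):
      
      	static void demo_check_hung_task(struct task_struct *t,
      					 unsigned long timeout)
      	{
      		unsigned long switch_count = t->nvcsw + t->nivcsw;
      
      		if (t->state != TASK_UNINTERRUPTIBLE)
      			return;
      		if (switch_count != t->last_switch_count) {	/* it ran recently */
      			t->last_switch_count = switch_count;
      			t->last_switch_timestamp = jiffies;	/* field names approximate */
      			return;
      		}
      		if (time_after(jiffies, t->last_switch_timestamp + timeout * HZ))
      			printk(KERN_ERR "INFO: task %s:%d blocked for more than %lu seconds.\n",
      			       t->comm, t->pid, timeout);
      	}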
      
      [ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
      [ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      82a1fcb9
    • cpu-hotplug: fix build on !CONFIG_SMP · d0d23b54
      Committed by Ingo Molnar
      fix build on !CONFIG_SMP.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d0d23b54
    • cpu-hotplug: replace per-subsystem mutexes with get_online_cpus() · 95402b38
      Committed by Gautham R Shenoy
      This patch converts the known per-subsystem mutexes to get_online_cpus()/
      put_online_cpus(). It also eliminates the CPU_LOCK_ACQUIRE and
      CPU_LOCK_RELEASE hotplug notification events.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      95402b38
    • cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus() · 86ef5c9a
      Committed by Gautham R Shenoy
      Replace all lock_cpu_hotplug/unlock_cpu_hotplug calls in the kernel with
      get_online_cpus and put_online_cpus, as these names highlight the
      refcount semantics of these operations.
      
      The new API guarantees protection against the cpu-hotplug operation, but
      it doesn't guarantee serialized access to any of the local data
      structures. Hence the changes need to be reviewed.
      
      In case of pseries_add_processor/pseries_remove_processor, use
      cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
      cpu_present_map there.
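      
      A hedged sketch of the replacement pattern (the caller is hypothetical;
      get_online_cpus()/put_online_cpus() are the interfaces named above):
      
      	/* before: lock_cpu_hotplug(); ... ; unlock_cpu_hotplug(); */
      	static void walk_online_cpus(void)
      	{
      		int cpu;
      
      		get_online_cpus();		/* pin the set of online cpus */
      		for_each_online_cpu(cpu)
      			do_per_cpu_work(cpu);	/* placeholder */
      		put_online_cpus();		/* drop the reference */
      	}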
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      86ef5c9a
    • cpu-hotplug: refcount based cpu hotplug · d221938c
      Committed by Gautham R Shenoy
      This patch implements a Refcount + Waitqueue based model for
      cpu-hotplug.
      
      Now, a thread which wants to prevent cpu-hotplug will bump up a global
      refcount, and a thread which wants to perform a cpu-hotplug operation
      will block till the global refcount goes to zero.
      
      The readers, if any, during an ongoing cpu-hotplug operation are blocked
      until the cpu-hotplug operation is over.
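      
      A hedged, greatly simplified sketch of the model (illustrative names; the
      real code also lets the active hotplug writer re-enter the reader path
      without blocking itself):
      
      	static DEFINE_MUTEX(hotplug_lock);
      	static int hotplug_refcount;
      	static DECLARE_WAIT_QUEUE_HEAD(hotplug_wq);
      
      	void demo_get_online_cpus(void)		/* "reader": hold off hotplug */
      	{
      		mutex_lock(&hotplug_lock);
      		hotplug_refcount++;
      		mutex_unlock(&hotplug_lock);
      	}
      
      	void demo_put_online_cpus(void)
      	{
      		mutex_lock(&hotplug_lock);
      		if (--hotplug_refcount == 0)
      			wake_up(&hotplug_wq);	/* last reader is gone */
      		mutex_unlock(&hotplug_lock);
      	}
      
      	/* The hotplug "writer" (cpu_up/cpu_down) repeatedly drops the lock and
      	 * sleeps on hotplug_wq until hotplug_refcount reaches zero, then keeps
      	 * the lock held for the whole operation so new readers must wait. */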
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Paul Jackson <pj@sgi.com> [For !CONFIG_HOTPLUG_CPU ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d221938c
    • sched: group scheduler, fix fairness of cpu bandwidth allocation for task groups · 6b2d7700
      Committed by Srivatsa Vaddagiri
      The current load balancing scheme isn't good enough for precise
      group fairness.
      
      For example: on an 8-cpu system, I created 3 groups as under:
      
      	a = 8 tasks (cpu.shares = 1024)
      	b = 4 tasks (cpu.shares = 1024)
      	c = 3 tasks (cpu.shares = 1024)
      
      a, b and c are task groups that have equal weight. We would expect each
      of the groups to receive 33.33% of cpu bandwidth under a fair scheduler.
      
      This is what I get with the latest scheduler git tree:
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 277.676 | 57.8% | 54.1%  54.1%  54.1%  54.2%  56.7%  62.2%  62.8% 64.5%
      b     | 116.108 | 24.2% | 47.4%  48.1%  48.7%  49.3%
      c     |  86.326 | 18.0% | 47.5%  47.9%  48.5%
      --------------------------------------------------------------------------------
      
      Explanation of o/p:
      
      Col1 -> Group name
      Col2 -> Cumulative execution time (in seconds) received by all tasks of that
      	group in a 60sec window across 8 cpus
      Col3 -> CPU bandwidth received by the group in the 60sec window, expressed in
              percentage. Col3 data is derived as:
      		Col3 = 100 * Col2 / (NR_CPUS * 60)
      Col4 -> CPU bandwidth received by each individual task of the group.
      		Col4 = 100 * cpu_time_recd_by_task / 60
      
      [I can share the test case that produces a similar o/p if reqd]
      
      The deviation from desired group fairness is as below:
      
      	a = +24.47%
      	b = -9.13%
      	c = -15.33%
      
      which is quite high.
      
      After the patch below is applied, here are the results:
      
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 163.112 | 34.0% | 33.2%  33.4%  33.5%  33.5%  33.7%  34.4%  34.8% 35.3%
      b     | 156.220 | 32.5% | 63.3%  64.5%  66.1%  66.5%
      c     | 160.653 | 33.5% | 85.8%  90.6%  91.4%
      --------------------------------------------------------------------------------
      
      Deviation from desired group fairness is as below:
      
      	a = +0.67%
      	b = -0.83%
      	c = +0.17%
      
      which is far better IMO. Most of the other runs have yielded a deviation within
      +-2% at the most, which is good.
      
      Why do we see bad (group) fairness with the current scheduler?
      ==============================================================
      
      Currently a cpu's weight is just the summation of individual task weights.
      This can yield incorrect results. For example, consider three groups as
      below on a 2-cpu system:
      
      	CPU0	CPU1
      ---------------------------
      	A (10)  B(5)
      		C(5)
      ---------------------------
      
      Group A has 10 tasks, all on CPU0; Groups B and C have 5 tasks each, all
      of which are on CPU1. Each task has the same weight (NICE_0_LOAD =
      1024).
      
      The current scheme would yield a cpu weight of 10240 (10*1024) for each cpu and
      the load balancer will think both CPUs are perfectly balanced and won't
      move around any tasks. This, however, would yield this bandwidth:
      
      	A = 50%
      	B = 25%
      	C = 25%
      
      which is not the desired result.
      
      What's changing in the patch?
      =============================
      
      	- How cpu weights are calculated when CONFIG_FAIR_GROUP_SCHED is
      	  defined (see below)
      	- API Change
      		- Two tunables introduced in sysfs (under SCHED_DEBUG) to
      		  control the frequency at which the load balance monitor
      		  thread runs.
      
      The basic change made in this patch is how cpu weight (rq->load.weight) is
      calculated. It is now calculated as the summation of group weights on a cpu,
      rather than summation of task weights. Weight exerted by a group on a
      cpu is dependent on the shares allocated to it and also the number of
      tasks the group has on that cpu compared to the total number of
      (runnable) tasks the group has in the system.
      
      Let,
      	W(K,i)  = Weight of group K on cpu i
      	T(K,i)  = Task load present in group K's cfs_rq on cpu i
      	T(K)    = Total task load of group K across various cpus
      	S(K) 	= Shares allocated to group K
      	NRCPUS	= Number of online cpus in the scheduler domain to
      	 	  which group K is assigned.
      
      Then,
      	W(K,i) = S(K) * NRCPUS * T(K,i) / T(K)
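      
      As a worked check of the formula against the earlier 2-cpu example
      (taking S(K) = 1024 for every group, NRCPUS = 2, NICE_0_LOAD = 1024):
      
      	W(A,0) = 1024 * 2 * (10*1024)/(10*1024) = 2048
      	W(B,1) = 1024 * 2 * (5*1024)/(5*1024)   = 2048
      	W(C,1) = 1024 * 2 * (5*1024)/(5*1024)   = 2048
      
      	CPU0 weight = 2048, CPU1 weight = 2048 + 2048 = 4096
      
      So the imbalance between the two cpus becomes visible to the load
      balancer, whereas summing raw task weights made both cpus look identical
      (10240 each).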
      
      A load balance monitor thread is created at bootup, which periodically
      runs and adjusts each group's weight on each cpu. To avoid its overhead, two
      min/max tunables are introduced (under SCHED_DEBUG) to control the rate
      at which it runs.
      
      Fixes from: Peter Zijlstra <a.p.zijlstra@chello.nl>
      
      - don't start the load_balance_monitor when there is only a single cpu.
      - rename the kthread because its name is currently longer than TASK_COMM_LEN
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6b2d7700
  2. 25 January 2008 (21 commits)