1. 21 Jan 2010, 1 commit
  2. 23 Dec 2009, 1 commit
  3. 21 Dec 2009, 2 commits
  4. 17 Dec 2009, 12 commits
    • sched: Fix broken assertion · 077614ee
      Committed by Peter Zijlstra
      There's a preemption race in the set_task_cpu() debug check: if we
      get preempted after setting task->state, we are still on the rq
      proper, yet the test fails.
      
      Check for preempted tasks, since those are always on the RQ.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20091217121830.137155561@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      077614ee
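      A minimal sketch of the relaxed assertion described above, assuming the
      usual PREEMPT_ACTIVE / task_thread_info() conventions; the surrounding
      set_task_cpu() code is not reproduced:

        /*
         * Sketch: a preempted task is still on its runqueue even though
         * task->state may already have been changed, so exempt tasks
         * carrying PREEMPT_ACTIVE from the "must be runnable" check.
         */
        WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING &&
                     !(task_thread_info(p)->preempt_count & PREEMPT_ACTIVE));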
    • sched: Teach might_sleep() about preemptible RCU · 234da7bc
      Committed by Frederic Weisbecker
      In practice, it is harmless to voluntarily sleep in an
      rcu_read_lock() section when running under preemptible RCU, but it
      is illegal if the kernel is built with non-preemptible RCU.
      
      Currently, might_sleep() doesn't notice sleepable operations inside
      rcu_read_lock() sections when running under preemptible RCU,
      because preempt_count() is left untouched by rcu_read_lock() in
      that case. But we want developers who test their changes under
      such a config to notice these "sleeping while atomic" issues.
      
      So we add rcu_read_lock_nesting to the preempt_count() value used
      in the might_sleep() checks (sketched below).
      
      [ v2: Handle rcu-tiny ]
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <1260991265-8451-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      234da7bc
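      A minimal sketch of the idea. The helper name and config symbol are
      plausible placeholders rather than a quote of the patch; the point is
      that preemptible-RCU builds contribute current->rcu_read_lock_nesting
      to the atomicity check while other builds contribute 0:

        #ifdef CONFIG_TREE_PREEMPT_RCU
        /* Preemptible RCU tracks read-side nesting in the task itself. */
        # define rcu_preempt_depth()   (current->rcu_read_lock_nesting)
        #else
        /* Non-preemptible RCU already disables preemption in rcu_read_lock(). */
        # define rcu_preempt_depth()   (0)
        #endif

        static int preempt_count_equals(int preempt_offset)
        {
                /* Fold the RCU read-side nesting into the preempt count. */
                int nested = (preempt_count() & ~PREEMPT_ACTIVE) +
                             rcu_preempt_depth();

                return nested == preempt_offset;
        }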
    • sched: Make warning less noisy · 416eb395
      Committed by Ingo Molnar
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170517.807938893@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      416eb395
    • sched: Simplify set_task_cpu() · 738d2be4
      Committed by Peter Zijlstra
      Rearrange the code a bit now that it's a simpler function.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170518.269101883@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      738d2be4
    • sched: Remove the cfs_rq dependency from set_task_cpu() · 88ec22d3
      Committed by Peter Zijlstra
      In order to remove the cfs_rq dependency from set_task_cpu() we
      need to ensure the task is cfs_rq invariant for all callsites.
      
      The simple approach is to subtract cfs_rq->min_vruntime from
      se->vruntime on dequeue, and add cfs_rq->min_vruntime on enqueue
      (see the sketch below).
      
      However, this has the downside of breaking FAIR_SLEEPERS, since we
      lose the old vruntime, as we only maintain the relative position.
      
      To solve this, we observe that we only migrate runnable tasks; we
      do this using deactivate_task(.sleep=0) and activate_task(.wakeup=0),
      so we can restrict the min_vruntime invariance to that state.
      
      The only other case is wakeup balancing: since we want to maintain
      the old vruntime we cannot make it relative on dequeue, but because
      we don't migrate inactive tasks we can do so right before
      activating the task again.
      
      This is where we need the new pre-wakeup hook; we need to call it
      while still holding the old rq->lock. We could fold it into
      ->select_task_rq(), but since that has multiple callsites and would
      obfuscate the locking requirements, that seems like a fudge.
      
      This leaves the fork() case: simply make sure that ->task_fork()
      leaves ->vruntime in a relative state.
      
      This covers all cases where set_task_cpu() gets called, and
      ensures it sees a relative vruntime.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170518.191697025@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      88ec22d3
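      A minimal sketch of the normalization described above. The .sleep and
      .wakeup flag names come from the text; the function bodies are heavily
      abbreviated and purely illustrative:

        static void dequeue_entity(struct cfs_rq *cfs_rq,
                                   struct sched_entity *se, int sleep)
        {
                /* ... */
                /*
                 * Only runnable (non-sleeping) dequeues make vruntime
                 * relative, so a migration via deactivate_task(.sleep=0)
                 * sees a min_vruntime-invariant value.
                 */
                if (!sleep)
                        se->vruntime -= cfs_rq->min_vruntime;
        }

        static void enqueue_entity(struct cfs_rq *cfs_rq,
                                   struct sched_entity *se, int wakeup)
        {
                /* Re-anchor the relative vruntime to the new runqueue. */
                if (!wakeup)
                        se->vruntime += cfs_rq->min_vruntime;
                /* ... */
        }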
    • sched: Add pre and post wakeup hooks · efbbd05a
      Committed by Peter Zijlstra
      As will be apparent in the next patch, we need a pre-wakeup hook
      for sched_fair task migration, hence rename the post-wakeup hook
      and add a pre-wakeup one (sketched below).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170518.114746117@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      efbbd05a
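      A minimal sketch of what such a hook pair can look like in struct
      sched_class; the member names here are illustrative rather than a
      verbatim copy of the patch:

        struct sched_class {
                /* ... */
                /* Pre-wakeup: called with the old rq->lock still held. */
                void (*task_waking)(struct rq *rq, struct task_struct *p);
                /* Post-wakeup: called once the task is on its new rq. */
                void (*task_woken)(struct rq *rq, struct task_struct *p);
                /* ... */
        };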
    • sched: Move kthread_bind() back to kthread.c · 881232b7
      Committed by Peter Zijlstra
      Since kthread_bind() lost its dependencies on sched.c, move it
      back where it came from.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170518.039524041@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      881232b7
    • sched: Fix select_task_rq() vs hotplug issues · 5da9a0fb
      Committed by Peter Zijlstra
      Since select_task_rq() is now responsible for guaranteeing that the
      chosen CPU is in ->cpus_allowed and cpu_active_mask, we need to
      verify this.
      
      select_task_rq_rt() can blindly return
      smp_processor_id()/task_cpu() without checking the valid masks,
      and select_task_rq_fair() can do the same in the rare case that
      all SD_* flags are disabled.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170517.961475466@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5da9a0fb
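      A minimal sketch of the kind of validation this implies; the helper
      names validate_task_cpu() and select_fallback_rq() and the exact
      checks are illustrative assumptions, not a reproduction of the patch:

        /*
         * Make sure the CPU picked by ->select_task_rq() is both allowed
         * for the task and still active before we commit to it.
         */
        static int validate_task_cpu(struct task_struct *p, int cpu)
        {
                if (!cpumask_test_cpu(cpu, &p->cpus_allowed) ||
                    !cpu_active(cpu))
                        cpu = select_fallback_rq(task_cpu(p), p);

                return cpu;
        }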
    • sched: Fix sched_exec() balancing · 38022906
      Committed by Peter Zijlstra
      Since we access ->cpus_allowed without holding rq->lock, we need a
      retry loop to validate the result (see the sketch below); this
      comes nearly for free when we merge sched_migrate_task() into
      sched_exec(), since that already does the needed check.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170517.884743662@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      38022906
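      A minimal sketch of the retry pattern, assuming the scheduler APIs of
      that era (task_rq_lock(), SD_BALANCE_EXEC); the details are
      abbreviated and illustrative:

        void sched_exec(void)
        {
                struct task_struct *p = current;
                unsigned long flags;
                struct rq *rq;
                int dest_cpu;

        again:
                /* Pick a target CPU without holding rq->lock... */
                dest_cpu = p->sched_class->select_task_rq(p, SD_BALANCE_EXEC, 0);
                if (dest_cpu == smp_processor_id())
                        return;

                /* ...then re-validate ->cpus_allowed under the lock. */
                rq = task_rq_lock(p, &flags);
                if (unlikely(!cpumask_test_cpu(dest_cpu, &p->cpus_allowed))) {
                        task_rq_unlock(rq, &flags);
                        goto again;
                }

                /* ... migrate current to dest_cpu ... */
                task_rq_unlock(rq, &flags);
        }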
    • sched: Ensure set_task_cpu() is never called on blocked tasks · e2912009
      Committed by Peter Zijlstra
      In order to clean up the set_task_cpu() rq dependencies we need
      to ensure it is never called on blocked tasks because such usage
      does not pair with consistent rq->lock usage.
      
      This puts the migration burden on ttwu().
      
      Furthermore we need to close a race against changing
      ->cpus_allowed, since select_task_rq() runs with only preemption
      disabled.
      
      For sched_fork() this is safe because the child isn't in the
      tasklist yet; for wakeup we fix this by synchronizing
      set_cpus_allowed_ptr() against TASK_WAKING, which leaves
      sched_exec() as the remaining problem.
      
      This also closes a hole in commit 6ad4c188 ("sched: Fix balance vs
      hotplug race"), where ->select_task_rq() doesn't validate the
      result against the sched_domain/root_domain.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170517.807938893@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e2912009
    • sched: Use TASK_WAKING for fork wakeups · 06b83b5f
      Committed by Peter Zijlstra
      For later convenience use TASK_WAKING for fresh tasks.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170517.732561278@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      06b83b5f
    • sched: Fix task_hot() test order · e6c8fba7
      Committed by Peter Zijlstra
      Make sure not to access sched_fair fields before verifying it is
      indeed a sched_fair task.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: stable@kernel.org
      LKML-Reference: <20091216170517.577998058@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e6c8fba7
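      A minimal sketch of the corrected ordering: test the scheduling class
      before touching any sched_fair fields. The body is abbreviated and
      illustrative:

        static int task_hot(struct task_struct *p, u64 now,
                            struct sched_domain *sd)
        {
                s64 delta;

                /* Only sched_fair tasks have meaningful p->se state. */
                if (p->sched_class != &fair_sched_class)
                        return 0;

                if (sysctl_sched_migration_cost == -1)
                        return 1;
                if (sysctl_sched_migration_cost == 0)
                        return 0;

                /* ... */
                delta = now - p->se.exec_start;

                return delta < (s64)sysctl_sched_migration_cost;
        }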
  5. 15 Dec 2009, 8 commits
  6. 13 Dec 2009, 2 commits
    • sched: Use pr_fmt() and pr_<level>() · 663997d4
      Committed by Joe Perches
      - Convert printk(KERN_<level> ...) calls to pr_<level>() (not KERN_DEBUG)
      - Add #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
      - Coalesce long format strings
      - Add the missing \n to "ERROR: !SD_LOAD_BALANCE domain has parent"
      Signed-off-by: Joe Perches <joe@perches.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1260655047.2637.7.camel@Joe-Laptop.home>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      663997d4
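      A minimal sketch of what the conversion looks like in a file that
      adopts this pattern (the function name here is purely illustrative):

        /* Must be defined before any include that pulls in printk. */
        #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

        #include <linux/kernel.h>

        static void report_domain_error(void)
        {
                /* Before: printk(KERN_ERR "sched: ERROR: ...\n"); */
                pr_err("ERROR: !SD_LOAD_BALANCE domain has parent\n");
        }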
    • sched: Make wakeup side and atomic variants of completion API irq safe · 7539a3b3
      Committed by Rafael J. Wysocki
      Alan Stern noticed that all the wakeup side (and atomic) variants of the
      completion APIs should be irq safe, but the newly introduced
      completion_done() and try_wait_for_completion() aren't. The use of the
      irq unsafe variants in IRQ contexts can cause crashes/hangs.
      
      Fix the problem by making them use spin_lock_irqsave() and
      spin_unlock_irqrestore(), as sketched below.
      Reported-by: Alan Stern <stern@rowland.harvard.edu>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Zhang Rui <rui.zhang@intel.com>
      Cc: pm list <linux-pm@lists.linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: David Chinner <david@fromorbit.com>
      Cc: Lachlan McIlroy <lachlan@sgi.com>
      LKML-Reference: <200912130007.30541.rjw@sisk.pl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7539a3b3
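      A minimal sketch of the IRQ-safe variant, assuming the lock being
      taken is the completion's wait-queue lock (abbreviated, illustrative):

        bool completion_done(struct completion *x)
        {
                unsigned long flags;
                bool ret = true;

                /*
                 * spin_lock_irqsave() instead of spin_lock_irq(), so the
                 * helper is also safe to call from IRQ context.
                 */
                spin_lock_irqsave(&x->wait.lock, flags);
                if (!x->done)
                        ret = false;
                spin_unlock_irqrestore(&x->wait.lock, flags);

                return ret;
        }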
  7. 11 Dec 2009, 1 commit
    • sched: Remove forced2_migrations stats · b9889ed1
      Committed by Ingo Molnar
      This build warning:
      
       kernel/sched.c: In function 'set_task_cpu':
       kernel/sched.c:2070: warning: unused variable 'old_rq'
      
      Made me realize that the forced2_migrations stat looks pretty
      pointless (and a misnomer) - remove it.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b9889ed1
  8. 10 Dec 2009, 2 commits
  9. 09 Dec 2009, 11 commits