1. 06 Jun 2008, 2 commits
    • sched: make !hrtick faster · f333fdc9
      Committed by Mike Galbraith
      It is safe to ignore hrtick timers and flags when the feature is
      disabled, so skip that work entirely on the !hrtick path.
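
      A minimal sketch of the idea, assuming a runtime feature bitmask; the
      names (SCHED_FEAT_HRTICK, hrtick_start()) are illustrative, not the
      literal patch:

          #include <stdbool.h>

          #define SCHED_FEAT_HRTICK (1u << 0)

          static unsigned int sched_features;   /* runtime feature bits */

          static bool hrtick_enabled(void)
          {
              return sched_features & SCHED_FEAT_HRTICK;
          }

          static void hrtick_start(long delay_ns)
          {
              (void)delay_ns;
              if (!hrtick_enabled())
                  return;   /* !hrtick: touch no timers, set no flags */
              /* ... arm the high-resolution preemption timer ... */
          }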
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • sched: prioritize non-migratable tasks over migratable ones · 45c01e82
      Committed by Gregory Haskins
      Dmitry Adamushko pointed out a known flaw in the rt-balancing algorithm
      that could allow suboptimal balancing if a non-migratable task gets
      queued behind a running migratable one.  It is discussed in this thread:
      
      http://lkml.org/lkml/2008/4/22/296
      
      This issue has been further exacerbated by a recent checkin to
      sched-devel (git-id 5eee63a5ebc19a870ac40055c0be49457f3a89a3).
      
      From a pure priority standpoint, the run-queue is doing the "right"
      thing. Using Dmitry's nomenclature, if T0 is on cpu1 first, and T1
      wakes up at equal or lower priority (affined only to cpu1) later, it
      *should* wait for T0 to finish.  However, in reality that is likely
      suboptimal from a system perspective if there are other cores that
      could allow T0 and T1 to run concurrently.  Since T1 cannot migrate,
      the only choice for higher concurrency is to try to move T0.  This is
      not something we addressed in the recent rt-balancing rework.
      
      This patch tries to enhance the balancing algorithm by accommodating this
      scenario.  It accomplishes this by incorporating the migratability of a
      task into its priority calculation.  Within a given numerical tsk->prio, a
      non-migratable task is logically higher than a migratable one.  We
      maintain this by introducing a new per-priority queue (xqueue, or
      exclusive-queue) for holding non-migratable tasks.  The scheduler will
      draw from the xqueue over the standard shared-queue (squeue) when
      available.
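
      A hedged sketch of the queue structure described above (xqueue and
      squeue are the commit's names; the layout and helper are illustrative):

          #include <stdbool.h>
          #include <stddef.h>

          struct rt_task {
              struct rt_task *next;
              bool migratable;        /* may this task run on another CPU? */
          };

          /* One pair of FIFOs per RT priority level. */
          struct prio_queue {
              struct rt_task *xqueue; /* exclusive: non-migratable tasks */
              struct rt_task *squeue; /* shared: migratable tasks */
          };

          static struct rt_task *pick_next(struct prio_queue *q)
          {
              /* At the same tsk->prio, a non-migratable task is logically
               * higher: drain the xqueue before the squeue. */
              return q->xqueue ? q->xqueue : q->squeue;
          }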
      
      There are several details for utilizing this properly.
      
       1) During task-wake-up, we not only need to check if the priority
          preempts the current task, but we also need to check for this
          non-migratable condition.  Therefore, if a non-migratable task wakes
          up and sees an equal priority migratable task already running, it
          will attempt to preempt it *if* there is a likelihood that the
          current task will find an immediate home (see the sketch after
          this list).
      
      2) Tasks only get this non-migratable "priority boost" on wake-up.  Any
         requeuing will result in the non-migratable task being queued to the
         end of the shared queue.  This is an attempt to prevent the system
         from being completely unfair to migratable tasks during things like
         SCHED_RR timeslicing.
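
      A sketch of the wake-up rule from item 1 above; the struct and the
      idle-CPU test are illustrative stand-ins for the real affinity and
      cpupri machinery:

          #include <stdbool.h>

          struct rt_task {
              int prio;             /* numeric RT priority, lower is higher */
              bool migratable;
          };

          static bool preempt_on_wakeup(const struct rt_task *waking,
                                        const struct rt_task *curr,
                                        int idle_cpus_nearby)
          {
              if (waking->prio < curr->prio)
                  return true;      /* ordinary priority preemption */

              /* Equal priority: a pinned waker may bump a migratable
               * current task, but only if curr likely finds a new home. */
              return waking->prio == curr->prio &&
                     !waking->migratable &&
                     curr->migratable &&
                     idle_cpus_nearby > 0;
          }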
      
      I am sure this patch introduces potentially "odd" behavior if you
      concoct a scenario where a bunch of non-migratable threads could starve
      migratable ones given the right pattern.  I am not yet convinced that
      this is a problem, since we are talking about tasks of equal RT priority
      anyway, and there never is much in the way of guarantees against
      starvation under that scenario.  (E.g. you could come up with a
      similar scenario with a specific timing environment versus an affinity
      environment.)  I can be convinced otherwise, but for now I think this is
      "ok".
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      CC: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  2. 29 May 2008, 4 commits
  3. 15 May 2008, 1 commit
  4. 12 May 2008, 1 commit
    • Add new 'cond_resched_bkl()' helper function · c3921ab7
      Committed by Linus Torvalds
      It acts exactly like a regular 'cond_resched()', but will not get
      optimized away when CONFIG_PREEMPT is set.
      
      Normal kernel code is already preemptible in the presence of
      CONFIG_PREEMPT, so cond_resched() is optimized away (see commit
      02b67cc3 "sched: do not do
      cond_resched() when CONFIG_PREEMPT").
      
      But when wanting to conditionally reschedule while holding a lock, you
      need to use "cond_resched_lock(lock)", and the new function is the BKL
      equivalent of that.
      
      Also make fs/locks.c use it.
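
      A hedged usage sketch (the list walk and the examine_lock() helper are
      made up; lock_kernel(), unlock_kernel() and cond_resched_bkl() are the
      real interfaces):

          #include <linux/fs.h>          /* struct inode, struct file_lock */
          #include <linux/smp_lock.h>    /* lock_kernel()/unlock_kernel() */
          #include <linux/sched.h>       /* cond_resched_bkl() */

          static void examine_lock(struct file_lock *fl);   /* illustrative */

          static void scan_posix_locks(struct inode *inode)
          {
              struct file_lock *fl;

              lock_kernel();
              for (fl = inode->i_flock; fl; fl = fl->fl_next) {
                  examine_lock(fl);
                  /* Unlike cond_resched(), not a no-op under CONFIG_PREEMPT;
                   * schedule() drops and retakes the BKL via tsk->lock_depth. */
                  cond_resched_bkl();
              }
              unlock_kernel();
          }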
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 11 May 2008, 1 commit
    • BKL: revert back to the old spinlock implementation · 8e3e076c
      Committed by Linus Torvalds
      The generic semaphore rewrite had a huge performance regression on AIM7
      (and potentially other BKL-heavy benchmarks) because the generic
      semaphores had been rewritten to be simple to understand and fair.  The
      latter, in particular, turns a semaphore-based BKL implementation into a
      mess of scheduling.
      
      The attempt to fix the performance regression failed miserably (see the
      previous commit 00b41ec2 'Revert
      "semaphore: fix"'), and so for now the simple and sane approach is to
      instead just go back to the old spinlock-based BKL implementation that
      never had any issues like this.
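
      For reference, a simplified sketch of the spinlock-based BKL being
      restored (modeled on the historical lib/kernel_lock.c; irq handling
      and the preemption dance are omitted):

          #include <linux/spinlock.h>
          #include <linux/sched.h>       /* current, ->lock_depth */

          static DEFINE_SPINLOCK(kernel_flag);

          void lock_kernel(void)
          {
              int depth = current->lock_depth + 1;

              if (likely(!depth))              /* outermost acquisition */
                  spin_lock(&kernel_flag);
              current->lock_depth = depth;     /* recursion count, -1 == unheld */
          }

          void unlock_kernel(void)
          {
              if (--current->lock_depth < 0)   /* outermost release */
                  spin_unlock(&kernel_flag);
          }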
      
      According to Yanmin Zhang, this patch also fixes the regression
      completely, unlike the semaphore hack, which still left a regression
      of a couple of percentage points.
      
      As a spinlock, the BKL obviously has the potential to be a latency
      issue, but it's not really any different from any other spinlock in that
      respect.  We do want to get rid of the BKL asap, but that has been the
      plan for several years.
      
      These days, the biggest users are in the tty layer (open/release in
      particular) and Alan holds out some hope:
      
        "tty release is probably a few months away from getting cured - I'm
         afraid it will almost certainly be the very last user of the BKL in
         tty to get fixed as it depends on everything else being sanely locked."
      
      so while we're not there yet, we do have a plan of action.
      Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: Alexander Viro <viro@ftp.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 06 May 2008, 10 commits
  7. 01 May 2008, 1 commit
    • rename div64_64 to div64_u64 · 6f6d6a1a
      Committed by Roman Zippel
      Rename div64_64 to div64_u64 to make it consistent with the other divide
      functions, so that the name clearly indicates the operand type.  Move its
      definition to math64.h, as currently no architecture overrides the generic
      implementation.  They can still override it, of course, but the duplicated
      declarations are avoided.
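
      An illustrative caller (the function and its purpose are made up;
      div64_u64() and math64.h are the real interface):

          #include <linux/math64.h>

          /* u64/u64 division that also works on 32-bit architectures,
           * which cannot open-code a 64-bit divide. */
          static u64 mean_latency_ns(u64 total_ns, u64 nr_samples)
          {
              if (!nr_samples)
                  return 0;
              return div64_u64(total_ns, nr_samples);   /* was div64_64() */
          }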
      Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
      Cc: Avi Kivity <avi@qumranet.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 29 Apr 2008, 2 commits
  9. 25 Apr 2008, 4 commits
  10. 23 Apr 2008, 1 commit
  11. 20 Apr 2008, 13 commits