1. 09 Aug 2007 (7 commits)
    • sched: fix bug in balance_tasks() · a4ac01c3
      Peter Williams authored
      There are two problems with balance_tasks() and how it is used:
      
      1. The variables best_prio and best_prio_seen (inherited from the old
      move_tasks()) were only required to handle problems caused by the
      active/expired arrays, the order in which they were processed and the
      possibility that the task with the highest priority could be on either.
        These issues are no longer present and the extra overhead associated
      with their use is unnecessary (and possibly wrong).
      
      2. When CONFIG_FAIR_GROUP_SCHED is not set, the same this_best_prio
      variable needs to be used by all scheduling classes or there is a risk
      of moving too much load.  E.g. if the highest priority task on this_rq
      at the beginning is a fairly low priority task and the rt class
      migrates a task (during its turn), then that moved task becomes the
      new highest priority task on this_rq; but when the sched_fair class
      initializes its copy of this_best_prio it will get the priority of the
      original highest priority task because, due to the run queue locks
      being held, the reschedule triggered by pull_task() will not have
      taken place.  This could result in skip_for_load being inappropriately
      overridden and excessive load being moved.
      
      The attached patch addresses these problems by deleting all references
      to best_prio and best_prio_seen and by making this_best_prio a
      reference parameter to the various functions involved (the pattern is
      sketched after this entry).
      
      load_balance_fair() has also been modified so that this_best_prio is
      only reset (in the loop) if CONFIG_FAIR_GROUP_SCHED is set.  This should
      preserve the effect of helping spread groups' higher priority tasks
      around the available CPUs while improving system performance when
      CONFIG_FAIR_GROUP_SCHED isn't set.
      Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
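      A minimal, self-contained sketch of the by-reference pattern described
      above, using made-up names (fake_task, class_load_balance) rather than
      the real kernel code: both scheduling-class passes update one shared
      this_best_prio through a pointer instead of each pass starting from
      its own stale copy.

          #include <stdio.h>

          struct fake_task { int prio; };    /* lower value = higher priority */

          /* One balancing pass for a single (hypothetical) scheduling class:
           * it records the best priority it has pulled into *this_best_prio. */
          static void class_load_balance(const struct fake_task *pulled, int n,
                                         int *this_best_prio)
          {
                  for (int i = 0; i < n; i++)
                          if (pulled[i].prio < *this_best_prio)
                                  *this_best_prio = pulled[i].prio;
          }

          int main(void)
          {
                  struct fake_task rt_pull[]   = { { .prio = 10 } };
                  struct fake_task fair_pull[] = { { .prio = 120 }, { .prio = 130 } };
                  int this_best_prio = 140;  /* best task already on this_rq */

                  /* Both classes share the same variable via the pointer, so the
                   * fair-class pass starts from what the rt-class pass left. */
                  class_load_balance(rt_pull, 1, &this_best_prio);
                  class_load_balance(fair_pull, 2, &this_best_prio);

                  printf("this_best_prio after both passes: %d\n", this_best_prio);
                  return 0;
          }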
    • sched: remove binary sysctls from kernel.sched_domain · e0361851
      Alexey Dobriyan authored
      The kernel.sched_domain hierarchy is under CTL_UNNUMBERED and thus
      unreachable via sysctl(2).  Generating .ctl_number's in such a
      situation is not useful.
      Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: schedule() speedup · 8e717b19
      Ingo Molnar authored
      Speed up schedule(): share the 'now' parameter that deactivate_task()
      was calculating internally (the pattern is sketched after this entry).

      (This also fixes the small accounting window between the deactivate
      call and the pick_next_task() call.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
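      A small user-space sketch of the sharing described above, with
      hypothetical fake_* names standing in for the scheduler internals: the
      caller samples the clock once and hands the same 'now' value to both
      helpers, so there is no accounting window between two separate reads.

          #include <stdio.h>
          #include <time.h>

          static unsigned long long fake_rq_clock(void)
          {
                  struct timespec ts;

                  clock_gettime(CLOCK_MONOTONIC, &ts);
                  return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
          }

          /* After the change: 'now' is passed in instead of recomputed inside. */
          static void fake_deactivate_task(unsigned long long now)
          {
                  printf("deactivate at %llu\n", now);
          }

          static void fake_pick_next_task(unsigned long long now)
          {
                  printf("pick next at %llu (same timestamp, no window)\n", now);
          }

          int main(void)
          {
                  unsigned long long now = fake_rq_clock();  /* sampled once */

                  fake_deactivate_task(now);
                  fake_pick_next_task(now);
                  return 0;
          }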
    • sched: uninline rq_clock() · 7bfd0485
      Ingo Molnar authored
      uninline rq_clock() to save 263 bytes of code (the change is sketched
      after this entry):
      
         text    data     bss     dec     hex filename
         39561    3642      24   43227    a8db sched.o.before
         39298    3642      24   42964    a7d4 sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
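      A tiny illustration of the change (hypothetical helper, not the real
      rq_clock()): dropping the inline hint leaves one shared out-of-line
      copy of the body instead of a copy potentially expanded at every call
      site, which is where a saving like the 263 bytes above comes from.

          #include <stdio.h>

          /* Before (hypothetical): the body may be duplicated at each call site. */
          static inline unsigned long long clock_add_inline(unsigned long long base,
                                                            unsigned long long delta)
          {
                  return base + delta;
          }

          /* After: a plain static function; all callers share one copy of the body. */
          static unsigned long long clock_add_uninlined(unsigned long long base,
                                                        unsigned long long delta)
          {
                  return base + delta;
          }

          int main(void)
          {
                  printf("%llu %llu\n", clock_add_inline(100, 5),
                         clock_add_uninlined(100, 5));
                  return 0;
          }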
    • sched: clean up sched_getaffinity() · 9531b62f
      Ulrich Drepper authored
      Here's another tiny cleanup.  The generated code is not affected (gcc
      is smart enough), but for people looking over the code it is just
      irritating to have the extra conditional.
      Signed-off-by: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: simplify move_tasks() · 43010659
      Peter Williams authored
      The move_tasks() function is currently multiplexed with two distinct
      capabilities:
      
      1. attempt to move a specified amount of weighted load from one run
      queue to another; and
      2. attempt to move a specified number of tasks from one run queue to
      another.
      
      The first of these capabilities is used in two places, load_balance()
      and load_balance_idle(), and in both of these cases the return value of
      move_tasks() is used purely to decide if tasks/load were moved and no
      notice of the actual number of tasks moved is taken.
      
      The second capability is used in exactly one place,
      active_load_balance(), to attempt to move exactly one task and, as
      before, the return value is only used as an indicator of success or failure.
      
      This multiplexing of move_tasks() was introduced, by me, as part of
      the smpnice patches and was motivated by the fact that the
      alternative, one function to move a specified load and one to move a
      single task, would have led to two functions of roughly the same
      complexity as the old move_tasks() (or the new balance_tasks()).
      However, the modular design of the new CFS scheduler allows a simpler
      solution to be adopted, and this patch implements that solution by:
      
      1. adding a new function, move_one_task(), to be used by
      active_load_balance(); and
      2. making move_tasks() a single purpose function that tries to move a
      specified weighted load and returns 1 for success and 0 for failure.
      
      One of the consequences of these changes is that neither
      move_one_task() nor the new move_tasks() cares how many tasks
      sched_class.load_balance() moves, and this enables its interface to be
      simplified by returning the amount of load moved as its result and
      removing the load_moved pointer from the argument list.  This helps
      simplify the new move_tasks() and slightly reduces the amount of work
      done in each of sched_class.load_balance()'s implementations (the
      resulting structure is sketched after this entry).
      
      Further simplifications, e.g. changes to balance_tasks(), are possible
      but are (slightly) complicated by the special needs of
      load_balance_fair(), so I've left them to a later patch (if this one
      gets accepted).
      
      NB: since move_tasks() gets called with two run queue locks held, even
      small reductions in overhead are worthwhile.
      
      [ mingo@elte.hu ]
      
      this change also reduces code size nicely:
      
         text    data     bss     dec     hex filename
         39216    3618      24   42858    a76a sched.o.before
         39173    3618      24   42815    a73f sched.o.after
      Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
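      A compact, self-contained sketch of the resulting split, with
      hypothetical fake_* names standing in for the kernel types and
      functions: move_tasks() is single purpose (move up to a specified
      weighted load, return 1/0), move_one_task() covers the
      active_load_balance() case, and the per-class balancer returns the
      load it moved instead of writing through a load_moved pointer.

          #include <stdio.h>

          struct fake_rq { unsigned long weighted_load; };

          /* Stand-in for sched_class.load_balance(): returns the load it moved. */
          static unsigned long fake_class_load_balance(struct fake_rq *busiest,
                                                       struct fake_rq *this_rq,
                                                       unsigned long max_load_move)
          {
                  unsigned long moved = busiest->weighted_load < max_load_move ?
                                        busiest->weighted_load : max_load_move;

                  busiest->weighted_load -= moved;
                  this_rq->weighted_load += moved;
                  return moved;
          }

          /* Single purpose: move up to max_load_move of weighted load; 1 on success. */
          static int fake_move_tasks(struct fake_rq *busiest, struct fake_rq *this_rq,
                                     unsigned long max_load_move)
          {
                  unsigned long total = 0;

                  while (total < max_load_move && busiest->weighted_load > 0)
                          total += fake_class_load_balance(busiest, this_rq,
                                                           max_load_move - total);
                  return total > 0;
          }

          /* Single purpose: move exactly one task (modelled here as one unit of load). */
          static int fake_move_one_task(struct fake_rq *busiest, struct fake_rq *this_rq)
          {
                  return fake_class_load_balance(busiest, this_rq, 1) > 0;
          }

          int main(void)
          {
                  struct fake_rq busiest = { .weighted_load = 10 }, this_rq = { 0 };

                  printf("move_tasks: %d, move_one_task: %d\n",
                         fake_move_tasks(&busiest, &this_rq, 4),
                         fake_move_one_task(&busiest, &this_rq));
                  return 0;
          }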
    • sched: reorder update_cpu_load(rq) with the ->task_tick() call · f1a438d8
      Ingo Molnar authored
      Peter Williams suggested flipping the order of the update_cpu_load(rq)
      and ->task_tick() calls.  This is a NOP for the current scheduler (the
      two functions are independent of each other), but ->task_tick() might
      create some state for update_cpu_load() in the future (or in
      PlugSched).  The new ordering is sketched after this entry.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
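      A minimal user-space sketch of the flipped ordering, with made-up
      fake_* names: the per-class tick hook runs first, so any state it
      leaves behind is visible to the subsequent load update.

          #include <stdio.h>

          struct fake_rq { unsigned long cpu_load; unsigned long pending; };

          static void fake_task_tick(struct fake_rq *rq)
          {
                  /* The tick hook may leave state behind for the load update. */
                  rq->pending = 42;
          }

          static void fake_update_cpu_load(struct fake_rq *rq)
          {
                  /* Runs after ->task_tick(), so it can see 'pending'. */
                  rq->cpu_load = rq->pending;
          }

          int main(void)
          {
                  struct fake_rq rq = { 0, 0 };

                  fake_task_tick(&rq);          /* flipped: tick hook first ... */
                  fake_update_cpu_load(&rq);    /* ... then the load update */

                  printf("cpu_load = %lu\n", rq.cpu_load);
                  return 0;
          }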
  2. 02 Aug 2007 (8 commits)
  3. 01 Aug 2007 (1 commit)
  4. 26 Jul 2007 (4 commits)
  5. 20 Jul 2007 (4 commits)
  6. 18 Jul 2007 (1 commit)
    • Freezer: make kernel threads nonfreezable by default · 83144186
      Rafael J. Wysocki authored
      Currently, the freezer treats all tasks as freezable, except for the kernel
      threads that explicitly set the PF_NOFREEZE flag for themselves.  This
      approach is problematic, since it requires every kernel thread either
      to set PF_NOFREEZE explicitly or to call try_to_freeze(), even if it
      doesn't care about the freezing of tasks at all.
      
      It seems better to only require the kernel threads that want to or need to
      be frozen to use some freezer-related code and to remove any
      freezer-related code from the other (nonfreezable) kernel threads, which is
      done in this patch.
      
      The patch causes all kernel threads to be nonfreezable by default
      (i.e. to have PF_NOFREEZE set by default) and introduces the
      set_freezable() function that should be called by freezable kernel
      threads in order to unset PF_NOFREEZE (the opt-in pattern is sketched
      after this entry).  It also makes all of the currently freezable
      kernel threads call set_freezable(), so it shouldn't cause any
      (intentional) change of behaviour.  Additionally, it updates the
      documentation to describe the freezing of tasks more accurately.
      
      [akpm@linux-foundation.org: build fixes]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
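      A short kernel-style fragment (to be built as part of a module or the
      kernel tree, not standalone user-space code) showing the opt-in
      pattern this patch establishes; the thread function itself is
      hypothetical, while set_freezable(), try_to_freeze() and
      kthread_should_stop() are the real interfaces referred to above.

          #include <linux/kthread.h>
          #include <linux/freezer.h>
          #include <linux/sched.h>

          static int my_freezable_thread(void *data)   /* hypothetical kthread body */
          {
                  set_freezable();        /* opt in: clears PF_NOFREEZE for this thread */

                  while (!kthread_should_stop()) {
                          try_to_freeze();        /* park here while tasks are frozen */

                          /* ... do one unit of work ... */

                          schedule_timeout_interruptible(HZ);     /* sleep ~1 second */
                  }
                  return 0;
          }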
  7. 16 Jul 2007 (3 commits)
  8. 14 Jul 2007 (4 commits)
  9. 10 Jul 2007 (8 commits)