1. 14 Apr 2009 (3 commits)
    • wait: don't use __wake_up_common() · 78ddb08f
      Authored by Johannes Weiner
      '777c6c5f wait: prevent exclusive waiter starvation' made
      __wake_up_common() global so that it could be used from
      abort_exclusive_wait().
      
      It was needed to do a wake-up with the waitqueue lock held while
      passing down a key to the wake-up function.
      
      Since '4ede816a epoll keyed wakeups: add __wake_up_locked_key() and
      __wake_up_sync_key()' there is an appropriate wrapper for this case:
      __wake_up_locked_key().
      
      Use it here and make __wake_up_common() private to the scheduler
      again.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1239720785-19661-1-git-send-email-hannes@cmpxchg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
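      A sketch of the shape of this change at the call site (locking details
      abbreviated; this is an illustration, not the exact patch):

      	/* Before: __wake_up_common() had to be global so the wake-up
      	 * could be issued with the waitqueue lock already held. */
      	if (waitqueue_active(q))
      		__wake_up_common(q, mode, 1, 0, key);

      	/* After: the wrapper added in 4ede816a does the same job, so
      	 * __wake_up_common() can become static to the scheduler again. */
      	if (waitqueue_active(q))
      		__wake_up_locked_key(q, mode, key);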
    • sched: Nominate a power-efficient ilb in select_nohz_balancer() · e790fb0b
      Authored by Gautham R Shenoy
      The CPU that first goes idle becomes the idle load balancer and remains
      in that role until it either picks up a task or all the CPUs of the
      system go idle.
      
      Optimize this further by allowing it to relinquish its post once all of
      its siblings in the power-aware sched_domain go idle, thereby allowing
      the whole package/core to go idle. While relinquishing the post,
      nominate another idle load balancer from a semi-idle core/package.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20090414045535.7645.31641.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
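      A kernel-flavoured sketch of the relinquish check described above; the
      helper name is illustrative, not the patch's:

      	/* If every sibling of the current ilb in its power-aware
      	 * sched_domain is idle, give up the ilb post so the whole
      	 * core/package can enter a low-power state. */
      	static int ilb_should_relinquish(struct sched_domain *sd)
      	{
      		int cpu;

      		for_each_cpu(cpu, sched_domain_span(sd))
      			if (!idle_cpu(cpu))
      				return 0;	/* a sibling is still busy */
      		return 1;			/* all idle: hand over the post */
      	}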
    • sched: Nominate idle load balancer from a semi-idle package · f711f609
      Authored by Gautham R Shenoy
      Currently the nomination of the idle load balancer is done by choosing
      the first idle cpu in the nohz.cpu_mask. This may not be power-efficient,
      since such an idle cpu could come from a completely idle core/package,
      thereby preventing the whole core/package from being in a low-power state.
      
      For example, consider a quad-core dual-package system. The cpu numbering
      need not be sequential and can be something like [0, 2, 4, 6] and
      [1, 3, 5, 7]. With sched_mc/smt_power_savings and power-aware IRQ
      balancing, we try to keep as few packages/cores active as possible. But
      the current idle load balancer logic goes against this by choosing the
      first_cpu in the nohz.cpu_mask without taking the system topology into
      consideration.
      
      Improve the algorithm to nominate the idle load balancer from a
      semi-idle core/package, thereby increasing the probability of the
      cores/packages staying in deeper sleep states for a longer duration.
      
      The algorithm is activated only when sched_mc/smt_power_savings != 0.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20090414045530.7645.12175.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
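      A small userspace model of the nomination policy, for the hypothetical
      [0, 2, 4, 6] / [1, 3, 5, 7] numbering above; all names are illustrative:

      	#include <stdio.h>

      	#define NCPUS 8
      	static int package_of(int cpu) { return cpu & 1; }

      	/* Prefer an idle cpu whose package still has a busy sibling
      	 * (semi-idle) over one from a completely idle package. */
      	static int pick_ilb(const int idle[NCPUS])
      	{
      		int cpu, s, first_idle = -1;

      		for (cpu = 0; cpu < NCPUS; cpu++) {
      			if (!idle[cpu])
      				continue;
      			if (first_idle < 0)
      				first_idle = cpu;
      			for (s = 0; s < NCPUS; s++)
      				if (package_of(s) == package_of(cpu) && !idle[s])
      					return cpu;	/* semi-idle package */
      		}
      		return first_idle;	/* old first-idle policy as fallback */
      	}

      	int main(void)
      	{
      		/* package [0,2,4,6] fully idle, package [1,3,5,7] semi-idle */
      		int idle[NCPUS] = { 1, 1, 1, 0, 1, 1, 1, 1 };
      		printf("ilb = cpu%d\n", pick_ilb(idle));	/* cpu1, not cpu0 */
      		return 0;
      	}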
  2. 01 Apr 2009 (5 commits)
    • epoll keyed wakeups: add __wake_up_locked_key() and __wake_up_sync_key() · 4ede816a
      Authored by Davide Libenzi
      This patchset introduces wakeup hints for some of the most popular (from
      epoll POV) devices, so that epoll code can avoid spurious wakeups on its
      waiters.
      
      The problem with epoll is that the callback-based wakeups do not, at the
      moment, carry any information about the events the wakeup is related to.
      So the only choice epoll has (not being able to call f_op->poll() from
      inside the callback) is to add the file* to a ready-list and resolve the
      real events later on, at epoll_wait() (or its own f_op->poll()) time.
      This can cause spurious wakeups, since the wake_up() itself might be for
      an event the caller is not interested in.
      
      The rate of these spurious wakeups can be pretty high when many
      network sockets are being monitored.
      
      By allowing devices to report the events the wakeups refer to (at least
      the two major classes - POLLIN/POLLOUT), we are able to spare useless
      wakeups by proper handling inside the epoll's poll callback.
      
      Epoll will in any case have to call f_op->poll() on the file* later on,
      since the change needed to have the full event set sent via wakeup is
      too invasive for the way our f_op->poll() system works (the full event
      set is calculated inside the poll function; there are too many of them
      to even start thinking about the change, and poll/select would need
      changes too).
      
      Epoll is changed in a way that both devices which send event hints, and
      the ones that don't, are correctly handled.  The former will gain some
      efficiency though.
      
      As a general rule, devices should add an event mask, using the
      key-aware wakeup macros, when waking up their poll wait queues. I
      tested it (together with the epoll poll-fix patch Andrew has in -mm)
      and wakeups for the supported devices are correctly filtered.
      
      Test program available here:
      
      http://www.xmailserver.org/epoll_test.c
      
      This patch:
      
      Nothing revolutionary here.  Just using the available "key" that our
      wakeup core already supports.  The __wake_up_locked_key() was a
      no-brainer, since both __wake_up_locked() and __wake_up_locked_key()
      are thin wrappers around __wake_up_common().

      The __wake_up_sync() function had a body, so the choice was between
      borrowing the body for __wake_up_sync_key() and calling it from
      __wake_up_sync(), or making an inline and calling it from both.  I chose
      the former since on most archs it all resolves to "mov $0, REG; jmp ADDR".
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
      Cc: William Lee Irwin III <wli@movementarian.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
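      A sketch of the two halves of a keyed wakeup, with names abbreviated
      from the patchset: the device passes an event mask as the wakeup key,
      and epoll's callback drops wakeups whose mask does not overlap the
      watcher's requested events.

      	/* Device side: say which events this wakeup is about. */
      	wake_up_interruptible_poll(&dev->read_wait, POLLIN | POLLRDNORM);

      	/* Epoll callback side: a NULL key means "no hint", so the wakeup
      	 * is processed; a non-overlapping key is spurious for this
      	 * watcher and can be skipped. */
      	if (key && !((unsigned long)key & epi->event.events))
      		goto out_unlock;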
    • sched: Print sched_group::__cpu_power in sched_domain_debug · 46e0bb9c
      Authored by Gautham R Shenoy
      Impact: extend debug info in /proc/sched_debug
      
      If the user changes the value of the sched_mc/smt_power_savings sysfs
      tunable, it'll trigger a rebuilding of the whole sched_domain tree,
      with the SD_POWERSAVINGS_BALANCE flag set at certain levels.
      
      As a result, there would be a change in the __cpu_power of sched_groups
      in the sched_domain hierarchy.
      
      Print the __cpu_power values for each sched_group in sched_domain_debug
      to help verify this change and correlate it with the change in the
      load-balancing behavior.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20090330045520.2869.24777.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • cpuacct: add per-cgroup utime/stime statistics · ef12fefa
      Authored by Bharata B Rao
      Add per-cgroup cpuacct controller statistics like the system and user
      time consumed by the group of tasks.
      
      Changelog:
      
      v7
      - Changed the name of the statistics from utime to user and from stime
        to system, so that in the future we can easily add other statistics
        like irq, softirq and steal times.
      
      v6
      - Fixed a bug in the error path of cpuacct_create() (pointed out by Li Zefan).
      
      v5
      - In cpuacct_stats_show(), use cputime64_to_clock_t() since we are
        operating on a 64bit variable here.
      
      v4
      - Remove comments in cpuacct_update_stats() which explained why rcu_read_lock()
        was needed (as per Peter Zijlstra's review comments).
      - Don't say that percpu_counter_read() is broken in Documentation/cpuacct.txt
        as per KAMEZAWA Hiroyuki's review comments.
      
      v3
      - Fix a small race in the cpuacct hierarchy walk.
      
      v2
      - stime and utime now exported in clock_t units instead of msecs.
      - Addressed the code review comments from Balbir and Li Zefan.
      - Moved to -tip tree.
      
      v1
      - Moved the stime/utime accounting to cpuacct controller.
      
      Earlier versions
      - http://lkml.org/lkml/2009/2/25/129
      Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Signed-off-by: Balaji Rao <balajirrao@gmail.com>
      Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Tested-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      LKML-Reference: <20090331043222.GA4093@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
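      The statistics appear in a cpuacct.stat file with one "user" and one
      "system" line, in clock_t units. A minimal userspace reader, assuming a
      hypothetical /cgroup/grp mount point:

      	#include <stdio.h>
      	#include <unistd.h>

      	int main(void)
      	{
      		/* Path is an assumption; mount point and group name vary. */
      		FILE *f = fopen("/cgroup/grp/cpuacct.stat", "r");
      		char name[32];
      		long long ticks;
      		long hz = sysconf(_SC_CLK_TCK);	/* clock_t ticks per second */

      		if (!f)
      			return 1;
      		while (fscanf(f, "%31s %lld", name, &ticks) == 2)
      			printf("%s: %.2f s\n", name, (double)ticks / hz);
      		fclose(f);
      		return 0;
      	}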
    • posixtimers, sched: Fix posix clock monotonicity · c5f8d995
      Authored by Hidetoshi Seto
      Impact: regression fix (clock_gettime() going backwards)
      
      This patch re-introduces a couple of functions, task_sched_runtime
      and thread_group_sched_runtime, which were removed at the time of
      2.6.28-rc1.
      
      These functions protect the sampling of the thread/process clock with
      the rq lock. The rq lock is required so that rq->clock is not updated
      during the sampling.
      
      i.e.
        clock_gettime() may return
         ((accounted runtime before update) + (delta after update)),
        which is less than what it should be.
      
      v2 -> v3:
      	- Rename static helper function __task_delta_exec()
      	  to do_task_delta_exec() since -tip tree already has
      	  a __task_delta_exec() of different version.
      
      v1 -> v2:
      	- Revises comments of function and patch description.
      	- Add note about accuracy of thread group's runtime.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: stable@kernel.org	[2.6.28.x][2.6.29.x]
      LKML-Reference: <49D1CC93.4080401@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
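      A sketch of the re-introduced sampling helper: holding the rq lock
      across the read keeps rq->clock from advancing between the accounted
      runtime and the delta (abbreviated from the description above):

      	unsigned long long task_sched_runtime(struct task_struct *p)
      	{
      		unsigned long flags;
      		struct rq *rq;
      		u64 ns;

      		rq = task_rq_lock(p, &flags);	/* pin rq->clock */
      		ns = p->se.sum_exec_runtime + do_task_delta_exec(p, rq);
      		task_rq_unlock(rq, &flags);

      		return ns;
      	}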
    • cpuacct: make cpuacct hierarchy walk in cpuacct_charge() safe when rcupreempt is used -v2 · a18b83b7
      Authored by Bharata B Rao
      Impact: fix cgroups race under rcu-preempt
      
      cpuacct_charge() obtains the task's ca and does a hierarchy walk upwards.
      This can race with the task's movement between cgroups. This race can
      cause an access to a freed ca pointer in cpuacct_charge() or an access
      to an invalid cgroup pointer of the task. This will not happen with
      classic RCU or tree RCU, as cpuacct_charge() is called with preemption
      disabled. However, if rcupreempt is used, the race is seen. Thanks to
      Li Zefan for explaining this.
      
      Fix this race by explicitly protecting ca and the hierarchy walk with
      rcu_read_lock().
      
      Changes for v2:
      
       - Update patch description (as per Li Zefan's review comments).
      
       - Remove comments in cpuacct_charge() which explained why rcu_read_lock()
         was needed (as per Peter Zijlstra's review comments).
      Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Tested-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
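      A sketch of the fixed charge path: the ca lookup and the upward walk
      now sit inside an explicit RCU read-side section, so a concurrent
      cgroup move cannot free a ca underneath us (field names follow the
      description; details abbreviated):

      	static void cpuacct_charge(struct task_struct *tsk, u64 cputime)
      	{
      		struct cpuacct *ca;
      		int cpu = task_cpu(tsk);

      		rcu_read_lock();	/* required under rcupreempt */
      		for (ca = task_ca(tsk); ca; ca = ca->parent)
      			*percpu_ptr(ca->cpuusage, cpu) += cputime;
      		rcu_read_unlock();
      	}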
  3. 31 Mar 2009 (1 commit)
    • hrtimer: fix rq->lock inversion (again) · 7f1e2ca9
      Authored by Peter Zijlstra
      It appears I inadvertently introduced rq->lock recursion to the
      hrtimer_start() path when I delegated running already expired
      timers to softirq context.
      
      This patch fixes it by introducing a __hrtimer_start_range_ns()
      method that will not use raise_softirq_irqoff() but
      __raise_softirq_irqoff() which avoids the wakeup.
      
      It then also changes schedule() to check for pending softirqs and
      do the wakeup then. I'm not quite sure I like this last bit, nor
      am I convinced it's really needed.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus@samba.org
      LKML-Reference: <20090313112301.096138802@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
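      The heart of the fix, as a sketch: marking the softirq pending without
      the ksoftirqd wakeup avoids taking rq->lock from a path that may
      already hold it.

      	/* __hrtimer_start_range_ns() path: we may hold rq->lock here. */
      	__raise_softirq_irqoff(HRTIMER_SOFTIRQ);	/* only sets the bit */
      	/* raise_softirq_irqoff() could wake ksoftirqd instead, and a
      	 * wakeup takes rq->lock -- recursing on the lock we hold. */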
  4. 30 Mar 2009 (1 commit)
  5. 29 Mar 2009 (1 commit)
  6. 25 Mar 2009 (12 commits)
    • sched: Add comments to find_busiest_group() function · b7bb4c9b
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Add /** style comments around find_busiest_group(). Also add a few
      explanatory comments.
      
      This concludes the find_busiest_group() cleanup. The function is
      now down to 72 lines from the original 313 lines.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091427.13992.18933.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
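      The /** style referred to is kernel-doc; a representative (abbreviated)
      comment of that form:

      	/**
      	 * find_busiest_group - Return the busiest group within the sched_domain.
      	 * @sd: The sched_domain whose busiest group is to be returned.
      	 * @this_cpu: The cpu for which load balancing is being performed.
      	 *
      	 * Returns the busiest group if an imbalance exists, else NULL.
      	 */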
    • sched: Refactor the power savings balance code · c071df18
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Create separate helper functions to initialize the
      power-savings-balance related variables, to update them, and
      to check whether there is scope for performing power-savings balance.
      
      Add no-op inline functions for the !(CONFIG_SCHED_MC || CONFIG_SCHED_SMT)
      case.
      
      This will eliminate all the #ifdef jungle in find_busiest_group() and the
      other helper functions.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091422.13992.73616.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
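      The no-op stub idiom mentioned above, sketched with an assumed helper
      name (the real helpers and their signatures are abbreviated here):

      	#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
      	static inline void init_sd_power_savings_stats(struct sched_domain *sd,
      			struct sd_lb_stats *sds, enum cpu_idle_type idle)
      	{
      		/* real initialization of the power-savings variables */
      	}
      	#else
      	static inline void init_sd_power_savings_stats(struct sched_domain *sd,
      			struct sd_lb_stats *sds, enum cpu_idle_type idle)
      	{
      		/* no-op: keeps the callers free of #ifdefs */
      	}
      	#endif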
    • sched: Optimize the !power_savings_balance during fbg() · a021dc03
      Authored by Gautham R Shenoy
      Impact: cleanup, micro-optimization
      
      We don't need to perform power_savings balance if either the
      cpu is NOT_IDLE or the sched_domain doesn't have the
      SD_POWERSAVINGS_BALANCE flag set.
      
      Currently, we check for these conditions multiple times, even
      though these variables don't change over the scope
      of find_busiest_group().
      
      Check once, and store the value in the already existing
      "power_savings_balance" variable.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091417.13992.2657.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
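      A sketch of the hoisted check, evaluated once instead of per-group
      (names assumed from the surrounding series):

      	/* once, during initialization of the sched_domain statistics */
      	sds->power_savings_balance = (idle != CPU_NOT_IDLE) &&
      				     (sd->flags & SD_POWERSAVINGS_BALANCE);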
    • sched: Create a helper function to calculate imbalance · dbc523a3
      Authored by Gautham R Shenoy
      Move all of the imbalance calculation out of find_busiest_group()
      and into this helper function.
      
      With this change, the structure of find_busiest_group() will be
      as follows:
      
      - update_sched_domain_statistics.
      
      - check if imbalance exists.
      
      - update imbalance and return busiest.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091411.13992.43293.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Create helper to calculate small_imbalance in fbg() · 2e6f44ae
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      We have two places in find_busiest_group() where we need to calculate
      the minor imbalance before returning the busiest group. Encapsulate
      this functionality into a separate helper function.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091406.13992.54316.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Create a helper function to calculate sched_domain stats for fbg() · 37abe198
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Create a helper function named update_sd_lb_stats() to update the
      various sched_domain related statistics in find_busiest_group().
      
      With this we would have moved all the statistics computation out of
      find_busiest_group().
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091401.13992.88737.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Define structure to store the sched_domain statistics for fbg() · 222d656d
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Currently we use a lot of local variables in find_busiest_group()
      to capture the various statistics related to the sched_domain.
      Group them together into a single data structure.
      
      This will help us to offload the job of updating the sched_domain
      statistics to a helper function.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091356.13992.25970.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
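      A sketch of such a container, with fields inferred from the statistics
      the function is described as tracking (layout abbreviated):

      	/* sd_lb_stats -- sched_domain statistics during load balancing */
      	struct sd_lb_stats {
      		struct sched_group *busiest;	/* busiest group in this sd */
      		struct sched_group *this;	/* group containing this_cpu */
      		unsigned long total_load;	/* total load of all groups */
      		unsigned long total_pwr;	/* total power of all groups */
      		unsigned long avg_load;		/* average load across groups */
      		unsigned long max_load;		/* load of the busiest group */
      		unsigned long busiest_nr_running;
      		unsigned long busiest_load_per_task;
      	};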
    • sched: Create a helper function to calculate sched_group stats for fbg() · 1f8c553d
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Create a helper function named update_sg_lb_stats() which
      can be invoked to calculate the individual group's statistics
      in find_busiest_group().
      
      This reduces the length of find_busiest_group() considerably.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091351.13992.43461.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Define structure to store the sched_group statistics for fbg() · 381be78f
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Currently a whole bunch of variables are used to store the
      various statistics pertaining to the groups we iterate over
      in find_busiest_group().
      
      Group them together in a single data structure and add
      appropriate comments.
      
      This will be useful later on when we create helper functions
      to calculate the sched_group statistics.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091345.13992.20099.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Fix indentations in find_busiest_group() using gotos · 6dfdb062
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Some of the indentation in find_busiest_group() can be minimized by
      using early exits with the help of gotos. This improves readability in
      a couple of cases.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091340.13992.45062.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
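      The pattern in question, as a before/after sketch (variable names are
      illustrative):

      	/* Before: each extra condition adds a level of indentation. */
      	if (busiest) {
      		if (max_load > avg_load) {
      			/* ... compute imbalance ... */
      		}
      	}

      	/* After: early exits keep the main path flat. */
      	if (!busiest || max_load <= avg_load)
      		goto out_balanced;
      	/* ... compute imbalance ... */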
    • sched: Simple helper functions for find_busiest_group() · 67bb6c03
      Authored by Gautham R Shenoy
      Impact: cleanup
      
      Currently the load-idx calculation code is in find_busiest_group().
      Move it to a static inline helper function.

      Similarly, to find the first cpu of a sched_group we use
      cpumask_first(sched_group_cpus(group))

      Use a helper for that too. It improves readability in some cases.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091335.13992.55424.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
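      The second helper is a thin wrapper around the cpumask expression
      above; a sketch:

      	static inline unsigned int group_first_cpu(struct sched_group *group)
      	{
      		return cpumask_first(sched_group_cpus(group));
      	}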
    • sched: remove unused fields from struct rq · 67aa0f76
      Authored by Luis Henriques
      Impact: cleanup, new schedstat ABI
      
      Since they are only used in statistics and are always set to zero, the
      following fields have been removed from struct rq: yld_exp_empty,
      yld_act_empty and yld_both_empty.
      
      Both the Sched Debug and SCHEDSTAT_VERSION versions have also been
      incremented, since the ABIs have changed.
      
      The schedtop tool has been updated to properly handle the new version
      of schedstat:
      
         http://rt.wiki.kernel.org/index.php/Schedtop_utility
      Signed-off-by: Luis Henriques <henrix@sapo.pt>
      Acked-by: Gregory Haskins <ghaskins@novell.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <20090324221002.GA10061@hades.domain.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 19 Mar 2009 (2 commits)
  8. 17 Mar 2009 (2 commits)
  9. 13 Mar 2009 (2 commits)
  10. 11 Mar 2009 (1 commit)
    • sched: add avg_overlap decay · df1c99d4
      Authored by Mike Galbraith
      Impact: more precise avg_overlap metric - better load-balancing
      
      avg_overlap is used to measure the runtime overlap of the waker and
      wakee.
      
      However, when a process changes behaviour, e.g. a pipe becomes
      uncongested and we don't need to go to sleep after a wakeup
      for a while, the avg_overlap value grows stale.
      
      While a task is running we use the average runtime between preemptions
      as a measure for avg_overlap, since the amount of runtime can be
      correlated to cache footprint.
      
      The longer we run, the less likely we'll be wanting to be
      migrated to another CPU.
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1236709131.25234.576.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
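      One simple shape such a decay can take is an exponentially weighted
      moving average; a sketch (the 1/8 weight is illustrative):

      	/* Fold a new runtime sample into avg_overlap: keep 7/8 of the
      	 * old history, take 1/8 of the new sample. */
      	static void update_avg(u64 *avg, u64 sample)
      	{
      		s64 diff = sample - *avg;
      		*avg += diff >> 3;
      	}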
  11. 10 Mar 2009 (1 commit)
  12. 06 Mar 2009 (1 commit)
  13. 05 Mar 2009 (1 commit)
    • sched: don't rebalance if attached on NULL domain · 8a0be9ef
      Authored by Frederic Weisbecker
      Impact: fix function graph trace hang / drop pointless softirq on UP
      
      While debugging a function graph trace hang on an old PII, I saw
      that it spent most of its time in the timer interrupt, and the
      domain-rebalancing softirq was the main culprit.
      
      The timer interrupt calls trigger_load_balance(), which decides
      whether it is worth scheduling a rebalancing softirq.
      
      In the case of a built-in UP kernel, no problem arises because there
      is no domain question.
      
      In the case of a built-in SMP kernel running on an SMP box, still no
      problem: the softirq is raised each time we reach the
      next_balance time.
      
      In the case of a built-in SMP kernel running on a UP box (most distros
      provide SMP kernels by default, whatever box you have), the CPU is
      attached to the NULL sched domain, and an unexpected behaviour appears:

      trigger_load_balance() raises the rebalancing softirq; later, in
      softirq context, run_rebalance_domains() -> rebalance_domains(), where
      the for_each_domain(cpu, sd) loop is never entered because of the NULL
      domain we are attached to. This means rq->next_balance is never
      updated, so on each subsequent timer tick we re-enter
      trigger_load_balance(), which will always raise the rebalancing
      softirq again:
      
      if (time_after_eq(jiffies, rq->next_balance))
      	raise_softirq(SCHED_SOFTIRQ);
      
      So for each tick, we process this pointless softirq.
      
      This patch fixes it by checking whether we are attached to the NULL
      domain before raising the softirq. Another possible fix would be to
      set rq->next_balance to the maximal possible jiffies value when we
      are attached to the NULL domain.
      
      v2: build fix on UP
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <49af242d.1c07d00a.32d5.ffffc019@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
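      A sketch of the chosen fix: guard the raise with a NULL-domain check
      (the helper name is what the description implies):

      	static inline int on_null_domain(int cpu)
      	{
      		return !rcu_dereference(cpu_rq(cpu)->sd);
      	}

      	/* in trigger_load_balance(): */
      	if (time_after_eq(jiffies, rq->next_balance) &&
      	    likely(!on_null_domain(cpu)))
      		raise_softirq(SCHED_SOFTIRQ);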
  14. 02 Mar 2009 (1 commit)
  15. 27 Feb 2009 (1 commit)
  16. 26 Feb 2009 (2 commits)
  17. 25 Feb 2009 (1 commit)
    • generic-ipi: remove CSD_FLAG_WAIT · 6e275637
      Authored by Peter Zijlstra
      Oleg noticed that we don't strictly need CSD_FLAG_WAIT; rework
      the code so that we can use CSD_FLAG_LOCK for both purposes.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
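      A sketch of how one flag can serve both purposes: the flag is held
      while a csd is in flight, and a synchronous caller simply spins until
      the callee unlocks it (names abbreviated from the generic-ipi code):

      	/* Synchronous caller: wait until the IPI handler unlocks the csd. */
      	static void csd_lock_wait(struct call_single_data *data)
      	{
      		while (data->flags & CSD_FLAG_LOCK)
      			cpu_relax();
      	}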
  18. 20 Feb 2009 (1 commit)
  19. 16 Feb 2009 (1 commit)