1. 30 Mar 2009 - 1 commit
  2. 29 Mar 2009 - 1 commit
  3. 25 Mar 2009 - 12 commits
    • sched: Add comments to find_busiest_group() function · b7bb4c9b
      Gautham R Shenoy authored
      Impact: cleanup
      
      Add /** style comments around find_busiest_group(). Also add a few
      explanatory comments.
      
      This concludes the find_busiest_group() cleanup. The function is
      now down to 72 lines from the original 313 lines.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091427.13992.18933.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b7bb4c9b
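      For illustration, a /** kernel-doc block of the kind this commit adds
      looks roughly like the following (the real find_busiest_group() takes
      more parameters; this is only a sketch of the format):

          /**
           * find_busiest_group - find the busiest group of a sched_domain, if any.
           * @sd: the sched_domain whose groups are to be examined.
           * @this_cpu: the CPU on whose behalf load balancing is performed.
           * @imbalance: out parameter; the amount of load to move to even things out.
           *
           * Returns the busiest group if an imbalance exists, NULL otherwise.
           */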
    • sched: Refactor the power savings balance code · c071df18
      Gautham R Shenoy authored
      Impact: cleanup
      
      Create separate helper functions to initialize the
      power-savings-balance related variables, to update them and
      to check if we have a scope for performing power-savings balance.
      
      Add no-op inline functions for the !(CONFIG_SCHED_MC || CONFIG_SCHED_SMT)
      case.
      
      This will eliminate all the #ifdef jungle in find_busiest_group() and the
      other helper functions.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091422.13992.73616.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c071df18
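      A sketch of the pattern described above; the helper name follows the
      commit, but the signature and bodies here are illustrative only
      (sd_lb_stats is the statistics container introduced elsewhere in this
      series):

          #if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
          static inline void init_sd_power_savings_stats(struct sched_domain *sd,
                          struct sd_lb_stats *sds, enum cpu_idle_type idle)
          {
                  /* initialize the power-savings-balance bookkeeping */
          }
          #else /* !(CONFIG_SCHED_MC || CONFIG_SCHED_SMT) */
          static inline void init_sd_power_savings_stats(struct sched_domain *sd,
                          struct sd_lb_stats *sds, enum cpu_idle_type idle)
          {
                  /* no-op: keeps find_busiest_group() free of #ifdefs */
          }
          #endif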
    • sched: Optimize the !power_savings_balance during fbg() · a021dc03
      Gautham R Shenoy authored
      Impact: cleanup, micro-optimization
      
      We don't need to perform power_savings balance if either the
      cpu is NOT_IDLE or if the sched_domain doesn't contain the
      SD_POWERSAVINGS_BALANCE flag set.
      
      Currently, we check for these conditions multiple times, even
      though these variables don't change over the scope
      of find_busiest_group().
      
      Check once, and store the value in the already existing
      "power_savings_balance" variable.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091417.13992.2657.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a021dc03
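      A minimal sketch of the idea; the helper name below is hypothetical
      (the patch itself simply caches the result in a local
      power_savings_balance variable inside find_busiest_group()):

          static inline int power_savings_balance_possible(struct sched_domain *sd,
                                                           enum cpu_idle_type idle)
          {
                  /* both conditions are invariant for the whole balancing pass */
                  return (idle != CPU_NOT_IDLE) && (sd->flags & SD_POWERSAVINGS_BALANCE);
          }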
    • sched: Create a helper function to calculate imbalance · dbc523a3
      Gautham R Shenoy authored
      Move all the imbalance calculation out of find_busiest_group()
      through this helper function.
      
      With this change, the structure of find_busiest_group() will be
      as follows:
      
      - update_sched_domain_statistics.
      
      - check if an imbalance exists.
      
      - update imbalance and return busiest.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091411.13992.43293.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dbc523a3
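      A hedged sketch of the resulting shape of find_busiest_group(); the
      helper names follow the commit messages in this series, but the real
      signatures carry extra parameters (cpus, balance, sd_idle):

          static struct sched_group *
          find_busiest_group(struct sched_domain *sd, int this_cpu,
                             unsigned long *imbalance, enum cpu_idle_type idle)
          {
                  struct sd_lb_stats sds;

                  /* 1. gather the sched_domain and sched_group statistics */
                  update_sd_lb_stats(sd, this_cpu, idle, &sds);

                  /* 2. bail out early if there is nothing to pull */
                  if (!sds.busiest || sds.this_load >= sds.max_load)
                          goto out_balanced;

                  /* 3. compute how much load to move and return the busiest group */
                  calculate_imbalance(&sds, this_cpu, imbalance);
                  return sds.busiest;

          out_balanced:
                  *imbalance = 0;
                  return NULL;
          }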
    • sched: Create helper to calculate small_imbalance in fbg() · 2e6f44ae
      Gautham R Shenoy authored
      Impact: cleanup
      
      We have two places in find_busiest_group() where we need to calculate
      the minor imbalance before returning the busiest group. Encapsulate
      this functionality into a separate helper function.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091406.13992.54316.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2e6f44ae
    • sched: Create a helper function to calculate sched_domain stats for fbg() · 37abe198
      Gautham R Shenoy authored
      Impact: cleanup
      
      Create a helper function named update_sd_lb_stats() to update the
      various sched_domain related statistics in find_busiest_group().
      
      With this, all of the statistics computation has been moved out of
      find_busiest_group().
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091401.13992.88737.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      37abe198
    • sched: Define structure to store the sched_domain statistics for fbg() · 222d656d
      Gautham R Shenoy authored
      Impact: cleanup
      
      Currently we use a lot of local variables in find_busiest_group()
      to capture the various statistics related to the sched_domain.
      Group them together into a single data structure.
      
      This will help us to offload the job of updating the sched_domain
      statistics to a helper function.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091356.13992.25970.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      222d656d
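      A trimmed-down sketch of the container this commit introduces (the
      real struct sd_lb_stats carries further fields, including the
      power-savings-balance bookkeeping guarded by CONFIG_SCHED_MC/SMT):

          struct sd_lb_stats {
                  struct sched_group *busiest; /* busiest group in this sd */
                  struct sched_group *this;    /* the group containing this_cpu */
                  unsigned long total_load;    /* total load of all groups in the sd */
                  unsigned long total_pwr;     /* total cpu_power of all groups */
                  unsigned long avg_load;      /* average load across the sd */

                  /* statistics of this_cpu's group */
                  unsigned long this_load;
                  unsigned long this_load_per_task;
                  unsigned long this_nr_running;

                  /* statistics of the busiest group */
                  unsigned long max_load;
                  unsigned long busiest_load_per_task;
                  unsigned long busiest_nr_running;
          };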
    • sched: Create a helper function to calculate sched_group stats for fbg() · 1f8c553d
      Gautham R Shenoy authored
      Impact: cleanup
      
      Create a helper function named update_sg_lb_stats() which
      can be invoked to calculate the individual group's statistics
      in find_busiest_group().
      
      This reduces the length of find_busiest_group() considerably.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091351.13992.43461.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1f8c553d
    • sched: Define structure to store the sched_group statistics for fbg() · 381be78f
      Gautham R Shenoy authored
      Impact: cleanup
      
      Currently a whole bunch of variables are used to store the
      various statistics pertaining to the groups we iterate over
      in find_busiest_group().
      
      Group them together in a single data structure and add
      appropriate comments.
      
      This will be useful later on when we create helper functions
      to calculate the sched_group statistics.
      
      Credit: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <20090325091345.13992.20099.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      381be78f
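      A sketch of the per-group statistics structure described here, which
      the update_sg_lb_stats() helper from the previous entry fills in;
      field names follow the commit, comments are paraphrased:

          struct sg_lb_stats {
                  unsigned long avg_load;          /* average load across the group's CPUs */
                  unsigned long group_load;        /* total load over the group's CPUs */
                  unsigned long sum_nr_running;    /* number of tasks running in the group */
                  unsigned long sum_weighted_load; /* weighted load of the group's tasks */
                  unsigned long group_capacity;    /* number of tasks the group can handle */
                  int group_imb;                   /* is there an imbalance within the group? */
          };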
    • sched: Fix indentations in find_busiest_group() using gotos · 6dfdb062
      Gautham R Shenoy authored
      Impact: cleanup
      
      Some indentations in find_busiest_group() can be minimized by using
      early exits with the help of gotos. This improves readability in
      a couple of cases.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091340.13992.45062.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6dfdb062
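      An illustrative (not verbatim) example of the early-exit style, using
      the sd_lb_stats fields sketched earlier; the function name is
      hypothetical, and the point is that the common path stays at a single
      indentation level:

          static struct sched_group *check_balanced(struct sd_lb_stats *sds)
          {
                  if (!sds->busiest || sds->busiest_nr_running == 0)
                          goto out_balanced;

                  if (sds->this_load >= sds->max_load)
                          goto out_balanced;

                  return sds->busiest;

          out_balanced:
                  return NULL;
          }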
    • sched: Simple helper functions for find_busiest_group() · 67bb6c03
      Gautham R Shenoy authored
      Impact: cleanup
      
      Currently the load idx calculation code is in find_busiest_group().
      Move that to a static inline helper function.
      
      Similarly, to find the first cpu of a sched_group we use
      cpumask_first(sched_group_cpus(group))
      
      Use a helper for that. It improves readability in some cases.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Balbir Singh" <balbir@in.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: "Dhaval Giani" <dhaval@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: "Vaidyanathan Srinivasan" <svaidy@linux.vnet.ibm.com>
      LKML-Reference: <20090325091335.13992.55424.stgit@sofia.in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      67bb6c03
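      The two helpers are small enough to sketch in full; this is close to
      the shape they take in the patch, but read it as illustrative rather
      than authoritative:

          static inline int get_sd_load_idx(struct sched_domain *sd,
                                            enum cpu_idle_type idle)
          {
                  switch (idle) {
                  case CPU_NOT_IDLE:
                          return sd->busy_idx;
                  case CPU_NEWLY_IDLE:
                          return sd->newidle_idx;
                  default:
                          return sd->idle_idx;
                  }
          }

          /* first CPU of a sched_group, as used throughout the balancing code */
          static inline unsigned int group_first_cpu(struct sched_group *group)
          {
                  return cpumask_first(sched_group_cpus(group));
          }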
    • sched: remove unused fields from struct rq · 67aa0f76
      Luis Henriques authored
      Impact: cleanup, new schedstat ABI
      
      Since they are only used in statistics and are always set to zero, the
      following fields from struct rq have been removed: yld_exp_empty,
      yld_act_empty and yld_both_empty.
      
      Both the Sched Debug and SCHEDSTAT_VERSION versions have also been
      incremented, since the ABIs have changed.
      
      The schedtop tool has been updated to properly handle the new version of
      schedstat:
      
         http://rt.wiki.kernel.org/index.php/Schedtop_utility
      Signed-off-by: Luis Henriques <henrix@sapo.pt>
      Acked-by: Gregory Haskins <ghaskins@novell.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <20090324221002.GA10061@hades.domain.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      67aa0f76
  4. 17 Mar 2009 - 2 commits
  5. 13 Mar 2009 - 1 commit
    • cpuacct: reduce one NULL check in fast-path · 7a46c594
      Li Zefan authored
      Impact: micro-optimization
      
      In cpuacct_charge(), task_ca() will never return NULL, so change
      for(...) to do { } while(...) to save one NULL check.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Menage <menage@google.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      LKML-Reference: <49B863F5.2060400@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7a46c594
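      A self-contained illustration of the micro-optimization with generic
      types (not the actual cpuacct code): when the starting pointer is
      known to be non-NULL, a do/while walks the parent chain with one
      fewer NULL test than the equivalent for (; ca; ca = ca->parent) loop:

          struct ca_node {
                  struct ca_node *parent;
                  unsigned long long usage;
          };

          static void charge_hierarchy(struct ca_node *ca, unsigned long long cputime)
          {
                  /* the caller guarantees ca != NULL, so skip the initial check */
                  do {
                          ca->usage += cputime;
                          ca = ca->parent;
                  } while (ca);
          }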
  6. 11 Mar 2009 - 1 commit
    • sched: add avg_overlap decay · df1c99d4
      Mike Galbraith authored
      Impact: more precise avg_overlap metric - better load-balancing
      
      avg_overlap is used to measure the runtime overlap of the waker and
      wakee.
      
      However, when a process changes behaviour, e.g. a pipe becomes
      un-congested and we don't need to go to sleep after a wakeup
      for a while, the avg_overlap value grows stale.
      
      When running, we use the average runtime between preemptions as
      a measure for avg_overlap, since the amount of runtime can be
      correlated with cache footprint.
      
      The longer we run, the less likely we'll be wanting to be
      migrated to another CPU.
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1236709131.25234.576.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      df1c99d4
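      A hedged sketch of the mechanism: when a task is switched out while
      still TASK_RUNNING (i.e. it was preempted), feed the length of its
      last slice into avg_overlap through the scheduler's moving-average
      helper, clamped so a single long slice cannot dominate. The clamp
      value and exact placement are reconstructed, not quoted:

          /* the scheduler's running-average helper: avg += (sample - avg) / 8 */
          static void update_avg(u64 *avg, u64 sample)
          {
                  s64 diff = sample - *avg;
                  *avg += diff >> 3;
          }

          static void put_prev_task(struct rq *rq, struct task_struct *prev)
          {
                  if (prev->state == TASK_RUNNING) {
                          /* preempted while runnable: decay avg_overlap with the slice length */
                          u64 runtime = prev->se.sum_exec_runtime -
                                        prev->se.prev_sum_exec_runtime;

                          runtime = min_t(u64, runtime, 2 * sysctl_sched_migration_cost);
                          update_avg(&prev->se.avg_overlap, runtime);
                  }
                  prev->sched_class->put_prev_task(rq, prev);
          }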
  7. 10 Mar 2009 - 1 commit
  8. 06 Mar 2009 - 1 commit
  9. 05 Mar 2009 - 1 commit
    • sched: don't rebalance if attached on NULL domain · 8a0be9ef
      Frederic Weisbecker authored
      Impact: fix function graph trace hang / drop pointless softirq on UP
      
      While debugging a function graph trace hang on an old PII, I saw
      that it spent most of its time in the timer interrupt, and the
      domain-rebalancing softirq was the main contributor.
      
      The timer interrupt calls trigger_load_balance(), which decides
      whether it is worth scheduling a rebalancing softirq.
      
      With a built-in UP kernel, no problem arises because there are no
      sched domains involved.
      
      With a built-in SMP kernel running on an SMP box, there is still no
      problem: the softirq is raised each time we reach the next_balance
      time.
      
      With a built-in SMP kernel running on a UP box (most distros provide
      SMP kernels by default, whatever box you have), the CPU is attached
      to the NULL sched domain, and an unexpected behaviour appears:
      
      trigger_load_balance() raises the rebalancing softirq; later on, in
      the softirq, run_rebalance_domains() -> rebalance_domains() never
      enters the for_each_domain(cpu, sd) loop because of the NULL domain
      we are attached to. This means rq->next_balance is never updated,
      so on the next timer tick we enter trigger_load_balance() again,
      which always re-raises the rebalancing softirq:
      
      if (time_after_eq(jiffies, rq->next_balance))
      	raise_softirq(SCHED_SOFTIRQ);
      
      So for each tick, we process this pointless softirq.
      
      This patch fixes it by checking whether we are attached to the NULL
      domain before raising the softirq. Another possible fix would be to
      set rq->next_balance to the maximal possible jiffies value when we
      are attached to the NULL domain.
      
      v2: build fix on UP
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <49af242d.1c07d00a.32d5.ffffc019@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8a0be9ef
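      A sketch of the fix described above: test for the NULL domain before
      raising SCHED_SOFTIRQ. The helper below reflects that approach but is
      illustrative rather than a verbatim quote of the patch:

          static inline int on_null_domain(int cpu)
          {
                  return !rcu_dereference(cpu_rq(cpu)->sd);
          }

          /* in trigger_load_balance(): */
          if (time_after_eq(jiffies, rq->next_balance) &&
              likely(!on_null_domain(cpu)))
                  raise_softirq(SCHED_SOFTIRQ);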
  10. 02 Mar 2009 - 1 commit
  11. 27 Feb 2009 - 1 commit
  12. 26 Feb 2009 - 2 commits
  13. 20 Feb 2009 - 1 commit
  14. 16 Feb 2009 - 2 commits
  15. 12 Feb 2009 - 1 commit
  16. 11 Feb 2009 - 1 commit
  17. 06 Feb 2009 - 1 commit
    • wait: prevent exclusive waiter starvation · 777c6c5f
      Johannes Weiner authored
      With exclusive waiters, every process woken up through the wait queue must
      ensure that the next waiter down the line is woken when it has finished.
      
      Interruptible waiters don't do that when aborting due to a signal.  And if
      an aborting waiter is concurrently woken up through the waitqueue, no one
      will ever wake up the next waiter.
      
      This has been observed with __wait_on_bit_lock() used by
      lock_page_killable(): the first contender on the queue was aborting when
      the actual lock holder woke it up concurrently.  The aborted contender
      didn't acquire the lock and therefore never did an unlock followed by
      waking up the next waiter.
      
      Add abort_exclusive_wait() which removes the process' wait descriptor from
      the waitqueue, iff still queued, or wakes up the next waiter otherwise.
      It does so under the waitqueue lock.  Racing with a wake up means the
      aborting process is either already woken (removed from the queue) and will
      wake up the next waiter, or it will remove itself from the queue and the
      concurrent wake up will apply to the next waiter after it.
      
      Use abort_exclusive_wait() in __wait_event_interruptible_exclusive() and
      __wait_on_bit_lock() when they were interrupted by other means than a wake
      up through the queue.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Reported-by: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Mentored-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: Chuck Lever <cel@citi.umich.edu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>		["after some testing"]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      777c6c5f
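      A sketch of the helper's core logic as described above: under the
      waitqueue lock, either dequeue ourselves (if still queued) or pass
      the wake-up on to the next exclusive waiter. Treat it as illustrative
      rather than the exact patch:

          void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
                                    unsigned int mode, void *key)
          {
                  unsigned long flags;

                  __set_current_state(TASK_RUNNING);
                  spin_lock_irqsave(&q->lock, flags);
                  if (!list_empty(&wait->task_list))
                          list_del_init(&wait->task_list);    /* still queued: just dequeue */
                  else if (waitqueue_active(q))
                          __wake_up_locked_key(q, mode, key); /* already woken: pass it on */
                  spin_unlock_irqrestore(&q->lock, flags);
          }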
  18. 05 Feb 2009 - 1 commit
  19. 01 Feb 2009 - 2 commits
  20. 15 Jan 2009 - 5 commits
    • sched: SCHED_IDLE weight change · cce7ade8
      Peter Zijlstra authored
      Increase the SCHED_IDLE weight from 2 to 3; this gives much more stable
      vruntime numbers.
      
      time advanced in 100ms:
      
       weight=2
      
       64765.988352
       67012.881408
       88501.412352
      
       weight=3
      
       35496.181411
       34130.971298
       35497.411573
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cce7ade8
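      The change itself boils down to the SCHED_IDLE entries in the
      nice-to-weight constants; the names and the derived multiplier below
      are reconstructed from that era's kernel/sched.c and should be read
      as an assumption, not a quote:

          #define WEIGHT_IDLEPRIO         3             /* was 2 */
          #define WMULT_IDLEPRIO          1431655765    /* ~2^32 / 3 */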
    • sched: fix bandwidth validation for UID grouping · 98a4826b
      Peter Zijlstra authored
      Impact: make rt-limit tunables work again
      
      Mark Glines reported:
      
      > I've got an issue on x86-64 where I can't configure the system to allow
      > RT tasks for a non-root user.
      >
      > In 2.6.26.5, I was able to do the following to set things up nicely:
      > echo 450000 >/sys/kernel/uids/0/cpu_rt_runtime
      > echo 450000 >/sys/kernel/uids/1000/cpu_rt_runtime
      >
      > Seems like every value I try to echo into the /sys files returns EINVAL.
      
      For UID grouping we initialize the root group with infinite bandwidth
      which by default is actually more than the global limit, therefore the
      bandwidth check always fails.
      
      Because the root group is a phantom group (for UID grouping) we cannot
      runtime adjust it, therefore we let it reflect the global bandwidth
      settings.
      Reported-by: Mark Glines <mark@glines.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      98a4826b
    • sched: introduce avg_wakeup · 831451ac
      Peter Zijlstra authored
      Introduce a new avg_wakeup statistic.
      
      avg_wakeup is a measure of how frequently a task wakes up other tasks;
      it represents the average time between wakeups, with a limit of
      avg_runtime for when it doesn't wake up anybody.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      831451ac
    • mutex: implement adaptive spinning · 0d66bf6d
      Peter Zijlstra authored
      Change mutex contention behaviour such that it will sometimes busy wait on
      acquisition - moving its behaviour closer to that of spinlocks.
      
      This concept got ported to mainline from the -rt tree, where it was originally
      implemented for rtmutexes by Steven Rostedt, based on work by Gregory Haskins.
      
      Testing with Ingo's test-mutex application (http://lkml.org/lkml/2006/1/8/50)
      gave a 345% boost for VFS scalability on my testbox:
      
       # ./test-mutex-shm V 16 10 | grep "^avg ops"
       avg ops/sec:               296604
      
       # ./test-mutex-shm V 16 10 | grep "^avg ops"
       avg ops/sec:               85870
      
      The key criterion for the busy wait is that the lock owner has to be running on
      a (different) cpu. The idea is that as long as the owner is running, there is a
      fair chance it'll release the lock soon, and thus we'll be better off spinning
      instead of blocking/scheduling.
      
      Since regular mutexes (as opposed to rtmutexes) do not atomically track the
      owner, we add the owner in a non-atomic fashion and deal with the races in
      the slowpath.
      
      Furthermore, to ease testing of the performance impact of this new code,
      there is a means to disable this behaviour at runtime (without having to
      reboot the system), when scheduler debugging is enabled (CONFIG_SCHED_DEBUG=y),
      by issuing the following command:
      
       # echo NO_OWNER_SPIN > /debug/sched_features
      
      This command re-enables spinning again (this is also the default):
      
       # echo OWNER_SPIN > /debug/sched_features
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0d66bf6d
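      A hedged sketch of the optimistic-spin loop in the mutex slowpath;
      mutex_spin_on_owner() is the helper this patch introduces to spin
      while the owner stays on-CPU, and the surrounding details (lockdep
      hooks, exact owner tracking) are elided or approximated:

          for (;;) {
                  struct thread_info *owner;

                  /* spin only while the current owner is running on another CPU */
                  owner = ACCESS_ONCE(lock->owner);
                  if (owner && !mutex_spin_on_owner(lock, owner))
                          break;

                  /* the owner released the lock: try to grab it */
                  if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
                          lock->owner = current_thread_info();
                          preempt_enable();
                          return 0;
                  }

                  /* stop spinning if we ought to yield the CPU */
                  if (need_resched())
                          break;

                  cpu_relax();
          }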
    • mutex: preemption fixes · 41719b03
      Peter Zijlstra authored
      The problem is that dropping the spinlock right before schedule is a voluntary
      preemption point and can cause a schedule, right after which we schedule again.
      
      Fix this inefficiency by keeping preemption disabled until we schedule;
      do this by explicitly disabling preemption and providing a schedule()
      variant that assumes preemption is already disabled.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      41719b03
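      A sketch of the split this describes: __schedule() assumes preemption
      is already disabled, while schedule() keeps the old behaviour by
      wrapping it. The exact form (e.g. the need_resched() retest) is
      reconstructed, not quoted:

          asmlinkage void __sched schedule(void)
          {
          need_resched:
                  preempt_disable();
                  __schedule();                   /* runs with preemption disabled */
                  preempt_enable_no_resched();
                  if (need_resched())
                          goto need_resched;
          }

      The mutex slowpath can then disable preemption itself, drop its
      wait_lock, and call __schedule() directly, closing the voluntary
      preemption window the message above describes.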
  21. 14 Jan 2009 - 1 commit