1. 06 March, 2014 · 4 commits
  2. 23 August, 2013 · 8 commits
  3. 29 July, 2013 · 2 commits
    • Revert "cpuidle: Quickly notice prediction failure for repeat mode" · 14851912
      Committed by Rafael J. Wysocki
      Revert commit 69a37bea (cpuidle: Quickly notice prediction failure for
      repeat mode), because it has been identified as the source of a
      significant performance regression in v3.8 and later as explained by
      Jeremy Eder:
      
        We believe we've identified a particular commit to the cpuidle code
        that seems to be impacting performance of a variety of workloads.
        The simplest way to reproduce is using the netperf TCP_RR test, so
        we're using that, on a pair of Sandy Bridge based servers.  We also
        have data from a large database setup where performance is also
        measurably/positively impacted, though that test data isn't easily
        shareable.
      
        Included below are test results from 3 test kernels:
      
        kernel       reverts
        -----------------------------------------------------------
        1) vanilla   upstream (no reverts)
      
        2) perfteam2 reverts e11538d1
      
        3) test      reverts 69a37bea
                             e11538d1
      
        In summary, netperf TCP_RR numbers improve by approximately 4%
        after reverting 69a37bea.  When 69a37bea is included, C0 residency
        never seems to get above 40%.  Taking that patch out gets C0 near
        100% quite often, and performance increases.
      
        The data below are histograms of the %c0 residency at a 1-second
        sample rate (using turbostat), while under the netperf test.
      
        - If you look at the first 4 histograms, you can see %c0 residency
          almost entirely in the 30-40% bin.
        - The last pair, which reverts 69a37bea, shows %c0 in the
          80-100% bins.
      
        Below each kernel name are the netperf TCP_RR trans/s numbers for
        that kernel that can be disclosed publicly, comparing the 3 test
        kernels.  We also ran a 4th test with the vanilla kernel where we
        set /dev/cpu_dma_latency=0, to show the overall impact: it boosts
        single-threaded TCP_RR performance to over 11% above baseline.
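
        For reference, the /dev/cpu_dma_latency mechanism keeps CPUs out of
        deep C-states for as long as a file descriptor with a low-latency
        request stays open; a minimal user-space sketch (illustrative, not
        the exact tool used in these tests):

          /* Request 0us CPU DMA latency; the kernel honors the request
           * only while the file descriptor remains open. */
          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
              int32_t latency_us = 0;
              int fd = open("/dev/cpu_dma_latency", O_WRONLY);

              if (fd < 0) {
                  perror("open /dev/cpu_dma_latency");
                  return 1;
              }
              if (write(fd, &latency_us, sizeof(latency_us)) != sizeof(latency_us)) {
                  perror("write");
                  close(fd);
                  return 1;
              }
              pause();  /* the request is dropped when the fd is closed */
              return 0;
          }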
      
        3.10-rc2 vanilla RX + c0 lock (/dev/cpu_dma_latency=0):
        TCP_RR trans/s 54323.78
      
        -----------------------------------------------------------
        3.10-rc2 vanilla RX (no reverts)
        TCP_RR trans/s 48192.47
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    59]: ***********************************************************
           40.0000 -    50.0000 [     1]: *
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        Sender %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    11]: ***********
           40.0000 -    50.0000 [    49]: *************************************************
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        -----------------------------------------------------------
        3.10-rc2 perfteam2 RX (reverts commit e11538d1)
        TCP_RR trans/s 49698.69
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     1]: *
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    59]: ***********************************************************
           40.0000 -    50.0000 [     0]:
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        Sender %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [     2]: **
           40.0000 -    50.0000 [    58]: **********************************************************
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        -----------------------------------------------------------
        3.10-rc2 test RX (reverts 69a37bea and e11538d1)
        TCP_RR trans/s 47766.95
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     1]: *
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    27]: ***************************
           40.0000 -    50.0000 [     2]: **
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     2]: **
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [    28]: ****************************
      
        Sender %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    11]: ***********
           40.0000 -    50.0000 [     0]:
           50.0000 -    60.0000 [     1]: *
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     3]: ***
           80.0000 -    90.0000 [     7]: *******
           90.0000 -   100.0000 [    38]: **************************************
      
        These results demonstrate gaining back the tendency of the CPU to
        stay in more responsive, performant C-states (and thus yield
        measurably better performance) by reverting commit 69a37bea.
      Requested-by: Jeremy Eder <jeder@redhat.com>
      Tested-by: Len Brown <len.brown@intel.com>
      Cc: 3.8+ <stable@vger.kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • Revert "cpuidle: Quickly notice prediction failure in general case" · 228b3023
      Committed by Rafael J. Wysocki
      Revert commit e11538d1 (cpuidle: Quickly notice prediction failure in
      general case), since it depends on commit 69a37bea (cpuidle: Quickly
      notice prediction failure for repeat mode) that has been identified
      as the source of a significant performance regression in v3.8 and
      later.
      Requested-by: Jeremy Eder <jeder@redhat.com>
      Tested-by: Len Brown <len.brown@intel.com>
      Cc: 3.8+ <stable@vger.kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  4. 15 July, 2013 · 1 commit
    • cpuidle: Make it clear that governors cannot be modules · 137b944e
      Committed by Daniel Lezcano
      cpuidle governors are defined as modules in the code, but the Kconfig
      options do not allow them to be built as modules.  This is not really
      a problem, but the cpuidle init ordering is: the cpuidle init
      functions (framework and driver) first, and then the governors.  That
      leads to some weirdness in the cpuidle framework.
      
      Namely,  cpuidle_register_device() calls cpuidle_enable_device() which
      fails at the first attempt, because governors have not been registered
      yet.  When a governor is registered, the framework calls
      cpuidle_enable_device() again which runs __cpuidle_register_device()
      only then.  Of course, for that to work, the cpuidle_enable_device()
      return value has to be ignored by cpuidle_register_device().
      
      Instead of having this cyclic call graph and relying on the positive
      side effects of the hackish back-and-forth cpuidle_enable_device()
      calls, it is better to fix the cpuidle init ordering.
      
      To that end, replace the module init code with postcore_initcall()
      so we have:
      
       * cpuidle framework : core_initcall
       * cpuidle governors : postcore_initcall
       * cpuidle drivers   : device_initcall
      
      and remove the corresponding module exit code as it is dead anyway
      (governors can't be built as modules).
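
      After the change, a governor registers itself via postcore_initcall()
      (a sketch based on the menu governor's init function):

        static int __init init_menu(void)
        {
                return cpuidle_register_governor(&menu_governor);
        }
        postcore_initcall(init_menu);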
      
      [rjw: Changelog]
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  5. 15 January, 2013 · 1 commit
  6. 03 January, 2013 · 1 commit
  7. 23 November, 2012 · 1 commit
    • cpuidle: fix a suspicious RCU usage in menu governor · a093b93e
      Committed by Li Zhong
      I saw this suspicious RCU usage warning on the linux-next tree of 11/15:
      
      [   67.123404] ===============================
      [   67.123413] [ INFO: suspicious RCU usage. ]
      [   67.123423] 3.7.0-rc5-next-20121115-dirty #1 Not tainted
      [   67.123434] -------------------------------
      [   67.123444] include/trace/events/timer.h:186 suspicious rcu_dereference_check() usage!
      [   67.123458]
      [   67.123458] other info that might help us debug this:
      [   67.123458]
      [   67.123474]
      [   67.123474] RCU used illegally from idle CPU!
      [   67.123474] rcu_scheduler_active = 1, debug_locks = 0
      [   67.123493] RCU used illegally from extended quiescent state!
      [   67.123507] 1 lock held by swapper/1/0:
      [   67.123516]  #0:  (&cpu_base->lock){-.-...}, at: [<c0000000000979b0>] .__hrtimer_start_range_ns+0x28c/0x524
      [   67.123555]
      [   67.123555] stack backtrace:
      [   67.123566] Call Trace:
      [   67.123576] [c0000001e2ccb920] [c00000000001275c] .show_stack+0x78/0x184 (unreliable)
      [   67.123599] [c0000001e2ccb9d0] [c0000000000c15a0] .lockdep_rcu_suspicious+0x120/0x148
      [   67.123619] [c0000001e2ccba70] [c00000000009601c] .enqueue_hrtimer+0x1c0/0x1c8
      [   67.123639] [c0000001e2ccbb00] [c000000000097aa0] .__hrtimer_start_range_ns+0x37c/0x524
      [   67.123660] [c0000001e2ccbc20] [c0000000005c9698] .menu_select+0x508/0x5bc
      [   67.123678] [c0000001e2ccbd20] [c0000000005c740c] .cpuidle_idle_call+0xa8/0x6e4
      [   67.123699] [c0000001e2ccbdd0] [c0000000000459a0] .pSeries_idle+0x10/0x34
      [   67.123717] [c0000001e2ccbe40] [c000000000014dc8] .cpu_idle+0x130/0x280
      [   67.123738] [c0000001e2ccbee0] [c0000000006ffa8c] .start_secondary+0x378/0x384
      [   67.123758] [c0000001e2ccbf90] [c00000000000936c] .start_secondary_prolog+0x10/0x14
      
      The hrtimer_start() calls were added in commits 198fd638 and
      ae515197.  The patch below wraps them in RCU_NONIDLE() to avoid the
      above report.
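
      The fix follows the usual RCU_NONIDLE() pattern, roughly like this
      (the timer and timeout names are illustrative, not the exact hunk):

        /* RCU_NONIDLE() marks the CPU momentarily non-idle for RCU, so
         * the tracepoints reached from hrtimer_start() may legally use
         * rcu_dereference(). */
        RCU_NONIDLE(hrtimer_start(hrtmr, ns_to_ktime(timer_us * 1000),
                                  HRTIMER_MODE_REL_PINNED));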
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  8. 15 November, 2012 · 3 commits
    • cpuidle: Get typical recent sleep interval · c96ca4fb
      Committed by Youquan Song
      The function detect_repeating_patterns was not very useful for
      workloads with alternating long and short pauses, for example
      virtual machines handling network requests for each other (say
      a web and database server).
      
      Instead, try to find a recent sleep interval that is somewhere
      between the median and the mode sleep time, by discarding outliers
      to the up side and recalculating the average and standard deviation
      until that is no longer required.
      
      This should do something sane with a sleep interval series like:
      
      	200 180 210 10000 30 1000 170 200
      
      The current code would simply discard such a series, while the
      new code will guess a typical sleep interval just shy of 200.
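
      A rough user-space model of the estimator (illustrative only; the
      kernel version uses fixed-point integer math and different
      thresholds):

        #include <math.h>

        /* Repeatedly discard values above the running average until the
         * remaining samples are consistent, then predict their average. */
        static double typical_interval(const double *v, int n)
        {
            double cap = 1e18;  /* nothing discarded yet */

            for (;;) {
                double avg = 0.0, var = 0.0;
                int i, kept = 0;

                for (i = 0; i < n; i++)
                    if (v[i] <= cap) { avg += v[i]; kept++; }
                if (kept < n / 2)
                    return -1.0;  /* too few samples left: no estimate */
                avg /= kept;
                for (i = 0; i < n; i++)
                    if (v[i] <= cap)
                        var += (v[i] - avg) * (v[i] - avg);
                var /= kept;
                if (sqrt(var) <= avg / 2)
                    return avg;  /* consistent enough: use the average */
                cap = avg;  /* discard up-side outliers and recalculate */
            }
        }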
      
      The original patch came from Rik van Riel <riel@redhat.com>.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Youquan Song <youquan.song@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • cpuidle: Quickly notice prediction failure in general case · e11538d1
      Committed by Youquan Song
      Predicting the future is difficult, and when the cpuidle governor's
      prediction fails, it may choose a shallower C-state than it should.
      Quickly noticing such a failure is important for power saving.

      This patch extends the approach to the general case in which the
      prediction logic yields a small predicted residency, so a shallow
      C-state is chosen even though the expected residency is large.  If
      the prediction fails, the CPU keeps staying in the shallow C-state
      for a long time although it actually had the chance to enter a deep
      C-state.  So when the expected residency is long enough but the
      governor chooses a shallow C-state, a timer is added in order to
      detect the prediction failure.

      If the CPU wakes up before the timer fires, the timer is cancelled.
      If the timer fires, the menu governor quickly notices the prediction
      failure and re-evaluates deeper C-states.
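
      The mechanism can be pictured like this (identifiers are illustrative,
      not the actual menu governor code):

        /* A shallow state was chosen although the expected residency is
         * long: arm a watchdog somewhat beyond the predicted time. */
        if (expected_us > predicted_us && chosen_state_is_shallow)
                hrtimer_start(&failsafe_timer,
                              ns_to_ktime((u64)predicted_us * 3 / 2 * NSEC_PER_USEC),
                              HRTIMER_MODE_REL_PINNED);

        /* Wakeup before expiry: hrtimer_cancel(&failsafe_timer).
         * Expiry: record the failure so that the next state selection
         * re-evaluates deeper C-states. */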
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Youquan Song <youquan.song@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • cpuidle: Quickly notice prediction failure for repeat mode · 69a37bea
      Committed by Youquan Song
      Predicting the future is difficult, and when the cpuidle governor's
      prediction fails, it may choose a shallower C-state than it should.
      Quickly noticing such a failure is important for power saving.

      The cpuidle menu governor has a method to predict a repeating
      pattern: if the last 8 C-state residencies are continuous and the
      same or very close, it predicts that the next residency will be the
      same.

      A real-world case is the turbostat utility (tools/power/x86/turbostat)
      at kernel 3.3 or earlier.  On Sandy Bridge, turbostat reads 10
      registers one by one, generating 10 IPIs that wake up idle CPUs.  The
      menu governor therefore predicts repeat mode, i.e. that another
      wake-up IPI will arrive soon, and keeps the idle CPUs in the shallow
      C1 state even though they are completely idle.  However, after those
      10 register reads turbostat sleeps for 5 seconds by default, so the
      idle CPUs stay in C1 for a long time until a break event occurs.
      On an idle Sandy Bridge system, running "./turbostat -v" shows deep
      C-state residency dangling between 70% and 99%; with the patched
      kernel it stays above 99.98%.

      In this patch, a timer is added when the menu governor detects a
      repeat mode and chooses a shallow C-state.  The timer is set to a
      timeout greater than the predicted time, and we conclude that the
      repeat-mode prediction has failed if the timer is triggered.  When
      repeat mode happens as expected, the CPU wakes up from the C-state
      before the timer fires and cancels it.  When repeat mode does not
      happen, the timer fires and the menu governor quickly notices the
      prediction failure, then re-evaluates deeper C-states.

      Below is another case which clearly shows the benefit of the patch:
      
      #include <stdlib.h>
      #include <stdio.h>
      #include <unistd.h>
      #include <signal.h>
      #include <sys/time.h>
      #include <time.h>
      #include <pthread.h>
      
      volatile int * shutdown;
      volatile long * count;
      int delay = 20;
      int loop = 8;
      
      void usage(void)
      {
      	fprintf(stderr,
      		"Usage: idle_predict [options]\n"
      		"  --help	-h  Print this help\n"
      		"  --thread	-n  Thread number\n"
      		"  --loop     	-l  Loop times in shallow Cstate\n"
      		"  --delay	-t  Sleep time (uS)in shallow Cstate\n");
      }
      
      void *simple_loop(void *arg) {
      	int idle_num = 1;
      	while (!(*shutdown)) {
      		*count = *count + 1;
      
      		if (idle_num % loop)
      			usleep(delay);
      		else {
      			/* sleep 1 second */
      			usleep(1000000);
      			idle_num = 0;
      		}
      		idle_num++;
      	}
      	return NULL;
      }
      
      static void sighand(int sig)
      {
      	*shutdown = 1;
      }
      
      int main(int argc, char *argv[])
      {
      	sigset_t sigset;
      	int signum = SIGALRM;
      	int i, c, thread_num = 8;
      	pthread_t pt[1024];
      
      	static char optstr[] = "n:l:t:h";
      
      	while ((c = getopt(argc, argv, optstr)) != EOF)
      		switch (c) {
      			case 'n':
      				thread_num = atoi(optarg);
      				break;
      			case 'l':
      				loop = atoi(optarg);
      				break;
      			case 't':
      				delay = atoi(optarg);
      				break;
      			case 'h':
      			default:
      				usage();
      				exit(1);
      		}
      
      	printf("thread=%d,loop=%d,delay=%d\n",thread_num,loop,delay);
      	count = malloc(sizeof(long));
      	shutdown = malloc(sizeof(int));
      	*count = 0;
      	*shutdown = 0;
      
      	sigemptyset(&sigset);
      	sigaddset(&sigset, signum);
      	sigprocmask (SIG_BLOCK, &sigset, NULL);
      	signal(SIGINT, sighand);
      	signal(SIGTERM, sighand);
      
      	for(i = 0; i < thread_num ; i++)
      		pthread_create(&pt[i], NULL, simple_loop, NULL);
      
      	for (i = 0; i < thread_num; i++)
      		pthread_join(pt[i], NULL);
      
      	exit(0);
      }
      
      Get powertop v2 from git://github.com/fenrus75/powertop and build it.
      Then build the above test application and run it.
      The test platform can be Intel Sandy Bridge or another recent platform.
      #./idle_predict -l 10 &
      #./powertop

      We will find that the deep C-state dangles between 40% and 100% and
      that much time is spent in C1.  This is because the menu governor
      wrongly predicts that the repeat mode will continue, so it chooses
      the shallow C1 state even though the CPU has the chance to sleep for
      1 second in a deep C-state.

      With the patched kernel, deep C-state residency stays above 99.6%.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Youquan Song <youquan.song@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  9. 04 July, 2012 · 2 commits
    • PM / Domains: Add preliminary support for cpuidle, v2 · cbc9ef02
      Committed by Rafael J. Wysocki
      On some systems there are CPU cores located in the same power
      domains as I/O devices.  Then, power can only be removed from the
      domain if all I/O devices in it are not in use and the CPU core
      is idle.  Add preliminary support for that to the generic PM domains
      framework.
      
      First, the platform is expected to provide a cpuidle driver with one
      extra state designated for use with the generic PM domains code.
      This state should be initially disabled and its exit_latency value
      should be set to whatever time is needed to bring up the CPU core
      itself after restoring power to it, not including the domain's
      power on latency.  Its .enter() callback should point to a procedure
      that will remove power from the domain containing the CPU core at
      the end of the CPU power transition.
      
      The remaining characteristics of the extra cpuidle state, referred to
      as the "domain" cpuidle state below, (e.g. power usage, target
      residency) should be populated in accordance with the properties of
      the hardware.
      
      Next, the platform should execute genpd_attach_cpuidle() on the PM
      domain containing the CPU core.  That will cause the generic PM
      domains framework to treat that domain in a special way such that:
      
       * When all devices in the domain have been suspended and it is about
         to be turned off, the states of the devices will be saved, but
         power will not be removed from the domain.  Instead, the "domain"
         cpuidle state will be enabled so that power can be removed from
         the domain when the CPU core is idle and the state has been chosen
         as the target by the cpuidle governor.
      
       * When the first I/O device in the domain is resumed and
         __pm_genpd_poweron() is called for the first time after
         power has been removed from the domain, the "domain" cpuidle
         state will be disabled to avoid subsequent surprise power removals
         via cpuidle.
      
      The effective exit_latency value of the "domain" cpuidle state
      depends on the time needed to bring up the CPU core itself after
      restoring power to it as well as on the power on latency of the
      domain containing the CPU core.  Thus the "domain" cpuidle state's
      exit_latency has to be recomputed every time the domain's power on
      latency is updated, which may happen every time power is restored
      to the domain, if the measured power on latency is greater than
      the latency stored in the corresponding generic_pm_domain structure.
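
      A platform would then wire this up along these lines (a sketch; the
      helper name comes from the changelog and its exact signature,
      including the assumed state-index argument, may differ):

        /* Attach the initially-disabled "domain" cpuidle state (index 1
         * here, hypothetical) to the PM domain containing the CPU core. */
        ret = genpd_attach_cpuidle(&soc_cpu_domain, 1);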
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Reviewed-by: Kevin Hilman <khilman@ti.com>
    • cpuidle: move field disable from per-driver to per-cpu · dc7fd275
      Committed by ShuoX Liu
      Andrew J. Schorr raised a question: when he changes the disable
      setting on a single CPU, it affects all the other CPUs.  That is
      because the disable field is currently per-driver instead of per-CPU,
      so all the C-states of the same driver are shared by all CPUs in the
      same machine.
      
      The patch changes the `disable' field to per-cpu, so we could set this
      separately for each cpu.
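
      With the per-cpu field, each CPU gets its own sysfs knob, so a state
      can be disabled on one CPU without affecting the others; for example
      (a minimal sketch, error handling trimmed):

        /* Disable idle state 2 on cpu0 only; other CPUs keep using it. */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpuidle/state2/disable", "w");

            if (!f)
                return 1;
            fputs("1", f);  /* write 0 to re-enable the state */
            fclose(f);
            return 0;
        }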
      Signed-off-by: ShuoX Liu <shuox.liu@intel.com>
      Reported-by: Andrew J. Schorr <aschorr@telemetry-investments.com>
      Reviewed-by: Yanmin Zhang <yanmin_zhang@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
  10. 30 March, 2012 · 2 commits
  11. 07 November, 2011 · 3 commits
  12. 01 November, 2011 · 1 commit
  13. 25 August, 2011 · 1 commit
  14. 29 May, 2011 · 1 commit
  15. 29 September, 2010 · 1 commit
  16. 10 August, 2010 · 1 commit
    • cpuidle: extend cpuidle and menu governor to handle dynamic states · 71abbbf8
      Committed by Ai Li
      On some SoC chips, HW resources may be in use during any particular idle
      period.  As a consequence, the cpuidle states that the SoC is safe to
      enter can change from idle period to idle period.  In addition, the
      latency and threshold of each cpuidle state can vary, depending on the
      operating condition when the CPU becomes idle, e.g.  the current cpu
      frequency, the current state of the HW blocks, etc.
      
      cpuidle core and the menu governor, in the current form, are geared
      towards cpuidle states that are static, i.e.  the availability of the
      states, their latencies, their thresholds are non-changing during run
      time.  cpuidle does not provide any hook that cpuidle drivers can use to
      adjust those values on the fly for the current idle period before the menu
      governor selects the target cpuidle state.
      
      This patch extends cpuidle core and the menu governor to handle states
      that are dynamic.  There are three additions in the patch and the patch
      maintains backwards-compatibility with existing cpuidle drivers.
      
      1) add prepare() to struct cpuidle_device.  A cpuidle driver can hook
         into the callback and cpuidle will call prepare() before calling the
         governor's select function.  The callback gives the cpuidle driver a
         chance to update the dynamic information of the cpuidle states for the
         current idle period, e.g.  state availability, latencies, thresholds,
         power values, etc.
      
      2) add CPUIDLE_FLAG_IGNORE as one of the state flags.  In the prepare()
         function, a cpuidle driver can set/clear the flag to indicate to the
         menu governor whether a cpuidle state should be ignored, i.e.  not
         available, during the current idle period.
      
      3) add power_specified bit to struct cpuidle_device.  The menu governor
         currently assumes that the cpuidle states are arranged in the order of
         increasing latency, threshold, and power savings.  This is true or can
         be made true for static states.  Once the state parameters are dynamic,
         the latencies, thresholds, and power savings for the cpuidle states can
         increase or decrease by different amounts from idle period to idle
         period.  So the assumption of increasing latency, threshold, and power
         savings from Cn to C(n+1) can no longer be guaranteed.
      
      It can be straightforward to calculate the power consumption of each
      available state and to specify it in power_usage for the idle period.
      Using the power_usage fields, the menu governor then selects the state
      that has the lowest power consumption and that still satisfies all other
      criteria.  The power_specified bit defaults to 0.  For existing cpuidle
      drivers, cpuidle detects that power_specified is 0 and fills in a dummy
      set of power_usage values.
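
      A driver-side sketch of the mechanism (the SoC helper and the state
      index are hypothetical):

        /* Called by the core before the governor's select(): refresh the
         * dynamic state information for the current idle period. */
        static int my_soc_prepare(struct cpuidle_device *dev)
        {
                struct cpuidle_state *deep = &dev->states[2];

                if (my_soc_bus_active())  /* deep state unsafe right now */
                        deep->flags |= CPUIDLE_FLAG_IGNORE;
                else
                        deep->flags &= ~CPUIDLE_FLAG_IGNORE;
                return 0;
        }

        /* at registration: dev->prepare = my_soc_prepare;
         *                  dev->power_specified = 1; */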
      Signed-off-by: Ai Li <aili@codeaurora.org>
      Cc: Len Brown <len.brown@intel.com>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 01 July, 2010 · 1 commit
    • sched: Cure nr_iowait_cpu() users · 8c215bd3
      Committed by Peter Zijlstra
      Commit 0224cf4c (sched: Intoduce get_cpu_iowait_time_us())
      broke things by not making sure preemption was indeed disabled
      by the callers of nr_iowait_cpu() which took the iowait value of
      the current cpu.
      
      This resulted in a heap of preempt warnings. Cure this by making
      nr_iowait_cpu() take a cpu number and fix up the callers to pass
      in the right number.
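
      The interface change boils down to the following shape (sketch):

        /* Callers now pass an explicit CPU instead of implicitly reading
         * the current one, so they no longer rely on preemption being
         * disabled around the call. */
        unsigned long nr_iowait_cpu(int cpu)
        {
                return atomic_read(&cpu_rq(cpu)->nr_iowait);
        }

        /* e.g. in the idle/tick code: nr_iowait_cpu(smp_processor_id()) */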
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Maxim Levitsky <maximlevitsky@gmail.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: linux-pm@lists.linux-foundation.org
      LKML-Reference: <1277968037.1868.120.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 25 May, 2010 · 1 commit
    • cpuidle: add a repeating pattern detector to the menu governor · 1f85f87d
      Committed by Arjan van de Ven
      Currently, the menu governor uses the (corrected) next timer as key item
      for predicting the idle duration.
      
      It turns out that there are specific cases where this breaks down: There
      are cases where we have a very repetitive pattern of idle durations, where
      the idle period is pretty much the same, for reasons completely unrelated
      to the next timer event.  Examples of such repeating patterns are network
      loads with irq mitigation, the mouse moving but in theory also the wifi
      beacons.
      
      This patch adds a relatively simple detector for such repeating patterns,
      where the standard deviation of the last 8 idle periods is compared to a
      threshold.
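
      An illustrative user-space model of the detector (the kernel version
      differs in the exact threshold and integer math):

        #include <limits.h>
        #include <stdint.h>

        #define INTERVALS 8

        /* Predict the average of the last 8 intervals when their standard
         * deviation is small; UINT_MAX means "no pattern detected". */
        static unsigned int detect_repeating_pattern(const unsigned int *iv)
        {
            uint64_t avg = 0, var = 0;
            int i;

            for (i = 0; i < INTERVALS; i++)
                avg += iv[i];
            avg /= INTERVALS;

            for (i = 0; i < INTERVALS; i++) {
                int64_t d = (int64_t)iv[i] - (int64_t)avg;
                var += (uint64_t)(d * d);
            }
            var /= INTERVALS;

            /* illustrative threshold: stddev at most half the average */
            if (var * 4 <= avg * avg)
                return (unsigned int)avg;
            return UINT_MAX;
        }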
      
      With this extra predictor in place, measurements show that the DECAY
      factor can now be increased (the decaying average will now decay slower)
      to get an even more stable result.
      
      [arjan@infradead.org: fix bug identified by Frank]
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Corrado Zoccolo <czoccolo@gmail.com>
      Cc: Frank Rowand <frank.rowand@am.sony.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 11 May, 2010 · 1 commit
    • PM QOS update · ed77134b
      Committed by Mark Gross
      This patch changes the string-based list management to a handle-based
      implementation to help with the hot path use of PM QoS.  It also
      renames much of the API to use "request" as opposed to "requirement",
      which was used in the initial implementation; I did this because
      "request" more accurately represents what it actually does.

      Also, I added a string-based ABI for users wanting a string
      interface.  So if the user writes 0xDDDDDDDD-formatted hex it will be
      accepted by the interface.  (Someone asked me for it and I don't
      think it hurts anything.)
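
      Kernel users see the handle-based API roughly as follows (a sketch of
      the 2010-era interface; exact signatures may differ):

        struct pm_qos_request_list *req;

        req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, 0);
        /* ... latency-sensitive section ... */
        pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
        pm_qos_remove_request(req);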
      
      This patch also updates some documentation, based on input I got from
      Randy.
      Signed-off-by: markgross <mgross@linux.intel.com>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
  20. 10 May, 2010 · 1 commit
    • cpuidle: Fix incorrect optimization · 1c6fe036
      Committed by Arjan van de Ven
      Commit 672917dc ("cpuidle: menu governor: reduce latency on exit")
      added an optimization that moved the analysis of the past idle period
      from the end of idle to the beginning of the new idle period.

      Unfortunately, this optimization had a bug: it zeroed one key
      variable for new use that is still needed for the analysis.  The fix
      is simple: zero the variable after doing the work from the previous
      idle period.

      During the audit of the code that found this issue, another issue
      was also found: the ->measured_us data structure member is never set;
      a local variable is always used instead.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Corrado Zoccolo <czoccolo@gmail.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  21. 07 March, 2010 · 1 commit
  22. 12 January, 2010 · 1 commit
  23. 22 September, 2009 · 1 commit