1. 29 May 2011, 4 commits
    • x86 idle: EXPORT_SYMBOL(default_idle, pm_idle) only when APM demands it · 06ae40ce
      Committed by Len Brown
      In the long run, we don't want default_idle() or (pm_idle)() to
      be exported outside of process.c.  Start by not exporting them
      to modules unless the APM build demands it (a rough sketch of the
      conditional export follows this entry).
      
      cc: x86@kernel.org
      cc: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Len Brown <len.brown@intel.com>
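      A rough, hedged sketch of the idea (kernel-style C, not the actual
      patch): the symbols are defined as usual in process.c, but the
      exports are wrapped in a build-time guard so only the modular APM
      driver can still reach them.  Using CONFIG_APM_MODULE as the guard
      is an assumption drawn from the commit subject.

          #include <linux/module.h>

          /* Stand-ins for the real definitions in arch/x86/kernel/process.c. */
          void (*pm_idle)(void);

          void default_idle(void)
          {
                  /* halt until the next interrupt; body elided in this sketch */
          }

          /* Export only when the APM driver can be built as a module,
           * the one remaining modular user; otherwise keep the symbols
           * private to process.c. */
          #ifdef CONFIG_APM_MODULE
          EXPORT_SYMBOL(default_idle);
          EXPORT_SYMBOL(pm_idle);
          #endif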
    • x86 idle: clarify AMD erratum 400 workaround · 02c68a02
      Committed by Len Brown
      The workaround for AMD erratum 400 uses the term "c1e" falsely suggesting:
      1. Intel C1E is somehow involved
      2. All AMD processors with C1E are involved
      
      Use the string "amd_c1e" instead of simply "c1e" to clarify that
      this workaround is specific to AMD's version of C1E.
      Use the string "e400" to clarify that the workaround is specific
      to AMD processors with Erratum 400.
      
      This patch is text-substitution only, with no functional change.
      
      cc: x86@kernel.org
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
    • idle governor: Avoid lock acquisition to read pm_qos before entering idle · 333c5ae9
      Committed by Tim Chen
      Thanks to the reviews and comments by Rafael, James, Mark and Andi.
      Here's version 2 of the patch, incorporating your comments along with
      some updates to my earlier patch description.
      
      I noticed that before entering an idle state, the menu idle governor
      looks up the current pm_qos target value according to the list of qos
      requests received.  This lookup currently requires acquiring a lock
      to walk the list of qos requests and find the qos target value,
      slowing down entry into the idle state due to contention when
      multiple cpus access this list.  The contention is severe when many
      cpus are waking and going idle.  For example, for a simple workload
      with 32 pairs of processes ping-ponging messages to each other, with
      64 cpu cores active in the test system, I see the following profile
      with 37.82% of cpu cycles spent in contention on pm_qos_lock:
      
      -     37.82%          swapper  [kernel.kallsyms]          [k] _raw_spin_lock_irqsave
         - _raw_spin_lock_irqsave
            - 95.65% pm_qos_request
                 menu_select
                 cpuidle_idle_call
               - cpu_idle
                    99.98% start_secondary
      
      A better approach is to cache the updated pm_qos target value so that
      reading it does not require lock acquisition, as in the patch below
      (a rough sketch of the caching idea follows this entry).  With this
      patch the contention on pm_qos_lock is removed, and I saw a 2.2X
      increase in throughput for my message-passing workload.
      
      cc: stable@kernel.org
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: James Bottomley <James.Bottomley@suse.de>
      Acked-by: mark gross <markgross@thegnar.org>
      Signed-off-by: Len Brown <len.brown@intel.com>
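      A rough, hedged sketch of the caching idea described above
      (kernel-style C; the struct and function names are illustrative and
      not the kernel's pm_qos API): updates still take the lock to
      recompute the aggregate, but the result is mirrored into an atomic
      word that the idle path can read without locking.

          #include <linux/spinlock.h>
          #include <linux/atomic.h>
          #include <linux/list.h>

          struct qos_constraint {
                  spinlock_t lock;          /* protects the request list */
                  struct list_head requests;
                  atomic_t cached_target;   /* aggregate value, lock-free readers */
          };

          /* Hot path (idle entry): no lock, just read the cached aggregate. */
          static int qos_read_value(struct qos_constraint *c)
          {
                  return atomic_read(&c->cached_target);
          }

          /* Slow path (add/update/remove a request): recompute under the
           * lock, then publish the new aggregate for lock-free readers. */
          static void qos_update_target(struct qos_constraint *c, int new_target)
          {
                  unsigned long flags;

                  spin_lock_irqsave(&c->lock, flags);
                  /* ... walk c->requests and recompute new_target ... */
                  atomic_set(&c->cached_target, new_target);
                  spin_unlock_irqrestore(&c->lock, flags);
          }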
    • cpuidle: menu: fixed wrapping timers at 4.294 seconds · 7467571f
      Committed by Tero Kristo
      The cpuidle menu governor uses u32 as a temporary datatype for
      storing nanosecond values, which wrap around at 4.294 seconds.  This
      causes errors in the predicted sleep times, resulting in the
      selection of higher (deeper) C-states than appropriate and increased
      power consumption.  It also breaks the cpuidle state residency
      statistics (a short demonstration of the wrap follows this entry).
      
      cc: stable@kernel.org # .32.x through .39.x
      Signed-off-by: Tero Kristo <tero.kristo@nokia.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
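      A short, self-contained demonstration of the wrap described above
      (plain C, not the governor's code): a nanosecond count larger than
      2^32 ns (about 4.295 s) is truncated when held in a 32-bit temporary,
      while a 64-bit temporary keeps the value intact.

          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
                  uint64_t sleep_ns = 6ULL * 1000 * 1000 * 1000;  /* 6 seconds */

                  uint32_t wrapped = (uint32_t)sleep_ns;  /* truncates to ~1.705 s */
                  uint64_t exact   = sleep_ns;            /* wide type keeps the value */

                  printf("u32 temporary: %u ns\n", wrapped);
                  printf("u64 temporary: %llu ns\n", (unsigned long long)exact);
                  return 0;
          }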
2. 15 March 2011, 36 commits