1. 23 Dec 2006, 1 commit
  2. 13 Dec 2006, 1 commit
  3. 08 Dec 2006, 1 commit
  4. 22 Nov 2006, 3 commits
    • WorkStruct: make allyesconfig · c4028958
      Committed by David Howells
      Fix up for make allyesconfig.
      Signed-off-by: David Howells <dhowells@redhat.com>
      c4028958
    • WorkStruct: Pass the work_struct pointer instead of context data · 65f27f38
      Committed by David Howells
      Pass the work_struct pointer to the work function rather than context data.
      The work function can use container_of() to work out the data.
      
      For the cases where the container of the work_struct may go away the moment the
      pending bit is cleared, it is made possible to defer the release of the
      structure by deferring the clearing of the pending bit.
      
      To make this work, an extra flag is introduced into the management side of the
      work_struct.  This governs auto-release of the structure upon execution.
      
      Ordinarily, the work queue executor would release the work_struct for further
      scheduling or deallocation by clearing the pending bit prior to jumping to the
      work function.  This means that, unless the driver makes some guarantee itself
      that the work_struct won't go away, the work function may not access anything
      else in the work_struct or its container lest they be deallocated.  This is a
      problem if the auxiliary data is taken away (as done by the last patch).
      
      However, if the pending bit is *not* cleared before jumping to the work
      function, then the work function *may* access the work_struct and its container
      with no problems.  But then the work function must itself release the
      work_struct by calling work_release().
      
      In most cases, automatic release is fine, so this is the default.  Special
      initiators exist for the non-auto-release case (ending in _NAR).
      Signed-off-by: David Howells <dhowells@redhat.com>
      65f27f38
    • [PATCH] Fix CPU_FREQ_GOV_ONDEMAND=y compile error · 6af6e1ef
      Committed by Dave Jones
      The ONDEMAND governor needs FREQ_TABLE.
      Signed-off-by: Mattia Dongili <malattia@linux.it>
      Signed-off-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      6af6e1ef
  5. 21 Nov 2006, 1 commit
    • Add "pure_initcall" for static variable initialization · b3438f82
      Committed by Linus Torvalds
      This is a quick hack to overcome the fact that SRCU currently does not
      allow static initializers, and we need to sometimes initialize those
      things before any other initializers (even "core" ones) can do so.
      
      Currently we don't allow this at all for modules, and the only user that
      needs it right now is cpufreq. As reported by Thomas Gleixner:
      
         "Commit b4dfdbb3 ("[PATCH] cpufreq:
          make the transition_notifier chain use SRCU") breaks cpu frequency
          notification users, which register the callback on core_init
          level."
      
      Cc: Thomas Gleixner <tglx@timesys.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b3438f82
  6. 07 Nov 2006, 1 commit
  7. 21 Oct 2006, 1 commit
  8. 16 Oct 2006, 1 commit
    • [CPUFREQ][8/8] acpi-cpufreq: Add support for freq feedback from hardware · dfde5d62
      Committed by Venkatesh Pallipadi
      Enable ondemand governor and acpi-cpufreq to use IA32_APERF and IA32_MPERF MSR
      to get active frequency feedback for the last sampling interval. This will
      make ondemand take right frequency decisions when hardware coordination of
      frequency is going on.
      
      Without APERF/MPERF, ondemand can take wrong decision at times due
      to underlying hardware coordination or TM2.
      Example:
       * CPU 0 and CPU 1 are hardware coordinated.
      * CPU 1 running at highest frequency.
      * CPU 0 was running at highest freq. Now ondemand reduces it to
        some intermediate frequency based on utilization.
       * Due to underlying hardware coordination with CPU 1, CPU 0 continues to
        run at highest frequency (as long as other CPU is at highest).
      * When ondemand samples CPU 0 again next time, without actual frequency
        feedback from APERF/MPERF, it will think that previous frequency change
        was successful and can go to wrong target frequency. This is because it
        thinks that utilization it has got this sampling interval is when running at
        intermediate frequency, rather than actual highest frequency.
      
      More information about the IA32_APERF and IA32_MPERF MSRs:
      Refer to the IA-32 Intel® Architecture Software Developer's Manual at
      http://developer.intel.com
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
      dfde5d62
  9. 04 Oct 2006, 1 commit
    • [PATCH] cpufreq: make the transition_notifier chain use SRCU · b4dfdbb3
      Committed by Alan Stern
      This patch (as762) changes the cpufreq_transition_notifier_list from a
      blocking_notifier_head to an srcu_notifier_head.  This will prevent errors
      caused by attempting to call down_read() to access the notifier chain at a
      time when interrupts must remain disabled, during system suspend.
      
      It's not clear to me whether this is really necessary; perhaps the chain
      could be made into an atomic_notifier.  However a couple of the callout
      routines do use blocking operations, so this approach seems safer.
      
      The head of the notifier chain needs to be initialized before use; this is
      done by an __init routine at core_initcall time.  If this turns out not to
      be a good choice, it can easily be changed.
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Cc: Jesse Brandeburg <jesse.brandeburg@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b4dfdbb3
  10. 27 Sep 2006, 1 commit
  11. 23 Sep 2006, 1 commit
    • [CPUFREQ] Fix some more CPU hotplug locking. · ddad65df
      Committed by Dave Jones
      Lukewarm IQ detected in hotplug locking
      BUG: warning at kernel/cpu.c:38/lock_cpu_hotplug()
      [<b0134a42>] lock_cpu_hotplug+0x42/0x65
      [<b02f8af1>] cpufreq_update_policy+0x25/0xad
      [<b0358756>] kprobe_flush_task+0x18/0x40
      [<b0355aab>] schedule+0x63f/0x68b
      [<b01377c2>] __link_module+0x0/0x1f
      [<b0119e7d>] __cond_resched+0x16/0x34
      [<b03560bf>] cond_resched+0x26/0x31
      [<b0355b0e>] wait_for_completion+0x17/0xb1
      [<f965c547>] cpufreq_stat_cpu_callback+0x13/0x20 [cpufreq_stats]
      [<f9670074>] cpufreq_stats_init+0x74/0x8b [cpufreq_stats]
      [<b0137872>] sys_init_module+0x91/0x174
      [<b0102c81>] sysenter_past_esp+0x56/0x79
      
      As there are other places that call cpufreq_update_policy without
      the hotplug lock, it seems better to keep the hotplug locking
      at the lower level for the time being until this is revamped.
      Signed-off-by: Dave Jones <davej@redhat.com>
      ddad65df
  12. 06 Sep 2006, 1 commit
  13. 14 Aug 2006, 1 commit
  14. 12 Aug 2006, 3 commits
    • [CPUFREQ][2/2] ondemand: updated add powersave_bias tunable · 05ca0350
      Committed by Alexey Starikovskiy
      ondemand selects the minimum frequency that can retire
      a workload with negligible idle time -- ideally resulting in the highest
      performance/power efficiency with negligible performance impact.
      
      But on some systems and some workloads, this algorithm
      is more performance biased than necessary, and
      de-tuning it a bit to allow some performance impact
      can save measurable power.
      
      This patch adds a "powersave_bias" tunable to ondemand
      to allow it to reduce its target frequency by a specified percent.
      
      By default, the powersave_bias is 0 and has no effect.
      powersave_bias is in units of 0.1%, so it has an effective range
      of 1 through 1000, resulting in 0.1% to 100% impact.
      
      In practice, users will not be able to detect a difference between
      0.1% increments, but 1.0% increments turned out to be too large.
      Also, the max value of 1000 (100%) would simply peg the system
      in its deepest power saving P-state, unless the processor really has
      a hardware P-state at 0Hz:-)
      
      For example, if ondemand requests 2.0GHz based on utilization,
      and powersave_bias=100, this code will knock 10% off the target
      and seek a target of 1.8GHz instead of 2.0GHz until the
      next sampling.  If 1.8GHz is an exact match with a hardware frequency
      we use it, otherwise we average our time between the frequency
      next higher than 1.8GHz and the next lower than 1.8GHz.
      
      Note that a user or administrative program can change powersave_bias
      at run-time depending on how they expect the system to be used.
      
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi at intel.com>
      Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy at intel.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
      05ca0350
    • [CPUFREQ][1/2] ondemand: updated tune for hardware coordination · 1ce28d6b
      Committed by Alexey Starikovskiy
      Try to make the dbs_check_cpu() call happen on all CPUs at the same jiffy.
      This will help when multiple cores share P-states via Hardware Coordination.
      
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi at intel.com>
      Signed-off-by: Alexey Starikovskiy <alexey.y.starikovskiy at intel.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
      1ce28d6b
    • [CPUFREQ] Fix typo. · cd878479
      Committed by Dave Jones
      Signed-off-by: Dave Jones <davej@redhat.com>
      cd878479
  15. 01 Aug 2006, 3 commits
    • [CPUFREQ] [2/2] demand load governor modules. · ea714970
      Committed by Jeremy Fitzhardinge
      Demand-load cpufreq governor modules if needed.
      Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Dave Jones <davej@redhat.com>
      ea714970
    • [CPUFREQ] [1/2] add __find_governor helper and clean up some error handling. · 3bcb09a3
      Committed by Jeremy Fitzhardinge
      Adds a __find_governor() helper function to look up a governor by
      name.  Also restructures some error handling to conform to the
      "single-exit" model which is generally preferred for kernel code.
      Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Dave Jones <davej@redhat.com>
      3bcb09a3
    • [CPUFREQ] return error when failing to set minfreq · 9c9a43ed
      Committed by Mattia Dongili
      I just stumbled on this bug/feature, this is how to reproduce it:
      
      # echo 450000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
      # echo 450000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
      # echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
      # cpufreq-info -p
      450000 450000 powersave
      # echo 1800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq ; echo $?
      0
      # cpufreq-info -p
      450000 450000 powersave
      
      Here it is. The kernel refuses to set a min_freq higher than the
      max_freq but it allows a max_freq lower than min_freq (lowering min_freq
      also).
      
      This behaviour is pretty straightforward (but undocumented) and it
      doesn't return an error although it fails to accomplish the requested
      action (setting min_freq).
      The problem (IMO) is basically that userspace is not allowed to set a
      full policy atomically, while the kernel always does that; thus it must
      enforce an ordering on operations.
      
      The attached patch returns -EINVAL if trying to increase frequencies
      starting from scaling_min_freq and documents the correct ordering of writes.
      Signed-off-by: Mattia Dongili <malattia@linux.it>
      Signed-off-by: Dominik Brodowski <linux at dominikbrodowski.net>
      Signed-off-by: Dave Jones <davej@redhat.com>
      9c9a43ed
  16. 26 Jul 2006, 1 commit
    • [PATCH] Reorganize the cpufreq cpu hotplug locking to not be totally bizarre · 153d7f3f
      Committed by Arjan van de Ven
      The patch below moves the cpu hotplugging higher up in the cpufreq
      layering; this is needed to avoid recursive taking of the cpu hotplug
      lock and to otherwise detangle the mess.
      
      The new rules are:
      1. you must do lock_cpu_hotplug() around the following functions:
         __cpufreq_driver_target
         __cpufreq_governor (for CPUFREQ_GOV_LIMITS operation only)
         __cpufreq_set_policy
       2. governor methods (.governor) must NOT take the lock_cpu_hotplug()
          lock in any way; they are called with the lock taken already
       3. if your governor spawns a thread that does things, like calling
          __cpufreq_driver_target, your thread must honor rule #1.
      4. the policy lock and other cpufreq internal locks nest within
         the lock_cpu_hotplug() lock.
      
      I'm not entirely happy about how the __cpufreq_governor rule ended up
      (conditional locking rule depending on the argument) but basically all
      callers pass this as a constant so it's not too horrible.
      
      The patch also removes the cpufreq_governor() function since during the
       locking audit it turned out to be entirely unused (so no need to fix it).
      
      The patch works on my testbox, but it could use more testing
      (otoh... it can't be much worse than the current code)
       Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      153d7f3f
  17. 24 Jul 2006, 1 commit
  18. 08 Jul 2006, 1 commit
    • [PATCH] Fix cpufreq vs hotplug lockdep recursion. · a496e25d
      Committed by Dave Jones
      [ There's some not quite baked bits in cpufreq-git right now
        so sending this on as a patch instead ]
      
      On Thu, 2006-07-06 at 07:58 -0700, Tom London wrote:
      
      > After installing .2356 I get this each time I boot:
      > =======================================================
      > [ INFO: possible circular locking dependency detected ]
      > -------------------------------------------------------
      > S06cpuspeed/1620 is trying to acquire lock:
      >  (dbs_mutex){--..}, at: [<c060d6bb>] mutex_lock+0x21/0x24
      >
      > but task is already holding lock:
      >  (cpucontrol){--..}, at: [<c060d6bb>] mutex_lock+0x21/0x24
      >
      > which lock already depends on the new lock.
      >
      
       make sure the cpu hotplug recursive mutex (yuck) is taken early in the
       cpufreq codepaths to avoid an AB-BA deadlock.
       Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
       Signed-off-by: Dave Jones <davej@redhat.com>
       Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a496e25d
  19. 01 Jul 2006, 1 commit
  20. 30 Jun 2006, 3 commits
  21. 28 Jun 2006, 3 commits
  22. 23 Jun 2006, 1 commit
    • [PATCH] cpufreq build fix · 138a0128
      Committed by Andrew Morton
      drivers/cpufreq/cpufreq_ondemand.c: In function 'do_dbs_timer':
      drivers/cpufreq/cpufreq_ondemand.c:374: warning: implicit declaration of function 'lock_cpu_hotplug'
      drivers/cpufreq/cpufreq_ondemand.c:381: warning: implicit declaration of function 'unlock_cpu_hotplug'
      drivers/cpufreq/cpufreq_conservative.c: In function 'do_dbs_timer':
      drivers/cpufreq/cpufreq_conservative.c:425: warning: implicit declaration of function 'lock_cpu_hotplug'
      drivers/cpufreq/cpufreq_conservative.c:432: warning: implicit declaration of function 'unlock_cpu_hotplug'
      
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      138a0128
  23. 22 Jun 2006, 1 commit
    • [CPUFREQ] Fix ondemand vs suspend deadlock · 4ec223d0
      Committed by Venkatesh Pallipadi
      Root-caused the bug to a deadlock in cpufreq and ondemand, due to the
      non-existent ordering between the cpu_hotplug lock and dbs_mutex: basically
      a race condition between cpu_down() and do_dbs_timer().
      
      cpu_down() flow:
      * cpu_down() call for CPU 1
      * Takes hot plug lock
      * Calls pre down notifier
      *     cpufreq notifier handler calls cpufreq_driver_target() which takes
            cpu_hotplug lock again. OK as cpu_hotplug lock is recursive in same
            process context
      * CPU 1 goes down
      * Calls post down notifier
      *     cpufreq notifier handler calls ondemand event stop which takes dbs_mutex
      
      So, cpu_hotplug lock is taken before dbs_mutex in this flow.
      
      do_dbs_timer() is triggered by a periodic timer event.
      It first takes dbs_mutex and then takes the cpu_hotplug lock in
      cpufreq_driver_target().
      Note the reverse order here compared to above. So, if this timer event
      happens at the right moment during cpu_down(), the system will deadlock.
      
      Attached patch fixes the issue for both ondemand and conservative.
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
      4ec223d0
  24. 05 Jun 2006, 1 commit
  25. 31 May 2006, 4 commits
  26. 09 May 2006, 1 commit
  27. 26 Apr 2006, 1 commit