  1. 18 Dec 2011: 1 commit
  2. 09 Dec 2011: 2 commits
  3. 07 Dec 2011: 2 commits
  4. 06 Dec 2011: 6 commits
  5. 05 Dec 2011: 1 commit
    • perf: Fix loss of notification with multi-event · 10c6db11
      Committed by Peter Zijlstra
      When you do:
              $ perf record -e cycles,cycles,cycles noploop 10
      
      You expect about 10,000 samples for each event, i.e., 10s at
      1000 samples/sec. However, this is not what happens; you get far
      fewer samples, maybe 3700 samples/event:
      
      $ perf report -D | tail -15
      Aggregated stats:
                 TOTAL events:      10998
                  MMAP events:         66
                  COMM events:          2
                SAMPLE events:      10930
      cycles stats:
                 TOTAL events:       3644
                SAMPLE events:       3644
      cycles stats:
                 TOTAL events:       3642
                SAMPLE events:       3642
      cycles stats:
                 TOTAL events:       3644
                SAMPLE events:       3644
      
      On an Intel Nehalem or even AMD64, there are 4 counters capable
      of measuring cycles, so there is plenty of space to measure those
      events without multiplexing (even with the NMI watchdog active).
      And even with multiplexing, we'd expect roughly the same number
      of samples per event.
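
      For context, noploop above is assumed to be nothing more than a timed
      busy loop; a minimal stand-in (not part of this patch, name and shape
      assumed) could look like:

      /* noploop.c - spin for the requested number of seconds so that
       * cycle events fire at a steady, CPU-bound rate.
       */
      #include <stdlib.h>
      #include <time.h>

      int main(int argc, char **argv)
      {
      	int secs = argc > 1 ? atoi(argv[1]) : 10;
      	time_t end = time(NULL) + secs;

      	while (time(NULL) < end)
      		;	/* burn cycles */
      	return 0;
      }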
      
      The root of the problem was that when the event that caused the buffer
      to become full was not the first event passed on the cmdline, the user
      notification would get lost. The notification was sent to the file
      descriptor of the overflowed event but the perf tool was not polling
      on it.  The perf tool aggregates all samples into a single buffer,
      i.e., the buffer of the first event. Consequently, it assumes
      notifications for any event will come via that descriptor.
      
      The seemingly straightforward solution of moving the waitq into the
      ringbuffer object doesn't work because of lifetime issues: you could
      perf_event_set_output() on an fd that you're also blocking on and
      cause the old rb object to be freed while its waitq is still
      referenced by the blocked thread -> FAIL.
      
      Therefore link all events to the ringbuffer and broadcast the wakeup
      from the ringbuffer object to all possible events that could be waited
      upon. This is rather ugly, and we're open to better solutions, but
      it works for now.
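
      A rough sketch of that shape (field and helper names here are
      illustrative, not necessarily the exact ones in the patch): the ring
      buffer keeps a list of every event attached to it, and the overflow
      path walks that list and wakes each event's waitq.

      /* Sketch only: broadcast the wakeup from the ring buffer to all
       * events whose output is redirected into it, instead of waking
       * only the event that overflowed.
       */
      static void ring_buffer_wakeup(struct perf_event *event)
      {
      	struct ring_buffer *rb;
      	struct perf_event *iter;

      	rcu_read_lock();
      	rb = rcu_dereference(event->rb);
      	list_for_each_entry_rcu(iter, &rb->event_list, rb_entry)
      		wake_up_all(&iter->waitq);
      	rcu_read_unlock();
      }
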
      Reported-by: Stephane Eranian <eranian@google.com>
      Finished-by: Stephane Eranian <eranian@google.com>
      Reviewed-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20111126014731.GA7030@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 02 Dec 2011: 5 commits
  7. 29 Nov 2011: 1 commit
  8. 25 Nov 2011: 1 commit
    • cgroup_freezer: fix freezing groups with stopped tasks · 884a45d9
      Committed by Michal Hocko
      Commit 2d3cbf8b ("cgroup_freezer: update_freezer_state() does incorrect
      state transitions") removed is_task_frozen_enough() and replaced it with
      a plain frozen() call. This, however, breaks freezing for a group with
      stopped tasks: they cannot be frozen, so the group remains in the
      CGROUP_FREEZING state (update_if_frozen() doesn't count stopped tasks)
      and never reaches CGROUP_FROZEN.
      
      Let's add is_task_frozen_enough() back and use it at the original
      locations (update_if_frozen() and try_to_freeze_cgroup()). Semantically
      we consider stopped tasks frozen enough, so both cases should be checked
      when testing whether tasks are frozen.
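
      A minimal sketch of such a helper (kernel-side fragment; the original
      body may differ in detail):

      /* Sketch: a stopped or traced task that is in the middle of being
       * frozen cannot enter the refrigerator, so treat it as frozen enough.
       */
      static bool is_task_frozen_enough(struct task_struct *task)
      {
      	return frozen(task) ||
      	       (task_is_stopped_or_traced(task) && freezing(task));
      }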
      
      Testcase:
      mkdir /dev/freezer
      mount -t cgroup -o freezer none /dev/freezer
      mkdir /dev/freezer/foo
      sleep 1h &
      pid=$!
      kill -STOP $pid
      echo $pid > /dev/freezer/foo/tasks
      echo FROZEN > /dev/freezer/foo/freezer.state
      while true
      do
      	cat /dev/freezer/foo/freezer.state
      	[ "`cat /dev/freezer/foo/freezer.state`" = "FROZEN" ] && break
      	sleep 1
      done
      echo OK
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Tomasz Buchert <tomasz.buchert@inria.fr>
      Cc: Paul Menage <paul@paulmenage.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@kernel.org
      Signed-off-by: Tejun Heo <htejun@gmail.com>
  9. 24 Nov 2011: 1 commit
  10. 19 Nov 2011: 3 commits
  11. 18 Nov 2011: 2 commits
  12. 17 Nov 2011: 1 commit
  13. 16 Nov 2011: 2 commits
  14. 14 Nov 2011: 6 commits
  15. 11 Nov 2011: 1 commit
    • clocksource: Avoid selecting mult values that might overflow when adjusted · d65670a7
      Committed by John Stultz
      For some frequencies, the clocks_calc_mult_shift() function will
      unfortunately select mult values very close to 0xffffffff.  This
      has the potential to overflow when NTP adjusts the clock, adding
      to the mult value.
      
      This patch adds a clocksource.maxadj value, which approximates the
      largest adjustment (about 11%: NTP limits adjustments to 500 ppm and
      the tick adjustment is limited to 10%) that could be made to the
      clocksource.mult value. It is then used both to check that the current
      mult value won't overflow/underflow, and to warn us if the
      timekeeping_adjust() code pushes past that 11% boundary.
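
      A sketch of the idea (helper name and check follow the description
      above; the in-tree code may differ in detail), with a registration-time
      check that warns if mult plus that margin could wrap:

      /* Sketch: cap corrections at roughly 11% of mult (10% tick adjust
       * plus 500 ppm NTP).
       */
      static u32 clocksource_max_adjustment(struct clocksource *cs)
      {
      	u64 ret = (u64)cs->mult * 11;

      	do_div(ret, 100);
      	return (u32)ret;
      }

      	cs->maxadj = clocksource_max_adjustment(cs);
      	WARN_ONCE(cs->mult + cs->maxadj < cs->mult,
      		  "clocksource %s: mult too close to overflow for an 11%% adjustment\n",
      		  cs->name);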
      
      v2: Fix max_adjustment calculation, and improve WARN_ONCE
      messages.
      
      v3: Don't warn before maxadj has actually been set
      
      CC: Yong Zhang <yong.zhang0@gmail.com>
      CC: David Daney <ddaney.cavm@gmail.com>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Chen Jie <chenj@lemote.com>
      CC: zhangfx <zhangfx@lemote.com>
      CC: stable@kernel.org
      Reported-by: Chen Jie <chenj@lemote.com>
      Reported-by: zhangfx <zhangfx@lemote.com>
      Tested-by: Yong Zhang <yong.zhang0@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
  16. 08 Nov 2011: 1 commit
  17. 07 Nov 2011: 2 commits
  18. 05 Nov 2011: 2 commits
    • PM / Freezer: Revert 27920651 "PM / Freezer: Make fake_signal_wake_up() wake... · d6cc7685
      Committed by Tejun Heo
      PM / Freezer: Revert 27920651 "PM / Freezer: Make fake_signal_wake_up() wake TASK_KILLABLE tasks too"
      
      Commit 27920651 "PM / Freezer: Make fake_signal_wake_up() wake
      TASK_KILLABLE tasks too" updated fake_signal_wake_up(), used by the
      freezer, to wake up KILLABLE tasks.  Sending unsolicited wakeups to
      tasks in killable sleep is dangerous, as there are code paths which
      depend on tasks not waking up spuriously from KILLABLE sleep.
      
      For example, sys_read() or a page fault can sleep in TASK_KILLABLE,
      assuming that wait/down/whatever _killable can only fail if we cannot
      return to usermode.  TASK_TRACED is another obvious example.
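
      An illustrative caller-side pattern (not taken from the patch) of that
      assumption:

      /* Typical use of a killable primitive: a non-zero return is assumed
       * to mean a fatal signal is pending and the task will never return
       * to usermode, so the caller can simply bail out.
       */
      if (mutex_lock_killable(&lock))
      	return -EINTR;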
      
      The previous patch updated wait_event_freezekillable() such that it
      doesn't depend on the spurious wakeup.  This patch reverts the
      offending commit.
      
      Note that the spurious KILLABLE wakeup had other implicit effects in
      KILLABLE sleeps in nfs and cifs and those will need further updates to
      regain freezekillable behavior.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    • PM / QoS: Remove redundant check · 6513fd69
      Committed by Guennadi Liakhovetski
      Remove an "if" check that repeats an equivalent one 6 lines above.
      Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>