1. 29 Sep 2011, 5 commits
    • rcu: Abstract common code for RCU grace-period-wait primitives · 2c42818e
      Paul E. McKenney authored
      Pull the code that waits for an RCU grace period into a single function,
      which is then called by synchronize_rcu() and friends in the case of
      TREE_RCU and TREE_PREEMPT_RCU, and from rcu_barrier() and friends in
      the case of TINY_RCU and TINY_PREEMPT_RCU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
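      Concretely, the shared helper is a completion keyed off a grace-period
      callback; a minimal sketch of the pattern (reconstructed from the
      description above -- treat the details as illustrative):

        struct rcu_synchronize {
                struct rcu_head head;
                struct completion completion;
        };

        /* Grace-period callback: wake the task waiting in wait_rcu_gp(). */
        static void wakeme_after_rcu(struct rcu_head *head)
        {
                struct rcu_synchronize *rcu =
                        container_of(head, struct rcu_synchronize, head);
                complete(&rcu->completion);
        }

        /* Common code: post a callback via crf() and wait for it to fire. */
        void wait_rcu_gp(call_rcu_func_t crf)
        {
                struct rcu_synchronize rcu;

                init_completion(&rcu.completion);
                crf(&rcu.head, wakeme_after_rcu);
                wait_for_completion(&rcu.completion);
        }

      synchronize_rcu() and friends then reduce to wait_rcu_gp(call_rcu),
      wait_rcu_gp(call_rcu_bh), and so on.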
    • rcu: Fix mismatched variable in rcutree_trace.c · f039d1f1
      Andi Kleen authored
      rcutree.c defines rcu_cpu_kthread_cpu as int, not unsigned int,
      so the extern declaration must use the same type.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
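      The rule being fixed, in miniature (plain variables for clarity; the
      real symbol is RCU kthread state shared between rcutree.c and
      rcutree_trace.c):

        /* rcutree.c -- the definition */
        int rcu_cpu_kthread_cpu;

        /* rcutree_trace.c -- the declaration must repeat the exact type */
        extern int rcu_cpu_kthread_cpu;                /* correct */
        /* extern unsigned int rcu_cpu_kthread_cpu;       mismatch: UB */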
    • rcu: Restore checks for blocking in RCU read-side critical sections · b3fbab05
      Paul E. McKenney authored
      Long ago, using TREE_RCU with PREEMPT would result in "scheduling
      while atomic" diagnostics if you blocked in an RCU read-side critical
      section.  However, PREEMPT now implies TREE_PREEMPT_RCU, which defeats
      this diagnostic.  This commit therefore adds a replacement diagnostic
      based on PROVE_RCU.
      
      Because rcu_lockdep_assert() and lockdep_rcu_dereference() are now being
      used for things that have nothing to do with rcu_dereference(), rename
      lockdep_rcu_dereference() to lockdep_rcu_suspicious() and add a third
      argument that is a string indicating what is suspicious.  This third
      argument is passed in from a new third argument to rcu_lockdep_assert().
      Update all calls to rcu_lockdep_assert() to add an informative third
      argument.
      
      Also, add a pair of rcu_lockdep_assert() calls from within
      rcu_note_context_switch(), one complaining if a context switch occurs
      in an RCU-bh read-side critical section and another complaining if a
      context switch occurs in an RCU-sched read-side critical section.
      These are present only if the PROVE_RCU kernel configuration option is enabled.
      
      Finally, fix some checkpatch whitespace complaints in lockdep.c.
      
      Again, you must enable PROVE_RCU to see these new diagnostics.  But you
      are enabling PROVE_RCU to check out new RCU uses in any case, aren't you?
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
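      A sketch of the two assertions added to rcu_note_context_switch(),
      using the new three-argument form (reconstructed from the description
      above; message text illustrative):

        /* Complain about a context switch inside an RCU-bh or RCU-sched
         * read-side critical section; active only under PROVE_RCU. */
        rcu_lockdep_assert(!rcu_read_lock_bh_held(),
                           "Illegal context switch in RCU-bh"
                           " read-side critical section");
        rcu_lockdep_assert(!rcu_read_lock_sched_held(),
                           "Illegal context switch in RCU-sched"
                           " read-side critical section");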
    • rcu: Avoid unnecessary self-wakeup of per-CPU kthreads · 1eb52121
      Shaohua Li authored
      There are a number of cases where RCU can find additional work
      for the per-CPU kthread from within the context of that kthread.
      In such cases the kthread is already running, so attempting
      to wake itself up does nothing except waste CPU cycles.  This commit
      therefore checks whether it is running in the per-CPU kthread's
      context and omits the wakeup in that case.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
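      The shape of the check, sketched (per-CPU accessor and variable names
      are illustrative):

        struct task_struct *t;

        /* Mark that there is work, then wake the kthread -- unless we
         * already are that kthread, in which case waking is a no-op. */
        __this_cpu_write(rcu_cpu_has_work, 1);
        t = __this_cpu_read(rcu_cpu_kthread_task);
        if (t != NULL && t != current)
                wake_up_process(t);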
    • rcu: Use kthread_create_on_node() · 1f288094
      Eric Dumazet authored
      Commit a26ac245 (move TREE_RCU from softirq to kthread) added
      per-CPU kthreads.  However, kthread creation uses kthread_create(), which
      can put the kthread's stack and task struct on the wrong NUMA node.
      Therefore, use kthread_create_on_node() instead of kthread_create()
      so that the stacks and task structs are placed on the correct NUMA node.
      
      A similar change was carried out in commit 94dcf29a (kthread:
      use kthread_create_on_node()).
      
      Also change rcutorture's priority-boost-test kthread creation.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Tejun Heo <tj@kernel.org>
      CC: Rusty Russell <rusty@rustcorp.com.au>
      CC: Andrew Morton <akpm@linux-foundation.org>
      CC: Andi Kleen <ak@linux.intel.com>
      CC: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
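      The substitution, in sketch form (thread function and name format are
      illustrative):

        /* Before: stack and task_struct may land on a remote NUMA node. */
        t = kthread_create(rcu_cpu_kthread, (void *)(long)cpu,
                           "rcuc%d", cpu);

        /* After: place them on the node that owns this CPU. */
        t = kthread_create_on_node(rcu_cpu_kthread, (void *)(long)cpu,
                                   cpu_to_node(cpu), "rcuc%d", cpu);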
  2. 26 Sep 2011, 1 commit
  3. 20 Sep 2011, 2 commits
  4. 15 Sep 2011, 1 commit
  5. 12 Sep 2011, 1 commit
  6. 31 Aug 2011, 1 commit
    • perf_event: Fix broken calc_timer_values() · 7f310a5d
      Eric B Munson authored
      We detected a serious issue with PERF_SAMPLE_READ and
      timing information when events were being multiplexed.
      
      Samples would have time_running > time_enabled. That
      was easy to reproduce with a libpfm4 example (ran 3
      times to cause multiplexing on Core 2):
      
       $ syst_smpl -e uops_retired:freq=1 &
       $ syst_smpl -e uops_retired:freq=1 &
       $ syst_smpl -e uops_retired:freq=1 &
       IIP:0x0000000040062d ... PERIOD:2355332948 ENA=40144625315 RUN=60014875184
       syst_smpl: WARNING: time_running > time_enabled
      	63277537998 uops_retired:freq=1 , scaled
      
      The bug was not present in kernels up to (and including) 3.0. It turns
      out the bug was introduced by the following commit:
      
      commit c4794295
      
          events: Move lockless timer calculation into helper function
      
      The parameters of the function got reversed, yet the call sites
      were not updated to reflect the change. That led to time_running
      and time_enabled being swapped. This had no effect when there was
      no multiplexing, because in that case time_running == time_enabled,
      but it would show up in any other scenario.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20110829124112.GA4828@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
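      The failure mode in miniature (the signature and call shown are
      illustrative, not the exact mainline code):

        /* Helper introduced by c4794295 -- parameter order changed ... */
        static void calc_timer_values(struct perf_event *event,
                                      u64 *enabled, u64 *running);

        /* ... but a call site kept the old order, swapping the values: */
        calc_timer_values(event, &running, &enabled);   /* bug   */
        calc_timer_values(event, &enabled, &running);   /* fixed */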
  7. 29 Aug 2011, 4 commits
    • perf events: Fix slow and broken cgroup context switch code · a8d757ef
      Stephane Eranian authored
      The current cgroup context switch code was incorrect, leading
      to bogus counts. Furthermore, as soon as there was an active
      cgroup event on a CPU, the context switch cost on that CPU
      would increase by a significant amount, as demonstrated by a
      simple ping/pong example:
      
       $ ./pong
       Both processes pinned to CPU1, running for 10s
       10684.51 ctxsw/s
      
      Now start a cgroup perf stat:
       $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 100
      
      $ ./pong
       Both processes pinned to CPU1, running for 10s
       6674.61 ctxsw/s
      
      That's a 37% penalty.
      
      Note that pong is not even in the monitored cgroup.
      
      The results shown by perf stat are bogus:
       $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 100
      
       Performance counter stats for 'sleep 100':
      
       CPU1 <not counted> cycles   test
       CPU1 16,984,189,138 cycles  #    0.000 GHz
      
      The second 'cycles' event should report a count @ CPU clock
      (here 2.4GHz) as it is counting across all cgroups.
      
      The patch below fixes the bogus accounting and bypasses any
      cgroup switches in case the outgoing and incoming tasks are
      in the same cgroup.
      
      With this patch the same test now yields:
       $ ./pong
       Both processes pinned to CPU1, running for 10s
       10775.30 ctxsw/s
      
      Start perf stat with cgroup:
      
       $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 10
      
      Run pong outside the cgroup:
       $ ./pong
       Both processes pinned to CPU1, running for 10s
       10687.80 ctxsw/s
      
      The penalty is now less than 2%.
      
      And the results for perf stat are correct:
      
      $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 10
      
       Performance counter stats for 'sleep 10':
      
       CPU1 <not counted> cycles test #    0.000 GHz
       CPU1 23,933,981,448 cycles      #    0.000 GHz
      
      Now perf stat reports the correct counts for the non-cgroup event.
      
      If we run pong inside the cgroup, then we also get the
      correct counts:
      
      $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 10
      
       Performance counter stats for 'sleep 10':
      
       CPU1 22,297,726,205 cycles test #    0.000 GHz
       CPU1 23,933,981,448 cycles      #    0.000 GHz
      
            10.001457237 seconds time elapsed
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20110825135803.GA4697@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
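      The bypass itself is a one-comparison fast path; a hedged sketch of
      the sched-out side (helper names treated as illustrative):

        struct perf_cgroup *cgrp_out, *cgrp_in;

        /* Skip the expensive cgroup event switch when the outgoing and
         * incoming tasks belong to the same perf cgroup. */
        cgrp_out = perf_cgroup_from_task(prev);
        cgrp_in  = perf_cgroup_from_task(next);
        if (cgrp_out != cgrp_in)
                perf_cgroup_switch(prev, PERF_CGROUP_SWOUT);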
    • sched: Fix a memory leak in __sdt_free() · feff8fa0
      WANG Cong authored
      This patch fixes the following memory leak:
      
      unreferenced object 0xffff880107266800 (size 512):
        comm "sched-powersave", pid 3718, jiffies 4323097853 (age 27495.450s)
        hex dump (first 32 bytes):
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          [<ffffffff81133940>] create_object+0x187/0x28b
          [<ffffffff814ac103>] kmemleak_alloc+0x73/0x98
          [<ffffffff811232ba>] __kmalloc_node+0x104/0x159
          [<ffffffff81044b98>] kzalloc_node.clone.97+0x15/0x17
          [<ffffffff8104cb90>] build_sched_domains+0xb7/0x7f3
          [<ffffffff8104d4df>] partition_sched_domains+0x1db/0x24a
          [<ffffffff8109ee4a>] do_rebuild_sched_domains+0x3b/0x47
          [<ffffffff810a00c7>] rebuild_sched_domains+0x10/0x12
          [<ffffffff8104d5ba>] sched_power_savings_store+0x6c/0x7b
          [<ffffffff8104d5df>] sched_mc_power_savings_store+0x16/0x18
          [<ffffffff8131322c>] sysdev_class_store+0x20/0x22
          [<ffffffff81193876>] sysfs_write_file+0x108/0x144
          [<ffffffff81135b10>] vfs_write+0xaf/0x102
          [<ffffffff81135d23>] sys_write+0x4d/0x74
          [<ffffffff814c8a42>] system_call_fastpath+0x16/0x1b
          [<ffffffffffffffff>] 0xffffffffffffffff
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: stable@kernel.org # 3.0
      Link: http://lkml.kernel.org/r/1313671017-4112-1-git-send-email-amwang@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
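      The backtrace points at a kzalloc_node() done while building sched
      domains that never got a matching kfree().  A hedged sketch of the fix
      pattern in __sdt_free() (field names follow the sched-domain code of
      that era and should be treated as illustrative):

        /* Every per-CPU allocation made in __sdt_alloc() needs a matching
         * free here; the sched_group_power entry was the one leaking. */
        for_each_cpu(j, cpu_map) {
                kfree(*per_cpu_ptr(sdd->sd, j));
                kfree(*per_cpu_ptr(sdd->sg, j));
                kfree(*per_cpu_ptr(sdd->sgp, j));   /* previously missing */
        }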
    • sched: Move blk_schedule_flush_plug() out of __schedule() · 9c40cef2
      Thomas Gleixner authored
      There is no real reason to run blk_schedule_flush_plug() with
      interrupts and preemption disabled.
      
      Move it into schedule() and call it when the task is going voluntarily
      to sleep. There might be false positives when the task is woken
      between that call and actually scheduling, but that's not really
      different from being woken immediately after switching away.
      
      This fixes a deadlock in the scheduler where the
      blk_schedule_flush_plug() callchain enables interrupts and thereby
      allows a wakeup to happen of the task that's going to sleep.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: stable@kernel.org # 2.6.39+
      Link: http://lkml.kernel.org/n/tip-dwfxtra7yg1b5r65m32ywtct@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
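      Together with the __schedule() split in the next entry, the voluntary
      path ends up looking roughly like this (a sketch; see the era's
      kernel/sched.c for the full version):

        static inline void sched_submit_work(struct task_struct *tsk)
        {
                if (!tsk->state)        /* not going to sleep voluntarily */
                        return;
                if (blk_needs_flush_plug(tsk))
                        blk_schedule_flush_plug(tsk);
        }

        asmlinkage void __sched schedule(void)
        {
                struct task_struct *tsk = current;

                sched_submit_work(tsk); /* irqs and preemption still on */
                __schedule();
        }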
    • sched: Separate the scheduler entry for preemption · c259e01a
      Thomas Gleixner authored
      Block-IO and workqueues call into notifier functions from the
      scheduler core code with interrupts and preemption disabled. These
      calls should be made before entering the scheduler core.
      
      To simplify this, separate the scheduler core code out into
      __schedule(). __schedule() is called directly from the places which
      set PREEMPT_ACTIVE and from schedule(). This allows us to add the work
      checks to schedule(), so they are only called when a task voluntarily
      goes to sleep.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: stable@kernel.org # 2.6.39+
      Link: http://lkml.kernel.org/r/20110622174918.813258321@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
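      The preemption side then enters the core directly, bypassing the
      voluntary-sleep work checks; a sketch of the era's preempt_schedule()
      (simplified, details illustrative):

        asmlinkage void __sched notrace preempt_schedule(void)
        {
                do {
                        add_preempt_count_notrace(PREEMPT_ACTIVE);
                        __schedule();   /* core entry, no work checks */
                        sub_preempt_count_notrace(PREEMPT_ACTIVE);
                        barrier();
                } while (need_resched());
        }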
  8. 27 Aug 2011, 1 commit
  9. 26 Aug 2011, 2 commits
    • kernel/printk: do not turn off bootconsole in printk_late_init() if keep_bootcon · 4c30c6f5
      Nishanth Aravamudan authored
      It seems that 7bf69395 ("console: allow to retain boot console via
      boot option keep_bootcon") doesn't always achieve what it aims to do:
      when printk_late_init() runs, it unconditionally turns off all boot
      consoles.  With this patch, I am able to see more messages on the boot
      console in KVM guests than I can without it when keep_bootcon is
      specified.
      
      I think it is appropriate for the relevant -stable trees.  However, it's
      more of an annoyance than a serious bug (ideally you don't need to keep
      the boot console around as console handover should be working -- I was
      encountering a situation where the console handover wasn't working and
      not having the boot console available meant I couldn't see why).
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Greg KH <gregkh@suse.de>
      Acked-by: Fabio M. Di Nitto <fdinitto@redhat.com>
      Cc: <stable@kernel.org>		[2.6.39.x, 3.0.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
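      The fix amounts to one extra condition in printk_late_init(); a
      sketch of the pattern (surrounding code elided):

        struct console *con;

        for_each_console(con) {
                /* Honor keep_bootcon: leave boot consoles registered. */
                if (!keep_bootcon && con->flags & CON_BOOT)
                        unregister_console(con);
        }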
    • Add a personality to report 2.6.x version numbers · be27425d
      Andi Kleen authored
      I ran into a couple of programs which broke with the new Linux 3.0
      version.  Some of those were binary only.  I tried to use LD_PRELOAD to
      work around it, but it was quite difficult and in one case impossible
      because of a mix of 32bit and 64bit executables.
      
      For example, all kinds of management software from HP don't work
      unless we pretend to run a 2.6 kernel.
      
        $ uname -a
        Linux svivoipvnx001 3.0.0-08107-g97cd98f #1062 SMP Fri Aug 12 18:11:45 CEST 2011 i686 i686 i386 GNU/Linux
      
        $ hpacucli ctrl all show
      
        Error: No controllers detected.
      
        $ rpm -qf /usr/sbin/hpacucli
        hpacucli-8.75-12.0
      
      Another notable case is that Python now reports "linux3" from
      sys.platform(), which in turn can break things that were checking
      sys.platform() == "linux2":
      
        https://bugzilla.mozilla.org/show_bug.cgi?id=664564
      
      It seems pretty clear to me that it's a bug in the apps, which should
      be using .startswith() instead of '==', but this allows us to unbreak
      those broken programs.
      
      This patch adds a UNAME26 personality that makes the kernel report a
      2.6.40+x version number instead.  The x is the x in 3.x.
      
      I know this is somewhat ugly, but I didn't find a better workaround,
      and compatibility with existing programs is important.
      
      Some programs also read /proc/sys/kernel/osrelease.  This can be
      worked around in user space with mount --bind (and a mount namespace).
      
      To use:
      
        wget ftp://ftp.kernel.org/pub/linux/kernel/people/ak/uname26/uname26.c
        gcc -o uname26 uname26.c
        ./uname26 program
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
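      The mapping itself is tiny; a hedged sketch of the kernel side
      (buffer handling and the -extra suffix elided, names illustrative):

        /* LINUX_VERSION_CODE packs the version as major<<16|minor<<8|patch,
         * so 3.x reports as 2.6.(40 + x): 3.0 -> 2.6.40, 3.1 -> 2.6.41. */
        if (current->personality & UNAME26) {
                unsigned v = ((LINUX_VERSION_CODE >> 8) & 0xff) + 40;
                snprintf(buf, sizeof(buf), "2.6.%u", v);
        }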
  10. 24 Aug 2011, 1 commit
    • Revert "irq: Always set IRQF_ONESHOT if no primary handler is specified" · 69dd3d8e
      Linus Torvalds authored
      This reverts commit f3637a5f.
      
      It turns out that this breaks several drivers, one example being OMAP
      boards that use the on-board OMAP UARTs with the omap-serial driver,
      which will not boot to userspace after the commit.
      
      Paul Walmsley reports that enabling CONFIG_DEBUG_SHIRQ reveals 'IRQ
      handler type mismatch' errors:
      
        IRQ handler type mismatch for IRQ 74
        current handler: serial idle
        ...
      
      and the reason is that setting IRQF_ONESHOT will now result in those
      interrupt handlers having different IRQF flags, and thus being
      unsharable.  So the commit log in the reverted commit:
      
                                  "Since it is required for those users and
          there is no difference for others it makes sense to add this flag
          unconditionally."
      
      is simply not true: there may not be any difference in the actions
      taken at irq time, but there is a *big* difference in how this flag is
      tested during irq management (see __setup_irq() in kernel/irq/manage.c).
      
      One solution may be to stop verifying IRQF_ONESHOT in __setup_irq(), but
      right now the safe course of action is to revert the change.  Let's
      revisit this in a later merge window.
      Reported-by: Paul Walmsley <paul@pwsan.com>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Requested-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
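      The sharing check that trips here is roughly of this shape (a sketch
      of __setup_irq(); the exact set of compared bits varies by version):

        /* Shared handlers must agree on IRQF_SHARED, trigger type, and
         * oneshot behavior; any mismatch makes the line unsharable. */
        if (!((old->flags & new->flags) & IRQF_SHARED) ||
            ((old->flags ^ new->flags) & IRQF_TRIGGER_MASK) ||
            ((old->flags ^ new->flags) & IRQF_ONESHOT))
                goto mismatch;  /* -> "IRQ handler type mismatch" */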
  11. 19 Aug 2011, 1 commit
  12. 14 Aug 2011, 1 commit
  13. 13 Aug 2011, 1 commit
    • xfs: remove subdirectories · c59d87c4
      Christoph Hellwig authored
      Use the move from Linux 2.6 to Linux 3.x as an excuse to kill the
      annoying subdirectories in the XFS source code.  Besides the large
      number of file renames, the only changes are to the Makefile, to a few
      files that include headers with the subdirectory prefix, and to the
      binary sysctl compat code in kernel/ that includes a header under
      fs/xfs/.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
  14. 12 Aug 2011, 1 commit
    • move RLIMIT_NPROC check from set_user() to do_execve_common() · 72fa5997
      Vasiliy Kulikov authored
      The patch http://lkml.org/lkml/2003/7/13/226 introduced an RLIMIT_NPROC
      check in set_user() to check for NPROC exceeding via setuid() and
      similar functions.
      
      Before the check there was a possibility for an unprivileged user to
      greatly exceed the allowed number of processes if the program relied
      on the rlimit only.  But the check created a new security threat: many
      poorly written programs simply don't check the setuid() return code
      and believe it cannot fail if executed with root privileges.  So, the
      check is removed in this patch because it has too often led to
      privilege escalations through buggy programs.
      
      The NPROC limit can still be enforced in the common code flow of
      daemons spawning user processes.  Most daemons do
      fork()+setuid()+execve().  The check introduced in execve() (1)
      enforces the same limit as in setuid() and (2) doesn't create similar
      security issues.
      
      Neil Brown suggested tracking which specific process has exceeded the
      limit by setting the PF_NPROC_EXCEEDED process flag.  With the change,
      only this process would fail on execve(); other processes' execve()
      behaviour is unchanged.
      
      Solar Designer suggested re-checking whether the NPROC limit is still
      exceeded at the moment of execve().  If the process slept for days
      between set*uid() and execve() and the NPROC counter dropped back
      under the limit, a deferred execve() failure because the limit was
      exceeded days ago would be unexpected.  If the limit is not exceeded
      anymore, we clear the flag on successful calls to execve() and fork().
      
      The flag is also cleared on successful calls to set_user() as the limit
      was exceeded for the previous user, not the current one.
      
      A similar check was introduced in the -ow patches (without the process flag).
      
      v3 - clear PF_NPROC_EXCEEDED on successful calls to set_user().
      Reviewed-by: James Morris <jmorris@namei.org>
      Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
      Acked-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
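      The re-check in do_execve_common() is small; a sketch close to the
      era's fs/exec.c (treat details as illustrative):

        /* Fail execve() only if set*uid() flagged us AND the limit is
         * still exceeded right now. */
        if ((current->flags & PF_NPROC_EXCEEDED) &&
            atomic_read(&current_user()->processes) > rlimit(RLIMIT_NPROC)) {
                retval = -EAGAIN;
                goto out_ret;
        }

        /* Not exceeded (anymore): clear the flag. */
        current->flags &= ~PF_NPROC_EXCEEDED;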
  15. 11 Aug 2011, 2 commits
    • blktrace: add FLUSH/FUA support · c09c47ca
      Namhyung Kim authored
      Add FLUSH/FUA support to blktrace.  As FLUSH precedes WRITE and/or
      FUA follows WRITE, use the same 'F' flag for both cases and
      distinguish them by their (relative) position.  The end result
      looks like this (other flags might be shown as well):
      
       - WRITE:            W
       - WRITE_FLUSH:      FW
       - WRITE_FUA:        WF
       - WRITE_FLUSH_FUA:  FWF
      
      Note that we reuse TC_BARRIER due to the lack of bit space in
      act_mask, so older versions of the blktrace tools will report flush
      requests as barriers from now on.
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Namhyung Kim <namhyung@gmail.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
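      The positional encoding falls out of the emission order; a sketch of
      the rwbs string construction (as in blk_fill_rwbs(), simplified):

        if (rw & REQ_FLUSH)
                rwbs[i++] = 'F';        /* FLUSH: before the 'W' */
        if (rw & REQ_WRITE)
                rwbs[i++] = 'W';
        if (rw & REQ_FUA)
                rwbs[i++] = 'F';        /* FUA: after the 'W' */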
    • alarmtimers: Avoid possible denial of service with high freq periodic timers · 6af7e471
      John Stultz authored
      It's possible to jam up the alarm timers by setting very small interval
      timers, which will cause the alarmtimer subsystem to spend all of its
      time firing and restarting timers.  This can effectively lock up a box.
      
      A deeper fix is needed, closely mimicking the hrtimer code, but for now
      just cap the interval to 100us to avoid userland hanging the system.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: stable@kernel.org
      Signed-off-by: John Stultz <john.stultz@linaro.org>
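      The stopgap is a simple clamp when the timer is armed; a hedged
      sketch (bound and field names illustrative):

        /* Refuse periods shorter than 100us until a proper hrtimer-style
         * fix lands; this keeps userland from monopolizing the CPU. */
        if (timr->it.alarm.interval.tv64 &&
            ktime_to_ns(timr->it.alarm.interval) < 100 * NSEC_PER_USEC)
                timr->it.alarm.interval = ktime_set(0, 100 * NSEC_PER_USEC);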
  16. 10 Aug 2011, 3 commits
  17. 09 Aug 2011, 1 commit
  18. 06 Aug 2011, 1 commit
    • jump label: Reduce the cycle count by changing the link order · b77f0f3c
      Jason Baron authored
      In the course of testing jump labels for use with the CFS
      bandwidth controller, Paul Turner discovered that using jump
      labels reduced the branch count and the instruction count, but
      did not reduce the cycle count or wall time.
      
      I noticed that having jump_label.o included in the kernel
      but not used in any way still caused this increase in cycle
      count and wall time.  I therefore moved jump_label.o in
      kernel/Makefile, changing the link order, and presumably
      moving it out of hot icache areas.  This brought down the cycle
      count/time as expected.
      
      In addition to Paul's testing, I've tested the patch using a
      single 'static_branch()' in the getppid() path, basically
      running tight loops of calls to getppid().  Here are my results
      for the branch-disabled case:
      
      With jump labels turned on (CONFIG_JUMP_LABEL), branch disabled:
      
       Performance counter stats for 'bash -c /tmp/getppid;true' (50 runs):
      
           3,969,510,217 instructions             #	   0.864 IPC     ( +-0.000% )
           4,592,334,954 cycles                     ( +-   0.046% )
             751,634,470 branches                   ( +-   0.000% )
      
              1.722635797  seconds time elapsed   ( +-   0.046% )
      
      Jump labels turned off (CONFIG_JUMP_LABEL not set), branch
      disabled:
      
       Performance counter stats for 'bash -c /tmp/getppid;true' (50 runs):
      
           4,009,611,846 instructions             #	   0.867 IPC     ( +-0.000% )
           4,622,210,580 cycles                     ( +-   0.012% )
             771,662,904 branches                   ( +-   0.000% )
      
              1.734341454  seconds time elapsed   ( +-   0.022% )
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: rth@redhat.com
      Cc: a.p.zijlstra@chello.nl
      Cc: rostedt@goodmis.org
      Link: http://lkml.kernel.org/r/20110805204040.GG2522@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Tested-by: Paul Turner <pjt@google.com>
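      For reference, the userspace half of such a test is trivial; a
      minimal sketch of the /tmp/getppid loop (assumed shape, not the
      original source):

        #include <unistd.h>

        int main(void)
        {
                long i;

                /* Hammer a cheap syscall so kernel-side branch layout
                 * dominates the measurement. */
                for (i = 0; i < 100000000L; i++)
                        getppid();
                return 0;
        }

      run under something like "perf stat -r 50 -- bash -c '/tmp/getppid;true'".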
  19. 04 Aug 2011, 6 commits
  20. 02 Aug 2011, 4 commits