1. 18 Nov 2010, 1 commit
    • x86: Eliminate bp argument from the stack tracing routines · 9c0729dc
      Committed by Soeren Sandmann Pedersen
      The various stack tracing routines take a 'bp' argument in which the
      caller is supposed to provide the base pointer to use, or 0 if it doesn't
      have one. Since bp is garbage whenever CONFIG_FRAME_POINTER is not
      defined, this means all callers in principle should either always pass
      0, or be conditional on CONFIG_FRAME_POINTER.
      
      However, there are only really three use cases for stack tracing:
      
      (a) Trace the current task, including IRQ stack if any
      (b) Trace the current task, but skip IRQ stack
      (c) Trace some other task
      
      In all cases, if CONFIG_FRAME_POINTER is not defined, bp should just
      be 0.  If it _is_ defined, then
      
      - in case (a) bp should be gotten directly from the CPU's register, so
        the caller should pass NULL for regs,
      
      - in case (b) the caller should pass the IRQ registers to
        dump_trace(),
      
      - in case (c) bp should be gotten from the top of the task's stack, so
        the caller should pass NULL for regs.
      
      Hence, the bp argument is not necessary because the combination of
      task and regs is sufficient to determine an appropriate value for bp.
      
      This patch introduces a new inline function stack_frame(task, regs)
      that computes the desired bp. This function is then called from the
      two versions of dump_stack().
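
      For illustration, a minimal sketch of such a helper in the
      CONFIG_FRAME_POINTER case (the exact upstream code lives in the x86
      stacktrace header; get_bp() stands in for whatever reads the frame
      pointer register):

        static inline unsigned long stack_frame(struct task_struct *task,
                                                struct pt_regs *regs)
        {
                unsigned long bp;

                if (regs)
                        return regs->bp;        /* case (b): IRQ regs passed in */

                if (task == current) {
                        get_bp(bp);             /* case (a): live frame pointer */
                        return bp;
                }

                /* case (c): frame pointer saved on top of the sleeping task's stack */
                return *(unsigned long *)task->thread.sp;
        }

      With CONFIG_FRAME_POINTER disabled, the helper would simply return 0,
      matching the rule above.
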
      Signed-off-by: Soren Sandmann <ssp@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <m3oc9rop28.fsf@dhcp-100-3-82.bos.redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      9c0729dc
  2. 18 Aug 2010, 1 commit
    • Make do_execve() take a const filename pointer · d7627467
      Committed by David Howells
      Make do_execve() take a const filename pointer so that kernel_execve() compiles
      correctly on ARM:
      
      arch/arm/kernel/sys_arm.c:88: warning: passing argument 1 of 'do_execve' discards qualifiers from pointer target type
      
      This also requires the argv and envp arguments to be consted twice, once for
      the pointer array and once for the strings the array points to.  This is
      because do_execve() passes a pointer to the filename (now const) to
      copy_strings_kernel().  A simpler alternative would be to cast the filename
      pointer in do_execve() when it's passed to copy_strings_kernel().
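
      For illustration, the doubly-const prototype described above reads
      roughly as follows (a sketch; the trailing pt_regs parameter reflects
      the do_execve() of that era):

        int do_execve(const char *filename,
                      const char __user *const __user *argv,
                      const char __user *const __user *envp,
                      struct pt_regs *regs);

      The inner const protects the strings, the outer const the pointer
      array itself.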
      
      do_execve() may not change any of the strings it is passed as part of
      the argv or envp lists, as some of them are in .rodata, so marking
      these strings as const should be fine.
      
      Further, kernel_execve() and sys_execve() need to be changed to match.
      
      This has been test-built on x86_64, frv, arm and mips.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Ralf Baechle <ralf@linux-mips.org>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d7627467
  3. 14 Aug 2010, 1 commit
  4. 04 Aug 2010, 1 commit
    • [CPUFREQ] x86 cpufreq: Make trace_power_frequency cpufreq driver independent · 6f4f2723
      Committed by Thomas Renninger
      and fix the broken case where a core's frequency depends on others.
      
      trace_power_frequency was implemented in a rather ungeneric way,
      in the acpi-cpufreq driver's target() function only.
      -> Move the call to trace_power_frequency to
         cpufreq.c:cpufreq_notify_transition(), where the CPUFREQ_POSTCHANGE
         notifier is triggered.
         This will support power frequency tracing by all cpufreq drivers.
      
      trace_power_frequency did not trace frequency changes correctly when
      the userspace governor was used or when CPU cores' frequencies depend
      on each other.
      -> Moving this into the CPUFREQ_POSTCHANGE notifier and passing the
         cpu which gets switched automatically fixes this.
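
      A hedged sketch of where the call ends up after the move (the
      tracepoint's argument list is illustrative, not necessarily the exact
      upstream signature):

        /* drivers/cpufreq/cpufreq.c, cpufreq_notify_transition(): sketch */
        case CPUFREQ_POSTCHANGE:
                adjust_jiffies(CPUFREQ_POSTCHANGE, freqs);
                /* every cpufreq driver now hits the tracepoint, with the cpu */
                trace_power_frequency(POWER_PSTATE, freqs->new, freqs->cpu);
                srcu_notifier_call_chain(&cpufreq_transition_notifier_list,
                                         CPUFREQ_POSTCHANGE, freqs);
                break;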
      
      Robert Schoene provided some important fixes on top of my initial
      quick shot version which are integrated in this patch:
      - Forgot some changes in power_end trace (TP_printk/variable names)
      - Variable dummy in power_end must now be cpu_id
      - Use static 64 bit variable instead of unsigned int for cpu_id
      Signed-off-by: Thomas Renninger <trenn@suse.de>
      CC: davej@redhat.com
      CC: arjan@infradead.org
      CC: linux-kernel@vger.kernel.org
      CC: robert.schoene@tu-dresden.de
      Tested-by: robert.schoene@tu-dresden.de
      Signed-off-by: Dave Jones <davej@redhat.com>
      6f4f2723
  5. 02 Aug 2010, 1 commit
  6. 01 Aug 2010, 1 commit
  7. 29 Jul 2010, 1 commit
  8. 22 Jul 2010, 1 commit
    • x86 cpufreq, perf: Make trace_power_frequency cpufreq driver independent · 4c21adf2
      Committed by Thomas Renninger
      and fix the broken case where a core's frequency depends on others.
      
      trace_power_frequency was implemented in a rather ungeneric
      way, in the acpi-cpufreq driver's target() function only.
      
      -> Move the call to trace_power_frequency to
         cpufreq.c:cpufreq_notify_transition(), where the CPUFREQ_POSTCHANGE
         notifier is triggered.
         This will support power frequency tracing by all cpufreq
         drivers.
      
      trace_power_frequency did not trace frequency changes correctly
      when the userspace governor was used or when CPU cores'
      frequencies depend on each other.
      
      -> Moving this into the CPUFREQ_POSTCHANGE notifier and passing the
         cpu which gets switched automatically fixes this.
      
      Robert Schoene provided some important fixes on top of my
      initial quick shot version which are integrated in this patch:
      - Forgot some changes in power_end trace (TP_printk/variable names)
      - Variable dummy in power_end must now be cpu_id
      - Use static 64 bit variable instead of unsigned int for cpu_id
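
      For illustration, the last two points roughly amount to the power
      trace events carrying the cpu as a 64-bit field; a hedged sketch of
      the TP_STRUCT__entry/TP_printk fragment (layout illustrative):

        TP_STRUCT__entry(
                __field(u64, type)
                __field(u64, state)
                __field(u64, cpu_id)
        ),
        TP_printk("type=%lu state=%lu cpu_id=%lu",
                  (unsigned long)__entry->type,
                  (unsigned long)__entry->state,
                  (unsigned long)__entry->cpu_id)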
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Thomas Renninger <trenn@suse.de>
      Cc: davej@codemonkey.org.uk
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Acked-by: Arjan van de Ven <arjan@infradead.org>
      Cc: Robert Schoene <robert.schoene@tu-dresden.de>
      Tested-by: Robert Schoene <robert.schoene@tu-dresden.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4c21adf2
  9. 14 May 2010, 1 commit
  10. 11 May 2010, 1 commit
  11. 26 Mar 2010, 2 commits
    • x86, ptrace: Fix block-step · ea8e61b7
      Committed by Peter Zijlstra
      Implement ptrace block-step using TIF_BLOCKSTEP: when the flag is set
      for a task, DEBUGCTLMSR_BTF is set for that task while any other
      DEBUGCTLMSR bits are preserved.
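
      A minimal sketch of the idea as it could look in the context-switch
      path (prev_p/next_p are the outgoing and incoming tasks;
      get_debugctlmsr()/update_debugctlmsr() are the existing MSR helpers):

        if (test_tsk_thread_flag(prev_p, TIF_BLOCKSTEP) ^
            test_tsk_thread_flag(next_p, TIF_BLOCKSTEP)) {
                unsigned long debugctl = get_debugctlmsr();

                debugctl &= ~DEBUGCTLMSR_BTF;           /* clear BTF first */
                if (test_tsk_thread_flag(next_p, TIF_BLOCKSTEP))
                        debugctl |= DEBUGCTLMSR_BTF;    /* set it for the incoming task */

                update_debugctlmsr(debugctl);           /* other DEBUGCTL bits untouched */
        }
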
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20100325135414.017536066@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ea8e61b7
    • x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
      Committed by Peter Zijlstra
      Support for the PMU's BTS features has been upstreamed in
      v2.6.32, but we still have the old and disabled ptrace-BTS code,
      as Linus noticed not so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf) and doesn't provide the flexibility
      needed for perf either.
      
      Its only users are ptrace-block-step and ptrace-bts; ptrace-bts was
      never used, and ptrace-block-step can be implemented using a much
      simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      faa4602e
  12. 20 Mar 2010, 1 commit
    • x86, amd: Restrict usage of c1e_idle() · 035a02c1
      Committed by Andreas Herrmann
      Currently c1e_idle returns true for all CPUs greater than or equal to
      family 0xf model 0x40. This covers too many CPUs.
      
      Meanwhile a respective erratum for the underlying problem was filed
      (#400). This patch adds the logic to check whether erratum #400
      applies to a given CPU.
      Especially for CPUs where SMI/HW triggered C1e is not supported,
      c1e_idle() doesn't need to be used. We can check this by looking at
      the respective OSVW bit for erratum #400.
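
      A hedged sketch of such a check inside a detection helper (MSR names
      as in msr-index.h; OSVW id 1 is assumed here to be the one covering
      erratum #400):

        if (cpu_has(c, X86_FEATURE_OSVW)) {
                u64 len, status;

                rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, len);
                if (len >= 2) {
                        rdmsrl(MSR_AMD64_OSVW_STATUS, status);
                        if (!(status & 2))      /* erratum #400 not present */
                                return false;   /* no need for c1e_idle()   */
                }
                return true;
        }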
      
      Cc: <stable@kernel.org> # .32.x .33.x
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <20100319110922.GA19614@alberich.amd.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      035a02c1
  13. 11 Mar 2010, 1 commit
  14. 30 Jan 2010, 1 commit
  15. 14 Jan 2010, 1 commit
  16. 13 Jan 2010, 1 commit
  17. 28 Dec 2009, 1 commit
    • x86: Use KERN_DEFAULT log-level in __show_regs() · d015a092
      Committed by Pekka Enberg
      Andrew Morton reported a strange looking kmemcheck warning:
      
        WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88004fba6c20)
        0000000000000000310000000000000000000000000000002413000000c9ffff
         u u u u u u u u u u u u u u u u i i i i i i i i u u u u u u u u
      
         [<ffffffff810af3aa>] kmemleak_scan+0x25a/0x540
         [<ffffffff810afbcb>] kmemleak_scan_thread+0x5b/0xe0
         [<ffffffff8104d0fe>] kthread+0x9e/0xb0
         [<ffffffff81003074>] kernel_thread_helper+0x4/0x10
         [<ffffffffffffffff>] 0xffffffffffffffff
      
      The above printout is missing the register dump completely. The
      problem here is that the output comes from syslog which doesn't
      show KERN_INFO log-level messages. We didn't see this before
      because both of us were testing on 32-bit kernels which use the
      _default_ log-level.
      
      Fix that up by explicitly using KERN_DEFAULT log-level for
      __show_regs() printks.
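
      The fix itself is a one-line pattern per printk, e.g. (sketch, one
      register shown):

        /* before: swallowed by syslog's KERN_INFO filtering */
        printk(KERN_INFO "RAX: %016lx\n", regs->ax);
        /* after: emitted at the console's default log level */
        printk(KERN_DEFAULT "RAX: %016lx\n", regs->ax);
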
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Vegard Nossum <vegard.nossum@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1261988819.4641.2.camel@penberg-laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d015a092
  18. 11 Dec 2009, 1 commit
  19. 10 Dec 2009, 2 commits
  20. 09 Dec 2009, 2 commits
  21. 08 Nov 2009, 1 commit
    • hw-breakpoints: Rewrite the hw-breakpoints layer on top of perf events · 24f1e32c
      Committed by Frederic Weisbecker
      This patch rebases the implementation of the breakpoints API on top of
      perf event instances.
      
      Each breakpoint is now a perf event that handles the
      register scheduling, thread/cpu attachment, etc.
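
      For illustration, a hedged sketch of how a caller sets a breakpoint
      through the perf-based layer (helper and constant names follow the
      linux/hw_breakpoint.h of that era; addr, bp_triggered and tsk are
      placeholders):

        struct perf_event_attr attr;
        struct perf_event *bp;

        hw_breakpoint_init(&attr);
        attr.bp_addr = (unsigned long)addr;
        attr.bp_len  = HW_BREAKPOINT_LEN_4;
        attr.bp_type = HW_BREAKPOINT_W;         /* trigger on writes */

        bp = register_user_hw_breakpoint(&attr, bp_triggered, tsk);
        if (IS_ERR(bp))
                return PTR_ERR(bp);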
      
      The new layering is now made as follows:
      
             ptrace       kgdb      ftrace   perf syscall
                \          |          /         /
                 \         |         /         /
                                              /
                  Core breakpoint API        /
                                            /
                           |               /
                           |              /
      
                    Breakpoints perf events
      
                           |
                           |
      
                     Breakpoints PMU ---- Debug Register constraints handling
                                          (Part of core breakpoint API)
                           |
                           |
      
                   Hardware debug registers
      
      Reasons for this rewrite:
      
      - Use the centralized/optimized pmu registers scheduling,
        implying an easier arch integration
      - More powerful register handling: perf attributes (pinned/flexible
        events, exclusive/non-exclusive, tunable period, etc...)
      
      Impact:
      
      - New perf ABI: the hardware breakpoints counters
      - Ptrace breakpoints setting remains tricky and still needs some per
        thread breakpoints references.
      
      Todo (in the order):
      
      - Support breakpoints perf counter events for perf tools (ie: implement
        perf_bpcounter_event())
      - Support from perf tools
      
      Changes in v2:
      
      - Follow the perf "event" rename
      - The ptrace regression has been fixed (ptrace breakpoint perf events
        weren't released when a task ended)
      - Drop the struct hw_breakpoint and store generic fields in
        perf_event_attr.
      - Separate core and arch specific headers, drop
        asm-generic/hw_breakpoint.h and create linux/hw_breakpoint.h
      - Use new generic len/type for breakpoint
      - Handle the off case: when the breakpoints API is not supported by an arch
      
      Changes in v3:
      
      - Fix broken CONFIG_KVM, we need to propagate the breakpoint api
        changes to kvm when we exit the guest and restore the bp registers
        to the host.
      
      Changes in v4:
      
      - Drop the hw_breakpoint_restore() stub as it is only used by KVM
      - EXPORT_SYMBOL_GPL hw_breakpoint_restore() as KVM can be built as a
        module
      - Restore the breakpoints unconditionally on kvm guest exit:
        TIF_DEBUG_THREAD no longer covers every case of running
        breakpoints and vcpu->arch.switch_db_regs might not always be
        set when the guest used debug registers.
        (Waiting for a reliable optimization)
      
      Changes in v5:
      
      - Split-up the asm-generic/hw-breakpoint.h moving to
        linux/hw_breakpoint.h into a separate patch
      - Optimize the breakpoints restoring while switching from kvm guest
        to host. We only want to restore the state if we have active
        breakpoints to the host, otherwise we don't care about messed-up
        address registers.
      - Add asm/hw_breakpoint.h to Kbuild
      - Fix bad breakpoint type in trace_selftest.c
      
      Changes in v6:
      
      - Fix wrong header inclusion in trace.h (triggered a build
        error with CONFIG_FTRACE_SELFTEST)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jan Kiszka <jan.kiszka@web.de>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      24f1e32c
  22. 04 Nov 2009, 1 commit
    • x86/hw-breakpoints: Actually flush thread breakpoints in flush_thread(). · 41a48d14
      Committed by Paul Mundt
      flush_thread() tries to do a TIF_DEBUG check before calling in to
      flush_thread_hw_breakpoint() (which subsequently clears the thread flag),
      but for some reason, the x86 code is manually clearing TIF_DEBUG
      immediately before the test, so this path will never be taken.
      
      This kills off the erroneous clear_tsk_thread_flag() and lets
      flush_thread_hw_breakpoint() actually get invoked.
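
      In other words, the flow in flush_thread() becomes (sketch):

        if (unlikely(test_tsk_thread_flag(tsk, TIF_DEBUG)))
                flush_thread_hw_breakpoint(tsk);        /* clears TIF_DEBUG itself */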
      
      Presumably folks were getting lucky with testing and the
      free_thread_info() -> free_thread_xstate() path was taking care of the
      flush there.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: "K.Prasad" <prasad@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      LKML-Reference: <20091005102306.GA7889@linux-sh.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      41a48d14
  23. 02 Oct 2009, 1 commit
    • core, x86: Add user return notifiers · 7c68af6e
      Committed by Avi Kivity
      Add a general per-cpu notifier that is called whenever the kernel is
      about to return to userspace.  The notifier uses a thread_info flag
      and existing checks, so there is no impact on user return or context
      switch fast paths.
      
      This will be used initially to speed up KVM task switching by lazily
      updating MSRs.
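
      A hedged sketch of the resulting API (field and function names as
      described for the user-return-notifier header of that era):

        static void on_user_return(struct user_return_notifier *urn)
        {
                /* e.g. write back lazily deferred MSRs before hitting userspace */
        }

        static struct user_return_notifier urn = {
                .on_user_return = on_user_return,
        };

        static void arm_on_this_cpu(void)
        {
                user_return_notifier_register(&urn);
                /* later: user_return_notifier_unregister(&urn); */
        }
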
      Signed-off-by: Avi Kivity <avi@redhat.com>
      LKML-Reference: <1253342422-13811-1-git-send-email-avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      7c68af6e
  24. 24 Sep 2009, 1 commit
  25. 20 Sep 2009, 1 commit
    • tracing, x86, cpuidle: Move the end point of a C state in the power tracer · 288f023e
      Committed by Arjan van de Ven
      The "end of a C state" trace point currently happens before
      the code runs that corrects the TSC for having stopped during idle.
      
      The result of this is that the timestamp of the end-of-C-state event
      is garbage on cpus where the TSC stops during idle.
      
      This patch moves the end point of the C state to after the timekeeping
      engine of the kernel has been corrected.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: fweisbec@gmail.com
      Cc: peterz@infradead.org
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20090919133533.139c2a46@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      288f023e
  26. 19 Sep 2009, 1 commit
  27. 20 Aug 2009, 1 commit
    • clockevent: Prevent dead lock on clockevents_lock · f833bab8
      Committed by Suresh Siddha
      Currently clockevents_notify() is called with interrupts enabled in
      some places and with interrupts disabled in others.
      
      This results in a deadlock in this scenario.
      
      cpu A holds clockevents_lock in clockevents_notify() with irqs enabled
      cpu B waits for clockevents_lock in clockevents_notify() with irqs disabled
      cpu C does set_mtrr(), which will try to rendezvous all the cpus.
      
      This will result in C and A coming to the rendezvous point and waiting
      for B. B is stuck forever waiting for the spinlock and thus not
      reaching the rendezvous point.
      
      Fix the clockevents code so that clockevents_lock is taken with
      interrupts disabled and thus avoid the above deadlock.
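
      A sketch of the resulting locking in clockevents_notify() (device
      release and other details elided):

        void clockevents_notify(unsigned long reason, void *arg)
        {
                unsigned long flags;

                spin_lock_irqsave(&clockevents_lock, flags);
                clockevents_do_notify(reason, arg);
                /* ... releasing of dead devices elided ... */
                spin_unlock_irqrestore(&clockevents_lock, flags);
        }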
      
      Also call lapic_timer_propagate_broadcast() on the destination cpu so
      that we avoid calling smp_call_function() in the clockevents notifier
      chain.
      
      This issue left us wondering if we need to change the MTRR rendezvous
      logic to use stop machine logic (instead of smp_call_function) or add
      a check in spinlock debug code to see if there are other spinlocks
      which get taken under both interrupts enabled/disabled conditions.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: "Pallipadi Venkatesh" <venkatesh.pallipadi@intel.com>
      Cc: "Brown Len" <len.brown@intel.com>
      LKML-Reference: <1250544899.2709.210.camel@sbs-t61.sc.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f833bab8
  28. 15 Jun 2009, 1 commit
    • kmemcheck: add mm functions · 2dff4405
      Committed by Vegard Nossum
      With kmemcheck enabled, the slab allocator needs to do this:
      
      1. Tell kmemcheck to allocate the shadow memory which stores the status of
         each byte in the allocation proper, e.g. whether it is initialized or
         uninitialized.
      2. Tell kmemcheck which parts of memory that should be marked uninitialized.
         There are actually a few more states, such as "not yet allocated" and
         "recently freed".
      
      If a slab cache is set up using the SLAB_NOTRACK flag, it will never return
      memory that can take page faults because of kmemcheck.
      
      If a slab cache is NOT set up using the SLAB_NOTRACK flag, callers can still
      request memory with the __GFP_NOTRACK flag. This does not prevent the page
      faults from occurring, however, but marks the object in question as being
      initialized so that no warnings will ever be produced for this object.
      
      In addition to (and in contrast to) __GFP_NOTRACK, the
      __GFP_NOTRACK_FALSE_POSITIVE flag indicates that the allocation should
      not be tracked _because_ it would produce a false positive. Their values
      are identical, but need not be so in the future (for example, we could now
      enable/disable false positives with a config option).
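
      For illustration, the two opt-out paths look roughly like this (cache
      name, sizes and variables are placeholders):

        /* whole cache opted out: its memory never takes kmemcheck page faults */
        cache = kmem_cache_create("untracked_cache", obj_size, 0,
                                  SLAB_NOTRACK, NULL);

        /* single allocation: marked initialized up front, so it never warns */
        buf = kmalloc(len, GFP_KERNEL | __GFP_NOTRACK);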
      
      Parts of this patch were contributed by Pekka Enberg but merged for
      atomicity.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      
      [rebased for mainline inclusion]
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      2dff4405
  29. 03 Jun 2009, 2 commits
  30. 12 May 2009, 1 commit
  31. 12 Apr 2009, 1 commit
    • x86: clean up declarations and variables · 2c1b284e
      Committed by Jaswinder Singh Rajput
      Impact: cleanup, no code changed
      
       - syscalls.h       update declarations due to unifications
       - irq.c            declare smp_generic_interrupt() before it gets used
       - process.c        declare sys_fork() and sys_vfork() before they get used
       - tsc.c            rename tsc_khz shadowed variable
       - apic/probe_32.c  declare apic_default before it gets used
       - apic/nmi.c       prev_nmi_count should be unsigned
       - apic/io_apic.c   declare smp_irq_move_cleanup_interrupt() before it gets used
       - mm/init.c        declare direct_gbpages and free_initrd_mem before they get used
      Signed-off-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2c1b284e
  32. 07 Apr 2009, 1 commit
    • x86, ds: add leakage warning · 2311f0de
      Committed by Markus Metzger
      Add a warning in case a debug store context is not removed before
      the task it is attached to is freed.
      
      Remove the old warning at thread exit. It is too early.
      
      Declare the debug store context field in thread_struct unconditionally.
      
      Remove ds_copy_thread() and ds_exit_thread() and do the work directly
      in process*.c.
      Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
      Cc: roland@redhat.com
      Cc: eranian@googlemail.com
      Cc: oleg@redhat.com
      Cc: juan.villacis@intel.com
      Cc: ak@linux.jf.intel.com
      LKML-Reference: <20090403144601.254472000@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2311f0de
  33. 18 Mar 2009, 1 commit
    • cpumask: fix CONFIG_CPUMASK_OFFSTACK=y cpu hotunplug crash · 30e1e6d1
      Committed by Rusty Russell
      Impact: Fix cpu offline when CONFIG_MAXSMP=y
      
      Changeset bc9b83dd "cpumask: convert
      c1e_mask in arch/x86/kernel/process.c to cpumask_var_t" contained a
      bug: c1e_mask is manipulated even if C1E isn't detected (and hence
      not allocated).
      
      This is simply fixed by checking for NULL (which gcc optimizes out
      anyway for CONFIG_CPUMASK_OFFSTACK=n, since it knows c1e_mask can never
      be NULL).
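
      The guard is as simple as (sketch):

        if (c1e_mask != NULL)
                cpumask_set_cpu(smp_processor_id(), c1e_mask);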
      
      In addition, fix a leak where select_idle_routine re-allocates
      (and re-clears) c1e_mask on every cpu init.
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Mike Travis <travis@sgi.com>
      LKML-Reference: <200903171450.34549.rusty@rustcorp.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      30e1e6d1
  34. 16 Mar 2009, 1 commit
  35. 13 Mar 2009, 2 commits