1. 29 Jun, 2013 (1 commit)
  2. 26 Jun, 2013 (2 commits)
  3. 23 Jun, 2013 (1 commit)
    • trace,x86: Do not call local_irq_save() in load_current_idt() · 2b4bc789
      Committed by Steven Rostedt (Red Hat)
      As load_current_idt() is now what is used to update the IDT for the
      switches needed for NMI, lockdep debug, and for tracing, it must not
      call local_irq_save(). This is because one of its users is lockdep,
      which traces local_irq_save() itself: when the debug trap is hit, we
      need to update the IDT before tracing that interrupts were disabled.
      As load_current_idt() is used to do this, calling local_irq_save(),
      which lockdep traces, defeats the point of calling load_current_idt().

      Interrupts are already disabled when lockdep and NMI use it, and the
      only other user, tracing, can disable interrupts itself. So simply
      have the tracing update disable interrupts before calling
      load_current_idt(), instead of breaking the other users.
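
      A hedged sketch of the resulting shape (the helper name here is
      hypothetical, not the actual patch hunk): only the tracing-side
      switch disables interrupts around the IDT update.

      static void trace_switch_idt(void)	/* hypothetical helper */
      {
      	unsigned long flags;

      	local_irq_save(flags);	/* safe here: not the lockdep/NMI path */
      	load_current_idt();
      	local_irq_restore(flags);
      }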
      
      Here's the dump that happened:
      
      ------------[ cut here ]------------
      WARNING: at /work/autotest/nobackup/linux-test.git/kernel/fork.c:1196 copy_process+0x2c3/0x1398()
      DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled)
      Modules linked in:
      CPU: 1 PID: 4570 Comm: gdm-simple-gree Not tainted 3.10.0-rc3-test+ #5
      Hardware name:                  /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006
       ffffffff81d2a7a5 ffff88006ed13d50 ffffffff8192822b ffff88006ed13d90
       ffffffff81035f25 ffff8800721c6000 ffff88006ed13da0 0000000001200011
       0000000000000000 ffff88006ed5e000 ffff8800721c6000 ffff88006ed13df0
      Call Trace:
       [<ffffffff8192822b>] dump_stack+0x19/0x1b
       [<ffffffff81035f25>] warn_slowpath_common+0x67/0x80
       [<ffffffff81035fe1>] warn_slowpath_fmt+0x46/0x48
       [<ffffffff812bfc5d>] ? __raw_spin_lock_init+0x31/0x52
       [<ffffffff810341f7>] copy_process+0x2c3/0x1398
       [<ffffffff8103539d>] do_fork+0xa8/0x260
       [<ffffffff810ca7b1>] ? trace_preempt_on+0x2a/0x2f
       [<ffffffff812afb3e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
       [<ffffffff81937fe7>] ? sysret_check+0x1b/0x56
       [<ffffffff81937fe7>] ? sysret_check+0x1b/0x56
       [<ffffffff810355cf>] SyS_clone+0x16/0x18
       [<ffffffff81938369>] stub_clone+0x69/0x90
       [<ffffffff81937fc2>] ? system_call_fastpath+0x16/0x1b
      ---[ end trace 8b157a9d20ca1aa2 ]---
      
      in fork.c:
      
       #ifdef CONFIG_PROVE_LOCKING
      	DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled); <-- bug here
      	DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
       #endif
      
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2b4bc789
  4. 21 Jun, 2013 (7 commits)
    • x86, trace: Add irq vector tracepoints · cf910e83
      Committed by Seiji Aguchi
      [Purpose of this patch]
      
      As Vaibhav explained in the thread below, tracepoints for irq vectors
      are useful.
      
      http://www.spinics.net/lists/mm-commits/msg85707.html
      
      <snip>
      The current interrupt traces from irq_handler_entry and irq_handler_exit
      provide when an interrupt is handled.  They provide good data about when
      the system has switched to kernel space and how it affects the currently
      running processes.
      
      There are some IRQ vectors which trigger the system into kernel space,
      which are not handled in generic IRQ handlers.  Tracing such events gives
      us the information about IRQ interaction with other system events.
      
      The trace also tells where the system is spending its time.  We want to
      know which cores are handling interrupts and how they are affecting other
      processes in the system.  Also, the trace provides information about when
      the cores are idle and which interrupts are changing that state.
      <snip>
      
      On the other hand, my use case is tracing just the local timer event
      and getting the value of the instruction pointer.

      I previously suggested adding an argument to the local timer event to
      get the instruction pointer, but it can also be obtained with an
      external module like systemtap, so there is no need to add any
      argument to the irq vector tracepoints now.
      
      [Patch Description]
      
      Vaibhav's patch shared one tracepoint pair, irq_vector_entry/irq_vector_exit,
      across all events. But the use case above traces a specific irq vector
      rather than all events, and in that case we are concerned about the
      overhead of unwanted events.

      So, add the following tracepoints instead of introducing
      irq_vector_entry/exit, so that they can be enabled independently:
         - local_timer_vector
         - reschedule_vector
         - call_function_vector
         - call_function_single_vector
         - irq_work_entry_vector
         - error_apic_vector
         - thermal_apic_vector
         - threshold_apic_vector
         - spurious_apic_vector
         - x86_platform_ipi_vector
      
      Also, introduce logic that switches the IDT at enable/disable time so
      that the time penalty is zero while the tracepoints are disabled.
      Detailed explanations are as follows.
       - Create trace irq handlers with entering_irq()/exiting_irq().
       - Create a new IDT, trace_idt_table, at boot time by adding logic to
         _set_gate(). It is just a copy of the original IDT.
       - Register the new handlers for the tracepoints in the new IDT by
         introducing macros around alloc_intr_gate(), called when the
         irq_vector handlers are registered.
       - Add a check of whether irq vector tracing is on/off to
         load_current_idt(). This has to be done below the debug check for
         these reasons:
         - Switching to the debug IDT may be kicked while tracing is enabled.
         - On the other hand, switching to the trace IDT is kicked only when
           debugging is disabled.
      
      In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
      used for other purposes.
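
      A hedged sketch of the resulting IDT selection order (names as
      described above; the debug check must come first):

      static inline void load_current_idt(void)
      {
      	if (is_debug_idt_enabled())
      		load_debug_idt();	/* may be kicked while tracing is on */
      	else if (is_trace_idt_enabled())
      		load_trace_idt();	/* only reached when debugging is off */
      	else
      		load_idt((const struct desc_ptr *)&idt_descr);
      }
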
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      cf910e83
    • x86: Rename variables for debugging · 629f4f9d
      Committed by Seiji Aguchi
      Rename the debugging variables to describe their meaning precisely.

      Also, introduce a generic way to switch the IDT by checking the
      current state, debug on/off.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C323A8.7050905@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      629f4f9d
    • x86, trace: Introduce entering/exiting_irq() · eddc0e92
      Committed by Seiji Aguchi
      When implementing tracepoints in interrupt handlers, simply adding
      them to the performance-sensitive path of the handlers may cause a
      performance problem due to the time penalty.

      To solve this, the idea is to prepare separate non-trace and trace
      irq handlers and switch between their IDTs at enable/disable time.

      So, let's introduce entering_irq()/exiting_irq() for pre/post-
      processing of each irq handler.
      
      A way to use them is as follows.
      
      Non-trace irq handler:
      smp_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      Trace irq_handler:
      smp_trace_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	trace_irq_entry();	/* tracepoint for irq entry */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	trace_irq_exit();	/* tracepoint for irq exit */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      If the tracepoints could be placed outside entering_irq()/exiting_irq()
      as follows, it would look cleaner:
      
      smp_trace_irq_handler()
      {
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      }
      
      But that doesn't work.
      The problem is the placement of irq_enter()/irq_exit(): they must be
      called before trace_irq_entry()/trace_irq_exit(), because rcu_irq_enter()
      must run before any tracepoint is used, as tracepoints use RCU to
      synchronize.
      
      As a possible alternative, we might be able to call irq_enter() first,
      as follows, if irq_enter() could nest:
      
      smp_trace_irq_handler()
      {
      	irq_enter();
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      	irq_exit();
      }
      
      But that doesn't work either.
      If irq_enter() nests, it pays a time penalty for checking whether it
      was already called. That penalty is not desired in performance-sensitive
      paths, even if it is tiny.
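
      For reference, a hedged sketch of what entering_irq()/exiting_irq()
      can boil down to (assumed shape, not necessarily the final helpers):

      static inline void entering_irq(void)
      {
      	irq_enter();	/* must precede any tracepoint (rcu_irq_enter()) */
      	exit_idle();
      }

      static inline void exiting_irq(void)
      {
      	irq_exit();
      }
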
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C3238D.9040706@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      eddc0e92
    • x86, fpu: Use static_cpu_has_safe before alternatives · 5f8c4218
      Committed by Borislav Petkov
      The call stack below shows how this happens: basically eager_fpu_init()
      calls __thread_fpu_begin(current) which then does if (!use_eager_fpu()),
      which, in turn, uses static_cpu_has.
      
      And we're executing before alternatives so static_cpu_has doesn't work
      there yet.
      
      Use the safe variant in this path which becomes optimal after
      alternatives have run.
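
      A hedged sketch of the change (illustrative, not the exact hunk):

      static __always_inline __pure bool use_eager_fpu(void)
      {
      	/* was static_cpu_has(); unsafe before alternatives run */
      	return static_cpu_has_safe(X86_FEATURE_EAGER_FPU);
      }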
      
      WARNING: at arch/x86/kernel/cpu/common.c:1368 warn_pre_alternatives+0x1e/0x20()
      You're using static_cpu_has before alternatives have run!
      Modules linked in:
      Pid: 0, comm: swapper Not tainted 3.9.0-rc8+ #1
      Call Trace:
       warn_slowpath_common
       warn_slowpath_fmt
       ? fpu_finit
       warn_pre_alternatives
       eager_fpu_init
       fpu_init
       cpu_init
       trap_init
       start_kernel
       ? repair_env_string
       x86_64_start_reservations
       x86_64_start_kernel
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-6-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      5f8c4218
    • x86: Add a static_cpu_has_safe variant · 4a90a99c
      Committed by Borislav Petkov
      We want to use this in early code where alternatives might not have
      run yet; in that case we fall back to the dynamic boot_cpu_has.

      For that, force a 5-byte jump, since the compiler could be generating
      differently sized jumps for each label.
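
      Conceptually, a hedged sketch of the fallback behaviour (eliding the
      asm-goto and jump-label details):

      static __always_inline bool static_cpu_has_safe(u16 bit)
      {
      	/*
      	 * Sketch only: before alternatives run, the forced 5-byte
      	 * jump lands in a dynamic fallback; afterwards the jump is
      	 * patched away and the static path becomes optimal.
      	 */
      	return boot_cpu_has(bit);
      }
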
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-5-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      4a90a99c
    • x86: Sanity-check static_cpu_has usage · 5700f743
      Committed by Borislav Petkov
      static_cpu_has may be used only after alternatives have run. Before that
      it always returns false if constant folding with __builtin_constant_p()
      doesn't happen. And you don't want that.
      
      This patch is the result of me debugging an issue where I overzealously
      put static_cpu_has in code which executed before alternatives had run,
      and had to spend some time scratching my head and cursing at the
      monitor.

      So add a jump to a warning which screams loudly when the function is
      used too early. The alternatives mechanism patches that check away
      when it patches the rest of the kernel image.
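
      A hedged sketch of the warning itself (the message matches the splat
      quoted in the previous commit):

      void warn_pre_alternatives(void)
      {
      	WARN(1, "You're using static_cpu_has before alternatives have run!\n");
      }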
      
      [ hpa: factored this into its own configuration option.  If we want to
        have an overarching option, it should be an option which selects
        other options, not as a group option in the source code. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-4-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      5700f743
    • x86, cpu: Add a synthetic, always true, cpu feature · c3b83598
      Committed by Borislav Petkov
      This will be used in alternatives later as an always-replace flag.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-2-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      c3b83598
  5. 20 Jun, 2013 (2 commits)
    • x86: Fix trigger_all_cpu_backtrace() implementation · b52e0a7c
      Committed by Michel Lespinasse
      The following change fixes the x86 implementation of
      trigger_all_cpu_backtrace(), which had previously (accidentally, as
      far as I can tell) been disabled so that it always returned false, as
      on architectures that do not implement this function.
      
      trigger_all_cpu_backtrace(), as defined in include/linux/nmi.h,
      should call arch_trigger_all_cpu_backtrace() if available, or
      return false if the underlying arch doesn't implement this
      function.
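
      A hedged sketch of that include/linux/nmi.h dispatch:

      #ifdef arch_trigger_all_cpu_backtrace
      static inline bool trigger_all_cpu_backtrace(void)
      {
      	arch_trigger_all_cpu_backtrace();
      	return true;
      }
      #else
      static inline bool trigger_all_cpu_backtrace(void)
      {
      	return false;	/* no arch implementation visible here */
      }
      #endif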
      
      x86 did provide a suitable arch_trigger_all_cpu_backtrace()
      implementation, but it wasn't actually being used because it was
      declared in asm/nmi.h, which linux/nmi.h doesn't include. Also,
      linux/nmi.h couldn't easily be fixed by including asm/nmi.h,
      because that file is not available on all architectures.
      
      I am proposing to fix this by moving the x86 definition of
      arch_trigger_all_cpu_backtrace() to asm/irq.h.
      
      Tested via: echo l > /proc/sysrq-trigger

      Before the change, this used a fallback implementation which shows
      backtraces on active CPUs (using smp_call_function_interrupt()).

      After the change, it shows NMI backtraces on all CPUs.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1370518875-1346-1-git-send-email-walken@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b52e0a7c
    • x86: Fix section mismatch on load_ucode_ap · 94978599
      Committed by Paul Gortmaker
      We are in the process of removing all the __cpuinit annotations.
      While working on making that change, an existing problem was
      made evident:
      
        WARNING: arch/x86/kernel/built-in.o(.text+0x198f2): Section mismatch
        in reference from the function cpu_init() to the function
        .init.text:load_ucode_ap()   The function cpu_init() references
        the function __init load_ucode_ap().  This is often because cpu_init
        lacks a __init annotation or the annotation of load_ucode_ap is wrong.
      
      This now appears because in my working tree, cpu_init() is no longer
      tagged as __cpuinit, and so the audit picks up the mismatch.  The 2nd
      hypothesis from the audit is the correct one, as there was an incorrect
      __init tag on the prototype in the header (but __cpuinit was used on
      the function itself.)
      
      The audit is telling us that the prototype's __init annotation took
      effect and the function did land in the .init.text section.  Checking
      with objdump on a mainline tree that still has __cpuinit shows that
      the __cpuinit on the function takes precedence over the __init on the
      prototype, but that won't be true once we make __cpuinit a no-op.
      
      Even though we are removing __cpuinit, we temporarily align both
      the function and the prototype on __cpuinit so that the changeset
      can be applied to stable trees if desired.
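
      A hedged sketch of the mismatch and the fix (illustrative, not the
      exact hunk):

      /* header -- was: extern void __init load_ucode_ap(void); */
      extern void __cpuinit load_ucode_ap(void);

      /* definition -- already __cpuinit; the prototype now matches */
      void __cpuinit load_ucode_ap(void)
      {
      	/* ... load early microcode on this AP ... */
      }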
      
      [ hpa: build fix only, no object code change ]
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: stable <stable@vger.kernel.org> # 3.9+
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Link: http://lkml.kernel.org/r/1371654926-11729-1-git-send-email-paul.gortmaker@windriver.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      94978599
  6. 19 Jun, 2013 (1 commit)
  7. 11 Jun, 2013 (2 commits)
    • efi: Convert runtime services function ptrs · 43ab0476
      Committed by Borislav Petkov
      ... to void * like the boot services and lose all the void * casts. No
      functionality change.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      43ab0476
    • Modify UEFI anti-bricking code · f8b84043
      Committed by Matthew Garrett
      This patch reworks the UEFI anti-bricking code, including an effective
      reversion of cc5a080c and 31ff2f20. It turns out that calling
      QueryVariableInfo() from boot services results in some firmware
      implementations jumping to physical addresses even after entering virtual
      mode, so until we have 1:1 mappings for UEFI runtime space this isn't
      going to work so well.
      
      Reverting these gets us back to the situation where we'd refuse to create
      variables on some systems because they classify deleted variables as "used"
      until the firmware triggers a garbage collection run, which they won't do
      until they reach a lower threshold. This results in it being impossible to
      install a bootloader, which is unhelpful.
      
      Feedback from Samsung indicates that the firmware doesn't need more than
      5KB of storage space for its own purposes, so that seems like a reasonable
      threshold. However, there's still no guarantee that a platform will attempt
      garbage collection merely because it drops below this threshold. It seems
      that this is often only triggered if an attempt to write generates a
      genuine EFI_OUT_OF_RESOURCES error. We can force that by attempting to
      create a variable larger than the remaining space. This should fail, but if
      it somehow succeeds we can then immediately delete it.
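
      A hedged sketch of that trick via the standard runtime-service
      pointers (the threshold and variable names here are illustrative):

      u64 storage_size, remaining_size, max_size;
      efi_status_t status;

      status = efi.query_variable_info(attr, &storage_size,
      				 &remaining_size, &max_size);
      if (status == EFI_SUCCESS && remaining_size < 5120 /* ~5KB reserve */) {
      	/* Ask for more than is free to provoke EFI_OUT_OF_RESOURCES... */
      	status = efi.set_variable(dummy_name, &guid, attr,
      				  remaining_size + 1024, dummy_data);
      	/* ...and if it somehow succeeded, delete the dummy again. */
      	if (status == EFI_SUCCESS)
      		efi.set_variable(dummy_name, &guid, attr, 0, NULL);
      }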
      
      I've tested this on the UEFI machines I have available, but I don't have
      a Samsung and so can't verify that it avoids the bricking problem.
      Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com>
      Signed-off-by: Lee, Chun-Yi <jlee@suse.com> [ dummy variable cleanup ]
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      f8b84043
  8. 07 Jun, 2013 (1 commit)
  9. 05 Jun, 2013 (2 commits)
  10. 01 Jun, 2013 (1 commit)
  11. 31 May, 2013 (6 commits)
  12. 28 May, 2013 (2 commits)
  13. 14 May, 2013 (1 commit)
  14. 10 May, 2013 (1 commit)
  15. 07 May, 2013 (1 commit)
  16. 03 May, 2013 (3 commits)
  17. 01 May, 2013 (1 commit)
    • dump_stack: unify debug information printed by show_regs() · a43cb95d
      Committed by Tejun Heo
      show_regs() is inherently arch-dependent, but it does make sense to
      print generic debug information, and some archs already do, albeit in
      slightly different forms.  This patch introduces a generic function to
      print debug information from show_regs() so that different archs print
      out the same information and it's much easier to modify what's printed.
      
      show_regs_print_info() prints out the same debug info as dump_stack()
      does plus task and thread_info pointers.
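
      A hedged sketch of how an arch's show_regs() now delegates the common
      header:

      void show_regs(struct pt_regs *regs)
      {
      	show_regs_print_info(KERN_DEFAULT);	/* dump_stack() info + task/thread_info */
      	/* ... arch-specific register dump follows ... */
      }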
      
      * Archs which didn't print debug info now do.
      
        alpha, arc, blackfin, c6x, cris, frv, h8300, hexagon, ia64, m32r,
        metag, microblaze, mn10300, openrisc, parisc, score, sh64, sparc,
        um, xtensa
      
      * Already prints debug info.  Replaced with show_regs_print_info().
        The printed information is superset of what used to be there.
      
        arm, arm64, avr32, mips, powerpc, sh32, tile, unicore32, x86
      
      * s390 is special in that it used to print arch-specific information
        along with the generic debug info.  Heiko and Martin think the
        arch-specific extras aren't worth keeping an s390-specific
        implementation for.  Converted to use the generic version.
      
      Note that now all archs print the debug info before actual register
      dumps.
      
      An example BUG() dump follows.
      
       kernel BUG at /work/os/work/kernel/workqueue.c:4841!
       invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
       Modules linked in:
       CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.9.0-rc1-work+ #7
       Hardware name: empty empty/S3992, BIOS 080011  10/26/2007
       task: ffff88007c85e040 ti: ffff88007c860000 task.ti: ffff88007c860000
       RIP: 0010:[<ffffffff8234a07e>]  [<ffffffff8234a07e>] init_workqueues+0x4/0x6
       RSP: 0000:ffff88007c861ec8  EFLAGS: 00010246
       RAX: ffff88007c861fd8 RBX: ffffffff824466a8 RCX: 0000000000000001
       RDX: 0000000000000046 RSI: 0000000000000001 RDI: ffffffff8234a07a
       RBP: ffff88007c861ec8 R08: 0000000000000000 R09: 0000000000000000
       R10: 0000000000000001 R11: 0000000000000000 R12: ffffffff8234a07a
       R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
       FS:  0000000000000000(0000) GS:ffff88007dc00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
       CR2: ffff88015f7ff000 CR3: 00000000021f1000 CR4: 00000000000007f0
       DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
       Stack:
        ffff88007c861ef8 ffffffff81000312 ffffffff824466a8 ffff88007c85e650
        0000000000000003 0000000000000000 ffff88007c861f38 ffffffff82335e5d
        ffff88007c862080 ffffffff8223d8c0 ffff88007c862080 ffffffff81c47760
       Call Trace:
        [<ffffffff81000312>] do_one_initcall+0x122/0x170
        [<ffffffff82335e5d>] kernel_init_freeable+0x9b/0x1c8
        [<ffffffff81c47760>] ? rest_init+0x140/0x140
        [<ffffffff81c4776e>] kernel_init+0xe/0xf0
        [<ffffffff81c6be9c>] ret_from_fork+0x7c/0xb0
        [<ffffffff81c47760>] ? rest_init+0x140/0x140
        ...
      
      v2: Typo fix in x86-32.
      
      v3: CPU number dropped from show_regs_print_info() as
          dump_stack_print_info() has been updated to print it.  s390
          specific implementation dropped as requested by s390 maintainers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>		[tile bits]
      Acked-by: Richard Kuo <rkuo@codeaurora.org>		[hexagon bits]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a43cb95d
  18. 30 Apr, 2013 (1 commit)
    • mm/hugetlb: add more arch-defined huge_pte functions · 106c992a
      Committed by Gerald Schaefer
      Commit abf09bed ("s390/mm: implement software dirty bits")
      introduced another difference in the pte layout vs.  the pmd layout on
      s390, thoroughly breaking the s390 support for hugetlbfs.  This requires
      replacing some more pte_xxx functions in mm/hugetlbfs.c with a
      huge_pte_xxx version.
      
      This patch introduces those huge_pte_xxx functions and their generic
      implementation in asm-generic/hugetlb.h, which will now be included on
      all architectures supporting hugetlbfs apart from s390.  This change
      will be a no-op for those architectures.
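
      A hedged sketch of the asm-generic fallbacks (the exact set of helpers
      may differ):

      /* asm-generic/hugetlb.h -- trivial wrappers for archs other than s390 */
      static inline pte_t huge_pte_mkwrite(pte_t pte)
      {
      	return pte_mkwrite(pte);
      }

      static inline pte_t huge_pte_mkdirty(pte_t pte)
      {
      	return pte_mkdirty(pte);
      }

      static inline int huge_pte_write(pte_t pte)
      {
      	return pte_write(pte);
      }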
      
      [akpm@linux-foundation.org: fix warning]
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>	[for !s390 parts]
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      106c992a
  19. 28 Apr, 2013 (2 commits)
  20. 27 Apr, 2013 (1 commit)
  21. 26 Apr, 2013 (1 commit)
    • perf/x86/intel/P4: Robustify P4 PMU types · 5ac2b5c2
      Committed by Ingo Molnar
      Linus found, while extending integer type extension checks in the
      sparse static code checker, various fragile patterns of mixed
      signed/unsigned 64-bit/32-bit integer use in perf_events_p4.c.

      The relevant hardware register ABI is 64 bits wide on 32-bit kernels
      as well, so clean it all up a bit, remove unnecessary casts, and make
      sure we use 64-bit unsigned integers in these places.
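
      An illustrative, hedged example of the kind of mixed-width fragility
      being cleaned up (not an actual hunk from the patch):

      int bit = 40;
      u64 bad  = 1 << bit;	/* 32-bit signed shift: bogus for bit >= 32 */
      u64 good = 1ULL << bit;	/* keep the arithmetic 64-bit and unsigned */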
      
      [ Unfortunately this patch was not tested on real P4 hardware,
        those are pretty rare already. If this patch causes any
        problems on P4 hardware then please holler ... ]
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20130424072630.GB1780@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5ac2b5c2