1. 12 Nov 2013 (1 commit)
    • Revert "x86/UV: Add uvtrace support" · b5dfcb09
      Committed by Ingo Molnar
      This reverts commit 8eba1842.
      
      uv_trace() is not used by anything, nor is uv_trace_nmi_func, nor
      uv_trace_func.
      
      That's not how we do instrumentation code in the kernel: we add
      tracepoints, printk()s, etc., so that everyone, not just those
      with magic kernel modules, can debug a system.
      
      So remove this unused (and misguided) piece of code.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
      Cc: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/n/tip-tumfBffmr4jmnt8Gyxanoblg@git.kernel.org
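      The in-tree instrumentation style the message refers to is sketched
      below. This is a hedged illustration, not code from the reverted
      patch: the event name uv_nmi_mmr_read, its fields and the TRACE_SYSTEM
      name are hypothetical; only the TRACE_EVENT() machinery is the
      standard kernel tracepoint API, which makes the event visible to
      every user via ftrace rather than only to an out-of-tree module.

          /* Hypothetical trace header, e.g. arch/x86/include/asm/trace/uv.h */
          #undef TRACE_SYSTEM
          #define TRACE_SYSTEM uv

          #if !defined(_TRACE_UV_H) || defined(TRACE_HEADER_MULTI_READ)
          #define _TRACE_UV_H

          #include <linux/tracepoint.h>

          TRACE_EVENT(uv_nmi_mmr_read,
              TP_PROTO(int cpu, u64 mmr),
              TP_ARGS(cpu, mmr),
              TP_STRUCT__entry(
                  __field(int, cpu)
                  __field(u64, mmr)
              ),
              TP_fast_assign(
                  __entry->cpu = cpu;
                  __entry->mmr = mmr;
              ),
              TP_printk("cpu=%d mmr=0x%llx",
                        __entry->cpu, (unsigned long long)__entry->mmr)
          );

          #endif /* _TRACE_UV_H */

          /* This part must stay outside the include guard. */
          #include <trace/define_trace.h>

      A caller would then emit trace_uv_nmi_mmr_read(cpu, mmr) at the point
      of interest, and any user could enable the event through ftrace.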
  2. 24 Sep 2013 (3 commits)
    • x86/UV: Add uvtrace support · 8eba1842
      Committed by Mike Travis
      This patch adds support for the uvtrace module by providing a
      skeleton call to the registered trace function.  It also
      provides another, separate 'NMI' tracer that is triggered by the
      system-wide 'power nmi' command.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
      Reviewed-by: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/r/20130923212501.185052551@asylum.americas.sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
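      Going by the names mentioned in the revert above (uv_trace(),
      uv_trace_func, uv_trace_nmi_func), the "skeleton call to the
      registered trace function" plausibly looks like the sketch below;
      the exact prototypes are assumptions, not the patch's actual code.

          struct pt_regs;

          /* Set by the external uvtrace module when it registers itself. */
          extern void (*uv_trace_func)(const char *fmt, ...);
          extern int (*uv_trace_nmi_func)(unsigned int reason, struct pt_regs *regs);

          /* Skeleton call: a no-op unless a trace function is registered. */
          #define uv_trace(fmt, ...)                              \
          do {                                                    \
              if (uv_trace_func)                                  \
                  (uv_trace_func)(fmt, ##__VA_ARGS__);            \
          } while (0)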
    • x86/UV: Update UV support for external NMI signals · 0d12ef0c
      Committed by Mike Travis
      The current UV NMI handler has not been updated for the changes
      in the system NMI handler and the perf operations.  The UV NMI
      handler reads an MMR in the UV Hub to check whether the NMI
      event was caused by the external 'system NMI' that the operator
      can initiate on the System Management Controller.
      
      The problem arises when the perf tools are running, causing
      millions of perf events per second on very large CPU count
      systems.  Previously this was okay because the perf NMI handler
      ran at a higher priority on the NMI call chain and if the NMI
      was a perf event, it would stop calling other NMI handlers
      remaining on the NMI call chain.
      
      Now the system NMI handler calls all the handlers on the NMI
      call chain including the UV NMI handler.  This causes the UV NMI
      handler to read the MMRs at the same millions per second rate.
      This can lead to significant performance loss and possible
      system failures.  It can also cause thousands of 'Dazed and
      Confused' messages to be sent to the system console.  This
      effectively makes the perf tools unusable on UV systems.
      
      To avoid this excessive overhead when perf tools are running,
      this code has been optimized to minimize reading of the MMRs as
      much as possible, by moving to the NMI_UNKNOWN notifier chain.
      This chain is called only when all the users on the standard
      NMI_LOCAL call chain have been called and none of them have
      claimed this NMI.
      
      There is an exception where the NMI_LOCAL notifier chain is
      used.  When the perf tools are in use, it's possible that the UV
      NMI was captured by some other NMI handler and then either
      ignored or mistakenly processed as a perf event.  We set a
      per_cpu ('ping') flag for those CPUs that ignored the initial
      NMI, and then send them an IPI NMI signal.  The NMI_LOCAL
      handler on each CPU does not need to read the MMR; instead it
      checks the in-memory flag indicating that it was pinged.  There
      are two module variables: 'ping_count', indicating how many
      requested NMI events occurred, and 'ping_misses', indicating how
      many stray NMI events occurred.  The stray events are most likely
      perf events, so these counters show the overhead of the perf NMI
      interrupts and how many MMR reads were avoided.
      
      This patch also minimizes the reads of the MMRs by having the
      first CPU entering the NMI handler on each node set a per-hub
      in-memory atomic value.  (Having a per-hub value avoids sending
      lock traffic over NumaLink.)  Both types of UV NMIs from the SMI
      layer are supported.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
      Reviewed-by: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/r/20130923212500.353547733@asylum.americas.sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
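      A minimal sketch of the mechanism described above, assuming the
      standard x86 NMI notifier API (register_nmi_handler(), NMI_UNKNOWN,
      NMI_HANDLED/NMI_DONE); the handler name, the "uv" string and the
      uv_nmi_mmr_is_system_nmi() helper are illustrative stand-ins, not
      the patch's code. The point is that the expensive MMR read happens
      only after every NMI_LOCAL handler (perf included) has declined
      the NMI.

          #include <linux/init.h>
          #include <linux/types.h>
          #include <linux/ptrace.h>
          #include <asm/nmi.h>

          /* Hypothetical helper: read the hub MMR and test the system-NMI bit. */
          static bool uv_nmi_mmr_is_system_nmi(void)
          {
              return false;   /* placeholder; the real code reads a UV hub MMR */
          }

          static int uv_handle_nmi(unsigned int reason, struct pt_regs *regs)
          {
              /* Only reached for NMIs nobody on the NMI_LOCAL chain claimed. */
              if (!uv_nmi_mmr_is_system_nmi())
                  return NMI_DONE;        /* not ours */

              /* ... dump state, ping the remaining CPUs with an IPI NMI ... */
              return NMI_HANDLED;
          }

          static int __init uv_nmi_setup(void)
          {
              return register_nmi_handler(NMI_UNKNOWN, uv_handle_nmi, 0, "uv");
          }
          early_initcall(uv_nmi_setup);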
    • x86/UV: Move NMI support · 1e019421
      Committed by Mike Travis
      This patch moves the UV NMI support from the x2apic file to a
      new separate uv_nmi.c file in preparation for the next sequence
      of patches. It prevents upcoming bloat of the x2apic file, and
      has the added benefit of putting the upcoming /sys/module
      parameters under the name 'uv_nmi' instead of 'x2apic_uv_x',
      which was obscure.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
      Reviewed-by: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/r/20130923212500.183295611@asylum.americas.sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
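      A small sketch of the /sys/module side effect mentioned above; the
      parameter name and default are hypothetical. For built-in code the
      sysfs path is derived from the object file name, so a knob declared
      in uv_nmi.c shows up under /sys/module/uv_nmi/parameters/ instead
      of /sys/module/x2apic_uv_x/parameters/.

          #include <linux/moduleparam.h>

          /* Hypothetical knob living in the new uv_nmi.c */
          static int uv_nmi_loglevel = 7;
          module_param_named(nmi_loglevel, uv_nmi_loglevel, int, 0644);
          /* => /sys/module/uv_nmi/parameters/nmi_loglevel */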
  3. 12 Sep 2013 (2 commits)
  4. 03 Sep 2013 (1 commit)
    • lockref: implement lockless reference count updates using cmpxchg() · bc08b449
      Committed by Linus Torvalds
      Instead of taking the spinlock, the lockless versions atomically check
      that the lock is not taken, and do the reference count update using a
      cmpxchg() loop.  This is semantically identical to doing the reference
      count update protected by the lock, but avoids the "wait for lock"
      contention that you get when accesses to the reference count are
      contended.
      
      Note that a "lockref" is absolutely _not_ equivalent to an atomic_t.
      Even when the lockref reference counts are updated atomically with
      cmpxchg, the fact that they also verify the state of the spinlock means
      that the lockless updates can never happen while somebody else holds the
      spinlock.
      
      So while "lockref_put_or_lock()" looks a lot like just another name for
      "atomic_dec_and_lock()", and both optimize to lockless updates, they are
      fundamentally different: the decrement done by atomic_dec_and_lock() is
      truly independent of any lock (as long as it doesn't decrement to zero),
      so a locked region can still see the count change.
      
      The lockref structure, in contrast, really is a *locked* reference
      count.  If you hold the spinlock, the reference count will be stable and
      you can modify the reference count without using atomics, because even
      the lockless updates will see and respect the state of the lock.
      
      In order to enable the cmpxchg lockless code, the architecture needs to
      do three things:
      
       (1) Make sure that the "arch_spinlock_t" and an "unsigned int" can fit
           in an aligned u64, and have a "cmpxchg()" implementation that works
           on such a u64 data type.
      
       (2) define a helper function to test for a spinlock being unlocked
           ("arch_spin_value_unlocked()")
      
       (3) select the "ARCH_USE_CMPXCHG_LOCKREF" config variable in its
           Kconfig file.
      
      This enables it for x86-64 (but not 32-bit, we'd need to make sure
      cmpxchg() turns into the proper cmpxchg8b in order to enable it for
      32-bit mode).
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
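      A condensed sketch of the cmpxchg() fast path described above,
      assuming a 64-bit architecture that provides an 8-byte cmpxchg()
      and arch_spin_value_unlocked(), per requirements (1) and (2). It is
      a simplified illustration of the idea, not the exact lib/lockref.c
      code.

          #include <linux/types.h>
          #include <linux/compiler.h>
          #include <linux/spinlock.h>

          struct lockref {
              union {
                  aligned_u64 lock_count;   /* lock + count viewed as one u64 */
                  struct {
                      spinlock_t lock;
                      int count;
                  };
              };
          };

          /* Try a lockless increment; return false if the spinlock is held. */
          static bool lockref_get_not_locked(struct lockref *lockref)
          {
              struct lockref old, new;
              u64 prev;

              old.lock_count = READ_ONCE(lockref->lock_count);
              while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
                  new.lock_count = old.lock_count;
                  new.count++;                      /* the actual update */
                  /* Publish only if neither lock nor count changed meanwhile. */
                  prev = cmpxchg(&lockref->lock_count,
                                 old.lock_count, new.lock_count);
                  if (prev == old.lock_count)
                      return true;                  /* lockless update done */
                  old.lock_count = prev;            /* raced: retry with new value */
              }
              return false;   /* lock is held: caller falls back to spin_lock() */
          }

      If this returns false, the caller takes the spinlock and updates the
      count under it, which is exactly the "locked reference count"
      semantics described above.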
  5. 02 Sep 2013 (1 commit)
  6. 30 Aug 2013 (3 commits)
  7. 27 Aug 2013 (1 commit)
  8. 26 Aug 2013 (2 commits)
  9. 21 Aug 2013 (1 commit)
  10. 20 Aug 2013 (1 commit)
    • x86/ioapic/kcrash: Prevent crash_kexec() from deadlocking on ioapic_lock · 17405453
      Committed by Yoshihiro YUNOMAE
      Prevent crash_kexec() from deadlocking on ioapic_lock. When
      crash_kexec() is executed on a CPU, the CPU will take ioapic_lock
      in disable_IO_APIC(). So if the CPU gets an NMI while it holds
      ioapic_lock, a deadlock will happen.
      
      In this patch, ioapic_lock is zapped (re-initialized) before disable_IO_APIC().
      
      You can reproduce this deadlock the following way:
      
      1. Add mdelay(1000) after raw_spin_lock_irqsave() in
         native_ioapic_set_affinity()@arch/x86/kernel/apic/io_apic.c
      
         Although the deadlock can occur without this modification, it
         increases the likelihood of hitting the deadlock.
      
      2. Build and install the kernel
      
      3. Set up the OS so that it runs panic() and kexec when an NMI is injected
          # echo "kernel.unknown_nmi_panic=1" >> /etc/sysctl.conf
          # vim /etc/default/grub
            add "nmi_watchdog=0 crashkernel=256M" in GRUB_CMDLINE_LINUX line
          # grub2-mkconfig
      
      4. Reboot the OS
      
      5. Run the following command for each vCPU on the guest
          # while true; do echo <CPU num> > /proc/irq/<IO-APIC-edge or IO-APIC-fasteoi>/smp_affinity; done;
         By running this command, CPUs will take ioapic_lock to set the affinity.
      
      6. Inject an NMI (push a dump button, or execute 'virsh inject-nmi <domain>' if you
         use a VM). After injecting the NMI, panic() is called in NMI-handler context.
         kexec would then normally run from panic(), but the operation is stopped by a
         deadlock on ioapic_lock in crash_kexec()->machine_crash_shutdown()->
         native_machine_crash_shutdown()->disable_IO_APIC()->clear_IO_APIC()->
         clear_IO_APIC_pin()->ioapic_read_entry().
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/20130820070107.28245.83806.stgit@yunodevel
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
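      A minimal sketch of the fix as described, assuming a helper that
      re-initializes the lock on the crash path (the ioapic_zap_locks()
      name is an assumption); since an NMI can arrive while another
      context holds ioapic_lock, the crash path must not try to acquire
      it normally.

          #include <linux/spinlock.h>

          static DEFINE_RAW_SPINLOCK(ioapic_lock);

          /* Forcibly reset the lock; only legal on the crash/kdump path,
           * where no other CPU will ever release it. */
          void ioapic_zap_locks(void)
          {
              raw_spin_lock_init(&ioapic_lock);
          }

          /* Crash path (sketch): zap the lock before touching the IO-APIC. */
          void crash_shutdown_sketch(void)
          {
              ioapic_zap_locks();
              /* disable_IO_APIC(); ... */
          }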
  11. 14 Aug 2013 (4 commits)
  12. 13 Aug 2013 (3 commits)
    • sched: fix the theoretical signal_wake_up() vs schedule() race · e0acd0a6
      Committed by Oleg Nesterov
      This is only theoretical, but after try_to_wake_up(p) was changed
      to check p->state under p->pi_lock, code like
      
      	__set_current_state(TASK_INTERRUPTIBLE);
      	schedule();
      
      can miss a signal. This is the special case of wait-for-condition:
      it relies on the try_to_wake_up()/schedule() interaction and thus
      does not need an mb() between __set_current_state() and the
      signal_pending() check.
      
      However, this __set_current_state() can move into the critical
      section protected by rq->lock. Now that try_to_wake_up() takes
      another lock, we need to ensure that it can't be reordered with
      the "if (signal_pending(current))" check inside that section.
      
      The patch is actually a one-liner: it simply adds smp_wmb() before
      spin_lock_irq(rq->lock). This is what try_to_wake_up() already
      does, for the same reason.
      
      We turn this wmb() into the new helper, smp_mb__before_spinlock(),
      for better documentation and to allow the architectures to change
      the default implementation.
      
      While at it, kill smp_mb__after_lock(), it has no callers.
      
      Perhaps we can also add smp_mb__before/after_spinunlock() for
      prepare_to_wait().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
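      A condensed sketch of the pattern described above, assuming the new
      helper defaults to smp_wmb() unless an architecture overrides it;
      rq_sketch and schedule_lock_path() are stand-ins, not the real
      kernel/sched/core.c code.

          #include <linux/spinlock.h>
          #include <asm/barrier.h>

          #ifndef smp_mb__before_spinlock
          /* Default: a write barrier keeps the earlier ->state store ordered
           * before the stores inside the following locked section. */
          #define smp_mb__before_spinlock()   smp_wmb()
          #endif

          struct rq_sketch {
              raw_spinlock_t lock;        /* stands in for the scheduler's rq->lock */
          };

          static void schedule_lock_path(struct rq_sketch *rq)
          {
              /*
               * Without this, the caller's __set_current_state(TASK_INTERRUPTIBLE)
               * could be reordered into the critical section and race with the
               * signal_pending(current) check performed there.
               */
              smp_mb__before_spinlock();
              raw_spin_lock_irq(&rq->lock);
              /* ... deactivate the task unless signal_pending(current) ... */
              raw_spin_unlock_irq(&rq->lock);
          }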
    • KVM: x86: Update symbolic exit codes · cc2df20c
      Committed by Jan Kiszka
      Add decoding for INVEPT and reorder the list according to the reason
      numbers.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86, microcode, AMD: Fix early microcode loading · 84516098
      Committed by Torsten Kaiser
      load_microcode_amd() (and the helper it is using) should not have
      a cpu parameter. The microcode loading does not depend on the CPU
      with respect to the patches loaded, since they will end up in a
      global list for all CPUs anyway.
      
      The change from cpu to x86family in load_microcode_amd()
      now allows dropping the code messing with cpu_data(cpu) from
      collect_cpu_info_amd_early(), which is wrong anyway because at
      that point the per-cpu cpu_info is not yet set up (these values
      would later be overwritten by smp_store_boot_cpu_info() /
      smp_store_cpu_info()).
      
      Fold the rest of collect_cpu_info_amd_early() into load_ucode_amd_ap(),
      because it's only used in one place and, without the cpuinfo_x86
      accesses, not much of it was left.
      Signed-off-by: Torsten Kaiser <just.for.lkml@googlemail.com>
      [ Fengguang: build fix ]
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      [ Boris: adapt it to current tree. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
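      An illustrative sketch of the interface change described above; the
      exact prototypes are assumptions. The loader is keyed on the CPU
      family rather than on a CPU number, since the parsed patches end up
      in one global list anyway.

          #include <linux/types.h>

          enum ucode_state_sketch { UCODE_OK_S, UCODE_NFOUND_S, UCODE_ERROR_S };

          /* before: a per-cpu parameter the loader never really needed */
          enum ucode_state_sketch
          load_microcode_amd_old(int cpu, const u8 *data, size_t size);

          /* after: only the family matters for selecting applicable patches */
          enum ucode_state_sketch
          load_microcode_amd_new(u8 family, const u8 *data, size_t size);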
  13. 12 Aug 2013 (1 commit)
    • PCI: use weak functions for MSI arch-specific functions · 4287d824
      Committed by Thomas Petazzoni
      Until now, the MSI architecture-specific functions could be overloaded
      using a fairly complex set of #define and compile-time
      conditionals. In order to prepare for the introduction of the msi_chip
      infrastructure, it is desirable to switch all those functions to use
      the 'weak' mechanism. This commit converts all the architectures that
      were overriding those MSI functions to use the new strategy.
      
      Note that we keep two separate, non-weak, functions
      default_teardown_msi_irqs() and default_restore_msi_irqs() for the
      default behavior of the arch_teardown_msi_irqs() and
      arch_restore_msi_irqs(), as the default behavior is needed by x86 PCI
      code.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Tested-by: Daniel Price <daniel.price@gmail.com>
      Tested-by: Thierry Reding <thierry.reding@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: linux-s390@vger.kernel.org
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: linux-ia64@vger.kernel.org
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: David S. Miller <davem@davemloft.net>
      Cc: sparclinux@vger.kernel.org
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Signed-off-by: Jason Cooper <jason@lakedaemon.net>
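      A minimal sketch of the 'weak' mechanism the commit describes: the
      PCI core provides __weak defaults, and an architecture overrides
      them simply by defining a non-weak function of the same name. The
      arch_teardown_msi_irqs()/default_teardown_msi_irqs() names come from
      the message; arch_setup_msi_irqs() is the companion setup hook, and
      the bodies here are placeholders, not the real drivers/pci/msi.c code.

          #include <linux/pci.h>
          #include <linux/msi.h>

          /* Generic default; any architecture may override it. */
          int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
          {
              /* ... set up one MSI/MSI-X IRQ per descriptor ... */
              return 0;
          }

          /* Kept non-weak so the x86 PCI code can still call the default. */
          void default_teardown_msi_irqs(struct pci_dev *dev)
          {
              /* ... free the MSI IRQs attached to the device ... */
          }

          void __weak arch_teardown_msi_irqs(struct pci_dev *dev)
          {
              default_teardown_msi_irqs(dev);
          }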
  14. 10 Aug 2013 (1 commit)
    • x86: Don't clear olpc_ofw_header when sentinel is detected · d55e37bb
      Committed by Daniel Drake
      OpenFirmware wasn't quite following the protocol described in boot.txt,
      and the kernel has detected this through use of the sentinel value
      in boot_params. OFW does zero out almost all of the fields that it
      should, but not the sentinel.
      
      This causes the kernel to clear olpc_ofw_header, which breaks x86 OLPC
      support.
      
      OpenFirmware has now been fixed. However, it would be nice if we could
      maintain Linux compatibility with old firmware versions. To do that, we just
      have to avoid zeroing out olpc_ofw_header.
      
      OFW does not write to any other parts of the header that are being zapped
      by the sentinel-detection code, and all users of olpc_ofw_header are
      somewhat protected through checking for the OLPC_OFW_SIG magic value
      before using it. So this should not cause any problems for anyone.
      Signed-off-by: Daniel Drake <dsd@laptop.org>
      Link: http://lkml.kernel.org/r/20130809221420.618E6FAB03@dev.laptop.org
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@vger.kernel.org> # v3.9+
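      A rough sketch of the sentinel-based sanitization being adjusted,
      assuming field names along the lines of the real struct boot_params;
      the exact range cleared here is illustrative. The essence of the fix
      is that the cleared range starts after olpc_ofw_header, so old OFW
      versions that forget to clear the sentinel keep working.

          #include <linux/string.h>
          #include <asm/bootparam.h>

          static void sanitize_boot_params_sketch(struct boot_params *bp)
          {
              if (!bp->sentinel)
                  return;     /* the bootloader cleared everything properly */

              /* Zap fields the loader failed to clear, but deliberately leave
               * bp->olpc_ofw_header untouched so x86 OLPC support survives. */
              memset(&bp->ext_ramdisk_image, 0,
                     (char *)&bp->efi_info - (char *)&bp->ext_ramdisk_image);
          }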
  15. 09 Aug 2013 (7 commits)
  16. 08 Aug 2013 (1 commit)
  17. 07 Aug 2013 (7 commits)