1. 15 May, 2015 1 commit
  2. 01 Apr, 2015 1 commit
  3. 23 Feb, 2015 1 commit
    • x86/asm: Cleanup prefetch primitives · a930dc45
      Authored by Borislav Petkov
      This is based on a patch originally by hpa.
      
      With the current improvements to the alternatives, we can simply use %P1
      as a mem8 operand constraint and rely on the toolchain to generate the
      proper instruction sizes. For example, on 32-bit, where the old
      instruction is empty, we get:
      
        apply_alternatives: feat: 6*32+8, old: (c104648b, len: 4), repl: (c195566c, len: 4)
        c104648b: alt_insn: 90 90 90 90
        c195566c: rpl_insn: 0f 0d 4b 5c
      
        ...
      
        apply_alternatives: feat: 6*32+8, old: (c18e09b4, len: 3), repl: (c1955948, len: 3)
        c18e09b4: alt_insn: 90 90 90
        c1955948: rpl_insn: 0f 0d 08
      
        ...
      
        apply_alternatives: feat: 6*32+8, old: (c1190cf9, len: 7), repl: (c1955a79, len: 7)
        c1190cf9: alt_insn: 90 90 90 90 90 90 90
        c1955a79: rpl_insn: 0f 0d 0d a0 d4 85 c1
      
      all with the proper padding applied, according to the size of the
      replacement instruction the compiler generates.
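
      For reference, a minimal sketch of what the cleaned-up primitive ends
      up looking like, modeled on arch/x86/include/asm/processor.h after
      this change (the feature flag and the BASE_PREFETCH fallback are
      recalled from the kernel source and may not match this exact
      revision):

      static inline void prefetchw(const void *x)
      {
      	/*
      	 * The "m" input plus the %P1 template let the toolchain pick
      	 * the addressing mode and emit the correctly sized opcode;
      	 * apply_alternatives() then pads the shorter variant with
      	 * NOPs, as the dumps above show.
      	 */
      	alternative_input(BASE_PREFETCH, "prefetchw %P1",
      			  X86_FEATURE_3DNOWPREFETCH,
      			  "m" (*(const char *)x));
      }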
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      a930dc45
  4. 19 Feb, 2015 1 commit
  5. 22 Jan, 2015 9 commits
  6. 31 Jul, 2014 8 commits
  7. 14 Jul, 2014 1 commit
  8. 22 Jun, 2014 1 commit
    • x86, mpparse: Simplify arch/x86/include/asm/mpspec.h · a491cc90
      Authored by Jiang Liu
      Simplify arch/x86/include/asm/mpspec.h by
      1) Change max_physical_apicid to static as it's only used in apic.c.
      2) Kill the declaration of mpc_default_type; it's never defined.
      3) Delete default_acpi_madt_oem_check(); it has already been declared
         in apic.h.
      4) Make default_acpi_madt_oem_check() depend on CONFIG_X86_LOCAL_APIC
         instead of CONFIG_X86_64 to support i386.
      5) Make mp_override_legacy_irq(), mp_config_acpi_legacy_irqs() and
         mp_register_gsi() static because they are only used in acpi/boot.c
         (a sketch of this static-scoping pattern follows the list).
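
      As an illustration of items 1) and 5), a sketch of the static-scoping
      pattern (types simplified; this is not the exact upstream diff):

      /*
       * Before, mpspec.h exported the symbol to every includer:
       *	extern int max_physical_apicid;
       *
       * After, the declaration is dropped from the header and the
       * definition becomes private to its only user:
       */
      static int max_physical_apicid;	/* in arch/x86/kernel/apic/apic.c */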
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Andi Kleen <ak@linux.intel.com>
      Link: http://lkml.kernel.org/r/1402302011-23642-4-git-send-email-jiang.liu@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a491cc90
  9. 19 Jun, 2014 1 commit
  10. 09 Feb, 2014 4 commits
  11. 20 Aug, 2013 1 commit
    • x86/ioapic/kcrash: Prevent crash_kexec() from deadlocking on ioapic_lock · 17405453
      Authored by Yoshihiro YUNOMAE
      Prevent crash_kexec() from deadlocking on ioapic_lock. When
      crash_kexec() is executed on a CPU, the CPU will take ioapic_lock
      in disable_IO_APIC(). So if the CPU gets an NMI while it holds
      ioapic_lock, a deadlock will happen.
      
      In this patch, ioapic_lock is zapped/initialized before disable_IO_APIC().
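
      A minimal sketch of that idea: ioapic_zap_locks() matches the helper
      this commit introduces, but its body here is an assumption based on
      the description above.

      void ioapic_zap_locks(void)
      {
      	/*
      	 * The interrupted context may hold ioapic_lock, and on the
      	 * crash path we can never wait for it, so re-initialize the
      	 * lock instead of spinning on it forever in NMI context.
      	 */
      	raw_spin_lock_init(&ioapic_lock);
      }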
      
      You can reproduce this deadlock in the following way:
      
      1. Add mdelay(1000) after raw_spin_lock_irqsave() in
         native_ioapic_set_affinity()@arch/x86/kernel/apic/io_apic.c
      
         Although the deadlock can occur without this modification, the delay
         makes it much more likely to hit.
      
      2. Build and install the kernel
      
      3. Set up the OS so that it runs panic() and kexec when an NMI is injected
          # echo "kernel.unknown_nmi_panic=1" >> /etc/sysctl.conf
          # vim /etc/default/grub
            add "nmi_watchdog=0 crashkernel=256M" in GRUB_CMDLINE_LINUX line
          # grub2-mkconfig
      
      4. Reboot the OS
      
      5. Run the following command for each vcpu on the guest
          # while true; do echo <CPU num> > /proc/irq/<IO-APIC-edge or IO-APIC-fasteoi>/smp_affinity; done;
         By running this command, the CPUs will keep taking ioapic_lock to set affinity.
      
      6. Inject an NMI (push a dump button, or execute 'virsh inject-nmi <domain>' if you
         use a VM). After injecting the NMI, panic() is called in an nmi-handler context.
         Then, kexec would normally run from panic(), but the operation is stopped
         by a deadlock on ioapic_lock in crash_kexec()->machine_crash_shutdown()->
         native_machine_crash_shutdown()->disable_IO_APIC()->clear_IO_APIC()->
         clear_IO_APIC_pin()->ioapic_read_entry().
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/20130820070107.28245.83806.stgit@yunodevel
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      17405453
  12. 21 Jun, 2013 1 commit
    • x86, trace: Introduce entering/exiting_irq() · eddc0e92
      Authored by Seiji Aguchi
      When implementing tracepoints in interrupt handlers, simply adding them
      on the performance-sensitive path of the handlers may cause a
      performance problem due to the time penalty.
      
      To solve the problem, the idea is to prepare non-trace and trace irq
      handlers and switch between their IDTs when tracing is enabled/disabled.
      
      So, let's introduce entering_irq()/exiting_irq() for pre/post-
      processing of each irq handler.
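
      A sketch of the helpers themselves, modeled on what this commit adds
      to arch/x86/include/asm/apic.h (bodies recalled from the kernel
      source; treat them as approximate):

      static inline void entering_irq(void)
      {
      	irq_enter();		/* RCU/irq-count bookkeeping first */
      	exit_idle();
      }

      static inline void exiting_irq(void)
      {
      	irq_exit();
      }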
      
      A way to use them is as follows.
      
      Non-trace irq handler:
      smp_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	exiting_irq();		/* post-processing of this handler */
      }
      
      Trace irq handler:
      smp_trace_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	trace_irq_entry();	/* tracepoint for irq entry */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	trace_irq_exit();	/* tracepoint for irq exit */
      	exiting_irq();		/* post-processing of this handler */
      }
      
      If the tracepoints could be placed outside entering_irq()/exiting_irq()
      as follows, it would look cleaner.
      
      smp_trace_irq_handler()
      {
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      }
      
      But it doesn't work.
      The problem is the placement of irq_enter/exit(): they must be called
      before trace_irq_entry/exit(), because rcu_irq_enter() must be called
      before any tracepoint is used, as tracepoints use RCU to synchronize.
      
      As a possible alternative, we may be able to call irq_enter() first as follows
      if irq_enter() can nest.
      
      smp_trace_irq_handler()
      {
      	irq_enter();
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      	irq_exit();
      }
      
      But it doesn't work, either.
      If irq_enter() is nested, it incurs a time penalty because it has to
      check whether it was already called. That penalty is not desired in
      performance-sensitive paths, even if it is tiny.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C3238D.9040706@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      eddc0e92
  13. 06 Oct, 2012 1 commit
  14. 16 Jul, 2012 2 commits
  15. 06 Jul, 2012 2 commits
  16. 04 Jul, 2012 1 commit
  17. 14 Jun, 2012 2 commits
  18. 08 Jun, 2012 2 commits
    • x86/apic: Make cpu_mask_to_apicid() operations check cpu_online_mask · 4988a40c
      Authored by Alexander Gordeev
      Currently cpu_mask_to_apicid() must not be passed an offline CPU in
      the cpumask, or some APIC drivers might try to access non-existent
      per-CPU variables (e.g. x2apic). In that regard the
      cpu_mask_to_apicid() and cpu_mask_to_apicid_and() operations are
      inconsistent.

      This fix makes the two operations independent of such caller-side
      guarantees: they now always return the apicid for online CPUs only.
      As a result, the meaning and implementations of cpu_mask_to_apicid()
      and cpu_mask_to_apicid_and() become consistent.
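
      A sketch of the resulting behaviour, assuming the error-code-returning
      signature from the companion commit below (the helper name is
      illustrative; the per-cpu lookup is modeled on arch/x86/kernel/apic of
      that era, not quoted from the exact diff):

      static int sketch_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
      					 const struct cpumask *andmask,
      					 unsigned int *apicid)
      {
      	unsigned int cpu;

      	/* Consider only CPUs present in both masks AND online. */
      	for_each_cpu_and(cpu, cpumask, andmask) {
      		if (cpumask_test_cpu(cpu, cpu_online_mask))
      			break;
      	}

      	if (cpu < nr_cpu_ids) {
      		*apicid = per_cpu(x86_cpu_to_apicid, cpu);
      		return 0;
      	}
      	return -EINVAL;	/* no online CPU in the requested mask */
      }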
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/20120607131624.GG4759@dhcp-26-207.brq.redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4988a40c
    • x86/apic: Make cpu_mask_to_apicid() operations return error code · ff164324
      Authored by Alexander Gordeev
      The current cpu_mask_to_apicid() and cpu_mask_to_apicid_and()
      implementations have a few shortcomings:
      
      1. The value returned by cpu_mask_to_apicid() is written to
      hardware registers unconditionally. Should BAD_APICID ever get
      returned, it will be written to the hardware too. But the value of
      BAD_APICID is not universal across all hardware in all modes and
      might cause unexpected results, e.g. interrupts might get routed
      to CPUs that are not configured to receive them.
      
      2. Because the value of BAD_APICID is not universal, it is
      counter-intuitive to return it for hardware where it does not
      make sense (e.g. x2apic).
      
      3. The cpu_mask_to_apicid_and() operation is thought of as a
      complement to cpu_mask_to_apicid() that merely applies an AND mask
      on top of the cpumask being passed. Yet, as a consequence of commit
      18374d89, the two operations are inconsistent in that:
        cpu_mask_to_apicid() must not be passed an offline CPU in the cpumask
        cpu_mask_to_apicid_and() must not fail and return BAD_APICID
      These limitations are impossible to infer just from looking at
      the operations' prototypes.
      
      Most of these shortcomings are resolved by returning an error
      code instead of BAD_APICID. As a result, faults are reported back
      early, rather than leaving room for unexpected behaviour
      (as in case 1 above).
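
      A caller-side sketch of why this matters (function and call-site names
      are illustrative, not the exact upstream code): the destination is
      programmed only on success, so a bad mask is reported early instead of
      BAD_APICID ever reaching the hardware.

      static int sketch_program_destination(const struct cpumask *mask)
      {
      	unsigned int dest;
      	int err;

      	err = apic->cpu_mask_to_apicid_and(mask, cpu_online_mask, &dest);
      	if (err)
      		return err;	/* fault reported early, hardware untouched */

      	/* ... only now would 'dest' be written into a routing entry ... */
      	return 0;
      }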
      
      The only exception is the setup_timer_IRQ0_pin() routine. Although
      it obviously runs counter to this fix, its existing behaviour is
      preserved so as not to break the fragile check_timer(); it would be
      better addressed in a separate fix.
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/20120607131559.GF4759@dhcp-26-207.brq.redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ff164324