1. 29 Aug 2017: 21 commits
  2. 24 Aug 2017: 1 commit
  3. 23 Aug 2017: 1 commit
  4. 18 Aug 2017: 1 commit
  5. 17 Aug 2017: 1 commit
  6. 15 Aug 2017: 1 commit
  7. 11 Aug 2017: 1 commit
  8. 10 Aug 2017: 4 commits
    • x86: Clarify/fix no-op barriers for text_poke_bp() · 01651324
      Committed by Peter Zijlstra
      So I was looking at text_poke_bp() today and I couldn't make sense of
      the barriers there.
      
      How about something like this?
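      As a rough sketch of the sequence whose ordering is at issue (the
      names follow arch/x86/kernel/alternative.c, but this is an
      illustration, not the patch itself):

          static bool bp_patching_in_progress;

          static void poke_int3_sketch(void *addr, const void *opcode,
                                       size_t len)
          {
                  unsigned char int3 = 0xcc;

                  bp_patching_in_progress = true;
                  /*
                   * The int3 handler must observe the flag before it
                   * can observe the int3 byte; the pairing read
                   * barrier is in poke_int3_handler().
                   */
                  smp_wmb();

                  text_poke(addr, &int3, 1);  /* 1: install int3 */
                  on_each_cpu(do_sync_core, NULL, 1);

                  if (len > 1) {
                          /* 2: patch everything behind the first byte */
                          text_poke(addr + 1, opcode + 1, len - 1);
                          on_each_cpu(do_sync_core, NULL, 1);
                  }

                  /* 3: replace the int3 with the final first byte */
                  text_poke(addr, opcode, 1);
                  on_each_cpu(do_sync_core, NULL, 1);

                  bp_patching_in_progress = false;
          }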
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Acked-by: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: masami.hiramatsu.pt@hitachi.com
      Link: http://lkml.kernel.org/r/20170731102154.f57cvkjtnbmtctk6@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/switch_to/64: Rewrite FS/GS switching yet again to fix AMD CPUs · e137a4d8
      Committed by Andy Lutomirski
      Switching FS and GS is a mess, and the current code is still subtly
      wrong: it assumes that "Loading a nonzero value into FS sets the
      index and base", which is false on AMD CPUs if the value being
      loaded is 1, 2, or 3.
      
      (The current code came from commit 3e2b68d7 ("x86/asm,
      sched/x86: Rewrite the FS and GS context switch code"), which made
      it better but didn't fully fix it.)
      
      Rewrite it to be much simpler and more obviously correct.  This
      should fix it fully on AMD CPUs and shouldn't adversely affect
      performance.
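      A minimal sketch of the resulting rule (the helper name and exact
      checks are illustrative, not the patch itself):

          /*
           * Selectors 1-3 have index 0 and don't reference a real
           * descriptor, and on AMD CPUs loading them (or a null
           * selector) does not reset the cached base, so the base
           * must be written explicitly in that case.
           */
          static void sketch_load_fs(unsigned short sel,
                                     unsigned long base)
          {
                  if (sel > 3) {
                          /* Real descriptor: provides index and base. */
                          loadsegment(fs, sel);
                  } else {
                          /* Load the selector, then force the base. */
                          loadsegment(fs, sel);
                          wrmsrl(MSR_FS_BASE, base);
                  }
          }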
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang Seok <chang.seok.bae@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fsgsbase/64: Fully initialize FS and GS state in start_thread_common · 767d035d
      Committed by Andy Lutomirski
      execve used to leak FSBASE and GSBASE on AMD CPUs.  Fix it.
      
      The security impact of this bug is small but not quite zero -- it
      could weaken ASLR when a privileged task execs a less privileged
      program, but only if the program changed bitness across the exec,
      or the child binary was highly unusual or actively malicious. A
      child program that was compromised after the exec would not have
      access to the leaked base.
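      The shape of the fix, as a sketch (field and helper usage is
      illustrative):

          /*
           * On exec, reset the selectors *and* force the bases to
           * zero, instead of relying on a null selector load to
           * clear them (which AMD CPUs do not do).
           */
          current->thread.fsindex = 0;
          current->thread.fsbase  = 0;
          current->thread.gsindex = 0;
          current->thread.gsbase  = 0;
          loadsegment(fs, 0);
          wrmsrl(MSR_FS_BASE, 0);
          load_gs_index(0);
          wrmsrl(MSR_KERNEL_GS_BASE, 0);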
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang Seok <chang.seok.bae@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/smpboot: Unbreak CPU0 hotplug · 10e66760
      Committed by Vitaly Kuznetsov
      A hang is observed when CPU0 is onlined after a preceding
      offlining. The trace shows that CPU0 is stuck in
      check_tsc_sync_target(), waiting for the source CPU to run
      check_tsc_sync_source(), but this never happens. The source CPU,
      in turn, is stuck on synchronize_sched(), which is called from
      native_cpu_up() -> do_boot_cpu() -> unregister_nmi_handler().

      So it's a classic ABBA deadlock, caused by the use of
      synchronize_sched() in unregister_nmi_handler().

      Fix the bug by moving unregister_nmi_handler() from do_boot_cpu()
      to native_cpu_up(), after CPU onlining is done.
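      In sketch form (the NMI handler name is an assumption for
      illustration):

          int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
          {
                  int apicid = apic->cpu_present_to_apicid(cpu);
                  int err = do_boot_cpu(apicid, cpu, tidle);
                          /* no longer unregisters the handler itself */

                  if (!err) {
                          /* Let onlining, including the TSC sync
                             rendezvous, finish first. */
                          while (!cpu_online(cpu))
                                  cpu_relax();
                          /* synchronize_sched() in here can no longer
                             block the TSC-sync rendezvous. */
                          unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
                  }
                  return err;
          }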
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170803105818.9934-1-vkuznets@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 02 Aug 2017: 1 commit
    • KVM: async_pf: make rcu irq exit if not triggered from idle task · 337c017c
      Committed by Wanpeng Li
       WARNING: CPU: 5 PID: 1242 at kernel/rcu/tree_plugin.h:323 rcu_note_context_switch+0x207/0x6b0
       CPU: 5 PID: 1242 Comm: unity-settings- Not tainted 4.13.0-rc2+ #1
       RIP: 0010:rcu_note_context_switch+0x207/0x6b0
       Call Trace:
        __schedule+0xda/0xba0
        ? kvm_async_pf_task_wait+0x1b2/0x270
        schedule+0x40/0x90
        kvm_async_pf_task_wait+0x1cc/0x270
        ? prepare_to_swait+0x22/0x70
        do_async_page_fault+0x77/0xb0
        ? do_async_page_fault+0x77/0xb0
        async_page_fault+0x28/0x30
       RIP: 0010:__d_lookup_rcu+0x90/0x1e0
      
      I encountered this when trying to stress the async page fault path
      in an L1 guest with L2 guests running.
      
      Commit 9b132fbe (Add rcu user eqs exception hooks for async page
      fault) added rcu_irq_enter/exit() to kvm_async_pf_task_wait() to
      exit the CPU idle extended quiescent state when needed, in order
      to protect code that needs to use RCU. However, as the backtrace
      above shows, the pair must also bracket the path where the
      function calls schedule().

      This patch fixes it by informing the RCU subsystem of the irq
      exit/entry towards/away from idle in both the n.halted and
      !n.halted cases.
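      In sketch form, the relevant part of kvm_async_pf_task_wait()
      becomes (simplified control flow):

          if (!n.halted) {
                  local_irq_enable();
                  rcu_irq_exit();   /* irq "ends" for RCU, so ...   */
                  schedule();       /* ... the scheduler is safe    */
                  rcu_irq_enter();  /* back "in irq" once resumed   */
                  local_irq_disable();
          }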
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
  10. 01 Aug 2017: 1 commit
    • x86/hpet: Cure interface abuse in the resume path · bb68cfe2
      Committed by Thomas Gleixner
      The HPET resume path abuses irq_domain_[de]activate_irq() to
      restore the MSI message in the HPET chip for the boot CPU on
      resume, and it relies on an implementation detail of the interrupt
      core code which magically makes the HPET unmask call happen via an
      irq_disable()/irq_enable() pair. This worked as long as the irq
      code unconditionally invoked the unmask() callback. With the
      recent changes which keep track of the masked state to avoid
      expensive hardware access, this no longer works. As a consequence
      the HPET timer interrupts are not unmasked, which breaks resume
      because the boot CPU waits forever for a timer interrupt to
      arrive.

      Make the restore of the MSI message explicit and invoke the
      unmask() function directly. While at it, get rid of the pointless
      affinity setting, as nothing can change the affinity of the
      interrupt and the vector across suspend/resume. The restore of the
      MSI message reestablishes the previous affinity setting, which is
      the correct one.
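      A sketch of the explicit restore (the helper names echo the
      existing HPET/MSI code, but treat the details as illustrative):

          static void hpet_msi_restore_sketch(struct hpet_dev *hdev)
          {
                  struct irq_data *data = irq_get_irq_data(hdev->irq);
                  struct msi_msg msg;

                  /* Recompose the routing for the unchanged vector. */
                  irq_chip_compose_msi_msg(data, &msg);
                  hpet_msi_write(hdev, &msg);

                  /* Unmask directly; no irq core side effects needed. */
                  hpet_msi_unmask(data);
          }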
      
      Fixes: bf22ff45 ("genirq: Avoid unnecessary low level irq function calls")
      Reported-and-tested-by: Tomi Sarvela <tomi.p.sarvela@intel.com>
      Reported-by: Martin Peres <martin.peres@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: jeffy.chen@rock-chips.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1707312158590.2287@nanos
  11. 30 Jul 2017: 3 commits
    • cpufreq: x86: Make scaling_cur_freq behave more as expected · 4815d3c5
      Committed by Rafael J. Wysocki
      After commit f8475cef "x86: use common aperfmperf_khz_on_cpu() to
      calculate KHz using APERF/MPERF", the scaling_cur_freq policy
      attribute in sysfs only behaves as expected on x86 with
      APERF/MPERF registers available when it is read at least twice in
      a row. The value returned by the first read may not be
      meaningful, because the computation uses cached values from the
      previous iteration of aperfmperf_snapshot_khz(), which may be
      stale.

      To prevent that from happening, modify arch_freq_get_on_cpu() to
      call aperfmperf_snapshot_khz() twice, with a short delay between
      the calls, if the previous invocation of aperfmperf_snapshot_khz()
      was too far back in the past (specifically, more than 1s ago).

      Also, as pointed out by Doug Smythies, aperf_delta is limited now
      and its multiplication by cpu_khz won't overflow, so simplify the
      s->khz computations too.
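      The reworked read path, in sketch form (the delay constant is
      illustrative):

          unsigned int arch_freq_get_on_cpu(int cpu)
          {
                  unsigned int khz;

                  if (!cpu_khz || !static_cpu_has(X86_FEATURE_APERFMPERF))
                          return 0;

                  smp_call_function_single(cpu, aperfmperf_snapshot_khz,
                                           NULL, 1);
                  khz = per_cpu(samples.khz, cpu);
                  if (khz)  /* previous snapshot was recent enough */
                          return khz;

                  /* Stale cache: wait briefly and snapshot again so the
                     APERF/MPERF deltas cover a meaningful window. */
                  msleep(APERFMPERF_REFRESH_DELAY_MS);
                  smp_call_function_single(cpu, aperfmperf_snapshot_khz,
                                           NULL, 1);

                  return per_cpu(samples.khz, cpu);
          }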
      
      Fixes: f8475cef "x86: use common aperfmperf_khz_on_cpu() to calculate KHz using APERF/MPERF"
      Reported-by: Doug Smythies <dsmythies@telus.net>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • x86/asm/32: Remove a bunch of '& 0xffff' from pt_regs segment reads · 99504819
      Committed by Andy Lutomirski
      Now that pt_regs properly defines segment fields as 16-bit on 32-bit
      CPUs, there's no need to mask off the high word.
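      As an illustration (not a hunk from the patch):

          /* Before: segment fields were unsigned long, so readers
             masked off the undefined high bits: */
          unsigned int cs = regs->cs & 0xffff;

          /* After: regs->cs is a 16-bit field, the mask is a no-op: */
          unsigned int cs = regs->cs;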
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/traps: Don't clear segment high bits in early_idt_handler_common() · 630c1863
      Committed by Andy Lutomirski
      Now that pt_regs defines the segment fields as 16-bit, there's no
      need to sanitize the values.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  12. 27 Jul 2017: 2 commits
    • x86/ldt/64: Refresh DS and ES when modify_ldt changes an entry · a6323757
      Committed by Andy Lutomirski
      On x86_32, modify_ldt() implicitly refreshes the cached DS and ES
      segments because they are refreshed on return to usermode.
      
      On x86_64, they're not refreshed on return to usermode.  To improve
      determinism and match x86_32's behavior, refresh them when we update
      the LDT.
      
      This avoids a situation in which DS points to a descriptor that
      has been changed, but the old cached segment persists until the
      next reschedule. If that happens, the user-visible state changes
      nondeterministically some time after modify_ldt() returns, which
      is unfortunate.
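      The refresh, in sketch form (the helper shape follows the
      description above; treat details as illustrative):

          static void refresh_ldt_segments(void)
          {
                  unsigned short sel;

                  /* DS and ES only need a reload if they currently
                     point into the LDT. */
                  savesegment(ds, sel);
                  if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
                          loadsegment(ds, sel);

                  savesegment(es, sel);
                  if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT)
                          loadsegment(es, sel);
          }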
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Chang Seok <chang.seok.bae@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86: irq: Define a global vector for nested posted interrupts · 210f84b0
      Committed by Wincy Van
      We are using the same vector for both nested and non-nested posted
      interrupt delivery. This can cause interrupt latency in L1, since
      we can't kick the L2 vCPU out of VMX non-root mode.

      This patch introduces a new vector which is used only for nested
      posted interrupts, solving the problem above.
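      In sketch form (the vector number and the handler/stat names are
      assumptions for illustration):

          #define POSTED_INTR_NESTED_VECTOR  0xf0  /* illustrative */

          __visible void smp_kvm_posted_intr_nested_ipi(struct pt_regs *regs)
          {
                  /* ACK and account only: the pending posted-interrupt
                     descriptor is processed on the next VM entry. */
                  ipi_entering_ack_irq();
                  inc_irq_stat(kvm_posted_intr_nested_ipis);
                  exiting_irq();
          }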
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  13. 26 Jul 2017: 1 commit
    • x86/unwind: Add the ORC unwinder · ee9f8fce
      Committed by Josh Poimboeuf
      Add the new ORC unwinder which is enabled by CONFIG_ORC_UNWINDER=y.
      It plugs into the existing x86 unwinder framework.
      
      It relies on objtool to generate the needed .orc_unwind and
      .orc_unwind_ip sections.
      
      For more details on why ORC is used instead of DWARF, see
      Documentation/x86/orc-unwinder.txt - but the short version is
      that it's a simplified, fundamentally more robust debuginfo
      data structure, which also allows up to two orders of magnitude
      faster lookups than the DWARF unwinder - which matters for
      profiling workloads like perf.
      
      Thanks to Andy Lutomirski for the performance improvement ideas:
      splitting the ORC unwind table into two parallel arrays and creating a
      fast lookup table to search a subset of the unwind table.
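      The lookup scheme, in sketch form (array and constant names follow
      the design described above; treat the details as illustrative):

          extern int orc_unwind_ip[];           /* sorted IP table      */
          extern struct orc_entry orc_unwind[]; /* parallel unwind data */
          extern unsigned int orc_lookup[];     /* block -> start index */

          static struct orc_entry *orc_find_sketch(unsigned long ip)
          {
                  /* The fast table narrows the binary search to a
                     small window of the dense IP array. */
                  unsigned int block = (ip - LOOKUP_START_IP) /
                                       LOOKUP_BLOCK_SIZE;
                  unsigned int lo = orc_lookup[block];
                  unsigned int hi = orc_lookup[block + 1];

                  while (lo < hi) {
                          unsigned int mid = lo + (hi - lo) / 2;
                          /* The real table stores self-relative
                             offsets; absolute IPs keep this simple. */
                          if ((unsigned long)orc_unwind_ip[mid] <= ip)
                                  lo = mid + 1;
                          else
                                  hi = mid;
                  }
                  /* Index the parallel data array with the result. */
                  return lo ? &orc_unwind[lo - 1] : NULL;
          }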
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/0a6cbfb40f8da99b7a45a1a8302dc6aef16ec812.1500938583.git.jpoimboe@redhat.com
      [ Extended the changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  14. 25 Jul 2017: 1 commit