1. 25 Jan 2017, 2 commits
  2. 24 Jan 2017, 1 commit
    • x86/fpu/xstate: Fix xcomp_bv in XSAVES header · dffba9a3
      Authored by Yu-cheng Yu
      The compacted-format XSAVES area is determined at boot time and
      never changed after.  The field xsave.header.xcomp_bv indicates
      which components are in the fixed XSAVES format.
      
      In fpstate_init() we did not set xcomp_bv to reflect the XSAVES
      format, since at that point there is no valid data.
      
      However, after we do copy_init_fpstate_to_fpregs() in fpu__clear(),
      as introduced by commit:
      
        b22cbe40 x86/fpu: Fix invalid FPU ptrace state after execve()
      
      and when __fpu_restore_sig() does fpu__restore() for a COMPAT-mode
      app, a #GP occurs.  This can be easily triggered by doing valgrind on
      a COMPAT-mode "Hello World," as reported by Joakim Tjernlund and
      others:
      
      	https://bugzilla.kernel.org/show_bug.cgi?id=190061
      
      Fix it by setting xcomp_bv correctly.
      
      This patch also moves the xcomp_bv initialization to the proper
      place, which was in copyin_to_xsaves() as of:
      
        4c833368 x86/fpu: Set the xcomp_bv when we fake up a XSAVES area
      
      which fixed the bug too, but it's more efficient and cleaner to
      initialize things once per boot, not for every signal handling
      operation.
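      
      A hedged sketch of that one-time initialization (field and mask names
      follow the upstream xstate code; this is a sketch, not the verbatim
      patch):
      
        /*
         * Sketch: set xcomp_bv once, when the fpstate is initialized, so
         * XRSTORS never sees a zero xcomp_bv.  XRSTORS requires the
         * compacted-format bit (bit 63) plus the boot-time feature mask,
         * otherwise it raises #GP.
         */
        static void fpstate_init_xstate(struct xregs_state *xsave)
        {
                xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT |
                                         xfeatures_mask;
        }
      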
      Reported-by: Kevin Hao <haokexin@gmail.com>
      Reported-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: haokexin@gmail.com
      Link: http://lkml.kernel.org/r/1485212084-4418-1-git-send-email-yu-cheng.yu@intel.com
      [ Combined it with 4c833368. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 23 Jan 2017, 2 commits
    • x86/fpu: Set the xcomp_bv when we fake up a XSAVES area · 4c833368
      Authored by Kevin Hao
      I got the following calltrace on an Apollo Lake SoC with a 32-bit kernel:
      
        WARNING: CPU: 2 PID: 261 at arch/x86/include/asm/fpu/internal.h:363 fpu__restore+0x1f5/0x260
        [...]
        Hardware name: Intel Corp. Broxton P/NOTEBOOK, BIOS APLIRVPA.X64.0138.B35.1608091058 08/09/2016
        Call Trace:
         dump_stack()
         __warn()
         ? fpu__restore()
         warn_slowpath_null()
         fpu__restore()
         __fpu__restore_sig()
         fpu__restore_sig()
         restore_sigcontext.isra.9()
         sys_sigreturn()
         do_int80_syscall_32()
         entry_INT80_32()
      
      The reason is that a #GP occurs when executing XRSTORS. The root cause
      is that we forget to set the xcomp_bv when we fake up the XSAVES area
      in the copyin_to_xsaves() function.
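      
      In copyin_to_xsaves() terms, the missing step was roughly the
      following (a hedged sketch, not the verbatim hunk):
      
        /*
         * Sketch: when faking up a compacted XSAVES area from user data,
         * xcomp_bv must describe the compacted layout, or the subsequent
         * XRSTORS will raise #GP.
         */
        xsave->header.xfeatures |= xfeatures;
        xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | xfeatures_mask;
      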
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1485075023-30161-1-git-send-email-haokexin@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/microcode/intel: Drop stashed AP patch pointer optimization · c26665ab
      Authored by Borislav Petkov
      This was meant to save us the scanning of the microcode container in
      the initrd, since the first AP had already done that, but it can also
      hurt us:
      
      Imagine a single hyperthreaded CPU (an Intel(R) Atom(TM) CPU N270, for
      example) which updates the microcode on the BSP. Since the microcode
      engine is shared between the two threads, the update doesn't happen on
      CPU1 because it has already happened on CPU0, and we don't find a newer
      microcode revision on CPU1.
      
      As a result, the intel_ucode_patch pointer doesn't get set, and at
      initrd-jettisoning time we don't save the microcode patch for later
      application.
      
      Now, when we suspend to RAM, the loaded microcode gets cleared so we
      need to reload but there's no patch saved in the cache.
      
      Removing the optimization fixes this issue and all is fine and dandy.
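      
      The dropped short-circuit had roughly the following shape (a hedged
      sketch; the helper name here is illustrative, not the exact one in the
      loader):
      
        /*
         * Sketch: only scan the initrd container when no earlier AP has
         * stashed a patch.  On a part with a shared microcode engine the
         * stash can legitimately stay NULL, which is the failure mode
         * described above.
         */
        if (!intel_ucode_patch)
                intel_ucode_patch = find_patch_in_initrd(); /* illustrative */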
      
      Fixes: 06b8534c ("x86/microcode: Rework microcode loading")
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170120202955.4091-2-bp@alien8.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  4. 20 Jan 2017, 5 commits
  5. 19 Jan 2017, 12 commits
  6. 18 Jan 2017, 10 commits
  7. 17 Jan 2017, 6 commits
    • KVM: x86: fix fixing of hypercalls · ce2e852e
      Authored by Dmitry Vyukov
      emulator_fix_hypercall() replaces the hypercall with the VMCALL
      instruction, but it does not handle the #GP exception properly when
      writing the new instruction. It can return X86EMUL_PROPAGATE_FAULT
      without setting exception information. This leads to incorrect
      emulation and triggers the WARN_ON(ctxt->exception.vector > 0x1f)
      in x86_emulate_insn(), as discovered by the syzkaller fuzzer:
      
      WARNING: CPU: 2 PID: 18646 at arch/x86/kvm/emulate.c:5558
      Call Trace:
       warn_slowpath_null+0x2c/0x40 kernel/panic.c:582
       x86_emulate_insn+0x16a5/0x4090 arch/x86/kvm/emulate.c:5572
       x86_emulate_instruction+0x403/0x1cc0 arch/x86/kvm/x86.c:5618
       emulate_instruction arch/x86/include/asm/kvm_host.h:1127 [inline]
       handle_exception+0x594/0xfd0 arch/x86/kvm/vmx.c:5762
       vmx_handle_exit+0x2b7/0x38b0 arch/x86/kvm/vmx.c:8625
       vcpu_enter_guest arch/x86/kvm/x86.c:6888 [inline]
       vcpu_run arch/x86/kvm/x86.c:6947 [inline]
      
      Set the exception information when the write in emulator_fix_hypercall() fails.
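      
      A hedged sketch of the resulting flow (simplified; the exact helper
      signatures in arch/x86/kvm/x86.c may differ):
      
        static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
        {
                struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
                char instruction[3];
                unsigned long rip = kvm_rip_read(vcpu);
      
                kvm_x86_ops->patch_hypercall(vcpu, instruction);
      
                /*
                 * Let the emulated write fill ctxt->exception on failure,
                 * so a propagated fault always carries valid exception
                 * information.
                 */
                return emulator_write_emulated(ctxt, rip, instruction, 3,
                                               &ctxt->exception);
        }
      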
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: kvm@vger.kernel.org
      Cc: syzkaller@googlegroups.com
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • arm64: Fix swiotlb fallback allocation · 524dabe1
      Authored by Alexander Graf
      Commit b67a8b29 introduced logic to skip swiotlb allocation when all memory
      is DMA accessible anyway.
      
      While this is a great idea, __dma_alloc() still calls the swiotlb code
      unconditionally to allocate memory when there is no CMA memory
      available. The swiotlb code is called to ensure that we at least try
      get_free_pages().
      
      Without initialization, the swiotlb allocation code tries to access
      io_tlb_list, which is NULL. That results in a stack trace like this:
      
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        [...]
        [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
        [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
        [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
        [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
        [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
        [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
        [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
        [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
        [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
        [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
        [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
        [...]
      
      Thankfully the swiotlb code just learned how not to do allocations via
      the SWIOTLB_NO_FORCE option. This patch configures the swiotlb code to
      use that if we decide not to initialize the swiotlb framework.
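      
      A hedged sketch of that decision point in the arm64 init code (flag
      names follow the generic swiotlb code):
      
        /*
         * Sketch: if we skip swiotlb_init() because all of RAM is DMA
         * accessible, tell the swiotlb core never to fall back to its
         * (uninitialized) bounce buffers.
         */
        if (swiotlb_force == SWIOTLB_FORCE ||
            max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
                swiotlb_init(1);
        else
                swiotlb_force = SWIOTLB_NO_FORCE;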
      
      Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
      Signed-off-by: Alexander Graf <agraf@suse.de>
      CC: Jisheng Zhang <jszhang@marvell.com>
      CC: Geert Uytterhoeven <geert+renesas@glider.be>
      CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • perf/x86/intel: Handle exclusive threadid correctly on CPU hotplug · 4e71de79
      Authored by Zhou Chengming
      The CPU hotplug function intel_pmu_cpu_starting() sets
      cpu_hw_events.excl_thread_id unconditionally to 1 when the shared
      exclusive counters data structure is already available for the sibling
      thread.
      
      This works during the boot process because the first sibling gets threadid
      0 assigned and the second sibling which shares the data structure gets 1.
      
      But when the first thread of the core is offlined and onlined again it
      shares the data structure with the second thread and gets exclusive thread
      id 1 assigned as well.
      
      Prevent this by checking the threadid of the already online thread.
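      
      A hedged sketch of that check (simplified from the sibling walk in
      intel_pmu_cpu_starting(); lookup details omitted):
      
        /*
         * Sketch: only claim exclusive thread id 1 if the already-online
         * sibling that owns the shared excl_cntrs structure still has
         * thread id 0.
         */
        if (c && c->core_id == core_id) {
                cpuc->excl_cntrs = c;
                if (!sibling->excl_thread_id)
                        cpuc->excl_thread_id = 1;
        }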
      
      [ tglx: Rewrote changelog ]
      Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
      Cc: NuoHan Qiao <qiaonuohan@huawei.com>
      Cc: ak@linux.intel.com
      Cc: peterz@infradead.org
      Cc: kan.liang@intel.com
      Cc: dave.hansen@linux.intel.com
      Cc: eranian@google.com
      Cc: qiaonuohan@huawei.com
      Cc: davidcc@google.com
      Cc: guohanjun@huawei.com
      Link: http://lkml.kernel.org/r/1484536871-3131-1-git-send-email-zhouchengming1@huawei.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • powerpc/icp-opal: Fix missing KVM case and harden replay · 9728a7c8
      Authored by Benjamin Herrenschmidt
      The icp-opal backend is missing the code from icp-native to recover
      interrupts snatched by KVM. Without that, when running KVM, we can
      get into a situation where an interrupt is lost and the CPU is stuck
      with an elevated CPPR.
      
      Also harden replay by always checking the return from opal_int_eoi().
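      
      The hardened replay path looks roughly like this (a hedged sketch;
      the replay helper name is an assumption based on the description):
      
        /*
         * Sketch: never ignore the EOI return code -- a positive return
         * means another interrupt is pending and must be replayed rather
         * than silently dropped.
         */
        rc = opal_int_eoi(xirr);
        if (rc > 0)
                force_external_irq_replay(); /* assumed helper */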
      
      Fixes: d7436188 ("powerpc/xics: Add ICP OPAL backend")
      Cc: stable@vger.kernel.org # v4.8+
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm: Fix memory hotplug BUG() on radix · 32b53c01
      Authored by Reza Arbab
      Memory hotplug is leading to hash page table calls, even on radix:
      
        arch_add_memory
          create_section_mapping
            htab_bolt_mapping
              BUG_ON(!ppc_md.hpte_insert);
      
      To fix, refactor {create,remove}_section_mapping() into hash__ and
      radix__ variants. Leave the radix versions stubbed for now.
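      
      The described split, as a hedged sketch (the radix variants are left
      as stubs for now):
      
        int create_section_mapping(unsigned long start, unsigned long end)
        {
                /* Dispatch on the active MMU model instead of assuming hash. */
                if (radix_enabled())
                        return radix__create_section_mapping(start, end);
      
                return hash__create_section_mapping(start, end);
        }
      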
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • ARM: 8613/1: Fix the uaccess crash on PB11MPCore · 90f92c63
      Authored by Linus Walleij
      The following patch was sketched by Russell in response to my
      crashes on the PB11MPCore after the patch for software-based
      privileged-no-access support for ARMv8.1. See this thread:
      http://marc.info/?l=linux-arm-kernel&m=144051749807214&w=2
      
      I am unsure what is going on; I suspect everyone involved in
      the discussion is too. I just want to repost this to get the
      discussion restarted, as I still have to apply this patch
      with every kernel iteration to get my RealView PB11MPCore
      running.
      
      Testing by Neil Armstrong on the Oxnas NAS has revealed that
      this bug also exists on that widely deployed hardware, so
      we are probably currently regressing all ARM11MPCore systems.
      
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: a5e090ac ("ARM: software-based priviledged-no-access support")
      Tested-by: Neil Armstrong <narmstrong@baylibre.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  8. 16 Jan 2017, 2 commits