1. 04 Feb 2017, 2 commits
    • KVM: x86: do not save guest-unsupported XSAVE state · 00c87e9a
      Radim Krčmář authored
      Saving unsupported state prevents migration when the new host does not
      support an XSAVE feature of the original host, even if the feature is not
      exposed to the guest.
      
      We've masked host features with guest-visible features before, with
      4344ee98 ("KVM: x86: only copy XSAVE state for the supported
      features") and dropped it when implementing XSAVES.  Do it again.
      
      Fixes: df1daba7 ("KVM: x86: support XSAVES usage in the host")
      Cc: stable@vger.kernel.org
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • modversions: treat symbol CRCs as 32 bit quantities · 71810db2
      Ard Biesheuvel authored
      The modversion symbol CRCs are emitted as ELF symbols, which allows us
      to easily populate the kcrctab sections by relying on the linker to
      associate each kcrctab slot with the correct value.
      
      This has a couple of downsides:
      
       - Given that the CRCs are treated as memory addresses, we waste 4 bytes
         for each CRC on 64 bit architectures,
      
       - On architectures that support runtime relocation, a R_<arch>_RELATIVE
         relocation entry is emitted for each CRC value, which identifies it
         as a quantity that requires fixing up based on the actual runtime
         load offset of the kernel. This results in corrupted CRCs unless we
         explicitly undo the fixup (and this is currently being handled in the
         core module code)
      
       - Such runtime relocation entries take up 24 bytes of __init space
         each, resulting in an 8x overhead in [uncompressed] kernel size for
         CRCs.
      
      Switching to explicit 32 bit values on 64 bit architectures fixes most
      of these issues, given that 32 bit values are not treated as quantities
      that require fixing up based on the actual runtime load offset.  Note
      that on some ELF64 architectures [such as PPC64], these 32-bit values
      are still emitted as [absolute] runtime relocatable quantities, even if
      the value resolves to a build time constant.  Since relative relocations
      are always resolved at build time, this patch enables MODULE_REL_CRCS on
      powerpc when CONFIG_RELOCATABLE=y, which turns the absolute CRC
      references into relative references into .rodata where the actual CRC
      value is stored.
      
      So redefine all CRC fields and variables as u32, and redefine the
      __CRC_SYMBOL() macro for 64 bit builds to emit the CRC reference using
      inline assembler (which is necessary since 64-bit C code cannot use
      32-bit types to hold memory addresses, even if they are ultimately
      resolved using values that do not exceed 0xffffffff).  To avoid
      potential problems with legacy 32-bit architectures using legacy
      toolchains, the equivalent C definition of the kcrctab entry is retained
      for 32-bit architectures.
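
      A sketch of the 64-bit variant described above (section and symbol
      naming approximate, not the exact upstream macro):

      	#define __CRC_SYMBOL(sym, sec)                                    \
      		asm("	.section \"___kcrctab" sec "+" #sym "\", \"a\"\n" \
      		    "	.weak	__crc_" #sym "\n"                         \
      		    "	.long	__crc_" #sym "\n"	/* 32-bit slot */ \
      		    "	.previous\n");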
      
      Note that this mostly reverts commit d4703aef ("module: handle ppc64
      relocating kcrctabs when CONFIG_RELOCATABLE=y")
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 01 Feb 2017, 6 commits
    • perf/x86/intel/uncore: Make package handling more robust · fff4b87e
      Thomas Gleixner authored
      The package management code in uncore relies on package mapping being
      available before a CPU is started. This changed with:
      
        9d85eb91 ("x86/smpboot: Make logical package management more robust")
      
      because the ACPI/BIOS information turned out to be unreliable, but that
      left uncore in a broken state. This was not noticed because on a regular boot
      all CPUs are online before uncore is initialized.
      
      Move the allocation to the CPU online callback and simplify the hotplug
      handling. At this point the package mapping is established and correct.
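
      A hedged sketch of the resulting shape (callback and helper names
      assumed, not the exact upstream code):

      	static int uncore_event_cpu_online(unsigned int cpu)
      	{
      		/* valid here: the package mapping is established */
      		int pkg = topology_logical_package_id(cpu);

      		return uncore_box_alloc_pkg(pkg);	/* hypothetical helper */
      	}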
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
      Fixes: 9d85eb91 ("x86/smpboot: Make logical package management more robust")
      Link: http://lkml.kernel.org/r/20170131230141.377156255@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/uncore: Clean up hotplug conversion fallout · 1aa6cfd3
      Thomas Gleixner authored
      The recent conversion to the hotplug state machine kept two mechanisms from
      the original code:
      
       1) The first_init logic which adds the number of online CPUs in a package
          to the refcount. That's wrong because the callbacks are executed for
          all online CPUs.
      
          Remove it so the refcounting is correct.
      
       2) The on_each_cpu() call to undo box->init() in the error handling
          path. That's bogus because when the prepare callback fails no box has
          been initialized yet.
      
          Remove it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
      Fixes: 1a246b9f ("perf/x86/intel/uncore: Convert to hotplug state machine")
      Link: http://lkml.kernel.org/r/20170131230141.298032324@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/rapl: Make package handling more robust · dd86e373
      Thomas Gleixner authored
      The package management code in RAPL relies on package mapping being
      available before a CPU is started. This changed with:
      
        9d85eb91 ("x86/smpboot: Make logical package management more robust")
      
      because the ACPI/BIOS information turned out to be unreliable, but that
      left RAPL in a broken state. This was not noticed because on a regular boot
      all CPUs are online before RAPL is initialized.
      
      A possible fix would be to reintroduce the mess which allocates a package
      data structure in the CPU prepare callback and throws it away again in the
      CPU online callback when it turns out to already exist. But that's a
      horrible hack and not required at all because RAPL becomes functional for
      perf only in the CPU online callback. That's correct because user space is
      not yet informed about the CPU being onlined, so nothing can rely on RAPL
      being available on that particular CPU.
      
      Move the allocation to the CPU online callback and simplify the hotplug
      handling. At this point the package mapping is established and correct.
      
      This also adds a missing check for available package data in the
      event_init() function.
      Reported-by: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 9d85eb91 ("x86/smpboot: Make logical package management more robust")
      Link: http://lkml.kernel.org/r/20170131230141.212593966@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • xtensa: fix noMMU build on cores with MMU · 4b3e6f2e
      Max Filippov authored
      Commit bf15f86b ("xtensa: initialize MMU before jumping to reset
      vector") calls MMU management functions even when CONFIG_MMU is not
      selected. That breaks noMMU build on cores with MMU.
      
      Don't manage MMU when CONFIG_MMU is not selected.
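
      The shape of the guard, as a sketch (helper names assumed):

      	static inline void reset_mmu(void)
      	{
      	#ifdef CONFIG_MMU
      		init_mmu();	/* only touch the MMU when it is configured in */
      	#endif
      	}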
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    • x86/mce: Make timer handling more robust · 0becc0ae
      Thomas Gleixner authored
      Erik reported that on preproduction hardware a CMCI storm triggers the
      BUG_ON in add_timer_on(). The reason is that the per CPU MCE timer is
      started by the CMCI logic before the MCE CPU hotplug callback starts the
      timer with add_timer_on(). So the timer is already queued which triggers
      the BUG.
      
      Using add_timer_on() is pretty pointless in this code because the timer is
      strictly per CPU, initialized as pinned, and all operations which arm the
      timer happen on the CPU to which the timer belongs.
      
      Simplify the whole machinery by using mod_timer() instead of add_timer_on()
      which avoids the problem because mod_timer() can handle already queued
      timers. Use __start_timer() everywhere so the earliest armed expiry time is
      preserved.
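
      The helper then looks roughly like this (a sketch following the
      description above):

      	static void __start_timer(struct timer_list *t, unsigned long interval)
      	{
      		unsigned long when = jiffies + interval;
      		unsigned long flags;

      		local_irq_save(flags);
      		/* mod_timer() copes with an already queued timer; only pull
      		 * the expiry in, never push it out */
      		if (!timer_pending(t) || time_before(when, t->expires))
      			mod_timer(t, round_jiffies(when));
      		local_irq_restore(flags);
      	}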
      Reported-by: Erik Veijola <erik.veijola@intel.com>
      Tested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701310936080.3457@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/irq: Make irq activate operations symmetric · aaaec6fc
      Thomas Gleixner authored
      The recent commit which prevents double activation of interrupts unearthed
      interesting code in x86. The code (ab)uses irq_domain_activate_irq() to
      reconfigure an already activated interrupt. That trips over the prevention
      code now.
      
      Fix it by deactivating the interrupt before activating the new configuration.
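
      The reconfiguration path then becomes, roughly:

      	/* tear down the current configuration first, so the activation
      	 * below is no longer a (prevented) double activation */
      	irq_domain_deactivate_irq(irq_data);
      	irq_domain_activate_irq(irq_data);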
      
      Fixes: 08d85f3e ("irqdomain: Avoid activating interrupts more than once")
      Reported-and-tested-by: Mike Galbraith <efault@gmx.de>
      Reported-and-tested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701311901580.3457@nanos
  3. 31 Jan 2017, 4 commits
  4. 30 Jan 2017, 2 commits
    • x86/microcode: Do not access the initrd after it has been freed · 24c25032
      Borislav Petkov authored
      When we look for microcode blobs, we first try builtin and if that
      doesn't succeed, we fallback to the initrd supplied to the kernel.
      
      However, at some point during boot, that initrd gets jettisoned and we
      shouldn't access it anymore. But we do, as the below KASAN report shows.
      That's because find_microcode_in_initrd() doesn't check whether the
      initrd is still valid or not.
      
      So do that.
      
        ==================================================================
        BUG: KASAN: use-after-free in find_cpio_data
        Read of size 1 by task swapper/1/0
        page:ffffea0000db9d40 count:0 mapcount:0 mapping:          (null) index:0x1
        flags: 0x100000000000000()
        raw: 0100000000000000 0000000000000000 0000000000000001 00000000ffffffff
        raw: dead000000000100 dead000000000200 0000000000000000 0000000000000000
        page dumped because: kasan: bad access detected
        CPU: 1 PID: 0 Comm: swapper/1 Tainted: G        W       4.10.0-rc5-debug-00075-g2dbde22 #3
        Hardware name: Dell Inc. XPS 13 9360/0839Y6, BIOS 1.2.3 12/01/2016
        Call Trace:
         dump_stack
         ? _atomic_dec_and_lock
         ? __dump_page
         kasan_report_error
         ? pointer
         ? find_cpio_data
         __asan_report_load1_noabort
         ? find_cpio_data
         find_cpio_data
         ? vsprintf
         ? dump_stack
         ? get_ucode_user
         ? print_usage_bug
         find_microcode_in_initrd
         __load_ucode_intel
         ? collect_cpu_info_early
         ? debug_check_no_locks_freed
         load_ucode_intel_ap
         ? collect_cpu_info
         ? trace_hardirqs_on
         ? flat_send_IPI_mask_allbutself
         load_ucode_ap
         ? get_builtin_firmware
         ? flush_tlb_func
         ? do_raw_spin_trylock
         ? cpumask_weight
         cpu_init
         ? trace_hardirqs_off
         ? play_dead_common
         ? native_play_dead
         ? hlt_play_dead
         ? syscall_init
         ? arch_cpu_idle_dead
         ? do_idle
         start_secondary
         start_cpu
        Memory state around the buggy address:
         ffff880036e74f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
         ffff880036e74f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        >ffff880036e75000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                           ^
         ffff880036e75080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
         ffff880036e75100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        ==================================================================
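
      A minimal sketch of the added guard at the top of
      find_microcode_in_initrd() (assuming a flag, here called initrd_gone,
      that is set once the initrd is jettisoned):

      	if (initrd_gone)
      		return (struct cpio_data){ .data = NULL, .size = 0 };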
      Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Tested-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170126165833.evjemhbqzaepirxo@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • powerpc/mm: Use the correct pointer when setting a 2MB pte · a0615a16
      Reza Arbab authored
      When setting a 2MB pte, radix__map_kernel_page() is using the address
      
      	ptep = (pte_t *)pudp;
      
      Fix this conversion to use pmdp instead. Use pmdp_ptep() to do this
      instead of casting the pointer.
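
      The conversion then reads (pmdp_ptep() shown as a sketch):

      	static inline pte_t *pmdp_ptep(pmd_t *pmd)
      	{
      		return (pte_t *)pmd;
      	}

      	ptep = pmdp_ptep(pmdp);	/* was: ptep = (pte_t *)pudp; */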
      
      Fixes: 2bfd65e4 ("powerpc/mm/radix: Add radix callbacks for early init routines")
      Cc: stable@vger.kernel.org # v4.7+
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  5. 29 Jan 2017, 1 commit
    • parisc: Don't use BITS_PER_LONG in userspace-exported swab.h header · 2ad5d52d
      Helge Deller authored
      In swab.h the "#if BITS_PER_LONG > 32" breaks compiling userspace programs if
      BITS_PER_LONG is #defined by userspace with the sizeof() compiler builtin.
      
      Solve this problem by using __BITS_PER_LONG instead.  Since we now
      #include asm/bitsperlong.h, avoid further potential userspace pollution
      by moving the #define of SHIFT_PER_LONG to bitops.h which is not
      exported to userspace.
      
      This patch unbreaks compiling qemu on hppa/parisc.
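
      The shape of the fix, sketched:

      	#include <asm/bitsperlong.h>

      	#if __BITS_PER_LONG > 32
      	/* 64-bit swab helpers, as before */
      	#endif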
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: <stable@vger.kernel.org>
  6. 28 Jan 2017, 2 commits
    • x86/efi: Always map the first physical page into the EFI pagetables · bf29bddf
      Jiri Kosina authored
      Commit:
      
        12976670 ("x86/efi: Only map RAM into EFI page tables if in mixed-mode")
      
      stopped creating 1:1 mappings for all RAM when running in native 64-bit mode.
      
      It turns out though that there are 64-bit EFI implementations in the wild
      (this particular problem has been reported on a Lenovo Yoga 710-11IKB),
      which still make use of the first physical page for their own private use,
      even though they explicitly mark it EFI_CONVENTIONAL_MEMORY in the memory
      map.
      
      In case there is no mapping for this particular frame in the EFI pagetables,
      as soon as firmware tries to make use of it, a triple fault occurs and the
      system reboots (in case of the Yoga 710-11IKB this is very early during bootup).
      
      Fix that by always mapping the first page of physical memory into the EFI
      pagetables. We're free to hand this page to the BIOS, as trim_bios_range()
      will reserve the first page and isolate it away from memory allocators anyway.
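
      A sketch of the added mapping (assuming the kernel_map_pages_in_pgd()
      helper used for the other EFI regions):

      	/* map PFN 0 at virtual address 0 in the EFI page tables */
      	if (kernel_map_pages_in_pgd(pgd, 0x0, 0x0, 1, _PAGE_RW))
      		pr_err("Failed to map the first physical page!\n");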
      
      Note that just reverting 12976670 alone is not enough on v4.9-rc1+ to fix the
      regression on affected hardware, as this commit:
      
         ab72a27d ("x86/efi: Consolidate region mapping logic")
      
      later made the first physical frame not to be mapped anyway.
      Reported-by: Hanka Pavlikova <hanka@ucw.cz>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vojtech Pavlik <vojtech@ucw.cz>
      Cc: Waiman Long <waiman.long@hpe.com>
      Cc: linux-efi@vger.kernel.org
      Cc: stable@kernel.org # v4.8+
      Fixes: 12976670 ("x86/efi: Only map RAM into EFI page tables if in mixed-mode")
      Link: http://lkml.kernel.org/r/20170127222552.22336-1-matt@codeblueprint.co.uk
      [ Tidied up the changelog and the comment. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • ARC: [arcompact] handle unaligned access delay slot corner case · 9aed02fe
      Vineet Gupta authored
      After emulating an unaligned access in the delay slot of a branch, we
      pretend the delay slot never happened - so we return to the actual
      branch target (or the next PC if the branch was not taken).
      
      Currently we do this by handling STATUS32.DE, but we also need to clear
      the BTA.T bit, which is disregarded when returning from the original
      misaligned exception, yet could cause weirdness if the interrupt return
      path is taken instead (in case an interrupt was active too).
      
      One ARC700 customer ran into this when enabling unaligned access fixup
      for kernel mode accesses as well.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  7. 27 Jan 2017, 1 commit
    • arm64: skip register_cpufreq_notifier on ACPI-based systems · 606f4226
      Prashanth Prakash authored
      On ACPI-based systems, where the topology is set up using the
      store_cpu_topology() API, we do not yet have the code needed to
      parse cpu capacity and handle the cpufreq notifier, resulting in
      a kernel panic.
      
      Stack:
              init_cpu_capacity_callback+0xb4/0x1c8
              notifier_call_chain+0x5c/0xa0
              __blocking_notifier_call_chain+0x58/0xa0
              blocking_notifier_call_chain+0x3c/0x50
              cpufreq_set_policy+0xe4/0x328
              cpufreq_init_policy+0x80/0x100
              cpufreq_online+0x418/0x710
              cpufreq_add_dev+0x118/0x180
              subsys_interface_register+0xa4/0xf8
              cpufreq_register_driver+0x1c0/0x298
              cppc_cpufreq_init+0xdc/0x1000 [cppc_cpufreq]
              do_one_initcall+0x5c/0x168
              do_init_module+0x64/0x1e4
              load_module+0x130c/0x14d0
              SyS_finit_module+0x108/0x120
              el0_svc_naked+0x24/0x28
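
      A hedged sketch of the guard (notifier and helper names assumed, not
      the exact upstream code):

      	static int __init register_cpufreq_notifier(void)
      	{
      		/* cpu capacity parsing is DT-only so far; skip on ACPI */
      		if (!acpi_disabled)
      			return -EINVAL;

      		return cpufreq_register_notifier(&init_cpu_capacity_notifier,
      						 CPUFREQ_POLICY_NOTIFIER);
      	}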
      
      Fixes: 7202bde8 ("arm64: parse cpu capacity-dmips-mhz from DT")
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 25 Jan 2017, 9 commits
  9. 24 Jan 2017, 6 commits
    • s390/mm: Fix cmma unused transfer from pgste into pte · 0d6da872
      Christian Borntraeger authored
      The last pgtable rework silently disabled the CMMA unused state by
      setting a local pte variable (a parameter) instead of propagating it
      back into the caller. Fix it.
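
      The bug pattern, sketched with hypothetical names:

      	/* buggy: only the local copy of the parameter is updated */
      	static void pgste_set_unused(pte_t pte)
      	{
      		pte = __pte(pte_val(pte) | _PAGE_UNUSED);
      	}

      	/* fixed: propagate the bit back through a pointer */
      	static void pgste_set_unused(pte_t *ptep)
      	{
      		*ptep = __pte(pte_val(*ptep) | _PAGE_UNUSED);
      	}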
      
      Fixes: ebde765c ("s390/mm: uninline ptep_xxx functions from pgtable.h")
      Cc: stable@vger.kernel.org # v4.6+
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • powerpc: Revert the initial stack protector support · f2574030
      Michael Ellerman authored
      Unfortunately the stack protector support we merged recently only works
      on some toolchains. If the toolchain is built without glibc support
      everything works fine, but if glibc is built then it leads to a panic
      at boot.
      
      The solution is not rc5 material, so revert the support for now. This
      reverts commits:
      
      6533b7c1 ("powerpc: Initial stack protector (-fstack-protector) support")
      902e06eb ("powerpc/32: Change the stack protector canary value per task")
      
      Fixes: 6533b7c1 ("powerpc: Initial stack protector (-fstack-protector) support")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/eeh: Fix wrong flag passed to eeh_unfreeze_pe() · f05fea5b
      Gavin Shan authored
      In __eeh_clear_pe_frozen_state(), we should pass the flag's value
      instead of its address to eeh_unfreeze_pe(). The isolated flag is
      cleared if no error is returned from __eeh_clear_pe_frozen_state(). We
      have never observed an error from that function, so the isolated flag
      has always been cleared and no real issue is caused by the misused
      @flag.
      
      This fixes the code by passing the value of @flag to eeh_unfreeze_pe().
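
      The one-line fix, roughly (variable name assumed):

      	rc = eeh_unfreeze_pe(pe, flag);	/* was: eeh_unfreeze_pe(pe, &flag) */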
      
      Fixes: 5cfb20b9 ("powerpc/eeh: Emulate EEH recovery for VFIO devices")
      Cc: stable@vger.kernel.org # v3.18+
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • x86/fpu/xstate: Fix xcomp_bv in XSAVES header · dffba9a3
      Yu-cheng Yu authored
      The compacted-format XSAVES area is determined at boot time and
      never changed after.  The field xsave.header.xcomp_bv indicates
      which components are in the fixed XSAVES format.
      
      In fpstate_init() we did not set xcomp_bv to reflect the XSAVES
      format, since at that time there is no valid data.
      
      However, after we do copy_init_fpstate_to_fpregs() in fpu__clear(),
      as in commit:
      
        b22cbe40 ("x86/fpu: Fix invalid FPU ptrace state after execve()")
      
      and when __fpu_restore_sig() does fpu__restore() for a COMPAT-mode
      app, a #GP occurs.  This can be easily triggered by doing valgrind on
      a COMPAT-mode "Hello World," as reported by Joakim Tjernlund and
      others:
      
      	https://bugzilla.kernel.org/show_bug.cgi?id=190061
      
      Fix it by setting xcomp_bv correctly.
      
      This patch also moves the xcomp_bv initialization to the proper
      place, which was in copyin_to_xsaves() as of:
      
        4c833368 ("x86/fpu: Set the xcomp_bv when we fake up a XSAVES area")
      
      which fixed the bug too, but it's more efficient and cleaner to
      initialize things once per boot, not for every signal handling
      operation.
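
      The gist of the one-time initialization, as a sketch
      (XCOMP_BV_COMPACTED_FORMAT is the compaction bit, bit 63; field names
      approximate):

      	xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | xfeatures_mask;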
      Reported-by: Kevin Hao <haokexin@gmail.com>
      Reported-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: haokexin@gmail.com
      Link: http://lkml.kernel.org/r/1485212084-4418-1-git-send-email-yu-cheng.yu@intel.com
      [ Combined it with 4c833368. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • s390/ptrace: Preserve previous registers for short regset write · 9dce990d
      Martin Schwidefsky authored
      Ensure that if userspace supplies insufficient data to
      PTRACE_SETREGSET to fill all the registers, the thread's old
      registers are preserved.
      
      convert_vx_to_fp() is adapted to handle only a specified number of
      registers rather than unconditionally handling all of them: other
      callers of this function are adapted appropriately.
      
      Based on an initial patch by Dave Martin.
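
      A sketch of the adapted helper (parameter order assumed):

      	static void convert_vx_to_fp(freg_t *fprs, __vector128 *vxrs, int count)
      	{
      		int i;

      		/* only the registers actually supplied are converted;
      		 * the remainder keep their previous contents */
      		for (i = 0; i < count; i++)
      			fprs[i] = *(freg_t *)(vxrs + i);
      	}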
      
      Cc: stable@vger.kernel.org
      Reported-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • powerpc: Add missing error check to prom_find_boot_cpu() · af2b7fa1
      Darren Stevens authored
      prom_init.c calls 'instance-to-package' twice, but the return value
      is not checked during prom_find_boot_cpu(). The result, which could be
      PROM_ERROR, is then passed to prom_getprop(). Add a return check
      to prevent this.
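
      The check, roughly (call_prom() and PHANDLE_VALID() as used elsewhere
      in prom_init.c):

      	cpu_pkg = call_prom("instance-to-package", 1, 1, prom_cpu);
      	if (!PHANDLE_VALID(cpu_pkg))
      		return;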
      
      This was found on a pasemi system, where CFE doesn't have a working
      'instance-to-package' prom call.
      
      Before commit 5c0484e2 ('powerpc: Endian safe trampoline') the area
      around addr 0 was mostly 0's and this didn't cause a problem. Once the
      'FIXUP_ENDIAN' macro was added to head_64.S, though, the low memory area
      has non-zero values, which cause the prom_getprop() call to hang.
      
      mpe: Also confirmed that under SLOF if 'instance-to-package' did fail
      with PROM_ERROR we would crash in SLOF. So the bug is not specific to
      CFE, it's just that other open firmwares don't trigger it because they
      have a working 'instance-to-package'.
      
      Fixes: 5c0484e2 ("powerpc: Endian safe trampoline")
      Cc: stable@vger.kernel.org # v3.13+
      Signed-off-by: Darren Stevens <darren@stevens-zone.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  10. 23 Jan 2017, 3 commits
    • crypto: arm64/aes-blk - honour iv_out requirement in CBC and CTR modes · 11e3b725
      Ard Biesheuvel authored
      Update the ARMv8 Crypto Extensions and the plain NEON AES implementations
      in CBC and CTR modes to return the next IV back to the skcipher API client.
      This is necessary for chaining to work correctly.
      
      Note that for CTR, this is only done if the request is a round multiple of
      the block size, since otherwise, chaining is impossible anyway.
      
      Cc: <stable@vger.kernel.org> # v3.16+
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • x86/fpu: Set the xcomp_bv when we fake up a XSAVES area · 4c833368
      Kevin Hao authored
      I got the following calltrace on an Apollo Lake SoC with a 32-bit kernel:
      
        WARNING: CPU: 2 PID: 261 at arch/x86/include/asm/fpu/internal.h:363 fpu__restore+0x1f5/0x260
        [...]
        Hardware name: Intel Corp. Broxton P/NOTEBOOK, BIOS APLIRVPA.X64.0138.B35.1608091058 08/09/2016
        Call Trace:
         dump_stack()
         __warn()
         ? fpu__restore()
         warn_slowpath_null()
         fpu__restore()
         __fpu__restore_sig()
         fpu__restore_sig()
         restore_sigcontext.isra.9()
         sys_sigreturn()
         do_int80_syscall_32()
         entry_INT80_32()
      
      The reason is that a #GP occurs when executing XRSTORS. The root cause
      is that we forget to set the xcomp_bv when we fake up the XSAVES area
      in the copyin_to_xsaves() function.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1485075023-30161-1-git-send-email-haokexin@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/microcode/intel: Drop stashed AP patch pointer optimization · c26665ab
      Borislav Petkov authored
      This was meant to save us the scanning of the microcode container in
      the initrd, since the first AP had already done that, but it can also
      hurt us:
      
      Imagine a single hyperthreaded CPU (Intel(R) Atom(TM) CPU N270, for
      example) which updates the microcode on the BSP but since the microcode
      engine is shared between the two threads, the update on CPU1 doesn't
      happen because it has already happened on CPU0 and we don't find a newer
      microcode revision on CPU1.
      
      As a result, the intel_ucode_patch pointer is not set, and at initrd
      jettisoning time we don't save the microcode patch for later
      application.
      
      Now, when we suspend to RAM, the loaded microcode gets cleared, so we
      need to reload it, but there is no patch saved in the cache.
      
      Removing the optimization fixes this issue and all is fine and dandy.
      
      Fixes: 06b8534c ("x86/microcode: Rework microcode loading")
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170120202955.4091-2-bp@alien8.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  11. 20 Jan 2017, 4 commits