1. 02 September 2018, 2 commits
  2. 01 September 2018, 1 commit
  3. 31 August 2018, 6 commits
    • x86/efi: Load fixmap GDT in efi_call_phys_epilog() · eeb89e2b
      Committed by Joerg Roedel
      When PTI is enabled on x86-32 the kernel uses the GDT mapped in the fixmap
      for the simple reason that this address is also mapped for user-space.
      
      The efi_call_phys_prolog()/efi_call_phys_epilog() wrappers change the GDT
      to call EFI runtime services and switch back to the kernel GDT when they
      return. But the switch-back uses the writable GDT, not the fixmap GDT.
      
      When that happens and the CPU returns to user-space, it switches to the
      user %cr3 and tries to restore the user segment registers. This fails
      because the writable GDT is not mapped in the user page-table, and
      without a GDT the fault handlers can't be launched either. The result is
      a triple fault and a reboot of the machine.
      
      Fix that by switching back to the fixmap GDT, which is also mapped in the
      user page-table.
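
      The change amounts to loading the fixmap GDT in the epilog instead of
      the writable alias. A minimal sketch of the idea, assuming the mainline
      helpers load_fixmap_gdt() and get_cpu_gdt_rw(); the surrounding lines
      are illustrative rather than the literal patch:

          /* Before: reload the writable GDT alias, which is not present
           * in the user page-table. */
          gdt_descr.address = (unsigned long)get_cpu_gdt_rw(0);
          gdt_descr.size = GDT_SIZE - 1;
          load_gdt(&gdt_descr);

          /* After: load the fixmap GDT for this CPU instead; that alias
           * is mapped in both kernel and user page-tables, so the return
           * to user-space can still reload its segment registers. */
          load_fixmap_gdt(0);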
      
      Fixes: 7757d607 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: hpa@zytor.com
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/1535702738-10971-1-git-send-email-joro@8bytes.org
      eeb89e2b
    • x86/nmi: Fix NMI uaccess race against CR3 switching · 4012e77a
      Committed by Andy Lutomirski
      
      An NMI can hit in the middle of context switching or in the middle of
      switch_mm_irqs_off().  In either case, CR3 might not match current->mm,
      which could cause copy_from_user_nmi() and friends to read the wrong
      memory.
      
      Fix it by adding a new nmi_uaccess_okay() helper and checking it in
      copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.
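
      A sketch of the check, modeled on the helper this patch adds; the field
      name cpu_tlbstate.loaded_mm follows mainline, but treat the exact lines
      as illustrative:

          /* An NMI can observe a CR3 that matches neither the old nor the
           * new mm mid-switch; refuse NMI uaccess in that window. */
          static inline bool nmi_uaccess_okay(void)
          {
              struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
              struct mm_struct *current_mm = current->mm;

              /* Mid-switch: the loaded mm no longer matches current->mm. */
              if (loaded_mm != current_mm)
                  return false;

              /* Sanity check: CR3 should point at current's page tables. */
              VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
              return true;
          }
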
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Rik van Riel <riel@surriel.com>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jann Horn <jannh@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/dd956eba16646fd0b15c3c0741269dfd84452dac.1535557289.git.luto@kernel.org
      
      4012e77a
    • x86: Allow generating user-space headers without a compiler · 829fe4aa
      Committed by Ben Hutchings
      
      When bootstrapping an architecture, it's usual to generate the kernel's
      user-space headers (make headers_install) before building a compiler.  Move
      the compiler check (for asm goto support) to the archprepare target so that
      it is only done when building code for the target.
      
      Fixes: e501ce95 ("x86: Force asm-goto")
      Reported-by: Helmut Grohne <helmutg@debian.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180829194317.GA4765@decadent.org.uk
      
      829fe4aa
    • x86/dumpstack: Don't dump kernel memory based on usermode RIP · 342db04a
      Committed by Jann Horn
      
      show_opcodes() is used both for dumping kernel instructions and for dumping
      user instructions. If userspace causes #PF by jumping to a kernel address,
      show_opcodes() can be reached with regs->ip controlled by the user,
      pointing to kernel code. Make sure that userspace can't trick us into
      dumping kernel memory into dmesg.
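
      The upstream fix keys the copy on the faulting context rather than on
      the address alone. A hedged sketch of that shape (copy_from_user_nmi()
      and probe_kernel_read() are real kernel helpers; the wrapper itself is
      illustrative):

          /* Only read kernel memory when the dump was triggered from kernel
           * mode; a user-controlled RIP goes through the uaccess path, which
           * fails cleanly on kernel addresses. */
          static int copy_code(struct pt_regs *regs, u8 *buf, unsigned long src,
                               unsigned int nbytes)
          {
              if (!user_mode(regs))
                  return probe_kernel_read(buf, (void *)src, nbytes);

              if (copy_from_user_nmi(buf, (void __user *)src, nbytes))
                  return -EFAULT;
              return 0;
          }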
      
      Fixes: 7cccf072 ("x86/dumpstack: Add a show_ip() function")
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: security@kernel.org
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180828154901.112726-1-jannh@google.com
      
      342db04a
    • arm64: mm: always enable CONFIG_HOLES_IN_ZONE · f52bb98f
      Committed by James Morse
      Commit 6d526ee2 ("arm64: mm: enable CONFIG_HOLES_IN_ZONE for NUMA")
      only enabled HOLES_IN_ZONE for NUMA systems because the NUMA code was
      choking on the missing zone for nomap pages. This problem doesn't just
      apply to NUMA systems.
      
      If the architecture doesn't set HAVE_ARCH_PFN_VALID, pfn_valid() will
      return true if the pfn is part of a valid sparsemem section.
      
      When working with multiple pages, the mm code uses pfn_valid_within()
      to test that each page it uses within the sparsemem section is valid. On
      most systems memory comes in MAX_ORDER_NR_PAGES chunks, which all
      have valid/initialised struct pages. In this case pfn_valid_within()
      is optimised out.
      
      Systems where this isn't true (e.g. due to nomap) should set
      HOLES_IN_ZONE and provide HAVE_ARCH_PFN_VALID so that mm tests each
      page as it works with it.
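
      For illustration, the mm-side pattern this guards looks roughly like
      the loop below (modeled on move_freepages(); pfn_valid_within() is the
      real helper and compiles to 1 unless HOLES_IN_ZONE is set):

          for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
              /* With HOLES_IN_ZONE this skips nomap holes inside a sparsemem
               * section; without it the check is optimised to 1 and the
               * poisoned struct page below trips VM_BUG_ON_PAGE(). */
              if (!pfn_valid_within(pfn))
                  continue;

              page = pfn_to_page(pfn);
              /* ... operate on a known-initialised struct page ... */
          }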
      
      Currently non-NUMA arm64 systems can't enable HOLES_IN_ZONE, leading to
      a VM_BUG_ON():
      
      | page:fffffdff802e1780 is uninitialized and poisoned
      | raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
      | raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
      | page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
      | ------------[ cut here ]------------
      | kernel BUG at include/linux/mm.h:978!
      | Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      [...]
      | CPU: 1 PID: 25236 Comm: dd Not tainted 4.18.0 #7
      | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
      | pstate: 40000085 (nZcv daIf -PAN -UAO)
      | pc : move_freepages_block+0x144/0x248
      | lr : move_freepages_block+0x144/0x248
      | sp : fffffe0071177680
      [...]
      | Process dd (pid: 25236, stack limit = 0x0000000094cc07fb)
      | Call trace:
      |  move_freepages_block+0x144/0x248
      |  steal_suitable_fallback+0x100/0x16c
      |  get_page_from_freelist+0x440/0xb20
      |  __alloc_pages_nodemask+0xe8/0x838
      |  new_slab+0xd4/0x418
      |  ___slab_alloc.constprop.27+0x380/0x4a8
      |  __slab_alloc.isra.21.constprop.26+0x24/0x34
      |  kmem_cache_alloc+0xa8/0x180
      |  alloc_buffer_head+0x1c/0x90
      |  alloc_page_buffers+0x68/0xb0
      |  create_empty_buffers+0x20/0x1ec
      |  create_page_buffers+0xb0/0xf0
      |  __block_write_begin_int+0xc4/0x564
      |  __block_write_begin+0x10/0x18
      |  block_write_begin+0x48/0xd0
      |  blkdev_write_begin+0x28/0x30
      |  generic_perform_write+0x98/0x16c
      |  __generic_file_write_iter+0x138/0x168
      |  blkdev_write_iter+0x80/0xf0
      |  __vfs_write+0xe4/0x10c
      |  vfs_write+0xb4/0x168
      |  ksys_write+0x44/0x88
      |  sys_write+0xc/0x14
      |  el0_svc_naked+0x30/0x34
      | Code: aa1303e0 90001a01 91296421 94008902 (d4210000)
      | ---[ end trace 1601ba47f6e883fe ]---
      
      Remove the NUMA dependency.
      
      Link: https://www.spinics.net/lists/arm-kernel/msg671851.html
      Cc: <stable@vger.kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: Mikulas Patocka <mpatocka@redhat.com>
      Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Tested-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      f52bb98f
    • m68k/mac: Use correct PMU response format · 0986b16a
      Committed by Finn Thain
      Now that the 68k Mac port has adopted the via-pmu driver, it must decode
      the PMU response accordingly, otherwise the date and time will be wrong.
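
      The practical difference is where the time payload starts in the reply
      buffer. A hedged sketch (the offsets shown are an assumption for
      illustration, not a quote of the patch):

          /* via-pmu68k-style decode: payload assumed to start at reply[1]. */
          time = (req.reply[1] << 24) | (req.reply[2] << 16) |
                 (req.reply[3] << 8) | req.reply[4];

          /* via-pmu-style decode: no leading byte, payload at reply[0]. */
          time = (req.reply[0] << 24) | (req.reply[1] << 16) |
                 (req.reply[2] << 8) | req.reply[3];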
      
      Fixes: ebd72227 ("macintosh/via-pmu: Replace via-pmu68k driver with via-pmu driver")
      Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      0986b16a
  4. 30 August 2018, 8 commits
  5. 29 August 2018, 5 commits
  6. 28 August 2018, 1 commit
  7. 27 August 2018, 12 commits
  8. 25 August 2018, 3 commits
    • crypto: arm64/aes-gcm-ce - fix scatterwalk API violation · c2b24c36
      Committed by Ard Biesheuvel
      Commit 71e52c27 ("crypto: arm64/aes-ce-gcm - operate on
      two input blocks at a time") modified the granularity at which
      the AES/GCM code processes its input to allow subsequent changes
      to be applied that improve performance by using aggregation to
      process multiple input blocks at once.
      
      For this reason, it doubled the algorithm's 'chunksize' property
      to 2 x AES_BLOCK_SIZE, but retained the non-SIMD fallback path that
      processes a single block at a time. In some cases, this violates the
      skcipher scatterwalk API by calling skcipher_walk_done() with a
      non-zero residue value for a chunk that is expected to be handled
      in its entirety. This results in a WARN_ON() being hit by the TLS
      self test code, and is likely to break other use cases as well.
      Unfortunately, none of the current test cases exercises this exact
      code path.
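
      For reference, the contract at issue: every walk step except the last
      must consume a whole number of chunks. A hedged sketch of a conforming
      loop (skcipher_walk_done() and round_down() are real kernel APIs;
      process_blocks() is a hypothetical stand-in for the cipher code):

          while (walk.nbytes) {
              unsigned int n = walk.nbytes;

              /* Only the final step may carry a partial chunk; earlier
               * steps must round down to a multiple of the chunksize. */
              if (n < walk.total)
                  n = round_down(n, chunksize);

              process_blocks(walk.dst.virt.addr, walk.src.virt.addr, n);
              err = skcipher_walk_done(&walk, walk.nbytes - n);
          }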
      
      Fixes: 71e52c27 ("crypto: arm64/aes-ce-gcm - operate on two input blocks at a time")
      Reported-by: Vakul Garg <vakul.garg@nxp.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Vakul Garg <vakul.garg@nxp.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      c2b24c36
    • crypto: aesni - Use unaligned loads from gcm_context_data · e5b954e8
      Committed by Dave Watson
      A regression was reported that bisects to commit 1476db2d ("Move HashKey
      computation from stack to gcm_context"). That diff moved the HashKey
      computation from the stack, which was explicitly aligned in the asm, to
      a struct provided from the C code, relying on AESNI_ALIGN_ATTR for
      alignment. It appears some compilers may not align this struct
      correctly, resulting in a crash on the movdqa instruction when
      attempting to encrypt or decrypt data.
      
      Fix by using unaligned loads for the HashKeys.  On modern
      hardware there is no perf difference between the unaligned and
      aligned loads.  All other accesses to gcm_context_data already use
      unaligned loads.
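
      The asm change swaps aligned SSE loads (movdqa, which faults on a
      misaligned operand) for unaligned ones (movdqu). In C intrinsics terms
      the distinction looks like this; an illustration of the two load forms,
      not the kernel's actual assembly:

          #include <emmintrin.h>

          __m128i load_hashkey(const void *p)
          {
              /* _mm_load_si128 (movdqa) faults unless p is 16-byte aligned;
               * _mm_loadu_si128 (movdqu) accepts any address and costs the
               * same on modern hardware when the data happens to be aligned. */
              return _mm_loadu_si128((const __m128i *)p);
          }
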
      Reported-by: Mauro Rossi <issor.oruam@gmail.com>
      Fixes: 1476db2d ("Move HashKey computation from stack to gcm_context")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Dave Watson <davejwatson@fb.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      e5b954e8
    • crypto: arm64/sm4-ce - check for the right CPU feature bit · 7fa885e2
      Committed by Ard Biesheuvel
      ARMv8.2 specifies special instructions for the SM3 cryptographic hash
      and the SM4 symmetric cipher. While it is unlikely that a core would
      implement one and not the other, we should only use SM4 instructions
      if the SM4 CPU feature bit is set, and we currently check the SM3
      feature bit instead. So fix that.
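
      The fix is essentially a one-line change to the module's feature
      binding. Sketch (module_cpu_feature_match() is the real kernel helper;
      treat the lines as illustrative of the commit rather than a verbatim
      quote):

          /* Bind module auto-loading and initialisation to the SM4 feature
           * bit rather than the previously used SM3 bit. */
          module_cpu_feature_match(SM4, sm4_ce_mod_init);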
      
      Fixes: e99ce921 ("crypto: arm64 - add support for SM4...")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      7fa885e2
  9. 24 August 2018, 2 commits