1. 12 Mar 2018, 1 commit
  2. 10 Mar 2018, 1 commit
  3. 09 Mar 2018, 1 commit
    • x86/kprobes: Fix kernel crash when probing .entry_trampoline code · c07a8f8b
      Authored by Francis Deslauriers
      Disable the kprobe probing of the entry trampoline:
      
      .entry_trampoline is a code area that is used to ensure page table
      isolation between userspace and kernelspace.
      
      At the beginning of the trampoline's execution, we load the
      kernel's CR3 register, which enables translation of kernel virtual
      addresses to physical addresses. Before this happens, most kernel
      addresses cannot be translated because the running process's CR3 is
      still in use.
      
      If a kprobe is placed on the trampoline code before that CR3 switch
      happens, the kernel crashes because the pages needed for int3
      handling are not accessible.
      
      To fix this, add the .entry_trampoline section to the kprobe blacklist
      to prohibit the probing of code before all the kernel pages are
      accessible.
      Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: mathieu.desnoyers@efficios.com
      Cc: mhiramat@kernel.org
      Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c07a8f8b
  4. 08 Mar 2018, 12 commits
  5. 07 Mar 2018, 6 commits
  6. 06 Mar 2018, 9 commits
  7. 05 Mar 2018, 2 commits
  8. 04 Mar 2018, 1 commit
  9. 03 Mar 2018, 1 commit
  10. 02 Mar 2018, 6 commits
    • s390: Fix runtime warning about negative pgtables_bytes · 61e18270
      Authored by Guenter Roeck
      When running s390 images with 'compat' processes, the following
      BUG is seen repeatedly.
      
      BUG: non-zero pgtables_bytes on freeing mm: -16384
      
      Bisect points to commit b4e98d9a ("mm: account pud page tables").
      Analysis shows that init_new_context() is called with
      mm->context.asce_limit set to _REGION3_SIZE. In this situation,
      pgtables_bytes remains set to 0 and is not increased. The message is
      displayed when the affected process dies and mm_dec_nr_puds() is called.
      
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Fixes: b4e98d9a ("mm: account pud page tables")
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      61e18270
    • parisc: Reduce irq overhead when run in qemu · 636a415b
      Authored by Helge Deller
      When run under QEMU, calling mfctl(16) creates some overhead because the
      qemu timer has to be scaled and moved into the register. This patch
      reduces the number of calls to mfctl(16) by moving the calls out of the
      loops.
      
      Additionally, increase the minimal time interval to 8000 cycles
      instead of 500 to compensate for possible QEMU delays when
      delivering interrupts.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: stable@vger.kernel.org # 4.14+
      636a415b
    • parisc: Use cr16 interval timers unconditionally on qemu · 5ffa8518
      Authored by Helge Deller
      When running on qemu we know that the (emulated) cr16 cpu-internal
      clocks are synchronized, so use them unconditionally on qemu.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: stable@vger.kernel.org # 4.14+
      5ffa8518
    • parisc: Check if secondary CPUs want own PDC calls · 0ed1fe4a
      Authored by Helge Deller
      The architecture specification says (for 64-bit systems): PDC is a per
      processor resource, and operating system software must be prepared to
      manage separate pointers to PDCE_PROC for each processor.  The address
      of PDCE_PROC for the monarch processor is stored in the Page Zero
      location MEM_PDC. The address of PDCE_PROC for each non-monarch
      processor is passed in gr26 when PDCE_RESET invokes OS_RENDEZ.
      
      Currently we still use one PDC for all CPUs, but warn in case we
      encounter a machine that follows the specification.
      Signed-off-by: Helge Deller <deller@gmx.de>
      0ed1fe4a
    • parisc: Hide virtual kernel memory layout · fd8d0ca2
      Authored by Helge Deller
      For security reasons do not expose the virtual kernel memory layout to
      userspace.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Suggested-by: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org # 4.15
      Reviewed-by: Kees Cook <keescook@chromium.org>
      fd8d0ca2
    • parisc: Fix ordering of cache and TLB flushes · 0adb24e0
      Authored by John David Anglin
      The change to flush_kernel_vmap_range() wasn't sufficient to avoid the
      SMP stalls.  The problem is some drivers call these routines with
      interrupts disabled.  Interrupts need to be enabled for flush_tlb_all()
      and flush_cache_all() to work.  This version adds checks to ensure
      interrupts are not disabled before calling routines that need IPI
      interrupts.  When interrupts are disabled, we now drop into slower code.
      
      The attached change fixes the ordering of cache and TLB flushes in
      several cases.  When we flush the cache using the existing PTE/TLB
      entries, we need to flush the TLB after doing the cache flush.  We don't
      need to do this when we flush the entire instruction and data caches as
      these flushes don't use the existing TLB entries.  The same is true for
      tmpalias region flushes.
      
      The flush_kernel_vmap_range() and invalidate_kernel_vmap_range()
      routines have been updated.
      
      Secondly, we added a new purge_kernel_dcache_range_asm() routine to
      pacache.S and use it in invalidate_kernel_vmap_range().  Nominally,
      purges are faster than flushes as the cache lines don't have to be
      written back to memory.
      
      Hopefully, this is sufficient to resolve the remaining problems due to
      cache speculation.  So far, testing indicates that this is the case.  I
      did work up a patch using tmpalias flushes, but there is a performance
      hit because we need the physical address for each page, and we also need
      to sequence access to the tmpalias flush code.  This increases the
      probability of stalls.
      
      Signed-off-by: John David Anglin <dave.anglin@bell.net>
      Cc: stable@vger.kernel.org # 4.9+
      Signed-off-by: Helge Deller <deller@gmx.de>
      0adb24e0