1. 24 April 2017, 2 commits
  2. 19 April 2017, 3 commits
    • x86/mce: Make the MCE notifier a blocking one · 0dc9c639
      Committed by Vishal Verma
      The NFIT MCE handler callback (for handling media errors on NVDIMMs)
      takes a mutex to add the location of a memory error to a list. But since
      the notifier call chain for machine checks (x86_mce_decoder_chain) is
      atomic, we get a lockdep splat like:
      
        BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620
        in_atomic(): 1, irqs_disabled(): 0, pid: 4, name: kworker/0:0
        [..]
        Call Trace:
         dump_stack
         ___might_sleep
         __might_sleep
         mutex_lock_nested
         ? __lock_acquire
         nfit_handle_mce
         notifier_call_chain
         atomic_notifier_call_chain
         ? atomic_notifier_call_chain
         mce_gen_pool_process
      
      Convert the notifier to a blocking one which gets to run only in process
      context.
      
      Boris: remove the notifier call in atomic context in print_mce(). For
      now, let's print the MCE on the atomic path so that we can make sure
      they go out and get logged at least.
      
      Fixes: 6839a6d9 ("nfit: do an ARS scrub on hitting a latent media error")
      Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: linux-edac <linux-edac@vger.kernel.org>
      Cc: x86-ml <x86@kernel.org>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20170411224457.24777-1-vishal.l.verma@intel.com
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      0dc9c639
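      A minimal sketch of the conversion described above (simplified names, not the exact diff): the chain head and its call site move from the atomic to the blocking notifier API, so callbacks such as the NFIT handler may sleep, e.g. take a mutex:

        #include <linux/notifier.h>

        /* Was ATOMIC_NOTIFIER_HEAD(x86_mce_decoder_chain); callbacks may now sleep. */
        static BLOCKING_NOTIFIER_HEAD(x86_mce_decoder_chain);

        /* Called from the gen_pool work item, i.e. process context, in this sketch. */
        static void mce_notify(void *record)
        {
            blocking_notifier_call_chain(&x86_mce_decoder_chain, 0, record);
        }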
    • sparc64: Fix hugepage page table free · 544f8f93
      Committed by Nitin Gupta
      Make sure the start address is aligned to a PMD_SIZE
      boundary when freeing the page table backing a hugepage
      region. The issue was causing segfaults when a region
      backed by 64K pages was unmapped, since such a region
      is in general not PMD_SIZE aligned.
      Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      544f8f93
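      The essence of the fix, sketched with a hypothetical helper name: round the start of the freed range down to a PMD_SIZE boundary before tearing down the page tables behind a hugepage region:

        #include <asm/pgtable.h>

        /* Illustrative helper only, not the upstream hunk. */
        static unsigned long hugetlb_align_free_start(unsigned long addr)
        {
            /* A 64K-backed region need not start on a PMD_SIZE boundary,
             * so align down before walking and freeing the PMD-level tables. */
            return addr & PMD_MASK;
        }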
    • sparc64: Use LOCKDEP_SMALL, not PROVE_LOCKING_SMALL · 395102db
      Committed by Daniel Jordan
      CONFIG_PROVE_LOCKING_SMALL shrinks the memory usage of lockdep so the
      kernel text, data, and bss fit in the required 32MB limit, but this
      option is not set for every config that enables lockdep.
      
      A 4.10 kernel fails to boot with the console output
      
          Kernel: Using 8 locked TLB entries for main kernel image.
          hypervisor_tlb_lock[2000000:0:8000000071c007c3:1]: errors with f
          Program terminated
      
      with these config options
      
          CONFIG_LOCKDEP=y
          CONFIG_LOCK_STAT=y
          CONFIG_PROVE_LOCKING=n
      
      To fix, rename CONFIG_PROVE_LOCKING_SMALL to CONFIG_LOCKDEP_SMALL, and
      enable this option with CONFIG_LOCKDEP=y so we get the reduced memory
      usage every time lockdep is turned on.
      
      Tested that CONFIG_LOCKDEP_SMALL is set to 'y' if and only if
      CONFIG_LOCKDEP is set to 'y'.  When other lockdep-related config options
      that select CONFIG_LOCKDEP are enabled (e.g. CONFIG_LOCK_STAT or
      CONFIG_PROVE_LOCKING), verified that CONFIG_LOCKDEP_SMALL is also
      enabled.
      
      Fixes: e6b5f1be ("config: Adding the new config parameter CONFIG_PROVE_LOCKING_SMALL for sparc")
      Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
      Reviewed-by: Babu Moger <babu.moger@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      395102db
  3. 18 April 2017, 2 commits
    • powerpc/64: Fix HMI exception on LE with CONFIG_RELOCATABLE=y · be5c5e84
      Committed by Michael Ellerman
      Prior to commit 2337d207 ("powerpc/64: CONFIG_RELOCATABLE support for hmi
      interrupts"), the branch from hmi_exception_early() to hmi_exception_realmode()
      was just a bl hmi_exception_realmode, which the linker would turn into a bl to
      the local entry point of hmi_exception_realmode. This was broken when
      CONFIG_RELOCATABLE=y because hmi_exception_realmode() is not in the low part of
      the kernel text that is copied down to 0x0.
      
      But in fixing that, we added a new bug on little endian kernels. Because the
      branch is now a bctrl when CONFIG_RELOCATABLE=y, we branch to the global entry
      point of hmi_exception_realmode(). The global entry point must be called with
      r12 containing the address of hmi_exception_realmode(), because it uses that
      value to calculate the TOC value (r2).
      
      This may manifest as a checkstop, because we take a junk value from r12 which
      came from HSRR1, add a small constant to it and then use that as the TOC
      pointer. The HSRR1 value will have 0x9 as the top nibble, which puts it above
      RAM and somewhere in MMIO space.
      
      Fix it by changing the BRANCH_LINK_TO_FAR() macro to always use r12 to load the
      label we're branching to. This means r12 will be setup correctly on LE, fixing
      this bug, and r12 is also volatile across function calls on BE so it's a good
      choice anyway.
      
      Fixes: 2337d207 ("powerpc/64: CONFIG_RELOCATABLE support for hmi interrupts")
      Reported-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Acked-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      be5c5e84
    • powerpc/kprobe: Fix oops when kprobed on 'stdu' instruction · 9e1ba4f2
      Committed by Ravi Bangoria
      If we set a kprobe on a 'stdu' instruction on powerpc64, we see a kernel
      OOPS:
      
        Bad kernel stack pointer cd93c840 at c000000000009868
        Oops: Bad kernel stack pointer, sig: 6 [#1]
        ...
        GPR00: c000001fcd93cb30 00000000cd93c840 c0000000015c5e00 00000000cd93c840
        ...
        NIP [c000000000009868] resume_kernel+0x2c/0x58
        LR [c000000000006208] program_check_common+0x108/0x180
      
      On a 64-bit system, when the user probes on a 'stdu' instruction, the kernel does
      not emulate the actual store in emulate_step() because it may corrupt the
      exception frame. Instead, the kernel does the actual store operation in the
      exception return code, i.e. resume_kernel().
      
      resume_kernel() loads the saved stack pointer from memory using lwz, which only
      loads the low 32 bits of the address, causing the kernel crash.
      
      Fix this by loading the 64-bit value instead.
      
      Fixes: be96f633 ("powerpc: Split out instruction analysis part of emulate_step()")
      Cc: stable@vger.kernel.org # v3.18+
      Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
      Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Reviewed-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      [mpe: Change log massage, add stable tag]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      9e1ba4f2
  4. 16 April 2017, 1 commit
  5. 15 April 2017, 1 commit
    • parisc: fix bugs in pa_memcpy · 409c1b25
      Committed by Mikulas Patocka
      The patch 554bfece ("parisc: Fix access
      fault handling in pa_memcpy()") reimplements the pa_memcpy function.
      Unfortunately, it makes the kernel unbootable. The crash happens in the
      function ide_complete_cmd where memcpy is called with the same source
      and destination address.
      
      This patch fixes a few bugs in pa_memcpy:
      
      * When jumping to .Lcopy_loop_16 for the first time, don't skip the
        instruction "ldi 31,t0" (this bug made the kernel unbootable)
      * Use the COND macro when comparing length, so that the comparison is
        64-bit (a theoretical issue, in case the length is greater than
        0xffffffff)
      * Don't use the COND macro after the "extru" instruction (the PA-RISC
        specification says that the upper 32 bits of the extru result are undefined,
        although they are set to zero in practice)
      * Fix exception addresses in .Lcopy16_fault and .Lcopy8_fault
      * Rename .Lcopy_loop_4 to .Lcopy_loop_8 (so that it is consistent with
        .Lcopy8_fault)
      
      Cc: <stable@vger.kernel.org> # v4.9+
      Fixes: 554bfece ("parisc: Fix access fault handling in pa_memcpy()")
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Helge Deller <deller@gmx.de>
      409c1b25
  6. 14 April 2017, 2 commits
  7. 13 April 2017, 3 commits
    • x86/efi: Don't try to reserve runtime regions · 6f6266a5
      Committed by Omar Sandoval
      Reserving a runtime region results in splitting the EFI memory
      descriptors for the runtime region. This results in runtime region
      descriptors with bogus memory mappings, leading to interesting crashes
      like the following during a kexec:
      
        general protection fault: 0000 [#1] SMP
        Modules linked in:
        CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.11.0-rc1 #53
        Hardware name: Wiwynn Leopard-Orv2/Leopard-DDR BW, BIOS LBM05   09/30/2016
        RIP: 0010:virt_efi_set_variable()
        ...
        Call Trace:
         efi_delete_dummy_variable()
         efi_enter_virtual_mode()
         start_kernel()
         ? set_init_arg()
         x86_64_start_reservations()
         x86_64_start_kernel()
         start_cpu()
        ...
        Kernel panic - not syncing: Fatal exception
      
      Runtime regions will not be freed and do not need to be reserved, so
      skip the memmap modification in this case.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: <stable@vger.kernel.org> # v4.9+
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Fixes: 8e80632f ("efi/esrt: Use efi_mem_reserve() and avoid a kmalloc()")
      Link: http://lkml.kernel.org/r/20170412152719.9779-2-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6f6266a5
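      A sketch of the early return described above, assuming a lookup helper that fills in the memory descriptor for the address; the exact upstream plumbing differs:

        /* Illustrative fragment of efi_arch_mem_reserve(); names partly assumed. */
        void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
        {
            efi_memory_desc_t md;

            if (efi_mem_desc_lookup(addr, &md))
                return;                 /* no descriptor found for this address */

            if (md.attribute & EFI_MEMORY_RUNTIME)
                return;                 /* runtime regions are never freed; don't split them */

            /* ... otherwise carry on with the existing memmap-splitting logic ... */
        }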
    • x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions · 11e63f6d
      Committed by Dan Williams
      Before we rework the "pmem api" to stop abusing __copy_user_nocache()
      for memcpy_to_pmem() we need to fix cases where we may strand dirty data
      in the cpu cache. The problem occurs when copy_from_iter_pmem() is used
      for arbitrary data transfers from userspace. There is no guarantee that
      these transfers, performed by dax_iomap_actor(), will have aligned
      destinations or aligned transfer lengths. Backstop the usage of
      __copy_user_nocache() with explicit cache management in these unaligned
      cases.
      
      Yes, copy_from_iter_pmem() is now too big for an inline, but addressing
      that is saved for a later patch that moves the entirety of the "pmem
      api" into the pmem driver directly.
      
      Fixes: 5de490da ("pmem: add copy_from_iter_pmem() and clear_pmem()")
      Cc: <stable@vger.kernel.org>
      Cc: <x86@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      11e63f6d
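      A conceptual sketch of the backstop (the wrapper and the write-back helper names are assumptions): __copy_user_nocache() only guarantees a cache bypass for the aligned middle of a transfer, so an unaligned head or tail gets an explicit write-back afterwards:

        static int pmem_copy_from_user(void *dst, const void __user *src, size_t n)
        {
            if (__copy_user_nocache(dst, src, n, 0))
                return -EFAULT;

            /* If either end of the copy is not 8-byte aligned, the partially
             * covered cache lines may still hold dirty data; write them back
             * explicitly so the bytes actually reach the pmem media. */
            if (!IS_ALIGNED((unsigned long)dst, 8) || !IS_ALIGNED(n, 8))
                arch_wb_cache_pmem(dst, n);

            return 0;
        }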
    • mm: Tighten x86 /dev/mem with zeroing reads · a4866aa8
      Committed by Kees Cook
      Under CONFIG_STRICT_DEVMEM, reading System RAM through /dev/mem is
      disallowed. However, on x86, the first 1MB was always allowed for BIOS
      and similar things, regardless of it actually being System RAM. It was
      possible for the heap to end up allocated in the low 1MB of RAM, and then
      read by things like x86info or dd, which would trip hardened usercopy:
      
      usercopy: kernel memory exposure attempt detected from ffff880000090000 (dma-kmalloc-256) (4096 bytes)
      
      This changes the x86 exception for the low 1MB by reading back zeros for
      System RAM areas instead of blindly allowing them. More work is needed to
      extend this to mmap, but currently mmap doesn't go through usercopy, so
      hardened usercopy won't Oops the kernel.
      Reported-by: Tommi Rantala <tommi.t.rantala@nokia.com>
      Tested-by: Tommi Rantala <tommi.t.rantala@nokia.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      a4866aa8
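      An illustrative fragment, not the literal patch: a read that lands on low-1MB System RAM hands back zeros instead of exposing whatever the kernel allocated there:

        /* Hypothetical chunk handler for read_mem(); simplified. */
        static ssize_t devmem_read_chunk(char __user *buf, phys_addr_t p, size_t sz)
        {
            if (p < 1024 * 1024 && page_is_ram(p >> PAGE_SHIFT)) {
                /* Low-1MB System RAM: report zeros rather than real contents. */
                if (clear_user(buf, sz))
                    return -EFAULT;
                return sz;
            }

            /* ... otherwise map the address and copy the real contents as before ... */
            return sz;
        }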
  8. 12 April 2017, 1 commit
    • s390/mm: fix CMMA vs KSM vs others · a8f60d1f
      Committed by Christian Borntraeger
      On heavy paging with KSM I see guest data corruption. It turns out that
      KSM will add pages to its tree whose mappings return true for
      pte_unused (or might become so later). KSM will unmap such pages
      and reinstantiate them with different attributes (e.g. write protected or
      special, e.g. in replace_page or write_protect_page). This uncovered
      a bug in our pagetable handling: we must remove the unused flag as
      soon as an entry becomes present again.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      a8f60d1f
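      A sketch of the rule stated above; the software-bit name is taken from the s390 headers, but the helper itself is hypothetical and not the upstream hunk:

        static inline pte_t pte_mkpresent_clear_unused(pte_t pte)
        {
            pte_val(pte) &= ~_PAGE_UNUSED;  /* the page is in use again, so KSM must not discard it */
            return pte;
        }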
  9. 11 April 2017, 4 commits
  10. 07 April 2017, 6 commits
    • Revert "Revert "arm64: hugetlb: partial revert of 66b3923a"" · 6ae979ab
      Committed by Will Deacon
      The use of the contiguous bit by our hugetlb implementation violates
      the break-before-make requirements of the architecture and can lead to
      silent data corruption or TLB conflict aborts. Once again, disable these
      hugetlb sizes whilst it gets worked out.
      
      This reverts commit ab2e1b89.
      
      Conflicts:
      	arch/arm64/mm/hugetlbpage.c
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6ae979ab
    • powerpc/crypto/crc32c-vpmsum: Fix missing preempt_disable() · 4749228f
      Committed by Michael Ellerman
      In crc32c_vpmsum() we call enable_kernel_altivec() without first
      disabling preemption, which is not allowed:
      
        WARNING: CPU: 9 PID: 2949 at ../arch/powerpc/kernel/process.c:277 enable_kernel_altivec+0x100/0x120
        Modules linked in: dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c vmx_crypto ...
        CPU: 9 PID: 2949 Comm: docker Not tainted 4.11.0-rc5-compiler_gcc-6.3.1-00033-g308ac756 #381
        ...
        NIP [c00000000001e320] enable_kernel_altivec+0x100/0x120
        LR [d000000003df0910] crc32c_vpmsum+0x108/0x150 [crc32c_vpmsum]
        Call Trace:
          0xc138fd09 (unreliable)
          crc32c_vpmsum+0x108/0x150 [crc32c_vpmsum]
          crc32c_vpmsum_update+0x3c/0x60 [crc32c_vpmsum]
          crypto_shash_update+0x88/0x1c0
          crc32c+0x64/0x90 [libcrc32c]
          dm_bm_checksum+0x48/0x80 [dm_persistent_data]
          sb_check+0x84/0x120 [dm_thin_pool]
          dm_bm_validate_buffer.isra.0+0xc0/0x1b0 [dm_persistent_data]
          dm_bm_read_lock+0x80/0xf0 [dm_persistent_data]
          __create_persistent_data_objects+0x16c/0x810 [dm_thin_pool]
          dm_pool_metadata_open+0xb0/0x1a0 [dm_thin_pool]
          pool_ctr+0x4cc/0xb60 [dm_thin_pool]
          dm_table_add_target+0x16c/0x3c0
          table_load+0x184/0x400
          ctl_ioctl+0x2f0/0x560
          dm_ctl_ioctl+0x38/0x50
          do_vfs_ioctl+0xd8/0x920
          SyS_ioctl+0x68/0xc0
          system_call+0x38/0xfc
      
      It used to be sufficient just to call pagefault_disable(), because that
      also disabled preemption. But the two were decoupled in commit 8222dbe2
      ("sched/preempt, mm/fault: Decouple preemption from the page fault
      logic") in mid 2015.
      
      So add the missing preempt_disable/enable(). We should also call
      disable_kernel_fp(): although it does nothing by default, there is a
      debug switch to make it active, and all enables should be paired with
      disables.
      
      Fixes: 6dd7a82c ("crypto: powerpc - Add POWER8 optimised crc32c")
      Cc: stable@vger.kernel.org # v4.8+
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      4749228f
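      The shape of the fix as a sketch around the vectorised inner loop (simplified; the driver's alignment handling is omitted and the pairing details are shown as assumptions):

        static u32 crc32c_vpmsum(u32 crc, const u8 *p, size_t len)
        {
            preempt_disable();          /* required before taking the Altivec unit */
            pagefault_disable();
            enable_kernel_altivec();

            crc = __crc32c_vpmsum(crc, p, len);

            disable_kernel_altivec();   /* pair the enable; a no-op unless the debug switch is set */
            pagefault_enable();
            preempt_enable();

            return crc;
        }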
    • sparc: remove unused wp_works_ok macro · 86e1066f
      Committed by Mathias Krause
      It has been unused for ages; it used to be required for ksyms.c back in the
      v1.1 days.
      Signed-off-by: Mathias Krause <minipli@googlemail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      86e1066f
    • sparc32: Export vac_cache_size to fix build error · 9d262d95
      Committed by Guenter Roeck
      sparc32:allmodconfig fails to build with the following error.
      
      ERROR: "vac_cache_size" [drivers/infiniband/sw/rxe/rdma_rxe.ko] undefined!
      
      Fixes: cb886455 ("infiniband: Fix alignment of mmap cookies ...")
      Cc: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Cc: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9d262d95
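      The fix itself is essentially a one-line export, roughly (the variable type is an assumption for the sketch):

        #include <linux/export.h>

        extern int vac_cache_size;      /* defined in the sparc32 SRMMU code */

        /* Make the cache-size variable visible to modules such as rdma_rxe. */
        EXPORT_SYMBOL(vac_cache_size);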
    • sparc64: Fix memory corruption when THP is enabled · 76811263
      Committed by Nitin Gupta
      The memory corruption was happening due to incorrect
      TLB/TSB flushing of hugepages.
      Reported-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      76811263
    • sparc64: Fix kernel panic due to erroneous #ifdef surrounding pmd_write() · 9ae34dbd
      Committed by Tom Hromatka
      This commit moves sparc64's prototype of pmd_write() outside
      of the CONFIG_TRANSPARENT_HUGEPAGE ifdef.
      
      In 2013, commit a7b9403f ("sparc64: Encode huge PMDs using PTE
      encoding.") exposed a path where pmd_write() could be called without
      CONFIG_TRANSPARENT_HUGEPAGE defined.  This can result in the panic below.
      
      The diff is awkward to read, but the changes are straightforward.
      pmd_write() was moved outside of #ifdef CONFIG_TRANSPARENT_HUGEPAGE.
      Also, __HAVE_ARCH_PMD_WRITE was defined.
      
      kernel BUG at include/asm-generic/pgtable.h:576!
                    \|/ ____ \|/
                    "@'/ .. \`@"
                    /_| \__/ |_\
                       \__U_/
      oracle_8114_cdb(8114): Kernel bad sw trap 5 [#1]
      CPU: 120 PID: 8114 Comm: oracle_8114_cdb Not tainted
      4.1.12-61.7.1.el6uek.rc1.sparc64 #1
      task: fff8400700a24d60 ti: fff8400700bc4000 task.ti: fff8400700bc4000
      TSTATE: 0000004411e01607 TPC: 00000000004609f8 TNPC: 00000000004609fc Y:
      00000005    Not tainted
      TPC: <gup_huge_pmd+0x198/0x1e0>
      g0: 000000000001c000 g1: 0000000000ef3954 g2: 0000000000000000 g3: 0000000000000001
      g4: fff8400700a24d60 g5: fff8001fa5c10000 g6: fff8400700bc4000 g7: 0000000000000720
      o0: 0000000000bc5058 o1: 0000000000000240 o2: 0000000000006000 o3: 0000000000001c00
      o4: 0000000000000000 o5: 0000048000080000 sp: fff8400700bc6ab1 ret_pc: 00000000004609f0
      RPC: <gup_huge_pmd+0x190/0x1e0>
      l0: fff8400700bc74fc l1: 0000000000020000 l2: 0000000000002000 l3: 0000000000000000
      l4: fff8001f93250950 l5: 000000000113f800 l6: 0000000000000004 l7: 0000000000000000
      i0: fff8400700ca46a0 i1: bd0000085e800453 i2: 000000026a0c4000 i3: 000000026a0c6000
      i4: 0000000000000001 i5: fff800070c958de8 i6: fff8400700bc6b61 i7: 0000000000460dd0
      I7: <gup_pud_range+0x170/0x1a0>
      Call Trace:
       [0000000000460dd0] gup_pud_range+0x170/0x1a0
       [0000000000460e84] get_user_pages_fast+0x84/0x120
       [00000000006f5a18] iov_iter_get_pages+0x98/0x240
       [00000000005fa744] do_direct_IO+0xf64/0x1e00
       [00000000005fbbc0] __blockdev_direct_IO+0x360/0x15a0
       [00000000101f74fc] ext4_ind_direct_IO+0xdc/0x400 [ext4]
       [00000000101af690] ext4_ext_direct_IO+0x1d0/0x2c0 [ext4]
       [00000000101af86c] ext4_direct_IO+0xec/0x220 [ext4]
       [0000000000553bd4] generic_file_read_iter+0x114/0x140
       [00000000005bdc2c] __vfs_read+0xac/0x100
       [00000000005bf254] vfs_read+0x54/0x100
       [00000000005bf368] SyS_pread64+0x68/0x80
      Signed-off-by: Tom Hromatka <tom.hromatka@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9ae34dbd
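      The shape of the header change, sketched rather than quoted: pmd_write() and __HAVE_ARCH_PMD_WRITE sit outside the CONFIG_TRANSPARENT_HUGEPAGE block so paths like gup_huge_pmd() can use them even when THP is disabled:

        #define __HAVE_ARCH_PMD_WRITE
        static inline unsigned long pmd_write(pmd_t pmd)
        {
            pte_t pte = __pte(pmd_val(pmd));    /* huge PMDs use the PTE encoding */

            return pte_write(pte);
        }

        #ifdef CONFIG_TRANSPARENT_HUGEPAGE
        /* ... the remaining huge-PMD helpers stay under the ifdef ... */
        #endif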
  11. 06 April 2017, 2 commits
  12. 05 April 2017, 10 commits
    • metag/usercopy: Add missing fixups · b884a190
      Committed by James Hogan
      The rapf copy loops in the Meta usercopy code are missing some extable
      entries for HTP cores with unaligned access checking enabled, where
      faults occur on the instruction immediately after the faulting access.
      
      Add the fixup labels and extable entries for these cases so that corner
      case user copy failures don't cause kernel crashes.
      
      Fixes: 373cd784 ("metag: Memory handling")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-metag@vger.kernel.org
      Cc: stable@vger.kernel.org
      b884a190
    • metag/usercopy: Fix src fixup in from user rapf loops · 2c0b1df8
      Committed by James Hogan
      The fixup code to rewind the source pointer in
      __asm_copy_from_user_{32,64}bit_rapf_loop() always rewound the source by
      a single unit (4 or 8 bytes), however this is insufficient if the fault
      didn't occur on the first load in the loop, as the source pointer will
      have been incremented but nothing will have been stored until all 4
      register [pairs] are loaded.
      
      Read the LSM_STEP field of TXSTATUS (which is already loaded into a
      register), a bit like the copy_to_user versions, to determine how many
      iterations of MGET[DL] have taken place, all of which need rewinding.
      
      Fixes: 373cd784 ("metag: Memory handling")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-metag@vger.kernel.org
      Cc: stable@vger.kernel.org
      2c0b1df8
    • metag/usercopy: Set flags before ADDZ · fd40eee1
      Committed by James Hogan
      The fixup code for the copy_to_user rapf loops reads TXStatus.LSM_STEP
      to decide how far to rewind the source pointer. There is a special case
      for the last execution of an MGETL/MGETD, since it leaves LSM_STEP=0
      even though the number of MGETLs/MGETDs attempted was 4. This uses ADDZ
      which is conditional upon the Z condition flag, but the AND instruction
      which masked the TXStatus.LSM_STEP field didn't set the condition flags
      based on the result.
      
      Fix that now by using ANDS which does set the flags, and also marking
      the condition codes as clobbered by the inline assembly.
      
      Fixes: 373cd784 ("metag: Memory handling")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-metag@vger.kernel.org
      Cc: stable@vger.kernel.org
      fd40eee1
    • metag/usercopy: Zero rest of buffer from copy_from_user · 563ddc10
      Committed by James Hogan
      Currently we try to zero the destination for a failed read from userland
      in fixup code in the usercopy.c macros. The rest of the destination
      buffer is then zeroed from __copy_user_zeroing(), which is used for both
      copy_from_user() and __copy_from_user().
      
      Unfortunately we fail to zero in the fixup code as D1Ar1 is set to 0
      before the fixup code entry labels, and __copy_from_user() shouldn't even
      be zeroing the rest of the buffer.
      
      Move the zeroing out into copy_from_user() and rename
      __copy_user_zeroing() to raw_copy_from_user() since it no longer does
      any zeroing. This also conveniently matches the name needed for
      RAW_COPY_USER support in a later patch.
      
      Fixes: 373cd784 ("metag: Memory handling")
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-metag@vger.kernel.org
      Cc: stable@vger.kernel.org
      563ddc10
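      The split described above, sketched in generic C (the real wrappers also perform access_ok() checks): raw_copy_from_user() does no zeroing, and only copy_from_user() clears whatever could not be copied:

        static inline unsigned long
        copy_from_user(void *to, const void __user *from, unsigned long n)
        {
            unsigned long uncopied = raw_copy_from_user(to, from, n);

            if (unlikely(uncopied))
                memset(to + (n - uncopied), 0, uncopied);   /* zero the tail left by the fault */

            return uncopied;
        }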
    • metag/usercopy: Add early abort to copy_to_user · fb8ea062
      Committed by James Hogan
      When copying to userland on Meta, if any faults are encountered,
      immediately abort the copy instead of continuing on and repeatedly
      faulting, and, worse, potentially copying further bytes successfully to
      subsequent valid pages.
      
      Fixes: 373cd784 ("metag: Memory handling")
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-metag@vger.kernel.org
      Cc: stable@vger.kernel.org
      fb8ea062
    • metag/usercopy: Fix alignment error checking · 22572119
      Committed by James Hogan
      Fix the error checking of the alignment adjustment code in
      raw_copy_from_user(), which mistakenly considers it safe to skip the
      error check when aligning the source buffer on a 2 or 4 byte boundary.
      
      If the destination buffer was unaligned it may have started to copy
      using byte or word accesses, which could well be at the start of a new
      (valid) source page. This would result in it appearing to have copied 1
      or 2 bytes at the end of the first (invalid) page rather than none at
      all.
      
      Fixes: 373cd784 ("metag: Memory handling")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-metag@vger.kernel.org
      Cc: stable@vger.kernel.org
      22572119
    • metag/usercopy: Drop unused macros · ef62a2d8
      Committed by James Hogan
      Metag's lib/usercopy.c has a bunch of copy_from_user macros for larger
      copies between 5 and 16 bytes which are completely unused. Before fixing
      the zeroing, let's drop these macros so there is less to fix.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: linux-metag@vger.kernel.org
      Cc: stable@vger.kernel.org
      ef62a2d8
    • powerpc/mm: Add missing global TLB invalidate if cxl is active · 88b1bf72
      Committed by Frederic Barrat
      Commit 4c6d9acc ("powerpc/mm: Add hooks for cxl") converted local
      TLB invalidates to global if the cxl driver is active. This is necessary
      because the CAPP snoops invalidations to forward them to the PSL on the
      cxl adapter. However one path was forgotten. native_flush_hash_range()
      still does local TLB invalidates, as found out the hard way recently.
      
      This patch fixes it by following the same logic as previously: if the
      cxl driver is active, the local TLB invalidates are 'upgraded' to
      global.
      
      Fixes: 4c6d9acc ("powerpc/mm: Add hooks for cxl")
      Cc: stable@vger.kernel.org # v3.18+
      Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      88b1bf72
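      A sketch of the 'upgrade' rule applied in native_flush_hash_range(), with the surrounding locals reduced to a hypothetical helper:

        #include <misc/cxl-base.h>

        /* A flush may only stay local when no cxl context is active, because the
         * CAPP on the cxl adapter has to snoop the invalidation. Simplified. */
        static bool hash_flush_can_stay_local(bool mm_is_local)
        {
            return mm_is_local && !cxl_ctx_in_use();
        }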
    • powerpc/64: Fix flush_(d|i)cache_range() called from modules · 8f5f525d
      Committed by Oliver O'Halloran
      When the kernel is compiled to use 64bit ABIv2 the _GLOBAL() macro does
      not include a global entry point. A function's global entry point is
      used when the function is called from a different TOC context and in the
      kernel this typically means a call from a module into the vmlinux (or
      vice-versa).
      
      There are a few exported asm functions declared with _GLOBAL() and
      calling them from a module will likely crash the kernel since any TOC
      relative load will yield garbage.
      
      flush_icache_range() and flush_dcache_range() are both exported to
      modules, and use the TOC, so must use _GLOBAL_TOC().
      
      Fixes: 721aeaa9 ("powerpc: Build little endian ppc64 kernel with ABIv2")
      Cc: stable@vger.kernel.org # v3.16+
      Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8f5f525d
    • x86/signals: Fix lower/upper bound reporting in compat siginfo · cfac6dfa
      Committed by Joerg Roedel
      Put the right values from the original siginfo into the
      userspace compat-siginfo.
      
      This fixes the 32-bit MPX "tabletest" testcase on 64-bit kernels.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: <stable@vger.kernel.org> # v4.8+
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Safonov <0x7f454c46@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: a4455082 ('x86/signals: Add missing signal_compat code for x86 features')
      Link: http://lkml.kernel.org/r/1491322501-5054-1-git-send-email-joro@8bytes.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cfac6dfa
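      A conceptual sketch of the corrected conversion (the helper name and the simplified field access are assumptions): the lower/upper bound values come from the source siginfo, not from anything else:

        /* Hypothetical helper; upstream this logic lives in the compat siginfo copy path. */
        static void fill_compat_addr_bnd(struct compat_siginfo *to, const siginfo_t *from)
        {
            to->si_lower = ptr_to_compat(from->si_lower);   /* lower bound from the original siginfo */
            to->si_upper = ptr_to_compat(from->si_upper);   /* upper bound likewise */
        }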
  13. 04 April 2017, 3 commits
    • ARM: OMAP2+: omap_device: Sync omap_device and pm_runtime after probe defer · 04abaf07
      Committed by Dave Gerlach
      Starting from commit 5de85b9d ("PM / runtime: Re-init runtime PM
      states at probe error and driver unbind"), the pm_runtime core now changes
      a device's runtime_status back to RPM_SUSPENDED after a probe defer.
      Certain OMAP devices make use of the "ti,no-idle-on-init" flag, which causes
      omap_device_enable to be called during the BUS_NOTIFY_ADD_DEVICE event
      during probe, along with pm_runtime_set_active.
      
      This call to pm_runtime_set_active typically will prevent a call to
      pm_runtime_get in a driver probe function from re-enabling the
      omap_device. However, in the case of a probe defer that happens before
      the driver probe function is able to run, such as a missing pinctrl
      states defer, pm_runtime_reinit will set the device as RPM_SUSPENDED and
      then once driver probe is actually able to run, pm_runtime_get will see
      the device as suspended and call through to the omap_device layer,
      attempting to enable the already enabled omap_device and causing errors
      like this:
      
      omap-gpmc 50000000.gpmc: omap_device: omap_device_enable() called from
      invalid state 1
      omap-gpmc 50000000.gpmc: use pm_runtime_put_sync_suspend() in driver?
      
      We can avoid this error by making sure the pm_runtime status of a device
      matches the omap_device state before a probe attempt. By extending the
      omap_device bus notifier to act on the BUS_NOTIFY_BIND_DRIVER event we
      can check if a device is enabled in omap_device but with a pm_runtime
      status of RPM_SUSPENDED and once again mark the device as RPM_ACTIVE to
      avoid a second incorrect call to omap_device_enable.
      
      Fixes: 5de85b9d ("PM / runtime: Re-init runtime PM states at probe error and driver unbind")
      Tested-by: Franklin S Cooper Jr. <fcooper@ti.com>
      Signed-off-by: Dave Gerlach <d-gerlach@ti.com>
      Signed-off-by: Tony Lindgren <tony@atomide.com>
      04abaf07
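      A simplified sketch of the notifier extension described above; the state-field names follow the existing omap_device code but are shown only as an illustration, not the exact diff:

        static int _omap_device_notifier_call(struct notifier_block *nb,
                                              unsigned long event, void *data)
        {
            struct device *dev = data;
            struct platform_device *pdev = to_platform_device(dev);
            struct omap_device *od;

            switch (event) {
            case BUS_NOTIFY_BIND_DRIVER:
                od = to_omap_device(pdev);
                /* omap_device already enabled (e.g. via "ti,no-idle-on-init") but
                 * runtime PM reset to suspended by an earlier probe defer:
                 * re-sync so the driver's pm_runtime_get() does not re-enable it. */
                if (od && od->_state == OMAP_DEVICE_STATE_ENABLED &&
                    pm_runtime_status_suspended(dev))
                    pm_runtime_set_active(dev);
                break;
            /* ... other events handled as before ... */
            }

            return NOTIFY_DONE;
        }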
    • KVM: nVMX: initialize PML fields in vmcs02 · 1fb883bb
      Committed by Ladi Prosek
      L2 was running with uninitialized PML fields which led to incomplete
      dirty bitmap logging. This manifested as all kinds of subtle erratic
      behavior of the nested guest.
      
      Fixes: 843e4330 ("KVM: VMX: Add PML support in VMX")
      Signed-off-by: Ladi Prosek <lprosek@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      1fb883bb
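      The gist of the fix as a sketch (upstream the hunk sits in prepare_vmcs02(); the wrapper below is hypothetical): point vmcs02 at L0's PML buffer and start with a full index, just as vmcs01 does:

        static void nested_vmx_init_pml(struct vcpu_vmx *vmx)
        {
            if (enable_pml) {
                vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
                vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
            }
        }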
    • KVM: nVMX: do not leak PML full vmexit to L1 · ab007cc9
      Committed by Ladi Prosek
      The PML feature is not exposed to guests so we should not be forwarding
      the vmexit either.
      
      This commit fixes BSOD 0x20001 (HYPERVISOR_ERROR) when running Hyper-V
      enabled Windows Server 2016 in L1 on hardware that supports PML.
      
      Fixes: 843e4330 ("KVM: VMX: Add PML support in VMX")
      Signed-off-by: Ladi Prosek <lprosek@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      ab007cc9
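      A heavily simplified sketch of the decision described above (upstream this is one case in nested_vmx_exit_handled()): because PML is never exposed to L1, a PML-full exit is always consumed by L0 instead of being reflected into the nested hypervisor:

        static bool nested_exit_should_be_reflected(u32 exit_reason)
        {
            switch (exit_reason) {
            case EXIT_REASON_PML_FULL:
                return false;   /* L1 knows nothing about PML; L0 handles it */
            /* ... other exit reasons keep their existing handling ... */
            default:
                return true;
            }
        }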