1. 19 May 2017, 1 commit
    • powerpc/mm: Fix virt_addr_valid() etc. on 64-bit hash · e41e53cd
      Authored by Michael Ellerman
      virt_addr_valid() is supposed to tell you if it's OK to call virt_to_page() on
      an address. What this means in practice is that it should only return true for
      addresses in the linear mapping which are backed by a valid PFN.
      
      We are failing to properly check that the address is in the linear mapping,
      because virt_to_pfn() will return a valid-looking PFN for more or less any
      address. That bug is actually caused by __pa(), which is used by virt_to_pfn().
      
      eg: __pa(0xc000000000010000) = 0x10000  # Good
          __pa(0xd000000000010000) = 0x10000  # Bad!
          __pa(0x0000000000010000) = 0x10000  # Bad!
      
      This started happening after commit bdbc29c1 ("powerpc: Work around gcc
      miscompilation of __pa() on 64-bit") (Aug 2013), where we changed the definition
      of __pa() to work around a GCC bug. Prior to that we subtracted PAGE_OFFSET from
      the value passed to __pa(), meaning __pa() of a 0xd or 0x0 address would give
      you something bogus back.
      
      Until we can verify if that GCC bug is no longer an issue, or come up with
      another solution, this commit does the minimal fix to make virt_addr_valid()
      work, by explicitly checking that the address is in the linear mapping region.
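
      A minimal sketch of the shape of that check (hedged: reconstructed from
      the description above, not copied from the patch):

        /* Only linear-mapping addresses (at or above PAGE_OFFSET, i.e. the
         * 0xc range on 64-bit hash) may be fed to virt_to_pfn(); everything
         * else is rejected before pfn_valid() can be fooled by __pa(). */
        #define virt_addr_valid(kaddr)                                  \
                (((unsigned long)(kaddr) >= PAGE_OFFSET) &&             \
                 pfn_valid(virt_to_pfn(kaddr)))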
      
      Fixes: bdbc29c1 ("powerpc: Work around gcc miscompilation of __pa() on 64-bit")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Tested-by: Breno Leitao <breno.leitao@gmail.com>
  2. 18 May 2017, 4 commits
    • sparc/ftrace: Fix ftrace graph time measurement · 48078d2d
      Authored by Liam R. Howlett
      The ftrace function_graph time measurements of a given function are not
      accurate compared with those recorded by ftrace using the function
      filters. This change pulls the x86_64 fix from commit 722b3c74
      ("ftrace/graph: Trace function entry before updating index") into the
      sparc-specific prepare_ftrace_return(), which stops ftrace from counting
      interrupted tasks in the time measurement (see the sketch after the
      table below).
      
      Example measurements for select_task_rq_fair running "hackbench 100
      process 1000":
      
                    |  tracing/trace_stat/function0  |  function_graph
       Before patch |  2.802 us                      |  4.255 us
       After patch  |  2.749 us                      |  3.094 us
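
      A hedged sketch of the corrected ordering in the sparc
      prepare_ftrace_return() (function names per ~v4.11 ftrace; the +8
      return-address offset and the exact argument list are assumptions):

        unsigned long prepare_ftrace_return(unsigned long parent,
                                            unsigned long self_addr,
                                            unsigned long frame_pointer)
        {
                unsigned long return_hooker = (unsigned long)&return_to_handler;
                struct ftrace_graph_ent trace;

                if (unlikely(atomic_read(&current->tracing_graph_pause)))
                        return parent + 8UL;

                trace.func = self_addr;
                trace.depth = current->curr_ret_stack + 1;

                /* Report function entry *before* touching the return stack,
                 * so a task that interrupts us is not charged to us. */
                if (!ftrace_graph_entry(&trace))
                        return parent + 8UL;

                if (ftrace_push_return_trace(parent, self_addr, &trace.depth,
                                             frame_pointer, NULL) == -EBUSY)
                        return parent + 8UL;

                return return_hooker;
        }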
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sparc: Fix -Wstringop-overflow warning · deba804c
      Authored by Orlando Arias
      Greetings,
      
      GCC 7 introduced the -Wstringop-overflow flag to detect buffer overflows
      in calls to string handling functions [1][2]. Due to the way
      ``empty_zero_page'' is declared in arch/sparc/include/setup.h, this
      causes a warning to trigger at compile time in the function mem_init(),
      which is subsequently converted to an error. The ensuing patch fixes
      this issue and aligns the declaration of empty_zero_page to that of
      other architectures (a before/after sketch follows the references
      below). Thank you.
      
      Cheers,
      Orlando.
      
      [1] https://gcc.gnu.org/ml/gcc-patches/2016-10/msg02308.html
      [2] https://gcc.gnu.org/gcc-7/changes.html
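
      A hedged before/after sketch of the declaration change (reconstructed
      from the description, not verbatim from the patch):

        /* Before (assumed original form): GCC 7 sees a single object, so
         * memset(..., PAGE_SIZE) in mem_init() looks like a guaranteed
         * overflow. */
        extern unsigned long empty_zero_page;

        /* After: declare the page-sized array it really is, matching other
         * architectures. */
        extern char empty_zero_page[PAGE_SIZE];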
      
      Signed-off-by: Orlando Arias <oarias@knights.ucf.edu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sparc64: Fix mapping of 64k pages with MAP_FIXED · b6c41cb0
      Authored by Nitin Gupta
      An incorrect huge page alignment check caused mmap failure for 64K
      pages when MAP_FIXED is used with an address not aligned to
      HPAGE_SIZE.
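
      A hedged sketch of the idea behind the fix, using the generic hugetlb
      helpers (the surrounding get_unmapped_area code is assumed):

        struct hstate *h = hstate_file(file);

        if (flags & MAP_FIXED) {
                /* Validate against the mapping's real huge page size (64K
                 * here), not a hard-coded mask that assumes the default
                 * huge page size. */
                if (addr & ~huge_page_mask(h))
                        return -EINVAL;
                return addr;
        }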
      
      Orabug: 25885991
      
      Fixes: dcd1912d ("sparc64: Add 64K page size support")
      Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • arm64/cpufeature: don't use mutex in bringup path · 63a1e1c9
      Authored by Mark Rutland
      Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
      must take the jump_label mutex.
      
      We call cpus_set_cap() in the secondary bringup path, from the idle
      thread where interrupts are disabled. Taking a mutex in this path "is a
      NONO" regardless of whether it's contended, and is something we must
      avoid. We didn't spot this until recently, as ___might_sleep() won't
      warn for this case until all CPUs have been brought up.
      
      This patch avoids taking the mutex in the secondary bringup path. The
      poking of static keys is deferred until enable_cpu_capabilities(), which
      runs in a suitable context on the boot CPU. To account for the static
      keys being set later, cpus_have_const_cap() is updated to use another
      static key to check whether the const cap keys have been initialised,
      falling back to the caps bitmap until this is the case.
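
      A hedged sketch of that fallback (the guard key's name is an assumption
      based on the description):

        /* Flipped on the boot CPU once the const cap keys have been set. */
        extern struct static_key_false arm64_const_caps_ready;

        static inline bool cpus_have_const_cap(int num)
        {
                if (static_branch_likely(&arm64_const_caps_ready))
                        return __cpus_have_const_cap(num); /* static keys */

                return cpus_have_cap(num);                 /* caps bitmap */
        }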
      
      This means that users of cpus_have_const_cap() should only gain a
      single additional NOP in the fast path once the const caps are
      initialised, but should always see the current cap value.
      
      The hyp code should never dereference the caps array, since the caps are
      initialized before we run the module initcall to initialise hyp. A check
      is added to the hyp init code to document this requirement.
      
      This change will sidestep a number of issues when the upcoming hotplug
      locking rework is merged.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Sewior <bigeasy@linutronix.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 17 May 2017, 1 commit
    • powerpc/mm: Fix crash in page table dump with huge pages · bfb9956a
      Authored by Michael Ellerman
      The page table dump code doesn't know about huge pages, so currently
      it crashes (or walks random memory, usually leading to a crash) if it
      finds a huge page. On Book3S we only see huge pages in the Linux page
      tables when we're using the P9 Radix MMU.
      
      Teaching the code to properly handle huge pages is a bit more involved,
      so for now just prevent the crash.
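
      A hedged sketch of the guard at the PMD level (the PUD level gets the
      same treatment; walker and helper names are assumed from the dump
      code's style):

        /* A huge (leaf) PMD has no PTE table below it: record it as a
         * terminal entry instead of walking whatever its payload happens
         * to point at. */
        if (pmd_none(*pmd) || pmd_huge(*pmd))
                note_page(st, addr, 3, pmd_val(*pmd));
        else
                walk_pte(st, pmd, addr);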
      
      Cc: stable@vger.kernel.org # v4.10+
      Fixes: 8eb07b18 ("powerpc/mm: Dump linux pagetables")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  4. 16 May 2017, 4 commits
  5. 15 May 2017, 2 commits
    • powerpc/tm: Fix FP and VMX register corruption · f48e91e8
      Authored by Michael Neuling
      In commit dc310669 ("powerpc: tm: Always use fp_state and vr_state
      to store live registers"), a section of code was removed that copied
      the current state to checkpointed state. That code should not have been
      removed.
      
      When an FP (Floating Point) unavailable exception is taken inside a transaction,
      we need to abort the transaction. This is because at the time of the
      tbegin, the FP state is bogus so the state stored in the checkpointed
      registers is incorrect. To fix this, we treclaim (to get the
      checkpointed GPRs) and then copy the thread_struct FP live state into
      the checkpointed state. We then trecheckpoint so that the FP state is
      correctly restored into the CPU.
      
      The copying of the FP registers from live to checkpointed is what was
      missing.
      
      This simplifies the logic slightly from the original patch.
      tm_reclaim_thread() will now always write the checkpointed FP
      state. Either the checkpointed FP state will be written as part of
      the actual treclaim (in tm.S), or it'll be a copy of the live
      state. Which one we use is based on MSR[FP] from userspace.
      
      Similarly for VMX.
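
      A hedged sketch of the restored copy in tm_reclaim_thread() (field
      names follow the ckfp/ckvr naming introduced by the commit being
      fixed, and are assumptions):

        /* If userspace had FP/VMX off at reclaim time, tm.S did not write
         * that part of the checkpointed state, so take it from the live
         * thread_struct copy instead. */
        if (!(thr->ckpt_regs.msr & MSR_FP))
                memcpy(&thr->ckfp_state, &thr->fp_state,
                       sizeof(struct thread_fp_state));
        if (!(thr->ckpt_regs.msr & MSR_VEC))
                memcpy(&thr->ckvr_state, &thr->vr_state,
                       sizeof(struct thread_vr_state));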
      
      Fixes: dc310669 ("powerpc: tm: Always use fp_state and vr_state to store live registers")
      Cc: stable@vger.kernel.org # 4.9+
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Reviewed-by: cyrilbur@gmail.com
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/modules: If mprofile-kernel is enabled add it to vermagic · 43e24e82
      Authored by Michael Ellerman
      On powerpc we can build the kernel with two different ABIs for mcount(), which
      is used by ftrace. Kernels built with one ABI do not know how to load modules
      built with the other ABI. The new style ABI is called "mprofile-kernel", for
      want of a better name.
      
      Currently if we build a module using the old style ABI, and the kernel with
      mprofile-kernel, when we load the module we'll get an oops like:
      
        # insmod autofs4-no-mprofile-kernel.ko
        ftrace-powerpc: Unexpected instruction f8810028 around bl _mcount
        ------------[ cut here ]------------
        WARNING: CPU: 6 PID: 3759 at ../kernel/trace/ftrace.c:2024 ftrace_bug+0x2b8/0x3c0
        CPU: 6 PID: 3759 Comm: insmod Not tainted 4.11.0-rc3-gcc-5.4.1-00017-g5a61ef74 #11
        ...
        NIP [c0000000001eaa48] ftrace_bug+0x2b8/0x3c0
        LR [c0000000001eaff8] ftrace_process_locs+0x4a8/0x590
        Call Trace:
          alloc_pages_current+0xc4/0x1d0 (unreliable)
          ftrace_process_locs+0x4a8/0x590
          load_module+0x1c8c/0x28f0
          SyS_finit_module+0x110/0x140
          system_call+0x38/0xfc
        ...
        ftrace failed to modify
        [<d000000002a31024>] 0xd000000002a31024
         actual:   35:65:00:48
      
      We can avoid this by including in the vermagic whether the kernel/module was
      built with mprofile-kernel. Which results in:
      
        # insmod autofs4-pg.ko
        autofs4: version magic
        '4.11.0-rc3-gcc-5.4.1-00017-g5a61ef74 SMP mod_unload modversions '
        should be
        '4.11.0-rc3-gcc-5.4.1-00017-g5a61ef74-dirty SMP mod_unload modversions mprofile-kernel'
        insmod: ERROR: could not insert module autofs4-pg.ko: Invalid module format
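
      The mechanism itself is a one-liner in the arch module header; a hedged
      sketch (exact header location assumed):

        /* arch/powerpc/include/asm/module.h (assumed): fold the mcount ABI
         * into vermagic so a mismatched module is refused at load time
         * instead of oopsing in ftrace. */
        #ifdef CONFIG_MPROFILE_KERNEL
        #define MODULE_ARCH_VERMAGIC "mprofile-kernel"
        #endif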
      
      Fixes: 8c50b72a ("powerpc/ftrace: Add Kconfig & Make glue for mprofile-kernel")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Acked-by: Jessica Yu <jeyu@redhat.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  6. 13 May 2017, 1 commit
  7. 12 May 2017, 1 commit
    • bpf, arm64: fix faulty emission of map access in tail calls · d8b54110
      Authored by Daniel Borkmann
      Shubham was recently asking on netdev why in arm64 JIT we don't multiply
      the index for accessing the tail call map by 8. That led me into testing
      out arm64 JIT wrt tail calls and it turned out I got a NULL pointer
      dereference on the tail call.
      
      The buggy access is at:
      
        prog = array->ptrs[index];
        if (prog == NULL)
            goto out;
      
        [...]
        00000060:  d2800e0a  mov x10, #0x70 // #112
        00000064:  f86a682a  ldr x10, [x1,x10]
        00000068:  f862694b  ldr x11, [x10,x2]
        0000006c:  b40000ab  cbz x11, 0x00000080
        [...]
      
      The code triggering the crash is f862694b. x1 at the time contains the
      address of the bpf array, x10 offsetof(struct bpf_array, ptrs). Meaning,
      above we load the pointer to the program at map slot 0 into x10. x10
      can then be NULL if the slot is not occupied, which we later on try to
      access with a user given offset in x2 that is the map index.
      
      Fix this by emitting the following instead:
      
        [...]
        00000060:  d2800e0a  mov x10, #0x70 // #112
        00000064:  8b0a002a  add x10, x1, x10
        00000068:  d37df04b  lsl x11, x2, #3
        0000006c:  f86b694b  ldr x11, [x10,x11]
        00000070:  b40000ab  cbz x11, 0x00000084
        [...]
      
      This basically adds the offset to ptrs to the base address of the bpf
      array we got and we later on access the map with an index * 8 offset
      relative to that. The tail call map itself is basically one large area
      with meta data at the head followed by the array of prog pointers.
      This makes tail calls work again, tested on Cavium ThunderX ARMv8.
      
      Fixes: ddb55992 ("arm64: bpf: implement bpf_tail_call() helper")
      Reported-by: Shubham Bansal <illusionist.neo@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 11 May 2017, 6 commits
  9. 10 May 2017, 18 commits
    • uapi: export all arch specifics directories · 61562f98
      Authored by Nicolas Dichtel
      This patch removes the need for subdir-y. Now all files/directories under
      arch/<arch>/include/uapi/ are exported.

      The only change for userland is the layout produced by the command 'make
      headers_install_all': directories asm-<arch> are replaced by arch-<arch>/.
      These new directories contain all files/directories of the specified arch.

      Note that only cris and tile have more directories than just asm:
       - arch-v[10|32] for cris;
       - arch for tile.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • uapi: export all headers under uapi directories · fcc8487d
      Authored by Nicolas Dichtel
      Regularly, when a new header is created in include/uapi/, the developer
      forgets to add it to the corresponding Kbuild file. This error is usually
      detected after the release is out.

      In fact, all headers under uapi directories should be exported, so it's
      pointless to maintain an exhaustive list.
      
      After this patch, the following files, which were not exported, are now
      exported (with make headers_install_all):
      asm-arc/kvm_para.h
      asm-arc/ucontext.h
      asm-blackfin/shmparam.h
      asm-blackfin/ucontext.h
      asm-c6x/shmparam.h
      asm-c6x/ucontext.h
      asm-cris/kvm_para.h
      asm-h8300/shmparam.h
      asm-h8300/ucontext.h
      asm-hexagon/shmparam.h
      asm-m32r/kvm_para.h
      asm-m68k/kvm_para.h
      asm-m68k/shmparam.h
      asm-metag/kvm_para.h
      asm-metag/shmparam.h
      asm-metag/ucontext.h
      asm-mips/hwcap.h
      asm-mips/reg.h
      asm-mips/ucontext.h
      asm-nios2/kvm_para.h
      asm-nios2/ucontext.h
      asm-openrisc/shmparam.h
      asm-parisc/kvm_para.h
      asm-powerpc/perf_regs.h
      asm-sh/kvm_para.h
      asm-sh/ucontext.h
      asm-tile/shmparam.h
      asm-unicore32/shmparam.h
      asm-unicore32/ucontext.h
      asm-x86/hwcap2.h
      asm-xtensa/kvm_para.h
      drm/armada_drm.h
      drm/etnaviv_drm.h
      drm/vgem_drm.h
      linux/aspeed-lpc-ctrl.h
      linux/auto_dev-ioctl.h
      linux/bcache.h
      linux/btrfs_tree.h
      linux/can/vxcan.h
      linux/cifs/cifs_mount.h
      linux/coresight-stm.h
      linux/cryptouser.h
      linux/fsmap.h
      linux/genwqe/genwqe_card.h
      linux/hash_info.h
      linux/kcm.h
      linux/kcov.h
      linux/kfd_ioctl.h
      linux/lightnvm.h
      linux/module.h
      linux/nbd-netlink.h
      linux/nilfs2_api.h
      linux/nilfs2_ondisk.h
      linux/nsfs.h
      linux/pr.h
      linux/qrtr.h
      linux/rpmsg.h
      linux/sched/types.h
      linux/sed-opal.h
      linux/smc.h
      linux/smc_diag.h
      linux/stm.h
      linux/switchtec_ioctl.h
      linux/vfio_ccw.h
      linux/wil6210_uapi.h
      rdma/bnxt_re-abi.h
      
      Note that I have removed from this list the files which are generated in
      every exported directory (like .install or .install.cmd).
      
      Thanks to Julien Floret <julien.floret@6wind.com> for the tip to get all
      subdirs with a pure makefile command.
      
      For the record, note that exported files for asm directories are a mix of
      files listed by:
       - include/uapi/asm-generic/Kbuild.asm;
       - arch/<arch>/include/uapi/asm/Kbuild;
       - arch/<arch>/include/asm/Kbuild.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
      Acked-by: Mark Salter <msalter@redhat.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • x86: stop exporting msr-index.h to userland · 25dc1d6c
      Authored by Nicolas Dichtel
      Even if this file was not in an uapi directory, it was exported because
      it was listed in the Kbuild file.
      
      Fixes: b72e7464 ("x86/uapi: Do not export <asm/msr-index.h> as part of the user API headers")
      Suggested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • nios2: put setup.h in uapi · 4f4ddad3
      Authored by Nicolas Dichtel
      This header file is exported, but from a userland point of view it's
      just a wrapper around asm-generic/setup.h.
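
      As described, the exported file is nothing but a redirect; a hedged
      sketch of its entire content (path assumed):

        /* arch/nios2/include/uapi/asm/setup.h: a pure wrapper */
        #include <asm-generic/setup.h>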
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • h8300: put bitsperlong.h in uapi · 37835671
      Authored by Nicolas Dichtel
      This header file is exported, thus move it to uapi.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • sparc64: fix fault handling in NGbzero.S and GENbzero.S · 3c7f6221
      Authored by Dave Aldridge
      When any of the functions contained in NGbzero.S and GENbzero.S
      vector through *bzero_from_clear_user, we may end up taking a
      fault when executing one of the store alternate address space
      instructions. If this happens, the exception handler does not
      restore the %asi register.
      
      This commit fixes the issue by introducing a new exception
      handler that ensures the %asi register is restored when
      a fault is handled.
      
      Orabug: 25577560
      Signed-off-by: Dave Aldridge <david.j.aldridge@oracle.com>
      Reviewed-by: Rob Gardner <rob.gardner@oracle.com>
      Reviewed-by: Babu Moger <babu.moger@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sparc: use memdup_user_nul in sun4m LED driver · aed74ea0
      Authored by Geliang Tang
      Use memdup_user_nul() helper instead of open-coding to simplify the code.
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • x86, pmem: Fix cache flushing for iovec write < 8 bytes · 8376efd3
      Authored by Ben Hutchings
      Commit 11e63f6d added cache flushing for unaligned writes from an
      iovec, covering the first and last cache line of a >= 8 byte write and
      the first cache line of a < 8 byte write.  But an unaligned write of
      2-7 bytes can still cover two cache lines, so make sure we flush both
      in that case.
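
      A hedged, generic sketch of the rule the fix enforces: flush every
      cache line the write touches, deriving the last line from the write's
      end rather than assuming a short write fits in one line (clwb() and
      L1_CACHE_BYTES are the usual x86 kernel names; the helper itself is
      illustrative, not the patch's code):

        /* Flush all cache lines spanned by a write of len bytes at addr.
         * A 2-7 byte write can straddle a line boundary, so compute the
         * last line from addr + len - 1. */
        static void flush_written_lines(void *addr, size_t len)
        {
                unsigned long start, end, line;

                start = (unsigned long)addr & ~(L1_CACHE_BYTES - 1);
                end = ((unsigned long)addr + len - 1) & ~(L1_CACHE_BYTES - 1);

                for (line = start; line <= end; line += L1_CACHE_BYTES)
                        clwb((void *)line);
        }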
      
      Cc: <stable@vger.kernel.org>
      Fixes: 11e63f6d ("x86, pmem: fix broken __copy_user_nocache ...")
      Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • arm64: uaccess: suppress spurious clang warning · d135b8b5
      Authored by Mark Rutland
      Clang tries to warn when there's a mismatch between an operand's size,
      and the size of the register it is held in, as this may indicate a bug.
      Specifically, clang warns when the operand's type is less than 64 bits
      wide, and the register is used unqualified (i.e. %N rather than %xN or
      %wN).
      
      Unfortunately clang can generate these warnings for unreachable code.
      For example, for code like:
      
      do {                                             \
              typeof(*(ptr)) __v = (v);                \
              switch (sizeof(*(ptr))) {                \
              case 1:                                  \
                      /* assume __v is 1 byte wide */  \
                      asm ("{op}b %w0" : : "r" (__v)); \
                      break;                           \
              case 8:                                  \
                      /* assume __v is 8 bytes wide */ \
                      asm ("{op} %0" : : "r" (__v));   \
                      break;                           \
              }                                        \
      } while (0)
      
      ... if op() were passed a char value and pointer to char, clang may
      produce a warning for the unreachable case where sizeof(*(ptr)) is 8.
      
      For the same reasons, clang produces warnings when __put_user_err() is
      used for types that are less than 64 bits wide.
      
      We could avoid this with a cast to a fixed-width type in each of the
      cases. However, GCC will then warn that pointer types are being cast to
      mismatched integer sizes (in unreachable paths).
      
      Another option would be to use the same union trickery as we do for
      __smp_store_release() and __smp_load_acquire(), but this is fairly
      invasive.
      
      Instead, this patch suppresses the clang warning by using an x modifier
      in the assembly for the 8 byte case of __put_user_err(). No additional
      work is necessary as the value has been cast to typeof(*(ptr)), so the
      compiler will have performed any necessary extension for the reachable
      case.
      
      For consistency, __get_user_err() is also updated to use the x modifier
      for its 8 byte case.
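
      A minimal standalone illustration of the x modifier (not the kernel's
      actual __put_user_err body):

        /* With an explicit %x1, clang sees a 64-bit register used for a
         * 64-bit store even when it cannot prove which switch arm is
         * reachable. */
        static inline void store64_example(unsigned long *p, unsigned long v)
        {
                asm volatile("str %x1, %0" : "=Q" (*p) : "r" (v));
        }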
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: atomic_lse: match asm register sizes · 8997c934
      Authored by Mark Rutland
      The LSE atomic code uses asm register variables to ensure that
      parameters are allocated in specific registers. In the majority of cases
      we specifically ask for an x register when using 64-bit values, but in a
      couple of cases we use a w register for a 64-bit value.
      
      For asm register variables, the compiler only cares about the register
      index, with wN and xN having the same meaning. The compiler determines
      the register size to use based on the type of the variable. Thus, this
      inconsistency is merely confusing, and not harmful to code generation.
      
      For consistency, this patch updates those cases to use the x register
      alias. There should be no functional change as a result of this patch.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: armv8_deprecated: ensure extension of addr · 55de49f9
      Authored by Mark Rutland
      Our compat swp emulation holds the compat user address in an unsigned
      int, which it passes to __user_swpX_asm(). When a 32-bit value is passed
      in a register, the upper 32 bits of the register are unknown, and we
      must extend the value to 64 bits before we can use it as a base address.
      
      This patch casts the address to unsigned long to ensure it has been
      suitably extended, avoiding the potential issue, and silencing a related
      warning from clang.
      
      Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
      Cc: <stable@vger.kernel.org> # 3.19.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: uaccess: ensure extension of access_ok() addr · a06040d7
      Authored by Mark Rutland
      Our access_ok() simply hands its arguments over to __range_ok(), which
      implicitly assumes that the addr parameter is 64 bits wide. This isn't
      necessarily true for compat code, which might pass down a 32-bit address
      parameter.

      In these cases, we don't have a guarantee that the address has been zero
      extended to 64 bits, and the upper bits of the register may contain
      unknown values, potentially resulting in a spurious failure.
      
      Avoid this by explicitly casting the addr parameter to an unsigned long
      (as is done on other architectures), ensuring that the parameter is
      widened appropriately.
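
      A hedged sketch of the shape of the fix (the real macro may also carry
      a sparse __force annotation):

        /* Explicitly widen addr so a compat 32-bit value cannot leave
         * stale bits in the upper half of the register before __range_ok()
         * compares it. */
        #define access_ok(type, addr, size) \
                __range_ok((unsigned long)(addr), size)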
      
      Fixes: 0aea86a2 ("arm64: User access library functions")
      Cc: <stable@vger.kernel.org> # 3.7.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ensure extension of smp_store_release value · 994870be
      Authored by Mark Rutland
      When an inline assembly operand's type is narrower than the register it
      is allocated to, the least significant bits of the register (up to the
      operand type's width) are valid, and any other bits are permitted to
      contain any arbitrary value. This aligns with the AAPCS64 parameter
      passing rules.
      
      Our __smp_store_release() implementation does not account for this, and
      implicitly assumes that operands have been zero-extended to the width of
      the type being stored to. Thus, we may store unknown values to memory
      when the value type is narrower than the pointer type (e.g. when storing
      a char to a long).
      
      This patch fixes the issue by casting the value operand to the same
      width as the pointer operand in all cases, which ensures that the value
      is zero-extended as we expect. We use the same union trickery as
      __smp_load_acquire and {READ,WRITE}_ONCE() to avoid GCC complaining that
      pointers are potentially cast to narrower width integers in unreachable
      paths.
      
      A whitespace issue at the top of __smp_store_release() is also
      corrected.
      
      No changes are necessary for __smp_load_acquire(). Load instructions
      implicitly clear any upper bits of the register, and the compiler will
      only consider the least significant bits of the register as valid
      regardless.
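
      A hedged sketch of the union cast described above (the asm body is
      elided; the real macro selects stlrb/stlrh/stlr by operand size):

        #define __smp_store_release(p, v)                                  \
        do {                                                               \
                union { typeof(*p) __val; char __c[1]; } __u =             \
                        { .__val = (typeof(*p))(v) }; /* widened to *p */  \
                /* ... size-switched stlr asm, storing from __u.__c ... */ \
                (void)__u;                                                 \
        } while (0)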
      
      Fixes: 47933ad4 ("arch: Introduce smp_load_acquire(), smp_store_release()")
      Fixes: 878a84d5 ("arm64: add missing data types in smp_load_acquire/smp_store_release")
      Cc: <stable@vger.kernel.org> # 3.14.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: xchg: hazard against entire exchange variable · fee960be
      Authored by Mark Rutland
      The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard
      against other accesses to the memory location being exchanged. However,
      the pointer passed to the constraint is a u8 pointer, and thus the
      hazard only applies to the first byte of the location.
      
      GCC can take advantage of this, assuming that other portions of the
      location are unchanged, as demonstrated with the following test case:
      
      union u {
      	unsigned long l;
      	unsigned int i[2];
      };
      
      unsigned long update_char_hazard(union u *u)
      {
      	unsigned int a, b;
      
      	a = u->i[1];
      	asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL));
      	b = u->i[1];
      
      	return a ^ b;
      }
      
      unsigned long update_long_hazard(union u *u)
      {
      	unsigned int a, b;
      
      	a = u->i[1];
      	asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL));
      	b = u->i[1];
      
      	return a ^ b;
      }
      
      The linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when
      using -O2 or above:
      
      0000000000000000 <update_char_hazard>:
         0:	d2800001 	mov	x1, #0x0                   	// #0
         4:	f9000001 	str	x1, [x0]
         8:	d2800000 	mov	x0, #0x0                   	// #0
         c:	d65f03c0 	ret
      
      0000000000000010 <update_long_hazard>:
        10:	b9400401 	ldr	w1, [x0,#4]
        14:	d2800002 	mov	x2, #0x0                   	// #0
        18:	f9000002 	str	x2, [x0]
        1c:	b9400400 	ldr	w0, [x0,#4]
        20:	4a000020 	eor	w0, w1, w0
        24:	d65f03c0 	ret
      
      This patch fixes the issue by passing an unsigned long pointer into the
      +Q constraint, as we do for our cmpxchg code. This may hazard against
      more than is necessary, but this is better than missing a necessary
      hazard.
      
      Fixes: 305d454a ("arm64: atomics: implement native {relaxed, acquire, release} atomics")
      Cc: <stable@vger.kernel.org> # 4.4.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: entry: improve data abort handling of tagged pointers · 276e9327
      Authored by Kristina Martsenko
      When handling a data abort from EL0, we currently zero the top byte of
      the faulting address, as we assume the address is a TTBR0 address, which
      may contain a non-zero address tag. However, the address may be a TTBR1
      address, in which case we should not zero the top byte. This patch fixes
      that. The effect is that the full TTBR1 address is passed to the task's
      signal handler (or printed out in the kernel log).
      
      When handling a data abort from EL1, we leave the faulting address
      intact, as we assume it's either a TTBR1 address or a TTBR0 address with
      tag 0x00. This is true as far as I'm aware; we don't seem to access a
      tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to
      forget about address tags, and code added in the future may not always
      remember to remove tags from addresses before accessing them. So add tag
      handling to the EL1 data abort handler as well. This also makes it
      consistent with the EL0 data abort handler.
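
      This fix and the two below all strip the tag the same way; a hedged
      C-level sketch (the entry.S change does the equivalent in assembly;
      sign_extend64() is the generic helper from linux/bitops.h):

        /* Strip a TBI address tag by sign-extending from bit 55: TTBR0
         * addresses (bit 55 == 0) get the top byte zeroed, TTBR1 addresses
         * (bit 55 == 1) get it restored to 0xff, so both come back
         * canonical. */
        #define untagged_addr(addr) sign_extend64(addr, 55)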
      
      Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
      Cc: <stable@vger.kernel.org> # 3.12.x-
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: hw_breakpoint: fix watchpoint matching for tagged pointers · 7dcd9dd8
      Authored by Kristina Martsenko
      When we take a watchpoint exception, the address that triggered the
      watchpoint is found in FAR_EL1. We compare it to the address of each
      configured watchpoint to see which one was hit.
      
      The configured watchpoint addresses are untagged, while the address in
      FAR_EL1 will have an address tag if the data access was done using a
      tagged address. The tag needs to be removed to compare the address to
      the watchpoints.
      
      Currently we don't remove it, and as a result can report the wrong
      watchpoint as being hit (specifically, always either the highest TTBR0
      watchpoint or lowest TTBR1 watchpoint). This patch removes the tag.
      
      Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
      Cc: <stable@vger.kernel.org> # 3.12.x-
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: traps: fix userspace cache maintenance emulation on a tagged pointer · 81cddd65
      Authored by Kristina Martsenko
      When we emulate userspace cache maintenance in the kernel, we can
      currently send the task a SIGSEGV even though the maintenance was done
      on a valid address. This happens if the address has a non-zero address
      tag, and happens to not be mapped in.
      
      When we get the address from a user register, we don't currently remove
      the address tag before performing cache maintenance on it. If the
      maintenance faults, we end up in either __do_page_fault, where find_vma
      can't find the VMA if the address has a tag, or in do_translation_fault,
      where the tagged address will appear to be above TASK_SIZE. In both
      cases, the address is not mapped in, and the task is sent a SIGSEGV.
      
      This patch removes the tag from the address before using it. With this
      patch, the fault is handled correctly, the address gets mapped in, and
      the cache maintenance succeeds.
      
      As a second bug, if cache maintenance (correctly) fails on an invalid
      tagged address, the address gets passed into arm64_notify_segfault,
      where find_vma fails to find the VMA due to the tag, and the wrong
      si_code may be sent as part of the siginfo_t of the segfault. With this
      patch, the correct si_code is sent.
      
      Fixes: 7dd01aef ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
      Cc: <stable@vger.kernel.org> # 4.8.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  10. 09 May 2017, 2 commits