1. 21 Jul 2015 (5 commits)
  2. 10 Jul 2015 (1 commit)
    • arm64: entry32: remove pointless register assignment · ad2daa85
      Committed by Mark Rutland
      We currently set x27 in compat_sys_sigreturn_wrapper and
      compat_sys_rt_sigreturn_wrapper, similarly to what we do with r8/why on
      32-bit ARM, in an attempt to prevent sigreturns from being restarted.
      
      However, on arm64 we have always used pt_regs::syscallno for syscall
      restarting (for both native and compat tasks), and x27 is never
      inspected again before being overwritten in kernel_exit.
      
      This patch removes the pointless register assignments.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
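      The restart bookkeeping referred to above lives in pt_regs. A minimal
      C sketch of the idea, assuming a simplified pt_regs layout and a
      hypothetical helper name (mark_no_restart is not a kernel function):

          /* sketch: arm64 decides syscall restart from pt_regs::syscallno,
           * so a scratch register written on the sigreturn path was never
           * read before kernel_exit clobbered it. */
          struct pt_regs {
                  unsigned long regs[31];  /* x0..x30 */
                  long syscallno;          /* -1: no syscall in progress */
          };

          static void mark_no_restart(struct pt_regs *regs)
          {
                  /* sigreturn must not be restarted */
                  regs->syscallno = -1;
          }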
  3. 09 Jul 2015 (4 commits)
  4. 08 Jul 2015 (1 commit)
  5. 07 Jul 2015 (4 commits)
  6. 04 Jul 2015 (1 commit)
  7. 03 Jul 2015 (2 commits)
  8. 02 Jul 2015 (1 commit)
  9. 01 Jul 2015 (4 commits)
  10. 27 Jun 2015 (1 commit)
  11. 26 Jun 2015 (2 commits)
  12. 25 Jun 2015 (5 commits)
    • arm64: bpf: fix out-of-bounds read in bpf2a64_offset() · 8eee539d
      Committed by Xi Wang
      Problems occur when bpf_to or bpf_from has value prog->len - 1 (e.g.,
      "Very long jump backwards" in test_bpf where the last instruction is a
      jump): since ctx->offset has length prog->len, ctx->offset[bpf_to + 1]
      or ctx->offset[bpf_from + 1] will cause an out-of-bounds read, leading
      to a bogus jump offset and kernel panic.
      
      This patch moves updating ctx->offset to after calling build_insn(),
      and changes indexing to use bpf_to and bpf_from without + 1.
      
      Fixes: e54bcde3 ("arm64: eBPF JIT compiler")
      Cc: <stable@vger.kernel.org> # 3.18+
      Cc: Zi Shen Lim <zlim.lnx@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
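      A hedged C sketch of the indexing change; the bare offset array and
      the _sketch suffix are illustrative stand-ins for the real jit_ctx:

          /* ctx->offset has prog->len entries.  Offsets used to be recorded
           * before emitting each instruction, forcing "+ 1" indexing, which
           * read past the end for the last instruction (i == prog->len - 1). */
          static int bpf2a64_offset_sketch(int bpf_to, int bpf_from,
                                           const int *offset)
          {
                  /* offsets now recorded after build_insn(): no "+ 1" */
                  int to = offset[bpf_to];
                  int from = offset[bpf_from] - 1; /* -1: the branch insn */

                  return to - from;
          }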
    • ARM64: smp: Fix suspicious RCU usage with ipi tracepoints · be081d9b
      Committed by Stephen Boyd
      John Stultz reported an RCU splat on ARM with ipi trace events
      enabled. It looks like the same problem exists on ARM64.
      
      At this point in the IPI handling path we haven't called
      irq_enter() yet, so RCU doesn't know that we're about to exit
      idle and properly warns that we're using RCU from an idle CPU.
      Use trace_ipi_entry_rcuidle() instead of trace_ipi_entry() so
      that RCU is informed about our exit from idle.
      
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: <stable@vger.kernel.org> # 3.17+
      Fixes: 45ed695a ("ARM64: add IPI tracepoints")
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
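      A sketch of the pattern, assuming the handler shape matches the
      description above (NR_IPI and ipi_types exist in the arm64 smp code;
      the body here is abridged):

          void handle_IPI_sketch(int ipinr, struct pt_regs *regs)
          {
                  /* irq_enter() has not run yet, so RCU may still see this
                   * CPU as idle; the _rcuidle variant informs RCU first */
                  if ((unsigned)ipinr < NR_IPI)
                          trace_ipi_entry_rcuidle(ipi_types[ipinr]);

                  /* ... irq_enter(), dispatch on ipinr, irq_exit() ... */

                  if ((unsigned)ipinr < NR_IPI)
                          trace_ipi_exit_rcuidle(ipi_types[ipinr]);
          }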
    • mm/hugetlb: reduce arch dependent code about hugetlb_prefault_arch_hook · a67a31fa
      Committed by Zhang Zhen
      Currently we have many duplicates in definitions of
      hugetlb_prefault_arch_hook.  In all architectures this function is empty.
      Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: new mm hook framework · 2ae416b1
      Committed by Laurent Dufour
      CRIU recreates the process memory layout by remapping the checkpointee's
      memory areas on top of the current process (criu).  This includes
      remapping the vDSO to the place it occupied at checkpoint time.
      
      However, some architectures such as powerpc keep a reference to the vDSO
      base address in order to build the signal return stack frame that calls
      the vDSO sigreturn service.  Once the vDSO has been moved, this reference
      is no longer valid and the signal frames built afterwards are unusable.
      
      This patch series introduces a new mm hook framework, and a new
      arch_remap hook which is called when mremap is done while the mm lock is
      still held.  The next patch adds vDSO remap and unmap tracking to the
      powerpc architecture.
      
      This patch (of 3):
      
      This patch introduces a new set of header files to manage mm hooks:
      - a per-architecture empty header file (arch/x/include/asm/mm-arch-hooks.h)
      - a generic header (include/linux/mm-arch-hooks.h)
      
      An architecture that needs to override a hook has to redefine it in its
      own header file, while architectures that don't have nothing to do.
      
      The default hooks are defined in the generic header and are used when
      the architecture does not define its own.
      
      As a next step, the mm hooks defined in include/asm-generic/mm_hooks.h
      should be moved here.
      Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Suggested-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
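      The override mechanism is the usual define-guard pattern. A sketch of
      what the generic header provides, per the description above (the
      arch_remap signature is simplified, not guaranteed verbatim):

          /* include/linux/mm-arch-hooks.h (sketch) */
          #include <asm/mm-arch-hooks.h>  /* per-arch file, empty by default */

          #ifndef arch_remap
          /* default no-op, used when the architecture defines nothing */
          static inline void arch_remap(struct mm_struct *mm,
                                        unsigned long old_start,
                                        unsigned long old_end,
                                        unsigned long new_start,
                                        unsigned long new_end)
          {
          }
          #define arch_remap arch_remap
          #endif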
    • mm/hugetlb: reduce arch dependent code about huge_pmd_unshare · e81f2d22
      Committed by Zhang Zhen
      Currently we have many duplicates in definitions of huge_pmd_unshare.  In
      all architectures this function just returns 0 when
      CONFIG_ARCH_WANT_HUGE_PMD_SHARE is N.
      
      This patch puts the default implementation in mm/hugetlb.c and lets these
      architectures use the common code.
      Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: James Yang <James.Yang@freescale.com>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
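      A sketch of the consolidation, assuming the era-appropriate
      three-argument signature: mm/hugetlb.c gains the default so each
      architecture can drop its private "return 0" copy:

          /* mm/hugetlb.c (sketch) */
          #ifndef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
          int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
                               pte_t *ptep)
          {
                  return 0;  /* the stub every arch previously duplicated */
          }
          #endif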
  13. 19 Jun 2015 (4 commits)
  14. 17 Jun 2015 (5 commits)
    • arm64: compat: print compat_sp instead of sp · 4e2ee96a
      Committed by Vladimir Murzin
      We check against compat_sp, but print out arm64's sp - fix it.
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
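      A sketch of the mismatch (compat_sp is a real pt_regs accessor on
      arm64; the surrounding code is simplified, not the file verbatim):

          /* when dumping a compat (AArch32) task, report the compat SP */
          unsigned long sp;

          if (compat_user_mode(regs))
                  sp = regs->compat_sp;  /* was wrongly printed as regs->sp */
          else
                  sp = regs->sp;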
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Committed by Dave P Martin
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
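      A hedged sketch of the loop in free_unused_memmap(), heavily
      simplified: 'start' may already have been rounded down for SPARSEMEM,
      so the block end must be derived from the memblock itself:

          for_each_memblock(memory, reg) {
                  unsigned long start = __phys_to_pfn(reg->base);

          #ifdef CONFIG_SPARSEMEM
                  /* round down so whole memmap sections can be freed */
                  start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
          #endif
                  if (prev_end && prev_end < start)
                          free_memmap(prev_end, start);

                  /* was ALIGN(start + __phys_to_pfn(reg->size), ...),
                   * which is wrong once 'start' was rounded down above */
                  prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
                                   MAX_ORDER_NR_PAGES);
          }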
    • arm64: entry: fix context tracking for el0_sp_pc · 46b0567c
      Committed by Mark Rutland
      Commit 6c81fe79 ("arm64: enable context tracking") did not
      update el0_sp_pc to use ct_user_exit, but this appears to have been
      unintentional. In commit 6ab6463a ("arm64: adjust el0_sync so
      that a function can be called") we made x0 available, and in the return
      to userspace we call ct_user_enter in the kernel_exit macro.
      
      Due to this, we currently don't correctly inform RCU of the user->kernel
      transition, and may erroneously account for time spent in the kernel as
      if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING
      is enabled.
      
      As we do record the kernel->user transition, a userspace application
      making accesses from an unaligned stack pointer can demonstrate the
      imbalance, provoking the following warning:
      
      ------------[ cut here ]------------
      WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4()
      Modules linked in:
      CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8
      Hardware name: ARM Juno development board (r0) (DT)
      Call trace:
      [<ffffffc000089914>] dump_backtrace+0x0/0x124
      [<ffffffc000089a48>] show_stack+0x10/0x1c
      [<ffffffc0005b3cbc>] dump_stack+0x84/0xc8
      [<ffffffc0000b3214>] warn_slowpath_common+0x98/0xd0
      [<ffffffc0000b330c>] warn_slowpath_null+0x14/0x20
      [<ffffffc00013ada4>] context_tracking_enter+0xd4/0xe4
      [<ffffffc0005b534c>] preempt_schedule_irq+0xd4/0x114
      [<ffffffc00008561c>] el1_preempt+0x4/0x28
      [<ffffffc0001b8040>] exit_files+0x38/0x4c
      [<ffffffc0000b5b94>] do_exit+0x430/0x978
      [<ffffffc0000b614c>] do_group_exit+0x40/0xd4
      [<ffffffc0000c0208>] get_signal+0x23c/0x4f4
      [<ffffffc0000890b4>] do_signal+0x1ac/0x518
      [<ffffffc000089650>] do_notify_resume+0x5c/0x68
      ---[ end trace 963c192600337066 ]---
      
      This patch adds the missing ct_user_exit to the el0_sp_pc entry path,
      correcting the context tracking for this case.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Fixes: 6c81fe79 ("arm64: enable context tracking")
      Cc: <stable@vger.kernel.org> # v3.17+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
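      A conceptual C sketch only (the real change adds the ct_user_exit
      macro to the el0_sp_pc path in entry.S; handle_sp_pc_abort is a
      hypothetical stand-in for the fault handler):

          void el0_sp_pc_sketch(struct pt_regs *regs)
          {
                  /* report the user->kernel transition, pairing with the
                   * context_tracking_user_enter() call made in kernel_exit */
                  context_tracking_user_exit();

                  handle_sp_pc_abort(regs);
          }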
    • arm/arm64: KVM: vgic: Do not save GICH_HCR / ICH_HCR_EL2 · 4642019d
      Committed by Marc Zyngier
      The GIC Hypervisor Configuration Register is used to enable
      the delivery of virtual interrupts to a guest, as well as to
      define in which conditions maintenance interrupts are delivered
      to the host.
      
      This register doesn't contain any information that we need to
      read back (the EOIcount is utterly useless for us).
      
      So let's save ourselves some cycles, and not save it before
      writing zero to it.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
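      A sketch of the micro-optimization with hypothetical accessor names
      (read_gich_hcr/write_gich_hcr are illustrative, not kernel APIs):

          /* before: save the register, then disable virtual delivery */
          vgic_cpu_hcr = read_gich_hcr();
          write_gich_hcr(0);

          /* after: nothing in GICH_HCR needs reading back (only EOIcount
           * lives there), so just write zero and skip the save */
          write_gich_hcr(0);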
    • KVM: arm64: fix misleading comments in save/restore · 921ef1e1
      Committed by Alex Bennée
      The elr_el2 and spsr_el2 registers in fact contain the processor state
      before entry into EL2.  In the case of guest state, it could be either
      EL0 or EL1.
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>