1. 07 Apr 2017 (1 commit)
  2. 06 Apr 2017 (1 commit)
  3. 04 Apr 2017 (1 commit)
    • arm64: mm: unaligned access by user-land should be received as SIGBUS · 09a6adf5
      Authored by Victor Kamensky
      Since commit 52d7523d ("arm64: mm: allow the kernel to handle alignment faults
      on user accesses"), user-land accesses that produce alignment exceptions, such
      as aarch32 ldm/stm/ldrd/strd instructions operating on unaligned memory, are
      delivered to user-land as SIGSEGV. That is wrong: they should be reported as
      SIGBUS, as they were before commit 52d7523d.
      
      Change do_bad_area() to derive the signal and code parameters from the ESR
      value via the fault_info table, so that an alignment fault handled by
      do_alignment_fault() delivers SIGBUS to user-land. Access to the fault_info
      table is wrapped in a new esr_to_fault_info() helper.
      
      Cc: <stable@vger.kernel.org>
      Fixes: 52d7523d (arm64: mm: allow the kernel to handle alignment faults on user accesses)
      Signed-off-by: Victor Kamensky <kamensky@cisco.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      09a6adf5
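      For reference, a minimal standalone sketch of the lookup this commit describes
      (the struct layout, table contents, and 6-bit index below are illustrative
      assumptions based on the commit text, not a copy of arch/arm64/mm/fault.c):

          #include <signal.h>

          struct fault_info {
              int sig;            /* signal to deliver, e.g. SIGBUS or SIGSEGV */
              int code;           /* siginfo si_code, e.g. BUS_ADRALN */
              const char *name;
          };

          /* One entry per ESR fault status code; only the alignment fault shown. */
          static const struct fault_info fault_info[64] = {
              [33] = { SIGBUS, BUS_ADRALN, "alignment fault" },
          };

          static inline const struct fault_info *esr_to_fault_info(unsigned int esr)
          {
              /* The low 6 bits of ESR_ELx identify the fault status code. */
              return &fault_info[esr & 63];
          }

      do_bad_area() can then pass inf->sig and inf->code to the user fault path, so
      an alignment fault is reported as SIGBUS rather than a hard-coded SIGSEGV.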
  4. 31 Mar 2017 (3 commits)
  5. 23 Mar 2017 (1 commit)
  6. 22 Mar 2017 (2 commits)
    • arm64: kaslr: Fix up the kernel image alignment · afd0e5a8
      Authored by Neeraj Upadhyay
      If the kernel image extends across an alignment boundary, the existing
      code increases the KASLR offset by the size of the kernel image, and the
      offset is masked after this correction. In some cases the masked offset
      still leaves the kernel image extending across the boundary. This
      eventually results in only a 2MB block getting mapped while creating the
      page tables, which causes data aborts while accessing the unmapped
      regions during the second relocation (with the KASLR offset) in
      __primary_switch. Fix this by rounding the kernel image size up to the
      swapper block size before adding it as the correction.
      
      For example, consider the case below, where the kernel image still
      crosses a 1GB alignment boundary after the offset is masked, and where
      rounding up the kernel image size fixes the problem.
      
      SWAPPER_TABLE_SHIFT = 30
      Swapper using section maps with section size 2MB.
      CONFIG_PGTABLE_LEVELS = 3
      VA_BITS = 39
      
      _text  : 0xffffff8008080000
      _end   : 0xffffff800aa1b000
      offset : 0x1f35600000
      mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
      
      (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
      (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
      
      offset after existing correction (before mask) = 0x1f37f9b000
      (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
      (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
      
      offset (after mask) = 0x1f37e00000
      (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
      (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
      
      new offset w/ rounding up = 0x1f38000000
      (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
      (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
      
      Fixes: f80fb3a3 ("arm64: add support for kernel ASLR")
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
      Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      afd0e5a8
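      The correction can be checked against the worked example above with a small
      standalone program (the macro values and round_up/round_down helpers below are
      reimplemented for illustration and are not the kernel's definitions):

          #include <stdio.h>

          #define VA_BITS              39
          #define SWAPPER_TABLE_SHIFT  30
          #define SWAPPER_BLOCK_SIZE   0x200000ULL    /* 2MB section maps */
          #define SZ_2M                0x200000ULL

          #define round_down(x, a)     ((x) & ~((a) - 1))
          #define round_up(x, a)       round_down((x) + (a) - 1, (a))

          int main(void)
          {
              unsigned long long text   = 0xffffff8008080000ULL;
              unsigned long long end    = 0xffffff800aa1b000ULL;
              unsigned long long offset = 0x1f35600000ULL;
              unsigned long long mask   = ((1ULL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
              unsigned long long kernel_size = end - text;

              /* If the randomized image straddles a SWAPPER_TABLE_SHIFT boundary,
               * correct the offset by the image size rounded up to a swapper
               * block, so masking cannot pull it back across the boundary. */
              if (((text + offset) >> SWAPPER_TABLE_SHIFT) !=
                  ((end + offset) >> SWAPPER_TABLE_SHIFT))
                      offset = (offset + round_up(kernel_size, SWAPPER_BLOCK_SIZE)) & mask;

              printf("offset      = %#llx\n", offset);   /* 0x1f38000000 */
              printf("_text block = %#llx\n", (text + offset) >> SWAPPER_TABLE_SHIFT);
              printf("_end  block = %#llx\n", (end + offset) >> SWAPPER_TABLE_SHIFT);
              return 0;
          }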
    • arm64: compat: Update compat syscalls · 713cc9df
      Authored by Will Deacon
      Hook up three pkey syscalls (which we don't implement) and the new statx
      syscall, as has been done for arch/arm/.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      713cc9df
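      The additions go into the compat syscall table in
      arch/arm64/include/asm/unistd32.h; a sketch of the shape of the entries
      follows (the numbers shown mirror arch/arm/'s assignments at the time and are
      an assumption here, not copied from the patch):

          #define __NR_pkey_mprotect 394
          __SYSCALL(__NR_pkey_mprotect, sys_ni_syscall)   /* not implemented on arm64 */
          #define __NR_pkey_alloc 395
          __SYSCALL(__NR_pkey_alloc, sys_ni_syscall)      /* not implemented on arm64 */
          #define __NR_pkey_free 396
          __SYSCALL(__NR_pkey_free, sys_ni_syscall)       /* not implemented on arm64 */
          #define __NR_statx 397
          __SYSCALL(__NR_statx, sys_statx)

      The compat syscall count in asm/unistd.h is raised accordingly to cover the
      new entries.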
  7. 11 Mar 2017 (5 commits)
    • arm64: kernel: Update kerneldoc for cpu_suspend() rename · 0e4c0e6e
      Authored by Geert Uytterhoeven
      Commit af391b15 ("arm64: kernel: rename __cpu_suspend to keep it
      aligned with arm") renamed cpu_suspend() to arm_cpuidle_suspend(), but
      forgot to update the kerneldoc header.
      
      Fixes: af391b15 ("arm64: kernel: rename __cpu_suspend to keep it aligned with arm")
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0e4c0e6e
    • arm64: use const cap for system_uses_ttbr0_pan() · 14088540
      Authored by Mark Rutland
      Since commit 4b65a5db ("arm64: Introduce
      uaccess_{disable,enable} functionality based on TTBR0_EL1"),
      system_uses_ttbr0_pan() has used cpus_have_cap() to determine whether
      PAN is present.
      
      Since commit a4023f68 ("arm64: Add hypervisor safe helper for
      checking constant capabilities"), which was introduced around the same
      time, cpus_have_cap() doesn't try to use a static key, and must always
      perform a load, test, and conditional branch (likely a tbnz for the
      latter two).
      
      Elsewhere, we moved to using cpus_have_const_cap(), which can use a
      static key (i.e. a non-conditional branch), which is patched at runtime
      when the feature is detected.
      
      This patch makes system_uses_ttbr0_pan() use cpus_have_const_cap(). The
      static key is likely a win for hot paths like the uaccess primitives,
      and this makes our usage consistent regardless.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      14088540
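      A sketch of the resulting helper (assuming the capability is ARM64_HAS_PAN and
      the software-PAN config symbol is CONFIG_ARM64_SW_TTBR0_PAN; not a verbatim
      copy of asm/cpufeature.h):

          static inline bool system_uses_ttbr0_pan(void)
          {
              /* cpus_have_const_cap() compiles to a static-key branch that is
               * patched once capability detection has run, instead of the
               * load/test/branch sequence cpus_have_cap() always performs. */
              return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
                      !cpus_have_const_cap(ARM64_HAS_PAN);
          }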
    • arm64: support keyctl() system call in 32-bit mode · 5c2a6259
      Authored by Eric Biggers
      As is the case for a number of other architectures that have a 32-bit
      compat mode, enable KEYS_COMPAT if both COMPAT and KEYS are enabled.
      This allows AArch32 programs to use the keyctl() system call when
      running on an AArch64 kernel.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5c2a6259
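      As a usage illustration (not part of the patch), an AArch32 binary running on
      an arm64 kernel can now reach the key management syscalls, for example via raw
      syscall(2):

          #include <stdio.h>
          #include <unistd.h>
          #include <sys/syscall.h>
          #include <linux/keyctl.h>

          int main(void)
          {
              /* Look up (and, with the final argument set, create) the session
               * keyring and print its serial number. */
              long id = syscall(__NR_keyctl, KEYCTL_GET_KEYRING_ID,
                                KEY_SPEC_SESSION_KEYRING, 1);
              if (id < 0) {
                  perror("keyctl");
                  return 1;
              }
              printf("session keyring: %ld\n", id);
              return 0;
          }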
    • arm64: kasan: avoid bad virt_to_pfn() · b0de0ccc
      Authored by Mark Rutland
      Booting a v4.11-rc1 kernel with DEBUG_VIRTUAL and KASAN enabled produces
      the following splat (trimmed for brevity):
      
      [    0.000000] virt_to_phys used for non-linear address: ffff200008080000 (0xffff200008080000)
      [    0.000000] WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x48/0x70
      [    0.000000] PC is at __virt_to_phys+0x48/0x70
      [    0.000000] LR is at __virt_to_phys+0x48/0x70
      [    0.000000] Call trace:
      [    0.000000] [<ffff2000080b1ac0>] __virt_to_phys+0x48/0x70
      [    0.000000] [<ffff20000a03b86c>] kasan_init+0x1c0/0x498
      [    0.000000] [<ffff20000a034018>] setup_arch+0x2fc/0x948
      [    0.000000] [<ffff20000a030c68>] start_kernel+0xb8/0x570
      [    0.000000] [<ffff20000a0301e8>] __primary_switched+0x6c/0x74
      
      This is because we use virt_to_pfn() on a kernel image address when
      trying to figure out its nid, so that we can allocate its shadow from
      the same node.
      
      As with other recent changes, this patch uses lm_alias() to solve this.
      
      We could instead use NUMA_NO_NODE, as x86 does for all shadow
      allocations, though we'll likely want the "real" memory shadow to be
      backed from its corresponding nid anyway, so we may as well be
      consistent and find the nid for the image shadow.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b0de0ccc
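      A sketch of the pattern being fixed (identifiers are illustrative, not the
      exact kasan_init() code):

          /* Wrong: _text is a kernel-image address, not a linear-map address,
           * so virt_to_pfn() trips the DEBUG_VIRTUAL check in __virt_to_phys():
           *
           *     nid = pfn_to_nid(virt_to_pfn(_text));
           *
           * Right: translate the image symbol to its linear-map alias first. */
          int nid = pfn_to_nid(virt_to_pfn(lm_alias(_text)));

      The shadow for the kernel image is then allocated from the node that actually
      backs the image.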
    • arm64: kprobes: remove kprobe_exceptions_notify · cb6950b7
      Authored by Naveen N. Rao
      Commit fc62d020 ("kprobes: Introduce weak variant of
      kprobe_exceptions_notify()") introduces a generic empty version of the
      function for architectures that don't need special handling, like arm64.
      As such, remove the arch/arm64/ specific handler.
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      cb6950b7
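      The generic weak stub introduced by fc62d020 looks roughly like this (sketch;
      see kernel/kprobes.c for the authoritative version). An architecture overrides
      it only when it needs special handling, which arm64 no longer does:

          int __weak kprobe_exceptions_notify(struct notifier_block *self,
                                              unsigned long val, void *data)
          {
              return NOTIFY_DONE;
          }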
  8. 10 Mar 2017 (1 commit)
  9. 09 Mar 2017 (2 commits)
  10. 07 Mar 2017 (2 commits)
    • arm64: KVM: Survive unknown traps from guests · ba4dd156
      Authored by Mark Rutland
      Currently we BUG() if we see an ESR_EL2.EC value we don't recognise. As
      configurable disables/enables are added to the architecture (controlled
      by RES1/RES0 bits respectively), with associated synchronous exceptions,
      it may be possible for a guest to trigger exceptions with classes that
      we don't recognise.
      
      While we can't service these exceptions in a manner useful to the guest,
      we can avoid bringing down the host. Per ARM DDI 0487A.k_iss10775, page
      D7-1937, EC values within the range 0x00 - 0x2c are reserved for future
      use with synchronous exceptions, and EC values within the range 0x2d -
      0x3f may be used for either synchronous or asynchronous exceptions.
      
      The patch makes KVM handle any unknown EC by injecting an UNDEFINED
      exception into the guest, with a corresponding (ratelimited) warning in
      the host dmesg. We could later improve on this with a new (opt-in) exit
      to the host userspace.
      
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      ba4dd156
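      A sketch of the fallback handler the commit describes (function and helper
      names follow the arm64 KVM conventions of the time and are a best-effort
      reconstruction, not a verbatim copy of arch/arm64/kvm/handle_exit.c):

          static int handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
          {
              u32 hsr = kvm_vcpu_get_hsr(vcpu);

              /* Ratelimited warning in the host log... */
              kvm_pr_unimpl("Unknown exception class: hsr: %#08x\n", hsr);

              /* ...and an UNDEFINED exception injected into the guest. */
              kvm_inject_undefined(vcpu);
              return 1;
          }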
    • irqchip/gicv3-its: Add workaround for QDF2400 ITS erratum 0065 · 90922a2d
      Authored by Shanker Donthineni
      On Qualcomm Datacenter Technologies QDF2400 SoCs, the ITS hardware
      implementation uses 16 bytes per Interrupt Translation Entry (ITE),
      but reports an incorrect value of 8 bytes in GITS_TYPER.ITTE_size.
      
      This can cause kernel memory corruption, depending on the number of
      MSI(x) interrupts that are configured and the amount of memory that
      has been allocated for ITEs in its_create_device().
      
      Fix the potential memory corruption by setting the correct ITE size
      of 16 bytes.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      90922a2d
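      A sketch of the quirk (the field and function names are assumptions based on
      the driver's quirk framework, not a verbatim copy of irq-gic-v3-its.c):

          static void its_enable_quirk_qdf2400_e0065(void *data)
          {
              struct its_node *its = data;

              /* Trust the erratum, not GITS_TYPER.ITTE_size: the hardware
               * really uses 16-byte translation entries. */
              its->ite_size = 16;
          }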
  11. 06 Mar 2017 (1 commit)
  12. 03 Mar 2017 (1 commit)
    • sched/headers: Move task->mm handling methods to <linux/sched/mm.h> · 68e21be2
      Authored by Ingo Molnar
      Move the following task->mm helper APIs into a new header file,
      <linux/sched/mm.h>, to further reduce the size and complexity
      of <linux/sched.h>.
      
      Here is how the APIs are used in various kernel files:
      
        # mm_alloc():
        arch/arm/mach-rpc/ecard.c
        fs/exec.c
        include/linux/sched/mm.h
        kernel/fork.c
      
        # __mmdrop():
        arch/arc/include/asm/mmu_context.h
        include/linux/sched/mm.h
        kernel/fork.c
      
        # mmdrop():
        arch/arm/mach-rpc/ecard.c
        arch/m68k/sun3/mmu_emu.c
        arch/x86/mm/tlb.c
        drivers/gpu/drm/amd/amdkfd/kfd_process.c
        drivers/gpu/drm/i915/i915_gem_userptr.c
        drivers/infiniband/hw/hfi1/file_ops.c
        drivers/vfio/vfio_iommu_spapr_tce.c
        fs/exec.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        fs/proc/task_nommu.c
        fs/userfaultfd.c
        include/linux/mmu_notifier.h
        include/linux/sched/mm.h
        kernel/fork.c
        kernel/futex.c
        kernel/sched/core.c
        mm/khugepaged.c
        mm/ksm.c
        mm/mmu_context.c
        mm/mmu_notifier.c
        mm/oom_kill.c
        virt/kvm/kvm_main.c
      
        # mmdrop_async_fn():
        include/linux/sched/mm.h
      
        # mmdrop_async():
        include/linux/sched/mm.h
        kernel/fork.c
      
        # mmget_not_zero():
        fs/userfaultfd.c
        include/linux/sched/mm.h
        mm/oom_kill.c
      
        # mmput():
        arch/arc/include/asm/mmu_context.h
        arch/arc/kernel/troubleshoot.c
        arch/frv/mm/mmu-context.c
        arch/powerpc/platforms/cell/spufs/context.c
        arch/sparc/include/asm/mmu_context_32.h
        drivers/android/binder.c
        drivers/gpu/drm/etnaviv/etnaviv_gem.c
        drivers/gpu/drm/i915/i915_gem_userptr.c
        drivers/infiniband/core/umem.c
        drivers/infiniband/core/umem_odp.c
        drivers/infiniband/core/uverbs_main.c
        drivers/infiniband/hw/mlx4/main.c
        drivers/infiniband/hw/mlx5/main.c
        drivers/infiniband/hw/usnic/usnic_uiom.c
        drivers/iommu/amd_iommu_v2.c
        drivers/iommu/intel-svm.c
        drivers/lguest/lguest_user.c
        drivers/misc/cxl/fault.c
        drivers/misc/mic/scif/scif_rma.c
        drivers/oprofile/buffer_sync.c
        drivers/vfio/vfio_iommu_type1.c
        drivers/vhost/vhost.c
        drivers/xen/gntdev.c
        fs/exec.c
        fs/proc/array.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        fs/proc/task_nommu.c
        fs/userfaultfd.c
        include/linux/sched/mm.h
        kernel/cpuset.c
        kernel/events/core.c
        kernel/events/uprobes.c
        kernel/exit.c
        kernel/fork.c
        kernel/ptrace.c
        kernel/sys.c
        kernel/trace/trace_output.c
        kernel/tsacct.c
        mm/memcontrol.c
        mm/memory.c
        mm/mempolicy.c
        mm/migrate.c
        mm/mmu_notifier.c
        mm/nommu.c
        mm/oom_kill.c
        mm/process_vm_access.c
        mm/rmap.c
        mm/swapfile.c
        mm/util.c
        virt/kvm/async_pf.c
      
        # mmput_async():
        include/linux/sched/mm.h
        kernel/fork.c
        mm/oom_kill.c
      
        # get_task_mm():
        arch/arc/kernel/troubleshoot.c
        arch/powerpc/platforms/cell/spufs/context.c
        drivers/android/binder.c
        drivers/gpu/drm/etnaviv/etnaviv_gem.c
        drivers/infiniband/core/umem.c
        drivers/infiniband/core/umem_odp.c
        drivers/infiniband/hw/mlx4/main.c
        drivers/infiniband/hw/mlx5/main.c
        drivers/infiniband/hw/usnic/usnic_uiom.c
        drivers/iommu/amd_iommu_v2.c
        drivers/iommu/intel-svm.c
        drivers/lguest/lguest_user.c
        drivers/misc/cxl/fault.c
        drivers/misc/mic/scif/scif_rma.c
        drivers/oprofile/buffer_sync.c
        drivers/vfio/vfio_iommu_type1.c
        drivers/vhost/vhost.c
        drivers/xen/gntdev.c
        fs/proc/array.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        include/linux/sched/mm.h
        kernel/cpuset.c
        kernel/events/core.c
        kernel/exit.c
        kernel/fork.c
        kernel/ptrace.c
        kernel/sys.c
        kernel/trace/trace_output.c
        kernel/tsacct.c
        mm/memcontrol.c
        mm/memory.c
        mm/mempolicy.c
        mm/migrate.c
        mm/mmu_notifier.c
        mm/nommu.c
        mm/util.c
      
        # mm_access():
        fs/proc/base.c
        include/linux/sched/mm.h
        kernel/fork.c
        mm/process_vm_access.c
      
        # mm_release():
        arch/arc/include/asm/mmu_context.h
        fs/exec.c
        include/linux/sched/mm.h
        include/uapi/linux/sched.h
        kernel/exit.c
        kernel/fork.c
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      68e21be2
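      A minimal illustration of the split (the function below is an illustrative
      example, not code from the patch): callers of the task->mm helpers now include
      <linux/sched/mm.h> rather than relying on <linux/sched.h> pulling everything in.

          #include <linux/sched.h>
          #include <linux/sched/mm.h>

          static unsigned long task_total_vm(struct task_struct *tsk)
          {
              struct mm_struct *mm = get_task_mm(tsk);   /* now in sched/mm.h */
              unsigned long pages = 0;

              if (mm) {
                  pages = mm->total_vm;
                  mmput(mm);                             /* also in sched/mm.h */
              }
              return pages;
          }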
  13. 02 Mar 2017 (10 commits)
  14. 28 Feb 2017 (3 commits)
  15. 25 Feb 2017 (1 commit)
  16. 24 Feb 2017 (3 commits)
    • arm64/cpufeature: check correct field width when updating sys_val · 638f863d
      Authored by Mark Rutland
      When we're updating a register's sys_val, we use arm64_ftr_value() to
      find the new field value; this in turn uses cpuid_feature_extract_field(),
      which implicitly assumes a 4-bit field, so we may extract more bits than
      we mean to for fields like CTR_EL0.L1ip.
      
      This affects update_cpu_ftr_reg(), where we may extract erroneous values
      for ftr_cur and ftr_new. Depending on the additional bits extracted in
      either case, we may erroneously detect that the value is mismatched, and
      we'll try to compute a new safe value.
      
      Depending on these extra bits and the feature type, arm64_ftr_safe_value()
      may pessimistically select the always-safe value, or may erroneously
      choose either the extracted cur or new value as the safe option. The
      extra bits will subsequently be masked out in arm64_ftr_set_value(), so
      we may choose a higher value, yet write back a lower one.
      
      Fix this by passing the width down explicitly in arm64_ftr_value(), so
      we always extract the correct amount.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      638f863d
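      A sketch of width-aware extraction (helper and struct-field names are a
      best-effort reconstruction of the arm64 cpufeature code, not a verbatim copy;
      signed fields are omitted for brevity):

          static inline u64
          cpuid_feature_extract_unsigned_field_width(u64 features, int field,
                                                     int width)
          {
              return (features >> field) & ((1ULL << width) - 1);
          }

          static inline s64 arm64_ftr_value(const struct arm64_ftr_bits *ftrp, u64 val)
          {
              /* Pass the descriptor's real width instead of assuming 4 bits,
               * so e.g. the 2-bit CTR_EL0.L1ip field is extracted exactly. */
              return (s64)cpuid_feature_extract_unsigned_field_width(val,
                                                                     ftrp->shift,
                                                                     ftrp->width);
          }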
    • Revert "arm64: mm: set the contiguous bit for kernel mappings where appropriate" · d81bbe6d
      Authored by Mark Rutland
      This reverts commit 0bfc445d.
      
      When we change the permissions of regions mapped using contiguous
      entries, the architecture requires us to follow a Break-Before-Make
      strategy, breaking *all* associated entries before we can change any of
      the following properties from the entries:
      
       - presence of the contiguous bit
       - output address
       - attributes
       - permissions
      
      Failure to do so can result in a number of problems (e.g. TLB conflict
      aborts and/or erroneous results from TLB lookups).
      
      See ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit",
      page D4-1762.
      
      We do not take this into account when altering the permissions of kernel
      segments in mark_rodata_ro(), where we change the permissions of live
      contiguous entries one-by-one, leaving them transiently inconsistent.
      This has been observed to result in failures on some fast model
      configurations.
      
      Unfortunately, we cannot follow Break-Before-Make here as we'd have to
      unmap kernel text and data used to perform the sequence.
      
      For the time being, revert commit 0bfc445d so as to avoid issues
      resulting from this misuse of the contiguous bit.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <Will.Deacon@arm.com>
      Cc: stable@vger.kernel.org # v4.10
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d81bbe6d
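      For illustration, the Break-Before-Make sequence the architecture requires for
      a contiguous group looks roughly as follows (an illustrative helper, not the
      kernel's page-table update path, and exactly what cannot be applied to live
      kernel text, hence the revert):

          static void change_contig_prot(pte_t *ptep, int nr, unsigned long addr,
                                         pgprot_t newprot)
          {
              pte_t old[CONT_PTES];    /* assumes nr <= CONT_PTES */
              int i;

              /* Snapshot the live entries before breaking them. */
              for (i = 0; i < nr; i++)
                  old[i] = ptep[i];

              /* Break: invalidate every entry in the contiguous group. */
              for (i = 0; i < nr; i++)
                  pte_clear(&init_mm, addr + i * PAGE_SIZE, &ptep[i]);

              /* Ensure no CPU still holds a stale (contiguous) TLB entry. */
              flush_tlb_kernel_range(addr, addr + nr * PAGE_SIZE);

              /* Make: only now install entries with the new attributes. */
              for (i = 0; i < nr; i++)
                  set_pte(&ptep[i], pfn_pte(pte_pfn(old[i]), newprot));
          }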
    • arm64: Avoid clobbering mm in erratum workaround on QDF2400 · ea6eac90
      Authored by Shanker Donthineni
      Commit 38fd94b0 ("arm64: Work around Falkor erratum 1003") tried to
      work around a hardware erratum, but actually caused a system crash of
      its own during switch_mm:
      
       cpu_do_switch_mm+0x20/0x40
       efi_virtmap_load+0x34/0x40
       virt_efi_get_next_variable+0x64/0xc8
       efivar_init+0x8c/0x348
       efisubsys_init+0xd4/0x270
       do_one_initcall+0x80/0x110
       kernel_init_freeable+0x19c/0x240
       kernel_init+0x10/0x100
       ret_from_fork+0x10/0x50
      
       Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      In cpu_do_switch_mm, x1 contains the mm_struct pointer, which needs to
      be preserved by the pre_ttbr0_update_workaround macro rather than passed
      as a temporary.
      
      This patch clobbers x2 and x3 instead, keeping the mm_struct intact
      after the workaround has run.
      
      Fixes: 38fd94b0 ("arm64: Work around Falkor erratum 1003")
      Tested-by: Manoj Iyer <manoj.iyer@canonical.com>
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ea6eac90
  17. 22 Feb 2017 (2 commits)