1. 21 Oct 2015 (9 commits)
  2. 20 Oct 2015 (6 commits)
  3. 17 Oct 2015 (1 commit)
  4. 16 Oct 2015 (1 commit)
  5. 13 Oct 2015 (1 commit)
    • arm64: add KASAN support · 39d114dd
      Authored by Andrey Ryabinin
      This patch adds the arch-specific code for the kernel address
      sanitizer (see Documentation/kasan.txt).

      1/8 of the kernel address space is reserved for shadow memory.
      There was no hole big enough for this, so the virtual addresses
      for the shadow were stolen from the vmalloc area.

      At the early boot stage, the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is
      reused as a read-only zero shadow for memory that KASan does not
      currently track (vmalloc). After the physical memory has been
      mapped, pages for the shadow memory are allocated and mapped.

      Functions like memset/memmove/memcpy perform a lot of memory
      accesses. If a bad pointer is passed to one of these functions, it
      is important to catch that. The compiler's instrumentation cannot
      do so, since these functions are written in assembly, so KASan
      replaces the memory functions with manually instrumented variants.
      The original functions are declared as weak symbols, so the strong
      definitions in mm/kasan/kasan.c can replace them. The original
      functions also have aliases with a '__' prefix, so the
      non-instrumented variants can still be called where needed. Some
      files are built without KASan instrumentation (e.g. mm/slub.c);
      in those files the original mem* functions are replaced (via
      #define) with the prefixed variants to disable the memory access
      checks (a sketch of this scheme follows the entry).
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39d114dd
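      A minimal user-space sketch of the interposition scheme described
      above, assuming a hypothetical check_region() in place of KASan's
      real shadow check; the kernel's actual pieces live in the arch
      assembly sources and mm/kasan/kasan.c, and the names here are
      illustrative only.

      #include <stddef.h>
      #include <stdio.h>

      /* Hypothetical stand-in for KASan's shadow-memory check. */
      static void check_region(const void *addr, size_t n, int is_write)
      {
              printf("check %p+%zu (%s)\n", addr, n,
                     is_write ? "write" : "read");
      }

      /* The non-instrumented original. In the kernel this is the
       * assembly implementation; memcpy is declared as a weak alias
       * for it so everything still links when KASan is disabled. */
      void *__raw_memcpy(void *dest, const void *src, size_t n)
      {
              char *d = dest;
              const char *s = src;

              while (n--)
                      *d++ = *s++;
              return dest;
      }

      /* The strong, instrumented variant (the kernel's lives in
       * mm/kasan/kasan.c): validate both regions, then defer to the
       * raw copy. Files built without instrumentation opt out with
       * e.g. "#define memcpy(d, s, n) __memcpy(d, s, n)". */
      void *checked_memcpy(void *dest, const void *src, size_t n)
      {
              check_region(src, n, 0);
              check_region(dest, n, 1);
              return __raw_memcpy(dest, src, n);
      }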
  6. 12 Oct 2015 (2 commits)
  7. 10 Oct 2015 (1 commit)
    • arm64: fix a migrating irq bug when hotplug cpu · 217d453d
      Authored by Yang Yingliang
      When a CPU is disabled, all of its IRQs are migrated to another
      CPU. In some cases the new affinity differs from the old one, so
      the old affinity mask needs to be updated; but when
      irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE, the old
      affinity is never updated. Fix this by using irq_do_set_affinity,
      which handles that return code (see the sketch after this entry).

      Since migrating interrupts is a core-code matter, use the generic
      function irq_migrate_all_off_this_cpu() from
      kernel/irq/migration.c to migrate the interrupts.
      
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      217d453d
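      A simplified sketch, assuming v4.3-era genirq internals (field
      names approximate, not the verbatim source), of why
      irq_do_set_affinity() gets this right: it treats
      IRQ_SET_MASK_OK_DONE like IRQ_SET_MASK_OK and copies the new mask
      into the descriptor, which a caller of chip->irq_set_affinity()
      that only checks for IRQ_SET_MASK_OK would miss.

      int irq_do_set_affinity(struct irq_data *data,
                              const struct cpumask *mask, bool force)
      {
              struct irq_desc *desc = irq_data_to_desc(data);
              struct irq_chip *chip = irq_data_get_irq_chip(data);
              int ret;

              ret = chip->irq_set_affinity(data, mask, force);
              switch (ret) {
              case IRQ_SET_MASK_OK:
              case IRQ_SET_MASK_OK_DONE:  /* the case arm64 dropped */
                      cpumask_copy(desc->irq_common_data.affinity, mask);
                      /* fall through */
              case IRQ_SET_MASK_OK_NOCOPY:
                      irq_set_thread_affinity(desc);
                      ret = 0;
              }
              return ret;
      }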
  8. 07 Oct 2015 (7 commits)
  9. 02 Oct 2015 (1 commit)
    • arm64: ftrace: fix function_graph tracer panic · ee556d00
      Authored by Li Bin
      When the function graph tracer is enabled, the following sequence
      of operations triggers a panic:
      
      mount -t debugfs nodev /sys/kernel
      echo next_tgid > /sys/kernel/tracing/set_ftrace_filter
      echo function_graph > /sys/kernel/tracing/current_tracer
      ls /proc/
      
      ------------[ cut here ]------------
      [  198.501417] Unable to handle kernel paging request at virtual address cb88537fdc8ba316
      [  198.506126] pgd = ffffffc008f79000
      [  198.509363] [cb88537fdc8ba316] *pgd=00000000488c6003, *pud=00000000488c6003, *pmd=0000000000000000
      [  198.517726] Internal error: Oops: 94000005 [#1] SMP
      [  198.518798] Modules linked in:
      [  198.520582] CPU: 1 PID: 1388 Comm: ls Tainted: G
      [  198.521800] Hardware name: linux,dummy-virt (DT)
      [  198.522852] task: ffffffc0fa9e8000 ti: ffffffc0f9ab0000 task.ti: ffffffc0f9ab0000
      [  198.524306] PC is at next_tgid+0x30/0x100
      [  198.525205] LR is at return_to_handler+0x0/0x20
      [  198.526090] pc : [<ffffffc0002a1070>] lr : [<ffffffc0000907c0>] pstate: 60000145
      [  198.527392] sp : ffffffc0f9ab3d40
      [  198.528084] x29: ffffffc0f9ab3d40 x28: ffffffc0f9ab0000
      [  198.529406] x27: ffffffc000d6a000 x26: ffffffc000b786e8
      [  198.530659] x25: ffffffc0002a1900 x24: ffffffc0faf16c00
      [  198.531942] x23: ffffffc0f9ab3ea0 x22: 0000000000000002
      [  198.533202] x21: ffffffc000d85050 x20: 0000000000000002
      [  198.534446] x19: 0000000000000002 x18: 0000000000000000
      [  198.535719] x17: 000000000049fa08 x16: ffffffc000242efc
      [  198.537030] x15: 0000007fa472b54c x14: ffffffffff000000
      [  198.538347] x13: ffffffc0fada84a0 x12: 0000000000000001
      [  198.539634] x11: ffffffc0f9ab3d70 x10: ffffffc0f9ab3d70
      [  198.540915] x9 : ffffffc0000907c0 x8 : ffffffc0f9ab3d40
      [  198.542215] x7 : 0000002e330f08f0 x6 : 0000000000000015
      [  198.543508] x5 : 0000000000000f08 x4 : ffffffc0f9835ec0
      [  198.544792] x3 : cb88537fdc8ba316 x2 : cb88537fdc8ba306
      [  198.546108] x1 : 0000000000000002 x0 : ffffffc000d85050
      [  198.547432]
      [  198.547920] Process ls (pid: 1388, stack limit = 0xffffffc0f9ab0020)
      [  198.549170] Stack: (0xffffffc0f9ab3d40 to 0xffffffc0f9ab4000)
      [  198.582568] Call trace:
      [  198.583313] [<ffffffc0002a1070>] next_tgid+0x30/0x100
      [  198.584359] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.585503] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.586574] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.587660] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.588896] Code: aa0003f5 2a0103f4 b4000102 91004043 (885f7c60)
      [  198.591092] ---[ end trace 6a346f8f20949ac8 ]---
      
      This happens because, with the function graph tracer in use, a
      traced function whose return value is held in multiple registers
      ([x0-x7]) can have those registers corrupted by return_to_handler.
      So return_to_handler must preserve the full set of return-value
      registers properly (an illustration of the ABI point follows this
      entry).
      
      Cc: <stable@vger.kernel.org> # 3.18+
      Signed-off-by: Li Bin <huawei.libin@huawei.com>
      Acked-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ee556d00
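      The underlying ABI point, shown below in an illustrative
      user-space example (not the kernel fix itself, which widens the
      set of registers that return_to_handler saves and restores in
      entry-ftrace.S): under the AAPCS64, a composite return value of up
      to 16 bytes comes back in x0 and x1, so a trampoline that
      preserves only x0 corrupts the second half.

      #include <stdio.h>

      /* A 16-byte composite is returned in x0/x1 under the AAPCS64. */
      struct pair {
              long first;     /* comes back in x0 */
              long second;    /* comes back in x1 */
      };

      static struct pair make_pair(long a, long b)
      {
              return (struct pair){ .first = a, .second = b };
      }

      int main(void)
      {
              struct pair p = make_pair(1, 2);

              /* With a trampoline that preserved only x0, p.second
               * could come back corrupted. */
              printf("%ld %ld\n", p.first, p.second);
              return 0;
      }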
  10. 01 Oct 2015 (1 commit)
    • arm64/efi: Fix boot crash by not padding between EFI_MEMORY_RUNTIME regions · 0ce3cc00
      Authored by Ard Biesheuvel
      The new Properties Table feature introduced in UEFI v2.5 may
      split memory regions that cover PE/COFF memory images into
      separate code and data regions. Since these regions only differ
      in the type (runtime code vs runtime data) and the permission
      bits, but not in the memory type attributes (UC/WC/WT/WB), the
      spec does not require them to be aligned to 64 KB.
      
      Since the relative offset of PE/COFF .text and .data segments
      cannot be changed on the fly, this means that we can no longer
      pad out those regions to be mappable using 64 KB pages.
      Unfortunately, there is no annotation in the UEFI memory map
      that identifies data regions that were split off from a code
      region, so we must apply this logic to all adjacent runtime
      regions whose attributes only differ in the permission bits.
      
      So instead of rounding each memory region to 64 KB alignment at
      both ends, only round down regions that are not directly
      preceded by another runtime region with the same type
      attributes. Since the UEFI spec does not mandate that the memory
      map be sorted, this means we also need to sort it first.
      
      Note that this change will result in all EFI_MEMORY_RUNTIME
      regions whose start addresses are not aligned to the OS page size
      being mapped with executable permissions (i.e., on kernels
      compiled with 64 KB pages). However, since these mappings are only
      active while UEFI Runtime Services are being invoked, the window
      for abuse is rather small (a sketch of the rounding rule follows
      this entry).
      Tested-by: Mark Salter <msalter@redhat.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com> [UEFI 2.4 only]
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Reviewed-by: Mark Salter <msalter@redhat.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org> # v4.0+
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1443218539-7610-3-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0ce3cc00
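      A hedged sketch of the rounding rule described above (not the
      actual arch code; the struct and helper names are illustrative):
      sort the map by physical address, then round a region's start down
      to 64 KB only when it is not directly preceded by another runtime
      region with the same type attributes.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <stdlib.h>

      #define SZ_64K             0x10000ULL
      #define EFI_MEMORY_RUNTIME (1ULL << 63)
      /* UC/WC/WT/WB/UCE cacheability bits, as distinct from the
       * permission bits. */
      #define EFI_MEM_TYPE_MASK  0x1fULL

      struct efi_region {        /* illustrative, not efi_memory_desc_t */
              uint64_t phys_addr;
              uint64_t num_pages; /* 4 KB EFI pages */
              uint64_t attribute;
      };

      static int cmp_phys(const void *a, const void *b)
      {
              const struct efi_region *ra = a, *rb = b;

              return (ra->phys_addr > rb->phys_addr) -
                     (ra->phys_addr < rb->phys_addr);
      }

      /* True if prev is a runtime region that ends exactly where cur
       * starts and has the same memory type attributes. */
      static bool adjacent_same_type(const struct efi_region *prev,
                                     const struct efi_region *cur)
      {
              if (!prev || !(prev->attribute & EFI_MEMORY_RUNTIME))
                      return false;
              if (prev->phys_addr + prev->num_pages * 4096 !=
                  cur->phys_addr)
                      return false;
              return (prev->attribute & EFI_MEM_TYPE_MASK) ==
                     (cur->attribute & EFI_MEM_TYPE_MASK);
      }

      static void place_regions(struct efi_region *map, size_t n)
      {
              qsort(map, n, sizeof(*map), cmp_phys);

              for (size_t i = 0; i < n; i++) {
                      const struct efi_region *prev =
                              i ? &map[i - 1] : NULL;
                      uint64_t start = map[i].phys_addr;

                      if (!adjacent_same_type(prev, &map[i]))
                              start &= ~(SZ_64K - 1); /* round down */
                      /* ... map this region from 'start' ... */
                      (void)start;
              }
      }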
  11. 17 Sep 2015 (3 commits)
    • arm64: errata: add module build workaround for erratum #843419 · df057cc7
      Authored by Will Deacon
      Cortex-A53 processors <= r0p4 are affected by erratum #843419 which can
      lead to a memory access using an incorrect address in certain sequences
      headed by an ADRP instruction.
      
      There is a linker fix to generate veneers for ADRP instructions, but
      this doesn't work for kernel modules which are built as unlinked ELF
      objects.
      
      This patch adds a new config option for the erratum which, when
      enabled, builds kernel modules with the -mcmodel=large code model.
      This uses absolute addressing for all kernel symbols, thereby
      removing the use of ADRP as a PC-relative form of addressing. The
      ADRP relocs are removed from the module loader, so that we refuse
      to load any potentially affected modules (see the sketch after
      this entry).
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      df057cc7
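      On the loader side, the workaround amounts to refusing ADRP-class
      relocations when the option is set; -mcmodel=large objects should
      not contain them anyway. A hedged user-space sketch of that check
      (the real one lives in the module relocation code in
      arch/arm64/kernel/module.c):

      #include <elf.h>
      #include <stdbool.h>

      #ifndef R_AARCH64_ADR_PREL_PG_HI21
      #define R_AARCH64_ADR_PREL_PG_HI21    275
      #endif
      #ifndef R_AARCH64_ADR_PREL_PG_HI21_NC
      #define R_AARCH64_ADR_PREL_PG_HI21_NC 276
      #endif

      /* Return false for relocations that a workaround-enabled module
       * loader must reject: ADRP-based page-relative addressing could
       * hit erratum #843419 on Cortex-A53 <= r0p4. */
      static bool reloc_allowed(const Elf64_Rela *rela,
                                bool erratum_843419)
      {
              switch (ELF64_R_TYPE(rela->r_info)) {
              case R_AARCH64_ADR_PREL_PG_HI21:
              case R_AARCH64_ADR_PREL_PG_HI21_NC:
                      return !erratum_843419;
              default:
                      return true;
              }
      }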
    • arm64: compat: fix vfp save/restore across signal handlers in big-endian · bdec97a8
      Authored by Will Deacon
      When saving/restoring the VFP registers from a compat (AArch32)
      signal frame, we rely on the compat registers forming a prefix of the
      native register file and therefore make use of copy_{to,from}_user to
      transfer between the native fpsimd_state and the compat_vfp_sigframe.
      
      Unfortunately, this doesn't work so well in a big-endian environment.
      Our fpsimd save/restore code operates directly on 128-bit quantities
      (Q registers) whereas the compat_vfp_sigframe represents the registers
      as an array of 64-bit (D) registers. The architecture packs the compat D
      registers into the Q registers, with the least significant bytes holding
      the lower register. Consequently, we need to swap the 64-bit halves when
      converting between these two representations on a big-endian machine.
      
      This patch replaces the __copy_{to,from}_user invocations in our
      compat VFP signal handling code with explicit __put_user loops
      that operate on 64-bit values and swap them accordingly (a sketch
      of the idea follows this entry).
      
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      bdec97a8
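      A minimal sketch of the endianness issue, assuming hypothetical
      names (the kernel's version works on the task's fpsimd_state and
      uses __put_user into the compat sigframe):

      #include <stdint.h>

      /* One 128-bit Q register viewed as two 64-bit D-register halves.
       * The architecture packs d(2n) into the low half of q(n), but on
       * a big-endian kernel the in-memory image of a Q register puts
       * the high half first, so a plain byte-wise copy emits the D
       * registers in the wrong order. */
      union qreg {
              __uint128_t raw;
              struct {
      #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
                      uint64_t hi, lo;
      #else
                      uint64_t lo, hi;
      #endif
              } d;
      };

      /* Copy Q registers out as an array of D registers, low half
       * first regardless of byte order, via explicit 64-bit stores
       * (stand-ins for the __put_user loop). */
      static void save_compat_dregs(const union qreg *q,
                                    uint64_t *dregs, unsigned int nq)
      {
              for (unsigned int i = 0; i < nq; i++) {
                      dregs[2 * i]     = q[i].d.lo; /* d(2i)     */
                      dregs[2 * i + 1] = q[i].d.hi; /* d(2i + 1) */
              }
      }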
    • arm64: cpu hotplug: ensure we mask out CPU_TASKS_FROZEN in notifiers · e56d82a1
      Authored by Will Deacon
      We have a couple of CPU hotplug notifiers for resetting the CPU debug
      state to a sane value when a CPU comes online.
      
      This patch ensures that we mask out CPU_TASKS_FROZEN so that we
      don't miss any online events occurring due to suspend/resume (see
      the sketch after this entry).
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e56d82a1
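      The usual notifier pattern, as a hedged kernel-context sketch
      (debug_reset_notify and reset_debug_state are illustrative names,
      not the patch's actual callees): strip CPU_TASKS_FROZEN so that
      CPU_ONLINE_FROZEN, generated during resume, is handled exactly
      like a plain CPU_ONLINE.

      static int debug_reset_notify(struct notifier_block *self,
                                    unsigned long action, void *hcpu)
      {
              if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
                      reset_debug_state(); /* reset the incoming CPU */
              return NOTIFY_OK;
      }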
  12. 15 Sep 2015 (1 commit)
  13. 09 Sep 2015 (1 commit)
  14. 27 Aug 2015 (1 commit)
    • arm64: flush FP/SIMD state correctly after execve() · 674c242c
      Authored by Ard Biesheuvel
      When a task calls execve(), its FP/SIMD state is flushed so that
      none of the original program's state is observable by the incoming
      program.

      However, since this flushing consists of setting the in-memory
      copy of the FP/SIMD state to all zeroes, the CPU field is set to
      CPU 0 as well, which tells the lazy FP/SIMD preserve/restore code
      that the FP/SIMD state does not need to be reread from memory if
      the task is scheduled again on CPU 0 before any other task has
      entered userland (or used FP/SIMD in kernel mode) on that CPU in
      the meantime. If this happens, the FP/SIMD state of the old
      program is still present in the registers when the new program
      starts.

      So set the CPU field to the invalid value of NR_CPUS when
      performing the flush, by calling fpsimd_flush_task_state() (see
      the sketch after this entry).
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
      Reported-by: Janet Liu <janet.liu@spreadtrum.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      674c242c
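      A sketch of the fix, close to (but not verbatim) the arm64 fpsimd
      code: after zeroing the in-memory FP/SIMD state on exec, also
      invalidate the owner CPU so the lazy-restore fast path can never
      match stale register contents.

      void fpsimd_flush_task_state(struct task_struct *t)
      {
              /* NR_CPUS is never a valid CPU number, so no CPU can
               * claim to still hold this task's FP/SIMD registers. */
              t->thread.fpsimd_state.cpu = NR_CPUS;
      }

      void fpsimd_flush_thread(void)
      {
              memset(&current->thread.fpsimd_state, 0,
                     sizeof(struct fpsimd_state));
              fpsimd_flush_task_state(current); /* the missing call */
      }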
  15. 24 Aug 2015 (1 commit)
  16. 21 Aug 2015 (1 commit)
    • arm64: entry: always restore x0 from the stack on syscall return · 412fcb6c
      Authored by Will Deacon
      We have a micro-optimisation on the fast syscall return path where we
      take care to keep x0 live with the return value from the syscall so that
      we can avoid restoring it from the stack. The benefit of doing this is
      fairly suspect, since we will be restoring x1 from the stack anyway
      (which lives adjacent in the pt_regs structure) and the only additional
      cost is saving x0 back to pt_regs after the syscall handler, which could
      be seen as a poor man's prefetch.
      
      More importantly, this causes issues with the context tracking code.
      
      The ct_user_enter macro ends up branching into C code, which is free to
      use x0 as a scratch register and consequently leads to us returning junk
      back to userspace as the syscall return value. Rather than special case
      the context-tracking code, this patch removes the questionable
      optimisation entirely.
      
      Cc: <stable@vger.kernel.org>
      Cc: Larry Bassel <larry.bassel@linaro.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      412fcb6c
  17. 12 Aug 2015 (1 commit)
  18. 10 Aug 2015 (1 commit)
    • arm64: VDSO: fix coarse clock monotonicity regression · 878854a3
      Authored by Nathan Lynch
      Since 906c5557 ("timekeeping: Copy the shadow-timekeeper over the
      real timekeeper last") it has become possible on arm64 to:
      
      - Obtain a CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE timestamp
        via syscall.
      - Subsequently obtain a timestamp for the same clock ID via VDSO which
        predates the first timestamp (by one jiffy).
      
      This is because arm64's update_vsyscall derives the coarse time
      using the __current_kernel_time interface, when it should really
      be using the timekeeper object provided to it by the timekeeping
      core. It happened to work before only because
      __current_kernel_time would access the same timekeeper object
      which had been passed to update_vsyscall. This is no longer the
      case (a sketch of the fix follows this entry).
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      878854a3
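      A hedged sketch of the corrected update_vsyscall(); the timekeeper
      field names follow the v4.2-era layout and are approximate. The
      point is that the coarse fields are derived from the timekeeper
      handed in, never from __current_kernel_time(), which may still
      reflect the previous update.

      void update_vsyscall(struct timekeeper *tk)
      {
              /* ... sequence counting and fine-grained fields
               * elided ... */
              vdso_data->xtime_coarse_sec  = tk->xtime_sec;
              vdso_data->xtime_coarse_nsec = tk->tkr_mono.xtime_nsec >>
                                             tk->tkr_mono.shift;
      }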