1. 16 Aug 2017 (7 commits)
    • arm64: add basic VMAP_STACK support · e3067861
      Committed by Mark Rutland
      This patch enables arm64 to be built with vmap'd task and IRQ stacks.
      
      As vmap'd stacks are mapped at page granularity, stacks must be a multiple of
      PAGE_SIZE. This means that a 64K page kernel must use stacks of at least 64K in
      size.
      
      To minimize the increase in Image size, IRQ stacks are dynamically allocated at
      boot time, rather than embedding the boot CPU's IRQ stack in the kernel image.
      
      This patch was co-authored by Ard Biesheuvel and Mark Rutland.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      e3067861
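      A minimal, standalone sketch of the page-granularity constraint described above. The 16K default stack size and the page sizes are illustrative assumptions, not values taken from the patch:

        #include <stdio.h>

        #define MIN_THREAD_SHIFT 14                      /* assume a 16K default task stack */

        /* With vmap'd stacks, the stack must cover a whole number of pages. */
        static unsigned long stack_size(unsigned long page_size)
        {
            unsigned long size = 1UL << MIN_THREAD_SHIFT;

            return (size + page_size - 1) & ~(page_size - 1);
        }

        int main(void)
        {
            printf("4K pages:  stack = %lu KiB\n", stack_size(4UL << 10) >> 10);
            printf("64K pages: stack = %lu KiB\n", stack_size(64UL << 10) >> 10);
            return 0;
        }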
    • arm64: use an irq stack pointer · f60fe78f
      Committed by Mark Rutland
      We allocate our IRQ stacks using a percpu array. This allows us to generate our
      IRQ stack pointers with adr_this_cpu, but bloats the kernel Image with the boot
      CPU's IRQ stack. Additionally, these are packed with other percpu variables,
      and aren't guaranteed to have guard pages.
      
      When we enable VMAP_STACK we'll want to vmap our IRQ stacks also, in order to
      provide guard pages and to permit more stringent alignment requirements. Doing
      so will require that we use a percpu pointer to each IRQ stack, rather than
      allocating a percpu IRQ stack in the kernel image.
      
      This patch updates our IRQ stack code to use a percpu pointer to the base of
      each IRQ stack. This will allow us to change the way the stack is allocated
      with minimal changes elsewhere. In some cases we may try to backtrace before
      the IRQ stack pointers are initialised, so on_irq_stack() is updated to account
      for this.
      
      In testing with cyclictest, there was no measurable difference between using
      adr_this_cpu (for irq_stack) and ldr_this_cpu (for irq_stack_ptr) in the IRQ
      entry path.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      f60fe78f
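      A hedged, userspace-flavoured sketch of the idea above: a per-CPU array of pointers to separately allocated stacks, with on_irq_stack() tolerating pointers that are not yet set up. Names and sizes are illustrative, not the kernel's:

        #include <stdbool.h>
        #include <stdlib.h>

        #define NR_CPUS        4
        #define IRQ_STACK_SIZE (16 * 1024)

        /* A pointer per CPU, rather than a stack embedded in the kernel image. */
        static void *irq_stack_ptr[NR_CPUS];

        static int init_irq_stacks(void)
        {
            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                irq_stack_ptr[cpu] = malloc(IRQ_STACK_SIZE);   /* later: a vmap'd allocation */
                if (!irq_stack_ptr[cpu])
                    return -1;
            }
            return 0;
        }

        /* May be called before init_irq_stacks(), so treat a NULL base as "not on it". */
        static bool on_irq_stack(unsigned long sp, int cpu)
        {
            unsigned long base = (unsigned long)irq_stack_ptr[cpu];

            return base && sp >= base && sp < base + IRQ_STACK_SIZE;
        }

        int main(void)
        {
            if (init_irq_stacks())
                return 1;
            return !on_irq_stack((unsigned long)irq_stack_ptr[0] + 64, 0);
        }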
    • arm64: factor out entry stack manipulation · b11e5759
      Committed by Mark Rutland
      In subsequent patches, we will detect stack overflow in our exception
      entry code, by verifying the SP after it has been decremented to make
      space for the exception regs.
      
      This verification code is small, and we can minimize its impact by
      placing it directly in the vectors. To avoid redundant modification of
      the SP, we also need to move the initial decrement of the SP into the
      vectors.
      
      As a preparatory step, this patch introduces kernel_ventry, which
      performs this decrement, and updates the entry code accordingly.
      Subsequent patches will fold SP verification into kernel_ventry.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [Mark: turn into prep patch, expand commit msg]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      b11e5759
    • arm64: move SEGMENT_ALIGN to <asm/memory.h> · 8018ba4e
      Committed by Mark Rutland
      Currently we define SEGMENT_ALIGN directly in our vmlinux.lds.S.
      
      This is unfortunate, as the EFI stub currently open-codes the same
      number, and in future we'll want to fiddle with this.
      
      This patch moves the definition to our <asm/memory.h>, where it can be
      used by both vmlinux.lds.S and the EFI stub code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      8018ba4e
    • arm64: clean up irq stack definitions · f60ad4ed
      Committed by Mark Rutland
      Before we add yet another stack to the kernel, it would be nice to
      ensure that we consistently organise stack definitions and related
      helper functions.
      
      This patch moves the basic IRQ stack definitions to <asm/memory.h> to
      live with their task stack counterparts. Helpers used for unwinding are
      moved into <asm/stacktrace.h>, where subsequent patches will add helpers
      for other stacks. Includes are fixed up accordingly.
      
      This patch is a pure refactoring -- there should be no functional
      changes as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      f60ad4ed
    • arm64: kernel: remove {THREAD,IRQ_STACK}_START_SP · 34be98f4
      Committed by Ard Biesheuvel
      For historical reasons, we leave the top 16 bytes of our task and IRQ
      stacks unused, a practice used to ensure that the SP can always be
      masked to find the base of the current stack (historically, where
      thread_info could be found).
      
      However, this is not necessary, as:
      
      * When an exception is taken from a task stack, we decrement the SP by
        S_FRAME_SIZE and stash the exception registers before we compare the
        SP against the task stack. In such cases, the SP must be at least
        S_FRAME_SIZE below the limit, and can be safely masked to determine
        whether the task stack is in use.
      
      * When transitioning to an IRQ stack, we'll place a dummy frame onto the
        IRQ stack before enabling asynchronous exceptions, or executing code
        we expect to trigger faults. Thus, if an exception is taken from the
        IRQ stack, the SP must be at least 16 bytes below the limit.
      
      * We no longer mask the SP to find the thread_info, which is now found
        via sp_el0. Note that historically, the offset was critical to ensure
        that cpu_switch_to() found the correct stack for new threads that
        hadn't yet executed ret_from_fork().
      
      Given that, this initial offset serves no purpose, and can be removed.
      This brings us in-line with other architectures (e.g. x86) which do not
      rely on this masking.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [Mark: rebase, kill THREAD_START_SP, commit msg additions]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      34be98f4
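      An illustrative, non-kernel sketch of the masking argument above: as long as the SP stays strictly inside the stack, it masks down to the stack base with no reserved gap at the top. The 16K size and the sample address are assumptions:

        #include <stdio.h>

        #define THREAD_SIZE (16UL * 1024)                /* assume a 16K, size-aligned task stack */

        static unsigned long stack_base(unsigned long sp)
        {
            return sp & ~(THREAD_SIZE - 1);
        }

        int main(void)
        {
            unsigned long base = 0x10000000UL;           /* made-up, THREAD_SIZE-aligned base */

            /* Any SP in [base, base + THREAD_SIZE) masks back to base, so the
             * old 16-byte "START_SP" gap below the top is not needed. */
            printf("%#lx\n", stack_base(base + THREAD_SIZE - 16));
            printf("%#lx\n", stack_base(base + 64));
            return 0;
        }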
    • arm64: remove __die()'s stack dump · c5bc503c
      Committed by Mark Rutland
      Our __die() implementation tries to dump the stack memory, in addition
      to a backtrace, which is problematic.
      
      For contemporary 16K stacks, this can be a lot of data, which can take a
      long time to dump, and can push other useful context out of the kernel's
      printk ringbuffer (and/or a user's scrollback buffer on an attached
      console).
      
      Additionally, the code implicitly assumes that the SP is on the task's
      stack, and tries to dump everything between the SP and the highest task
      stack address. When the SP points at an IRQ stack (or is corrupted),
      this makes the kernel attempt to dump vast amounts of VA space. With
      vmap'd stacks, this may result in erroneous accesses to peripherals.
      
      This patch removes the memory dump, leaving us to rely on the backtrace,
      and other means of dumping stack memory such as kdump.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      c5bc503c
  2. 09 Aug 2017 (2 commits)
    • arm64: unwind: remove sp from struct stackframe · 31e43ad3
      Committed by Ard Biesheuvel
      The unwind code sets the sp member of struct stackframe to
      'frame pointer + 0x10' unconditionally, without regard for whether
      doing so produces a legal value. So let's simply remove it now that
      we have stopped using it anyway.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      31e43ad3
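      A hedged sketch of the resulting unwinder state once sp is dropped; field names follow the commit text and are illustrative rather than a verbatim copy of the header:

        struct stackframe {
            unsigned long fp;    /* frame pointer of the frame being unwound */
            unsigned long pc;    /* return address recovered from that frame */
        };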
    • arm64: unwind: reference pt_regs via embedded stack frame · 73267498
      Committed by Ard Biesheuvel
      As it turns out, the unwind code is slightly broken, and probably has
      been for a while. The problem is in the dumping of the exception stack,
      which is intended to dump the contents of the pt_regs struct at each
      level in the call stack where an exception was taken and routed to a
      routine marked as __exception (which means its stack frame is right
      below the pt_regs struct on the stack).
      
      'Right below the pt_regs struct' is ill-defined, though: the unwind
      code assigns 'frame pointer + 0x10' to the .sp member of the stackframe
      struct at each level, and dump_backtrace() happily dereferences that as
      the pt_regs pointer when encountering an __exception routine. However,
      the actual size of the stack frame created by this routine (which could
      be one of many __exception routines we have in the kernel) is not known,
      and so frame.sp is pretty useless to figure out where struct pt_regs
      really is.
      
      So it seems the only way to ensure that we can find our struct pt_regs
      when walking the stack frames is to put it at a known fixed offset of
      the stack frame pointer that is passed to such __exception routines.
      The simplest way to do that is to put it inside pt_regs itself, which is
      the main change implemented by this patch. As a bonus, doing this allows
      us to get rid of a fair amount of cruft related to walking from one stack
      to the other, which is especially nice since we intend to introduce yet
      another stack for overflow handling once we add support for vmapped
      stacks. It also fixes an inconsistency where we only add a stack frame
      pointing to ELR_EL1 if we are executing from the IRQ stack but not when
      we are executing from the task stack.
      
      To consistently identify exception regs even in the presence of exceptions
      taken from entry code, we must check whether the next frame was created
      by entry text, rather than whether the current frame was created by
      exception text.
      
      To avoid backtracing using PCs that fall in the idmap, or are controlled
      by userspace, we must explicitly zero the FP and LR in startup paths, and
      must ensure that the frame embedded in pt_regs is zeroed upon entry from
      EL0. To avoid these NULL entries showing in the backtrace, unwind_frame()
      is updated to avoid them.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [Mark: compare current frame against .entry.text, avoid bogus PCs]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      73267498
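      A hedged sketch of the layout change: an {fp, lr} frame record is embedded at a fixed offset inside pt_regs, so the unwinder can locate the exception regs from the frame pointer alone. Field names and the surrounding members are illustrative:

        typedef unsigned long long u64;

        struct frame_record {
            u64 fp;
            u64 lr;
        };

        struct pt_regs_sketch {
            u64 regs[31];
            u64 sp;
            u64 pc;
            u64 pstate;
            /* ... */
            struct frame_record stackframe;   /* link record at a known, fixed offset */
        };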
  3. 08 Aug 2017 (4 commits)
    • arm64: unwind: disregard frame.sp when validating frame pointer · c7365330
      Committed by Ard Biesheuvel
      Currently, when unwinding the call stack, we validate the frame pointer
      of each frame against frame.sp, whose value is not clearly defined, and
      which makes it more difficult to link stack frames together across
      different stacks. It is far better to simply check whether the frame
      pointer itself points into a valid stack.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      c7365330
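      An illustrative sketch of the validation style described above: check the frame pointer against the bounds (and alignment) of a known stack rather than against the ill-defined frame.sp. Names are assumptions:

        #include <stdbool.h>

        struct stack_bounds {
            unsigned long low;
            unsigned long high;
        };

        static bool fp_on_stack(unsigned long fp, const struct stack_bounds *s)
        {
            if (fp & 0xf)                        /* AAPCS64 frame pointers are 16-byte aligned */
                return false;
            return fp >= s->low && fp < s->high;
        }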
    • arm64: unwind: avoid percpu indirection for irq stack · 09668372
      Committed by Mark Rutland
      Our IRQ_STACK_PTR() and on_irq_stack() helpers both take a cpu argument,
      used to generate a percpu address. In all cases, they are passed
      {raw_,}smp_processor_id(), so this parameter is redundant.
      
      Since {raw_,}smp_processor_id() use a percpu variable internally, this
      approach means we generate a percpu offset to find the current cpu, then
      use this to index an array of percpu offsets, which we then use to find
      the current CPU's IRQ stack pointer. Thus, most of the work is
      redundant.
      
      Instead, we can consistently use raw_cpu_ptr() to generate the CPU's
      irq_stack pointer by simply adding the percpu offset to the irq_stack
      address, which is simpler in both respects.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      09668372
    • arm64: move non-entry code out of .entry.text · ed84b4e9
      Committed by Mark Rutland
      Currently, cpu_switch_to and ret_from_fork both live in .entry.text,
      though neither form the critical path for an exception entry.
      
      In subsequent patches, we will require that code in .entry.text is part
      of the critical path for exception entry, for which we can assume
      certain properties (e.g. the presence of exception regs on the stack).
      
      Neither cpu_switch_to nor ret_from_fork will meet these requirements, so
      we must move them out of .entry.text. To ensure that neither are kprobed
      after being moved out of .entry.text, we must explicitly blacklist them,
      requiring a new NOKPROBE() asm helper.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      ed84b4e9
    • arm64: consistently use bl for C exception entry · 2d0e751a
      Committed by Mark Rutland
      In most cases, our exception entry assembly branches to C handlers with
      a BL instruction, but in cases where we do not expect to return, we use
      B instead.
      
      While this is correct today, it means that backtraces for fatal
      exceptions miss the entry assembly (as the LR is stale at the point we
      call C code), while non-fatal exceptions have the entry assembly in the
      LR. In subsequent patches, we will need the LR to be set in these cases
      in order to backtrace reliably.
      
      This patch updates these sites to use a BL, ensuring consistency, and
      preparing for backtrace rework. An ASM_BUG() is added after each of
      these new BLs, which both catches unexpected returns, and ensures that
      the LR value doesn't point to another function label.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      2d0e751a
  4. 20 Jul 2017 (2 commits)
    • arm64: Convert to using %pOF instead of full_name · a270f327
      Committed by Rob Herring
      Now that we have a custom printf format specifier, convert users of
      full_name to use %pOF instead. This is preparation to remove storing
      of the full path string for each node.
      Signed-off-by: Rob Herring <robh@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      a270f327
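      A hedged example of the conversion pattern (kernel-style snippet, not a specific hunk from this patch): print the device_node itself with %pOF instead of dereferencing ->full_name:

        /* before */
        pr_err("%s: failed to parse interrupt\n", np->full_name);

        /* after */
        pr_err("%pOF: failed to parse interrupt\n", np);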
    • arm64: traps: disable irq in die() · 6f44a0ba
      Committed by Qiao Zhou
      In the current die(), IRQs are disabled only around the __die() handling,
      not around the possible panic() handling. Since the logging in __die()
      can take several hundred milliseconds, a new IRQ might arrive and
      interrupt the current die().
      
      If the process calling die() holds a critical resource that some other
      process scheduled later also needs, the two deadlock, and the pending
      panic() is never executed.
      
      So disable IRQs for the whole flow of die().
      Signed-off-by: Qiao Zhou <qiaozhou@asrmicro.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6f44a0ba
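      A hedged, simplified sketch of the shape of the fix; die_lock, __die() and the oops bookkeeping are arm64 internals shown here only to illustrate where the IRQ-disabled region now ends:

        static void die_sketch(const char *str, struct pt_regs *regs, int err)
        {
            unsigned long flags;

            /* IRQs stay off for the whole flow, not just around __die() ... */
            raw_spin_lock_irqsave(&die_lock, flags);

            __die(str, err, regs);

            if (panic_on_oops)
                panic("Fatal exception");        /* ... including a possible panic() */

            raw_spin_unlock_irqrestore(&die_lock, flags);
        }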
  5. 03 Jul 2017 (1 commit)
    • arm64: PCI: Drop DT IRQ allocation from pcibios_alloc_irq() · 769b461f
      Committed by Lorenzo Pieralisi
      With the introduction of struct pci_host_bridge.map_irq pointer it is
      possible to assign IRQs for all devices originating from a PCI host bridge
      at probe time; this is implemented through pci_assign_irq() that relies on
      the struct pci_host_bridge.map_irq pointer to map IRQ for a given device.
      
      The benefits this brings are twofold:
      
        - the IRQ for a device is assigned once at probe time
        - the IRQ assignment works also for hotplugged devices
      
      With all DT based PCI host bridges converted to the struct
      pci_host_bridge.{map/swizzle}_irq hooks mechanism the DT IRQ allocation in
      ARM64 pcibios_alloc_irq() is now redundant and can be removed.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      769b461f
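      A hedged illustration of the hook assignment that makes the arch-level DT parsing redundant, as done by DT-based host bridge drivers; this is a generic pattern, not a hunk from this patch:

        /* In a DT-based host bridge driver's probe path: */
        bridge->map_irq     = of_irq_parse_and_map_pci;
        bridge->swizzle_irq = pci_common_swizzle;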
  6. 30 Jun 2017 (4 commits)
  7. 29 Jun 2017 (9 commits)
  8. 24 Jun 2017 (2 commits)
    • arm64: ftrace: fix !CONFIG_ARM64_MODULE_PLTS kernels · 8486e54d
      Committed by Mark Rutland
      When a kernel is built without CONFIG_ARM64_MODULE_PLTS, we don't
      generate the expected branch instruction in ftrace_make_nop(). This
      means we pass zero (rather than a valid branch) to ftrace_modify_code()
      as the expected instruction to validate. This causes us to return
      -EINVAL to the core ftrace code for a valid case, resulting in a splat
      at boot time.
      
      This was an unintended effect of commit:
      
        68764420 ("arm64: ftrace: fix building without CONFIG_MODULES")
      
      ... which incorrectly moved the generation of the branch instruction
      into the ifdef for CONFIG_ARM64_MODULE_PLTS.
      
      This patch fixes the issue by moving the ifdef inside of the relevant
      if-else case, and always checking that the branch is in range,
      regardless of CONFIG_ARM64_MODULE_PLTS. This ensures that we generate
      the expected branch instruction, and also improves our sanity checks.
      
      For consistency, both ftrace_make_nop() and ftrace_make_call() are
      updated with this pattern.
      
      Fixes: 68764420 ("arm64: ftrace: fix building without CONFIG_MODULES")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8486e54d
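      A hedged, heavily simplified sketch of the fixed structure; branch_in_range() and module_plt_addr() are hypothetical stand-ins, while the real code uses the kernel's insn helpers and fuller error handling:

        static int make_nop_sketch(unsigned long pc, unsigned long addr)
        {
            u32 old, new;

            if (!branch_in_range(pc, addr)) {
                if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
                    return -EINVAL;              /* no PLTs: an out-of-range caller is an error */
                addr = module_plt_addr(pc);      /* hypothetical: redirect via a module PLT */
            }

            /* The expected "old" instruction is always a real branch now. */
            old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
            new = aarch64_insn_gen_nop();

            return ftrace_modify_code(pc, old, new, true);
        }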
    • arm64: signal: Allow expansion of the signal frame · 33f08261
      Committed by Dave Martin
      This patch defines an extra_context signal frame record that can be
      used to describe an expanded signal frame, and modifies the context
      block allocator and signal frame setup and parsing code to create,
      populate, parse and decode this block as necessary.
      
      To avoid abuse by userspace, parse_user_sigframe() attempts to
      ensure that:
      
       * no more than one extra_context is accepted;
       * the extra context data is a sensible size, and properly placed
         and aligned.
      
      The extra_context data is required to start at the first 16-byte
      aligned address immediately after the dummy terminator record
      following extra_context in rt_sigframe.__reserved[] (as ensured
      during signal delivery).  This serves as a sanity-check that the
      signal frame has not been moved or copied without taking the extra
      data into account.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      [will: add __force annotation when casting extra_datap to __user pointer]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      33f08261
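      A hedged sketch of an extra_context-style record; the real UAPI layout lives in the arm64 sigcontext headers, and the field names here follow the commit message rather than being a verbatim copy:

        #include <stdint.h>

        struct aarch64_ctx_sketch {       /* header shared by all sigcontext records */
            uint32_t magic;
            uint32_t size;
        };

        struct extra_context_sketch {
            struct aarch64_ctx_sketch head;
            uint64_t datap;               /* 16-byte aligned address of the extra space */
            uint32_t size;                /* size in bytes of the extra space */
            uint32_t reserved[3];
        };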
  9. 22 Jun 2017 (4 commits)
    • arm64: dump cpu_hwcaps at panic time · 8effeaaf
      Committed by Mark Rutland
      When debugging a kernel panic(), it can be useful to know which CPU
      features have been detected by the kernel, as some code paths can depend
      on these (and may have been patched at runtime).
      
      This patch adds a notifier to dump the detected CPU caps (as a hex
      string) at panic(), when we log other information useful for debugging.
      On a Juno R1 system running v4.12-rc5, this looks like:
      
      [  615.431249] Kernel panic - not syncing: Fatal exception in interrupt
      [  615.437609] SMP: stopping secondary CPUs
      [  615.441872] Kernel Offset: disabled
      [  615.445372] CPU features: 0x02086
      [  615.448522] Memory Limit: none
      
      A developer can decode this by looking at the corresponding
      <asm/cpucaps.h> bits. For example, the above decodes as:
      
      * bit  1: ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE
      * bit  2: ARM64_WORKAROUND_845719
      * bit  7: ARM64_WORKAROUND_834220
      * bit 13: ARM64_HAS_32BIT_EL0
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8effeaaf
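      For convenience, a small standalone program that turns a printed mask like the one above into set bit positions, which can then be looked up in <asm/cpucaps.h> for the kernel in question:

        #include <stdio.h>

        int main(void)
        {
            unsigned long caps = 0x02086;                /* value from the example log above */

            for (int bit = 0; bit < (int)(8 * sizeof(caps)); bit++)
                if (caps & (1UL << bit))
                    printf("bit %2d set\n", bit);        /* prints bits 1, 2, 7 and 13 */
            return 0;
        }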
    • arm64: ptrace: Flush user-RW TLS reg to thread_struct before reading · 936eb65c
      Committed by Dave Martin
      When reading current's user-writable TLS register (which occurs
      when dumping core for native tasks), it is possible that userspace
      has modified it since the time the task was last scheduled out.
      The new TLS register value is not guaranteed to have been written
      immediately back to thread_struct in this case.
      
      As a result, a coredump can capture stale data for this register.
      Reading the register for a stopped task via ptrace is unaffected.
      
      For native tasks, this patch explicitly flushes the TPIDR_EL0
      register back to thread_struct before dumping when operating on
      current, thus ensuring that coredump contents are up to date.  For
      compat tasks, the TLS register is not user-writable and so cannot
      be out of sync, so no flush is required in compat_tls_get().
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      936eb65c
    • arm64: ptrace: Flush FPSIMD regs back to thread_struct before reading · e1d5a8fb
      Committed by Dave Martin
      When reading the FPSIMD state of current (which occurs when dumping
      core), it is possible that userspace has modified the FPSIMD
      registers since the time the task was last scheduled out.  Such
      changes are not guaranteed to be reflected immediately in
      thread_struct.
      
      As a result, a coredump can contain stale values for these
      registers.  Reading the registers of a stopped task via ptrace is
      unaffected.
      
      This patch explicitly flushes the CPU state back to thread_struct
      before dumping when operating on current, thus ensuring that
      coredump contents are up to date.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e1d5a8fb
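      A hedged sketch of where the flush sits (kernel-style, with the rest of the regset handler elided), assuming fpsimd_preserve_current_state() as the arm64 helper that writes the live FPSIMD state back to thread_struct:

        static int fpr_get_sketch(struct task_struct *target /* , regset args ... */)
        {
            if (target == current)
                fpsimd_preserve_current_state();   /* flush live regs to thread_struct first */

            /* ... then copy target->thread.fpsimd_state out as before ... */
            return 0;
        }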
    • arm64: ptrace: Fix VFP register dumping in compat coredumps · af66b2d8
      Committed by Dave Martin
      Currently, VFP registers are omitted from coredumps for compat
      processes, due to a bug in the REGSET_COMPAT_VFP regset
      implementation.
      
      compat_vfp_get() needs to transfer non-contiguous data from
      thread_struct.fpsimd_state, and uses put_user() to handle the
      offending trailing word (FPSCR).  This fails when copying to a
      kernel address (i.e., kbuf && !ubuf), which is what happens when
      dumping core.  As a result, the ELF coredump core code silently
      omits the NT_ARM_VFP note from the dump.
      
      It would be possible to work around this with additional special
      case code for the put_user(), but since user_regset_copyout() is
      explicitly designed to handle this scenario it is cleaner to port
      the put_user() to a user_regset_copyout() call, which this patch
      does.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      af66b2d8
  10. 21 Jun 2017 (1 commit)
    • time: Clean up CLOCK_MONOTONIC_RAW time handling · fc6eead7
      Committed by John Stultz
      Now that the sub-ns handling for CLOCK_MONOTONIC_RAW is fixed, remove
      the duplicative tk->raw_time.tv_nsec, which can be stored in
      tk->tkr_raw.xtime_nsec (similarly to how it's handled for monotonic
      time).
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Miroslav Lichvar <mlichvar@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Stephen Boyd <stephen.boyd@linaro.org>
      Cc: Kevin Brodsky <kevin.brodsky@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Daniel Mentz <danielmentz@google.com>
      Tested-by: Daniel Mentz <danielmentz@google.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      fc6eead7
  11. 20 Jun 2017 (4 commits)
    • arm64: signal: factor out signal frame record allocation · bb4322f7
      Committed by Dave Martin
      This patch factors out the allocator for signal frame optional
      records into a separate function, to ensure consistency and
      facilitate later expansion.
      
      No overrun checking is currently done, because the allocation is in
      user memory and the kernel does not yet try to allocate enough space
      in the signal frame for an overrun to occur.  This behaviour will be
      refined in future patches.
      
      The approach taken in this patch to allocation of the terminator
      record is not very clean: this will also be replaced in subsequent
      patches.
      
      For future extension, a comment is added in sigcontext.h
      documenting the current static allocations in __reserved[].  This
      will be important for determining under what circumstances
      userspace may or may not see an expanded signal frame.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      bb4322f7
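      A hedged sketch of an allocator of the shape described above: it hands out 16-byte aligned offsets within the frame's __reserved[] space and tracks how much has been used. The names and the layout struct are assumptions:

        #include <stddef.h>

        struct frame_layout {
            size_t used;        /* bytes of __reserved[] handed out so far */
        };

        static int sigframe_alloc_sketch(struct frame_layout *l,
                                         size_t *offset, size_t size)
        {
            *offset = l->used;
            l->used += (size + 15) & ~(size_t)15;   /* keep records 16-byte aligned */
            return 0;                               /* overrun checking comes later */
        }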
    • arm64: signal: factor frame layout and population into separate passes · bb4891a6
      Committed by Dave Martin
      In preparation for expanding the signal frame, this patch refactors
      the signal frame setup code in setup_sigframe() into two separate
      passes.
      
      The first pass, setup_sigframe_layout(), determines the size of the
      signal frame and its internal layout, including the presence and
      location of optional records.  The resulting knowledge is used to
      allocate and locate the user stack space required for the signal
      frame and to determine which optional records to include.
      
      The second pass, setup_sigframe(), is called once the stack frame
      is allocated in order to populate it with the necessary context
      information.
      
      As a result of these changes, it becomes more natural to represent
      locations in the signal frame by a base pointer and an offset,
      since the absolute address of each location is not known during the
      layout pass.  To be more consistent with this logic,
      parse_user_sigframe() is refactored to describe signal frame
      locations in a similar way.
      
      This change has no effect on the signal ABI, but will make it
      easier to expand the signal frame in future patches.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      bb4891a6
    • arm64: signal: Refactor sigcontext parsing in rt_sigreturn · 47ccb028
      Committed by Dave Martin
      Currently, rt_sigreturn does very limited checking on the
      sigcontext coming from userspace.
      
      Future additions to the sigcontext data will increase the potential
      for surprises.  Also, it is not clear whether the sigcontext
      extension records are supposed to occur in a particular order.
      
      To allow the parsing code to be extended more easily, this patch
      factors out the sigcontext parsing into a separate function, and
      adds extra checks to validate the well-formedness of the sigcontext
      structure.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      47ccb028
    • arm64: signal: split frame link record from sigcontext structure · 20987de3
      Committed by Dave Martin
      In order to be able to increase the amount of data currently
      written to the __reserved[] array in the signal frame, it is
      necessary to overwrite the locations currently occupied by the
      {fp,lr} frame link record pushed at the top of the signal stack.
      
      In order for this to work, this patch detaches the frame link
      record from struct rt_sigframe and places it separately at the top
      of the signal stack.  This will allow subsequent patches to insert
      data between it and __reserved[].
      
      This change relies on the non-ABI status of the placement of the
      frame record with respect to struct sigframe: this status is
      undocumented, but the placement is not declared or described in the
      user headers, and known unwinder implementations (libgcc,
      libunwind, gdb) appear not to rely on it.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      20987de3