1. 21 Aug 2015, 1 commit
    • arm64: entry: always restore x0 from the stack on syscall return · 412fcb6c
      Authored by Will Deacon
      We have a micro-optimisation on the fast syscall return path where we
      take care to keep x0 live with the return value from the syscall so that
      we can avoid restoring it from the stack. The benefit of doing this is
      fairly suspect, since we will be restoring x1 from the stack anyway
      (which lives adjacent in the pt_regs structure) and the only additional
      cost is saving x0 back to pt_regs after the syscall handler, which could
      be seen as a poor man's prefetch.
      
      More importantly, this causes issues with the context tracking code.
      
      The ct_user_enter macro ends up branching into C code, which is free to
      use x0 as a scratch register, and we consequently end up returning junk
      to userspace as the syscall return value. Rather than special-casing the
      context-tracking code, this patch removes the questionable optimisation
      entirely.
      
      Cc: <stable@vger.kernel.org>
      Cc: Larry Bassel <larry.bassel@linaro.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
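
      For context, a rough C sketch of why the skipped reload is nearly free:
      in the arm64 register frame the general-purpose registers live in one
      contiguous array, so x0 and x1 sit in adjacent 8-byte slots and the exit
      path can reload the pair with a single load. The struct below is a
      simplified illustration modelled on the real user_pt_regs layout, not
      the kernel header itself:

          /* Simplified sketch of the register frame saved on the kernel stack.
           * x0 is regs[0] and x1 is regs[1], i.e. 8 bytes apart, so both can
           * be restored together with one paired load (ldp) on return. */
          struct pt_regs_sketch {
              unsigned long long regs[31];   /* x0..x30 */
              unsigned long long sp;
              unsigned long long pc;
              unsigned long long pstate;
          };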
  2. 05 Aug 2015, 2 commits
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Authored by Will Deacon
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
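
      A conceptual C-level sketch of the maintenance sequence described above
      (the actual change lives in the early boot assembly, before the CPU
      leaves the identity mapping; the function name here is made up for
      illustration):

          /* Invalidate this CPU's local I-cache after the MMU is on, so that
           * anything speculated from the PoC is dropped and refetched from
           * the PoU. Sketch only; the real code is in the assembly path. */
          static inline void local_icache_inval_sketch(void)
          {
              asm volatile(
              "    ic    iallu\n"     /* invalidate local I-cache to the PoU  */
              "    dsb   nsh\n"       /* wait for the invalidation to finish  */
              "    isb\n"             /* resynchronize the instruction stream */
              ::: "memory");
          }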
    • arm64: alternatives: ensure secondary CPUs execute ISB after patching · 04b8637b
      Authored by Will Deacon
      In order to guarantee that the patched instruction stream is visible to
      a CPU, that CPU must execute an isb instruction after any related cache
      maintenance has completed.
      
      The instruction patching routines in kernel/insn.c get this right for
      things like jump labels and ftrace, but the alternatives patching omits
      it entirely, leaving secondary cores in a potential limbo between the
      old and the new code.
      
      This patch adds an isb following the secondary polling loop in the
      alternatives patching code.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
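
      Sketched in C, the secondary-CPU side that this fixes looks roughly like
      the following (illustrative names; READ_ONCE(), cpu_relax() and isb()
      are existing kernel primitives):

          /* Illustrative sketch: a secondary CPU waits for the patching CPU
           * to signal completion, then executes an isb so it cannot keep
           * running on stale, pre-patch instructions. */
          static void wait_for_text_patching(int *patching_done)
          {
              while (!READ_ONCE(*patching_done))
                  cpu_relax();      /* spin politely until patching is done */

              isb();                /* make the patched instructions visible */
          }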
  3. 03 Aug 2015, 1 commit
  4. 01 Aug 2015, 1 commit
    • arm64: restore cpu suspend/resume functionality · b511a659
      Authored by Sudeep Holla
      Commit 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
      accidentally retained code for !CONFIG_SMP in the cpu_resume function. This
      resulted in the hash index in x7 being zeroed after it had been properly
      computed; that value is then used to look up the cpu context pointer while
      resuming.
      
      This patch removes the remnant code and restores the cpu suspend/resume
      functionality.
      
      Fixes: 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 31 Jul 2015, 2 commits
    • ARM64: PCI: do not enable resources on PROBE_ONLY systems · 72407514
      Authored by Lorenzo Pieralisi
      On ARM64 PROBE_ONLY PCI systems, resources are not currently claimed and
      therefore cannot be enabled, since they do not have a valid parent
      pointer; this in turn prevents PCI devices from being enabled on ARM64
      PROBE_ONLY systems, causing PCI device initialization to fail.
      
      To solve this issue, resources must be claimed when devices are
      added on PROBE_ONLY systems, which ensures that the resource hierarchy
      is validated and the resource tree is sane, but this requires changes
      in the ARM64 resource management code that can adversely affect existing
      PCI set-ups (claiming resources on !PROBE_ONLY systems might break
      existing ARM64 PCI platform implementations).
      
      As a temporary solution, in preparation for a proper resource-claiming
      implementation in the ARM64 core, this patch enables PCI PROBE_ONLY
      systems on ARM64 by adding a pcibios_enable_device() arch implementation
      that simply skips enabling resources on PROBE_ONLY systems (mirroring
      ARM behaviour).
      
      This is always a safe thing to do, because on PROBE_ONLY systems the
      configuration space set-up can be considered immutable; it also paves
      the way for proper resource claiming, which would finally validate the
      PCI resource tree in the ARM64 arch implementation on PROBE_ONLY
      systems.
      
      For !PROBE_ONLY systems, resource enablement in pcibios_enable_device()
      on ARM64 is implemented as in the current PCI core, leaving the
      behaviour unchanged.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
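
      The arch hook described above plausibly boils down to a few lines along
      these lines (a sketch mirroring the ARM behaviour; pci_has_flag() and
      pci_enable_resources() are existing PCI core helpers):

          #include <linux/pci.h>

          /* Sketch of an arch pcibios_enable_device() hook: on PROBE_ONLY
           * systems the firmware-assigned configuration is treated as
           * immutable, so resource enablement is skipped; otherwise fall
           * through to the generic enable path. */
          int pcibios_enable_device(struct pci_dev *dev, int mask)
          {
              if (pci_has_flag(PCI_PROBE_ONLY))
                  return 0;

              return pci_enable_resources(dev, mask);
          }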
    • arm64: alternative: put secondary CPUs into polling loop during patch · ef5e724b
      Authored by Will Deacon
      When patching the kernel text with alternatives, we may end up patching
      parts of the stop_machine state machine (e.g. atomic_dec_and_test in
      ack_state) and consequently corrupt the instruction stream of any
      secondary CPUs.
      
      This patch passes the cpu_online_mask to stop_machine, forcing all of
      the CPUs into our own callback which can place the secondary cores into
      a dumb (but safe!) polling loop whilst the patching is carried out.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
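
      In outline, the mechanism reads something like the sketch below
      (illustrative names; stop_machine(), cpu_online_mask, smp_processor_id()
      and cpu_relax() are the real kernel facilities, and the isb() from the
      companion commit earlier in this list would slot in right after the spin
      loop):

          #include <linux/stop_machine.h>
          #include <linux/smp.h>

          static int patching_done;    /* set once the boot CPU has finished */

          /* Illustrative callback: CPU 0 rewrites the text while every other
           * online CPU parks in a dumb (but safe) polling loop. */
          static int patch_text_multi_stop(void *unused)
          {
              if (smp_processor_id()) {
                  while (!READ_ONCE(patching_done))
                      cpu_relax();
              } else {
                  /* ... apply the alternative instruction sequences here ... */
                  WRITE_ONCE(patching_done, 1);
              }
              return 0;
          }

          void apply_patching_sketch(void)
          {
              /* cpu_online_mask forces all online CPUs into the callback */
              stop_machine(patch_text_multi_stop, NULL, cpu_online_mask);
          }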
  6. 30 Jul 2015, 2 commits
  7. 28 Jul 2015, 1 commit
    • arm64: debug: rename enum debug_el to avoid symbol collision · 6f883d10
      Authored by Will Deacon
      lib/list_sort.c defines a 'struct debug_el', where "el" is presumably a
      contraction of "element". This conflicts with 'enum debug_el' in our
      asm/debug-monitors.h header file, where "el" stands for Exception Level.
      
      The result is a build failure when targeting allmodconfig, so rename our
      enum to 'dbg_active_el' to be slightly more explicit about what it is.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
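
      Concretely, the rename amounts to something like the following (a sketch
      based on the commit description; the enumerator names are assumed for
      illustration):

          /* Previously 'enum debug_el', which collided with 'struct debug_el'
           * in lib/list_sort.c on allmodconfig builds. */
          enum dbg_active_el {
              DBG_ACTIVE_EL0 = 0,    /* debug taken from Exception Level 0 */
              DBG_ACTIVE_EL1,        /* debug taken from Exception Level 1 */
          };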
  8. 27 Jul 2015, 25 commits
  9. 22 Jul 2015, 2 commits
  10. 10 Jul 2015, 1 commit
    • arm64: entry32: remove pointless register assignment · ad2daa85
      Authored by Mark Rutland
      We currently set x27 in compat_sys_sigreturn_wrapper and
      compat_sys_rt_sigreturn_wrapper, similarly to what we do with r8/why on
      32-bit ARM, in an attempt to prevent sigreturns from being restarted.
      
      However, on arm64 we have always used pt_regs::syscallno for syscall
      restarting (for both native and compat tasks), and x27 is never
      inspected again before being overwritten in kernel_exit.
      
      This patch removes the pointless register assignments.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 09 Jul 2015, 1 commit
  12. 07 Jul 2015, 1 commit