  1. 15 Feb 2017, 1 commit
    • arm64: cpufeature: correctly handle MRS to XZR · 521c6461
      Mark Rutland authored
      In emulate_mrs() we may erroneously write back to the user SP rather
      than XZR if we trap an MRS instruction where Xt == 31.
      
      Use the new pt_regs_write_reg() helper to handle this correctly.
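      A minimal sketch of the helper's behaviour as described above (not the
      exact in-tree definition; dst_reg and val in the usage line are
      placeholder names): writes to register 31 are simply discarded, since in
      this context 31 encodes XZR rather than SP.

          /* Sketch: register 31 is the zero register here, so a write to it
           * is dropped instead of corrupting the saved user SP. */
          static inline void pt_regs_write_reg(struct pt_regs *regs, int r,
                                               unsigned long val)
          {
                  if (r != 31)
                          regs->regs[r] = val;
          }

          /* emulate_mrs() then writes the emulated value via the helper: */
          pt_regs_write_reg(regs, dst_reg, val);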
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 77c97b4e ("arm64: cpufeature: Expose CPUID registers by emulation")
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 03 Feb 2017, 2 commits
    • arm64: ensure __raw_read_system_reg() is self-consistent · 965861d6
      Mark Rutland authored
      We recently discovered that __raw_read_system_reg() erroneously mapped
      sysreg IDs to the wrong registers.
      
      To ensure that we don't get hit by a similar issue in future, this patch
      makes __raw_read_system_reg() use a macro for each case statement,
      ensuring that each case reads the correct register.
      
      To ensure that this patch hasn't introduced an issue, I've binary-diffed
      the object files before and after this patch. No code or data sections
      differ (though some debug sections differ due to line numbering
      changes).
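      A hedged sketch of the per-case macro approach (the macro name and the
      switch shape follow the commit description; the exact contents are an
      assumption):

          /* The case label and the register actually read come from the same
           * token, so they cannot drift apart. */
          #define read_sysreg_case(r) \
                  case r:         return read_sysreg_s(r)

          u64 __raw_read_system_reg(u32 sys_id)
          {
                  switch (sys_id) {
                  read_sysreg_case(SYS_ID_AA64PFR0_EL1);
                  read_sysreg_case(SYS_CTR_EL0);
                  /* ... one line per emulated ID register ... */
                  default:
                          BUG();
                          return 0;
                  }
          }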
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: fix erroneous __raw_read_system_reg() cases · 7d0928f1
      Mark Rutland authored
      Since it was introduced in commit da8d02d1 ("arm64/capabilities:
      Make use of system wide safe value"), __raw_read_system_reg() has
      erroneously mapped some sysreg IDs to other registers.
      
      For the fields in ID_ISAR5_EL1, our local feature detection will be
      erroneous. We may spuriously detect that a feature is uniformly
      supported, or may fail to detect when it actually is, meaning some
      compat hwcaps may be erroneous (or not enforced upon hotplug).
      
      This patch corrects the erroneous entries.
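      Purely for illustration (the registers below are hypothetical, not the
      actual entries fixed by this commit), the bug pattern was a case label
      whose body read a different register:

          /* before: label and register read do not match (hypothetical) */
          case SYS_ID_ISAR5_EL1:  return read_cpuid(ID_ISAR4_EL1);
          /* after: the case label and the register read agree */
          case SYS_ID_ISAR5_EL1:  return read_cpuid(ID_ISAR5_EL1);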
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: da8d02d1 ("arm64/capabilities: Make use of system wide safe value")
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 13 Jan 2017, 2 commits
  4. 12 Jan 2017, 3 commits
  5. 11 Jan 2017, 4 commits
  6. 10 Jan 2017, 3 commits
  7. 22 Nov 2016, 1 commit
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Catalin Marinas authored
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Enabling access to user space is done by restoring TTBR0_EL1 with the value
      from the struct thread_info ttbr0 variable. Interrupts must be disabled
      during the uaccess_ttbr0_enable code to ensure the atomicity of the
      thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
      get_thread_info asm macro from entry.S to assembler.h for reuse in the
      uaccess_ttbr0_* macros.
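      A hedged C-level sketch of the disable/enable pair described above (the
      real code is a mix of asm macros and C helpers; the offset constant name
      used here is an assumption):

          /* Block user access: point TTBR0_EL1 at the reserved zeroed page. */
          static inline void uaccess_ttbr0_disable(void)
          {
                  unsigned long ttbr;

                  /* reserved_ttbr0 sits at a fixed offset from swapper_pg_dir,
                   * so it can be derived from the current TTBR1_EL1 value. */
                  ttbr = read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET; /* name assumed */
                  write_sysreg(ttbr, ttbr0_el1);
                  isb();
          }

          /* Re-enable user access: restore the task's saved TTBR0 value. */
          static inline void uaccess_ttbr0_enable(void)
          {
                  unsigned long flags;

                  /* IRQs off so the thread_info.ttbr0 read and the TTBR0_EL1
                   * write happen atomically for this task. */
                  local_irq_save(flags);
                  write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
                  isb();
                  local_irq_restore(flags);
          }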
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 17 Nov 2016, 2 commits
  9. 20 Oct 2016, 1 commit
    • arm64: cpufeature: Schedule enable() calls instead of calling them via IPI · 2a6dcb2b
      James Morse authored
      The enable() call for a cpufeature/errata is called using on_each_cpu().
      This issues a cross-call IPI to get the work done. Implicitly, this
      stashes the running PSTATE in SPSR when the CPU receives the IPI, and
      restores it when we return. This means an enable() call can never modify
      PSTATE.
      
      To allow PAN to do this, change the on_each_cpu() call to use
      stop_machine(). This schedules the work on each CPU, which allows
      us to modify PSTATE.
      
      This involves changing the prototype of all the enable() functions.
      
      enable_cpu_capabilities() is called during boot and enables the feature
      on all online CPUs. This path now uses stop_machine(). CPU features for
      hotplug'd CPUs are enabled by verify_local_cpu_features() which only
      acts on the local CPU, and can already modify the running PSTATE as it
      is called from secondary_start_kernel().
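      A hedged sketch of the boot-time path after the change (assuming the
      enable() callbacks were converted to the int (*)(void *) signature that
      stop_machine() expects):

          /* Run each capability's enable() on every online CPU in a context
           * where PSTATE may be modified (unlike an IPI handler, which
           * restores SPSR on return). */
          static void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
          {
                  for (; caps->matches; caps++)
                          if (caps->enable && cpus_have_cap(caps->capability))
                                  stop_machine(caps->enable, NULL, cpu_online_mask);
          }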
      Reported-by: Tony Thompson <anthony.thompson@arm.com>
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  10. 09 Sep 2016, 3 commits
    • arm64: Rearrange CPU errata workaround checks · c47a1900
      Suzuki K Poulose authored
      Right now we run through the workaround checks on a CPU
      from __cpuinfo_store_cpu(). There are some problems with that:
      
      1) We initialise the system-wide CPU feature registers only after the
      boot CPU updates its cpuinfo. Now, if a workaround depends on the
      variance of a CPU ID feature (e.g., a check for cache line size mismatch),
      we have no way of performing it cleanly for the boot CPU.
      
      2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c. It
      is not an obvious place for that.
      
      This patch rearranges the CPU-specific capability (aka workaround) checks.
      
      1) At the moment we use verify_local_cpu_capabilities() to check whether a new
      CPU has all the system-advertised features. Use this for the secondary CPUs
      to perform the workaround check. For that we rename
        verify_local_cpu_capabilities() => check_local_cpu_capabilities()
      which (see the sketch below):
      
         If the system-wide capabilities haven't been initialised (i.e., the CPU
         is brought up at boot), updates the system-wide detected workarounds.
      
         Otherwise (i.e., a CPU hotplugged in later), verifies that this CPU conforms
         to the system-wide capabilities.
      
      2) The boot CPU updates the workarounds from smp_prepare_boot_cpu(), after we
      have initialised the system-wide CPU feature values.
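      A minimal sketch of the flow described in 1) and 2) above, assuming a
      sys_caps_initialised flag guarding the system-wide state:

          void check_local_cpu_capabilities(void)
          {
                  /* Boot path: system-wide state not finalised yet, so record
                   * this CPU's errata workarounds. */
                  if (!sys_caps_initialised)
                          update_cpu_errata_workarounds();
                  /* Hotplug path: verify this CPU against the finalised
                   * system-wide capabilities. */
                  else
                          verify_local_cpu_capabilities();
          }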
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Use consistent naming for errata handling · 89ba2645
      Suzuki K Poulose authored
      This is a cosmetic change that renames the functions dealing with
      the errata workarounds so that their names are more consistent.
      
      1) check_local_cpu_errata() => update_cpu_errata_workarounds()
      check_local_cpu_errata() actually updates the system's errata
      workarounds, so rename it to reflect that.
      
      2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
      Use errata_workarounds instead of _errata.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Set the safe value for L1 icache policy · ee7bc638
      Suzuki K Poulose authored
      Right now we use 0 as the safe value for CTR_EL0:L1Ip, an encoding which
      is currently not defined. The safe value for L1Ip should be the weakest
      of the policies, which happens to be AIVIVT. While at it, fix the
      comment about safe_val.
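      In the CTR_EL0 feature-register description this amounts to changing the
      L1Ip field's safe value from 0 to the AIVIVT policy; a hedged sketch of
      the kind of entry involved (the macro arguments shown are assumptions):

          /* L1Ip, CTR_EL0 bits [15:14]: safe value is the weakest policy. */
          ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_AIVIVT),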
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  11. 07 Sep 2016, 1 commit
    • arm64: Use static keys for CPU features · efd9e03f
      Catalin Marinas authored
      This patch adds static keys transparently for all the cpu_hwcaps
      features by implementing an array of default-false static keys and
      enabling them when detected. The cpus_have_cap() check uses the static
      keys if the feature being checked is a constant, otherwise the compiler
      generates the bitmap test.
      
      Because of the early call to static_branch_enable() via
      check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels
      are initialised in cpuinfo_store_boot_cpu().
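      A hedged sketch of the constant-vs-runtime split described above (the
      declarations follow the commit description; the exact definitions are an
      assumption):

          /* One default-false static key per capability. */
          extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];

          static inline bool cpus_have_cap(unsigned int num)
          {
                  if (num >= ARM64_NCAPS)
                          return false;
                  /* Constant cap number: use the patched static branch. */
                  if (__builtin_constant_p(num))
                          return static_branch_unlikely(&cpu_hwcap_keys[num]);
                  /* Otherwise fall back to the runtime bitmap test. */
                  return test_bit(num, cpu_hwcaps);
          }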
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  12. 31 Aug 2016, 3 commits
  13. 04 Jul 2016, 1 commit
  14. 01 Jul 2016, 1 commit
  15. 25 Apr 2016, 3 commits
  16. 20 Apr 2016, 5 commits
  17. 16 Apr 2016, 1 commit
    • arm64: vhe: Verify CPU Exception Levels · ac1ad20f
      Suzuki K Poulose authored
      With a VHE-capable CPU, the kernel can run at EL2, and this is decided at
      early boot. If some of the CPUs didn't start at EL2 or don't have VHE, we
      could have CPUs running at different exception levels, all in the same
      kernel! This patch adds an early check for the secondary CPUs to detect
      such situations.
      
      For each non-boot CPU, add a sanity check to make sure we don't have
      different run levels w.r.t. the boot CPU. We save the information on
      whether the boot CPU is running in hyp mode or not and ensure the
      remaining CPUs match it.
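      A hedged sketch of the saved state and the per-CPU check (assuming the
      existing is_hyp_mode_available() helper; function names follow the
      commit description):

          /* Whether the boot CPU entered the kernel in hyp mode (EL2). */
          static bool boot_cpu_hyp_mode;

          static inline void save_boot_cpu_run_el(void)
          {
                  boot_cpu_hyp_mode = is_hyp_mode_available();
          }

          /* Called for each secondary CPU: does its run level match the
           * boot CPU's? */
          static inline bool is_boot_cpu_mode(void)
          {
                  return is_hyp_mode_available() == boot_cpu_hyp_mode;
          }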
      
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [will: made boot_cpu_hyp_mode static]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  18. 13 Apr 2016, 1 commit
  19. 04 Mar 2016, 1 commit
    • arm64: make mrs_s prefixing implicit in read_cpuid · 1cc6ed90
      Mark Rutland authored
      Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use
      sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
      register names, to allow manual assembly of registers unknown by the
      toolchain, using tables in sysreg.h.
      
      This interacts poorly with commit 42b55734 ("efi/arm64: Check
      for h/w support before booting a >4 KB granular kernel"), which is
      currently queued via the tip tree, and uses read_cpuid without a SYS_
      prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages
      are selected.
      
      To avoid this issue when trees are merged, move the required SYS_
      prefixing into read_cpuid, and revert all of the updated callsites to
      pass plain register names. This effectively reverts the bulk of commit
      0f54b14e.
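      A hedged sketch of the implicit prefixing (simplified; the in-tree macro
      wraps an mrs_s assembly sequence):

          /* Paste the SYS_ prefix onto the register name inside the macro, so
           * callers keep writing read_cpuid(CTR_EL0) while the manually
           * encoded SYS_* definitions from sysreg.h are still used. */
          #define read_cpuid(reg) ({                                         \
                  u64 __val;                                                 \
                  asm("mrs_s %0, " __stringify(SYS_ ## reg) : "=r" (__val)); \
                  __val;                                                     \
          })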
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  20. 01 Mar 2016, 1 commit