1. 20 May 2018, 1 commit
    • arm64: KVM: Use lm_alias() for kvm_ksym_ref() · 46c4a30b
      Authored by Mark Rutland
      For historical reasons, we open-code lm_alias() in kvm_ksym_ref().
      
      Let's use lm_alias() to avoid duplication and make things clearer.
      
      As we have to pull this from <linux/mm.h> (which is not safe for
      inclusion in assembly), we may as well move the kvm_ksym_ref()
      definition into the existing !__ASSEMBLY__ block.
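
      With that, the definition ends up looking roughly like this (a
      sketch, assuming the existing is_kernel_in_hyp_mode() check):

          #define kvm_ksym_ref(sym)                               \
              ({                                                  \
                  void *val = &sym;                               \
                  if (!is_kernel_in_hyp_mode())                   \
                      val = lm_alias(&sym);                       \
                  val;                                            \
               })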
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      46c4a30b
  2. 04 May 2018, 1 commit
    • KVM: arm64: Fix order of vcpu_write_sys_reg() arguments · 1975fa56
      Authored by James Morse
      A typo in kvm_vcpu_set_be()'s call:
      | vcpu_write_sys_reg(vcpu, SCTLR_EL1, sctlr)
      causes us to use the 32-bit register value as an index into the sys_reg[]
      array, and sail off the end of the linear map when we try to bring up
      big-endian secondaries.
      
      | Unable to handle kernel paging request at virtual address ffff80098b982c00
      | Mem abort info:
      |  ESR = 0x96000045
      |  Exception class = DABT (current EL), IL = 32 bits
      |   SET = 0, FnV = 0
      |   EA = 0, S1PTW = 0
      | Data abort info:
      |   ISV = 0, ISS = 0x00000045
      |   CM = 0, WnR = 1
      | swapper pgtable: 4k pages, 48-bit VAs, pgdp = 000000002ea0571a
      | [ffff80098b982c00] pgd=00000009ffff8803, pud=0000000000000000
      | Internal error: Oops: 96000045 [#1] PREEMPT SMP
      | Modules linked in:
      | CPU: 2 PID: 1561 Comm: kvm-vcpu-0 Not tainted 4.17.0-rc3-00001-ga912e2261ca6-dirty #1323
      | Hardware name: ARM Juno development board (r1) (DT)
      | pstate: 60000005 (nZCv daif -PAN -UAO)
      | pc : vcpu_write_sys_reg+0x50/0x134
      | lr : vcpu_write_sys_reg+0x50/0x134
      
      | Process kvm-vcpu-0 (pid: 1561, stack limit = 0x000000006df4728b)
      | Call trace:
      |  vcpu_write_sys_reg+0x50/0x134
      |  kvm_psci_vcpu_on+0x14c/0x150
      |  kvm_psci_0_2_call+0x244/0x2a4
      |  kvm_hvc_call_handler+0x1cc/0x258
      |  handle_hvc+0x20/0x3c
      |  handle_exit+0x130/0x1ec
      |  kvm_arch_vcpu_ioctl_run+0x340/0x614
      |  kvm_vcpu_ioctl+0x4d0/0x840
      |  do_vfs_ioctl+0xc8/0x8d0
      |  ksys_ioctl+0x78/0xa8
      |  sys_ioctl+0xc/0x18
      |  el0_svc_naked+0x30/0x34
      | Code: 73620291 604d00b0 00201891 1ab10194 (957a33f8)
      |---[ end trace 4b4a4f9628596602 ]---
      
      Fix the order of the arguments.
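
      In other words, swap the last two arguments (a sketch, assuming the
      vcpu_write_sys_reg(vcpu, val, reg) prototype introduced by 8d404c4c):

          -    vcpu_write_sys_reg(vcpu, SCTLR_EL1, sctlr);
          +    vcpu_write_sys_reg(vcpu, sctlr, SCTLR_EL1);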
      
      Fixes: 8d404c4c ("KVM: arm64: Rewrite system register accessors to read/write functions")
      CC: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      1975fa56
  3. 25 April 2018, 1 commit
    • arm64/kernel: rename module_emit_adrp_veneer->module_emit_veneer_for_adrp · ed231ae3
      Authored by Kim Phillips
      Commit a257e025 ("arm64/kernel: don't ban ADRP to work around
      Cortex-A53 erratum #843419") introduced a function whose name ends with
      "_veneer".
      
      This clashes with commit bd8b22d2 ("Kbuild: kallsyms: ignore veneers
      emitted by the ARM linker"), which removes symbols ending in "_veneer"
      from kallsyms.
      
      The problem manifested as 'perf test -vvvvv vmlinux' failing,
      correctly claiming the symbol 'module_emit_adrp_veneer' was present in
      vmlinux, but not in kallsyms.
      
      ...
          ERR : 0xffff00000809aa58: module_emit_adrp_veneer not on kallsyms
      ...
          test child finished with -1
          ---- end ----
          vmlinux symtab matches kallsyms: FAILED!
      
      Fix the problem by renaming module_emit_adrp_veneer to
      module_emit_veneer_for_adrp.  Now the test passes.
      
      Fixes: a257e025 ("arm64/kernel: don't ban ADRP to work around Cortex-A53 erratum #843419")
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Michal Marek <mmarek@suse.cz>
      Signed-off-by: Kim Phillips <kim.phillips@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ed231ae3
  4. 24 April 2018, 1 commit
  5. 20 April 2018, 1 commit
    • arm/arm64: KVM: Add PSCI version selection API · 85bd0ba1
      Authored by Marc Zyngier
      Although we've implemented PSCI 0.1, 0.2 and 1.0, we expose either 0.1
      or 1.0 to a guest, defaulting to the latest version of the PSCI
      implementation that is compatible with the requested version. This is
      no different from doing a firmware upgrade on KVM.
      
      But in order to give a chance to hypothetical badly implemented guests
      that would have a fit by discovering something other than PSCI 0.2,
      let's provide a new API that allows userspace to pick one particular
      version of the API.
      
      This is implemented as a new class of "firmware" registers, where
      we expose the PSCI version. This allows the PSCI version to be
      saved/restored as part of a guest migration, and also set to
      any supported version if the guest requires it.
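
      From userspace, the register can be driven through the usual
      KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls. A minimal sketch (assuming
      an already-created vcpu_fd; KVM_REG_ARM_PSCI_VERSION comes from the
      uapi headers and PSCI_VERSION() from <linux/psci.h>):

          #include <linux/kvm.h>
          #include <linux/psci.h>
          #include <sys/ioctl.h>

          /* Pin the guest's PSCI version to 0.2, e.g. before migration. */
          static int set_psci_version(int vcpu_fd)
          {
              __u64 version = PSCI_VERSION(0, 2);
              struct kvm_one_reg reg = {
                  .id   = KVM_REG_ARM_PSCI_VERSION,
                  .addr = (__u64)&version,
              };

              return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
          }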
      
      Cc: stable@vger.kernel.org #4.16
      Reviewed-by: Christoffer Dall <cdall@kernel.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      85bd0ba1
  6. 12 April 2018, 5 commits
  7. 28 March 2018, 4 commits
    • arm64: uaccess: Fix omissions from usercopy whitelist · 65896545
      Authored by Dave Martin
      When the hardened usercopy support was added for arm64, it was
      concluded that all cases of usercopy into and out of thread_struct
      were statically sized and so didn't require explicit whitelisting
      of the appropriate fields in thread_struct.
      
      Testing with usercopy hardening enabled has revealed that this is
      not the case for certain ptrace regset manipulation calls on arm64.
      This occurs because the sizes of usercopies associated with the
      regset API are dynamic by construction, and because arm64 does not
      always stage such copies via the stack: indeed the regset API is
      designed to avoid the need for that by adding some bounds checking.
      
      This is currently believed to affect only the fpsimd and TLS
      registers.
      
      Because the whitelisted fields in thread_struct must be contiguous,
      this patch groups them together in a nested struct.  It is also
      necessary to be able to determine the location and size of that
      struct, so rather than making the struct anonymous (which would
      save on edits elsewhere) or adding an anonymous union containing
      named and unnamed instances of the same struct (gross), this patch
      gives the struct a name and makes the necessary edits to code that
      references it (noisy but simple).
      
      Care is needed to ensure that the new struct does not contain
      padding (which the usercopy hardening would fail to protect).
      
      For this reason, the presence of tp2_value is made unconditional,
      since a padding field would be needed there in any case.  This pads
      up to the 16-byte alignment required by struct user_fpsimd_state.
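
      The resulting grouping looks roughly like this (a sketch; the
      remaining thread_struct fields are elided):

          struct thread_struct {
              struct cpu_context  cpu_context;    /* cpu context */

              /*
               * Whitelisted fields for hardened usercopy: no implicit
               * padding is allowed in here.
               */
              struct {
                  unsigned long   tp_value;       /* TLS register */
                  unsigned long   tp2_value;
                  struct user_fpsimd_state fpsimd_state;
              } uw;
              /* ... */
          };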
      Acked-by: Kees Cook <keescook@chromium.org>
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 9e8084d3 ("arm64: Implement thread_struct whitelist for hardened usercopy")
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      65896545
    • arm64: fpsimd: Split cpu field out from struct fpsimd_state · 20b85472
      Authored by Dave Martin
      In preparation for using a common representation of the FPSIMD
      state for tasks and KVM vcpus, this patch separates out the "cpu"
      field that is used to track the cpu on which the state was most
      recently loaded.
      
      This will allow common code to operate on task and vcpu contexts
      without requiring the cpu field to be stored at the same offset
      from the FPSIMD register data in both cases.  This should avoid the
      need for messing with the definition of those parts of struct
      vcpu_arch that are exposed in the KVM user ABI.
      
      The resulting change is also convenient for grouping and defining
      the set of thread_struct fields that are supposed to be accessible
      to copy_{to,from}_user(), which includes user_fpsimd_state but
      should exclude the cpu field.  This patch does not amend the
      usercopy whitelist to match: that will be addressed in a subsequent
      patch.
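
      Schematically (a sketch of the before/after layout):

          /* Before: the cpu field lived inside the wrapper struct */
          struct fpsimd_state {
              struct user_fpsimd_state user_fpsimd_state;
              unsigned int cpu;
          };

          /* After: thread_struct carries the two parts separately */
          struct thread_struct {
              /* ... */
              struct user_fpsimd_state fpsimd_state;
              unsigned int fpsimd_cpu;
              /* ... */
          };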
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      [will: inline fpsimd_flush_state for now]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      20b85472
    • arm64: tlbflush: avoid writing RES0 bits · 7f170499
      Authored by Philip Elcan
      Several of the bits of the TLBI register operand are RES0 per the ARM
      ARM, so TLBI operations should avoid writing non-zero values to these
      bits.
      
      This patch adds a macro __TLBI_VADDR(addr, asid) that creates the
      operand register in the correct format and honors the RES0 bits.
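
      The macro looks roughly like this (a sketch; bits [43:0] carry the
      page address and bits [63:48] the ASID, with the RES0 bits left 0):

          #define __TLBI_VADDR(addr, asid)                        \
              ({                                                  \
                  unsigned long __ta = (addr) >> 12;              \
                  __ta &= GENMASK_ULL(43, 0);                     \
                  __ta |= (unsigned long)(asid) << 48;            \
                  __ta;                                           \
              })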
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Philip Elcan <pelcan@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7f170499
    • Revert "arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening" · adc91ab7
      Authored by Marc Zyngier
      Creates far too many conflicts with arm64/for-next/core, to be
      resent post -rc1.
      
      This reverts commit f9f5dc19.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      adc91ab7
  8. 27 March 2018, 20 commits
    • arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h · 2a58fca9
      Authored by Will Deacon
      We need linux/compiler.h for unreachable(), so #include it here.
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2a58fca9
    • arm64: move percpu cmpxchg implementation from cmpxchg.h to percpu.h · c9406e51
      Authored by Will Deacon
      We want to avoid pulling linux/preempt.h into cmpxchg.h, since that can
      introduce a circular dependency on linux/bitops.h. linux/preempt.h is
      only needed by the per-cpu cmpxchg implementation, which is better off
      alongside the per-cpu xchg implementation in percpu.h, so move it there
      and add the missing #include.
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c9406e51
    • arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG · e8a2d040
      Authored by Will Deacon
      Having asm/cmpxchg.h pull in linux/bug.h is problematic because this
      ends up pulling in the atomic bitops which themselves may be built on
      top of atomic.h and cmpxchg.h.
      
      Instead, just include build_bug.h for the definition of BUILD_BUG.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e8a2d040
    • arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC · 8a624f14
      Authored by Will Deacon
      When the LL/SC atomics are moved out-of-line, they are annotated as
      notrace and exported to modules. Ensure we pull in the relevant include
      files so that these macros are defined when we need them.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8a624f14
    • arm64: fpsimd: include <linux/init.h> in fpsimd.h · b4f9b390
      Authored by Will Deacon
      fpsimd.h uses the __init annotation, so pull in linux/init.h.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b4f9b390
    • Revert "arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)" · 3f251cf0
      Authored by Will Deacon
      This reverts commit 1f85b42a.
      
      The internal dma-direct.h API has changed in -next, which collides with
      us trying to use it to manage non-coherent DMA devices on systems with
      unreasonably large cache writeback granules.
      
      This isn't at all trivial to resolve, so revert our changes for now and
      we can revisit this after the merge window. Effectively, this just
      restores our behaviour back to that of 4.16.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      3f251cf0
    • arm64: Delay enabling hardware DBM feature · 05abb595
      Authored by Suzuki K Poulose
      We enable the hardware DBM bit on capable CPUs very early in the
      boot, via __cpu_setup. This gives us no flexibility to optionally
      disable the feature, since clearing the bit later is costly, as the
      TLB can cache the settings. Instead, we delay enabling the feature
      until the CPU is brought up into the kernel, using the feature
      capability mechanism to handle it.
      
      Hardware DBM is a non-conflicting feature, i.e., the kernel
      can safely run with a mix of CPUs, some using the feature
      and others not. So it is safe for a late CPU to have
      this capability and enable it, even if the active CPUs don't.

      To get this handled properly by the infrastructure, we
      unconditionally set the capability and only enable it
      on CPUs which really have the feature. Also, we print the
      feature detection from the "matches" callback to make sure
      we don't mislead the user when none of the CPUs could use the
      feature.
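
      A sketch of the resulting capability entry (field values
      illustrative; the real entry keys off ID_AA64MMFR1_EL1.HADBS):

          {
              .desc = "Hardware dirty bit management",
              .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
              .capability = ARM64_HW_DBM,
              .sys_reg = SYS_ID_AA64MMFR1_EL1,
              .sign = FTR_UNSIGNED,
              .field_pos = ID_AA64MMFR1_HADBS_SHIFT,
              .min_field_value = 2,
              .matches = has_hw_dbm,           /* prints detection status */
              .cpu_enable = cpu_enable_hw_dbm, /* per-CPU enable action */
          },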
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      05abb595
    • arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35 · 6e616864
      Authored by Suzuki K Poulose
      Update the MIDR encodings for the Cortex-A55 and Cortex-A35.
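
      A sketch of the additions (part numbers are the published Arm
      values: 0xD04 for Cortex-A35, 0xD05 for Cortex-A55):

          #define ARM_CPU_PART_CORTEX_A35     0xD04
          #define ARM_CPU_PART_CORTEX_A55     0xD05

          #define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
          #define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)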
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6e616864
    • arm64: capabilities: Handle shared entries · ba7d9233
      Authored by Suzuki K Poulose
      Some capabilities have different criteria for detection and associated
      actions based on the matching criteria, even though they all share the
      same capability bit. So far we have used multiple entries with the same
      capability bit to handle this. This is prone to errors, as
      cpu_enable() is invoked for each entry, irrespective of whether the
      detection rule applies to the CPU or not. It also complicates
      other helpers, e.g., __this_cpu_has_cap().
      
      This patch adds a wrapper entry to cover all the possible variations
      of a capability by maintaining a list of matches + cpu_enable callbacks.
      To avoid complicating the prototype for "matches()", we use
      arm64_cpu_capabilities entries to maintain the list, ignoring all the
      other fields except matches & cpu_enable.
      
      This ensures:

       1) The capability is set when at least one of the entries matches.
       2) Action is only taken for the entries that match.

      This avoids explicit checks in cpu_enable() before taking any action.
      The only constraint here is that all the entries should have the
      same "type" (i.e., scope and conflict rules).
      
      If a cpu_enable() method is associated with multiple matches for a
      single capability, care should be taken that either the match criteria
      are mutually exclusive, or that the method is robust against being
      called multiple times.
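
      A wrapper entry then looks roughly like this (a sketch, using the
      branch-predictor-hardening erratum as an example):

          {
              .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
              .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
              .matches = multi_entry_cap_matches,
              .cpu_enable = multi_entry_cap_cpu_enable,
              .match_list = arm64_bp_harden_list, /* per-CPU matches + enables */
          },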
      
      This also reverts the changes introduced by commit 67948af4
      ("arm64: capabilities: Handle duplicate entries for a capability").
      
      Cc: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ba7d9233
    • arm64: capabilities: Add support for checks based on a list of MIDRs · be5b2998
      Authored by Suzuki K Poulose
      Add helpers for detecting an erratum on a list of MIDR ranges
      of affected CPUs that share the same workaround.
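
      A sketch of the list-based check, building on the single-range
      is_midr_in_range() helper from the next entry (the list is
      terminated by an all-zero sentinel):

          static inline bool is_midr_in_range_list(u32 midr,
                                                   struct midr_range const *ranges)
          {
              while (ranges->model)
                  if (is_midr_in_range(midr, ranges++))
                      return true;
              return false;
          }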
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      be5b2998
    • arm64: Add helpers for checking CPU MIDR against a range · 1df31050
      Authored by Suzuki K Poulose
      Add helpers for checking if the given CPU MIDR falls in a range
      of variants/revisions for a given model.
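
      A sketch of the helper, reusing the existing
      MIDR_IS_CPU_MODEL_RANGE() check:

          struct midr_range {
              u32 model;
              u32 rv_min;     /* (variant, revision) lower bound */
              u32 rv_max;     /* (variant, revision) upper bound */
          };

          static inline bool is_midr_in_range(u32 midr,
                                              struct midr_range const *range)
          {
              return MIDR_IS_CPU_MODEL_RANGE(midr, range->model,
                                             range->rv_min, range->rv_max);
          }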
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      1df31050
    • arm64: capabilities: Change scope of VHE to Boot CPU feature · 830dcc9f
      Authored by Suzuki K Poulose
      We expect all CPUs to be running at the same EL inside the kernel
      with or without VHE enabled and we have strict checks to ensure
      that any mismatch triggers a kernel panic. If VHE is enabled,
      we use the feature based on the boot CPU and all other CPUs
      should follow. This makes it a perfect candidate for a capability
      based on the boot CPU, which should be matched by all the CPUs
      (both when it is ON and OFF). This saves us some not-so-pretty
      hooks and special code, just for verifying the conflict.
      
      The patch also makes the VHE capability entry depend on
      CONFIG_ARM64_VHE.
      
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      830dcc9f
    • arm64: capabilities: Add support for features enabled early · fd9d63da
      Authored by Suzuki K Poulose
      The kernel detects and uses some of the features based on the boot
      CPU and expects all the following CPUs to conform to it, e.g.,
      with VHE and the boot CPU running at EL2, the kernel decides to
      keep the kernel running at EL2. If another CPU is brought up without
      this capability, we use custom hooks (via check_early_cpu_features())
      to handle it. To handle such capabilities, add support for detecting
      and enabling capabilities based on the boot CPU.
      
      A bit is added to indicate if the capability should be detected
      early on the boot CPU. The infrastructure then ensures that such
      capabilities are probed and "enabled" early on the boot CPU,
      and enabled on the subsequent CPUs.
      
      Cc: Julien Thierry <julien.thierry@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      fd9d63da
    • arm64: capabilities: Restrict KPTI detection to boot-time CPUs · d3aec8a2
      Authored by Suzuki K Poulose
      KPTI is treated as a system-wide feature and is only detected if all
      the CPUs in the system need the defense, unless it is forced via the
      kernel command line. This leaves a system with a mix of CPUs with and
      without the defense vulnerable. Also, if a late CPU needs KPTI but
      KPTI was not activated at boot time, the CPU is currently allowed to
      boot, which is a potential security vulnerability.
      This patch ensures that KPTI is turned on if at least one CPU detects
      the capability (i.e., change scope to SCOPE_LOCAL_CPU), and rejects a
      late CPU that requires the defense when the system hasn't enabled it.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d3aec8a2
    • arm64: capabilities: Introduce weak features based on local CPU · 5c137714
      Authored by Suzuki K Poulose
      Now that we have the flexibility of defining system features based
      on individual CPUs, introduce a CPU feature type that can be detected
      on a local SCOPE and that ignores conflicts on late CPUs. This is
      applicable to ARM64_HAS_NO_HW_PREFETCH, where it is fine for
      the system to have CPUs without hardware prefetch turning up
      later. We only suffer a performance penalty, nothing fatal.
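
      The new type boils down to a combination of the existing flags (a
      sketch):

          /* Detected on any one CPU; a late CPU without it may still boot. */
          #define ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE     \
              (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)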
      
      Cc: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5c137714
    • arm64: capabilities: Filter the entries based on a given mask · cce360b5
      Authored by Suzuki K Poulose
      While processing the list of capabilities, it is useful to
      filter out some of the entries based on the given mask for the
      scope of the capabilities to allow better control. This can be
      used later for handling LOCAL vs SYSTEM-wide capabilities and more.
      All capabilities should have their scope set to either LOCAL_CPU or
      SYSTEM. No functional/flow change.
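
      A sketch of the filtering, as applied while walking a capability
      table (scope_mask selects LOCAL_CPU and/or SYSTEM entries):

          static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
                                              u16 scope_mask)
          {
              scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
              for (; caps->matches; caps++) {
                  if (!(caps->type & scope_mask) ||
                      !caps->matches(caps, cpucap_default_scope(caps)))
                      continue;
                  /* ... mark the capability as detected ... */
              }
          }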
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      cce360b5
    • arm64: capabilities: Add flags to handle the conflicts on late CPU · 5b4747c5
      Authored by Suzuki K Poulose
      When a CPU is brought up, it is checked against the caps that are
      known to be enabled on the system (via verify_local_cpu_capabilities()).
      Based on the state of the capability on the CPU vs. that of the
      system, we could have the following combinations of conflict.
      
      	x-----------------------------x
      	| Type  | System   | Late CPU |
      	|-----------------------------|
      	|  a    |   y      |    n     |
      	|-----------------------------|
      	|  b    |   n      |    y     |
      	x-----------------------------x
      
      Case (a) is not permitted for caps which are system features that the
      system expects all the CPUs to have (e.g., VHE), while (a) is ignored
      for all errata workarounds. However, there could be exceptions to the
      plain filtering approach, e.g., KPTI is an optional feature for a late
      CPU as long as the system already enables it.

      Case (b) is not permitted for errata workarounds that cannot be activated
      after the kernel has finished booting, and we ignore (b) for features. Here,
      yet again, KPTI is an exception, where if a late CPU needs KPTI we are too
      late to enable it (because we change the allocation of ASIDs etc).
      
      Add two different flags to indicate how the conflict should be handled.
      
       ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability
       ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU - CPUs may not have the capability.
      
      Now that we have the flags to describe the behavior of the errata and
      the features, as we treat them, define types for ERRATUM and FEATURE.
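
      A sketch of the flags and the derived types (bit positions
      illustrative):

          #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU      ((u16)BIT(4))
          #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU     ((u16)BIT(5))

          /* Errata: detected on a local CPU, optional for late CPUs */
          #define ARM64_CPUCAP_LOCAL_CPU_ERRATUM          \
              (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)

          /* Features: detected system-wide, permitted on late CPUs */
          #define ARM64_CPUCAP_SYSTEM_FEATURE             \
              (ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)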
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5b4747c5
    • arm64: capabilities: Prepare for fine grained capabilities · 143ba05d
      Authored by Suzuki K Poulose
      We use arm64_cpu_capabilities to represent CPU ELF HWCAPs exposed
      to userspace and the CPU hwcaps used by the kernel, which
      include CPU features and CPU errata workarounds. Capabilities
      have some properties that decide how they should be treated:
      
       1) Detection, i.e., scope: A cap could be "detected" either:
          - if it is present on at least one CPU (SCOPE_LOCAL_CPU)
      	Or
          - if it is present on all the CPUs (SCOPE_SYSTEM)
      
       2) When is it enabled? - A cap is treated as "enabled" when the
        system takes some action based on whether the capability is detected or
        not, e.g., setting some control register, patching the kernel code.
        Right now, we treat all caps as enabled at boot-time, after all
        the CPUs are brought up by the kernel. But there are certain caps
        which are enabled early during the boot (e.g., VHE, GIC_CPUIF for NMI),
        and the kernel starts using them even before the secondary CPUs are
        brought up. We would need a way to describe this for each capability.
      
       3) Conflict on a late CPU - When a CPU is brought up, it is checked
        against the caps that are known to be enabled on the system (via
        verify_local_cpu_capabilities()). Based on the state of the capability
        on the CPU vs. that of the system, we could have the following
        combinations of conflict.
      
      	x-----------------------------x
      	| Type  | System   | Late CPU |
      	|-----------------------------|
      	|  a    |   y      |    n     |
      	|-----------------------------|
      	|  b    |   n      |    y     |
      	x-----------------------------x
      
        Case (a) is not permitted for caps which are system features that the
        system expects all the CPUs to have (e.g., VHE), while (a) is ignored
        for all errata workarounds. However, there could be exceptions to the
        plain filtering approach, e.g., KPTI is an optional feature for a late
        CPU as long as the system already enables it.

        Case (b) is not permitted for errata workarounds that require some
        fix which cannot be delayed, and we ignore (b) for features. Here,
        yet again, KPTI is an exception, where if a late CPU needs KPTI we
        are too late to enable it (because we change the allocation of ASIDs
        etc).
      
      So this calls for a lot more fine-grained behavior for each capability.
      And if we define all the attributes to control their behavior properly,
      we may be able to use a single table for the CPU hwcaps (which cover
      errata and features, not the ELF HWCAPs). This is a preparatory step
      to get there. More bits will be added for the properties listed above.

      We are going to use a bit-mask to encode all the properties of a
      capability. This patch encodes the "SCOPE" of the capability.
      
      As such there is no change in how the capabilities are treated.
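
      A sketch of the scope encoding; the existing SCOPE_* values become
      aliases for the new bits:

          #define ARM64_CPUCAP_SCOPE_LOCAL_CPU    ((u16)BIT(0))
          #define ARM64_CPUCAP_SCOPE_SYSTEM       ((u16)BIT(1))
          #define ARM64_CPUCAP_SCOPE_MASK         \
              (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_SCOPE_SYSTEM)

          #define SCOPE_SYSTEM    ARM64_CPUCAP_SCOPE_SYSTEM
          #define SCOPE_LOCAL_CPU ARM64_CPUCAP_SCOPE_LOCAL_CPU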
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      143ba05d
    • arm64: capabilities: Move errata processing code · 1e89baed
      Authored by Suzuki K Poulose
      We have errata workaround processing code in cpu_errata.c,
      which calls back into helpers defined in cpufeature.c. Now
      that we are going to make the handling of capabilities
      generic, by adding the information to each capability,
      move the errata workaround specific processing code.
      No functional changes.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      1e89baed
    • arm64: capabilities: Update prototype for enable call back · c0cda3b8
      Authored by Dave Martin
      We issue the enable() callback for all CPU hwcaps capabilities
      available on the system, on all the CPUs. So far we have ignored
      the argument passed to the callback, which had a prototype to
      accept a "void *" for use with on_each_cpu() and later with
      stop_machine(). However, with commit 0a0d111d
      ("arm64: cpufeature: Pass capability structure to ->enable callback"),
      there are some users of the argument who want the matching capability
      struct pointer where there are multiple matching criteria for a single
      capability. Clean up the declaration of the callback to make it clear
      (see the sketch after the list below).
      
       1) Renamed to cpu_enable(), to imply taking necessary actions on the
          called CPU for the entry.
       2) Pass a const pointer to the capability, to allow the callback to
          check the entry (e.g., to check if any action is needed on the CPU).
       3) We don't care about the result of the callback, so turn this into
          a void.
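
      The net effect on struct arm64_cpu_capabilities is roughly (a sketch
      of the before/after member):

          -    int (*enable)(void *);  /* called via stop_machine()/on_each_cpu() */
          +    void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);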
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Dave Martin <dave.martin@arm.com>
      [suzuki: convert more users, rename call back and drop results]
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c0cda3b8
  9. 22 March 2018, 1 commit
    • irqchip/gic-v3: Probe for SCR_EL3 being clear before resetting AP0Rn · 33625282
      Authored by Marc Zyngier
      We would like to reset the Group-0 Active Priority Registers
      at boot time if they are available to us. They would be available
      if SCR_EL3.FIQ was not set, but we cannot directly probe this bit,
      and short of checking, we may end up trapping to EL3, and the
      firmware may not be pleased to get such an exception. Yes, this
      is dumb.
      
      Instead, let's use PMR to find out if its value gets affected by
      SCR_EL3.FIQ being set. We use the fact that when SCR_EL3.FIQ is
      set, the LSB of the priority is lost due to the shifting back and
      forth of the actual priority. If we read back a 0, we know that
      Group0 is unavailable. In case we read a non-zero value, we can
      safely reset the AP0Rn register.
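
      A sketch of the probe (assuming the read_gicreg()/write_gicreg()
      accessors, and pribits, the number of implemented priority bits,
      computed earlier from ICC_CTLR_EL1.PRIbits):

          bool group0;
          u32 val;

          /*
           * Write the LSB of the implemented priority range to PMR. If
           * SCR_EL3.FIQ is set, the value is shifted on its way in and
           * out of the register and the bit is lost: reading back 0
           * means Group0 is unavailable to us.
           */
          write_gicreg(BIT(8 - pribits), ICC_PMR_EL1);
          val = read_gicreg(ICC_PMR_EL1);
          group0 = (val != 0);    /* only reset ICC_AP0Rn_EL1 when true */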
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      33625282
  10. 20 March 2018, 4 commits
    • arm64: fpsimd: Fix bad si_code for undiagnosed SIGFPE · af4a81b9
      Authored by Dave Martin
      Currently a SIGFPE delivered in response to a floating-point
      exception trap may have si_code set to 0 on arm64.  As reported by
      Eric, this is a bad idea since this is the value of SI_USER -- yet
      this signal is definitely not the result of kill(2), tgkill(2) etc.
      and si_uid and si_pid make limited sense whereas we do want to
      yield a value for si_addr (which doesn't exist for SI_USER).
      
      It's not entirely clear whether the architecture permits a
      "spurious" fp exception trap where none of the exception flag bits
      in ESR_ELx is set.  (IMHO the architectural intent is to forbid
      this.)  However, it does permit those bits to contain garbage if
      the TFV bit in ESR_ELx is 0.  That case isn't currently handled at
      all and may result in si_code == 0 or si_code containing a FPE_FLT*
      constant corresponding to an exception that did not in fact happen.
      
      There is nothing sensible we can return for si_code in such cases,
      but SI_USER is certainly not appropriate and will lead to violation
      of legitimate userspace assumptions.
      
      This patch allocates a new si_code value FPE_UNKNOWN that at least
      does not conflict with any existing SI_* or FPE_* code, and yields
      this in si_code for undiagnosable cases.  This is probably the best
      simplicity/incorrectness tradeoff achievable without relying on
      implementation-dependent features or adding a lot of code.  In any
      case, there appears to be no perfect solution possible that would
      justify a lot of effort here.
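
      The resulting selection logic is roughly (a sketch; the ESR_ELx
      exception flags are only trusted when TFV is set):

          unsigned int si_code = FPE_UNKNOWN;

          if (esr & ESR_ELx_FP_EXC_TFV) {
              if (esr & FPEXC_IOF)
                  si_code = FPE_FLTINV;
              else if (esr & FPEXC_DZF)
                  si_code = FPE_FLTDIV;
              else if (esr & FPEXC_OFF)
                  si_code = FPE_FLTOVF;
              else if (esr & FPEXC_UFF)
                  si_code = FPE_FLTUND;
              else if (esr & FPEXC_IXF)
                  si_code = FPE_FLTRES;
          }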
      
      Yielding FPE_UNKNOWN when some well-defined fp exception caused the
      trap is a violation of POSIX, but this is forced by the
      architecture.  We have no realistic prospect of yielding the
      correct code in such cases.  At present I am not aware of any ARMv8
      implementation that supports trapped floating-point exceptions in
      any case.
      
      The new code may be applicable to other architectures for similar
      reasons.
      
      No attempt is made to provide ESR_ELx to userspace in the signal
      frame, since architectural limitations mean that it is unlikely to
      provide much diagnostic value, doesn't benefit existing software
      and would create ABI with no proven purpose.  The existing
      mechanism for passing it also has problems of its own which may
      result in the wrong value being passed to userspace due to
      interaction with mm faults.  The implied rework does not appear
      justified.
      Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Reported-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      af4a81b9
    • arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening · f9f5dc19
      Authored by Shanker Donthineni
      The function SMCCC_ARCH_WORKAROUND_1 was introduced as part of the
      SMC V1.1 Calling Convention to mitigate CVE-2017-5715. This patch uses
      the standard call SMCCC_ARCH_WORKAROUND_1 for Falkor chips instead
      of the Silicon provider service ID 0xC2001700.
      
      Cc: <stable@vger.kernel.org> # 4.14+
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      f9f5dc19
    • arm64: Expose Arm v8.4 features · 7206dc93
      Authored by Suzuki K Poulose
      Expose the new features introduced by Arm v8.4 extensions to
      Arm v8-A profile.
      
      These include:

       1) Data independent timing of instructions (DIT, exposed as HWCAP_DIT)
       2) Unaligned atomic instructions and single-copy atomicity of loads
          and stores (AT, exposed as HWCAP_USCAT)
       3) LDAPR and STLR instructions with immediate offsets (extension to
          LRCPC, exposed as HWCAP_ILRCPC)
       4) Flag manipulation instructions (TS, exposed as HWCAP_FLAGM).
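
      Userspace can test for the new hwcaps in the usual way; a small,
      self-contained sketch:

          #include <stdio.h>
          #include <sys/auxv.h>
          #include <asm/hwcap.h>

          int main(void)
          {
              unsigned long hwcaps = getauxval(AT_HWCAP);

              printf("DIT:    %s\n", (hwcaps & HWCAP_DIT)    ? "yes" : "no");
              printf("USCAT:  %s\n", (hwcaps & HWCAP_USCAT)  ? "yes" : "no");
              printf("ILRCPC: %s\n", (hwcaps & HWCAP_ILRCPC) ? "yes" : "no");
              printf("FLAGM:  %s\n", (hwcaps & HWCAP_FLAGM)  ? "yes" : "no");
              return 0;
          }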
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7206dc93
    • arm64: asm: drop special versions of adr_l/ldr_l/str_l for modules · 350e1dad
      Authored by Ard Biesheuvel
      Now that we started keeping modules within 4 GB of the core kernel
      in all cases, we no longer need to special case the adr_l/ldr_l/str_l
      macros for modules to deal with them being loaded farther away.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      350e1dad
  11. 19 March 2018, 1 commit