1. 27 March 2018, 6 commits
    • arm64: capabilities: Introduce weak features based on local CPU · 5c137714
      Suzuki K Poulose authored
      Now that we have the flexibility of defining system features based
      on individual CPUs, introduce a CPU feature type that can be detected
      on any local CPU (SCOPE_LOCAL_CPU) and that ignores conflicts on late
      CPUs. This is applicable to ARM64_HAS_NO_HW_PREFETCH, where it is fine
      for CPUs without hardware prefetch to turn up later: we only suffer
      a performance penalty, nothing fatal.
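
      A minimal sketch of such a table entry (field values are illustrative,
      not lifted verbatim from the patch):

      	static const struct arm64_cpu_capabilities cap = {
      		.desc = "no hardware prefetch",
      		.capability = ARM64_HAS_NO_HW_PREFETCH,
      		/* Detected on any one CPU; late CPUs may differ without harm */
      		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
      		.matches = has_no_hw_prefetch,	/* matcher name assumed */
      	};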
      
      Cc: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: capabilities: Filter the entries based on a given mask · cce360b5
      Suzuki K Poulose authored
      While processing the list of capabilities, it is useful to filter
      out entries based on a mask of capability scopes, to allow better
      control. This can be used later for handling LOCAL vs SYSTEM wide
      capabilities and more. All capabilities should have their scope set
      to either LOCAL_CPU or SYSTEM. No functional/flow change.
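
      A sketch of the filtering, assuming a scope_mask parameter threaded
      through the capability walk (names follow this series but are not
      verbatim):

      	static void __update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
      					      u16 scope_mask, const char *info)
      	{
      		scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
      		for (; caps->matches; caps++) {
      			/* Skip entries outside the requested scope */
      			if (!(caps->type & scope_mask))
      				continue;
      			if (!caps->matches(caps, cpucap_default_scope(caps)))
      				continue;
      			/* ... record the capability as detected ... */
      		}
      	}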
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: capabilities: Add flags to handle the conflicts on late CPU · 5b4747c5
      Suzuki K Poulose authored
      When a CPU is brought up, it is checked against the caps that are
      known to be enabled on the system (via verify_local_cpu_capabilities()).
      Based on the state of the capability on the CPU vs. that of the
      system, we could have the following combinations of conflict:
      
      	x-----------------------------x
      	| Type  | System   | Late CPU |
      	|-----------------------------|
      	|  a    |   y      |    n     |
      	|-----------------------------|
      	|  b    |   n      |    y     |
      	x-----------------------------x
      
      Case (a) is not permitted for caps which are system features, which
      the system expects all CPUs to have (e.g., VHE), while (a) is ignored
      for all errata work arounds. However, there could be exceptions to
      this plain filtering approach; e.g., KPTI is an optional feature for
      a late CPU as long as the system already enables it.

      Case (b) is not permitted for errata work arounds that cannot be
      activated after the kernel has finished booting, and we ignore (b)
      for features. Here, yet again, KPTI is an exception: if a late CPU
      needs KPTI, we are too late to enable it (because we have already
      changed the allocation of ASIDs etc.).
      
      Add two different flags to indicate how a conflict should be handled:

       ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - late CPUs may have the capability
       ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU  - late CPUs may lack the capability

      Now that we have flags describing how errata and features are treated,
      define composed types for ERRATUM and FEATURE, sketched below.
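
      A sketch of how the flags and the composed types might look (bit
      positions are illustrative; the real definitions live in
      arch/arm64/include/asm/cpufeature.h):

      	#define ARM64_CPUCAP_SCOPE_LOCAL_CPU		((u16)BIT(0))
      	#define ARM64_CPUCAP_SCOPE_SYSTEM		((u16)BIT(1))
      	#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(4))
      	#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(5))

      	/* Erratum: per-CPU detection; a late CPU without it is fine (a) */
      	#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM	\
      		(ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
      	/* Feature: system wide; a late CPU may have it even if unused (b) */
      	#define ARM64_CPUCAP_SYSTEM_FEATURE	\
      		(ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)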
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: capabilities: Prepare for fine grained capabilities · 143ba05d
      Suzuki K Poulose authored
      We use arm64_cpu_capabilities to represent both the CPU ELF HWCAPs
      exposed to userspace and the CPU hwcaps used by the kernel, which
      include CPU features and CPU errata work arounds. Capabilities have
      some properties that decide how they should be treated:
      
       1) Detection, i.e., scope: A cap could be "detected" either:
          - if it is present on at least one CPU (SCOPE_LOCAL_CPU)
      	Or
          - if it is present on all the CPUs (SCOPE_SYSTEM)

       2) When is it enabled? A cap is treated as "enabled" when the
        system takes some action based on whether the capability is detected
        or not, e.g., setting a control register or patching the kernel code.
        Right now, we treat all caps as enabled at boot-time, after all the
        CPUs are brought up by the kernel. But certain caps are enabled
        early during boot (e.g., VHE, GIC_CPUIF for NMI) and the kernel
        starts using them even before the secondary CPUs are brought up.
        We need a way to describe this for each capability.
      
       3) Conflict on a late CPU - When a CPU is brought up, it is checked
        against the caps that are known to be enabled on the system (via
        verify_local_cpu_capabilities()). Based on the state of the
        capability on the CPU vs. that of the system, we could have the
        following combinations of conflict:
      
      	x-----------------------------x
      	| Type  | System   | Late CPU |
      	|-----------------------------|
      	|  a    |   y      |    n     |
      	|-----------------------------|
      	|  b    |   n      |    y     |
      	x-----------------------------x
      
        Case (a) is not permitted for caps which are system features, which
        the system expects all CPUs to have (e.g., VHE), while (a) is
        ignored for all errata work arounds. However, there could be
        exceptions to this plain filtering approach; e.g., KPTI is an
        optional feature for a late CPU as long as the system already
        enables it.

        Case (b) is not permitted for errata work arounds that require some
        action which cannot be delayed, and we ignore (b) for features.
        Here, yet again, KPTI is an exception: if a late CPU needs KPTI, we
        are too late to enable it (because we have already changed the
        allocation of ASIDs etc.).
      
      So this calls for much finer grained behavior for each capability.
      And if we define all the attributes to control their behavior
      properly, we may be able to use a single table for the CPU hwcaps
      (which cover errata and features, but not the ELF HWCAPs). This is a
      preparatory step to get there. More bits will be added for the
      properties listed above.

      We are going to use a bit-mask to encode all the properties of a
      capability. This patch encodes the "SCOPE" of the capability.
      
      As such there is no change in how the capabilities are treated.
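
      A sketch of the scope encoding this step introduces (assuming a u16
      type field replaces the old def_scope member):

      	#define ARM64_CPUCAP_SCOPE_MASK	\
      		(ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_SCOPE_LOCAL_CPU)

      	static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
      	{
      		return cap->type & ARM64_CPUCAP_SCOPE_MASK;
      	}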
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: capabilities: Move errata processing code · 1e89baed
      Suzuki K Poulose authored
      We have errata work around processing code in cpu_errata.c,
      which calls back into helpers defined in cpufeature.c. Now
      that we are going to make the handling of capabilities
      generic, by adding the information to each capability,
      move the errata work around specific processing code.
      No functional changes.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: capabilities: Update prototype for enable call back · c0cda3b8
      Dave Martin authored
      We issue the enable() call back for all CPU hwcaps capabilities
      available on the system, on all the CPUs. So far we have ignored
      the argument passed to the call back, which had a prototype that
      accepts a "void *" for use with on_each_cpu() and later with
      stop_machine(). However, with commit 0a0d111d
      ("arm64: cpufeature: Pass capability structure to ->enable callback"),
      there are some users of the argument who want the matching capability
      struct pointer, where there are multiple matching criteria for a
      single capability. Clean up the declaration of the call back to make
      it clear:
      
       1) Renamed to cpu_enable(), to imply taking necessary actions on the
          called CPU for the entry.
       2) Pass a const pointer to the capability, to allow the call back to
          check the entry (e.g., to check if any action is needed on the CPU).
       3) We don't care about the result of the call back, so make it
          return void (see the sketch below).
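
      The resulting change to the callback, roughly:

      	/* Before: argument unused or abused, result checked nowhere */
      	int (*enable)(void *arg);

      	/* After: act on the calling CPU, with the matching entry to hand */
      	void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);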
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Dave Martin <dave.martin@arm.com>
      [suzuki: convert more users, rename call back and drop results]
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 09 March 2018, 1 commit
    • arm64/errata: add REVIDR handling to framework · e8002e02
      Ard Biesheuvel authored
      In some cases, core variants that are affected by a certain erratum
      also exist in versions that have the erratum fixed, and this fact is
      recorded in a dedicated bit in system register REVIDR_EL1.
      
      Since the architecture does not require that a certain bit retains
      its meaning across different variants of the same model, each such
      REVIDR bit is tightly coupled to a certain revision/variant value,
      and so we need a list of revidr_mask/midr pairs to carry this
      information.
      
      So add the struct member and the associated macros and handling to
      allow REVIDR fixes to be taken into account.
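
      A sketch of the added struct member and the check (approximating the
      patch; the exact MIDR matching helpers differ):

      	const struct arm64_midr_revidr {
      		u32 midr_rv;		/* revision/variant this bit applies to */
      		u32 revidr_mask;	/* REVIDR bit(s) meaning "erratum fixed" */
      	} * const fixed_revs;

      	/* In the erratum's matches() routine, roughly: */
      	for (fix = entry->fixed_revs; fix && fix->revidr_mask; fix++)
      		if (midr == fix->midr_rv && (revidr & fix->revidr_mask))
      			return false;	/* this part has the fix */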
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 14 December 2017, 1 commit
    • arm64/sve: Report SVE to userspace via CPUID only if supported · 3fab3999
      Dave Martin authored
      Currently, the SVE field in ID_AA64PFR0_EL1 is visible
      unconditionally to userspace via the CPU ID register emulation,
      irrespective of the kernel config.  This means that if a kernel
      configured with CONFIG_ARM64_SVE=n is run on SVE-capable hardware,
      userspace will see SVE reported as present in the ID regs even
      though the kernel forbids execution of SVE instructions.
      
      This patch makes the exposure of the SVE field in ID_AA64PFR0_EL1
      conditional on CONFIG_ARM64_SVE=y.
      
      Since future architecture features are likely to encounter a
      similar requirement, this patch adds a suitable helper macro for
      use when declaring config-conditional ID register fields.
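
      A sketch of such a helper and its use for the SVE field (assuming the
      visibility flags used by the ID register emulation):

      	#define FTR_VISIBLE_IF_IS_ENABLED(config)	\
      		(IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)

      	/* ID_AA64PFR0_EL1.SVE is exposed only when CONFIG_ARM64_SVE=y */
      	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
      		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),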
      
      Fixes: 43994d82 ("arm64/sve: Detect SVE and activate runtime support")
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: Suzuki Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 03 November 2017, 3 commits
  5. 18 May 2017, 1 commit
    • arm64/cpufeature: don't use mutex in bringup path · 63a1e1c9
      Mark Rutland authored
      Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
      must take the jump_label mutex.
      
      We call cpus_set_cap() in the secondary bringup path, from the idle
      thread where interrupts are disabled. Taking a mutex in this path "is a
      NONO" regardless of whether it's contended, and something we must avoid.
      We didn't spot this until recently, as ___might_sleep() won't warn for
      this case until all CPUs have been brought up.
      
      This patch avoids taking the mutex in the secondary bringup path. The
      poking of static keys is deferred until enable_cpu_capabilities(), which
      runs in a suitable context on the boot CPU. To account for the static
      keys being set later, cpus_have_const_cap() is updated to use another
      static key to check whether the const cap keys have been initialised,
      falling back to the caps bitmap until this is the case.
      
      This means that users of cpus_have_const_cap() should only gain a
      single additional NOP in the fast path once the const caps are
      initialised, but should always see the current cap value.
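
      The resulting check, roughly:

      	static inline bool cpus_have_const_cap(int num)
      	{
      		if (static_branch_likely(&arm64_const_caps_ready))
      			return __cpus_have_const_cap(num);	/* patched static key */
      		else
      			return cpus_have_cap(num);		/* bitmap fallback */
      	}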
      
      The hyp code should never dereference the caps array, since the caps are
      initialized before we run the module initcall to initialise hyp. A check
      is added to the hyp init code to document this requirement.
      
      This change will sidestep a number of issues when the upcoming hotplug
      locking rework is merged.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Sewior <bigeasy@linutronix.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 04 April 2017, 1 commit
  7. 11 March 2017, 1 commit
    • arm64: use const cap for system_uses_ttbr0_pan() · 14088540
      Mark Rutland authored
      Since commit 4b65a5db ("arm64: Introduce
      uaccess_{disable,enable} functionality based on TTBR0_EL1"),
      system_uses_ttbr0_pan() has used cpus_have_cap() to determine whether
      PAN is present.
      
      Since commit a4023f68 ("arm64: Add hypervisor safe helper for
      checking constant capabilities"), which was introduced around the same
      time, cpus_have_cap() doesn't try to use a static key, and must always
      perform a load, test, and conditional branch (likely a tbnz for the
      latter two).
      
      Elsewhere, we moved to using cpus_have_const_cap(), which can use a
      static key (i.e. a non-conditional branch), which is patched at runtime
      when the feature is detected.
      
      This patch makes system_uses_ttbr0_pan() use cpus_have_const_cap().
      The static key is likely a win for hot paths like the uaccess
      primitives, and this makes our usage consistent regardless.
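
      The resulting helper, roughly:

      	static inline bool system_uses_ttbr0_pan(void)
      	{
      		return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
      			!cpus_have_const_cap(ARM64_HAS_PAN);
      	}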
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 24 February 2017, 1 commit
    • arm64/cpufeature: check correct field width when updating sys_val · 638f863d
      Mark Rutland authored
      When we're updating a register's sys_val, we use arm64_ftr_value() to
      find the new field value. This uses cpuid_feature_extract_field(),
      which implicitly assumes a 4-bit field, so we may extract more bits
      than we mean to for fields like CTR_EL0.L1ip.
      
      This affects update_cpu_ftr_reg(), where we may extract erroneous values
      for ftr_cur and ftr_new. Depending on the additional bits extracted in
      either case, we may erroneously detect that the value is mismatched, and
      we'll try to compute a new safe value.
      
      Depending on these extra bits and the feature type,
      arm64_ftr_safe_value() may pessimistically select the always-safe
      value, or may erroneously choose either the extracted cur or new value
      as the safe option. The extra bits will subsequently be masked out in
      arm64_ftr_set_value(), so we may choose a higher value, yet write back
      a lower one.
      
      Fix this by passing the width down explicitly in arm64_ftr_value(), so
      we always extract the correct amount.
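
      A sketch of the fix (assuming a width-taking variant of the extractor):

      	static inline s64 arm64_ftr_value(const struct arm64_ftr_bits *ftrp, u64 val)
      	{
      		/* Extract exactly ftrp->width bits, not a hard-coded 4 */
      		return (s64)cpuid_feature_extract_field_width(val, ftrp->shift,
      							      ftrp->width);
      	}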
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  9. 11 January 2017, 2 commits
  10. 22 November 2016, 1 commit
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Catalin Marinas authored
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Enabling access to user space is done by restoring TTBR0_EL1 with the
      value from the struct thread_info ttbr0 variable. Interrupts must be
      disabled during the uaccess_ttbr0_enable code to ensure the atomicity
      of the thread_info.ttbr0 read and the TTBR0_EL1 write. This patch also
      moves the get_thread_info asm macro from entry.S to assembler.h for
      reuse in the uaccess_ttbr0_* macros.
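
      A sketch of the C side of the pair, mirroring the description above
      (the patch also provides asm macro equivalents):

      	static inline void uaccess_ttbr0_disable(void)
      	{
      		unsigned long ttbr;

      		/* reserved_ttbr0 sits at a fixed offset from swapper_pg_dir */
      		ttbr = read_sysreg(ttbr1_el1) + SWAPPER_DIR_SIZE;
      		write_sysreg(ttbr, ttbr0_el1);
      		isb();
      	}

      	static inline void uaccess_ttbr0_enable(void)
      	{
      		unsigned long flags;

      		/* IRQs off: the thread_info.ttbr0 read and the TTBR0_EL1
      		   write must not race with an ASID roll-over */
      		local_irq_save(flags);
      		write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
      		isb();
      		local_irq_restore(flags);
      	}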
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 17 November 2016, 2 commits
  12. 06 November 2016, 1 commit
  13. 20 October 2016, 1 commit
    • arm64: cpufeature: Schedule enable() calls instead of calling them via IPI · 2a6dcb2b
      James Morse authored
      The enable() call for a cpufeature/errata is called using on_each_cpu().
      This issues a cross-call IPI to get the work done. Implicitly, this
      stashes the running PSTATE in SPSR when the CPU receives the IPI, and
      restores it when we return. This means an enable() call can never modify
      PSTATE.
      
      To allow PAN to do this, change the on_each_cpu() call to use
      stop_machine(). This schedules the work on each CPU which allows
      us to modify PSTATE.
      
      This involves changing the prototype of all the enable() functions.
      
      enable_cpu_capabilities() is called during boot and enables the feature
      on all online CPUs. This path now uses stop_machine(). CPU features for
      hotplug'd CPUs are enabled by verify_local_cpu_features() which only
      acts on the local CPU, and can already modify the running PSTATE as it
      is called from secondary_start_kernel().
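
      The boot-time path then becomes, roughly:

      	void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
      	{
      		for (; caps->matches; caps++)
      			if (caps->enable && cpus_have_cap(caps->capability))
      				/*
      				 * stop_machine() schedules the work on each
      				 * CPU, so PSTATE changes made by enable()
      				 * survive; an IPI would restore PSTATE from
      				 * SPSR on return.
      				 */
      				stop_machine(caps->enable, NULL, cpu_online_mask);
      	}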
      Reported-by: Tony Thompson <anthony.thompson@arm.com>
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  14. 09 September 2016, 4 commits
    • arm64: Work around systems with mismatched cache line sizes · 116c81f4
      Suzuki K Poulose authored
      Systems with differing CPU i-cache/d-cache line sizes can cause
      problems for software cache management when execution migrates from
      one CPU to another. Usually, an application reads the cache size on
      a CPU and then uses that length to perform cache operations. However,
      if it gets migrated to another CPU with a smaller cache line size,
      things could go completely wrong. To prevent such cases, always use
      the smallest cache line size among the CPUs. The kernel CPU feature
      infrastructure already keeps track of the safe value for all CPUID
      registers, including CTR. This patch works around the problem as
      follows:
      
      For the kernel: dynamically patch it to read the cache size from the
      system-wide copy of CTR_EL0.

      For applications: trap read accesses to CTR_EL0 (by clearing
      SCTLR.UCT) and emulate the mrs instruction to return the system-wide
      safe value of CTR_EL0.

      For faster access (i.e., avoiding the lookup of the system-wide value
      of CTR_EL0 via read_system_reg), we keep track of the pointer to the
      table entry for CTR_EL0 in the CPU feature infrastructure.
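
      A sketch of the enable() hook that arms the trap (assuming a
      config_sctlr_el1() helper for read-modify-write of SCTLR_EL1):

      	int cpu_enable_trap_ctr_access(void *__unused)
      	{
      		/* Clear SCTLR_EL1.UCT: EL0 reads of CTR_EL0 now trap to EL1,
      		   where we emulate them with the system-wide safe value */
      		config_sctlr_el1(SCTLR_EL1_UCT, 0);
      		return 0;
      	}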
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Rearrange CPU errata workaround checks · c47a1900
      Suzuki K Poulose authored
      Right now we run through the work around checks on a CPU
      from __cpuinfo_store_cpu. There are some problems with that:
      
      1) We initialise the system wide CPU feature registers only after the
      boot CPU updates its cpuinfo. Now, if a work around depends on the
      variance of a CPU ID feature (e.g., a check for cache line size
      mismatch), we have no way of performing it cleanly for the boot CPU.
      
      2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c. It
      is not an obvious place for that.
      
      This patch rearranges the CPU specific capability (aka work around) checks:
      
      1) At the moment we use verify_local_cpu_capabilities() to check if a new
      CPU has all the system advertised features. Use this for the secondary CPUs
      to perform the work around check. For that we rename
        verify_local_cpu_capabilities() => check_local_cpu_capabilities()
      which:
      
         If the system wide capabilities haven't been initialised (i.e., the
         CPU is brought up during boot), update the system wide detected
         work arounds.

         Otherwise (i.e., a CPU hotplugged in later), verify that this CPU
         conforms to the system wide capabilities.
      
      2) Boot CPU updates the work arounds from smp_prepare_boot_cpu() after we have
      initialised the system wide CPU feature values.
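
      The combined entry point then looks roughly like:

      	void check_local_cpu_capabilities(void)
      	{
      		/* Boot-time CPUs add their work arounds to the system state;
      		   hotplugged CPUs are verified against it instead */
      		if (!sys_caps_initialised)
      			update_cpu_errata_workarounds();
      		else
      			verify_local_cpu_capabilities();
      	}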
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Use consistent naming for errata handling · 89ba2645
      Suzuki K Poulose authored
      This is a cosmetic change to rename the functions dealing with the
      errata work arounds, so that their names are more consistent.

      1) check_local_cpu_errata() => update_cpu_errata_workarounds()
      check_local_cpu_errata() actually updates the system's errata work
      arounds, so rename it to reflect that.
      
      2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
      Use errata_workarounds instead of _errata.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Set the safe value for L1 icache policy · ee7bc638
      Suzuki K Poulose authored
      Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is not a
      defined policy. The safe value for L1Ip should be the weakest of the
      policies, which happens to be AIVIVT. While at it, fix the comment
      about safe_val.
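
      The resulting CTR_EL0 table entry, roughly (L1Ip lives in bits [15:14],
      so shift 14, width 2):

      	/* L1Ip: on conflict fall back to AIVIVT, the weakest policy */
      	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_AIVIVT),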
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  15. 07 September 2016, 1 commit
    • arm64: Use static keys for CPU features · efd9e03f
      Catalin Marinas authored
      This patch adds static keys transparently for all the cpu_hwcaps
      features by implementing an array of default-false static keys and
      enabling them when detected. The cpus_have_cap() check uses the static
      keys if the feature being checked is a constant; otherwise the
      compiler generates the bitmap test.
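
      The check then looks roughly like:

      	static inline bool cpus_have_cap(unsigned int num)
      	{
      		if (num >= ARM64_NCAPS)
      			return false;
      		if (__builtin_constant_p(num))
      			return static_branch_unlikely(&cpu_hwcap_keys[num]);
      		else
      			return test_bit(num, cpu_hwcaps);
      	}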
      
      Because of the early call to static_branch_enable() via
      check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels
      are initialised in cpuinfo_store_boot_cpu().
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  16. 31 August 2016, 3 commits
  17. 04 July 2016, 1 commit
  18. 01 July 2016, 1 commit
  19. 25 April 2016, 3 commits
  20. 20 April 2016, 2 commits
  21. 04 March 2016, 1 commit
    • arm64: make mrs_s prefixing implicit in read_cpuid · 1cc6ed90
      Mark Rutland authored
      Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use
      sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
      register names, to allow manual assembly of registers unknown by the
      toolchain, using tables in sysreg.h.
      
      This interacts poorly with commit 42b55734 ("efi/arm64: Check
      for h/w support before booting a >4 KB granular kernel"), which is
      currently queued via the tip tree and uses read_cpuid without a SYS_
      prefix. Due to this, a build of next-20160304 fails if EFI and 64K
      pages are selected.
      
      To avoid this issue when trees are merged, move the required SYS_
      prefixing into read_cpuid, and revert all of the updated callsites to
      pass plain register names. This effectively reverts the bulk of commit
      0f54b14e.
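
      The prefixing is folded into the macro itself, roughly (exact
      formatting in the patch may differ):

      	#define read_cpuid(reg) ({						\
      		u64 __val;							\
      		asm("mrs_s %0, " __stringify(SYS_ ## reg) : "=r" (__val));	\
      		__val;								\
      	})

      	/* Callers go back to plain names: */
      	u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);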
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  22. 01 March 2016, 1 commit
  23. 26 February 2016, 1 commit