1. 24 February 2017, 1 commit
    • arm64/cpufeature: check correct field width when updating sys_val · 638f863d
      Committed by Mark Rutland
      When we're updating a register's sys_val, we use arm64_ftr_value() to
      find the new field value. This uses cpuid_feature_extract_field(),
      which implicitly assumes a 4-bit field, so we may extract more bits
      than we mean to for fields like CTR_EL0.L1Ip.
      
      This affects update_cpu_ftr_reg(), where we may extract erroneous values
      for ftr_cur and ftr_new. Depending on the additional bits extracted in
      either case, we may erroneously detect that the value is mismatched, and
      we'll try to compute a new safe value.
      
      Dependent on these extra bits and feature type, arm64_ftr_safe_value()
      may pessimistically select the always-safe value, or may erroneously
      choose either the extracted cur or new value as the safe option. The
      extra bits will subsequently be masked out in arm64_ftr_set_value(), so
      we may choose a higher value, yet write back a lower one.
      
      Fix this by passing the width down explicitly in arm64_ftr_value(), so
      we always extract the correct amount.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
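
      To make the width issue concrete, here is a minimal sketch in plain C
      (simplified, not the kernel's exact helpers) of extraction with an
      explicit width. CTR_EL0.L1Ip is a 2-bit field at bits [15:14], so a
      hard-coded 4-bit extraction there would also capture bits [17:16],
      which belong to the neighbouring DminLine field.

          #include <stdint.h>

          /* Minimal sketch: mask with the field's real width instead of
           * assuming every field is 4 bits wide. */
          static inline uint64_t extract_field(uint64_t reg, int shift, int width)
          {
              return (reg >> shift) & ((UINT64_C(1) << width) - 1);
          }

          /* CTR_EL0.L1Ip: shift 14, width 2. With an assumed width of 4,
           * bits [17:16] (part of DminLine) would wrongly be included. */
          /* uint64_t l1ip = extract_field(ctr_el0, 14, 2); */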
  2. 11 January 2017, 2 commits
  3. 22 November 2016, 1 commit
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Committed by Catalin Marinas
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Access to user space is re-enabled by restoring TTBR0_EL1 with the
      value from the struct thread_info ttbr0 variable. Interrupts must be disabled
      during the uaccess_ttbr0_enable code to ensure the atomicity of the
      thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
      get_thread_info asm macro from entry.S to assembler.h for reuse in the
      uaccess_ttbr0_* macros.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
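
      A minimal sketch of the toggling described above, assuming a
      hypothetical RESERVED_TTBR0_OFFSET constant and the thread_info.ttbr0
      field the patch introduces; the kernel's real code is asm macros with
      alternatives, so this is illustrative only.

          /* Sketch only: the caller is assumed to have disabled interrupts,
           * as required around the thread_info.ttbr0 read / TTBR0_EL1 write. */
          static inline void uaccess_ttbr0_disable(void)
          {
              unsigned long ttbr;

              /* reserved_ttbr0 lies at a constant offset from swapper_pg_dir,
               * so its physical address can be derived from TTBR1_EL1. */
              asm volatile("mrs %0, ttbr1_el1" : "=r" (ttbr));
              ttbr += RESERVED_TTBR0_OFFSET;      /* hypothetical name */
              asm volatile("msr ttbr0_el1, %0\n\tisb" : : "r" (ttbr));
          }

          static inline void uaccess_ttbr0_enable(void)
          {
              /* Restore the user page tables saved per thread. */
              asm volatile("msr ttbr0_el1, %0\n\tisb"
                           : : "r" (current_thread_info()->ttbr0));
          }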
  4. 17 November 2016, 2 commits
  5. 06 November 2016, 1 commit
  6. 20 October 2016, 1 commit
    • arm64: cpufeature: Schedule enable() calls instead of calling them via IPI · 2a6dcb2b
      Committed by James Morse
      The enable() call for a cpufeature/errata is called using on_each_cpu().
      This issues a cross-call IPI to get the work done. Implicitly, this
      stashes the running PSTATE in SPSR when the CPU receives the IPI, and
      restores it when we return. This means an enable() call can never modify
      PSTATE.
      
      To allow PAN to do this, change the on_each_cpu() call to use
      stop_machine(). This schedules the work on each CPU which allows
      us to modify PSTATE.
      
      This involves changing the prototype of all the enable() functions.
      
      enable_cpu_capabilities() is called during boot and enables the feature
      on all online CPUs. This path now uses stop_machine(). CPU features for
      hotplug'd CPUs are enabled by verify_local_cpu_features() which only
      acts on the local CPU, and can already modify the running PSTATE as it
      is called from secondary_start_kernel().
      Reported-by: Tony Thompson <anthony.thompson@arm.com>
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
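
      The distinction matters because an IPI handler's PSTATE is restored
      from SPSR on exception return. A hedged sketch of the scheduled
      variant (the wrapper name is illustrative and the real kernel code
      differs in detail):

          #include <linux/stop_machine.h>

          /* Run every detected capability's enable() method on the CPU this
           * function executes on; stop_machine() runs it on all online CPUs
           * in process context, so PSTATE changes persist afterwards. */
          static int enable_caps_fn(void *caps_arg)
          {
              const struct arm64_cpu_capabilities *caps = caps_arg;

              for (; caps->matches; caps++)
                  if (cpus_have_cap(caps->capability) && caps->enable)
                      caps->enable(NULL);
              return 0;
          }

          /* Instead of on_each_cpu(..., 1):                            */
          /* stop_machine(enable_caps_fn, (void *)caps, cpu_online_mask); */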
  7. 09 September 2016, 4 commits
    • arm64: Work around systems with mismatched cache line sizes · 116c81f4
      Committed by Suzuki K Poulose
      Systems with differing CPU i-cache/d-cache line sizes can cause
      problems for software cache maintenance when execution migrates from
      one CPU to another. Typically, an application reads the cache line
      size on one CPU and then uses that length to perform cache
      operations. If it is then migrated to a CPU with a smaller cache
      line size, things can go completely wrong. To prevent such cases,
      always use the smallest cache line size among the CPUs. The kernel
      CPU feature infrastructure already keeps track of the safe value for
      all CPUID registers, including CTR. This patch works around the
      problem by:
      
      For the kernel: dynamically patch it to read the cache size from the
      system wide copy of CTR_EL0.
      
      For applications: trap read accesses to CTR_EL0 (by clearing SCTLR.UCT)
      and emulate the mrs instruction to return the system wide safe value
      of CTR_EL0.
      
      For faster access (i.e., avoiding a lookup of the system wide value
      of CTR_EL0 via read_system_reg), we keep track of the pointer to the
      table entry for CTR_EL0 in the CPU feature infrastructure.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
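
      A hedged sketch of the EL0 side: with SCTLR.UCT clear, a user-space
      "mrs xN, ctr_el0" traps, and a handler like the one below (simplified;
      helper names as in kernels of that era) returns the system-wide safe
      value instead of this CPU's own register.

          /* Sketch of the emulation path: for a trapped mrs, the target
           * register Rt is in ESR_ELx.ISS bits [9:5]; write back the
           * sanitised CTR_EL0 and step over the instruction. */
          static int ctr_read_handler(struct pt_regs *regs, u32 esr)
          {
              int rt = (esr >> 5) & 0x1f;
              u64 safe_ctr = read_system_reg(SYS_CTR_EL0);

              regs->regs[rt] = safe_ctr;
              regs->pc += 4;          /* skip the emulated mrs */
              return 0;
          }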
    • arm64: Rearrange CPU errata workaround checks · c47a1900
      Committed by Suzuki K Poulose
      Right now we run through the work around checks on a CPU
      from __cpuinfo_store_cpu. There are some problems with that:
      
      1) We initialise the system wide CPU feature registers only after the
      Boot CPU updates its cpuinfo. Now, if a work around depends on the
      variance of a CPU ID feature (e.g., a check for cache line size mismatch),
      we have no way of performing it cleanly for the boot CPU.
      
      2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c. It
      is not an obvious place for that.
      
      This patch rearranges the CPU specific capability(aka work around) checks.
      
      1) At the moment we use verify_local_cpu_capabilities() to check if a new
      CPU has all the system advertised features. Use this for the secondary CPUs
      to perform the work around check. For that we rename
        verify_local_cpu_capabilities() => check_local_cpu_capabilities()
      which:
      
         If the system wide capabilities haven't been initialised (i.e., the
         CPU is brought up at boot), update the system wide detected work
         arounds.
      
         Otherwise (i.e., a CPU hotplugged in later), verify that this CPU
         conforms to the system wide capabilities.
      
      2) Boot CPU updates the work arounds from smp_prepare_boot_cpu() after we have
      initialised the system wide CPU feature values.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
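
      The resulting flow, as a hedged sketch (the flag name is illustrative;
      update_cpu_errata_workarounds() is introduced by the rename in the
      next entry):

          static void check_local_cpu_capabilities(void)
          {
              /* Boot-time CPUs: the system-wide registers are not finalised
               * yet, so record this CPU's workarounds. Hotplugged CPUs:
               * verify conformance instead. */
              if (!sys_caps_initialised)
                  update_cpu_errata_workarounds();
              else
                  verify_local_cpu_capabilities();
          }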
    • arm64: Use consistent naming for errata handling · 89ba2645
      Committed by Suzuki K Poulose
      This is a cosmetic change to rename the functions dealing with
      the errata work arounds to be more consistent with their naming.
      
      1) check_local_cpu_errata() => update_cpu_errata_workarounds()
      check_local_cpu_errata() actually updates the system's errata
      workarounds, so rename it to reflect that.
      
      2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
      Use errata_workarounds instead of _errata.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Set the safe value for L1 icache policy · ee7bc638
      Committed by Suzuki K Poulose
      Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is not
      a defined policy. The safe value for L1Ip should be the weakest of
      the policies, which happens to be AIVIVT. While at it, fix the
      comment about safe_val.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
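
      A hedged sketch of the relevant descriptor (struct and field names
      simplified from the kernel's struct arm64_ftr_bits):

          #include <stdint.h>

          /* CTR_EL0.L1Ip is the 2-bit field at bits [15:14]; when CPUs
           * disagree, fall back to the weakest policy, AIVIVT (0b01),
           * rather than the previously used (and undefined) 0. */
          struct ftr_field {
              int      shift;
              int      width;
              uint64_t safe_val;
          };

          static const struct ftr_field ctr_l1ip = {
              .shift    = 14,
              .width    = 2,
              .safe_val = 1,          /* ICACHE_POLICY_AIVIVT */
          };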
  8. 07 September 2016, 1 commit
    • arm64: Use static keys for CPU features · efd9e03f
      Committed by Catalin Marinas
      This patch adds static keys transparently for all the cpu_hwcaps
      features by implementing an array of default-false static keys and
      enabling them when detected. The cpus_have_cap() check uses the static
      keys if the feature being checked is a constant, otherwise the compiler
      generates the bitmap test.
      
      Because of the early call to static_branch_enable() via
      check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels
      are initialised in cpuinfo_store_boot_cpu().
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
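
      The shape of the resulting check, as a sketch that closely paraphrases
      the commit's description (slightly simplified):

          /* If 'num' is a compile-time constant, the static key turns the
           * check into a patched branch; otherwise test the bitmap. */
          static inline bool cpus_have_cap(unsigned int num)
          {
              if (num >= ARM64_NCAPS)
                  return false;
              if (__builtin_constant_p(num))
                  return static_branch_unlikely(&cpu_hwcap_keys[num]);
              return test_bit(num, cpu_hwcaps);
          }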
  9. 31 August 2016, 3 commits
  10. 04 July 2016, 1 commit
  11. 01 July 2016, 1 commit
  12. 25 April 2016, 3 commits
  13. 20 April 2016, 2 commits
  14. 04 March 2016, 1 commit
    • arm64: make mrs_s prefixing implicit in read_cpuid · 1cc6ed90
      Committed by Mark Rutland
      Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use
      sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
      register names, to allow manual assembly of registers unknown by the
      toolchain, using tables in sysreg.h.
      
      This interacts poorly with commit 42b55734 ("efi/arm64: Check
      for h/w support before booting a >4 KB granular kernel"), which is
      currently queued via the tip tree, and uses read_cpuid without a SYS_
      prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages
      are selected.
      
      To avoid this issue when trees are merged, move the required SYS_
      prefixing into read_cpuid, and revert all of the updated callsites to
      pass plain register names. This effectively reverts the bulk of commit
      0f54b14e.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
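
      A hedged sketch of the fix (simplified; the real macro relies on the
      mrs_s asm macro and __stringify() helpers from the kernel headers):

          /* The SYS_ prefix is token-pasted inside the macro, so callers
           * go back to writing plain names, e.g. read_cpuid(CTR_EL0). */
          #define read_cpuid(reg) ({                                      \
                  u64 __val;                                              \
                  asm("mrs_s %0, " __stringify(SYS_ ## reg)               \
                      : "=r" (__val));                                    \
                  __val;                                                  \
          })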
  15. 01 March 2016, 1 commit
  16. 26 February 2016, 1 commit
  17. 25 February 2016, 3 commits
  18. 19 February 2016, 2 commits
    • arm64: kernel: Don't toggle PAN on systems with UAO · 70544196
      Committed by James Morse
      If a CPU supports both Privileged Access Never (PAN) and User Access
      Override (UAO), we don't need to disable/re-enable PAN around all
      copy_to_user()-like calls.
      
      UAO alternatives cause these calls to use the 'unprivileged' load/store
      instructions, which are overridden to be the privileged kind when
      fs==KERNEL_DS.
      
      This patch changes the copy_to_user() calls to have their PAN toggling
      depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
      
      If both features are detected, PAN will be enabled, but the copy_to_user()
      alternatives will not be applied. This means PAN will be enabled all the
      time for these functions. If only PAN is detected, the toggling will be
      enabled as normal.
      
      This will save the time taken to disable/re-enable PAN, and allow us to
      catch copy_to_user() accesses that occur with fs==KERNEL_DS.
      
      Futex and swp-emulation code continue to hang their PAN toggling code on
      ARM64_HAS_PAN.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
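
      A hedged sketch of the composite capability's match function (body
      simplified relative to the kernel's):

          /* ARM64_ALT_PAN_NOT_UAO: true only when PAN is implemented and
           * UAO is not, so the PAN-toggling alternatives in the uaccess
           * routines are applied only where they are still needed. */
          static bool cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry)
          {
              return cpus_have_cap(ARM64_HAS_PAN) &&
                     !cpus_have_cap(ARM64_HAS_UAO);
          }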
    • arm64: kernel: Add support for User Access Override · 57f4959b
      Committed by James Morse
      'User Access Override' is a new ARMv8.2 feature which allows the
      unprivileged load and store instructions to be overridden to behave in
      the normal way.
      
      This patch converts {get,put}_user() and friends to use ldtr*/sttr*
      instructions - so that they can only access EL0 memory, then enables
      UAO when fs==KERNEL_DS so that these functions can access kernel memory.
      
      This allows user space's read/write permissions to be checked against
      the page tables, instead of testing addr < USER_DS and then using the
      kernel's read/write permissions.
      Signed-off-by: James Morse <james.morse@arm.com>
      [catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
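
      Conceptually, get_user() becomes something like the heavily simplified
      sketch below; the real macros also carry exception-table fixups and
      alternatives, which are omitted here.

          /* ldtr is an unprivileged load: checked against EL0 permissions
           * unless PSTATE.UAO is set, in which case it acts like a normal
           * ldr. set_fs(KERNEL_DS) sets UAO, so the same instruction then
           * reaches kernel memory. Fault fixup for label 1 omitted. */
          #define __get_user_word(x, ptr, err)                    \
                  asm volatile(                                   \
                  "1:     ldtr    %w1, [%2]\n"                    \
                  "2:\n"                                          \
                  : "+r" (err), "=&r" (x)                         \
                  : "r" (ptr))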
  19. 18 February 2016, 1 commit
  20. 16 February 2016, 1 commit
  21. 27 November 2015, 2 commits
  22. 25 November 2015, 1 commit
  23. 21 October 2015, 4 commits
    • arm64/HWCAP: Use system wide safe values · 37b01d53
      Committed by Suzuki K. Poulose
      Extend struct arm64_cpu_capabilities to handle the HWCAP detection
      and make use of the system wide value of the feature registers for
      a reliable set of HWCAPs.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
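
      A hedged sketch of such a table entry (field names approximate): the
      HWCAP is derived from the sanitised register, so it is advertised only
      if every CPU implements the feature.

          /* Advertise HWCAP_AES only when the system-wide safe value of
           * ID_AA64ISAR0_EL1.AES (bits [7:4]) is at least 1. */
          {
                  .desc            = "aes",
                  .matches         = has_cpuid_feature,
                  .sys_reg         = SYS_ID_AA64ISAR0_EL1,
                  .field_pos       = 4,
                  .min_field_value = 1,
                  .hwcap           = HWCAP_AES,
          },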
    • arm64/capabilities: Make use of system wide safe value · da8d02d1
      Committed by Suzuki K. Poulose
      Now that we can reliably read the system wide safe value for a
      feature register, use that to compute the system capability.
      This patch also replaces the 'feature-register-specific'
      methods with a generic routine to check the capability.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
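
      The generic routine, as a hedged sketch (close in shape to the
      commit's, but not its exact code):

          /* Compare the capability's field, extracted from the sanitised
           * (system-wide safe) register value, against the required
           * minimum; no per-register ad-hoc checks remain. */
          static bool has_cpuid_feature(const struct arm64_cpu_capabilities *entry)
          {
              u64 val = read_system_reg(entry->sys_reg);

              return cpuid_feature_extract_field(val, entry->field_pos) >=
                     entry->min_field_value;
          }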
    • arm64: Delay cpu feature capability checks · dbb4e152
      Committed by Suzuki K. Poulose
      At the moment we run through the arm64_features capability list for
      each CPU and set the capability if any one of the CPUs supports it. This
      could be problematic in a heterogeneous system with differing capabilities.
      Delay the CPU feature checks until all the enabled CPUs are up (i.e.,
      smp_cpus_done()), so that we can make better decisions based on the
      overall system capability. Once we decide and advertise the capabilities
      the alternatives can be applied. From this state, we cannot roll back
      a feature to disabled based on the values from a new hotplugged CPU,
      due to the runtime patching and other reasons. So, for all new CPUs,
      we need to make sure that they have the established system capabilities.
      Failing that, we bring the CPU down, preventing it from coming online.
      Once the capabilities are decided, any new CPU booting up goes through
      verification to ensure that it has all the enabled capabilities and also
      invokes the respective enable() method on the CPU.
      
      The CPU errata checks are not delayed and are still executed per CPU
      to detect the respective capabilities. If we ever come across a non-errata
      capability that needs to be checked on each CPU, we could introduce it via
      a new capability table (or a flag), which can be processed per CPU.
      
      The next patch will make the feature checks use the system wide
      safe value of a feature register.
      
      NOTE: The enable() methods associated with a capability are scheduled
      on all the CPUs (which is the only use case at the moment). If we need
      a different type of enable() which only needs to be run once on any CPU,
      we should be able to handle that when needed.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      [catalin.marinas@arm.com: static variable and coding style fixes]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
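
      A hedged sketch of the late-CPU verification described above (helper
      names approximate):

          /* Once capabilities are finalised, a hotplugged CPU must have
           * every advertised capability before it may run code patched
           * with alternatives; otherwise it is parked. */
          static void verify_local_cpu_capabilities(void)
          {
              const struct arm64_cpu_capabilities *caps;

              for (caps = arm64_features; caps->matches; caps++) {
                  if (!cpus_have_cap(caps->capability))
                      continue;
                  if (!caps->matches(caps))
                      cpu_die_early();    /* refuse to come online */
                  else if (caps->enable)
                      caps->enable(NULL);
              }
          }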
    • arm64: Refactor check_cpu_capabilities · ce8b602c
      Committed by Suzuki K. Poulose
      check_cpu_capabilities runs through a given list of caps and
      checks if the system has the cap, updates the system capability
      bitmap and also runs any enable() methods associated with them.
      All of this is not quite obvious from the name 'check'. This
      patch splits the check_cpu_capabilities into two parts :
      
      1) update_cpu_capabilities
       => Runs through the given list and updates the system
          wide capability map.
      2) enable_cpu_capabilities
       => Runs through the given list and invokes enable() (if any)
          for the caps enabled on the system.
      
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
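
      The split, as a hedged sketch. At this point enable() is still invoked
      via on_each_cpu() and is assumed to take a void * argument; the
      2a6dcb2b entry above later moves this to stop_machine().

          static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
          {
              /* Only record which capabilities the system has. */
              for (; caps->matches; caps++)
                  if (caps->matches(caps))
                      cpus_set_cap(caps->capability);
          }

          static void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
          {
              /* Separately invoke enable() for the capabilities set above. */
              for (; caps->matches; caps++)
                  if (cpus_have_cap(caps->capability) && caps->enable)
                      on_each_cpu(caps->enable, NULL, true);
          }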