1. 24 Feb 2017, 1 commit
    • arm64/cpufeature: check correct field width when updating sys_val · 638f863d
      Committed by Mark Rutland
      When we're updating a register's sys_val, we use arm64_ftr_value() to
      find the new field value. We use cpuid_feature_extract_field() to find
      the new value, but this implicitly assumes a 4-bit field, so we may
      extract more bits than we mean to for fields like CTR_EL0.L1ip.
      
      This affects update_cpu_ftr_reg(), where we may extract erroneous values
      for ftr_cur and ftr_new. Depending on the additional bits extracted in
      either case, we may erroneously detect that the value is mismatched, and
      we'll try to compute a new safe value.
      
      Dependent on these extra bits and feature type, arm64_ftr_safe_value()
      may pessimistically select the always-safe value, or may erroneously
      choose either the extracted cur or new value as the safe option. The
      extra bits will subsequently be masked out in arm64_ftr_set_value(), so
      we may choose a higher value, yet write back a lower one.
      
      Fix this by passing the width down explicitly in arm64_ftr_value(), so
      we always extract the correct amount.
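
      For illustration, a minimal sketch of width-aware field extraction
      (kernel-style C assuming <linux/types.h>; the helper name here is made
      up for this example and is not the patch's actual interface):

        /* Extract an unsigned feature-register field of the given width. */
        static inline u64 ftr_extract_unsigned_field(u64 reg, unsigned int shift,
                                                     unsigned int width)
        {
                return (reg >> shift) & ((1ULL << width) - 1);
        }

        /*
         * e.g. ftr_extract_unsigned_field(ctr_el0, 14, 2) reads CTR_EL0.L1Ip,
         * a 2-bit field at bit 14; a hard-coded width of 4 would also pick up
         * the low bits of the neighbouring DminLine field (bits [19:16]),
         * which is exactly the over-extraction described above.
         */
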
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 15 Feb 2017, 1 commit
    • arm64: ptrace: add XZR-safe regs accessors · 6c23e2ff
      Committed by Mark Rutland
      In A64, XZR and the SP share the same encoding (31), and whether an
      instruction accesses XZR or SP for a particular register parameter
      depends on the definition of the instruction.
      
      We store the SP in pt_regs::regs[31], and thus when emulating
      instructions, we must be careful to not erroneously read from or write
      back to the saved SP. Unfortunately, we often fail to be this careful.
      
      In all cases, instructions using a transfer register parameter Xt use
      this to refer to XZR rather than SP. This patch adds helpers so that we
      can more easily and consistently handle these cases.
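
      A sketch of helpers along these lines (names and exact types assumed
      for illustration): reads of register 31 yield zero and writes to it are
      discarded, so emulation code never touches the saved SP by accident.

        static inline u64 pt_regs_read_reg(const struct pt_regs *regs, int r)
        {
                /* Encoding 31 means XZR for an Xt transfer register. */
                return (r == 31) ? 0 : regs->regs[r];
        }

        static inline void pt_regs_write_reg(struct pt_regs *regs, int r, u64 val)
        {
                /* Writes to XZR are discarded instead of clobbering regs[31] (SP). */
                if (r != 31)
                        regs->regs[r] = val;
        }
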
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 10 Feb 2017, 1 commit
    • arm64: Work around Falkor erratum 1003 · 38fd94b0
      Committed by Christopher Covington
      The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries
      using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum
      is triggered, page table entries using the new translation table base
      address (BADDR) will be allocated into the TLB using the old ASID. All
      circumstances leading to the incorrect ASID being cached in the TLB arise
      when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory
      operation is in the process of performing a translation using the specific
      TTBRx_EL1 being written, and the memory operation uses a translation table
      descriptor designated as non-global. EL2 and EL3 code changing the EL1&0
      ASID is not subject to this erratum because hardware is prohibited from
      performing translations from an out-of-context translation regime.
      
      Consider the following pseudo code.
      
        write new BADDR and ASID values to TTBRx_EL1
      
      Replacing the above sequence with the one below will ensure that no TLB
      entries with an incorrect ASID are used by software.
      
        write reserved value to TTBRx_EL1[ASID]
        ISB
        write new value to TTBRx_EL1[BADDR]
        ISB
        write new value to TTBRx_EL1[ASID]
        ISB
      
      When the above sequence is used, page table entries using the new BADDR
      value may still be incorrectly allocated into the TLB using the reserved
      ASID. Yet this will not reduce functionality, since TLB entries incorrectly
      tagged with the reserved ASID will never be hit by a later instruction.
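
      As a rough C-level sketch of that sequence (assuming the kernel's
      read_sysreg()/write_sysreg()/isb() helpers; TTBR_ASID_* and
      RESERVED_ASID are illustrative constants, not the patch's actual
      implementation, which lives in the TTBR-switching assembly):

        #define TTBR_ASID_SHIFT 48
        #define TTBR_ASID_MASK  (0xffffULL << TTBR_ASID_SHIFT)
        #define RESERVED_ASID   0ULL    /* assumed reserved value */

        static void falkor_e1003_switch_ttbr0(u64 new_baddr, u64 new_asid)
        {
                u64 ttbr = read_sysreg(ttbr0_el1);

                /* 1. Park the ASID field on the reserved ASID. */
                ttbr = (ttbr & ~TTBR_ASID_MASK) | (RESERVED_ASID << TTBR_ASID_SHIFT);
                write_sysreg(ttbr, ttbr0_el1);
                isb();

                /* 2. Install the new translation table base address (BADDR). */
                ttbr = (ttbr & TTBR_ASID_MASK) | new_baddr;
                write_sysreg(ttbr, ttbr0_el1);
                isb();

                /* 3. Finally install the new ASID. */
                ttbr = (ttbr & ~TTBR_ASID_MASK) | (new_asid << TTBR_ASID_SHIFT);
                write_sysreg(ttbr, ttbr0_el1);
                isb();
        }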
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 09 Feb 2017, 1 commit
    • arm64: uaccess: consistently check object sizes · 76624175
      Committed by Mark Rutland
      Currently in arm64's copy_{to,from}_user, we only check the
      source/destination object size if access_ok() tells us the user access
      is permissible.
      
      However, in copy_from_user() we'll subsequently zero any remainder on
      the destination object. If we failed the access_ok() check, that applies
      to the whole object size, which we didn't check.
      
      To ensure that we catch that case, this patch hoists check_object_size()
      to the start of copy_from_user(), matching __copy_from_user() and
      __copy_to_user(). To make all of our uaccess copy primitives consistent,
      the same is done to copy_to_user().
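
      A simplified sketch of the resulting ordering in copy_from_user() (the
      real arm64 implementation also carries KASAN annotations; the shape
      below is illustrative rather than a verbatim copy of the patch):

        static inline unsigned long copy_from_user(void *to,
                                                   const void __user *from,
                                                   unsigned long n)
        {
                unsigned long res = n;

                /* Check the destination object size up front, whether or not
                   access_ok() later rejects the user range. */
                check_object_size(to, n, false);

                if (access_ok(VERIFY_READ, from, n))
                        res = __arch_copy_from_user(to, from, n);
                if (unlikely(res))
                        memset(to + (n - res), 0, res); /* zero the unread tail */
                return res;
        }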
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 07 Feb 2017, 1 commit
  6. 03 Feb 2017, 1 commit
    • arm64: KVM: Save/restore the host SPE state when entering/leaving a VM · f85279b4
      Committed by Will Deacon
      The SPE buffer is virtually addressed, using the page tables of the CPU
      MMU. Unusually, this means that the EL0/1 page table may be live whilst
      we're executing at EL2 on non-VHE configurations. When VHE is in use,
      we can use the same property to profile the guest behind its back.
      
      This patch adds the relevant disabling and flushing code to KVM so that
      the host can make use of SPE without corrupting guest memory, and any
      attempts by a guest to use SPE will result in a trap.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alex Bennée <alex.bennee@linaro.org>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  7. 01 Feb 2017, 2 commits
    • arm64: Work around Falkor erratum 1009 · d9ff80f8
      Committed by Christopher Covington
      During a TLB invalidate sequence targeting the inner shareable domain,
      Falkor may prematurely complete the DSB before all loads and stores using
      the old translation are observed. Instruction fetches are not subject to
      the conditions of this erratum. If the original code sequence includes
      multiple TLB invalidate instructions followed by a single DSB, only one of
      the TLB instructions needs to be repeated to work around this erratum.
      While the erratum only applies to cases in which the TLBI specifies the
      inner-shareable domain (the *IS form of TLBI) and the DSB is of ISH form
      or stronger (OSH, SYS), this change applies the workaround more broadly,
      to local TLBI and DSB NSH sequences as well, for simplicity.
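
      As a hedged illustration of the idea (not the kernel's actual __TLBI
      macros), a by-ASID invalidate with the extra TLBI inserted before the
      DSB might look like:

        /* Sketch only: the invalidate is issued twice so that at least one
           TLBI completes after every walk that may still be using the old
           translation (Falkor erratum 1009). */
        static inline void flush_tlb_asid_sketch(unsigned long asid)
        {
                unsigned long arg = asid << 48; /* ASID lives in bits [63:48] */

                asm volatile("dsb ishst\n\t"
                             "tlbi aside1is, %0\n\t"
                             "tlbi aside1is, %0\n\t" /* repeated invalidate */
                             "dsb ish\n\t"
                             : : "r" (arg) : "memory");
        }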
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Improve detection of user/non-user mappings in set_pte(_at) · ec663d96
      Committed by Catalin Marinas
      Commit cab15ce6 ("arm64: Introduce execute-only page access
      permissions") allowed a valid user PTE to have the PTE_USER bit clear.
      As a consequence, the pte_valid_not_user() macro in set_pte() was
      replaced with pte_valid_global() under the assumption that only user
      pages have the nG bit set. EFI mappings, however, also have the nG bit
      set, and set_pte() wrongly skips issuing the DSB+ISB.
      
      This patch reinstates the pte_valid_not_user() macro and adds the
      PTE_UXN bit check since all kernel mappings have this bit set. For
      clarity, pte_exec() is renamed to pte_user_exec() as it only checks for
      the absence of PTE_UXN. Consequently, the user executable check in
      set_pte_at() drops the pte_ng() test since pte_user_exec() is
      sufficient.
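
      The distinguishing check can be sketched as follows (assuming the usual
      PTE_VALID/PTE_USER/PTE_UXN bit definitions): a kernel mapping is valid,
      has PTE_USER clear and PTE_UXN set, while user-executable mappings are
      exactly those with PTE_UXN clear.

        #define pte_valid_not_user(pte) \
                ((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == \
                 (PTE_VALID | PTE_UXN))

        #define pte_user_exec(pte)      (!(pte_val(pte) & PTE_UXN))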
      
      Fixes: cab15ce6 ("arm64: Introduce execute-only page access permissions")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 27 Jan 2017, 1 commit
  9. 13 Jan 2017, 1 commit
  10. 12 Jan 2017, 5 commits
  11. 11 Jan 2017, 4 commits
  12. 10 Jan 2017, 2 commits
    • arm64: cpufeature: Don't enforce system-wide SPE capability · f31deaad
      Committed by Will Deacon
      The statistical profiling extension (SPE) is an optional feature of
      ARMv8.1 and is unlikely to be supported by all of the CPUs in a
      heterogeneous system.
      
      This patch updates the cpufeature checks so that such systems are not
      tainted as unsupported.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Remove useless UAO IPI and describe how this gets enabled · c8b06e3f
      Committed by James Morse
      Since its introduction, the UAO enable call has been broken and useless.
      Commit 2a6dcb2b ("arm64: cpufeature: Schedule enable() calls instead
      of calling them via IPI") fixed the framework so that these calls are
      scheduled and can therefore modify PSTATE.
      
      Now the call is simply useless. Remove it. UAO is enabled by the code patching
      which causes get_user() and friends to use the 'ldtr' family of
      instructions. This relies on the PSTATE.UAO bit being set to match
      addr_limit, which we do in uao_thread_switch() called via __switch_to().
      
      All that is needed to enable UAO is to patch the code and call schedule().
      __apply_alternatives_multi_stop() calls stop_machine() when it modifies
      the kernel text to enable the alternatives (including the UAO code in
      uao_thread_switch()). Once stop_machine() has finished, __switch_to() is
      called to reschedule the original task; this causes PSTATE.UAO to be set
      appropriately. An explicit enable() call is not needed.
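
      The per-task update happens along these lines in uao_thread_switch()
      (a sketch assuming the ALTERNATIVE()/SET_PSTATE_UAO() macros; the
      SET_PSTATE_UAO instruction only takes effect once the ARM64_HAS_UAO
      alternative has been applied):

        static void uao_thread_switch(struct task_struct *next)
        {
                if (IS_ENABLED(CONFIG_ARM64_UAO)) {
                        /* Kernel-wide addr_limit: let ldtr/sttr reach kernel
                           addresses by setting PSTATE.UAO. */
                        if (task_thread_info(next)->addr_limit == KERNEL_DS)
                                asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1),
                                                ARM64_HAS_UAO));
                        else
                                asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0),
                                                ARM64_HAS_UAO));
                }
        }
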
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
  13. 05 Jan 2017, 1 commit
  14. 27 Dec 2016, 1 commit
  15. 25 Dec 2016, 1 commit
  16. 21 Dec 2016, 2 commits
  17. 15 Dec 2016, 1 commit
  18. 13 Dec 2016, 1 commit
  19. 06 Dec 2016, 1 commit
  20. 05 Dec 2016, 1 commit
  21. 03 Dec 2016, 1 commit
  22. 02 Dec 2016, 3 commits
  23. 29 Nov 2016, 4 commits
  24. 24 Nov 2016, 1 commit
    • arm64: Remove I-cache invalidation from flush_cache_range() · ee6a7fce
      Committed by Catalin Marinas
      The flush_cache_range() function (similarly for flush_cache_page()) is
      called when the kernel is changing an existing VA->PA mapping range to
      either a new PA or to different attributes. Since ARMv8 has PIPT-like
      D-caches, this function does not need to perform any D-cache
      maintenance. The I-cache maintenance is already handled via set_pte_at()
      and flush_cache_range() cannot anyway guarantee that there are no cache
      lines left after invalidation due to the speculative loads.
      
      This patch makes flush_cache_range() a no-op.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  25. 22 Nov 2016, 1 commit
    • arm64: Disable TTBR0_EL1 during normal kernel execution · 39bc88e5
      Committed by Catalin Marinas
      When the TTBR0 PAN feature is enabled, the kernel entry points need to
      disable access to TTBR0_EL1. The PAN status of the interrupted context
      is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
      Restoring access to TTBR0_EL1 is done on exception return if returning
      to user or returning to a context where PAN was disabled.
      
      Context switching via switch_mm() must defer the update of TTBR0_EL1
      until a return to user or an explicit uaccess_enable() call.
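
      Conceptually, the uaccess_enable()/uaccess_disable() pair toggles
      TTBR0_EL1 between a reserved table and the real user tables, roughly as
      below (a sketch; in the kernel these are assembly macros, and
      reserved_ttbr0 here stands in for whatever empty table is used):

        static inline void uaccess_ttbr0_disable_sketch(void)
        {
                /* Point TTBR0_EL1 at an empty, reserved table so that any
                   user-address access from the kernel faults. */
                write_sysreg(__pa(reserved_ttbr0), ttbr0_el1);
                isb();
        }

        static inline void uaccess_ttbr0_enable_sketch(void)
        {
                /* Re-install the user tables stashed per thread at context
                   switch, re-enabling user accesses from the kernel. */
                write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
                isb();
        }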
      
      Special care needs to be taken for two cases where TTBR0_EL1 is set
      outside the normal kernel context switch operation: EFI run-time
      services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
      Code has been added to avoid deferred TTBR0_EL1 switching as in
      switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
      special TTBR0_EL1.
      
      User cache maintenance (user_cache_maint_handler and
      __flush_cache_user_range) needs the TTBR0_EL1 re-instated since the
      operations are performed by user virtual address.
      
      This patch also removes a stale comment on the switch_mm() function.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>