1. 27 Feb, 2019 (1 commit)
  2. 06 Feb, 2019 (1 commit)
  3. 12 Dec, 2018 (1 commit)
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 7faa313f
      Authored by Will Deacon
      Commit 39624469 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched, meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value in favour of inspecting the thread
      flags separately in order to determine whether a reschedule is needed.
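      
      The layout at the heart of the bug can be sketched as follows (a minimal
      illustration based on the description above, not a verbatim copy of
      struct thread_info):
      
        #include <linux/types.h>
        
        struct thread_info_sketch {
            union {
                u64 preempt_count;      /* one 64-bit load sees count + flag */
                struct {
        #ifdef CONFIG_CPU_BIG_ENDIAN
                    u32 need_resched;   /* BE: flag at offset 0, so a 32-bit
                                           load returns it, not the count */
                    u32 count;
        #else
                    u32 count;          /* LE: count at offset 0 */
                    u32 need_resched;
        #endif
                } preempt;
            };
        };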
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7faa313f
  4. 11 Dec, 2018 (3 commits)
    • arm64: Kconfig: Re-jig CONFIG options for 52-bit VA · 68d23da4
      Authored by Will Deacon
      Enabling 52-bit VAs for userspace is pretty confusing, since it requires
      you to select "48-bit" virtual addressing in the Kconfig.
      
      Rework the logic so that 52-bit user virtual addressing is advertised in
      the "Virtual address space size" choice, along with some help text to
      describe its interaction with Pointer Authentication. The EXPERT-only
      option to force all user mappings to the 52-bit range is then made
      available immediately below the VA size selection.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      68d23da4
    • arm64: mm: introduce 52-bit userspace support · 67e7fdfc
      Authored by Steve Capper
      On arm64 there is optional support for a 52-bit virtual address space.
      To exploit this, the kernel must be built with a 64KB page size and be
      running on hardware that supports 52-bit VAs.
      
      For an arm64 kernel supporting a 48-bit VA with a 64KB page size,
      some changes are needed to support a 52-bit userspace:
       * TCR_EL1.T0SZ needs to be 12 instead of 16,
       * TASK_SIZE needs to reflect the new size.
      
      This patch implements the above when the support for 52-bit VAs is
      detected at early boot time.
      
      On arm64, translation of userspace addresses is controlled by TTBR0_EL1.
      As well as userspace, TTBR0_EL1 controls:
       * The identity mapping,
       * EFI runtime code.
      
      It is possible to run a kernel with an identity mapping that has a
      larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
      would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
      52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
      12. Thus in this patch, the TCR_EL1.T0SZ size-changing logic is
      disabled.
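      
      The relationship between VA size and T0SZ fits in one line (a sketch
      modelled on the kernel's TCR_T0SZ() definition in asm/pgtable-hwdef.h;
      treat the exact names as illustrative):
      
        /* TCR_EL1.T0SZ encodes the TTBR0 region size as (64 - VA bits):
         * a 2^48 region needs T0SZ = 16, a 2^52 region needs T0SZ = 12.
         */
        #define TCR_T0SZ_OFFSET    0
        #define TCR_T0SZ(va_bits)  ((64UL - (va_bits)) << TCR_T0SZ_OFFSET)
        
        /* TCR_T0SZ(48) == 16, TCR_T0SZ(52) == 12 */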
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      67e7fdfc
    • arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD · e842dfb5
      Authored by Steve Capper
      Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64
      entries (for the 48-bit case) to 1024 entries. This quantity,
      PTRS_PER_PGD, is used as follows to compute which PGD entry corresponds
      to a given virtual address, addr:
      
      pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)
      
      Userspace addresses are prefixed by 0's, so for a 48-bit userspace
      address, uva, the following is true:
      ((uva >> PGDIR_SHIFT) & (1024 - 1)) == ((uva >> PGDIR_SHIFT) & (64 - 1))
      
      In other words, a 48-bit userspace address will have the same pgd_index
      when using PTRS_PER_PGD = 64 and 1024.
      
      Kernel addresses are prefixed by 1's, so given a 48-bit kernel address,
      kva, we have the following inequality:
      ((kva >> PGDIR_SHIFT) & (1024 - 1)) != ((kva >> PGDIR_SHIFT) & (64 - 1))
      
      In other words a 48-bit kernel virtual address will have a different
      pgd_index when using PTRS_PER_PGD = 64 and 1024.
      
      If, however, we note that:
      kva = (0xFFFF << 48) + lower (where lower[63:48] == 0)
      and PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE)
      
      We can consider:
      ((kva >> PGDIR_SHIFT) & (1024 - 1)) - ((kva >> PGDIR_SHIFT) & (64 - 1))
       = ((0xFFFF << 6) & 0x3FF) - ((0xFFFF << 6) & 0x3F)   // "lower" cancels out
       = 0x3C0
      
      In other words, one can switch PTRS_PER_PGD to the 52-bit value globally
      provided that ttbr1_el1 is incremented by 0x3C0 * 8 = 0x1E00 bytes when
      running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16).
      
      For kernel configurations where 52-bit userspace VAs are possible, this
      patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the
      52-bit value.
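      
      The arithmetic can be checked with a small userspace program (a
      standalone sketch; PGDIR_SHIFT and the index mask are taken from the
      description above):
      
        #include <stdint.h>
        #include <stdio.h>
        
        #define PGDIR_SHIFT 42  /* 64KB pages, as stated above */
        
        static uint64_t pgd_index(uint64_t addr, uint64_t ptrs_per_pgd)
        {
            return (addr >> PGDIR_SHIFT) & (ptrs_per_pgd - 1);
        }
        
        int main(void)
        {
            uint64_t kva = 0xFFFF000000000000ULL;  /* a 48-bit kernel VA */
            uint64_t delta = pgd_index(kva, 1024) - pgd_index(kva, 64);
        
            /* prints 0x3c0 and 0x1e00, matching the ttbr1_el1 offset */
            printf("index delta = %#llx, byte offset = %#llx\n",
                   (unsigned long long)delta,
                   (unsigned long long)(delta * 8));
            return 0;
        }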
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      [will: added comment to TTBR1_BADDR_4852_OFFSET calculation]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e842dfb5
  5. 10 Dec, 2018 (2 commits)
    • arm64: Fix minor issues with the dcache_by_line_op macro · 33309ecd
      Authored by Will Deacon
      The dcache_by_line_op macro suffers from a couple of small problems:
      
      First, the GAS directives that are currently being used rely on
      assembler behavior that is not documented, and probably not guaranteed
      to produce the correct behavior going forward. As a result, we end up
      with some undefined symbols in cache.o:
      
      $ nm arch/arm64/mm/cache.o
               ...
               U civac
               ...
               U cvac
               U cvap
               U cvau
      
      This is due to the fact that the comparisons used to select the
      operation type in the dcache_by_line_op macro are comparing symbols,
      not strings, and even though it seems that GAS is doing the right
      thing here (undefined symbols with the same name are equal to each
      other), it seems unwise to rely on this.
      
      Second, when patching in a DC CVAP instruction on CPUs that support it,
      the fallback path consists of a DC CVAU instruction which may be
      affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE.
      
      Solve these issues by unrolling the various maintenance routines and
      using the conditional directives that are documented as operating on
      strings. To avoid the complexity of nested alternatives, we move the
      DC CVAP patching to __clean_dcache_area_pop, falling back to a branch
      to __clean_dcache_area_poc if DCPOP is not supported by the CPU.
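      
      The fallback structure can be sketched in C (the real routine is
      assembly patched with alternatives; cpu_supports_dcpop() here is a
      hypothetical stand-in for the ARM64_HAS_DCPOP capability check):
      
        #include <stddef.h>
        
        #define CACHE_LINE_SIZE 64  /* illustration; real code reads CTR_EL0 */
        
        extern int cpu_supports_dcpop(void);
        extern void __clean_dcache_area_poc(void *addr, size_t size);
        
        /* Clean to the Point of Persistence, branching to the Point of
         * Coherency routine when DC CVAP is not implemented.
         */
        void __clean_dcache_area_pop(void *addr, size_t size)
        {
            char *p = addr, *end = p + size;
        
            if (!cpu_supports_dcpop()) {
                __clean_dcache_area_poc(addr, size);
                return;
            }
            for (; p < end; p += CACHE_LINE_SIZE)
                asm volatile("dc cvap, %0" :: "r" (p) : "memory");
            asm volatile("dsb ish" ::: "memory");
        }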
      Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Suggested-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      33309ecd
    • arm64: add EXPORT_SYMBOL_NOKASAN() · 386b3c7b
      Authored by Mark Rutland
      So that we can export symbols directly from assembly files, let's make
      use of the generic <asm/export.h>. We have a few symbols that we'll want
      to conditionally export for !KASAN kernel builds, so we add a helper for
      that in <asm/assembler.h>.
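      
      The helper can be as small as a preprocessor conditional (sketched from
      the description above; see <asm/assembler.h> for the actual definition):
      
        /* Export a symbol from an assembly file, but only when KASAN is
         * off: KASAN builds want the instrumented C implementations.
         */
        #ifdef CONFIG_KASAN
        #define EXPORT_SYMBOL_NOKASAN(name)
        #else
        #define EXPORT_SYMBOL_NOKASAN(name)  EXPORT_SYMBOL(name)
        #endif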
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      386b3c7b
  6. 07 Dec, 2018 (1 commit)
  7. 20 Sep, 2018 (1 commit)
  8. 12 Apr, 2018 (2 commits)
  9. 20 Mar, 2018 (1 commit)
  10. 07 Feb, 2018 (6 commits)
  11. 16 Jan, 2018 (2 commits)
  12. 13 Jan, 2018 (1 commit)
  13. 09 Jan, 2018 (1 commit)
  14. 08 Jan, 2018 (1 commit)
  15. 23 Dec, 2017 (3 commits)
  16. 12 Dec, 2017 (1 commit)
    • arm64: Add software workaround for Falkor erratum 1041 · 932b50c7
      Authored by Shanker Donthineni
      The ARM architecture defines the memory locations that are permitted
      to be accessed as the result of a speculative instruction fetch from
      an exception level for which all stages of translation are disabled.
      Specifically, the core is permitted to speculatively fetch only from
      the 4KB region containing the current program counter and the next 4KB
      region.
      
      When translation is changed from enabled to disabled for the running
      exception level (SCTLR_ELn[M] changed from a value of 1 to 0), the
      Falkor core may errantly speculatively access memory locations outside
      of the 4KB region permitted by the architecture. The errant memory
      access may lead to one of the following unexpected behaviors.
      
      1) A System Error Interrupt (SEI) being raised by the Falkor core due
         to the errant memory access attempting to access a region of memory
         that is protected by a slave-side memory protection unit.
      2) Unpredictable device behavior due to a speculative read from device
         memory. This behavior may only occur if the instruction cache is
         disabled prior to or coincident with translation being changed from
         enabled to disabled.
      
      The conditions leading to this erratum will not occur when either of the
      following applies:
       1) A higher exception level disables translation of a lower exception
          level (e.g. EL2 changing SCTLR_EL1[M] from a value of 1 to 0).
       2) An exception level disables its stage-1 translation while its stage-2
          translation is enabled (e.g. EL1 changing SCTLR_EL1[M] from a value
          of 1 to 0 when HCR_EL2[VM] has a value of 1).
      
      To avoid the errant behavior, software must execute an ISB immediately
      prior to executing the MSR that will change SCTLR_ELn[M] from 1 to 0.
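      
      Sketched as inline asm (the helper name is hypothetical; the kernel
      implements this as an assembler macro guarded by the erratum's config
      option):
      
        /* Disable the MMU at EL1 with the erratum 1041 workaround: an ISB
         * immediately before the MSR that clears SCTLR_EL1.M (bit 0).
         */
        static inline void disable_mmu_el1_with_workaround(unsigned long sctlr)
        {
            sctlr &= ~1UL;                     /* clear SCTLR_EL1.M */
            asm volatile("isb\n\t"             /* fence errant speculation */
                         "msr sctlr_el1, %0\n\t"
                         "isb"
                         :: "r" (sctlr) : "memory");
        }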
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      932b50c7
  17. 11 Dec, 2017 (2 commits)
  18. 02 Nov, 2017 (5 commits)
  19. 25 Oct, 2017 (1 commit)
  20. 16 Aug, 2017 (1 commit)
  21. 09 Aug, 2017 (1 commit)
  22. 08 Aug, 2017 (1 commit)
    • arm64: move non-entry code out of .entry.text · ed84b4e9
      Authored by Mark Rutland
      Currently, cpu_switch_to and ret_from_fork both live in .entry.text,
      though neither forms part of the critical path for an exception entry.
      
      In subsequent patches, we will require that code in .entry.text is part
      of the critical path for exception entry, for which we can assume
      certain properties (e.g. the presence of exception regs on the stack).
      
      Neither cpu_switch_to nor ret_from_fork will meet these requirements, so
      we must move them out of .entry.text. To ensure that neither are kprobed
      after being moved out of .entry.text, we must explicitly blacklist them,
      requiring a new NOKPROBE() asm helper.
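      
      A sketch of such a helper, mirroring what NOKPROBE_SYMBOL() does for C
      functions: record the symbol's address in the _kprobe_blacklist section
      (the exact form may differ from the kernel's):
      
        /* Usable from assembly files: blacklist `x` so that kprobes
         * refuses to instrument it.
         */
        #ifdef CONFIG_KPROBES
        #define NOKPROBE(x)                               \
            .pushsection "_kprobe_blacklist", "aw";       \
            .quad   x;                                    \
            .popsection;
        #else
        #define NOKPROBE(x)
        #endif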
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      ed84b4e9
  23. 10 Feb, 2017 (1 commit)
    • arm64: Work around Falkor erratum 1003 · 38fd94b0
      Authored by Christopher Covington
      The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries
      using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum
      is triggered, page table entries using the new translation table base
      address (BADDR) will be allocated into the TLB using the old ASID. All
      circumstances leading to the incorrect ASID being cached in the TLB arise
      when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory
      operation is in the process of performing a translation using the specific
      TTBRx_EL1 being written, and the memory operation uses a translation table
      descriptor designated as non-global. EL2 and EL3 code changing the EL1&0
      ASID is not subject to this erratum because hardware is prohibited from
      performing translations from an out-of-context translation regime.
      
      Consider the following pseudo code.
      
        write new BADDR and ASID values to TTBRx_EL1
      
      Replacing the above sequence with the one below will ensure that no TLB
      entries with an incorrect ASID are used by software.
      
        write reserved value to TTBRx_EL1[ASID]
        ISB
        write new value to TTBRx_EL1[BADDR]
        ISB
        write new value to TTBRx_EL1[ASID]
        ISB
      
      When the above sequence is used, page table entries using the new BADDR
      value may still be incorrectly allocated into the TLB using the reserved
      ASID. Yet this will not reduce functionality, since TLB entries incorrectly
      tagged with the reserved ASID will never be hit by a later instruction.
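      
      The sequence above, sketched as inline asm for TTBR0_EL1 (a hypothetical
      helper: RESERVED_ASID and the function name are illustrative; the
      kernel's fix lives in the context-switch assembly behind the erratum's
      capability bit):
      
        #include <stdint.h>
        
        #define TTBR_ASID_SHIFT 48
        #define TTBR_ASID_MASK  (0xFFFFULL << TTBR_ASID_SHIFT)
        #define RESERVED_ASID   0ULL  /* an ASID never handed to a task */
        
        static inline void falkor_safe_ttbr0_switch(uint64_t new_baddr,
                                                    uint64_t new_asid)
        {
            uint64_t ttbr;
        
            /* 1. swap in the reserved ASID, keeping the current BADDR */
            asm volatile("mrs %0, ttbr0_el1" : "=r" (ttbr));
            ttbr = (ttbr & ~TTBR_ASID_MASK) | (RESERVED_ASID << TTBR_ASID_SHIFT);
            asm volatile("msr ttbr0_el1, %0\n\tisb" :: "r" (ttbr) : "memory");
        
            /* 2. install the new BADDR, still under the reserved ASID */
            ttbr = new_baddr | (RESERVED_ASID << TTBR_ASID_SHIFT);
            asm volatile("msr ttbr0_el1, %0\n\tisb" :: "r" (ttbr) : "memory");
        
            /* 3. finally switch to the new ASID */
            ttbr = new_baddr | (new_asid << TTBR_ASID_SHIFT);
            asm volatile("msr ttbr0_el1, %0\n\tisb" :: "r" (ttbr) : "memory");
        }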
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      38fd94b0