1. 17 December 2021 (1 commit)
  2. 02 February 2021 (3 commits)
    • ARM: 9046/1: decompressor: Do not clear SCTLR.nTLSMD for ARMv7+ cores · 2acb9097
      Vladimir Murzin committed
      It was observed that the decompressor, running on hardware implementing the
      ARMv8.2 Load/Store Multiple Atomicity and Ordering Control (LSMAOC)
      extension, say as a guest, would get stuck just after:
      
      Uncompressing Linux... done, booting the kernel.
      
      The reason is that it clears the nTLSMD bit when disabling the caches:
      
        nTLSMD, bit [3]
      
        When ARMv8.2-LSMAOC is implemented:
      
          No Trap Load Multiple and Store Multiple to
          Device-nGRE/Device-nGnRE/Device-nGnRnE memory.
      
          0b0 All memory accesses by A32 and T32 Load Multiple and Store
              Multiple at EL1 or EL0 that are marked at stage 1 as
              Device-nGRE/Device-nGnRE/Device-nGnRnE memory are trapped and
              generate a stage 1 Alignment fault.
      
          0b1 All memory accesses by A32 and T32 Load Multiple and Store
              Multiple at EL1 or EL0 that are marked at stage 1 as
              Device-nGRE/Device-nGnRE/Device-nGnRnE memory are not trapped.
      
        This bit is permitted to be cached in a TLB.
      
        This field resets to 1.
      
        Otherwise:
      
        Reserved, RES1
      
      So, in effect, we start getting traps we are not quite ready for.
      
      Looking into the history, it seems that the mask used for the SCTLR clear
      came from similar code for ARMv4, where bit[3] is the enable/disable bit
      for the write buffer. That is not applicable to ARMv7 and onwards, so
      retire that bit from the masks.
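      The mask change can be modeled in C. The bit positions follow the ARM ARM,
      but the macro and function names here are illustrative, not the kernel's
      actual symbols (the real code is a `bic` in assembly):

```c
#include <stdint.h>

/* SCTLR bits relevant to the decompressor's cache-off path.
 * Bit 3 is the write buffer (W) enable on ARMv4, but nTLSMD (RES1)
 * on ARMv7+ with ARMv8.2-LSMAOC, so it must not be cleared there. */
#define SCTLR_M     (1u << 0)   /* MMU enable */
#define SCTLR_C     (1u << 2)   /* D-cache enable */
#define SCTLR_BIT3  (1u << 3)   /* ARMv4: write buffer; ARMv7+: nTLSMD */

/* The old mask cleared bit 3 as well; the fixed v7+ mask leaves it set. */
#define V4_CACHE_OFF_MASK  (SCTLR_M | SCTLR_C | SCTLR_BIT3)  /* 0x000d */
#define V7_CACHE_OFF_MASK  (SCTLR_M | SCTLR_C)               /* 0x0005 */

static inline uint32_t v7_cache_off(uint32_t sctlr)
{
        /* Disable MMU and D-cache without touching nTLSMD. */
        return sctlr & ~V7_CACHE_OFF_MASK;
}
```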
      
      Fixes: 7d09e854 ("[ARM] 4393/2: ARMv7: Add uncompressing code for the new CPU Id format")
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9045/1: uncompress: Validate start of physical memory against passed DTB · 0673cb38
      Geert Uytterhoeven committed
      Currently, the start address of physical memory is obtained by masking
      the program counter with a fixed mask of 0xf8000000.  This mask value
      was chosen as a balance between the requirements of different platforms.
      However, this does require that the start address of physical memory is
      a multiple of 128 MiB, precluding booting Linux on platforms where this
      requirement is not fulfilled.
      
      Fix this limitation by validating the masked address against the memory
      information in the passed DTB.  Only use the start address from the DTB
      when masking would yield an out-of-range address; prefer the traditional
      method in all other cases.  Note that this applies only to the
      explicitly passed DTB on modern systems, and not to a DTB appended to
      the kernel, or to ATAGS.  The appended DTB may need to be augmented by
      information from ATAGS, which may need to rely on knowledge of the start
      address of physical memory itself.
      
      This allows booting Linux on r7s9210/rza2mevb using the 64 MiB of SDRAM
      on the RZA2MEVB sub board, which is located at 0x0C000000 (CS3 space),
      i.e. not at a multiple of 128 MiB.
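      The validation logic can be sketched as follows. The helper name and
      parameters are hypothetical; the real code parses the memory node of the
      passed DTB rather than receiving the range as arguments:

```c
#include <stdint.h>

#define PHYS_MASK  0xf8000000u  /* traditional 128 MiB alignment mask */

/* Prefer the traditional PC-masking method; fall back to the DTB start
 * address only when masking yields an out-of-range result. */
static uint32_t phys_start(uint32_t pc, uint32_t dtb_mem_start,
                           uint32_t dtb_mem_size)
{
        uint32_t masked = pc & PHYS_MASK;

        if (masked >= dtb_mem_start &&
            masked < dtb_mem_start + dtb_mem_size)
                return masked;
        return dtb_mem_start;
}
```

      For the RZA2MEVB case above, a PC inside the SDRAM at 0x0C000000 masks
      down to 0x08000000, which is outside the 64 MiB range, so the DTB start
      address is used instead.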
      Suggested-by: Nicolas Pitre <nico@fluxnic.net>
      Suggested-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9039/1: assembler: generalize byte swapping macro into rev_l · 6468e898
      Ard Biesheuvel committed
      Take the 4 instruction byte swapping sequence from the decompressor's
      head.S, and turn it into a rev_l GAS macro for general use. While
      at it, make it use the 'rev' instruction when compiling for v6 or
      later.
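      The four-instruction pre-v6 sequence can be modeled in C to see that it
      really reverses the byte order; `ror32` stands in for ARM's rotate
      operand, and this is a sketch, not the macro's actual source:

```c
#include <stdint.h>

static inline uint32_t ror32(uint32_t x, unsigned n)
{
        return (x >> n) | (x << (32 - n));
}

/* Model of the eor/bic/mov,ror/eor sequence; on v6+ the rev_l macro
 * emits a single 'rev' instruction instead. */
static uint32_t rev_l(uint32_t x)
{
        uint32_t t = x ^ ror32(x, 16);  /* eor t, x, x, ror #16 */
        t &= ~0x00ff0000u;              /* bic t, t, #0x00ff0000 */
        x = ror32(x, 8);                /* mov x, x, ror #8 */
        return x ^ (t >> 8);            /* eor x, x, t, lsr #8 */
}
```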
      Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  3. 21 December 2020 (3 commits)
  4. 29 October 2020 (2 commits)
  5. 26 October 2020 (1 commit)
    • efi/arm: set HSCTLR Thumb2 bit correctly for HVC calls from HYP · fbc81ec5
      Ard Biesheuvel committed
      Commit
      
        db227c19 ("ARM: 8985/1: efi/decompressor: deal with HYP mode boot gracefully")
      
      updated the EFI entry code to permit firmware to invoke the EFI stub
      loader in HYP mode, with the MMU either enabled or disabled, neither
      of which is permitted by the EFI spec, but which does happen in the
      field.
      
      In the MMU on case, we remain in HYP mode as configured by the firmware,
      and rely on the fact that any HVC instruction issued in this mode will
      be dispatched via the SVC slot in the HYP vector table. However, this
      slot will point to a Thumb2 symbol if the kernel is built in Thumb2
      mode, and so we have to configure HSCTLR to ensure that the exception
      handlers are invoked in Thumb2 mode as well.
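      A minimal model of the fix, assuming the SCTLR/HSCTLR layout in which TE
      (exceptions taken to Thumb state) is bit 30; the function name is
      illustrative and the real code is an `mcr` in the entry assembly:

```c
#include <stdint.h>

#define HSCTLR_TE  (1u << 30)  /* take exceptions at HYP in Thumb state */

/* When the kernel (and so the HYP vector slots) is built in Thumb2,
 * HSCTLR.TE must be set so the HVC dispatched via the SVC slot lands
 * in a handler executed in Thumb state. */
static uint32_t hsctlr_for_kernel(uint32_t hsctlr, int thumb2_kernel)
{
        return thumb2_kernel ? (hsctlr | HSCTLR_TE) : hsctlr;
}
```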
      
      Fixes: db227c19 ("ARM: 8985/1: efi/decompressor: deal with HYP mode boot gracefully")
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  6. 15 September 2020 (5 commits)
  7. 13 June 2020 (1 commit)
    • ARM: 8985/1: efi/decompressor: deal with HYP mode boot gracefully · db227c19
      Ard Biesheuvel committed
      EFI on ARM only supports short descriptors, and given that it mandates
      that the MMU and caches are on, it is implied that booting in HYP mode
      is not supported.
      
      However, implementations of EFI exist (e.g., U-Boot) that ignore this
      requirement, which is not entirely unreasonable, given that it makes
      HYP mode inaccessible to the operating system.
      
      So let's make sure that we can deal with this condition gracefully.
      We already tolerate booting the EFI stub with the caches off (even
      though this violates the EFI spec as well), and so we should deal
      with HYP mode boot with MMU and caches either on or off.
      
      - When the MMU and caches are on, we can ignore the HYP stub altogether,
        since we can carry on executing at HYP. We do need to ensure that we
        disable the MMU at HYP before entering the kernel proper.
      
      - When the MMU and caches are off, we have to drop to SVC mode so that
        we can set up the page tables using short descriptors. In this case,
        we need to install the HYP stub as usual, so that we can return to HYP
        mode before handing over to the kernel proper.
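      The two paths above can be condensed into a sketch (names hypothetical):

```c
/* Sketch of the two EFI/HYP boot paths described above. */
enum hyp_boot_path {
        STAY_AT_HYP,    /* MMU on: keep running at HYP, turn MMU off there */
        DROP_TO_SVC,    /* MMU off: install HYP stub, build page tables in SVC */
};

static enum hyp_boot_path efi_hyp_boot_path(int mmu_and_caches_on)
{
        return mmu_and_caches_on ? STAY_AT_HYP : DROP_TO_SVC;
}
```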
      Tested-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  8. 20 May 2020 (3 commits)
  9. 19 May 2020 (1 commit)
  10. 14 April 2020 (1 commit)
  11. 29 February 2020 (1 commit)
    • efi/arm: Clean EFI stub exit code from cache instead of avoiding it · 0698fac4
      Ard Biesheuvel committed
      The following commit:
      
        c7225494 ("efi/arm: Work around missing cache maintenance in decompressor handover")
      
      modified the EFI handover code written in assembler to work around the
      missing cache maintenance of the piece of code that is executed after the
      MMU and caches are turned off.
      
      Because this sequence incorporates a subroutine call, cleaning
      that code from the cache is not a matter of simply passing the start and end of
      the currently running subroutine into cache_clean_flush(), which is why
      instead, the code jumps across into the cleaned copy of the image.
      
      However, this assumes that this copy is executable, and this means we
      expect EFI_LOADER_DATA regions to be executable as well, which is not
      a reasonable assumption to make, even if this is true for most UEFI
      implementations today.
      
      So change this back, and add a cache_clean_flush() call to cover the
      remaining code in the subroutine, and any code it may execute in the
      context of cache_off().
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: linux-efi@vger.kernel.org
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Link: https://lore.kernel.org/r/20200228121408.9075-5-ardb@kernel.org
  12. 27 February 2020 (3 commits)
  13. 23 February 2020 (3 commits)
    • efi/libstub/arm: Make efi_entry() an ordinary PE/COFF entrypoint · 9f922377
      Ard Biesheuvel committed
      Expose efi_entry() as the PE/COFF entrypoint directly, instead of
      jumping into a wrapper that fiddles with stack buffers and other
      stuff that the compiler is much better at. The only reason this
      code exists is to obtain a pointer to the base of the image, but
      we can get the same value from the loaded_image protocol, which
      we already need for other reasons anyway.
      
      Update the return type as well, to make it consistent with what
      is required for a PE/COFF executable entrypoint.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • efi/arm: Pass start and end addresses to cache_clean_flush() · e951a1f4
      Ard Biesheuvel committed
      In preparation for turning the decompressor's cache clean/flush
      operations into proper by-VA maintenance for v7 cores, pass the
      start and end addresses of the regions that need cache maintenance
      into cache_clean_flush in registers r0 and r1.
      
      Currently, all implementations of cache_clean_flush ignore these
      values, so no functional change is expected as a result of this
      patch.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • efi/arm: Work around missing cache maintenance in decompressor handover · c7225494
      Ard Biesheuvel committed
      The EFI stub executes within the context of the zImage as it was
      loaded by the firmware, which means it is treated as an ordinary
      PE/COFF executable, which is loaded into memory, and cleaned to
      the PoU to ensure that it can be executed safely while the MMU
      and caches are on.
      
      When the EFI stub hands over to the decompressor, we clean the caches
      by set/way and disable the MMU and D-cache, to comply with the Linux
      boot protocol for ARM. However, cache maintenance by set/way is not
      sufficient to ensure that subsequent instruction fetches and data
      accesses done with the MMU off see the correct data. This means that
      proceeding as we do currently is not safe, especially since we also
      perform data accesses with the MMU off, from a literal pool as well as
      the stack.
      
      So let's kick this can down the road a bit, and jump into the relocated
      zImage before disabling the caches. This removes the requirement to
      perform any by-VA cache maintenance on the original PE/COFF executable,
      but it does require that the relocated zImage is cleaned to the PoC,
      which is currently not the case. This will be addressed in a subsequent
      patch.
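      The set/way-versus-by-VA distinction can be illustrated with a sketch of a
      by-VA clean loop over a range. The 64-byte line size and the function name
      are illustrative (real code reads the line size from CTR and issues a
      clean-by-MVA CP15 operation per line):

```c
#include <stdint.h>

#define CACHE_LINE  64u  /* illustrative; real code queries CTR */

/* By-VA maintenance walks every cache line covering [start, end),
 * which is what guarantees later MMU-off fetches see clean data;
 * set/way operations give no such guarantee on their own. */
static unsigned int clean_range_ops(uint32_t start, uint32_t end)
{
        unsigned int ops = 0;
        uint32_t va;

        for (va = start & ~(CACHE_LINE - 1); va < end; va += CACHE_LINE)
                ops++;  /* one clean-by-MVA per line in a real implementation */
        return ops;
}
```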
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  14. 26 January 2020 (2 commits)
    • ARM: 8942/1: Revert "8857/1: efi: enable CP15 DMB instructions before cleaning the cache" · cf17a1e3
      Ard Biesheuvel committed
      This reverts commit e17b1af9, which is
      no longer necessary now that the v7 specific routines take care not to
      issue CP15 barrier instructions before they are enabled in SCTLR.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 8941/1: decompressor: enable CP15 barrier instructions in v7 cache setup code · 8239fc77
      Ard Biesheuvel committed
      Commit e17b1af9
      
        "ARM: 8857/1: efi: enable CP15 DMB instructions before cleaning the cache"
      
      added some explicit handling of the CP15BEN bit in the SCTLR system
      register, to ensure that CP15 barrier instructions are enabled, even
      if we enter the decompressor via the EFI stub.
      
      However, as it turns out, there are other ways in which we may end up
      using CP15 barrier instructions without them being enabled. I.e., when
      the decompressor startup code skips the cache_on() initially, we end
      up calling cache_clean_flush() with the caches and MMU off, in which
      case the CP15BEN bit in SCTLR may not be programmed either. And in
      fact, cache_on() itself issues CP15 barrier instructions before actually
      enabling them by programming the new SCTLR value (and issuing an ISB).
      
      Since these routines are shared between v7 CPUs and older ones that
      implement the CPUID extension as well, using the ordinary v7 barrier
      instructions in this code is not possible, and so we should enable the
      CP15 ones explicitly before issuing them. Note that a v7 ISB is still
      required between programming the SCTLR register and using the CP15 barrier
      instructions, and we should take care to branch over it if the CP15BEN
      bit is already set, given that in that case, the CPU may not support it.
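      The check can be modeled in C, taking CP15BEN as SCTLR bit 5; the function
      name is illustrative, since the real code is a test-and-branch in the
      decompressor's assembly:

```c
#include <stdint.h>

#define SCTLR_CP15BEN  (1u << 5)  /* CP15 barrier instruction enable */

/* Branch over the SCTLR write (and the v7 ISB that must follow it)
 * when the bit is already set, since in that case the CPU may not
 * support a v7 ISB at all. */
static uint32_t enable_cp15_barriers(uint32_t sctlr)
{
        if (sctlr & SCTLR_CP15BEN)
                return sctlr;           /* already enabled: skip write + ISB */
        return sctlr | SCTLR_CP15BEN;   /* enable before issuing CP15 barriers */
}
```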
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  15. 16 November 2019 (2 commits)
  16. 23 August 2019 (2 commits)
  17. 19 June 2019 (1 commit)
  18. 24 April 2019 (1 commit)
    • ARM: 8857/1: efi: enable CP15 DMB instructions before cleaning the cache · e17b1af9
      Ard Biesheuvel committed
      The EFI stub is entered with the caches and MMU enabled by the
      firmware, and once the stub is ready to hand over to the decompressor,
      we clean and disable the caches.
      
      The cache clean routines use CP15 barrier instructions, which can be
      disabled via SCTLR. Normally, when using the provided cache handling
      routines to enable the caches and MMU, this bit is enabled as well.
      However, since we entered the stub with the caches already enabled,
      this routine is not executed before we call the cache clean routines,
      resulting in undefined instruction exceptions if the firmware never
      enabled this bit.
      
      So set the bit explicitly in the EFI entry code, but do so in a way that
      guarantees that the resulting code can still run on v6 cores as well
      (which are guaranteed to have CP15 barriers enabled).
      
      Cc: <stable@vger.kernel.org> # v4.9+
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  19. 19 September 2018 (1 commit)
  20. 19 May 2018 (2 commits)
  21. 03 October 2017 (1 commit)