1. 17 January 2020: 2 commits
  2. 06 December 2019: 2 commits
  3. 04 December 2019: 1 commit
    • arm64: insn: consistently handle exit text · ca2ef4ff
      Committed by Mark Rutland
      A kernel built with KASAN && FTRACE_WITH_REGS && !MODULES produces a
      boot-time splat in the bowels of ftrace:
      
      | [    0.000000] ftrace: allocating 32281 entries in 127 pages
      | [    0.000000] ------------[ cut here ]------------
      | [    0.000000] WARNING: CPU: 0 PID: 0 at kernel/trace/ftrace.c:2019 ftrace_bug+0x27c/0x328
      | [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.4.0-rc3-00008-g7f08ae53 #13
      | [    0.000000] Hardware name: linux,dummy-virt (DT)
      | [    0.000000] pstate: 60000085 (nZCv daIf -PAN -UAO)
      | [    0.000000] pc : ftrace_bug+0x27c/0x328
      | [    0.000000] lr : ftrace_init+0x640/0x6cc
      | [    0.000000] sp : ffffa000120e7e00
      | [    0.000000] x29: ffffa000120e7e00 x28: ffff00006ac01b10
      | [    0.000000] x27: ffff00006ac898c0 x26: dfffa00000000000
      | [    0.000000] x25: ffffa000120ef290 x24: ffffa0001216df40
      | [    0.000000] x23: 000000000000018d x22: ffffa0001244c700
      | [    0.000000] x21: ffffa00011bf393c x20: ffff00006ac898c0
      | [    0.000000] x19: 00000000ffffffff x18: 0000000000001584
      | [    0.000000] x17: 0000000000001540 x16: 0000000000000007
      | [    0.000000] x15: 0000000000000000 x14: ffffa00010432770
      | [    0.000000] x13: ffff940002483519 x12: 1ffff40002483518
      | [    0.000000] x11: 1ffff40002483518 x10: ffff940002483518
      | [    0.000000] x9 : dfffa00000000000 x8 : 0000000000000001
      | [    0.000000] x7 : ffff940002483519 x6 : ffffa0001241a8c0
      | [    0.000000] x5 : ffff940002483519 x4 : ffff940002483519
      | [    0.000000] x3 : ffffa00011780870 x2 : 0000000000000001
      | [    0.000000] x1 : 1fffe0000d591318 x0 : 0000000000000000
      | [    0.000000] Call trace:
      | [    0.000000]  ftrace_bug+0x27c/0x328
      | [    0.000000]  ftrace_init+0x640/0x6cc
      | [    0.000000]  start_kernel+0x27c/0x654
      | [    0.000000] random: get_random_bytes called from print_oops_end_marker+0x30/0x60 with crng_init=0
      | [    0.000000] ---[ end trace 0000000000000000 ]---
      | [    0.000000] ftrace faulted on writing
      | [    0.000000] [<ffffa00011bf393c>] _GLOBAL__sub_D_65535_0___tracepoint_initcall_level+0x4/0x28
      | [    0.000000] Initializing ftrace call sites
      | [    0.000000] ftrace record flags: 0
      | [    0.000000]  (0)
      | [    0.000000]  expected tramp: ffffa000100b3344
      
      This is due to an unfortunate combination of several factors.
      
      Building with KASAN results in the compiler generating anonymous
      functions to register/unregister global variables against the shadow
      memory. These functions are placed in .text.startup/.text.exit, and
      given mangled names like _GLOBAL__sub_{I,D}_65535_0_$OTHER_SYMBOL. The
      kernel linker script places these in .init.text and .exit.text
      respectively, which are both discarded at runtime as part of initmem.
      
      Building with FTRACE_WITH_REGS uses -fpatchable-function-entry=2, which
      also instruments KASAN's anonymous functions. When these are discarded
      with the rest of initmem, ftrace removes dangling references to these
      call sites.
      
      Building without MODULES implicitly disables STRICT_MODULE_RWX, and
      causes arm64's patch_map() function to treat any !core_kernel_text()
      symbol as something that can be modified in-place. As core_kernel_text()
      is only true for .text and .init.text, with the latter depending on
      system_state < SYSTEM_RUNNING, we'll treat .exit.text as something that
      can be patched in-place. However, .exit.text is mapped read-only.
      
      Hence in this configuration the ftrace init code blows up while trying
      to patch one of the functions generated by KASAN.
      
      We could try to filter out the call sites in .exit.text rather than
      initializing them, but this would be inconsistent with how we handle
      .init.text, and requires hooking into core bits of ftrace. The behaviour
      of patch_map() is also inconsistent today, so instead let's clean that
      up and have it consistently handle .exit.text.
      
      This patch teaches patch_map() to handle .exit.text at init time,
      preventing the boot-time splat above. The flow of patch_map() is
      reworked to make the logic clearer and minimize redundant
      conditionality.
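
      As an illustration only, the reworked flow can be sketched as below.
      This is a minimal sketch rather than the literal patch, and it assumes
      __exittext_begin/__exittext_end are linker-provided bounds of
      .exit.text:

      | /*
      |  * Sketch: treat .exit.text like other kernel image text so that it
      |  * is patched via the fixmap rather than written in place. The real
      |  * code must also stop matching .exit.text once it has been
      |  * discarded along with the rest of initmem.
      |  */
      | static bool is_exit_text(unsigned long addr)
      | {
      |         return addr >= (unsigned long)__exittext_begin &&
      |                addr <  (unsigned long)__exittext_end;
      | }
      |
      | static bool is_image_text(unsigned long addr)
      | {
      |         return core_kernel_text(addr) || is_exit_text(addr);
      | }
      |
      | static void *patch_map(void *addr, int fixmap)
      | {
      |         unsigned long uintaddr = (uintptr_t)addr;
      |         struct page *page;
      |
      |         if (is_image_text(uintaddr))
      |                 page = phys_to_page(__pa_symbol(addr));
      |         else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
      |                 page = vmalloc_to_page(addr);
      |         else
      |                 return addr;    /* plain RW mapping, patch in place */
      |
      |         return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
      |                                          (uintaddr & ~PAGE_MASK));
      | }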
      
      Fixes: 3b23e499 ("arm64: implement ftrace with regs")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Torsten Duwe <duwe@suse.de>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ca2ef4ff
  4. 27 November 2019: 1 commit
  5. 21 November 2019: 1 commit
  6. 12 November 2019: 1 commit
  7. 08 November 2019: 1 commit
  8. 07 November 2019: 1 commit
  9. 06 November 2019: 3 commits
    • arm64: implement ftrace with regs · 3b23e499
      Committed by Torsten Duwe
      This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
      function's arguments (and some other registers) to be captured into a
      struct pt_regs, allowing these to be inspected and/or modified. This is
      a building block for live-patching, where a function's arguments may be
      forwarded to another function. This is also necessary to enable ftrace
      and in-kernel pointer authentication at the same time, as it allows the
      LR value to be captured and adjusted prior to signing.
      
      Using GCC's -fpatchable-function-entry=N option, we can have the
      compiler insert a configurable number of NOPs between the function entry
      point and the usual prologue. This also ensures functions are AAPCS
      compliant (e.g. disabling inter-procedural register allocation).
      
      For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
      following:
      
      | unsigned long bar(void);
      |
      | unsigned long foo(void)
      | {
      |         return bar() + 1;
      | }
      
      ... to:
      
      | <foo>:
      |         nop
      |         nop
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         add     x0, x0, #0x1
      |         ldp     x29, x30, [sp], #16
      |         ret
      
      This patch builds the kernel with -fpatchable-function-entry=2,
      prefixing each function with two NOPs. To trace a function, we replace
      these NOPs with a sequence that saves the LR into a GPR, then calls an
      ftrace entry assembly function which saves this and other relevant
      registers:
      
      | mov	x9, x30
      | bl	<ftrace-entry>
      
      Since patchable functions are AAPCS compliant (and the kernel does not
      use x18 as a platform register), x9-x18 can be safely clobbered in the
      patched sequence and the ftrace entry code.
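
      As an illustration (see the mov/bl sequence above), the two patched
      instructions could be generated with the kernel's instruction encoders
      roughly as follows; this is a sketch, with 'pc' and 'tramp' standing
      in for the callsite and trampoline addresses:

      | u32 mov_insn, bl_insn;
      |
      | /* mov x9, x30 -- preserve the link register in a scratch GPR */
      | mov_insn = aarch64_insn_gen_move_reg(AARCH64_INSN_REG_9,
      |                                      AARCH64_INSN_REG_LR,
      |                                      AARCH64_INSN_VARIANT_64BIT);
      |
      | /* bl <ftrace-entry> -- branch-and-link to the ftrace trampoline */
      | bl_insn = aarch64_insn_gen_branch_imm(pc, tramp,
      |                                       AARCH64_INSN_BRANCH_LINK);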
      
      There are now two ftrace entry functions, ftrace_regs_entry (which saves
      all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
      allocated for each within modules.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      [Mark: rework asm, comments, PLTs, initialization, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Julien Thierry <jthierry@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      3b23e499
    • arm64: insn: add encoder for MOV (register) · e3bf8a67
      Committed by Mark Rutland
      For FTRACE_WITH_REGS, we're going to want to generate a MOV (register)
      instruction as part of the callsite initialization. As MOV (register) is
      an alias for ORR (shifted register), we can generate this with
      aarch64_insn_gen_logical_shifted_reg(), but it's somewhat verbose and
      difficult to read in-context.
      
      Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can
      write callers in a more straightforward way.
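
      A minimal sketch of such a wrapper, assuming MOV Xd, Xm is encoded as
      ORR Xd, XZR, Xm with a zero shift:

      | u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
      |                               enum aarch64_insn_register src,
      |                               enum aarch64_insn_variant variant)
      | {
      |         /* MOV (register) == ORR (shifted register) with XZR, LSL #0 */
      |         return aarch64_insn_gen_logical_shifted_reg(dst,
      |                                                     AARCH64_INSN_REG_ZR,
      |                                                     src, 0, variant,
      |                                                     AARCH64_INSN_LOGIC_ORR);
      | }
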
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      e3bf8a67
    • arm64: mm: Remove MAX_USER_VA_BITS definition · 218564b1
      Committed by Bhupesh Sharma
      commit 9b31cf49 ("arm64: mm: Introduce MAX_USER_VA_BITS definition")
      introduced the MAX_USER_VA_BITS definition, which was used to support
      the arm64 mm use-cases where the user-space could use 52-bit virtual
      addresses whereas the kernel-space was still limited to a maximum of 48-bit
      virtual addressing.
      
      But, now with commit b6d00d47 ("arm64: mm: Introduce 52-bit Kernel
      VAs"), we removed the 52-bit user/48-bit kernel kconfig option and hence
      there is no longer any scenario where user VA != kernel VA size
      (even with CONFIG_ARM64_FORCE_52BIT enabled, the same is true).
      
      Hence we can do away with the MAX_USER_VA_BITS macro as it is equal to
      VA_BITS (maximum VA space size) in all possible use-cases. Note that
      even though the 'vabits_actual' value would be 48 on arm64 hardware that
      does not support the ARMv8.2 LVA extension (even when CONFIG_ARM64_VA_BITS_52
      is enabled), VA_BITS would still be set to 52. Hence this change
      would be safe in all possible VA address space combinations.
      
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: linux-kernel@vger.kernel.org
      Cc: kexec@lists.infradead.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      218564b1
  10. 05 November 2019: 1 commit
    • timekeeping/vsyscall: Update VDSO data unconditionally · 52338415
      Committed by Huacai Chen
      The update of the VDSO data depends on __arch_use_vsyscall() returning
      true. This is a leftover from the attempt to map the features of various
      architectures 1:1 into generic code.
      
      The usage of __arch_use_vsyscall() in the actual vsyscall implementations
      got dropped and replaced by the requirement for the architecture code to
      return U64_MAX if the global clocksource is not usable in the VDSO.
      
      But the __arch_use_vsyscall() check in the update code stayed, which causes
      the VDSO data to be stale or invalid when an architecture actually
      implements that function and returns false when the current clocksource is
      not usable in the VDSO.
      
      As a consequence the VDSO implementations of clock_getres(), time(),
      clock_gettime(CLOCK_.*_COARSE) operate on invalid data and return bogus
      information.
      
      Remove the __arch_use_vsyscall() check from the VDSO update function and
      update the VDSO data unconditionally.
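
      Schematically, the resulting update path looks like the following (a
      rough sketch with a simplified helper name, not the literal source):

      | void update_vsyscall(struct timekeeper *tk)
      | {
      |         struct vdso_data *vdata = __arch_get_k_vdso_data();
      |
      |         /*
      |          * No __arch_use_vsyscall() gate any more: the VDSO data pages
      |          * are refreshed on every timekeeping update. An architecture
      |          * whose clocksource cannot be used from the VDSO instead
      |          * returns U64_MAX from its cycle reader, making the VDSO fall
      |          * back to the real syscall.
      |          */
      |         fill_vdso_pages(vdata, tk);     /* hypothetical helper */
      | }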
      
      [ tglx: Massaged changelog and removed the now useless implementations in
        	asm-generic/ARM64/MIPS ]
      
      Fixes: 44f57d78 ("timekeeping: Provide a generic update_vsyscall() implementation")
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: linux-mips@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/1571887709-11447-1-git-send-email-chenhc@lemote.com
      52338415
  11. 01 November 2019: 2 commits
  12. 30 October 2019: 1 commit
    • arm64: Ensure VM_WRITE|VM_SHARED ptes are clean by default · aa57157b
      Committed by Catalin Marinas
      Shared and writable mappings (__S.1.) should be clean (!dirty) initially
      and made dirty on a subsequent write either through the hardware DBM
      (dirty bit management) mechanism or through a write page fault. A clean
      pte for the arm64 kernel is one that has PTE_RDONLY set and PTE_DIRTY
      clear.
      
      The PAGE_SHARED{,_EXEC} attributes have PTE_WRITE set (PTE_DBM) and
      PTE_DIRTY clear. Prior to commit 73e86cb0 ("arm64: Move PTE_RDONLY
      bit handling out of set_pte_at()"), it was the responsibility of
      set_pte_at() to set the PTE_RDONLY bit and mark the pte clean if the
      software PTE_DIRTY bit was not set. However, the above commit removed
      the pte_sw_dirty() check and the subsequent setting of PTE_RDONLY in
      set_pte_at() while leaving the PAGE_SHARED{,_EXEC} definitions
      unchanged. The result is that shared+writable mappings are now dirty by
      default.
      
      Fix the above by explicitly setting PTE_RDONLY in PAGE_SHARED{,_EXEC}.
      In addition, remove the superfluous PTE_DIRTY bit from the kernel PROT_*
      attributes.
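
      Conceptually, the fixed shared/writable protections end up looking
      roughly like this (a sketch based on the description above, not
      necessarily the exact arm64 definitions):

      | /*
      |  * Clean: PTE_RDONLY set and PTE_DIRTY clear. PTE_WRITE (the DBM bit)
      |  * stays set so the pte can become dirty later via hardware DBM or a
      |  * write fault.
      |  */
      | #define PAGE_SHARED      __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | \
      |                                   PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
      | #define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | \
      |                                   PTE_NG | PTE_PXN | PTE_WRITE)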
      
      Fixes: 73e86cb0 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
      Cc: <stable@vger.kernel.org> # 4.14.x-
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      aa57157b
  13. 29 October 2019: 1 commit
    • KVM: arm64: Don't set HCR_EL2.TVM when S2FWB is supported · 5c401308
      Committed by Christoffer Dall
      On CPUs that support S2FWB (Armv8.4+), KVM configures the stage 2 page
      tables to override the memory attributes of memory accesses, regardless
      of the stage 1 page table configurations, and also when the stage 1 MMU
      is turned off.  This results in all memory accesses to RAM being
      cacheable, including during early boot of the guest.
      
      On CPUs without this feature, memory accesses were non-cacheable during
      boot until the guest turned on the stage 1 MMU, and we had to detect
      when the guest turned on the MMU, such that we could invalidate all cache
      entries and ensure a consistent view of memory with the MMU turned on.
      When the guest turned on the caches, we would call stage2_flush_vm()
      from kvm_toggle_cache().
      
      However, stage2_flush_vm() walks all the stage 2 tables, and calls
      __kvm_flush_dcache_pte, which on a system with S2FWB does ... absolutely
      nothing.
      
      We can avoid that whole song and dance, and simply not set TVM when
      creating a VM on a system that has S2FWB.
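
      As a rough sketch (not the literal patch), the HCR_EL2 setup then
      becomes:

      | /* in the vcpu HCR_EL2 reset path */
      | if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) {
      |         /*
      |          * S2FWB forces RAM accesses to be cacheable even before the
      |          * guest enables its stage 1 MMU, so there is no need to trap
      |          * VM register writes.
      |          */
      |         vcpu->arch.hcr_el2 |= HCR_FWB;
      | } else {
      |         /*
      |          * Trap VM ops so we can spot the guest turning on its MMU and
      |          * do the required cache maintenance at that point.
      |          */
      |         vcpu->arch.hcr_el2 |= HCR_TVM;
      | }
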
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20191028130541.30536-1-christoffer.dall@arm.com
      5c401308
  14. 28 October 2019: 6 commits
  15. 26 October 2019: 2 commits
  16. 25 October 2019: 1 commit
  17. 22 October 2019: 5 commits
    • arm64: Retrieve stolen time as paravirtualized guest · e0685fa2
      Committed by Steven Price
      Enable paravirtualization features when running under a hypervisor
      supporting the PV_TIME_ST hypercall.
      
      For each (v)CPU, we ask the hypervisor for the location of a shared
      page which the hypervisor will use to report stolen time to us. We set
      pv_time_ops to the stolen time function which simply reads the stolen
      value from the shared page for a VCPU. We guarantee single-copy
      atomicity using READ_ONCE which means we can also read the stolen
      time for another VCPU than the currently running one while it is
      potentially being updated by the hypervisor.
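
      A sketch of the guest-side read path (the per-CPU bookkeeping names
      here are illustrative):

      | static u64 para_steal_clock(int cpu)
      | {
      |         struct pv_time_stolen_time_region *reg;
      |
      |         reg = per_cpu_ptr(&stolen_time_region, cpu);
      |         if (!reg->kaddr)        /* no shared page for this vCPU */
      |                 return 0;
      |
      |         /* single-copy atomic 64-bit read of the hypervisor's value */
      |         return le64_to_cpu(READ_ONCE(reg->kaddr->stolen_time));
      | }
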
      Signed-off-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      e0685fa2
    • KVM: arm64: Provide VCPU attributes for stolen time · 58772e9a
      Committed by Steven Price
      Allow user space to inform the KVM host where in the physical memory
      map the paravirtualized time structures should be located.
      
      User space can set an attribute on the VCPU providing the IPA base
      address of the stolen time structure for that VCPU. This must be
      repeated for every VCPU in the VM.
      
      The address is given in terms of the physical address visible to
      the guest and must be 64 byte aligned. The guest will discover the
      address via a hypercall.
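
      For illustration, a VMM could configure this per VCPU roughly as
      follows (a sketch using the attribute group added by this series):

      | __u64 ipa = pvtime_base + vcpu_idx * 64;  /* 64-byte aligned guest IPA */
      | struct kvm_device_attr attr = {
      |         .group = KVM_ARM_VCPU_PVTIME_CTRL,
      |         .attr  = KVM_ARM_VCPU_PVTIME_IPA,
      |         .addr  = (__u64)&ipa,
      | };
      |
      | /* repeat for every VCPU in the VM */
      | if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
      |         err(1, "KVM_SET_DEVICE_ATTR(PVTIME_IPA)");
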
      Signed-off-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      58772e9a
    • KVM: arm64: Support stolen time reporting via shared structure · 8564d637
      Committed by Steven Price
      Implement the service call for configuring a shared structure between a
      VCPU and the hypervisor in which the hypervisor can write the time
      stolen from the VCPU's execution time by other tasks on the host.
      
      User space allocates memory which is placed at an IPA also chosen by user
      space. The hypervisor then updates the shared structure using
      kvm_put_guest() to ensure single copy atomicity of the 64-bit value
      reporting the stolen time in nanoseconds.
      
      Whenever stolen time is enabled by the guest, the stolen time counter is
      reset.
      
      The stolen time itself is retrieved from the sched_info structure
      maintained by the Linux scheduler code. We enable SCHEDSTATS when
      selecting KVM in Kconfig to ensure this value is meaningful.
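
      The shared record itself is laid out roughly as below, per the
      paravirtualized time ABI (shown here as a sketch):

      | struct pvclock_vcpu_stolen_time {
      |         __le32 revision;
      |         __le32 attributes;
      |         __le64 stolen_time;     /* total stolen time, in nanoseconds */
      |         u8 padding[48];         /* pad the record to 64 bytes */
      | } __packed;
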
      Signed-off-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      8564d637
    • KVM: arm64: Implement PV_TIME_FEATURES call · b48c1a45
      Committed by Steven Price
      This provides a mechanism for querying which paravirtualized time
      features are available in this hypervisor.
      
      Also add the header file which defines the ABI for the paravirtualized
      time features we're about to add.
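
      For illustration, a guest could probe for stolen time support roughly
      like this (a sketch using the SMCCC 1.1 helpers):

      | struct arm_smccc_res res;
      |
      | /* ask which PV time features the hypervisor implements */
      | arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_TIME_FEATURES,
      |                      ARM_SMCCC_HV_PV_TIME_ST, &res);
      | if (res.a0 != SMCCC_RET_SUCCESS)
      |         return false;   /* stolen time reporting not available */
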
      Signed-off-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      b48c1a45
    • KVM: arm/arm64: Allow reporting non-ISV data aborts to userspace · c726200d
      Committed by Christoffer Dall
      For a long time, if a guest accessed memory outside of a memslot using
      any of the load/store instructions in the architecture which doesn't
      supply decoding information in the ESR_EL2 (the ISV bit is not set), the
      kernel would print the following message and terminate the VM as a
      result of returning -ENOSYS to userspace:
      
        load/store instruction decoding not implemented
      
      The reason behind this message is that KVM assumes that all accesses
      outside a memslot are MMIO accesses which should be handled by
      userspace, and we originally expected to eventually implement some sort
      of decoding of load/store instructions where the ISV bit was not set.
      
      However, it turns out that many of the instructions which don't provide
      decoding information on abort are not safe to use for MMIO accesses, and
      the remaining few that would potentially make sense to use on MMIO
      accesses, such as those with register writeback, are not used in
      practice.  It also turns out that fetching an instruction from guest
      memory can be a pretty horrible affair, involving stopping all CPUs on
      SMP systems, handling multiple corner cases of address translation in
      software, and more.  It doesn't appear likely that we'll ever implement
      this in the kernel.
      
      What is much more common is that a user has misconfigured his/her guest
      and is actually not accessing an MMIO region, but just hitting some
      random hole in the IPA space.  In this scenario, the error message above
      is almost misleading and has led to a great deal of confusion over the
      years.
      
      It is, nevertheless, ABI to userspace, and we therefore need to
      introduce a new capability that userspace explicitly enables to change
      behavior.
      
      This patch introduces KVM_CAP_ARM_NISV_TO_USER (NISV meaning Non-ISV)
      which does exactly that, and introduces a new exit reason to report the
      event to userspace.  User space can then emulate an exception to the
      guest, restart the guest, suspend the guest, or take any other
      appropriate action as per the policy of the running system.
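
      From userspace, using the new capability might look roughly like this
      (a sketch; emulate_or_inject_abort() is a hypothetical VMM helper):

      | struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_NISV_TO_USER };
      |
      | ioctl(vm_fd, KVM_ENABLE_CAP, &cap);     /* opt in, once per VM */
      |
      | /* ... in the vcpu run loop ... */
      | if (run->exit_reason == KVM_EXIT_ARM_NISV) {
      |         /*
      |          * run->arm_nisv.esr_iss   - syndrome bits, without ISV info
      |          * run->arm_nisv.fault_ipa - faulting guest physical address
      |          */
      |         emulate_or_inject_abort(run);   /* policy is up to the VMM */
      | }
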
      Reported-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Reviewed-by: Alexander Graf <graf@amazon.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      c726200d
  18. 18 October 2019: 3 commits
  19. 17 October 2019: 2 commits
  20. 15 October 2019: 1 commit
    • arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear · f2266504
      Committed by Marc Zyngier
      The GICv3 architecture specification is incredibly misleading when it
      comes to PMR and the requirement for a DSB. It turns out that this DSB
      is only required if the CPU interface sends an Upstream Control
      message to the redistributor in order to update the RD's view of PMR.
      
      This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't
      the case in Linux. It can still be set from EL3, so some special care
      is required. But the upshot is that in the (hopefully large) majority
      of the cases, we can drop the DSB altogether.
      
      This relies on a new static key being set if the boot CPU has PMHE
      set. The drawback is that this static key has to be exported to
      modules.
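
      The resulting pattern is roughly the following (an illustrative
      sketch, not the literal patch):

      | DEFINE_STATIC_KEY_FALSE(gic_pmr_sync);  /* set if ICC_CTLR_EL1.PMHE */
      | EXPORT_SYMBOL(gic_pmr_sync);
      |
      | static inline void pmr_sync(void)
      | {
      |         /* only pay for the DSB when PMR writes must reach the RD */
      |         if (static_branch_unlikely(&gic_pmr_sync))
      |                 dsb(sy);
      | }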
      
      Cc: Will Deacon <will@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f2266504
  21. 14 October 2019: 2 commits
    • arm64: use both ZONE_DMA and ZONE_DMA32 · 1a8e1cef
      Committed by Nicolas Saenz Julienne
      So far all arm64 devices have supported 32 bit DMA masks for their
      peripherals. This is not true anymore for the Raspberry Pi 4, as most of
      its peripherals can only address the first GB of memory out of a total of
      up to 4 GB.
      
      This goes against ZONE_DMA32's intent, as it's expected for ZONE_DMA32
      to be addressable with a 32 bit mask. So it was decided to re-introduce
      ZONE_DMA in arm64.
      
      ZONE_DMA will contain the lower 1G of memory, which is currently the
      memory area addressable by any peripheral on an arm64 device.
      ZONE_DMA32 will contain the rest of the 32 bit addressable memory.
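
      Schematically, the zone limits described above could be set up as
      below; this is a simplified sketch, where max_zone_phys() stands for a
      helper clamping the zone end to the memory actually present:

      | max_zone_pfns[ZONE_DMA]    = PFN_DOWN(max_zone_phys(30)); /* 1 GB */
      | max_zone_pfns[ZONE_DMA32]  = PFN_DOWN(max_zone_phys(32)); /* 4 GB */
      | max_zone_pfns[ZONE_NORMAL] = max_pfn;
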
      Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1a8e1cef
    • arm64: simplify syscall wrapper ifdeffery · ce87de45
      Committed by Mark Rutland
      Back in commit:
      
        4378a7d4 ("arm64: implement syscall wrappers")
      
      ... I implemented the arm64 syscall wrapper glue following the approach
      taken on x86. While doing so, I also copied across some ifdeffery that
      isn't necessary on arm64.
      
      On arm64 we don't share any of the native wrappers with compat tasks,
      and unlike x86 we don't have alternative implementations of
      SYSCALL_DEFINE0(), COND_SYSCALL(), or SYS_NI() defined when AArch32
      compat support is enabled.
      
      Thus we don't need to prevent multiple definitions of these macros, and
      can remove the #ifndef ... #endif guards protecting them. If any of
      these had been previously defined elsewhere, syscalls are unlikely to
      work correctly, and we'd want the compiler to warn about the multiple
      definitions.
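
      The change is of the following shape (a simplified sketch, shown for
      COND_SYSCALL only):

      | /* before: silently tolerates a definition from elsewhere */
      | #ifndef COND_SYSCALL
      | #define COND_SYSCALL(name) /* arm64-specific expansion */
      | #endif
      |
      | /* after: defined unconditionally, so an unexpected redefinition is
      |  * reported by the compiler instead of being silently masked */
      | #define COND_SYSCALL(name) /* arm64-specific expansion */
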
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ce87de45