- 29 October 2015, 3 commits
-
Committed by Alexander Kuleshov
<linux/mm.h> already provides the PAGE_ALIGNED macro. Let's use this macro instead of calling IS_ALIGNED with PAGE_SIZE passed in directly. Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com> Acked-by: Laura Abbott <laura@labbott.name> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
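A minimal sketch of the substitution described above; the wrapper function and call site are illustrative, not the actual arm64 code touched by the patch:

    #include <linux/errno.h>
    #include <linux/mm.h>

    /* Illustrative call site: reject a buffer that is not page aligned. */
    static int check_region(const void *addr)
    {
        /* Before: open-coded alignment check */
        if (!IS_ALIGNED((unsigned long)addr, PAGE_SIZE))
            return -EINVAL;

        /* After: the equivalent helper already provided by <linux/mm.h> */
        if (!PAGE_ALIGNED(addr))
            return -EINVAL;

        return 0;
    }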
-
Committed by Will Deacon
test_bit and set_bit take the bit number to operate on, rather than a mask. This patch fixes the ICACHEF_* definitions so that they represent the bit index in __icache_flags, as opposed to the mask returned by the BIT macro. Signed-off-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
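A sketch of the bug class being fixed, with an illustrative flag name: set_bit()/test_bit() expect a bit index, so defining the flag with BIT() makes them operate on the wrong bit.

    #include <linux/bitops.h>
    #include <linux/types.h>

    static unsigned long __icache_flags;

    /* Wrong: BIT(1) == 2, so set_bit()/test_bit() would touch bit 2, not bit 1. */
    /* #define ICACHEF_EXAMPLE    BIT(1) */

    /* Right: a plain bit index. */
    #define ICACHEF_EXAMPLE    1

    static void mark_flag(void)
    {
        set_bit(ICACHEF_EXAMPLE, &__icache_flags);
    }

    static bool flag_is_set(void)
    {
        return test_bit(ICACHEF_EXAMPLE, &__icache_flags);
    }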
-
Committed by Will Deacon
enable_cpu_capabilities is only called from within cpufeature.c, so it can be declared static. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 October 2015, 1 commit
-
Committed by Catalin Marinas (merge from git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip)
This is an incremental fix for a patch previously pulled from the tip irq/for-arm branch. * 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: genirq: Make the cpuhotplug migration code less noisy
-
- 22 October 2015, 1 commit
-
Committed by Thomas Gleixner
The original arm code has a pr_debug() statement for the case where the irq chip has no set_affinity() callback. That's sufficient for debugging, and we really don't want to spam dmesg with useless warnings for the normal case. Fixes: f1e0bb0a: "genirq: Introduce generic irq migration for cpu hotunplug" Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Requested-by: Russell King <linux@arm.linux.org.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Yang Yingliang <yangyingliang@huawei.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Hanjun Guo <hanjun.guo@linaro.org> Cc: Jiang Liu <jiang.liu@linux.intel.com>
-
- 21 October 2015, 19 commits
-
Committed by Dave Martin
The hwcap string arrays used for generating the contents of /proc/cpuinfo are currently arrays of non-const pointers. There's no need for these pointers to be mutable, so this patch makes them const so that they can be moved to .rodata. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
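A sketch of the change with illustrative entries: both the pointer array and the strings it points to are const, so the whole table becomes eligible for .rodata.

    /* Before: mutable pointers, placed in a writable data section. */
    static const char *hwcap_str_before[] = { "fp", "asimd", NULL };

    /* After: const pointers to const strings, so the array can live in .rodata. */
    static const char *const hwcap_str_after[] = { "fp", "asimd", NULL };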
-
Committed by Suzuki K. Poulose
Use the system-wide safe value from the new API for safer decisions. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Use the system-wide value of ID_AA64DFR0 to make safer decisions. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
FP/ASIMD support is detected in fpsimd_init(), which is built in unconditionally. Let's move the hwcap handling to the central place. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Extend struct arm64_cpu_capabilities to handle HWCAP detection, and make use of the system-wide value of the feature registers for a reliable set of HWCAPs. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Now that we can reliably read the system-wide safe value of a feature register, use that to compute the system capability. This patch also replaces the feature-register-specific methods with a generic routine to check the capability. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
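A simplified sketch of what the generic routine does: a capability names the ID register, the field position and the minimum field value it needs, and a single helper compares that against the system-wide safe value. The structure below is a deliberately reduced stand-in (the real struct arm64_cpu_capabilities carries more fields), and read_system_reg() is the system-wide read API added elsewhere in this series.

    #include <linux/types.h>
    #include <asm/cpufeature.h>

    struct cap_desc {                       /* reduced stand-in, not the kernel struct */
        const char *desc;
        u32 sys_reg;                        /* which ID register to consult */
        int field_pos;                      /* bit position of the feature field */
        int min_field_value;                /* minimum value implying the capability */
    };

    static bool has_cpuid_feature_sketch(const struct cap_desc *cap)
    {
        u64 safe = read_system_reg(cap->sys_reg);   /* system-wide safe value */

        return cpuid_feature_extract_field(safe, cap->field_pos) >= cap->min_field_value;
    }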
-
Committed by Suzuki K. Poulose
At the moment we run through the arm64_features capability list for each CPU and set the capability if one of the CPUs supports it. This could be problematic in a heterogeneous system with differing capabilities. Delay the CPU feature checks until all the enabled CPUs are up (i.e. at smp_cpus_done()), so that we can make better decisions based on the overall system capability. Once we decide and advertise the capabilities, the alternatives can be applied. From this state, we cannot roll back a feature to disabled based on the values from a newly hotplugged CPU, due to the runtime patching and other reasons. So, for all new CPUs, we need to make sure that they have the established system capabilities; failing that, we bring the CPU down, preventing it from coming online. Once the capabilities are decided, any new CPU booting up goes through verification to ensure that it has all the enabled capabilities, and also invokes the respective enable() method on the CPU. The CPU errata checks are not delayed and are still executed per CPU to detect the respective capabilities. If we ever come across a non-errata capability that needs to be checked on each CPU, we could introduce it via a new capability table (or introduce a flag), which can be processed per CPU. The next patch will make the feature checks use the system-wide safe value of a feature register. NOTE: the enable() method associated with a capability is scheduled on all the CPUs (which is the only use case at the moment). If we need a different type of enable() which only needs to be run once on any CPU, we should be able to handle that when needed. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> [catalin.marinas@arm.com: static variable and coding style fixes] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
check_cpu_capabilities runs through a given list of caps, checks if the system has each cap, updates the system capability bitmap and also runs any enable() methods associated with them. All of this is not quite obvious from the name 'check'. This patch splits check_cpu_capabilities into two parts (see the sketch after this entry):
1) update_cpu_capabilities => runs through the given list and updates the system-wide capability map.
2) enable_cpu_capabilities => runs through the given list and invokes enable() (if any) for the caps enabled on the system.
Cc: Andre Przywara <andre.przywara@arm.com> Cc: Will Deacon <will.deacon@arm.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
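A minimal sketch of the resulting split; the list-walking details, termination condition and enable() signature are simplified relative to the real cpufeature.c code.

    #include <asm/cpufeature.h>

    static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
    {
        for (; caps->matches; caps++)
            if (caps->matches(caps))
                cpus_set_cap(caps->capability);   /* record it in the system map */
    }

    static void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
    {
        for (; caps->matches; caps++)
            if (cpus_have_cap(caps->capability) && caps->enable)
                caps->enable(NULL);               /* per-capability enable hook; argument unused in this sketch */
    }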
-
Committed by Suzuki K. Poulose
Make use of the system-wide safe register value to decide the support for mixed endian. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Add an API for reading the system-wide safe CPUID value from the new infrastructure. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
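A usage sketch, assuming the API from this series is named read_system_reg(): callers ask for the sanitised system-wide value instead of reading the ID register on whichever CPU they happen to run on. The register constant and field shift below only illustrate how a consumer might use the value.

    #include <linux/types.h>
    #include <asm/cpufeature.h>
    #include <asm/sysreg.h>

    /* Example consumer: report the system-wide BigEndEL0 support level. */
    static int system_bigend_el0_field(void)
    {
        u64 mmfr0 = read_system_reg(SYS_ID_AA64MMFR0_EL1);  /* safe value, not per-CPU */

        return cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT);
    }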
-
Committed by Suzuki K. Poulose
This patch consolidates the CPU sanity check into the new infrastructure. Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
This patch adds an infrastructure to keep track of the CPU feature registers on the system. For each register, the infrastructure keeps track of the system-wide safe value of the feature bits. It also tracks which fields of a register must match strictly across all the CPUs on the system for the SANITY check infrastructure. The feature bits are classified into the following three types, depending on the implication of the possible values; this information is used to decide the safe value for a feature (see the sketch after this entry):
LOWER_SAFE - the smaller value is safer
HIGHER_SAFE - the bigger value is safer
EXACT - we can't decide between the two, so a predefined safe_value is used
This infrastructure will later be used to make better decisions for:
- kernel features (e.g. KVM, Debug)
- SANITY check
- CPU capability
- ELF HWCAP
- exposing CPU feature registers to userspace
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> [catalin.marinas@arm.com: whitespace fix] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
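A sketch, with simplified names, of how a system-wide safe field value can be accumulated as each CPU's register is read; the real code drives this from per-register tables describing every field. The enum constants mirror the classification above.

    #include <linux/kernel.h>
    #include <linux/types.h>

    enum ftr_type { FTR_EXACT, FTR_LOWER_SAFE, FTR_HIGHER_SAFE };

    /* Merge one CPU's field value ('new') into the running safe value ('cur'). */
    static s64 pick_safe_field(enum ftr_type type, s64 new, s64 cur, s64 safe_default)
    {
        switch (type) {
        case FTR_LOWER_SAFE:    /* the smaller value is safer */
            return min(new, cur);
        case FTR_HIGHER_SAFE:   /* the bigger value is safer */
            return max(new, cur);
        case FTR_EXACT:         /* values must agree, else fall back to the default */
            return new == cur ? cur : safe_default;
        }
        return safe_default;
    }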
-
Committed by Suzuki K. Poulose
Introduce a helper to extract a cpuid feature field of any given width. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
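The helper amounts to a sign-extending field extraction; the sketch below mirrors the shape of the code this patch adds (4 bits being the common ID register field width).

    #include <linux/types.h>

    /* Extract the signed field of 'width' bits starting at bit position 'field'. */
    static inline int cpuid_feature_extract_field_width(u64 features, int field, int width)
    {
        return (s64)(features << (64 - width - field)) >> (64 - width);
    }

    /* Most ID register fields are 4 bits wide. */
    static inline int cpuid_feature_extract_field(u64 features, int field)
    {
        return cpuid_feature_extract_field_width(features, field, 4);
    }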
-
Committed by Suzuki K. Poulose
This patch moves the /proc/cpuinfo handling code from arch/arm64/kernel/setup.c to cpuinfo.c. No functional changes. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Move the mixed endian support detection code from cpuinfo.c to cpufeature.c. This also moves update_cpu_features(), used by the mixed endian detection code, which will gain more functionality later. Also move the ID register field shifts to asm/sysreg.h, where all the useful definitions will end up in later patches. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
This patch moves the CPU feature detection code from arch/arm64/kernel/setup.c to cpufeature.c. The plan is to consolidate all the CPU feature handling in cpufeature.c. Apart from changing pr_fmt from "alternatives" to "cpu features", there are no functional changes. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
At the moment the boot CPU stores its cpuinfo long before the PERCPU areas are initialised by the kernel. This could be problematic, as the non-boot CPU data structures might get copied with the data from the boot CPU, giving us no chance to detect whether a particular CPU updated its cpuinfo. This patch delays the boot CPU store to smp_prepare_boot_cpu(). It also kills setup_processor(), which no longer does meaningful work. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Delay the ELF HWCAP initialisation until all the (enabled) CPUs are up, i.e. at smp_cpus_done(). This is in preparation for detecting the common features across the CPUs and creating a consistent ELF HWCAP for the system. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
At early boot, we print the CPU version/revision. On a heterogeneous system, we could have different types of CPUs, so print the CPU info for all active CPUs; secondary CPUs print the message only when they come online. Also remove the redundant 'revision' information, which doesn't make any sense without the 'variant' field. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 October 2015, 15 commits
-
Committed by Catalin Marinas
Commit 21539939 (arm64: 36 bit VA) introduced 36-bit VA support for the arm64 kernel when the 16KB page configuration is enabled. While this is a valid hardware configuration, it's not something we want to encourage since it reduces the memory (and I/O) range that the kernel can access. Make this depend on EXPERT to avoid complaints about Linux not mapping the whole RAM, especially on platforms following the ARM recommended memory map. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jungseok Lee
Unlike perf callchain, which relies on walk_stackframe(), dump_backtrace() has its own backtrace logic. A major difference between them is the moment a symbol is recorded: perf writes down a symbol *before* calling unwind_frame(), but dump_backtrace() prints it out *after* unwind_frame(). As a result, the last valid symbol cannot be hooked in the dump_backtrace() case. This patch addresses the issue by synchronising dump_backtrace() with the perf callchain. A simple test and its results are as follows:
- crash trigger: $ sudo echo c > /proc/sysrq-trigger
- current status, Call trace:
[<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
[<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
[<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
[<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
[<fffffe00001f2638>] __vfs_write+0x44/0x104
[<fffffe00001f2e60>] vfs_write+0x98/0x1a8
[<fffffe00001f3730>] SyS_write+0x50/0xb0
- with this change, Call trace:
[<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
[<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
[<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
[<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
[<fffffe00001f2638>] __vfs_write+0x44/0x104
[<fffffe00001f2e60>] vfs_write+0x98/0x1a8
[<fffffe00001f3730>] SyS_write+0x50/0xb0
[<fffffe00000939ec>] el0_svc_naked+0x20/0x28
Note that this patch does not cover the case where the MMU is disabled. The last stack frame of swapper, for example, has a PC in the form of a physical address. Unfortunately, a simple conversion using phys_to_virt() cannot cover all scenarios, since PC is retrieved from LR - 4, not LR. It is a big tradeoff to change both head.S and unwind_frame() for only a few symbols in *.S files, so this hunk does not take care of that case. Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: James Morse <james.morse@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
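A much-simplified sketch of the ordering change in the dump_backtrace() loop: the current frame's PC is recorded before unwind_frame() is called, matching what the perf callchain walker does, so the final valid frame (el0_svc_naked above) is no longer dropped. The printing helper and loop details here are illustrative, not the exact traps.c code.

    #include <linux/kallsyms.h>
    #include <asm/stacktrace.h>

    static void dump_backtrace_sketch(struct stackframe *frame)
    {
        while (1) {
            /* Record the symbol for the current frame first ... */
            print_ip_sym(frame->pc);

            /* ... then try to unwind; stop once there is nothing left to walk. */
            if (unwind_frame(frame) < 0)
                break;
        }
    }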
-
Committed by Jisheng Zhang
Currently, if cpuidle is disabled or not supported, powertop reports zero wakeups and zero events. This is because the cpu_idle tracepoints are missing. This patch makes the cpu_idle tracepoints always available, even if cpuidle is disabled or not supported. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
A 36-bit VA lets us use 2-level page tables while limiting the available address space to 64GB. Cc: Will Deacon <will.deacon@arm.com> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
This patch turns on 16K page support in the kernel. We support 48-bit VA (4-level page tables) and 47-bit VA (3-level page tables). With 16K pages we can map 128 entries using the contiguous bit hint at level 3, covering 2M with a single TLB entry. TODO: 16K supports 32 contiguous entries at level 2 to get us 1G (which is not yet supported by the infrastructure); that should be a separate patch altogether. Cc: Will Deacon <will.deacon@arm.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
This patch adds the page size to the arm64 kernel image header so that one can infer the page size used by the kernel. This will help diagnose failures to boot the kernel with a page size not supported by the CPU. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Ensure that the selected page size is supported by the CPU(s); if it isn't, park the CPU. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
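The actual check is performed in the early assembly boot path before the MMU is enabled; the C sketch below only illustrates the decision being made, reading ID_AA64MMFR0_EL1 and inspecting the translation-granule field for the configured page size. The field positions and encodings are quoted from memory and should be treated as illustrative.

    #include <linux/types.h>
    #include <asm/cputype.h>

    static bool cpu_supports_configured_granule(void)
    {
        u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);

    #if defined(CONFIG_ARM64_16K_PAGES)
        return ((mmfr0 >> 20) & 0xf) == 0x1;    /* TGran16: 1 means implemented */
    #elif defined(CONFIG_ARM64_64K_PAGES)
        return ((mmfr0 >> 24) & 0xf) == 0x0;    /* TGran64: 0 means implemented */
    #else
        return ((mmfr0 >> 28) & 0xf) == 0x0;    /* TGran4:  0 means implemented */
    #endif
    }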
-
Committed by Suzuki K. Poulose
Update the help text for ARM64_64K_PAGES to reflect the reality of AArch32 support. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
We choose NR_FIX_BTMAPS such that each slot (NR_FIX_BTMAPS * PAGE_SIZE) covers 256K. Use division to derive NR_FIX_BTMAPS rather than defining it separately for each page size. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
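The resulting definition is a single division, so the slot size stays 256K whatever the page size (64 maps with 4K pages, 16 with 16K, 4 with 64K); a sketch:

    #include <linux/sizes.h>
    #include <asm/page.h>

    /* Each early-ioremap slot spans NR_FIX_BTMAPS pages, i.e. 256K. */
    #define NR_FIX_BTMAPS   (SZ_256K / PAGE_SIZE)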
-
Committed by Suzuki K. Poulose
We use !CONFIG_ARM64_64K_PAGES to mean CONFIG_ARM64_4K_PAGES (and vice versa) in code. That worked well so far, since we only had two options. Now, with the introduction of 16K pages, these cases break. This patch cleans up the code to use the required CONFIG symbol expressions, without the assumption that !64K => 4K (and vice versa). Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
At the moment, we only support a maximum of three page table levels for the swapper. With a 48-bit VA, 64K pages need only 3 levels and 4K pages use section mapping. Add support for a 4-level page table for the swapper, needed by 16K pages. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Now that we can calculate the number of levels required for mapping a VA width, reserve the exact number of pages that would be required to cover the idmap. The idmap should be able to handle the maximum physical address size supported. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Introduce helpers for finding the number of page table levels required for a given VA width, and the shift for a particular page table level. Convert the existing users to the new helpers; more users to follow. Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
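The helpers reduce to arithmetic on PAGE_SHIFT: each translation level resolves PAGE_SHIFT - 3 bits of VA, since a table entry is 8 bytes. A sketch of that shape; the macro names and exact expressions below are reproduced from memory and may differ slightly from the actual header.

    #include <asm/page.h>

    /* Number of levels needed to map 'va_bits' of virtual address space. */
    #define ARM64_HW_PGTABLE_LEVELS(va_bits)    (((va_bits) - 4) / (PAGE_SHIFT - 3))

    /* Address shift covered by page table level 'n' (3 is the PTE level). */
    #define ARM64_HW_PGTABLE_LEVEL_SHIFT(n)     ((PAGE_SHIFT - 3) * (4 - (n)) + 3)

    /* e.g. 4K pages, 39-bit VA: (39 - 4) / 9 = 3 levels; level 2 shift = 21 (2M blocks). */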
-
Committed by Suzuki K. Poulose
We use section maps with the 4K page size to create the swapper/idmap page tables. So far we have used !64K or 4K checks to handle the case where we use the section maps. This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to handle the cases where we use section maps, instead of using the page size symbols. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
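A sketch of the symbol and a typical use; the SECTION_SHIFT/PAGE_SHIFT choice mirrors what the swapper block-size definitions look like, but the surrounding header details are simplified.

    /* Section maps are only used for the swapper/idmap with 4K pages. */
    #ifdef CONFIG_ARM64_4K_PAGES
    #define ARM64_SWAPPER_USES_SECTION_MAPS 1
    #else
    #define ARM64_SWAPPER_USES_SECTION_MAPS 0
    #endif

    #if ARM64_SWAPPER_USES_SECTION_MAPS
    #define SWAPPER_BLOCK_SHIFT     SECTION_SHIFT   /* 2M sections at level 2 */
    #else
    #define SWAPPER_BLOCK_SHIFT     PAGE_SHIFT      /* ordinary page mappings */
    #endif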
-
Committed by Suzuki K. Poulose
Move the kernel page table (both swapper and idmap) definitions from the generic asm/page.h to a new file, asm/kernel-pgtable.h. This is mostly a cosmetic change, to clean up asm/page.h and get rid of the arch-specific details which are not needed by the generic code. Also rename the symbols to prevent conflicts, e.g. BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 17 October 2015, 1 commit
-
Committed by Yang Shi
Fix the typo 'handers' to 'handlers'. Signed-off-by: Yang Shi <yang.shi@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-