- 02 Aug 2016, 12 commits
-
-
Submitted by James Hogan
We are now able to support KVM T&E with MIPS32 guests on some MIPS64r2 and MIPS64r6 hosts, so select HAVE_KVM so it can be enabled.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
KVM sometimes flushes host TLB entries, reading each one to check if it corresponds to a guest KSeg0 address. In the absence of EntryHi.EHInv bits to invalidate the whole entry, the entries will be set to unique virtual addresses in KSeg0 (which is not TLB mapped), spaced 2*PAGE_SIZE apart. The TLB read, however, will clobber the CP0_PageMask register with whatever page size that TLB entry had, and that same page size will be written back into the TLB entry along with the unique address. This would cause breakage when transparent huge pages are enabled on 64-bit host kernels, since huge page entries will overlap other nearby entries when separated by only 2*PAGE_SIZE, causing a machine check exception. Fix this by restoring the old CP0_PageMask value (which should be set to the normal page size) after reading the TLB entry, if we're going to go ahead and invalidate it.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
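A minimal sketch of the idea, using the standard CP0 accessors from <asm/mipsregs.h>; the invalidation predicate is hypothetical and hazard barriers are simplified, so treat this as an illustration rather than the patch itself:

    unsigned long old_pagemask = read_c0_pagemask();
    unsigned long old_entryhi = read_c0_entryhi();

    write_c0_index(idx);
    tlb_read();             /* clobbers CP0_PageMask with this entry's page size */

    if (entry_maps_guest_kseg0(read_c0_entryhi())) {    /* hypothetical check */
            /* put the normal page size back before rewriting the entry */
            write_c0_pagemask(old_pagemask);
            write_c0_entryhi(UNIQUE_ENTRYHI(idx));      /* unique KSeg0 address */
            write_c0_entrylo0(0);
            write_c0_entrylo1(0);
            mtc0_tlbw_hazard();
            tlb_write_indexed();
            tlbw_use_hazard();
    }

    write_c0_entryhi(old_entryhi);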
-
Submitted by James Hogan
kvm_mips_trans_replace() passes a pointer to KVM_GUEST_KSEGX(). This breaks on 64-bit builds due to the cast of that 64-bit pointer to a different sized 32-bit int. Cast the pointer argument to an unsigned long to work around the warning.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
When emulating MFC0 instructions to load 32-bit values from guest COP0 registers and the RDHWR instruction to read the CC (Count) register, sign extend the result to comply with the MIPS64 architecture. The result must be in canonical 32-bit form or the guest may malfunction.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
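The fix boils down to a sign-extending cast when the 32-bit architectural value is placed in a 64-bit guest GPR; a minimal sketch (the accessor shown is illustrative):

    u32 val = kvm_read_c0_guest_status(cop0);  /* 32-bit architectural value */

    /* (s32) forces the canonical sign-extended form required on MIPS64 */
    vcpu->arch.gprs[rt] = (s32)val;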
-
Submitted by James Hogan
The MFC0 and MTC0 instructions in the guest which cause traps can be replaced with 32-bit loads and stores to the commpage, however on big endian 64-bit builds the offset needs to have 4 added so as to load/store the least significant half of the long instead of the most significant half.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
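A sketch of the offset adjustment, assuming each 32-bit register value lives in a native 'long' slot in the commpage (the macro and variable names are hypothetical):

    /* byte offset of the 32-bit value within the commpage 'long' slot */
    #if defined(CONFIG_64BIT) && defined(CONFIG_CPU_BIG_ENDIAN)
    #define COMMPAGE_LO32   4       /* LS half sits in the upper 4 bytes */
    #else
    #define COMMPAGE_LO32   0
    #endif

    offset = reg_slot_offset + COMMPAGE_LO32;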
-
Submitted by James Hogan
Fail if the address of the allocated exception base doesn't fit into the CP0_EBase register. This can happen on MIPS64 if CP0_EBase.WG isn't implemented but RAM is available outside of the range of KSeg0.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Update the KVM entry point to write CP0_EBase as a 64-bit register when it is 64 bits wide, and to set the WG (write gate) bit if it exists in order to write bits 63:30 (or 31:30 on MIPS32). Prior to MIPS64r6 it was UNDEFINED to perform a 64-bit read or write of a 32-bit COP0 register. Since this is dynamically generated code, generate the right type of access depending on whether the kernel is 64-bit and cpu_has_ebase_wg.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Update the KVM entry code to set the CP0_Status.KX bit on 64-bit kernels. This is important to allow the entry code, running in kernel mode, to access the full 64-bit address space right up to the point of entering the guest, and immediately after exiting the guest, so it can safely save & restore the guest context from 64-bit segments.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
The MIPS KVM entry code (originally kvm_locore.S, later locore.S, and now entry.c) has never quite been right when built for 64-bit, using 32-bit instructions when 64-bit instructions were needed for handling 64-bit registers and pointers. Fix several cases of this now. The changes roughly fall into the following categories:
- COP0 scratch registers contain guest register values and the VCPU pointer, and are themselves full width. Similarly CP0_EPC and CP0_BadVAddr registers are full width (even though technically we don't support 64-bit guest address spaces with trap & emulate KVM). Use MFC0/MTC0 for accessing them.
- Handling of stack pointers and the VCPU pointer must match the pointer size of the kernel ABI (always o32 or n64), so use ADDIU.
- The CPU number in thread_info, and the guest_{user,kernel}_asid arrays in kvm_vcpu_arch are all 32-bit integers, so use lw (instead of LW) to load them.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
There are several unportable uses of CKSEG0ADDR() in MIPS KVM, which implicitly assume that a host physical address will be in the low 512MB of the physical address space (accessible in KSeg0). These assumptions don't hold for highmem or on 64-bit kernels. When interpreting the guest physical address when reading or overwriting a trapping instruction, use kmap_atomic() to get a usable virtual address to access guest memory, which is portable to 64-bit and highmem kernels.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
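A minimal sketch of the portable access pattern (the gfn-to-pfn translation and error handling are elided, and the function name is illustrative):

    #include <linux/mm.h>
    #include <linux/highmem.h>

    static u32 read_guest_inst(unsigned long pfn, unsigned long gpa)
    {
            /* map the guest page; works for highmem and 64-bit kernels */
            void *vaddr = kmap_atomic(pfn_to_page(pfn));
            u32 inst = *(u32 *)(vaddr + offset_in_page(gpa));

            kunmap_atomic(vaddr);
            return inst;
    }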
-
Submitted by James Hogan
Calculate the PFN of the commpage using virt_to_phys() instead of CPHYSADDR(). This is more portable as kzalloc() may allocate from XKPhys instead of KSeg0 on 64-bit kernels, which CPHYSADDR() doesn't handle. This is sufficient for highmem kernels too since kzalloc() will allocate from lowmem in KSeg0.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
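The change amounts to deriving the PFN from the physical address rather than assuming a KSeg0 virtual address; a sketch (the variable names are illustrative):

    commpage = kzalloc(PAGE_SIZE, GFP_KERNEL);

    /* valid for KSeg0 and XKPhys allocations alike */
    commpage_pfn = PFN_DOWN(virt_to_phys(commpage));
    /* instead of: CPHYSADDR(commpage) >> PAGE_SHIFT */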
-
Submitted by James Hogan
The KSEGX() macro is defined to 32-bit sign extend the address argument and logically AND the result with 0xe0000000, with the final result usually compared against one of the CKSEG macros. However the literal 0xe0000000 is unsigned as the high bit is set, and is therefore zero-extended on 64-bit kernels, resulting in the sign extension bits of the argument being masked to zero. This results in the odd situation where:
  KSEGX(CKSEG0) != CKSEG0
  (0xffffffff80000000 & 0x00000000e0000000) != 0xffffffff80000000
Fix this by 32-bit sign extending the 0xe0000000 literal using _ACAST32_. This will help some MIPS KVM code handling 32-bit guest addresses to work on 64-bit host kernels, but will also affect KSEGX in dec_kn01_be_backend() on a 64-bit DECstation kernel, and the SiByte DMA page ops KSEGX check in clear_page() and copy_page() on 64-bit SB1 kernels, neither of which appear to be designed with 64-bit segments in mind anyway.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
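The fix is essentially a one-liner in <asm/addrspace.h>, applying the same _ACAST32_ sign-extending cast to the mask that is already applied to the argument (reproduced from memory, so treat it as a sketch):

    -#define KSEGX(a)	((_ACAST32_(a)) & 0xe0000000)
    +#define KSEGX(a)	((_ACAST32_(a)) & _ACAST32_(0xe0000000))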
-
- 01 Aug 2016, 3 commits
-
-
Submitted by Jim Mattson
Kexec needs to know the addresses of all VMCSs that are active on each CPU, so that it can flush them from the VMCS caches. It is safe to record superfluous addresses that are not associated with an active VMCS, but it is not safe to omit an address associated with an active VMCS. After a call to vmcs_load, the VMCS that was loaded is active on the CPU. The VMCS should be added to the CPU's list of active VMCSs before it is loaded.
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
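The ordering is the essence of the change; a sketch following the naming used in arch/x86/kvm/vmx.c at the time (quoted from memory, not the literal diff):

    /* publish the VMCS on this CPU's list first ... */
    local_irq_disable();
    list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link,
             &per_cpu(loaded_vmcss_on_cpu, cpu));
    local_irq_enable();

    /* ... and only then make it active, so kexec always sees it */
    vmcs_load(vmx->loaded_vmcs->vmcs);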
-
Submitted by David Matlack
KVM maintains L1's current VMCS in guest memory, at the guest physical page identified by the argument to VMPTRLD. This makes hairy time-of-check to time-of-use bugs possible, as VCPUs can be writing the VMCS page in memory while KVM is emulating VMLAUNCH and VMRESUME. The spec documents that writing to the VMCS page while it is loaded is "undefined". Therefore it is reasonable to load the entire VMCS into an internal cache during VMPTRLD and ignore writes to the VMCS page -- the guest should be using VMREAD and VMWRITE to access the current VMCS. To adhere to the spec, KVM should flush the current VMCS during VMPTRLD, and the target VMCS during VMCLEAR (as given by the operand to VMCLEAR). Since this implementation of VMCS caching only maintains the current VMCS, VMCLEAR will only do a flush if the operand to VMCLEAR is the current VMCS pointer. KVM will also flush during VMXOFF, which is not mandated by the spec, but also not in conflict with the spec.
Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
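A sketch of the VMPTRLD side of the caching scheme (names such as cached_vmcs12 and VMCS12_SIZE follow the patch but are quoted from memory here):

    /* VMPTRLD: snapshot the guest's VMCS region into a kernel-side cache */
    if (kvm_read_guest(vcpu->kvm, vmptr, vmx->nested.cached_vmcs12,
                       VMCS12_SIZE))
            return 1;       /* illustrative error handling */

    vmx->nested.current_vmptr = vmptr;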
-
Submitted by Radim Krčmář
Merge branch 'kvm-ppc-next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into next
Fix for CVE-2016-5412, a denial-of-service vulnerability in HV KVM on POWER8 machines
-
- 28 Jul 2016, 2 commits
-
-
Submitted by Paul Mackerras
It turns out that if the guest does a H_CEDE while the CPU is in a transactional state, and the H_CEDE does a nap, and the nap loses the architected state of the CPU (which it is allowed to do), then we lose the checkpointed state of the virtual CPU. In addition, the transactional-memory state recorded in the MSR gets reset back to non-transactional, and when we try to return to the guest, we take a TM bad thing type of program interrupt because we are trying to transition from non-transactional to transactional with a hrfid instruction, which is not permitted. The result of the program interrupt occurring at that point is that the host CPU will hang in an infinite loop with interrupts disabled. Thus this is a denial of service vulnerability in the host which can be triggered by any guest (and depending on the guest kernel, it can potentially be triggered by unprivileged userspace in the guest). This vulnerability has been assigned the ID CVE-2016-5412. To fix this, we save the TM state before napping and restore it on exit from the nap, when handling a H_CEDE in real mode. The case where H_CEDE exits to host virtual mode is already OK (as are other hcalls which exit to host virtual mode) because the exit path saves the TM state.
Cc: stable@vger.kernel.org # v3.15+
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
-
Submitted by Paul Mackerras
This moves the transactional memory state save and restore sequences out of the guest entry/exit paths into separate procedures. This is so that these sequences can be used in going into and out of nap in a subsequent patch. The only code changes here are (a) saving and restoring LR on the stack, since these new procedures get called with a bl instruction, (b) explicitly saving r1 into the PACA instead of assuming that HSTATE_HOST_R1(r13) is already set, and (c) removing an unnecessary and redundant setting of MSR[TM] that should have been removed by commit 9d4d0bdd9e0a ("KVM: PPC: Book3S HV: Add transactional memory support", 2013-09-24) but wasn't.
Cc: stable@vger.kernel.org # v3.15+
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
-
- 23 Jul 2016, 1 commit
-
-
Submitted by Radim Krčmář
Merge tag 'kvm-arm-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into next
KVM/ARM changes for Linux 4.8:
- GICv3 ITS emulation
- Simpler idmap management that fixes potential TLB conflicts
- Honor the kernel protection in HYP mode
- Removal of the old vgic implementation
-
- 21 Jul 2016, 1 commit
-
-
Submitted by Radim Krčmář
Merge tag 'kvm-s390-next-4.8-4' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into next
KVM: s390: Feature and fix for kvm/next (4.8) part 4
1. Provide an exit to userspace for the invalid opcode 0 (used for software breakpoints)
2. "Fix" (by returning condition code 3) some unhandled PTFF subcodes
-
- 19 Jul 2016, 21 commits
-
-
Submitted by Marc Zyngier
If we care to move all the checks that do not involve any memory allocation, we can simplify the MAPI error handling. Let's do that, it cannot hurt.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
vgic_its_cmd_handle_mapi has an extra "subcmd" argument, which is already contained in the command buffer that all command handlers obtain from the command queue. Let's drop it, as it is not that useful.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
There is no need to have separate functions to validate devices and collections, as the architecture doesn't really distinguish the two, and they are supposed to be managed the same way. Let's turn the DevID checker into a generic one.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
Going from the ITS structure to the corresponding KVM structure would be quite handy at times. The kvm_device pointer that is passed at create time is quite convenient for this, so let's keep a copy of it in the vgic_its structure. This will be put to good use in subsequent patches.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
Instead of spreading random allocations all over the place, consolidate allocation/init/freeing of collections in a pair of constructor/destructor functions.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
When checking that the storage address of a device entry is valid, it is critical to compute the actual address of the entry, rather than relying on the beginning of the page to match a CPU page of the same size: for example, if the guest places the table at the last 64kB boundary of RAM, but RAM size isn't a multiple of 64kB... Fix this by computing the actual offset of the device ID in the L2 page, and check the corresponding GFN.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
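A sketch of the corrected check, assuming 64kB ITS pages as in the example above: validate the GFN of the entry itself rather than the start of the L2 page (the entry-size constant and index arithmetic are illustrative):

    /* index of this device ID within its L2 page, then its exact address */
    index = device_id % (SZ_64K / GITS_ENTRY_SIZE);     /* illustrative names */
    gfn   = (l2_addr + index * GITS_ENTRY_SIZE) >> PAGE_SHIFT;

    return kvm_is_visible_gfn(its->dev->kvm, gfn);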
-
Submitted by Marc Zyngier
Checking that the device_id fits within the table is not enough; we must also make sure that the associated memory is accessible.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
The nr_entries variable in vgic_its_check_device_id actually describes the size of the L1 table, and not the number of entries in this table. Rename it to l1_tbl_size, so that we can now change the code with a better understanding of what is what.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
The ITS tables are stored in LE format. If the host is reading a L1 table entry to check its validity, it must convert it to the CPU endianness.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
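A sketch of the endianness-aware read of an L1 entry (the surrounding logic and the valid-bit test are illustrative):

    __le64 raw;
    u64 entry;

    if (kvm_read_guest(kvm, l1_gpa, &raw, sizeof(raw)))
            return false;

    /* ITS tables are little-endian in guest memory */
    entry = le64_to_cpu(raw);
    if (!(entry & BIT_ULL(63)))     /* valid bit of an indirect entry */
            return false;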
-
Submitted by Marc Zyngier
The current code will fail on valid indirect tables, and happily use the ones that are pointing out of the guest RAM. Funny what a small "!" can do for you...
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
Instead of sprinkling raw kref_get() calls every time we cannot do a normal vgic_get_irq(), use the existing vgic_get_irq_kref(), which does the same thing and is paired with a vgic_put_irq(). vgic_get_irq_kref is moved to vgic.h in order to be easily shared.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
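The helper being promoted to vgic.h is essentially the following (reproduced from memory, so treat it as a sketch):

    static inline void vgic_get_irq_kref(struct vgic_irq *irq)
    {
            /* SGIs/PPIs/SPIs are statically allocated and never freed */
            if (irq->intid < VGIC_MIN_LPI)
                    return;

            kref_get(&irq->refcount);
    }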
-
Submitted by Marc Zyngier
Let's restore some of the #defines that have been savagely dropped by the introduction of the KVM ITS code, as dropping them pointlessly breaks other users (including series that are already in -next).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Eric Auger
For VGICv2 save and restore, the CPU interface registers are accessed. Restore the modality which has been altered. Also explicitly set the iodev_type for both the DIST and CPU interface.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Andre Przywara
Now that all ITS emulation functionality is in place, we advertise MSI functionality to userland and also the ITS device to the guest - if userland has configured that.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Andre Przywara
When userland wants to inject an MSI into the guest, it uses the KVM_SIGNAL_MSI ioctl, which carries the doorbell address along with the payload and the device ID. With the help of the KVM IO bus framework we learn the corresponding ITS from the doorbell address. We then use our wrapper functions to iterate the linked lists and find the proper Interrupt Translation Table Entry (ITTE) and thus the corresponding struct vgic_irq to finally set the pending bit. We also provide the handler for the ITS "INT" command, which allows a guest to trigger an MSI via the ITS command queue. Since this one knows about the right ITS already, we directly call the MMIO handler function without using the kvm_io_bus framework.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Andre Przywara
The connection between a device, an event ID, the LPI number and the associated CPU is stored in in-memory tables in a GICv3, but their format is not specified by the spec. Instead software uses a command queue in a ring buffer to let an ITS implementation use its own format. Implement handlers for the various ITS commands and let them store the requested relation into our own data structures. Those data structures are protected by the its_lock mutex. Our internal ring buffer read and write pointers are protected by the its_cmd mutex, so that only one VCPU per ITS can handle commands at any given time. Error handling is very basic at the moment, as we don't have a good way of communicating errors to the guest (usually an SError). The INT command handler is missing from this patch, as we gain the capability of actually injecting MSIs into the guest only later on.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Andre Przywara
The (system-wide) LPI configuration table is held in a table in (guest) memory. To achieve reasonable performance, we cache this data in our struct vgic_irq. If the guest updates the configuration data (which consists of the enable bit and the priority value), it issues an INV or INVALL command to allow us to update our information. Provide functions that update that information for one LPI or all LPIs mapped to a specific collection.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
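A sketch of the per-LPI update: one configuration byte per LPI is read from the property table and mirrored into struct vgic_irq (the macro and field names here are illustrative, not necessarily those of the patch):

    u8 prop;
    int ret;

    /* property table entry for this LPI: one byte at propbase + intid - 8192 */
    ret = kvm_read_guest(kvm, propbase + irq->intid - GIC_LPI_OFFSET,
                         &prop, 1);
    if (ret)
            return ret;

    irq->priority = LPI_PROP_PRIORITY(prop);
    irq->enabled  = LPI_PROP_ENABLE_BIT(prop);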
-
Submitted by Andre Przywara
The LPI pending status for a GICv3 redistributor is held in a table in (guest) memory. To achieve reasonable performance, we cache the pending bit in our struct vgic_irq. The initial pending state must be read from guest memory upon enabling LPIs for this redistributor. As we can't access the guest memory while we hold the lpi_list spinlock, we create a snapshot of the LPI list and iterate over that.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Andre Przywara
LPIs are dynamically created (mapped) at guest runtime and their actual number can be quite high, but is mostly assigned using a very sparse allocation scheme. So arrays are not an ideal data structure to hold the information. We use a spin-lock protected linked list to hold all mapped LPIs, represented by their struct vgic_irq. This lock is grouped between the ap_list_lock and the vgic_irq lock in our locking order. Also we store a pointer to that struct vgic_irq in our struct its_itte, so we can easily access it. Eventually we call our new vgic_get_lpi() from vgic_get_irq(), so the VGIC code gets transparent access to LPIs.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
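A sketch of the resulting LPI lookup on the new spinlock-protected list (field names follow the series but are quoted from memory):

    struct vgic_dist *dist = &kvm->arch.vgic;
    struct vgic_irq *irq = NULL, *tmp;

    spin_lock(&dist->lpi_list_lock);
    list_for_each_entry(tmp, &dist->lpi_list_head, lpi_list) {
            if (tmp->intid != intid)
                    continue;

            /* take a reference while we still hold the lock */
            vgic_get_irq_kref(tmp);
            irq = tmp;
            break;
    }
    spin_unlock(&dist->lpi_list_lock);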
-
Submitted by Andre Przywara
Add emulation for some basic MMIO registers used in the ITS emulation. This includes:
- GITS_{CTLR,TYPER,IIDR}
- ID registers
- GITS_{CBASER,CREADR,CWRITER} (which implement the ITS command buffer handling)
- GITS_BASER<n>
Most of the handlers are pretty straightforward, only the CWRITER handler is a bit more involved by taking the new its_cmd mutex and then iterating over the command buffer. The registers holding base addresses and attributes are sanitised before storing them.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Andre Przywara
Introduce a new KVM device that represents an ARM Interrupt Translation Service (ITS) controller. Since there can be multiple of these per guest, we can't piggy back on the existing GICv3 distributor device, but create a new type of KVM device. On the KVM_CREATE_DEVICE ioctl we allocate and initialize the ITS data structure and store the pointer in the kvm_device data. Upon an explicit init ioctl from userland (after having set up the MMIO address) we register the handlers with the kvm_io_bus framework. Any reference to an ITS thus has to go via this interface.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-