- 08 November 2019, 1 commit
-
-
Submitted by Marc Zyngier

In order to find out whether a vcpu is likely to be the target of VLPIs (and to further optimize the way we deal with those), let's track the number of VLPIs a vcpu can receive. This gets implemented with an atomic variable that gets incremented or decremented on map, unmap and move of a VLPI.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Link: https://lore.kernel.org/r/20191107160412.30301-2-maz@kernel.org
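A minimal sketch of the counting scheme described above, under illustrative names (the structure and helper below are not the actual kernel symbols; the real counter sits in the vcpu's vgic state). Map passes only `to`, unmap passes only `from`, and a move passes both:

```c
#include <linux/atomic.h>

/* Illustrative per-vcpu state. */
struct vlpi_stats {
	atomic_t vlpi_count;		/* VLPIs currently routed to this vcpu */
};

/* Called on map, unmap and move of a VLPI. Either argument may be NULL. */
static void vlpi_count_move(struct vlpi_stats *from, struct vlpi_stats *to)
{
	if (from)
		atomic_dec(&from->vlpi_count);	/* VLPI leaves the old vcpu */
	if (to)
		atomic_inc(&to->vlpi_count);	/* VLPI now targets the new vcpu */
}
```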
-
- 19 August 2019, 7 commits
-
-
Submitted by Marc Zyngier

When performing an MSI injection, let's first check if the translation is already in the cache. If so, let's inject it quickly without going through the whole translation process.

Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
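A sketch of what such a fast-path lookup amounts to, with illustrative structure and function names (the real code also takes the cache lock and a reference on the returned interrupt, both omitted here):

```c
#include <linux/list.h>
#include <linux/types.h>

struct vgic_irq;			/* opaque here; owned by the vgic core */

/* Illustrative cache entry: one doorbell/DevID/EventID -> LPI mapping. */
struct lpi_xlate_entry {
	struct list_head entry;
	phys_addr_t	 db;		/* GITS_TRANSLATER doorbell address */
	u32		 devid;
	u32		 eventid;
	struct vgic_irq	*irq;		/* NULL while the slot is unused */
};

/* Fast path: return the cached LPI, or NULL to fall back to a full translation. */
static struct vgic_irq *lpi_xlate_cache_get(struct list_head *cache,
					    phys_addr_t db, u32 devid, u32 eventid)
{
	struct lpi_xlate_entry *e;

	list_for_each_entry(e, cache, entry) {
		if (e->irq && e->db == db && e->devid == devid &&
		    e->eventid == eventid)
			return e->irq;
	}

	return NULL;			/* miss: do the whole ITS walk */
}
```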
-
Submitted by Marc Zyngier

On a successful translation, preserve the parameters in the LPI translation cache. Each translation reuses the last slot in the list, naturally evicting the least recently used entry.

Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
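A sketch of the insertion side, reusing the illustrative lpi_xlate_entry structure from the lookup sketch above; the list head holds the most recently used slot, so the tail is the natural eviction victim:

```c
/* On a successful translation, stash the result in the most recently used slot. */
static void lpi_xlate_cache_add(struct list_head *cache, phys_addr_t db,
				u32 devid, u32 eventid, struct vgic_irq *irq)
{
	struct lpi_xlate_entry *victim;

	if (list_empty(cache))
		return;

	/* The list tail is, by construction, the least recently used entry. */
	victim = list_last_entry(cache, struct lpi_xlate_entry, entry);
	victim->db	= db;
	victim->devid	= devid;
	victim->eventid	= eventid;
	victim->irq	= irq;

	/* Recycling the slot to the head makes it the most recently used. */
	list_move(&victim->entry, cache);
}
```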
-
Submitted by Marc Zyngier

In order to avoid leaking vgic_irq structures on teardown, we need to drop all references to LPIs before deallocating the cache itself. This is done by invalidating the cache on vgic teardown.

Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier

If an ITS gets disabled, we need to make sure that further interrupts won't hit in the cache. For that, we invalidate the translation cache when the ITS is disabled.

Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier

The LPI translation cache needs to be discarded when an ITS command may affect the translation of an LPI (DISCARD, MAPC and MAPD with V=0) or the routing of an LPI to a redistributor with disabled LPIs (MOVI, MOVALL). We decide to perform a full invalidation of the cache, irrespective of the LPI that is affected. Commands are supposed to be rare enough that it doesn't matter.

Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier

There are a number of cases where we need to invalidate the caching of translations, so let's add basic support for that.

Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
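The invalidation primitive, again sketched on the illustrative structure used in the cache sketches above (the real code also drops the vgic_irq reference taken when an entry was populated):

```c
/* Drop every cached translation, e.g. on ITS disable or on vgic teardown. */
static void lpi_xlate_cache_invalidate(struct list_head *cache)
{
	struct lpi_xlate_entry *e;

	list_for_each_entry(e, cache, entry) {
		/*
		 * Only mark the slot unused here; putting back the interrupt
		 * reference is left out of this sketch.
		 */
		e->irq = NULL;
	}
}
```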
-
Submitted by Marc Zyngier

Add the basic data structure that expresses an MSI to LPI translation as well as the allocation/release hooks. The size of the cache is arbitrarily defined as 16*nr_vcpus.

Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
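A sketch of the allocation and release hooks, pre-allocating 16 entries per vcpu as the commit describes, using the same illustrative lpi_xlate_entry structure from the sketches above:

```c
#include <linux/slab.h>

#define LPI_XLATE_CACHE_FACTOR	16	/* entries per vcpu, per the commit text */

static void lpi_xlate_cache_init(struct list_head *cache, int nr_vcpus)
{
	int i;

	INIT_LIST_HEAD(cache);
	for (i = 0; i < LPI_XLATE_CACHE_FACTOR * nr_vcpus; i++) {
		struct lpi_xlate_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

		if (!e)
			break;		/* a smaller cache still works correctly */
		list_add(&e->entry, cache);
	}
}

static void lpi_xlate_cache_destroy(struct list_head *cache)
{
	struct lpi_xlate_entry *e, *tmp;

	list_for_each_entry_safe(e, tmp, cache, entry) {
		list_del(&e->entry);
		kfree(e);
	}
}
```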
-
- 19 June 2019, 1 commit
-
-
Submitted by Thomas Gleixner

Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it
  under the terms of the gnu general public license version 2 as published
  by the free software foundation this program is distributed in the hope
  that it will be useful but without any warranty without even the implied
  warranty of merchantability or fitness for a particular purpose see the
  gnu general public license for more details you should have received a
  copy of the gnu general public license along with this program if not
  see http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 12 June 2019, 1 commit
-
-
Submitted by Dave Martin

kvm_device->destroy() is supposed to free its kvm_device struct, but vgic_its_destroy() is not currently doing this, resulting in a memory leak that shows up in kmemleak reports such as the following:

unreferenced object 0xffff800aeddfe280 (size 128):
  comm "qemu-system-aar", pid 13799, jiffies 4299827317 (age 1569.844s)
  [...]
  backtrace:
    [<00000000a08b80e2>] kmem_cache_alloc+0x178/0x208
    [<00000000dcad2bd3>] kvm_vm_ioctl+0x350/0xbc0

Fix it.

Cc: Andre Przywara <andre.przywara@arm.com>
Fixes: 1085fdc6 ("KVM: arm64: vgic-its: Introduce new KVM ITS device")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 21 March 2019, 1 commit
-
-
Submitted by YueHaibing

Fix sparse warnings:

arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-its.c:1732:5: warning: symbol 'vgic_its_has_attr_regs' was not declared. Should it be static?
arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-its.c:1753:5: warning: symbol 'vgic_its_attr_regs_access' was not declared. Should it be static?

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
[maz: fixed subject]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 20 March 2019, 2 commits
-
-
Submitted by Marc Zyngier

Calling kvm_is_visible_gfn() implies that we're parsing the memslots, and doing this without the srcu lock is frowned upon:

[12704.164532] =============================
[12704.164544] WARNING: suspicious RCU usage
[12704.164560] 5.1.0-rc1-00008-g600025238f51-dirty #16 Tainted: G W
[12704.164573] -----------------------------
[12704.164589] ./include/linux/kvm_host.h:605 suspicious rcu_dereference_check() usage!
[12704.164602] other info that might help us debug this:
[12704.164616] rcu_scheduler_active = 2, debug_locks = 1
[12704.164631] 6 locks held by qemu-system-aar/13968:
[12704.164644] #0: 000000007ebdae4f (&kvm->lock){+.+.}, at: vgic_its_set_attr+0x244/0x3a0
[12704.164691] #1: 000000007d751022 (&its->its_lock){+.+.}, at: vgic_its_set_attr+0x250/0x3a0
[12704.164726] #2: 00000000219d2706 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[12704.164761] #3: 00000000a760aecd (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[12704.164794] #4: 000000000ef8e31d (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[12704.164827] #5: 000000007a872093 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[12704.164861] stack backtrace:
[12704.164878] CPU: 2 PID: 13968 Comm: qemu-system-aar Tainted: G W 5.1.0-rc1-00008-g600025238f51-dirty #16
[12704.164887] Hardware name: rockchip evb_rk3399/evb_rk3399, BIOS 2019.04-rc3-00124-g2feec69fb1 03/15/2019
[12704.164896] Call trace:
[12704.164910] dump_backtrace+0x0/0x138
[12704.164920] show_stack+0x24/0x30
[12704.164934] dump_stack+0xbc/0x104
[12704.164946] lockdep_rcu_suspicious+0xcc/0x110
[12704.164958] gfn_to_memslot+0x174/0x190
[12704.164969] kvm_is_visible_gfn+0x28/0x70
[12704.164980] vgic_its_check_id.isra.0+0xec/0x1e8
[12704.164991] vgic_its_save_tables_v0+0x1ac/0x330
[12704.165001] vgic_its_set_attr+0x298/0x3a0
[12704.165012] kvm_device_ioctl_attr+0x9c/0xd8
[12704.165022] kvm_device_ioctl+0x8c/0xf8
[12704.165035] do_vfs_ioctl+0xc8/0x960
[12704.165045] ksys_ioctl+0x8c/0xa0
[12704.165055] __arm64_sys_ioctl+0x28/0x38
[12704.165067] el0_svc_common+0xd8/0x138
[12704.165078] el0_svc_handler+0x38/0x78
[12704.165089] el0_svc+0x8/0xc

Make sure the lock is taken when doing this.

Fixes: bf308242 ("KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

When halting a guest, QEMU flushes the virtual ITS caches, which amounts to writing to the various tables that the guest has allocated.

When doing this, we fail to take the srcu lock, and the kernel shouts loudly if running a lockdep kernel:

[ 69.680416] =============================
[ 69.680819] WARNING: suspicious RCU usage
[ 69.681526] 5.1.0-rc1-00008-g600025238f51-dirty #18 Not tainted
[ 69.682096] -----------------------------
[ 69.682501] ./include/linux/kvm_host.h:605 suspicious rcu_dereference_check() usage!
[ 69.683225]
[ 69.683225] other info that might help us debug this:
[ 69.683225]
[ 69.683975]
[ 69.683975] rcu_scheduler_active = 2, debug_locks = 1
[ 69.684598] 6 locks held by qemu-system-aar/4097:
[ 69.685059] #0: 0000000034196013 (&kvm->lock){+.+.}, at: vgic_its_set_attr+0x244/0x3a0
[ 69.686087] #1: 00000000f2ed935e (&its->its_lock){+.+.}, at: vgic_its_set_attr+0x250/0x3a0
[ 69.686919] #2: 000000005e71ea54 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[ 69.687698] #3: 00000000c17e548d (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[ 69.688475] #4: 00000000ba386017 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[ 69.689978] #5: 00000000c2c3c335 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
[ 69.690729]
[ 69.690729] stack backtrace:
[ 69.691151] CPU: 2 PID: 4097 Comm: qemu-system-aar Not tainted 5.1.0-rc1-00008-g600025238f51-dirty #18
[ 69.691984] Hardware name: rockchip evb_rk3399/evb_rk3399, BIOS 2019.04-rc3-00124-g2feec69fb1 03/15/2019
[ 69.692831] Call trace:
[ 69.694072] lockdep_rcu_suspicious+0xcc/0x110
[ 69.694490] gfn_to_memslot+0x174/0x190
[ 69.694853] kvm_write_guest+0x50/0xb0
[ 69.695209] vgic_its_save_tables_v0+0x248/0x330
[ 69.695639] vgic_its_set_attr+0x298/0x3a0
[ 69.696024] kvm_device_ioctl_attr+0x9c/0xd8
[ 69.696424] kvm_device_ioctl+0x8c/0xf8
[ 69.696788] do_vfs_ioctl+0xc8/0x960
[ 69.697128] ksys_ioctl+0x8c/0xa0
[ 69.697445] __arm64_sys_ioctl+0x28/0x38
[ 69.697817] el0_svc_common+0xd8/0x138
[ 69.698173] el0_svc_handler+0x38/0x78
[ 69.698528] el0_svc+0x8/0xc

The fix is to obviously take the srcu lock, just like we do on the read side of things since bf308242. One wonders why this wasn't fixed at the same time, but hey...

Fixes: bf308242 ("KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
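The fix amounts to bracketing the table-save path with the SRCU read lock, roughly as in the sketch below; the wrapper function is illustrative (the real change lives in the vgic_its save/restore attribute paths), and vgic_its_save_tables_v0() is shown only to tie the sketch to the backtrace above:

```c
#include <linux/srcu.h>

/* Illustrative wrapper: the table-save path must run inside an SRCU section. */
static int save_its_tables_locked(struct kvm *kvm, struct vgic_its *its)
{
	int srcu_idx, ret;

	/* kvm_write_guest() walks the memslots, which are SRCU-protected. */
	srcu_idx = srcu_read_lock(&kvm->srcu);
	ret = vgic_its_save_tables_v0(its);
	srcu_read_unlock(&kvm->srcu, srcu_idx);

	return ret;
}
```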
-
- 24 January 2019, 2 commits
-
-
Submitted by Julien Thierry

vgic_dist->lpi_list_lock must always be taken with interrupts disabled as it is used in interrupt context. For configurations such as PREEMPT_RT_FULL, this means that it should be a raw_spinlock, since RT spinlocks are interruptible.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
Submitted by Julien Thierry

vgic_irq->irq_lock must always be taken with interrupts disabled as it is used in interrupt context. For configurations such as PREEMPT_RT_FULL, this means that it should be a raw_spinlock, since RT spinlocks are interruptible.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
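The shape of such a conversion, trimmed down to an illustrative structure: on an RT kernel a plain spinlock_t becomes a sleeping lock, so a lock taken from interrupt context has to be declared raw.

```c
#include <linux/spinlock.h>

struct irq_state_sketch {
	raw_spinlock_t lock;	/* was spinlock_t; raw stays a real spinlock on RT */
	bool pending;
};

static void set_pending(struct irq_state_sketch *s)
{
	unsigned long flags;

	/* Usable from thread context and from interrupt context alike. */
	raw_spin_lock_irqsave(&s->lock, flags);
	s->pending = true;
	raw_spin_unlock_irqrestore(&s->lock, flags);
}
```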
-
- 01 October 2018, 1 commit
-
-
Submitted by Kristina Martsenko

Add support for handling 52-bit guest physical addresses to the VGIC layer. So far we have limited the guest physical address to 48 bits, by explicitly masking the upper bits. This patch removes the restriction. We do not have to check if the host supports 52-bit addresses, as the gpa is always validated during an access (e.g. kvm_{read/write}_guest, kvm_is_visible_gfn()). Also, the ITS table save-restore is not affected by the enhancement: the DTE entries already store bits[51:8] of the ITT_addr (with a 256-byte alignment).

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@kernel.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[ Macro clean ups, fix PROPBASER and PENDBASER accesses ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 21 July 2018, 3 commits
-
-
Submitted by Christoffer Dall

In preparation for proper group 0 and group 1 support in the vgic, we add a field in the struct irq to store the group of all interrupts. We initialize the group to group 0 when emulating GICv2 and to group 1 when emulating GICv3, just like we treat them today. LPIs are always group 1. We also continue to ignore writes from the guest, preserving existing functionality, for now. Finally, we also add this field to the vgic debug logic to show the group for all interrupts.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

The vgic debugfs file only knows about SGI/PPI/SPI interrupts, and completely ignores LPIs. Let's fix that.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Kees Cook

In the quest to remove all stack VLA usage from the kernel[1], this switches to using a maximum size and adds sanity checks. Additionally, this cleans up some of the int-vs-u32 usage and adds additional bounds checking. As it currently stands, this will always be 8 bytes until the ABI changes.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Kees Cook <keescook@chromium.org>
[maz: dropped WARN_ONs]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
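The general shape of a VLA removal of this kind, under illustrative names (the real code deals with the ITS ABI's table entry size, which is currently always 8 bytes; the function and constant below are assumptions, not the kernel's symbols):

```c
#include <linux/kvm_host.h>
#include <linux/string.h>

#define ITS_ENTRY_MAX_SIZE	8	/* current ABI maximum */

/* Illustrative reader: a fixed-size buffer plus a bounds check replaces the VLA. */
static int read_its_entry(struct kvm *kvm, gpa_t gpa, u32 esz, u64 *val)
{
	char buf[ITS_ENTRY_MAX_SIZE] = {};	/* was: char buf[esz]; */
	int ret;

	if (esz > sizeof(buf))			/* sanity check instead of a VLA */
		return -EINVAL;

	ret = kvm_read_guest(kvm, gpa, buf, esz);
	if (ret)
		return ret;

	memcpy(val, buf, esz);			/* esz <= 8, so it fits in the u64 */
	return 0;
}
```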
-
- 15 May 2018, 4 commits
-
-
Submitted by Andre Przywara

kvm_read_guest() will eventually look up in kvm_memslots(), which requires us either to hold the kvm->slots_lock or to be inside a kvm->srcu critical section. In contrast to x86 and s390 we don't take the SRCU lock on every guest exit, so we have to do it individually for each kvm_read_guest() call. Use the newly introduced wrapper for that.

Cc: Stable <stable@vger.kernel.org> # 4.12+
Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Andre Przywara

kvm_read_guest() will eventually look up in kvm_memslots(), which requires us either to hold the kvm->slots_lock or to be inside a kvm->srcu critical section. In contrast to x86 and s390 we don't take the SRCU lock on every guest exit, so we have to do it individually for each kvm_read_guest() call. Provide a wrapper which does that and use that everywhere. Note that ending the SRCU critical section before returning from the kvm_read_guest() wrapper is safe, because the data has been *copied*, so we don't need to rely on valid references to the memslot anymore.

Cc: Stable <stable@vger.kernel.org> # 4.8+
Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
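The wrapper described here is essentially the following (modulo exact placement and formatting):

```c
static inline int kvm_read_guest_lock(struct kvm *kvm, gpa_t gpa,
				      void *data, unsigned long len)
{
	int srcu_idx = srcu_read_lock(&kvm->srcu);
	int ret = kvm_read_guest(kvm, gpa, data, len);

	srcu_read_unlock(&kvm->srcu, srcu_idx);

	/* Dropping SRCU before returning is fine: the data has been copied. */
	return ret;
}
```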
-
Submitted by Andre Przywara

Apparently the development of update_affinity() overlapped with the promotion of irq_lock to be _irqsave, so the patch didn't convert this lock over. This will make lockdep complain. Fix this by disabling IRQs around the lock.

Cc: stable@vger.kernel.org
Fixes: 08c9fd04 ("KVM: arm/arm64: vITS: Add a helper to update the affinity of an LPI")
Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Andre Przywara

As Jan reported [1], lockdep complains about the VGIC not being bullet proof. This seems to be due to two issues:
- When commit 006df0f3 ("KVM: arm/arm64: Support calling vgic_update_irq_pending from irq context") promoted irq_lock and ap_list_lock to _irqsave, we forgot two instances of irq_lock. lockdep seems to pick those up.
- If a lock is _irqsave, any other locks we take inside them should be _irqsafe as well. So the lpi_list_lock needs to be promoted also.

This fixes both issues by simply making the remaining instances of those locks _irqsave. One irq_lock is addressed in a separate patch, to simplify backporting.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-May/575718.html

Cc: stable@vger.kernel.org
Fixes: 006df0f3 ("KVM: arm/arm64: Support calling vgic_update_irq_pending from irq context")
Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
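The promotion itself is mechanical; in schematic form, with a placeholder lock and the old call kept as a comment:

```c
static DEFINE_SPINLOCK(lpi_list_lock_sketch);	/* placeholder for the real lock */

static void walk_lpi_list_sketch(void)
{
	unsigned long flags;

	/*
	 * Before: spin_lock(&lpi_list_lock_sketch);
	 * Not safe once the lock nests inside an _irqsave section or is
	 * taken from interrupt context, hence the promotion below.
	 */
	spin_lock_irqsave(&lpi_list_lock_sketch, flags);
	/* ... walk the LPI list ... */
	spin_unlock_irqrestore(&lpi_list_lock_sketch, flags);
}
```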
-
- 26 March 2018, 1 commit
-
-
Submitted by Marc Zyngier

vgic_copy_lpi_list() parses the LPI list and picks LPIs targeting a given vcpu. We allocate the array containing the intids before taking the lpi_list_lock, which means we can have an array size that is not equal to the number of LPIs.

This is particularly obvious when looking at the path coming from vgic_enable_lpis, which is not a command, and thus can run in parallel with commands:

vcpu 0:                                        vcpu 1:
vgic_enable_lpis
  its_sync_lpi_pending_table
    vgic_copy_lpi_list
      intids = kmalloc_array(irq_count)
                                               MAPI(lpi targeting vcpu 0)
      list_for_each_entry(lpi_list_head)
        intids[i++] = irq->intid;

At that stage, we will happily overrun the intids array. Boo. An easy fix is to break once the array is full. The MAPI command will update the config anyway, and we won't miss a thing. We also make sure that lpi_list_count is read exactly once, so that further updates of that value will not affect the array bound check.

Cc: stable@vger.kernel.org
Fixes: ccb1d791 ("KVM: arm64: vgic-its: Fix pending table sync")
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
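A simplified sketch of the hardened copy loop: size the array from a single READ_ONCE of the count, then stop filling once it is full. The field names match the vgic distributor structure, but details such as the per-vcpu filter are omitted, and the lock is shown as the raw spinlock it later became:

```c
/* Simplified from vgic_copy_lpi_list(): snapshot the LPI intids safely. */
static int copy_lpi_list_sketch(struct vgic_dist *dist, u32 **intid_ptr)
{
	struct vgic_irq *irq;
	unsigned long flags;
	u32 *intids;
	int irq_count, i = 0;

	/* Read the count exactly once; it can grow under our feet. */
	irq_count = READ_ONCE(dist->lpi_list_count);
	intids = kmalloc_array(irq_count, sizeof(intids[0]), GFP_KERNEL);
	if (!intids)
		return -ENOMEM;

	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
		if (i == irq_count)
			break;	/* array full; a racing MAPI updates the config anyway */
		intids[i++] = irq->intid;
	}
	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);

	*intid_ptr = intids;
	return i;
}
```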
-
- 02 January 2018, 1 commit
-
-
Submitted by Vasyl Gomonovych

Fix ptr_ret.cocci warnings:

virt/kvm/arm/vgic/vgic-its.c:971:1-3: WARNING: PTR_ERR_OR_ZERO can be used

Use PTR_ERR_OR_ZERO rather than if (IS_ERR(...)) + PTR_ERR.

Generated by: scripts/coccinelle/api/ptr_ret.cocci

Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
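The transformation the coccinelle script suggests is simply the following; create_iodev() here is a hypothetical helper that returns either a valid pointer or an ERR_PTR value:

```c
#include <linux/err.h>

static int register_iodev_sketch(struct kvm *kvm)
{
	void *iodev = create_iodev(kvm);	/* hypothetical ERR_PTR-returning helper */

	/*
	 * Before:
	 *	if (IS_ERR(iodev))
	 *		return PTR_ERR(iodev);
	 *	return 0;
	 */
	return PTR_ERR_OR_ZERO(iodev);		/* same behaviour, one line */
}
```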
-
- 29 November 2017, 2 commits
-
-
Submitted by Marc Zyngier

We miss a test against NULL after allocation.

Fixes: 6d03a68f ("KVM: arm64: vgic-its: Turn device_id validation into generic ID validation")
Cc: stable@vger.kernel.org # 4.8
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

The current pending table parsing code assumes that we keep the previous read of the pending bits, but declares that variable inside the loop body, making sure it is discarded on each iteration. We end up using whatever is on the stack. Who knows, it might just be the right thing...

Fixes: 33d3bc95 ("KVM: arm64: vgic-its: Read initial LPI pending table")
Cc: stable@vger.kernel.org # 4.8
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
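The bug class, in schematic form with hypothetical helper names (read_pending_byte() and set_lpi_pending() are placeholders): state that must survive from one iteration to the next has to live outside the loop, otherwise each pass reads an uninitialized stack slot.

```c
/* Schematic version of the pending-table walk. */
static void sync_pending_sketch(const u32 *lpis, int nr_lpis)
{
	u64 last_byte_offset = -1;	/* fixed: declared once, before the loop */
	u8 pendmask = 0;
	int i;

	for (i = 0; i < nr_lpis; i++) {
		u64 byte_offset = lpis[i] / BITS_PER_BYTE;

		/* Only re-read the pending byte when moving to a new one. */
		if (byte_offset != last_byte_offset) {
			pendmask = read_pending_byte(byte_offset);	/* hypothetical */
			last_byte_offset = byte_offset;
		}

		if (pendmask & BIT(lpis[i] % BITS_PER_BYTE))
			set_lpi_pending(lpis[i]);			/* hypothetical */
	}
}
```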
-
- 10 November 2017, 13 commits
-
-
Submitted by Christoffer Dall

Since VLPIs are injected directly by the hardware there's no need to mark these as pending in software and queue them on the AP list.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Christoffer Dall

We should only try to initialize GICv4 data structures on a GICv4 capable system. Move the vgic_supports_direct_msis() check into vgic_v4_init() so that any KVM VGIC initialization path does not fail on non-GICv4 systems. Also be slightly more strict in the checking of the return value in vgic_its_create, and only error out on negative return values from the vgic_v4_init() function. This is important because the kvm device code only treats negative values as errors and only cleans up in this case. Erroneously treating a positive return value as an error from the vgic_v4_init() function can lead to NULL pointer dereferences, as has recently been observed.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

The GICv4 architecture doesn't make it easy for save/restore to work, as it doesn't give any guarantee that the pending state is written into the pending table. So let's not take any chances, and let's return an error if we encounter any LPI that has the HW bit set. In order for userspace to distinguish this error from other failure modes, use -EACCES as an error code.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

There is no need to perform an INV for each interrupt when updating multiple interrupts. Instead, we can rely on the final VINVALL that gets sent to the ITS to do the work for all of them.

Acked-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

Upon updating a property, we propagate it all the way to the physical ITS, and ask for an INV command to be executed there.

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

The current implementation of MOVALL doesn't allow us to call into the core ITS code as we hold a number of spinlocks. Let's try a method used in other parts of the code, where we copy the intids of the candidate interrupts, and then do whatever we need to do with them outside of the critical section. This allows us to move the interrupts one by one, at the expense of a bit of CPU time. Who cares? MOVALL is such a stupid command anyway...

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
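The pattern in schematic form: snapshot the interrupt IDs under the lock, then do the heavyweight per-interrupt work (which may call into the physical ITS and sleep) without holding it. The snapshot helper is the copy sketch shown further up this log; move_one_lpi() is hypothetical:

```c
/* Schematic MOVALL: snapshot under the lock, do the heavy work outside it. */
static void movall_sketch(struct vgic_dist *dist, struct kvm_vcpu *dest)
{
	u32 *intids;
	int nr, i;

	/* Phase 1: copy the candidate intids while holding the spinlock. */
	nr = copy_lpi_list_sketch(dist, &intids);
	if (nr < 0)
		return;

	/* Phase 2: retarget them one by one, outside any critical section. */
	for (i = 0; i < nr; i++)
		move_one_lpi(dist, intids[i], dest);	/* hypothetical; may sleep */

	kfree(intids);
}
```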
-
Submitted by Marc Zyngier

Handling CLEAR is pretty easy. Just ask the ITS driver to clear the corresponding pending bit (which will turn into a CLEAR command on the physical side).

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

When the guest issues an affinity change, we need to tell the physical ITS that we're now targeting a new vcpu. This is done by extracting the current mapping, updating the target, and reapplying the mapping.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

When freeing an LPI (on a DISCARD command, for example), we need to unmap the VLPI down to the physical ITS level.

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

If the guest issues an INT command targeting a VLPI, let's call into the irq_set_irqchip_state() helper to make it pending on the physical side. This works just as well if userspace decides to inject an interrupt using the normal userspace API...

Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
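irq_set_irqchip_state() is a standard kernel API, so forwarding an INT to the physical side is roughly a one-liner; the wrapper name below is illustrative:

```c
#include <linux/interrupt.h>
#include <linux/irq.h>

/* Make the host interrupt that backs a forwarded VLPI pending in hardware. */
static int vlpi_set_pending_sketch(unsigned int host_irq)
{
	return irq_set_irqchip_state(host_irq, IRQCHIP_STATE_PENDING, true);
}
```

The CLEAR handling described a few entries up is the mirror image: the same call with false as the last argument.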
-
Submitted by Marc Zyngier

In order to control the GICv4 view of virtual CPUs, we rely on an irqdomain allocated for that purpose. Let's add a couple of helpers to that effect. At the same time, the vgic data structures gain new fields to track all this... erm... wonderful stuff. The way we hook into the vgic init is slightly convoluted. We need the vgic to be initialized (in order to guarantee that the number of vcpus is now fixed), and we must have a vITS (otherwise this is all very pointless). So we end up calling the init from both vgic_init and vgic_its_create.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

In order to help integrate the vITS code with GICv4, let's add a new helper that deals with updating the affinity of an LPI, which will later be augmented with super duper extra GICv4 goodness.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

The whole MSI injection process is fairly monolithic. An MSI write gets turned into an injected LPI in one swift go. But this is actually a more fine-grained process:
- First, a virtual ITS gets selected using the doorbell address
- Then the DevID/EventID pair gets translated into an LPI
- Finally the LPI is injected

Since the GICv4 code needs the first two steps in order to match an IRQ routing entry to an LPI, let's expose them as helpers, and refactor the existing code to use them.

Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
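With the helpers split out, the monolithic path decomposes roughly as below. vgic_msi_to_its() and vgic_its_resolve_lpi() are the helper names this refactor exposes; the glue function and the final injection step are illustrative, and the ITS locking the real code performs is omitted:

```c
/* Illustrative glue built on the two helpers the refactor exposes. */
static int msi_to_lpi_sketch(struct kvm *kvm, struct kvm_msi *msi)
{
	struct vgic_its *its;
	struct vgic_irq *irq;
	int ret;

	/* 1. Select the virtual ITS from the doorbell (GITS_TRANSLATER) address. */
	its = vgic_msi_to_its(kvm, msi);
	if (IS_ERR(its))
		return PTR_ERR(its);

	/* 2. Translate the DevID/EventID pair into an LPI. */
	ret = vgic_its_resolve_lpi(kvm, its, msi->devid, msi->data, &irq);
	if (ret)
		return ret;

	/* 3. Finally, make the LPI pending (injection step kept abstract here). */
	make_lpi_pending_sketch(irq);	/* hypothetical */
	return 0;
}
```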
-