Commit 237aed48 authored by Cédric Le Goater, committed by Michael Ellerman

KVM: PPC: Book3S HV: XIVE: Free escalation interrupts before disabling the VP

When a vCPU is brought down, the XIVE VP (Virtual Processor) is first
disabled and then the event notification queues are freed. When freeing
the queues, we check for possible escalation interrupts and free them
as well.

But when a XIVE VP is disabled, the underlying XIVE ENDs are also
disabled in OPAL. When an END (Event Notification Descriptor) is
disabled, its ESB pages (ESn and ESe) are disabled and loads return all
1s. This means that any load from the ESB page of an escalation
interrupt returns invalid values.
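
As a point of reference (an illustrative sketch, not kernel code; the
XIVE_ESB_VAL_* values come from the powerpc XIVE register definitions):
a genuine ESB load only ever carries the two PQ flag bits, so an all-1s
value from a disabled page is distinguishable from every valid PQ state.

  #include <stdbool.h>
  #include <stdint.h>

  /* PQ flag bits of an ESB load (powerpc XIVE register definitions). */
  #define XIVE_ESB_VAL_P 0x2
  #define XIVE_ESB_VAL_Q 0x1

  /* A real ESB load returns one of the four PQ states (00, 01, 10, 11).
   * A load from a disabled ESB page returns all 1s, so any bit set above
   * P/Q is one way to recognise such an invalid value. */
  static bool esb_load_from_disabled_page(uint64_t val)
  {
          return (val & ~(uint64_t)(XIVE_ESB_VAL_P | XIVE_ESB_VAL_Q)) != 0;
  }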

When an interrupt is freed, the shutdown handler computes a 'saved_p'
field from the value returned by a load in xive_do_source_set_mask().
This value is incorrect for escalation interrupts for the reason
described above.
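
For illustration, here is a simplified, standalone sketch of that
masking path (not the exact kernel code; the struct and load helper
stand in for the real xive_irq_data and ESB MMIO load): because an
all-1s load has the P bit set, 'saved_p' ends up recorded as true even
though no interrupt was actually in flight.

  #include <stdbool.h>
  #include <stdint.h>

  #define XIVE_ESB_VAL_P 0x2          /* P bit of the PQ state */

  struct esb_state {                  /* stand-in for struct xive_irq_data */
          bool saved_p;
  };

  /* Stand-in for the MMIO load from the "set PQ to 01" ESB page.  A
   * disabled END makes this load return all 1s instead of the old PQ. */
  static uint64_t esb_load_set_pq_01(bool end_disabled, uint64_t prev_pq)
  {
          return end_disabled ? ~0ull : prev_pq;
  }

  /* Simplified equivalent of the mask path of xive_do_source_set_mask(). */
  static void source_set_mask(struct esb_state *xd, bool end_disabled,
                              uint64_t prev_pq)
  {
          uint64_t val = esb_load_set_pq_01(end_disabled, prev_pq);

          /* All 1s has the P bit set: saved_p is wrongly recorded as true. */
          xd->saved_p = !!(val & XIVE_ESB_VAL_P);
  }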

This has no impact on Linux/KVM today because we do not make use of
'saved_p', but future changes will introduce a xive_get_irqchip_state()
handler. That handler will use the 'saved_p' field to report the state
of an interrupt, and with 'saved_p' being incorrect, a softlockup will
occur.
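
A rough sketch of the failure mode (hypothetical helper names; the real
handler and the code that waits on it live in the XIVE and genirq
layers): once the reported state is derived from a stale 'saved_p', a
loop waiting for the interrupt to become inactive never terminates,
which is what shows up as a softlockup.

  #include <stdbool.h>

  /* Hypothetical: how a future xive_get_irqchip_state() style handler
   * could report "active" by trusting saved_p. */
  static bool irq_reported_active(bool saved_p)
  {
          return saved_p;
  }

  /* Hypothetical caller, similar in spirit to code that waits for an
   * interrupt to drain before freeing it.  With saved_p stuck at true
   * (computed from an all-1s ESB load), this never exits. */
  static void wait_until_inactive(bool saved_p)
  {
          while (irq_reported_active(saved_p))
                  ;       /* cpu_relax() in real kernel code */
  }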

Fix the vCPU cleanup sequence by first freeing the escalation interrupts
(if any), then disabling the XIVE VP, and finally freeing the queues.

Fixes: 90c73795 ("KVM: PPC: Book3S HV: Add a new KVM device for the XIVE native exploitation mode")
Fixes: 5af50993 ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190806172538.5087-1-clg@kaod.org
Parent 609488bc
--- a/arch/powerpc/kvm/book3s_xive.c
+++ b/arch/powerpc/kvm/book3s_xive.c
@@ -1134,20 +1134,22 @@ void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
 	/* Mask the VP IPI */
 	xive_vm_esb_load(&xc->vp_ipi_data, XIVE_ESB_SET_PQ_01);
 
-	/* Disable the VP */
-	xive_native_disable_vp(xc->vp_id);
-
-	/* Free the queues & associated interrupts */
+	/* Free escalations */
 	for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
-		struct xive_q *q = &xc->queues[i];
-
-		/* Free the escalation irq */
 		if (xc->esc_virq[i]) {
 			free_irq(xc->esc_virq[i], vcpu);
 			irq_dispose_mapping(xc->esc_virq[i]);
 			kfree(xc->esc_virq_names[i]);
 		}
-		/* Free the queue */
+	}
+
+	/* Disable the VP */
+	xive_native_disable_vp(xc->vp_id);
+
+	/* Free the queues */
+	for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
+		struct xive_q *q = &xc->queues[i];
+
 		xive_native_disable_queue(xc->vp_id, q, i);
 		if (q->qpage) {
 			free_pages((unsigned long)q->qpage,
--- a/arch/powerpc/kvm/book3s_xive_native.c
+++ b/arch/powerpc/kvm/book3s_xive_native.c
@@ -67,10 +67,7 @@ void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu)
 	xc->valid = false;
 	kvmppc_xive_disable_vcpu_interrupts(vcpu);
 
-	/* Disable the VP */
-	xive_native_disable_vp(xc->vp_id);
-
-	/* Free the queues & associated interrupts */
+	/* Free escalations */
 	for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
 		/* Free the escalation irq */
 		if (xc->esc_virq[i]) {
@@ -79,8 +76,13 @@ void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu)
 			kfree(xc->esc_virq_names[i]);
 			xc->esc_virq[i] = 0;
 		}
+	}
+
+	/* Disable the VP */
+	xive_native_disable_vp(xc->vp_id);
 
-		/* Free the queue */
+	/* Free the queues */
+	for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
 		kvmppc_xive_native_cleanup_queue(vcpu, i);
 	}
 