commit 3de218ff authored by Juergen Gross

xen/events: reset active flag for lateeoi events later

In order to avoid a race condition for user events when changing
cpu affinity reset the active flag only when EOI-ing the event.

This is working fine as all user events are lateeoi events. Note that
lateeoi_ack_mask_dynirq() is not modified as there is no explicit call
to xen_irq_lateeoi() expected later.

Cc: stable@vger.kernel.org
Reported-by: Julien Grall <julien@xen.org>
Fixes: b6622798 ("xen/events: avoid handling the same event on two cpus at the same time")
Tested-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210623130913.9405-1-jgross@suse.com
parent 107866a8
@@ -642,6 +642,9 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
+
+	/* is_active hasn't been reset yet, do it now. */
+	smp_store_release(&info->is_active, 0);
 	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
@@ -811,6 +814,7 @@ static void xen_evtchn_close(evtchn_port_t port)
 		BUG();
 	}
 
+/* Not called for lateeoi events. */
 static void event_handler_exit(struct irq_info *info)
 {
 	smp_store_release(&info->is_active, 0);
@@ -1883,7 +1887,12 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
-		event_handler_exit(info);
+		/*
+		 * Don't call event_handler_exit().
+		 * Need to keep is_active non-zero in order to ignore re-raised
+		 * events after cpu affinity changes while a lateeoi is pending.
+		 */
+		clear_evtchn(evtchn);
 	}
 }