Commit 9fb1b36c authored by Paul Mackerras, committed by Benjamin Herrenschmidt

powerpc: Make sure IPI handlers see data written by IPI senders

We have been observing hangs, both of KVM guest vcpu tasks and more
generally, where a process that is woken doesn't properly wake up and
continue to run, but instead sticks in TASK_WAKING state.  This
happens because the update of rq->wake_list in ttwu_queue_remote()
is not ordered with the update of ipi_message in
smp_muxed_ipi_message_pass(), and the reading of rq->wake_list in
scheduler_ipi() is not ordered with the reading of ipi_message in
smp_ipi_demux().  Thus it is possible for the IPI receiver not to see
the updated rq->wake_list and therefore conclude that there is nothing
for it to do.
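
To make the window concrete, here is a minimal illustrative sketch of the two unordered pairs described above. It is not the kernel code: struct fake_rq, wake_list_nonempty and the message[] array are invented stand-ins for rq->wake_list and info->messages.

/* Illustrative sketch only; names are invented, see the note above. */
struct fake_rq { int wake_list_nonempty; };

static struct fake_rq rq;
static unsigned char message[4];	/* stands in for info->messages */

/* Sender CPU: ttwu_queue_remote() then smp_muxed_ipi_message_pass() */
static void sender(void)
{
	rq.wake_list_nonempty = 1;	/* store A: publish the woken task */
	message[1] = 1;			/* store B: nothing orders A before B */
	/* smp_ops->cause_ipi() would be called here */
}

/* Receiver CPU: smp_ipi_demux() then scheduler_ipi() */
static void receiver(void)
{
	if (message[1]) {		/* load B: may see the message ... */
		message[1] = 0;
		if (!rq.wake_list_nonempty)	/* load A: ... yet read a stale 0 */
			return;		/* the woken task is left in TASK_WAKING */
	}
}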

In order to make sure that anything done before smp_send_reschedule()
is ordered before anything done in the resulting call to scheduler_ipi(),
this adds barriers in smp_muxed_ipi_message_pass() and smp_ipi_demux().
The barrier in smp_muxed_ipi_message_pass() is a full barrier to ensure that
there is a full ordering between the smp_send_reschedule() caller and
scheduler_ipi().  In smp_ipi_demux(), we use xchg() rather than
xchg_local() because xchg() includes release and acquire barriers.
Using xchg() rather than xchg_local() makes sense given that
ipi_message is not just accessed locally.
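
As a rough userspace analogue (C11 atomics rather than the kernel primitives; the variable names are invented), the fix is the classic message-passing pattern: a full fence on the sender between publishing the work and setting the message, and an acquiring exchange on the receiver between taking the message and looking for the work.

#include <stdatomic.h>

static int wake_list_nonempty;			/* stands in for rq->wake_list  */
static atomic_uchar message;			/* stands in for info->messages */

static void sender(void)			/* ~ smp_muxed_ipi_message_pass() */
{
	wake_list_nonempty = 1;				/* publish the work   */
	atomic_thread_fence(memory_order_seq_cst);	/* ~ the new smp_mb() */
	atomic_store_explicit(&message, 1, memory_order_relaxed);
	/* cause_ipi() follows; it must itself contain a full barrier */
}

static void receiver(void)			/* ~ smp_ipi_demux() */
{
	/* ~ xchg(): the acquire half orders the later wake_list read */
	if (atomic_exchange(&message, 0))
		(void)wake_list_nonempty;	/* guaranteed to observe 1 */
}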

This moves the barrier between setting the message and calling the
cause_ipi() function into the individual cause_ipi implementations.
Most of them -- those that used outb, out_8 or similar -- already had
a full barrier because out_8 etc. include a sync before the MMIO
store.  This adds an explicit barrier in the two remaining cases.

These changes made no measurable difference to the speed of IPIs as
measured using a simple ping-pong latency test across two CPUs on
different cores of a POWER7 machine.

The analysis of why processes were not waking up properly is due to
Milton Miller.

Cc: stable@vger.kernel.org # v3.0+
Reported-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Parent 71433285
@@ -28,6 +28,8 @@ void doorbell_setup_this_cpu(void)
 void doorbell_cause_ipi(int cpu, unsigned long data)
 {
+	/* Order previous accesses vs. msgsnd, which is treated as a store */
+	mb();
 	ppc_msgsnd(PPC_DBELL, 0, data);
 }
...
@@ -198,8 +198,15 @@ void smp_muxed_ipi_message_pass(int cpu, int msg)
 	struct cpu_messages *info = &per_cpu(ipi_message, cpu);
 	char *message = (char *)&info->messages;
 
+	/*
+	 * Order previous accesses before accesses in the IPI handler.
+	 */
+	smp_mb();
 	message[msg] = 1;
-	mb();
+	/*
+	 * cause_ipi functions are required to include a full barrier
+	 * before doing whatever causes the IPI.
+	 */
 	smp_ops->cause_ipi(cpu, info->data);
 }
@@ -211,7 +218,7 @@ irqreturn_t smp_ipi_demux(void)
 	mb();	/* order any irq clear */
 
 	do {
-		all = xchg_local(&info->messages, 0);
+		all = xchg(&info->messages, 0);
 #ifdef __BIG_ENDIAN
 		if (all & (1 << (24 - 8 * PPC_MSG_CALL_FUNCTION)))
...
@@ -65,7 +65,11 @@ static inline void icp_hv_set_xirr(unsigned int value)
 static inline void icp_hv_set_qirr(int n_cpu , u8 value)
 {
 	int hw_cpu = get_hard_smp_processor_id(n_cpu);
-	long rc = plpar_hcall_norets(H_IPI, hw_cpu, value);
+	long rc;
+
+	/* Make sure all previous accesses are ordered before IPI sending */
+	mb();
+	rc = plpar_hcall_norets(H_IPI, hw_cpu, value);
 	if (rc != H_SUCCESS) {
 		pr_err("%s: bad return code qirr cpu=%d hw_cpu=%d mfrr=0x%x "
 		       "returned %ld\n", __func__, n_cpu, hw_cpu, value, rc);
...