Commit fea85437 authored by Alex Kogan, committed by Xie XiuQi

locking/qspinlock: Make arch_mcs_spin_unlock_contended more generic

hulk inclusion
category: feature
bugzilla: 13227
CVE: NA

-------------------------------------------------

The arch_mcs_spin_unlock_contended macro should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored is different from 1.
Signed-off-by: Alex Kogan <alex.kogan@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent 3b113191
@@ -14,9 +14,9 @@ do { \
 	wfe(); \
 } while (0) \
 
-#define arch_mcs_spin_unlock_contended(lock) \
+#define arch_mcs_spin_unlock_contended(lock, val) \
 do { \
-	smp_store_release(lock, 1); \
+	smp_store_release(lock, (val)); \
 	dsb_sev(); \
 } while (0)
@@ -41,8 +41,8 @@ do { \
  * operations in the critical section has been completed before
  * unlocking.
  */
-#define arch_mcs_spin_unlock_contended(l) \
-	smp_store_release((l), 1)
+#define arch_mcs_spin_unlock_contended(l, val) \
+	smp_store_release((l), (val))
 #endif
 
 /*
@@ -115,7 +115,7 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
 	}
 
 	/* Pass lock to next waiter. */
-	arch_mcs_spin_unlock_contended(&next->locked);
+	arch_mcs_spin_unlock_contended(&next->locked, 1);
 }
 
 #endif /* __LINUX_MCS_SPINLOCK_H */
@@ -543,7 +543,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (!next)
 		next = smp_cond_load_relaxed(&node->next, (VAL));
 
-	arch_mcs_spin_unlock_contended(&next->locked);
+	arch_mcs_spin_unlock_contended(&next->locked, 1);
 	pv_kick_node(lock, next);
 
 release: