locking/qspinlock: Introduce CNA into the slow path of qspinlock
hulk inclusion
category: feature
bugzilla: 13227
CVE: NA

-------------------------------------------------

In CNA, spinning threads are organized in two queues, a main queue for
threads running on the same node as the current lock holder, and a
secondary queue for threads running on other nodes. At unlock time, the
lock holder scans the main queue looking for a thread running on the
same node. If one is found (call it thread T), all threads in the main
queue between the current lock holder and T are moved to the end of the
secondary queue, and the lock is passed to T. If no such T is found, the
lock is passed to the first thread in the secondary queue. Finally, if
the secondary queue is empty, the lock is passed to the next thread in
the main queue. For more details, see https://arxiv.org/abs/1810.05600.

Note that this variant of CNA may introduce starvation by continuously
passing the lock to threads running on the same node. This issue will be
addressed later in the series.

Enabling CNA is controlled via a new configuration option
(NUMA_AWARE_SPINLOCKS). The CNA variant is patched in at boot time only
if we run on a multi-node machine and the new config is enabled. For the
time being, the patching requires CONFIG_PARAVIRT_SPINLOCKS to be
enabled as well. However, this should be resolved once static_call() is
available.

Signed-off-by: Alex Kogan <alex.kogan@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
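The sketch below is a minimal, user-space illustration of the
unlock-time scan described above, not an excerpt from the patch. The
names (struct cna_waiter, struct cna_queue, cna_pick_successor(),
cna_splice_tail()) are assumptions chosen for clarity; the actual
kernel/locking/qspinlock_cna.h operates on MCS nodes and encoded tails
inside the qspinlock slow path and handles details (queue re-linking,
atomicity) that are omitted here.

/*
 * Minimal sketch of the CNA unlock-time scan (illustrative only).
 * All identifiers here are hypothetical and do not match the patch.
 */
#include <stdio.h>
#include <stddef.h>

struct cna_waiter {
	int numa_node;			/* NUMA node this waiter runs on */
	struct cna_waiter *next;	/* next waiter in its queue */
};

/* Secondary queue: waiters running on nodes other than the holder's. */
struct cna_queue {
	struct cna_waiter *head;
	struct cna_waiter *tail;
};

/* Append the chain first..last to the end of the secondary queue. */
static void cna_splice_tail(struct cna_queue *q,
			    struct cna_waiter *first, struct cna_waiter *last)
{
	last->next = NULL;
	if (q->tail)
		q->tail->next = first;
	else
		q->head = first;
	q->tail = last;
}

/*
 * Conceptually called at unlock time.  @succ is the lock holder's
 * successor in the main queue, @my_node is the holder's NUMA node.
 * Returns the waiter that should receive the lock next, moving any
 * skipped remote waiters to the end of the secondary queue.  Re-linking
 * of the remaining queues after the handoff is omitted in this sketch.
 */
static struct cna_waiter *cna_pick_successor(struct cna_waiter *succ,
					     struct cna_queue *secondary,
					     int my_node)
{
	struct cna_waiter *cur = succ, *prev = NULL;

	/* Scan the main queue for the first waiter on the holder's node. */
	while (cur && cur->numa_node != my_node) {
		prev = cur;
		cur = cur->next;
	}

	if (cur) {
		/* Found T: everything skipped goes to the secondary queue. */
		if (prev)
			cna_splice_tail(secondary, succ, prev);
		return cur;
	}

	/* No same-node waiter: prefer the head of the secondary queue. */
	if (secondary->head) {
		struct cna_waiter *t = secondary->head;

		secondary->head = t->next;
		if (!secondary->head)
			secondary->tail = NULL;
		return t;
	}

	/* Secondary queue empty: plain FIFO handoff to the next waiter. */
	return succ;
}

int main(void)
{
	/* Main queue holds waiters on nodes 1, 1, 0; holder runs on node 0. */
	struct cna_waiter w2 = { .numa_node = 0, .next = NULL };
	struct cna_waiter w1 = { .numa_node = 1, .next = &w2 };
	struct cna_waiter w0 = { .numa_node = 1, .next = &w1 };
	struct cna_queue secondary = { NULL, NULL };

	struct cna_waiter *next = cna_pick_successor(&w0, &secondary, 0);

	printf("lock passed to waiter on node %d, %s waiter(s) deferred\n",
	       next->numa_node, secondary.head ? "remote" : "no");
	return 0;
}

In this example the two node-1 waiters ahead of the node-0 waiter are
spliced onto the secondary queue and the lock is handed to the node-0
waiter, which is the behaviour (and the source of the starvation risk
noted above) that the later patches in the series bound.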
New file added by this patch: kernel/locking/qspinlock_cna.h (mode 100644)