Commit 30619c89 authored by Srikar Dronamraju, committed by Ingo Molnar

sched/numa: Update the scan period without holding the numa_group lock

The metrics for updating scan periods are local or task specific.
Currently this update happens under the numa_group lock, which seems
unnecessary. Hence move this update outside the lock.

Running SPECjbb2005 on a 4 node machine and comparing bops/JVM
JVMS  LAST_PATCH  WITH_PATCH  %CHANGE
16    25355.9     25645.4     1.141
1     72812       72142       -0.92
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-15-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent 2d4056fa
@@ -2170,8 +2170,6 @@ static void task_numa_placement(struct task_struct *p)
 		}
 	}
 
-	update_task_scan_period(p, fault_types[0], fault_types[1]);
-
 	if (p->numa_group) {
 		numa_group_count_active_nodes(p->numa_group);
 		spin_unlock_irq(group_lock);
@@ -2186,6 +2184,8 @@ static void task_numa_placement(struct task_struct *p)
 		if (task_node(p) != p->numa_preferred_nid)
 			numa_migrate_preferred(p);
 	}
+
+	update_task_scan_period(p, fault_types[0], fault_types[1]);
 }
 
 static inline int get_numa_group(struct numa_group *grp)
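
The restructuring is easier to see outside the diff. The sketch below is a simplified, illustrative rendering of the pattern in plain C with pthreads; the struct names, fields, and update rule are invented for illustration and are not the kernel's actual code. The point it demonstrates is the one the commit message makes: an update that reads only task-local data can run after the group lock is dropped, shortening the critical section.

/*
 * Simplified sketch of the pattern in the patch above, not kernel code.
 * All names and the update rule are illustrative assumptions.
 */
#include <pthread.h>

struct numa_grp {
	pthread_spinlock_t lock;
	int active_nodes;		/* group-shared state, needs the lock */
};

struct task {
	struct numa_grp *group;		/* may be NULL */
	unsigned long faults_local;	/* task-local fault counters */
	unsigned long faults_remote;
	unsigned int scan_period;
};

/* Reads and writes only task-local fields, so no lock is required. */
static void update_scan_period(struct task *t)
{
	t->scan_period = (t->faults_local >= t->faults_remote) ? 2000 : 1000;
}

static void placement(struct task *t)
{
	if (t->group) {
		pthread_spin_lock(&t->group->lock);
		t->group->active_nodes++;	/* group-wide work under the lock */
		pthread_spin_unlock(&t->group->lock);
	}

	/* Moved after the critical section, as in the patch above. */
	update_scan_period(t);
}

Because update_scan_period() touches only fields of the current task, dropping the lock first does not change the result; it only reduces the work done while other tasks in the group may be contending for the same lock.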