commit a68db763 authored by Peter Chubb, committed by Tony Luck

[IA64] Fix another IA64 preemption problem

There's another problem shown up by Ingo's recent patch to make
smp_processor_id() complain if it's called with preemption enabled.
local_finish_flush_tlb_mm() calls activate_context() in a situation
where it could be rescheduled to another processor.  This patch
disables preemption around the call.
Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Parent 819c67e6
@@ -231,13 +231,16 @@ smp_flush_tlb_all (void)
 void
 smp_flush_tlb_mm (struct mm_struct *mm)
 {
+	preempt_disable();
 	/* this happens for the common case of a single-threaded fork(): */
 	if (likely(mm == current->active_mm && atomic_read(&mm->mm_users) == 1))
 	{
 		local_finish_flush_tlb_mm(mm);
+		preempt_enable();
 		return;
 	}
+	preempt_enable();

 	/*
 	 * We could optimize this further by using mm->cpu_vm_mask to track which CPUs
 	 * have been running in the address space. It's not clear that this is worth the
......
@@ -132,6 +132,9 @@ reload_context (mm_context_t context)
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
 }

+/*
+ * Must be called with preemption off
+ */
 static inline void
 activate_context (struct mm_struct *mm)
 {
......