Commit 07df8418, authored by Kirill Tkhai, committed by David S. Miller

sparc64: Do not save/restore interrupts in get_new_mmu_context()

get_new_mmu_context() is always called with interrupts disabled.
So this micro-optimization is possible.

(Also fix the comment above switch_mm(), which is called both with and
without interrupts disabled.)
Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 598ec971
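The change rests on the difference between the two locking APIs: spin_lock_irqsave() saves the caller's interrupt state and disables interrupts before taking the lock, and spin_unlock_irqrestore() restores that state afterwards, whereas spin_lock()/spin_unlock() leave the interrupt state untouched. If every path into a function already runs with interrupts disabled, saving and restoring the flags is pure overhead. A minimal sketch of the pattern (example_lock and the two helper functions are illustrative only, not taken from this patch):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

/* Callable from any context: must save and restore the IRQ state itself. */
static void take_lock_any_context(void)
{
	unsigned long flags;

	spin_lock_irqsave(&example_lock, flags);
	/* ... critical section ... */
	spin_unlock_irqrestore(&example_lock, flags);
}

/* All callers guarantee interrupts are already disabled (as is the case
 * for get_new_mmu_context()), so the flags save/restore can be dropped. */
static void take_lock_irqs_off(void)
{
	spin_lock(&example_lock);
	/* ... critical section ... */
	spin_unlock(&example_lock);
}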
@@ -68,7 +68,7 @@ extern void smp_tsb_sync(struct mm_struct *mm);
 extern void __flush_tlb_mm(unsigned long, unsigned long);
-/* Switch the current MM context. Interrupts are disabled. */
+/* Switch the current MM context. */
 static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, struct task_struct *tsk)
 {
 	unsigned long ctx_valid, flags;
@@ -681,10 +681,9 @@ void get_new_mmu_context(struct mm_struct *mm)
 {
 	unsigned long ctx, new_ctx;
 	unsigned long orig_pgsz_bits;
-	unsigned long flags;
 	int new_version;
-	spin_lock_irqsave(&ctx_alloc_lock, flags);
+	spin_lock(&ctx_alloc_lock);
 	orig_pgsz_bits = (mm->context.sparc64_ctx_val & CTX_PGSZ_MASK);
 	ctx = (tlb_context_cache + 1) & CTX_NR_MASK;
 	new_ctx = find_next_zero_bit(mmu_context_bmap, 1 << CTX_NR_BITS, ctx);
@@ -720,7 +719,7 @@ void get_new_mmu_context(struct mm_struct *mm)
 out:
 	tlb_context_cache = new_ctx;
 	mm->context.sparc64_ctx_val = new_ctx | orig_pgsz_bits;
-	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
+	spin_unlock(&ctx_alloc_lock);
 	if (unlikely(new_version))
 		smp_new_mmu_context_version();
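For context, here is a condensed sketch of how the precondition is established: switch_mm() (first hunk above) takes mm->context.lock with spin_lock_irqsave() before it calls get_new_mmu_context(), so interrupts are already off by the time ctx_alloc_lock is acquired. The body below is paraphrased and heavily abbreviated (the TSB/TLB reload work is omitted); it is not verbatim kernel source:

/* Paraphrased, abbreviated view of switch_mm(); not verbatim. */
static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm,
			     struct task_struct *tsk)
{
	unsigned long ctx_valid, flags;

	spin_lock_irqsave(&mm->context.lock, flags);	/* IRQs now disabled */
	ctx_valid = CTX_VALID(mm->context);
	if (!ctx_valid)
		get_new_mmu_context(mm);	/* entered with IRQs off, so the
						 * plain spin_lock() on
						 * ctx_alloc_lock inside it is
						 * sufficient */

	/* ... reload the MMU context, resynchronize the TSB ... */

	spin_unlock_irqrestore(&mm->context.lock, flags);
}

Dropping the irqsave/irqrestore pair in get_new_mmu_context() therefore changes no locking guarantees; it only removes a redundant save and restore of the interrupt state.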