    ARM: mm: avoid taking ASID spinlock on fastpath · 4b883160
    Committed by Will Deacon
    When scheduling a new mm, we take a spinlock so that we can:
    
      1. Safely allocate a new ASID, if required
      2. Update our active_asids field without worrying about parallel
         updates to reserved_asids
      3. Ensure that we flush our local TLB, if required
    
    However, this has the nasty effect of serialising context switches across
    all CPUs in the system. The usual (fast) case is where the next mm has
    a valid ASID for the current generation. In such a scenario, we can
    avoid taking the lock and instead use atomic64_xchg to update the
    active_asids variable for the current CPU. If a rollover occurs on
    another CPU (which does take the lock), the copying of each active_asids
    entry into reserved_asids uses another atomic64_xchg to replace the
    entry with 0. The fast path can then detect the zeroed value and fall
    back to spinning on the lock.
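
    The scheme above can be sketched in userspace C11 atomics. This is a
    hypothetical, heavily simplified model, not the kernel code: a single
    active_asids slot stands in for the per-CPU array, a pthread mutex stands
    in for the raw spinlock, and rollover/TLB-flush handling is omitted. The
    names check_and_switch_context and new_context mirror the commit's intent
    but the bodies here are illustrative only.

    ```c
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <pthread.h>

    #define ASID_BITS       8
    #define ASID_MASK       ((1ULL << ASID_BITS) - 1)
    #define GENERATION_STEP (1ULL << ASID_BITS)

    /* Stand-in for the cpu_asid_lock spinlock. */
    static pthread_mutex_t cpu_asid_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Upper bits of an ASID encode the generation. */
    static atomic_uint_fast64_t asid_generation = GENERATION_STEP;

    /* One slot here; the kernel keeps one active_asids per CPU. */
    static atomic_uint_fast64_t active_asid;

    static uint64_t next_asid = 1;

    /* Slow path: allocate a fresh ASID under the lock (rollover omitted). */
    static uint64_t new_context(void)
    {
        return atomic_load(&asid_generation) | next_asid++;
    }

    static uint64_t check_and_switch_context(uint64_t mm_asid)
    {
        uint64_t generation = atomic_load(&asid_generation);

        if (mm_asid != 0 && (mm_asid & ~ASID_MASK) == generation) {
            /* Fast path: publish our ASID with an atomic exchange.
             * If a rollover on another CPU zeroed active_asid while
             * copying it into reserved_asids, the xchg returns 0 and
             * we must fall back to the lock below. */
            if (atomic_exchange(&active_asid, mm_asid) != 0)
                return mm_asid;   /* no lock taken */
        }

        /* Slow path: take the lock and (re)allocate if needed. */
        pthread_mutex_lock(&cpu_asid_lock);
        if ((mm_asid & ~ASID_MASK) != atomic_load(&asid_generation))
            mm_asid = new_context();
        atomic_store(&active_asid, mm_asid);
        pthread_mutex_unlock(&cpu_asid_lock);
        return mm_asid;
    }

    int main(void)
    {
        uint64_t a = check_and_switch_context(0);  /* new mm: slow path  */
        uint64_t b = check_and_switch_context(a);  /* valid ASID: fast path */
        printf("%llu %llu\n", (unsigned long long)a, (unsigned long long)b);
        return 0;
    }
    ```

    The second call hits the fast path because its ASID carries the current
    generation and the xchg sees a non-zero previous value, so no lock is
    taken; a zeroed slot would instead send it through the locked slow path.
    
    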
    Tested-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>