    x86, ticketlock: Add slowpath logic · 96f853ea
    Committed by Jeremy Fitzhardinge
    Maintain a flag in the LSB of the ticket lock tail which indicates
    whether anyone is in the lock slowpath and may need kicking when
    the current holder unlocks.  The flag is set when the first locker
    enters the slowpath, and cleared when unlocking to an empty queue (i.e.,
    no contention).
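
    As a rough illustration of that layout, here is a minimal sketch with
    hypothetical names (not the kernel's actual definitions): the ticket
    counters advance in steps of two so the tail's low bit stays free for
    the flag.

        /* Sketch only: a ticket lock whose tail LSB carries a slowpath flag. */
        #include <stdint.h>

        typedef uint16_t ticket_t;

        #define TICKET_SLOWPATH_FLAG  ((ticket_t)1)   /* LSB of the tail     */
        #define TICKET_LOCK_INC       ((ticket_t)2)   /* step by 2, LSB free */

        typedef struct {
            union {
                uint32_t head_tail;                       /* whole lock word */
                struct { ticket_t head, tail; } tickets;  /* tail LSB = flag */
            } u;
        } ticketlock_sketch_t;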
    
    In the specific implementation of lock_spinning(), make sure to set
    the slowpath flag on the lock just before blocking.  We must do
    this before the last-chance pickup test to prevent a deadlock
    with the unlocker:
    
    Unlocker			Locker
    				test for lock pickup
    					-> fail
    unlock
    test slowpath
    	-> false
    				set slowpath flags
    				block
    
    Whereas this works in any ordering:
    
    Unlocker			Locker
    				set slowpath flags
    				test for lock pickup
    					-> fail
    				block
    unlock
    test slowpath
    	-> true, kick
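
    The locker-side ordering above can be sketched as follows, with
    hypothetical helper names and C11 atomics standing in for the kernel's
    primitives (the real lock_spinning() does considerably more
    bookkeeping):

        #include <stdatomic.h>
        #include <stdint.h>

        typedef uint16_t ticket_t;
        #define TICKET_SLOWPATH_FLAG ((ticket_t)1)

        /* Park this CPU until the unlocker kicks it; stub for illustration. */
        static void block_until_kicked(ticket_t want) { (void)want; }

        static void lock_spinning_sketch(_Atomic ticket_t *tail,
                                         _Atomic ticket_t *head, ticket_t want)
        {
            /* 1. Announce that we may go to sleep (seq_cst RMW, full barrier). */
            atomic_fetch_or(tail, TICKET_SLOWPATH_FLAG);

            /* 2. Last-chance pickup test: did we get the lock meanwhile? */
            if (atomic_load(head) == want)
                return;                  /* lock is ours, no need to block */

            /* 3. Only now is it safe to block: a concurrent unlocker is
             *    guaranteed to observe the flag set in step 1 and kick us. */
            block_until_kicked(want);
        }

    Because the flag is set before the pickup test, the failing
    interleaving in the first diagram cannot occur.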
    
    If the unlocker finds that the lock has the slowpath flag set but it is
    actually uncontended (i.e., head == tail, so nobody is waiting), then it
    clears the slowpath flag.
    
    The unlock code uses a locked add to update the head counter.  This also
    acts as a full memory barrier, so it is safe to subsequently read back
    the slowpath flag state, knowing that the updated lock is visible to the
    other CPUs.  If it were an unlocked add, the flag read could be satisfied
    while the head update was still sitting in the store buffer, i.e. before
    it was visible to the other CPUs; the unlocker could then miss a
    concurrently set slowpath flag, which could result in a deadlock.
    
    Unfortunately this means we need to do a locked instruction when
    unlocking with PV ticketlocks.  However, if PV ticketlocks are not
    enabled, then the old non-locked "add" is the only unlocking code.
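
    Putting the unlock side together, a sketch with hypothetical names (a
    seq_cst atomic add stands in for the locked add; the real code selects
    the plain or locked form at build time depending on whether PV
    ticketlocks are configured):

        #include <stdatomic.h>
        #include <stdint.h>

        typedef uint16_t ticket_t;
        #define TICKET_SLOWPATH_FLAG ((ticket_t)1)
        #define TICKET_LOCK_INC      ((ticket_t)2)

        /* Wake the CPU waiting on ticket 'next'; stub for illustration. */
        static void kick_next_waiter(ticket_t next) { (void)next; }

        static void ticket_unlock_sketch(_Atomic ticket_t *head,
                                         _Atomic ticket_t *tail)
        {
            /* Locked add: releases the lock and acts as a full memory
             * barrier, so the flag read below cannot pass the head update. */
            ticket_t new_head = atomic_fetch_add(head, TICKET_LOCK_INC)
                                + TICKET_LOCK_INC;

            ticket_t t = atomic_load(tail);
            if (t & TICKET_SLOWPATH_FLAG) {
                if ((ticket_t)(t & ~TICKET_SLOWPATH_FLAG) == new_head)
                    /* head == tail: nobody is waiting, clear the stale flag
                     * (simplified; the real code must avoid racing with a
                     * new locker here). */
                    atomic_fetch_and(tail, (ticket_t)~TICKET_SLOWPATH_FLAG);
                else
                    /* Someone is blocked in the slowpath: kick the next
                     * ticket holder. */
                    kick_next_waiter(new_head);
            }
        }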
    
    Note: this code relies on gcc making sure that unlikely() code is out of
    line of the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it
    doesn't, the generated code isn't too bad, but it's definitely suboptimal.
    
    Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
    version of this change, which has been folded in.
    Thanks to Stephan Diestelhorst for commenting on some code which relied
    on an inaccurate reading of the x86 memory ordering rules.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-11-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Stephan Diestelhorst <stephan.diestelhorst@amd.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>