    ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier() · 509eb76e
    Committed by Will Deacon
    __my_cpu_offset is non-volatile, since we want its value to be cached
    when we access several per-cpu variables in a row with preemption
    disabled. This means that we rely on preempt_{en,dis}able to hazard
    with the operation via the barrier() macro, so that we can't end up
    migrating CPUs without reloading the per-cpu offset.
    
    Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile
    asm block as a side-effect, and will happily re-order it before other
    memory clobbers (including those in preempt_disable()) and cache the
    value. This has been observed to break the cmpxchg logic in the slub
    allocator, leading to livelock in kmem_cache_alloc in mainline kernels.
    
    This patch adds a dummy memory input operand to __my_cpu_offset,
    forcing it to be ordered with respect to the barrier() macro.
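    
    The technique can be sketched in portable GCC inline asm. This is a
    minimal illustration of the dummy-input idea, not the actual ARM code
    (which reads the TPIDRPRW register); the names cpu_offset_store and
    my_cpu_offset here are hypothetical stand-ins:
    
    ```c
    #include <assert.h>
    
    /* Stand-in for the per-cpu offset location; name is illustrative. */
    static unsigned long cpu_offset_store = 42;
    
    /* Compiler barrier, as emitted by preempt_{en,dis}able(). */
    #define barrier() __asm__ __volatile__("" ::: "memory")
    
    /*
     * A non-volatile asm that yields the offset. The "m" input operand
     * is the point of the fix: it makes the asm depend on memory, so the
     * compiler must order it against barrier()'s "memory" clobber rather
     * than hoisting it and caching the result. The "0" matching
     * constraint simply routes the loaded value through the empty asm.
     */
    static inline unsigned long my_cpu_offset(void)
    {
            unsigned long off;
    
            __asm__("" : "=r" (off)
                       : "0" (cpu_offset_store), "m" (cpu_offset_store));
            return off;
    }
    ```
    
    With no input operands at all, GCC would treat two calls with no
    intervening volatile asm as common subexpressions and reuse the first
    value, even across barrier(); the memory input re-establishes the
    hazard while still allowing caching between barriers.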
    
    Cc: <stable@vger.kernel.org>
    Cc: Rob Herring <rob.herring@calxeda.com>
    Reviewed-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>