Commit f2ae6794 authored by Sebastian Andrzej Siewior, committed by Thomas Gleixner

alpha: Remove custom dec_and_lock() implementation

Alpha provides a custom implementation of dec_and_lock(). The function
is split into two parts:
- atomic_add_unless() + return 0 (fast path in assembly)
- remaining part including locking (slow path in C)
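
For reference, the generic implementation in lib/dec_and_lock.c follows
the same split; a rough sketch of its structure (not a verbatim copy):

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		/* fast path: subtract 1 unless the counter is 1 (would hit 0) */
		if (atomic_add_unless(atomic, -1, 1))
			return 0;

		/* slow path: take the lock, then decrement and test */
		spin_lock(lock);
		if (atomic_dec_and_test(atomic))
			return 1;
		spin_unlock(lock);
		return 0;
	}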

Comparing the alpha implementation with the generic implementation
compiled by gcc, it looks like the fast path is optimized by avoiding a
stack frame (and the GP reload) and the register stores; those only
happen in the slow path.
After marking the slow path (atomic_dec_and_lock_1()) as "noinline" and
writing the fast path in C (the atomic_add_unless(atomic, -1, 1) part),
I noticed differences in the resulting assembly:
- the GP is still reloaded
- atomic_add_unless() adds more memory barriers compared to the custom
  assembly
- the custom assembly does "load, sub, beq" while atomic_add_unless()
  does "load, cmpeq, add, bne". This is fine: the custom assembly
  compares against zero after the subtraction, while the generic code
  compares against 1 before it (see the sketch below).
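
To illustrate the last point, a plain C sketch of the two fast-path
checks (made-up function names; the ll/sc retry loop and barriers are
omitted; return value 1 means the fast path handled the decrement, 0
means the locked slow path must run):

	static int fastpath_custom_asm(int *counter)
	{
		int val = *counter - 1;		/* load, sub */
		if (val == 0)			/* beq: zero _after_ subtracting */
			return 0;
		*counter = val;			/* stl_c in the real code */
		return 1;
	}

	static int fastpath_add_unless(int *counter)
	{
		int old = *counter;		/* load */
		if (old == 1)			/* cmpeq: against 1 _before_ adding */
			return 0;
		*counter = old - 1;		/* add -1, store (bne on failed sc) */
		return 1;
	}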

I'm not sure whether avoiding the stack frame (and the GP reload) gains
much in terms of performance. Regarding the different barriers, Peter
Zijlstra says:

|refcount decrement needs to be a RELEASE operation, such that all the
|load/stores to the object happen before we decrement the refcount.
|
|Otherwise things like:
|
|        obj->foo = 5;
|        refcnt_dec(&obj->ref);
|
|can be re-ordered, which then allows fun scenarios like:
|
|        CPU0                            CPU1
|
|        refcnt_dec(&obj->ref);
|                                        if (dec_and_test(&obj->ref))
|                                                free(obj);
|        obj->foo = 5; // oops UaF
|
|
|This means (for alpha) that there should be a memory barrier _before_
|the decrement, however the dec_and_lock asm thing only has one _after_,
|which, per the above, is too late.
|
|The generic version using add_unless will result in memory barrier
|before and after (because that is the rule for atomic ops with a return
|value) which is strictly too many barriers for the refcount story, but
|who knows what other ordering requirements code has.
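
Put differently, the required ordering looks like this (sketch only,
not part of the patch; smp_mb() stands in for whatever release
ordering is actually used):

	obj->foo = 5;		/* last store to the object */
	smp_mb();		/* needed _before_ the decrement ... */
	atomic_dec(&obj->ref);	/* ... the custom asm only had an mb after */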

Remove the custom alpha implementation of dec_and_lock(). If this turns
out to be a performance issue, the fast path could still be inlined.
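
Such an inlined fast path could look like this (hypothetical sketch;
__atomic_dec_and_lock_slow() is a made-up name for an out-of-line slow
path):

	static inline int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		if (atomic_add_unless(atomic, -1, 1))
			return 0;
		return __atomic_dec_and_lock_slow(atomic, lock);
	}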
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
Link: https://lkml.kernel.org/r/20180606115918.GG12198@hirez.programming.kicks-ass.net
Link: https://lkml.kernel.org/r/20180612161621.22645-2-bigeasy@linutronix.de
Parent d82991a8
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -555,11 +555,6 @@ config SMP
 
 	  If you don't know what to do here, say N.
 
-config HAVE_DEC_LOCK
-	bool
-	depends on SMP
-	default y
-
 config NR_CPUS
 	int "Maximum number of CPUs (2-32)"
 	range 2 32
--- a/arch/alpha/lib/Makefile
+++ b/arch/alpha/lib/Makefile
@@ -35,8 +35,6 @@ lib-y = __divqu.o __remqu.o __divlu.o __remlu.o \
 	callback_srm.o srm_puts.o srm_printk.o \
 	fls.o
 
-lib-$(CONFIG_SMP) += dec_and_lock.o
-
 # The division routines are built from single source, with different defines.
 AFLAGS___divqu.o = -DDIV
 AFLAGS___remqu.o = -DREM
--- a/arch/alpha/lib/dec_and_lock.c
+++ /dev/null
@@ -1,44 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * arch/alpha/lib/dec_and_lock.c
- *
- * ll/sc version of atomic_dec_and_lock()
- *
- */
-
-#include <linux/spinlock.h>
-#include <linux/atomic.h>
-#include <linux/export.h>
-
-  asm (".text					\n\
-	.global _atomic_dec_and_lock		\n\
-	.ent _atomic_dec_and_lock		\n\
-	.align	4				\n\
-_atomic_dec_and_lock:				\n\
-	.prologue 0				\n\
-1:	ldl_l	$1, 0($16)			\n\
-	subl	$1, 1, $1			\n\
-	beq	$1, 2f				\n\
-	stl_c	$1, 0($16)			\n\
-	beq	$1, 4f				\n\
-	mb					\n\
-	clr	$0				\n\
-	ret					\n\
-2:	br	$29, 3f				\n\
-3:	ldgp	$29, 0($29)			\n\
-	br	$atomic_dec_and_lock_1..ng	\n\
-	.subsection 2				\n\
-4:	br	1b				\n\
-	.previous				\n\
-	.end _atomic_dec_and_lock");
-
-static int __used atomic_dec_and_lock_1(atomic_t *atomic, spinlock_t *lock)
-{
-	/* Slow path */
-	spin_lock(lock);
-	if (atomic_dec_and_test(atomic))
-		return 1;
-	spin_unlock(lock);
-	return 0;
-}
-EXPORT_SYMBOL(_atomic_dec_and_lock);
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -23,7 +23,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 sha1.o chacha20.o irq_regs.o argv_split.o \
 	 flex_proportions.o ratelimit.o show_mem.o \
 	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
-	 earlycpio.o seq_buf.o siphash.o \
+	 earlycpio.o seq_buf.o siphash.o dec_and_lock.o \
 	 nmi_backtrace.o nodemask.o win_minmax.o
 
 lib-$(CONFIG_PRINTK) += dump_stack.o
@@ -98,10 +98,6 @@ obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o
 obj-$(CONFIG_DEBUG_LIST) += list_debug.o
 obj-$(CONFIG_DEBUG_OBJECTS) += debugobjects.o
 
-ifneq ($(CONFIG_HAVE_DEC_LOCK),y)
-lib-y += dec_and_lock.o
-endif
-
 obj-$(CONFIG_BITREVERSE) += bitrev.o
 obj-$(CONFIG_RATIONAL) += rational.o
 obj-$(CONFIG_CRC_CCITT) += crc-ccitt.o