- 21 April 2012, 3 commits
-
Committed by H. Peter Anvin

Add _ASM_EXTABLE_EX() to generate the special extable entries that are associated with uaccess_err. This allows us to change the protocol associated with these special entries.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: David Daney <david.daney@cavium.com>
Link: http://lkml.kernel.org/r/CA%2B55aFyijf43qSu3N9nWHEBwaGbb7T2Oq9A=9EyR=Jtyqfq_cQ@mail.gmail.com
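The new entry protocol is not spelled out in this log, so the following is only a rough illustration with invented names (MY_ prefix) and a hypothetical tag bit, 64-bit directives only: the extended entries land in the same __ex_table section as ordinary ones, but carry a marker the fault handler can test before applying the uaccess_err handling.

    /*
     * Illustration only, not the kernel's encoding: ordinary entries store
     * (fault address, fixup address); the _EX variant additionally tags its
     * entry, here by setting a hypothetical low bit on the fixup address.
     */
    #define MY_ASM_EXTABLE(from, to)                        \
            " .pushsection __ex_table,\"a\"\n"              \
            " .balign 8\n"                                  \
            " .quad " #from ", " #to "\n"                   \
            " .popsection\n"

    #define MY_ASM_EXTABLE_EX(from, to)                     \
            " .pushsection __ex_table,\"a\"\n"              \
            " .balign 8\n"                                  \
            " .quad " #from ", " #to " + 1\n"               \
            " .popsection\n"

    /* The fixup code can then test the tag bit first and, when it is set,
     * flag the fault for the uaccess_err machinery instead of just resuming
     * at the plain fixup address. */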
-
Committed by H. Peter Anvin

Nothing should use them anymore; only _ASM_EXTABLE() should ever be used.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: David Daney <david.daney@cavium.com>
Link: http://lkml.kernel.org/r/CA%2B55aFyijf43qSu3N9nWHEBwaGbb7T2Oq9A=9EyR=Jtyqfq_cQ@mail.gmail.com
-
Committed by H. Peter Anvin

Instead of using .section ... .previous, use .pushsection ... .popsection; this is (hopefully) a bit more robust, especially in complex assembly code.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: David Daney <david.daney@cavium.com>
Link: http://lkml.kernel.org/r/CA%2B55aFyijf43qSu3N9nWHEBwaGbb7T2Oq9A=9EyR=Jtyqfq_cQ@mail.gmail.com
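As a minimal, self-contained illustration (the section and macro names are invented): the push/pop pair records the section it interrupted and restores exactly that one, which is what makes it more robust than .section ... .previous when such macros end up expanded inside other section-switching code.

    /* Emit a value into a side section, then return to whatever section the
     * surrounding code was assembling into. */
    #define EMIT_ASIDE(val)                                 \
            asm(" .pushsection .my_notes,\"a\"\n"           \
                " .long " #val "\n"                         \
                " .popsection\n")

    void example(void)
    {
            EMIT_ASIDE(42);     /* back in .text here, automatically */
    }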
-
- 21 July 2011, 2 commits
-
Committed by Jan Beulich

With the write lock path simply subtracting RW_LOCK_BIAS there is, on large systems, the theoretical possibility of overflowing the 32-bit value that was used so far (namely if 128 or more CPUs manage to do the subtraction, but don't get to do the inverse addition in the failure path quickly enough).

A first measure is to modify RW_LOCK_BIAS itself - with the new value chosen, it is good for up to 2048 CPUs, each allowed to nest over 2048 times on the read path, without causing an issue. Quite possibly it would even be sufficient to adjust the bias a little further, assuming that allowing for significantly less nesting would suffice.

However, as the original value chosen allowed for even more nesting levels, to support more than 2048 CPUs (possible currently only for 64-bit kernels) the lock itself gets widened to 64 bits.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/4E258E0D020000780004E3F0@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
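To make the 128-CPU figure concrete, here is a small user-space simulation of that scenario; the bias value 0x01000000 is an assumption (the classical 32-bit x86 value, for which 128 * 0x01000000 is exactly 2^31):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            const uint32_t RW_LOCK_BIAS = 0x01000000;   /* assumed old value */
            uint32_t lock = RW_LOCK_BIAS;               /* unlocked state */

            /* Each CPU's write path does "lock sub RW_LOCK_BIAS"; assume none
             * of them has re-added the bias in its failure path yet. */
            for (int cpu = 1; cpu <= 130; cpu++) {
                    lock -= RW_LOCK_BIAS;
                    if (cpu >= 127)
                            printf("%3d pending writers: lock = 0x%08x\n",
                                   cpu, (unsigned)lock);
            }

            /* Around 128 pending writers the cumulative subtraction reaches
             * 2^31; two more and the 32-bit word wraps back to a positive
             * value, no longer distinguishable from a legitimately held
             * lock. Widening the word (or shrinking the bias) removes the
             * ambiguity. */
            return 0;
    }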
-
Committed by Jan Beulich

Rather than having two functionally identical implementations for 32- and 64-bit configurations, extend the existing assembly abstractions enough to fold the two rwlock implementations into a shared one.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/4E258DD7020000780004E3EA@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
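The shared implementation itself is not quoted here, so below is only a hedged sketch of the technique with invented names (rwlock_word_t, RWLOCK_INSN; __x86_64__ stands in for the real config test): the operand-size suffix and the lock-word type are chosen once, and a single asm template then serves both configurations. Only the uncontended write-path subtraction is shown; the real lock also tests the result and branches to a slow path.

    #ifdef __x86_64__
    typedef long long rwlock_word_t;         /* 64-bit lock word */
    # define RWLOCK_INSN(op)  op "q"         /* subq, addq, ... */
    #else
    typedef int rwlock_word_t;               /* 32-bit lock word */
    # define RWLOCK_INSN(op)  op "l"         /* subl, addl, ... */
    #endif

    static inline void write_lock_subtract(rwlock_word_t *lock,
                                           rwlock_word_t bias)
    {
            /* One source line, correct operand size on both builds. */
            asm volatile("lock " RWLOCK_INSN("sub") " %1, %0"
                         : "+m" (*lock)
                         : "er" (bias)
                         : "memory", "cc");
    }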
-
- 01 September 2009, 1 commit
-
Committed by H. Peter Anvin

We have had this convenient macro _ASM_EXTABLE() to generate exception table entries in inline assembly. Make it also usable for pure assembly.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
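A sketch of the dual-context pattern this enables (the name and the 64-bit-only directives are simplifications, and it keeps the .section/.previous form of this era, which the April 2012 commits above later replace with .pushsection/.popsection): the preprocessor picks a raw-directive body for .S files and a string body for C inline assembly, so both kinds of callers write the same macro name.

    #ifdef __ASSEMBLY__
    /* Used directly in .S files. */
    # define MY_EXTABLE(from, to)                           \
            .section __ex_table, "a" ;                      \
            .balign 8 ;                                     \
            .quad from, to ;                                \
            .previous
    #else
    /* The same entry emitted from C, pasted into an inline-asm string. */
    # define MY_EXTABLE(from, to)                           \
            " .section __ex_table,\"a\"\n"                  \
            " .balign 8\n"                                  \
            " .quad " #from ", " #to "\n"                   \
            " .previous\n"
    #endif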
-
- 23 October 2008, 2 commits
-
Committed by H. Peter Anvin

Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

a. the double underscore is ugly and pointless.
b. omitting the leading underscore violates namespace constraints.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
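Taking asm.h itself as the example (an assumption; any header under the old include/asm-x86/ follows the same pattern), the rename looks like this:

    /* before: the form generated by the earlier consolidation script
     * (see the 23 July 2008 entry below) */
    #ifndef ASM_X86__ASM_H
    #define ASM_X86__ASM_H
    /* ... header body ... */
    #endif /* ASM_X86__ASM_H */

    /* after: reserved-namespace name, single underscores */
    #ifndef _ASM_X86_ASM_H
    #define _ASM_X86_ASM_H
    /* ... header body ... */
    #endif /* _ASM_X86_ASM_H */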
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 19 August 2008, 1 commit
-
Committed by H. Peter Anvin

Rename _ASM_MOV_UL to _ASM_MOV for consistency with other _ASM_ instructions (_ASM_ADD, _ASM_SUB and so on.) Add _ASM_SP, _ASM_BP, _ASM_SI, and _ASM_DI for consistency with _ASM_[ABCD]X.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
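A hedged usage sketch (MY_ASM_MOV and MY_ASM_SP below are simplified stand-ins, not the asm/asm.h definitions): with word-sized mnemonic and register-name macros available, one inline-asm template can, for instance, read the stack pointer on either build.

    #ifdef __x86_64__
    # define MY_ASM_MOV  "movq"
    # define MY_ASM_SP   "rsp"
    #else
    # define MY_ASM_MOV  "movl"
    # define MY_ASM_SP   "esp"
    #endif

    static inline unsigned long stack_pointer_sketch(void)
    {
            unsigned long sp;
            asm(MY_ASM_MOV " %%" MY_ASM_SP ", %0" : "=r" (sp));
            return sp;
    }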
-
- 23 July 2008, 1 commit
-
Committed by Vegard Nossum

This patch is the result of an automatic script that consolidates the format of all the headers in include/asm-x86/.

The format:

1. No leading underscore. Names with leading underscores are reserved.
2. Pathname components are separated by two underscores. So we can distinguish between mm_types.h and mm/types.h.
3. Everything except letters and numbers is turned into single underscores.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
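As a concrete reconstruction of rules 1-3 (the guard spellings below are inferred from the rules, not quoted from the patch):

    /*
     *   asm-x86/mm_types.h   ->   ASM_X86__MM_TYPES_H
     *   asm-x86/mm/types.h   ->   ASM_X86__MM__TYPES_H
     *
     * The double underscore stands for a path separator, so the two guards
     * stay distinct even though the file names differ only in '_' vs '/'.
     */
    #ifndef ASM_X86__MM_TYPES_H
    #define ASM_X86__MM_TYPES_H
    /* ... header body ... */
    #endif /* ASM_X86__MM_TYPES_H */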
-
- 09 July 2008, 3 commits
-
Committed by Glauber Costa

In putuser_32.S and putuser_64.S, replace things like .quad, .long, and explicit references to [r|e]ax with the appropriate macros in asm/asm.h.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Glauber Costa

getuser_32.S and getuser_64.S are merged into getuser.S.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Glauber Costa

There are situations in which the architecture wants to use the register that represents its word size, whatever it is. For those, introduce __ASM_REG in asm.h, along with the first users, _ASM_AX and _ASM_DX. They have users waiting for them, namely the getuser functions.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
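A rough sketch of how such a selector can be built (the MY_-prefixed names are stand-ins, not the real asm/asm.h definitions):

    #ifdef __x86_64__
    # define MY_ASM_SEL(a, b)  #b            /* pick the 64-bit spelling */
    #else
    # define MY_ASM_SEL(a, b)  #a            /* pick the 32-bit spelling */
    #endif

    /* Build the word-sized register name from the two-letter stem. */
    #define MY_ASM_REG(reg)    MY_ASM_SEL(e##reg, r##reg)

    #define MY_ASM_AX          MY_ASM_REG(ax)    /* "eax" or "rax" */
    #define MY_ASM_DX          MY_ASM_REG(dx)    /* "edx" or "rdx" */

    /* Example: route a value through the word-sized accumulator. */
    static inline unsigned long via_ax(unsigned long x)
    {
            unsigned long out;
            asm("mov %%" MY_ASM_AX ", %0" : "=r" (out) : "a" (x));
            return out;
    }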
-
- 17 June 2008, 1 commit
-
Committed by Jeremy Fitzhardinge

This is useful for unifying some pieces of asm code.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 04 February 2008, 1 commit
-
Committed by H. Peter Anvin

Instead of open-coding the __ex_table information at each callsite, construct a common macro that can work regardless of CPU size.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
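A sketch of what such a common macro can look like (MY_-prefixed names are stand-ins, not the kernel's exact definitions, and it uses the .section/.previous form of this era, later replaced by .pushsection/.popsection in the April 2012 commits above): the pointer-size directive is chosen once per build, so the same macro covers 32-bit and 64-bit.

    #ifdef __x86_64__
    # define MY_ASM_PTR    ".quad"
    # define MY_ASM_ALIGN  ".balign 8"
    #else
    # define MY_ASM_PTR    ".long"
    # define MY_ASM_ALIGN  ".balign 4"
    #endif

    /* Emit one (fault address, fixup address) pair into __ex_table. */
    #define MY_ASM_EXTABLE(from, to)                        \
            " .section __ex_table,\"a\"\n "                 \
            MY_ASM_ALIGN "\n "                              \
            MY_ASM_PTR " " #from ", " #to "\n"              \
            " .previous\n"

A call site in inline assembly then writes MY_ASM_EXTABLE(1b, 2b) after its faulting-instruction and fixup labels instead of spelling out the section switch and pointer directives by hand.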
-
- 30 January 2008, 3 commits
-
Committed by Harvey Harrison

Handle the use of long on X86_32 and quad on X86_64.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Thomas Gleixner

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by H. Peter Anvin

Create <asm/asm.h>, with common definitions suitable for assembly unification.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-