- 05 January 2014, 1 commit

Submitted by Steven Rostedt
Commit ff47ab4f ("x86: Add 1/2/4/8 byte optimization to 64bit __copy_{from,to}_user_inatomic") added a "_nocheck" call in between the copy_to/from_user() and copy_user_generic(). As both the normal and nocheck versions of these calls use the proper __user annotation, a typecast to remove it should not be added. This causes sparse to spin out the following warning (repeated four times):

arch/x86/include/asm/uaccess_64.h:207:47: warning: incorrect type in argument 2 (different address spaces)
arch/x86/include/asm/uaccess_64.h:207:47:    expected void const [noderef] <asn:1>*src
arch/x86/include/asm/uaccess_64.h:207:47:    got void const *<noident>

Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/20140103164500.5f6478f5@gandalf.local.home
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
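For context, a minimal standalone sketch of the sparse mechanics behind this warning; the __user definition mirrors the kernel's checker annotation, and the function names are illustrative only (run sparse on the file to see the diagnostic):

    #ifdef __CHECKER__
    # define __user __attribute__((noderef, address_space(1)))
    #else
    # define __user
    #endif

    static char kbuf[16];

    static void takes_user_ptr(const void __user *p) { (void)p; }

    int main(void)
    {
        /* Explicit cast into the __user address space: sparse is quiet. */
        takes_user_ptr((const void __user *)kbuf);
        /* takes_user_ptr(kbuf); would warn "incorrect type in argument 1
         * (different address spaces)", just like the report above. */
        return 0;
    }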
- 26 October 2013, 2 commits

Submitted by Jan Beulich
Similarly to copy_from_user(), where the range check protects against kernel memory corruption, copy_to_user() can benefit from such checking too: here it protects against kernel information leaks.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/5265059502000078000FC4F6@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Submitted by Jan Beulich
Commits 4a312769 ("x86: Turn the copy_from_user check into an (optional) compile time warning") and 63312b6a ("x86: Add a Kconfig option to turn the copy_from_user warnings into errors") touched only the 32-bit variant of copy_from_user(), whereas the original commit 9f0cf4ad ("x86: Use __builtin_object_size() to validate the buffer size for copy_from_user()") also added the same code to the 64-bit one.

Further, the earlier conversion from an inline WARN() to the call to copy_from_user_overflow() went a little too far: when the number of bytes to be copied is not a constant (e.g., [looking at 3.11] in drivers/net/tun.c:__tun_chr_ioctl() or drivers/pci/pcie/aer/aer_inject.c:aer_inject_write()), the compiler will always have to keep the function call, and hence there will always be a warning. By using __builtin_constant_p() we can avoid this.

This also slightly extends the effect of CONFIG_DEBUG_STRICT_USER_COPY_CHECKS: apart from converting warnings to errors in the constant-size case, it retains the (possibly wrong) warnings in the non-constant-size case, so that someone prepared to accept a few false positives can recover the current behavior (except that these diagnostics will now never be converted to errors).

Since the 32-bit variant (intentionally) didn't call might_fault(), the unification results in this being called twice now. Adding a suitable #ifdef would be the alternative if that's a problem.

I'd like to point out, though, that with __compiletime_object_size() being restricted to gcc before 4.6, the whole construct is going to become more and more pointless going forward. I would question whether commit 2fb0815c ("gcc4: disable __compiletime_object_size for GCC 4.6+") was really necessary; instead this should have been dealt with as is done here from the beginning.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/5265056D02000078000FC4F3@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
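A standalone sketch of the warn-only-on-constant-sizes pattern described above; checked_copy() and copy_overflow_warn() are hypothetical stand-ins for the kernel helpers, and gcc needs -O2 to evaluate the builtins:

    #include <stdio.h>
    #include <string.h>

    static void copy_overflow_warn(void)
    {
        fprintf(stderr, "copy size exceeds object size\n"); /* WARN() stand-in */
    }

    /* Diagnose only when the size is a compile-time constant and the
     * destination size is known; non-constant sizes stay silent, which
     * is exactly the false-positive source the commit removes. */
    #define checked_copy(dst, src, n)                                          \
        do {                                                                   \
            size_t __bos = __builtin_object_size(dst, 0);                      \
            if (__builtin_constant_p(n) && __bos != (size_t)-1 && (n) > __bos) \
                copy_overflow_warn();                                          \
            else                                                               \
                memcpy(dst, src, (n) <= __bos ? (n) : __bos);                  \
        } while (0)

    int main(void)
    {
        char dst[8];
        const char src[16] = "0123456789abcde";
        checked_copy(dst, src, 8);   /* constant, fits: plain copy */
        checked_copy(dst, src, 16);  /* constant, too big: warns   */
        printf("%.8s\n", dst);
        return 0;
    }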
- 11 September 2013, 1 commit

Submitted by Andi Kleen
The 64-bit __copy_{from,to}_user_inatomic always called copy_user_generic(), but skipped the special optimizations for 1/2/4/8-byte accesses. This especially hurts the futex call, which accesses the 4-byte futex user value with a complicated fast-string operation in a function call, instead of a single movl.

Use __copy_{from,to}_user for _inatomic instead to get the same optimizations. The only problem was the might_fault() in those functions, so move that to a new wrapper and call __copy_{f,t}_user_nocheck() from *_inatomic directly. 32-bit already did this correctly by duplicating the code.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1376687844-19857-2-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
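A standalone sketch of the dispatch this buys; copy_nocheck() and copy_generic() are hypothetical stand-ins for the kernel routines. With a compile-time-constant size of 1, 2, 4, or 8, the copy collapses to a single move instead of a call:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static unsigned long copy_generic(void *dst, const void *src, unsigned n)
    {
        memcpy(dst, src, n);  /* stands in for copy_user_generic() */
        return 0;             /* 0 = no bytes left uncopied        */
    }

    static inline unsigned long
    copy_nocheck(void *dst, const void *src, unsigned n)
    {
        if (!__builtin_constant_p(n))
            return copy_generic(dst, src, n);
        switch (n) {          /* constant sizes: one load/store pair */
        case 1: *(uint8_t  *)dst = *(const uint8_t  *)src; return 0;
        case 2: *(uint16_t *)dst = *(const uint16_t *)src; return 0;
        case 4: *(uint32_t *)dst = *(const uint32_t *)src; return 0;
        case 8: *(uint64_t *)dst = *(const uint64_t *)src; return 0;
        default: return copy_generic(dst, src, n);
        }
    }

    int main(void)
    {
        uint32_t futex_val, user_val = 42;
        copy_nocheck(&futex_val, &user_val, sizeof(user_val)); /* one movl */
        printf("%u\n", futex_val);
        return 0;
    }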
- 28 May 2013, 1 commit

Submitted by Michael S. Tsirkin
The only reason uaccess routines might sleep is if they fault. Make this explicit.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1369577426-26721-9-git-send-email-mst@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 22 September 2012, 1 commit

Submitted by H. Peter Anvin
The prototypes for clear_user() and __clear_user() are identical in the 32- and 64-bit headers. No functionality change.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1348256595-29119-8-git-send-email-hpa@linux.intel.com
- 30 June 2012, 1 commit

Submitted by Fenghua Yu
According to the Intel 64 and IA-32 SDM and Optimization Reference Manual, beginning with Ivy Bridge, REP string operations using MOVSB and STOSB can provide both flexible and high-performance string operations in cases like memory copy. Enhancement availability is indicated by CPUID.7.0.EBX[9] (Enhanced REP MOVSB/STOSB).

If the CPU erms feature is detected, patch copy_user_generic with the enhanced fast-string version of copy_user_generic. A few new macros are defined to reduce duplicate code in ALTERNATIVE and ALTERNATIVE_2.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1337908785-14015-1-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
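A standalone check for that feature bit using GCC's cpuid.h; the branch bodies are only a sketch of the decision the alternatives patching makes:

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 7, subleaf 0: EBX bit 9 = Enhanced REP MOVSB/STOSB */
        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && (ebx & (1u << 9)))
            puts("ERMS: rep movsb is the preferred copy loop");
        else
            puts("no ERMS: fall back to the unrolled/string-op copy");
        return 0;
    }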
- 27 May 2012, 1 commit

Submitted by Linus Torvalds
This throws away the old x86-specific functions in favor of the generic optimized version.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 12 April 2012, 1 commit

Submitted by Linus Torvalds
This merges the 32- and 64-bit versions of the x86 strncpy_from_user() by just rewriting it in C rather than the ancient inline asm versions that used lodsb/stosb and had been duplicated for (trivial) differences between the 32-bit and 64-bit versions.

While doing that, it also speeds them up by doing the accesses a word at a time. Finally, the new routines also properly handle the case of hitting the end of the address space, which we have never done correctly before (fs/namei.c has a hack around it for that reason). Despite all these improvements, it actually removes more lines than it adds, due to the de-duplication.

Also, we no longer export (or define) the legacy __strncpy_from_user() function (which was defined to skip the user permission checks), since it's not actually used anywhere, and the user address space checks are built into the new code.

Other architecture maintainers have been notified that the old hack in fs/namei.c will be going away in the 3.5 merge window, in case they copied the x86 approach of being a bit cavalier about the end of the address space.

Cc: linux-arch@vger.kernel.org
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
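The word-at-a-time speedup hinges on spotting a NUL inside a full word without testing bytes individually. A standalone demo of that classic trick (this mirrors the technique; the kernel's real helpers live in word-at-a-time.h):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define ONES  0x0101010101010101UL
    #define HIGHS 0x8080808080808080UL

    /* Nonzero iff some byte of v is zero: subtracting 1 from each byte
     * borrows across a zero byte and sets its high bit in (~v & HIGHS). */
    static int has_zero_byte(uint64_t v)
    {
        return ((v - ONES) & ~v & HIGHS) != 0;
    }

    int main(void)
    {
        uint64_t w;
        memcpy(&w, "abcdefgh", 8);
        printf("%d\n", has_zero_byte(w)); /* 0: no NUL in this word */
        memcpy(&w, "abc\0defg", 8);
        printf("%d\n", has_zero_byte(w)); /* 1: NUL found           */
        return 0;
    }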
- 21 May 2011, 1 commit

Submitted by Linus Torvalds
Commit e66eed65 ("list: remove prefetching from regular list iterators") removed the include of prefetch.h from list.h, which uncovered several cases that had apparently relied on that rather obscure header file dependency.

So this fixes things up a bit, using

grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')

to guide us in finding files that either need <linux/prefetch.h> inclusion, or have it despite not needing it. There are more of them around (mostly network drivers), but this gets many core ones.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 06 January 2010, 1 commit

Submitted by Heiko Carstens
Callers of copy_from_user() expect it to return the number of bytes it could not copy. In no case is it supposed to return -EFAULT. In case of a detected buffer overflow, just return the requested length. In addition, one could think of a memset that would clear the size of the target object.

[ hpa: code is not in .32 so not needed for -stable ]

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <20100105131911.GC5480@osiris.boeblingen.de.ibm.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
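A userspace model of the contract being restored; model_copy_from_user() is hypothetical, with 'ok' standing in for the point where the fault would hit:

    #include <stdio.h>
    #include <string.h>

    /* Pretend the source becomes inaccessible after 'ok' bytes. */
    static unsigned long model_copy_from_user(void *dst, const void *src,
                                              unsigned long n, unsigned long ok)
    {
        unsigned long done = n < ok ? n : ok;

        memcpy(dst, src, done);
        return n - done;  /* bytes NOT copied -- never -EFAULT */
    }

    int main(void)
    {
        char dst[16];
        const char src[16] = "hello, world!!!";
        unsigned long left = model_copy_from_user(dst, src, 16, 10);

        printf("uncopied: %lu\n", left); /* 6 */
        return 0;
    }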
- 30 December 2009, 1 commit

Submitted by Jan Beulich
In order to avoid unnecessary chains of branches, rather than implementing copy_user_generic() as a function consisting of just a single (possibly patched) branch, instead properly deal with patching call instructions in the alternative instructions framework, and move the patching into the callers.

As a follow-on, one could also introduce something like __EXPORT_SYMBOL_ALT() to avoid patching call sites in modules.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <4B2BB8180200007800026AE7@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 16 November 2009, 1 commit

Submitted by Frederic Weisbecker
On x86-64, copy_{to,from}_user() rely on assembly routines that never call might_fault(), making us miss various lockdep checks. This doesn't apply to __copy_{from,to}_user(), which explicitly handle these calls; neither is it a problem on x86-32, where copy_{to,from}_user() rely on the "__"-prefixed versions that also call might_fault().

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1258382538-30979-1-git-send-email-fweisbec@gmail.com>
[ v2: fix module export ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 15 November 2009, 1 commit

Submitted by Jan Beulich
The v2.6.26 commit ad2fc2cd ("x86: fix copy_user on x86") rendered __copy_from_user_inatomic() identical to copy_user_generic(), yet didn't make the former just call the latter from an inline function.

Furthermore, the v2.6.19 commit b885808e ("[PATCH] Add proper sparse __user casts to __copy_to_user_inatomic") converted the return type of __copy_to_user_inatomic() from unsigned long to int, but didn't do the same to __copy_from_user_inatomic().

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: <v.mayatskih@gmail.com>
LKML-Reference: <4AFD5778020000780001F8F4@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 26 September 2009, 1 commit

Submitted by Arjan van de Ven
gcc (4.x) supports the __builtin_object_size() builtin, which reports the size of the object a pointer points to, when known at compile time. If the buffer size is not known at compile time, the constant -1 is returned.

This patch uses this feature to add a sanity check to copy_from_user(): if the target buffer is known to be smaller than the copy size, the copy is aborted and a WARNing is emitted in memory-debug mode. These extra checks compile away when the object size is not known, or if both the buffer size and the copy length are constants.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <20090926143301.2c396b94@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
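The builtin itself, in a standalone demo (compile with -O2 so gcc evaluates it; obj_size() is just a local helper):

    #include <stdio.h>

    /* Behind a noinline call gcc cannot see the pointee, so the
     * builtin reports "unknown" as (size_t)-1, i.e. SIZE_MAX. */
    __attribute__((noinline))
    static size_t obj_size(char *p)
    {
        return __builtin_object_size(p, 0);
    }

    int main(void)
    {
        char buf[16];

        printf("%zu\n", __builtin_object_size(buf, 0)); /* 16: known size */
        printf("%zu\n", obj_size(buf));                 /* SIZE_MAX       */
        return 0;
    }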
- 21 July 2009, 1 commit

Submitted by Uros Bizjak
arch/x86/include/asm/uaccess_64.h uses the wrong asm operand constraint ("ir") for the movq insn. Since movq sign-extends its immediate operand, the "er" constraint should be used instead. The attached patch changes all uses of __put_user_asm in uaccess_64.h to use "er" when the "q" insn suffix is involved. The patch was compile-tested on x86_64 with defconfig.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: stable@kernel.org
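Why the constraint matters, in a standalone x86-64 sketch: "e" accepts only constants movq can actually encode (sign-extended 32-bit), while "i" would also hand it 64-bit constants that cannot be encoded:

    #include <stdio.h>

    int main(void)
    {
        long a, b;

        /* Fits in a sign-extended 32-bit immediate: "e" matches and
         * the compiler may emit movq $imm, mem directly. */
        asm("movq %1, %0" : "=m"(a) : "er"(0x12345678L));

        /* Does not fit: "e" rejects it, so gcc falls back to the "r"
         * alternative and loads the constant into a register first. */
        asm("movq %1, %0" : "=m"(b) : "er"(0x112345678L));

        printf("%lx %lx\n", a, b);
        return 0;
    }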
- 02 March 2009, 1 commit

Submitted by Ingo Molnar
Impact: standardize IO on cached ops

On modern CPUs it is almost always a bad idea to use non-temporal stores, as the regression addressed in commit 30d697fa ("x86: fix performance regression in write() syscall") has shown. The kernel simply has no good information about whether using non-temporal stores is a good idea or not, and trying to add heuristics only increases complexity and inserts fragility. The regression on cached write()s took very long to be found: over two years. So don't take any chances and let the hardware decide how it makes use of its caches.

The only exception is drivers/gpu/drm/i915/i915_gem.c: there we are absolutely sure that another entity (the GPU) will pick up the dirty data immediately, and that the CPU will not touch that data before the GPU does.

Also, keep the _nocache() primitives to make it easier for people to experiment with these details. There may be more clear-cut cases where non-cached copies can be used, outside of filemap.c.

Cc: Salman Qazi <sqazi@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
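For reference, this is what a non-temporal store looks like at the instruction level; a minimal SSE2 sketch of the kind of store the _nocache() primitives build on:

    #include <emmintrin.h>
    #include <stdio.h>

    int main(void)
    {
        int dst;

        _mm_stream_si32(&dst, 42); /* movnti: store bypassing the caches */
        _mm_sfence();              /* order the streaming store          */
        printf("%d\n", dst);
        return 0;
    }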
- 25 February 2009, 3 commits

Submitted by Ingo Molnar
Impact: make more types of copies non-temporal

This change makes the simple fix in commit 30d697fa ("x86: fix performance regression in write() syscall") a bit more sophisticated: we check the 'total' number of bytes written to decide whether to copy in a cached or a non-temporal way. This will, for example, cause the tail (modulo 4096 bytes) chunk of a large write() to be non-temporal too, not just the page-sized chunks.

Cc: Salman Qazi <sqazi@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
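A sketch of the heuristic the 'total' parameter enables; all names and the page-sized threshold here are illustrative assumptions, not the kernel's exact code:

    #include <stdio.h>
    #include <string.h>

    #define NT_THRESHOLD 4096 /* assumed cutoff: one page */

    static void nt_copy(void *d, const void *s, unsigned n)
    {
        memcpy(d, s, n); /* stands in for a movnti-based copy */
    }

    static void copy_nocache(void *dst, const void *src,
                             unsigned len, unsigned total)
    {
        if (total >= NT_THRESHOLD)
            nt_copy(dst, src, len); /* large write(): go non-temporal  */
        else
            memcpy(dst, src, len);  /* small write(): keep data cached */
    }

    int main(void)
    {
        char src[8] = "abc", dst[8];

        copy_nocache(dst, src, 4, 16); /* 16-byte total: cached path */
        puts(dst);
        return 0;
    }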
Submitted by Ingo Molnar
Impact: cleanup, enable future change

Add a 'total bytes copied' parameter to __copy_from_user_*nocache(), and update all the call sites. The parameter is not used yet; architecture code can use it to more intelligently decide whether the copy should be cached or non-temporal.

Cc: Salman Qazi <sqazi@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Salman Qazi
While the introduction of __copy_from_user_nocache (see commit 0812a579) may have been an improvement for sufficiently large writes, there is evidence to show that it is detrimental for small writes. Unixbench's fstime test gives the following results for 256-byte writes with MAX_BLOCK of 2000:

2.6.29-rc6 (5 samples, each in KB/sec):
283750, 295200, 294500, 293000, 293300

2.6.29-rc6 + this patch (5 samples, each in KB/sec):
313050, 3106750, 293350, 306300, 307900

2.6.18:
395700, 342000, 399100, 366050, 359850

See w_test() in src/fstime.c in unixbench version 4.1.0. Basically, the above test consists of counting how much we can write in this manner:

alarm(10);
while (!sigalarm) {
        for (f_blocks = 0; f_blocks < 2000; ++f_blocks) {
                write(f, buf, 256);
        }
        lseek(f, 0L, 0);
}

Note, there are other components to the write syscall regression that are not addressed here.

Signed-off-by: Salman Qazi <sqazi@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 19 November 2008, 1 commit

Submitted by Hiroshi Shimamoto
__copy_from_user() will return an invalid value, 16, when it fails to access user space and the size is 10.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 23 October 2008, 2 commits

Submitted by H. Peter Anvin
Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

a. the double underscore is ugly and pointless.
b. no leading underscore violates namespace constraints.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Submitted by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- 03 October 2008, 1 commit

Submitted by Nick Piggin
Fix an inotify lock-order reversal with mmap_sem caused by holding locks over copy_to_user().

Signed-off-by: Nick Piggin <npiggin@suse.de>
Reported-by: "Daniel J Blueman" <daniel.blueman@gmail.com>
Tested-by: "Daniel J Blueman" <daniel.blueman@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 11 September 2008, 1 commit

Submitted by Nick Piggin
- introduce might_fault()
- handle the atomic user copy paths correctly

[ mingo@elte.hu: move might_sleep() outside of in_atomic(). ]

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
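A userspace model of what might_fault() asserts; atomic_depth and the helpers are stand-ins for the kernel's in_atomic()/might_sleep(), not its real implementation:

    #include <assert.h>
    #include <stdio.h>

    static int atomic_depth;  /* models the preempt/pagefault count */

    static void might_sleep(void)
    {
        assert(atomic_depth == 0); /* sleeping is illegal while atomic */
    }

    /* User-copy paths call this: only outside atomic sections is it
     * legal to sleep on the page fault the copy might take. */
    static void might_fault(void)
    {
        if (atomic_depth == 0)
            might_sleep();
    }

    int main(void)
    {
        might_fault();   /* process context: fine        */
        atomic_depth++;  /* enter an "atomic" section    */
        might_fault();   /* _inatomic variant: no assert */
        atomic_depth--;
        puts("ok");
        return 0;
    }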
- 10 September 2008, 1 commit

Submitted by Nick Piggin
copy_to/from_user() and all their variants (except the atomic ones) can take a page fault and perform non-trivial work like taking mmap_sem and entering the filesystem/pagecache.

Unfortunately, this often escapes lockdep, because a common pattern is to use it to read in some arguments just set up from userspace, or to write data back to a hot buffer. In those cases, it will be unlikely for page reclaim to get a window in to cause copy_*_user() to fault.

With the new might_lock primitives, add some annotations to x86. I don't know if I caught all possible faulting points (it's a bit of a maze, and I didn't really look at 32-bit). But this is a starting point. Boots and runs OK so far.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 23 July 2008, 1 commit

Submitted by Vegard Nossum
This patch is the result of an automatic script that consolidates the format of all the headers in include/asm-x86/. The format:

1. No leading underscore. Names with leading underscores are reserved.
2. Pathname components are separated by two underscores. So we can distinguish between mm_types.h and mm/types.h.
3. Everything except letters and numbers is turned into single underscores.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
- 09 July 2008, 13 commits

Submitted by Vitaly Mayatskikh
Introduce a generic C routine for handling the necessary tail operations after a protection fault in copy_*_user on x86.

Signed-off-by: Vitaly Mayatskikh <v.mayatskih@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Remove them from the arch-specific file.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
We also carry the unaligned version with us. Only x86_64 uses it, but there's no problem in defining it.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Move both versions, which are highly similar, to uaccess.h. Note that, for x86_64, X86_WP_WORKS_OK is always defined.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
We also check the user pointer in the x86_64 put_user(), the way i386 does. This is in a separate patch for bisecting purposes.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Move __get_user_asm, __get_user_size, and __get_user_nocheck to uaccess.h. This requires us to define a macro at __get_user_size for the 64-bit access case.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Let the user of the macro specify the desired return value.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Move both __put_user_asm and __put_user_size to uaccess.h. i386 already had a special function for 64-bit access, so for x86_64, we just define a macro with the same name. Note that for x86_64, CONFIG_X86_WP_WORKS_OK will always be defined, so the #else part will never even be compiled in.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Let the user of the macro specify the desired return value.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Take it out of uaccess_32.h. Since it seems that no users of the x86_64 version exist, we simply pick the i386 version.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Merge the versions of get_user from uaccess_32.h and uaccess_64.h into uaccess.h. There is a part which is 64-bit only (for now); for that, we use a __get_user_8 macro.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Common parts of uaccess_32.h and uaccess_64.h are put in uaccess.h. The bits of uaccess_32.h and uaccess_64.h that move to this file are equal except for comment and whitespace differences.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Submitted by Glauber Costa
Using an explicit hex constant (0xFFFFFFUL) introduces an unnecessary difference between i386 and x86_64 because of the different size of their longs. Use -1UL instead.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
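A two-line illustration of the point: -1UL is all-ones at whatever width the target's unsigned long has, so the same source works for both ABIs:

    #include <stdio.h>

    int main(void)
    {
        /* Prints ffffffff on i386 and ffffffffffffffff on x86_64. */
        printf("%lx\n", -1UL);
        return 0;
    }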