- 12 October 2016, 1 commit

Committed by Mark Rutland

The strncpy_from_user() accessor is effectively a copy_from_user() specialised to copy strings, terminating early at a NUL byte if possible. In other respects it is identical, and can be used to copy an arbitrarily large buffer from userspace into the kernel. Conceptually, it exposes a similar attack surface.

As with copy_from_user(), we check the destination range when the kernel is built with KASAN, but unlike copy_from_user() we do not check the destination buffer when using HARDENED_USERCOPY. As strncpy_from_user() calls get_user() in a loop, we must call check_object_size() explicitly.

This patch adds this instrumentation to strncpy_from_user(), per the same rationale as with the regular copy_from_user(). In the absence of hardened usercopy this will have no impact, as the instrumentation expands to an empty static inline function.

Link: http://lkml.kernel.org/r/1472221903-31181-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
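
For context, a minimal sketch of where the new check lands, modeled on the generic lib/strncpy_from_user.c of this period; the surrounding logic is abbreviated, and do_strncpy_from_user() is the file's existing internal helper:

```c
long strncpy_from_user(char *dst, const char __user *src, long count)
{
	unsigned long max_addr, src_addr;

	if (unlikely(count <= 0))
		return 0;

	max_addr = user_addr_max();
	src_addr = (unsigned long)src;
	if (likely(src_addr < max_addr)) {
		long max = max_addr - src_addr;
		long retval;

		/*
		 * The addition: validate the kernel-side destination
		 * before the get_user() loop runs.  Without
		 * CONFIG_HARDENED_USERCOPY this expands to an empty
		 * static inline function and costs nothing.
		 */
		check_object_size(dst, count, false);

		user_access_begin();
		retval = do_strncpy_from_user(dst, src, count, max);
		user_access_end();
		return retval;
	}
	return -EFAULT;
}
```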

- 9 August 2016, 1 commit

Committed by Linus Torvalds

When I initially added the unsafe_[get|put]_user() helpers in commit 5b24a7a2 ("Add 'unsafe' user access functions for batched accesses"), I made the mistake of modeling the interface on our traditional __[get|put]_user() functions, which return zero on success, or -EFAULT on failure.

That interface is fairly easy to use, but it's actually fairly nasty for good code generation, since it essentially forces the caller to check the error value for each access.

In particular, since the error handling is already internally implemented with an exception handler, and we already use "asm goto" for various other things, we could fairly easily make the error cases just jump directly to an error label instead, and avoid the need for explicit checking after each operation.

So switch the interface to pass in an error label, rather than checking the error value in the caller. Best do it now before we start growing more users (the signal handling code in particular would be a good place to use the new interface).

So rather than

    if (unsafe_get_user(x, ptr))
        ... handle error ...

the interface is now

    unsafe_get_user(x, ptr, label);

where an error during the user mode fetch will now just cause a jump to 'label' in the caller.

Right now the actual _implementation_ of this all still ends up being a "if (err) goto label", and does not take advantage of any exception label tricks, but for "unsafe_put_user()" in particular it should be fairly straightforward to convert to using the exception table model.

Note that "unsafe_get_user()" is much harder to convert to a clever exception table model, because current versions of gcc do not allow the use of "asm goto" (for the exception) with output values (for the actual value to be fetched). But that is hopefully not a limitation in the long term.

[ Also note that it might be a good idea to switch unsafe_get_user() to actually _return_ the value it fetches from user space, but this commit only changes the error handling semantics ]

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
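
A hedged illustration of the new calling convention; the function and its arguments are hypothetical, but the user_access_begin()/user_access_end() bracketing and the error label follow the interface described above (access_ok() shown in its VERIFY_READ form of this era):

```c
/* Hypothetical caller: fetch two words with one STAC/CLAC pair. */
static long fetch_pair(const unsigned long __user *uptr,
		       unsigned long *a, unsigned long *b)
{
	if (!access_ok(VERIFY_READ, uptr, 2 * sizeof(unsigned long)))
		return -EFAULT;

	user_access_begin();
	unsafe_get_user(*a, &uptr[0], efault);	/* a fault jumps to efault */
	unsafe_get_user(*b, &uptr[1], efault);	/* no per-access error check */
	user_access_end();
	return 0;

efault:
	user_access_end();
	return -EFAULT;
}
```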

- 21 May 2016, 1 commit

Committed by Andrey Ryabinin

Exchange between user and kernel memory is coded in assembly language, which means that such accesses won't be spotted by KASAN, as the compiler instruments only C code. Add explicit KASAN checks to the user memory access API to ensure that userspace writes to (or reads from) valid kernel memory.

Note: unlike the others, strncpy_from_user() is written mostly in C, and KASAN sees memory accesses in it. However, it makes sense to add an explicit check for all @count bytes that *potentially* could be written to the kernel.

[aryabinin@virtuozzo.com: move kasan check under the condition]
Link: http://lkml.kernel.org/r/1462869209-21096-1-git-send-email-aryabinin@virtuozzo.com
Link: http://lkml.kernel.org/r/1462538722-1574-4-git-send-email-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
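
The shape of the added checks, sketched for the two copy directions; the wrapper and callee names here are illustrative (the exact call sites in the uaccess code differ per function), but kasan_check_write() and kasan_check_read() are the annotations this series introduces:

```c
static inline unsigned long
copy_from_user_checked(void *to, const void __user *from, unsigned long n)
{
	kasan_check_write(to, n);	/* 'to' is kernel memory being written */
	return arch_copy_from_user(to, from, n);	/* illustrative name */
}

static inline unsigned long
copy_to_user_checked(void __user *to, const void *from, unsigned long n)
{
	kasan_check_read(from, n);	/* 'from' is kernel memory being read */
	return arch_copy_to_user(to, from, n);	/* illustrative name */
}
```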

- 18 December 2015, 1 commit

Committed by Linus Torvalds

This converts the generic user string functions to use the batched user access functions.

It makes a big difference on Skylake, which is the first x86 microarchitecture to implement SMAP. The STAC/CLAC instructions are not very fast, and doing them for each access inside the loop that copies strings from user space (which is what the pathname handling does for every pathname the kernel uses, for example) is very inefficient.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
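
An illustrative before/after, not the actual kernel code: the naive loop pays a STAC/CLAC pair per byte through get_user(), while the batched version brackets the whole loop once. Range checks are omitted for brevity, and unsafe_get_user() is shown with the label form from the later interface change listed above:

```c
/* Naive: one stac/clac pair per byte (illustrative only). */
long copy_str_naive(char *dst, const char __user *src, long count)
{
	long i;

	for (i = 0; i < count; i++) {
		char c;

		if (get_user(c, src + i))	/* stac; mov; clac - every byte */
			return -EFAULT;
		dst[i] = c;
		if (!c)
			break;
	}
	return i;
}

/* Batched: one stac/clac pair for the whole string (illustrative only). */
long copy_str_batched(char *dst, const char __user *src, long count)
{
	long i, ret = -EFAULT;

	user_access_begin();			/* single stac */
	for (i = 0; i < count; i++) {
		char c;

		unsafe_get_user(c, src + i, out);	/* no per-byte clac */
		dst[i] = c;
		if (!c)
			break;
	}
	ret = i;
out:
	user_access_end();			/* single clac */
	return ret;
}
```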

- 1 September 2015, 1 commit

Committed by Alexei Starovoitov

To fix build errors:

    kernel/built-in.o: In function `bpf_trace_printk':
    bpf_trace.c:(.text+0x11a254): undefined reference to `strncpy_from_unsafe'
    kernel/built-in.o: In function `fetch_memory_string':
    trace_kprobe.c:(.text+0x11acf8): undefined reference to `strncpy_from_unsafe'

move strncpy_from_unsafe() next to probe_kernel_read()/probe_kernel_write(), which use the same memory access style.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Fixes: 1a6877b9 ("lib: introduce strncpy_from_unsafe()")
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 29 August 2015, 1 commit

Committed by Alexei Starovoitov

Generalize FETCH_FUNC_NAME(memory, string) into strncpy_from_unsafe() and fix sparse warnings that were present in the original implementation.

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
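
A hedged usage sketch: unlike strncpy_from_user(), the _unsafe variant takes a possibly-invalid kernel address and relies on fault handling rather than an access_ok() range check. The tracer context, helper name, and buffer size here are all hypothetical:

```c
/*
 * Hypothetical tracing helper: read a NUL-terminated string from an
 * unchecked kernel address (e.g. a pointer argument captured by a
 * kprobe) without crashing on a bad pointer.
 */
static void dump_kernel_string(unsigned long addr)
{
	char buf[64];
	long len;

	len = strncpy_from_unsafe(buf, (const void *)addr, sizeof(buf));
	if (len < 0)
		pr_warn("faulted reading %lx\n", addr);
	else
		pr_info("string at %lx: %s\n", addr, buf);
}
```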

- 13 February 2015, 1 commit

Committed by Rasmus Villemoes

strncpy_from_user.c only needs EXPORT_SYMBOL, so just include compiler.h and export.h instead of the whole module.h machinery.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 27 May 2012, 1 commit

Committed by Linus Torvalds

This changes the interfaces in <asm/word-at-a-time.h> to be a bit more complicated, but a lot more generic. In particular, it allows us to really do the operations efficiently on both little-endian and big-endian machines, pretty much regardless of machine details. For example, if you can rely on a fast population count instruction on your architecture, this will allow you to make your optimized <asm/word-at-a-time.h> file with that.

NOTE! The "generic" version in include/asm-generic/word-at-a-time.h is not truly generic: it actually only works on big-endian. Why? Because on little-endian the generic algorithms are wasteful, since you can inevitably do better. The x86 implementation is an example of that.

(The only truly non-generic part of the asm-generic implementation is the "find_zero()" function, and you could make a little-endian version of it. And if the Kbuild infrastructure allowed us to pick a particular header file, that would be lovely.)

The <asm/word-at-a-time.h> functions are as follows:

 - WORD_AT_A_TIME_CONSTANTS: specific constants that the algorithm uses.

 - has_zero(): take a word, and determine if it has a zero byte in it. It gets the word, the pointer to the constant pool, and a pointer to an intermediate "data" field it can set. This is the "quick-and-dirty" zero tester: it's what is run inside the hot loops.

 - prep_zero_mask(): take the word, the data that has_zero() produced, and the constant pool, and generate an *exact* mask of which byte had the first zero. This is run directly *outside* the loop, and allows the has_zero() function to answer the "is there a zero byte" question without necessarily getting exactly *which* byte is the first one to contain a zero.

   If you do multiple byte lookups concurrently (e.g. hash_name(), which looks for both NUL and '/' bytes), after you've done the prep_zero_mask() phase, the results of those can be or'ed together to get the "either or" case.

 - The result from prep_zero_mask() can then be fed into find_zero() (to find the byte offset of the first byte that was zero) or into zero_bytemask() (to find the bytemask of the bytes preceding the zero byte).

   The existence of zero_bytemask() is optional, and it is not necessary for the normal string routines. But dentry name hashing needs it, so if you enable DENTRY_WORD_AT_A_TIME you need to expose it.

This changes the generic strncpy_from_user() function and the dentry hashing functions to use these modified word-at-a-time interfaces. This gets us back to the optimized state of the x86 strncpy that we lost in the previous commit when moving over to the generic version.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
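
A hedged sketch of how the pieces compose, shaped like the inner loop of the generic strncpy_from_user(); create_zero_mask() is the intermediate step the generic header places between prep_zero_mask() and find_zero(), and alignment set-up and error handling are omitted here:

```c
/*
 * Find the offset of the first NUL byte in an aligned buffer,
 * scanning one word at a time (sketch; no tail/alignment handling).
 */
static long find_nul(const unsigned long *src, long max_words)
{
	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
	long i;

	for (i = 0; i < max_words; i++) {
		unsigned long c = src[i];
		unsigned long data;

		/* Hot-loop test: cheap "is any byte zero?" check. */
		if (has_zero(c, &data, &constants)) {
			/* The exact work happens once, outside the loop. */
			data = prep_zero_mask(c, data, &constants);
			data = create_zero_mask(data);
			return i * sizeof(long) + find_zero(data);
		}
	}
	return -1;	/* no NUL in the scanned region */
}
```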

- 25 May 2012, 3 commits

Committed by David S. Miller

To use this, an architecture simply needs to:

 1) Provide a user_addr_max() implementation via asm/uaccess.h

 2) Add "select GENERIC_STRNCPY_FROM_USER" to their arch Kconfig

 3) Remove the existing strncpy_from_user() implementation and symbol exports their architecture had.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: David Howells <dhowells@redhat.com>
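
A hedged sketch of what steps 1 and 2 might look like for a hypothetical architecture "foo"; the addr_limit representation varies per architecture, so the user_addr_max() body here is only indicative:

```c
/* arch/foo/include/asm/uaccess.h -- step 1 (illustrative) */
#define user_addr_max()	(current_thread_info()->addr_limit.seg)

/*
 * Step 2 is a Kconfig change, shown as a comment since it is not C:
 *
 *     config FOO
 *             def_bool y
 *             select GENERIC_STRNCPY_FROM_USER
 */
```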

Committed by David S. Miller

And make sure that everything using it explicitly includes that header file.

Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by David S. Miller

Hide details of the maximum user address calculation in a new asm/uaccess.h interface named user_addr_max().

Provide a little-endian implementation of find_zero(), which should work but can probably be improved.

Abstract the alignment check behind an IS_UNALIGNED() macro.

Kill a double semicolon, noticed by David Howells.

Signed-off-by: David S. Miller <davem@davemloft.net>

- 24 May 2012, 1 commit

Committed by David S. Miller

Compute a mask that will only have 0x80 in the bytes which had a zero in them. The formula is:

    ~(((x & 0x7f7f7f7f) + 0x7f7f7f7f) | x | 0x7f7f7f7f)

In the inner word iteration, we have to compute the "x | 0x7f7f7f7f" part anyway, so we can reuse that in the above calculation.

Once we have this mask, we perform divide and conquer to find the highest 0x80 location.

Signed-off-by: David S. Miller <davem@davemloft.net>
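
The formula can be checked in isolation; this is a standalone user-space demo of the 32-bit case, not kernel code:

```c
#include <stdio.h>

/*
 * 0x80 ends up in exactly the bytes of x that were zero (assumes the
 * usual 32-bit unsigned int).  (x & 0x7f7f7f7f) + 0x7f7f7f7f cannot
 * carry across byte boundaries (0x7f + 0x7f = 0xfe), so each byte is
 * handled independently.
 */
static unsigned int zero_bytes_mask(unsigned int x)
{
	unsigned int rhs = x | 0x7f7f7f7f;	/* the loop needs this anyway */

	return ~(((x & 0x7f7f7f7f) + 0x7f7f7f7f) | rhs);
}

int main(void)
{
	printf("%08x\n", zero_bytes_mask(0x41004200));	/* -> 00800080 */
	printf("%08x\n", zero_bytes_mask(0x41424344));	/* -> 00000000 */
	return 0;
}
```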

- 23 May 2012, 1 commit

Committed by David S. Miller

Linus removed the end-of-address-space hackery from fs/namei.c:do_getname(), so we really have to validate these edge conditions and cannot cheat any more (as x86 used to as well).

Move to a common C implementation like x86 did. And if both src and dst are sufficiently aligned we'll do word-at-a-time copies and checks as well.

Signed-off-by: David S. Miller <davem@davemloft.net>

- 11 December 2009, 1 commit

Committed by David S. Miller

This mirrors x86 commit 9f0cf4ad ("x86: Use __builtin_object_size() to validate the buffer size for copy_from_user()").

Signed-off-by: David S. Miller <davem@davemloft.net>
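
A simplified illustration of the technique; the real code lives in the arch uaccess headers, and the helper names here follow the x86 pattern of the time rather than any exact signature:

```c
static inline unsigned long
copy_from_user(void *to, const void __user *from, unsigned long n)
{
	/* -1 when the compiler cannot see the destination's size. */
	int sz = __builtin_object_size(to, 0);

	if (likely(sz == -1 || sz >= n))
		return __copy_from_user(to, from, n);

	/* Destination provably too small: warn instead of overflowing. */
	copy_from_user_overflow();
	return n;
}
```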