Commit 662741a0 authored by Alexander Potapenko, committed by Zheng Zengkai

kfence, kasan: make KFENCE compatible with KASAN

mainline inclusion
from mainline-v5.12-rc1
commit 2b830526
category: feature
bugzilla: 181005 https://gitee.com/openeuler/kernel/issues/I4EUY7

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2b8305260fb37fc20e13f71e13073304d0a031c8

-----------------------------------------------

Make KFENCE compatible with KASAN. Currently this helps test KFENCE
itself, where KASAN can catch potential corruptions to KFENCE state, or
other corruptions that may be a result of freepointer corruptions in the
main allocators.
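
In short, compatibility is achieved by having each KASAN hook return early for KFENCE-managed addresses, since KFENCE objects live in a dedicated pool and have no KASAN shadow metadata. A minimal sketch of that guard pattern follows; the hook name below is hypothetical and only for illustration, while is_kfence_address() is the real helper from <linux/kfence.h> used throughout the diff:

	#include <linux/kfence.h>

	/*
	 * Hypothetical hook, for illustration only. The real hooks patched
	 * below are kasan_poison_shadow(), kasan_unpoison_shadow(),
	 * __kasan_slab_free(), __kasan_kmalloc() and kasan_record_aux_stack().
	 */
	static bool example_kasan_hook(const void *addr)
	{
		/* KFENCE objects carry no KASAN shadow; leave them to KFENCE. */
		if (is_kfence_address(addr))
			return false;

		/* ... regular KASAN shadow/metadata handling would follow ... */
		return true;
	}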

[akpm@linux-foundation.org: merge fixup]
[andreyknvl@google.com: untag addresses for KFENCE]
  Link: https://lkml.kernel.org/r/9dc196006921b191d25d10f6e611316db7da2efc.1611946152.git.andreyknvl@google.com

Link: https://lkml.kernel.org/r/20201103175841.3495947-7-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joern Engel <joern@purestorage.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
	mm/kasan/kasan.h
	mm/kasan/shadow.c
[Peng Liu: cherry-pick from 2b830526]
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Yingjie Shang <1415317271@qq.com>
Reviewed-by: Bixuan Cui <cuibixuan@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent 47961d3b
@@ -5,7 +5,7 @@ config HAVE_ARCH_KFENCE
 
 menuconfig KFENCE
 	bool "KFENCE: low-overhead sampling-based memory safety error detector"
-	depends on HAVE_ARCH_KFENCE && !KASAN && (SLAB || SLUB)
+	depends on HAVE_ARCH_KFENCE && (SLAB || SLUB)
 	select STACKTRACE
 	help
 	  KFENCE is a low-overhead sampling-based detector of heap out-of-bounds
@@ -124,6 +124,10 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	 */
 	address = reset_tag(address);
 
+	/* Skip KFENCE memory if called explicitly outside of sl*b. */
+	if (is_kfence_address(address))
+		return;
+
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);
@@ -141,6 +145,14 @@ void kasan_unpoison_shadow(const void *address, size_t size)
 	 */
 	address = reset_tag(address);
 
+	/*
+	 * Skip KFENCE memory if called explicitly outside of sl*b. Also note
+	 * that calls to ksize(), where size is not a multiple of machine-word
+	 * size, would otherwise poison the invalid portion of the word.
+	 */
+	if (is_kfence_address(address))
+		return;
+
 	kasan_poison_shadow(address, size, tag);
 
 	if (size & KASAN_SHADOW_MASK) {
@@ -396,6 +408,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 	tagged_object = object;
 	object = reset_tag(object);
 
+	if (is_kfence_address(object))
+		return false;
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
 		kasan_report_invalid_free(tagged_object, ip);
@@ -444,6 +459,9 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	if (unlikely(object == NULL))
 		return NULL;
 
+	if (is_kfence_address(kasan_reset_tag(object)))
+		return (void *)object;
+
 	redzone_start = round_up((unsigned long)(object + size),
 				KASAN_SHADOW_SCALE_SIZE);
 	redzone_end = round_up((unsigned long)object + cache->object_size,
@@ -21,6 +21,7 @@
 #include <linux/init.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
+#include <linux/kfence.h>
 #include <linux/kmemleak.h>
 #include <linux/linkage.h>
 #include <linux/memblock.h>
@@ -332,7 +333,7 @@ void kasan_record_aux_stack(void *addr)
 	struct kasan_alloc_meta *alloc_info;
 	void *object;
 
-	if (!(page && PageSlab(page)))
+	if (is_kfence_address(addr) || !(page && PageSlab(page)))
 		return;
 
 	cache = page->slab_cache;