1. 23 December 2020, 7 commits
  2. 08 August 2020, 3 commits
  3. 03 June 2020, 1 commit
  4. 08 April 2020, 1 commit
  5. 03 April 2020, 1 commit
  6. 23 January 2020, 1 commit
  7. 18 December 2019, 2 commits
    • kasan: use apply_to_existing_page_range() for releasing vmalloc shadow · e218f1ca
      By Daniel Axtens
      kasan_release_vmalloc uses apply_to_page_range to release vmalloc
      shadow.  Unfortunately, apply_to_page_range can allocate memory to fill
      in page table entries, which is not what we want.
      
      Also, kasan_release_vmalloc is called under free_vmap_area_lock, so if
      apply_to_page_range does allocate memory, we get a sleep-in-atomic bug:
      
      	BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
      	in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 15087, name:
      
      	Call Trace:
      	 __dump_stack lib/dump_stack.c:77 [inline]
      	 dump_stack+0x199/0x216 lib/dump_stack.c:118
      	 ___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
      	 __might_sleep+0x95/0x190 kernel/sched/core.c:6753
      	 prepare_alloc_pages mm/page_alloc.c:4681 [inline]
      	 __alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
      	 alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
      	 alloc_pages include/linux/gfp.h:532 [inline]
      	 __get_free_pages+0xc/0x40 mm/page_alloc.c:4786
      	 __pte_alloc_one_kernel include/asm-generic/pgalloc.h:21 [inline]
      	 pte_alloc_one_kernel include/asm-generic/pgalloc.h:33 [inline]
      	 __pte_alloc_kernel+0x1d/0x200 mm/memory.c:459
      	 apply_to_pte_range mm/memory.c:2031 [inline]
      	 apply_to_pmd_range mm/memory.c:2068 [inline]
      	 apply_to_pud_range mm/memory.c:2088 [inline]
      	 apply_to_p4d_range mm/memory.c:2108 [inline]
      	 apply_to_page_range+0x77d/0xa00 mm/memory.c:2133
      	 kasan_release_vmalloc+0xa7/0xc0 mm/kasan/common.c:970
      	 __purge_vmap_area_lazy+0xcbb/0x1f30 mm/vmalloc.c:1313
      	 try_purge_vmap_area_lazy mm/vmalloc.c:1332 [inline]
      	 free_vmap_area_noflush+0x2ca/0x390 mm/vmalloc.c:1368
      	 free_unmap_vmap_area mm/vmalloc.c:1381 [inline]
      	 remove_vm_area+0x1cc/0x230 mm/vmalloc.c:2209
      	 vm_remove_mappings mm/vmalloc.c:2236 [inline]
      	 __vunmap+0x223/0xa20 mm/vmalloc.c:2299
      	 __vfree+0x3f/0xd0 mm/vmalloc.c:2356
      	 __vmalloc_area_node mm/vmalloc.c:2507 [inline]
      	 __vmalloc_node_range+0x5d5/0x810 mm/vmalloc.c:2547
      	 __vmalloc_node mm/vmalloc.c:2607 [inline]
      	 __vmalloc_node_flags mm/vmalloc.c:2621 [inline]
      	 vzalloc+0x6f/0x80 mm/vmalloc.c:2666
      	 alloc_one_pg_vec_page net/packet/af_packet.c:4233 [inline]
      	 alloc_pg_vec net/packet/af_packet.c:4258 [inline]
      	 packet_set_ring+0xbc0/0x1b50 net/packet/af_packet.c:4342
      	 packet_setsockopt+0xed7/0x2d90 net/packet/af_packet.c:3695
      	 __sys_setsockopt+0x29b/0x4d0 net/socket.c:2117
      	 __do_sys_setsockopt net/socket.c:2133 [inline]
      	 __se_sys_setsockopt net/socket.c:2130 [inline]
      	 __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2130
      	 do_syscall_64+0xfa/0x780 arch/x86/entry/common.c:294
      	 entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Switch to using the apply_to_existing_page_range() helper instead, which
      won't allocate memory.
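
      A minimal sketch of the resulting release path (the callback name here is
      illustrative; apply_to_existing_page_range() takes the same arguments as
      apply_to_page_range() but only visits page table entries that already
      exist, so it never allocates):

      	/* Free one already-present shadow page; called for each existing PTE. */
      	static int depopulate_shadow_pte(pte_t *ptep, unsigned long addr, void *unused)
      	{
      		unsigned long page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);

      		spin_lock(&init_mm.page_table_lock);
      		if (likely(!pte_none(*ptep))) {
      			pte_clear(&init_mm, addr, ptep);
      			free_page(page);
      		}
      		spin_unlock(&init_mm.page_table_lock);

      		return 0;
      	}

      	static void release_shadow_range(unsigned long start, unsigned long end)
      	{
      		/* Was apply_to_page_range(), which may allocate page tables. */
      		apply_to_existing_page_range(&init_mm, start, end - start,
      					     depopulate_shadow_pte, NULL);
      	}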
      
      [akpm@linux-foundation.org: s/apply_to_existing_pages/apply_to_existing_page_range/]
      Link: http://lkml.kernel.org/r/20191205140407.1874-2-dja@axtens.net
      Fixes: 3c5c3cfb ("kasan: support backing vmalloc space with real shadow memory")
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e218f1ca
    • kasan: fix crashes on access to memory mapped by vm_map_ram() · d98c9e83
      By Andrey Ryabinin
      With CONFIG_KASAN_VMALLOC=y any use of memory obtained via vm_map_ram()
      will crash because there is no shadow backing that memory.
      
      Instead of sprinkling additional kasan_populate_vmalloc() calls all over
      the vmalloc code, move it into alloc_vmap_area(). This will fix
      vm_map_ram() and simplify the code a bit.
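
      A minimal sketch of the idea (placement and error handling are approximate):
      every address range handed out in vmalloc space, including the ranges that
      vm_map_ram() grabs, passes through alloc_vmap_area(), so populating the
      shadow there covers all users:

      	/* In alloc_vmap_area(), once [addr, addr + size) has been reserved: */
      	ret = kasan_populate_vmalloc(addr, size);
      	if (ret) {
      		free_vmap_area(va);	/* give the range back on failure */
      		return ERR_PTR(ret);
      	}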
      
      [aryabinin@virtuozzo.com: v2]
        Link: http://lkml.kernel.org/r/20191205095942.1761-1-aryabinin@virtuozzo.com
      Link: http://lkml.kernel.org/r/20191204204534.32202-1-aryabinin@virtuozzo.com
      Fixes: 3c5c3cfb ("kasan: support backing vmalloc space with real shadow memory")
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Qian Cai <cai@lca.pw>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d98c9e83
  8. 05 December 2019, 1 commit
  9. 02 December 2019, 1 commit
    • kasan: support backing vmalloc space with real shadow memory · 3c5c3cfb
      By Daniel Axtens
      Patch series "kasan: support backing vmalloc space with real shadow
      memory", v11.
      
      Currently, vmalloc space is backed by the early shadow page.  This means
      that kasan is incompatible with VMAP_STACK.
      
      This series provides a mechanism to back vmalloc space with real,
      dynamically allocated memory.  I have only wired up x86, because that's
      the only currently supported arch I can work with easily, but it's very
      easy to wire up other architectures, and it appears that there is some
      work-in-progress code to do this on arm64 and s390.
      
      This has been discussed before in the context of VMAP_STACK:
       - https://bugzilla.kernel.org/show_bug.cgi?id=202009
       - https://lkml.org/lkml/2018/7/22/198
       - https://lkml.org/lkml/2019/7/19/822
      
      In terms of implementation details:
      
      Most mappings in vmalloc space are small, requiring less than a full
      page of shadow space.  Allocating a full shadow page per mapping would
      therefore be wasteful.  Furthermore, to ensure that different mappings
      use different shadow pages, mappings would have to be aligned to
      KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
      
      Instead, share backing space across multiple mappings.  Allocate a
      backing page when a mapping in vmalloc space uses a particular page of
      the shadow region.  This page can be shared by other vmalloc mappings
      later on.
      
      We hook in to the vmap infrastructure to lazily clean up unused shadow
      memory.
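
      A rough sketch of the allocation side described above (the poison value and
      locking details are approximate): a page-table walk callback installs a
      freshly allocated shadow page only when the shadow PTE is still empty, so
      later mappings whose shadow falls into the same page simply reuse it:

      	static int shadow_populate_pte(pte_t *ptep, unsigned long addr, void *unused)
      	{
      		unsigned long page;
      		pte_t pte;

      		if (likely(!pte_none(*ptep)))
      			return 0;		/* shadow already backed: share it */

      		page = __get_free_page(GFP_KERNEL);
      		if (!page)
      			return -ENOMEM;

      		/* Poison the new shadow page until something is mapped over it. */
      		memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
      		pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

      		spin_lock(&init_mm.page_table_lock);
      		if (likely(pte_none(*ptep))) {
      			set_pte_at(&init_mm, addr, ptep, pte);
      			page = 0;
      		}
      		spin_unlock(&init_mm.page_table_lock);

      		if (page)
      			free_page(page);	/* raced with another populator */
      		return 0;
      	}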
      
      Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:
      
       - Turning on KASAN, inline instrumentation, without vmalloc, introduces
         a 4.1x-4.2x slowdown in vmalloc operations.
      
       - Turning this on introduces the following slowdowns over KASAN:
           * ~1.76x slower single-threaded (test_vmalloc.sh performance)
           * ~2.18x slower when both cpus are performing operations
             simultaneously (test_vmalloc.sh sequential_test_order=1)
      
      This is unfortunate but given that this is a debug feature only, not the
      end of the world.  The benchmarks are also a stress-test for the vmalloc
      subsystem: they're not indicative of an overall 2x slowdown!
      
      This patch (of 4):
      
      Hook into vmalloc and vmap, and dynamically allocate real shadow memory
      to back the mappings.
      
      Most mappings in vmalloc space are small, requiring less than a full
      page of shadow space.  Allocating a full shadow page per mapping would
      therefore be wasteful.  Furthermore, to ensure that different mappings
      use different shadow pages, mappings would have to be aligned to
      KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
      
      Instead, share backing space across multiple mappings.  Allocate a
      backing page when a mapping in vmalloc space uses a particular page of
      the shadow region.  This page can be shared by other vmalloc mappings
      later on.
      
      We hook in to the vmap infrastructure to lazily clean up unused shadow
      memory.
      
      To avoid the difficulties around swapping mappings around, this code
      expects that the part of the shadow region that covers the vmalloc space
      will not be covered by the early shadow page, but will be left unmapped.
      This will require changes in arch-specific code.
      
      This allows KASAN with VMAP_STACK, and may be helpful for architectures
      that do not have a separate module space (e.g.  powerpc64, which I am
      currently working on).  It also allows relaxing the module alignment
      back to PAGE_SIZE.
      
      Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:
      
       - Turning on KASAN, inline instrumentation, without vmalloc, introduces
         a 4.1x-4.2x slowdown in vmalloc operations.
      
       - Turning this on introduces the following slowdowns over KASAN:
           * ~1.76x slower single-threaded (test_vmalloc.sh performance)
           * ~2.18x slower when both cpus are performing operations
             simultaneously (test_vmalloc.sh sequential_test_order=1)
      
      This is unfortunate but given that this is a debug feature only, not the
      end of the world.
      
      The full benchmark results are:
      
      Performance
      
                                    No KASAN      KASAN original x baseline  KASAN vmalloc x baseline    x KASAN
      
      fix_size_alloc_test             662004            11404956      17.23       19144610      28.92       1.68
      full_fit_alloc_test             710950            12029752      16.92       13184651      18.55       1.10
      long_busy_list_alloc_test      9431875            43990172       4.66       82970178       8.80       1.89
      random_size_alloc_test         5033626            23061762       4.58       47158834       9.37       2.04
      fix_align_alloc_test           1252514            15276910      12.20       31266116      24.96       2.05
      random_size_align_alloc_te     1648501            14578321       8.84       25560052      15.51       1.75
      align_shift_alloc_test             147                 830       5.65           5692      38.72       6.86
      pcpu_alloc_test                  80732              125520       1.55         140864       1.74       1.12
      Total Cycles              119240774314        763211341128       6.40  1390338696894      11.66       1.82
      
      Sequential, 2 cpus
      
                                    No KASAN      KASAN original x baseline  KASAN vmalloc x baseline    x KASAN
      
      fix_size_alloc_test            1423150            14276550      10.03       27733022      19.49       1.94
      full_fit_alloc_test            1754219            14722640       8.39       15030786       8.57       1.02
      long_busy_list_alloc_test     11451858            52154973       4.55      107016027       9.34       2.05
      random_size_alloc_test         5989020            26735276       4.46       68885923      11.50       2.58
      fix_align_alloc_test           2050976            20166900       9.83       50491675      24.62       2.50
      random_size_align_alloc_te     2858229            17971700       6.29       38730225      13.55       2.16
      align_shift_alloc_test             405                6428      15.87          26253      64.82       4.08
      pcpu_alloc_test                 127183              151464       1.19         216263       1.70       1.43
      Total Cycles               54181269392        308723699764       5.70   650772566394      12.01       2.11
      fix_size_alloc_test            1420404            14289308      10.06       27790035      19.56       1.94
      full_fit_alloc_test            1736145            14806234       8.53       15274301       8.80       1.03
      long_busy_list_alloc_test     11404638            52270785       4.58      107550254       9.43       2.06
      random_size_alloc_test         6017006            26650625       4.43       68696127      11.42       2.58
      fix_align_alloc_test           2045504            20280985       9.91       50414862      24.65       2.49
      random_size_align_alloc_te     2845338            17931018       6.30       38510276      13.53       2.15
      align_shift_alloc_test             472                3760       7.97           9656      20.46       2.57
      pcpu_alloc_test                 118643              132732       1.12         146504       1.23       1.10
      Total Cycles               54040011688        309102805492       5.72   651325675652      12.05       2.11
      
      [dja@axtens.net: fixups]
        Link: http://lkml.kernel.org/r/20191120052719.7201-1-dja@axtens.net
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
      Link: http://lkml.kernel.org/r/20191031093909.9228-2-dja@axtens.net
      Signed-off-by: Mark Rutland <mark.rutland@arm.com> [shadow rework]
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Co-developed-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Vasily Gorbik <gor@linux.ibm.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Qian Cai <cai@lca.pw>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3c5c3cfb
  10. 25 September 2019, 3 commits
  11. 25 August 2019, 1 commit
  12. 13 July 2019, 2 commits
  13. 02 June 2019, 1 commit
  14. 29 April 2019, 1 commit
    • mm/kasan: Simplify stacktrace handling · 880e049c
      By Thomas Gleixner
      Replace the indirection through struct stack_trace by using the
      storage-array-based interfaces.
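
      The KASAN side of the change boils down to something like the following
      sketch (array size and helper names as used in mm/kasan at the time, shown
      here for illustration): save_stack() fills a plain unsigned long array via
      stack_trace_save() instead of going through a struct stack_trace:

      	static inline depot_stack_handle_t save_stack(gfp_t flags)
      	{
      		unsigned long entries[KASAN_STACK_DEPTH];
      		unsigned int nr_entries;

      		/* Was: struct stack_trace trace; save_stack_trace(&trace); */
      		nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
      		nr_entries = filter_irq_stacks(entries, nr_entries);
      		return stack_depot_save(entries, nr_entries, flags);
      	}
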
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: kasan-dev@googlegroups.com
      Cc: linux-mm@kvack.org
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: iommu@lists.linux-foundation.org
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: linux-btrfs@vger.kernel.org
      Cc: dm-devel@redhat.com
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: intel-gfx@lists.freedesktop.org
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: dri-devel@lists.freedesktop.org
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: linux-arch@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190425094801.963261479@linutronix.de
      880e049c
  15. 15 April 2019, 1 commit
  16. 03 April 2019, 1 commit
    • x86/uaccess, kasan: Fix KASAN vs SMAP · 57b78a62
      By Peter Zijlstra
      KASAN inserts extra code for every LOAD/STORE emitted by the compiler.
      Much of this code is simple and safe to run with AC=1, however the
      kasan_report() function, called on error, is most certainly not safe
      to call with AC=1.
      
      Therefore, wrap kasan_report() in user_access_{save,restore}(), which for
      x86 SMAP saves/restores EFLAGS and clears AC before calling the real
      function.

      Also ensure all these functions are built without the __fentry__ hook; the
      function tracer is not safe here either.
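
      A minimal sketch of the wrapping (the split into an inner __kasan_report()
      is simplified here):

      	/* Entry point from instrumented code: may run with EFLAGS.AC set. */
      	void kasan_report(unsigned long addr, size_t size, bool is_write,
      			  unsigned long ip)
      	{
      		unsigned long ua_flags = user_access_save();	/* clears AC on x86/SMAP */

      		__kasan_report(addr, size, is_write, ip);	/* the real, AC-unsafe body */

      		user_access_restore(ua_flags);
      	}
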
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      57b78a62
  17. 06 March 2019, 1 commit
    • kasan: fix kasan_check_read/write definitions · bcf6f55a
      By Arnd Bergmann
      Building little-endian allmodconfig kernels on arm64 started failing
      with the generated atomic.h implementation, since we now try to call
      kasan helpers from the EFI stub:
      
        aarch64-linux-gnu-ld: drivers/firmware/efi/libstub/arm-stub.stub.o: in function `atomic_set':
        include/generated/atomic-instrumented.h:44: undefined reference to `__efistub_kasan_check_write'
      
      I suspect that we get similar problems in other files that explicitly
      disable KASAN for some reason but call atomic_t based helper functions.
      
      We can fix this by checking the predefined __SANITIZE_ADDRESS__ macro
      that the compiler sets instead of checking CONFIG_KASAN, but this in
      turn requires a small hack in mm/kasan/common.c so we do see the extern
      declaration there instead of the inline function.
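
      A condensed sketch of the fix (the internal guard define is the small hack
      mentioned above):

      	/* include/linux/kasan-checks.h */
      	#if defined(__SANITIZE_ADDRESS__) || defined(__KASAN_INTERNAL)
      	void kasan_check_read(const volatile void *p, unsigned int size);
      	void kasan_check_write(const volatile void *p, unsigned int size);
      	#else
      	static inline void kasan_check_read(const volatile void *p, unsigned int size) { }
      	static inline void kasan_check_write(const volatile void *p, unsigned int size) { }
      	#endif

      	/* mm/kasan/common.c: this file is built without instrumentation, so
      	 * force the extern declarations rather than the empty inline stubs. */
      	#define __KASAN_INTERNAL
      	#include <linux/kasan.h>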
      
      Link: http://lkml.kernel.org/r/20181211133453.2835077-1-arnd@arndb.de
      Fixes: b1864b828644 ("locking/atomics: build atomic headers as required")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reported-by: Anders Roxell <anders.roxell@linaro.org>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bcf6f55a
  18. 22 February 2019, 1 commit
  19. 09 January 2019, 2 commits
  20. 29 December 2018, 7 commits
    • kasan: add SPDX-License-Identifier mark to source files · e886bf9d
      By Andrey Konovalov
      This patch adds a "SPDX-License-Identifier: GPL-2.0" mark to all source
      files under mm/kasan.
      
      Link: http://lkml.kernel.org/r/bce2d1e618afa5142e81961ab8fa4b4165337380.1544099024.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e886bf9d
    • kasan: add __must_check annotations to kasan hooks · 66afc7f1
      By Andrey Konovalov
      This patch adds __must_check annotations to kasan hooks that return a
      pointer to make sure that a tagged pointer always gets propagated.
      
      Link: http://lkml.kernel.org/r/03b269c5e453945f724bfca3159d4e1333a8fb1c.1544099024.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Suggested-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      66afc7f1
    • kasan, mm, arm64: tag non slab memory allocated via pagealloc · 2813b9c0
      By Andrey Konovalov
      Tag-based KASAN doesn't check memory accesses through pointers tagged with
      0xff.  When page_address is used to get pointer to memory that corresponds
      to some page, the tag of the resulting pointer gets set to 0xff, even
      though the allocated memory might have been tagged differently.
      
      For slab pages it's impossible to recover the correct tag to return from
      page_address, since the page might contain multiple slab objects tagged
      with different values, and we can't know in advance which one of them is
      going to get accessed.  For non slab pages however, we can recover the tag
      in page_address, since the whole page was marked with the same tag.
      
      This patch adds tagging to non slab memory allocated with pagealloc.  To
      set the tag of the pointer returned from page_address, the tag gets stored
      to page->flags when the memory gets allocated.
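
      Sketched out (the random-tag helper is an internal mm/kasan detail, shown
      only for illustration), the allocation side stores one tag per page so that
      page_address()-derived pointers can be re-tagged later:

      	void kasan_alloc_pages(struct page *page, unsigned int order)
      	{
      		u8 tag;
      		unsigned long i;

      		if (unlikely(PageHighMem(page)))
      			return;

      		tag = random_tag();			/* pick one tag for the whole block */
      		for (i = 0; i < (1UL << order); i++)
      			page_kasan_tag_set(page + i, tag);	/* remember it in page->flags */
      		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
      	}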
      
      Link: http://lkml.kernel.org/r/d758ddcef46a5abc9970182b9137e2fbee202a2c.1544099024.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2813b9c0
    • kasan: add hooks implementation for tag-based mode · 7f94ffbc
      By Andrey Konovalov
      This commit adds tag-based KASAN specific hooks implementation and
      adjusts common generic and tag-based KASAN ones.
      
      1. When a new slab cache is created, tag-based KASAN rounds up the size of
         the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).
      
      2. On each kmalloc, tag-based KASAN generates a random tag, sets the shadow
         memory that corresponds to this object to this tag, and embeds this tag
         value into the top byte of the returned pointer.
      
      3. On each kfree tag-based KASAN poisons the shadow memory with a random
         tag to allow detection of use-after-free bugs.
      
      The rest of the logic of the hook implementation is very much similar to
      the one provided by generic KASAN. Tag-based KASAN saves allocation and
      free stack metadata to the slab object the same way generic KASAN does.
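
      A compressed sketch of the kmalloc-side hook (redzone handling and metadata
      saving omitted; random_tag() and set_tag() stand in for internal mm/kasan
      helpers):

      	void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
      			    size_t size, gfp_t flags)
      	{
      		u8 tag = random_tag();			/* 2. generate a random tag */

      		/* mark the object's shadow memory with that tag ... */
      		kasan_unpoison_shadow(set_tag(object, tag), size);

      		/* ... and hand back a pointer carrying the tag in its top byte */
      		return set_tag(object, tag);
      	}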
      
      Link: http://lkml.kernel.org/r/bda78069e3b8422039794050ddcb2d53d053ed41.1544099024.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f94ffbc
    • kasan: initialize shadow to 0xff for tag-based mode · 080eb83f
      By Andrey Konovalov
      A tag-based KASAN shadow memory cell contains a memory tag that corresponds
      to the tag in the top byte of the pointer that points to that memory.  The
      native top byte value of kernel pointers is 0xff, so with tag-based KASAN we
      need to initialize shadow memory to 0xff.
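
      In sketch form (the symbol that gets memset differs per architecture and is
      shown with an illustrative name):

      	/* Shadow is initialized to a mode-specific value instead of plain zero. */
      	#ifdef CONFIG_KASAN_SW_TAGS
      	#define KASAN_SHADOW_INIT	0xFF	/* native top byte of kernel pointers */
      	#else
      	#define KASAN_SHADOW_INIT	0	/* generic KASAN: 0 == fully accessible */
      	#endif

      	/* ... early shadow pages then get filled with it: */
      	memset(early_shadow_page /* illustrative name */, KASAN_SHADOW_INIT, PAGE_SIZE);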
      
      [cai@lca.pw: arm64: skip kmemleak for KASAN again]
        Link: http://lkml.kernel.org/r/20181226020550.63712-1-cai@lca.pw
      Link: http://lkml.kernel.org/r/5cc1b789aad7c99cf4f3ec5b328b147ad53edb40.1544099024.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      080eb83f
    • kasan: move common generic and tag-based code to common.c · bffa986c
      By Andrey Konovalov
      Tag-based KASAN reuses a significant part of the generic KASAN code, so
      move the common parts to common.c without any functional changes.
      
      Link: http://lkml.kernel.org/r/114064d002356e03bb8cc91f7835e20dc61b51d9.1544099024.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bffa986c
    • kasan, mm: change hooks signatures · 0116523c
      By Andrey Konovalov
      Patch series "kasan: add software tag-based mode for arm64", v13.
      
      This patchset adds a new software tag-based mode to KASAN [1].  (Initially
      this mode was called KHWASAN, but it got renamed, see the naming rationale
      at the end of this section).
      
      The plan is to implement HWASan [2] for the kernel, with the incentive that
      it will have performance comparable to KASAN while at the same time
      consuming much less memory, trading that off for somewhat imprecise bug
      detection and arm64-only support.
      
      The underlying ideas of the approach used by software tag-based KASAN are:
      
      1. By using the Top Byte Ignore (TBI) arm64 CPU feature, we can store
         pointer tags in the top byte of each kernel pointer.
      
      2. Using shadow memory, we can store memory tags for each chunk of kernel
         memory.
      
      3. On each memory allocation, we can generate a random tag, embed it into
         the returned pointer and set the memory tags that correspond to this
         chunk of memory to the same value.
      
      4. By using compiler instrumentation, before each memory access we can add
         a check that the pointer tag matches the tag of the memory that is being
         accessed.
      
      5. On a tag mismatch we report an error.
      
      With this patchset the existing KASAN mode gets renamed to generic KASAN,
      with the word "generic" meaning that the implementation can be supported
      by any architecture as it is purely software.
      
      The new mode this patchset adds is called software tag-based KASAN.  The
      word "tag-based" refers to the fact that this mode uses tags embedded into
      the top byte of kernel pointers and the TBI arm64 CPU feature that allows
      to dereference such pointers.  The word "software" here means that shadow
      memory manipulation and tag checking on pointer dereference is done in
      software.  As it is the only tag-based implementation right now, "software
      tag-based" KASAN is sometimes referred to as simply "tag-based" in this
      patchset.
      
      A potential expansion of this mode is a hardware tag-based mode, which
      would use hardware memory tagging support (announced by Arm [3]) instead
      of compiler instrumentation and manual shadow memory manipulation.
      
      Same as generic KASAN, software tag-based KASAN is strictly a debugging
      feature.
      
      [1] https://www.kernel.org/doc/html/latest/dev-tools/kasan.html
      
      [2] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
      
      [3] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture-2018-developments-armv85a
      
      ====== Rationale
      
      On mobile devices generic KASAN's memory usage is a significant problem.
      One of the main reasons to have tag-based KASAN is to be able to perform a
      similar set of checks as the generic one does, but with lower memory
      requirements.
      
      Comment from Vishwath Mohan <vishwath@google.com>:
      
      I don't have data on-hand, but anecdotally both ASAN and KASAN have proven
      problematic to enable for environments that don't tolerate the increased
      memory pressure well.  This includes
      
      (a) Low-memory form factors - Wear, TV, Things, lower-tier phones like Go,
      (c) Connected components like Pixel's visual core [1].
      
      These are both places I'd love to have a low(er) memory footprint option at
      my disposal.
      
      Comment from Evgenii Stepanov <eugenis@google.com>:
      
      Looking at a live Android device under load, slab (according to
      /proc/meminfo) + kernel stack take 8-10% available RAM (~350MB).  KASAN's
      overhead of 2x - 3x on top of it is not insignificant.
      
      Not having this overhead enables near-production use - ex.  running
      KASAN/KHWASAN kernel on a personal, daily-use device to catch bugs that do
      not reproduce in test configuration.  These are the ones that often cost
      the most engineering time to track down.
      
      CPU overhead is bad, but generally tolerable.  RAM is critical, in our
      experience.  Once it gets low enough, OOM-killer makes your life
      miserable.
      
      [1] https://www.blog.google/products/pixel/pixel-visual-core-image-processing-and-machine-learning-pixel-2/
      
      ====== Technical details
      
      Software tag-based KASAN mode is implemented in a very similar way to the
      generic one. This patchset essentially does the following:
      
      1. TCR_TBI1 is set to enable Top Byte Ignore.
      
      2. Shadow memory is used (with a different scale, 1:16, so each shadow
         byte corresponds to 16 bytes of kernel memory) to store memory tags.
      
      3. All slab objects are aligned to shadow scale, which is 16 bytes.
      
      4. All pointers returned from the slab allocator are tagged with a random
         tag and the corresponding shadow memory is poisoned with the same value.
      
      5. Compiler instrumentation is used to insert tag checks. Either by
         calling callbacks or by inlining them (CONFIG_KASAN_OUTLINE and
         CONFIG_KASAN_INLINE flags are reused).
      
      6. When a tag mismatch is detected in callback instrumentation mode
         KASAN simply prints a bug report. In case of inline instrumentation,
         clang inserts a brk instruction, and KASAN has its own brk handler,
         which reports the bug.
      
      7. The memory in between slab objects is marked with a reserved tag, and
         acts as a redzone.
      
      8. When a slab object is freed it's marked with a reserved tag.
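
      Put together, the check inserted before an access is conceptually as
      follows (pseudocode for illustration, not the literal instrumentation;
      accesses through 0xff-tagged pointers are deliberately not checked):

      	/* Conceptual check for an access of `size` bytes at `addr`: */
      	u8 ptr_tag = (u64)addr >> 56;			/* tag travels in the top byte (TBI) */
      	u8 *shadow = kasan_mem_to_shadow((void *)addr);	/* one shadow byte per 16 bytes */

      	if (ptr_tag != 0xff && *shadow != ptr_tag)	/* memory tag vs. pointer tag */
      		kasan_report(addr, size, is_write, _RET_IP_);	/* or a brk in inline mode */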
      
      Bug detection is imprecise for two reasons:
      
      1. We won't catch some small out-of-bounds accesses that fall into the
         same shadow cell as the last byte of a slab object.
      
      2. We only have 1 byte to store tags, which means we have a 1/256
         probability of a tag match for an incorrect access (actually even
         slightly less due to reserved tag values).
      
      Despite that there's a particular type of bugs that tag-based KASAN can
      detect compared to generic KASAN: use-after-free after the object has been
      allocated by someone else.
      
      ====== Testing
      
      Some kernel developers voiced a concern that changing the top byte of
      kernel pointers may lead to subtle bugs that are difficult to discover.
      To address this concern deliberate testing has been performed.
      
      It doesn't seem feasible to do some kind of static checking to find
      potential issues with pointer tagging, so a dynamic approach was taken.
      All pointer comparisons/subtractions have been instrumented in an LLVM
      compiler pass and a kernel module that would print a bug report whenever
      two pointers with different tags are being compared/subtracted (ignoring
      comparisons with NULL pointers and with pointers obtained by casting an
      error code to a pointer type) has been used.  Then the kernel has been
      booted in QEMU and on an Odroid C2 board and syzkaller has been run.
      
      This yielded the following results.
      
      The two places that look interesting are:
      
      is_vmalloc_addr in include/linux/mm.h
      is_kernel_rodata in mm/util.c
      
      Here we compare a pointer with some fixed untagged values to make sure
      that the pointer lies in a particular part of the kernel address space.
      Since tag-based KASAN doesn't add tags to pointers that belong to rodata
      or vmalloc regions, this should work as is.  To make sure, debug checks
      have been added to those two functions to verify that the result doesn't
      change whether we operate on pointers with or without untagging.
      
      A few other cases that don't look that interesting:
      
      Comparing pointers to achieve unique sorting order of pointee objects
      (e.g. sorting locks addresses before performing a double lock):
      
      tty_ldisc_lock_pair_timeout in drivers/tty/tty_ldisc.c
      pipe_double_lock in fs/pipe.c
      unix_state_double_lock in net/unix/af_unix.c
      lock_two_nondirectories in fs/inode.c
      mutex_lock_double in kernel/events/core.c
      
      ep_cmp_ffd in fs/eventpoll.c
      fsnotify_compare_groups fs/notify/mark.c
      
      Nothing needs to be done here, since the tags embedded into pointers
      don't change, so the sorting order would still be unique.
      
      Checks that a pointer belongs to some particular allocation:
      
      is_sibling_entry in lib/radix-tree.c
      object_is_on_stack in include/linux/sched/task_stack.h
      
      Nothing needs to be done here either, since two pointers can only belong
      to the same allocation if they have the same tag.
      
      Overall, since the kernel boots and works, there are no critical bugs.
      As for the rest, the traditional kernel testing way (use until fails) is
      the only one that looks feasible.
      
      Another point here is that tag-based KASAN is available under a separate
      config option that needs to be deliberately enabled. Even though it might
      be used in a "near-production" environment to find bugs that are not found
      during fuzzing or running tests, it is still a debug tool.
      
      ====== Benchmarks
      
      The following numbers were collected on Odroid C2 board. Both generic and
      tag-based KASAN were used in inline instrumentation mode.
      
      Boot time [1]:
      * ~1.7 sec for clean kernel
      * ~5.0 sec for generic KASAN
      * ~5.0 sec for tag-based KASAN
      
      Network performance [2]:
      * 8.33 Gbits/sec for clean kernel
      * 3.17 Gbits/sec for generic KASAN
      * 2.85 Gbits/sec for tag-based KASAN
      
      Slab memory usage after boot [3]:
      * ~40 kb for clean kernel
      * ~105 kb (~260% overhead) for generic KASAN
      * ~47 kb (~20% overhead) for tag-based KASAN
      
      KASAN memory overhead consists of three main parts:
      1. Increased slab memory usage due to redzones.
      2. Shadow memory (the whole reserved once during boot).
      3. Quarantine (grows gradually until some preset limit; the larger the
         limit, the greater the chance to detect a use-after-free).
      
      Comparing tag-based vs generic KASAN for each of these points:
      1. 20% vs 260% overhead.
      2. 1/16th vs 1/8th of physical memory.
      3. Tag-based KASAN doesn't require quarantine.
      
      [1] Time before the ext4 driver is initialized.
      [2] Measured as `iperf -s & iperf -c 127.0.0.1 -t 30`.
      [3] Measured as `cat /proc/meminfo | grep Slab`.
      
      ====== Some notes
      
      A few notes:
      
      1. The patchset can be found here:
         https://github.com/xairy/kasan-prototype/tree/khwasan
      
      2. Building requires a recent Clang version (7.0.0 or later).
      
      3. Stack instrumentation is not supported yet and will be added later.
      
      This patch (of 25):
      
      Tag-based KASAN changes the value of the top byte of pointers returned
      from the kernel allocation functions (such as kmalloc).  This patch
      updates KASAN hooks signatures and their usage in SLAB and SLUB code to
      reflect that.
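
      For example, sketched on the kmalloc hook (the other hooks that return
      pointers change the same way):

      	/* Before: the hook could not change the pointer it was given. */
      	void kasan_kmalloc(struct kmem_cache *s, const void *object,
      			   size_t size, gfp_t flags);

      	/* After: callers must use the returned (possibly re-tagged) pointer. */
      	void *kasan_kmalloc(struct kmem_cache *s, const void *object,
      			    size_t size, gfp_t flags);

      	/* SLAB/SLUB call sites become, roughly: */
      	ptr = kasan_kmalloc(s, ptr, size, flags);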
      
      Link: http://lkml.kernel.org/r/aec2b5e3973781ff8a6bb6760f8543643202c451.1544099024.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0116523c
  21. 04 July 2018, 1 commit