1. 23 Jan 2020, 1 commit
  2. 31 Dec 2019, 1 commit
    • x86/kasan: Print original address on #GP · 2f004eea
      Committed by Jann Horn
      Make #GP exceptions caused by out-of-bounds KASAN shadow accesses easier
      to understand by computing the address of the original access and
      printing that. More details are in the comments in the patch.
      
      This turns an error like this:
      
        kasan: CONFIG_KASAN_INLINE enabled
        kasan: GPF could be caused by NULL-ptr deref or user memory access
        general protection fault, probably for non-canonical address
            0xe017577ddf75b7dd: 0000 [#1] PREEMPT SMP KASAN PTI
      
      into this:
      
        general protection fault, probably for non-canonical address
            0xe017577ddf75b7dd: 0000 [#1] PREEMPT SMP KASAN PTI
        KASAN: maybe wild-memory-access in range
            [0x00badbeefbadbee8-0x00badbeefbadbeef]
      
      The hook is placed in architecture-independent code, but is currently
      only wired up to the X86 exception handler because I'm not sufficiently
      familiar with the address space layout and exception handling mechanisms
      on other architectures.
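      
      The recovery is just the inverse of the shadow mapping
      shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET.
      A minimal, runnable userspace sketch of that math (the constants are
      the x86-64 KASAN defaults; the helper is illustrative, not the
      in-kernel hook itself):
      
        #include <stdio.h>
        
        /* x86-64 KASAN defaults; stated here as assumptions. */
        #define KASAN_SHADOW_SCALE_SHIFT 3
        #define KASAN_SHADOW_OFFSET 0xdffffc0000000000UL
        
        /* Each shadow byte covers 8 bytes of real memory, so one faulting
         * shadow address maps back to an 8-byte range of original accesses. */
        static void print_original_range(unsigned long fault_addr)
        {
        	unsigned long begin = (fault_addr - KASAN_SHADOW_OFFSET)
        			      << KASAN_SHADOW_SCALE_SHIFT;
        	unsigned long end = begin + (1UL << KASAN_SHADOW_SCALE_SHIFT) - 1;
        
        	printf("KASAN: maybe wild-memory-access in range [0x%016lx-0x%016lx]\n",
        	       begin, end);
        }
        
        int main(void)
        {
        	/* The faulting address from the example above; prints
        	 * [0x00badbeefbadbee8-0x00badbeefbadbeef]. */
        	print_original_range(0xe017577ddf75b7ddUL);
        	return 0;
        }
      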
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: kasan-dev@googlegroups.com
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191218231150.12139-4-jannh@google.com
      2f004eea
  3. 18 Dec 2019, 2 commits
    • kasan: use apply_to_existing_page_range() for releasing vmalloc shadow · e218f1ca
      Committed by Daniel Axtens
      kasan_release_vmalloc uses apply_to_page_range to release vmalloc
      shadow.  Unfortunately, apply_to_page_range can allocate memory to fill
      in page table entries, which is not what we want.
      
      Also, kasan_release_vmalloc is called under free_vmap_area_lock, so if
      apply_to_page_range does allocate memory, we get a sleep-in-atomic bug:
      
      	BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
      	in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 15087, name:
      
      	Call Trace:
      	 __dump_stack lib/dump_stack.c:77 [inline]
      	 dump_stack+0x199/0x216 lib/dump_stack.c:118
      	 ___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
      	 __might_sleep+0x95/0x190 kernel/sched/core.c:6753
      	 prepare_alloc_pages mm/page_alloc.c:4681 [inline]
      	 __alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
      	 alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
      	 alloc_pages include/linux/gfp.h:532 [inline]
      	 __get_free_pages+0xc/0x40 mm/page_alloc.c:4786
      	 __pte_alloc_one_kernel include/asm-generic/pgalloc.h:21 [inline]
      	 pte_alloc_one_kernel include/asm-generic/pgalloc.h:33 [inline]
      	 __pte_alloc_kernel+0x1d/0x200 mm/memory.c:459
      	 apply_to_pte_range mm/memory.c:2031 [inline]
      	 apply_to_pmd_range mm/memory.c:2068 [inline]
      	 apply_to_pud_range mm/memory.c:2088 [inline]
      	 apply_to_p4d_range mm/memory.c:2108 [inline]
      	 apply_to_page_range+0x77d/0xa00 mm/memory.c:2133
      	 kasan_release_vmalloc+0xa7/0xc0 mm/kasan/common.c:970
      	 __purge_vmap_area_lazy+0xcbb/0x1f30 mm/vmalloc.c:1313
      	 try_purge_vmap_area_lazy mm/vmalloc.c:1332 [inline]
      	 free_vmap_area_noflush+0x2ca/0x390 mm/vmalloc.c:1368
      	 free_unmap_vmap_area mm/vmalloc.c:1381 [inline]
      	 remove_vm_area+0x1cc/0x230 mm/vmalloc.c:2209
      	 vm_remove_mappings mm/vmalloc.c:2236 [inline]
      	 __vunmap+0x223/0xa20 mm/vmalloc.c:2299
      	 __vfree+0x3f/0xd0 mm/vmalloc.c:2356
      	 __vmalloc_area_node mm/vmalloc.c:2507 [inline]
      	 __vmalloc_node_range+0x5d5/0x810 mm/vmalloc.c:2547
      	 __vmalloc_node mm/vmalloc.c:2607 [inline]
      	 __vmalloc_node_flags mm/vmalloc.c:2621 [inline]
      	 vzalloc+0x6f/0x80 mm/vmalloc.c:2666
      	 alloc_one_pg_vec_page net/packet/af_packet.c:4233 [inline]
      	 alloc_pg_vec net/packet/af_packet.c:4258 [inline]
      	 packet_set_ring+0xbc0/0x1b50 net/packet/af_packet.c:4342
      	 packet_setsockopt+0xed7/0x2d90 net/packet/af_packet.c:3695
      	 __sys_setsockopt+0x29b/0x4d0 net/socket.c:2117
      	 __do_sys_setsockopt net/socket.c:2133 [inline]
      	 __se_sys_setsockopt net/socket.c:2130 [inline]
      	 __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2130
      	 do_syscall_64+0xfa/0x780 arch/x86/entry/common.c:294
      	 entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Switch to using the apply_to_existing_page_range() helper instead, which
      won't allocate memory.
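      
      For illustration, a hedged sketch of the change at the call site (the
      callback name follows mm/kasan of this era; the exact signatures are
      assumptions based on this series, not a verbatim hunk):
      
        /* Before: may allocate page-table pages, hence may sleep
         * under free_vmap_area_lock. */
        apply_to_page_range(&init_mm, shadow_start,
        		    shadow_end - shadow_start,
        		    kasan_depopulate_vmalloc_pte, NULL);
        
        /* After: visits only PTEs that already exist, so it never
         * allocates and is safe in atomic context. */
        apply_to_existing_page_range(&init_mm, shadow_start,
        			     shadow_end - shadow_start,
        			     kasan_depopulate_vmalloc_pte, NULL);
      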
      
      [akpm@linux-foundation.org: s/apply_to_existing_pages/apply_to_existing_page_range/]
      Link: http://lkml.kernel.org/r/20191205140407.1874-2-dja@axtens.net
      Fixes: 3c5c3cfb ("kasan: support backing vmalloc space with real shadow memory")
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e218f1ca
    • kasan: fix crashes on access to memory mapped by vm_map_ram() · d98c9e83
      Committed by Andrey Ryabinin
      With CONFIG_KASAN_VMALLOC=y any use of memory obtained via vm_map_ram()
      will crash because there is no shadow backing that memory.
      
      Instead of sprinkling additional kasan_populate_vmalloc() calls all over
      the vmalloc code, move it into alloc_vmap_area(). This will fix
      vm_map_ram() and simplify the code a bit.
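      
      Conceptually the fix looks like the sketch below (heavily abridged;
      the surrounding logic and error handling in alloc_vmap_area() are
      elided, and the exact signature of kasan_populate_vmalloc() here is
      an assumption based on this patch's description):
      
        static struct vmap_area *alloc_vmap_area(unsigned long size, ...)
        {
        	...
        	/*
        	 * Populate the shadow in the one function that every
        	 * vmalloc-space allocation goes through, so vm_map_ram()
        	 * users get backing shadow memory too.
        	 */
        	if (kasan_populate_vmalloc(addr, size))
        		goto free_va;
        	...
        }
      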
      
      [aryabinin@virtuozzo.com: v2]
        Link: http://lkml.kernel.org/r/20191205095942.1761-1-aryabinin@virtuozzo.com
      Link: http://lkml.kernel.org/r/20191204204534.32202-1-aryabinin@virtuozzo.com
      Fixes: 3c5c3cfb ("kasan: support backing vmalloc space with real shadow memory")
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Qian Cai <cai@lca.pw>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d98c9e83
  4. 05 Dec 2019, 1 commit
  5. 02 Dec 2019, 1 commit
    • kasan: support backing vmalloc space with real shadow memory · 3c5c3cfb
      Committed by Daniel Axtens
      Patch series "kasan: support backing vmalloc space with real shadow
      memory", v11.
      
      Currently, vmalloc space is backed by the early shadow page.  This means
      that kasan is incompatible with VMAP_STACK.
      
      This series provides a mechanism to back vmalloc space with real,
      dynamically allocated memory.  I have only wired up x86, because that's
      the only currently supported arch I can work with easily, but it's very
      easy to wire up other architectures, and it appears that there is some
      work-in-progress code to do this on arm64 and s390.
      
      This has been discussed before in the context of VMAP_STACK:
       - https://bugzilla.kernel.org/show_bug.cgi?id=202009
       - https://lkml.org/lkml/2018/7/22/198
       - https://lkml.org/lkml/2019/7/19/822
      
      In terms of implementation details:
      
      Most mappings in vmalloc space are small, requiring less than a full
      page of shadow space.  Allocating a full shadow page per mapping would
      therefore be wasteful.  Furthermore, to ensure that different mappings
      use different shadow pages, mappings would have to be aligned to
      KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
      
      Instead, share backing space across multiple mappings.  Allocate a
      backing page when a mapping in vmalloc space uses a particular page of
      the shadow region.  This page can be shared by other vmalloc mappings
      later on.
      
      We hook into the vmap infrastructure to lazily clean up unused shadow
      memory.
      
      Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:
      
       - Turning on KASAN, inline instrumentation, without vmalloc, introduces
         a 4.1x-4.2x slowdown in vmalloc operations.
      
       - Turning this on introduces the following slowdowns over KASAN:
           * ~1.76x slower single-threaded (test_vmalloc.sh performance)
           * ~2.18x slower when both cpus are performing operations
             simultaneously (test_vmalloc.sh sequential_test_order=1)
      
      This is unfortunate, but given that this is a debug-only feature, it is
      not the end of the world.  The benchmarks are also a stress test for the
      vmalloc subsystem: they're not indicative of an overall 2x slowdown!
      
      This patch (of 4):
      
      Hook into vmalloc and vmap, and dynamically allocate real shadow memory
      to back the mappings.
      
      Most mappings in vmalloc space are small, requiring less than a full
      page of shadow space.  Allocating a full shadow page per mapping would
      therefore be wasteful.  Furthermore, to ensure that different mappings
      use different shadow pages, mappings would have to be aligned to
      KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
      
      Instead, share backing space across multiple mappings.  Allocate a
      backing page when a mapping in vmalloc space uses a particular page of
      the shadow region.  This page can be shared by other vmalloc mappings
      later on.
      
      We hook into the vmap infrastructure to lazily clean up unused shadow
      memory.
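      
      A hedged sketch of the on-demand population callback described above
      (modelled on mm/kasan from this series; names, flags and locking
      details should be read as an approximation, not a verbatim copy):
      
        static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
        				      void *unused)
        {
        	unsigned long page;
        	pte_t pte;
        
        	/* Shadow page already present: it is shared with an
        	 * earlier vmalloc mapping. */
        	if (likely(!pte_none(*ptep)))
        		return 0;
        
        	page = __get_free_page(GFP_KERNEL);
        	if (!page)
        		return -ENOMEM;
        
        	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
        	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
        
        	spin_lock(&init_mm.page_table_lock);
        	if (likely(pte_none(*ptep)))
        		set_pte_at(&init_mm, addr, ptep, pte);
        	else
        		free_page(page);	/* raced with another CPU */
        	spin_unlock(&init_mm.page_table_lock);
        
        	return 0;
        }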
      
      To avoid the difficulties of swapping mappings around, this code
      expects that the part of the shadow region that covers the vmalloc space
      will not be covered by the early shadow page, but will be left unmapped.
      This will require changes in arch-specific code.
      
      This allows KASAN with VMAP_STACK, and may be helpful for architectures
      that do not have a separate module space (e.g.  powerpc64, which I am
      currently working on).  It also allows relaxing the module alignment
      back to PAGE_SIZE.
      
      Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:
      
       - Turning on KASAN, inline instrumentation, without vmalloc, introduces
         a 4.1x-4.2x slowdown in vmalloc operations.
      
       - Turning this on introduces the following slowdowns over KASAN:
           * ~1.76x slower single-threaded (test_vmalloc.sh performance)
           * ~2.18x slower when both cpus are performing operations
             simultaneously (test_vmalloc.sh sequential_test_order=1)
      
      This is unfortunate, but given that this is a debug-only feature, it is
      not the end of the world.
      
      The full benchmark results are:
      
      Performance
      
                                    No KASAN      KASAN original x baseline  KASAN vmalloc x baseline    x KASAN
      
      fix_size_alloc_test             662004            11404956      17.23       19144610      28.92       1.68
      full_fit_alloc_test             710950            12029752      16.92       13184651      18.55       1.10
      long_busy_list_alloc_test      9431875            43990172       4.66       82970178       8.80       1.89
      random_size_alloc_test         5033626            23061762       4.58       47158834       9.37       2.04
      fix_align_alloc_test           1252514            15276910      12.20       31266116      24.96       2.05
      random_size_align_alloc_te     1648501            14578321       8.84       25560052      15.51       1.75
      align_shift_alloc_test             147                 830       5.65           5692      38.72       6.86
      pcpu_alloc_test                  80732              125520       1.55         140864       1.74       1.12
      Total Cycles              119240774314        763211341128       6.40  1390338696894      11.66       1.82
      
      Sequential, 2 cpus
      
                                    No KASAN      KASAN original x baseline  KASAN vmalloc x baseline    x KASAN
      
      fix_size_alloc_test            1423150            14276550      10.03       27733022      19.49       1.94
      full_fit_alloc_test            1754219            14722640       8.39       15030786       8.57       1.02
      long_busy_list_alloc_test     11451858            52154973       4.55      107016027       9.34       2.05
      random_size_alloc_test         5989020            26735276       4.46       68885923      11.50       2.58
      fix_align_alloc_test           2050976            20166900       9.83       50491675      24.62       2.50
      random_size_align_alloc_te     2858229            17971700       6.29       38730225      13.55       2.16
      align_shift_alloc_test             405                6428      15.87          26253      64.82       4.08
      pcpu_alloc_test                 127183              151464       1.19         216263       1.70       1.43
      Total Cycles               54181269392        308723699764       5.70   650772566394      12.01       2.11
      fix_size_alloc_test            1420404            14289308      10.06       27790035      19.56       1.94
      full_fit_alloc_test            1736145            14806234       8.53       15274301       8.80       1.03
      long_busy_list_alloc_test     11404638            52270785       4.58      107550254       9.43       2.06
      random_size_alloc_test         6017006            26650625       4.43       68696127      11.42       2.58
      fix_align_alloc_test           2045504            20280985       9.91       50414862      24.65       2.49
      random_size_align_alloc_te     2845338            17931018       6.30       38510276      13.53       2.15
      align_shift_alloc_test             472                3760       7.97           9656      20.46       2.57
      pcpu_alloc_test                 118643              132732       1.12         146504       1.23       1.10
      Total Cycles               54040011688        309102805492       5.72   651325675652      12.05       2.11
      
      [dja@axtens.net: fixups]
        Link: http://lkml.kernel.org/r/20191120052719.7201-1-dja@axtens.net
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
      Link: http://lkml.kernel.org/r/20191031093909.9228-2-dja@axtens.net
      Signed-off-by: Mark Rutland <mark.rutland@arm.com> [shadow rework]
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Co-developed-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Vasily Gorbik <gor@linux.ibm.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Qian Cai <cai@lca.pw>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3c5c3cfb
  6. 25 Sep 2019, 3 commits
  7. 25 Aug 2019, 1 commit
  8. 13 Jul 2019, 3 commits
  9. 02 Jun 2019, 1 commit
  10. 29 Apr 2019, 1 commit
    • mm/kasan: Simplify stacktrace handling · 880e049c
      Committed by Thomas Gleixner
      Replace the indirection through struct stack_trace by using the storage
      array based interfaces.
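      
      A hedged sketch of the resulting pattern in mm/kasan
      (stack_trace_save() and stack_depot_save() are the storage-array
      interfaces this series converts to; the exact body is an
      approximation):
      
        static inline depot_stack_handle_t save_stack(gfp_t flags)
        {
        	unsigned long entries[KASAN_STACK_DEPTH];
        	unsigned int nr_entries;
        
        	/* Fill a plain array directly; no struct stack_trace. */
        	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
        	return stack_depot_save(entries, nr_entries, flags);
        }
      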
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: kasan-dev@googlegroups.com
      Cc: linux-mm@kvack.org
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: iommu@lists.linux-foundation.org
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: linux-btrfs@vger.kernel.org
      Cc: dm-devel@redhat.com
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: intel-gfx@lists.freedesktop.org
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: dri-devel@lists.freedesktop.org
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: linux-arch@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190425094801.963261479@linutronix.de
      880e049c
  11. 15 Apr 2019, 1 commit
  12. 09 Apr 2019, 1 commit
  13. 03 Apr 2019, 1 commit
    • x86/uaccess, kasan: Fix KASAN vs SMAP · 57b78a62
      Committed by Peter Zijlstra
      KASAN inserts extra code for every LOAD/STORE emitted by the compiler.
      Much of this code is simple and safe to run with AC=1; however, the
      kasan_report() function, called on error, is most certainly not safe
      to call with AC=1.
      
      Therefore wrap kasan_report() in user_access_{save,restore}(), which on
      x86 with SMAP saves/restores EFLAGS and clears AC before calling the
      real function.
      
      Also ensure all the functions are built without the __fentry__ hook;
      the function tracer is not safe to call with AC=1 either.
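      
      A hedged sketch of the wrapper (based on the description above; the
      __kasan_report() split is how this reads, but treat the exact naming
      as an assumption):
      
        void kasan_report(unsigned long addr, size_t size, bool is_write,
        		  unsigned long ip)
        {
        	unsigned long flags;
        
        	/* On x86 with SMAP this saves EFLAGS and executes CLAC, so
        	 * the reporting code never runs with AC=1. */
        	flags = user_access_save();
        	__kasan_report(addr, size, is_write, ip);
        	user_access_restore(flags);
        }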
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      57b78a62
  14. 30 Mar 2019, 1 commit
  15. 13 Mar 2019, 1 commit
    • treewide: add checks for the return value of memblock_alloc*() · 8a7f97b9
      Committed by Mike Rapoport
      Add checks for the return value of the memblock_alloc*() functions and
      call panic() in case of error.  The panic message repeats the one used
      by the panicking memblock allocators, with the parameters adjusted to
      include only the relevant ones.
      
      The replacement was mostly automated with semantic patches like the one
      below, with manual massaging of format strings.
      
        @@
        expression ptr, size, align;
        @@
        ptr = memblock_alloc(size, align);
        + if (!ptr)
        + 	panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);
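        
      Applied to a call site, the result looks like this hedged example
      (the function name and sizes are illustrative, not taken from a
      specific hunk of the patch):
      
        void __init example_table_init(void)
        {
        	void *table = memblock_alloc(PAGE_SIZE, SMP_CACHE_BYTES);
        
        	if (!table)
        		panic("%s: Failed to allocate %lu bytes align=0x%x\n",
        		      __func__, PAGE_SIZE, SMP_CACHE_BYTES);
        }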
      
      [anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
        Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
      [rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
        Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
      [rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
        Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
      [akpm@linux-foundation.org: fix xtensa printk warning]
      Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Reviewed-by: Guo Ren <ren_guo@c-sky.com>		[c-sky]
      Acked-by: Paul Burton <paul.burton@mips.com>		[MIPS]
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>	[s390]
      Reviewed-by: Juergen Gross <jgross@suse.com>		[Xen]
      Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>	[m68k]
      Acked-by: Max Filippov <jcmvbkbc@gmail.com>		[xtensa]
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Rob Herring <robh+dt@kernel.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8a7f97b9
  16. 06 Mar 2019, 3 commits
  17. 22 Feb 2019, 3 commits
  18. 02 Feb 2019, 1 commit
    • kasan: mark file common so ftrace doesn't trace it · 0d0c8de8
      Committed by Anders Roxell
      When CONFIG_KASAN is enabled together with ftrace, the function
      ftrace_graph_caller() gets into a recursion via the functions
      kasan_check_read() and kasan_check_write().
      
       Breakpoint 2, ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:179
       179             mcount_get_pc             x0    //     function's pc
       (gdb) bt
       #0  ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:179
       #1  0xffffff90101406c8 in ftrace_caller () at ../arch/arm64/kernel/entry-ftrace.S:151
       #2  0xffffff90106fd084 in kasan_check_write (p=0xffffffc06c170878, size=4) at ../mm/kasan/common.c:105
       #3  0xffffff90104a2464 in atomic_add_return (v=<optimized out>, i=<optimized out>) at ./include/generated/atomic-instrumented.h:71
       #4  atomic_inc_return (v=<optimized out>) at ./include/generated/atomic-fallback.h:284
       #5  trace_graph_entry (trace=0xffffffc03f5ff380) at ../kernel/trace/trace_functions_graph.c:441
       #6  0xffffff9010481774 in trace_graph_entry_watchdog (trace=<optimized out>) at ../kernel/trace/trace_selftest.c:741
       #7  0xffffff90104a185c in function_graph_enter (ret=<optimized out>, func=<optimized out>, frame_pointer=18446743799894897728, retp=<optimized out>) at ../kernel/trace/trace_functions_graph.c:196
       #8  0xffffff9010140628 in prepare_ftrace_return (self_addr=18446743592948977792, parent=0xffffffc03f5ff418, frame_pointer=18446743799894897728) at ../arch/arm64/kernel/ftrace.c:231
       #9  0xffffff90101406f4 in ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:182
       Backtrace stopped: previous frame identical to this frame (corrupt stack?)
       (gdb)
      
      Rework so that the kasan implementation isn't traced.
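      
      The rework itself is a one-line build change; a hedged sketch of the
      mm/kasan/Makefile fragment (whether the flag is spelled -pg or
      $(CC_FLAGS_FTRACE) here is an assumption):
      
        # Build the generic KASAN checks without the ftrace profiling
        # hook so instrumented atomics cannot re-enter the tracer.
        CFLAGS_REMOVE_common.o = -pg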
      
      Link: http://lkml.kernel.org/r/20181212183447.15890-1-anders.roxell@linaro.org
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Tested-by: Dmitry Vyukov <dvyukov@google.com>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0d0c8de8
  19. 09 Jan 2019, 2 commits
  20. 05 Jan 2019, 1 commit
    • mm: treewide: remove unused address argument from pte_alloc functions · 4cf58924
      Committed by Joel Fernandes (Google)
      Patch series "Add support for fast mremap".
      
      This series speeds up the mremap(2) syscall by copying page tables at
      the PMD level even for non-THP systems.  There is concern that the extra
      'address' argument that mremap passes to pte_alloc may do something
      subtle and architecture-related in the future that would make the scheme
      not work.  We also find that there is no point in passing 'address' to
      pte_alloc since it is unused.  This patch therefore removes the argument
      tree-wide, resulting in a nice negative diff as well, while ensuring
      along the way that the enabled architectures do not do anything funky
      with the 'address' argument that would go unnoticed by the optimization.
      
      Build and boot tested on x86-64.  Build tested on arm64.  The config
      enablement patch for arm64 will be posted in the future after more
      testing.
      
      The changes were obtained by applying the following Coccinelle script
      (thanks to Julia for answering all the Coccinelle questions!).
      The following fixups were done manually:
      * Removal of the address argument from pte_fragment_alloc
      * Removal of the pte_alloc_one_fast definitions from m68k and microblaze.
      
      // Options: --include-headers --no-includes
      // Note: I split the 'identifier fn' line, so if you are manually
      // running it, please unsplit it so it runs for you.
      
      virtual patch
      
      @pte_alloc_func_def depends on patch exists@
      identifier E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      type T2;
      @@
      
       fn(...
      - , T2 E2
       )
       { ... }
      
      @pte_alloc_func_proto_noarg depends on patch exists@
      type T1, T2, T3, T4;
      identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1, T2);
      + T3 fn(T1);
      |
      - T3 fn(T1, T2, T4);
      + T3 fn(T1, T2);
      )
      
      @pte_alloc_func_proto depends on patch exists@
      identifier E1, E2, E4;
      type T1, T2, T3, T4;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1 E1, T2 E2);
      + T3 fn(T1 E1);
      |
      - T3 fn(T1 E1, T2 E2, T4 E4);
      + T3 fn(T1 E1, T2 E2);
      )
      
      @pte_alloc_func_call depends on patch exists@
      expression E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
       fn(...
      -,  E2
       )
      
      @pte_alloc_macro depends on patch exists@
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      identifier a, b, c;
      expression e;
      position p;
      @@
      
      (
      - #define fn(a, b, c) e
      + #define fn(a, b) e
      |
      - #define fn(a, b) e
      + #define fn(a) e
      )
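      
      The net effect at a call site is just a dropped argument; a hedged
      before/after illustration (the surrounding context is invented for
      the example):
      
        /* Before */
        pte = pte_alloc_one_kernel(mm, address);
        
        /* After: the unused 'address' argument is gone tree-wide. */
        pte = pte_alloc_one_kernel(mm);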
      
      Link: http://lkml.kernel.org/r/20181108181201.88826-2-joelaf@google.com
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Julia Lawall <Julia.Lawall@lip6.fr>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4cf58924
  21. 29 Dec 2018, 10 commits