1. 11 Jul 2017 (2 commits)
  2. 09 May 2017 (1 commit)
  3. 04 May 2017 (1 commit)
  4. 19 Apr 2017 (1 commit)
    • mm: Rename SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU · 5f0d5a3a
      Committed by Paul E. McKenney
      A group of Linux kernel hackers reported chasing a bug that resulted
      from their assumption that SLAB_DESTROY_BY_RCU provided an existence
      guarantee, that is, that no block from such a slab would be reallocated
      during an RCU read-side critical section.  Of course, that is not the
      case.  Instead, SLAB_DESTROY_BY_RCU only prevents freeing of an entire
      slab of blocks.
      
      However, there is a phrase for this, namely "type safety".  This commit
      therefore renames SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU in order
      to avoid future instances of this sort of confusion.
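
      As a sketch of what type safety (as opposed to an existence
      guarantee) demands of users, a lookup under SLAB_TYPESAFE_BY_RCU must
      take a reference and then revalidate the object's identity, because
      the block may have been freed and reallocated as a different object
      of the same type (all names below are illustrative):

          rcu_read_lock();
          obj = hash_lookup(table, key);
          if (obj && !atomic_inc_not_zero(&obj->refcnt))
                  obj = NULL;              /* being freed; treat as a miss */
          rcu_read_unlock();
          if (obj && obj->key != key) {    /* block was recycled under us */
                  put_obj(obj);
                  obj = NULL;
          }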
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <linux-mm@kvack.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      [ paulmck: Add comments mentioning the old name, as requested by Eric
        Dumazet, in order to help people familiar with the old name find
        the new one. ]
      Acked-by: David Rientjes <rientjes@google.com>
  5. 02 Mar 2017 (2 commits)
    • sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task_stack.h> · 68db0cf1
      Committed by Ingo Molnar
      We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which
      will have to be picked up from other headers and a couple of .c files.
      
      Create a trivial placeholder <linux/sched/task_stack.h> file that just
      maps to <linux/sched.h> to make this patch obviously correct and
      bisectable.
      
      Include the new header in the files that are going to need it.
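
      The placeholder itself is just a header that forwards to
      <linux/sched.h> (a sketch of the transitional file):

          /* include/linux/sched/task_stack.h -- transitional placeholder */
          #ifndef _LINUX_SCHED_TASK_STACK_H
          #define _LINUX_SCHED_TASK_STACK_H

          #include <linux/sched.h>

          #endif /* _LINUX_SCHED_TASK_STACK_H */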
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kasan, sched/headers: Uninline kasan_enable/disable_current() · af8601ad
      Committed by Ingo Molnar
      <linux/kasan.h> is a low level header that is included early
      in affected kernel headers. But it includes <linux/sched.h>
      which complicates the cleanup of sched.h dependencies.
      
      However, kasan.h has almost no need for sched.h: its only use of
      scheduler functionality is in two inline functions which are
      not used very frequently - so uninline kasan_enable_current()
      and kasan_disable_current().
      
      Also add a <linux/sched.h> dependency to a .c file that depended
      on kasan.h including it.
      
      This paves the way to remove the <linux/sched.h> include from kasan.h.
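
      On the header side, the uninlining amounts to swapping the inline
      bodies for plain declarations (a sketch; the real bodies, now in
      mm/kasan/kasan.c, adjust current->kasan_depth):

          /* include/linux/kasan.h */
          #ifdef CONFIG_KASAN
          void kasan_enable_current(void);
          void kasan_disable_current(void);
          #else
          static inline void kasan_enable_current(void) {}
          static inline void kasan_disable_current(void) {}
          #endif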
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 25 Feb 2017 (1 commit)
    • kasan: drain quarantine of memcg slab objects · f9fa1d91
      Committed by Greg Thelen
      Per memcg slab accounting and kasan have a problem with kmem_cache
      destruction.
       - kmem_cache_create() allocates a kmem_cache, which is used for
         allocations from processes running in root (top) memcg.
       - Processes running in non root memcg and allocating with either
         __GFP_ACCOUNT or from a SLAB_ACCOUNT cache use a per memcg
         kmem_cache.
       - Kasan catches use-after-free by having kfree() and kmem_cache_free()
         defer freeing of objects. Objects are placed in a quarantine.
       - kmem_cache_destroy() destroys root and non root kmem_caches. It takes
         care to drain the quarantine of objects from the root memcg's
         kmem_cache, but ignores objects associated with non root memcgs. This
         causes leaks because quarantined per memcg objects refer to the per
         memcg kmem_cache being destroyed.
      
      To see the problem:
      
       1) create a slab cache with kmem_cache_create(,,,SLAB_ACCOUNT,)
       2) from non root memcg, allocate and free a few objects from cache
       3) dispose of the cache with kmem_cache_destroy().  kmem_cache_destroy()
          will trigger a "Slab cache still has objects" warning indicating
          that the per memcg kmem_cache structure was leaked.
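
      In kernel-module form the reproducer is tiny (the cache name and
      object size below are illustrative):

          static int __init repro_init(void)
          {
                  struct kmem_cache *cache;
                  void *obj;

                  /* step 1: an accounted cache */
                  cache = kmem_cache_create("demo", 256, 0, SLAB_ACCOUNT, NULL);
                  /* step 2: run from a task inside a non-root memcg */
                  obj = kmem_cache_alloc(cache, GFP_KERNEL);
                  kmem_cache_free(cache, obj);  /* parked in KASAN quarantine */
                  /* step 3: pre-fix, warns "Slab cache still has objects" */
                  kmem_cache_destroy(cache);
                  return 0;
          }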
      
      Fix the leak by draining kasan quarantined objects allocated from non
      root memcg.
      
      Racing memcg deletion is tricky, but handled.  kmem_cache_destroy() =>
      shutdown_memcg_caches() => __shutdown_memcg_cache() => shutdown_cache()
      flushes per memcg quarantined objects, even if that memcg has been
      rmdir'd and gone through memcg_deactivate_kmem_caches().
      
      This leak only affects destroyed SLAB_ACCOUNT kmem caches when kasan is
      enabled.  So I don't think it's worth patching stable kernels.
      
      Link: http://lkml.kernel.org/r/1482257462-36948-1-git-send-email-gthelen@google.com
      Signed-off-by: Greg Thelen <gthelen@google.com>
      Reviewed-by: Vladimir Davydov <vdavydov.dev@gmail.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 06 Dec 2016 (1 commit)
    • x86/suspend: fix false positive KASAN warning on suspend/resume · b53f40db
      Committed by Josh Poimboeuf
      Resuming from a suspend operation is showing a KASAN false positive
      warning:
      
        BUG: KASAN: stack-out-of-bounds in unwind_get_return_address+0x11d/0x130 at addr ffff8803867d7878
        Read of size 8 by task pm-suspend/7774
        page:ffffea000e19f5c0 count:0 mapcount:0 mapping:          (null) index:0x0
        flags: 0x2ffff0000000000()
        page dumped because: kasan: bad access detected
        CPU: 0 PID: 7774 Comm: pm-suspend Tainted: G    B           4.9.0-rc7+ #8
        Hardware name: Gigabyte Technology Co., Ltd. Z170X-UD5/Z170X-UD5-CF, BIOS F5 03/07/2016
        Call Trace:
          dump_stack+0x63/0x82
          kasan_report_error+0x4b4/0x4e0
          ? acpi_hw_read_port+0xd0/0x1ea
          ? kfree_const+0x22/0x30
          ? acpi_hw_validate_io_request+0x1a6/0x1a6
          __asan_report_load8_noabort+0x61/0x70
          ? unwind_get_return_address+0x11d/0x130
          unwind_get_return_address+0x11d/0x130
          ? unwind_next_frame+0x97/0xf0
          __save_stack_trace+0x92/0x100
          save_stack_trace+0x1b/0x20
          save_stack+0x46/0xd0
          ? save_stack_trace+0x1b/0x20
          ? save_stack+0x46/0xd0
          ? kasan_kmalloc+0xad/0xe0
          ? kasan_slab_alloc+0x12/0x20
          ? acpi_hw_read+0x2b6/0x3aa
          ? acpi_hw_validate_register+0x20b/0x20b
          ? acpi_hw_write_port+0x72/0xc7
          ? acpi_hw_write+0x11f/0x15f
          ? acpi_hw_read_multiple+0x19f/0x19f
          ? memcpy+0x45/0x50
          ? acpi_hw_write_port+0x72/0xc7
          ? acpi_hw_write+0x11f/0x15f
          ? acpi_hw_read_multiple+0x19f/0x19f
          ? kasan_unpoison_shadow+0x36/0x50
          kasan_kmalloc+0xad/0xe0
          kasan_slab_alloc+0x12/0x20
          kmem_cache_alloc_trace+0xbc/0x1e0
          ? acpi_get_sleep_type_data+0x9a/0x578
          acpi_get_sleep_type_data+0x9a/0x578
          acpi_hw_legacy_wake_prep+0x88/0x22c
          ? acpi_hw_legacy_sleep+0x3c7/0x3c7
          ? acpi_write_bit_register+0x28d/0x2d3
          ? acpi_read_bit_register+0x19b/0x19b
          acpi_hw_sleep_dispatch+0xb5/0xba
          acpi_leave_sleep_state_prep+0x17/0x19
          acpi_suspend_enter+0x154/0x1e0
          ? trace_suspend_resume+0xe8/0xe8
          suspend_devices_and_enter+0xb09/0xdb0
          ? printk+0xa8/0xd8
          ? arch_suspend_enable_irqs+0x20/0x20
          ? try_to_freeze_tasks+0x295/0x600
          pm_suspend+0x6c9/0x780
          ? finish_wait+0x1f0/0x1f0
          ? suspend_devices_and_enter+0xdb0/0xdb0
          state_store+0xa2/0x120
          ? kobj_attr_show+0x60/0x60
          kobj_attr_store+0x36/0x70
          sysfs_kf_write+0x131/0x200
          kernfs_fop_write+0x295/0x3f0
          __vfs_write+0xef/0x760
          ? handle_mm_fault+0x1346/0x35e0
          ? do_iter_readv_writev+0x660/0x660
          ? __pmd_alloc+0x310/0x310
          ? do_lock_file_wait+0x1e0/0x1e0
          ? apparmor_file_permission+0x18/0x20
          ? security_file_permission+0x73/0x1c0
          ? rw_verify_area+0xbd/0x2b0
          vfs_write+0x149/0x4a0
          SyS_write+0xd9/0x1c0
          ? SyS_read+0x1c0/0x1c0
          entry_SYSCALL_64_fastpath+0x1e/0xad
        Memory state around the buggy address:
         ffff8803867d7700: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
         ffff8803867d7780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        >ffff8803867d7800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f4
                                                                        ^
         ffff8803867d7880: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
         ffff8803867d7900: 00 00 00 f1 f1 f1 f1 04 f4 f4 f4 f3 f3 f3 f3 00
      
      KASAN instrumentation poisons the stack when entering a function and
      unpoisons it when exiting the function.  However, in the suspend path,
      some functions never return, so their stack never gets unpoisoned,
      resulting in stale KASAN shadow data which can cause later false
      positive warnings like the one above.
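
      The fix has the resume path pass its current stack pointer to a new
      helper that clears everything below it; condensed, the helper looks
      like this (a sketch of the upstream change, not the exact code):

          asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
          {
                  void *base = task_stack_page(current);

                  /* clear stale shadow from the stack base up to watermark */
                  kasan_unpoison_shadow(base, watermark - base);
          }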
      Reported-by: Scott Bauer <scott.bauer@intel.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  8. 01 Dec 2016 (1 commit)
  9. 16 Oct 2016 (1 commit)
    • kprobes: Unpoison stack in jprobe_return() for KASAN · 9f7d416c
      Committed by Dmitry Vyukov
      I observed false KASAN positives in the sctp code, when
      sctp uses jprobe_return() in jsctp_sf_eat_sack().
      
      The stray 0xf4 bytes in shadow memory are stack redzones:
      
      [     ] ==================================================================
      [     ] BUG: KASAN: stack-out-of-bounds in memcmp+0xe9/0x150 at addr ffff88005e48f480
      [     ] Read of size 1 by task syz-executor/18535
      [     ] page:ffffea00017923c0 count:0 mapcount:0 mapping:          (null) index:0x0
      [     ] flags: 0x1fffc0000000000()
      [     ] page dumped because: kasan: bad access detected
      [     ] CPU: 1 PID: 18535 Comm: syz-executor Not tainted 4.8.0+ #28
      [     ] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      [     ]  ffff88005e48f2d0 ffffffff82d2b849 ffffffff0bc91e90 fffffbfff10971e8
      [     ]  ffffed000bc91e90 ffffed000bc91e90 0000000000000001 0000000000000000
      [     ]  ffff88005e48f480 ffff88005e48f350 ffffffff817d3169 ffff88005e48f370
      [     ] Call Trace:
      [     ]  [<ffffffff82d2b849>] dump_stack+0x12e/0x185
      [     ]  [<ffffffff817d3169>] kasan_report+0x489/0x4b0
      [     ]  [<ffffffff817d31a9>] __asan_report_load1_noabort+0x19/0x20
      [     ]  [<ffffffff82d49529>] memcmp+0xe9/0x150
      [     ]  [<ffffffff82df7486>] depot_save_stack+0x176/0x5c0
      [     ]  [<ffffffff817d2031>] save_stack+0xb1/0xd0
      [     ]  [<ffffffff817d27f2>] kasan_slab_free+0x72/0xc0
      [     ]  [<ffffffff817d05b8>] kfree+0xc8/0x2a0
      [     ]  [<ffffffff85b03f19>] skb_free_head+0x79/0xb0
      [     ]  [<ffffffff85b0900a>] skb_release_data+0x37a/0x420
      [     ]  [<ffffffff85b090ff>] skb_release_all+0x4f/0x60
      [     ]  [<ffffffff85b11348>] consume_skb+0x138/0x370
      [     ]  [<ffffffff8676ad7b>] sctp_chunk_put+0xcb/0x180
      [     ]  [<ffffffff8676ae88>] sctp_chunk_free+0x58/0x70
      [     ]  [<ffffffff8677fa5f>] sctp_inq_pop+0x68f/0xef0
      [     ]  [<ffffffff8675ee36>] sctp_assoc_bh_rcv+0xd6/0x4b0
      [     ]  [<ffffffff8677f2c1>] sctp_inq_push+0x131/0x190
      [     ]  [<ffffffff867bad69>] sctp_backlog_rcv+0xe9/0xa20
      [ ... ]
      [     ] Memory state around the buggy address:
      [     ]  ffff88005e48f380: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      [     ]  ffff88005e48f400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      [     ] >ffff88005e48f480: f4 f4 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      [     ]                    ^
      [     ]  ffff88005e48f500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      [     ]  ffff88005e48f580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      [     ] ==================================================================
      
      KASAN stack instrumentation poisons stack redzones on function entry
      and unpoisons them on function exit. If a function exits abnormally
      (e.g. with a longjmp like jprobe_return()), stack redzones are left
      poisoned. Later this leads to random KASAN false reports.
      
      Unpoison stack redzones in the frames we are going to jump over
      before doing actual longjmp in jprobe_return().
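
      On x86 this slots in at the top of jprobe_return(), before the
      hand-written asm longjmp (a sketch of the shape of the fix):

          void jprobe_return(void)
          {
                  struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

                  /* unpoison redzones of the frames the longjmp skips */
                  kasan_unpoison_stack_above_sp_to(kcb->jprobe_saved_sp);

                  /* ...followed by the asm that restores the saved sp */
          }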
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: kasan-dev@googlegroups.com
      Cc: surovegin@google.com
      Cc: rostedt@goodmis.org
      Link: http://lkml.kernel.org/r/1476454043-101898-1-git-send-email-dvyukov@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 03 Aug 2016 (5 commits)
  11. 29 Jul 2016 (1 commit)
  12. 25 Jun 2016 (1 commit)
  13. 10 Jun 2016 (1 commit)
  14. 21 May 2016 (3 commits)
    • mm/kasan: add API to check memory regions · 64f8ebaf
      Committed by Andrey Ryabinin
      Memory accesses coded in assembly won't be seen by KASAN, as the
      compiler can instrument only C code.  Add a kasan_check_[read,write]()
      API which can be used to check a given memory range.
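
      A typical use is wrapping an uninstrumented helper with explicit
      checks; raw_asm_copy() below is a hypothetical stand-in for such an
      assembly routine:

          #include <linux/kasan-checks.h>

          static inline void checked_copy(void *dst, const void *src, size_t n)
          {
                  kasan_check_read(src, n);    /* the asm will read src[0..n) */
                  kasan_check_write(dst, n);   /* ...and write dst[0..n) */
                  raw_asm_copy(dst, src, n);   /* hypothetical, uninstrumented */
          }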
      
      Link: http://lkml.kernel.org/r/1462538722-1574-3-git-send-email-aryabinin@virtuozzo.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Acked-by: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/kasan: print name of mem[set,cpy,move]() caller in report · 936bb4bb
      Committed by Andrey Ryabinin
      When a bogus memory access happens in mem[set,cpy,move](), it's
      usually the caller's fault.  So don't blame mem[set,cpy,move]() in the
      bug report; blame the caller instead.
      
      Before:
        BUG: KASAN: out-of-bounds access in memset+0x23/0x40 at <address>
      After:
        BUG: KASAN: out-of-bounds access in <memset_caller> at <address>
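
      Mechanically, the interceptors just hand their own return address
      down to the checking code (a sketch of the resulting memset):

          void *memset(void *addr, int c, size_t len)
          {
                  /* blame the caller's IP, not memset itself */
                  check_memory_region((unsigned long)addr, len, true, _RET_IP_);

                  return __memset(addr, c, len);
          }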
      
      Link: http://lkml.kernel.org/r/1462538722-1574-2-git-send-email-aryabinin@virtuozzo.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Acked-by: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: kasan: initial memory quarantine implementation · 55834c59
      Committed by Alexander Potapenko
      Quarantine isolates freed objects in a separate queue.  The objects are
      returned to the allocator later, which helps to detect use-after-free
      errors.
      
      When the object is freed, its state changes from KASAN_STATE_ALLOC to
      KASAN_STATE_QUARANTINE.  The object is poisoned and put into quarantine
      instead of being returned to the allocator, therefore every subsequent
      access to that object triggers a KASAN error, and the error handler is
      able to say where the object has been allocated and deallocated.
      
      When it's time for the object to leave quarantine, its state becomes
      KASAN_STATE_FREE and it's returned to the allocator.  From now on the
      allocator may reuse it for another allocation.  Before that happens,
      it's still possible to detect a use-after-free on that object (it
      retains the allocation/deallocation stacks).
      
      When the allocator reuses this object, the shadow is unpoisoned and old
      allocation/deallocation stacks are wiped.  Therefore a use of this
      object, even an incorrect one, won't trigger an ASan warning.
      
      Without the quarantine, it's not guaranteed that objects aren't reused
      immediately; that's why the probability of catching a use-after-free
      is lower than with the quarantine in place.
      
      Freed objects are first added to per-cpu quarantine queues.  When a
      cache is destroyed or memory shrinking is requested, the objects are
      moved into the global quarantine queue.  Whenever a kmalloc call allows
      memory reclaiming, the oldest objects are popped out of the global queue
      until the total size of objects in quarantine is less than 3/4 of the
      maximum quarantine size (which is a fraction of installed physical
      memory).
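
      The reclaim step boils down to a loop of this shape (names are
      simplified from mm/kasan/quarantine.c):

          /* pop the oldest objects until back under 3/4 of the cap */
          while (global_quarantine_size > quarantine_max_size * 3 / 4) {
                  void *qlink = qlist_pop(&global_quarantine);

                  qlink_free(qlink, qlink_to_cache(qlink));
          }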
      
      As long as an object remains in the quarantine, KASAN is able to report
      accesses to it, so the chance of reporting a use-after-free is
      increased.  Once the object leaves quarantine, the allocator may reuse
      it, in which case the object is unpoisoned and KASAN can't detect
      incorrect accesses to it.
      
      Right now quarantine support is only enabled in the SLAB allocator.
      Unification of KASAN features in SLAB and SLUB will be done later.
      
      This patch is based on the "mm: kasan: quarantine" patch originally
      prepared by Dmitry Chernenkov.  A number of improvements have been
      suggested by Andrey Ryabinin.
      
      [glider@google.com: v9]
        Link: http://lkml.kernel.org/r/1462987130-144092-1-git-send-email-glider@google.com
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 02 Apr 2016 (1 commit)
  16. 26 Mar 2016 (3 commits)
    • mm, kasan: stackdepot implementation. Enable stackdepot for SLAB · cd11016e
      Committed by Alexander Potapenko
      Implement the stack depot and provide CONFIG_STACKDEPOT.  The stack
      depot will allow KASAN to store allocation/deallocation stack traces
      for memory chunks.  The stack traces are stored in a hash table and
      referenced by handles which reside in the kasan_alloc_meta and
      kasan_free_meta structures in the allocated memory chunks.
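
      In use, the depot turns a full stack trace into a small handle that
      fits in the object's metadata (a sketch against the stack_trace API
      of that era):

          unsigned long entries[16];
          struct stack_trace trace = {
                  .entries     = entries,
                  .max_entries = ARRAY_SIZE(entries),
          };
          depot_stack_handle_t handle;

          save_stack_trace(&trace);
          handle = depot_save_stack(&trace, GFP_NOWAIT);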
      
      IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
      duplication.
      
      Right now stackdepot support is only enabled in the SLAB allocator.  Once
      KASAN features in SLAB are on par with those in SLUB we can switch SLUB
      to stackdepot as well, thus removing the dependency on SLUB stack
      bookkeeping, which wastes a lot of memory.
      
      This patch is based on the "mm: kasan: stack depots" patch originally
      prepared by Dmitry Chernenkov.
      
      Joonsoo has said that he plans to reuse the stackdepot code for the
      mm/page_owner.c debugging facility.
      
      [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
      [aryabinin@virtuozzo.com: comment style fixes]
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, kasan: add GFP flags to KASAN API · 505f5dcb
      Committed by Alexander Potapenko
      Add GFP flags to KASAN hooks for future patches to use.
      
      This patch is based on the "mm: kasan: unified support for SLUB and SLAB
      allocators" patch originally prepared by Dmitry Chernenkov.
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, kasan: SLAB support · 7ed2f9e6
      Committed by Alexander Potapenko
      Add KASAN hooks to the SLAB allocator.
      
      This patch is based on the "mm: kasan: unified support for SLUB and SLAB
      allocators" patch originally prepared by Dmitry Chernenkov.
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 10 Mar 2016 (1 commit)
    • kasan: add functions to clear stack poison · e3ae1163
      Committed by Mark Rutland
      Functions which the compiler has instrumented for ASAN place poison on
      the stack shadow upon entry and remove this poison prior to returning.
      
      In some cases (e.g. hotplug and idle), CPUs may exit the kernel a
      number of levels deep in C code.  If there are any instrumented
      functions on this critical path, these will leave portions of the idle
      thread stack shadow poisoned.
      
      If a CPU returns to the kernel via a different path (e.g. a cold
      entry), then depending on stack frame layout subsequent calls to
      instrumented functions may use regions of the stack with stale poison,
      resulting in (spurious) KASAN splats to the console.
      
      Contemporary GCCs always add stack shadow poisoning when ASAN is
      enabled, even when asked to not instrument a function [1], so we can't
      simply annotate functions on the critical path to avoid poisoning.
      
      Instead, this series explicitly removes any stale poison before it can
      be hit.  In the common hotplug case we clear the entire stack shadow in
      common code, before a CPU is brought online.
      
      Architectures which perform a cold return as part of cpu idle may
      retain an architecture-specific amount of stack contents.  To retain
      the poison for this retained context, the arch code must call the core
      KASAN code, passing a "watermark" stack pointer value beyond which
      shadow will be cleared.  Architectures which don't perform a cold
      return as part of idle do not need any additional code.
      
      This patch (of 3):
      
      Functions which the compiler has instrumented for KASAN place poison
      on the stack shadow upon entry and remove this poison prior to
      returning.
      
      In some cases (e.g.  hotplug and idle), CPUs may exit the kernel a number
      of levels deep in C code.  If there are any instrumented functions on this
      critical path, these will leave portions of the stack shadow poisoned.
      
      If a CPU returns to the kernel via a different path (e.g.  a cold entry),
      then depending on stack frame layout subsequent calls to instrumented
      functions may use regions of the stack with stale poison, resulting in
      (spurious) KASAN splats to the console.
      
      To avoid this, we must clear stale poison from the stack prior to
      instrumented functions being called.  This patch adds functions to the
      KASAN core for removing poison from (portions of) a task's stack.  These
      will be used by subsequent patches to avoid problems with hotplug and
      idle.
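
      Condensed, the new entry point for the hotplug case clears the shadow
      for a task's entire stack (a sketch of the added helper, assuming the
      usual task_stack_page()/THREAD_SIZE layout):

          void kasan_unpoison_task_stack(struct task_struct *task)
          {
                  void *base = task_stack_page(task);

                  /* remove any stale poison covering @task's stack */
                  kasan_unpoison_shadow(base, THREAD_SIZE);
          }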
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 21 Nov 2015 (1 commit)
    • kasan: fix kmemleak false-positive in kasan_module_alloc() · 45937254
      Committed by Andrey Ryabinin
      Kmemleak reports the following leak:
      
      	unreferenced object 0xfffffbfff41ea000 (size 20480):
      	comm "modprobe", pid 65199, jiffies 4298875551 (age 542.568s)
      	hex dump (first 32 bytes):
      	  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      	  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      	backtrace:
      	  [<ffffffff82354f5e>] kmemleak_alloc+0x4e/0xc0
      	  [<ffffffff8152e718>] __vmalloc_node_range+0x4b8/0x740
      	  [<ffffffff81574072>] kasan_module_alloc+0x72/0xc0
      	  [<ffffffff810efe68>] module_alloc+0x78/0xb0
      	  [<ffffffff812f6a24>] module_alloc_update_bounds+0x14/0x70
      	  [<ffffffff812f8184>] layout_and_allocate+0x16f4/0x3c90
      	  [<ffffffff812faa1f>] load_module+0x2ff/0x6690
      	  [<ffffffff813010b6>] SyS_finit_module+0x136/0x170
      	  [<ffffffff8239bbc9>] system_call_fastpath+0x16/0x1b
      	  [<ffffffffffffffff>] 0xffffffffffffffff
      
      kasan_module_alloc() allocates shadow memory for a module and frees it
      on module unloading.  It doesn't store the pointer to the allocated
      shadow memory, because that pointer can be calculated from the
      shadowed address, i.e. kasan_mem_to_shadow(addr).

      Since kmemleak cannot find a pointer to the allocated shadow, it
      thinks that the memory has leaked.
      
      Use kmemleak_ignore() to tell kmemleak that this is not a leak and
      that the shadow memory doesn't contain any pointers.
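
      The fix is a one-liner after the shadow allocation; vmalloc() below
      stands in for the actual __vmalloc_node_range() call:

          void *ret = vmalloc(shadow_size);  /* module's shadow region */

          if (ret)
                  /* shadow holds no pointers and is freed on unload */
                  kmemleak_ignore(ret);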
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 06 Nov 2015 (5 commits)
  20. 18 Sep 2015 (1 commit)
  21. 15 Aug 2015 (1 commit)
  22. 16 Apr 2015 (1 commit)
  23. 13 Mar 2015 (1 commit)
    • kasan, module, vmalloc: rework shadow allocation for modules · a5af5aa8
      Committed by Andrey Ryabinin
      The current approach to handling shadow memory for modules is broken.

      Shadow memory can be freed only after the memory it shadows is no
      longer used.  vfree() called from interrupt context uses the memory it
      is freeing to store a 'struct llist_node':
      
          void vfree(const void *addr)
          {
          ...
              if (unlikely(in_interrupt())) {
                  struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
                  if (llist_add((struct llist_node *)addr, &p->list))
                          schedule_work(&p->wq);
      
      This list node is later used in free_work(), which actually frees the
      memory.  Currently module_memfree() called in interrupt context will
      free the shadow before freeing the module's memory, which could
      provoke a kernel crash.
      
      So shadow memory should be freed after the module's memory.  However,
      such a deallocation order could race with kasan_module_alloc() in
      module_alloc().

      Free the shadow right before releasing the vm area.  At this point the
      vfree()'d memory is not used anymore, yet not available for other
      allocations.  A new VM_KASAN flag indicates that a vm area has
      dynamically allocated shadow memory, so kasan frees the shadow only if
      it was previously allocated.
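
      In code terms (condensed from the patch): kasan_module_alloc() tags
      the module's vm area, and vmalloc teardown frees the shadow just
      before the area itself is released:

          /* in kasan_module_alloc(), once the shadow is allocated */
          find_vm_area(addr)->flags |= VM_KASAN;

          /* in __vunmap(), before the vm area is released */
          if (area->flags & VM_KASAN)
                  kasan_free_shadow(area);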
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  24. 14 Feb 2015 (3 commits)
    • kasan: enable instrumentation of global variables · bebf56a1
      Committed by Andrey Ryabinin
      This feature lets us detect out-of-bounds accesses to global
      variables.  It works both for globals in the kernel image and for
      globals in modules.  Currently it won't work for symbols in
      user-specified sections (e.g.  __init, __read_mostly, ...).
      
      The idea is simple.  The compiler grows each global variable by the
      redzone size and adds constructors invoking the
      __asan_register_globals() function.  Information about each global
      variable (address, size, size with redzone, ...) is passed to
      __asan_register_globals() so that we can poison the variable's
      redzone.
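
      The compiler-emitted descriptor and the callback it invokes look
      roughly like this (simplified; the real struct carries a few more
      fields):

          struct kasan_global {
                  const void *beg;           /* address of the variable */
                  size_t size;               /* size without redzone */
                  size_t size_with_redzone;  /* size including redzone */
          };

          void __asan_register_globals(struct kasan_global *globals, size_t size)
          {
                  size_t i;

                  for (i = 0; i < size; i++)
                          register_global(&globals[i]);  /* poison its redzone */
          }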
      
      This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
      addresses, making shadow memory handling
      (kasan_module_alloc()/kasan_module_free()) simpler.  Such alignment
      guarantees that each shadow page backing the modules' address space
      corresponds to only one module_alloc() allocation.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86_64: kasan: add interceptors for memset/memmove/memcpy functions · 393f203f
      Committed by Andrey Ryabinin
      Recently, instrumentation of builtin function calls was removed from
      GCC 5.0.  To check the memory accessed by such functions, userspace
      asan always uses interceptors for them.

      So now we should do this as well.  This patch declares
      memset/memmove/memcpy as weak symbols.  In mm/kasan/kasan.c we have
      our own implementations of those functions, which check memory before
      accessing it.
      
      The default memset/memmove/memcpy now always have aliases with a '__'
      prefix.  For files built without kasan instrumentation (e.g.
      mm/slub.c), the original mem* functions are replaced (via #define)
      with the prefixed variants, because we don't want to check memory
      accesses there.
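
      The interceptor pattern itself looks like this (a sketch; the kasan
      build provides these definitions, overriding the weak arch versions):

          #undef memcpy
          void *memcpy(void *dst, const void *src, size_t len)
          {
                  check_memory_region((unsigned long)src, len, false);  /* read */
                  check_memory_region((unsigned long)dst, len, true);   /* write */

                  return __memcpy(dst, src, len);  /* real, uninstrumented copy */
          }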
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: slub: add kernel address sanitizer support for slub allocator · 0316bec2
      Committed by Andrey Ryabinin
      With this patch kasan will be able to catch bugs in memory allocated
      by slub.  Initially, all objects in a newly allocated slab page are
      marked as redzone.  Later, when allocation of a slub object happens,
      the number of bytes requested by the caller is marked as accessible,
      and the rest of the object (including slub's metadata) is marked as
      redzone (inaccessible).
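
      In shadow terms, the allocation path does roughly the following
      (condensed from the kasan_kmalloc() hook; redzone_start is object +
      size rounded up to shadow granularity, redzone_end is object +
      cache->object_size):

          /* open up exactly what the caller asked for... */
          kasan_unpoison_shadow(object, size);
          /* ...and keep the tail of the object poisoned as redzone */
          kasan_poison_shadow((void *)redzone_start,
                              redzone_end - redzone_start,
                              KASAN_KMALLOC_REDZONE);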
      
      We also mark an object as accessible if ksize() was called for it.
      There are places in the kernel where ksize() is called to inquire the
      size of the actually allocated area.  Such callers could validly
      access the whole allocated memory, so it should be marked as
      accessible.
      
      Code in slub.c and slab_common.c can validly access an object's
      metadata, so instrumentation for these files is disabled.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Signed-off-by: Dmitry Chernenkov <dmitryc@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>