Commit b791d1bd authored by Linus Torvalds

Merge tag 'locking-kcsan-2020-06-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull the Kernel Concurrency Sanitizer from Thomas Gleixner:
 "The Kernel Concurrency Sanitizer (KCSAN) is a dynamic race detector,
  which relies on compile-time instrumentation, and uses a
  watchpoint-based sampling approach to detect races.

  The feature was under development for quite some time and has already
  found legitimate bugs.

  Unfortunately it comes with a limitation, which was only understood
  late in the development cycle:

     It requires an up-to-date CLANG-11 compiler

  CLANG-11 is not yet released (scheduled for June), but it's the only
  compiler today which handles the kernel requirements and especially
  the annotations of functions to exclude them from KCSAN
  instrumentation correctly.

  These annotations really need to work so that low level entry code and
  especially int3 text poke handling can be completely isolated.

  A detailed discussion of the requirements and compiler issues can be
  found here:

    https://lore.kernel.org/lkml/CANpmjNMTsY_8241bS7=XAfqvZHFLrVEkv_uM4aDUWE_kh3Rvbw@mail.gmail.com/

  We came to the conclusion that trying to work around compiler
  limitations and bugs again would end up in a major trainwreck, so
  requiring a working compiler seemed to be the best choice.

  For Continuous Integration purposes the compiler restriction is
  manageable and that's where most xxSAN reports come from.

  For a change this limitation might make GCC people actually look at
  their bugs. Some issues with TSAN in GCC are 7 years old and one was
  'fixed' 3 years ago with a half-baked solution which 'solved' the
  reported issue but not the underlying problem.

  The KCSAN developers are also pondering using a GCC plugin to become
  compiler-independent, but that's not something which will show up in a
  few days.

  Blocking KCSAN until widespread compiler support is available is not
  a really good alternative, because the continuous growth of lockless
  optimizations in the kernel demands proper tooling support"

* tag 'locking-kcsan-2020-06-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (76 commits)
  compiler_types.h, kasan: Use __SANITIZE_ADDRESS__ instead of CONFIG_KASAN to decide inlining
  compiler.h: Move function attributes to compiler_types.h
  compiler.h: Avoid nested statement expression in data_race()
  compiler.h: Remove data_race() and unnecessary checks from {READ,WRITE}_ONCE()
  kcsan: Update Documentation to change supported compilers
  kcsan: Remove 'noinline' from __no_kcsan_or_inline
  kcsan: Pass option tsan-instrument-read-before-write to Clang
  kcsan: Support distinguishing volatile accesses
  kcsan: Restrict supported compilers
  kcsan: Avoid inserting __tsan_func_entry/exit if possible
  ubsan, kcsan: Don't combine sanitizer with kcov on clang
  objtool, kcsan: Add kcsan_disable_current() and kcsan_enable_current_nowarn()
  kcsan: Add __kcsan_{enable,disable}_current() variants
  checkpatch: Warn about data_race() without comment
  kcsan: Use GFP_ATOMIC under spin lock
  Improve KCSAN documentation a bit
  kcsan: Make reporting aware of KCSAN tests
  kcsan: Fix function matching in report
  kcsan: Change data_race() to no longer require marking racing accesses
  kcsan: Move kcsan_{disable,enable}_current() to kcsan-checks.h
  ...
@@ -21,6 +21,7 @@ whole; patches welcome!
    kasan
    ubsan
    kmemleak
+   kcsan
    gdb-kernel-debugging
    kgdb
    kselftest
......
The Kernel Concurrency Sanitizer (KCSAN)
========================================

The Kernel Concurrency Sanitizer (KCSAN) is a dynamic race detector, which
relies on compile-time instrumentation, and uses a watchpoint-based sampling
approach to detect races. KCSAN's primary purpose is to detect `data races`_.

Usage
-----

KCSAN requires Clang version 11 or later.

To enable KCSAN configure the kernel with::

    CONFIG_KCSAN = y

KCSAN provides several other configuration options to customize behaviour (see
the respective help text in ``lib/Kconfig.kcsan`` for more info).

Error reports
~~~~~~~~~~~~~

A typical data race report looks like this::

    ==================================================================
    BUG: KCSAN: data-race in generic_permission / kernfs_refresh_inode

    write to 0xffff8fee4c40700c of 4 bytes by task 175 on cpu 4:
     kernfs_refresh_inode+0x70/0x170
     kernfs_iop_permission+0x4f/0x90
     inode_permission+0x190/0x200
     link_path_walk.part.0+0x503/0x8e0
     path_lookupat.isra.0+0x69/0x4d0
     filename_lookup+0x136/0x280
     user_path_at_empty+0x47/0x60
     vfs_statx+0x9b/0x130
     __do_sys_newlstat+0x50/0xb0
     __x64_sys_newlstat+0x37/0x50
     do_syscall_64+0x85/0x260
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

    read to 0xffff8fee4c40700c of 4 bytes by task 166 on cpu 6:
     generic_permission+0x5b/0x2a0
     kernfs_iop_permission+0x66/0x90
     inode_permission+0x190/0x200
     link_path_walk.part.0+0x503/0x8e0
     path_lookupat.isra.0+0x69/0x4d0
     filename_lookup+0x136/0x280
     user_path_at_empty+0x47/0x60
     do_faccessat+0x11a/0x390
     __x64_sys_access+0x3c/0x50
     do_syscall_64+0x85/0x260
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

    Reported by Kernel Concurrency Sanitizer on:
    CPU: 6 PID: 166 Comm: systemd-journal Not tainted 5.3.0-rc7+ #1
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
    ==================================================================
The header of the report provides a short summary of the functions involved in
the race. It is followed by the access types and stack traces of the 2 threads
involved in the data race.

The other less common type of data race report looks like this::

    ==================================================================
    BUG: KCSAN: data-race in e1000_clean_rx_irq+0x551/0xb10

    race at unknown origin, with read to 0xffff933db8a2ae6c of 1 bytes by interrupt on cpu 0:
     e1000_clean_rx_irq+0x551/0xb10
     e1000_clean+0x533/0xda0
     net_rx_action+0x329/0x900
     __do_softirq+0xdb/0x2db
     irq_exit+0x9b/0xa0
     do_IRQ+0x9c/0xf0
     ret_from_intr+0x0/0x18
     default_idle+0x3f/0x220
     arch_cpu_idle+0x21/0x30
     do_idle+0x1df/0x230
     cpu_startup_entry+0x14/0x20
     rest_init+0xc5/0xcb
     arch_call_rest_init+0x13/0x2b
     start_kernel+0x6db/0x700

    Reported by Kernel Concurrency Sanitizer on:
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.3.0-rc7+ #2
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
    ==================================================================
This report is generated where it was not possible to determine the other
racing thread, but a race was inferred due to the data value of the watched
memory location having changed. These can occur either due to missing
instrumentation or e.g. DMA accesses. These reports will only be generated if
``CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN=y`` (selected by default).

Selective analysis
~~~~~~~~~~~~~~~~~~

It may be desirable to disable data race detection for specific accesses,
functions, compilation units, or entire subsystems. For static blacklisting,
the below options are available:
* KCSAN understands the ``data_race(expr)`` annotation, which tells KCSAN that
  any data races due to accesses in ``expr`` should be ignored and resulting
  behaviour when encountering a data race is deemed safe. A short example
  combining these annotations follows this list.

* Disabling data race detection for entire functions can be accomplished by
  using the function attribute ``__no_kcsan``::

    __no_kcsan
    void foo(void) {
        ...

  To dynamically limit for which functions to generate reports, see the
  `DebugFS interface`_ blacklist/whitelist feature.

  For ``__always_inline`` functions, replace ``__always_inline`` with
  ``__no_kcsan_or_inline`` (which implies ``__always_inline``)::

    static __no_kcsan_or_inline void foo(void) {
        ...

* To disable data race detection for a particular compilation unit, add to the
  ``Makefile``::

    KCSAN_SANITIZE_file.o := n

* To disable data race detection for all compilation units listed in a
  ``Makefile``, add to the respective ``Makefile``::

    KCSAN_SANITIZE := n
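
As a minimal sketch of how the access-level annotations combine (the variable
and function names here are purely illustrative, not taken from the kernel)::

    /* Illustrative only: an unsynchronized diagnostic counter. */
    static unsigned long total_ops;

    __no_kcsan
    static void debug_dump(void)
    {
        /* No accesses in this function are instrumented by KCSAN. */
        pr_info("ops=%lu\n", total_ops);
    }

    static void stats_flush(void)
    {
        /* Only this intentionally racy read is excused. */
        unsigned long snapshot = data_race(total_ops);

        pr_info("approx ops=%lu\n", snapshot);
    }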
Furthermore, it is possible to tell KCSAN to show or hide entire classes of
data races, depending on preferences. These can be changed via the following
Kconfig options:

* ``CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY``: If enabled and a conflicting write
  is observed via a watchpoint, but the data value of the memory location was
  observed to remain unchanged, do not report the data race.

* ``CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC``: Assume that plain aligned writes
  up to word size are atomic by default. Assumes that such writes are not
  subject to unsafe compiler optimizations resulting in data races. The option
  causes KCSAN to not report data races due to conflicts where the only plain
  accesses are aligned writes up to word size.
DebugFS interface
~~~~~~~~~~~~~~~~~

The file ``/sys/kernel/debug/kcsan`` provides the following interface (example
usage is shown after this list):

* Reading ``/sys/kernel/debug/kcsan`` returns various runtime statistics.

* Writing ``on`` or ``off`` to ``/sys/kernel/debug/kcsan`` allows turning KCSAN
  on or off, respectively.

* Writing ``!some_func_name`` to ``/sys/kernel/debug/kcsan`` adds
  ``some_func_name`` to the report filter list, which (by default) blacklists
  reporting data races where either one of the top stackframes are a function
  in the list.

* Writing either ``blacklist`` or ``whitelist`` to ``/sys/kernel/debug/kcsan``
  changes the report filtering behaviour. For example, the blacklist feature
  can be used to silence frequently occurring data races; the whitelist feature
  can help with reproduction and testing of fixes.
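
As a sketch of typical usage from a shell (the function name is
illustrative)::

    # Turn KCSAN off and back on at runtime.
    echo off > /sys/kernel/debug/kcsan
    echo on  > /sys/kernel/debug/kcsan

    # Silence reports whose top stackframes include some_func_name.
    echo blacklist > /sys/kernel/debug/kcsan
    echo '!some_func_name' > /sys/kernel/debug/kcsan

    # Read runtime statistics.
    cat /sys/kernel/debug/kcsan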
Tuning performance
~~~~~~~~~~~~~~~~~~

Core parameters that affect KCSAN's overall performance and bug detection
ability are exposed as kernel command-line arguments whose defaults can also be
changed via the corresponding Kconfig options.

* ``kcsan.skip_watch`` (``CONFIG_KCSAN_SKIP_WATCH``): Number of per-CPU memory
  operations to skip, before another watchpoint is set up. Setting up
  watchpoints more frequently will result in the likelihood of races to be
  observed to increase. This parameter has the most significant impact on
  overall system performance and race detection ability.

* ``kcsan.udelay_task`` (``CONFIG_KCSAN_UDELAY_TASK``): For tasks, the
  microsecond delay to stall execution after a watchpoint has been set up.
  Larger values result in the window in which we may observe a race to
  increase.

* ``kcsan.udelay_interrupt`` (``CONFIG_KCSAN_UDELAY_INTERRUPT``): For
  interrupts, the microsecond delay to stall execution after a watchpoint has
  been set up. Interrupts have tighter latency requirements, and their delay
  should generally be smaller than the one chosen for tasks.

They may be tweaked at runtime via ``/sys/module/kcsan/parameters/``.
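
For example, to set up watchpoints more aggressively than the defaults, one
might boot with (the values here are illustrative, not recommendations)::

    kcsan.skip_watch=2000 kcsan.udelay_task=100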
Data Races
----------

In an execution, two memory accesses form a *data race* if they *conflict*,
they happen concurrently in different threads, and at least one of them is a
*plain access*; they *conflict* if both access the same memory location, and at
least one is a write. For a more thorough discussion and definition, see `"Plain
Accesses and Data Races" in the LKMM`_.

.. _"Plain Accesses and Data Races" in the LKMM: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/memory-model/Documentation/explanation.txt#n1922
Relationship with the Linux-Kernel Memory Consistency Model (LKMM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The LKMM defines the propagation and ordering rules of various memory
operations, which gives developers the ability to reason about concurrent code.
Ultimately this allows determining the possible executions of concurrent code,
and whether that code is free from data races.

KCSAN is aware of *marked atomic operations* (``READ_ONCE``, ``WRITE_ONCE``,
``atomic_*``, etc.), but is oblivious of any ordering guarantees and simply
assumes that memory barriers are placed correctly. In other words, KCSAN
assumes that as long as a plain access is not observed to race with another
conflicting access, memory operations are correctly ordered.

This means that KCSAN will not report *potential* data races due to missing
memory ordering. Developers should therefore carefully consider the memory
ordering requirements that remain unchecked. If, however, missing
memory ordering (that is observable with a particular compiler and
architecture) leads to an observable data race (e.g. entering a critical
section erroneously), KCSAN would report the resulting data race.
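
For instance, in the classic message-passing pattern below (a sketch with
hypothetical names), every access is marked, so KCSAN observes no data race;
yet without release/acquire ordering (e.g. ``smp_store_release()`` and
``smp_load_acquire()``), a weakly ordered architecture may let the consumer
observe ``ready == 1`` before the store to ``data`` is visible::

    int data, ready;

    void producer(void)
    {
        WRITE_ONCE(data, 42);
        WRITE_ONCE(ready, 1);            /* missing release ordering */
    }

    void consumer(void)
    {
        if (READ_ONCE(ready))            /* missing acquire ordering */
            do_work(READ_ONCE(data));    /* may observe a stale value */
    }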
Race Detection Beyond Data Races
--------------------------------

For code with complex concurrency design, race-condition bugs may not always
manifest as data races. Race conditions occur if concurrently executing
operations result in unexpected system behaviour. On the other hand, data races
are defined at the C-language level. The following macros can be used to check
properties of concurrent code where bugs would not manifest as data races.

.. kernel-doc:: include/linux/kcsan-checks.h
    :functions: ASSERT_EXCLUSIVE_WRITER ASSERT_EXCLUSIVE_WRITER_SCOPED
                ASSERT_EXCLUSIVE_ACCESS ASSERT_EXCLUSIVE_ACCESS_SCOPED
                ASSERT_EXCLUSIVE_BITS
Implementation Details
----------------------

KCSAN relies on observing that two accesses happen concurrently. Crucially, we
want to (a) increase the chances of observing races (especially for races that
manifest rarely), and (b) be able to actually observe them. We can accomplish
(a) by injecting various delays, and (b) by using address watchpoints (or
breakpoints).

If we deliberately stall a memory access, while we have a watchpoint for its
address set up, and then observe the watchpoint to fire, two accesses to the
same address just raced. Using hardware watchpoints, this is the approach taken
in `DataCollider
<http://usenix.org/legacy/events/osdi10/tech/full_papers/Erickson.pdf>`_.
Unlike DataCollider, KCSAN does not use hardware watchpoints, but instead
relies on compiler instrumentation and "soft watchpoints".

In KCSAN, watchpoints are implemented using an efficient encoding that stores
access type, size, and address in a long; the benefits of using "soft
watchpoints" are portability and greater flexibility. KCSAN then relies on the
compiler instrumenting plain accesses. For each instrumented plain access:

1. Check if a matching watchpoint exists; if yes, and at least one access is a
   write, then we encountered a racing access.

2. Periodically, if no matching watchpoint exists, set up a watchpoint and
   stall for a small randomized delay.

3. Also check the data value before the delay, and re-check the data value
   after delay; if the values mismatch, we infer a race of unknown origin.

To detect data races between plain and marked accesses, KCSAN also annotates
marked accesses, but only to check if a watchpoint exists; i.e. KCSAN never
sets up a watchpoint on marked accesses. By never setting up watchpoints for
marked operations, if all accesses to a variable that is accessed concurrently
are properly marked, KCSAN will never trigger a watchpoint and therefore never
report the accesses.
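
The fast path can be sketched as follows; the helper names below are
illustrative stand-ins, not the runtime's actual internals::

    /* Sketch of the per-access check (illustrative only). */
    void check_access(const volatile void *ptr, size_t size, bool is_write)
    {
        watchpoint_t *wp = find_watchpoint(ptr, size, is_write);

        if (wp) {
            /* Step 1: another thread set up a watchpoint here -> race. */
            report_race(ptr, size, is_write);
            return;
        }

        /* Step 2: only watch every kcsan.skip_watch-th access. */
        if (!should_watch())
            return;

        u64 before = read_value(ptr, size);
        wp = setup_watchpoint(ptr, size, is_write);
        stall(in_interrupt() ? udelay_interrupt : udelay_task);

        /* Step 3: value changed but no watchpoint fired -> unknown origin. */
        if (!watchpoint_fired(wp) && read_value(ptr, size) != before)
            report_race_unknown_origin(ptr, size);

        remove_watchpoint(wp);
    }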
Key Properties
~~~~~~~~~~~~~~

1. **Memory Overhead:** The overall memory overhead is only a few MiB
   depending on configuration. The current implementation uses a small array of
   longs to encode watchpoint information, which is negligible.

2. **Performance Overhead:** KCSAN's runtime aims to be minimal, using an
   efficient watchpoint encoding that does not require acquiring any shared
   locks in the fast-path. For kernel boot on a system with 8 CPUs:

   - 5.0x slow-down with the default KCSAN config;
   - 2.8x slow-down from runtime fast-path overhead only (set very large
     ``KCSAN_SKIP_WATCH`` and unset ``KCSAN_SKIP_WATCH_RANDOMIZE``).

3. **Annotation Overheads:** Minimal annotations are required outside the KCSAN
   runtime. As a result, maintenance overheads are minimal as the kernel
   evolves.

4. **Detects Racy Writes from Devices:** Due to checking data values upon
   setting up watchpoints, racy writes from devices can also be detected.

5. **Memory Ordering:** KCSAN is *not* explicitly aware of the LKMM's ordering
   rules; this may result in missed data races (false negatives).

6. **Analysis Accuracy:** For observed executions, due to using a sampling
   strategy, the analysis is *unsound* (false negatives possible), but aims to
   be complete (no false positives).
Alternatives Considered
-----------------------

An alternative data race detection approach for the kernel can be found in the
`Kernel Thread Sanitizer (KTSAN) <https://github.com/google/ktsan/wiki>`_.
KTSAN is a happens-before data race detector, which explicitly establishes the
happens-before order between memory operations, which can then be used to
determine data races as defined in `Data Races`_.

To build a correct happens-before relation, KTSAN must be aware of all ordering
rules of the LKMM and synchronization primitives. Unfortunately, any omission
leads to large numbers of false positives, which is especially detrimental in
the context of the kernel which includes numerous custom synchronization
mechanisms. To track the happens-before relation, KTSAN's implementation
requires metadata for each memory location (shadow memory), which for each page
corresponds to 4 pages of shadow memory, and can translate into overhead of
tens of GiB on a large system.
@@ -9305,6 +9305,17 @@ F: Documentation/kbuild/kconfig*
 F: scripts/Kconfig.include
 F: scripts/kconfig/
 
+KCSAN
+M: Marco Elver <elver@google.com>
+R: Dmitry Vyukov <dvyukov@google.com>
+L: kasan-dev@googlegroups.com
+S: Maintained
+F: Documentation/dev-tools/kcsan.rst
+F: include/linux/kcsan*.h
+F: kernel/kcsan/
+F: lib/Kconfig.kcsan
+F: scripts/Makefile.kcsan
+
 KDUMP
 M: Dave Young <dyoung@redhat.com>
 M: Baoquan He <bhe@redhat.com>
......
@@ -531,7 +531,7 @@ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
 export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
-export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN
+export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN CFLAGS_KCSAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -965,6 +965,7 @@ endif
 include scripts/Makefile.kasan
 include scripts/Makefile.extrawarn
 include scripts/Makefile.ubsan
+include scripts/Makefile.kcsan
 
 # Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments
 KBUILD_CPPFLAGS += $(KCPPFLAGS)
......
@@ -233,6 +233,7 @@ config X86
 	select THREAD_INFO_IN_TASK
 	select USER_STACKTRACE_SUPPORT
 	select VIRT_TO_BUS
+	select HAVE_ARCH_KCSAN			if X86_64
 	select X86_FEATURE_NAMES		if PROC_FS
 	select PROC_PID_ARCH_STATUS		if PROC_FS
 	imply IMA_SECURE_AND_OR_TRUSTED_BOOT	if EFI
......
@@ -9,7 +9,9 @@
 # Changed by many, many contributors over the years.
 #
 
+# Sanitizer runtimes are unavailable and cannot be linked for early boot code.
 KASAN_SANITIZE := n
+KCSAN_SANITIZE := n
 OBJECT_FILES_NON_STANDARD := y
 
 # Kernel does not boot with kcov instrumentation here.
......
@@ -17,7 +17,9 @@
 # (see scripts/Makefile.lib size_append)
 # compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+# Sanitizer runtimes are unavailable and cannot be linked for early boot code.
 KASAN_SANITIZE := n
+KCSAN_SANITIZE := n
 OBJECT_FILES_NON_STANDARD := y
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
......
@@ -10,8 +10,11 @@ ARCH_REL_TYPE_ABS += R_386_GLOB_DAT|R_386_JMP_SLOT|R_386_RELATIVE
 include $(srctree)/lib/vdso/Makefile
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE := n
 UBSAN_SANITIZE := n
+KCSAN_SANITIZE := n
 OBJECT_FILES_NON_STANDARD := y
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
@@ -29,6 +32,9 @@ vobjs32-y += vdso32/vclock_gettime.o
 # files to link into kernel
 obj-y += vma.o
+KASAN_SANITIZE_vma.o := y
+UBSAN_SANITIZE_vma.o := y
+KCSAN_SANITIZE_vma.o := y
 OBJECT_FILES_NON_STANDARD_vma.o := n
 
 # vDSO images to build
......
@@ -201,8 +201,12 @@ arch_test_and_change_bit(long nr, volatile unsigned long *addr)
 	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(btc), *addr, c, "Ir", nr);
 }
 
-static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
+static __no_kcsan_or_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
 {
+	/*
+	 * Because this is a plain access, we need to disable KCSAN here to
+	 * avoid double instrumentation via instrumented bitops.
+	 */
 	return ((1UL << (nr & (BITS_PER_LONG-1))) &
 		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
 }
......
@@ -28,6 +28,10 @@ KASAN_SANITIZE_dumpstack_$(BITS).o := n
 KASAN_SANITIZE_stacktrace.o := n
 KASAN_SANITIZE_paravirt.o := n
 
+# With some compiler versions the generated code results in boot hangs, caused
+# by several compilation units. To be safe, disable all instrumentation.
+KCSAN_SANITIZE := n
+
 OBJECT_FILES_NON_STANDARD_test_nx.o := y
 OBJECT_FILES_NON_STANDARD_paravirt_patch.o := y
......
@@ -13,6 +13,9 @@ endif
 KCOV_INSTRUMENT_common.o := n
 KCOV_INSTRUMENT_perf_event.o := n
 
+# As above, instrumenting secondary CPU boot code causes boot hangs.
+KCSAN_SANITIZE_common.o := n
+
 # Make sure load_percpu_segment has no stackprotector
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_common.o := $(nostackp)
......
@@ -991,7 +991,15 @@ void __init e820__reserve_setup_data(void)
 	while (pa_data) {
 		data = early_memremap(pa_data, sizeof(*data));
 		e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
-		e820__range_update_kexec(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+
+		/*
+		 * SETUP_EFI is supplied by kexec and does not need to be
+		 * reserved.
+		 */
+		if (data->type != SETUP_EFI)
+			e820__range_update_kexec(pa_data,
+						 sizeof(*data) + data->len,
+						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
 
 		if (data->type == SETUP_INDIRECT &&
 		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
......
@@ -6,10 +6,19 @@
 # Produces uninteresting flaky coverage.
 KCOV_INSTRUMENT_delay.o := n
 
+# KCSAN uses udelay for introducing watchpoint delay; avoid recursion.
+KCSAN_SANITIZE_delay.o := n
+ifdef CONFIG_KCSAN
+# In case KCSAN+lockdep+ftrace are enabled, disable ftrace for delay.o to avoid
+# lockdep -> [other libs] -> KCSAN -> udelay -> ftrace -> lockdep recursion.
+CFLAGS_REMOVE_delay.o = $(CC_FLAGS_FTRACE)
+endif
+
 # Early boot use of cmdline; don't instrument it
 ifdef CONFIG_AMD_MEM_ENCRYPT
 KCOV_INSTRUMENT_cmdline.o := n
 KASAN_SANITIZE_cmdline.o := n
+KCSAN_SANITIZE_cmdline.o := n
 
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_cmdline.o = -pg
......
@@ -7,6 +7,10 @@ KCOV_INSTRUMENT_mem_encrypt_identity.o := n
 KASAN_SANITIZE_mem_encrypt.o := n
 KASAN_SANITIZE_mem_encrypt_identity.o := n
 
+# Disable KCSAN entirely, because otherwise we get warnings that some functions
+# reference __initdata sections.
+KCSAN_SANITIZE := n
+
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_mem_encrypt.o = -pg
 CFLAGS_REMOVE_mem_encrypt_identity.o = -pg
......
@@ -14,10 +14,18 @@ $(obj)/sha256.o: $(srctree)/lib/crypto/sha256.c FORCE
 CFLAGS_sha256.o := -D__DISABLE_EXPORTS
 
-LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
-targets += purgatory.ro
+# When linking purgatory.ro with -r unresolved symbols are not checked,
+# also link a purgatory.chk binary without -r to check for unresolved symbols.
+PURGATORY_LDFLAGS := -e purgatory_start -nostdlib -z nodefaultlib
+LDFLAGS_purgatory.ro := -r $(PURGATORY_LDFLAGS)
+LDFLAGS_purgatory.chk := $(PURGATORY_LDFLAGS)
+targets += purgatory.ro purgatory.chk
 
+# Sanitizer, etc. runtimes are unavailable and cannot be linked here.
+GCOV_PROFILE	:= n
 KASAN_SANITIZE	:= n
+UBSAN_SANITIZE	:= n
+KCSAN_SANITIZE	:= n
 KCOV_INSTRUMENT := n
 
 # These are adjustments to the compiler flags used for objects that
@@ -25,7 +33,7 @@ KCOV_INSTRUMENT := n
 PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
 PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
-PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN)
+PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
 
 # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
 # in turn leaves some undefined symbols like __fentry__ in purgatory and not
@@ -58,12 +66,15 @@ CFLAGS_string.o += $(PURGATORY_CFLAGS)
 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
 		$(call if_changed,ld)
 
+$(obj)/purgatory.chk: $(obj)/purgatory.ro FORCE
+		$(call if_changed,ld)
+
 targets += kexec-purgatory.c
 
 quiet_cmd_bin2c = BIN2C   $@
       cmd_bin2c = $(objtree)/scripts/bin2c kexec_purgatory < $< > $@
 
-$(obj)/kexec-purgatory.c: $(obj)/purgatory.ro FORCE
+$(obj)/kexec-purgatory.c: $(obj)/purgatory.ro $(obj)/purgatory.chk FORCE
 	$(call if_changed,bin2c)
 
 obj-$(CONFIG_KEXEC_FILE)	+= kexec-purgatory.o
@@ -6,7 +6,10 @@
 # for more details.
 #
 #
+
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 
 subdir- := rm
......
@@ -6,7 +6,10 @@
 # for more details.
 #
 #
+
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
......
@@ -37,7 +37,9 @@ KBUILD_CFLAGS := $(cflags-y) -Os -DDISABLE_BRANCH_PROFILING \
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
 
 GCOV_PROFILE			:= n
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
......
This diff is collapsed.
@@ -11,7 +11,7 @@
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * set_bit - Atomically set a bit in memory
@@ -25,7 +25,7 @@
  */
 static inline void set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_set_bit(nr, addr);
 }
 
@@ -38,7 +38,7 @@ static inline void set_bit(long nr, volatile unsigned long *addr)
  */
 static inline void clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_clear_bit(nr, addr);
 }
 
@@ -54,7 +54,7 @@ static inline void clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline void change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_change_bit(nr, addr);
 }
 
@@ -67,7 +67,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_set_bit(nr, addr);
 }
 
@@ -80,7 +80,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_clear_bit(nr, addr);
 }
 
@@ -93,7 +93,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_change_bit(nr, addr);
 }
......
@@ -11,7 +11,7 @@
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * clear_bit_unlock - Clear a bit in memory, for unlock
@@ -22,7 +22,7 @@
  */
 static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_clear_bit_unlock(nr, addr);
 }
 
@@ -37,7 +37,7 @@ static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
  */
 static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___clear_bit_unlock(nr, addr);
 }
 
@@ -52,7 +52,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_set_bit_lock(nr, addr);
 }
 
@@ -71,7 +71,7 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 static inline bool
 clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
 }
 
 /* Let everybody know we have it. */
......
@@ -11,7 +11,7 @@
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * __set_bit - Set a bit in memory
@@ -24,7 +24,7 @@
  */
 static inline void __set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___set_bit(nr, addr);
 }
 
@@ -39,7 +39,7 @@ static inline void __set_bit(long nr, volatile unsigned long *addr)
  */
 static inline void __clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___clear_bit(nr, addr);
 }
 
@@ -54,7 +54,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline void __change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___change_bit(nr, addr);
 }
 
@@ -68,7 +68,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
 */
 static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_set_bit(nr, addr);
 }
 
@@ -82,7 +82,7 @@ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
 */
 static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_clear_bit(nr, addr);
 }
 
@@ -96,7 +96,7 @@ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
 */
 static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_change_bit(nr, addr);
 }
 
@@ -107,7 +107,7 @@ static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
 */
 static inline bool test_bit(long nr, const volatile unsigned long *addr)
 {
-	kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_bit(nr, addr);
 }
......
@@ -16,7 +16,7 @@
 #define KASAN_ABI_VERSION 5
 
 #if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
-/* emulate gcc's __SANITIZE_ADDRESS__ flag */
+/* Emulate GCC's __SANITIZE_ADDRESS__ flag */
 #define __SANITIZE_ADDRESS__
 #define __no_sanitize_address \
 		__attribute__((no_sanitize("address", "hwaddress")))
@@ -24,6 +24,15 @@
 #define __no_sanitize_address
 #endif
 
+#if __has_feature(thread_sanitizer)
+/* emulate gcc's __SANITIZE_THREAD__ flag */
+#define __SANITIZE_THREAD__
+#define __no_sanitize_thread \
+		__attribute__((no_sanitize("thread")))
+#else
+#define __no_sanitize_thread
+#endif
+
 /*
  * Not all versions of clang implement the the type-generic versions
  * of the builtin overflow checkers. Fortunately, clang implements
......
@@ -144,6 +144,12 @@
 #define __no_sanitize_address
 #endif
 
+#if defined(__SANITIZE_THREAD__) && __has_attribute(__no_sanitize_thread__)
+#define __no_sanitize_thread __attribute__((no_sanitize_thread))
+#else
+#define __no_sanitize_thread
+#endif
+
 #if GCC_VERSION >= 50100
 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
 #endif
......
@@ -250,6 +250,27 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
  */
 #include <asm/barrier.h>
 #include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+
+/**
+ * data_race - mark an expression as containing intentional data races
+ *
+ * This data_race() macro is useful for situations in which data races
+ * should be forgiven. One example is diagnostic code that accesses
+ * shared variables but is not a part of the core synchronization design.
+ *
+ * This macro *does not* affect normal code generation, but is a hint
+ * to tooling that data races here are to be ignored.
+ */
+#define data_race(expr)						\
+({								\
+	__unqual_scalar_typeof(({ expr; })) __v = ({		\
+		__kcsan_disable_current();			\
+		expr;						\
+	});							\
+	__kcsan_enable_current();				\
+	__v;							\
+})
 
 /*
  * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
@@ -271,30 +292,18 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 	__READ_ONCE_SCALAR(x);					\
 })
 
 #define __WRITE_ONCE(x, val)					\
 do {								\
 	*(volatile typeof(x) *)&(x) = (val);			\
 } while (0)
 
 #define WRITE_ONCE(x, val)					\
 do {								\
 	compiletime_assert_rwonce_type(x);			\
 	__WRITE_ONCE(x, val);					\
 } while (0)
 
-#ifdef CONFIG_KASAN
-/*
- * We can't declare function 'inline' because __no_sanitize_address conflicts
- * with inlining. Attempt to inline it may cause a build failure.
- * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
- * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
- */
-# define __no_kasan_or_inline __no_sanitize_address notrace __maybe_unused
-#else
-# define __no_kasan_or_inline __always_inline
-#endif
-
-static __no_kasan_or_inline
+static __no_sanitize_or_inline
 unsigned long __read_once_word_nocheck(const void *addr)
 {
 	return __READ_ONCE(*(unsigned long *)addr);
@@ -302,8 +311,8 @@ unsigned long __read_once_word_nocheck(const void *addr)
 
 /*
  * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
- * word from memory atomically but without telling KASAN. This is usually
- * used by unwinding code when walking the stack of a running process.
+ * word from memory atomically but without telling KASAN/KCSAN. This is
+ * usually used by unwinding code when walking the stack of a running process.
 */
 #define READ_ONCE_NOCHECK(x)					\
 ({								\
......
@@ -171,6 +171,38 @@ struct ftrace_likely_data {
  */
 #define noinline_for_stack noinline
 
+/*
+ * Sanitizer helper attributes: Because using __always_inline and
+ * __no_sanitize_* conflict, provide helper attributes that will either expand
+ * to __no_sanitize_* in compilation units where instrumentation is enabled
+ * (__SANITIZE_*__), or __always_inline in compilation units without
+ * instrumentation (__SANITIZE_*__ undefined).
+ */
+#ifdef __SANITIZE_ADDRESS__
+/*
+ * We can't declare function 'inline' because __no_sanitize_address conflicts
+ * with inlining. Attempt to inline it may cause a build failure.
+ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
+ * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
+ */
+# define __no_kasan_or_inline __no_sanitize_address notrace __maybe_unused
+# define __no_sanitize_or_inline __no_kasan_or_inline
+#else
+# define __no_kasan_or_inline __always_inline
+#endif
+
+#define __no_kcsan __no_sanitize_thread
+#ifdef __SANITIZE_THREAD__
+# define __no_kcsan_or_inline __no_kcsan notrace __maybe_unused
+# define __no_sanitize_or_inline __no_kcsan_or_inline
+#else
+# define __no_kcsan_or_inline __always_inline
+#endif
+
+#ifndef __no_sanitize_or_inline
+#define __no_sanitize_or_inline __always_inline
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* __ASSEMBLY__ */
......
/* SPDX-License-Identifier: GPL-2.0 */

/*
 * This header provides generic wrappers for memory access instrumentation that
 * the compiler cannot emit for: KASAN, KCSAN.
 */
#ifndef _LINUX_INSTRUMENTED_H
#define _LINUX_INSTRUMENTED_H

#include <linux/compiler.h>
#include <linux/kasan-checks.h>
#include <linux/kcsan-checks.h>
#include <linux/types.h>

/**
 * instrument_read - instrument regular read access
 *
 * Instrument a regular read access. The instrumentation should be inserted
 * before the actual read happens.
 *
 * @ptr address of access
 * @size size of access
 */
static __always_inline void instrument_read(const volatile void *v, size_t size)
{
	kasan_check_read(v, size);
	kcsan_check_read(v, size);
}

/**
 * instrument_write - instrument regular write access
 *
 * Instrument a regular write access. The instrumentation should be inserted
 * before the actual write happens.
 *
 * @ptr address of access
 * @size size of access
 */
static __always_inline void instrument_write(const volatile void *v, size_t size)
{
	kasan_check_write(v, size);
	kcsan_check_write(v, size);
}

/**
 * instrument_atomic_read - instrument atomic read access
 *
 * Instrument an atomic read access. The instrumentation should be inserted
 * before the actual read happens.
 *
 * @ptr address of access
 * @size size of access
 */
static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
{
	kasan_check_read(v, size);
	kcsan_check_atomic_read(v, size);
}

/**
 * instrument_atomic_write - instrument atomic write access
 *
 * Instrument an atomic write access. The instrumentation should be inserted
 * before the actual write happens.
 *
 * @ptr address of access
 * @size size of access
 */
static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
{
	kasan_check_write(v, size);
	kcsan_check_atomic_write(v, size);
}

/**
 * instrument_copy_to_user - instrument reads of copy_to_user
 *
 * Instrument reads from kernel memory, that are due to copy_to_user (and
 * variants). The instrumentation must be inserted before the accesses.
 *
 * @to destination address
 * @from source address
 * @n number of bytes to copy
 */
static __always_inline void
instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	kasan_check_read(from, n);
	kcsan_check_read(from, n);
}

/**
 * instrument_copy_from_user - instrument writes of copy_from_user
 *
 * Instrument writes to kernel memory, that are due to copy_from_user (and
 * variants). The instrumentation should be inserted before the accesses.
 *
 * @to destination address
 * @from source address
 * @n number of bytes to copy
 */
static __always_inline void
instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
{
	kasan_check_write(to, n);
	kcsan_check_write(to, n);
}

#endif /* _LINUX_INSTRUMENTED_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_KCSAN_CHECKS_H
#define _LINUX_KCSAN_CHECKS_H
/* Note: Only include what is already included by compiler.h. */
#include <linux/compiler_attributes.h>
#include <linux/types.h>
/*
* ACCESS TYPE MODIFIERS
*
* <none>: normal read access;
* WRITE : write access;
* ATOMIC: access is atomic;
* ASSERT: access is not a regular access, but an assertion;
* SCOPED: access is a scoped access;
*/
#define KCSAN_ACCESS_WRITE 0x1
#define KCSAN_ACCESS_ATOMIC 0x2
#define KCSAN_ACCESS_ASSERT 0x4
#define KCSAN_ACCESS_SCOPED 0x8
/*
* __kcsan_*: Always calls into the runtime when KCSAN is enabled. This may be used
* even in compilation units that selectively disable KCSAN, but must use KCSAN
* to validate access to an address. Never use these in header files!
*/
#ifdef CONFIG_KCSAN
/**
* __kcsan_check_access - check generic access for races
*
* @ptr: address of access
* @size: size of access
* @type: access type modifier
*/
void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
/**
* kcsan_disable_current - disable KCSAN for the current context
*
* Supports nesting.
*/
void kcsan_disable_current(void);
/**
* kcsan_enable_current - re-enable KCSAN for the current context
*
* Supports nesting.
*/
void kcsan_enable_current(void);
void kcsan_enable_current_nowarn(void); /* Safe in uaccess regions. */
/**
* kcsan_nestable_atomic_begin - begin nestable atomic region
*
* Accesses within the atomic region may appear to race with other accesses but
* should be considered atomic.
*/
void kcsan_nestable_atomic_begin(void);
/**
* kcsan_nestable_atomic_end - end nestable atomic region
*/
void kcsan_nestable_atomic_end(void);
/**
* kcsan_flat_atomic_begin - begin flat atomic region
*
* Accesses within the atomic region may appear to race with other accesses but
* should be considered atomic.
*/
void kcsan_flat_atomic_begin(void);
/**
* kcsan_flat_atomic_end - end flat atomic region
*/
void kcsan_flat_atomic_end(void);
/**
* kcsan_atomic_next - consider following accesses as atomic
*
* Force treating the next n memory accesses for the current context as atomic
* operations.
*
* @n: number of following memory accesses to treat as atomic.
*/
void kcsan_atomic_next(int n);
/**
* kcsan_set_access_mask - set access mask
*
* Set the access mask for all accesses for the current context if non-zero.
* Only value changes to bits set in the mask will be reported.
*
* @mask: bitmask
*/
void kcsan_set_access_mask(unsigned long mask);
/* Scoped access information. */
struct kcsan_scoped_access {
struct list_head list;
const volatile void *ptr;
size_t size;
int type;
};
/*
* Automatically call kcsan_end_scoped_access() when kcsan_scoped_access goes
* out of scope; relies on attribute "cleanup", which is supported by all
* compilers that support KCSAN.
*/
#define __kcsan_cleanup_scoped \
__maybe_unused __attribute__((__cleanup__(kcsan_end_scoped_access)))
/**
* kcsan_begin_scoped_access - begin scoped access
*
* Begin scoped access and initialize @sa, which will cause KCSAN to
* continuously check the memory range in the current thread until
* kcsan_end_scoped_access() is called for @sa.
*
* Scoped accesses are implemented by appending @sa to an internal list for the
* current execution context, and then checked on every call into the KCSAN
* runtime.
*
* @ptr: address of access
* @size: size of access
* @type: access type modifier
* @sa: struct kcsan_scoped_access to use for the scope of the access
*/
struct kcsan_scoped_access *
kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
struct kcsan_scoped_access *sa);
/**
* kcsan_end_scoped_access - end scoped access
*
* End a scoped access, which will stop KCSAN checking the memory range.
* Requires that kcsan_begin_scoped_access() was previously called once for @sa.
*
* @sa: a previously initialized struct kcsan_scoped_access
*/
void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
#else /* CONFIG_KCSAN */
static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
int type) { }
static inline void kcsan_disable_current(void) { }
static inline void kcsan_enable_current(void) { }
static inline void kcsan_enable_current_nowarn(void) { }
static inline void kcsan_nestable_atomic_begin(void) { }
static inline void kcsan_nestable_atomic_end(void) { }
static inline void kcsan_flat_atomic_begin(void) { }
static inline void kcsan_flat_atomic_end(void) { }
static inline void kcsan_atomic_next(int n) { }
static inline void kcsan_set_access_mask(unsigned long mask) { }
struct kcsan_scoped_access { };
#define __kcsan_cleanup_scoped __maybe_unused
static inline struct kcsan_scoped_access *
kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
struct kcsan_scoped_access *sa) { return sa; }
static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
#endif /* CONFIG_KCSAN */
#ifdef __SANITIZE_THREAD__
/*
* Only calls into the runtime when the particular compilation unit has KCSAN
* instrumentation enabled. May be used in header files.
*/
#define kcsan_check_access __kcsan_check_access
/*
* Only use these to disable KCSAN for accesses in the current compilation unit;
* calls into libraries may still perform KCSAN checks.
*/
#define __kcsan_disable_current kcsan_disable_current
#define __kcsan_enable_current kcsan_enable_current_nowarn
#else
static inline void kcsan_check_access(const volatile void *ptr, size_t size,
int type) { }
static inline void __kcsan_enable_current(void) { }
static inline void __kcsan_disable_current(void) { }
#endif
/**
* __kcsan_check_read - check regular read access for races
*
* @ptr: address of access
* @size: size of access
*/
#define __kcsan_check_read(ptr, size) __kcsan_check_access(ptr, size, 0)
/**
* __kcsan_check_write - check regular write access for races
*
* @ptr: address of access
* @size: size of access
*/
#define __kcsan_check_write(ptr, size) \
__kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE)
/**
* kcsan_check_read - check regular read access for races
*
* @ptr: address of access
* @size: size of access
*/
#define kcsan_check_read(ptr, size) kcsan_check_access(ptr, size, 0)
/**
* kcsan_check_write - check regular write access for races
*
* @ptr: address of access
* @size: size of access
*/
#define kcsan_check_write(ptr, size) \
kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE)
/*
* Check for atomic accesses: if atomic accesses are not ignored, this simply
* aliases to kcsan_check_access(), otherwise becomes a no-op.
*/
#ifdef CONFIG_KCSAN_IGNORE_ATOMICS
#define kcsan_check_atomic_read(...) do { } while (0)
#define kcsan_check_atomic_write(...) do { } while (0)
#else
#define kcsan_check_atomic_read(ptr, size) \
kcsan_check_access(ptr, size, KCSAN_ACCESS_ATOMIC)
#define kcsan_check_atomic_write(ptr, size) \
kcsan_check_access(ptr, size, KCSAN_ACCESS_ATOMIC | KCSAN_ACCESS_WRITE)
#endif
/**
* ASSERT_EXCLUSIVE_WRITER - assert no concurrent writes to @var
*
* Assert that there are no concurrent writes to @var; other readers are
* allowed. This assertion can be used to specify properties of concurrent code,
* where violation cannot be detected as a normal data race.
*
* For example, if we only have a single writer, but multiple concurrent
* readers, to avoid data races, all these accesses must be marked; even
* concurrent marked writes racing with the single writer are bugs.
* Unfortunately, due to being marked, they are no longer data races. For cases
* like these, we can use the macro as follows:
*
* .. code-block:: c
*
* void writer(void) {
* spin_lock(&update_foo_lock);
* ASSERT_EXCLUSIVE_WRITER(shared_foo);
* WRITE_ONCE(shared_foo, ...);
* spin_unlock(&update_foo_lock);
* }
* void reader(void) {
* // update_foo_lock does not need to be held!
* ... = READ_ONCE(shared_foo);
* }
*
* Note: ASSERT_EXCLUSIVE_WRITER_SCOPED(), if applicable, performs more thorough
* checking if a clear scope exists in which no concurrent writes are expected.
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_WRITER(var) \
__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT)
/*
* Helper macros for the implementation of ASSERT_EXCLUSIVE_*_SCOPED(). @id is
* expected to be unique for the scope in which instances of kcsan_scoped_access
* are declared.
*/
#define __kcsan_scoped_name(c, suffix) __kcsan_scoped_##c##suffix
#define __ASSERT_EXCLUSIVE_SCOPED(var, type, id) \
struct kcsan_scoped_access __kcsan_scoped_name(id, _) \
__kcsan_cleanup_scoped; \
struct kcsan_scoped_access *__kcsan_scoped_name(id, _dummy_p) \
__maybe_unused = kcsan_begin_scoped_access( \
&(var), sizeof(var), KCSAN_ACCESS_SCOPED | (type), \
&__kcsan_scoped_name(id, _))
/**
* ASSERT_EXCLUSIVE_WRITER_SCOPED - assert no concurrent writes to @var in scope
*
* Scoped variant of ASSERT_EXCLUSIVE_WRITER().
*
* Assert that there are no concurrent writes to @var for the duration of the
* scope in which it is introduced. This provides a better way to fully cover
* the enclosing scope, compared to multiple ASSERT_EXCLUSIVE_WRITER(), and
* increases the likelihood for KCSAN to detect racing accesses.
*
* For example, it allows finding race-condition bugs that only occur due to
* state changes within the scope itself:
*
* .. code-block:: c
*
* void writer(void) {
* spin_lock(&update_foo_lock);
* {
* ASSERT_EXCLUSIVE_WRITER_SCOPED(shared_foo);
* WRITE_ONCE(shared_foo, 42);
* ...
* // shared_foo should still be 42 here!
* }
* spin_unlock(&update_foo_lock);
* }
* void buggy(void) {
* if (READ_ONCE(shared_foo) == 42)
* WRITE_ONCE(shared_foo, 1); // bug!
* }
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_WRITER_SCOPED(var) \
__ASSERT_EXCLUSIVE_SCOPED(var, KCSAN_ACCESS_ASSERT, __COUNTER__)
/**
* ASSERT_EXCLUSIVE_ACCESS - assert no concurrent accesses to @var
*
* Assert that there are no concurrent accesses to @var (no readers nor
* writers). This assertion can be used to specify properties of concurrent
* code, where violation cannot be detected as a normal data race.
*
* For example, where exclusive access is expected after determining no other
* users of an object are left, but the object is not actually freed. We can
* check that this property actually holds as follows:
*
* .. code-block:: c
*
* if (refcount_dec_and_test(&obj->refcnt)) {
* ASSERT_EXCLUSIVE_ACCESS(*obj);
* do_some_cleanup(obj);
* release_for_reuse(obj);
* }
*
* Note: ASSERT_EXCLUSIVE_ACCESS_SCOPED(), if applicable, performs more thorough
* checking if a clear scope exists in which no concurrent accesses are expected.
*
* Note: For cases where the object is freed, `KASAN <kasan.html>`_ is a better
* fit to detect use-after-free bugs.
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_ACCESS(var) \
__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT)
/**
* ASSERT_EXCLUSIVE_ACCESS_SCOPED - assert no concurrent accesses to @var in scope
*
* Scoped variant of ASSERT_EXCLUSIVE_ACCESS().
*
* Assert that there are no concurrent accesses to @var (no readers nor writers)
* for the entire duration of the scope in which it is introduced. This provides
* a better way to fully cover the enclosing scope, compared to multiple
* ASSERT_EXCLUSIVE_ACCESS(), and increases the likelihood for KCSAN to detect
* racing accesses.
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_ACCESS_SCOPED(var) \
__ASSERT_EXCLUSIVE_SCOPED(var, KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT, __COUNTER__)
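/*
 * Example usage (sketch, mirroring the ASSERT_EXCLUSIVE_ACCESS() example
 * above; @obj and the helper functions are illustrative):
 *
 * .. code-block:: c
 *
 *	if (refcount_dec_and_test(&obj->refcnt)) {
 *		ASSERT_EXCLUSIVE_ACCESS_SCOPED(*obj);
 *		do_some_cleanup(obj);
 *		release_for_reuse(obj);
 *	}
 */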
/**
* ASSERT_EXCLUSIVE_BITS - assert no concurrent writes to subset of bits in @var
*
* Bit-granular variant of ASSERT_EXCLUSIVE_WRITER().
*
* Assert that there are no concurrent writes to a subset of bits in @var;
* concurrent readers are permitted. This assertion captures more detailed
* bit-level properties, compared to the other (word granularity) assertions.
* Only the bits set in @mask are checked for concurrent modifications, while
* ignoring the remaining bits, i.e. concurrent writes (or reads) to ~mask bits
* are ignored.
*
* Use this for variables where some bits must not be modified concurrently,
* yet other bits are expected to be modified concurrently.
*
* For example, variables where, after initialization, some bits are read-only,
* but other bits may still be modified concurrently. A reader may wish to
* assert that this is true as follows:
*
* .. code-block:: c
*
* ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
* foo = (READ_ONCE(flags) & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
*
* Note: The access that immediately follows ASSERT_EXCLUSIVE_BITS() is assumed
* to access the masked bits only, and KCSAN optimistically assumes it is
* therefore safe, even in the presence of data races, and marking it with
* READ_ONCE() is optional from KCSAN's point-of-view. We caution, however, that
* it may still be advisable to do so, since we cannot reason about all compiler
* optimizations when it comes to bit manipulations (on the reader and writer
* side). If you are sure nothing can go wrong, we can write the above simply
* as:
*
* .. code-block:: c
*
* ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
* foo = (flags & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
*
* Another example where this may be used is when certain bits of @var may
* only be modified when holding the appropriate lock, but other bits may still
* be modified concurrently. Writers, where other bits may change concurrently,
* could use the assertion as follows:
*
* .. code-block:: c
*
* spin_lock(&foo_lock);
* ASSERT_EXCLUSIVE_BITS(flags, FOO_MASK);
* old_flags = flags;
* new_flags = (old_flags & ~FOO_MASK) | (new_foo << FOO_SHIFT);
* if (cmpxchg(&flags, old_flags, new_flags) != old_flags) { ... }
* spin_unlock(&foo_lock);
*
* @var: variable to assert on
* @mask: only check for modifications to bits set in @mask
*/
#define ASSERT_EXCLUSIVE_BITS(var, mask) \
do { \
kcsan_set_access_mask(mask); \
__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT);\
kcsan_set_access_mask(0); \
kcsan_atomic_next(1); \
} while (0)
#endif /* _LINUX_KCSAN_CHECKS_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_KCSAN_H
#define _LINUX_KCSAN_H
#include <linux/kcsan-checks.h>
#include <linux/types.h>
#ifdef CONFIG_KCSAN
/*
* Context for each thread of execution: for tasks, this is stored in
* task_struct, and interrupts access internal per-CPU storage.
*/
struct kcsan_ctx {
int disable_count; /* disable counter */
int atomic_next; /* number of following atomic ops */
/*
* We distinguish between: (a) nestable atomic regions that may contain
* other nestable regions; and (b) flat atomic regions that do not keep
* track of nesting. Both (a) and (b) are entirely independent of each
* other, and a flat region may be started in a nestable region or
* vice-versa.
*
* This is required because, for example, in the annotations for
* seqlocks, we declare seqlock writer critical sections as (a) nestable
* atomic regions and reader critical sections as (b) flat atomic
* regions, yet we have encountered cases where seqlock reader critical
* sections are contained within writer critical sections (the opposite
* may be possible, too).
*
* To support these cases, we independently track the depth of nesting
* for (a), and whether the leaf level is flat for (b).
*/
int atomic_nest_count;
bool in_flat_atomic;
/*
* Access mask for all accesses if non-zero.
*/
unsigned long access_mask;
/* List of scoped accesses. */
struct list_head scoped_accesses;
};
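/*
 * Illustrative sketch (schematic only) of how the two region types combine,
 * per the comment above:
 *
 * .. code-block:: c
 *
 *	kcsan_nestable_atomic_begin();	// atomic_nest_count == 1
 *	kcsan_nestable_atomic_begin();	// atomic_nest_count == 2, nesting ok
 *	kcsan_flat_atomic_begin();	// in_flat_atomic == true
 *	kcsan_flat_atomic_end();	// flat regions do not nest
 *	kcsan_nestable_atomic_end();	// atomic_nest_count == 1
 *	kcsan_nestable_atomic_end();	// atomic_nest_count == 0
 */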
/**
* kcsan_init - initialize KCSAN runtime
*/
void kcsan_init(void);
#else /* CONFIG_KCSAN */
static inline void kcsan_init(void) { }
#endif /* CONFIG_KCSAN */
#endif /* _LINUX_KCSAN_H */
...@@ -31,6 +31,7 @@
#include <linux/task_io_accounting.h>
#include <linux/posix-timers.h>
#include <linux/rseq.h>
#include <linux/kcsan.h>
/* task_struct member predeclarations (sorted alphabetically): */
struct audit_context;
...@@ -1197,6 +1198,9 @@ struct task_struct {
#ifdef CONFIG_KASAN
unsigned int kasan_depth;
#endif
#ifdef CONFIG_KCSAN
struct kcsan_ctx kcsan_ctx;
#endif
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* Index of current stored address in ret_stack: */
...
...@@ -37,8 +37,24 @@
#include <linux/preempt.h>
#include <linux/lockdep.h>
#include <linux/compiler.h>
#include <linux/kcsan-checks.h>
#include <asm/processor.h>
/*
* The seqlock interface does not prescribe a precise sequence of read
* begin/retry/end. For readers, typically there is a call to
* read_seqcount_begin() and read_seqcount_retry(), however, there are more
* esoteric cases which do not follow this pattern.
*
* As a consequence, we take the following best-effort approach for raw usage
* via seqcount_t under KCSAN: upon beginning a seq-reader critical section,
* pessimistically mark the next KCSAN_SEQLOCK_REGION_MAX memory accesses as
* atomics; if there is a matching read_seqcount_retry() call, no following
* memory operations are considered atomic. Usage of seqlocks via seqlock_t
* interface is not affected.
*/
#define KCSAN_SEQLOCK_REGION_MAX 1000
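/*
 * Illustrative sketch of the best-effort approach described above, for a
 * typical raw seqcount_t reader ('s', 'data' and 'shared_data' are made up):
 *
 * .. code-block:: c
 *
 *	unsigned seq;
 *	do {
 *		seq = read_seqcount_begin(&s);	// next accesses marked atomic
 *		data = shared_data;		// covered, up to
 *						// KCSAN_SEQLOCK_REGION_MAX
 *	} while (read_seqcount_retry(&s, seq)); // ends the pessimistic region
 */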
/*
* Version using sequence counter only.
* This can be used when code has its own mutex protecting the
...@@ -115,6 +131,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
cpu_relax();
goto repeat;
}
kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
return ret;
}
...@@ -131,6 +148,7 @@ static inline unsigned raw_read_seqcount(const seqcount_t *s)
{
unsigned ret = READ_ONCE(s->sequence);
smp_rmb();
kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
return ret;
}
...@@ -183,6 +201,7 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
{
unsigned ret = READ_ONCE(s->sequence);
smp_rmb();
kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
return ret & ~1;
}
...@@ -202,7 +221,8 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
*/
static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
{
kcsan_atomic_next(0);
return unlikely(READ_ONCE(s->sequence) != start);
}
/**
...@@ -225,6 +245,7 @@ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
static inline void raw_write_seqcount_begin(seqcount_t *s)
{
kcsan_nestable_atomic_begin();
s->sequence++;
smp_wmb();
}
...@@ -233,6 +254,7 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
{
smp_wmb();
s->sequence++;
kcsan_nestable_atomic_end();
}
/**
...@@ -243,6 +265,13 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
* usual consistency guarantee. It is one wmb cheaper, because we can
* collapse the two back-to-back wmb()s.
*
* Note that writes surrounding the barrier should be declared atomic (e.g.
* via WRITE_ONCE): a) to ensure the writes become visible to other threads
* atomically, avoiding compiler optimizations; b) to document which writes are
* meant to propagate to the reader critical section. This is necessary because
* neither the writes before nor the writes after the barrier are enclosed in a
* seq-writer critical section that would ensure readers are aware of ongoing
* writes.
*
* seqcount_t seq;
* bool X = true, Y = false;
*
...@@ -262,18 +291,20 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
*
* void write(void)
* {
* WRITE_ONCE(Y, true);
*
* raw_write_seqcount_barrier(seq);
*
* WRITE_ONCE(X, false);
* }
*/
static inline void raw_write_seqcount_barrier(seqcount_t *s)
{
kcsan_nestable_atomic_begin();
s->sequence++;
smp_wmb();
s->sequence++;
kcsan_nestable_atomic_end();
}
static inline int raw_read_seqcount_latch(seqcount_t *s)
...@@ -398,7 +429,9 @@ static inline void write_seqcount_end(seqcount_t *s)
static inline void write_seqcount_invalidate(seqcount_t *s)
{
smp_wmb();
kcsan_nestable_atomic_begin();
s->sequence+=2;
kcsan_nestable_atomic_end();
}
typedef struct {
...@@ -430,11 +463,21 @@ typedef struct {
*/
static inline unsigned read_seqbegin(const seqlock_t *sl)
{
unsigned ret = read_seqcount_begin(&sl->seqcount);
kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */
kcsan_flat_atomic_begin();
return ret;
}
static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
{
/*
* Assume not nested: read_seqretry() may be called multiple times when
* completing read critical section.
*/
kcsan_flat_atomic_end();
return read_seqcount_retry(&sl->seqcount, start); return read_seqcount_retry(&sl->seqcount, start);
} }
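/*
 * Sketch of typical seqlock_t reader usage, which the flat atomic region
 * above covers end-to-end ('lock', 'foo' and 'copy' are illustrative):
 *
 * .. code-block:: c
 *
 *	unsigned seq;
 *	do {
 *		seq = read_seqbegin(&lock);	// kcsan_flat_atomic_begin()
 *		copy = foo;
 *	} while (read_seqretry(&lock, seq));	// kcsan_flat_atomic_end()
 */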
...
...@@ -2,9 +2,9 @@
#ifndef __LINUX_UACCESS_H__
#define __LINUX_UACCESS_H__
#include <linux/instrumented.h>
#include <linux/sched.h>
#include <linux/thread_info.h>
#define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)
...@@ -58,7 +58,7 @@
static __always_inline __must_check unsigned long
__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
{
instrument_copy_from_user(to, from, n);
check_object_size(to, n, false);
return raw_copy_from_user(to, from, n);
}
...@@ -67,7 +67,7 @@ static __always_inline __must_check unsigned long
__copy_from_user(void *to, const void __user *from, unsigned long n)
{
might_fault();
instrument_copy_from_user(to, from, n);
check_object_size(to, n, false);
return raw_copy_from_user(to, from, n);
}
...@@ -88,7 +88,7 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
static __always_inline __must_check unsigned long
__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
{
instrument_copy_to_user(to, from, n);
check_object_size(from, n, true);
return raw_copy_to_user(to, from, n);
}
...@@ -97,7 +97,7 @@ static __always_inline __must_check unsigned long
__copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_fault();
instrument_copy_to_user(to, from, n);
check_object_size(from, n, true);
return raw_copy_to_user(to, from, n);
}
...@@ -109,7 +109,7 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
unsigned long res = n;
might_fault();
if (likely(access_ok(from, n))) {
instrument_copy_from_user(to, from, n);
res = raw_copy_from_user(to, from, n);
}
if (unlikely(res))
...@@ -127,7 +127,7 @@ _copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_fault();
if (access_ok(to, n)) {
instrument_copy_to_user(to, from, n);
n = raw_copy_to_user(to, from, n);
}
return n;
...
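/*
 * For context (paraphrased sketch, not part of this diff): the
 * instrument_copy_*() helpers from <linux/instrumented.h> fold the former
 * kasan_check_*() calls together with the corresponding KCSAN checks,
 * roughly:
 *
 *	instrument_copy_from_user(to, from, n):
 *		kasan_check_write(to, n);
 *		kcsan_check_write(to, n);
 *
 *	instrument_copy_to_user(to, from, n):
 *		kasan_check_read(from, n);
 *		kcsan_check_read(from, n);
 */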
...@@ -174,6 +174,16 @@ struct task_struct init_task
#ifdef CONFIG_KASAN
.kasan_depth = 1,
#endif
#ifdef CONFIG_KCSAN
.kcsan_ctx = {
.disable_count = 0,
.atomic_next = 0,
.atomic_nest_count = 0,
.in_flat_atomic = false,
.access_mask = 0,
.scoped_accesses = {LIST_POISON1, NULL},
},
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
.softirqs_enabled = 1,
#endif
...
...@@ -95,6 +95,7 @@
#include <linux/rodata_test.h>
#include <linux/jump_label.h>
#include <linux/mem_encrypt.h>
#include <linux/kcsan.h>
#include <asm/io.h>
#include <asm/bugs.h>
...@@ -1036,6 +1037,7 @@ asmlinkage __visible void __init start_kernel(void)
acpi_subsystem_init();
arch_post_acpi_subsys_init();
sfi_init_late();
kcsan_init();
/* Do the rest non-__init'ed, we're now alive */
arch_call_rest_init();
...
...@@ -23,6 +23,9 @@ endif
# Prevents flicker of uninteresting __do_softirq()/__local_bh_disable_ip()
# in coverage traces.
KCOV_INSTRUMENT_softirq.o := n
# Avoid KCSAN instrumentation in softirq ("No shared variables, all the data
# are CPU local" => assume no data races), to reduce overhead in interrupts.
KCSAN_SANITIZE_softirq.o = n
# These are called from save_stack_trace() on slub debug path,
# and produce insane amounts of uninteresting coverage.
KCOV_INSTRUMENT_module.o := n
...@@ -31,6 +34,7 @@ KCOV_INSTRUMENT_stacktrace.o := n
# Don't self-instrument.
KCOV_INSTRUMENT_kcov.o := n
KASAN_SANITIZE_kcov.o := n
KCSAN_SANITIZE_kcov.o := n
CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
# cond_syscall is currently not LTO compatible
...@@ -103,6 +107,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/
obj-$(CONFIG_IRQ_WORK) += irq_work.o
obj-$(CONFIG_CPU_PM) += cpu_pm.o
obj-$(CONFIG_BPF) += bpf/
obj-$(CONFIG_KCSAN) += kcsan/
obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
obj-$(CONFIG_PERF_EVENTS) += events/
...@@ -121,6 +126,7 @@ obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
obj-$(CONFIG_GCC_PLUGIN_STACKLEAK) += stackleak.o
KASAN_SANITIZE_stackleak.o := n
KCSAN_SANITIZE_stackleak.o := n
KCOV_INSTRUMENT_stackleak.o := n
$(obj)/configs.o: $(obj)/config_data.gz
...