1. 20 May 2021, 1 commit
  2. 22 Jan 2021, 1 commit
  3. 15 Nov 2020, 1 commit
  4. 26 Oct 2020, 1 commit
  5. 14 Oct 2020, 1 commit
  6. 01 Sep 2020, 1 commit
  7. 24 Jul 2020, 1 commit
    • compiler.h: Move instrumentation_begin()/end() to new <linux/instrumentation.h> header · d19e789f
      Ingo Molnar authored
      Linus pointed out that compiler.h - which is a key header that gets included in every
      single one of the 28,000+ kernel files during a kernel build - was bloated in:
      
        65538966: ("vmlinux.lds.h: Create section for protection against instrumentation")
      
      Linus noted:
      
       > I have pulled this, but do we really want to add this to a header file
       > that is _so_ core that it gets included for basically every single
       > file built?
       >
       > I don't even see those instrumentation_begin/end() things used
       > anywhere right now.
       >
       > It seems excessive. That 53 lines is maybe not a lot, but it pushed
       > that header file to over 12kB, and while it's mostly comments, it's
       > extra IO and parsing basically for _every_ single file compiled in the
       > kernel.
       >
       > For what appears to be absolutely zero upside right now, and I really
       > don't see why this should be in such a core header file!
      
      Move these primitives into a new header: <linux/instrumentation.h>, and include that
      header in the headers that make use of it.
      
      Unfortunately one of these headers is asm-generic/bug.h, which does get included
      in a lot of places, similarly to compiler.h. So the de-bloating effect isn't as
      good as we'd like it to be - but at least the interfaces are defined separately.
      
      No change to functionality intended.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lore.kernel.org/r/20200604071921.GA1361070@gmail.com
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
  8. 21 Jul 2020, 2 commits
    • compiler.h: Move compiletime_assert() macros into compiler_types.h · eb5c2d4b
      Will Deacon authored
      The kernel test robot reports that moving READ_ONCE() out into its own
      header breaks a W=1 build for parisc, which is relying on the definition
      of compiletime_assert() being available:
      
        | In file included from ./arch/parisc/include/generated/asm/rwonce.h:1,
        |                  from ./include/asm-generic/barrier.h:16,
        |                  from ./arch/parisc/include/asm/barrier.h:29,
        |                  from ./arch/parisc/include/asm/atomic.h:11,
        |                  from ./include/linux/atomic.h:7,
        |                  from kernel/locking/percpu-rwsem.c:2:
        | ./arch/parisc/include/asm/atomic.h: In function 'atomic_read':
        | ./include/asm-generic/rwonce.h:36:2: error: implicit declaration of function 'compiletime_assert' [-Werror=implicit-function-declaration]
        |    36 |  compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
        |       |  ^~~~~~~~~~~~~~~~~~
        | ./include/asm-generic/rwonce.h:49:2: note: in expansion of macro 'compiletime_assert_rwonce_type'
        |    49 |  compiletime_assert_rwonce_type(x);    \
        |       |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        | ./arch/parisc/include/asm/atomic.h:73:9: note: in expansion of macro 'READ_ONCE'
        |    73 |  return READ_ONCE((v)->counter);
        |       |         ^~~~~~~~~
      
      Move these macros into compiler_types.h, so that they are available to
      READ_ONCE() and friends.
      
      Link: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-July/587094.html
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h · e506ea45
      Will Deacon authored
      In preparation for allowing architectures to define their own
      implementation of the READ_ONCE() macro, move the generic
      {READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h'
      file and into a new 'rwonce.h' header under 'asm-generic'.
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
  9. 25 Jun 2020, 1 commit
    • rcu: Fixup noinstr warnings · b58e733f
      Peter Zijlstra authored
      A KCSAN build revealed that we have explicit annotations through
      atomic_*() usage; switch to arch_atomic_*() for the respective functions.
      
      vmlinux.o: warning: objtool: rcu_nmi_exit()+0x4d: call to __kcsan_check_access() leaves .noinstr.text section
      vmlinux.o: warning: objtool: rcu_dynticks_eqs_enter()+0x25: call to __kcsan_check_access() leaves .noinstr.text section
      vmlinux.o: warning: objtool: rcu_nmi_enter()+0x4f: call to __kcsan_check_access() leaves .noinstr.text section
      vmlinux.o: warning: objtool: rcu_dynticks_eqs_exit()+0x2a: call to __kcsan_check_access() leaves .noinstr.text section
      vmlinux.o: warning: objtool: __rcu_is_watching()+0x25: call to __kcsan_check_access() leaves .noinstr.text section
      
      Additionally, without the NOP in instrumentation_begin(), objtool would
      not detect the lack of the 'else instrumentation_begin();' branch in
      rcu_nmi_enter().
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
  10. 12 Jun 2020, 4 commits
  11. 05 Jun 2020, 2 commits
  12. 19 May 2020, 1 commit
  13. 15 May 2020, 1 commit
    • x86: Fix early boot crash on gcc-10, third try · a9a3ed1e
      Borislav Petkov authored
      ... or the odyssey of trying to disable the stack protector for the
      function which generates the stack canary value.
      
      The whole story started with Sergei reporting a boot crash with a kernel
      built with gcc-10:
      
        Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
        CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b3 #139
        Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
        Call Trace:
          dump_stack
          panic
          ? start_secondary
          __stack_chk_fail
          start_secondary
          secondary_startup_64
        ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
      
      This happens because gcc-10 tail-call optimizes the last function call
      in start_secondary() - cpu_startup_entry() - and thus emits a stack
      canary check which fails because the canary value changes after the
      boot_init_stack_canary() call.
      
      To fix that, the initial attempt was to mark the one function which
      generates the stack canary with:
      
        __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)
      
      however, using the optimize attribute doesn't work cumulatively
      as the attribute does not add to but rather replaces previously
      supplied optimization options - roughly all -fxxx options.
      
      The key one among them being -fno-omit-frame-pointer, thus leading to a
      missing frame pointer, which the kernel needs.
      
      The next attempt to prevent compilers from tail-call optimizing
      the last function call cpu_startup_entry(), shy of carving out
      start_secondary() into a separate compilation unit and building it with
      -fno-stack-protector, was to add an empty asm("").
      
      This current solution was short and sweet and, reportedly, is supported
      by both compilers, but we didn't get very far this time: future (LTO?)
      optimization passes could potentially eliminate this, which leads us
      to the third attempt: having an actual memory barrier there which the
      compiler cannot ignore or move around etc.
      
      That should hold for a long time, but hey we said that about the other
      two solutions too so...
      Reported-by: Sergei Trofimovich <slyfox@gentoo.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Tested-by: Kalle Valo <kvalo@codeaurora.org>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
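      The progression above can be sketched in plain C. The wrapper name below mirrors the one this commit adds, but its body is a simplified stand-in: an asm statement with a "memory" clobber rather than the in-tree memory barrier.

```c
/* Hypothetical stand-in for the in-tree helper (which expands to a real
 * memory barrier). Because the barrier must execute after the callee
 * returns, the compiler cannot turn the preceding call into a tail call,
 * so the stack canary check is still emitted before the call. */
#define prevent_tail_call_optimization() __asm__ __volatile__("" ::: "memory")

/* Made-up stand-ins for cpu_startup_entry()/start_secondary(). */
static int cpu_startup_entry_stub(int state)
{
	return state + 1;
}

static int start_secondary_stub(int state)
{
	int ret = cpu_startup_entry_stub(state); /* would otherwise be a tail call */

	prevent_tail_call_optimization();
	return ret;
}
```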
  14. 16 Apr 2020, 3 commits
    • READ_ONCE: Drop pointer qualifiers when reading from scalar types · dee081bf
      Will Deacon authored
      Passing a volatile-qualified pointer to READ_ONCE() is an absolute
      trainwreck for code generation: the use of 'typeof()' to define a
      temporary variable inside the macro means that the final evaluation in
      macro scope ends up forcing a read back from the stack. When stack
      protector is enabled (the default for arm64, at least), this causes
      the compiler to vomit up all sorts of junk.
      
      Unfortunately, dropping pointer qualifiers inside the macro poses quite
      a challenge, especially since the pointed-to type is permitted to be an
      aggregate, and this is relied upon by mm/ code accessing things like
      'pmd_t'. Based on numerous hacks and discussions on the mailing list,
      this is the best I've managed to come up with.
      
      Introduce '__unqual_scalar_typeof()' which takes an expression and, if
      the expression is an optionally qualified 8, 16, 32 or 64-bit scalar
      type, evaluates to the unqualified type. Other input types, including
      aggregates, remain unchanged. Hopefully READ_ONCE() on volatile aggregate
      pointers isn't something we do on a fast-path.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reported-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Will Deacon <will@kernel.org>
    • READ_ONCE: Enforce atomicity for {READ,WRITE}_ONCE() memory accesses · 9e343b46
      Will Deacon authored
      {READ,WRITE}_ONCE() cannot guarantee atomicity for arbitrary data sizes.
      This can be surprising to callers that might incorrectly be expecting
      atomicity for accesses to aggregate structures, although there are other
      callers where tearing is actually permissible (e.g. if they are using
      something akin to sequence locking to protect the access).
      
      Linus sayeth:
      
        | We could also look at being stricter for the normal READ/WRITE_ONCE(),
        | and require that they are
        |
        | (a) regular integer types
        |
        | (b) fit in an atomic word
        |
        | We actually did (b) for a while, until we noticed that we do it on
        | loff_t's etc and relaxed the rules. But maybe we could have a
        | "non-atomic" version of READ/WRITE_ONCE() that is used for the
        | questionable cases?
      
      The slight snag is that we also have to support 64-bit accesses on 32-bit
      architectures, as these appear to be widespread and tend to work out ok
      if either the architecture supports atomic 64-bit accesses (x86, armv7)
      or if the variable being accessed represents a virtual address and
      therefore only requires 32-bit atomicity in practice.
      
      Take a step in that direction by introducing a variant of
      'compiletime_assert_atomic_type()' and use it to check the pointer
      argument to {READ,WRITE}_ONCE(). Expose __{READ,WRITE}_ONCE() variants
      which are allowed to tear and convert the one broken caller over to the
      new macros.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Will Deacon <will@kernel.org>
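      The shape of that check can be sketched with C11 _Static_assert standing in for the kernel's compiletime_assert(); the names mirror the in-tree ones, but this is a simplified model, not the actual implementation.

```c
/* True when the type's size matches a native machine word size. */
#define __native_word(t)						\
	(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) ||	\
	 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))

/* Accept native-word sizes plus 64-bit accesses (the 32-bit caveat
 * discussed in the commit message). */
#define compiletime_assert_rwonce_type(t)				   \
	_Static_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
		       "Unsupported access size for {READ,WRITE}_ONCE().")

compiletime_assert_rwonce_type(int);		/* OK: native word */
compiletime_assert_rwonce_type(long long);	/* OK: 64-bit access */

struct too_big { long a, b; };
/* compiletime_assert_rwonce_type(struct too_big); would stop the build. */
```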
    • READ_ONCE: Simplify implementations of {READ,WRITE}_ONCE() · a5460b5e
      Will Deacon authored
      The implementations of {READ,WRITE}_ONCE() suffer from a significant
      amount of indirection and complexity due to a historic GCC bug:
      
      https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
      
      which was originally worked around by 230fa253 ("kernel: Provide
      READ_ONCE and ASSIGN_ONCE").
      
      Since GCC 4.8 is fairly vintage at this point and we emit a warning if
      we detect it during the build, return {READ,WRITE}_ONCE() to their former
      glory with an implementation that is easier to understand and, crucially,
      more amenable to optimisation. A side effect of this simplification is
      that WRITE_ONCE() no longer returns a value, but nobody seems to be
      relying on that and the new behaviour is aligned with smp_store_release().
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  15. 14 Apr 2020, 1 commit
    • kcsan: Change data_race() to no longer require marking racing accesses · d071e913
      Marco Elver authored
      Thus far, accesses marked with data_race() would still require the
      racing access to be marked in some way (be it with READ_ONCE(),
      WRITE_ONCE(), or data_race() itself), as otherwise KCSAN would still
      report a data race.  This requirement, however, seems to be unintuitive,
      and some valid use-cases demand *not* marking other accesses, as it
      might hide more serious bugs (e.g. diagnostic reads).
      
      Therefore, this commit changes data_race() to no longer require marking
      racing accesses (although it's still recommended if possible).
      
      The alternative would have been introducing another variant of
      data_race(), however, since usage of data_race() already needs to be
      carefully reasoned about, distinguishing between these cases likely adds
      more complexity in the wrong place.
      
      Link: https://lkml.kernel.org/r/20200331131002.GA30975@willie-the-truck
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Cc: Qian Cai <cai@lca.pw>
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
  16. 08 Apr 2020, 1 commit
  17. 21 Mar 2020, 2 commits
  18. 07 Jan 2020, 1 commit
    • kcsan: Add __no_kcsan function attribute · e33f9a16
      Marco Elver authored
      Since the use of -fsanitize=thread is an implementation detail of KCSAN,
      the name __no_sanitize_thread could be misleading if used widely.
      Instead, we introduce the __no_kcsan attribute which is shorter and more
      accurate in the context of KCSAN.
      
      This matches the attribute name __no_kcsan_or_inline. The use of
      __no_kcsan_or_inline itself is still required for __always_inline functions
      to retain compatibility with older compilers.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
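      A hedged sketch of such an attribute definition; the exact in-tree plumbing differs across compiler versions, so __has_attribute guards keep this stand-in portable.

```c
/* Since KCSAN piggybacks on -fsanitize=thread, opting a function out
 * maps onto the compiler's no_sanitize machinery when available;
 * otherwise the attribute expands to nothing. */
#if defined(__has_attribute)
# if __has_attribute(no_sanitize)
#  define __no_kcsan __attribute__((no_sanitize("thread")))
# endif
#endif
#ifndef __no_kcsan
# define __no_kcsan
#endif

static int demo_flag = 7;

/* An intentionally unmarked read that KCSAN should not instrument. */
static __no_kcsan int read_raw_flag(const int *flag)
{
	return *flag;
}
```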
  19. 20 Nov 2019, 1 commit
    • kcsan: Improve various small stylistic details · 5cbaefe9
      Ingo Molnar authored
      Tidy up a few bits:
      
        - Fix typos and grammar, improve wording.
      
        - Remove spurious newlines that are col80 warning artifacts where the
          resulting line-break is worse than the disease it's curing.
      
        - Use core kernel coding style to improve readability and reduce
          spurious code pattern variations.
      
        - Use better vertical alignment for structure definitions and initialization
          sequences.
      
        - Misc other small details.
      
      No change in functionality intended.
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Marco Elver <elver@google.com>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 16 Nov 2019, 2 commits
  21. 08 Sep 2019, 1 commit
  22. 09 Jul 2019, 1 commit
    • objtool: Add support for C jump tables · 87b512de
      Josh Poimboeuf authored
      Objtool doesn't know how to read C jump tables, so it has to whitelist
      functions which use them, causing missing ORC unwinder data for such
      functions, e.g. ___bpf_prog_run().
      
      C jump tables are very similar to GCC switch jump tables, which objtool
      already knows how to read.  So adding support for C jump tables is easy.
      It just needs to be able to find the tables and distinguish them from
      other data.
      
      To allow the jump tables to be found, create an __annotate_jump_table
      macro which can be used to annotate them.
      
      The annotation is done by placing the jump table in an
      .rodata..c_jump_table section.  The '.rodata' prefix ensures that the data
      will be placed in the rodata section by the vmlinux linker script.  The
      double periods are part of an existing convention which distinguishes
      kernel sections from GCC sections.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: Kairui Song <kasong@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lkml.kernel.org/r/0ba2ca30442b16b97165992381ce643dc27b3d1a.1561685471.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
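      A minimal sketch of such a C jump table, using the GCC/Clang "labels as values" extension. The annotation macro matches the one this commit adds; the dispatch function is a made-up example, not kernel code.

```c
/* Tag the table so tooling like objtool can locate it in the
 * .rodata..c_jump_table section. */
#define __annotate_jump_table __attribute__((section(".rodata..c_jump_table")))

static int run_op(int op, int a, int b)
{
	/* An array of computed-goto label addresses: a C jump table. */
	static const void * const jump_table[] __annotate_jump_table = {
		[0] = &&do_add,
		[1] = &&do_sub,
	};

	goto *jump_table[op];
do_add:
	return a + b;
do_sub:
	return a - b;
}
```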
  23. 10 May 2019, 1 commit
  24. 03 Apr 2019, 1 commit
  25. 09 Jan 2019, 1 commit
    • include/linux/compiler*.h: fix OPTIMIZER_HIDE_VAR · 3e2ffd65
      Michael S. Tsirkin authored
      Since commit 815f0ddb ("include/linux/compiler*.h: make compiler-*.h
      mutually exclusive") clang no longer reuses the OPTIMIZER_HIDE_VAR macro
      from compiler-gcc - instead it gets the version in
      include/linux/compiler.h.  Unfortunately that version doesn't actually
      prevent compiler from optimizing out the variable.
      
      Fix up by moving the macro out from compiler-gcc.h to compiler.h.
      Compilers without inline asm support will keep working,
      since it's protected by an ifdef.
      
      Also fix up comments to match reality since we are no longer overriding
      any macros.
      
      Build-tested with gcc and clang.
      
      Fixes: 815f0ddb ("include/linux/compiler*.h: make compiler-*.h mutually exclusive")
      Cc: Eli Friedman <efriedma@codeaurora.org>
      Cc: Joe Perches <joe@perches.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
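      The working definition that both compilers end up sharing is a one-liner; a sketch with a usage example (the demo function is made up):

```c
/* Launder 'var' through an empty asm: the "0" matching constraint ties
 * the input to the output register, so the value is preserved, but the
 * optimizer can no longer see through the statement or elide the
 * variable. */
#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

static int demo_hide(void)
{
	int secret = 42;

	OPTIMIZER_HIDE_VAR(secret);	/* value intact, provenance hidden */
	return secret;
}
```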
  26. 19 Dec 2018, 1 commit
  27. 06 Nov 2018, 1 commit
  28. 19 Oct 2018, 1 commit
  29. 11 Oct 2018, 1 commit
    • compiler.h: give up __compiletime_assert_fallback() · 81b45683
      Masahiro Yamada authored
      __compiletime_assert_fallback() is supposed to stop building earlier
      by using the negative-array-size method in case the compiler does not
      support "error" attribute, but has never worked like that.
      
      You can simply try:
      
          BUILD_BUG_ON(1);
      
      GCC immediately terminates the build, but Clang does not report
      anything because Clang does not support the "error" attribute now.
      It will later fail at link time, but __compiletime_assert_fallback()
      is not working at least.
      
      The root cause is commit 1d6a0d19 ("bug.h: prevent double evaluation
      of `condition' in BUILD_BUG_ON").  Prior to that commit, BUILD_BUG_ON()
      was checked by the negative-array-size method *and* the link-time trick.
      Since that commit, the negative-array-size is not effective because
      '__cond' is no longer constant.  As the comment in <linux/build_bug.h>
      says, GCC (and Clang as well) only emits the error for obvious cases.
      
      When '__cond' is a variable,
      
          ((void)sizeof(char[1 - 2 * __cond]))
      
      ... is not obvious for the compiler to know the array size is negative.
      
      Reverting that commit would break BUILD_BUG() because negative-size-array
      is evaluated before the code is optimized out.
      
      Let's give up __compiletime_assert_fallback().  This commit does not
      change the current behavior since it just rips off the useless code.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
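      The negative-array-size method in question, for reference; this is a simplified model with a made-up macro name, not the in-tree fallback.

```c
/* For an *obvious* integer constant expression, a truthy 'cond' makes
 * the array size 1 - 2*1 = -1 and compilation stops. Once 'cond'
 * involves a variable, the compiler treats the array as variably sized
 * instead of erroring, which is exactly why the fallback stopped firing
 * after BUILD_BUG_ON() began laundering the condition through '__cond'. */
#define NEGATIVE_ARRAY_CHECK(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

static int demo_fallback(void)
{
	int laundered = 0;

	NEGATIVE_ARRAY_CHECK(0);		/* constant false: size 1, fine */
	/* NEGATIVE_ARRAY_CHECK(1) here would abort the build. */
	NEGATIVE_ARRAY_CHECK(laundered);	/* no compile error: not obvious */
	return 1;
}
```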
  30. 04 Oct 2018, 1 commit
    • x86/objtool: Use asm macros to work around GCC inlining bugs · c06c4d80
      Nadav Amit authored
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      In the case of objtool the resulting borkage can be significant, since all the
      annotations of objtool are discarded during linkage and never inlined,
      yet GCC bogusly considers most functions affected by objtool annotations
      as 'too large'.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block. As a result GCC considers the inline assembly block as
      a single instruction. (Which it isn't, but that's the best we can get.)
      
      This increases the kernel size slightly:
      
            text     data     bss      dec     hex filename
        18140829 10224724 2957312 31322865 1ddf2f1 ./vmlinux before
        18140970 10225412 2957312 31323694 1ddf62e ./vmlinux after (+829)
      
      The number of static text symbols (i.e. non-inlined functions) is reduced:
      
        Before:  40321
        After:   40302 (-19)
      
      [ mingo: Rewrote the changelog. ]
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Christopher Li <sparse@chrisli.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-sparse@vger.kernel.org
      Link: http://lkml.kernel.org/r/20181003213100.189959-4-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  31. 01 Oct 2018, 1 commit