1. 12 Jun 2020, 1 commit
  2. 05 Jun 2020, 2 commits
  3. 19 May 2020, 1 commit
  4. 15 May 2020, 1 commit
    • x86: Fix early boot crash on gcc-10, third try · a9a3ed1e
      Committed by Borislav Petkov
      ... or the odyssey of trying to disable the stack protector for the
      function which generates the stack canary value.
      
      The whole story started with Sergei reporting a boot crash with a kernel
      built with gcc-10:
      
        Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
        CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b3 #139
        Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
        Call Trace:
          dump_stack
          panic
          ? start_secondary
          __stack_chk_fail
          start_secondary
          secondary_startup_64
        ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
      
      This happens because gcc-10 tail-call optimizes the last function call
      in start_secondary() - cpu_startup_entry() - and thus emits a stack
      canary check which fails because the canary value changes after the
      boot_init_stack_canary() call.
      
      To fix that, the initial attempt was to mark the one function which
      generates the stack canary with:
      
        __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)
      
      however, using the optimize attribute doesn't work cumulatively
      as the attribute does not add to but rather replaces previously
      supplied optimization options - roughly all -fxxx options.
      
      The key one among them being -fno-omit-frame-pointer, thus leading to a
      missing frame pointer - a frame pointer which the kernel needs.
      
      The next attempt to prevent compilers from tail-call optimizing
      the last function call cpu_startup_entry(), shy of carving out
      start_secondary() into a separate compilation unit and building it with
      -fno-stack-protector, was to add an empty asm("").
      
      That solution was short and sweet, and reportedly supported by both
      compilers, but we didn't get very far this time: future (LTO?)
      optimization passes could potentially eliminate this, which leads us
      to the third attempt: having an actual memory barrier there which the
      compiler cannot ignore or move around etc.
      
      That should hold for a long time, but hey we said that about the other
      two solutions too so...
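
      For illustration, the shape of that third attempt can be sketched in
      plain C below; the *_like names are stand-ins invented for this sketch,
      and the compiler barrier merely approximates the real memory barrier the
      patch places after the final call:

        #include <stdio.h>

        static unsigned long canary;

        static void init_stack_canary_like(void)
        {
                canary = 0xdeadbeefUL;  /* stands in for boot_init_stack_canary() */
        }

        static void cpu_startup_entry_like(void)
        {
                printf("secondary CPU up, canary=%#lx\n", canary);
        }

        static void start_secondary_like(void)
        {
                init_stack_canary_like();       /* canary changes mid-function */

                cpu_startup_entry_like();       /* must not become a tail call */

                /*
                 * Anything placed after the call keeps the compiler from
                 * turning it into a sibling/tail call; in start_secondary(),
                 * where the call never returns, this means the epilogue
                 * canary check is never reached with the stale value.
                 */
                asm volatile("" : : : "memory");
        }

        int main(void)
        {
                start_secondary_like();
                return 0;
        }
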
      Reported-by: Sergei Trofimovich <slyfox@gentoo.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Tested-by: Kalle Valo <kvalo@codeaurora.org>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
      a9a3ed1e
  5. 16 Apr 2020, 3 commits
    • READ_ONCE: Drop pointer qualifiers when reading from scalar types · dee081bf
      Committed by Will Deacon
      Passing a volatile-qualified pointer to READ_ONCE() is an absolute
      trainwreck for code generation: the use of 'typeof()' to define a
      temporary variable inside the macro means that the final evaluation in
      macro scope ends up forcing a read back from the stack. When stack
      protector is enabled (the default for arm64, at least), this causes
      the compiler to vomit up all sorts of junk.
      
      Unfortunately, dropping pointer qualifiers inside the macro poses quite
      a challenge, especially since the pointed-to type is permitted to be an
      aggregate, and this is relied upon by mm/ code accessing things like
      'pmd_t'. Based on numerous hacks and discussions on the mailing list,
      this is the best I've managed to come up with.
      
      Introduce '__unqual_scalar_typeof()' which takes an expression and, if
      the expression is an optionally qualified 8, 16, 32 or 64-bit scalar
      type, evaluates to the unqualified type. Other input types, including
      aggregates, remain unchanged. Hopefully READ_ONCE() on volatile aggregate
      pointers isn't something we do on a fast-path.
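
      A rough user-space sketch of the idea, assuming a gcc/clang toolchain
      with typeof, statement expressions and C11 _Generic (it mirrors the
      concept rather than the kernel's exact macro):

        #include <stdio.h>

        /*
         * Maps an optionally qualified small scalar type to its unqualified
         * counterpart; anything else (aggregates etc.) falls through unchanged.
         */
        #define __unqual_scalar_typeof_sketch(x) typeof(                \
                _Generic((x),                                            \
                         char: (char)0,                                  \
                         signed char: (signed char)0,                    \
                         unsigned char: (unsigned char)0,                \
                         short: (short)0,                                \
                         unsigned short: (unsigned short)0,              \
                         int: (int)0,                                    \
                         unsigned int: (unsigned int)0,                  \
                         long: (long)0,                                  \
                         unsigned long: (unsigned long)0,                \
                         long long: (long long)0,                        \
                         unsigned long long: (unsigned long long)0,      \
                         default: (x)))

        /*
         * The temporary is a plain int even when 'x' is volatile, so the
         * final evaluation does not force a read back from the stack.
         */
        #define READ_ONCE_SKETCH(x)                                      \
        ({                                                               \
                __unqual_scalar_typeof_sketch(x) __v =                   \
                        *(const volatile __unqual_scalar_typeof_sketch(x) *)&(x); \
                __v;                                                     \
        })

        int main(void)
        {
                volatile int counter = 42;

                printf("%d\n", READ_ONCE_SKETCH(counter));
                return 0;
        }
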
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reported-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Will Deacon <will@kernel.org>
      dee081bf
    • READ_ONCE: Enforce atomicity for {READ,WRITE}_ONCE() memory accesses · 9e343b46
      Committed by Will Deacon
      {READ,WRITE}_ONCE() cannot guarantee atomicity for arbitrary data sizes.
      This can be surprising to callers that might incorrectly be expecting
      atomicity for accesses to aggregate structures, although there are other
      callers where tearing is actually permissible (e.g. if they are using
      something akin to sequence locking to protect the access).
      
      Linus sayeth:
      
        | We could also look at being stricter for the normal READ/WRITE_ONCE(),
        | and require that they are
        |
        | (a) regular integer types
        |
        | (b) fit in an atomic word
        |
        | We actually did (b) for a while, until we noticed that we do it on
        | loff_t's etc and relaxed the rules. But maybe we could have a
        | "non-atomic" version of READ/WRITE_ONCE() that is used for the
        | questionable cases?
      
      The slight snag is that we also have to support 64-bit accesses on 32-bit
      architectures, as these appear to be widespread and tend to work out ok
      if either the architecture supports atomic 64-bit accesses (x86, armv7)
      or if the variable being accessed represents a virtual address and
      therefore only requires 32-bit atomicity in practice.
      
      Take a step in that direction by introducing a variant of
      'compiletime_assert_atomic_type()' and use it to check the pointer
      argument to {READ,WRITE}_ONCE(). Expose __{READ,WRITE}_ONCE() variants
      which are allowed to tear and convert the one broken caller over to the
      new macros.
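
      A user-space approximation of that kind of compile-time check is
      sketched below; the kernel routes this through its compiletime_assert()
      machinery, so only the size rule itself is shown here:

        #define assert_word_sized(x)                                     \
                _Static_assert(sizeof(x) == 1 || sizeof(x) == 2 ||       \
                               sizeof(x) == 4 || sizeof(x) == 8,         \
                               "unsupported access size for {READ,WRITE}_ONCE()")

        struct wide { long a, b; };

        int main(void)
        {
                long l = 0;
                struct wide w = { 0, 0 };

                assert_word_sized(l);        /* ok: 8 bytes, word-sized */
                /* assert_word_sized(w); */  /* 16 bytes: would fail to compile,
                                                pointing the caller at the
                                                tearing-allowed __*_ONCE() forms */
                (void)l; (void)w;
                return 0;
        }
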
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Will Deacon <will@kernel.org>
      9e343b46
    • READ_ONCE: Simplify implementations of {READ,WRITE}_ONCE() · a5460b5e
      Committed by Will Deacon
      The implementations of {READ,WRITE}_ONCE() suffer from a significant
      amount of indirection and complexity due to a historic GCC bug:
      
      https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
      
      which was originally worked around by 230fa253 ("kernel: Provide
      READ_ONCE and ASSIGN_ONCE").
      
      Since GCC 4.8 is fairly vintage at this point and we emit a warning if
      we detect it during the build, return {READ,WRITE}_ONCE() to their former
      glory with an implementation that is easier to understand and, crucially,
      more amenable to optimisation. A side effect of this simplification is
      that WRITE_ONCE() no longer returns a value, but nobody seems to be
      relying on that and the new behaviour is aligned with smp_store_release().
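
      A user-space approximation of the simplified shape (the real macros
      additionally strip pointer qualifiers, as in the __unqual_scalar_typeof
      patch above, and hook into instrumentation); note that the write side
      is a statement, which is why it no longer yields a value:

        #include <stdio.h>

        #define READ_ONCE_SIMPLE(x)     (*(const volatile typeof(x) *)&(x))

        /* A statement rather than an expression, hence no "return value". */
        #define WRITE_ONCE_SIMPLE(x, val)                                \
        do {                                                             \
                *(volatile typeof(x) *)&(x) = (val);                     \
        } while (0)

        int main(void)
        {
                int v = 0;

                WRITE_ONCE_SIMPLE(v, 5);
                printf("%d\n", READ_ONCE_SIMPLE(v));
                return 0;
        }
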
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      a5460b5e
  6. 14 Apr 2020, 1 commit
    • kcsan: Change data_race() to no longer require marking racing accesses · d071e913
      Committed by Marco Elver
      Thus far, accesses marked with data_race() would still require the
      racing access to be marked in some way (be it with READ_ONCE(),
      WRITE_ONCE(), or data_race() itself), as otherwise KCSAN would still
      report a data race.  This requirement, however, seems to be unintuitive,
      and some valid use-cases demand *not* marking other accesses, as it
      might hide more serious bugs (e.g. diagnostic reads).
      
      Therefore, this commit changes data_race() to no longer require marking
      racing accesses (although it's still recommended if possible).
      
      The alternative would have been introducing another variant of
      data_race(), however, since usage of data_race() already needs to be
      carefully reasoned about, distinguishing between these cases likely adds
      more complexity in the wrong place.
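
      The usage pattern this enables looks roughly like the sketch below; the
      counter and function names are invented, and data_race() is stubbed out
      so the snippet builds in user space:

        #include <stdio.h>

        /* Stand-in so the sketch builds in user space; the kernel's macro
         * additionally tells KCSAN to ignore the access. */
        #define data_race(expr) (expr)

        static unsigned long event_count;       /* written concurrently elsewhere */

        static void note_event(void)
        {
                event_count++;                  /* plain, unmarked write */
        }

        static unsigned long read_count_for_debug(void)
        {
                /* Intentionally racy diagnostic read, marked with data_race();
                 * the writer above stays unmarked, which KCSAN now accepts. */
                return data_race(event_count);
        }

        int main(void)
        {
                note_event();
                printf("%lu\n", read_count_for_debug());
                return 0;
        }
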
      
      Link: https://lkml.kernel.org/r/20200331131002.GA30975@willie-the-truck
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Cc: Qian Cai <cai@lca.pw>
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      d071e913
  7. 08 Apr 2020, 1 commit
  8. 21 Mar 2020, 2 commits
  9. 07 Jan 2020, 1 commit
    • kcsan: Add __no_kcsan function attribute · e33f9a16
      Committed by Marco Elver
      Since the use of -fsanitize=thread is an implementation detail of KCSAN,
      the name __no_sanitize_thread could be misleading if used widely.
      Instead, we introduce the __no_kcsan attribute which is shorter and more
      accurate in the context of KCSAN.
      
      This matches the attribute name __no_kcsan_or_inline. The use of
      __no_kcsan_or_inline itself is still required for __always_inline
      functions to retain compatibility with older compilers.
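
      Conceptually the new attribute boils down to something like the sketch
      below, assuming the compiler accepts GCC's no_sanitize_thread attribute;
      the kernel's real definition goes through __no_sanitize_thread and the
      per-compiler headers:

        #include <stdio.h>

        #if defined(__SANITIZE_THREAD__)
        #define __no_kcsan_sketch __attribute__((no_sanitize_thread))
        #else
        #define __no_kcsan_sketch
        #endif

        /* A function whose racy accesses should not be instrumented. */
        static __no_kcsan_sketch int read_racy_flag(const int *flag)
        {
                return *flag;
        }

        int main(void)
        {
                int flag = 1;

                printf("%d\n", read_racy_flag(&flag));
                return 0;
        }
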
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      e33f9a16
  10. 20 Nov 2019, 1 commit
    • kcsan: Improve various small stylistic details · 5cbaefe9
      Committed by Ingo Molnar
      Tidy up a few bits:
      
        - Fix typos and grammar, improve wording.
      
        - Remove spurious newlines that are col80 warning artifacts where the
          resulting line-break is worse than the disease it's curing.
      
        - Use core kernel coding style to improve readability and reduce
          spurious code pattern variations.
      
        - Use better vertical alignment for structure definitions and initialization
          sequences.
      
        - Misc other small details.
      
      No change in functionality intended.
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Marco Elver <elver@google.com>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5cbaefe9
  11. 16 Nov 2019, 2 commits
  12. 08 Sep 2019, 1 commit
  13. 09 Jul 2019, 1 commit
    • objtool: Add support for C jump tables · 87b512de
      Committed by Josh Poimboeuf
      Objtool doesn't know how to read C jump tables, so it has to whitelist
      functions which use them, causing missing ORC unwinder data for such
      functions, e.g. ___bpf_prog_run().
      
      C jump tables are very similar to GCC switch jump tables, which objtool
      already knows how to read.  So adding support for C jump tables is easy.
      It just needs to be able to find the tables and distinguish them from
      other data.
      
      To allow the jump tables to be found, create an __annotate_jump_table
      macro which can be used to annotate them.
      
      The annotation is done by placing the jump table in an
      .rodata..c_jump_table section.  The '.rodata' prefix ensures that the data
      will be placed in the rodata section by the vmlinux linker script.  The
      double periods are part of an existing convention which distinguishes
      kernel sections from GCC sections.
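
      A small user-space illustration of the annotation and the computed-goto
      table it marks is shown below (gcc/clang extensions; the section name
      follows the convention above, but the macro is a stand-in, and the
      absolute label addresses may require building with -no-pie):

        #include <stdio.h>

        /* Build with: gcc -O2 -no-pie jump_table.c */
        #define __annotate_jump_table_sketch \
                __attribute__((section(".rodata..c_jump_table")))

        static int interp(int op)
        {
                /* A C jump table: an array of label addresses, as used by
                 * computed-goto interpreters such as ___bpf_prog_run(). */
                static const void * const jumptable[] __annotate_jump_table_sketch = {
                        [0] = &&do_add,
                        [1] = &&do_exit,
                };
                int acc = 0;

                goto *jumptable[op];
        do_add:
                acc += 1;
                goto *jumptable[1];
        do_exit:
                return acc;
        }

        int main(void)
        {
                printf("%d\n", interp(0));      /* do_add, then do_exit: prints 1 */
                return 0;
        }
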
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: Kairui Song <kasong@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lkml.kernel.org/r/0ba2ca30442b16b97165992381ce643dc27b3d1a.1561685471.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      87b512de
  14. 10 May 2019, 1 commit
  15. 03 Apr 2019, 1 commit
  16. 09 Jan 2019, 1 commit
    • include/linux/compiler*.h: fix OPTIMIZER_HIDE_VAR · 3e2ffd65
      Committed by Michael S. Tsirkin
      Since commit 815f0ddb ("include/linux/compiler*.h: make compiler-*.h
      mutually exclusive") clang no longer reuses the OPTIMIZER_HIDE_VAR macro
      from compiler-gcc - instead it gets the version in
      include/linux/compiler.h.  Unfortunately that version doesn't actually
      prevent the compiler from optimizing out the variable.
      
      Fix up by moving the macro out from compiler-gcc.h to compiler.h.
      Compilers without inline asm support will keep working
      since it's protected by an ifdef.
      
      Also fix up comments to match reality since we are no longer overriding
      any macros.
      
      Build-tested with gcc and clang.
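
      The underlying trick can be shown in a few lines; this is a sketch of
      the empty-asm approach for gcc/clang, and the constraint spelling is
      illustrative rather than necessarily the exact one used in compiler.h:

        #include <stdio.h>

        /*
         * An empty asm taking the variable as an in/out register operand:
         * no instructions are emitted, but the optimizer can no longer see
         * through the value, so it is neither folded nor discarded.
         */
        #define OPTIMIZER_HIDE_VAR_SKETCH(var)  __asm__ ("" : "+r" (var))

        int main(void)
        {
                int x = 2;

                OPTIMIZER_HIDE_VAR_SKETCH(x);
                if (x == 2)     /* true at run time, but not provable at compile time */
                        printf("still 2\n");
                return 0;
        }
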
      
      Fixes: 815f0ddb ("include/linux/compiler*.h: make compiler-*.h mutually exclusive")
      Cc: Eli Friedman <efriedma@codeaurora.org>
      Cc: Joe Perches <joe@perches.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      3e2ffd65
  17. 19 Dec 2018, 1 commit
  18. 06 Nov 2018, 1 commit
  19. 19 Oct 2018, 1 commit
  20. 11 Oct 2018, 1 commit
    • compiler.h: give up __compiletime_assert_fallback() · 81b45683
      Committed by Masahiro Yamada
      __compiletime_assert_fallback() is supposed to stop building earlier
      by using the negative-array-size method in case the compiler does not
      support "error" attribute, but has never worked like that.
      
      You can simply try:
      
          BUILD_BUG_ON(1);
      
      GCC immediately terminates the build, but Clang does not report
      anything because Clang does not support the "error" attribute yet.
      The build will later fail at link time, but in any case
      __compiletime_assert_fallback() is not doing its job.
      
      The root cause is commit 1d6a0d19 ("bug.h: prevent double evaluation
      of `condition' in BUILD_BUG_ON").  Prior to that commit, BUILD_BUG_ON()
      was checked by the negative-array-size method *and* the link-time trick.
      Since that commit, the negative-array-size is not effective because
      '__cond' is no longer constant.  As the comment in <linux/build_bug.h>
      says, GCC (and Clang as well) only emits the error for obvious cases.
      
      When '__cond' is a variable,
      
          ((void)sizeof(char[1 - 2 * __cond]))
      
      ... is not obvious for the compiler to know the array size is negative.
      
      Reverting that commit would break BUILD_BUG() because negative-size-array
      is evaluated before the code is optimized out.
      
      Let's give up __compiletime_assert_fallback().  This commit does not
      change the current behavior since it just rips off the useless code.
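
      The behaviour described above is easy to reproduce in isolation with a
      minimal user-space illustration (not kernel code):

        #define NEG_ARRAY_CHECK(cond)   ((void)sizeof(char[1 - 2 * !!(cond)]))

        int main(void)
        {
                int runtime_cond = 1;

                NEG_ARRAY_CHECK(0);             /* fine: array size is 1 */
                /* NEG_ARRAY_CHECK(1); */       /* obvious constant: compile error */
                /* NEG_ARRAY_CHECK(runtime_cond); */
                /* non-constant: the array silently becomes a VLA and the
                 * compiler reports nothing - the failure mode described above */
                (void)runtime_cond;
                return 0;
        }
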
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      81b45683
  21. 04 Oct 2018, 1 commit
    • x86/objtool: Use asm macros to work around GCC inlining bugs · c06c4d80
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      In the case of objtool the resulting borkage can be significant, since all the
      annotations of objtool are discarded during linkage and never inlined,
      yet GCC bogusly considers most functions affected by objtool annotations
      as 'too large'.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block. As a result GCC considers the inline assembly block as
      a single instruction. (Which it isn't, but that's the best we can get.)
      
      This increases the kernel size slightly:
      
            text     data     bss      dec     hex filename
        18140829 10224724 2957312 31322865 1ddf2f1 ./vmlinux before
        18140970 10225412 2957312 31323694 1ddf62e ./vmlinux after (+829)
      
      The number of static text symbols (i.e. non-inlined functions) is reduced:
      
        Before:  40321
        After:   40302 (-19)
      
      [ mingo: Rewrote the changelog. ]
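
      The pattern can be sketched outside the kernel as below; the macro name
      and section are invented for this example, and -fno-toplevel-reorder is
      used so the top-level asm is guaranteed to be emitted ahead of its users:

        /* Build with: gcc -O2 -fno-toplevel-reorder asm_macro.c */

        /* The multi-line annotation body is defined once as an assembler macro... */
        asm(".macro ANNOTATE_SKETCH\n\t"
            ".pushsection .discard.annotate_sketch\n\t"
            ".long 42\n\t"
            ".popsection\n"
            ".endm");

        /*
         * ...and each inline asm site merely invokes it, so GCC's inline-cost
         * heuristic sees one short statement instead of the full body.
         */
        static inline void annotated_op(void)
        {
                asm volatile("ANNOTATE_SKETCH");
        }

        int main(void)
        {
                annotated_op();
                return 0;
        }
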
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Christopher Li <sparse@chrisli.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-sparse@vger.kernel.org
      Link: http://lkml.kernel.org/r/20181003213100.189959-4-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c06c4d80
  22. 01 Oct 2018, 6 commits
  23. 23 Aug 2018, 2 commits
    • module: use relative references for __ksymtab entries · 7290d580
      Committed by Ard Biesheuvel
      An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab entries,
      each consisting of two 64-bit fields containing absolute references, to
      the symbol itself and to a char array containing its name, respectively.
      
      When we build the same configuration with KASLR enabled, we end up with an
      additional ~192 KB of relocations in the .init section, i.e., one 24 byte
      entry for each absolute reference, which all need to be processed at boot
      time.
      
      Given how the struct kernel_symbol that describes each entry is completely
      local to module.c (except for the references emitted by EXPORT_SYMBOL()
      itself), we can easily modify it to contain two 32-bit relative references
      instead.  This reduces the size of the __ksymtab section by 50% for all
      64-bit architectures, and gets rid of the runtime relocations entirely for
      architectures implementing KASLR, either via standard PIE linking (arm64)
      or using custom host tools (x86).
      
      Note that the binary search involving __ksymtab contents relies on each
      section being sorted by symbol name.  This is implemented based on the
      input section names, not the names in the ksymtab entries, so this patch
      does not interfere with that.
      
      Given that the use of place-relative relocations requires support both in
      the toolchain and in the module loader, we cannot enable this feature for
      all architectures.  So make it dependent on whether
      CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.
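
      The space saving comes from the classic place-relative reference trick,
      illustrated below in user space with invented names; the kernel performs
      the subtraction at build time via the assembler/linker rather than at
      run time:

        #include <stdint.h>
        #include <stdio.h>

        struct rel_ref {
                int32_t offset;         /* target address minus the field's own address */
        };

        static void rel_set(struct rel_ref *r, const void *target)
        {
                r->offset = (int32_t)((intptr_t)target - (intptr_t)r);
        }

        static const void *rel_get(const struct rel_ref *r)
        {
                return (const char *)r + r->offset;
        }

        /* Both objects live in the same loaded image, so the distance fits
         * in 32 bits - the same property the kernel relies on. */
        static const char sym_name[] = "example_symbol";
        static struct rel_ref name_ref;

        int main(void)
        {
                rel_set(&name_ref, sym_name);
                printf("%s\n", (const char *)rel_get(&name_ref));
                return 0;
        }
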
      
      Link: http://lkml.kernel.org/r/20180704083651.24360-4-ard.biesheuvel@linaro.org
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Jessica Yu <jeyu@kernel.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morris <james.morris@microsoft.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: "Serge E. Hallyn" <serge@hallyn.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7290d580
    • linux/compiler.h: don't use bool · 20358399
      Committed by Rasmus Villemoes
      Apparently, it's possible for a non-trivial TU to include a few
      headers, including linux/build_bug.h, without ending up with
      linux/types.h.  So the 0day bot sent me
      
      config: um-x86_64_defconfig (attached as .config)
      
      >> include/linux/compiler.h:316:3: error: unknown type name 'bool'; did you mean '_Bool'?
            bool __cond = !(condition);    \
      
      for something I'm working on.
      
      Rather than contributing to the #include madness and including
      linux/types.h from compiler.h, just use int.
      
      Link: http://lkml.kernel.org/r/20180817101036.20969-1-linux@rasmusvillemoes.dk
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Christopher Li <sparse@chrisli.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20358399
  24. 05 Jun 2018, 1 commit
  25. 22 Feb 2018, 1 commit
    • bug.h: work around GCC PR82365 in BUG() · 173a3efd
      Committed by Arnd Bergmann
      Looking at functions with large stack frames across all architectures
      led me to discover that BUG() suffers from the same problem as
      fortify_panic(), which I've already added a workaround for.
      
      In short, variables that go out of scope by calling a noreturn function
      or __builtin_unreachable() keep using stack space in functions
      afterwards.
      
      A workaround that was identified is to insert an empty assembler
      statement just before calling the function that doesn't return.  I'm
      adding a macro "barrier_before_unreachable()" to document this, and
      insert calls to that in all instances of BUG() that currently suffer
      from this problem.
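
      The workaround boils down to the shape sketched below; BUG()/panic()
      are replaced by an invented die() helper and the barrier is spelled out
      directly:

        #include <stdio.h>
        #include <stdlib.h>

        /* The empty asm keeps GCC from charging the dying path's locals
         * against the surrounding function's frame (the PR82365 symptom). */
        #define barrier_before_unreachable_sketch()     asm volatile("")

        static _Noreturn void die(const char *msg)
        {
                fprintf(stderr, "%s\n", msg);
                exit(1);
        }

        #define MY_BUG()                                        \
        do {                                                    \
                barrier_before_unreachable_sketch();            \
                die("BUG");                                     \
        } while (0)

        int main(void)
        {
                int broken = 0;

                if (broken)
                        MY_BUG();
                return 0;
        }
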
      
      The files that saw the largest change from this had these frame sizes
      before, and much less with my patch:
      
        fs/ext4/inode.c:82:1: warning: the frame size of 1672 bytes is larger than 800 bytes [-Wframe-larger-than=]
        fs/ext4/namei.c:434:1: warning: the frame size of 904 bytes is larger than 800 bytes [-Wframe-larger-than=]
        fs/ext4/super.c:2279:1: warning: the frame size of 1160 bytes is larger than 800 bytes [-Wframe-larger-than=]
        fs/ext4/xattr.c:146:1: warning: the frame size of 1168 bytes is larger than 800 bytes [-Wframe-larger-than=]
        fs/f2fs/inode.c:152:1: warning: the frame size of 1424 bytes is larger than 800 bytes [-Wframe-larger-than=]
        net/netfilter/ipvs/ip_vs_core.c:1195:1: warning: the frame size of 1068 bytes is larger than 800 bytes [-Wframe-larger-than=]
        net/netfilter/ipvs/ip_vs_core.c:395:1: warning: the frame size of 1084 bytes is larger than 800 bytes [-Wframe-larger-than=]
        net/netfilter/ipvs/ip_vs_ftp.c:298:1: warning: the frame size of 928 bytes is larger than 800 bytes [-Wframe-larger-than=]
        net/netfilter/ipvs/ip_vs_ftp.c:418:1: warning: the frame size of 908 bytes is larger than 800 bytes [-Wframe-larger-than=]
        net/netfilter/ipvs/ip_vs_lblcr.c:718:1: warning: the frame size of 960 bytes is larger than 800 bytes [-Wframe-larger-than=]
        drivers/net/xen-netback/netback.c:1500:1: warning: the frame size of 1088 bytes is larger than 800 bytes [-Wframe-larger-than=]
      
      In case of ARC and CRIS, it turns out that the BUG() implementation
      actually does return (or at least the compiler thinks it does),
      resulting in lots of warnings about uninitialized variable use and
      leaving noreturn functions, such as:
      
        block/cfq-iosched.c: In function 'cfq_async_queue_prio':
        block/cfq-iosched.c:3804:1: error: control reaches end of non-void function [-Werror=return-type]
        include/linux/dmaengine.h: In function 'dma_maxpq':
        include/linux/dmaengine.h:1123:1: error: control reaches end of non-void function [-Werror=return-type]
      
      This makes them call __builtin_trap() instead, which should normally
      dump the stack and kill the current process, like some of the other
      architectures already do.
      
      I tried adding barrier_before_unreachable() to panic() and
      fortify_panic() as well, but that had very little effect, so I'm not
      submitting that patch.
      
      Vineet said:
      
      : For ARC, it is double win.
      :
      : 1. Fixes 3 -Wreturn-type warnings
      :
      : | ../net/core/ethtool.c:311:1: warning: control reaches end of non-void function
      : [-Wreturn-type]
      : | ../kernel/sched/core.c:3246:1: warning: control reaches end of non-void function
      : [-Wreturn-type]
      : | ../include/linux/sunrpc/svc_xprt.h:180:1: warning: control reaches end of
      : non-void function [-Wreturn-type]
      :
      : 2.  bloat-o-meter reports code size improvements as gcc elides the
      :    generated code for stack return.
      
      Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82365
      Link: http://lkml.kernel.org/r/20171219114112.939391-1-arnd@arndb.de
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>	[arch/arc]
      Tested-by: Vineet Gupta <vgupta@synopsys.com>	[arch/arc]
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Christopher Li <sparse@chrisli.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      173a3efd
  26. 08 Feb 2018, 1 commit
  27. 02 Feb 2018, 2 commits
  28. 17 Dec 2017, 1 commit