1. 12 April 2023, 1 commit
  2. 06 April 2023, 9 commits
  3. 30 March 2023, 5 commits
  4. 23 March 2023, 10 commits
  5. 15 March 2023, 2 commits
  6. 08 March 2023, 3 commits
  7. 06 March 2023, 2 commits
    •
      cpumask: re-introduce constant-sized cpumask optimizations · 596ff4a0
      Committed by Linus Torvalds
      Commit aa47a7c2 ("lib/cpumask: deprecate nr_cpumask_bits") resulted
      in the cpumask operations potentially becoming hugely less efficient,
      because suddenly the cpumask was always considered to be variable-sized.
      
      The optimization was then later added back in a limited form by commit
      6f9c07be ("lib/cpumask: add FORCE_NR_CPUS config option"), but that
      FORCE_NR_CPUS option is not useful in a generic kernel and more of a
      special case for embedded situations with fixed hardware.
      
      Instead, just re-introduce the optimization, with some changes.
      
      Instead of depending on CPUMASK_OFFSTACK being false, and then always
      using the full constant cpumask width, this introduces three different
      cpumask "sizes":
      
       - the exact size (nr_cpumask_bits) remains identical to nr_cpu_ids.
      
         This is used for situations where we should use the exact size.
      
       - the "small" size (small_cpumask_bits) is the NR_CPUS constant if it
         fits in a single word and the bitmap operations thus end up able
         to trigger the "small_const_nbits()" optimizations.
      
         This is used for the operations that have optimized single-word
         cases that get inlined, notably the bit find and scanning functions.
      
       - the "large" size (large_cpumask_bits) is the NR_CPUS constant if it
         is a sufficiently small constant that makes simple "copy" and
         "clear" operations more efficient.
      
         This is arbitrarily set at four words or less.
      
      As an example of this situation, without this fixed-size optimization,
      cpumask_clear() will generate code like
      
              movl    nr_cpu_ids(%rip), %edx
              addq    $63, %rdx
              shrq    $3, %rdx
              andl    $-8, %edx
              callq   memset@PLT
      
      on x86-64, because it would calculate the "exact" number of longwords
      that need to be cleared.
      
      In contrast, with this patch, using an NR_CPUS of 64 (which is quite a
      reasonable value to use), the above becomes a single
      
      	movq $0,cpumask
      
      instruction instead, because rather than having to figure out exactly how
      many CPUs the system has, it just knows that the cpumask will be a
      single word and can just clear it all.
      
      Note that this does end up tightening the rules a bit from the original
      version in another way: operations that set bits in the cpumask are now
      limited to the actual nr_cpu_ids limit, whereas we used to do the
      nr_cpumask_bits thing almost everywhere in the cpumask code.
      
      But if you just clear bits, or scan for bits, we can use the simpler
      compile-time constants.
      
      In the process, remove 'cpumask_complement()' and 'for_each_cpu_not()'
      which were not useful, and which fundamentally have to be limited to
      'nr_cpu_ids'.  Better remove them now than have somebody introduce use
      of them later.
      
      Of course, on x86-64 with MAXSMP there is no sane small compile-time
      constant for the cpumask sizes, and we end up using the actual CPU bits,
      and will generate the above kind of horrors regardless.  Please don't
      use MAXSMP unless you really expect to have machines with thousands of
      cores.
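
      As a rough illustration of the size selection described above (a minimal
      sketch only, assuming the kernel's NR_CPUS and BITS_PER_LONG constants and
      the runtime nr_cpu_ids count; the exact definitions introduced by the
      patch may differ):

          #if NR_CPUS <= BITS_PER_LONG
            /* fits in one word: single-word (small_const_nbits) paths apply */
            #define small_cpumask_bits ((unsigned int)NR_CPUS)
            #define large_cpumask_bits ((unsigned int)NR_CPUS)
          #elif NR_CPUS <= 4*BITS_PER_LONG
            /* too wide for single-word tricks, but a constant-sized
               copy/clear of at most four words is still cheap */
            #define small_cpumask_bits nr_cpu_ids
            #define large_cpumask_bits ((unsigned int)NR_CPUS)
          #else
            /* e.g. MAXSMP: fall back to the runtime CPU count everywhere */
            #define small_cpumask_bits nr_cpu_ids
            #define large_cpumask_bits nr_cpu_ids
          #endif
          /* the exact size always tracks the runtime CPU count */
          #define nr_cpumask_bits nr_cpu_ids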
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      596ff4a0
    •
      Remove Intel compiler support · 95207db8
      Committed by Masahiro Yamada
      include/linux/compiler-intel.h has not been updated in the past 3 years.
      
      We often forget about the third C compiler that can build the kernel.
      
      For example, commit a0a12c3e ("asm goto: eradicate CC_HAS_ASM_GOTO")
      only mentioned GCC and Clang.
      
      init/Kconfig defines CC_IS_GCC and CC_IS_CLANG but not CC_IS_ICC,
      and nobody has reported any issue.
      
      I guess the Intel Compiler support is broken, and nobody cares
      about it.
      
      Harald Arnesen pointed out that ICC (the classic Intel C/C++ compiler) is
      deprecated:
      
          $ icc -v
          icc: remark #10441: The Intel(R) C++ Compiler Classic (ICC) is
          deprecated and will be removed from product release in the second half
          of 2023. The Intel(R) oneAPI DPC++/C++ Compiler (ICX) is the recommended
          compiler moving forward. Please transition to use this compiler. Use
          '-diag-disable=10441' to disable this message.
          icc version 2021.7.0 (gcc version 12.1.0 compatibility)
      
      Arnd Bergmann provided a link to the article, "Intel C/C++ compilers
      complete adoption of LLVM".
      
      lib/zstd/common/compiler.h and lib/zstd/compress/zstd_fast.c were kept
      untouched for better sync with https://github.com/facebook/zstd
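
      For context, the per-compiler header is chosen by a preprocessor dispatch
      roughly like the sketch below (illustrative only; the actual guards in
      include/linux/compiler_types.h may differ), and this commit removes the
      __INTEL_COMPILER branch:

          #ifdef __INTEL_COMPILER
          # include <linux/compiler-intel.h>   /* removed by this commit */
          #elif defined(__clang__)
          # include <linux/compiler-clang.h>
          #elif defined(__GNUC__)
          # include <linux/compiler-gcc.h>
          #else
          # error "Unknown compiler"
          #endif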
      
      Link: https://www.intel.com/content/www/us/en/developer/articles/technical/adoption-of-llvm-complete-icx.html
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Reviewed-by: Nathan Chancellor <nathan@kernel.org>
      Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      95207db8
  8. 03 March 2023, 8 commits
    •
      kasan, x86: don't rename memintrinsics in uninstrumented files · 4ec4190b
      Committed by Marco Elver
      Now that memcpy/memset/memmove are no longer overridden by KASAN, we can
      just use the normal symbol names in uninstrumented files.
      
      Drop the preprocessor redefinitions.
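
      The dropped redefinitions are of the kind sketched here (a hedged example;
      the exact macros and guards in the x86 headers may differ): uninstrumented
      files used to be redirected to the uninstrumented __mem*() aliases.

          #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
          /* no longer needed: mem*() are not overridden by KASAN anymore */
          #define memcpy(dst, src, len)   __memcpy(dst, src, len)
          #define memmove(dst, src, len)  __memmove(dst, src, len)
          #define memset(s, c, n)         __memset(s, c, n)
          #endif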
      
      Link: https://lkml.kernel.org/r/20230224085942.1791837-4-elver@google.com
      Fixes: 69d4c0d3 ("entry, kasan, x86: Disallow overriding mem*() functions")
      Signed-off-by: Marco Elver <elver@google.com>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Borislav Petkov (AMD) <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linux Kernel Functional Testing <lkft@linaro.org>
      Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nicolas Schier <nicolas@fjasle.eu>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4ec4190b
    •
      openrisc: fix livelock in uaccess · caa82ae7
      Committed by Al Viro
      openrisc equivalent of 26178ec1 "x86: mm: consolidate VM_FAULT_RETRY handling"
      
      If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
      end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
      to page tables.  In such a case we must *not* return to the faulting insn -
      that would repeat the entire thing without making any progress; what we need
      instead is to treat it as a failed (user) memory access.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      caa82ae7
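
      The same fix pattern applies to the nios2, microblaze, ia64, sparc, alpha
      and parisc commits that follow. A minimal, self-contained sketch of the
      shared control flow (the stubs, the fatal_signal_is_pending() helper and
      the user_mode flag below are illustrative stand-ins for the real MM and
      signal machinery, not the per-arch code itself):

          #include <stdbool.h>
          #include <stdio.h>

          enum { VM_FAULT_RETRY = 0x400 };         /* illustrative value */

          struct pt_regs { bool user_mode; };

          /* Stubs standing in for the real MM/signal machinery. */
          static int  handle_mm_fault(unsigned long addr) { (void)addr; return VM_FAULT_RETRY; }
          static bool fatal_signal_is_pending(void)       { return true; }

          static void do_page_fault(struct pt_regs *regs, unsigned long addr)
          {
                  int fault = handle_mm_fault(addr);

                  if ((fault & VM_FAULT_RETRY) && fatal_signal_is_pending()) {
                          /* Page tables were left untouched, so returning to the
                             faulting insn would just fault again forever. */
                          if (!regs->user_mode) {
                                  /* kernel-mode access (e.g. get_user()): fail the
                                     access via the exception fixup path instead */
                                  printf("kernel access failed -> exception fixup\n");
                                  return;
                          }
                          return;  /* user mode: let the fatal signal be delivered */
                  }
                  /* ... normal completion / non-fatal retry handling ... */
          }

          int main(void)
          {
                  struct pt_regs regs = { .user_mode = false };
                  do_page_fault(&regs, 0x1000);
                  return 0;
          }
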
    •
      nios2: fix livelock in uaccess · e902e508
      Committed by Al Viro
      nios2 equivalent of 26178ec1 "x86: mm: consolidate VM_FAULT_RETRY handling"
      
      If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
      end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
      to page tables.  In such a case we must *not* return to the faulting insn -
      that would repeat the entire thing without making any progress; what we need
      instead is to treat it as a failed (user) memory access.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e902e508
    •
      microblaze: fix livelock in uaccess · a1179ac7
      Committed by Al Viro
      microblaze equivalent of 26178ec1 "x86: mm: consolidate VM_FAULT_RETRY handling"
      
      If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
      end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
      to page tables.  In such a case we must *not* return to the faulting insn -
      that would repeat the entire thing without making any progress; what we need
      instead is to treat it as a failed (user) memory access.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      a1179ac7
    •
      ia64: fix livelock in uaccess · d088af1e
      Committed by Al Viro
      ia64 equivalent of 26178ec1 "x86: mm: consolidate VM_FAULT_RETRY handling"
      
      If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
      end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
      to page tables.  In such a case we must *not* return to the faulting insn -
      that would repeat the entire thing without making any progress; what we need
      instead is to treat it as a failed (user) memory access.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d088af1e
    •
      sparc: fix livelock in uaccess · 79c54c97
      Committed by Al Viro
      sparc equivalent of 26178ec1 "x86: mm: consolidate VM_FAULT_RETRY handling"
      
      If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
      end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
      to page tables.  In such a case we must *not* return to the faulting insn -
      that would repeat the entire thing without making any progress; what we need
      instead is to treat it as a failed (user) memory access.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      79c54c97
    •
      alpha: fix livelock in uaccess · dce45493
      Committed by Al Viro
      alpha equivalent of 26178ec1 "x86: mm: consolidate VM_FAULT_RETRY handling"
      
      If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
      end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
      to page tables.  In such a case we must *not* return to the faulting insn -
      that would repeat the entire thing without making any progress; what we need
      instead is to treat it as a failed (user) memory access.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      dce45493
    •
      parisc: fix livelock in uaccess · 15261678
      Committed by Al Viro
      parisc equivalent of 26178ec1 "x86: mm: consolidate VM_FAULT_RETRY handling"
      
      If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
      end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
      to page tables.  In such a case we must *not* return to the faulting insn -
      that would repeat the entire thing without making any progress; what we need
      instead is to treat it as a failed (user) memory access.
      Tested-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      15261678