1. 27 Feb 2021 (1 commit)
  2. 24 Feb 2021 (2 commits)
  3. 17 Feb 2021 (1 commit)
  4. 16 Feb 2021 (1 commit)
  5. 11 Feb 2021 (2 commits)
  6. 09 Feb 2021 (1 commit)
  7. 08 Feb 2021 (1 commit)
  8. 06 Feb 2021 (1 commit)
  9. 29 Jan 2021 (1 commit)
  10. 06 Jan 2021 (3 commits)
    • Kconfig: regularize selection of CONFIG_BINFMT_ELF · 41026c34
      Al Viro authored
      With mips converted to use fs/compat_binfmt_elf.c, there's no
      need to keep selects of that thing all over arch/* - we can simply
      turn it into def_bool y if COMPAT && BINFMT_ELF (in fs/Kconfig.binfmt)
      and get rid of all the selects.
      
      Several architectures got those selects wrong (e.g. you could
      end up with sparc64 sans BINFMT_ELF, with a select violating
      dependencies, etc.)
      
      Randy Dunlap has spotted some of those; IMO this is simpler than
      his fix, but it depends upon the stuff that would need to be
      backported, so we might end up using his variant for -stable.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
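
      As a sketch of the def_bool pattern this describes (the exact entry
      in fs/Kconfig.binfmt may differ slightly):

          config COMPAT_BINFMT_ELF
                  def_bool y
                  depends on COMPAT && BINFMT_ELF

      Any architecture that enables both COMPAT and BINFMT_ELF now gets
      compat ELF support automatically, so no per-arch select can violate
      the dependencies.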
    • x32: make X32, !IA32_EMULATION setups able to execute x32 binaries · 85f2ada7
      Al Viro authored
      It's really trivial - the only wrinkle is making sure that the
      compiler knows that the ia32-related side of COMPAT_ARCH_DLINFO
      is dead code on such configs (we don't get there without
      having passed compat_elf_check_arch(), and on such configs
      that'll fail for an ia32 binary).
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
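
      A minimal sketch of the compile-time dead-code pattern involved
      (is_ia32_binary() and the two setup helpers are hypothetical names,
      not the actual COMPAT_ARCH_DLINFO internals):

          /* With CONFIG_IA32_EMULATION off, compat_elf_check_arch() has
           * already rejected ia32 binaries, so the first branch below is
           * provably dead and the compiler eliminates it entirely. */
          if (IS_ENABLED(CONFIG_IA32_EMULATION) && is_ia32_binary(ehdr))
                  setup_ia32_dlinfo();    /* dead on X32-only configs */
          else
                  setup_x32_dlinfo();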
    • [amd64] clean PRSTATUS_SIZE/SET_PR_FPVALID up properly · 7facdc42
      Al Viro authored
      To get rid of the hardcoded size/offset in those macros we need to have
      a definition of the i386 variant of struct elf_prstatus.  However, we
      can't do that in asm/compat.h - the types needed for that are not there,
      and adding an include of asm/user32.h into asm/compat.h would cause a lot
      of mess.
      
      That could be conveniently done in elfcore-compat.h, but currently there
      is nowhere to put the arch-dependent parts of it - there is no
      asm/elfcore-compat.h.  So we introduce a new file (asm/elfcore-compat.h,
      present on architectures that have CONFIG_ARCH_HAS_ELFCORE_COMPAT set,
      currently only on x86), have it pulled in by linux/elfcore-compat.h, and
      move the definitions there.
      
      As a side benefit, we don't need to worry about accidental inclusion of
      that file into binfmt_elf.c itself, so we don't need the dance with
      COMPAT_PRSTATUS_SIZE, etc. - only fs/compat_binfmt_elf.c will see
      that header.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
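
      A sketch of the header layering this sets up (guard and file names as
      in the commit message; the arch header's contents are x86-specific):

          /* include/linux/elfcore-compat.h */
          #ifdef CONFIG_ARCH_HAS_ELFCORE_COMPAT
          #include <asm/elfcore-compat.h>  /* i386 struct elf_prstatus etc. */
          #endif

      Only fs/compat_binfmt_elf.c includes linux/elfcore-compat.h, so
      binfmt_elf.c itself can never pick up the compat definitions by
      accident.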
  11. 28 Dec 2020 (1 commit)
  12. 16 Dec 2020 (2 commits)
    • arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC · 5d6ad668
      Mike Rapoport authored
      The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must
      never fail.  With this assumption it wouldn't be safe to allow general
      usage of this function.
      
      Moreover, some architectures that implement __kernel_map_pages() have this
      function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap
      pages when page allocation debugging is disabled at runtime.
      
      As all the users of __kernel_map_pages() were converted to use
      debug_pagealloc_map_pages(), it is safe to make it available only when
      DEBUG_PAGEALLOC is set.
      
      Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
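
      A sketch of the wrapper pattern this relies on (close to, though not
      necessarily identical to, the include/linux/mm.h helpers):

          #ifdef CONFIG_DEBUG_PAGEALLOC
          static inline void debug_pagealloc_map_pages(struct page *page,
                                                       int numpages)
          {
                  /* touch the direct map only when debugging is enabled
                   * at runtime as well as at compile time */
                  if (debug_pagealloc_enabled_static())
                          __kernel_map_pages(page, numpages, 1);
          }
          #else
          static inline void debug_pagealloc_map_pages(struct page *page,
                                                       int numpages) {}
          #endif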
    • x86: mremap speedup - Enable HAVE_MOVE_PUD · be37c98d
      Kalesh Singh authored
      HAVE_MOVE_PUD enables remapping pages at the PUD level if both the
      source and destination addresses are PUD-aligned.
      
      With HAVE_MOVE_PUD enabled, there is approximately a 13x performance
      improvement on x86 (see the data below).
      
      ------- Test Results ---------
      
      The following results were obtained using a 5.4 kernel, by remapping
      a PUD-aligned, 1GB sized region to a PUD-aligned destination.
      The results from 10 iterations of the test are given below:
      
      Total mremap times for 1GB data on x86. All times are in nanoseconds.
      
        Control        HAVE_MOVE_PUD
      
        180394         15089
        235728         14056
        238931         25741
        187330         13838
        241742         14187
        177925         14778
        182758         14728
        160872         14418
        205813         15107
        245722         13998
      
        205721.5       15594    <-- Mean time in nanoseconds
      
      A 1GB mremap completion time drops from ~205 microseconds
      to ~15 microseconds on x86 (a ~13x speedup).
      
      Link: https://lkml.kernel.org/r/20201014005320.2233162-6-kaleshsingh@google.com
      Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Brauner <christian.brauner@ubuntu.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Gavin Shan <gshan@redhat.com>
      Cc: Hassan Naveed <hnaveed@wavecomp.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Krzysztof Kozlowski <krzk@kernel.org>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Minchan Kim <minchan@google.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: Sami Tolvanen <samitolvanen@google.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Cc: SeongJae Park <sjpark@amazon.de>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
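
      A minimal sketch of the alignment condition that gates the fast path
      (constants for x86-64 with 4K pages; can_move_pud() is an illustrative
      helper, not the kernel's exact check):

          #include <stdbool.h>
          #include <stdint.h>

          #define PUD_SIZE (1ULL << 30)  /* one PUD entry: 1 GiB on x86-64 */

          /* the PUD-level move applies only when source, destination and
           * length are all PUD-aligned; otherwise mremap falls back to
           * moving lower-level page tables (or individual pages) */
          static bool can_move_pud(uint64_t old_addr, uint64_t new_addr,
                                   uint64_t len)
          {
                  return ((old_addr | new_addr | len) & (PUD_SIZE - 1)) == 0;
          }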
  13. 01 Dec 2020 (1 commit)
  14. 24 Nov 2020 (1 commit)
  15. 19 Nov 2020 (1 commit)
  16. 17 Nov 2020 (1 commit)
  17. 14 Nov 2020 (1 commit)
    • ftrace/x86: Allow for arguments to be passed in to ftrace_regs by default · 02a474ca
      Steven Rostedt (VMware) authored
      Currently, the only way to get access to the registers of a function via
      an ftrace callback is to set the "FL_SAVE_REGS" bit in the ftrace_ops.
      But as this saves all regs as if a breakpoint were to trigger (for use
      with kprobes), it is expensive.
      
      The regs are already saved on the stack for the default ftrace callbacks,
      as that is required; otherwise a function being traced would get the
      wrong arguments and possibly crash.  And on x86, the arguments are
      already stored where they would be in a pt_regs structure, so the same
      code can serve both the regs version of a callback and the default one;
      it makes sense to always pass that information to all functions.
      
      If an architecture does this (as x86_64 now does), it should set
      HAVE_DYNAMIC_FTRACE_WITH_ARGS, and this will tell the generic code that
      it can have access to arguments without having to set the flags.
      
      This also includes having the stack pointer saved, which could be used
      for accessing arguments on the stack, as well as letting the function
      graph tracer work without its own trampoline!
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
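
      A sketch of a callback under this scheme (assuming the ftrace_get_regs()
      accessor introduced in the same patch series):

          static void my_callback(unsigned long ip, unsigned long parent_ip,
                                  struct ftrace_ops *op,
                                  struct ftrace_regs *fregs)
          {
                  /* NULL unless FL_SAVE_REGS was set on the ftrace_ops */
                  struct pt_regs *regs = ftrace_get_regs(fregs);

                  if (regs) {
                          /* full register state, as a kprobe would see */
                  } else {
                          /* only the argument registers and stack pointer
                           * are valid - but no expensive full save happened */
                  }
          }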
  18. 07 Nov 2020 (1 commit)
  19. 31 Oct 2020 (1 commit)
    • timekeeping: default GENERIC_CLOCKEVENTS to enabled · 0774a6ed
      Arnd Bergmann authored
      Almost all machines use GENERIC_CLOCKEVENTS, so it feels wrong to
      require each one to select that symbol manually.
      
      Instead, enable it whenever CONFIG_LEGACY_TIMER_TICK is disabled as
      a simplification. It should be possible to select both
      GENERIC_CLOCKEVENTS and LEGACY_TIMER_TICK from an architecture now
      and decide at runtime between the two.
      
      For the clockevents arch-support.txt file, this means that additional
      architectures are marked as TODO when they have at least one machine
      that still uses LEGACY_TIMER_TICK, rather than being marked 'ok' when
      at least one machine has been converted. This means that both m68k and
      arm (for riscpc) revert to TODO.
      
      At this point, we could just always enable CONFIG_GENERIC_CLOCKEVENTS
      rather than leaving it off when not needed. I built an m68k
      defconfig kernel (using gcc-10.1.0) and found that this would add
      around 5.5KB in kernel image size:
      
         text	   data	    bss	    dec	    hex	filename
      3861936	1092236	 196656	5150828	 4e986c	obj-m68k/vmlinux-no-clockevent
      3866201	1093832	 196184	5156217	 4ead79	obj-m68k/vmlinux-clockevent
      
      On Arm (MACH_RPC), that difference appears to be twice as large,
      around 11KB on top of a 6MB vmlinux.
      Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
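
      A sketch of the resulting inversion in kernel/time/Kconfig (assuming
      the symbol simply defaults on whenever the legacy tick is not used):

          config GENERIC_CLOCKEVENTS
                  def_bool !LEGACY_TIMER_TICK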
  20. 09 Oct 2020 (1 commit)
  21. 06 Oct 2020 (1 commit)
    • x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}() · ec6347bb
      Dan Williams authored
      In reaction to a proposal to introduce a memcpy_mcsafe_fast()
      implementation, Linus points out that memcpy_mcsafe() is poorly named
      relative to communicating the scope of the interface - specifically,
      what addresses are valid to pass as source and destination, and what
      faults / exceptions are handled.
      
      Of particular concern is that even though x86 might be able to handle
      the semantics of copy_mc_to_user() with its common copy_user_generic()
      implementation, other archs likely need / want an explicit path for this
      case:
      
        On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
        >
        > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <dan.j.williams@intel.com> wrote:
        > >
        > > However now I see that copy_user_generic() works for the wrong reason.
        > > It works because the exception on the source address due to poison
        > > looks no different than a write fault on the user address to the
        > > caller, it's still just a short copy. So it makes copy_to_user() work
        > > for the wrong reason relative to the name.
        >
        > Right.
        >
        > And it won't work that way on other architectures. On x86, we have a
        > generic function that can take faults on either side, and we use it
        > for both cases (and for the "in_user" case too), but that's an
        > artifact of the architecture oddity.
        >
        > In fact, it's probably wrong even on x86 - because it can hide bugs -
        > but writing those things is painful enough that everybody prefers
        > having just one function.
      
      Replace a single top-level memcpy_mcsafe() with either
      copy_mc_to_user() or copy_mc_to_kernel().
      
      Introduce an x86 copy_mc_fragile() name as the rename for the
      low-level x86 implementation formerly named memcpy_mcsafe(). It is used
      as the slow / careful backend that is supplanted by a fast
      copy_mc_generic() in a follow-on patch.
      
      One side-effect of this reorganization is that separating copy_mc_64.S
      to its own file means that perf no longer needs to track dependencies
      for its memcpy_64.S benchmarks.
      
       [ bp: Massage a bit. ]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: <stable@vger.kernel.org>
      Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
      Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
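
      A sketch of a caller of the renamed interface (the return-value
      convention mirrors copy_to_user(): bytes left uncopied; the buffer
      setup around it is illustrative):

          /* copy from a source that may contain poisoned (e.g. pmem)
           * cachelines into a kernel buffer; a short copy means a machine
           * check or fault interrupted the transfer */
          unsigned long rem = copy_mc_to_kernel(dst, src, len);
          if (rem)
                  return -EIO;    /* rem bytes were not copied */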
  22. 16 Sep 2020 (2 commits)
  23. 09 Sep 2020 (2 commits)
  24. 08 Sep 2020 (1 commit)
  25. 01 Sep 2020 (2 commits)
  26. 06 Aug 2020 (1 commit)
  27. 31 Jul 2020 (1 commit)
  28. 24 Jul 2020 (1 commit)
  29. 19 Jul 2020 (1 commit)
  30. 05 Jul 2020 (1 commit)
  31. 18 Jun 2020 (1 commit)
  32. 15 Jun 2020 (1 commit)