1. 23 December 2020, 7 commits
  2. 21 December 2020, 1 commit
  3. 20 December 2020, 1 commit
  4. 17 December 2020, 2 commits
    • arm64: Work around broken GCC 4.9 handling of "S" constraint · 9fd339a4
      Marc Zyngier authored
      GCC 4.9 seems to have a problem with the "S" asm constraint
      when the symbol lives in the same compilation unit, and pretends
      the constraint is impossible:
      
      $ cat x.c
      void *foo(void)
      {
      	static int x;
      	int *addr;
      	asm("adrp %0, %1" : "=r" (addr) : "S" (&x));
      	return addr;
      }
      
      $ ~/Work/gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux/bin/aarch64-linux-gnu-gcc -S -x c -O2 x.c
      x.c: In function ‘foo’:
      x.c:5:2: error: impossible constraint in ‘asm’
        asm("adrp %0, %1" : "=r" (addr) : "S" (&x));
        ^
      
      Boo. Following revisions of the compiler work just fine, though.
      
      We can fall back to the "i" constraint for GCC versions prior to 5.0,
      which *seems* to do the right thing. Hopefully we will be able to
      remove this at some point, but in the meantime this gets us going
      (a version-gated sketch follows this entry).
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20201217111135.1536658-1-maz@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
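      
      A minimal sketch of how such a version-gated fallback might look; the
      SYM_CONSTRAINT name and its placement here are illustrative assumptions,
      not necessarily the literal kernel change:
      
      /* Assumption: select the asm constraint for symbol operands by compiler version. */
      #if defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 5
      #define SYM_CONSTRAINT	"i"	/* GCC 4.9 rejects "S" for symbols in the same unit */
      #else
      #define SYM_CONSTRAINT	"S"	/* preferred: symbol-reference constraint */
      #endif
      
      void *foo(void)
      {
      	static int x;
      	int *addr;
      
      	/* adrp loads the 4KB-page address of the symbol into addr */
      	asm("adrp %0, %1" : "=r" (addr) : SYM_CONSTRAINT (&x));
      	return addr;
      }
      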
    • arm64: make _TIF_WORK_MASK bits contiguous · 870d1675
      Mark Rutland authored
      We need the bits of _TIF_WORK_MASK to be contiguous in order to use
      this as an immediate argument to an AND instruction in entry.S.
      
      We happened to change these bits in commits:
      
        b5a5a01d ("arm64: uaccess: remove addr_limit_user_check()")
        192caabd ("arm64: add support for TIF_NOTIFY_SIGNAL")
      
      which each worked in isolation, but the merge resolution in commit:
      
        005b2a9d ("Merge tag 'tif-task_work.arch-2020-12-14' of git://git.kernel.dk/linux-block")
      
      happened to make the bits non-contiguous.
      
      Fix this by moving TIF_NOTIFY_SIGNAL to be bit 6, which is contiguous
      with the rest of _TIF_WORK_MASK (a small contiguity check is sketched
      after this entry).
      
      Otherwise, we'll get a build-time failure as below:
      
         arch/arm64/kernel/entry.S: Assembler messages:
         arch/arm64/kernel/entry.S:733: Error: immediate out of range at operand 3 -- `and x2,x19,#((1<<1)|(1<<0)|(1<<2)|(1<<3)|(1<<4)|(1<<5)|(1<<7))'
         scripts/Makefile.build:360: recipe for target 'arch/arm64/kernel/entry.o' failed
      
      Fixes: 005b2a9d ("Merge tag 'tif-task_work.arch-2020-12-14' of git://git.kernel.dk/linux-block")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
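      
      Sketched below, as a standalone C illustration rather than kernel code,
      is why the hole matters: an AArch64 AND (immediate) can only encode
      bitmask immediates, i.e. contiguous runs of set bits, so a mask covering
      bits {0-5,7} cannot be used directly while bits 0-6 can. The names and
      the check are illustrative only:
      
      #include <stdbool.h>
      #include <stdio.h>
      
      #define BIT(n)		(1UL << (n))
      
      /* after the fix: TIF_NOTIFY_SIGNAL at bit 6, so the mask covers bits 0-6 */
      #define WORK_MASK_FIXED  (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(6))
      /* after the merge resolution: bit 6 unused, bit 7 set, leaving a hole */
      #define WORK_MASK_BROKEN (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(7))
      
      /* a non-empty run of 1s starting at bit 0 satisfies x & (x + 1) == 0 */
      static bool contiguous_from_bit0(unsigned long x)
      {
      	return x != 0 && (x & (x + 1)) == 0;
      }
      
      int main(void)
      {
      	printf("fixed mask  (bits 0-6):   contiguous=%d\n", contiguous_from_bit0(WORK_MASK_FIXED));
      	printf("broken mask (bits 0-5,7): contiguous=%d\n", contiguous_from_bit0(WORK_MASK_BROKEN));
      	return 0;
      }
      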
  5. 16 December 2020, 6 commits
    • arch, mm: make kernel_page_present() always available · 32a0de88
      Mike Rapoport authored
      For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to
      verify that a page is mapped in the kernel direct map can be useful
      regardless of hibernation.
      
      Add a RISC-V implementation of kernel_page_present(), update its forward
      declarations and stubs to be part of the set_memory API, and remove the
      ugly ifdefery in include/linux/mm.h around the current declarations of
      kernel_page_present() (a sketch of the resulting interface shape follows
      this entry).
      
      Link: https://lkml.kernel.org/r/20201109192128.960-5-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
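      
      A hedged sketch of the resulting interface shape: kernel_page_present()
      declared through the set_memory API, with a stub when the architecture
      cannot inspect the direct map. Simplified, not the verbatim header:
      
      #ifdef CONFIG_ARCH_HAS_SET_MEMORY
      bool kernel_page_present(struct page *page);
      #else
      static inline bool kernel_page_present(struct page *page)
      {
      	/* without set_memory support the direct map is never fractured */
      	return true;
      }
      #endif
      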
    • arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC · 5d6ad668
      Mike Rapoport authored
      The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must
      never fail.  With this assumption it wouldn't be safe to allow general
      usage of this function.
      
      Moreover, some architectures that implement __kernel_map_pages() have this
      function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap
      pages when page allocation debugging is disabled at runtime.
      
      As all the users of __kernel_map_pages() were converted to use
      debug_pagealloc_map_pages(), it is safe to make it available only when
      DEBUG_PAGEALLOC is set (the wrapper pattern is sketched after this entry).
      
      Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
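      
      A hedged sketch of the wrapper pattern that the conversion relies on:
      callers use debug_pagealloc_map_pages(), which only reaches
      __kernel_map_pages() when page-allocation debugging is compiled in and
      enabled at runtime. Simplified from the pattern in include/linux/mm.h:
      
      #ifdef CONFIG_DEBUG_PAGEALLOC
      extern void __kernel_map_pages(struct page *page, int numpages, int enable);
      
      static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
      {
      	if (debug_pagealloc_enabled_static())
      		__kernel_map_pages(page, numpages, 1);
      }
      #else	/* !CONFIG_DEBUG_PAGEALLOC */
      static inline void debug_pagealloc_map_pages(struct page *page, int numpages) { }
      #endif
      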
    • arm, arm64: move free_unused_memmap() to generic mm · 4f5b0c17
      Mike Rapoport authored
      ARM and ARM64 free unused parts of the memory map just before the
      initialization of the page allocator.  To allow holes in the memory map,
      both architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID.
      
      Allowing holes in the memory map for FLATMEM may be useful for small
      machines, such as ARC and m68k, and will enable those architectures to
      cease using DISCONTIGMEM while still supporting more than one memory bank.
      
      Move the functions that free the unused memory map to generic mm and
      enable them when HAVE_ARCH_PFN_VALID=y (a simplified sketch of the gating
      follows this entry).
      
      Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Meelis Roos <mroos@linux.ee>
      Cc: Michael Schmitz <schmitzmic@gmail.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
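      
      A heavily simplified sketch of the gating described above: the generic
      helper bails out unless the architecture provides its own pfn_valid(),
      and otherwise frees the memmap covering holes between memory banks.
      free_memmap() stands in for the existing pfn-range helper, and the
      alignment handling of the real code is omitted:
      
      static void __init free_unused_memmap(void)
      {
      	unsigned long start, end, prev_end = 0;
      	int i;
      
      	if (!IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID) ||
      	    IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
      		return;
      
      	for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, NULL) {
      		/* free the memmap describing the hole before this bank */
      		if (prev_end && start > prev_end)
      			free_memmap(prev_end, start);
      		prev_end = end;
      	}
      }
      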
    • mm: forbid splitting special mappings · 871402e0
      Dmitry Safonov authored
      Don't allow splitting of vm_special_mapping's.  It affects vdso/vvar
      areas.  Uprobes have only one page in xol_area so they aren't affected.
      
      Those restrictions were enforced by checks in .mremap() callbacks.
      Restrict resizing with a generic .split() callback instead (sketched
      after this entry).
      
      Link: https://lkml.kernel.org/r/20201013013416.390574-7-dima@arista.com
      Signed-off-by: Dmitry Safonov <dima@arista.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
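      
      A hedged sketch of the callback approach, assuming the existing
      special-mapping vm_ops in mm/mmap.c keep their current names; the new
      .split() hook simply refuses, so the VMA can never be carved up:
      
      static int special_mapping_split(struct vm_area_struct *vma, unsigned long addr)
      {
      	/*
      	 * Forbid splitting special mappings - the kernel has expectations
      	 * about the number of pages in the mapping (e.g. vdso, vvar).
      	 */
      	return -EINVAL;
      }
      
      static const struct vm_operations_struct special_mapping_vmops = {
      	.close = special_mapping_close,	/* existing callbacks unchanged */
      	.fault = special_mapping_fault,
      	.split = special_mapping_split,	/* new: reject any split */
      };
      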
    • arm64: mremap speedup - enable HAVE_MOVE_PUD · f5308c89
      Kalesh Singh authored
      HAVE_MOVE_PUD enables remapping pages at the PUD level if both the source
      and destination addresses are PUD-aligned (an alignment-check sketch
      follows this entry).
      
      With HAVE_MOVE_PUD enabled, the measurements below show approximately a
      19x performance improvement on arm64.
      
      ------- Test Results ---------
      
      The following results were obtained using a 5.4 kernel, by remapping a
      PUD-aligned, 1GB sized region to a PUD-aligned destination.  The results
      from 10 iterations of the test are given below:
      
      Total mremap times for 1GB data on arm64. All times are in nanoseconds.
      
        Control          HAVE_MOVE_PUD
      
        1247761          74271
        1219896          46771
        1094792          59687
        1227760          48385
        1043698          76666
        1101771          50365
        1159896          52500
        1143594          75261
        1025833          61354
        1078125          48697
      
        1134312.6        59395.7    <-- Mean time in nanoseconds
      
      A 1GB mremap completion time drops from ~1.1 milliseconds to ~59
      microseconds on arm64.  (~19x speed up).
      
      Link: https://lkml.kernel.org/r/20201014005320.2233162-5-kaleshsingh@google.com
      Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Christian Brauner <christian.brauner@ubuntu.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Gavin Shan <gshan@redhat.com>
      Cc: Hassan Naveed <hnaveed@wavecomp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Krzysztof Kozlowski <krzk@kernel.org>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Minchan Kim <minchan@google.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: Sami Tolvanen <samitolvanen@google.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Cc: SeongJae Park <sjpark@amazon.de>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
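      
      An illustrative, userspace-style sketch of the eligibility check implied
      above: a PUD-level move is only attempted when the old address, new
      address and length are all PUD-aligned. The constants assume 4K pages
      with 1GB PUD entries; the real logic lives in mm/mremap.c:
      
      #include <stdbool.h>
      #include <stdint.h>
      
      #define PUD_SHIFT	30			/* 1GB per PUD entry with 4K pages */
      #define PUD_SIZE	(1ULL << PUD_SHIFT)
      #define PUD_MASK	(~(PUD_SIZE - 1))
      
      static bool can_move_at_pud_level(uint64_t old_addr, uint64_t new_addr, uint64_t len)
      {
      	/* all three must be 1GB-aligned for whole PUD entries to be moved */
      	return !(old_addr & ~PUD_MASK) &&
      	       !(new_addr & ~PUD_MASK) &&
      	       !(len & ~PUD_MASK);
      }
      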
    • arm64: Warn the user when a small VA_BITS value wastes memory · 31f80a4e
      Marc Zyngier authored
      The memblock code ignores any memory that doesn't fit in the
      linear mapping. In order to preserve the distance between two physical
      memory locations and their mappings in the linear map, any hole between
      two memory regions occupies the same space in the linear map.
      
      On most systems, this is hardly a problem (the memory banks are close
      together, and VA_BITS represents a large space compared to the available
      memory *and* the potential gaps).
      
      On NUMA systems, things are quite different: the gaps between the
      memory nodes can be pretty large compared to the memory size itself,
      and the range from memblock_start_of_DRAM() to memblock_end_of_DRAM()
      can exceed the space described by VA_BITS.
      
      Unfortunately, we're not very good at making this obvious to the user,
      and on a D05 system (two sockets and 4 nodes with 64GB each)
      accidentally configured with a 39-bit VA, we display something like this:
      
      [    0.000000] NUMA: NODE_DATA [mem 0x1ffbffe100-0x1ffbffffff]
      [    0.000000] NUMA: NODE_DATA [mem 0x2febfc1100-0x2febfc2fff]
      [    0.000000] NUMA: Initmem setup node 2 [<memory-less node>]
      [    0.000000] NUMA: NODE_DATA [mem 0x2febfbf200-0x2febfc10ff]
      [    0.000000] NUMA: NODE_DATA(2) on node 1
      [    0.000000] NUMA: Initmem setup node 3 [<memory-less node>]
      [    0.000000] NUMA: NODE_DATA [mem 0x2febfbd300-0x2febfbf1ff]
      [    0.000000] NUMA: NODE_DATA(3) on node 1
      
      which isn't very explicit, and doesn't tell the user why 128GB
      have suddenly disappeared.
      
      Let's add a warning message telling the user that memory has been
      truncated, and offer a potential solution (bumping VA_BITS up); the
      check behind it is sketched after this entry.
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20201215152918.1511108-1-maz@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
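      
      A hedged sketch of the check behind the new message: if the span from
      memblock_start_of_DRAM() to memblock_end_of_DRAM() exceeds what the
      linear map can cover at the configured VA_BITS, warn that memory is
      being truncated. The function name and message wording are illustrative,
      not the literal arch/arm64/mm/init.c change:
      
      static void __init check_linear_map_span(void)
      {
      	/* half of the VA space is available for the linear map */
      	u64 linear_region_size = BIT(vabits_actual - 1);
      	u64 dram_span = memblock_end_of_DRAM() - memblock_start_of_DRAM();
      
      	if (dram_span > linear_region_size)
      		pr_warn("Memory does not fit in the linear mapping, truncating it; consider a larger CONFIG_ARM64_VA_BITS\n");
      }
      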
  6. 15 December 2020, 2 commits
  7. 12 December 2020, 1 commit
  8. 10 December 2020, 3 commits
  9. 09 December 2020, 3 commits
  10. 08 December 2020, 13 commits
  11. 04 December 2020, 1 commit