1. 05 April 2021, 1 commit
  2. 27 February 2021, 1 commit
  3. 24 February 2021, 2 commits
  4. 19 January 2021, 1 commit
    • s390: convert to generic entry · 56e62a73
      Authored by Sven Schnelle
      This patch converts s390 to use the generic entry infrastructure from
      kernel/entry/*.
      
      There are a few special things on s390:
      
      - PIF_PER_TRAP is moved to TIF_PER_TRAP as the generic code doesn't
        know about our PIF flags in exit_to_user_mode_loop().
      
      - The old code had several ways to restart syscalls:
      
        a) PIF_SYSCALL_RESTART, which was only set during execve to force a
           restart after upgrading a process (usually qemu-kvm) to pgste page
           table extensions.
      
        b) PIF_SYSCALL, which is set by do_signal() to indicate that the
           current syscall should be restarted. This is changed so that
           do_signal() now also uses PIF_SYSCALL_RESTART. Continuing to use
           PIF_SYSCALL doesn't work with the generic code, and switching to
           PIF_SYSCALL_RESTART gives PIF_SYSCALL and PIF_SYSCALL_RESTART
           clearly distinct meanings.
      
      - On s390, calling sys_sigreturn or sys_rt_sigreturn is implemented
        by executing an svc instruction on the process stack, which causes
        a fault. While handling that fault the fault code sets PIF_SYSCALL
        to hand over processing to the syscall code on exit to user mode.
      
      The patch introduces PIF_SYSCALL_RET_SET, which is set if ptrace sets
      a return value for a syscall. The s390x ptrace ABI uses r2 for both
      the syscall number and the return value, so ptrace cannot set both
      at the same time. The flag makes handling that a bit easier.
      do_syscall() will just skip executing the syscall if PIF_SYSCALL_RET_SET
      is set.
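      A minimal sketch of that skip, assuming the s390 pt_regs flag
      helpers test_pt_regs_flag()/clear_pt_regs_flag() and a
      sys_call_table dispatch (illustrative names, not the exact entry
      code):

      	/* Sketch only: skip the syscall body when ptrace already set
      	 * the return value in r2 via PIF_SYSCALL_RET_SET. */
      	static void __do_syscall(struct pt_regs *regs)
      	{
      		unsigned long nr = regs->int_code & 0xffff;

      		if (test_pt_regs_flag(regs, PIF_SYSCALL_RET_SET)) {
      			clear_pt_regs_flag(regs, PIF_SYSCALL_RET_SET);
      			return;	/* r2 already holds the return value */
      		}
      		regs->gprs[2] = sys_call_table[nr](regs);
      	}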
      
      CONFIG_DEBUG_ASCE was removed in favour of the generic
      CONFIG_DEBUG_ENTRY. CR1/7/13 will be checked on both kernel entry
      and exit to ensure they contain the correct ASCEs.
      Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
  5. 10 December 2020, 1 commit
  6. 23 November 2020, 2 commits
    • s390/mm: use invalid asce instead of kernel asce · 0290c9e3
      Authored by Heiko Carstens
      Create a region 3 page table which contains only invalid entries,
      and use it via "s390_invalid_asce" instead of the kernel ASCE
      whenever:
      - no user address space is available, e.g. during early startup
      - address spaces are being switched, as an intermediate ASCE
      
      This makes sure that user space accesses in such situations are
      guaranteed to fail.
      Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
      Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
    • s390/mm: remove set_fs / rework address space handling · 87d59863
      Authored by Heiko Carstens
      Remove set_fs support from s390. While doing this, rework and
      simplify address space handling. As a result, address spaces are now
      set up like this:
      
      CPU running in              | %cr1 ASCE | %cr7 ASCE | %cr13 ASCE
      ----------------------------|-----------|-----------|-----------
      user space                  |  user     |  user     |  kernel
      kernel, normal execution    |  kernel   |  user     |  kernel
      kernel, kvm guest execution |  gmap     |  user     |  kernel
      
      To achieve this the getcpu vdso syscall is removed in order to avoid
      secondary address mode and a separate vdso address space for user
      space. The getcpu vdso syscall will be implemented differently in a
      subsequent patch.
      
      The kernel always accesses user space via the secondary address
      space. This happens in different ways:
      - with mvcos in home space mode, reading/writing directly to the
        secondary address space
      - with mvcs/mvcp in primary space mode, copying from primary space
        to secondary space or vice versa
      - with e.g. cs in secondary space mode, accessing the secondary
        space
      
      As before, translation modes are switched with sacf around the
      instructions which access user space.
      
      Lazy handling of control register reloading is removed in the hope
      of making everything simpler, at the cost of making kernel entry
      and exit a bit slower. That is: on kernel entry the primary asce is
      always changed to the kernel asce, and on kernel exit the primary
      asce is changed back to the user asce.
      
      In kernel mode there is only one exception to the primary asce: when
      kvm guests are executed the primary asce contains the gmap asce
      (which describes the guest address space). The primary asce is reset
      to the kernel asce whenever kvm guest execution is interrupted, so
      that it doesn't have to be taken into account for any user space
      accesses.
      Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
  7. 21 November 2020, 3 commits
  8. 11 November 2020, 1 commit
  9. 09 November 2020, 3 commits
  10. 26 October 2020, 1 commit
  11. 21 October 2020, 1 commit
  12. 14 October 2020, 2 commits
    • arch, drivers: replace for_each_membock() with for_each_mem_range() · b10d6bca
      Authored by Mike Rapoport
      There are several occurrences of the following pattern:
      
      	for_each_memblock(memory, reg) {
      		start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
      		end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
      
      		/* do something with start and end */
      	}
      
      Using the for_each_mem_range() iterator is more appropriate in such
      cases and allows simpler and cleaner code, as sketched below.
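      A sketch of the converted pattern, assuming the post-refactoring
      for_each_mem_range() signature (it yields physical address ranges
      directly, so the PFN round-trip disappears):

      	phys_addr_t start, end;
      	u64 i;

      	for_each_mem_range(i, &start, &end) {
      		/* do something with start and end */
      	}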
      
      [akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build]
      [rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring]
        Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch, mm: replace for_each_memblock() with for_each_mem_pfn_range() · c9118e6c
      Authored by Mike Rapoport
      There are several occurrences of the following pattern:
      
      	for_each_memblock(memory, reg) {
      		start_pfn = memblock_region_memory_base_pfn(reg);
      		end_pfn = memblock_region_memory_end_pfn(reg);
      
      		/* do something with start_pfn and end_pfn */
      	}
      
      Rather than iterating over all memblock.memory regions and querying
      their start and end PFNs each time, use the for_each_mem_pfn_range()
      iterator to get simpler and clearer code, as sketched below.
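      A sketch of the converted pattern; passing MAX_NUMNODES iterates
      over all nodes, and the node-id output pointer may be NULL when it
      is not needed:

      	unsigned long start_pfn, end_pfn;
      	int i;

      	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
      		/* do something with start_pfn and end_pfn */
      	}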
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>	[.clang-format]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: https://lkml.kernel.org/r/20200818151634.14343-12-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 16 September 2020, 3 commits
    • s390/kasan: support protvirt with 4-level paging · c360c9a2
      Authored by Vasily Gorbik
      Currently the kernel crashes in Kasan instrumentation code if
      CONFIG_KASAN_S390_4_LEVEL_PAGING is used on a protected
      virtualization capable machine where the ultravisor imposes
      addressing limitations on the host and those limitations are lower
      than KASAN_SHADOW_OFFSET.
      
      The problem is that Kasan has to know in advance where the
      vmalloc/modules areas will be. With protected virtualization enabled
      the vmalloc/modules areas are moved down to the ultravisor secure
      storage limit, while Kasan still expects them at the very end of the
      4-level paging address space.
      
      To fix that, make Kasan recognize when protected virtualization is
      enabled and predefine the vmalloc/modules areas at positions that
      comply with the ultravisor secure storage limit.
      
      Kasan shadow itself stays in place and might reside above that ultravisor
      secure storage limit.
      
      One slight difference compared to a kernel without Kasan enabled is
      that the vmalloc/modules areas position is not reverted to the
      default if ultravisor initialization fails; it stays below the
      ultravisor secure storage limit.
      
      Kernel layout with kasan, 4-level paging and protected virtualization
      enabled (ultravisor secure storage limit is at 0x0000800000000000):
      ---[ vmemmap Area Start ]---
      0x0000400000000000-0x0000400080000000
      ---[ vmemmap Area End ]---
      ---[ vmalloc Area Start ]---
      0x00007fe000000000-0x00007fff80000000
      ---[ vmalloc Area End ]---
      ---[ Modules Area Start ]---
      0x00007fff80000000-0x0000800000000000
      ---[ Modules Area End ]---
      ---[ Kasan Shadow Start ]---
      0x0018000000000000-0x001c000000000000
      ---[ Kasan Shadow End ]---
      0x001c000000000000-0x0020000000000000         1P PGD I
      
      Kernel layout with kasan, 4-level paging and protected virtualization
      disabled/unsupported:
      ---[ vmemmap Area Start ]---
      0x0000400000000000-0x0000400060000000
      ---[ vmemmap Area End ]---
      ---[ Kasan Shadow Start ]---
      0x0018000000000000-0x001c000000000000
      ---[ Kasan Shadow End ]---
      ---[ vmalloc Area Start ]---
      0x001fffe000000000-0x001fffff80000000
      ---[ vmalloc Area End ]---
      ---[ Modules Area Start ]---
      0x001fffff80000000-0x0020000000000000
      ---[ Modules Area End ]---
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
    • s390/mm,ptdump: sort markers · ee4b2ce6
      Authored by Vasily Gorbik
      Kasan configuration options and the amount of physical memory
      present can affect the kernel memory layout. In particular vmemmap,
      vmalloc and modules might come before the kasan shadow or after it.
      For ptdump to output markers in the right order, the markers have to
      be sorted.
      
      To preserve the original order of markers with the same start
      address, avoid sort() from lib/sort.c (which is not a stable sorting
      algorithm) and sort the markers in place, as sketched below.
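      A stable in-place insertion sort keyed on the marker start address
      does the job; this sketch assumes an address_markers[] array with a
      terminating sentinel entry, as in dump_pagetables.c (names are
      illustrative):

      	static void sort_address_markers(void)
      	{
      		struct addr_marker tmp;
      		int i, j;

      		/* Insertion sort is stable: markers with equal start
      		 * addresses keep their original relative order. The
      		 * final sentinel entry is excluded from the sort. */
      		for (i = 1; i < ARRAY_SIZE(address_markers) - 1; i++) {
      			tmp = address_markers[i];
      			for (j = i - 1; j >= 0 &&
      			     address_markers[j].start_address > tmp.start_address; j--)
      				address_markers[j + 1] = address_markers[j];
      			address_markers[j + 1] = tmp;
      		}
      	}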
      Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
    • s390/mm,ptdump: add proper ifdefs · 48111b48
      Authored by Heiko Carstens
      Use ifdefs instead of IS_ENABLED() to avoid a compile error
      for !PTDUMP_DEBUGFS:
      
      arch/s390/mm/dump_pagetables.c: In function ‘pt_dump_init’:
      arch/s390/mm/dump_pagetables.c:248:64: error: ‘ptdump_fops’ undeclared (first use in this function); did you mean ‘pidfd_fops’?
         debugfs_create_file("kernel_page_tables", 0400, NULL, NULL, &ptdump_fops);
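      The fix compiles the registration out entirely when the option is
      off, since IS_ENABLED() still requires the ptdump_fops symbol to
      exist in the dead branch; a sketch:

      	#ifdef CONFIG_PTDUMP_DEBUGFS
      		debugfs_create_file("kernel_page_tables", 0400, NULL, NULL,
      				    &ptdump_fops);
      	#endif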
      Reported-by: Julian Wiedmann <jwi@linux.ibm.com>
      Fixes: 08c8e685 ("s390: add ARCH_HAS_DEBUG_WX support")
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
  14. 14 September 2020, 10 commits
  15. 27 August 2020, 1 commit
  16. 13 August 2020, 3 commits
    • mm/gup: remove task_struct pointer for all gup code · 64019a2e
      Authored by Peter Xu
      After the cleanup of page fault accounting, gup does not need to pass
      task_struct around any more.  Remove that parameter in the whole gup
      stack.
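      For example, get_user_pages_remote() drops its leading task_struct
      argument; a sketch of a converted call site, assuming the v5.9-era
      signature:

      	/* before */
      	ret = get_user_pages_remote(tsk, mm, addr, 1, FOLL_WRITE,
      				    &page, NULL, NULL);
      	/* after: only the mm is needed */
      	ret = get_user_pages_remote(mm, addr, 1, FOLL_WRITE,
      				    &page, NULL, NULL);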
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Link: http://lkml.kernel.org/r/20200707225021.200906-26-peterx@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/s390: use general page fault accounting · 35e45f3e
      Authored by Peter Xu
      Use the general page fault accounting by passing regs into
      handle_mm_fault(). This naturally solves the issue of a page fault
      being accounted multiple times when it is retried.
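      On the arch side this boils down to passing regs instead of NULL and
      dropping the hand-rolled accounting; a sketch of the fault handler
      call site:

      	/* before: arch code had to account the fault by hand */
      	fault = handle_mm_fault(vma, address, flags, NULL);

      	/* after: generic code accounts it against regs' task */
      	fault = handle_mm_fault(vma, address, flags, regs);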
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Alexander Gordeev <agordeev@linux.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Link: http://lkml.kernel.org/r/20200707225021.200906-19-peterx@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: do page fault accounting in handle_mm_fault · bce617ed
      Authored by Peter Xu
      Patch series "mm: Page fault accounting cleanups", v5.
      
      This is v5 of the pf accounting cleanup series. It originates from
      Gerald Schaefer's report a week ago of incorrect page fault
      accounting for retried page faults after commit 4064b982 ("mm: allow
      VM_FAULT_RETRY for multiple times"):
      
        https://lore.kernel.org/lkml/20200610174811.44b94525@thinkpad/
      
      What this series did:
      
        - Correct page fault accounting: we account a page fault (no
          matter whether it comes from #PF handling, gup, or anything
          else) only on the attempt that completes the fault. For example,
          page fault retries are not counted in the page fault counters.
          The same applies to the perf events.
      
        - Unify the definition of PERF_COUNT_SW_PAGE_FAULTS: currently
          this perf event is used in an ad-hoc way across different archs.

          Case (1): for many archs it's done at the entry of a page fault
          handler, so that it will also cover e.g. erroneous faults.

          Case (2): for some other archs, it is only accounted when the
          page fault is resolved successfully.

          Case (3): there are still quite a few archs that have not
          enabled this perf event.

          Since this series touches nearly all the archs, we unify this
          perf event to always follow case (1), which is the one that
          makes the most sense. And since we moved the accounting into
          handle_mm_fault(), the other two MAJ/MIN perf events are taken
          care of naturally.
      
        - Unify definition of "major faults": the definition of "major
          fault" is slightly changed when used in accounting (not
          VM_FAULT_MAJOR).  More information in patch 1.
      
        - Always account the page fault to the task that triggered it.
          This does not matter much for #PF handling, but it does for
          gup. More information on this in patch 25.
      
      Patchset layout:
      
      Patch 1:     Introduced the accounting in handle_mm_fault(), not enabled.
      Patch 2-23:  Enable the new accounting for arch #PF handlers one by one.
      Patch 24:    Enable the new accounting for the rest outliers (gup, iommu, etc.)
      Patch 25:    Cleanup GUP task_struct pointer since it's not needed any more
      
      This patch (of 25):
      
      This is a preparation patch to move page fault accounting into the
      general code in handle_mm_fault(). This includes both the per-task
      flt_maj/flt_min counters and the major/minor page fault perf events.
      To do this, the pt_regs pointer is passed into handle_mm_fault().
      
      PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault
      handlers.
      
      So far, every pt_regs pointer passed into handle_mm_fault() is NULL,
      which means this patch should have no intended functional change.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexander Gordeev <agordeev@linux.ibm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200707225021.200906-1-peterx@redhat.com
      Link: http://lkml.kernel.org/r/20200707225021.200906-2-peterx@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 12 August 2020, 1 commit
  18. 08 August 2020, 2 commits
    • mm/sparse: cleanup the code surrounding memory_present() · c89ab04f
      Authored by Mike Rapoport
      After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two
      equivalent functions that call memory_present() for each region in
      memblock.memory: sparse_memory_present_with_active_regions() and
      memblocks_present().
      
      Moreover, all architectures have a call to either of these functions
      preceding the call to sparse_init(), and in most cases they are
      called one after the other.
      
      Mark the regions from memblock.memory as present during sparse_init()
      by making sparse_init() call memblocks_present(), make
      memblocks_present() and memory_present() static, and remove the
      redundant sparse_memory_present_with_active_regions() function.
      
      Also remove the no longer required HAVE_MEMORY_PRESENT configuration
      option.
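      For a typical architecture the net change is that the explicit
      present-marking call simply disappears; a sketch (function spelling
      per the pre-change API):

      	/* before */
      	sparse_memory_present_with_active_regions(MAX_NUMNODES);
      	sparse_init();

      	/* after: sparse_init() calls memblocks_present() itself */
      	sparse_init();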
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove unneeded includes of <asm/pgalloc.h> · ca15ca40
      Authored by Mike Rapoport
      Patch series "mm: cleanup usage of <asm/pgalloc.h>"
      
      Most architectures have very similar versions of pXd_alloc_one() and
      pXd_free_one() for intermediate levels of page table.  These patches add
      generic versions of these functions in <asm-generic/pgalloc.h> and enable
      use of the generic functions where appropriate.
      
      In addition, functions declared and defined in <asm/pgalloc.h> are
      used mostly by core mm and early mm initialization in arch code, and
      there is no actual reason to have <asm/pgalloc.h> included all over
      the place. The first patch in this series removes unneeded includes
      of <asm/pgalloc.h>.
      
      In the end it didn't work out as neatly as I hoped and moving
      pXd_alloc_track() definitions to <asm-generic/pgalloc.h> would require
      unnecessary changes to arches that have custom page table allocations, so
      I've decided to move lib/ioremap.c to mm/ and make pgalloc-track.h local
      to mm/.
      
      This patch (of 8):
      
      In most cases <asm/pgalloc.h> header is required only for allocations of
      page table memory.  Most of the .c files that include that header do not
      use symbols declared in <asm/pgalloc.h> and do not require that header.
      
      As for the other header files that used to include <asm/pgalloc.h>, it is
      possible to move that include into the .c file that actually uses symbols
      from <asm/pgalloc.h> and drop the include from the header file.
      
      The process was somewhat automated using
      
      	sed -i -E '/[<"]asm\/pgalloc\.h/d' \
                      $(grep -L -w -f /tmp/xx \
                              $(git grep -E -l '[<"]asm/pgalloc\.h'))
      
      where /tmp/xx contains all the symbols defined in
      arch/*/include/asm/pgalloc.h.
      
      [rppt@linux.ibm.com: fix powerpc warning]
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	[m68k]
      Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Matthew Wilcox <willy@infradead.org>
      Link: http://lkml.kernel.org/r/20200627143453.31835-1-rppt@kernel.org
      Link: http://lkml.kernel.org/r/20200627143453.31835-2-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 27 July 2020, 1 commit