1. 30 Jul, 2019 1 commit
  2. 26 Jul, 2019 1 commit
  3. 19 Jul, 2019 3 commits
    • mm/memory_hotplug: allow arch_remove_memory() without CONFIG_MEMORY_HOTREMOVE · 80ec922d
      David Hildenbrand authored
      We want to improve error handling while adding memory by allowing the
      use of arch_remove_memory() and __remove_pages() even if
      CONFIG_MEMORY_HOTREMOVE is not set, e.g., to implement something like:
      
      	arch_add_memory()
      	rc = do_something();
      	if (rc) {
      		arch_remove_memory();
      	}
      
      We won't get rid of CONFIG_MEMORY_HOTREMOVE for now, as doing so would
      require untangling quite a few dependencies on memory offlining.
      
      Link: http://lkml.kernel.org/r/20190527111152.16324-7-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Oscar Salvador <osalvador@suse.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
      Cc: Andrew Banman <andrew.banman@hpe.com>
      Cc: Arun KS <arunks@codeaurora.org>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Logan Gunthorpe <logang@deltatee.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chintan Pandya <cpandya@codeaurora.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jun Yao <yaojun8558363@gmail.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      80ec922d
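      A fleshed-out version of the sketch above, in hedged kernel C. The
      caller name add_memory_and_register() and the do_something_with_memory()
      step are illustrative placeholders; the arch_add_memory() /
      arch_remove_memory() signatures follow the mhp_restrictions interface
      that appears later in this log:

      	/* Hypothetical caller: add memory, roll the add back on failure. */
      	static int add_memory_and_register(int nid, u64 start, u64 size,
      					   struct mhp_restrictions *restrictions)
      	{
      		int rc;

      		rc = arch_add_memory(nid, start, size, restrictions);
      		if (rc)
      			return rc;

      		rc = do_something_with_memory(nid, start, size);
      		if (rc)
      			/* Possible now even without CONFIG_MEMORY_HOTREMOVE. */
      			arch_remove_memory(nid, start, size,
      					   restrictions->altmap);

      		return rc;
      	}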
    • s390x/mm: implement arch_remove_memory() · 18c86506
      David Hildenbrand authored
      This will come in handy when we want to handle errors after
      arch_add_memory().
      
      Link: http://lkml.kernel.org/r/20190527111152.16324-4-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Oscar Salvador <osalvador@suse.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Andrew Banman <andrew.banman@hpe.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arun KS <arunks@codeaurora.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chintan Pandya <cpandya@codeaurora.org>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jun Yao <yaojun8558363@gmail.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Logan Gunthorpe <logang@deltatee.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Qian Cai <cai@lca.pw>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      18c86506
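      For reference, a sketch of the s390x implementation this commit adds,
      reconstructed from the upstream patch (treat the exact body as
      approximate):

      	void arch_remove_memory(int nid, u64 start, u64 size,
      				struct vmem_altmap *altmap)
      	{
      		unsigned long start_pfn = start >> PAGE_SHIFT;
      		unsigned long nr_pages = size >> PAGE_SHIFT;
      		struct zone *zone;

      		/* Tear down in the reverse order of arch_add_memory(). */
      		zone = page_zone(pfn_to_page(start_pfn));
      		__remove_pages(zone, start_pfn, nr_pages, altmap);
      		vmem_remove_mapping(start, size);
      	}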
    • s390x/mm: fail when an altmap is used for arch_add_memory() · 973de24a
      David Hildenbrand authored
      ZONE_DEVICE is not yet supported on s390x, so fail if an altmap is
      passed; that way we won't forget arch_add_memory()/arch_remove_memory()
      when unlocking support.
      
      Link: http://lkml.kernel.org/r/20190527111152.16324-3-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Suggested-by: Dan Williams <dan.j.williams@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Oscar Salvador <osalvador@suse.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Andrew Banman <andrew.banman@hpe.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arun KS <arunks@codeaurora.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chintan Pandya <cpandya@codeaurora.org>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jun Yao <yaojun8558363@gmail.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Logan Gunthorpe <logang@deltatee.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Qian Cai <cai@lca.pw>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      973de24a
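      The guard itself is small; a sketch of the check at the top of the
      s390x arch_add_memory(), assuming the mhp_restrictions context
      described later in this log (the WARN_ON_ONCE form is an assumption):

      	int arch_add_memory(int nid, u64 start, u64 size,
      			    struct mhp_restrictions *restrictions)
      	{
      		/* ZONE_DEVICE is not supported on s390x yet; reject altmaps. */
      		if (WARN_ON_ONCE(restrictions->altmap))
      			return -EINVAL;

      		/* ... existing mapping and __add_pages() logic follows ... */
      	}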
  4. 17 Jul, 2019 2 commits
    • mm, kprobes: generalize and rename notify_page_fault() as kprobe_page_fault() · b98cca44
      Anshuman Khandual authored
      Architectures which support kprobes have very similar boilerplate around
      calling kprobe_fault_handler().  Use a helper function in kprobes.h to
      unify them, based on the x86 code.
      
      This changes the behaviour for other architectures when preemption is
      enabled.  Previously, they would have disabled preemption while calling
      the kprobe handler.  However, a fault raised by a kprobe runs with
      preemption already disabled, so if we arrive here with preemption
      enabled we know the fault was not due to a kprobe handler and can
      simply return failure.

      This behaviour was introduced in commit a980c0ef ("x86/kprobes:
      Refactor kprobes_fault() like kprobe_exceptions_notify()").
      
      [anshuman.khandual@arm.com: export kprobe_fault_handler()]
        Link: http://lkml.kernel.org/r/1561133358-8876-1-git-send-email-anshuman.khandual@arm.com
      Link: http://lkml.kernel.org/r/1560420444-25737-1-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b98cca44
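      A sketch of the unified helper, modeled on the x86 code the message
      references (names per the kprobes.h of that era; details approximate):

      	static nokprobe_inline bool kprobe_page_fault(struct pt_regs *regs,
      						      unsigned int trap)
      	{
      		if (!kprobes_built_in() || user_mode(regs))
      			return false;
      		/*
      		 * A fault taken while a kprobe runs has preemption disabled,
      		 * so a preemptible context cannot be a kprobe fault.
      		 */
      		if (preemptible())
      			return false;
      		if (!kprobe_running())
      			return false;
      		return kprobe_fault_handler(regs, trap);
      	}

      Architectures then replace their notify_page_fault() boilerplate with a
      single kprobe_page_fault(regs, trap) call in their fault handlers.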
    • dma-direct: Force unencrypted DMA under SME for certain DMA masks · 9087c375
      Tom Lendacky authored
      If a device doesn't support DMA to a physical address that includes the
      encryption bit (currently bit 47, so 48-bit DMA), then the DMA must
      occur to unencrypted memory. SWIOTLB is used to satisfy that requirement
      if an IOMMU is not active (i.e., not enabled, or configured in passthrough mode).
      
      However, commit fafadcd1 ("swiotlb: don't dip into swiotlb pool for
      coherent allocations") modified the coherent allocation support in
      SWIOTLB to use the DMA direct coherent allocation support. When an IOMMU
      is not active, this resulted in dma_alloc_coherent() failing for devices
      that didn't support DMA addresses that included the encryption bit.
      
      Addressing this requires changes to the force_dma_unencrypted() function
      in kernel/dma/direct.c. Since the function is now non-trivial and
      SME/SEV specific, update the DMA direct support to add an arch override
      for the force_dma_unencrypted() function. The arch override is selected
      when CONFIG_AMD_MEM_ENCRYPT is set. The arch override function resides in
      the arch/x86/mm/mem_encrypt.c file and forces unencrypted DMA when either
      SEV is active or SME is active and the device does not support DMA to
      physical addresses that include the encryption bit.
      
      Fixes: fafadcd1 ("swiotlb: don't dip into swiotlb pool for coherent allocations")
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      [hch: moved the force_dma_unencrypted declaration to dma-mapping.h,
            fold the s390 fix from Halil Pasic]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      9087c375
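      A sketch of the x86 override described above (reconstructed; the exact
      mask handling in the upstream patch may differ):

      	/* arch/x86/mm/mem_encrypt.c, selected via CONFIG_AMD_MEM_ENCRYPT */
      	bool force_dma_unencrypted(struct device *dev)
      	{
      		/* Under SEV, all DMA must target unencrypted memory. */
      		if (sev_active())
      			return true;

      		/*
      		 * Under SME, force unencrypted DMA when the device cannot
      		 * address physical memory with the encryption bit set.
      		 */
      		if (sme_active()) {
      			u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
      			u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
      							dev->bus_dma_mask);

      			if (dma_dev_mask <= dma_enc_mask)
      				return true;
      		}

      		return false;
      	}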
  5. 15 Jun, 2019 1 commit
  6. 05 Jun, 2019 1 commit
  7. 04 Jun, 2019 2 commits
  8. 29 May, 2019 1 commit
    • signal: Remove the task parameter from force_sig_fault · 2e1661d2
      Eric W. Biederman authored
      As synchronous exceptions really only make sense against the current
      task (otherwise, how are you synchronous?), remove the task parameter
      from force_sig_fault to make it explicit that this is what is going
      on.
      
      The two known exceptions that deliver a synchronous exception to a
      stopped ptraced task have already been changed to
      force_sig_fault_to_task.
      
      The callers have been changed with the following emacs regular expression
      (with obvious variations on the architectures that take more arguments)
      to avoid typos:
      
      force_sig_fault[(]\([^,]+\)[,]\([^,]+\)[,]\([^,]+\)[,]\W+current[)]
      ->
      force_sig_fault(\1,\2,\3)
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      2e1661d2
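      Concretely, a typical caller changes like this; the SIGSEGV/SEGV_MAPERR
      arguments are just a representative example:

      	/* before: redundant, always-current task argument */
      	force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)addr, current);

      	/* after: the signal implicitly targets current */
      	force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)addr);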
  9. 28 May, 2019 1 commit
    • s390: add unreachable() to dump_fault_info() to fix -Wmaybe-uninitialized · bf2f1eee
      Masahiro Yamada authored
      When CONFIG_OPTIMIZE_INLINING is enabled for s390, I see this warning:
      
      arch/s390/mm/fault.c:127:15: warning: 'asce' may be used uninitialized in this function [-Wmaybe-uninitialized]
        switch (asce & _ASCE_TYPE_MASK) {
      arch/s390/mm/fault.c:177:16: note: 'asce' was declared here
        unsigned long asce;
                      ^~~~
      
      If get_fault_type() is not inlined, the compiler cannot deduce that
      all the possible paths in the 'switch' statement are covered.
      
      Of course, we could mark get_fault_type() as __always_inline to get
      back the original behavior, but I do not think it is sensible to force
      inlining just to suppress the warning.  Since this is only a warning, I
      want to keep as much room for compiler optimization as possible.
      
      I added unreachable() to teach the compiler that the 'default' label
      is unreachable.
      
      I got rid of the 'inline' marker. Even without the 'inline' hint,
      the compiler inlines functions based on its inlining heuristic.
      
      Fixes: 9012d011 ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      bf2f1eee
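      The shape of the fix, sketched; the real switch in dump_fault_info()
      has more cases than shown here, and only the default label is what the
      commit adds:

      	static void dump_fault_info(struct pt_regs *regs)
      	{
      		unsigned long asce;

      		switch (get_fault_type(regs)) {
      		case USER_FAULT:
      			asce = S390_lowcore.user_asce;
      			break;
      		case KERNEL_FAULT:
      			asce = S390_lowcore.kernel_asce;
      			break;
      		default:
      			/* Every enum value is handled above; tell the
      			 * compiler so 'asce' is provably initialized. */
      			unreachable();
      		}
      		/* ... the switch (asce & _ASCE_TYPE_MASK) follows ... */
      	}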
  10. 15 May, 2019 3 commits
    • mm/memory_hotplug: make __remove_pages() and arch_remove_memory() never fail · ac5c9426
      David Hildenbrand authored
      All callers of arch_remove_memory() ignore errors, and we should really
      try to remove any errors from the memory removal path.  No errors are
      reported from __remove_pages() any more.  The s390x code now BUG()s in
      case arch_remove_memory() is triggered; we may implement that properly
      later.  The powerpc code now WARNs in case it fails to remove the
      section mapping, which is better than ignoring the error completely
      right now.
      
      Link: http://lkml.kernel.org/r/20190409100148.24703-5-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Oscar Salvador <osalvador@suse.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Stefan Agner <stefan@agner.ch>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Arun KS <arunks@codeaurora.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Andrew Banman <andrew.banman@hpe.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Mike Travis <mike.travis@hpe.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ac5c9426
    • mm, memory_hotplug: provide a more generic restrictions for memory hotplug · 940519f0
      Michal Hocko authored
      arch_add_memory() and __add_pages() take a want_memblock parameter
      which controls whether the newly added memory should get the sysfs
      memblock user API (e.g.  ZONE_DEVICE users do not want/need this
      interface).  Some callers even want to control where we allocate the
      memmap from, by configuring an altmap.

      Add a more generic hotplug context for arch_add_memory() and
      __add_pages().  struct mhp_restrictions contains a flags field carrying
      additional features to be enabled by the memory hotplug (currently only
      MHP_MEMBLOCK_API) and an altmap for the alternative memmap allocator.
      
      This patch shouldn't introduce any functional change.
      
      [akpm@linux-foundation.org: build fix]
      Link: http://lkml.kernel.org/r/20190408082633.2864-3-osalvador@suse.de
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Oscar Salvador <osalvador@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      940519f0
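      The new context, as a sketch consistent with the description above
      (placement in memory_hotplug.h assumed):

      	/*
      	 * Restrictions for the memory hotplug:
      	 *  - MHP_MEMBLOCK_API: create the sysfs memblock user API
      	 *  - altmap: alternative allocator for the memmap array
      	 */
      	#define MHP_MEMBLOCK_API	(1 << 0)

      	struct mhp_restrictions {
      		unsigned long flags;
      		struct vmem_altmap *altmap;
      	};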
    • initramfs: poison freed initrd memory · f94f7434
      Christoph Hellwig authored
      Various architectures including x86 poison the freed initrd memory.  Do
      the same in the generic free_initrd_mem implementation and switch a few
      more architectures that are identical to the generic code over to it now.
      
      Link: http://lkml.kernel.org/r/20190213174621.29297-9-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>	[m68k]
      Cc: Steven Price <steven.price@arm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f94f7434
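      A sketch of the generic implementation after this change; the
      POISON_FREE_INITMEM constant name is assumed from the same series:

      	void __weak free_initrd_mem(unsigned long start, unsigned long end)
      	{
      		/* Poison the range, then return it to the page allocator. */
      		free_reserved_area((void *)start, (void *)end,
      				   POISON_FREE_INITMEM, "initrd");
      	}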
  11. 08 May, 2019 1 commit
  12. 02 May, 2019 1 commit
    • s390/unwind: introduce stack unwind API · 78c98f90
      Martin Schwidefsky authored
      Rework the dump_trace() stack unwinder interface to support different
      unwinding algorithms. The new interface looks like this:
      
      	struct unwind_state state;
      	unwind_for_each_frame(&state, task, regs, start_stack)
      		do_something(state.sp, state.ip, state.reliable);
      
      The unwind_bc.c file contains the implementation for the classic
      back-chain unwinder.
      
      One positive side effect of the new code is that it now handles ftraced
      functions gracefully: it prints the real name of the return function
      instead of 'return_to_handler'.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      78c98f90
  13. 29 Apr, 2019 2 commits
    • locking/lockdep: check for freed initmem in static_obj() · 7a5da02d
      Gerald Schaefer authored
      The following warning occurred on s390:
      WARNING: CPU: 0 PID: 804 at kernel/locking/lockdep.c:1025 lockdep_register_key+0x30/0x150
      
      This is because the check in static_obj() assumes that all memory within
      [_stext, _end] belongs to static objects, which at least for s390 isn't
      true. The init section is also part of this range, and freeing it allows
      the buddy allocator to allocate memory from it. We have virt == phys for
      the kernel on s390, so that such allocations would then have addresses
      within the range [_stext, _end].
      
      To fix this, introduce arch_is_kernel_initmem_freed(), similar to
      arch_is_kernel_text/data(), and add it to the checks in static_obj().
      This will always return 0 on architectures that do not define
      arch_is_kernel_initmem_freed. On s390, it will return 1 if initmem has
      been freed and the address is in the range [__init_begin, __init_end].
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      7a5da02d
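      Sketched from the description: the generic fallback plus the new check
      wired into static_obj() (the s390 variant's internals are paraphrased,
      not quoted):

      	/* asm-generic fallback: initmem is never reported as freed. */
      	#ifndef arch_is_kernel_initmem_freed
      	static inline int arch_is_kernel_initmem_freed(unsigned long addr)
      	{
      		return 0;
      	}
      	#endif

      	/* In static_obj(): an address in freed initmem is not static. */
      	if (arch_is_kernel_initmem_freed(addr))
      		return 0;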
    • s390/kernel: introduce .dma sections · a80313ff
      Gerald Schaefer authored
      With a relocatable kernel that could reside at any place in memory, code
      and data that has to stay below 2 GB needs special handling.
      
      This patch introduces .dma sections for such text, data and ex_table.
      The sections will be part of the decompressor kernel, so they will not
      be relocated and stay below 2 GB. Their location is passed over to the
      decompressed / relocated kernel via the .boot.preserved.data section.
      
      The duald and aste for control register setup also need to stay below
      2 GB, so move the setup code from arch/s390/kernel/head64.S to
      arch/s390/boot/head.S. The duct and linkage_stack could reside above
      2 GB, but their content has to be preserved for the decompressed kernel,
      so they are also moved into the .dma section.
      
      The start and end address of the .dma sections is added to vmcoreinfo,
      for crash support, to help debugging in case the kernel crashed there.
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Reviewed-by: Philipp Rudo <prudo@linux.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      a80313ff
  14. 23 Apr, 2019 2 commits
    • s390/mm: convert to the generic get_user_pages_fast code · 1a42010c
      Martin Schwidefsky authored
      Define gup_fast_permitted() to check against the asce_limit of the mm
      attached to the current task, then replace the s390-specific gup code
      with the generic implementation in mm/gup.c.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      1a42010c
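      A sketch of the hook, reconstructed from the description (the signature
      follows the generic gup code of that era):

      	#define gup_fast_permitted gup_fast_permitted
      	static inline bool gup_fast_permitted(unsigned long start, int nr_pages)
      	{
      		unsigned long len = (unsigned long)nr_pages << PAGE_SHIFT;
      		unsigned long end = start + len;

      		/* Only walk ranges the attached mm can actually map. */
      		return end >= start &&
      		       end <= current->mm->context.asce_limit;
      	}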
    • s390/mm: make the pxd_offset functions more robust · d1874a0c
      Martin Schwidefsky authored
      Change the way how pgd_offset, p4d_offset, pud_offset and pmd_offset
      walk the page tables. pgd_offset now always calculates the index for
      the top-level page table and adds it to the pgd, this is either a
      segment table offset for a 2-level setup, a region-3 offset for 3-levels,
      region-2 offset for 4-levels, or a region-1 offset for a 5-level setup.
      The other three functions p4d_offset, pud_offset and pmd_offset will
      only add the respective offset if they dereference the passed pointer.
      
      With the new way of walking the page tables a sequence like this from
      mm/gup.c now works:
      
           pgdp = pgd_offset(current->mm, addr);
           pgd = READ_ONCE(*pgdp);
           p4dp = p4d_offset(&pgd, addr);
           p4d = READ_ONCE(*p4dp);
           pudp = pud_offset(&p4d, addr);
           pud = READ_ONCE(*pudp);
           pmdp = pmd_offset(&pud, addr);
           pmd = READ_ONCE(*pmdp);
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      d1874a0c
  15. 10 Apr, 2019 1 commit
  16. 03 Apr, 2019 1 commit
  17. 06 Mar, 2019 2 commits
  18. 01 Mar, 2019 1 commit
  19. 21 Feb, 2019 2 commits
  20. 07 Feb, 2019 1 commit
  21. 18 Jan, 2019 1 commit
  22. 29 Dec, 2018 3 commits
  23. 30 Nov, 2018 1 commit
    • s390: use common bust_spinlocks() · 5b39fc04
      Sergey Senozhatsky authored
      s390 is the only architecture still using its own bust_spinlocks()
      variant, while all other architectures seem to be fine with the common
      implementation.
      
      Heiko Carstens [1] said he would prefer s390 to use the common
      bust_spinlocks() as well:
        I did some code archaeology and this function is unchanged since ~17
        years. When it was introduced it was close to identical to the x86
        variant. All other architectures use the common code variant in the
        meantime. So if we change this I'd prefer that we switch s390 to the
        common code variant as well. Right now I can't see a reason for not
        doing that
      
      This patch removes s390 bust_spinlocks() and drops the weak attribute
      from the common bust_spinlocks() version.
      
      [1] lkml.kernel.org/r/20181025062800.GB4037@osiris
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      5b39fc04
  24. 27 Nov, 2018 1 commit
  25. 09 Nov, 2018 1 commit
    • s390/mm: Convert tlb_table_flush() to use call_rcu() · 0d4e68e2
      Paul E. McKenney authored
      Now that call_rcu()'s callback is not invoked until after all
      preempt-disable regions of code have completed (in addition to explicitly
      marked RCU read-side critical sections), call_rcu() can be used in place
      of call_rcu_sched().  This commit therefore makes that change.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: <linux-s390@vger.kernel.org>
      0d4e68e2
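      The change itself is a one-liner inside tlb_table_flush(); a sketch of
      the result, with the surrounding code paraphrased from the s390 pgalloc
      code of that time:

      	static void tlb_table_flush(struct mmu_gather *tlb)
      	{
      		struct mmu_table_batch **batch = &tlb->batch;

      		if (*batch) {
      			/* Was call_rcu_sched(); call_rcu() now also waits
      			 * for preempt-disabled regions to complete. */
      			call_rcu(&(*batch)->rcu, tlb_remove_table_rcu);
      			*batch = NULL;
      		}
      	}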
  26. 02 Nov, 2018 1 commit
    • s390/mm: fix mis-accounting of pgtable_bytes · e12e4044
      Martin Schwidefsky authored
      In case a fork or a clone system call fails in copy_process() and the
      error handling does the mmput() at the bad_fork_cleanup_mm label, the
      following warning message will appear on the console:
      
        BUG: non-zero pgtables_bytes on freeing mm: 16384
      
      The reason for that is the tricks we play with mm_inc_nr_puds() and
      mm_inc_nr_pmds() in init_new_context().
      
      A normal 64-bit process has 3 levels of page table, the p4d level and
      the pud level are folded. On process termination the free_pud_range()
      function in mm/memory.c will subtract 16KB from pgtable_bytes with a
      mm_dec_nr_puds() call, but there actually is not really a pud table.
      
      One issue with this is the fact that pgtable_bytes is usually off
      by a few kilobytes, but the more severe problem is that for a failed
      fork or clone the free_pgtables() function is not called. In this case
      there is no mm_dec_nr_puds() or mm_dec_nr_pmds() that go together with
      the mm_inc_nr_puds() and mm_inc_nr_pmds in init_new_context().
      The pgtable_bytes will be off by 16384 or 32768 bytes and we get the
      BUG message. The message itself is purely cosmetic, but annoying.
      
      To fix this, override the mm_pmd_folded, mm_pud_folded and
      mm_p4d_folded functions to check for the true size of the address
      space.
      Reported-by: Li Wang <liwang@redhat.com>
      Tested-by: Li Wang <liwang@redhat.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      e12e4044
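      A sketch of the override for one of the three helpers; the other two
      follow the same pattern with their own region sizes (constants per the
      s390 headers, treat as approximate):

      	#define mm_pud_folded(mm) mm_pud_folded(mm)
      	static inline bool mm_pud_folded(struct mm_struct *mm)
      	{
      		/* No real pud table unless the space exceeds _REGION2_SIZE. */
      		return mm->context.asce_limit <= _REGION2_SIZE;
      	}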
  27. 31 Oct, 2018 2 commits