1. 22 Aug 2019, 1 commit
  2. 17 Jul 2019, 1 commit
    • dma-direct: Force unencrypted DMA under SME for certain DMA masks · 9087c375
      By Tom Lendacky
      If a device doesn't support DMA to a physical address that includes the
      encryption bit (currently bit 47, so 48-bit DMA), then the DMA must
      occur to unencrypted memory. SWIOTLB is used to satisfy that requirement
      if an IOMMU is not active (i.e. not enabled, or configured in
      passthrough mode).
      
      However, commit fafadcd1 ("swiotlb: don't dip into swiotlb pool for
      coherent allocations") modified the coherent allocation support in
      SWIOTLB to use the DMA direct coherent allocation support. When an IOMMU
      is not active, this resulted in dma_alloc_coherent() failing for devices
      that didn't support DMA addresses that included the encryption bit.
      
      Addressing this requires changes to the force_dma_unencrypted() function
      in kernel/dma/direct.c. Since the function is now non-trivial and
      SME/SEV specific, update the DMA direct support to add an arch override
      for the force_dma_unencrypted() function. The arch override is selected
      when CONFIG_AMD_MEM_ENCRYPT is set. The arch override function resides in
      the arch/x86/mm/mem_encrypt.c file and forces unencrypted DMA when either
      SEV is active or SME is active and the device does not support DMA to
      physical addresses that include the encryption bit.
      
      Fixes: fafadcd1 ("swiotlb: don't dip into swiotlb pool for coherent allocations")
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      [hch: moved the force_dma_unencrypted declaration to dma-mapping.h,
            folded the s390 fix from Halil Pasic]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      9087c375
  3. 13 Jul 2019, 2 commits
  4. 15 Jun 2019, 2 commits
  5. 11 Jun 2019, 1 commit
    • docs: s390: convert docs to ReST and rename to *.rst · 8b4a503d
      By Mauro Carvalho Chehab
      Convert all text files with s390 documentation to ReST format.
      
      Tried to preserve the original document format as much as possible.
      Still, some of the files required some work in order for them to be
      readable both as plain text and after conversion to html.
      
      The conversion is actually:
        - add blank lines and indentation in order to identify paragraphs;
        - fix tables markups;
        - add some lists markups;
        - mark literal blocks;
        - adjust title markups.
      
      At its new index.rst, let's add a :orphan: while this is not linked to
      the main index.rst file, in order to avoid build warnings.
      Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      8b4a503d
  6. 07 Jun 2019, 2 commits
  7. 04 Jun 2019, 1 commit
  8. 15 May 2019, 3 commits
    • mm: memblock: make keeping memblock memory opt-in rather than opt-out · 350e88ba
      By Mike Rapoport
      Most architectures do not need the memblock memory after the page
      allocator is initialized, but only a few enable ARCH_DISCARD_MEMBLOCK
      in the arch Kconfig.
      
      Replacing ARCH_DISCARD_MEMBLOCK with ARCH_KEEP_MEMBLOCK and inverting
      the logic makes it clear which architectures actually use memblock
      after system initialization, and removes the need to add
      ARCH_DISCARD_MEMBLOCK to the architectures that are still missing it.
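      The inversion can be pictured as a condensed Kconfig sketch. The symbol names are the ones the commit adds and removes; which architectures select the new symbol is illustrative here:

```kconfig
# Before: discarding was opt-in, so most arches silently kept memblock
# around even though they no longer needed it.
#	config ARCH_DISCARD_MEMBLOCK
#		bool

# After: discarding is the default; only the few architectures that
# really use memblock after boot select the new symbol.
config ARCH_KEEP_MEMBLOCK
	bool

# e.g. in an arch Kconfig that still needs memblock post-boot:
#	select ARCH_KEEP_MEMBLOCK
```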
      
      Link: http://lkml.kernel.org/r/1556102150-32517-1-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      350e88ba
    • hugetlb: allow to free gigantic pages regardless of the configuration · 4eb0716e
      By Alexandre Ghiti
      On systems without CONTIG_ALLOC activated but that support gigantic
      pages, boot-time reserved gigantic pages cannot be freed at all.  This
      patch simply makes it possible to hand those pages back to the memory
      allocator.
      
      Link: http://lkml.kernel.org/r/20190327063626.18421-5-alex@ghiti.fr
      Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
      Acked-by: David S. Miller <davem@davemloft.net> [sparc]
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Andy Lutomirsky <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4eb0716e
    • mm: simplify MEMORY_ISOLATION && COMPACTION || CMA into CONTIG_ALLOC · 8df995f6
      By Alexandre Ghiti
      This condition gates the definition of alloc_contig_range(), so
      simplify it into a single, more accurately named option.
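      The rename amounts to a Kconfig change along these lines (a condensed sketch; the actual patch also updates every user of the old condition):

```kconfig
# Before: users spelled out the full condition everywhere.
#	depends on (MEMORY_ISOLATION && COMPACTION) || CMA

# After: one accurately named symbol captures when
# alloc_contig_range() is available.
config CONTIG_ALLOC
	def_bool (MEMORY_ISOLATION && COMPACTION) || CMA
```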
      
      Link: http://lkml.kernel.org/r/20190327063626.18421-4-alex@ghiti.fr
      Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
      Suggested-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andy Lutomirsky <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8df995f6
  9. 03 May 2019, 2 commits
  10. 29 Apr 2019, 3 commits
  11. 23 Apr 2019, 1 commit
  12. 11 Apr 2019, 1 commit
  13. 10 Apr 2019, 1 commit
  14. 03 Apr 2019, 2 commits
    • locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all archs · 390a0c62
      By Waiman Long
      Currently, we have two different implementations of rwsem:
      
       1) CONFIG_RWSEM_GENERIC_SPINLOCK (rwsem-spinlock.c)
       2) CONFIG_RWSEM_XCHGADD_ALGORITHM (rwsem-xadd.c)
      
      As we are moving to a single generic implementation based on
      rwsem-xadd.c, with no architecture-specific code needed, there is no
      point in keeping two different implementations of rwsem. In most
      cases, the performance of rwsem-spinlock.c will be worse. It also
      doesn't get all the performance tuning and optimizations that have
      been implemented in rwsem-xadd.c over the years.
      
      For simplification, we are going to remove rwsem-spinlock.c and make
      all architectures use a single implementation of rwsem - rwsem-xadd.c.
      
      All references to RWSEM_GENERIC_SPINLOCK and RWSEM_XCHGADD_ALGORITHM
      in the code are removed.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linux-riscv@lists.infradead.org
      Cc: linux-um@lists.infradead.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: nios2-dev@lists.rocketboards.org
      Cc: openrisc@lists.librecores.org
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      Link: https://lkml.kernel.org/r/20190322143008.21313-3-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      390a0c62
    • s390/tlb: Convert to generic mmu_gather · 9de7d833
      By Martin Schwidefsky
      No change in behavior intended.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: aneesh.kumar@linux.vnet.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: linux@armlinux.org.uk
      Cc: npiggin@gmail.com
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/20180918125151.31744-3-schwidefsky@de.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9de7d833
  15. 18 Jan 2019, 2 commits
  16. 21 Dec 2018, 1 commit
  17. 14 Dec 2018, 1 commit
  18. 06 Dec 2018, 1 commit
  19. 29 Nov 2018, 1 commit
  20. 23 Nov 2018, 2 commits
  21. 31 Oct 2018, 2 commits
  22. 09 Oct 2018, 4 commits
    • s390/kasan: add option for 4-level paging support · 5dff0381
      By Vasily Gorbik
      By default 3-level paging is used when the kernel is compiled with
      kasan support. Add a 4-level paging option to support systems with
      more than 3TB of physical memory and to cover 4-level paging specific
      code with kasan as well.
      Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      5dff0381
    • s390/kasan: enable stack and global variables access checks · 5e785963
      By Vasily Gorbik
      Defining KASAN_SHADOW_OFFSET in Kconfig enables instrumentation of
      stack and global variable memory accesses. gcc version 4.9.2 or newer
      is also required.
      Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      5e785963
    • s390/kasan: add initialization code and enable it · 42db5ed8
      By Vasily Gorbik
      Kasan needs 1/8 of kernel virtual address space to be reserved as the
      shadow area. And eventually it requires the shadow memory offset to be
      known at compile time (passed to the compiler when full instrumentation
      is enabled).  Any value picked as the shadow area offset for 3-level
      paging would eat up the identity mapping on 4-level paging (with a 1PB
      shadow area size). So, the kernel sticks to 3-level paging when kasan
      is enabled, and the 3TB border is picked as the shadow offset.  The
      memory layout is adjusted so that the physical memory border does not
      exceed KASAN_SHADOW_START and vmemmap does not go below
      KASAN_SHADOW_END.
      
      Due to the fact that on s390 paging is set up very late and to cover
      more code with kasan instrumentation, temporary identity mapping and
      final shadow memory are set up early. The shadow memory mapping is
      later carried over to init_mm.pgd during paging_init.
      
      For the needs of paging structures allocation and shadow memory
      population a primitive allocator is used, which simply chops off
      memory blocks from the end of the physical memory.
      
      Kasan currently doesn't track vmemmap and vmalloc areas.
      
      Current memory layout (for 3-level paging, 2GB physical memory).
      
      ---[ Identity Mapping ]---
      0x0000000000000000-0x0000000000100000
      ---[ Kernel Image Start ]---
      0x0000000000100000-0x0000000002b00000
      ---[ Kernel Image End ]---
      0x0000000002b00000-0x0000000080000000        2G <- physical memory border
      0x0000000080000000-0x0000030000000000     3070G PUD I
      ---[ Kasan Shadow Start ]---
      0x0000030000000000-0x0000030010000000      256M PMD RW X  <- shadow for 2G memory
      0x0000030010000000-0x0000037ff0000000   523776M PTE RO NX <- kasan zero ro page
      0x0000037ff0000000-0x0000038000000000      256M PMD RW X  <- shadow for 2G modules
      ---[ Kasan Shadow End ]---
      0x0000038000000000-0x000003d100000000      324G PUD I
      ---[ vmemmap Area ]---
      0x000003d100000000-0x000003e080000000
      ---[ vmalloc Area ]---
      0x000003e080000000-0x000003ff80000000
      ---[ Modules Area ]---
      0x000003ff80000000-0x0000040000000000        2G
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      42db5ed8
    • s390: add support for virtually mapped kernel stacks · ce3dc447
      By Martin Schwidefsky
      With virtually mapped kernel stacks the kernel stack overflow detection
      is now fault based: every stack has a guard page in the vmalloc space.
      The panic_stack is renamed to nodat_stack and is used for all functions
      that need to run without DAT, e.g. memcpy_real or do_start_kdump.
      
      The main effect is a reduction in the kernel image size as with vmap
      stacks the old style overflow checking that adds two instructions per
      function is not needed anymore. Result from bloat-o-meter:
      
      add/remove: 20/1 grow/shrink: 13/26854 up/down: 2198/-216240 (-214042)
      
      In regard to performance, the micro-benchmark for fork takes a hit of
      a few microseconds: allocating 4 pages in vmalloc space is more
      expensive compared to an order-2 page allocation. But with real
      workloads I could not find a noticeable difference.
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      ce3dc447
  23. 27 Sep 2018, 2 commits
  24. 16 Aug 2018, 1 commit