1. 12 Dec 2020, 1 commit
  2. 30 Oct 2020, 1 commit
  3. 26 Oct 2020, 1 commit
  4. 08 Sep 2020, 2 commits
  5. 17 Jun 2020, 1 commit
    • B
      x86/mm: Fix -Wmissing-prototypes warnings for arch/x86/mm/init.c · d5249bc7
      Authored by Benjamin Thiel
      Fix -Wmissing-prototypes warnings:
      
        arch/x86/mm/init.c:81:6:
        warning: no previous prototype for ‘x86_has_pat_wp’ [-Wmissing-prototypes]
        bool x86_has_pat_wp(void)
      
        arch/x86/mm/init.c:86:22:
        warning: no previous prototype for ‘pgprot2cachemode’ [-Wmissing-prototypes]
        enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
      
      by including the respective header containing prototypes. Also fix:
      
        arch/x86/mm/init.c:893:13:
        warning: no previous prototype for ‘mem_encrypt_free_decrypted_mem’ [-Wmissing-prototypes]
        void __weak mem_encrypt_free_decrypted_mem(void) { }
      
      by making it static inline for the !CONFIG_AMD_MEM_ENCRYPT case. This
      warning happens when CONFIG_AMD_MEM_ENCRYPT is not enabled (defconfig
      for example):
      
        ./arch/x86/include/asm/mem_encrypt.h:80:27:
        warning: inline function ‘mem_encrypt_free_decrypted_mem’ declared weak [-Wattributes]
        static inline void __weak mem_encrypt_free_decrypted_mem(void) { }
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      
      It's OK to convert it to a static inline because the function is used
      only on x86 and is not shared with other architectures, so drop the
      __weak too.
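The fix follows a common kernel header pattern: when the config option is off, the header supplies a static inline stub, so no out-of-line definition, separate prototype, or __weak override is needed. A minimal userspace sketch of that pattern (the config macro and function name mirror the commit, but the surrounding code is illustrative only):

```c
#include <assert.h>

/* Stand-in for CONFIG_AMD_MEM_ENCRYPT; define it to take the
 * "feature enabled" path instead. */
/* #define CONFIG_AMD_MEM_ENCRYPT 1 */

#ifdef CONFIG_AMD_MEM_ENCRYPT
/* Feature on: the real out-of-line definition lives in a .c file. */
void mem_encrypt_free_decrypted_mem(void);
#else
/* Feature off: a static inline stub compiles away entirely and needs
 * neither a separate prototype nor the __weak override mechanism. */
static inline void mem_encrypt_free_decrypted_mem(void) { }
#endif

/* Callers stay unconditional either way. */
static int late_initcall_free(void)
{
        mem_encrypt_free_decrypted_mem();
        return 0;
}
```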
      
       [ bp: Massage and adjust __weak comments while at it. ]
      Signed-off-by: Benjamin Thiel <b.thiel@posteo.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lkml.kernel.org/r/20200606122629.2720-1-b.thiel@posteo.de
      d5249bc7
  6. 26 Apr 2020, 1 commit
  7. 22 Nov 2019, 1 commit
    • N
      dma-mapping: treat dev->bus_dma_mask as a DMA limit · a7ba70f1
      Authored by Nicolas Saenz Julienne
      Using a mask to represent bus DMA constraints has a set of limitations.
      The biggest one being it can only hold a power of two (minus one). The
      DMA mapping code is already aware of this and treats dev->bus_dma_mask
      as a limit. This quirk is already used by some architectures although
      still rare.
      
      With the introduction of the Raspberry Pi 4 we've found a new contender
      for the use of bus DMA limits, as its PCIe bus can only address the
      lower 3GB of memory (of a total of 4GB). This is impossible to represent
      with a mask. To make things worse, the device-tree code rounds
      non-power-of-two bus DMA limits up to the next power of two, which is
      unacceptable in this case.
      
      In light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all
      over the tree and treat it as such. Note that dev->bus_dma_limit should
      contain the highest accessible DMA address.
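The distinction can be sketched in plain C: a limit is simply the highest reachable address, so any bound is representable, while a mask must have the form 2^n - 1. The helpers and values below are illustrative, not the kernel's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A DMA "limit" is the highest addressable byte; any transfer whose last
 * byte is at or below it is reachable. */
static bool addr_within_limit(uint64_t addr, uint64_t size, uint64_t limit)
{
        return addr + size - 1 <= limit;
}

/* Rounding a limit up to the nearest mask overstates the reachable range:
 * a 3 GB - 1 limit becomes a 4 GB - 1 mask, which is exactly the
 * device-tree rounding behaviour the commit calls unacceptable. */
static uint64_t limit_to_mask(uint64_t limit)
{
        uint64_t mask = ~0ULL;

        while (mask > 1 && (mask >> 1) >= limit)
                mask >>= 1;
        return mask;    /* smallest (2^n - 1) that is >= limit */
}
```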
      Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  8. 09 Aug 2019, 2 commits
  9. 17 Jul 2019, 2 commits
    • D
      x86/mm: Free sme_early_buffer after init · ffdb07f3
      Authored by David Rientjes
      The contents of sme_early_buffer should be cleared after
      __sme_early_enc_dec() because it is used to move encrypted and decrypted
      data, but since __sme_early_enc_dec() is __init, this buffer can simply
      be freed after init.
      
      This saves a page that is otherwise unreferenced after init.
      Reported-by: Cfir Cohen <cfir@google.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1907101318170.197432@chino.kir.corp.google.com
    • T
      dma-direct: Force unencrypted DMA under SME for certain DMA masks · 9087c375
      Authored by Tom Lendacky
      If a device doesn't support DMA to a physical address that includes the
      encryption bit (currently bit 47, so 48-bit DMA), then the DMA must
      occur to unencrypted memory. SWIOTLB is used to satisfy that requirement
      if an IOMMU is not active (enabled or configured in passthrough mode).
      
      However, commit fafadcd1 ("swiotlb: don't dip into swiotlb pool for
      coherent allocations") modified the coherent allocation support in
      SWIOTLB to use the DMA direct coherent allocation support. When an IOMMU
      is not active, this resulted in dma_alloc_coherent() failing for devices
      that didn't support DMA addresses that included the encryption bit.
      
      Addressing this requires changes to the force_dma_unencrypted() function
      in kernel/dma/direct.c. Since the function is now non-trivial and
      SME/SEV specific, update the DMA direct support to add an arch override
      for the force_dma_unencrypted() function. The arch override is selected
      when CONFIG_AMD_MEM_ENCRYPT is set. The arch override function resides in
      the arch/x86/mm/mem_encrypt.c file and forces unencrypted DMA when either
      SEV is active or SME is active and the device does not support DMA to
      physical addresses that include the encryption bit.
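The override's decision logic can be sketched in userspace C. The flag variables and the bit-47 constant below are hypothetical stand-ins for the real sme_active()/sev_active() state, not the kernel's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for struct device: only the DMA mask matters here. */
struct device {
        uint64_t dma_mask;
};

/* Hypothetical state flags standing in for sev_active()/sme_active(). */
static bool sev_is_active = false;
static bool sme_is_active = true;
static const uint64_t sme_enc_bit = 1ULL << 47;  /* encryption bit (bit 47) */

/* Sketch of the x86 override's policy: force unencrypted DMA when SEV is
 * active, or when SME is active and the device's DMA mask cannot cover
 * addresses that include the encryption bit. */
static bool force_dma_unencrypted(struct device *dev)
{
        if (sev_is_active)
                return true;
        if (sme_is_active && dev->dma_mask < sme_enc_bit)
                return true;
        return false;
}
```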
      
      Fixes: fafadcd1 ("swiotlb: don't dip into swiotlb pool for coherent allocations")
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      [hch: moved the force_dma_unencrypted declaration to dma-mapping.h,
            fold the s390 fix from Halil Pasic]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  10. 19 Jun 2019, 1 commit
  11. 09 May 2019, 1 commit
    • B
      x86/mm: Do not use set_{pud, pmd}_safe() when splitting a large page · eccd9064
      Authored by Brijesh Singh
      The commit
      
        0a9fe8ca ("x86/mm: Validate kernel_physical_mapping_init() PTE population")
      
      triggers this warning in SEV guests:
      
        WARNING: CPU: 0 PID: 0 at arch/x86/include/asm/pgalloc.h:87 phys_pmd_init+0x30d/0x386
        Call Trace:
         kernel_physical_mapping_init+0xce/0x259
         early_set_memory_enc_dec+0x10f/0x160
         kvm_smp_prepare_boot_cpu+0x71/0x9d
         start_kernel+0x1c9/0x50b
         secondary_startup_64+0xa4/0xb0
      
      A SEV guest calls kernel_physical_mapping_init() to clear the encryption
      mask from an existing mapping. While doing so, it also splits large
      pages into smaller ones.
      
      To split a page, kernel_physical_mapping_init() allocates a new page and
      updates the existing entry. The set_{pud,pmd}_safe() helpers trigger a
      warning when updating an entry with a page in the present state.
      
      Add a new kernel_physical_mapping_change() helper which uses the
      non-safe variants of set_{pmd,pud,p4d}() and {pmd,pud,p4d}_populate()
      routines when updating the entry.
      
      Since kernel_physical_mapping_change() may replace an existing
      entry with a new entry, the caller is responsible for flushing
      the TLB at the end. Change early_set_memory_enc_dec() to use
      kernel_physical_mapping_change() when it wants to clear the memory
      encryption mask from the page table entry.
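The difference between the safe and non-safe helpers can be modelled in a few lines. The types and the warning counter below are simplified stand-ins for the pgalloc.h machinery, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t val; } pmd_t;
#define PMD_PRESENT 0x1ULL

static int warn_count;  /* stands in for the WARN_ON in pgalloc.h */

/* Non-safe variant: unconditionally writes the entry. */
static void set_pmd(pmd_t *pmdp, pmd_t pmd)
{
        pmdp->val = pmd.val;
}

/* Safe variant: warns when it would overwrite an already-present entry,
 * which is exactly what splitting a large page must do. */
static void set_pmd_safe(pmd_t *pmdp, pmd_t pmd)
{
        if (pmdp->val & PMD_PRESENT)
                warn_count++;
        set_pmd(pmdp, pmd);
}
```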
      
       [ bp:
         - massage commit message.
         - flesh out comment according to dhansen's request.
         - align function arguments at opening brace. ]
      
      Fixes: 0a9fe8ca ("x86/mm: Validate kernel_physical_mapping_init() PTE population")
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190417154102.22613-1-brijesh.singh@amd.com
  12. 14 Dec 2018, 1 commit
  13. 16 Sep 2018, 1 commit
    • B
      x86/mm: Add .bss..decrypted section to hold shared variables · b3f0907c
      Authored by Brijesh Singh
      kvmclock defines a few static variables which are shared with the
      hypervisor during kvmclock initialization.
      
      When SEV is active, memory is encrypted with a guest-specific key, and
      if the guest OS wants to share the memory region with the hypervisor
      then it must clear the C-bit before sharing it.
      
      Currently, we use kernel_physical_mapping_init() to split large pages
      before clearing the C-bit on shared pages. But it fails when called from
      the kvmclock initialization (mainly because the memblock allocator is
      not ready that early during boot).
      
      Add a __bss_decrypted section attribute which can be used when defining
      such shared variables. The so-defined variables will be placed in the
      .bss..decrypted section. This section will be mapped with C=0 early
      during boot.
      
      The .bss..decrypted section occupies a big chunk of memory that is
      unused when memory encryption is not active, so free it in that case.
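The attribute itself is a thin wrapper over the compiler's section placement. A minimal sketch of the pattern (the struct and variable names are illustrative, not the kvmclock definitions):

```c
#include <assert.h>

/* Tag a variable so the linker collects it into .bss..decrypted, which
 * early boot code can then map with the C-bit clear. */
#define __bss_decrypted __attribute__((section(".bss..decrypted")))

struct pvclock_wall_clock { unsigned int version; };

static struct pvclock_wall_clock wall_clock __bss_decrypted;

static unsigned int read_wall_clock_version(void)
{
        return wall_clock.version;  /* zero-initialized, like normal .bss */
}
```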
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: kvm@vger.kernel.org
      Link: https://lkml.kernel.org/r/1536932759-12905-2-git-send-email-brijesh.singh@amd.com
  14. 20 Mar 2018, 5 commits
  15. 13 Feb 2018, 1 commit
  16. 21 Jan 2018, 1 commit
  17. 16 Jan 2018, 4 commits
  18. 10 Jan 2018, 1 commit
    • C
      dma-mapping: move swiotlb arch helpers to a new header · ea8c64ac
      Authored by Christoph Hellwig
      phys_to_dma, dma_to_phys and dma_capable are helpers published by
      architecture code for use of swiotlb and xen-swiotlb only.  Drivers are
      not supposed to use these directly, but use the DMA API instead.
      
      Move these to a new asm/dma-direct.h helper, included by a
      linux/dma-direct.h wrapper that provides the default linear mapping
      unless the architecture wants to override it.
      
      In the MIPS case the existing dma-coherent.h is reused for now as
      untangling it will take a bit of work.
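The default linear translation that the linux/dma-direct.h wrapper supplies can be sketched as an identity mapping. This is a simplification for illustration: the real helpers also take a struct device argument, and masks live inside the device structure:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;
typedef uint64_t phys_addr_t;

/* Default linear mapping: bus addresses equal physical addresses unless
 * the architecture overrides these helpers in asm/dma-direct.h. */
static inline dma_addr_t phys_to_dma(phys_addr_t paddr)
{
        return (dma_addr_t)paddr;
}

static inline phys_addr_t dma_to_phys(dma_addr_t daddr)
{
        return (phys_addr_t)daddr;
}

/* dma_capable asks whether a bus address range fits under a device mask
 * (the mask is simplified to a plain integer here). */
static inline int dma_capable(uint64_t dma_mask, dma_addr_t addr, size_t size)
{
        return addr + size - 1 <= dma_mask;
}
```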
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
  19. 18 Dec 2017, 1 commit
    • T
      x86/mm: Unbreak modules that use the DMA API · 9d5f38ba
      Authored by Tom Lendacky
      Commit d8aa7eea ("x86/mm: Add Secure Encrypted Virtualization (SEV)
      support") changed sme_active() from an inline function that referenced
      sme_me_mask to a non-inlined function in order to make the sev_enabled
      variable a static variable.  This function was marked EXPORT_SYMBOL_GPL
      because at the time the patch was submitted, sme_me_mask was marked
      EXPORT_SYMBOL_GPL.
      
      Commit 87df2617 ("x86/mm: Unbreak modules that rely on external
      PAGE_KERNEL availability") changed the sme_me_mask variable from
      EXPORT_SYMBOL_GPL to EXPORT_SYMBOL, allowing external modules
      to build with CONFIG_AMD_MEM_ENCRYPT=y.  Now, however, with sev_active()
      no longer an inline function and marked as EXPORT_SYMBOL_GPL, external
      modules that use the DMA API are once again broken in 4.15. Since the DMA
      API is meant to be used by external modules, this needs to be changed.
      
      Change the sme_active() and sev_active() functions from EXPORT_SYMBOL_GPL
      to EXPORT_SYMBOL.
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Link: https://lkml.kernel.org/r/20171215162011.14125.7113.stgit@tlendack-t1.amdoffice.net
  20. 09 Nov 2017, 1 commit
    • J
      x86/mm: Unbreak modules that rely on external PAGE_KERNEL availability · 87df2617
      Authored by Jiri Kosina
      Commit 7744ccdb ("x86/mm: Add Secure Memory Encryption (SME)
      support") as a side-effect made PAGE_KERNEL all of a sudden unavailable
      to modules which can't make use of EXPORT_SYMBOL_GPL() symbols.
      
      This is because once SME is enabled, sme_me_mask (which is introduced as
      EXPORT_SYMBOL_GPL) makes its way to PAGE_KERNEL through _PAGE_ENC,
      causing imminent build failure for all the modules which make use of all
      the EXPORT-SYMBOL()-exported API (such as vmap(), __vmalloc(),
      remap_pfn_range(), ...).
      
      Exporting (as EXPORT_SYMBOL()) interfaces (and having done so for ages)
      that take pgprot_t argument, while making it impossible to -- all of a
      sudden -- pass PAGE_KERNEL to it, feels rather inconsistent.
      
      Restore the original behavior and make it possible to pass PAGE_KERNEL
      to all its EXPORT_SYMBOL() consumers.
      
      [ This is all so not wonderful. We shouldn't need that "sme_me_mask"
        access at all in all those places that really don't care about that
        level of detail, and just want _PAGE_KERNEL or whatever.
      
        We have some similar issues with _PAGE_CACHE_WP and _PAGE_NOCACHE,
        both of which hide a "cachemode2protval()" call, and which also ends
        up using another EXPORT_SYMBOL(), but at least that only triggers for
        the much more rare cases.
      
        Maybe we could move these dynamic page table bits to be generated much
        deeper down in the VM layer, instead of hiding them in the macros that
        everybody uses.
      
        So this all would merit some cleanup. But not today.   - Linus ]
      
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Despised-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  21. 07 Nov 2017, 6 commits
  22. 30 Sep 2017, 1 commit
  23. 07 Sep 2017, 1 commit
  24. 19 Jul 2017, 1 commit
    • T
      x86/mm: Add support to make use of Secure Memory Encryption · aca20d54
      Authored by Tom Lendacky
      Add support to check if SME has been enabled and if memory encryption
      should be activated (checking of command line option based on the
      configuration of the default state).  If memory encryption is to be
      activated, then the encryption mask is set and the kernel is encrypted
      "in place."
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Toshimitsu Kani <toshi.kani@hpe.com>
      Cc: kasan-dev@googlegroups.com
      Cc: kvm@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-efi@vger.kernel.org
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/5f0da2fd4cce63f556117549e2c89c170072209f.1500319216.git.thomas.lendacky@amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  25. 18 Jul 2017, 1 commit
    • T
      x86/mm: Add support to encrypt the kernel in-place · 6ebcb060
      Authored by Tom Lendacky
      Add the support to encrypt the kernel in-place. This is done by creating
      new page mappings for the kernel - a decrypted write-protected mapping
      and an encrypted mapping. The kernel is encrypted by copying it through
      a temporary buffer.
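The copy-through-a-buffer scheme can be sketched conceptually: read each chunk, transform it in a temporary buffer, and write it back, so source and destination may be the same range. Real SME does the read and write through two different page mappings (a decrypted write-protected one and an encrypted one); a byte-wise XOR stands in for the hardware encryption in this illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Conceptual sketch only, not kernel code: "encrypt" a range in place
 * by bouncing each chunk through a temporary buffer. */
static void sme_encrypt_range(uint8_t *data, size_t len, uint8_t key)
{
        uint8_t buf[64];        /* temporary bounce buffer */
        size_t off = 0;

        while (off < len) {
                size_t n = len - off > sizeof(buf) ? sizeof(buf) : len - off;

                memcpy(buf, data + off, n);     /* read: "decrypted" view */
                for (size_t i = 0; i < n; i++)
                        buf[i] ^= key;          /* stand-in for encryption */
                memcpy(data + off, buf, n);     /* write: "encrypted" view */
                off += n;
        }
}
```

Applying the transform twice with the same key restores the original bytes, which is handy for sanity-checking the sketch.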
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Toshimitsu Kani <toshi.kani@hpe.com>
      Cc: kasan-dev@googlegroups.com
      Cc: kvm@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-efi@vger.kernel.org
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/c039bf9412ef95e1e6bf4fdf8facab95e00c717b.1500319216.git.thomas.lendacky@amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>