1. 13 Jul 2021, 2 commits
  2. 12 Jul 2021, 4 commits
  3. 11 Jul 2021, 1 commit
  4. 09 Jul 2021, 10 commits
    • mm/mremap: allow arch runtime override · 3bbda69c
      By Aneesh Kumar K.V
      Patch series "Speedup mremap on ppc64", v8.
      
      This patchset enables MOVE_PMD/MOVE_PUD support on power.  This requires
      the platform to support updating higher-level page tables without
      updating page table entries.  It also needs to invalidate the Page Walk
      Cache on architectures that have one.
      
      This patch (of 3):
      
      Architectures like ppc64 support faster mremap only with radix
      translation.  Hence, allow a runtime check for fast-mremap support.
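
      For illustration, the generic fallback added by this patch could look
      roughly like the following (a sketch: the naming follows the
      HAVE_MOVE_PMD/HAVE_MOVE_PUD config options, and an architecture such as
      ppc64 overrides the hook to check for radix translation at runtime):

      	#ifndef arch_supports_page_table_move
      	#define arch_supports_page_table_move arch_supports_page_table_move
      	static inline bool arch_supports_page_table_move(void)
      	{
      		/* By default, fast mremap is a compile-time decision. */
      		return IS_ENABLED(CONFIG_HAVE_MOVE_PMD) ||
      			IS_ENABLED(CONFIG_HAVE_MOVE_PUD);
      	}
      	#endif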
      
      Link: https://lkml.kernel.org/r/20210616045735.374532-1-aneesh.kumar@linux.ibm.com
      Link: https://lkml.kernel.org/r/20210616045735.374532-2-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mremap: hold the rmap lock in write mode when moving page table entries. · 97113eb3
      By Aneesh Kumar K.V
      To avoid a race between the rmap walk and mremap, mremap calls
      take_rmap_locks().  The lock is taken to ensure that the rmap walk
      doesn't miss a page table entry due to PTE moves via move_page_tables().
      The kernel further optimizes this lock: if the newly added vma will be
      found after the old vma during the rmap walk, the lock is not taken.
      This is because the rmap walk finds the vmas in the same order, and if
      the page table entry isn't found attached to the older vma, it will be
      found via the new vma, which is iterated later.
      
      As explained in commit eb66ae03 ("mremap: properly flush TLB before
      releasing the page"), mremap is special in that it doesn't take
      ownership of the page.  The optimized version for PUD/PMD-aligned
      mremap also doesn't hold the ptl lock.  This can result in stale TLB
      entries, as shown below.

      This patch updates the rmap locking requirement in mremap to handle the
      race condition, explained below, that arises with optimized mremap::
      
      Optimized PMD move
      
          CPU 1                           CPU 2                                   CPU 3
      
          mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one
      
          mmap_write_lock_killable()
      
                                          addr = old_addr
                                          lock(pte_ptl)
          lock(pmd_ptl)
          pmd = *old_pmd
          pmd_clear(old_pmd)
          flush_tlb_range(old_addr)
      
          *new_pmd = pmd
                                                                                  *new_addr = 10; and fills
                                                                                  TLB with new addr
                                                                                  and old pfn
      
          unlock(pmd_ptl)
                                          ptep_clear_flush()
                                          old pfn is free.
                                                                                  Stale TLB entry
      
      The optimized PUD move suffers from a similar race.  Both of the above
      race conditions can be fixed by forcing the mremap path to take the
      rmap lock.
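
      Concretely, this means the optimized PMD/PUD-level move paths pass
      need_rmap_locks unconditionally; a simplified sketch of the call site
      in mm/mremap.c:

      	/*
      	 * Sketch: force taking the rmap locks for the PMD-level move;
      	 * move_pgt_entry() calls take_rmap_locks(vma) when the flag is set.
      	 */
      	moved = move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr,
      			       old_pmd, new_pmd, /* need_rmap_locks = */ true);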
      
      Link: https://lkml.kernel.org/r/20210616045239.370802-7-aneesh.kumar@linux.ibm.com
      Fixes: 2c91bd4a ("mm: speed up mremap by 20x on large regions")
      Fixes: c49dd340 ("mm: speedup mremap on 1GB or larger regions")
      Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mremap: use pmd/pud_populate to update page table entries · 0881ace2
      By Aneesh Kumar K.V
      pmd/pud_populate is the right interface to use to set the respective
      page table entries.  Some architectures, like ppc64, assume that
      set_pmd/pud_at can only be used to set a hugepage PTE.  Since we are
      not setting up a hugepage PTE here, use the pmd/pud_populate interface.
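
      In move_normal_pmd()/move_normal_pud() this amounts to replacing the
      set_*_at() calls with the populate interface; a sketch (assuming the
      pmd_pgtable()/pud_pgtable() helpers introduced alongside this series):

      	/* sketch: was set_pmd_at(mm, new_addr, new_pmd, pmd); */
      	pmd_populate(mm, new_pmd, pmd_pgtable(pmd));

      	/* sketch: was set_pud_at(mm, new_addr, new_pud, pud); */
      	pud_populate(mm, new_pud, pud_pgtable(pud));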
      
      Link: https://lkml.kernel.org/r/20210616045239.370802-6-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mremap: don't enable optimized PUD move if page table levels is 2 · d6655dff
      By Aneesh Kumar K.V
      With a two-level page table, don't enable move_normal_pud.
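
      A sketch of the resulting guard around move_normal_pud() in
      mm/mremap.c (signature abridged):

      	#if CONFIG_PGTABLE_LEVELS > 2
      	static bool move_normal_pud(struct vm_area_struct *vma,
      			unsigned long old_addr, unsigned long new_addr,
      			pud_t *old_pud, pud_t *new_pud);	/* the real move */
      	#else
      	static inline bool move_normal_pud(struct vm_area_struct *vma,
      			unsigned long old_addr, unsigned long new_addr,
      			pud_t *old_pud, pud_t *new_pud)
      	{
      		return false;	/* never attempt a PUD-level move */
      	}
      	#endif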
      
      Link: https://lkml.kernel.org/r/20210616045239.370802-5-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mremap: convert huge PUD move to separate helper · 7d846db7
      By Aneesh Kumar K.V
      With TRANSPARENT_HUGEPAGE_PUD enabled, the kernel can encounter huge
      PUD entries.  Add a helper to move huge PUD entries on mremap().

      This will be used by a later patch to optimize mremap of
      PUD_SIZE-aligned, level-4 PTE-mapped addresses.

      This also makes sure we support mremap on huge PUD entries even with
      CONFIG_HAVE_MOVE_PUD disabled.
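
      Roughly, the mremap dispatch gains an HPAGE_PUD case (a simplified
      sketch of move_pgt_entry() in mm/mremap.c; the helper itself lives
      behind the transparent-hugepage-PUD configuration):

      	case HPAGE_PUD:
      		/* sketch: hand huge PUD entries to the dedicated helper */
      		moved = move_huge_pud(vma, old_addr, new_addr,
      				      old_entry, new_entry);
      		break;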
      
      [aneesh.kumar@linux.ibm.com: fix build failure with clang-10]
        Link: https://lore.kernel.org/lkml/YMuOSnJsL9qkxweY@archlinux-ax161
        Link: https://lkml.kernel.org/r/20210619134310.89098-1-aneesh.kumar@linux.ibm.com
      
      Link: https://lkml.kernel.org/r/20210616045239.370802-4-aneesh.kumar@linux.ibm.com
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: add setup_initial_init_mm() helper · 5748fbc5
      By Kefeng Wang
      Patch series "init_mm: cleanup ARCH's text/data/brk setup code", v3.
      
      Add the setup_initial_init_mm() helper, then use it to clean up the
      text, data and brk setup code.
      
      This patch (of 15):
      
      Add a setup_initial_init_mm() helper to set up kernel text, data and brk.
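
      The helper itself is small; a sketch consistent with the description
      above (the fields live in init_mm, the kernel's struct mm_struct):

      	void setup_initial_init_mm(void *start_code, void *end_code,
      				   void *end_data, void *brk)
      	{
      		init_mm.start_code = (unsigned long)start_code;
      		init_mm.end_code = (unsigned long)end_code;
      		init_mm.end_data = (unsigned long)end_data;
      		init_mm.brk = (unsigned long)brk;
      	}

      Arch code can then replace four open-coded assignments with a single
      call such as setup_initial_init_mm(_stext, _etext, _edata, _end).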
      
      Link: https://lkml.kernel.org/r/20210608083418.137226-1-wangkefeng.wang@huawei.com
      Link: https://lkml.kernel.org/r/20210608083418.137226-2-wangkefeng.wang@huawei.com
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • PM: hibernate: disable when there are active secretmem users · 9a436f8f
      By Mike Rapoport
      It is unsafe to allow secretmem areas to be saved to the hibernation
      snapshot, as they would be visible after resume, which would
      essentially defeat the purpose of secret memory mappings.
      
      Prevent hibernation whenever there are active secret memory users.
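
      A sketch of the guard, assuming a secretmem_active() helper backed by
      a refcount of secretmem users and a check in hibernation_available()
      (the exact condition in kernel/power/hibernate.c may differ):

      	/* mm/secretmem.c, sketch */
      	static atomic_t secretmem_users;

      	bool secretmem_active(void)
      	{
      		return !!atomic_read(&secretmem_users);
      	}

      	/* kernel/power/hibernate.c, sketch */
      	bool hibernation_available(void)
      	{
      		return nohibernate == 0 &&
      		       !security_locked_down(LOCKDOWN_HIBERNATION) &&
      		       !secretmem_active();
      	}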
      
      Link: https://lkml.kernel.org/r/20210518072034.31572-6-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce memfd_secret system call to create "secret" memory areas · 1507f512
      By Mike Rapoport
      Introduce "memfd_secret" system call with the ability to create memory
      areas visible only in the context of the owning process and not mapped not
      only to other processes but in the kernel page tables as well.
      
      The secretmem feature is off by default and the user must explicitly
      enable it at boot time.
      
      Once secretmem is enabled, the user will be able to create a file
      descriptor using the memfd_secret() system call.  The memory areas
      created by mmap() calls from this file descriptor will be unmapped from
      the kernel direct map and will only be mapped in the page tables of the
      processes that have access to the file descriptor.
      
      Secretmem is designed to provide the following protections:
      
      * Enhanced protection (in conjunction with all the other in-kernel
        attack prevention systems) against ROP attacks.  Secretmem makes
        "simple" ROP insufficient to perform exfiltration, which increases the
        required complexity of the attack.  Along with other protections like
        the kernel stack size limit and address space layout randomization,
        which make finding gadgets really hard, the absence of any in-kernel
        primitive for accessing secret memory means a one-gadget ROP attack
        can't work.  Since the only way to access secret memory is to
        reconstruct the missing mapping entry, the attacker has to recover the
        physical page, insert a PTE pointing to it in the kernel and then
        retrieve the contents.  That takes at least three gadgets, which is a
        level of difficulty beyond most standard attacks.
      
      * Prevent cross-process secret userspace memory exposures.  Once the
        secret memory is allocated, the user can't accidentally pass it into
        the kernel to be transmitted somewhere.  The secretmem pages cannot be
        accessed via the direct map and they are disallowed in GUP.
      
      * Harden against exploited kernel flaws.  In order to access secretmem,
        a kernel-side attack would need to either walk the page tables and
        create new ones, or spawn a new privileged userspace process to
        perform secrets exfiltration using ptrace.
      
      The file-descriptor-based memory has several advantages over the
      "traditional" mm interfaces such as mlock(), mprotect() and madvise().
      The file descriptor approach allows explicit and controlled sharing of
      the memory areas and allows the operations on them to be sealed.
      Besides, file-descriptor-based memory paves the way for VMMs to remove
      the secret memory range from the userspace hypervisor process, for
      instance QEMU.  Andy Lutomirski says:
      
        "Getting fd-backed memory into a guest will take some possibly major
        work in the kernel, but getting vma-backed memory into a guest without
        mapping it in the host user address space seems much, much worse."
      
      memfd_secret() is made a dedicated system call rather than an extension
      to memfd_create() because its purpose is to allow the user to create
      more secure memory mappings rather than to simply allow file-based
      access to the memory.  Nowadays the cost of a new system call is
      negligible, and it is much simpler for userspace to deal with a
      clear-cut system call than with a multiplexer or an overloaded syscall.
      Moreover, the initial implementation of memfd_secret() is completely
      distinct from memfd_create(), so there is not much sense in overloading
      memfd_create() to begin with.  If a need for code sharing between these
      implementations arises, it can easily be achieved without adjusting
      user-visible APIs.
      
      The secret memory remains accessible in the process context using uaccess
      primitives, but it is not exposed to the kernel otherwise; secret memory
      areas are removed from the direct map and functions in the
      follow_page()/get_user_page() family will refuse to return a page that
      belongs to the secret memory area.
      
      If a use case arises that requires exposing secretmem to the kernel, it
      can be an opt-in request in the system call flags, so that the user
      would have to decide what data can be exposed to the kernel.
      
      Removing pages from the direct map may cause fragmentation of the
      direct map on architectures that use large pages to map physical
      memory, which affects system performance.  However, the original
      Kconfig text for CONFIG_DIRECT_GBPAGES said that gigabyte pages in the
      direct map "...  can improve the kernel's performance a tiny bit ..."
      (commit 00d1c5e0 ("x86: add gbpages switches")) and the recent report
      [1] showed that "...  although 1G mappings are a good default choice,
      there is no compelling evidence that it must be the only choice".
      Hence, it is sufficient to have secretmem disabled by default, with the
      ability for a system administrator to enable it at boot time.
      
      Pages in the secretmem regions are unevictable and unmovable to avoid
      accidental exposure of the sensitive data via swap or during page
      migration.
      
      Since the secretmem mappings are locked in memory they cannot exceed
      RLIMIT_MEMLOCK.  Since these mappings are already locked independently
      from mlock(), an attempt to mlock()/munlock() secretmem range would fail
      and mlockall()/munlockall() will ignore secretmem mappings.
      
      However, unlike mlock()ed memory, secretmem currently behaves more like
      long-term GUP: secretmem mappings are unmovable mappings directly consumed
      by user space.  With default limits, there is no excessive use of
      secretmem and it poses no real problem in combination with
      ZONE_MOVABLE/CMA, but in the future this should be addressed to allow
      balanced use of large amounts of secretmem along with ZONE_MOVABLE/CMA.
      
      A page that was a part of the secret memory area is cleared when it is
      freed to ensure the data is not exposed to the next user of that page.
      
      The following example demonstrates creation of a secret mapping (error
      handling is omitted):
      
      	fd = memfd_secret(0);
      	ftruncate(fd, MAP_SIZE);
      	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
      		   MAP_SHARED, fd, 0);
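
      Note that glibc provides no memfd_secret() wrapper at the time of
      writing, so a test program would typically issue the raw syscall:

      	fd = syscall(__NR_memfd_secret, 0);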
      
      [1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
      
      [akpm@linux-foundation.org: suppress Kconfig whine]
      
      Link: https://lkml.kernel.org/r/20210518072034.31572-5-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mmap: make mlock_future_check() global · 6aeb2542
      By Mike Rapoport
      Patch series "mm: introduce memfd_secret system call to create "secret" memory areas", v20.
      
      This is an implementation of "secret" mappings backed by a file
      descriptor.
      
      The file descriptor backing secret memory mappings is created using the
      dedicated memfd_secret system call.  The desired protection mode for
      the memory is configured using the flags parameter of the system call.
      The mmap() of the file descriptor created with memfd_secret() will
      create a "secret" memory mapping.  The pages in that mapping will be
      marked as not present in the direct map and will be present only in the
      page table of the owning mm.
      
      Although normally Linux userspace mappings are protected from other
      users, such secret mappings are useful for environments where a hostile
      tenant is trying to trick the kernel into giving them access to other
      tenants' mappings.
      
      It's designed to provide the following protections:
      
      * Enhanced protection (in conjunction with all the other in-kernel
        attack prevention systems) against ROP attacks.  Secretmem makes
        "simple" ROP insufficient to perform exfiltration, which increases the
        required complexity of the attack.  Along with other protections like
        the kernel stack size limit and address space layout randomization,
        which make finding gadgets really hard, the absence of any in-kernel
        primitive for accessing secret memory means a one-gadget ROP attack
        can't work.  Since the only way to access secret memory is to
        reconstruct the missing mapping entry, the attacker has to recover the
        physical page, insert a PTE pointing to it in the kernel and then
        retrieve the contents.  That takes at least three gadgets, which is a
        level of difficulty beyond most standard attacks.
      
      * Prevent cross-process secret userspace memory exposures.  Once the
        secret memory is allocated, the user can't accidentally pass it into
        the kernel to be transmitted somewhere.  The secretmem pages cannot be
        accessed via the direct map and they are disallowed in GUP.
      
      * Harden against exploited kernel flaws.  In order to access secretmem,
        a kernel-side attack would need to either walk the page tables and
        create new ones, or spawn a new privileged userspace process to
        perform secrets exfiltration using ptrace.
      
      In the future the secret mappings may be used as a means to protect
      guest memory in a virtual machine host.
      
      For demonstration of secret memory usage we've created a userspace
      library

      https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

      that does two things: first, it acts as a preloader for OpenSSL,
      redirecting all OPENSSL_malloc calls to secret memory so that any
      secret keys are automatically protected; second, it exposes the API to
      users who need it.  We anticipate that a lot of the use cases would be
      like the OpenSSL one: many toolkits that deal with secret keys already
      have special handling for the memory to try to give them greater
      protection, so this would simply be pluggable into the toolkits without
      any need for user application modification.
      
      Hiding secret memory mappings behind an anonymous file allows usage of the
      page cache for tracking pages allocated for the "secret" mappings as well
      as using address_space_operations for e.g.  page migration callbacks.
      
      The anonymous file may also be used implicitly, like hugetlb files, to
      implement mmap(MAP_SECRET) and use the secret memory areas with
      "native" mm ABIs in the future.
      
      Removing pages from the direct map may cause fragmentation of the
      direct map on architectures that use large pages to map physical
      memory, which affects system performance.  However, the original
      Kconfig text for CONFIG_DIRECT_GBPAGES said that gigabyte pages in the
      direct map "...  can improve the kernel's performance a tiny bit ..."
      (commit 00d1c5e0 ("x86: add gbpages switches")) and the recent report
      [1] showed that "...  although 1G mappings are a good default choice,
      there is no compelling evidence that it must be the only choice".
      Hence, it is sufficient to have secretmem disabled by default, with the
      ability for a system administrator to enable it at boot time.
      
      In addition, there is also a long term goal to improve management of the
      direct map.
      
      [1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
      
      This patch (of 7):
      
      It will be used by the upcoming secret memory implementation.
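
      Concretely, the previously static helper in mm/mmap.c is exposed via
      mm/internal.h, along the lines of:

      	/* mm/internal.h, sketch */
      	extern int mlock_future_check(struct mm_struct *mm,
      				      unsigned long flags,
      				      unsigned long len);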
      
      Link: https://lkml.kernel.org/r/20210518072034.31572-1-rppt@kernel.org
      Link: https://lkml.kernel.org/r/20210518072034.31572-2-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/slub: use stackdepot to save stack trace in objects · 78869146
      By Oliver Glitta
      Many stack traces are similar, so there are many similar arrays.
      Stackdepot saves each unique stack only once.
      
      Replace the addrs field in struct track with a depot_stack_handle_t
      handle, and use stackdepot to save the stack trace.
      
      The benefits are smaller memory overhead and the possibility of
      aggregating per-cache statistics in the future using the stackdepot
      handle instead of matching stacks manually.
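
      A sketch of the save path in set_track(), where p is the struct track
      being filled (simplified; stack_trace_save() captures the frames and
      stack_depot_save() deduplicates them in the depot):

      	#ifdef CONFIG_STACKTRACE
      		unsigned long entries[TRACK_ADDRS_COUNT];
      		unsigned int nr_entries;

      		/* skip the innermost allocator frames */
      		nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
      		p->handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
      	#endif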
      
      [rdunlap@infradead.org: rename save_stack_trace()]
        Link: https://lkml.kernel.org/r/20210513051920.29320-1-rdunlap@infradead.org
      [vbabka@suse.cz: fix lockdep splat]
        Link: https://lkml.kernel.org/r/20210516195150.26740-1-vbabka@suse.cz
      Link: https://lkml.kernel.org/r/20210414163434.4376-1-glittao@gmail.com
      Signed-off-by: Oliver Glitta <glittao@gmail.com>
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 05 Jul 2021, 1 commit
  6. 02 Jul 2021, 22 commits