1. 20 June 2016, 1 commit
    • s390/mm: add shadow gmap support · 4be130a0
      Authored by Martin Schwidefsky
      For a nested KVM guest the outer KVM host needs to create shadow
      page tables for the nested guest. This patch adds the basic support
      to the guest address space (gmap) code.
      
      For each guest address space that the inner KVM host creates, the
      outer KVM host needs to create shadow page tables. The address space
      is identified by the ASCE loaded into control register 1 at the
      time the inner SIE instruction for the second, nested KVM guest is
      executed. The outer KVM host creates the shadow tables starting with
      the table identified by the ASCE on an on-demand basis. The outer KVM
      host will get repeated faults for all the shadow tables needed to
      run the second KVM guest.
      
      While a shadow page table for the second KVM guest is active the access
      to the origin region, segment and page tables needs to be restricted
      for the first KVM guest. For the region, segment and page tables the first
      KVM guest may read the memory, but a write attempt has to lead to an
      unshadow.  This is done using the page invalid and read-only bits in the
      page table of the first KVM guest. If the first guest re-accesses one of
      the origin pages of a shadow, it gets a fault and the affected parts of
      the shadow page table hierarchy need to be removed again.
      
      PGSTE tables don't have to be shadowed, as the interpretation assists can't
      deal with the invalid bits in the shadow ptes being set differently than
      the original ones provided by the first KVM guest.
      
      Many bug fixes and improvements by David Hildenbrand.
      Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      4be130a0
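
      A rough, kernel-style sketch of the fault-driven flow described above,
      for orientation only: every helper name (shadow_gmap_get,
      gmap_protect_origin, shadow_fault) is a hypothetical placeholder for
      the gmap shadow machinery added here, not the kernel's actual API,
      and the error handling is simplified.

      struct gmap;          /* guest address space of the first KVM guest */
      struct gmap_shadow;   /* shadow tables built by the outer KVM host */

      /* find or create the shadow keyed by the ASCE in CR1 at nested SIE */
      struct gmap_shadow *shadow_gmap_get(struct gmap *parent, unsigned long asce);
      /* write-protect origin region/segment/page tables of the first guest */
      int gmap_protect_origin(struct gmap *parent, unsigned long gaddr,
                              unsigned long len);
      /* build the missing shadow table levels for one faulting address */
      int shadow_fault(struct gmap_shadow *sg, unsigned long gaddr);

      /* outer host: handle one fault taken while running the nested guest */
      static int handle_nested_guest_fault(struct gmap *parent,
                                           unsigned long cr1_asce,
                                           unsigned long gaddr)
      {
              struct gmap_shadow *sg = shadow_gmap_get(parent, cr1_asce);
              int rc;

              if (!sg)
                      return -ENOMEM;
              /* a later write by the first guest faults and unshadows this part */
              rc = gmap_protect_origin(parent, gaddr, 4096);
              if (rc)
                      return rc;
              return shadow_fault(sg, gaddr);
      }
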
  2. 21 April 2016, 1 commit
    • s390/mm: fix asce_bits handling with dynamic pagetable levels · 723cacbd
      Authored by Gerald Schaefer
      There is a race with multi-threaded applications between context switch and
      pagetable upgrade. In switch_mm() a new user_asce is built from mm->pgd and
      mm->context.asce_bits, w/o holding any locks. A concurrent mmap with a
      pagetable upgrade on another thread in crst_table_upgrade() could already
      have set new asce_bits, but not yet the new mm->pgd. This would result in a
      corrupt user_asce in switch_mm(), and eventually in a kernel panic from a
      translation exception.
      
      Fix this by storing the complete asce instead of just the asce_bits, which
      can then be read atomically from switch_mm(), so that it either sees the
      old value or the new value, but no mixture. Both cases are OK. Having the
      old value would result in a page fault on access to the higher level memory,
      but the fault handler would see the new mm->pgd, if it was a valid access
      after the mmap on the other thread has completed. So as worst-case scenario
      we would have a page fault loop for the racing thread until the next time
      slice.
      
      Also remove dead code and simplify the upgrade/downgrade path: there are no
      upgrades from 2 levels, and only downgrades from 3 levels for compat tasks.
      There are also no concurrent upgrades, because the mmap_sem is held with
      down_write() in do_mmap, so the flush and table checks during upgrade can
      be removed.
      Reported-by: Michael Munday <munday@ca.ibm.com>
      Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      723cacbd
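
      The essence of the fix can be shown in a short sketch; the field and
      symbol names (context.asce, asce_bits, S390_lowcore.user_asce) follow
      the commit text and the s390 code of that time, but the snippet is an
      illustration of "read one complete asce word", not the literal patch.

      /*
       * Before the fix, switch_mm() combined two independently updated
       * fields, which a concurrent crst_table_upgrade() could leave
       * half-updated:
       *
       *      user_asce = next->context.asce_bits | __pa(next->pgd);
       *
       * After the fix, the complete asce lives in one word and is read
       * with a single aligned load, so switch_mm() sees either the old
       * value or the new value, never a mixture:
       */
      static inline void set_user_asce_sketch(struct mm_struct *next)
      {
              S390_lowcore.user_asce = next->context.asce;
      }
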
  3. 10 March 2016, 1 commit
    • s390/mm: four page table levels vs. fork · 3446c13b
      Authored by Martin Schwidefsky
      The fork of a process with four page table levels is broken since
      git commit 6252d702 "[S390] dynamic page tables."
      
      All new mm contexts are created with three page table levels and
      an asce limit of 4TB. If the parent has four levels dup_mmap will
      add vmas to the new context which are outside of the asce limit.
      The subsequent call to copy_page_range will walk the three level
      page table structure of the new process with non-zero pgd and pud
      indexes. This leads to memory clobbers as the pgd_index *and* the
      pud_index are added to the mm->pgd pointer without a pgd_deref
      in between.
      
      The init_new_context() function is selecting the number of page
      table levels for a new context. The function is used by mm_init()
      which in turn is called by dup_mm() and mm_alloc(). These two are
      used by fork() and exec(). The init_new_context() function can
      distinguish the two cases by looking at mm->context.asce_limit:
      for fork() the mm struct has been copied and the number of page
      table levels may not change. For exec() the mm_alloc() function
      sets the new mm structure to zero; in this case a three-level page
      table is created as the temporary stack space is located at
      STACK_TOP_MAX = 4TB.
      
      This fixes CVE-2016-2143.
      Reported-by: Marcin Kościelnicki <koriakin@0x04.net>
      Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      3446c13b
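
      A condensed sketch of the fork/exec distinction described above; the
      asce_limit test follows the commit text, while the constants and the
      omitted error handling are simplifications of the real
      init_new_context().

      static int init_new_context_sketch(struct mm_struct *mm)
      {
              if (mm->context.asce_limit == 0) {
                      /* exec(): mm_alloc() zeroed the mm; start with a
                       * three-level table, enough for STACK_TOP_MAX = 4TB */
                      mm->context.asce_limit = 1UL << 42;
              }
              /* fork(): the mm was copied from the parent; keep the
               * inherited asce_limit and thus the inherited number of
               * page table levels */
              mm->pgd = pgd_alloc(mm);
              return mm->pgd ? 0 : -ENOMEM;
      }
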
  4. 08 March 2016, 1 commit
  5. 23 April 2015, 1 commit
  6. 25 March 2015, 1 commit
    • s390: remove 31 bit support · 5a79859a
      Authored by Heiko Carstens
      Remove the 31 bit support in order to reduce maintenance cost and
      effectively remove dead code. For a couple of years now there has been
      no distribution left that ships a 31 bit kernel.
      
      The 31 bit kernel had also been broken for more than a year before
      anybody noticed. In addition I added a removal warning to the kernel
      shown at ipl for 5 minutes: a960062e ("s390: add 31 bit warning
      message") which let everybody know about the plan to remove 31 bit
      code. We didn't get any response.
      
      Given that the last 31 bit only machine was introduced in 1999, let's
      remove the code.
      Anybody with 31 bit user space code can still use the compat mode.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      5a79859a
  7. 28 November 2014, 1 commit
  8. 27 October 2014, 1 commit
  9. 26 August 2014, 1 commit
    • KVM: s390/mm: use radix trees for guest to host mappings · 527e30b4
      Authored by Martin Schwidefsky
      Store the target address for the gmap segments in a radix tree
      instead of using invalid segment table entries. gmap_translate
      becomes a simple radix_tree_lookup; gmap_fault is split into the
      address translation with gmap_translate and the part that does
      the linking of the gmap shadow page table with the process page
      table.
      A second radix tree is used to keep the pointers to the segment
      table entries for segments that are mapped in the guest address
      space. On unmap of a segment the pointer is retrieved from the
      radix tree and is used to carry out the segment invalidation in
      the gmap shadow page table. As the radix tree can only store one
      pointer, each host segment may only be mapped to exactly one
      guest location.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      527e30b4
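
      Roughly, the lookup side of the change looks like the kernel-style
      sketch below, assuming guest addresses are keyed by their 1MB segment
      index; radix_tree_lookup() is the standard Linux API, the surrounding
      structure is simplified.

      struct gmap_sketch {
              struct radix_tree_root guest_to_host;  /* gaddr -> vmaddr */
              struct radix_tree_root host_to_guest;  /* segment ptr backlinks */
      };

      /* gmap_translate essentially becomes a radix tree lookup on the
       * 1MB segment index of the guest address */
      static unsigned long gmap_translate_sketch(struct gmap_sketch *gmap,
                                                 unsigned long gaddr)
      {
              void *vmaddr;

              vmaddr = radix_tree_lookup(&gmap->guest_to_host, gaddr >> 20);
              if (!vmaddr)
                      return -EFAULT;
              return (unsigned long) vmaddr | (gaddr & ((1UL << 20) - 1));
      }
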
  10. 22 April 2014, 1 commit
  11. 21 February 2014, 2 commits
  12. 27 June 2013, 1 commit
  13. 17 June 2013, 1 commit
  14. 20 July 2012, 1 commit
    • s390/comments: unify copyright messages and remove file names · a53c8fab
      Authored by Heiko Carstens
      Remove the file name from the comment at the top of many files. In most
      cases the file name was wrong anyway, so it's rather pointless.
      
      Also unify the IBM copyright statement. We did have a lot of slightly
      different statements and wanted to change them one after another
      whenever a file got touched. However, that never happened. Instead
      people started to take the old/"wrong" statements as a template
      for new files.
      So unify all of them in one go.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      a53c8fab
  15. 24 May 2012, 1 commit
  16. 11 April 2012, 1 commit
    • [S390] fix tlb flushing for page table pages · cd94154c
      Authored by Martin Schwidefsky
      Git commit 36409f63 "use generic RCU
      page-table freeing code" introduced a tlb flushing bug. Partially revert
      the above git commit and go back to s390 specific page table flush code.
      
      For s390 the TLB can contain three types of entries: "normal" TLB
      page-table entries, TLB combined region-and-segment-table (CRST) entries
      and real-space entries. Linux does not use real-space entries which
      leaves normal TLB entries and CRST entries. The CRST entries are
      intermediate steps in the page-table translation called translation paths.
      For example a 4K page access in a three-level page table setup will
      create two CRST TLB entries and one page-table TLB entry. The advantage
      of that approach is that a page access next to the previous one can reuse
      the CRST entries and needs just a single read from memory to create the
      page-table TLB entry. The disadvantage is that the TLB flushing rules are
      more complicated: before any page table may be freed, the TLB needs to be
      flushed.
      
      In short: the generic RCU page-table freeing code is incorrect for the
      CRST entries, in particular the check for mm_users < 2 is troublesome.
      
      This is applicable to 3.0+ kernels.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      cd94154c
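
      The rule this revert restores can be summed up in a short sketch: a
      page-table page may only be freed or reused after the TLB, including
      the CRST entries that still point into it, has been flushed. The
      helper names match the s390 code of that era, the body is illustrative.

      static void crst_table_free_safe_sketch(struct mm_struct *mm,
                                              unsigned long *table)
      {
              __tlb_flush_mm(mm);             /* purge ptes and CRST entries */
              crst_table_free(mm, table);     /* only now may the page be reused */
      }
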
  17. 24 July 2011, 1 commit
    • [S390] kvm guest address space mapping · e5992f2e
      Authored by Martin Schwidefsky
      Add code that allows KVM to control the virtual memory layout that
      is seen by a guest. The guest address space uses a second page table
      that shares the last level pte-tables with the process page table.
      If a page is unmapped from the process page table it is automatically
      unmapped from the guest page table as well.
      
      The guest address space mapping starts out empty, KVM can map any
      individual 1MB segments from the process virtual memory to any 1MB
      aligned location in the guest virtual memory. If a target segment in
      the process virtual memory does not exist or is unmapped while a
      guest mapping exists, the desired target address is stored as an
      invalid segment table entry in the guest page table.
      The population of the guest page table is fault driven.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      e5992f2e
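
      A hedged usage sketch of the new interface: gmap_alloc(),
      gmap_map_segment() and gmap_fault() are the functions this patch
      introduces, but the exact signatures, alignment checks and error
      handling are simplified here.

      static long map_guest_memory_sketch(unsigned long process_addr,
                                          unsigned long guest_addr)
      {
              struct gmap *gmap;
              long rc;

              gmap = gmap_alloc(current->mm);
              if (!gmap)
                      return -ENOMEM;
              /* map 16MB of process memory at guest address 0, 1MB aligned */
              rc = gmap_map_segment(gmap, process_addr, 0, 16UL << 20);
              if (rc)
                      return rc;
              /* a later guest access faults here and links the shared
               * pte table into the guest page table on demand */
              return gmap_fault(guest_addr, gmap);
      }
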
  18. 06 June 2011, 1 commit
    • [S390] use generic RCU page-table freeing code · 36409f63
      Authored by Martin Schwidefsky
      Replace the s390 specific rcu page-table freeing code with the
      generic variant. This requires duplicating the definition of
      struct mmu_table_batch, as s390 does not use the generic tlb flush
      code.
      
      While we are at it, remove the restriction that page table fragments
      can not be reused after a single fragment has been freed with rcu
      and split out allocation and freeing of page tables with pgstes.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      36409f63
  19. 23 May 2011, 2 commits
    • [S390] refactor page table functions for better pgste support · b2fa47e6
      Authored by Martin Schwidefsky
      Rework the architecture page table functions to access the bits in the
      page table extension array (pgste). There are a number of changes:
      1) Fix missing pgste update if the attach_count for the mm is <= 1.
      2) For every operation that affects the invalid bit in the pte or the
         rcp byte in the pgste, the pcl lock needs to be acquired. The function
         pgste_get_lock gets the pcl lock and returns the current pgste value
         for a pte pointer. The function pgste_set_unlock stores the pgste
         and releases the lock. Between these two calls the bits in the pgste
         can be shuffled.
      3) Define two software bits in the pte _PAGE_SWR and _PAGE_SWC to avoid
         calling SetPageDirty and SetPageReferenced from pgtable.h. If the
         host reference backup bit or the host change backup bit has been
         set, the dirty/referenced state is transferred to the pte. The common
         code will pick up the state from the pte.
      4) Add ptep_modify_prot_start and ptep_modify_prot_commit for mprotect.
      5) Remove pgd_populate_kernel, pud_populate_kernel, pmd_populate_kernel,
         pgd_clear_kernel, pud_clear_kernel, pmd_clear_kernel and ptep_invalidate.
      6) Rename kvm_s390_test_and_clear_page_dirty to
         ptep_test_and_clear_user_dirty and add ptep_test_and_clear_user_young.
      7) Define mm_exclusive() and mm_has_pgste() helper to improve readability.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      b2fa47e6
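
      Item 2 of the list above describes a lock/modify/unlock pattern around
      the pgste. A kernel-style sketch of a primitive following that pattern
      is shown below; pgste_get_lock(), pgste_set_unlock() and
      mm_has_pgste() are the helpers named in the commit, everything in
      between is illustrative.

      static void ptep_example_sketch(struct mm_struct *mm,
                                      unsigned long addr, pte_t *ptep)
      {
              pgste_t pgste;

              if (mm_has_pgste(mm))
                      pgste = pgste_get_lock(ptep);   /* take pcl lock, read pgste */

              /* ... flip the invalid bit in the pte, transfer the rcp
               * reference/change backup bits into the pgste ... */

              if (mm_has_pgste(mm))
                      pgste_set_unlock(ptep, pgste);  /* store pgste, drop pcl lock */
      }
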
    • [S390] Remove data execution protection · 043d0708
      Authored by Martin Schwidefsky
      The noexec support on s390 does not rely on a bit in the page table
      entry but utilizes the secondary space mode to distinguish between
      memory accesses for instructions vs. data. The noexec code relies
      on the assumption that the cpu will always use the secondary space
      page table for data accesses while it is running in the secondary
      space mode. Up to the z9-109 class machines this has been the case.
      Unfortunately this is not true anymore with z10 and later machines.
      The load-relative-long instructions lrl, lgrl and lgfrl access the
      memory operand using the same addressing-space mode that has been
      used to fetch the instruction.
      This breaks the noexec mode for all user space binaries compiled
      with march=z10 or later. The only option is to remove the current
      noexec support.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      043d0708
  20. 25 October 2010, 1 commit
  21. 07 December 2009, 1 commit
    • [S390] Improve address space mode selection. · b11b5334
      Authored by Martin Schwidefsky
      Introduce user_mode to replace the two variables switch_amode and
      s390_noexec. There are three valid combinations of the old values:
        1) switch_amode == 0 && s390_noexec == 0
        2) switch_amode == 1 && s390_noexec == 0
        3) switch_amode == 1 && s390_noexec == 1
      They get replaced by
        1) user_mode == HOME_SPACE_MODE
        2) user_mode == PRIMARY_SPACE_MODE
        3) user_mode == SECONDARY_SPACE_MODE
      The new kernel parameter user_mode=[primary,secondary,home] lets
      you choose the address space mode the user space processes should
      use. In addition the CONFIG_S390_SWITCH_AMODE config option
      is removed.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      b11b5334
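
      A small sketch of how the new kernel parameter could be wired up; the
      constant names come from the commit, while the parsing code and the
      default chosen here are illustrative, not the actual implementation.

      static int __init early_parse_user_mode_sketch(char *p)
      {
              if (p && strcmp(p, "primary") == 0)
                      user_mode = PRIMARY_SPACE_MODE;
              else if (p && strcmp(p, "secondary") == 0)
                      user_mode = SECONDARY_SPACE_MODE;
              else if (!p || strcmp(p, "home") == 0)
                      user_mode = HOME_SPACE_MODE;
              else
                      return 1;       /* unknown mode string */
              return 0;
      }
      early_param("user_mode", early_parse_user_mode_sketch);
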
  22. 11 September 2009, 1 commit
    • [S390] fix recursive locking on page_table_lock · 50aa98ba
      Authored by Martin Schwidefsky
      Suzuki Poulose reported the following recursive locking bug on s390:
      
      Here is the stack trace: (see Appendix I for more info)
      
        [<0000000000406ed6>] _spin_lock+0x52/0x94
        [<0000000000103bde>] crst_table_free+0x14e/0x1a4
        [<00000000001ba684>] __pmd_alloc+0x114/0x1ec
        [<00000000001be8d0>] handle_mm_fault+0x2cc/0xb80
        [<0000000000407d62>] do_dat_exception+0x2b6/0x3a0
        [<0000000000114f8c>] sysc_return+0x0/0x8
        [<00000200001642b2>] 0x200001642b2
      
      The page_table_lock is already acquired in __pmd_alloc (mm/memory.c) and
      it tries to populate the pud/pgd with a newly allocated pmd. If another
      thread populates it before we get a chance, we free the pmd using
      pmd_free().
      
      On s390x, pmd_free() (and even pud_free()) is #defined to crst_table_free(),
      which acquires the page_table_lock to protect the crst_table index updates.
      
      Hence this ends up in a recursive locking of the page_table_lock.
      
      The solution suggested by Dave Hansen is to use a new spin lock in the mmu
      context to protect the access to the crst_list and the pgtable_list.
      Reported-by: Suzuki Poulose <suzuki@in.ibm.com>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      50aa98ba
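
      The suggested fix boils down to a dedicated lock in the mm context, so
      crst_table_free() no longer takes the page_table_lock that
      __pmd_alloc() already holds. A kernel-style sketch, with field names
      as in the s390 mm context of that time and the list handling elided:

      typedef struct {
              spinlock_t list_lock;           /* protects the two lists below
                                                 instead of page_table_lock */
              struct list_head crst_list;
              struct list_head pgtable_list;
              /* ... */
      } mm_context_sketch_t;

      static void crst_table_free_sketch(struct mm_struct *mm,
                                         unsigned long *table)
      {
              spin_lock(&mm->context.list_lock);
              /* ... update the crst table index / lists and free pages ... */
              spin_unlock(&mm->context.list_lock);
      }
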
  23. 25 December 2008, 1 commit
  24. 02 August 2008, 1 commit
  25. 10 February 2008, 3 commits
  26. 09 February 2008, 1 commit
    • CONFIG_HIGHPTE vs. sub-page page tables. · 2f569afd
      Authored by Martin Schwidefsky
      Background: I've implemented 1K/2K page tables for s390.  These sub-page
      page tables are required to properly support the s390 virtualization
      instruction with KVM.  The SIE instruction requires that the page tables
      have 256 page table entries (pte) followed by 256 page status table entries
      (pgste).  The pgstes are only required if the process is using the SIE
      instruction.  The pgstes are updated by the hardware and by the hypervisor
      for a number of reasons, one of them is dirty and reference bit tracking.
      To avoid wasting memory the standard pte table allocation should return
      1K/2K (31/64 bit) and 2K/4K if the process is using SIE.
      
      Problem: Page size on s390 is 4K, page table size is 1K or 2K.  That means
      the s390 version for pte_alloc_one cannot return a pointer to a struct
      page.  Trouble is that with the CONFIG_HIGHPTE feature on x86 pte_alloc_one
      cannot return a pointer to a pte either, since that would require more than
      32 bit for the return value of pte_alloc_one (and the pte * would not be
      accessible since it's not kmapped).
      
      Solution: The only solution I found to this dilemma is a new typedef: a
      pgtable_t.  For s390 pgtable_t will be a (pte *) - to be introduced with a
      later patch.  For everybody else it will be a (struct page *).  The
      additional problem with the initialization of the ptl lock and the
      NR_PAGETABLE accounting is solved with a constructor pgtable_page_ctor and
      a destructor pgtable_page_dtor.  The page table allocation and free
      functions need to call these two whenever a page table page is allocated or
      freed.  pmd_populate will get a pgtable_t instead of a struct page pointer.
       To get the pgtable_t back from a pmd entry that has been installed with
      pmd_populate a new function pmd_pgtable is added.  It replaces the pmd_page
      call in free_pte_range and apply_to_pte_range.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f569afd
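
      For architectures where pgtable_t remains a struct page *, the
      resulting allocate/free pair looks roughly like the sketch below;
      pgtable_page_ctor()/pgtable_page_dtor() are the constructor and
      destructor introduced here, while the GFP flags and error handling
      are simplified.

      static pgtable_t pte_alloc_one_sketch(struct mm_struct *mm,
                                            unsigned long address)
      {
              struct page *pte;

              pte = alloc_page(GFP_KERNEL | __GFP_ZERO);
              if (pte)
                      pgtable_page_ctor(pte); /* ptl init + NR_PAGETABLE accounting */
              return pte;
      }

      static void pte_free_sketch(struct mm_struct *mm, pgtable_t pte)
      {
              pgtable_page_dtor(pte);
              __free_page(pte);
      }
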
  27. 06 February 2008, 1 commit
  28. 22 October 2007, 3 commits
    • [S390] 4level-fixup cleanup · 190a1d72
      Authored by Martin Schwidefsky
      Make the s390 code independent of asm-generic/4level-fixup.h.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      190a1d72
    • [S390] Cleanup page table definitions. · 3610cce8
      Authored by Martin Schwidefsky
      - De-confuse the defines for the address-space-control-elements
        and the segment/region table entries.
      - Create out of line functions for page table allocation / freeing.
      - Simplify get_shadow_xxx functions.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      3610cce8
    • [S390] tlb flush fix. · ba8a9229
      Authored by Martin Schwidefsky
      The current tlb flushing code for page table entries violates the
      s390 architecture in a small detail. The relevant section from the
      principles of operation (SA22-7832-02 page 3-47):
      
         "A valid table entry must not be changed while it is attached
         to any CPU and may be used for translation by that CPU except to
         (1) invalidate the entry by using INVALIDATE PAGE TABLE ENTRY or
         INVALIDATE DAT TABLE ENTRY, (2) alter bits 56-63 of a page-table
         entry, or (3) make a change by means of a COMPARE AND SWAP AND
         PURGE instruction that purges the TLB."
      
      That means if one thread of a multithreaded application uses a vma
      while another thread does an unmap on it, the page table entries of
      that vma need to be removed with IPTE, IDTE or CSP. In some strange
      and rare situations a cpu could check-stop (die) because an entry has
      been pushed out of the TLB that is still needed to complete a
      (milli-coded) instruction. I've never seen it happen with the current
      code on any of the supported machines, so right now this is a
      theoretical problem. But I want to fix it nevertheless, to avoid
      headaches in the future.
      
      To get this implemented correctly without changing common code the
      primitives ptep_get_and_clear, ptep_get_and_clear_full and
      ptep_set_wrprotect need to use the IPTE instruction to invalidate the
      pte before the new pte value gets stored. If IPTE is always used for
      the three primitives, three important operations will have a performance
      hit: fork, mprotect and exit_mmap. Time for some workarounds:
      
      * 1: ptep_get_and_clear_full is used in unmap_vmas to remove page
      table entries in a batched tlb gather operation. If the mmu_gather
      context passed to unmap_vmas has been started with full_mm_flush==1
      or if only one cpu is online or if the only user of a mm_struct is the
      current process then the fullmm indication in the mmu_gather context is
      set to one. All TLBs for mm_struct are flushed by the tlb_gather_mmu
      call. No new TLBs can be created while the unmap is in progress. In
      this case ptep_get_and_clear_full clears the ptes with a simple store.
      
      * 2: ptep_get_and_clear is used in change_protection to clear the
      ptes from the page tables before they are reentered with the new
      access flags. At the end of the update flush_tlb_range clears the
      remaining TLBs. In general the ptep_get_and_clear has to issue IPTE
      for each pte and flush_tlb_range is a nop. But if there is only one
      user of the mm_struct then ptep_get_and_clear uses simple stores
      to do the update and flush_tlb_range will flush the TLBs.
      
      * 3: Similar to 2, ptep_set_wrprotect is used in copy_page_range
      for a fork to make all ptes of a cow mapping read-only. At the end
      of copy_page_range dup_mmap will flush the TLBs with a call to
      flush_tlb_mm.  Check for mm->mm_users and if there is only one user
      avoid using IPTE in ptep_set_wrprotect and let flush_tlb_mm clear the
      TLBs.
      
      Overall for single threaded programs the tlb flush code now performs
      better, for multi threaded programs it is slightly worse. In particular
      exit_mmap() now does a single IDTE for the mm and then just frees every
      page cache reference and every page table page directly without a delay
      over the mmu_gather structure.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      ba8a9229
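
      The workarounds in points 2 and 3 reduce to "use IPTE only when
      another user of the mm may still have the pte in its TLB". A
      simplified kernel-style sketch, with the single-user test reduced to
      a bare mm_users check and the details of that era's code omitted:

      static inline pte_t ptep_get_and_clear_sketch(struct mm_struct *mm,
                                                    unsigned long addr,
                                                    pte_t *ptep)
      {
              pte_t pte = *ptep;

              if (atomic_read(&mm->mm_users) > 1)
                      __ptep_ipte(addr, ptep);        /* INVALIDATE PAGE TABLE ENTRY */
              /* single user: a plain store is enough, flush_tlb_range()
               * or flush_tlb_mm() flushes the TLB later */
              pte_val(*ptep) = _PAGE_TYPE_EMPTY;
              return pte;
      }
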
  29. 22 August 2007, 1 commit
  30. 06 February 2007, 1 commit
    • [S390] noexec protection · c1821c2e
      Authored by Gerald Schaefer
      This provides noexec protection on s390 hardware. Our hardware does
      not have any bits left in the pte for a hw noexec bit, so this is a
      different approach using shadow page tables and a special addressing
      mode that allows separate address spaces for code and data.
      
      As a special feature of our "secondary-space" addressing mode, separate
      page tables can be specified for the translation of data addresses
      (storage operands) and instruction addresses. The shadow page table is
      used for the instruction addresses and the standard page table for the
      data addresses.
      The shadow page table is linked to the standard page table by a pointer
      in page->lru.next of the struct page corresponding to the page that
      contains the standard page table (since page->private is not really
      private with the pte_lock and the page table pages are not in the LRU
      list).
      Depending on the software bits of a pte, it is either inserted into
      both page tables or just into the standard (data) page table. Pages of
      a vma that does not have the VM_EXEC bit set get mapped only in the
      data address space. Any try to execute code on such a page will cause a
      page translation exception. The standard reaction to this is a SIGSEGV
      with two exceptions: the two system call opcodes 0x0a77 (sys_sigreturn)
      and 0x0aad (sys_rt_sigreturn) are allowed. They are stored by the
      kernel to the signal stack frame. Unfortunately, the signal return
      mechanism cannot be modified to use an SA_RESTORER because the
      exception unwinding code depends on the system call opcode stored
      behind the signal stack frame.
      
      This feature requires that user space is executed in secondary-space
      mode and the kernel in home-space mode, which means that the addressing
      modes need to be switched and that the noexec protection only works
      for user space.
      After switching the addressing modes, we cannot use the mvcp/mvcs
      instructions anymore to copy between kernel and user space. A new
      mvcos instruction has been added to the z9 EC/BC hardware which allows
      to copy between arbitrary address spaces, but on older hardware the
      page tables need to be walked manually.
      Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      c1821c2e
  31. 08 December 2006, 1 commit
    • [S390] Virtual memmap for s390. · f4eb07c1
      Authored by Heiko Carstens
      Virtual memmap support for s390. Inspired by the ia64 implementation.
      
      Unlike ia64 we need a mechanism which allows us to dynamically attach
      shared memory regions.
      These memory regions are accessed via the dcss device driver. dcss
      implements the 'direct_access' operation, which requires struct pages
      for every single shared page.
      Therefore this implementation provides an interface to attach/detach
      shared memory:
      
      int add_shared_memory(unsigned long start, unsigned long size);
      int remove_shared_memory(unsigned long start, unsigned long size);
      
      The purpose of the add_shared_memory function is to add the given
      memory range to the 1:1 mapping and to make sure that the
      corresponding range in the vmemmap is backed with physical pages.
      It also initialises the new struct pages.
      
      remove_shared_memory in turn only invalidates the page table
      entries in the 1:1 mapping. The page tables and the memory used for
      struct pages in the vmemmap are currently not freed. They will be
      reused when the next segment will be attached.
      Given that the maximum size of a shared memory region is 2GB and
      in addition all regions must reside below 2GB, this is not too much of
      a restriction, but there is room for improvement.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      f4eb07c1
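
      A hedged usage sketch of the interface quoted above, as a segment
      driver such as dcss might call it; add_shared_memory() and
      remove_shared_memory() are the functions from the commit, the calling
      code around them is illustrative.

      static int attach_and_detach_segment_sketch(unsigned long start,
                                                  unsigned long size)
      {
              int rc;

              /* add [start, start+size) to the 1:1 mapping and back the
               * corresponding vmemmap range with struct pages */
              rc = add_shared_memory(start, size);
              if (rc)
                      return rc;

              /* ... segment is usable via direct_access ... */

              /* invalidate the 1:1 mapping again; the vmemmap pages are
               * kept and reused for the next segment */
              return remove_shared_memory(start, size);
      }
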
  32. 05 October 2006, 1 commit
  33. 20 September 2006, 1 commit
  34. 12 July 2006, 1 commit