1. 26 May, 2012 · 1 commit
    • arch/tile: allow building Linux with transparent huge pages enabled · 73636b1a
      Committed by Chris Metcalf
      The change adds some infrastructure for managing tile pmd's more generally,
      using pte_pmd() and pmd_pte() methods to translate pmd values to and
      from ptes, since on TILEPro a pmd is really just a nested structure
      holding a pgd (aka pte).  Several existing pmd methods are moved into
      this framework, and a whole raft of additional pmd accessors are defined
      that are used by the transparent hugepage framework.
      
      The tile PTE now has a "client2" bit.  The bit is used to indicate a
      transparent huge page is in the process of being split into subpages.
      
      This change also fixes a generic bug where the return value of the
      generic pmdp_splitting_flush() was incorrect.
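      
      As a minimal sketch of that framework (the type layout and field names
      below are illustrative, not the literal arch/tile definitions):
      
      		/* On TILEPro a pmd is a one-entry structure wrapping a
      		 * pgd, which in turn has the same layout as a pte, so
      		 * pmd accessors can be built from the pte accessors. */
      		static inline pte_t pmd_pte(pmd_t pmd)
      		{
      			return pmd.pud.pgd;		/* unwrap the nested entry */
      		}
      
      		static inline pmd_t pte_pmd(pte_t pte)
      		{
      			return (pmd_t){ { pte } };	/* rewrap */
      		}
      
      		#define pmd_mkdirty(pmd)	pte_pmd(pte_mkdirty(pmd_pte(pmd)))
      		#define pmd_mkyoung(pmd)	pte_pmd(pte_mkyoung(pmd_pte(pmd)))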
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      73636b1a
  2. 22 Mar, 2012 · 1 commit
    • mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode · 1a5a9906
      Committed by Andrea Arcangeli
      In some cases it may happen that pmd_none_or_clear_bad() is called with
      the mmap_sem held in read mode.  In those cases the huge page faults can
      allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a
      false positive from pmd_bad(), which does not expect to see a pmd
      materializing as trans huge.
      
      It's not khugepaged causing the problem: khugepaged holds the mmap_sem
      in write mode (and all those sites must hold the mmap_sem in read mode
      to prevent pagetables from going away from under them; during code
      review it seems vm86 mode on 32-bit kernels requires that too, unless
      it's restricted to one thread per process or UP builds).  The race is
      only with the huge pagefaults, which can convert a pmd_none() into a
      pmd_trans_huge().
      
      Effectively all these pmd_none_or_clear_bad() sites running with
      mmap_sem in read mode are somewhat speculative with the page faults, and
      the result is always undefined when they run simultaneously.  This is
      probably why it wasn't common to run into this.  For example, if
      madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page
      fault, the hugepage will not be zapped; if the page fault runs first,
      it will be.
      
      Altering pmd_bad() not to error out if it finds hugepmds won't be enough
      to fix this, because zap_pmd_range would then proceed to call
      zap_pte_range (which would be incorrect if the pmd had become a
      pmd_trans_huge()).
      
      The simplest way to fix this is to read the pmd into a local on the
      stack (no actual CPU barriers are needed regardless of what we read,
      only a compiler barrier), and to be sure it is not changing under the
      code that computes its value.  Even if the real pmd is changing under
      the value we hold on the stack, we don't care.  If we actually end up
      in zap_pte_range it means the pmd was not none already and it was not
      huge, and it can't become huge from under us (khugepaged locking
      explained above).
      
      All we need is to enforce that, in a code path like the one below,
      there is no way for pmd_trans_huge to be false while
      pmd_none_or_clear_bad still runs into a hugepmd.  The overhead of a
      barrier() is just a compiler tweak and should not be measurable (I only
      added it for THP builds).  I don't exclude that different compiler
      versions may have prevented the race too, by caching the value of *pmd
      on the stack (that hasn't been verified, but it wouldn't be impossible,
      considering that pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge, and
      pmd_none are all inlines and no external function is called between
      pmd_trans_huge and pmd_none_or_clear_bad).
      
      		if (pmd_trans_huge(*pmd)) {
      			if (next-addr != HPAGE_PMD_SIZE) {
      				VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
      				split_huge_page_pmd(vma->vm_mm, pmd);
      			} else if (zap_huge_pmd(tlb, vma, pmd, addr))
      				continue;
      			/* fall through */
      		}
      		if (pmd_none_or_clear_bad(pmd))
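      
      A sketch of a helper implementing this, consistent with the description
      above (the name follows the commit; treat the details as illustrative):
      
      		static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
      		{
      			pmd_t pmdval = *pmd;	/* snapshot into a local */
      			/*
      			 * The barrier stabilizes pmdval in a register or on
      			 * the stack so it stops changing under the checks.
      			 */
      		#ifdef CONFIG_TRANSPARENT_HUGEPAGE
      			barrier();
      		#endif
      			if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
      				return 1;	/* a racing hugepmd is treated like none */
      			if (unlikely(pmd_bad(pmdval))) {
      				pmd_clear_bad(pmd);
      				return 1;
      			}
      			return 0;
      		}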
      
      Because this race condition could be exercised without special
      privileges, it was reported as CVE-2012-1179.
      
      The race was identified and fully explained by Ulrich who debugged it.
      I'm quoting his accurate explanation below, for reference.
      
      ====== start quote =======
            mapcount 0 page_mapcount 1
            kernel BUG at mm/huge_memory.c:1384!
      
          At some point prior to the panic, a "bad pmd ..." message similar to the
          following is logged on the console:
      
            mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).
      
          The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
          the page's PMD table entry.
      
              143 void pmd_clear_bad(pmd_t *pmd)
              144 {
          ->  145         pmd_ERROR(*pmd);
              146         pmd_clear(pmd);
              147 }
      
          After the PMD table entry has been cleared, there is an inconsistency
          between the actual number of PMD table entries that are mapping the page
          and the page's map count (_mapcount field in struct page). When the page
          is subsequently reclaimed, __split_huge_page() detects this inconsistency.
      
             1381         if (mapcount != page_mapcount(page))
             1382                 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
             1383                        mapcount, page_mapcount(page));
          -> 1384         BUG_ON(mapcount != page_mapcount(page));
      
          The root cause of the problem is a race of two threads in a multithreaded
          process. Thread B incurs a page fault on a virtual address that has never
          been accessed (PMD entry is zero) while Thread A is executing an madvise()
          system call on a virtual address within the same 2 MB (huge page) range.
      
                     virtual address space
                    .---------------------.
                    |                     |
                    |                     |
                  .-|---------------------|
                  | |                     |
                  | |                     |<-- B(fault)
                  | |                     |
            2 MB  | |/////////////////////|-.
            huge <  |/////////////////////|  > A(range)
            page  | |/////////////////////|-'
                  | |                     |
                  | |                     |
                  '-|---------------------|
                    |                     |
                    |                     |
                    '---------------------'
      
          - Thread A is executing an madvise(..., MADV_DONTNEED) system call
            on the virtual address range "A(range)" shown in the picture.
      
          sys_madvise
            // Acquire the semaphore in shared mode.
            down_read(&current->mm->mmap_sem)
            ...
            madvise_vma
              switch (behavior)
              case MADV_DONTNEED:
                   madvise_dontneed
                     zap_page_range
                       unmap_vmas
                         unmap_page_range
                           zap_pud_range
                             zap_pmd_range
                               //
                               // Assume that this huge page has never been accessed.
                               // I.e. content of the PMD entry is zero (not mapped).
                               //
                               if (pmd_trans_huge(*pmd)) {
                                   // We don't get here due to the above assumption.
                               }
                               //
                               // Assume that Thread B incurred a page fault and
                   .---------> // sneaks in here as shown below.
                   |           //
                   |           if (pmd_none_or_clear_bad(pmd))
                   |               {
                   |                 if (unlikely(pmd_bad(*pmd)))
                   |                     pmd_clear_bad
                   |                     {
                   |                       pmd_ERROR
                   |                         // Log "bad pmd ..." message here.
                   |                       pmd_clear
                   |                         // Clear the page's PMD entry.
                   |                         // Thread B incremented the map count
                   |                         // in page_add_new_anon_rmap(), but
                   |                         // now the page is no longer mapped
                   |                         // by a PMD entry (-> inconsistency).
                   |                     }
                   |               }
                   |
                   v
          - Thread B is handling a page fault on virtual address "B(fault)" shown
            in the picture.
      
          ...
          do_page_fault
            __do_page_fault
              // Acquire the semaphore in shared mode.
              down_read_trylock(&mm->mmap_sem)
              ...
              handle_mm_fault
                if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
                    // We get here due to the above assumption (PMD entry is zero).
                    do_huge_pmd_anonymous_page
                      alloc_hugepage_vma
                        // Allocate a new transparent huge page here.
                      ...
                      __do_huge_pmd_anonymous_page
                        ...
                        spin_lock(&mm->page_table_lock)
                        ...
                        page_add_new_anon_rmap
                          // Here we increment the page's map count (starts at -1).
                          atomic_set(&page->_mapcount, 0)
                        set_pmd_at
                          // Here we set the page's PMD entry which will be cleared
                          // when Thread A calls pmd_clear_bad().
                        ...
                        spin_unlock(&mm->page_table_lock)
      
          The mmap_sem does not prevent the race because both threads are acquiring
          it in shared mode (down_read).  Thread B holds the page_table_lock while
          the page's map count and PMD table entry are updated.  However, Thread A
          does not synchronize on that lock.
      
      ====== end quote =======
      
      [akpm@linux-foundation.org: checkpatch fixes]
      Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Jones <davej@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: <stable@vger.kernel.org>		[2.6.38+]
      Cc: Mark Salter <msalter@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1a5a9906
  3. 05 Mar, 2012 · 1 commit
    • BUG: headers with BUG/BUG_ON etc. need linux/bug.h · 187f1882
      Committed by Paul Gortmaker
      If a header file is making use of BUG, BUG_ON, BUILD_BUG_ON, or any
      other BUG variant in a static inline (i.e. not in a #define) then
      that header really should be including <linux/bug.h> and not just
      expecting it to be implicitly present.
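      
      For example (a hypothetical header; the point is the explicit include):
      
      		/* hypothetical include/asm/foo.h */
      		#include <linux/bug.h>	/* for the BUG_ON() below */
      
      		static inline void foo_set_order(struct foo *f, int order)
      		{
      			BUG_ON(order < 0);	/* real code in an inline, unlike a bare #define */
      			f->order = order;
      		}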
      
      We can make this change risk-free, since if the files using these
      headers didn't have exposure to linux/bug.h already, they would have
      been causing compile failures/warnings.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      187f1882
  4. 16 Jun, 2011 · 1 commit
  5. 23 May, 2011 · 1 commit
    • [S390] merge page_test_dirty and page_clear_dirty · 2d42552d
      Committed by Martin Schwidefsky
      The page_clear_dirty primitive always sets the default storage key,
      which resets the access control bits and the fetch protection bit.
      That will surprise a KVM guest that sets non-zero access control
      bits or the fetch protection bit.  Merge page_test_dirty and
      page_clear_dirty back into a single function and only clear the
      dirty bit from the storage key.
      
      In addition, move the functions page_test_and_clear_dirty and
      page_test_and_clear_young to page.h, where they belong.  This
      requires changing the parameter from a struct page * to a page
      frame number.
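      
      A sketch of the merged primitive under those constraints (the
      storage-key helpers and bit name are illustrative s390-isms):
      
      		static inline int page_test_and_clear_dirty(unsigned long pfn)
      		{
      			unsigned char skey = page_get_storage_key(pfn << PAGE_SHIFT);
      
      			if (!(skey & _PAGE_CHANGED))
      				return 0;
      			/* clear only the change (dirty) bit, keeping the
      			 * access control bits and the fetch protection bit */
      			page_set_storage_key(pfn << PAGE_SHIFT,
      					     skey & ~_PAGE_CHANGED, 0);
      			return 1;
      		}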
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      2d42552d
  6. 01 Mar, 2011 · 1 commit
    • mm: <asm-generic/pgtable.h> must include <linux/mm_types.h> · fbd71844
      Committed by Ben Hutchings
      Commit e2cda322 ("thp: add pmd mangling generic functions") replaced
      some macros in <asm-generic/pgtable.h> with inline functions.
      
      If the functions are to be defined (not all architectures need them)
      then struct vm_area_struct must be defined first.  So include
      <linux/mm_types.h>.
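      
      The fix is a one-line include at the top of the header, roughly:
      
      		#include <linux/mm_types.h>	/* completes struct vm_area_struct for
      						 * inlines such as ptep_test_and_clear_young(),
      						 * which dereference vma->vm_mm */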
      
      Fixes a build failure seen in Debian:
      
          CC [M]  drivers/media/dvb/mantis/mantis_pci.o
        In file included from arch/arm/include/asm/pgtable.h:460,
                         from drivers/media/dvb/mantis/mantis_pci.c:25:
        include/asm-generic/pgtable.h: In function 'ptep_test_and_clear_young':
        include/asm-generic/pgtable.h:29: error: dereferencing pointer to incomplete type
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fbd71844
  7. 17 Jan, 2011 · 1 commit
    • fix non-x86 build failure in pmdp_get_and_clear · b3697c02
      Committed by Andrea Arcangeli
      pmdp_get_and_clear/pmdp_clear_flush/pmdp_splitting_flush were trapped as
      BUG() and they were defined only to diminish the risk of build issues on
      non-x86 archs and to be consistent with the generic pte methods previously
      defined in include/asm-generic/pgtable.h.
      
      But they are causing more trouble than they were supposed to solve, so
      it's simpler not to define them when THP is off.
      
      This also corrects the export of pmdp_splitting_flush, which is
      currently unused (x86 isn't using the generic implementation in
      mm/pgtable-generic.c and no other arch needs it [yet]).
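      
      One plausible shape of the result in <asm-generic/pgtable.h> (a sketch,
      not the literal diff):
      
      		#ifdef CONFIG_TRANSPARENT_HUGEPAGE
      		extern pmd_t pmdp_get_and_clear(struct mm_struct *mm,
      						unsigned long address, pmd_t *pmdp);
      		extern pmd_t pmdp_clear_flush(struct vm_area_struct *vma,
      					      unsigned long address, pmd_t *pmdp);
      		extern void pmdp_splitting_flush(struct vm_area_struct *vma,
      						 unsigned long address, pmd_t *pmdp);
      		#endif /* no BUG() stubs when THP is off */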
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b3697c02
  8. 14 Jan, 2011 · 2 commits
  9. 25 Oct, 2010 · 1 commit
  10. 24 Aug, 2010 · 1 commit
    • x86, mm: Avoid unnecessary TLB flush · 61c77326
      Committed by Shaohua Li
      On x86, the access and dirty bits are set automatically by the CPU when
      it accesses memory.  By the time we reach the flush_tlb_fix_spurious_fault()
      code path below, the dirty bit has already been set for the pte, so no TLB
      flush is needed.  This might mean the TLB entry on some CPUs doesn't have
      the dirty bit set, but that doesn't matter: when those CPUs write to the
      page, they check the bit automatically, with no software involvement.
      
      On the other hand, flushing the TLB at this point is harmful.  A test
      creates as many threads as CPUs; each thread writes to the same randomly
      chosen address within the same vma range, and we measure the total time.
      On a 4-socket system, the original time is 1.96s, while with the patch
      it is 0.8s.  On a 2-socket system there is a 20% time cut too.  perf
      shows a lot of time is spent sending and handling IPIs for the TLB flush.
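      
      The mechanism is an arch override of the generic fixup hook; a sketch:
      
      		/* include/asm-generic/pgtable.h: default keeps flushing */
      		#ifndef flush_tlb_fix_spurious_fault
      		#define flush_tlb_fix_spurious_fault(vma, address) \
      			flush_tlb_page(vma, address)
      		#endif
      
      		/* arch/x86: the CPU rechecks and sets the dirty bit itself
      		 * on the next write, so the flush can be dropped */
      		#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)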
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      LKML-Reference: <20100816011655.GA362@sli10-desk.sh.intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      61c77326
  11. 23 Jun, 2009 · 1 commit
    • asm-generic: add dummy pgprot_noncached() · 0634a632
      Committed by Paul Mundt
      Most architectures now provide a pgprot_noncached(); the
      remaining ones can simply use a dummy default implementation,
      except for cris and xtensa, which should override the
      default appropriately.
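      
      The dummy default is just the identity, along the lines of:
      
      		#ifndef pgprot_noncached
      		#define pgprot_noncached(prot)	(prot)
      		#endif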
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Magnus Damm <magnus.damm@gmail.com>
      0634a632
  12. 30 Mar, 2009 · 2 commits
  13. 14 Jan, 2009 · 1 commit
  14. 20 Dec, 2008 · 1 commit
  15. 19 Dec, 2008 · 1 commit
  16. 16 Jul, 2008 · 1 commit
    • mm: fix build on non-mmu machines · fe1a6875
      Committed by Sebastian Siewior
      Commit 1ea0704e, aka "mm: add a ptep_modify_prot transaction
      abstraction", caused:
      
      |  CC      init/main.o
      |In file included from include2/asm/pgtable.h:68,
      |                 from /home/bigeasy/git/linux-2.6-m68k/include/linux/mm.h:39,
      |                 from include2/asm/uaccess.h:8,
      |                 from /home/bigeasy/git/linux-2.6-m68k/include/linux/poll.h:13,
      |                 from /home/bigeasy/git/linux-2.6-m68k/include/linux/rtc.h:113,
      |                 from /home/bigeasy/git/linux-2.6-m68k/include/linux/efi.h:19,
      |                 from /home/bigeasy/git/linux-2.6-m68k/init/main.c:43:
      |/linux-2.6/include/asm-generic/pgtable.h: In function '__ptep_modify_prot_start':
      |/linux-2.6/include/asm-generic/pgtable.h:209: error: implicit declaration of function 'ptep_get_and_clear'
      |/linux-2.6/include/asm-generic/pgtable.h:209: error: incompatible types in return
      |/linux-2.6/include/asm-generic/pgtable.h: In function '__ptep_modify_prot_commit':
      |/linux-2.6/include/asm-generic/pgtable.h:220: error: implicit declaration of function 'set_pte_at'
      |make[2]: *** [init/main.o] Error 1
      |make[1]: *** [init] Error 2
      |make: *** [sub-make] Error 2
      
      on my m68knommu box.
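      
      The shape of the fix (assuming, as the error output suggests, that the
      pte helpers simply have no meaning without an MMU) is to compile them
      out on non-MMU configurations:
      
      		#ifdef CONFIG_MMU
      		/* ... __ptep_modify_prot_start()/__ptep_modify_prot_commit(),
      		 *     which need ptep_get_and_clear() and set_pte_at() ... */
      		#endif /* CONFIG_MMU */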
      Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe1a6875
  17. 25 Jun, 2008 · 1 commit
    • mm: add a ptep_modify_prot transaction abstraction · 1ea0704e
      Committed by Jeremy Fitzhardinge
      This patch adds an API for doing read-modify-write updates to a pte's
      protection bits which may race against hardware updates to the pte.
      After reading the pte, the hardware may asynchronously set the accessed
      or dirty bits on a pte, which would be lost when writing back the
      modified pte value.
      
      The existing technique to handle this race is to use
      ptep_get_and_clear() to atomically fetch the old pte value and clear it
      in memory.  This has the effect of marking the pte as non-present,
      which will prevent the hardware from updating its state.  When the new
      value is written back, the pte will be present again, and the hardware
      can resume updating the access/dirty flags.
      
      When running in a virtualized environment, pagetable updates are
      relatively expensive, since they generally involve some trap into the
      hypervisor.  To mitigate the cost of these updates, we tend to batch
      them.
      
      However, because of the atomic nature of ptep_get_and_clear(), it is
      inherently non-batchable.  This new interface allows batching by
      giving the underlying implementation enough information to open a
      transaction between the read and write phases:
      
      ptep_modify_prot_start() returns the current pte value, and puts the
        pte entry into a state where either the hardware will not update the
        pte, or if it does, the updates will be preserved on commit.
      
      ptep_modify_prot_commit() writes back the updated pte, making sure that
        any hardware updates made since ptep_modify_prot_start() are
        preserved.
      
      ptep_modify_prot_start() and _commit() must be exactly paired, and
      used while holding the appropriate pte lock.  They do not protect
      against other software updates of the pte in any way.
      
      The current implementations of ptep_modify_prot_start and _commit are
      functionally unchanged from before: _start() uses ptep_get_and_clear() to
      fetch the pte and zero the entry, preventing any hardware updates.
      _commit() simply writes the new pte value back knowing that the
      hardware has not updated the pte in the meantime.
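      
      A sketch of those generic implementations:
      
      		static inline pte_t __ptep_modify_prot_start(struct mm_struct *mm,
      							     unsigned long addr,
      							     pte_t *ptep)
      		{
      			/*
      			 * Get the current pte state, but zero it out to make it
      			 * non-present, preventing the hardware from asynchronously
      			 * updating it.
      			 */
      			return ptep_get_and_clear(mm, addr, ptep);
      		}
      
      		static inline void __ptep_modify_prot_commit(struct mm_struct *mm,
      							     unsigned long addr,
      							     pte_t *ptep, pte_t pte)
      		{
      			/*
      			 * The pte is non-present, so there's no hardware state
      			 * to preserve.
      			 */
      			set_pte_at(mm, addr, ptep, pte);
      		}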
      
      The only current user of this interface is mprotect.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1ea0704e
  18. 17 Oct, 2007 · 1 commit
  19. 12 Aug, 2007 · 1 commit
  20. 18 Jul, 2007 · 2 commits
  21. 17 Jun, 2007 · 1 commit
  22. 27 Apr, 2007 · 1 commit
    • [S390] split page_test_and_clear_dirty. · 6c210482
      Committed by Martin Schwidefsky
      The page_test_and_clear_dirty primitive really consists of two
      operations, page_test_dirty and page_clear_dirty.  The combination
      of the two is not an atomic operation, so it makes more sense to have
      two separate operations instead of one.
      In addition to improving the readability of the s390 version of
      SetPageUptodate, this avoids the page_test_dirty operation, an
      insert-storage-key-extended (iske) instruction, which is expensive.
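      
      A sketch of what callers can do after the split (clear-only paths skip
      the expensive test; composed paths keep both halves):
      
      		/* clear-only caller, e.g. the s390 SetPageUptodate path:
      		 * no iske needed */
      		page_clear_dirty(page);
      
      		/* a caller that needs the old value still composes both */
      		if (page_test_dirty(page)) {
      			page_clear_dirty(page);
      			set_page_dirty(page);
      		}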
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      6c210482
  23. 09 Apr, 2007 · 1 commit
  24. 13 Feb, 2007 · 1 commit
    • [PATCH] i386: paravirt CPU hypercall batching mode · 9226d125
      Committed by Zachary Amsden
      The VMI ROM has a mode where hypercalls can be queued and batched.  This turns
      out to be a significant win during context switch, but must be done at a
      specific point before side effects to CPU state are visible to subsequent
      instructions.  This is similar to the MMU batching hooks already provided.
      The same hooks could be used by the Xen backend to implement a context switch
      multicall.
      
      To explain a bit more about lazy modes in the paravirt patches: the idea
      is that only one of lazy CPU or MMU mode can be active at any given time.
      Lazy MMU mode is similar to this lazy CPU mode, and allows for batching of
      multiple PTE updates (say, inside a remap loop), but to avoid keeping some
      kind of state machine about when to flush cpu or mmu updates, we just allow
      one or the other to be active.  Although there is no real reason a more
      comprehensive scheme could not be implemented, there is also no demonstrated
      need for this extra complexity.
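      
      On the generic side the hooks default to no-ops, roughly:
      
      		#ifndef __HAVE_ARCH_ENTER_LAZY_CPU_MODE
      		#define arch_enter_lazy_cpu_mode()	do {} while (0)
      		#define arch_leave_lazy_cpu_mode()	do {} while (0)
      		#endif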
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      9226d125
  25. 01 Oct, 2006 · 3 commits
    • [PATCH] paravirt: remove set pte atomic · a93cb055
      Committed by Zachary Amsden
      Now that ptep_establish has a definition in PAE i386 3-level paging code, the
      only paging model which is insane enough to have multi-word hardware PTEs
      which are not efficient to set atomically, we can remove the ghost of
      set_pte_atomic from other architectures which falsely duplicated it, and
      remove all knowledge of it from the generic pgtable code.
      
      set_pte_atomic is now a private pte operator which is specific to i386.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a93cb055
    • [PATCH] paravirt: lazy mmu mode hooks.patch · 6606c3e0
      Committed by Zachary Amsden
      Implement lazy MMU update hooks which are SMP safe for both direct and shadow
      page tables.  The idea is that PTE updates and page invalidations while in
      lazy mode can be batched into a single hypercall.  We use this in VMI for
      shadow page table synchronization, and it is a win.  It also can be used by
      PPC and for direct page tables on Xen.
      
      For SMP, the enter / leave must happen under protection of the page table
      locks for page tables which are being modified.  This is because otherwise,
      you end up with stale state in the batched hypercall, which other CPUs can
      race ahead of.  Doing this under the protection of the locks guarantees the
      synchronization is correct, and also means that spurious faults which are
      generated during this window by remote CPUs are properly handled, as the page
      fault handler must re-check the PTE under protection of the same lock.
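      
      A sketch of the hooks and their intended use (no-op defaults, with the
      enter/leave pair kept inside the page table lock as described):
      
      		#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
      		#define arch_enter_lazy_mmu_mode()	do {} while (0)
      		#define arch_leave_lazy_mmu_mode()	do {} while (0)
      		#endif
      
      		spin_lock(&mm->page_table_lock);
      		arch_enter_lazy_mmu_mode();
      		/* ... pte updates and invalidations get queued ... */
      		arch_leave_lazy_mmu_mode();	/* single hypercall issued here */
      		spin_unlock(&mm->page_table_lock);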
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      6606c3e0
    • [PATCH] paravirt: pte clear not present · 9888a1ca
      Committed by Zachary Amsden
      Change pte_clear_full to a more appropriately named pte_clear_not_present,
      allowing optimizations when not-present mapping changes need not be reflected
      in the hardware TLB for protected page table modes.  There is also another
      case that can use it in the fremap code.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9888a1ca
  26. 26 Sep, 2006 · 1 commit
  27. 02 Jun, 2006 · 1 commit
    • [SPARC64]: Fix D-cache corruption in mremap · 0b0968a3
      Committed by David S. Miller
      If we move a mapping from one virtual address to another,
      and this changes the virtual color of the mapping to those
      pages, we can see corrupt data due to D-cache aliasing.
      
      Check for and deal with this by overriding the move_pte()
      macro.  Set things up so that other platforms can cleanly
      override the move_pte() macro too.
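      
      On the generic side this is just an overridable hook with an identity
      default, roughly:
      
      		#ifndef __HAVE_ARCH_MOVE_PTE
      		#define move_pte(pte, prot, old_addr, new_addr)	(pte)
      		#endif
      
      sparc64 then supplies its own move_pte() which flushes the D-cache when
      the old and new virtual addresses differ in the cache-color bits.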
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0b0968a3
  28. 07 Nov, 2005 · 1 commit
  29. 30 Oct, 2005 · 1 commit
  30. 28 Sep, 2005 · 1 commit
    • [PATCH] mm: move_pte to remap ZERO_PAGE · 8b1f3124
      Committed by Nick Piggin
      Move the ZERO_PAGE remapping complexity to the move_pte macro in
      asm-generic, have it conditionally depend on
      __HAVE_ARCH_MULTIPLE_ZERO_PAGE, which gets defined for MIPS.
      
      For architectures without __HAVE_ARCH_MULTIPLE_ZERO_PAGE, move_pte becomes
      a noop.
      
      From: Hugh Dickins <hugh@veritas.com>
      
      Fix nasty little bug we've missed in Nick's mremap move ZERO_PAGE patch.
      The "pte" at that point may be a swap entry or a pte_file entry: we must
      check pte_present before perhaps corrupting such an entry.
      
      Patch below against 2.6.14-rc2-mm1, but the same bug is in 2.6.14-rc2's
      mm/mremap.c, and more dangerous there since it's affecting all arches: I
      think the safest course is to send Nick's patch and Yoichi's build fix and
      this fix (build tested) on to Linus - so only MIPS can be affected.
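      
      A sketch of the macro with the pte_present() check folded in:
      
      		#ifdef __HAVE_ARCH_MULTIPLE_ZERO_PAGE
      		#define move_pte(pte, prot, old_addr, new_addr)			\
      		({								\
      			pte_t newpte = (pte);					\
      			if (pte_present(pte) &&	/* not a swap/file entry */	\
      			    pte_page(pte) == ZERO_PAGE(old_addr))		\
      				newpte = mk_pte(ZERO_PAGE(new_addr), (prot));	\
      			newpte;							\
      		})
      		#else
      		#define move_pte(pte, prot, old_addr, new_addr)	(pte)
      		#endif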
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8b1f3124
  31. 05 Sep, 2005 · 1 commit
    • [PATCH] x86: ptep_clear optimization · a600388d
      Committed by Zachary Amsden
      Add a new accessor for PTEs, which passes the full hint from the mmu_gather
      struct; this allows architectures with hardware pagetables to optimize away
      atomic PTE operations when destroying an address space.  Removing the
      locked operation should allow better pipelining of memory access in this
      loop.  I measured an average savings of 30-35 cycles per zap_pte_range on
      the first 500 destructions on Pentium-M, but I believe the optimization
      would win more on older processors which still assert the bus lock on xchg
      for an exclusive cacheline.
      
      Update: I made some new measurements, and this saves exactly 26 cycles over
      ptep_get_and_clear on Pentium M.  On P4, with a PAE kernel, this saves 180
      cycles per ptep_get_and_clear, for a whopping 92160 cycles savings for a
      full address space destruction.
      
      pte_clear_full is not yet used, but is provided for future optimizations
      (in particular, when running inside of a hypervisor that queues page table
      updates, the full hint allows us to avoid queueing unnecessary page table
      updates for an address space in the process of being destroyed).
      
      This is not a huge win, but it does help a bit, and sets the stage for
      further hypervisor optimization of the mm layer on all architectures.
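      
      The generic fallback just ignores the hint, roughly:
      
      		#ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
      		#define ptep_get_and_clear_full(__mm, __address, __ptep, __full) \
      			ptep_get_and_clear(__mm, __address, __ptep)
      		#endif
      
      Architectures with hardware pagetables can test the full hint (zap_pte_range
      passes tlb->fullmm) and skip the atomic xchg when the whole mm is going away.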
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Cc: Christoph Lameter <christoph@lameter.com>
      Cc: <linux-mm@kvack.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a600388d
  32. 22 Jun, 2005 · 1 commit
    • [PATCH] msync: check pte dirty earlier · b4955ce3
      Committed by Abhijit Karmarkar
      It's common practice to msync a large address range regularly, in which
      often only a few ptes have actually been dirtied since the previous pass.
      
      sync_pte_range then goes much faster if it tests whether pte is dirty
      before locating and accessing each struct page cacheline; and it is hardly
      slowed by ptep_clear_flush_dirty repeating that test in the opposite case,
      when every pte actually is dirty.
      
      But beware, s390's pte_dirty always says false, since its dirty bit is kept
      in the storage key, located via the struct page address.  So skip this
      optimization in its case: use a pte_maybe_dirty macro which just says true
      if page_test_and_clear_dirty is implemented.
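      
      The macro reduces to a constant where the storage-key scheme is in use,
      along the lines of:
      
      		#ifdef __HAVE_ARCH_PAGE_TEST_AND_CLEAR_DIRTY
      		#define pte_maybe_dirty(pte)	(1)	/* pte_dirty() says false on s390 */
      		#else
      		#define pte_maybe_dirty(pte)	pte_dirty(pte)
      		#endif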
      Signed-off-by: Abhijit Karmarkar <abhijitk@veritas.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b4955ce3
  33. 20 Apr, 2005 · 1 commit
  34. 17 Apr, 2005 · 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4