1. 28 October 2010, 2 commits
  2. 27 October 2010, 2 commits
    • mm: stack based kmap_atomic() · 3e4d3af5
      Peter Zijlstra authored
      Keep the current interface but ignore the KM_type and use a stack based
      approach.
      
      The advantage is that we get rid of crappy code like:
      
      	#define __KM_PTE			\
      		(in_nmi() ? KM_NMI_PTE : 	\
      		 in_irq() ? KM_IRQ_PTE :	\
      		 KM_PTE0)
      
      and in general can stop worrying about what context we're in and what kmap
      slots might be appropriate for that.
      
      The downside is that FRV kmap_atomic() gets more expensive.
      
      For now we use a CPP trick suggested by Andrew:
      
        #define kmap_atomic(page, args...) __kmap_atomic(page)
      
      to avoid having to touch all kmap_atomic() users in a single patch.
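      
      A minimal standalone sketch of that trick (illustrative names only, not
      the kernel's exact code): the variadic macro forwards just the page
      argument, so old call sites that still pass a KM_* slot keep compiling
      while the slot is silently ignored.
      
        struct page { int id; };                 /* stand-in for the kernel's struct page */
        enum km_type { KM_USER0, KM_IRQ_PTE };   /* legacy slot names, now ignored */
        
        static void *__demo_kmap_atomic(struct page *page)
        {
                /* The real code pushes a per-CPU stack slot; here we just fake it. */
                return page;
        }
        
        /* The transitional CPP trick: swallow any trailing KM_* argument. */
        #define kmap_atomic(page, args...) __demo_kmap_atomic(page)
        
        int main(void)
        {
                struct page p = { .id = 1 };
                void *a = kmap_atomic(&p, KM_USER0);  /* old-style call still compiles */
                void *b = kmap_atomic(&p);            /* new-style call */
                return a == b ? 0 : 1;
        }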
      
      [ not compiled on:
        - mn10300: the arch doesn't actually build with highmem to begin with ]
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dave Airlie <airlied@linux.ie>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: strictly nested kmap_atomic() · 61ecdb80
      Peter Zijlstra authored
      Ensure kmap_atomic() usage is strictly nested.
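      
      With the stack-based kmap_atomic() of the companion patch, mappings have
      to be released in the reverse order they were taken. A toy userspace
      model of why (purely illustrative, not kernel code):
      
        #include <assert.h>
        
        /* Toy model of a stack-based kmap_atomic: each map pushes a slot, each
         * unmap must pop the most recent one, hence strictly nested usage. */
        #define KM_SLOTS 4
        static void *slot[KM_SLOTS];
        static int   depth;
        
        static void *toy_kmap_atomic(void *page)
        {
                assert(depth < KM_SLOTS);
                slot[depth++] = page;            /* "map" = push */
                return page;
        }
        
        static void toy_kunmap_atomic(void *addr)
        {
                assert(depth > 0);
                assert(slot[depth - 1] == addr); /* must unmap in LIFO order */
                depth--;
        }
        
        int main(void)
        {
                int page_a, page_b;
                void *a = toy_kmap_atomic(&page_a);
                void *b = toy_kmap_atomic(&page_b);
                toy_kunmap_atomic(b);            /* innermost mapping first */
                toy_kunmap_atomic(a);
                return 0;
        }
      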
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 10 August 2010, 2 commits
    • gcc-4.6: mm: fix unused but set warnings · 4e60c86b
      Andi Kleen authored
      No real bugs, just some dead code and some fixups.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kmap_atomic: make kunmap_atomic() harder to misuse · 597781f3
      Cesar Eduardo Barros authored
      kunmap_atomic() is currently at level -4 on Rusty's "Hard To Misuse"
      list[1] ("Follow common convention and you'll get it wrong"), except in
      some architectures when CONFIG_DEBUG_HIGHMEM is set[2][3].
      
      kunmap() takes a pointer to a struct page; kunmap_atomic(), however, takes
      a pointer to within the page itself.  This seems to trip people up once in
      a while (the convention they are following is the one from kunmap()).
      
      Make it much harder to misuse, by moving it to level 9 on Rusty's list[4]
      ("The compiler/linker won't let you get it wrong").  This is done by
      refusing to build if the type of its first argument is a pointer to a
      struct page.
      
      The real kunmap_atomic() is renamed to kunmap_atomic_notypecheck()
      (which is what you would call if, for some strange reason, calling it
      with a pointer to a struct page is not incorrect in your code).
      
      The previous version of this patch was compile tested on x86-64.
      
      [1] http://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html
      [2] In these cases, it is at level 5, "Do it right or it will always
          break at runtime."
      [3] At least mips and powerpc look very similar, and sparc also seems to
          share a common ancestor with both; there seems to be quite some
          degree of copy-and-paste coding here. The include/asm/highmem.h file
          for these three archs mentions x86 CPUs at its top.
      [4] http://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html
      [5] As an aside, could someone tell me why mn10300 uses unsigned long as
          the first parameter of kunmap_atomic() instead of void *?
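      
      A minimal sketch of that compile-time refusal (macro names and the
      BUILD_BUG_ON variant are illustrative, not the kernel's exact
      definitions):
      
        struct page { int id; };
        
        /* Stand-in for the renamed, unchecked primitive. */
        static void kunmap_atomic_notypecheck(void *addr)
        {
                (void)addr;
        }
        
        /* Break the build (negative array size) if the argument is a
         * struct page *, which is the classic misuse. */
        #define BUILD_BUG_ON_ZERO(e) (sizeof(char[1 - 2 * !!(e)]) - 1)
        #define kunmap_atomic(addr) do {                                        \
                (void)BUILD_BUG_ON_ZERO(__builtin_types_compatible_p(           \
                                typeof(addr), struct page *));                   \
                kunmap_atomic_notypecheck(addr);                                 \
        } while (0)
        
        int main(void)
        {
                struct page pg = { 0 };
                char *mapped = (char *)&pg;   /* pretend this came from kmap_atomic() */
        
                kunmap_atomic(mapped);        /* fine: a pointer into the mapping */
                /* kunmap_atomic(&pg); */     /* would refuse to compile */
                return 0;
        }
      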
      Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.net>
      Cc: Russell King <linux@arm.linux.org.uk> (arch/arm)
      Cc: Ralf Baechle <ralf@linux-mips.org> (arch/mips)
      Cc: David Howells <dhowells@redhat.com> (arch/frv, arch/mn10300)
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com> (arch/mn10300)
      Cc: Kyle McMartin <kyle@mcmartin.ca> (arch/parisc)
      Cc: Helge Deller <deller@gmx.de> (arch/parisc)
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> (arch/parisc)
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> (arch/powerpc)
      Cc: Paul Mackerras <paulus@samba.org> (arch/powerpc)
      Cc: "David S. Miller" <davem@davemloft.net> (arch/sparc)
      Cc: Thomas Gleixner <tglx@linutronix.de> (arch/x86)
      Cc: Ingo Molnar <mingo@redhat.com> (arch/x86)
      Cc: "H. Peter Anvin" <hpa@zytor.com> (arch/x86)
      Cc: Arnd Bergmann <arnd@arndb.de> (include/asm-generic)
      Cc: Rusty Russell <rusty@rustcorp.com.au> ("Hard To Misuse" list)
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 25 May 2010, 1 commit
  5. 26 January 2010, 1 commit
    • mm: add coherence API for DMA to vmalloc/vmap areas · 9df5f741
      James Bottomley authored
      On Virtually Indexed architectures (which don't do automatic alias
      resolution in their caches), we have to flush via the correct
      virtual address to prepare pages for DMA.  On some architectures
      (like ARM) we cannot prevent the CPU from doing data move-in along
      the alias (and thus reading stale data), so we not only have to
      introduce a flush API to push dirty cache lines out, but also an
      invalidate API to kill inconsistent cache lines that may have moved
      in before DMA changed the data.
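      
      A hedged usage sketch, assuming the flush/invalidate pair is exposed as
      flush_kernel_vmap_range() and invalidate_kernel_vmap_range() (the names
      used in mainline); do_dma_to() is a hypothetical stand-in for the
      device-specific transfer:
      
        #include <linux/highmem.h>
        #include <linux/vmalloc.h>
        
        static void dma_read_into_vmalloc(void *buf, int len)
        {
                /* Push any dirty lines for the kernel alias out to memory
                 * before the device overwrites the buffer. */
                flush_kernel_vmap_range(buf, len);
        
                do_dma_to(buf, len);            /* hypothetical device transfer */
        
                /* Kill lines the CPU may have pulled in along the alias while
                 * the DMA was in flight, so subsequent reads see the new data. */
                invalidate_kernel_vmap_range(buf, len);
        }
      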
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
  6. 12 January 2010, 1 commit
  7. 17 June 2009, 1 commit
  8. 04 April 2009, 1 commit
    • Fix highmem PPC build failure · 3688e07f
      Kumar Gala authored
      Commit f4112de6 ("mm: introduce
      debug_kmap_atomic") broke PPC builds with CONFIG_HIGHMEM=y:
      
         CC      init/main.o
        In file included from include/linux/highmem.h:25,
                         from include/linux/pagemap.h:11,
                         from include/linux/mempolicy.h:63,
                         from init/main.c:53:
        arch/powerpc/include/asm/highmem.h: In function 'kmap_atomic_prot':
        arch/powerpc/include/asm/highmem.h:98: error: implicit declaration of function 'debug_kmap_atomic'
        In file included from include/linux/pagemap.h:11,
                         from include/linux/mempolicy.h:63,
                         from init/main.c:53:
        include/linux/highmem.h: At top level:
        include/linux/highmem.h:196: warning: conflicting types for 'debug_kmap_atomic'
        include/linux/highmem.h:196: error: static declaration of 'debug_kmap_atomic' follows non-static declaration
        include/asm/highmem.h:98: error: previous implicit declaration of 'debug_kmap_atomic' was here
        make[1]: *** [init/main.o] Error 1
        make: *** [init] Error 2
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Acked-by: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 01 April 2009, 1 commit
  10. 28 November 2008, 1 commit
    • Allow architectures to override copy_user_highpage() · 487ff320
      Russell King authored
      With aliasing VIPT cache support, the ARM implementation of
      clear_user_page() and copy_user_page() sets up a temporary kernel space
      mapping such that we have the same cache colour as the userspace page.
      This avoids having to consider any userspace aliases from this operation.
      
      However, when highmem is enabled, kmap_atomic() has to set up mappings.
      copy_user_highpage() and clear_user_highpage() call kmap_atomic()
      before delegating the copies to copy_user_page() and clear_user_page().
      
      The effect of this is that each of the *_user_highpage() functions sets up
      its own kmap mapping, and then the *_user_page() functions set up another
      mapping.  This is rather wasteful.
      
      Thankfully, copy_user_highpage() can be overridden by architectures by
      defining __HAVE_ARCH_COPY_USER_HIGHPAGE.  However, replacement of
      clear_user_highpage() is more difficult because its inline definition
      is not conditional.  It seems that you're expected to define
      __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE and provide a replacement
      __alloc_zeroed_user_highpage() implementation instead.
      
      The allocation itself is fine, so we don't want to override that.  What
      we really want to do is to override clear_user_highpage() with our own
      version which doesn't kmap_atomic() unnecessarily.
      
      Other VIPT architectures (PARISC and SH) would also like to override
      this function as well.
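      
      Schematically, an architecture opts in like this (illustrative only, not
      the actual ARM code): define the guard in its <asm/page.h> and supply an
      implementation that can pick a kernel mapping of the same cache colour
      as vaddr.
      
        /* In the arch's <asm/page.h>: claim the hook. */
        #define __HAVE_ARCH_COPY_USER_HIGHPAGE
        
        struct page;
        struct vm_area_struct;
        
        /* The arch then provides its own copy_user_highpage(), avoiding the
         * generic kmap_atomic()-then-flush dance by choosing a correctly
         * coloured kernel mapping itself. */
        void copy_user_highpage(struct page *to, struct page *from,
                                unsigned long vaddr, struct vm_area_struct *vma);
      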
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  11. 06 February 2008, 2 commits
    • mm: fix PageUptodate data race · 0ed361de
      Nick Piggin authored
      After running SetPageUptodate, the preceding stores to the page contents
      (the ones that actually bring it uptodate) may not be ordered with the
      store that sets the page uptodate.
      
      Therefore, another CPU which sees PageUptodate as true and then reads the
      page contents can get stale data.
      
      Fix this by having an smp_wmb before SetPageUptodate, and smp_rmb after
      PageUptodate.
      
      Many places that test PageUptodate do so with the page locked, and this
      would be enough to ensure memory ordering in those places if
      SetPageUptodate were only called while the page is locked.  Unfortunately
      that is not always the case for some filesystems, but it could be an idea
      for the future.
      
      Also bring the handling of anonymous page uptodateness in line with that of
      file backed page management, by marking anon pages as uptodate when they
      _are_ uptodate, rather than when our implementation requires that they be
      marked as such.  Doing so allows us to get rid of the smp_wmb's in the page
      copying functions, which were especially added for anonymous pages for an
      analogous memory ordering problem.  Both file and anonymous pages are
      handled with the same barriers.
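      
      The barrier pairing being described, as a hedged sketch (the my_ prefix
      marks illustrative code; the real page-flag helpers carry more detail):
      
        /* Writer side: publish the page contents before the uptodate bit. */
        static inline void my_SetPageUptodate(struct page *page)
        {
                smp_wmb();                      /* contents before flag */
                set_bit(PG_uptodate, &page->flags);
        }
        
        /* Reader side: if the flag is seen set, later reads of the contents
         * must not be reordered before the flag check. */
        static inline int my_PageUptodate(struct page *page)
        {
                int ret = test_bit(PG_uptodate, &page->flags);
        
                if (ret)
                        smp_rmb();              /* flag before contents */
                return ret;
        }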
      
      FAQ:
      Q. Why not do this in flush_dcache_page?
      A. Firstly, flush_dcache_page handles only one side (the smp_wmb side) of the
      ordering protocol; we'd still need an smp_rmb somewhere. Secondly, hiding away
      memory barriers in a completely unrelated function is nasty; at least in the
      PageUptodate macros, they are located together with (half) the operations
      involved in the ordering. Thirdly, the smp_wmb is only required when first
      bringing the page uptodate, whereas flush_dcache_page should be called each time
      it is written to through the kernel mapping. It is logically the wrong place to
      put it.
      
      Q. Why does this increase my text size / reduce my performance / etc.?
      A. Because it is adding the necessary instructions to eliminate the data-race.
      
      Q. Can it be improved?
      A. Yes, e.g. if you were to create a rule that all SetPageUptodate operations
      run under the page lock, we could avoid the smp_rmb in places where PageUptodate
      is queried under the page lock. That requires an audit of all filesystems and at
      least some would need reworking. That's great you're interested; I'm eagerly
      awaiting your patches.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user · eebd2aa3
      Christoph Lameter authored
      Simplify page cache zeroing of segments of pages through three functions:
      
      zero_user_segments(page, start1, end1, start2, end2)
      
              Zeros two segments of the page.  It takes the positions where to
              start and end the zeroing, which avoids length calculations and
              makes the code clearer.
      
      zero_user_segment(page, start, end)
      
              Same for a single segment.
      
      zero_user(page, start, length)
      
              Length variant for the case where we know the length.
      
      We remove the zero_user_page macro. Issues:
      
      1. It's a macro. Inline functions are preferable.
      
      2. The KM_USER0 macro is only defined for HIGHMEM.
      
         Having to treat this special case everywhere makes the
         code needlessly complex. The parameter for zeroing is always
         KM_USER0 except in one single case that we open code.
      
      Avoiding KM_USER0 means a lot of code no longer has to deal with the
      special casing for HIGHMEM.  Dealing with kmap is only necessary for
      HIGHMEM configurations; in those configurations we use KM_USER0 like
      we do for a series of other functions defined in highmem.h.
      
      Since KM_USER0 depends on HIGHMEM, the existing zero_user_page
      function could not be a macro.  The zero_user_* functions introduced
      here can be inline because that constant is not used when these
      functions are called.
      
      Also extract the flushing of the caches to be outside of the kmap.
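      
      A hedged sketch of the single-segment helper on top of kmap_atomic (the
      my_ prefix marks illustrative code; at the time of this patch
      kmap_atomic/kunmap_atomic still took a KM_* slot, and the in-tree
      version also handles the two-segment case):
      
        static inline void my_zero_user_segment(struct page *page,
                                                unsigned start, unsigned end)
        {
                void *kaddr = kmap_atomic(page, KM_USER0);
        
                BUG_ON(end > PAGE_SIZE || end < start);
                memset(kaddr + start, 0, end - start);
                kunmap_atomic(kaddr, KM_USER0);
                flush_dcache_page(page);        /* cache flush kept outside the kmap */
        }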
      
      [akpm@linux-foundation.org: fix nfs and ntfs build]
      [akpm@linux-foundation.org: fix ntfs build some more]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: David Chinner <dgc@sgi.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 20 July 2007, 1 commit
  13. 18 July 2007, 1 commit
    • Add __GFP_MOVABLE for callers to flag allocations from high memory that may be migrated · 769848c0
      Mel Gorman authored
      It is often known at allocation time whether a page may be migrated or not.
      This patch adds a flag called __GFP_MOVABLE and a new mask called
      GFP_HIGH_MOVABLE.  Allocations using __GFP_MOVABLE can be either migrated
      using the page migration mechanism or reclaimed by syncing with backing
      storage and discarding.
      
      An API function very similar to alloc_zeroed_user_highpage() is added for
      __GFP_MOVABLE allocations called alloc_zeroed_user_highpage_movable().  The
      flags used by alloc_zeroed_user_highpage() are not changed because it would
      change the semantics of an existing API.  After this patch is applied there
      are no in-kernel users of alloc_zeroed_user_highpage() so it probably should
      be marked deprecated if this patch is merged.
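      
      A hedged example of how a caller flags such an allocation (plain page
      allocator calls, not code from this patch; the exact mask spellings have
      varied across kernel versions):
      
        #include <linux/gfp.h>
        
        /* User data the kernel is free to migrate or reclaim later. */
        static struct page *grab_movable_user_page(void)
        {
                return alloc_page(GFP_HIGHUSER | __GFP_MOVABLE);
        }
        
        /* Kernel data that must stay put: no __GFP_MOVABLE. */
        static struct page *grab_pinned_page(void)
        {
                return alloc_page(GFP_KERNEL);
        }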
      
      Note that this patch includes a minor cleanup to the use of __GFP_ZERO in
      shmem.c to keep all flag modifications to inode->mapping in the
      shmem_dir_alloc() helper function.  This clean-up suggestion is courtesy of
      Hugh Dickins.
      
      Additional credit goes to Christoph Lameter and Linus Torvalds for shaping the
      concept.  Credit to Hugh Dickins for catching issues with the shmem swap vector
      and ramfs allocations.
      
      [akpm@linux-foundation.org: build fix]
      [hugh@veritas.com: __GFP_ZERO cleanup]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 10 May 2007, 2 commits
  15. 05 May 2007, 1 commit
  16. 03 May 2007, 1 commit
    • [PATCH] i386: PARAVIRT: add kmap_atomic_pte for mapping highpte pages · ce6234b5
      Jeremy Fitzhardinge authored
      Xen and VMI both have special requirements when mapping a highmem pte
      page into the kernel address space.  These can be dealt with by adding
      a new kmap_atomic_pte() function for mapping highptes, and hooking it
      into the paravirt_ops infrastructure.
      
      Xen specifically wants to map the pte page RO, so this patch exposes a
      helper function, kmap_atomic_prot, which maps the page with the
      specified page protections.
      
      This also adds a kmap_flush_unused() function to clear out the cached
      kmap mappings.  Xen needs this to clear out any potential stray RW
      mappings of pages which will become part of a pagetable.
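      
      A hedged sketch of mapping a pagetable page read-only with the new
      helper (slot choice and surrounding code are illustrative; the
      paravirt hook plumbing is omitted):
      
        #include <linux/highmem.h>
        #include <asm/pgtable.h>
        
        /* Map a highmem pte page RO, the way a Xen-style hypervisor wants it.
         * At the time of this patch kmap_atomic_prot() took a KM_* slot. */
        static pte_t *map_pte_page_ro(struct page *pte_page)
        {
                return (pte_t *)kmap_atomic_prot(pte_page, KM_PTE0, PAGE_KERNEL_RO);
        }
        
        /* Before a page is handed over as a pagetable, drop any stale cached
         * RW kmap mappings of it. */
        static void prepare_pagetable_page(void)
        {
                kmap_flush_unused();
        }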
      
      [ Zach - vmi.c will need some attention after this patch.  It wasn't
        immediately obvious to me what needs to be done. ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Zachary Amsden <zach@vmware.com>
  17. 09 January 2007, 1 commit
    • [ARM] pass vma for flush_anon_page() · a6f36be3
      Russell King authored
      Since get_user_pages() may be used with processes other than the
      current process and calls flush_anon_page(), flush_anon_page() has to
      cope in some way with non-current processes.
      
      It may not be appropriate, or even desirable, to flush a region of
      virtual memory cache in the current process when that is different from
      the process that we want the flush to occur for.
      
      Therefore, pass the vma into flush_anon_page() so that the architecture
      can work out whether the 'vmaddr' is for the current process or not.
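      
      A hedged sketch of the shape an arch implementation can now take (the
      flush helpers are hypothetical and the logic is simplified relative to
      the real ARM code):
      
        static inline void flush_anon_page(struct vm_area_struct *vma,
                                           struct page *page, unsigned long vmaddr)
        {
                if (!PageAnon(page))
                        return;
        
                if (vma->vm_mm == current->active_mm)
                        __flush_user_alias(page, vmaddr);   /* hypothetical helper */
                else
                        __flush_kernel_alias(page);         /* hypothetical helper */
        }
      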
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  18. 14 December 2006, 2 commits
    • [PATCH] Pass vma argument to copy_user_highpage(). · 9de455b2
      Atsushi Nemoto authored
      To allow a more effective copy_user_highpage() on certain architectures,
      a vma argument is added to that function and to cow_user_page(), allowing
      the implementations of these functions to check for the VM_EXEC bit.
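      
      A hedged sketch of the kind of check the vma argument enables in an arch
      override (illustrative only; the icache helper is hypothetical and real
      MIPS code is more involved):
      
        void copy_user_highpage(struct page *to, struct page *from,
                                unsigned long vaddr, struct vm_area_struct *vma)
        {
                void *vfrom = kmap_atomic(from, KM_USER0);
                void *vto = kmap_atomic(to, KM_USER1);
        
                copy_page(vto, vfrom);
        
                /* Only executable mappings need the I-cache made coherent. */
                if (vma->vm_flags & VM_EXEC)
                        arch_sync_icache_for_page(to, vaddr);   /* hypothetical */
        
                kunmap_atomic(vto, KM_USER1);
                kunmap_atomic(vfrom, KM_USER0);
        }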
      
      The main part of this patch was originally written by Ralf Baechle;
      Atsushi Nemoto did the debugging.
      Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Fix COW D-cache aliasing on fork · 77fff4ae
      Atsushi Nemoto authored
      Problem:
      
      1. There is a process containing two threads (T1 and T2).  Thread T1
         calls fork(), so the dup_mmap() function is called in T1's context.
      
      static inline int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
      	...
      	flush_cache_mm(current->mm);
      	...	/* A */
      	(write-protect all Copy-On-Write pages)
      	...	/* B */
      	flush_tlb_mm(current->mm);
      	...
      
      2. When preemption happens between A and B (or on an SMP kernel), the
         thread T2 can run and modify data on COW pages without a page fault
         (the modified data will stay in the cache).
      
      3. Some time after fork() has completed, the thread T2 may take a page
         fault by writing to a write-protected COW page.
      
      4. The data of the COW page will then be copied to a newly allocated
         physical page (copy_cow_page()), which reads the data via the kernel
         mapping.  The kernel mapping can have a different 'colour' from the
         user space mapping of the thread T2 (dcache aliasing).  Therefore
         copy_cow_page() will copy stale data, and the modified data in the
         cache will be lost.
      
      To allow architecture code to deal with this problem, let it override
      copy_user_highpage() by defining __HAVE_ARCH_COPY_USER_HIGHPAGE in
      <asm/page.h>, as sketched below.
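      
      Roughly how the generic side accommodates that (a sketch of the guarded
      fallback in <linux/highmem.h>, shown with the vma argument from the
      companion patch; details differ from the in-tree code):
      
        #ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
        /* Generic version, compiled only when the arch has not claimed the hook. */
        static inline void copy_user_highpage(struct page *to, struct page *from,
                                              unsigned long vaddr,
                                              struct vm_area_struct *vma)
        {
                char *vfrom = kmap_atomic(from, KM_USER0);
                char *vto = kmap_atomic(to, KM_USER1);
        
                copy_user_page(vto, vfrom, vaddr, to);
        
                kunmap_atomic(vto, KM_USER1);
                kunmap_atomic(vfrom, KM_USER0);
        }
        #endif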
      
      The main part of this patch was originally written by Ralf Baechle;
      Atsushi Nemoto did the debugging.
      Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  19. 08 December 2006, 1 commit
  20. 26 September 2006, 2 commits
  21. 26 April 2006, 1 commit
  22. 27 March 2006, 2 commits
  23. 26 June 2005, 1 commit
  24. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!