1. 06 Jan 2009 (1 commit)
    • add a vfs_fsync helper · 4c728ef5
      Committed by Christoph Hellwig
      Fsync currently has a fdatawrite/fdatawait pair around the method call,
      and a mutex_lock/unlock of the inode mutex.  All callers of fsync have
      to duplicate this, but we have a few and most of them don't quite get
      it right.  This patch adds a new vfs_fsync that takes care of this.
      It's a little more complicated than usual, as ->fsync might get a NULL file
      pointer and just a dentry from nfsd, but otherwise it gets a file and we
      want to take the mapping and file operations from it when it is there.
      
      Notes on the fsync callers:
      
       - ecryptfs wasn't calling filemap_fdatawrite / filemap_fdatawait on the
         lower file
       - coda wasn't calling filemap_fdatawrite / filemap_fdatawait on the host
         file, and was returning 0 when ->fsync was missing
       - shm was calling neither filemap_fdatawrite / filemap_fdatawait nor
         taking i_mutex.  Given that shared memory has no disk backing, doing
         nothing in fsync seems fine, and I left it out of the vfs_fsync
         conversion for now; but in that case we might not pass it through to
         the lower file at all and instead call the no-op simple_sync_file
         directly.
      
      [and now actually export vfs_fsync]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
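
      To make the duplicated pattern concrete, here is a sketch of the helper's
      shape as the message describes it (field and helper names match the
      2.6.28-era VFS; treat it as illustrative rather than the exact committed
      code):

      	int vfs_fsync(struct file *file, struct dentry *dentry, int datasync)
      	{
      		const struct file_operations *fop;
      		struct address_space *mapping;
      		int ret, err;

      		/* nfsd may call in with only a dentry and a NULL file */
      		if (file) {
      			mapping = file->f_mapping;
      			fop = file->f_op;
      		} else {
      			mapping = dentry->d_inode->i_mapping;
      			fop = dentry->d_inode->i_fop;
      		}

      		if (!fop || !fop->fsync)
      			return -EINVAL;

      		/* the fdatawrite/fdatawait pair and the i_mutex lock that
      		 * every caller previously duplicated around the method call */
      		ret = filemap_fdatawrite(mapping);

      		mutex_lock(&mapping->host->i_mutex);
      		err = fop->fsync(file, dentry, datasync);
      		if (!ret)
      			ret = err;
      		mutex_unlock(&mapping->host->i_mutex);

      		err = filemap_fdatawait(mapping);
      		if (!ret)
      			ret = err;
      		return ret;
      	}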
  2. 22 May 2007 (1 commit)
    • Detach sched.h from mm.h · e8edc6e0
      Committed by Alexey Dobriyan
      The first thing mm.h does is include sched.h, solely for the can_do_mlock()
      inline function, which dereferences "current".  By dealing with
      can_do_mlock(), mm.h can be detached from sched.h, which is good.  See
      below for why.
      
      This patch
      a) removes unconditional inclusion of sched.h from mm.h
      b) makes can_do_mlock() normal function in mm/mlock.c
      c) exports can_do_mlock() to not break compilation
      d) adds sched.h inclusions back to files that were getting it indirectly.
      e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were
         getting them indirectly
      
      Net result is:
      a) mm.h users would get less code to open, read, preprocess, parse, ... if
         they don't need sched.h
      b) sched.h stops being dependency for significant number of files:
         on x86_64 allmodconfig touching sched.h results in recompile of 4083 files,
         after patch it's only 3744 (-8.3%).
      
      Cross-compile tested on
      
      	all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
      	alpha alpha-up
      	arm
      	i386 i386-up i386-defconfig i386-allnoconfig
      	ia64 ia64-up
      	m68k
      	mips
      	parisc parisc-up
      	powerpc powerpc-up
      	s390 s390-up
      	sparc sparc-up
      	sparc64 sparc64-up
      	um-x86_64
      	x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
      
      as well as my two usual configs.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
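
      As a sketch of the move described in points a)-c) (the function body
      below follows the mlock code of that era; treat the details as
      illustrative):

      	/* include/linux/mm.h: a plain declaration, no sched.h required */
      	extern int can_do_mlock(void);

      	/* mm/mlock.c: the definition lives here, where dereferencing
      	 * "current" (and thus including sched.h) is unproblematic */
      	int can_do_mlock(void)
      	{
      		if (capable(CAP_IPC_LOCK))
      			return 1;
      		if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
      			return 1;
      		return 0;
      	}
      	EXPORT_SYMBOL(can_do_mlock);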
  3. 26 Sep 2006 (1 commit)
    • [PATCH] mm: msync() cleanup · 204ec841
      Committed by Peter Zijlstra
      With the tracking of dirty pages properly done now, msync doesn't need to scan
      the PTEs anymore to determine the dirty status.
      
      From: Hugh Dickins <hugh@veritas.com>
      
      In looking into that, I made some other tidyups: several #includes can be
      removed, and the sys_msync loop termination was not quite right.
      
      Most of those points are criticisms of the existing sys_msync, not of your
      patch.  In particular, the loop termination errors were introduced in
      2.6.17: I did notice this shortly before it came out, but decided I was
      more likely to get it wrong myself, and make matters worse, if I tried to
      rush in a last-minute fix.  And it's not terribly likely to go wrong, nor
      disastrous if it does (it may miss reporting an unmapped area; it may also
      fsync the file of a following vma).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 23 Jun 2006 (1 commit)
    • [PATCH] Kill PF_SYNCWRITE flag · b31dc66a
      Committed by Jens Axboe
      A process flag to indicate whether we are doing sync io is incredibly
      ugly. It also causes performance problems when one does a lot of async
      io and then proceeds to sync it. Part of the io will go out as async,
      and the other part as sync. This causes a disconnect between the
      previously submitted io and the synced io.  For io schedulers such as CFQ,
      this costs us merges and leads to suboptimal scheduling behaviour.
      
      Remove PF_SYNCWRITE completely from the fsync/msync paths, and let
      the O_DIRECT path just directly indicate that the writes are sync
      by using WRITE_SYNC instead.
      Signed-off-by: Jens Axboe <axboe@suse.de>
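
      To illustrate the two halves of the change (a sketch; the names come from
      the era's fsync and block layers, not the exact diff):

      	/* before: the fsync/msync paths flagged the whole task */
      	current->flags |= PF_SYNCWRITE;
      	ret = do_fsync(file, datasync);
      	current->flags &= ~PF_SYNCWRITE;

      	/* after: no process flag; the O_DIRECT path simply submits
      	 * its own bios as synchronous writes */
      	submit_bio(WRITE_SYNC, bio);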
  5. 25 Mar 2006 (1 commit)
  6. 24 Mar 2006 (4 commits)
    • [PATCH] msync(): use do_fsync() · 8f2e9f15
      Committed by Andrew Morton
      No need to duplicate all that code.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] msync: fix return value · 676758bd
      Committed by Andrew Morton
      msync() does a strange thing.  Essentially:
      
      	vma = find_vma();
      	for ( ; ; ) {
      		if (!vma)
      			return -ENOMEM;
      		...
      		vma = vma->vm_next;
      	}
      
      so an msync() request which starts within or before a valid VMA and which ends
      within or beyond the final VMA will incorrectly return -ENOMEM.
      
      Fix.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
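
      A sketch of the corrected loop shape (illustrative; hole handling and
      per-vma details elided):

      	vma = find_vma(mm, start);
      	for (;;) {
      		if (!vma)
      			return -ENOMEM;	/* start is genuinely unmapped */
      		/* ... sync the part of [start, end) this vma covers ... */
      		start = vma->vm_end;
      		if (start >= end)
      			return 0;	/* request satisfied: success, not -ENOMEM */
      		vma = vma->vm_next;
      	}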
    • [PATCH] msync(MS_SYNC): don't hold mmap_sem while syncing · 707c21c8
      Committed by Andrew Morton
      It seems bad to hold mmap_sem while performing synchronous disk I/O.  Alter
      the msync(MS_SYNC) code so that the lock is released while we sync the file.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
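
      A sketch of the reworked MS_SYNC tail under that scheme (hedged; helper
      names as in the era's mm/msync.c):

      	if ((flags & MS_SYNC) && file && (vma->vm_flags & VM_SHARED)) {
      		get_file(file);				/* pin the file... */
      		up_read(&current->mm->mmap_sem);	/* ...so the lock can go */
      		error = do_fsync(file, 0);		/* synchronous disk I/O here */
      		fput(file);
      		down_read(&current->mm->mmap_sem);
      		vma = find_vma(current->mm, start);	/* the map may have changed */
      	}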
    • [PATCH] msync(): perform dirty page levelling · 9c50823e
      Committed by Andrew Morton
      It seems sensible to perform dirty page throttling in msync: as the application
      dirties pages we can kick off pdflush early, or even force the msync() caller
      to perform writeout, or even throttle the msync() caller.
      
      The main effect of this is to start disk writeback earlier if we've just
      discovered that a large amount of pagecache has been dirtied.  (Otherwise it
      wouldn't happen for up to five seconds, until the next time pdflush wakes up.)
      
      It will also cause the page-dirtying process to get penalised for dirtying
      those pages rather than whacking someone else with the problem.
      
      We should do this for munmap() and possibly even exit(), too.
      
      We drop the mmap_sem while performing the dirty page balancing.  It doesn't
      seem right to hold mmap_sem for that long.
      
      Note that this patch only affects MS_ASYNC.  MS_SYNC will be syncing all the
      dirty pages anyway.
      
      We note that msync(MS_SYNC) does a full-file-sync inside mmap_sem, and always
      has.  We can fix that up...
      
      The patch also tightens up the mmap_sem coverage in sys_msync(): no point in
      taking it while we perform the incoming arg checking.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
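
      A sketch of where the levelling hooks in on the MS_ASYNC path (hedged;
      balance_dirty_pages_ratelimited() is the standard throttling entry point
      of the era, but the exact placement is illustrative):

      	/* after the pte walk has dirtied this vma's pagecache, drop
      	 * mmap_sem and let the caller be throttled like any other
      	 * dirtier, then re-look-up since the map may have changed */
      	up_read(&current->mm->mmap_sem);
      	balance_dirty_pages_ratelimited(file->f_mapping);
      	down_read(&current->mm->mmap_sem);
      	vma = find_vma(current->mm, start);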
  7. 10 Jan 2006 (1 commit)
  8. 29 Nov 2005 (1 commit)
    • mm: re-architect the VM_UNPAGED logic · 6aab341e
      Committed by Linus Torvalds
      This replaces the (in my opinion horrible) VM_UNPAGED logic with very
      explicit support for a "remapped page range" aka VM_PFNMAP.  It allows a
      VM area to contain an arbitrary range of page table entries that the VM
      never touches, and never considers to be normal pages.
      
      Any user of "remap_pfn_range()" automatically gets this new
      functionality, and doesn't even have to mark the pages reserved or
      indeed mark them any other way.  It just works.  As a side effect, doing
      mmap() on /dev/mem works for arbitrary ranges.
      
      Sparc update from David in the next commit.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
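
      For context, the typical driver mmap handler that now gets this behaviour
      automatically (a sketch; mydrv_mmap is a hypothetical name):

      	static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
      	{
      		/* remap_pfn_range() marks the vma VM_PFNMAP itself: no
      		 * marking pages reserved, no other special-casing needed */
      		return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
      				       vma->vm_end - vma->vm_start,
      				       vma->vm_page_prot);
      	}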
  9. 23 Nov 2005 (1 commit)
    • [PATCH] unpaged: VM_UNPAGED · 0b14c179
      Committed by Hugh Dickins
      Although we tend to associate VM_RESERVED with remap_pfn_range, quite a few
      drivers set VM_RESERVED on areas which are then populated by nopage.  The
      PageReserved removal in 2.6.15-rc1 changed VM_RESERVED not to free pages in
      zap_pte_range, without changing those drivers not to set it: so their pages
      just leak away.
      
      Let's not change miscellaneous drivers now: introduce VM_UNPAGED at the core,
      to flag the special areas where the ptes may have no struct page, or if they
      have then it's not to be touched.  Replace most instances of VM_RESERVED in
      core mm by VM_UNPAGED.  Force it on in remap_pfn_range, and the sparc and
      sparc64 io_remap_pfn_range.
      
      Revert addition of VM_RESERVED to powerpc vdso, it's not needed there.  Is it
      needed anywhere?  It still governs the mm->reserved_vm statistic, and special
      vmas not to be merged, and areas not to be core dumped; but could probably be
      eliminated later (the drivers are probably specifying it because in 2.4 it
      kept swapout off the vma, but in 2.6 we work from the LRU, which these pages
      don't get on).
      
      Use the VM_SHM slot for VM_UNPAGED, and define VM_SHM to 0: it serves no
      purpose whatsoever, and should be removed from drivers when we clean up.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: William Irwin <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
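
      A sketch of the flag shuffle in mm.h (the bit values follow 2.6.14-era
      headers; treat them as illustrative):

      	#define VM_SHM		0x00000000	/* means nothing now: remove from drivers later */
      	#define VM_UNPAGED	0x00000400	/* ptes may have no struct page, or it's off limits */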
  10. 30 Oct 2005 (4 commits)
    • [PATCH] mm: pte_offset_map_lock loops · 705e87c0
      Committed by Hugh Dickins
      Convert those common loops using page_table_lock on the outside and
      pte_offset_map within to use just pte_offset_map_lock within instead.
      
      These all hold mmap_sem (some exclusively, some not), so at no level can a
      page table be whipped away from beneath them.  But whereas pte_alloc loops
      tested with the "atomic" pmd_present, these loops are testing with pmd_none,
      which on i386 PAE tests both lower and upper halves.
      
      That's now unsafe, so add a cast into pmd_none to test only the vital lower
      half: we lose a little sensitivity to a corrupt middle directory, but not
      enough to worry about.  It appears that i386 and UML were the only
      architectures vulnerable in this way, and that pgd and pud are not a problem.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
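
      A sketch of the conversion pattern (pte_offset_map_lock() and
      pte_unmap_unlock() are the real helpers; the loop body is illustrative):

      	spinlock_t *ptl;
      	pte_t *pte;

      	/* before: page_table_lock held around the whole walk */
      	spin_lock(&mm->page_table_lock);
      	pte = pte_offset_map(pmd, addr);
      	do {
      		/* ... examine or update *pte ... */
      	} while (pte++, addr += PAGE_SIZE, addr != end);
      	pte_unmap(pte - 1);
      	spin_unlock(&mm->page_table_lock);

      	/* after: map and take the (possibly per-table) lock together */
      	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
      	do {
      		/* ... examine or update *pte ... */
      	} while (pte++, addr += PAGE_SIZE, addr != end);
      	pte_unmap_unlock(pte - 1, ptl);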
    • [PATCH] core remove PageReserved · b5810039
      Committed by Nick Piggin
      Remove PageReserved() calls from core code by tightening VM_RESERVED
      handling in mm/ to cover PageReserved functionality.
      
      PageReserved special casing is removed from get_page and put_page.
      
      All setting and clearing of PageReserved is retained, and it is now flagged
      in the page_alloc checks to help ensure we don't introduce any refcount
      based freeing of Reserved pages.
      
      MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
      deprecated.  We never completely handled it correctly anyway, and it can be
      reintroduced in the future if required (Hugh has a proof of concept).
      
      Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
      arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
      be trivially removed.
      
      Last real user of PageReserved is swsusp, which uses PageReserved to
      determine whether a struct page points to valid memory or not.  This still
      needs to be addressed (a generic page_is_ram() should work).
      
      A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
      thus mapcounted and counted towards shared rss).  These writes to the struct
      page could cause excessive cacheline bouncing on big systems.  There are a
      number of ways this could be addressed if it is an issue.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      
      Refcount bug fix for filemap_xip.c
      Signed-off-by: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
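
      The flavour of the special-casing being removed, as a sketch (close to,
      but not necessarily, the exact 2.6.14 source):

      	/* before: release paths refused refcount-based freeing of
      	 * reserved pages */
      	void put_page(struct page *page)
      	{
      		if (!PageReserved(page) && put_page_testzero(page))
      			__page_cache_release(page);
      	}

      	/* after: plain refcounting; tightened VM_RESERVED handling in
      	 * mm/ covers what the page flag test used to */
      	void put_page(struct page *page)
      	{
      		if (put_page_testzero(page))
      			__page_cache_release(page);
      	}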
    • [PATCH] mm: msync_pte_range progress · 0c942a45
      Committed by Hugh Dickins
      Use latency breaking in msync_pte_range like that in copy_pte_range, instead
      of the ugly CONFIG_PREEMPT filemap_msync alternatives.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
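
      A sketch of the latency-breaking pattern borrowed from copy_pte_range
      (hedged; the batch size and variable names are illustrative):

      	int progress = 0;

      	do {
      		if (progress >= 32) {
      			progress = 0;
      			/* give the lock up periodically rather than relying
      			 * on CONFIG_PREEMPT-only alternatives */
      			if (need_resched() || need_lockbreak(ptl))
      				break;	/* caller restarts from here */
      		}
      		progress++;
      		/* ... handle *pte ... */
      	} while (pte++, addr += PAGE_SIZE, addr != end);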
    • [PATCH] mm/msync.c cleanup · b57b98d1
      Committed by OGAWA Hirofumi
      This is not actually a problem, but sync_page_range() is already in use as
      an exported function for filesystems.

      The msync_xxx names are more readable, at least to me.
      Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 22 Jun 2005 (1 commit)
    • [PATCH] msync: check pte dirty earlier · b4955ce3
      Committed by Abhijit Karmarkar
      It's common practice to msync a large address range regularly, in which
      often only a few ptes have actually been dirtied since the previous pass.
      
      sync_pte_range then goes much faster if it tests whether pte is dirty
      before locating and accessing each struct page cacheline; and it is hardly
      slowed by ptep_clear_flush_dirty repeating that test in the opposite case,
      when every pte actually is dirty.
      
      But beware, s390's pte_dirty always says false, since its dirty bit is kept
      in the storage key, located via the struct page address.  So skip this
      optimization in its case: use a pte_maybe_dirty macro which just says true
      if page_test_and_clear_dirty is implemented.
      Signed-off-by: Abhijit Karmarkar <abhijitk@veritas.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
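
      A sketch of the reordered test (pte_maybe_dirty comes from the message
      above; the loop body is illustrative):

      	do {
      		/* cheap pte test first: skip the struct page cacheline
      		 * entirely when the pte cannot be dirty (pte_maybe_dirty
      		 * is simply true on s390, whose dirty bit lives in the
      		 * storage key) */
      		if (!pte_present(*pte) || !pte_maybe_dirty(*pte))
      			continue;
      		page = pfn_to_page(pte_pfn(*pte));
      		if (ptep_clear_flush_dirty(vma, addr, pte))
      			set_page_dirty(page);
      	} while (pte++, addr += PAGE_SIZE, addr != end);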
  12. 17 Apr 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!