1. 10 December 2009, 1 commit
  2. 07 December 2009, 1 commit
  3. 06 December 2009, 1 commit
  4. 05 December 2009, 1 commit
  5. 16 September 2009, 1 commit
  6. 12 August 2009, 1 commit
  7. 10 August 2009, 1 commit
  8. 12 July 2009, 1 commit
  9. 11 July 2009, 2 commits
  10. 18 June 2009, 4 commits
  11. 20 March 2009, 1 commit
  12. 12 March 2009, 2 commits
    • NFS: Throttle page dirtying while we're flushing to disk · 72cb77f4
      Trond Myklebust authored
      The following patch is a combination of a patch by myself and Peter
      Staubach.
      
      Trond: If we allow other processes to dirty pages while a process is doing
      a consistency sync to disk, we can end up never making progress.
      
      Peter: Attached is a patch which addresses a continuing problem with
      the NFS client generating out-of-order WRITE requests.  While this is
      compliant with all of the current protocol specifications, there are
      servers in the market which cannot handle out-of-order WRITE requests
      very well.  It may also lead to sub-optimal block allocations in the
      underlying file system on the server, which in turn can reduce read
      throughput when the file is later read back from the server.
      
      Peter: There has been a lot of work done recently to address out-of-order
      issues on a systemic level.  However, the NFS client is still susceptible
      to the problem.  Out-of-order WRITE requests can occur when pdflush is in
      the middle of writing out pages while the process dirtying the pages calls
      generic_file_buffered_write, which calls generic_perform_write, which
      calls balance_dirty_pages_ratelimited, which ends up calling
      writeback_inodes, which ends up calling back into the NFS client to write
      out dirty pages for the same file that pdflush happens to be working with.
      Signed-off-by: Peter Staubach <staubach@redhat.com>
      [modification by Trond to merge the two similar patches]
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      72cb77f4
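      A minimal sketch of the throttling idea described in this commit,
      assuming an NFS_INO_FLUSHING inode flag and an nfs_wait_bit_killable
      wait helper; the names and call sites below are inferred from the
      message above, not copied from the final patch:

          /* Called from ->write_begin(): a process that wants to dirty more
           * pages waits (killably) while a consistency sync is in progress. */
          static int nfs_throttle_dirtying(struct address_space *mapping)
          {
                  return wait_on_bit(&NFS_I(mapping->host)->flags,
                                     NFS_INO_FLUSHING,
                                     nfs_wait_bit_killable, TASK_KILLABLE);
          }

          /* Wrapped around the flush in nfs_writepages(): mark the inode as
           * flushing for the duration, then wake any throttled writers. */
          static int nfs_begin_flushing(struct inode *inode)
          {
                  return wait_on_bit_lock(&NFS_I(inode)->flags, NFS_INO_FLUSHING,
                                          nfs_wait_bit_killable, TASK_KILLABLE);
          }

          static void nfs_end_flushing(struct inode *inode)
          {
                  unsigned long *bitlock = &NFS_I(inode)->flags;

                  clear_bit_unlock(NFS_INO_FLUSHING, bitlock);
                  smp_mb__after_clear_bit();
                  wake_up_bit(bitlock, NFS_INO_FLUSHING);
          }

      With something like this, the process doing the consistency sync holds
      the flushing bit while it writes the file out in order, and other
      writers cannot keep dirtying pages behind it, which is what allowed the
      out-of-order WRITEs and the lack of forward progress described above.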
    • fb8a1f11
  13. 08 October 2008, 1 commit
  14. 16 July 2008, 1 commit
    • NFS: Remove BKL requirement from attribute updates · a3d01454
      Trond Myklebust authored
      The main problem is dealing with inode->i_size: we need to take
      inode->i_lock on all attribute updates, and so vmtruncate won't cut it.
      Make an NFS-private version of vmtruncate that has the necessary locking
      semantics.
      
      The result should be that the following inode attribute updates are
      protected by inode->i_lock
      	nfsi->cache_validity
      	nfsi->read_cache_jiffies
      	nfsi->attrtimeo
      	nfsi->attrtimeo_timestamp
      	nfsi->change_attr
      	nfsi->last_updated
      	nfsi->cache_change_attribute
      	nfsi->access_cache
      	nfsi->access_cache_entry_lru
      	nfsi->access_cache_inode_lru
      	nfsi->acl_access
      	nfsi->acl_default
      	nfsi->nfs_page_tree
      	nfsi->ncommit
      	nfsi->npages
      	nfsi->open_files
      	nfsi->silly_list
      	nfsi->acl
      	nfsi->open_states
      	inode->i_size
      	inode->i_atime
      	inode->i_mtime
      	inode->i_ctime
      	inode->i_nlink
      	inode->i_uid
      	inode->i_gid
      
      The following is protected by dir->i_mutex
      	nfsi->cookieverf
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      a3d01454
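      A sketch of what an NFS-private vmtruncate with these locking semantics
      might look like, assuming the i_size update is the part that must happen
      under inode->i_lock; the RLIMIT_FSIZE/s_maxbytes checks of the real
      helper are omitted and the body is illustrative, not the actual patch:

          static void nfs_vmtruncate(struct inode *inode, loff_t offset)
          {
                  struct address_space *mapping = inode->i_mapping;

                  /* Unlike plain vmtruncate(), the size change itself is
                   * serialized against other attribute updates by i_lock. */
                  spin_lock(&inode->i_lock);
                  i_size_write(inode, offset);
                  spin_unlock(&inode->i_lock);

                  /* Tearing down mappings and page cache beyond the new EOF
                   * does not need to happen under the spinlock. */
                  unmap_mapping_range(mapping, offset + PAGE_SIZE - 1, 0, 1);
                  truncate_inode_pages(mapping, offset);
          }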
  15. 10 July 2008, 6 commits
  16. 24 June 2008, 1 commit
  17. 17 May 2008, 1 commit
  18. 20 April 2008, 3 commits
  19. 20 March 2008, 2 commits
    • nfs: nfs_redirty_request · 6d884e8f
      Fred authored
      Both flush functions have the same error handling routine.  Pull
      it out as a function.
      Signed-off-by: Fred Isaman <iisaman@citi.umich.edu>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      6d884e8f
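      A sketch of the factored-out helper, with the body inferred from the
      description above (redirty the request and undo the per-page writeback
      state); the exact contents are an assumption:

          static void nfs_redirty_request(struct nfs_page *req)
          {
                  /* Put the request back on the dirty list so it is retried */
                  nfs_mark_request_dirty(req);
                  /* Undo the writeback state taken for this failed attempt */
                  nfs_end_page_writeback(req->wb_page);
                  nfs_clear_page_tag_locked(req);
          }

      Both flush paths can then call this one helper instead of open-coding
      the same error handling.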
    • nfs: don't ignore return value from nfs_pageio_add_request · f8512ad0
      Fred Isaman authored
      Ignoring the return value from nfs_pageio_add_request can cause deadlocks.
      
      In read path:
        call nfs_pageio_add_request from readpage_async_filler
        assume at this point that there are requests already in desc, that
          can't be merged with the current request.
        so nfs_pageio_doio is fired up to clear out desc.
        assume something goes wrong in setting up the io, so desc->pg_error is set.
        This causes nfs_pageio_add_request to return 0, *WITHOUT* adding the original
          request.
        BUT, since return code is ignored, readpage_async_filler assumes it has
          been added, and does nothing further, leaving page locked.
        do_generic_mapping_read will eventually call lock_page, resulting in deadlock
      
      In write path:
        page is marked dirty by generic_perform_write
        nfs_writepages is called
        call nfs_pageio_add_request from nfs_page_async_flush
        assume at this point that there are requests already in desc, that
          can't be merged with the current request.
        so nfs_pageio_doio is fired up to clear out desc.
        assume something goes wrong in setting up the io, so desc->pg_error is set.
        This causes nfs_page_async_flush to return 0, *WITHOUT* adding the original
          request, yet marking the request as locked (PG_BUSY) and in writeback,
          clearing dirty marks.
        The next time a write is done to the page, deadlock will result as
          nfs_write_end calls nfs_update_request
      Signed-off-by: Fred Isaman <iisaman@citi.umich.edu>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      f8512ad0
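      A sketch of the shape of the fix on the write side, as implied by the
      description: the return value of nfs_pageio_add_request() is checked,
      and on failure the request is redirtied and its locked/writeback state
      unwound (via a helper such as nfs_redirty_request above) rather than
      silently leaked. The wrapper name and exact error path are assumptions:

          static int nfs_do_add_request(struct nfs_pageio_descriptor *pgio,
                                        struct nfs_page *req)
          {
                  if (!nfs_pageio_add_request(pgio, req)) {
                          /* The request was NOT queued and pg_error is set.
                           * Redirty it and drop PG_BUSY/writeback here, or
                           * the page stays locked and the next read or write
                           * of it deadlocks, as described above. */
                          nfs_redirty_request(req);
                          return pgio->pg_error;
                  }
                  return 0;
          }

      The read path needs the analogous treatment in readpage_async_filler:
      on failure, pick up desc->pgio->pg_error and unlock the page instead of
      leaving it locked.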
  20. 08 March 2008, 1 commit
  21. 29 February 2008, 1 commit
  22. 26 February 2008, 3 commits
  23. 14 February 2008, 1 commit
  24. 08 February 2008, 1 commit
    • NFS: Fix a potential file corruption issue when writing · 5d47a356
      Trond Myklebust authored
      If the inode is flagged as having an invalid mapping, then we can't rely on
      the PageUptodate() flag. Ensure that we don't use the "anti-fragmentation"
      write optimisation in nfs_updatepage(), since that will cause NFS to write
      out areas of the page that are no longer guaranteed to be up to date.
      
      A potential corruption could occur in the following scenario:
      
      client 1			client 2
      ===============			===============
      				fd=open("f",O_CREAT|O_WRONLY,0644);
      				write(fd,"fubar\n",6);	// cache last page
      				close(fd);
      fd=open("f",O_WRONLY|O_APPEND);
      write(fd,"foo\n",4);
      close(fd);
      
      				fd=open("f",O_WRONLY|O_APPEND);
      				write(fd,"bar\n",4);
      				close(fd);
      -----
      The bug may lead to the file "f" reading 'fubar\n\0\0\0\nbar\n' because
      client 2 does not update the cached page after re-opening the file for
      write. Instead it keeps it marked as PageUptodate() until someone calls
      invalidate_inode_pages2() (typically by calling read()).
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      5d47a356
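      A sketch of the check implied by the description: PageUptodate() is only
      trusted when the mapping has not been flagged as invalid, and only then
      is the "anti-fragmentation" extension of the write to the whole page
      allowed in nfs_updatepage(). The specific cache_validity flags tested
      here are an assumption:

          static int nfs_write_pageuptodate(struct page *page,
                                            struct inode *inode)
          {
                  if (NFS_I(inode)->cache_validity &
                      (NFS_INO_REVAL_PAGECACHE | NFS_INO_INVALID_DATA))
                          return 0;   /* mapping may be stale: don't extend */
                  return PageUptodate(page) != 0;
          }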
  25. 06 February 2008, 1 commit
    • Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user · eebd2aa3
      Christoph Lameter authored
      Simplify page cache zeroing of segments of pages through 3 functions
      
      zero_user_segments(page, start1, end1, start2, end2)
      
              Zeros two segments of the page. It takes the positions at which
              to start and end the zeroing, which avoids length calculations
              and makes the code clearer.
      
      zero_user_segment(page, start, end)
      
              Same for a single segment.
      
      zero_user(page, start, length)
      
              Length variant for the case where we know the length.
      
      We remove the zero_user_page macro. Issues:
      
      1. It's a macro. Inline functions are preferable.
      
      2. The KM_USER0 macro is only defined for HIGHMEM.
      
         Having to treat this special case everywhere makes the
         code needlessly complex. The parameter for zeroing is always
         KM_USER0 except in one single case that we open code.
      
      Avoiding KM_USER0 means a lot of code no longer has to deal with
      the special casing for HIGHMEM. Dealing with kmap is only necessary
      for HIGHMEM configurations. In those configurations we use KM_USER0
      like we do for a series of other functions defined in highmem.h.
      
      Since KM_USER0 depends on HIGHMEM, the existing zero_user_page could
      not be made an inline function (hence the macro). The zero_user_*
      functions introduced here can be inline because that constant is not
      used when these functions are called.
      
      Also extract the flushing of the caches to be outside of the kmap.
      
      [akpm@linux-foundation.org: fix nfs and ntfs build]
      [akpm@linux-foundation.org: fix ntfs build some more]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: David Chinner <dgc@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eebd2aa3
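      A sketch of how the three helpers plausibly relate, with KM_USER0
      handled internally rather than at every call site and the cache flush
      done outside the kmap; this is illustrative, not the exact upstream
      code:

          static inline void zero_user_segments(struct page *page,
                          unsigned start1, unsigned end1,
                          unsigned start2, unsigned end2)
          {
                  void *kaddr = kmap_atomic(page, KM_USER0);

                  if (end1 > start1)
                          memset(kaddr + start1, 0, end1 - start1);
                  if (end2 > start2)
                          memset(kaddr + start2, 0, end2 - start2);
                  kunmap_atomic(kaddr, KM_USER0);
                  /* Flush after the kmap has been dropped */
                  flush_dcache_page(page);
          }

          static inline void zero_user_segment(struct page *page,
                          unsigned start, unsigned end)
          {
                  zero_user_segments(page, start, end, 0, 0);
          }

          static inline void zero_user(struct page *page,
                          unsigned start, unsigned length)
          {
                  zero_user_segment(page, start, start + length);
          }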