1. 13 Jul 2011 (1 commit)
  2. 30 May 2011 (2 commits)
    • NFSv4.1: unify pnfs_pageio_init functions · dfed206b
      Committed by Benny Halevy
      Use common code for pnfs_pageio_init_{read,write} and use
      a common generic pg_test function.
      
      Note that this function always assumes that the layout driver's
      pg_test method is implemented.
      
      [Fix BUG]
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      dfed206b
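      A minimal, self-contained C sketch follows (not the kernel code; the types
      and names are hypothetical). It illustrates the idea behind a generic
      pg_test: the helper delegates unconditionally to the layout driver's hook,
      which is why every layout driver is assumed to implement one.

          #include <stdbool.h>
          #include <stddef.h>
          #include <stdio.h>

          struct pnfs_page;                       /* opaque in this sketch */

          struct layoutdriver_ops {
              /* may 'prev' and 'req' be coalesced into one I/O? */
              bool (*pg_test)(const struct pnfs_page *prev,
                              const struct pnfs_page *req);
          };

          /* Generic helper: no fallback logic, it simply delegates. */
          static bool generic_pg_test(const struct layoutdriver_ops *ld,
                                      const struct pnfs_page *prev,
                                      const struct pnfs_page *req)
          {
              /* No NULL check: the driver's pg_test is assumed to exist. */
              return ld->pg_test(prev, req);
          }

          static bool never_coalesce(const struct pnfs_page *prev,
                                     const struct pnfs_page *req)
          {
              (void)prev; (void)req;
              return false;                       /* toy driver: never merge */
          }

          int main(void)
          {
              struct layoutdriver_ops ld = { .pg_test = never_coalesce };
              printf("coalesce allowed: %d\n", generic_pg_test(&ld, NULL, NULL));
              return 0;
          }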
    • pnfs: Use byte-range for layoutget · fb3296eb
      Committed by Benny Halevy
      Add offset and count parameters to pnfs_update_layout and use them to get
      the layout in the pageio path.
      
      Order cache layout segments in the following order:
      * offset (ascending)
      * length (descending)
      * iomode (RW before READ)
      
      Test the byte range against the layout segment in use in pnfs_{read,write}_pg_test
      so as not to coalesce pages that do not use the same layout segment.
      
      [fix lseg ordering]
      [clean up pnfs_find_lseg lseg arg]
      [remove unnecessary FIXME]
      [fix ordering in pnfs_insert_layout]
      [clean up pnfs_insert_layout]
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      fb3296eb
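      A standalone C sketch with made-up types, illustrating the two pieces
      described above: the cache ordering of layout segments (offset ascending,
      length descending, RW before READ) and a byte-range test that decides
      whether an I/O at a given offset falls inside the segment in use.

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          enum iomode { IOMODE_READ = 1, IOMODE_RW = 2 };

          struct lseg_range {
              uint64_t offset;
              uint64_t length;              /* UINT64_MAX: to end of file */
              enum iomode iomode;
          };

          /* <0 if a sorts before b, >0 if after, 0 if they rank equally. */
          static int lseg_cmp(const struct lseg_range *a, const struct lseg_range *b)
          {
              if (a->offset != b->offset)
                  return a->offset < b->offset ? -1 : 1;   /* offset ascending */
              if (a->length != b->length)
                  return a->length > b->length ? -1 : 1;   /* length descending */
              return (int)b->iomode - (int)a->iomode;      /* RW before READ */
          }

          /* True if [offset, offset + count) overlaps the segment's range. */
          static bool lseg_matches(const struct lseg_range *l,
                                   uint64_t offset, uint64_t count)
          {
              uint64_t end = l->length == UINT64_MAX ? UINT64_MAX
                                                     : l->offset + l->length;
              return offset < end && offset + count > l->offset;
          }

          int main(void)
          {
              struct lseg_range rw = { 0, UINT64_MAX, IOMODE_RW };
              struct lseg_range rd = { 0, 4096, IOMODE_READ };

              printf("rw sorts before rd: %d\n", lseg_cmp(&rw, &rd) < 0);
              printf("page at 8192 fits rw lseg: %d\n", lseg_matches(&rw, 8192, 4096));
              printf("page at 8192 fits rd lseg: %d\n", lseg_matches(&rd, 8192, 4096));
              return 0;
          }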
  3. 12 May 2011 (1 commit)
  4. 15 Mar 2011 (1 commit)
  5. 12 Mar 2011 (7 commits)
  6. 08 Dec 2010 (1 commit)
    • nfs: remove extraneous and problematic calls to nfs_clear_request · 2df485a7
      Committed by Trond Myklebust
      When an nfs_page is freed, nfs_free_request is called, which also calls
      nfs_clear_request to clean out the lock and open contexts and free the
      pagecache page.
      
      However, a couple of places in the nfs code call nfs_clear_request
      themselves. What happens here if the refcount on the request is still high?
      We'll be releasing contexts and freeing pointers while the request is
      possibly still in use.
      
      Remove those bare calls to nfs_clear_request. That cleanup should only be
      done when the request is being freed.
      
      Note that when doing this, we need to watch out for tests of req->wb_page.
      Previously, nfs_set_page_tag_locked() and nfs_clear_page_tag_locked()
      would check the value of req->wb_page to figure out if the page is mapped
      into the nfsi->nfs_page_tree. We now indicate the page is mapped using
      the new PG_MAPPED bit in req->wb_flags.
      Reported-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      2df485a7
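      The pattern behind the fix can be shown in plain userspace C. The sketch
      below uses invented names; the point is only that teardown happens from
      the final reference drop, and that "is this request mapped?" is tracked
      with a flag bit rather than by testing a pointer the teardown clears.

          #include <stdio.h>
          #include <stdlib.h>

          #define REQ_MAPPED (1u << 0)      /* stands in for PG_MAPPED */

          struct request {
              int refcount;
              unsigned flags;
              char *page;                   /* stands in for page/contexts */
          };

          static struct request *request_get(struct request *req)
          {
              req->refcount++;
              return req;
          }

          /* The only place resources are released: the final reference drop. */
          static void request_put(struct request *req)
          {
              if (--req->refcount > 0)
                  return;
              free(req->page);              /* safe: no one still uses req */
              free(req);
          }

          int main(void)
          {
              struct request *req = calloc(1, sizeof(*req));
              req->refcount = 1;
              req->page = malloc(4096);
              req->flags |= REQ_MAPPED;                  /* "in the tree" */

              struct request *extra = request_get(req);  /* second user */

              /* The removed pattern would free req->page here, while 'extra'
               * still holds a reference. Instead we only drop our reference. */
              request_put(req);

              printf("still mapped: %u\n", extra->flags & REQ_MAPPED);
              request_put(extra);           /* final put frees everything */
              return 0;
          }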
  7. 25 Oct 2010 (1 commit)
  8. 24 Sep 2010 (1 commit)
  9. 31 Jul 2010 (1 commit)
  10. 23 Jun 2010 (1 commit)
  11. 15 May 2010 (1 commit)
  12. 07 Dec 2009 (1 commit)
  13. 06 Dec 2009 (1 commit)
  14. 05 Dec 2009 (1 commit)
  15. 12 Aug 2009 (1 commit)
  16. 13 Jul 2009 (1 commit)
  17. 18 Jun 2009 (3 commits)
  18. 03 Apr 2009 (3 commits)
  19. 24 Dec 2008 (1 commit)
    • nfs: remove redundant tests on reading new pages · 136221fc
      Committed by Wu Fengguang
      aops->readpages() and its NFS helper readpage_async_filler() will only
      be called to do readahead I/O for newly allocated pages. So it's not
      necessary to test the always-zero dirty/uptodate page flags.
      
      The removal of nfs_wb_page() call also fixes a readahead bug: the NFS
      readahead has been synchronous since 2.6.23, because that call will
      clear PG_readahead, which is the reminder for asynchronous readahead.
      
      More background: the PG_readahead page flag is shared with PG_reclaim,
      one for read path and the other for write path. clear_page_dirty_for_io()
      unconditionally clears PG_readahead to prevent possible readahead residuals,
      assuming it is always called in the write path. However, NFS is the one
      and only exception in that it _always_ calls clear_page_dirty_for_io()
      in the read path, i.e. for readpages()/readpage().
      
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Wu Fengguang <wfg@linux.intel.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      136221fc
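      A toy illustration, with invented names, of the flag-sharing pitfall
      described above: one bit carries two meanings, so a write-path helper
      that clears it unconditionally silently defeats asynchronous readahead
      whenever it is called from the read path.

          #include <stdio.h>

          #define PG_RA_OR_RECLAIM (1u << 0)   /* one bit, two meanings */

          static unsigned page_flags = PG_RA_OR_RECLAIM;  /* readahead marker set */

          /* Write-path helper: assumes it only runs before writeback. */
          static void clear_dirty_for_io(void)
          {
              page_flags &= ~PG_RA_OR_RECLAIM;  /* drops the readahead marker too */
          }

          int main(void)
          {
              /* The read path mistakenly calls the write-path helper ... */
              clear_dirty_for_io();

              /* ... so the trigger for asynchronous readahead is gone. */
              printf("readahead marker still set: %u\n",
                     page_flags & PG_RA_OR_RECLAIM);
              return 0;
          }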
  20. 17 May 2008 (1 commit)
  21. 20 Apr 2008 (2 commits)
  22. 20 Mar 2008 (2 commits)
    • nfs: don't ignore return value from nfs_pageio_add_request · f8512ad0
      Committed by Fred Isaman
      Ignoring the return value from nfs_pageio_add_request can cause deadlocks.
      
      In read path:
        call nfs_pageio_add_request from readpage_async_filler
        assume at this point that there are requests already in desc that
          can't be merged with the current request.
        so nfs_pageio_doio is fired up to clear out desc.
        assume something goes wrong in setting up the io, so desc->pg_error is set.
        This causes nfs_pageio_add_request to return 0, *WITHOUT* adding the original
          request.
        BUT, since return code is ignored, readpage_async_filler assumes it has
          been added, and does nothing further, leaving page locked.
        do_generic_mapping_read will eventually call lock_page, resulting in deadlock
      
      In write path:
        page is marked dirty by generic_perform_write
        nfs_writepages is called
        call nfs_pageio_add_request from nfs_page_async_flush
        assume at this point that there are requests already in desc that
          can't be merged with the current request.
        so nfs_pageio_doio is fired up to clear out desc.
        assume something goes wrong in setting up the io, so desc->pg_error is set.
        This causes nfs_page_async_flush to return 0, *WITHOUT* adding the original
          request, yet marking the request as locked (PG_BUSY) and in writeback,
          clearing dirty marks.
        The next time a write is done to the page, deadlock will result as
          nfs_write_end calls nfs_update_request
      Signed-off-by: Fred Isaman <iisaman@citi.umich.edu>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      f8512ad0
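      A userspace C sketch (hypothetical names, not the NFS code) of the rule
      this patch enforces: the caller checks whether the request was actually
      queued and, on failure, undoes its own state, represented here by
      unlocking the page, instead of assuming success.

          #include <stdbool.h>
          #include <stdio.h>

          struct pgio_desc {
              int pg_error;                 /* sticky error from failed I/O setup */
              int pg_count;                 /* requests queued so far */
          };

          /* Returns true only if the request really was added. */
          static bool pageio_add_request(struct pgio_desc *desc)
          {
              if (desc->pg_error)
                  return false;             /* request NOT added */
              desc->pg_count++;
              return true;
          }

          static int page_locked = 1;

          static void async_filler(struct pgio_desc *desc)
          {
              if (!pageio_add_request(desc)) {
                  /* Without this branch the page would stay locked forever and
                   * a later lock_page() would deadlock, as described above. */
                  page_locked = 0;          /* unlock and surface the error */
                  fprintf(stderr, "add_request failed: %d\n", desc->pg_error);
              }
          }

          int main(void)
          {
              struct pgio_desc desc = { .pg_error = -5 /* EIO */ };
              async_filler(&desc);
              printf("page still locked: %d\n", page_locked);
              return 0;
          }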
  23. 29 Feb 2008 (1 commit)
  24. 26 Feb 2008 (2 commits)
    • NFS: Ensure that the asynchronous RPC calls complete on nfsiod. · 101070ca
      Committed by Trond Myklebust
      We want to ensure that rpc_call_ops that involve mntput() are run on nfsiod
      rather than on rpciod, so that they don't deadlock when the resulting
      umount calls rpc_shutdown_client(). Hence we specify that read, write and
      commit calls must complete on nfsiod.
      Ditto for NFSv4 open, lock, locku and close asynchronous calls.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      101070ca
    • NFS: Fix a deadlock with lazy umount · 383ba719
      Committed by Trond Myklebust
      We can't allow rpc callback functions like task->tk_ops->rpc_call_prepare()
      and task->tk_ops->rpc_call_done() to call mntput() in any way, since
      that will cause a deadlock when the call to rpc_shutdown_client() attempts
      to wait on 'task' to complete.
      
      We can avoid the above deadlock by moving the calls to mntput() to the
      task->tk_ops->rpc_release() callback, since at that time the task will be
      marked as completed, and so rpc_shutdown_client won't attempt to wait on
      it.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      383ba719
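      The ordering that avoids the deadlock can be sketched in standalone C
      with invented names: anything that might trigger a blocking shutdown
      (the mntput() above) is deferred to the release callback, which runs
      only after the task has been marked complete.

          #include <stdbool.h>
          #include <stdio.h>

          struct task;

          struct task_ops {
              void (*call_done)(struct task *t);   /* must NOT drop mount refs */
              void (*release)(struct task *t);     /* safe place for teardown */
          };

          struct task {
              const struct task_ops *ops;
              bool complete;
          };

          static void task_finish(struct task *t)
          {
              t->ops->call_done(t);   /* runs before the task is marked complete */
              t->complete = true;     /* waiters (e.g. a shutdown path) wake here */
              t->ops->release(t);     /* teardown happens after completion */
          }

          static void my_done(struct task *t)
          {
              (void)t;
              puts("done callback: no teardown here");
          }

          static void my_release(struct task *t)
          {
              (void)t;
              puts("release callback: drop the mount reference here");
          }

          static const struct task_ops ops = { my_done, my_release };

          int main(void)
          {
              struct task t = { &ops, false };
              task_finish(&t);
              return 0;
          }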
  25. 06 Feb 2008 (1 commit)
    • Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user · eebd2aa3
      Committed by Christoph Lameter
      Simplify page cache zeroing of segments of pages through three functions:
      
      zero_user_segments(page, start1, end1, start2, end2)
      
              Zeros two segments of the page. It takes the positions where the
              zeroing starts and ends, which avoids length calculations and
              makes the code clearer.
      
      zero_user_segment(page, start, end)
      
              Same for a single segment.
      
      zero_user(page, start, length)
      
              Length variant for the case where we know the length.
      
      We remove the zero_user_page macro. Issues:
      
      1. It's a macro. Inline functions are preferable.
      
      2. The KM_USER0 macro is only defined for HIGHMEM.
      
         Having to treat this special case everywhere makes the
         code needlessly complex. The parameter for zeroing is always
         KM_USER0 except in a single case that we open-code.
      
      Avoiding KM_USER0 means a lot of code no longer has to deal with the
      special casing for HIGHMEM. Dealing with kmap is only necessary for
      HIGHMEM configurations; in those configurations we use KM_USER0 like
      we do for a series of other functions defined in highmem.h.
      
      Since KM_USER0 depends on HIGHMEM, the existing zero_user_page
      function could not be an inline function. The zero_user_* functions
      introduced here can be inline because that constant is not used when
      these functions are called.
      
      Also extract the flushing of the caches to be outside of the kmap.
      
      [akpm@linux-foundation.org: fix nfs and ntfs build]
      [akpm@linux-foundation.org: fix ntfs build some more]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: David Chinner <dgc@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eebd2aa3
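      A userspace sketch of how the three helpers relate to one another. It
      operates on a plain 4 KiB buffer instead of a struct page and ignores
      the kmap/HIGHMEM and cache-flushing details discussed above; the two
      shorter forms are just conveniences over zero_user_segments().

          #include <stdio.h>
          #include <string.h>

          #define PAGE_SIZE 4096

          /* Zero [start1, end1) and [start2, end2) within one page. */
          static void zero_user_segments(char *page, unsigned start1, unsigned end1,
                                         unsigned start2, unsigned end2)
          {
              if (end1 > start1)
                  memset(page + start1, 0, end1 - start1);
              if (end2 > start2)
                  memset(page + start2, 0, end2 - start2);
          }

          /* Single-segment form: zero [start, end). */
          static void zero_user_segment(char *page, unsigned start, unsigned end)
          {
              zero_user_segments(page, start, end, 0, 0);
          }

          /* Length form: zero 'length' bytes starting at 'start'. */
          static void zero_user(char *page, unsigned start, unsigned length)
          {
              zero_user_segments(page, start, start + length, 0, 0);
          }

          int main(void)
          {
              char page[PAGE_SIZE];
              memset(page, 0xff, sizeof(page));

              zero_user_segment(page, 0, 512);                   /* head */
              zero_user(page, 1024, 256);                        /* 256 B at 1024 */
              zero_user_segments(page, 2048, 2304, 3840, 4096);  /* two ranges */

              printf("page[0]=%d page[600]=%d page[1024]=%d\n",
                     page[0], (unsigned char)page[600], page[1024]);
              return 0;
          }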
  26. 30 Jan 2008 (1 commit)