1. 30 January 2008, 5 commits
  2. 13 December 2007, 1 commit
    • Revert "NFS: Ensure we return zero if applications attempt to write zero bytes" · a5576cfa
      Trond Myklebust authored
      This reverts commit b9148c6b.
      
      On Wed, 12 Dec 2007 10:57:30 -0500, Chuck Lever wrote
      > commit b9148c6b should be reverted.  It was recently forward-ported
      > from some years-old patches, and is clearly not needed now.
      >
      > On Dec 11, 2007, at 5:21 PM, Adrian Bunk wrote:
      >
      >> This code became dead after commit
      >> b9148c6b
      >> (which BTW doesn't seem to have changed any behaviour) and can
      >> therefore be removed.
      >>
      >> Spotted by the Coverity checker.
      >>
      >> Signed-off-by: Adrian Bunk <bunk@kernel.org>
      >>
      >> ---
      >> --- linux-2.6/fs/nfs/direct.c.old     2007-12-02 21:54:53.000000000 +0100
      >> +++ linux-2.6/fs/nfs/direct.c 2007-12-02 21:55:10.000000000 +0100
      >> @@ -897,15 +897,12 @@ ssize_t nfs_file_direct_write(struct kio
      >>       if (!count)
      >>               goto out;       /* return 0 */
      >>
      >>       retval = -EINVAL;
      >>       if ((ssize_t) count < 0)
      >>               goto out;
      >> -     retval = 0;
      >> -     if (!count)
      >> -             goto out;
      >>
      >>       retval = nfs_sync_mapping(mapping);
      >>       if (retval)
      >>               goto out;
      >>
      >>       retval = nfs_direct_write(iocb, iov, nr_segs, pos, count);
      >>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
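      For readers following the dead-code argument above, here is a minimal
      user-space sketch (hypothetical function, not the kernel code) of why
      the removed lines could never execute while the early zero-byte check
      from b9148c6b was in place:

      #include <errno.h>
      #include <stdio.h>
      #include <sys/types.h>

      /* Sketch only: mimics the argument checks at the top of
       * nfs_file_direct_write(); not the actual kernel function. */
      static ssize_t direct_write_checks(size_t count)
      {
              if (!count)
                      return 0;               /* early exit added by b9148c6b */

              if ((ssize_t) count < 0)
                      return -EINVAL;

              /* The removed lines re-tested "!count" here; given the early
               * exit above, count is provably non-zero at this point, which
               * is exactly what the Coverity checker flagged. */

              return (ssize_t) count;         /* pretend the write was scheduled */
      }

      int main(void)
      {
              printf("%zd\n", direct_write_checks(0));        /* prints 0 */
              printf("%zd\n", direct_write_checks(4096));     /* prints 4096 */
              return 0;
      }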
  3. 07 December 2007, 1 commit
  4. 27 November 2007, 4 commits
  5. 24 October 2007, 1 commit
  6. 10 October 2007, 2 commits
  7. 20 July 2007, 1 commit
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt authored
      Slab destructors were no longer supported after Christoph's
      c59def9f change. They've been BUGs with both slab and slub, and
      slob never supported them either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
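      To illustrate the API change, a minimal sketch of a converted callsite,
      assuming 2.6.23-era signatures; the struct, cache name, and constructor
      here are invented for the example:

      #include <linux/errno.h>
      #include <linux/init.h>
      #include <linux/slab.h>

      struct example_obj { int val; };        /* invented example type */

      /* Constructor signature as of that era (assumed for this sketch). */
      static void example_ctor(void *obj, struct kmem_cache *cache,
                               unsigned long flags)
      {
              ((struct example_obj *)obj)->val = 0;
      }

      static struct kmem_cache *example_cache;

      static int __init example_init(void)
      {
              /* Before 20c2df83 this call took one extra trailing argument,
               * a destructor pointer that had to be NULL anyway (non-NULL
               * triggered a BUG in slab and slub); the patch removed that
               * parameter and fixed up every callsite. */
              example_cache = kmem_cache_create("example_cache",
                                                sizeof(struct example_obj),
                                                0, SLAB_HWCACHE_ALIGN,
                                                example_ctor);
              return example_cache ? 0 : -ENOMEM;
      }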
  8. 11 July 2007, 3 commits
  9. 31 May 2007, 1 commit
  10. 24 May 2007, 2 commits
  11. 09 May 2007, 1 commit
  12. 01 May 2007, 1 commit
  13. 15 April 2007, 1 commit
  14. 04 February 2007, 1 commit
  15. 09 December 2006, 1 commit
  16. 08 December 2006, 2 commits
  17. 06 December 2006, 1 commit
  18. 21 October 2006, 2 commits
  19. 01 October 2006, 1 commit
  20. 27 September 2006, 1 commit
  21. 09 September 2006, 1 commit
  22. 01 July 2006, 1 commit
  23. 29 June 2006, 1 commit
  24. 28 June 2006, 1 commit
  25. 25 June 2006, 3 commits
    • Merge branch 'odirect' · ccf01ef7
      Trond Myklebust authored
    • NFS: alloc nfs_read/write_data as direct I/O is scheduled · 82b145c5
      Chuck Lever authored
      Re-arrange the logic in the NFS direct I/O path so that nfs_read/write_data
      structs are allocated just before they are scheduled, rather than
      allocating them all at once before we start scheduling requests.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
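      A loose user-space sketch of the re-arrangement (all names are invented;
      the real logic lives in fs/nfs/direct.c and runs asynchronously):

      #include <stdlib.h>
      #include <sys/types.h>

      struct write_chunk { size_t bytes; };   /* stand-in for nfs_write_data */

      /* Invented stub standing in for dispatching one write RPC. */
      static void schedule_chunk(struct write_chunk *wc) { (void) wc; }

      static ssize_t schedule_direct_write(size_t count, size_t wsize)
      {
              size_t done = 0;

              while (count) {
                      size_t n = count < wsize ? count : wsize;
                      /* Allocate just before scheduling, per request,
                       * instead of building the whole list up front. */
                      struct write_chunk *wc = malloc(sizeof(*wc));

                      if (!wc)
                              break;  /* already-scheduled I/O still completes */
                      wc->bytes = n;
                      schedule_chunk(wc);
                      free(wc);       /* the kernel frees on RPC completion */
                      done += n;
                      count -= n;
              }
              return (ssize_t) done;
      }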
    • NFS: Eliminate nfs_get_user_pages() · 06cf6f2e
      Chuck Lever authored
      Neil Brown observed that the kmalloc() in nfs_get_user_pages() is more
      likely to fail if the I/O is large enough to require the allocation of more
      than a single page to keep track of all the pinned pages in the user's
      buffer.
      
      Instead of tracking one large page array per dreq/iocb, track pages per
      nfs_read/write_data, just like the cached I/O path does.  An array for
      pages is already allocated for us by nfs_readdata_alloc() (and the write
      and commit equivalents).
      
      This is also required for adding support for vectored I/O to the NFS direct
      I/O path.
      
      The original reason to pin the user buffer and allocate all the NFS data
      structures before trying to schedule I/O was to ensure all needed resources
      are allocated on the client before starting to send requests.  This reduces
      the chance that resource exhaustion on the client will cause a short read
      or write.
      
      On the other hand, for an application making very large application I/O
      requests, this means that it will be nearly impossible for the application
      to make forward progress on a resource-limited client.
      
      Thus, moving the buffer pinning functionality into the I/O scheduling
      loops should be good for scalability.  The next patch will do the same for
      NFS data structure allocation.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
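      A back-of-the-envelope illustration of the kmalloc() pressure described
      above; the I/O and chunk sizes are invented for the example, not taken
      from the patch:

      #include <stdio.h>

      int main(void)
      {
              const size_t page_size = 4096, ptr_size = sizeof(void *);
              size_t io_bytes    = 1UL << 30;         /* a 1 GiB direct write */
              size_t chunk_bytes = 64 * 1024;         /* one rsize/wsize chunk */

              /* One array for the whole pinned buffer: a large contiguous
               * kmalloc that gets harder to satisfy as the I/O grows. */
              printf("whole-buffer page array: %zu bytes\n",
                     io_bytes / page_size * ptr_size);    /* ~2 MiB on 64-bit */

              /* One small array per nfs_read/write_data: bounded regardless
               * of how large the application's request is. */
              printf("per-chunk page array:    %zu bytes\n",
                     chunk_bytes / page_size * ptr_size); /* 128 bytes on 64-bit */
              return 0;
      }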