1. 09 Oct 2008, 1 commit
  2. 08 Oct 2008, 2 commits
    • NFS: SETCLIENTID truncates client ID and netid · d1ce02e1
      Authored by Chuck Lever
      The sc_name field is currently 56 bytes long.  This is not large enough
      to hold a pair of IPv6 addresses, the authentication type, the protocol
      name, and a uniquifier number.  The maximum possible size of the name
      string using IPv6 addresses is just under 110 bytes, so I increased the
      size of the sc_name field to accommodate this maximum.
      
      In addition, the strings in the nfs4_setclientid structure are
      constructed with scnprintf(), which wants to terminate its output with
      '\0'.  The sc_netid field was large enough only for a three byte netid
      string and a '\0' so inet6 netids were being truncated.  Perhaps we
      don't need the overhead of scnprintf() to do a simple string copy, but
      I fixed this by increasing the size of the buffer by one byte.
      
      Since all three of the string buffers in nfs4_setclientid are
      constructed with scnprintf(), I increased the size of all three by one
      byte to document the requirement, although I don't think either the
      universal address field or the name field will be so small that these
      strings get truncated in this way.
      
      The size of the Linux client's client ID on the wire will be larger
      than before.  RFC 3530 suggests the size limit for client IDs is 1024,
      and we are still well below that.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: remove 8 bytes of padding from struct nfs_fattr on 64 bit builds · 9fa8d66f
      Authored by Richard Kennedy
      remove 8 bytes of padding from struct nfs_fattr on 64 bit builds
      
      This also removes padding from several other NFS structures:
      16 bytes from nfs4_opendata, nfs4_createdata, and nfs3_createdata,
      and 8 bytes from nfs_read_data, nfs_write_data, nfs_removeres, and
      nfs4_closedata.
      
      This also reduces the reported stack usage of many nfs functions (30+).
      Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
      ----
      
      This patch is against the latest git 2.6.27-rc4.
      I've built and run this on my AMD64 desktop, and successfully run
      _simple_ tests with a 64 bit client => 32 bit server and a 32 bit
      client => 64 bit server.
      
      On fedora with gcc (GCC) 4.3.0 20080428 (Red Hat 4.3.0-8) checkpatch
      reports 33 functions with reduced stack usage.
      e.g.
      __nfs_revalidate_inode [nfs] 216 => 200
      _nfs4_proc_access [nfs] 304 => 288
      _nfs4_proc_link [nfs] 536 => 504
      _nfs4_proc_remove [nfs] 304 => 288
      _nfs4_proc_rename [nfs] 584 => 552
      nfs3_proc_access [nfs] 272 => 256
      nfs3_proc_getacl [nfs] 384 => 368
      nfs3_proc_link [nfs] 496 => 464
      etc
      I can supply the complete list if anyone is interested.
      
      regards
      Richard
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  3. 10 Jul 2008, 2 commits
  4. 20 Apr 2008, 1 commit
    • NFSv4: Only increment the sequence id if the server saw it · c1d51931
      Authored by Trond Myklebust
      It is quite possible that the OPEN, CLOSE, LOCK, LOCKU,... compounds fail
      before the actual stateful operation has been executed (for instance in the
      PUTFH call). There is no way to tell from the overall status result which
      operations were executed from the COMPOUND.
      
      The fix is to move incrementing of the sequence id into the XDR layer,
      so that we do it as we process the results from the stateful operation.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  5. 30 Jan 2008, 4 commits
  6. 10 Oct 2007, 2 commits
  7. 20 Jul 2007, 2 commits
  8. 11 Jul 2007, 2 commits
  9. 13 Feb 2007, 1 commit
  10. 04 Feb 2007, 1 commit
  11. 06 Dec 2006, 1 commit
  12. 21 Oct 2006, 2 commits
  13. 23 Sep 2006, 6 commits
  14. 09 Sep 2006, 1 commit
  15. 25 Aug 2006, 1 commit
  16. 29 Jun 2006, 1 commit
  17. 25 Jun 2006, 2 commits
    • Merge branch 'odirect' · ccf01ef7
      Authored by Trond Myklebust
    • NFS: Eliminate nfs_get_user_pages() · 06cf6f2e
      Authored by Chuck Lever
      Neil Brown observed that the kmalloc() in nfs_get_user_pages() is more
      likely to fail if the I/O is large enough to require the allocation of more
      than a single page to keep track of all the pinned pages in the user's
      buffer.
      
      Instead of tracking one large page array per dreq/iocb, track pages per
      nfs_read/write_data, just like the cached I/O path does.  An array for
      pages is already allocated for us by nfs_readdata_alloc() (and the write
      and commit equivalents).
      
      This is also required for adding support for vectored I/O to the NFS direct
      I/O path.
      
      The original reason to pin the user buffer and allocate all the NFS data
      structures before trying to schedule I/O was to ensure all needed resources
      are allocated on the client before starting to send requests.  This reduces
      the chance that resource exhaustion on the client will cause a short read
      or write.
      
      On the other hand, for an application making very large application I/O
      requests, this means that it will be nearly impossible for the application
      to make forward progress on a resource-limited client.
      
      Thus, moving the buffer pinning functionality into the I/O scheduling
      loops should be good for scalability.  The next patch will do the same for
      NFS data structure allocation.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  18. 09 Jun 2006, 6 commits
  19. 21 Mar 2006, 2 commits
    • NFS: Cleanup of NFS read code · ec06c096
      Authored by Trond Myklebust
      Same callback hierarchy inversion as for the NFS write calls. This patch is
      not strictly speaking needed by the O_DIRECT code, but avoids confusing
      differences between the asynchronous read and write code.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Cleanup of NFS write code in preparation for asynchronous o_direct · 788e7a89
      Authored by Trond Myklebust
      This patch inverts the callback hierarchy for NFS write calls.
      
      Instead of having the NFSv2/v3/v4-specific code set up the RPC callback
      ops, we allow the original caller to do so. This allows for more
      flexibility w.r.t. how to set up and tear down the nfs_write_data
      structure while still allowing the NFSv3/v4 code to perform error
      handling.
      
      The greater flexibility is needed by the asynchronous O_DIRECT code, which
      wants to be able to hold on to the original nfs_write_data structures after
      the WRITE RPC call has completed in order to be able to replay them if the
      COMMIT call determines that the server has rebooted.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>