1. 11 March 2009, 1 commit
    • NFSv3: Fix posix ACL code · ae46141f
      Committed by Trond Myklebust
      Fix a memory leak due to allocation in the XDR layer. In cases where the
      RPC call needs to be retransmitted, we end up allocating new pages without
      clearing the old ones. Fix this by moving the allocation into
      nfs3_proc_setacls().
      
      Also fix an issue discovered by Kevin Rudd, whereby the amount of memory
      reserved for the ACLs in the xdr_buf->head was miscalculated, causing
      corruption. (A sketch of the allocate-before-RPC pattern follows this entry.)
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
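
      A minimal sketch of the allocate-before-RPC pattern described above; the
      helper name is illustrative, not the actual nfs3_proc_setacls() code. The
      caller allocates the ACL pages once, before the RPC is started, so a
      retransmission re-encodes into the same pages instead of allocating a
      fresh set and leaking the old one.

        #include <linux/errno.h>
        #include <linux/gfp.h>
        #include <linux/mm.h>

        /* Allocate the pages once, in the proc-level caller. */
        static int nfsacl_alloc_pages(struct page **pages, unsigned int npages)
        {
                unsigned int i;

                for (i = 0; i < npages; i++) {
                        pages[i] = alloc_page(GFP_KERNEL);
                        if (!pages[i])
                                goto out_free;
                }
                return 0;
        out_free:
                while (i--)
                        __free_page(pages[i]);
                return -ENOMEM;
        }

        /*
         * The XDR encode routine then only fills the pages it is handed; since
         * it no longer allocates anything itself, retransmitting the RPC can no
         * longer leak pages.
         */
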
  2. 24 December 2008, 2 commits
  3. 15 October 2008, 1 commit
    • NFS: Fix the resolution problem with nfs_inode_attrs_need_update() · 4704f0e2
      Committed by Trond Myklebust
      It appears that 'jiffies' timestamps do not have high enough resolution for
      nfs_inode_attrs_need_update(). One problem is that a GETATTR can be
      launched within < 1 jiffy of the last operation that updated the attribute.
      Another problem is that RPC calls can take < 1 jiffy to execute.
      
      We can fix this by switching the variables to a simple global counter that
      gets incremented every time we start another GETATTR call (see the sketch
      after this entry).
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
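
      A minimal sketch of the counter-based comparison, using illustrative names
      rather than the exact kernel identifiers. Every attribute-bearing request
      samples a global generation counter when it starts, and the inode accepts
      attribute data only if it is stamped with a newer generation than the one
      already cached; unlike jiffies, two requests started back to back still
      compare correctly.

        #include <linux/atomic.h>
        #include <linux/types.h>

        static atomic_long_t attr_generation_counter = ATOMIC_LONG_INIT(0);

        struct fattr_stamp {
                unsigned long gencount;      /* sampled when the GETATTR started */
        };

        struct inode_attr_cache {
                unsigned long attr_gencount; /* generation of the cached attributes */
        };

        /* Call when starting a GETATTR (or any call that returns attributes). */
        static unsigned long attr_generation_begin(void)
        {
                return atomic_long_inc_return(&attr_generation_counter);
        }

        /* Newer generation wins; replies carrying an older stamp are ignored. */
        static bool attrs_need_update(const struct inode_attr_cache *cache,
                                      const struct fattr_stamp *fattr)
        {
                return (long)(fattr->gencount - cache->attr_gencount) > 0;
        }
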
  4. 09 October 2008, 1 commit
  5. 08 October 2008, 2 commits
    • NFS: SETCLIENTID truncates client ID and netid · d1ce02e1
      Committed by Chuck Lever
      The sc_name field is currently 56 bytes long.  This is not large enough
      to hold a pair of IPv6 addresses, the authentication type, the protocol
      name, and a uniquifier number.  The maximum possible size of the name
      string using IPv6 addresses is just under 110 bytes, so I increased the
      size of the sc_name field to accommodate this maximum.
      
      In addition, the strings in the nfs4_setclientid structure are
      constructed with scnprintf(), which wants to terminate its output with
      '\0'.  The sc_netid field was large enough only for a three-byte netid
      string and a '\0', so inet6 netids were being truncated.  Perhaps we
      don't need the overhead of scnprintf() to do a simple string copy, but
      I fixed this by increasing the size of the buffer by one byte.
      
      Since all three of the string buffers in nfs4_setclientid are
      constructed with scnprintf(), I increased the size of all three by one
      byte to document the requirement, although I don't expect the universal
      address or name strings to run close enough to their buffer sizes to be
      truncated in this way.
      
      The size of the Linux client's client ID on the wire will be larger
      than before.  RFC 3530 suggests a size limit of 1024 bytes for client
      IDs, and we are still well below that. (A sizing sketch follows this entry.)
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
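
      A sizing sketch of the problem, with illustrative constants rather than
      the real nfs4_setclientid definition. scnprintf() always NUL-terminates
      its output, so each buffer must be at least one byte longer than the
      longest string it has to carry; a netid field sized for three characters
      plus the terminator truncates "tcp6" and "udp6".

        #include <linux/kernel.h>           /* scnprintf() */

        #define EX_NAMELEN  128             /* was 56: too small for two IPv6 addresses */
        #define EX_NETIDLEN sizeof("tcp6")  /* 4 characters + '\0'; "tcp" + '\0' loses the '6' */

        struct setclientid_strings {        /* illustrative stand-in for nfs4_setclientid */
                char sc_name[EX_NAMELEN];
                char sc_netid[EX_NETIDLEN];
        };

        static void fill_id_strings(struct setclientid_strings *sc,
                                    const char *srv_addr, const char *clnt_addr,
                                    const char *proto, unsigned int uniquifier)
        {
                /* scnprintf() truncates silently, so an undersized buffer quietly
                 * shortens the client ID sent on the wire. */
                scnprintf(sc->sc_name, sizeof(sc->sc_name), "%s/%s %s %u",
                          srv_addr, clnt_addr, proto, uniquifier);
                scnprintf(sc->sc_netid, sizeof(sc->sc_netid), "%s", proto);
        }
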
    • NFS: remove 8 bytes of padding from struct nfs_fattr on 64 bit builds · 9fa8d66f
      Committed by Richard Kennedy
      
      This also removes padding from several other NFS structures: 16 bytes
      from nfs4_opendata, nfs4_createdata and nfs3_createdata, and 8 bytes from
      nfs_read_data, nfs_write_data, nfs_removeres and nfs4_closedata. (A layout
      sketch follows this entry.)
      
      This also reduces the reported stack usage of many nfs functions (30+).
      Signed-off-by: NRichard Kennedy <richard@rsk.demon.co.uk>
      ----
      
      This patch is against the latest git 2.6.27-rc4.
      I've built and run this on my AMD64 desktop, and successfully run simple
      tests with a 64-bit client against a 32-bit server and a 32-bit client
      against a 64-bit server.
      
      On Fedora with gcc (GCC) 4.3.0 20080428 (Red Hat 4.3.0-8), checkstack
      reports 33 functions with reduced stack usage.
      e.g.
      __nfs_revalidate_inode [nfs] 216 => 200
      _nfs4_proc_access [nfs] 304 => 288
      _nfs4_proc_link [nfs] 536 => 504
      _nfs4_proc_remove [nfs] 304 => 288
      _nfs4_proc_rename [nfs] 584 => 552
      nfs3_proc_access [nfs] 272 => 256
      nfs3_proc_getacl [nfs] 384 => 368
      nfs3_proc_link [nfs] 496 => 464
      etc
      I can supply the complete list if anyone is interested.
      
      regards
      Richard
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
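
      A compilable layout sketch of the kind of saving involved; the fields are
      generic, not the real nfs_fattr members. On a 64-bit build, alternating
      4-byte and 8-byte members forces the compiler to insert alignment
      padding, while grouping members by size lets two 32-bit fields share one
      8-byte slot.

        #include <stdint.h>
        #include <stdio.h>

        struct interleaved {        /* 32 bytes on LP64: 4 padding bytes after each uint32_t */
                uint32_t valid;
                uint64_t size;
                uint32_t nlink;
                uint64_t change_attr;
        };

        struct repacked {           /* 24 bytes: the two uint32_t fields share one slot */
                uint64_t size;
                uint64_t change_attr;
                uint32_t valid;
                uint32_t nlink;
        };

        int main(void)
        {
                printf("interleaved=%zu repacked=%zu\n",
                       sizeof(struct interleaved), sizeof(struct repacked));
                return 0;
        }

      Because struct nfs_fattr is embedded in the other structures listed in
      the message, shrinking it shrinks them as well, which is also where the
      stack savings reported above come from.
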
  6. 10 July 2008, 2 commits
  7. 20 April 2008, 1 commit
    • NFSv4: Only increment the sequence id if the server saw it · c1d51931
      Committed by Trond Myklebust
      It is quite possible for the OPEN, CLOSE, LOCK, LOCKU, ... compounds to
      fail before the actual stateful operation has been executed (for instance
      in the PUTFH call), and there is no way to tell from the overall status
      result which of the operations in the COMPOUND were executed.

      The fix is to move the incrementing of the sequence id into the XDR
      layer, so that it happens as we process the result of the stateful
      operation itself (see the sketch after this entry).
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
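
      A conceptual sketch of where the increment now happens; the helper name
      and structure are illustrative, not the real XDR decoder. The sequence id
      is bumped only from inside the decode routine for the stateful
      operation's result, so a COMPOUND that fails earlier, for example at
      PUTFH, never touches it.

        struct seqid_counter {
                unsigned int seqid;
        };

        /* Invoked from the XDR decode path for an OPEN/CLOSE/LOCK/LOCKU result. */
        static void decode_stateful_result(struct seqid_counter *sc, int op_status)
        {
                /*
                 * Reaching this decoder means the server saw and executed the
                 * stateful operation, so the seqid advances whether op_status
                 * is success or an ordinary error.  (Per RFC 3530, a few errors
                 * such as NFS4ERR_BAD_SEQID must not advance it; that filtering
                 * is omitted here.)
                 */
                sc->seqid++;
                (void)op_status;
        }

        /*
         * Bumping the seqid from the overall task-completion handler instead
         * would also fire when only the leading PUTFH ran, leaving the client's
         * counter out of sync with the server's.
         */
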
  8. 30 January 2008, 4 commits
  9. 10 October 2007, 2 commits
  10. 20 July 2007, 2 commits
  11. 11 July 2007, 2 commits
  12. 13 February 2007, 1 commit
  13. 04 February 2007, 1 commit
  14. 06 December 2006, 1 commit
  15. 21 October 2006, 2 commits
  16. 23 September 2006, 6 commits
  17. 09 September 2006, 1 commit
  18. 25 August 2006, 1 commit
  19. 29 June 2006, 1 commit
  20. 25 June 2006, 2 commits
    • Merge branch 'odirect' · ccf01ef7
      Committed by Trond Myklebust
    • NFS: Eliminate nfs_get_user_pages() · 06cf6f2e
      Committed by Chuck Lever
      Neil Brown observed that the kmalloc() in nfs_get_user_pages() is more
      likely to fail if the I/O is large enough to require the allocation of more
      than a single page to keep track of all the pinned pages in the user's
      buffer.
      
      Instead of tracking one large page array per dreq/iocb, track pages per
      nfs_read/write_data, just like the cached I/O path does.  An array for
      pages is already allocated for us by nfs_readdata_alloc() (and the write
      and commit equivalents).
      
      This is also required for adding support for vectored I/O to the NFS direct
      I/O path.
      
      The original reason to pin the user buffer and allocate all the NFS data
      structures before trying to schedule I/O was to ensure all needed resources
      are allocated on the client before starting to send requests.  This reduces
      the chance that resource exhaustion on the client will cause a short read
      or write.
      
      On the other hand, for an application making very large application I/O
      requests, this means that it will be nearly impossible for the application
      to make forward progress on a resource-limited client.
      
      Thus, moving the buffer pinning functionality into the I/O scheduling
      loops should be good for scalability (a sketch of the per-chunk pinning
      loop follows this entry).  The next patch will do the same for NFS data
      structure allocation.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
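
      A sketch of the per-chunk pinning loop described above; every helper here
      is an illustrative stand-in, not the real NFS client API. Each slice of
      the user buffer is pinned into the small page array that already travels
      with its own nfs_read_data/nfs_write_data, inside the scheduling loop, so
      a resource failure stops further slices rather than dooming the entire
      request up front.

        #include <stddef.h>

        #define CHUNK_PAGES     16      /* pages carried by one per-request structure */
        #define PAGE_SIZE_BYTES 4096

        struct io_chunk {
                void  *pagevec[CHUNK_PAGES];  /* allocated along with the data structure */
                size_t len;
        };

        /* Stand-ins for nfs_readdata_alloc(), get_user_pages() and the RPC submit. */
        struct io_chunk *alloc_chunk(void);
        int pin_chunk(struct io_chunk *chunk, unsigned long uaddr, size_t len);
        void submit_chunk(struct io_chunk *chunk);

        int schedule_direct_io(unsigned long uaddr, size_t count)
        {
                while (count) {
                        size_t chunk_max = (size_t)CHUNK_PAGES * PAGE_SIZE_BYTES;
                        size_t len = count < chunk_max ? count : chunk_max;
                        struct io_chunk *chunk = alloc_chunk();

                        if (!chunk)
                                return -1;    /* resources acquired per slice, not up front */
                        if (pin_chunk(chunk, uaddr, len))
                                return -1;
                        submit_chunk(chunk);  /* issue the read/write RPCs for this slice */
                        uaddr += len;
                        count -= len;
                }
                return 0;
        }

      The design choice matches the commit text: a memory-constrained client can
      now make forward progress on a very large request one slice at a time
      instead of failing the whole request at setup.
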
  21. 09 June 2006, 4 commits