1. 10 May 2007, 2 commits
  2. 08 May 2007, 1 commit
  3. 01 May 2007, 2 commits
    • NFS: Fix directory caching problem - with test case and patch. · 83672d39
      Committed by Neil Brown
      Try running this script in an NFS-mounted directory (on a relatively
      recent client: 2.6.18 has the problem, as does 2.6.20).
      
      ------------------------------------------------------
      #!/bin/bash
      #
      # This script will produce the following error message from tar:
      #
      #   tar: newdir/innerdir/innerfile: file changed as we read it
      
      # create dirs
      rm -rf nfstest
      mkdir -p nfstest/dir/innerdir
      
      # create files (should not be empty)
      echo "Hello World!" >nfstest/dir/file
      echo "Hello World!" >nfstest/dir/innerdir/innerfile
      
      # problem only happens if we sleep before chmod
      sleep 1
      
      # change file modes
      chmod -R a+r nfstest
      
      # rename dir
      mv nfstest/dir nfstest/newdir
      
      # tar it
      tar -cf nfstest/nfstest.tar -C nfstest newdir
      
      # restore old dir name
      mv nfstest/newdir nfstest/dir
      --------------------------------------------------------
      
      What happens:
      
      The 'chmod -R' does a readdir_plus in each directory and the results
      get cached in the page cache.  It then updates the ctime on each file
      by one second.  When this happens, the post-op attributes are used to
      update the ctime stored on the client to match the value in the kernel.
      
      The 'mv' calls shrink_dcache_parent on the directory tree which
      flushes all the dentries (so a new lookup will be required) but
      doesn't flush the inodes or pagecache.
      
      The 'tar' does a readdir on each directory, but (in the case of
      'innerdir' at least) satisfies it from the pagecache and uses the
      READDIRPLUS data to update all the inodes.  In the case of
      'innerdir/innerfile', the ctime is out of date.
      
      'tar' then calls 'lstat' on innerdir/innerfile getting an old ctime.
      It then opens the file (triggering a GETATTR), reads the content, and
      then calls fstat to see if anything has changed.  It finds that ctime
      has changed and so complains.
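      
      For reference, a rough user-space sketch of the check tar performs here
      (illustrative only; it uses stat(1) rather than lstat/fstat on an open
      descriptor, so it is an approximation, not an exact substitute):
      
      f=nfstest/newdir/innerdir/innerfile
      before=$(stat -c %Z "$f")     # ctime as seen by the initial lstat
      cat "$f" > /dev/null          # opening/reading the file triggers a GETATTR
      after=$(stat -c %Z "$f")      # ctime as seen by the final fstat
      [ "$before" = "$after" ] || echo "$f: file changed as we read it"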
      
      The problem seems to be that the cached readdirplus info is kept around
      for too long.
      
      My patch below discards pagecache data for directories when
      dentry_iput is called on them.  This effectively removes the symptom
      which convinces me that I correctly understand the problem.  However
      I'm not convinced that is a proper solution, as there could easily be
      other races that trigger the same problem without being affected by
      this 'fix'.
      
      One possibility would be to require that readdirplus pagecache data be
      only used *once* to instantiate an inode.  Somehow it should then be
      invalidated so that if the dentry subsequently disappears, it will
      cause a new request to the server to fill in the stat data.
      
      Another possibility is to compare the cache_change_attribute on the
      inode with something similar for the readdirplus info and reject the
      info from readdirplus if it is too old.
      
      I haven't tried to implement these and would value other opinions
      before I do.
      
      Thanks,
      NeilBrown
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Set meaningful value for fattr->time_start in readdirplus results. · 1f4eab7e
      Committed by Neil Brown
      Don't use an uninitialised value for fattr->time_start in readdirplus results.
      
      The 'fattr' structure filled in by nfs3_decode_direct does not get a
      value for ->time_start set.
      Thus if an entry is for an inode that we already have in cache,
      when nfs_readdir_lookup calls nfs_fhget, it will call nfs_refresh_inode
      and may update the inode with out-of-date information.
      
      Directories are read a page at a time, so each page could have a
      different timestamp that "should" be used to set the time_start for
      the fattr for info in that page.  However, storing the timestamp per
      page is awkward.  (We could stick it in the first 4 bytes and only read 4092
      bytes, but that is a bigger code change than I am interested in.)
      
      This patch ignores the readdir_plus attributes if a readdir finds the
      information already in cache, and otherwise sets ->time_start to the time
      the readdir request was sent to the server.
      
      It might be nice to store - in the directory inode - the time stamp for
      the earliest readdir request that is still in the page cache, so that we
      don't ignore attribute data that we don't have to.  This patch doesn't do
      that.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  4. 15 April 2007, 1 commit
  5. 13 February 2007, 1 commit
  6. 04 February 2007, 4 commits
  7. 25 January 2007, 1 commit
  8. 09 December 2006, 1 commit
  9. 22 October 2006, 2 commits
  10. 21 October 2006, 3 commits
  11. 01 October 2006, 2 commits
  12. 25 September 2006, 1 commit
  13. 23 September 2006, 10 commits
    • NFS: nfs_lookup - don't hash dentry when optimising away the lookup · fd684071
      Committed by Trond Myklebust
      If the open intents tell us that a given lookup is going to result in an
      exclusive create, we currently optimize away the lookup call itself. The
      reason is that the lookup would not be atomic with the create RPC call, so
      why do it in the first place?
      
      A problem occurs, however, if the VFS aborts the exclusive create operation
      after the lookup, but before the call to create the file/directory: in this
      case we will end up with a hashed negative dentry in the dcache that has
      never been looked up.
      Fix this by only actually hashing the dentry once the create operation has
      been successfully completed.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Use cached page as buffer for NFS symlink requests · 94a6d753
      Committed by Chuck Lever
      Now that we have a copy of the symlink path in the page cache, we can pass
      a struct page down to the XDR routines instead of a string buffer.
      
      Test plan:
      Connectathon, all NFS versions.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: copy symlinks into page cache before sending NFS SYMLINK request · 873101b3
      Committed by Chuck Lever
      Currently the NFS client does not cache symlinks it creates.  They get
      cached only when the NFS client reads them back from the server.
      
      Copy the symlink into the page cache before sending it.
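      
      A rough way to observe the new behaviour from user space (an illustrative
      sketch, not part of the original test plan; it assumes an NFS mount at
      /mnt/nfs and the nfsstat(8) tool):
      
      cd /mnt/nfs
      nfsstat -c | grep -i -A1 readlink   # note the client readlink count
      ln -s some-target newlink           # the SYMLINK now also fills the page cache
      readlink newlink                    # should be answered from the cached page
      nfsstat -c | grep -i -A1 readlink   # the count should not have grown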
      
      Test plan:
      Connectathon, all NFS versions.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Fix double d_drop in nfs_instantiate() error path · 4f390c15
      Committed by Chuck Lever
      If the LOOKUP or GETATTR in nfs_instantiate fails, nfs_instantiate will do a
      d_drop before returning.  But some callers already do a d_drop in the case
      of an error return.  Make certain we do only one d_drop in all error paths.
      
      This issue was introduced because over time, the symlink proc API diverged
      slightly from the create/mkdir/mknod proc API.  To prevent other coding
      mistakes of this type, change the symlink proc API to be more like
      create/mkdir/mknod and move the nfs_instantiate call into the symlink proc
      routines so it is used in exactly the same way for create, mkdir, mknod,
      and symlink.
      
      Test plan:
      Connectathon, all versions of NFS.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: remove a no-longer-needed error check in nfs_symlink() · d3db90e2
      Committed by Chuck Lever
      In the early days of NFS, there was no duplicate reply cache on the server.
      Thus retransmitted non-idempotent requests often found that the request had
      already completed on the server.  To avoid passing an unanticipated return
      code to unsuspecting applications, NFS clients would often shunt error
      codes that implied the request had been retried but already completed.
      
      Thanks to NFS over TCP, duplicate reply caches on the server, and network
      performance and reliability improvements, it is safe to remove such checks.
      
      Test plan:
      None.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Share NFS superblocks per-protocol per-server per-FSID · 54ceac45
      Committed by David Howells
      The attached patch makes NFS share superblocks between mounts from the same
      server and FSID over the same protocol.
      
      It does this by creating each superblock with a false root and returning the
      real root dentry in the vfsmount presented by get_sb(). The root dentry set
      starts off as an anonymous dentry if we don't already have the dentry for its
      inode, otherwise it simply returns the dentry we already have.
      
      We may thus end up with several trees of dentries in the superblock, and if at
      some later point one of the anonymous tree roots is discovered by normal filesystem
      activity to be located in another tree within the superblock, the anonymous
      root is named and materialises attached to the second tree at the appropriate
      point.
      
      Why do it this way? Why not pass an extra argument to the mount() syscall to
      indicate the subpath and then pathwalk from the server root to the desired
      directory? You can't guarantee this will work for two reasons:
      
       (1) The root and intervening nodes may not be accessible to the client.
      
           With NFS2 and NFS3, for instance, mountd is called on the server to get
           the filehandle for the tip of a path. mountd won't give us handles for
           anything we don't have permission to access, and so we can't set up NFS
           inodes for such nodes, and so can't easily set up dentries (we'd have to
           have ghost inodes or something).
      
           With this patch we don't actually create dentries until we get handles
           from the server that we can use to set up their inodes, and we don't
           actually bind them into the tree until we know for sure where they go.
      
       (2) Inaccessible symbolic links.
      
           If we're asked to mount two exports from the server, eg:
      
      	mount warthog:/warthog/aaa/xxx /mmm
      	mount warthog:/warthog/bbb/yyy /nnn
      
           We may not be able to access anything nearer the root than xxx and yyy,
           but we may find out later that /mmm/www/yyy, say, is actually the same
           directory as the one mounted on /nnn. What we might then find out, for
           example, is that /warthog/bbb was actually a symbolic link to
           /warthog/aaa/xxx/www, but we can't actually determine that by talking to
           the server until /warthog is made available by NFS.
      
      This would lead to having constructed an erroneous dentry tree which we
           can't easily fix. We can end up with a dentry marked as a directory when
           it should actually be a symlink, or we could end up with an apparently
           hardlinked directory.
      
           With this patch we need not make assumptions about the type of a dentry
           for which we can't retrieve information, nor need we assume we know its
           place in the grand scheme of things until we actually see that place.
      
      This patch reduces the possibility of aliasing in the inode and page caches for
      inodes that may be accessed by more than one NFS export. It also reduces the
      number of superblocks required for NFS where there are many NFS exports being
      used from a server (home directory server + autofs for example).
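      
      The sharing is visible from user space: with the hypothetical exports from
      the example above, both mounts should now report the same device number
      (a rough check, not part of the original description):
      
      	mount warthog:/warthog/aaa/xxx /mmm
      	mount warthog:/warthog/bbb/yyy /nnn
      	stat -c '%d %n' /mmm /nnn    # same st_dev => one shared superblock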
      
      This in turn makes it simpler to do local caching of network filesystems, as it
      can then be guaranteed that there won't be links from multiple inodes in
      separate superblocks to the same cache file.
      
      Obviously, cache aliasing between different levels of NFS protocol could still
      be a problem, but at least that gives us another key to use when indexing the
      cache.
      
      This patch makes the following changes:
      
       (1) The server record construction/destruction has been abstracted out into
           its own set of functions to make things easier to get right.  These have
           been moved into fs/nfs/client.c.
      
           All the code in fs/nfs/client.c has to do with the management of
           connections to servers, and doesn't touch superblocks in any way; the
           remaining code in fs/nfs/super.c has to do with VFS superblock management.
      
       (2) The sequence of events undertaken by NFS mount is now reordered:
      
           (a) A volume representation (struct nfs_server) is allocated.
      
           (b) A server representation (struct nfs_client) is acquired.  This may be
           	 allocated or shared, and is keyed on server address, port and NFS
           	 version.
      
           (c) If allocated, the client representation is initialised.  The state
           	 member variable of nfs_client is used to prevent a race during
           	 initialisation from two mounts.
      
           (d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find
           	 the root filehandle for the mount (fs/nfs/getroot.c).  For NFS2/3 we
           	 are given the root FH in advance.
      
           (e) The volume FSID is probed for on the root FH.
      
           (f) The volume representation is initialised from the FSINFO record
           	 retrieved on the root FH.
      
           (g) sget() is called to acquire a superblock.  This may be allocated or
           	 shared, keyed on client pointer and FSID.
      
           (h) If allocated, the superblock is initialised.
      
           (i) If the superblock is shared, then the new nfs_server record is
           	 discarded.
      
           (j) The root dentry for this mount is looked up from the root FH.
      
           (k) The root dentry for this mount is assigned to the vfsmount.
      
       (3) nfs_readdir_lookup() creates dentries for each of the entries readdir()
           returns; this function now attaches disconnected trees from alternate
           roots that happen to be discovered attached to a directory being read (in
           the same way nfs_lookup() is made to do for lookup ops).
      
           The new d_materialise_unique() function is now used to do this, thus
           permitting the whole thing to be done under one set of locks, and thus
           avoiding any race between mount and lookup operations on the same
           directory.
      
       (4) The client management code uses a new debug facility: NFSDBG_CLIENT, which
           is set by echoing 1024 to /proc/net/sunrpc/nfs_debug; see the example
           after this list.
      
       (5) Clone mounts are now called xdev mounts.
      
       (6) Use the dentry passed to the statfs() op as the handle for retrieving fs
           statistics rather than the root dentry of the superblock (which is now a
           dummy).
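      
       As a quick illustration of item (4) (an editorial sketch, not part of the
       original description; run as root on a client with this patch applied):
      
       	# turn on NFS client-management debugging (NFSDBG_CLIENT = 1024)
       	echo 1024 > /proc/net/sunrpc/nfs_debug
       	# ... mount something, then read the new messages from the kernel log ...
       	dmesg | tail
       	# turn all NFS debugging back off
       	echo 0 > /proc/net/sunrpc/nfs_debug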
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Move rpc_ops from nfs_server to nfs_client · 8fa5c000
      Committed by David Howells
      Move the rpc_ops from the nfs_server struct to the nfs_client struct as they're
      common to all server records of a particular NFS protocol version.
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Add an ACCESS cache memory shrinker · 979df72e
      Committed by Trond Myklebust
      A pinned inode may in theory end up filling memory with cached ACCESS
      calls. This patch ensures that the VM may shrink away the cache in these
      particular cases.
      The shrinker works by iterating through the list of inodes on the global
      nfs_access_lru_list, and removing the least recently used access
      cache entry until it is done (or until the entire cache is empty).
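      
      One crude way to exercise the shrinker by hand (an editorial note, not part
      of the original description; requires root and assumes the drop_caches
      sysctl is available, i.e. 2.6.16 or later):
      
      # reclaim slab objects; this runs the registered shrinkers, including
      # the new ACCESS cache one
      echo 2 > /proc/sys/vm/drop_caches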
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Add a global LRU list for the ACCESS cache · cfcea3e8
      Committed by Trond Myklebust
      ...in order to allow the addition of a memory shrinker.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Add a new ACCESS rpc call cache to the linux nfs client · 1c3c07e9
      Committed by Trond Myklebust
      The current access cache only allows one entry at a time to be cached for each
      inode. Add a per-inode red-black tree in order to allow more than one to
      be cached at a time.
      
      Should significantly cut down the time spent in path traversal for shared
      directories such as ${PATH}, /usr/share, etc.
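      
      A crude way to see the effect (an illustrative sketch, not from the original
      description; the mount point, path and user names are hypothetical, and it
      assumes nfsstat(8) plus two local users able to read the path):
      
      nfsstat -c | grep -i -A1 access     # note the client ACCESS call count
      for i in $(seq 1 100); do
          # alternate credentials over the same directories: with only a single
          # cached entry per inode, the two users kept evicting each other
          sudo -u alice stat /mnt/nfs/usr/share/dict/words > /dev/null
          sudo -u bob   stat /mnt/nfs/usr/share/dict/words > /dev/null
      done
      nfsstat -c | grep -i -A1 access     # far fewer new ACCESS calls than before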
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  14. 06 July 2006, 1 commit
  15. 09 June 2006, 2 commits
  16. 20 April 2006, 1 commit
  17. 29 March 2006, 1 commit
  18. 21 March 2006, 3 commits
  19. 10 January 2006, 1 commit