1. 11 Jan 2008 (1 commit)
  2. 20 Oct 2007 (1 commit)
  3. 10 Oct 2007 (1 commit)
  4. 08 Aug 2007 (1 commit)
  5. 11 Jul 2007 (11 commits)
  6. 15 May 2007 (1 commit)
    • NFS: Fix some 'sparse' warnings... · 10afec90
      Trond Myklebust committed
       - fs/nfs/dir.c:610:8: warning: symbol 'nfs_llseek_dir' was not declared.
         Should it be static?
       - fs/nfs/dir.c:636:5: warning: symbol 'nfs_fsync_dir' was not declared.
         Should it be static?
       - fs/nfs/write.c:925:19: warning: symbol 'req' shadows an earlier one
       - fs/nfs/write.c:61:6: warning: symbol 'nfs_commit_rcu_free' was not
         declared. Should it be static?
       - fs/nfs/nfs4proc.c:793:5: warning: symbol 'nfs4_recover_expired_lease'
         was not declared. Should it be static?
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      10afec90
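      The warnings above are sparse's way of flagging functions that have external
      linkage but no declaration in any header. A minimal, compilable sketch of the
      same fix pattern (the file and function below are hypothetical stand-ins, not
      the NFS code; the fix is simply the added 'static' storage class):

        /* demo.c: without 'static', sparse reports
         *   warning: symbol 'double_it' was not declared. Should it be static?
         * Giving the file-local helper internal linkage silences the warning and
         * keeps the symbol out of the global namespace.
         */
        #include <stdio.h>

        static int double_it(int x)
        {
                return x * 2;
        }

        int main(void)
        {
                printf("%d\n", double_it(21));
                return 0;
        }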
  7. 23 Sep 2006 (4 commits)
    • NFS: Share NFS superblocks per-protocol per-server per-FSID · 54ceac45
      David Howells committed
      The attached patch makes NFS share superblocks between mounts from the same
      server and FSID over the same protocol.
      
      It does this by creating each superblock with a false root and returning the
      real root dentry in the vfsmount presented by get_sb(). The root dentry set
      starts off as an anonymous dentry if we don't already have the dentry for its
      inode, otherwise it simply returns the dentry we already have.
      
      We may thus end up with several trees of dentries in the superblock, and if at
       some later point one of the anonymous tree roots is discovered by normal filesystem
      activity to be located in another tree within the superblock, the anonymous
      root is named and materialises attached to the second tree at the appropriate
      point.
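      A kernel-style sketch of that grafting step; the wrapper name
      nfs_attach_dentry() is a hypothetical illustration rather than actual NFS
      code, while d_materialise_unique() is the helper this series relies on (see
      change (3) further down in this message):

        /* Graft a lookup result into the dentry tree.  If 'inode' already has a
         * dentry elsewhere in this shared superblock (for example an anonymous
         * root from another mount of the same server/FSID), that alias is named
         * and its whole disconnected subtree is spliced in at this point;
         * otherwise 'dentry' is simply instantiated with 'inode'.
         */
        static struct dentry *nfs_attach_dentry(struct dentry *dentry,
                                                struct inode *inode)
        {
                struct dentry *alias;

                alias = d_materialise_unique(dentry, inode);
                if (IS_ERR(alias))
                        return alias;
                return alias != NULL ? alias : dentry;
        }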
      
      Why do it this way? Why not pass an extra argument to the mount() syscall to
      indicate the subpath and then pathwalk from the server root to the desired
      directory? You can't guarantee this will work for two reasons:
      
       (1) The root and intervening nodes may not be accessible to the client.
      
           With NFS2 and NFS3, for instance, mountd is called on the server to get
           the filehandle for the tip of a path. mountd won't give us handles for
           anything we don't have permission to access, and so we can't set up NFS
           inodes for such nodes, and so can't easily set up dentries (we'd have to
           have ghost inodes or something).
      
           With this patch we don't actually create dentries until we get handles
           from the server that we can use to set up their inodes, and we don't
           actually bind them into the tree until we know for sure where they go.
      
       (2) Inaccessible symbolic links.
      
           If we're asked to mount two exports from the server, eg:
      
      	mount warthog:/warthog/aaa/xxx /mmm
      	mount warthog:/warthog/bbb/yyy /nnn
      
           We may not be able to access anything nearer the root than xxx and yyy,
           but we may find out later that /mmm/www/yyy, say, is actually the same
           directory as the one mounted on /nnn. What we might then find out, for
           example, is that /warthog/bbb was actually a symbolic link to
           /warthog/aaa/xxx/www, but we can't actually determine that by talking to
           the server until /warthog is made available by NFS.
      
           This would lead to having constructed an erroneous dentry tree which we
           can't easily fix. We can end up with a dentry marked as a directory when
           it should actually be a symlink, or we could end up with an apparently
           hardlinked directory.
      
           With this patch we need not make assumptions about the type of a dentry
           for which we can't retrieve information, nor need we assume we know its
           place in the grand scheme of things until we actually see that place.
      
      This patch reduces the possibility of aliasing in the inode and page caches for
      inodes that may be accessed by more than one NFS export. It also reduces the
      number of superblocks required for NFS where there are many NFS exports being
      used from a server (home directory server + autofs for example).
      
      This in turn makes it simpler to do local caching of network filesystems, as it
      can then be guaranteed that there won't be links from multiple inodes in
      separate superblocks to the same cache file.
      
      Obviously, cache aliasing between different levels of NFS protocol could still
      be a problem, but at least that gives us another key to use when indexing the
      cache.
      
      This patch makes the following changes:
      
       (1) The server record construction/destruction has been abstracted out into
           its own set of functions to make things easier to get right.  These have
           been moved into fs/nfs/client.c.
      
           All the code in fs/nfs/client.c has to do with the management of
           connections to servers, and doesn't touch superblocks in any way; the
           remaining code in fs/nfs/super.c has to do with VFS superblock management.
      
       (2) The sequence of events undertaken by NFS mount is now reordered:
      
           (a) A volume representation (struct nfs_server) is allocated.
      
           (b) A server representation (struct nfs_client) is acquired.  This may be
           	 allocated or shared, and is keyed on server address, port and NFS
           	 version.
      
           (c) If allocated, the client representation is initialised.  The state
           	 member variable of nfs_client is used to prevent a race during
           	 initialisation from two mounts.
      
           (d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find
           	 the root filehandle for the mount (fs/nfs/getroot.c).  For NFS2/3 we
           	 are given the root FH in advance.
      
           (e) The volume FSID is probed for on the root FH.
      
           (f) The volume representation is initialised from the FSINFO record
           	 retrieved on the root FH.
      
           (g) sget() is called to acquire a superblock.  This may be allocated or
           	 shared, keyed on client pointer and FSID.
      
           (h) If allocated, the superblock is initialised.
      
           (i) If the superblock is shared, then the new nfs_server record is
           	 discarded.
      
           (j) The root dentry for this mount is looked up from the root FH.
      
           (k) The root dentry for this mount is assigned to the vfsmount.
      
       (3) nfs_readdir_lookup() creates dentries for each of the entries readdir()
           returns; this function now attaches disconnected trees from alternate
           roots that happen to be discovered attached to a directory being read (in
           the same way nfs_lookup() is made to do for lookup ops).
      
           The new d_materialise_unique() function is now used to do this, thus
           permitting the whole thing to be done under one set of locks, and thus
           avoiding any race between mount and lookup operations on the same
           directory.
      
       (4) The client management code uses a new debug facility: NFSDBG_CLIENT which
           is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.
      
       (5) Clone mounts are now called xdev mounts.
      
       (6) Use the dentry passed to the statfs() op as the handle for retrieving fs
           statistics rather than the root dentry of the superblock (which is now a
           dummy).
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      54ceac45
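      Steps (2)(g) to (2)(i) in the entry above hinge on the test callback passed
      to sget(). A kernel-style sketch of the sharing key; the key structure and
      callback names are assumptions for illustration rather than the actual
      fs/nfs/super.c code, and sget() is shown in its four-argument form of that
      era:

        struct nfs_sb_key {
                struct nfs_client *client;      /* shared server record: address, port, NFS version */
                struct nfs_fsid fsid;           /* volume FSID probed on the root filehandle */
        };

        /* Match an existing superblock: same server record, same FSID. */
        static int nfs_sb_test(struct super_block *sb, void *data)
        {
                struct nfs_sb_key *key = data;
                struct nfs_server *server = NFS_SB(sb);

                return server->nfs_client == key->client &&
                       server->fsid.major == key->fsid.major &&
                       server->fsid.minor == key->fsid.minor;
        }

        static int nfs_sb_set(struct super_block *sb, void *data)
        {
                return 0;       /* a freshly allocated sb is initialised by the caller */
        }

        /* In the mount path, step (2)(g):
         *      sb = sget(fs_type, nfs_sb_test, nfs_sb_set, &key);
         * If sget() handed back an already-initialised superblock, the newly
         * built nfs_server record is discarded, as per step (2)(i).
         */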
    • NFS: Generalise the nfs_client structure · 24c8dbbb
      David Howells committed
      Generalise the nfs_client structure by:
      
       (1) Moving nfs_client to a more general place (nfs_fs_sb.h).
      
       (2) Renaming its maintenance routines to be non-NFS4 specific.
      
       (3) Move those maintenance routines to a new non-NFS4 specific file (client.c)
           and move the declarations to internal.h.
      
       (4) Make nfs_find/get_client() take a full sockaddr_in to include the port
           number (will be required for NFS2/3).
      
       (5) Make nfs_find/get_client() take the NFS protocol version (again will be
           required to differentiate NFS2, 3 & 4 client records).
      
      Also:
      
       (6) Make nfs_client construction proceed akin to inodes, marking them as under
           construction and providing a function to indicate completion.
      
       (7) Make nfs_get_client() wait interruptibly if it finds a client that it can
           share, but that client is currently being constructed.
      
       (8) Make nfs4_create_client() use (6) and (7) instead of locking cl_sem.
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      24c8dbbb
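      A kernel-style sketch of points (6) to (8) above: a freshly allocated
      nfs_client is flagged as still under construction, and a second mount that
      finds it waits interruptibly until the first mount marks it ready. The flag,
      wait queue and function names are assumptions for illustration, not the
      actual fs/nfs/client.c code:

        static DECLARE_WAIT_QUEUE_HEAD(nfs_client_init_wq);    /* hypothetical */

        /* Point (7): a mount that found a shareable but still-initialising client
         * record sleeps here; a signal aborts the wait (and hence the mount). */
        static int nfs_wait_for_client_init(struct nfs_client *clp)
        {
                return wait_event_interruptible(nfs_client_init_wq,
                                clp->cl_init_state != NFS_CLIENT_CONSTRUCTING);
        }

        /* Point (6): called by the mount that allocated the client once
         * construction has finished (or failed), waking any waiters. */
        static void nfs_mark_client_ready(struct nfs_client *clp, int state)
        {
                clp->cl_init_state = state;
                wake_up_all(&nfs_client_init_wq);
        }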
    • NFS: Rename nfs_server::nfs4_state · 7539bbab
      David Howells committed
      Rename nfs_server::nfs4_state to nfs_client as it will be used to represent the
      client state for NFS2 and NFS3 also.
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      7539bbab
    • NFS: Rename struct nfs4_client to struct nfs_client · adfa6f98
      David Howells committed
      Rename struct nfs4_client to struct nfs_client so that it can become the basis
      for a general client record for NFS2 and NFS3 in addition to NFS4.
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      adfa6f98
  8. 01 Jul 2006 (1 commit)
  9. 21 Mar 2006 (1 commit)
  10. 07 Jan 2006 (5 commits)
  11. 26 Nov 2005 (1 commit)
  12. 07 Nov 2005 (1 commit)
  13. 05 Nov 2005 (3 commits)
  14. 21 Oct 2005 (2 commits)
  15. 19 Oct 2005 (6 commits)
    • NFSv4: Fix an oopsable condition in nfs_free_seqid · 7f709a48
      Trond Myklebust committed
       Storing a pointer to the struct rpc_task in the nfs_seqid is broken
       since the nfs_seqid may be freed well after the task has been destroyed.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      7f709a48
    • NFSv4: Make NFS clean up byte range locks asynchronously · faf5f49c
      Trond Myklebust committed
       Currently we fail to do so if the process was signalled.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      faf5f49c
    • NFSv4: Remove nfs4_client->cl_sem from close() path · 83c9d41e
      Trond Myklebust committed
       We no longer need to worry about collisions between close() and the state
       recovery code, since the new close will automatically recheck the
       file state once it is done waiting on its sequence slot.
      
       Ditto for the nfs4_proc_locku() procedure.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      83c9d41e
    • NFSv4: Remove obsolete state_owner and lock_owner semaphores · e6dfa553
      Trond Myklebust committed
       OPEN, CLOSE, etc no longer need these semaphores to ensure ordering of
       requests.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      e6dfa553
    • NFSv4: Fix a potential CLOSE race · 9512135d
      Trond Myklebust committed
       Once the state_owner and lock_owner semaphores get removed, it will be
       possible for other OPEN requests to reopen the same file if they have
       lower sequence ids than our CLOSE call.
       This patch ensures that we recheck the file state once
       nfs_wait_on_sequence() has completed waiting.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      9512135d
    • NFSv4: Add functions to order RPC calls · cee54fc9
      Trond Myklebust committed
       NFSv4 file state-changing functions such as OPEN, CLOSE, LOCK,... are all
       labelled with "sequence identifiers" in order to prevent the server from
       reordering RPC requests, as this could cause its file state to
       become out of sync with the client.
      
       Currently the NFS client code enforces this ordering locally using
       semaphores to restrict access to structures until the RPC call is done.
       This, of course, only works with synchronous RPC calls, since the
       user process must first grab the semaphore.
       By dropping semaphores, and instead teaching the RPC engine to hold
       the RPC calls until they are ready to be sent, we can extend this
       process to work nicely with asynchronous RPC calls too.
      
       This patch adds a new list called "rpc_sequence" that defines the order
       of the RPC calls to be sent. We add one such list for each state_owner.
        When an RPC call is ready to be sent, it checks whether it is at the top of
        the rpc_sequence list. If so, it proceeds; if not, it goes back to sleep
        and loops until it reaches the top of the list.
        Once the RPC call has completed, it bumps the sequence id counter, removes
        itself from the rpc_sequence list, and wakes up the next sleeper.
      
       Note that the state_owner sequence ids and lock_owner sequence ids are
       all indexed to the same rpc_sequence list, so OPEN, LOCK,... requests
       are all ordered w.r.t. each other.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      cee54fc9
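      A schematic sketch of the rpc_sequence handling described in the entry above;
      the structure layout and function names are illustrative assumptions rather
      than the actual fs/nfs/nfs4state.c code:

        struct nfs_rpc_sequence {
                spinlock_t lock;
                struct list_head list;          /* per-state_owner ordering of pending calls */
                struct rpc_wait_queue wait;     /* tasks parked until they reach the head */
        };

        struct nfs_seqid {
                struct list_head list;          /* this call's slot in the rpc_sequence */
        };

        /* Called when an RPC call is about to be transmitted: only the entry at
         * the head of the rpc_sequence may proceed; any other caller should put
         * its task back to sleep on 'sequence->wait' and retry when woken. */
        static int nfs_seqid_may_proceed(struct nfs_rpc_sequence *sequence,
                                         struct nfs_seqid *seqid)
        {
                int ready;

                spin_lock(&sequence->lock);
                if (list_empty(&seqid->list))
                        list_add_tail(&seqid->list, &sequence->list);
                ready = (sequence->list.next == &seqid->list);
                spin_unlock(&sequence->lock);
                return ready;
        }

        /* Called once the call has completed and the sequence id counter has been
         * bumped: leave the rpc_sequence and wake the next sleeper. */
        static void nfs_seqid_done(struct nfs_rpc_sequence *sequence,
                                   struct nfs_seqid *seqid)
        {
                spin_lock(&sequence->lock);
                list_del_init(&seqid->list);
                rpc_wake_up_next(&sequence->wait);
                spin_unlock(&sequence->lock);
        }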