  1. 10 October 2007: 2 commits
    • Re: [NFS] [PATCH] Attribute timeout handling and wrapping u32 jiffies · c7e15961
      Authored by Fabio Olive Leite
      I would like to discuss the idea that the current checks for attribute
      timeout using time_after are inadequate for 32-bit architectures, since
      time_after works correctly only when the two timestamps being compared
      are within 2^31 jiffies of each other. The signed overflow caused by
      comparing values more than 2^31 jiffies apart will flip the result,
      causing incorrect assumptions of validity.
      
      2^31 jiffies is a fairly large period of time (~25 days at HZ=1000) when compared
      to the lifetime of most kernel data structures, but for long lived NFS
      mounts that can sit idle for months (think that for some reason autofs
      cannot be used), it is easy to compare inode attribute timestamps with
      very disparate or even bogus values (as in when jiffies have wrapped
      many times, where the comparison doesn't even make sense).
      
      Currently the code tests for attribute timeout by simply adding the
      desired number of jiffies to the stored timestamp and comparing that,
      via time_after, with the timestamp of the currently obtained attribute
      data.  This is incorrect, as it returns true for the desired timeout
      period and then for another full 2^31 range of jiffies.
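
      The shape of the fix is to treat the attributes as fresh only while
      the current time lies inside the [timestamp, timestamp + attrtimeo]
      window, rather than merely "after" its start.  A sketch using the
      wrap-safe time_in_range() helper (identifiers follow the NFS client
      code of that era and are illustrative here):

      	/* true iff b <= a <= c in wrap-safe jiffies arithmetic */
      	#define time_in_range(a,b,c) \
      		(time_after_eq(a,b) && \
      		 time_before_eq(a,c))

      	static int nfs_attribute_timeout(struct inode *inode)
      	{
      		struct nfs_inode *nfsi = NFS_I(inode);

      		/* Old check, true for the timeout window plus a
      		 * further 2^31 jiffies:
      		 *
      		 *   return time_after(jiffies,
      		 *       nfsi->read_cache_jiffies + nfsi->attrtimeo);
      		 */
      		return !time_in_range(jiffies, nfsi->read_cache_jiffies,
      				nfsi->read_cache_jiffies + nfsi->attrtimeo);
      	}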
      
      In testing with artificial jumps (several small jumps, not one big
      crank) of the jiffies I was able to reproduce a problem found in a
      server with very long lived NFS mounts, where attributes would not be
      refreshed even after touching files and directories in the server:
      
      Initial uptime:
      03:42:01 up 6 min, 0 users, load average: 0.01, 0.12, 0.07
      
      NFS volume is mounted and time is advanced:
      03:38:09 up 25 days, 2 min, 0 users, load average: 1.22, 1.05, 1.08
      
      # ls -l /local/A/foo/bar /nfs/A/foo/bar
      -rw-r--r--  1 root root 0 Dec 17 03:38 /local/A/foo/bar
      -rw-r--r--  1 root root 0 Nov 22 00:36 /nfs/A/foo/bar
      
      # touch /local/A/foo/bar
      
      # ls -l /local/A/foo/bar /nfs/A/foo/bar
      -rw-r--r--  1 root root 0 Dec 17 03:47 /local/A/foo/bar
      -rw-r--r--  1 root root 0 Nov 22 00:36 /nfs/A/foo/bar
      
      We can see the local mtime is updated, but the NFS mount still shows
      the old value. The patch below makes it work:
      
      Initial setup...
      07:11:02 up 25 days, 1 min,  0 users,  load average: 0.15, 0.03, 0.04
      
      # ls -l /local/A/foo/bar /nfs/A/foo/bar
      -rw-r--r--  1 root root 0 Jan 11 07:11 /local/A/foo/bar
      -rw-r--r--  1 root root 0 Jan 11 07:11 /nfs/A/foo/bar
      
      # touch /local/A/foo/bar
      
      # ls -l /local/A/foo/bar /nfs/A/foo/bar
      -rw-r--r--  1 root root 0 Jan 11 07:14 /local/A/foo/bar
      -rw-r--r--  1 root root 0 Jan 11 07:14 /nfs/A/foo/bar
      Signed-off-by: Fabio Olive Leite <fleite@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • 64 bit ino support for NFS client · 4e769b93
      Authored by Peter Staubach
      Hi.
      
      Attached is a patch to modify the NFS client code to support
      64 bit ino's, as appropriate for the system and the NFS
      protocol version.
      
      The code basically just expands the NFS interfaces for routines
      which handle inode numbers from using ino_t to u64, and then uses
      the fileid in the nfs_inode instead of i_ino in the inode.  The
      code paths that were updated are in the getattr method and
      the readdir methods.
      
      This should be no real change on 64-bit platforms: since ino_t
      is an unsigned long there, it is already 64 bits wide.
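
      As a rough sketch of the getattr side (simplified from the NFS
      client code of that era; error handling trimmed, so treat it as
      illustrative rather than the literal patch):

      	int nfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
      			struct kstat *stat)
      	{
      		struct inode *inode = dentry->d_inode;
      		int err;

      		err = nfs_revalidate_inode(NFS_SERVER(inode), inode);
      		if (!err) {
      			generic_fillattr(inode, stat);
      			/* report the full 64-bit fileid kept in the
      			 * nfs_inode, not a possibly-truncated i_ino */
      			stat->ino = NFS_FILEID(inode);
      		}
      		return err;
      	}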
      
          Thanx...
      
                 ps
      Signed-off-by: Peter Staubach <staubach@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  2. 29 September 2007: 1 commit
  3. 20 July 2007: 2 commits
  4. 11 July 2007: 4 commits
  5. 22 May 2007: 1 commit
    • Detach sched.h from mm.h · e8edc6e0
      Authored by Alexey Dobriyan
      The first thing mm.h does is include sched.h, solely for the
      can_do_mlock() inline function, which dereferences "current" inside.
      By dealing with can_do_mlock(), mm.h can be detached from sched.h,
      which is good.  See below for why.
      
      This patch
      a) removes unconditional inclusion of sched.h from mm.h
      b) makes can_do_mlock() normal function in mm/mlock.c
      c) exports can_do_mlock() to not break compilation
      d) adds sched.h inclusions back to files that were getting it indirectly.
      e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were
         getting them indirectly
      
      Net result is:
      a) mm.h users would get less code to open, read, preprocess, parse, ... if
         they don't need sched.h
      b) sched.h stops being dependency for significant number of files:
         on x86_64 allmodconfig touching sched.h results in recompile of 4083 files,
         after patch it's only 3744 (-8.3%).
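
      The mechanics of (a)-(c), sketched (the can_do_mlock() body follows
      mm/mlock.c of that era):

      	/* include/linux/mm.h: prototype only, no sched.h required */
      	extern int can_do_mlock(void);

      	/* mm/mlock.c: the body, which needs "current" and thus
      	 * sched.h, now lives in a single translation unit */
      	int can_do_mlock(void)
      	{
      		if (capable(CAP_IPC_LOCK))
      			return 1;
      		if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
      			return 1;
      		return 0;
      	}
      	EXPORT_SYMBOL(can_do_mlock);	/* keep modular users linking */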
      
      Cross-compile tested on
      
      	all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
      	alpha alpha-up
      	arm
      	i386 i386-up i386-defconfig i386-allnoconfig
      	ia64 ia64-up
      	m68k
      	mips
      	parisc parisc-up
      	powerpc powerpc-up
      	s390 s390-up
      	sparc sparc-up
      	sparc64 sparc64-up
      	um-x86_64
      	x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
      
      as well as my two usual configs.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 15 May 2007: 1 commit
    • NFS: Fix some 'sparse' warnings... · 10afec90
      Authored by Trond Myklebust
       - fs/nfs/dir.c:610:8: warning: symbol 'nfs_llseek_dir' was not declared.
         Should it be static?
       - fs/nfs/dir.c:636:5: warning: symbol 'nfs_fsync_dir' was not declared.
         Should it be static?
       - fs/nfs/write.c:925:19: warning: symbol 'req' shadows an earlier one
       - fs/nfs/write.c:61:6: warning: symbol 'nfs_commit_rcu_free' was not
         declared. Should it be static?
       - fs/nfs/nfs4proc.c:793:5: warning: symbol 'nfs4_recover_expired_lease'
         was not declared. Should it be static?
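
      The fix for the "was not declared" class of warning is internal
      linkage; for the first one, for instance (signature as in fs/nfs/dir.c
      of that era):

      	-loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin)
      	+static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin)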
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  7. 10 May 2007: 3 commits
  8. 08 May 2007: 1 commit
  9. 01 May 2007: 2 commits
    • NFS: Fix directory caching problem - with test case and patch. · 83672d39
      Authored by Neil Brown
      Try running this script in an NFS-mounted directory (with a relatively
      recent client; 2.6.18 has the problem, as does 2.6.20).
      
      ------------------------------------------------------
      #!/bin/bash
      #
      # This script will produce the following error message from tar:
      #
      #   tar: newdir/innerdir/innerfile: file changed as we read it
      
      # create dirs
      rm -rf nfstest
      mkdir -p nfstest/dir/innerdir
      
      # create files (should not be empty)
      echo "Hello World!" >nfstest/dir/file
      echo "Hello World!" >nfstest/dir/innerdir/innerfile
      
      # problem only happens if we sleep before chmod
      sleep 1
      
      # change file modes
      chmod -R a+r nfstest
      
      # rename dir
      mv nfstest/dir nfstest/newdir
      
      # tar it
      tar -cf nfstest/nfstest.tar -C nfstest newdir
      
      # restore old dir name
      mv nfstest/newdir nfstest/dir
      --------------------------------------------------------
      
      What happens:
      
      The 'chmod -R' does a readdir_plus in each directory and the results
      get cached in the page cache.  It then updates the ctime on each file
      by one second.  When this happens, the post-op attributes are used to
      update the ctime stored on the client to match the value in the kernel.
      
      The 'mv' calls shrink_dcache_parent on the directory tree which
      flushes all the dentries (so a new lookup will be required) but
      doesn't flush the inodes or pagecache.
      
      The 'tar' does a readdir on each directory, but (in the case of
      'innerdir' at least) satisfies it from the pagecache and uses the
      READDIRPLUS data to update all the inodes.  In the case of
      'innerdir/innerfile', the ctime is out of date.
      
      'tar' then calls 'lstat' on innerdir/innerfile getting an old ctime.
      It then opens the file (triggering a GETATTR), reads the content, and
      then calls fstat to see if anything has changed.  It finds that ctime
      has changed and so complains.
      
      The problem seems to be that the cached readdirplus info is kept around
      for too long.
      
      My patch below discards pagecache data for directories when
      dentry_iput is called on them.  This effectively removes the symptom
      which convinces me that I correctly understand the problem.  However
      I'm not convinced that is a proper solution, as there could easily be
      other races that trigger the same problem without being affected by
      this 'fix'.
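
      A sketch of that approach, modelled on the nfs_dentry_iput() of that
      era (the sillyrename handling the real function also performs is
      omitted):

      	static void nfs_dentry_iput(struct dentry *dentry,
      				    struct inode *inode)
      	{
      		if (S_ISDIR(inode->i_mode))
      			/* drop any readdir cache; it could easily be old */
      			NFS_I(inode)->cache_validity |= NFS_INO_INVALID_DATA;
      		iput(inode);
      	}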
      
      One possibility would be to require that readdirplus pagecache data be
      only used *once* to instantiate an inode.  Somehow it should then be
      invalidated so that if the dentry subsequently disappears, it will
      cause a new request to the server to fill in the stat data.
      
      Another possibility is to compare the cache_change_attribute on the
      inode with something similar for the readdirplus info and reject the
      info from readdirplus if it is too old.
      
      I haven't tried to implement these and would value other opinions
      before I do.
      
      Thanks,
      NeilBrown
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Set meaningful value for fattr->time_start in readdirplus results. · 1f4eab7e
      Authored by Neil Brown
      Don't use an uninitialised value for fattr->time_start in readdirplus results.
      
      The 'fattr' structure filled in by nfs3_decode_dirent does not get a
      value for ->time_start set.
      Thus if an entry is for an inode that we already have in cache,
      when nfs_readdir_lookup calls nfs_fhget, it will call nfs_refresh_inode
      and may update the inode with out-of-date information.
      
      Directories are read a page at a time, so each page could have a
      different timestamp that "should" be used to set the time_start for
      the fattr for info in that page.  However storing the timestamp per
      page is awkward.  (We could stick it in the first 4 bytes and only read 4092
      bytes, but that is a bigger code change than I am interested in.)
      
      This patch ignores the readdir_plus attributes if a readdir finds the
      information already in cache, and otherwise sets ->time_start to the time
      the readdir request was sent to the server.
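
      Sketched, the idea is to note the time just before the READDIR RPC
      goes out and stamp every fattr decoded from that page with it
      (identifiers are illustrative, after fs/nfs/dir.c of that era):

      	/* in the readdir page filler: */
      	unsigned long timestamp = jiffies;	/* before the RPC is sent */
      	error = NFS_PROTO(inode)->readdir(dentry, cred, cookie, page,
      					  NFS_SERVER(inode)->dtsize, plus);
      	if (error >= 0)
      		desc->timestamp = timestamp;

      	/* later, when instantiating an entry decoded from that page: */
      	entry->fattr->time_start = desc->timestamp;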
      
      It might be nice to store - in the directory inode - the time stamp for
      the earliest readdir request that is still in the page cache, so that we
      don't ignore attribute data that we don't have to.  This patch doesn't do
      that.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  10. 15 April 2007: 1 commit
  11. 13 February 2007: 1 commit
  12. 04 February 2007: 4 commits
  13. 25 January 2007: 1 commit
  14. 09 December 2006: 1 commit
  15. 22 October 2006: 2 commits
  16. 21 October 2006: 3 commits
  17. 01 October 2006: 2 commits
  18. 25 September 2006: 1 commit
  19. 23 September 2006: 7 commits
    • NFS: nfs_lookup - don't hash dentry when optimising away the lookup · fd684071
      Authored by Trond Myklebust
      If the open intents tell us that a given lookup is going to result in an
      exclusive create, we currently optimize away the lookup call itself. The
      reason is that the lookup would not be atomic with the create RPC call, so
      why do it in the first place?
      
      A problem occurs, however, if the VFS aborts the exclusive create operation
      after the lookup, but before the call to create the file/directory: in this
      case we will end up with a hashed negative dentry in the dcache that has
      never been looked up.
      Fix this by only actually hashing the dentry once the create operation has
      been successfully completed.
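
      In sketch form (hedged; helper names after the fs/nfs/dir.c of that
      era):

      	/* nfs_lookup(): for an exclusive create the LOOKUP RPC is
      	 * skipped and, crucially, the dentry is NOT hashed here; if
      	 * the VFS aborts before CREATE, no stale negative dentry is
      	 * left in the dcache. */
      	if (nfs_is_exclusive_create(dir, nd))
      		return NULL;		/* dentry handed back unhashed */

      	/* nfs_instantiate(), once the CREATE RPC has succeeded: */
      	d_add(dentry, inode);		/* instantiate and hash in one step */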
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Use cached page as buffer for NFS symlink requests · 94a6d753
      Authored by Chuck Lever
      Now that we have a copy of the symlink path in the page cache, we can pass
      a struct page down to the XDR routines instead of a string buffer.
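
      In effect, the per-version symlink proc op changes shape along these
      lines (a sketch; argument lists simplified):

      	/* before: the path travels as a string buffer */
      	int (*symlink)(struct inode *dir, struct dentry *dentry,
      		       const char *path, struct iattr *sattr);

      	/* after: the XDR layer reads the path straight out of the
      	 * page-cache page that already holds it */
      	int (*symlink)(struct inode *dir, struct dentry *dentry,
      		       struct page *page, unsigned int len,
      		       struct iattr *sattr);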
      
      Test plan:
      Connectathon, all NFS versions.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: copy symlinks into page cache before sending NFS SYMLINK request · 873101b3
      Authored by Chuck Lever
      Currently the NFS client does not cache symlinks it creates.  They get
      cached only when the NFS client reads them back from the server.
      
      Copy the symlink into the page cache before sending it.
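
      A condensed sketch of the approach (after the fs/nfs/dir.c of that
      era; error handling trimmed and details illustrative):

      	static int nfs_symlink(struct inode *dir, struct dentry *dentry,
      			       const char *symname)
      	{
      		struct iattr attr = { .ia_mode = S_IFLNK | S_IRWXUGO,
      				      .ia_valid = ATTR_MODE };
      		unsigned int pathlen = strlen(symname);
      		struct page *page;
      		char *kaddr;
      		int error;

      		page = alloc_page(GFP_HIGHUSER);
      		if (!page)
      			return -ENOMEM;

      		kaddr = kmap_atomic(page, KM_USER0);
      		memcpy(kaddr, symname, pathlen);
      		if (pathlen < PAGE_SIZE)
      			memset(kaddr + pathlen, 0, PAGE_SIZE - pathlen);
      		kunmap_atomic(kaddr, KM_USER0);

      		error = NFS_PROTO(dir)->symlink(dir, dentry, page,
      						pathlen, &attr);

      		/* on success, seed the new inode's page cache so a later
      		 * readlink() needs no READLINK round trip */
      		if (error == 0 &&
      		    add_to_page_cache(page, dentry->d_inode->i_mapping,
      				      0, GFP_KERNEL) == 0) {
      			SetPageUptodate(page);
      			unlock_page(page);
      		} else
      			__free_page(page);

      		return error;
      	}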
      
      Test plan:
      Connectathon, all NFS versions.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Fix double d_drop in nfs_instantiate() error path · 4f390c15
      Authored by Chuck Lever
      If the LOOKUP or GETATTR in nfs_instantiate fails, nfs_instantiate will do a
      d_drop before returning.  But some callers already do a d_drop in the case
      of an error return.  Make certain we do only one d_drop in all error paths.
      
      This issue was introduced because over time, the symlink proc API diverged
      slightly from the create/mkdir/mknod proc API.  To prevent other coding
      mistakes of this type, change the symlink proc API to be more like
      create/mkdir/mknod and move the nfs_instantiate call into the symlink proc
      routines so it is used in exactly the same way for create, mkdir, mknod,
      and symlink.
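
      A simplified sketch of the rule this establishes (the real
      nfs_instantiate() also issues a LOOKUP or GETATTR when the filehandle
      or attributes are missing):

      	static int nfs_instantiate(struct dentry *dentry,
      				   struct nfs_fh *fhandle,
      				   struct nfs_fattr *fattr)
      	{
      		struct inode *inode = nfs_fhget(dentry->d_sb, fhandle, fattr);

      		if (IS_ERR(inode)) {
      			d_drop(dentry);		/* the one d_drop... */
      			return PTR_ERR(inode);
      		}
      		d_add(dentry, inode);
      		return 0;
      	}

      	/* ...so callers (create, mkdir, mknod, and now symlink alike)
      	 * just propagate the error without a second d_drop() */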
      
      Test plan:
      Connectathon, all versions of NFS.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: remove a no-longer-needed error check in nfs_symlink() · d3db90e2
      Authored by Chuck Lever
      In the early days of NFS, there was no duplicate reply cache on the server.
      Thus retransmitted non-idempotent requests often found that the request had
      already completed on the server.  To avoid passing an unanticipated return
      code to unsuspecting applications, NFS clients would often shunt error
      codes that implied the request had been retried but already completed.
      
      Thanks to NFS over TCP, duplicate reply caches on the server, and network
      performance and reliability improvements, it is safe to remove such checks.
      
      Test plan:
      None.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Share NFS superblocks per-protocol per-server per-FSID · 54ceac45
      Authored by David Howells
      The attached patch makes NFS share superblocks between mounts from the same
      server and FSID over the same protocol.
      
      It does this by creating each superblock with a false root and returning the
      real root dentry in the vfsmount presented by get_sb(). The root dentry that
      gets set starts off as an anonymous dentry if we don't already have a dentry
      for its inode; otherwise the dentry we already have is simply reused.
      
      We may thus end up with several trees of dentries in the superblock, and if at
      some later point one of the anonymous tree roots is discovered by normal
      filesystem activity to be located in another tree within the superblock, the
      anonymous root is named and materialised, attached to the second tree at the
      appropriate point.
      
      Why do it this way? Why not pass an extra argument to the mount() syscall to
      indicate the subpath and then pathwalk from the server root to the desired
      directory? You can't guarantee this will work for two reasons:
      
       (1) The root and intervening nodes may not be accessible to the client.
      
           With NFS2 and NFS3, for instance, mountd is called on the server to get
           the filehandle for the tip of a path. mountd won't give us handles for
           anything we don't have permission to access, and so we can't set up NFS
           inodes for such nodes, and so can't easily set up dentries (we'd have to
           have ghost inodes or something).
      
           With this patch we don't actually create dentries until we get handles
           from the server that we can use to set up their inodes, and we don't
           actually bind them into the tree until we know for sure where they go.
      
       (2) Inaccessible symbolic links.
      
           If we're asked to mount two exports from the server, eg:
      
      	mount warthog:/warthog/aaa/xxx /mmm
      	mount warthog:/warthog/bbb/yyy /nnn
      
           We may not be able to access anything nearer the root than xxx and yyy,
           but we may find out later that /mmm/www/yyy, say, is actually the same
           directory as the one mounted on /nnn. What we might then find out, for
           example, is that /warthog/bbb was actually a symbolic link to
           /warthog/aaa/xxx/www, but we can't actually determine that by talking to
           the server until /warthog is made available by NFS.
      
      This would lead to having constructed an erroneous dentry tree which we
           can't easily fix. We can end up with a dentry marked as a directory when
           it should actually be a symlink, or we could end up with an apparently
           hardlinked directory.
      
           With this patch we need not make assumptions about the type of a dentry
           for which we can't retrieve information, nor need we assume we know its
           place in the grand scheme of things until we actually see that place.
      
      This patch reduces the possibility of aliasing in the inode and page caches for
      inodes that may be accessed by more than one NFS export. It also reduces the
      number of superblocks required for NFS where there are many NFS exports being
      used from a server (home directory server + autofs for example).
      
      This in turn makes it simpler to do local caching of network filesystems, as it
      can then be guaranteed that there won't be links from multiple inodes in
      separate superblocks to the same cache file.
      
      Obviously, cache aliasing between different levels of NFS protocol could still
      be a problem, but at least that gives us another key to use when indexing the
      cache.
      
      This patch makes the following changes:
      
       (1) The server record construction/destruction has been abstracted out into
           its own set of functions to make things easier to get right.  These have
           been moved into fs/nfs/client.c.
      
           All the code in fs/nfs/client.c has to do with the management of
           connections to servers, and doesn't touch superblocks in any way; the
           remaining code in fs/nfs/super.c has to do with VFS superblock management.
      
       (2) The sequence of events undertaken by NFS mount is now reordered:
      
           (a) A volume representation (struct nfs_server) is allocated.
      
           (b) A server representation (struct nfs_client) is acquired.  This may be
           	 allocated or shared, and is keyed on server address, port and NFS
           	 version.
      
           (c) If allocated, the client representation is initialised.  The state
           	 member variable of nfs_client is used to prevent a race during
           	 initialisation from two mounts.
      
           (d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find
           	 the root filehandle for the mount (fs/nfs/getroot.c).  For NFS2/3 we
           	 are given the root FH in advance.
      
           (e) The volume FSID is probed for on the root FH.
      
           (f) The volume representation is initialised from the FSINFO record
           	 retrieved on the root FH.
      
     (g) sget() is called to acquire a superblock.  This may be allocated or
     	 shared, keyed on client pointer and FSID (see the sketch after this
     	 list).
      
           (h) If allocated, the superblock is initialised.
      
           (i) If the superblock is shared, then the new nfs_server record is
           	 discarded.
      
           (j) The root dentry for this mount is looked up from the root FH.
      
           (k) The root dentry for this mount is assigned to the vfsmount.
      
       (3) nfs_readdir_lookup() creates dentries for each of the entries readdir()
           returns; this function now attaches disconnected trees from alternate
           roots that happen to be discovered attached to a directory being read (in
           the same way nfs_lookup() is made to do for lookup ops).
      
           The new d_materialise_unique() function is now used to do this, thus
           permitting the whole thing to be done under one set of locks, and thus
           avoiding any race between mount and lookup operations on the same
           directory.
      
       (4) The client management code uses a new debug facility: NFSDBG_CLIENT which
           is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.
      
       (5) Clone mounts are now called xdev mounts.
      
       (6) Use the dentry passed to the statfs() op as the handle for retrieving fs
           statistics rather than the root dentry of the superblock (which is now a
           dummy).
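
      For step (2)(g), the superblock key sketches out like this (names
      after the fs/nfs/super.c of that era; treat as illustrative):

      	static int nfs_compare_super(struct super_block *sb, void *data)
      	{
      		struct nfs_server *server = data, *old = NFS_SB(sb);

      		/* same server record (address, port, NFS version)? */
      		if (old->nfs_client != server->nfs_client)
      			return 0;
      		/* same volume (FSID) on that server? */
      		if (memcmp(&old->fsid, &server->fsid,
      			   sizeof(old->fsid)) != 0)
      			return 0;
      		return 1;
      	}

      	s = sget(fs_type, nfs_compare_super, nfs_set_super, server);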
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFS: Move rpc_ops from nfs_server to nfs_client · 8fa5c000
      Authored by David Howells
      Move the rpc_ops from the nfs_server struct to the nfs_client struct as they're
      common to all server records of a particular NFS protocol version.
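
      Sketched (field names illustrative), the accessor used throughout the
      client then indirects through the shared record:

      	struct nfs_client {
      		/* one per server address/port/NFS version */
      		const struct nfs_rpc_ops *rpc_ops;	/* moved here */
      	};

      	struct nfs_server {
      		struct nfs_client *nfs_client;	/* shared server record */
      		/* rpc_ops no longer lives here */
      	};

      	static inline const struct nfs_rpc_ops *NFS_PROTO(const struct inode *inode)
      	{
      		return NFS_SERVER(inode)->nfs_client->rpc_ops;
      	}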
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>