1. 04 Jan 2012, 4 commits
  2. 05 Nov 2011, 1 commit
    • nfs: when attempting to open a directory, fall back on normal lookup (try #5) · 1788ea6e
      Jeff Layton authored
      commit d953126a changed how nfs_atomic_lookup handles an -EISDIR return
      from an OPEN call. Prior to that patch, that caused the client to fall
      back to doing a normal lookup. When that patch went in, the code began
      returning that error to userspace. The d_revalidate codepath however
      never had the corresponding change, so it was still possible to end up
      with a NULL ctx->state pointer after that.
      
      That patch caused a regression. When we attempt to open a directory that
      does not have a cached dentry, that open now errors out with EISDIR. If
      you attempt the same open with a cached dentry, it will succeed.
      
      Fix this by reverting the change in nfs_atomic_lookup and allowing
      attempts to open directories to fall back to a normal lookup.
      
      Also, add an NFSv4-specific f_ops->open routine that just returns
      -ENOTDIR. This should never be called if things are working properly,
      but if it ever is, then the dprintk may help in debugging.
      
      To facilitate this, a new file_operations field is also added to the
      nfs_rpc_ops struct.
      
      Cc: stable@kernel.org
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
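
      A minimal sketch of the safety-net open routine described above (the
      function name is made up for this illustration; the real patch hooks its
      equivalent up through the new file_operations field in nfs_rpc_ops):

          /* Illustrative only: opens of NFSv4 directories are normally handled
           * during lookup/revalidation, so reaching this function means
           * something went wrong; the dprintk leaves a trace for debugging. */
          static int nfs4_dir_open_stub(struct inode *inode, struct file *filp)
          {
                  dprintk("NFS: unexpected call to %s (inode=%p, filp=%p)\n",
                          __func__, inode, filp);
                  return -ENOTDIR;
          }
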
  3. 31 Jul 2011, 2 commits
  4. 21 Jul 2011, 1 commit
    • fs: push i_mutex and filemap_write_and_wait down into ->fsync() handlers · 02c24a82
      Josef Bacik authored
      Btrfs needs to be able to control how filemap_write_and_wait_range() is called
      in fsync to make it less of a painful operation, so push the taking of i_mutex and
      the calling of filemap_write_and_wait() down into the ->fsync() handlers.  Some
      file systems, such as ext3 and ocfs2, can seemingly drop taking the i_mutex
      altogether.  For correctness' sake I just pushed everything down in all cases to
      make sure that we keep the current behavior the same for everybody, and then each
      individual fs maintainer can make up their mind about what to do from there.
      Thanks,
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
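
      To picture what the push-down means for an individual filesystem, here is
      a hedged sketch of an ->fsync() handler after this change; everything
      prefixed examplefs_ is hypothetical, but the filemap_write_and_wait_range()
      call and the i_mutex locking were previously done by the VFS caller:

          /* Sketch of a post-change ->fsync() handler: the data writeback and
           * the i_mutex locking that the VFS used to do now live here. */
          static int examplefs_fsync(struct file *file, loff_t start, loff_t end,
                                     int datasync)
          {
                  struct inode *inode = file->f_mapping->host;
                  int ret;

                  /* formerly done by the caller before invoking ->fsync() */
                  ret = filemap_write_and_wait_range(file->f_mapping, start, end);
                  if (ret)
                          return ret;

                  /* formerly taken by the caller; now up to the filesystem,
                   * which may decide it does not need the lock at all */
                  mutex_lock(&inode->i_mutex);
                  ret = examplefs_sync_metadata(inode, datasync); /* hypothetical helper */
                  mutex_unlock(&inode->i_mutex);
                  return ret;
          }
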
  5. 20 Jul 2011, 9 commits
  6. 30 May 2011, 1 commit
  7. 26 May 2011, 3 commits
  8. 25 May 2011, 1 commit
  9. 25 Mar 2011, 1 commit
  10. 24 Mar 2011, 3 commits
  11. 17 Mar 2011, 1 commit
    • nfs: store devname at disconnected NFS roots · b1942c5f
      Al Viro authored
      part 2: make sure that disconnected roots have corresponding mnt_devname
      values stashed into them.
      
      Have nfs*_get_root() stuff a copy of devname into ->d_fsdata of the
      found root, provided that it is disconnected.
      
      Have ->d_release() free it when dentry goes away.
      
      Have the places where NFS uses ->d_fsdata for sillyrename (and that
      can *never* happen to a disconnected root - dentry will be attached
      to its parent) free old devname copies if they find those.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
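
      A rough sketch of the idea, with made-up function names and a simplified
      "disconnected" check (the real patch does this inside nfs*_get_root() and
      the NFS ->d_release()):

          /* Illustrative only: keep a private copy of the mount devname in
           * ->d_fsdata of a disconnected root, and free it on release. */
          static int example_stash_devname(struct dentry *root, const char *devname)
          {
                  if (!(root->d_flags & DCACHE_DISCONNECTED))
                          return 0;       /* connected roots don't need a copy */
                  root->d_fsdata = kstrdup(devname, GFP_KERNEL);
                  return root->d_fsdata ? 0 : -ENOMEM;
          }

          static void example_d_release(struct dentry *dentry)
          {
                  kfree(dentry->d_fsdata);  /* devname copy, if one was stashed */
          }
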
  12. 16 Jan 2011, 1 commit
  13. 14 Jan 2011, 2 commits
  14. 13 Jan 2011, 1 commit
  15. 11 Jan 2011, 1 commit
    • NFS: Don't use vm_map_ram() in readdir · 6650239a
      Trond Myklebust authored
      vm_map_ram() is not available on NOMMU platforms, and causes trouble
      on incoherent architectures such as ARM when we access the page data
      through both the direct and the virtual mapping.
      
      The alternative is to use the direct mapping to access page data
      for the case when we are not crossing a page boundary, but to copy
      the data into a linear scratch buffer when we are accessing data
      that spans page boundaries.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Cc: stable@kernel.org  [2.6.37]
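
      The page-boundary handling described above can be pictured with this
      hedged sketch (names invented; it assumes the pages are directly
      addressable, e.g. not highmem, and that len never exceeds PAGE_SIZE):

          /* Illustrative only: return a pointer to @len bytes at @offset within
           * an array of pages, using the page directly when the range does not
           * cross a page boundary and a linear scratch buffer when it does. */
          static void *example_read_span(struct page **pages, char *scratch,
                                         size_t offset, size_t len)
          {
                  size_t index = offset >> PAGE_SHIFT;
                  size_t pgoff = offset & ~PAGE_MASK;

                  if (pgoff + len <= PAGE_SIZE)
                          return page_address(pages[index]) + pgoff;

                  /* spans two pages: stitch the pieces together in scratch */
                  memcpy(scratch, page_address(pages[index]) + pgoff,
                         PAGE_SIZE - pgoff);
                  memcpy(scratch + (PAGE_SIZE - pgoff),
                         page_address(pages[index + 1]),
                         len - (PAGE_SIZE - pgoff));
                  return scratch;
          }
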
  16. 07 Jan 2011, 6 commits
  17. 05 Jan 2011, 2 commits