1. 28 Feb 2013 (1 commit)
  2. 26 Feb 2013 (1 commit)
    • vfs: kill FS_REVAL_DOT by adding a d_weak_revalidate dentry op · ecf3d1f1
      Committed by Jeff Layton
      The following sequence of operations on an NFS client and server will cause a spurious ESTALE:
      
          server# mkdir a
          client# cd a
          server# mv a a.bak
          client# sleep 30  # (or whatever the dir attrcache timeout is)
          client# stat .
          stat: cannot stat `.': Stale NFS file handle
      
      Obviously, we should not be getting an ESTALE error back there since the
      inode still exists on the server. The problem is that the lookup code
      will call d_revalidate on the dentry that "." refers to, because NFS has
      FS_REVAL_DOT set.
      
      nfs_lookup_revalidate will see that the parent directory has changed and
      will try to reverify the dentry by redoing a LOOKUP. That of course
      fails, so the lookup code returns ESTALE.
      
      The problem here is that d_revalidate is really a bad fit for this case.
      What we really want to know at this point is whether the inode is still
      good or not, but we don't really care what name it goes by or whether
      the dcache is still valid.
      
      Add a new d_op->d_weak_revalidate operation and have complete_walk call
      that instead of d_revalidate. The intent there is to allow for a
      "weaker" d_revalidate that just checks to see whether the inode is still
      good. This also gives us an opportunity to kill off the FS_REVAL_DOT
      special casing. (A hedged sketch of such a hook follows this entry.)
      
      [AV: changed method name, added note in porting, fixed confusion re
      having it possibly called from RCU mode (it won't be)]
      
      Cc: NeilBrown <neilb@suse.de>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      ecf3d1f1
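      For illustration, a minimal, hedged sketch of what such a hook can look like,
      assuming the prototype this commit introduces
      (int (*d_weak_revalidate)(struct dentry *, unsigned int)).
      myfs_revalidate() and myfs_revalidate_inode() are hypothetical helpers,
      not the real fs/nfs/dir.c code.

      #include <linux/dcache.h>
      #include <linux/fs.h>

      /* Hypothetical filesystem-specific helpers (assumptions, not real APIs). */
      extern int myfs_revalidate(struct dentry *dentry, unsigned int flags);
      extern int myfs_revalidate_inode(struct inode *inode);

      /*
       * Weak revalidate: only check that the inode behind "." is still good,
       * without trying to prove that the cached name is still correct.
       */
      static int myfs_weak_revalidate(struct dentry *dentry, unsigned int flags)
      {
              struct inode *inode = dentry->d_inode;

              if (!inode)
                      return 0;       /* negative dentry: nothing to check */

              /* Ask the server whether the inode itself is still valid. */
              if (myfs_revalidate_inode(inode) < 0)
                      return 0;       /* report the inode as no longer valid */

              return 1;               /* inode is fine; keep going */
      }

      const struct dentry_operations myfs_dentry_operations = {
              .d_revalidate      = myfs_revalidate,      /* full name + inode check */
              .d_weak_revalidate = myfs_weak_revalidate, /* inode-only check for "." */
      };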
  3. 23 Feb 2013 (1 commit)
  4. 18 Feb 2013 (3 commits)
    • umount oops when remove blocklayoutdriver first · 5a12cca6
      Committed by fanchaoting
      With the pNFS client using the block layout, it is possible to remove
      the blocklayoutdriver module while a filesystem is still mounted. A
      later umount then oopses in unset_pnfs_layoutdriver(), because
      nfss->pnfs_curr_ld->clear_layoutdriver is no longer valid once the
      driver module has been unloaded. (A generic module-pinning sketch
      follows this entry.)
      
      reproduce it:
       modprobe  blocklayoutdriver
       mount -t nfs4 -o minorversion=1 pnfsip:/ /mnt/
       rmmod blocklayoutdriver
       umount /mnt
      
      The following oops is then produced:
      
      CPU 0
      Pid: 17023, comm: umount.nfs4 Tainted: GF          O 3.7.0-rc6-pnfs #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
      RIP: 0010:[<ffffffffa04cfe6d>]  [<ffffffffa04cfe6d>] unset_pnfs_layoutdriver+0x1d/0x70 [nfsv4]
      RSP: 0018:ffff8800022d9e48  EFLAGS: 00010286
      RAX: ffffffffa04a1b00 RBX: ffff88000b013800 RCX: 0000000000000001
      RDX: ffffffff81ae8ee0 RSI: ffff880001ee94b8 RDI: ffff88000b013800
      RBP: ffff8800022d9e58 R08: 0000000000000001 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000000 R12: ffff880001ee9400
      R13: ffff8800105978c0 R14: 00007fff25846c08 R15: 0000000001bba550
      FS:  00007f45ae7f0700(0000) GS:ffff880012c00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: ffffffffa04a1b38 CR3: 0000000002c0c000 CR4: 00000000000006f0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process umount.nfs4 (pid: 17023, threadinfo ffff8800022d8000, task ffff880006e48aa0)
      Stack:
      ffff8800105978c0 ffff88000b013800 ffff8800022d9e78 ffffffffa04cd0ce
      ffff8800022d9e78 ffff88000b013800 ffff8800022d9ea8 ffffffffa04755a7
      ffff8800022d9ea8 ffff880002f96400 ffff88000b013800 ffff880002f96400
      Call Trace:
      [<ffffffffa04cd0ce>] nfs4_destroy_server+0x1e/0x30 [nfsv4]
      [<ffffffffa04755a7>] nfs_free_server+0xb7/0x150 [nfs]
      [<ffffffffa047d4d5>] nfs_kill_super+0x35/0x40 [nfs]
      [<ffffffff81178d35>] deactivate_locked_super+0x45/0x70
      [<ffffffff8117986a>] deactivate_super+0x4a/0x70
      [<ffffffff81193ee2>] mntput_no_expire+0xd2/0x130
      [<ffffffff81194d62>] sys_umount+0x72/0xe0
      [<ffffffff8154af59>] system_call_fastpath+0x16/0x1b
      Code: 06 e1 b8 ea ff ff ff eb 9e 0f 1f 44 00 00 55 48 89 e5 53 48 83 ec 08 66 66 66 66 90 48 8b 87 80 03 00 00 48 89 fb 48 85 c0 74 29 <48> 8b 40 38 48 85 c0 74 02 ff d0 48 8b 03 3e ff 48 04 0f 94 c2
      RIP  [<ffffffffa04cfe6d>] unset_pnfs_layoutdriver+0x1d/0x70 [nfsv4]
      RSP <ffff8800022d9e48>
      CR2: ffffffffa04a1b38
      ---[ end trace 29f75aaedda058bf ]---
      
      Signed-off-by: fanchaoting <fanchaoting@cn.fujitsu.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
      5a12cca6
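      As a generic, hedged illustration of the hazard (not the actual fix in
      this commit): when a long-lived structure caches a pointer to ops that
      live in a loadable module, the module has to stay pinned for as long as
      the pointer is cached, e.g. with try_module_get()/module_put(). All
      names below are illustrative.

      #include <linux/errno.h>
      #include <linux/module.h>

      /* Illustrative types only; not the real pnfs_layoutdriver_type. */
      struct layout_ops {
              struct module *owner;
              void (*clear_layoutdriver)(void *server);
      };

      struct my_server {
              const struct layout_ops *curr_ld;
      };

      /* Pin the module before caching a pointer to its ops. */
      static int my_set_layoutdriver(struct my_server *s, const struct layout_ops *ld)
      {
              if (!try_module_get(ld->owner))
                      return -EAGAIN;         /* driver is being unloaded */
              s->curr_ld = ld;
              return 0;
      }

      /* Drop the pin only after the last use of the cached ops. */
      static void my_unset_layoutdriver(struct my_server *s)
      {
              const struct layout_ops *ld = s->curr_ld;

              if (!ld)
                      return;
              if (ld->clear_layoutdriver)
                      ld->clear_layoutdriver(s);
              s->curr_ld = NULL;
              module_put(ld->owner);
      }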
    • nfs: remove kfree() redundant null checks · 96aa1549
      Committed by Tim Gardner
      smatch analysis (a before/after sketch of the flagged pattern follows this entry):
      
      fs/nfs/getroot.c:130 nfs_get_root() info: redundant null
       check on name calling kfree()
      
      fs/nfs/unlink.c:272 nfs_async_unlink() info: redundant null
       check on devname_garbage calling kfree()
      
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: linux-nfs@vger.kernel.org
      Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      96aa1549
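      The point smatch is making is that kfree(NULL) is defined to be a
      no-op, so a NULL check in front of it is dead weight. A minimal
      before/after sketch of the pattern (illustrative code, not the actual
      getroot.c/unlink.c hunks):

      #include <linux/slab.h>

      /* Before: the NULL check merely duplicates what kfree() already does. */
      static void free_name_before(char *name)
      {
              if (name != NULL)
                      kfree(name);
      }

      /* After: call kfree() unconditionally; it ignores NULL pointers. */
      static void free_name_after(char *name)
      {
              kfree(name);
      }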
    • NFSv4.1: Don't decode skipped layoutgets · 085b7a45
      Committed by Weston Andros Adamson
      layoutget's prepare hook can call rpc_exit with status = NFS4_OK (0).
      Because of this, nfs4_proc_layoutget can't depend on a 0 status to mean
      that the RPC was successfully sent, received and parsed.
      
      To fix this, use the result's len member to see if parsing took place.
      
      This fixes the following OOPS: calling xdr_init_decode() with a buffer
      length of 0 doesn't set the stream's 'p' member, so
      filelayout_decode_layout ends up reading uninitialized memory.
      (A hedged guard sketch follows this entry.)
      
      BUG: unable to handle kernel paging request at 0000000000008050
      IP: [<ffffffff81282e78>] memcpy+0x18/0x120
      PGD 0
      Oops: 0000 [#1] SMP
      last sysfs file: /sys/devices/pci0000:00/0000:00:11.0/0000:02:01.0/irq
      CPU 1
      Modules linked in: nfs_layout_nfsv41_files nfs lockd fscache auth_rpcgss nfs_acl autofs4 sunrpc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 dm_mirror dm_region_hash dm_log dm_mod ppdev parport_pc parport snd_ens1371 snd_rawmidi snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc e1000 microcode vmware_balloon i2c_piix4 i2c_core sg shpchp ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix mptspi mptscsih mptbase scsi_transport_spi [last unloaded: speedstep_lib]
      
      Pid: 1665, comm: flush-0:22 Not tainted 2.6.32-356-test-2 #2 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
      RIP: 0010:[<ffffffff81282e78>]  [<ffffffff81282e78>] memcpy+0x18/0x120
      RSP: 0018:ffff88003dfab588  EFLAGS: 00010206
      RAX: ffff88003dc42000 RBX: ffff88003dfab610 RCX: 0000000000000009
      RDX: 000000003f807ff0 RSI: 0000000000008050 RDI: ffff88003dc42000
      RBP: ffff88003dfab5b0 R08: 0000000000000000 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000080 R12: 0000000000000024
      R13: ffff88003dc42000 R14: ffff88003f808030 R15: ffff88003dfab6a0
      FS:  0000000000000000(0000) GS:ffff880003420000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
      CR2: 0000000000008050 CR3: 000000003bc92000 CR4: 00000000001407e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process flush-0:22 (pid: 1665, threadinfo ffff88003dfaa000, task ffff880037f77540)
      Stack:
      ffffffffa0398ac1 ffff8800397c5940 ffff88003dfab610 ffff88003dfab6a0
      <d> ffff88003dfab5d0 ffff88003dfab680 ffffffffa01c150b ffffea0000d82e70
      <d> 000000508116713b 0000000000000000 0000000000000000 0000000000000000
      Call Trace:
      [<ffffffffa0398ac1>] ? xdr_inline_decode+0xb1/0x120 [sunrpc]
      [<ffffffffa01c150b>] filelayout_decode_layout+0xeb/0x350 [nfs_layout_nfsv41_files]
      [<ffffffffa01c17fc>] filelayout_alloc_lseg+0x8c/0x3c0 [nfs_layout_nfsv41_files]
      [<ffffffff8150e6ce>] ? __wait_on_bit+0x7e/0x90
      Signed-off-by: Weston Andros Adamson <dros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
      085b7a45
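      A hedged sketch of the guard described above, assuming the
      three-argument xdr_init_decode() of this era; the function and the
      res_len/layout_buf names are illustrative, not the real
      nfs4_proc_layoutget()/nfs4_layoutget_res fields:

      #include <linux/errno.h>
      #include <linux/sunrpc/xdr.h>

      /* Only start XDR decoding if the reply actually carried layout data. */
      static int decode_layout_reply(struct xdr_buf *layout_buf, u32 res_len)
      {
              struct xdr_stream xdr;

              if (res_len == 0)
                      return -EAGAIN; /* the RPC exited early; nothing was parsed */

              /* Safe: the stream is initialised against a non-empty buffer. */
              xdr_init_decode(&xdr, layout_buf, layout_buf->head[0].iov_base);
              /* ... decode the file layout from &xdr here ... */
              return 0;
      }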
  5. 15 Feb 2013 (1 commit)
    • NFSv4.1: Fix bulk recall and destroy of layouts · fd9a8d71
      Committed by Trond Myklebust
      The current code in pnfs_destroy_all_layouts() assumes that removing
      the layout from the server->layouts list is sufficient to make it
      invisible to other processes. This ignores the fact that most
      users access the layout through the nfs_inode->layout...
      There is further breakage due to lack of reference counting of the
      layouts, meaning that the whole thing Oopses at the drop of a hat.
      
      The code in initiate_bulk_draining() is almost correct, and can be
      used as a model for pnfs_destroy_all_layouts(), so move that
      code to pnfs.c, and refactor the code to allow us to choose between
      a single filesystem bulk recall, and a recall of all layouts.
      Also note that initiate_bulk_draining() currently calls iput() while
      holding locks. Fix that too. (An illustrative sketch of the resulting
      interface shape follows this entry.)
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
      fd9a8d71
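      An illustrative sketch of the interface shape described above (one
      drain helper that can cover either a single filesystem or every layout
      on the client); the names and the comment-level steps are assumptions,
      not the actual fs/nfs/pnfs.c code:

      struct nfs_client;
      struct nfs_server;

      enum layout_drain_scope {
              DRAIN_ONE_FS,   /* bulk recall against a single nfs_server */
              DRAIN_ALL_FS,   /* destroy every layout held by the client */
      };

      static int drain_layouts(struct nfs_client *clp,
                               struct nfs_server *only_server,
                               enum layout_drain_scope scope)
      {
              /*
               * 1. Under the client lock, take a reference on each layout
               *    that matches the scope and move it to a private list.
               * 2. Drop the lock, then return/invalidate the collected
               *    layouts, calling iput() only after the locks are gone.
               */
              return 0;
      }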
  6. 13 Feb 2013 (6 commits)
  7. 12 Feb 2013 (7 commits)
  8. 01 Feb 2013 (1 commit)
  9. 31 Jan 2013 (2 commits)
    • NFSv4.1: Handle NFS4ERR_DELAY when resetting the NFSv4.1 session · c489ee29
      Committed by Trond Myklebust
      NFS4ERR_DELAY is a legal reply when we call DESTROY_SESSION. It
      usually means that the server is busy handling an unfinished RPC
      request. Just sleep for a second and then retry.
      We also need to be able to handle the NFS4ERR_BACK_CHAN_BUSY return
      value: if the NFS server has outstanding callbacks, we similarly just
      want to sleep and retry. (A sketch of the retry loop follows this entry.)
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
      c489ee29
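      A hedged sketch of the retry loop described above. NFS4ERR_DELAY,
      NFS4ERR_BACK_CHAN_BUSY and ssleep() are real kernel symbols;
      my_destroy_session() is a hypothetical stand-in for the actual
      DESTROY_SESSION call path:

      #include <linux/delay.h>
      #include <linux/nfs4.h>

      struct nfs4_session;
      /* Hypothetical wrapper for the real DESTROY_SESSION call (assumption). */
      extern int my_destroy_session(struct nfs4_session *session);

      static int my_destroy_session_retry(struct nfs4_session *session)
      {
              int status;

              for (;;) {
                      status = my_destroy_session(session);
                      if (status != -NFS4ERR_DELAY &&
                          status != -NFS4ERR_BACK_CHAN_BUSY)
                              break;
                      /* Server busy or callbacks outstanding: wait and retry. */
                      ssleep(1);
              }
              return status;
      }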
    • NFS: Don't silently fail setattr() requests on mountpoints · ab225417
      Committed by Trond Myklebust
      Ensure that any setattr and getattr requests for junctions and/or
      mountpoints are sent to the server. Ever since commit
      0ec26fd0 (vfs: automount should ignore LOOKUP_FOLLOW), we have
      silently dropped any setattr requests to a server-side mountpoint.
      For referrals, we have silently dropped both getattr and setattr
      requests.
      
      This patch restores the original behaviour for setattr on mountpoints,
      and tries to do the same for referrals, provided that we have a
      filehandle...
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
      ab225417
  10. 28 Jan 2013 (4 commits)
  11. 06 Jan 2013 (1 commit)
  12. 05 Jan 2013 (1 commit)
  13. 04 Jan 2013 (3 commits)
  14. 22 Dec 2012 (2 commits)
  15. 21 Dec 2012 (3 commits)
    • NFS4: Open files for fscaching · a4ff1468
      Committed by David Howells
      nfs4_file_open() should open files for fscaching. (A hedged sketch follows this entry.)
      Signed-off-by: David Howells <dhowells@redhat.com>
      a4ff1468
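      A hedged sketch of the change, assuming the nfs_fscache_set_inode_cookie()
      helper that the NFSv2/v3 open path of this era already uses; the rest of
      nfs4_file_open() is elided and the function name below is illustrative:

      #include <linux/fs.h>

      /* Assumed prototype of the NFS-internal helper from fs/nfs/fscache.h. */
      extern void nfs_fscache_set_inode_cookie(struct inode *inode, struct file *filp);

      static int nfs4_file_open_sketch(struct inode *inode, struct file *filp)
      {
              /* ... the NFSv4 open/claim processing succeeded at this point ... */

              /* Bind the FS-Cache cookie so reads through this file are cached. */
              nfs_fscache_set_inode_cookie(inode, filp);
              return 0;
      }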
    • NFS: nfs_migrate_page() does not wait for FS-Cache to finish with a page · 8c209ce7
      Committed by David Howells
      nfs_migrate_page() does not wait for FS-Cache to finish with a page, probably
      leading to the following bad-page-state:
      
       BUG: Bad page state in process python-bin  pfn:17d39b
       page:ffffea00053649e8 flags:004000000000100c count:0 mapcount:0 mapping:(null)
      index:38686 (Tainted: G    B      ---------------- )
       Pid: 31053, comm: python-bin Tainted: G    B      ----------------
      2.6.32-71.24.1.el6.x86_64 #1
       Call Trace:
       [<ffffffff8111bfe7>] bad_page+0x107/0x160
       [<ffffffff8111ee69>] free_hot_cold_page+0x1c9/0x220
       [<ffffffff8111ef19>] __pagevec_free+0x59/0xb0
       [<ffffffff8104b988>] ? flush_tlb_others_ipi+0x128/0x130
       [<ffffffff8112230c>] release_pages+0x21c/0x250
       [<ffffffff8115b92a>] ? remove_migration_pte+0x28a/0x2b0
       [<ffffffff8115f3f8>] ? mem_cgroup_get_reclaim_stat_from_page+0x18/0x70
       [<ffffffff81122687>] ____pagevec_lru_add+0x167/0x180
       [<ffffffff811226f8>] __lru_cache_add+0x58/0x70
       [<ffffffff81122731>] lru_cache_add_lru+0x21/0x40
       [<ffffffff81123f49>] putback_lru_page+0x69/0x100
       [<ffffffff8115c0bd>] migrate_pages+0x13d/0x5d0
       [<ffffffff81122687>] ? ____pagevec_lru_add+0x167/0x180
       [<ffffffff81152ab0>] ? compaction_alloc+0x0/0x370
       [<ffffffff8115255c>] compact_zone+0x4cc/0x600
       [<ffffffff8111cfac>] ? get_page_from_freelist+0x15c/0x820
       [<ffffffff810672f4>] ? check_preempt_wakeup+0x1c4/0x3c0
       [<ffffffff8115290e>] compact_zone_order+0x7e/0xb0
       [<ffffffff81152a49>] try_to_compact_pages+0x109/0x170
       [<ffffffff8111e94d>] __alloc_pages_nodemask+0x5ed/0x850
       [<ffffffff814c9136>] ? thread_return+0x4e/0x778
       [<ffffffff81150d43>] alloc_pages_vma+0x93/0x150
       [<ffffffff81167ea5>] do_huge_pmd_anonymous_page+0x135/0x340
       [<ffffffff814cb6f6>] ? rwsem_down_read_failed+0x26/0x30
       [<ffffffff81136755>] handle_mm_fault+0x245/0x2b0
       [<ffffffff814ce383>] do_page_fault+0x123/0x3a0
       [<ffffffff814cbdf5>] page_fault+0x25/0x30
      
      nfs_migrate_page() calls nfs_fscache_release_page() which doesn't actually wait
      - even if __GFP_WAIT is set.  The reason it doesn't wait is that
      fscache_maybe_release_page() might deadlock the allocator, as the work
      threads writing to the cache may all end up sleeping on memory allocation.
      
      However, I wonder if that is actually a problem.  There are a number of things
      I can do to deal with this:
      
       (1) Make nfs_migrate_page() wait.
      
       (2) Make fscache_maybe_release_page() honour the __GFP_WAIT flag.
      
       (3) Set a timeout around the wait.
      
       (4) Make nfs_migrate_page() return an error if the page is still busy.
      
      For the moment, I'll select (2) and (4). (A sketch of option (4) follows this entry.)
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Jeff Layton <jlayton@redhat.com>
      8c209ce7
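      A hedged sketch of option (4): refuse to migrate a page that FS-Cache is
      still using. PageFsCache() and nfs_fscache_release_page() are this era's
      FS-Cache/NFS helpers (the latter declared in fs/nfs/fscache.h); the
      surrounding function body is illustrative, not the final patch:

      #include <linux/errno.h>
      #include <linux/fs.h>
      #include <linux/fscache.h>
      #include <linux/gfp.h>

      /* Assumed prototype of the NFS-internal helper from fs/nfs/fscache.h. */
      extern int nfs_fscache_release_page(struct page *page, gfp_t gfp);

      static int nfs_migrate_page_sketch(struct address_space *mapping,
                                         struct page *newpage, struct page *page)
      {
              /* Option (4): report the page as busy rather than losing it. */
              if (PageFsCache(page) &&
                  !nfs_fscache_release_page(page, GFP_KERNEL))
                      return -EBUSY;

              /* ... the normal NFS page-migration path would continue here ... */
              return 0;
      }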
    • NFS: Use FS-Cache invalidation · de242c0b
      Committed by David Howells
      Use the new FS-Cache invalidation facility from NFS to deal with foreign
      changes being detected on the server rather than attempting to retire the old
      cookie and get a new one.
      
      The problem with the old method was that NFS did not wait for all outstanding
      storage and retrieval ops on the cache to complete.  There was no automatic
      wait between the calls to ->readpages() and calls to invalidate_inode_pages2()
      as the latter can only wait on locked pages that have been added to the
      pagecache (which they haven't yet on entry to ->readpages()).
      
      This was leading to oopses like the one below when an outstanding read
      got cut off from its cookie by a premature release. (A sketch of the
      invalidation call follows this entry.)
      
      BUG: unable to handle kernel NULL pointer dereference at 00000000000000a8
      IP: [<ffffffffa0075118>] __fscache_read_or_alloc_pages+0x1dd/0x315 [fscache]
      PGD 15889067 PUD 15890067 PMD 0
      Oops: 0000 [#1] SMP
      CPU 0
      Modules linked in: cachefiles nfs fscache auth_rpcgss nfs_acl lockd sunrpc
      
      Pid: 4544, comm: tar Not tainted 3.1.0-rc4-fsdevel+ #1064                  /DG965RY
      RIP: 0010:[<ffffffffa0075118>]  [<ffffffffa0075118>] __fscache_read_or_alloc_pages+0x1dd/0x315 [fscache]
      RSP: 0018:ffff8800158799e8  EFLAGS: 00010246
      RAX: 0000000000000000 RBX: ffff8800070d41e0 RCX: ffff8800083dc1b0
      RDX: 0000000000000000 RSI: ffff880015879960 RDI: ffff88003e627b90
      RBP: ffff880015879a28 R08: 0000000000000002 R09: 0000000000000002
      R10: 0000000000000001 R11: ffff880015879950 R12: ffff880015879aa4
      R13: 0000000000000000 R14: ffff8800083dc158 R15: ffff880015879be8
      FS:  00007f671e9d87c0(0000) GS:ffff88003bc00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 00000000000000a8 CR3: 000000001587f000 CR4: 00000000000006f0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process tar (pid: 4544, threadinfo ffff880015878000, task ffff880015875040)
      Stack:
       ffffffffa00b1759 ffff8800070dc158 ffff8800000213da ffff88002a286508
       ffff880015879aa4 ffff880015879be8 0000000000000001 ffff88002a2866e8
       ffff880015879a88 ffffffffa00b20be 00000000000200da ffff880015875040
      Call Trace:
       [<ffffffffa00b1759>] ? nfs_fscache_wait_bit+0xd/0xd [nfs]
       [<ffffffffa00b20be>] __nfs_readpages_from_fscache+0x7e/0x13f [nfs]
       [<ffffffff81095fe7>] ? __alloc_pages_nodemask+0x156/0x662
       [<ffffffffa0098763>] nfs_readpages+0xee/0x187 [nfs]
       [<ffffffff81098a5e>] __do_page_cache_readahead+0x1be/0x267
       [<ffffffff81098942>] ? __do_page_cache_readahead+0xa2/0x267
       [<ffffffff81098d7b>] ra_submit+0x1c/0x20
       [<ffffffff8109900a>] ondemand_readahead+0x28b/0x29a
       [<ffffffff810990ce>] page_cache_sync_readahead+0x38/0x3a
       [<ffffffff81091d8a>] generic_file_aio_read+0x2ab/0x67e
       [<ffffffffa008cfbe>] nfs_file_read+0xa4/0xc9 [nfs]
       [<ffffffff810c22c4>] do_sync_read+0xba/0xfa
       [<ffffffff810a62c9>] ? might_fault+0x4e/0x9e
       [<ffffffff81177a47>] ? security_file_permission+0x7b/0x84
       [<ffffffff810c25dd>] ? rw_verify_area+0xab/0xc8
       [<ffffffff810c29a4>] vfs_read+0xaa/0x13a
       [<ffffffff810c2a79>] sys_read+0x45/0x6c
       [<ffffffff813ac37b>] system_call_fastpath+0x16/0x1b
      Reported-by: Mark Moseley <moseleymark@gmail.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      de242c0b
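      A hedged sketch of the new approach, assuming the
      fscache_invalidate()/fscache_wait_on_invalidate() pair added by this
      FS-Cache series; the wrapper name is illustrative (NFS keeps the cookie
      in nfs_inode->fscache):

      #include <linux/fscache.h>

      static void my_invalidate_cached_data(struct fscache_cookie *cookie)
      {
              /* Mark the cached data invalid instead of retiring the cookie. */
              fscache_invalidate(cookie);

              /* Optionally wait until outstanding cache I/O has been dealt with. */
              fscache_wait_on_invalidate(cookie);
      }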
  16. 18 Dec 2012 (1 commit)
  17. 16 Dec 2012 (2 commits)
    • NFS: Don't use SetPageError in the NFS writeback code · ada8e20d
      Committed by Trond Myklebust
      The writeback code is already capable of passing errors back to user
      space by means of open_context->error. In the case of ENOSPC, Neil
      Brown reports seeing two errors returned.
      
      Neil writes:
      
      "e.g. if /mnt2/ if an nfs mounted filesystem that has no space then
      
      strace dd if=/dev/zero conv=fsync >> /mnt2/afile count=1
      
      reported Input/output error and the relevant parts of the strace output are:
      
      write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512) = 512
      fsync(1)                                = -1 EIO (Input/output error)
      close(1)                                = -1 ENOSPC (No space left on device)"
      
      Neil then shows that the duplication of error messages appears to be due to
      the use of the PageError() mechanism, which causes filemap_fdatawait_range
      to return the extra EIO. The regression was introduced by
      commit 7b281ee0 (NFS: fsync() must exit
      with an error if page writeback failed).
      
      Fix this by removing the call to SetPageError(), and just relying on
      open_context->error to report the ENOSPC back to fsync(). (A sketch of
      the changed error path follows this entry.)
      Reported-by: Neil Brown <neilb@suse.de>
      Tested-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org [3.6+]
      ada8e20d
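      A hedged sketch of the error path after the change; the helper name
      stands in for the NFS-internal routine that records the error in the
      open context, and the old behaviour is shown only as a comment:

      struct nfs_open_context;

      /* Assumed prototype of the NFS-internal helper in fs/nfs/write.c. */
      extern void nfs_context_set_write_error(struct nfs_open_context *ctx, int error);

      static void nfs_write_error_sketch(struct nfs_open_context *ctx, int error)
      {
              /*
               * Before the fix the page was also flagged with SetPageError(),
               * which made filemap_fdatawait_range() report a second, spurious
               * EIO.  After the fix only the open context carries the error
               * back to fsync().
               */
              nfs_context_set_write_error(ctx, error);
      }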
    • NFSv4.1: Deal effectively with interrupted RPC calls. · ac20d163
      Committed by Trond Myklebust
      If an RPC call is interrupted, assume that the server hasn't processed
      it, so that the next time we use the slot we know that an
      NFS4ERR_SEQ_MISORDERED or NFS4ERR_SEQ_FALSE_RETRY reply just means we
      have to bump the sequence number. (A sketch of the slot bookkeeping
      follows this entry.)
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      ac20d163
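      A hedged sketch of the slot bookkeeping described above. The real state
      lives in struct nfs4_slot (fs/nfs/nfs4session.h); the structure and
      helper below are illustrative:

      #include <linux/nfs4.h>
      #include <linux/types.h>

      struct slot_sketch {
              u32             seq_nr;
              unsigned int    interrupted : 1; /* previous call never completed */
      };

      static void handle_seq_error_sketch(struct slot_sketch *slot, int nfs_err)
      {
              if (slot->interrupted &&
                  (nfs_err == -NFS4ERR_SEQ_MISORDERED ||
                   nfs_err == -NFS4ERR_SEQ_FALSE_RETRY)) {
                      /* The server did process the interrupted call: catch up. */
                      slot->interrupted = 0;
                      slot->seq_nr++; /* bump the sequence number and retry */
              }
      }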