1. 24 May 2017, 1 commit
    • NFSv4.0: Fix a lock leak in nfs40_walk_client_list · b49c15f9
      Committed by Trond Myklebust
      Xiaolong Ye's kernel test robot detected the following Oops:
      [  299.158991] BUG: scheduling while atomic: mount.nfs/9387/0x00000002
      [  299.169587] 2 locks held by mount.nfs/9387:
      [  299.176165]  #0:  (nfs_clid_init_mutex){......}, at: [<ffffffff8130cc92>] nfs4_discover_server_trunking+0x47/0x1fc
      [  299.201802]  #1:  (&(&nn->nfs_client_lock)->rlock){......}, at: [<ffffffff813125fa>] nfs40_walk_client_list+0x2e9/0x338
      [  299.221979] CPU: 0 PID: 9387 Comm: mount.nfs Not tainted 4.11.0-rc7-00021-g14d1bbb0 #45
      [  299.235584] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.3-20161025_171302-gandalf 04/01/2014
      [  299.251176] Call Trace:
      [  299.255192]  dump_stack+0x61/0x7e
      [  299.260416]  __schedule_bug+0x65/0x74
      [  299.266208]  __schedule+0x5d/0x87c
      [  299.271883]  schedule+0x89/0x9a
      [  299.276937]  schedule_timeout+0x232/0x289
      [  299.283223]  ? detach_if_pending+0x10b/0x10b
      [  299.289935]  schedule_timeout_uninterruptible+0x2a/0x2c
      [  299.298266]  ? put_rpccred+0x3e/0x115
      [  299.304327]  ? schedule_timeout_uninterruptible+0x2a/0x2c
      [  299.312851]  msleep+0x1e/0x22
      [  299.317612]  nfs4_discover_server_trunking+0x102/0x1fc
      [  299.325644]  nfs4_init_client+0x13f/0x194
      
      It looks as if we recently added a spin_lock() leak to
      nfs40_walk_client_list() when cleaning up the code.
      Reported-by: kernel test robot <xiaolong.ye@intel.com>
      Fixes: 14d1bbb0 ("NFS: Create a common nfs4_match_client() function")
      Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
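      As a rough illustration, here is a hedged C sketch of the leak pattern
      (simplified; clients_match() is a hypothetical stand-in for the real
      matching logic, and this is not the literal kernel diff). The bug is an
      exit path that keeps nn->nfs_client_lock held, so the caller later
      sleeps in atomic context; the fix is to drop the spin lock on every
      exit path before anything that can sleep:

          static int walk_client_list_sketch(struct nfs_net *nn,
                                             struct nfs_client **result)
          {
                  struct nfs_client *clp;

                  spin_lock(&nn->nfs_client_lock);
                  list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
                          if (!clients_match(clp, *result))
                                  continue;
                          atomic_inc(&clp->cl_count);
                          /* This unlock was effectively missing: returning
                           * here with the lock held lets the caller msleep()
                           * while atomic. */
                          spin_unlock(&nn->nfs_client_lock);
                          *result = clp;
                          return 0;
                  }
                  spin_unlock(&nn->nfs_client_lock);
                  return -NFS4ERR_STALE_CLIENTID;
          }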
  2. 21 Apr 2017, 8 commits
  3. 18 Mar 2017, 1 commit
  4. 02 Dec 2016, 2 commits
  5. 23 Sep 2016, 1 commit
    • nfs: allow blocking locks to be awoken by lock callbacks · a1d617d8
      Committed by Jeff Layton
      Add a waitqueue head to the client structure. Have clients set a wait
      on that queue prior to requesting a lock from the server. If the lock
      is blocked, then we can use that to wait for wakeups.
      
      Note that we need to do this "manually", since the wait must be set
      on the waitqueue before the lock is requested, yet requesting a lock
      can itself involve activities that block, so the usual wait_event()
      helpers do not fit here.
      
      However, only do that for NFSv4.1 locks, either by compiling out
      all of the waitqueue handling when CONFIG_NFS_V4_1 is disabled, or
      skipping all of it at runtime if we're dealing with v4.0, or v4.1
      servers that don't send lock callbacks.
      
      Note too that even when we expect to get a lock callback, RFC5661
      section 20.11.4 is pretty clear that we still need to poll for them,
      so we do still sleep on a timeout. We do however always poll at the
      longest interval in that case.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      [Anna: nfs4_retry_setlk() "status" should default to -ERESTARTSYS]
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
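      A hedged sketch of the "manual" wait pattern described above, assuming
      the wait_queue_head_t is the cl_lock_waitq field this patch adds to
      struct nfs_client; issue_lock_rpc() and NFS4_LOCK_MAXTIMEOUT are
      illustrative placeholders, not the exact kernel names:

          static int retry_setlk_sketch(struct nfs_client *clp,
                                        struct nfs4_state *state,
                                        struct file_lock *request)
          {
                  DEFINE_WAIT_FUNC(wait, woken_wake_function);
                  int status = -ERESTARTSYS;  /* default per Anna's note */

                  /* Register the waiter before issuing the RPC, so a lock
                   * callback that races with the reply still wakes us. */
                  add_wait_queue(&clp->cl_lock_waitq, &wait);
                  while (!signal_pending(current)) {
                          status = issue_lock_rpc(state, request);
                          if (status != -NFS4ERR_DENIED && status != -EAGAIN)
                                  break;
                          /* RFC 5661, section 20.11.4: callbacks can be
                           * lost, so still poll, but at the longest
                           * interval. */
                          wait_woken(&wait, TASK_INTERRUPTIBLE,
                                     NFS4_LOCK_MAXTIMEOUT);
                  }
                  remove_wait_queue(&clp->cl_lock_waitq, &wait);
                  return status;
          }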
  6. 20 Sep 2016, 4 commits
  7. 30 Aug 2016, 1 commit
  8. 20 Jul 2016, 1 commit
  9. 01 Jul 2016, 1 commit
    • NFS: Fix an Oops in the pNFS files and flexfiles connection setup to the DS · 5c6e5b60
      Committed by Trond Myklebust
      Chris Worley reports:
       RIP: 0010:[<ffffffffa0245f80>]  [<ffffffffa0245f80>] rpc_new_client+0x2a0/0x2e0 [sunrpc]
       RSP: 0018:ffff880158f6f548  EFLAGS: 00010246
       RAX: 0000000000000000 RBX: ffff880234f8bc00 RCX: 000000000000ea60
       RDX: 0000000000074cc0 RSI: 000000000000ea60 RDI: ffff880234f8bcf0
       RBP: ffff880158f6f588 R08: 000000000001ac80 R09: ffff880237003300
       R10: ffff880201171000 R11: ffffea0000d75200 R12: ffffffffa03afc60
       R13: ffff880230c18800 R14: 0000000000000000 R15: ffff880158f6f680
       FS:  00007f0e32673740(0000) GS:ffff88023fc40000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
       CR2: 0000000000000008 CR3: 0000000234886000 CR4: 00000000001406e0
       Stack:
        ffffffffa047a680 0000000000000000 ffff880158f6f598 ffff880158f6f680
        ffff880158f6f680 ffff880234d11d00 ffff88023357f800 ffff880158f6f7d0
        ffff880158f6f5b8 ffffffffa024660a ffff880158f6f5b8 ffffffffa02492ec
       Call Trace:
        [<ffffffffa024660a>] rpc_create_xprt+0x1a/0xb0 [sunrpc]
        [<ffffffffa02492ec>] ? xprt_create_transport+0x13c/0x240 [sunrpc]
        [<ffffffffa0246766>] rpc_create+0xc6/0x1a0 [sunrpc]
        [<ffffffffa038e695>] nfs_create_rpc_client+0xf5/0x140 [nfs]
        [<ffffffffa038f31a>] nfs_init_client+0x3a/0xd0 [nfs]
        [<ffffffffa038f22f>] nfs_get_client+0x25f/0x310 [nfs]
        [<ffffffffa025cef8>] ? rpc_ntop+0xe8/0x100 [sunrpc]
        [<ffffffffa047512c>] nfs3_set_ds_client+0xcc/0x100 [nfsv3]
        [<ffffffffa041fa10>] nfs4_pnfs_ds_connect+0x120/0x400 [nfsv4]
        [<ffffffffa03d41c7>] nfs4_ff_layout_prepare_ds+0xe7/0x330 [nfs_layout_flexfiles]
        [<ffffffffa03d1b1b>] ff_layout_pg_init_write+0xcb/0x280 [nfs_layout_flexfiles]
        [<ffffffffa03a14dc>] __nfs_pageio_add_request+0x12c/0x490 [nfs]
        [<ffffffffa03a1fa2>] nfs_pageio_add_request+0xc2/0x2a0 [nfs]
        [<ffffffffa03a0365>] ? nfs_pageio_init+0x75/0x120 [nfs]
        [<ffffffffa03a5b50>] nfs_do_writepage+0x120/0x270 [nfs]
        [<ffffffffa03a5d31>] nfs_writepage_locked+0x61/0xc0 [nfs]
        [<ffffffff813d4115>] ? __percpu_counter_add+0x55/0x70
        [<ffffffffa03a6a9f>] nfs_wb_single_page+0xef/0x1c0 [nfs]
        [<ffffffff811ca4a3>] ? __dec_zone_page_state+0x33/0x40
        [<ffffffffa0395b21>] nfs_launder_page+0x41/0x90 [nfs]
        [<ffffffff811baba0>] invalidate_inode_pages2_range+0x340/0x3a0
        [<ffffffff811bac17>] invalidate_inode_pages2+0x17/0x20
        [<ffffffffa039960e>] nfs_release+0x9e/0xb0 [nfs]
        [<ffffffffa0399570>] ? nfs_open+0x60/0x60 [nfs]
        [<ffffffffa0394dad>] nfs_file_release+0x3d/0x60 [nfs]
        [<ffffffff81226e6c>] __fput+0xdc/0x1e0
        [<ffffffff81226fbe>] ____fput+0xe/0x10
        [<ffffffff810bf2e4>] task_work_run+0xc4/0xe0
        [<ffffffff810a4188>] do_exit+0x2e8/0xb30
        [<ffffffff8102471c>] ? do_audit_syscall_entry+0x6c/0x70
        [<ffffffff811464e6>] ? __audit_syscall_exit+0x1e6/0x280
        [<ffffffff810a4a5f>] do_group_exit+0x3f/0xa0
        [<ffffffff810a4ad4>] SyS_exit_group+0x14/0x20
        [<ffffffff8179b76e>] system_call_fastpath+0x12/0x71
      
      The Oops appears to be due to a call to utsname() in a task-exit
      context, made in order to determine the hostname to set in
      rpc_new_client().

      In reality, what we want here is not the hostname of the current task,
      but the hostname that was used to set up the metadata server.
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
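      The idea of the fix, as a hedged sketch (create_ds_client_sketch() is
      hypothetical; the rpc_create_args nodename field and cl_nodename are
      assumed here rather than quoted from the actual diff):

          /* Avoid utsname() on a path that may run in a task-exit context,
           * where the current task's namespaces may already be gone;
           * instead reuse the nodename recorded when the MDS rpc client
           * was created at mount time. */
          static struct rpc_clnt *create_ds_client_sketch(struct nfs_client *mds_clp,
                                                          struct rpc_create_args *args)
          {
                  args->nodename = mds_clp->cl_rpcclient->cl_nodename;
                  return rpc_create(args);
          }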
  10. 24 Nov 2015, 1 commit
  11. 18 Aug 2015, 1 commit
  12. 01 Jul 2015, 1 commit
  13. 24 Apr 2015, 1 commit
  14. 16 Apr 2015, 1 commit
  15. 04 Mar 2015, 1 commit
  16. 04 Feb 2015, 2 commits
  17. 22 Jan 2015, 1 commit
  18. 06 Jan 2015, 4 commits
  19. 26 Nov 2014, 1 commit
  20. 25 Nov 2014, 1 commit
  21. 19 Sep 2014, 1 commit
    • NFSv4: nfs4_state_manager() vs. nfs_server_remove_lists() · 080af20c
      Committed by Steve Dickson
      There is a race between nfs4_state_manager() and
      nfs_server_remove_lists() that happens during an NFSv3 mount.

      The v3 mount notices there is already a super block, so
      nfs_server_remove_lists() is called, which uses the nfs_client_lock
      spin lock to synchronize access to the client list.

      At the same time, nfs4_state_manager() is running through the client
      list looking for work to do, using the same lock. When
      nfs4_state_manager() wins the race to the list, a v3 client pointer
      is found and is not ignored properly, which causes the panic.

      Moving the protocol checks before the state checking avoids the
      panic, as in the sketch at the end of this entry.
      
      CC: Stable Tree <stable@vger.kernel.org>
      Signed-off-by: Steve Dickson <steved@redhat.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
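      A condensed, hedged sketch of the reordering (the loop body is
      simplified; nfs_v4_clientops and NFS_CS_READY are existing kernel
      symbols): skip anything that is not a fully initialised NFSv4 client
      before touching any v4-only state, so an NFSv3 nfs_client on the
      shared per-net list is never dereferenced as a v4 one:

          spin_lock(&nn->nfs_client_lock);
          list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
                  /* Protocol checks first: a v3 client on this list has
                   * no NFSv4 state to inspect. */
                  if (clp->rpc_ops != &nfs_v4_clientops)
                          continue;
                  if (clp->cl_cons_state != NFS_CS_READY)
                          continue;
                  /* ... only now examine the v4 state flags ... */
          }
          spin_unlock(&nn->nfs_client_lock);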
  22. 09 Jul 2014, 1 commit
  23. 19 Mar 2014, 1 commit
  24. 18 Feb 2014, 1 commit
  25. 02 Feb 2014, 1 commit