1. 08 Aug 2007, 8 commits
    • NFS: Replace flush_scheduled_work with cancel_work_sync() and friends · 3d39c691
      Trond Myklebust committed
      This will avoid deadlocks of the form:
      
      stack backtrace:
       [<c0104fda>] show_trace_log_lvl+0x1a/0x30
       [<c0105c02>] show_trace+0x12/0x20
       [<c0105d15>] dump_stack+0x15/0x20
       [<c013ee42>] __lock_acquire+0xc22/0x1030
       [<c013f2b1>] lock_acquire+0x61/0x80
       [<c012edd9>] flush_workqueue+0x49/0x70
       [<c012ee0d>] flush_scheduled_work+0xd/0x10
       [<dcf55c0c>] nfs_release_automount_timer+0x2c/0x30 [nfs]
       [<dcf45d8e>] nfs_free_server+0x9e/0xd0 [nfs]
       [<dcf4e626>] nfs_kill_super+0x16/0x20 [nfs]
       [<c017b38d>] deactivate_super+0x7d/0xa0
       [<c018f94b>] mntput_no_expire+0x4b/0x80
       [<c018fd94>] expire_mount_list+0xe4/0x140
       [<c0191219>] mark_mounts_for_expiry+0x99/0xb0
       [<dcf55d1d>] nfs_expire_automounts+0xd/0x40 [nfs]
       [<c012e61b>] run_workqueue+0x12b/0x1e0
       [<c012f05b>] worker_thread+0x9b/0x100
       [<c0131c72>] kthread+0x42/0x70
       [<c0104c0f>] kernel_thread_helper+0x7/0x18
       =======================
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
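      A minimal, hypothetical sketch of the replacement pattern; the demo_* names below are illustrative and are not the actual NFS symbols touched by this commit. flush_scheduled_work() waits for every item on the shared kernel workqueue, so a caller that is itself running from that queue (as in the backtrace above) deadlocks, whereas cancel_work_sync() cancels and waits for just the one work item it is given.

      /* Illustrative module only; demo_* names are not the NFS code. */
      #include <linux/module.h>
      #include <linux/workqueue.h>

      static void demo_fn(struct work_struct *work)
      {
              printk(KERN_INFO "demo work ran\n");
      }
      static DECLARE_WORK(demo_work, demo_fn);

      static int __init demo_init(void)
      {
              schedule_work(&demo_work);
              return 0;
      }

      static void __exit demo_exit(void)
      {
              /*
               * flush_scheduled_work() would wait for *every* item on the
               * shared queue; cancel_work_sync() cancels and waits for just
               * this one, so it cannot deadlock against other users' work.
               */
              cancel_work_sync(&demo_work);
      }

      module_init(demo_init);
      module_exit(demo_exit);
      MODULE_LICENSE("GPL");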
    • SUNRPC: Don't call gss_delete_sec_context() from an rcu context · a4deb81b
      Trond Myklebust committed
      Doing so may not be safe...
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • NFSv4: Don't call put_rpccred() from an rcu callback · 905f8d16
      Trond Myklebust committed
      Doing so would require us to introduce bh-safe locks into put_rpccred().
      This patch fixes the lockdep complaint reported by Marc Dietrich:
      
      inconsistent {softirq-on-W} -> {in-softirq-W} usage.
      swapper/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
       (rpc_credcache_lock){-+..}, at: [<c01dc487>]
      _atomic_dec_and_lock+0x17/0x60
      {softirq-on-W} state was registered at:
        [<c013e870>] __lock_acquire+0x650/0x1030
        [<c013f2b1>] lock_acquire+0x61/0x80
        [<c02db9ac>] _spin_lock+0x2c/0x40
        [<c01dc487>] _atomic_dec_and_lock+0x17/0x60
        [<dced55fd>] put_rpccred+0x5d/0x100 [sunrpc]
        [<dced56c1>] rpcauth_unbindcred+0x21/0x60 [sunrpc]
        [<dced3fd4>] a0 [sunrpc]
        [<dcecefe0>] rpc_call_sync+0x30/0x40 [sunrpc]
        [<dcedc73b>] rpcb_register+0xdb/0x180 [sunrpc]
        [<dced65b3>] svc_register+0x93/0x160 [sunrpc]
        [<dced6ebe>] __svc_create+0x1ee/0x220 [sunrpc]
        [<dced7053>] svc_create+0x13/0x20 [sunrpc]
        [<dcf6d722>] nfs_callback_up+0x82/0x120 [nfs]
        [<dcf48f36>] nfs_get_client+0x176/0x390 [nfs]
        [<dcf49181>] nfs4_set_client+0x31/0x190 [nfs]
        [<dcf49983>] nfs4_create_server+0x63/0x3b0 [nfs]
        [<dcf52426>] nfs4_get_sb+0x346/0x5b0 [nfs]
        [<c017b444>] vfs_kern_mount+0x94/0x110
        [<c0190a62>] do_mount+0x1f2/0x7d0
        [<c01910a6>] sys_mount+0x66/0xa0
        [<c0104046>] syscall_call+0x7/0xb
        [<ffffffff>] 0xffffffff
      irq event stamp: 5277830
      hardirqs last  enabled at (5277830): [<c017530a>] kmem_cache_free+0x8a/0xc0
      hardirqs last disabled at (5277829): [<c01752d2>] kmem_cache_free+0x52/0xc0
      softirqs last  enabled at (5277798): [<c0124173>] __do_softirq+0xa3/0xc0
      softirqs last disabled at (5277817): [<c01241d7>] do_softirq+0x47/0x50
      
      other info that might help us debug this:
      no locks held by swapper/0.
      
      stack backtrace:
       [<c0104fda>] show_trace_log_lvl+0x1a/0x30
       [<c0105c02>] show_trace+0x12/0x20
       [<c0105d15>] dump_stack+0x15/0x20
       [<c013ccc3>] print_usage_bug+0x153/0x160
       [<c013d8b9>] mark_lock+0x449/0x620
       [<c013e824>] __lock_acquire+0x604/0x1030
       [<c013f2b1>] lock_acquire+0x61/0x80
       [<c02db9ac>] _spin_lock+0x2c/0x40
       [<c01dc487>] _atomic_dec_and_lock+0x17/0x60
       [<dced55fd>] put_rpccred+0x5d/0x100 [sunrpc]
       [<dcf6bf83>] nfs_free_delegation_callback+0x13/0x20 [nfs]
       [<c012f9ea>] __rcu_process_callbacks+0x6a/0x1c0
       [<c012fb52>] rcu_process_callbacks+0x12/0x30
       [<c0124218>] tasklet_action+0x38/0x80
       [<c0124125>] __do_softirq+0x55/0xc0
       [<c01241d7>] do_softirq+0x47/0x50
       [<c0124605>] irq_exit+0x35/0x40
       [<c0112463>] smp_apic_timer_interrupt+0x43/0x80
       [<c0104a77>] apic_timer_interrupt+0x33/0x38
       [<c02690df>] cpuidle_idle_call+0x6f/0x90
       [<c01023c3>] cpu_idle+0x43/0x70
       [<c02d8c27>] rest_init+0x47/0x50
       [<c03bcb6a>] start_kernel+0x22a/0x2b0
       [<00000000>] 0x0
       =======================
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
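      A hedged sketch of the pattern the fix implies, using stand-in types rather than the real sunrpc/NFS structures: drop the credential reference in process context before handing the object to call_rcu(), so the softirq-time RCU callback only frees memory and never needs rpc_credcache_lock.

      /* Illustrative only; demo_* stands in for the delegation/cred code. */
      #include <linux/kernel.h>
      #include <linux/slab.h>
      #include <linux/rcupdate.h>

      struct demo_delegation {
              struct rcu_head rcu;
              void *cred;                      /* stand-in for struct rpc_cred * */
      };

      static void demo_put_cred(void *cred)    /* stand-in for put_rpccred() */
      {
              /* takes a spinlock that is not BH-safe */
      }

      static void demo_free_rcu(struct rcu_head *head)
      {
              /* softirq context: only free memory, take no such locks here */
              kfree(container_of(head, struct demo_delegation, rcu));
      }

      static void demo_free_delegation(struct demo_delegation *d)
      {
              if (d->cred) {
                      demo_put_cred(d->cred);  /* safe: process context */
                      d->cred = NULL;
              }
              call_rcu(&d->rcu, demo_free_rcu);
      }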
    • NFS: Fix NFSv4 open stateid regressions · 45328c35
      Trond Myklebust committed
      Do not allow cached open for O_RDONLY or O_WRONLY unless the file has been
      previously opened in these modes.
      
      Also fix the calculation of the mode in nfs4_close_prepare. We should only
      issue an OPEN_DOWNGRADE if we're sure that we will still be holding the
      correct open modes. This may not be the case if we've been doing delegated
      opens.
      
      Finally, there is no need to adjust the open mode bit flags in
      nfs4_close_done(): that has already been done in nfs4_close_prepare().
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
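      The mode calculation described above can be pictured with a small, purely illustrative helper; the counters, names and return values below are hypothetical and are not the real nfs4_state fields used by nfs4_close_prepare().

      /* Illustrative decision helper only; not the NFS implementation. */
      enum demo_close_op { DEMO_CLOSE, DEMO_OPEN_DOWNGRADE, DEMO_KEEP_OPEN };

      static enum demo_close_op demo_choose_close_op(unsigned int n_rdonly,
                                                     unsigned int n_wronly,
                                                     unsigned int n_rdwr)
      {
              /* No opens of any kind remain: send a full CLOSE. */
              if (n_rdonly == 0 && n_wronly == 0 && n_rdwr == 0)
                      return DEMO_CLOSE;
              /* Both read and write access are still needed: nothing shrinks. */
              if (n_rdwr > 0 || (n_rdonly > 0 && n_wronly > 0))
                      return DEMO_KEEP_OPEN;
              /*
               * Only one access mode remains, so an OPEN_DOWNGRADE is valid,
               * provided the counts cannot change before the RPC is sent
               * (the delegated-open subtlety mentioned above).
               */
              return DEMO_OPEN_DOWNGRADE;
      }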
    • NFSv4: Fix a locking regression in nfs4_set_mode_locked() · ba683031
      Trond Myklebust committed
      We don't really need to clear &state->inode_states inside
      nfs4_set_mode_locked, and doing so without holding the inode->i_lock would
      in any case be a bug...
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
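      As a minimal sketch of the locking rule stated above (the flag bit and function are illustrative, not real NFS symbols), a flag word shared with other CPUs should only be modified with inode->i_lock held:

      #include <linux/fs.h>
      #include <linux/spinlock.h>
      #include <linux/bitops.h>

      #define DEMO_STATE_FLAG 0   /* hypothetical bit, not a real NFS flag */

      static void demo_clear_state_flag(struct inode *inode, unsigned long *flags)
      {
              spin_lock(&inode->i_lock);
              __clear_bit(DEMO_STATE_FLAG, flags);    /* safe: i_lock held */
              spin_unlock(&inode->i_lock);
      }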
    • NFS: Fix put_nfs_open_context · 5e11934d
      Trond Myklebust committed
      We need to grab the inode->i_lock atomically with the last reference put in
      order to remove the open context that is being freed from the
      nfsi->open_files list.
      
      Fix by converting the kref to a standard atomic counter and then using
      atomic_dec_and_lock()...
      
      Thanks to Arnd Bergmann for pointing out the problem.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
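      A simplified sketch of the scheme described above, with stand-in names for nfs_open_context and the per-inode open list: atomic_dec_and_lock() returns with inode->i_lock held only when the count actually reaches zero, making the final put and the list removal a single atomic step.

      /* Illustrative only; demo_open_context stands in for nfs_open_context. */
      #include <linux/fs.h>
      #include <linux/list.h>
      #include <linux/slab.h>
      #include <linux/spinlock.h>

      struct demo_open_context {
              atomic_t count;                  /* was a struct kref */
              struct list_head list;           /* on the per-inode open list */
              struct inode *inode;
      };

      static void demo_put_open_context(struct demo_open_context *ctx)
      {
              struct inode *inode = ctx->inode;

              /*
               * Returns with inode->i_lock held only when the count really
               * reached zero, so no other task can look the context up on
               * the list after we have decided to free it.
               */
              if (!atomic_dec_and_lock(&ctx->count, &inode->i_lock))
                      return;
              list_del(&ctx->list);
              spin_unlock(&inode->i_lock);
              kfree(ctx);
      }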
    • SUNRPC: Fix a race in rpciod_down() · b247bbf1
      Trond Myklebust committed
      Commit 4ada539e led to the unpleasant possibility of an asynchronous
      rpc_task being required to call rpciod_down() when it completes. That in
      turn means the rpciod workqueue may end up calling destroy_workqueue()
      on itself -> hang...
      
      Change rpciod_up/rpciod_down to just get/put the module, and then
      create/destroy the workqueues on module load/unload.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
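      A simplified, hypothetical sketch of that arrangement (the demo_* names are not the real sunrpc symbols): the workqueue lives for the lifetime of the module, while the up/down calls merely pin the module, so no rpc_task can end up destroying the queue it is running on.

      /* Illustrative module only; not the sunrpc implementation. */
      #include <linux/errno.h>
      #include <linux/module.h>
      #include <linux/workqueue.h>

      static struct workqueue_struct *demo_rpciod_wq;

      int demo_rpciod_up(void)
      {
              /* just pin the module; the workqueue already exists */
              return try_module_get(THIS_MODULE) ? 0 : -EINVAL;
      }

      void demo_rpciod_down(void)
      {
              module_put(THIS_MODULE);         /* never touches the workqueue */
      }

      static int __init demo_init(void)
      {
              demo_rpciod_wq = create_workqueue("demo_rpciod");
              return demo_rpciod_wq ? 0 : -ENOMEM;
      }

      static void __exit demo_exit(void)
      {
              /* no work can be running here: the module is going away */
              destroy_workqueue(demo_rpciod_wq);
      }

      module_init(demo_init);
      module_exit(demo_exit);
      MODULE_LICENSE("GPL");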
  2. 07 Aug 2007, 9 commits
  3. 06 Aug 2007, 3 commits
  4. 05 Aug 2007, 5 commits
  5. 04 Aug 2007, 15 commits