1. 09 December 2006, 4 commits
    • [PATCH] NFS3: Calculate 'w' a bit later in nfs3svc_encode_getaclres() · 14d2b59e
      Authored by Jesper Juhl
            This is a small performance optimization since we can return before
            needing 'w'. It also saves a few bytes of .text:
            Before:
                 text    data     bss     dec     hex filename
                 1632     140       0    1772     6ec fs/nfsd/nfs3acl.o
            After:
                 text    data     bss     dec     hex filename
                 1624     140       0    1764     6e4 fs/nfsd/nfs3acl.o
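      The pattern behind this commit (and the identical NFS2 change that
      follows) is simply sinking a local computation below the early-return
      checks that never use it. A minimal C sketch of the idea; encode_res()
      and write_len() are hypothetical names, not the real nfsd XDR helpers:

            #include <stddef.h>

            size_t write_len(char *buf, size_t n);  /* stand-in for the XDR write */

            size_t encode_res(int status, size_t acl_len, char *buf)
            {
                    size_t w;                /* the value the patch defers */

                    if (status != 0)
                            return 0;        /* early return: 'w' never needed */

                    w = 4 + acl_len;         /* computed only where it is used */
                    return write_len(buf, w);
            }

      The .text savings shown above presumably come from the compiler no
      longer emitting the computation on the path that returns early.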
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] NFS2: Calculate 'w' a bit later in nfsaclsvc_encode_getaclres() · cb65a5ba
      Authored by Jesper Juhl
            This is a small performance optimization since we can return before
            needing 'w'. It also saves a few bytes of .text:
            Before:
                 text    data     bss     dec     hex filename
                 2406     212       0    2618     a3a fs/nfsd/nfs2acl.o
            After:
                 text    data     bss     dec     hex filename
                 2400     212       0    2612     a34 fs/nfsd/nfs2acl.o
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] lockdep: annotate nfsd4 recover code · 4b75f78e
      Authored by Peter Zijlstra
      > =============================================
      > [ INFO: possible recursive locking detected ]
      > 2.6.18-1.2724.lockdepPAE #1
      > ---------------------------------------------
      > nfsd/6884 is trying to acquire lock:
      >  (&inode->i_mutex){--..}, at: [<c04811e5>] vfs_rmdir+0x73/0xf4
      >
      > but task is already holding lock:
      >  (&inode->i_mutex){--..}, at: [<f8dfa621>] nfsd4_clear_clid_dir+0x1f/0x3d [nfsd]
      >
      > other info that might help us debug this:
      > 3 locks held by nfsd/6884:
      >  #0:  (hash_sem){----}, at: [<f8de05eb>] nfsd+0x181/0x2ea [nfsd]
      >  #1:  (client_mutex){--..}, at: [<f8df6d19>] nfsd4_setclientid_confirm+0x3b/0x2cf [nfsd]
      >  #2:  (&inode->i_mutex){--..}, at: [<f8dfa621>] nfsd4_clear_clid_dir+0x1f/0x3d [nfsd]
      >
      > stack backtrace:
      >  [<c040524d>] dump_trace+0x69/0x1af
      >  [<c04053ab>] show_trace_log_lvl+0x18/0x2c
      >  [<c040595f>] show_trace+0xf/0x11
      >  [<c0405a53>] dump_stack+0x15/0x17
      >  [<c043ca7a>] __lock_acquire+0x110/0x9b6
      >  [<c043d91e>] lock_acquire+0x5c/0x7a
      >  [<c061a41b>] __mutex_lock_slowpath+0xde/0x234
      >  [<c04811e5>] vfs_rmdir+0x73/0xf4
      >  [<f8dfa62b>] nfsd4_clear_clid_dir+0x29/0x3d [nfsd]
      >  [<f8dfa733>] nfsd4_remove_clid_dir+0xb8/0xf8 [nfsd]
      >  [<f8df6e90>] nfsd4_setclientid_confirm+0x1b2/0x2cf [nfsd]
      >  [<f8def19a>] nfsd4_proc_compound+0x137a/0x166c [nfsd]
      >  [<f8de00d5>] nfsd_dispatch+0xc5/0x180 [nfsd]
      >  [<f8d09d83>] svc_process+0x3bd/0x631 [sunrpc]
      >  [<f8de0604>] nfsd+0x19a/0x2ea [nfsd]
      >  [<c0404e27>] kernel_thread_helper+0x7/0x10
      > DWARF2 unwinder stuck at kernel_thread_helper+0x7/0x10
      > Leftover inexact backtrace:
      >  =======================
      
      Add some nesting annotation to the nfsd4 recovery code.
      The vfs operations called will take dentry->d_inode->i_mutex.
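      lockdep folds every inode's i_mutex into a single lock class, so taking
      the child's i_mutex inside vfs_rmdir() while the parent's is held looks
      like recursion. The annotation uses the mutex subclass API to tell the
      two levels apart; a hedged sketch of the shape of the change, not the
      exact hunk:

            /* Take the parent directory's i_mutex under the I_MUTEX_PARENT
             * subclass; the child's i_mutex acquired inside vfs_rmdir()
             * keeps the default subclass, so lockdep sees distinct levels. */
            mutex_lock_nested(&dir->d_inode->i_mutex, I_MUTEX_PARENT);
            status = vfs_rmdir(dir->d_inode, dentry);
            mutex_unlock(&dir->d_inode->i_mutex);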
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] nfsd: change uses of f_{dentry, vfsmnt} to use f_path · 7eaa36e2
      Authored by Josef "Jeff" Sipek
      Change all the uses of f_{dentry,vfsmnt} to f_path.{dentry,mnt} in the nfs
      server code.
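      The conversion itself is mechanical: struct file keeps its dentry and
      vfsmount in an embedded struct path, so each access gains an f_path
      level. An illustrative hunk, not taken from the actual patch:

            -       struct dentry *dentry = filp->f_dentry;
            -       struct vfsmount *mnt = filp->f_vfsmnt;
            +       struct dentry *dentry = filp->f_path.dentry;
            +       struct vfsmount *mnt = filp->f_path.mnt;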
      Signed-off-by: Josef "Jeff" Sipek <jsipek@cs.sunysb.edu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 08 December 2006, 2 commits
  3. 22 November 2006, 1 commit
  4. 09 November 2006, 3 commits
  5. 04 November 2006, 1 commit
    • [PATCH] NFS4: fix for recursive locking problem · 7ef55b8a
      Authored by Srinivasa Ds
      While performing some operations on NFS, I got the error below on the server side.
      
        =============================================
        [ INFO: possible recursive locking detected ]
        2.6.19-prep #1
        ---------------------------------------------
        nfsd4/3525 is trying to acquire lock:
         (&inode->i_mutex){--..}, at: [<c0611e5a>] mutex_lock+0x21/0x24
      
        but task is already holding lock:
         (&inode->i_mutex){--..}, at: [<c0611e5a>] mutex_lock+0x21/0x24
      
        other info that might help us debug this:
        2 locks held by nfsd4/3525:
         #0:  (client_mutex){--..}, at: [<c0611e5a>] mutex_lock+0x21/0x24
         #1:  (&inode->i_mutex){--..}, at: [<c0611e5a>] mutex_lock+0x21/0x24
      
        stack backtrace:
         [<c04051ed>] show_trace_log_lvl+0x58/0x16a
         [<c04057fa>] show_trace+0xd/0x10
         [<c0405913>] dump_stack+0x19/0x1b
         [<c043b6f1>] __lock_acquire+0x778/0x99c
         [<c043be86>] lock_acquire+0x4b/0x6d
         [<c0611ceb>] __mutex_lock_slowpath+0xbc/0x20a
         [<c0611e5a>] mutex_lock+0x21/0x24
         [<c047fd7e>] vfs_rmdir+0x76/0xf8
         [<f94b7ce9>] nfsd4_clear_clid_dir+0x2c/0x41 [nfsd]
         [<f94b7de9>] nfsd4_remove_clid_dir+0xb1/0xe8 [nfsd]
         [<f94b307b>] laundromat_main+0x9b/0x1c3 [nfsd]
         [<c04333d6>] run_workqueue+0x7a/0xbb
         [<c0433d0b>] worker_thread+0xd2/0x107
         [<c0436285>] kthread+0xc3/0xf2
         [<c0402005>] kernel_thread_helper+0x5/0xb
        ===================================================================
      
      The cause of this problem was two successive mutex_lock() calls on two different inodes, as shown below:
      
        static int
        nfsd4_clear_clid_dir(struct dentry *dir, struct dentry *dentry)
        {
                int status;

                /* For now this directory should already be empty, but we empty it of
                 * any regular files anyway, just in case the directory was created by
                 * a kernel from the future.... */
                nfsd4_list_rec_dir(dentry, nfsd4_remove_clid_file);
                mutex_lock(&dir->d_inode->i_mutex);
                status = vfs_rmdir(dir->d_inode, dentry);
        ...

        int vfs_rmdir(struct inode *dir, struct dentry *dentry)
        {
                int error = may_delete(dir, dentry, 1);

                if (error)
                        return error;

                if (!dir->i_op || !dir->i_op->rmdir)
                        return -EPERM;

                DQUOT_INIT(dir);

                mutex_lock(&dentry->d_inode->i_mutex);
        ...
      
      So I have developed this patch to overcome the problem.
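      The fix mirrors the lockdep annotation from the recovery-code patch
      above: take the directory's i_mutex with the I_MUTEX_PARENT subclass
      instead of a plain mutex_lock(), so the child inode's i_mutex taken
      inside vfs_rmdir() no longer collides with it. A hedged reconstruction
      of the hunk in nfsd4_clear_clid_dir(); see the upstream commit for the
      exact diff:

            -       mutex_lock(&dir->d_inode->i_mutex);
            +       mutex_lock_nested(&dir->d_inode->i_mutex, I_MUTEX_PARENT);
                    status = vfs_rmdir(dir->d_inode, dentry);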
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  6. 21 October 2006, 15 commits
  7. 17 October 2006, 4 commits
  8. 06 October 2006, 1 commit
    • [PATCH] knfsd: tidy up the meaning of 'buffer size' in nfsd/sunrpc · c6b0a9f8
      Authored by NeilBrown
      There is some confusion about the meaning of 'bufsz' for a sunrpc server.
      In some cases it is the largest message that can be sent or received.  In
      other cases it is the largest 'payload' that can be included in an NFS message.
      
      In either case, it is not possible for both the request and the reply to be
      this large.  One of the request or reply may only be one page long, which
      fits nicely with NFS.
      
      So we remove 'bufsz' and replace it with two numbers: 'max_payload' and
      'max_mesg'.  Max_payload is the size that the server requests.  It is used
      by the server to check the max size allowed on a particular connection:
      depending on the protocol a lower limit might be used.
      
      max_mesg is the largest single message that can be sent or received. It
      is calculated as max_payload rounded up to a multiple of PAGE_SIZE, plus
      one PAGE_SIZE of overhead. Only one of the request and reply may be this
      size; the other must be at most one page.
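      In code, the relationship reduces to two assignments at server-creation
      time. A hedged sketch (roundup() and PAGE_SIZE as in the kernel; the
      sv_max_payload/sv_max_mesg field names follow the commit's naming, but
      treat the snippet as illustrative, not the exact patch):

            /* max_mesg = max_payload rounded up to a whole page, plus one
             * page of overhead; only one direction of an RPC may be this
             * large, the other is at most a single page. */
            serv->sv_max_payload = bufsize;
            serv->sv_max_mesg = roundup(serv->sv_max_payload, PAGE_SIZE) + PAGE_SIZE;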
      
      Cc: Greg Banks <gnb@sgi.com>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  9. 04 October 2006, 9 commits