1. 18 Sep 2014, 25 commits
  2. 15 Sep 2014, 3 commits
    • vfs: avoid non-forwarding large load after small store in path lookup · 9226b5b4
      Linus Torvalds committed
      The performance regression that Josef Bacik reported in the pathname
      lookup (see commit 99d263d4 "vfs: fix bad hashing of dentries") made
      me look at performance stability of the dcache code, just to verify that
      the problem was actually fixed.  That turned up a few other problems in
      this area.
      
      There are a few cases where we exit RCU lookup mode and go to the slow
      serializing case when we shouldn't; Al has fixed those, and they'll come
      in with the next VFS pull.
      
      But my performance verification also shows that link_path_walk() turns
      out to have a very unfortunate 32-bit store of the length and hash of
      the name we look up, followed by a 64-bit read of the combined hash_len
      field.  That screws up the processor's store-to-load forwarding, causing
      an unnecessary hiccup in this critical routine (a sketch of this
      store/load pattern follows this entry).
      
      It's caused by the ugly calling convention for the "hash_name()"
      function, and easily fixed by just making hash_name() fill in the whole
      'struct qstr' rather than passing it a pointer to just the hash value.
      
      With that, the profile for this function looks much smoother.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9226b5b4
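      A minimal userspace sketch of the store/load pattern described above; the
      struct below is a simplified stand-in for the kernel's 'struct qstr', and
      the two helpers are illustrative, not the actual link_path_walk() code:

          /* Simplified stand-in for struct qstr (layout assumed). */
          #include <stdint.h>

          struct qstr_sketch {
                  union {
                          struct {
                                  uint32_t hash;
                                  uint32_t len;
                          };
                          uint64_t hash_len;
                  };
                  const char *name;
          };

          /* Problematic shape: two 32-bit stores immediately followed by a
           * 64-bit load of the same bytes.  Many CPUs cannot forward the
           * pending narrow stores to the wide load, so the load stalls. */
          static uint64_t store_then_wide_load(struct qstr_sketch *q,
                                               uint32_t hash, uint32_t len)
          {
                  q->hash = hash;              /* 32-bit store */
                  q->len  = len;               /* 32-bit store */
                  return q->hash_len;          /* 64-bit load, forwarding stall */
          }

          /* Fixed shape: build the combined value in a register and do a
           * single 64-bit store, so a later 64-bit load forwards cleanly. */
          static uint64_t wide_store_then_load(struct qstr_sketch *q,
                                               uint32_t hash, uint32_t len)
          {
                  q->hash_len = ((uint64_t)len << 32) | hash;
                  return q->hash_len;
          }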
    • be careful with nd->inode in path_init() and follow_dotdot_rcu() · 4023bfc9
      Al Viro committed
      in the former we simply check if dentry is still valid after picking
      its ->d_inode; in the latter we fetch ->d_inode in the same places
      where we fetch dentry and its ->d_seq, under the same checks.
      
      Cc: stable@vger.kernel.org # 2.6.38+
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4023bfc9
    • don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu() · 7bd88377
      Al Viro committed
      return the value instead, and have path_init() do the assignment.  Broken by
      "vfs: Fix absolute RCU path walk failures due to uninitialized seq number",
      which was Cc-stable with 2.6.38+ as destination.  This one should go where
      it went.
      
      To avoid returning a dummy value in the case where the root is already set
      (it would do no harm, actually, since the only caller that doesn't ignore
      the return value is guaranteed to have nd->root *not* set, but it's more
      obvious that way), lift the check into the callers.  And do the same to
      set_root(), to keep them in sync.
      
      Cc: stable@vger.kernel.org # 2.6.38+
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      7bd88377
  3. 14 Sep 2014, 3 commits
    • fix bogus read_seqretry() checks introduced in b37199e6 · f5be3e29
      Al Viro committed
      read_seqretry() returns true on mismatch, not on match... A usage sketch
      follows this entry.
      
      Cc: stable@vger.kernel.org # 3.15+
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      f5be3e29
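      A minimal sketch of the retry loop this fix restores; the seqlock and the
      data it protects are made up for illustration, not the code touched by
      the commit:

          #include <linux/seqlock.h>

          /* Retry the lockless read while read_seqretry() reports a mismatch
           * (i.e. while it returns true), never the other way around. */
          static u64 read_counter(seqlock_t *lock, const u64 *counter)
          {
                  unsigned int seq;
                  u64 val;

                  do {
                          seq = read_seqbegin(lock);   /* snapshot sequence */
                          val = *counter;              /* lockless read */
                  } while (read_seqretry(lock, seq));  /* true => writer raced */

                  return val;
          }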
    • move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon) · 6f18493e
      Al Viro committed
      and lock the right list there
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      6f18493e
    • vfs: fix bad hashing of dentries · 99d263d4
      Linus Torvalds committed
      Josef Bacik found a performance regression between 3.2 and 3.10 and
      narrowed it down to commit bfcfaa77 ("vfs: use 'unsigned long'
      accesses for dcache name comparison and hashing"). He reports:
      
       "The test case is essentially
      
            for (i = 0; i < 1000000; i++)
                    mkdir("a$i");
      
        On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
        dir/sec with 3.10.  This is because we spend waaaaay more time in
        __d_lookup on 3.10 than in 3.2.
      
        The new hashing function for strings is suboptimal for <
        sizeof(unsigned long) string names (and hell even > sizeof(unsigned
        long) string names that I've tested).  I broke out the old hashing
        function and the new one into a userspace helper to get real numbers
        and this is what I'm getting:
      
            Old hash table had 1000000 entries, 0 dupes, 0 max dupes
            New hash table had 12628 entries, 987372 dupes, 900 max dupes
            We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
      
        My test does the hash, and then does the d_hash into an integer pointer
        array the same size as the dentry hash table on my system, and then
        just increments the value at the address we got to see how many
        entries we overlap with.
      
        As you can see the old hash function ended up with all 1 million
        entries in their own bucket, whereas the new one they are only
        distributed among ~12.5k buckets, which is why we're using so much
        more CPU in __d_lookup".
      
      The reason for this hash regression is two-fold:
      
       - On 64-bit architectures the down-mixing of the original 64-bit
         word-at-a-time hash into the final 32-bit hash value is very
         simplistic and suboptimal, and just adds the two 32-bit parts
         together.
      
         In particular, because there is no bit shuffling and the mixing
         boundary is also a byte boundary, similar character patterns in the
         low and high word easily end up just canceling each other out.
      
       - the old byte-at-a-time hash mixed each byte into the final hash as it
         hashed the path component name, resulting in the low bits of the hash
         generally being a good source of hash data.  That is not true for the
         word-at-a-time case, and the hash data is distributed among all the
         bits.
      
      The fix is the same in both cases: do a better job of mixing the bits up
      and using as much of the hash data as possible.  We already have the
      "hash_32|64()" functions to do that (a sketch of the idea follows this entry).
      Reported-by: Josef Bacik <jbacik@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: linux-fsdevel@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      99d263d4
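      A minimal userspace sketch of the two mixing strategies described above;
      the multiplicative constant is illustrative (a common golden-ratio value),
      not the kernel's actual hash_32()/hash_64() implementation:

          #include <stdint.h>

          /* Simplistic fold: just add the two 32-bit halves.  Similar
           * patterns in the low and high words can cancel out, collapsing
           * many names into few buckets. */
          static uint32_t fold_by_addition(uint64_t hash)
          {
                  return (uint32_t)hash + (uint32_t)(hash >> 32);
          }

          /* Multiplicative mix: spread entropy from all 64 bits into the
           * high bits, then take the 'bits' the hash table actually needs.
           * This is the idea behind hash_64(), not its exact code. */
          static uint32_t fold_by_mixing(uint64_t hash, unsigned int bits)
          {
                  hash *= 0x9e3779b97f4a7c15ULL;   /* illustrative constant */
                  return (uint32_t)(hash >> (64 - bits));
          }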
  4. 11 Sep 2014, 4 commits
  5. 09 Sep 2014, 5 commits
    • nfs: revert "nfs4: queue free_lock_state job submission to nfsiod" · 0c0e0d3c
      Jeff Layton committed
      This reverts commit 49a4bda2.
      
      Christoph reported an oops due to the above commit:
      
      generic/089 242s ...[ 2187.041239] general protection fault: 0000 [#1]
      SMP
      [ 2187.042899] Modules linked in:
      [ 2187.044000] CPU: 0 PID: 11913 Comm: kworker/0:1 Not tainted 3.16.0-rc6+ #1151
      [ 2187.044287] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
      [ 2187.044287] Workqueue: nfsiod free_lock_state_work
      [ 2187.044287] task: ffff880072b50cd0 ti: ffff88007a4ec000 task.ti: ffff88007a4ec000
      [ 2187.044287] RIP: 0010:[<ffffffff81361ca6>]  [<ffffffff81361ca6>] free_lock_state_work+0x16/0x30
      [ 2187.044287] RSP: 0018:ffff88007a4efd58  EFLAGS: 00010296
      [ 2187.044287] RAX: 6b6b6b6b6b6b6b6b RBX: ffff88007a947ac0 RCX: 8000000000000000
      [ 2187.044287] RDX: ffffffff826af9e0 RSI: ffff88007b093c00 RDI: ffff88007b093db8
      [ 2187.044287] RBP: ffff88007a4efd58 R08: ffffffff832d3e10 R09: 000001c40efc0000
      [ 2187.044287] R10: 0000000000000000 R11: 0000000000059e30 R12: ffff88007fc13240
      [ 2187.044287] R13: ffff88007fc18b00 R14: ffff88007b093db8 R15: 0000000000000000
      [ 2187.044287] FS:  0000000000000000(0000) GS:ffff88007fc00000(0000) knlGS:0000000000000000
      [ 2187.044287] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [ 2187.044287] CR2: 00007f93ec33fb80 CR3: 0000000079dc2000 CR4: 00000000000006f0
      [ 2187.044287] Stack:
      [ 2187.044287]  ffff88007a4efdd8 ffffffff810cc877 ffffffff810cc80d ffff88007fc13258
      [ 2187.044287]  000000007a947af0 0000000000000000 ffffffff8353ccc8 ffffffff82b6f3d0
      [ 2187.044287]  0000000000000000 ffffffff82267679 ffff88007a4efdd8 ffff88007fc13240
      [ 2187.044287] Call Trace:
      [ 2187.044287]  [<ffffffff810cc877>] process_one_work+0x1c7/0x490
      [ 2187.044287]  [<ffffffff810cc80d>] ? process_one_work+0x15d/0x490
      [ 2187.044287]  [<ffffffff810cd569>] worker_thread+0x119/0x4f0
      [ 2187.044287]  [<ffffffff810fbbad>] ? trace_hardirqs_on+0xd/0x10
      [ 2187.044287]  [<ffffffff810cd450>] ? init_pwq+0x190/0x190
      [ 2187.044287]  [<ffffffff810d3c6f>] kthread+0xdf/0x100
      [ 2187.044287]  [<ffffffff810d3b90>] ? __init_kthread_worker+0x70/0x70
      [ 2187.044287]  [<ffffffff81d9873c>] ret_from_fork+0x7c/0xb0
      [ 2187.044287]  [<ffffffff810d3b90>] ? __init_kthread_worker+0x70/0x70
      [ 2187.044287] Code: 0f 1f 44 00 00 31 c0 5d c3 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 8d b7 48 fe ff ff 48 8b 87 58 fe ff ff 48 89 e5 48 8b 40 30 <48> 8b 00 48 8b 10 48 89 c7 48 8b 92 90 03 00 00 ff 52 28 5d c3
      [ 2187.044287] RIP  [<ffffffff81361ca6>] free_lock_state_work+0x16/0x30
      [ 2187.044287]  RSP <ffff88007a4efd58>
      [ 2187.103626] ---[ end trace 0f11326d28e5d8fa ]---
      
      The original reason for that patch was that the fl_release_private
      operation couldn't sleep.  With commit ed9814d8 (locks: defer freeing
      locks in locks_delete_lock until after i_lock has been dropped), this is
      no longer a problem, so we can revert it (a generic sketch of that
      deferral pattern follows this entry).
      Reported-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      0c0e0d3c
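      A generic sketch of the deferral pattern mentioned above (unlink under the
      spinlock, free after it is dropped so the callback may sleep); the names
      are made up for illustration, not the actual fs/locks.c code:

          #include <linux/list.h>
          #include <linux/spinlock.h>

          /* Unlink entries onto a private list under the lock, then dispose
           * of them after the lock is dropped, where sleeping is allowed. */
          static void remove_and_free_all(spinlock_t *lock,
                                          struct list_head *entries,
                                          void (*dispose)(struct list_head *))
          {
                  LIST_HEAD(to_free);
                  struct list_head *pos, *next;

                  spin_lock(lock);
                  list_splice_init(entries, &to_free);  /* unlink under lock */
                  spin_unlock(lock);

                  list_for_each_safe(pos, next, &to_free) {
                          list_del(pos);
                          dispose(pos);                 /* may sleep now */
                  }
          }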
    • nfs: fix kernel warning when removing proc entry · 21e81002
      Cong Wang committed
      I saw the following kernel warning:
      
      [ 1852.321222] ------------[ cut here ]------------
      [ 1852.326527] WARNING: CPU: 0 PID: 118 at fs/proc/generic.c:521 remove_proc_entry+0x154/0x16b()
      [ 1852.335630] remove_proc_entry: removing non-empty directory 'fs/nfsfs', leaking at least 'volumes'
      [ 1852.344084] CPU: 0 PID: 118 Comm: kworker/u8:2 Not tainted 3.16.0+ #540
      [ 1852.350036] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
      [ 1852.354992] Workqueue: netns cleanup_net
      [ 1852.358701]  0000000000000000 ffff880116f2fbd0 ffffffff819c03e9 ffff880116f2fc18
      [ 1852.366474]  ffff880116f2fc08 ffffffff810744ee ffffffff811e0e6e ffff8800d4e96238
      [ 1852.373507]  ffffffff81dbe665 ffff8800d46a5948 0000000000000005 ffff880116f2fc68
      [ 1852.380224] Call Trace:
      [ 1852.381976]  [<ffffffff819c03e9>] dump_stack+0x4d/0x66
      [ 1852.385495]  [<ffffffff810744ee>] warn_slowpath_common+0x7a/0x93
      [ 1852.389869]  [<ffffffff811e0e6e>] ? remove_proc_entry+0x154/0x16b
      [ 1852.393987]  [<ffffffff8107457b>] warn_slowpath_fmt+0x4c/0x4e
      [ 1852.397999]  [<ffffffff811e0e6e>] remove_proc_entry+0x154/0x16b
      [ 1852.402034]  [<ffffffff8129c73d>] nfs_fs_proc_net_exit+0x53/0x56
      [ 1852.406136]  [<ffffffff812a103b>] nfs_net_exit+0x12/0x1d
      [ 1852.409774]  [<ffffffff81785bc9>] ops_exit_list+0x44/0x55
      [ 1852.413529]  [<ffffffff81786389>] cleanup_net+0xee/0x182
      [ 1852.417198]  [<ffffffff81088c9e>] process_one_work+0x209/0x40d
      [ 1852.502320]  [<ffffffff81088bf7>] ? process_one_work+0x162/0x40d
      [ 1852.587629]  [<ffffffff810890c1>] worker_thread+0x1f0/0x2c7
      [ 1852.673291]  [<ffffffff81088ed1>] ? process_scheduled_works+0x2f/0x2f
      [ 1852.759470]  [<ffffffff8108e079>] kthread+0xc9/0xd1
      [ 1852.843099]  [<ffffffff8109427f>] ? finish_task_switch+0x3a/0xce
      [ 1852.926518]  [<ffffffff8108dfb0>] ? __kthread_parkme+0x61/0x61
      [ 1853.008565]  [<ffffffff819cbeac>] ret_from_fork+0x7c/0xb0
      [ 1853.076477]  [<ffffffff8108dfb0>] ? __kthread_parkme+0x61/0x61
      [ 1853.140653] ---[ end trace 69c4c6617f78e32d ]---
      
      It looks wrong that we add "/proc/net/nfsfs" in nfs_fs_proc_net_init()
      but remove "/proc/fs/nfsfs" in nfs_fs_proc_net_exit() (a sketch of the
      matching init/exit shape follows this entry).
      
      Fixes: commit 65b38851 (NFS: Fix /proc/fs/nfsfs/servers and /proc/fs/nfsfs/volumes)
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Trond Myklebust <trond.myklebust@primarydata.com>
      Cc: Dan Aloni <dan@kernelim.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      [Trond: replace uses of remove_proc_entry() with remove_proc_subtree()
      as suggested by Al Viro]
      Cc: stable@vger.kernel.org # 3.4.x : 65b38851: NFS: Fix /proc/fs/nfsfs/servers
      Cc: stable@vger.kernel.org # 3.4.x
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      21e81002
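      A hedged sketch of the matching per-net init/exit shape implied above; the
      hooks and the error handling are illustrative, not the actual fs/nfs code:

          #include <linux/errno.h>
          #include <linux/proc_fs.h>
          #include <net/net_namespace.h>

          /* Whatever is created under net->proc_net in the init hook must be
           * torn down from the same parent in the exit hook, otherwise
           * remove_proc_entry() warns about a non-empty directory. */
          static int __net_init example_proc_net_init(struct net *net)
          {
                  if (!proc_mkdir("nfsfs", net->proc_net))  /* /proc/net/nfsfs */
                          return -ENOMEM;
                  return 0;
          }

          static void __net_exit example_proc_net_exit(struct net *net)
          {
                  /* Tear down the same subtree, under the same parent. */
                  remove_proc_subtree("nfsfs", net->proc_net);
          }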
    • Btrfs: use insert_inode_locked4 for inode creation · b0d5d10f
      Chris Mason committed
      Btrfs was inserting inodes into the hash table before we had fully
      set the inode up on disk.  This leaves us open to rare races that allow
      two different inodes in memory for the same [root, inode] pair.
      
      This patch fixes things by using insert_inode_locked4() to insert an I_NEW
      inode, and unlock_new_inode() when we're ready for the rest of the kernel
      to use the inode (a sketch of that pattern follows this entry).
      
      It also makes sure to init the operations pointers on the inode before
      going into the error handling paths.
      Signed-off-by: Chris Mason <clm@fb.com>
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      b0d5d10f
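      A sketch of that I_NEW insertion pattern, assuming a filesystem-specific
      match callback and key; illustrative, not the btrfs code itself:

          #include <linux/fs.h>

          /* Hypothetical match callback: nonzero when @inode corresponds to
           * the key in @data (whatever the filesystem keys inodes by). */
          static int example_inode_match(struct inode *inode, void *data)
          {
                  return inode->i_ino == *(unsigned long *)data;
          }

          /* Insert an inode while it is still I_NEW, finish setting it up,
           * and only then expose it with unlock_new_inode(). */
          static int example_publish_inode(struct inode *inode,
                                           unsigned long hashval,
                                           unsigned long *key)
          {
                  int ret;

                  ret = insert_inode_locked4(inode, hashval,
                                             example_inode_match, key);
                  if (ret)
                          return ret;      /* someone else won the race */

                  /* ... finish setup while I_NEW keeps others waiting ... */

                  unlock_new_inode(inode); /* now safe for lookups to find it */
                  return 0;
          }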
    • Btrfs: fix fsync data loss after a ranged fsync · 49dae1bc
      Filipe Manana committed
      While we're doing a full fsync (when the inode has the flag
      BTRFS_INODE_NEEDS_FULL_SYNC set) that is ranged too (covers only a
      portion of the file), we might have ordered operations that are started
      before or while we're logging the inode and that fall outside the fsync
      range.
      
      Therefore, when a full ranged fsync finishes, don't remove every extent
      map from the list of modified extent maps.  For those extent maps that
      fall outside our fsync range, the respective ordered operation hasn't
      finished yet, meaning the corresponding file extent item wasn't inserted
      into the fs/subvol tree yet and therefore we didn't log it.  We must let
      the next fast fsync (one that checks only the modified list) see such an
      extent map, log a matching file extent item to the log btree, and wait
      for its ordered operation to finish (if it's still ongoing).
      
      A test case for xfstests follows.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      49dae1bc
    • Btrfs: kfree()ing ERR_PTRs · c47ca32d
      Dan Carpenter committed
      The "inherit" in btrfs_ioctl_snap_create_v2() and "vol_args" in
      btrfs_ioctl_rm_dev() are ERR_PTRs so we can't call kfree() on them.
      
      These kinds of bugs are "one err" bugs, where there is just one error
      label that does everything.  I could set "inherit = NULL" and keep the
      single out label, but it ends up being more complicated that way.  It
      makes the code simpler to re-order the unwind so it's in the mirror order
      of the allocations and to introduce some new error labels (a generic
      sketch of the pattern follows this entry).
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      c47ca32d
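      A generic sketch of the bug class and the mirror-order unwind described
      above; the ioctl and allocations are made up for illustration, not the
      btrfs ioctl code:

          #include <linux/err.h>
          #include <linux/slab.h>
          #include <linux/string.h>

          /* Buggy "one err" shape (as commented pseudo-code): if memdup_user()
           * fails, args is an ERR_PTR, yet the single error label still ends
           * up passing it to kfree():
           *
           *      args = memdup_user(uptr, size);
           *      if (IS_ERR(args)) {
           *              ret = PTR_ERR(args);
           *              goto out;            // out: kfree(args); on an ERR_PTR
           *      }
           */

          /* Safer shape: unwind in the mirror order of the allocations, with
           * one label per resource, so an ERR_PTR never reaches kfree(). */
          static long example_ioctl(void __user *uptr, size_t size)
          {
                  void *args, *extra;
                  long ret;

                  args = memdup_user(uptr, size);
                  if (IS_ERR(args))
                          return PTR_ERR(args);

                  extra = kzalloc(64, GFP_KERNEL);
                  if (!extra) {
                          ret = -ENOMEM;
                          goto out_free_args;
                  }

                  ret = 0;        /* ... real work would go here ... */

                  kfree(extra);
          out_free_args:
                  kfree(args);
                  return ret;
          }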