1. 08 Jan 2011 — 19 commits
  2. 07 Jan 2011 — 21 commits
    • fs: scale mntget/mntput · b3e19d92
      Authored by Nick Piggin
      The problem that this patch aims to fix is vfsmount refcounting scalability.
      We need to take a reference on the vfsmount for every successful path lookup,
      and these lookups often go to the same mount point.
      
      The fundamental difficulty is that a "simple" reference count can never be made
      scalable, because any time a reference is dropped, we must check whether that
      was the last reference. To do that requires communication with all other CPUs
      that may have taken a reference count.
      
      We can make refcounts more scalable in a few ways, all involving keeping
      distributed counters and checking for the global-zero condition less
      frequently.
      
      - check the global sum once every interval (this will delay zero detection
        for some interval, so it's probably a showstopper for vfsmounts).
      
      - keep a local count and only take the global sum when the local count
        reaches 0 (this is difficult for vfsmounts, because we can't hold preemption
        off for the life of a reference, so a counter would need to be per-thread or
        tied strongly to a particular CPU, which requires more locking).
      
      - keep a local difference of increments and decrements, which allows us to sum
        the total difference and hence find the refcount by summing over all CPUs.
        Then, keep a single integer "long" refcount for slow and long-lasting
        references, and only take the global sum of the local counters when the long
        refcount is 0.
      
      This last scheme is what I implemented here (a sketch follows at the end of
      this entry). Attached mounts and process root and working directory references
      are "long" references, and everything else is a short reference.
      
      This allows scalable vfsmount references during path walking over mounted
      subtrees and unattached (lazy umounted) mounts with processes still running
      in them.
      
      This results in one fewer atomic op in the fastpath: mntget is now just a
      per-CPU inc, rather than an atomic inc; and mntput just requires a spinlock
      and a non-atomic decrement in the common case. However, the code is otherwise
      bigger and heavier, so single-threaded performance is basically a wash.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
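
      A minimal sketch of this last scheme (hypothetical names throughout;
      scalable_ref and ref_release stand in for the real mnt_* code, and the
      races the real code must handle are glossed over):

          #include <linux/percpu.h>
          #include <linux/spinlock.h>

          struct scalable_ref {
                  int __percpu *count;    /* per-CPU inc/dec difference */
                  int long_count;         /* "long" refs: mounts, root, cwd */
                  spinlock_t lock;        /* serialises the slow path */
          };

          static void ref_get(struct scalable_ref *r)
          {
                  this_cpu_inc(*r->count);        /* fastpath: no atomics */
          }

          static void ref_put(struct scalable_ref *r)
          {
                  int cpu, sum = 0;

                  spin_lock(&r->lock);
                  this_cpu_dec(*r->count);
                  if (r->long_count > 0) {
                          spin_unlock(&r->lock);
                          return;                 /* a long ref pins the object */
                  }
                  /* no long refs left: sum the per-CPU differences to
                   * test for the global-zero condition */
                  for_each_possible_cpu(cpu)
                          sum += *per_cpu_ptr(r->count, cpu);
                  spin_unlock(&r->lock);
                  if (sum == 0)
                          ref_release(r);         /* hypothetical teardown */
          }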
    • fs: rename vfsmount counter helpers · c6653a83
      Authored by Nick Piggin
      Suggested by Andreas: the mnt_ prefix is a clearer namespace, follows kernel
      conventions better, and is easier to tab-complete. I introduced these names,
      so I'll admit they were not good choices.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: implement faster dentry memcmp · 9d55c369
      Authored by Nick Piggin
      The standard memcmp function on a Westmere system shows up hot in
      profiles of the `git diff` workload (both parallel and single threaded),
      likely due to the costs of trapping into microcode and the limited
      opportunity to improve memory access (a dentry name is unlikely to take
      up more than a cacheline).
      
      So replace it with an open-coded byte comparison. This increases code
      size by 8 bytes in the critical __d_lookup_rcu function, but the
      speedup is huge. Averaging 10 runs of each:
      
      git diff st   user   sys   elapsed  CPU
      before        1.15   2.57  3.82      97.1
      after         1.14   2.35  3.61      96.8
      
      git diff mt   user   sys   elapsed  CPU
      before        1.27   3.85  1.46     349
      after         1.26   3.54  1.43     333
      
      Elapsed time for single threaded git diff at 95.0% confidence:
              -0.21  +/- 0.01
              -5.45% +/- 0.24%
      
      It's -0.66% +/- 0.06% elapsed time on my Opteron, so rep cmp costs on the
      fam10h seem to be relatively smaller, but there is still a win.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
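
      The replacement is essentially a length check plus a byte-at-a-time
      loop, along these lines (a sketch of the dentry_cmp() helper in
      fs/dcache.c; exact details may differ):

          /* Open-coded compare: avoids the rep-cmps microcode setup cost,
           * which dominates for strings as short as dentry names. */
          static inline int dentry_cmp(const unsigned char *cs, size_t scount,
                                       const unsigned char *ct, size_t tcount)
          {
                  if (scount != tcount)
                          return 1;       /* different lengths can't match */
                  while (tcount--) {
                          if (*cs++ != *ct++)
                                  return 1;
                  }
                  return 0;               /* equal */
          }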
    • fs: prefetch inode data in dcache lookup · e1bb5782
      Authored by Nick Piggin
      This makes single threaded git diff -1.25% +/- 0.05% elapsed time on my
      2s12c24t Westmere system, and -0.86% +/- 0.05% on my 2s8c Barcelona, by
      prefetching the important first cacheline of the inode while we do the
      actual name compare and other operations on the dentry.
      
      There was no measurable slowdown in the single file stat case, or the creat
      case (where negative dentries would be common).
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
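
      Schematically, the lookup issues the prefetch before the name compare
      so the inode's first cacheline streams in under the comparison (a
      sketch, not the exact hunk; lookup_one() is illustrative):

          #include <linux/prefetch.h>
          #include <linux/string.h>

          static struct inode *lookup_one(struct dentry *dentry,
                                          const struct qstr *name)
          {
                  struct inode *inode = dentry->d_inode;

                  if (likely(inode))
                          prefetch(inode);        /* overlap the cache miss
                                                     with the compare below */
                  if (dentry->d_name.len != name->len ||
                      memcmp(dentry->d_name.name, name->name, name->len))
                          return NULL;
                  return inode;
          }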
    • fs: improve scalability of pseudo filesystems · 4b936885
      Authored by Nick Piggin
      Regardless of how much we possibly try to scale dcache, there is likely
      always going to be some fundamental contention when adding or removing children
      under the same parent. Pseudo filesystems do not seem to need connected
      dentries, because by definition they are disconnected.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
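
      For example, a pseudo filesystem can allocate dentries that are never
      linked under a shared parent (a sketch assuming the d_alloc_pseudo()
      helper; pseudo_dentry() is illustrative):

          static struct dentry *pseudo_dentry(struct super_block *sb)
          {
                  const struct qstr name =
                          { .name = (const unsigned char *)"anon", .len = 4 };

                  /* no parent link, so no contention on a common parent's
                   * d_lock or d_subdirs list */
                  return d_alloc_pseudo(sb, &name);
          }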
    • fs: dcache per-inode inode alias locking · 873feea0
      Authored by Nick Piggin
      dcache_inode_lock can be replaced with per-inode locking. Use the existing
      inode->i_lock for this. This is slightly non-trivial because we sometimes
      need to find the inode from the dentry, which requires d_inode to be
      stabilised (either with a refcount or d_lock).
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
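
      One plausible shape of that stabilisation (a hedged sketch; since
      i_lock nests outside d_lock in the dcache lock order, going from a
      dentry to its inode's lock takes a trylock-and-retry dance):

          static struct inode *lock_alias_inode(struct dentry *dentry)
          {
                  struct inode *inode;

          again:
                  spin_lock(&dentry->d_lock);
                  inode = dentry->d_inode;        /* stable under d_lock */
                  if (inode && !spin_trylock(&inode->i_lock)) {
                          spin_unlock(&dentry->d_lock);
                          cpu_relax();            /* back off, then retry */
                          goto again;
                  }
                  spin_unlock(&dentry->d_lock);
                  return inode;                   /* i_lock held if non-NULL */
          }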
    • fs: dcache per-bucket dcache hash locking · ceb5bdc2
      Authored by Nick Piggin
      We can turn the dcache hash locking from a global dcache_hash_lock into
      per-bucket locking.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • bit_spinlock: add required includes · 626d6074
      Authored by Nick Piggin
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • kernel: add bl_list · 4e35e607
      Authored by Nick Piggin
      Introduce a type of hlist that can support the use of the lowest bit in the
      hlist_head. This will subsequently be used to implement per-bucket bit
      spinlocks for the inode and dentry hashes, and may be useful in other cases
      such as network hashes.
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
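
      A short usage sketch (bucket_insert() is illustrative; the lock is
      literally bit 0 of the bucket head pointer):

          #include <linux/list_bl.h>
          #include <linux/rculist_bl.h>
          #include <linux/bit_spinlock.h>

          static struct hlist_bl_head bucket;

          static void bucket_insert(struct hlist_bl_node *node)
          {
                  bit_spin_lock(0, (unsigned long *)&bucket.first);
                  hlist_bl_add_head_rcu(node, &bucket);  /* preserves bit 0 */
                  bit_spin_unlock(0, (unsigned long *)&bucket.first);
          }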
    • xfs: provide simple rcu-walk ACL implementation · 880566e1
      Authored by Nick Piggin
      This simple implementation just checks that there are no ACLs on the
      inode; if so, the rcu-walk may proceed, otherwise it is failed.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • btrfs: provide simple rcu-walk ACL implementation · 258a5aa8
      Authored by Nick Piggin
      This simple implementation just checks that there are no ACLs on the
      inode; if so, the rcu-walk may proceed, otherwise it is failed.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • ext2,3,4: provide simple rcu-walk ACL implementation · 73598611
      Authored by Nick Piggin
      This simple implementation just checks that there are no ACLs on the
      inode; if so, the rcu-walk may proceed, otherwise it is failed.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: provide simple rcu-walk generic_check_acl implementation · 1e1743eb
      Authored by Nick Piggin
      This simple implementation just checks that there are no ACLs on the
      inode; if so, the rcu-walk may proceed, otherwise it is failed.

      This could easily be extended to put ACLs under RCU and check them
      under a seqlock, if need be. But this implementation is enough to show
      that the rcu-walk aware permissions code for path lookups is working,
      and it will handle cases where there are no ACLs, or ACLs on just the
      final element.
      
      This patch implicitly converts tmpfs to an rcu-aware permission check.
      Subsequent patches convert ext*, xfs, and btrfs. Each of these uses
      acl/permission code in a different way, so converting them all provides
      templates and proof of concept.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
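
      The shape of the rcu-walk branch is roughly the following sketch
      (IPERM_FLAG_RCU comes from the "rcu-walk aware permission i_ops"
      patch below; inode_known_acl_free() is a hypothetical stand-in for
      the real cached-ACL predicate):

          static int check_acl_rcu(struct inode *inode, int mask,
                                   unsigned int flags)
          {
                  if (flags & IPERM_FLAG_RCU) {
                          /* rcu-walk may not block: proceed only if the
                           * cache already proves there is no ACL */
                          if (!inode_known_acl_free(inode))   /* hypothetical */
                                  return -ECHILD;     /* drop to ref-walk */
                          return -EAGAIN;     /* no ACL: use the mode bits */
                  }
                  /* ref-walk: full, possibly-sleeping ACL check goes here */
                  return -EAGAIN;
          }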
    • fs: provide rcu-walk aware permission i_ops · b74c79e9
      Authored by Nick Piggin
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: rcu-walk aware d_revalidate method · 34286d66
      Authored by Nick Piggin
      Require filesystems to be aware of .d_revalidate being called in rcu-walk
      mode (nd->flags & LOOKUP_RCU). For now, do a simple push-down, returning
      -ECHILD from all implementations.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
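
      A conforming push-down looks like this sketch (example_d_revalidate()
      is illustrative; at this point in the series .d_revalidate still
      takes a nameidata):

          static int example_d_revalidate(struct dentry *dentry,
                                          struct nameidata *nd)
          {
                  if (nd->flags & LOOKUP_RCU)
                          return -ECHILD;         /* cannot revalidate without
                                                     blocking: force ref-walk */
                  /* ... the filesystem's existing revalidation logic ... */
                  return 1;                       /* dentry still valid */
          }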
    • fs: cache optimise dentry and inode for rcu-walk · 44a7d7a8
      Authored by Nick Piggin
      Put dentry and inode fields into the top of the data structure. This allows
      RCU path traversal to perform an RCU dentry lookup in a path walk by touching
      only the first 56 bytes of the dentry.

      We also fit 8 bytes of inline name in the first 64 bytes, so for short
      names, only 64 bytes need to be touched to perform the lookup. We should
      get rid of the hash->prev pointer from the first 64 bytes and fit 16 bytes
      of name in there, which would take care of 81% rather than 32% of the kernel
      tree.
      
      The inode is also rearranged so that RCU lookup will touch only a single
      cacheline in the inode, plus one in the i_ops structure.

      This is important for directory component lookups in RCU path walking. In the
      kernel source, directory names average around 6 chars, so this works.
      
      When we reach the last element of the lookup, we need to lock it and take its
      refcount which requires another cacheline access.
      
      Align the dentry and inode operations structs, so members will be at
      predictable offsets and we can group common operations into the head of the
      structure.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
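
      Schematically, the hot head of the dentry ends up grouped like this
      (an illustrative ordering, not the verbatim struct):

          struct dentry {
                  /* RCU lookup touches only these fields: */
                  unsigned int d_flags;
                  seqcount_t d_seq;               /* per-dentry seqlock */
                  struct hlist_bl_node d_hash;    /* incl. the prev pointer
                                                     mentioned above */
                  struct dentry *d_parent;
                  struct qstr d_name;
                  struct inode *d_inode;          /* 56 bytes to here (64-bit) */
                  unsigned char d_iname[8];       /* short names inline */
                  /* ref-walk and write-side fields follow: d_count, d_lock,
                   * d_op, d_sb, LRU and child lists, ... */
          };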
    • fs: dcache reduce branches in lookup path · fb045adb
      Authored by Nick Piggin
      Reduce some branches and memory accesses in dcache lookup by adding dentry
      flags to indicate common d_ops are set, rather than having to check them.
      This saves a pointer memory access (dentry->d_op) in common path lookup
      situations, and saves another pointer load and branch in cases where we
      have d_op but not the particular operation.
      
      Patched with:
      
      git grep -E '[.>]([[:space:]])*d_op([[:space:]])*=' | xargs sed -e 's/\([^\t ]*\)->d_op = \(.*\);/d_set_d_op(\1, \2);/' -e 's/\([^\t ]*\)\.d_op = \(.*\);/d_set_d_op(\&\1, \2);/' -i
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
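
      In the lookup path the win looks like this (sketch; DCACHE_OP_REVALIDATE
      stands for the per-op flags this patch adds, kept in sync by the
      d_set_d_op() helper used in the sed conversion above):

          /* before: two dependent pointer loads plus branches */
          if (dentry->d_op && dentry->d_op->d_revalidate)
                  status = dentry->d_op->d_revalidate(dentry, nd);

          /* after: one branch on flags already in the dentry's hot line */
          if (unlikely(dentry->d_flags & DCACHE_OP_REVALIDATE))
                  status = dentry->d_op->d_revalidate(dentry, nd);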
    • fs: dcache remove d_mounted · 5f57cbcc
      Authored by Nick Piggin
      Rather than keep a d_mounted count in the dentry, set a dentry flag instead.
      The flag can be cleared by checking the hash table to see if there are any
      mounts left, which is not time critical because it is performed at detach time.
      
      The mounted state of a dentry is only used to decide whether to speculatively
      look in the mount hash table -- before following the mount, the vfsmount lock
      is taken and the mount is re-checked without races.
      
      This saves 4 bytes on 32-bit, nothing on 64-bit but it does provide a hole I
      might use later (and some configs have larger than 32-bit spinlocks which might
      make use of the hole).
      
      Autofs4 conversion and changelog by Ian Kent <raven@themaw.net>:
      In autofs4, when expiring direct (or offset) mounts we need to ensure that we
      block user path walks into the autofs mount, which is covered by another mount.
      To do this we clear the mounted status so that follows stop before walking into
      the mount and are essentially blocked until the expire is completed. The
      automount daemon still finds the correct dentry for the umount due to the
      follow mount logic in fs/autofs4/root.c:autofs4_follow_link(), which is set as
      an inode operation for direct and offset mounts only and is called following
      the lookup that stopped at the covered mount.
      
      At the end of the expire the covering mount probably has gone away so the
      mounted status need not be restored. But we need to check this and only restore
      the mounted status if the expire failed.
      
      XXX: autofs may not work right if we have other mounts go over the top of it?
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
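
      The mountpoint test then reduces to a flag check along these lines
      (sketch, using the DCACHE_MOUNTED flag this patch introduces):

          static inline int d_mountpoint(struct dentry *dentry)
          {
                  /* replaces the old 'return dentry->d_mounted' counter test */
                  return dentry->d_flags & DCACHE_MOUNTED;
          }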
    • fs: fs_struct use seqlock · c28cc364
      Authored by Nick Piggin
      Use a seqlock in the fs_struct to enable us to take an atomic copy of the
      complete cwd and root paths. Use this in the RCU lookup path to avoid a
      thread-shared spinlock in RCU lookup operations.
      
      Multi-threaded apps may now perform path lookups with scalability matching
      multi-process apps. Operations such as stat(2) become very scalable for
      multi-threaded workloads.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
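
      The reader side can then snapshot both paths without storing to shared
      state, roughly as follows (a sketch using the seqcount read API;
      snapshot_fs_paths() and the fs->seq field layout are assumptions):

          static void snapshot_fs_paths(struct fs_struct *fs,
                                        struct path *root, struct path *pwd)
          {
                  unsigned seq;

                  do {
                          seq = read_seqcount_begin(&fs->seq);
                          *root = fs->root;       /* copy the whole tuple... */
                          *pwd  = fs->pwd;
                  } while (read_seqcount_retry(&fs->seq, seq));  /* ...retry on race */
          }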
    • fs: rcu-walk for path lookup · 31e6b01f
      Authored by Nick Piggin
      Perform common cases of path lookups without any stores or locking in the
      ancestor dentry elements. This is called rcu-walk, as opposed to the current
      algorithm which is a refcount based walk, or ref-walk.
      
      This results in far fewer atomic operations on every path element,
      significantly improving path lookup performance. It also avoids cacheline
      bouncing on common dentries, significantly improving scalability.
      
      The overall design is like this:
      * LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
      * Take the RCU lock for the entire path walk, starting with the acquiring
        of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
        not required for dentry persistence.
      * synchronize_rcu is called when unregistering a filesystem, so we can
        access d_ops and i_ops during rcu-walk.
      * Similarly take the vfsmount lock for the entire path walk. So now mnt
        refcounts are not required for persistence. Also we are free to perform mount
        lookups, and to assume dentry mount points and mount roots are stable up and
        down the path.
      * Have a per-dentry seqlock to protect the dentry name, parent, and inode,
        so we can load this tuple atomically, and also check whether any of its
        members have changed (one walk step is sketched at the end of this entry).
      * Dentry lookups (based on parent, candidate string tuple) recheck the parent
        sequence after the child is found in case anything changed in the parent
        during the path walk.
      * inode is also RCU protected so we can load d_inode and use the inode for
        limited things.
      * i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
      * i_op can be loaded.
      
      When we reach the destination dentry, we lock it, recheck the lookup sequence,
      and increment its refcount and mountpoint refcount; RCU and vfsmount locks
      are then dropped. This is termed "dropping rcu-walk". If the sequence recheck
      fails, we cannot drop rcu-walk gracefully at the current point in the
      lookup, so instead return -ECHILD (for want of a better errno). This signals the
      path walking code to re-do the entire lookup with a ref-walk.
      
      Aside from the final dentry, there are other situations that may be encountered
      where we cannot continue rcu-walk. In that case, we drop rcu-walk (ie. take
      a reference on the last good dentry) and continue with a ref-walk. Again, if
      we cannot drop rcu-walk gracefully, we return -ECHILD and redo the whole lookup
      using ref-walk. But it is very important that we can continue with ref-walk
      in most cases, particularly to avoid the overhead of double lookups, and to
      gain the scalability advantages on common path elements (like cwd and root).
      
      The cases where rcu-walk cannot continue are:
      * NULL dentry (ie. any uncached path element)
      * parent with d_inode->i_op->permission or ACLs
      * dentries with d_revalidate
      * Following links
      
      In future patches, permission checks and d_revalidate become rcu-walk aware. It
      may be possible eventually to make following links rcu-walk aware.
      
      Uncached path elements will always require dropping to ref-walk mode, at the
      very least because i_mutex needs to be grabbed, and objects allocated.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
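
      The per-element recheck referenced in the list above looks roughly
      like this sketch (names simplified; the real code lives in the
      __d_lookup_rcu and path-walk machinery):

          /* one rcu-walk step, heavily simplified */
          static int step_into_child(struct dentry *parent, unsigned parent_seq,
                                     struct dentry *child, unsigned *child_seq)
          {
                  *child_seq = read_seqcount_begin(&child->d_seq);
                  if (child->d_parent != parent)
                          return -ENOENT;         /* raced with rename/unlink */
                  /* recheck the parent's sequence now that the child is in
                   * hand, in case the parent changed during the walk */
                  if (read_seqcount_retry(&parent->d_seq, parent_seq))
                          return -ECHILD;         /* drop out of rcu-walk */
                  return 0;                       /* continue at the child */
          }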
    • kernel: optimise seqlock · 3c22cd57
      Authored by Nick Piggin
      Add branch annotations for the seqlock read fastpath, and introduce
      __read_seqcount_begin and __read_seqcount_retry functions that can avoid the
      smp_rmb() if used carefully. These will be used by the store-free path walking
      algorithm, where performance is critical and seqlocks are in use.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
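
      Usage of the barrier-free variants looks like this sketch
      (sample_name() is illustrative; the caller must supply ordering or
      re-validate later, which is exactly what rcu-walk does):

          static unsigned sample_name(struct dentry *dentry, struct qstr *out)
          {
                  unsigned seq;

                  do {
                          /* no smp_rmb() in the underscored variants */
                          seq = __read_seqcount_begin(&dentry->d_seq);
                          *out = dentry->d_name;
                  } while (__read_seqcount_retry(&dentry->d_seq, seq));

                  return seq;     /* caller re-validates against this later */
          }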