1. 16 Jan 2011, 1 commit
    • Add a dentry op to handle automounting rather than abusing follow_link() · 9875cf80
      Committed by David Howells
      Add a dentry op (d_automount) to handle automounting directories rather than
      abusing the follow_link() inode operation.  The operation is keyed off a new
      dentry flag (DCACHE_NEED_AUTOMOUNT).
      
      This also makes it easier to add an AT_ flag to suppress terminal segment
      automount during pathwalk and removes the need for the kludge code in the
      pathwalk algorithm to handle directories with follow_link() semantics.
      
      The ->d_automount() dentry operation:
      
      	struct vfsmount *(*d_automount)(struct path *mountpoint);
      
      takes a pointer to the directory to be mounted upon, which is expected to
      provide sufficient data to determine what should be mounted.  If successful, it
      should return the vfsmount struct it creates (which it should also have added
      to the namespace using do_add_mount() or similar).  If there's a collision with
      another automount attempt, NULL should be returned.  If the directory specified
      by the parameter should be used directly rather than being mounted upon,
      -EISDIR should be returned.  In any other case, an error code should be
      returned.
      
      The ->d_automount() operation is called with no locks held and may sleep.  At
      this point the pathwalk algorithm will be in ref-walk mode.
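      
      As a rough illustration of this contract, a filesystem's implementation
      might look like the sketch below (the examplefs_* helpers are
      hypothetical; only the return-value rules come from the description
      above):
      
              static struct vfsmount *examplefs_d_automount(struct path *path)
              {
                      struct vfsmount *mnt;
      
                      /* Use the directory itself rather than mounting on it. */
                      if (!examplefs_wants_automount(path->dentry))
                              return ERR_PTR(-EISDIR);
      
                      /* May sleep: called with no locks held, in ref-walk mode. */
                      mnt = examplefs_build_mount(path);
                      if (IS_ERR(mnt))
                              return mnt;             /* any other error code */
      
                      /* Add the mount to the namespace (do_add_mount() or
                       * similar); on a collision with another automount
                       * attempt, report it by returning NULL. */
                      if (examplefs_add_to_namespace(mnt, path) < 0) {
                              mntput(mnt);
                              return NULL;
                      }
                      return mnt;
              }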
      
      Within fs/namei.c itself, a new pathwalk subroutine (follow_automount()) is
      added to handle mountpoints.  It will return -EREMOTE if the automount flag was
      set, but no d_automount() op was supplied, -ELOOP if we've encountered too many
      symlinks or mountpoints, -EISDIR if the walk point should be used without
mounting, and 0 if successful.  The path will be updated to point to the mounted
      filesystem if a successful automount took place.
      
__follow_mount() is replaced by follow_managed(), which is more generic
      (especially with the patch that adds ->d_manage()).  This handles transits from
      directories during pathwalk, including automounting and skipping over
      mountpoints (and holding processes with the next patch).
      
      __follow_mount_rcu() will jump out of RCU-walk mode if it encounters an
      automount point with nothing mounted on it.
      
      follow_dotdot*() does not handle automounts as you don't want to trigger them
      whilst following "..".
      
      I've also extracted the mount/don't-mount logic from autofs4 and included it
      here.  It makes the mount go ahead anyway if someone calls open() or creat(),
      tries to traverse the directory, tries to chdir/chroot/etc. into the directory,
      or sticks a '/' on the end of the pathname.  If they do a stat(), however,
      they'll only trigger the automount if they didn't also say O_NOFOLLOW.
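      
      In pathwalk terms that policy boils down to a flag test along these
      lines (a sketch only; the exact LOOKUP_* combination is my assumption,
      not quoted from the patch):
      
              /* Only a bare stat()-style lookup may skip the automount; any
               * open/create, traversal, chdir/chroot or trailing-'/' lookup
               * makes the mount go ahead anyway. */
              if (!(nd->flags & (LOOKUP_CONTINUE | LOOKUP_DIRECTORY |
                                 LOOKUP_OPEN | LOOKUP_CREATE | LOOKUP_FOLLOW)))
                      return -EISDIR;         /* use the directory as-is */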
      
      I've also added an inode flag (S_AUTOMOUNT) so that filesystems can mark their
      inodes as automount points.  This flag is automatically propagated to the
      dentry as DCACHE_NEED_AUTOMOUNT by __d_instantiate().  This saves NFS and could
      save AFS a private flag bit apiece, but is not strictly necessary.  It would be
      preferable to do the propagation in d_set_d_op(), but that doesn't normally
      have access to the inode.
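      
      The propagation itself is tiny; in __d_instantiate() it amounts to
      something like this (sketch):
      
              if (inode && IS_AUTOMOUNT(inode))
                      dentry->d_flags |= DCACHE_NEED_AUTOMOUNT;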
      
[AV: fixed breakage in the case where __follow_mount_rcu() fails and nameidata_drop_rcu()
succeeds in the RCU case of do_lookup(); we need to fall through to the non-RCU case
after that, rather than just returning with an ungrabbed *path]
      Signed-off-by: David Howells <dhowells@redhat.com>
      Was-Acked-by: Ian Kent <raven@themaw.net>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      9875cf80
  2. 15 Jan 2011, 1 commit
  3. 13 Jan 2011, 3 commits
  4. 07 Jan 2011, 35 commits
    • fs: implement faster dentry memcmp · 9d55c369
      Committed by Nick Piggin
The standard memcmp function shows up hot in profiles of the `git diff`
workload (both parallel and single threaded) on a Westmere system. This is
likely due to the cost of trapping into microcode, with little opportunity
to improve memory access (a dentry name is unlikely to take up more than a
cacheline).
      
      So replace it with an open-coded byte comparison. This increases code
      size by 8 bytes in the critical __d_lookup_rcu function, but the
      speedup is huge, averaging 10 runs of each:
      
      git diff st   user   sys   elapsed  CPU
      before        1.15   2.57  3.82     97.1
      after         1.14   2.35  3.61     96.8
      
      git diff mt   user   sys   elapsed  CPU
      before        1.27   3.85  1.46     349
      after         1.26   3.54  1.43     333
      
      Elapsed time for single threaded git diff at 95.0% confidence:
              -0.21  +/- 0.01
              -5.45% +/- 0.24%
      
      It's -0.66% +/- 0.06% elapsed time on my Opteron, so rep cmp costs on the
      fam10h seem to be relatively smaller, but there is still a win.
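      
      The open-coded comparison is essentially a byte-at-a-time loop along
      these lines (a sketch of the idea, not the verbatim patch):
      
              static inline int dentry_cmp(const unsigned char *cs, size_t scount,
                                           const unsigned char *ct, size_t tcount)
              {
                      if (scount != tcount)
                              return 1;       /* different lengths cannot match */
                      do {
                              if (*cs != *ct)
                                      return 1;
                              cs++;
                              ct++;
                              tcount--;
                      } while (tcount);       /* dentry names are never empty */
                      return 0;
              }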
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      9d55c369
    • fs: prefetch inode data in dcache lookup · e1bb5782
      Committed by Nick Piggin
This makes single threaded git diff -1.25% +/- 0.05% elapsed time on my
2s12c24t Westmere system, and -0.86% +/- 0.05% on my 2s8c Barcelona, by
prefetching the important first cacheline of the inode while we do the
actual name compare and other operations on the dentry.
      
      There was no measurable slowdown in the single file stat case, or the creat
      case (where negative dentries would be common).
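      
      Conceptually the change is just an early prefetch hint once the inode
      pointer is in hand, e.g. (sketch):
      
              struct inode *inode = dentry->d_inode;
      
              if (likely(inode))
                      prefetch(inode);        /* warm the inode's first cacheline */
              /* ...name compare and other dentry work overlap the fetch... */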
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      e1bb5782
    • fs: improve scalability of pseudo filesystems · 4b936885
      Committed by Nick Piggin
Regardless of how much we possibly try to scale dcache, there is likely
always going to be some fundamental contention when adding or removing
children under the same parent. Pseudo filesystems do not seem to need
connected dentries, because by definition they are disconnected.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      4b936885
    • fs: dcache per-inode inode alias locking · 873feea0
      Committed by Nick Piggin
      dcache_inode_lock can be replaced with per-inode locking. Use existing
      inode->i_lock for this. This is slightly non-trivial because we sometimes
      need to find the inode from the dentry, which requires d_inode to be
      stabilised (either with refcount or d_lock).
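      
      The stabilise-then-lock step looks roughly like this (sketch; the
      trylock-and-retry is one way to avoid inverting the lock order):
      
              spin_lock(&dentry->d_lock);
              inode = dentry->d_inode;        /* pinned while d_lock is held */
              if (inode && !spin_trylock(&inode->i_lock)) {
                      spin_unlock(&dentry->d_lock);
                      goto again;             /* retry rather than deadlock */
              }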
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      873feea0
    • fs: dcache per-bucket dcache hash locking · ceb5bdc2
      Committed by Nick Piggin
      We can turn the dcache hash locking from a global dcache_hash_lock into
      per-bucket locking.
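      
      A per-bucket lock here means a bit-spinlock embedded in each bucket's
      head pointer, used along these lines (sketch):
      
              struct hlist_bl_head *b = d_hash(dentry->d_parent,
                                               dentry->d_name.hash);
      
              hlist_bl_lock(b);               /* bit 0 of the head pointer */
              __hlist_bl_del(&dentry->d_hash);
              hlist_bl_unlock(b);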
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      ceb5bdc2
    • fs: cache optimise dentry and inode for rcu-walk · 44a7d7a8
      Committed by Nick Piggin
Put the hot dentry and inode fields at the top of each data structure.  This
allows RCU path traversal to perform an RCU dentry lookup in a path walk by
touching only the first 56 bytes of the dentry.

We also fit 8 bytes of inline name in the first 64 bytes, so for short
names, only 64 bytes need to be touched to perform the lookup. We should
get rid of the hash->prev pointer from the first 64 bytes, and fit 16 bytes
of name in there, which would cover 81% rather than 32% of the names in the
kernel tree.
      
      inode is also rearranged so that RCU lookup will only touch a single cacheline
      in the inode, plus one in the i_ops structure.
      
This is important for directory component lookups in RCU path walking. In the
kernel source, directory name length averages around 6 characters, so this works.
      
      When we reach the last element of the lookup, we need to lock it and take its
      refcount which requires another cacheline access.
      
      Align dentry and inode operations structs, so members will be at predictable
      offsets and we can group common operations into head of structure.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      44a7d7a8
    • fs: dcache reduce branches in lookup path · fb045adb
      Committed by Nick Piggin
      Reduce some branches and memory accesses in dcache lookup by adding dentry
      flags to indicate common d_ops are set, rather than having to check them.
      This saves a pointer memory access (dentry->d_op) in common path lookup
      situations, and saves another pointer load and branch in cases where we
      have d_op but not the particular operation.
      
      Patched with:
      
      git grep -E '[.>]([[:space:]])*d_op([[:space:]])*=' | xargs sed -e 's/\([^\t ]*\)->d_op = \(.*\);/d_set_d_op(\1, \2);/' -e 's/\([^\t ]*\)\.d_op = \(.*\);/d_set_d_op(\&\1, \2);/' -i
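      
      The effect of d_set_d_op() is to cache "which ops exist" in d_flags,
      roughly (sketch):
      
              dentry->d_op = op;
              if (op && op->d_hash)
                      dentry->d_flags |= DCACHE_OP_HASH;
              if (op && op->d_compare)
                      dentry->d_flags |= DCACHE_OP_COMPARE;
              if (op && op->d_revalidate)
                      dentry->d_flags |= DCACHE_OP_REVALIDATE;
              if (op && op->d_delete)
                      dentry->d_flags |= DCACHE_OP_DELETE;
      
      The lookup fast path then tests d_flags, which it has already loaded,
      instead of chasing dentry->d_op.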
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      fb045adb
    • fs: dcache remove d_mounted · 5f57cbcc
      Committed by Nick Piggin
      Rather than keep a d_mounted count in the dentry, set a dentry flag instead.
      The flag can be cleared by checking the hash table to see if there are any
      mounts left, which is not time critical because it is performed at detach time.
      
      The mounted state of a dentry is only used to speculatively take a look in the
      mount hash table if it is set -- before following the mount, vfsmount lock is
      taken and mount re-checked without races.
      
      This saves 4 bytes on 32-bit, nothing on 64-bit but it does provide a hole I
      might use later (and some configs have larger than 32-bit spinlocks which might
      make use of the hole).
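      
      A sketch of the resulting transit check (d_mountpoint() now tests the
      dentry flag; the mount hash lookup is re-checked under the vfsmount
      lock as described above):
      
              if (d_mountpoint(path->dentry)) {       /* speculative flag test */
                      struct vfsmount *mounted = lookup_mnt(path);
                      if (mounted) {
                              /* transit into the mounted filesystem */
                      }
              }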
      
      Autofs4 conversion and changelog by Ian Kent <raven@themaw.net>:
In autofs4, when expiring direct (or offset) mounts we need to ensure that we
      block user path walks into the autofs mount, which is covered by another mount.
      To do this we clear the mounted status so that follows stop before walking into
      the mount and are essentially blocked until the expire is completed. The
      automount daemon still finds the correct dentry for the umount due to the
      follow mount logic in fs/autofs4/root.c:autofs4_follow_link(), which is set as
      an inode operation for direct and offset mounts only and is called following
      the lookup that stopped at the covered mount.
      
At the end of the expire the covering mount has probably gone away, so the
mounted status need not be restored. But we need to check this, and only
restore the mounted status if the expire failed.
      
      XXX: autofs may not work right if we have other mounts go over the top of it?
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      5f57cbcc
    • fs: rcu-walk for path lookup · 31e6b01f
      Committed by Nick Piggin
      Perform common cases of path lookups without any stores or locking in the
      ancestor dentry elements. This is called rcu-walk, as opposed to the current
      algorithm which is a refcount based walk, or ref-walk.
      
      This results in far fewer atomic operations on every path element,
      significantly improving path lookup performance. It also avoids cacheline
      bouncing on common dentries, significantly improving scalability.
      
      The overall design is like this:
      * LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
      * Take the RCU lock for the entire path walk, starting with the acquiring
        of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
        not required for dentry persistence.
      * synchronize_rcu is called when unregistering a filesystem, so we can
        access d_ops and i_ops during rcu-walk.
      * Similarly take the vfsmount lock for the entire path walk. So now mnt
        refcounts are not required for persistence. Also we are free to perform mount
        lookups, and to assume dentry mount points and mount roots are stable up and
        down the path.
      * Have a per-dentry seqlock to protect the dentry name, parent, and inode,
        so we can load this tuple atomically, and also check whether any of its
        members have changed.
      * Dentry lookups (based on parent, candidate string tuple) recheck the parent
        sequence after the child is found in case anything changed in the parent
        during the path walk.
      * inode is also RCU protected so we can load d_inode and use the inode for
        limited things.
      * i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
      * i_op can be loaded.
      
      When we reach the destination dentry, we lock it, recheck lookup sequence,
      and increment its refcount and mountpoint refcount. RCU and vfsmount locks
      are dropped. This is termed "dropping rcu-walk". If the dentry refcount does
not match, we cannot drop rcu-walk gracefully at the current point in the
lookup, so we instead return -ECHILD (for want of a better errno). This signals the
      path walking code to re-do the entire lookup with a ref-walk.
      
Aside from the final dentry, there are other situations that may be encountered
where we cannot continue rcu-walk. In that case, we drop rcu-walk (ie. take
      a reference on the last good dentry) and continue with a ref-walk. Again, if
      we can drop rcu-walk gracefully, we return -ECHILD and do the whole lookup
      using ref-walk. But it is very important that we can continue with ref-walk
      for most cases, particularly to avoid the overhead of double lookups, and to
      gain the scalability advantages on common path elements (like cwd and root).
      
      The cases where rcu-walk cannot continue are:
      * NULL dentry (ie. any uncached path element)
      * parent with d_inode->i_op->permission or ACLs
      * dentries with d_revalidate
      * Following links
      
      In future patches, permission checks and d_revalidate become rcu-walk aware. It
      may be possible eventually to make following links rcu-walk aware.
      
      Uncached path elements will always require dropping to ref-walk mode, at the
      very least because i_mutex needs to be grabbed, and objects allocated.
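      
      The per-dentry seqlock makes the (name, parent, inode) tuple checkable
      without stores, roughly (sketch):
      
              unsigned seq;
      
              seq = read_seqcount_begin(&dentry->d_seq);
              /* ...load and use d_name / d_parent / d_inode... */
              if (read_seqcount_retry(&dentry->d_seq, seq))
                      return -ECHILD; /* tuple changed: redo as ref-walk */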
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      31e6b01f
    • fs: consolidate dentry kill sequence · 77812a1e
      Committed by Nick Piggin
      The tricky locking for disposing of a dentry is duplicated 3 times in the
      dcache (dput, pruning a dentry from the LRU, and pruning its ancestors).
      Consolidate them all into a single function dentry_kill.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      77812a1e
    • ec33679d
    • fs: reduce dcache_inode_lock width in lru scanning · be182bff
      Committed by Nick Piggin
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      be182bff
    • fs: dcache reduce prune_one_dentry locking · 89e60548
      Committed by Nick Piggin
prune_one_dentry can avoid quite a bit of locking in the common case where
ancestors have an elevated refcount. Alternatively, we could have gone the
other way and made fewer trylocks in the case where d_count goes to zero,
but that case is probably less common.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      89e60548
    • fs: dcache reduce d_parent locking · a734eb45
      Committed by Nick Piggin
      Use RCU to simplify locking in dget_parent.
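      
      A sketch of the pattern (close to, but not quoted from, the patch;
      d_count is protected by d_lock as per the refcount patch below):
      
      repeat:
              rcu_read_lock();
              parent = rcu_dereference(dentry->d_parent);
              spin_lock(&parent->d_lock);
              if (unlikely(parent != dentry->d_parent)) {
                      spin_unlock(&parent->d_lock);   /* raced with rename */
                      rcu_read_unlock();
                      goto repeat;
              }
              rcu_read_unlock();
              parent->d_count++;
              spin_unlock(&parent->d_lock);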
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      a734eb45
    • fs: dcache rationalise dget variants · dc0474be
      Committed by Nick Piggin
dget_locked was a shortcut to avoid the lazy lru manipulation when we already
held dcache_lock (lru manipulation was relatively cheap at that point).
However, now that the lru lock is an innermost one, we never hold it at any
caller, so the lock cost can now be avoided. We already have a well-working
lazy dcache LRU, so it should be fine to defer LRU manipulations to scan time.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      dc0474be
    • fs: dcache reduce dcache_inode_lock · 357f8e65
      Committed by Nick Piggin
      dcache_inode_lock can be avoided in d_delete() and d_materialise_unique()
      in cases where it is not required.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      357f8e65
    • fs: dcache reduce locking in d_alloc · 89ad485f
      Committed by Nick Piggin
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      89ad485f
    • fs: dcache reduce dput locking · 61f3dee4
      Committed by Nick Piggin
      It is possible to run dput without taking data structure locks up-front. In
      many cases where we don't kill the dentry anyway, these locks are not required.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      61f3dee4
    • fs: dcache avoid starvation in dcache multi-step operations · 58db63d0
      Committed by Nick Piggin
Long-lived dcache "multi-step" operations which retry on the rename seqlock
can be starved by heavy rename activity. If they fail after the first pass,
take the rename_lock for writing to avoid further starvation.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      58db63d0
    • fs: dcache remove dcache_lock · b5c84bf6
      Committed by Nick Piggin
dcache_lock no longer protects anything. Remove it.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      b5c84bf6
    • fs: Use rename lock and RCU for multi-step operations · 949854d0
      Committed by Nick Piggin
The remaining use for dcache_lock is to allow atomic, multi-step read-side
operations over the directory tree by excluding modifications to the tree.
It is also used to walk in the leaf->root direction in the tree, where we
don't have a natural d_lock ordering.
      
      This could be accomplished by taking every d_lock, but this would mean a
      huge number of locks and actually gets very tricky.
      
Solve this instead by using the rename seqlock for multi-step read-side
operations, retrying in case of a rename so we don't walk up the wrong parent.
We do not serialise against concurrent dentry insertions.  Concurrent deletes
are tricky when walking up the directory: our parent might have been deleted
while we dropped locks, so we also need to check and retry for that.
      
We can also use the rename lock in cases where livelock is a worry (this
use is introduced in a subsequent patch).
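      
      The read side is a seqlock retry loop, e.g. (sketch of the pattern):
      
              unsigned seq;
      
              do {
                      seq = read_seqbegin(&rename_lock);
                      /* multi-step walk, e.g. leaf -> root via d_parent,
                       * taking only per-dentry d_locks as needed */
              } while (read_seqretry(&rename_lock, seq));
      
      (The starvation fix above takes write_seqlock(&rename_lock) on the
      second pass instead of retrying forever.)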
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      949854d0
    • fs: increase d_name lock coverage · 9abca360
      Committed by Nick Piggin
      Cover d_name with d_lock in more cases, where there may be concurrent
      modification to it.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      9abca360
    • fs: scale inode alias list · b23fb0a6
      Committed by Nick Piggin
      Add a new lock, dcache_inode_lock, to protect the inode's i_dentry list
      from concurrent modification. d_alias is also protected by d_lock.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      b23fb0a6
    • fs: dcache scale subdirs · 2fd6b7f5
      Committed by Nick Piggin
      Protect d_subdirs and d_child with d_lock, except in filesystems that aren't
      using dcache_lock for these anyway (eg. using i_mutex).
      
      Note: if we change the locking rule in future so that ->d_child protection is
      provided only with ->d_parent->d_lock, it may allow us to reduce some locking.
      But it would be an exception to an otherwise regular locking scheme, so we'd
      have to see some good results. Probably not worthwhile.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      2fd6b7f5
    • fs: dcache scale d_unhashed · da502956
      Committed by Nick Piggin
Protect the d_unhashed(dentry) condition with d_lock. This means keeping the
DCACHE_UNHASHED bit in sync with hash manipulations.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      da502956
    • fs: dcache scale dentry refcount · b7ab39f6
      Committed by Nick Piggin
Make d_count non-atomic and protect it with d_lock. This allows us to ensure
a zero-refcount dentry remains at zero without dcache_lock. It is also fairly
natural when we start protecting many other dentry members with d_lock.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      b7ab39f6
    • fs: dcache scale lru · 23044507
      Committed by Nick Piggin
      Add a new lock, dcache_lru_lock, to protect the dcache LRU list from concurrent
      modification. d_lru is also protected by d_lock, which allows LRU lists to be
      accessed without the lru lock, using RCU in future patches.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      23044507
    • fs: dcache scale hash · 789680d1
      Committed by Nick Piggin
      Add a new lock, dcache_hash_lock, to protect the dcache hash table from
      concurrent modification. d_hash is also protected by d_lock.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      789680d1
    • hostfs: simplify locking · ec2447c2
      Committed by Nick Piggin
      Remove dcache_lock locking from hostfs filesystem, and move it into dcache
      helpers. All that is required is a coherent path name. Protection from
      concurrent modification of the namespace after path name generation is not
      provided in current code, because dcache_lock is dropped before the path is
      used.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      ec2447c2
    • fs: change d_hash for rcu-walk · b1e6a015
      Committed by Nick Piggin
      Change d_hash so it may be called from lock-free RCU lookups. See similar
      patch for d_compare for details.
      
      For in-tree filesystems, this is just a mechanical change.
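      
      For reference, the post-change prototype should be (reconstructed from
      this series, so treat it as an assumption):
      
              int (*d_hash)(const struct dentry *dentry, const struct inode *inode,
                            struct qstr *name);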
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      b1e6a015
    • fs: change d_compare for rcu-walk · 621e155a
      Committed by Nick Piggin
      Change d_compare so it may be called from lock-free RCU lookups. This
      does put significant restrictions on what may be done from the callback,
      however there don't seem to have been any problems with in-tree fses.
      If some strange use case pops up that _really_ cannot cope with the
      rcu-walk rules, we can just add new rcu-unaware callbacks, which would
      cause name lookup to drop out of rcu-walk mode.
      
      For in-tree filesystems, this is just a mechanical change.
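      
      For reference, the post-change prototype should be (again
      reconstructed, so treat the exact argument list as an assumption):
      
              int (*d_compare)(const struct dentry *parent, const struct inode *pinode,
                               const struct dentry *dentry, const struct inode *inode,
                               unsigned int len, const char *str,
                               const struct qstr *name);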
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      621e155a
    • fs: name case update method · fb2d5b86
      Committed by Nick Piggin
smbfs and ncpfs want to update a live dentry name in-place. Rather than
have them open-code the locking, provide a documented dcache API.
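      
      The API is a single helper; a hedged usage sketch (the constraints in
      the comment are my reading of the series, not quoted from it):
      
              /* Parent directory locked; name->len must equal
               * dentry->d_name.len (an in-place case change only). */
              dentry_update_name_case(dentry, &name);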
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      fb2d5b86
    • fs: change d_delete semantics · fe15ce44
      Committed by Nick Piggin
Change d_delete from a dentry deletion notification to a dentry caching
advice, more like ->drop_inode. Require it to be constant and idempotent,
      and not take d_lock. This is how all existing filesystems use the callback
      anyway.
      
      This makes fine grained dentry locking of dput and dentry lru scanning
      much simpler.
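      
      Under the new rules a d_delete callback reduces to a pure predicate,
      e.g. (hypothetical examplefs):
      
              static int examplefs_d_delete(const struct dentry *dentry)
              {
                      /* Constant and idempotent: never cache unused dentries. */
                      return 1;
              }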
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      fe15ce44
    • fs: use fast counters for vfs caches · 3e880fb5
      Committed by Nick Piggin
The percpu_counter library generates quite nasty code, so unless you need
to dynamically allocate counters or take a fast approximate value, a simple
per-cpu set of counters is much better.

percpu_counter can never be made to work as well, because it has an
indirection from a pointer to the percpu memory, and it can't use the direct
this_cpu_inc interfaces because it doesn't use static PER_CPU data, so the
code will always be worse.
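      
      The "simple per-cpu set of counters" pattern is (sketch, close to what
      the patch does for nr_dentry):
      
              static DEFINE_PER_CPU(unsigned int, nr_dentry);
      
              /* fastpath: compiles to the single incl shown below */
              this_cpu_inc(nr_dentry);
      
              /* slowpath, reporting only: sum over all CPUs */
              static int get_nr_dentry(void)
              {
                      int i, sum = 0;
      
                      for_each_possible_cpu(i)
                              sum += per_cpu(nr_dentry, i);
                      return sum < 0 ? 0 : sum;
              }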
      
      In the fastpath, it is the difference between this:
      
              incl %gs:nr_dentry      # nr_dentry
      
      and this:
      
              movl    percpu_counter_batch(%rip), %edx        # percpu_counter_batch,
              movl    $1, %esi        #,
              movq    $nr_dentry, %rdi        #,
              call    __percpu_counter_add    # (plus I clobber registers)
      
      __percpu_counter_add:
              pushq   %rbp    #
              movq    %rsp, %rbp      #,
              subq    $32, %rsp       #,
              movq    %rbx, -24(%rbp) #,
              movq    %r12, -16(%rbp) #,
              movq    %r13, -8(%rbp)  #,
              movq    %rdi, %rbx      # fbc, fbc
      #APP
      # 216 "/home/npiggin/usr/src/linux-2.6/arch/x86/include/asm/thread_info.h" 1
              movq %gs:kernel_stack,%rax      #, pfo_ret__
      # 0 "" 2
      #NO_APP
              incl    -8124(%rax)     # <variable>.preempt_count
              movq    32(%rdi), %r12  # <variable>.counters, tcp_ptr__
      #APP
      # 78 "lib/percpu_counter.c" 1
              add %gs:this_cpu_off, %r12      # this_cpu_off, tcp_ptr__
      # 0 "" 2
      #NO_APP
              movslq  (%r12),%r13     #* tcp_ptr__, tmp73
              movslq  %edx,%rax       # batch, batch
              addq    %rsi, %r13      # amount, count
              cmpq    %rax, %r13      # batch, count
              jge     .L27    #,
              negl    %edx    # tmp76
              movslq  %edx,%rdx       # tmp76, tmp77
              cmpq    %rdx, %r13      # tmp77, count
              jg      .L28    #,
      .L27:
              movq    %rbx, %rdi      # fbc,
              call    _raw_spin_lock  #
              addq    %r13, 8(%rbx)   # count, <variable>.count
              movq    %rbx, %rdi      # fbc,
              movl    $0, (%r12)      #,* tcp_ptr__
              call    _raw_spin_unlock        #
      .L29:
      #APP
      # 216 "/home/npiggin/usr/src/linux-2.6/arch/x86/include/asm/thread_info.h" 1
              movq %gs:kernel_stack,%rax      #, pfo_ret__
      # 0 "" 2
      #NO_APP
              decl    -8124(%rax)     # <variable>.preempt_count
              movq    -8136(%rax), %rax       #, D.14625
              testb   $8, %al #, D.14625
              jne     .L32    #,
      .L31:
              movq    -24(%rbp), %rbx #,
              movq    -16(%rbp), %r12 #,
              movq    -8(%rbp), %r13  #,
              leave
              ret
              .p2align 4,,10
              .p2align 3
      .L28:
              movl    %r13d, (%r12)   # count,*
              jmp     .L29    #
      .L32:
              call    preempt_schedule        #
              .p2align 4,,6
              jmp     .L31    #
              .size   __percpu_counter_add, .-__percpu_counter_add
              .p2align 4,,15
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      3e880fb5
    • vfs: revert per-cpu nr_unused counters for dentry and inodes · 86c8749e
      Committed by Nick Piggin
      The nr_unused counters count the number of objects on an LRU, and as such they
      are synchronized with LRU object insertion and removal and scanning, and
      protected under the LRU lock.
      
Making them per-cpu does not actually get any concurrency improvement,
because of this lock, while summing the counter becomes much slower, and
incrementing/decrementing it costs more code size and is slower too.
      
      These counters should stay per-LRU, which currently means global.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      86c8749e