1. 16 April 2015 (1 commit)
    • VFS: Impose ordering on accesses of d_inode and d_flags · 4bf46a27
      Committed by David Howells
      Impose ordering on accesses of d_inode and d_flags to avoid the need to do
      this:
      
      	if (!dentry->d_inode || d_is_negative(dentry)) {
      
      when this:
      
      	if (d_is_negative(dentry)) {
      
      should suffice.
      
      This check is especially problematic if a dentry can have its type field set
      to something other than DCACHE_MISS_TYPE when d_inode is NULL (as in
      unionmount).
      
      What we really need to do is stick a write barrier between setting d_inode and
      setting d_flags and a read barrier between reading d_flags and reading
      d_inode.
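      A minimal sketch of the pairing this describes, assuming the kernel's
      smp_wmb()/smp_rmb() primitives and the DCACHE_ENTRY_TYPE mask; the helper
      names here are illustrative, not necessarily the ones the commit adds:
      
      	/* Writer: publish ->d_inode before the type bits in ->d_flags. */
      	static inline void set_inode_and_type(struct dentry *dentry,
      					      struct inode *inode,
      					      unsigned type_flags)
      	{
      		dentry->d_inode = inode;
      		smp_wmb();		/* order d_inode before d_flags */
      		dentry->d_flags = (dentry->d_flags & ~DCACHE_ENTRY_TYPE) |
      				  type_flags;
      	}
      
      	/* Reader: fetch d_flags, then d_inode; the smp_rmb() pairs with the
      	 * smp_wmb() above, so a non-miss type implies d_inode is visible. */
      	static inline struct inode *inode_if_positive(struct dentry *dentry)
      	{
      		unsigned flags = READ_ONCE(dentry->d_flags);
      
      		smp_rmb();
      		return (flags & DCACHE_ENTRY_TYPE) ? dentry->d_inode : NULL;
      	}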
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  2. 12 April 2015 (1 commit)
    • dcache: return -ESTALE not -EBUSY on distributed fs race · 3d330dc1
      Committed by J. Bruce Fields
      On a distributed filesystem it's possible for lookup to discover that a
      directory it just found is already cached elsewhere in the directory
      hierarchy.  The dcache won't let us keep the directory in both places,
      so we have to move the dentry to the new location from the place we
      previously had it cached.
      
      If the parent has changed, then this requires all the same locks as we'd
      need to do a cross-directory rename.  But we're already in lookup
      holding one parent's i_mutex, so it's too late to acquire those locks in
      the right order.
      
      The (unreliable) solution in __d_unalias is to trylock() the required
      locks and return -EBUSY if it fails.
      
      I see no particular reason for returning -EBUSY, and -ESTALE is already
      the result of some other lookup races on NFS.  I think -ESTALE is the
      more helpful error return.  It also allows us to take advantage of the
      logic Jeff Layton added in c6a94284 "vfs: fix renameat to retry on
      ESTALE errors" and ancestors, which hopefully resolves some of these
      errors before they're returned to userspace.
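      The substance of the change is a fragment like the following in
      __d_unalias() (a sketch, simplified; s_vfs_rename_mutex is the lock that
      lock_rename() would take for a cross-directory move):
      
      	if (!mutex_trylock(&dentry->d_sb->s_vfs_rename_mutex))
      		return -ESTALE;	/* was -EBUSY; lets callers retry the lookup */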
      
      I can reproduce these cases using NFS with:
      
      	ssh root@$client '
      		mount -olookupcache=pos '$server':'$export' /mnt/
      		mkdir /mnt/TO
      		mkdir /mnt/DIR
      		touch /mnt/DIR/test.txt
      		while true; do
      			strace -e open cat /mnt/DIR/test.txt 2>&1 | grep EBUSY
      		done
      	'
      	ssh root@$server '
      		while true; do
      			mv $export/DIR $export/TO/DIR
      			mv $export/TO/DIR $export/DIR
      		done
      	'
      
      It also helps to add some other concurrent use of the directory on the
      client (e.g., "ls /mnt/TO").  And you can replace the server-side mv's
      by client-side mv's that are repeatedly killed.  (If the client is
      interrupted while waiting for the RENAME response then it's left with a
      dentry that has to go under one parent or the other, but it doesn't yet
      know which.)
      Acked-by: Jeff Layton <jlayton@primarydata.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  3. 23 February 2015 (2 commits)
  4. 14 February 2015 (1 commit)
    • fs: dcache: manually unpoison dname after allocation to shut up kasan's reports · df4c0e36
      Committed by Andrey Ryabinin
      We need to manually unpoison the rounded-up allocation size for dname to
      avoid kasan's reports in dentry_string_cmp().  When
      CONFIG_DCACHE_WORD_ACCESS=y, dentry_string_cmp() may access a few bytes
      beyond the size requested in kmalloc().
      
      dentry_string_cmp() relies on the fact that dentries are allocated using
      kmalloc() and that kmalloc internally rounds up the allocation size.  So
      this is not a bug, but it makes kasan complain about such accesses.  To
      avoid such reports we mark the rounded-up allocation size in the shadow
      as accessible.
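      A sketch of the fix in __d_alloc(), assuming the kasan_unpoison_shadow()
      helper from the kasan patch set (placement simplified):
      
      	dname = kmalloc(name->len + 1, GFP_KERNEL);
      	if (!dname)
      		return NULL;
      	/* kmalloc rounds the allocation up to a word multiple anyway;
      	 * mark that rounded-up tail as readable in kasan's shadow, since
      	 * dentry_string_cmp() compares whole words at a time. */
      	kasan_unpoison_shadow(dname,
      			      round_up(name->len + 1, sizeof(unsigned long)));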
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 13 February 2015 (2 commits)
    • list_lru: add helpers to isolate items · 3f97b163
      Committed by Vladimir Davydov
      Currently, the isolate callback passed to the list_lru_walk family of
      functions is supposed to just delete an item from the list upon returning
      LRU_REMOVED or LRU_REMOVED_RETRY, while the nr_items counter is fixed up
      by __list_lru_walk_one after the callback returns.  Since the callback is
      allowed to drop the lock after removing an item (it has to return
      LRU_REMOVED_RETRY then), nr_items can be less than the actual number
      of elements on the list even if we check them under the lock.  This makes
      it difficult to move items from one list_lru_one to another, which is
      required for per-memcg list_lru reparenting - we can't just splice the
      lists, we have to move entries one by one.
      
      This patch therefore introduces helpers that must be used by callback
      functions to isolate items instead of raw list_del/list_move.  These are
      list_lru_isolate and list_lru_isolate_move.  They not only remove the
      entry from the list, but also fix the nr_items counter, making sure
      nr_items always reflects the actual number of elements on the list if
      checked under the appropriate lock.
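      A sketch of an isolate callback using the new helpers (the callback
      signature is the one this patch introduces; demo_isolate() itself is
      hypothetical):
      
      	static enum lru_status demo_isolate(struct list_head *item,
      					    struct list_lru_one *lru,
      					    spinlock_t *lru_lock, void *arg)
      	{
      		struct list_head *freeable = arg;
      
      		/* unlike a raw list_move(), this also decrements
      		 * lru->nr_items, keeping the counter exact under lru_lock */
      		list_lru_isolate_move(lru, item, freeable);
      		return LRU_REMOVED;
      	}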
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • list_lru: introduce list_lru_shrink_{count,walk} · 503c358c
      Committed by Vladimir Davydov
      Kmem accounting of memcg is unusable now, because it lacks slab shrinker
      support.  That means when we hit the limit we will get ENOMEM w/o any
      chance to recover.  What we should do then is to call shrink_slab, which
      would reclaim old inode/dentry caches from this cgroup.  This is what
      this patch set is intended to do.
      
      Basically, it does two things.  First, it introduces the notion of
      per-memcg slab shrinker.  A shrinker that wants to reclaim objects per
      cgroup should mark itself as SHRINKER_MEMCG_AWARE.  Then it will be
      passed the memory cgroup to scan from in shrink_control->memcg.  For
      such shrinkers shrink_slab iterates over the whole cgroup subtree under
      the target cgroup and calls the shrinker for each kmem-active memory
      cgroup.
      
      Secondly, this patch set makes the list_lru structure per-memcg.  It's
      done transparently to list_lru users - all they have to do is tell
      list_lru_init that they want a memcg-aware list_lru.  Then the
      list_lru will automatically distribute objects among per-memcg lists
      based on which cgroup the object is accounted to.  This way, to make FS
      shrinkers (icache, dcache) memcg-aware we only need to make them use a
      memcg-aware list_lru, and this is what this patch set does.
      
      As before, this patch set only enables per-memcg kmem reclaim when the
      pressure goes from memory.limit, not from memory.kmem.limit.  Handling
      memory.kmem.limit is going to be tricky due to GFP_NOFS allocations, and
      it is still unclear whether we will have this knob in the unified
      hierarchy.
      
      This patch (of 9):
      
      NUMA aware slab shrinkers use the list_lru structure to distribute
      objects coming from different NUMA nodes to different lists.  Whenever
      such a shrinker needs to count or scan objects from a particular node,
      it issues commands like this:
      
              count = list_lru_count_node(lru, sc->nid);
              freed = list_lru_walk_node(lru, sc->nid, isolate_func,
                                         isolate_arg, &sc->nr_to_scan);
      
      where sc is an instance of the shrink_control structure passed to it
      from vmscan.
      
      To simplify this, let's add special list_lru functions to be used by
      shrinkers, list_lru_shrink_count() and list_lru_shrink_walk(), which
      consolidate the nid and nr_to_scan arguments in the shrink_control
      structure.
      
      This will also allow us to avoid patching shrinkers that use list_lru
      when we make shrink_slab() per-memcg - all we will have to do is extend
      the shrink_control structure to include the target memcg and make
      list_lru_shrink_{count,walk} handle this appropriately.
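      A sketch of the two wrappers as the description implies (modulo the
      exact in-tree form):
      
      	static inline unsigned long
      	list_lru_shrink_count(struct list_lru *lru, struct shrink_control *sc)
      	{
      		return list_lru_count_node(lru, sc->nid);
      	}
      
      	static inline unsigned long
      	list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
      			     list_lru_walk_cb isolate, void *cb_arg)
      	{
      		return list_lru_walk_node(lru, sc->nid, isolate, cb_arg,
      					  &sc->nr_to_scan);
      	}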
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Suggested-by: Dave Chinner <david@fromorbit.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 26 January 2015 (2 commits)
  7. 20 November 2014 (4 commits)
  8. 04 November 2014 (2 commits)
  9. 24 October 2014 (1 commit)
    • fix inode leaks on d_splice_alias() failure exits · 51486b90
      Committed by Al Viro
      d_splice_alias() callers expect it to either stash the inode reference
      into a new alias, or drop the inode reference.  That makes it possible
      to just return d_splice_alias() result from ->lookup() instance, without
      any extra housekeeping required.
      
      Unfortunately, that should include the failure exits.  If d_splice_alias()
      returns an error, it leaves the dentry it has been given negative and
      thus it *must* drop the inode reference.  Easily fixed, but it goes way
      back and will need backporting.
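      With the fix, the intended calling pattern in a ->lookup() instance
      looks like this (a sketch; demo_find_inode() is a hypothetical
      filesystem-specific helper):
      
      	static struct dentry *demo_lookup(struct inode *dir,
      					  struct dentry *dentry,
      					  unsigned int flags)
      	{
      		struct inode *inode = demo_find_inode(dir, &dentry->d_name);
      
      		/* on success d_splice_alias() stashes the reference in an
      		 * alias; on failure it now drops it, so no iput() here */
      		return d_splice_alias(inode, dentry);
      	}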
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  10. 13 October 2014 (1 commit)
  11. 09 October 2014 (9 commits)
    • dcache: Fix no spaces at the start of a line in dcache.c · b8314f93
      Committed by Daeseok Youn
      Fixed coding style in dcache.c
      Signed-off-by: Daeseok Youn <daeseok.youn@gmail.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • dcache.c: call ->d_prune() regardless of d_unhashed() · 29266201
      Committed by Al Viro
      The only in-tree instance checks d_unhashed() anyway;
      out-of-tree code can preserve the current behaviour by
      adding such a check if it wants it, and we gain the ability
      to use ->d_prune() in cases where we *want* to be notified
      that killing is inevitable before ->d_lock is dropped,
      whether the dentry is unhashed or not.  In particular, autofs
      would benefit from that.
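      A sketch of what such an out-of-tree instance would add to keep the old
      semantics (demo_d_prune() is hypothetical):
      
      	static void demo_d_prune(struct dentry *dentry)
      	{
      		/* pre-change behaviour: skip already-unhashed dentries */
      		if (d_unhashed(dentry))
      			return;
      		/* ... actual pruning work ... */
      	}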
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • d_prune_alias(): just lock the parent and call __dentry_kill() · 29355c39
      Committed by Al Viro
      The only reason for games with ->d_prune() was __d_drop(), which
      was needed only to force dput() into killing the sucker off.
      
      Note that lock_parent() can be called under ->i_lock and won't
      drop it, so dentry is safe from somebody managing to kill it
      under us - it won't happen while we are holding ->i_lock.
      
      __dentry_kill() is called only with ->d_lockref.count being 0
      (here and when picked from shrink list) or 1 (dput() and dropping
      the ancestors in shrink_dentry_list()), so it will never be called
      twice - the first thing it's doing is making ->d_lockref.count
      negative and once that happens, nothing will increment it.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: Make d_invalidate return void · 5542aa2f
      Committed by Eric W. Biederman
      Now that d_invalidate can no longer fail, stop returning a useless
      return code.  For the few callers that checked the return code,
      remove the handling of d_invalidate failure.
      Reviewed-by: Miklos Szeredi <miklos@szeredi.hu>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: Merge check_submounts_and_drop and d_invalidate · 1ffe46d1
      Committed by Eric W. Biederman
      Now that d_invalidate is the only caller of check_submounts_and_drop,
      expand check_submounts_and_drop inline in d_invalidate.
      Reviewed-by: Miklos Szeredi <miklos@szeredi.hu>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: Lazily remove mounts on unlinked files and directories. · 8ed936b5
      Committed by Eric W. Biederman
      With the introduction of mount namespaces and bind mounts it became
      possible to access files and directories that on some paths are mount
      points but are not mount points on other paths.  It is very confusing
      when rm -rf somedir returns -EBUSY simply because somedir is mounted
      somewhere else.  With the addition of user namespaces allowing
      unprivileged mounts this condition has gone from annoying to allowing
      a DOS attack on other users in the system.
      
      The possibility for mischief is removed by updating the vfs to support
      rename, unlink and rmdir on a dentry that is a mountpoint and by
      lazily unmounting mountpoints on deleted dentries.
      
      In particular this change allows rename, unlink and rmdir system calls
      on a dentry without a mountpoint in the current mount namespace to
      succeed, and it allows rename, unlink, and rmdir performed on a
      distributed filesystem to update the vfs cache even when there is a
      mount in some namespace on the original dentry.
      
      There are two common patterns of maintaining mounts: Mounts on trusted
      paths with the parent directory of the mount point and all ancestor
      directories up to / owned by root and modifiable only by root
      (i.e. /media/xxx, /dev, /dev/pts, /proc, /sys, /sys/fs/cgroup/{cpu,
      cpuacct, ...}, /usr, /usr/local).  Mounts on unprivileged directories
      maintained by fusermount.
      
      In the case of mounts in trusted directories owned by root and
      modifiable only by root the current parent directory permissions are
      sufficient to ensure a mount point on a trusted path is not removed
      or renamed by anyone other than root, even if there is a context
      where there are no mount points to prevent this.
      
      In the case of mounts in directories owned by less privileged users,
      races with users modifying the path of a mount point are already a
      danger.  fusermount already uses a combination of chdir,
      /proc/<pid>/fd/NNN, and UMOUNT_NOFOLLOW to prevent these races.  The
      removal of global rename, unlink, and rmdir protection really adds
      nothing new to consider, only a widening of the attack window, and
      fusermount is already safe against unprivileged users modifying the
      directory simultaneously.
      
      In principle for perfect userspace programs returning -EBUSY for
      unlink, rmdir, and rename of dentries that have mounts in the local
      namespace is actually unnecessary.  Unfortunately not all userspace
      programs are perfect so retaining -EBUSY for unlink, rmdir and rename
      of dentries that have mounts in the current mount namespace plays an
      important role of maintaining consistency with historical behavior and
      making imperfect userspace applications hard to exploit.
      
      v2: Remove spurious old_dentry.
      v3: Optimized shrink_submounts_and_drop
          Removed unused afs label
      v4: Simplified the changes to check_submounts_and_drop
          Do not rename check_submounts_and_drop shrink_submounts_and_drop
          Document why we need atomicity in check_submounts_and_drop
          Rely on the parent inode mutex to make d_revalidate and d_invalidate
          an atomic unit.
      v5: Refcount the mountpoint to detach in case of simultaneous
          renames.
      Reviewed-by: Miklos Szeredi <miklos@szeredi.hu>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: More precise tests in d_invalidate · bafc9b75
      Committed by Eric W. Biederman
      The current comments in d_invalidate about what and why it is doing
      what it is doing are wildly off-base, which is not surprising, as
      the comments date back to a last-minute bug fix of the 2.2 kernel.
      
      The big fat lie of a comment said: If it's a directory, we can't drop
      it for fear of somebody re-populating it with children (even though
      dropping it would make it unreachable from that root, we still might
      repopulate it if it was a working directory or similar).
      
      [AV] What we really need to avoid is multiple dentry aliases of the
      same directory inode; on all filesystems that have ->d_revalidate()
      we either declare all positive dentries always valid (and thus never
      fed to d_invalidate()) or use d_materialise_unique() and/or d_splice_alias(),
      which take care of alias prevention.
      
      The current rules are:
      - To prevent mount point leaks, dentries that are mount points or that
        have children that are mount points may not be unhashed.
      - All other dentries may be unhashed.
      - Directories may be rehashed with d_materialise_unique
      
      check_submounts_and_drop implements this already for well maintained
      remote filesystems, so implement the current rules in d_invalidate
      by just calling check_submounts_and_drop.
      
      The one difference between d_invalidate and check_submounts_and_drop
      is that d_invalidate must respect an earlier d_drop from a
      d_revalidate method, so preserve the d_unhashed check in
      d_invalidate.
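      With that, d_invalidate reduces to roughly this (a sketch; at this point
      in the series it still returns int):
      
      	int d_invalidate(struct dentry *dentry)
      	{
      		/* respect an earlier d_drop() from ->d_revalidate() */
      		spin_lock(&dentry->d_lock);
      		if (d_unhashed(dentry)) {
      			spin_unlock(&dentry->d_lock);
      			return 0;
      		}
      		spin_unlock(&dentry->d_lock);
      
      		return check_submounts_and_drop(dentry);
      	}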
      Reviewed-by: Miklos Szeredi <miklos@szeredi.hu>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: Document the effect of d_revalidate on d_find_alias · 3ccb354d
      Committed by Eric W. Biederman
      d_drop or check_submounts_and_drop called from d_revalidate can result
      in renamed directories with child dentries being unhashed.  These
      renamed and dropped directory dentries can be rehashed after
      d_materialise_unique uses d_find_alias to find them.
      Reviewed-by: Miklos Szeredi <miklos@szeredi.hu>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • Allow sharing external names after __d_move() · 8d85b484
      Committed by Al Viro
      * external dentry names get a small structure prepended to them
      (struct external_name).
      * it contains an atomic refcount, matching the number of struct dentry
      instances that have ->d_name.name pointing to that external name.  The
      first thing free_dentry() does is decrementing refcount of external name,
      so the instances that are between the call of free_dentry() and
      RCU-delayed actual freeing do not contribute.
      * __d_move(x, y, false) makes the name of x equal to the name of y,
      external or not.  If y has an external name, extra reference is grabbed
      and put into x->d_name.name.  If x used to have an external name, the
      reference to the old name is dropped and, should it reach zero, freeing
      is scheduled via kfree_rcu().
      * free_dentry() on a dentry with an external name decrements the refcount of
      that name and, should it reach zero, does RCU-delayed call that will
      free both the dentry and external name.  Otherwise it does what it
      used to do, except that __d_free() doesn't even look at ->d_name.name;
      it simply frees the dentry.
      
      All non-RCU accesses to dentry external name are safe wrt freeing since they
      all should happen before free_dentry() is called.  RCU accesses might run
      into a dentry seen by free_dentry() or into an old name that got already
      dropped by __d_move(); however, in both cases dentry must have been
      alive and refer to that name at some point after we'd done rcu_read_lock(),
      which means that any freeing must be still pending.
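      A sketch of the prepended structure as described above (the in-tree
      version unions the refcount with the RCU head, since the two are never
      needed at the same time):
      
      	struct external_name {
      		union {
      			atomic_t count;		/* dentries using ->name */
      			struct rcu_head head;	/* for RCU-delayed free */
      		} u;
      		unsigned char name[];		/* NUL-terminated name */
      	};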
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  12. 30 September 2014 (1 commit)
    • missing data dependency barrier in prepend_name() · 6d13f694
      Committed by Al Viro
      AFAICS, prepend_name() is broken on SMP alpha.  Disclaimer: I don't have
      SMP alpha boxen to reproduce it on.  However, it really looks like the race
      is real.
      
      CPU1: d_path() on /mnt/ramfs/<255-character>/foo
      CPU2: mv /mnt/ramfs/<255-character> /mnt/ramfs/<63-character>
      
      CPU2 does d_alloc(), which allocates an external name, stores the name there
      including terminating NUL, does smp_wmb() and stores its address in
      dentry->d_name.name.  It proceeds to d_add(dentry, NULL) and d_move()
      old dentry over to that.  ->d_name.name value ends up in that dentry.
      
      In the meanwhile, CPU1 gets to prepend_name() for that dentry.  It fetches
      ->d_name.name and ->d_name.len; the former ends up pointing to new name
      (64-byte kmalloc'ed array), the latter - 255 (length of the old name).
      Nothing to force the ordering there, and normally that would be OK, since we'd
      run into the terminating NUL and stop.  Except that it's alpha, and we'd need
      a data dependency barrier to guarantee that we see that store of NUL
      __d_alloc() has done.  In a similar situation dentry_cmp() would survive; it
      does explicit smp_read_barrier_depends() after fetching ->d_name.name.
      prepend_name() doesn't and it risks walking past the end of kmalloc'ed object
      and possibly oops due to taking a page fault in kernel mode.
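      The fix mirrors what dentry_cmp() already does; a sketch of the fetches
      at the top of prepend_name() (ACCESS_ONCE being the idiom of this era):
      
      	const char *dname = ACCESS_ONCE(name->name);
      	u32 dlen = ACCESS_ONCE(name->len);
      
      	/* pairs with the smp_wmb() in __d_alloc(): never read name bytes
      	 * (including the terminating NUL) older than the pointer */
      	smp_read_barrier_depends();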
      
      Cc: stable@vger.kernel.org # 3.12+
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  13. 28 September 2014 (2 commits)
    • vfs: Don't exchange "short" filenames unconditionally. · d2fa4a84
      Committed by Mikhail Efremov
      Only exchange source and destination filenames
      if flags contain RENAME_EXCHANGE.
      If a running executable was replaced by another file,
      /proc/PID/exe should still show the correct file name,
      not the old name of the file that replaced it.
      
      The scenario when this bug manifests itself was like this:
      * ALT Linux uses rpm and start-stop-daemon;
      * during a package upgrade rpm creates a temporary file
        for an executable to rename it upon successful unpacking;
      * start-stop-daemon is run subsequently and it obtains
        the (nonexistent) temporary filename via /proc/PID/exe
        thus failing to identify the running process.
      
      Note that "long" filenames (> DNAiME_INLINE_LEN) are still
      exchanged without RENAME_EXCHANGE and this behaviour exists
      long enough (should be fixed too apparently).
      So this patch is just an interim workaround that restores
      behavior for "short" names as it was before changes
      introduced by commit da1ce067 ("vfs: add cross-rename").
      
      See https://lkml.org/lkml/2014/9/7/6 for details.
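      The gist of the change in switch_names() for the case where both names
      are inline (a sketch; the real code swaps word-by-word and also has to
      handle external names, and swap_inline_names() is a hypothetical helper):
      
      	if (!exchange) {
      		/* plain rename: the moved dentry takes over the target's
      		 * name, the target keeps its own soon-to-die name */
      		memcpy(dentry->d_iname, target->d_name.name,
      		       target->d_name.len + 1);
      		dentry->d_name.len = target->d_name.len;
      	} else {
      		/* RENAME_EXCHANGE: genuinely swap the two inline names */
      		swap_inline_names(dentry, target);
      	}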
      
      AV: the comments about being more careful with ->d_name.hash
      than with ->d_name.name are from back in 2.3.40s; they
      became obsolete by 2.3.60s, when we started to unhash the
      target instead of swapping hash chain positions followed
      by d_delete() as we used to do when dcache was first
      introduced.
      Acked-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: stable@vger.kernel.org
      Fixes: da1ce067 "vfs: add cross-rename"
      Signed-off-by: Mikhail Efremov <sem@altlinux.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • fold swapping ->d_name.hash into switch_names() · a28ddb87
      Committed by Linus Torvalds
      and do it along with ->d_name.len there
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  14. 27 September 2014 (6 commits)
  15. 14 September 2014 (2 commits)
    • move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon) · 6f18493e
      Committed by Al Viro
      and lock the right list there
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: fix bad hashing of dentries · 99d263d4
      Committed by Linus Torvalds
      Josef Bacik found a performance regression between 3.2 and 3.10 and
      narrowed it down to commit bfcfaa77 ("vfs: use 'unsigned long'
      accesses for dcache name comparison and hashing"). He reports:
      
       "The test case is essentially
      
            for (i = 0; i < 1000000; i++)
                    mkdir("a$i");
      
        On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
        dir/sec with 3.10.  This is because we spend waaaaay more time in
        __d_lookup on 3.10 than in 3.2.
      
        The new hashing function for strings is suboptimal for <
        sizeof(unsigned long) string names (and hell even > sizeof(unsigned
        long) string names that I've tested).  I broke out the old hashing
        function and the new one into a userspace helper to get real numbers
        and this is what I'm getting:
      
            Old hash table had 1000000 entries, 0 dupes, 0 max dupes
            New hash table had 12628 entries, 987372 dupes, 900 max dupes
            We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
      
        My test does the hash, and then does the d_hash into an integer pointer
        array the same size as the dentry hash table on my system, and then
        just increments the value at the address we got to see how many
        entries we overlap with.
      
        As you can see the old hash function ended up with all 1 million
        entries in their own bucket, whereas the new one they are only
        distributed among ~12.5k buckets, which is why we're using so much
        more CPU in __d_lookup".
      
      The reason for this hash regression is two-fold:
      
       - On 64-bit architectures the down-mixing of the original 64-bit
         word-at-a-time hash into the final 32-bit hash value is very
         simplistic and suboptimal, and just adds the two 32-bit parts
         together.
      
         In particular, because there is no bit shuffling and the mixing
         boundary is also a byte boundary, similar character patterns in the
         low and high word easily end up just canceling each other out.
      
       - the old byte-at-a-time hash mixed each byte into the final hash as it
         hashed the path component name, resulting in the low bits of the hash
         generally being a good source of hash data.  That is not true for the
         word-at-a-time case, and the hash data is distributed among all the
         bits.
      
      The fix is the same in both cases: do a better job of mixing the bits up
      and using as much of the hash data as possible.  We already have the
      "hash_32|64()" functions to do that.
      Reported-by: Josef Bacik <jbacik@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: linux-fsdevel@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  16. 08 August 2014 (3 commits)