1. 18 Aug 2018, 2 commits
  2. 04 Aug 2018, 2 commits
    • new helper: inode_fake_hash() · 5bef9151
      Committed by Al Viro
      open-coded in quite a few places...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      5bef9151
    • new primitive: discard_new_inode() · c2b6d621
      Committed by Al Viro
      	We don't want open-by-handle picking a half-set-up in-core
      struct inode from e.g. a mkdir() that failed halfway through.
      In other words, we don't want such inodes returned by iget_locked()
      on their way to extinction.  However, we can't just have them
      unhashed - otherwise open-by-handle immediately *after* that would've
      ended up creating a new in-core inode over the on-disk one that
      is in the process of being freed right under us.
      
      	Solution: new flag (I_CREATING) set by insert_inode_locked() and
      removed by unlock_new_inode() and a new primitive (discard_new_inode())
      to be used by such halfway-through-setup failure exits instead of
      unlock_new_inode() / iput() combinations.  That primitive unlocks the
      new inode, but leaves I_CREATING in place.
      
      	iget_locked() treats finding an I_CREATING inode as failure
      (-ESTALE, once we sort out the error propagation).
      	insert_inode_locked() treats the same as instant -EBUSY.
      	ilookup() treats those as an icache miss.
      
      [Fix by Dan Carpenter <dan.carpenter@oracle.com> folded in]
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      c2b6d621
  3. 12 Jul 2018, 5 commits
  4. 11 Jul 2018, 2 commits
  5. 09 Jul 2018, 1 commit
  6. 29 Jun 2018, 1 commit
    • Revert changes to convert to ->poll_mask() and aio IOCB_CMD_POLL · a11e1d43
      Committed by Linus Torvalds
      The poll() changes were not well thought out, and completely
      unexplained.  They also caused a huge performance regression, because
      "->poll()" was no longer a trivial file operation that just called down
      to the underlying file operations, but instead did at least two indirect
      calls.
      
      Indirect calls are sadly slow now with the Spectre mitigation, but the
      performance problem could at least be largely mitigated by changing the
      "->get_poll_head()" operation to just have a per-file-descriptor pointer
      to the poll head instead.  That gets rid of one of the new indirections.
      
      But that doesn't fix the new complexity that is completely unwarranted
      for the regular case.  The (undocumented) reason for the poll() changes
      was some alleged AIO poll race fixing, but we don't make the common case
      slower and more complex for some uncommon special case, so this all
      really needs way more explanations and most likely a fundamental
      redesign.
      
      [ This revert is a revert of about 30 different commits, not reverted
        individually because that would just be unnecessarily messy  - Linus ]
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a11e1d43
  7. 18 Jun 2018, 1 commit
  8. 06 Jun 2018, 1 commit
    • vfs: change inode times to use struct timespec64 · 95582b00
      Committed by Deepa Dinamani
      struct timespec is not y2038 safe. Transition vfs to use
      y2038 safe struct timespec64 instead.
      
      The change was made with the help of the following Coccinelle
      script. This catches about 80% of the changes.
      All the header file and logic changes are included in the
      first 5 rules. The rest are trivial substitutions.
      I avoid changing any of the function signatures or any other
      filesystem specific data structures to keep the patch simple
      for review.
      
      The script could be a little shorter by combining different cases,
      but this version was sufficient for my use case.
      
      virtual patch
      
      @ depends on patch @
      identifier now;
      @@
      - struct timespec
      + struct timespec64
        current_time ( ... )
        {
      - struct timespec now = current_kernel_time();
      + struct timespec64 now = current_kernel_time64();
        ...
      - return timespec_trunc(
      + return timespec64_trunc(
        ... );
        }
      
      @ depends on patch @
      identifier xtime;
      @@
       struct \( iattr \| inode \| kstat \) {
       ...
      -       struct timespec xtime;
      +       struct timespec64 xtime;
       ...
       }
      
      @ depends on patch @
      identifier t;
      @@
       struct inode_operations {
       ...
      int (*update_time) (...,
      -       struct timespec t,
      +       struct timespec64 t,
      ...);
       ...
       }
      
      @ depends on patch @
      identifier t;
      identifier fn_update_time =~ "update_time$";
      @@
       fn_update_time (...,
      - struct timespec *t,
      + struct timespec64 *t,
       ...) { ... }
      
      @ depends on patch @
      identifier t;
      @@
      lease_get_mtime( ... ,
      - struct timespec *t
      + struct timespec64 *t
        ) { ... }
      
      @te depends on patch forall@
      identifier ts;
      local idexpression struct inode *inode_node;
      identifier i_xtime =~ "^i_[acm]time$";
      identifier ia_xtime =~ "^ia_[acm]time$";
      identifier fn_update_time =~ "update_time$";
      identifier fn;
      expression e, E3;
      local idexpression struct inode *node1;
      local idexpression struct inode *node2;
      local idexpression struct iattr *attr1;
      local idexpression struct iattr *attr2;
      local idexpression struct iattr attr;
      identifier i_xtime1 =~ "^i_[acm]time$";
      identifier i_xtime2 =~ "^i_[acm]time$";
      identifier ia_xtime1 =~ "^ia_[acm]time$";
      identifier ia_xtime2 =~ "^ia_[acm]time$";
      @@
      (
      (
      - struct timespec ts;
      + struct timespec64 ts;
      |
      - struct timespec ts = current_time(inode_node);
      + struct timespec64 ts = current_time(inode_node);
      )
      
      <+... when != ts
      (
      - timespec_equal(&inode_node->i_xtime, &ts)
      + timespec64_equal(&inode_node->i_xtime, &ts)
      |
      - timespec_equal(&ts, &inode_node->i_xtime)
      + timespec64_equal(&ts, &inode_node->i_xtime)
      |
      - timespec_compare(&inode_node->i_xtime, &ts)
      + timespec64_compare(&inode_node->i_xtime, &ts)
      |
      - timespec_compare(&ts, &inode_node->i_xtime)
      + timespec64_compare(&ts, &inode_node->i_xtime)
      |
      ts = current_time(e)
      |
      fn_update_time(..., &ts,...)
      |
      inode_node->i_xtime = ts
      |
      node1->i_xtime = ts
      |
      ts = inode_node->i_xtime
      |
      <+... attr1->ia_xtime ...+> = ts
      |
      ts = attr1->ia_xtime
      |
      ts.tv_sec
      |
      ts.tv_nsec
      |
      btrfs_set_stack_timespec_sec(..., ts.tv_sec)
      |
      btrfs_set_stack_timespec_nsec(..., ts.tv_nsec)
      |
      - ts = timespec64_to_timespec(
      + ts =
      ...
      -)
      |
      - ts = ktime_to_timespec(
      + ts = ktime_to_timespec64(
      ...)
      |
      - ts = E3
      + ts = timespec_to_timespec64(E3)
      |
      - ktime_get_real_ts(&ts)
      + ktime_get_real_ts64(&ts)
      |
      fn(...,
      - ts
      + timespec64_to_timespec(ts)
      ,...)
      )
      ...+>
      (
      <... when != ts
      - return ts;
      + return timespec64_to_timespec(ts);
      ...>
      )
      |
      - timespec_equal(&node1->i_xtime1, &node2->i_xtime2)
      + timespec64_equal(&node1->i_xtime1, &node2->i_xtime2)
      |
      - timespec_equal(&node1->i_xtime1, &attr2->ia_xtime2)
      + timespec64_equal(&node1->i_xtime1, &attr2->ia_xtime2)
      |
      - timespec_compare(&node1->i_xtime1, &node2->i_xtime2)
      + timespec64_compare(&node1->i_xtime1, &node2->i_xtime2)
      |
      node1->i_xtime1 =
      - timespec_trunc(attr1->ia_xtime1,
      + timespec64_trunc(attr1->ia_xtime1,
      ...)
      |
      - attr1->ia_xtime1 = timespec_trunc(attr2->ia_xtime2,
      + attr1->ia_xtime1 = timespec64_trunc(attr2->ia_xtime2,
      ...)
      |
      - ktime_get_real_ts(&attr1->ia_xtime1)
      + ktime_get_real_ts64(&attr1->ia_xtime1)
      |
      - ktime_get_real_ts(&attr.ia_xtime1)
      + ktime_get_real_ts64(&attr.ia_xtime1)
      )
      
      @ depends on patch @
      struct inode *node;
      struct iattr *attr;
      identifier fn;
      identifier i_xtime =~ "^i_[acm]time$";
      identifier ia_xtime =~ "^ia_[acm]time$";
      expression e;
      @@
      (
      - fn(node->i_xtime);
      + fn(timespec64_to_timespec(node->i_xtime));
      |
       fn(...,
      - node->i_xtime);
      + timespec64_to_timespec(node->i_xtime));
      |
      - e = fn(attr->ia_xtime);
      + e = fn(timespec64_to_timespec(attr->ia_xtime));
      )
      
      @ depends on patch forall @
      struct inode *node;
      struct iattr *attr;
      identifier i_xtime =~ "^i_[acm]time$";
      identifier ia_xtime =~ "^ia_[acm]time$";
      identifier fn;
      @@
      {
      + struct timespec ts;
      <+...
      (
      + ts = timespec64_to_timespec(node->i_xtime);
      fn (...,
      - &node->i_xtime,
      + &ts,
      ...);
      |
      + ts = timespec64_to_timespec(attr->ia_xtime);
      fn (...,
      - &attr->ia_xtime,
      + &ts,
      ...);
      )
      ...+>
      }
      
      @ depends on patch forall @
      struct inode *node;
      struct iattr *attr;
      struct kstat *stat;
      identifier ia_xtime =~ "^ia_[acm]time$";
      identifier i_xtime =~ "^i_[acm]time$";
      identifier xtime =~ "^[acm]time$";
      identifier fn, ret;
      @@
      {
      + struct timespec ts;
      <+...
      (
      + ts = timespec64_to_timespec(node->i_xtime);
      ret = fn (...,
      - &node->i_xtime,
      + &ts,
      ...);
      |
      + ts = timespec64_to_timespec(node->i_xtime);
      ret = fn (...,
      - &node->i_xtime);
      + &ts);
      |
      + ts = timespec64_to_timespec(attr->ia_xtime);
      ret = fn (...,
      - &attr->ia_xtime,
      + &ts,
      ...);
      |
      + ts = timespec64_to_timespec(attr->ia_xtime);
      ret = fn (...,
      - &attr->ia_xtime);
      + &ts);
      |
      + ts = timespec64_to_timespec(stat->xtime);
      ret = fn (...,
      - &stat->xtime);
      + &ts);
      )
      ...+>
      }
      
      @ depends on patch @
      struct inode *node;
      struct inode *node2;
      identifier i_xtime1 =~ "^i_[acm]time$";
      identifier i_xtime2 =~ "^i_[acm]time$";
      identifier i_xtime3 =~ "^i_[acm]time$";
      struct iattr *attrp;
      struct iattr *attrp2;
      struct iattr attr ;
      identifier ia_xtime1 =~ "^ia_[acm]time$";
      identifier ia_xtime2 =~ "^ia_[acm]time$";
      struct kstat *stat;
      struct kstat stat1;
      struct timespec64 ts;
      identifier xtime =~ "^[acmb]time$";
      expression e;
      @@
      (
      ( node->i_xtime2 \| attrp->ia_xtime2 \| attr.ia_xtime2 \) = node->i_xtime1  ;
      |
       node->i_xtime2 = \( node2->i_xtime1 \| timespec64_trunc(...) \);
      |
       node->i_xtime2 = node->i_xtime1 = node->i_xtime3 = \(ts \| current_time(...) \);
      |
       node->i_xtime1 = node->i_xtime3 = \(ts \| current_time(...) \);
      |
       stat->xtime = node2->i_xtime1;
      |
       stat1.xtime = node2->i_xtime1;
      |
      ( node->i_xtime2 \| attrp->ia_xtime2 \) = attrp->ia_xtime1  ;
      |
      ( attrp->ia_xtime1 \| attr.ia_xtime1 \) = attrp2->ia_xtime2;
      |
      - e = node->i_xtime1;
      + e = timespec64_to_timespec( node->i_xtime1 );
      |
      - e = attrp->ia_xtime1;
      + e = timespec64_to_timespec( attrp->ia_xtime1 );
      |
      node->i_xtime1 = current_time(...);
      |
       node->i_xtime2 = node->i_xtime1 = node->i_xtime3 =
      - e;
      + timespec_to_timespec64(e);
      |
       node->i_xtime1 = node->i_xtime3 =
      - e;
      + timespec_to_timespec64(e);
      |
      - node->i_xtime1 = e;
      + node->i_xtime1 = timespec_to_timespec64(e);
      )
      Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
      Cc: <anton@tuxera.com>
      Cc: <balbi@kernel.org>
      Cc: <bfields@fieldses.org>
      Cc: <darrick.wong@oracle.com>
      Cc: <dhowells@redhat.com>
      Cc: <dsterba@suse.com>
      Cc: <dwmw2@infradead.org>
      Cc: <hch@lst.de>
      Cc: <hirofumi@mail.parknet.co.jp>
      Cc: <hubcap@omnibond.com>
      Cc: <jack@suse.com>
      Cc: <jaegeuk@kernel.org>
      Cc: <jaharkes@cs.cmu.edu>
      Cc: <jslaby@suse.com>
      Cc: <keescook@chromium.org>
      Cc: <mark@fasheh.com>
      Cc: <miklos@szeredi.hu>
      Cc: <nico@linaro.org>
      Cc: <reiserfs-devel@vger.kernel.org>
      Cc: <richard@nod.at>
      Cc: <sage@redhat.com>
      Cc: <sfrench@samba.org>
      Cc: <swhiteho@redhat.com>
      Cc: <tj@kernel.org>
      Cc: <trond.myklebust@primarydata.com>
      Cc: <tytso@mit.edu>
      Cc: <viro@zeniv.linux.org.uk>
      95582b00
  9. 31 May 2018, 3 commits
  10. 29 May 2018, 1 commit
  11. 26 May 2018, 2 commits
  12. 21 May 2018, 1 commit
  13. 18 May 2018, 1 commit
  14. 14 May 2018, 1 commit
  15. 01 May 2018, 1 commit
    • fasync: Fix deadlock between task-context and interrupt-context kill_fasync() · 7a107c0f
      Committed by Kirill Tkhai
      I observed the following deadlock between task-context and interrupt-context kill_fasync():
      
      [task 1]                          [task 2]                         [task 3]
      kill_fasync()                     mm_update_next_owner()           copy_process()
       spin_lock_irqsave(&fa->fa_lock)   read_lock(&tasklist_lock)        write_lock_irq(&tasklist_lock)
        send_sigio()                    <IRQ>                             ...
         read_lock(&fown->lock)         kill_fasync()                     ...
          read_lock(&tasklist_lock)      spin_lock_irqsave(&fa->fa_lock)  ...
      
      Task 1 can't acquire the read-locked tasklist_lock, since task 3
      has already expressed its wish to take the lock exclusively.
      Task 2 holds the lock for reading, but it can't take the spin lock.
      
      There is also another possible deadlock (which I haven't observed):
      
      [task 1]                            [task 2]
      f_getown()                          kill_fasync()
       read_lock(&f_own->lock)             spin_lock_irqsave(&fa->fa_lock,)
       <IRQ>                               send_sigio()                     write_lock_irq(&f_own->lock)
        kill_fasync()                       read_lock(&fown->lock)
         spin_lock_irqsave(&fa->fa_lock,)
      
      Actually, we do not need an exclusive fa->fa_lock in kill_fasync_rcu(),
      as it only guarantees the integrity of fa->fa_file->f_owner. It may
      seem that this used to give a task a small chance of receiving two
      sequential signals when there are two parallel kill_fasync() callers
      and the task handles the first signal quickly, but the behaviour does
      not change, since there is an exclusive sighand lock in do_send_sig_info().
      
      The patch converts fa_lock into an rwlock_t, and this fixes the two
      deadlocks above, as an rwlock is allowed to be taken from an interrupt
      handler by qrwlock design.
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      7a107c0f
  16. 12 Apr 2018, 1 commit
  17. 10 Apr 2018, 1 commit
    • vfs: Remove the const from dir_context::actor · a09acf4b
      Committed by David Howells
      Remove the const marking from the actor function pointer in the dir_context
      struct.  The const prevents the structure from being used as part of a
      kmalloc'd object, as it makes the compiler require that the actor member be
      set at object initialisation time (or not at all), incurring something like
      the following error if you try to set it later:
      
      	fs/afs/dir.c:556:20: error: assignment of read-only member 'actor'
      
      Marking the member const like this adds very little in the way of sanity
      checking, as the type checking system is likely to be sufficient - and
      if not, the kernel is very likely to oops repeatedly in this case.
      
      Fixes: ac6614b7 ("[readdir] constify ->actor")
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Al Viro <viro@zeniv.linux.org.uk>
      a09acf4b
  18. 31 Mar 2018, 1 commit
  19. 28 Mar 2018, 1 commit
  20. 23 Mar 2018, 1 commit
  21. 19 Mar 2018, 2 commits
    • buffer.c: call thaw_super during emergency thaw · 08fdc8a0
      Committed by Mateusz Guzik
      There are two distinct freezing mechanisms - one operates on block
      devices and another directly on super blocks. Both end up with the
      same result, but thawing only one of them does not thaw the other.
      
      In particular, fsfreeze --freeze uses the ioctl variant, which goes to
      the super block. Since prior to this patch emergency thaw did not
      perform the corresponding super-block thaw, filesystems frozen with
      this method remained unaffected.
      
      The patch is a hack which adds blind unfreezing.
      
      In order to keep the super block write-locked the whole time the code
      is shuffled around and the newly introduced __iterate_supers is
      employed.
      Signed-off-by: Mateusz Guzik <mguzik@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      08fdc8a0
    • vfs: make sure struct filename->iname is word-aligned · 1c949843
      Committed by Rasmus Villemoes
      I noticed that offsetof(struct filename, iname) is actually 28 on 64
      bit platforms, so we always pass an unaligned pointer to
      strncpy_from_user. This is mostly a problem for those 64 bit platforms
      without HAVE_EFFICIENT_UNALIGNED_ACCESS, but even on x86_64, unaligned
      accesses carry a penalty.
      
      A user-space microbenchmark doing nothing but strncpy_from_user from the
      same (aligned) source string runs about 5% faster when the destination
      is aligned. That number increases to 20% when the string is long
      enough (~32 bytes) that we cross a cache line boundary - that's for
      example the case for about half the files a "git status" in a kernel
      tree ends up stat'ing.
      
      This won't make any real-life workloads 5%, or even 1%, faster, but path
      lookup is common enough that cutting even a few cycles should be
      worthwhile. So ensure we always pass an aligned destination pointer to
      strncpy_from_user. Instead of explicit padding, simply swap the refcnt
      and aname members, as suggested by Al Viro.
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      1c949843
  22. 16 Mar 2018, 1 commit
    • fs: Teach path_connected to handle nfs filesystems with multiple roots. · 95dd7758
      Committed by Eric W. Biederman
      On nfsv2 and nfsv3 the nfs server can export subsets of the same
      filesystem and report the same filesystem identifier, so that the nfs
      client can know they are the same filesystem.  The subsets can be from
      disjoint directory trees.  The nfsv2 and nfsv3 protocols provide no
      way to find the common root of all directory trees exported from the
      server with the same filesystem identifier.
      
      The practical result is that for nfs, s_root in struct super_block is
      not necessarily the root of the filesystem.  The nfs mount code sets
      s_root to the root of the first subset of the nfs filesystem that the
      kernel mounts.
      
      This affects the dcache invalidation code in generic_shutdown_super,
      currently called shrink_dcache_for_umount, and for years that code
      has gone through an additional list of dentries that might be dentry
      trees needing to be freed to accommodate nfs.
      
      When I wrote path_connected I did not realize nfs was so special, and
      its heuristic for avoiding calling is_subdir can fail.
      
      The practical case where this fails is when a directory is moved from
      the subtree exposed by one nfs mount to the subtree exposed by another
      nfs mount.  This move can happen either locally or remotely.  The
      remote case requires that the moved directory be cached before the
      move, and that after the move someone walks the path to where the
      moved directory now exists, in so doing causing the already cached
      directory to be moved in the dcache through the magic of
      d_splice_alias.
      
      If someone whose working directory is in the moved directory or one of
      its subdirectories now starts walking up via ".." from the initial
      mount of nfs (where s_root == mnt_root), then path_connected as a
      heuristic will not bother with the is_subdir check.  As s_root really
      is not the root of the nfs filesystem, this heuristic is wrong; the
      path may actually not be connected, and path_connected can fail.
      
      The is_subdir function might be cheap enough that we can call it
      unconditionally.  Verifying that will take some benchmarking and
      the result may not be the same on all kernels this fix needs
      to be backported to.  So I am avoiding that for now.
      
      Filesystems with snapshots such as nilfs and btrfs do something
      similar.  But as the directory trees of the snapshots are disjoint
      from one another and from the main directory tree, rename won't move
      things between them, so this problem will not occur.
      
      Cc: stable@vger.kernel.org
      Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
      Fixes: 397d425d ("vfs: Test for and handle paths that are unreachable from their mnt_root")
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      95dd7758
  23. 13 Mar 2018, 2 commits
  24. 27 Feb 2018, 1 commit
  25. 29 Jan 2018, 3 commits
    • xfs: only grab shared inode locks for source file during reflink · 01c2e13d
      Committed by Darrick J. Wong
      Reflink and dedupe operations remap blocks from a source file into a
      destination file.  The destination file needs exclusive locks on all
      levels because we're updating its block map, but the source file isn't
      undergoing any block map changes so we can use a shared lock.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      01c2e13d
    • fs: handle inode->i_version more efficiently · f02a9ad1
      Committed by Jeff Layton
      Since i_version is mostly treated as an opaque value, we can exploit that
      fact to avoid incrementing it when no one is watching. With that change,
      we can avoid incrementing the counter on writes unless someone has
      queried for it since it was last incremented. If the a/c/mtime don't
      change and the i_version hasn't changed, then there's no need to dirty
      the inode metadata on a write.
      
      Convert the i_version counter to an atomic64_t, and use the lowest order
      bit to hold a flag that will tell whether anyone has queried the value
      since it was last incremented.
      
      When we go to maybe increment it, we fetch the value and check the flag
      bit.  If it's clear then we don't need to do anything if the update
      isn't being forced.
      
      If we do need to update, then we increment the counter by 2 and clear
      the flag bit, and use a CAS op to swap the new value into place. If that
      works, we return true. If it doesn't, we try again with the value
      fetched by the CAS operation.
      
      On the query side, if the flag is already set, then we just shift the
      value down by 1 bit and return it. Otherwise, we set the flag in our
      on-stack value and again use cmpxchg to swap it into place if it hasn't
      changed. If it has, then we use the value from the cmpxchg as the new
      "old" value and try again.
      
      This method allows us to avoid incrementing the counter on writes (and
      dirtying the metadata) under typical workloads. We only need to increment
      if it has been queried since it was last changed.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Acked-by: Dave Chinner <dchinner@redhat.com>
      Tested-by: Krzysztof Kozlowski <krzk@kernel.org>
      f02a9ad1
    • fs: new API for handling inode->i_version · ae5e165d
      Committed by Jeff Layton
      Add a documentation blob that explains what the i_version field is, how
      it is expected to work, and how it is currently implemented by various
      filesystems.
      
      We already have inode_inc_iversion. Add several other functions for
      manipulating and accessing the i_version counter. For now, the
      implementation is trivial and basically works the way that all of the
      open-coded i_version accesses work today.
      
      Future patches will convert existing users of i_version to use the new
      API, and then convert the backend implementation to do things more
      efficiently.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      ae5e165d
  26. 26 Jan 2018, 1 commit