1. 24 Oct 2014, 1 commit
  2. 08 Aug 2014, 1 commit
    • death to mnt_pinned · 3064c356
      Committed by Al Viro
      Rather than playing silly buggers with vfsmount refcounts, just have
      acct_on() ask fs/namespace.c for an internal clone of file->f_path.mnt
      and replace it with said clone.  Then attach the pin to the original
      vfsmount.  Voila - the clone will be alive until the file gets closed,
      making sure that underlying superblock remains active, etc., and
      we can drop the original vfsmount, so that it's not kept busy.
      If the file lives until the final mntput of the original vfsmount,
      we'll notice that there's an fs_pin (one in bsd_acct_struct that
      holds that file) and mnt_pin_kill() will take it out.  Since
      ->kill() is synchronous, we won't proceed past that point until
      these files are closed (and private clones of our vfsmount are
      gone), so we get the same ordering guarantees we used to get.
      
      mnt_pin()/mnt_unpin()/->mnt_pinned is gone now, and good riddance -
      it never became usable outside of kernel/acct.c (and was racy wrt
      umount even there).
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
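      
      A minimal sketch of that flow, condensed from kernel/acct.c after this
      patch (error handling trimmed; helper names such as pin_insert() follow
      the fs_pin API of that series and may differ in detail):
      
          /* simplified acct_on(): swap in a private clone of the mount,
           * pin the original, then let go of it */
          static int acct_on_sketch(struct filename *pathname)
          {
              struct file *file;
              struct vfsmount *mnt, *internal;
              struct bsd_acct_struct *acct;
      
              acct = kzalloc(sizeof(*acct), GFP_KERNEL);
              if (!acct)
                  return -ENOMEM;
      
              file = file_open_name(pathname, O_WRONLY | O_APPEND | O_LARGEFILE, 0);
              if (IS_ERR(file)) {
                  kfree(acct);
                  return PTR_ERR(file);
              }
      
              /* ask fs/namespace.c for an internal clone of file->f_path.mnt */
              internal = mnt_clone_internal(&file->f_path);
              if (IS_ERR(internal)) {
                  kfree(acct);
                  filp_close(file, NULL);
                  return PTR_ERR(internal);
              }
      
              /* the clone now lives until the file is closed, keeping the
               * underlying superblock active */
              mnt = file->f_path.mnt;
              file->f_path.mnt = internal;
              acct->file = file;
      
              /* attach the pin to the *original* vfsmount: its final mntput()
               * runs our ->kill() synchronously via mnt_pin_kill() */
              pin_insert(&acct->pin, mnt);
      
              /* drop the original so it is not kept busy */
              mntput(mnt);
              return 0;
          }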
  3. 01 Aug 2014, 2 commits
    • mnt: Correct permission checks in do_remount · 9566d674
      Committed by Eric W. Biederman
      While investigating the issue where "mount --bind -o remount,ro ..."
      would let a later "mount --bind -o remount,rw" succeed even if
      the mount started off locked, I realized that there are several
      additional mount flags that should be locked and are not.
      
      In particular MNT_NOSUID, MNT_NODEV, MNT_NOEXEC, and the atime
      flags in addition to MNT_READONLY should all be locked.  These
      flags are all per superblock, can all be changed with MS_BIND,
      and should not be changeable if set by a more privileged user.
      
      The following additions to the current logic are added in this patch.
      - nosuid may not be clearable by a less privileged user.
      - nodev  may not be clearable by a less privileged user.
      - noexec may not be clearable by a less privileged user.
      - atime flags may not be changeable by a less privileged user.
      
      The logic with atime is that always setting atime on access is a
      global policy: backup software and auditing software could break if
      atime bits are not updated (when they are configured to be updated),
      and serious performance degradation could result (a DoS attack) if
      atime updates happen when they have been explicitly disabled.
      Therefore an unprivileged user should not be able to mess with the
      atime bits set by a more privileged user.
      
      The additional restrictions are implemented with the addition of
      MNT_LOCK_NOSUID, MNT_LOCK_NODEV, MNT_LOCK_NOEXEC, and MNT_LOCK_ATIME
      mnt flags.
      
      Taken together, these changes and the fixes for MNT_LOCK_READONLY
      should make it safe for an unprivileged user to create a user
      namespace and to call "mount --bind -o remount,... ..." without
      the danger of mount flags being changed maliciously.
      
      Cc: stable@vger.kernel.org
      Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
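      
      The shape of the added checks, as a sketch (condensed; the real
      do_remount() tests these against the flags on the existing mount, and
      MNT_ATIME_MASK covers the MNT_NOATIME/MNT_NODIRATIME/MNT_RELATIME bits):
      
          /* a locked flag may stay set, but a less privileged user may not
           * clear it - and locked atime bits may not change at all */
          static bool remount_flags_permitted(unsigned int old, unsigned int new)
          {
              if ((old & MNT_LOCK_READONLY) && !(new & MNT_READONLY))
                  return false;
              if ((old & MNT_LOCK_NOSUID) && !(new & MNT_NOSUID))
                  return false;
              if ((old & MNT_LOCK_NODEV) && !(new & MNT_NODEV))
                  return false;
              if ((old & MNT_LOCK_NOEXEC) && !(new & MNT_NOEXEC))
                  return false;
              if ((old & MNT_LOCK_ATIME) &&
                  (old & MNT_ATIME_MASK) != (new & MNT_ATIME_MASK))
                  return false;
              return true;
          }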
    • mnt: Only change user settable mount flags in remount · a6138db8
      Committed by Eric W. Biederman
      Kenton Varda <kenton@sandstorm.io> discovered that by remounting a
      read-only bind mount read-only in a user namespace, the
      MNT_LOCK_READONLY bit would be cleared, allowing an unprivileged user
      to then remount a read-only mount read-write.
      
      Correct this by replacing the mask of mount flags to preserve
      with a mask of mount flags that may be changed, and preserving
      all others.  This ensures that any future bugs with this mask and
      remount will fail in an easy-to-detect way where new mount flags
      simply won't change.
      
      Cc: stable@vger.kernel.org
      Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
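      
      The shape of the fix, as a sketch (mask membership as described above;
      the actual definition lives in include/linux/mount.h):
      
          #define MNT_USER_SETTABLE_MASK  (MNT_NOSUID | MNT_NODEV | MNT_NOEXEC | \
                                           MNT_NOATIME | MNT_NODIRATIME | \
                                           MNT_RELATIME | MNT_READONLY)
      
          static unsigned int remount_result_flags(unsigned int old,
                                                   unsigned int requested)
          {
              /* take only the user-settable bits from the request ... */
              unsigned int flags = requested & MNT_USER_SETTABLE_MASK;
      
              /* ... and preserve every other bit, locks included, so a
               * forgotten flag fails safe: it simply won't change */
              flags |= old & ~MNT_USER_SETTABLE_MASK;
              return flags;
          }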
  4. 02 Apr 2014, 1 commit
    • smarter propagate_mnt() · f2ebb3a9
      Committed by Al Viro
      The current mainline has copies propagated to *all* nodes, then
      tears down the copies we made for nodes that do not contain
      counterparts of the desired mountpoint.  That sets the right
      propagation graph for the copies (at teardown time we move
      the slaves of removed node to a surviving peer or directly
      to master), but we end up paying a fairly steep price in
      useless allocations.  It's fairly easy to create a situation
      where N calls of mount(2) create exactly N bindings, with
      O(N^2) vfsmounts allocated and freed in the process.
      
      Fortunately, it is possible to avoid those allocations/freeings.
      The trick is to create copies in the right order and find which
      one would've eventually become a master with the current algorithm.
      It turns out to be possible in O(nodes getting propagation) time
      and with no extra allocations at all.
      
      One part is that we need to make sure that the eventual master will
      be created before its slaves, so we need to walk the propagation
      tree in a different order - by peer groups - iterating through
      the peers before dealing with the next group.
      
      Another thing is finding the (earlier) copy that will be the master
      of the one we are about to create; to do that we are (temporarily)
      marking the masters of the mountpoints we are attaching the copies to.
      
      Either we are in a peer of the last mountpoint we'd dealt with,
      or we have the following situation: we are attaching to mountpoint M,
      the last copy S_0 had been attached to M_0 and there are sequences
      S_0...S_n, M_0...M_n such that S_{i+1} is a master of S_{i},
      S_{i} mounted on M_{i}, and we need to create a slave of the first S_{k}
      such that M is getting propagation from M_{k}.  It means that the master
      of M_{k} will be among the sequence of masters of M.  On the
      other hand, the nearest marked node in that sequence will either
      be the master of M_{k} or the master of M_{k-1} (the latter in the
      case where M_{k-1} is a slave of something M gets propagation from,
      but in the wrong peer group).
      
      So we go through the sequence of masters of M until we find
      a marked one (P).  Let N be the one before it.  Then we go through
      the sequence of masters of S_0 until we find one (say, S) mounted
      on a node D that has P as master and check if D is a peer of N.
      If it is, S will be the master of the new copy; if not, the master
      of S will be.
      
      That's it for the hard part; the rest is fairly simple.  The
      iterator is in next_group(), and the handling of one prospective
      mountpoint is in propagate_one().
      
      It seems to survive all tests and gives a noticeably better performance
      than the current mainline for setups that are seriously using shared
      subtrees.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
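      
      A structural sketch of the new walk (heavily condensed from fs/pnode.c;
      the marking and the master-lookup state that propagate_one() keeps
      between calls are omitted here):
      
          static int propagate_mnt_sketch(struct mount *dest_mnt)
          {
              struct mount *m, *n;
              int ret;
      
              /* dest_mnt's own peer group first, so eventual masters
               * exist before any of their slaves are created */
              for (n = next_peer(dest_mnt); n != dest_mnt; n = next_peer(n)) {
                  ret = propagate_one(n);
                  if (ret)
                      return ret;
              }
      
              /* then the rest of the propagation tree, one peer group at
               * a time, finishing each group before moving to the next */
              for (m = next_group(dest_mnt, dest_mnt); m;
                   m = next_group(m, dest_mnt)) {
                  n = m;
                  do {
                      ret = propagate_one(n);
                      if (ret)
                          return ret;
                      n = next_peer(n);
                  } while (n != m);
              }
              return 0;
          }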
  5. 09 Nov 2013, 1 commit
    • RCU'd vfsmounts · 48a066e7
      Committed by Al Viro
      * RCU-delayed freeing of vfsmounts
      * vfsmount_lock replaced with a seqlock (mount_lock)
      * sequence number from mount_lock is stored in nameidata->m_seq and
      used when we exit RCU mode
      * new vfsmount flag - MNT_SYNC_UMOUNT.  Set by umount_tree() when its
      caller knows that vfsmount will have no surviving references.
      * synchronize_rcu() done between unlocking namespace_sem in namespace_unlock()
      and doing pending mntput().
      * new helper: legitimize_mnt(mnt, seq).  Checks the mount_lock sequence
      number against seq, then grabs reference to mnt.  Then it rechecks mount_lock
      again to close the race and either returns success or drops the reference it
      has acquired.  The subtle point is that in case of MNT_SYNC_UMOUNT we can
      simply decrement the refcount and sod off - the aforementioned synchronize_rcu()
      makes sure that final mntput() won't come until we leave RCU mode.  We need
      that, since we don't want to end up with some lazy pathwalk racing with
      umount() and stealing the final mntput() from it - caller of umount() may
      expect it to return only once the fs is shut down and we don't want to break
      that.  In other cases (i.e. with MNT_SYNC_UMOUNT absent) we have to
      do a full-blown mntput() if a mount_lock sequence number mismatch
      happens just as we'd grabbed the reference, but in those cases we
      won't be stealing the final mntput() from anything that would care.
      * mntput_no_expire() doesn't lock anything on the fast path now.  Incidentally,
      SMP and UP cases are handled the same way - no ifdefs there.
      * normal pathname resolution does *not* do any writes to mount_lock.  It does,
      of course, bump the refcounts of vfsmount and dentry in the very end, but that's
      it.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
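      
      The helper reads roughly like this (a sketch staying close to the
      fs/namespace.c version this patch introduced):
      
          static bool legitimize_mnt(struct vfsmount *bastard, unsigned seq)
          {
              struct mount *mnt;
      
              if (read_seqretry(&mount_lock, seq))
                  return false;           /* lost against a writer */
              if (!bastard)
                  return true;
              mnt = real_mount(bastard);
              mnt_add_count(mnt, 1);      /* grab a reference ... */
              if (likely(!read_seqretry(&mount_lock, seq)))
                  return true;            /* ... and recheck to close the race */
              if (bastard->mnt_flags & MNT_SYNC_UMOUNT) {
                  /* synchronize_rcu() in namespace_unlock() keeps the final
                   * mntput() away until we leave RCU mode, so a plain
                   * decrement cannot steal it */
                  mnt_add_count(mnt, -1);
                  return false;
              }
              /* otherwise: full-blown mntput(), outside RCU mode */
              rcu_read_unlock();
              mntput(bastard);
              rcu_read_lock();
              return false;
          }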
  6. 25 Jul 2013, 1 commit
    • vfs: Lock in place mounts from more privileged users · 5ff9d8a6
      Committed by Eric W. Biederman
      When creating a less privileged mount namespace, or propagating
      mounts from a more privileged to a less privileged mount namespace,
      lock the submounts so they may not be unmounted individually in the
      child mount namespace, revealing what is under them.
      
      This enforces the reasonable expectation that it is not possible to
      see under a mount point.  Most of the time mounts are on empty
      directories and revealing that does not matter; however, I have seen
      the occasional sloppy configuration where there were interesting
      things concealed under a mount point that probably should not be
      revealed.
      
      Expirable submounts are not locked because they will eventually
      unmount automatically, so whatever is under them already needs
      to be safe for unprivileged users to access.
      
      From a practical standpoint these restrictions do not appear to be
      significant for unprivileged users of the mount namespace.  Recursive
      bind mounts and pivot_root continue to work, and mounts that are
      created in a mount namespace may be unmounted there.  All of which
      means that the common idiom of keeping a directory of interesting
      files and using pivot_root to throw everything else away continues to
      work just fine.
      Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
      Acked-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
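      
      A sketch of the mechanism (simplified; the exact conditions in
      clone_mnt() and the umount paths are a little more involved):
      
          /* when cloning a tree for a less privileged namespace, lock each
           * copy in place - unless the original is expirable, in which case
           * whatever is under it must already be safe to expose */
          static void maybe_lock_clone(struct mount *mnt, struct mount *old,
                                       int flag)
          {
              if ((flag & CL_UNPRIVILEGED) && list_empty(&old->mnt_expire))
                  mnt->mnt.mnt_flags |= MNT_LOCKED;
          }
      
          /* ... and in do_umount(), a locked mount may not be unmounted
           * individually in the child namespace */
          static int check_not_locked(struct mount *mnt)
          {
              if (mnt->mnt.mnt_flags & MNT_LOCKED)
                  return -EINVAL;
              return 0;
          }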
  7. 27 Mar 2013, 1 commit
  8. 04 Jan 2012, 15 commits
  9. 27 Jul 2011, 1 commit
  10. 17 Jan 2011, 1 commit
    • sanitize vfsmount refcounting changes · f03c6599
      Committed by Al Viro
      Instead of splitting the refcount between (per-cpu) mnt_count
      and (SMP-only) mnt_longrefs, make all references contribute
      to mnt_count again and keep track of how many are longterm
      ones.
      
      Accounting rules for longterm count:
      	* 1 for each fs_struct.root.mnt
      	* 1 for each fs_struct.pwd.mnt
      	* 1 for having non-NULL ->mnt_ns
      	* decrement to 0 happens only under vfsmount lock exclusive
      
      That allows a nice common case for mntput() - since we can't drop the
      final reference until after mnt_longterm has reached 0 due to the rules
      above, mntput() can grab vfsmount lock shared and check mnt_longterm.
      If it turns out to be non-zero (which is the common case), we know
      that this is not the final mntput() and can just blindly decrement
      percpu mnt_count.  Otherwise we grab vfsmount lock exclusive and
      do usual decrement-and-check of percpu mnt_count.
      
      For fs_struct.c we have mnt_make_longterm() and mnt_make_shortterm();
      namespace.c uses the latter in places where we don't already hold
      vfsmount lock exclusive and opencodes a few remaining spots where
      we need to manipulate mnt_longterm.
      
      Note that we mostly revert the code outside of fs/namespace.c back
      to what we used to have; in particular, normal code doesn't need
      to care about two kinds of references, etc.  And we get to keep
      the optimization Nick's variant had bought us...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
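      
      The resulting mntput() fast path, sketched (condensed from the
      fs/namespace.c of this era, when vfsmount_lock was still a big-reader
      lock and the count helpers took struct vfsmount *):
      
          static void mntput_sketch(struct vfsmount *mnt)
          {
              br_read_lock(vfsmount_lock);
              if (likely(atomic_read(&mnt->mnt_longterm))) {
                  /* cannot be the final drop: mnt_longterm must reach 0
                   * first, so blindly decrement percpu mnt_count */
                  mnt_dec_count(mnt);
                  br_read_unlock(vfsmount_lock);
                  return;
              }
              br_read_unlock(vfsmount_lock);
      
              /* possibly the final drop: usual decrement-and-check under
               * the exclusive lock */
              br_write_lock(vfsmount_lock);
              mnt_dec_count(mnt);
              if (mnt_get_count(mnt)) {
                  br_write_unlock(vfsmount_lock);
                  return;
              }
              br_write_unlock(vfsmount_lock);
              mntfree(mnt);
          }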
  11. 16 Jan 2011, 1 commit
    • Unexport do_add_mount() and add in follow_automount(), not ->d_automount() · ea5b778a
      Committed by David Howells
      Unexport do_add_mount() and make ->d_automount() return the vfsmount to be
      added rather than calling do_add_mount() itself.  follow_automount() will then
      do the addition.
      
      This slightly complicates things as ->d_automount() normally wants to add the
      new vfsmount to an expiration list and start an expiration timer.  The problem
      with that is that the vfsmount will be deleted if it has a refcount of 1 and
      the timer will not repeat if the expiration list is empty.
      
      To this end, we require the vfsmount to be returned from d_automount() with a
      refcount of (at least) 2.  One of these refs will be dropped unconditionally.
      In addition, follow_automount() must get a 3rd ref around the call to
      do_add_mount() lest it eat a ref and return an error, leaving the
      mount we have open to expiration, as we would otherwise have only
      1 ref on it.
      
      d_automount() should also add the vfsmount to the expiration list (by
      calling mnt_set_expiry()) and start the expiration timer before returning, if
      this mechanism is to be used.  The vfsmount will be unlinked from the
      expiration list by follow_automount() if do_add_mount() fails.
      
      This patch also fixes the call to do_add_mount() for AFS to propagate the mount
      flags from the parent vfsmount.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
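      
      From a filesystem's side the contract looks roughly like this (the
      example_* names are hypothetical; mnt_set_expiry() is the helper
      added for this purpose):
      
          static LIST_HEAD(example_expiry_list);
          static struct delayed_work example_expiry_work;  /* ends up calling
                                                            * mark_mounts_for_expiry() */
      
          static struct vfsmount *example_d_automount(struct path *path)
          {
              struct vfsmount *mnt;
      
              mnt = example_build_vfsmount(path);  /* hypothetical helper */
              if (IS_ERR(mnt))
                  return mnt;
      
              /* return with (at least) 2 refs: follow_automount() drops one
               * unconditionally, and takes a 3rd of its own around
               * do_add_mount() so an error there cannot leave us with a
               * single ref on an expirable mount */
              mntget(mnt);
      
              /* add to the expiration list and make sure the timer is
               * running before returning; follow_automount() unlinks the
               * mount again if do_add_mount() fails */
              mnt_set_expiry(mnt, &example_expiry_list);
              queue_delayed_work(system_wq, &example_expiry_work, 10 * HZ);
              return mnt;
          }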
  12. 07 Jan 2011, 1 commit
    • fs: scale mntget/mntput · b3e19d92
      Committed by Nick Piggin
      The problem that this patch aims to fix is vfsmount refcounting
      scalability.  We need to take a reference on the vfsmount for every
      successful path lookup, and lookups often go to the same mount point.
      
      The fundamental difficulty is that a "simple" reference count can never be made
      scalable, because any time a reference is dropped, we must check whether that
      was the last reference. To do that requires communication with all other CPUs
      that may have taken a reference count.
      
      We can make refcounts more scalable in a couple of ways, involving keeping
      distributed counters, and checking for the global-zero condition less
      frequently.
      
      - check the global sum once every interval (this will delay zero detection
        for some interval, so it's probably a showstopper for vfsmounts).
      
      - keep a local count and only take the global sum when the local count
        reaches 0 (this is difficult for vfsmounts, because we can't hold
        preemption off for the life of a reference, so a counter would need
        to be per-thread or tied strongly to a particular CPU, which
        requires more locking).
      
      - keep a local difference of increments and decrements, which allows us to sum
        the total difference and hence find the refcount when summing all CPUs. Then,
        keep a single integer "long" refcount for slow and long-lasting references,
        and only take the global sum of local counters when the long refcount is 0.
      
      This last scheme is what I implemented here. Attached mounts and process root
      and working directory references are "long" references, and everything else is
      a short reference.
      
      This allows scalable vfsmount references during path walking over mounted
      subtrees and unattached (lazy umounted) mounts with processes still running
      in them.
      
      This results in one fewer atomic op in the fast path: mntget is now
      just a per-CPU inc, rather than an atomic inc; and mntput just
      requires a spinlock and a non-atomic decrement in the common case.
      However, the code is otherwise bigger and heavier, so single-threaded
      performance is basically a wash.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
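      
      The scheme in miniature (shape only; the real fields live in struct
      vfsmount and are conditional on CONFIG_SMP):
      
          struct mnt_pcp {
              int mnt_count;          /* local increments minus decrements */
          };
      
          struct vfsmount_model {
              struct mnt_pcp __percpu *mnt_pcp;
              atomic_t mnt_longterm;  /* attached mounts, root/pwd refs */
          };
      
          static inline void mntget_model(struct vfsmount_model *mnt)
          {
              this_cpu_inc(mnt->mnt_pcp->mnt_count);  /* no atomic op */
          }
      
          /* the true refcount is the sum of all per-CPU differences; it only
           * has to be computed when mnt_longterm is 0, under the vfsmount
           * lock so the sum cannot change underneath us */
          static int mnt_get_count_model(struct vfsmount_model *mnt)
          {
              int cpu, count = 0;
      
              for_each_possible_cpu(cpu)
                  count += per_cpu_ptr(mnt->mnt_pcp, cpu)->mnt_count;
              return count;
          }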
  13. 11 Aug 2010, 1 commit
  14. 28 Jul 2010, 1 commit
  15. 04 Mar 2010, 3 commits
  16. 17 Feb 2010, 1 commit
    • percpu: add __percpu sparse annotations to fs · 003cb608
      Committed by Tejun Heo
      Add __percpu sparse annotations to fs.
      
      These annotations are to make sparse consider percpu variables to be
      in a different address space and warn if accessed without going
      through percpu accessors.  This patch doesn't affect normal builds.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Alex Elder <aelder@sgi.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
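      
      What the annotation buys, in one example (shape only):
      
          struct mnt_writers {
              int count;
          };
      
          struct example_mount {
              struct mnt_writers __percpu *writers;   /* annotated */
          };
      
          static int writers_on(struct example_mount *m, int cpu)
          {
              /* fine: access goes through a percpu accessor */
              return per_cpu_ptr(m->writers, cpu)->count;
      
              /* sparse would warn on a direct dereference such as
               *     return m->writers->count;
               * because __percpu pointers live in another address space */
          }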
  17. 12 Jun 2009, 2 commits
    • fs: introduce mnt_clone_write · 96029c4e
      Committed by npiggin@suse.de
      This patch speeds up lmbench lat_mmap test by about another 2% after the
      first patch.
      
      Before:
       avg = 462.286
       std = 5.46106
      
      After:
       avg = 453.12
       std = 9.58257
      
      (50 runs of each, stddev gives a reasonable confidence)
      
      It does this by introducing mnt_clone_write, which avoids some heavyweight
      operations of mnt_want_write if called on a vfsmount which we know already
      has a write count; and mnt_want_write_file, which can call mnt_clone_write
      if the file is open for write.
      
      After these two patches, mnt_want_write and mnt_drop_write go from 7% on
      the profile down to 1.3% (including mnt_clone_write).
      
      [AV: mnt_want_write_file() should take file alone and derive mnt from it;
      not only do all callers have that form, but that's the only mnt about which
      we know that it's already held for write if file is opened for write]
      
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
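      
      The two helpers, sketched close to what this patch added (the
      writer-count primitives are the percpu ones from the companion patch
      below):
      
          int mnt_clone_write(struct vfsmount *mnt)
          {
              /* the superblock may have gone read-only meanwhile */
              if (__mnt_is_readonly(mnt))
                  return -EROFS;
              preempt_disable();
              mnt_inc_writers(mnt);   /* cheap: no MNT_WRITE_HOLD dance */
              preempt_enable();
              return 0;
          }
      
          int mnt_want_write_file(struct file *file)
          {
              if (!(file->f_mode & FMODE_WRITE))
                  return mnt_want_write(file->f_path.mnt);
              /* the file is open for write, so its mount is already
               * write-counted and the lightweight path is safe */
              return mnt_clone_write(file->f_path.mnt);
          }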
    • fs: mnt_want_write speedup · d3ef3d73
      Committed by npiggin@suse.de
      This patch speeds up lmbench lat_mmap test by about 8%. lat_mmap is set up
      basically to mmap a 64MB file on tmpfs, fault in its pages, then unmap it.
      A microbenchmark yes, but it exercises some important paths in the mm.
      
      Before:
       avg = 501.9
       std = 14.7773
      
      After:
       avg = 462.286
       std = 5.46106
      
      (50 runs of each, stddev gives a reasonable confidence, but there is quite
      a bit of variation there still)
      
      It does this by removing the complex per-cpu locking and counter
      cache and replacing it with a percpu counter in struct vfsmount.
      This makes the code much simpler, and avoids spinlocks (although the
      msync is still pretty costly, unfortunately).  It results in about
      900 bytes smaller code too.  It does, however, increase the size of
      a vfsmount.
      
      It should also give a speedup on large systems if CPUs are frequently
      operating on different mounts (because the existing scheme has to
      operate on an atomic in the struct vfsmount when switching between
      mounts).  But I'm most interested in the single-threaded path
      performance for the moment.
      
      [AV: minor cleanup]
      
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
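      
      The fast path this buys, sketched (close to the mnt_want_write() this
      patch added; MNT_WRITE_HOLD is the flag the remount-read-only slowpath
      raises while it sums the writer counts):
      
          int mnt_want_write_sketch(struct vfsmount *mnt)
          {
              int ret = 0;
      
              preempt_disable();
              mnt_inc_writers(mnt);   /* percpu increment, no spinlock */
      
              /* make our increment visible before sampling MNT_WRITE_HOLD,
               * so the slowpath cannot miss us */
              smp_mb();
              while (mnt->mnt_flags & MNT_WRITE_HOLD)
                  cpu_relax();        /* remount r/o in flight, spin */
      
              /* don't sample the read-only state until MNT_WRITE_HOLD
               * is clear */
              smp_rmb();
              if (__mnt_is_readonly(mnt)) {
                  mnt_dec_writers(mnt);
                  ret = -EROFS;
              }
              preempt_enable();
              return ret;
          }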
  18. 27 Mar 2009, 1 commit
  19. 17 Oct 2008, 1 commit
  20. 01 Aug 2008, 1 commit
  21. 27 Jul 2008, 1 commit
  22. 30 Apr 2008, 1 commit