1. 05 Jul 2017, 12 commits
  2. 28 Jun 2017, 2 commits
  3. 24 Jun 2017, 4 commits
  4. 22 Jun 2017, 1 commit
    •
      xfs: don't allow bmap on rt files · eb5e248d
      Darrick J. Wong authored
      bmap returns a dumb LBA address but not the block device that goes with
      that LBA.  Swapfiles don't care about this and will blindly assume that
      the data volume is the correct blockdev, which is totally bogus for
      files on the rt subvolume.  This results in the swap code doing IOs to
      arbitrary locations on the data device(!) if the passed in mapping is a
      realtime file, so just turn off bmap for rt files.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      eb5e248d
  5. 21 Jun 2017, 5 commits
  6. 19 Jun 2017, 1 commit
    •
      mm: larger stack guard gap, between vmas · 1be7107f
      Hugh Dickins authored
      The stack guard page is a useful feature that reduces the risk of
      stack smashing into a different mapping.  We have been using a
      single-page gap, which is sufficient to prevent having the stack
      adjacent to a different mapping.  But this seems to be insufficient
      in light of the stack usage in userspace.  E.g. glibc uses alloca()
      allocations as large as 64kB in many commonly used functions.
      Others use constructs like gid_t buffer[NGROUPS_MAX], which is
      256kB, or stack strings with MAX_ARG_STRLEN.
      
      This is especially dangerous for suid binaries and for the default
      unlimited stack size limit, because those applications can be
      tricked into consuming a large portion of the stack, and a single
      glibc call could then jump over the guard page.  These attacks are
      not theoretical, unfortunately.
      
      Make those attacks less probable by increasing the stack guard gap
      to 1MB (on systems with 4k pages; but make it depend on the page size
      because systems with larger base pages might cap stack allocations in
      the PAGE_SIZE units) which should cover larger alloca() and VLA stack
      allocations.  It is obviously not a full fix because the problem is
      somewhat inherent, but it should reduce the attack space a lot.
      
      One could argue that the gap size should be configurable from userspace,
      but that can be done later when somebody finds that the new 1MB is wrong
      for some special case applications.  For now, add a kernel command line
      option (stack_guard_gap) to specify the stack gap size (in page units).
      
      Implementation-wise, first delete all the old code for the stack
      guard page:
      because although we could get away with accounting one extra page in a
      stack vma, accounting a larger gap can break userspace - case in point,
      a program run with "ulimit -S -v 20000" failed when the 1MB gap was
      counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
      and strict non-overcommit mode.
      
      Instead of keeping gap inside the stack vma, maintain the stack guard
      gap as a gap between vmas: using vm_start_gap() in place of vm_start
      (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
      places which need to respect the gap - mainly arch_get_unmapped_area(),
      and the vma tree's subtree_gap support for that.
      Original-patch-by: Oleg Nesterov <oleg@redhat.com>
      Original-patch-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Helge Deller <deller@gmx.de> # parisc
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1be7107f
  7. 18 Jun 2017, 3 commits
  8. 17 Jun 2017, 1 commit
  9. 16 Jun 2017, 1 commit
  10. 15 Jun 2017, 10 commits
    •
      fs: don't forget to put old mntns in mntns_install · 4068367c
      Andrei Vagin authored
      Fixes: 4f757f3c ("make sure that mntns_install() doesn't end up with referral for root")
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrei Vagin <avagin@openvz.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4068367c
    •
      Hang/soft lockup in d_invalidate with simultaneous calls · 81be24d2
      Al Viro authored
      It's not hard to trigger a bunch of d_invalidate() calls on the same
      dentry in parallel.  They end up fighting each other: any dentry
      picked for removal by one will be skipped by the rest, and we'll go
      for the next iteration through the entire subtree, even if
      everything is being skipped.  Moreover, we immediately go back to
      scanning the subtree.  The only thing we really need is to dissolve
      all mounts in the subtree; as soon as we've nothing left to do, we
      can just unhash the dentry and bugger off.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      81be24d2
    •
      ufs_truncate_blocks(): fix the case when size is in the last direct block · a8fad984
      Al Viro authored
      The logic for deciding whether we need to do anything with direct
      blocks is broken when the new size is within the last direct block.
      It's better to find the path to the last byte _not_ to be removed
      and use that instead of the path to the beginning of the first
      block to be freed...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      a8fad984
    •
      ufs: more deadlock prevention on tail unpacking · 289dec5b
      Al Viro authored
      ->s_lock is not needed for ufs_change_blocknr()
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      289dec5b
    •
      ufs: avoid grabbing ->truncate_mutex if possible · 09bf4f5b
      Al Viro authored
      Tail unpacking is done in the wrong place; the deadlocks galore
      are best dealt with by doing that in ->write_iter() (and switching
      to iomap, while we are at it), but that's rather painful to
      backport.  The trouble comes from grabbing pages that cover the
      beginning of the tail from inside ufs_new_fragments(); ongoing
      pageout of any of those is going to deadlock on ->truncate_mutex
      with a process that got around to extending the tail, holding that
      mutex and waiting for the page to get unlocked, while ->writepage()
      on that page is waiting on ->truncate_mutex.
      
      The thing is, we don't need ->truncate_mutex when the fragment
      we are trying to map is within the tail - the damn thing is
      allocated (tail can't contain holes).
      
      Let's do a plain lookup and if the fragment is present, we can
      just pretend that we'd won the race in almost all cases.  The
      only exception is a fragment between the end of tail and the
      end of block containing tail.
      
      Protect ->i_lastfrag with ->meta_lock - read_seqlock_excl() is
      sufficient.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      09bf4f5b
    •
      ufs_get_locked_page(): make sure we have buffer_heads · 267309f3
      Al Viro authored
      Callers rely upon that, but find_lock_page() racing with an attempt
      at page eviction under memory pressure might have left us with
      page eviction by memory pressure might have left us with
      	* try_to_free_buffers() successfully done
      	* __remove_mapping() failed, leaving the page in our mapping
      	* find_lock_page() returning an uptodate page with no
      buffer_heads attached.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      267309f3
    •
      ufs: fix s_size/s_dsize users · c596961d
      Al Viro authored
      For UFS2 we need 64bit variants; we even store them in uspi, but
      use the 32bit ones instead.  One wrinkle is in the handling of
      reserved space: recalculating it every time had been stupid all
      along, but now it would become really ugly.  Just calculate it
      once...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      c596961d
    •
      ufs: fix reserved blocks check · b451cec4
      Al Viro authored
      a) honour ->s_minfree; don't just go with the default (5)
      b) don't bother with capability checks until we know we'll need them
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b451cec4
    •
      ufs: make ufs_freespace() return signed · fffd70f5
      Al Viro authored
      As it is, checking that its return value is <= 0 is useless, and
      that's how it's being used.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      fffd70f5
    •
      ufs: fix logics in "ufs: make fsck -f happy" · 96ecff14
      Al Viro authored
      Storing stats _only_ at the new locations is wrong for UFS1; the
      old locations should always be kept updated.  The check for "has
      been converted to use of new locations" is also wrong - it should
      be "->fs_maxbsize is equal to ->fs_bsize".
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      96ecff14