1. 06 Jul, 2016: 6 commits
  2. 22 Jun, 2016: 5 commits
  3. 16 Jun, 2016: 1 commit
  4. 14 Jun, 2016: 1 commit
  5. 11 Jun, 2016: 2 commits
  6. 08 Jun, 2016: 3 commits
    • coredump: fix dumping through pipes · 1607f09c
      Mateusz Guzik authored
      The offset in the core file used to be tracked with the ->written field
      of the coredump_params structure. That field was retired in favour of
      file->f_pos.
      
      However, ->f_pos is not maintained for pipes, which leads to breakage.
      
      Restore explicit tracking of the offset in coredump_params. Introduce
      a ->pos field for this purpose, since ->written was already reused.
      
      Fixes: a0083939 ("get rid of coredump_params->written").
      Reported-by: Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
      Signed-off-by: Mateusz Guzik <mguzik@redhat.com>
      Reviewed-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      1607f09c
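      A minimal userspace sketch (not part of the patch) of why ->f_pos
      cannot carry the offset here: a pipe has no file position, so lseek()
      fails with ESPIPE and the writer has to count bytes itself, which is
      what the restored coredump_params->pos does.

      	#include <errno.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <unistd.h>

      	int main(void)
      	{
      		int fds[2];
      		size_t pos = 0;                /* explicit offset, like cprm->pos */
      		const char buf[] = "core";

      		if (pipe(fds))
      			return 1;
      		if (write(fds[1], buf, sizeof(buf)) > 0)
      			pos += sizeof(buf);    /* the writer tracks the offset itself */

      		/* pipes are not seekable: the kernel keeps no position for them */
      		if (lseek(fds[1], 0, SEEK_CUR) < 0)
      			printf("lseek on pipe: %s, tracked offset = %zu\n",
      			       strerror(errno), pos);
      		return 0;
      	}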
    • fix a regression in atomic_open() · a01e718f
      Al Viro authored
      open("/foo/no_such_file", O_RDONLY | O_CREAT) on should fail with
      EACCES when /foo is not writable; failing with ENOENT is obviously
      wrong.  That got broken by a braino introduced when moving the
      creat_error logics from atomic_open() to lookup_open().  Easy to
      fix, fortunately.
      Spotted-by: "Yan, Zheng" <ukernel@gmail.com>
      Tested-by: "Yan, Zheng" <ukernel@gmail.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      a01e718f
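      A small userspace check (not from the patch) for the behaviour the fix
      restores; it assumes a directory /foo that exists, is not writable by
      the caller, and does not contain the file:

      	#include <errno.h>
      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <string.h>

      	int main(void)
      	{
      		int fd = open("/foo/no_such_file", O_RDONLY | O_CREAT, 0644);

      		if (fd < 0)
      			printf("open failed with %s (want EACCES, not ENOENT)\n",
      			       strerror(errno));
      		return 0;
      	}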
    • fix d_walk()/non-delayed __d_free() race · 3d56c25e
      Al Viro authored
      The ascend-to-parent logic in d_walk() depends on all encountered child
      dentries not getting freed without an RCU delay.  Unfortunately, in
      quite a few cases that is not true, with a hard-to-hit oopsable race as
      the result.
      
      Fortunately, the fix is simple; right now the rule is "if it has ever
      been hashed, freeing must be delayed", and changing it to "if it has
      ever had a parent, freeing must be delayed" closes that hole and
      covers all cases the old rule used to cover.  Moreover, pipes and
      sockets remain _not_ covered, so we do not introduce an RCU delay in
      the cases which are the reason for having that delay conditional
      in the first place.
      
      Cc: stable@vger.kernel.org # v3.2+ (and watch out for __d_materialise_dentry())
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      3d56c25e
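      A hedged sketch of the rule change (illustrative, not the exact diff):
      the RCU-delay flag is set as soon as a dentry acquires a parent instead
      of when it is first hashed, so d_walk()'s ascend-to-parent path never
      sees a child freed without a grace period, while parent-less dentries
      (pipes, sockets) keep the immediate free.

      	struct dentry *d_alloc(struct dentry *parent, const struct qstr *name)
      	{
      		struct dentry *dentry = __d_alloc(parent->d_sb, name);

      		if (!dentry)
      			return NULL;
      		/* "ever had a parent" => freeing must be RCU-delayed */
      		dentry->d_flags |= DCACHE_RCUACCESS;
      		/* ... attach to the parent as before ... */
      		return dentry;
      	}

      	static void dentry_free(struct dentry *dentry)
      	{
      		if (dentry->d_flags & DCACHE_RCUACCESS)
      			call_rcu(&dentry->d_u.d_rcu, __d_free);	/* delayed */
      		else
      			__d_free(&dentry->d_u.d_rcu);		/* immediate */
      	}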
  7. 07 Jun, 2016: 2 commits
  8. 06 Jun, 2016: 10 commits
  9. 05 Jun, 2016: 1 commit
    • autofs braino fix for do_last() · e6ec03a2
      Al Viro authored
      It's an analogue of commit 7500c38a (fix the braino in "namei:
      massage lookup_slow() to be usable by lookup_one_len_unlocked()").
      The same problem (a ->lookup()-returned unhashed negative dentry
      just might be an autofs one with a ->d_manage() that would wait
      until the daemon makes it positive) applies in do_last() - we
      need to do follow_managed() first.
      
      Fortunately, the remaining callers of follow_managed() are OK - only
      autofs has that weirdness (a negative dentry that does not mean
      an instant -ENOENT), and autofs never has its negative dentries
      hashed, so we can't pick one up from a dcache lookup.
      
      ->d_manage() is a bloody mess ;-/
      
      Cc: stable@vger.kernel.org # v4.6
      Spotted-by: Ian Kent <raven@themaw.net>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e6ec03a2
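      A hedged ordering sketch (illustrative, not the exact diff) of the
      do_last() change described above: transit the managed dentry first, so
      an autofs ->d_manage() gets a chance to wait for the daemon, and only
      then treat a negative dentry as -ENOENT.

      	error = follow_managed(&path, nd);	/* may block in autofs ->d_manage() */
      	if (unlikely(error < 0))
      		return error;

      	if (unlikely(d_is_negative(path.dentry))) {
      		path_to_nameidata(&path, nd);
      		return -ENOENT;			/* now it really is absent */
      	}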
  10. 04 Jun, 2016: 2 commits
    • fix EOPENSTALE bug in do_last() · fac7d191
      Al Viro authored
      EOPENSTALE occurring at the last component of a trailing symlink ends up
      with do_last() retrying its lookup after the symlink body has been
      discarded.  The thing is, all this retry_lookup logic in there is not
      needed at all - the upper layers will do the right thing if we simply
      return that -EOPENSTALE as we would with any other error.  Trying to
      micro-optimize in do_last() is a lot of headache for no good reason.
      
      Cc: stable@vger.kernel.org # v4.2+
      Tested-by: Oleg Drokin <green@linuxhacker.ru>
      Reviewed-and-Tested-by: Jeff Layton <jlayton@poochiereds.net>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      fac7d191
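      A hedged sketch (illustrative, based on the VFS open path of that era)
      of the "upper layers" the message refers to: path_openat() turns
      -EOPENSTALE into -ESTALE, and do_filp_open() retries the whole walk
      with LOOKUP_REVAL, so do_last() can simply propagate the error.

      	filp = path_openat(&nd, op, flags | LOOKUP_RCU);
      	if (unlikely(filp == ERR_PTR(-ECHILD)))
      		filp = path_openat(&nd, op, flags);
      	if (unlikely(filp == ERR_PTR(-ESTALE)))
      		filp = path_openat(&nd, op, flags | LOOKUP_REVAL);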
    • Btrfs: deal with duplicates during extent_map insertion in btrfs_get_extent · 8dff9c85
      Chris Mason authored
      When dealing with inline extents, btrfs_get_extent will incorrectly try
      to insert a duplicate extent_map.  The dup hits -EEXIST from
      add_extent_map, but then we try to merge with the existing one and end
      up trying to insert a zero length extent_map.
      
      This actually works most of the time, except when there are extent maps
      past the end of the inline extent.  rocksdb will trigger this sometimes
      because it preallocates an extent and then truncates down.
      
      Josef made a script to trigger with xfs_io:
      
      	#!/bin/bash
      
      	xfs_io -f -c "pwrite 0 1000" inline
      	xfs_io -c "falloc -k 4k 1M" inline
      	xfs_io -c "pread 0 1000" -c "fadvise -d 0 1000" -c "pread 0 1000" inline
      	xfs_io -c "fadvise -d 0 1000" inline
      	cat inline
      
      You'll get EIOs trying to read the inline file after this because
      add_extent_map is returning EEXIST.
      Signed-off-by: Chris Mason <clm@fb.com>
      8dff9c85
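      A hedged sketch (illustrative, not the exact diff) of the -EEXIST
      handling the fix needs: when the map that caused the collision covers
      exactly the same range and block, drop the new one and reuse the
      existing map instead of attempting the merge that produced the
      zero-length extent_map.

      	ret = add_extent_mapping(em_tree, em, 0);
      	if (ret == -EEXIST) {
      		existing = search_extent_mapping(em_tree, start, len);
      		if (existing->start == em->start &&
      		    extent_map_end(existing) == extent_map_end(em) &&
      		    em->block_start == existing->block_start) {
      			/* same extent; common with inlines: reuse the existing map */
      			free_extent_map(em);
      			em = existing;
      			ret = 0;
      		}
      		/* otherwise fall back to the overlap handling */
      	}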
  11. 03 Jun, 2016: 3 commits
  12. 01 Jun, 2016: 4 commits
    • ceph: use i_version to check validity of fscache · f6973c09
      Yan, Zheng authored
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      f6973c09
    • ceph: improve fscache revalidation · f7f7e7a0
      Yan, Zheng authored
      There are several issues in the fscache revalidation code:
      - In ceph_revalidate_work(), fscache_invalidate() is called when
        fscache_check_consistency() returns 0. This is completely wrong
        because 0 means the cache is valid.
      - handle_cap_grant() calls ceph_queue_revalidate() if the client
        already has CAP_FILE_CACHE. This code is confusing. The client
        should revalidate the cache each time it gets CAP_FILE_CACHE
        anew.
      - In handle_cap_grant(), fscache_invalidate() is called if the MDS
        revokes CAP_FILE_CACHE. This is inconsistent with the case where
        the inode gets evicted. In the latter case, the cache is not
        discarded, and the client may use it when the inode is reloaded.
      
      This patch moves the fscache revalidation into ceph_get_caps().
      The client revalidates the cache after it gets CAP_FILE_CACHE.
      i_rdcache_gen should stay constant while CAP_FILE_CACHE is
      in use. If i_fscache_gen is not equal to i_rdcache_gen, the client
      needs to check the cache's consistency.
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      f7f7e7a0
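      A hedged sketch of the revalidation rule described above (field and
      helper names follow the ceph/fscache code of that era, but this is an
      illustration, not the exact patch): once CAP_FILE_CACHE is held,
      compare the fscache generation with the read-cache generation and only
      ask fscache to verify consistency when they differ.

      	if (ci->i_fscache_gen != ci->i_rdcache_gen) {
      		/* non-zero means the cached data is no longer consistent */
      		if (fscache_check_consistency(ci->fscache))
      			fscache_invalidate(ci->fscache);
      		ci->i_fscache_gen = ci->i_rdcache_gen;
      	}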
    • ceph: disable fscache when inode is opened for write · 46b59b2b
      Yan, Zheng authored
      All other filesystems do not add dirty pages to fscache; they
      disable fscache when the inode is opened for write. Only ceph adds
      dirty pages to fscache, and that code is buggy.
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      46b59b2b
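      A minimal sketch of the policy stated above (the helper name is
      hypothetical, for illustration only): use the fscache cookie only for
      files that are not opened for write.

      	/* hypothetical helper: fscache is usable only for read-only opens */
      	static bool ceph_fscache_can_enable(struct file *filp)
      	{
      		return !(filp->f_mode & FMODE_WRITE);
      	}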
    • ceph: avoid unnecessary fscache invalidation/revalidation · 14649758
      Yan, Zheng authored
      ceph_fill_file_size() has already called ceph_fscache_invalidate()
      if it returns true.
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      14649758