1. 18 Nov, 2019 (36 commits)
  2. 16 Nov, 2019 (1 commit)
    • afs: Fix race in commit bulk status fetch · a28f239e
      Committed by David Howells
      When a lookup is done, the afs filesystem will perform a bulk status-fetch
      operation on the requested vnode (file) plus the next 49 other vnodes from
      the directory list (in AFS, directory contents are downloaded as blobs and
      parsed locally).  When the results are received, it will speculatively
      populate the inode cache from the extra data.
      
      However, the lookup may race with another lookup on the same directory for a
      different file, one that is among the 49 extra fetches. If the bulk
      status-fetch operation finishes first, it will try to update the inode
      belonging to the other lookup.
      
      If this other inode is still in the throes of being created, however, this
      will cause an assertion failure in afs_apply_status():
      
      	BUG_ON(test_bit(AFS_VNODE_UNSET, &vnode->flags));
      
      at or around fs/afs/inode.c:175, because it expects the inode to already
      contain data it can compare against.
      
      Fix this by skipping the update if the inode is being created, as the
      creator will presumably set up the inode with the same information (see the
      sketch after this entry).
      
      Fixes: 39db9815 ("afs: Fix application of the results of a inline bulk status fetch")
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a28f239e
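      A minimal, hedged C sketch of the idea behind the fix (not the actual kernel
      patch; the struct, flag and helper names below are invented for illustration):
      a speculative status update is skipped while the inode is still flagged as
      being created, since the creator will populate it with the same information.

      	#include <stdbool.h>
      	#include <stdio.h>

      	#define VNODE_UNSET 0x1	/* assumed flag: inode not yet set up by its creator */

      	struct vnode {
      		unsigned long flags;
      		long status_version;
      	};

      	/* Apply a bulk-fetched status only to an inode that is already set up;
      	 * skipping the half-created one avoids the assertion described above. */
      	static bool apply_status(struct vnode *v, long new_version)
      	{
      		if (v->flags & VNODE_UNSET)
      			return false;	/* still being created: skip the update */
      		v->status_version = new_version;
      		return true;
      	}

      	int main(void)
      	{
      		struct vnode fresh = { .flags = VNODE_UNSET };
      		struct vnode ready = { .flags = 0, .status_version = 1 };

      		printf("fresh updated: %d\n", apply_status(&fresh, 2));	/* 0: skipped */
      		printf("ready updated: %d\n", apply_status(&ready, 2));	/* 1: applied */
      		return 0;
      	}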
  3. 15 Nov, 2019 (2 commits)
    • ceph: increment/decrement dio counter on async requests · 6a81749e
      Committed by Jeff Layton
      Ceph can in some cases issue an async DIO request, in which case we can
      end up calling ceph_end_io_direct before the I/O is actually complete.
      That may allow buffered operations to proceed while DIO requests are
      still in flight.
      
      Fix this by incrementing the i_dio_count when issuing an async DIO request,
      and decrementing it when tearing down the aio_req (a minimal sketch of the
      counting scheme follows after this entry).
      
      Fixes: 321fe13c ("ceph: add buffered/direct exclusionary locking for reads and writes")
      Signed-off-by: Jeff Layton <jlayton@kernel.org>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      6a81749e
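      A small, hedged C sketch of the counting scheme (not the ceph code; the type
      and helper names are invented): the in-flight DIO counter is bumped when an
      async request is issued and dropped when the request is torn down, and the
      buffered path only proceeds once it has drained.

      	#include <stdatomic.h>
      	#include <stdio.h>

      	struct dio_counter {
      		atomic_int in_flight;	/* plays the role of i_dio_count here */
      	};

      	static void dio_begin(struct dio_counter *c) { atomic_fetch_add(&c->in_flight, 1); }
      	static void dio_end(struct dio_counter *c)   { atomic_fetch_sub(&c->in_flight, 1); }

      	/* Buffered I/O must not run while async DIO is still in flight. */
      	static int buffered_allowed(struct dio_counter *c)
      	{
      		return atomic_load(&c->in_flight) == 0;
      	}

      	int main(void)
      	{
      		struct dio_counter ino = { 0 };

      		dio_begin(&ino);	/* async DIO request issued */
      		printf("buffered allowed: %d\n", buffered_allowed(&ino));	/* 0 */
      		dio_end(&ino);	/* aio_req torn down, I/O complete */
      		printf("buffered allowed: %d\n", buffered_allowed(&ino));	/* 1 */
      		return 0;
      	}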
    • ceph: take the inode lock before acquiring cap refs · a81bc310
      Committed by Jeff Layton
      Most of the time, we (or the vfs layer) take the inode_lock and then
      acquire caps, but ceph_read_iter does the opposite, and that can lead
      to a deadlock.
      
      When multiple clients are operating on the same data, we can end up in a
      situation where a reader takes caps and then tries to acquire the
      inode_lock. Another task holds the inode_lock and issues a request to the
      MDS that needs to revoke the caps, but that can't happen until the
      inode_lock is released.
      
      Fix this by having ceph_read_iter take the inode_lock earlier, before
      attempting to acquire caps, so that both paths use the same lock ordering
      (a lock-ordering sketch follows after this entry).
      
      Fixes: 321fe13c ("ceph: add buffered/direct exclusionary locking for reads and writes")
      Link: https://tracker.ceph.com/issues/36348
      Signed-off-by: Jeff Layton <jlayton@kernel.org>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      a81bc310
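      A hedged C sketch of the lock-ordering point (not the ceph implementation;
      cap acquisition is modelled here as a plain mutex just to show the ordering):
      if every path takes the inode lock before acquiring caps, the reader and the
      cap-revoking request can no longer wait on each other.

      	#include <pthread.h>
      	#include <stdio.h>

      	static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;
      	static pthread_mutex_t caps = PTHREAD_MUTEX_INITIALIZER;	/* stand-in for cap refs */

      	/* Read path: inode_lock first, then caps (same order as everyone else). */
      	static void read_path(void)
      	{
      		pthread_mutex_lock(&inode_lock);
      		pthread_mutex_lock(&caps);
      		puts("read: holds inode_lock, then caps");
      		pthread_mutex_unlock(&caps);
      		pthread_mutex_unlock(&inode_lock);
      	}

      	/* Revoking path: also inode_lock first, so no ABBA deadlock is possible. */
      	static void revoke_path(void)
      	{
      		pthread_mutex_lock(&inode_lock);
      		pthread_mutex_lock(&caps);
      		puts("revoke: caps taken back under inode_lock");
      		pthread_mutex_unlock(&caps);
      		pthread_mutex_unlock(&inode_lock);
      	}

      	int main(void)
      	{
      		read_path();
      		revoke_path();
      		return 0;
      	}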
  4. 14 Nov, 2019 (1 commit)
    • io_uring: ensure registered buffer import returns the IO length · 5e559561
      Committed by Jens Axboe
      A test case was reported where two linked reads with registered buffers
      always failed on the second link. This is because we set the expected value
      of a request in req->result, and if we don't get this result, we fail the
      dependent links. For some reason the registered buffer import returned
      either -ERROR or 0, while the normal import returns either -ERROR or the
      length. This broke linked commands with registered buffers.
      
      Fix this by making io_import_fixed() correctly return the mapped length
      (a minimal sketch follows after this entry).
      
      Cc: stable@vger.kernel.org # v5.3
      Reported-by: 李通洲 <carter.li@eoitek.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      5e559561
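      A minimal, hedged C sketch of the return-value convention (not the io_uring
      code; the function below is invented for illustration): a fixed-buffer import
      helper reports the mapped length on success rather than 0, so a caller that
      compares the result against the expected length does not spuriously fail the
      dependent link.

      	#include <stddef.h>
      	#include <stdio.h>

      	/* Returns a negative error code on failure, otherwise the mapped length. */
      	static long import_fixed(size_t buf_len, size_t offset, size_t want)
      	{
      		if (offset + want > buf_len)
      			return -22;	/* EINVAL-style failure */
      		return (long)want;	/* the fix: report the IO length, not 0 */
      	}

      	int main(void)
      	{
      		long expected = 512;
      		long res = import_fixed(4096, 0, 512);

      		if (res == expected)
      			puts("link continues: result matches expected length");
      		else
      			puts("link would be failed");
      		return 0;
      	}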