1. 03 Oct, 2017 13 commits
  2. 01 Oct, 2017 1 commit
  3. 30 Sep, 2017 2 commits
  4. 27 Sep, 2017 1 commit
  5. 26 Sep, 2017 20 commits
  6. 25 Sep, 2017 3 commits
    • block: fix a crash caused by wrong API · f5c156c4
      Shaohua Li committed
      part_stat_show takes a part device, not a disk, so we should use
      part_to_disk.
      
      Fixes: d62e26b3 ("block: pass in queue to inflight accounting")
      Cc: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f5c156c4
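      The crash came from treating the struct device the show routine receives as if it were embedded in a gendisk, when it is actually embedded in the partition. A minimal standalone model of the container_of pattern behind part_to_disk (simplified structs and a back-pointer standing in for the real helper; this is not the kernel source):

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Standalone model, not kernel code: a partition's struct device is
       * embedded in hd_struct, not in gendisk, so code handed that device
       * must first recover the hd_struct with container_of and only then
       * reach the disk. */

      struct device { int id; };

      struct gendisk {
          struct device dev;   /* the disk's own embedded device */
          int queue;           /* stands in for disk->queue */
      };

      struct hd_struct {
          struct device dev;   /* what part_stat_show actually receives */
          struct gendisk *disk;/* back-pointer; stands in for part_to_disk() */
      };

      #define container_of(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

      /* modeled part_to_disk(): device -> containing hd_struct -> gendisk */
      static struct gendisk *part_to_disk_model(struct device *dev)
      {
          struct hd_struct *part = container_of(dev, struct hd_struct, dev);
          return part->disk;
      }
      ```

      Interpreting the same device pointer as if it lived inside a gendisk would yield a garbage gendisk pointer, which is the crash the patch fixes.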
    • fs: Fix page cache inconsistency when mixing buffered and AIO DIO · 332391a9
      Lukas Czerner committed
      Currently, when mixing buffered reads and asynchronous direct writes,
      it is possible to end up with stale data in the page cache while the
      new data is already written to disk. This persists until the affected
      pages are flushed away. Even though mixing buffered and direct IO is
      ill-advised, it does pose a threat to data integrity, is unexpected,
      and should be fixed.
      
      Fix this by deferring completion of asynchronous direct writes to
      process context whenever there are mapped pages to be found in the
      inode. Then, before completing in dio_complete(), invalidate the
      pages in question. This ensures that after completion the pages in
      the written area are either unmapped or populated with up-to-date
      data. Also do the same for the iomap case, which uses
      iomap_dio_complete() instead.
      
      This has the side effect of deferring completion to process context
      for every AIO DIO that happens on an inode with mapped pages.
      However, since the consensus is that this is an ill-advised practice,
      the performance implication should not be a problem.
      
      This was based on a proposal from Jeff Moyer, thanks!
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      332391a9
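      The invariant the patch establishes can be illustrated with a toy userspace model (arrays standing in for the page cache and the backing store; none of these names come from the kernel): a direct write bypasses the cache, so without invalidation a later buffered read would serve the stale cached copy.

      ```c
      #include <assert.h>

      /* Toy model of the fix, not kernel code: after an async direct write
       * completes, the completion path invalidates any cached pages in the
       * written range, so a later buffered read repopulates from "disk"
       * instead of serving stale data. */

      #define NPAGES 8

      static char disk[NPAGES];    /* backing store */
      static char cache[NPAGES];   /* page-cache copies */
      static int  cached[NPAGES];  /* 1 if page i is present in the cache */

      static char buffered_read(int page)
      {
          if (!cached[page]) {     /* miss: fill from disk */
              cache[page] = disk[page];
              cached[page] = 1;
          }
          return cache[page];
      }

      /* async direct write: goes straight to disk, bypassing the cache */
      static void direct_write(int page, char val)
      {
          disk[page] = val;
      }

      /* models the deferred dio_complete() step: invalidate the pages
       * covering the written range before reporting completion */
      static void dio_complete_model(int first, int last)
      {
          for (int i = first; i <= last; i++)
              cached[i] = 0;
      }
      ```

      Without the dio_complete_model() step, the second buffered read of a directly written page would keep returning the stale cached value indefinitely, which is exactly the inconsistency described above.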
    • nvmet: implement valid sqhd values in completions · bb1cc747
      James Smart committed
      To support sqhd for initiators that follow the spec and compare sqhd
      against their sqtail values:
      
      - add sqhd to struct nvmet_sq
      - initialize sqhd to 0 in nvmet_sq_setup
      - rather than propagate the 0's-based qsize value from the connect
        message, which would require a +1 in every sqhd update, and as
        nothing else references it, convert to a 1's-based value in the
        nvmet_sq/cq_setup() calls
      - validate that the connect message sqsize is non-zero per the spec
      - update the assigned sqhd for every completion that goes back
      
      Also remove handling the NULL sq case in __nvmet_req_complete, as it can't
      happen with the current code.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      bb1cc747
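      The sqhd bookkeeping described in the bullets above can be sketched as a small standalone model (struct and function names here are simplified illustrations, not the driver's actual symbols): convert the 0's-based sqsize from the connect message to a 1's-based size once, reject a zero sqsize per the spec, and advance sqhd modulo the queue size on every completion.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Sketch of the sqhd logic, simplified from the patch description. */

      struct nvmet_sq_model {
          uint16_t sqhd;   /* head value reported in completions */
          uint16_t size;   /* 1's-based queue size */
      };

      /* Returns 0 on success, -1 if the connect-message sqsize violates
       * the spec (a 0's-based sqsize of 0 is invalid). */
      static int sq_setup_model(struct nvmet_sq_model *sq,
                                uint16_t sqsize_0based)
      {
          if (sqsize_0based == 0)
              return -1;                    /* reject per spec */
          sq->size = (uint16_t)(sqsize_0based + 1); /* 1's-based, once */
          sq->sqhd = 0;
          return 0;
      }

      /* Advance sqhd for one completion and return the value that would
       * be placed in the completion queue entry. Because size is already
       * 1's-based, no +1 is needed in this hot path. */
      static uint16_t complete_model(struct nvmet_sq_model *sq)
      {
          sq->sqhd = (uint16_t)((sq->sqhd + 1) % sq->size);
          return sq->sqhd;
      }
      ```

      Doing the +1 conversion once at setup, rather than in every completion, is the design choice the third bullet describes.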