1. 09 Jul 2018, 4 commits
  2. 27 Jun 2018, 1 commit
  3. 15 Jun 2018, 1 commit
  4. 14 Jun 2018, 1 commit
  5. 31 May 2018, 1 commit
  6. 29 May 2018, 5 commits
  7. 14 May 2018, 1 commit
  8. 09 May 2018, 3 commits
  9. 19 Apr 2018, 1 commit
    • scsi: sd_zbc: Avoid that resetting a zone fails sporadically · ccce20fc
      Committed by Bart Van Assche
      Since SCSI scanning occurs asynchronously, since sd_revalidate_disk() is
      called from sd_probe_async(), and since sd_revalidate_disk() calls
      sd_zbc_read_zones(), it can happen that sd_zbc_read_zones() runs
      concurrently with blkdev_report_zones() and/or blkdev_reset_zones().
      That can cause these functions to fail with -EIO because
      sd_zbc_read_zones() e.g. sets q->nr_zones to zero before restoring it to
      the actual value, even if no drive characteristics have changed. Prevent
      this by making the following changes:
      
      - Protect the code that updates zone information with blk_queue_enter()
        and blk_queue_exit() (see the sketch after this list).
      - Modify sd_zbc_setup_seq_zones_bitmap() and sd_zbc_setup() such that
        these functions do not modify struct scsi_disk before all zone
        information has been obtained.
      
      Note: since commit 055f6e18 ("block: Make q_usage_counter also track
      legacy requests"; kernel v4.15) the request queue freezing mechanism also
      affects legacy request queues.
      
      Fixes: 89d94756 ("sd: Implement support for ZBC devices")
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Damien Le Moal <damien.lemoal@wdc.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: stable@vger.kernel.org # v4.16
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      ccce20fc
  10. 18 Apr 2018, 1 commit
  11. 18 Mar 2018, 1 commit
    • block: Move SECTOR_SIZE and SECTOR_SHIFT definitions into <linux/blkdev.h> · 233bde21
      Committed by Bart Van Assche
      While preparing a patch for a block driver I often find myself wondering:
      is a definition of SECTOR_SIZE and/or SECTOR_SHIFT available for this
      driver, or do I have to introduce these constants before I can use them?
      To avoid this confusion,
      move the existing definitions of SECTOR_SIZE and SECTOR_SHIFT into the
      <linux/blkdev.h> header file such that these become available for all
      block drivers. Make the SECTOR_SIZE definition in the uapi msdos_fs.h
      header file conditional so that including that header file after
      <linux/blkdev.h> does not cause the compiler to complain about a
      SECTOR_SIZE redefinition.
      
      Note: the SECTOR_SIZE / SECTOR_SHIFT / SECTOR_BITS definitions have
      not been removed from uapi header files nor from NAND drivers in
      which these constants are used for another purpose than converting
      block layer offsets and sizes into a number of sectors.
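
      As a rough sketch of the result (not the literal upstream hunks), the
      shared definitions end up along these lines in <linux/blkdev.h>, while
      the uapi copy is guarded so that double inclusion stays harmless:

      /* <linux/blkdev.h> (sketch) */
      #define SECTOR_SHIFT 9
      #define SECTOR_SIZE (1 << SECTOR_SHIFT)

      /* uapi <linux/msdos_fs.h> (sketch): define only if nobody did yet */
      #ifndef SECTOR_SIZE
      #define SECTOR_SIZE 512
      #endif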
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      233bde21
  12. 09 Mar 2018, 4 commits
  13. 01 Mar 2018, 1 commit
  14. 15 Feb 2018, 1 commit
  15. 20 Jan 2018, 1 commit
  16. 11 Jan 2018, 4 commits
  17. 10 Jan 2018, 2 commits
    • blk-mq: remove REQ_ATOM_COMPLETE usages from blk-mq · 634f9e46
      Committed by Tejun Heo
      After the recent updates to use generation number and state based
      synchronization, blk-mq no longer depends on REQ_ATOM_COMPLETE except
      to avoid firing the same timeout multiple times.
      
      Remove all REQ_ATOM_COMPLETE usages and use a new rq_flags flag
      RQF_MQ_TIMEOUT_EXPIRED to avoid firing the same timeout multiple
      times.  This removes atomic bitops from hot paths too.
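
      A minimal sketch of the "fire only once" idea using the
      RQF_MQ_TIMEOUT_EXPIRED bit named above; the helper name and the
      surrounding logic are illustrative, not the upstream code:

      #include <linux/blkdev.h>

      static void blk_mq_timeout_once_sketch(struct request *rq)
      {
              /* A previous pass already fired the timeout for this instance. */
              if (rq->rq_flags & RQF_MQ_TIMEOUT_EXPIRED)
                      return;

              rq->rq_flags |= RQF_MQ_TIMEOUT_EXPIRED;
              /* ... invoke the driver's ->timeout() handler here ... */
      }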
      
      v2: Removed blk_clear_rq_complete() from blk_mq_rq_timed_out().
      
      v3: Added RQF_MQ_TIMEOUT_EXPIRED flag.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "jianchao.wang" <jianchao.w.wang@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      634f9e46
    • blk-mq: replace timeout synchronization with a RCU and generation based scheme · 1d9bd516
      Committed by Tejun Heo
      Currently, blk-mq timeout path synchronizes against the usual
      issue/completion path using a complex scheme involving atomic
      bitflags, REQ_ATOM_*, memory barriers and subtle memory coherence
      rules.  Unfortunately, it contains quite a few holes.
      
      There's a complex dance around REQ_ATOM_STARTED and
      REQ_ATOM_COMPLETE between the issue/completion and timeout paths;
      however, they don't have a synchronization point across request recycle
      instances and it isn't clear what the barriers add.
      blk_mq_check_expired() can easily read STARTED from the N-2'th instance,
      the deadline from the N-1'th, and blk_mark_rq_complete() against the Nth.
      
      In fact, it's pretty easy to make blk_mq_check_expired() terminate a
      later instance of a request.  If we induce a 5 second delay before the
      time_after_eq() test in blk_mq_check_expired(), shorten the timeout to
      2s, and issue back-to-back large IOs, blk-mq starts timing out
      requests spuriously pretty quickly.  Nothing actually timed out.  It
      just made the call on a recycle instance of a request and then
      terminated a later instance long after the original instance finished.
      The scenario isn't theoretical either.
      
      This patch replaces the broken synchronization mechanism with a RCU
      and generation number based one.
      
      1. Each request has a u64 generation + state value, which can be
         updated only by the request owner.  Whenever a request becomes
         in-flight, the generation number gets bumped up too.  This provides
         the basis for the timeout path to distinguish different recycle
         instances of the request.
      
         Also, marking a request in-flight and setting its deadline are
         protected with a seqcount so that the timeout path can fetch both
         values coherently (see the sketch after this list).
      
      2. The timeout path fetches the generation, state and deadline.  If
         the verdict is timeout, it records the generation into a dedicated
         request abortion field and does RCU wait.
      
      3. The completion path is also protected by RCU (from the previous
         patch) and checks whether the current generation number and state
         match the abortion field.  If so, it skips completion.
      
      4. The timeout path, after RCU wait, scans requests again and
         terminates the ones whose generation and state still match the ones
         requested for abortion.
      
         By now, the timeout path knows that either the generation number
         and state have changed (it lost the race) or the completion path
         will yield to it, so it can safely time out the request.
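
      A minimal sketch of the coherent fetch from step 1, assuming simplified
      field and helper names; seqcount_t and the read_seqcount_begin()/
      read_seqcount_retry() API are real, the rest is illustrative:

      #include <linux/seqlock.h>

      struct rq_gstate_sketch {
              seqcount_t      gstate_seq;     /* written only by the request owner */
              u64             gstate;         /* generation + state */
              unsigned long   deadline;
      };

      static void rq_read_gstate_sketch(struct rq_gstate_sketch *gs,
                                        u64 *gstate, unsigned long *deadline)
      {
              unsigned int seq;

              /* Retry until generation/state and deadline are read coherently. */
              do {
                      seq = read_seqcount_begin(&gs->gstate_seq);
                      *gstate = gs->gstate;
                      *deadline = gs->deadline;
              } while (read_seqcount_retry(&gs->gstate_seq, seq));
      }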
      
      While it's more lines of code, it's conceptually simpler, doesn't
      depend on direct use of subtle memory ordering or coherence, and
      hopefully doesn't terminate the wrong instance.
      
      While this change makes REQ_ATOM_COMPLETE synchronization unnecessary
      between issue/complete and timeout paths, REQ_ATOM_COMPLETE isn't
      removed yet as it's still used in other places.  Future patches will
      move all state tracking to the new mechanism and remove all bitops in
      the hot paths.
      
      Note that this patch adds a comment explaining a race condition in
      BLK_EH_RESET_TIMER path.  The race has always been there and this
      patch doesn't change it.  It's just documenting the existing race.
      
      v2: - Fixed BLK_EH_RESET_TIMER handling as pointed out by Jianchao.
          - s/request->gstate_seqc/request->gstate_seq/ as suggested by Peter.
          - READ_ONCE() added in blk_mq_rq_update_state() as suggested by Peter.
      
      v3: - Fixed possible extended seqcount / u64_stats_sync read looping
            spotted by Peter.
          - MQ_RQ_IDLE was incorrectly being set in complete_request instead
            of free_request.  Fixed.
      
      v4: - Rebased on top of hctx_lock() refactoring patch.
          - Added comment explaining the use of hctx_lock() in completion path.
      
      v5: - Added comments requested by Bart.
          - Note the addition of BLK_EH_RESET_TIMER race condition in the
            commit message.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "jianchao.wang" <jianchao.w.wang@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bart Van Assche <Bart.VanAssche@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      1d9bd516
  18. 06 Jan 2018, 1 commit
    • block: introduce zoned block devices zone write locking · 6cc77e9c
      Committed by Christoph Hellwig
      Components relying only on the request_queue structure for accessing
      block devices (e.g. I/O schedulers) have limited knowledge of the
      device characteristics. In particular, the device capacity cannot be
      easily discovered, which for a zoned block device also results in the
      inability to easily know the number of zones of the device (the zone
      size is indicated by the chunk_sectors field of the queue limits).
      
      Introduce the nr_zones field to the request_queue structure to simplify
      access to this information. Also, add the bitmap seq_zone_bitmap which
      indicates which zones of the device are sequential zones (write
      preferred or write required) and the bitmap seq_zones_wlock which
      indicates if a zone is write locked, that is, if a write request
      targeting a zone was dispatched to the device. These fields are
      initialized by the low level block device driver (sd.c for ZBC/ZAC
      disks). They are not initialized by stacking drivers (device mappers)
      handling zoned block devices (e.g. dm-linear).
      
      Using this, I/O schedulers can introduce zone write locking to control
      request dispatching to a zoned block device and avoid write request
      reordering by limiting to at most a single write request per zone
      outside of the scheduler at any time.
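
      A minimal sketch of the per-zone write locking an I/O scheduler can
      build on top of these bitmaps; the helper name is illustrative, the
      zone number is assumed to have been derived from the request's sector
      and the zone size, and the field names may differ slightly from the
      final code:

      #include <linux/blkdev.h>

      static bool zone_write_trylock_sketch(struct request_queue *q,
                                            unsigned int zone_no)
      {
              /* Conventional zones never need write locking. */
              if (!test_bit(zone_no, q->seq_zones_bitmap))
                      return true;

              /* Claim the zone; fail if a write is already dispatched to it. */
              return !test_and_set_bit(zone_no, q->seq_zones_wlock);
      }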
      
      Based on previous patches from Damien Le Moal.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      [Damien]
      * Fixed comments and indentation in blkdev.h
      * Changed helper functions
      * Fixed this commit message
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      6cc77e9c
  19. 21 Dec 2017, 1 commit
    • block: unalign call_single_data in struct request · 4ccafe03
      Committed by Jens Axboe
      A previous change blindly added massive alignment to the
      call_single_data structure in struct request. This ballooned it in size
      from 296 to 320 bytes on my setup, for no valid reason at all.
      
      Use the unaligned struct __call_single_data variant instead.
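
      In outline (a sketch, not the exact diff; surrounding members omitted),
      the change amounts to swapping the field type:

      struct request_sketch {
              /* ... */
              /* was: call_single_data_t csd;  (the cacheline-aligned typedef) */
              struct __call_single_data csd;  /* unaligned variant, no padding */
              /* ... */
      };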
      
      Fixes: 966a9671 ("smp: Avoid using two cache lines for struct call_single_data")
      Cc: stable@vger.kernel.org # v4.14
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4ccafe03
  20. 19 Dec 2017, 2 commits
    • block: fix blk_rq_append_bio · 0abc2a10
      Committed by Jens Axboe
      Commit caa4b024 ("blk-map: call blk_queue_bounce from blk_rq_append_bio")
      moves blk_queue_bounce() into blk_rq_append_bio(), but doesn't consider
      the fact that the bounced bio becomes invisible to the caller since the
      parameter type is 'struct bio *'. Make it a pointer to a pointer to
      a bio, so the caller also sees the right bio after a bounce.
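
      A caller-side sketch of the new calling convention, following the note
      below about only calling bio_get() after blk_rq_append_bio() succeeds;
      the helper name is illustrative and the prototype is approximate:

      #include <linux/blkdev.h>

      /* int blk_rq_append_bio(struct request *rq, struct bio **bio); */
      static int map_one_bio_sketch(struct request *rq, struct bio *bio)
      {
              int ret = blk_rq_append_bio(rq, &bio);

              if (ret)
                      return ret;

              /* 'bio' may now point at the bounce clone, not the original. */
              bio_get(bio);
              return 0;
      }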
      
      Fixes: caa4b024 ("blk-map: call blk_queue_bounce from blk_rq_append_bio")
      Cc: Christoph Hellwig <hch@lst.de>
      Reported-by: Michele Ballabio <barra_cuda@katamail.com>
      (handling failure of blk_rq_append_bio(), only call bio_get() after
      blk_rq_append_bio() returns OK)
      Tested-by: Michele Ballabio <barra_cuda@katamail.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0abc2a10
    • block: don't let passthrough IO go into .make_request_fn() · 14cb0dc6
      Committed by Ming Lei
      Commit a8821f3f ("block: Improvements to bounce-buffer handling") tries
      to make sure that the bio passed to .make_request_fn won't exceed
      BIO_MAX_PAGES, but ignores that passthrough I/O can use
      blk_queue_bounce() too. In particular, passthrough IO may not be
      sector-aligned, and the check of 'sectors < bio_sectors(*bio_orig)'
      inside __blk_queue_bounce() may become true even though the max bvec
      number doesn't exceed BIO_MAX_PAGES, causing the bio to be split and the
      original passthrough bio to be submitted to generic_make_request().
      
      This patch fixes the issue by checking whether the bio is passthrough IO
      and using bio_kmalloc() to allocate the cloned passthrough bio.
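
      A minimal sketch of the allocation choice described above; the
      bio_is_passthrough() test, the NULL bio_set (which makes the clone fall
      back to a bio_kmalloc() allocation) and the fs_bio_set fallback are
      assumptions about the surrounding code, only the idea is taken from
      the text:

      #include <linux/bio.h>

      static struct bio *bounce_clone_sketch(struct bio *bio_orig, gfp_t gfp)
      {
              /* Passthrough bios must not be split and re-submitted through
               * generic_make_request(), so clone them outside any bio_set. */
              if (bio_is_passthrough(bio_orig))
                      return bio_clone_bioset(bio_orig, gfp, NULL);

              return bio_clone_bioset(bio_orig, gfp, fs_bio_set);
      }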
      
      Cc: NeilBrown <neilb@suse.com>
      Fixes: a8821f3f("block: Improvements to bounce-buffer handling")
      Tested-by: Michele Ballabio <barra_cuda@katamail.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      14cb0dc6
  21. 11 Nov 2017, 3 commits
    • block, nvme: Introduce blk_mq_req_flags_t · 9a95e4ef
      Committed by Bart Van Assche
      Several block layer and NVMe core functions accept a combination
      of BLK_MQ_REQ_* flags through the 'flags' argument but there is
      no verification at compile time whether the right type of block
      layer flags is passed. Make it possible for sparse to verify this.
      This patch does not change any functionality.
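
      The idea, sketched; the real declarations live in the block layer
      headers, and the prototype shown is just one example of an affected
      function and may differ in detail:

      typedef __u32 __bitwise blk_mq_req_flags_t;

      #define BLK_MQ_REQ_NOWAIT   ((__force blk_mq_req_flags_t)(1 << 0))

      /* Typed flags let sparse catch callers passing plain ints or flags
       * from another namespace. */
      struct request *blk_get_request(struct request_queue *q, unsigned int op,
                                      blk_mq_req_flags_t flags);
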
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Cc: linux-nvme@lists.infradead.org
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      9a95e4ef
    • block, scsi: Make SCSI quiesce and resume work reliably · 3a0a5299
      Committed by Bart Van Assche
      The contexts from which a SCSI device can be quiesced or resumed are:
      * Writing into /sys/class/scsi_device/*/device/state.
      * SCSI parallel (SPI) domain validation.
      * The SCSI device power management methods. See also scsi_bus_pm_ops.
      
      It is essential during suspend and resume that neither the filesystem
      state nor the filesystem metadata in RAM changes. This is why SCSI
      devices are quiesced while the hibernation image is being written or
      restored. The SCSI core quiesces devices through scsi_device_quiesce()
      and scsi_device_resume(). In the SDEV_QUIESCE state execution of
      non-preempt requests is deferred. This is realized by returning
      BLKPREP_DEFER from inside scsi_prep_state_check() for quiesced SCSI
      devices. Avoid a full queue preventing power management requests from
      being submitted by deferring allocation of non-preempt requests for
      devices in the quiesced state. This patch has been tested by running
      the following commands and by verifying that after each resume the
      fio job was still running:
      
      for ((i=0; i<10; i++)); do
        (
          cd /sys/block/md0/md &&
          while true; do
            [ "$(<sync_action)" = "idle" ] && echo check > sync_action
            sleep 1
          done
        ) &
        pids=($!)
        for d in /sys/class/block/sd*[a-z]; do
          bdev=${d#/sys/class/block/}
          hcil=$(readlink "$d/device")
          hcil=${hcil#../../../}
          echo 4 > "$d/queue/nr_requests"
          echo 1 > "/sys/class/scsi_device/$hcil/device/queue_depth"
          fio --name="$bdev" --filename="/dev/$bdev" --buffered=0 --bs=512 \
            --rw=randread --ioengine=libaio --numjobs=4 --iodepth=16       \
            --iodepth_batch=1 --thread --loops=$((2**31)) &
          pids+=($!)
        done
        sleep 1
        echo "$(date) Hibernating ..." >>hibernate-test-log.txt
        systemctl hibernate
        sleep 10
        kill "${pids[@]}"
        echo idle > /sys/block/md0/md/sync_action
        wait
        echo "$(date) Done." >>hibernate-test-log.txt
      done
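
      The allocation-time gating described above can be pictured with this
      simplified check; the helper is illustrative, the real logic sits in
      the block core's queue-enter path, and it relies on the preempt-only
      flag introduced by the next entry below:

      #include <linux/blk-mq.h>
      #include <linux/blkdev.h>

      static bool may_alloc_request_sketch(struct request_queue *q,
                                           blk_mq_req_flags_t flags)
      {
              /* While a quiesced device keeps its queue preempt-only, only
               * power management (preempt) allocations proceed; ordinary
               * requests wait instead of filling up the queue. */
              return !blk_queue_preempt_only(q) || (flags & BLK_MQ_REQ_PREEMPT);
      }
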
      Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      References: "I/O hangs after resuming from suspend-to-ram" (https://marc.info/?l=linux-block&m=150340235201348).
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Tested-by: Martin Steigerwald <martin@lichtvoll.de>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Cc: Martin K. Petersen <martin.petersen@oracle.com>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3a0a5299
    • block: Add the QUEUE_FLAG_PREEMPT_ONLY request queue flag · c9254f2d
      Committed by Bart Van Assche
      This flag will be used in the next patch to let the block layer
      core know whether or not a SCSI request queue has been quiesced.
      A quiesced SCSI queue only processes RQF_PREEMPT requests.
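
      A minimal sketch of how the flag could be driven around quiesce and
      resume; the set/clear helper names follow this series, while the call
      site is illustrative:

      #include <linux/blkdev.h>

      void scsi_quiesce_sketch(struct request_queue *q)
      {
              blk_set_preempt_only(q);        /* only RQF_PREEMPT requests pass */
              /* ... device quiesced; power management requests still progress ... */
              blk_clear_preempt_only(q);      /* back to normal operation */
      }
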
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Tested-by: Martin Steigerwald <martin@lichtvoll.de>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c9254f2d