  1. 20 Jul, 2023 · 1 commit
  2. 18 Nov, 2022 · 1 commit
  3. 08 Mar, 2022 · 1 commit
  4. 23 Feb, 2022 · 1 commit
  5. 03 Jun, 2021 · 1 commit
    • dm rq: fix double free of blk_mq_tag_set in dev remove after table load fails · 41266ba0
      Authored by Benjamin Block
      stable inclusion
      from stable-5.10.36
      commit 1cb02dc76f4c0a2749a02b26469512d6984252e9
      bugzilla: 51867
      CVE: NA
      
      --------------------------------
      
      commit 8e947c8f upstream.
      
      When loading a device-mapper table for a request-based mapped device,
      and the allocation/initialization of the blk_mq_tag_set for the device
      fails, a following device remove will cause a double free.
      
      E.g. (dmesg):
        device-mapper: core: Cannot initialize queue for request-based dm-mq mapped device
        device-mapper: ioctl: unable to set up device queue for new table.
        Unable to handle kernel pointer dereference in virtual kernel address space
        Failing address: 0305e098835de000 TEID: 0305e098835de803
        Fault in home space mode while using kernel ASCE.
        AS:000000025efe0007 R3:0000000000000024
        Oops: 0038 ilc:3 [#1] SMP
        Modules linked in: ... lots of modules ...
        Supported: Yes, External
        CPU: 0 PID: 7348 Comm: multipathd Kdump: loaded Tainted: G        W      X    5.3.18-53-default #1 SLE15-SP3
        Hardware name: IBM 8561 T01 7I2 (LPAR)
        Krnl PSW : 0704e00180000000 000000025e368eca (kfree+0x42/0x330)
                   R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
        Krnl GPRS: 000000000000004a 000000025efe5230 c1773200d779968d 0000000000000000
                   000000025e520270 000000025e8d1b40 0000000000000003 00000007aae10000
                   000000025e5202a2 0000000000000001 c1773200d779968d 0305e098835de640
                   00000007a8170000 000003ff80138650 000000025e5202a2 000003e00396faa8
        Krnl Code: 000000025e368eb8: c4180041e100       lgrl    %r1,25eba50b8
                   000000025e368ebe: ecba06b93a55       risbg   %r11,%r10,6,185,58
                  #000000025e368ec4: e3b010000008       ag      %r11,0(%r1)
                  >000000025e368eca: e310b0080004       lg      %r1,8(%r11)
                   000000025e368ed0: a7110001           tmll    %r1,1
                   000000025e368ed4: a7740129           brc     7,25e369126
                   000000025e368ed8: e320b0080004       lg      %r2,8(%r11)
                   000000025e368ede: b904001b           lgr     %r1,%r11
        Call Trace:
         [<000000025e368eca>] kfree+0x42/0x330
         [<000000025e5202a2>] blk_mq_free_tag_set+0x72/0xb8
         [<000003ff801316a8>] dm_mq_cleanup_mapped_device+0x38/0x50 [dm_mod]
         [<000003ff80120082>] free_dev+0x52/0xd0 [dm_mod]
         [<000003ff801233f0>] __dm_destroy+0x150/0x1d0 [dm_mod]
         [<000003ff8012bb9a>] dev_remove+0x162/0x1c0 [dm_mod]
         [<000003ff8012a988>] ctl_ioctl+0x198/0x478 [dm_mod]
         [<000003ff8012ac8a>] dm_ctl_ioctl+0x22/0x38 [dm_mod]
         [<000000025e3b11ee>] ksys_ioctl+0xbe/0xe0
         [<000000025e3b127a>] __s390x_sys_ioctl+0x2a/0x40
         [<000000025e8c15ac>] system_call+0xd8/0x2c8
        Last Breaking-Event-Address:
         [<000000025e52029c>] blk_mq_free_tag_set+0x6c/0xb8
        Kernel panic - not syncing: Fatal exception: panic_on_oops
      
      When allocation/initialization of the blk_mq_tag_set fails in
      dm_mq_init_request_queue(), it is uninitialized/freed, but the pointer
      is not reset to NULL; so when dev_remove() later gets into
      dm_mq_cleanup_mapped_device() it sees the pointer and tries to
      uninitialize and free it again.
      
      Fix this by setting the pointer to NULL in dm_mq_init_request_queue()
      error-handling. Also set it to NULL in dm_mq_cleanup_mapped_device().
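      A minimal sketch of the fix's shape (not the verbatim upstream diff;
      field and label names follow dm-rq.c as described above):

        void dm_mq_cleanup_mapped_device(struct mapped_device *md)
        {
                if (md->tag_set) {
                        blk_mq_free_tag_set(md->tag_set);
                        kfree(md->tag_set);
                        md->tag_set = NULL;     /* dev_remove() must not free it again */
                }
        }

        /* tail of dm_mq_init_request_queue()'s error handling */
        out_tag_set:
                blk_mq_free_tag_set(md->tag_set);
        out_kfree_tag_set:
                kfree(md->tag_set);
                md->tag_set = NULL;     /* make the cleanup above a no-op */
                return err;

      With the pointer cleared, the second traversal through
      dm_mq_cleanup_mapped_device() during device remove sees NULL and
      returns without touching freed memory.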
      
      Cc: <stable@vger.kernel.org> # 4.6+
      Fixes: 1c357a1e ("dm: allocate blk_mq_tag_set rather than embed in mapped_device")
      Signed-off-by: Benjamin Block <bblock@linux.ibm.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  6. 30 Sep, 2020 · 1 commit
  7. 20 Jul, 2020 · 1 commit
    • dm rq: don't call blk_mq_queue_stopped() in dm_stop_queue() · e766668c
      Authored by Ming Lei
      dm_stop_queue() only uses blk_mq_quiesce_queue() so it doesn't
      formally stop the blk-mq queue; therefore there is no point making the
      blk_mq_queue_stopped() check -- it will never be stopped.
      
      In addition, even though dm_stop_queue() does quiesce hw queues via
      blk_mq_quiesce_queue(), checking blk_queue_quiesced() to avoid an
      unnecessary quiesce isn't reliable either: the QUEUE_FLAG_QUIESCED
      flag is set before synchronize_rcu(), so dm_stop_queue() may be
      called while synchronize_rcu() from another blk_mq_quiesce_queue()
      is still in progress.
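      With the check removed, dm_stop_queue() reduces to an unconditional
      quiesce; a sketch of the resulting shape:

        void dm_stop_queue(struct request_queue *q)
        {
                /*
                 * No blk_mq_queue_stopped()/blk_queue_quiesced() pre-check:
                 * neither is a reliable guard here, and quiescing an
                 * already-quiesced queue is harmless.
                 */
                blk_mq_quiesce_queue(q);
        }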
      
      Fixes: 7b17c2f7 ("dm: Fix a race condition related to stopping and starting queues")
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  8. 08 Jul, 2020 · 1 commit
    • dm: do not use waitqueue for request-based DM · 85067747
      Authored by Ming Lei
      Given request-based DM now uses blk-mq's blk_mq_queue_inflight() to
      determine if outstanding IO has completed (and DM has no control over
      the blk-mq state machine used to track outstanding IO), it is unsafe
      to wake up the waiter (dm_wait_for_completion) before blk-mq has
      cleared a request's state bits (e.g. MQ_RQ_IN_FLIGHT or
      MQ_RQ_COMPLETE).  As such, dm_wait_for_completion() could be left
      waiting indefinitely if no other requests complete.
      
      Fix this by eliminating request-based DM's use of a waitqueue to wait
      for blk-mq requests to complete in dm_wait_for_completion().
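      The replacement is a plain poll on blk_mq_queue_inflight(); a sketch,
      assuming bio-based DM keeps its existing waitqueue-based helper
      (shown here as dm_wait_for_bios_completion()):

        static int dm_wait_for_completion(struct mapped_device *md, long task_state)
        {
                int r = 0;

                /*
                 * Bio-based DM controls its own IO accounting; only
                 * request-based (blk-mq) DM needs to poll.
                 */
                if (!queue_is_mq(md->queue))
                        return dm_wait_for_bios_completion(md, task_state);

                while (true) {
                        if (!blk_mq_queue_inflight(md->queue))
                                break;

                        if (signal_pending_state(task_state, current)) {
                                r = -EINTR;
                                break;
                        }

                        msleep(5);
                }

                return r;
        }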
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Depends-on: 3c94d83c ("blk-mq: change blk_mq_queue_busy() to blk_mq_queue_inflight()")
      Cc: stable@vger.kernel.org
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  9. 24 Jun, 2020 · 1 commit
  10. 30 May, 2020 · 1 commit
    • blk-mq: drain I/O when all CPUs in a hctx are offline · bf0beec0
      Authored by Ming Lei
      Most blk-mq drivers depend on managed IRQs' auto-affinity to set up
      their queue mapping. Thomas mentioned the following point [1]:
      
      "That was the constraint of managed interrupts from the very beginning:
      
       The driver/subsystem has to quiesce the interrupt line and the associated
       queue _before_ it gets shutdown in CPU unplug and not fiddle with it
       until it's restarted by the core when the CPU is plugged in again."
      
      However, the current blk-mq implementation doesn't quiesce a hw queue
      before the last CPU in the hctx is shut down.  Even worse,
      CPUHP_BLK_MQ_DEAD is a cpuhp state handled after the CPU is down, so
      there isn't any chance to quiesce the hctx before shutting down the
      CPU.
      
      Add new CPUHP_AP_BLK_MQ_ONLINE state to stop allocating from blk-mq hctxs
      where the last CPU goes away, and wait for completion of in-flight
      requests.  This guarantees that there is no inflight I/O before shutting
      down the managed IRQ.
      
      Add a new BLK_MQ_F_STACKING flag and set it for dm-rq and loop, so we
      don't need to wait for completion of in-flight requests from these
      drivers, avoiding a potential deadlock.  This is safe for stacking
      drivers because they do not use interrupts at all: their I/O
      completions are triggered by the underlying devices' I/O completions.
      A sketch of the dm-rq side follows.
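      For dm-rq the change amounts to one extra flag at tag-set setup; a
      sketch, with the surrounding assignments assumed from dm-rq.c context:

        /*
         * dm_mq_init_request_queue(): mark the queue as stacking so the
         * new CPUHP_AP_BLK_MQ_ONLINE handling does not drain/wait on it.
         */
        md->tag_set->ops = &dm_mq_ops;
        md->tag_set->queue_depth = dm_get_blk_mq_queue_depth();
        md->tag_set->numa_node = md->numa_node_id;
        md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;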
      
      [1] https://lore.kernel.org/linux-block/alpine.DEB.2.21.1904051331270.1802@nanos.tec.linutronix.de/
      
      [hch: different retry mechanism, merged two patches, minor cleanups]
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  11. 06 Sep, 2019 · 1 commit
    • block: Delay default elevator initialization · 737eb78e
      Authored by Damien Le Moal
      When elevator_init_mq() is called from blk_mq_init_allocated_queue(),
      the only information known about the device is the number of hardware
      queues, as the block device scan by the device driver is not yet
      complete for most drivers. The device type and the elevator's required
      features are not yet set, preventing correct selection of the default
      elevator most suitable for the device.
      
      This currently affects all multi-queue zoned block devices which default
      to the "none" elevator instead of the required "mq-deadline" elevator.
      These drives currently include host-managed SMR disks connected to a
      smartpqi HBA and null_blk block devices with zoned mode enabled.
      Upcoming NVMe Zoned Namespace devices will also be affected.
      
      Fix this by adding the boolean elevator_init argument to
      blk_mq_init_allocated_queue() to control the execution of
      elevator_init_mq(). Two cases exist:
      1) elevator_init = false is used for calls to
         blk_mq_init_allocated_queue() within blk_mq_init_queue(). In this
         case, a call to elevator_init_mq() is added to __device_add_disk(),
         resulting in the delayed initialization of the queue elevator
         after the device driver finished probing the device information. This
         effectively allows elevator_init_mq() access to more information
         about the device.
      2) elevator_init = true preserves the current behavior of initializing
         the elevator directly from blk_mq_init_allocated_queue(). This case
         is used for the special request based DM devices where the device
         gendisk is created before the queue initialization and device
         information (e.g. queue limits) is already known when the queue
         initialization is executed.
      
      Additionally, to make sure that the elevator initialization is never
      done while requests are in-flight (there should be none when the device
      driver calls device_add_disk()), freeze and quiesce the device request
      queue before calling blk_mq_init_sched() in elevator_init_mq().
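      A sketch of the control flow the two cases describe (body abbreviated;
      only the new branching is of interest):

        struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
                                                          struct request_queue *q,
                                                          bool elevator_init)
        {
                /* ... hctx mapping and queue setup ... */

                if (elevator_init)
                        elevator_init_mq(q);    /* case 2: request-based DM */

                return q;
        }

      For case 1, elevator_init_mq() is instead called from
      __device_add_disk(), once the driver has finished probing the device.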
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  12. 05 Aug, 2019 · 1 commit
    • blk-mq: add callback of .cleanup_rq · 226b4fc7
      Authored by Ming Lei
      SCSI maintains its own driver private data hooked off of each SCSI
      request, and that private data won't be freed after scsi_queue_rq()
      returns BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE. An upper layer driver
      (e.g. dm-rq) may need to retry these SCSI requests, before SCSI has
      fully dispatched them, due to a lower level SCSI driver's resource
      limitation identified in scsi_queue_rq(). Currently SCSI's per-request
      private data is leaked when the upper layer driver (dm-rq) frees and
      then retries these requests in response to BLK_STS_RESOURCE or
      BLK_STS_DEV_RESOURCE returns from scsi_queue_rq().
      
      This usecase is so specialized that it doesn't warrant training an
      existing blk-mq interface (e.g. blk_mq_free_request) to allow SCSI to
      account for freeing its driver private data -- doing so would add an
      extra branch for handling a special case that all other consumers of
      SCSI (and blk-mq) won't ever need to worry about.
      
      So the most pragmatic way forward is to delegate freeing SCSI driver
      private data to the upper layer driver (dm-rq).  Do so by adding
      new .cleanup_rq callback and calling a new blk_mq_cleanup_rq() method
      from dm-rq.  A following commit will implement the .cleanup_rq() hook
      in scsi_mq_ops.
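      The blk-mq side is a thin wrapper around the new hook; a sketch of
      the helper's shape:

        /* include/linux/blk-mq.h */
        static inline void blk_mq_cleanup_rq(struct request *rq)
        {
                if (rq->q->mq_ops->cleanup_rq)
                        rq->q->mq_ops->cleanup_rq(rq);
        }

      dm-rq calls this on the clone before freeing and requeueing it,
      giving SCSI a chance to release its per-request data.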
      
      Cc: Ewan D. Milne <emilne@redhat.com>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: <stable@vger.kernel.org>
      Fixes: 396eaf21 ("blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback")
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  13. 10 Jul, 2019 · 1 commit
  14. 26 Apr, 2019 · 1 commit
    • dm mpath: fix missing call of path selector type->end_io · 5de719e3
      Authored by Yufen Yu
      After commit 396eaf21 ("blk-mq: improve DM's blk-mq IO merging via
      blk_insert_cloned_request feedback"), map_request() will requeue the
      tio when the issued clone request returns BLK_STS_RESOURCE or
      BLK_STS_DEV_RESOURCE.

      Thus, if the device driver status is error, a tio may be requeued
      multiple times until the return value is not DM_MAPIO_REQUEUE.  That
      means type->start_io may be called multiple times, while type->end_io
      is only called when the IO completes.
      
      In fact, even without commit 396eaf21, a setup_clone() failure can
      also cause a tio requeue and the same missed call to type->end_io.

      The service-time path selector selects a path based on in_flight_size,
      which is increased by st_start_io() and decreased by st_end_io().
      Missed calls to st_end_io() corrupt the in_flight_size count and cause
      the selector to make the wrong choice.  The queue-length path selector
      is affected in the same way.
      
      To fix the problem, call type->end_io in ->release_clone_rq before the
      tio is requeued.  map_info is passed to ->release_clone_rq() for the
      map_request() error paths that result in a requeue.
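      A sketch of the dm-mpath side, assuming the helper names from
      dm-mpath.c (get_mpio() and friends):

        static void multipath_release_clone(struct request *clone,
                                            union map_info *map_context)
        {
                /*
                 * A non-NULL map_context means we came from a map_request()
                 * error path that will requeue: undo the path selector's
                 * start_io accounting.
                 */
                if (unlikely(map_context)) {
                        struct dm_mpath_io *mpio = get_mpio(map_context);
                        struct pgpath *pgpath = mpio->pgpath;

                        if (pgpath && pgpath->pg->ps.type->end_io)
                                pgpath->pg->ps.type->end_io(&pgpath->pg->ps,
                                                            &pgpath->path,
                                                            mpio->nr_bytes);
                }

                blk_put_request(clone);
        }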
      
      Fixes: 396eaf21 ("blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback")
      Cc: stable@vger.kernel.org
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  15. 05 Apr, 2019 · 1 commit
    • dm: disable DISCARD if the underlying storage no longer supports it · bcb44433
      Authored by Mike Snitzer
      Some storage devices report support for discard commands (e.g.
      WRITE_SAME_16 with unmap) but reject any discard actually sent to
      them.  This is a clear storage firmware bug, but it doesn't change the
      fact that, should a program cause discards to be sent to a multipath
      device layered on such buggy storage, all paths can end up failed at
      the same time, causing possible I/O loss.
      
      The first discard to a path will fail with Illegal Request, Invalid
      field in cdb, e.g.:
       kernel: sd 8:0:8:19: [sdfn] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
       kernel: sd 8:0:8:19: [sdfn] tag#0 Sense Key : Illegal Request [current]
       kernel: sd 8:0:8:19: [sdfn] tag#0 Add. Sense: Invalid field in cdb
       kernel: sd 8:0:8:19: [sdfn] tag#0 CDB: Write same(16) 93 08 00 00 00 00 00 a0 08 00 00 00 80 00 00 00
       kernel: blk_update_request: critical target error, dev sdfn, sector 10487808
      
      The SCSI layer converts this to the BLK_STS_TARGET error number, the
      sd driver disables its support for discard on this path, and because
      of the BLK_STS_TARGET error multipath fails the discard without
      failing any path or retrying down a different path.  But subsequent
      discards can still cause path failures: any discard sent to a path
      which already failed a discard ends up failing with EIO from
      blk_cloned_rq_check_limits with an "over max size limit" error, since
      the discard limit was set to 0 by the sd driver for that path.  As the
      error is EIO, this now fails the path, and multipath tries to send the
      discard down the next path.  This cycle continues as discards are sent
      until all paths fail.
      
      Fix this by training DM core to disable DISCARD if the underlying
      storage already did so.
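      A sketch of the DM-core helper this adds (names follow the commit's
      description):

        static void disable_discard(struct mapped_device *md)
        {
                struct queue_limits *limits = dm_get_queue_limits(md);

                /*
                 * The device rejected a discard it claimed to support:
                 * stop issuing discards to it entirely.
                 */
                limits->max_discard_sectors = 0;
                blk_queue_flag_clear(QUEUE_FLAG_DISCARD, md->queue);
        }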
      
      Also, fix the branching in dm_done() and clone_endio() to reflect the
      mutually exclusive nature of the IO operations in question.
      
      Cc: stable@vger.kernel.org
      Reported-by: David Jeffery <djeffery@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  16. 06 Mar, 2019 · 1 commit
  17. 15 Feb, 2019 · 1 commit
  18. 07 Feb, 2019 · 1 commit
    • dm: add memory barrier before waitqueue_active · 645efa84
      Authored by Mikulas Patocka
      Block core changes to switch bio-based IO accounting to be percpu had a
      side-effect of altering DM core to now rely on calling waitqueue_active
      (in both bio-based and request-based) to check if another task is in
      dm_wait_for_completion().
      
      A memory barrier is needed before calling waitqueue_active().  DM core
      doesn't piggyback on a preceding memory barrier so it must explicitly
      use its own.
      
      For more details on why using waitqueue_active() without a preceding
      barrier is unsafe, please see the comment before the waitqueue_active()
      definition in include/linux/wait.h.
      
      Add the missing memory barrier by switching to using wq_has_sleeper().
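      wq_has_sleeper() is waitqueue_active() plus the barrier that was
      missing; its shape in include/linux/wait.h:

        static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
        {
                /*
                 * Pairs with the barrier the waiter executes between
                 * queueing itself and re-checking the wait condition;
                 * without smp_mb(), waitqueue_active() can miss a waiter
                 * that is about to sleep.
                 */
                smp_mb();
                return waitqueue_active(wq_head);
        }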
      
      Fixes: 6f757231 ("dm: remove the pending IO accounting")
      Fixes: c4576aed ("dm: fix request-based dm's use of dm_wait_for_completion")
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  19. 18 Dec, 2018 · 2 commits
  20. 11 Dec, 2018 · 1 commit
  21. 10 Dec, 2018 · 1 commit
  22. 16 Nov, 2018 · 1 commit
  23. 11 Oct, 2018 · 1 commit
  24. 31 May, 2018 · 1 commit
  25. 09 May, 2018 · 1 commit
    • block: consolidate struct request timestamp fields · 522a7775
      Authored by Omar Sandoval
      Currently, struct request has four timestamp fields:
      
      - A start time, set at get_request time, in jiffies, used for iostats
      - An I/O start time, set at start_request time, in ktime nanoseconds,
        used for blk-stats (i.e., wbt, kyber, hybrid polling)
      - Another start time and another I/O start time, used for cfq and bfq
      
      These can all be consolidated into one start time and one I/O start
      time, both in ktime nanoseconds, shaving off up to 16 bytes from struct
      request depending on the kernel config.
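      After the consolidation, struct request carries just the two
      ktime-based fields (a sketch; all other members elided):

        struct request {
                /* ... */
                u64 start_time_ns;      /* set at get_request time; iostats, cfq/bfq */
                u64 io_start_time_ns;   /* set at start_request time; wbt, kyber, ... */
                /* ... */
        };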
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  26. 31 Jan, 2018 · 1 commit
    • blk-mq: introduce BLK_STS_DEV_RESOURCE · 86ff7c2a
      Authored by Ming Lei
      This status is returned from a driver to the block layer when a
      device-related resource is unavailable but the driver can guarantee
      that IO dispatch will be triggered again once the resource becomes
      available.
      
      Convert some drivers to return BLK_STS_DEV_RESOURCE.  Also, if driver
      returns BLK_STS_RESOURCE and SCHED_RESTART is set, rerun queue after
      a delay (BLK_MQ_DELAY_QUEUE) to avoid IO stalls.  BLK_MQ_DELAY_QUEUE is
      3 ms because both scsi-mq and nvmefc are using that magic value.
      
      If a driver can make sure there is in-flight IO, it is safe to return
      BLK_STS_DEV_RESOURCE because:
      
      1) If all in-flight IOs complete before examining SCHED_RESTART in
      blk_mq_dispatch_rq_list(), SCHED_RESTART must be cleared, so queue
      is run immediately in this case by blk_mq_dispatch_rq_list();
      
      2) if there is any in-flight IO after/when examining SCHED_RESTART
      in blk_mq_dispatch_rq_list():
      - if SCHED_RESTART isn't set, queue is run immediately as handled in 1)
      - otherwise, this request will be dispatched after any in-flight IO is
        completed via blk_mq_sched_restart()
      
      3) if SCHED_RESTART is set concurrently in context because of
      BLK_STS_RESOURCE, blk_mq_delay_run_hw_queue() will cover the above two
      cases and make sure an IO hang can be avoided.
      
      One invariant is that queue will be rerun if SCHED_RESTART is set.
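      A hypothetical driver illustrating the contract (my_queue_rq() and
      the resource predicate are made up for the example):

        static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
                                        const struct blk_mq_queue_data *bd)
        {
                struct my_dev *dev = hctx->queue->queuedata;

                if (!my_dev_resource_available(dev)) {
                        /*
                         * Only safe because our in-flight IO guarantees a
                         * future queue rerun via SCHED_RESTART, per the
                         * cases 1)-3) above.
                         */
                        return BLK_STS_DEV_RESOURCE;
                }

                /* ... issue bd->rq to hardware ... */
                return BLK_STS_OK;
        }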
      Suggested-by: Jens Axboe <axboe@kernel.dk>
      Tested-by: Laurence Oberman <loberman@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  27. 30 Jan, 2018 · 2 commits
  28. 18 Jan, 2018 · 1 commit
    • blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback · 396eaf21
      Authored by Ming Lei
      blk_insert_cloned_request() is called in the fast path of a dm-rq driver
      (e.g. blk-mq request-based DM mpath).  blk_insert_cloned_request() uses
      blk_mq_request_bypass_insert() to directly append the request to the
      blk-mq hctx->dispatch_list of the underlying queue.
      
      1) This way isn't efficient enough because the hctx spinlock is always
      used.
      
      2) With blk_insert_cloned_request(), we completely bypass underlying
      queue's elevator and depend on the upper-level dm-rq driver's elevator
      to schedule IO.  But dm-rq currently can't get the underlying queue's
      dispatch feedback at all.  Without knowing whether a request was issued
      or not (e.g. due to underlying queue being busy) the dm-rq elevator will
      not be able to provide effective IO merging (as a side-effect of dm-rq
      currently blindly destaging a request from its elevator only to requeue
      it after a delay, which kills any opportunity for merging).  This
      obviously causes very bad sequential IO performance.
      
      Fix this by updating blk_insert_cloned_request() to use
      blk_mq_request_direct_issue().  blk_mq_request_direct_issue() allows a
      request to be issued directly to the underlying queue and returns the
      dispatch feedback (blk_status_t).  If blk_mq_request_direct_issue()
      returns BLK_STS_RESOURCE the dm-rq driver will now use DM_MAPIO_REQUEUE
      to _not_ destage the request, thereby preserving the opportunity to
      merge IO.
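      A sketch of the resulting dispatch path (helper named per the commit
      text; checks and the legacy path elided):

        blk_status_t blk_insert_cloned_request(struct request_queue *q,
                                               struct request *rq)
        {
                /* ... limit checks ... */

                if (q->mq_ops)
                        /*
                         * Issue straight to the underlying queue and return
                         * the dispatch result, so dm-rq can requeue on
                         * BLK_STS_RESOURCE without destaging.
                         */
                        return blk_mq_request_direct_issue(rq);

                /* ... legacy request_fn insertion ... */
                return BLK_STS_OK;
        }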
      
      With this, request-based DM's blk-mq sequential IO performance is vastly
      improved (as much as 3X in mpath/virtio-scsi testing).
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      [blk-mq.c changes heavily influenced by Ming Lei's initial solution, but
      they were refactored to make them less fragile and easier to read/review]
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  29. 15 Jan, 2018 · 1 commit
    • dm: fix incomplete request_queue initialization · c100ec49
      Authored by Mike Snitzer
      DM is no longer prone to having its request_queue be improperly
      initialized.
      
      Summary of changes:
      
      - defer DM's blk_register_queue() from add_disk()-time until
        dm_setup_md_queue() by using add_disk_no_queue_reg() in alloc_dev().
      
      - dm_setup_md_queue() is updated to fully initialize DM's request_queue
        (_after_ all table loads have occurred and the request_queue's type,
        features and limits are known).
      
      A very welcome side-effect of these changes is that DM no longer needs
      to:
      1) backfill the "mq" sysfs entry (because historically DM didn't
      initialize the request_queue to use blk-mq until _after_
      blk_register_queue() was called via add_disk()).
      2) call elv_register_queue() to get a .request_fn request-based DM
      device's "iosched" exposed in sysfs.
      
      In addition, blk-mq debugfs support is now made available because
      request-based DM's blk-mq request_queue is now properly initialized
      before dm_setup_md_queue() calls blk_register_queue().
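      A sketch of the two changed call sites, as summarized above (bodies
      abbreviated; signatures assumed from dm.c of that era):

        static struct mapped_device *alloc_dev(int minor)
        {
                /* ... */
                add_disk_no_queue_reg(md->disk);        /* defer queue registration */
                /* ... */
        }

        int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
        {
                /* ... type-specific queue initialization ... */
                blk_register_queue(md->disk);   /* queue is now fully formed */
                return 0;
        }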
      
      These changes also stave off the need to introduce new DM-specific
      workarounds in block core, e.g. this proposal:
      https://patchwork.kernel.org/patch/10067961/
      
      In the end DM devices should be less unicorn in nature (relative to
      initialization and availability of block core infrastructure provided by
      the request_queue).
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Tested-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  30. 06 Oct, 2017 · 1 commit
  31. 28 Aug, 2017 · 2 commits
  32. 19 Jun, 2017 · 1 commit
  33. 09 Jun, 2017 · 3 commits
    • block: switch bios to blk_status_t · 4e4cbee9
      Authored by Christoph Hellwig
      Replace bi_error with a new bi_status to allow for a clear conversion.
      Note that device mapper overloaded bi_error with a private value, which
      we'll have to keep around at least for now and thus propagate to a
      proper blk_status_t value.
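      The bio-side change, as a sketch (all other members elided):

        struct bio {
                /* ... */
                blk_status_t    bi_status;      /* replaces int bi_error */
                /* ... */
        };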
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: switch ->queue_rq return value to blk_status_t · fc17b653
      Authored by Christoph Hellwig
      Use the same values for request completion errors as for the return
      value from ->queue_rq.  BLK_STS_RESOURCE is special-cased to cause a
      requeue, and all the others are completed as-is.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: introduce new block status code type · 2a842aca
      Authored by Christoph Hellwig
      Currently we use normal Linux errno values in the block layer, and
      while we accept any error a few have overloaded magic meanings.  This
      patch instead introduces a new blk_status_t value that holds block
      layer specific status codes and explicitly explains their meaning.
      Helpers to convert from and to the previous special meanings are
      provided for now, but I suspect we want to get rid of them in the long
      run - those drivers that have an errno input (e.g. networking) usually
      get errnos that don't know about the special block layer overloads,
      and similarly returning them to userspace will usually return
      something that strictly speaking isn't correct for file system
      operations, but that's left as an exercise for later.

      For now the set of errors is a very limited set that closely
      corresponds to the previous overloaded errno values, but there is some
      low-hanging fruit to improve it.
      
      blk_status_t (ab)uses the sparse __bitwise annotations to allow for sparse
      typechecking, so that we can easily catch places passing the wrong values.
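      The type itself, as a sketch (the numeric values shown are
      illustrative of the scheme, not a complete or authoritative list):

        typedef u8 __bitwise blk_status_t;

        #define BLK_STS_OK              0
        #define BLK_STS_NOTSUPP         ((__force blk_status_t)1)
        #define BLK_STS_RESOURCE        ((__force blk_status_t)9)
        #define BLK_STS_IOERR           ((__force blk_status_t)10)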
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  34. 16 May, 2017 · 1 commit
  35. 02 May, 2017 · 1 commit