1. 06 Oct, 2016 1 commit
  2. 25 Sep, 2016 2 commits
  3. 24 Sep, 2016 9 commits
    • nvmet: Make dsm number of ranges zero based · 2e5d0baa
      Alexander Solganik authored
      The NVMe specification defines the DSM number of ranges as a
      zero-based value; treating it as one-based caused the nvmet request
      data length to be incorrect.
      Signed-off-by: Alexander Solganik <sashas@lightbitslabs.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
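      A minimal sketch of the fix this implies, assuming the request data
      length is derived from the DSM range count (the nvmet_dsm_len()
      helper and field names follow the upstream target code, but treat
      the details as illustrative):

          /* dsm.nr is zero based: a value of 0 means one range, so add 1
           * before multiplying by the per-range descriptor size.
           */
          static inline u32 nvmet_dsm_len(struct nvmet_req *req)
          {
                  return (le32_to_cpu(req->cmd->dsm.nr) + 1) *
                          sizeof(struct nvme_dsm_range);
          }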
    • nvmet: Use direct IO for writes · 9b349b08
      Sagi Grimberg authored
      The target is designed to work with high-end devices where direct IO
      makes perfect sense. Without REQ_SYNC set on writes, we noticed that
      submission was deferred to kblockd, costing a context switch instead
      of going directly to the device.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
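      A sketch of the kind of change this describes, assuming the target
      builds its own write bios (flag and helper names follow the 4.8-era
      block API; treat the exact spot as illustrative):

          /* Mark target writes synchronous so they are submitted inline
           * rather than deferred to kblockd, avoiding a context switch.
           */
          if (req->cmd->rw.opcode == nvme_cmd_write) {
                  op = REQ_OP_WRITE;
                  op_flags = WRITE_ODIRECT;       /* includes REQ_SYNC */
          } else {
                  op = REQ_OP_READ;
          }
          bio_set_op_attrs(bio, op, op_flags);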
    • admin-cmd: Added smart-log command support. · 2d79c7dc
      Chaitanya Kulkarni authored
      This patch implements support for the smart-log command
      (NVM Express 1.2.1, section 5.10.1.2, SMART / Health Information
      (Log Identifier 02h)) on the target for NVMe over Fabrics.

      In the current implementation the host can retrieve the following
      statistics:
      1. Data Units Read.
      2. Data Units Written.
      3. Host Read Commands.
      4. Host Write Commands.
      Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
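      A sketch of how those four counters can be derived, assuming the
      target reads them from the backing block device's part_stat
      accounting (helper names such as nvmet_find_namespace() mirror the
      target code, but the wiring here is illustrative):

          static u16 nvmet_get_smart_log_nsid(struct nvmet_req *req,
                                              struct nvme_smart_log *slog)
          {
                  struct nvmet_ns *ns;
                  u64 host_reads, host_writes;
                  u64 data_units_read, data_units_written;

                  ns = nvmet_find_namespace(req->sq->ctrl,
                                            req->cmd->get_log_page.nsid);
                  if (!ns)
                          return NVME_SC_INVALID_NS;

                  /* ios[] counts completed commands; sectors[] counts
                   * 512-byte units transferred.
                   */
                  host_reads = part_stat_read(ns->bdev->bd_part, ios[READ]);
                  data_units_read = part_stat_read(ns->bdev->bd_part, sectors[READ]);
                  host_writes = part_stat_read(ns->bdev->bd_part, ios[WRITE]);
                  data_units_written = part_stat_read(ns->bdev->bd_part, sectors[WRITE]);

                  put_unaligned_le64(host_reads, &slog->host_reads[0]);
                  put_unaligned_le64(data_units_read, &slog->data_units_read[0]);
                  put_unaligned_le64(host_writes, &slog->host_writes[0]);
                  put_unaligned_le64(data_units_written, &slog->data_units_written[0]);

                  nvmet_put_namespace(ns);
                  return NVME_SC_SUCCESS;
          }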
    • nvme-fabrics: Add host_traddr options field to host infrastructure · 478bcb93
      James Smart authored
      Add the host_traddr field to allow specification of the host-port
      connection info for the transport. It will be used by the FC
      transport.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Acked-by: Johannes Thumshirn <jth@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
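      A sketch of how such an option is typically wired into the fabrics
      option parser (the NVMF_OPT_HOST_TRADDR token and opts->host_traddr
      field mirror the existing traddr handling; treat the details as
      illustrative):

          case NVMF_OPT_HOST_TRADDR:
                  p = match_strdup(args);
                  if (!p) {
                          ret = -ENOMEM;
                          goto out;
                  }
                  /* Host-side transport address, e.g. the local FC port
                   * to connect from.
                   */
                  opts->host_traddr = p;
                  break;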
    • nvme-fabrics: revise host transport option descriptions · 4a9f05c5
      James Smart authored
      Revise some of the comments so they are not so ethernet-network
      centric.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Acked-by: Johannes Thumshirn <jth@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
    • nvme-fabrics: rework nvmf_get_address() for variable options · 0fe51ff2
      James Smart authored
      Revise the nvmf_get_address() string to account for not all options
      being present.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Acked-by: Johannes Thumshirn <jth@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
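      A sketch of the reworked formatting this implies: emit only the
      options that were actually supplied, separating them with commas
      (field names mirror drivers/nvme/host/fabrics.c; treat the body as
      illustrative):

          static int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
          {
                  int len = 0;

                  /* Each option is printed only if its bit is set in the
                   * mask of options the user actually passed.
                   */
                  if (ctrl->opts->mask & NVMF_OPT_TRADDR)
                          len += scnprintf(buf + len, size - len, "%straddr=%s",
                                          (len) ? "," : "", ctrl->opts->traddr);
                  if (ctrl->opts->mask & NVMF_OPT_TRSVCID)
                          len += scnprintf(buf + len, size - len, "%strsvcid=%s",
                                          (len) ? "," : "", ctrl->opts->trsvcid);
                  if (ctrl->opts->mask & NVMF_OPT_HOST_TRADDR)
                          len += scnprintf(buf + len, size - len, "%shost_traddr=%s",
                                          (len) ? "," : "", ctrl->opts->host_traddr);
                  len += scnprintf(buf + len, size - len, "\n");

                  return len;
          }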
    • nbd: use BLK_MQ_F_BLOCKING · 005043ac
      Josef Bacik authored
      We take a mutex when sending commands and send data over the network,
      so we need queue_rq to be called asynchronously, from a context that
      is allowed to block.
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Fixes: fd8383fd ("nbd: convert to blkmq")
      Signed-off-by: Jens Axboe <axboe@fb.com>
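      A sketch of the flag change, assuming a 4.8-era tag-set
      initialisation in nbd (the surrounding field values are
      illustrative):

          nbd_dev[i].tag_set.ops = &nbd_mq_ops;
          nbd_dev[i].tag_set.nr_hw_queues = 1;
          nbd_dev[i].tag_set.queue_depth = 128;
          nbd_dev[i].tag_set.numa_node = NUMA_NO_NODE;
          nbd_dev[i].tag_set.cmd_size = sizeof(struct nbd_cmd);
          /* BLK_MQ_F_BLOCKING: queue_rq may sleep (it takes a mutex and
           * does socket I/O), so blk-mq calls it from a worker thread
           * instead of an atomic context.
           */
          nbd_dev[i].tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
                                     BLK_MQ_F_SG_MERGE | BLK_MQ_F_BLOCKING;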
    • blkcg: Annotate blkg_hint correctly · 55679c8d
      Bart Van Assche authored
      Keep sparse from complaining about blkg_hint manipulations.
      
      Fixes: a637120e ("blkcg: use radix tree to index blkgs from blkcg")
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
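      A sketch of the annotation involved, assuming blkg_hint is the
      RCU-protected lookup cache in struct blkcg (the struct is abridged
      and illustrative):

          struct blkcg {
                  struct cgroup_subsys_state      css;
                  spinlock_t                      lock;

                  struct radix_tree_root          blkg_tree;
                  /* __rcu tells sparse this pointer must go through
                   * rcu_dereference()/rcu_assign_pointer(), silencing
                   * the address-space warnings.
                   */
                  struct blkcg_gq __rcu           *blkg_hint;
                  struct hlist_head               blkg_list;
          };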
    • cfq: fix starvation of asynchronous writes · 3932a86b
      Glauber Costa authored
      While debugging timeouts happening in my application workload
      (ScyllaDB), I observed calls to open() taking a long time, ranging
      from 2 seconds (the first ones long enough to time out my
      application) to more than 30 seconds.
      
      The problem seems to happen because XFS may block on pending metadata
      updates under certain circumstances, which is confirmed by the
      following backtrace taken with the offcputime tool (iovisor/bcc):
      
          ffffffffb90c57b1 finish_task_switch
          ffffffffb97dffb5 schedule
          ffffffffb97e310c schedule_timeout
          ffffffffb97e1f12 __down
          ffffffffb90ea821 down
          ffffffffc046a9dc xfs_buf_lock
          ffffffffc046abfb _xfs_buf_find
          ffffffffc046ae4a xfs_buf_get_map
          ffffffffc046babd xfs_buf_read_map
          ffffffffc0499931 xfs_trans_read_buf_map
          ffffffffc044a561 xfs_da_read_buf
          ffffffffc0451390 xfs_dir3_leaf_read.constprop.16
          ffffffffc0452b90 xfs_dir2_leaf_lookup_int
          ffffffffc0452e0f xfs_dir2_leaf_lookup
          ffffffffc044d9d3 xfs_dir_lookup
          ffffffffc047d1d9 xfs_lookup
          ffffffffc0479e53 xfs_vn_lookup
          ffffffffb925347a path_openat
          ffffffffb9254a71 do_filp_open
          ffffffffb9242a94 do_sys_open
          ffffffffb9242b9e sys_open
          ffffffffb97e42b2 entry_SYSCALL_64_fastpath
          00007fb0698162ed [unknown]
      
      Inspecting my run with blktrace, I can see that the xfsaild kthread
      exhibits very high "Dispatch wait" times, in the tens-of-seconds
      range and consistent with the open() times I saw in that run.
      
      Still from the blktrace output, after searching a bit we can identify
      the request that wasn't dispatched:
      
        8,0   11      152    81.092472813   804  A  WM 141698288 + 8 <- (8,1) 141696240
        8,0   11      153    81.092472889   804  Q  WM 141698288 + 8 [xfsaild/sda1]
        8,0   11      154    81.092473207   804  G  WM 141698288 + 8 [xfsaild/sda1]
        8,0   11      206    81.092496118   804  I  WM 141698288 + 8 (   22911) [xfsaild/sda1]
        <==== 'I' means Inserted (into the IO scheduler) ===================================>
        8,0    0   289372    96.718761435     0  D  WM 141698288 + 8 (15626265317) [swapper/0]
        <==== Only 15s later the CFQ scheduler dispatches the request ======================>
      
      As we can see above, in this particular example CFQ took 15 seconds to dispatch
      this request. Going back to the full trace, we can see that the xfsaild queue
      had plenty of opportunity to run, and it was selected as the active queue many
      times. It would just always be preempted by something else (example):
      
        8,0    1        0    81.117912979     0  m   N cfq1618SN / insert_request
        8,0    1        0    81.117913419     0  m   N cfq1618SN / add_to_rr
        8,0    1        0    81.117914044     0  m   N cfq1618SN / preempt
        8,0    1        0    81.117914398     0  m   N cfq767A  / slice expired t=1
        8,0    1        0    81.117914755     0  m   N cfq767A  / resid=40
        8,0    1        0    81.117915340     0  m   N / served: vt=1948520448 min_vt=1948520448
        8,0    1        0    81.117915858     0  m   N cfq767A  / sl_used=1 disp=0 charge=0 iops=1 sect=0
      
      where cfq767 is the xfsaild queue and cfq1618 corresponds to one of the ScyllaDB
      IO dispatchers.
      
      The requests preempting the xfsaild queue are synchronous requests. That's a
      characteristic of ScyllaDB workloads, as we only ever issue O_DIRECT requests.
      While it can be argued that preempting ASYNC requests in favor of SYNC is part
      of the CFQ logic, I don't believe that doing so for 15+ seconds is anyone's
      goal.
      
      Moreover, unless I am misunderstanding something, that breaks the expectation
      set by the "fifo_expire_async" tunable, which in my system is set to the
      default.
      
      Looking at the code, it seems to me that the issue is that after we make
      an async queue active, there is no guarantee that it will execute any request.
      
      When the queue itself tests cfq_may_dispatch(), it can bail if it
      sees SYNC requests in flight. An incoming request from another queue
      can also preempt it in such a situation before it has had the chance
      to execute anything (as seen in the trace above).
      
      This patch sets the must_dispatch flag if we notice that we have
      requests that are already fifo_expired. This flag is always cleared
      after cfq_dispatch_request() returns from cfq_dispatch_requests(), so
      it won't pin the queue for subsequent requests (unless they are
      themselves expired).
      
      Care is taken during preempt to still allow rt requests to preempt us
      regardless.
      
      Testing my workload with this patch applied produces much better results.
      From the application side I see no timeouts, and the open() latency histogram
      generated by systemtap looks much better, with the worst outlier at 131ms:
      
      Latency histogram of xfs_buf_lock acquisition (microseconds):
       value |-------------------------------------------------- count
           0 |                                                     11
           1 |@@@@                                                161
           2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  1966
           4 |@                                                    54
           8 |                                                     36
          16 |                                                      7
          32 |                                                      0
          64 |                                                      0
             ~
        1024 |                                                      0
        2048 |                                                      0
        4096 |                                                      1
        8192 |                                                      1
       16384 |                                                      2
       32768 |                                                      0
       65536 |                                                      0
      131072 |                                                      1
      262144 |                                                      0
      524288 |                                                      0
      Signed-off-by: Glauber Costa <glauber@scylladb.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: linux-block@vger.kernel.org
      CC: linux-kernel@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
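      A sketch of the flag flow described above (hook points follow
      block/cfq-iosched.c, but treat the exact placement as illustrative):

          /* In cfq_dispatch_request(): if the head-of-fifo request has
           * already fifo-expired, pin the queue so it cannot be preempted
           * before it dispatches at least one request.
           */
          rq = cfq_check_fifo(cfqq);
          if (rq)
                  cfq_mark_cfqq_must_dispatch(cfqq);

          /* In cfq_should_preempt(): RT traffic may still preempt us,
           * ordinary sync traffic may not while must_dispatch is set.
           */
          if (cfq_class_rt(new_cfqq) && !cfq_class_rt(cfqq))
                  return true;
          if (cfq_cfqq_must_dispatch(cfqq))
                  return false;

          /* In cfq_dispatch_requests(): the flag never outlives one pass,
           * so subsequent requests don't keep the queue pinned.
           */
          cfq_clear_cfqq_must_dispatch(cfqq);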
  4. 23 Sep, 2016 2 commits
  5. 22 Sep, 2016 2 commits
  6. 21 Sep, 2016 7 commits
    • lightnvm: propagate device_add() error code · 1e3aeae4
      Arnd Bergmann authored
      device_add() may fail, and all callers are supposed to check the
      return value, but one new user in lightnvm doesn't:
      
      drivers/lightnvm/sysfs.c: In function 'nvm_sysfs_register_dev':
      drivers/lightnvm/sysfs.c:184:2: error: ignoring return value of 'device_add',
        declared with attribute warn_unused_result [-Werror=unused-result]
      
      This changes the caller to propagate any error codes, which avoids
      the warning.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Fixes: 38c9e260b9f9 ("lightnvm: expose device geometry through sysfs")
      Signed-off-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
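      A sketch of the propagation, assuming the registration helper in
      drivers/lightnvm/sysfs.c returns an int (the surrounding setup is
      illustrative):

          int nvm_sysfs_register_dev(struct nvm_dev *dev)
          {
                  int ret;

                  if (!dev->parent_dev)
                          return 0;

                  dev->dev.parent = dev->parent_dev;
                  dev_set_name(&dev->dev, "%s", dev->name);
                  device_initialize(&dev->dev);
                  /* device_add() is __must_check: hand any failure back
                   * to the caller instead of silently ignoring it.
                   */
                  ret = device_add(&dev->dev);
                  if (ret)
                          pr_err("nvm: device_add failed for %s\n", dev->name);

                  return ret;
          }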
    • lightnvm: expose device geometry through sysfs · 40267efd
      Simon A. F. Lund authored
      For a host to access an Open-Channel SSD, it has to know its geometry,
      so that it writes and reads at the appropriate device bounds.
      
      Currently, the geometry information is kept within the kernel, and not
      exported to user-space for consumption. This patch exposes the
      configuration through sysfs and enables user-space libraries, such as
      liblightnvm, to use the sysfs implementation to get the geometry of an
      Open-Channel SSD.
      
      The sysfs entries are stored within the device hierarchy, and can be
      found using the "lightnvm" device type.
      
      An example configuration looks like this:
      
      /sys/class/nvme/
      └── nvme0n1
         ├── capabilities: 3
         ├── device_mode: 1
         ├── erase_max: 1000000
         ├── erase_typ: 1000000
         ├── flash_media_type: 0
         ├── media_capabilities: 0x00000001
         ├── media_type: 0
         ├── multiplane: 0x00010101
         ├── num_blocks: 1022
         ├── num_channels: 1
         ├── num_luns: 4
         ├── num_pages: 64
         ├── num_planes: 1
         ├── page_size: 4096
         ├── prog_max: 100000
         ├── prog_typ: 100000
         ├── read_max: 10000
         ├── read_typ: 10000
         ├── sector_oob_size: 0
         ├── sector_size: 4096
         ├── media_manager: gennvm
         ├── ppa_format: 0x380830082808001010102008
         ├── vendor_opcode: 0
         ├── max_phys_secs: 64
         └── version: 1
      Signed-off-by: Simon A. F. Lund <slund@cnexlabs.com>
      Signed-off-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
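      A sketch of one geometry attribute, following the standard read-only
      sysfs pattern (num_channels comes from the listing above; the
      nvm_dev/identity field wiring is an assumption):

          static ssize_t nvm_dev_attr_show(struct device *dev,
                                           struct device_attribute *dattr,
                                           char *page)
          {
                  struct nvm_dev *ndev = container_of(dev, struct nvm_dev, dev);
                  struct nvm_id_group *grp = &ndev->identity.groups[0];

                  /* Each attribute prints a single geometry field. */
                  if (strcmp(dattr->attr.name, "num_channels") == 0)
                          return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_ch);

                  return scnprintf(page, PAGE_SIZE, "Unhandled attr(%s)\n",
                                   dattr->attr.name);
          }

          static DEVICE_ATTR(num_channels, S_IRUGO, nvm_dev_attr_show, NULL);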
    • lightnvm: control life of nvm_dev in driver · b0b4e09c
      Matias Bjørling authored
      LightNVM compatible device drivers do not have a method to expose
      LightNVM specific sysfs entries.
      
      To enable LightNVM sysfs entries to be exposed, lightnvm device
      drivers require a struct device to attach it to. To allow both the
      actual device driver and lightnvm sysfs entries to coexist, the device
      driver tracks the lifetime of the nvm_dev structure.
      
      This patch refactors NVMe and null_blk to handle the lifetime of struct
      nvm_dev, which eliminates the need for struct gendisk when a lightnvm
      compatible device is provided.
      Signed-off-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: register device instead of disk · b21d5b30
      Matias Bjørling authored
      Enable devices without a gendisk instance to register themselves with
      blk-mq and expose the associated multi-queue sysfs entries.
      Signed-off-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
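      A sketch of the interface shift this implies: sysfs registration
      keyed on a struct device rather than a gendisk (signatures follow
      the 4.8-era blk-mq sysfs code; treat them as illustrative):

          /* Before: multi-queue sysfs registration required a gendisk. */
          int blk_mq_register_disk(struct gendisk *disk);

          /* After: any device with a request queue can register, so
           * drivers that never allocate a gendisk (e.g. LightNVM
           * targets) still get the mq/ sysfs hierarchy.
           */
          int blk_mq_register_dev(struct device *dev, struct request_queue *q);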
    • null_blk: refactor to support non-gendisk devices · 9ae2d0aa
      Matias Bjørling authored
      With LightNVM enabled devices, the gendisk structure is not exposed
      to the user. This hides the device driver specific sysfs entries, and
      prevents binding of LightNVM geometry information to the device.
      
      Refactor the device registration process, so that gendisk and
      non-gendisk devices are easily managed.
      Signed-off-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • nvme: refactor namespaces to support non-gendisk devices · ac81bfa9
      Matias Bjørling authored
      With LightNVM enabled namespaces, the gendisk structure is not exposed
      to the user. This prevents LightNVM users from accessing the NVMe device
      driver specific sysfs entries, and LightNVM namespace geometry.
      
      Refactor the revalidation process, so that a namespace, instead of a
      gendisk, is revalidated. This allows later patches to wire the sysfs
      entries up to a non-gendisk namespace.
      Signed-off-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • lightnvm: NVM should depend on HAS_DMA · e105ddb4
      Geert Uytterhoeven authored
      If NO_DMA=y:
      
          drivers/built-in.o: In function `nvme_nvm_dev_dma_free':
          lightnvm.c:(.text+0x23df1a): undefined reference to `dma_pool_free'
          drivers/built-in.o: In function `nvme_nvm_dev_dma_alloc':
          lightnvm.c:(.text+0x23df38): undefined reference to `dma_pool_alloc'
          drivers/built-in.o: In function `nvme_nvm_destroy_dma_pool':
          lightnvm.c:(.text+0x23df4c): undefined reference to `dma_pool_destroy'
          drivers/built-in.o: In function `nvme_nvm_create_dma_pool':
          lightnvm.c:(.text+0x23df7e): undefined reference to `dma_pool_create'
      
      and
      
          ERROR: "dma_pool_destroy" [drivers/nvme/host/nvme-core.ko] undefined!
          ERROR: "dma_pool_free" [drivers/nvme/host/nvme-core.ko] undefined!
          ERROR: "dma_pool_alloc" [drivers/nvme/host/nvme-core.ko] undefined!
          ERROR: "dma_pool_create" [drivers/nvme/host/nvme-core.ko] undefined!
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Matias Bjørling <m@bjorling.me>
      Signed-off-by: Jens Axboe <axboe@fb.com>
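      The fix this implies is a one-line Kconfig dependency; a sketch
      (the existing option text is assumed from context):

          # dma_pool_* needs a real DMA implementation, so gate the
          # subsystem on HAS_DMA as well as BLOCK.
          config NVM
                  bool "Open-Channel SSD target support"
                  depends on BLOCK && HAS_DMA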
  7. 19 Sep, 2016 1 commit
  8. 18 Sep, 2016 1 commit
  9. 17 Sep, 2016 7 commits
  10. 15 Sep, 2016 1 commit
    • blk-mq: introduce blk_mq_delay_kick_requeue_list() · 2849450a
      Mike Snitzer authored
      blk_mq_delay_kick_requeue_list() provides the ability to kick the
      q->requeue_list after a specified time.  To do this the request_queue's
      'requeue_work' member was changed to a delayed_work.
      
      blk_mq_delay_kick_requeue_list() allows DM to defer processing
      requeued requests for as long as it doesn't make sense to immediately
      requeue them (e.g. when all paths in a DM multipath have failed).
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
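      A sketch of the new helper alongside the existing immediate kick,
      assuming requeue_work has become a delayed_work as described
      (illustrative, after block/blk-mq.c):

          void blk_mq_kick_requeue_list(struct request_queue *q)
          {
                  /* Immediate kick: schedule with zero delay. */
                  kblockd_schedule_delayed_work(&q->requeue_work, 0);
          }
          EXPORT_SYMBOL(blk_mq_kick_requeue_list);

          void blk_mq_delay_kick_requeue_list(struct request_queue *q,
                                              unsigned long msecs)
          {
                  /* Deferred kick, e.g. so DM multipath can back off
                   * while all paths are down.
                   */
                  kblockd_schedule_delayed_work(&q->requeue_work,
                                                msecs_to_jiffies(msecs));
          }
          EXPORT_SYMBOL(blk_mq_delay_kick_requeue_list);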
  11. 14 Sep, 2016 7 commits