1. 29 July 2020, 1 commit
    • nvmet: use xarray for ctrl ns storing · 7774e77e
      Authored by Chaitanya Kulkarni
      This patch replaces the ctrl->namespaces tracking from a linked list to
      an xarray, improving performance when accessing a single namespace :-
      
      XArray vs Default:-
      
      IOPS and BW (higher is better), BW increase (~1.8%):-
      ---------------------------------------------------
      
       XArray :-
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
        read:  IOPS=162k,  BW=631MiB/s  (662MB/s)(18.5GiB/30001msec)
      
       Default:-
        read:  IOPS=156k,  BW=609MiB/s  (639MB/s)(17.8GiB/30001msec)
        read:  IOPS=157k,  BW=613MiB/s  (643MB/s)(17.0GiB/30001msec)
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
      
      Submission latency (lower is better), decrease (~8.3%):-
      -------------------------------------------------------
      
       XArray:-
        slat  (usec):  min=7,  max=8386,  avg=11.19,  stdev=5.96
        slat  (usec):  min=7,  max=441,   avg=11.09,  stdev=4.48
        slat  (usec):  min=7,  max=1088,  avg=11.21,  stdev=4.54
      
       Default :-
        slat  (usec):  min=8,   max=2826.5k,  avg=23.96,  stdev=3911.50
        slat  (usec):  min=8,   max=503,      avg=12.52,  stdev=5.07
        slat  (usec):  min=8,   max=2384,     avg=12.50,  stdev=5.28
      
      CPU usage (lower is better), decrease (~5.2%):-
      ----------------------------------------------
      
       XArray:-
        cpu  :  usr=1.84%,  sys=18.61%,  ctx=949471,  majf=0,  minf=250
        cpu  :  usr=1.83%,  sys=18.41%,  ctx=950262,  majf=0,  minf=237
        cpu  :  usr=1.82%,  sys=18.82%,  ctx=957224,  majf=0,  minf=234
      
       Default:-
        cpu  :  usr=1.70%,  sys=19.21%,  ctx=858196,  majf=0,  minf=251
        cpu  :  usr=1.82%,  sys=19.98%,  ctx=929720,  majf=0,  minf=227
        cpu  :  usr=1.83%,  sys=20.33%,  ctx=947208,  majf=0,  minf=235
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      7774e77e
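
      The gain above comes from replacing an O(n) list walk with an indexed
      lookup keyed by NSID (xa_load() in the actual patch). A minimal
      userspace sketch of that lookup-cost difference, with hypothetical
      types standing in for struct nvmet_ns (this is not the kernel API):

```c
#include <stddef.h>

/* Hypothetical stand-in for a namespace entry. */
struct ns { unsigned int nsid; struct ns *next; };

/* Old style: O(n) traversal of a linked list to find one nsid. */
static struct ns *find_ns_list(struct ns *head, unsigned int nsid)
{
    struct ns *n;
    for (n = head; n; n = n->next)
        if (n->nsid == nsid)
            return n;
    return NULL;
}

/* New style: direct indexed lookup, as xa_load(&ctrl->namespaces, nsid)
 * does in the patch (modelled here with a plain pointer array). */
static struct ns *find_ns_indexed(struct ns **table, size_t len,
                                  unsigned int nsid)
{
    return nsid < len ? table[nsid] : NULL;
}
```

      Per-access cost drops from proportional to the namespace count to
      effectively constant, which matches the IOPS/latency deltas reported.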
  2. 08 July 2020, 1 commit
  3. 27 May 2020, 3 commits
  4. 10 May 2020, 1 commit
  5. 26 March 2020, 2 commits
  6. 25 March 2020, 1 commit
  7. 05 March 2020, 1 commit
  8. 10 January 2020, 1 commit
  9. 05 November 2019, 3 commits
  10. 12 September 2019, 1 commit
  11. 30 August 2019, 1 commit
  12. 10 July 2019, 1 commit
  13. 11 April 2019, 1 commit
  14. 20 February 2019, 1 commit
  15. 13 December 2018, 4 commits
  16. 08 December 2018, 6 commits
  17. 17 October 2018, 1 commit
  18. 02 October 2018, 1 commit
  19. 17 September 2018, 1 commit
  20. 08 August 2018, 1 commit
    • nvmet: add ns write protect support · dedf0be5
      Authored by Chaitanya Kulkarni
      This patch implements the Namespace Write Protect feature described in
      "NVMe TP 4005a Namespace Write Protect". In this version, we implement
      the No Write Protect and Write Protect states for the target ns, which
      can be toggled via Set Features commands from the host side.
      
      For the write-protect state transition, we need to flush the ns
      specified as part of the command, so we also add helpers for carrying
      out synchronous flush operations.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      [hch: fixed an incorrect endianness conversion, minor cleanups]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      dedf0be5
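
      The flush-before-transition rule the message describes can be sketched
      as a small state machine. This is a userspace illustration only; the
      identifiers and the flush helper are placeholders, not the kernel
      implementation:

```c
#include <stdbool.h>

/* Write-protect states from NVMe TP 4005; only the first two are
 * implemented by this patch. Names here are illustrative. */
enum wp_state { NO_WRITE_PROTECT = 0, WRITE_PROTECT = 1 };

/* Placeholder for the synchronous flush helper the patch adds;
 * always succeeds in this sketch. */
static bool flush_ns(void)
{
    return true;
}

/* Handle a Set Features transition: flush the ns before committing
 * the new state, and keep the old state if the flush fails. */
static bool set_write_protect(enum wp_state *cur, enum wp_state next)
{
    if (*cur == next)
        return true;        /* no transition, nothing to flush */
    if (!flush_ns())
        return false;       /* flush failed: reject the transition */
    *cur = next;
    return true;
}
```

      Flushing first ensures all data written while the ns was writable is
      durable before the ns starts rejecting writes.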
  21. 28 July 2018, 4 commits
  22. 24 July 2018, 1 commit
  23. 23 July 2018, 2 commits
    • nvmet-rdma: support max(16KB, PAGE_SIZE) inline data · 0d5ee2b2
      Authored by Steve Wise
      This patch enables inline data sizes using up to 4 recv SGEs, capping
      the size at max(16KB, PAGE_SIZE). So on a 4K-page system up to 16KB is
      supported, and on a 64K-page system one 64KB page is supported.
      
      We avoid order > 0 page allocations for the inline buffers by using
      multiple recv SGEs, one per page. If the device cannot support the
      configured inline data size for lack of enough recv SGEs, we log a
      warning and reduce the inline size.
      
      Add a new configfs port attribute, called param_inline_data_size,
      to allow configuring the size of inline data for a given nvmf port.
      The maximum size allowed is still enforced by nvmet-rdma with
      NVMET_RDMA_MAX_INLINE_DATA_SIZE, which is now max(16KB, PAGE_SIZE).
      And the default size, if not specified via configfs, is still PAGE_SIZE.
      This preserves the existing behavior, but allows larger inline sizes
      for small page systems.  If the configured inline data size exceeds
      NVMET_RDMA_MAX_INLINE_DATA_SIZE, a warning is logged and the size is
      reduced.  If param_inline_data_size is set to 0, then inline data is
      disabled for that nvmf port.
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      0d5ee2b2
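
      The sizing rules above (cap at max(16KB, PAGE_SIZE), warn and reduce
      oversized requests, one recv SGE per page, 0 disables inline data) can
      be sketched as plain arithmetic. Function names here are illustrative,
      not the driver's:

```c
#include <stddef.h>

/* Cap from the patch: NVMET_RDMA_MAX_INLINE_DATA_SIZE = max(16KB, PAGE_SIZE). */
static size_t max_inline(size_t page_size)
{
    size_t cap = 16 * 1024;
    return page_size > cap ? page_size : cap;
}

/* Clamp a configured param_inline_data_size and derive how many recv
 * SGEs (one per page) it needs. In the driver, exceeding the cap logs
 * a warning before the size is reduced; a size of 0 disables inline
 * data entirely. */
static size_t clamp_inline(size_t requested, size_t page_size,
                           size_t *nr_sges)
{
    size_t cap = max_inline(page_size);
    size_t sz = requested > cap ? cap : requested;

    *nr_sges = (sz + page_size - 1) / page_size; /* round up; <= 4 on 4K pages */
    return sz;
}
```

      On 4K pages a 16KB inline buffer needs 4 single-page SGEs, which is
      how the patch avoids order > 0 allocations.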
    • nvmet: add commands supported and effects log page · 0866bf0c
      Authored by Chaitanya Kulkarni
      This patch adds support for Commands Supported and Effects log page
      (Log Identifier 05h) for NVMeOF. This also makes it easier to find
      which commands are supported, e.g. :-
      
      subnqn    : testnqn1
      Admin Command Set
      ACS2     [Get Log Page                    ] 00000001
      ACS6     [Identify                        ] 00000001
      ACS8     [Abort                           ] 00000001
      ACS9     [Set Features                    ] 00000001
      ACS10    [Get Features                    ] 00000001
      ACS12    [Asynchronous Event Request      ] 00000001
      ACS24    [Keep Alive                      ] 00000001
      
      NVM Command Set
      IOCS0    [Flush                           ] 00000001
      IOCS1    [Write                           ] 00000001
      IOCS2    [Read                            ] 00000001
      IOCS8    [Write Zeroes                    ] 00000001
      IOCS9    [Dataset Management              ] 00000001
      
      This particular functionality can be used from the host side to examine
      the commands supported by the NVMeOF controller.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      0866bf0c
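
      Each 00000001 value in the listing above is a 32-bit Commands
      Supported and Effects entry with only bit 0 (CSUPP, "command
      supported") set. A minimal sketch of decoding that bit (the macro
      name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit 0 (CSUPP) of a Commands Supported and Effects log entry marks
 * the command as supported; the other effects bits are clear in the
 * entries shown above. */
#define CMD_EFFECTS_CSUPP (UINT32_C(1) << 0)

static bool cmd_supported(uint32_t effects_entry)
{
    return (effects_entry & CMD_EFFECTS_CSUPP) != 0;
}
```

      A host-side tool walks the 256 admin entries (ACS0..ACS255) and 256
      I/O entries (IOCS0..IOCS255) of Log Identifier 05h and prints only
      the entries where this bit is set, as in the listing.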