1. 15 Jan 2021, 1 commit
  2. 06 Jan 2021, 2 commits
    • nvmet-rdma: Fix list_del corruption on queue establishment failure · 9ceb7863
      Authored by Israel Rukshin
      When a queue is in the NVMET_RDMA_Q_CONNECTING state, it may have
      requests on rsp_wait_list. If a disconnect occurs in this state,
      nothing empties that list and returns the requests to the
      free_rsps list. Normally nvmet_rdma_queue_established() frees those
      requests after moving the queue to the NVMET_RDMA_Q_LIVE state, but
      in this case __nvmet_rdma_queue_disconnect() is called first. The
      crash happens in nvmet_rdma_free_rsps() when calling
      list_del(&rsp->free_list), because the request exists only on
      the wait list. Fix the issue by simply clearing rsp_wait_list when
      destroying the queue.
      Signed-off-by: Israel Rukshin <israelr@nvidia.com>
      Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      9ceb7863
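The fix above can be sketched in plain userspace C: entries parked on a wait list must be moved back onto the free list before teardown, otherwise teardown deletes list nodes that were never on the free list. This is an illustrative sketch with made-up names, not the kernel's list implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal intrusive doubly linked list, mimicking the kernel's list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
    n->next = h->next; n->prev = h;
    h->next->prev = n; h->next = n;
}

static void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->next = n->prev = NULL;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* The fix's idea: drain every waiting entry back onto the free list
 * before the queue is destroyed, so later free-list walks are safe. */
static void drain_wait_list(struct list_head *wait, struct list_head *freelist)
{
    while (!list_empty(wait)) {
        struct list_head *e = wait->next;
        list_del(e);
        list_add(e, freelist);
    }
}
```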
    • nvme-fcloop: Fix sscanf type and list_first_entry_or_null warnings · 2b54996b
      Authored by James Smart
      Kernel robot had the following warnings:
      
      >> fcloop.c:1506:6: warning: %x in format string (no. 1) requires
      >> 'unsigned int *' but the argument type is 'signed int *'.
      >> [invalidScanfArgType_int]
      >>    if (sscanf(buf, "%x:%d:%d", &opcode, &starting, &amount) != 3)
      >>        ^
      
      Resolve by changing opcode from an int to an unsigned int
      
      and
      
      >>  fcloop.c:1632:32: warning: Uninitialized variable: lport [uninitvar]
      >>     ret = __wait_localport_unreg(lport);
      >>                                  ^
      
      >>  fcloop.c:1615:28: warning: Uninitialized variable: nport [uninitvar]
      >>     ret = __remoteport_unreg(nport, rport);
      >>                              ^
      
      These aren't actual issues as the values are assigned prior to use.
      It appears the tool doesn't understand list_first_entry_or_null().
      Regardless, quiet the tool by initializing the pointers to NULL at
      declaration.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      2b54996b
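The type warning above comes from a C rule: the `%x` conversion stores through an `unsigned int *`, so the receiving variable must be unsigned. A small sketch of the parse (the format string mirrors the fcloop attribute; the function name is illustrative):

```c
#include <assert.h>
#include <stdio.h>

/* Parse "<opcode>:<starting>:<amount>". Returns 0 on success, -1 if
 * fewer than three fields were matched. The opcode variable is
 * unsigned int, as "%x" requires. */
static int parse_drop_args(const char *buf, unsigned int *opcode,
                           int *starting, int *amount)
{
    return sscanf(buf, "%x:%d:%d", opcode, starting, amount) == 3 ? 0 : -1;
}
```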
  3. 08 Dec 2020, 1 commit
  4. 02 Dec 2020, 12 commits
    • block: switch partition lookup to use struct block_device · 8446fe92
      Authored by Christoph Hellwig
      Use struct block_device to lookup partitions on a disk.  This removes
      all usage of struct hd_struct from the I/O path.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Acked-by: Coly Li <colyli@suse.de>			[bcache]
      Acked-by: Chao Yu <yuchao0@huawei.com>			[f2fs]
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8446fe92
    • nvmet: fix a spelling mistake "incuding" -> "including" in Kconfig · 9f20599c
      Authored by Colin Ian King
      There is a spelling mistake in the Kconfig help text. Fix it.
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      9f20599c
    • nvmet: make sure discovery change log event is protected · 0068a7b0
      Authored by Max Gurtovoy
      The generation counter is protected by nvmet_config_sem. Make sure
      that callers of functions which may change it hold the semaphore
      properly.
      Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
      Reviewed-by: Israel Rukshin <israelr@nvidia.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      0068a7b0
    • nvmet: remove unused ctrl->cqs · 6d65aeab
      Authored by Amit
      Remove the unused cqs from the nvmet_ctrl struct; this reduces
      the allocated memory.
      Signed-off-by: Amit <amit.engel@dell.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      6d65aeab
    • nvmet: use inline bio for passthru fast path · dab3902b
      Authored by Chaitanya Kulkarni
      nvmet_passthru_execute_cmd() is a high-frequency function that uses
      bio_alloc(), leading to a memory allocation from the fs bio pool
      for each I/O.
      
      For NVMeoF, nvmet_req already has an inline_bvec allocated as part
      of request allocation, which can back a preallocated bio since we
      already know the size of the request before calling bio_alloc().
      
      Introduce a bio member in the nvmet_req passthru anonymous union.
      In the fast path, check whether we can get away with the inline
      bvec and bio from nvmet_req via a bio_init() call before actually
      allocating with bio_alloc().
      
      This avoids any new memory allocation under high memory pressure
      and trades the extra work of allocation (bio_alloc()) for cheap
      initialization (bio_init()) whenever the transfer length is
      < NVMET_MAX_INLINE_DATA_LEN, which the user can configure at
      compile time.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      dab3902b
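The inline fast path described above follows a common pattern: when the transfer fits a buffer preallocated inside the request, reuse it with cheap initialization instead of allocating per I/O. A userspace sketch under that assumption (sizes, struct, and function names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

#define INLINE_DATA_LEN 4096   /* stand-in for NVMET_MAX_INLINE_DATA_LEN */

struct req {
    unsigned char inline_buf[INLINE_DATA_LEN]; /* preallocated with the request */
    unsigned char *buf;        /* points at inline_buf or heap memory */
    int buf_is_inline;
};

/* Fast path: reuse the inline buffer (bio_init()-style) when the
 * transfer fits; otherwise fall back to allocation (bio_alloc()-style). */
static int req_prep_buf(struct req *r, size_t len)
{
    if (len <= INLINE_DATA_LEN) {
        r->buf = r->inline_buf;
        r->buf_is_inline = 1;
        return 0;
    }
    r->buf = malloc(len);
    r->buf_is_inline = 0;
    return r->buf ? 0 : -1;
}

static void req_release_buf(struct req *r)
{
    if (!r->buf_is_inline)
        free(r->buf);
    r->buf = NULL;
}
```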
    • nvmet: use blk_rq_bio_prep instead of blk_rq_append_bio · a4fe2d3a
      Authored by Chaitanya Kulkarni
      blk_rq_append_bio() is a generic API written for all driver types
      (including those with bounce buffers) and for contexts where the
      request may already have a bio, i.e. rq->bio != NULL.
      
      It mainly does three things: calculates the segments, handles the
      bounce queue, and either calls blk_rq_bio_prep() when
      rq->bio == NULL or handles the low-level merge case.
      
      The NVMe PCIe and fabrics transports do not use the queue bounce
      mechanism, so for passthru blk_rq_append_bio() does extra work in
      the fast path for every request.
      
      Running I/Os with different block sizes on the passthru controller
      shows we can reuse req->sg_cnt instead of iterating over the bvecs
      to find nr_segs in blk_rq_append_bio(); that calculation duplicates
      work, given that the value is already in req->sg_cnt.
      
      With the NVMe passthru request-based driver we allocate a fresh
      request each time, so on every call to blk_rq_append_bio() rq->bio
      is NULL, i.e. we don't really need the second condition in
      blk_rq_append_bio() or the resulting error condition in its caller.
      
      For the NVMeoF passthru driver, recalculating the segments, the
      bounce check, and the ll_back_merge code are therefore not needed,
      and we can get away with a minimal version that removes the error
      check in the fast path along with an extra variable in
      nvmet_passthru_map_sg().
      
      This patch updates nvmet_passthru_map_sg() so that it only appends
      the bio to the request in the context of the NVMeoF passthru
      driver. Following are the perf numbers :-
      
      With current implementation (blk_rq_append_bio()) :-
      ----------------------------------------------------
      +    5.80%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    4.88%     0.00%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    4.86%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    5.17%     0.00%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
      
      With this patch using blk_rq_bio_prep() :-
      ----------------------------------------------------
      +    3.14%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    3.26%     0.01%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    5.37%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    5.18%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    4.84%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      +    4.87%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      a4fe2d3a
    • nvmet: remove op_flags for passthru commands · 06b3bec8
      Authored by Chaitanya Kulkarni
      For passthru commands setting op_flags has no meaning. Remove the code
      that sets the op flags in nvmet_passthru_map_sg().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      06b3bec8
    • nvme: split nvme_alloc_request() · 39dfe844
      Authored by Chaitanya Kulkarni
      Right now nvme_alloc_request() allocates a request from the block
      layer based on the value of qid: when qid is NVME_QID_ANY it uses
      blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx().
      
      nvme_alloc_request() is called from several contexts; the only
      place a non-NVME_QID_ANY value is used is for fabrics connect
      commands :-
      
      nvme_submit_sync_cmd()		NVME_QID_ANY
      nvme_features()			NVME_QID_ANY
      nvme_sec_submit()		NVME_QID_ANY
      nvmf_reg_read32()		NVME_QID_ANY
      nvmf_reg_read64()		NVME_QID_ANY
      nvmf_reg_write32()		NVME_QID_ANY
      nvmf_connect_admin_queue()	NVME_QID_ANY
      nvme_submit_user_cmd()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_keep_alive()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_timeout()			NVME_QID_ANY
      	nvme_alloc_request()
      nvme_delete_queue()		NVME_QID_ANY
      	nvme_alloc_request()
      nvmet_passthru_execute_cmd()	NVME_QID_ANY
      	nvme_alloc_request()
      nvmf_connect_io_queue() 	QID
      	__nvme_submit_sync_cmd()
      		nvme_alloc_request()
      
      With passthru, nvme_alloc_request() now falls into the I/O fast
      path even though blk_mq_alloc_request_hctx() is never called there,
      which adds an extra branch check to the fast path.
      
      Split nvme_alloc_request() into nvme_alloc_request() and
      nvme_alloc_request_qid().
      
      Replace each call of nvme_alloc_request() with the NVME_QID_ANY
      param with a call to the new nvme_alloc_request() without a qid.
      
      Replace the call with the QID param with calls to the new
      nvme_alloc_request() or nvme_alloc_request_qid(), based on the qid
      value set in __nvme_submit_sync_cmd().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      39dfe844
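The split described above is a general refactoring pattern: a function that branches on a parameter becomes two specialized functions, so callers that know the answer at the call site skip the branch. A minimal sketch with illustrative names (the counters only exist to make the dispatch observable):

```c
#include <assert.h>

#define QID_ANY (-1)

static int allocs_plain, allocs_qid;   /* track which variant ran */

/* Former NVME_QID_ANY case: no qid needed, no branch in the fast path. */
static int alloc_request(void)
{
    return ++allocs_plain;
}

/* Former qid != ANY case, used only by connect-style callers. */
static int alloc_request_qid(int qid)
{
    (void)qid;
    return ++allocs_qid;
}

/* The one caller that still receives a qid picks the variant itself,
 * mirroring what __nvme_submit_sync_cmd() does after the split. */
static int submit_sync_cmd(int qid)
{
    return qid == QID_ANY ? alloc_request() : alloc_request_qid(qid);
}
```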
    • nvmet: add passthru io timeout value attr · 47e9730c
      Authored by Chaitanya Kulkarni
      The NVMeoF controller in passthru mode is capable of handling a
      wide set of I/O commands, including vendor-specific passthru I/O
      commands.
      
      The vendor-specific I/O commands are used to read large drive
      logs and can take longer than default NVMe commands, i.e. for
      passthru requests the timeout value may differ from the passthru
      controller's default timeout values (nvme-core:io_timeout).
      
      Add a configfs attribute so that the user can set the I/O timeout
      value. When this configfs value is not set, nvme_alloc_request()
      will use NVME_IO_TIMEOUT when the request queuedata is NULL.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      47e9730c
    • nvmet: add passthru admin timeout value attr · a2f6a2b8
      Authored by Chaitanya Kulkarni
      The NVMeoF controller in passthru mode is capable of handling a
      wide set of admin commands, including vendor-specific passthru
      admin commands.
      
      The vendor-specific admin commands are used to read large drive
      logs and can take longer than default NVMe commands, i.e. for
      passthru requests the timeout value may differ from the passthru
      controller's default timeout values (nvme-core:admin_timeout).
      
      Add a configfs attribute so that the user can set the admin timeout
      value. When this configfs value is not set, nvme_alloc_request()
      will use ADMIN_TIMEOUT when the request queuedata is NULL.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      a2f6a2b8
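Both timeout attributes above share one fallback rule: use the configured value if set, otherwise the subsystem default. A tiny sketch of that rule (the constant and function name are illustrative, not the kernel's):

```c
#include <assert.h>

#define DEFAULT_TIMEOUT 60   /* stand-in for ADMIN_TIMEOUT / NVME_IO_TIMEOUT */

/* A zero configfs value means "not set", so fall back to the default. */
static unsigned int effective_timeout(unsigned int configured)
{
    return configured ? configured : DEFAULT_TIMEOUT;
}
```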
    • nvme: use consistent macro name for timeout · dc96f938
      Authored by Chaitanya Kulkarni
      This is purely a cleanup patch: add the NVME prefix to
      ADMIN_TIMEOUT to make it consistent with NVME_IO_TIMEOUT.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      dc96f938
    • nvme-fcloop: add sysfs attribute to inject command drop · 03d99e5d
      Authored by James Smart
      Add a sysfs attribute to specify parameters for dropping a command.
      The attribute takes a string of:
      
        <opcode>:<starting at what instance>:<number of times>
      
      The opcode occupies the lower 8 bits; for a fabrics opcode, a bit
      above bits 7:0 is set.
      
      Once set, each sqe is examined. If the opcode matches, the running
      instance count is updated. If the instance count is in the range
      to drop (based on the starting instance and number of times), the
      command is dropped by not passing it to the target layer.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      03d99e5d
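The drop logic above can be sketched as a counter that advances only on opcode matches and drops commands whose instance number falls inside the configured window. This is an illustrative userspace sketch (struct and field names are made up, and instances are 0-based here):

```c
#include <assert.h>

struct drop_cfg {
    unsigned int opcode;  /* opcode to match */
    int starting;         /* first instance to drop (0-based here) */
    int amount;           /* how many consecutive instances to drop */
    int seen;             /* running instance count for this opcode */
};

/* Returns 1 if this sqe should be dropped (not passed to the target
 * layer), 0 otherwise. Non-matching opcodes never advance the count. */
static int should_drop(struct drop_cfg *c, unsigned int sqe_opcode)
{
    int inst;

    if (sqe_opcode != c->opcode)
        return 0;
    inst = c->seen++;
    return inst >= c->starting && inst < c->starting + c->amount;
}
```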
  5. 18 Nov 2020, 1 commit
  6. 27 Oct 2020, 1 commit
    • nvmet: fix a NULL pointer dereference when tracing the flush command · 3c3751f2
      Authored by Chaitanya Kulkarni
      When target-side tracing is turned on and a flush command is issued
      from the host, it results in the following Oops.
      
      [  856.789724] BUG: kernel NULL pointer dereference, address: 0000000000000068
      [  856.790686] #PF: supervisor read access in kernel mode
      [  856.791262] #PF: error_code(0x0000) - not-present page
      [  856.791863] PGD 6d7110067 P4D 6d7110067 PUD 66f0ad067 PMD 0
      [  856.792527] Oops: 0000 [#1] SMP NOPTI
      [  856.792950] CPU: 15 PID: 7034 Comm: nvme Tainted: G           OE     5.9.0nvme-5.9+ #71
      [  856.793790] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e3214
      [  856.794956] RIP: 0010:trace_event_raw_event_nvmet_req_init+0x13e/0x170 [nvmet]
      [  856.795734] Code: 41 5c 41 5d c3 31 d2 31 f6 e8 4e 9b b8 e0 e9 0e ff ff ff 49 8b 55 00 48 8b 38 8b 0
      [  856.797740] RSP: 0018:ffffc90001be3a60 EFLAGS: 00010246
      [  856.798375] RAX: 0000000000000000 RBX: ffff8887e7d2c01c RCX: 0000000000000000
      [  856.799234] RDX: 0000000000000020 RSI: 0000000057e70ea2 RDI: ffff8887e7d2c034
      [  856.800088] RBP: ffff88869f710578 R08: ffff888807500d40 R09: 00000000fffffffe
      [  856.800951] R10: 0000000064c66670 R11: 00000000ef955201 R12: ffff8887e7d2c034
      [  856.801807] R13: ffff88869f7105c8 R14: 0000000000000040 R15: ffff88869f710440
      [  856.802667] FS:  00007f6a22bd8780(0000) GS:ffff888813a00000(0000) knlGS:0000000000000000
      [  856.803635] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  856.804367] CR2: 0000000000000068 CR3: 00000006d73e0000 CR4: 00000000003506e0
      [  856.805283] Call Trace:
      [  856.805613]  nvmet_req_init+0x27c/0x480 [nvmet]
      [  856.806200]  nvme_loop_queue_rq+0xcb/0x1d0 [nvme_loop]
      [  856.806862]  blk_mq_dispatch_rq_list+0x123/0x7b0
      [  856.807459]  ? kvm_sched_clock_read+0x14/0x30
      [  856.808025]  __blk_mq_sched_dispatch_requests+0xc7/0x170
      [  856.808708]  blk_mq_sched_dispatch_requests+0x30/0x60
      [  856.809372]  __blk_mq_run_hw_queue+0x70/0x100
      [  856.809935]  __blk_mq_delay_run_hw_queue+0x156/0x170
      [  856.810574]  blk_mq_run_hw_queue+0x86/0xe0
      [  856.811104]  blk_mq_sched_insert_request+0xef/0x160
      [  856.811733]  blk_execute_rq+0x69/0xc0
      [  856.812212]  ? blk_mq_rq_ctx_init+0xd0/0x230
      [  856.812784]  nvme_execute_passthru_rq+0x57/0x130 [nvme_core]
      [  856.813461]  nvme_submit_user_cmd+0xeb/0x300 [nvme_core]
      [  856.814099]  nvme_user_cmd.isra.82+0x11e/0x1a0 [nvme_core]
      [  856.814752]  blkdev_ioctl+0x1dc/0x2c0
      [  856.815197]  block_ioctl+0x3f/0x50
      [  856.815606]  __x64_sys_ioctl+0x84/0xc0
      [  856.816074]  do_syscall_64+0x33/0x40
      [  856.816533]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      [  856.817168] RIP: 0033:0x7f6a222ed107
      [  856.817617] Code: 44 00 00 48 8b 05 81 cd 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 8
      [  856.819901] RSP: 002b:00007ffca848f058 EFLAGS: 00000202 ORIG_RAX: 0000000000000010
      [  856.820846] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f6a222ed107
      [  856.821726] RDX: 00007ffca848f060 RSI: 00000000c0484e43 RDI: 0000000000000003
      [  856.822603] RBP: 0000000000000003 R08: 000000000000003f R09: 0000000000000005
      [  856.823478] R10: 00007ffca848ece0 R11: 0000000000000202 R12: 00007ffca84912d3
      [  856.824359] R13: 00007ffca848f4d0 R14: 0000000000000002 R15: 000000000067e900
      [  856.825236] Modules linked in: nvme_loop(OE) nvmet(OE) nvme_fabrics(OE) null_blk nvme(OE) nvme_corel
      
      Move the nvmet_req_init() tracepoint to after we parse the command
      in nvmet_req_init(), so that we can get rid of the duplicate
      nvmet_find_namespace() call.
      Rename __assign_disk_name() -> __assign_req_name(). Now that the
      tracepoint is called after parsing the command, simplify the newly
      added __assign_req_name(), which fixes this bug.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      3c3751f2
  7. 22 Oct 2020, 4 commits
    • nvmet: don't use BLK_MQ_REQ_NOWAIT for passthru · 150dfb6c
      Authored by Chaitanya Kulkarni
      By default, we set the passthru request allocation flag such that it
      returns the error in the following code path and we fail the I/O when
      BLK_MQ_REQ_NOWAIT is used for request allocation :-
      
      nvme_alloc_request()
       blk_mq_alloc_request()
        blk_mq_queue_enter()
         if (flag & BLK_MQ_REQ_NOWAIT)
              return -EBUSY; <-- return if busy.
      
      On some controllers using BLK_MQ_REQ_NOWAIT ends up in I/O errors
      even though the controller is perfectly healthy and not in a
      degraded state.
      
      Block layer request allocation allows us to wait instead of
      immediately returning an error when the BLK_MQ_REQ_NOWAIT flag is
      not used. This has been shown to fix the I/O error problem reported
      under heavy random write workloads.
      
      Remove the BLK_MQ_REQ_NOWAIT parameter for passthru request allocation
      which resolves this issue.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      150dfb6c
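The behavioral difference the patch relies on can be sketched in a few lines: with a NOWAIT-style flag the allocator fails immediately when the queue is busy, while without it the caller would wait for a free tag. Illustrative userspace sketch, not the block layer's code:

```c
#include <assert.h>

#define REQ_NOWAIT 0x1

static int queue_busy;   /* stand-in for "no free tags right now" */

/* With REQ_NOWAIT, a busy queue means an immediate -EBUSY (-16),
 * surfacing as an I/O error even on a healthy controller. Without it,
 * a real allocator would block here until a tag frees up. */
static int alloc_request(int flags)
{
    if (queue_busy && (flags & REQ_NOWAIT))
        return -16;   /* -EBUSY */
    return 0;
}
```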
    • nvmet: cleanup nvmet_passthru_map_sg() · 5e063101
      Authored by Logan Gunthorpe
      Clean up some confusing elements of nvmet_passthru_map_sg() by returning
      early if the request is greater than the maximum bio size. This allows
      us to drop the sg_cnt variable.
      
      This should not result in any functional change but makes the code
      clearer and more understandable. The original code allocated a
      truncated bio and then returned EINVAL once bio_add_pc_page()
      filled it; the new code simply returns EINVAL early when that would
      happen.
      
      Fixes: c1fef73f ("nvmet: add passthru code to process commands")
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Suggested-by: Douglas Gilbert <dgilbert@interlog.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      5e063101
    • nvmet: limit passthru MTDS by BIO_MAX_PAGES · df06047d
      Authored by Logan Gunthorpe
      nvmet_passthru_map_sg() only supports mapping a single bio, not a
      chain, so the effective maximum transfer size should also be
      limited by BIO_MAX_PAGES (presently this works out to 1MB).
      
      For PCI passthru devices, max_sectors would typically be more
      limiting than BIO_MAX_PAGES, but this may not be true for all
      passthru devices.
      
      Fixes: c1fef73f ("nvmet: add passthru code to process commands")
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      df06047d
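The limit the patch imposes is just a min() of two constraints: the device's max_sectors and what a single non-chained bio can carry (BIO_MAX_PAGES pages). A sketch assuming 4 KiB pages and 512-byte sectors (constants and the function name are illustrative):

```c
#include <assert.h>

#define BIO_MAX_PAGES 256
#define PAGE_SECTORS  8    /* 4096-byte page / 512-byte sector */

/* Effective transfer limit in sectors: the smaller of the device limit
 * and one bio's capacity (256 pages * 4 KiB = 1 MB = 2048 sectors). */
static unsigned int effective_max_sectors(unsigned int max_sectors)
{
    unsigned int bio_limit = BIO_MAX_PAGES * PAGE_SECTORS;

    return max_sectors < bio_limit ? max_sectors : bio_limit;
}
```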
    • nvmet: fix uninitialized work for zero kato · 85bd23f3
      Authored by zhenwei pi
      When connecting a controller with a zero kato value using the following
      command line
      
         nvme connect -t tcp -n NQN -a ADDR -s PORT --keep-alive-tmo=0
      
      the warning below can be reproduced:
      
      WARNING: CPU: 1 PID: 241 at kernel/workqueue.c:1627 __queue_delayed_work+0x6d/0x90
      with trace:
        mod_delayed_work_on+0x59/0x90
        nvmet_update_cc+0xee/0x100 [nvmet]
        nvmet_execute_prop_set+0x72/0x80 [nvmet]
        nvmet_tcp_try_recv_pdu+0x2f7/0x770 [nvmet_tcp]
        nvmet_tcp_io_work+0x63f/0xb2d [nvmet_tcp]
        ...
      
      This is caused by queuing up uninitialized work.  Although the
      keep-alive timer is disabled while allocating the controller (fixed
      in 0d3b6a8d), ka_work still has a chance to run (called by
      nvmet_start_ctrl).
      
      Fixes: 0d3b6a8d ("nvmet: Disable keep-alive timer when kato is cleared to 0h")
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      85bd23f3
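The rule both this commit and 0d3b6a8d enforce reduces to a guard: a zero keep-alive timeout means the timer must never be armed, so the (possibly uninitialized) work item is never queued. Illustrative sketch, not the nvmet code:

```c
#include <assert.h>

static int timer_armed;   /* stand-in for a scheduled delayed work */

/* kato == 0 means keep-alive is disabled: return before touching the
 * work item at all. In the kernel the else branch would be a
 * schedule_delayed_work()-style call. */
static void start_keep_alive(unsigned int kato)
{
    if (!kato)
        return;
    timer_armed = 1;
}
```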
  8. 07 Oct 2020, 1 commit
    • nvme-loop: don't put ctrl on nvme_init_ctrl error · 1401fcc4
      Authored by Chaitanya Kulkarni
      The function nvme_init_ctrl() takes a ctrl reference and, when it
      fails, drops that reference in its own error unwind code.
      
      When creating a loop ctrl in nvme_loop_create_ctrl(), if
      nvme_init_ctrl() returns a non-zero (i.e. error) value it jumps to
      the "out_put_ctrl" label, which calls nvme_put_ctrl(); this leads
      to a double ctrl put in the error unwind path.
      
      Update nvme_loop_create_ctrl(): remove the "out_put_ctrl" label,
      add a new "out" label after nvme_put_ctrl() in the error unwind
      path, and jump to the newly added label when the nvme_init_ctrl()
      call returns an error.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      1401fcc4
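The double-put bug above is a general error-unwind pitfall: when the callee already drops its reference on failure, the caller's unwind label must jump past its own put. A refcount sketch with illustrative names (the counter makes the bug observable):

```c
#include <assert.h>

static int refs;

static void get_ref(void) { refs++; }
static void put_ref(void) { refs--; }

/* Callee mirrors nvme_init_ctrl(): takes a reference and drops it
 * itself in its error unwind. */
static int init_ctrl(int fail)
{
    get_ref();
    if (fail) {
        put_ref();
        return -1;
    }
    return 0;
}

/* Caller mirrors the fixed nvme_loop_create_ctrl(): on init failure it
 * jumps to a label AFTER its own put, since the callee already put. */
static int create_ctrl(int fail)
{
    if (init_ctrl(fail))
        goto out;   /* not a put label: callee dropped the ref */
    return 0;
out:
    return -1;
}
```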
  9. 27 Sep 2020, 5 commits
  10. 17 Sep 2020, 1 commit
  11. 29 Aug 2020, 2 commits
  12. 24 Aug 2020, 1 commit
  13. 22 Aug 2020, 7 commits
    • nvmet: Disable keep-alive timer when kato is cleared to 0h · 0d3b6a8d
      Authored by Amit Engel
      Based on the NVMe spec, when the keep-alive timeout is set to zero
      the keep-alive timer should be disabled.
      Signed-off-by: Amit Engel <amit.engel@dell.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0d3b6a8d
    • nvme: rename and document nvme_end_request · 2eb81a33
      Authored by Christoph Hellwig
      nvme_end_request is a bit misnamed, as it wraps the
      blk_mq_complete_* API.  Its semantics are also non-trivial, so give
      it a more descriptive name and add a comment explaining the
      semantics.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      2eb81a33
    • nvmet: call blk_mq_free_request() directly · 7ee51cf6
      Authored by Chaitanya Kulkarni
      Instead of calling blk_put_request(), which calls
      blk_mq_free_request(), call blk_mq_free_request() directly for
      NVMeOF passthru. This mainly avoids an extra function call in the
      completion path nvmet_passthru_req_done().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      7ee51cf6
    • nvmet: fix oops in pt cmd execution · a2138fd4
      Authored by Chaitanya Kulkarni
      In the existing NVMeOF passthru core command handling, on failure
      of nvme_alloc_request() it errors out with rq set to NULL. The
      error handling path then calls blk_put_request() without checking
      whether rq is NULL, which produces the following Oops:-
      
      [ 1457.346861] BUG: kernel NULL pointer dereference, address: 0000000000000000
      [ 1457.347838] #PF: supervisor read access in kernel mode
      [ 1457.348464] #PF: error_code(0x0000) - not-present page
      [ 1457.349085] PGD 0 P4D 0
      [ 1457.349402] Oops: 0000 [#1] SMP NOPTI
      [ 1457.349851] CPU: 18 PID: 10782 Comm: kworker/18:2 Tainted: G           OE     5.8.0-rc4nvme-5.9+ #35
      [ 1457.350951] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e3214
      [ 1457.352347] Workqueue: events nvme_loop_execute_work [nvme_loop]
      [ 1457.353062] RIP: 0010:blk_mq_free_request+0xe/0x110
      [ 1457.353651] Code: 3f ff ff ff 83 f8 01 75 0d 4c 89 e7 e8 1b db ff ff e9 2d ff ff ff 0f 0b eb ef 66 8
      [ 1457.355975] RSP: 0018:ffffc900035b7de0 EFLAGS: 00010282
      [ 1457.356636] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000002
      [ 1457.357526] RDX: ffffffffa060bd05 RSI: 0000000000000000 RDI: 0000000000000000
      [ 1457.358416] RBP: 0000000000000037 R08: 0000000000000000 R09: 0000000000000000
      [ 1457.359317] R10: 0000000000000000 R11: 000000000000006d R12: 0000000000000000
      [ 1457.360424] R13: ffff8887ffa68600 R14: 0000000000000000 R15: ffff8888150564c8
      [ 1457.361322] FS:  0000000000000000(0000) GS:ffff888814600000(0000) knlGS:0000000000000000
      [ 1457.362337] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 1457.363058] CR2: 0000000000000000 CR3: 000000081c0ac000 CR4: 00000000003406e0
      [ 1457.363973] Call Trace:
      [ 1457.364296]  nvmet_passthru_execute_cmd+0x150/0x2c0 [nvmet]
      [ 1457.364990]  process_one_work+0x24e/0x5a0
      [ 1457.365493]  ? __schedule+0x353/0x840
      [ 1457.365957]  worker_thread+0x3c/0x380
      [ 1457.366426]  ? process_one_work+0x5a0/0x5a0
      [ 1457.366948]  kthread+0x135/0x150
      [ 1457.367362]  ? kthread_create_on_node+0x60/0x60
      [ 1457.367934]  ret_from_fork+0x22/0x30
      [ 1457.368388] Modules linked in: nvme_loop(OE) nvmet(OE) nvme_fabrics(OE) null_blk nvme(OE) nvme_corer
      [ 1457.368414]  ata_piix crc32c_intel virtio_pci libata virtio_ring serio_raw t10_pi virtio floppy dm_]
      [ 1457.380849] CR2: 0000000000000000
      [ 1457.381288] ---[ end trace c6cab61bfd1f68fd ]---
      [ 1457.381861] RIP: 0010:blk_mq_free_request+0xe/0x110
      [ 1457.382469] Code: 3f ff ff ff 83 f8 01 75 0d 4c 89 e7 e8 1b db ff ff e9 2d ff ff ff 0f 0b eb ef 66 8
      [ 1457.384749] RSP: 0018:ffffc900035b7de0 EFLAGS: 00010282
      [ 1457.385393] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000002
      [ 1457.386264] RDX: ffffffffa060bd05 RSI: 0000000000000000 RDI: 0000000000000000
      [ 1457.387142] RBP: 0000000000000037 R08: 0000000000000000 R09: 0000000000000000
      [ 1457.388029] R10: 0000000000000000 R11: 000000000000006d R12: 0000000000000000
      [ 1457.388914] R13: ffff8887ffa68600 R14: 0000000000000000 R15: ffff8888150564c8
      [ 1457.389798] FS:  0000000000000000(0000) GS:ffff888814600000(0000) knlGS:0000000000000000
      [ 1457.390796] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 1457.391508] CR2: 0000000000000000 CR3: 000000081c0ac000 CR4: 00000000003406e0
      [ 1457.392525] Kernel panic - not syncing: Fatal exception
      [ 1457.394138] Kernel Offset: disabled
      [ 1457.394677] ---[ end Kernel panic - not syncing: Fatal exception ]---
      
      Fix this Oops by adding a new goto label out_put_req and reordering
      the blk_put_request() call so that it is never reached with rq set
      to NULL. Update the rest of the code accordingly.
      
      Fixes: 06b7164dfdc0 ("nvmet: add passthru code to process commands")
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      a2138fd4
    • nvmet: add ns tear down label for pt-cmd handling · 4db69a3d
      Authored by Chaitanya Kulkarni
      In the current implementation, before submitting the passthru cmd
      we may hit errors, e.g. getting the ns from the passthru
      controller, allocating a request from the passthru controller, etc.
      All the failure cases use a single goto label, fail_out.
      
      In the target code we follow the pattern of having a separate label
      for each error-out case when setting up multiple things before the
      actual action.
      
      This patch follows the same pattern: rename the generic fail_out
      label to out_put_ns and update the error-out cases in
      nvmet_passthru_execute_cmd() where needed.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4db69a3d
    • nvmet-passthru: Reject commands with non-sgl flags set · 0ceeab96
      Authored by Logan Gunthorpe
      Any command with a non-SGL flag set (like fused flags) should be
      rejected.
      
      Fixes: c1fef73f ("nvmet: add passthru code to process commands")
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0ceeab96
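The check above is a simple mask test: clear the bits the target is allowed to see and reject the command if anything remains. The mask below is an assumption for illustration (it treats the upper two flag bits as the permitted SGL/PSDT selection, so any lower bit such as a fused flag fails the check); it is not the driver's actual mask:

```c
#include <assert.h>

/* Assumed mask: bits 7:6 carry the SGL/PSDT selection and are the only
 * flag bits a passthru command may set. Illustrative, not the kernel's. */
#define CMD_FLAGS_SGL_MASK 0xc0

/* A command is acceptable only if no flag outside the SGL bits is set. */
static int flags_valid(unsigned char flags)
{
    return (flags & ~CMD_FLAGS_SGL_MASK) == 0;
}
```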
    • nvmet: fix a memory leak · 382fee1a
      Authored by Sagi Grimberg
      We forgot to free new_model_number.
      
      Fixes: 013b7ebe ("nvmet: make ctrl model configurable")
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      382fee1a
  14. 29 Jul 2020, 1 commit