1. 17 June, 2021 (2 commits)
  2. 03 June, 2021 (2 commits)
  3. 19 May, 2021 (2 commits)
  4. 13 May, 2021 (1 commit)
    • nvmet: reset ns->file when open fails · 85428bea
      Daniel Wagner authored
      Reset ns->file to NULL in the error case in nvmet_file_ns_enable() as
      well (a sketch of the fix follows this entry).
      
      After the filp_open() call, ns->file either points to a file object or
      holds an error pointer. This can lead to the following problem:
      
      If the user first sets up an invalid file backend and tries to enable
      the ns, the attempt fails. The user then switches over to a bdev
      backend and enables the ns successfully. The first received I/O crashes
      the system, because the I/O backend is chosen based on the ns->file
      value:
      
      static u16 nvmet_parse_io_cmd(struct nvmet_req *req)
      {
      	[...]
      
      	if (req->ns->file)
      		return nvmet_file_parse_io_cmd(req);
      
      	return nvmet_bdev_parse_io_cmd(req);
      }
      Reported-by: Enzo Matsumiya <ematsumiya@suse.com>
      Signed-off-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      85428bea
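      A minimal sketch of the shape of the fix in nvmet_file_ns_enable(),
      assuming the surrounding open/error handling looks roughly like the
      upstream code (details may differ):
      
      	ns->file = filp_open(ns->device_path, flags, 0);
      	if (IS_ERR(ns->file)) {
      		ret = PTR_ERR(ns->file);
      		pr_err("failed to open file %s: (%d)\n",
      				ns->device_path, ret);
      		/* don't leave the ERR_PTR behind: later code tests ns->file */
      		ns->file = NULL;
      		return ret;
      	}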
  5. 12 May, 2021 (6 commits)
    • nvmet: demote fabrics cmd parse err msg to debug · 7a4ffd20
      Chaitanya Kulkarni authored
      A host can send invalid commands and flood the target with error
      messages. Demote the error message from pr_err() to pr_debug() in
      nvmet_parse_fabrics_cmd() and nvmet_parse_connect_cmd() (the resulting
      default case is sketched after this entry).
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      7a4ffd20
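      A hedged sketch of the demoted message in the default case of
      nvmet_parse_fabrics_cmd(); the surrounding switch is approximated from
      the upstream code:
      
      	default:
      		pr_debug("received unknown capsule type 0x%x\n",
      			 cmd->fabrics.fctype);
      		req->error_loc = offsetof(struct nvmf_common_command, fctype);
      		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
      	}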
    • nvmet: use helper to remove the duplicate code · 4c2dab2b
      Chaitanya Kulkarni authored
      Use the helper nvmet_report_invalid_opcode() to report an invalid
      opcode so that we can remove the duplicated code (the helper is
      sketched after this entry).
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      4c2dab2b
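      The helper in question, roughly as it is defined in nvmet.h (a sketch,
      not necessarily the exact upstream definition):
      
      	static inline u16 nvmet_report_invalid_opcode(struct nvmet_req *req)
      	{
      		pr_debug("unhandled cmd %d on qid %d\n",
      			 req->cmd->common.opcode, req->sq->qid);
      
      		req->error_loc = offsetof(struct nvme_common_command, opcode);
      		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
      	}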
    • nvmet: demote discovery cmd parse err msg to debug · 3651aaac
      Chaitanya Kulkarni authored
      A host can send invalid commands and flood the target with error
      messages for the discovery controller. Demote the error message from
      pr_err() to pr_debug() in nvmet_parse_discovery_cmd().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      3651aaac
    • nvmet-rdma: Fix NULL deref when SEND is completed with error · 8cc365f9
      Michal Kalderon authored
      When running some traffic and taking down the link on the peer, a
      'transport retry counter exceeded' error is received. This leads to
      nvmet_rdma_error_comp(), which tried to access the cq_context to obtain
      the queue. The cq_context is no longer valid after the switch to the
      shared CQ mechanism; the queue should instead be obtained from wc->qp,
      as the other completion handlers already do (sketched after this
      entry).
      
      [ 905.786331] nvmet_rdma: SEND for CQE 0x00000000e3337f90 failed with status transport retry counter exceeded (12).
      [ 905.832048] BUG: unable to handle kernel NULL pointer dereference at 0000000000000048
      [ 905.839919] PGD 0 P4D 0
      Oops: 0000 [#1] SMP NOPTI
      [ 905.846144] CPU: 13 PID: 1557 Comm: kworker/13:1H Kdump: loaded Tainted: G OE --------- - - 4.18.0-304.el8.x86_64 #1
      [ 905.872135] RIP: 0010:nvmet_rdma_error_comp+0x5/0x1b [nvmet_rdma]
      [ 905.878259] Code: 19 4f c0 e8 89 b3 a5 f6 e9 5b e0 ff ff 0f b7 75 14 4c 89 ea 48 c7 c7 08 1a 4f c0 e8 71 b3 a5 f6 e9 4b e0 ff ff 0f 1f 44 00 00 <48> 8b 47 48 48 85 c0 74 08 48 89 c7 e9 98 bf 49 00 e9 c3 e3 ff ff
      [ 905.897135] RSP: 0018:ffffab601c45fe28 EFLAGS: 00010246
      [ 905.902387] RAX: 0000000000000065 RBX: ffff9e729ea2f800 RCX: 0000000000000000
      [ 905.909558] RDX: 0000000000000000 RSI: ffff9e72df9567c8 RDI: 0000000000000000
      [ 905.916731] RBP: ffff9e729ea2b400 R08: 000000000000074d R09: 0000000000000074
      [ 905.923903] R10: 0000000000000000 R11: ffffab601c45fcc0 R12: 0000000000000010
      [ 905.931074] R13: 0000000000000000 R14: 0000000000000010 R15: ffff9e729ea2f400
      [ 905.938247] FS: 0000000000000000(0000) GS:ffff9e72df940000(0000) knlGS:0000000000000000
      [ 905.938249] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 905.950067] nvmet_rdma: SEND for CQE 0x00000000c7356cca failed with status transport retry counter exceeded (12).
      [ 905.961855] CR2: 0000000000000048 CR3: 000000678d010004 CR4: 00000000007706e0
      [ 905.961855] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [ 905.961856] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [ 905.961857] PKRU: 55555554
      [ 906.010315] Call Trace:
      [ 906.012778] __ib_process_cq+0x89/0x170 [ib_core]
      [ 906.017509] ib_cq_poll_work+0x26/0x80 [ib_core]
      [ 906.022152] process_one_work+0x1a7/0x360
      [ 906.026182] ? create_worker+0x1a0/0x1a0
      [ 906.030123] worker_thread+0x30/0x390
      [ 906.033802] ? create_worker+0x1a0/0x1a0
      [ 906.037744] kthread+0x116/0x130
      [ 906.040988] ? kthread_flush_work_fn+0x10/0x10
      [ 906.045456] ret_from_fork+0x1f/0x40
      
      Fixes: ca0f1a80 ("nvmet-rdma: use new shared CQ mechanism")
      Signed-off-by: Shai Malin <smalin@marvell.com>
      Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      8cc365f9
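      A sketch of the changed SEND completion error path: the queue is now
      taken from wc->qp instead of the shared cq_context (approximated from
      the upstream nvmet-rdma code):
      
      	static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
      	{
      		struct nvmet_rdma_rsp *rsp =
      			container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
      		/* was: struct nvmet_rdma_queue *queue = cq->cq_context; */
      		struct nvmet_rdma_queue *queue = wc->qp->qp_context;
      
      		nvmet_rdma_release_rsp(rsp);
      
      		if (unlikely(wc->status != IB_WC_SUCCESS &&
      			     wc->status != IB_WC_WR_FLUSH_ERR)) {
      			pr_err("SEND for CQE 0x%p failed with status %s (%d).\n",
      				wc->wr_cqe, ib_wc_status_msg(wc->status), wc->status);
      			nvmet_rdma_error_comp(queue);
      		}
      	}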
    • nvmet: fix inline bio check for passthru · ab96de5d
      Chaitanya Kulkarni authored
      When handling passthru commands, for inline bio allocation we only
      consider the transfer size. This works well when req->sg_cnt fits into
      req->inline_bvec, but it results in an early return from
      bio_add_hw_page() when req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
      
      Consider an I/O of size 32768 whose first buffer is not aligned to the
      page boundary; the I/O is then split in the following manner:
      
      [ 2206.256140] nvmet: sg->length 3440 sg->offset 656
      [ 2206.256144] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256148] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256152] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256155] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256159] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256163] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256166] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256170] nvmet: sg->length 656 sg->offset 0
      
      Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN, i.e. 32768, but
      req->sg_cnt (9) is greater than NVMET_MAX_INLINE_BIOVEC (8).
      This results in an early return in the following code path:
      
      nvmet_passthru_map_sg()
      	bio_add_pc_page()
      		bio_add_hw_page()
      			if (bio_full(bio, len))
      				return 0;
      
      Use the previously introduced helper nvmet_use_inline_bvec() to also
      take req->sg_cnt into account when deciding to use an inline bio. This
      only affects the nvme-loop transport. (A sketch of the helper follows
      this entry.)
      
      Fixes: dab3902b ("nvmet: use inline bio for passthru fast path")
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ab96de5d
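      The helper referenced above, roughly as defined in nvmet.h; it checks
      both the transfer length and the SG element count (field and macro
      names as in the tree at the time):
      
      	static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
      	{
      		return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
      		       req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
      	}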
    • nvmet: fix inline bio check for bdev-ns · 608a9690
      Chaitanya Kulkarni authored
      When handling rw commands, for the inline bio case we only consider
      the transfer size. This works well when req->sg_cnt fits into
      req->inline_bvec, but it triggers the warning in __bio_add_page() when
      req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
      
      Consider an I/O of size 32768 whose first page is not aligned to the
      page boundary; the I/O is then split in the following manner:
      
      [ 2206.256140] nvmet: sg->length 3440 sg->offset 656
      [ 2206.256144] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256148] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256152] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256155] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256159] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256163] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256166] nvmet: sg->length 4096 sg->offset 0
      [ 2206.256170] nvmet: sg->length 656 sg->offset 0
      
      Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN, i.e. 32768, but
      req->sg_cnt (9) is greater than NVMET_MAX_INLINE_BIOVEC (8).
      This results in the following warning:
      
      nvmet_bdev_execute_rw()
      	bio_add_page()
      		__bio_add_page()
      			WARN_ON_ONCE(bio_full(bio, len));
      
      This scenario is hard to reproduce: it occurs on the nvme-loop
      transport only with rw commands issued through the passthru IOCTL
      interface from a host application, and only when the data buffer is
      allocated with malloc() rather than posix_memalign(). (A sketch of the
      resulting check in nvmet_bdev_execute_rw() follows this entry.)
      
      Fixes: 73383adf ("nvmet: don't split large I/Os unconditionally")
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      608a9690
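      A sketch of how the check is used when picking the bio in
      nvmet_bdev_execute_rw() (approximated; the surrounding allocation code
      may differ from upstream):
      
      	if (nvmet_use_inline_bvec(req)) {
      		bio = &req->b.inline_bio;
      		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
      	} else {
      		bio = bio_alloc(GFP_KERNEL, bio_max_segs(sg_cnt));
      	}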
  6. 04 May, 2021 (2 commits)
  7. 22 April, 2021 (1 commit)
  8. 15 April, 2021 (3 commits)
  9. 03 April, 2021 (12 commits)
    • nvmet-tcp: enable optional queue idle period tracking · d8e7b462
      Mark Wunderlich authored
      Add an 'idle_poll_period_usecs' option, used by io_work(), to support
      network devices with advanced interrupt moderation that implement a
      relaxed interrupt model. It was discovered that such a NIC used on the
      target could not support initiator connection establishment, because
      the existing io_work() flow exits immediately after a loop with no
      activity and does not re-queue itself.
      
      With this new option a queue is assigned a period of time during which
      no activity must occur in order for it to become 'idle'. Until the
      queue is idle the work item is requeued.
      
      The new module option is defined as changeable, making it flexible for
      testing purposes. (A sketch of the parameter definition follows this
      entry.)
      
      The pre-existing legacy behavior is preserved when the
      idle_poll_period_usecs module option is not specified.
      Signed-off-by: Mark Wunderlich <mark.wunderlich@intel.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      d8e7b462
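      A sketch of the new module parameter definition in nvmet-tcp (the
      description string is approximate):
      
      	/* poll-till-idle period in usecs; 0 keeps the legacy behavior */
      	static int idle_poll_period_usecs;
      	module_param(idle_poll_period_usecs, int, 0644);
      	MODULE_PARM_DESC(idle_poll_period_usecs,
      			"nvmet tcp io_work poll till idle time period in usecs");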
    • nvmet-tcp: fix incorrect locking in state_change sk callback · b5332a9f
      Sagi Grimberg authored
      We are not changing anything in the TCP connection state, so we should
      take a read lock rather than a write lock.
      
      The write lock caused a deadlock when running nvmet-tcp and nvme-tcp on
      the same system, where the state_change callbacks on the host and on
      the controller side have a causal relationship, and lockdep reported on
      this with blktests (the one-line fix is sketched after this entry):
      
      ================================
      WARNING: inconsistent lock state
      5.12.0-rc3 #1 Tainted: G          I
      --------------------------------
      inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-R} usage.
      nvme/1324 [HC0[0]:SC0[0]:HE1:SE1] takes:
      ffff888363151000 (clock-AF_INET){++-?}-{2:2}, at: nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
      {IN-SOFTIRQ-W} state was registered at:
        __lock_acquire+0x79b/0x18d0
        lock_acquire+0x1ca/0x480
        _raw_write_lock_bh+0x39/0x80
        nvmet_tcp_state_change+0x21/0x170 [nvmet_tcp]
        tcp_fin+0x2a8/0x780
        tcp_data_queue+0xf94/0x1f20
        tcp_rcv_established+0x6ba/0x1f00
        tcp_v4_do_rcv+0x502/0x760
        tcp_v4_rcv+0x257e/0x3430
        ip_protocol_deliver_rcu+0x69/0x6a0
        ip_local_deliver_finish+0x1e2/0x2f0
        ip_local_deliver+0x1a2/0x420
        ip_rcv+0x4fb/0x6b0
        __netif_receive_skb_one_core+0x162/0x1b0
        process_backlog+0x1ff/0x770
        __napi_poll.constprop.0+0xa9/0x5c0
        net_rx_action+0x7b3/0xb30
        __do_softirq+0x1f0/0x940
        do_softirq+0xa1/0xd0
        __local_bh_enable_ip+0xd8/0x100
        ip_finish_output2+0x6b7/0x18a0
        __ip_queue_xmit+0x706/0x1aa0
        __tcp_transmit_skb+0x2068/0x2e20
        tcp_write_xmit+0xc9e/0x2bb0
        __tcp_push_pending_frames+0x92/0x310
        inet_shutdown+0x158/0x300
        __nvme_tcp_stop_queue+0x36/0x270 [nvme_tcp]
        nvme_tcp_stop_queue+0x87/0xb0 [nvme_tcp]
        nvme_tcp_teardown_admin_queue+0x69/0xe0 [nvme_tcp]
        nvme_do_delete_ctrl+0x100/0x10c [nvme_core]
        nvme_sysfs_delete.cold+0x8/0xd [nvme_core]
        kernfs_fop_write_iter+0x2c7/0x460
        new_sync_write+0x36c/0x610
        vfs_write+0x5c0/0x870
        ksys_write+0xf9/0x1d0
        do_syscall_64+0x33/0x40
        entry_SYSCALL_64_after_hwframe+0x44/0xae
      irq event stamp: 10687
      hardirqs last  enabled at (10687): [<ffffffff9ec376bd>] _raw_spin_unlock_irqrestore+0x2d/0x40
      hardirqs last disabled at (10686): [<ffffffff9ec374d8>] _raw_spin_lock_irqsave+0x68/0x90
      softirqs last  enabled at (10684): [<ffffffff9f000608>] __do_softirq+0x608/0x940
      softirqs last disabled at (10649): [<ffffffff9cdedd31>] do_softirq+0xa1/0xd0
      
      other info that might help us debug this:
       Possible unsafe locking scenario:
      
             CPU0
             ----
        lock(clock-AF_INET);
        <Interrupt>
          lock(clock-AF_INET);
      
       *** DEADLOCK ***
      
      5 locks held by nvme/1324:
       #0: ffff8884a01fe470 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0xf9/0x1d0
       #1: ffff8886e435c090 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x216/0x460
       #2: ffff888104d90c38 (kn->active#255){++++}-{0:0}, at: kernfs_remove_self+0x22d/0x330
       #3: ffff8884634538d0 (&queue->queue_lock){+.+.}-{3:3}, at: nvme_tcp_stop_queue+0x52/0xb0 [nvme_tcp]
       #4: ffff888363150d30 (sk_lock-AF_INET){+.+.}-{0:0}, at: inet_shutdown+0x59/0x300
      
      stack backtrace:
      CPU: 26 PID: 1324 Comm: nvme Tainted: G          I       5.12.0-rc3 #1
      Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS 2.10.0 11/12/2020
      Call Trace:
       dump_stack+0x93/0xc2
       mark_lock_irq.cold+0x2c/0xb3
       ? verify_lock_unused+0x390/0x390
       ? stack_trace_consume_entry+0x160/0x160
       ? lock_downgrade+0x100/0x100
       ? save_trace+0x88/0x5e0
       ? _raw_spin_unlock_irqrestore+0x2d/0x40
       mark_lock+0x530/0x1470
       ? mark_lock_irq+0x1d10/0x1d10
       ? enqueue_timer+0x660/0x660
       mark_usage+0x215/0x2a0
       __lock_acquire+0x79b/0x18d0
       ? tcp_schedule_loss_probe.part.0+0x38c/0x520
       lock_acquire+0x1ca/0x480
       ? nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
       ? rcu_read_unlock+0x40/0x40
       ? tcp_mtu_probe+0x1ae0/0x1ae0
       ? kmalloc_reserve+0xa0/0xa0
       ? sysfs_file_ops+0x170/0x170
       _raw_read_lock+0x3d/0xa0
       ? nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
       nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
       ? sysfs_file_ops+0x170/0x170
       inet_shutdown+0x189/0x300
       __nvme_tcp_stop_queue+0x36/0x270 [nvme_tcp]
       nvme_tcp_stop_queue+0x87/0xb0 [nvme_tcp]
       nvme_tcp_teardown_admin_queue+0x69/0xe0 [nvme_tcp]
       nvme_do_delete_ctrl+0x100/0x10c [nvme_core]
       nvme_sysfs_delete.cold+0x8/0xd [nvme_core]
       kernfs_fop_write_iter+0x2c7/0x460
       new_sync_write+0x36c/0x610
       ? new_sync_read+0x600/0x600
       ? lock_acquire+0x1ca/0x480
       ? rcu_read_unlock+0x40/0x40
       ? lock_is_held_type+0x9a/0x110
       vfs_write+0x5c0/0x870
       ksys_write+0xf9/0x1d0
       ? __ia32_sys_read+0xa0/0xa0
       ? lockdep_hardirqs_on_prepare.part.0+0x198/0x340
       ? syscall_enter_from_user_mode+0x27/0x70
       do_syscall_64+0x33/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Fixes: 872d26a3 ("nvmet-tcp: add NVMe over TCP target driver")
      Reported-by: Yi Zhang <yi.zhang@redhat.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      b5332a9f
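      The change itself is small: nvmet_tcp_state_change() now takes the read
      side of sk_callback_lock. A hedged sketch, with the body approximated
      from the upstream callback:
      
      	static void nvmet_tcp_state_change(struct sock *sk)
      	{
      		struct nvmet_tcp_queue *queue;
      
      		read_lock_bh(&sk->sk_callback_lock);	/* was write_lock_bh() */
      		queue = sk->sk_user_data;
      		if (!queue)
      			goto done;
      
      		switch (sk->sk_state) {
      		case TCP_FIN_WAIT1:
      		case TCP_CLOSE_WAIT:
      		case TCP_CLOSE:
      			/* FALLTHRU */
      			nvmet_tcp_schedule_release_queue(queue);
      			break;
      		default:
      			pr_warn("queue %d unhandled state %d\n",
      				queue->idx, sk->sk_state);
      		}
      	done:
      		read_unlock_bh(&sk->sk_callback_lock);
      	}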
    • nvmet: return proper error code from discovery ctrl · 79695dcd
      Hou Pu authored
      Return NVME_SC_INVALID_FIELD from the discovery controller, as a normal
      controller does, when an identify or get log page command carries an
      invalid field (sketched after this entry).
      Signed-off-by: Hou Pu <houpu.main@gmail.com>
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      79695dcd
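      A sketch of the kind of change this implies in the discovery handlers,
      e.g. for get log page (field names taken from the nvme headers; the
      exact upstream diff may differ):
      
      	if (req->cmd->get_log_page.lid != NVME_LOG_DISC) {
      		req->error_loc =
      			offsetof(struct nvme_get_log_page_command, lid);
      		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
      		goto out;
      	}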
    • nvme: use driver pdu command for passthrough · f4b9e6c9
      Keith Busch authored
      All nvme transport drivers preallocate an nvme command for each
      request. Have nvme_setup_cmd() use that command instead of requiring
      drivers to pass a pointer to it. All nvme drivers must initialize the
      generic nvme_request 'cmd' to point to the transport's preallocated
      nvme_command.
      
      The generic nvme_request cmd pointer had previously been used only as a
      temporary copy for passthrough commands. Since it now points to the
      command that gets dispatched, passthrough commands must set it up
      directly before executing the request. (A sketch of the resulting
      pattern follows this entry.)
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
      Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      f4b9e6c9
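      A hedged sketch of the resulting pattern in a transport's
      ->init_request() callback (nvme-pci shown, approximated; the real
      callback does more than this):
      
      	static int nvme_init_request(struct blk_mq_tag_set *set,
      			struct request *req, unsigned int hctx_idx,
      			unsigned int numa_node)
      	{
      		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
      
      		/* nvme_setup_cmd() and passthrough now build the command here */
      		nvme_req(req)->cmd = &iod->cmd;
      		return 0;
      	}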
    • nvmet: do not allow model_number to exceed 40 bytes · 48b4c010
      Noam Gottlieb authored
      According to the NVMe specification, the model number is 40 bytes
      (bytes 63:24 of the Identify Controller data structure). Therefore, any
      attempt to store a value into model_number that exceeds 40 bytes should
      return an error (sketched after this entry).
      Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
      Signed-off-by: Noam Gottlieb <ngottlieb@nvidia.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      48b4c010
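      A sketch of the corresponding length check in the configfs store
      handler, assuming a 40-byte limit constant along the lines of
      NVMET_MN_MAX_SIZE and 'len' holding the stripped input length:
      
      	/* bytes 63:24 of the Identify Controller data: at most 40 bytes */
      	if (len > NVMET_MN_MAX_SIZE) {
      		pr_err("Model number can not exceed %d bytes\n",
      		       NVMET_MN_MAX_SIZE);
      		return -EINVAL;
      	}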
    • nvmet: remove unnecessary ctrl parameter · de587804
      Chaitanya Kulkarni authored
      The function nvmet_ctrl_find_get() accepts an out pointer to a
      nvmet_ctrl structure. It returns the same error value,
      NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR, from two places.
      
      Move setting this status to the caller so we can change the return type
      to struct nvmet_ctrl *.
      
      Now that the return type has changed, drop the out-pointer parameter
      and return a valid nvmet_ctrl pointer on success and NULL on failure.
      
      Also, add and rename the goto labels for more readability, with
      comments. (The new prototype is sketched after this entry.)
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      de587804
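      The resulting prototype, roughly (a sketch of the post-change
      signature; the connect status is now set by the caller):
      
      	struct nvmet_ctrl *nvmet_ctrl_find_get(const char *subsysnqn,
      					       const char *hostnqn, u16 cntlid,
      					       struct nvmet_req *req);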
    • nvmet-fc: update function documentation · b53d4741
      Chaitanya Kulkarni authored
      Add a minimal description of the hosthandle parameter for
      nvmet_fc_rcv_ls_req() so that we can get rid of the following warning
      (a kernel-doc sketch follows this entry):
      
      drivers/nvme//target/fc.c:2009: warning: Function parameter or member 'hosthandle' not described in 'nvmet_fc_rcv_ls_req'
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: James Smart <jsmart2021@gmail.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      b53d4741
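      A sketch of the kind of kernel-doc entry that silences the warning
      (parameter list and wording approximate, not the exact upstream
      comment):
      
      	/**
      	 * nvmet_fc_rcv_ls_req - transport entry point called by an LLDD
      	 *                       upon receipt of an NVME LS request.
      	 * @target_port: pointer to the (registered) target port the LS
      	 *               was received on.
      	 * @hosthandle:  pointer to the host-specific handle the LLDD uses
      	 *               to reference the remote port that sent the LS.
      	 * @lsrsp:       pointer to the LS response structure.
      	 * @lsreqbuf:    pointer to the buffer containing the LS request.
      	 * @lsreqbuf_len: length, in bytes, of the received LS request.
      	 */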
    • nvme: rename nvme_init_identify() · f21c4769
      Chaitanya Kulkarni authored
      This is a prep patch so that we can move the identify data structure
      related initialization from nvme_init_identify() into a helper.
      
      Rename the function nvme_init_identify() to nvme_init_ctrl_finish().
      
      The next patch will move the nvme_id_ctrl related initialization from
      the newly renamed function nvme_init_ctrl_finish() into the
      nvme_init_identify() helper.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      f21c4769
    • nvmet: replace white spaces with tabs · 75b5f9ed
      Chaitanya Kulkarni authored
      Use tabs instead of spaces for indentation in
      nvmet_execute_identify_ns().
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      75b5f9ed
    • nvmet: remove an unnecessary function parameter to nvmet_check_ctrl_status · 7798df6f
      Chaitanya Kulkarni authored
      In nvmet_check_ctrl_status() the command can be derived from the
      nvmet_req. Remove the local variable cmd and the cmd function parameter
      from nvmet_check_ctrl_status(), and derive the command from the req
      parameter instead. (The new shape is sketched after this entry.)
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      7798df6f
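      A sketch of the resulting shape, with the command derived from the
      request (body approximated from the upstream checks):
      
      	u16 nvmet_check_ctrl_status(struct nvmet_req *req)
      	{
      		if (unlikely(!(req->sq->ctrl->cc & NVME_CC_ENABLE))) {
      			pr_err("got cmd %d while CC.EN == 0 on qid = %d\n",
      			       req->cmd->common.opcode, req->sq->qid);
      			return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
      		}
      
      		if (unlikely(!(req->sq->ctrl->csts & NVME_CSTS_RDY))) {
      			pr_err("got cmd %d while CSTS.RDY == 0 on qid = %d\n",
      			       req->cmd->common.opcode, req->sq->qid);
      			return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
      		}
      		return 0;
      	}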
    • nvmet: update error log page in nvmet_alloc_ctrl() · a56f14c2
      Chaitanya Kulkarni authored
      Instead of updating the error log page in the caller of
      nvmet_alloc_ctrl(), update it in nvmet_alloc_ctrl() itself.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      a56f14c2
    • nvmet: remove a duplicate status assignment in nvmet_alloc_ctrl · 76affbe6
      Chaitanya Kulkarni authored
      In nvmet_alloc_ctrl() we assign the status value before calling
      nvmet_find_get_subsys():
      
      	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
      
      After we successfully find the subsystem we set the status to the same
      value again:
      
      	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
      
      Remove the duplicate status assignment.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      76affbe6
  10. 18 March, 2021 (3 commits)
  11. 11 March, 2021 (2 commits)
  12. 05 March, 2021 (1 commit)
    • nvmet: model_number must be immutable once set · d9f273b7
      Max Gurtovoy authored
      Once we have established a connection to an nvmf target, it should not
      be possible to change the model_number. E.g. if someone issues identify
      ctrl and gets a model_number of "my_model", and the model_number is
      later changed via configfs to "my_new_model", this breaks the NVMe
      specification for "Get Log Page - Persistent Event Log", which refers
      to the Model Number as: "This field contains the same value as reported
      in the Model Number field of the Identify Controller data structure,
      bytes 63:24."
      
      Although the specification does not state explicitly that this field
      cannot change, we can assume it.
      
      So allow setting this field only once: via configfs or at the first
      identify ctrl operation. (A sketch of the resulting configfs check
      follows this entry.)
      Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      d9f273b7
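      A sketch of the set-once check in the configfs store path, assuming the
      model number is kept as a plain string pointer on the subsystem as
      described above:
      
      	/* model_number must be immutable once set (or reported) */
      	if (subsys->model_number) {
      		pr_err("Can't set model number. %s is already assigned\n",
      		       subsys->model_number);
      		return -EINVAL;
      	}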
  13. 27 February, 2021 (1 commit)
  14. 10 February, 2021 (2 commits)