- 04 Jan 2020, 1 commit
By zhengbin

Fixes coccicheck warnings:
drivers/infiniband/ulp/iser/iser_memory.c:530:2-21: WARNING: Assignment of 0/1 to bool variable
drivers/infiniband/ulp/iser/iser_verbs.c:1096:2-21: WARNING: Assignment of 0/1 to bool variable

Link: https://lore.kernel.org/r/1577176812-2238-4-git-send-email-zhengbin13@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
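For reference, a minimal before/after sketch of the pattern this kind of coccicheck warning flags; the function and variable names are illustrative, not the actual iser code:

    #include <linux/types.h>

    /* The pattern coccicheck complains about... */
    static bool reg_in_use_old(int ref_count)
    {
            bool in_use = 0;        /* WARNING: Assignment of 0/1 to bool variable */

            if (ref_count > 0)
                    in_use = 1;
            return in_use;
    }

    /* ...and the fix: use the bool literals. */
    static bool reg_in_use(int ref_count)
    {
            bool in_use = false;

            if (ref_count > 0)
                    in_use = true;
            return in_use;
    }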
- 01 Oct 2019, 2 commits
By Max Gurtovoy

Use the general Linux definition for 4K and derive the rest from it.

Link: https://lore.kernel.org/r/1569359148-12312-1-git-send-email-maxg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
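A hedged sketch of the idea, assuming the generic SZ_4K from linux/sizes.h; the macro names below are stand-ins, not necessarily the driver's:

    #include <linux/log2.h>
    #include <linux/sizes.h>   /* SZ_4K */
    #include <linux/types.h>

    /* Derive the shift and mask from the one generic definition instead
     * of keeping parallel hard-coded constants. */
    #define EXAMPLE_4K_SIZE   SZ_4K
    #define EXAMPLE_4K_SHIFT  ilog2(EXAMPLE_4K_SIZE)          /* 12 */
    #define EXAMPLE_4K_MASK   (~((u64)EXAMPLE_4K_SIZE - 1))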
By Max Gurtovoy

ib_post_send, ib_post_recv and ib_dma_map_sg operations should succeed unless something unusual happened to the IB device.

Link: https://lore.kernel.org/r/1569274369-29217-1-git-send-email-maxg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
- 24 Jun 2019, 2 commits
By Israel Rukshin

After decreasing the WR array size from 7 to 3, it is more readable to give each WR a descriptive name.

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
By Israel Rukshin

Using this new API reduces iSER code complexity. It also reduces the maximum number of work requests per task and the need to deal with multiple MRs (and their registrations and invalidations) per task. This is done by using a single WR and a special MR type (IB_MR_TYPE_INTEGRITY) for PI operations.

The setup of the tested benchmark:
- 2 servers with 24 cores (1 initiator and 1 target)
- 24 target sessions with 1 LUN each
- ramdisk backstore
- PI active

Performance results running fio (24 jobs, 128 iodepth) using write_generate=0 and read_verify=0 (with/without the patch):

bs      IOPS(read)        IOPS(write)
----    ----------        -----------
512     1236.6K/1164.3K   1357.2K/1332.8K
1k      1196.5K/1163.8K   1348.4K/1262.7K
2k      1016.7K/921950    1003.7K/931230
4k      662728/600545     595423/501513
8k      385954/384345     333775/277090
16k     222864/222820     170317/170671
32k     116869/114896     82331/82244
64k     55205/54931       40264/40021

Using write_generate=1 and read_verify=1 (with/without the patch):

bs      IOPS(read)        IOPS(write)
----    ----------        -----------
512     1090.1K/1030.9K   1303.9K/1101.4K
1k      1057.7K/904583    1318.4K/988085
2k      965226/638799     1008.6K/692514
4k      555479/410151     542414/414517
8k      298675/224964     264729/237508
16k     133485/122481     164625/138647
32k     74329/67615       80143/78743
64k     35716/35519       39294/37334

We get a performance improvement at all block sizes; the most significant gain is for 4k writes (almost 30% more IOPS).

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
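A rough sketch of the single-MR protection-information flow described above, using the RDMA core integrity API as it appears in recent kernels; names, error handling and the T10-PI domain setup (mr->sig_attrs) are simplified and should not be read as the actual iser code:

    #include <linux/err.h>
    #include <linux/sizes.h>
    #include <rdma/ib_verbs.h>

    static int example_reg_pi(struct ib_pd *pd, struct ib_qp *qp,
                              struct scatterlist *data_sg, int data_nents,
                              struct scatterlist *prot_sg, int prot_nents)
    {
            struct ib_reg_wr wr = {};
            struct ib_mr *mr;
            int ret;

            /* One MR now covers both the data and the protection buffers. */
            mr = ib_alloc_mr_integrity(pd, data_nents, prot_nents);
            if (IS_ERR(mr))
                    return PTR_ERR(mr);

            /* Map the data SG and the protection SG onto that single MR. */
            ret = ib_map_mr_sg_pi(mr, data_sg, data_nents, NULL,
                                  prot_sg, prot_nents, NULL, SZ_4K);
            if (ret < 0)
                    goto out_dereg;

            /* A single work request registers it. */
            wr.wr.opcode = IB_WR_REG_MR_INTEGRITY;
            wr.mr = mr;
            wr.key = mr->rkey;
            wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
                        IB_ACCESS_REMOTE_WRITE;
            ret = ib_post_send(qp, &wr.wr, NULL);
            if (ret)
                    goto out_dereg;
            return 0;

    out_dereg:
            ib_dereg_mr(mr);
            return ret;
    }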
- 22 May 2019, 1 commit
By Israel Rukshin

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
- 05 Feb 2019, 1 commit
By Bart Van Assche

Keeping single-line wrapper functions is not useful, so remove the ib_sg_dma_address() and ib_sg_dma_len() functions. This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
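A small sketch of what this means for callers; example_fill_sge() is a hypothetical helper, not an iser function:

    #include <linux/scatterlist.h>
    #include <rdma/ib_verbs.h>

    /*
     * Callers that previously wrote
     *      addr = ib_sg_dma_address(ib_dev, sg);
     *      len  = ib_sg_dma_len(ib_dev, sg);
     * now use the generic scatterlist accessors directly, which is all
     * the wrappers ever did:
     */
    static void example_fill_sge(struct ib_sge *sge, struct scatterlist *sg,
                                 u32 lkey)
    {
            sge->addr   = sg_dma_address(sg);
            sge->length = sg_dma_len(sg);
            sge->lkey   = lkey;
    }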
- 19 Jan 2019, 1 commit
By Israel Rukshin

ib_dma_map_sg() augments the SGL into a "DMA mapped SGL". This process may change the number of entries and the length of each entry. Code that touches dma_address is iterating over the DMA mapped SGL and must use dma_nents, which is returned from ib_dma_map_sg(). ib_sg_to_pages() and ib_map_mr_sg() use dma_address, so they must use dma_nents.

Fixes: 39405885 ("IB/iser: Port to new fast registration API")
Fixes: bfe066e2 ("IB/iser: Reuse ib_sg_to_pages")
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
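A hedged sketch of the rule this fix enforces; the helper below is illustrative, not the actual iser code:

    #include <linux/dma-mapping.h>
    #include <linux/sizes.h>
    #include <rdma/ib_verbs.h>

    static int example_map_and_reg(struct ib_device *dev, struct ib_mr *mr,
                                   struct scatterlist *sgl, int orig_nents)
    {
            int dma_nents, n;

            dma_nents = ib_dma_map_sg(dev, sgl, orig_nents, DMA_TO_DEVICE);
            if (!dma_nents)
                    return -ENOMEM;

            /* The DMA-mapped SGL may have fewer (merged) entries, so the
             * count handed to ib_map_mr_sg() must be dma_nents, not the
             * original orig_nents. */
            n = ib_map_mr_sg(mr, sgl, dma_nents, NULL, SZ_4K);
            return n < 0 ? n : 0;
    }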
- 12 Dec 2018, 1 commit
By Kamal Heib

Make all the required changes to start using the ib_device_ops structure.

Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
- 22 Nov 2018, 1 commit
By Yuval Shaia

Since the function always returns 0, make it void.

Reported-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
- 31 Jul 2018, 1 commit
By Bart Van Assche

Since the next patch will change the return type of these functions into a const pointer, and since the iSER driver modifies the work request these functions return a pointer to, inline the two work request conversion function calls. This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
- 30 Jul 2018, 1 commit
By Max Gurtovoy

Currently this function is implemented in the SCSI layer, but its actual place should be the block layer, since T10-PI is a general data integrity feature that is used by the NVMe protocol as well.

Suggested-by: Christoph Hellwig <hch@lst.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
- 04 Jun 2018, 1 commit
By Max Gurtovoy

There is no reason to re-define the protection information checks in the ib_iser driver. Use the check masks from the RDMA core driver.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
- 25 Sep 2017, 1 commit
By Parav Pandit

ib_mr->length represents the length of the MR in bytes, as per IBTA spec 1.3 section 11.2.10.3 (REGISTER PHYSICAL MEMORY REGION). Currently the ib_mr->length field is defined as a 32-bit field only. This might result in truncation and failed WRs for consumers that register memory regions larger than 4GB and whose WRs access such MRs. This patch makes the length 64-bit to avoid such truncation.

Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Faisal Latif <faisal.latif@intel.com>
Fixes: 4c67e2bf ("IB/core: Introduce new fast registration API")
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
- 24 Sep 2016, 1 commit
By Christoph Hellwig

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
- 14 May 2016, 2 commits
By Bart Van Assche

The SRP initiator allows setting max_sectors to a value that exceeds the largest amount of data that can be mapped at once with an mlx4 HCA using fast registration and a page size of 4 KB. Hence modify ib_map_mr_sg() such that it can map partial sg-elements. If an sg-element has been mapped partially, let the caller know which fraction has been mapped by adjusting *sg_offset.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
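A hedged sketch of how a caller can use the adjusted *sg_offset; the helper is illustrative and only mirrors the behaviour described above:

    #include <linux/sizes.h>
    #include <rdma/ib_verbs.h>

    static int example_map_chunk(struct ib_mr *mr, struct scatterlist *sg,
                                 int sg_nents, unsigned int *sg_offset)
    {
            int n;

            /* Map as much as the MR can take, starting *sg_offset bytes
             * into the first element. */
            n = ib_map_mr_sg(mr, sg, sg_nents, sg_offset, SZ_4K);
            if (n < 0)
                    return n;

            /* n sg-elements were consumed. Per the change described
             * above, if the last one was only partially mapped,
             * *sg_offset is adjusted so the caller knows where the next
             * MR must resume. */
            return n;
    }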
By Christoph Hellwig

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Steve Wise <swise@opengridcomputing.com>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
- 27 Dec 2015, 1 commit
By Jenny Derzhavetz

Declare that we support remote invalidation in case we are:
1. using the fastreg method
2. always registering memory

Detect the invalidated rkey from the work completion info so we won't invalidate it locally. The spec mandates that we must not rely on the target to remotely invalidate our rkey, so we must check it upon a receive (SCSI response) completion.

Signed-off-by: Jenny Derzhavetz <jennyf@mellanox.com>
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
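A small sketch of the completion-side check described above (the helper name is illustrative):

    #include <rdma/ib_verbs.h>

    /*
     * Returns true if the target remotely invalidated our rkey; if it
     * did not, the initiator still has to invalidate it locally.
     */
    static bool example_rkey_remotely_invalidated(const struct ib_wc *wc,
                                                  u32 rkey)
    {
            return (wc->wc_flags & IB_WC_WITH_INVALIDATE) &&
                   wc->ex.invalidate_rkey == rkey;
    }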
- 24 Dec 2015, 5 commits
By Sagi Grimberg

When we enable remote invalidation support, we won't want to perform local invalidates the way we do today, but we still need to get new rkeys. So decouple the rkey update from the local invalidate and tie it to memory registration instead.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jenny Derzhavetz <jennyf@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Jenny Derzhavetz

This parameter is described as "is mr valid indicator"; in other words, it indicates whether the memory registration is valid. So intuitive values would be mr_valid=true when the memory registration is valid, and mr_valid=false otherwise.

Signed-off-by: Jenny Derzhavetz <jennyf@mellanox.com>
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Jenny Derzhavetz

When all the task data is sent as immediate data, we are allowed to use the local_dma_lkey, as it is not sent on the wire.

Signed-off-by: Jenny Derzhavetz <jennyf@mellanox.com>
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Sagi Grimberg

iser's iser_sg_to_page_vec has exactly the same role as ib_sg_to_pages. Customize the page_vec to hold a fake MR so we can reuse ib_sg_to_pages.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Julia Lawall

The iser_reg_ops structures are never modified, so declare them as const. Done with the help of Coccinelle.

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
- 23 Dec 2015, 1 commit
By Or Gerlitz

Instead, use the cached copy of the attributes present on the device.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
- 12 Dec 2015, 1 commit
By Christoph Hellwig

Use the new CQ abstraction to simplify completions in the iSER initiator.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
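For orientation, a minimal sketch of the CQ API this refers to; everything except the core ib_alloc_cq()/ib_cqe calls is illustrative:

    #include <linux/printk.h>
    #include <rdma/ib_verbs.h>

    struct example_rx_desc {
            struct ib_cqe cqe;      /* embedded completion entry */
            void *buf;
    };

    static void example_recv_done(struct ib_cq *cq, struct ib_wc *wc)
    {
            struct example_rx_desc *desc =
                    container_of(wc->wr_cqe, struct example_rx_desc, cqe);

            if (wc->status != IB_WC_SUCCESS)
                    pr_err("recv failed: %s\n", ib_wc_status_msg(wc->status));
            /* ... process desc->buf ... */
    }

    static struct ib_cq *example_create_cq(struct ib_device *dev)
    {
            /* The core allocates the CQ and polls it in softirq context. */
            return ib_alloc_cq(dev, NULL, 128, 0, IB_POLL_SOFTIRQ);
    }

    /* Each posted recv sets desc->cqe.done = example_recv_done and points
     * the recv WR's wr_cqe at &desc->cqe. */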
- 29 Oct 2015, 2 commits
By Sagi Grimberg

Remove the fastreg page list allocation, as the page vector is now private to the provider. Instead of constructing the page list and the fast_reg work request, call ib_map_mr_sg and construct an ib_reg_wr.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
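A rough sketch of the new flow, written against the current core API; names are illustrative, and WR chaining plus error paths are trimmed:

    #include <linux/sizes.h>
    #include <rdma/ib_verbs.h>

    static int example_fast_reg(struct ib_qp *qp, struct ib_mr *mr,
                                struct scatterlist *sg, int sg_nents)
    {
            struct ib_reg_wr reg_wr = {};
            int n;

            /* Let the provider build its private page vector. */
            n = ib_map_mr_sg(mr, sg, sg_nents, NULL, SZ_4K);
            if (n < 0)
                    return n;

            /* Describe the registration with an ib_reg_wr. */
            reg_wr.wr.opcode = IB_WR_REG_MR;
            reg_wr.wr.num_sge = 0;
            reg_wr.mr = mr;
            reg_wr.key = mr->rkey;
            reg_wr.access = IB_ACCESS_LOCAL_WRITE |
                            IB_ACCESS_REMOTE_READ |
                            IB_ACCESS_REMOTE_WRITE;

            return ib_post_send(qp, &reg_wr.wr, NULL);
    }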
By Sagi Grimberg

The block layer can reliably guarantee that SG lists won't contain gaps (page-unaligned elements) if a driver sets the queue virt_boundary.

With this setting the block layer will:
- refuse merges if bios are not aligned to the virtual boundary
- split bios/requests that are not aligned to the virtual boundary
- or bounce-buffer SG_IOs that are not aligned to the virtual boundary

Since iser works with a 4K page size, set the virt_boundary to 4K pages. With this setting, we can now safely remove the bounce buffering logic in iser.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Doug Ledford <dledford@redhat.com>
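A minimal sketch of the knob being described, assuming a SCSI LLD's slave_alloc callback and the block-layer API of that era; illustrative, not the exact iser hunk:

    #include <linux/blkdev.h>
    #include <linux/sizes.h>
    #include <scsi/scsi_device.h>

    static int example_slave_alloc(struct scsi_device *sdev)
    {
            /* Tell the block layer never to hand us SG lists with
             * page-unaligned gaps. */
            blk_queue_virt_boundary(sdev->request_queue, SZ_4K - 1);
            return 0;
    }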
- 08 Oct 2015, 1 commit
By Christoph Hellwig

This patch splits up struct ib_send_wr so that all non-trivial verbs use their own structure which embeds struct ib_send_wr. This dramatically shrinks the size of a WR for the most common operations:

sizeof(struct ib_send_wr) (old):    96
sizeof(struct ib_send_wr):          48
sizeof(struct ib_rdma_wr):          64
sizeof(struct ib_atomic_wr):        96
sizeof(struct ib_ud_wr):            88
sizeof(struct ib_fast_reg_wr):      88
sizeof(struct ib_bind_mw_wr):       96
sizeof(struct ib_sig_handover_wr):  80

And with Sagi's pending MR rework, the fast registration WR will also be down to a reasonable size:

sizeof(struct ib_fastreg_wr):       64

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> [srp, srpt]
Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [sunrpc]
Tested-by: Haggai Eran <haggaie@mellanox.com>
Tested-by: Sagi Grimberg <sagig@mellanox.com>
Tested-by: Steve Wise <swise@opengridcomputing.com>
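For orientation, a small sketch of the embedding scheme using the real struct ib_rdma_wr; the helper itself is illustrative:

    #include <rdma/ib_verbs.h>

    /*
     * The verb-specific WR wraps the generic one; providers recover the
     * outer structure with a container_of-style helper such as rdma_wr().
     */
    static void example_fill_rdma_read(struct ib_rdma_wr *rdma_wr,
                                       struct ib_sge *sge, int num_sge,
                                       u64 remote_addr, u32 rkey)
    {
            rdma_wr->wr.opcode  = IB_WR_RDMA_READ;
            rdma_wr->wr.sg_list = sge;
            rdma_wr->wr.num_sge = num_sge;
            rdma_wr->remote_addr = remote_addr;   /* verb-specific fields */
            rdma_wr->rkey        = rkey;
    }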
- 25 Sep 2015, 1 commit
By Sagi Grimberg

This module parameter forces memory registration even for a contiguous memory region. It is true by default, as sending an all-physical rkey with remote permissions might be insecure.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
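A sketch of what such a parameter looks like; the variable name and description string are stand-ins, not quoted from the patch:

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static bool example_always_reg = true;
    module_param_named(always_register, example_always_reg, bool, S_IRUGO);
    MODULE_PARM_DESC(always_register,
                     "Always register memory, even for contiguous memory regions (default: true)");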
- 31 Aug 2015, 11 commits
By Jason Gunthorpe

Replace all lkeys with pd->local_dma_lkey. This driver does not support iWarp, so this is safe. The insecure use of ib_get_dma_mr is thus isolated to an rkey, and this looks trivially fixed by forcing the use of registration in a future patch.

Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
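A small before/after sketch (the helper is hypothetical):

    #include <rdma/ib_verbs.h>

    /*
     * Instead of an lkey obtained via the old ib_get_dma_mr() MR, local
     * SGEs use the PD's built-in DMA lkey.
     */
    static void example_fill_local_sge(struct ib_sge *sge, struct ib_pd *pd,
                                       u64 dma_addr, u32 length)
    {
            sge->addr   = dma_addr;
            sge->length = length;
            sge->lkey   = pd->local_dma_lkey;  /* was: device->mr->lkey */
    }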
By Sagi Grimberg

Chaining send work requests benefits performance by reducing send queue lock contention (the lock is acquired in ib_post_send) and saves HW doorbells, since the chain is posted only once. Currently, in normal IO flows, iser does not chain the CDB send work request with the registration work request. In PI flows, signature work requests are not chained either. Chain those and post only once.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
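A minimal sketch of the chaining idea (helper and WR names are illustrative):

    #include <rdma/ib_verbs.h>

    /*
     * Link the registration WR to the command send WR via ->next and
     * hand the whole chain to ib_post_send() once, instead of issuing
     * two separate posts (and two doorbells).
     */
    static int example_post_chain(struct ib_qp *qp, struct ib_reg_wr *reg_wr,
                                  struct ib_send_wr *send_wr)
    {
            reg_wr->wr.next = send_wr;   /* registration first, then the command */
            send_wr->next = NULL;

            return ib_post_send(qp, &reg_wr->wr, NULL);
    }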
By Sagi Grimberg

It is easier to debug when we have the registration details. This patch does not change any functionality.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Sagi Grimberg

iser supports up to 512KB of data transfer in a single SCSI command, which means larger IOs will be split into several requests. While iser can easily saturate FDR/EDR wires, some arrays are fine-tuned for 1MB (or larger) IO sizes; hence add an option to support larger transfers (up to 8MB) if the device allows it.

Given that a few target implementations don't support data transfers of more than 512KB by default, and that larger IO sizes require more resources, we introduce a module parameter to determine the maximum number of 512B sectors in a single SCSI command. Users interested in larger transfers can change this value, provided the target supports it.

At the moment, iser works at 4K page granularity; in a later stage we will get it to work with the system page size instead. IO operations that consist of N pages will need a page vector of size N+1 in case the first SG element contains an offset. Given that some devices allocate memory regions in powers of 2, allocating a region with N+1 pages will result in allocating resources for the next power of 2. Since we don't want that to happen, in case we are at the limit of the supported IO size and the first SG element has an offset, we align the SG list using a bounce buffer (which is acceptable given that this is not likely to happen often).

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Sagi Grimberg

iser_reg_rdma_mem_[fastreg|fmr] share a lot of code and logically do the same thing, other than the buffer registration method itself (iser_fast_reg_mr vs. iser_fast_reg_fmr). The DIF logic is not implemented in the FMR flow, as there is no existing device that supports both FMRs and the signature feature.

This patch unifies the flow in a single routine, iser_reg_rdma_mem, and only splits into fmr/frwr for the buffer registration itself. Also, for symmetry, unify iser_unreg_rdma_mem (which calls the relevant device-specific unreg routine).

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Sagi Grimberg

For FMRs we will hold a single registration descriptor, since there is no need for multiple descriptors as in the FRWR mode (a descriptor per task). This change helps unify the duplicated registration code paths.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Sagi Grimberg

Also, rename a local variable. This patch does not change any functionality.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Adir Lev

This will allow us to unify the memory registration code path between the various methods, which differ by device capability. This change will make it easier and less intrusive to remove fmr_pools from the code when we want to.

The reason we use a single descriptor is to avoid taking a redundant spinlock when working with FMRs. We also change the signature of iser_reg_page_vec to make it match iser_fast_reg_mr (and the future indirect registration method).

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Sagi Grimberg

Instead of having it as part of the connection structure, keep it in a dedicated (embedded) structure in the connection. This gives a logical separation between the registration pool and the connection structure.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
By Sagi Grimberg

Move all the per-device function pointers to an easily extensible iser_reg_ops structure that contains all the iser registration operations.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
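A sketch of the general shape of such an ops table; the member names and signatures are illustrative, not the driver's exact ones:

    /* Stand-in types purely for illustration. */
    struct example_conn;
    struct example_task;

    /*
     * Each registration scheme (FMR, fast registration) fills in its own
     * callbacks, and the common code calls through the pointers.
     */
    struct example_reg_ops {
            int  (*alloc_reg_res)(struct example_conn *conn,
                                  unsigned int cmds_max);
            void (*free_reg_res)(struct example_conn *conn);
            int  (*reg_mem)(struct example_task *task);
            void (*unreg_mem)(struct example_task *task);
    };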
By Sagi Grimberg

Avoid struct names without the iser_ prefix. This patch does not change any functionality.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>