- 23 June 2021, 1 commit
-
-
Submitted by Aharon Landau
Currently, if the user asks for a real-time timestamp, the device will return a free-running one, and the timestamp will be translated to real-time in user-space. When the device supports only the real-time timestamp and not the free-running one, creation of the QP will fail even though the user needs the supported real-time timestamp. To prevent this, we return the real-time timestamp directly from the device.
Link: https://lore.kernel.org/r/c6cfc8e6f038575c5c2de6505830f7e74e4de80d.1623829775.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 22 June 2021, 27 commits
-
-
Submitted by Aharon Landau
QPC, SQC and RQC timestamp formats and capabilities are always equal because they represent general hardware support. So instead of duplicating the code, merge them into a general enum and logic.
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
-
Submitted by Xiao Yang
rxe_mr_init_user() always returns the fixed -EINVAL when ib_umem_get() fails, so it is hard for the user to know which actual error happened in ib_umem_get(). For example, ib_umem_get() will return -EOPNOTSUPP when trying to pin pages on a DAX file. Return the actual error as mlx4/mlx5 do.
Link: https://lore.kernel.org/r/20210621071456.4259-1-ice_yangxiao@163.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
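A minimal sketch of the error-propagation pattern described above (the surrounding rxe structures are omitted, and the exact ib_umem_get() signature varies between kernel versions):

    #include <linux/err.h>
    #include <rdma/ib_umem.h>

    static int mr_init_user_sketch(struct ib_device *ibdev, u64 start,
                                   u64 length, int access, struct ib_umem **out)
    {
            struct ib_umem *umem;

            umem = ib_umem_get(ibdev, start, length, access);
            if (IS_ERR(umem)) {
                    /* Propagate the real error (e.g. -EOPNOTSUPP for DAX pages)
                     * instead of collapsing everything into -EINVAL.
                     */
                    return PTR_ERR(umem);
            }

            *out = umem;
            return 0;
    }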
-
Submitted by Kees Cook
In preparation for FORTIFY_SOURCE performing compile-time and run-time field bounds checking for memcpy(), memmove(), and memset(), avoid intentionally writing across neighboring array fields. Use the ether_addr_copy() helper instead, as already done for smac.
Link: https://lore.kernel.org/r/20210616203744.1248551-1-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
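An illustration of this kind of change; the struct and field names below are invented, only ether_addr_copy() from <linux/etherdevice.h> is the real helper:

    #include <linux/etherdevice.h>

    struct example_av {
            u8 dmac[ETH_ALEN];
            u8 smac[ETH_ALEN];
    };

    static void example_set_dmac(struct example_av *av, const u8 *mac)
    {
            /* copies exactly ETH_ALEN (6) bytes; FORTIFY_SOURCE-friendly,
             * unlike a memcpy() that may be seen as crossing into smac
             */
            ether_addr_copy(av->dmac, mac);
    }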
-
Submitted by Jack Wang
With fast memory registration on the write request, rnbd-clt can do bigger IO without splitting. rnbd-clt can now query rtrs-clt to get max_segments instead of using BMAX_SEGMENTS. BMAX_SEGMENTS is no longer needed, so remove it.
Link: https://lore.kernel.org/r/20210621055340.11789-6-jinpu.wang@ionos.com
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jack Wang
As we can do fast memory registration on write, we can increase max_segments; default it to 512K.
Link: https://lore.kernel.org/r/20210621055340.11789-5-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jack Wang
With write-path fast memory registration, each request needs less memory, so max_send_sge can be reduced to save memory usage. Also convert the kmalloc_array() to kcalloc().
Link: https://lore.kernel.org/r/20210621055340.11789-4-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
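A hedged sketch of the kmalloc_array()-to-kcalloc() conversion mentioned above (the SGE array is only an illustrative use case); kcalloc() additionally zero-initializes the allocation:

    #include <linux/slab.h>
    #include <rdma/ib_verbs.h>

    static struct ib_sge *alloc_sge_list(unsigned int nr_sge, gfp_t gfp)
    {
            /* was: kmalloc_array(nr_sge, sizeof(struct ib_sge), gfp) */
            return kcalloc(nr_sge, sizeof(struct ib_sge), gfp);
    }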
-
Submitted by Jack Wang
With fast memory registration in the write path, we can reduce memory consumption by using a smaller max_send_sge, support IO bigger than 116 KB (29 segments * 4 KB) without splitting, and make the IO path more symmetric. To avoid occasional MR registration failures, wait for the invalidation to finish before registering a new MR. Introduce a refcount and only finish the request when both the local invalidation and the IO reply have arrived.
Link: https://lore.kernel.org/r/20210621055340.11789-3-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Dima Stepanov <dmitrii.stepanov@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
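A minimal sketch (all names invented, not the actual rtrs code) of the refcount idea: the request completes only after both the local-invalidate completion and the IO reply have been seen.

    #include <linux/refcount.h>

    struct io_req_sketch {
            refcount_t ref;
            void (*done)(struct io_req_sketch *req, int err);
            int err;
    };

    static void req_start(struct io_req_sketch *req)
    {
            req->err = 0;
            refcount_set(&req->ref, 2);     /* local invalidate + IO reply */
    }

    /* called once from the invalidate completion and once from the reply path */
    static void req_put(struct io_req_sketch *req, int err)
    {
            if (err)
                    req->err = err;
            if (refcount_dec_and_test(&req->ref))
                    req->done(req, req->err);
    }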
-
Submitted by Jack Wang
Introduce a tail WR that can be sent as the last WR; we want to send the local invalidate WR after the RDMA WR in a later patch. While at it, also fix a coding style issue.
Link: https://lore.kernel.org/r/20210621055340.11789-2-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
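A hedged sketch of chaining a tail WR so it is posted last in the same chain as the RDMA write (the helper name is invented; error handling and the rtrs wrappers are omitted):

    #include <rdma/ib_verbs.h>

    static int post_write_with_tail(struct ib_qp *qp,
                                    struct ib_rdma_wr *rdma_wr,
                                    struct ib_send_wr *tail)
    {
            const struct ib_send_wr *bad_wr;

            rdma_wr->wr.next = tail;        /* e.g. an IB_WR_LOCAL_INV WR */
            tail->next = NULL;              /* tail is the last WR in the chain */

            return ib_post_send(qp, &rdma_wr->wr, &bad_wr);
    }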
-
Submitted by Devesh Sharma
Change the ucontext ABI response structure to pass wqe_mode to the user library. A flag in comp_mask is set to indicate the presence of wqe_mode. Move the wqe-mode ABI to uapi/rdma/bnxt_re-abi.h.
Link: https://lore.kernel.org/r/20210616202817.1185276-1-devesh.sharma@broadcom.com
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Anand Khoje
Remove the port validity check from ib_get_cached_subnet_prefix(), as this check is not needed because "port_num" is valid.
Link: https://lore.kernel.org/r/20210616154509.1047-2-anand.a.khoje@oracle.com
Suggested-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
Signed-off-by: Haakon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Leon Romanovsky
Compilation with W=1 produces warnings similar to the one below.
    drivers/infiniband/ulp/ipoib/ipoib_main.c:320: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
All such occurrences were found with the following one-liner:
    git grep -A 1 "\/\*\*" drivers/infiniband/
Link: https://lore.kernel.org/r/e57d5f4ddd08b7a19934635b44d6d632841b9ba7.1623823612.git.leonro@nvidia.com
Reviewed-by: Jack Wang <jinpu.wang@ionos.com> #rtrs
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li
Switch xrcd index allocation and release from hns' own bitmap interface to the IDA interface.
Link: https://lore.kernel.org/r/1623325814-55737-7-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
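The general shape of the bitmap-to-IDA conversions in this series (illustrative only, not the actual hns code):

    #include <linux/gfp.h>
    #include <linux/idr.h>

    static DEFINE_IDA(example_xrcd_ida);

    static int example_xrcd_id_alloc(unsigned int max)
    {
            /* returns an id in [0, max], or a negative errno on failure */
            return ida_alloc_range(&example_xrcd_ida, 0, max, GFP_KERNEL);
    }

    static void example_xrcd_id_free(int id)
    {
            ida_free(&example_xrcd_ida, id);
    }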
-
Submitted by Yangyang Li
Switch pd index allocation and release from hns' own bitmap interface to the IDA interface.
Link: https://lore.kernel.org/r/1623325814-55737-6-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li
Switch mtpt index allocation and release from hns' own bitmap interface to the IDA interface.
Link: https://lore.kernel.org/r/1623325814-55737-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li
Round-robin (RR) is no longer used in the allocation of the bitmap table, and all the function input parameters that use this mechanism are BITMAP_NO_RR. The code that defines and uses RR can therefore be deleted.
Link: https://lore.kernel.org/r/1623325814-55737-4-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li
hns_roce_bitmap_free_range() is only called inside hns_roce_bitmap_free(), and the input parameter "cnt" is set to the constant 1. In addition, the driver does not use alloc_range scenarios, so free_range does not need to exist.
Link: https://lore.kernel.org/r/1623325814-55737-3-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yangyang Li
The function is no longer used.
Link: https://lore.kernel.org/r/1623325814-55737-2-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Wenpeng Liang
Some format specifiers use '%u' for 'int' and '%d' for 'unsigned int'; they should be fixed.
Link: https://lore.kernel.org/r/1623325232-30900-1-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
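The class of fix described above, with illustrative variable names: match the printk format specifier to the argument type.

    #include <linux/printk.h>

    static void log_queue_sizes(int sq_depth, unsigned int srq_cnt)
    {
            /* '%d' for signed int, '%u' for unsigned int */
            pr_info("sq_depth = %d, srq_cnt = %u\n", sq_depth, srq_cnt);
    }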
-
Submitted by Xi Wang
Remove unused members in the srq context structure.
Link: https://lore.kernel.org/r/1624262443-24528-10-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yixing Liu
Use hr_reg_write() instead of roce_set_field().
Link: https://lore.kernel.org/r/1624262443-24528-9-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yixing Liu
Use hr_reg_write() to replace roce_set_field().
Link: https://lore.kernel.org/r/1624262443-24528-8-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng
The WQE_INDEX, OPCODE and QPN fields of the CQE use redundant masks. Just remove them.
Link: https://lore.kernel.org/r/1624262443-24528-7-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng
Fill all QPC fields with hr_reg_*() instead of roce_set_*(). SQPN is used only by HIP08 ES, so it should be removed.
Link: https://lore.kernel.org/r/1624262443-24528-6-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Yixing Liu
Use hr_reg_*() to write the CQ context; it is simpler than roce_set_*().
Link: https://lore.kernel.org/r/1624262443-24528-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Lang Cheng
To avoid doing bitwise operations on a boolean value, add a new register interface that avoids the sparse complaint about "dubious: x & !y" when calling hr_reg_write(ctx, field, !!val).
Fixes: dc504774 ("RDMA/hns: Use new interface to set MPT related fields")
Fixes: 495c2480 ("RDMA/hns: Add XRC subtype in QPC and XRC type in SRQC")
Link: https://lore.kernel.org/r/1624262443-24528-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Weihang Li
GCC may report a compile-time assert error when a value calculated from ib_mtu_enum_to_int() is used as 'val' in FIELD_PREP:
    include/linux/compiler_types.h:328:38: error: call to '__compiletime_assert_1524' declared with attribute error: FIELD_PREP: value too large for the field
So add a check on whether the integer mtu from ib_mtu_enum_to_int() is negative to avoid this warning.
Link: https://lore.kernel.org/r/1624262443-24528-3-git-send-email-liweihang@huawei.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
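A hedged sketch of the added check (the mask name and field layout are invented; only ib_mtu_enum_to_int() and FIELD_PREP() are real interfaces): reject a negative mtu before it can reach FIELD_PREP().

    #include <linux/bitfield.h>
    #include <linux/errno.h>
    #include <rdma/ib_verbs.h>

    #define EXAMPLE_CTX_MTU_MASK GENMASK(15, 0)        /* invented field */

    static int example_set_mtu(u32 *ctx_word, enum ib_mtu ib_mtu)
    {
            int mtu = ib_mtu_enum_to_int(ib_mtu);      /* -1 for unknown enum values */

            if (mtu < 0)
                    return -EINVAL;

            *ctx_word |= FIELD_PREP(EXAMPLE_CTX_MTU_MASK, (u32)mtu);
            return 0;
    }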
-
Submitted by Weihang Li
There is no need to use "!!" before "eq->eqe_size == HNS_ROCE_V3_EQE_SIZE", or sparse will complain about "dubious: x & !y".
Fixes: 782832f2 ("RDMA/hns: Simplify the function config_eqc()")
Link: https://lore.kernel.org/r/1624262443-24528-2-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 21 June 2021, 1 commit
-
-
Submitted by Avihai Horon
Relaxed Ordering is a capability that can only benefit users that support it. All kernel ULPs should support Relaxed Ordering, as they are designed to read data only after observing the CQE and use the DMA API correctly. Hence, implicitly enable Relaxed Ordering by default for MR transfers in kernel ULPs.
Link: https://lore.kernel.org/r/b7e820aab7402b8efa63605f4ea465831b3b1e5e.1623236426.git.leonro@nvidia.com
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 19 June 2021, 11 commits
-
-
Submitted by Pavel Skripkin
    static void ec_bhf_remove(struct pci_dev *dev)
    {
            ...
            struct ec_bhf_priv *priv = netdev_priv(net_dev);

            unregister_netdev(net_dev);
            free_netdev(net_dev);

            pci_iounmap(dev, priv->dma_io);
            pci_iounmap(dev, priv->io);
            ...
    }
priv is the netdev private data, but it is used after free_netdev(). This can cause a use-after-free when accessing the priv pointer. So fix it by moving free_netdev() after the pci_iounmap() calls.
Fixes: 6af55ff5 ("Driver for Beckhoff CX5020 EtherCAT master module.")
Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
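The fixed teardown order, roughly as described (abridged; struct ec_bhf_priv is the driver's own private struct, and the remaining PCI teardown of the real function is omitted):

    static void ec_bhf_remove_fixed(struct pci_dev *dev)
    {
            struct net_device *net_dev = pci_get_drvdata(dev);
            struct ec_bhf_priv *priv = netdev_priv(net_dev);

            unregister_netdev(net_dev);

            /* unmap while priv (the netdev private data) is still valid */
            pci_iounmap(dev, priv->dma_io);
            pci_iounmap(dev, priv->io);

            /* only now is it safe to free the memory backing priv */
            free_netdev(net_dev);

            /* ... remaining PCI teardown omitted ... */
    }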
-
Submitted by Esben Haabendal
As documented in Documentation/networking/driver.rst, the ndo_start_xmit method must not return NETDEV_TX_BUSY under any normal circumstances. As recommended, simply stop the tx queue in advance when there is a risk that the next xmit would cause a NETDEV_TX_BUSY return.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
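A common shape of this pattern (simplified; the ring-space check is a hypothetical stand-in for the driver's real descriptor accounting):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* hypothetical: true if the ring can still hold a worst-case frame */
    static bool example_ring_has_room(struct net_device *ndev)
    {
            return true;
    }

    static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                          struct net_device *ndev)
    {
            /* ... hand the skb to the hardware ring here ... */

            /* stop in advance so the next xmit never has to return BUSY */
            if (!example_ring_has_room(ndev))
                    netif_stop_queue(ndev);

            return NETDEV_TX_OK;
    }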
-
Submitted by Esben Haabendal
Just as in the initial check, we need to ensure that num_frag+1 buffers are available, as that is the number of buffers we are going to use. This fixes a buffer overflow which might be seen during heavy network load. A complete lockup of the TEMAC was reproducible within about 10 minutes of a particular load.
Fixes: 84823ff8 ("net: ll_temac: Fix race condition causing TX hang")
Cc: stable@vger.kernel.org # v5.4+
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
Add a couple of memory barriers to ensure correct ordering of read/write access to the TX BDs. In xmit_done, we should ensure that reading the additional BD fields is only done after the STS_CTRL_APP0_CMPLT bit is set. When xmit_done marks the BD as free by setting APP0=0, we need to ensure that the other BD fields are reset first, so we avoid racing with the xmit path, which writes to the same fields. Finally, making sure to read APP0 of the next BD after the current BD ensures that we see all available buffers.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
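A generic sketch of the ordering described above (field and macro names are placeholders, and the barrier flavor the actual driver uses may differ): read the completion bit before the other descriptor fields, and reset those fields before handing the descriptor back.

    #include <linux/compiler.h>
    #include <linux/types.h>
    #include <asm/barrier.h>

    struct example_tx_bd {
            u32 app0;       /* completion bit is set here by the hardware */
            u32 app1;
            u32 app2;
    };

    #define EXAMPLE_APP0_CMPLT 0x1

    static bool example_bd_is_done(struct example_tx_bd *bd)
    {
            if (!(READ_ONCE(bd->app0) & EXAMPLE_APP0_CMPLT))
                    return false;
            dma_rmb();      /* read app1/app2 only after seeing the complete bit */
            return true;
    }

    static void example_bd_release(struct example_tx_bd *bd)
    {
            bd->app1 = 0;
            bd->app2 = 0;
            dma_wmb();      /* reset the other fields before marking the BD free */
            WRITE_ONCE(bd->app0, 0);
    }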
-
Submitted by Esben Haabendal
With the skb pointer piggy-backed on the TX BD, we have a simple and efficient way to free the skb buffer when the frame has been transmitted. But in order to avoid freeing the skb while there are still fragments from the skb in use, we need to piggy-back on the last TX BD of the skb, not the first. Without this, we are doing a use-after-free on the DMA side when the first BD of a multi TX BD packet is seen as completed in xmit_done while the remaining BDs are still being processed.
Cc: stable@vger.kernel.org # v5.4+
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Somnath Kotur
bnxt_ethtool_init() may have allocated some memory, and we need to call bnxt_ethtool_free() to properly unwind if bnxt_init_one() fails.
Fixes: 7c380918 ("bnxt_en: Refactor bnxt_init_one() and turn on TPA support on 57500 chips.")
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Rukhsana Ansari
The TQM fastpath ring needs to be sized to store both the requester and responder sides of RoCE QPs in TQM to support bi-directional tests. Fix bnxt_alloc_ctx_mem() to multiply the RoCE QPs by a factor of 2 when computing the number of entries for the TQM fastpath ring. This fixes an RX pipeline stall issue seen when running bi-directional max RoCE QP tests.
Fixes: c7dd7ab4 ("bnxt_en: Improve TQM ring context memory sizing formulas.")
Signed-off-by: Rukhsana Ansari <rukhsana.ansari@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Chan
There is a missing bnxt_probe_phy() call in bnxt_fw_init_one() to rediscover the PHY capabilities after a firmware reset. This can cause some PHY-related functionality to fail after a firmware reset. For example, in multi-host, the ability for any host to configure the PHY settings may be lost after a firmware reset.
Fixes: ec5d31e3 ("bnxt_en: Handle firmware reset status during IF_UP.")
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Pavel Machek
While fixing a Coverity warning, commit dd2c7967 introduced a typo in a shift value. Fix that.
Signed-off-by: Pavel Machek (CIP) <pavel@denx.de>
Fixes: dd2c7967 ("cxgb4: Fix unintentional sign extension issues")
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Xi Wang
Both HIP08 and HIP09 require the extended doorbell information to be cleared before being used.
Fixes: 6b63597d ("RDMA/hns: Add TSQ link table support")
Link: https://lore.kernel.org/r/1623392089-35639-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Submitted by Jack Wang
Currently we only check the device max_qp_wr limit for IO connections, but not for service connections. We should check it for both. So save the max_qp_wr device limit in wr_limit, and use it for both IO connections and service connections. While at it, also remove an outdated comment.
Link: https://lore.kernel.org/r/20210614090337.29557-6-jinpu.wang@ionos.com
Suggested-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
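A minimal sketch of the clamping described above (the variable and function names are invented; ib_device_attr::max_qp_wr is the real device limit):

    #include <linux/minmax.h>
    #include <rdma/ib_verbs.h>

    static u32 example_calc_wr_limit(const struct ib_device *ibdev,
                                     u32 requested_depth)
    {
            u32 wr_limit = ibdev->attrs.max_qp_wr;

            /* apply the same limit to both IO and service connections */
            return min(requested_depth, wr_limit);
    }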
-