- 22 November 2018, 5 commits
-
-
Submitted by Yuval Shaia

Since the function always returns 0, make it void.

Reported-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Yue Haibing

There is no need for the 'struct se_portal_group *tpg' variable to be static, since a new value is always assigned to it before use.

Signed-off-by: Yue Haibing <yuehaibing@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Sabyasachi Gupta

Replaced dma_alloc_coherent + memset with dma_zalloc_coherent.

Signed-off-by: Sabyasachi Gupta <sabyasachi.linux@gmail.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Sabyasachi Gupta

Replaced dma_alloc_coherent + memset with dma_zalloc_coherent.

Signed-off-by: Sabyasachi Gupta <sabyasachi.linux@gmail.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
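Both commits above apply the same mechanical substitution to two different drivers. A minimal sketch of the pattern, with a placeholder device and ring buffer rather than the actual cxgb queue structures:

    #include <linux/dma-mapping.h>

    /* Illustrative only: 'dev', 'size' and the ring buffer are generic
     * placeholders, not the driver structures these commits touch. */
    static void *alloc_zeroed_ring(struct device *dev, size_t size,
                                   dma_addr_t *dma_handle)
    {
            /*
             * Before the patch this took two steps:
             *   buf = dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
             *   if (buf)
             *           memset(buf, 0, size);
             */
            return dma_zalloc_coherent(dev, size, dma_handle, GFP_KERNEL);
    }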
-
Submitted by Artemy Kovalyov

This is required so the user can set the SL on the DC QP.

Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
Reviewed-by: Yossi Itigin <yosefe@mellanox.com>
Reviewed-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 21 November 2018, 2 commits
-
-
Submitted by Saeed Mahameed

Use the new generic EQ API to move all ODP RDMA data structures and logic from the mlx5 core driver into the mlx5_ib driver.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Submitted by Saeed Mahameed

Move unnecessary EQ table structures and declarations from the public include/linux/mlx5/driver.h into the private area of mlx5_core, in eq.c/eq.h. Introduce new mlx5 EQ APIs:

    mlx5_comp_vectors_count(dev);
    mlx5_comp_irq_get_affinity_mask(dev, vector);

and use them from mlx5_ib or the mlx5e netdevice instead of direct access to mlx5_core internal structures.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
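A usage sketch for the two APIs named above; the round-robin spreading policy and the mdev handle are illustrative, not taken from the patch:

    #include <linux/mlx5/driver.h>

    /* Spread completion queues over the available completion vectors
     * without reaching into mlx5_core EQ internals. */
    static const struct cpumask *comp_queue_affinity(struct mlx5_core_dev *mdev,
                                                     int queue)
    {
            int vector = queue % mlx5_comp_vectors_count(mdev);

            return mlx5_comp_irq_get_affinity_mask(mdev, vector);
    }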
-
- 13 November 2018, 2 commits
-
-
Submitted by Moni Shoua

Add and modify debug messages in ODP-related error flows. In that context, the return code EAGAIN is considered less severe, so its print level is set to debug instead of warn.

Signed-off-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Submitted by Moni Shoua

When a page fault event for a WQE arrives, the event data contains the resource (e.g. QP) number, which the page fault handler later uses to retrieve the resource. Meanwhile, another context can destroy the resource and cause a use-after-free. To avoid that, take a reference on the resource when the handler starts and release it when it ends. Page fault events for RDMA operations don't need to be protected, because the driver doesn't need to access the QP in the page fault handler.

Fixes: d9aaed83 ("{net,IB}/mlx5: Refactor page fault handling")
Signed-off-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
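A minimal sketch of the hold/release idea, assuming a hypothetical lookup helper and a simplified resource layout (the real mlx5 structures differ):

    #include <linux/types.h>
    #include <linux/refcount.h>
    #include <linux/completion.h>

    struct pf_resource {
            refcount_t        refcount;
            struct completion free_done;    /* the destroyer waits on this */
    };

    static struct pf_resource *pf_lookup(u32 res_num);      /* hypothetical */

    /* Take a reference when the page fault handler starts ... */
    static struct pf_resource *pf_resource_hold(u32 res_num)
    {
            struct pf_resource *res = pf_lookup(res_num);

            /* Fail if a concurrent destroy already dropped the last ref. */
            if (res && !refcount_inc_not_zero(&res->refcount))
                    res = NULL;
            return res;
    }

    /* ... and release it when the handler ends. */
    static void pf_resource_put(struct pf_resource *res)
    {
            if (refcount_dec_and_test(&res->refcount))
                    complete(&res->free_done);
    }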
-
- 09 November 2018, 6 commits
-
-
Submitted by Zhu Yanjun

Since the function rxe_unregister_device always returns 0, change it to void.

Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Zhu Yanjun

The variable rxe is only used in the function rxe_xmit_packet; the caller functions do not use it. So move this variable into rxe_xmit_packet.

Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Andrew Boyer

link_down is self-explanatory. rdma_sends and rdma_recvs count the number of RDMA Send and RDMA Receive operations completed successfully. This is different from the existing sent_pkts and rcvd_pkts counters, because the existing counters measure packets, not RDMA operations. ack_deffered is renamed to ack_deferred to fix the spelling. out_of_sequence is renamed to out_of_seq_request to make clear that it counts only requests, not other packets which can be out of sequence.

Signed-off-by: Andrew Boyer <andrew.boyer@dell.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Andrew Boyer

In ib_query_port(), use the netdev's IFF_UP flag to determine phys_state (flag set = down = POLLING, flag clear = disabled = DISABLED). Callers can then use the phys_state field to distinguish links which have a dead partner, a missing cable, etc., from links which are turned off on the local node. This is useful for HA and supportability.

Signed-off-by: Andrew Boyer <andrew.boyer@dell.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
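A sketch of the mapping, assuming simplified rxe field access; the numeric phys_state codes (2 = Polling, 3 = Disabled) follow the IB spec:

    #include <linux/netdevice.h>
    #include <rdma/ib_verbs.h>

    static void set_port_phys_state(struct ib_port_attr *attr,
                                    struct net_device *ndev)
    {
            if (ndev->flags & IFF_UP)
                    attr->phys_state = 2;   /* Polling: dead partner, missing cable, ... */
            else
                    attr->phys_state = 3;   /* Disabled: turned off on the local node */
    }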
-
Submitted by Sagi Grimberg

Devices that do not use managed affinity cannot export a vector affinity, as the consumer relies on having a static mapping it can map to upper-layer affinity (e.g. sw queues). If the driver allows the user to set the device irq affinity, then the affinitization of long-term existing entities is not relevant. For example, nvme-rdma controller queue-irq affinitization is determined at init time, so if the irq affinity changes over time, we are no longer aligned.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Sagi Grimberg

Devices that do not use managed affinity cannot export a vector affinity, as the consumer relies on having a static mapping it can map to upper-layer affinity (e.g. sw queues). If the driver allows the user to set the device irq affinity, then the affinitization of long-term existing entities is not relevant. For example, nvme-rdma controller queue-irq affinitization is determined at init time, so if the irq affinity changes over time, we are no longer aligned.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
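From the consumer side, the effect of both commits looks roughly like this sketch: with .get_vector_affinity no longer implemented by the device, ib_get_vector_affinity() yields NULL and a ULP such as nvme-rdma falls back to its generic queue spreading:

    #include <rdma/ib_verbs.h>

    static const struct cpumask *queue_vector_mask(struct ib_device *dev,
                                                   int comp_vector)
    {
            /* NULL means the device exports no static vector affinity. */
            return ib_get_vector_affinity(dev, comp_vector);
    }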
-
- 07 November 2018, 3 commits
-
-
Submitted by Sagi Grimberg

Error completions must still contain a valid wr_id and qp_num that the consumer can rely on. Correctly fill these fields in receive error completions.

Reported-by: Walker Benjamin <benjamin.walker@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Tested-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
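A sketch of the invariant only; the rxe completion path actually carries these values through its own wqe/cqe structures:

    #include <linux/types.h>
    #include <linux/string.h>
    #include <rdma/ib_verbs.h>

    static void fill_recv_err_wc(struct ib_wc *wc, u64 wr_id, struct ib_qp *qp)
    {
            memset(wc, 0, sizeof(*wc));
            wc->status = IB_WC_WR_FLUSH_ERR;
            wc->wr_id  = wr_id;     /* must remain valid even on error */
            wc->qp     = qp;        /* the consumer reads qp_num through this */
    }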
-
Submitted by Zhu Yanjun

When resp is in the error state, the queued SKBs will not be handled. The function get_req cleans up the skb queue directly.

CC: Srinivas Eeda <srinivas.eeda@oracle.com>
CC: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
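A minimal sketch of such a drain, assuming the per-QP request queue is a plain sk_buff_head (rxe additionally drops its own per-packet references):

    #include <linux/skbuff.h>

    static void drain_req_pkts(struct sk_buff_head *req_pkts)
    {
            struct sk_buff *skb;

            /* In the error state nobody will consume these packets. */
            while ((skb = skb_dequeue(req_pkts)) != NULL)
                    kfree_skb(skb);
    }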
-
Submitted by Allen Pais

This is a mechanical transformation; no change in logic.

Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 27 October 2018, 1 commit
-
-
Submitted by Michal Hocko

Revert 5ff7091f ("mm, mmu_notifier: annotate mmu notifiers with blockable invalidate callbacks"). The MMU_INVALIDATE_DOES_NOT_BLOCK flag was the only one used, and it is no longer needed since 93065ac7 ("mm, oom: distinguish blockable mode for mmu notifiers"). We now have full support for per-range !blocking behavior, so we can drop the stop-gap workaround which the per-notifier flag was used for.

Link: http://lkml.kernel.org/r/20180827112623.8992-4-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 October 2018, 1 commit
-
-
Submitted by Tariq Toukan

Take struct mlx5_frag_buf out of mlx5_frag_buf_ctrl, as it is not needed to manage and control the datapath of the fragmented buffers API. struct mlx5_frag_buf contains control info to manage the allocation and de-allocation of the fragmented buffer. Its fields are not relevant for the datapath, so here I take them out of struct mlx5_frag_buf_ctrl, except for the fragments array itself. In addition, mlx5_fill_fbc is modified to initialise the frags pointers as well. This implies that the buffer must be allocated before the function is called. A set of type-specific *_get_byte_size() functions is replaced by a generic one.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
- 18 October 2018, 3 commits
-
-
Submitted by Paul Blakey

If the no-append flag is set, we will add a new FTE instead of appending the actions of the inserted rule when the same match already exists. While here, move the has_flow_tag boolean indicator to be a flag too. This patch doesn't change any functionality.

Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanmox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Submitted by Mark Bloch

Currently, when a flow rule is created using the FS core layer, the caller has to pass the entire flow counter object and not just the counter HW handle (ID). This requires both the FS core and the caller to have knowledge of the inner implementation of the FS layer's flow counter cache, and it limits the possible users. Move to using the counter ID throughout when dealing with flows. With this decoupling done, we can now privatize the inner implementation of the flow counters.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Submitted by Logan Gunthorpe

In order to use PCI P2P memory, the pci_p2pmem_map_sg() function must be called to map the correct PCI bus address. To do this, check the first page in the scatterlist to see if it is P2P memory or not. At the moment, scatterlists that contain P2P memory must be homogeneous, so if the first page is P2P the entire SGL should be P2P.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
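A sketch of the dispatch; the helper shown is the kernel's p2pdma mapping API of that era (pci_p2pdma_map_sg()), whose exact name and signature may differ from the helper cited above and has since changed:

    #include <linux/mm.h>
    #include <linux/pci-p2pdma.h>
    #include <linux/scatterlist.h>
    #include <linux/dma-mapping.h>

    static int map_req_sgl(struct device *dev, struct scatterlist *sgl,
                           int nents, enum dma_data_direction dir)
    {
            /* SGLs holding P2P memory are homogeneous, so checking the
             * first page classifies the whole list. */
            if (is_pci_p2pdma_page(sg_page(sgl)))
                    return pci_p2pdma_map_sg(dev, sgl, nents, dir);

            return dma_map_sg(dev, sgl, nents, dir);
    }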
-
- 17 October 2018, 17 commits
-
-
Submitted by Yonatan Cohen

The extended atomic operations cmp&swp and fetch&add are a Mellanox feature that extends the standard atomic operations to use varied operand sizes, as opposed to normal atomic operations, which use an 8-byte operand only. Extended atomics allow masking the results and arguments. This patch configures the QP to support extended atomic operations with the maximum size possible, as exposed by the HCA capabilities.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Parav Pandit

When add_port() is done for port == 0, it indicates that initialization of the port's hardware counters should be skipped. Reflect this in the comment.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Parav Pandit

ib_register_device() does several allocation and initialization steps. Split it into smaller, more readable functions for easier review and maintenance.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Parav Pandit

If port pkey list initialization fails, free the port_immutable memory during the cleanup path; currently this is missed. If cache setup fails, free the pkey list during the cleanup path.

Fixes: d291f1a6 ("IB/core: Enforce PKey security on QPs")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
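A sketch of the unwind ordering the fix restores; the helper names are hypothetical stand-ins for the ib_core internals:

    #include <linux/slab.h>
    #include <rdma/ib_verbs.h>

    static int setup_port_pkey_list(struct ib_device *device);     /* hypothetical */
    static void free_port_pkey_list(struct ib_device *device);     /* hypothetical */
    static int setup_device_cache(struct ib_device *device);       /* hypothetical */

    static int setup_device_tables(struct ib_device *device)
    {
            int ret;

            ret = setup_port_pkey_list(device);
            if (ret)
                    goto err_immutable;

            ret = setup_device_cache(device);
            if (ret)
                    goto err_pkey_list;

            return 0;

    err_pkey_list:
            free_port_pkey_list(device);
    err_immutable:
            kfree(device->port_immutable);  /* previously leaked on failure */
            return ret;
    }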
-
Submitted by Hannes Reinecke

The WARN_ON() is pointless, as the rport is placed in SDEV_TRANSPORT_OFFLINE at that time, so no new commands can be submitted via srp_queuecommand().

Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.com>
Acked-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Yonatan Cohen

Requester scatter-to-CQE is restricted to QPs configured to signal all WRs. This patch adds the ability to force-enable scatter-to-CQE in the requester without sig_all, for users who do not want all WRs signaled but rather only the ones whose data is found in the CQE.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Yonatan Cohen

Flags sent down from the user might not be supported by the running driver. This might lead to unwanted bugs. To solve this, add a macro to test for unsupported flags.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
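One plausible shape for such a guard; the macro name and the flag bits here are assumptions, not the exact mlx5_ib definition:

    #include <linux/bitops.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    #define SUPPORTED_CREATE_FLAGS  (BIT(0) | BIT(1))       /* placeholder bits */

    /* Returns -EOPNOTSUPP if any bit outside the supported set is set. */
    #define CHECK_UNSUP_FLAGS(flags, supported) \
            (((flags) & ~(supported)) ? -EOPNOTSUPP : 0)

    static int validate_ucmd_flags(u64 flags)
    {
            /* Reject what the running driver does not understand,
             * instead of silently ignoring it. */
            return CHECK_UNSUP_FLAGS(flags, SUPPORTED_CREATE_FLAGS);
    }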
-
Submitted by Yonatan Cohen

Scatter-to-CQE is a HW offload that saves PCI writes by scattering the payload into the CQE. This patch extends the already existing functionality to support the DC transport type.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Parav Pandit

Use rdma_set_device_sysfs_group() to register device attributes and simplify the driver.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Submitted by Nathan Chancellor

Clang warns when an enumerated type is implicitly converted to another:

    drivers/infiniband/sw/rxe/rxe.c:106:27: warning: implicit conversion from
    enumeration type 'enum rxe_device_param' to different enumeration type
    'enum ib_atomic_cap' [-Wenum-conversion]
            rxe->attr.atomic_cap = RXE_ATOMIC_CAP;
    drivers/infiniband/sw/rxe/rxe.c:131:22: warning: implicit conversion from
    enumeration type 'enum rxe_port_param' to different enumeration type
    'enum ib_port_state' [-Wenum-conversion]
            port->attr.state = RXE_PORT_STATE;
    drivers/infiniband/sw/rxe/rxe.c:132:24: warning: implicit conversion from
    enumeration type 'enum rxe_port_param' to different enumeration type
    'enum ib_mtu' [-Wenum-conversion]
            port->attr.max_mtu = RXE_PORT_MAX_MTU;
    drivers/infiniband/sw/rxe/rxe.c:133:27: warning: implicit conversion from
    enumeration type 'enum rxe_port_param' to different enumeration type
    'enum ib_mtu' [-Wenum-conversion]
            port->attr.active_mtu = RXE_PORT_ACTIVE_MTU;
    drivers/infiniband/sw/rxe/rxe.c:151:24: warning: implicit conversion from
    enumeration type 'enum rxe_port_param' to different enumeration type
    'enum ib_mtu' [-Wenum-conversion]
            ib_mtu_enum_to_int(RXE_PORT_ACTIVE_MTU);
    5 warnings generated.

Use the appropriate values from the expected enumerated type so no conversion needs to happen, then remove the unneeded definitions.

Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
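The shape of the fix, in a sketch with simplified rxe structures; the ib_* values shown are plausible rxe defaults, not necessarily the patch's exact choices:

    #include <rdma/ib_verbs.h>

    static void init_rxe_attrs(struct ib_device_attr *attr,
                               struct ib_port_attr *port)
    {
            attr->atomic_cap = IB_ATOMIC_HCA;       /* was RXE_ATOMIC_CAP */
            port->state      = IB_PORT_DOWN;        /* was RXE_PORT_STATE */
            port->max_mtu    = IB_MTU_4096;         /* was RXE_PORT_MAX_MTU */
            port->active_mtu = IB_MTU_256;          /* was RXE_PORT_ACTIVE_MTU */
    }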
-
Submitted by Leon Romanovsky

Replace custom code for allocating indexes with the generic kernel API.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Leon Romanovsky

Replace custom code for allocating indexes with the generic kernel API.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
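A minimal sketch of the substitution in both commits, assuming a bounded index space (the 8191 cap anticipates the limit introduced in the next commit); names are illustrative rather than the exact driver symbols:

    #include <linux/idr.h>
    #include <linux/gfp.h>

    static DEFINE_IDA(dev_index_ida);

    static int alloc_dev_index(void)
    {
            /* Replaces hand-rolled find-first-zero-bit bookkeeping. */
            return ida_alloc_range(&dev_index_ida, 0, 8191, GFP_KERNEL);
    }

    static void free_dev_index(int index)
    {
            ida_free(&dev_index_ida, index);
    }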
-
Submitted by Leon Romanovsky

IDA adds overhead to store the ID bitmap, and the maximal IDA value can be up to 2099202 (IDA_MAX = 0x80000000U / IDA_BITMAP_BITS - 1). However, there is no need for such an enormous number of devices; for now it is enough to limit it to 8192.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Håkon Bugge

Add said information and make the debug print format consistent.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Håkon Bugge

IB Subnet Management Packets (SMPs) were excluded from debug prints. Fix this by enabling printing even for QP0 MADs.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Parav Pandit

Normally, kobj objects have a kobj suffix to reflect this. Rename ports_parent to ports_kobj.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Parav Pandit

If the provider driver (such as rdma_rxe) doesn't support pma counters, avoid exposing its directory, similar to the optional hw_counters directory. If the core fails to read the PMA counter, return an error so that the user can retry later if needed.

Fixes: 35c4cbb1 ("IB/core: Create get_perf_mad function in sysfs.c")
Reported-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-