- 06 Sep, 2009 (9 commits)
-
-
Committed by Don Wood
Use the flush status to fill in the cqe status when a specific error has been identified. Subsequent flushed completions still use the flushed value. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Don Wood
When a flush request is given to the hardware, it places one cqe marked as flushed (unless there is nothing to flush). An application waiting for all wqes to complete would be left hanging. This modifies poll_cq to return the correct number of flushes for the pending elements on the wq. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Don Wood
When an asynchronous event occurs that requires a terminate, it is sometimes possible to identify the wqe in error. This change uses the flush operation to get that information to the poll routine: the flush puts the status into the cqe. If the information is not available, the more generic flush code is used as before. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Don Wood
Implement the sending and receiving of Terminate packets. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Don Wood
CQ errors are not being handled correctly. Add the upcall for CQ errors. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
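A rough sketch of how a verbs driver typically delivers such an upcall, using the kernel's ib_verbs event structures; the wrapper function and the point at which it is invoked are assumptions, not taken from the nes patch itself:

    #include <rdma/ib_verbs.h>

    /* Sketch: report an asynchronous CQ error upward to the consumer that
     * owns the CQ, via the event handler it registered at CQ creation. */
    static void report_cq_error(struct ib_cq *ibcq)
    {
            struct ib_event event;

            if (!ibcq->event_handler)
                    return;

            event.device     = ibcq->device;
            event.event      = IB_EVENT_CQ_ERR;
            event.element.cq = ibcq;
            ibcq->event_handler(&event, ibcq->cq_context);
    }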
-
Committed by Don Wood
When a QP is destroyed, unprocessed CQ entries could still reference the QP. This change zeroes the context value at QP destroy time; by skipping over cqes with a zero context, poll_cq no longer processes a cqe for a destroyed QP. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Don Wood
The routine that allocates a cqp request is not called from process context, so it must not sleep; it needs to use GFP_ATOMIC, not GFP_KERNEL. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
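A minimal sketch of the allocation-flag change being described; the request structure and function name are placeholders, not the driver's real ones:

    #include <linux/slab.h>
    #include <linux/list.h>
    #include <linux/types.h>

    struct cqp_request {                    /* placeholder; the real struct lives in the driver */
            struct list_head list;
            u32 major_code;
    };

    /* Sketch: this allocator can run outside process context (e.g. from an
     * interrupt handler), so it must not sleep.  GFP_KERNEL may sleep while
     * reclaiming memory; GFP_ATOMIC never does. */
    static struct cqp_request *alloc_cqp_request(void)
    {
            return kzalloc(sizeof(struct cqp_request), GFP_ATOMIC);  /* was GFP_KERNEL */
    }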
-
Committed by Don Wood
The code currently embeds a work structure in the QP, which requires a lock and a pending flag to ensure there is never more than one request active. When two events happen quickly (such as FIN and LLP CLOSE), this causes unnecessary timeouts because the second one is dropped. This fix allocates memory for each work request so the second one can be queued; the lock is removed since it is no longer needed. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
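A hedged sketch of the per-event work-item pattern described above, with placeholder type and function names; the real driver code differs in detail:

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    /* Sketch: one small, kmalloc'ed work item per event, instead of a single
     * work_struct embedded in the QP, so back-to-back events (FIN followed by
     * LLP CLOSE) are both queued rather than the second being dropped. */
    struct qp_event_work {
            struct work_struct work;
            void *qp;               /* the QP this event refers to (type assumed) */
            int event;
    };

    static void qp_event_worker(struct work_struct *work)
    {
            struct qp_event_work *w = container_of(work, struct qp_event_work, work);

            /* ... deliver w->event for w->qp ... */
            kfree(w);               /* each work item frees itself when done */
    }

    static int queue_qp_event(void *qp, int event)
    {
            struct qp_event_work *w = kzalloc(sizeof(*w), GFP_ATOMIC);

            if (!w)
                    return -ENOMEM;
            w->qp = qp;
            w->event = event;
            INIT_WORK(&w->work, qp_event_worker);
            schedule_work(&w->work);
            return 0;
    }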
-
Committed by Don Wood
During termination, it is possible for the refcnt to reach zero while the worker thread is posting events upward. This fix increments the refcnt before the request is passed to the worker thread; the thread decrements the refcnt when the request is completed. Signed-off-by: Don Wood <donald.e.wood@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
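A sketch of the take-reference-before-queueing ordering, using a plain atomic counter as a stand-in for the driver's refcnt helpers:

    #include <linux/atomic.h>

    struct term_qp {                        /* illustrative stand-in for the driver's QP */
            atomic_t refcount;
    };

    /* Producer: take the reference *before* the request is handed to the
     * worker thread, so the QP cannot disappear while the event is in flight. */
    static void hand_off_to_worker(struct term_qp *qp)
    {
            atomic_inc(&qp->refcount);
            /* ... queue the work item that carries qp ... */
    }

    /* Worker: drop the reference only once the event has been posted upward. */
    static void request_completed(struct term_qp *qp)
    {
            if (atomic_dec_and_test(&qp->refcount)) {
                    /* last reference gone: the QP may be freed here */
            }
    }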
-
- 24 Jun, 2009 (2 commits)
-
-
Committed by Peter Huewe
Add __init and __exit annotations to the module_init/module_exit functions of drivers/infiniband/core/addr.c and cma.c. Signed-off-by: Peter Huewe <peterhuewe@gmx.de> Acked-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
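The annotation pattern looks roughly like this (function names are illustrative, not copied from addr.c):

    #include <linux/init.h>
    #include <linux/module.h>

    /* Sketch: __init code is discarded once the module has loaded, and
     * __exit code is dropped entirely for built-in drivers that can never
     * be unloaded. */
    static int __init addr_init(void)
    {
            return 0;               /* real initialization elided */
    }

    static void __exit addr_cleanup(void)
    {
    }

    module_init(addr_init);
    module_exit(addr_cleanup);
    MODULE_LICENSE("GPL");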
-
Committed by Alexander Schmidt
Increment the version number for DMEM toleration. Signed-off-by: Alexander Schmidt <alexs@linux.vnet.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 23 Jun, 2009 (5 commits)
-
-
Committed by Roland Dreier
dma_sync_single() is deprecated now, and its use in mthca is wrong: there should be a dma_sync_single_for_cpu() before touching the memory from the CPU, and a dma_sync_single_for_device() afterwards. Fix this, prompted by a kick in the pants from a patch from FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>. Signed-off-by: Roland Dreier <rolandd@cisco.com>
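The corrected usage pattern, sketched with an assumed bidirectional streaming mapping:

    #include <linux/dma-mapping.h>

    /* Sketch: bracket CPU access to a streaming DMA buffer with the two
     * directional calls instead of the deprecated dma_sync_single(). */
    static void cpu_touch_dma_buffer(struct device *dev, dma_addr_t dma_handle,
                                     void *cpu_addr, size_t size)
    {
            dma_sync_single_for_cpu(dev, dma_handle, size, DMA_BIDIRECTIONAL);

            /* ... the CPU may now safely read and modify cpu_addr ... */

            dma_sync_single_for_device(dev, dma_handle, size, DMA_BIDIRECTIONAL);
    }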
-
Committed by Faisal Latif
During cluster testing, one QP was not closed, because FIN is not handled properly when its rexmit count expires, or in some cases when RST is received after sending FIN. The reason is that the cm_id reference count does not get decremented under these conditions. Signed-off-by: Faisal Latif <faisal.latif@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Faisal Latif
In nes_query_device(), max_qp_init_rd_atom is incorrectly set to max_qp_wr. This was found when a test application hit a DAPL async event error. Signed-off-by: Faisal Latif <faisal.latif@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roel Kluin
This prevents the memcpy() of a guid_entries element using a negative index. Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
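A generic sketch of the kind of guard such a fix adds; the array name, size, and helper are illustrative only:

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>

    #define NUM_GUID_ENTRIES 32                     /* illustrative size */
    static u64 guid_entries[NUM_GUID_ENTRIES];

    static int copy_guid_entry(u64 *dst, int index)
    {
            /* Reject a negative (or too large) index before it reaches memcpy(). */
            if (index < 0 || index >= NUM_GUID_ENTRIES)
                    return -EINVAL;

            memcpy(dst, &guid_entries[index], sizeof(guid_entries[0]));
            return 0;
    }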
-
Committed by Hannes Hering
Implement toleration of dynamic memory operations and 16 GB gigantic pages, where "toleration" means that the driver can cope with dynamic memory operations that happen before the driver is loaded. While the ehca driver is loaded, dynamic memory operations are still prohibited by returning NOTIFY_BAD from the memory notifier. On module load the driver walks through available system memory, checks for available memory ranges and then registers the kernel-internal memory region accordingly. The translation of address ranges is implemented via a 3-level busmap. Signed-off-by: Hannes Hering <hering2@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
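A sketch of the prohibition half of this (a memory notifier that vetoes hotplug while the driver is loaded); the callback shown is generic, not the ehca implementation, and the busmap walk is omitted:

    #include <linux/memory.h>
    #include <linux/notifier.h>

    static int mem_notifier_call(struct notifier_block *nb,
                                 unsigned long action, void *data)
    {
            switch (action) {
            case MEM_GOING_ONLINE:
            case MEM_GOING_OFFLINE:
                    return NOTIFY_BAD;              /* refuse the hotplug operation */
            default:
                    return NOTIFY_OK;
            }
    }

    static struct notifier_block mem_nb = {
            .notifier_call = mem_notifier_call,
    };

    /* register_memory_notifier(&mem_nb) at module init,
     * unregister_memory_notifier(&mem_nb) at module exit. */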
-
- 16 Jun, 2009 (2 commits)
-
-
Committed by Greg Kroah-Hartman
In the near future, the driver core is not going to allow direct access to the driver_data pointer in struct device. Instead, the functions dev_get_drvdata() and dev_set_drvdata() should be used. These functions have been around since the beginning, so they are backwards compatible with all older kernel versions. Cc: Sean Hefty <sean.hefty@intel.com> Cc: Roland Dreier <rolandd@cisco.com> Cc: Hal Rosenstock <hal.rosenstock@gmail.com> Cc: general@lists.openfabrics.org Cc: Christoph Raisch <raisch@de.ibm.com> Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
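The conversion itself is mechanical; a sketch with an assumed driver-private data type:

    #include <linux/device.h>

    struct port_data { int id; };               /* illustrative private data */

    static void attach_port_data(struct device *dev, struct port_data *pd)
    {
            dev_set_drvdata(dev, pd);           /* instead of: dev->driver_data = pd; */
    }

    static struct port_data *get_port_data(struct device *dev)
    {
            return dev_get_drvdata(dev);        /* instead of: dev->driver_data */
    }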
-
Committed by Greg Kroah-Hartman
In the near future, the driver core is not going to allow direct access to the driver_data pointer in struct device. Instead, the functions dev_get_drvdata() and dev_set_drvdata() should be used. These functions have been around since the beginning, so they are backwards compatible with all older kernel versions. Cc: general@lists.openfabrics.org Cc: Roland Dreier <rolandd@cisco.com> Cc: Hal Rosenstock <hal.rosenstock@gmail.com> Cc: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 14 Jun, 2009 (1 commit)
-
-
Committed by Roland Dreier
When both MSI-X and legacy INTx fail to generate an interrupt, the driver frees the MSI-X interrupts twice. Fix this by clearing the have_irq flag for the MSI-X interrupts when they are freed the first time. Reported-by: Yinghai Lu <yhlu.kernel@gmail.com> Tested-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
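A sketch of the idea, with structure and field names modeled loosely on the description rather than taken from the driver:

    #include <linux/interrupt.h>

    struct eq_entry {                   /* assumed shape, for illustration */
            int irq;
            int have_irq;
            void *dev_id;
    };

    /* Sketch: once an MSI-X vector has been freed during the fallback to
     * INTx, clear its have_irq flag so the common cleanup path cannot call
     * free_irq() on the same vector a second time. */
    static void release_eq_irq(struct eq_entry *eq)
    {
            if (!eq->have_irq)
                    return;
            free_irq(eq->irq, eq->dev_id);
            eq->have_irq = 0;
    }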
-
- 06 Jun, 2009 (1 commit)
-
-
Committed by Jack Morgenstein
The ConnectX Programmer's Reference Manual states that the "SO" bit must be set when posting Fast Register and Local Invalidate send work requests. When this bit is set, the work request will be executed only after all previous work requests on the send queue have been executed. (If the bit is not set, Fast Register and Local Invalidate WQEs may begin execution too early, which violates the defined semantics for these operations.) This fixes the issue with NFS/RDMA reported in <http://lists.openfabrics.org/pipermail/general/2009-April/059253.html>. Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Cc: <stable@kernel.org> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 04 Jun, 2009 (1 commit)
-
-
Committed by Joachim Fenkes
All the fields in the control block are nicely right-aligned, so no masking is necessary. Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 03 Jun, 2009 (1 commit)
-
-
Committed by Eric Dumazet
Define three accessors to get/set the dst attached to an skb: struct dst_entry *skb_dst(const struct sk_buff *skb), void skb_dst_set(struct sk_buff *skb, struct dst_entry *dst), and void skb_dst_drop(struct sk_buff *skb). The last one should replace occurrences of dst_release(skb->dst); skb->dst = NULL. Finally, delete the skb->dst field. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
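A sketch of how a driver would use the new accessors (the helper names here are made up):

    #include <linux/skbuff.h>
    #include <net/dst.h>

    static void drop_route(struct sk_buff *skb)
    {
            /* old open-coded form:
             *      dst_release(skb->dst);
             *      skb->dst = NULL;
             */
            skb_dst_drop(skb);
    }

    static bool has_route(const struct sk_buff *skb)
    {
            return skb_dst(skb) != NULL;        /* read accessor */
    }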
-
- 31 May, 2009 (1 commit)
-
-
Committed by Eric Dumazet
These are the last two drivers that need skb->dst in their start_xmit() function. Tell dev_hard_start_xmit() not to release it by unsetting IFF_XMIT_DST_RELEASE. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
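In the 2009-era API this amounts to one line in the netdev setup path; a sketch with an assumed setup function:

    #include <linux/netdevice.h>

    /* Sketch: a driver whose hard_start_xmit() still dereferences skb_dst()
     * clears the flag at setup time, so dev_hard_start_xmit() leaves the dst
     * attached to the skb instead of releasing it early. */
    static void my_netdev_setup(struct net_device *dev)
    {
            dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
    }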
-
- 28 May, 2009 (3 commits)
-
-
Committed by Steve Wise
T3 firmware only supports one WR's worth of page list for fast register work requests. The driver currently allows two WRs' worth, which doesn't work for T3, so reduce the limit in the driver. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Steve Wise
Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Eli Cohen
The current MTT allocator uses kmalloc() to allocate a buffer for its buddy allocator, and is therefore limited in the number of MTT segments that it can control. As a result, the amount of memory that can be registered is limited too. This patch uses a module parameter to control the number of MTT entries that each segment represents, allowing more memory to be registered with the same number of segments. Signed-off-by: Eli Cohen <eli@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
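A sketch of the module-parameter mechanism being described; treat the parameter name, default, and permissions as assumptions rather than the driver's exact values:

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    /* Sketch: expose the per-segment MTT entry count (log2) as a module
     * parameter, so each buddy-allocator segment can map more memory
     * without growing the kmalloc'ed buddy bitmap. */
    static int log_mtts_per_seg = 3;            /* 2^3 = 8 MTT entries per segment */
    module_param_named(log_mtts_per_seg, log_mtts_per_seg, int, 0444);
    MODULE_PARM_DESC(log_mtts_per_seg,
                     "Log2 of the number of MTT entries per memory segment");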
-
- 24 May, 2009 (2 commits)
-
-
Committed by Mike Christie
If a task did not complete normally due to a TMF, libiscsi will now complete the task with the state ISCSI_TASK_ABRT_TMF. Drivers like bnx2i that need to free resources when a command did not complete normally can then check the task state. A driver that should not send a special cleanup command once the session has been dropped can check for ISCSI_TASK_ABRT_SESS_RECOV instead. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
Committed by Mike Christie
When we create the TCP/IP connection by calling ep_connect, we currently just go by the routing table info. I think there are two problems with this. 1. Some drivers do not have access to a routing table; some, like qla4xxx, do not even know about other ports. 2. If you have two initiator ports on the same subnet, the user may have set things up so that session1 was supposed to run through port1 and session2 through port2; it looks like we could end up with both sessions going through one of the ports. Includes fixes for cxgb3i from Karen Xie. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
- 19 May, 2009 (1 commit)
-
-
Committed by Eric W. Biederman
Network device sysfs files that grab the rtnl_lock unconditionally will deadlock if accessed while the network device is being unregistered. So use rtnl_trylock() and restart_syscall() to avoid this deadlock. Signed-off-by: Eric W. Biederman <ebiederm@aristanetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
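A sketch of the resulting show-handler pattern (attribute name and value are placeholders):

    #include <linux/device.h>
    #include <linux/rtnetlink.h>
    #include <linux/sched.h>

    /* Sketch: never block on the rtnl lock from a netdev sysfs handler,
     * since unregister holds it while tearing the sysfs files down;
     * restart the syscall instead. */
    static ssize_t example_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            ssize_t ret;

            if (!rtnl_trylock())
                    return restart_syscall();

            ret = sprintf(buf, "%d\n", 0);      /* real attribute value elided */

            rtnl_unlock();
            return ret;
    }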
-
- 16 May, 2009 (1 commit)
-
-
Committed by Roel Kluin
With a postfix increment, i is incremented one past 10K/5K before the loop ends, so the error messages will be displayed too soon if the test succeeds on the last iteration. Fix the comparisons to be > instead of >=. Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
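A small standalone illustration of the off-by-one, assuming a 10K-iteration poll loop like the one described:

    #include <stdbool.h>
    #include <stdio.h>

    static bool poll_done(int attempt)
    {
            return attempt == 9999;             /* succeeds on the very last try */
    }

    int main(void)
    {
            int i = 0;

            while (i++ < 10000)
                    if (poll_done(i - 1))
                            break;

            /* On success at the last iteration i ends at 10000; only on a real
             * timeout does it reach 10001, so the check must be ">", not ">=". */
            if (i > 10000)
                    printf("error: timed out\n");
            else
                    printf("ok after %d polls\n", i);
            return 0;
    }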
-
- 14 May, 2009 (5 commits)
-
-
Committed by Jack Stone
Remove unneeded casts of void *. Signed-off-by: Jack Stone <jwjstone@fastmail.fm> Signed-off-by: Roland Dreier <rolandd@cisco.com>
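For illustration, the kind of cast being removed (the structure and function are placeholders):

    #include <linux/slab.h>

    struct thing { int x; };

    static struct thing *make_thing(void)
    {
            /* before: struct thing *t = (struct thing *)kmalloc(sizeof(*t), GFP_KERNEL);
             * after (void * converts implicitly in C, so the cast is noise): */
            struct thing *t = kmalloc(sizeof(*t), GFP_KERNEL);

            return t;
    }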
-
Committed by Stefan Roscher
Signed-off-by: Stefan Roscher <stefan.roscher@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Stefan Roscher
The queue map for flush completion circumvention is only used for kernel-space queue pairs. This patch skips the allocation of the queue maps in case the QP is created for userspace. In addition, this patch does not iomap the galpas for kernel usage if the queue pair is only used in userspace. These changes improve the performance of creating userspace queue pairs. Signed-off-by: Stefan Roscher <stefan.roscher@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Stefan Roscher
In the case of large queue pairs there is the possibility of allocation failures due to memory fragmentation when using kmalloc(). To ensure the memory is allocated even if kmalloc() cannot find chunks that are big enough, we fall back to allocating the memory with vmalloc(). Signed-off-by: Stefan Roscher <stefan.roscher@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
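A sketch of the fallback pattern (helper names are illustrative); note that the free path must pick the deallocator matching whichever allocation succeeded:

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *alloc_queue_buf(size_t size)
    {
            /* Try the fast, physically contiguous kmalloc() first and fall
             * back to vmalloc() when fragmentation defeats a large request. */
            void *buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

            if (!buf)
                    buf = vmalloc(size);
            return buf;
    }

    static void free_queue_buf(void *buf)
    {
            if (is_vmalloc_addr(buf))
                    vfree(buf);
            else
                    kfree(buf);
    }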
-
Committed by Anton Blanchard
To improve the performance of driver resource allocation, replace vmalloc() calls with kmalloc(). Signed-off-by: Stefan Roscher <stefan.roscher@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 09 May, 2009 (1 commit)
-
-
Committed by Al Viro
Forgot to unlock the superblock before calling deactivate_super()... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 08 May, 2009 (1 commit)
-
-
Committed by Jack Morgenstein
The low-level mlx4 driver modified the page-list addresses of fast register work requests to big-endian at post-send time, and set a "present" bit. This caused problems later when the consumer attempted to unmap the pages using the page list (using the list addresses, which were assumed to still be in CPU-endian order). Fix the mlx4 driver to allocate two buffers and use a private buffer for the hardware-format bus addresses. This patch fixes <https://bugs.openfabrics.org/show_bug.cgi?id=1571>, an NFS/RDMA server crash. The cause of the crash was found by Vu Pham of Mellanox. The fix is along the lines suggested by Steve Wise in comment #21 of bug 1571. Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 30 Apr, 2009 (1 commit)
-
-
Committed by Steve Wise
When the SQ is flushed, mark the flushed entries as not signaled so the poll logic doesn't re-insert the CQ entry thinking it is an out-of-order completion. The bug can cause the NFS/RDMA server to crash due to processing the same completed work request twice. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 28 Apr, 2009 (2 commits)
-
-
Committed by Chien Tung
Update version number to 1.5.0.0. Signed-off-by: Chien Tung <chien.tin.tung@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Faisal Latif
If reg_phys_mem() fails, we need to free the memory allocated for the MPA frame with private data before returning the error. Also move nes_add_ref() to after the reg_phys_mem() call succeeds. Signed-off-by: Faisal Latif <faisal.latif@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-