- 15 July 2008, 9 commits
-
-
Committed by Steve Wise
This patch adds support for the IB "base memory management extensions" (BMME) and the equivalent iWARP operations (which the iWARP verbs mandate all devices must implement).

The new operations are:
- Allocate an ib_mr for use in fast register work requests.
- Allocate/free physical buffer lists for use in fast register work requests. This allows device drivers to allocate this memory as needed for use in posting send requests (eg via dma_alloc_coherent).
- New send queue work requests:
  * send with remote invalidate
  * fast register memory region
  * local invalidate memory region
  * RDMA read with invalidate local memory region (iWARP only)

Consumer interface details:
- A new device capability flag IB_DEVICE_MEM_MGT_EXTENSIONS is added to indicate device support for these features.
- New send work request opcodes IB_WR_FAST_REG_MR, IB_WR_LOCAL_INV, IB_WR_RDMA_READ_WITH_INV are added.
- A new consumer API function, ib_alloc_mr(), is added to allocate fast register memory regions.
- New consumer API functions, ib_alloc_fast_reg_page_list() and ib_free_fast_reg_page_list(), are added to allocate and free device-specific memory for fast registration page lists.
- A new consumer API function, ib_update_fast_reg_key(), is added to allow the key portion of the R_Key and L_Key of a fast registration MR to be updated. Consumers call this if desired before posting an IB_WR_FAST_REG_MR work request.

Consumers can use this as follows:
- MR is allocated with ib_alloc_mr().
- Page list memory is allocated with ib_alloc_fast_reg_page_list().
- The MR R_Key/L_Key "key" field is updated with ib_update_fast_reg_key().
- The MR is made VALID and bound to a specific page list via ib_post_send(IB_WR_FAST_REG_MR).
- The MR is made INVALID via ib_post_send(IB_WR_LOCAL_INV), ib_post_send(IB_WR_RDMA_READ_WITH_INV) or an incoming send with invalidate operation.
- The MR is deallocated with ib_dereg_mr().
- Page lists are deallocated via ib_free_fast_reg_page_list().

Applications can allocate a fast register MR once, and then repeatedly bind the MR to different physical block lists (PBLs) via posting work requests to a send queue (SQ). For each outstanding MR-to-PBL binding in the SQ pipe, a fast_reg_page_list needs to be allocated (the fast_reg_page_list is owned by the low-level driver from the time the consumer posts a work request until the request completes). Thus pipelining can be achieved while still allowing device-specific page_list processing.

The 32-bit fast register memory key/STag is composed of a 24-bit index and an 8-bit key. The application can change the key each time it fast registers, thus allowing more control over the peer's use of the key/STag (ie it can effectively be changed each time the rkey is rebound to a page list).

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
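For illustration, a minimal sketch of the consumer flow described above (the ib_send_wr field names, the ib_alloc_mr() argument list and the error handling are assumptions, not taken from the patch; pd, qp, npages and new_key are placeholders):

    /* Sketch only -- field names and arguments are illustrative. */
    struct ib_mr *mr;
    struct ib_fast_reg_page_list *pl;
    struct ib_send_wr wr, *bad_wr;

    mr = ib_alloc_mr(pd, npages);                   /* fast register MR */
    pl = ib_alloc_fast_reg_page_list(pd->device, npages);

    ib_update_fast_reg_key(mr, new_key);            /* pick the 8-bit key part */

    memset(&wr, 0, sizeof wr);
    wr.opcode = IB_WR_FAST_REG_MR;                  /* MR becomes VALID, bound to pl */
    wr.wr.fast_reg.rkey = mr->rkey;
    wr.wr.fast_reg.page_list = pl;
    wr.wr.fast_reg.page_list_len = npages;
    /* plus length, iova_start, page_shift and access flags */
    ib_post_send(qp, &wr, &bad_wr);

    memset(&wr, 0, sizeof wr);                      /* later: MR becomes INVALID */
    wr.opcode = IB_WR_LOCAL_INV;
    wr.ex.invalidate_rkey = mr->rkey;
    ib_post_send(qp, &wr, &bad_wr);

    ib_free_fast_reg_page_list(pl);                 /* after the work requests complete */
    ib_dereg_mr(mr);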
-
Committed by Eli Cohen
The connected mode implementation in the IPoIB driver has a large overhead in the way SKBs are handled in the receive flow. It usually allocates a new SKB as big as the one used for the currently received SKB and moves unused fragments from the old SKB to the new one. This involves a loop over all the remaining fragments and incurs overhead on the CPU. This patch, for small SKBs, allocates an SKB just large enough to contain the received data and copies to it the data from the received SKB. The newly allocated SKB is passed to the stack and the old SKB is reposted.

When running netperf, UDP small messages, without this patch I get:

    UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
    to 14.4.3.178 (14.4.3.178) port 0 AF_INET
    Socket  Message  Elapsed      Messages
    Size    Size     Time         Okay Errors   Throughput
    bytes   bytes    secs            #      #   10^6bits/sec

    114688     128   10.00     5142034      0     526.31
    114688           10.00     1130489            115.71

With this patch I get both send and receive at ~315 mbps.

The reason that send performance actually slows down is as follows: when using this patch, the overhead of the CPU for handling RX packets is dramatically reduced. As a result, we do not experience RNR NAK messages from the receiver, which cause the connection to be closed and reopened again; when the patch is not used, the receiver cannot handle the packets fast enough, so there is less time to post new buffers and hence the mentioned RNR NAKs. So what happens is that the application *thinks* it posted a certain number of packets for transmission, but these packets are flushed and do not really get transmitted. Since the connection gets opened and closed many times, each time netperf gets the CPU time that otherwise would have been given to IPoIB to actually transmit the packets.

This can be verified when looking at the port counters -- the output of ifconfig and the output of netperf (this is for the case without the patch):

    tx packets
    ==========
    port counter:   1,543,996
    ifconfig:       1,581,426
    netperf:        5,142,034

    rx packets
    ==========
    netperf:        1,1304,089

Signed-off-by: Eli Cohen <eli@mellanox.co.il>
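A rough sketch of the copy path described (the copybreak threshold, its name, and the repost helper are assumptions, not taken from the patch):

    /* Hypothetical sketch: for small packets, hand a right-sized copy to the
     * stack and keep the original SKB so it can be reposted to the receive
     * queue unchanged. */
    #define SMALL_PKT_COPYBREAK 256                 /* assumed threshold */

    if (len <= SMALL_PKT_COPYBREAK) {
        struct sk_buff *small = dev_alloc_skb(len + NET_IP_ALIGN);

        if (small) {
            skb_reserve(small, NET_IP_ALIGN);
            skb_copy_from_linear_data(skb, skb_put(small, len), len);
            small->protocol = skb->protocol;        /* plus any other needed metadata */
            netif_receive_skb(small);
            repost_receive_buffer(skb);             /* placeholder for reposting */
            return;
        }
    }
    /* otherwise fall back to the original fragment-moving path */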
-
Committed by Roland Dreier
They don't get updated by git and so they're worse than useless.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Robert P. J. Day
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Eli Cohen
The idea is that for QPs with fixed size work requests (eg selective signaling QPs), before stamping the WQE, we read the value of the DS field, which gives the effective size of the descriptor as used in the previous post. Then we stamp only that area, since the rest of the descriptor is already stamped.

When initializing the send queue buffer, make sure the DS field is initialized to the max descriptor size so that the subsequent stamping will be done on the entire descriptor area.

Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Moni Shoua
This patch solves a race that occurs after an event causes the SA query module to flush its SM address handle (AH). When the SM AH becomes invalid and needs an update, it is handled by the global workqueue. On the other hand, this event is also handled in the IPoIB driver by queuing work in the ipoib_workqueue that does multicast joins. Although queuing is in the right order, it is done to 2 different workqueues and so there is no guarantee that the first to be queued is the first to be executed. This causes a problem because IPoIB may end up sending a request to the old SM, which will take a long time to time out (since the old SM is gone); this leads to a much longer than necessary interruption in multicast traffic.

The patch sets the SA query module's SM AH to NULL when the event occurs, and until update_sm_ah() is done, any request that needs sm_ah fails with -EAGAIN return status.

For consumers, the patch doesn't make things worse. Before the patch, MADs are sent to the wrong SM so the request gets lost. Consumers can be improved if they examine the return code and respond to EAGAIN properly, but even without an improvement the situation is not getting worse.

Signed-off-by: Moni Levy <monil@voltaire.com>
Signed-off-by: Moni Shoua <monis@voltaire.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
The license text for several files references a third software license that was inadvertently copied in. Update the license to what was intended. This update was based on a request from HP.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Christophe Jaillet
Remove an explicit memset(..., 0, ...) of a 'listener' structure allocated with kzalloc().

Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
Acked-by: Faisal Latif <faisal@neteffect.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
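For reference, the pattern being cleaned up looks roughly like this (the structure type here is only a stand-in):

    struct nes_cm_listener *listener;               /* stand-in for the real type */

    listener = kzalloc(sizeof(*listener), GFP_KERNEL);
    if (!listener)
        return NULL;
    /* No memset(listener, 0, sizeof(*listener)) needed here:
     * kzalloc() already returns zeroed memory. */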
-
Committed by Roland Dreier
The SRP initiator is currently using ib_find_cached_pkey() and ib_get_cached_gid() in situations where the uncached ib_find_pkey() and ib_query_gid() functions serve just as well: sleeping is allowed and performance is not an issue. Since we want to eliminate the cached operations in the long term, convert SRP to use the uncached variants.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 12 July 2008, 1 commit
-
-
Committed by Roland Dreier
Remove explicit lock_kernel() calls and document why the code is safe.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 09 July 2008, 1 commit
-
-
Committed by Steve Wise
The change to iwch_provider.c in commit f4e91eb4 ("IB: convert struct class_device to struct device") undid the fix done in commit 7f049f2f ("RDMA/cxgb3: Hold rtnl_lock() around ethtool get_drvinfo call"). It removed the calls to rtnl_lock() that serialized the iw_cxgb3 ethtool ops calls into the cxgb3 driver. This locking is needed to avoid messing up the internal state of the cxgb3 driver.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 05 July 2008, 1 commit
-
-
Committed by Roland Dreier
Remove explicit lock_kernel() calls and document why the code is safe.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 24 June 2008, 1 commit
-
-
Committed by Eli Cohen
Current memfree FW has a bug: in some cases, it assumes that ICM pages passed to it are cleared. This patch uses __GFP_ZERO to allocate all ICM pages passed to the FW. Once firmware with a fix is released, we can make the workaround conditional on firmware version.

This fixes the bug reported by Arthur Kepner <akepner@sgi.com> here: http://lists.openfabrics.org/pipermail/general/2008-May/050026.html

Cc: <stable@kernel.org>
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
[ Rewritten to be a one-liner using __GFP_ZERO instead of vmap()ing ICM memory and memset()ing it to 0. - Roland ]
Signed-off-by: Roland Dreier <rolandd@cisco.com>
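The described one-liner amounts to ORing __GFP_ZERO into the mask used when allocating ICM pages; schematically (the surrounding helper is paraphrased, not the actual mthca code):

    /* Paraphrased: zero ICM pages at allocation time so the buggy firmware
     * never sees stale data. */
    static struct page *icm_alloc_pages(gfp_t gfp_mask, int order)
    {
        return alloc_pages(gfp_mask | __GFP_ZERO, order);
    }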
-
- 21 June 2008, 4 commits
-
-
Committed by Arnd Bergmann
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Jonathan Corbet
All of the open() functions which don't need the BKL on their face may still depend on its acquisition to serialize opens against driver initialization. So make those functions acquire then release the BKL to be on the safe side.

Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
Committed by Jonathan Corbet
Be extra-cautious and protect the remaining open() functions.

Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
Committed by Jonathan Corbet
This documents the fact that somebody looked at the relevant open() functions and concluded that, due to their trivial nature, no locking was needed.

Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 19 June 2008, 1 commit
-
-
Committed by Jack Morgenstein
Commit 1ae5c187 ("IB/uverbs: Don't store struct file * for event files") changed the way that closed files are handled in the uverbs code. However, after the conversion, the is_closed flag is checked incorrectly in ib_uverbs_async_handler(). As a result, no async events are ever passed to applications.

Found by: Ronni Zimmerman <ronniz@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 11 June 2008, 1 commit
-
-
Committed by Roland Dreier
nes_reg_user_mr() should fail if page_count becomes >= 1024 * 512 rather than just testing for strict >, because page_count is essentially used as an index into an array with 1024 * 512 entries, so allowing the loop to continue with page_count == 1024 * 512 means that memory after the end of the array is corrupted. This leads to a crash triggerable by a userspace application that requests registration of a too-big region.

Also get rid of the call to pci_free_consistent() here to avoid corrupting state with a double free, since the same memory will be freed in the code jumped to at reg_user_mr_err.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
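In outline, the first problem is the classic "index must stay strictly below the array size" check; a simplified sketch (the limit macro and the error handling are assumptions):

    #define NES_MAX_PBL_ENTRIES (1024 * 512)        /* assumed name for the limit */

    if (page_count >= NES_MAX_PBL_ENTRIES) {
        /* With '>' alone, page_count == NES_MAX_PBL_ENTRIES would still be
         * used as an index, writing one entry past the end of the array. */
        err = -E2BIG;                               /* error code assumed */
        goto reg_user_mr_err;
    }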
-
- 10 June 2008, 1 commit
-
-
Committed by Roland Dreier
In 2.6.26, we added some support for send with invalidate work requests, including a device capability flag to indicate whether a device supports such requests. However, the support was incomplete: the completion structure was not extended with a field for the key contained in incoming send with invalidate requests.

Full support for memory management extensions (send with invalidate, local invalidate, fast register through a send queue, etc) is planned for 2.6.27. Since send with invalidate is not very useful by itself, just remove the IB_DEVICE_SEND_W_INV bit before the 2.6.26 final release; we will add an IB_DEVICE_MEM_MGT_EXTENSIONS bit in 2.6.27, which makes things simpler for applications, since they will not have quite as confusing an array of fine-grained bits to check.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 07 June 2008, 3 commits
-
-
Committed by Roland Dreier
On a 64-bit architecture, if ib_umem_get() is called with a size value that is so big that npages is negative when cast to int, then the length of the page list passed to get_user_pages(), namely

    min_t(int, npages, PAGE_SIZE / sizeof (struct page *))

will be negative, and get_user_pages() will immediately return 0 (at least since 900cf086, "Be more robust about bad arguments in get_user_pages()"). This leads to an infinite loop in ib_umem_get(), since the code boils down to:

    while (npages) {
        ret = get_user_pages(...);
        npages -= ret;
    }

Fix this by taking the minimum as unsigned longs, so that the value of npages is never truncated.

The impact of this bug isn't too severe, since the value of npages is checked against RLIMIT_MEMLOCK, so a process would need to have an astronomical limit or have CAP_IPC_LOCK to be able to trigger this, and such a process could already cause lots of mischief. But it does let buggy userspace code cause a kernel lock-up; for example I hit this with code that passes a negative value into a memory registration function where it is promoted to a huge u64 value.

Cc: <stable@kernel.org>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
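The essence of the fix is to take the minimum as unsigned long so that a huge npages is never truncated to a negative int; schematically (the trailing get_user_pages() arguments are abbreviated with placeholder variable names):

    /* Before (schematic): with npages huge, the min_t(int, ...) result is
     * negative, get_user_pages() returns 0, and while (npages) never ends. */
    ret = get_user_pages(current, current->mm, cur_base,
                         min_t(int, npages, PAGE_SIZE / sizeof(struct page *)),
                         write, force, page_list, vma_list);

    /* After: take the minimum in unsigned long so the count stays sane. */
    ret = get_user_pages(current, current->mm, cur_base,
                         min_t(unsigned long, npages,
                               PAGE_SIZE / sizeof(struct page *)),
                         write, force, page_list, vma_list);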
-
Committed by Ralph Campbell
SM/SMA traps received by the ipath driver should be forwarded to the SM if it is running on the host. The ib_ipath driver was incorrectly replying with "bad method."

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Joachim Fenkes
Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 27 May 2008, 2 commits
-
-
Committed by Ralph Campbell
The driver supports a few features (RNR NAK, port active event, SRQ resize) that were not reported in the device capability flags. This patch fixes that.

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
Gabriel C <nix.or.die@googlemail.com> pointed out that when the x86 bitops are updated to operate on unsigned long, the code in sdma_abort_task() will produce warnings:

    drivers/infiniband/hw/ipath/ipath_sdma.c: In function 'sdma_abort_task':
    drivers/infiniband/hw/ipath/ipath_sdma.c:267: warning: passing argument 2 of 'constant_test_bit' from incompatible pointer type

and so on, because it uses test_bit() to operate on a u64 value (returned by ipath_read_kreg64() for a hardware register).

Fix up these warnings by converting the test_bit() operations to &ing with appropriate symbolic defines of the bits within the hardware register. This has the benign side-effect of making the code more self-documenting as well.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
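Schematically, the change replaces bit-number tests on the raw register value with explicit mask tests (the helper, flag names and bit value below are placeholders, not the actual ipath names):

    u64 status = read_sdma_status_register(dd);     /* placeholder helper */

    /* Before: &status is a u64 *, but test_bit() wants an unsigned long *. */
    if (test_bit(SDMA_ABORT_BIT, (unsigned long *)&status))
        handle_abort();

    /* After: a plain mask test avoids the warning and documents the bit. */
    #define SDMA_STATUS_ABORT_IN_PROGRESS (1ULL << 0)   /* assumed bit */
    if (status & SDMA_STATUS_ABORT_IN_PROGRESS)
        handle_abort();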
-
- 24 May 2008, 1 commit
-
-
Committed by Dave Olson
If a low-level driver returns IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_CONSUMED, handle_outgoing_dr_smp() doesn't clean up properly. The fix is to kfree the local data and break, rather than falling through. This was observed with the ipath driver, but could happen with any driver.

This fixes <https://bugs.openfabrics.org/show_bug.cgi?id=1027>.

Signed-off-by: Dave Olson <dave.olson@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 21 May 2008, 3 commits
-
-
Committed by Jack Morgenstein
We saw a kernel oops in our regression testing when a multicast "join finish" occurred just after the interface was taken down -- this is <https://bugs.openfabrics.org/show_bug.cgi?id=1040>. The test randomly causes the HCA physical port to go down then up.

The cause of this is that ipoib_mcast_join_finish() processing happens just after ipoib_mcast_dev_flush() was invoked (in which case the broadcast pointer is NULL). This patch tests for and handles the case where priv->broadcast is NULL.

Cc: <stable@kernel.org>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
When creating a kernel QP where the consumer asked for a send queue with lots of scatter/gather entries, set_kernel_sq_size() incorrectly returned an error if the send queue stride is larger than the hardware's maximum send work request descriptor size. This is not a problem; the only issue is to make sure that the actual descriptors used do not overflow the maximum descriptor size, so check this instead.

Clamp the returned max_send_sge value to be no bigger than what query_device returns for the max_sge to avoid confusing hapless users, even if the hardware is capable of handling a few more s/g entries.

This bug caused NFS/RDMA mounts to fail when the server adapter used the mlx4 driver.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Greg Kroah-Hartman
There is a race from when a device is created with device_create() and then the drvdata is set with a call to dev_set_drvdata() in which a sysfs file could be open, yet the drvdata will be NULL, causing all sorts of bad things to happen. This patch fixes the problem by using the new function, device_create_drvdata().

Cc: Kay Sievers <kay.sievers@vrfy.org>
Reviewed-by: Roland Dreier <rolandd@cisco.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
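Schematically, the race and the fix look like this (the class, device name format and drvdata variable are placeholders):

    /* Racy: between device_create() and dev_set_drvdata(), a sysfs attribute
     * can already be opened and will read back NULL drvdata. */
    dev = device_create(umad_class, parent, devt, "umad%d", port_num);
    dev_set_drvdata(dev, port);

    /* Fixed: drvdata is supplied atomically as part of device creation. */
    dev = device_create_drvdata(umad_class, parent, devt, port,
                                "umad%d", port_num);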
-
- 17 May 2008, 3 commits
-
-
Committed by Roland Dreier
The mthca driver returns the maximum number of scatter/gather entries reported by the firmware as the max_sge value when device properties are queried. However, the firmware also reports a limit on the maximum descriptor size allowed, and because mthca takes into account the worst case send request overhead when checking whether to allow a QP to be created, the largest number of scatter/gather entries that can be used with mthca may be limited by the maximum descriptor size rather than just by the actual s/g entry limit.

This means that applications cannot actually create QPs with max_send_sge equal to the limit returned by ib_query_device(). Fix this by checking if the maximum descriptor size imposes a lower limit and if so returning that lower limit.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
    drivers/infiniband/hw/cxgb3/iwch_qp.c: In function 'iwch_post_send':
    drivers/infiniband/hw/cxgb3/iwch_qp.c:232: warning: 't3_wr_flit_cnt' may be used uninitialized in this function

This is what akpm describes as "the dopey gcc-doesn't-know-that-foo(&var)-writes-to-var problem."

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
-
Committed by Andrew Morton
    drivers/infiniband/hw/mlx4/qp.c: In function 'mlx4_ib_post_send':
    drivers/infiniband/hw/mlx4/qp.c:1460: warning: 'seglen' may be used uninitialized in this function

This is the dopey gcc-doesn't-know-that-foo(&var)-writes-to-var problem.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 16 May 2008, 2 commits
-
-
Committed by Ralph Campbell
When I fixed the RC receive completion opcode in 2bfc8e9e ("IB/ipath: Return the correct opcode for RDMA WRITE with immediate"), I forgot to fix UC, which had the same problem of RDMA write with immediate returning the wrong opcode.

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
Commit f018c7e1 ("IB/ipath: Change ipath_devdata.ipath_sdma_status to be unsigned long") changed ipath_sdma_status to be unsigned long, but left a few debug messages that printed it out with a %016llx format, which generates the warnings

    drivers/infiniband/hw/ipath/ipath_sdma.c:348: warning: format '%016llx' expects type 'long long unsigned int', but argument 3 has type 'long unsigned int'
    drivers/infiniband/hw/ipath/ipath_sdma.c:618: warning: format '%016llx' expects type 'long long unsigned int', but argument 3 has type 'long unsigned int'

Fix this by changing the format used to print out the value to %08lx (8 hex digits are now sufficient, because the highest bit used is 31).

Warnings reported by Randy Dunlap <randy.dunlap@oracle.com>.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
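The resulting usage, schematically (the message text is illustrative):

    unsigned long status = dd->ipath_sdma_status;   /* now unsigned long */

    /* %016llx expects unsigned long long, so gcc warns for an unsigned long
     * argument; %08lx matches the type, and 8 hex digits suffice because
     * the highest bit used is 31. */
    printk(KERN_DEBUG "sdma_status: 0x%08lx\n", status);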
-
- 14 May 2008, 6 commits
-
-
Committed by Steve Wise
cxio_flush_sq() was failing to wrap around the software send queue, causing garbage completion entries on a flush operation.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
Andrew Morton <akpm@linux-foundation.org> pointed out that bitops should take an unsigned long * arg. However, the ipath driver was doing bitops on struct ipath_devdata.ipath_sdma_status, which is u64. Change this member to unsigned long to avoid tons of warnings when x86 fixes the bitops to take unsigned long * instead of void *.

Also, change the IPATH_SDMA_RUNNING and IPATH_SDMA_SHUTDOWN bit numbers to 30 and 31 (instead of 62 and 63) so that we're not setting another booby trap for someone who tries to make ipath work on a 32-bit architecture.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
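Schematically (the field and flag names are taken from the text; everything else is illustrative):

    struct ipath_devdata {
        /* ... */
        unsigned long ipath_sdma_status;    /* was u64; bitops want unsigned long * */
    };

    /* Bit numbers kept below 32 so the flags also work on a 32-bit arch. */
    #define IPATH_SDMA_RUNNING  30
    #define IPATH_SDMA_SHUTDOWN 31

    set_bit(IPATH_SDMA_RUNNING, &dd->ipath_sdma_status);
    if (test_bit(IPATH_SDMA_SHUTDOWN, &dd->ipath_sdma_status))
        start_shutdown(dd);                 /* placeholder */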
-
Committed by Pavel Emelyanov
The official reason is "with the presence of pid namespaces in the kernel, using pid_t-s inside one is no longer safe." But the reason I fix this right now is the following: about a month ago (when 2.6.25 was not yet released) there was still one last caller of the to-be-deprecated-soon function find_pid() -- the kill_proc() function, which in turn was only used by NFS callback code.

During the last merge window, this last caller was finally eliminated by some NFS patch(es), and I was about to finally kill kill_proc() and find_pid(), but found that I was late: kill_proc() is now called from the ipath driver since commit 58411d1c ("IB/ipath: Head of Line blocking vs forward progress of user apps").

So here's a patch that fixes this code to use struct pid * and (!) the kill_pid routine.

Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
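The shape of the conversion (the signal number and the point at which the struct pid reference is taken are placeholders, not details from the patch):

    /* Before: pid_t based, which is unsafe once pid namespaces are in use. */
    kill_proc(user_pid_nr, SIGKILL, 1);

    /* After: record a struct pid reference up front and signal through it. */
    struct pid *user_pid = get_pid(task_pid(current));  /* taken when the user attaches */

    kill_pid(user_pid, SIGKILL, 1);
    put_pid(user_pid);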
-
Committed by Ralph Campbell
If an out of sequence RDMA read response middle or last packet is received, we should only resend the RDMA read request on the first out of sequence packet and drop subsequent out of sequence packets; otherwise, we get "too many retries".

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Ralph Campbell
The send DMA hardware queue voided a number of prior assumptions about when a send is complete, which led to completions being generated out of order. There were also a number of locking issues when switching the QP to the error or reset states, and we implement the IB_QPS_SQD state.

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Ralph Campbell
When errors are detected in RC, the QP should transition to the IB_QPS_ERR state, not the IB_QPS_SQE state. Also, when the error is on the responder side, the receive work completion error was incorrect (remote vs. local).

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-