- 13 Feb 2007, 4 commits
-
By Michael S. Tsirkin
We allocate the MTT table with alloc_pages() and then do pci_map_sg(), so we must call pci_dma_sync_sg() after the CPU writes to the MTT table. This works since the device will never write MTTs on mem-free HCAs, once we get rid of the use of the WRITE_MTT firmware command. This change is needed to make that work, and is an improvement for now, since it gives FMRs a chance at working. For MPTs, both the device and the CPU might write there, so we must allocate DMA coherent memory for these. Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
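The streaming-DMA rule this fix applies is worth spelling out. Below is a minimal sketch of the pattern (not the actual mthca code; mtt_table_update() and npages are hypothetical): pages mapped with pci_map_sg() are filled in by the CPU and then synced toward the device before the HCA is allowed to read them.

```c
#include <linux/pci.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

/* Sketch only: map an MTT table for the device and publish CPU writes to it. */
static int mtt_table_update(struct pci_dev *pdev, struct scatterlist *sg,
			    int npages)
{
	int nents = pci_map_sg(pdev, sg, npages, PCI_DMA_TODEVICE);

	if (!nents)
		return -ENOMEM;

	/* ... CPU fills in the MTT entries through its kernel mapping ... */

	/* Make the CPU writes visible to the device before it reads them. */
	pci_dma_sync_sg_for_device(pdev, sg, npages, PCI_DMA_TODEVICE);
	return 0;
}
```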
-
By Michael S. Tsirkin
MTTs are allocated in non-cache-coherent memory, so we must give reserved MTTs their own cache line, to prevent the device and the CPU from writing into the same cache line at the same time. Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
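As a rough illustration of the idea (assuming a hypothetical MTT_ENTRY_SIZE and not reflecting the actual mthca limits code), the reserved-entry count can be rounded up to a whole number of cache lines with the stock ALIGN() and dma_get_cache_alignment() helpers, so driver-written entries never share a cache line with reserved ones:

```c
#include <linux/kernel.h>	/* ALIGN() */
#include <linux/dma-mapping.h>	/* dma_get_cache_alignment() */

#define MTT_ENTRY_SIZE 8	/* hypothetical size of one MTT entry, in bytes */

/* Round the firmware's reserved-MTT count up to a cache-line boundary. */
static unsigned int align_reserved_mtts(unsigned int reserved)
{
	return ALIGN(reserved * MTT_ENTRY_SIZE,
		     dma_get_cache_alignment()) / MTT_ENTRY_SIZE;
}
```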
-
By Michael S. Tsirkin
The reserved_mtts field has a different meaning on Tavor and Arbel, so we are wasting MTT entries on mem-free HCAs. Fix the Arbel case to match the Tavor semantics. Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Steve Wise
Add an RDMA/iWARP driver for the Chelsio T3 1GbE and 10GbE adapters. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 11 Feb 2007, 7 commits
-
By Sean Hefty
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Sean Hefty
Randomize the starting port number and avoid reusing port values immediately after they are closed. Instead, keep track of the last port value used and increment it every time a new port number is assigned, to better match the behavior of other port spaces. Signed-off-by: Roland Dreier <rolandd@cisco.com>
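A minimal userspace-style sketch of the allocation policy described here (not the kernel implementation; PORT_MIN/PORT_MAX and the helpers are made up): pick a random starting point once, then keep handing out the next value so that a port released a moment ago is not reused right away.

```c
#include <stdint.h>
#include <stdlib.h>

#define PORT_MIN 49152
#define PORT_MAX 65535

static uint16_t next_port;	/* last value handed out */

/* Seed the port space at a random point in the range (call once). */
static void port_space_init(void)
{
	next_port = PORT_MIN + rand() % (PORT_MAX - PORT_MIN + 1);
}

/* Hand out the next port, wrapping within the range. */
static uint16_t port_alloc(void)
{
	if (next_port >= PORT_MAX)
		next_port = PORT_MIN;
	else
		next_port++;
	return next_port;	/* caller still checks for in-use collisions */
}
```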
-
By Akinobu Mita
Percpu data is not freed on module unloading. Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Christoph Raisch <raisch@de.ibm.com> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
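For reference, the general pattern the fix implies looks roughly like this (the struct and symbol names are hypothetical, not the ehca ones): every alloc_percpu() needs a matching free_percpu() in the module exit path.

```c
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/errno.h>

struct my_stats { unsigned long completions; };	/* hypothetical per-CPU data */

static struct my_stats *stats;	/* one instance per CPU */

static int __init example_init(void)
{
	stats = alloc_percpu(struct my_stats);
	return stats ? 0 : -ENOMEM;
}

static void __exit example_exit(void)
{
	free_percpu(stats);	/* the leak being fixed: don't skip this */
}

module_init(example_init);
module_exit(example_exit);
```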
-
By David Howells
For some reason gcc-3.4.5 on sparc64 emits: WARNING: "____ilog2_NaN" [drivers/infiniband/hw/mthca/ib_mthca.ko] undefined! Points to note: (1) The asm volatile flush/flushw are just markers for viewing what comes out in the assembly; removing them has no effect on the result. (2) Changing almost anything else in dwh__mthca_arbel_init_srq_context() or dwh__mthca_alloc_srq() makes the problem go away. The compiler command line issued by the kernel build is:
/opt/crosstool/gcc-3.4.5-glibc-2.3.6/sparc64-unknown-linux-gnu/bin/sparc64-unknown-linux-gnu-gcc -fno-strict-aliasing -fno-common -Os -m64 -mno-fpu -mcpu=ultrasparc -mcmodel=medlow -ffixed-g4 -ffixed-g5 -fcall-used-g7 -Wa,--undeclared-regs -pg -fno-omit-frame-pointer -fno-optimize-sibling-calls -fasynchronous-unwind-tables -g -c -o drivers/infiniband/hw/mthca/.tmp_mthca_srq.o drivers/infiniband/hw/mthca/mthca_srq.c
This can be reduced to the following while still reproducing the problem:
/opt/crosstool/gcc-3.4.5-glibc-2.3.6/sparc64-unknown-linux-gnu/bin/sparc64-unknown-linux-gnu-gcc -m64 -c -o drivers/infiniband/hw/mthca/mthca_srq.o drivers/infiniband/hw/mthca/mthca_srq.c -Os
Removing -Os, or changing it to -O or any of -O0 through -O6, makes the problem go away. This patch to the kernel code fixes the problem. Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Michael S. Tsirkin
The following patch adds experimental support for IPoIB connected mode, as defined by the draft from the IETF ipoib working group. The idea is to increase performance by raising the MTU beyond the maximum of 2K (theoretically 4K) supported by IPoIB on top of UD. With this code, I'm able to get 800 MByte/sec or more with netperf without options on a Mellanox 4x back-to-back DDR system. Some notes on the code:
1. SRQ is used for scalability to large cluster sizes.
2. Only RC connections are used (UC does not support SRQ at the moment).
3. The retry count is set to 0, since the spec draft warns against retries.
4. Each connection is used for data transfers in only one direction, so each connection is either active (TX) or passive (RX). Two sides that want to communicate create two connections.
5. Each active (TX) connection has a separate CQ for send completions; this keeps the code simple, without CQ resize and other tricks.
6. To detect stale passive-side connections (where the remote side is down), we keep an LRU list of passive connections (updated once per second per connection) and destroy a connection after it has been unused for several seconds. The LRU rule makes it possible to avoid scanning connections that have recently been active.
Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ahmed S. Darwish
Use the ARRAY_SIZE() macro already defined in kernel.h instead of open-coding equivalent code. Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
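The cleanup pattern, in generic form (the mtu_table array here is hypothetical):

```c
#include <linux/kernel.h>	/* ARRAY_SIZE() */

static const int mtu_table[] = { 256, 512, 1024, 2048, 4096 };

/* Before: sizeof(mtu_table) / sizeof(mtu_table[0]) open-coded at each use. */
static int mtu_table_entries(void)
{
	return ARRAY_SIZE(mtu_table);
}
```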
-
By Roland Dreier
When clearing the ib_ah_attr parameter in to_ib_ah_attr(), use sizeof *ib_ah_attr instead of sizeof *path. Pointed out by Jack Morgenstein <jackm@mellanox.co.il>. Signed-off-by: Roland Dreier <rolandd@cisco.com>
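A generic illustration of the bug class being fixed (the struct names are invented, not the mthca ones): the memset() must use the size of the object being cleared, not the size of the source structure the function reads from.

```c
#include <string.h>

struct path_info   { int a; };		/* hypothetical source type        */
struct ah_attr_out { int x, y, z; };	/* hypothetical, larger, output type */

static void to_ah_attr(struct ah_attr_out *attr, const struct path_info *path)
{
	/* Buggy form: memset(attr, 0, sizeof *path);  -- clears too few bytes. */
	memset(attr, 0, sizeof *attr);	/* clear the object we actually own */
	attr->x = path->a;
	/* ... */
}
```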
-
- 10 Feb 2007, 1 commit
-
By Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 Feb 2007, 1 commit
-
By Greg Kroah-Hartman
This lets the network core have the ability to handle suspend/resume issues, if it wants to. Thanks to Frederik Deweerdt <frederik.deweerdt@gmail.com> for the ARM driver fixes. Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 05 Feb 2007, 5 commits
-
By Hoang-Nam Nguyen
Remove prototypes for functions that don't exist. Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Hoang-Nam Nguyen
This patch removes do_mmap() from ehca:
- Call remap_pfn_range() for the hardware register block.
- Use vm_insert_page() to register memory allocated for completion queues and queue pairs.
- The actual mmap() call/trigger is now controlled by user space, i.e. libehca.
Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
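A rough sketch of the two kernel helpers mentioned above, in a made-up mmap handler (this is not the ehca code; the function names and arguments are hypothetical): remap_pfn_range() maps a physical register block, vm_insert_page() maps kernel pages backing a queue.

```c
#include <linux/mm.h>

/* Map an MMIO register block at 'phys_regs' into the caller's VMA. */
static int example_mmap_regs(struct vm_area_struct *vma, unsigned long phys_regs)
{
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	return remap_pfn_range(vma, vma->vm_start, phys_regs >> PAGE_SHIFT,
			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
}

/* Map one kernel page of a CQ/QP ring; real code loops over all pages. */
static int example_mmap_queue(struct vm_area_struct *vma, struct page *queue_page)
{
	return vm_insert_page(vma, vma->vm_start, queue_page);
}
```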
-
By Steve Wise
The iWARP connection manager uses the ib_addr services to do route resolution (neighbour discovery in the IP world). The ib_addr netevent callback routine, however, currently only acts on InfiniBand neighbour updates. It needs to act on Ethernet neighbour updates as well. This patch simply removes the filtering on device type altogether and triggers on any neighbour update where the nud_type is valid, which also simplifies the code somewhat. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ishai Rabinovitz
When there is a call to send_tsk_mgmt, SRP posts a send and waits 5 seconds for a response. When the QP is in the error state it is obvious that there will be no response, so waiting is pointless. In fact, the timeout causes SRP to take a long time to reconnect after a QP error occurs (each abort and each reset_device calls send_tsk_mgmt, which waits for the timeout). This patch solves the problem by identifying the failure and returning an immediate error code. Signed-off-by: Ishai Rabinovitz <ishai@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Michael S. Tsirkin
struct ib_wc currently includes only the local QP number: this matches the IB spec, but seems mostly useless. The following patch replaces this with a pointer to the QP itself, and updates all low-level drivers and all users. This has the following advantages:
- Ability to get a per-QP context through wc->qp->qp_context.
- Existing drivers already have the QP pointer ready when polling a CQ, so this change actually saves a tiny bit (an extra memory read) on the data path. (For ehca it would actually be expensive to find the QP pointer when polling a CQ, but ehca does not support SRQ, so we can leave wc->qp as NULL for ehca.)
- Users that need the QP number can still get it through wc->qp->qp_num.
Use case: in the IPoIB connected mode code, I have a common CQ shared by multiple QPs. To track connection usage, I need a way to get at some per-QP context upon completion, and I would like to avoid allocating a context object per work request just to stick a QP pointer into it. With this code, I can just use wc->qp->qp_context. Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
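A minimal sketch of how a consumer would use the new field (struct conn and drain_cq() are hypothetical, not the IPoIB code): poll the shared CQ and recover the per-QP context directly from the completion.

```c
#include <rdma/ib_verbs.h>

struct conn { int id; /* ... */ };	/* hypothetical per-connection state */

static void drain_cq(struct ib_cq *cq)
{
	struct ib_wc wc;

	while (ib_poll_cq(cq, 1, &wc) > 0) {
		/* Per-QP context set at QP creation time (qp_init_attr.qp_context). */
		struct conn *c = wc.qp->qp_context;

		/* ... handle the completion for connection c ... */
		(void) c;
	}
}
```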
-
- 23 Jan 2007, 3 commits
-
By Hoang-Nam Nguyen
The lock is taken with _irqsave and hence must be released with _irqrestore on all paths. Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
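The rule being enforced, shown on a generic example (none of this is the ehca code): a lock taken with spin_lock_irqsave() must be released with spin_unlock_irqrestore() on every path out, including error paths.

```c
#include <linux/spinlock.h>
#include <linux/errno.h>

static DEFINE_SPINLOCK(lock);
static int resource_busy;	/* hypothetical state protected by 'lock' */

static int grab_resource(void)
{
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(&lock, flags);
	if (resource_busy) {
		ret = -EBUSY;
		goto out;	/* the error path must still restore flags */
	}
	resource_busy = 1;
out:
	spin_unlock_irqrestore(&lock, flags);
	return ret;
}
```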
-
By Hoang-Nam Nguyen
Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ishai Rabinovitz
Check whether the kmalloc() in match_strdup() was successful, and bail out of looking at the token if it failed. Signed-off-by: Ishai Rabinovitz <ishai@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
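A sketch of the check in a made-up option-parsing helper (not the SRP code): match_strdup() allocates with kmalloc() and can return NULL, so bail out before touching the result.

```c
#include <linux/parser.h>
#include <linux/slab.h>
#include <linux/errno.h>

/* 'args' comes from match_token() during option parsing (context omitted). */
static int handle_id_option(substring_t *args, char **id_out)
{
	char *p = match_strdup(args);

	if (!p)			/* the kmalloc inside match_strdup() failed */
		return -ENOMEM;

	*id_out = p;		/* caller kfree()s it later */
	return 0;
}
```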
-
- 10 Jan 2007, 2 commits
-
By Dotan Barak
If a QP being queried is in the RESET state, don't execute the QUERY_QP firmware command (because it will fail). Signed-off-by: Dotan Barak <dotanb@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Hoang-Nam Nguyen
Here is a patch for ehca to use the proper flag, i.e. GFP_ATOMIC or GFP_KERNEL as appropriate, when calling get_zeroed_page(), to prevent "Bug: scheduling while atomic...". This error does not cause a kernel panic but makes IPoIB unusable afterwards. It is reproducible on 2.6.20-rc4 if one takes the interface down (ifconfig down) during a flood ping test; I have not observed this error in earlier releases, including 2.6.20-rc1. The error occurs when a QP event/IRQ is received and the ehca event handler allocates a control block/page to obtain the HCA error data block. Using GFP_ATOMIC when in interrupt context prevents this issue. Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
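A rough sketch of the allocation-flag choice (not the actual ehca helper): use GFP_ATOMIC whenever the caller may be running in interrupt or other atomic context, GFP_KERNEL otherwise.

```c
#include <linux/gfp.h>
#include <linux/hardirq.h>	/* in_interrupt() */

/*
 * Allocate a zeroed control page; never sleep if we might be in interrupt
 * context, as in the event-handler case described above.
 */
static unsigned long get_cb_page(void)
{
	return get_zeroed_page(in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
}
```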
-
- 08 Jan 2007, 5 commits
-
By Jack Morgenstein
According to the Tavor and Arbel programmer's reference manuals, the number of bytes transferred is not provided in the byte_cnt field of the CQ entry for atomic operation completions. For atomic operations, the number of bytes transferred is always 8 (when the status is "success"), and this constant value should always be used by the driver in the ib_wc entry returned, rather than taken from the CQE. Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Sean Hefty
There's a problem with how RDMA CM events are reported to userspace that can lead to application crashes. When a new connection request arrives, a context for the connection is allocated in the kernel, and the connection event is then reported to userspace. The userspace library retrieves the event and allocates its own context for the connection; the userspace context is associated with the kernel's context when accepting. This allows the kernel to hand the userspace context back with later events. A problem occurs if a second event for the same connection arrives before the user has had a chance to call accept: the userspace context has not yet been set, which causes librdmacm to crash. (This has been seen when the app takes too long to call accept, resulting in the remote side timing out and rejecting the connection.) Fix this by ignoring events for new connections until userspace has set their context. This can only happen if an error occurs on a new connection before the user accepts it, which is okay, since the accept will just fail later. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Sean Hefty
We discard new connection requests while the listen backlog is full, but leak a struct ucma_event in the process. Free the structure in this case. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Steve Wise
The iWARP CM should report timeouts as event RDMA_CM_EVENT_UNREACHABLE, not event RDMA_CM_EVENT_REJECTED. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Erez Zilber
iSER limits the number of outstanding PDUs to send. When this threshold is reached, it should return an error code (-ENOBUFS) instead of setting the suspend_tx bit (which should be used only by libiscsi). Signed-off-by: Erez Zilber <erezz@voltaire.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 05 Jan 2007, 1 commit
-
By Michael S. Tsirkin
mthca_table_find() will return the wrong address when the table entry being searched for is exactly at the beginning of a sglist entry (other than the first), because it uses >= when it should use >. Example: assume we have two entries in the scatterlist, 4K each, and the offset is 4K. The current code returns first entry + 4K when we really want the second entry. In particular, this means mapping an FMR on a mem-free HCA may end up writing the page table into the wrong place, leading to memory corruption and also causing the HCA to use an incorrect address translation table. Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
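A simplified, self-contained illustration of the boundary bug (this is not the mthca code; struct chunk and chunk_find() are invented): the comparison must be strict, otherwise an offset that lands exactly on a chunk boundary resolves to the end of the previous chunk instead of the start of the next one.

```c
#include <stddef.h>

struct chunk { unsigned char *buf; size_t len; };

/* Return a pointer to byte 'offset' within a list of chunks, or NULL. */
static unsigned char *chunk_find(struct chunk *c, int nchunks, size_t offset)
{
	size_t base = 0;
	int i;

	for (i = 0; i < nchunks; i++) {
		/*
		 * Must be '<', not '<=': with two 4K chunks and offset == 4096,
		 * '<=' would return chunk 0's one-past-the-end address instead
		 * of the start of chunk 1 -- the bug described above.
		 */
		if (offset < base + c[i].len)
			return c[i].buf + (offset - base);
		base += c[i].len;
	}
	return NULL;
}
```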
-
- 31 Dec 2006, 1 commit
-
By Michael S. Tsirkin
Commit bed8bdfd ("IB: kmemdup() cleanup") introduced one bad conversion to kmemdup() in mthca_alloc_fmr(), where the structure allocated and the structure copied are not the same size. Revert this back to the original kmalloc()/memcpy() code. Reported-by: Dotan Barak <dotanb@mellanox.co.il> Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Signed-off-by: Roland Dreier <roland@digitalvampire.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
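A generic illustration of why kmemdup() is the wrong tool when the allocation size and the copy size differ (struct small/struct big are hypothetical, not the mthca FMR structures):

```c
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

struct small { u32 key; };			/* data to copy from        */
struct big   { struct small s; u64 extra[8]; };	/* larger object to allocate */

static struct big *dup_into_big(const struct small *src)
{
	struct big *b;

	/* Wrong: kmemdup(src, sizeof *src, GFP_KERNEL) allocates only sizeof(struct small). */
	b = kmalloc(sizeof *b, GFP_KERNEL);
	if (!b)
		return NULL;
	memcpy(&b->s, src, sizeof *src);	/* copy only what src provides */
	/* remaining fields of *b are filled in by the caller */
	return b;
}
```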
-
- 16 Dec 2006, 3 commits
-
By Roland Dreier
mthca_device_mutex can be initialized at compile time with DEFINE_MUTEX() rather than by explicitly calling mutex_init(). This saves a bit of text and shrinks the source by a line, so we may as well do it. Signed-off-by: Roland Dreier <rolandd@cisco.com>
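The cleanup in generic form:

```c
#include <linux/mutex.h>

/* After: compile-time initialization, no init call needed. */
static DEFINE_MUTEX(device_mutex);

/*
 * Before (equivalent, but one more declaration and a runtime call):
 *
 *	static struct mutex device_mutex;
 *	...
 *	mutex_init(&device_mutex);
 */
static void use_it(void)
{
	mutex_lock(&device_mutex);
	/* ... */
	mutex_unlock(&device_mutex);
}
```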
-
By Leonid Arsh
Add module parameters that enable setting some of the HCA profile values, such as the number of QPs, CQs, etc. Signed-off-by: Leonid Arsh <leonida@voltaire.com> Signed-off-by: Moni Shoua <monis@voltaire.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Roland Dreier
struct srp_device.fmr_page_mask was unsigned long, which means that the top part of addresses above 4G was being chopped off on 32-bit architectures, and of course nothing good happens when data from SRP targets is DMAed to the wrong place. Fix this by changing fmr_page_mask to u64, to match the addresses actually used by IB devices. Thanks to Brian Cain <Brian.Cain@ge.com> and David McMillen <davem@systemfabricworks.com> for help diagnosing the bug and testing the fix. Signed-off-by: Roland Dreier <rolandd@cisco.com>
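A small illustration of the truncation (the page size and names are made up; only the mask's type matters): with a 32-bit unsigned long mask, ANDing a 64-bit DMA address clears its upper 32 bits.

```c
#include <linux/types.h>

#define FMR_PAGE_SHIFT 21	/* hypothetical 2MB FMR page size */

/* Keep the mask 64 bits wide so the high half of the address survives. */
static u64 fmr_page_mask = ~((u64) (1 << FMR_PAGE_SHIFT) - 1);

/*
 * On a 32-bit box, "unsigned long mask = ~((1UL << FMR_PAGE_SHIFT) - 1)" is
 * only 0xffe00000; masking a 64-bit IB address with it would zero the upper
 * 32 bits, which is the corruption described above.
 */
static u64 fmr_align_addr(u64 dma_addr)
{
	return dma_addr & fmr_page_mask;
}
```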
-
- 13 Dec 2006, 7 commits
-
By Roland Dreier
Move the initialization of ipoib_neigh's skb_queue into ipoib_neigh_alloc(), since commit 2745b5b7 ("IPoIB: Fix skb leak when freeing neighbour") iterates over the skb_queue to free any packets left over when freeing the ipoib_neigh structure. This fixes a crash when freeing ipoib_neigh structures allocated in ipoib_mcast_send(), which otherwise don't have their skb_queue initialized. Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ralph Campbell
Convert iSER to use the new verbs DMA mapping functions for kernel verbs consumers. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ralph Campbell
Convert SRP to use the new verbs DMA mapping functions for kernel verbs consumers. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ralph Campbell
Convert IPoIB to use the new DMA mapping functions for kernel verbs consumers. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ralph Campbell
Convert code in core/ to use the new DMA mapping functions for kernel verbs consumers. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
By Ralph Campbell
This patch implements the interposing DMA mapping functions to allow support for IOMMUs and remove the dependence on phys_to_virt() and bus_to_virt(). Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
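From a kernel verbs consumer's point of view, the conversions in the commits above boil down to mapping buffers through the ib_device rather than the underlying PCI device. A minimal hedged sketch (map_for_send() and its arguments are hypothetical, not code from any of the converted drivers):

```c
#include <rdma/ib_verbs.h>
#include <linux/errno.h>

/* Map 'buf' for sending through the interposing verbs DMA API. */
static int map_for_send(struct ib_device *ibdev, void *buf, size_t len, u64 *dma)
{
	*dma = ib_dma_map_single(ibdev, buf, len, DMA_TO_DEVICE);
	if (ib_dma_mapping_error(ibdev, *dma))
		return -ENOMEM;

	/* Unmap later with ib_dma_unmap_single(ibdev, *dma, len, DMA_TO_DEVICE). */
	return 0;
}
```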
-
By Sean Hefty
Export the RDMA CM interfaces to userspace via a misc device. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-