- 26 January 2008, 40 commits
-
Committed by Steve Wise
Correctly work around T3A issues by checking "hwtype != T3A" instead of "hwtype == T3B". This will be needed for new hardware types. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
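A tiny stand-alone illustration of why the inverted check scales to newer parts; the enum values and helper name below are made up, only the shape of the test matters:

```c
#include <stdio.h>

/* Illustrative stand-ins for the driver's hardware-type enum; only T3A
 * needs the workaround, so test for it directly instead of listing the
 * chips that do not need it. */
enum t3_type { T3A, T3B, T3C_FUTURE };

static int needs_t3a_workaround(enum t3_type hwtype)
{
    return hwtype == T3A;   /* equivalent to skipping it when hwtype != T3A */
}

int main(void)
{
    printf("T3A: %d  T3B: %d  future part: %d\n",
           needs_t3a_workaround(T3A),
           needs_t3a_workaround(T3B),
           needs_t3a_workaround(T3C_FUTURE));
    return 0;
}
```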
-
Committed by Jan Engelhardt
Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Jan Engelhardt
Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Steve Wise
This is needed to support zero-stag properly. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Steve Wise
The existing logic incorrectly maps this buffer list:
  0: addr 0x10001000, size 0x1000
  1: addr 0x10002000, size 0x1000
to this bogus page list:
  0: 0x10000000
  1: 0x10002000
The shift calculation must take into account both the address of the first entry masked by the page_mask and the last address+size rounded up to the next page size. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
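The corrected mapping is easy to check with a small stand-alone calculation. The sketch below is illustrative only (the helper name and the fixed shift of 12 are assumptions, not the cxgb3 code): it rounds the first address down and the last address plus size up to the page size, which yields the two expected 4KB pages for the buffer list above.

```c
#include <stdio.h>
#include <stdint.h>

struct buf { uint64_t addr; uint64_t size; };

/* Hypothetical helper: build a page list for a buffer list by rounding the
 * first address down and the last address + size up to the page size. */
static int build_page_list(const struct buf *bl, int n, int shift,
                           uint64_t *pages, int max_pages)
{
    uint64_t mask  = (1ULL << shift) - 1;
    uint64_t start = bl[0].addr & ~mask;                               /* round down */
    uint64_t end   = (bl[n - 1].addr + bl[n - 1].size + mask) & ~mask; /* round up   */
    int npages = (int)((end - start) >> shift);

    if (npages > max_pages)
        return -1;
    for (int i = 0; i < npages; i++)
        pages[i] = start + ((uint64_t)i << shift);
    return npages;
}

int main(void)
{
    struct buf bl[] = {                   /* buffer list from the commit text */
        { 0x10001000, 0x1000 },
        { 0x10002000, 0x1000 },
    };
    uint64_t pages[8];
    int n = build_page_list(bl, 2, 12, pages, 8);

    for (int i = 0; i < n; i++)           /* prints 0x10001000, 0x10002000 */
        printf("page %d: 0x%llx\n", i, (unsigned long long)pages[i]);
    return 0;
}
```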
-
Committed by Steve Wise
- For kernel mode CQs, call the event notification handler when flushing.
- Flush the QP when moving from RTS -> CLOSING.
- Fix the logic that identifies a kernel mode QP.
Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Ralph Campbell
Move the increment of s_hdrwords into the existing if block that tests whether we are doing a send with immediate data, saving one test of the opcode. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Krishna Kumar
qdisc_run() now tests for queue_stopped() before calling __qdisc_run(), and the same check is done in every iteration of __qdisc_run(), so another check is not required in the driver's xmit routine. This means that ipoib_start_xmit() no longer needs to test netif_queue_stopped(); the test was added to fix earlier kernels, where the networking stack did not guarantee that the xmit method of an LLTX driver would not be called after the queue was stopped, but current kernels do provide this guarantee. To validate, I put a debug message in the TX_BUSY path, and it never hit with 64 threads running overnight, exercising this code a few hundred million times. Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Ralph Campbell
Add new mappings from the port physical state (a HW register value) to the IB SubnGet(PortInfo) port physical state. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Dave Olson
The IBA7220 uses a count-based triggering mechanism and therefore can't use the same bandwidth verification mechanism as older chips. To support the 7220, allow enabling and disabling armlaunch errors on application request. Includes minor robustness improvements as well. Signed-off-by: Dave Olson <dave.olson@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Dave Olson
Clean up some unused header fields, plus minor related cleanup. Signed-off-by: Dave Olson <dave.olson@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Michael Albaugh
The IBA7220 includes many more configurable IB settings. Getting and setting these is now grouped into a pair of chip-specific functions accessed via function pointers. Provide sysfs access to these settings. Signed-off-by: Michael Albaugh <michael.albaugh@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Dave Olson
This adds the new (sometimes empty) chip-specific functions to the older chips and makes the initialization and related functions consistent across all three chips. Signed-off-by: Dave Olson <dave.olson@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Dave Olson
This code has been unused for some time, but still had leftovers from when it was used. Signed-off-by: Dave Olson <dave.olson@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Joachim Fenkes
Some HW revisions of eHCA2 may cause an RC connection to break if they received RDMA Reads over that connection before. This can be prevented by ensuring that, after the first RDMA Read, the QP receives a new RDMA Read every few million link packets. Include code in the driver that inserts an empty (size 0) RDMA Read into the message stream every now and then if the consumer doesn't post them frequently enough. Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
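A rough sketch of what such a workaround can look like with the kernel verbs API; the threshold, the helper name, and where the counter lives are assumptions for illustration, not the actual ehca implementation:

```c
#include <rdma/ib_verbs.h>

/* Hedged sketch: after roughly "a few million" sends on an RC QP that has
 * already serviced an RDMA Read, post an empty (size 0) RDMA Read so the
 * hardware keeps seeing one periodically. */
static void maybe_post_empty_rdma_read(struct ib_qp *qp,
                                       unsigned long *sends_since_read,
                                       u64 remote_addr, u32 rkey)
{
    struct ib_send_wr wr = { };
    struct ib_send_wr *bad_wr;

    if (++(*sends_since_read) < 4000000)     /* illustrative threshold */
        return;

    wr.opcode              = IB_WR_RDMA_READ;
    wr.num_sge             = 0;              /* zero-length read */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (!ib_post_send(qp, &wr, &bad_wr))
        *sends_since_read = 0;
}
```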
-
Committed by Hoang-Nam Nguyen
This patch enhances ehca with the capability to "autodetect" which ports are physically connected. To use this function, the module option nr_ports must be set to -1 (the default is 2, i.e. two ports). This feature is experimental and will be made the default later.
More detail: if the user connects only one port to the switch, the current code requires 1) port one to be connected and 2) the module option nr_ports=1 to be given. If autodetect is enabled, ehca will not wait at creation of the GSI QP for the respective port to become active. Since firmware does not accept modify_qp() while the port is down at initialization, we need to cache all calls to modify_qp() for the SMI/GSI QP and just return a good return code. When a port is activated and we get a PORT_ACTIVE event, we replay the cached modify_qp() parameters and re-trigger any posted recv WRs; only then do we forward the PORT_ACTIVE event to registered clients. The result of this autodetect patch is that all ports will be accessible by users. Depending on their cabling, only those ports that are connected properly will become operable. If a user tries to modify a regular QP of a non-connected port, modify_qp() will fail. Furthermore, ibv_devinfo should show the port state accordingly.
Note that this patch primarily improves the loading behaviour of ehca. If the cable is removed while the driver is operating and plugged in again, firmware will handle that properly by sending an appropriate async event. Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
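The caching idea can be sketched roughly as follows; the cache structure, function names, and the way the port state is passed in are assumptions for illustration, not ehca's actual code:

```c
#include <rdma/ib_verbs.h>
#include <linux/list.h>
#include <linux/slab.h>

/* Hedged sketch: while the port is down, record modify_qp() calls for the
 * SMI/GSI QP and report success; replay them when PORT_ACTIVE arrives. */
struct cached_modqp {
    struct list_head list;
    struct ib_qp_attr attr;
    int attr_mask;
};

static int modify_qp_or_cache(struct ib_qp *qp, struct ib_qp_attr *attr,
                              int attr_mask, struct list_head *cache,
                              bool port_active)
{
    if (!port_active) {
        struct cached_modqp *c = kmalloc(sizeof(*c), GFP_KERNEL);
        if (!c)
            return -ENOMEM;
        c->attr = *attr;
        c->attr_mask = attr_mask;
        list_add_tail(&c->list, cache);   /* replayed on PORT_ACTIVE */
        return 0;                         /* pretend it worked for now */
    }
    return ib_modify_qp(qp, attr, attr_mask);
}

static void replay_cached_modqp(struct ib_qp *qp, struct list_head *cache)
{
    struct cached_modqp *c, *n;

    list_for_each_entry_safe(c, n, cache, list) {
        ib_modify_qp(qp, &c->attr, c->attr_mask);   /* replay in order */
        list_del(&c->list);
        kfree(c);
    }
}
```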
-
Committed by Hoang-Nam Nguyen
Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Hoang-Nam Nguyen
Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Erez Zilber
Add a .change_queue_depth handler to the scsi_host_template in the iSER driver. iscsi_change_queue_depth was added to iscsi_tcp in order to solve the problem of a queue depth that was too high for some targets; it is applicable to iSER as well. Signed-off-by: Erez Zilber <erezz@voltaire.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
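A minimal sketch of the wiring, assuming the shared iscsi_change_queue_depth() handler is exported for reuse; the surrounding template fields are abbreviated, and the names illustrate the pattern rather than reproducing the driver:

```c
#include <scsi/scsi_host.h>
#include <scsi/libiscsi.h>

/* Hedged sketch: hook the generic change_queue_depth handler into the
 * iSER host template so a too-high per-LUN queue depth can be lowered. */
static struct scsi_host_template iscsi_iser_sht = {
    .module             = THIS_MODULE,
    .name               = "iSCSI Initiator over iSER",
    .change_queue_depth = iscsi_change_queue_depth,
    /* ... queuecommand, can_queue, sg_tablesize, etc. unchanged ... */
};
```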
-
Committed by Erez Zilber
Some RDMA CM events are not supported or not handled in iSER. This patch adds informational printk messages so the user knows about them. Signed-off-by: Erez Zilber <erezz@voltaire.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Olaf Kirch
When an FMR is released via ib_fmr_pool_unmap(), the FMR usually ends up on the free_list rather than the dirty_list (because we allow a certain number of remappings before actually requiring a flush). However, ib_fmr_batch_release() only looks at the dirty_list when flushing out old mappings. This means that when ib_fmr_pool_flush() is used to force a flush of the FMR pool, some dirty FMRs that have not reached their maximum remap count are not actually flushed. Fix this by flushing all FMRs that have been used at least once in ib_fmr_batch_release(). Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
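The gist of the fix, as a hedged sketch: when building the unmap list, also pull in free-list FMRs that have been remapped at least once. Locking details and field names follow the general shape of the FMR pool code but are illustrative here:

```c
/* Hedged sketch: collect every FMR that has been used since its last
 * flush, not just those already on the dirty list. */
static void fmr_batch_release_sketch(struct ib_fmr_pool *pool)
{
    LIST_HEAD(unmap_list);
    struct ib_pool_fmr *fmr, *next;

    spin_lock_irq(&pool->pool_lock);
    list_splice_init(&pool->dirty_list, &unmap_list);
    list_for_each_entry_safe(fmr, next, &pool->free_list, list) {
        if (fmr->remap_count) {              /* used at least once */
            list_del(&fmr->list);
            list_add_tail(&fmr->list, &unmap_list);
        }
    }
    spin_unlock_irq(&pool->pool_lock);

    /* ... ib_unmap_fmr() on unmap_list, reset remap_count,
     *     then return the FMRs to free_list ... */
}
```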
-
Committed by Olaf Kirch
Normally, the serial numbers for flush requests and flushes executed for an FMR pool should be in sync. However, if the FMR pool flushes dirty FMRs because the dirty_watermark was reached, we wake up the cleanup thread and let it do its stuff. As a side effect, the cleanup thread increments pool->flush_ser, which leaves it one higher than pool->req_ser. The next time the user calls ib_flush_fmr_pool(), the cleanup thread will be woken up, but ib_flush_fmr_pool() won't wait for the flush to complete because flush_ser is already past req_ser. This means the FMRs that the user expects to be flushed may not all have been flushed when the function returns. Fix this by telling the cleanup thread to do work exclusively via increments of req_ser, and by moving the comparison of dirty_len and dirty_watermark into ib_fmr_pool_unmap(). Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
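The serial-number handshake at the heart of the fix can be sketched as below; the field names and wait mechanics are simplified and illustrative, not the exact fmr_pool.c code:

```c
/* Hedged sketch: a forced flush bumps req_ser and waits until flush_ser
 * catches up; the cleanup thread only advances flush_ser for flushes it
 * was explicitly asked to do, so watermark-driven work no longer leaves
 * flush_ser ahead of req_ser. */
static int flush_fmr_pool_sketch(struct ib_fmr_pool *pool)
{
    atomic_inc(&pool->req_ser);              /* "one more flush, please" */
    wake_up_process(pool->thread);

    return wait_event_interruptible(pool->force_wait,
               atomic_read(&pool->flush_ser) -
               atomic_read(&pool->req_ser) >= 0);
}
```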
-
Committed by Roland Dreier
In addition to being overly complex, the locking in user_mad.c is broken: there were multiple reports of deadlocks and lockdep warnings. In particular it seems that a single thread may end up trying to take the same rwsem for reading more than once, which is explicitly forbidden in the comments in <linux/rwsem.h>. To solve this, change the locking to use plain mutexes instead of rwsems. There is one mutex per open file, which protects the contents of the struct ib_umad_file, including the array of agents and the list of queued packets; and there is one mutex per struct ib_umad_port, which protects its contents, including the list of open files. We never hold the file mutex across calls to functions like ib_unregister_mad_agent(), which can call back into other ib_umad code to queue a packet, and we always hold the port mutex as long as we need to make sure that a device is not hot-unplugged from under us. This even makes things nicer for users of the -rt patch, since we remove calls to downgrade_write() (which is not implemented in -rt). Signed-off-by: Roland Dreier <rolandd@cisco.com>
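A hedged sketch of the resulting locking layout (struct contents are abbreviated; only the two mutexes and what they cover are the point, and the field names are illustrative):

```c
#include <linux/mutex.h>
#include <linux/list.h>

/* One mutex per port: protects the port's list of open files and is held
 * while making sure the device is not hot-unplugged from under us. */
struct ib_umad_port {
    struct mutex     file_mutex;
    struct list_head file_list;
    /* ... */
};

/* One mutex per open file: protects the agent array and the queue of
 * received packets. Never held across ib_unregister_mad_agent(). */
struct ib_umad_file {
    struct mutex         mutex;
    struct ib_mad_agent *agent[32];   /* IB_UMAD_MAX_AGENTS in the driver */
    struct list_head     recv_list;
    /* ... */
};
```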
-
Committed by Roland Dreier
There are a few places in the ipath driver where a variable is re-declared within a block where it is already in scope. Most of these extra declarations can simply be removed, since the outer-scope variable does not need to keep its value across the block containing the re-declaration. Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
t3_rdma_init_wr.irs is a big-endian field, so declare it as __be32. This fixes one sparse warning. Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Anton Blanchard
Use round_jiffies() to align ehca's 1-second timer with other timers and potentially save power by letting cores sleep longer. Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
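The pattern is a one-liner when re-arming the timer; a minimal sketch, assuming a 1-second poll timer named poll_timer:

```c
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list poll_timer;    /* illustrative 1-second poll timer */

static void rearm_poll_timer(void)
{
    /* round_jiffies() herds second-granularity timers onto the same tick,
     * so otherwise-idle cores can stay asleep longer. */
    mod_timer(&poll_timer, round_jiffies(jiffies + HZ));
}
```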
-
Committed by Sean Hefty
By default, the responder_resources parameter is set to the value received in a connection request. The passive side may override this value when accepting the connection. Use the value provided by the passive side when transitioning the QP to the RTR state, rather than the value given in the connect request. Without this change, the RTR transition may fail if the passive side supports fewer responder_resources than the request asked for. For code consistency and to protect against QP destruction, restructure the overriding of initiator_depth to match how responder_resources is set. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
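Conceptually the fix amounts to feeding the accepted value into the RTR attributes; a hedged sketch with illustrative names (the real code routes this through the CM's QP-attribute initialization):

```c
#include <rdma/ib_verbs.h>

/* Hedged sketch: program max_dest_rd_atomic from what the passive side
 * accepted, not from the connect request; other RTR attributes omitted. */
static int modify_qp_rtr_sketch(struct ib_qp *qp, u8 accepted_responder_resources)
{
    struct ib_qp_attr qp_attr = { .qp_state = IB_QPS_RTR };

    qp_attr.max_dest_rd_atomic = accepted_responder_resources;
    return ib_modify_qp(qp, &qp_attr,
                        IB_QP_STATE | IB_QP_MAX_DEST_RD_ATOMIC /* | ... */);
}
```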
-
Committed by Dave Olson
The original QHT7040 had significant performance issues, so the driver contained an additional check for a newer serial number. Support for the small quantity of those boards that shipped has been dropped, so this patch removes the special checks to simplify the code. Signed-off-by: Dave Olson <dave.olson@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Arthur Jones
Different chips have interrupt status registers of different widths, so add a flag and an accessor function to decide which width of register read to use. Signed-off-by: Arthur Jones <arthur.jones@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
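The flag-plus-accessor approach is easy to picture; this self-contained sketch uses made-up flag, type, and helper names rather than the actual ipath ones:

```c
#include <stdint.h>

/* Illustrative stand-ins: a per-device flag records whether the chip has a
 * 64-bit interrupt status register, and one accessor hides the width. */
#define MYCHIP_INTSTATUS_64BIT 0x1

struct my_devdata {
    unsigned long flags;
    uint64_t (*read_reg64)(struct my_devdata *dd, int reg);
    uint32_t (*read_reg32)(struct my_devdata *dd, int reg);
};

static uint64_t read_intstatus(struct my_devdata *dd, int reg)
{
    if (dd->flags & MYCHIP_INTSTATUS_64BIT)
        return dd->read_reg64(dd, reg);
    return dd->read_reg32(dd, reg);   /* upper 32 bits are simply zero */
}
```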
-
Committed by Ralph Campbell
The 6110 had a bug that caused some registers to be swapped; it was fixed for the 7220 (and did not affect the 6120 because it had fewer registers). This adds a flag and related code to handle that, and includes some minor cleanups in the same area. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
-
Committed by Ralph Campbell
The number of configured ports for the 7220 changes the number of eager TIDs available per port for all but port 0 (the kernel port), which remains constant, so add a field to hold the port 0 count separately from the portdata structure. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Ralph Campbell
User registers have different alignments on different chips (4KB on older chips, 64KB on the 7220). Allow mapping the user registers on kernels with page sizes up to 64KB. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Dave Olson
Signed-off-by: Dave Olson <dave.olson@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Ralph Campbell
Various hardware counters are exported via the ipath file system (since they are binary data). The old file format was very dependent on the HW offsets of these registers. Newer HCA chips can have different counters at different offsets. This patch adds a level of indirection to make the file format consistent across HCAs. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Ralph Campbell
Add support for QLogic HCAs which have hardware performance sampling registers for the PortSamplesControl and PortSamplesResult MADs. Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by David Dillow
When you have multiple targets, it gets really confusing when you try to track down who did a reset if there is no identifying information in the log message, especially when the same extension ID is mapped through two different local IB ports. So, add an identifier that can be used to track back to which local IB port/remote target pair is the one having problems. Signed-off-by: David Dillow <dillowda@ornl.gov> Acked-by: Pete Wyckoff <pw@osc.edu> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Pradeep Satyanarayana
Some HCAs (such as eHCA2) support SRQs, but support fewer than 16 SG entries per SRQ. Currently IPoIB/CM implicitly assumes all HCAs will support 16 SG entries for SRQs (to handle a 64K MTU with 4K pages). This patch removes that restriction by limiting the maximum MTU in connected mode to what the maximum number of SRQ SG entries allows. This patch addresses <https://bugs.openfabrics.org/show_bug.cgi?id=728>. Signed-off-by: Pradeep Satyanarayana <pradeeps@linux.vnet.ibm.com> Signed-off-by: Roland Dreier <rolandd@cisco.com>
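The cap boils down to simple arithmetic on the device's SRQ SG limit; the helper below is a hedged, stand-alone sketch (the constant names and exact header accounting are assumptions, not the ipoib code):

```c
#include <stdio.h>

#define PAGE_SIZE_ASSUMED 4096    /* 4K pages, as in the commit text */
#define CM_MTU_MAX       65520    /* connected-mode MTU reachable with 16 SG entries */

/* Hedged sketch: limit the connected-mode MTU to what max_srq_sge allows. */
static int cm_max_mtu(int max_srq_sge)
{
    int by_sge = max_srq_sge * PAGE_SIZE_ASSUMED;
    return by_sge < CM_MTU_MAX ? by_sge : CM_MTU_MAX;
}

int main(void)
{
    printf("16 SG entries -> MTU %d\n", cm_max_mtu(16));  /* full 65520 */
    printf(" 4 SG entries -> MTU %d\n", cm_max_mtu(4));   /* capped at 16384 */
    return 0;
}
```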
-
Committed by David Dillow
By default, the SCSI mid-layer seems to send down 512KB requests (sg_tablesize = 256), with some requests occasionally combined. By allowing the mid-layer to chain requests, we can easily grow to 1024KB or larger -- I've tested 4096KB I/O requests with no problems. I looked through the DMA paths of the hardware drivers to ensure they could take advantage of SG chaining, and it seems that every one except ipath uses the system's DMA routines, which have been converted to handle chaining. ipath looks like it should be OK, but I have no way to test it. Signed-off-by: David Dillow <dillowda@ornl.gov> [ Tested on ipath. - Roland ] Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by David Dillow
The current SRP initiator will send requests even if it has no credits available. The results of sending extra requests are vendor specific, but on some devices, overrunning credits costs 85% of peak performance -- e.g. 100 MB/s vs 720 MB/s. Other devices may just drop the requests. This patch tells the SCSI midlayer to queue requests if there are fewer than two credits remaining, and does not issue a task management request if there are no credits remaining. The mid-layer will retry the queued command once an outstanding command completes. The patch also removes the unlikely() in __srp_get_tx_iu(), as it is not at all unlikely to hit this limit under heavy load. Signed-off-by: David Dillow <dillowda@ornl.gov> Signed-off-by: Roland Dreier <rolandd@cisco.com>
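A hedged sketch of the credit gate in the queuecommand path; the structure, field, and threshold names are illustrative, not the SRP driver's:

```c
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>

#define TM_CREDITS_RESERVED 1    /* keep the last credit for task management */

struct srp_target_sketch {
    int req_lim;                 /* credits granted by the target */
};

/* Hedged sketch: push back on the SCSI midlayer when fewer than two
 * credits remain; the midlayer retries once a command completes. */
static int srp_queuecommand_sketch(struct srp_target_sketch *target,
                                   struct scsi_cmnd *scmnd)
{
    if (target->req_lim <= TM_CREDITS_RESERVED)
        return SCSI_MLQUEUE_HOST_BUSY;

    --target->req_lim;
    /* ... build and post the SRP_CMD information unit for scmnd ... */
    return 0;
}
```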
-