- 21 June 2008, 1 commit
-
-
Committed by Jonathan Corbet
This documents the fact that somebody looked at the relevant open() functions and concluded that, due to their trivial nature, no locking was needed.
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 29 April 2008, 1 commit
-
-
Committed by Arthur Kepner
Add a new parameter, dmasync, to the ib_umem_get() prototype. Use dmasync = 1 when mapping user-allocated CQs with ib_umem_get().
Signed-off-by: Arthur Kepner <akepner@sgi.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Jes Sorensen <jes@sgi.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: David Miller <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
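
For illustration, a minimal sketch of how a low-level driver might pin a user-allocated CQ buffer with the new parameter; the surrounding structure and function names are hypothetical, and the exact ib_umem_get() prototype has changed across kernel versions:

    #include <linux/err.h>
    #include <rdma/ib_umem.h>
    #include <rdma/ib_verbs.h>

    /* Hypothetical per-CQ state kept by a low-level driver. */
    struct my_cq {
        struct ib_cq     ibcq;
        struct ib_umem  *umem;
    };

    static int my_map_user_cq(struct ib_ucontext *context, struct my_cq *cq,
                              unsigned long buf_addr, size_t size)
    {
        /*
         * dmasync = 1 requests a synchronizing DMA mapping, so that CQ
         * entries written by the HCA become visible to the CPU promptly.
         */
        cq->umem = ib_umem_get(context, buf_addr, size,
                               IB_ACCESS_LOCAL_WRITE, 1 /* dmasync */);
        if (IS_ERR(cq->umem))
            return PTR_ERR(cq->umem);

        return 0;
    }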
-
- 20 April 2008, 1 commit
-
-
Committed by Tony Jones
This converts the main ib_device to use struct device instead of struct class_device, as class_device is going away.
Signed-off-by: Tony Jones <tonyj@suse.de>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Cc: Roland Dreier <rolandd@cisco.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
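
A rough before/after sketch of the kind of sysfs-attribute conversion this involves; the attribute, member names, and show function below are illustrative rather than taken from the actual patch:

    #include <linux/device.h>
    #include <linux/kernel.h>
    #include <rdma/ib_verbs.h>

    /*
     * Old style (commented out): an attribute hanging off the embedded
     * struct class_device.
     *
     *   static ssize_t show_node_type(struct class_device *cdev, char *buf);
     *   static CLASS_DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL);
     */

    /* New style: the same attribute expressed against struct device. */
    static ssize_t show_node_type(struct device *device,
                                  struct device_attribute *attr, char *buf)
    {
        struct ib_device *dev = container_of(device, struct ib_device, dev);

        return sprintf(buf, "%d\n", dev->node_type);
    }
    static DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL);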
-
- 19 April 2008, 1 commit
-
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
-
- 17 April 2008, 10 commits
-
-
Committed by Eli Cohen
Add support for modifying CQ parameters to control event-generation moderation.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
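
A minimal usage sketch of the resulting interface; the moderation values are arbitrary placeholders, and the wrapper name is hypothetical:

    #include <rdma/ib_verbs.h>

    /*
     * Ask the HCA to moderate completion events: generate an event only
     * after 16 completions have accumulated or 200 microseconds have
     * elapsed, whichever comes first.
     */
    static int my_set_cq_moderation(struct ib_cq *cq)
    {
        return ib_modify_cq(cq, 16 /* cq_count */, 200 /* cq_period (usec) */);
    }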
-
Committed by Roland Dreier
Add a new IB_WR_SEND_WITH_INV send opcode that can be used to mark a "send with invalidate" work request as defined in the iWARP verbs and the InfiniBand base memory management extensions. Also put "imm_data" and a new "invalidate_rkey" member in a new "ex" union in struct ib_send_wr. The invalidate_rkey member can be used to pass in an R_Key/STag to be invalidated. Add this new union to struct ib_uverbs_send_wr, and add code to copy the invalidate_rkey field in ib_uverbs_post_send().

Fix up low-level drivers to deal with the change to struct ib_send_wr, and just remove the imm_data initialization from net/sunrpc/xprtrdma/, since that code never does any send-with-immediate operations.

Also, move the existing IB_DEVICE_SEND_W_INV flag to a new bit, since the iWARP drivers currently in the tree set the bit. The amso1100 driver at least will silently fail to honor the IB_SEND_INVALIDATE bit if passed in as part of userspace send requests (since it does not implement kernel bypass work request queueing). Remove the flag from all existing drivers that set it until we know which ones are OK.

The value chosen for the new flag is not consecutive, to avoid clashing with flags defined in the XRC patches, which are not merged yet but which are already in use and are likely to be merged soon.

This resurrects a patch sent long ago by Mikkel Hagen <mhagen@iol.unh.edu>.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
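
For illustration, a sketch of posting a send-with-invalidate work request through the new ex union; the QP, SGE, and rkey are placeholders supplied by the caller:

    #include <rdma/ib_verbs.h>

    static int my_post_send_with_inv(struct ib_qp *qp, struct ib_sge *sge,
                                     u32 remote_rkey)
    {
        struct ib_send_wr wr = { }, *bad_wr;

        wr.wr_id              = 1;              /* consumer cookie */
        wr.sg_list            = sge;
        wr.num_sge            = 1;
        wr.opcode             = IB_WR_SEND_WITH_INV;
        wr.send_flags         = IB_SEND_SIGNALED;
        /* R_Key/STag the remote peer should invalidate on receipt. */
        wr.ex.invalidate_rkey = remote_rkey;

        return ib_post_send(qp, &wr, &bad_wr);
    }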
-
Committed by Dotan Barak
Make sure that a device implements the modify_srq and reg_phys_mr optional methods before calling them.
Signed-off-by: Dotan Barak <dotanb@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
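
The guard being added, sketched on a simplified wrapper (the real kernel helpers also thread user data through on the uverbs path):

    #include <linux/errno.h>
    #include <rdma/ib_verbs.h>

    /* Only call the optional driver method if the device provides it. */
    static int my_modify_srq(struct ib_srq *srq, struct ib_srq_attr *attr,
                             enum ib_srq_attr_mask mask)
    {
        if (!srq->device->modify_srq)
            return -ENOSYS;

        return srq->device->modify_srq(srq, attr, mask, NULL /* no udata */);
    }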
-
Committed by Eli Cohen
Add a create_flags member to struct ib_qp_init_attr that will allow a kernel verbs consumer to pass special flags when creating a QP. Add a flag value for telling low-level drivers that a QP will be used for IPoIB UD LSO. The create_flags member will also be useful for XRC and ehca low-latency QP support. Since no create_flags handling is implemented yet, add code to all low-level drivers to return -EINVAL if create_flags is non-zero.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
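
A sketch of both sides of the change: a kernel consumer requesting the new IPoIB UD LSO behaviour, and a low-level driver with no create_flags support rejecting any flags. Only the verbs types and flag name come from the patch; the functions themselves are hypothetical:

    #include <linux/err.h>
    #include <linux/errno.h>
    #include <rdma/ib_verbs.h>

    /* Consumer side: ask for a UD QP that will be used for IPoIB LSO. */
    static void my_fill_qp_attr(struct ib_qp_init_attr *init_attr)
    {
        init_attr->qp_type      = IB_QPT_UD;
        init_attr->create_flags = IB_QP_CREATE_IPOIB_UD_LSO;
    }

    /* Low-level driver side: no create_flags handling implemented yet. */
    static struct ib_qp *my_create_qp(struct ib_pd *pd,
                                      struct ib_qp_init_attr *init_attr,
                                      struct ib_udata *udata)
    {
        if (init_attr->create_flags)
            return ERR_PTR(-EINVAL);

        /* ... normal QP creation would follow here ... */
        return ERR_PTR(-ENOSYS);
    }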
-
Committed by Robert P. J. Day
Convert list_splice() + INIT_LIST_HEAD() to the equivalent list_splice_init().
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
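
The conversion, reduced to a generic before/after sketch:

    #include <linux/list.h>

    static void my_move_all(struct list_head *src, struct list_head *dst)
    {
        /*
         * Before: two calls that together empty @src onto @dst.
         *
         *     list_splice(src, dst);
         *     INIT_LIST_HEAD(src);
         */

        /* After: the equivalent single helper. */
        list_splice_init(src, dst);
    }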
-
Committed by Julia Lawall
The function rdma_create_id() always returns either a valid pointer or a value made with ERR_PTR, so its result should be tested with IS_ERR, not with a test for 0. The problem was found using the following semantic match (http://www.emn.fr/x-info/coccinelle/):

//<smpl>
@a@
expression E, E1;
statement S,S1;
position p;
@@

E = rdma_create_id(...)
... when != E = E1
if@p (E) S else S1

@n@
position a.p;
expression E,E1;
statement S,S1;
@@

E = NULL
... when != E = E1
if@p (E) S else S1

@depends on !n@
expression E;
statement S,S1;
position a.p;
@@

* if@p (E) S else S1
//</smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
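
The corrected calling convention, sketched against the rdma_create_id() prototype of that era; the handler and port space are placeholders:

    #include <linux/err.h>
    #include <rdma/rdma_cm.h>

    static int my_cm_handler(struct rdma_cm_id *id, struct rdma_cm_event *event)
    {
        return 0;
    }

    static int my_open_cm_id(void *ctx, struct rdma_cm_id **out)
    {
        struct rdma_cm_id *id;

        id = rdma_create_id(my_cm_handler, ctx, RDMA_PS_TCP);
        if (IS_ERR(id))                 /* not: if (!id) */
            return PTR_ERR(id);

        *out = id;
        return 0;
    }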
-
Committed by Roland Dreier
Christoph Hellwig wants to unexport get_empty_filp(), which is an ugly internal interface. Change the modular user in ib_uverbs_alloc_event_file() to use the better alloc_file() interface; this makes the code cleaner too.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
The file member of struct ib_uverbs_event_file was only used to keep track of whether the file had been closed or not; the only thing we ever did with the value was check whether it was NULL. Simplify the code, and get rid of the need to keep track of the struct file * we allocate, by replacing the file member with an is_closed member.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
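
The shape of the simplification, as a generic sketch with abbreviated, hypothetical names:

    #include <linux/errno.h>
    #include <linux/fs.h>

    /* Before: a struct file pointer kept only to answer "was this closed?". */
    struct my_event_file_old {
        struct file *filp;      /* set to NULL once the file is closed */
    };

    /* After: track the single bit of state that is actually needed. */
    struct my_event_file_new {
        int is_closed;          /* set to 1 once the file is closed */
    };

    static int my_queue_event(struct my_event_file_new *ev_file)
    {
        if (ev_file->is_closed) /* previously: if (!ev_file->filp) */
            return -ENXIO;

        /* ... queue the event ... */
        return 0;
    }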
-
Committed by Roland Dreier
Add a __force cast of node_guid to __u64, since we are sticking it into a structure whose definition is shared with userspace.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
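
The annotation in question, sketched on an illustrative field copy (the response structure here is hypothetical):

    #include <linux/types.h>

    /* Userspace-visible response: a raw __u64 with no endianness type. */
    struct my_query_resp {
        __u64 node_guid;
    };

    static void my_fill_resp(struct my_query_resp *resp, __be64 node_guid)
    {
        /*
         * The GUID is stored big-endian in the kernel, but the ABI field
         * is a plain __u64; the __force cast tells sparse the re-typing
         * is intentional rather than a missing byte-swap.
         */
        resp->node_guid = (__force __u64) node_guid;
    }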
-
Committed by Roland Dreier
Mostly update the RB tree comparisons to force __be types to normal integers, but the change to cm_format_sidr_req() is a real fix: param->path->pkey is already __be16.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Sean Hefty <sean.hefty@intel.com>
-
- 31 March 2008, 1 commit
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 March 2008, 1 commit
-
-
Committed by Steve Wise
cm_work_handler() can access cm_id_priv after it drops its reference by calling iwch_deref_id(), which might cause it to be freed. The fix is to look at whether IWCM_F_CALLBACK_DESTROY is set _before_ dropping the reference; then, if it was set, free the cm_id on this thread.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 1 March 2008, 3 commits
-
-
Committed by Pete Wyckoff
Commit a3cd7d90 ("IB/fmr_pool: ib_fmr_pool_flush() should flush all dirty FMRs") caused a regression for iSER and was reverted in e5507736. This change attempts to redo the original patch so that all used FMR entries are flushed when ib_flush_fmr_pool() is called, without affecting the normal FMR pool cleaning thread. Simply move used entries from the clean list onto the dirty list in ib_flush_fmr_pool() before letting the cleanup thread do its job.
Signed-off-by: Pete Wyckoff <pw@osc.edu>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Pete Wyckoff
This reverts commit a3cd7d90. The original commit breaks iSER reliably, making it complain:

    iser: iser_reg_page_vec:ib_fmr_pool_map_phys failed: -11

The FMR cleanup thread runs ib_fmr_batch_release() as dirty entries build up. This commit causes clean but used FMR entries also to be purged. During that process, another thread can see that there are no free FMRs and fail, even though there should always have been enough available.
Signed-off-by: Pete Wyckoff <pw@osc.edu>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
When a CM MAD is received, it is queued to a CM workqueue for processing. The queued work item references the port and device on which the MAD was received. If that device is removed from the system before the work item can execute, the work item will reference freed memory. To fix this, flush the workqueue after unregistering to receive MADs, and before the device is freed.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
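
The ordering being enforced, as a hedged sketch of a device-removal path; the port structure and workqueue are stand-ins for the CM's internal state:

    #include <linux/workqueue.h>
    #include <rdma/ib_mad.h>

    struct my_port {
        struct ib_mad_agent *mad_agent;
    };

    static void my_remove_one(struct my_port *port, struct workqueue_struct *wq)
    {
        /* Stop new MADs from being queued for this port... */
        ib_unregister_mad_agent(port->mad_agent);

        /*
         * ...then drain any work items that may still reference the port,
         * before the port/device memory is freed.
         */
        flush_workqueue(wq);

        /* Now it is safe to free the port and device state. */
    }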
-
- 16 February 2008, 1 commit
-
-
Committed by Li Zefan
If kobject_create_and_add() fails and returns NULL, the current code in ib_device_register_sysfs() does not set ret and hence returns 0. Set ret to -ENOMEM for this failure, so that the caller knows that ib_device_register_sysfs() actually failed.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
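
The corrected error path, as a generic sketch (the kobject name and helper are illustrative):

    #include <linux/errno.h>
    #include <linux/kobject.h>

    static int my_register_sysfs(struct kobject *parent, struct kobject **out)
    {
        struct kobject *kobj;

        kobj = kobject_create_and_add("ports", parent);
        if (!kobj)
            return -ENOMEM; /* previously this path fell through with ret == 0 */

        *out = kobj;
        return 0;
    }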
-
- 15 February 2008, 1 commit
-
-
Committed by Sean Hefty
There's an undesirable interaction between issuing MRA requests to increase connection timeouts and the listen backlog. When the rdma_cm receives a connection request, it queues an MRA with the ib_cm. (The ib_cm will send an MRA if it receives a duplicate REQ.) The rdma_cm will then create a new rdma_cm_id and give that to the user, which in this case is the rdma_user_cm. If the listen backlog maintained in the rdma_user_cm is full, it destroys the rdma_cm_id, which in turn destroys the ib_cm_id. The ib_cm_id generates a REJ, because the state of the ib_cm_id has changed to MRA sent rather than REQ received. When the backlog is full, we just want to drop the REQ so that it is retried later. Fix this by deferring queuing the MRA until after the user of the rdma_cm has examined the connection request.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 13 February 2008, 2 commits
-
-
Committed by Roland Dreier
Commit 9af57b7a ("IB/cm: Add basic performance counters") introduced a bug in how the reference count for cm_class.subsys.kobj was handled: the path that released a device did a kobject_put() on that kobject, but there was no kobject_get() in the path that handles adding a device. So the reference count ended up too low, which leads to bad things. Fix up and simplify the reference counting to avoid this. (Actually, I introduced the bug when fixing the patch up to match some of Greg's kobject changes, but who's counting?)
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Roland Dreier
Pesky little devils, sneaking around...
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 5 February 2008, 2 commits
-
-
Committed by Or Gerlitz
Allocate memory for the page_list field of struct ib_pool_fmr only when caching is enabled for the FMR pool, since the field is not used otherwise. This can save significant amounts of memory for large pools with caching turned off.
Signed-off-by: Or Gerlitz <ogerlitz@voltaire.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
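
The allocation change, reduced to a generic sketch; the structure below is a simplified stand-in, assuming the page list is kept as a trailing array, which is what makes the conditional sizing worthwhile:

    #include <linux/slab.h>
    #include <linux/types.h>

    struct my_pool_fmr {
        int ref_count;
        u64 io_virtual_address;
        u64 page_list[0];       /* only consulted when caching is enabled */
    };

    static struct my_pool_fmr *my_alloc_pool_fmr(int max_pages, bool caching)
    {
        size_t size = sizeof(struct my_pool_fmr);

        /*
         * The page list is only needed to look up cached mappings, so
         * skip the (potentially large) trailing array when the pool has
         * caching disabled.
         */
        if (caching)
            size += max_pages * sizeof(u64);

        return kzalloc(size, GFP_KERNEL);
    }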
-
Committed by Sean Hefty
Paths with hop_limit > 1 indicate that the connection will be routed between IB subnets. Update the subnet local field in the CM REQ based on the hop_limit value. In addition, if the path is routed, set the LIDs in the REQ to the permissive LIDs. This is used to indicate to the passive side that it should use the LIDs in the received local route header (LRH) associated with the REQ when programming the QP. This is a temporary work-around in the IB CM to support IB router development until the IB router specification is completed. It is not anticipated that this work-around will cause any interoperability issues with existing stacks or future stacks that will properly support IB routers when defined.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
- 29 January 2008, 3 commits
-
-
Committed by Denis V. Lunev
Needed to propagate it down to ip_route_output_flow().
Signed-off-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Denis V. Lunev
in_dev_find() needs a namespace to pass to fib_get_table(), so add an argument.
Signed-off-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Joe Perches
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 January 2008, 12 commits
-
-
Committed by Olaf Kirch
When an FMR is released via ib_fmr_pool_unmap(), the FMR usually ends up on the free_list rather than the dirty_list (because we allow a certain number of remappings before actually requiring a flush). However, ib_fmr_batch_release() only looks at dirty_list when flushing out old mappings. This means that when ib_fmr_pool_flush() is used to force a flush of the FMR pool, some dirty FMRs that have not reached their maximum remap count will not actually be flushed. Fix this by flushing all FMRs that have been used at least once in ib_fmr_batch_release().
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Olaf Kirch
Normally, the serial numbers for flush requests and flushes executed for an FMR pool should be in sync. However, if the FMR pool flushes dirty FMRs because the dirty_watermark was reached, we wake up the cleanup thread and let it do its stuff. As a side effect, the cleanup thread increments pool->flush_ser, which leaves it one higher than pool->req_ser. The next time the user calls ib_flush_fmr_pool(), the cleanup thread will be woken up, but ib_flush_fmr_pool() won't wait for the flush to complete because flush_ser is already past req_ser. This means the FMRs that the user expects to be flushed may not have all been flushed when the function returns. Fix this by telling the cleanup thread to do work exclusively by incrementing req_ser, and by moving the comparison of dirty_len and dirty_watermark into ib_fmr_pool_unmap().
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
-
Committed by Roland Dreier
In addition to being overly complex, the locking in user_mad.c is broken: there were multiple reports of deadlocks and lockdep warnings. In particular, it seems that a single thread may end up trying to take the same rwsem for reading more than once, which is explicitly forbidden in the comments in <linux/rwsem.h>.

To solve this, we change the locking to use plain mutexes instead of rwsems. There is one mutex per open file, which protects the contents of the struct ib_umad_file, including the array of agents and list of queued packets; and there is one mutex per struct ib_umad_port, which protects the contents, including the list of open files. We never hold the file mutex across calls to functions like ib_unregister_mad_agent(), which can call back into other ib_umad code to queue a packet, and we always hold the port mutex as long as we need to make sure that a device is not hot-unplugged from under us.

This even makes things nicer for users of the -rt patch, since we remove calls to downgrade_write() (which is not implemented in -rt).
Signed-off-by: Roland Dreier <rolandd@cisco.com>
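
The flavour of the conversion, reduced to a before/after sketch; the structures are simplified stand-ins for the real ib_umad state:

    #include <linux/list.h>
    #include <linux/mutex.h>

    /* One mutex per open file, protecting its agents and queued packets. */
    struct my_umad_file {
        struct mutex     mutex;         /* was: struct rw_semaphore */
        struct list_head recv_list;
    };

    static void my_file_init(struct my_umad_file *file)
    {
        mutex_init(&file->mutex);       /* was: init_rwsem() */
        INIT_LIST_HEAD(&file->recv_list);
    }

    static void my_queue_packet(struct my_umad_file *file, struct list_head *pkt)
    {
        mutex_lock(&file->mutex);       /* was: down_read()/down_write() */
        list_add_tail(pkt, &file->recv_list);
        mutex_unlock(&file->mutex);     /* was: up_read()/up_write() */
    }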
-
Committed by Sean Hefty
By default, the responder_resources parameter is set to the value received in a connection request. The passive side may override this value when accepting the connection. Use the value provided by the passive side when transitioning the QP to the RTR state, rather than the value given in the connect request. Without this change, the RTR transition may fail if the passive side supports fewer responder_resources than the request asked for. For code consistency and to protect against QP destruction, restructure overriding initiator_depth to match how responder_resources is set.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Rolf Manderscheid
An IPoIB subnet on an IB fabric that spans multiple IB subnets can't use link-local scope in multicast GIDs. The existing routines that map IP/IPv6 multicast addresses into IB link-level addresses hard-code the scope to link-local, and they also leave the partition key field uninitialised. This patch adds a parameter (the link-level broadcast address) to the mapping routines, allowing them to initialise both the scope and the P_Key appropriately, and fixes up the call sites. The next step will be to add a way to configure the scope for an IPoIB interface.
Signed-off-by: Rolf Manderscheid <rvm@obsidianresearch.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
This is based on user feedback from Doug Ledford at Red Hat: events that occur on an rdma_cm_id are reported to userspace through an event channel. Connection request events are reported on the event channel associated with the listen. When the connection is accepted, a new rdma_cm_id is created and automatically uses the listen event channel. This is suboptimal where the user only wants listen events on that channel. Additionally, it may be desirable to have events related to connection establishment use a different event channel than those related to already established connections. Allow the user to migrate an rdma_cm_id between event channels. All pending events associated with the rdma_cm_id are moved to the new event channel.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Vladimir Sokolovsky
Enable conn_id removal on the passive side after connection establishment. This corrects an issue where the IB driver can't be unloaded after running applications over RDS: the 'dev_remove' counter does not reach 0 for established connections on the passive side. This problem is limited to device removal, and only occurs on the passive side if there are established connections.
Signed-off-by: Vladimir Sokolovsky <vlad@mellanox.co.il>
Reviewed-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
In cancel_mads(), MADs are moved from the wait_list and local_list to a cancel_list for processing. However, the structures on these two lists are not the same: the wait_list references struct ib_mad_send_wr_private, but the local_list references struct ib_mad_local_private. cancel_mads() treats all items moved to the cancel_list as struct ib_mad_send_wr_private, which leads to a system crash when requests are moved from the local_list to the cancel_list. Fix this by leaving the local_list alone: all requests on the local_list have completed and are just awaiting processing by a queued worker thread. Bug (crash) reported by Dotan Barak <dotanb@dev.mellanox.co.il>. Problem with local_list access reported by Robert Reynolds <rreynolds@opengridcomputing.com>.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
Add performance/debug counters to track sent/received messages, retries, and duplicates. Counters are tracked per CM message type, per port. The counters are always enabled, so intrusive state tracking is not done.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
To allow ULPs to tune timeout values and capture retry statistics, report the number of times that a MAD send operation was retried. For RMPP MADs, report the total number of times that any portion (send window) of the send operation was retried.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
P_Key changes can invalidate multicast groups. Report errors on all multicast groups affected by a P_Key change.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Steve Welch
The local loopback of an outgoing DR SMP response is limited to those that originate at the driver-specific SMA implementation during the driver-specific process_mad() function. This patch enables a returning DR SMP originating in userspace (or elsewhere) to be delivered to the local management stack. In this specific case the driver's process_mad() function does not consume or process the MAD, so a response MAD has not been created and the original MAD must be manually copied to the MAD buffer that is to be handed off to the local agent.
Signed-off-by: Steve Welch <swelch@systemfabricworks.com>
Acked-by: Hal Rosenstock <hal@xsigo.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-