- 19 Oct 2010, 11 commits
-
Committed by Pavel Emelyanov
The only difference between xs_bind4 and xs_bind6 is the size of the sockaddr structure they use. Fortunately, its size can be obtained indirectly from the transport.
Changes since v1:
* use sockaddr_storage instead of sockaddr
* use rpc_set_port instead of manual port assignment
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
[bfields@redhat.com: fix address family initialization]
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
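After the merge, a single bind helper can serve both address families. The sketch below only illustrates the idea as it would sit in net/sunrpc/xprtsock.c; the field and helper names (srcaddr, srcport, xprt.addrlen, rpc_set_port, kernel_bind) are quoted from the SUNRPC code of that era from memory, and the real function's port-selection and retry-on-EADDRINUSE logic is omitted.

```c
static int xs_bind(struct sock_xprt *transport, struct socket *sock)
{
        struct sockaddr_storage myaddr;

        /* One copy covers AF_INET and AF_INET6: the true address length
         * is carried by the transport instead of a hard-coded sizeof(). */
        memcpy(&myaddr, &transport->srcaddr, transport->xprt.addrlen);

        /* Set the source port generically rather than poking
         * sin_port/sin6_port by hand. */
        rpc_set_port((struct sockaddr *)&myaddr, transport->srcport);

        return kernel_bind(sock, (struct sockaddr *)&myaddr,
                           transport->xprt.addrlen);
}
```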
-
Committed by Pavel Emelyanov
Remove the now unneeded wrappers that just add type and protocol to the socket creation callback.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Same patch for the v6 protocols.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
The UDPv4 and TCPv4 socket creation callbacks now look very similar.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Make it look like the TCP socket creation. Unfortunately, the git diff makes the patch look messy :(
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
xs_tcp_reuse_connection takes the xprt only to pass it down to xs_abort_connection. The latter can get it from the given transport itself.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
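The resulting signature change, as a minimal sketch (the function body is reduced to the one line that matters):

```c
/* Before: the caller had to pass both objects down. */
static void xs_abort_connection(struct rpc_xprt *xprt,
                                struct sock_xprt *transport);

/* After (sketch): struct sock_xprt embeds the rpc_xprt, so the callee
 * can recover it by itself and the extra parameter goes away. */
static void xs_abort_connection(struct sock_xprt *transport)
{
        struct rpc_xprt *xprt = &transport->xprt;

        /* ... reset the connection using xprt as before ... */
}
```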
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Tom Tucker
There are several error paths in the code that do not unmap DMA. This patch adds calls to svc_rdma_unmap_dma to free these DMA contexts.
Signed-off-by: Tom Tucker <tom@opengridcomputing.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
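On such an error path the fix has roughly the following shape. This is an illustrative fragment only; the label and the surrounding function are not taken from the actual patch, though svc_rdma_unmap_dma() and svc_rdma_put_context() are the existing svcrdma helpers.

```c
err:
        /* Release any pages already DMA-mapped into this context before
         * dropping it, otherwise the mappings are leaked. */
        svc_rdma_unmap_dma(ctxt);
        svc_rdma_put_context(ctxt, 1);
        return -EIO;
```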
-
Committed by Tom Tucker
There was logic in the send path that assumed that a page containing data to send to the client has a KVA (kernel virtual address). This is not always the case and can result in data corruption when page_address returns zero and we end up DMA mapping zero. This patch changes the bus mapping logic to avoid page_address() where necessary and converts all calls from ib_dma_map_single to ib_dma_map_page in order to keep the map/unmap calls symmetric.
Signed-off-by: Tom Tucker <tom@ogc.us>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
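The conversion boils down to replacing the virtual-address based mapping with a page based one, roughly as below (a sketch; the device pointer and variable names are illustrative):

```c
/* Before: breaks for pages without a kernel mapping (e.g. highmem),
 * where page_address() returns NULL and address 0 gets mapped. */
dma_addr = ib_dma_map_single(device, page_address(page) + offset,
                             len, DMA_TO_DEVICE);

/* After: map the page directly; no KVA is needed, and the unmap side
 * stays symmetric via ib_dma_unmap_page(). */
dma_addr = ib_dma_map_page(device, page, offset, len, DMA_TO_DEVICE);
```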
-
- 12 Oct 2010, 4 commits
-
Committed by J. Bruce Fields
Expire clients more promptly, at the expense of possibly running the laundromat thread more frequently. Though it's not the default, I'd like it to be feasible to run with a lease time of just a few seconds, at which point a minimum 10-second wait between laundromat runs seems a little much.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by NeilBrown
We limit the number of 'defer' requests to DFR_MAX. The imposition of this limit is spread about a bit - sometimes we don't add new things to the list, sometimes we remove old things. Also, it is currently applied to requests which we are 'waiting' for rather than 'deferring'. This doesn't seem ideal, as 'waiting' requests are naturally limited by the number of threads. So gather the DFR_MAX handling code into one place and only apply it to requests that are actually being deferred. This means that not all 'cache_deferred_req' structures go on the 'cache_defer_list', so we need to be careful when adding and removing things.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
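A sketch of what a single, centralised limit check can look like. The helper below is illustrative rather than lifted from the patch (only struct cache_deferred_req, its revisit callback, DFR_MAX and the cache_defer_* statics are existing sunrpc names); the point is that the limit is enforced in one place, only against requests that are actually being deferred, by discarding the oldest deferral when the count exceeds DFR_MAX.

```c
static void cache_limit_defers(void)
{
        struct cache_deferred_req *discard = NULL;

        if (cache_defer_cnt <= DFR_MAX)
                return;

        spin_lock(&cache_defer_lock);
        if (cache_defer_cnt > DFR_MAX) {
                /* Drop the oldest deferred request to stay within the limit. */
                discard = list_first_entry(&cache_defer_list,
                                           struct cache_deferred_req, recent);
                list_del_init(&discard->recent);
                cache_defer_cnt--;
        }
        spin_unlock(&cache_defer_lock);

        if (discard)
                /* too_many == 1: the owner simply drops the request. */
                discard->revisit(discard, 1);
}
```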
-
Committed by NeilBrown
The return value from cache_defer_req is somewhat confusing. Various different error codes are returned, but the single caller is only interested in success or failure. In fact it can measure this success or failure itself by checking CACHE_PENDING, which makes the point of the code more explicit. So change cache_defer_req to return 'void' and test CACHE_PENDING after it completes, to see whether the request was actually deferred or not. Similarly, setup_deferral and cache_wait_req don't need a return value, so make them void and remove some code. The call to cache_revisit_request (to guard against a race) is only needed for the second call to setup_deferral, so move it out of setup_deferral to after that second call. With the first call the race is handled differently (by explicitly calling 'wait_for_completion').
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
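On the caller side the pattern then becomes roughly the following. This is a condensed sketch of the relevant lines in cache_check(); the helper names (CACHE_PENDING, cache_is_valid) are quoted from the net/sunrpc/cache.c of that era from memory.

```c
        cache_defer_req(rqstp, h);      /* now returns void */

        if (!test_bit(CACHE_PENDING, &h->flags)) {
                /* The deferral did not happen (or the item was filled in
                 * meanwhile): re-evaluate the item instead of returning
                 * -EAGAIN to the caller. */
                rv = cache_is_valid(detail, h);
        }
```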
-
- 03 Oct 2010, 1 commit
-
Committed by J. Bruce Fields
Commit 78155ed7 "nfsd4: distinguish expired from stale stateids" attempted to distinguish expired and stale stateids using time information that may not have been completely reliable, so I reverted it. That was throwing out the baby with the bathwater; we still do want to return expired, but let's do it using the simpler approach of just assuming any stateid is expired if it looks like it was given out by the current server instance but we can't find it any more. This may help clients that are recovering from network partitions.
Reported-by: Bian Naimeng <biannm@cn.fujitsu.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
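The resulting rule can be written as a tiny classification helper, roughly as below. This is a sketch, assuming the stateid records the boot time of the server instance that issued it, as the nfsd stateid of that era did; the helper name is made up, while STALE_STATEID and the error codes are the existing nfsd names quoted from memory.

```c
static __be32 classify_missing_stateid(stateid_t *stateid)
{
        if (STALE_STATEID(stateid))
                /* Issued by a previous server instance: stale. */
                return nfserr_stale_stateid;
        /* Looks like we issued it ourselves but cannot find it any
         * more: treat it as expired. */
        return nfserr_expired;
}
```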
-
- 02 Oct 2010, 24 commits
-
Committed by J. Bruce Fields
As long as we're not implementing any session security, we should just automatically add any new connections that come along to the list of connections associated with the session.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Remove connections from the list when they go down.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
NFSv4.1 needs to be warned when a client tcp connection goes down, if that connection is being used as a backchannel, so that it can warn the client that it has lost the backchannel connection.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
The spec requires us in various places to keep track of the connections associated with each session.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Changes:
- make sure the session memory reservation is released on the failure path.
- use min_t()/min() for more compact code in several places.
- break alloc_init_session into smaller pieces.
- miscellaneous other cleanup.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
This returns an nfs error, not a negative errno.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Note we're allocating an array of nfsd4_slot pointers, not nfsd4_slot structures.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Instead of creating the new rpc client from a regular server thread, set a flag, kick off a null call, and allow the null call to do the work of setting up the client on the callback workqueue. Use a spinlock to ensure the callback work gets a consistent view of the callback parameters. This allows, for example, changing the callback from contexts where sleeping is not allowed. I hope it will also keep the locking simple as we add more session and trunking features, by serializing most of the callback-specific work. This also closes a small race where the new cb_ident could be used with an old connection (or vice versa).
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
I don't see the point of the separate struct. It seems to just be getting in the way.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
This will eventually allow us, for example, to kick off a null callback from contexts where we can't sleep.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Make the recall callback code more generic, so that other callbacks will be able to use it too.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
With apologies for the gratuitous rename, the new name seems more helpful to me.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
These two structs don't really need to be distinct, as far as I can tell.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Now that we have both nfsd4_callback and nfsd4_cb_conn structures, I get confused if variables of both types are always named cb...
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Unfortunately, spkm3 never got very far; while interoperability with one other implementation was demonstrated at some point, problems were found with the spec that were deemed not worth fixing. The kernel code is useless on its own without nfs-utils patches, which were never merged into nfs-utils and were only ever available from citi.umich.edu. They appear not to have been updated since 2005. Therefore it seems safe to assume that this code has no users, and never will.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by NeilBrown
If we set up to wait for a cache item to be filled in, and then find that it is no longer pending, it could be that some other thread is in 'cache_revisit_request' and has moved our request to its 'pending' list. So when our setup_deferral calls cache_revisit_request it will find nothing to put on the pending list, and do nothing. We then return from cache_wait_req, leaving the on-stack 'sleeper' structure open to being corrupted by subsequent stack usage. However, that 'sleeper' could still be on the 'pending' list that the other thread is looking at, so any corruption could cause it to behave badly. To avoid this race, we simply take the same path as if the 'wait_for_completion_interruptible_timeout' was interrupted, and if the sleeper is no longer on the list (which it won't be) we wait on the completion - which ensures that any other cache_revisit_request will have let go of the sleeper.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
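The fixed wait path then looks roughly like the sketch below. It is condensed and assumes an on-stack sleeper whose embedded cache_deferred_req is hashed on the defer list; the helper and field names are illustrative rather than exact.

```c
        if (!test_bit(CACHE_PENDING, &item->flags) ||
            wait_for_completion_interruptible_timeout(
                    &sleeper.completion, req->thread_wait) <= 0) {
                spin_lock(&cache_defer_lock);
                if (!hlist_unhashed(&sleeper.handle.hash)) {
                        /* Nobody picked the sleeper up: unhash it ourselves. */
                        __unhash_deferred_req(&sleeper.handle);
                        spin_unlock(&cache_defer_lock);
                } else {
                        /* cache_revisit_request() already took the sleeper off
                         * the list but may not have completed us yet; wait here
                         * so the on-stack structure cannot be touched after we
                         * return. */
                        spin_unlock(&cache_defer_lock);
                        wait_for_completion(&sleeper.completion);
                }
        }
```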
-
Committed by Pavel Emelyanov
The context is already known in all the sock_create callers.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
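In practice this means switching from the implicit-init_net socket creation to the variant that takes the namespace explicitly, roughly as below (a sketch; at that time sock_create_kern() did not take a struct net argument, so __sock_create() is the natural replacement):

```c
struct socket *sock;
int err;

/* Before: the kernel socket is always created in init_net. */
err = sock_create_kern(family, type, protocol, &sock);

/* After (sketch): create it in the namespace the caller already knows. */
err = __sock_create(net, family, type, protocol, &sock, 1);
```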
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
The net is known from the xprt_create, and this tagging will also give us the context in the connection workers where the real sockets are created.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
After this, the socket creation in it knows the context.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Pavel Emelyanov
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-