- 15 November 2012, 10 commits
-
-
Committed by Stanislav Kinsbursky
This hash holds file lock owners and is closely associated with the nfs4_client info, which is network namespace aware. So let's make it allocated per network namespace too. Note: this hash could be allocated in per-net operations, but it looks better to allocate it on nfsd state start and thus not waste resources if the server is not running. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
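The same pattern recurs across the per-net conversions in this batch, so one sketch covers them. A minimal illustration of allocating a state hash per network namespace on nfsd state start, written against the kernel's generic per-net API; the struct layout, field name, and hash size here are hypothetical stand-ins, not the exact nfsd definitions:

```c
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>

#define LOCKOWNER_INO_HASH_BITS	8
#define LOCKOWNER_INO_HASH_SIZE	(1 << LOCKOWNER_INO_HASH_BITS)

/* simplified stand-in for fs/nfsd/netns.h's struct nfsd_net */
struct nfsd_net {
	struct hlist_head *lockowner_ino_hashtbl;	/* hypothetical field name */
};

extern int nfsd_net_id;		/* registered via register_pernet_subsys() */

/*
 * Allocate the hash when nfsd state starts for this namespace rather
 * than in per-net init, so a namespace that never runs the server
 * pays nothing.
 */
static int nfs4_alloc_lockowner_hash(struct net *net)
{
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
	int i;

	nn->lockowner_ino_hashtbl = kmalloc(LOCKOWNER_INO_HASH_SIZE *
					    sizeof(struct hlist_head),
					    GFP_KERNEL);
	if (!nn->lockowner_ino_hashtbl)
		return -ENOMEM;
	for (i = 0; i < LOCKOWNER_INO_HASH_SIZE; i++)
		INIT_HLIST_HEAD(&nn->lockowner_ino_hashtbl[i]);
	return 0;
}
```

Keeping the allocation in the state-start path (with a matching free on shutdown) is what lets an idle namespace avoid the cost entirely, which is the trade-off these commit messages call out.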
-
Committed by Stanislav Kinsbursky
This hash holds open owner state and is closely associated with the nfs4_client info, which is network namespace aware. So let's make it allocated per network namespace too. Note: this hash could be allocated in per-net operations, but it looks better to allocate it on nfsd state start and thus not waste resources if the server is not running. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This hash holds nfs4_client info, which is network namespace aware. So let's make it allocated per network namespace. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This hash holds nfs4_client info, which is network namespace aware. So let's make it allocated per network namespace. Note: this hash could be allocated in per-net operations, but it looks better to allocate it on nfsd state start and thus not waste resources if the server is not running. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This tree holds nfs4_client info, which is network namespace aware. So let's make it per network namespace. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This hash holds nfs4_client info, which is network namespace aware. So let's make it allocated per network namespace. Note: this hash could be allocated in per-net operations, but it looks better to allocate it on nfsd state start and thus not waste resources if the server is not running. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This hash holds nfs4_client info, which is network namespace aware. So let's make it allocated per network namespace. Note: this hash is used only by the legacy tracker, so let's allocate it in the tracker's init. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
And use its net where possible. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Fengguang Wu
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 13 November 2012, 6 commits
-
-
Committed by Jeff Layton
Remove the cl_recdir field from the nfs4_client struct. Instead, just compute it on the fly when and if it's needed, which is now only when the legacy client tracking code is in effect. The error handling in the legacy client tracker is also changed to handle the case where md5 is unavailable. In that case, we'll warn the admin with a KERN_ERR message and disable the client tracking. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
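A rough sketch of the md5-availability check described above, using the kernel crypto API; the helper name is hypothetical, and in the real code this sits in the legacy tracker's init path:

```c
#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/printk.h>

/* hypothetical helper: probe for md5 before the legacy tracker relies on it */
static int nfsd4_legacy_tracking_md5_ok(void)
{
	struct crypto_shash *tfm;

	tfm = crypto_alloc_shash("md5", 0, 0);
	if (IS_ERR(tfm)) {
		/*
		 * md5 may be unavailable (e.g. not compiled in). Warn the
		 * admin and let the caller disable legacy client tracking
		 * instead of failing in the middle of a reclaim.
		 */
		pr_err("NFSD: md5 unavailable, disabling legacy client tracking\n");
		return PTR_ERR(tfm);
	}
	crypto_free_shash(tfm);
	return 0;
}
```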
-
Committed by Jeff Layton
The current code requires that we md5-hash the name in order to store the client in the confirmed and unconfirmed trees. Change it instead to store the clients in a pair of rbtrees, and simply compare the cl_names directly instead of hashing them. This also necessitates adding a new flag to the clp->cl_flags field to indicate which tree the client is currently in. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
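The insert then reduces to a standard rbtree walk keyed by a memcmp-style comparison on the raw name. A sketch, with both structs trimmed to the fields relevant here:

```c
#include <linux/rbtree.h>
#include <linux/string.h>

/* trimmed stand-ins for the sunrpc/nfsd definitions */
struct xdr_netobj {
	unsigned int	len;
	char		*data;
};

struct nfs4_client {
	struct rb_node		cl_namenode;
	struct xdr_netobj	cl_name;
};

/* total order on raw names: length first, then memcmp -- no md5 needed */
static long compare_blob(const struct xdr_netobj *o1, const struct xdr_netobj *o2)
{
	if (o1->len != o2->len)
		return (long)o1->len - (long)o2->len;
	return memcmp(o1->data, o2->data, o1->len);
}

static void add_clp_to_name_tree(struct nfs4_client *new, struct rb_root *root)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;
	struct nfs4_client *clp;

	while (*p) {
		parent = *p;
		clp = rb_entry(parent, struct nfs4_client, cl_namenode);
		if (compare_blob(&new->cl_name, &clp->cl_name) > 0)
			p = &parent->rb_right;
		else
			p = &parent->rb_left;
	}
	rb_link_node(&new->cl_namenode, parent, p);
	rb_insert_color(&new->cl_namenode, root);
}
```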
-
Committed by Jeff Layton
When nfsd starts, the legacy reboot recovery code creates a tracking struct for each directory in the v4recoverydir. When the grace period ends, it basically does a "readdir" on the directory again and matches each dentry in there to an existing client id to see if it should be removed or not. If the matching client doesn't exist, or hasn't reclaimed its state, then it will remove that dentry. This is pretty inefficient, since it involves doing a lot of hash-bucket searching. It also means that we have to keep relying on being able to search for an nfs4_client by md5-hashed cl_recdir name. Instead, add a pointer to the nfs4_client that indicates the association between the nfs4_client_reclaim and the nfs4_client. When a reclaim operation comes in, we set the pointer to make that association. On gracedone, the legacy client tracker will keep the recdir around iff: 1) there is a reclaim record for the directory, and 2) there's an association between the reclaim record and a client record -- that is, a create or check operation was performed on the client that matches that directory. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
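A trimmed sketch of that association; the structs are reduced to what the message describes, and the gracedone check is paraphrased into a hypothetical helper:

```c
#include <linux/list.h>
#include <linux/types.h>

#define HEXDIR_LEN 33	/* 16-byte md5, hex-encoded, NUL-terminated */

struct nfs4_client;	/* opaque here */

/* one record per directory found under v4recoverydir at startup */
struct nfs4_client_reclaim {
	struct list_head	cr_strhash;	/* hash chain */
	struct nfs4_client	*cr_clp;	/* set when a client reclaims */
	char			cr_recdir[HEXDIR_LEN];
};

/*
 * On gracedone: keep the directory only if a reclaim record exists
 * AND a create/check operation tied it to a live client.
 */
static bool nfsd4_keep_recdir(const struct nfs4_client_reclaim *crp)
{
	return crp && crp->cr_clp;
}
```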
-
Committed by Jeff Layton
Later callers will need to make changes to the record. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Jeff Layton
We'll need to be able to call this from nfs4recover.c eventually. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Jeff Layton
Currently it takes a client pointer, but later we're going to need to search for these records without knowing whether a matching client even exists. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 11 November 2012, 1 commit
-
-
Committed by Jeff Layton
Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 08 November 2012, 7 commits
-
-
Committed by J. Bruce Fields
I've found it confusing having the only references to nfsd4_do_callback_rpc() in a different file. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
This operation is mandatory for servers to implement. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
We're currently ignoring the callback security parameters specified in create_session and just assuming the client wants auth_sys, because that's all the current Linux client happens to care about. But this could cause callbacks to fail to a client that wanted something different. For now, all we're doing is no longer ignoring the uid and gid passed in the auth_sys case. Further patches will add support for auth_null and gss (and possibly use more of the auth_sys information; the spec wants us to use exactly the credential we're passed, though it's hard to imagine why a client would care). Signed-off-by: J. Bruce Fields <bfields@redhat.com>
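Sketching what "no longer ignoring the uid and gid" might look like, with simplified stand-in structs; none of these type or field names are the exact nfsd/sunrpc definitions:

```c
#include <linux/types.h>

#define RPC_AUTH_NULL	0
#define RPC_AUTH_UNIX	1	/* i.e. auth_sys */

/* assumed, simplified shapes -- not the real nfsd/sunrpc structs */
struct cb_sec_parms {
	u32 flavor;
	u32 uid;
	u32 gid;
};

struct cb_cred {
	u32 uid;
	u32 gid;
};

/*
 * Use the identity the client supplied in create_session instead of
 * hardcoding 0/0 for auth_sys callbacks.
 */
static void setup_callback_cred(const struct cb_sec_parms *sec,
				struct cb_cred *cred)
{
	switch (sec->flavor) {
	case RPC_AUTH_UNIX:
		cred->uid = sec->uid;
		cred->gid = sec->gid;
		break;
	default:
		/* auth_null and gss: handled by later patches */
		break;
	}
}
```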
-
Committed by J. Bruce Fields
These conditions would indeed indicate bugs in the code, but if we want to hear about them we're likely better off warning and returning than immediately dying while holding file_lock_lock. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
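The shape of that change, as a hedged sketch rather than the exact locks.c hunk; the function and condition here are invented for illustration:

```c
#include <linux/bug.h>
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(file_lock_lock);

/* hypothetical caller showing the before/after shape of the change */
static int checked_lock_op(bool state_looks_corrupt)
{
	spin_lock(&file_lock_lock);
	if (WARN_ON(state_looks_corrupt)) {	/* was: BUG_ON(...) */
		/* drop the lock and survive, instead of dying while holding it */
		spin_unlock(&file_lock_lock);
		return -EINVAL;
	}
	/* ... the real work ... */
	spin_unlock(&file_lock_lock);
	return 0;
}
```

The point of the WARN_ON over BUG_ON: a BUG while holding file_lock_lock would wedge every subsequent lock operation on the machine, whereas a warning still gets the stack trace into the logs.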
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Yanchuan Nian
The object type in the lockowner_slab cache is wrong, and it is better to fix it. Cc: stable@vger.kernel.org Signed-off-by: Yanchuan Nian <ycnian@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Wei Yongjun
The variable inode is initialized but never used otherwise, so remove the unused variable. The dpatch engine was used to auto-generate this patch (https://github.com/weiyj/dpatch). Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 02 October 2012, 12 commits
-
-
Committed by J. Bruce Fields
When a confirmed client expires, we normally also need to expire any stable storage record which would allow that client to reclaim state on the next boot. We forgot to do this in some cases (for example, in destroy_clientid, and in the cases in exchange_id and create_session that destroy an existing confirmed client). But in most other cases there's really no harm in calling nfsd4_client_record_remove(), because it is a no-op when the client doesn't have an existing record. The single exception is destroying a client on shutdown, when we want to keep the stable storage records so we can recognize which clients will be allowed to reclaim when we come back up. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Both nfsd4_init_conn and alloc_init_session are probing the callback channel, which is harmless but pointless. Also, nfsd4_init_conn should probably be probing in the "unknown" case as well. In fact, I don't see any harm in just doing it unconditionally when we get a new backchannel connection. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Before, we had to delay expiring a client until we'd found out whether the session and connection allocations would succeed. That's no longer necessary. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
This will allow some further simplification. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Do the initialization in the caller, and clarify that the only failure ever possible here was due to allocation. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
It'll be useful to have connection allocation and initialization as separate functions. Also, note we'd been ignoring the alloc_conn error return in bind_conn_to_session. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
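A generic sketch of the alloc/init split being described, using a simplified stand-in type rather than nfsd's real connection struct:

```c
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

/* simplified stand-in for nfsd's per-session connection record */
struct conn {
	struct list_head cn_list;
	u32 cn_flags;
};

/* allocation can fail, so keep it separate and make every caller check it */
static struct conn *alloc_conn(u32 flags)
{
	struct conn *c = kmalloc(sizeof(*c), GFP_KERNEL);

	if (!c)
		return NULL;
	c->cn_flags = flags;
	return c;
}

/* initialization/linking cannot fail once the memory exists */
static void init_conn(struct conn *c, struct list_head *session_conns)
{
	list_add(&c->cn_list, session_conns);
}
```

Splitting this way makes the ignored-error bug mentioned above structurally harder to write: the fallible step returns a pointer that the caller has to test before anything can be initialized.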
-
Committed by J. Bruce Fields
This could simplify the logic a little later. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Something like creating a client with setclientid and then trying to confirm it with create_session may not crash the server, but I'm not completely positive of that, and in any case it's obviously bad client behavior. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
And remove some mostly obsolete comments. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
I added cr_flavor to the data compared in same_creds without any justification, in d5497fc6 "nfsd4: move rq_flavor into svc_cred". Recent client changes then started making the sequence

mount -osec=krb5 server:/export /mnt/
echo "hello" >/mnt/TMP
umount /mnt/
mount -osec=krb5i server:/export /mnt/
echo "hello" >/mnt/TMP

fail due to a clid_inuse on the second open. Mounting sequentially like this with different flavors probably isn't that common outside artificial tests. Also, the real bug here may be that the server isn't just destroying the former clientid in this case (because it isn't good enough at recognizing when the old state is gone). But it prompted some discussion and a look back at the spec, and I think the check was probably wrong. Fix and document. Cc: stable@kernel.org Signed-off-by: J. Bruce Fields <bfields@redhat.com>
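Roughly the shape of the fix, with a simplified stand-in for sunrpc's svc_cred; the real helper compares more fields (e.g. the group list), and the types here are deliberately flattened:

```c
#include <linux/string.h>
#include <linux/types.h>

/* simplified stand-in for sunrpc's struct svc_cred */
struct svc_cred {
	u32	cr_uid;
	u32	cr_gid;
	u32	cr_flavor;
	char	*cr_principal;
};

/*
 * Same principal == same client. cr_flavor is deliberately NOT
 * compared, so a krb5 -> krb5i remount still matches its old clientid.
 */
static bool same_creds(const struct svc_cred *cr1, const struct svc_cred *cr2)
{
	if (cr1->cr_uid != cr2->cr_uid || cr1->cr_gid != cr2->cr_gid)
		return false;
	if (!cr1->cr_principal || !cr2->cr_principal)
		return cr1->cr_principal == cr2->cr_principal;
	return strcmp(cr1->cr_principal, cr2->cr_principal) == 0;
}
```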
-
- 27 September 2012, 1 commit
-
-
Committed by Al Viro
Simplifies a bunch of callers... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 11 September 2012, 1 commit
-
-
Committed by J. Bruce Fields
Somehow we ended up with identical functions "nfs4_free_stateid" and "free_generic_stateid". Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 10 September 2012, 1 commit
-
-
Committed by J. Bruce Fields
Processes that open and close multiple files may end up setting this oo_last_closed_stid without freeing what was previously pointed to. This can result in a major leak, visible for example by watching the nfsd4_stateids line of /proc/slabinfo. Reported-by: Cyril B. <cbay@excellency.fr> Tested-by: Cyril B. <cbay@excellency.fr> Cc: stable@vger.kernel.org Signed-off-by: J. Bruce Fields <bfields@redhat.com>
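A paraphrased sketch of the fix, reduced to the essentials (the structs are trimmed and the wrapper function is hypothetical; the underlying idea is simply to free the previously cached stateid before overwriting the pointer):

```c
struct nfs4_ol_stateid;	/* opaque here */

struct nfs4_openowner {
	struct nfs4_ol_stateid *oo_last_closed_stid;
};

void free_generic_stateid(struct nfs4_ol_stateid *stp);

/*
 * Release whatever was cached before overwriting the pointer, so
 * repeated open/close cycles no longer leak one stateid per close.
 */
static void cache_last_closed_stateid(struct nfs4_openowner *oo,
				      struct nfs4_ol_stateid *stp)
{
	if (oo->oo_last_closed_stid)
		free_generic_stateid(oo->oo_last_closed_stid);
	oo->oo_last_closed_stid = stp;
}
```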
-
- 22 August 2012, 1 commit
-
-
Committed by Jeff Layton
struct file_lock is pretty large and really ought not live on the stack. On my x86_64 machine, they're almost 200 bytes each:

(gdb) p sizeof(struct file_lock)
$1 = 192

...allocate them dynamically instead. Reported-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
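A sketch of the before/after using the VFS helpers locks_alloc_lock() and locks_free_lock(), which exist for exactly this purpose; the caller's name and surrounding logic are invented for illustration:

```c
#include <linux/errno.h>
#include <linux/fs.h>

/* illustrative caller: the name and surrounding logic are invented */
static int test_a_lock(struct file *filp)
{
	struct file_lock *fl;
	int err = 0;

	/* before: "struct file_lock fl;" burned ~192 bytes of stack */
	fl = locks_alloc_lock();	/* kmem_cache-backed allocation */
	if (!fl)
		return -ENOMEM;
	/* ... fill in fl->fl_type, fl->fl_start, fl->fl_end and test it ... */
	locks_free_lock(fl);
	return err;
}
```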
-