- 25 Oct 2010, 1 commit
-
-
Committed by J. Bruce Fields
We're doing an allocation under a spinlock, and ignoring the possibility of allocation failure. A better fix wouldn't require an unnecessary allocation in the common case, but we'll leave that for later. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
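The problem pattern described here, allocating while a spinlock is held (which forbids sleeping) and handling the possible failure, can be sketched as follows; the type, lock, and function names are invented for illustration and are not from the patch:

    #include <linux/errno.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    /* Hypothetical example type, not from the actual code. */
    struct cb_conn_demo {
        struct list_head list;
        int id;
    };

    static DEFINE_SPINLOCK(cb_lock);
    static LIST_HEAD(cb_conns);

    static int add_cb_conn(int id)
    {
        struct cb_conn_demo *c;

        spin_lock(&cb_lock);
        /* Sleeping is not allowed under a spinlock, so the allocation
         * must be GFP_ATOMIC, and it can fail; don't ignore the failure. */
        c = kmalloc(sizeof(*c), GFP_ATOMIC);
        if (!c) {
            spin_unlock(&cb_lock);
            return -ENOMEM;
        }
        c->id = id;
        list_add(&c->list, &cb_conns);
        spin_unlock(&cb_lock);
        return 0;
    }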
-
- 21 Oct 2010, 10 commits
-
-
Committed by J. Bruce Fields
The minorversion seems more a property of the client than of the callback channel. At some point we should probably also enforce consistent minorversion usage from the client; for now, this is just a cosmetic change. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Have unhash_client_locked() remove the client and its associated sessions from the global hashes, but delay further dismantling until free_client(). (After unhash_client_locked(), the only remaining references outside the destroying thread are from any connections which have xpt_user callbacks registered.) This will simplify locking on session destruction. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
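A rough sketch of the two-stage teardown described here, with invented structure and lock names (this is not the actual nfsd code):

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    /* Hypothetical client structure; the fields are illustrative only. */
    struct demo_client {
        struct list_head cl_hash;      /* linkage in a global client hash */
        struct list_head cl_sessions;  /* this client's sessions */
    };

    static DEFINE_SPINLOCK(client_lock);

    /* Stage 1: caller holds client_lock; only make the client and its
     * sessions unreachable through the global hashes. */
    static void unhash_client_locked(struct demo_client *clp)
    {
        list_del_init(&clp->cl_hash);
        /* each session would likewise be unlinked from its global hash here */
    }

    /* Stage 2: runs later, outside the spinlock, once the remaining
     * references (e.g. from connections with xpt_user callbacks) are gone. */
    static void free_client(struct demo_client *clp)
    {
        /* dismantle the sessions and other per-client state, then free */
        kfree(clp);
    }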
-
Committed by J. Bruce Fields
Only one of the nfsd4_callback_probe callers actually cares about changing the callback information. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
The callback program is allowed to depend on the session which the callback is going over. No change in behavior yet, while we still only do callbacks over a single session for the lifetime of the client. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
We need to keep track of which connections are available for use with the backchannel, which for the forechannel, and which for both. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Following RFC 5661, section 18.36.4: "If the session is not successfully created, then no changes are made to any client records on the server." We shouldn't be confirming or incrementing the sequence id in this case. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Currently we don't deal well with a client that has multiple sessions associated with it (even simultaneously, or serially over the lifetime of the client). In particular, we don't attempt to keep the backchannel running after the original session disappears. We will fix that soon. Once we do that, we need the slot sequence number to be per-session; otherwise, for example, we cannot correctly handle a case like this: - All session 1 connections are lost. - The client creates session 2. We use it for the backchannel (since it's the only working choice). - The client gives us a new connection to use with session 1. - The client destroys session 2. At this point our only choice is to go back to using session 1. When we do so we must use the sequence number that is next for session 1. We therefore need to maintain multiple sequence number streams. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
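The requirement that sequence numbers live with the session rather than the client could be laid out roughly like this; all names here are hypothetical, chosen only to mirror the description above:

    #include <linux/list.h>
    #include <linux/types.h>

    /* Hypothetical layout: the slot table (and so the sequence numbers)
     * lives in the session, so it survives while the backchannel
     * temporarily runs over a different session. */
    struct demo_slot {
        u32 sl_seqid;                 /* next sequence number for this slot */
        bool sl_inuse;
    };

    struct demo_session {
        struct list_head se_perclnt;  /* linkage on the client's session list */
        struct demo_slot *se_slots;   /* per-session, not per-client */
        int se_nr_slots;
    };

    struct demo_client {
        struct list_head cl_sessions;
        struct demo_session *cl_cb_session; /* session currently used for callbacks */
    };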
-
Committed by J. Bruce Fields
Instead of copying the sessionid, use the new cl_cb_session pointer, which indicates which session we're using for the backchannel. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
The backchannel should be associated with a session; it isn't really global to the client. We do, however, want a pointer global to the client which tracks which session we're currently using for client-based callbacks. This is a first step in that direction; for now, just a reshuffling of code with no significant change in behavior. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 05 Oct 2010, 1 commit
-
-
Committed by Arnd Bergmann
This prepares the removal of the big kernel lock from the file locking code. We still use the BKL as long as fs/lockd uses it and ceph might sleep, but we can flip the definition to a private spinlock as soon as that's done. All users outside of fs/lockd get converted to use lock_flocks() instead of lock_kernel() where appropriate. Based on an earlier patch to use a spinlock from Matthew Wilcox, who has attempted this a few times before; the earliest patch, from over 10 years ago, turned it into a semaphore, which ended up being slower than the BKL and was subsequently reverted. Someone should do some serious performance testing when this becomes a spinlock, since this has caused problems before. Using a spinlock should be at least as good as the BKL in theory, but who knows... Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Matthew Wilcox <willy@linux.intel.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Trond Myklebust <trond.myklebust@fys.uio.no> Cc: "J. Bruce Fields" <bfields@fieldses.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Miklos Szeredi <mszeredi@suse.cz> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: John Kacur <jkacur@redhat.com> Cc: Sage Weil <sage@newdream.net> Cc: linux-kernel@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org
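The conversion described (callers switching from lock_kernel() to lock_flocks(), with the definition flippable to a private spinlock later) might look roughly like the sketch below. The lock_flocks() name comes from the commit text; the unlock_flocks() pair, the body, and the file_lock_lock name in the comments are assumptions about how the flip could look, not the exact patch:

    #include <linux/smp_lock.h>    /* lock_kernel()/unlock_kernel(), while the BKL still exists */

    /* Sketch: hide the BKL behind dedicated helpers so that flipping to a
     * private spinlock later is a one-place change. */
    void lock_flocks(void)
    {
        lock_kernel();             /* later: spin_lock(&file_lock_lock); */
    }

    void unlock_flocks(void)
    {
        unlock_kernel();           /* later: spin_unlock(&file_lock_lock); */
    }

    /* Callers outside fs/lockd are converted to the new helpers: */
    static void demo_caller(void)
    {
        lock_flocks();
        /* ... walk or modify per-file lock state ... */
        unlock_flocks();
    }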
-
- 03 Oct 2010, 1 commit
-
-
Committed by J. Bruce Fields
Commit 78155ed7 "nfsd4: distinguish expired from stale stateids" attempted to distinguish expired and stale stateids using time information that may not have been completely reliable, so I reverted it. That was throwing out the baby with the bathwater; we still do want to return expired, but let's do that using the simpler approach of just assuming any stateid is expired if it looks like it was given out by the current server instance, but we can't find it any more. This may help clients that are recovering from network partitions. Reported-by: Bian Naimeng <biannm@cn.fujitsu.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
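The simpler test described, treating a stateid as expired when it carries the current instance's boot time but can no longer be found, and as stale otherwise, could be sketched like this; the stateid layout and names are invented for illustration:

    #include <linux/types.h>

    /* Hypothetical stateid layout; the real one carries more than this. */
    struct demo_stateid {
        u32 si_boot;    /* boot time of the server instance that issued it */
        u32 si_id;
    };

    static u32 boot_time;   /* recorded once, when this server instance starts */

    enum demo_stateid_status { DEMO_OK, DEMO_EXPIRED, DEMO_STALE };

    static enum demo_stateid_status
    demo_classify_stateid(const struct demo_stateid *sid, bool found)
    {
        if (found)
            return DEMO_OK;
        /* Issued by this instance but no longer on file: expired. */
        if (sid->si_boot == boot_time)
            return DEMO_EXPIRED;
        /* Otherwise it came from some earlier instance: stale. */
        return DEMO_STALE;
    }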
-
- 02 Oct 2010, 10 commits
-
-
Committed by J. Bruce Fields
As long as we're not implementing any session security, we should just automatically add any new connections that come along to the list of connections associated with the session. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Remove connections from the list when they go down. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
The spec requires us in various places to keep track of the connections associated with each session. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Changes: - make sure session memory reservation is released on the failure path. - use min_t()/min() for more compact code in several places. - break alloc_init_session into smaller pieces. - miscellaneous other cleanup. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
This returns an nfs error, not -ERRNO. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Note we're allocating an array of nfsd4_slot pointers (nfsd4_slot *), not of nfsd4_slot structures. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
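The distinction the commit points out, sizing the allocation for pointers to slots rather than for the slots themselves, is a common C pitfall; a minimal sketch with hypothetical names:

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    /* Hypothetical slot type; only the sizeof matters for the point being made. */
    struct demo_slot {
        u32 sl_seqid;
        char sl_data[512];
    };

    static int demo_alloc_slot_table(struct demo_slot ***slotsp, int numslots)
    {
        struct demo_slot **slots;
        int i;

        /* Size the array for pointers to slots, not for the slots themselves;
         * using sizeof(struct demo_slot) here would silently over-allocate. */
        slots = kcalloc(numslots, sizeof(struct demo_slot *), GFP_KERNEL);
        if (!slots)
            return -ENOMEM;

        for (i = 0; i < numslots; i++) {
            slots[i] = kzalloc(sizeof(struct demo_slot), GFP_KERNEL);
            if (!slots[i]) {
                while (--i >= 0)
                    kfree(slots[i]);
                kfree(slots);
                return -ENOMEM;
            }
        }
        *slotsp = slots;
        return 0;
    }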
-
Committed by J. Bruce Fields
Instead of creating the new rpc client from a regular server thread, set a flag, kick off a null call, and allow the null call to do the work of setting up the client on the callback workqueue. Use a spinlock to ensure the callback work gets a consistent view of the callback parameters. This allows, for example, changing the callback from contexts where sleeping is not allowed. I hope it will also keep the locking simple as we add more session and trunking features, by serializing most of the callback-specific work. This also closes a small race where the new cb_ident could be used with an old connection (or vice versa). Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
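The flag-plus-workqueue pattern described (a server thread only marks that the callback client needs rebuilding and kicks the work; the work function does the sleeping setup under a consistent snapshot of the parameters) can be sketched as follows; every name here is hypothetical:

    #include <linux/bitops.h>
    #include <linux/kernel.h>
    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    struct demo_cb_conn {               /* callback connection parameters */
        int cb_ident;
        /* address, program number, session, ... */
    };

    struct demo_client {
        unsigned long cl_flags;         /* bit 0: callback client needs (re)building */
        struct demo_cb_conn cl_cb_conn; /* written by server threads under cl_lock */
        spinlock_t cl_lock;
        struct work_struct cl_cb_work;  /* INIT_WORK(..., demo_cb_workfn) at client creation */
    };

    #define DEMO_CB_UPDATE 0

    /* Safe to call from contexts that must not sleep. */
    static void demo_cb_probe(struct demo_client *clp)
    {
        set_bit(DEMO_CB_UPDATE, &clp->cl_flags);
        schedule_work(&clp->cl_cb_work);
    }

    /* Runs on a workqueue, where sleeping (rpc client setup, null call) is fine. */
    static void demo_cb_workfn(struct work_struct *work)
    {
        struct demo_client *clp = container_of(work, struct demo_client, cl_cb_work);
        struct demo_cb_conn conn;

        if (!test_and_clear_bit(DEMO_CB_UPDATE, &clp->cl_flags))
            return;
        spin_lock(&clp->cl_lock);       /* consistent snapshot of the parameters */
        conn = clp->cl_cb_conn;
        spin_unlock(&clp->cl_lock);
        /* ...tear down any old rpc client, build a new one from conn,
         * and send the null probe... */
        (void)conn;
    }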
-
Committed by J. Bruce Fields
This will eventually allow us, for example, to kick off a null callback from contexts where we can't sleep. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Now that we have both nfsd4_callback and nfsd4_cb_conn structures, I get confused if variables of both types are always named cb.... Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
- 03 Sep 2010, 1 commit
-
-
Committed by J. Bruce Fields
This fixes an unnecessary BUG(). Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 27 Aug 2010, 2 commits
-
-
Committed by J. Bruce Fields
If we already had an RW open for a file and get a readonly open, we were piggybacking on the existing RW open. That's inconsistent with the downgrade logic, which blows away the RW open assuming you'll still have a readonly open. Also, make sure there is a readonly or writeonly open available for locking, again to prevent bad behavior in downgrade cases when any RW open may be lost. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
It's OK for this function to return without setting filp; we do it in the special-stateid case. And there's a legitimate case where we can hit this, since we do permit reads on write-only stateids. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 07 Aug 2010, 1 commit
-
-
Committed by J. Bruce Fields
Commit f9d7562f "nfsd4: share file descriptors between stateid's" didn't correctly account for O_RDWR opens. Symptoms include leaked files, resulting in failures to unmount and/or warnings about orphaned inodes on reboot. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 30 Jul 2010, 5 commits
-
-
Committed by Andi Kleen
Fixes at least one real minor bug: the nfs4 recovery dir sysctl would not return its status properly. Also, I finished Al's 1e41568d ("Take ima_path_check() in nfsd past dentry_open() in nfsd_open()") commit; it moved the IMA code but left the old path initializer in there. The rest is just dead code removed, I think, although I was not fully sure about the "is_borc" stuff. Some more review would still be good. Found by gcc 4.6's new warnings. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
The vfs doesn't really allow us to "upgrade" a file descriptor from read-only to read-write, and our attempt to do so in nfs4_upgrade_open is ugly and incomplete. Move to a different scheme where we keep multiple opens, shared between open stateids, in the nfs4_file struct. Each file will be opened at most 3 times (for read, write, and read-write), and those opens will be shared between all clients and openers. On upgrade we will do another open if necessary instead of attempting to upgrade an existing open. We keep count of the number of readers and writers so we know when to close the shared files. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
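The scheme described, at most three shared opens per file (read, write, read-write) plus counts of readers and writers to decide when each can be closed, might be laid out roughly like this; the names echo the description above but are illustrative, not the real nfs4_file:

    #include <linux/fs.h>
    #include <linux/types.h>

    /* Hypothetical per-file open sharing. */
    struct demo_nfs4_file {
        struct file *fi_fds[3];  /* shared opens: [0] read, [1] write, [2] read-write */
        int fi_readers;          /* outstanding read access grants */
        int fi_writers;          /* outstanding write access grants */
    };

    /* On "upgrade", open again for the missing mode rather than trying to
     * change an existing descriptor's mode. */
    static bool demo_need_new_open(const struct demo_nfs4_file *fp, int idx)
    {
        return fp->fi_fds[idx] == NULL;
    }

    /* Close a shared open only when its last user is gone. */
    static bool demo_can_close_read(const struct demo_nfs4_file *fp)
    {
        return fp->fi_readers == 0;
    }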
-
Committed by J. Bruce Fields
It is legal to perform a write using the lock stateid that was originally associated with a read lock, or with a file that was originally opened for read but has since been upgraded. So, when checking the openmode, check the mode associated with the open stateid from which the lock was derived. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Move more work into helper functions. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
The delegation code mostly pretends to support either read or write delegations. However, correct support for write delegations would require, for example, breaking of delegations (and/or implementation of cb_getattr) on stat. Currently all that stops us from handing out delegations is a subtle reference-counting issue. Avoid confusion by adding an earlier check that explicitly refuses write delegations. For now, though, I'm not going so far as to rip out existing half-support for write delegations, in case we get around to using that soon. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
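The explicit early refusal described could be as simple as the following hedged sketch; the constant value and function name are invented, not the real NFSv4 definitions:

    #include <linux/types.h>

    /* Illustrative value only; the real OPEN share_access flags come from
     * the NFSv4 protocol definitions. */
    #define DEMO_SHARE_ACCESS_WRITE 0x2

    /* Decline early, before any delegation state is set up. */
    static bool demo_may_grant_delegation(unsigned int share_access)
    {
        /* Write delegations would need delegation breaks (or cb_getattr)
         * on stat, which we don't do; hand out read delegations only. */
        if (share_access & DEMO_SHARE_ACCESS_WRITE)
            return false;
        return true;
    }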
-
- 23 Jul 2010, 1 commit
-
-
Committed by Jeff Layton
If someone tries to shut down the laundry_wq while it isn't up, it'll cause an oops. This can happen because write_ports can create an nfsd_svc before we really start the nfs server, and we may fail before the server is ever started. Also make sure state is shut down on error paths in nfsd_svc(). Use a common global nfsd_up flag instead of nfs4_init, and create common helper functions for nfsd start/shutdown, as there will be other work that we want done only when the number of nfsd threads transitions between zero and nonzero. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
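The start/shutdown structure described (a single nfsd_up flag, with common helpers that do setup and teardown only on the zero/nonzero transitions and unwind on error) could look roughly like this sketch; the helper names are placeholders for the real state and laundromat setup:

    #include <linux/errno.h>
    #include <linux/types.h>

    /* Hypothetical helpers standing in for the real state/laundromat setup. */
    static int demo_state_start(void) { return 0; }
    static void demo_state_shutdown(void) { }

    static bool nfsd_up;   /* true only between a successful start and a shutdown */

    static int demo_nfsd_startup(void)
    {
        int ret;

        if (nfsd_up)
            return 0;                  /* threads already running */
        ret = demo_state_start();
        if (ret)
            return ret;                /* nothing to unwind yet */
        nfsd_up = true;
        return 0;
    }

    static void demo_nfsd_shutdown(void)
    {
        if (!nfsd_up)
            return;                    /* never tear down what never started */
        demo_state_shutdown();
        nfsd_up = false;
    }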
-
- 23 Jun 2010, 3 commits
-
-
Committed by J. Bruce Fields
This is overkill. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
If the server is out of memory, it is better for clients to back off and retry than to just error out. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by J. Bruce Fields
Note the session has to be put() here regardless of what happens to the client. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
- 09 Jun 2010, 1 commit
-
-
Committed by J. Bruce Fields
This reportedly causes a lockdep warning on nfsd shutdown. That looks like a false positive to me, but there's no reason why this needs the state lock anyway. Reported-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
- 01 Jun 2010, 1 commit
-
-
Committed by J. Bruce Fields
NFSv4.1 adds additional flags to the share_access argument of the open call. These flags need to be masked out in some of the existing code, but current code does that inconsistently. Tested-by: Michael Groshans <groshans@citi.umich.edu> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
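The masking this commit makes consistent can be done with a single helper, sketched here with invented constant values (the real OPEN4_SHARE_ACCESS and NFSv4.1 "want" flag encodings live in the protocol definitions):

    /* Illustrative values only. */
    #define DEMO_SHARE_ACCESS_MASK   0x0003   /* plain read/write bits */
    #define DEMO_SHARE_WANT_MASK     0xff00   /* NFSv4.1-only flags */

    static unsigned int demo_share_access(unsigned int raw)
    {
        /* Strip the NFSv4.1-only flags so the v4.0 open and downgrade
         * logic always compares plain read/write bits. */
        return raw & DEMO_SHARE_ACCESS_MASK;
    }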
-
- 19 May 2010, 2 commits
-
-
Committed by J. Bruce Fields
This reverts commit 78155ed7. We're depending here on the boot time that we use to generate the stateid being monotonic, but get_seconds() is not necessarily monotonic. We still depend at least on boot_time being different every time, but that is a safer bet. We have a few reports of errors that might be explained by this problem, though we haven't been able to confirm any of them. But the minor gain of distinguishing expired from stale errors seems not worth the risk. Conflicts: fs/nfsd/nfs4state.c Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
Committed by Pavel Emelyanov
The alloc_init_file() first adds a file to the hash and then initializes its fi_inode, fi_id and fi_had_conflict. The uninitialized fi_inode could thus be erroneously checked by find_file(), so move the hash insertion lower. The client_mutex should prevent this race in practice; however, we eventually hope to make less use of the client_mutex, so the ordering here is an accident waiting to happen. I didn't check whether the same can be true for the two other fields, but common sense tells me it's better to initialize an object before putting it into a global hash table :) Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
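The reordering described is the usual "fully initialize, then publish" rule; a hedged sketch with invented names (the hash buckets are assumed to have been set up with INIT_LIST_HEAD() at init time):

    #include <linux/fs.h>
    #include <linux/list.h>
    #include <linux/slab.h>

    /* Hypothetical file object, loosely modelled on the description above. */
    struct demo_file {
        struct list_head fi_hash;
        struct inode *fi_inode;
        unsigned int fi_id;
        bool fi_had_conflict;
    };

    static struct list_head file_hashtbl[256];

    static struct demo_file *alloc_init_file(struct inode *ino, unsigned int id)
    {
        struct demo_file *fp = kzalloc(sizeof(*fp), GFP_KERNEL);

        if (!fp)
            return NULL;
        /* Fill in every field first... */
        fp->fi_inode = ino;
        fp->fi_id = id;
        fp->fi_had_conflict = false;
        /* ...and only then publish the object where lookups can see it. */
        list_add(&fp->fi_hash, &file_hashtbl[id & 255]);
        return fp;
    }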
-