- 02 Aug 2010, 16 commits
-
Committed by Greg Farnum
Signed-off-by: Greg Farnum <gregf@hq.newdream.net> Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
When we get a cap EXPORT message, make sure we are connected to all export targets to ensure we can handle the matching IMPORT. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
If an MDS we are talking to may have failed, we need to open sessions to its potential export targets to ensure that any in-progress migration that may have involved some of our caps is properly handled. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
There are a few cases where we need to open sessions with a given mds's potential export targets. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
Setting it elsewhere is unnecessary and more fragile. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Yehuda Sadeh
Caps-related accounting is now done per MDS client instead of globally. This lays the groundwork for a later revision of the preallocated caps reservation list. Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net> Signed-off-by: Sage Weil <sage@newdream.net>
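A hedged sketch of the shape of this change; the field names follow the pattern described above but are illustrative, not a verbatim copy of the header:

    struct ceph_mds_client {
            /* ... existing per-client state ... */

            spinlock_t       caps_list_lock;
            struct list_head caps_list;          /* preallocated, unused cap structs */
            int              caps_total_count;   /* all caps this client allocated */
            int              caps_use_count;     /* currently in use */
            int              caps_reserve_count; /* unused but reserved */
            int              caps_avail_count;   /* unused and unreserved */
    };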
-
Committed by Sage Weil
Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Yehuda Sadeh
Mainly fixing minor issues reported by sparse. Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net> Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
If we have a capsnap but no auth cap (e.g. because it is migrating to another mds), bail out and do nothing for now. Do NOT remove the capsnap from the flush list. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
The caps revocation should either initiate writeback or invalidation, or call check_caps to ack or do the dirty work. The primary question is whether we can get away with checking only the auth cap or whether all caps need to be checked. The old code was doing...something else. At the very least, revocations from non-auth MDSs could break by triggering the "check auth cap only" case. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
No functional change, aside from more useful debug output. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
If the file mode is marked as "lazy," perform cached/buffered reads when the caps permit it. Adjust the rdcache_gen and invalidation logic accordingly so that we manage our cache based on the FILE_CACHE -or- FILE_LAZYIO cap bits. Signed-off-by: Sage Weil <sage@newdream.net>
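A minimal sketch of the gating rule, assuming the client's ceph_caps_issued() helper and the CEPH_CAP_FILE_CACHE / CEPH_CAP_FILE_LAZYIO cap bits; the function name here is illustrative:

    /* assumes the fs/ceph internal headers for ceph_inode_info etc. */
    static bool ceph_can_buffer_io(struct ceph_inode_info *ci)
    {
            int issued = ceph_caps_issued(ci);

            /* either bit is enough: CACHE for normal consistency,
             * LAZYIO when the file was marked lazy */
            return (issued & (CEPH_CAP_FILE_CACHE | CEPH_CAP_FILE_LAZYIO)) != 0;
    }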
-
Committed by Sage Weil
If we have marked a file as "lazy" (using the ceph ioctl), perform buffered writes when the MDS caps allow it. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
Allow an application to mark a file descriptor for lazy file consistency semantics, allowing buffered reads and writes when multiple clients are accessing the same file. Signed-off-by: Sage Weil <sage@newdream.net>
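From userspace this is a single ioctl on an open fd. A minimal sketch; the magic/number pair shown matches the client's ioctl.h in this series, but verify against the header you actually build with, and the path is purely illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    #define CEPH_IOCTL_MAGIC 0x97
    #define CEPH_IOC_LAZYIO  _IO(CEPH_IOCTL_MAGIC, 4)

    int main(void)
    {
            int fd = open("/mnt/ceph/shared.dat", O_RDWR);  /* illustrative path */

            if (fd < 0 || ioctl(fd, CEPH_IOC_LAZYIO) < 0) {
                    perror("CEPH_IOC_LAZYIO");
                    return 1;
            }
            /* reads and writes on fd may now be buffered even while
             * other clients have the same file open */
            return 0;
    }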
-
Committed by Sage Weil
Also clean up the file flags -> file mode -> wanted caps functions while we're at it. This resyncs this file with userspace. Signed-off-by: Sage Weil <sage@newdream.net>
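The pipeline those helpers implement, sketched with the names used in fs/ceph (the usage itself is illustrative):

    /* open(2) flags -> ceph file mode -> caps worth asking the MDS for */
    int mode = ceph_flags_to_mode(O_RDWR);
    int want = ceph_caps_for_mode(mode);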
-
- 28 Jul 2010, 1 commit
-
Committed by Yehuda Sadeh
This fixes an issue triggered by running concurrent syncs. One of the syncs would go through while the other would just hang indefinitely. In any case, we never actually want to wake a single waiter, so the *_all functions should be used. Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net> Signed-off-by: Sage Weil <sage@newdream.net>
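The difference in one sketch (names illustrative): complete() and wake_up() release a single exclusive waiter, while the *_all variants release everyone:

    #include <linux/completion.h>
    #include <linux/wait.h>

    static DECLARE_COMPLETION(sync_done);
    static DECLARE_WAIT_QUEUE_HEAD(sync_wq);

    static void sync_finished(void)
    {
            /* with plain complete()/wake_up(), one of two concurrent
             * syncers would wake and the other could hang forever */
            complete_all(&sync_done);
            wake_up_all(&sync_wq);
    }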
-
- 25 Jul 2010, 1 commit
-
Committed by Robert P. J. Day
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 24 Jul 2010, 4 commits
-
Committed by Sage Weil
When we embed a dentry lease release notification in a request, invalidate our lease so we don't think we still have it. Otherwise we can get all sorts of incorrect client behavior when multiple clients are interacting with the same part of the namespace. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
If we fail to allocate a ceph_dentry_info, don't leak the dn reference. Signed-off-by: Sage Weil <sage@newdream.net>
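A generic sketch of the error-path rule being enforced here (function name hypothetical): the dentry reference taken earlier must be dropped before bailing out:

    static struct dentry *attach_dentry_info(struct dentry *dn)
    {
            struct ceph_dentry_info *di = kmalloc(sizeof(*di), GFP_NOFS);

            if (!di) {
                    dput(dn);               /* the fix: drop the ref, don't leak it */
                    return ERR_PTR(-ENOMEM);
            }
            dn->d_fsdata = di;
            return dn;
    }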
-
Committed by Sage Weil
Free the ceph_pg_mapping structs when they are removed from the pg_temp rbtree. Also fix a leak in the __insert_pg_mapping() error path. Signed-off-by: Sage Weil <sage@newdream.net>
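The cleanup pattern in a nutshell, as a sketch (helper name illustrative): rb_erase() only unlinks the node, so the containing struct still has to be freed explicitly:

    static void remove_pg_mapping(struct rb_root *root, struct ceph_pg_mapping *pg)
    {
            rb_erase(&pg->node, root);
            kfree(pg);              /* previously this was leaked */
    }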
-
Committed by Sage Weil
We need to set the d_release dop for snapdir and snapped dentries so that the ceph_dentry_info struct gets released. We also use the dcache to cache readdir results when possible, which only works if we know when dentries are dropped from the cache. Since we don't use the dcache for readdir in the hidden snapdir, avoid that case in ceph_dentry_release. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 23 Jul 2010, 1 commit
-
Committed by Sage Weil
We should always go to the MDS for readdir on the hidden snapdir. The set of snapshots can change at any time; the client can't trust its cache for that. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 17 Jul 2010, 2 commits
-
Committed by Sage Weil
Strip the cap and dentry releases from replayed messages. They can cause the shared state to get out of sync because they were generated (with the request message) earlier, and no longer reflect the current client state. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
Replayed rename operations (after an mds failure/recovery) were broken because the request paths were regenerated from the dentry names, which get mangled when d_move() is called. Instead, resend the previous request message when replaying completed operations. Just make sure the REPLAY flag is set and the target ino is filled in. This fixes problems with workloads doing renames when the MDS restarts, where the rename operation appears to succeed but then fails after the MDS restart (leading to client confusion, app breakage, etc.). Signed-off-by: Sage Weil <sage@newdream.net>
-
- 10 Jul 2010, 2 commits
-
Committed by Sage Weil
Use the address family from the peer address instead of assuming IPv4. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
Check for brackets around the IPv6 address to avoid ambiguity with the port number. Signed-off-by: Sage Weil <sage@newdream.net>
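A sketch of the disambiguation rule (hypothetical helper; the real parsing lives in the messenger's address parser). Without brackets, a string like "fe80::1:6789" could end in either a port or another address group:

    #include <string.h>

    /* returns the ':' that introduces the port, or NULL if no port present */
    static const char *find_port_sep(const char *s)
    {
            if (*s == '[') {                        /* "[fe80::1]:6789" */
                    const char *end = strchr(s, ']');
                    return (end && end[1] == ':') ? end + 1 : NULL;
            }
            return strchr(s, ':');                  /* "1.2.3.4:6789" */
    }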
-
- 09 Jul 2010, 1 commit
-
Committed by Sage Weil
The buffer was too small. Make it bigger, use snprintf(), put brackets around the IPv6 address to avoid mixing it up with the :port, and use the ever-so-handy %pI[46] formats. Signed-off-by: Sage Weil <sage@newdream.net>
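Roughly what that combination looks like, as a hedged sketch; the helper name and buffer size are illustrative, not the exact messenger code:

    #include <linux/in.h>
    #include <linux/in6.h>
    #include <linux/kernel.h>
    #include <linux/socket.h>

    #define ADDR_STR_LEN 60         /* roomy enough for "[ipv6]:port" */

    static const char *pr_addr_sketch(char *buf, const struct sockaddr_storage *ss)
    {
            const struct sockaddr_in *in4 = (const void *)ss;
            const struct sockaddr_in6 *in6 = (const void *)ss;

            switch (ss->ss_family) {
            case AF_INET:
                    snprintf(buf, ADDR_STR_LEN, "%pI4:%hu",
                             &in4->sin_addr, ntohs(in4->sin_port));
                    break;
            case AF_INET6:
                    snprintf(buf, ADDR_STR_LEN, "[%pI6]:%hu",
                             &in6->sin6_addr, ntohs(in6->sin6_port));
                    break;
            }
            return buf;
    }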
-
- 08 Jul 2010, 1 commit
-
Committed by Dan Carpenter
We leak a "pi" on this error path. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 06 Jul 2010, 3 commits
-
Committed by Sage Weil
Fix leak of a struct ceph_buffer on umount. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
A message can be on a queue (pending or sent), or out_msg (sending), or both. We were assuming that if it's not on a queue it couldn't be out_msg, but that was false for lossy connections such as those to the OSDs. Fix ceph_con_revoke() to treat these cases independently. Also, fix the out_kvec_is_message check to only trigger if we are currently sending _this_ message. This fixes a GPF in tcp_sendpage, triggered by OSD restarts. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
Fix a typo that made any OSD weighted between 0.1 and 1.0 effectively weighted as 1.0 (fully in). Signed-off-by: Sage Weil <sage@newdream.net>
-
- 30 Jun 2010, 2 commits
-
Committed by Sage Weil
We need to increase the total and used counters when allocating a new cap in the non-reserved (cap import) case. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
We can drop caps with an mds request. Ensure we only drop unused AND clean caps, since the MDS doesn't support cap writeback in that context, nor do we track it. If caps are dirty and the MDS needs them back, it will revoke them and we will flush in the normal fashion. This fixes a possible loss of metadata. Signed-off-by: Sage Weil <sage@newdream.net>
-
- 25 Jun 2010, 3 commits
-
Committed by Sage Weil
We may not recurse for CHOOSE_LEAF if we start with a leaf node. When that happens, the out2 vector needs to be filled in with the result. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
There was a longstanding problem with recursion through intervening bucket types on complex hierarchies. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Yehuda Sadeh
The ceph client structure was not set correctly. Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 22 Jun 2010, 2 commits
-
Committed by Sage Weil
This fixes a race between handle_reply finishing an mds request, signalling completion, and then dropping the request struct (and its dentry+inode refs), and the pre_umount function waiting for requests to finish before letting the VFS tear down the dcache. If umount was delayed waiting for mds requests, we could race and BUG in shrink_dcache_for_umount_subtree because of a slow dput. This delays umount until the msgr queue flushes, which means handle_reply will exit and will have dropped the ceph_mds_request struct. I'm assuming the VFS has already ensured that its calls have all completed and those request refs have thus been dropped as well (I haven't seen that race, at least). Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
Handle a splice_dentry failure (due to a d_materialize_unique error) without crashing. (Also, report the error code.) Signed-off-by: Sage Weil <sage@newdream.net>
-
- 18 Jun 2010, 1 commit
-
Committed by Sage Weil
If the incremental osdmap has a new crush map, advance the position after decoding so that we can parse the rest of the osdmap properly. Signed-off-by: Sage Weil <sage@newdream.net>
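The decode-cursor rule behind the fix, as a generic sketch (the helper names are hypothetical, not the literal osdmap code): after consuming a length-prefixed blob, the cursor must be advanced past it or everything that follows is decoded from the wrong offset:

    #include <linux/types.h>

    /* hypothetical helpers, for illustration only */
    extern u32 decode_32(void **p);
    extern void parse_new_crush_map(void *blob, u32 len);

    static void *decode_crush_section(void *p)
    {
            u32 len = decode_32(&p);        /* length prefix of the crush blob */

            parse_new_crush_map(p, len);
            return p + len;                 /* the fix: skip the blob we just
                                             * parsed so the rest of the osdmap
                                             * decodes from the right place */
    }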
-