- 13 Apr 2010, 1 commit
Submitted by Sage Weil
When filldir returned an error (e.g. buffer full for a large directory), we would leak a dentry reference, causing an oops on umount.
Signed-off-by: Sage Weil <sage@newdream.net>
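A minimal sketch of the fix pattern, assuming a readdir loop that takes its own dentry reference; the wrapper function and inode number are illustrative, not the actual fs/ceph/dir.c code:

```c
/* Emit one dentry via filldir, balancing the reference on every path.
 * A negative return (e.g. buffer full) must still reach the dput(). */
static int example_emit_dentry(void *dirent, filldir_t filldir,
			       struct dentry *de, loff_t pos)
{
	int err;

	dget(de);			/* hold the dentry across the call */
	err = filldir(dirent, (const char *)de->d_name.name,
		      de->d_name.len, pos,
		      1 /* illustrative inode number */, DT_REG);
	dput(de);			/* drop on success AND on error */
	return err;			/* propagate buffer-full upward */
}
```

The bug was the shape where the error path returned before the dput(), leaving the reference held until umount tripped over it.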
-
- 10 Apr 2010, 1 commit
Submitted by Sage Weil
Teach the client to decode an updated format for the osdmap. The new format includes pool names, which will be useful shortly. Get this change in earlier rather than later.
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 03 Apr 2010, 1 commit
Submitted by Sage Weil
If in_seq_acked isn't reset along with in_seq, we don't ack received messages until we reach the old count, consuming gobs of memory on the other end of the connection and introducing a large delay when those messages are eventually deleted.
Signed-off-by: Sage Weil <sage@newdream.net>
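A minimal sketch of the invariant, assuming the two fields named in the commit; the struct and function are illustrative, not the actual messenger code:

```c
/* Both sequence counters must be cleared together on connection
 * reset; a stale in_seq_acked withholds acks until the peer's
 * resent messages catch up to the old count. */
struct example_connection {
	u64 in_seq;		/* highest message seq received */
	u64 in_seq_acked;	/* highest seq acked back to the peer */
};

static void example_reset_connection(struct example_connection *con)
{
	con->in_seq = 0;
	con->in_seq_acked = 0;	/* the reset this commit adds */
}
```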
-
- 02 Apr 2010, 3 commits
Submitted by Sage Weil
We create a ceph_cap_snap if there is dirty cap metadata (for writeback to the MDS) OR dirty pages (for writeback to the OSD). It is thus possible that the metadata has been written back to the MDS but the OSD data has not when the cap_snap is created, resulting in a cap_snap with dirty(caps) == 0. In that case no cap writeback to the MDS is necessary, so the FLUSHSNAP cap op gets no ack from the MDS, and the cap_snap stays attached to the inode along with its inode reference. Fix the problem by dropping the cap_snap if it becomes 'complete' (all pages written out) and dirty(caps) == 0 in ceph_put_wrbuffer_cap_refs(). Also, BUG() in __ceph_flush_snaps() if we encounter a cap_snap with dirty(caps) == 0.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
The get_oldest_context() helper takes a reference to the returned snap context, but most callers weren't dropping that reference. Fix them. Also drop the unused locked __get_oldest_context() variant.
Signed-off-by: Sage Weil <sage@newdream.net>
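A minimal sketch of the calling convention being fixed; ceph_put_snap_context() is the real refcount helper, while the caller body is illustrative:

```c
/* get_oldest_context() returns a *referenced* snap context, so every
 * caller owes a matching put once it is done with the pointer. */
static void example_use_oldest_context(struct inode *inode)
{
	struct ceph_snap_context *snapc = get_oldest_context(inode, NULL);

	/* ... decide whether a page with this snapc is writeable ... */

	ceph_put_snap_context(snapc);	/* the put most callers lacked */
}
```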
-
Submitted by Sage Weil
On snap deletion, we don't regenerate ceph_cap_snaps for inodes with dirty pages because deletion does not affect metadata writeback. However, we did run into problems when we went to write back the pages, because the 'oldest' snapc is determined by the oldest cap_snap, and that may be the newer snapc that reflects the deletion. This caused confusion and an infinite loop in ceph_update_writeable_page(). Change the snapc checks to allow writeback of any snapc that is equal to OR older than the 'oldest' snapc. When there are no cap_snaps, we were also using the realm's latest snapc for writeback, which complicates ceph_put_wrbuffer_cap_refs(). Instead, use i_head_snapc, the snapc used for the most recent ('head') data. This makes the writeback snapc (ceph_osd_request.r_snapc) _always_ match a cap_snap or i_head_snapc. Also, in writepages_finish(), drop the snapc referenced by the _page_ and do not assume it matches the request snapc (it may not anymore).
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 31 Mar 2010, 1 commit
Submitted by Sage Weil
If a lookup fails on the magic .snap directory, we bind it to a magic snap directory inode in ceph_lookup_finish(). That code assumes the dentry is unhashed, but a recent server-side change started returning NULL leases on lookup failure, causing the .snap dentry to be hashed and NULL'd by ceph_fill_trace(). This causes dentry hash chain corruption, or a crash in d_rehash(), which includes BUG_ON(!d_unhashed(entry)). So, avoid processing the NULL dentry lease if the dentry matches the snapdir name in ceph_fill_trace(). That allows the lookup completion to properly bind it to the snapdir inode. BUG there if the dentry is hashed, to be sure.
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 29 Mar 2010, 1 commit
Submitted by Sage Weil
There was a use-after-free in __unregister_request that would trigger whenever the request map held the last reference. This appears to have triggered an oops during 'umount -f' when requests are being torn down.
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 23 Mar 2010, 18 commits
Submitted by Sage Weil
Clear the pointer to the mds request after dropping the reference, to ensure we don't drop it again; there is at least one error path through this function that does not reset fi->last_readdir to a new value.
Signed-off-by: Sage Weil <sage@newdream.net>
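A minimal sketch of the put-then-NULL idiom the fix applies; the wrapper is illustrative, but fi->last_readdir and ceph_mdsc_put_request() are the names from the commit:

```c
/* NULL the cached pointer immediately after dropping the reference,
 * so an error path that never reassigns it cannot drop it twice. */
static void example_reset_last_readdir(struct ceph_file_info *fi)
{
	if (fi->last_readdir) {
		ceph_mdsc_put_request(fi->last_readdir);
		fi->last_readdir = NULL;
	}
}
```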
-
Submitted by Sage Weil
Fix a broken check that a reply came back from the same MDS we sent the request to. I don't think a case that actually triggers this would ever come up in practice, but it's clearly wrong and easy to fix.
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Dan Carpenter
Return ERR_PTR(-ENOMEM) if kmalloc() fails. We handle allocation failures the same way later in the function.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Sage Weil <sage@newdream.net>
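A minimal sketch of the convention the fix adopts, with an illustrative struct; in a pointer-returning function, allocation failure becomes ERR_PTR(-ENOMEM) so callers can test uniformly with IS_ERR():

```c
struct example_blob {
	size_t len;
	u8 data[];
};

static struct example_blob *example_alloc_blob(size_t len)
{
	struct example_blob *b = kmalloc(sizeof(*b) + len, GFP_NOFS);

	if (!b)
		return ERR_PTR(-ENOMEM);	/* not NULL: match the
						 * later failure paths */
	b->len = len;
	return b;
}
```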
-
Submitted by Sage Weil
Return the error to the original caller if register_session() fails.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Currently, if wait_event_interruptible is interrupted, we return EAGAIN unconditionally and loop, such that we aren't, in fact, interruptible. So, propagate ERESTARTSYS if we get it.
Signed-off-by: Sage Weil <sage@newdream.net>
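A minimal sketch of the propagation, with illustrative names; wait_event_interruptible() returns -ERESTARTSYS when a signal arrives, and swallowing that into EAGAIN plus a retry loop is what made the wait uninterruptible:

```c
static int example_wait_for_reply(wait_queue_head_t *wq, int *done)
{
	int err = wait_event_interruptible(*wq, *done);

	if (err)		/* -ERESTARTSYS: hand it back so the */
		return err;	/* syscall layer can restart or fail */
	return 0;		/* was: return -EAGAIN and loop */
}
```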
-
Submitted by Sage Weil
We were rebuilding the snap context when it was not necessary (i.e. when the realm seq hadn't changed _and_ the parent seq was still older), which caused page snapc pointers to not match the realm's snapc pointer (even though the snap context itself was identical). This confused begin_write and put it into an endless loop. The correct logic is: rebuild the snapc if _my_ realm seq changed, or if my parent realm's seq is newer than mine (and thus mine needs to be rebuilt too).
Signed-off-by: Sage Weil <sage@newdream.net>
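A minimal sketch of the corrected predicate, with illustrative parameter names taken from the commit text:

```c
/* Rebuild the snap context only when our own realm seq moved, or a
 * parent realm's seq is newer than ours (so ours is stale too). */
static bool example_must_rebuild_snapc(u64 realm_seq, u64 cached_seq,
				       u64 parent_seq)
{
	return realm_seq > cached_seq || parent_seq > realm_seq;
}
```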
-
Submitted by Sage Weil
We get a fault callback on _every_ TCP connection fault. Normally, we want to reopen the connection when that happens. If the address we have is bad, however, and connection attempts always result in a connection refused or similar error, explicitly closing and reopening the msgr connection just prevents the messenger's backoff logic from kicking in. The result can be a console full of

[ 3974.417106] ceph: osd11 10.3.14.138:6800 connection failed
[ 3974.423295] ceph: osd11 10.3.14.138:6800 connection failed
[ 3974.429709] ceph: osd11 10.3.14.138:6800 connection failed

Instead, if we get a fault, and have outstanding requests, but the osd address hasn't changed and the connection never successfully connected in the first place, do nothing to the osd connection. The messenger layer will back off and retry periodically, because we never connected and thus the lossy bit is not set. Instead, touch each request's r_stamp so that handle_timeout can tell the request is still alive and kicking.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Make the variable name slightly more generic, since it will (soon) reflect either the time the request was sent OR the time it was last determined to be still retrying.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
The messenger fault path was clearing the BUSY bit, for reasons unclear. This made it possible for the con->ops->fault function to reopen the connection and requeue work in the workqueue, even though the current thread was already in con_work. This avoids a problem where the client busy-loops with connection failures on an unreachable OSD, but doesn't address the root cause of that problem.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Prevent duplicate 'mds0 caps stale' messages from spamming the console every few seconds while the MDS restarts. Set s_renew_requested earlier, so that we only print the message once, even if we don't send an actual request.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
The incremental map decoding of pg pool updates wasn't skipping the snaps and removed_snaps vectors. This caused osd requests to stall when pool snapshots were created or fs snapshots were deleted. Use a common helper for the full and incremental map decoders that decodes pools properly.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
The wait_unsafe_requests() helper dropped the mdsc mutex to wait for each request to complete, and then examined r_node to get the next request after retaking the lock. But request completion removes the request from the tree, so r_node was always undefined at this point. Since it's a small race, it usually led to a valid request, but not always. The result was an occasional crash in rb_next() while dereferencing node->rb_left. Fix this by clearing rb_node when removing the request from the request tree, and not walking off into the weeds when we are done waiting for a request. Since the request we waited on will _always_ be out of the request tree, take a ref on the next request, in the hope that it won't be. But if it is, it's ok: we can start over from the beginning (and traverse over older read requests again).
Signed-off-by: Sage Weil <sage@newdream.net>
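A minimal sketch of the rbtree half of the fix; rb_erase(), RB_CLEAR_NODE() and RB_EMPTY_NODE() are the real primitives, while the request struct is illustrative:

```c
struct example_request {
	struct rb_node r_node;
};

/* Mark the node empty as it leaves the tree, so a waiter that cached
 * the request can detect that r_node no longer points anywhere. */
static void example_unregister_request(struct rb_root *tree,
				       struct example_request *req)
{
	rb_erase(&req->r_node, tree);
	RB_CLEAR_NODE(&req->r_node);	/* RB_EMPTY_NODE() is now true */
}
```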
-
Submitted by Sage Weil
We were releasing used caps (e.g. FILE_CACHE) from encode_inode_release with MDS requests (e.g. setattr). We don't carry refs on most caps, so this code worked most of the time, but for setattr (utimes) we try to drop Fscr. This causes cap state to get slightly out of sync with reality, and may result in subsequent mds revoke messages getting ignored. Fix by only releasing unused caps.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Drop the session mutex unconditionally in handle_cap_grant, and do the check_caps from the handle_cap_grant helper. This avoids using a magic return value. Also avoid using a flag variable in the IMPORT case and call check_caps at the appropriate point.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Passing a session pointer to ceph_check_caps() used to mean it would leave the session mutex locked. That wasn't always possible if it wasn't passed CHECK_CAPS_AUTHONLY: it could unlock the passed session and lock a different session's mutex, which was clearly wrong, and also emitted a warning when a racing CPU retook it and we then unlocked from the wrong context. This was only a problem when there was more than one MDS. First, make ceph_check_caps unconditionally drop the session mutex, so that it is free to lock other sessions as needed. Then adjust the one caller that passes in a session (handle_cap_grant) accordingly.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
If we don't have the exported cap, it's because we already released it. No need to WARN.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
This causes an oops when debug output is enabled and we kick an osd request with no current r_osd (sometime after an osd failure). Check the pointer before dereferencing.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Previously we would decode state directly into our current ticket_handler. This is problematic if for some reason we fail to decode, because we end up with half new state and half old state. We are probably already in bad shape if we get an update we can't decode, but we may as well be tidy anyway. Decode into new_* temporaries and update the ticket_handler only on success.
Signed-off-by: Sage Weil <sage@newdream.net>
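A minimal sketch of the commit-on-success pattern, with an illustrative handler struct and a trivially simplified decode step:

```c
struct example_ticket_handler {
	u32 secret_id;
	u32 validity;
};

static int example_decode_ticket(struct example_ticket_handler *th,
				 const u32 *buf, size_t nwords)
{
	u32 new_secret_id, new_validity;

	if (nwords < 2)
		return -EINVAL;		/* *th untouched on failure */
	new_secret_id = buf[0];
	new_validity  = buf[1];

	th->secret_id = new_secret_id;	/* commit only after the whole */
	th->validity  = new_validity;	/* decode has succeeded */
	return 0;
}
```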
-
- 21 Mar 2010, 6 commits
Submitted by Sage Weil
Release the old ticket_blob buffer when we get an updated service ticket from the monitor. Previously these were getting leaked.
Signed-off-by: Sage Weil <sage@newdream.net>
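A minimal sketch of the fix shape; ceph_buffer_put() is the real refcounting helper for these blobs, and the wrapper is illustrative:

```c
/* Drop the previous ticket blob before installing its replacement;
 * the old code overwrote the pointer and leaked the buffer. */
static void example_replace_ticket_blob(struct ceph_buffer **slot,
					struct ceph_buffer *new_blob)
{
	if (*slot)
		ceph_buffer_put(*slot);
	*slot = new_blob;
}
```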
-
Submitted by Sage Weil
The buffer size was incorrectly calculated for the ceph_x_encrypt() encapsulated ticket blob. Use a helper (with correct arithmetic) and BUG out if we were wrong.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
We were failing to reconnect to services due to an old authenticator, even though we had the new ticket, because we weren't properly retrying the connect handshake: we were calling an old/incorrect helper that left in_base_pos incorrect. The result was a failure to reconnect to the OSD or MDS (with an authentication error) if the MDS restarted after the service had been up a few hours (long enough for the original authenticator to be invalid). This was only a problem if AUTH_X authentication was enabled. Now that the 'negotiate' and 'connect' stages are fully separated, we should use the prepare_read_connect() helper instead, and remove the obsolete one.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
When an inode was dropped while being migrated between two MDSs, i_cap_exporting_issued was non-zero, such that issued caps were non-zero and __ceph_is_any_caps(ci) was true. This prevented the inode from being removed from the snap realm, even as it was dropped from the cache. Fix this by dropping any residual i_snap_realm ref in destroy_inode.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
All ci->i_snap_realm_item/realm->inodes_with_caps manipulation should be protected by realm->inodes_with_caps_lock. This bug would only have bitten us in a rare race with a realm split (during some snap creations).
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Added an assertion, and cleaned up one case where the implemented caps were not following the issued caps.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 06 Mar 2010, 1 commit
Submitted by Stephen Rothwell
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 05 Mar 2010, 1 commit
Submitted by Yehuda Sadeh
This simplifies the process of timing out messages. We keep an LRU of the messages that are currently in flight. If a timeout has passed, we reset the osd connection so that messages will be retransmitted. This is a failsafe in case we hit some sort of problem sending out a message to the OSD; normally, we'll get notification via an updated osdmap if there are problems. If a request is older than the keepalive timeout, send a keepalive to ensure we detect any breaks in the TCP connection.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 02 Mar 2010, 6 commits
Submitted by Sage Weil
flush_dirty_caps() used to loop over the first entry of the cap_dirty list on the assumption that after calling ceph_check_caps() it would be removed from the list. This isn't true for caps that are being migrated between MDSs, where we've received the EXPORT but not the IMPORT. Instead, do a safe list iteration, and pin the next inode on the list via the CEPH_I_NOFLUSH flag.
Signed-off-by: Sage Weil <sage@newdream.net>
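A minimal sketch of the iteration change, with illustrative types; list_for_each_entry_safe() is the standard kernel idiom for walking a list whose current entry may or may not be removed:

```c
struct example_inode_info {
	struct list_head i_dirty_item;	/* entry on the cap_dirty list */
};

static void example_flush_dirty_caps(struct list_head *cap_dirty)
{
	struct example_inode_info *ci, *next;

	/* Safe iteration: an entry that stays on the list (a cap
	 * mid-migration) no longer makes the loop spin on the head.
	 * The real fix additionally pins the *next* inode with
	 * CEPH_I_NOFLUSH while the lock is dropped. */
	list_for_each_entry_safe(ci, next, cap_dirty, i_dirty_item) {
		/* ... check/flush caps for ci ... */
	}
}
```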
-
Submitted by Sage Weil
We should include caps that are mid-migration (we've received the EXPORT, but not the IMPORT) in the issued caps set.
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Add a missing pointer dereference (p is a void **).
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Verify the file is actually open for the given caps when we are waiting for caps. This ensures we will wake up and return EBADF if another thread closes the file out from under us. Note that EBADF is also the correct return code from write(2) when called on a file handle opened for reading (although the VFS should catch that).
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
We didn't set the front length correctly. When messages used the message pool we ended up with the conservative max (4 KB), and the rest of the time the slightly less conservative estimate. Even though the OSD ignores the extra data, set it to the right value to avoid sending extra data over the network.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
Reset the msg front len when a message is returned to the pool: the caller may have changed it. BUG if we try to send a message with a hdr.front_len that doesn't match the front iov.
Signed-off-by: Sage Weil <sage@newdream.net>
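A minimal sketch of both halves, assuming the pool knows its canonical front length; msg->front.iov_len and msg->hdr.front_len are the real fields named in the commit:

```c
/* On return to the pool, undo any shrinking the caller did. */
static void example_msgpool_put(struct ceph_msg *msg, size_t pool_front_len)
{
	msg->front.iov_len = pool_front_len;
	msg->hdr.front_len = cpu_to_le32(pool_front_len);
}

/* Before sending, insist the header and the iovec agree. */
static void example_check_front(struct ceph_msg *msg)
{
	BUG_ON(le32_to_cpu(msg->hdr.front_len) != msg->front.iov_len);
}
```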
-