1. 23 Mar 2010, 10 commits
    • ceph: fix connection fault con_work reentrancy problem · 3c3f2e32
      Committed by Sage Weil
      The messenger fault was clearing the BUSY bit, for reasons unclear.  This
      made it possible for the con->ops->fault function to reopen the connection,
      and requeue work in the workqueue--even though the current thread was
      already in con_work.
      
      This avoids a problem where the client busy loops with connection failures
      on an unreachable OSD, but doesn't address the root cause of that problem.
      Signed-off-by: Sage Weil <sage@newdream.net>
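The guard at issue can be modeled in plain userspace C. This is a sketch, not the kernel messenger code: `con_model`, `queue_work_once`, and the two fault handlers are invented stand-ins for the real `test_and_set_bit`/workqueue machinery, illustrating why the fault path must not clear the BUSY bit.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the messenger's BUSY-bit guard.  The real code
 * uses test_and_set_bit() on con->state and a workqueue. */
struct con_model {
    bool busy;        /* set while con_work runs or is queued */
    int queued;       /* how many times work was (re)queued */
};

/* Queue work only if we win the BUSY bit. */
static bool queue_work_once(struct con_model *con)
{
    if (con->busy)
        return false;         /* already queued or running */
    con->busy = true;
    con->queued++;
    return true;
}

/* Buggy fault handler: cleared BUSY, so a reopen from inside con_work
 * could requeue the same work and run it reentrantly. */
static void fault_buggy(struct con_model *con)
{
    con->busy = false;
    queue_work_once(con);     /* succeeds -> reentrant con_work */
}

/* Fixed fault handler: leave BUSY set; only con_work's own exit path
 * may clear it, so a requeue while con_work runs is a no-op. */
static void fault_fixed(struct con_model *con)
{
    queue_work_once(con);
}
```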
    • ceph: prevent dup stale messages to console for restarting mds · e4cb4cb8
      Committed by Sage Weil
      Prevent duplicate 'mds0 caps stale' message from spamming the console every
      few seconds while the MDS restarts.  Set s_renew_requested earlier, so that
      we only print the message once, even if we don't send an actual request.
      Signed-off-by: Sage Weil <sage@newdream.net>
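The rate-limit idea can be sketched in userspace C. This is a hypothetical model, not the session code: `session_model` and `maybe_warn_stale` are invented, standing in for `s_renew_requested` and the console message, to show why recording the renew time up front suppresses the duplicates.

```c
#include <assert.h>

/* Hypothetical model of the s_renew_requested rate limit: the
 * "caps stale" message is printed only when we first enter the
 * stale state for this interval. */
struct session_model {
    long renew_requested;   /* models s_renew_requested (jiffies) */
    int msgs_printed;
};

static void maybe_warn_stale(struct session_model *s, long now, long interval)
{
    if (s->renew_requested && now - s->renew_requested < interval)
        return;                 /* already warned/requested recently */
    s->renew_requested = now;   /* set *before* (or without) sending */
    s->msgs_printed++;          /* pr_info("mds0 caps stale") */
}
```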
    • ceph: fix pg pool decoding from incremental osdmap update · efd7576b
      Committed by Sage Weil
      The incremental map decoding of pg pool updates wasn't skipping
      the snaps and removed_snaps vectors.  This caused osd requests
      to stall when pool snapshots were created or fs snapshots were
      deleted.  Use a common helper for full and incremental map
      decoders that decodes pools properly.
      Signed-off-by: Sage Weil <sage@newdream.net>
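The shape of the fix can be sketched in userspace C. The wire encoding here is invented (a u32 count followed by that many u64 snap ids), but the structure is the point: both the full-map and incremental-map paths call one `decode_pool` helper that advances the cursor past the snaps and removed_snaps vectors instead of leaving it misaligned.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Skip one length-prefixed vector of u64 snap ids; return -1 on a
 * truncated buffer.  Layout is a simplification, not the real
 * osdmap encoding. */
static int skip_u64_vec(const uint8_t **p, const uint8_t *end)
{
    uint32_t n;
    if (end - *p < 4)
        return -1;
    memcpy(&n, *p, 4);
    *p += 4;
    if ((size_t)(end - *p) < (size_t)n * 8)
        return -1;
    *p += (size_t)n * 8;
    return 0;
}

/* Common helper used by both the full and the incremental decoder,
 * so neither path can forget to skip these vectors. */
static int decode_pool(const uint8_t **p, const uint8_t *end)
{
    if (skip_u64_vec(p, end))   /* snaps */
        return -1;
    if (skip_u64_vec(p, end))   /* removed_snaps */
        return -1;
    return 0;
}
```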
    • ceph: fix mds sync() race with completing requests · 80fc7314
      Committed by Sage Weil
      The wait_unsafe_requests() helper dropped the mdsc mutex to wait
      for each request to complete, and then examined r_node to get the
      next request after retaking the lock.  But the request completion
      removes the request from the tree, so r_node was always undefined
      at this point.  Since it's a small race, it usually led to a
      valid request, but not always.  The result was an occasional
      crash in rb_next() while dereferencing node->rb_left.
      
      Fix this by clearing the rb_node when removing the request from
      the request tree, and not walking off into the weeds when we
      are done waiting for a request.  Since the request we waited on
      will _always_ be out of the request tree, take a ref on the next
      request, in the hopes that it won't be.  But if it is, it's ok:
      we can start over from the beginning (and traverse over older read
      requests again).
      Signed-off-by: Sage Weil <sage@newdream.net>
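The wait-loop pattern can be modeled in userspace C. This is a sketch with invented names (`req`, `grab_next`, `resume_at`; `in_tree` stands in for `RB_EMPTY_NODE(&req->r_node)`): take a reference on the next request before dropping the lock, and restart from the head if that request left the tree while we waited, rather than walking stale rbtree links.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model, not the real ceph_mds_request. */
struct req {
    int tid;
    int in_tree;       /* models RB_EMPTY_NODE(&req->r_node) */
    struct req *next;  /* models the rb_next() successor */
    int refs;
};

/* Models rb_erase() plus the fix's RB_CLEAR_NODE(), so a waiter can
 * detect that this request is no longer in the tree. */
static void remove_from_tree(struct req *r)
{
    r->in_tree = 0;
    r->next = NULL;
}

/* Pin the successor before dropping the lock to wait. */
static struct req *grab_next(struct req *cur)
{
    struct req *nxt = cur->next;
    if (nxt)
        nxt->refs++;    /* ceph_mdsc_get_request() */
    return nxt;
}

/* After retaking the lock: if the pinned request left the tree while
 * we slept, drop it and start over from the head. */
static struct req *resume_at(struct req *nxt, struct req *head)
{
    if (nxt && !nxt->in_tree) {
        nxt->refs--;    /* ceph_mdsc_put_request() */
        return head;
    }
    return nxt;
}
```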
    • ceph: only release unused caps with mds requests · 916623da
      Committed by Sage Weil
      We were releasing used caps (e.g. FILE_CACHE) from encode_inode_release
      with MDS requests (e.g. setattr).  We don't carry refs on most caps, so
      this code worked most of the time, but for setattr (utimes) we try to
      drop Fscr.
      
      This causes cap state to get slightly out of sync with reality, and may
      result in subsequent mds revoke messages getting ignored.
      
      Fix by only releasing unused caps.
      Signed-off-by: Sage Weil <sage@newdream.net>
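The mask logic can be shown in a few lines of C. The cap bit values below are invented for illustration (the real ones live in the Ceph headers): of the caps the request wants to drop, release only those that are issued and not currently in use.

```c
#include <assert.h>

/* Hypothetical cap bits, for illustration only. */
#define FILE_CACHE  0x01
#define FILE_RD     0x02
#define FILE_WR     0x04

/* Model of the encode_inode_release() fix: never release a cap we
 * still hold a use on, even if the request asked us to drop it. */
static int caps_to_release(int issued, int used, int drop)
{
    return drop & issued & ~used;
}
```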
    • ceph: clean up handle_cap_grant, handle_caps wrt session mutex · 15637c8b
      Committed by Sage Weil
      Drop session mutex unconditionally in handle_cap_grant, and do the
      check_caps from the handle_cap_grant helper.  This avoids using a magic
      return value.
      
      Also avoid using a flag variable in the IMPORT case and call
      check_caps at the appropriate point.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix session locking in handle_caps, ceph_check_caps · cdc2ce05
      Committed by Sage Weil
      Passing a session pointer to ceph_check_caps() used to mean it would leave
      the session mutex locked.  That wasn't always possible if it wasn't passed
      CHECK_CAPS_AUTHONLY.   If could unlock the passed session and lock a
      differet session mutex, which was clearly wrong, and also emitted a
      warning when it a racing CPU retook it and we did an unlock from the wrong
      context.
      
      This was only a problem when there was more than one MDS.
      
      First, make ceph_check_caps unconditionally drop the session mutex, so that
      it is free to lock other sessions as needed.  Then adjust the one caller
      that passes in a session (handle_cap_grant) accordingly.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: drop unnecessary WARN_ON in caps migration · 4ea0043a
      Committed by Sage Weil
      If we don't have the exported cap it's because we already released it. No
      need to WARN.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix null pointer deref of r_osd in debug output · 12eadc19
      Committed by Sage Weil
      This causes an oops when debug output is enabled and we kick
      an osd request with no current r_osd (sometime after an osd
      failure).  Check the pointer before dereferencing.
      Signed-off-by: Sage Weil <sage@newdream.net>
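The defensive pattern is tiny but worth spelling out. This sketch uses simplified stand-in structs (not the real `ceph_osd_request`/`ceph_osd`): a request kicked after an osd failure has no current `r_osd`, so the debug path must test the pointer before reading the osd number.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for ceph_osd / ceph_osd_request. */
struct osd_model { int o_osd; };
struct osd_req_model { struct osd_model *r_osd; };

/* Fetch the osd number for debug output; -1 means "no current osd"
 * instead of an oops on a NULL r_osd. */
static int req_osd_num(const struct osd_req_model *req)
{
    return req->r_osd ? req->r_osd->o_osd : -1;
}
```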
    • ceph: clean up service ticket decoding · 0a990e70
      Committed by Sage Weil
      Previously we would decode state directly into our current ticket_handler.
      This is problematic if for some reason we fail to decode, because we end
      up with half new state and half old state.
      
      We are probably already in bad shape if we get an update we can't decode,
      but we may as well be tidy anyway.  Decode into new_* temporaries and
      update the ticket_handler only on success.
      Signed-off-by: Sage Weil <sage@newdream.net>
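The decode-then-commit pattern generalizes well, so here it is in a userspace sketch. The field layout is invented (two u32s back to back, nothing like the real ticket encoding): everything is parsed into `new_*` temporaries, and the handler is updated only once the whole decode has succeeded, so a failure leaves it untouched.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical ticket handler state. */
struct ticket_model {
    uint32_t secret_id;
    uint32_t expires;
};

static int decode_ticket(struct ticket_model *th,
                         const uint8_t *p, size_t len)
{
    uint32_t new_secret_id, new_expires;   /* new_* temporaries */

    if (len < 8)
        return -1;                         /* bail before touching th */
    memcpy(&new_secret_id, p, 4);
    memcpy(&new_expires, p + 4, 4);

    th->secret_id = new_secret_id;         /* commit only on success */
    th->expires = new_expires;
    return 0;
}
```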
  2. 21 Mar 2010, 6 commits
  3. 06 Mar 2010, 1 commit
  4. 05 Mar 2010, 1 commit
    • ceph: reset osd after relevant messages timed out · 422d2cb8
      Committed by Yehuda Sadeh
      This simplifies the process of timing out messages. We
      keep lru of current messages that are in flight. If a
      timeout has passed, we reset the osd connection, so that
      messages will be retransmitted.  This is a failsafe in case
      we hit some sort of problem sending out message to the OSD.
      Normally, we'll get notification via an updated osdmap if
      there are problems.
      
      If a request is older than the keepalive timeout, send a
      keepalive to ensure we detect any breaks in the TCP connection.
      Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
      Signed-off-by: Sage Weil <sage@newdream.net>
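The two-threshold policy can be sketched in a few lines of C. The names and constants here are invented, not the kernel's: because requests sit on an LRU ordered by send time, examining the oldest entry suffices; past the hard timeout we reset the osd connection so requests are resent, and past the shorter keepalive threshold we ping to detect a dead TCP link.

```c
#include <assert.h>

/* Hypothetical actions for the timeout work. */
#define OSD_ACT_NONE      0
#define OSD_ACT_KEEPALIVE 1
#define OSD_ACT_RESET     2

/* Decide what to do given the age of the oldest in-flight request.
 * keepalive_to < hard_to; both thresholds are illustrative. */
static int timeout_action(long oldest_sent, long now,
                          long keepalive_to, long hard_to)
{
    long age = now - oldest_sent;
    if (age >= hard_to)
        return OSD_ACT_RESET;       /* reset so messages retransmit */
    if (age >= keepalive_to)
        return OSD_ACT_KEEPALIVE;   /* probe for a broken TCP link */
    return OSD_ACT_NONE;
}
```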
  5. 02 Mar 2010, 9 commits
  6. 27 Feb 2010, 2 commits
    • ceph: remove bogus mds forward warning · 080af17e
      Committed by Sage Weil
      The must_resend flag is always true, not false.  In any case, we can
      just ignore it anyway.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: remove fragile __map_osds optimization · c99eb1c7
      Committed by Sage Weil
      We used to try to avoid freeing and then reallocating the osd
      struct.  This is a bit fragile due to potential interactions with
      other references (beyond o_requests), and may be the cause of
      this crash:
      
      [120633.442358] BUG: unable to handle kernel NULL pointer dereference at (null)
      [120633.443292] IP: [<ffffffff812549b6>] rb_erase+0x11d/0x277
      [120633.443292] PGD f7ff3067 PUD f7f53067 PMD 0
      [120633.443292] Oops: 0000 [#1] PREEMPT SMP
      [120633.443292] last sysfs file: /sys/kernel/uevent_seqnum
      [120633.443292] CPU 1
      [120633.443292] Modules linked in: ceph fan ac battery psmouse ehci_hcd ide_pci_generic ohci_hcd thermal processor button
      [120633.443292] Pid: 3023, comm: ceph-msgr/1 Not tainted 2.6.32-rc2 #12 H8SSL
      [120633.443292] RIP: 0010:[<ffffffff812549b6>]  [<ffffffff812549b6>] rb_erase+0x11d/0x277
      [120633.443292] RSP: 0018:ffff8800f7b13a50  EFLAGS: 00010246
      [120633.443292] RAX: ffff880022907819 RBX: ffff880022907818 RCX: 0000000000000000
      [120633.443292] RDX: ffff8800f7b13a80 RSI: ffff8800f587eb48 RDI: 0000000000000000
      [120633.443292] RBP: ffff8800f7b13a60 R08: 0000000000000000 R09: 0000000000000004
      [120633.443292] R10: 0000000000000000 R11: ffff8800c4441000 R12: ffff8800f587eb48
      [120633.443292] R13: ffff8800f58eaa00 R14: ffff8800f413c000 R15: 0000000000000001
      [120633.443292] FS:  00007fbef6e226e0(0000) GS:ffff880009200000(0000) knlGS:0000000000000000
      [120633.443292] CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
      [120633.443292] CR2: 0000000000000000 CR3: 00000000f7c53000 CR4: 00000000000006e0
      [120633.443292] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [120633.443292] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [120633.443292] Process ceph-msgr/1 (pid: 3023, threadinfo ffff8800f7b12000, task ffff8800f5858b40)
      [120633.443292] Stack:
      [120633.443292]  ffff8800f413c000 ffff8800f587e9c0 ffff8800f7b13a80 ffffffffa0098a86
      [120633.443292] <0> 00000000000006f1 0000000000000000 ffff8800f7b13af0 ffffffffa009959b
      [120633.443292] <0> ffff8800f413c000 ffff880022a68400 ffff880022a68400 ffff8800f587e9c0
      [120633.443292] Call Trace:
      [120633.443292]  [<ffffffffa0098a86>] __remove_osd+0x4d/0xbc [ceph]
      [120633.443292]  [<ffffffffa009959b>] __map_osds+0x199/0x4fa [ceph]
      [120633.443292]  [<ffffffffa00999f4>] ? __send_request+0xf8/0x186 [ceph]
      [120633.443292]  [<ffffffffa0099beb>] kick_requests+0x169/0x3cb [ceph]
      [120633.443292]  [<ffffffffa009a8c1>] ceph_osdc_handle_map+0x370/0x522 [ceph]
      
      Since we're probably screwed anyway if a small kmalloc is
      failing, don't bother with trying to be clever here.
      Signed-off-by: Sage Weil <sage@newdream.net>
  7. 26 Feb 2010, 2 commits
  8. 24 Feb 2010, 6 commits
  9. 20 Feb 2010, 3 commits