1. 28 Aug 2013, 3 commits
  2. 16 Aug 2013, 1 commit
  3. 10 Aug 2013, 2 commits
  4. 04 Jul 2013, 6 commits
  5. 18 May 2013, 1 commit
    • A
      libceph: must hold mutex for reset_changed_osds() · 14d2f38d
      Committed by Alex Elder
      An osd client has a red-black tree describing its osds, and
      occasionally we would get crashes due to one of these trees
      becoming corrupt somehow.
      
      The problem turned out to be that reset_changed_osds() was being
      called without protection of the osd client request mutex.  That
      function would call __reset_osd() for any osd that had changed, and
      __reset_osd() would call __remove_osd() for any osd with no
      outstanding requests, and finally __remove_osd() would remove the
      corresponding entry from the red-black tree.  Thus, the tree was
      getting modified without having any lock protection, and was
      vulnerable to problems due to concurrent updates.
      
      This appears to be the only osd tree updating path that has this
      problem.  It can be fairly easily fixed by moving the call up
      a few lines, to just before the request mutex gets dropped
      in kick_requests().
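      
      A minimal userspace sketch of the pattern (pthread mutex standing in
      for the osd client's request mutex, function bodies elided; this is
      not the kernel code itself): the point is only that the tree-modifying
      call happens before the unlock rather than after it.
      
          #include <pthread.h>
          
          static pthread_mutex_t request_mutex = PTHREAD_MUTEX_INITIALIZER;
          
          /* modifies the shared osd tree; callers must hold request_mutex */
          static void reset_changed_osds(void)
          {
                  /* __reset_osd() -> __remove_osd() -> erase from the tree */
          }
          
          static void kick_requests(void)
          {
                  pthread_mutex_lock(&request_mutex);
                  /* ... re-map and re-queue requests ... */
                  reset_changed_osds();       /* moved up: still under the mutex */
                  pthread_mutex_unlock(&request_mutex);
                  /* the buggy version made the call down here, after unlocking */
          }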
      
      This resolves:
          http://tracker.ceph.com/issues/5043
      
      Cc: stable@vger.kernel.org # 3.4+
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      14d2f38d
  6. 14 May 2013, 1 commit
    • A
      libceph: init sent and completed when starting · c10ebbf5
      Committed by Alex Elder
      The rbd code has a need to be able to restart an osd request that
      has already been started and completed once before.  This currently
      wouldn't work right because the osd client code assumes an osd
      request will be started exactly once.  Certain fields in a request
      are never cleared, and this leads to trouble if you try to reuse it.
      
      Specifically, the r_sent, r_got_reply, and r_completed fields are
      never cleared.  The r_sent field records the osd incarnation at the
      time the request was sent to that osd.  If that's non-zero, the
      message won't get re-mapped to a target osd properly, and won't be
      put on the unsafe requests list the first time it's sent as it
      should.  The r_got_reply field is used in handle_reply() to ensure
      the reply to a request is processed only once.  And the r_completed
      field is used for lingering requests to avoid calling the callback
      function every time the osd client re-sends the request on behalf of
      its initiator.
      
      Each osd request passes through ceph_osdc_start_request() when
      responsibility for the request is handed over to the osd client for
      completion.  We can safely zero these three fields there each time a
      request gets started.
      
      One last related change--clear the r_linger flag when a request
      is no longer registered as a linger request.
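      
      A hedged sketch of the idea (simplified struct and invented names,
      not the real ceph_osd_request layout): each pass through the
      start-request path re-initializes the per-submission state, so a
      previously completed request can be restarted safely.
      
          #include <stdbool.h>
          
          struct osd_request_sketch {
                  unsigned int r_sent;       /* osd incarnation the request was sent to */
                  bool         r_got_reply;  /* reply already processed? */
                  bool         r_completed;  /* completion callback already ran? */
                  bool         r_linger;     /* registered as a lingering request? */
          };
          
          static void start_request(struct osd_request_sketch *req)
          {
                  /* clear state that must be re-established on every (re)submission */
                  req->r_sent = 0;
                  req->r_got_reply = false;
                  req->r_completed = false;
                  /* ... assign a tid, map to a target osd, queue the message ... */
          }
          
          static void unregister_linger(struct osd_request_sketch *req)
          {
                  /* the related change: drop the flag once no longer lingering */
                  req->r_linger = false;
          }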
      
      This resolves:
          http://tracker.ceph.com/issues/5026
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      c10ebbf5
  7. 03 May 2013, 3 commits
  8. 02 May 2013, 23 commits
    • A
      libceph: create source file "net/ceph/snapshot.c" · 4f0dcb10
      Committed by Alex Elder
      This creates a new source file "net/ceph/snapshot.c" to contain
      utility routines related to ceph snapshot contexts.  The main
      motivation was to define ceph_create_snap_context() as a common way
      to create these structures, but I've moved the definitions of
      ceph_get_snap_context() and ceph_put_snap_context() there too.
      (The benefit of inlining those is very small, and I'd rather
      keep this collection of functions together.)
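      As a rough illustration of the interface shape (plain userspace C
      with a simple counter standing in for kernel refcounting; the names
      are invented and this is not the actual net/ceph/snapshot.c code):
      
          #include <stdint.h>
          #include <stdlib.h>
          
          struct snap_context_sketch {
                  unsigned int refcount;
                  uint32_t     num_snaps;
                  uint64_t     snaps[];      /* snapshot ids, newest first */
          };
          
          static struct snap_context_sketch *snap_context_create(uint32_t num_snaps)
          {
                  struct snap_context_sketch *sc;
          
                  sc = calloc(1, sizeof(*sc) + num_snaps * sizeof(sc->snaps[0]));
                  if (!sc)
                          return NULL;
                  sc->refcount = 1;
                  sc->num_snaps = num_snaps;
                  return sc;
          }
          
          static struct snap_context_sketch *snap_context_get(struct snap_context_sketch *sc)
          {
                  if (sc)
                          sc->refcount++;
                  return sc;
          }
          
          static void snap_context_put(struct snap_context_sketch *sc)
          {
                  if (sc && --sc->refcount == 0)
                          free(sc);
          }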
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      4f0dcb10
    • A
      libceph: fix byte order mismatch · 9ef1ee5a
      Committed by Alex Elder
      A WATCH op includes an object version.  The version that's supplied
      is incorrectly byte-swapped in osd_req_op_watch_init(), where it's
      first assigned (it's been this way since that code was first added).
      
      The result is that the version sent to the osd is wrong, because
      that value gets byte-swapped again in osd_req_encode_op().  This
      is the source of a sparse warning related to improper byte order in
      the assignment.
      
      The approach of using the version to avoid a race is deprecated
      (see http://tracker.ceph.com/issues/3871), and the watch parameter
      is no longer even examined by the osd.  So fix the assignment in
      osd_req_op_watch_init() so it no longer does the byte swap.
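      
      The bug class is easy to show in a few lines of plain C (illustrative
      names only, no relation to the actual struct layout): keep the value
      in CPU byte order inside the op and convert exactly once, at the
      point where it is written to the wire.
      
          #include <stdint.h>
          
          /* stand-in for the wire encoder: lay v out in little-endian order */
          static void put_le64(unsigned char *dst, uint64_t v)
          {
                  for (int i = 0; i < 8; i++)
                          dst[i] = (unsigned char)(v >> (8 * i));
          }
          
          struct watch_op_sketch {
                  uint64_t ver;              /* stored in CPU order (the fix) */
          };
          
          static void encode_watch_op(const struct watch_op_sketch *op,
                                      unsigned char *wire)
          {
                  put_le64(wire, op->ver);   /* the one and only conversion */
                  /*
                   * The bug was the equivalent of storing an already
                   * little-endian value in op->ver and converting it again here.
                   */
          }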
      
      This resolves:
          http://tracker.ceph.com/issues/3847
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      9ef1ee5a
    • A
      libceph: support pages for class request data · 6c57b554
      Committed by Alex Elder
      Add the ability to provide an array of pages as outbound request
      data for object class method calls.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      6c57b554
    • A
      libceph: fix two messenger bugs · a51b272e
      Committed by Alex Elder
      This patch makes four small changes in the ceph messenger.
      
      While getting copyup functionality working I found two bugs in the
      messenger.  Existing paths through the code did not trigger these
      problems, but they're fixed here:
          - In ceph_msg_data_pagelist_cursor_init(), the cursor's
            last_piece field was being checked against the length
            supplied.  This was OK until commit ccba6d98 ("libceph:
            implement multiple data items in a message"), which changed
            the cursor init routines to allow lengths to be supplied that
            exceed the size of the current data item.  Because of this,
            we have to use the assigned cursor resid field rather than the
            provided length when determining whether the cursor points to
            the last piece of a data item.
          - In ceph_msg_data_add_pages(), a BUG_ON() was erroneously
            catching attempts to add page data to a message if the message
            already had data assigned to it. That was OK until that same
            commit, at which point it was fine for messages to have
            multiple data items. It slipped through because that BUG_ON()
            call was present twice in that function. (You can never be too
            careful.)
      
      In addition two other minor things are changed:
          - In ceph_msg_data_cursor_init(), the local variable "data" was
            getting assigned twice.
          - In ceph_msg_data_advance(), it was assumed that the
            type-specific advance routine would set new_piece to true
            after it advanced past the last piece. That may have been
            fine, but since we check for that case we might as well set it
            explicitly in ceph_msg_data_advance().
      
      This resolves:
          http://tracker.ceph.com/issues/4762
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      a51b272e
    • A
      libceph: support raw data requests · 49719778
      Committed by Alex Elder
      Allow osd request ops that aren't otherwise structured (not class,
      extent, or watch ops) to specify "raw" data to be used to hold
      incoming data for the op.  Make use of this capability for the osd
      STAT op.
      
      Prefix the name of the private function osd_req_op_init() with "_",
      and expose a new function by that (earlier) name whose purpose is to
      initialize osd ops with (only) implied data.
      
      For now we'll just support the use of a page array for an osd op
      with incoming raw data.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      49719778
    • A
      libceph: clean up osd data field access functions · 863c7eb5
      Committed by Alex Elder
      There are a bunch of functions defined to encapsulate getting the
      address of a data field for a particular op in an osd request.
      They're all defined the same way, so create a macro to take the
      place of all of them.
      
      Two of these are used outside the osd client code, so preserve them
      (but convert them to use the new macro internally).  Stop exporting
      the ones that aren't used elsewhere.
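      
      A sketch of the pattern described (invented struct and field names,
      not the real osd client definitions): a single macro stamps out the
      family of near-identical accessor functions.
      
          struct osd_data_sketch {
                  int type;                  /* pages, pagelist, bio, ... */
          };
          
          struct osd_op_sketch {
                  struct osd_data_sketch extent_data;
                  struct osd_data_sketch cls_request_data;
                  struct osd_data_sketch cls_response_data;
          };
          
          #define DEFINE_OSD_DATA_ACCESSOR(name, field)                     \
          static struct osd_data_sketch *                                   \
          osd_op_##name##_data(struct osd_op_sketch *op)                    \
          {                                                                 \
                  return &op->field;                                        \
          }
          
          DEFINE_OSD_DATA_ACCESSOR(extent, extent_data)
          DEFINE_OSD_DATA_ACCESSOR(cls_request, cls_request_data)
          DEFINE_OSD_DATA_ACCESSOR(cls_response, cls_response_data)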
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      863c7eb5
    • A
      libceph: kill off osd data write_request parameters · 406e2c9f
      Committed by Alex Elder
      In the incremental move toward supporting distinct data items in an
      osd request some of the functions had "write_request" parameters to
      indicate, basically, whether the data belonged to in_data or the
      out_data.  Now that we maintain the data fields in the op structure
      there is no need to indicate the direction, so get rid of the
      "write_request" parameters.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      406e2c9f
    • A
      libceph: change how "safe" callback is used · 26be8808
      Committed by Alex Elder
      An osd request currently has two callbacks.  They inform the
      initiator of the request when we've received confirmation from the
      target osd that a request was received, and when the osd indicates
      all changes described by the request are durable.
      
      The only time the second callback is used is in the ceph file system
      for a synchronous write.  There's a race that makes some handling of
      this case unsafe.  This patch addresses this problem.  The error
      handling for this callback is also kind of gross, and this patch
      changes that as well.
      
      In ceph_sync_write(), if a safe callback is requested we want to add
      the request to the ceph inode's unsafe items list.  Because items on
      this list must have their tid set (by ceph_osdc_start_request()), the
      request is added *after* the call to that function returns.  The
      problem with this is that there's a race between starting the
      request and adding it to the unsafe items list; the request may
      already be complete before ceph_sync_write() even begins to put it
      on the list.
      
      To address this, we change the way the "safe" callback is used.
      Rather than just calling it when the request is "safe", we use it to
      notify the initiator the bounds (start and end) of the period during
      which the request is *unsafe*.  So the initiator gets notified just
      before the request gets sent to the osd (when it is "unsafe"), and
      again when it's known the results are durable (it's no longer
      unsafe).  The first call will get made in __send_request(), just
      before the request message gets sent to the messenger for the first
      time.  That function is only called by __send_queued(), which is
      always called with the osd client's request mutex held.
      
      We then have this callback function insert the request on the ceph
      inode's unsafe list when we're told the request is unsafe.  This
      will avoid the race because this call will be made under protection
      of the osd client's request mutex.  It also nicely groups the setup
      and cleanup of the state associated with managing unsafe requests.
      
      The name of the "safe" callback field is changed to "unsafe" to
      better reflect its new purpose.  It has a Boolean "unsafe" parameter
      to indicate whether the request is becoming unsafe or is now safe.
      Because the "msg" parameter wasn't used, we drop that.
      
      This resolves the original problem reported in:
          http://tracker.ceph.com/issues/4706
      Reported-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      26be8808
    • A
      libceph: make method call data be a separate data item · 04017e29
      Committed by Alex Elder
      Right now the data for a method call is specified via a pointer and
      length, and it's copied--along with the class and method name--into
      a pagelist data item to be sent to the osd.  Instead, encode the
      data in a data item separate from the class and method names.
      
      This will allow large amounts of data to be supplied to methods
      without copying.  Only rbd uses the class functionality right now,
      and when it really needs this it will probably need to use a page
      array rather than a page list.  But this simple implementation
      demonstrates the functionality on the osd client, and that's enough
      for now.
      
      This resolves:
          http://tracker.ceph.com/issues/4104
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      04017e29
    • A
      libceph: add, don't set data for a message · 90af3602
      Committed by Alex Elder
      Change the names of the functions that put data on a pagelist to
      reflect that we're adding to whatever's already there rather than
      just setting it to the one thing.  Currently only one data item is
      ever added to a message, but that's about to change.
      
      This resolves:
          http://tracker.ceph.com/issues/2770
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      90af3602
    • A
      libceph: implement multiple data items in a message · ca8b3a69
      Committed by Alex Elder
      This patch adds support to the messenger for more than one data item
      in its data list.
      
      A message data cursor has two more fields to support this:
          - a count of the number of bytes left to be consumed across
            all data items in the list, "total_resid"
          - a pointer to the head of the list (for validation only)
      
      The cursor initialization routine has been split into two parts: the
      outer one, which initializes the cursor for traversing the entire
      list of data items; and the inner one, which initializes the cursor
      to start processing a single data item.
      
      When a message cursor is first initialized, the outer initialization
      routine sets total_resid to the length provided.  The data pointer
      is initialized to the first data item on the list.  From there, the
      inner initialization routine finishes by setting up to process the
      data item the cursor points to.
      
      Advancing the cursor consumes bytes in total_resid.  If the resid
      field reaches zero, it means the current data item is fully
      consumed.  If total_resid indicates there is more data, the cursor
      is advanced to point to the next data item, and then the inner
      initialization routine prepares for using that.  (A check is made at
      this point to make sure we don't wrap around the front of the list.)
      
      The type-specific init routines are modified so they can be given a
      length that's larger than what the data item can support.  The resid
      field is initialized to the smaller of the provided length and the
      length of the entire data item.
      
      When total_resid reaches zero, we're done.
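      
      A userspace sketch of the two-level cursor just described (simplified
      types, invented names): total_resid counts bytes left across the
      whole list, resid counts bytes left in the current item, and when
      resid hits zero with data still outstanding the cursor steps to the
      next item and re-runs the per-item setup.
      
          #include <stddef.h>
          
          struct data_item {
                  struct data_item *next;
                  size_t length;
          };
          
          struct cursor_sketch {
                  struct data_item *item;    /* current data item */
                  size_t total_resid;        /* bytes left across all items */
                  size_t resid;              /* bytes left in the current item */
          };
          
          static void cursor_init_item(struct cursor_sketch *c, size_t length)
          {
                  /* per-item setup: never claim more than the item holds */
                  c->resid = length < c->item->length ? length : c->item->length;
          }
          
          static void cursor_init(struct cursor_sketch *c,
                                  struct data_item *head, size_t length)
          {
                  c->item = head;
                  c->total_resid = length;
                  cursor_init_item(c, length);
          }
          
          static void cursor_advance(struct cursor_sketch *c, size_t bytes)
          {
                  /* callers never consume more than c->resid at a time */
                  c->resid -= bytes;
                  c->total_resid -= bytes;
                  if (c->resid == 0 && c->total_resid > 0) {
                          c->item = c->item->next;   /* must not wrap past the tail */
                          cursor_init_item(c, c->total_resid);
                  }
          }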
      
      This resolves:
          http://tracker.ceph.com/issues/3761
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ca8b3a69
    • A
      libceph: replace message data pointer with list · 5240d9f9
      Committed by Alex Elder
      In place of the message data pointer, use a list head which links
      through message data items.  For now we only support a single entry
      on that list.
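      
      As a structural sketch of the change (a plain singly linked list
      standing in for the kernel's list machinery; names are invented):
      
          struct msg_data_sketch {
                  struct msg_data_sketch *next;  /* link to the following item */
                  /* type, pages/pagelist/bio pointer, length, ... */
          };
          
          /* before: exactly one data item per message */
          struct msg_before_sketch {
                  struct msg_data_sketch *data;
          };
          
          /* after: the head of a list of items (still only one entry for now) */
          struct msg_after_sketch {
                  struct msg_data_sketch *data_head;
          };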
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      5240d9f9
    • A
      libceph: have cursor point to data · 8ae4f4f5
      Committed by Alex Elder
      Rather than having a ceph message data item point to the cursor it's
      associated with, have the cursor point to a data item.  This will
      allow a message cursor to be used for more than one data item.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      8ae4f4f5
    • A
      libceph: move cursor into message · 36153ec9
      Committed by Alex Elder
      A message will only be processing a single data item at a time, so
      there's no need for each data item to have its own cursor.
      
      Move the cursor embedded in the message data structure into the
      message itself.  To minimize the impact, keep the data->cursor
      field, but make it be a pointer to the cursor in the message.
      
      Move the definition of ceph_msg_data above ceph_msg_data_cursor so
      the cursor can point to the data without a forward definition rather
      than vice-versa.
      
      This and the upcoming patches are part of:
          http://tracker.ceph.com/issues/3761
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      36153ec9
    • A
      libceph: record bio length · c851c495
      Committed by Alex Elder
      The bio is the only data item type that doesn't record its full
      length.  Fix that.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      c851c495
    • A
      libceph: skip message if too big to receive · f759ebb9
      Committed by Alex Elder
      We know the length of our message buffers.  If we get a message
      that's too long, just dump it and ignore it.  If skip was set
      then con->in_msg won't be valid, so be careful not to dereference
      a null pointer in the process.
      
      This resolves:
          http://tracker.ceph.com/issues/4664
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      f759ebb9
    • A
      libceph: fix possible CONFIG_BLOCK build problem · ea96571f
      Committed by Alex Elder
      This patch:
          15a0d7b libceph: record message data length
      did not enclose some bio-specific code inside CONFIG_BLOCK as
      it should have.  Fix that.
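      
      The fix pattern is the usual one (sketch with invented member names;
      CONFIG_BLOCK is the real kernel config symbol): bio-specific members
      and code are guarded so the file still builds when block support is
      compiled out.
      
          struct data_sketch {
                  unsigned int length;
          #ifdef CONFIG_BLOCK
                  void *bio;                 /* bio chain; only with CONFIG_BLOCK */
                  unsigned int bio_length;
          #endif
          };
          
          static unsigned int data_length(const struct data_sketch *d)
          {
          #ifdef CONFIG_BLOCK
                  if (d->bio)
                          return d->bio_length;
          #endif
                  return d->length;
          }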
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ea96571f
    • A
      libceph: kill off osd request r_data_in and r_data_out · 5476492f
      Committed by Alex Elder
      Finally!  Convert the osd op data pointers into real structures, and
      make the switch over to using them instead of having all ops share
      the in and/or out data structures in the osd request.
      
      Set up a new function to traverse the set of ops and release any
      data associated with them (pages).
      
      This and the patches leading up to it resolve:
          http://tracker.ceph.com/issues/4657
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      5476492f
    • A
      libceph: set the data pointers when encoding ops · ec9123c5
      Committed by Alex Elder
      Still using the osd request r_data_in and r_data_out pointer, but
      we're basically only referring to it via the data pointers in the
      osd ops.  And we're transferring that information to the request
      or reply message only when the op indicates it's needed, in
      osd_req_encode_op().
      
      To avoid a forward reference, ceph_osdc_msg_data_set() was moved up
      in the file.
      
      Don't bother calling ceph_osd_data_init(), in ceph_osd_alloc(),
      because the ops array will already be zeroed anyway.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ec9123c5
    • A
      libceph: combine initializing and setting osd data · a4ce40a9
      Committed by Alex Elder
      This ends up being a rather large patch but what it's doing is
      somewhat straightforward.
      
      Basically, this is replacing two calls with one.  The first of the
      two calls is initializing a struct ceph_osd_data with data (either a
      page array, a page list, or a bio list); the second is setting an
      osd request op so it associates that data with one of the op's
      parameters.  In place of those two will be a single function that
      initializes the op directly.
      
      That means we sort of fan out a set of the needed functions:
          - extent ops with pages data
          - extent ops with pagelist data
          - extent ops with bio list data
      and
          - class ops with page data for receiving a response
      
      We also have to define another one, but it's only used internally:
          - class ops with pagelist data for request parameters
      
      Note that we *still* haven't gotten rid of the osd request's
      r_data_in and r_data_out fields.  All the osd ops refer to them for
      their data.  For now, these data fields are pointers assigned to the
      appropriate r_data_* field when these new functions are called.
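      
      A rough sketch of what "replacing two calls with one" means in code
      (invented names and a page-array case only, not the actual osd client
      API): one function both describes the data and attaches it to the
      right op parameter.
      
          #include <stdbool.h>
          #include <stddef.h>
          
          struct page;                       /* opaque for this sketch */
          
          struct osd_data_sketch {
                  struct page **pages;
                  size_t length;
                  size_t alignment;
                  bool own_pages;
          };
          
          struct osd_op_sketch {
                  struct osd_data_sketch extent_data;
          };
          
          /* replaces: init_pages(&d, ...); then set_extent_data(op, &d); */
          static void op_extent_data_pages(struct osd_op_sketch *op,
                                           struct page **pages, size_t length,
                                           size_t alignment, bool own_pages)
          {
                  op->extent_data.pages = pages;
                  op->extent_data.length = length;
                  op->extent_data.alignment = alignment;
                  op->extent_data.own_pages = own_pages;
          }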
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      a4ce40a9
    • A
      libceph: set message data when building osd request · 39b44cbe
      Committed by Alex Elder
      All calls of ceph_osdc_start_request() are preceded (in the case of
      rbd, almost) immediately by a call to ceph_osdc_build_request().
      
      Move the build calls at the top of ceph_osdc_start_request() out of
      there and into the ceph_osdc_build_request().  Nothing prevents
      moving these calls to the top of ceph_osdc_build_request(), either
      (and we're going to want them there in the next patch) so put them
      at the top.
      
      This and the next patch are related to:
          http://tracker.ceph.com/issues/4657
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      39b44cbe
    • A
      libceph: move ceph_osdc_build_request() · e65550fd
      Committed by Alex Elder
      This simply moves ceph_osdc_build_request() later in its source
      file without any change.  Done as a separate patch to facilitate
      review of the change in the next patch.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      e65550fd
    • A
      libceph: format class info at init time · 5f562df5
      Committed by Alex Elder
      An object class method is formatted using a pagelist which contains
      the class name, the method name, and the data concatenated into an
      osd request's outbound data.
      
      Currently when a class op is initialized in osd_req_op_cls_init(),
      the lengths of and pointers to these three items are recorded.
      Later, when the op is getting formatted into the request message, a
      new pagelist is created and that is when these items get copied into
      the pagelist.
      
      This patch makes it so the pagelist to hold these items is created
      when the op is initialized instead.
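      
      A rough userspace sketch of what "format at init time" amounts to (a
      growable buffer standing in for a ceph pagelist; all names invented):
      the op setup path builds the formatted data once, and encoding later
      only has to point the request message at it.
      
          #include <stdlib.h>
          #include <string.h>
          
          struct pagelist_sketch {
                  char   *buf;               /* must start out {NULL, 0} */
                  size_t  len;
          };
          
          static int pagelist_append(struct pagelist_sketch *pl,
                                     const void *data, size_t len)
          {
                  char *nbuf = realloc(pl->buf, pl->len + len);
          
                  if (!nbuf)
                          return -1;
                  memcpy(nbuf + pl->len, data, len);
                  pl->buf = nbuf;
                  pl->len += len;
                  return 0;
          }
          
          /* op init now owns building the pagelist */
          static int op_cls_init(struct pagelist_sketch *pl, const char *class_name,
                                 const char *method_name,
                                 const void *data, size_t data_len)
          {
                  if (pagelist_append(pl, class_name, strlen(class_name)) ||
                      pagelist_append(pl, method_name, strlen(method_name)) ||
                      pagelist_append(pl, data, data_len))
                          return -1;
                  return 0;
          }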
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      5f562df5