1. 02 May 2013, 40 commits
• libceph: set message data when building osd request · 39b44cbe
  Authored by Alex Elder
All calls to ceph_osdc_start_request() are preceded (in the case of
rbd, almost) immediately by a call to ceph_osdc_build_request().

Move the message data-setting calls at the top of
ceph_osdc_start_request() out of there and into
ceph_osdc_build_request().  Nothing prevents moving these calls to
the top of ceph_osdc_build_request() either (and we're going to want
them there in the next patch), so put them at the top.
      
      This and the next patch are related to:
    http://tracker.ceph.com/issues/4657
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: move ceph_osdc_build_request() · e65550fd
  Authored by Alex Elder
      This simply moves ceph_osdc_build_request() later in its source
      file without any change.  Done as a separate patch to facilitate
      review of the change in the next patch.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: format class info at init time · 5f562df5
  Authored by Alex Elder
      An object class method is formatted using a pagelist which contains
      the class name, the method name, and the data concatenated into an
      osd request's outbound data.
      
      Currently when a class op is initialized in osd_req_op_cls_init(),
      the lengths of and pointers to these three items are recorded.
      Later, when the op is getting formatted into the request message, a
      new pagelist is created and that is when these items get copied into
      the pagelist.
      
      This patch makes it so the pagelist to hold these items is created
      when the op is initialized instead.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: specify osd op by index in request · c99d2d4a
  Authored by Alex Elder
An osd request now holds all of its source op structures, and every
place that initializes one of these is in fact initializing one
of the entries in the osd request's array.

So rather than supplying the address of the op to initialize, have
the caller specify the osd request and an indication of which op it
would like to initialize.  This better hides the details of the op
structure (and facilitates moving the data pointers they use).
      
      Since osd_req_op_init() is a common routine, and it's not used
      outside the osd client code, give it static scope.  Also make
      it return the address of the specified op (so all the other
      init routines don't have to repeat that code).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
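A minimal sketch of the resulting interface (user-space C with types
and fields pared down; only the names osd_req_op_init() and
osd_req_op_extent_init() come from the patch, everything else is
illustrative):

    #include <assert.h>

    typedef unsigned short u16;
    typedef unsigned long long u64;

    #define CEPH_OSD_MAX_OP 2

    struct ceph_osd_req_op {
        u16 opcode;
        struct { u64 offset, length; } extent;
    };

    struct ceph_osd_request {
        unsigned int r_num_ops;
        struct ceph_osd_req_op r_ops[CEPH_OSD_MAX_OP];
    };

    /* Common helper, now static to the osd client.  Returning the op's
     * address saves every type-specific init routine from repeating
     * the array lookup. */
    static struct ceph_osd_req_op *
    osd_req_op_init(struct ceph_osd_request *req, unsigned int which, u16 opcode)
    {
        assert(which < CEPH_OSD_MAX_OP);
        req->r_ops[which].opcode = opcode;
        return &req->r_ops[which];
    }

    /* Callers now name the request and an op index, not a raw op pointer. */
    static void osd_req_op_extent_init(struct ceph_osd_request *req,
                                       unsigned int which, u16 opcode,
                                       u64 offset, u64 length)
    {
        struct ceph_osd_req_op *op = osd_req_op_init(req, which, opcode);

        op->extent.offset = offset;
        op->extent.length = length;
    }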
• libceph: add data pointers in osd op structures · 8c042b0d
  Authored by Alex Elder
      An extent type osd operation currently implies that there will
      be corresponding data supplied in the data portion of the request
      (for write) or response (for read) message.  Similarly, an osd class
      method operation implies a data item will be supplied to receive
      the response data from the operation.
      
Add a ceph_osd_data pointer to each of those structures, and assign
it to point to either the incoming or the outgoing data structure in
the osd message.  The data is not always available when an op is
initially set up, so add two new functions to allow setting them
after the op has been initialized.
      
      Begin to make use of the data item pointer available in the osd
      operation rather than the request data in or out structure in
      places where it's convenient.  Add some assertions to verify
      pointers are always set the way they're expected to be.
      
      This is a sort of stepping stone toward really moving the data
      into the osd request ops, to allow for some validation before
      making that jump.
      
      This is the first in a series of patches that resolve:
    http://tracker.ceph.com/issues/4657
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
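In outline, the change looks like this (a sketch with abbreviated
fields; the setter names follow the commit's description but should
be treated as assumptions):

    struct ceph_osd_data;   /* pages, pagelist, or bio; defined elsewhere */

    struct ceph_osd_req_op {
        unsigned short opcode;
        union {
            struct {
                unsigned long long offset, length;
                /* outgoing data for a write, incoming for a read */
                struct ceph_osd_data *osd_data;
            } extent;
            struct {
                const char *class_name, *method_name;
                /* receives the method's response data */
                struct ceph_osd_data *response_data;
            } cls;
        };
    };

    /* The data isn't always known at op-init time, so it can be
     * attached afterward: */
    static void osd_req_op_extent_osd_data(struct ceph_osd_req_op *op,
                                           struct ceph_osd_data *osd_data)
    {
        op->extent.osd_data = osd_data;
    }

    static void osd_req_op_cls_response_data(struct ceph_osd_req_op *op,
                                             struct ceph_osd_data *response_data)
    {
        op->cls.response_data = response_data;
    }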
• libceph: rename data out field in osd request op · 54d50649
  Authored by Alex Elder
There are fields "indata" and "indata_len" defined in the ceph osd
request op structure.  The "in" part is from the point of view of
the osd server, but is a little confusing here on the client side.
Change their names to use "request" instead of "in" to indicate that
they define data provided with the request (as opposed to the data
returned in the response).
      
      Rename the local variable in osd_req_encode_op() to match.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: keep source rather than message osd op array · 79528734
  Authored by Alex Elder
      An osd request keeps a pointer to the osd operations (ops) array
      that it builds in its request message.
      
      In order to allow each op in the array to have its own distinct
      data, we will need to keep track of each op's data, and that
      information does not go over the wire.
      
      As long as we're tracking the data we might as well just track the
      entire (source) op definition for each of the ops.  And if we're
      doing that, we'll have no more need to keep a pointer to the
      wire-encoded version.
      
      This patch makes the array of source ops be kept with the osd
      request structure, and uses that instead of the version encoded in
      the message in places where that was previously used.  The array
      will be embedded in the request structure, and the maximum number of
      ops we ever actually use is currently 2.  So reduce CEPH_OSD_MAX_OP
      to 2 to reduce the size of the structure.
      
      The result of doing this sort of ripples back up, and as a result
      various function parameters and local variables become unnecessary.
      
      Make r_num_ops be unsigned, and move the definition of struct
      ceph_osd_req_op earlier to ensure it's defined where needed.
      
It does not yet add per-op data; that's coming soon.
      
      This resolves:
    http://tracker.ceph.com/issues/4656
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
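A before-and-after sketch (fields trimmed; the "_before" structure
name is purely illustrative):

    struct ceph_osd_op;     /* wire-format op, lives in the message */

    /* Before: the request pointed into its request message, where the
     * ops sat already encoded for the wire. */
    struct ceph_osd_request_before {
        struct ceph_osd_op *r_request_ops;
    };

    /* After: the request owns its source ops outright, and the wire
     * form is generated from them only when the message is built.  At
     * most two ops are used in practice, so the array is embedded and
     * CEPH_OSD_MAX_OP shrinks to 2. */
    #define CEPH_OSD_MAX_OP 2

    struct ceph_osd_req_op { unsigned short opcode; /* ... */ };

    struct ceph_osd_request {
        unsigned int r_num_ops;     /* now unsigned */
        struct ceph_osd_req_op r_ops[CEPH_OSD_MAX_OP];
    };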
• libceph: define ceph_osd_data_length() · 23c08a9c
  Authored by Alex Elder
      One more osd data helper, which returns the length of the
      data item, regardless of its type.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
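A sketch of such a helper, assuming the osd data item is a tagged
union over pages, pagelist, and bio (representations simplified;
bio_stub stands in for struct bio):

    typedef unsigned long long u64;

    struct ceph_pagelist { u64 length; /* ... */ };
    struct bio_stub { unsigned int size; };     /* stand-in for struct bio */

    enum ceph_osd_data_type {
        CEPH_OSD_DATA_TYPE_NONE,
        CEPH_OSD_DATA_TYPE_PAGES,
        CEPH_OSD_DATA_TYPE_PAGELIST,
        CEPH_OSD_DATA_TYPE_BIO,
    };

    struct ceph_osd_data {
        enum ceph_osd_data_type type;
        union {
            struct { u64 length; } pages;
            struct ceph_pagelist *pagelist;
            struct bio_stub *bio;
        };
    };

    /* Length of the data item, whatever its representation. */
    static u64 ceph_osd_data_length(struct ceph_osd_data *osd_data)
    {
        switch (osd_data->type) {
        case CEPH_OSD_DATA_TYPE_PAGES:
            return osd_data->pages.length;
        case CEPH_OSD_DATA_TYPE_PAGELIST:
            return osd_data->pagelist->length;
        case CEPH_OSD_DATA_TYPE_BIO:
            return osd_data->bio->size;
        default:
            return 0;
        }
    }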
• libceph: define a few more helpers · c54d47bf
  Authored by Alex Elder
      Define ceph_osd_data_init() and ceph_osd_data_release() to clean up
      a little code.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: define osd data initialization helpers · 43bfe5de
  Authored by Alex Elder
Define and use functions that encapsulate the initialization of a
ceph_osd_data structure.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: compute incoming bytes once · 9fc6e064
  Authored by Alex Elder
      This is a simple change, extracting the number of incoming data
      bytes just once in handle_reply().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• ceph: build osd request message later for writepages · e5975c7c
  Authored by Alex Elder
      Hold off building the osd request message in ceph_writepages_start()
      until just before it will be submitted to the osd client for
      execution.
      
      We'll still create the request and allocate the page pointer array
      after we learn we have at least one page to write.  A local variable
      will be used to keep track of the allocated array of pages.  Wait
      until just before submitting the request for assigning that page
      array pointer to the request message.
      
Create and use a new function osd_req_op_extent_update() whose
purpose is to serve this one spot where the length value supplied
when an osd request's op was initially formatted might need to get
changed (reduced, never increased) before submitting the request.
      
      Previously, ceph_writepages_start() assigned the message header's
      data length because of this update.  That's no longer necessary,
      because ceph_osdc_build_request() will recalculate the right
      value to use based on the content of the ops in the request.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
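In sketch form (simplified types; the assert stands in for the
kernel's BUG_ON):

    #include <assert.h>

    typedef unsigned long long u64;

    struct ceph_osd_req_op {
        struct { u64 offset, length; } extent;
    };

    /* The one sanctioned post-format change to an op: writepages may
     * learn that the write is shorter than first planned.  The length
     * may only shrink, never grow. */
    static void osd_req_op_extent_update(struct ceph_osd_req_op *op, u64 length)
    {
        assert(length <= op->extent.length);
        op->extent.length = length;
    }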
• libceph: hold off building osd request · 02ee07d3
  Authored by Alex Elder
Defer building the osd request until just before submitting it in
all callers except ceph_writepages_start().  (That caller will be
handled in the next patch.)
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: don't build request in ceph_osdc_new_request() · acead002
  Authored by Alex Elder
      This patch moves the call to ceph_osdc_build_request() out of
      ceph_osdc_new_request() and into its caller.
      
This is in order to defer formatting osd operation information into
the request message until just before the request is started.

The only unusual (ab)user of ceph_osdc_build_request() is
ceph_writepages_start(), where the final length of a write request
may change (downward) based on the current inode size or the oldest
snapshot context with dirty data for the inode.

The remaining callers don't change anything in the request after it
has been built.
      
      This means the ops array is now supplied by the caller.  It also
      means there is no need to pass the mtime to ceph_osdc_new_request()
      (it gets provided to ceph_osdc_build_request()).  And rather than
      passing a do_sync flag, have the number of ops in the ops array
      supplied imply adding a second STARTSYNC operation after the READ or
      WRITE requested.
      
      This and some of the patches that follow are related to having the
      messenger (only) be responsible for filling the content of the
      message header, as described here:
    http://tracker.ceph.com/issues/4589
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: record message data length · a1930804
  Authored by Alex Elder
      Keep track of the length of the data portion for a message in a
      separate field in the ceph_msg structure.  This information has
      been maintained in wire byte order in the message header, but
      that's going to change soon.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: drop ceph_osd_request->r_con_filling_msg · ace6d3a9
  Authored by Alex Elder
      A field in an osd request keeps track of whether a connection is
      currently filling the request's reply message.  This patch gets rid
      of that field.
      
An osd request includes two messages--a request and a reply--and
they're both associated with the connection that existed to the
target osd at the time the request was created.
      
      An osd request can be dropped early, even when it's in flight.
      And at that time both messages are released.  It's possible the
      reply message has been supplied to its connection to receive
      an incoming response message at the time the osd request gets
      dropped.  So ceph_osdc_release_request() revokes that message
      from the connection before releasing it so things get cleaned up
      properly.
      
      Previously this may have caused a problem, because the connection
      that a message was associated with might have gone away before the
      revoke request.  And to avoid any problems using that connection,
      the osd client held a reference to it when it supplies its response
      message.
      
      However since this commit:
          38941f80 libceph: have messages point to their connection
      all messages hold a reference to the connection they are associated
      with whenever the connection is actively operating on the message
(i.e. while the message is queued to send or sending, and while
data is being received into it).  And if a message has no connection
      associated with it, ceph_msg_revoke_incoming() won't do anything
      when asked to revoke it.
      
      As a result, there is no need to keep an additional reference to the
      connection associated with a message when we hand the message to the
      messenger when it calls our alloc_msg() method to receive something.
      If the connection *were* operating on it, it would have its own
      reference, and if not, there's no work to be done when we need to
      revoke it.
      
      So get rid of the osd request's r_con_filling_msg field.
      
      This resolves:
    http://tracker.ceph.com/issues/4647
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: define ceph_decode_pgid() only once · ef4859d6
  Authored by Alex Elder
      There are two basically identical definitions of __decode_pgid()
      in libceph, one in "net/ceph/osdmap.c" and the other in
      "net/ceph/osd_client.c".  Get rid of both, and instead define
      a single inline version in "include/linux/ceph/osdmap.h".
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: drop mutex on error in handle_reply() · 8058fd45
  Authored by Alex Elder
      The osd client mutex is acquired just before getting a reference to
      a request in handle_reply().  However the error paths after that
      don't drop the mutex before returning as they should.
      
      Drop the mutex after dropping the request reference.  Also add a
      bad_mutex label at that point and use it so the failed request
      lookup case can be handled with the rest.
      
      This resolves:
    http://tracker.ceph.com/issues/4615
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
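A toy user-space model of the corrected unwinding (pthreads instead
of the kernel mutex; names illustrative): every exit path releases
exactly what it acquired, in reverse order of acquisition.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t request_mutex = PTHREAD_MUTEX_INITIALIZER;

    struct req { int refs; };
    static struct req table_entry = { 1 };

    static struct req *lookup_request(long tid)
    {
        return tid == 42 ? &table_entry : NULL;    /* toy lookup */
    }
    static void get_request(struct req *r) { r->refs++; }
    static void put_request(struct req *r) { r->refs--; }

    static int handle_reply(long tid, int decode_ok)
    {
        struct req *req;

        pthread_mutex_lock(&request_mutex);
        req = lookup_request(tid);
        if (!req)
            goto bad_mutex;     /* lookup failed: only the mutex is held */
        get_request(req);
        if (!decode_ok)
            goto bad_put;       /* drop the reference, then the mutex */

        /* ... normal completion ... */
        put_request(req);
        pthread_mutex_unlock(&request_mutex);
        return 0;

    bad_put:
        put_request(req);                      /* reference first ... */
    bad_mutex:
        pthread_mutex_unlock(&request_mutex);  /* ... then the mutex  */
        fprintf(stderr, "corrupt osd reply\n");
        return -1;
    }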
• libceph: use osd_req_op_extent_init() · b0270324
  Authored by Alex Elder
      Use osd_req_op_extent_init() in ceph_osdc_new_request() to
      initialize the one or two ops built in that function.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: clean up ceph_osdc_new_request() · d18d1e28
  Authored by Alex Elder
All callers of ceph_osdc_new_request() pass either CEPH_OSD_OP_READ
or CEPH_OSD_OP_WRITE as the opcode value.  The function assumes as
much, filling in the extent fields in the ops array it builds.  So
just assert that is the case, and don't bother calling
op_has_extent() before filling in the first osd operation in the
array.
      
      Define some local variables to gather the information to fill into
      the first op, and then fill in the op array all in one place.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: don't update op in calc_layout() · a19dadfb
  Authored by Alex Elder
In ceph_osdc_new_request() an array of osd operations is built up
and filled in partially within that function and partially in the
called function calc_layout().  Move the latter part back out to
ceph_osdc_new_request() so it's all done in one place.  This makes
it unnecessary to pass the op pointer to calc_layout(), so get rid
of that parameter.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: pass offset and length out of calc_layout() · 75d1c941
  Authored by Alex Elder
      The purpose of calc_layout() is to determine, given a file offset
      and length and a layout describing the placement of file data across
      objects, where in "object space" that data resides.
      
      Specifically, it determines which object should hold the first part
      of the specified range of file data, and the offset and length of
      data within that object.  The length will not exceed the bounds
      of the object, and the caller is informed of that maximum length.
      
      Add two parameters to calc_layout() to allow the object-relative
      offset and length to be passed back to the caller.
      
This is the first step toward having ceph_osdc_new_request() build
its osd op structure using osd_req_op_extent_init().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
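A toy model of the calc_layout() contract for the simplest case, a
layout with no striping (the real code handles striped layouts; this
just shows the out-parameter arithmetic):

    #include <stdio.h>

    typedef unsigned long long u64;

    /* Map a file range onto one object: report (via the new out
     * parameters) the offset and length within that object, clamped
     * so the range never crosses the object boundary. */
    static u64 calc_layout(u64 object_size, u64 off, u64 len,
                           u64 *objoff, u64 *objlen)
    {
        u64 objnum = off / object_size;

        *objoff = off % object_size;
        *objlen = len;
        if (*objoff + *objlen > object_size)
            *objlen = object_size - *objoff;   /* clamp to the object */
        return objnum;
    }

    int main(void)
    {
        u64 objoff, objlen;
        u64 objnum = calc_layout(4 << 20, (4 << 20) + 100, 8 << 20,
                                 &objoff, &objlen);

        /* object 1, offset 100, length clamped to 4 MiB - 100 */
        printf("obj %llu off %llu len %llu\n", objnum, objoff, objlen);
        return 0;
    }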
• libceph: define source request op functions · 33803f33
  Authored by Alex Elder
      The rbd code has a function that allocates and populates a
      ceph_osd_req_op structure (the in-core version of an osd request
      operation).  When reviewed, Josh suggested two things: that the
      big varargs function might be better split into type-specific
      functions; and that this functionality really belongs in the osd
      client rather than rbd.
      
      This patch implements both of Josh's suggestions.  It breaks
      up the rbd function into separate functions and defines them
      in the osd client module as exported interfaces.  Unlike the
      rbd version, however, the functions don't allocate an osd_req_op
      structure; they are provided the address of one and that is
      initialized instead.
      
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines.  The rbd code now uses a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
      
      For now only the capabilities used by rbd are implemented.
      Implementing all the other osd op types, and making the rest of the
      code use it will be done separately, in the next few patches.
      
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used.  Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks they're
actually implemented or needed.  We can add them back again later if
needed, once we know they've been tested.
      
      This (and a few follow-on patches) resolves:
    http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: define osd_req_opcode_valid() · a8dd0a37
  Authored by Alex Elder
      Define a separate function to determine the validity of an opcode,
      and use it inside osd_req_encode_op() in order to unclutter that
      function.
      
      Don't update the destination op at all--and return zero--if an
      unsupported or unrecognized opcode is seen in osd_req_encode_op().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
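In sketch form (opcode values are placeholders; the real table
covers every CEPH_OSD_OP_* value):

    #include <stdbool.h>

    typedef unsigned short u16;

    #define CEPH_OSD_OP_READ      1    /* placeholder values */
    #define CEPH_OSD_OP_WRITE     2
    #define CEPH_OSD_OP_CALL      3
    #define CEPH_OSD_OP_STARTSYNC 4

    static bool osd_req_opcode_valid(u16 opcode)
    {
        switch (opcode) {
        case CEPH_OSD_OP_READ:
        case CEPH_OSD_OP_WRITE:
        case CEPH_OSD_OP_CALL:
        case CEPH_OSD_OP_STARTSYNC:
            return true;
        default:
            return false;
        }
    }

    /* osd_req_encode_op() can then return zero early, leaving the
     * destination op untouched, instead of carrying this table inline. */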
• libceph: be explicit in masking bottom 16 bits · 0baa1bd9
  Authored by Alex Elder
      In ceph_osdc_build_request() there is a call to cpu_to_le16() which
      provides a 64-bit value as its argument.  Because of the implied
      byte swapping going on it looked pretty suspect to me.
      
      At the moment it turns out the behavior is well defined, but masking
      off those bottom bits explicitly eliminates this distraction, and is
      in fact more directly related to the purpose of the message header's
      data_off field.
      
      This resolves:
    http://tracker.ceph.com/issues/4125
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
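The fix itself is tiny; in sketch form (cpu_to_le16() modeled as an
identity, as on a little-endian host, and encode_data_off() is a
hypothetical wrapper, not a kernel function):

    typedef unsigned long long u64;
    typedef unsigned short __le16;

    #define cpu_to_le16(x) ((__le16)(x))   /* identity on little-endian */

    /* The header's data_off field is only 16 bits wide; mask
     * explicitly rather than leaning on the cast inside cpu_to_le16()
     * to discard the upper 48 bits. */
    static __le16 encode_data_off(u64 off)
    {
        return cpu_to_le16(off & 0xffff);
    }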
• libceph: send queued requests when starting new one · 7e2766a1
  Authored by Alex Elder
      An osd expects the transaction ids of arriving request messages from
      a given client to a given osd to increase monotonically.  So the osd
      client needs to send its requests in ascending tid order.
      
      The transaction id for a request is set at the time it is
      registered, in __register_request().  This is also where the request
      gets placed at the end of the osd client's unsent messages list.
      
      At the end of ceph_osdc_start_request(), the request message for a
      newly-mapped osd request is supplied to the messenger to be sent
      (via __send_request()).  If any other messages were present in the
      osd client's unsent list at that point they would be sent *after*
      this new request message.
      
Because those unsent messages have already been registered, their
tids are lower than that of the newly-mapped request message, so
sending the new message first would violate the tid ordering rule.
      
      Rather than sending the new request only, send all queued requests
      (including the new one) at that point in ceph_osdc_start_request().
      This ensures the tid ordering property is preserved.
      
      With this in place, all messages should now be sent in tid order
      regardless of whether they're being sent for the first time or
      re-sent as a result of a call to osd_reset().
      
      This resolves:
    http://tracker.ceph.com/issues/4392
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
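A toy model of the idea (a singly linked list standing in for the
kernel's list_head machinery):

    struct req { unsigned long long tid; struct req *next; };

    static void send_request(struct req *r) { (void)r; /* hand to messenger */ }

    /* The unsent list is kept in ascending tid order; draining the
     * whole queue means a newly started request can never jump ahead
     * of an older, still-queued one. */
    static void send_queued(struct req **unsent)
    {
        while (*unsent) {
            struct req *r = *unsent;

            *unsent = r->next;      /* lowest tid leaves first */
            send_request(r);
        }
    }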
• libceph: keep request lists in tid order · ad885927
  Authored by Alex Elder
      In __map_request(), when adding a request to an osd client's unsent
      list, add it to the tail rather than the head.  That way the newest
      entries (with the highest tid value) will be last.
      
      Maintain an osd's request list in order of increasing tid also.
      
      Finally--to be consistent--maintain an osd client's "notarget" list
      in that order as well.
      
      This partially resolves:
    http://tracker.ceph.com/issues/4392
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
• libceph: requeue only sent requests when kicking · e02493c0
  Authored by Alex Elder
      The osd expects incoming requests for a given object from a given
      client to arrive in order, with the tid for each request being
      greater than the tid for requests that have already arrived.  This
      patch fixes two places the osd client might not maintain that
      ordering.
      
      For the osd client, the connection fault method is osd_reset().
      That function calls __reset_osd() to close and re-open the
      connection, then calls __kick_osd_requests() to cause all
      outstanding requests for the affected osd to be re-sent after
      the connection has been re-established.
      
      When an osd is reset, any in-flight messages will need to be
      re-sent.  An osd client maintains distinct lists for unsent and
      in-flight messages.  Meanwhile, an osd maintains a single list of
all its requests (both sent and unsent).  (Each message is linked
into two lists--one for the osd client and one for the osd.)
      
      To process an osd "kick" operation, the request list for the *osd*
      is traversed, and each request is moved off whichever osd *client*
      list it was on (unsent or sent) and placed onto the osd client's
      unsent list.  (It remains where it is on the osd's request list.)
      
      When that is done, osd_reset() calls __send_queued() to cause each
      of the osd client's unsent messages to be sent.
      
      OK, with that background...
      
      As the osd request list is traversed each request is prepended to
      the osd client's unsent list in the order they're seen.  The effect
      of this is to reverse the order of these requests as they are put
      (back) onto the unsent list.
      
      Instead, build up a list of only the requests for an osd that have
      already been sent (by checking their r_sent flag values).  Once an
      unsent request is found, stop examining requests and prepend the
      requests that need re-sending to the osd client's unsent list.
      
      Preserve the original order of requests in the process (previously
      re-queued requests were reversed in this process).  Because they
      have already been sent, they will have lower tids than any request
      already present on the unsent list.
      
      Just below that, traverse the linger list in forward order as
      before, but add them to the *tail* of the list rather than the head.
These requests get re-registered, and in the process are given a new
(higher) tid, so they should go at the end.
      
      This partially resolves:
    http://tracker.ceph.com/issues/4392
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
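A toy model of the fixed kick logic (one singly linked list stands in
for the osd's request list and another for the client's unsent list;
in the kernel each request sits on two independent lists):

    struct req { unsigned long long tid; int r_sent; struct req *next; };

    /* Collect the run of already-sent requests at the front of the
     * osd's list (they carry the lowest tids), keep their order, and
     * put them ahead of the client's unsent list. */
    static struct req *requeue_sent(struct req *osd_list, struct req *unsent)
    {
        struct req *head = NULL, **tailp = &head;
        struct req *r = osd_list;

        while (r && r->r_sent) {
            *tailp = r;             /* append: original order preserved */
            tailp = &r->next;
            r = r->next;
        }
        *tailp = unsent;            /* re-sent run precedes the unsent run */
        return head;
    }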
• libceph: no more kick_requests() race · 92451b49
  Authored by Alex Elder
      Since we no longer drop the request mutex between registering and
      mapping an osd request in ceph_osdc_start_request(), there is no
      chance of a race with kick_requests().
      
      We can now therefore map and send the new request unconditionally
      (but we'll issue a warning should it ever occur).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
• libceph: slightly defer registering osd request · dc4b870c
  Authored by Alex Elder
      One of the first things ceph_osdc_start_request() does is register
      the request.  It then acquires the osd client's map semaphore and
      request mutex and proceeds to map and send the request.
      
      There is no reason the request has to be registered before acquiring
      the map semaphore.  So hold off doing so until after the map
      semaphore is held.
      
      Since register_request() is nothing more than a wrapper around
      __register_request(), call the latter function instead, after
      acquiring the request mutex.
      
      That leaves register_request() unused, so get rid of it.
      
      This partially resolves:
    http://tracker.ceph.com/issues/4392
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
• libceph: wrap auth ops in wrapper functions · 27859f97
  Authored by Sage Weil
      Use wrapper functions that check whether the auth op exists so that callers
      do not need a bunch of conditional checks.  Simplifies the external
      interface.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
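The pattern, in sketch form (structure and op names abbreviated;
is_authenticated is just one example of a wrapped op):

    struct ceph_auth_client;

    struct ceph_auth_client_ops {
        int (*is_authenticated)(struct ceph_auth_client *ac);
    };

    struct ceph_auth_client {
        const struct ceph_auth_client_ops *ops;
    };

    /* Safe to call whether or not the auth method provides the op. */
    static int ceph_auth_is_authenticated(struct ceph_auth_client *ac)
    {
        if (!ac->ops || !ac->ops->is_authenticated)
            return 0;
        return ac->ops->is_authenticated(ac);
    }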
• libceph: add update_authorizer auth method · 0bed9b5c
  Authored by Sage Weil
      Currently the messenger calls out to a get_authorizer con op, which will
      create a new authorizer if it doesn't yet have one.  In the meantime, when
      we rotate our service keys, the authorizer doesn't get updated.  Eventually
      it will be rejected by the server on a new connection attempt and get
      invalidated, and we will then rebuild a new authorizer, but this is not
      ideal.
      
      Instead, if we do have an authorizer, call a new update_authorizer op that
      will verify that the current authorizer is using the latest secret.  If it
      is not, we will build a new one that does.  This avoids the transient
      failure.
      
This fixes one link in the sorry sequence of events behind bug

    http://tracker.ceph.com/issues/4282
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@inktank.com>
• libceph: kill osd request r_trail · 95e072eb
  Authored by Alex Elder
      The osd trail is a pagelist, used only for a CALL osd operation
      to hold the class and method names, along with any input data for
      the call.
      
      It is only currently used by the rbd client, and when it's used it
      is the only bit of outbound data in the osd request.  Since we
      already support (non-trail) pagelist data in a message, we can
      just save this outbound CALL data in the "normal" pagelist rather
      than the trail, and get rid of the trail entirely.
      
      The existing pagelist support depends on the pagelist being
      dynamically allocated, and ownership of it is passed to the
      messenger once it's been attached to a message.  (That is to say,
      the messenger releases and frees the pagelist when it's done with
      it).  That means we need to dynamically allocate the pagelist also.
      
      Note that we simply assert that the allocation of a pagelist
      structure succeeds.  Appending to a pagelist might require a dynamic
      allocation, so we're already assuming we won't run into trouble
doing so (we just ignore any failures--and that should be fixed
at some point).
      
      This resolves:
    http://tracker.ceph.com/issues/4407
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: have osd requests support pagelist data · 9a5e6d09
  Authored by Alex Elder
      Add support for recording a ceph pagelist as data associated with an
      osd request.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: let osd ops determine request data length · 175face2
  Authored by Alex Elder
      The length of outgoing data in an osd request is dependent on the
      osd ops that are embedded in that request.  Each op is encoded into
      a request message using osd_req_encode_op(), so that should be used
      to determine the amount of outgoing data implied by the op as it
      is encoded.
      
      Have osd_req_encode_op() return the number of bytes of outgoing data
      implied by the op being encoded, and accumulate and use that in
      ceph_osdc_build_request().
      
      As a result, ceph_osdc_build_request() no longer requires its "len"
      parameter, so get rid of it.
      
      Using the sum of the op lengths rather than the length provided is
      a valid change because:
    - The only callers of ceph_osdc_build_request() are
      rbd and the osd client (in ceph_osdc_new_request() on
      behalf of the file system).
          - When rbd calls it, the length provided is only non-zero for
            write requests, and in that case the single op has the
            same length value as what was passed here.
          - When called from ceph_osdc_new_request(), (it's not all that
            easy to see, but) the length passed is also always the same
            as the extent length encoded in its (single) write op if
            present.
      
      This resolves:
    http://tracker.ceph.com/issues/4406
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
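In sketch form (greatly simplified; writes_data stands in for the
real op-type checks):

    typedef unsigned long long u64;

    struct ceph_osd_req_op {
        int writes_data;            /* e.g. a WRITE, or a CALL with input */
        struct { u64 length; } extent;
    };

    /* Each op reports the outbound data it implies as it is encoded... */
    static u64 osd_req_encode_op(struct ceph_osd_req_op *src)
    {
        /* ... encode src into the request message here ... */
        return src->writes_data ? src->extent.length : 0;
    }

    /* ... and ceph_osdc_build_request() accumulates the total instead
     * of trusting a caller-supplied "len". */
    static u64 request_data_len(struct ceph_osd_req_op *ops, unsigned int num_ops)
    {
        u64 data_len = 0;
        unsigned int i;

        for (i = 0; i < num_ops; i++)
            data_len += osd_req_encode_op(&ops[i]);
        return data_len;
    }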
• libceph: set response data fields earlier · 70636773
  Authored by Alex Elder
      When an incoming message is destined for the osd client, the
      messenger calls the osd client's alloc_msg method.  That function
      looks up which request has the tid matching the incoming message,
and returns the response message that was preallocated to receive
it.  The response message is therefore known before the
      request is even started.
      
      Between the start of the request and the receipt of the response,
      the request and its data fields will not change, so there's no
      reason we need to hold off setting them.  In fact it's preferable
      to set them just once because it's more obvious that they're
      unchanging.
      
      So set up the fields describing where incoming data is to land in a
      response message at the beginning of ceph_osdc_start_request().
      Define a helper function that sets these fields, and use it to
      set the fields for both outgoing data in the request message and
      incoming data in the response.
      
      This resolves:
    http://tracker.ceph.com/issues/4284
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• ceph: only set message data pointers if non-empty · ebf18f47
  Authored by Alex Elder
      Change it so we only assign outgoing data information for messages
      if there is outgoing data to send.
      
      This then allows us to add a few more (currently commented-out)
      assertions.
      
      This is related to:
    http://tracker.ceph.com/issues/4284
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
• libceph: isolate other message data fields · 27fa8385
  Authored by Alex Elder
      Define ceph_msg_data_set_pagelist(), ceph_msg_data_set_bio(), and
      ceph_msg_data_set_trail() to clearly abstract the assignment of the
      remaining data-related fields in a ceph message structure.  Use the
      new functions in the osd client and mds client.
      
      This partially resolves:
    http://tracker.ceph.com/issues/4263
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: set page info with byte length · f1baeb2b
  Authored by Alex Elder
When setting page array information for message data, provide the
byte length rather than the page count to ceph_msg_data_set_pages().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
• libceph: isolate message page field manipulation · 02afca6c
  Authored by Alex Elder
Define a function ceph_msg_data_set_pages(), which more clearly
abstracts the assignment of page-related fields for data in a ceph
message structure.  Use this new function in the osd client and mds
client.
      
      Ideally, these fields would never be set more than once (with
      BUG_ON() calls to guarantee that).  At the moment though the osd
      client sets these every time it receives a message, and in the event
      of a communication problem this can happen more than once.  (This
      will be resolved shortly, but setting up these helpers first makes
      it all a bit easier to work with.)
      
      Rearrange the field order in a ceph_msg structure to group those
      that are used to define the possible data payloads.
      
      This partially resolves:
    http://tracker.ceph.com/issues/4263
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>