1. 02 May, 2013 (40 commits)
    • libceph: be explicit in masking bottom 16 bits · 0baa1bd9
      Committed by Alex Elder
      In ceph_osdc_build_request() there is a call to cpu_to_le16() which
      provides a 64-bit value as its argument.  Because of the implied
      byte swapping going on it looked pretty suspect to me.
      
      At the moment it turns out the behavior is well defined, but masking
      off those bottom bits explicitly eliminates this distraction, and is
      in fact more directly related to the purpose of the message header's
      data_off field.
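      
      A minimal sketch of the idea (illustrative, not the actual diff; "off"
      here stands for a hypothetical 64-bit offset variable):
      
          /* The mask makes the 16-bit truncation explicit, so a reader
           * need not puzzle over a 64-bit argument to cpu_to_le16(). */
          msg->hdr.data_off = cpu_to_le16(off & 0xffff);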
      
      This resolves:
          http://tracker.ceph.com/issues/4125
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      0baa1bd9
    • libceph: account for alignment in pages cursor · 56fc5659
      Committed by Alex Elder
      When a cursor for a page array data message is initialized it needs
      to determine the initial value for cursor->last_piece.  Currently it
      just checks if length is less than a page, but that's not correct.
      The data in the first page in the array will be offset by a page
      offset based on the alignment recorded for the data.  (All pages
      thereafter will be aligned at the base of the page, so there's
      no need to account for this except for the first page.)
      
      Because this was wrong, there was a case where the length of a piece
      would be calculated as all of the residual bytes in the message, and
      that plus the page offset could exceed the length of a page.
      
      So fix this case.  Make sure the sum won't wrap.
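      
      A sketch of the corrected test, assuming cursor fields as described
      above (page_offset holds the in-page alignment of the first page):
      
          /* The first piece starts at page_offset, so it is the last
           * piece only if offset plus length fits within one page.
           * Do the math in 64 bits so the sum cannot wrap. */
          cursor->last_piece =
                  (u64)cursor->page_offset + (u64)length <= PAGE_SIZE;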
      
      This resolves a third issue described in:
          http://tracker.ceph.com/issues/4598
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      56fc5659
    • libceph: page offset must be less than page size · 5df521b1
      Committed by Alex Elder
      Currently ceph_msg_data_pages_advance() allows the page offset value
      to be PAGE_SIZE, apparently assuming ceph_msg_data_pages_next() will
      treat it as 0.  But that doesn't happen, and the result led to a
      helpful assertion failure.
      
      Change ceph_msg_data_pages_advance() to truncate the offset to 0
      before returning if it reaches PAGE_SIZE.
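      
      In kernel idiom the truncation looks something like this (a sketch;
      ~PAGE_MASK keeps only the in-page bits of the sum):
      
          /* advance, then wrap the in-page offset back to 0 at a page
           * boundary, so it is always strictly less than PAGE_SIZE */
          cursor->page_offset = (cursor->page_offset + bytes) & ~PAGE_MASK;
          BUG_ON(cursor->page_offset >= PAGE_SIZE);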
      
      Make a few other minor adjustments in this area (comments and a
      better assertion) while modifying it.
      
      This resolves a second issue described in:
          http://tracker.ceph.com/issues/4598
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      5df521b1
    • libceph: fix broken data length assertions · 1190bf06
      Committed by Alex Elder
      It's OK for the result of a read to come back with fewer bytes than
      were requested.  So don't trigger a BUG() in that case when
      initializing the data cursor.
      
      This resolves the first problem described in:
          http://tracker.ceph.com/issues/4598
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      1190bf06
    • libceph: make message data be a pointer · 6644ed7b
      Committed by Alex Elder
      Begin the transition from a single message data item to a list of
      them by replacing the "data" structure in a message with a pointer
      to a ceph_msg_data structure.
      
      A null pointer will indicate the message has no data; replace the
      use of ceph_msg_has_data() with a simple check for a null pointer.
      
      Create functions ceph_msg_data_create() and ceph_msg_data_destroy()
      to dynamically allocate and free a data item structure of a given type.
      
      When a message has its data item "set," allocate one of these to
      hold the data description, and free it when the last reference to
      the message is dropped.
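      
      A sketch of the shape of those helpers (the GFP flag and field names
      here are assumptions, not the exact kernel code):
      
          static struct ceph_msg_data *ceph_msg_data_create(
                                          enum ceph_msg_data_type type)
          {
                  struct ceph_msg_data *data;
      
                  data = kzalloc(sizeof(*data), GFP_NOFS); /* flag assumed */
                  if (data)
                          data->type = type;
                  return data;
          }
      
          static void ceph_msg_data_destroy(struct ceph_msg_data *data)
          {
                  kfree(data);
          }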
      
      This partially resolves:
          http://tracker.ceph.com/issues/4429
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      6644ed7b
    • libceph: use only ceph_msg_data_advance() · 8ea299bc
      Committed by Alex Elder
      The *_msg_pos_next() functions do little more than call
      ceph_msg_data_advance().  Replace those wrapper functions with
      a simple call to ceph_msg_data_advance().
      
      This cleanup is related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      8ea299bc
    • libceph: don't add to crc unless data sent · 143334ff
      Committed by Alex Elder
      In write_partial_message_data() we aggregate the crc for the data
      portion of the message as each new piece of the data item is
      encountered.  Because it was computed *before* sending the data, if
      an attempt to send a new piece resulted in 0 bytes being sent, the
      crc across that piece would erroneously get computed again and
      added to the aggregate result.  This would occasionally happen in
      the event of a connection failure.
      
      The crc value isn't really needed until the complete value is known
      after sending all data, so there's no need to compute it before
      sending.
      
      So don't calculate the crc for a piece until *after* we know at
      least one byte of it has been sent.  That will avoid this problem.
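      
      Sketched in write_partial_message_data() terms (illustrative; the
      helper names follow the messenger code described in these commits):
      
          ret = ceph_tcp_sendpage(con->sock, page, page_offset,
                                  length, last_piece);
          if (ret <= 0)
                  break;          /* nothing sent: leave the crc alone */
          if (do_datacrc)         /* aggregate only after bytes went out */
                  crc = ceph_crc32c_page(crc, page, page_offset, length);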
      
      This resolves:
          http://tracker.ceph.com/issues/4450
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      143334ff
    • libceph: kill last of ceph_msg_pos · f5db90bc
      Committed by Alex Elder
      The only remaining field in the ceph_msg_pos structure is
      did_page_crc.  In the new cursor model of things that flag (or
      something like it) belongs in the cursor.
      
      Define a new field "need_crc" in the cursor (which applies to all
      types of data) and initialize it to true whenever a cursor is
      initialized.
      
      In write_partial_message_data(), the data CRC still will be computed
      as before, but it will check the cursor->need_crc field to determine
      whether it's needed.  Any time the cursor is advanced to a new piece
      of a data item, need_crc will be set, and this will cause the crc
      for that entire piece to be accumulated into the data crc.
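      
      A sketch of how the flag gates the computation (illustrative, under
      the field names described above):
      
          if (do_datacrc && cursor->need_crc) {
                  crc = ceph_crc32c_page(crc, page, page_offset, length);
                  cursor->need_crc = false;  /* crc this piece only once */
          }
          /* ceph_msg_data_advance() sets need_crc again whenever it
           * moves the cursor onto a new piece of the data item */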
      
      In write_partial_message_data() the intermediate crc value is now
      held in a local variable so it doesn't have to be byte-swapped so
      many times.  In read_partial_msg_data() we do something similar
      (but mainly for consistency there).
      
      With that, the ceph_msg_pos structure can go away, and it no longer
      needs to be passed as an argument to prepare_message_data().
      
      This cleanup is related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      f5db90bc
    • libceph: kill most of ceph_msg_pos · 859a35d5
      Committed by Alex Elder
      All but one of the fields in the ceph_msg_pos structure are now
      never used (only assigned), so get rid of them.  This allows
      several small blocks of code to go away.
      
      This is cleanup of old code related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      859a35d5
    • libceph: use cursor resid for loop condition · 643c68a4
      Committed by Alex Elder
      Use the "resid" field of a cursor rather than finding when the
      message data position has moved up to meet the data length to
      determine when all data has been sent or received in
      write_partial_message_data() and read_partial_msg_data().
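      
      The receive side then reduces to a loop of roughly this shape (a
      sketch; names follow the cursor routines introduced in this series):
      
          while (cursor->resid) {
                  page = ceph_msg_data_next(cursor, &page_offset,
                                            &length, NULL);
                  ret = ceph_tcp_recvpage(con->sock, page,
                                          page_offset, length);
                  if (ret <= 0)
                          return ret;
                  ceph_msg_data_advance(cursor, (size_t)ret);
          }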
      
      This is cleanup of old code related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      643c68a4
    • libceph: collapse all data items into one · 4c59b4a2
      Committed by Alex Elder
      It turns out that only one of the data item types is ever used at
      any one time in a single message (currently).
          - A page array is used by the osd client (on behalf of the file
            system) and by rbd.  Only one osd op (and therefore at most
            one data item) is ever used at a time by rbd.  And the only
            time the file system sends two, the second op contains no
            data.
          - A bio is only used by the rbd client (and again, only one
        data item per message).
          - A page list is used by the file system and by rbd for outgoing
            data, but only one op (and one data item) at a time.
      
      We can therefore collapse all three of our data item fields into a
      single field "data", and depend on the messenger code to properly
      handle it based on its type.
      
      This allows us to eliminate quite a bit of duplicated code.
      
      This is related to:
          http://tracker.ceph.com/issues/4429
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      4c59b4a2
    • libceph: get rid of read helpers · 686be208
      Committed by Alex Elder
      Now that read_partial_message_pages() and read_partial_message_bio()
      are literally identical functions we can factor them out.  They're
      pretty simple as well, so just move their relevant content into
      read_partial_msg_data().
      
      This and the previous patches together resolve:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      686be208
    • libceph: no outbound zero data · 61fcdc97
      Committed by Alex Elder
      There is handling in write_partial_message_data() for the case where
      only the length of--and no other information about--the data to be
      sent has been specified.  It uses the zero page as the source of
      data to send in this case.
      
      This case doesn't occur.  All message senders set up a page array,
      pagelist, or bio describing the data to be sent.  So eliminate the
      block of code that handles this (but check and issue a warning for
      now, just in case it happens for some reason).
      
      This resolves:
          http://tracker.ceph.com/issues/4426
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      61fcdc97
    • libceph: use cursor for inbound data pages · 878efabd
      Committed by Alex Elder
      The cursor code for a page array selects the right page, page
      offset, and length to use for a ceph_tcp_recvpage() call, so
      we can use it to replace a block in read_partial_message_pages().
      
      This partially resolves:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      878efabd
    • libceph: kill ceph message bio_iter, bio_seg · 6518be47
      Committed by Alex Elder
      The bio_iter and bio_seg fields in a message are no longer used; we
      use the cursor instead.  So get rid of them and the functions that
      operate on them.
      
      This is related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      6518be47
    • libceph: use cursor for bio reads · 463207aa
      Committed by Alex Elder
      Replace the use of the information in con->in_msg_pos for incoming
      bio data.  The old in_msg_pos and the new cursor mechanism do
      basically the same thing, just slightly differently.
      
      The main functional difference is that in_msg_pos keeps track of the
      length of the complete bio list, and assumes it is fully consumed
      when that many bytes have been transferred.  The cursor does not
      assume a length; it simply consumes all bytes in the bio list.
      Because the only user of bio data is the rbd client, and because the
      length of a bio list provided by the rbd client always matches the
      number of bytes in the list, both ways of tracking length are
      equivalent.
      
      In addition, for in_msg_pos the initial bio vector is selected based
      on the initial value of bio->bi_idx, while the cursor assumes it is
      zero.  Again, the rbd client always passes 0 as the initial index,
      so the effect is the same.
      
      Other than that, they basically match:
          in_msg_pos      cursor
          ----------      ------
          bio_iter        bio
          bio_seg         vec_index
          page_pos        page_offset
      
      The in_msg_pos field is initialized by a call to init_bio_iter().
      The bio cursor is initialized by ceph_msg_data_cursor_init().
      Both now happen in the same spot, in prepare_message_data().
      
      The in_msg_pos field is advanced by a call to in_msg_pos_next(),
      which updates page_pos and calls iter_bio_next() to move to the next
      bio vector, or to the next bio in the list.  The cursor is advanced
      by ceph_msg_data_advance().  That isn't currently happening so
      add a call to that in in_msg_pos_next().
      
      Finally, the next piece of data to use for a read is determined
      by a bunch of lines in read_partial_message_bio().  Those can be
      replaced by an equivalent ceph_msg_data_bio_next() call.
      
      This partially resolves:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      463207aa
    • libceph: record residual bytes for all message data types · 25aff7c5
      Committed by Alex Elder
      All of the data types can use this, not just the page array.  Until
      now, only the bio type didn't have it available, and only the
      initiator of the request (the rbd client) is able to supply the
      length of the full request without re-scanning the bio list.  Change
      the cursor init routines so the length is supplied based on the
      message header's "data_len" field, and use that length to initialize
      the "resid" field of the cursor.
      
      In addition, change the way "last_piece" is defined so it is based
      on the residual number of bytes in the original request.  This is
      necessary (at least for bio messages) because it is possible for
      a read request to succeed without consuming all of the space
      available in the data buffer.
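      
      Sketched with the field names used in this description (not the
      exact kernel code):
      
          /* at cursor init: take the length from the message header */
          cursor->resid = le32_to_cpu(msg->hdr.data_len);
      
          /* when selecting a piece: it is the last one if it covers
           * everything that remains, even if buffer space is left over */
          cursor->last_piece = cursor->resid <= piece_length;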
      
      This resolves:
          http://tracker.ceph.com/issues/4427
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      25aff7c5
    • libceph: drop pages parameter · 28a89dde
      Committed by Alex Elder
      The value passed for "pages" in read_partial_message_pages() is
      always the pages pointer from the incoming message, which can be
      derived inside that function.  So just get rid of the parameter.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      28a89dde
    • libceph: initialize data fields on last msg put · 888334f9
      Committed by Alex Elder
      When the last reference to a ceph message is dropped,
      ceph_msg_last_put() is called to clean things up.
      
      For "normal" messages (allocated via ceph_msg_new() rather than
      being allocated from a memory pool) it's sufficient to just release
      resources.  But for a mempool-allocated message we actually have to
      re-initialize the data fields in the message back to initial state
      so they're ready to go in the event the message gets reused.
      
      Some of this was already done; this fleshes it out so it's done
      more completely.
      
      This resolves:
          http://tracker.ceph.com/issues/4540
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      888334f9
    • libceph: send queued requests when starting new one · 7e2766a1
      Committed by Alex Elder
      An osd expects the transaction ids of arriving request messages from
      a given client to a given osd to increase monotonically.  So the osd
      client needs to send its requests in ascending tid order.
      
      The transaction id for a request is set at the time it is
      registered, in __register_request().  This is also where the request
      gets placed at the end of the osd client's unsent messages list.
      
      At the end of ceph_osdc_start_request(), the request message for a
      newly-mapped osd request is supplied to the messenger to be sent
      (via __send_request()).  If any other messages were present in the
      osd client's unsent list at that point they would be sent *after*
      this new request message.
      
      Because those unsent messages have already been registered, their
      tids would be lower than that of the newly-mapped request message, and
      sending that message first can violate the tid ordering rule.
      
      Rather than sending the new request only, send all queued requests
      (including the new one) at that point in ceph_osdc_start_request().
      This ensures the tid ordering property is preserved.
      
      With this in place, all messages should now be sent in tid order
      regardless of whether they're being sent for the first time or
      re-sent as a result of a call to osd_reset().
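      
      The tail of ceph_osdc_start_request() then looks roughly like this
      (a sketch built from the description above):
      
          /* was: __send_request(osdc, req) -- which could jump the queue.
           * Sending everything queued preserves ascending tid order. */
          if (req->r_osd)
                  __send_queued(osdc);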
      
      This resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      7e2766a1
    • libceph: keep request lists in tid order · ad885927
      Committed by Alex Elder
      In __map_request(), when adding a request to an osd client's unsent
      list, add it to the tail rather than the head.  That way the newest
      entries (with the highest tid value) will be last.
      
      Maintain an osd's request list in order of increasing tid also.
      
      Finally--to be consistent--maintain an osd client's "notarget" list
      in that order as well.
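      
      In list terms this is (a sketch; the list and field names are
      assumptions about this era's osd client, not verified code):
      
          /* queue at the tail so entries stay in ascending tid order */
          list_add_tail(&req->r_req_lru_item, &osdc->req_unsent);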
      
      This partially resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      ad885927
    • libceph: requeue only sent requests when kicking · e02493c0
      Committed by Alex Elder
      The osd expects incoming requests for a given object from a given
      client to arrive in order, with the tid for each request being
      greater than the tid for requests that have already arrived.  This
      patch fixes two places the osd client might not maintain that
      ordering.
      
      For the osd client, the connection fault method is osd_reset().
      That function calls __reset_osd() to close and re-open the
      connection, then calls __kick_osd_requests() to cause all
      outstanding requests for the affected osd to be re-sent after
      the connection has been re-established.
      
      When an osd is reset, any in-flight messages will need to be
      re-sent.  An osd client maintains distinct lists for unsent and
      in-flight messages.  Meanwhile, an osd maintains a single list of
      all its requests (both sent and unsent).  (Each message is linked
      into two lists--one for the osd client and one for the osd.)
      
      To process an osd "kick" operation, the request list for the *osd*
      is traversed, and each request is moved off whichever osd *client*
      list it was on (unsent or sent) and placed onto the osd client's
      unsent list.  (It remains where it is on the osd's request list.)
      
      When that is done, osd_reset() calls __send_queued() to cause each
      of the osd client's unsent messages to be sent.
      
      OK, with that background...
      
      As the osd request list is traversed, each request is prepended to
      the osd client's unsent list in the order it is seen.  The effect
      of this is to reverse the order of these requests as they are put
      (back) onto the unsent list.
      
      Instead, build up a list of only the requests for an osd that have
      already been sent (by checking their r_sent flag values).  Once an
      unsent request is found, stop examining requests and prepend the
      requests that need re-sending to the osd client's unsent list.
      
      Preserve the original order of requests in the process (previously,
      re-queued requests ended up reversed).  Because they
      have already been sent, they will have lower tids than any request
      already present on the unsent list.
      
      Just below that, traverse the linger list in forward order as
      before, but add them to the *tail* of the list rather than the head.
      These requests get re-registered, and in the process are given a new
      (higher) tid, so they should go at the end.
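      
      Putting the sent-request part together (a sketch, with assumed field
      names for the request links):
      
          LIST_HEAD(resend);
      
          /* collect already-sent requests, keeping their (tid) order */
          list_for_each_entry(req, &osd->o_requests, r_osd_item) {
                  if (!req->r_sent)
                          break;
                  list_move_tail(&req->r_req_lru_item, &resend);
          }
          /* they have the lowest tids, so they go on the front */
          list_splice(&resend, &osdc->req_unsent);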
      
      This partially resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      e02493c0
    • libceph: no more kick_requests() race · 92451b49
      Committed by Alex Elder
      Since we no longer drop the request mutex between registering and
      mapping an osd request in ceph_osdc_start_request(), there is no
      chance of a race with kick_requests().
      
      We can now therefore map and send the new request unconditionally
      (but we'll issue a warning should it ever occur).
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      92451b49
    • libceph: slightly defer registering osd request · dc4b870c
      Committed by Alex Elder
      One of the first things ceph_osdc_start_request() does is register
      the request.  It then acquires the osd client's map semaphore and
      request mutex and proceeds to map and send the request.
      
      There is no reason the request has to be registered before acquiring
      the map semaphore.  So hold off doing so until after the map
      semaphore is held.
      
      Since register_request() is nothing more than a wrapper around
      __register_request(), call the latter function instead, after
      acquiring the request mutex.
      
      That leaves register_request() unused, so get rid of it.
      
      This partially resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      dc4b870c
    • libceph: wrap auth methods in a mutex · e9966076
      Committed by Sage Weil
      The auth code is called from a variety of contexts, including the
      mon_client (protected by the monc's mutex) and the messenger
      callbacks (currently protected by nothing).  Avoid chaos by
      protecting all auth state with a mutex.  Nothing is blocking, so
      this should be simple and lightweight.
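      
      Sketched, with an assumed mutex field on the auth client and an
      assumed op name:
      
          mutex_lock(&ac->mutex);
          ret = ac->ops->handle_reply(ac, result, payload, payload_len);
          mutex_unlock(&ac->mutex);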
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
      e9966076
    • libceph: wrap auth ops in wrapper functions · 27859f97
      Committed by Sage Weil
      Use wrapper functions that check whether the auth op exists so that callers
      do not need a bunch of conditional checks.  Simplifies the external
      interface.
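      
      One such wrapper might look like this (a sketch of the pattern, not
      necessarily the exact signature):
      
          void ceph_auth_invalidate_authorizer(struct ceph_auth_client *ac,
                                               int peer_type)
          {
                  if (ac->ops && ac->ops->invalidate_authorizer)
                          ac->ops->invalidate_authorizer(ac, peer_type);
          }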
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
      27859f97
    • libceph: add update_authorizer auth method · 0bed9b5c
      Committed by Sage Weil
      Currently the messenger calls out to a get_authorizer con op, which will
      create a new authorizer if it doesn't yet have one.  In the meantime, when
      we rotate our service keys, the authorizer doesn't get updated.  Eventually
      it will be rejected by the server on a new connection attempt and get
      invalidated, and we will then rebuild a new authorizer, but this is not
      ideal.
      
      Instead, if we do have an authorizer, call a new update_authorizer op that
      will verify that the current authorizer is using the latest secret.  If it
      is not, we will build a new one that does.  This avoids the transient
      failure.
      
      This fixes one part of the sorry sequence of events for bug
      
          http://tracker.ceph.com/issues/4282
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
      0bed9b5c
    • libceph: fix authorizer invalidation · 4b8e8b5d
      Committed by Sage Weil
      We were invalidating the authorizer by removing the ticket handler
      entirely.  This was effective in inducing us to request a new authorizer,
      but in the meantime it meant that any authorizer we generated would get a
      new and initialized handler with secret_id=0, which would always be
      rejected by the server side with a confusing error message:
      
       auth: could not find secret_id=0
       cephx: verify_authorizer could not get service secret for service osd secret_id=0
      
      Instead, simply clear the validity field.  This will still induce the auth
      code to request a new secret, but will let us continue to use the old
      ticket in the meantime.  The messenger code will probably continue to fail,
      but the exponential backoff will kick in, and eventually we will get a
      new (hopefully more valid) ticket from the mon and be able to continue.
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
      4b8e8b5d
    • libceph: clear messenger auth_retry flag when we authenticate · 20e55c4c
      Committed by Sage Weil
      We maintain a counter of failed auth attempts to allow us to retry once
      before failing.  However, if the second attempt succeeds, the flag isn't
      cleared, which makes us think auth failed again later when the connection
      resets for other reasons (like a socket error).
      
      This is one part of the sorry sequence of events in bug
      
          http://tracker.ceph.com/issues/4282
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
      20e55c4c
    • libceph: implement RECONNECT_SEQ feature · 3a23083b
      Committed by Sage Weil
      This is an old protocol extension that allows the client and server to
      avoid resending old messages after a reconnect (following a socket error).
      Instead, they exchange their sequence numbers during the handshake.  This
      avoids sending a bunch of useless data over the socket.
      
      It has been supported in the server code since v0.22 (Sep 2010).
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
      3a23083b
    • ceph: fix buffer pointer advance in ceph_sync_write · 022f3e2e
      Committed by Henry C Chang
      We should advance the user data pointer by _len_ instead of _written_.
      _len_ is the data length written in each iteration, while _written_ is
      the accumulated data length we have written out.
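      
      The fix, sketched in place (variable names per the description):
      
          /* advance by this iteration's length, not the running total */
          data += len;            /* was: data += written; */
          written += len;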
      Signed-off-by: Henry C Chang <henry.cy.chang@gmail.com>
      Reviewed-by: Greg Farnum <greg@inktank.com>
      Tested-by: Sage Weil <sage@inktank.com>
      022f3e2e
    • ceph: use i_release_count to indicate dir's completeness · 2f276c51
      Committed by Yan, Zheng
      Current ceph code tracks a directory's completeness in two places.
      ceph_readdir() checks i_release_count to decide if it can set the
      I_COMPLETE flag in i_ceph_flags. All other places check the I_COMPLETE
      flag. This indirection introduces locking complexity.
      
      This patch adds a new variable i_complete_count to ceph_inode_info.
      Set it to i_release_count's value when marking a directory complete.
      By comparing the two variables, we know whether a directory is
      complete.
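      
      Sketched as a pair of helpers (close to what such a change implies,
      though the details are assumptions):
      
          static void __ceph_dir_set_complete(struct ceph_inode_info *ci,
                                              int release_count)
          {
                  /* snapshot the release count when marking complete */
                  ci->i_complete_count = release_count;
          }
      
          static bool __ceph_dir_is_complete(struct ceph_inode_info *ci)
          {
                  /* still complete only if nothing was released since */
                  return ci->i_complete_count ==
                         atomic_read(&ci->i_release_count);
          }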
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      2f276c51
    • libceph: more cleanup of write_partial_msg_pages() · 8a166d05
      Committed by Alex Elder
      Basically all cases in write_partial_msg_pages() use the cursor, and
      as a result we can simplify that function quite a bit.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      8a166d05
    • libceph: kill message trail · 9d2a06c2
      Committed by Alex Elder
      The wart that is the ceph message trail can now be removed, because
      its only user was the osd client, and the previous patch made that
      no longer the case.
      
      The result allows write_partial_msg_pages() to be simplified
      considerably.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      9d2a06c2
    • libceph: kill osd request r_trail · 95e072eb
      Committed by Alex Elder
      The osd trail is a pagelist, used only for a CALL osd operation
      to hold the class and method names, along with any input data for
      the call.
      
      It is only currently used by the rbd client, and when it's used it
      is the only bit of outbound data in the osd request.  Since we
      already support (non-trail) pagelist data in a message, we can
      just save this outbound CALL data in the "normal" pagelist rather
      than the trail, and get rid of the trail entirely.
      
      The existing pagelist support depends on the pagelist being
      dynamically allocated, and ownership of it is passed to the
      messenger once it's been attached to a message.  (That is to say,
      the messenger releases and frees the pagelist when it's done with
      it).  That means we need to dynamically allocate the pagelist also.
      
      Note that we simply assert that the allocation of a pagelist
      structure succeeds.  Appending to a pagelist might require a dynamic
      allocation, so we're already assuming we won't run into trouble
      doing so (we currently just ignore any failures--and that should be
      fixed at some point).
      
      This resolves:
          http://tracker.ceph.com/issues/4407
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      95e072eb
    • libceph: have osd requests support pagelist data · 9a5e6d09
      Committed by Alex Elder
      Add support for recording a ceph pagelist as data associated with an
      osd request.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      9a5e6d09
    • libceph: let osd ops determine request data length · 175face2
      Committed by Alex Elder
      The length of outgoing data in an osd request is dependent on the
      osd ops that are embedded in that request.  Each op is encoded into
      a request message using osd_req_encode_op(), so that should be used
      to determine the amount of outgoing data implied by the op as it
      is encoded.
      
      Have osd_req_encode_op() return the number of bytes of outgoing data
      implied by the op being encoded, and accumulate and use that in
      ceph_osdc_build_request().
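      
      In ceph_osdc_build_request() this amounts to roughly the following
      (a sketch; argument types abbreviated and names assumed):
      
          /* accumulate the outgoing data implied by each encoded op */
          data_len = 0;
          for (i = 0; i < req->r_num_ops; i++)
                  data_len += osd_req_encode_op(req, &dst[i], &src[i]);
          msg->hdr.data_len = cpu_to_le32(data_len);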
      
      As a result, ceph_osdc_build_request() no longer requires its "len"
      parameter, so get rid of it.
      
      Using the sum of the op lengths rather than the length provided is
      a valid change because:
    - The only callers of ceph_osdc_build_request() are
            rbd and the osd client (in ceph_osdc_new_request() on
            behalf of the file system).
          - When rbd calls it, the length provided is only non-zero for
            write requests, and in that case the single op has the
            same length value as what was passed here.
          - When called from ceph_osdc_new_request(), (it's not all that
            easy to see, but) the length passed is also always the same
            as the extent length encoded in its (single) write op if
            present.
      
      This resolves:
          http://tracker.ceph.com/issues/4406
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      175face2
    • libceph: implement pages array cursor · e766d7b5
      Committed by Alex Elder
      Implement and use cursor routines for page array message data items
      for outbound message data.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      e766d7b5
    • libceph: implement bio message data item cursor · 6aaa4511
      Committed by Alex Elder
      Implement and use cursor routines for bio message data items for
      outbound message data.
      
      (See the previous commit for reasoning in support of the changes
      in out_msg_pos_next().)
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      6aaa4511
    • libceph: use data cursor for message pagelist · 7fe1e5e5
      Committed by Alex Elder
      Switch to using the message cursor for the (non-trail) outgoing
      pagelist data item in a message if present.
      
      Notes on the logic changes in out_msg_pos_next():
          - only the mds client uses a ceph pagelist for message data;
          - if the mds client ever uses a pagelist, it never uses a page
            array (or anything else, for that matter) for data in the same
            message;
          - only the osd client uses the trail portion of a message data,
            and when it does, it never uses any other data fields for
            outgoing data in the same message; and finally
          - only the rbd client uses bio message data (never pagelist).
      
      Therefore out_msg_pos_next() can assume:
          - if we're in the trail portion of a message, the message data
            pagelist, data, and bio can be ignored; and
    - if there is a page list, there will never be any bio or page
      array data, and vice versa.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      7fe1e5e5