1. 14 December 2013, 1 commit
    • libceph: block I/O when PAUSE or FULL osd map flags are set · d29adb34
      Committed by Josh Durgin
      The PAUSEWR and PAUSERD flags are meant to stop the cluster from
      processing writes and reads, respectively. The FULL flag is set when
      the cluster determines that it is out of space, and will no longer
      process writes.  PAUSEWR and PAUSERD are purely client-side settings
      already implemented in userspace clients. The osd does nothing special
      with these flags.
      
      When the FULL flag is set, however, the osd responds to all writes
      with -ENOSPC. For cephfs, this makes sense, but for rbd the block
      layer translates this into EIO.  If a cluster goes from full to
      non-full quickly, a filesystem on top of rbd will not behave well,
      since some writes succeed while others get EIO.
      
      Fix this by blocking any writes when the FULL flag is set in the osd
      client. This is the same strategy used by userspace, so apply it by
      default.  A follow-on patch makes this configurable.
      
      __map_request() is called to re-target osd requests in case the
      available osds changed.  Add a paused field to a ceph_osd_request, and
      set it whenever an appropriate osd map flag is set.  Avoid queueing
      paused requests in __map_request(), but force them to be resent if
      they become unpaused.
      
      Also subscribe to the next osd map from the monitor if any of these
      flags are set, so paused requests can be unblocked as soon as
      possible.
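
      The idea reduces to a per-request check against the current map flags. A minimal stand-alone C sketch of that gating (the flag values and helper names here are illustrative, not the kernel's):

          #include <stdbool.h>
          #include <stdio.h>

          /* Illustrative flag bits; the real values live in the osd map code. */
          #define PAUSERD (1 << 0)
          #define PAUSEWR (1 << 1)
          #define FULL    (1 << 2)

          struct req { bool is_write; bool paused; };

          /* A write must wait if PAUSEWR or FULL is set; a read waits on PAUSERD. */
          static bool should_pause(const struct req *r, unsigned flags)
          {
              return r->is_write ? (flags & (PAUSEWR | FULL)) : (flags & PAUSERD);
          }

          int main(void)
          {
              struct req w = { .is_write = true, .paused = false };
              unsigned flags = FULL;

              w.paused = should_pause(&w, flags);   /* queued, not mapped */
              flags = 0;                            /* next osd map clears FULL */
              if (w.paused && !should_pause(&w, flags))
                  puts("unpaused: force resend");
              return 0;
          }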
      
      Fixes: http://tracker.ceph.com/issues/6079
      Reviewed-by: Sage Weil <sage@inktank.com>
      Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
      d29adb34
  2. 10 September 2013, 1 commit
  3. 10 July 2013, 1 commit
    • libceph: fix invalid unsigned->signed conversion for timespec encoding · 8b8cf891
      Committed by Josh Durgin
      __kernel_time_t is a long, which cannot hold a U32_MAX on 32-bit
      architectures.  Just drop this check as it has limited value.
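
      A stand-alone illustration of the conversion problem (user-space C, not the kernel code): with a 32-bit long, the unsigned 32-bit seconds value wraps to a negative number, so a range check against U32_MAX can never be meaningful.

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              uint32_t secs = UINT32_MAX;   /* largest value in a ceph timespec */
              long t = (long)secs;          /* __kernel_time_t is a long */

              /* On 32-bit builds sizeof(long) == 4 and t prints as -1. */
              printf("sizeof(long) = %zu, t = %ld\n", sizeof(long), t);
              return 0;
          }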
      
      This fixes a crash like:
      
      [  957.905812] kernel BUG at /srv/autobuild-ceph/gitbuilder.git/build/include/linux/ceph/decode.h:164!
      [  957.914849] Internal error: Oops - BUG: 0 [#1] SMP ARM
      [  957.919978] Modules linked in: rbd libceph libcrc32c ipmi_devintf ipmi_si ipmi_msghandler nfsd nfs_acl auth_rpcgss nfs fscache lockd sunrpc
      [  957.932547] CPU: 1    Tainted: G        W     (3.9.0-ceph-19bb6a83-highbank #1)
      [  957.939881] PC is at ceph_osdc_build_request+0x8c/0x4f8 [libceph]
      [  957.945967] LR is at 0xec520904
      [  957.949103] pc : [<bf13e76c>]    lr : [<ec520904>]    psr: 20000153
      [  957.949103] sp : ec753df8  ip : 00000001  fp : ec53e100
      [  957.960571] r10: ebef25c0  r9 : ec5fa400  r8 : ecbcc000
      [  957.965788] r7 : 00000000  r6 : 00000000  r5 : ffffffff  r4 : 00000020
      [  957.972307] r3 : 51cc8143  r2 : ec520900  r1 : ec753e58  r0 : ec520908
      [  957.978827] Flags: nzCv  IRQs on  FIQs off  Mode SVC_32  ISA ARM  Segment user
      [  957.986039] Control: 10c5387d  Table: 2c59c04a  DAC: 00000015
      [  957.991777] Process rbd (pid: 2138, stack limit = 0xec752238)
      [  957.997514] Stack: (0xec753df8 to 0xec754000)
      [  958.001864] 3de0:                                                       00000001 00000001
      [  958.010032] 3e00: 00000001 bf139744 ecbcc000 ec55a0a0 00000024 00000000 ebef25c0 fffffffe
      [  958.018204] 3e20: ffffffff 00000000 00000000 00000001 ec5fa400 ebef25c0 ec53e100 bf166b68
      [  958.026377] 3e40: 00000000 0000220f fffffffe ffffffff ec753e58 bf13ff24 51cc8143 05b25ed2
      [  958.034548] 3e60: 00000001 00000000 00000000 bf1688d4 00000001 00000000 00000000 00000000
      [  958.042720] 3e80: 00000001 00000060 ec5fa400 ed53d200 ed439600 ed439300 00000001 00000060
      [  958.050888] 3ea0: ec5fa400 ed53d200 00000000 bf16a320 00000000 ec53e100 00000040 ec753eb8
      [  958.059059] 3ec0: ec51df00 ed53d7c0 ed53d200 ed53d7c0 00000000 ed53d7c0 ec5fa400 bf16ed70
      [  958.067230] 3ee0: 00000000 00000060 00000002 ed53d200 00000000 bf16acf4 ed53d7c0 ec752000
      [  958.075402] 3f00: ed980e50 e954f5d8 00000000 00000060 ed53d240 ed53d258 ec753f80 c04f44a8
      [  958.083574] 3f20: edb7910c ec664700 01ade920 c02e4c44 00000060 c016b3dc ec51de40 01adfb84
      [  958.091745] 3f40: 00000060 ec752000 ec753f80 ec752000 00000060 c0108444 00000007 ec51de48
      [  958.099914] 3f60: ed0eb8c0 00000000 00000000 ec51de40 01adfb84 00000001 00000060 c0108858
      [  958.108085] 3f80: 00000000 00000000 51cc8143 00000060 01adfb84 00000007 00000004 c000dd68
      [  958.116257] 3fa0: 00000000 c000dbc0 00000060 01adfb84 00000007 01adfb84 00000060 01adfb80
      [  958.124429] 3fc0: 00000060 01adfb84 00000007 00000004 beded1a8 00000000 01adf2f0 01ade920
      [  958.132599] 3fe0: 00000000 beded180 b6811324 b6811334 800f0010 00000007 2e7f5821 2e7f5c21
      [  958.140815] [<bf13e76c>] (ceph_osdc_build_request+0x8c/0x4f8 [libceph]) from [<bf166b68>] (rbd_osd_req_format_write+0x50/0x7c [rbd])
      [  958.152739] [<bf166b68>] (rbd_osd_req_format_write+0x50/0x7c [rbd]) from [<bf1688d4>] (rbd_dev_header_watch_sync+0xe0/0x204 [rbd])
      [  958.164486] [<bf1688d4>] (rbd_dev_header_watch_sync+0xe0/0x204 [rbd]) from [<bf16a320>] (rbd_dev_image_probe+0x23c/0x850 [rbd])
      [  958.175967] [<bf16a320>] (rbd_dev_image_probe+0x23c/0x850 [rbd]) from [<bf16acf4>] (rbd_add+0x3c0/0x918 [rbd])
      [  958.185975] [<bf16acf4>] (rbd_add+0x3c0/0x918 [rbd]) from [<c02e4c44>] (bus_attr_store+0x20/0x2c)
      [  958.194850] [<c02e4c44>] (bus_attr_store+0x20/0x2c) from [<c016b3dc>] (sysfs_write_file+0x168/0x198)
      [  958.203984] [<c016b3dc>] (sysfs_write_file+0x168/0x198) from [<c0108444>] (vfs_write+0x9c/0x170)
      [  958.212768] [<c0108444>] (vfs_write+0x9c/0x170) from [<c0108858>] (sys_write+0x3c/0x70)
      [  958.220768] [<c0108858>] (sys_write+0x3c/0x70) from [<c000dbc0>] (ret_fast_syscall+0x0/0x30)
      [  958.229199] Code: e59d1058 e5913000 e3530000 ba000114 (e7f001f2)
      
      CC: stable@vger.kernel.org  # 3.4+
      Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      8b8cf891
  4. 04 July 2013, 1 commit
  5. 03 May 2013, 1 commit
  6. 02 May 2013, 35 commits
    • libceph: create source file "net/ceph/snapshot.c" · 4f0dcb10
      Committed by Alex Elder
      This creates a new source file "net/ceph/snapshot.c" to contain
      utility routines related to ceph snapshot contexts.  The main
      motivation was to define ceph_create_snap_context() as a common way
      to create these structures, but I've moved the definitions of
      ceph_get_snap_context() and ceph_put_snap_context() there too.
      (The benefit of inlining those is very small, and I'd rather
      keep this collection of functions together.)
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      4f0dcb10
    • libceph: validate timespec conversions · c3f56102
      Committed by Alex Elder
      A ceph timespec contains 32-bit unsigned values for its seconds and
      nanoseconds components.  For a standard timespec, both fields are
      signed, and the seconds field is almost surely 64 bits.
      
      Add some explicit casts so the fact that this conversion is taking
      place is obvious.  Also trip a bug if we ever try to put out of
      range (negative or too big) values into a ceph timespec.
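
      A sketch of a checked conversion in this spirit (stand-alone C; the real helpers live in <linux/ceph/decode.h> and trip BUG_ON rather than assert):

          #include <assert.h>
          #include <stdint.h>
          #include <time.h>

          /* Encode a (signed, possibly 64-bit) timespec into the unsigned
           * 32-bit fields ceph uses on the wire, tripping on out-of-range. */
          static void encode_ceph_timespec(uint32_t *sec, uint32_t *nsec,
                                           const struct timespec *ts)
          {
              assert(ts->tv_sec >= 0 && (uint64_t)ts->tv_sec <= UINT32_MAX);
              assert(ts->tv_nsec >= 0);
              *sec  = (uint32_t)ts->tv_sec;    /* explicit: conversion is visible */
              *nsec = (uint32_t)ts->tv_nsec;
          }

          int main(void)
          {
              struct timespec ts = { .tv_sec = 1367452800, .tv_nsec = 0 };
              uint32_t s, ns;

              encode_ceph_timespec(&s, &ns, &ts);
              return 0;
          }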
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      c3f56102
    • libceph: add signed type limits · b587398a
      Committed by Alex Elder
      Flesh out the limits defined in <linux/ceph/decode.h> to include the
      maximum and minimum values for the signed types S8, S16, S32, and S64.
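
      The signed limits can be derived from the unsigned ones; a stand-alone sketch in that style for the 8-bit case (the patch does the equivalent for S16, S32, and S64):

          #include <stdio.h>

          typedef signed char   s8;
          typedef unsigned char u8;

          #define U8_MAX  ((u8)~0U)
          #define S8_MAX  ((s8)(U8_MAX >> 1))          /* 127 */
          #define S8_MIN  ((s8)(-S8_MAX - 1))          /* -128 */

          int main(void)
          {
              printf("S8 range: [%d, %d]\n", S8_MIN, S8_MAX);
              return 0;
          }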
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      b587398a
    • libceph: support pages for class request data · 6c57b554
      Committed by Alex Elder
      Add the ability to provide an array of pages as outbound request
      data for object class method calls.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      6c57b554
    • libceph: support raw data requests · 49719778
      Committed by Alex Elder
      Allow osd request ops that aren't otherwise structured (not class,
      extent, or watch ops) to specify "raw" data to be used to hold
      incoming data for the op.  Make use of this capability for the osd
      STAT op.
      
      Prefix the name of the private function osd_req_op_init() with "_",
      and expose a new function by that (earlier) name whose purpose is to
      initialize osd ops with (only) implied data.
      
      For now we'll just support the use of a page array for an osd op
      with incoming raw data.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      49719778
    • libceph: kill off osd data write_request parameters · 406e2c9f
      Committed by Alex Elder
      In the incremental move toward supporting distinct data items in an
      osd request some of the functions had "write_request" parameters to
      indicate, basically, whether the data belonged to in_data or the
      out_data.  Now that we maintain the data fields in the op structure
      there is no need to indicate the direction, so get rid of the
      "write_request" parameters.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      406e2c9f
    • libceph: change how "safe" callback is used · 26be8808
      Committed by Alex Elder
      An osd request currently has two callbacks.  They inform the
      initiator of the request when we've received confirmation from the
      target osd that a request was received, and when the osd indicates
      all changes described by the request are durable.
      
      The only time the second callback is used is in the ceph file system
      for a synchronous write.  There's a race that makes some handling of
      this case unsafe.  This patch addresses this problem.  The error
      handling for this callback is also kind of gross, and this patch
      changes that as well.
      
      In ceph_sync_write(), if a safe callback is requested we want to add
      the request on the ceph inode's unsafe items list.  Because items on
      this list must have their tid set (by ceph_osd_start_request()), the
      request is added *after* the call to that function returns.  The
      problem with this is that there's a race between starting the
      request and adding it to the unsafe items list; the request may
      already be complete before ceph_sync_write() even begins to put it
      on the list.
      
      To address this, we change the way the "safe" callback is used.
      Rather than just calling it when the request is "safe", we use it to
      notify the initiator the bounds (start and end) of the period during
      which the request is *unsafe*.  So the initiator gets notified just
      before the request gets sent to the osd (when it is "unsafe"), and
      again when it's known the results are durable (it's no longer
      unsafe).  The first call will get made in __send_request(), just
      before the request message gets sent to the messenger for the first
      time.  That function is only called by __send_queued(), which is
      always called with the osd client's request mutex held.
      
      We then have this callback function insert the request on the ceph
      inode's unsafe list when we're told the request is unsafe.  This
      will avoid the race because this call will be made under protection
      of the osd client's request mutex.  It also nicely groups the setup
      and cleanup of the state associated with managing unsafe requests.
      
      The name of the "safe" callback field is changed to "unsafe" to
      better reflect its new purpose.  It has a Boolean "unsafe" parameter
      to indicate whether the request is becoming unsafe or is now safe.
      Because the "msg" parameter wasn't used, we drop that.
      
      This resolves the original problem reported in:
          http://tracker.ceph.com/issues/4706
      Reported-by: Yan, Zheng <zheng.z.yan@intel.com>
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      26be8808
    • libceph: make method call data be a separate data item · 04017e29
      Committed by Alex Elder
      Right now the data for a method call is specified via a pointer and
      length, and it's copied--along with the class and method name--into
      a pagelist data item to be sent to the osd.  Instead, encode the
      data in a data item separate from the class and method names.
      
      This will allow large amounts of data to be supplied to methods
      without copying.  Only rbd uses the class functionality right now,
      and when it really needs this it will probably need to use a page
      array rather than a page list.  But this simple implementation
      demonstrates the functionality on the osd client, and that's enough
      for now.
      
      This resolves:
          http://tracker.ceph.com/issues/4104
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      04017e29
    • libceph: add, don't set data for a message · 90af3602
      Committed by Alex Elder
      Change the names of the functions that put data on a pagelist to
      reflect that we're adding to whatever's already there rather than
      just setting it to the one thing.  Currently only one data item is
      ever added to a message, but that's about to change.
      
      This resolves:
          http://tracker.ceph.com/issues/2770
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      90af3602
    • libceph: implement multiple data items in a message · ca8b3a69
      Committed by Alex Elder
      This patch adds support to the messenger for more than one data item
      in its data list.
      
      A message data cursor has two more fields to support this:
          - a count of the number of bytes left to be consumed across
            all data items in the list, "total_resid"
          - a pointer to the head of the list (for validation only)
      
      The cursor initialization routine has been split into two parts: the
      outer one, which initializes the cursor for traversing the entire
      list of data items; and the inner one, which initializes the cursor
      to start processing a single data item.
      
      When a message cursor is first initialized, the outer initialization
      routine sets total_resid to the length provided.  The data pointer
      is initialized to the first data item on the list.  From there, the
      inner initialization routine finishes by setting up to process the
      data item the cursor points to.
      
      Advancing the cursor consumes bytes in total_resid.  If the resid
      field reaches zero, it means the current data item is fully
      consumed.  If total_resid indicates there is more data, the cursor
      is advanced to point to the next data item, and then the inner
      initialization routine prepares for using that.  (A check is made at
      this point to make sure we don't wrap around the front of the list.)
      
      The type-specific init routines are modified so they can be given a
      length that's larger than what the data item can support.  The resid
      field is initialized to the smaller of the provided length and the
      length of the entire data item.
      
      When total_resid reaches zero, we're done.
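
      A compact user-space model of that two-level traversal (data items reduced to plain lengths; names are illustrative):

          #include <assert.h>
          #include <stddef.h>
          #include <stdio.h>

          struct item { size_t len; };

          struct cursor {
              const struct item *item, *end;  /* current item / list bound */
              size_t resid;                   /* bytes left in current item */
              size_t total_resid;             /* bytes left across all items */
          };

          /* Inner init: prepare one data item, capped by what's left overall. */
          static void cursor_init_item(struct cursor *c)
          {
              c->resid = c->item->len < c->total_resid ? c->item->len
                                                       : c->total_resid;
          }

          /* Outer init: traverse up to `length` bytes across the whole list. */
          static void cursor_init(struct cursor *c, const struct item *items,
                                  size_t n, size_t length)
          {
              c->item = items;
              c->end = items + n;
              c->total_resid = length;
              cursor_init_item(c);
          }

          static void cursor_advance(struct cursor *c, size_t bytes)
          {
              c->resid -= bytes;
              c->total_resid -= bytes;
              if (c->resid == 0 && c->total_resid > 0) {
                  c->item++;
                  assert(c->item < c->end);   /* don't wrap around the list */
                  cursor_init_item(c);
              }
          }

          int main(void)
          {
              struct item items[] = { { 10 }, { 20 } };
              struct cursor c;

              cursor_init(&c, items, 2, 30);
              while (c.total_resid > 0) {
                  size_t chunk = c.resid < 8 ? c.resid : 8;
                  cursor_advance(&c, chunk);
                  printf("consumed %zu, %zu left overall\n", chunk, c.total_resid);
              }
              return 0;
          }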
      
      This resolves:
          http://tracker.ceph.com/issues/3761
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ca8b3a69
    • libceph: replace message data pointer with list · 5240d9f9
      Committed by Alex Elder
      In place of the message data pointer, use a list head which links
      through message data items.  For now we only support a single entry
      on that list.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      5240d9f9
    • libceph: have cursor point to data · 8ae4f4f5
      Committed by Alex Elder
      Rather than having a ceph message data item point to the cursor it's
      associated with, have the cursor point to a data item.  This will
      allow a message cursor to be used for more than one data item.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      8ae4f4f5
    • libceph: move cursor into message · 36153ec9
      Committed by Alex Elder
      A message will only be processing a single data item at a time, so
      there's no need for each data item to have its own cursor.
      
      Move the cursor embedded in the message data structure into the
      message itself.  To minimize the impact, keep the data->cursor
      field, but make it be a pointer to the cursor in the message.
      
      Move the definition of ceph_msg_data above ceph_msg_data_cursor so
      the cursor can point to the data without a forward definition rather
      than vice-versa.
      
      This and the upcoming patches are part of:
          http://tracker.ceph.com/issues/3761
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      36153ec9
    • libceph: record bio length · c851c495
      Committed by Alex Elder
      The bio is the only data item type that doesn't record its full
      length.  Fix that.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      c851c495
    • libceph: fix possible CONFIG_BLOCK build problem · ea96571f
      Committed by Alex Elder
      This patch:
          15a0d7b libceph: record message data length
      did not enclose some bio-specific code inside CONFIG_BLOCK as
      it should have.  Fix that.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ea96571f
    • libceph: kill off osd request r_data_in and r_data_out · 5476492f
      Committed by Alex Elder
      Finally!  Convert the osd op data pointers into real structures, and
      make the switch over to using them instead of having all ops share
      the in and/or out data structures in the osd request.
      
      Set up a new function to traverse the set of ops and release any
      data associated with them (pages).
      
      This and the patches leading up to it resolve:
          http://tracker.ceph.com/issues/4657
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      5476492f
    • libceph: set the data pointers when encoding ops · ec9123c5
      Committed by Alex Elder
      Still using the osd request r_data_in and r_data_out pointers, but
      we're basically only referring to them via the data pointers in the
      osd ops.  And we're transferring that information to the request
      or reply message only when the op indicates it's needed, in
      osd_req_encode_op().
      
      To avoid a forward reference, ceph_osdc_msg_data_set() was moved up
      in the file.
      
      Don't bother calling ceph_osd_data_init(), in ceph_osd_alloc(),
      because the ops array will already be zeroed anyway.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ec9123c5
    • libceph: combine initializing and setting osd data · a4ce40a9
      Committed by Alex Elder
      This ends up being a rather large patch but what it's doing is
      somewhat straightforward.
      
      Basically, this is replacing two calls with one.  The first of the
      two calls is initializing a struct ceph_osd_data with data (either a
      page array, a page list, or a bio list); the second is setting an
      osd request op so it associates that data with one of the op's
      parameters.  In place of those two will be a single function that
      initializes the op directly.
      
      That means we sort of fan out a set of the needed functions:
          - extent ops with pages data
          - extent ops with pagelist data
          - extent ops with bio list data
      and
          - class ops with page data for receiving a response
      
      We also define another one, but it's only used internally:
          - class ops with pagelist data for request parameters
      
      Note that we *still* haven't gotten rid of the osd request's
      r_data_in and r_data_out fields.  All the osd ops refer to them for
      their data.  For now, these data fields are pointers assigned to the
      appropriate r_data_* field when these new functions are called.
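
      Sketched prototypes for that fan-out (the names follow the pattern used elsewhere in this series; treat the exact signatures as an assumption):

          struct ceph_osd_request;

          /* extent ops: one initializer per data type */
          void osd_req_op_extent_osd_data_pages(struct ceph_osd_request *req,
                          unsigned int which, struct page **pages, u64 length,
                          u32 alignment, bool pages_from_pool, bool own_pages);
          void osd_req_op_extent_osd_data_pagelist(struct ceph_osd_request *req,
                          unsigned int which, struct ceph_pagelist *pagelist);
          void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *req,
                          unsigned int which, struct bio *bio, size_t bio_length);

          /* class ops: page array that receives the method's response */
          void osd_req_op_cls_response_data_pages(struct ceph_osd_request *req,
                          unsigned int which, struct page **pages, u64 length,
                          u32 alignment, bool pages_from_pool, bool own_pages);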
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      a4ce40a9
    • libceph: format class info at init time · 5f562df5
      Committed by Alex Elder
      An object class method is formatted using a pagelist which contains
      the class name, the method name, and the data concatenated into an
      osd request's outbound data.
      
      Currently when a class op is initialized in osd_req_op_cls_init(),
      the lengths of and pointers to these three items are recorded.
      Later, when the op is getting formatted into the request message, a
      new pagelist is created and that is when these items get copied into
      the pagelist.
      
      This patch makes it so the pagelist to hold these items is created
      when the op is initialized instead.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      5f562df5
    • libceph: specify osd op by index in request · c99d2d4a
      Committed by Alex Elder
      An osd request now holds all of its source op structures, and every
      place that initializes one of these is in fact initializing one
      of the entries in the osd request's array.
      
      So rather than supplying the address of the op to initialize, have
      the caller specify the osd request and an indication of which op it
      would like to initialize.  This better hides the details of the
      op structure (and facilitates moving the data pointers they use).
      
      Since osd_req_op_init() is a common routine, and it's not used
      outside the osd client code, give it static scope.  Also make
      it return the address of the specified op (so all the other
      init routines don't have to repeat that code).
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      c99d2d4a
    • libceph: add data pointers in osd op structures · 8c042b0d
      Committed by Alex Elder
      An extent type osd operation currently implies that there will
      be corresponding data supplied in the data portion of the request
      (for write) or response (for read) message.  Similarly, an osd class
      method operation implies a data item will be supplied to receive
      the response data from the operation.
      
      Add a ceph_osd_data pointer to each of those structures, and assign
      it to point to either the incoming or the outgoing data structure in
      the osd message.  The data is not always available when an op is
      initially set up, so add two new functions to allow setting them
      after the op has been initialized.
      
      Begin to make use of the data item pointer available in the osd
      operation rather than the request data in or out structure in
      places where it's convenient.  Add some assertions to verify
      pointers are always set the way they're expected to be.
      
      This is a sort of stepping stone toward really moving the data
      into the osd request ops, to allow for some validation before
      making that jump.
      
      This is the first in a series of patches that resolve:
          http://tracker.ceph.com/issues/4657
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      8c042b0d
    • libceph: rename data out field in osd request op · 54d50649
      Committed by Alex Elder
      There are fields "indata" and "indata_len" defined in the ceph osd
      request op structure.  The "in" part is from the point of view
      of the osd server, but is a little confusing here on the client
      side.  Change their names to use "request" instead of "in" to
      indicate that they define data provided with the request (as opposed
      to the data returned in the response).
      
      Rename the local variable in osd_req_encode_op() to match.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      54d50649
    • libceph: keep source rather than message osd op array · 79528734
      Committed by Alex Elder
      An osd request keeps a pointer to the osd operations (ops) array
      that it builds in its request message.
      
      In order to allow each op in the array to have its own distinct
      data, we will need to keep track of each op's data, and that
      information does not go over the wire.
      
      As long as we're tracking the data we might as well just track the
      entire (source) op definition for each of the ops.  And if we're
      doing that, we'll have no more need to keep a pointer to the
      wire-encoded version.
      
      This patch makes the array of source ops be kept with the osd
      request structure, and uses that instead of the version encoded in
      the message in places where that was previously used.  The array
      will be embedded in the request structure, and the maximum number of
      ops we ever actually use is currently 2.  So reduce CEPH_OSD_MAX_OP
      to 2 to reduce the size of the structure.
      
      The result of doing this sort of ripples back up, and as a result
      various function parameters and local variables become unnecessary.
      
      Make r_num_ops be unsigned, and move the definition of struct
      ceph_osd_req_op earlier to ensure it's defined where needed.
      
      It does not yet add per-op data; that's coming soon.
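
      In outline, the structural change looks like this (a sketch; only the fields discussed above are shown):

          #define CEPH_OSD_MAX_OP 2   /* reduced: at most two ops are ever used */

          struct ceph_osd_request_sketch {
              unsigned int           r_num_ops;                /* now unsigned */
              struct ceph_osd_req_op r_ops[CEPH_OSD_MAX_OP];   /* embedded source
                                                                * ops, not the
                                                                * wire encoding */
          };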
      
      This resolves:
          http://tracker.ceph.com/issues/4656
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      79528734
    • libceph: define osd data initialization helpers · 43bfe5de
      Committed by Alex Elder
      Define and use functions that encapsulate the initialization of a
      ceph_osd_data structure.
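
      For the page-array case the helper plausibly looks like this (a sketch; the function name and field list are reconstructed from neighboring commits, not quoted):

          static void ceph_osd_data_pages_init(struct ceph_osd_data *osd_data,
                                  struct page **pages, u64 length, u32 alignment,
                                  bool pages_from_pool, bool own_pages)
          {
              osd_data->type = CEPH_OSD_DATA_TYPE_PAGES;
              osd_data->pages = pages;
              osd_data->length = length;
              osd_data->alignment = alignment;
              osd_data->pages_from_pool = pages_from_pool;
              osd_data->own_pages = own_pages;
          }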
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      43bfe5de
    • ceph: build osd request message later for writepages · e5975c7c
      Committed by Alex Elder
      Hold off building the osd request message in ceph_writepages_start()
      until just before it will be submitted to the osd client for
      execution.
      
      We'll still create the request and allocate the page pointer array
      after we learn we have at least one page to write.  A local variable
      will be used to keep track of the allocated array of pages.  Wait
      until just before submitting the request for assigning that page
      array pointer to the request message.
      
      Create and use a new function osd_req_op_extent_update() whose
      purpose is to serve this one spot where the length value supplied
      when an osd request's op was initially formatted might need to get
      changed (reduced, never increased) before submitting the request.
      
      Previously, ceph_writepages_start() assigned the message header's
      data length because of this update.  That's no longer necessary,
      because ceph_osdc_build_request() will recalculate the right
      value to use based on the content of the ops in the request.
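
      The update helper only ever shrinks an op; conceptually (a sketch, not the exact kernel signature):

          /* Reduce the length of an already-formatted extent op; growing it
           * would outrun the buffers sized for the original length. */
          static void osd_req_op_extent_update_sketch(struct ceph_osd_req_op *op,
                                                      u64 length)
          {
              BUG_ON(length > op->extent.length);
              op->extent.length = length;
          }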
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      e5975c7c
    • libceph: don't build request in ceph_osdc_new_request() · acead002
      Committed by Alex Elder
      This patch moves the call to ceph_osdc_build_request() out of
      ceph_osdc_new_request() and into its caller.
      
      This is in order to defer formatting osd operation information into
      the request message until just before the request is started.
      
      The only unusual (ab)user of ceph_osdc_build_request() is
      ceph_writepages_start(), where the final length of write request may
      change (downward) based on the current inode size or the oldest
      snapshot context with dirty data for the inode.
      
      The remaining callers don't change anything in the request after it
      has been built.
      
      This means the ops array is now supplied by the caller.  It also
      means there is no need to pass the mtime to ceph_osdc_new_request()
      (it gets provided to ceph_osdc_build_request()).  And rather than
      passing a do_sync flag, have the number of ops in the ops array
      supplied imply adding a second STARTSYNC operation after the READ or
      WRITE requested.
      
      This and some of the patches that follow are related to having the
      messenger (only) be responsible for filling the content of the
      message header, as described here:
          http://tracker.ceph.com/issues/4589
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      acead002
    • libceph: record message data length · a1930804
      Committed by Alex Elder
      Keep track of the length of the data portion for a message in a
      separate field in the ceph_msg structure.  This information has
      been maintained in wire byte order in the message header, but
      that's going to change soon.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      a1930804
    • libceph: record length of bio list with bio · fdce58cc
      Committed by Alex Elder
      When assigning a bio pointer to an osd request, we don't have an
      efficient way of knowing the total length in bytes of the bio list.
      That information is available at the point it's set up by the rbd
      code, so record it with the osd data when it's set.
      
      This and the next patch are related to maintaining the length of a
      message's data independent of the message header, as described here:
          http://tracker.ceph.com/issues/4589
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      fdce58cc
    • libceph: drop ceph_osd_request->r_con_filling_msg · ace6d3a9
      Committed by Alex Elder
      A field in an osd request keeps track of whether a connection is
      currently filling the request's reply message.  This patch gets rid
      of that field.
      
      An osd request includes two messages--a request and a reply--and
      they're both associated with the connection that existed to the
      target osd at the time the request was created.
      
      An osd request can be dropped early, even when it's in flight.
      And at that time both messages are released.  It's possible the
      reply message has been supplied to its connection to receive
      an incoming response message at the time the osd request gets
      dropped.  So ceph_osdc_release_request() revokes that message
      from the connection before releasing it so things get cleaned up
      properly.
      
      Previously this may have caused a problem, because the connection
      that a message was associated with might have gone away before the
      revoke request.  And to avoid any problems using that connection,
      the osd client held a reference to it when it supplies its response
      message.
      
      However since this commit:
          38941f80 libceph: have messages point to their connection
      all messages hold a reference to the connection they are associated
      with whenever the connection is actively operating on the message
      (i.e. while the message is queued to send or sending, and when
      data is being received into it).  And if a message has no connection
      associated with it, ceph_msg_revoke_incoming() won't do anything
      when asked to revoke it.
      
      As a result, there is no need to keep an additional reference to the
      connection associated with a message when we hand the message to the
      messenger when it calls our alloc_msg() method to receive something.
      If the connection *were* operating on it, it would have its own
      reference, and if not, there's no work to be done when we need to
      revoke it.
      
      So get rid of the osd request's r_con_filling_msg field.
      
      This resolves:
          http://tracker.ceph.com/issues/4647
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ace6d3a9
    • libceph: define ceph_decode_pgid() only once · ef4859d6
      Committed by Alex Elder
      There are two basically identical definitions of __decode_pgid()
      in libceph, one in "net/ceph/osdmap.c" and the other in
      "net/ceph/osd_client.c".  Get rid of both, and instead define
      a single inline version in "include/linux/ceph/osdmap.h".
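
      Reconstructed in outline (the decode helpers are the real <linux/ceph/decode.h> API; the body is an approximation of the merged helper, not a quotation):

          static inline int ceph_decode_pgid(void **p, void *end,
                                             struct ceph_pg *pgid)
          {
              /* wire format: version byte, 64-bit pool, 32-bit seed,
               * 32-bit deprecated "preferred" value */
              if (!ceph_has_room(p, end, 1 + 8 + 4 + 4))
                  return -EINVAL;              /* incomplete pg encoding */
              if (ceph_decode_8(p) > 1)
                  return -EINVAL;              /* unknown encoding version */
              pgid->pool = ceph_decode_64(p);
              pgid->seed = ceph_decode_32(p);
              *p += 4;                         /* skip deprecated preferred */
              return 0;
          }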
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      ef4859d6
    • libceph: define source request op functions · 33803f33
      Committed by Alex Elder
      The rbd code has a function that allocates and populates a
      ceph_osd_req_op structure (the in-core version of an osd request
      operation).  When reviewed, Josh suggested two things: that the
      big varargs function might be better split into type-specific
      functions; and that this functionality really belongs in the osd
      client rather than rbd.
      
      This patch implements both of Josh's suggestions.  It breaks
      up the rbd function into separate functions and defines them
      in the osd client module as exported interfaces.  Unlike the
      rbd version, however, the functions don't allocate an osd_req_op
      structure; they are provided the address of one and that is
      initialized instead.
      
      The rbd function has been eliminated and calls to it have been
      replaced by calls to the new routines.  The rbd code now uses a
      stack (struct) variable to hold the op rather than allocating and
      freeing it each time.
      
      For now only the capabilities used by rbd are implemented.
      Implementing all the other osd op types, and making the rest of the
      code use it will be done separately, in the next few patches.
      
      Note that only the extent, cls, and watch portions of the
      ceph_osd_req_op structure are currently used.  Delete the others
      (xattr, pgls, and snap) from its definition so nobody thinks they're
      actually implemented or needed.  We can add them back again later
      if needed, once we know they've been tested.
      
      This (and a few follow-on patches) resolves:
          http://tracker.ceph.com/issues/3861
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      33803f33
    • ceph: move max constant definitions · adfe695a
      Committed by Alex Elder
      Move some definitions for max integer values out of the rbd code and
      into the more central "decode.h" header file.  These really belong
      in a Linux (or libc) header somewhere, but I haven't gotten around
      to proposing that yet.
      
      This is in preparation for moving some code out of rbd.c and into
      the osd client.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      adfe695a
    • libceph: make message data be a pointer · 6644ed7b
      Committed by Alex Elder
      Begin the transition from a single message data item to a list of
      them by replacing the "data" structure in a message with a pointer
      to a ceph_msg_data structure.
      
      A null pointer will indicate the message has no data; replace the
      use of ceph_msg_has_data() with a simple check for a null pointer.
      
      Create functions ceph_msg_data_create() and ceph_msg_data_destroy()
      to dynamically allocate and free a data item structure of a given type.
      
      When a message has its data item "set," allocate one of these to
      hold the data description, and free it when the last reference to
      the message is dropped.
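
      In outline the pair looks like this (a sketch; error paths and type validation trimmed):

          static struct ceph_msg_data *ceph_msg_data_create(enum ceph_msg_data_type type)
          {
              struct ceph_msg_data *data;

              data = kzalloc(sizeof(*data), GFP_NOFS);
              if (data)
                  data->type = type;
              return data;
          }

          static void ceph_msg_data_destroy(struct ceph_msg_data *data)
          {
              if (!data)
                  return;
              if (data->type == CEPH_MSG_DATA_PAGELIST) {
                  /* the message owns its pagelist; release it here */
                  ceph_pagelist_release(data->pagelist);
                  kfree(data->pagelist);
              }
              kfree(data);
          }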
      
      This partially resolves:
          http://tracker.ceph.com/issues/4429
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      6644ed7b
    • libceph: kill last of ceph_msg_pos · f5db90bc
      Committed by Alex Elder
      The only remaining field in the ceph_msg_pos structure is
      did_page_crc.  In the new cursor model of things that flag (or
      something like it) belongs in the cursor.
      
      Define a new field "need_crc" in the cursor (which applies to all
      types of data) and initialize it to true whenever a cursor is
      initialized.
      
      In write_partial_message_data(), the data CRC still will be computed
      as before, but it will check the cursor->need_crc field to determine
      whether it's needed.  Any time the cursor is advanced to a new piece
      of a data item, need_crc will be set, and this will cause the crc
      for that entire piece to be accumulated into the data crc.
      
      In write_partial_message_data() the intermediate crc value is now
      held in a local variable so it doesn't have to be byte-swapped so
      many times.  In read_partial_msg_data() we do something similar
      (but mainly for consistency there).
      
      With that, the ceph_msg_pos structure can go away, and it no longer
      needs to be passed as an argument to prepare_message_data().
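
      A toy user-space model of the flag's effect (a stand-in hash instead of crc32c; a piece's checksum is folded in once, while partial resends of the same piece skip it):

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>

          static uint32_t fold(uint32_t crc, const char *buf, size_t len)
          {
              while (len--)
                  crc = crc * 31 + (unsigned char)*buf++;   /* toy, not crc32c */
              return crc;
          }

          int main(void)
          {
              const char *pieces[] = { "piece one", "piece two" };
              uint32_t crc = 0;      /* local running value: swapped to wire
                                      * byte order only once at the end */

              for (size_t i = 0; i < 2; i++) {
                  size_t sent = 0, len = strlen(pieces[i]);
                  bool need_crc = true;   /* cursor advanced to a new piece */

                  while (sent < len) {
                      if (need_crc) {
                          crc = fold(crc, pieces[i], len);  /* whole piece */
                          need_crc = false;
                      }
                      sent += (len - sent < 4) ? len - sent : 4;  /* partial send */
                  }
              }
              printf("data crc: 0x%08x\n", crc);
              return 0;
          }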
      
      This cleanup is related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      f5db90bc
    • libceph: kill most of ceph_msg_pos · 859a35d5
      Committed by Alex Elder
      All but one of the fields in the ceph_msg_pos structure are now
      never used (only assigned), so get rid of them.  This allows
      several small blocks of code to go away.
      
      This is cleanup of old code related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      859a35d5