1. 02 Oct 2012, 17 commits
    • rbd: kill notify_timeout option · 84d34dcc
      Authored by Alex Elder
      The "notify_timeout" rbd device option is never used, so get rid of
      it.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
      84d34dcc
    • rbd: add read_only rbd map option · cc0538b6
      Authored by Alex Elder
      Add the ability to map an rbd image read-only, by specifying either
      "read_only" or "ro" as an option on the rbd "command line."  Also
      allow the inverse to be explicitly specified using "read_write" or
      "rw".
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
      cc0538b6
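      A minimal sketch of how the tokens above could collapse to a single read-only flag; the helper name and the plain strcmp() parsing are assumptions for illustration, not the driver's actual option handling:

        #include <stdbool.h>
        #include <string.h>

        /* Illustrative only: map the tokens named in the commit to a boolean. */
        static bool apply_mapping_option(const char *token, bool read_only)
        {
            if (!strcmp(token, "read_only") || !strcmp(token, "ro"))
                return true;
            if (!strcmp(token, "read_write") || !strcmp(token, "rw"))
                return false;
            return read_only;   /* unknown token: leave the setting unchanged */
        }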
    • rbd: move rbd_opts to struct rbd_device · f8c38929
      Authored by Alex Elder
      The rbd options don't really apply to the ceph client.  So don't
      store a pointer to them in the ceph_client structure, and put them
      (a struct, not a pointer) into the rbd_dev structure proper.
      
      Pass the rbd device structure to rbd_client_create() so it can
      assign rbd_dev->rbdc if successful, and have it return an error code
      instead of the rbd client pointer.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
      f8c38929
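      A rough sketch of the layout change described above: the options become an embedded struct inside the device, and the create helper reports an error code while assigning rbd_dev->rbdc itself. The struct contents and the helper's second parameter are assumptions:

        #include <linux/types.h>

        struct rbd_client;
        struct ceph_options;

        /* Illustrative definitions only -- not the driver's real layouts. */
        struct rbd_options {
            bool read_only;
        };

        struct rbd_device {
            struct rbd_client  *rbdc;       /* assigned by rbd_client_create() */
            struct rbd_options rbd_opts;    /* embedded struct, not a pointer */
        };

        /* Sketch of the revised contract: return 0 or a negative errno
         * instead of an rbd client pointer. */
        int rbd_client_create(struct rbd_device *rbd_dev, struct ceph_options *opts);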
    • rbd: more cleanup in rbd_header_from_disk() · 621901d6
      Authored by Alex Elder
      This just rearranges things a bit more in rbd_header_from_disk()
      so that the snapshot sizes are initialized right after the buffer
      to hold them is allocated, with a little further consolidation
      following from that.  It also adds a few simple comments.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
      621901d6
    • rbd: kill incore snap_names_len · f785cc1d
      Authored by Alex Elder
      The only thing the on-disk snap_names_len field is needed for is to
      size the buffer allocated to hold a copy of the snapshot names
      for an rbd image.
      
      So don't bother saving it in the in-core rbd_image_header structure.
      Just use a local variable to hold the required buffer size while
      it's needed.
      
      Move the code that actually copies the snapshot names up closer
      to where the required length is saved.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
      f785cc1d
    • rbd: don't over-allocate space for object prefix · 58c17b0e
      Authored by Alex Elder
      In rbd_header_from_disk() the object prefix buffer is sized based on
      the maximum size its block_name equivalent on disk could be.

      Instead, only allocate enough to hold the null-terminated string from
      the on-disk header--or the maximum size if no NUL is found.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
      58c17b0e
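      A sketch of the sizing logic, using a stand-in for the fixed-size on-disk field: memchr() bounds the copy at the first NUL, or at the field size if none is found. None of this is the driver's actual code:

        #include <linux/slab.h>
        #include <linux/string.h>

        /* Stand-in for the fixed-size on-disk field; the real layout differs. */
        struct demo_ondisk {
            char block_name[24];
        };

        static char *demo_dup_object_prefix(const struct demo_ondisk *ondisk)
        {
            size_t max = sizeof(ondisk->block_name);
            const char *nul = memchr(ondisk->block_name, '\0', max);
            size_t len = nul ? (size_t)(nul - ondisk->block_name) : max;
            char *prefix;

            /* Allocate only what the (possibly unterminated) string needs. */
            prefix = kmalloc(len + 1, GFP_KERNEL);
            if (!prefix)
                return NULL;
            memcpy(prefix, ondisk->block_name, len);
            prefix[len] = '\0';
            return prefix;
        }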
    • rbd: handle locking inside __rbd_client_find() · 1f7ba331
      Authored by Alex Elder
      There is only one caller of __rbd_client_find(), and it somewhat
      clumsily gets the appropriate lock and gets a reference to the
      existing ceph_client structure if it's found.
      
      Instead, have that function handle its own locking, and acquire the
      reference if found while it holds the lock.  Drop the underscores
      from the name because there's no need to signify anything special
      about this function.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
      1f7ba331
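      The locking pattern in miniature, with stand-in names (the list, lock, and client_matches() predicate are assumptions): the lookup takes the lock itself and acquires the reference before dropping it, so the found client cannot disappear in between:

        #include <linux/kref.h>
        #include <linux/list.h>
        #include <linux/spinlock.h>
        #include <linux/types.h>

        struct demo_client {
            struct kref      kref;
            struct list_head node;
        };

        static LIST_HEAD(demo_client_list);
        static DEFINE_SPINLOCK(demo_client_list_lock);

        /* Hypothetical predicate standing in for the real options comparison. */
        static bool client_matches(const struct demo_client *client, const void *opts);

        static struct demo_client *demo_client_find(const void *opts)
        {
            struct demo_client *client, *found = NULL;

            spin_lock(&demo_client_list_lock);
            list_for_each_entry(client, &demo_client_list, node) {
                if (client_matches(client, opts)) {
                    kref_get(&client->kref);    /* take the reference under the lock */
                    found = client;
                    break;
                }
            }
            spin_unlock(&demo_client_list_lock);

            return found;
        }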
    • rbd: add new snapshots at the tail · 523f3258
      Authored by Alex Elder
      This fixes a bug that went in with this commit:
      
          commit f6e0c99092cca7be00fca4080cfc7081739ca544
          Author: Alex Elder <elder@inktank.com>
          Date:   Thu Aug 2 11:29:46 2012 -0500
          rbd: simplify __rbd_init_snaps_header()
      
      The problem is that a new rbd snapshot needs to go either after an
      existing snapshot entry, or at the *end* of an rbd device's snapshot
      list.  As originally coded, it is placed at the beginning.  This was
      based on the assumption the list would be empty (so it wouldn't
      matter), but in fact if multiple new snapshots are added to an empty
      list in one shot the list will be non-empty after the first one is
      added.
      
      This addresses http://tracker.newdream.net/issues/3063
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      523f3258
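      The ordering point in miniature, with stand-in structures: list_add_tail() appends, so a batch of new snapshots ends up in order, whereas list_add() would prepend each one at the head:

        #include <linux/list.h>

        /* Minimal stand-ins; the real rbd structures are richer than this. */
        struct demo_snap   { struct list_head node; };
        struct demo_device { struct list_head snaps; };

        static void demo_register_snap(struct demo_device *dev, struct demo_snap *snap)
        {
            /* list_add(&snap->node, &dev->snaps) would place it at the head,
             * which is the misplacement this commit fixes. */
            list_add_tail(&snap->node, &dev->snaps);
        }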
    • rbd: rename block_name -> object_prefix · 843a0d08
      Authored by Alex Elder
      In the on-disk image header structure there is a field "block_name"
      which represents what we now call the "object prefix" for an rbd
      image.  Rename this field "object_prefix" to be consistent with
      modern usage.
      
      This appears to be the only remaining vestige of the use of "block"
      in symbols that represent objects in the rbd code.
      
      This addresses http://tracker.newdream.net/issues/1761
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      Reviewed-by: Dan Mick <dan.mick@inktank.com>
      843a0d08
    • rbd: separate reading header from decoding it · 4156d998
      Authored by Alex Elder
      Right now rbd_read_header() both reads the header object for an rbd
      image and decodes its contents.  It does this repeatedly if needed,
      in order to ensure a complete and intact header is obtained.
      
      Separate this process into two steps--reading the raw header
      data (in a new function, rbd_dev_v1_header_read()) and separately
      decoding its contents (in rbd_header_from_disk()).  As a result,
      the latter function no longer requires its allocated_snaps argument.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      4156d998
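      The resulting call flow might look roughly like this; the function names come from the commit text, while the prototypes, error handling, and ownership of the raw buffer are assumptions:

        #include <linux/err.h>

        struct rbd_device;
        struct rbd_image_header;
        struct rbd_image_header_ondisk;

        /* Approximate prototypes only. */
        struct rbd_image_header_ondisk *rbd_dev_v1_header_read(struct rbd_device *rbd_dev);
        int rbd_header_from_disk(struct rbd_image_header *header,
                                 struct rbd_image_header_ondisk *ondisk);

        static int demo_refresh_header(struct rbd_device *rbd_dev,
                                       struct rbd_image_header *header)
        {
            struct rbd_image_header_ondisk *ondisk;

            ondisk = rbd_dev_v1_header_read(rbd_dev);    /* step 1: fetch raw bytes */
            if (IS_ERR(ondisk))
                return PTR_ERR(ondisk);

            /* step 2: decode into the in-core header (freeing of the raw
             * buffer is omitted in this sketch) */
            return rbd_header_from_disk(header, ondisk);
        }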
    • rbd: expand rbd_dev_ondisk_valid() checks · 103a150f
      Authored by Alex Elder
      Add checks on the validity of the snap_count and snap_names_len
      field values in rbd_dev_ondisk_valid().  This eliminates the
      need to do them in rbd_header_from_disk().
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      103a150f
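      A hedged sketch of what such validity checks can look like; the stand-in fields, limits, and host byte order below are illustrative, not the checks the driver actually performs:

        #include <linux/kernel.h>
        #include <linux/types.h>

        /* Stand-in header carrying just the two fields being validated
         * (host byte order for simplicity; the real fields are little-endian). */
        struct demo_ondisk {
            u32 snap_count;
            u64 snap_names_len;
        };

        static bool demo_ondisk_valid(const struct demo_ondisk *ondisk)
        {
            /* Snapshot names with no snapshots, or snapshots with no name
             * bytes at all, is inconsistent. */
            if (!ondisk->snap_count != !ondisk->snap_names_len)
                return false;
            /* Keep later "count * sizeof(u64)" allocations from overflowing. */
            if (ondisk->snap_count > U32_MAX / sizeof(u64))
                return false;
            return true;
        }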
    • rbd: return earlier in rbd_header_from_disk() · 28cb775d
      Authored by Alex Elder
      The only caller of rbd_header_from_disk() is rbd_read_header().
      It passes as allocated_snaps the number of snapshots it will
      have received from the server for the snapshot context that
      rbd_header_from_disk() is to interpret.  The first time through
      it provides 0--mainly to extract the number of snapshots from
      the snapshot context header--so that it can allocate an
      appropriately-sized buffer to receive the entire snapshot
      context from the server in a second request.
      
      rbd_header_from_disk() will not fill in the array of snapshot ids
      unless the number in the snapshot context matches the number the
      caller had allocated.
      
      This patch adjusts that logic a little further to be more efficient.
      rbd_read_header() doesn't even examine the snapshot context unless
      the snapshot count (stored in header->total_snaps) matches the
      number of snapshots allocated.  So rbd_header_from_disk() doesn't
      need to allocate or fill in the snapshot context field at all in
      that case.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      28cb775d
    • rbd: rearrange rbd_header_from_disk() · 6a52325f
      Authored by Alex Elder
      This just moves code around for the most part.  It was pulled out as
      a separate patch to avoid cluttering up some upcoming patches which
      are more substantive.  The point is basically to group everything
      related to initializing the snapshot context together.
      
      The only functional change is that rbd_header_from_disk() now
      ensures the (in-core) header it is passed is zero-filled.  This
      allows a simpler error handling path in rbd_header_from_disk().
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      6a52325f
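      The zero-fill is what makes the simpler error path work: when every pointer starts out NULL, one exit label can free everything unconditionally. A stand-alone illustration with stand-in fields, not the driver's code:

        #include <linux/errno.h>
        #include <linux/slab.h>
        #include <linux/string.h>
        #include <linux/types.h>

        struct demo_header {
            char *object_prefix;
            char *snap_names;
            u64  *snap_sizes;
        };

        static int demo_fill(struct demo_header *header, size_t prefix_len,
                             size_t names_len, u32 snap_count)
        {
            memset(header, 0, sizeof(*header));    /* all pointers start out NULL */

            header->object_prefix = kmalloc(prefix_len, GFP_KERNEL);
            header->snap_names    = kmalloc(names_len, GFP_KERNEL);
            header->snap_sizes    = kmalloc_array(snap_count, sizeof(u64), GFP_KERNEL);
            if (!header->object_prefix || !header->snap_names || !header->snap_sizes)
                goto out_err;

            return 0;

        out_err:
            /* kfree(NULL) is a no-op, so one path covers every partial failure. */
            kfree(header->snap_sizes);
            kfree(header->snap_names);
            kfree(header->object_prefix);
            memset(header, 0, sizeof(*header));
            return -ENOMEM;
        }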
    • rbd: use sizeof (object) instead of sizeof (type) · d2bb24e5
      Authored by Alex Elder
      Fix a few spots in rbd_header_from_disk() to use sizeof (object)
      rather than sizeof (type).  Use a local variable to record sizes
      to shorten some lines and improve readability.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      d2bb24e5
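      The difference in isolation (plain C, just to show the pattern rather than the driver's code):

        #include <stdlib.h>

        struct header { unsigned long long image_size; };

        int main(void)
        {
            /* sizeof (object): stays correct even if h's declared type changes. */
            struct header *h = malloc(sizeof(*h));

            /* sizeof (type): quietly wrong if the declaration and the type
             * named here ever drift apart:
             *     struct header *h = malloc(sizeof(struct header));
             */
            free(h);
            return 0;
        }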
    • rbd: ensure invalid pointers are made null · d78fd7ae
      Authored by Alex Elder
      Fix a number of spots where a pointer value that is known to
      have become invalid was not reset to null.
      
      Also, toss in a change so we use sizeof (object) rather than
      sizeof (type).
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      d78fd7ae
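      The pattern in miniature (a sketch, not the driver's code): free, then immediately clear the stored pointer, so a stale use shows up as a NULL dereference rather than a use-after-free, and a later kfree() of it is a harmless no-op:

        #include <linux/slab.h>

        static void demo_release_names(char **snap_namesp)
        {
            kfree(*snap_namesp);
            *snap_namesp = NULL;    /* the old value is invalid from here on */
        }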
    • rbd: make snap_names_len a u64 · 0f1d3f93
      Authored by Alex Elder
      The snap_names_len field of an rbd_image_header structure is defined
      with type size_t.  That field is used as both the source and target
      of 64-bit byte-order swapping operations though, so it's best to
      define it with type u64 instead.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      0f1d3f93
    • rbd: simplify __rbd_init_snaps_header() · 35938150
      Authored by Alex Elder
      The purpose of __rbd_init_snaps_header() is to compare a new
      snapshot context with an rbd device's list of existing snapshots.
      It updates the list by adding any new snapshots or removing any
      that are not present in the new snapshot context.
      
      The code as written is a little confusing, because it traverses both
      the existing snapshot list and the set of snapshots in the snapshot
      context in reverse.  This was done based on an assumption about
      snapshots that is not true--namely that a duplicate snapshot name
      could cause an error in interpreting things if they were not
      processed in ascending order.
      
      These precautions are not necessary, because:
          - all snapshots are uniquely identified by their snapshot id
          - a new snapshot cannot be created if the rbd device has another
            snapshot with the same name
      (It is furthermore not currently possible to rename a snapshot.)
      
      This patch re-implements __rbd_init_snaps_header() so it passes
      through both the existing snapshot list and the entries in the
      snapshot context in forward order.  It still does the same thing
      as before, but I find the logic considerably easier to understand.
      
      By going forward through the names in the snapshot context, there
      is no longer a need for the rbd_prev_snap_name() helper function.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      35938150
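      A minimal, stand-alone sketch of the forward merge walk described above, using plain arrays of snapshot ids in ascending order (a simplification of the real list and snapshot-context structures):

        #include <stdio.h>

        /* existing[] holds the device's current snapshot ids, ctx[] the ids in
         * the new snapshot context; ids only in existing[] are removed, ids
         * only in ctx[] are added, matching ids are kept. */
        static void merge_walk(const unsigned long long *existing, int n_existing,
                               const unsigned long long *ctx, int n_ctx)
        {
            int i = 0, j = 0;

            while (i < n_existing || j < n_ctx) {
                if (j >= n_ctx || (i < n_existing && existing[i] < ctx[j]))
                    printf("remove snapshot %llu\n", existing[i++]);
                else if (i >= n_existing || ctx[j] < existing[i])
                    printf("add snapshot %llu\n", ctx[j++]);
                else {
                    printf("keep snapshot %llu\n", existing[i]);    /* ids match */
                    i++;
                    j++;
                }
            }
        }

        int main(void)
        {
            const unsigned long long existing[] = { 1, 3, 4 };
            const unsigned long long ctx[]      = { 1, 4, 7 };

            merge_walk(existing, 3, ctx, 3);    /* removes 3, adds 7, keeps 1 and 4 */
            return 0;
        }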
  2. 22 Sep 2012, 1 commit
  3. 21 Sep 2012, 1 commit
  4. 18 Sep 2012, 2 commits
    • cciss: fix handling of protocol error · 2453f5f9
      Authored by Stephen M. Cameron
      If a command completes with a status of CMD_PROTOCOL_ERR, this
      information should be conveyed to the SCSI mid layer, not dropped
      on the floor.  Unlike a similar bug in the hpsa driver, this bug
      only affects tape drives and CD and DVD ROM drives in the cciss
      driver, and to induce it, you have to disconnect (or damage) a
      cable, so it is not a very likely scenario (which would explain
      why the bug has gone undetected for the last 10 years.)
      Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      2453f5f9
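      The general idea in a small sketch (placeholder status value, not cciss's code): a protocol-error completion has to be mapped to a host status the SCSI midlayer will see as a failure rather than being ignored:

        #include <scsi/scsi.h>

        #define DEMO_CMD_PROTOCOL_ERR  5        /* placeholder status value */

        static int demo_host_byte_for_status(unsigned int status)
        {
            switch (status) {
            case 0:                             /* command completed cleanly */
                return DID_OK << 16;
            case DEMO_CMD_PROTOCOL_ERR:         /* convey the failure upward */
                return DID_ERROR << 16;
            default:
                return DID_ERROR << 16;
            }
        }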
    • nbd: clear waiting_queue on shutdown · fded4e09
      Authored by Paul Clements
      Fix a serious but uncommon bug in nbd which occurs when there is heavy
      I/O going to the nbd device while, at the same time, a failure (server,
      network) or manual disconnect of the nbd connection occurs.
      
      There is a small window between the time that the nbd_thread is stopped
      and the socket is shutdown where requests can continue to be queued to
      nbd's internal waiting_queue.  When this happens, those requests are
      never completed or freed.
      
      The fix is to clear the waiting_queue on shutdown of the nbd device, in
      the same way that the nbd request queue (queue_head) is already being
      cleared.
      Signed-off-by: Paul Clements <paul.clements@steeleye.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fded4e09
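      A sketch of the fix's idea with stand-in names (the lock and field names follow the commit text; demo_fail_request() is hypothetical): detach whatever is still parked on waiting_queue under the lock, then fail each request so none are leaked:

        #include <linux/blkdev.h>
        #include <linux/list.h>
        #include <linux/spinlock.h>

        /* Stand-in for the relevant nbd_device fields. */
        struct demo_nbd {
            spinlock_t       queue_lock;
            struct list_head waiting_queue;
        };

        /* Hypothetical completion helper: end one request with an I/O error. */
        static void demo_fail_request(struct request *req);

        static void demo_drain_waiting_queue(struct demo_nbd *nbd)
        {
            LIST_HEAD(dead);
            struct request *req, *next;

            spin_lock(&nbd->queue_lock);
            list_splice_init(&nbd->waiting_queue, &dead);    /* queue is now empty */
            spin_unlock(&nbd->queue_lock);

            list_for_each_entry_safe(req, next, &dead, queuelist) {
                list_del_init(&req->queuelist);
                demo_fail_request(req);
            }
        }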
  5. 13 Sep 2012, 7 commits
  6. 12 Sep 2012, 1 commit
    • xen/m2p: do not reuse kmap_op->dev_bus_addr · 2fc136ee
      Authored by Stefano Stabellini
      If the caller passes a valid kmap_op to m2p_add_override, we use
      kmap_op->dev_bus_addr to store the original mfn, but dev_bus_addr is
      part of the interface with Xen and if we are batching the hypercalls it
      might not have been written by the hypervisor yet. That means that later
      on Xen will write to it and we'll think that the original mfn is
      actually what Xen has written to it.
      
      Rather than "stealing" struct members from kmap_op, keep using
      page->index to store the original mfn and add another parameter to
      m2p_remove_override to get the corresponding kmap_op instead.
      It is now the responsibility of the caller to keep track of which
      kmap_op corresponds to a particular page in the m2p_override (gntdev,
      the only user of this interface that passes a valid kmap_op, is
      already doing that).
      
      CC: stable@kernel.org
      Reported-and-Tested-By: Sander Eikelenboom <linux@eikelenboom.it>
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      2fc136ee
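      An approximate shape of the interface change, reconstructed from the description above (not verbatim from the tree): the original mfn stays in page->index, and removal now takes the matching kmap_op from the caller:

        struct page;
        struct gnttab_map_grant_ref;

        /* Approximate prototypes, for illustration only. */
        int m2p_add_override(unsigned long mfn, struct page *page,
                             struct gnttab_map_grant_ref *kmap_op);

        /* After this change, removal also receives the kmap_op that was used
         * when the override was added, instead of recovering state from
         * kmap_op->dev_bus_addr. */
        int m2p_remove_override(struct page *page,
                                struct gnttab_map_grant_ref *kmap_op);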
  7. 22 Aug 2012, 1 commit
  8. 16 Aug 2012, 3 commits
    • drbd: Write all pages of the bitmap after an online resize · d1aa4d04
      Authored by Philipp Reisner
      We need to write the whole bitmap after we moved the meta data
      due to an online resize operation.
      
      With the support for one-petabyte devices, bitmap IO was optimized
      to write out only touched pages. This optimization must be turned
      off when writing the bitmap after an online resize.
      
      This issue was introduced with drbd-8.3.10.
      
      The impact of this bug is that after an online resize, the next
      resync could become larger than expected.
      Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
      Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
      d1aa4d04
    • drbd: Finish requests that completed while IO was frozen · 509fc019
      Authored by Philipp Reisner
      Requests of an acked epoch are stored on the barrier_acked_requests list. In
      case the private bio of such a request completes while IO on the drbd device
      is suspended [req_mod(completed_ok)] then the request stays there.
      
      When thawing IO because the fence_peer handler returned, we use
      tl_clear() to apply the connection_lost_while_pending event to all
      requests on the transfer-log and the barrier_acked_requests list.
      
      Up to now the connection_lost_while_pending event was not applied
      to requests on the barrier_acked_requests list.  Fixed that.
      
      I.e. now the connection_lost_while_pending and resend events are
      applied to requests on the barrier_acked_requests list. For that
      it is necessary that the resend event finishes (local only)
      READS correctly.
      Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
      Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
      509fc019
    • drbd: fix drbd wire compatibility for empty flushes · 227f052f
      Authored by Lars Ellenberg
      DRBD has a concept of request epochs or reorder-domains,
      which are separated on the wire by P_BARRIER packets.
      
      Older DRBD is not able to handle zero-sized requests at all,
      so we need to map empty flushes to these drbd barriers.
      
      These are the equivalent of empty flushes, and
      by default trigger flushes on the receiving side anyway
      (unless not supported or explicitly disabled),
      so there is no need to handle this differently in newer drbd either.
      Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
      Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
      227f052f
  9. 08 Aug 2012, 1 commit
  10. 04 Aug 2012, 2 commits
  11. 01 Aug 2012, 4 commits