1. 24 Jul 2014, 1 commit
  2. 10 Jul 2014, 1 commit
  3. 04 Jul 2014, 1 commit
  4. 25 Jun 2014, 1 commit
  5. 23 Jun 2014, 1 commit
    • rbd: handle parent_overlap on writes correctly · 9638556a
      Committed by Ilya Dryomov
      The following check in rbd_img_obj_request_submit()
      
          rbd_dev->parent_overlap <= obj_request->img_offset
      
      allows the fall through to the non-layered write case even if both
      parent_overlap and obj_request->img_offset belong to the same RADOS
      object.  This leads to data corruption, because the area to the left of
      parent_overlap ends up unconditionally zero-filled instead of being
      populated with parent data.  Suppose we want to write 1M to offset 6M
      of image bar, which is a clone of foo@snap; object_size is 4M,
      parent_overlap is 5M:
      
          rbd_data.<id>.0000000000000001
           ---------------------|----------------------|------------
          | should be copyup'ed | should be zeroed out | write ...
           ---------------------|----------------------|------------
         4M                    5M                     6M
                          parent_overlap    obj_request->img_offset
      
      4..5M should be copyup'ed from foo, yet it is zero-filled, just like
      5..6M is.
      
      Given that the only striping mode the kernel client currently
      supports is chunking (i.e. stripe_unit == object_size,
      stripe_count == 1), round parent_overlap up to the next object
      boundary for the purposes of the overlap check (sketched after
      this entry).
      
      Cc: stable@vger.kernel.org # 3.10+
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      9638556a
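
      A minimal sketch of the rounded-up check described above. The helper
      name and fields follow the rbd driver's conventions, but treat this
      as an illustration rather than the verbatim patch:

          /* True if any part of obj_request falls inside the parent
           * overlap, with parent_overlap rounded up to the next object
           * boundary (object_size is a power of two, so round_up works). */
          static bool obj_request_overlaps_parent(struct rbd_obj_request *obj_request)
          {
                  struct rbd_device *rbd_dev = obj_request->img_request->rbd_dev;

                  return obj_request->img_offset <
                      round_up(rbd_dev->parent_overlap,
                               rbd_obj_bytes(&rbd_dev->header));
          }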
  6. 18 Jun 2014, 1 commit
  7. 17 Jun 2014, 1 commit
  8. 14 Jun 2014, 1 commit
  9. 13 Jun 2014, 2 commits
  10. 12 Jun 2014, 1 commit
  11. 11 Jun 2014, 3 commits
  12. 07 Jun 2014, 2 commits
  13. 06 Jun 2014, 6 commits
    • block: add blk_rq_set_block_pc() · f27b087b
      Committed by Jens Axboe
      With the optimizations around not clearing the full request at alloc
      time, we are leaving some of the needed init for REQ_TYPE_BLOCK_PC
      up to the user allocating the request.
      
      Add a blk_rq_set_block_pc() that sets the command type to
      REQ_TYPE_BLOCK_PC, and properly initializes the members associated
      with this type of request. Update callers to use this function instead
      of manipulating rq->cmd_type directly.
      
      Includes fixes from Christoph Hellwig <hch@lst.de> for my half-assed
      attempt.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      f27b087b
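
      For reference, a minimal sketch of what such a helper does. Field
      names match the struct request of that era; consult the actual
      commit for the authoritative version:

          void blk_rq_set_block_pc(struct request *rq)
          {
                  rq->cmd_type = REQ_TYPE_BLOCK_PC;
                  rq->__data_len = 0;
                  rq->__sector = (sector_t) -1;
                  rq->bio = rq->biotail = NULL;
                  memset(rq->__cmd, 0, sizeof(rq->__cmd));
                  rq->cmd = rq->__cmd;
          }
          EXPORT_SYMBOL(blk_rq_set_block_pc);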
    • rbd: fix ida/idr memory leak · ffe312cf
      Committed by Ilya Dryomov
      ida_destroy() needs to be called on module exit to release ida caches.
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
      ffe312cf
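
      The pattern is simply to pair the ida with a destroy call on module
      exit; a sketch (rbd really does use an ida for device ids, though
      the surrounding cleanup is elided):

          static DEFINE_IDA(rbd_dev_id_ida);

          static void __exit rbd_exit(void)
          {
                  /* ... unregister bus/driver, free slab caches ... */
                  ida_destroy(&rbd_dev_id_ida);   /* release ida caches */
          }
          module_exit(rbd_exit);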
    • rbd: use reference counts for image requests · 0f2d5be7
      Committed by Alex Elder
      Each image request contains a reference count, but to date it has
      not actually been used.  (I think this was just an oversight.) A
      recent report involving rbd failing an assertion shed light on why
      and where we need to use these reference counts.
      
      Every OSD request associated with an object request uses
      rbd_osd_req_callback() as its callback function.  That function will
      call a helper function (dependent on the type of OSD request) that
      will set the object request's "done" flag if appropriate.  If that
      "done" flag is set, the object request is passed to
      rbd_obj_request_complete().
      
      In rbd_obj_request_complete(), requests are processed in sequential
      order.  So if an object request completes before one of its
      predecessors in the image request, the completion is deferred.
      Otherwise, if it's a completing object's "turn" to be completed, it
      is passed to rbd_img_obj_end_request(), which records the result of
      the operation, accumulates transferred bytes, and so on.  Next, the
      successor to this request is checked and if it is marked "done",
      (deferred) completion processing is performed on that request, and
      so on.  If the last object request in an image request is completed,
      rbd_img_request_complete() is called, which (typically) destroys
      the image request.
      
      There is a race here, however.  The instant an object request is
      marked "done" it can be provided (by a thread handling completion of
      one of its predecessor operations) to rbd_img_obj_end_request(),
      which (for the last request) can then lead to the image request
      getting torn down.  And this can happen *before* that object has
      itself entered rbd_img_obj_end_request().  As a result, once it
      *does* enter that function, the image request (and even the object
      request itself) may have been freed and become invalid.
      
      All that's necessary to avoid this is to properly count references
      to the image requests.  We tear down an image request's object
      requests all at once--only when the entire image request has
      completed.  So there's no need for an image request to count
      references for its object requests.  However, we don't want an
      image request to go away until the last of its object requests
      has passed through rbd_img_obj_callback().  In other words,
      we don't want rbd_img_request_complete() to necessarily
      result in the image request being destroyed, because it may
      get called before we've finished processing on all of its
      object requests.
      
      So the fix is to add a reference to an image request for
      each of its object requests.  The reference can be viewed
      as representing an object request that has not yet finished
      its call to rbd_img_obj_callback().  That is emphasized by
      getting the reference right after assigning that as the image
      object's callback function.  The corresponding release of that
      reference is done at the end of rbd_img_obj_callback(), which
      every image object request passes through exactly once.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Alex Elder <elder@linaro.org>
      Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      0f2d5be7
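
      A sketch of the resulting get/put pairing. Helper names mirror the
      rbd driver; this illustrates the pattern rather than the full patch:

          static void rbd_img_request_get(struct rbd_img_request *img_request)
          {
                  kref_get(&img_request->kref);
          }

          /* When an object request is wired up for submission: */
          obj_request->callback = rbd_img_obj_callback;
          rbd_img_request_get(img_request);

          /* ... and at the very end of rbd_img_obj_callback(): */
          rbd_img_request_put(img_request);   /* may destroy img_request */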
    • rbd: fix osd_request memory leak in __rbd_dev_header_watch_sync() · b30a01f2
      Committed by Ilya Dryomov
      The osd_request, along with the r_request and r_reply messages
      attached to it, is leaked in __rbd_dev_header_watch_sync() if the
      requested image doesn't exist.  This is because lingering requests
      are special and get an extra ref in the reply path.  Fix it by
      unregistering the linger request on the error path (sketched after
      this entry) and splitting __rbd_dev_header_watch_sync() into two
      functions to keep it maintainable.
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      b30a01f2
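
      The shape of the error-path fix, sketched against libceph's linger
      API of that era; treat the details as illustrative:

          struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
          int ret;

          ret = rbd_obj_request_wait(obj_request);
          if (ret)
                  goto out_cancel;
          /* ... check the result, finish setting up the watch ... */
          return 0;

          out_cancel:
                  /* lingering requests get an extra ref in the reply path,
                   * so unregister the linger before the final put */
                  ceph_osdc_unregister_linger_request(osdc, obj_request->osd_req);
                  rbd_obj_request_put(obj_request);
                  return ret;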
    • rbd: make sure we have latest osdmap on 'rbd map' · 30ba1f02
      Committed by Ilya Dryomov
      Given an existing idle mapping (img1), mapping an image (img2) in
      a newly created pool (pool2) fails:
      
          $ ceph osd pool create pool1 8 8
          $ rbd create --size 1000 pool1/img1
          $ sudo rbd map pool1/img1
          $ ceph osd pool create pool2 8 8
          $ rbd create --size 1000 pool2/img2
          $ sudo rbd map pool2/img2
          rbd: sysfs write failed
          rbd: map failed: (2) No such file or directory
      
      This is because client instances are shared by default and we don't
      request an osdmap update when bumping a ref on an existing client.  The
      fix is to use the mon_get_version request to see if the osdmap we have
      is the latest, and block until the requested update is received if it's
      not.
      
      Fixes: http://tracker.ceph.com/issues/8184
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      30ba1f02
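
      A sketch of the approach, assuming ceph_monc_do_get_version() is the
      mon_get_version helper referenced above; the blocking wait itself is
      elided:

          struct ceph_client *client = rbdc->client;
          u64 newest_epoch;
          int ret;

          ret = ceph_monc_do_get_version(&client->monc, "osdmap",
                                         &newest_epoch);
          if (ret)
                  return ret;

          if (client->osdc.osdmap->epoch < newest_epoch) {
                  ceph_monc_request_next_osdmap(&client->monc);
                  /* ... block until an osdmap with epoch >= newest_epoch
                   * has been received ... */
          }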
    • rbd: replace IS_ERR and PTR_ERR with PTR_ERR_OR_ZERO · 461f758a
      Committed by Duan Jiong
      This patch fixes a coccinelle warning about open-coding
      PTR_ERR_OR_ZERO with IS_ERR and PTR_ERR (example after this entry).
      Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com>
      Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
      461f758a
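
      The transformation is mechanical; for example:

          /* before */
          if (IS_ERR(pages))
                  return PTR_ERR(pages);
          return 0;

          /* after */
          return PTR_ERR_OR_ZERO(pages);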
  14. 05 Jun 2014, 4 commits
  15. 04 Jun 2014, 9 commits
  16. 30 May 2014, 1 commit
    • block: virtio_blk: don't hold spin lock during world switch · e8edca6f
      Committed by Ming Lei
      Firstly, it isn't necessary to hold vblk->vq_lock while notifying
      the hypervisor about queued I/O.

      Secondly, virtqueue_notify() causes a world switch, which may take
      a long time on some hypervisors (such as qemu-arm), so it is a bad
      idea to hold the lock and block other vCPUs (see the sketch after
      this entry).

      On an arm64 quad-core VM (qemu-kvm) with VIRTIO_RING_F_EVENT_IDX
      enabled, the patch increases I/O performance substantially:
      	- without the patch: 14K IOPS
      	- with the patch: 34K IOPS
      
      fio script:
      	[global]
      	direct=1
      	bsrange=4k-4k
      	timeout=10
      	numjobs=4
      	ioengine=libaio
      	iodepth=64
      
      	filename=/dev/vdc
      	group_reporting=1
      
      	[f1]
      	rw=randread
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: virtualization@lists.linux-foundation.org
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: stable@kernel.org # 3.13+
      Signed-off-by: Jens Axboe <axboe@fb.com>
      e8edca6f
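
      The resulting pattern, sketched; the request-queuing call is elided
      since only the locking shape matters here:

          unsigned long flags;
          bool notify;

          spin_lock_irqsave(&vblk->vq_lock, flags);
          /* ... add the request to the virtqueue ... */
          notify = virtqueue_kick_prepare(vblk->vq);
          spin_unlock_irqrestore(&vblk->vq_lock, flags);

          /* notify outside the lock: this may trap to the hypervisor */
          if (notify)
                  virtqueue_notify(vblk->vq);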
  17. 29 May 2014, 4 commits
    • xen-blkback: defer freeing blkif to avoid blocking xenwatch · 814d04e7
      Committed by Valentin Priescu
      Currently xenwatch blocks in VBD disconnect, waiting for all pending I/O
      requests to finish. If the VBD is attached to a hot-swappable disk, then
      xenwatch can hang for a long period of time, stalling other watches.
      
       INFO: task xenwatch:39 blocked for more than 120 seconds.
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
       ffff880057f01bd0 0000000000000246 ffff880057f01ac0 ffffffff810b0782
       ffff880057f01ad0 00000000000131c0 0000000000000004 ffff880057edb040
       ffff8800344c6080 0000000000000000 ffff880058c00ba0 ffff880057edb040
       Call Trace:
       [<ffffffff810b0782>] ? irq_to_desc+0x12/0x20
       [<ffffffff8128f761>] ? list_del+0x11/0x40
       [<ffffffff8147a080>] ? wait_for_common+0x60/0x160
       [<ffffffff8147bcef>] ? _raw_spin_lock_irqsave+0x2f/0x50
       [<ffffffff8147bd49>] ? _raw_spin_unlock_irqrestore+0x19/0x20
       [<ffffffff8147a26a>] schedule+0x3a/0x60
       [<ffffffffa018fe6a>] xen_blkif_disconnect+0x8a/0x100 [xen_blkback]
       [<ffffffff81079f70>] ? wake_up_bit+0x40/0x40
       [<ffffffffa018ffce>] xen_blkbk_remove+0xae/0x1e0 [xen_blkback]
       [<ffffffff8130b254>] xenbus_dev_remove+0x44/0x90
       [<ffffffff81345cb7>] __device_release_driver+0x77/0xd0
       [<ffffffff81346488>] device_release_driver+0x28/0x40
       [<ffffffff813456e8>] bus_remove_device+0x78/0xe0
       [<ffffffff81342c9f>] device_del+0x12f/0x1a0
       [<ffffffff81342d2d>] device_unregister+0x1d/0x60
       [<ffffffffa0190826>] frontend_changed+0xa6/0x4d0 [xen_blkback]
       [<ffffffffa019c252>] ? frontend_changed+0x192/0x650 [xen_netback]
       [<ffffffff8130ae50>] ? cmp_dev+0x60/0x60
       [<ffffffff81344fe4>] ? bus_for_each_dev+0x94/0xa0
       [<ffffffff8130b06e>] xenbus_otherend_changed+0xbe/0x120
       [<ffffffff8130b4cb>] frontend_changed+0xb/0x10
       [<ffffffff81309c82>] xenwatch_thread+0xf2/0x130
       [<ffffffff81079f70>] ? wake_up_bit+0x40/0x40
       [<ffffffff81309b90>] ? xenbus_directory+0x80/0x80
       [<ffffffff810799d6>] kthread+0x96/0xa0
       [<ffffffff81485934>] kernel_thread_helper+0x4/0x10
       [<ffffffff814839f3>] ? int_ret_from_sys_call+0x7/0x1b
       [<ffffffff8147c17c>] ? retint_restore_args+0x5/0x6
       [<ffffffff81485930>] ? gs_change+0x13/0x13
      
      With this patch, when there is still pending I/O, the actual disconnect
      is done by the last reference holder (last pending I/O request). In this
      case, xenwatch doesn't block indefinitely.
      Signed-off-by: Valentin Priescu <priescuv@amazon.com>
      Reviewed-by: Steven Kady <stevkady@amazon.com>
      Reviewed-by: Steven Noonan <snoonan@amazon.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      814d04e7
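
      Sketched, the put path changes from "wait for the refcount to
      drain" to "let the last holder schedule the teardown". Names follow
      xen-blkback, but this is an illustration of the pattern:

          static void xen_blkif_deferred_free(struct work_struct *work)
          {
                  struct xen_blkif *blkif =
                          container_of(work, struct xen_blkif, free_work);

                  xen_blkif_free(blkif);
          }

          void xen_blkif_put(struct xen_blkif *blkif)
          {
                  /* last reference holder performs the actual teardown,
                   * so xenwatch never blocks waiting for pending I/O */
                  if (atomic_dec_and_test(&blkif->refcnt))
                          schedule_work(&blkif->free_work);
          }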
    • xen/blkback: disable discard feature if requested by toolstack · c926b701
      Committed by Olaf Hering
      Newer toolstacks may provide a boolean property "discard-enable" in
      the backend node.  Its purpose is to disable discard for file-backed
      storage, to avoid fragmentation.  Recognize this setting for
      physical storage as well.  If the property exists and is false, do
      not advertise "feature-discard" to the frontend (sketched after
      this entry).
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      c926b701
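
      Reading the property amounts to an optional xenbus boolean, roughly
      (variable names illustrative; dev is the backend xenbus_device):

          int err, discard_enable;

          err = xenbus_scanf(XBT_NIL, dev->nodename, "discard-enable",
                             "%d", &discard_enable);
          if (err == 1 && !discard_enable)
                  return;   /* toolstack disabled discard; don't advertise */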
    • xen-blkfront: remove type check from blkfront_setup_discard · 1c8cad6c
      Committed by Olaf Hering
      The initial implementation added a check for "type", but only the
      phy and file values are handled.  This breaks advertised discard
      support for other type values such as qdisk.

      Fix and simplify this function: if the backend advertises discard
      support, it is supposed to implement it properly, so enable
      feature_discard unconditionally.  If the backend advertises the
      need for a certain granularity and alignment, propagate both
      properties to the block layer.  The discard-secure property is a
      boolean; update the code to reflect that (sketched after this
      entry).
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      1c8cad6c
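
      The simplified logic, sketched; this is close to the shape of the
      patch but should be read as illustrative:

          unsigned int granularity, alignment;
          int discard_secure, err;

          info->feature_discard = 1;   /* backend advertised it; trust it */

          /* granularity/alignment are optional hints from the backend */
          err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
                              "discard-granularity", "%u", &granularity,
                              "discard-alignment", "%u", &alignment,
                              NULL);
          if (!err) {
                  info->discard_granularity = granularity;
                  info->discard_alignment = alignment;
          }

          /* discard-secure is a plain boolean */
          err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
                             "discard-secure", "%d", &discard_secure);
          if (err > 0)
                  info->feature_secdiscard = !!discard_secure;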
    • blk-mq: remove alloc_hctx and free_hctx methods · cdef54dd
      Committed by Christoph Hellwig
      There is no need for drivers to control hardware context allocation
      now that we do the context to node mapping in common code.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      cdef54dd