1. 02 May 2013, 8 commits
    • libceph: record length of bio list with bio · fdce58cc
      Committed by Alex Elder
      When assigning a bio pointer to an osd request, we don't have an
      efficient way of knowing the total length, in bytes, of the bio list.
      That information is available at the point it's set up by the rbd
      code, so record it with the osd data when it's set.
      
      This and the next patch are related to maintaining the length of a
      message's data independent of the message header, as described here:
          http://tracker.ceph.com/issues/4589
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
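
      For illustration, a minimal userspace sketch of the pattern described
      above; the type and field names are invented stand-ins, not the actual
      libceph definitions.  The byte total is computed once, where the bio
      chain is assembled, and stored next to the pointer so later code never
      has to walk the chain again.

        #include <stddef.h>

        /* Illustrative stand-ins for the kernel structures. */
        struct bio_stub {
                size_t bi_size;             /* bytes carried by this bio */
                struct bio_stub *bi_next;   /* next bio in the chain */
        };

        struct osd_bio_data {
                struct bio_stub *bio_list;  /* head of the bio chain */
                size_t bio_length;          /* total bytes, recorded at setup */
        };

        /* The caller (rbd, per the commit) already knows the total length,
         * so it is stored with the list rather than recomputed later. */
        static void osd_data_set_bio(struct osd_bio_data *data,
                                     struct bio_stub *bio_list, size_t total)
        {
                data->bio_list = bio_list;
                data->bio_length = total;
        }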
    • libceph: define source request op functions · 33803f33
      Committed by Alex Elder
      The rbd code has a function that allocates and populates a
      ceph_osd_req_op structure (the in-core version of an osd request
      operation).  When reviewed, Josh suggested two things: that the
      big varargs function might be better split into type-specific
      functions; and that this functionality really belongs in the osd
      client rather than rbd.
      
      This patch implements both of Josh's suggestions.  It breaks
      up the rbd function into separate functions and defines them
      in the osd client module as exported interfaces.  Unlike the
      rbd version, however, the functions don't allocate an osd_req_op
      structure; they are provided the address of one and that is
      initialized instead.
      
      The rbd function has been eliminated and calls to it have been
      replaced by calls to the new routines.  The rbd code now uses a
      stack (struct) variable to hold the op rather than allocating and
      freeing it each time.
      
      For now only the capabilities used by rbd are implemented.
      Implementing all the other osd op types, and making the rest of the
      code use them, will be done separately in the next few patches.
      
      Note that only the extent, cls, and watch portions of the
      ceph_osd_req_op structure are currently used.  Delete the others
      (xattr, pgls, and snap) from its definition so nobody thinks they
      are actually implemented or needed.  We can add them back later
      if needed, once we know they have been tested.
      
      This (and a few follow-on patches) resolves:
          http://tracker.ceph.com/issues/3861
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
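
      A sketch of the shape of this change, with invented names and a
      trimmed-down op structure: instead of one big varargs allocator, each
      op type gets its own initializer that fills in a caller-provided
      (possibly stack-allocated) struct.

        #include <stdint.h>
        #include <string.h>

        /* Trimmed-down, illustrative op structure (not the real one). */
        struct req_op_stub {
                uint16_t op;
                union {
                        struct {
                                uint64_t offset;
                                uint64_t length;
                        } extent;
                        struct {
                                const char *class_name;
                                const char *method_name;
                        } cls;
                };
        };

        /* Type-specific initializers fill in a caller-provided op, so the
         * caller can keep it on the stack instead of heap-allocating it. */
        static void op_extent_init(struct req_op_stub *op, uint16_t opcode,
                                   uint64_t offset, uint64_t length)
        {
                memset(op, 0, sizeof(*op));
                op->op = opcode;
                op->extent.offset = offset;
                op->extent.length = length;
        }

        static void op_cls_init(struct req_op_stub *op, uint16_t opcode,
                                const char *class_name, const char *method)
        {
                memset(op, 0, sizeof(*op));
                op->op = opcode;
                op->cls.class_name = class_name;
                op->cls.method_name = method;
        }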
    • ceph: move max constant definitions · adfe695a
      Committed by Alex Elder
      Move some definitions for max integer values out of the rbd code and
      into the more central "decode.h" header file.  These really belong
      in a Linux (or libc) header somewhere, but I haven't gotten around
      to proposing that yet.
      
      This is in preparation for moving some code out of rbd.c and into
      the osd client.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
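
      The commit message doesn't spell out the constants, but limits of this
      kind are normally derived from the unsigned type's width; a hedged
      sketch under that assumption (the exact names and form in "decode.h"
      may differ):

        #include <stdint.h>

        /* Maximum values for fixed-width unsigned integer types. */
        #define U8_MAX   ((uint8_t)~0U)
        #define U16_MAX  ((uint16_t)~0U)
        #define U32_MAX  ((uint32_t)~0U)
        #define U64_MAX  ((uint64_t)~0ULL)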
    • libceph: let osd ops determine request data length · 175face2
      Committed by Alex Elder
      The length of outgoing data in an osd request is dependent on the
      osd ops that are embedded in that request.  Each op is encoded into
      a request message using osd_req_encode_op(), so that should be used
      to determine the amount of outgoing data implied by the op as it
      is encoded.
      
      Have osd_req_encode_op() return the number of bytes of outgoing data
      implied by the op being encoded, and accumulate and use that in
      ceph_osdc_build_request().
      
      As a result, ceph_osdc_build_request() no longer requires its "len"
      parameter, so get rid of it.
      
      Using the sum of the op lengths rather than the length provided is
      a valid change because:
          - The only callers of ceph_osdc_build_request() are
            rbd and the osd client (in ceph_osdc_new_request() on
            behalf of the file system).
          - When rbd calls it, the length provided is only non-zero for
            write requests, and in that case the single op has the
            same length value as what was passed here.
          - When called from ceph_osdc_new_request(), (it's not all that
            easy to see, but) the length passed is also always the same
            as the extent length encoded in its (single) write op if
            present.
      
      This resolves:
          http://tracker.ceph.com/issues/4406
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
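
      A minimal sketch of the flow, with placeholder types and names: the
      per-op encoder reports how many outgoing bytes the op implies, and the
      request builder sums those values instead of taking a separate "len"
      argument.

        #include <stdint.h>

        /* Illustrative stand-in for an osd op. */
        struct op_stub {
                int is_write;              /* does this op carry data out? */
                uint64_t extent_length;    /* bytes written, for write ops */
        };

        /* Encode one op into the request message (encoding elided) and
         * return the outgoing data length that op implies. */
        static uint64_t encode_op(const struct op_stub *op)
        {
                /* ... encode the op into the message here ... */
                return op->is_write ? op->extent_length : 0;
        }

        /* The builder accumulates the per-op lengths itself. */
        static uint64_t request_data_len(const struct op_stub *ops, int n)
        {
                uint64_t total = 0;

                for (int i = 0; i < n; i++)
                        total += encode_op(&ops[i]);
                return total;
        }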
    • libceph: record byte count not page count · e0c59487
      Committed by Alex Elder
      Record the byte count for an osd request rather than the page count.
      The number of pages can always be derived from the byte count (and
      alignment/offset) but the reverse is not true.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
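
      The derivation relied on here, as a hedged sketch (assuming 4 KiB
      pages and that "alignment" is the data's offset within the first
      page); going the other way, from a page count back to bytes, loses
      information:

        #include <stdint.h>

        #define SKETCH_PAGE_SIZE 4096u   /* assumed page size */

        /* Pages spanned by `length` bytes starting `alignment` bytes into
         * the first page. */
        static uint64_t pages_for(uint32_t alignment, uint64_t length)
        {
                return (alignment + length + SKETCH_PAGE_SIZE - 1) /
                        SKETCH_PAGE_SIZE;
        }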
    • libceph: separate read and write data · 0fff87ec
      Committed by Alex Elder
      An osd request defines information about where data to be read
      should be placed as well as where data to write comes from.
      Currently these are represented by common fields.
      
      Keep information about data for writing separate from data to be
      read by splitting these into data_in and data_out fields.
      
      This is the key patch in this whole series, in that it actually
      identifies which osd requests generate outgoing data and which
      generate incoming data.  It's less obvious (currently) that an osd
      CALL op generates both outgoing and incoming data; that's the focus
      of some upcoming work.
      
      This resolves:
          http://tracker.ceph.com/issues/4127
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
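
      The shape of the split, sketched with placeholder names (not the
      actual libceph fields): the request carries one descriptor for data it
      sends and one for data it expects back, rather than a single shared
      set of fields.

        /* Describes one data buffer (pages or a bio chain, elided here). */
        struct osd_data_desc_stub {
                void *buffer;
                unsigned long length;
        };

        struct osd_request_stub {
                /* ...request header, ops, completion state... */
                struct osd_data_desc_stub data_out;  /* sent with the request */
                struct osd_data_desc_stub data_in;   /* filled by the reply */
        };

        /* A write op uses data_out, a read op uses data_in; a CALL op
         * (noted in the commit) needs both, which the split makes possible. */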
    • libceph: distinguish page and bio requests · 2ac2b7a6
      Committed by Alex Elder
      An osd request uses either pages or a bio list for its data.  Use a
      union to record information about the two, and add a data type
      tag to select between them.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
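
      A hedged sketch of the tagged union described above (placeholder
      names, not the real definitions): the type field selects which union
      member is valid.

        enum osd_data_type_stub {
                OSD_DATA_TYPE_NONE,
                OSD_DATA_TYPE_PAGES,
                OSD_DATA_TYPE_BIO,
        };

        struct osd_data_stub {
                enum osd_data_type_stub type;    /* selects the union member */
                union {
                        struct {
                                void **pages;
                                unsigned long length;
                                unsigned int alignment;
                        } pages_info;
                        struct {
                                void *bio_list;
                                unsigned long bio_length;
                        } bio_info;
                };
        };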
    • libceph: separate osd request data info · 2794a82a
      Committed by Alex Elder
      Pull the fields in an osd request structure that define the data for
      the request out into a separate structure.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
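
      Sketch of the refactor, with placeholder names: the data-describing
      fields that used to sit directly in the request are grouped into one
      structure, which the request then embeds.

        /* All data-related fields gathered in one place. */
        struct osd_req_data_stub {
                void **pages;             /* data pages */
                unsigned long length;     /* total bytes */
                unsigned int alignment;   /* offset into the first page */
                int own_pages;            /* request owns the pages? */
        };

        struct osd_request_stub {
                /* ...other request state... */
                struct osd_req_data_stub data;
        };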
  2. 18 Apr 2013, 1 commit
  3. 08 Apr 2013, 1 commit
    • Revert "loop: cleanup partitions when detaching loop device" · c2fccc1c
      Committed by Jens Axboe
      This reverts commit 8761a3dc.
      
      There are situations where the destruction path is called
      with the bdev->bd_mutex already held, which then deadlocks in
      loop_clr_fd(). The normal partition cleanup does a trylock()
      on the mutex, but it'd be nice to have a more bulletproof
      method in loop. So punt this more involved fix to the next
      merge window, and just back out this buggy fix for now.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
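
      A userspace analogue of the trylock pattern mentioned above (a pthread
      mutex standing in for bdev->bd_mutex): when the caller might already
      hold the lock, a blocking lock would self-deadlock, so cleanup only
      proceeds if the lock can be taken right now.

        #include <pthread.h>

        static void cleanup_if_unlocked(pthread_mutex_t *m)
        {
                if (pthread_mutex_trylock(m) != 0)
                        return;   /* already held (possibly by us): skip */
                /* ... do the partition cleanup ... */
                pthread_mutex_unlock(m);
        }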
  4. 04 Apr 2013, 4 commits
  5. 02 Apr 2013, 1 commit
    • loop: prevent bdev freeing while device in use · c1681bf8
      Committed by Anatol Pomozov
      The struct block_device lifecycle is defined by its inode (see
      fs/block_dev.c): the block_device is allocated the first time we access
      /dev/loopXX and deallocated in bdev_destroy_inode.  When we set up the
      device with "losetup /dev/loopXX afile" we want that block_device to
      stay alive until we destroy the loop device with "losetup -d".
      
      But because we do not hold the /dev/loopXX inode, its counter drops to
      0 and the inode/bdev can be destroyed at any moment, usually under
      memory pressure or when the user drops the inode cache (as in the test
      below).  When loop_clr_fd() later tries to use the bdev, we get a
      use-after-free with the following stack:
      
      BUG: unable to handle kernel NULL pointer dereference at 0000000000000280
        bd_set_size+0x10/0xa0
        loop_clr_fd+0x1f8/0x420 [loop]
        lo_ioctl+0x200/0x7e0 [loop]
        lo_compat_ioctl+0x47/0xe0 [loop]
        compat_blkdev_ioctl+0x341/0x1290
        do_filp_open+0x42/0xa0
        compat_sys_ioctl+0xc1/0xf20
        do_sys_open+0x16e/0x1d0
        sysenter_dispatch+0x7/0x1a
      
      To prevent use-after-free we need to grab the device in loop_set_fd()
      and put it later in loop_clr_fd().
      
      The issue is reproducible on the current Linus head and on v3.3.  Here is the test:
      
        dd if=/dev/zero of=loop.file bs=1M count=1
        while [ true ]; do
          losetup /dev/loop0 loop.file
          echo 2 > /proc/sys/vm/drop_caches
          losetup -d /dev/loop0
        done
      
      [ Doing bdgrab/bput in loop_set_fd/loop_clr_fd is safe, because every
        time we call loop_set_fd() we check that loop_device->lo_state is
        Lo_unbound and set it to Lo_bound.  If somebody tries set_fd again,
        it will get EBUSY.  And if we try loop_clr_fd() on an unbound loop
        device we'll get ENXIO.
      
        loop_set_fd/loop_clr_fd (and any other loop ioctl) is called under
        loop_device->lo_ctl_mutex. ]
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
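
      A userspace sketch of the grab/put idea, using a plain reference count
      in place of bdgrab()/bdput(): binding the loop device takes an extra
      reference so the bdev survives cache pressure, and clearing it drops
      that reference only after the bdev has been used for the last time.

        #include <assert.h>

        /* Stand-in for the block_device, with a simple reference count. */
        struct bdev_stub {
                int refcount;
        };

        static void bdev_grab(struct bdev_stub *b)
        {
                b->refcount++;
        }

        static void bdev_put(struct bdev_stub *b)
        {
                assert(b->refcount > 0);
                if (--b->refcount == 0) {
                        /* last reference gone: object may now be freed */
                }
        }

        /* Bind (loop_set_fd analogue): pin the bdev for the device's life. */
        static void loop_bind(struct bdev_stub *b)
        {
                bdev_grab(b);
        }

        /* Clear (loop_clr_fd analogue): still safe to touch the bdev here,
         * then release the reference taken at bind time. */
        static void loop_clear(struct bdev_stub *b)
        {
                /* ... resizing, flushing, etc. would happen here ... */
                bdev_put(b);
        }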
  6. 30 Mar 2013, 1 commit
    • rbd: don't zero-fill non-image object requests · 6e2a4505
      Committed by Alex Elder
      A result of ENOENT from a read request for an object that's part of
      an rbd image indicates that there is a hole in that portion of the
      image.  Similarly, a short read for such an object indicates that
      the read should be interpreted as a full read, with zeros filling
      out the remainder of the request.
      
      This behavior is not correct for objects that are not backing rbd
      image data.  Currently rbd_img_obj_request_callback() assumes it
      should be done for all objects.
      
      Change rbd_img_obj_request_callback() so it only does this zeroing
      for image objects.  Encapsulate that special handling in its own
      function.  Add an assertion that the image object request is a bio
      request, since we assume that (and we currently don't support any
      other types).
      
      This resolves a problem identified here:
          http://tracker.ceph.com/issues/4559
      
      The regression was introduced by bf0d5f50.
      Reported-by: Dan van der Ster <dan@vanderster.com>
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
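
      A sketch of the resulting logic, with placeholder names (the real
      callback and request types differ): the ENOENT / short-read zero-fill
      is applied only when the object request backs rbd image data, and the
      special handling lives in its own helper.

        #include <errno.h>
        #include <string.h>

        /* Placeholder object-request type. */
        struct obj_req_stub {
                int is_img_request;   /* object backs rbd image data? */
                char *buf;            /* read buffer */
                long buf_len;         /* bytes requested */
                long xferred;         /* bytes actually returned */
                int result;           /* 0 or a negative errno (e.g. -ENOENT) */
        };

        /* Helper encapsulating the image-object handling: a hole (ENOENT)
         * or a short read is completed by zero-filling the remainder. */
        static void img_obj_read_fixup(struct obj_req_stub *req)
        {
                if (req->result == -ENOENT) {
                        memset(req->buf, 0, req->buf_len);
                        req->xferred = req->buf_len;
                        req->result = 0;
                } else if (req->result >= 0 && req->xferred < req->buf_len) {
                        memset(req->buf + req->xferred, 0,
                               req->buf_len - req->xferred);
                        req->xferred = req->buf_len;
                }
        }

        static void obj_request_done(struct obj_req_stub *req)
        {
                if (req->is_img_request)   /* image objects only */
                        img_obj_read_fixup(req);
                /* ... complete the request ... */
        }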
  7. 29 Mar 2013, 1 commit
  8. 28 Mar 2013, 1 commit
  9. 27 Mar 2013, 2 commits
  10. 26 Mar 2013, 1 commit
  11. 23 Mar 2013, 3 commits
  12. 22 Mar 2013, 2 commits
  13. 20 Mar 2013, 3 commits
  14. 19 Mar 2013, 4 commits
  15. 16 Mar 2013, 2 commits
  16. 12 Mar 2013, 5 commits