1. 05 Apr, 2014 (11 commits)
  2. 03 Apr, 2014 (1 commit)
    • libceph: a per-osdc crush scratch buffer · 9d521470
      Ilya Dryomov authored
      With the addition of erasure coding support in the future, the scratch
      variable-length array in crush_do_rule_ary() is going to grow to at
      least 200 bytes on average, on top of another 128 bytes consumed by
      rawosd/osd arrays in the call chain.  Replace it with a buffer inside
      struct osdmap and a mutex.  This shouldn't result in any contention,
      because all osd requests were already serialized by request_mutex at
      that point; the only unlocked caller was ceph_ioctl_get_dataloc().
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
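
      For context, a minimal sketch of the approach, with illustrative (not
      necessarily verbatim) kernel identifiers: the on-stack VLA moves into
      struct ceph_osdmap as a fixed buffer, and a mutex serializes its use.

        /* Hedged sketch only; field and helper names are illustrative. */
        struct ceph_osdmap {
                /* ... existing map state ... */
                struct mutex crush_scratch_mutex;
                int crush_scratch_ary[CEPH_PG_MAX_SIZE * 3];
        };

        static int do_crush(struct ceph_osdmap *map, int ruleno, int x,
                            int *result, int result_max,
                            const __u32 *weight, int weight_max)
        {
                int r;

                /* serialize use of the shared scratch buffer */
                mutex_lock(&map->crush_scratch_mutex);
                r = crush_do_rule(map->crush, ruleno, x, result, result_max,
                                  weight, weight_max, map->crush_scratch_ary);
                mutex_unlock(&map->crush_scratch_mutex);

                return r;
        }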
  3. 28 Jan, 2014 (3 commits)
  4. 01 Jan, 2014 (2 commits)
  5. 04 Sep, 2013 (1 commit)
  6. 02 May, 2013 (2 commits)
    • libceph: define ceph_decode_pgid() only once · ef4859d6
      Alex Elder authored
      There are two basically identical definitions of __decode_pgid()
      in libceph, one in "net/ceph/osdmap.c" and the other in
      "net/ceph/osd_client.c".  Get rid of both, and instead define
      a single inline version in "include/linux/ceph/osdmap.h".
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
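
      The shared helper in "include/linux/ceph/osdmap.h" can be sketched
      roughly as follows, using the ceph_decode_* helpers from
      <linux/ceph/decode.h>; the bounds check and version handling shown
      here are illustrative rather than a verbatim copy:

        /* Hedged sketch of the single inline decoder. */
        static inline int ceph_decode_pgid(void **p, void *end,
                                           struct ceph_pg *pgid)
        {
                __u8 version;

                /* 1-byte version, 64-bit pool, 32-bit seed, 32-bit legacy field */
                if (!ceph_has_room(p, end, 1 + 8 + 4 + 4))
                        return -EINVAL;

                version = ceph_decode_8(p);
                if (version > 1)
                        return -EINVAL;   /* unknown pg encoding version */

                pgid->pool = ceph_decode_64(p);
                pgid->seed = ceph_decode_32(p);
                *p += 4;                  /* skip deprecated "preferred" field */

                return 0;
        }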
    • libceph: rename ceph_calc_object_layout() · 41766f87
      Alex Elder authored
      The purpose of ceph_calc_object_layout() is to fill in the pool
      number and seed for a ceph_pg structure provided, based on a given
      osd map and target object id.
      
      Currently that function takes a file layout parameter, but the only
      thing used out of that is its pool number.
      
      Change the function so it takes a pool number rather than the full
      file layout structure.  Only update the ceph_pg if the pool is found
      in the osd map.  Get rid of a few useless lines of code from the
      function while we're there.
      
      Since the function now very clearly just fills in the ceph_pg
      structure it's provided, rename it ceph_calc_ceph_pg().
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
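
      At the prototype level the change is roughly the following (a sketch;
      the exact argument order in the tree may differ):

        /* Before: the whole file layout was passed in just for its pool. */
        int ceph_calc_object_layout(struct ceph_pg *pg, const char *oid,
                                    struct ceph_file_layout *fl,
                                    struct ceph_osdmap *osdmap);

        /* After: only the pool id is needed; *pg is filled in only when
         * the pool is present in the osdmap. */
        int ceph_calc_ceph_pg(struct ceph_pg *pg, const char *oid,
                              struct ceph_osdmap *osdmap, uint64_t pool);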
  7. 12 Mar, 2013 (1 commit)
  8. 27 Feb, 2013 (5 commits)
  9. 26 Jan, 2013 (1 commit)
    • libceph: fix undefined behavior when using snprintf() · 1ec3911d
      Cong Ding authored
      The variable "str" is used as both the source and the destination in
      a call to snprintf(), which is undefined behavior according to C11.
      The relevant wording in C11 is:
      	"If copying takes place between objects that
      	overlap, the behavior is undefined."
      
      Also, the purpose of ceph_osdmap_state_str() is to return the osdmap
      state, so it should return "doesn't exist" when none of the conditions
      is satisfied.  This patch fixes both issues.
      
      [elder@inktank.com: shortened the commit message]
      Signed-off-by: Cong Ding <dinggnu@gmail.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
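
      The shape of the fix can be sketched as below: each state string is
      produced by a single snprintf() call with no overlapping source and
      destination, and "doesn't exist" is the fallback (the flag names are
      the usual CEPH_OSD_EXISTS / CEPH_OSD_UP bits; treat the exact body as
      illustrative):

        char *ceph_osdmap_state_str(char *str, int len, int state)
        {
                if (!len)
                        return str;

                if ((state & CEPH_OSD_EXISTS) && (state & CEPH_OSD_UP))
                        snprintf(str, len, "exists, up");
                else if (state & CEPH_OSD_EXISTS)
                        snprintf(str, len, "exists");
                else if (state & CEPH_OSD_UP)
                        snprintf(str, len, "up");
                else
                        snprintf(str, len, "doesn't exist");

                return str;
        }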
  10. 18 Jan, 2013 (2 commits)
    • libceph: pass length to ceph_calc_file_object_mapping() · e8afad65
      Alex Elder authored
      ceph_calc_file_object_mapping() takes (among other things) a "file"
      offset and length, and based on the layout, determines the object
      number ("bno") backing the affected portion of the file's data and
      the offset into that object where the desired range begins.  It also
      computes the size that should be used for the request--either the
      amount requested or something less if that would exceed the end of
      the object.
      
      This patch changes the input length parameter in this function so it
      is used only for input.  That is, the argument will be passed by
      value rather than by address, so the value provided won't get
      updated by the function.
      
      The value would only get updated if the length would surpass the
      current object, and in that case the value it got updated to would
      be exactly that returned in *oxlen.
      
      Only one of the two callers is affected by this change.  Update
      ceph_calc_raw_layout() so it records any updated value.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
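
      At the prototype level the change is roughly as follows (a sketch;
      the parameter names follow the terminology used above and may not
      match the tree exactly):

        /* Before: length passed by address and clipped in place. */
        int ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
                                          u64 off, u64 *plen,
                                          u64 *bno, u64 *oxoff, u64 *oxlen);

        /* After: length is input-only; any clipping shows up via *oxlen. */
        int ceph_calc_file_object_mapping(struct ceph_file_layout *layout,
                                          u64 off, u64 len,
                                          u64 *bno, u64 *oxoff, u64 *oxlen);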
    • libceph: for chooseleaf rules, retry CRUSH map descent from root if leaf is failed · 1604f488
      Jim Schutt authored
      Add libceph support for a new CRUSH tunable recently added to Ceph servers.
      
      Consider the CRUSH rule
        step chooseleaf firstn 0 type <node_type>
      
      This rule means that <n> replicas will be chosen in a manner such that
      each chosen leaf's branch will contain a unique instance of <node_type>.
      
      When an object is re-replicated after a leaf failure, if the CRUSH map uses
      a chooseleaf rule the remapped replica ends up under the <node_type> bucket
      that held the failed leaf.  This causes uneven data distribution across the
      storage cluster, to the point that when all the leaves but one fail under a
      particular <node_type> bucket, that remaining leaf holds all the data from
      its failed peers.
      
      This behavior also limits the number of peers that can participate in the
      re-replication of the data held by the failed leaf, which increases the
      time required to re-replicate after a failure.
      
      For a chooseleaf CRUSH rule, the tree descent has two steps: call them the
      inner and outer descents.
      
      If the tree descent down to <node_type> is the outer descent, and the descent
      from <node_type> down to a leaf is the inner descent, the issue is that a
      down leaf is detected on the inner descent, so only the inner descent is
      retried.
      
      In order to disperse re-replicated data as widely as possible across a
      storage cluster after a failure, we want to retry the outer descent. So,
      fix up crush_choose() to allow the inner descent to return immediately on
      choosing a failed leaf.  Wire this up as a new CRUSH tunable.
      
      Note that after this change, for a chooseleaf rule, if the primary OSD
      in a placement group has failed, choosing a replacement may result in
      one of the other OSDs in the PG colliding with the new primary.  This
      means that OSD's data for that PG needs to be moved as well.  This
      seems unavoidable but should be relatively rare.
      
      This corresponds to ceph.git commit 88f218181a9e6d2292e2697fc93797d0f6d6e5dc.
      Signed-off-by: Jim Schutt <jaschut@sandia.gov>
      Reviewed-by: Sage Weil <sage@inktank.com>
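
      Conceptually the tunable is a flag on the CRUSH map that makes the
      inner (chooseleaf) descent give up on a failed leaf instead of
      retrying locally, so the outer descent restarts from the root.  A
      hedged sketch of where it lives (the field name mirrors the tunable's
      name and may not match the tree exactly):

        struct crush_map {
                /* ... buckets, rules, existing tunables ... */

                /* If non-zero, a chooseleaf inner descent that picks a
                 * failed (down) leaf returns immediately, forcing the
                 * outer descent to be retried from the root and spreading
                 * re-replicated data across more peers. */
                __u32 chooseleaf_descend_once;
        };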
  11. 01 Nov, 2012 (1 commit)
  12. 30 Oct, 2012 (1 commit)
  13. 02 Oct, 2012 (1 commit)
  14. 31 Jul, 2012 (1 commit)
  15. 07 Jun, 2012 (3 commits)
  16. 22 May, 2012 (1 commit)
  17. 08 May, 2012 (3 commits)