1. 15 October 2014 · 1 commit
  2. 17 May 2014 · 1 commit
  3. 29 April 2014 · 1 commit
    • libceph: fix non-default values check in apply_primary_affinity() · 92b2e751
      Committed by Ilya Dryomov
      The osd_primary_affinity array is indexed incorrectly when checking
      for non-default primary-affinity values.  This nullifies the effect of
      the rest of apply_primary_affinity() and results in misdirected
      requests.
      
                      if (osds[i] != CRUSH_ITEM_NONE &&
                          osdmap->osd_primary_affinity[i] !=
                                                      ^^^
                                              CEPH_OSD_DEFAULT_PRIMARY_AFFINITY) {
      
      For a pool with size 2, this always ends up checking osd0 and osd1
      primary_affinity values, instead of the values that correspond to the
      osds in question.  E.g., given a [2,3] up set and a [max,max,0,max]
      primary affinity vector, requests are still sent to osd2, because both
      osd0 and osd1 happen to have max primary_affinity values and therefore
      we return from apply_primary_affinity() early on the premise that all
      osds in the given set have max (default) values.  Fix it.
      
      Fixes: http://tracker.ceph.com/issues/7954
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
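      For reference, a minimal sketch of the corrected check: the affinity
      array must be indexed by the OSD id, osds[i], rather than by the loop
      counter itself (names are taken from the snippet above; the
      surrounding loop is reconstructed for illustration):

              /* Bail out early only if every OSD actually present in the
               * set has the default primary affinity. */
              for (i = 0; i < len; i++) {
                      if (osds[i] != CRUSH_ITEM_NONE &&
                          osdmap->osd_primary_affinity[osds[i]] !=
                                      CEPH_OSD_DEFAULT_PRIMARY_AFFINITY)
                              break;
              }
              if (i == len)
                      return;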
  4. 05 April 2014 · 27 commits
  5. 03 April 2014 · 1 commit
    • libceph: a per-osdc crush scratch buffer · 9d521470
      Committed by Ilya Dryomov
      With the addition of erasure coding support in the future, the scratch
      variable-length array in crush_do_rule_ary() is going to grow to at
      least 200 bytes on average, on top of another 128 bytes consumed by the
      rawosd/osd arrays in the call chain.  Replace it with a buffer inside
      struct osdmap and a mutex.  This shouldn't result in any contention,
      because all osd requests were already serialized by request_mutex at
      that point; the only unlocked caller was ceph_ioctl_get_dataloc().
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
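      A rough sketch of the pattern the commit describes: the scratch buffer
      and mutex live in struct ceph_osdmap, and every CRUSH invocation is
      funneled through a locked helper (field and helper names are
      illustrative, reconstructed from the description above):

              struct ceph_osdmap {
                      /* ... existing fields ... */
                      struct mutex crush_scratch_mutex;
                      int crush_scratch_ary[CEPH_PG_MAX_SIZE * 3];
              };

              static int do_crush(struct ceph_osdmap *map, int ruleno, int x,
                                  int *result, int result_max,
                                  const __u32 *weight, int weight_max)
              {
                      int r;

                      BUG_ON(result_max > CEPH_PG_MAX_SIZE);

                      /* Serialize access to the shared scratch buffer. */
                      mutex_lock(&map->crush_scratch_mutex);
                      r = crush_do_rule(map->crush, ruleno, x, result,
                                        result_max, weight, weight_max,
                                        map->crush_scratch_ary);
                      mutex_unlock(&map->crush_scratch_mutex);

                      return r;
              }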
  6. 28 January 2014 · 3 commits
  7. 01 January 2014 · 2 commits
  8. 04 September 2013 · 1 commit
  9. 02 May 2013 · 2 commits
    • libceph: define ceph_decode_pgid() only once · ef4859d6
      Committed by Alex Elder
      There are two basically identical definitions of __decode_pgid()
      in libceph, one in "net/ceph/osdmap.c" and the other in
      "net/ceph/osd_client.c".  Get rid of both, and instead define
      a single inline version in "include/linux/ceph/osdmap.h".
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
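      The consolidated decoder plausibly looks like a single static inline
      in "include/linux/ceph/osdmap.h", along these lines (a sketch only:
      the exact bounds checks and the skipped legacy field are assumptions,
      not confirmed by the log above):

              static inline int ceph_decode_pgid(void **p, void *end,
                                                 struct ceph_pg *pgid)
              {
                      u8 version;

                      /* 1 version byte + 64-bit pool + 32-bit seed +
                       * 32-bit deprecated preferred value */
                      if (!ceph_has_room(p, end, 1 + 8 + 4 + 4)) {
                              pr_warn("incomplete pg encoding\n");
                              return -EINVAL;
                      }
                      version = ceph_decode_8(p);
                      if (version > 1) {
                              pr_warn("do not understand pg encoding %d > 1\n",
                                      (int)version);
                              return -EINVAL;
                      }

                      pgid->pool = ceph_decode_64(p);
                      pgid->seed = ceph_decode_32(p);
                      *p += 4;   /* skip deprecated preferred value */

                      return 0;
              }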
    • libceph: rename ceph_calc_object_layout() · 41766f87
      Committed by Alex Elder
      The purpose of ceph_calc_object_layout() is to fill in the pool
      number and seed for a ceph_pg structure provided, based on a given
      osd map and target object id.
      
      Currently that function takes a file layout parameter, but the only
      thing used out of that is its pool number.
      
      Change the function so it takes a pool number rather than the full
      file layout structure.  Only update the ceph_pg if the pool is found
      in the osd map.  Get rid of a few useless lines of code from the
      function while we're at it.
      
      Since the function now very clearly just fills in the ceph_pg
      structure it's provided, rename it ceph_calc_ceph_pg().
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
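      The resulting interface change, roughly (the new signature follows
      the rename described above; the old one is reconstructed here for
      contrast and may differ in detail):

              /* Before: takes the whole file layout, of which only the
               * pool number was actually used. */
              int ceph_calc_object_layout(struct ceph_pg *pg, const char *oid,
                                          struct ceph_file_layout *fl,
                                          struct ceph_osdmap *osdmap);

              /* After: pass the pool number directly; pg is filled in
               * only if the pool exists in the osd map. */
              int ceph_calc_ceph_pg(struct ceph_pg *pg, const char *oid,
                                    struct ceph_osdmap *osdmap,
                                    uint64_t pool);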
  10. 12 March 2013 · 1 commit