1. 07 Jul 2017 (3 commits)
  2. 20 Feb 2017 (4 commits)
  3. 28 Jul 2016 (2 commits)
  4. 31 May 2016 (1 commit)
  5. 26 May 2016 (11 commits)
  6. 20 Apr 2015 (1 commit)
  7. 05 Apr 2014 (8 commits)
  8. 03 Apr 2014 (1 commit)
    •
      libceph: a per-osdc crush scratch buffer · 9d521470
      Committed by Ilya Dryomov
      With the addition of erasure coding support in the future, the scratch
      variable-length array in crush_do_rule_ary() is going to grow to at
      least 200 bytes on average, on top of another 128 bytes consumed by
      rawosd/osd arrays in the call chain.  Replace it with a buffer inside
      struct osdmap and a mutex.  This shouldn't result in any contention,
      because all osd requests were already serialized by request_mutex at
      that point; the only unlocked caller was ceph_ioctl_get_dataloc().
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
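The change described above can be sketched in ordinary user-space C. All names below (osdmap_sketch, crush_scratch, do_crush_locked) are illustrative stand-ins, not the kernel's actual identifiers, and pthread_mutex_t stands in for the kernel mutex:

```c
#include <pthread.h>
#include <string.h>

/* Instead of a variable-length scratch array on the stack of every
 * mapping call, the map carries one preallocated scratch buffer
 * guarded by a mutex.  Sizes and names here are assumptions. */
#define CRUSH_SCRATCH_MAX 512

struct osdmap_sketch {
    pthread_mutex_t crush_scratch_mutex;   /* serializes scratch use */
    int crush_scratch[CRUSH_SCRATCH_MAX];  /* shared work area */
};

/* Run the mapping under the lock; callers that were previously
 * serialized by request_mutex rarely contend here. */
static int do_crush_locked(struct osdmap_sketch *map,
                           const int *in, int in_len,
                           int *out, int out_max)
{
    int n = in_len < out_max ? in_len : out_max;

    pthread_mutex_lock(&map->crush_scratch_mutex);
    /* the real code would run crush_do_rule() via map->crush_scratch;
     * a copy through the buffer stands in for that work */
    memcpy(map->crush_scratch, in, (size_t)n * sizeof(int));
    memcpy(out, map->crush_scratch, (size_t)n * sizeof(int));
    pthread_mutex_unlock(&map->crush_scratch_mutex);
    return n;
}
```

The trade is a small, bounded heap cost per map plus a mutex in place of an unbounded per-call stack allocation.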
  9. 28 Jan 2014 (6 commits)
  10. 02 May 2013 (2 commits)
    •
      libceph: define ceph_decode_pgid() only once · ef4859d6
      Committed by Alex Elder
      There are two basically identical definitions of __decode_pgid()
      in libceph, one in "net/ceph/osdmap.c" and the other in
      "net/ceph/osd_client.c".  Get rid of both, and instead define
      a single inline version in "include/linux/ceph/osdmap.h".
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
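A single shared inline definition of the kind the commit describes might look like the sketch below. The struct layout and the 12-byte little-endian wire format are simplified assumptions for illustration, not Ceph's actual encoding:

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for struct ceph_pg: pool id plus pg seed. */
struct ceph_pg_sketch {
    uint64_t pool;
    uint32_t seed;
};

/* One static inline decoder in a shared header (as the commit does in
 * include/linux/ceph/osdmap.h) lets both osdmap.c and osd_client.c use
 * the same code instead of carrying duplicate copies.  Returns 0 on
 * success, -1 if the buffer is too short. */
static inline int ceph_decode_pgid_sketch(const uint8_t *p, size_t len,
                                          struct ceph_pg_sketch *pgid)
{
    if (len < 12)
        return -1;
    pgid->pool = 0;
    for (int i = 0; i < 8; i++)
        pgid->pool |= (uint64_t)p[i] << (8 * i);
    pgid->seed = 0;
    for (int i = 0; i < 4; i++)
        pgid->seed |= (uint32_t)p[8 + i] << (8 * i);
    return 0;
}
```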
    •
      libceph: rename ceph_calc_object_layout() · 41766f87
      Committed by Alex Elder
      The purpose of ceph_calc_object_layout() is to fill in the pool
      number and seed for a ceph_pg structure provided, based on a given
      osd map and target object id.
      
      Currently that function takes a file layout parameter, but the only
      thing used out of that is its pool number.
      
      Change the function so it takes a pool number rather than the full
      file layout structure.  Only update the ceph_pg if the pool is found
      in the osd map.  Get rid of a few useless lines of code from the
      function while there.
      
      Since the function now very clearly just fills in the ceph_pg
      structure it's provided, rename it ceph_calc_ceph_pg().
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
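The reworked helper can be sketched as follows; all types and names here (pg_sketch, map_sketch, calc_ceph_pg_sketch, the length-based stand-in hash) are simplified assumptions, not the kernel's actual definitions:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct pg_sketch   { uint64_t pool; uint32_t seed; };
struct pool_sketch { uint64_t id; };
struct map_sketch {
    const struct pool_sketch *pools;
    size_t num_pools;
};

/* Takes a pool id instead of a full file layout, and fills in the pg
 * structure only when that pool exists in the map; on a miss the pg
 * is left untouched and an error is returned. */
static int calc_ceph_pg_sketch(struct pg_sketch *pg,
                               const struct map_sketch *map,
                               const char *oid, uint64_t pool)
{
    for (size_t i = 0; i < map->num_pools; i++) {
        if (map->pools[i].id == pool) {
            pg->pool = pool;
            /* the real code hashes oid with the pool's object hash;
             * a length-based stand-in keeps this sketch self-contained */
            pg->seed = (uint32_t)strlen(oid);
            return 0;
        }
    }
    return -1;  /* pool not found */
}
```

Narrowing the parameter from a layout struct to a pool number makes the function's only real dependency explicit at the call site.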
  11. 27 Feb 2013 (1 commit)
    •
      libceph: add support for HASHPSPOOL pool flag · 83ca14fd
      Committed by Sage Weil
      The legacy behavior adds the pgid seed and pool together as the input for
      CRUSH.  That is problematic because each pool's PGs end up mapping to the
      same OSDs: 1.5 == 2.4 == 3.3 == ...
      
      Instead, if the HASHPSPOOL flag is set, we hash the ps and pool together and
      feed that into CRUSH.  This ensures that two adjacent pools will map to
      an independent pseudorandom set of OSDs.
      
      Advertise our support for this via a protocol feature flag.
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
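The two seed computations can be contrasted in a short sketch. Ceph's CRUSH code performs the combined hash with crush_hash32_2() using the rjenkins1 hash; the mix32() below is only a stand-in bit mixer to illustrate the effect, not the real function:

```c
#include <stdint.h>

/* Stand-in 32-bit mixer (assumption; the kernel uses rjenkins1 via
 * crush_hash32_2()).  Any reasonable mixer decorrelates the inputs. */
static uint32_t mix32(uint32_t a, uint32_t b)
{
    uint32_t h = a * 0x9e3779b1u ^ (b + 0x85ebca6bu + (a << 6) + (a >> 2));
    h ^= h >> 16;
    h *= 0x7feb352du;
    h ^= h >> 15;
    return h;
}

/* Legacy: seed and pool are simply added, so pg 1.5, 2.4, 3.3, ...
 * all produce the same CRUSH input and land on the same OSDs. */
static uint32_t pps_legacy(uint32_t ps, uint32_t pool)
{
    return ps + pool;
}

/* HASHPSPOOL: hashing ps and pool together gives adjacent pools
 * independent pseudorandom placements. */
static uint32_t pps_hashpspool(uint32_t ps, uint32_t pool)
{
    return mix32(ps, pool);
}
```

With the legacy sum, pps_legacy(5, 1) and pps_legacy(4, 2) are both 6 (the "1.5 == 2.4" collision from the message); the hashed variant breaks that correlation.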