1. 05 Apr, 2014 (5 commits)
    • crush: add SET_CHOOSELEAF_VARY_R step · d83ed858
      Committed by Ilya Dryomov
      This lets you adjust the vary_r tunable on a per-rule basis.
      
      Reflects ceph.git commit f944ccc20aee60a7d8da7e405ec75ad1cd449fac.
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • crush: add chooseleaf_vary_r tunable · e2b149cc
      Committed by Ilya Dryomov
      The current crush_choose_firstn code will re-use the same 'r' value for
      the recursive call.  That means that if we are hitting a collision or
      rejection for some reason (say, an OSD that is marked out) and need to
      retry, we will keep making the same (bad) choice in that recursive
      selection.
      
      Introduce a tunable that fixes that behavior by incorporating the parent
      'r' value into the recursive starting point, so that a different path
      will be taken in subsequent placement attempts.
      
      Note that this was done from the get-go for the new crush_choose_indep
      algorithm.
      
      This was exposed by a user who was seeing PGs stuck in active+remapped
      after reweight-by-utilization because the up set mapped to a single OSD.
      
      Reflects ceph.git commit a8e6c9fbf88bad056dd05d3eb790e98a5e43451a.
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
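The retry behavior described above can be sketched in C. This is a hypothetical illustration, not the kernel's crush_choose_firstn: pick_item stands in for the CRUSH bucket choose function, and the vary_r flag models whether the parent's 'r' is folded into the recursive starting point.

```c
/* Hypothetical stand-in for the CRUSH bucket choose function:
 * a deterministic hash of (seed, r). */
static unsigned int pick_item(unsigned int seed, unsigned int r)
{
    return (seed * 2654435761u + r * 40503u) % 7u;
}

/* Sketch of the recursive leaf selection. Without vary_r the
 * recursive call always restarts at r = 0, so every parent retry
 * repeats the same (possibly bad) leaf choice; with vary_r the
 * parent's r is mixed in and subsequent attempts take different
 * paths. Names here are illustrative, not the kernel's. */
static unsigned int choose_leaf(unsigned int seed, unsigned int parent_r,
                                int vary_r)
{
    unsigned int r = vary_r ? parent_r : 0;
    return pick_item(seed, r);
}
```

With vary_r off, a parent retry (a larger 'r' after a rejection) still reaches the same leaf; with vary_r on, the leaf choice changes with the parent's 'r'.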
    • crush: allow crush rules to set (re)tries counts to 0 · 6ed1002f
      Committed by Ilya Dryomov
      These two fields are misnomers; they are *retry* counts.
      
      Reflects ceph.git commit f17caba8ae0cad7b6f8f35e53e5f73b444696835.
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • crush: fix off-by-one errors in total_tries refactor · 48a163db
      Committed by Ilya Dryomov
      Back in 27f4d1f6bc32c2ed7b2c5080cbd58b14df622607 we refactored the CRUSH
      code to allow adjustment of the retry counts on a per-pool basis.  That
      commit had an off-by-one bug: the previous "tries" counter was a *retry*
      count, not a *try* count, but the new code was passing in 1 meaning
      there should be no retries.
      
      Fix the ftotal vs tries comparison to use < instead of <= to fix the
      problem.  Note that the original code used <= here, which means the
      global "choose_total_tries" tunable is actually counting retries.
      Compensate for that by adding 1 in crush_do_rule when we pull the tunable
      into the local variable.
      
      This was noticed looking at output from a user provided osdmap.
      Unfortunately the map doesn't illustrate the change in mapping behavior
      and I haven't managed to construct one yet that does.  Inspection of the
      crush debug output now aligns with prior versions, though.
      
      Reflects ceph.git commit 795704fd615f0b008dcc81aa088a859b2d075138.
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
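The off-by-one can be reduced to a toy loop. This is illustrative only, not the actual crush_choose_firstn: it just shows that with <= the bound behaves as a retry count while with < it is a true try count, and why the tunable needs a +1 when loaded.

```c
/* Old comparison: ftotal <= tries. A caller passing tries = 1,
 * expecting "one try, no retries", actually gets two attempts,
 * so "tries" is really a retry count. */
static int attempts_le(int tries)
{
    int ftotal = 0, count = 0;
    while (ftotal <= tries) {
        count++;
        ftotal++;   /* every attempt fails in this toy model */
    }
    return count;
}

/* Fixed comparison: ftotal < tries makes "tries" a true try count.
 * Because the global choose_total_tries tunable historically counted
 * retries, crush_do_rule compensates by adding 1 when it pulls the
 * tunable into its local variable, preserving the old behavior. */
static int attempts_lt(int tries)
{
    int ftotal = 0, count = 0;
    while (ftotal < tries) {
        count++;
        ftotal++;
    }
    return count;
}
```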
    • libceph: fix oops in ceph_msg_data_{pages,pagelist}_advance() · d90deda6
      Committed by Yan, Zheng
      When there is no more data, ceph_msg_data_{pages,pagelist}_advance()
      should not move on to the next page.
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
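A minimal sketch of the kind of guard this fix adds, using hypothetical names rather than libceph's ceph_msg_data_cursor: once the last bytes are consumed, the cursor must not step to a next page that does not exist.

```c
#include <stdbool.h>
#include <stddef.h>

#define TOY_PAGE_SIZE 4096u   /* stand-in for PAGE_SIZE */

/* Hypothetical cursor over an array of pages. */
struct page_cursor {
    size_t resid;        /* bytes of data not yet consumed */
    size_t page_offset;  /* offset within the current page */
    int page_index;      /* index of the current page */
};

/* Consume 'bytes' and return true if we moved on to a new page. */
static bool page_cursor_advance(struct page_cursor *c, size_t bytes)
{
    c->resid -= bytes;
    c->page_offset = (c->page_offset + bytes) % TOY_PAGE_SIZE;
    if (!bytes || c->page_offset)
        return false;   /* still within the same page */
    if (!c->resid)
        return false;   /* the fix: no data left, don't step past the last page */
    c->page_index++;
    return true;
}
```

Without the resid check, consuming exactly the last page-sized chunk would increment page_index past the final page, and a later dereference of the nonexistent page would oops.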
  2. 03 Apr, 2014 (3 commits)
  3. 08 Feb, 2014 (3 commits)
  4. 04 Feb, 2014 (1 commit)
  5. 28 Jan, 2014 (8 commits)
  6. 26 Jan, 2014 (2 commits)
  7. 14 Jan, 2014 (3 commits)
  8. 01 Jan, 2014 (15 commits)