1. 29 Sep 2018, 5 commits
    • blk-iolatency: use a percentile approache for ssd's · 1fa2840e
      Committed by Josef Bacik
      We use an average latency approach to determine whether we're missing
      our latency target.  This works well for rotational storage, where
      latencies are generally consistent, but SSDs and other low-latency
      devices show much spikier behavior, which means we often won't
      throttle misbehaving groups because a lot of IO completes drastically
      faster than our latency target.  Instead, keep track of how many IOs
      miss our target and how many IOs complete in our time window.  If the
      p(90) latency is above our target, then we know we need to throttle
      (a simplified sketch of this check follows this entry).  With this
      change in place we see the same throttling behavior with our test
      case on SSDs as we see on rotational drives.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
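      The check can be pictured with a minimal user-space C sketch; the
      struct and function names below are illustrative only and do not
      match the actual blk-iolatency code.  The idea is that the p(90)
      latency exceeds the target exactly when more than 10% of the IOs in
      the window missed it.

        /* Illustrative sketch only; names are hypothetical. */
        #include <stdbool.h>

        struct lat_window {
                unsigned long nr_samples;  /* IOs completed in this window */
                unsigned long nr_missed;   /* IOs that exceeded the latency target */
        };

        /*
         * Throttle when more than 10% of the IOs in the window missed the
         * target, i.e. when the p(90) latency is above the target.
         */
        static bool should_throttle(const struct lat_window *w)
        {
                if (!w->nr_samples)
                        return false;
                return w->nr_missed * 10 > w->nr_samples;
        }

      With an average, a handful of very fast completions can mask a few
      very slow ones; a percentile-style count of misses cannot be masked
      that way.
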
    • blk-iolatency: deal with small samples · 22ed8a93
      Committed by Josef Bacik
      There is logic to keep cgroups that haven't done a lot of IO in the
      most recent scale window from being punished for over-active,
      higher-priority groups.  However, for devices like SSDs, where the
      windows are pretty short, we end up with small numbers of samples,
      so 5% of the samples comes out to 0 when there aren't enough of
      them.  Make the floor 1 sample to keep us from improperly bailing
      out of scaling down (see the sketch after this entry).
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
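      A rough sketch of the floor, using a helper with a hypothetical name
      rather than the real kernel code: with integer arithmetic, 5% of a
      small window truncates to 0, so the minimum is clamped to 1 sample.

        /* Illustrative sketch; not the actual blk-iolatency code. */
        static unsigned long min_samples_required(unsigned long window_samples)
        {
                unsigned long thresh = window_samples * 5 / 100; /* 5% of the window */

                /* With e.g. 12 samples, 12 * 5 / 100 == 0; clamp to 1. */
                return thresh ? thresh : 1;
        }
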
    • blk-iolatency: deal with nr_requests == 1 · 9f60511a
      Committed by Josef Bacik
      Hitting the case where blk_queue_depth() returned 1 uncovered the
      fact that iolatency doesn't actually handle this case properly; it
      simply doesn't scale anybody down.  In this case we should go
      straight to applying the time delay, which we weren't doing.  Since
      we already floor the depth at 1 request, the if statement is not
      needed; removing it lets us set our depth to 1, which in turn lets
      us apply the delay when needed (see the sketch after this entry).
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
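      Conceptually, the scale-down path looks something like the sketch
      below; the struct and field names are invented for illustration and
      differ from the real code.  The point is that once the depth has hit
      its floor of 1, the only remaining way to throttle is to delay bios
      in time rather than returning early.

        /* Illustrative sketch; names are hypothetical. */
        #include <stdbool.h>

        struct iolat_state {
                unsigned int queue_depth;  /* floored at 1 */
                bool use_delay;            /* throttle by delaying bios instead */
        };

        static void scale_down(struct iolat_state *s)
        {
                if (s->queue_depth > 1) {
                        /* Halve the allowed depth; since depth > 1, this stays >= 1. */
                        s->queue_depth /= 2;
                        return;
                }
                /* Already at the floor of 1: fall through to a time-based delay. */
                s->use_delay = true;
        }
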
    • blk-iolatency: use q->nr_requests directly · ff4cee08
      Committed by Josef Bacik
      We were using blk_queue_depth() assuming that it would return
      nr_requests, but we hit a case in production on drives that had to
      have NCQ turned off in order to behave correctly, which resulted in
      a queue depth of 1 even though nr_requests was much larger.
      iolatency really only cares about the requests we are allowed to
      queue up, as any IO that gets onto the request list is going to be
      serviced fairly soon, so we want to throttle before the bio gets
      onto the request list.  To make iolatency work as expected, simply
      use q->nr_requests instead of blk_queue_depth(), as that is what we
      actually care about (see the sketch after this entry).
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
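      The distinction can be shown with a small user-space model; the
      struct below only mimics the two relevant fields of the kernel's
      request_queue, and the helper names are invented.  blk_queue_depth()
      returns the device queue depth when one is set (which drops to 1
      with NCQ off), falling back to nr_requests otherwise, while
      iolatency wants the number of requests the block layer will queue.

        /* Simplified model; only illustrates the two fields that matter here. */
        struct queue_model {
                unsigned int queue_depth;  /* device/NCQ depth; 1 with NCQ off */
                unsigned int nr_requests;  /* requests the block layer may queue */
        };

        /* Behaves like blk_queue_depth(): device depth wins when it is set. */
        static unsigned int model_queue_depth(const struct queue_model *q)
        {
                return q->queue_depth ? q->queue_depth : q->nr_requests;
        }

        /* What iolatency actually wants: how many requests it may queue up. */
        static unsigned int model_iolat_depth(const struct queue_model *q)
        {
                return q->nr_requests;
        }

      With NCQ off, the first helper collapses to 1, which is not what
      iolatency should size its throttling against.
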
    • kyber: fix integer overflow of latency targets on 32-bit · f0a0cddd
      Committed by Omar Sandoval
      NSEC_PER_SEC has type long, so 5 * NSEC_PER_SEC is calculated as a long.
      However, 5 seconds is 5,000,000,000 nanoseconds, which overflows a
      32-bit long. Make sure all of the targets are calculated as 64-bit
      values (a small, self-contained demonstration of the overflow
      follows this entry).
      
      Fixes: 6e25cb01 ("kyber: implement improved heuristics")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
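      A minimal user-space demonstration of this overflow class (only
      NSEC_PER_SEC matches a real kernel constant; the rest is
      illustrative): on a 32-bit target, long is 32 bits, so
      5 * NSEC_PER_SEC overflows, while forcing the arithmetic to 64 bits
      yields the intended 5,000,000,000 ns.

        #include <stdio.h>

        #define NSEC_PER_SEC 1000000000L        /* type long, as in the kernel */

        int main(void)
        {
                /* On a 32-bit long, this multiplication overflows. */
                long bad = 5 * NSEC_PER_SEC;

                /* Promoting to a 64-bit type first preserves the value. */
                long long good = 5LL * NSEC_PER_SEC;

                printf("long result:      %ld\n", bad);
                printf("long long result: %lld\n", good);  /* 5000000000 */
                return 0;
        }

      This mirrors the fix's approach of computing the latency targets as
      64-bit values.
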
  2. 28 Sep 2018, 10 commits
  3. 27 Sep 2018, 8 commits
  4. 26 Sep 2018, 5 commits
  5. 25 Sep 2018, 9 commits
  6. 22 Sep 2018, 3 commits