    Simplify GenericRateLimiter algorithm (#8602) · 82b81dc8
    Authored by Andrew Kryczka
    Summary:
    `GenericRateLimiter` slow path handles requests that cannot be satisfied
    immediately.  Such requests enter a queue, and their thread stays in `Request()`
    until they are granted or the rate limiter is stopped.  These threads are
    responsible for unblocking themselves.  The work to do so is split into two main
    duties.
    
    (1) Waiting for the next refill time.
    (2) Refilling the bytes and granting requests.
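
    For orientation, here is a hedged usage sketch from the caller's side.
    `NewGenericRateLimiter()` and `RateLimiter::Request()` are the real public
    entry points; the parameter values below are arbitrary, chosen only to make
    the slow path likely.

    ```
    #include <memory>

    #include "rocksdb/env.h"
    #include "rocksdb/rate_limiter.h"

    int main() {
      // 16 MiB/s budget refilled every 1 ms, i.e. roughly 16 KiB per period.
      std::unique_ptr<rocksdb::RateLimiter> limiter(
          rocksdb::NewGenericRateLimiter(16 * 1024 * 1024 /* rate_bytes_per_sec */,
                                         1000 /* refill_period_us */,
                                         10 /* fairness */));
      for (int i = 0; i < 4; ++i) {
        // Each call that cannot be satisfied from the bytes remaining in the
        // current refill period enters the slow path: the thread enqueues itself
        // and blocks inside Request() until granted.
        limiter->Request(8 * 1024 /* bytes */, rocksdb::Env::IO_HIGH);
      }
      return 0;
    }
    ```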
    
    Prior to this PR, the slow path logic involved a leader election algorithm to
    pick one thread to perform (1) followed by (2).  It elected the thread whose
    request was at the front of the highest-priority non-empty queue, since that
    request was the most likely to be granted.  This algorithm was efficient in
    terms of reducing intermediate wakeups, where a thread wakes up only to resume
    waiting after finding its request was not granted.  However, the conceptual
    complexity of this algorithm was too high.  It took me a long time to draw a
    timeline just to understand how it works for one edge case, and there were
    many such cases.
    
    This PR drops the leader election to reduce conceptual complexity.  Now, the
    two duties can be performed by whichever thread acquires the lock first.  The
    risk of this change is an increased number of intermediate wakeups; however,
    we took steps to mitigate that.
    
    - The `wait_until_refill_pending_` flag ensures only one thread performs (1).
    This prevents a thundering herd at the next refill time. The remaining
    threads wait on their condition variables with an unbounded duration, so we
    must remember to notify them to ensure forward progress. (See the sketch
    after this list.)
    - (1) is typically done by a thread at the front of a queue. This is trivial
    when the queues are initially empty, as the first request to arrive must be
    the only entry in its queue. When the queues are initially non-empty, we
    achieve this by having (2) notify a thread at the front of a queue (preferring
    higher priority) to perform the next duty.
    - We do not require any additional wakeup for (2). Typically it will just be
    done by the thread that finished (1).
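
    To make the coordination concrete, the following is a minimal, hypothetical
    C++ sketch of the pattern these bullets describe.  Only
    `wait_until_refill_pending_` is taken from the PR; every other name is
    invented, a single queue stands in for the per-priority queues, and stopping
    and dynamic rate changes are omitted.

    ```
    #include <chrono>
    #include <condition_variable>
    #include <cstdint>
    #include <deque>
    #include <mutex>

    class SketchRateLimiter {  // hypothetical; not the real GenericRateLimiter
     public:
      void Request(int64_t bytes) {
        std::unique_lock<std::mutex> lock(mutex_);
        if (bytes <= available_bytes_) {
          available_bytes_ -= bytes;  // fast path: granted immediately
          return;
        }
        Req r{bytes};
        queue_.push_back(&r);
        while (!r.granted) {
          if (!wait_until_refill_pending_) {
            // Duty (1): this thread alone waits for the next refill time; the
            // flag keeps the remaining threads asleep until notified.
            wait_until_refill_pending_ = true;
            while (!r.granted &&
                   std::chrono::steady_clock::now() < next_refill_time_) {
              r.cv.wait_until(lock, next_refill_time_);
            }
            wait_until_refill_pending_ = false;
            // Duty (2), typically performed by whoever just finished (1).
            RefillAndGrant();
          } else {
            // Unbounded wait: forward progress depends on being notified,
            // either by a grant or by RefillAndGrant() choosing this thread
            // to perform the next duty.
            r.cv.wait(lock);
          }
        }
      }

     private:
      struct Req {
        int64_t bytes;
        bool granted = false;
        std::condition_variable cv;
      };

      void RefillAndGrant() {
        available_bytes_ += refill_bytes_per_period_;
        next_refill_time_ += refill_period_;
        // Grant from the front while the budget lasts.
        while (!queue_.empty() && queue_.front()->bytes <= available_bytes_) {
          Req* granted = queue_.front();
          queue_.pop_front();
          available_bytes_ -= granted->bytes;
          granted->granted = true;
          granted->cv.notify_one();
        }
        if (!queue_.empty()) {
          // Hand the next duty to the thread at the front of the queue (the
          // real code prefers the highest-priority non-empty queue).
          queue_.front()->cv.notify_one();
        }
      }

      std::mutex mutex_;
      std::deque<Req*> queue_;
      bool wait_until_refill_pending_ = false;
      int64_t available_bytes_ = 0;
      int64_t refill_bytes_per_period_ = 16 * 1024;
      std::chrono::steady_clock::duration refill_period_ =
          std::chrono::milliseconds(1);
      std::chrono::steady_clock::time_point next_refill_time_ =
          std::chrono::steady_clock::now() + refill_period_;
    };
    ```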
    
    Combined, the second and third bullet points above imply that the
    refill/granting will typically be done by the thread of a request at the front
    of its queue.  This is important because one wakeup is saved when a granted
    request happens to belong to an already running thread.
    
    Note, however, that there are a few cases that still lead to intermediate
    wakeups.  The first two are existing issues that also applied to the old
    algorithm, while the third (including both subpoints) is new.
    
    - No request may be granted (only possible when the rate limit dynamically
    decreases).
    - Requests from a different queue may be granted.
    - (2) may be run by a non-front request thread, causing it to not be granted
    even if some requests in that same queue are granted. This can happen for a
    couple of (unlikely) reasons.
      - A new request may sneak in and grab the lock at the refill time, before
    the thread finishing (1) can wake up and grab it.
      - A new request may sneak in and grab the lock and execute (1) before (2)'s
    chosen candidate can wake up and grab the lock. Then that non-front request
    thread performing (1) can carry over to perform (2).
    
    Pull Request resolved: https://github.com/facebook/rocksdb/pull/8602
    
    Test Plan:
    - Use existing tests. The edge cases listed in the comment are all
    performance-related; I could not think of any related to correctness. The
    logic looks the same whether a thread wakes up/finishes its work
    early/on-time/late, or whether the thread is chosen vs. "steals" the work.
    - Verified write throughput and CPU overhead are basically the same with and
      without this change, even in a rate-limiter-heavy workload:
    
    Test command:
    ```
    $ rm -rf /dev/shm/dbbench/ && TEST_TMPDIR=/dev/shm /usr/bin/time ./db_bench -benchmarks=fillrandom -num_multi_db=64 -num_low_pri_threads=64 -num_high_pri_threads=64 -write_buffer_size=262144 -target_file_size_base=262144 -max_bytes_for_level_base=1048576 -rate_limiter_bytes_per_sec=16777216 -key_size=24 -value_size=1000 -num=10000 -compression_type=none -rate_limiter_refill_period_us=1000
    ```
    
    Results before this PR:
    
    ```
    fillrandom   :     108.463 micros/op 9219 ops/sec;    9.0 MB/s
    7.40user 8.84system 1:26.20elapsed 18%CPU (0avgtext+0avgdata 256140maxresident)k
    ```
    
    Results after this PR:
    
    ```
    fillrandom   :     108.108 micros/op 9250 ops/sec;    9.0 MB/s
    7.45user 8.23system 1:26.68elapsed 18%CPU (0avgtext+0avgdata 255688maxresident)k
    ```
    
    Reviewed By: hx235
    
    Differential Revision: D30048013
    
    Pulled By: ajkr
    
    fbshipit-source-id: 6741bba9d9dfbccab359806d725105817fef818b