    Block cache simulator: Add pysim to simulate caches using reinforcement learning. (#5610) · 70c7302f
    Committed by haoyuhuang
    Summary:
    This PR implements cache eviction using reinforcement learning. It includes two implementations:
    1. An implementation of Thompson Sampling for the Bernoulli Bandit [1].
    2. An implementation of LinUCB with disjoint linear models [2].
    
    The idea is that a cache uses multiple eviction policies, e.g., MRU, LRU, and LFU. The cache learns which eviction policy is the best and uses it upon a cache miss.
    Thompson Sampling is contextless and does not include any features.
    LinUCB uses features such as level, block type, caller, and column family id to decide which eviction policy to use.
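    A minimal sketch of the Thompson Sampling idea for the Bernoulli bandit, where each arm is an eviction policy and the reward is 1 for a cache hit and 0 for a miss. This is illustrative only and is not the PR's pysim code; the class and method names are assumptions.
    
    import random
    
    
    class ThompsonSamplingPolicySelector:
        def __init__(self, policies):
            # One Beta(alpha, beta) posterior per eviction policy (arm).
            self.policies = policies
            self.alpha = {p: 1.0 for p in policies}  # prior successes + 1
            self.beta = {p: 1.0 for p in policies}   # prior failures + 1
    
        def select_policy(self):
            # Sample a hit-rate estimate from each posterior and pick the best.
            samples = {p: random.betavariate(self.alpha[p], self.beta[p])
                       for p in self.policies}
            return max(samples, key=samples.get)
    
        def update(self, policy, hit):
            # Reward is 1 on a cache hit, 0 on a miss.
            if hit:
                self.alpha[policy] += 1.0
            else:
                self.beta[policy] += 1.0
    
    
    # Usage: on a cache miss, ask the selector which policy should evict,
    # then report back whether that choice was rewarded.
    selector = ThompsonSamplingPolicySelector(["lru", "mru", "lfu"])
    policy = selector.select_policy()
    selector.update(policy, hit=True)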
    
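    A minimal sketch of LinUCB with disjoint linear models, following [2]. The feature vector is assumed to encode block attributes such as level, block type, caller, and column family id; the exact encoding and names here are illustrative, not the PR's implementation.
    
    import numpy as np
    
    
    class LinUCBPolicySelector:
        def __init__(self, policies, num_features, alpha=1.0):
            self.policies = policies
            self.alpha = alpha
            # Disjoint model per arm: A is d x d, b is a d-vector.
            self.A = {p: np.identity(num_features) for p in policies}
            self.b = {p: np.zeros(num_features) for p in policies}
    
        def select_policy(self, x):
            # Upper confidence bound: theta_a^T x + alpha * sqrt(x^T A_a^-1 x).
            best_policy, best_ucb = None, -np.inf
            for p in self.policies:
                A_inv = np.linalg.inv(self.A[p])
                theta = A_inv.dot(self.b[p])
                ucb = theta.dot(x) + self.alpha * np.sqrt(x.dot(A_inv).dot(x))
                if ucb > best_ucb:
                    best_policy, best_ucb = p, ucb
            return best_policy
    
        def update(self, policy, x, reward):
            # Rank-one update of the chosen arm's model with the observed reward.
            self.A[policy] += np.outer(x, x)
            self.b[policy] += reward * x
    
    
    # Usage: build a feature vector from the block's metadata, pick a policy,
    # evict with it, then feed back the observed reward (1 = hit, 0 = miss).
    selector = LinUCBPolicySelector(["lru", "mru", "lfu"], num_features=4)
    features = np.array([3.0, 1.0, 2.0, 0.0])  # e.g., level, block type, caller, cf id
    policy = selector.select_policy(features)
    selector.update(policy, features, reward=1.0)
    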
    [1] Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. 2018. A Tutorial on Thompson Sampling. Found. Trends Mach. Learn. 11, 1 (July 2018), 1-96. DOI: https://doi.org/10.1561/2200000070
    [2] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW '10). ACM, New York, NY, USA, 661-670. DOI: https://doi.org/10.1145/1772690.1772758
    Pull Request resolved: https://github.com/facebook/rocksdb/pull/5610
    
    Differential Revision: D16435067
    
    Pulled By: HaoyuHuang
    
    fbshipit-source-id: 6549239ae14115c01cb1e70548af9e46d8dc21bb