1. 26 Aug, 2014 · 1 commit
  2. 07 Aug, 2014 · 1 commit
    • Add DB property "rocksdb.estimate-table-readers-mem" · 1242bfca
      Authored by sdong
      Summary:
      Add a DB property "rocksdb.estimate-table-readers-mem" that returns the estimated memory usage of all loaded table readers, excluding memory allocated from the block cache.
      
      Refactor the property code so that a property can be read from a version without acquiring the DB mutex.
      
      Test Plan: Add several checks of this new property to existing tests for various cases.
      
      Reviewers: yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: xjin, igor, leveldb
      
      Differential Revision: https://reviews.facebook.net/D20733
  3. 31 Jul, 2014 · 1 commit
    • remove malloc when creating data and index iterators in Get · 8f09d53f
      Authored by Feng Zhu
      Summary:
        Define Block::Iter as an independent class to be used by block_based_table_reader.
        When creating data and index iterators, update an existing iterator rather than allocating a new one,
        so malloc and free calls are reduced.
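
      The reuse pattern described above can be sketched as follows. This is a simplified standalone stand-in, not RocksDB's actual Block::Iter; the class name and members are illustrative:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// A block iterator that can be re-initialized in place, so one
// stack-allocated object serves the index block and then the data
// block of the same Get(), with no per-block malloc/free.
class BlockIter {
 public:
  void Initialize(const std::vector<std::string>* entries) {
    entries_ = entries;  // point at the new block's contents
    pos_ = 0;            // reset the cursor; no heap allocation
  }
  bool Valid() const { return entries_ != nullptr && pos_ < entries_->size(); }
  void Next() { ++pos_; }
  const std::string& value() const { return (*entries_)[pos_]; }

 private:
  const std::vector<std::string>* entries_ = nullptr;
  std::size_t pos_ = 0;
};
```

      A caller holds one BlockIter and calls Initialize() once for the index block and once for the data block, instead of new-ing a fresh iterator for each.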
      
      Benchmark:
      Base: commit 76286ee6
      Command:
      --db=/dev/shm/rocksdb --num_levels=6 --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 --write_buffer_size=134217728 --max_write_buffer_number=2 --target_file_size_base=33554432 --max_bytes_for_level_base=1073741824 --verify_checksum=false --max_background_compactions=4 --use_plain_table=0 --memtablerep=prefix_hash --open_files=-1 --mmap_read=1 --mmap_write=0 --bloom_bits=10 --bloom_locality=1 --memtable_bloom_bits=500000 --compression_type=lz4 --num=2621440 --use_hash_search=1 --block_size=1024 --block_restart_interval=1 --use_existing_db=1 --threads=1 --benchmarks=readrandom --disable_auto_compactions=1
      
      malloc: 3.30% -> 1.42%
      free: 3.59% -> 1.61%
      
      Test Plan:
        make all check
        run db_stress
        valgrind ./db_test ./table_test
      
      Reviewers: ljin, yhchiang, dhruba, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D20655
  4. 19 Jun, 2014 · 1 commit
    • [RocksDB] Reduce memory footprint of the block-based table hash index. · 0f0076ed
      Authored by Haobo Xu
      Summary:
      Currently, the in-memory hash index of the block-based table uses a precise hash map to track the prefix-to-block-range mapping. In some use cases, especially when the prefix itself is big, the memory overhead becomes a problem. This diff introduces a fixed hash bucket array that does not store the prefix and allows prefix collisions, similar to the plain-table hash index, in order to reduce memory consumption.
      Just a quick draft, still testing and refining.
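
      The fixed-bucket idea can be sketched as below. This is a minimal standalone illustration, not RocksDB's actual index structure; the names are hypothetical:

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Fixed-size bucket array: a prefix is hashed into a bucket and the
// prefix itself is NOT stored, so two prefixes may collide and share a
// bucket. A collision only costs a wasted candidate-block lookup; it
// never loses data, because the block itself is still searched.
// Memory is num_buckets * sizeof(int) regardless of prefix length.
struct FixedBucketHashIndex {
  explicit FixedBucketHashIndex(std::size_t num_buckets)
      : buckets_(num_buckets, -1) {}  // -1 means "no block recorded"

  void Add(const std::string& prefix, int block_id) {
    buckets_[std::hash<std::string>{}(prefix) % buckets_.size()] = block_id;
  }

  // Returns a candidate block id, or -1. May be a false positive
  // when two prefixes hash to the same bucket.
  int Lookup(const std::string& prefix) const {
    return buckets_[std::hash<std::string>{}(prefix) % buckets_.size()];
  }

  std::vector<int> buckets_;
};
```

      Compared with a precise hash map keyed by the full prefix, the bucket array trades occasional extra block reads for a memory footprint independent of prefix size.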
      
      Test Plan: unit test and shadow testing
      
      Reviewers: dhruba, kailiu, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D19047
  5. 11 Apr, 2014 · 1 commit
  6. 27 Feb, 2014 · 1 commit
    • Fix inconsistent code format · 444cafc2
      Authored by kailiu
      Summary:
      Found some functions that follow camel style where they should not. When naming functions, we have two styles:
      
      Functions that trivially expose internal data in read-only mode: `all_lower_case()`
      Regular functions: `CapitalizeFirstLetter()`
      
      I renamed these functions accordingly.
      
      Test Plan: make -j32
      
      Reviewers: haobo, sdong, dhruba, igor
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D16383
  7. 02 Nov, 2013 · 1 commit
    • Implement a compressed block cache. · b4ad5e89
      Authored by Dhruba Borthakur
      Summary:
      RocksDB can now support an uncompressed block cache, a compressed
      block cache, or both. Lookups first look for a block in the
      uncompressed cache; only if it is not found there is it looked up
      in the compressed cache. If it is found in the compressed cache,
      it is uncompressed and inserted into the uncompressed cache.
      
      It is possible that the same block resides in the compressed cache
      as well as the uncompressed cache at the same time. Both caches
      have their own individual LRU policy.
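
      The two-tier lookup can be sketched as follows. This is a minimal standalone sketch, assuming plain maps in place of real LRU caches and a trivial stand-in for compression; none of the names are RocksDB's:

```cpp
#include <string>
#include <unordered_map>

// Trivial stand-in transforms so the sketch stays self-contained;
// RocksDB would use a real compressor such as Snappy or LZ4.
static std::string Compress(const std::string& s)   { return "z:" + s; }
static std::string Decompress(const std::string& s) { return s.substr(2); }

struct TwoTierBlockCache {
  // Two independent caches; the real ones each have their own LRU policy.
  std::unordered_map<std::string, std::string> uncompressed;
  std::unordered_map<std::string, std::string> compressed;

  // Lookup order from the commit: uncompressed tier first, then the
  // compressed tier. A compressed hit is decompressed and promoted,
  // so the same block can live in both tiers at once.
  bool Get(const std::string& key, std::string* value) {
    auto u = uncompressed.find(key);
    if (u != uncompressed.end()) { *value = u->second; return true; }
    auto c = compressed.find(key);
    if (c == compressed.end()) return false;  // full miss: read from disk
    *value = Decompress(c->second);
    uncompressed[key] = *value;               // promote to the fast tier
    return true;
  }
};
```

      The promotion step deliberately does not evict the compressed copy, matching the note above that both caches manage their contents independently.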
      
      Test Plan: Unit test case attached.
      
      Reviewers: kailiu, sdong, haobo, leveldb
      
      Reviewed By: haobo
      
      CC: xjin, haobo
      
      Differential Revision: https://reviews.facebook.net/D12675
  8. 17 Oct, 2013 · 1 commit
  9. 06 Oct, 2013 · 1 commit
  10. 05 Oct, 2013 · 1 commit
  11. 24 Aug, 2013 · 1 commit
  12. 30 Apr, 2013 · 1 commit
  13. 17 Apr, 2012 · 1 commit
    • Added bloom filter support. · 85584d49
      Authored by Sanjay Ghemawat
      In particular, we add a new FilterPolicy class.  An instance
      of this class can be supplied in Options when opening a
      database.  If supplied, the instance is used to generate
      summaries of keys (e.g., a bloom filter) which are placed in
      sstables.  These summaries are consulted by DB::Get() so we
      can avoid reading sstable blocks that are guaranteed to not
      contain the key we are looking for.
      
      This change provides one implementation of FilterPolicy
      based on bloom filters.
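
      The contract the FilterPolicy relies on can be sketched with a minimal bloom filter. This is an illustrative toy, not the library's implementation; sizes and the probe scheme are arbitrary choices:

```cpp
#include <bitset>
#include <cstddef>
#include <functional>
#include <string>

// Minimal bloom filter illustrating the summary's guarantee:
// KeyMayMatch() never returns false for a key that was added
// (no false negatives), but may return true for absent keys.
// DB::Get() can therefore safely skip any block whose filter says no.
class TinyBloom {
 public:
  void Add(const std::string& key) {
    std::size_t h = std::hash<std::string>{}(key);
    for (int i = 0; i < kProbes; ++i) {
      bits_.set((h + i * 0x9e3779b9u) % kBits);  // set k probe bits
    }
  }
  bool KeyMayMatch(const std::string& key) const {
    std::size_t h = std::hash<std::string>{}(key);
    for (int i = 0; i < kProbes; ++i) {
      if (!bits_.test((h + i * 0x9e3779b9u) % kBits)) return false;
    }
    return true;  // possibly a false positive
  }

 private:
  static constexpr int kBits = 1024;
  static constexpr int kProbes = 4;
  std::bitset<kBits> bits_;
};
```

      Because a "no" answer is definitive, the only cost of a filter is the occasional false positive, which merely causes one unnecessary block read.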
      
      Other changes:
      - Updated version number to 1.4.
      - Some build tweaks.
      - C binding for CompactRange.
      - A few more benchmarks: deleteseq, deleterandom, readmissing, seekrandom.
      - Minor .gitignore update.
  14. 16 Mar, 2012 · 1 commit
  15. 01 Nov, 2011 · 1 commit
    • A number of fixes: · 36a5f8ed
      Authored by Hans Wennborg
      - Replace raw slice comparison with a call to user comparator.
        Added test for custom comparators.
      
      - Fix end of namespace comments.
      
      - Fixed bug in picking inputs for a level-0 compaction.
      
        When finding overlapping files, the covered range may expand
        as files are added to the input set.  We now correctly expand
        the range when this happens instead of continuing to use the
        old range.  For example, suppose L0 contains files with the
        following ranges:
      
            F1: a .. d
            F2:    c .. g
            F3:       f .. j
      
        and the initial compaction target is F3.  We used to search
        for range f..j which yielded {F2,F3}.  However we now expand
        the range as soon as another file is added.  In this case,
        when F2 is added, we expand the range to c..j and restart the
        search.  That picks up file F1 as well.
      
        This change fixes a bug related to deleted keys showing up
        incorrectly after a compaction as described in Issue 44.
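
        The expand-and-restart fix can be sketched as a fixed-point loop. A standalone sketch with hypothetical names, using strings for keys; the real code operates on file metadata and an internal key comparator:

```cpp
#include <string>
#include <vector>

struct FileRange {
  std::string smallest, largest;  // key range covered by one L0 file
};

// Collect all level-0 files overlapping [begin, end]. Whenever a newly
// added file widens the range, expand it and restart the scan, so the
// example above proceeds F3 -> {F2,F3} -> range c..j -> {F1,F2,F3}.
std::vector<int> ExpandInputs(const std::vector<FileRange>& files,
                              std::string begin, std::string end) {
  std::vector<int> picked;
  bool changed = true;
  while (changed) {
    changed = false;
    picked.clear();
    for (int i = 0; i < static_cast<int>(files.size()); ++i) {
      if (files[i].largest < begin || end < files[i].smallest) continue;
      picked.push_back(i);  // file overlaps the current range
      if (files[i].smallest < begin) { begin = files[i].smallest; changed = true; }
      if (end < files[i].largest)    { end = files[i].largest;    changed = true; }
    }
  }
  return picked;
}
```

        The loop terminates because the range only ever grows and is bounded by the union of all file ranges; without the restart, a file like F1 that only overlaps the expanded range would be silently dropped from the compaction.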
      
      (Sync with upstream @25072954)
  16. 20 Apr, 2011 · 2 commits
  17. 19 Apr, 2011 · 1 commit
  18. 13 Apr, 2011 · 1 commit
  19. 31 Mar, 2011 · 1 commit
  20. 19 Mar, 2011 · 1 commit