1. 17 Oct 2013 (1 commit)
    • Enable background flush thread by default and fix issues related to it · 073cbfc8
      Authored by Siying Dong
      Summary:
      Enable the background flush thread in this patch and fix unit tests:
      (1) After a background flush, schedule a background compaction if the condition is satisfied;
      (2) Fix a bug where, if universal compaction is enabled and the number of levels is set to 0, compaction is not automatically triggered;
      (3) Fix unit tests to wait for compaction to finish, instead of flush, before checking the compaction results.
      
      Test Plan: pass all unit tests
      
      Reviewers: haobo, xjin, dhruba
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13461
  2. 15 Oct 2013 (4 commits)
    • Fix rocksdb->leveldb BytewiseComparator and inverted order of error in db/version_set.cc · da2fd001
      Authored by Mayank Agarwal
      Summary:
      This is needed so that existing dbs can still be opened, and also because BytewiseComparator has not changed since leveldb.
      The inverted order in the error message caused confusion previously.
      
      Test Plan: make; open existing db
      
      Reviewers: leveldb, dhruba
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D13449
    • Features in Transaction log iterator · fe371396
      Authored by Mayank Agarwal
      Summary:
      * Logstore requests a valid change: return an empty iterator, not an error, when there are no log files.
      * Changed the code to return the write batch containing the sequence number requested from GetUpdatesSince even if it lies in the middle of the batch. Earlier we used to return the next write batch. This also allows me to guarantee that no files played by the iterator are redundant; the starting log file has a sequence number >= the sequence number requested from GetUpdatesSince.
      * Cleaned up redundant logic in Iterator::Next and made a new function SeekToStartSequence for greater readability and maintainability.
      * Modified a test in db_test accordingly.
      Please check the logic carefully and suggest improvements. I have a separate patch out for more improvements, like restricting the reader to read only up to written sequences.
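      A minimal sketch of the sequence-selection rule described above (BatchInfo and FirstBatchAtOrAfter are illustrative names, not the actual RocksDB code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Each write batch in a log covers sequence numbers
// [start_seq, start_seq + count - 1].
struct BatchInfo {
  uint64_t start_seq;
  uint64_t count;
};

// Return the index of the first batch whose range contains or follows
// `requested`, so the batch holding the requested sequence is returned
// even when the sequence lies in the middle of that batch.
inline size_t FirstBatchAtOrAfter(const std::vector<BatchInfo>& batches,
                                  uint64_t requested) {
  size_t i = 0;
  while (i < batches.size() &&
         batches[i].start_seq + batches[i].count - 1 < requested) {
    ++i;  // this batch ends before the requested sequence; skip it
  }
  return i;  // == batches.size() if no batch reaches `requested`
}
```

      With this rule, a starting log file always has at least one sequence number >= the one requested, which is the redundancy guarantee the summary mentions.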
      
      Test Plan:
      * transaction log iterator tests in db_test,
      * db_repl_stress.
      * rocks_log_iterator_test in fbcode/wormhole/rocksdb/test - 2 tests that have survived on hacks until now can be simplified
      * testing on the shadow setup for sigma with replication
      
      Reviewers: dhruba, haobo, kailiu, sdong
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13437
    • Add statistics to sst file · 86ef6c3f
      Authored by Kai Liu
      Summary:
      So far we only have key/value pairs as well as bloom filter stored in the
      sst file.  It will be great if we are able to store more metadata about
      this table itself, for example, the entry size, bloom filter name, etc.
      
      This diff is the first step of this effort. It allows the table to keep
      the basic statistics mentioned in http://fburl.com/14995441, as well as
      allowing user-collected stats to be written to the stats block.
      
      After this diff, we will figure out the interface for letting users collect the statistics they are interested in.
      
      Test Plan:
      1. Added several unit tests.
      2. Ran `make check` to ensure it doesn't break other tests.
      
      Reviewers: dhruba, haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13419
    • Change Function names from Compaction->Flush When they really mean Flush · 88f2f890
      Authored by Siying Dong
      Summary: While debugging the unit test failures that appear when the background flush thread is enabled, I felt the function names could be made clearer for people to understand. Also, once the names are fixed, some tests' bugs become obvious in many places (and some of those tests are failing). This patch cleans this up for future maintenance.
      
      Test Plan: Run test suites.
      
      Reviewers: haobo, dhruba, xjin
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13431
  3. 12 Oct 2013 (1 commit)
    • LRUCache to try to clean entries not referenced first. · f8509653
      Authored by sdong
      Summary:
      With this patch, when LRUCache.Insert() is called and the cache is full, it will first try to free up entries whose reference counter is 1 (they would become 0 after being removed from the cache). We do it in two passes: in the first pass, we only try to release those unreferenced entries. If we cannot free enough space after traversing the first remove_scan_cnt_ entries, we start from the beginning again and also remove entries that are still being used.
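      The two-pass idea can be sketched as follows (a simplified model: the real LRUCache also maintains a hash table, and only the first remove_scan_cnt_ entries are scanned in pass one; names here are illustrative):

```cpp
#include <cstddef>
#include <list>

// Simplified cache entry: refs == 1 means only the cache itself holds a
// reference, so the entry can be freed safely.
struct Entry {
  size_t charge;
  int refs;
};

// Free entries from the LRU end until `needed` bytes are released.
// Pass one skips entries still referenced by callers; pass two evicts
// unconditionally, mirroring the patch's fallback behavior.
inline size_t EvictForSpace(std::list<Entry>& lru, size_t needed) {
  size_t freed = 0;
  for (int pass = 0; pass < 2 && freed < needed; ++pass) {
    for (auto it = lru.begin(); it != lru.end() && freed < needed;) {
      if (pass == 0 && it->refs > 1) {
        ++it;  // pass one: leave referenced entries alone
        continue;
      }
      freed += it->charge;
      it = lru.erase(it);
    }
  }
  return freed;
}
```

      The benefit is that entries a caller is actively pinning are only sacrificed when evicting unreferenced entries alone cannot make room.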
      
      Test Plan: Add two unit tests to cover the code.
      
      Reviewers: dhruba, haobo, emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb, emayanke, xjin
      
      Differential Revision: https://reviews.facebook.net/D13377
  4. 11 Oct 2013 (2 commits)
  5. 09 Oct 2013 (2 commits)
    • Add option for storing transaction logs in a separate dir · cbf4a064
      Authored by Naman Gupta
      Summary: In some cases, you might not want to store the data log (write ahead log) files in the same dir as the sst files. An example use case is leaf, which stores sst files in tmpfs and would like to save the log files in a separate dir (on disk) to save memory.
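      The resulting path selection reduces to a simple fallback rule, sketched here (the helper name is hypothetical; in today's RocksDB the option is Options::wal_dir, which falls back to the db directory when unset):

```cpp
#include <string>

// If no separate log dir is configured, write-ahead logs live alongside
// the sst files in the db directory; otherwise they go to the configured
// directory (e.g. a disk path while the sst files sit on tmpfs).
inline std::string LogDir(const std::string& dbname,
                          const std::string& wal_dir) {
  return wal_dir.empty() ? dbname : wal_dir;
}
```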
      
      Test Plan: make all. Ran the db_test test; a few tests are failing, see P2785018. If you guys don't see an obvious problem with the code, maybe somebody from the rocksdb team could help me debug the issue here. Running this on leaf worked well: I could see logs stored on disk, and deleted appropriately after compactions. Obviously this is only one set of options; the unit tests cover different options. It seems like I'm missing some edge cases.
      
      Reviewers: dhruba, haobo, leveldb
      
      CC: xinyaohu, sumeet
      
      Differential Revision: https://reviews.facebook.net/D13239
    • Make db_test more robust · 11607141
      Authored by Naman Gupta
      Summary: While working on D13239, I noticed that the same options were not used for opening and destroying a db, so I am adding that. Also added asserts for successful DestroyDB calls.
      
      Test Plan: Ran unit tests. At least one unit test is failing. The failures are a result of a past logic change; I'm not really planning to fix those, but I would like to check this in, and hopefully the respective unit test owners can fix the broken tests.
      
      Reviewers: leveldb, haobo
      
      CC: xinyaohu, sumeet, dhruba
      
      Differential Revision: https://reviews.facebook.net/D13329
  6. 06 Oct 2013 (3 commits)
  7. 05 Oct 2013 (4 commits)
  8. 04 Oct 2013 (2 commits)
  9. 03 Oct 2013 (2 commits)
  10. 29 Sep 2013 (1 commit)
  11. 27 Sep 2013 (2 commits)
  12. 26 Sep 2013 (2 commits)
    • [RocksDB] Add an option to enable set based memtable for perf_context_test · e0aa19a9
      Authored by Haobo Xu
      Summary:
      as title.
      Some result:
      
      -- Sequential insertion of 1M key/value with stock skip list (all in one memtable)
      time ./perf_context_test  --total_keys=1000000  --use_set_based_memetable=0
      Inserting 1000000 key/value pairs
      ...
      Put user key comparison:
      Count: 1000000  Average: 8.0179  StdDev: 176.34
      Min: 0.0000  Median: 2.5555  Max: 88933.0000
      Percentiles: P50: 2.56 P75: 2.83 P99: 58.21 P99.9: 133.62 P99.99: 987.50
      Get user key comparison:
      Count: 1000000  Average: 43.4465  StdDev: 379.03
      Min: 2.0000  Median: 36.0195  Max: 88939.0000
      Percentiles: P50: 36.02 P75: 43.66 P99: 112.98 P99.9: 824.84 P99.99: 7615.38
      real	0m21.345s
      user	0m14.723s
      sys	0m5.677s
      
      -- Sequential insertion of 1M key/value with set based memtable (all in one memtable)
      time ./perf_context_test  --total_keys=1000000  --use_set_based_memetable=1
      Inserting 1000000 key/value pairs
      ...
      Put user key comparison:
      Count: 1000000  Average: 61.5022  StdDev: 6.49
      Min: 0.0000  Median: 62.4295  Max: 71.0000
      Percentiles: P50: 62.43 P75: 66.61 P99: 71.00 P99.9: 71.00 P99.99: 71.00
      Get user key comparison:
      Count: 1000000  Average: 29.3810  StdDev: 3.20
      Min: 1.0000  Median: 29.1801  Max: 34.0000
      Percentiles: P50: 29.18 P75: 32.06 P99: 34.00 P99.9: 34.00 P99.99: 34.00
      real	0m28.875s
      user	0m21.699s
      sys	0m5.749s
      
      Worst case comparison for a Put is 88933 (skip list) vs 71 (set based memtable)
      
      Of course, there are other inefficiencies in the set based memtable implementation, which lead to the overall worse performance. However, the P99 behavior advantage is very obvious.
      
      Test Plan: ./perf_context_test and viewstate shadow testing
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13095
    • The vector rep implementation was segfaulting because of incorrect initialization of vector. · f1a60e5c
      Authored by Dhruba Borthakur
      Summary:
      The constructor for Vector memtable has a parameter called 'count'
      that specifies the capacity of the vector to be reserved at allocation
      time. It was incorrectly used to initialize the size of the vector.
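      The underlying pitfall is a standard std::vector one, sketched here (the helper name is hypothetical):

```cpp
#include <cstddef>
#include <vector>

// std::vector<T> v(count) creates `count` value-initialized elements,
// so size() == count immediately. To pre-allocate capacity for `count`
// future entries without creating any elements, use reserve().
inline std::vector<const char*> MakeBuffer(size_t count) {
  std::vector<const char*> buf;  // size() == 0
  buf.reserve(count);            // capacity() >= count, still empty
  return buf;
}
```

      Using the fill constructor instead of reserve() leaves `count` null elements in the rep, which later code then dereferences.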
      
      Test Plan: Enhanced db_test.
      
      Reviewers: haobo, xjin, emayanke
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13083
  13. 20 Sep 2013 (1 commit)
    • Better locking in vectorrep that increases throughput to match speed of storage. · 5e9f3a9a
      Authored by Dhruba Borthakur
      Summary:
      There is a use-case where we want to insert data into rocksdb as
      fast as possible. Vector rep is used for this purpose.
      
      The background flush thread needs to flush the vectorrep to
      storage. It acquires the dblock then sorts the vector, releases
      the dblock and then writes the sorted vector to storage. This is
      suboptimal because the lock is held during the sort, which
      prevents new writes from occurring.
      
      This patch moves the sorting of the vector rep outside the
      db mutex. Performance is now as fast as the underlying storage
      system. If you are doing buffered writes to rocksdb files, then
      you can observe write throughput upwards of 200 MB/sec.
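      The locking change can be sketched like this (hypothetical names; the real code deals with a MemTableRep and the db mutex):

```cpp
#include <algorithm>
#include <mutex>
#include <string>
#include <vector>

// Flush helper: hold the db mutex only long enough to take ownership of
// the unsorted vector, then sort and persist without blocking writers.
inline std::vector<std::string> TakeAndSort(std::mutex& db_mutex,
                                            std::vector<std::string>& memtable) {
  std::vector<std::string> local;
  {
    std::lock_guard<std::mutex> l(db_mutex);  // short critical section
    local.swap(memtable);  // writers now see an empty vector
  }
  std::sort(local.begin(), local.end());  // done outside the lock
  return local;  // caller writes this to storage
}
```

      The swap is O(1), so the time the mutex is held no longer grows with the size of the memtable.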
      
      This is an early draft and not yet ready to be reviewed.
      
      Test Plan:
      make check
      
      Task ID: #
      
      Blame Rev:
      
      Reviewers: haobo
      
      Reviewed By: haobo
      
      CC: leveldb, haobo
      
      Differential Revision: https://reviews.facebook.net/D12987
  14. 19 Sep 2013 (1 commit)
    • [RocksDB] Unit test to show Seek key comparison number · 4734dbb7
      Authored by Haobo Xu
      Summary: Added SeekKeyComparison to show the user key comparisons incurred by Seek.
      
      Test Plan:
      make perf_context_test
      export LEVELDB_TESTS=DBTest.SeekKeyComparison
      ./perf_context_test --write_buffer_size=500000 --total_keys=10000
      ./perf_context_test --write_buffer_size=250000 --total_keys=10000
      
      Reviewers: dhruba, xjin
      
      Reviewed By: xjin
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12843
  15. 18 Sep 2013 (2 commits)
  16. 16 Sep 2013 (3 commits)
  17. 14 Sep 2013 (1 commit)
    • Added a parameter to limit the maximum space amplification for universal compaction. · 4012ca1c
      Authored by Dhruba Borthakur
      Summary:
      Added a new field called max_size_amplification_ratio in the
      CompactionOptionsUniversal structure. This determines the maximum
      percentage overhead of space amplification.
      
      The size amplification is defined to be the ratio between the size of
      the oldest file to the sum of the sizes of all other files. If the
      size amplification exceeds the specified value, then min_merge_width
      and max_merge_width are ignored and a full compaction of all files is done.
      A value of 10 means that a database storing 100 bytes
      of user data could occupy up to 110 bytes of physical storage.
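      The check can be sketched as below, matching the 100/110-byte example (illustrative names; `files` holds file sizes ordered newest to oldest, as universal compaction keeps its sorted runs):

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Size amplification percent = 100 * size(all files except the oldest)
//                                  / size(oldest file).
// Returns true when a full compaction should be forced.
inline bool ExceedsSizeAmp(const std::vector<uint64_t>& files,
                           unsigned max_size_amplification_ratio) {
  if (files.size() < 2) return false;  // nothing to amplify
  uint64_t oldest = files.back();
  uint64_t rest = std::accumulate(files.begin(), files.end() - 1, uint64_t{0});
  // Compare rest/oldest > ratio/100 without floating point.
  return rest * 100 > oldest * max_size_amplification_ratio;
}
```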
      
      Test Plan: Unit test DBTest.UniversalCompactionSpaceAmplification added.
      
      Reviewers: haobo, emayanke, xjin
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12825
  18. 13 Sep 2013 (1 commit)
    • [RocksDB] Remove Log file immediately after memtable flush · 0e422308
      Authored by Haobo Xu
      Summary: As title. The DB log file life cycle is tied up with the memtable it backs. Once the memtable is flushed to sst and committed, we should be able to delete the log file, without holding the mutex. This is part of the bigger change to avoid FindObsoleteFiles at runtime. It deals with log files. sst files will be dealt with later.
      
      Test Plan: make check; db_bench
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11709
  19. 08 Sep 2013 (1 commit)
    • [RocksDB] Added nano second stopwatch and new perf counters to track block read cost · f2f4c807
      Authored by Haobo Xu
      Summary: The purpose of this diff is to expose per user-call level precise timing of block reads, so that we can answer questions like: a Get() costs me 100ms; is that somehow related to loading blocks from the file system, or something else? We will answer that with EXACTLY how many blocks have been read, how much time was spent on transferring the bytes from the os, how much time was spent on checksum verification, and how much time was spent on block decompression, just for that one Get. A nano second stopwatch was introduced to track time with higher precision. The cost/precision of the stopwatch is also measured in a unit test. On my dev box, retrieving one time instance costs about 30ns, on average. The deviation of timing results is good enough to track 100ns-1us level events. And the overhead can be safely ignored for 100us level events (10000 instances/s), for example, a viewstate thrift call.
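      A minimal version of such a stopwatch, built on std::chrono (the actual patch uses its own Env-based timer; the class name mirrors the idea, not the exact API):

```cpp
#include <chrono>
#include <cstdint>

// Nanosecond-resolution stopwatch backed by the monotonic steady_clock,
// so wall-clock adjustments cannot produce negative elapsed times.
class StopWatchNano {
 public:
  void Start() { start_ = std::chrono::steady_clock::now(); }
  uint64_t ElapsedNanos() const {
    return static_cast<uint64_t>(
        std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::steady_clock::now() - start_)
            .count());
  }

 private:
  std::chrono::steady_clock::time_point start_;
};
```

      Wrapping each block read, checksum check, and decompression call in such a stopwatch is what lets the per-Get counters above be populated.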
      
      Test Plan: perf_context_test, also testing with viewstate shadow traffic.
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D12351
  20. 07 Sep 2013 (2 commits)
    • Flush was hanging because the configured options specified that more than 1 memtable need to be merged. · 32c965d4
      Authored by Dhruba Borthakur
      
      Summary:
      There is a config option called Options.min_write_buffer_number_to_merge
      that specifies the minimum number of write buffers to merge in memory
      before flushing to a file in L0. But in the case when the db is
      being closed, we should not be using this config; instead we should
      flush whatever write buffers are available at that time.
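      The decision rule reduces to something like this (hypothetical function; the real logic lives inside the flush scheduling code):

```cpp
// During normal operation, wait until enough immutable memtables have
// accumulated; on shutdown, flush whatever exists so close cannot hang.
inline bool ShouldFlush(int num_immutable_memtables,
                        int min_write_buffer_number_to_merge,
                        bool db_closing) {
  if (num_immutable_memtables == 0) return false;  // nothing to flush
  if (db_closing) return true;  // don't wait for more buffers at close
  return num_immutable_memtables >= min_write_buffer_number_to_merge;
}
```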
      
      Test Plan: Unit test attached.
      
      Reviewers: haobo, emayanke
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12717
    • An iterator may automatically invoke reseeks. · 197034e4
      Authored by Dhruba Borthakur
      Summary:
      An iterator invokes a reseek if the number of sequential skips over the
      same userkey exceeds a configured number. This makes iter->Next()
      faster (because of fewer key compares) if a large number of
      adjacent internal keys in a table (sst or memtable) have the
      same userkey.
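      The trade-off can be sketched with a toy model: step with Next() while the user key repeats, and fall back to a targeted seek once the skip count passes the threshold (all names illustrative, not the actual DBIter code):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Advance past a run of entries sharing user_keys[pos]'s user key in a
// sorted key sequence. After more than `max_skips` consecutive
// duplicates, stop stepping and "reseek": binary-search the first larger
// key. Returns the index of the first key past the run (may be one past
// the end); sets *used_seek when the reseek path was taken.
inline size_t AdvancePastUserKey(const std::vector<std::string>& user_keys,
                                 size_t pos, size_t max_skips,
                                 bool* used_seek) {
  const std::string current = user_keys[pos];
  size_t skips = 0;
  *used_seek = false;
  while (pos + 1 < user_keys.size() && user_keys[pos + 1] == current) {
    if (++skips > max_skips) {
      *used_seek = true;  // too many duplicates: seek instead of stepping
      return static_cast<size_t>(
          std::upper_bound(user_keys.begin(), user_keys.end(), current) -
          user_keys.begin());
    }
    ++pos;
  }
  return pos + 1;
}
```

      The seek costs O(log n) comparisons but replaces an unbounded run of O(1) steps, which is why it only pays off past a threshold.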
      
      Test Plan: Unit test DBTest.IterReseek.
      
      Reviewers: emayanke, haobo, xjin
      
      Reviewed By: xjin
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D11865
  21. 05 Sep 2013 (2 commits)
    • Return pathname relative to db dir in LogFile and cleanup AppendSortedWalsOfType · aa5c897d
      Authored by Mayank Agarwal
      Summary: So that replication can just download from wherever LogFile.Pathname points them.
      
      Test Plan: make all check;./db_repl_stress
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12609
    • New ldb command to convert compaction style · 42c109cc
      Authored by Xing Jin
      Summary:
      Add new command "change_compaction_style" to ldb tool. For
      universal->level, it shows "nothing to do". For level->universal, it
      compacts all files into a single one and moves the file to level 0.
      
      Also add check for number of files at level 1+ when opening db with
      universal compaction style.
      
      Test Plan:
      'make all check'. New unit test for the internal conversion function. Also manually tested various
      commands like:
      
      ./ldb change_compaction_style --old_compaction_style=0
      --new_compaction_style=1 --db=/tmp/leveldbtest-3088/db_test
      
      Reviewers: haobo, dhruba
      
      Reviewed By: haobo
      
      CC: vamsi, emayanke
      
      Differential Revision: https://reviews.facebook.net/D12603