1. 31 Jan 2020 (1 commit)
  2. 18 Jan 2020 (1 commit)
  3. 11 Jan 2020 (1 commit)
  4. 08 Jan 2020 (1 commit)
  5. 07 Jan 2020 (1 commit)
  6. 10 Dec 2019 (2 commits)
  7. 03 Dec 2019 (1 commit)
  8. 19 Sep 2019 (1 commit)
  9. 13 Sep 2019 (1 commit)
    • Add insert hints for each writebatch (#5728) · 1a928c22
      Authored by Lingjing You
      Summary:
      Add insert hints for each WriteBatch so that they can be used in concurrent writes, and add a write option to enable them.
      
      Benchmark results (QPS); the command below is the template, with batch size and thread count varied as shown in the tables:
      
      `./db_bench --benchmarks=fillseq -allow_concurrent_memtable_write=true -num=4000000 -batch-size=1 -threads=1 -db=/data3/ylj/tmp -write_buffer_size=536870912 -num_column_families=4`
      
      master:
      
      | batch size \ thread num | 1       | 2       | 4       | 8       |
      | ----------------------- | ------- | ------- | ------- | ------- |
      | 1                       | 387883  | 220790  | 308294  | 490998  |
      | 10                      | 1397208 | 978911  | 1275684 | 1733395 |
      | 100                     | 2045414 | 1589927 | 1798782 | 2681039 |
      | 1000                    | 2228038 | 1698252 | 1839877 | 2863490 |
      
      fillseq with writebatch hint:
      
      | batch size \ thread num | 1       | 2       | 4       | 8       |
      | ----------------------- | ------- | ------- | ------- | ------- |
      | 1                       | 286005  | 223570  | 300024  | 466981  |
      | 10                      | 970374  | 813308  | 1399299 | 1753588 |
      | 100                     | 1962768 | 1983023 | 2676577 | 3086426 |
      | 1000                    | 2195853 | 2676782 | 3231048 | 3638143 |
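The hint mechanism above can be sketched in miniature. The following is a hypothetical model, not the real RocksDB types: each column family gets its own cached hint (the position of the last insert), owned by the batch so concurrent writers do not share hints. For sequential loads such as fillseq, the next key usually lands right after the previous one, so the hint skips most of the search a plain sorted insert would do.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Stand-in for a memtable's ordered index, with a per-buffer insert hint.
struct HintedCfBuffer {
  std::vector<std::string> sorted_keys;
  size_t hint = 0;  // cached position just past the last insert

  void InsertWithHint(const std::string& key) {
    // Fast path: key extends the run right at the cached hint.
    if (hint == sorted_keys.size() &&
        (sorted_keys.empty() || sorted_keys.back() < key)) {
      sorted_keys.push_back(key);
      hint = sorted_keys.size();
      return;
    }
    // Slow path: full binary search, then refresh the hint.
    auto it = std::lower_bound(sorted_keys.begin(), sorted_keys.end(), key);
    it = sorted_keys.insert(it, key);
    hint = static_cast<size_t>(it - sorted_keys.begin()) + 1;
  }
};

// One hint per column family, carried by the batch (mirroring the PR's idea
// of keeping hints in the WriteBatch rather than sharing them globally).
struct BatchHints {
  std::map<uint32_t, HintedCfBuffer> per_cf;
};
```

Sequential inserts hit the fast path every time, which is why the gains in the tables grow with batch size.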
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5728
      
      Differential Revision: D17297240
      
      fbshipit-source-id: b053590a6d77871f1ef2f911a7bd013b3899b26c
  10. 24 Aug 2019 (1 commit)
    • Refactor trimming logic for immutable memtables (#5022) · 2f41ecfe
      Authored by Zhongyi Xie
      Summary:
      MyRocks currently sets `max_write_buffer_number_to_maintain` in order to maintain enough history for transaction conflict checking. The effectiveness of this approach depends on the size of memtables. When memtables are small, it may not keep enough history; when memtables are large, this may consume too much memory.
      We are proposing a new way to configure memtable list history: by limiting the memory usage of immutable memtables. The new option is `max_write_buffer_size_to_maintain` and it will take precedence over the old `max_write_buffer_number_to_maintain` if they are both set to non-zero values. The new option accounts for the total memory usage of flushed immutable memtables and mutable memtable. When the total usage exceeds the limit, RocksDB may start dropping immutable memtables (which is also called trimming history), starting from the oldest one.
      The old option's semantics actually work as both an upper bound and a lower bound: history trimming starts once the number of immutable memtables exceeds the limit, but trimming never takes the count below (limit - 1).
      In order to mimic that behavior with the new option, history trimming stops if dropping the next immutable memtable would cause the total memory usage to go below the size limit. For example, assume the size limit is set to 64MB and there are 3 immutable memtables with sizes of 20, 30, and 30. Although the total memory usage is 80MB > 64MB, dropping the oldest memtable would reduce the usage to 60MB < 64MB, so in this case no memtable is dropped.
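The trimming rule described above can be sketched as a small decision function (a hypothetical helper, not the real RocksDB code): given immutable memtable sizes oldest-first, the active memtable size, and a limit mirroring `max_write_buffer_size_to_maintain`, drop the oldest immutable memtable only while doing so keeps total usage at or above the limit.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Returns how many of the oldest immutable memtables to trim.
// `imm_sizes` is oldest-first; `limit` models max_write_buffer_size_to_maintain.
size_t NumMemtablesToTrim(const std::vector<size_t>& imm_sizes,
                          size_t mutable_size, size_t limit) {
  size_t total = mutable_size;
  for (size_t s : imm_sizes) total += s;
  size_t dropped = 0;
  // Drop the oldest memtable only while usage after the drop stays >= limit;
  // going below the limit would discard history we are still allowed to keep.
  while (dropped < imm_sizes.size() && total > limit &&
         total - imm_sizes[dropped] >= limit) {
    total -= imm_sizes[dropped];
    ++dropped;
  }
  return dropped;
}
```

With the summary's example (limit 64, immutables 20/30/30), no memtable is trimmed, because the first drop would already land below the limit.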
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5022
      
      Differential Revision: D14394062
      
      Pulled By: miasantreble
      
      fbshipit-source-id: 60457a509c6af89d0993f988c9b5c2aa9e45f5c5
  11. 16 Jul 2019 (1 commit)
  12. 27 Jun 2019 (1 commit)
  13. 14 May 2019 (1 commit)
    • Unordered Writes (#5218) · f383641a
      Authored by Maysam Yabandeh
      Summary:
      This performs unordered writes in RocksDB when the unordered_write option is set to true. When enabled, writes to the memtable are done without joining any write thread, which offers much higher write throughput since upcoming writes no longer have to wait for the slowest memtable write to finish. The tradeoff is that the set of writes visible to a snapshot might change over time. If the application cannot tolerate that, it should implement its own mechanism to work around it; using TransactionDB with the WRITE_PREPARED write policy is one way to achieve that, and doing so still increases the max throughput by 2.2x without compromising the snapshot guarantees.
      The patch is based on an original by siying.
      Existing unit tests are extended to cover the unordered_write option.
      
      Benchmark Results:
      ```
      TEST_TMPDIR=/dev/shm/ ./db_bench_unordered --benchmarks=fillrandom --threads=32 --num=10000000 -max_write_buffer_number=16 --max_background_jobs=64 --batch_size=8 --writes=3000000 -level0_file_num_compaction_trigger=99999 --level0_slowdown_writes_trigger=99999 --level0_stop_writes_trigger=99999 -enable_pipelined_write=false -disable_auto_compactions  --unordered_write=1
      ```
      With WAL
      - Vanilla RocksDB: 78.6 MB/s
      - WRITE_PREPARED with unordered_write: 177.8 MB/s (2.2x)
      - unordered_write: 368.9 MB/s (4.7x, with relaxed snapshot guarantees)
      
      Without WAL
      - Vanilla RocksDB: 111.3 MB/s
      - WRITE_PREPARED with unordered_write: 259.3 MB/s (2.3x)
      - unordered_write: 645.6 MB/s (5.8x, with relaxed snapshot guarantees)
      
      - WRITE_PREPARED with unordered_write, concurrency control disabled: 185.3 MB/s (2.35x)
      
      Limitations:
      - The feature does not yet support `max_successive_merges` > 0, and it is incompatible with `enable_pipelined_write` = true and with `allow_concurrent_memtable_write` = false.
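The snapshot caveat above can be made concrete with a deterministic toy model (hypothetical names, not RocksDB code). With unordered_write, a write obtains its sequence number up front but may reach the memtable later; a snapshot only fixes an upper-bound sequence, so a slow write with a smaller sequence that lands after the snapshot was taken still becomes visible to it.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Write {
  uint64_t seq;    // sequence assigned when the write began
  int applied_at;  // logical step at which it reached the memtable
};

// Keys a snapshot taken at sequence `snap_seq` can see when read at step `now`:
// both the sequence bound and "has it landed yet" matter.
size_t VisibleCount(const std::vector<Write>& writes, uint64_t snap_seq,
                    int now) {
  size_t n = 0;
  for (const Write& w : writes) {
    if (w.seq <= snap_seq && w.applied_at <= now) ++n;
  }
  return n;
}
```

Reading the same snapshot at two different times can return different results, which is exactly the relaxed guarantee the summary warns about.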
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5218
      
      Differential Revision: D15219029
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 38f2abc4af8780148c6128acdba2b3227bc81759
  14. 10 May 2019 (1 commit)
  15. 04 May 2019 (1 commit)
    • Refresh snapshot list during long compactions (2nd attempt) (#5278) · 6a40ee5e
      Authored by Maysam Yabandeh
      Summary:
      Part of compaction CPU goes to processing the snapshot list; the larger the list, the bigger the overhead. Although the lifetime of most snapshots is much shorter than the lifetime of a compaction, the compaction conservatively operates on the list of snapshots it initially obtained. This patch allows the snapshot list to be updated via a callback if the compaction is taking long, which lets the compaction continue more efficiently with a much smaller snapshot list.
      For simplicity, the feature is disabled in two cases: i) when more than one sub-compaction shares the same snapshot list, and ii) when range deletes are used, since the range-delete aggregator keeps its own copy of the snapshot list.
      This fixes the reverted https://github.com/facebook/rocksdb/pull/5099 issue with range deletes.
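The refresh idea can be sketched as follows (hypothetical shape, not the real callback API): the compaction starts from its captured snapshot list and, every N keys, asks the DB for the current list; snapshots released in the meantime disappear, so the per-key snapshot scan gets cheaper for the remaining keys.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

struct CompactionSketch {
  std::vector<uint64_t> snapshots;                    // captured at start
  std::function<std::vector<uint64_t>()> refresh_cb;  // fetches the live list
  size_t refresh_period;                              // keys between refreshes

  // Returns the total number of per-key snapshot comparisons performed,
  // as a stand-in for the CPU cost the patch is reducing.
  size_t ProcessKeys(size_t num_keys) {
    size_t snapshot_checks = 0;
    for (size_t i = 0; i < num_keys; ++i) {
      if (i > 0 && i % refresh_period == 0) {
        snapshots = refresh_cb();  // drop snapshots released meanwhile
      }
      snapshot_checks += snapshots.size();  // per-key scan over the list
    }
    return snapshot_checks;
  }
};
```

If most snapshots are short-lived relative to the compaction, the refreshed list shrinks quickly and the bulk of the keys are processed against a near-empty list.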
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5278
      
      Differential Revision: D15203291
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: fa645611e606aa222c7ce53176dc5bb6f259c258
  16. 02 May 2019 (1 commit)
  17. 01 May 2019 (1 commit)
  18. 26 Apr 2019 (2 commits)
    • Refresh snapshot list during long compactions (#5099) · 506e8448
      Authored by Maysam Yabandeh
      Summary:
      Part of compaction CPU goes to processing the snapshot list; the larger the list, the bigger the overhead. Although the lifetime of most snapshots is much shorter than the lifetime of a compaction, the compaction conservatively operates on the list of snapshots it initially obtained. This patch allows the snapshot list to be updated via a callback if the compaction is taking long, which lets the compaction continue more efficiently with a much smaller snapshot list.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5099
      
      Differential Revision: D15086710
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 7649f56c3b6b2fb334962048150142a3bf9c1a12
    • add missing rocksdb_flush_cf in c (#5243) · 084a3c69
      Authored by niukuo
      Summary:
      Same as #5229.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5243
      
      Differential Revision: D15082800
      
      Pulled By: siying
      
      fbshipit-source-id: f4a68a480db0e40e1ba7cf37e18b88e43dff7c08
  19. 20 Mar 2019 (1 commit)
  20. 15 Feb 2019 (1 commit)
    • Apply modernize-use-override (2nd iteration) · ca89ac2b
      Authored by Michael Liu
      Summary:
      Use C++11's override and remove virtual where applicable.
      Changes are automatically generated.
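A minimal before/after illustration of the automated change (toy classes, not RocksDB code): pre-C++11 style marked overrides with a redundant `virtual`; the modernize-use-override rewrite replaces that with `override`, which also makes the compiler reject accidental signature mismatches.

```cpp
#include <cassert>

struct Base {
  virtual ~Base() = default;
  virtual int Value() const { return 1; }
};

// Before the rewrite this read: `virtual int Value() const { return 2; }`.
// After: the redundant `virtual` is dropped and `override` is added, so a
// typo like `int Value()` (missing const) would now fail to compile.
struct Derived : Base {
  int Value() const override { return 2; }
};
```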
      
      Reviewed By: Orvid
      
      Differential Revision: D14090024
      
      fbshipit-source-id: 1e9432e87d2657e1ff0028e15370a85d1739ba2a
  21. 08 Feb 2019 (1 commit)
    • Deprecate CompactionFilter::IgnoreSnapshots() = false (#4954) · f48758e9
      Authored by Siying Dong
      Summary:
      We found that the behavior of CompactionFilter::IgnoreSnapshots() = false isn't
      what we had expected: we thought a snapshot would always be preserved. However,
      we just realized that if no snapshot exists when the compaction starts and a
      snapshot is created after that, data visible to that snapshot can still be
      dropped by the compaction. This makes the feature behave strangely in a way
      that is hard to explain. As documented in the code comments, the feature is
      not very useful with snapshots anyway, so the decision is to deprecate it.
      
      We keep the function to avoid breaking users' code. However, compactions will
      now fail if false is returned.
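The hazard can be sketched with a toy model (not the CompactionFilter API): with IgnoreSnapshots() == false, the compaction only respects the snapshot list it captured at start, so a snapshot created after that capture is simply not in the list and does not protect anything.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Whether the filter may drop a key, given only the snapshot list captured
// when the compaction began. Snapshots created later are invisible here.
bool MayDrop(uint64_t key_seq, const std::vector<uint64_t>& captured_snapshots) {
  for (uint64_t snap : captured_snapshots) {
    if (key_seq <= snap) return false;  // protected by a captured snapshot
  }
  return true;  // no captured snapshot covers it; the filter may drop it
}
```

A snapshot taken mid-compaction at a higher sequence never appears in `captured_snapshots`, so data it should see can still be filtered out, which is the surprising behavior that motivated the deprecation.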
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4954
      
      Differential Revision: D13981900
      
      Pulled By: siying
      
      fbshipit-source-id: 2db8c2c3865acd86a28dca625945d1481b1d1e36
  22. 18 Dec 2018 (1 commit)
  23. 14 Dec 2018 (1 commit)
  24. 04 Dec 2018 (1 commit)
  25. 01 Dec 2018 (1 commit)
  26. 25 Nov 2018 (1 commit)
  27. 14 Nov 2018 (2 commits)
  28. 10 Nov 2018 (1 commit)
    • Update all unique/shared_ptr instances to be qualified with namespace std (#4638) · dc352807
      Authored by Sagar Vemuri
      Summary:
      Ran the following commands to recursively change all the files under RocksDB:
      ```
      find . -type f -name "*.cc" -exec sed -i 's/ unique_ptr/ std::unique_ptr/g' {} +
      find . -type f -name "*.cc" -exec sed -i 's/<unique_ptr/<std::unique_ptr/g' {} +
      find . -type f -name "*.cc" -exec sed -i 's/ shared_ptr/ std::shared_ptr/g' {} +
      find . -type f -name "*.cc" -exec sed -i 's/<shared_ptr/<std::shared_ptr/g' {} +
      ```
      Running `make format` updated some formatting on the files touched.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4638
      
      Differential Revision: D12934992
      
      Pulled By: sagar0
      
      fbshipit-source-id: 45a15d23c230cdd64c08f9c0243e5183934338a8
  29. 14 Sep 2018 (1 commit)
    • Memory usage stats in C API (#4340) · 0bd2ede1
      Authored by Vitaly Isaev
      Summary:
      Please consider this small PR providing access to the `MemoryUsage::GetApproximateMemoryUsageByType` function in the plain C API. I'm working on a Go application and am trying to investigate the reasons for its high memory consumption (#4313). Go [wrappers](https://github.com/tecbot/gorocksdb) are built on top of the RocksDB C API. According to #706, `MemoryUsage::GetApproximateMemoryUsageByType` is considered the best option for getting database-internal memory usage stats, but it wasn't supported in the C API yet.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4340
      
      Differential Revision: D9655135
      
      Pulled By: ajkr
      
      fbshipit-source-id: a3d2f3f47c143ae75862fbcca2f571ea1b49e14a
  30. 06 Sep 2018 (1 commit)
  31. 24 Aug 2018 (1 commit)
  32. 14 Aug 2018 (1 commit)
  33. 28 Jun 2018 (1 commit)
  34. 23 Jun 2018 (1 commit)
    • Pin top-level index on partitioned index/filter blocks (#4037) · 80ade9ad
      Authored by Maysam Yabandeh
      Summary:
      Top-level indexes in partitioned index/filter blocks are small and can be pinned in memory. So far we achieved that by setting cache_index_and_filter_blocks to false, which however makes it difficult to account for the total memory usage. This patch introduces pin_top_level_index_and_filter, which, in combination with cache_index_and_filter_blocks=true, keeps the top-level index in the cache yet pinned, avoiding both cache misses and cache lookup overhead.
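The resulting placement of the top-level index can be summarized as a small decision function (hypothetical enum and helper; only the option names come from the summary above): with caching off, the index lives in heap memory unaccounted by the block cache; with caching on, it normally competes for cache space; the new flag keeps it in the cache, where it is charged, but pinned.

```cpp
#include <cassert>

enum class TopLevelIndexPlacement {
  kHeapUnaccounted,  // held by the table reader, invisible to the block cache
  kCacheEvictable,   // in the block cache, can be evicted (misses possible)
  kCachePinned       // in the block cache, charged there, but never evicted
};

TopLevelIndexPlacement Place(bool cache_index_and_filter_blocks,
                             bool pin_top_level_index_and_filter) {
  if (!cache_index_and_filter_blocks) {
    return TopLevelIndexPlacement::kHeapUnaccounted;
  }
  return pin_top_level_index_and_filter
             ? TopLevelIndexPlacement::kCachePinned
             : TopLevelIndexPlacement::kCacheEvictable;
}
```

The pinned-in-cache combination is the new one: memory accounting of cache_index_and_filter_blocks=true without its cache-miss and lookup costs for the top-level index.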
      Closes https://github.com/facebook/rocksdb/pull/4037
      
      Differential Revision: D8596218
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 3a5f7f9ca6b4b525b03ff6bd82354881ae974ad2
  35. 02 Jun 2018 (1 commit)
  36. 01 Jun 2018 (1 commit)
  37. 25 May 2018 (1 commit)