1. 10 Jul, 2019: 1 commit
  2. 19 Jun, 2019: 1 commit
  3. 18 Jun, 2019: 2 commits
    • Support computing miss ratio curves using sim_cache. (#5449) · 2d1dd5bc
      Authored by haoyuhuang
      Summary:
      This PR adds a BlockCacheTraceSimulator that reports the miss ratios given different cache configurations. A cache configuration contains "cache_name,num_shard_bits,cache_capacities". For example, "lru, 1, 1K, 2K, 4M, 4G".
      
      When we replay the trace, we also perform lookups and inserts on the simulated caches.
      In the end, it reports the miss ratio for each tuple <cache_name, num_shard_bits, cache_capacity> in an output file.
      
      This PR also adds a main source file, block_cache_trace_analyzer, so that we can run the analyzer from the command line.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5449
      
      Test Plan:
      Added tests for block_cache_trace_analyzer.
      COMPILE_WITH_ASAN=1 make check -j32.
      
      Differential Revision: D15797073
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: aef0c5c2e7938f3e8b6a10d4a6a50e6928ecf408
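      The "cache_name,num_shard_bits,cache_capacities" format described above can be illustrated with a small parser. This is a hypothetical sketch for illustration only (the names `CacheConfig`, `ParseCacheConfig`, and `ParseCapacity` are not from the PR), assuming K/M/G suffixes denote binary multiples:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <sstream>
      #include <string>
      #include <vector>

      // Hypothetical sketch of parsing a cache configuration of the form
      // "cache_name,num_shard_bits,cache_capacities", e.g. "lru, 1, 1K, 2K, 4M, 4G".
      struct CacheConfig {
        std::string cache_name;
        uint32_t num_shard_bits = 0;
        std::vector<uint64_t> cache_capacities;
      };

      // Interpret trailing K/M/G suffixes as binary multiples.
      uint64_t ParseCapacity(const std::string& token) {
        uint64_t multiplier = 1;
        std::string digits = token;
        switch (token.back()) {
          case 'K': multiplier = 1ULL << 10; digits.pop_back(); break;
          case 'M': multiplier = 1ULL << 20; digits.pop_back(); break;
          case 'G': multiplier = 1ULL << 30; digits.pop_back(); break;
          default: break;
        }
        return std::stoull(digits) * multiplier;
      }

      CacheConfig ParseCacheConfig(const std::string& line) {
        CacheConfig config;
        std::stringstream ss(line);
        std::string token;
        std::vector<std::string> tokens;
        while (std::getline(ss, token, ',')) {
          token.erase(0, token.find_first_not_of(' '));  // trim leading spaces
          tokens.push_back(token);
        }
        config.cache_name = tokens[0];
        config.num_shard_bits = static_cast<uint32_t>(std::stoul(tokens[1]));
        for (size_t i = 2; i < tokens.size(); ++i) {
          config.cache_capacities.push_back(ParseCapacity(tokens[i]));
        }
        return config;
      }
      ```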
    • Persistent Stats: persist stats history to disk (#5046) · 671d15cb
      Authored by Zhongyi Xie
      Summary:
      This PR continues the work in https://github.com/facebook/rocksdb/pull/4748 and https://github.com/facebook/rocksdb/pull/4535 by adding a new DBOption `persist_stats_to_disk`, which instructs RocksDB to persist stats history to RocksDB itself. When statistics are enabled and both options `stats_persist_period_sec` and `persist_stats_to_disk` are set, RocksDB will periodically write stats to a built-in column family in the following form: key -> (timestamp in microseconds)#(stats name), value -> stats value. The existing API `GetStatsHistory` will detect the current value of `persist_stats_to_disk` and read either from the in-memory data structure or from the hidden column family on disk.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5046
      
      Differential Revision: D15863138
      
      Pulled By: miasantreble
      
      fbshipit-source-id: bb82abdb3f2ca581aa42531734ac799f113e931b
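      The key layout described above, key -> (timestamp in microseconds)#(stats name), can be sketched as follows. `EncodeStatsKey` is a hypothetical helper, not the PR's actual code; it assumes the timestamp is zero-padded so that lexicographic key order matches time order:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>

      // Hypothetical sketch of the stats key layout:
      // key = (timestamp in microseconds) + '#' + (stats name).
      std::string EncodeStatsKey(uint64_t timestamp_us, const std::string& stats_name) {
        std::string ts = std::to_string(timestamp_us);
        // Pad to a fixed width (20 digits covers uint64_t) so that
        // lexicographic order matches numeric order.
        ts.insert(0, 20 - ts.size(), '0');
        return ts + "#" + stats_name;
      }
      ```

      With this encoding, a range scan over the hidden column family returns stats entries in timestamp order.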
  4. 14 Jun, 2019: 1 commit
  5. 12 Jun, 2019: 1 commit
  6. 07 Jun, 2019: 1 commit
  7. 01 Jun, 2019: 2 commits
  8. 31 May, 2019: 3 commits
  9. 30 May, 2019: 1 commit
  10. 03 May, 2019: 1 commit
  11. 01 May, 2019: 1 commit
  12. 27 Mar, 2019: 1 commit
    • Support for single-primary, multi-secondary instances (#4899) · 9358178e
      Authored by Yanqin Jin
      Summary:
      This PR allows RocksDB to run in single-primary, multi-secondary process mode.
      The writer is a regular RocksDB instance (e.g. a `DBImpl`) playing the role of a primary.
      Multiple `DBImplSecondary` processes (secondaries) share the same set of SST files, MANIFEST, and WAL files with the primary. Secondaries tail the MANIFEST of the primary and apply updates to their own in-memory state of the file system, e.g. `VersionStorageInfo`.
      
      This PR has several components:
      1. (Originally in #4745). Add a `PathNotFound` subcode to `IOError` to denote the failure when a secondary tries to open a file which has been deleted by the primary.
      
      2. (Similar to #4602). Add `FragmentBufferedReader` to handle a partially-read trailing record at the end of a log, from which future reads can continue.
      
      3. (Originally in #4710 and #4820). Add implementation of the secondary, i.e. `DBImplSecondary`.
      3.1 Tail the primary's MANIFEST during recovery.
      3.2 Tail the primary's MANIFEST during normal processing by calling `ReadAndApply`.
      3.3 Tailing WAL will be in a future PR.
      
      4. Add an example in 'examples/multi_processes_example.cc' to demonstrate the usage of a secondary RocksDB instance in a multi-process setting. Instructions to run the example can be found at the beginning of the source code.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4899
      
      Differential Revision: D14510945
      
      Pulled By: riversand963
      
      fbshipit-source-id: 4ac1c5693e6012ad23f7b4b42d3c374fecbe8886
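      The tailing idea behind `FragmentBufferedReader` (resume reading where a partially written trailing record left off) can be sketched with a generic, self-contained tail reader. `TailReader` is illustrative only; it uses newline-delimited records instead of RocksDB's log format:

      ```cpp
      #include <fstream>
      #include <iterator>
      #include <string>
      #include <vector>

      // Illustrative sketch (not RocksDB's FragmentBufferedReader): tail a log
      // file, returning only complete newline-terminated records and buffering
      // any partially written trailing record until the writer finishes it.
      class TailReader {
       public:
        explicit TailReader(std::string path) : path_(std::move(path)) {}

        // Read any new complete records appended since the last call.
        std::vector<std::string> ReadNewRecords() {
          std::ifstream in(path_, std::ios::binary);
          in.seekg(offset_);
          std::string chunk((std::istreambuf_iterator<char>(in)),
                            std::istreambuf_iterator<char>());
          offset_ += static_cast<long>(chunk.size());
          partial_ += chunk;
          std::vector<std::string> records;
          size_t pos;
          while ((pos = partial_.find('\n')) != std::string::npos) {
            records.push_back(partial_.substr(0, pos));
            partial_.erase(0, pos + 1);
          }
          return records;
        }

       private:
        std::string path_;
        long offset_ = 0;      // bytes consumed from the file so far
        std::string partial_;  // buffered partial trailing record
      };
      ```

      A secondary can call such a reader repeatedly: each call picks up whatever the primary has appended since the last call, and a half-written record is simply held back until its remainder arrives.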
  13. 23 Feb, 2019: 1 commit
  14. 20 Feb, 2019: 1 commit
  15. 14 Feb, 2019: 1 commit
  16. 29 Jan, 2019: 1 commit
    • Change the command to invoke parallel tests (#4922) · 95604d13
      Authored by Yanqin Jin
      Summary:
      We used to call `printf $(t_run)` and later feed the result to GNU parallel in the recipe of target `check_0`. However, this approach is problematic when the length of $(t_run) exceeds the maximum length of a command line, in which case the `printf` command cannot be executed. Instead we use `find -print`, which avoids generating an overly long command.
      
      **This PR is actually the last commit of #4916. Prefer to merge this PR separately.**
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4922
      
      Differential Revision: D13845883
      
      Pulled By: riversand963
      
      fbshipit-source-id: b56de7f7af43337c6ec89b931de843c9667cb679
  17. 25 Jan, 2019: 1 commit
  18. 24 Jan, 2019: 1 commit
  19. 12 Jan, 2019: 1 commit
  20. 11 Jan, 2019: 1 commit
  21. 20 Dec, 2018: 1 commit
  22. 19 Dec, 2018: 1 commit
  23. 18 Dec, 2018: 2 commits
  24. 28 Nov, 2018: 1 commit
    • Add SstFileReader to read sst files (#4717) · 5e72bc11
      Authored by Huachao Huang
      Summary:
      A user-friendly sst file reader is useful when we want to access sst
      files outside of RocksDB. For example, we can generate an sst file
      with SstFileWriter and send it to other places, then use SstFileReader
      to read the file and process the entries in other ways.
      
      Also rename the original SstFileReader to SstFileDumper because of the
      name conflict; SstFileDumper also seems more appropriate for a tool.
      
      TODO: there is only a very simple test now, because I want to get some feedback first.
      If the changes look good, I will add more tests soon.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4717
      
      Differential Revision: D13212686
      
      Pulled By: ajkr
      
      fbshipit-source-id: 737593383264c954b79e63edaf44aaae0d947e56
  25. 22 Nov, 2018: 1 commit
    • Introduce RangeDelAggregatorV2 (#4649) · 457f77b9
      Authored by Abhishek Madan
      Summary:
      The old RangeDelAggregator did expensive pre-processing work
      to create a collapsed, binary-searchable representation of range
      tombstones. With FragmentedRangeTombstoneIterator, much of this work is
      now unnecessary. RangeDelAggregatorV2 takes advantage of this by seeking
      in each iterator to find a covering tombstone in ShouldDelete, while
      doing minimal work in AddTombstones. The old RangeDelAggregator is still
      used during flush/compaction for now, though RangeDelAggregatorV2 will
      support those uses in a future PR.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4649
      
      Differential Revision: D13146964
      
      Pulled By: abhimadan
      
      fbshipit-source-id: be29a4c020fc440500c137216fcc1cf529571eb3
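      The core idea (seek for a covering tombstone at query time instead of pre-building a collapsed representation) can be sketched on fragmented, i.e. non-overlapping and sorted, tombstones. This is an illustrative stand-alone sketch, not RocksDB's actual `ShouldDelete` implementation:

      ```cpp
      #include <algorithm>
      #include <cstdint>
      #include <string>
      #include <vector>

      // A fragmented range tombstone: deletes keys in [start, end) written
      // at or below `seqno`.
      struct RangeTombstone {
        std::string start, end;
        uint64_t seqno;
      };

      // Returns true if `key` at sequence number `seqno` is covered by a newer
      // range tombstone. `tombstones` must be non-overlapping and sorted by
      // start key, so a single binary search finds the only candidate.
      bool ShouldDelete(const std::vector<RangeTombstone>& tombstones,
                        const std::string& key, uint64_t seqno) {
        auto it = std::upper_bound(
            tombstones.begin(), tombstones.end(), key,
            [](const std::string& k, const RangeTombstone& t) {
              return k < t.start;
            });
        if (it == tombstones.begin()) return false;
        --it;  // last tombstone with start <= key
        return key < it->end && it->seqno > seqno;
      }
      ```

      Because the fragments are non-overlapping, no up-front collapsing pass is needed; the per-lookup cost is a logarithmic seek, which is the work `AddTombstones` used to do eagerly.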
  26. 13 Nov, 2018: 1 commit
  27. 10 Nov, 2018: 1 commit
  28. 08 Nov, 2018: 1 commit
  29. 31 Oct, 2018: 1 commit
  30. 25 Oct, 2018: 1 commit
    • Use only "local" range tombstones during Get (#4449) · 8c78348c
      Authored by Abhishek Madan
      Summary:
      Previously, range tombstones were accumulated from every level, which
      was necessary if a range tombstone in a higher level covered a key in a lower
      level. However, RangeDelAggregator::AddTombstones's complexity is based on
      the number of tombstones that are currently stored in it, which is wasteful in
      the Get case, where we only need to know the highest sequence number of range
      tombstones that cover the key from higher levels, and compute the highest covering
      sequence number at the current level. This change introduces this optimization, and
      removes the use of RangeDelAggregator from the Get path.
      
      In the benchmark results, the following command was used to initialize the database:
      ```
      ./db_bench -db=/dev/shm/5k-rts -use_existing_db=false -benchmarks=filluniquerandom -write_buffer_size=1048576 -compression_type=lz4 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304 -value_size=112 -key_size=16 -block_size=4096 -level_compaction_dynamic_level_bytes=true -num=5000000 -max_background_jobs=12 -benchmark_write_rate_limit=20971520 -range_tombstone_width=100 -writes_per_range_tombstone=100 -max_num_range_tombstones=50000 -bloom_bits=8
      ```
      
      ...and the following command was used to measure read throughput:
      ```
      ./db_bench -db=/dev/shm/5k-rts/ -use_existing_db=true -benchmarks=readrandom -disable_auto_compactions=true -num=5000000 -reads=100000 -threads=32
      ```
      
      The filluniquerandom command was only run once, and the resulting database was used
      to measure read performance before and after the PR. Both binaries were compiled with
      `DEBUG_LEVEL=0`.
      
      Readrandom results before PR:
      ```
      readrandom   :       4.544 micros/op 220090 ops/sec;   16.9 MB/s (63103 of 100000 found)
      ```
      
      Readrandom results after PR:
      ```
      readrandom   :      11.147 micros/op 89707 ops/sec;    6.9 MB/s (63103 of 100000 found)
      ```
      
      So it's actually slower right now, but this PR paves the way for future optimizations (see #4493).
      
      ----
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4449
      
      Differential Revision: D10370575
      
      Pulled By: abhimadan
      
      fbshipit-source-id: 9a2e152be1ef36969055c0e9eb4beb0d96c11f4d
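      The "highest covering sequence number" idea can be sketched as follows; `MaxCoveringSeqno` and `CoveredByTombstone` are hypothetical names for illustration, not the PR's code:

      ```cpp
      #include <algorithm>
      #include <cstdint>
      #include <string>
      #include <vector>

      // Illustrative sketch of the per-level idea: instead of accumulating all
      // tombstones into one aggregator, track only the maximum sequence number
      // of any tombstone covering the lookup key as we descend the levels.
      struct Tombstone {
        std::string start, end;  // deletes keys in [start, end)
        uint64_t seqno;
      };

      // Highest seqno of a tombstone in this level covering `key` (0 if none).
      uint64_t MaxCoveringSeqno(const std::vector<Tombstone>& level,
                                const std::string& key) {
        uint64_t max_seqno = 0;
        for (const auto& t : level) {
          if (t.start <= key && key < t.end) {
            max_seqno = std::max(max_seqno, t.seqno);
          }
        }
        return max_seqno;
      }

      // During Get, carry the running maximum down the levels; a key version
      // is invisible if a covering tombstone from any level is newer.
      bool CoveredByTombstone(const std::vector<std::vector<Tombstone>>& levels,
                              const std::string& key, uint64_t key_seqno) {
        uint64_t max_seqno = 0;
        for (const auto& level : levels) {
          max_seqno = std::max(max_seqno, MaxCoveringSeqno(level, key));
        }
        return max_seqno > key_seqno;
      }
      ```

      Only a single integer needs to flow from level to level, so no aggregator holding every tombstone is required on the Get path.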
  31. 24 Oct, 2018: 1 commit
    • Fix compile error with aligned-new (#4576) · 742302a1
      Authored by Yi Wu
      Summary:
      In fbcode, when we build with clang7++, although -faligned-new is available in the compile phase, we link with an older version of libstdc++.a that does not come with aligned-new support (e.g. `nm libstdc++.a | grep align_val_t` returns empty). In this case the previous -faligned-new detection passes but ends up with a link error. Fix it by only performing the detection for non-fbcode builds.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4576
      
      Differential Revision: D10500008
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: b375de4fbb61d2a08e54ab709441aa8e7b4b08cf
  32. 28 Sep, 2018: 1 commit
    • Utility to run task periodically in a thread (#4423) · d6f2ecf4
      Authored by Yi Wu
      Summary:
      Introduce the `RepeatableThread` utility to run a task periodically in a separate thread. It is basically the same as the class of the same name in fbcode, and in addition provides a helper method to let tests mock time and trigger executions one at a time.
      
      We can use this class to replace `TimerQueue` in #4382 and `BlobDB`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4423
      
      Differential Revision: D10020932
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 3616bef108c39a33c92eedb1256de424b7c04087
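      A minimal periodic-runner sketch shows the general shape of such a utility. `PeriodicRunner` is illustrative only (it is not RocksDB's `RepeatableThread` and omits the time-mocking helper); it assumes a condition variable is used so the thread can be cancelled promptly:

      ```cpp
      #include <chrono>
      #include <condition_variable>
      #include <functional>
      #include <mutex>
      #include <thread>

      // Illustrative sketch: run `task` every `period` in a dedicated thread
      // until destruction, waking early when cancelled.
      class PeriodicRunner {
       public:
        PeriodicRunner(std::function<void()> task, std::chrono::milliseconds period)
            : task_(std::move(task)), period_(period), thread_([this] { Loop(); }) {}

        ~PeriodicRunner() {
          {
            std::lock_guard<std::mutex> lk(mu_);
            stopped_ = true;
          }
          cv_.notify_one();
          thread_.join();
        }

       private:
        void Loop() {
          std::unique_lock<std::mutex> lk(mu_);
          while (!stopped_) {
            // Sleep for one period, or wake immediately on cancellation.
            if (cv_.wait_for(lk, period_, [this] { return stopped_; })) break;
            // Sketch simplification: a real implementation would drop the
            // lock while running the task.
            task_();
          }
        }

        std::function<void()> task_;
        std::chrono::milliseconds period_;
        std::mutex mu_;
        std::condition_variable cv_;
        bool stopped_ = false;
        std::thread thread_;  // constructed last, after the state it uses
      };
      ```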
  33. 18 Sep, 2018: 1 commit
    • Add RangeDelAggregator microbenchmarks (#4363) · 1626f6ab
      Authored by Abhishek Madan
      Summary:
      To measure the results of upcoming DeleteRange v2 work, this commit adds
      simple benchmarks for RangeDelAggregator. It measures the average time
      for AddTombstones and ShouldDelete calls.
      
      Using this to compare the results before #4014 and on the latest master (using the default arguments) produces the following results:
      
      Before #4014:
      ```
      =======================
      Results:
      =======================
      AddTombstones:          1356.28 us
      ShouldDelete:           0.401732 us
      ```
      
      Latest master:
      ```
      =======================
      Results:
      =======================
      AddTombstones:          740.82 us
      ShouldDelete:           0.383271 us
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4363
      
      Differential Revision: D9881676
      
      Pulled By: abhimadan
      
      fbshipit-source-id: 793e7d61aa4b9d47eb917bbcc03f08695b5e5442
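      The measurement pattern behind such microbenchmarks (average latency over many iterations) can be sketched generically; `AverageMicros` is a hypothetical helper, not the benchmark's actual code:

      ```cpp
      #include <chrono>

      // Run `op` many times and report the average latency in microseconds.
      // Using a monotonic clock avoids skew from wall-clock adjustments.
      template <typename Op>
      double AverageMicros(Op op, int iterations) {
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < iterations; ++i) {
          op();
        }
        auto end = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<
                   std::chrono::duration<double, std::micro>>(end - start)
                   .count() /
               iterations;
      }
      ```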
  34. 12 Sep, 2018: 1 commit
    • Fix Makefile target 'jtest' on PowerPC (#4357) · 3ba3b153
      Authored by Yanqin Jin
      Summary:
      Before the fix:
      On a PowerPC machine, run the following
      ```
      $ make jtest
      ```
      The command will fail with "undefined symbol: crc32c_ppc". It was caused by the
      'rocksdbjava' Makefile target not including the crc32c_ppc object files when
      generating the shared lib. The fix is simple.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4357
      
      Differential Revision: D9779474
      
      Pulled By: riversand963
      
      fbshipit-source-id: 3c5ec9068c2b9c796e6500f71cd900267064fd51
  35. 31 Aug, 2018: 1 commit
    • Rename DecodeCFAndKey to resolve naming conflict in unity test (#4323) · 1cf17ba5
      Authored by Zhongyi Xie
      Summary:
      Currently unity-test is failing because both trace_replay.cc and trace_analyzer_tool.cc define `DecodeCFAndKey` in an anonymous namespace. That would normally be fine, except that the unity test compiles all source files together, so now we have a conflict.
      Another issue with trace_analyzer_tool.cc is that it uses some utility functions from ldb_cmd, which is not included in the Makefile for unity_test; I chose to update TESTHARNESS to include LIBOBJECTS. Feel free to comment if there is a less intrusive way to solve this.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4323
      
      Differential Revision: D9599170
      
      Pulled By: miasantreble
      
      fbshipit-source-id: 38765b11f8e7de92b43c63bdcf43ea914abdc029