1. June 21, 2019 (4 commits)
    • Stop printing after verification fails (#5493) · 1bfeffab
      Yanqin Jin authored
      Summary:
      Stop verification and printing once verification fails.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5493
      
      Differential Revision: D15928992
      
      Pulled By: riversand963
      
      fbshipit-source-id: 699feac034a217d57280aa3fb50f5aba06adf317
    • Add more callers for table reader. (#5454) · 705b8eec
      haoyuhuang authored
      Summary:
      This PR adds more callers for table readers. This information is only used for block cache analysis, so that we can know which caller accesses a block.
      1. It renames BlockCacheLookupCaller to TableReaderCaller, since passing the caller from upstream requires changes to table_reader.h and TableReaderCaller is a more appropriate name.
      2. It adds more table reader callers in table/table_reader_caller.h, e.g., kCompactionRefill, kExternalSSTIngestion, and kBuildTable. (A sketch of the extended enum follows this entry.)
      
      This PR is long because it requires modifying interfaces in table_reader.h, e.g., NewIterator.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5454
      
      Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32.
      
      Differential Revision: D15819451
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: b6caa704c8fb96ddd15b9a934b7e7ea87f88092d
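      A minimal sketch of the extended caller enum, assuming only the members named in the summary; the real definition in table/table_reader_caller.h has more values and may differ in order:
      ```
      // Illustrative only; not the exact enum shipped in table/table_reader_caller.h.
      enum TableReaderCaller : char {
        kUserGet,                // pre-existing user-read callers (examples)
        kUserIterator,
        kCompaction,
        kCompactionRefill,       // named in this PR's summary
        kExternalSSTIngestion,   // named in this PR's summary
        kBuildTable,             // named in this PR's summary
        kUncategorized,
      };
      
      // Table readers then tag each block access with the caller, e.g. a
      // (simplified, hypothetical) iterator factory taking the extra parameter:
      // InternalIterator* NewIterator(const ReadOptions&, ..., TableReaderCaller caller);
      ```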
    • Fix segfault in ~DBWithTTLImpl() when called after Close() (#5485) · 0b0cb6f1
      feilongliu authored
      Summary:
      ~DBWithTTLImpl() fails after Close() has been called (which invokes
      DBImpl's Close()), because Close() deletes default_cf_handle_, which is
      then used by the GetOptions() call inside ~DBWithTTLImpl(), leading to a
      segfault.
      
      Fix by adding a Close() function to the DBWithTTLImpl class that performs
      the close and the work originally done in ~DBWithTTLImpl(). If Close() is
      not called explicitly, it is called from the ~DBWithTTLImpl() destructor.
      (An illustrative sketch of this pattern follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5485
      
      Test Plan: make clean;  USE_CLANG=1 make all check -j
      
      Differential Revision: D15924498
      
      fbshipit-source-id: 567397fb972961059083a1ae0f9f99ff74872b78
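      A library-agnostic sketch of the close-then-destruct idiom described above; the class and member names are illustrative, not the actual DBWithTTLImpl code:
      ```
      #include <iostream>
      
      class WrapperDB {
       public:
        void Close() {
          if (closed_) return;        // idempotent: safe to call more than once
          // ... release resources the destructor must not touch again ...
          std::cout << "closing\n";
          closed_ = true;
        }
        ~WrapperDB() {
          if (!closed_) Close();      // destructor falls back to Close()
        }
      
       private:
        bool closed_ = false;
      };
      
      int main() {
        WrapperDB db;
        db.Close();  // explicit close; the destructor is now a no-op
        return 0;
      }
      ```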
    • sanitize and limit block_size under 4GB (#5492) · 24f73436
      Zhongyi Xie authored
      Summary:
      `Block::restart_index_`, `Block::restarts_`, and `Block::current_` are defined as uint32_t, but `BlockBasedTableOptions::block_size` is defined as a size_t, so a user might see corruption as in https://github.com/facebook/rocksdb/issues/5486.
      This PR adds a check in `BlockBasedTableFactory::SanitizeOptions` to disallow such configurations. (An illustrative sketch of the check follows this entry.)
      yiwu-arbug
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5492
      
      Differential Revision: D15914047
      
      Pulled By: miasantreble
      
      fbshipit-source-id: c943f153d967e15aee7f2795730ab8259e2be201
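      A minimal sketch of the kind of check described above, written as a standalone helper rather than the actual BlockBasedTableFactory::SanitizeOptions code (the real check would report the problem through RocksDB's Status mechanism rather than an exception):
      ```
      #include <cstddef>
      #include <cstdint>
      #include <stdexcept>
      
      // In-block offsets (restarts, current position) are stored as uint32_t,
      // so a block_size at or above 4 GiB cannot be represented safely.
      void SanitizeBlockSize(std::size_t block_size) {
        constexpr uint64_t kMaxBlockSize = uint64_t{1} << 32;  // 4 GiB
        if (static_cast<uint64_t>(block_size) >= kMaxBlockSize) {
          throw std::invalid_argument(
              "block_size must stay below 4GB because in-block offsets are "
              "tracked as uint32_t");
        }
      }
      ```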
  2. June 20, 2019 (3 commits)
    • Fix AlignedBuffer's usage in Encryption Env (#5396) · 68614a96
      Sagar Vemuri authored
      Summary:
      The usage of `AlignedBuffer` in env_encryption.cc writes and reads to/from the AlignedBuffer's internal buffer directly without going through AlignedBuffer's APIs (like `Append` and `Read`), causing encapsulation to break in some cases. The writes are especially problematic as after the data is written to the buffer (directly using either memmove or memcpy), the size of the buffer is not updated ... causing the AlignedBuffer to lose track of the encapsulated buffer's current size.
      Fixed this by updating the buffer size after every write (a simplified sketch of the failure mode and fix follows this entry).
      
      Todo for later:
      Add an overloaded method to AlignedBuffer to support a memmove in addition to a memcpy. Encryption env does a memmove, and hence I couldn't switch to using `AlignedBuffer.Append()`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5396
      
      Test Plan: `make check`
      
      Differential Revision: D15764756
      
      Pulled By: sagar0
      
      fbshipit-source-id: 2e24b52bd3b4b5056c5c1da157f91ddf89370183
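      An illustrative sketch of the bug class described above, using a simplified buffer type rather than rocksdb::AlignedBuffer itself:
      ```
      #include <cstddef>
      #include <cstring>
      #include <vector>
      
      struct SimpleBuffer {
        std::vector<char> data = std::vector<char>(4096);
        std::size_t size = 0;  // logical size; must track every raw write
      
        char* BufferStart() { return data.data(); }
        void Size(std::size_t n) { size = n; }
      };
      
      void WriteRecord(SimpleBuffer& buf, const char* src, std::size_t n) {
        std::memcpy(buf.BufferStart(), src, n);  // raw write into the buffer
        buf.Size(n);  // the fix: without this, the buffer loses track of its contents
      }
      ```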
    • Java: Make the generics of the Options interfaces more strict (#5461) · 5830c619
      Jurriaan Mous authored
      Summary:
      Make the generics of the Options interfaces more strict so they are usable in a Kotlin Multiplatform expect/actual typealias implementation without causing a Violation of Finite Bound Restriction.
      
      This fix would enable the creation of a generic Kotlin multiplatform library by just typealiasing the JVM implementation to the current Java implementation.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5461
      
      Differential Revision: D15903288
      
      Pulled By: sagar0
      
      fbshipit-source-id: 75e83fdf5d2fcede40744a17e767563d6a4b0696
    • Combine the read-ahead logic for user reads and compaction reads (#5431) · 24b118ad
      Vijay Nadimpalli authored
      Summary:
      Currently the read-ahead logic for user reads and compaction reads goes through different code paths, where compaction reads create new table readers and use `ReadaheadRandomAccessFile`. This change unifies the read-ahead logic to use read-ahead in BlockBasedTableReader::InitDataBlock(). As a result of the change, the `ReadaheadRandomAccessFile` class and the `new_table_reader_for_compaction_inputs` option will no longer be used.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5431
      
      Test Plan:
      make check
      
      Here is the benchmarking - https://gist.github.com/vjnadimpalli/083cf423f7b6aa12dcdb14c858bc18a5
      
      Differential Revision: D15772533
      
      Pulled By: vjnadimpalli
      
      fbshipit-source-id: b71dca710590471ede6fb37553388654e2e479b9
  3. June 19, 2019 (10 commits)
  4. June 18, 2019 (8 commits)
    • fix rocksdb lite and clang contrun test failures (#5477) · ddd088c8
      Zhongyi Xie authored
      Summary:
      recent commit 671d15cb introduced some test failures:
      ```
      ===== Running stats_history_test
      [==========] Running 9 tests from 1 test case.
      [----------] Global test environment set-up.
      [----------] 9 tests from StatsHistoryTest
      [ RUN      ] StatsHistoryTest.RunStatsDumpPeriodSec
      monitoring/stats_history_test.cc:63: Failure
      dbfull()->SetDBOptions({{"stats_dump_period_sec", "0"}})
      Not implemented: Not supported in ROCKSDB LITE
      
      db/db_options_test.cc:28:11: error: unused variable 'kMicrosInSec' [-Werror,-Wunused-const-variable]
      const int kMicrosInSec = 1000000;
      ```
      This PR fixes these failures.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5477
      
      Differential Revision: D15871814
      
      Pulled By: miasantreble
      
      fbshipit-source-id: 0a7023914d2c1784d9d2d3f5bfb47310d4855394
    • Block cache tracing: Fix minor bugs with downsampling and some benchmark results. (#5473) · bcfc53b4
      haoyuhuang authored
      Summary:
      As the code changes for block cache tracing are almost complete, I did a benchmark to compare the performance when block cache tracing is enabled/disabled.
      
      With a 1% downsampling ratio, the performance overhead of block cache tracing is negligible. When we trace all block accesses, throughput drops roughly 6-fold with 16 threads issuing random reads and all reads served from the block cache.
      
      Setup:
      RocksDB:    version 6.2
      Date:       Mon Jun 17 17:11:13 2019
      CPU:        24 * Intel Core Processor (Skylake)
      CPUCache:   16384 KB
      Keys:       20 bytes each
      Values:     100 bytes each (100 bytes after compression)
      Entries:    10000000
      Prefix:    20 bytes
      Keys per prefix:    0
      RawSize:    1144.4 MB (estimated)
      FileSize:   1144.4 MB (estimated)
      Write rate: 0 bytes/second
      Read rate: 0 ops/second
      Compression: NoCompression
      Compression sampling rate: 0
      Memtablerep: skip_list
      Perf Level: 1
      
      I ran the readrandom workload for 1 minute. Detailed throughput results:  (ops/second)
      Sample rate 0: no block cache tracing.
      Sample rate 1: trace all block accesses.
      Sample rate 100: trace accesses to 1% of blocks.
      1 thread (ops/second):
      Sample rate            | 0       | 1       | 100
      1 MB block cache size  | 13,094  | 13,166  | 13,341
      10 GB block cache size | 202,243 | 188,677 | 229,182
      
      16 threads (ops/second):
      Sample rate            | 0         | 1       | 100
      1 MB block cache size  | 208,761   | 178,700 | 201,872
      10 GB block cache size | 2,645,996 | 426,295 | 2,587,605
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5473
      
      Differential Revision: D15869479
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: 7ae802abe84811281a6af8649f489887cd7c4618
    • Support computing miss ratio curves using sim_cache. (#5449) · 2d1dd5bc
      haoyuhuang authored
      Summary:
      This PR adds a BlockCacheTraceSimulator that reports the miss ratios given different cache configurations. A cache configuration contains "cache_name,num_shard_bits,cache_capacities". For example, "lru, 1, 1K, 2K, 4M, 4G".
      
      When we replay the trace, we also perform lookups and inserts on the simulated caches.
      In the end, it reports the miss ratio for each tuple <cache_name, num_shard_bits, cache_capacity> in an output file.
      
      This PR also adds a block_cache_trace_analyzer main so that the analyzer can be run from the command line. (A sketch of parsing one cache configuration follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5449
      
      Test Plan:
      Added tests for block_cache_trace_analyzer.
      COMPILE_WITH_ASAN=1 make check -j32.
      
      Differential Revision: D15797073
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: aef0c5c2e7938f3e8b6a10d4a6a50e6928ecf408
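      A hypothetical sketch of parsing one cache-configuration string of the form "cache_name,num_shard_bits,cap1,cap2,..." described above; the analyzer's real parser and struct layout may differ:
      ```
      #include <cstdint>
      #include <sstream>
      #include <string>
      #include <vector>
      
      struct CacheConfiguration {
        std::string cache_name;                     // e.g. "lru"
        uint32_t num_shard_bits = 0;                // e.g. 1
        std::vector<std::string> cache_capacities;  // e.g. {"1K", "2K", "4M", "4G"}
      };
      
      CacheConfiguration ParseCacheConfig(const std::string& line) {
        CacheConfiguration config;
        std::stringstream ss(line);
        std::string field;
        std::getline(ss, field, ',');
        config.cache_name = field;
        std::getline(ss, field, ',');
        config.num_shard_bits = static_cast<uint32_t>(std::stoul(field));
        while (std::getline(ss, field, ',')) {
          config.cache_capacities.push_back(field);
        }
        return config;
      }
      // ParseCacheConfig("lru,1,1K,2K,4M,4G") yields one simulated cache per capacity.
      ```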
    • Override check consistency for DBImplSecondary (#5469) · 7d8d5641
      Yanqin Jin authored
      Summary:
      `DBImplSecondary` calls `CheckConsistency()` during open. In the past, `DBImplSecondary` did not override this function, thus `DBImpl::CheckConsistency()` was called.
      The following can happen: the secondary instance is performing a consistency check, which calls `GetFileSize(file_path)`, but the file at `file_path` has been deleted by the primary instance. `DBImpl::CheckConsistency` does not account for this and fails the consistency check, which is undesirable. The solution is to call `DBImpl::CheckConsistency()` first. If it passes, then we are good. If not, we give it a second chance and handle the case of file(s) being deleted. (A self-contained sketch of this two-pass check follows this entry.)
      
      Test plan (on dev server):
      ```
      $make clean && make -j20 all
      $./db_secondary_test
      ```
      All other existing unit tests must pass as well.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5469
      
      Differential Revision: D15861845
      
      Pulled By: riversand963
      
      fbshipit-source-id: 507d72392508caed3cd003bb2e2aa43f993dd597
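      A self-contained sketch, not DBImplSecondary's actual code, of the "second chance" logic described above:
      ```
      #include <functional>
      
      enum class Check { kOk, kFailed };
      
      // Run the strict base-class check first; only if it fails, re-run a laxer
      // check that tolerates files the primary has already deleted.
      Check CheckConsistencyWithSecondChance(
          const std::function<Check()>& strict_check,
          const std::function<Check()>& lax_check_tolerating_deletions) {
        if (strict_check() == Check::kOk) {
          return Check::kOk;                      // fast path: DBImpl's check passed
        }
        return lax_check_tolerating_deletions();  // secondary-only fallback
      }
      ```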
    • Persistent Stats: persist stats history to disk (#5046) · 671d15cb
      Zhongyi Xie authored
      Summary:
      This PR continues the work in https://github.com/facebook/rocksdb/pull/4748 and https://github.com/facebook/rocksdb/pull/4535 by adding a new DBOption, `persist_stats_to_disk`, which instructs RocksDB to persist stats history to RocksDB itself. When statistics are enabled and both `stats_persist_period_sec` and `persist_stats_to_disk` are set, RocksDB will periodically write stats to a built-in column family in the following form: key -> (timestamp in microseconds)#(stats name), value -> stats value. The existing API `GetStatsHistory` will detect the current value of `persist_stats_to_disk` and read either from the in-memory data structure or from the hidden column family on disk. (A usage sketch follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5046
      
      Differential Revision: D15863138
      
      Pulled By: miasantreble
      
      fbshipit-source-id: bb82abdb3f2ca581aa42531734ac799f113e931b
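      A hedged usage sketch for the options described above; the option and API names come from the summary, while the path and period chosen here are illustrative:
      ```
      #include <rocksdb/db.h>
      #include <rocksdb/options.h>
      #include <rocksdb/statistics.h>
      
      int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        options.statistics = rocksdb::CreateDBStatistics();
        options.stats_persist_period_sec = 600;  // snapshot stats every 10 minutes
        options.persist_stats_to_disk = true;    // persist into the hidden column family
      
        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/stats_persist_demo", &db);
        // With both options set, GetStatsHistory() reads the snapshots back from
        // disk instead of the in-memory buffer.
        delete db;
        return s.ok() ? 0 : 1;
      }
      ```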
    • Make db_bloom_filter_test parallel (#5467) · ee294c24
      Maysam Yabandeh authored
      Summary:
      When run under TSAN it sometimes goes over 10 minutes and times out. The slowest ones are the 6 instances of `DBBloomFilterTestWithParam.BloomFilter`. Making the tests run in parallel should take care of the timeout issue.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5467
      
      Differential Revision: D15856912
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 26c43c55312974c1b809c070342dee037d0219f4
    • Integrate block cache tracing into db_bench (#5459) · d43b4cd5
      haoyuhuang authored
      Summary:
      This PR integrates block cache tracing into db_bench. It adds three command-line arguments:
      -block_cache_trace_file (Block cache trace file path.) type: string default: ""
      -block_cache_trace_max_trace_file_size_in_bytes (The maximum block cache trace file size in bytes. Block cache accesses will not be logged if the trace file size exceeds this threshold. Default is 64 GB.) type: int64 default: 68719476736
      -block_cache_trace_sampling_frequency (Block cache trace sampling frequency, termed s. It uses spatial downsampling and samples accesses to one out of s blocks.) type: int32 default: 1
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5459
      
      Differential Revision: D15832031
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: 0ecf2f2686557251fe741a2769b21170777efa3d
    • Switch Travis to Xenial build (#4789) · d1ae67bd
      Adam Retter authored
      Summary:
      I think this should now also run on Travis's new virtualised infrastructure which affords more memory and CPU.
      
      We also need to think about migrating from travis-ci.org to travis-ci.com.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4789
      
      Differential Revision: D15856272
      
      fbshipit-source-id: 10b41d21924e8a362bc9646a63ccd1a5dfc437c6
  5. June 15, 2019 (5 commits)
    • Integrate block cache tracer in block based table reader. (#5441) · 7a8d7358
      haoyuhuang authored
      Summary:
      This PR integrates the block cache tracer into block based table reader. The tracer will write the block cache accesses using the trace_writer. The tracer is null in this PR so that nothing will be logged.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5441
      
      Differential Revision: D15772029
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: a64adb92642cd23222e0ba8b10d86bf522b42f9b
    • Validate CF Options when creating a new column family (#5453) · f1219644
      Sagar Vemuri authored
      Summary:
      It seems like CF Options are not properly validated when creating a new column family with the `CreateColumnFamily` API; only a select few checks are done. Calling `ColumnFamilyData::ValidateOptions`, which is the single source for all CFOptions validations, will help fix this. (`ColumnFamilyData::ValidateOptions` is already called at the time of `DB::Open`.) A hedged usage illustration follows this entry.
      
      **Test Plan:**
      Added a new test: `DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions`
      ```
      TEST_TMPDIR=/dev/shm ./db_test --gtest_filter=DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions
      ```
      Also ran gtest-parallel to make sure the new test is not flaky.
      ```
      TEST_TMPDIR=/dev/shm ~/gtest-parallel/gtest-parallel ./db_test --gtest_filter=DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions --repeat=10000
      [10000/10000] DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions (15 ms)
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5453
      
      Differential Revision: D15816851
      
      Pulled By: sagar0
      
      fbshipit-source-id: 9e702b9850f5c4a7e0ef8d39e1e6f9b81e7fe1e5
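      A hedged illustration of the behavior this change enforces; the column family name, path, and option setup here are illustrative, not the combination used by the new test:
      ```
      #include <rocksdb/db.h>
      #include <rocksdb/options.h>
      
      int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        rocksdb::DB* db = nullptr;
        if (!rocksdb::DB::Open(options, "/tmp/validate_cf_demo", &db).ok()) return 1;
      
        rocksdb::ColumnFamilyOptions cf_opts;
        // ... set whatever combination ColumnFamilyData::ValidateOptions rejects ...
      
        rocksdb::ColumnFamilyHandle* handle = nullptr;
        rocksdb::Status s = db->CreateColumnFamily(cf_opts, "new_cf", &handle);
        if (!s.ok()) {
          // After this PR, incompatible CF options surface here as a non-OK status
          // instead of being accepted silently.
        }
        delete handle;  // deleting a null handle is safe if creation failed
        delete db;
        return 0;
      }
      ```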
    • fix compilation error on MSVC (#5458) · b47cfec5
      Huisheng Liu authored
      Summary:
      "__attribute__((__weak__))" was introduced in port/jemalloc_helper.h. It is not supported by Microsoft VS 2015, resulting in a compile error. This fix adds a #if branch to work around the compile issue. (An illustrative sketch of this kind of guard follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5458
      
      Differential Revision: D15827285
      
      fbshipit-source-id: 8c5f7ad31de1ac677bd96f16c4450767de834beb
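      A sketch of the kind of guard described above, using a hypothetical JEMALLOC_WEAK macro; it is not the exact code in port/jemalloc_helper.h:
      ```
      #include <cstddef>
      
      // MSVC does not understand GCC/Clang's weak-symbol attribute, so only
      // expand it on toolchains that support it.
      #if defined(_MSC_VER)
      #define JEMALLOC_WEAK  /* empty on MSVC */
      #else
      #define JEMALLOC_WEAK __attribute__((__weak__))
      #endif
      
      // Example declaration using the guard (jemalloc's mallocx):
      extern "C" JEMALLOC_WEAK void* mallocx(std::size_t size, int flags);
      ```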
    • Set executeLocal on child lego jobs (#5456) · 58c78358
      Maysam Yabandeh authored
      Summary:
      This property is needed to run the child jobs on the same host and thus propagate the child job status back to the parent.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5456
      
      Reviewed By: yancouto
      
      Differential Revision: D15824382
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 42f2efbedaa3a8b399281105f0ce793c1c9a6191
    • Remove unused variable (#5457) · 89695bfb
      haoyuhuang authored
      Summary:
      This PR removes the unused variable that causes CLANG build to fail.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5457
      
      Differential Revision: D15825027
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: 72c847c39ca310560efcbc5938cffa6f31164068
  6. June 14, 2019 (5 commits)
  7. June 13, 2019 (4 commits)
  8. June 12, 2019 (1 commit)
    • WritePrepared: switch PreparedHeap from priority_queue to deque (#5436) · 773f914a
      Maysam Yabandeh authored
      Summary:
      Internally, PreparedHeap currently uses a priority_queue. The rationale was that in the initial design PreparedHeap::AddPrepared could be called in arbitrary order. With the recent optimizations, we call ::AddPrepared only from the main write queue, which results in in-order insertion into PreparedHeap. The patch thus replaces the underlying priority_queue with a more efficient deque implementation. (An illustrative sketch follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5436
      
      Differential Revision: D15752147
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: e6960f2b2097e13137dded1ceeff3b10b03b0aeb
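      An illustrative sketch, not the actual PreparedHeap, of why a deque suffices once insertions arrive in order: the front of the deque is always the minimum, so no sift-up or sift-down is needed (out-of-order erases, which the real implementation also handles, are omitted here):
      ```
      #include <cassert>
      #include <cstdint>
      #include <deque>
      
      class OrderedPreparedSet {
       public:
        void AddPrepared(uint64_t seq) {
          assert(heap_.empty() || heap_.back() <= seq);  // in-order insertion only
          heap_.push_back(seq);                          // O(1), no heapify
        }
        uint64_t top() const { return heap_.front(); }   // smallest prepared seq
        void pop() { heap_.pop_front(); }
        bool empty() const { return heap_.empty(); }
      
       private:
        std::deque<uint64_t> heap_;
      };
      ```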