1. 04 Feb 2020, 4 commits
    • A
      Fix a test failure in error_handler_test (#6367) · 7330ec0f
      Committed by anand76
      Summary:
      Fix an intermittent failure in
      DBErrorHandlingTest.CompactionManifestWriteError due to a race between
      background error recovery and the main test thread calling
      TEST_WaitForCompact().
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6367
      
      Test Plan: Run the test using gtest_parallel
      
      Differential Revision: D19713802
      
      Pulled By: anand1976
      
      fbshipit-source-id: 29e35dc26e0984fe8334c083e059f4fa1f335d68
      7330ec0f
    • S
      Use ReadFileToString() to get content from IDENTITY file (#6365) · f195d8d5
      Committed by sdong
      Summary:
      Right now, when reading the IDENTITY file, we use logic very similar to ReadFileToString(), except that it does an extra file size check, which may be expensive in some file systems. There is no reason to duplicate the logic. Use ReadFileToString() instead.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6365
      
      Test Plan: Run all existing tests.
      
      Differential Revision: D19709399
      
      fbshipit-source-id: 3bac31f3b2471f98a0d2694278b41e9cd34040fe
      f195d8d5
    • S
      Avoid create directory for every column families (#6358) · 36c504be
      Committed by sdong
      Summary:
      A relatively recent regression causes the DB directory to be created and opened once for every column family, unless the CF has a private directory. This doesn't scale well with a large number of column families.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6358
      
      Test Plan: Run all existing tests and see them pass. strace db_bench with --num_column_families and observe that it doesn't open the directory once per column family.
      
      Differential Revision: D19675141
      
      fbshipit-source-id: da01d9216f1dae3f03d4064fbd88ce71245bd9be
      36c504be
    • H
      Error handler test fix (#6266) · eb4d6af5
      Committed by Huisheng Liu
      Summary:
      MultiDBCompactionError fails when it verifies the number of files on level 0 and level 1 without waiting for compaction to finish.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6266
      
      Differential Revision: D19701639
      
      Pulled By: riversand963
      
      fbshipit-source-id: e96d511bcde705075f073e0b550cebcd2ecfccdc
      eb4d6af5
  2. 01 Feb 2020, 2 commits
    • S
      Fix DBTest2.ChangePrefixExtractor LITE build (#6356) · 800d24dd
      Committed by sdong
      Summary:
      DBTest2.ChangePrefixExtractor fails in the LITE build because the LITE build doesn't support adaptive build. Fix it by removing the stats check and only checking correctness.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6356
      
      Test Plan: Run the test with both LITE and non-LITE builds.
      
      Differential Revision: D19669537
      
      fbshipit-source-id: 6d7dd6c8a79f18e80ca1636864b9c71922030d8e
      800d24dd
    • S
      Add a unit test for prefix extractor changes (#6323) · ec496347
      Committed by sdong
      Summary:
      Add a unit test for prefix extractor changes, including a check that fails due to a bug.
      Also comment out the partitioned filter case, which would fail the test too.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6323
      
      Test Plan: Run the test and it passes (and fails if the SeekForPrev() part is uncommented)
      
      Differential Revision: D19509744
      
      fbshipit-source-id: 678202ca97b5503e9de73b54b90de9e5ba822b72
      ec496347
  3. 31 Jan 2020, 3 commits
    • M
      Disable recycle_log_file_num when it is incompatible with recovery mode (#6351) · 3316d292
      Committed by Maysam Yabandeh
      Summary:
      A non-zero recycle_log_file_num is incompatible with the kPointInTimeRecovery and kAbsoluteConsistency recovery modes. Currently, SanitizeOptions changes the recovery mode to kTolerateCorruptedTailRecords; to resolve this option conflict, it makes more sense to compromise recycle_log_file_num, which is a performance feature, rather than wal_recovery_mode, which is a safety feature.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6351
      
      Differential Revision: D19648931
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: dd0bf78349edc007518a00c4d63931fd69294ad7
      3316d292
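      The option conflict above can be sketched as a sanitization step. The enum and struct below are simplified stand-ins for the real RocksDB options, not the actual SanitizeOptions implementation:

      ```cpp
      #include <cassert>
      #include <cstdint>

      // Simplified stand-ins for the RocksDB types involved.
      enum class WALRecoveryMode {
        kTolerateCorruptedTailRecords,
        kAbsoluteConsistency,
        kPointInTimeRecovery,
        kSkipAnyCorruptedRecords
      };

      struct Options {
        uint64_t recycle_log_file_num = 0;
        WALRecoveryMode wal_recovery_mode = WALRecoveryMode::kPointInTimeRecovery;
      };

      // When WAL recycling conflicts with a strict recovery mode, give up the
      // performance feature (recycling), not the safety feature (recovery mode).
      void SanitizeRecycleOption(Options* opts) {
        if (opts->recycle_log_file_num > 0 &&
            (opts->wal_recovery_mode == WALRecoveryMode::kPointInTimeRecovery ||
             opts->wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency)) {
          opts->recycle_log_file_num = 0;  // keep wal_recovery_mode untouched
        }
      }
      ```

      The design point is the direction of the compromise: the sanitizer mutates the performance knob, never the safety one.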
    • Y
      Shorten certain test names to avoid infra failure (#6352) · f2fbc5d6
      Committed by Yanqin Jin
      Summary:
      Unit test names, together with other components, are used to create log files
      during some internal testing. Overly long names cause infra failures due to file
      names being too long.
      
      Look for internal tests.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6352
      
      Differential Revision: D19649307
      
      Pulled By: riversand963
      
      fbshipit-source-id: 6f29de096e33c0eaa87d9c8702f810eda50059e7
      f2fbc5d6
    • A
      Force a new manifest file if append to current one fails (#6331) · fb05b5a6
      Committed by anand76
      Summary:
      Fix for issue https://github.com/facebook/rocksdb/issues/6316
      
      When an append/sync of the manifest file fails due to an IO error such
      as NoSpace, we don't always put the DB in read-only mode. This is true
      for flushes and compactions, as well as foreground operations such as column family
      add/drop, CompactFiles, etc. Subsequent changes to the DB will be
      recorded in the same manifest file, which would have a corrupted record
      in the middle due to the previous failure. On next DB::Open(), it will
      fail to process the full manifest and data will be lost.
      
      To fix this, we reset VersionSet::descriptor_log_ on append/sync
      failure, which will force a new manifest file to be written on the next
      append.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6331
      
      Test Plan: Add new unit tests in error_handler_test.cc
      
      Differential Revision: D19632951
      
      Pulled By: anand1976
      
      fbshipit-source-id: 68d527cb6e59a94cbbbf9f5a17a7f464381d51e3
      fb05b5a6
  4. 30 Jan 2020, 3 commits
    • S
      Fix LITE build with DBTest2.AutoPrefixMode1 (#6346) · 71874c5a
      Committed by sdong
      Summary:
      DBTest2.AutoPrefixMode1 doesn't pass because auto prefix mode is not supported there.
      Fix it by disabling the test.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6346
      
      Test Plan: Run DBTest2.AutoPrefixMode1 in lite mode
      
      Differential Revision: D19627486
      
      fbshipit-source-id: fbde75260aeecb7e6fc406e09c19a71a95aa5f08
      71874c5a
    • S
      Fix db_bloom_filter_test clang LITE build (#6340) · 02ac6c9a
      Committed by sdong
      Summary:
      db_bloom_filter_test breaks with the clang LITE build with the following message:
      
      db/db_bloom_filter_test.cc:23:29: error: unused variable 'kPlainTable' [-Werror,-Wunused-const-variable]
      static constexpr PseudoMode kPlainTable = -1;
                                  ^
      
      Fix it by moving the declaration out of the LITE build.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6340
      
      Test Plan:
      USE_CLANG=1 LITE=1 make db_bloom_filter_test
      and the same without LITE=1
      
      Differential Revision: D19609834
      
      fbshipit-source-id: 0e88f5c6759238a94f9880d84c785ac18e7cdd7e
      02ac6c9a
    • M
      Double Crash in kPointInTimeRecovery with TransactionDB (#6313) · 2f973ca9
      Committed by Maysam Yabandeh
      Summary:
      In WritePrepared there could be gaps in sequence numbers. This breaks the trick we use in kPointInTimeRecovery, which assumes the first seq in the log right after the corrupted log is one larger than the last seq we read from the logs. To let this trick keep working, we add a dummy entry with the expected sequence to the first log right after recovery.
      Also in WriteCommitted, if the log right after the corrupted log is empty, it is treated as unexpected behavior, since there is no sequence number to let the sequential trick work. This is, however, expected to happen if we close the db after recovering from a corruption and before writing anything new to it. To remedy that, we apply the same technique by writing a dummy entry to the log that is created after the corrupted log.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6313
      
      Differential Revision: D19458291
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 09bc49e574690085df45b034ca863ff315937e2d
      2f973ca9
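      A minimal model of the sequence-continuity trick and the dummy-entry remedy described above; all names here are illustrative, not the real RocksDB recovery internals:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <vector>

      // The point-in-time recovery "sequence trick": the first sequence number
      // in the log after the corruption point must be exactly one larger than
      // the last sequence recovered from the earlier logs.
      bool SequenceContinuityHolds(uint64_t last_recovered_seq,
                                   const std::vector<uint64_t>& next_log_seqs) {
        if (next_log_seqs.empty()) return false;  // empty log: trick cannot work
        return next_log_seqs.front() == last_recovered_seq + 1;
      }

      // Remedy sketched in the patch: after recovery, write a dummy entry
      // carrying the expected sequence into the freshly created log, so a later
      // recovery sees a contiguous sequence even if real writes left a gap
      // (WritePrepared) or never happened (WriteCommitted, DB closed right
      // after recovery).
      void AppendDummyEntry(uint64_t last_recovered_seq,
                            std::vector<uint64_t>* new_log_seqs) {
        new_log_seqs->push_back(last_recovered_seq + 1);
      }
      ```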
  5. 29 Jan 2020, 1 commit
    • S
      Add ReadOptions.auto_prefix_mode (#6314) · 8f2bee67
      Committed by sdong
      Summary:
      Add a new option, ReadOptions.auto_prefix_mode. When set to true, the iterator should return the same result as a total order seek, but may choose to do a prefix seek internally, based on iterator upper bounds. Also fix two previous bugs when handling prefix extractor changes: (1) a reverse iterator should not rely on the upper bound to determine the prefix; fix it by skipping the prefix check. (2) The block-based filter is not handled properly.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6314
      
      Test Plan: (1) add a unit test; (2) add the check to the stress test and run it to see whether it can pass at least one run.
      
      Differential Revision: D19458717
      
      fbshipit-source-id: 51c1bcc5cdd826c2469af201979a39600e779bce
      8f2bee67
  6. 28 Jan 2020, 3 commits
    • S
      Use the same oldest ancestor time in table properties and manifest · 4f6c8622
      Committed by Sagar Vemuri
      Summary:
      Without the fix, ./db_compaction_test DBCompactionTest.LevelTtlCascadingCompactions passed 96 / 100 times. With the fix, all runs (tried 100, 1000, 10000) succeed:
      ```
      $ TEST_TMPDIR=/dev/shm ~/gtest-parallel/gtest-parallel ./db_compaction_test --gtest_filter=DBCompactionTest.LevelTtlCascadingCompactions --repeat=1000
      [1000/1000] DBCompactionTest.LevelTtlCascadingCompactions (1895 ms)
      ```
      
      Test Plan:
      Build:
      ```
      COMPILE_WITH_TSAN=1 make db_compaction_test -j100
      ```
      Without the fix: a few runs out of 100 fail:
      ```
      $ TEST_TMPDIR=/dev/shm KEEP_DB=1 ~/gtest-parallel/gtest-parallel ./db_compaction_test --gtest_filter=DBCompactionTest.LevelTtlCascadingCompactions --repeat=100
      ...
      ...
      Note: Google Test filter = DBCompactionTest.LevelTtlCascadingCompactions
      [==========] Running 1 test from 1 test case.
      [----------] Global test environment set-up.
      [----------] 1 test from DBCompactionTest
      [ RUN      ] DBCompactionTest.LevelTtlCascadingCompactions
      db/db_compaction_test.cc:3687: Failure
      Expected equality of these values:
        oldest_time
          Which is: 1580155869
        level_to_files[6][0].oldest_ancester_time
          Which is: 1580155870
      DB is still at /dev/shm//db_compaction_test_6337001442947696266
      [  FAILED  ] DBCompactionTest.LevelTtlCascadingCompactions (1432 ms)
      [----------] 1 test from DBCompactionTest (1432 ms total)
      
      [----------] Global test environment tear-down
      [==========] 1 test from 1 test case ran. (1433 ms total)
      [  PASSED  ] 0 tests.
      [  FAILED  ] 1 test, listed below:
      [  FAILED  ] DBCompactionTest.LevelTtlCascadingCompactions
      
       1 FAILED TEST
      [80/100] DBCompactionTest.LevelTtlCascadingCompactions returned/aborted with exit code 1 (1489 ms)
      [100/100] DBCompactionTest.LevelTtlCascadingCompactions (1522 ms)
      FAILED TESTS (4/100):
          1419 ms: ./db_compaction_test DBCompactionTest.LevelTtlCascadingCompactions (try https://github.com/facebook/rocksdb/issues/90)
          1434 ms: ./db_compaction_test DBCompactionTest.LevelTtlCascadingCompactions (try https://github.com/facebook/rocksdb/issues/84)
          1457 ms: ./db_compaction_test DBCompactionTest.LevelTtlCascadingCompactions (try https://github.com/facebook/rocksdb/issues/82)
          1489 ms: ./db_compaction_test DBCompactionTest.LevelTtlCascadingCompactions (try https://github.com/facebook/rocksdb/issues/74)
      ```
      
      Differential Revision: D19587040
      
      Pulled By: sagar0
      
      fbshipit-source-id: 11191ae9940837643bff47ebe18b299b4be3d950
      4f6c8622
    • A
      fix `WriteBufferManager` flush log message (#6335) · 5b33cfa1
      Committed by Andrew Kryczka
      Summary:
      It chooses the oldest memtable, not the largest one. This is an
      important difference for users whose CFs receive non-uniform write
      rates.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6335
      
      Differential Revision: D19588865
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 62ad4325b0182f5f27858584cd73fd5978fb2cec
      5b33cfa1
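      The distinction the corrected log message makes (oldest, not largest) can be sketched as a victim-selection function; the struct and field names are hypothetical, not the actual WriteBufferManager code:

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Hypothetical stand-in for a memtable: creation order vs. data size.
      struct MemtableInfo {
        uint64_t creation_seq;  // smaller = older
        size_t data_size;
      };

      // Victim selection as described above: the oldest memtable is chosen for
      // flush, regardless of how large the others are.
      size_t PickFlushVictim(const std::vector<MemtableInfo>& tables) {
        size_t victim = 0;
        for (size_t i = 1; i < tables.size(); ++i) {
          if (tables[i].creation_seq < tables[victim].creation_seq) victim = i;
        }
        return victim;
      }
      ```

      Under non-uniform write rates a small, old memtable in a cold CF is flushed before a large, young one in a hot CF, which is exactly why the distinction matters to users.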
    • S
      Fix regression bug of hash index with iterator total order seek (#6328) · f10f1359
      Committed by sdong
      Summary:
      https://github.com/facebook/rocksdb/pull/6028 introduces a bug for the hash index in SST files. If a table reader is created while total order seek is used, prefix_extractor might be passed into the table reader as null. Later, when a prefix seek uses the same table reader, the hash index is checked, but the prefix extractor is null, and the program would crash.
      Fix the issue by amending http://github.com/facebook/rocksdb/pull/6028 so that prefix_extractor is preserved and ReadOptions.total_order_seek is checked instead.
      
      Also, a null pointer check is added so that a bug like this won't cause a segfault in the future.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6328
      
      Test Plan: Add a unit test that would fail without the fix. Stress test that reproduces the crash would pass.
      
      Differential Revision: D19586751
      
      fbshipit-source-id: 8de77690167ddf5a77a01e167cf89430b1bfba42
      f10f1359
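      The null pointer check added by the fix can be sketched as a guard like the following; the names are illustrative, not the actual RocksDB code:

      ```cpp
      #include <cassert>

      struct SliceTransform {};  // stand-in for the prefix extractor interface

      // The hash index may only be consulted when a prefix extractor is
      // actually present and the read is not in total-order-seek mode.
      bool CanUseHashIndex(const SliceTransform* prefix_extractor,
                           bool total_order_seek) {
        if (total_order_seek) return false;  // caller asked for total order
        if (prefix_extractor == nullptr) {
          return false;  // the null check that prevents the segfault
        }
        return true;
      }
      ```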
  7. 24 Jan 2020, 2 commits
    • L
      Fix the "records dropped" statistics (#6325) · f34782a6
      Committed by Levi Tamasi
      Summary:
      The earlier code used two conflicting definitions for the number of
      input records going into a compaction, one based on the
      `rocksdb.num.entries` table property and one based on
      `CompactionIterationStats`. The first one is correct and in line
      with how output records are counted, while the second one incorrectly
      ignores input records in various cases when the `CompactionIterator`
      advances or reseeks the input iterator (this can happen, amongst other
      cases, when dealing with `SingleDelete`s, regular `Delete`s, `Merge`s,
      and compaction filters). This can result in the code undercounting the
      input records and computing an incorrect value for "records dropped"
      during the compaction. The patch fixes this by switching over to the
      correct (table property based) input record count for "records dropped".
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6325
      
      Test Plan: Tested using `make check` and `db_bench`.
      
      Differential Revision: D19525491
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 4340b0b2f41546db8e356db70ca02199e48fa636
      f34782a6
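      The corrected computation boils down to subtracting output records from the table-property-based input count; a minimal sketch with an illustrative function name:

      ```cpp
      #include <cassert>
      #include <cstdint>

      // "Records dropped" for a compaction, computed as described above: total
      // input records (from the rocksdb.num.entries table property of the input
      // files) minus the records written to the outputs.
      uint64_t RecordsDropped(uint64_t input_records_from_table_property,
                              uint64_t output_records) {
        return input_records_from_table_property - output_records;
      }
      ```

      The failure mode of the old code follows directly: if the iterator-based stats undercount the input (say 90 instead of the true 100, because advances on SingleDelete/Merge entries were not counted), the dropped count comes out 10 too low.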
    • A
      Fix queue manipulation in WriteThread::BeginWriteStall() (#6322) · 0672a6db
      Committed by anand76
      Summary:
      When there is a write stall, the active write group leader calls ```BeginWriteStall()``` to walk the queue of writers and remove any with the ```no_slowdown``` option set. There was a bug in the code which updated the back pointer but not the forward pointer (```link_newer```), corrupting the list and causing some threads to wait forever. This PR fixes it.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6322
      
      Test Plan: Add a unit test in db_write_test
      
      Differential Revision: D19538313
      
      Pulled By: anand1976
      
      fbshipit-source-id: 6fbed819e594913f435886606f5d36f74f235c3a
      0672a6db
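      The bug and fix can be modeled on a plain doubly linked list with the same link_older/link_newer naming; this is a simplified sketch, not the actual WriteThread code:

      ```cpp
      #include <cassert>

      // Minimal model of the writer queue: link_older points toward the tail,
      // link_newer toward the head.
      struct Writer {
        bool no_slowdown = false;
        Writer* link_older = nullptr;
        Writer* link_newer = nullptr;
      };

      // Remove a writer from the queue, fixing BOTH neighbors' pointers.
      // Updating only link_older (the original bug) leaves a stale link_newer
      // chain, so threads traversing it can wait forever.
      void Unlink(Writer* w, Writer** newest) {
        if (w->link_newer != nullptr) {
          w->link_newer->link_older = w->link_older;
        } else {
          *newest = w->link_older;  // w was the newest writer
        }
        if (w->link_older != nullptr) {
          w->link_older->link_newer = w->link_newer;
        }
        w->link_older = w->link_newer = nullptr;
      }
      ```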
  8. 22 Jan 2020, 2 commits
    • M
      Correct pragma once problem with Bazel on Windows (#6321) · e6e8b9e8
      Committed by matthewvon
      Summary:
      This is a simple edit to make two #include file paths within range_del_aggregator.{h,cc} consistent with everywhere else.
      
      The impact of this inconsistency is that it actually breaks a Bazel-based build on the Windows platform. The same pragma once failure occurs with both Windows Visual C++ 2019 and clang for Windows 9.0. Bazel's "sandboxing" of the builds causes both compilers to not recognize "rocksdb/types.h" and "include/rocksdb/types.h" as the same file (also comparator.h). My guess is that the backslash versus forward slash mixing within path names is the underlying issue.
      
      But, everything builds fine once the include paths in these two source files are consistent with the rest of the repository.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6321
      
      Differential Revision: D19506585
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 294c346607edc433ab99eaabc9c880ee7426817a
      e6e8b9e8
    • L
      Make DBCompactionTest.SkipStatsUpdateTest more robust (#6306) · d305f13e
      Committed by Levi Tamasi
      Summary:
      Currently, this test case tries to infer whether
      `VersionStorageInfo::UpdateAccumulatedStats` was called during open by
      checking the number of files opened against an arbitrary threshold (10).
      This makes the test brittle and results in sporadic failures. The patch
      changes the test case to use sync points to directly test whether
      `UpdateAccumulatedStats` was called.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6306
      
      Test Plan: `make check`
      
      Differential Revision: D19439544
      
      Pulled By: ltamasi
      
      fbshipit-source-id: ceb7adf578222636a0f51740872d0278cd1a914f
      d305f13e
  9. 18 Jan 2020, 2 commits
  10. 17 Jan 2020, 3 commits
  11. 16 Jan 2020, 1 commit
    • S
      Fix kHashSearch bug with SeekForPrev (#6297) · d2b4d42d
      Committed by sdong
      Summary:
      When the prefix is enabled, the expected behavior when the prefix of the target does not exist is for Seek to land on any key larger than the target and for SeekForPrev to land on any key less than the target.
      Currently, the prefix index (kHashSearch) returns OK status but sets Invalid() to indicate two cases: (i) a prefix of the searched key does not exist, (ii) the key is beyond the range of the keys in the SST file. The SeekForPrev implementation in BlockBasedTable thus does not have enough information to know when it should set the index key to first (to return a key smaller than the target). The patch fixes that by returning NotFound status for cases where the prefix does not exist. SeekForPrev in BlockBasedTable accordingly does SeekToFirst instead of SeekToLast on the index iterator.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6297
      
      Test Plan: SeekForPrev of a non-existing prefix is added to block_test.cc, and a test case is added in db_test2, which fails without the fix.
      
      Differential Revision: D19404695
      
      fbshipit-source-id: cafbbf95f8f60ff9ede9ccc99d25bfa1cf6fcdc3
      d2b4d42d
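      The intended Seek/SeekForPrev contract can be illustrated on a plain sorted key list, ignoring the index machinery; a minimal sketch with illustrative names:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <string>
      #include <vector>

      // The contract described above: SeekForPrev lands on the last key <=
      // target, even when no key shares the target's prefix. Returns the
      // matched key, or "" when every key is greater than the target.
      std::string SeekForPrev(const std::vector<std::string>& sorted_keys,
                              const std::string& target) {
        auto it =
            std::upper_bound(sorted_keys.begin(), sorted_keys.end(), target);
        if (it == sorted_keys.begin()) return "";  // everything is > target
        return *(it - 1);
      }
      ```

      With keys {"aaa1", "aaa2", "ccc1"} and target "bbb0" (prefix "bbb" absent), the correct answer is "aaa2"; the pre-fix index could not distinguish this case from "no key at all".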
  12. 15 Jan 2020, 1 commit
  13. 14 Jan 2020, 1 commit
    • S
      Bug when multiple files at one level contains the same smallest key (#6285) · 894c6d21
      Committed by sdong
      Summary:
      The fractional cascading index is not correctly generated when two files at the same level contain the same smallest or largest user key.
      The result is that it hits an assertion in debug mode, and lower-level files might be skipped.
      This might cause wrong results when the same user keys are merge operands and Get() is called using the exact user key. In that case, the lower files would need to be further checked.
      The fix is to fix the fractional cascading index.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6285
      
      Test Plan: Add a unit test which would cause the assertion which would be fixed.
      
      Differential Revision: D19358426
      
      fbshipit-source-id: 39b2b1558075fd95e99491d462a67f9f2298c48e
      894c6d21
  14. 11 Jan 2020, 3 commits
    • Q
      More const pointers in C API (#6283) · 6733be03
      Committed by Qinfan Wu
      Summary:
      This makes it easier to call the functions from Rust as otherwise they require mutable types.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6283
      
      Differential Revision: D19349991
      
      Pulled By: wqfish
      
      fbshipit-source-id: e8da7a75efe8cd97757baef8ca844a054f2519b4
      6733be03
    • S
      Consider all compaction input files to compute the oldest ancestor time (#6279) · cfa58561
      Committed by Sagar Vemuri
      Summary:
      Look at all compaction input files to compute the oldest ancestor time.
      
      In https://github.com/facebook/rocksdb/issues/5992 we changed how the creation_time (aka oldest-ancestor-time) table property of compaction output files is computed, from max(creation-time-of-all-compaction-inputs) to min(creation-time-of-all-inputs). This exposed a bug where, during compaction, only the creation_time values of the L0 compaction inputs were being looked at, and all other input levels were being ignored. This PR fixes the issue.
      Some TTL compactions when using level-style compaction might not have run due to this bug.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6279
      
      Test Plan: Enhanced the unit tests to validate that the correct time is propagated to the compaction outputs.
      
      Differential Revision: D19337812
      
      Pulled By: sagar0
      
      fbshipit-source-id: edf8a72f11e405e93032ff5f45590816debe0bb4
      cfa58561
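      The fixed computation, sketched over hypothetical per-level lists of file creation times (the minimum over all input levels, not just L0):

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <cstdint>
      #include <vector>

      // Oldest ancestor time for a compaction's output, per the fix above:
      // take the minimum creation time over ALL input files on ALL input
      // levels. Input: one vector of file creation times per input level.
      uint64_t OldestAncestorTime(
          const std::vector<std::vector<uint64_t>>& inputs_per_level) {
        uint64_t oldest = UINT64_MAX;
        for (const auto& level : inputs_per_level) {
          for (uint64_t t : level) oldest = std::min(oldest, t);
        }
        return oldest;
      }
      ```

      With L0 inputs {100, 90} and L1 inputs {50, 80}, the buggy L0-only scan would report 90, while the true oldest ancestor time is 50, which is why some TTL compactions never triggered.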
    • M
      unordered_write incompatible with max_successive_merges (#6284) · eff5e076
      Committed by Maysam Yabandeh
      Summary:
      unordered_write is incompatible with non-zero max_successive_merges. Although we check this at runtime, we currently don't prevent the user from setting this combination in options. This has led stress tests to fail when this combination is tried in ::SetOptions.
      The patch fixes that and also reverts the changes performed by https://github.com/facebook/rocksdb/pull/6254, in which max_successive_merges was mistakenly declared incompatible with unordered_write.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6284
      
      Differential Revision: D19356115
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: f06dadec777622bd75f267361c022735cf8cecb6
      eff5e076
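      A minimal sketch of the up-front compatibility check, with an illustrative function name (not the actual RocksDB validation code):

      ```cpp
      #include <cstdint>
      #include <string>

      // Reject the incompatible option combination up front, instead of only
      // detecting it at write time. Returns an error message, or "" when the
      // options are compatible.
      std::string ValidateWriteOptions(bool unordered_write,
                                       uint64_t max_successive_merges) {
        if (unordered_write && max_successive_merges > 0) {
          return "unordered_write is incompatible with "
                 "max_successive_merges > 0";
        }
        return "";
      }
      ```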
  15. 10 Jan 2020, 1 commit
  16. 09 Jan 2020, 2 commits
  17. 08 Jan 2020, 3 commits
    • Y
      Fix test in LITE mode (#6267) · a8b1085a
      Committed by Yanqin Jin
      Summary:
      Currently, the recently-added test DBTest2.SwitchMemtableRaceWithNewManifest
      fails in LITE mode since SetOptions() returns "Not supported". I do not want to
      put `#ifndef ROCKSDB_LITE` because it reduces test coverage. Instead, just
      trigger compaction on a different column family. The bg compaction thread
      calling LogAndApply() may race with thread calling SwitchMemtable().
      
      Test Plan (dev server):
      make check
      OPT=-DROCKSDB_LITE make check
      
      or run DBTest2.SwitchMemtableRaceWithNewManifest 100 times.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6267
      
      Differential Revision: D19301309
      
      Pulled By: riversand963
      
      fbshipit-source-id: 88cedcca2f985968ed3bb234d324ffa2aa04ca50
      a8b1085a
    • Y
      Fix error message (#6264) · bce5189f
      Committed by Yanqin Jin
      Summary:
      Fix an error message when CURRENT is not found.
      
      Test plan (dev server)
      ```
      make check
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6264
      
      Differential Revision: D19300699
      
      Pulled By: riversand963
      
      fbshipit-source-id: 303fa206386a125960ecca1dbdeff07422690caf
      bce5189f
    • C
      Add oldest snapshot sequence property (#6228) · 3e26a94b
      Committed by Connor1996
      Summary:
      Add oldest snapshot sequence property, so we can use `db.GetProperty("rocksdb.oldest-snapshot-sequence")` to get the sequence number of the oldest snapshot.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6228
      
      Differential Revision: D19264145
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 67fbe5304d89cbc475bd404e30d1299f7b11c010
      3e26a94b
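      What the property reports can be modeled with a simple tracker of live snapshot sequence numbers; this toy class is illustrative, not RocksDB's actual snapshot list:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <set>

      // Track live snapshot sequence numbers and report the oldest one
      // (0 when no snapshot is live), mimicking what
      // "rocksdb.oldest-snapshot-sequence" exposes.
      class SnapshotTracker {
       public:
        void Acquire(uint64_t seq) { live_.insert(seq); }
        void Release(uint64_t seq) { live_.erase(live_.find(seq)); }
        uint64_t OldestSnapshotSequence() const {
          return live_.empty() ? 0 : *live_.begin();
        }

       private:
        std::multiset<uint64_t> live_;  // multiset: snapshots may share a seq
      };
      ```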
  18. 07 Jan 2020, 3 commits
    • Y
      Fix a data race for cfd->log_number_ (#6249) · 1aaa1458
      Committed by Yanqin Jin
      Summary:
      A thread calling LogAndApply may release db mutex when calling
      WriteCurrentStateToManifest() which reads cfd->log_number_. Another thread can
      call SwitchMemtable() and writes to cfd->log_number_.
      The solution is to cache cfd->log_number_ before releasing the mutex in
      LogAndApply.
      
      Test Plan (on devserver):
      ```
      $ COMPILE_WITH_TSAN=1 make db_stress
      $./db_stress --acquire_snapshot_one_in=10000 --avoid_unnecessary_blocking_io=1 --block_size=16384 --bloom_bits=16 --bottommost_compression_type=zstd --cache_index_and_filter_blocks=1 --cache_size=1048576 --checkpoint_one_in=1000000 --checksum_type=kxxHash --clear_column_family_one_in=0 --compact_files_one_in=1000000 --compact_range_one_in=1000000 --compaction_ttl=0 --compression_max_dict_bytes=16384 --compression_type=zstd --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --db=/dev/shm/rocksdb/rocksdb_crashtest_blackbox --db_write_buffer_size=1048576 --delpercent=5 --delrangepercent=0 --destroy_db_initially=0 --enable_pipelined_write=0  --flush_one_in=1000000 --format_version=5 --get_live_files_and_wal_files_one_in=1000000 --index_block_restart_interval=5 --index_type=0 --log2_keys_per_lock=22 --long_running_snapshots=0 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=1000000 --max_manifest_file_size=16384 --max_write_batch_group_size_bytes=16 --max_write_buffer_number=3 --memtablerep=skip_list --mmap_read=0 --nooverwritepercent=1 --open_files=500000 --ops_per_thread=100000000 --partition_filters=0 --pause_background_one_in=1000000 --periodic_compaction_seconds=0 --prefixpercent=5 --progress_reports=0 --readpercent=45 --recycle_log_file_num=0 --reopen=20 --set_options_one_in=10000 --snapshot_hold_ops=100000 --subcompactions=2 --sync=1 --target_file_size_base=2097152 --target_file_size_multiplier=2 --test_batches_snapshots=1 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_merge=0 --use_multiget=1 --verify_checksum=1 --verify_checksum_one_in=1000000 --verify_db_one_in=100000 --write_buffer_size=4194304 --write_dbid_to_manifest=1 --writepercent=35
      ```
      Then repeat the following multiple times, e.g. 100 times, after compiling with TSAN.
      ```
      $ ./db_test2 --gtest_filter=DBTest2.SwitchMemtableRaceWithNewManifest
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6249
      
      Differential Revision: D19235077
      
      Pulled By: riversand963
      
      fbshipit-source-id: 79467b52f48739ce7c27e440caa2447a40653173
      1aaa1458
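      The fix pattern (read the shared field while still holding the mutex, then work on the cached copy) can be sketched as follows; the names are simplified stand-ins for the real fields:

      ```cpp
      #include <cstdint>
      #include <mutex>

      struct SharedState {
        std::mutex mu;
        uint64_t log_number = 0;  // written by SwitchMemtable in the real code
      };

      // Cache the shared value under the mutex, then use only the cached copy
      // after releasing it, so a concurrent writer can't race with the
      // long-running work (WriteCurrentStateToManifest in the real code).
      uint64_t ReadLogNumberSafely(SharedState* s) {
        uint64_t cached;
        {
          std::lock_guard<std::mutex> lock(s->mu);
          cached = s->log_number;  // cache before releasing the mutex
        }
        // ... long-running work uses `cached`, never s->log_number, which may
        // change concurrently once the mutex is released.
        return cached;
      }
      ```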
    • Q
      Add range delete function to C-API (#6259) · edaaa1ff
      Committed by Qinfan Wu
      Summary:
      It seems that the C-API doesn't expose the range delete functionality at the moment, so add the API.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6259
      
      Differential Revision: D19290320
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 3f403a4c3446d2042d55f1ece7cdc9c040f40c27
      edaaa1ff
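      The semantics the new C API binding exposes (delete every key in [begin_key, end_key), end exclusive) can be modeled on a std::map; a toy sketch, not the RocksDB implementation:

      ```cpp
      #include <map>
      #include <string>

      // DeleteRange semantics: remove every key in [begin_key, end_key),
      // with end_key exclusive, matching the half-open range convention.
      void DeleteRange(std::map<std::string, std::string>* kv,
                       const std::string& begin_key,
                       const std::string& end_key) {
        kv->erase(kv->lower_bound(begin_key), kv->lower_bound(end_key));
      }
      ```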
    • M
      Increase max_log_size in FlushJob to 1024 bytes (#6258) · 28e5a9a9
      Committed by Maysam Yabandeh
      Summary:
      When measure_io_stats_ is enabled, the volume of logging exceeds the default limit of 512 bytes. The patch allows the EventLoggerStream to change the limit, and also sets it to 1024 for FlushJob.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6258
      
      Differential Revision: D19279269
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 3fb5d468dad488f289ac99d713378177eb7504d6
      28e5a9a9