1. 24 Dec 2020, 1 commit
    • Remove flaky, redundant, and dubious DBTest.SparseMerge (#7800) · a727efca
      Committed by Peter Dillinger
      Summary:
      This test would occasionally fail like this:
      
          WARNING: c:\users\circleci\project\db\db_test.cc(1343): error: Expected:
          (dbfull()->TEST_MaxNextLevelOverlappingBytes(handles_[1])) <= (20 * 1048576), actual: 33501540 vs 20971520
      
      Being a very old test, it is also not structured in a sound way, and DBTest2.MaxCompactionBytesTest appears to be a better test of what SparseMerge was intended to cover. In fact, SparseMerge fails if I set
      
          options.max_compaction_bytes = options.target_file_size_base * 1000;
      
      Thus, we are removing this negative-value test.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7800
      
      Test Plan: Q.E.D.
      
      Reviewed By: ajkr
      
      Differential Revision: D25693366
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 9da07d4dce0559547fc938b2163a2015e956c548
  2. 23 Dec 2020, 4 commits
    • Add more tests for assert status checked (#7524) · 02418194
      Committed by mrambacher
      Summary:
      Added 10 more tests that pass the ASSERT_STATUS_CHECKED test.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7524
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D24323093
      
      Pulled By: ajkr
      
      fbshipit-source-id: 28d4106d0ca1740c3b896c755edf82d504b74801
    • Add more tests to ASSERT_STATUS_CHECKED (4) (#7718) · 81592d9f
      Committed by Adam Retter
      Summary:
      Fourth batch of adding more tests to ASSERT_STATUS_CHECKED.
      
      * db_range_del_test
      * db_write_test
      * random_access_file_reader_test
      * merge_test
      * external_sst_file_test
      * write_buffer_manager_test
      * stringappend_test
      * deletefile_test
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7718
      
      Reviewed By: pdillinger
      
      Differential Revision: D25671608
      
      fbshipit-source-id: 687a794e98a9e0cd5428ead9898ef05ced987c31
    • SyncWAL shouldn't be supported in compacted db (#7788) · 41ff125a
      Committed by cheng-chang
      Summary:
      `CompactedDB` is a kind of read-only DB, so it shouldn't support `SyncWAL`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7788
      
      Test Plan: watch existing tests pass.
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D25661209
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 9eb2cc3f73736dcc205c8410e5944aa203f002d3
    • Minimize Timing Issue in test WALTrashCleanupOnOpen (#7796) · 9057d0a0
      Committed by sdong
      Summary:
      We saw DBWALTestWithParam/DBWALTestWithParam.WALTrashCleanupOnOpen sometimes fail with:
      
          db/db_sst_test.cc:575: Failure
          Expected: (trash_log_count) >= (1), actual: 0 vs 1

      The suspicion is that delete scheduling actually deleted all trash files based on the configured rate, which is not expected. This can be reproduced by manually adding a sleep of several seconds after the DB is closed. Minimize the chance of this by setting the delete rate to the lowest possible value.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7796
      
      Test Plan: The test doesn't fail with the manual sleeping anymore
      
      Reviewed By: anand1976
      
      Differential Revision: D25675000
      
      fbshipit-source-id: a39fd05e1a83719c41014e48843792e752368e22
  3. 20 Dec 2020, 1 commit
    • aggregated-table-properties with GetMapProperty (#7779) · 4d1ac19e
      Committed by Peter Dillinger
      Summary:
      So that we can more easily get aggregate live table data such
      as total filter, index, and data sizes.
      
      It also adds ldb support for getting these properties, and fixes some missing or inaccurate related comments in db.h.
      
      For example:
      
          $ ./ldb --db=testdb get_property rocksdb.aggregated-table-properties
          rocksdb.aggregated-table-properties.data_size: 102871
          rocksdb.aggregated-table-properties.filter_size: 0
          rocksdb.aggregated-table-properties.index_partitions: 0
          rocksdb.aggregated-table-properties.index_size: 2232
          rocksdb.aggregated-table-properties.num_data_blocks: 100
          rocksdb.aggregated-table-properties.num_deletions: 0
          rocksdb.aggregated-table-properties.num_entries: 15000
          rocksdb.aggregated-table-properties.num_merge_operands: 0
          rocksdb.aggregated-table-properties.num_range_deletions: 0
          rocksdb.aggregated-table-properties.raw_key_size: 288890
          rocksdb.aggregated-table-properties.raw_value_size: 198890
          rocksdb.aggregated-table-properties.top_level_index_size: 0
          $ ./ldb --db=testdb get_property rocksdb.aggregated-table-properties-at-level1
          rocksdb.aggregated-table-properties-at-level1.data_size: 80909
          rocksdb.aggregated-table-properties-at-level1.filter_size: 0
          rocksdb.aggregated-table-properties-at-level1.index_partitions: 0
          rocksdb.aggregated-table-properties-at-level1.index_size: 1787
          rocksdb.aggregated-table-properties-at-level1.num_data_blocks: 81
          rocksdb.aggregated-table-properties-at-level1.num_deletions: 0
          rocksdb.aggregated-table-properties-at-level1.num_entries: 12466
          rocksdb.aggregated-table-properties-at-level1.num_merge_operands: 0
          rocksdb.aggregated-table-properties-at-level1.num_range_deletions: 0
          rocksdb.aggregated-table-properties-at-level1.raw_key_size: 238210
          rocksdb.aggregated-table-properties-at-level1.raw_value_size: 163414
          rocksdb.aggregated-table-properties-at-level1.top_level_index_size: 0
          $
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7779
      
      Test Plan: Added a test to ldb_test.py
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D25653103
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 2905469a08a64dd6b5510cbd7be2e64d3234d6d3
  4. 19 Dec 2020, 1 commit
    • Track WAL obsoletion when updating empty CF's log number (#7781) · fbce7a38
      Committed by Cheng Chang
      Summary:
      In the write path, there is an optimization: when a new WAL is created during SwitchMemtable, we update the internal log number of the empty column families to the new WAL. `FindObsoleteFiles` marks a WAL as obsolete if the WAL's log number is less than `VersionSet::MinLogNumberWithUnflushedData`. After updating the empty column families' internal log numbers, `VersionSet::MinLogNumberWithUnflushedData` might change, so some WALs might become obsolete and can be purged from disk.
      
      For example, consider there are 3 column families: 0, 1, 2:
      1. initially, all the column families' log number is 1;
      2. write some data to cf0, and flush cf0, but the flush is pending;
      3. now a new WAL 2 is created;
      4. write data to cf1 and WAL 2, now cf0's log number is 1, cf1's log number is 2, cf2's log number is 2 (because cf1 and cf2 are empty, so their log numbers will be set to the highest log number);
      5. now cf0's flush hasn't finished, flush cf1, a new WAL 3 is created, and cf1's flush finishes, now cf0's log number is 1, cf1's log number is 3, cf2's log number is 3, since WAL 1 still contains data for the unflushed cf0, no WAL can be deleted from disk;
      6. now cf0's flush finishes, cf0's log number is 2 (because when cf0 was switching memtable, WAL 3 does not exist yet), cf1's log number is 3, cf2's log number is 3, so WAL 1 can be purged from disk now, but WAL 2 still cannot because `MinLogNumberToKeep()` is 2;
      7. write data to cf2 and WAL 3, because cf0 is empty, its log number is updated to 3, so now cf0's log number is 3, cf1's log number is 3, cf2's log number is 3;
      8. now if the background threads want to purge obsolete files from disk, WAL 2 can be purged because `MinLogNumberToKeep()` is 3. But there are only two flush results written to MANIFEST: the first is for flushing cf1, with `MinLogNumberToKeep` 1; the second is for flushing cf0, with `MinLogNumberToKeep` 2. So without this PR, if the DB crashes at this point and tries to recover, `WalSet` will still expect WAL 2 to exist.
      
      When WAL tracking is enabled, we assume WALs will only become obsolete after a flush result is written to MANIFEST in `MemtableList::TryInstallMemtableFlushResults` (or its atomic flush counterpart). The above situation breaks this assumption.
      
      This PR tracks WAL obsoletion if necessary before updating the empty column families' log numbers.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7781
      
      Test Plan:
      watch existing tests and stress tests pass.
      `make -j48 blackbox_crash_test` on devserver
      
      Reviewed By: ltamasi
      
      Differential Revision: D25631695
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: ca7fff967bdb42204b84226063d909893bc0a4ec
  5. 16 Dec 2020, 1 commit
    • Do not full scan obsolete files on compaction busy (#7739) · 2021392e
      Committed by Burton Li
      Summary:
      When ConcurrentTaskLimiter is enabled and there are too many outstanding compactions, BackgroundCompaction returns Status::Busy(), which shouldn't be treated as a compaction failure.
      This caused a performance issue when outstanding compactions reached the limit.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7739
      
      Reviewed By: cheng-chang
      
      Differential Revision: D25508319
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 3b181b16ada0ca3393cfa3a7412985764e79c719
  6. 15 Dec 2020, 2 commits
    • Log sst number in Corruption status (#7767) · a0e4421e
      Committed by Jay Zhuang
      Summary:
      The SST file number in a corruption error would be very useful for debugging.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7767
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D25485872
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 67315b582cedeefbce6676015303ebe5bf6526a3
    • Add initial blob support to batched MultiGet (#7766) · 1afbd194
      Committed by Levi Tamasi
      Summary:
      The patch adds initial support for reading blobs to the batched `MultiGet` API.
      The current implementation simply retrieves the blob values as the blob indexes
      are encountered; that is, reads from blob files are currently not batched. (This
      will be optimized in a separate phase.) In addition, the patch removes some dead
      code related to BlobDB from the batched `MultiGet` implementation, namely the
      `is_blob` / `is_blob_index` flags that are passed around in `DBImpl` and `MemTable` /
      `MemTableListVersion`. These were never hooked up to anything and wouldn't
      work anyway, since a single flag is not sufficient to communicate the "blobness"
      of multiple key-values.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7766
      
      Test Plan: `make check`
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D25479290
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 7aba2d290e31876ee592bcf1adfd1018713a8000
  7. 12 Dec 2020, 1 commit
    • Fix memory leak for ColumnFamily drop with live iterator (#7749) · b1ee1914
      Committed by Peter Dillinger
      Summary:
      Uncommon bug seen by ASAN with
      ColumnFamilyTest.LiveIteratorWithDroppedColumnFamily, if the last two
      references to a ColumnFamilyData are both SuperVersions (during
      InstallSuperVersion). The fix is to use UnrefAndTryDelete even in
      SuperVersion::Cleanup but with a parameter to avoid re-entering Cleanup
      on the same SuperVersion being cleaned up.
      
      ColumnFamilyData::Unref is considered unsafe so removed.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7749
      
      Test Plan: ./column_family_test --gtest_filter=*LiveIter* --gtest_repeat=100
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D25354304
      
      Pulled By: pdillinger
      
      fbshipit-source-id: e78f3a3f67c40013b8432f31d0da8bec55c5321c
  8. 11 Dec 2020, 1 commit
    • Do not log unnecessary WAL obsoletion events (#7765) · fd7d8dc5
      Committed by Cheng Chang
      Summary:
      min_wal_number_to_keep should never decrease; if it does not increase, there is no need to log the WAL obsoletion in MANIFEST, since a previous record already covers it.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7765
      
      Test Plan: watch existing tests and stress tests pass
      
      Reviewed By: pdillinger
      
      Differential Revision: D25462542
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 0085fcb6edf5cf2b0fc32f9932a7566f508768ff
  9. 10 Dec 2020, 5 commits
    • Add further tests to ASSERT_STATUS_CHECKED (2) (#7698) · 8ff6557e
      Committed by Adam Retter
      Summary:
      Second batch of adding more tests to ASSERT_STATUS_CHECKED.
      
      * external_sst_file_basic_test
      * checkpoint_test
      * db_wal_test
      * db_block_cache_test
      * db_logical_block_size_cache_test
      * db_blob_index_test
      * optimistic_transaction_test
      * transaction_test
      * point_lock_manager_test
      * write_prepared_transaction_test
      * write_unprepared_transaction_test
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7698
      
      Reviewed By: cheng-chang
      
      Differential Revision: D25441664
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 9e78867f32321db5d4833e95eb96c5734526ef00
    • Carry over min_log_number_to_keep_2pc in new MANIFEST (#7747) · 80159f6e
      Committed by Cheng Chang
      Summary:
      When two phase commit is enabled, `VersionSet::min_log_number_to_keep_2pc` is set during flush.
      But when a new MANIFEST is created, the `min_log_number_to_keep_2pc` is not carried over to the new MANIFEST. So if a new MANIFEST is created and then DB is reopened, the `min_log_number_to_keep_2pc` will be lost.  This may cause DB recovery errors.
      The bug is reproduced in a new unit test in `version_set_test.cc`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7747
      
      Test Plan: The new unit test in `version_set_test.cc` should pass.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D25350661
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: eee890d5b19f15769069670692e270ae31044ece
    • Ensure that MultiGet works properly with compressed cache (#7756) · 8a1488ef
      Committed by anand76
      Summary:
      Ensure that when direct IO is enabled and a compressed block cache is
      configured, MultiGet inserts compressed data blocks into the compressed
      block cache.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7756
      
      Test Plan: Add unit test to db_basic_test
      
      Reviewed By: cheng-chang
      
      Differential Revision: D25416240
      
      Pulled By: anand1976
      
      fbshipit-source-id: 75d57526370c9c0a45ff72651f3278dbd8a9086f
    • Add a test for disabling tracking WAL (#7757) · 3c2a4488
      Committed by Cheng Chang
      Summary:
      If WAL tracking was enabled, then disabled during reopen, the previously tracked WALs should be removed from MANIFEST.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7757
      
      Test Plan: a new unit test `DBBasicTest.DisableTrackWal` is added.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D25410508
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 9d8d9e665066135930a7c1035bb8c2f68bded6a0
    • Always track WAL obsoletion (#7759) · efe827ba
      Committed by Cheng Chang
      Summary:
      Currently, when a WAL becomes obsolete after flushing, if VersionSet::WalSet does not contain the WAL, we do not track the WAL obsoletion event in MANIFEST.
      
      But consider this case:
      * WAL 10 is synced, a VersionEdit is LogAndApplied to MANIFEST to log this WAL addition event, but the VersionEdit is not applied to WalSet yet since its corresponding ManifestWriter is still pending in the write queue;
      * Since the above ManifestWriter is blocking, the LogAndApply will block on a conditional variable and release the db mutex, so another LogAndApply can proceed to enqueue other VersionEdits concurrently;
      * Now flush happens, and WAL 10 becomes obsolete, although WalSet does not contain WAL 10 yet, we should call LogAndApply to enqueue a VersionEdit to indicate the obsoletion of WAL 10;
      * otherwise, when the queued edit indicating WAL 10 addition is logged to MANIFEST, and DB crashes and reopens, the WAL 10 might have been removed from disk, but it still exists in MANIFEST.
      
      This PR changes the behavior to: always `LogAndApply` any WAL addition or obsoletion event, without considering the order issues caused by concurrency, but when applying the edits to `WalSet`, do not add the WALs if they are already obsolete. In this approach, the logical events of WAL addition and obsoletion are always tracked in MANIFEST, so we can inspect the MANIFEST and know all the previous WAL events, but we choose to ignore certain events due to the concurrency issues such as the case above, or the case in https://github.com/facebook/rocksdb/pull/7725.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7759
      
      Test Plan: make check
      
      Reviewed By: pdillinger
      
      Differential Revision: D25423089
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 9cb9a7fbc1875bf954f2a42f9b6cfd6d49a7b21c
  10. 09 Dec 2020, 2 commits
    • Add further tests to ASSERT_STATUS_CHECKED (1) (#7679) · 7b2216c9
      Committed by Adam Retter
      Summary:
      First batch of adding more tests to ASSERT_STATUS_CHECKED.
      
      * db_iterator_test
      * db_memtable_test
      * db_merge_operator_test
      * db_merge_operand_test
      * write_batch_test
      * write_batch_with_index_test
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7679
      
      Reviewed By: ajkr
      
      Differential Revision: D25399270
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 3017d0a686aec5cd2d743fc2acbbf75df239f3ba
    • Do not track obsolete WALs in MANIFEST even if they are synced (#7725) · 07030c6f
      Committed by Cheng Chang
      Summary:
      Consider the case:
      1. All column families are flushed, so all WALs become obsolete, but no WAL is removed from disk yet because the removal is asynchronous, a VersionEdit is written to MANIFEST indicating that WALs before a certain WAL number are obsolete, let's say this number is 3;
      2. `SyncWAL` is called, so all the on-disk WALs are synced, and if track_and_verify_wal_in_manifest=true, the WALs will be tracked in MANIFEST, let's say the WAL numbers are 1 and 2;
      3. DB crashes;
      4. During DB recovery, when replaying MANIFEST, we first see that WALs with number < 3 are obsolete, then we see that WALs 1 and 2 are synced, so according to the current implementation of `WalSet`, the `WalSet` will be recovered to include WALs 1 and 2;
      5. WAL 1 and 2 are asynchronously deleted from disk, then the WAL verification algorithm fails with `Corruption: missing WAL`.
      
      The above case is reproduced in a new unit test `DBBasicTestTrackWal::DoNotTrackObsoleteWal`.
      
      The fix is to maintain the upper bound of the obsolete WAL numbers, any WAL with number less than the maintained number is considered to be obsolete, so shouldn't be tracked even if they are later synced. The number is maintained in `WalSet`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7725
      
      Test Plan:
      1. a new unit test `DBBasicTestTrackWal::DoNotTrackObsoleteWal` is added.
      2. run `make crash_test` on devserver.
      
      Reviewed By: riversand963
      
      Differential Revision: D25238914
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: f5dccd57c3d89f19565ec5731f2d42f06d272b72
  11. 08 Dec 2020, 5 commits
    • Refactor ProcessManifestWrites a little bit (#7751) · 11c4be22
      Committed by Yanqin Jin
      Summary:
      This PR removes a nested loop inside ProcessManifestWrites. The new
      implementation has the same behavior as the old code with simpler logic
      and lower complexity.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7751
      
      Test Plan:
      make check
      Ran make crash_test on devserver; it succeeded 3 times.
      
      Reviewed By: ltamasi
      
      Differential Revision: D25363526
      
      Pulled By: riversand963
      
      fbshipit-source-id: 27e681949dacd7501a752e5e517b9e85b54ccb2e
    • Change ErrorHandler methods to return const Status& (#7539) · db03172d
      Committed by mrambacher
      Summary:
      This change eliminates the need for a lot of the PermitUncheckedError calls on return from ErrorHandler methods.  The calls are no longer needed as the status is returned as a reference rather than a copy.  Additionally, this means that the originating status (recovery_error_, bg_error_) is not cleared implicitly as a result of calling one of these methods.
      
      For this class, I do not know whether the proper behavior should be to call PermitUncheckedError in the destructor or to clear the checked state when the status is cleared.  I tested it both ways.  Without the code in the destructor, the status needs to be cleared in at least some of the places where it is set to OK.  When running tests, I found no instances where this class was destructed with a non-OK, non-checked Status.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7539
      
      Reviewed By: anand1976
      
      Differential Revision: D25340565
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 1730c035c81a475875ea745226112030ec25136c
    • Do not use ASSERT_OK in child threads in ExternalSstFileTest.PickedLevelBug (#7754) · 8a06fe27
      Committed by Levi Tamasi
      Summary:
      `googletest` uses exceptions to communicate assertion failures when
      `GTEST_THROW_ON_FAILURE` is set, which does not go well with
      `std::thread`s, since an exception escaping the top-level function of an
      `std::thread` object or an `std::thread` getting destroyed without
      having been `join`ed or `detach`ed first results in a call to
      `std::terminate`. The patch fixes this by moving the `Status` assertions
      of background operations in `ExternalSstFileTest.PickedLevelBug` to the
      main thread.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7754
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D25383808
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 32fb2721e5169ec898d218900bc0d83eead45d03
    • Handling misuse of snprintf return value (#7686) · 20c7d7c5
      Committed by Akanksha Mahajan
      Summary:
      Handle misuse of the snprintf return value to avoid out-of-bounds
      reads/writes.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7686
      
      Test Plan: make check -j64
      
      Reviewed By: riversand963
      
      Differential Revision: D25030831
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 1a1d181c067c78b94d720323ae00b79566b57cfa
    • Fix unit test failure ppc64le in travis (#7752) · 1df85848
      Committed by Akanksha Mahajan
      Summary:
      Added a fix for the failure of
      DBTest2.PartitionedIndexUserToInternalKey on ppc64le in Travis.
      Closes https://github.com/facebook/rocksdb/issues/7746
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7752
      
      Test Plan:
      Ran the Travis job multiple times and it passed. Will keep
      watching the Travis job after this patch.
      
      Reviewed By: pdillinger
      
      Differential Revision: D25373130
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: fa0e3f85f75b687415044a506e42cc38ead87975
  12. 06 Dec 2020, 1 commit
    • Add full_history_ts_low to column family (#7740) · eee0af9a
      Committed by Yanqin Jin
      Summary:
      Following https://github.com/facebook/rocksdb/issues/7655 and https://github.com/facebook/rocksdb/issues/7657, this PR adds `full_history_ts_low_` to `ColumnFamilyData`.
      `ColumnFamilyData::full_history_ts_low_` will be used to create `FlushJob` and `CompactionJob`.
      
      `ColumnFamilyData::full_history_ts_low` is persisted to the MANIFEST file. An application can only
      increase its value. Consider the following case:
      
      >
      > The database has a key at ts=950. `full_history_ts_low` is first set to 1000, and then a GC is triggered
      > and cleans up all data older than 1000. If the application sets `full_history_ts_low` to 900 afterwards,
      > and tries to read at ts=960, the key at 950 is not seen. From the perspective of the read, the result
      > is hard to reason about. For simplicity, we just do not allow decreasing full_history_ts_low for now.
      >
      
      During recovery, the value of `full_history_ts_low` is restored for each column family if applicable. Note that
      version edits in the MANIFEST file for the same column family may have `full_history_ts_low` unsorted due
      to the potential interleaving of `LogAndApply` calls. Only the max will be used to restore the state of the
      column family.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7740
      
      Test Plan: make check
      
      Reviewed By: ltamasi
      
      Differential Revision: D25296217
      
      Pulled By: riversand963
      
      fbshipit-source-id: 24acda1df8262cd7cfdc6ce7b0ec56438abe242a
  13. 05 Dec 2020, 3 commits
    • Add blob support to DBIter (#7731) · 61932cdf
      Committed by Levi Tamasi
      Summary:
      The patch adds iterator support to the integrated BlobDB implementation.
      Whenever a blob reference is encountered during iteration, the corresponding
      blob is retrieved by calling `Version::GetBlob`, assuming the `expose_blob_index`
      (formerly `allow_blob`) flag is *not* set. (Note: the flag is set by the old stacked
      BlobDB implementation, which has its own blob file handling/blob retrieval logic.)
      
      In addition, `DBIter` now uniformly returns `Status::NotSupported` with the error
      message `"BlobDB does not support merge operator."` when encountering a
      blob reference while performing a merge (instead of potentially returning a
      message that implies the database should be opened using the stacked BlobDB's
      `Open`.)
      
      TODO: We can implement support for lazily retrieving the blob value (or in other
      words, bypassing the retrieval of blob values based on key) by extending the `Iterator`
      API with a new `PrepareValue` method (similarly to `InternalIterator`, which already
      supports lazy values).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7731
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D25256293
      
      Pulled By: ltamasi
      
      fbshipit-source-id: c39cd782011495a526cdff99c16f5fca400c4811
    • Fix assert(cfd->imm()->NumNotFlushed() > 0) in FlushMemtable (#7744) · e102de73
      Committed by Zhichao Cao
      Summary:
      In the current code base, in FlushMemtable, when `(Flush_reason == FlushReason::kErrorRecoveryRetryFlush && (!cfd->mem()->IsEmpty() || !cached_recoverable_state_empty_.load()))`, we assert that `cfd->imm()->NumNotFlushed() > 0`. However, there are some corner cases that can fail this assert:
      1. If there are multiple CFs, some CFs have immutable memtables and some don't. In ResumeImpl, all CFs will call FlushMemtable, which will hit the assert.
      2. A regular flush is scheduled and running while the resume thread is waiting. New KVs are inserted and SchedulePendingFlush is called. The regular flush will keep calling MaybeScheduleFlushAndCompaction until all the immutable memtables are flushed. When the regular flush ends and the auto-resume thread starts to schedule new flushes, `cfd->imm()->NumNotFlushed()` can be 0.
      
      Removed the assert and added comments.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7744
      
      Test Plan: make check; also passed the stress test
      
      Reviewed By: riversand963
      
      Differential Revision: D25340573
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: eac357bdace660247c197f01a9ff6857e3c97672
    • Fix the thread wait case in error_handler (#7700) · eb5a8c06
      Committed by Zhichao Cao
      Summary:
      In the error_handler auto-recovery case, if recovery_in_prog_ is false, the recovery has finished or failed. In this case, the auto-recovery thread should have finished its execution, so recovery_thread_ should be null. However, in some cases it is not null, and the caller should not return directly. Instead, it should wait for a while and create a new thread to execute the new recovery.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7700
      
      Test Plan: make check, error_handler_fs_test
      
      Reviewed By: anand1976
      
      Differential Revision: D25098233
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 5a1cba234ca18f6dd5d1be88e02d66e1d5ce931b
  14. 04 Dec 2020, 1 commit
    • Write min_log_number_to_keep to MANIFEST during atomic flush under 2 phase commit (#7570) · 70f2e091
      Committed by Cheng Chang
      Summary:
      When 2 phase commit is enabled, if there are prepared data in a WAL, the WAL should be kept; the minimum log number for such a WAL is written to MANIFEST during flush. In atomic flush, this information is not written to MANIFEST.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7570
      
      Test Plan: Added a new unit test `DBAtomicFlushTest.ManualFlushUnder2PC`, this test fails in atomic flush without this PR, after this PR, it succeeds.
      
      Reviewed By: riversand963
      
      Differential Revision: D24394222
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 60ce74b21b704804943be40c8de01b41269cf116
  15. 03 Dec 2020, 3 commits
    • Add kManifestWriteNoWAL to BackgroundErrorReason to handle Flush IO Error when WAL is disabled (#7693) · 29e8f6a6
      Committed by Zhichao Cao
      
      Summary:
      In the current code base, all manifest writes with an IO error are set with reason BackgroundErrorReason::kManifestWrite, which is mapped to kHardError if the IO error is retryable. However, if the system does not use the WAL, all retryable IO errors should be mapped to kSoftError. This PR handles this special case by adding kManifestWriteNoWAL to BackgroundErrorReason.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7693
      
      Test Plan: make check, add new testing cases to error_handler_fs_test
      
      Reviewed By: anand1976
      
      Differential Revision: D25066204
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: d59553896c2eac3fb37c05238544d2b265379462
      29e8f6a6
    • J
      Make CompactRange and GetApproximateSizes work with timestamp (#7684) · 7fec715d
      Jay Zhuang 提交于
      Summary:
Add timestamp to the `CompactRange()` and `GetApproximateSizes()` range keys if needed.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7684
      
      Test Plan: make check
      
      Reviewed By: riversand963
      
      Differential Revision: D25015421
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 51ca0756087eb053a3b11801e5c7ce1c6e2d38a9
      7fec715d
    • Y
      Fix assertion failure in bg flush (#7362) · e062a719
      Yanqin Jin 提交于
      Summary:
      https://github.com/facebook/rocksdb/issues/7340 reports and reproduces an assertion failure caused by a combination of the following:
      - atomic flush is disabled.
      - a column family can appear multiple times in the flush queue at the same time. This behavior was introduced in release 5.17.
      
      Consequently, it is possible that two flushes race with each other. One bg flush thread flushes all memtables. The other thread calls `FlushMemTableToOutputFile()` afterwards, and hits the assertion error below.
      
      ```
        assert(cfd->imm()->NumNotFlushed() != 0);
        assert(cfd->imm()->IsFlushPending());
      ```
      
Fix this by reverting the behavior. In the non-atomic-flush case, a column family can appear in the flush queue at most once at any given time.
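The reverted invariant can be sketched as a flush queue that rejects duplicate entries. The class and member names here are illustrative stand-ins, not RocksDB's actual flush-scheduling code.

```cpp
#include <cassert>
#include <deque>
#include <unordered_set>

// Sketch of the invariant: in the non-atomic-flush case, a column family
// may appear in the flush queue at most once at any given time.
struct FlushQueue {
  std::deque<int> queue_;               // column family IDs awaiting flush
  std::unordered_set<int> queued_cfs_;  // fast membership check

  bool Enqueue(int cf_id) {
    // Skip if this CF is already pending; this prevents two background
    // flush threads from racing on the same memtable list and tripping
    // asserts like cfd->imm()->IsFlushPending().
    if (!queued_cfs_.insert(cf_id).second) {
      return false;
    }
    queue_.push_back(cf_id);
    return true;
  }

  bool Dequeue(int* cf_id) {
    if (queue_.empty()) return false;
    *cf_id = queue_.front();
    queue_.pop_front();
    queued_cfs_.erase(*cf_id);  // CF may be re-enqueued after this point
    return true;
  }
};
```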
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7362
      
      Test Plan:
      make check
Also run stress test successfully 10 times.
      ```
      make crash_test
      ```
      
      Reviewed By: ajkr
      
      Differential Revision: D25172996
      
      Pulled By: riversand963
      
      fbshipit-source-id: f1559b6366cc609e961e3fc83fae548f1fad08ce
      e062a719
  16. 02 12月, 2020 1 次提交
  17. 01 12月, 2020 1 次提交
    • A
      Fix kPointInTimeRecovery handling of truncated WAL (#7701) · eb65d673
      Andrew Kryczka 提交于
      Summary:
      WAL may be truncated to an incomplete record due to crash while writing
      the last record or corruption. In the former case, no hole will be
      produced since no ACK'd data was lost. In the latter case, a hole could
      be produced without this PR since we proceeded to recover the next WAL
      as if nothing happened. This PR changes the record reading code to
      always report a corruption for incomplete records in
      `kPointInTimeRecovery` mode, and the upper layer will only ignore them
      if the next WAL has consecutive seqnum (i.e., we are guaranteed no
      hole).
      
      While this solves the hole problem for the case of incomplete
      records, the possibility is still there if the WAL is corrupted by
      truncation to an exact record boundary. This PR also regresses how much data
      can be recovered when writes are mixed with/without
`WriteOptions::disableWAL`, as then we cannot distinguish between a
      seqnum gap caused by corruption and a seqnum gap caused by a `disableWAL` write.
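The hole check the upper layer performs can be sketched as follows. The function name and parameters are illustrative, not the actual recovery code: the point is only that a tail corruption is ignorable exactly when the next WAL's first sequence number is consecutive.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the "no hole" guarantee described above: a corruption at the
// tail of one WAL may be ignored only if the next WAL starts at the
// expected sequence number, which proves no ACK'd writes were lost.
bool CanIgnoreTailCorruption(uint64_t last_seq_in_corrupt_wal,
                             uint64_t first_seq_in_next_wal) {
  // Consecutive seqnums mean the truncated record carried no ACK'd data.
  // Any gap could be real data loss (or a disableWAL write, which this
  // scheme cannot distinguish -- hence the regression noted above).
  return first_seq_in_next_wal == last_seq_in_corrupt_wal + 1;
}
```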
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7701
      
      Test Plan:
      Interestingly there already was a test for this case
      (`DBWALTestWithParams.kPointInTimeRecovery`); it just had a typo bug in
      the verification that prevented it from noticing holes in recovery.
      
      Reviewed By: anand1976
      
      Differential Revision: D25111765
      
      Pulled By: ajkr
      
      fbshipit-source-id: 5e330b13b1ee2b5be096cea9d0ff6075843e57b6
      eb65d673
  18. 24 11月, 2020 2 次提交
    • L
      Integrated blob garbage collection: relocate blobs (#7694) · 51a8dc6d
      Levi Tamasi 提交于
      Summary:
      The patch adds basic garbage collection support to the integrated BlobDB
      implementation. Valid blobs residing in the oldest blob files are relocated
      as they are encountered during compaction. The threshold that determines
      which blob files qualify is computed based on the configuration option
      `blob_garbage_collection_age_cutoff`, which was introduced in https://github.com/facebook/rocksdb/issues/7661 .
      Once a blob is retrieved for the purposes of relocation, it passes through the
      same logic that extracts large values to blob files in general. This means that
      if, for instance, the size threshold for key-value separation (`min_blob_size`)
      got changed or writing blob files got disabled altogether, it is possible for the
      value to be moved back into the LSM tree. In particular, one way to re-inline
      all blob values if needed would be to perform a full manual compaction with
      `enable_blob_files` set to `false`, `enable_blob_garbage_collection` set to
`true`, and `blob_garbage_collection_age_cutoff` set to `1.0`.
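The separation decision that makes re-inlining possible can be sketched as below. The option names mirror the ones mentioned above, but the struct and function are a simplified illustration, not the actual compaction logic.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the key-value separation decision described above.
struct BlobOptions {
  bool enable_blob_files = false;
  size_t min_blob_size = 0;
};

// During compaction, a value (including one read back from an old blob file
// by GC) goes to a new blob file only if separation is still enabled and the
// value meets the size threshold; otherwise it is written back into the LSM
// tree. This is why a full compaction with enable_blob_files == false
// re-inlines all blob values.
bool ShouldWriteToBlobFile(const BlobOptions& opts, size_t value_size) {
  return opts.enable_blob_files && value_size >= opts.min_blob_size;
}
```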
      
      Some TODOs that I plan to address in separate PRs:
      
      1) We'll have to measure the amount of new garbage in each blob file and log
      `BlobFileGarbage` entries as part of the compaction job's `VersionEdit`.
      (For the time being, blob files are cleaned up solely based on the
      `oldest_blob_file_number` relationships.)
      2) When compression is used for blobs, the compression type hasn't changed,
      and the blob still qualifies for being written to a blob file, we can simply copy
      the compressed blob to the new file instead of going through decompression
      and compression.
      3) We need to update the formula for computing write amplification to account
      for the amount of data read from blob files as part of GC.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7694
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D25069663
      
      Pulled By: ltamasi
      
      fbshipit-source-id: bdfa8feb09afcf5bca3b4eba2ba72ce2f15cd06a
      51a8dc6d
    • A
      Return `Status` from `MemTable` mutation functions (#7656) · dd6b7fc5
      Andrew Kryczka 提交于
      Summary:
      This PR updates `MemTable::Add()`, `MemTable::Update()`, and
      `MemTable::UpdateCallback()` to return `Status` objects, and adapts the
      client code in `MemTableInserter`. The goal is to prepare these
      functions for key-value checksum, where we want to verify key-value
      integrity while adding to memtable. After this PR, the memtable mutation
      functions can report a failed integrity check by returning `Status::Corruption`.
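The new contract can be sketched as below. The `Status` type and the checksum are minimal stand-ins defined here for illustration; RocksDB's actual key-value checksum is different, and `MemTableAdd` is a hypothetical free function, not the real `MemTable::Add()` signature.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Minimal stand-in for rocksdb::Status, for illustration only.
struct Status {
  bool ok_ = true;
  std::string msg_;
  static Status OK() { return Status{}; }
  static Status Corruption(std::string m) { return Status{false, std::move(m)}; }
  bool ok() const { return ok_; }
};

// Toy checksum standing in for the real key-value checksum.
uint32_t TrivialChecksum(const std::string& k, const std::string& v) {
  uint32_t c = 0;
  for (char ch : k) c = c * 31 + static_cast<unsigned char>(ch);
  for (char ch : v) c = c * 31 + static_cast<unsigned char>(ch);
  return c;
}

// Sketch of the new contract: the mutation returns a Status so that a
// failed integrity check surfaces as Corruption instead of being silent.
Status MemTableAdd(const std::string& key, const std::string& value,
                   uint32_t expected_checksum) {
  if (TrivialChecksum(key, value) != expected_checksum) {
    return Status::Corruption("key-value checksum mismatch");
  }
  // ... insert the entry into the memtable ...
  return Status::OK();
}
```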
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7656
      
      Reviewed By: riversand963
      
      Differential Revision: D24900497
      
      Pulled By: ajkr
      
      fbshipit-source-id: 1a7e80581e3774676f2bbba2f0a0b04890f40009
      dd6b7fc5
  19. 21 11月, 2020 1 次提交
  20. 19 11月, 2020 1 次提交
    • C
      Do not track empty WALs (#7697) · 7169ca9c
      Cheng Chang 提交于
      Summary:
An empty WAL won't be backed up by the BackupEngine. So if we track empty WALs in MANIFEST, then when restoring from a backup, it may report a corruption that the empty WAL is missing. This is technically correct, because the WAL is in the main DB but not in the backup DB; however, a missing empty WAL does not logically break DB consistency.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7697
      
      Test Plan: watch existing tests to pass
      
      Reviewed By: pdillinger
      
      Differential Revision: D25077194
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 01917b57234b92b6063925f2ee9452c5732bdc03
      7169ca9c
  21. 18 11月, 2020 1 次提交
    • C
      Track WAL in MANIFEST: Update logic for computing min_log_number_to_keep in atomic flush (#7660) · 8c93b16f
      Cheng Chang 提交于
      Summary:
      The logic for computing min_log_number_to_keep in atomic flush was incorrect.
      
For example, when all column families are flushed, the min_log_number_to_keep should be the latest new log. But the incorrect logic calls `PrecomputeMinLogNumberToKeepNon2PC` for each column family and returns the minimum of them. However, `PrecomputeMinLogNumberToKeepNon2PC(cf)` assumes column families other than `cf` are flushed, and when all column families are flushed, this assumption is incorrect.
      
Without this fix, the WAL referenced by the computed min_log_number_to_keep may actually contain no unflushed data, so it might already have been deleted from disk; on recovery, an incorrect `Corruption: missing WAL` error is then reported.
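The corrected computation can be sketched as below. The function name and its inputs are illustrative simplifications, not the actual `PrecomputeMinLogNumberToKeep` code.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the atomic-flush computation described above. Each entry is a
// column family's smallest log number still containing unflushed data, or
// 0 if the CF is fully flushed.
uint64_t MinLogNumberToKeep(const std::vector<uint64_t>& unflushed_log_numbers,
                            uint64_t new_log_number) {
  // When every column family is flushed, the answer must be the latest new
  // log, not a minimum computed under a per-CF "others are flushed"
  // assumption.
  uint64_t min_log = new_log_number;
  for (uint64_t log : unflushed_log_numbers) {
    if (log != 0 && log < min_log) min_log = log;
  }
  return min_log;
}
```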
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7660
      
      Test Plan:
      run `make crash_test_with_atomic_flush`  on devserver
      added a unit test in `db_flush_test`
      
      Reviewed By: riversand963
      
      Differential Revision: D24906265
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 08deda62e71f67f59e3b7925cdd86dd09bd4f430
      8c93b16f
  22. 17 11月, 2020 1 次提交
    • Y
      Clean up after two test failures in db_basic_test (#7682) · 869f0538
      Yanqin Jin 提交于
      Summary:
      In db_basic_test.cc, there are two tests that rely on the underlying
      system's `LockFile` support to function correctly:
      DBBasicTest.OpenWhenOpen and DBBasicTest.CheckLock. In both tests,
      re-opening a db using `DB::Open` is expected to fail because the second
      open cannot lock the LOCK file. Some distributed file systems, e.g. HDFS
do not support the POSIX-style file lock. Therefore, these unit tests will cause
assertion failures, and the second `Open` will create a db instance.
      Currently, these db instances are not closed after the assertion
      failure. Since these db instances are registered with some process-wide, static
      data structures, e.g. `PeriodicWorkScheduler::Default()`, they can still be
      accessed after the unit tests. However, the `Env` object created for this db
      instance is destroyed when the test finishes in `~DBTestBase()`. Consequently,
      it causes illegal memory access.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7682
      
      Test Plan:
Run the following on a distributed file system:
      ```
      make check
      ```
      
      Reviewed By: anand1976
      
      Differential Revision: D25004215
      
      Pulled By: riversand963
      
      fbshipit-source-id: f4327d7716c0e72b13bb43737ec9a5d156da4d52
      869f0538