1. 24 Dec 2020 (1 commit)
    • Remove flaky, redundant, and dubious DBTest.SparseMerge (#7800) · a727efca
      Committed by Peter Dillinger
      Summary:
      This test would occasionally fail like this:
      
          WARNING: c:\users\circleci\project\db\db_test.cc(1343): error: Expected:
          (dbfull()->TEST_MaxNextLevelOverlappingBytes(handles_[1])) <= (20 * 1048576), actual: 33501540 vs 20971520
      
      Being a very old test, it is not structured in a sound way, and it appears that DBTest2.MaxCompactionBytesTest is a better test of what SparseMerge was intended to test. In fact, SparseMerge fails if I set
      
          options.max_compaction_bytes = options.target_file_size_base * 1000;
      
      Thus, we are removing this negative-value test.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7800
      
      Test Plan: Q.E.D.
      
      Reviewed By: ajkr
      
      Differential Revision: D25693366
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 9da07d4dce0559547fc938b2163a2015e956c548
  2. 28 Oct 2020 (1 commit)
    • Fix many tests to run with MEM_ENV and ENCRYPTED_ENV; Introduce a MemoryFileSystem class (#7566) · f35f7f27
      Committed by mrambacher
      Summary:
      This PR does a few things:
      
      1.  The MockFileSystem class was split out from the MockEnv.  This change would theoretically allow a MockFileSystem to be used by other Environments as well (if we created a means of constructing one).  The MockFileSystem implements a FileSystem in its entirety and does not rely on any Wrapper implementation.
      
      2.  Make the RocksDB test suite work when MEM_ENV=1 and ENCRYPTED_ENV=1 are set.  To accomplish this, a few things were needed:
      - The tests that tried to use the "wrong" environment (Env::Default() instead of env_) were updated
      - The MockFileSystem was changed to support the features it was missing or mishandled (such as recursively deleting files in a directory or supporting renaming of a directory).
      
      3.  Updated the test framework to have a ROCKSDB_GTEST_SKIP macro.  This can be used to flag tests that are skipped.  Currently, this defaults to doing nothing (marks the test as SUCCESS) but will mark the tests as SKIPPED when RocksDB is upgraded to a version of gtest that supports this (gtest-1.10).
      
      I have run a full "make check" with MEM_ENV, ENCRYPTED_ENV,  both, and neither under both MacOS and RedHat.  A few tests were disabled/skipped for the MEM/ENCRYPTED cases.  The error_handler_fs_test fails/hangs for MEM_ENV (presumably a timing problem) and I will introduce another PR/issue to track that problem.  (I will also push a change to disable those tests soon).  There is one more test in DBTest2 that also fails which I need to investigate or skip before this PR is merged.
      
      Theoretically, this PR should also allow the test suite to run against an Env loaded from the registry, though I do not have one to try it with currently.
      
      Finally, once this is accepted, it would be nice if there was a CircleCI job to run these tests on a checkin so this effort does not become stale.  I do not know how to do that, so if someone could write that job, it would be appreciated :)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7566
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D24408980
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 911b1554a4d0da06fd51feca0c090a4abdcb4a5f
  3. 02 Oct 2020 (2 commits)
    • Reduce the number of iterations in DBTest.FileCreationRandomFailure (#7481) · 786c1a2c
      Committed by Levi Tamasi
      Summary:
      `DBTest.FileCreationRandomFailure` frequently times out during our
      continuous test runs. (It's a case of "stress test posing as unit test.")
      The patch reduces the number of iterations to avoid this. Note that
      the lower numbers are still sufficient to trigger both flushes and
      compactions, so test coverage is still the same.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7481
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D24034712
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 8731a9446e5a121a1041b00f0df473b9f714935a
    • Introduce options.check_flush_compaction_key_order (#7467) · 75081755
      Committed by sdong
      Summary:
      Introduce a new option, options.check_flush_compaction_key_order (true by default), which checks the key order of flushes and compactions and fails the operation if the order is violated.
      Also did some minor refactoring of the hash checking code, consolidating the hashing logic into a validation class, where the key ordering logic is added.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7467
      
      Test Plan: Add unit tests to validate the check can catch reordering in flush and compaction, and can be properly disabled.
      
      Reviewed By: riversand963
      
      Differential Revision: D24010683
      
      fbshipit-source-id: 8dd6292d2cda8006054e9ded7cfa4bf405f0527c
  4. 15 Sep 2020 (1 commit)
  5. 29 Aug 2020 (1 commit)
  6. 18 Aug 2020 (1 commit)
  7. 12 Aug 2020 (1 commit)
    • Fix+clean up handling of mock sleeps (#7101) · 6ac1d25f
      Committed by Peter Dillinger
      Summary:
      We have a number of tests hanging on MacOS and windows due to
      mishandling of code for mock sleeps. In addition, the code was in
      terrible shape because the same variable (addon_time_) would sometimes
      refer to microseconds and sometimes to seconds. One test even assumed it
      was nanoseconds but was written to pass anyway.
      
      This has been cleaned up so that DB tests generally use a SpecialEnv
      function to mock sleep, for either some number of microseconds or seconds
      depending on the function called. But to call one of these, the test must first
      call SetMockSleep (precondition enforced with assertion), which also turns
      sleeps in RocksDB into mock sleeps. To also remove accounting for actual
      clock time, call SetTimeElapseOnlySleepOnReopen, which implies
      SetMockSleep (on DB re-open). This latter setting only works by applying
      on DB re-open, otherwise havoc can ensue if Env goes back in time with
      DB open.
      
      More specifics:
      
      Removed some unused test classes, and updated comments on the general
      problem.
      
      Fixed DBSSTTest.GetTotalSstFilesSize using a sync point callback instead
      of mock time. For this we have the only modification to production code,
      inserting a sync point callback in flush_job.cc, which is not a change to
      production behavior.
      
      Removed unnecessary resetting of mock times to 0 in many tests. RocksDB
      deals in relative time. Any behaviors relying on absolute date/time are likely
      a bug. (The above test DBSSTTest.GetTotalSstFilesSize was the only one
      clearly injecting a specific absolute time for actual testing convenience.) Just
      in case I misunderstood some test, I put this note in each replacement:
      // NOTE: Presumed unnecessary and removed: resetting mock time in env
      
      Strengthened some tests like MergeTestTime, MergeCompactionTimeTest, and
      FilterCompactionTimeTest in db_test.cc
      
      stats_history_test and blob_db_test are each their own beast, rather deeply
      dependent on MockTimeEnv. Each gets its own variant of a work-around for
      TimedWait in a mock time environment. (Reduces redundancy and
      inconsistency in stats_history_test.)
      
      Intended follow-up:
      
      Remove TimedWait from the public API of InstrumentedCondVar, and only
      make that accessible through Env by passing in an InstrumentedCondVar and
      a deadline. Then the Env implementations mocking time can fix this problem
      without using sync points. (Test infrastructure using sync points interferes
      with individual tests' control over sync points.)
      
      With that change, we can simplify/consolidate the scattered work-arounds.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7101
      
      Test Plan: make check on Linux and MacOS
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D23032815
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 7f33967ada8b83011fb54e8279365c008bd6610b
  8. 11 Aug 2020 (1 commit)
  9. 23 Jul 2020 (1 commit)
    • Clean snapshot dir before taking snapshot (#7156) · 96ce0470
      Committed by Cheng Chang
      Summary:
      `DBTest::SnapshotFiles` runs the tests in a `while` loop.
      Currently, the snapshot directory is not cleaned up in each loop, so previous snapshot files may remain in the next loop's snapshot.
      While working on https://github.com/facebook/rocksdb/pull/7129, when checking the tracked WALs in MANIFEST, I found that this test always fails because it reads some unknown WAL. It turns out that the unknown WAL is left over from previous loops.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7156
      
      Test Plan: make db_test && ./db_test --gtest_filter=*SnapshotFiles
      
      Reviewed By: siying
      
      Differential Revision: D22668360
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 69d4aa3506038ba30e218e8ae966357935a99c6c
  10. 10 Jul 2020 (1 commit)
    • More Makefile Cleanup (#7097) · c7c7b07f
      Committed by mrambacher
      Summary:
      Cleans up some of the dependencies on test code in the Makefile while building tools:
      - Moves the test::RandomString, DBBaseTest::RandomString into Random
      - Moves the test::RandomHumanReadableString into Random
      - Moves the DestroyDir method into file_utils
      - Moves the SetupSyncPointsToMockDirectIO into sync_point.
      - Moves the FaultInjection Env and FS classes under env
      
      These changes allow all of the tools to build without dependencies on test_util, thereby simplifying the build dependencies.  By moving the FaultInjection code, the dependency in db_stress on different libraries for debug vs release was eliminated.
      
      Tested both release and debug builds via Make and CMake for both static and shared libraries.
      
      More work remains to clean up how the tools are built and remove some unnecessary dependencies.  There is also more work that should be done to get the Makefile and CMake to align in their builds -- what is in the libraries and the sizes of the executables are different.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7097
      
      Reviewed By: riversand963
      
      Differential Revision: D22463160
      
      Pulled By: pdillinger
      
      fbshipit-source-id: e19462b53324ab3f0b7c72459dbc73165cc382b2
  11. 03 Jul 2020 (3 commits)
    • Replace reinterpret_cast with static_cast_with_check (#7067) · 00de6990
      Committed by Jay Zhuang
      Summary:
      Replace `reinterpret_cast` with `static_cast_with_check` for `DBImpl` and `ColumnFamilyHandleImpl`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7067
      
      Reviewed By: siying
      
      Differential Revision: D22361587
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: dfe9e8f3af39c3d27cc372c55ab9ad905eb0a5a1
    • BackupEngine verifies table file checksums on creating new backups (#7015) · 373d5ac4
      Committed by Zitan Chen
      Summary:
      When table file checksums are enabled and stored in the DB manifest by using the RocksDB default crc32c checksum function, BackupEngine will calculate the crc32c checksum of the file to be copied and compare the calculated result with the one stored in the DB manifest before copying the file to the backup directory.
      
      After copying to the backup directory, BackupEngine will verify the checksum of the copied file with the one calculated before copying. This helps detect some rare corruption events such as bit-flips during the copying process.
      
      No verification with checksums in DB manifest will be performed if the table file checksum function is not the RocksDB default crc32c checksum function.
      
      In addition, if `share_table_files` and `share_files_with_checksum` are true, BackupEngine will compare the checksums computed before and after copying of the table files.
      
      Corresponding tests are added.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7015
      
      Test Plan: Passed make check
      
      Reviewed By: pdillinger
      
      Differential Revision: D22165732
      
      Pulled By: gg814
      
      fbshipit-source-id: ee0e8cc397c455eba64545c29380b9d9853588ec
    • Revert "Whole DBTest to skip fsync (#7049)" (#7070) · 52d59e0c
      Committed by Peter Dillinger
      Summary:
      This reverts commit 4f1534bd.
      
      This commit caused failures and deadlocks in
      MultiThreadedDBTest.MultiThreaded/69 and others.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7070
      
      Reviewed By: riversand963
      
      Differential Revision: D22358778
      
      Pulled By: pdillinger
      
      fbshipit-source-id: faf8f2cb469a7063a113921c8e9c64a9f7610dac
  12. 02 Jul 2020 (2 commits)
  13. 01 Jul 2020 (1 commit)
    • Clean up blob files based on the linked SST set (#7001) · e367bc7f
      Committed by Levi Tamasi
      Summary:
      The earlier `VersionBuilder` code only cleaned up blob files that were
      marked as entirely consisting of garbage using `VersionEdits` with
      `BlobFileGarbage`. This covers the cases when table files go through
      regular compaction, where we iterate through the KVs and thus have an
      opportunity to calculate the amount of garbage (that is, most cases).
      However, it does not help when table files are simply dropped (e.g. deletion
      compactions or the `DeleteFile` API). To deal with such cases, the patch
      adds logic that cleans up all blob files at the head of the list until the first
      one with linked SSTs is found. (As an example, let's assume we have blob files
      with numbers 1..10, and the first one with any linked SSTs is number 8.
      This means that SSTs in the `Version` only rely on blob files with numbers >= 8,
      and thus 1..7 are no longer needed.)
      
      The code change itself is pretty small; however, changing the logic like this
      necessitated changes to some tests that have been added recently (namely
      to the ones that use blob files in isolation, i.e. without any table files referring
      to them). Some of these cases were fixed by bypassing `VersionBuilder` altogether
      in order to keep the tests simple (which actually makes them more proper unit tests
      as well), while the `VersionBuilder` unit tests were fixed by adding dummy table
      files to the test cases as needed.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7001
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D22119474
      
      Pulled By: ltamasi
      
      fbshipit-source-id: c6547141355667d4291d9661d6518eb741e7b54a
  14. 30 Jun 2020 (1 commit)
    • Disable fsync in some tests to speed them up (#7036) · 58547e53
      Committed by sdong
      Summary:
      Fsyncing files does not provide more test coverage in many tests. Provide an option in SpecialEnv to turn fsync off to speed tests up, and enable this option in some tests with relatively long run times.
      Most of those tests could also be split into parameterized gtests. These two speed-up approaches are orthogonal, and we can do both if needed.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7036
      
      Test Plan: Run all tests and make sure they pass.
      
      Reviewed By: ltamasi
      
      Differential Revision: D22268084
      
      fbshipit-source-id: 6d4a838a1b7328c13931a2a5d93de57aa02afaab
  15. 25 Jun 2020 (1 commit)
    • First step towards handling MANIFEST write error (#6949) · e66199d8
      Committed by Yanqin Jin
      Summary:
      This PR provides preliminary support for handling IO error during MANIFEST write.
      File write/sync is not guaranteed to be atomic. If we encounter an IOError while writing/syncing to the MANIFEST file, we cannot be sure about the state of the MANIFEST file. The version edits may or may not have reached the file. During cleanup, if we delete the newly-generated SST files referenced by the pending version edit(s), but the version edit(s) actually are persistent in the MANIFEST, then the next recovery attempt will process the version edit(s) and then fail since the SST files have already been deleted.
      One approach is to truncate the MANIFEST after write/sync error, so that it is safe to delete the SST files. However, file truncation may not be supported on certain file systems. Therefore, we take the following approach.
      If an IOError is detected during MANIFEST write/sync, we disable file deletions for the faulty database. Depending on whether the IOError is retryable (set by underlying file system), either RocksDB or application can call `DB::Resume()`, or simply shutdown and restart. During `Resume()`, RocksDB will try to switch to a new MANIFEST and write all existing in-memory version storage in the new file. If this succeeds, then RocksDB may proceed. If all recovery is completed, then file deletions will be re-enabled.
      Note that multiple threads can call `LogAndApply()` at the same time, though only one of them will actually perform the MANIFEST write, possibly batching the version edits of other threads. When the leading MANIFEST writer finishes, all of the MANIFEST writing threads in this batch will have the same IOError. They will all call `ErrorHandler::SetBGError()`, in which file deletion will be disabled.
      
      Possible future directions:
      - Add an `ErrorContext` structure so that it is easier to pass more info to `ErrorHandler`. Currently, as in this example, a new `BackgroundErrorReason` has to be added.
      
      Test plan (dev server):
      make check
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6949
      
      Reviewed By: anand1976
      
      Differential Revision: D22026020
      
      Pulled By: riversand963
      
      fbshipit-source-id: f3c68a2ef45d9b505d0d625c7c5e0c88495b91c8
  16. 16 Jun 2020 (1 commit)
    • Add a DB Session ID (#6959) · 88db97b0
      Committed by Zitan Chen
      Summary:
      Added DB::GetDbSessionId by using the same format and machinery as DB::GetDbIdentity.
      The DB Session ID is generated (and therefore, updated) each time a DB object is opened. It is written to the LOG file right after the line of “DB SUMMARY”.
      A test for the uniqueness, for different openings and during the same opening, is also added.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6959
      
      Test Plan: Passed make check
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D21951721
      
      Pulled By: gg814
      
      fbshipit-source-id: 958a48a612db49a39998ea703cded45987d3fa8b
  17. 04 Jun 2020 (1 commit)
  18. 03 Jun 2020 (1 commit)
    • For ApproximateSizes, pro-rate table metadata size over data blocks (#6784) · 14eca6bf
      Committed by Peter Dillinger
      Summary:
      The implementation of GetApproximateSizes was inconsistent in
      its treatment of the size of non-data blocks of SST files, sometimes
      including them and sometimes not. This was at its worst when a large
      portion of a table file was used by filters and a small queried range
      crossed a table boundary: the size estimate would include the large filter size.
      
      It's conceivable that someone might want only to know the size in terms
      of data blocks, but I believe that's unlikely enough to ignore for now.
      Similarly, there's no evidence the internal function ApproximateOffsetOf
      is used for anything other than a one-sided ApproximateSize, so I intend
      to refactor to remove redundancy in a follow-up commit.
      
      So to fix this, GetApproximateSizes (and implementation details
      ApproximateSize and ApproximateOffsetOf) now consistently include in
      their returned sizes a portion of table file metadata (incl filters
      and indexes) based on the size portion of the data blocks in range. In
      other words, if a key range covers data blocks that are X% by size of all
      the table's data blocks, returned approximate size is X% of the total
      file size. It would technically be more accurate to attribute metadata
      based on number of keys, but that's not computationally efficient with
      data available and rarely a meaningful difference.
      
      Also includes miscellaneous comment improvements / clarifications.
      
      Also included is a new approximatesizerandom benchmark for db_bench.
      No significant performance difference seen with this change, whether ~700 ops/sec with cache_index_and_filter_blocks and small cache or ~150k ops/sec without cache_index_and_filter_blocks.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6784
      
      Test Plan:
      Test added to DBTest.ApproximateSizesFilesWithErrorMargin.
      Old code running new test...
      
          [ RUN      ] DBTest.ApproximateSizesFilesWithErrorMargin
          db/db_test.cc:1562: Failure
          Expected: (size) <= (11 * 100), actual: 9478 vs 1100
      
      Other tests updated to reflect consistent accounting of metadata.
      
      Reviewed By: siying
      
      Differential Revision: D21334706
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 6f86870e45213334fedbe9c73b4ebb1d8d611185
  19. 02 Jun 2020 (1 commit)
  20. 05 May 2020 (1 commit)
    • Expose the set of live blob files from Version/VersionSet (#6785) · a00ddf15
      Committed by Levi Tamasi
      Summary:
      The patch adds logic that returns the set of live blob files from
      `Version::AddLiveFiles` and `VersionSet::AddLiveFiles` (in addition to
      live table files), and also cleans up the code a bit, for example, by
      exposing only the numbers of table files as opposed to the earlier
      `FileDescriptor`s that no clients used. Moreover, the patch extends
      the `GetLiveFiles` API so that it also exposes blob files in the current version.
      Similarly to https://github.com/facebook/rocksdb/pull/6755,
      this is a building block for identifying and purging obsolete blob files.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6785
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D21336210
      
      Pulled By: ltamasi
      
      fbshipit-source-id: fc1aede8a49eacd03caafbc5f6f9ce43b6270821
  21. 21 Apr 2020 (1 commit)
    • C++20 compatibility (#6697) · 31da5e34
      Committed by Peter Dillinger
      Summary:
      Based on https://github.com/facebook/rocksdb/issues/6648 (CLA Signed), but heavily modified / extended:
      
      * Implicit capture of this via [=] deprecated in C++20, and [=,this] not standard before C++20 -> now using explicit capture lists
      * Implicit copy operator deprecated in gcc 9 -> add explicit '= default' definition
      * std::random_shuffle deprecated in C++17 and removed in C++20 -> migrated to a replacement in RocksDB random.h API
      * Add the ability to build with different std versions through -DCMAKE_CXX_STANDARD=11/14/17/20 on the cmake command line
      * Minimal rebuild flag of MSVC is deprecated and is forbidden with /std:c++latest (C++20)
      * Added MSVC 2019 C++11 & MSVC 2019 C++20 in AppVeyor
      * Added GCC 9 C++11 & GCC9 C++20 in Travis
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6697
      
      Test Plan: make check and CI
      
      Reviewed By: cheng-chang
      
      Differential Revision: D21020318
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 12311be5dbd8675a0e2c817f7ec50fa11c18ab91
  22. 11 Apr 2020 (1 commit)
    • Report kFilesMarkedForCompaction for delete triggered compactions (#6680) · a0faff12
      Committed by Akanksha Mahajan
      Summary:
      Set manual_compaction to false in the case of a DeleteTriggeredCompaction object so that kFilesMarkedForCompaction can be reported.
      Added a DeletionTriggeredUniversalCompactionMarking test case for delete-triggered compaction in the case of universal compaction.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6680
      
      Test Plan: make check -j64
      
      Reviewed By: anand1976
      
      Differential Revision: D20945946
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: af84e417bd7127652aaae9143c560d1ab3815d25
  23. 14 Mar 2020 (1 commit)
  24. 13 Mar 2020 (1 commit)
  25. 05 Mar 2020 (1 commit)
    • Skip high levels with no key falling in the range in CompactRange (#6482) · afb97094
      Committed by Cheng Chang
      Summary:
      In CompactRange, if no key in the memtable falls in the specified range, the flush is skipped.
      This PR extends this skipping logic to the SST file levels: compaction starts from the highest level (searching downward from L0) that has files with keys falling in the specified range, instead of always starting from L0.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6482
      
      Test Plan:
      A new test ManualCompactionTest::SkipLevel is added.
      
      Also updated a test related to statistics of index block cache hit in db_test2, the index cache hit is increased by 1 in this PR because when checking overlap for the key range in L0, OverlapWithLevelIterator will do a seek in the table cache iterator, which will read from the cached index.
      
      Also updated db_compaction_test and db_test to use correct range for full compaction.
      
      Differential Revision: D20251149
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: f822157cf4796972bd5035d9d7178d8dfb7af08b
  26. 21 Feb 2020 (1 commit)
    • Replace namespace name "rocksdb" with ROCKSDB_NAMESPACE (#6433) · fdf882de
      Committed by sdong
      Summary:
      When dynamically linking two binaries together, different builds of RocksDB from two sources might cause errors. To give users a tool to solve the problem, the RocksDB namespace is changed to a flag which can be overridden at build time.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6433
      
      Test Plan: Build release, all and jtest. Try to build with ROCKSDB_NAMESPACE with another flag.
      
      Differential Revision: D19977691
      
      fbshipit-source-id: aa7f2d0972e1c31d75339ac48478f34f6cfcfb3e
  27. 05 Feb 2020 (1 commit)
  28. 28 Jan 2020 (1 commit)
  29. 08 Jan 2020 (1 commit)
  30. 27 Nov 2019 (2 commits)
    • Support options.max_open_files = -1 with periodic_compaction_seconds (#6090) · aa1857e2
      Committed by sdong
      Summary:
      options.periodic_compaction_seconds isn't supported when options.max_open_files != -1. This is because the information about file creation time is stored in table properties and is not guaranteed to be loaded unless options.max_open_files = -1. Relax this constraint by storing the information in the manifest.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6090
      
      Test Plan: Pass all existing tests; Modify an existing test to force the manifest value to take 0 to simulate backward compatibility case; manually open the DB generated with the change by release 4.2.
      
      Differential Revision: D18702268
      
      fbshipit-source-id: 13e0bd94f546498a04f3dc5fc0d9dff5125ec9eb
    • Make default value of options.ttl to be 30 days when it is supported. (#6073) · 77eab5c8
      Committed by sdong
      Summary:
      By default, options.ttl is disabled. We believe a better default is 30 days, which means deleted data will be removed from SST files slightly after 30 days, for most cases.
      
      Make the default UINT64_MAX - 1 to indicate that it is not overridden by users.
      
      Change periodic_compaction_seconds's unset sentinel from UINT64_MAX to UINT64_MAX - 1 as well, to be consistent. Also fix a small bug in the previous periodic_compaction_seconds default code.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6073
      
      Test Plan: Add unit tests for it.
      
      Differential Revision: D18669626
      
      fbshipit-source-id: 957cd4374cafc1557d45a0ba002010552a378cc8
  31. 23 Nov 2019 (1 commit)
    • Support options.ttl with options.max_open_files = -1 (#6060) · d8c28e69
      Committed by sdong
      Summary:
      Previously, options.ttl could not be set with options.max_open_files != -1, because it makes use of the creation_time field in table properties, which is not available unless max_open_files = -1. With this commit, the information is stored in the manifest and, when available, is used instead.
      
      Note that, this change will break forward compatibility for release 5.1 and older.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6060
      
      Test Plan: Extend existing test case to options.max_open_files != -1, and simulate backward compatibility in one test case by forcing the value to be 0.
      
      Differential Revision: D18631623
      
      fbshipit-source-id: 30c232a8672de5432ce9608bb2488ecc19138830
  32. 08 Nov 2019 (1 commit)
    • Add file number/oldest referenced blob file number to {Sst,Live}FileMetaData (#6011) · f80050fa
      Committed by Levi Tamasi
      Summary:
      The patch exposes the file numbers of the SSTs as well as the oldest blob
      files they contain a reference to through the GetColumnFamilyMetaData/
      GetLiveFilesMetaData interface.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6011
      
      Test Plan:
      Fixed and extended the existing unit tests. (The earlier ColumnFamilyMetaDataTest
      wasn't really testing anything because the generated memtables were never
      flushed, so the metadata structure was essentially empty.)
      
      Differential Revision: D18361697
      
      Pulled By: ltamasi
      
      fbshipit-source-id: d5ed1d94ac70858b84393c48711441ddfe1251e9
  33. 26 Oct 2019 (2 commits)
  34. 21 Sep 2019 (1 commit)