1. 10 Mar 2021, 5 commits
    • Refactor: add LineFileReader and Status::MustCheck (#8026) · 4b18c46d
      Peter Dillinger committed
      Summary:
      Removed confusing, awkward, and undocumented internal API
      ReadOneLine and replaced with very simple LineFileReader.
      
      In refactoring backupable_db.cc, this has the side benefit of
      removing the arbitrary cap on the size of backup metadata files.
      
      Also added Status::MustCheck to make it easy to mark a Status as
      "must check." Using this, I can ensure that after
      LineFileReader::ReadLine returns false the caller checks GetStatus().
      
      Also removed some excessive conditional compilation in status.h
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8026
      
Test Plan: added a unit test, and ran tests with ASSERT_STATUS_CHECKED
      
      Reviewed By: mrambacher
      
      Differential Revision: D26831687
      
      Pulled By: pdillinger
      
      fbshipit-source-id: ef749c265a7a26bb13cd44f6f0f97db2955f6f0f
    • Make default share_files_with_checksum=true (#8020) · 847ca9f9
      Peter Dillinger committed
      Summary:
      New comment for share_files_with_checksum:
      // Only used if share_table_files is set to true. Setting to false is
      // DEPRECATED and potentially dangerous because in that case BackupEngine
      // can lose data if backing up databases with distinct or divergent
      // history, for example if restoring from a backup other than the latest,
      // writing to the DB, and creating another backup. Setting to true (default)
      // prevents these issues by ensuring that different table files (SSTs) with
      // the same number are treated as distinct. See
      // share_files_with_checksum_naming and ShareFilesNaming.
      
      I have also removed interim option kFlagMatchInterimNaming, which is no
      longer needed and was never needed for correct+compatible operation
      (just performance).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8020
      
      Test Plan:
      tests updated. Backward+forward compatibility verified with
      SHORT_TEST=1 check_format_compatible.sh. ldb uses default backup
      options, and I manually verified shared_checksum in
      /tmp/rocksdb_format_compatible_peterd/bak/current/ after run.
      
      Reviewed By: ajkr
      
      Differential Revision: D26786331
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 36f968dfef1f5cacbd65154abe1d846151a55130
    • Make format_version=5 new default (#8017) · 0028e339
      Peter Dillinger committed
      Summary:
      Haven't seen any production issues with new Bloom filter and
      it's now > 1 year old (added in 6.6.0).
      
      Updated check_format_compatible.sh and HISTORY.md
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8017
      
      Test Plan: tests updated (or prior bugs fixed)
      
      Reviewed By: ajkr
      
      Differential Revision: D26762197
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 0e755c46b443087c1544da0fd545beb9c403d1c2
    • Java-API: Missing space in string literal (#7982) · 430842f9
      stefan-zobel committed
      Summary:
      `TtlDB.open()`: missing space after 'column'
      `AdvancedColumnFamilyOptionsInterface.setLevelCompactionDynamicLevelBytes()`: missing space after 'cause'
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7982
      
      Reviewed By: ajkr
      
      Differential Revision: D26546632
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 885dedcaa2200842764fbac9ce3766d54e1c8914
    • Add ${ARTIFACT_SUFFIX} to benchmark tools built with cmake (#8016) · 8643d63b
      xinyuliu committed
      Summary:
      Add ${ARTIFACT_SUFFIX} to benchmark tool names to enable differentiating jemalloc and non-jemalloc versions.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8016
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26907007
      
      Pulled By: ajkr
      
      fbshipit-source-id: 78d3b3372b5454d52d5b663ea982135ea9cf7bf8
  2. 09 Mar 2021, 5 commits
  3. 06 Mar 2021, 1 commit
    • Clarifying comments for Read() APIs (#8029) · ce391ff8
      Peter Dillinger committed
      Summary:
      I recently discovered the confusing, undocumented semantics of
      Read() functions in the FileSystem and Env APIs. I have added
      clarification to the best of my reverse-engineered understanding, and
      made a note in HISTORY.md for implementors to check their
      implementations, as a subtly non-adherent implementation could lead to
      RocksDB quietly ignoring some portion of a file.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8029
      
      Test Plan: no code changes
      
      Reviewed By: anand1976
      
      Differential Revision: D26831698
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 208f97ff6037bc13bb2ef360b987c2640c79bd03
  4. 04 Mar 2021, 2 commits
    • Update compaction statistics to include the amount of data read from blob files (#8022) · cb25bc11
      Levi Tamasi committed
      Summary:
      The patch does the following:
      1) Exposes the amount of data (number of bytes) read from blob files from
      `BlobFileReader::GetBlob` / `Version::GetBlob`.
      2) Tracks the total number and size of blobs read from blob files during a
      compaction (due to garbage collection or compaction filter usage) in
      `CompactionIterationStats` and propagates this data to
      `InternalStats::CompactionStats` / `CompactionJobStats`.
      3) Updates the formulae for write amplification calculations to include the
      amount of data read from blob files.
      4) Extends the compaction stats dump with a new column `Rblob(GB)` and
      a new line containing the total number and size of blob files in the current
      `Version` to complement the information about the shape and size of the LSM tree
      that's already there.
      5) Updates `CompactionJobStats` so that the number of files and amount of data
      written by a compaction are broken down per file type (i.e. table/blob file).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8022
      
      Test Plan: Ran `make check` and `db_bench`.
      
      Reviewed By: riversand963
      
      Differential Revision: D26801199
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 28a5f072048a702643b28cb5971b4099acabbfb2
    • Feature: add SetBufferSize() so that managed size can be dynamic (#7961) · 4126bdc0
      matthewvon committed
      Summary:
      This PR adds SetBufferSize() to the WriteBufferManager object.  This enables user code to adjust the global budget for write_buffers based upon other memory conditions such as growth in table reader memory as the dataset grows.
      
      The buffer_size_ member variable is now atomic to match design of other changeable size_t members within WriteBufferManager.
      
This change is useful as is.  It is also a prerequisite if db_write_buffer_size is ever made modifiable through the DB::SetOptions() API, so nothing is wasted by taking it as is.
      
      Any format / spacing changes are due to clang-format as required by check-in automation.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7961
      
      Reviewed By: ajkr
      
      Differential Revision: D26639075
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 0604348caf092d35f44e85715331dc920e5c1033
  5. 03 Mar 2021, 3 commits
    • Possibly bump NUMBER_OF_RESEEKS_IN_ITERATION (#8015) · 72d1e258
      Yanqin Jin committed
      Summary:
      When changing db iterator direction, we may perform a reseek.
      Therefore, we should bump the NUMBER_OF_RESEEKS_IN_ITERATION counter.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8015
      
      Test Plan: make check
      
      Reviewed By: ltamasi
      
      Differential Revision: D26755415
      
      Pulled By: riversand963
      
      fbshipit-source-id: 211f51f1a454bcda768fc46c0dce51edeb7f05fe
    • Revamp check_format_compatible.sh (#8012) · a9046f3c
      Peter Dillinger committed
      Summary:
      * Adds backup/restore forward/backward compatibility testing
      * Adds forward/backward compatibility testing to sst ingestion
      * More structure sharing and comments for the lists of branches
      comprising each group
      * Less reliant on invariants between groups with de-duplication logic
      * Restructured for n+1 branch checkout+build steps rather than something
      like 3n. Should be much faster despite more checks.
      
      And to make manual runs easier
      
      * On success, restores working trees to original working branch (aborts
      early if uncommitted changes) and deletes temporary branch & remote
* Adds SHORT_TEST=1 mode that uses only the oldest version for each group
* Adds USE_SSH=1 to use ssh instead of https for github
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8012
      
      Test Plan:
      a number of manual tests, mostly with SHORT_TEST=1. Using one
      version older for any of the groups (except I didn't check
      db_backward_only_refs) fails. Changing default format_version to 5
      (planned) without updating this script fails as it should, and passes
      with appropriate update. Full local run passed (had to remove "2.7.fb.branch"
      due to compiler issues, also before this change).
      
      Reviewed By: riversand963
      
      Differential Revision: D26735840
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 1320c22de5674760657e385aa42df9fade8b6fff
    • Break down the amount of data written during flushes/compactions per file type (#8013) · a46f080c
      Levi Tamasi committed
      Summary:
      The patch breaks down the "bytes written" (as well as the "number of output files")
      compaction statistics into two, so the values are logged separately for table files
      and blob files in the info log, and are shown in separate columns (`Write(GB)` for table
      files, `Wblob(GB)` for blob files) when the compaction statistics are dumped.
      This will also come in handy for fixing the write amplification statistics, which currently
      do not consider the amount of data read from blob files during compaction. (This will
      be fixed by an upcoming patch.)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8013
      
      Test Plan: Ran `make check` and `db_bench`.
      
      Reviewed By: riversand963
      
      Differential Revision: D26742156
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 31d18ee8f90438b438ca7ed1ea8cbd92114442d5
  6. 02 Mar 2021, 2 commits
    • Support retrieving checksums for blob files from the MANIFEST when checkpointing (#8003) · f1961297
      Akanksha Mahajan committed
      Summary:
      The checkpointing logic supports passing file level checksums
      to the copy_file_cb callback function which is used by the backup code
      for detecting corruption during file copies.
      However, this is currently implemented only for table files.
      
      This PR extends the checksum retrieval to blob files as well.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8003
      
      Test Plan: Add new test units
      
      Reviewed By: ltamasi
      
      Differential Revision: D26680701
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 1bd1e2464df6e9aa31091d35b8c72786d94cd1c5
    • Enable compact filter for blob in dbstress and dbbench (#8011) · 1f11d07f
      Yanqin Jin committed
      Summary:
      As title.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8011
      
      Test Plan:
      ```
      ./db_bench -enable_blob_files=1 -use_keep_filter=1 -disable_auto_compactions=1
./db_stress -enable_blob_files=1 -enable_compaction_filter=1 -acquire_snapshot_one_in=0 -compact_range_one_in=0 -iterpercent=0 -test_batches_snapshots=0 -readpercent=10 -prefixpercent=20 -writepercent=55 -delpercent=15 -continuous_verification_interval=0
      ```
      
      Reviewed By: ltamasi
      
      Differential Revision: D26736061
      
      Pulled By: riversand963
      
      fbshipit-source-id: 1c7834903c28431ce23324c4f259ed71255614e2
  7. 27 Feb 2021, 2 commits
    • Still use SystemClock* instead of shared_ptr in PerfStepTimer (#8006) · 9fdc9fbe
      Yanqin Jin committed
      Summary:
This is likely a temporary fix until we figure out a better way.

PerfStepTimer is used intensively in certain benchmarking/testing scenarios. https://github.com/facebook/rocksdb/issues/7858 stores a `shared_ptr` to the system clock in PerfStepTimer, which gets copied each time a `PerfStepTimer` object is created. The atomic operations in `shared_ptr` may add overhead in CPU cycles. Therefore, we change it back to a raw `SystemClock*` for now.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8006
      
      Test Plan: make check
      
      Reviewed By: pdillinger
      
      Differential Revision: D26703560
      
      Pulled By: riversand963
      
      fbshipit-source-id: 519d0769b28da2334bea7d86c848fcc26ee8a17f
    • Refine Ribbon configuration, improve testing, add Homogeneous (#7879) · a8b3b9a2
      Peter Dillinger committed
      Summary:
      This change only affects non-schema-critical aspects of the production candidate Ribbon filter. Specifically, it refines choice of internal configuration parameters based on inputs. The changes are minor enough that the schema tests in bloom_test, some of which depend on this, are unaffected. There are also some minor optimizations and refactorings.
      
      This would be a schema change for "smash" Ribbon, to fix some known issues with small filters, but "smash" Ribbon is not accessible in public APIs. Unit test CompactnessAndBacktrackAndFpRate updated to test small and medium-large filters. Run with --thoroughness=100 or so for much better detection power (not appropriate for continuous regression testing).
      
Homogeneous Ribbon:
      This change adds internally a Ribbon filter variant we call Homogeneous Ribbon, in collaboration with Stefan Walzer. The expected "result" value for every key is zero, instead of computed from a hash. Entropy for queries not to be false positives comes from free variables ("overhead") in the solution structure, which are populated pseudorandomly. Construction is slightly faster for not tracking result values, and never fails. Instead, FP rate can jump up whenever and whereever entries are packed too tightly. For small structures, we can choose overhead to make this FP rate jump unlikely, as seen in updated unit test CompactnessAndBacktrackAndFpRate.
      
      Unlike standard Ribbon, Homogeneous Ribbon seems to scale to arbitrary number of keys when accepting an FP rate penalty for small pockets of high FP rate in the structure. For example, 64-bit ribbon with 8 solution columns and 10% allocated space overhead for slots seems to achieve about 10.5% space overhead vs. information-theoretic minimum based on its observed FP rate with expected pockets of degradation. (FP rate is close to 1/256.) If targeting a higher FP rate with fewer solution columns, Homogeneous Ribbon can be even more space efficient, because the penalty from degradation is relatively smaller. If targeting a lower FP rate, Homogeneous Ribbon is less space efficient, as more allocated overhead is needed to keep the FP rate impact of degradation relatively under control. The new OptimizeHomogAtScale tool in ribbon_test helps to find these optimal allocation overheads for different numbers of solution columns. And Ribbon widths, with 128-bit Ribbon apparently cutting space overheads in half vs. 64-bit.
      
      Other misc item specifics:
      * Ribbon APIs in util/ribbon_config.h now provide configuration data for not just 5% construction failure rate (95% success), but also 50% and 0.1%.
        * Note that the Ribbon structure does not exhibit "threshold" behavior as standard Xor filter does, so there is a roughly fixed space penalty to cut construction failure rate in half. Thus, there isn't really an "almost sure" setting.
        * Although we can extrapolate settings for large filters, we don't have a good formula for configuring smaller filters (< 2^17 slots or so), and efforts to summarize with a formula have failed. Thus, small data is hard-coded from updated FindOccupancy tool.
      * Enhances ApproximateNumEntries for public API Ribbon using more precise data (new API GetNumToAdd), thus a more accurate but not perfect reversal of CalculateSpace. (bloom_test updated to expect the greater precision)
      * Move EndianSwapValue from coding.h to coding_lean.h to keep Ribbon code easily transferable from RocksDB
      * Add some missing 'const' to member functions
      * Small optimization to 128-bit BitParity
      * Small refactoring of BandingStorage in ribbon_alg.h to support Homogeneous Ribbon
      * CompactnessAndBacktrackAndFpRate now has an "expand" test: on construction failure, a possible alternative to re-seeding hash functions is simply to increase the number of slots (allocated space overhead) and try again with essentially the same hash values. (Start locations will be different roundings of the same scaled hash values--because fastrange not mod.) This seems to be as effective or more effective than re-seeding, as long as we increase the number of slots (m) by roughly m += m/w where w is the Ribbon width. This way, there is effectively an expansion by one slot for each ribbon-width window in the banding. (This approach assumes that getting "bad data" from your hash function is as unlikely as it naturally should be, e.g. no adversary.)
      * 32-bit and 16-bit Ribbon configurations are added to ribbon_test for understanding their behavior, e.g. with FindOccupancy. They are not considered useful at this time and not tested with CompactnessAndBacktrackAndFpRate.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7879
      
      Test Plan: unit test updates included
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26371245
      
      Pulled By: pdillinger
      
      fbshipit-source-id: da6600d90a3785b99ad17a88b2a3027710b4ea3a
  8. 26 Feb 2021, 2 commits
    • Remove unused/incorrect fwd declaration (#8002) · c370d8aa
      Yanqin Jin committed
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8002
      
      Reviewed By: anand1976
      
      Differential Revision: D26659354
      
      Pulled By: riversand963
      
      fbshipit-source-id: 6b464dbea9fd8240ead8cc5af393f0b78e8f9dd1
    • Compaction filter support for (new) BlobDB (#7974) · cef4a6c4
      Yanqin Jin committed
      Summary:
      Allow applications to implement a custom compaction filter and pass it to BlobDB.
      
      The compaction filter's custom logic can operate on blobs.
To do so, the application needs to subclass the `CompactionFilter` abstract class and implement the `FilterV2()` method.
Optionally, a method called `ShouldFilterBlobByKey()` can be implemented if the application's custom logic relies solely
on the key to make a decision without reading the blob, thus saving extra IO. Examples can be found in
db/blob/db_blob_compaction_test.cc.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7974
      
      Test Plan: make check
      
      Reviewed By: ltamasi
      
      Differential Revision: D26509280
      
      Pulled By: riversand963
      
      fbshipit-source-id: 59f9ae5614c4359de32f4f2b16684193cc537b39
  9. 25 Feb 2021, 1 commit
  10. 24 Feb 2021, 5 commits
    • Append all characters not captured by xsputn() in overflow() function (#7991) · b085ee13
      xinyuliu committed
      Summary:
In the adapter class `WritableFileStringStreamAdapter`, which wraps WritableFile for use as a std::ostream, previously only `std::endl` was considered a special case, because `endl` is written by `os.put()` directly without going through `xsputn()`. `os.put()` calls `sputc()`, and if we look further at the internal implementation of `sputc()`, we see it is
      ```
      int_type __CLR_OR_THIS_CALL sputc(_Elem _Ch) {  // put a character
          return 0 < _Pnavail() ? _Traits::to_int_type(*_Pninc() = _Ch) : overflow(_Traits::to_int_type(_Ch));
      ```
As we explicitly disabled buffering, `_Pnavail()` is always 0. Thus every write not captured by `xsputn()` becomes an overflow.
      
When I ran tests on Windows, I found that not only does `std::endl` fall into this case: writing an unsigned long long also calls `os.put()` followed by `sputc()`, eventually calling `overflow()`. Therefore, instead of only checking for `std::endl`, we should try to append the other characters as well, unless the append operation fails.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7991
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26615692
      
      Pulled By: ajkr
      
      fbshipit-source-id: 4c0003de1645b9531545b23df69b000e07014468
    • Make BlockBasedTable::kMaxAutoReadAheadSize configurable (#7951) · cd79a009
      Akanksha Mahajan committed
      Summary:
RocksDB does auto-readahead for iterators upon noticing more than two
reads for a table file. The readahead starts at 8KB and doubles on every
additional read, up to BlockBasedTable::kMaxAutoReadAheadSize, which is
256*1024.
This PR adds a new option, BlockBasedTableOptions::max_auto_readahead_size, which
replaces BlockBasedTable::kMaxAutoReadAheadSize and can be configured.
If max_auto_readahead_size is set to 0, no implicit auto-prefetching is
done. If the provided max_auto_readahead_size is less than
8KB (the initial readahead size RocksDB uses for
auto-readahead), the readahead size stays capped at max_auto_readahead_size.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7951
      
      Test Plan: Add new unit test case.
      
      Reviewed By: anand1976
      
      Differential Revision: D26568085
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: b6543520fc74e97d859f2002328d4c5254d417af
    • Fix testcase failures on windows (#7992) · e017af15
      sherriiiliu committed
      Summary:
Fixed 5 test case failures found on Windows 10 / Windows Server 2016:
1. In `flush_job_test`, the DestroyDir function fails in the destructor because some file handles are still being held by VersionSet. This happens on Windows Server 2016, so we need to manually reset the versions_ pointer to release all file handles.
2. In the `StatsHistoryTest.InMemoryStatsHistoryPurging` test, the capping memory cost of stats_history_size on Windows becomes 14000 bytes with the latest changes, not just 13000 bytes.
3. In the `SSTDumpToolTest.RawOutput` test, the output file handle is not closed at the end.
4. In the `FullBloomTest.OptimizeForMemory` test, ROCKSDB_MALLOC_USABLE_SIZE is undefined on Windows, so `total_mem` is always equal to `total_size`. The internal memory fragmentation assertion does not apply in this case.
5. In the `BlockFetcherTest.FetchAndUncompressCompressedDataBlock` test, XPRESS cannot reach an 87.5% compression ratio with the original CreateTable method, so I append extra zeros to the string value to improve the compression ratio. Besides, since XPRESS allocates memory internally and thus does not support custom allocator verification, we skip the allocator verification for XPRESS.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7992
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26615283
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3632612f84b99e2b9c77c403b112b6bedf3b125d
    • Always expose WITH_GFLAGS option to user (#7990) · 75c6ffb9
      sherriiiliu committed
      Summary:
The WITH_GFLAGS option does not work on MSVC.

I checked the usage of [CMAKE_DEPENDENT_OPTION](https://cmake.org/cmake/help/latest/module/CMakeDependentOption.html). It says that if the `depends` condition is not true, it sets the `option` to the value given by `force` and hides the option from the user. Therefore, `CMAKE_DEPENDENT_OPTION(WITH_GFLAGS "build with GFlags" ON "NOT MSVC;NOT MINGW" OFF)` hides the WITH_GFLAGS option from the user when running on MSVC or MINGW, and always sets WITH_GFLAGS to OFF. To expose the WITH_GFLAGS option to the user, I removed CMAKE_DEPENDENT_OPTION and split the logic into if-else statements.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7990
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26615755
      
      Pulled By: ajkr
      
      fbshipit-source-id: 33ca39a73423d9516510c15aaf9efb5c4072cdf9
    • Extract test cases correctly in run_ci_db_test.ps1 script (#7989) · f91fd0c9
      sherriiiliu committed
      Summary:
      Extract test cases correctly in run_ci_db_test.ps1 script.
      
There are some new test groups that end with # comments. Previously, when the script extracted test groups and test cases, the regex rule did not cover this case, so the concatenation of some test groups and test cases failed; see the examples in the comments.

Also removed useless trailing whitespace in the script.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7989
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26615909
      
      Pulled By: ajkr
      
      fbshipit-source-id: 8e68d599994f17d6fefde0daa925c3018179521a
  11. 23 Feb 2021, 2 commits
  12. 22 Feb 2021, 1 commit
    • Attempt to speed up tests by adding test to "slow" tests (#7973) · 59d91796
      mrambacher committed
      Summary:
I noticed tests frequently timing out on CircleCI when I submit a PR.  I did some investigation and found that the SeqAdvanceConcurrentTest suite (OneWriteQueue, TwoWriteQueues) tests were all taking a long time to complete (30 tests, each taking at least 15K ms).
      
      This PR adds those test to the "slow reg" list in order to move them earlier in the execution sequence so that they are not the "long tail".
      
      For completeness, other tests that were also slow are:
      NumLevels/DBTestUniversalCompaction.UniversalCompactionTrivialMoveTest : 12 tests all taking 12K+ ms
      ReadSequentialFileTest with ReadaheadSize: 8 tests all 12K+ ms
      WriteUnpreparedTransactionTest.RecoveryTest : 2 tests at 22K+ ms
      DBBasicTest.EmptyFlush: 1 test at 35K+ ms
      RateLimiterTest.Rate: 1 test at 23K+ ms
      BackupableDBTest.ShareTableFilesWithChecksumsTransition: 1 test at 16K+ ms
MultiThreadedDBTest.MultiThreaded : 78 tests at 10K+ ms
      TransactionStressTest.DeadlockStress: 7 tests at 11K+ ms
      DBBasicTestDeadline.IteratorDeadline: 3 tests at 10K+ ms
      
      No effort was made to determine why the tests were slow.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7973
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26519130
      
      Pulled By: mrambacher
      
      fbshipit-source-id: 11555c9115acc207e45e210a7fc7f879170a3853
  13. 21 Feb 2021, 1 commit
  14. 20 Feb 2021, 5 commits
    • Update HISTORY and bump version (#7984) · 7343eb4a
      Yanqin Jin committed
      Summary:
      Prepare to cut 6.18.fb branch
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7984
      
      Reviewed By: ajkr
      
      Differential Revision: D26557151
      
      Pulled By: riversand963
      
      fbshipit-source-id: 8c144c807090cdae67e6655e7a17056ce8c50bc0
    • Limit buffering for collecting samples for compression dictionary (#7970) · d904233d
      Andrew Kryczka committed
      Summary:
      For dictionary compression, we need to collect some representative samples of the data to be compressed, which we use to either generate or train (when `CompressionOptions::zstd_max_train_bytes > 0`) a dictionary. Previously, the strategy was to buffer all the data blocks during flush, and up to the target file size during compaction. That strategy allowed us to randomly pick samples from as wide a range as possible that'd be guaranteed to land in a single output file.
      
However, some users try to build huge files in memory-constrained environments, where this strategy can cause OOM. This PR introduces an option, `CompressionOptions::max_dict_buffer_bytes`, that limits how much data is buffered before we switch to unbuffered mode (which means creating the per-SST dictionary, writing out the buffered data, and compressing/writing new blocks as soon as they are built). The limit is not strict, as we currently buffer more than just data blocks; keys are buffered as well. But it is a step toward giving users predictable memory usage.
      
      Related changes include:
      
      - Changed sampling for dictionary compression to select unique data blocks when there is limited availability of data blocks
      - Made use of `BlockBuilder::SwapAndReset()` to save an allocation+memcpy when buffering data blocks for building a dictionary
      - Changed `ParseBoolean()` to accept an input containing characters after the boolean. This is necessary since, with this PR, a value for `CompressionOptions::enabled` is no longer necessarily the final component in the `CompressionOptions` string.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7970
      
      Test Plan:
      - updated `CompressionOptions` unit tests to verify limit is respected (to the extent expected in the current implementation) in various scenarios of flush/compaction to bottommost/non-bottommost level
      - looked at jemalloc heap profiles right before and after switching to unbuffered mode during flush/compaction. Verified memory usage in buffering is proportional to the limit set.
      
      Reviewed By: pdillinger
      
      Differential Revision: D26467994
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3da4ef9fba59974e4ef40e40c01611002c861465
    • Avoid self-move-assign in pop operation of binary heap. (#7942) · cf14cb3e
      Max Neunhoeffer committed
      Summary:
      The current implementation of a binary heap in `util/heap.h` does a move-assign in the `pop` method. In the case that there is exactly one element stored in the heap, this ends up being a self-move-assign. This can cause trouble with certain classes, which are not prepared for this. Furthermore, it trips up the glibc STL debugger (`-D_GLIBCXX_DEBUG`), which produces an assertion failure in this case.
      
      This PR addresses this problem by not doing the (unnecessary in this case) move-assign if there is only one element in the heap.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7942
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26528739
      
      Pulled By: ajkr
      
      fbshipit-source-id: 5ca570e0c4168f086b10308ad766dff84e6e2d03
      cf14cb3e
    • T
      gitignore cmake-build-* for CLion integration (#7933) · ec76f031
      tison committed
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/7933
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26529429
      
      Pulled By: ajkr
      
      fbshipit-source-id: 244344b70b1db161f9b224c25fe690c663264d7d
      ec76f031
    • M
      Fix handling of Mutable options; Allow DB::SetOptions to update mutable... · 4bc9df94
      mrambacher committed
      Fix handling of Mutable options; Allow DB::SetOptions to update mutable TableFactory Options (#7936)
      
      Summary:
      Added a "only_mutable_options" flag to the ConfigOptions.  When set, the Configurable methods will only look at/update options that are marked as kMutable.
      
      Fixed DB::SetOptions to allow for the update of any mutable TableFactory options.  Fixes https://github.com/facebook/rocksdb/issues/7385.
      
      Added tests for the new flag.  Updated HISTORY.md
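      The idea behind the mutable-options filter can be sketched as follows. The names here (`OptionInfo`, `SetOption`) are illustrative, not RocksDB's real internals; the point is that each registered option carries a mutability mark and the runtime update path rejects anything not marked mutable:

      ```cpp
      #include <cassert>
      #include <cstdio>
      #include <map>
      #include <string>

      struct OptionInfo {
        std::string value;
        bool mutable_at_runtime;  // stands in for the kMutable mark
      };

      class Configurable {
       public:
        Configurable() {
          options_["block_size"] = {"4096", true};       // mutable
          options_["comparator"] = {"bytewise", false};  // fixed at open
        }
        // Mirrors the DB::SetOptions contract: only mutable entries change.
        bool SetOption(const std::string& name, const std::string& value) {
          auto it = options_.find(name);
          if (it == options_.end() || !it->second.mutable_at_runtime) {
            return false;  // unknown or immutable: reject the update
          }
          it->second.value = value;
          return true;
        }
        const std::string& Get(const std::string& name) const {
          return options_.at(name).value;
        }

       private:
        std::map<std::string, OptionInfo> options_;
      };

      int main() {
        Configurable c;
        assert(c.SetOption("block_size", "16384"));    // mutable: accepted
        assert(!c.SetOption("comparator", "custom"));  // immutable: rejected
        assert(c.Get("block_size") == "16384");
        std::printf("mutable update ok\n");
        return 0;
      }
      ```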
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7936
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D26389646
      
      Pulled By: mrambacher
      
      fbshipit-source-id: 6dc247f6e999fa2814059ebbd0af8face109fea0
      4bc9df94
  15. 19 Feb, 2021 3 commits
    • Z
      Introduce a new trace file format (v 0.2) for better extension (#7977) · b0fd1cc4
      Zhichao Cao committed
      Summary:
      The trace file record and payload encoding were fixed-layout, which required complex backward-compatibility handling whenever anything changed. This PR introduces a new trace file format that makes it easier to add new entries to the payload and avoids backward-compatibility issues. V 0.1 is still supported in this PR. Also added tracing of lower_bound and upper_bound for iterators.
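      The benefit of a self-describing payload over a fixed layout can be shown with a tag-length-value sketch. The field numbers and encoding below are made up for illustration, not the real v0.2 layout; the point is that a reader can skip tags it does not understand, so new fields (e.g. iterator bounds) never break older decoders:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <cstdio>
      #include <map>
      #include <string>
      #include <vector>

      using Payload = std::vector<uint8_t>;

      // Append one tag-length-value field (1-byte tag, 1-byte length).
      static void AppendField(Payload* p, uint8_t tag, const std::string& v) {
        p->push_back(tag);
        p->push_back(static_cast<uint8_t>(v.size()));
        p->insert(p->end(), v.begin(), v.end());
      }

      // Decode all fields, silently skipping nothing it can frame; unknown
      // tags are simply carried through, so the format is extensible.
      static std::map<uint8_t, std::string> Decode(const Payload& p) {
        std::map<uint8_t, std::string> fields;
        size_t i = 0;
        while (i + 2 <= p.size()) {
          uint8_t tag = p[i];
          uint8_t len = p[i + 1];
          if (i + 2 + len > p.size()) break;  // truncated record
          fields[tag] =
              std::string(p.begin() + i + 2, p.begin() + i + 2 + len);
          i += 2 + len;
        }
        return fields;
      }

      int main() {
        Payload p;
        AppendField(&p, 1, "iter_seek_key");
        AppendField(&p, 2, "lower");  // fields an old reader would not know
        AppendField(&p, 3, "upper");
        auto fields = Decode(p);
        assert(fields.size() == 3);
        assert(fields[2] == "lower" && fields[3] == "upper");
        std::printf("decoded %zu fields\n", fields.size());
        return 0;
      }
      ```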
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7977
      
      Test Plan: make check. Tested replay and analysis with old trace files.
      
      Reviewed By: anand1976
      
      Differential Revision: D26529948
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: ebb75a127ce3c07c25a1ccc194c551f917896a76
      b0fd1cc4
    • S
      Fix an assertion failure in range locking, locktree code. (#7938) · c9878baa
      Sergei Petrunia committed
      Summary:
      Fix this scenario:
      trx1> acquire shared lock on $key
      trx2> acquire shared lock on the same $key
      trx1> attempt to acquire a unique lock on $key.
      
      Lock acquisition will fail, and deadlock detection will start.
      It will call iterate_and_get_overlapping_row_locks() which will
      produce a list with two locks (shared locks by trx1 and trx2).
      
      However, the code in lock_request::build_wait_graph() was not prepared
      to find a lock held by the same transaction in the list of conflicting
      locks. Fix it to ignore such locks.
      
      (One might suggest fixing iterate_and_get_overlapping_row_locks() to not
      include locks held by trx1. That is not a good idea, because the function
      is also used to report all locks currently held.)
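      The fix can be sketched in a few lines. The types and names below (`RowLock`, `BuildWaitEdges`) are illustrative stand-ins for the locktree code, not its real API; the essential step is skipping the waiter's own locks when building wait-for edges:

      ```cpp
      #include <cassert>
      #include <cstdio>
      #include <string>
      #include <vector>

      struct RowLock {
        int txn_id;
        std::string key;
      };

      // Build the list of transactions the waiter is blocked on. The overlap
      // scan may report a lock held by the waiter itself (e.g. trx1 upgrading
      // its own shared lock to exclusive); such entries are not conflicts.
      static std::vector<int> BuildWaitEdges(
          int waiter, const std::vector<RowLock>& overlapping) {
        std::vector<int> blockers;
        for (const RowLock& lock : overlapping) {
          if (lock.txn_id == waiter) continue;  // our own lock: skip it
          blockers.push_back(lock.txn_id);
        }
        return blockers;
      }

      int main() {
        // trx1 and trx2 both hold a shared lock on $key; trx1 now waits for
        // an exclusive lock, and the overlap scan reports both shared locks.
        std::vector<RowLock> overlapping = {{1, "key"}, {2, "key"}};
        std::vector<int> blockers = BuildWaitEdges(/*waiter=*/1, overlapping);
        assert(blockers.size() == 1 && blockers[0] == 2);  // self-lock ignored
        std::printf("trx1 waits only on trx2\n");
        return 0;
      }
      ```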
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7938
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D26529374
      
      Pulled By: ajkr
      
      fbshipit-source-id: d89cbed008db1a97a8f2351b9bfb75310750d16a
      c9878baa
    • V
      Update win_logger.cc : assert failed when return value not checked.... · ad25b1af
      vrqq committed
      Update win_logger.cc : assert failed when return value not checked. (-DROCKSDB_ASSERT_STATUS_CHECKED) (#7955)
      
      Summary:
      Ignore the return value of WinLogger::CloseInternal() when building with -DROCKSDB_ASSERT_STATUS_CHECKED on Windows.
      
      Is this a good way to ignore the check here?
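      A toy model of why an ignored return value trips the assertion, and how an explicit waiver avoids it. This `Status` is a simplified stand-in, not RocksDB's class; RocksDB's real waiver API is `Status::PermitUncheckedError()`, mimicked here:

      ```cpp
      #include <cassert>
      #include <cstdio>

      // Under a "must check" build, a Status asserts in its destructor unless
      // someone inspected it or explicitly waived the check.
      class Status {
       public:
        explicit Status(bool ok) : ok_(ok) {}
        ~Status() { assert(checked_ && "Status must be checked"); }
        bool ok() const {
          checked_ = true;
          return ok_;
        }
        void PermitUncheckedError() const { checked_ = true; }

       private:
        bool ok_;
        mutable bool checked_ = false;
      };

      // Stand-in for a close routine whose caller (a destructor, say) has no
      // way to report failure.
      static Status CloseInternal() { return Status(true); }

      int main() {
        Status s = CloseInternal();
        // Deliberately not acting on the result: waive the check instead of
        // letting the Status destructor assert.
        s.PermitUncheckedError();
        std::printf("close status intentionally ignored\n");
        return 0;
      }
      ```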
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7955
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D26524145
      
      Pulled By: ajkr
      
      fbshipit-source-id: f2f643e94cde9772617c68b658fb529fffebd8ce
      ad25b1af