1. 14 Jun 2021 (2 commits)
    • Fix flaky ManualCompactionMax test (#8396) · d60ae5b1
      Jay Zhuang authored
      Summary:
      Recalculate the total size after generating new SST files.
      Newly generated files might have a different size than in the previous run,
      which could cause the test to fail.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8396
      
      Test Plan:
      ```
      gtest-parallel ./db_compaction_test
      --gtest_filter=DBCompactionTest.ManualCompactionMax -r 1000 -w 100
      ```
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D29083299
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 49d4bd619cefc0f9a1f452f8759ff4c2ba1b6fdb
    • Fix use of binutils in Facebook platform009 (#8399) · 0d0aa578
      Peter Dillinger authored
      Summary:
      Internal builds failing
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8399
      
      Test Plan:
      I can reproduce a failure by putting a bad version of `as` in
      my PATH. This indicates that before this change, the custom compiler is
      falsely relying on host `as`. This change fixes that, ignoring the bad
      `as` on PATH.
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D29094159
      
      Pulled By: pdillinger
      
      fbshipit-source-id: c432e90404ea4d39d885a685eebbb08be9eda1c8
  2. 13 Jun 2021 (1 commit)
    • Disable subcompactions for user-defined timestamps (#8393) · 14626388
      Levi Tamasi authored
      Summary:
      The subcompaction boundary picking logic does not currently guarantee
      that all user keys that differ only by timestamp get processed by the same
      subcompaction. This can cause issues with the `CompactionIterator` state
      machine: for instance, one subcompaction that processes a subset of such KVs
      might drop a tombstone based on the KVs it sees, while in reality the
      tombstone might not have been eligible to be optimized out.
      (See also https://github.com/facebook/rocksdb/issues/6645, which adjusted the way compaction inputs are picked for the
      same reason.)
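      A minimal sketch of the invariant at stake (illustrative only; `kTsSize`, `StripTimestamp`, and `SameUserKey` are made-up helpers, not RocksDB code): with user-defined timestamps, the timestamp is a fixed-width suffix of the user key, so keys that differ only in that suffix are versions of the same user key and must be handled by the same subcompaction.

      ```cpp
      #include <cassert>
      #include <string>

      // Assumed fixed timestamp width appended to every user key.
      constexpr size_t kTsSize = 8;

      // Drop the trailing timestamp to recover the "logical" user key.
      std::string StripTimestamp(const std::string& key_with_ts) {
        return key_with_ts.substr(0, key_with_ts.size() - kTsSize);
      }

      bool SameUserKey(const std::string& a, const std::string& b) {
        return StripTimestamp(a) == StripTimestamp(b);
      }

      int main() {
        std::string k1 = "foo" + std::string(kTsSize, '\x02');  // foo @ ts 2
        std::string k2 = "foo" + std::string(kTsSize, '\x01');  // foo @ ts 1
        // A subcompaction boundary between k1 and k2 would hand versions of
        // "foo" to different subcompactions, confusing tombstone handling.
        assert(SameUserKey(k1, k2));
        return 0;
      }
      ```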
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8393
      
      Test Plan: Ran `make check` and the crash test script with timestamps enabled.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D29071635
      
      Pulled By: ltamasi
      
      fbshipit-source-id: f6c72442122b4e581871e096fabe3876a9e8a5a6
  3. 12 Jun 2021 (3 commits)
    • Fix double-dumping CF stats to log (#8380) · b3dbeadc
      Peter Dillinger authored
      Summary:
      DBImpl::DumpStats is supposed to do this:
      Dump DB stats to LOG
      For each CF, dump CFStatsNoFileHistogram to LOG
      For each CF, dump CFFileHistogram to LOG
      
      Instead, due to a longstanding bug from 2017 (https://github.com/facebook/rocksdb/issues/2126), it would dump
      CFStats, which includes both CFStatsNoFileHistogram and CFFileHistogram,
      in both loops, resulting in near-duplicate output.
      
      This fixes the bug.
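      The intended structure can be sketched as follows (a hypothetical stand-in using strings for the stat sections, not the real DBImpl code): one loop dumps per-CF stats without the file histogram, a second loop dumps only the file histogram, so nothing is emitted twice.

      ```cpp
      #include <cassert>
      #include <string>
      #include <vector>

      // Each pass appends a distinct section per column family. The buggy
      // version appended full CFStats (which contains both sections) in
      // both loops, producing near-duplicate output.
      std::string DumpStats(const std::vector<std::string>& cfs) {
        std::string log = "DBStats\n";
        for (const auto& cf : cfs) log += cf + ":CFStatsNoFileHistogram\n";
        for (const auto& cf : cfs) log += cf + ":CFFileHistogram\n";
        return log;
      }

      int main() {
        std::string log = DumpStats({"default"});
        assert(log.find("default:CFStatsNoFileHistogram") != std::string::npos);
        assert(log.find("default:CFFileHistogram") != std::string::npos);
        return 0;
      }
      ```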
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8380
      
      Test Plan: Manual inspection of LOG after db_bench
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D29017535
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 3010604c4a629a80347f129cd746ce9b0d0cbda6
    • All the NoSpace() errors will be handled by regular SetBGError and RecoverFromNoSpace() (#8376) · 58162835
      Zhichao Cao authored
      Summary:
      In the current logic, any IO error with the retryable flag set to true is handled by special logic, and in most cases StartRecoverFromRetryableBGIOError is called to do the auto-resume. If a NoSpace error with the retryable flag set occurs during a WAL write, it is mapped to a hard error, which triggers the auto recovery. During the recovery process, if writes continue and append to the WAL, the write path sees that bg_error is set to HardError and calls WriteStatusCheck(), which calls SetBGError() with a Status (not an IOStatus). This redirects to the regular SetBGError interface, in which recovery_error_ is set to the corresponding error. With recovery_error_ set, the auto-resume thread created in StartRecoverFromRetryableBGIOError keeps failing as long as the user keeps trying to write.
      
      To fix this issue, all NoSpace errors (regardless of whether the retryable flag is set) are redirected to the regular SetBGError, and RecoverFromNoSpace() does the recovery job by calling SstFileManager::StartErrorRecovery().
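      The routing described above can be sketched like this (the enum and function names are illustrative, not the real error-handler types):

      ```cpp
      #include <cassert>

      enum class SubCode { kNone, kNoSpace, kOther };
      enum class Path { kRegularSetBGError, kRetryableBGIOError };

      // Every NoSpace error goes through the regular SetBGError /
      // RecoverFromNoSpace() path, regardless of the retryable flag;
      // other retryable IO errors keep the retryable-BG-IO-error path.
      Path RouteIOError(SubCode sub, bool retryable) {
        if (sub == SubCode::kNoSpace) return Path::kRegularSetBGError;
        return retryable ? Path::kRetryableBGIOError : Path::kRegularSetBGError;
      }

      int main() {
        assert(RouteIOError(SubCode::kNoSpace, true) == Path::kRegularSetBGError);
        assert(RouteIOError(SubCode::kNoSpace, false) == Path::kRegularSetBGError);
        assert(RouteIOError(SubCode::kOther, true) == Path::kRetryableBGIOError);
        return 0;
      }
      ```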
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8376
      
      Test Plan: make check and added the new testing case
      
      Reviewed By: anand1976
      
      Differential Revision: D29071828
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 7171d7e14cc4620fdab49b7eff7a2fe9a89942c2
    • Make platform009 default for FB developers (#8389) · a42a342a
      Peter Dillinger authored
      Summary:
      platform007 being phased out and sometimes broken
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8389
      
      Test Plan: `make V=1` to see which compiler is being used
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D29067183
      
      Pulled By: pdillinger
      
      fbshipit-source-id: d1b07267cbc55baa9395f2f4fe3967cc6dad52f7
  4. 11 Jun 2021 (5 commits)
    • Make Comparator into a Customizable Object (#8336) · 6ad08103
      mrambacher authored
      Summary:
      Makes the Comparator class into a Customizable object.  Added/Updated the CreateFromString method to create Comparators.  Added test for using the ObjectRegistry to create one.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8336
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D28999612
      
      Pulled By: mrambacher
      
      fbshipit-source-id: bff2cb2814eeb9fef6a00fddc61d6e34b6fbcf2e
    • Support for Merge in Integrated BlobDB with base values (#8292) · 3897ce31
      Akanksha Mahajan authored
      Summary:
      This PR adds support for the Merge operation in the integrated BlobDB with base values (i.e., DB::Put). Merged values can be retrieved through DB::Get, DB::MultiGet, DB::GetMergeOperands, and iterator operations.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8292
      
      Test Plan: Add new unit tests
      
      Reviewed By: ltamasi
      
      Differential Revision: D28415896
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: e9b3478bef51d2f214fb88c31ed3c8d2f4a531ff
    • Fixed manifest_dump issues when printing keys and values containing null characters (#8378) · d61a4493
      Baptiste Lemaire authored
      Summary:
      Changed fprintf function to fputc in ApplyVersionEdit, and replaced null characters with whitespaces.
      Added unit test in ldb_test.py - verifies that manifest_dump --verbose output is correct when keys and values containing null characters are inserted.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8378
      
      Reviewed By: pdillinger
      
      Differential Revision: D29034584
      
      Pulled By: bjlemaire
      
      fbshipit-source-id: 50833687a8a5f726e247c38457eadc3e6dbab862
    • BugFix: fs_posix.cc GetFreeSpace uses wrong value non-root users (#8370) · 5a2b4ed6
      matthewvon authored
      Summary:
      fs_posix.cc GetFreeSpace() calculates free space based on a call to statvfs().  However, there are two very different values in statvfs's returned structure: f_bfree, which is the free space for root, and f_bavail, which is the free space for non-root users.  The existing code uses f_bfree.  Many disks have 5 to 10% of total disk space reserved for root only.  Therefore GetFreeSpace() does not realize that non-root users may have no storage available.
      
      This PR detects whether the effective POSIX user is root, then selects the appropriate available-space value.
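      A sketch of the selection logic, assuming the POSIX `statvfs` and `geteuid` interfaces (the helper name and the use of `f_frsize` as the block-size multiplier are this sketch's choices, not necessarily the PR's):

      ```cpp
      #include <sys/statvfs.h>
      #include <unistd.h>
      #include <cstdint>
      #include <cstdio>

      // Pick f_bavail (space available to unprivileged users) unless the
      // effective user is root, in which case f_bfree (which includes the
      // reserved blocks) is the right figure.
      uint64_t FreeSpaceBytes(const struct statvfs& buf, bool is_root) {
        uint64_t blocks = is_root ? buf.f_bfree : buf.f_bavail;
        return blocks * static_cast<uint64_t>(buf.f_frsize);
      }

      int main() {
        struct statvfs buf;
        if (statvfs("/", &buf) == 0) {
          bool is_root = (geteuid() == 0);
          std::printf("free bytes: %llu\n",
                      (unsigned long long)FreeSpaceBytes(buf, is_root));
        }
        return 0;
      }
      ```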
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8370
      
      Reviewed By: mrambacher
      
      Differential Revision: D29032710
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 57feba34ed035615a479956d28f98d85735281c0
    • Use DbSessionId as cache key prefix when secondary cache is enabled (#8360) · f44e69c6
      Zhichao Cao authored
      Summary:
      Currently, we use either the file system inode or a monotonically increasing runtime ID as the block cache key prefix. However, a monotonically increasing runtime ID (used when the file system does not support inode ID generation) cannot ensure uniqueness in some cases (e.g., when a secondary cache is migrated from host to host). We now use the DbSessionId (20 bytes) + current file number (at most 10 bytes) as the new cache block key prefix when the secondary cache is enabled, which accommodates scenarios such as transferring cache state across hosts.
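      The prefix layout can be sketched as follows (`PutVarint64` and `MakeCacheKeyPrefix` are illustrative helpers; the real code uses RocksDB's own encoding utilities):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>

      // Standard LEB128-style varint: at most 10 bytes for a uint64_t.
      void PutVarint64(std::string* dst, uint64_t v) {
        while (v >= 0x80) {
          dst->push_back(static_cast<char>((v & 0x7f) | 0x80));
          v >>= 7;
        }
        dst->push_back(static_cast<char>(v));
      }

      // 20-byte session id followed by the current file number as a varint.
      std::string MakeCacheKeyPrefix(const std::string& db_session_id,
                                     uint64_t file_number) {
        assert(db_session_id.size() == 20);  // DbSessionId is 20 bytes
        std::string prefix = db_session_id;
        PutVarint64(&prefix, file_number);
        return prefix;
      }

      int main() {
        std::string session(20, 's');
        std::string p = MakeCacheKeyPrefix(session, 300);
        assert(p.size() == 22);  // 20-byte session id + 2-byte varint for 300
        assert(p.compare(0, 20, session) == 0);
        return 0;
      }
      ```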
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8360
      
      Test Plan: add the test to lru_cache_test
      
      Reviewed By: pdillinger
      
      Differential Revision: D29006215
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 6cff686b38d83904667a2bd39923cd030df16814
  5. 10 Jun 2021 (1 commit)
    • Add a clipping internal iterator (#8327) · db325a59
      Levi Tamasi authored
      Summary:
      Logically, subcompactions process a key range [start, end); however, the way
      this is currently implemented is that the `CompactionIterator` for any given
      subcompaction keeps processing key-values until it actually outputs a key that
      is out of range, which is then discarded. Instead of doing this, the patch
      introduces a new type of internal iterator called `ClippingIterator` which wraps
      another internal iterator and "clips" its range of key-values so that any KVs
      returned are strictly in the [start, end) interval. This does eliminate a (minor)
      inefficiency by stopping processing in subcompactions exactly at the limit;
      however, the main motivation is related to BlobDB: namely, we need this to be
      able to measure the amount of garbage generated by a subcompaction
      precisely and prevent off-by-one errors.
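      A toy version of the clipping idea over a sorted vector (not RocksDB's `InternalIterator` interface): the wrapper exposes only keys in [start, end), so the consumer never sees out-of-range KVs.

      ```cpp
      #include <cassert>
      #include <string>
      #include <vector>

      class ClippingIter {
       public:
        ClippingIter(const std::vector<std::string>& keys, std::string start,
                     std::string end)
            : keys_(keys), start_(std::move(start)), end_(std::move(end)) {}

        void SeekToFirst() {
          pos_ = 0;
          // Skip anything below the lower bound.
          while (pos_ < keys_.size() && keys_[pos_] < start_) ++pos_;
        }
        // Invalid as soon as we reach the (exclusive) upper bound.
        bool Valid() const { return pos_ < keys_.size() && keys_[pos_] < end_; }
        void Next() { ++pos_; }
        const std::string& key() const { return keys_[pos_]; }

       private:
        const std::vector<std::string>& keys_;
        std::string start_, end_;
        size_t pos_ = 0;
      };

      int main() {
        std::vector<std::string> keys = {"a", "b", "c", "d", "e"};
        ClippingIter it(keys, "b", "d");  // clip to [b, d)
        std::vector<std::string> seen;
        for (it.SeekToFirst(); it.Valid(); it.Next()) seen.push_back(it.key());
        assert((seen == std::vector<std::string>{"b", "c"}));
        return 0;
      }
      ```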
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8327
      
      Test Plan: `make check`
      
      Reviewed By: siying
      
      Differential Revision: D28761541
      
      Pulled By: ltamasi
      
      fbshipit-source-id: ee0e7229f04edabbc7bed5adb51771fbdc287f69
  6. 08 Jun 2021 (2 commits)
    • Fix a major performance bug in 6.21 for cache entry stats (#8369) · 2f93a3b8
      Peter Dillinger authored
      Summary:
      In final polishing of https://github.com/facebook/rocksdb/issues/8297 (after most manual testing), I
      broke my own caching layer by sanitizing an input parameter with
      std::min(0, x) instead of std::max(0, x). I resisted unit testing the
      timing part of the result caching because historically, these tests
      are either flaky or difficult to write, and this was not a correctness
      issue. This bug is essentially unnoticeable with a small number
      of column families but can explode background work with a
      large number of column families.
      
      This change fixes the logical error, removes some unnecessary related
      optimization, and adds mock time/sleeps to the unit test to ensure we
      can cache hit within the age limit.
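      The one-character bug is easy to reproduce in isolation (the function names and the age-limit parameter are hypothetical):

      ```cpp
      #include <algorithm>
      #include <cassert>

      // Sanitizing a non-negative parameter with std::min instead of
      // std::max clamps every positive input to 0.
      int SanitizeBuggy(int age_limit) { return std::min(0, age_limit); }
      int SanitizeFixed(int age_limit) { return std::max(0, age_limit); }

      int main() {
        // With the bug, a 60s age limit became 0, so cached stats could
        // never hit within the age limit and were recomputed constantly.
        assert(SanitizeBuggy(60) == 0);
        assert(SanitizeFixed(60) == 60);
        assert(SanitizeFixed(-5) == 0);  // the clamp both versions intended
        return 0;
      }
      ```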
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8369
      
      Test Plan: added time testing logic to existing unit test
      
      Reviewed By: ajkr
      
      Differential Revision: D28950892
      
      Pulled By: pdillinger
      
      fbshipit-source-id: e79cd4ff3eec68fd0119d994f1ed468c38026c3b
    • Cancel compact range (#8351) · 80a59a03
      David Devecsery authored
      Summary:
      Added the ability to cancel an in-progress range compaction by storing to an atomic "canceled" variable pointed to within the CompactRangeOptions structure.
      
      Tested via two tests added to db_tests2.cc.
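      The cancellation pattern can be sketched with a mock compaction loop (the struct below mimics the shape of `CompactRangeOptions`; it is not the real API):

      ```cpp
      #include <atomic>
      #include <cassert>

      // The options carry a pointer to an atomic flag that the caller can
      // flip from another thread; the compaction loop polls it.
      struct CompactRangeOptionsLike {
        std::atomic<bool>* canceled = nullptr;
      };

      // Stand-in for the real DB::CompactRange work loop.
      int MockCompactRange(const CompactRangeOptionsLike& opts, int total_steps) {
        int done = 0;
        for (; done < total_steps; ++done) {
          if (opts.canceled && opts.canceled->load(std::memory_order_acquire)) {
            break;  // observed the cancellation request
          }
        }
        return done;  // steps completed before cancellation
      }

      int main() {
        std::atomic<bool> canceled{false};
        CompactRangeOptionsLike opts;
        opts.canceled = &canceled;
        assert(MockCompactRange(opts, 10) == 10);  // runs to completion
        canceled.store(true, std::memory_order_release);
        assert(MockCompactRange(opts, 10) == 0);   // observes the flag at once
        return 0;
      }
      ```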
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8351
      
      Reviewed By: ajkr
      
      Differential Revision: D28808894
      
      Pulled By: ddevec
      
      fbshipit-source-id: cb321361c9e23b084b188bb203f11c375a22c2dd
  7. 05 Jun 2021 (2 commits)
  8. 04 Jun 2021 (2 commits)
    • Snapshot release triggered compaction without multiple tombstones (#8357) · 9167ece5
      Andrew Kryczka authored
      Summary:
      This is a duplicate of https://github.com/facebook/rocksdb/issues/4948 by mzhaom to fix tests after rebase.
      
      This change is a follow-up to https://github.com/facebook/rocksdb/issues/4927, which made this possible by allowing tombstone dropping/seqnum zeroing optimizations on the last key in the compaction. Now the `largest_seqno != 0` condition suffices to prevent snapshot release triggered compaction from entering an infinite loop.
      
      The issues caused by the extraneous condition `level_and_file.second->num_deletions > 1` are:
      
      - files could have `largest_seqno > 0` forever making it impossible to tell they cannot contain any covering keys
      - it doesn't trigger compaction when there are many overwritten keys. Some MyRocks use cases actually don't use Delete but instead call Put with an empty value to "delete" keys, so we'd like to be able to trigger compaction in this case too.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8357
      
      Test Plan: - make check
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D28855340
      
      Pulled By: ajkr
      
      fbshipit-source-id: a261b51eecafec492499e6d01e8e43112f801798
    • Update HISTORY and version to 6.21 (#8363) · 799cf37c
      anand76 authored
      Summary:
      Update HISTORY and version to 6.21 on master.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8363
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D28888818
      
      Pulled By: anand1976
      
      fbshipit-source-id: 9e5fac3b99ecc9f3b7d9f21474a39fa50decb117
  9. 02 Jun 2021 (3 commits)
    • Fix "Interval WAL" bytes to say GB instead of MB (#8350) · 2655477c
      PiyushDatta authored
      Summary:
      Reference: https://github.com/facebook/rocksdb/issues/7201
      
      Before fix:
      `/tmp/rocksdb_test_file/LOG.old.1622492586055679:Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s`
      
      After fix:
      `/tmp/rocksdb_test_file/LOG:Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s`
      
      Tests:
      ```
      Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
      ETA: 0s Left: 0 AVG: 0.05s  local:0/7720/100%/0.0s
      rm -rf /dev/shm/rocksdb.CLRh
      /usr/bin/python3 tools/check_all_python.py
      No syntax errors in 34 .py files
      /usr/bin/python3 tools/ldb_test.py
      Running testCheckConsistency...
      .Running testColumnFamilies...
      .Running testCountDelimDump...
      .Running testCountDelimIDump...
      .Running testDumpLiveFiles...
      .Running testDumpLoad...
      Warning: 7 bad lines ignored.
      .Running testGetProperty...
      .Running testHexPutGet...
      .Running testIDumpBasics...
      .Running testIngestExternalSst...
      .Running testInvalidCmdLines...
      .Running testListColumnFamilies...
      .Running testManifestDump...
      .Running testMiscAdminTask...
      Sequence,Count,ByteSize,Physical Offset,Key(s)
      .Running testSSTDump...
      .Running testSimpleStringPutGet...
      .Running testStringBatchPut...
      .Running testTtlPutGet...
      .Running testWALDump...
      .
      ----------------------------------------------------------------------
      Ran 19 tests in 15.945s
      
      OK
      sh tools/rocksdb_dump_test.sh
      make check-format
      make[1]: Entering directory '/home/piydatta/Documents/rocksdb'
      $DEBUG_LEVEL is 1
      Makefile:176: Warning: Compiling in debug mode. Don't use the resulting binary in production
      build_tools/format-diff.sh -c
      Checking format of uncommitted changes...
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8350
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D28790567
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: dcb1e4c124361156435122f21f0a288335b2c8c8
    • Fix cmake build failure with gflags (#8324) · eda83eaa
      Jay Zhuang authored
      Summary:
      - Fix cmake build failure with gflags.
      - Add CI tests for both gflags 2.1 and 2.2.
      - Fix ctest config with gtest.
      - Add CI to run test with ctest.
      
      One benefit of ctest is that it supports timeouts; it's set to 5 minutes in our CI, so we will know which test hangs.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8324
      
      Test Plan: CI pass
      
      Reviewed By: ajkr
      
      Differential Revision: D28762517
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 09063c5af5f9f33abfcdeb48593acbd9826cd199
    • Kill whitebox crash test if it is 15 minutes over the limit (#8341) · ab718b41
      sdong authored
      Summary:
      The whitebox crash test can run significantly over the time limit due to test slowness or a lack of kill points. This open-ended job can create problems when the test is scheduled periodically. Instead, kill the job if it is 15 minutes over the limit.
      Refactor the code slightly to consolidate the command-execution logic for the whitebox and blackbox tests.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8341
      
      Test Plan: Run both the blackbox and whitebox tests, with both natural and explicit kill conditions.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D28756170
      
      fbshipit-source-id: f253149890e62ace78f871be927e093e9b12f49b
  10. 01 Jun 2021 (2 commits)
  11. 28 May 2021 (4 commits)
    • Add a new blog post for online validation (#8338) · 1c88f66f
      sdong authored
      Summary:
      A new blog post to introduce recent development related to online validation.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8338
      
      Test Plan: Local test with "bundle exec jekyll serve"
      
      Reviewed By: ltamasi
      
      Differential Revision: D28757134
      
      fbshipit-source-id: 42268e1af8dc0c6a42ae62ea61568409b7ce10e4
    • Use bloom filter to speed up sync point (#8337) · cda79231
      sdong authored
      Summary:
      SyncPoint is now used in the crash test but can significantly slow down the run. Add a bloom filter check before each process to speed it up.
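      The idea can be sketched as a Bloom-style prefilter over sync-point names (illustrative; the real implementation's hashing and sizing may differ): most Process() calls can reject a name with a couple of hash probes instead of taking a mutex and doing a map lookup.

      ```cpp
      #include <bitset>
      #include <cassert>
      #include <functional>
      #include <string>

      class SyncPointFilter {
       public:
        void Add(const std::string& name) {
          bits_.set(H1(name));
          bits_.set(H2(name));
        }
        // May return a false positive, never a false negative, so it is
        // safe to skip the expensive path when this returns false.
        bool MaybeEnabled(const std::string& name) const {
          return bits_.test(H1(name)) && bits_.test(H2(name));
        }

       private:
        static size_t H1(const std::string& s) {
          return std::hash<std::string>{}(s) % kBits;
        }
        static size_t H2(const std::string& s) {
          return std::hash<std::string>{}(s + "#") % kBits;
        }
        static constexpr size_t kBits = 1024;
        std::bitset<kBits> bits_;
      };

      int main() {
        SyncPointFilter filter;
        filter.Add("DBImpl::AddFile:MutexLock");
        assert(filter.MaybeEnabled("DBImpl::AddFile:MutexLock"));
        return 0;
      }
      ```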
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8337
      
      Test Plan: Run all existing tests
      
      Reviewed By: ajkr
      
      Differential Revision: D28730282
      
      fbshipit-source-id: a187377a9d47877a36c5649e4b1f67d5e3033238
    • Blog post about SecondaryCache (#8339) · b53e3d2a
      anand76 authored
      Summary:
      Blog post about SecondaryCache
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8339
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D28753501
      
      Pulled By: anand1976
      
      fbshipit-source-id: d3241b746a9266fb523e13ad45fd0288083f7470
    • Do not truncate WAL if in read_only mode (#8313) · c75ef03e
      Peter (Stig) Edwards authored
      Summary:
      I noticed an ```openat``` system call with the ```O_WRONLY``` flag, as well as ```sync_file_range``` and ```truncate``` calls on the WAL file, when using ```rocksdb::DB::OpenForReadOnly``` by way of ```db_bench --readonly=true --benchmarks=readseq --use_existing_db=1 --num=1 ...```
      
      Noticed in ```strace``` after seeing the last modification time of the WAL file change after each run (with ```--readonly=true```).
      
      I think this was introduced by https://github.com/facebook/rocksdb/commit/7d7f14480e135a4939ed6903f46b3f7056aa837a from https://github.com/facebook/rocksdb/pull/8122
      
      I added a test to catch the WAL file being truncated and the modification time on it changing.
      I am not sure if a mock filesystem with mock clock could be used to avoid having to sleep 1.1s.
      The test could also check the set of files is the same and that the sizes are also unchanged.
      
      Before:
      
      ```
      [ RUN      ] DBBasicTest.ReadOnlyReopenMtimeUnchanged
      db/db_basic_test.cc:182: Failure
      Expected equality of these values:
        file_mtime_after_readonly_reopen
          Which is: 1621611136
        file_mtime_before_readonly_reopen
          Which is: 1621611135
        file is: 000010.log
      [  FAILED  ] DBBasicTest.ReadOnlyReopenMtimeUnchanged (1108 ms)
      ```
      
      After:
      
      ```
      [ RUN      ] DBBasicTest.ReadOnlyReopenMtimeUnchanged
      [       OK ] DBBasicTest.ReadOnlyReopenMtimeUnchanged (1108 ms)
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8313
      
      Reviewed By: pdillinger
      
      Differential Revision: D28656925
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: ea9e215cb53e7c830e76bc5fc75c45e21f12a1d6
  12. 27 May 2021 (2 commits)
  13. 26 May 2021 (1 commit)
  14. 25 May 2021 (1 commit)
  15. 24 May 2021 (1 commit)
  16. 22 May 2021 (6 commits)
    • Fix clang-analyze: use uninitiated variable (#8325) · 55853de6
      Jay Zhuang authored
      Summary:
      Error:
      ```
      db/db_compaction_test.cc:5211:47: warning: The left operand of '*' is a garbage value
      uint64_t total = (l1_avg_size + l2_avg_size * 10) * 10;
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8325
      
      Test Plan: `$ make analyze`
      
      Reviewed By: pdillinger
      
      Differential Revision: D28620916
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: f6d58ab84eefbcc905cda45afb9522b0c6d230f8
    • Use new Insert and Lookup APIs in table reader to support secondary cache (#8315) · 7303d02b
      Zhichao Cao authored
      Summary:
      The secondary cache is implemented as a second tier for the block cache. New Insert and Lookup APIs were introduced in https://github.com/facebook/rocksdb/issues/8271. To support and use the secondary cache in the block-based table reader, this PR introduces the corresponding callback functions that will be used by the secondary cache, and updates the Insert and Lookup APIs accordingly.
      
      benchmarking:
      ./db_bench --benchmarks="fillrandom" -num=1000000 -key_size=32 -value_size=256 -use_direct_io_for_flush_and_compaction=true -db=/tmp/rocks_t/db -partition_index_and_filters=true
      
      ./db_bench -db=/tmp/rocks_t/db -use_existing_db=true -benchmarks=readrandom -num=1000000 -key_size=32 -value_size=256 -use_direct_reads=true -cache_size=1073741824 -cache_numshardbits=5 -cache_index_and_filter_blocks=true -read_random_exp_range=17 -statistics -partition_index_and_filters=true -stats_dump_period_sec=30 -reads=50000000
      
      master benchmarking results:
      readrandom   :       3.923 micros/op 254881 ops/sec;   33.4 MB/s (23849796 of 50000000 found)
      rocksdb.db.get.micros P50 : 2.820992 P95 : 5.636716 P99 : 16.450553 P100 : 8396.000000 COUNT : 50000000 SUM : 179947064
      
      Current PR benchmarking results
      readrandom   :       4.083 micros/op 244925 ops/sec;   32.1 MB/s (23849796 of 50000000 found)
      rocksdb.db.get.micros P50 : 2.967687 P95 : 5.754916 P99 : 15.665912 P100 : 8213.000000 COUNT : 50000000 SUM : 187250053
      
      About 3.8% throughput reduction.
      P50: 5.2% increase; P95: 2.09% increase; P99: 4.77% improvement.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8315
      
      Test Plan: added the testing case
      
      Reviewed By: anand1976
      
      Differential Revision: D28599774
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 098c4df0d7327d3a546df7604b2f1602f13044ed
    • Use large macos instance (#8320) · 6c7c3e8c
      Jay Zhuang authored
      Summary:
      Macos build is taking more than 1 hour, bump the instance type from the
      default medium to large (large macos instance was not available before).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8320
      
      Test Plan: watch CI pass
      
      Reviewed By: ajkr
      
      Differential Revision: D28589456
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: cff78dae5aaf9de90ade3468469290176de5ff32
    • Add table properties for number of entries added to filters (#8323) · 3469d60f
      Peter Dillinger authored
      Summary:
      With Ribbon filter work and possible variance in actual bits
      per key (or prefix; general term "entry") to achieve certain FP rates,
      I've received a request to be able to track actual bits per key in
      generated filters. This change adds a num_filter_entries table
      property, which can be combined with filter_size to get bits per key
      (entry).
      
      This can vary from num_entries in at least these ways:
      * Different versions of same key are only counted once in filters.
      * With prefix filters, several user keys map to the same filter entry.
      * A single filter can include both prefixes and user keys.
      
      Note that FilterBlockBuilder::NumAdded() didn't do anything useful
      except distinguish empty from non-empty.
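      Given the two table properties, bits per entry is a one-line computation (property names as in the summary above; the helper is illustrative):

      ```cpp
      #include <cassert>
      #include <cstdint>

      // bits/entry = (filter_size in bits) / num_filter_entries.
      double BitsPerEntry(uint64_t filter_size_bytes,
                          uint64_t num_filter_entries) {
        return num_filter_entries == 0
                   ? 0.0
                   : (8.0 * filter_size_bytes) / num_filter_entries;
      }

      int main() {
        // e.g. a 125,000-byte filter covering 100,000 entries => 10 bits/entry
        assert(BitsPerEntry(125000, 100000) == 10.0);
        return 0;
      }
      ```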
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8323
      
      Test Plan: basic unit test included, others updated
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D28596210
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 529a111f3c84501e5a470bc84705e436ee68c376
    • Fix manual compaction `max_compaction_bytes` under-calculated issue (#8269) · 6c865435
      Jay Zhuang authored
      Summary:
      Fix a bug where, for manual compaction, `max_compaction_bytes` only limits the SST files from the input level, but not the overlapped files on the output level.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8269
      
      Test Plan: `make check`
      
      Reviewed By: ajkr
      
      Differential Revision: D28231044
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 9d7d03004f30cc4b1b9819830141436907554b7c
    • Try to build with liburing by default. (#8322) · bd3d080e
      sdong authored
      Summary:
      By default, try to build with liburing. For make, if ROCKSDB_USE_IO_URING is not set, it is treated as 1, which means RocksDB will try to build with liburing. For cmake, add WITH_LIBURING to control it, defaulting to ON.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8322
      
      Test Plan: Build using cmake and make.
      
      Reviewed By: anand1976
      
      Differential Revision: D28586498
      
      fbshipit-source-id: cfd39159ab697f4b93a9293a59c07f839b1e7ed5
  17. 21 May 2021 (2 commits)
    • Compare memtable insert and flush count (#8288) · 2f1984dd
      sdong authored
      Summary:
      When a memtable is flushed, it validates the number of entries it reads and compares that number with how many entries were inserted into the memtable. This serves as one sanity check against memory corruption. This change will also allow more counters to be added in the future for better validation.
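      The check can be sketched as follows (a mock memtable, not the real flush code): count entries as they are inserted, and at flush time compare against the number of entries the iterator actually yields.

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <map>
      #include <string>

      struct MockMemTable {
        std::map<std::string, std::string> data;
        uint64_t num_entries = 0;
        void Add(const std::string& k, const std::string& v) {
          data[k] = v;  // assume unique keys for this sketch
          ++num_entries;
        }
      };

      // At flush time, iterate everything and compare the counts; a
      // mismatch would signal memory corruption.
      bool ValidateFlushCount(const MockMemTable& m) {
        uint64_t read = 0;
        for (auto it = m.data.begin(); it != m.data.end(); ++it) ++read;
        return read == m.num_entries;
      }

      int main() {
        MockMemTable m;
        m.Add("k1", "v1");
        m.Add("k2", "v2");
        assert(ValidateFlushCount(m));
        return 0;
      }
      ```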
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8288
      
      Test Plan: Pass all existing tests
      
      Reviewed By: ajkr
      
      Differential Revision: D28369194
      
      fbshipit-source-id: 7ff870380c41eab7f99eee508550dcdce32838ad
    • Deflake ExternalSSTFileTest.PickedLevelBug (#8307) · 94b4faa0
      Jay Zhuang authored
      Summary:
      The test wants to make sure there's no compaction during `AddFile`
      (between `DBImpl::AddFile:MutexLock` and `DBImpl::AddFile:MutexUnlock`),
      but the mutex could be unlocked by `EnterUnbatched()`.
      Move the lock start point to after bumping the ingest file number.
      
      Also fix the deadlock when an ASSERT fails.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8307
      
      Reviewed By: ajkr
      
      Differential Revision: D28479849
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: b3c50f66aa5d5f59c5c27f815bfea189c4cd06cb