1. 23 Mar 2023 (2 commits)
  2. 22 Mar 2023 (1 commit)
    • Deflake DBCompactionTest.CancelCompactionWaitingOnConflict (#11318) · b92bc04a
      Committed by sdong
      Summary:
      In DBCompactionTest::CancelCompactionWaitingOnConflict, when generating SST files to trigger a compaction, we did not wait for the flush after writing each file, so multiple memtables could end up in the same SST file, leaving too few files to trigger the compaction. Now we do the waiting after each file except the last one, whose flush triggers the compaction.
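      The flush-then-wait pattern described above can be sketched with a toy model (the `FakeDB` type and its methods are illustrative stand-ins, not RocksDB's test API): waiting after each batch keeps each batch in its own SST file, while not waiting lets batches coalesce.

      ```cpp
      #include <cassert>
      #include <vector>

      // Minimal mock of the deflake pattern: when generating N SST files to
      // trigger a compaction, wait for each flush to finish before writing the
      // next batch, so each batch lands in its own file.
      struct FakeDB {
        int pending_memtables = 0;
        std::vector<int> sst_files;  // each element = memtables merged into one file
        void Write() { ++pending_memtables; }
        // A flush folds every pending memtable into a single SST file.
        void FlushAndWait() {
          if (pending_memtables > 0) {
            sst_files.push_back(pending_memtables);
            pending_memtables = 0;
          }
        }
      };

      // Without waiting, all batches may flush into one file; with waiting
      // after each batch except the last, we get one file per batch.
      int GenerateFiles(FakeDB& db, int n, bool wait_each) {
        for (int i = 0; i < n; ++i) {
          db.Write();
          if (wait_each && i + 1 < n) db.FlushAndWait();  // skip wait on last batch
        }
        db.FlushAndWait();  // final flush (in the real test, triggers compaction)
        return static_cast<int>(db.sst_files.size());
      }
      ```

      In the model, `GenerateFiles(db, 4, true)` yields four files while `GenerateFiles(db, 4, false)` yields one, mirroring why the test was flaky without the waits.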
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11318
      
      Test Plan: Run DBCompactionTest.CancelCompactionWaitingOnConflict multiple times.
      
      Reviewed By: ajkr
      
      Differential Revision: D44267273
      
      fbshipit-source-id: 86af49b05fc67ea3335312f0f5f3d22df1520bf8
  3. 21 Mar 2023 (1 commit)
    • Disabling some IO error assertion in EnvLogger (#11314) · cea81cad
      Committed by sdong
      Summary:
      Right now, EnvLogger has the same IO error assertion as most other places: if we write to the file after we've seen an IO error, the assertion triggers. This is too strict for the info logger: we should not fail the DB if the info logger fails, and should try our best to continue logging. For now, we simplify the problem by disabling the assertion for EnvLogger.
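      A minimal sketch of the relaxed check, with illustrative types (not EnvLogger's real code): most writers assert that they are never appended to after an IO error, while the info logger opts out so logging can continue best-effort.

      ```cpp
      #include <cassert>

      // Toy model of the write-after-IO-error assertion and its opt-out.
      struct LogWriter {
        bool seen_io_error = false;
        bool assert_write_after_error = true;  // disabled for the info logger

        // Returns false where the strict assertion would have fired.
        bool Append() {
          if (seen_io_error && assert_write_after_error) {
            return false;  // strict mode: refuse to write after an IO error
          }
          return true;  // relaxed mode: keep trying to log
        }
      };
      ```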
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11314
      
      Test Plan: Run env_logger_test to make sure at least it doesn't fail in normal cases.
      
      Reviewed By: anand1976
      
      Differential Revision: D44227732
      
      fbshipit-source-id: e3d31a221a5757f018a67ccaa96dcf89eb981f66
  4. 19 Mar 2023 (3 commits)
    • Specify precedence in `SstFileWriter::DeleteRange()` API contract (#11309) · 8c445407
      Committed by Andrew Kryczka
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11309
      
      Reviewed By: cbi42
      
      Differential Revision: D44198501
      
      Pulled By: ajkr
      
      fbshipit-source-id: d603aca37b56aac5df255833793a3300807d63cf
    • Updates for the 8.1 release (HISTORY, version.h, compatibility tests) (#11307) · 87de4fee
      Committed by Levi Tamasi
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11307
      
      Reviewed By: hx235
      
      Differential Revision: D44196571
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 52489d6f8bd3c79cd33c87e9e1f719ea5e8bd382
    • New stat rocksdb.{cf|db}-write-stall-stats exposed in a structural way (#11300) · cb584771
      Committed by Hui Xiao
      Summary:
      **Context/Summary:**
      Users are interested in figuring out what has caused a write stall.
      - Refactor the write stall related stats out of property `kCFStats` into their own db property `rocksdb.cf-write-stall-stats`, available as a map or string. For now, this only contains counts for each combination of (CF-scope `WriteStallCause`) + (`WriteStallCondition`)
      - Add new `WriteStallCause::kWriteBufferManagerLimit` to reflect write stalls caused by the write buffer manager
      - Add new `rocksdb.db-write-stall-stats`. For now, this only contains `WriteStallCause::kWriteBufferManagerLimit` + `WriteStallCondition::kStopped`
      
      - Expose functions in the new class `WriteStallStatsMapKeys` for examining the above two properties when returned as maps
      - Misc: rename/comment some write stall InternalStats for clarity
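      The shape of such a map-valued property can be sketched as follows. This is an illustrative mock; the enum values and key strings are assumptions, not the actual output of `WriteStallStatsMapKeys`:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <map>
      #include <string>

      // One counter per (WriteStallCause, WriteStallCondition) combination,
      // exposed under a string key in the property map.
      enum class WriteStallCause {
        kMemtableLimit,
        kL0FileCountLimit,
        kPendingCompactionBytes,
        kWriteBufferManagerLimit,  // the newly added cause
      };
      enum class WriteStallCondition { kDelayed, kStopped };

      inline std::string CauseName(WriteStallCause c) {
        switch (c) {
          case WriteStallCause::kMemtableLimit: return "memtable-limit";
          case WriteStallCause::kL0FileCountLimit: return "l0-file-count-limit";
          case WriteStallCause::kPendingCompactionBytes: return "pending-compaction-bytes";
          case WriteStallCause::kWriteBufferManagerLimit: return "write-buffer-manager-limit";
        }
        return "unknown";
      }

      inline std::string ConditionName(WriteStallCondition d) {
        return d == WriteStallCondition::kDelayed ? "delays" : "stops";
      }

      // Key of a stats-map entry, e.g. "memtable-limit-stops" (format assumed).
      inline std::string StallStatKey(WriteStallCause c, WriteStallCondition d) {
        return CauseName(c) + "-" + ConditionName(d);
      }

      // A stall event bumps the counter for its combination.
      inline void RecordStall(std::map<std::string, uint64_t>& stats,
                              WriteStallCause c, WriteStallCondition d) {
        ++stats[StallStatKey(c, d)];
      }
      ```

      A consumer would fetch the property as a map (e.g. via `DB::GetMapProperty`) and look up counters by composed key, which is the convenience `WriteStallStatsMapKeys` provides.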
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11300
      
      Test Plan:
      - New UT
      - Stress test
      `python3 tools/db_crashtest.py blackbox --simple --get_property_one_in=1`
      - Perf test: Both pre- and post-change runs converge very slowly at similar rates, but the post-change build shows a higher average ops/sec than pre-change even though both are run at the same time.
      ```
      ./db_bench -seed=1679014417652004 -db=/dev/shm/testdb/ -statistics=false -benchmarks="fillseq[-X60]" -key_size=32 -value_size=512 -num=100000 -db_write_buffer_size=655 -target_file_size_base=655 -disable_auto_compactions=false -compression_type=none -bloom_bits=3
      ```
      pre-change:
      ```
      fillseq [AVG 15 runs] : 1176 (± 732) ops/sec;    0.6 (± 0.4) MB/sec
      fillseq      :    1052.671 micros/op 949 ops/sec 105.267 seconds 100000 operations;    0.5 MB/s
      fillseq [AVG 16 runs] : 1162 (± 685) ops/sec;    0.6 (± 0.4) MB/sec
      fillseq      :    1387.330 micros/op 720 ops/sec 138.733 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 17 runs] : 1136 (± 646) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1232.011 micros/op 811 ops/sec 123.201 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 18 runs] : 1118 (± 610) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1282.567 micros/op 779 ops/sec 128.257 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 19 runs] : 1100 (± 578) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1914.336 micros/op 522 ops/sec 191.434 seconds 100000 operations;    0.3 MB/s
      fillseq [AVG 20 runs] : 1071 (± 551) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1227.510 micros/op 814 ops/sec 122.751 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 21 runs] : 1059 (± 525) ops/sec;    0.5 (± 0.3) MB/sec
      ```
      post-change:
      ```
      fillseq [AVG 15 runs] : 1226 (± 732) ops/sec;    0.6 (± 0.4) MB/sec
      fillseq      :    1323.825 micros/op 755 ops/sec 132.383 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 16 runs] : 1196 (± 687) ops/sec;    0.6 (± 0.4) MB/sec
      fillseq      :    1223.905 micros/op 817 ops/sec 122.391 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 17 runs] : 1174 (± 647) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1168.996 micros/op 855 ops/sec 116.900 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 18 runs] : 1156 (± 611) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1348.729 micros/op 741 ops/sec 134.873 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 19 runs] : 1134 (± 579) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1196.887 micros/op 835 ops/sec 119.689 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 20 runs] : 1119 (± 550) ops/sec;    0.6 (± 0.3) MB/sec
      fillseq      :    1193.697 micros/op 837 ops/sec 119.370 seconds 100000 operations;    0.4 MB/s
      fillseq [AVG 21 runs] : 1106 (± 524) ops/sec;    0.6 (± 0.3) MB/sec
      ```
      
      Reviewed By: ajkr
      
      Differential Revision: D44159541
      
      Pulled By: hx235
      
      fbshipit-source-id: 8d29efb70001fdc52d34535eeb3364fc3e71e40b
  5. 18 Mar 2023 (3 commits)
    • HyperClockCache support for SecondaryCache, with refactoring (#11301) · 204fcff7
      Committed by Peter Dillinger
      Summary:
      Internally refactors SecondaryCache integration out of LRUCache specifically and into a wrapper/adapter class that works with various Cache implementations. Notably, this relies on separating the notion of async lookup handles from other cache handles, so that HyperClockCache doesn't have to deal with the problem of allocating handles from the hash table for lookups that might fail anyway, and might be on the same key without support for coalescing. (LRUCache's hash table can incorporate previously allocated handles thanks to its pointer indirection.) Specifically, I'm worried about the case in which hundreds of threads try to access the same block and probing in the hash table degrades to linear search on the pile of entries with the same key.
      
      This change is a big step in the direction of supporting stacked SecondaryCaches, but there are obstacles to completing that. Especially, there is no SecondaryCache hook for evictions to pass from one to the next. It has been proposed that evictions be transmitted simply as the persisted data (as in SaveToCallback), but given the current structure provided by the CacheItemHelpers, that would require an extra copy of the block data, because there's intentionally no way to ask for a contiguous Slice of the data (to allow for flexibility in storage). `AsyncLookupHandle` and the re-worked `WaitAll()` should be essentially prepared for stacked SecondaryCaches, but several "TODO with stacked secondaries" issues remain in various places.
      
      It could be argued that the stacking instead be done as a SecondaryCache adapter that wraps two (or more) SecondaryCaches, but at least with the current API that would require an extra heap allocation on SecondaryCache Lookup for a wrapper SecondaryCacheResultHandle that can transfer a Lookup between secondaries. We could also consider trying to unify the Cache and SecondaryCache APIs, though that might be difficult if `AsyncLookupHandle` is kept a fixed struct.
      
      ## cache.h (public API)
      Moves `secondary_cache` option from LRUCacheOptions to ShardedCacheOptions so that it is applicable to HyperClockCache.
      
      ## advanced_cache.h (advanced public API)
      * Add `Cache::CreateStandalone()` so that the SecondaryCache support wrapper can use it.
      * Add `SetEvictionCallback()` / `eviction_callback_` so that the SecondaryCache support wrapper can use it. Only a single callback is supported for efficiency. If there is ever a need for more than one, hopefully that can be handled with a broadcast callback wrapper.
      
      These are essentially the two "extra" pieces of `Cache` for pulling out specific SecondaryCache support from the `Cache` implementation. I think it's a good trade-off as these are reasonable, limited, and reusable "cut points" into the `Cache` implementations.
      
      * Remove async capability from standard `Lookup()` (getting rid of awkward restrictions on pending Handles) and add `AsyncLookupHandle` and `StartAsyncLookup()`. As noted in the comments, the full struct of `AsyncLookupHandle` is exposed so that it can be stack allocated, for efficiency, though more data is being copied around than before, which could impact performance. (Lookup info -> AsyncLookupHandle -> Handle vs. Lookup info -> Handle)
      
      I could foresee a future in which a Cache internally saves a pointer to the AsyncLookupHandle, which means it's dangerous to allow it to be copyable or even movable. It also means it's not compatible with std::vector (which I don't like requiring as an API parameter anyway), so `WaitAll()` expects any contiguous array of AsyncLookupHandles. I believe this is best for common case efficiency, while behaving well in other cases also. For example, `WaitAll()` has no effect on default-constructed AsyncLookupHandles, which look like a completed cache miss.
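      A simplified model of the reworked lookup flow (these are stand-in types, not RocksDB's real `AsyncLookupHandle` or `Cache` API): handles are plain stack-allocatable structs, `WaitAll()` accepts any contiguous array, and a default-constructed handle already looks like a completed miss, so `WaitAll()` leaves it untouched.

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <string>

      // Mock async lookup handle: no heap allocation, no hash-table entry is
      // reserved for a lookup that might miss.
      struct AsyncLookupHandle {
        std::string key;
        void* result = nullptr;  // filled in by WaitAll on a hit
        bool pending = false;    // default-constructed => not pending => "miss"
      };

      struct MockCache {
        std::string stored_key;
        int stored_value = 0;
        void StartAsyncLookup(AsyncLookupHandle& h) { h.pending = true; }
        // Resolves every pending handle in a contiguous array; skips handles
        // that are already complete (including default-constructed ones).
        void WaitAll(AsyncLookupHandle* handles, size_t n) {
          for (size_t i = 0; i < n; ++i) {
            if (!handles[i].pending) continue;
            handles[i].pending = false;
            if (handles[i].key == stored_key) handles[i].result = &stored_value;
          }
        }
      };
      ```

      Because `WaitAll()` only needs a pointer and a count, callers can use a stack array, `std::array`, or any other contiguous storage rather than being forced through `std::vector`.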
      
      ## cacheable_entry.h
      A couple of functions are obsolete because Cache::Handle can no longer be pending.
      
      ## cache.cc
      Provides default implementations for new or revamped Cache functions, especially appropriate for non-blocking caches.
      
      ## secondary_cache_adapter.{h,cc}
      The full details of the Cache wrapper adding SecondaryCache support. Essentially replicates the SecondaryCache handling that was in LRUCache, but obviously refactored. There is a bit of logic duplication, where Lookup() is essentially a manually optimized version of StartAsyncLookup() and Wait(), but it's roughly a dozen lines of code.
      
      ## sharded_cache.h, typed_cache.h, charged_cache.{h,cc}, sim_cache.cc
      Simply updated for Cache API changes.
      
      ## lru_cache.{h,cc}
      Carefully remove SecondaryCache logic, implement `CreateStandalone` and eviction handler functionality.
      
      ## clock_cache.{h,cc}
      Expose existing `CreateStandalone` functionality, add eviction handler functionality. Light refactoring.
      
      ## block_based_table_reader*
      Mostly re-worked the only usage of async Lookup, which is in BlockBasedTable::MultiGet. Used arrays in place of autovector in some places for efficiency. Simplified some logic by not trying to process some cache results before they're all ready.
      
      Created new function `BlockBasedTable::GetCachePriority()` to reduce some pre-existing code duplication (and avoid making it worse).
      
      Fixed at least one small bug from the prior confusing mixture of async and sync Lookups. In MaybeReadBlockAndLoadToCache(), called by RetrieveBlock(), called by MultiGet() with wait=false, is_cache_hit for the block_cache_tracer entry would not be set to true if the handle was pending after Lookup and before Wait.
      
      ## Intended follow-up work
      * Figure out if there are any missing stats or block_cache_tracer work in refactored BlockBasedTable::MultiGet
      * Stacked secondary caches (see above discussion)
      * See if we can make up for the small MultiGet performance regression.
      * Study more performance with SecondaryCache
      * Items evicted from over-full LRUCache in Release were not being demoted to SecondaryCache, and still aren't to minimize unit test churn. Ideally they would be demoted, but it's an exceptional case so not a big deal.
      * Use CreateStandalone for cache reservations (save unnecessary hash table operations). Not a big deal, but worthy cleanup.
      * Somehow I got the contract for SecondaryCache::Insert wrong in #10945. (Doesn't take ownership!) That API comment needs to be fixed, but didn't want to mingle that in here.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11301
      
      Test Plan:
      ## Unit tests
      Generally updated to include HCC in SecondaryCache tests, though HyperClockCache has some different, less strict behaviors that leads to some tests not really being set up to work with it. Some of the tests remain disabled with it, but I think we have good coverage without them.
      
      ## Crash/stress test
      Updated to use the new combination.
      
      ## Performance
      First, let's check for regression on caches without secondary cache configured. Adding support for the eviction callback is likely to have a tiny effect, but it shouldn't be worrisome. LRUCache could benefit slightly from less logic around SecondaryCache handling. We can test with cache_bench default settings, built with DEBUG_LEVEL=0 and PORTABLE=0.
      
      ```
      (while :; do base/cache_bench --cache_type=hyper_clock_cache | grep Rough; done) | awk '{ sum += $9; count++; print $0; print "Average: " int(sum / count) }'
      ```
      
      **Before** this and #11299 (which could also have a small effect), running for about an hour, before & after running concurrently for each cache type:
      HyperClockCache: 3168662 (average parallel ops/sec)
      LRUCache: 2940127
      
      **After** this and #11299, running for about an hour:
      HyperClockCache: 3164862 (average parallel ops/sec) (0.12% slower)
      LRUCache: 2940928 (0.03% faster)
      
      This is an acceptable difference IMHO.
      
      Next, let's consider essentially the worst case of new CPU overhead affecting overall performance. MultiGet uses the async lookup interface regardless of whether SecondaryCache or folly are used. We can configure a benchmark where all block cache queries are for data blocks, and all are hits.
      
      Create DB and test (before and after tests running simultaneously):
      ```
      TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=30000000 -disable_wal=1 -bloom_bits=16
      TEST_TMPDIR=/dev/shm base/db_bench -benchmarks=multireadrandom[-X30] -readonly -multiread_batched -batch_size=32 -num=30000000 -bloom_bits=16 -cache_size=6789000000 -duration 20 -threads=16
      ```
      
      **Before**:
      multireadrandom [AVG    30 runs] : 3444202 (± 57049) ops/sec;  240.9 (± 4.0) MB/sec
      multireadrandom [MEDIAN 30 runs] : 3514443 ops/sec;  245.8 MB/sec
      **After**:
      multireadrandom [AVG    30 runs] : 3291022 (± 58851) ops/sec;  230.2 (± 4.1) MB/sec
      multireadrandom [MEDIAN 30 runs] : 3366179 ops/sec;  235.4 MB/sec
      
      So that's roughly a 3% regression, on kind of a *worst case* test of MultiGet CPU. Similar story with HyperClockCache:
      
      **Before**:
      multireadrandom [AVG    30 runs] : 3933777 (± 41840) ops/sec;  275.1 (± 2.9) MB/sec
      multireadrandom [MEDIAN 30 runs] : 3970667 ops/sec;  277.7 MB/sec
      **After**:
      multireadrandom [AVG    30 runs] : 3755338 (± 30391) ops/sec;  262.6 (± 2.1) MB/sec
      multireadrandom [MEDIAN 30 runs] : 3785696 ops/sec;  264.8 MB/sec
      
      Roughly a 4-5% regression. Not ideal, but not the whole story, fortunately.
      
      Let's also look at Get() in db_bench:
      
      ```
      TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=readrandom[-X30] -readonly -num=30000000 -bloom_bits=16 -cache_size=6789000000 -duration 20 -threads=16
      ```
      
      **Before**:
      readrandom [AVG    30 runs] : 2198685 (± 13412) ops/sec;  153.8 (± 0.9) MB/sec
      readrandom [MEDIAN 30 runs] : 2209498 ops/sec;  154.5 MB/sec
      **After**:
      readrandom [AVG    30 runs] : 2292814 (± 43508) ops/sec;  160.3 (± 3.0) MB/sec
      readrandom [MEDIAN 30 runs] : 2365181 ops/sec;  165.4 MB/sec
      
      That's showing roughly a 4% improvement, perhaps because of the secondary cache code that is no longer part of LRUCache. But weirdly, HyperClockCache is also showing 2-3% improvement:
      
      **Before**:
      readrandom [AVG    30 runs] : 2272333 (± 9992) ops/sec;  158.9 (± 0.7) MB/sec
      readrandom [MEDIAN 30 runs] : 2273239 ops/sec;  159.0 MB/sec
      **After**:
      readrandom [AVG    30 runs] : 2332407 (± 11252) ops/sec;  163.1 (± 0.8) MB/sec
      readrandom [MEDIAN 30 runs] : 2335329 ops/sec;  163.3 MB/sec
      
      Reviewed By: ltamasi
      
      Differential Revision: D44177044
      
      Pulled By: pdillinger
      
      fbshipit-source-id: e808e48ff3fe2f792a79841ba617be98e48689f5
    • Ignore async_io ReadOption if FileSystem doesn't support it (#11296) · eac6b6d0
      Committed by anand76
      Summary:
      In PosixFileSystem, IO uring support is opt-in. If the support is not enabled by the user, ignore the async_io ReadOption in MultiGet and iteration at the top level, rather than following the async_io codepath and transparently switching to sync IO at the FileSystem layer.
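      The gating boils down to a simple conjunction, sketched here with illustrative names (the real check goes through the FileSystem's reported capabilities, not a field like this):

      ```cpp
      #include <cassert>

      // Stand-in for a FileSystem whose IO uring support is opt-in.
      struct MockFileSystem {
        bool io_uring_enabled = false;  // user must opt in, as in PosixFileSystem
      };

      // The async_io read option is honored only when the FileSystem can
      // actually serve async reads; otherwise the caller stays on the sync
      // codepath from the start.
      inline bool UseAsyncIO(bool requested_async_io, const MockFileSystem& fs) {
        return requested_async_io && fs.io_uring_enabled;
      }
      ```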
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11296
      
      Test Plan: Add new unit tests
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D44045776
      
      Pulled By: anand1976
      
      fbshipit-source-id: a0881bf763ca2fde50b84063d0068bb521edd8b9
    • Increase the stress test coverage of GetEntity (#11303) · a72d55c9
      Committed by Levi Tamasi
      Summary:
      The `GetEntity` API is currently used in the stress tests for verification purposes;
      this patch extends the coverage by adding a mode where all point lookups in
      the non-batched, batched, and CF consistency stress tests are done using this API.
      The PR also includes a bit of refactoring to eliminate some boilerplate code around
      the wide-column consistency checks.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11303
      
      Test Plan: Ran stress tests of the batched, non-batched, and CF consistency varieties.
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D44148503
      
      Pulled By: ltamasi
      
      fbshipit-source-id: fecdbfd3e65a459bbf16ab7aa7b9173e19240077
  6. 17 Mar 2023 (1 commit)
  7. 16 Mar 2023 (4 commits)
    • Simplify tracking entries already in SecondaryCache (#11299) · ccaa3225
      Committed by Peter Dillinger
      Summary:
      In preparation for factoring secondary cache support out of individual Cache implementations, we can get rid of the "in secondary cache" flag on entries through a workable hack: when an entry is promoted from secondary, it is inserted in primary using a helper that lacks secondary cache support, thus preventing re-insertion into secondary cache through existing logic.
      
      This adds to the complexity of building CacheItemHelpers, because you always have to be able to get to an equivalent helper without secondary cache support, but that complexity is reasonably isolated within RocksDB typed_cache.h and test code.
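      The hack can be modeled with a toy pair of helpers (these types are illustrative, not RocksDB's `CacheItemHelper`): every secondary-capable helper carries a pointer to an equivalent helper without secondary support, and promoted entries are re-inserted with that stripped-down twin so eviction never demotes them back into the secondary cache.

      ```cpp
      #include <cassert>

      // Each helper knows its secondary-free twin; a plain helper points to itself.
      struct CacheItemHelper {
        bool supports_secondary;
        const CacheItemHelper* without_secondary;
      };

      // Built once, statically paired (self-reference in the initializer is
      // legal at namespace scope since the address is a constant).
      static const CacheItemHelper kPlainHelper = {false, &kPlainHelper};
      static const CacheItemHelper kFullHelper = {true, &kPlainHelper};

      // On eviction, only entries inserted with a secondary-capable helper
      // would be demoted to the secondary cache.
      inline bool WouldDemoteOnEviction(const CacheItemHelper* h) {
        return h->supports_secondary;
      }

      // Promotion path: re-insert into primary using the secondary-free twin,
      // preventing re-insertion into the secondary cache through existing logic.
      inline const CacheItemHelper* HelperForPromotedEntry(const CacheItemHelper* h) {
        return h->without_secondary;
      }
      ```

      This is the complexity the summary mentions: every helper construction site must also be able to produce (or reach) the equivalent helper without secondary support.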
      
      gcc-7 seems to have problems with a constexpr constructor referencing `this`, so constexpr support was removed from CacheItemHelper.
      
      Also refactored some related test code to share common code / functionality.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11299
      
      Test Plan: existing tests
      
      Reviewed By: anand1976
      
      Differential Revision: D44101453
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 7a59d0a3938ee40159c90c3e65d7004f6a272345
    • Add Microsoft Bing as a user (#11270) · 664dabda
      Committed by nccx
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11270
      
      Reviewed By: pdillinger
      
      Differential Revision: D43811584
      
      Pulled By: ajkr
      
      fbshipit-source-id: f27e55395644a469840785685646456f6b1452fc
    • Add new stats rocksdb.table.open.prefetch.tail.read.bytes, rocksdb.table.open.prefetch.tail.{miss|hit} (#11265) · bab5f9a6
      Committed by Hui Xiao
      
      Summary:
      **Context/Summary:**
      We are adding new stats to measure the size of the prefetched tail and lookups into this buffer.
      
      The stat collection is done in FilePrefetchBuffer, but for now only for the tail buffer prefetched during table open, distinguished using a FilePrefetchBuffer enum. This is cleaner than the alternative of implementing it in the upper-level call sites of FilePrefetchBuffer for table open, and it has the benefit of being extensible to other uses of FilePrefetchBuffer if needed. See the db_bench results in the Test Plan regarding the perf regression concern.
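      The hit/miss accounting can be sketched with a toy buffer (this is an illustrative model, not FilePrefetchBuffer's real interface): reads fully served from the prefetched tail count as hits, reads falling outside it as misses.

      ```cpp
      #include <cassert>
      #include <cstdint>

      // Mock of a tail prefetch buffer covering [offset, offset + len) of a file.
      struct TailPrefetchBuffer {
        uint64_t offset = 0;  // file offset where the prefetched tail starts
        uint64_t len = 0;     // bytes prefetched (cf. tail.read.bytes)
        uint64_t hits = 0;    // cf. rocksdb.table.open.prefetch.tail.hit
        uint64_t misses = 0;  // cf. rocksdb.table.open.prefetch.tail.miss

        // Returns true if [read_offset, read_offset + n) lies inside the buffer.
        bool TryRead(uint64_t read_offset, uint64_t n) {
          if (read_offset >= offset && read_offset + n <= offset + len) {
            ++hits;
            return true;
          }
          ++misses;  // falls back to an actual file read
          return false;
        }
      };
      ```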
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11265
      
      Test Plan:
      **- Piggyback on existing test**
      **- rocksdb.table.open.prefetch.tail.miss is harder to unit test, so I manually set the prefetch tail read bytes to be small and ran db_bench.**
      ```
      ./db_bench -db=/tmp/testdb -statistics=true -benchmarks="fillseq" -key_size=32 -value_size=512 -num=5000 -write_buffer_size=655 -target_file_size_base=655 -disable_auto_compactions=false -compression_type=none -bloom_bits=3  -use_direct_reads=true
      ```
      ```
      rocksdb.table.open.prefetch.tail.read.bytes P50 : 4096.000000 P95 : 4096.000000 P99 : 4096.000000 P100 : 4096.000000 COUNT : 225 SUM : 921600
      rocksdb.table.open.prefetch.tail.miss COUNT : 91
      rocksdb.table.open.prefetch.tail.hit COUNT : 1034
      ```
      **- No perf regression observed in db_bench**
      
      SETUP command: create same db with ~900 files for pre-change/post-change.
      ```
      ./db_bench -db=/tmp/testdb -benchmarks="fillseq" -key_size=32 -value_size=512 -num=500000 -write_buffer_size=655360  -disable_auto_compactions=true -target_file_size_base=16777216 -compression_type=none
      ```
      TEST command 60 runs or til convergence: as suggested by anand1976 and akankshamahajan15, vary `seek_nexts` and `async_io` in testing.
      ```
      ./db_bench -use_existing_db=true -db=/tmp/testdb -statistics=false -cache_size=0 -cache_index_and_filter_blocks=false -benchmarks=seekrandom[-X60] -num=50000 -seek_nexts={10, 500, 1000} -async_io={0|1} -use_direct_reads=true
      ```
      async io = 0, direct io read = true
      
        | seek_nexts = 10, 30 runs | seek_nexts = 500, 12 runs | seek_nexts = 1000, 6 runs
      -- | -- | -- | --
      pre-change | 4776 (± 28) ops/sec;   24.8 (± 0.1) MB/sec | 288 (± 1) ops/sec;   74.8 (± 0.4) MB/sec | 145 (± 4) ops/sec;   75.6 (± 2.2) MB/sec
      post-change | 4790 (± 32) ops/sec;   24.9 (± 0.2) MB/sec | 288 (± 3) ops/sec;   74.7 (± 0.8) MB/sec | 143 (± 3) ops/sec;   74.5 (± 1.6) MB/sec
      
      async io = 1, direct io read = true
        | seek_nexts = 10, 54 runs | seek_nexts = 500, 6 runs | seek_nexts = 1000, 4 runs
      -- | -- | -- | --
      pre-change | 3350 (± 36) ops/sec;   17.4 (± 0.2) MB/sec | 264 (± 0) ops/sec;   68.7 (± 0.2) MB/sec | 138 (± 1) ops/sec;   71.8 (± 1.0) MB/sec
      post-change | 3358 (± 27) ops/sec;   17.4 (± 0.1) MB/sec  | 263 (± 2) ops/sec;   68.3 (± 0.8) MB/sec | 139 (± 1) ops/sec;   72.6 (± 0.6) MB/sec
      
      Reviewed By: ajkr
      
      Differential Revision: D43781467
      
      Pulled By: hx235
      
      fbshipit-source-id: a706a18472a8edb2b952bac3af40eec803537f2a
    • Misc cleanup of block cache code (#11291) · 601efe3c
      Committed by Peter Dillinger
      Summary:
      ... ahead of a larger change.
      * Rename confusingly named `is_in_sec_cache` to `kept_in_sec_cache`
      * Unify naming of "standalone" block cache entries (was "detached" in clock_cache)
      * Remove some unused definitions in clock_cache.h (leftover from a previous revision)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11291
      
      Test Plan: usual tests and CI, no behavior changes
      
      Reviewed By: anand1976
      
      Differential Revision: D43984642
      
      Pulled By: pdillinger
      
      fbshipit-source-id: b8bf0c5b90a932a88bcbdb413b2f256834aedf97
  8. 15 Mar 2023 (1 commit)
    • Fix bug of prematurely excluded CF in atomic flush containing unflushed data that should've been included in the atomic flush (#11148) · 11cb6af6
      Committed by Hui Xiao
      
      Summary:
      **Context:**
      Atomic flush should guarantee recoverability of all data with seqno up to the max seqno of the flush. It achieves this by ensuring that all such data is flushed by the time the atomic flush finishes, through `SelectColumnFamiliesForAtomicFlush()`. However, our crash test exposed the following case, where a CF excluded from an atomic flush contains unflushed data with seqno less than the max seqno of that atomic flush, and loses that data with `WriteOptions::DisableWAL=true` when a crash happens right after the atomic flush finishes.
      ```
      ./db_stress --preserve_unverified_changes=1 --reopen=0 --acquire_snapshot_one_in=0 --adaptive_readahead=1 --allow_data_in_errors=True --async_io=1 --atomic_flush=1 --avoid_flush_during_recovery=0 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --block_size=16384 --bloom_bits=15 --bottommost_compression_type=none --bytes_per_sync=262144 --cache_index_and_filter_blocks=0 --cache_size=8388608 --cache_type=lru_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=1 --charge_filter_construction=0 --charge_table_reader=0 --checkpoint_one_in=0 --checksum_type=kXXH3 --clear_column_family_one_in=0 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=1 --compaction_ttl=100 --compression_max_dict_buffer_bytes=134217727 --compression_max_dict_bytes=16384 --compression_parallel_threads=1 --compression_type=lz4hc --compression_use_zstd_dict_trainer=0 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --data_block_index_type=0 --db=$db --db_write_buffer_size=1048576 --delpercent=4 --delrangepercent=1 --destroy_db_initially=0 --detect_filter_construct_corruption=0 --disable_wal=1 --enable_compaction_filter=0 --enable_pipelined_write=0 --expected_values_dir=$exp --fail_if_options_file_error=0 --fifo_allow_compaction=0 --file_checksum_impl=none --flush_one_in=0 --format_version=5 --get_current_wal_file_one_in=0 --get_live_files_one_in=100 --get_property_one_in=0 --get_sorted_wal_files_one_in=0 --index_block_restart_interval=2 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=524288 --iterpercent=10 --key_len_percent_dist=1,30,69 --level_compaction_dynamic_level_bytes=True --long_running_snapshots=1 --manual_wal_flush_one_in=100 --mark_for_compaction_one_file_in=0 --max_auto_readahead_size=0 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=10000 --max_key_len=3 --max_manifest_file_size=1073741824 
--max_write_batch_group_size_bytes=64 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=0 --memtable_prefix_bloom_size_ratio=0.01 --memtable_protection_bytes_per_key=4 --memtable_whole_key_filtering=0 --memtablerep=skip_list --min_write_buffer_number_to_merge=2 --mmap_read=1 --mock_direct_io=False --nooverwritepercent=1 --num_file_reads_for_auto_readahead=0 --open_files=-1 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=100000000 --optimize_filters_for_memory=1 --paranoid_file_checks=1 --partition_filters=0 --partition_pinning=3 --pause_background_one_in=0 --periodic_compaction_seconds=100 --prefix_size=8 --prefixpercent=5 --prepopulate_block_cache=0 --preserve_internal_time_seconds=3600 --progress_reports=0 --read_fault_one_in=32 --readahead_size=16384 --readpercent=50 --recycle_log_file_num=0 --ribbon_starting_level=6 --secondary_cache_fault_one_in=0 --set_options_one_in=10000 --snapshot_hold_ops=100000 --sst_file_manager_bytes_per_sec=104857600 --sst_file_manager_bytes_per_truncate=1048576 --stats_dump_period_sec=10 --subcompactions=1 --sync=0 --sync_fault_injection=0 --target_file_size_base=524288 --target_file_size_multiplier=2 --test_batches_snapshots=0 --top_level_index_pinning=0 --unpartitioned_pinning=1 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_merge=0 --use_multiget=1 --use_put_entity_one_in=0 --user_timestamp_size=0 --value_size_mult=32 --verify_checksum=1 --verify_checksum_one_in=0 --verify_db_one_in=1000 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=524288 --wal_compression=none --write_buffer_size=524288 --write_dbid_to_manifest=1 --write_fault_one_in=0 --writepercent=30 &
          pid=$!
          sleep 0.2
          sleep 10
          kill $pid
          sleep 0.2
      ./db_stress --ops_per_thread=1 --preserve_unverified_changes=1 --reopen=0 --acquire_snapshot_one_in=0 --adaptive_readahead=1 --allow_data_in_errors=True --async_io=1 --atomic_flush=1 --avoid_flush_during_recovery=0 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --block_size=16384 --bloom_bits=15 --bottommost_compression_type=none --bytes_per_sync=262144 --cache_index_and_filter_blocks=0 --cache_size=8388608 --cache_type=lru_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=1 --charge_filter_construction=0 --charge_table_reader=0 --checkpoint_one_in=0 --checksum_type=kXXH3 --clear_column_family_one_in=0 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=1 --compaction_ttl=100 --compression_max_dict_buffer_bytes=134217727 --compression_max_dict_bytes=16384 --compression_parallel_threads=1 --compression_type=lz4hc --compression_use_zstd_dict_trainer=0 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --data_block_index_type=0 --db=$db --db_write_buffer_size=1048576 --delpercent=4 --delrangepercent=1 --destroy_db_initially=0 --detect_filter_construct_corruption=0 --disable_wal=1 --enable_compaction_filter=0 --enable_pipelined_write=0 --expected_values_dir=$exp --fail_if_options_file_error=0 --fifo_allow_compaction=0 --file_checksum_impl=none --flush_one_in=0 --format_version=5 --get_current_wal_file_one_in=0 --get_live_files_one_in=100 --get_property_one_in=0 --get_sorted_wal_files_one_in=0 --index_block_restart_interval=2 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=524288 --iterpercent=10 --key_len_percent_dist=1,30,69 --level_compaction_dynamic_level_bytes=True --long_running_snapshots=1 --manual_wal_flush_one_in=100 --mark_for_compaction_one_file_in=0 --max_auto_readahead_size=0 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=10000 --max_key_len=3 
--max_manifest_file_size=1073741824 --max_write_batch_group_size_bytes=64 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=0 --memtable_prefix_bloom_size_ratio=0.01 --memtable_protection_bytes_per_key=4 --memtable_whole_key_filtering=0 --memtablerep=skip_list --min_write_buffer_number_to_merge=2 --mmap_read=1 --mock_direct_io=False --nooverwritepercent=1 --num_file_reads_for_auto_readahead=0 --open_files=-1 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=100000000 --optimize_filters_for_memory=1 --paranoid_file_checks=1 --partition_filters=0 --partition_pinning=3 --pause_background_one_in=0 --periodic_compaction_seconds=100 --prefix_size=8 --prefixpercent=5 --prepopulate_block_cache=0 --preserve_internal_time_seconds=3600 --progress_reports=0 --read_fault_one_in=32 --readahead_size=16384 --readpercent=50 --recycle_log_file_num=0 --ribbon_starting_level=6 --secondary_cache_fault_one_in=0 --set_options_one_in=10000 --snapshot_hold_ops=100000 --sst_file_manager_bytes_per_sec=104857600 --sst_file_manager_bytes_per_truncate=1048576 --stats_dump_period_sec=10 --subcompactions=1 --sync=0 --sync_fault_injection=0 --target_file_size_base=524288 --target_file_size_multiplier=2 --test_batches_snapshots=0 --top_level_index_pinning=0 --unpartitioned_pinning=1 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_merge=0 --use_multiget=1 --use_put_entity_one_in=0 --user_timestamp_size=0 --value_size_mult=32 --verify_checksum=1 --verify_checksum_one_in=0 --verify_db_one_in=1000 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=524288 --wal_compression=none --write_buffer_size=524288 --write_dbid_to_manifest=1 --write_fault_one_in=0 --writepercent=30 &
          pid=$!
          sleep 0.2
          sleep 40
          kill $pid
          sleep 0.2
      
      Verification failed for column family 6 key 0000000000000239000000000000012B0000000000000138 (56622): value_from_db: , value_from_expected: 4A6331754E4F4C4D42434041464744455A5B58595E5F5C5D5253505156575455, msg: Value not found: NotFound:
      Crash-recovery verification failed :(
      No writes or ops?
      Verification failed :(
      ```
      
      The bug is due to the following:
      - When atomic flush is used, an empty CF is legally [excluded](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_filesnapshot.cc#L39) in `SelectColumnFamiliesForAtomicFlush` as the first step of `DBImpl::FlushForGetLiveFiles` before [passing](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_filesnapshot.cc#L42) the included CFDs to `AtomicFlushMemTables`.
      - But [later](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_impl/db_impl_compaction_flush.cc#L2133) in `AtomicFlushMemTables`, `WaitUntilFlushWouldNotStallWrites` will [release the db mutex](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_impl/db_impl_compaction_flush.cc#L2403), during which data@seqno N can be inserted into the excluded CF and data@seqno M can be inserted into one of the included CFs, where M > N.
      - As a result, data@seqno N in the already-excluded CF is left out of this atomic flush even though seqno N is less than seqno M, the max seqno of the flush.
      
      **Summary:**
      - Replace the `SelectColumnFamiliesForAtomicFlush()`-before-`AtomicFlushMemTables()` ordering with `SelectColumnFamiliesForAtomicFlush()`-after-wait-within-`AtomicFlushMemTables()`, so that no write affecting the recoverability of this atomic flush job (i.e., a change to the max seqno of this atomic flush, or an insertion into an excluded CF of data with a seqno below that max) can happen after `SelectColumnFamiliesForAtomicFlush()` is called.
      - Accordingly, refactored and clarified the comments on `SelectColumnFamiliesForAtomicFlush()` and `AtomicFlushMemTables()` to make the semantics of the passed-in CFDs clearer.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11148
      
      Test Plan:
      - New unit test failed before the fix and passes after
      - Make check
      - Rehearsal stress test
      
      Reviewed By: ajkr
      
      Differential Revision: D42799871
      
      Pulled By: hx235
      
      fbshipit-source-id: 13636b63e9c25c5895857afc36ea580d57f6d644
      11cb6af6
  9. 14 Mar 2023, 4 commits
    • Use CacheWrapper in more places (#11295) · 2a23bee9
      Peter Dillinger authored
      Summary:
      ... to simplify code and make it less prone to needless updates on refactoring.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11295
      
      Test Plan: existing tests (no functional changes intended)
      
      Reviewed By: hx235
      
      Differential Revision: D44040260
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 1b6badb5c8ca673db0903bfaba3cfbc986f386be
      2a23bee9
    • Rename a recently added PerfContext counter (#11294) · 49881921
      Levi Tamasi authored
      Summary:
      The patch renames the counter added in https://github.com/facebook/rocksdb/issues/11284 for better consistency with the existing naming scheme.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11294
      
      Test Plan: `make check`
      
      Reviewed By: jowlyzhang
      
      Differential Revision: D44035964
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 8b1a2a03ee728148365367e0ecc1fcf462f62191
      49881921
    • Document DB::Resume(), fix LockWALInEffect test (#11290) · 648e972f
      Peter Dillinger authored
      Summary:
      In rare cases seeing failures like this
      
      ```
      [ RUN      ] DBWriteTestInstance/DBWriteTest.LockWALInEffect/2
      db/db_write_test.cc:653: Failure
      Put("key3", "value")
      Corruption: Not active
      ```
      
      in a test with no explicit threading. This is likely because of the unpredictability of background auto-resume. I didn't really know this feature, in part because DB::Resume() was undocumented. So I believe I have fixed the test and documented the API function.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11290
      
      Test Plan: 1000s of stress runs of the test with gtest-parallel
      
      Reviewed By: anand1976
      
      Differential Revision: D43984583
      
      Pulled By: pdillinger
      
      fbshipit-source-id: d30dec120b4864e193751b2e33ff16834d313db3
      648e972f
    • Support range deletion tombstones in `CreateColumnFamilyWithImport` (#11252) · 9aa3b6f9
      Changyu Bi authored
      Summary:
      CreateColumnFamilyWithImport() did not support range tombstones for two reasons:
      1. it uses the point keys of an input file to determine its boundary (smallest and largest internal key), which means range tombstones outside of the point key range will be effectively dropped.
      2. it does not handle files with no point keys.
      
      Also included a fix in external_sst_file_ingestion_job.cc where the blocks read in `GetIngestedFileInfo()` can be added to block cache now (issue fixed in https://github.com/facebook/rocksdb/pull/6429).
      
      This PR adds support for exporting and importing column family with range tombstones. The main change is to add smallest internal key and largest internal key to `SstFileMetaData` that will be part of the output of `ExportColumnFamily()`. Then during `CreateColumnFamilyWithImport(...,const ExportImportFilesMetaData& metadata,...)`, file boundaries can be set from `metadata` directly. This is needed since when file boundaries are extended by range tombstones, sometimes they cannot be deduced from a file's content alone.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11252
      
      Test Plan:
      - added unit tests that fail before this change
      
      Closes https://github.com/facebook/rocksdb/issues/11245
      
      Reviewed By: ajkr
      
      Differential Revision: D43577443
      
      Pulled By: cbi42
      
      fbshipit-source-id: 6bff78e583cc50c44854994dea0a8dd519398f2f
      9aa3b6f9
  10. 11 Mar 2023, 1 commit
    • Reverse wrong order of parameter names for Java WriteBatchWithIndex#iteratorWithBase (#11280) · fbd603d0
      Alan Paxton authored
      Summary:
      Fix for https://github.com/facebook/rocksdb/issues/11008
      
      `Java_org_rocksdb_WriteBatchWithIndex_iteratorWithBase` takes parameters `(… jlong jwbwi_handle, jlong jcf_handle,
          jlong jbase_iterator_handle, jlong jread_opts_handle)` while `WriteBatchWithIndex.java` declares `private native long iteratorWithBase(final long handle, final long baseIteratorHandle,
            final long cfHandle, final long readOptionsHandle)`.
      
      Luckily the only call to `iteratorWithBase` passes the parameters in the correct order for the implementation `(… cfHandle, baseIteratorHandle …)`. This type-checks because the parameters have the same type (long words).
      
      The code is currently used correctly; it is just extremely misleading. Swap the names of the 2 parameters in the Java method so that the correct usage is clear.
      
      There already exist test methods which call the API correctly and only succeed because of that. These continue to work.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11280
      
      Reviewed By: cbi42
      
      Differential Revision: D43874798
      
      Pulled By: ajkr
      
      fbshipit-source-id: b59bc930bf579f4e0804f0effd4fb17f4225d60c
      fbd603d0
  11. 10 Mar 2023, 4 commits
    • Fix compile errors in Clang due to unused variables depending on the build configuration (#11234) · 969d4e1d
      Jaepil Jeong authored
      Summary:
      This PR fixes compilation errors in Clang due to unused variables like the one below:
      ```
      [109/329] Building CXX object CMakeFiles/rocksdb.dir/db/version_edit_handler.cc.o
      FAILED: CMakeFiles/rocksdb.dir/db/version_edit_handler.cc.o
      ccache /opt/homebrew/opt/llvm/bin/clang++ -DGFLAGS=1 -DGFLAGS_IS_A_DLL=0 -DHAVE_FULLFSYNC -DJEMALLOC_NO_DEMANGLE -DLZ4 -DOS_MACOSX -DROCKSDB_JEMALLOC -DROCKSDB_LIB_IO_POSIX -DROCKSDB_NO_DYNAMIC_EXTENSION -DROCKSDB_PLATFORM_POSIX -DSNAPPY -DTBB -DZLIB -DZSTD -I/Users/jaepil/work/deepsearch/deps/cpp/rocksdb -I/Users/jaepil/work/deepsearch/deps/cpp/rocksdb/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk -I/Users/jaepil/app/include -I/opt/homebrew/include -I/opt/homebrew/opt/llvm/include -I/opt/homebrew/opt/llvm/include/c++/v1 -W -Wextra -Wall -pthread -Wsign-compare -Wshadow -Wno-unused-parameter -Wno-unused-variable -Woverloaded-virtual -Wnon-virtual-dtor -Wno-missing-field-initializers -Wno-strict-aliasing -Wno-invalid-offsetof -fno-omit-frame-pointer -momit-leaf-frame-pointer -march=armv8-a+crc+crypto -Wno-unused-function -Werror -O2 -g -DNDEBUG -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.1.sdk -std=gnu++20 -MD -MT CMakeFiles/rocksdb.dir/db/version_edit_handler.cc.o -MF CMakeFiles/rocksdb.dir/db/version_edit_handler.cc.o.d -o CMakeFiles/rocksdb.dir/db/version_edit_handler.cc.o -c /Users/jaepil/work/deepsearch/deps/cpp/rocksdb/db/version_edit_handler.cc
      /Users/jaepil/work/deepsearch/deps/cpp/rocksdb/db/version_edit_handler.cc:30:10: error: variable 'recovered_edits' set but not used [-Werror,-Wunused-but-set-variable]
        size_t recovered_edits = 0;
               ^
      1 error generated.
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11234
      
      Reviewed By: cbi42
      
      Differential Revision: D43458604
      
      Pulled By: ajkr
      
      fbshipit-source-id: d8c50e1a108887b037a120cd9f19374ddaeee817
      969d4e1d
    • DBWithTTLImpl::IsStale overflow when ttl is 15 years (#11279) · 7a07afe8
      zhangliangkai1992 authored
      Summary:
      Fix DBWithTTLImpl::IsStale overflow
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11279
      
      Reviewed By: cbi42
      
      Differential Revision: D43875039
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3e5feb8c4c4480bf1421b0763ade3d2e459ec028
      7a07afe8
    • Add instructions for installing googlebenchmark (#11282) · daeec505
      Alan Paxton authored
      Summary:
      Per the discussion in https://groups.google.com/g/rocksdb/c/JqhlvSs6ZEs/m/bnXZ7Q--AAAJ
      It seems non-obvious that googlebenchmark must be installed manually before microbenchmarks can be run. I have added more detail to the installation instructions to make it clearer.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11282
      
      Reviewed By: cbi42
      
      Differential Revision: D43874724
      
      Pulled By: ajkr
      
      fbshipit-source-id: f64a4ac4914cb057955d1ca965885f8822ca7764
      daeec505
    • Fix hang in async_io benchmarks in regression script (#11285) · 1de69762
      akankshamahajan authored
      Summary:
      Fix hang in async_io benchmarks in the regression script. I changed the order of the benchmarks, and that somehow fixed the hang.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11285
      
      Test Plan: Ran it manually
      
      Reviewed By: pdillinger
      
      Differential Revision: D43937431
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 7c43075d3be6b8f41d08e845664012768b769661
      1de69762
  12. 09 Mar 2023, 1 commit
    • Add a PerfContext counter for merge operands applied in point lookups (#11284) · 1d524385
      Levi Tamasi authored
      Summary:
      The existing PerfContext counter `internal_merge_count` only tracks the
      Merge operands applied during range scans. The patch adds a new counter
      called `internal_merge_count_point_lookups` to track the same metric
      for point lookups (`Get` / `MultiGet` / `GetEntity` / `MultiGetEntity`), and
      also fixes a couple of cases in the iterator where the existing counter wasn't
      updated.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11284
      
      Test Plan: `make check`
      
      Reviewed By: jowlyzhang
      
      Differential Revision: D43926082
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 321566d8b4cf0a3b6c9b73b7a5c984fb9bb492e9
      1d524385
  13. 08 Mar 2023, 1 commit
  14. 07 Mar 2023, 3 commits
    • Tests verifying non-zero checksums of zero bytes (#11260) · e0107325
      Peter Dillinger authored
      Summary:
      Adds unit tests verifying that a block payload and checksum of all zeros is not falsely considered valid data. The test exhaustively checks that all-zero blocks up to some length (default 20K, or 10M in the more exhaustive mode) do not produce a block checksum of all zeros.
      
      Also small refactoring of an existing checksum test to use parameterized test. (Suggest hiding whitespace changes for review.)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11260
      
      Test Plan:
      this is the test, manual run with
      `ROCKSDB_THOROUGH_CHECKSUM_TEST=1` to verify up to 10M.
      
      Reviewed By: hx235
      
      Differential Revision: D43706192
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 95e721c320ca928e7fa2400c2570fb359cc30b1f
      e0107325
    • Add support for parameters setting related to async_io benchmarks (#11262) · 13357de0
      akankshamahajan authored
      Summary:
      Provide support in the benchmark regression script for options that are used only in the async_io benchmark: "$`MAX_READAHEAD_SIZE`", "$`INITIAL_READAHEAD_SIZE`", "$`NUM_READS_FOR_READAHEAD_SIZE`".
      If a user wants to set these parameters for all benchmarks, they need to be set in the OPTION file instead.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11262
      
      Test Plan: Ran manually
      
      Reviewed By: anand1976
      
      Differential Revision: D43725567
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 28c3462dd785ffd646d44560fa9c92bc6a8066e5
      13357de0
    • Deflake/fix BlobSourceCacheReservationTest.IncreaseCacheReservationOnFullCache (#11273) · a1a3b233
      Levi Tamasi authored
      Summary:
      `BlobSourceCacheReservationTest.IncreaseCacheReservationOnFullCache` is both flaky and also doesn't do what its name says. The patch changes this test so it actually tests increasing the cache reservation, hopefully also deflaking it in the process.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11273
      
      Test Plan: `make check`
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D43800935
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 5eb54130dfbe227285b0e14f2084aa4b89f0b107
      a1a3b233
  15. 06 Mar 2023, 2 commits
  16. 04 Mar 2023, 3 commits
    • Avoid ColumnFamilyDescriptor copy (#10978) · ddde1e6a
      Igor Canadi authored
      Summary:
      Hi. :) Noticed we are copying ColumnFamilyDescriptor here because my process crashed during the copy constructor (the cause was unrelated)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10978
      
      Reviewed By: cbi42
      
      Differential Revision: D41473924
      
      Pulled By: ajkr
      
      fbshipit-source-id: 58a3473f2d7b24918f79d4b2726c20081c5e95b4
      ddde1e6a
    • Improve documentation for MergingIterator (#11161) · d053926f
      Changyu Bi authored
      Summary:
      Add some comments to try to explain how/why MergingIterator works. Made some small refactoring, mostly in MergingIterator::SkipNextDeleted() and MergingIterator::SeekImpl().
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11161
      
      Test Plan:
      crash test with small key range:
      ```
      python3 tools/db_crashtest.py blackbox --simple --max_key=100 --interval=6000 --write_buffer_size=262144 --target_file_size_base=256 --max_bytes_for_level_base=262144 --block_size=128 --value_size_mult=33 --subcompactions=10 --use_multiget=1 --delpercent=3 --delrangepercent=2 --verify_iterator_with_expected_state_one_in=2 --num_iterations=10
      ```
      
      Reviewed By: ajkr
      
      Differential Revision: D42860994
      
      Pulled By: cbi42
      
      fbshipit-source-id: 3f0c1c9c6481a7f468bf79d823998907a8116e9e
      d053926f
    • Fix/clarify/extend the API comments of CompactionFilter (#11261) · 95d67f36
      Levi Tamasi authored
      Summary:
      The patch makes the following changes to the API comments:
      * Some general comments about snapshots, thread safety, and user-defined timestamps are moved to a more prominent place at the top of the file.
      * Detailed descriptions are added for each `ValueType` and `Decision`, fixing and extending some existing comments (e.g. that of `kRemove`, which suggested that key-values are simply removed from the output, while in reality base values are converted to tombstones) and adding detailed comments that were missing (e.g. `kPurge` and `kChangeWideColumnEntity`).
      * Updated/extended the comments of `FilterV2/V3` and `FilterBlobByKey`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11261
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D43714314
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 835f4b1bdac1ce0e291155186095211303260729
      95d67f36
  17. 02 Mar 2023, 1 commit
    • Fix backward iteration issue when user defined timestamp is enabled in BlobDB (#11258) · 8dfcfd4e
      Yu Zhang authored
      Summary:
      During backward iteration, blob verification would fail because the user key (ts included) in `saved_key_` doesn't match the blob. This happens because during `FindValueForCurrentKey`, `saved_key_` is not updated when the user key (ts not included) is the same, in all cases except when `timestamp_lb_` is specified. This breaks the blob verification logic when user-defined timestamps are enabled and `timestamp_lb_` is not specified. Fix this by always updating `saved_key_` when a smaller user key (ts included) is seen.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11258
      
      Test Plan:
      `make check`
      `./db_blob_basic_test --gtest_filter=DBBlobWithTimestampTest.IterateBlobs`
      
      Run db_bench (built with DEBUG_LEVEL=0) to demonstrate that no overhead is introduced with:
      
      `./db_bench -user_timestamp_size=8  -db=/dev/shm/rocksdb -disable_wal=1 -benchmarks=fillseq,seekrandom[-W1-X6] -reverse_iterator=1 -seek_nexts=5`
      
      Baseline:
      
      - seekrandom [AVG    6 runs] : 72188 (± 1481) ops/sec;   37.2 (± 0.8) MB/sec
      
      With this PR:
      
      - seekrandom [AVG    6 runs] : 74171 (± 1427) ops/sec;   38.2 (± 0.7) MB/sec
      
      Reviewed By: ltamasi
      
      Differential Revision: D43675642
      
      Pulled By: jowlyzhang
      
      fbshipit-source-id: 8022ae8522d1f66548821855e6eed63640c14e04
      8dfcfd4e
  18. 01 Mar 2023, 1 commit
  19. 28 Feb 2023, 1 commit
  20. 25 Feb 2023, 1 commit
  21. 24 Feb 2023, 1 commit
    • Fix a TestGet failure when user defined timestamp is enabled (#11249) · af7872ff
      Yu Zhang authored
      Summary:
      Stressing a small DB with a small number of keys and user-defined timestamps enabled usually fails pretty quickly in TestGet.
      
      Example command to reproduce the failure:
      
      ` tools/db_crashtest.py blackbox --enable_ts --simple --delrangepercent=0 --delpercent=5 --max_key=100 --interval=3 --write_buffer_size=262144 --target_file_size_base=262144 --max_bytes_for_level_base=262144 --subcompactions=1`
      
      Example failure: `error : inconsistent values for key 0000000000000009000000000000000A7878: expected state has the key, Get() returns NotFound.`
      
      Fixes this test failure by refreshing the read-up-to timestamp to the most up-to-date timestamp (i.e., now) after the key is locked. Without this, things could happen in the following order and cause a test failure:
      
      <table>
        <tr>
          <th>TestGet thread</th>
          <th> A writing thread</th>
        </tr>
        <tr>
          <td>read_opts.timestamp = GetNow()</td>
          <td></td>
        </tr>
        <tr>
          <td></td>
          <td>Lock key, do write</td>
        </tr>
        <tr>
          <td>Lock key, read(read_opts) return NotFound</td>
          <td></td>
        </tr>
      </table>
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11249
      
      Reviewed By: ltamasi
      
      Differential Revision: D43551302
      
      Pulled By: jowlyzhang
      
      fbshipit-source-id: 26877ab379bdb97acd2682a2632bc29718427f38
      af7872ff