1. 18 June 2022, 1 commit
    • Fix overflow in ribbon_bench after #10184 (#10195) · f87adcfb
      Peter Dillinger committed
      Summary:
      Ribbon micro-bench needs updating after re-numbering
      `BloomLikeFilterPolicy::GetAllFixedImpls()` entries. (CircleCI nightly
      failure.)
      
      Also fixed memory leaks while using ASAN to validate my fix. (I assume
      the leaks weren't intentional for some performance characteristic.)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10195
      
      Test Plan: run with ASAN
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37244459
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 5a363e10de3c4c9c88099c937e3dc3b4cf24fd30
  2. 17 June 2022, 6 commits
    • Add WriteOptions::protection_bytes_per_key (#10037) · 5d6005c7
      Andrew Kryczka committed
      Summary:
      Added an option, `WriteOptions::protection_bytes_per_key`, that controls how many bytes per key we use for integrity protection in `WriteBatch`. It takes effect when `WriteBatch::GetProtectionBytesPerKey() == 0`.
      
      Currently the only supported value is eight. Invoking a user API with it set to any other nonzero value will result in `Status::NotSupported` returned to the user.
      
      There is also a bug fix for integrity protection with `inplace_callback`, where we forgot to take into account the possible change in varint length when calculating KV checksum for the final encoded buffer.
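
      A minimal usage sketch (assuming an open `rocksdb::DB* db`; error handling elided):

      ```cpp
      #include <cassert>

      #include "rocksdb/db.h"

      void PutWithProtection(rocksdb::DB* db) {
        rocksdb::WriteOptions write_opts;
        // Eight is currently the only supported nonzero value; any other
        // nonzero value fails the write with Status::NotSupported.
        write_opts.protection_bytes_per_key = 8;
        rocksdb::Status s = db->Put(write_opts, "key", "value");
        assert(s.ok());
      }
      ```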
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10037
      
      Test Plan:
      - Manual
        - Set default value of `WriteOptions::protection_bytes_per_key` to eight and ran `make check -j24`
        - Enabled in MyShadow for 1+ week
      - Automated
        - Unit tests have a `WriteMode` that enables the integrity protection via `WriteOptions`
        - Crash test - in most cases, use `WriteOptions::protection_bytes_per_key` to enable integrity protection
      
      Reviewed By: cbi42
      
      Differential Revision: D36614569
      
      Pulled By: ajkr
      
      fbshipit-source-id: 8650087ceac9b61b560f1e5fafe5e1baf9c725fb
    • Fix a false negative merge conflict (#10192) · f62c1e1e
      Peter Dillinger committed
      Summary:
      A merge conflict between https://github.com/facebook/rocksdb/issues/10184 and https://github.com/facebook/rocksdb/issues/10122 was
      not detected by source control, leading to non-compiling code.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10192
      
      Test Plan: updated test
      
      Reviewed By: hx235
      
      Differential Revision: D37231921
      
      Pulled By: pdillinger
      
      fbshipit-source-id: fa21488716f4c006b111b8c4127d71c757c935c3
    • Update HISTORY.md for #10114 (#10189) · 8cf86258
      Changyu Bi committed
      Summary:
      Update HISTORY.md for https://github.com/facebook/rocksdb/issues/10114: write batch checksum verification before writing to WAL.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10189
      
      Reviewed By: ajkr
      
      Differential Revision: D37226366
      
      Pulled By: cbi42
      
      fbshipit-source-id: cd2f076961abc35f35783e0f2cc3beda68cdb446
    • More testing w/prefix extractor, small refactor (#10122) · fff302d9
      Peter Dillinger committed
      Summary:
      There was an interesting code path not covered by testing that
      is difficult to replicate in a unit test, which is now covered using a
      sync point. Specifically, the case of table_prefix_extractor == null and
      !need_upper_bound_check in `BlockBasedTable::PrefixMayMatch`, which
      can happen if the table reader is opened before the extractor is registered with the
      global object registry but is later registered and re-set with SetOptions. (We
      don't have sufficient testing control over the object registry to set that up
      repeatedly.)
      
      Also, this function has been renamed to `PrefixRangeMayMatch` for clarity
      vs. other functions that are not the same.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10122
      
      Test Plan: unit tests expanded
      
      Reviewed By: siying
      
      Differential Revision: D36944834
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 9e52d9da1929a3e42bbc230fcdc3599949de7bdb
    • Remove deprecated block-based filter (#10184) · 126c2237
      Peter Dillinger committed
      Summary:
      In https://github.com/facebook/rocksdb/issues/9535, release 7.0, we hid the old block-based filter from being created using
      the public API, because of its inefficiency. Although we normally maintain read compatibility
      on old DBs forever, filters are not required for reading a DB, only for optimizing read
      performance. Thus, it should be acceptable to remove this code and the substantial
      maintenance burden it carries as useful features are developed and validated (such
      as user timestamp).
      
      This change completely removes the code for reading and writing the old block-based
      filters, net removing about 1370 lines of code no longer needed. Options removed from
      testing / benchmarking tools. The prior existence is only evident in a couple of places:
      * `CacheEntryRole::kDeprecatedFilterBlock` - We can update this public API enum in
      a major release to minimize source code incompatibilities.
      * A warning is logged when an old table file is opened that used the old block-based
      filter. This is provided as a courtesy, and would be a pain to unit test, so manual testing
      should suffice. Unfortunately, sst_dump does not tell you whether a file uses
      block-based filter, and the structure of the code makes it very difficult to fix.
      * To detect that case, `kObsoleteFilterBlockPrefix` (renamed from `kFilterBlockPrefix`)
      for metaindex is maintained (for now).
      
      Other notes:
      * In some cases where numbers are associated with filter configurations, we have had to
      update the assigned numbers so that they all correspond to something that exists.
      * Fixed a potential stat counting bug by assuming `filter_checked = false` for cases
      like `filter == nullptr` rather than assuming `filter_checked = true`.
      * Removed obsolete `block_offset` and `prefix_extractor` parameters from several
      functions.
      * Removed some unnecessary checks `if (!table_prefix_extractor() && !prefix_extractor)`
      because the caller guarantees the prefix extractor exists and is compatible.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10184
      
      Test Plan:
      tests updated, manually test new warning in LOG using base version to
      generate a DB
      
      Reviewed By: riversand963
      
      Differential Revision: D37212647
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 06ee020d8de3b81260ffc36ad0c1202cbf463a80
    • Update stats to help users estimate MultiGet async IO impact (#10182) · a6691d0f
      anand76 committed
      Summary:
      Add a couple of stats to help users estimate the impact of potential MultiGet perf improvements -
      1. NUM_LEVEL_READ_PER_MULTIGET - A histogram stat for number of levels that required MultiGet to read from a file
      2. MULTIGET_COROUTINE_COUNT - A ticker stat to count the number of times the coroutine version of MultiGetFromSST was used
      
      The NUM_DATA_BLOCKS_READ_PER_LEVEL stat is obsoleted as it doesn't provide useful information for MultiGet optimization.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10182
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D37213296
      
      Pulled By: anand1976
      
      fbshipit-source-id: 5d2b7708017c0e278578ae4bffac3926f6530efb
  3. 16 June 2022, 9 commits
    • Abort in dbg mode after logging (#10183) · 4d31d3c2
      Yanqin Jin committed
      Summary:
      In CompactionIterator code, there are multiple places where the process
      will abort in dbg mode before logging the error message describing the
      cause. This PR changes only the logging behavior for the compaction iterator so
      that the error message is written to LOG before the process aborts in debug
      mode.
      
      Also updated the triggering condition for an assertion for single delete with
      user-defined timestamp.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10183
      
      Test Plan: make check
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D37190218
      
      Pulled By: riversand963
      
      fbshipit-source-id: 741bb007067be7cfbe94ac9e530ad4b2b339c009
    • Add few optimizations in async_io for short scans (#10140) · 8353ae8b
      Akanksha Mahajan committed
      Summary:
      This PR adds a few optimizations for async_io for shorter scans.
      1. If async_io is enabled, Seek creates a FilePrefetchBuffer object to fetch the data asynchronously. However, `FilePrefetchBuffer::num_file_reads_` wasn't taken into consideration when Next is called after Seek, so prefetching would start right away. This PR fixes that: Next triggers prefetching only if `FilePrefetchBuffer::num_file_reads_` is greater than 2 and the blocks are sequential. This scenario applies only to implicit auto readahead.
      2. For Seek, the call to TryReadFromCacheAsync to poll would also issue a further async call, because the TryReadFromCacheAsync flow wasn't changed. It is updated to return after the poll instead of prefetching any further data.
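
      For context, a sketch of how a caller opts into this async path (a minimal sketch assuming an open `db`, not the full tuning surface):

      ```cpp
      #include <memory>

      #include "rocksdb/db.h"

      void ShortScan(rocksdb::DB* db) {
        rocksdb::ReadOptions read_opts;
        read_opts.async_io = true;  // opt into async prefetching on iterators
        std::unique_ptr<rocksdb::Iterator> iter(db->NewIterator(read_opts));
        for (iter->Seek("key_prefix"); iter->Valid(); iter->Next()) {
          // With this PR, Next() prefetches only once num_file_reads_ > 2
          // and the blocks read are sequential (implicit auto readahead).
        }
      }
      ```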
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10140
      
      Test Plan:
      1. Added a unit test
      2. Ran crash_test with async_io = 1 to make sure nothing crashes.
      
      Reviewed By: anand1976
      
      Differential Revision: D37042242
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: b8e6b7cb2ee0886f37a8f53951948b9084e8ffda
    • Fix handling of accidental truncation of IDENTITY file (#10173) · 3d358a7e
      Peter Dillinger committed
      Summary:
      A consequence of https://github.com/facebook/rocksdb/issues/9990 was requiring a non-empty DB ID to generate
      new SST files. But if the DB ID is not tracked in the manifest and the IDENTITY file
      is somehow truncated to 0 bytes, then an empty DB ID would be assigned, leading
      to a crash. This change ensures a non-empty DB ID is assigned and set in the
      IDENTITY file.
      
      Also,
      * Some light refactoring to clean up the logic
      * (I/O efficiency) If the ID is tracked in the manifest and already matches the
      IDENTITY file, don't needlessly overwrite the file.
      * (Debugging) Log the DB ID to info log on open, because sometimes IDENTITY
      can change if DB is moved around (though it would be unusual for info log to
      be copied/moved without IDENTITY file)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10173
      
      Test Plan: unit tests expanded/updated
      
      Reviewed By: ajkr
      
      Differential Revision: D37176545
      
      Pulled By: pdillinger
      
      fbshipit-source-id: a9b414cd35bfa33de48af322a36c24538d50bef1
    • Use only ASCII in source files (#10164) · 94329ae4
      Peter Dillinger committed
      Summary:
      Fix existing usage of non-ASCII and add a check to prevent
      future use. Added `-n` option to greps to provide line numbers.
      
      Alternative to https://github.com/facebook/rocksdb/issues/10147
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10164
      
      Test Plan:
      used new checker to find & fix cases, manually check
      db_bench output is preserved
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D37148792
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 68c8b57e7ab829369540d532590bf756938855c7
    • Verify write batch checksum before WAL (#10114) · 9882652b
      Changyu Bi committed
      Summary:
      Context: WriteBatch can have key-value checksums when it was created with `protection_bytes_per_key > 0`.
      This PR added checksum verification for write batches before they are written to WAL.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10114
      
      Test Plan:
      - Added new unit tests to db_kv_checksum_test.cc: `make check -j32`
      - benchmark on performance regression: `./db_bench --benchmarks=fillrandom[-X20] -db=/dev/shm/test_rocksdb -write_batch_protection_bytes_per_key=8`
        - Pre-PR: `fillrandom [AVG    20 runs] : 198875 (± 3006) ops/sec;   22.0 (± 0.3) MB/sec`
        - Post-PR: `fillrandom [AVG    20 runs] : 196487 (± 2279) ops/sec;   21.7 (± 0.3) MB/sec`

        Mean regressed about 1% (198875 -> 196487 ops/sec).
      
      Reviewed By: ajkr
      
      Differential Revision: D36917464
      
      Pulled By: cbi42
      
      fbshipit-source-id: 29beb74edf65f04b1a890b4f650d873dc7ed790d
    • Change the instruction used for a pause on arm64 (#10118) · 2e5a323d
      Ali Saidi committed
      Summary:
      While the yield instruction conceptually sounds correct, on most platforms it is
      a simple nop that doesn't delay execution anywhere close to what an x86
      pause instruction does. In other projects with spin-wait loops, an isb has been
      observed to be much closer to the x86 behavior.
      
      On a Graviton3 system the following test improves on average by 2x with this
      change averaged over 20 runs:
      
      ```
      ./db_bench  -benchmarks=fillrandom -threads=64 -batch_size=1
      -memtablerep=skip_list -value_size=100 --num=100000
      level0_slowdown_writes_trigger=9999 -level0_stop_writes_trigger=9999
      -disable_auto_compactions --max_write_buffer_number=8 -max_background_flushes=8
      --disable_wal --write_buffer_size=160000000 --block_size=16384
      --allow_concurrent_memtable_write -compression_type none
      ```
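
      A sketch of the kind of spin-pause primitive involved (the helper name is illustrative; RocksDB's actual macro lives in its port layer):

      ```cpp
      // x86 `pause` meaningfully delays a spin-wait loop; on arm64, `yield`
      // is effectively a nop, while `isb` stalls for a duration much closer
      // to x86 `pause`.
      inline void SpinPause() {
      #if defined(__x86_64__) || defined(__i386__)
        asm volatile("pause");
      #elif defined(__aarch64__)
        asm volatile("isb");  // was: asm volatile("yield");
      #endif
      }
      ```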
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10118
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37120578
      
      fbshipit-source-id: c20bde4298222edfab7ff7cb6d42497e7012400d
    • Use madvise() for mmaped file advise (#10170) · 69a32eec
      sdong committed
      Summary:
      A recent PR https://github.com/facebook/rocksdb/pull/10142 enabled fadvise for mmaped files. However, we were told that it might not take effect and that madvise() should be used instead.
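
      A sketch of the distinction (standard POSIX calls; parameter names are illustrative):

      ```cpp
      #include <fcntl.h>
      #include <stddef.h>
      #include <sys/mman.h>

      // posix_fadvise() advises the fd-based buffered-read path; for a
      // memory-mapped file, the advice must be applied to the mapping itself.
      void AdviseRandom(int fd, void* mmap_base, size_t mmap_len) {
        posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);  // buffered reads
        madvise(mmap_base, mmap_len, MADV_RANDOM);   // mmap reads (this PR)
      }
      ```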
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10170
      
      Test Plan:
      Run existing tests
      Run a benchmark using mmap with advise random and see I/O size is indeed small.
      
      Reviewed By: anand1976
      
      Differential Revision: D37158582
      
      fbshipit-source-id: 8b3a74f0e89d2e16aac78ee4124c05841d4135c3
    • Allow db_bench and db_stress to set `allow_data_in_errors` (#10171) · ce419c0f
      Yanqin Jin committed
      Summary:
      There is `Options::allow_data_in_errors` that controls whether RocksDB
      is allowed to log data, e.g. key, value, etc in LOG files. It is false
      by default. However, in db_bench and db_stress, it is often ok to log
      data because there is no concern about privacy.
      
      This PR allows db_stress and db_bench to set this option on the command
      line, while it remains false by default. Furthermore, the crash/recovery test
      driven by db_crashtest.py now opts in.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10171
      
      Test Plan: Stress test and db_bench
      
      Reviewed By: hx235
      
      Differential Revision: D37163787
      
      Pulled By: riversand963
      
      fbshipit-source-id: 0242f24d292ba15b6faf8ff903963b85d3e011f8
    • fix cancel argument for latest liburing (#10168) · 19345de6
      Akanksha Mahajan committed
      Summary:
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10168
      
      The argument type changed to `u64`.
      
      Reviewed By: ajkr
      
      Differential Revision: D37155407
      
      fbshipit-source-id: 464eab2806675f148fce075a6fea369fa3d7a9bb
  4. 15 June 2022, 8 commits
    • Fix C4702 on windows (#10146) · 40dfa260
      iseki committed
      Summary:
      This code is unreachable when `ROCKSDB_LITE` is not defined, and it causes a build failure in my environment, VS2019 16.11.15.
      ```
      -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.
      -- The CXX compiler identification is MSVC 19.29.30145.0
      -- The C compiler identification is MSVC 19.29.30145.0
      -- The ASM compiler identification is MSVC
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10146
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D37112916
      
      Pulled By: ajkr
      
      fbshipit-source-id: e0b2bf3055d6fac1b3fb40b9f02c4cbae3f82757
    • Fix potential leak when reusing PinnableSlice instances. (#10166) · 77f47995
      mpoeter committed
      Summary:
      `PinnableSlice` may hold a handle to a cache value which must be released to correctly decrement the ref-counter. However, when `PinnableSlice` variables are reused, e.g. like this:
      ```
      PinnableSlice pin_slice;
      db.Get("foo", &pin_slice);
      db.Get("foo", &pin_slice);
      ```
      then the second `Get` simply overwrites the old value in `pin_slice` and the handle returned by the first `Get` is _not_ released.
      
      This PR adds `Reset` calls to the `Get`/`MultiGet` calls that accept `PinnableSlice` arguments to ensure proper cleanup of old values.
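
      A sketch of the reuse pattern that leaked before this fix (with the fix, the second `Get` performs this `Reset` internally):

      ```cpp
      #include "rocksdb/db.h"

      void ReuseSlice(rocksdb::DB* db) {
        rocksdb::PinnableSlice pin_slice;
        db->Get(rocksdb::ReadOptions(), db->DefaultColumnFamily(), "foo",
                &pin_slice);
        // Pre-fix workaround: release the handle pinned by the first Get.
        pin_slice.Reset();
        db->Get(rocksdb::ReadOptions(), db->DefaultColumnFamily(), "foo",
                &pin_slice);
      }
      ```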
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10166
      
      Reviewed By: hx235
      
      Differential Revision: D37151632
      
      Pulled By: ajkr
      
      fbshipit-source-id: 9dd3c3288300f560531b843f67db11aeb569a9ff
    • Modify the instructions emited for PREFETCH on arm64 (#10117) · b550fc0b
      Ali Saidi committed
      Summary:
      `__builtin_prefetch(..., 1)` prefetches into the L2 cache on x86, while the same
      call emits a pldl3keep instruction on arm64, which doesn't seem to be close enough.

      Testing on Graviton3 and M1 systems with memtablerep_bench fillrandom,
      skiplist throughput increased as follows when adjusting the 1 to 2 or 3:
      ```
                 1 -> 2     1 -> 3
      ----------------------------
      Graviton3   +10%        +15%
      M1          +10%        +10%
      ```
      
      Given that prefetching into the L1 cache seems to help, I chose that conversion.
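
      For reference, a sketch of the locality argument involved (the wrapper name is illustrative):

      ```cpp
      inline void PrefetchForRead(const void* addr) {
        // __builtin_prefetch(addr, rw, locality): rw 0 = read, 1 = write;
        // locality 0..3, where 3 means "keep in the caches closest to the
        // core". On arm64, locality 1 maps to pldl3keep and 3 to pldl1keep.
        __builtin_prefetch(addr, 0 /* read */, 3 /* was 1 */);
      }
      ```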
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10117
      
      Reviewed By: pdillinger
      
      Differential Revision: D37120475
      
      fbshipit-source-id: db1ef43f941445019c68316500a2250acc643d5e
    • mingw: remove no-asynchronous-unwind-tables (#9963) · 751d1a3e
      James Tucker committed
      Summary:
      This default is generally incompatible with other parts of mingw, and
      can be applied by outside users as-needed.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9963
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D36302813
      
      Pulled By: ajkr
      
      fbshipit-source-id: 9456b41a96bde302bacbc39e092ccecfcb42f34f
    • Add blob cache option in the column family options (#10155) · cba398df
      Gang Liao committed
      Summary:
      There is currently no caching mechanism for blobs, which is not ideal especially when the database resides on remote storage (where we cannot rely on the OS page cache). As part of this task, we would like to make it possible for the application to configure a blob cache.
      This PR is a part of https://github.com/facebook/rocksdb/issues/10156
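
      A configuration sketch (the `blob_cache` field is what this PR adds; the sizing is illustrative):

      ```cpp
      #include "rocksdb/cache.h"
      #include "rocksdb/options.h"

      rocksdb::ColumnFamilyOptions MakeBlobOptions() {
        rocksdb::ColumnFamilyOptions cf_opts;
        cf_opts.enable_blob_files = true;
        // Dedicated LRU cache for blob values; 1 GiB is an arbitrary choice.
        cf_opts.blob_cache = rocksdb::NewLRUCache(1ULL << 30);
        return cf_opts;
      }
      ```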
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10155
      
      Reviewed By: ltamasi
      
      Differential Revision: D37150819
      
      Pulled By: gangliao
      
      fbshipit-source-id: b807c7916ea5d411588128f8e22a49f171388fe2
    • fix a false positive case of parsing table factory from options file (#10094) · 1d2950b8
      tabokie committed
      Summary:
      During options file parsing, reset table factory before attempting to parse it
      from string. This avoids mistakenly treating the default table factory as a
      newly created one.
      Signed-off-by: tabokie <xy.tao@outlook.com>
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10094
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D36945378
      
      Pulled By: ajkr
      
      fbshipit-source-id: 94b2604e5e87682063b4b78f6370f3e8f101dc44
    • Account memory of FileMetaData in global memory limit (#9924) · d665afdb
      Hui Xiao committed
      Summary:
      **Context/Summary:**
      As revealed by heap profiling, allocation of `FileMetaData` for a [newly created file added to a Version](https://github.com/facebook/rocksdb/pull/9924/files#diff-a6aa385940793f95a2c5b39cc670bd440c4547fa54fd44622f756382d5e47e43R774) can consume significant heap memory. This PR accounts for it toward the global memory limit based on block cache capacity.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9924
      
      Test Plan:
      - Previous `make check` verified there are only 2 places where the memory of  the allocated `FileMetaData` can be released
      - New unit test `TEST_P(ChargeFileMetadataTestWithParam, Basic)`
      - db bench (CPU cost of `charge_file_metadata` in write and compact)
         - **write micros/op: -0.24%** : `TEST_TMPDIR=/dev/shm/testdb ./db_bench -benchmarks=fillseq -db=$TEST_TMPDIR -charge_file_metadata=1 (remove this option for pre-PR) -disable_auto_compactions=1 -write_buffer_size=100000 -num=4000000 | egrep 'fillseq'`
         - **compact micros/op -0.87%** : `TEST_TMPDIR=/dev/shm/testdb ./db_bench -benchmarks=fillseq -db=$TEST_TMPDIR -charge_file_metadata=1 -disable_auto_compactions=1 -write_buffer_size=100000 -num=4000000 -numdistinct=1000 && ./db_bench -benchmarks=compact -db=$TEST_TMPDIR -use_existing_db=1 -charge_file_metadata=1 -disable_auto_compactions=1 | egrep 'compact'`
      
      table 1 - write
      
      #-run | (pre-PR) avg micros/op | std micros/op | (post-PR)  micros/op | std micros/op | change (%)
      -- | -- | -- | -- | -- | --
      10 | 3.9711 | 0.264408 | 3.9914 | 0.254563 | 0.5111933721
      20 | 3.83905 | 0.0664488 | 3.8251 | 0.0695456 | -0.3633711465
      40 | 3.86625 | 0.136669 | 3.8867 | 0.143765 | 0.5289363078
      80 | 3.87828 | 0.119007 | 3.86791 | 0.115674 | **-0.2673865734**
      160 | 3.87677 | 0.162231 | 3.86739 | 0.16663 | **-0.2419539978**
      
      table 2 - compact
      
      #-run | (pre-PR) avg micros/op | std micros/op | (post-PR)  micros/op | std micros/op | change (%)
      -- | -- | -- | -- | -- | --
      10 | 2,399,650.00 | 96,375.80 | 2,359,537.00 | 53,243.60 | -1.67
      20 | 2,410,480.00 | 89,988.00 | 2,433,580.00 | 91,121.20 | 0.96
      40 | 2.41E+06 | 121811 | 2.39E+06 | 131525 | **-0.96**
      80 | 2.40E+06 | 134503 | 2.39E+06 | 108799 | **-0.78**
      
      - stress test: `python3 tools/db_crashtest.py blackbox --charge_file_metadata=1  --cache_size=1` killed as normal
      
      Reviewed By: ajkr
      
      Differential Revision: D36055583
      
      Pulled By: hx235
      
      fbshipit-source-id: b60eab94707103cb1322cf815f05810ef0232625
    • Fix the failure related to io_uring_prep_cancel (#10165) · 40d19bc1
      Akanksha Mahajan committed
      Summary:
      Fix for internal jobs failing with
      ```
       error: no matching function for call to 'io_uring_prep_cancel'
            io_uring_prep_cancel(sqe, posix_handle, 0);
            ^~~~~~~~~~~~~~~~~~~~
      note: candidate function not viable: no known conversion from 'rocksdb::Posix_IOHandle *' to '__u64' (aka 'unsigned long long') for 2nd argument
      static inline void io_uring_prep_cancel(struct io_uring_sqe *sqe,
      ```
      
      User data is set using `io_uring_set_data` API so no need to pass posix_handle here.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10165
      
      Test Plan: CircleCI jobs
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37145233
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 05da650e1240e9c6fcc8aed5f0067308dccb164a
  5. 14 June 2022, 5 commits
    • Make the per-shard hash table fixed-size. (#10154) · f105e1a5
      Guido Tagliavini Ponce committed
      Summary:
      We make the size of the per-shard hash table fixed. The base level of the hash table is now preallocated with the required capacity. The user must provide an estimate of the size of the values.
      
      Notice that even though the base level becomes fixed, the chains are still dynamic. Overall, the shard capacity mechanisms haven't changed, so we don't need to test this.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10154
      
      Test Plan: `make -j24 check`
      
      Reviewed By: pdillinger
      
      Differential Revision: D37124451
      
      Pulled By: guidotag
      
      fbshipit-source-id: cba6ac76052fe0ec60b8ff4211b3de7650e80d0c
    • Fix a race condition in transaction stress test (#10157) · bfaf8291
      Yanqin Jin committed
      Summary:
      Before this PR, there can be a race condition between the thread calling
      `StressTest::Open()` and a background compaction thread calling
      `MultiOpsTxnsStressTest::VerifyPkSkFast()`.
      
      ```
      Time   thread1                             bg_compact_thr
       |     TransactionDB::Open(..., &txn_db_)
       |     db_ is still nullptr
       |                                         db_->GetSnapshot()  // segfault
       |     db_ = txn_db_
       V
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10157
      
      Test Plan: CI
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D37121653
      
      Pulled By: riversand963
      
      fbshipit-source-id: 6a53117f958e9ee86f77297fdeb843e5160a9331
    • Implement AbortIO using io_uring (#10125) · c0e0f306
      Akanksha Mahajan committed
      Summary:
      Implement AbortIO in posix using io_uring to cancel any pending read requests submitted. They are cancelled using io_uring_prep_cancel, which sets the IORING_OP_ASYNC_CANCEL flag.
      
      To cancel a request, the sqe must have ->addr set to the user_data of the request it wishes to cancel. If the request is cancelled successfully, the original request is completed with -ECANCELED and the cancel request is completed with a result of 0. If the request was already running, the original may or may not complete in error. The cancel request will complete with -EALREADY for that case. And finally, if the request to cancel wasn't found, the cancel request is completed with -ENOENT.
      
      Reference: https://kernel.dk/io_uring-whatsnew.pdf,
      https://lore.kernel.org/io-uring/d9a8d76d23690842f666c326631ecc2d85b6c1bc.1615566409.git.asml.silence@gmail.com/
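
      A sketch of the cancellation pattern described above (liburing; the second argument of io_uring_prep_cancel is a `void*` or `__u64` tag depending on the liburing version):

      ```cpp
      #include <liburing.h>

      // Cancel a previously submitted read whose sqe was tagged via
      // io_uring_sqe_set_data(read_sqe, tag).
      void AbortRead(struct io_uring* ring, void* tag) {
        struct io_uring_sqe* sqe = io_uring_get_sqe(ring);
        io_uring_prep_cancel(sqe, tag, 0 /* flags */);
        io_uring_submit(ring);
        // Cancel cqe result: 0 = cancelled; -EALREADY = already running;
        // -ENOENT = no matching request found.
      }
      ```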
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10125
      
      Test Plan: Existing Posix tests.
      
      Reviewed By: anand1976
      
      Differential Revision: D36946970
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 3bc1f1521b3151d01a348fc6431eb3fc85db3a14
    • Increase num_levels for universal from 8 to 40 (#10158) · 04bd3479
      Mark Callaghan committed
      Summary:
      See https://github.com/facebook/rocksdb/issues/10082 for more details. Trivial move
      isn't done for universal compaction when the compaction is from L0 into L0, so too small
      a value for num_levels with db_bench means there will be fewer trivial moves with
      universal, and that means write-amp will increase.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10158
      
      Test Plan: run it
      
      Reviewed By: siying
      
      Differential Revision: D37122519
      
      Pulled By: mdcallag
      
      fbshipit-source-id: 1cb39049676f68a6cc3ea8d105a9965f89d4d09e
    • Document design/specification bugs with auto_prefix_mode (#10144) · ad135f3f
      Peter Dillinger committed
      Summary:
      auto_prefix_mode is designed to use prefix filtering in a
      particular "safe" set of cases where the upper bound and the seek key
      have different prefixes: where the upper bound is the "same length
      immediate successor". These conditions are not sufficient to guarantee
      the same iteration results as total_order_seek if the DB contains
      "short" keys, less than the "full" (maximum) prefix length.
      
      We are not simply disabling the optimization in these successor cases
      because it is likely that users are essentially getting what they want
      out of existing usage. Especially if users are constructing successor
      bounds with the intention of doing a prefix-bounded seek, the existing
      behavior is more expected than the total_order_seek behavior.
      Consequently, for now we reconcile the bad specification of behavior by
      documenting the existing mismatch with total_order_seek.
      
      A closely related issue affects hypothetical comparators like
      ReverseBytewiseComparator: if they "correctly" implement
      IsSameLengthImmediateSuccessor, auto_prefix_mode could omit more
      entries (other than "short" keys noted above). Luckily, the built-in
      ReverseBytewiseComparator has an "incorrect" implementation of
      IsSameLengthImmediateSuccessor that effectively prevents prefix
      optimization and, thus, the bug. This is now documented as a new
      constraint on IsSameLengthImmediateSuccessor, and the implementation
      tweaked to be simply "safe" rather than "incorrect".
      
      This change also includes unit test updates to demonstrate the above
      issues. (Test was cleaned up for readability and simplicity.)
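
      A sketch of the usage pattern whose behavior is being documented (assumes a fixed-length prefix extractor; keys are illustrative):

      ```cpp
      #include <memory>

      #include "rocksdb/db.h"

      void PrefixBoundedScan(rocksdb::DB* db) {
        rocksdb::Slice upper_bound("abc1");  // same-length successor of "abc0"
        rocksdb::ReadOptions ro;
        ro.auto_prefix_mode = true;
        ro.iterate_upper_bound = &upper_bound;
        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
        for (it->Seek("abc0"); it->Valid(); it->Next()) {
          // Unlike total_order_seek, this may omit "short" keys (shorter
          // than the full prefix length) that sort within the bound.
        }
      }
      ```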
      
      Intended follow-up:
      * Tweak documented axioms for prefix_extractor (more details then)
      * Consider some sort of fix for this case. I don't know what that would
      look like without breaking the performance of existing code. Perhaps
      if all keys in an SST file have prefixes that are "full length," we can track
      that fact and use it to allow optimization with the "same length
      immediate successor", but that would only apply to new files.
      * Consider a better system of specifying prefix bounds
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10144
      
      Test Plan: test updates included
      
      Reviewed By: siying
      
      Differential Revision: D37052710
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 5f63b7d65f3f214e4b143e0f9aa1749527c587db
  6. 13 June 2022, 1 commit
  7. 11 June 2022, 4 commits
    • Assume fixed size key (#10137) · 415200d7
      Guido Tagliavini Ponce committed
      Summary:
      FastLRUCache now only supports 16B keys. The tests have changed to reflect this.
      
      Because the unit tests were designed for caches that accept any string as keys, some tests are no longer compatible with FastLRUCache. We have disabled those for runs with FastLRUCache. (We could potentially change all tests to use 16B keys, but we don't because the cache public API does not require this.)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10137
      
      Test Plan: make -j24 check
      
      Reviewed By: gitbw95
      
      Differential Revision: D37083934
      
      Pulled By: guidotag
      
      fbshipit-source-id: be1719cf5f8364a9a32bc4555bce1a0de3833b0d
    • Run fadvise with mmap file (#10142) · 80afa776
      sdong committed
      Summary:
      Right now, with mmap files, we don't run fadvise following users' requests. There is no reason for that, so this diff adds it.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10142
      
      Test Plan:
      A simple readrandom against files with the page cache dropped shows latency improvement from 7.8 us to 2.8 us:
      
      ./db_bench -use_existing_db --benchmarks=readrandom --num=100
      
      Reviewed By: anand1976
      
      Differential Revision: D37074975
      
      fbshipit-source-id: ccc72bcac1b5fd634eb8fa2b6a5d9afe332e0bf6
    • Snapshots with user-specified timestamps (#9879) · 1777e5f7
      Yanqin Jin committed
      Summary:
      In RocksDB, keys are associated with (internal) sequence numbers which denote when the keys are written
      to the database. Sequence numbers in different RocksDB instances are unrelated, thus not comparable.
      
      It is nice if we can associate sequence numbers with their corresponding actual timestamps. One thing we can
      do is to support user-defined timestamp, which allows the applications to specify the format of custom timestamps
      and encode a timestamp with each key. More details can be found at https://github.com/facebook/rocksdb/wiki/User-defined-Timestamp-%28Experimental%29.
      
      This PR provides a different but complementary approach. We can associate rocksdb snapshots (defined in
      https://github.com/facebook/rocksdb/blob/7.2.fb/include/rocksdb/snapshot.h#L20) with **user-specified** timestamps.
      Since a snapshot is essentially an object representing a sequence number, this PR establishes a bi-directional mapping between sequence numbers and timestamps.
      
      In the past, snapshots have usually been taken by readers. The current super-version is grabbed, and a `rocksdb::Snapshot`
      object is created with the last published sequence number of the super-version. You can see that the reader actually
      has no good idea of what timestamp to assign to this snapshot, because by the time the `GetSnapshot()` is called,
      an arbitrarily long period of time may have already elapsed since the last write, which is when the last published
      sequence number is written.
      
      This observation motivates the creation of "timestamped" snapshots on the write path. Currently, this functionality is
      exposed only to the layer of `TransactionDB`. Application can tell RocksDB to create a snapshot when a transaction
      commits, effectively associating the last sequence number with a timestamp. It is also assumed that the application will
      ensure any two snapshots with timestamps satisfy the following:
      ```
      snapshot1.seq < snapshot2.seq iff. snapshot1.ts < snapshot2.ts
      ```
      
      If the application can guarantee that when a reader takes a timestamped snapshot, there are no active writes going on
      in the database, then we also allow the user to use a new API `TransactionDB::CreateTimestampedSnapshot()` to create
      a snapshot with associated timestamp.
      
      Code example
      ```cpp
      // Create a timestamped snapshot when committing transaction.
      txn->SetCommitTimestamp(100);
      txn->SetSnapshotOnNextOperation();
      txn->Commit();
      
      // A wrapper API for convenience
      Status Transaction::CommitAndTryCreateSnapshot(
          std::shared_ptr<TransactionNotifier> notifier,
          TxnTimestamp ts,
          std::shared_ptr<const Snapshot>* ret);
      
      // Create a timestamped snapshot if caller guarantees no concurrent writes
      std::pair<Status, std::shared_ptr<const Snapshot>> snapshot = txn_db->CreateTimestampedSnapshot(100);
      ```
      
      The snapshots created in this way will be managed by RocksDB with ref-counting and potentially shared with
      other readers. We provide the following APIs for readers to retrieve a snapshot given a timestamp.
      ```cpp
      // Return the timestamped snapshot corresponding to the given timestamp. If ts is
      // kMaxTxnTimestamp, then we return the latest timestamped snapshot if present.
      // Otherwise, we return the snapshot whose timestamp is equal to `ts`. If no
      // such snapshot exists, then we return null.
      std::shared_ptr<const Snapshot> TransactionDB::GetTimestampedSnapshot(TxnTimestamp ts) const;
      // Return the latest timestamped snapshot if present.
      std::shared_ptr<const Snapshot> TransactionDB::GetLatestTimestampedSnapshot() const;
      ```
      
      We also provide two additional APIs for stats collection and reporting purposes.
      
      ```cpp
      Status TransactionDB::GetAllTimestampedSnapshots(
          std::vector<std::shared_ptr<const Snapshot>>& snapshots) const;
      // Return timestamped snapshots whose timestamps fall in [ts_lb, ts_ub) and store them in `snapshots`.
      Status TransactionDB::GetTimestampedSnapshots(
          TxnTimestamp ts_lb,
          TxnTimestamp ts_ub,
          std::vector<std::shared_ptr<const Snapshot>>& snapshots) const;
      ```
      
      To prevent the number of timestamped snapshots from growing infinitely, we provide the following API to release
      timestamped snapshots whose timestamps are older than or equal to a given threshold.
      ```cpp
      void TransactionDB::ReleaseTimestampedSnapshotsOlderThan(TxnTimestamp ts);
      ```
      
      Before shutdown, RocksDB will release all timestamped snapshots.
      
      Comparison with user-defined timestamp and how they can be combined:
      User-defined timestamp persists every key with a timestamp, while timestamped snapshots maintain a volatile
      mapping between snapshots (sequence numbers) and timestamps.
      Different internal keys with the same user key but different timestamps will be treated as different by compaction,
      thus a newer version will not hide older versions (with smaller timestamps) unless they are eligible for garbage collection.
      In contrast, taking a timestamped snapshot at a certain sequence number and timestamp prevents all the keys visible in
      this snapshot from being dropped by compaction. Here, visible means (seq < snapshot and most recent).
      The timestamped snapshot supports the semantics of reading at an exact point in time.
      
      Timestamped snapshots can also be used with user-defined timestamp.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9879
      
      Test Plan:
      ```
      make check
      TEST_TMPDIR=/dev/shm make crash_test_with_txn
      ```
      
      Reviewed By: siying
      
      Differential Revision: D35783919
      
      Pulled By: riversand963
      
      fbshipit-source-id: 586ad905e169189e19d3bfc0cb0177a7239d1bd4
    • Enable SecondaryCache::CreateFromString to create sec cache based on the uri for CompressedSecondaryCache (#10132) · f4052d13
      gitbw95 committed
      
      Summary:
      Update SecondaryCache::CreateFromString and enable it to create sec cache based on the uri for CompressedSecondaryCache.
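
      A sketch of the API shape (the id string below is illustrative, not the authoritative spelling):

      ```cpp
      #include <memory>

      #include "rocksdb/convenience.h"
      #include "rocksdb/secondary_cache.h"

      std::shared_ptr<rocksdb::SecondaryCache> MakeSecCache() {
        rocksdb::ConfigOptions config_options;
        std::shared_ptr<rocksdb::SecondaryCache> sec_cache;
        // Hypothetical uri/id for a CompressedSecondaryCache:
        rocksdb::Status s = rocksdb::SecondaryCache::CreateFromString(
            config_options, "compressed_secondary_cache", &sec_cache);
        return s.ok() ? sec_cache : nullptr;
      }
      ```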
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10132
      
      Test Plan: Add unit tests.
      
      Reviewed By: anand1976
      
      Differential Revision: D36996997
      
      Pulled By: gitbw95
      
      fbshipit-source-id: 882ad563cff6d38b306a53426ad7e47273f34edc
  8. 10 June 2022, 5 commits
    • Fix bug with kHashSearch and changing prefix_extractor with SetOptions (#10128) · d3a3b021
      Peter Dillinger committed
      Summary:
      When opening an SST file created using index_type=kHashSearch,
      the *current* prefix_extractor would be saved, and used with hash index
      if the *new current* prefix_extractor at query time is compatible with
      the SST file. This is a problem if the prefix_extractor at SST open time
      is not compatible but SetOptions later changes (back) to one that is
      compatible.
      
      This change fixes that by using the known compatible (or missing) prefix
      extractor we save for use with prefix filtering. Detail: I have moved the
      InternalKeySliceTransform wrapper to avoid some indirection and remove
      unnecessary fields.
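
      A sketch of the mutation path in question (the extractor spec string follows the `rocksdb.FixedPrefix.N` naming; treat the exact spelling as illustrative):

      ```cpp
      #include "rocksdb/db.h"

      rocksdb::Status SwitchExtractor(rocksdb::DB* db) {
        // The SST file was written with index_type=kHashSearch under a
        // 4-byte extractor; if the DB was opened under an incompatible one,
        // switching back at runtime should again allow the hash index.
        return db->SetOptions({{"prefix_extractor", "rocksdb.FixedPrefix.4"}});
      }
      ```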
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10128
      
      Test Plan:
      expanded unit test (using some logic from https://github.com/facebook/rocksdb/issues/10122) that fails
      before fix and probably covers some other previously uncovered cases.
      
      Reviewed By: siying
      
      Differential Revision: D36955738
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 0c78a6b0d24054ef2f3cb237bf010c1c5589fb10
    • Return try again when full_history_ts_low is higher than requested ts (#10126) · 693dffd8
      Yu Zhang committed
      Summary:
      This PR helps handle the race condition mentioned in this comment thread: https://github.com/facebook/rocksdb/pull/7884#discussion_r572402281 In the case where the actual full_history_ts_low is higher than the user's requested ts, return a "try again" message so they don't have the misconception that data in [ts, full_history_ts_low) is kept.
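
      A sketch of the caller-side handling (assuming the update goes through `DB::IncreaseFullHistoryTsLow` and surfaces as a `TryAgain` status; timestamp encoding is elided):

      ```cpp
      #include <string>

      #include "rocksdb/db.h"

      void RaiseTsLow(rocksdb::DB* db, const std::string& ts) {
        rocksdb::Status s =
            db->IncreaseFullHistoryTsLow(db->DefaultColumnFamily(), ts);
        if (s.IsTryAgain()) {
          // full_history_ts_low already exceeds the requested ts; data in
          // [ts, full_history_ts_low) may no longer be fully retained.
        }
      }
      ```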
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10126
      
      Test Plan:
      ```
      $COMPILE_WITH_ASAN=1 make -j24 all
      $./db_with_timestamp_basic_test --gtest_filter=UpdateFullHistoryTsLowTest.ConcurrentUpdate
      $ make -j24 check
      ```
      
      Reviewed By: riversand963
      
      Differential Revision: D37055368
      
      Pulled By: jowlyzhang
      
      fbshipit-source-id: 787fd0984a246540fa03ac227b1d232590d27828
    • Fix fragile CacheTest::ApplyToAllEntriesDuringResize (#10145) · 5fa6ef7f
      Peter Dillinger committed
      Summary:
      As seen in https://github.com/facebook/rocksdb/issues/10137, simply churning the cache key hashes (e.g.
      by changing the raw cache keys) could trigger failure in this test, due
      to the possibility of some cache shard exceeding its portion of capacity
      and evicting entries. Updated the test to be less fragile by using
      greater margins, and added a pre-check for evictions, which doesn't
      manifest as a race condition, before the main check that can race.
      
      Also added stack trace handler to cache_test for debugging.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10145
      
      Test Plan:
      test thousands of iterations with gtest-parallel, including
      with changes in https://github.com/facebook/rocksdb/issues/10137 that were surfacing the problem. Pre-check
      without the fix would always fail with https://github.com/facebook/rocksdb/issues/10137
      
      Reviewed By: guidotag
      
      Differential Revision: D37058771
      
      Pulled By: pdillinger
      
      fbshipit-source-id: a7cf137967aef49c07ae9602d8523c63e7388fab
    • Update jemalloc version for platform009 (#10143) · 1a3e23a2
      Bo Wang committed
      Summary:
      Update jemalloc version for platform009. Current one is a bit old and the new one can bring some quick wins (e.g. new heap profiling features on devserver).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10143
      
      Test Plan:
      1. The building and testing on devserver should work.
      2. `db_bench` with `--dump_malloc_stats`
      `./db_bench --benchmarks=fillrandom --num=10000000 -db=/db_bench_1 `
      `./db_bench --benchmarks=overwrite,stats --num=10000000 -use_existing_db -duration=10 --benchmark_write_rate_limit=2000000 -db=/db_bench_1 `
      `./db_bench --benchmarks=seekrandom,stats --threads=16 --num=10000000 -use_existing_db -duration=120 --benchmark_write_rate_limit=52000000 -use_direct_reads --cache_size=520000000  --statistics -db=/db_bench_1 --dump_malloc_stats=true`
      
      Before this PR: jemalloc Version: "5.2.1-1303-g73b8faa7149e452f93e52005c89459da08343570"
      After this PR: jemalloc Version:
      
      Reviewed By: anand1976
      
      Differential Revision: D37049347
      
      Pulled By: gitbw95
      
      fbshipit-source-id: 3fcd82cca989047b4bbdfdebe5beba2c4c255ed8
    • Enable wal_compression in crash_tests (#10141) · ecfd4aef
      Akanksha Mahajan committed
      Summary:
      Same as title
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10141
      
      Test Plan:
      ```
      export CRASH_TEST_EXT_ARGS=" --wal_compression=zstd"
       make crash_test -j
      ```
      
      Reviewed By: riversand963
      
      Differential Revision: D37042810
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 53f0793d78241f1b5c954dcc808cb4c0a3e9172a
  9. 09 June 2022, 1 commit
    • Fix bug for WalManager with compressed WAL (#10130) · f85b31a2
      Akanksha Mahajan committed
      Summary:
      RocksDB uses WalManager to manage WAL files. In WalManager::ReadFirstLine(), the assumption is that reading the first record of a valid WAL file will return OK status and set the output sequence to a non-zero value.
      This assumption has been broken by WAL compression, which writes a `kSetCompressionType` record that is not associated with any sequence number.
      Consequently, WalManager::GetSortedWalsOfType() will skip these WALs and not return them to callers such as Checkpoint and Backup, causing those operations to fail.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10130
      
      Test Plan: newly added test
      
      Reviewed By: riversand963
      
      Differential Revision: D36985744
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: dfde7b3be68b6a30b75b49479779748eedf29f7f