1. 22 Nov 2022 (6 commits)
    • Post 7.9.0 release branch cut updates (#10974) · f4cfcfe8
      Committed by anand76
      Summary:
      Update HISTORY.md, version.h, and check_format_compatible.sh
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10974
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D41455289
      
      Pulled By: anand1976
      
      fbshipit-source-id: 99888ebcb9109e5ced80584a66b20123f8783c0b
    • Set correct temperature for range tombstone only file in penultimate level (#10972) · 6c5ec920
      Committed by Changyu Bi
      Summary:
Before this PR, if a range tombstone-only file was generated in the penultimate level, it was marked with the `last_level_temperature`. This PR fixes the issue by setting the correct temperature for such files.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10972
      
      Test Plan: added unit test for this scenario.
      
      Reviewed By: ajkr
      
      Differential Revision: D41449215
      
      Pulled By: cbi42
      
      fbshipit-source-id: 1e06b5ae3bc0183db2991a45965a9807a7e8be0c
    • Update HISTORY.md for 7.9.0 (#10973) · 3ff6da6b
      Committed by anand76
      Summary:
      Update HISTORY.md for 7.9.0 release.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10973
      
      Reviewed By: pdillinger
      
      Differential Revision: D41453720
      
      Pulled By: anand1976
      
      fbshipit-source-id: 47a23d4b6539ec6a9a09c9e69c026f7c8b10afa7
    • Add a SecondaryCache::InsertSaved() API, use in CacheDumper impl (#10945) · e079d562
      Committed by Peter Dillinger
      Summary:
      Can simplify some ugly code in cache_dump_load_impl.cc by having an API in SecondaryCache that can directly consume persisted data.
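The idea of an API that directly consumes persisted data can be sketched with toy types (these are not the real RocksDB signatures; `ToySecondaryCache`, its members, and the use of `std::string` in place of `Slice`/`Status` are illustrative assumptions):

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy stand-in for a secondary cache; the real RocksDB API differs.
struct ToySecondaryCache {
  std::map<std::string, std::string> store;

  // Hypothetical InsertSaved(): accept already-serialized ("saved") bytes
  // directly, so a cache dumper need not reconstruct in-memory objects
  // just to re-persist them.
  bool InsertSaved(const std::string& key, const std::string& saved) {
    store[key] = saved;  // store the serialized bytes as-is
    return true;
  }

  bool Lookup(const std::string& key, std::string* saved) const {
    auto it = store.find(key);
    if (it == store.end()) return false;
    *saved = it->second;  // hand back the serialized bytes
    return true;
  }
};
```

The benefit is that the dump/load path can move serialized blocks straight into the cache without a parse/re-serialize round trip.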
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10945
      
      Test Plan: existing tests for CacheDumper, added basic unit test
      
      Reviewed By: anand1976
      
      Differential Revision: D41231497
      
      Pulled By: pdillinger
      
      fbshipit-source-id: b8ec993ef7d3e7efd68aae8602fd3f858da58068
    • Fix CompactionIterator flag for penultimate level output (#10967) · 097f9f44
      Committed by Andrew Kryczka
      Summary:
      We were not resetting it in non-debug mode so it could be true once and then stay true for future keys where it should be false. This PR adds the reset logic.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10967
      
      Test Plan:
      - built `db_bench` with DEBUG_LEVEL=0
      - ran benchmark: `TEST_TMPDIR=/dev/shm/prefix ./db_bench -benchmarks=fillrandom -compaction_style=1 -preserve_internal_time_seconds=100 -preclude_last_level_data_seconds=10 -write_buffer_size=1048576 -target_file_size_base=1048576 -subcompactions=8 -duration=120`
      - compared "output_to_penultimate_level: X bytes + last: Y bytes" lines in LOG output
        - Before this fix, Y was always zero
        - After this fix, Y gradually increased throughout the benchmark
      
      Reviewed By: riversand963
      
      Differential Revision: D41417726
      
      Pulled By: ajkr
      
      fbshipit-source-id: ace1e9a289e751a5b0c2fbaa8addd4eda5525329
    • Observe and warn about misconfigured HyperClockCache (#10965) · 3182beef
      Committed by Peter Dillinger
      Summary:
Background. One of the core risks of choosing HyperClockCache is ending up with degraded performance if estimated_entry_charge is very significantly wrong. Too low leads to an under-utilized hash table, which wastes a bit of (tracked) memory and likely increases access times due to larger working set size (more TLB misses). Too high leads to a fully populated hash table (at some limit with reasonable lookup performance) and not being able to cache as many objects as the memory limit would allow. In either case, performance degradation is graceful/continuous but can be quite significant. For example, cutting block size in half without updating estimated_entry_charge could lead to a large portion of configured block cache memory (up to roughly 1/3) going unused.
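The arithmetic behind this risk can be sketched with a toy model (not the real HyperClockCache sizing logic): the slot count is provisioned from capacity / estimated_entry_charge, while the number of entries memory can actually hold depends on the true per-entry charge.

```cpp
#include <cassert>
#include <cstddef>

// Toy model: slots are provisioned from the charge estimate.
constexpr size_t ProvisionedSlots(size_t capacity, size_t estimated_charge) {
  return capacity / estimated_charge;
}

// Entries actually cacheable: limited by both memory and slot count.
constexpr size_t CacheableEntries(size_t capacity, size_t estimated_charge,
                                  size_t true_charge) {
  size_t by_memory = capacity / true_charge;
  size_t by_slots = ProvisionedSlots(capacity, estimated_charge);
  return by_memory < by_slots ? by_memory : by_slots;
}
```

In this toy model, an estimate that is twice the true charge halves the number of cacheable objects; the real cache's behavior is more nuanced, but the direction of the effect is the same.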
      
      Fix. This change adds a mechanism through which the DB periodically probes the block cache(s) for "problems" to report, and adds diagnostics to the HyperClockCache for bad estimated_entry_charge. The periodic probing is currently done with DumpStats / stats_dump_period_sec, and diagnostics reported to info_log (normally LOG file).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10965
      
      Test Plan:
      unit test included. Doesn't cover all the implemented subtleties of reporting, but ensures basics of when to report or not.
      
      Also manual testing with db_bench. Create db with
      ```
      ./db_bench --benchmarks=fillrandom,flush --num=3000000 --disable_wal=1
      ```
Then run with various block sizes (used as estimated_entry_charge) and check the LOG file for HyperClockCache reports:
      ```
      ./db_bench --use_existing_db --benchmarks=readrandom --num=3000000 --duration=20 --stats_dump_period_sec=8 --cache_type=hyper_clock_cache -block_size=XXXX
      ```
Warnings/errors appeared (or not) as expected.
      
      Reviewed By: anand1976
      
      Differential Revision: D41406932
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 4ca56162b73017e4b9cec2cad74466f49c27a0a7
  2. 18 Nov 2022 (2 commits)
  3. 17 Nov 2022 (2 commits)
  4. 16 Nov 2022 (2 commits)
    • Re-arrange cache.h to prepare for refactoring (#10942) · b55e7035
      Committed by Peter Dillinger
      Summary:
No material changes to code or comments, just re-arranging things to prepare for a big refactoring, making it easier to see what changed. Some specifics:
      * This groups things together in Cache in anticipation of secondary cache features being marked production-ready (vs. experimental).
      * CacheEntryRole will be needed in definition of class Cache, so that has been moved above it.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10942
      
      Test Plan: existing tests
      
      Reviewed By: anand1976
      
      Differential Revision: D41205509
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 3f2559ab1651c758918dc97056951fa2b5eb0348
    • Support using GetMergeOperands for verification with wide columns (#10952) · b644baa1
      Committed by Levi Tamasi
      Summary:
      With the recent changes, `GetMergeOperands` is now supported for wide-column entities as well, so we can use it for verification purposes in the non-batched stress tests.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10952
      
      Test Plan: Ran a simple non-batched ops blackbox crash test.
      
      Reviewed By: riversand963
      
      Differential Revision: D41292114
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 70b4c756a4a1fecb445c16c7096aad805a51203c
  5. 15 Nov 2022 (3 commits)
    • Fix db_stress failure in async_io in FilePrefetchBuffer (#10949) · 1562524e
      Committed by Akanksha Mahajan
      Summary:
      Fix db_stress failure in async_io in FilePrefetchBuffer.
      
From the logs, the assertion fired when
- prev_offset_ == offset but somehow prev_len_ != 0 and explicit_prefetch_submitted_ == true. That scenario arises when we submit an async prefetch request to the buffer during a seek, but a second seek then finds the data in the cache (it's possible a read by another thread loaded the block into the cache in the meantime). prev_offset_ and prev_len_ got updated, but we were not setting explicit_prefetch_submitted_ = false, because of which the buffers were getting out of sync.
      
      Particular assertion example:
      ```
      prev_offset: 0, prev_len_: 8097 , offset: 0, length: 8097, actual_length: 8097 , actual_offset: 0 ,
      curr_: 0, bufs_[curr_].offset_: 4096 ,bufs_[curr_].CurrentSize(): 48541 , async_len_to_read: 278528, bufs_[curr_].async_in_progress_: false
      second: 1, bufs_[second].offset_: 282624 ,bufs_[second].CurrentSize(): 0, async_len_to_read: 262144 ,bufs_[second].async_in_progress_: true ,
      explicit_prefetch_submitted_: true , copy_to_third_buffer: false
      ```
As we can see, curr_ was expected to read 278528 bytes but read only 48541, and the buffers are out of sync.
Also, `explicit_prefetch_submitted_` is set to true but prev_len_ is not 0.
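The fix described above amounts to clearing the tracking flag whenever a seek is served without consuming the submitted prefetch. A minimal sketch of that invariant (hypothetical names modeled on the ones in the log, not the actual FilePrefetchBuffer code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy prefetch tracker: explicit_prefetch_submitted_ must be cleared
// whenever a seek is satisfied from the cache, otherwise a later seek
// sees a stale "async prefetch pending" state and the buffers go out
// of sync.
struct ToyPrefetchTracker {
  uint64_t prev_offset_ = 0;
  size_t prev_len_ = 0;
  bool explicit_prefetch_submitted_ = false;

  void SubmitPrefetch(uint64_t offset, size_t len) {
    prev_offset_ = offset;
    prev_len_ = len;
    explicit_prefetch_submitted_ = true;
  }

  // Seek that finds the data in the cache: update bookkeeping AND
  // clear the flag (the missing step that caused the assertion).
  void SeekServedFromCache(uint64_t offset, size_t len) {
    prev_offset_ = offset;
    prev_len_ = len;
    explicit_prefetch_submitted_ = false;  // the fix
  }
};
```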
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10949
      
      Test Plan:
- Ran db_bench for regression testing to make sure there is no regression;
- Ran db_stress, which fails without this fix;
- Ran build-linux-mini-crashtest 7-8 times locally and on CircleCI
      
      Reviewed By: anand1976
      
      Differential Revision: D41257786
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 1d100f94f8c06bbbe4cc76ca27f1bbc820c2494f
    • Fix broken dependency: update zlib from 1.2.12 to 1.2.13 (#10833) · 0993c922
      Committed by xiaochenfan
      Summary:
      zlib(https://zlib.net/) has released v1.2.13.
      
1.2.12 is no longer available for download, so the RocksDB Makefile breaks because it can't find the source .tar.gz.
      
      https://nvd.nist.gov/vuln/detail/CVE-2022-37434
      
This PR updates the version number and the checksum of the new .tar.gz file (1.2.13).
      
      Fixes https://github.com/facebook/rocksdb/issues/10876
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10833
      
      Reviewed By: hx235
      
      Differential Revision: D40575954
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3e560e453ddf58d045214fc4e64f83bef91f22e5
    • Update unit test to avoid timeout (#10950) · 85154375
      Committed by akankshamahajan
      Summary:
      Update unit test to avoid timeout
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10950
      
      Reviewed By: hx235
      
      Differential Revision: D41258892
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: cbfe94da63e9e54544a307845deb79ba42458301
  6. 14 Nov 2022 (1 commit)
  7. 12 Nov 2022 (3 commits)
    • Don't attempt to use SecondaryCache on block_cache_compressed (#10944) · f321e8fc
      Committed by Peter Dillinger
      Summary:
      Compressed block cache depends on reading the block compression marker beyond the payload block size. Only the payload bytes were being saved and loaded from SecondaryCache -> boom!
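The underlying pitfall can be illustrated with a toy block format (simplified by assumption; the real RocksDB block trailer also carries a checksum): a compressed block is payload bytes followed by a compression-type marker, so persisting only the payload drops the marker.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Toy on-disk block: payload followed by a one-byte compression marker.
std::string SerializeBlock(const std::string& payload, char compression_type) {
  return payload + compression_type;
}

// Correct behavior saves payload plus marker; the buggy path saved only
// the payload bytes, so the compression type was lost on reload.
std::string SaveForSecondaryCache(const std::string& block,
                                  size_t payload_size, bool include_marker) {
  return block.substr(0, payload_size + (include_marker ? 1 : 0));
}
```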
      
      This removes some unnecessary code attempting to combine these two competing features. Note that BlockContents was previously used for block-based filter in block cache, but that support has been removed.
      
      Also marking block_cache_compressed as deprecated in this commit as we expect it to be replaced with SecondaryCache.
      
      This problem was discovered during refactoring but didn't want to combine bug fix with that refactoring.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10944
      
      Test Plan: test added that fails on base revision (at least with ASAN)
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D41205578
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 1b29d36c7a6552355ac6511fcdc67038ef4af29f
    • Support Merge for wide-column entities in the compaction logic (#10946) · 5e894705
      Committed by Levi Tamasi
      Summary:
      The patch extends the compaction logic to handle `Merge`s in conjunction with wide-column entities. As usual, the merge operation is applied to the anonymous default column, and any other columns are unaffected.
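The column semantics can be sketched with a toy wide-column entity (the empty-string column name standing in for the anonymous default column, and string-append standing in for the merge operator, both by assumption): the merge result replaces the default column and leaves other columns untouched.

```cpp
#include <cassert>
#include <map>
#include <string>

using Columns = std::map<std::string, std::string>;

// Toy string-append merge applied to the default (empty-named) column,
// mirroring the described semantics: other columns are unaffected.
Columns ApplyMergeToDefaultColumn(Columns entity, const std::string& operand) {
  std::string& def = entity[""];  // "" stands in for the anonymous column
  def = def.empty() ? operand : def + "." + operand;
  return entity;
}
```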
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10946
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D41233722
      
      Pulled By: ltamasi
      
      fbshipit-source-id: dfd9b1362222f01bafcecb139eb48480eb279fed
    • Fix async_io regression in scans (#10939) · d1aca4a5
      Committed by akankshamahajan
      Summary:
      Fix async_io regression in scans due to incorrect check which was causing the valid data in buffer to be cleared during seek.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10939
      
      Test Plan:
- Stress tests: `export CRASH_TEST_EXT_ARGS="--async_io=1"` then `make crash_test -j32`
- Ran the db_bench command that caught the regression:
      ./db_bench --db=/rocksdb_async_io_testing/prefix_scan --disable_wal=1 --use_existing_db=true --benchmarks="seekrandom" -key_size=32 -value_size=512 -num=50000000 -use_direct_reads=false -seek_nexts=963 -duration=30 -ops_between_duration_checks=1 --async_io=true --compaction_readahead_size=4194304 --log_readahead_size=0 --blob_compaction_readahead_size=0 --initial_auto_readahead_size=65536 --num_file_reads_for_auto_readahead=0 --max_auto_readahead_size=524288
      
      seekrandom   :    3777.415 micros/op 264 ops/sec 30.000 seconds 7942 operations;  132.3 MB/s (7942 of 7942 found)
      
      Reviewed By: anand1976
      
      Differential Revision: D41173899
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 2d75b06457d65b1851c92382565d9c3fac329dfe
  8. 11 Nov 2022 (2 commits)
    • Support Merge with wide-column entities in iterator (#10941) · dbc4101b
      Committed by Levi Tamasi
      Summary:
      The patch adds `Merge` support for wide-column entities in `DBIter`. As before, the `Merge` operation is applied to the default column of the entity; any other columns are unchanged. As a small cleanup, the PR also changes the signature of `DBIter::Merge` to simply return a boolean instead of the `Merge` operation's `Status` since the actual `Status` is already stored in a member variable.
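The signature cleanup follows a common pattern: when an object already keeps a status member, internal helpers can return a plain bool for control flow and let callers inspect the stored status. A generic sketch (not the actual DBIter code; the class, its members, and the string-based status are illustrative stand-ins):

```cpp
#include <cassert>
#include <string>

// Generic sketch: the helper returns bool for flow control; failure
// details live in the member status, as in the DBIter::Merge change.
class ToyIter {
 public:
  bool Merge(const std::string& operand) {
    if (operand.empty()) {
      status_ = "Corruption: empty merge operand";  // real code uses Status
      return false;
    }
    value_ += operand;
    return true;
  }
  const std::string& status() const { return status_; }
  const std::string& value() const { return value_; }

 private:
  std::string value_;
  std::string status_ = "OK";
};
```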
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10941
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D41195471
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 362cf555897296e252c3de5ddfbd569ef34f85ef
    • Refactor MergeHelper::MergeUntil a bit (#10943) · 9460d4b7
      Committed by Levi Tamasi
      Summary:
      The patch untangles some nested ifs in `MergeHelper::MergeUntil`. This will come in handy when extending the compaction logic to support `Merge` for wide-column entities, and also enables us to eliminate some repeated branching on value type and to decrease the scope of some variables.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10943
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D41201946
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 890bd3d4e31cdccadca614489a94686d76485ba9
  9. 10 Nov 2022 (1 commit)
    • Revisit the interface of MergeHelper::TimedFullMerge(WithEntity) (#10932) · 2ea10952
      Committed by Levi Tamasi
      Summary:
      The patch refines/reworks `MergeHelper::TimedFullMerge(WithEntity)`
      a bit in two ways. First, it eliminates the recently introduced `TimedFullMerge`
      overload, which makes the responsibilities clearer by making sure the query
      result (`value` for `Get`, `columns` for `GetEntity`) is set uniformly in
      `SaveValue` and `GetContext`. Second, it changes the interface of
      `TimedFullMergeWithEntity` so it exposes its result in a serialized form; this
      is a more decoupled design which will come in handy when adding support
      for `Merge` with wide-column entities to `DBIter`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10932
      
      Test Plan: `make check`
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D41129399
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 69d8da358c77d4fc7e8c40f4dafc2c129a710677
  10. 09 Nov 2022 (2 commits)
  11. 08 Nov 2022 (3 commits)
  12. 05 Nov 2022 (2 commits)
  13. 04 Nov 2022 (1 commit)
  14. 03 Nov 2022 (8 commits)
    • Support Merge for wide-column entities during point lookups (#10916) · 941d8347
      Committed by Levi Tamasi
      Summary:
      The patch adds `Merge` support for wide-column entities to the point lookup
      APIs, i.e. `Get`, `MultiGet`, `GetEntity`, and `GetMergeOperands`. (I plan to
      update the iterator and compaction logic in separate PRs.) In terms of semantics,
      the `Merge` operation is applied to the default (anonymous) column; any other
      columns in the entity are unaffected.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10916
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D40962311
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 244bc9d172be1af2f204796b2f89104e4d2fa373
    • Refactor (Hyper)ClockCache code (#10887) · cc8c8f69
      Committed by Peter Dillinger
      Summary:
      For clean-up and in preparation for some other anticipated changes, including
      * A new dynamically-scaling variant of HyperClockCache
      * SecondaryCache support for HyperClockCache
      
      This change does some refactoring for current and future code sharing and reusability. (Including follow-up on https://github.com/facebook/rocksdb/issues/10843)
      
      ## clock_cache.h
      * TBD whether new variant will be a HyperClockCache or use some other name, so namespace is just clock_cache for the family of structures.
      * A number of helper functions introduced and used.
      * Pre-emptively split ClockHandle (shared among lock-free clock cache variants) and HandleImpl (specific to a kind of Table), and introduce template to plug new Table implementation into ClockCacheShard.
      
      ## clock_cache.cc
      * Mostly using helper functions. Some things like `Rollback()` and `FreeDataMarkEmpty()` were not combined because `Rollback()` is Table-specific while `FreeDataMarkEmpty()` can be used with different table implementations.
      * Performance testing indicated that despite more opportunities for parallelism, making a local copy of handle data for processing after marking an entry empty was slower than doing that processing before marking the entry empty (but after marking it "under construction"), thus avoiding a few words of copying data. At least for now, this answers the "TODO? Delay freeing?" questions (no).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10887
      
      Test Plan:
      fixed a unit testing gap; other minor test updates for refactoring
      
      No functionality change
      
      ## Performance
      Same setup as https://github.com/facebook/rocksdb/issues/10801:
      
      Before: `readrandom [AVG 81 runs] : 627992 (± 5124) ops/sec`
      After: `readrandom [AVG 81 runs] : 637512 (± 4866) ops/sec`
      
I've been getting some inconsistent results on restarts, like the system is not being fair to the two processes, so I'm not sure there's a real difference.
      
      Reviewed By: anand1976
      
      Differential Revision: D40959240
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 0a8f3646b3bdb5bc7aaad60b26790b0779189949
    • Add rocksdb_backup_restore_example to examples/.gitignore (#10825) · 0d5dc5fd
      Committed by Tal Zussman
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10825
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D40419234
      
      Pulled By: ajkr
      
      fbshipit-source-id: 2d700154eb5b2943d10a0f944f2b414ece353e4a
    • Reduce access to atomic variables in a test (#10909) · 0547cecb
      Committed by Yanqin Jin
      Summary:
With TSAN builds on CircleCI (see mini-tsan in .circleci/config),
`SeqAdvanceConcurrentTest.SeqAdvanceConcurrent` sometimes gets stuck when an experimental feature called
"unordered write" is enabled. The stack trace is the following:
      ```
      Thread 7 (Thread 0x7f2284a1c700 (LWP 481523) "write_prepared_"):
      #0  0x00000000004fa3f5 in __tsan_atomic64_load () at ./db/merge_context.h:15
#1  0x00000000005e5942 in std::__atomic_base<unsigned long>::load (this=0x7b74000012f8, __m=std::memory_order_seq_cst) at /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/atomic_base.h:481
#2  std::__atomic_base<unsigned long>::operator unsigned long (this=0x7b74000012f8) at /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/atomic_base.h:341
#3  0x00000000005bf001 in rocksdb::SeqAdvanceConcurrentTest_SeqAdvanceConcurrent_Test::TestBody()::$_9::operator()(void*) const (this=0x7b14000085e8) at utilities/transactions/write_prepared_transaction_test.cc:1702
      
      Thread 6 (Thread 0x7f228421b700 (LWP 481521) "write_prepared_"):
      #0  0x000000000052178c in __tsan::MetaMap::GetAndLock(__tsan::ThreadState*, unsigned long, unsigned long, bool, bool) () at ./db/merge_context.h:15
#1  0x00000000004fa48e in __tsan_atomic64_load () at ./db/merge_context.h:15
#2  0x00000000005e5942 in std::__atomic_base<unsigned long>::load (this=0x7b74000012f8, __m=std::memory_order_seq_cst) at /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/atomic_base.h:481
#3  std::__atomic_base<unsigned long>::operator unsigned long (this=0x7b74000012f8) at /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/atomic_base.h:341
#4  0x00000000005bf001 in rocksdb::SeqAdvanceConcurrentTest_SeqAdvanceConcurrent_Test::TestBody()::$_9::operator()(void*) const (this=0x7b14000085e8) at utilities/transactions/write_prepared_transaction_test.cc:1702
      ```
      
      This is problematic and suspicious. Two threads will get stuck in the same place trying to load from an atomic variable.
      https://github.com/facebook/rocksdb/blob/7.8.fb/utilities/transactions/write_prepared_transaction_test.cc#L1694:L1707. Not sure why two threads can reach the same point.
      
      The stack trace shows that there may be a deadlock, since the two threads are on the same write thread (one is doing Prepare, while the other is trying to commit).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10909
      
      Test Plan:
      On CircleCI mini-tsan, apply a patch first so that we have a higher chance of hitting the same problematic situation,
      ```
       diff --git a/utilities/transactions/write_prepared_transaction_test.cc b/utilities/transactions/write_prepared_transaction_test.cc
      index 4bc1f3744..bd5dc4924 100644
       --- a/utilities/transactions/write_prepared_transaction_test.cc
      +++ b/utilities/transactions/write_prepared_transaction_test.cc
      @@ -1714,13 +1714,13 @@ TEST_P(SeqAdvanceConcurrentTest, SeqAdvanceConcurrent) {
             size_t d = (n % base[bi + 1]) / base[bi];
             switch (d) {
               case 0:
      -          threads.emplace_back(txn_t0, bi);
      +          threads.emplace_back(txn_t3, bi);
                 break;
               case 1:
      -          threads.emplace_back(txn_t1, bi);
      +          threads.emplace_back(txn_t3, bi);
                 break;
               case 2:
      -          threads.emplace_back(txn_t2, bi);
      +          threads.emplace_back(txn_t3, bi);
                 break;
               case 3:
                 threads.emplace_back(txn_t3, bi);
      ```
      then build and run tests
      ```
      COMPILE_WITH_TSAN=1 CC=clang-13 CXX=clang++-13 ROCKSDB_DISABLE_ALIGNED_NEW=1 USE_CLANG=1 make V=1 -j32 check
      gtest-parallel -r 100 ./write_prepared_transaction_test --gtest_filter=TwoWriteQueues/SeqAdvanceConcurrentTest.SeqAdvanceConcurrent/19
      ```
In the above, `SeqAdvanceConcurrent/19` is one of tests 10 to 19, which correspond to unordered write, in which Prepare() and Commit() can both enter the same write thread.
      Before this PR, there is a high chance of hitting the deadlock. With this PR, no deadlock has been encountered so far.
      
      Reviewed By: ltamasi
      
      Differential Revision: D40869387
      
      Pulled By: riversand963
      
      fbshipit-source-id: 81e82a70c263e4f3417597a201b081ee54f1deab
    • Added placeholders for MADV defines (#10881) · d80baa13
      Committed by Brord van Wierst
      Summary:
Cross-compiling RocksDB with Rust bindings for Android has led to an error since 7.4.0 (the inclusion of madvise),
due to missing placeholders for non-Linux platforms.
      
      This PR adds the missing placeholders.
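The usual shape of such placeholders is to define fallback constants when the platform headers lack them, so callers compile everywhere and the advice is simply ignored where unsupported (a sketch of the pattern; the exact names, values, and wrapper RocksDB uses may differ):

```cpp
#include <cassert>
#include <cstddef>

// On platforms without the <sys/mman.h> advice constants, provide
// placeholder values so code referencing them still compiles.
#ifndef MADV_NORMAL
#define MADV_NORMAL 0
#endif
#ifndef MADV_RANDOM
#define MADV_RANDOM 1
#endif

// Hypothetical wrapper: on platforms without madvise support this is a
// no-op; real code would call madvise() where it is available.
inline int PortableMadvise(void* /*addr*/, size_t /*len*/, int /*advice*/) {
  return 0;  // pretend success; the advice is only a hint anyway
}
```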
      
      See https://github.com/rust-rocksdb/rust-rocksdb/issues/697 for the specific error thrown.
      
      I have just completed the CLA :)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10881
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D40726103
      
      Pulled By: ajkr
      
      fbshipit-source-id: 6b391636a74ef7e20d0daf47d332ddf0c14d5c34
    • Improve musl libc detection and provide an option for the user to override (#10889) · 781a3874
      Committed by Adam Retter
      Summary:
      The user may override the detection of whether to use GNU libc (the default) or musl libc by setting the environment variable: `ROCKSDB_MUSL_LIBC=true`.
      
      Builds upon and supersedes: https://github.com/facebook/rocksdb/pull/9977
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10889
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D40788431
      
      Pulled By: ajkr
      
      fbshipit-source-id: ef594d973fc14cbadf28bfb38434231a18a2107c
    • Add OpenBSD/arm64 support for detection of CRC32 and PMULL (#10902) · 4a6906e2
      Committed by Brad Smith
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10902
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D40839659
      
      Pulled By: ajkr
      
      fbshipit-source-id: 06be5919622f8cce1fce1097c5e654900bf7f8fb
    • Ran clang-format on db/ directory (#10910) · 5cf6ab6f
      Committed by Andrew Kryczka
      Summary:
      Ran `find ./db/ -type f | xargs clang-format -i`. Excluded minor changes it tried to make on db/db_impl/. Everything else it changed was directly under db/ directory. Included minor manual touchups mentioned in PR commit history.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10910
      
      Reviewed By: riversand963
      
      Differential Revision: D40880683
      
      Pulled By: ajkr
      
      fbshipit-source-id: cfe26cda05b3fb9a72e3cb82c286e21d8c5c4174
  15. 02 Nov 2022 (1 commit)
    • Fix async_io failures in case there is error in reading data (#10890) · ff9ad2c3
      Committed by akankshamahajan
      Summary:
Fix a memory corruption error in scans when async_io is enabled. The corruption happened when data overlapped between two buffers: if an IOError occurred while reading, it left one buffer empty, while the other buffer, with an async read already in progress, went to read again, causing the error.
Fix: added a check to abort the IO in the second buffer if curr_ became empty.
      
      This PR also fixes db_stress failures which happened when buffers are not aligned.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10890
      
      Test Plan:
      - Ran make crash_test -j32 with async_io enabled.
      -  Ran benchmarks to make sure there is no regression.
      
      Reviewed By: anand1976
      
      Differential Revision: D40881731
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 39fcf2134c7b1bbb08415ede3e1ef261ac2dbc58
  16. 01 Nov 2022 (1 commit)
    • Basic Support for Merge with user-defined timestamp (#10819) · 7d26e4c5
      Committed by Yanqin Jin
      Summary:
      This PR implements the originally disabled `Merge()` APIs when user-defined timestamp is enabled.
      
      Simplest usage:
      ```cpp
      // assume string append merge op is used with '.' as delimiter.
      // ts1 < ts2
      db->Put(WriteOptions(), "key", ts1, "v0");
      db->Merge(WriteOptions(), "key", ts2, "1");
      ReadOptions ro;
      ro.timestamp = &ts2;
      db->Get(ro, "key", &value);
      ASSERT_EQ("v0.1", value);
      ```
      
      Some code comments are added for clarity.
      
      Note: support for timestamp in `DB::GetMergeOperands()` will be done in a follow-up PR.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10819
      
      Test Plan: make check
      
      Reviewed By: ltamasi
      
      Differential Revision: D40603195
      
      Pulled By: riversand963
      
      fbshipit-source-id: f96d6f183258f3392d80377025529f7660503013