1. 01 Sep 2022, 5 commits
    • Validate option `memtable_protection_bytes_per_key` (#10621) · 3a75219e
      Committed by Changyu Bi
      Summary:
      sanity check value for option `memtable_protection_bytes_per_key` in `ColumnFamilyData::ValidateOptions()`.
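      The shape of such a sanity check can be sketched as follows (a minimal standalone illustration; the function name and the accepted value set {0, 1, 2, 4, 8} are assumptions here, not the actual `ColumnFamilyData::ValidateOptions()` code):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>

      // Hypothetical validation helper: per-key protection info is assumed to
      // support only 0 (disabled), 1, 2, 4, or 8 bytes per key.
      bool ValidateMemtableProtectionBytesPerKey(uint32_t v, std::string* err) {
        if (v != 0 && v != 1 && v != 2 && v != 4 && v != 8) {
          *err = "memtable_protection_bytes_per_key must be 0, 1, 2, 4, or 8";
          return false;
        }
        return true;
      }

      int main() {
        std::string err;
        assert(ValidateMemtableProtectionBytesPerKey(8, &err));
        assert(!ValidateMemtableProtectionBytesPerKey(3, &err));
        return 0;
      }
      ```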
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10621
      
      Test Plan: `make check`, added unit test in ColumnFamilyTest.
      
      Reviewed By: ajkr
      
      Differential Revision: D39180133
      
      Pulled By: cbi42
      
      fbshipit-source-id: 009e0da3ccb332d1c9e14d20193304610bd4eb8a
    • Reenable sync_fault_injection in crash test (#10172) · ccf82249
      Committed by Andrew Kryczka
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10172
      
      Reviewed By: riversand963
      
      Differential Revision: D37164671
      
      Pulled By: ajkr
      
      fbshipit-source-id: 40eb919b8dc261d502510e878ee8ac7874ab35d0
    • Disable use_txn=1 with sync_fault_injection=1 in db_crashtest.py (#10605) · e7525a1f
      Committed by Hui Xiao
      Summary:
      **Context/Summary:**
      `ExpectedState` is not aware of transaction-related concepts, so `use_txn=1` is not compatible with `sync_fault_injection=1`. Therefore this PR disables this combination until we expand our correctness testing to transaction-related features.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10605
      
      Test Plan:
      - Run the following commands to verify `--use_txn` is correctly sanitized
         - `python3 ./tools/db_crashtest.py blackbox --use_txn=1 --sync_fault_injection=1 `
         - `python3 ./tools/db_crashtest.py blackbox --use_txn=0 --sync_fault_injection=1 `
      
      Reviewed By: ajkr
      
      Differential Revision: D39121287
      
      Pulled By: hx235
      
      fbshipit-source-id: 7d5d6dd32479ea1c07df4f38322650f3a60def9c
    • Option migration tool to break down files for FIFO compaction (#10600) · 95090035
      Committed by sdong
      Summary:
      Right now, when the option migration tool migrates to FIFO compaction, it compacts all the data into one single SST file and moves it to L0. Although this creates a valid LSM-tree for FIFO, whenever FIFO needs to delete any data, that one giant file is deleted, which might leave the DB almost empty. There is no good solution for this, because we usually don't have enough information to reconstruct the FIFO LSM-tree. This change switches to a solution that compromises the FIFO condition; we hope the result is more usable.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10600
      
      Test Plan: Add unit tests for that.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D39106424
      
      fbshipit-source-id: bdfd852c3b343373765b8d9716fefc08fd27145c
    • Adjust the blob cache printout in db_bench/db_stress (#10614) · 228f2c5b
      Committed by Levi Tamasi
      Summary:
      Currently, `db_bench` and `db_stress` print the blob cache options even if
      a shared block/blob cache is configured, i.e. when they are not actually
      in effect. The patch changes this so they are only printed when a separate blob
      cache is used.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10614
      
      Test Plan: Tested manually using `db_bench` and `db_stress`.
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D39144603
      
      Pulled By: ltamasi
      
      fbshipit-source-id: f714304c5d46186f8514746c27ee6f52aa3e4af8
  2. 31 Aug 2022, 2 commits
    • Support using cache warming with the secondary blob cache (#10603) · 01e88dfe
      Committed by Levi Tamasi
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10603
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D39117952
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 5e956fa2fc18974876a5c87686acb50718e0edb7
    • Add missing mutex when reading from shared variable bg_bottom_compaction_scheduled_, bg_compaction_scheduled_ (#10610) · 8a85946f
      Committed by Hui Xiao
      
      Summary:
      **Context/Summary:**
      According to https://github.com/facebook/rocksdb/blob/7.6.fb/db/compaction/compaction_job.h#L328-L332, any read of `*bg_compaction_scheduled_` or `*bg_bottom_compaction_scheduled_` should be protected by the mutex, which was not the case for some assert statements. This leads to a data race that can be reproduced with the following command:
      
      ```
      db=/dev/shm/rocksdb_crashtest_blackbox
      exp=/dev/shm/rocksdb_crashtest_expected
      rm -rf $db $exp
      mkdir -p $exp
      
      ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=$db --delpercent=10 --delrangepercent=0 --destroy_db_initially=1 --expected_values_dir=$exp --iterpercent=0 --key_len_percent_dist=1,30,69 --max_key=1000000 --max_key_len=3 --prefixpercent=0 --readpercent=0 --reopen=0 --ops_per_thread=100000000 --value_size_mult=32 --writepercent=90  --compaction_pri=4 --use_txn=1 --level_compaction_dynamic_level_bytes=True  --compaction_ttl=0  --compact_files_one_in=1000000 --compact_range_one_in=1000000 --value_size_mult=32 --verify_db_one_in=1000  --write_buffer_size=65536 --mark_for_compaction_one_file_in=10 --max_background_compactions=20 --max_key=25000000 --max_key_len=3 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=2097152 --target_file_size_base=2097152 --target_file_size_multiplier=2
      ```
      ```
      WARNING: ThreadSanitizer: data race (pid=73424)
        Read of size 4 at 0x7b8c0000151c by thread T13:
          #0 ReleaseSubcompactionResources internal_repo_rocksdb/repo/db/compaction/compaction_job.cc:390 (db_stress+0x630aa3)
          https://github.com/facebook/rocksdb/issues/1 rocksdb::CompactionJob::Run() internal_repo_rocksdb/repo/db/compaction/compaction_job.cc:741 (db_stress+0x630aa3)
          https://github.com/facebook/rocksdb/issues/2 rocksdb::DBImpl::BackgroundCompaction(bool*, rocksdb::JobContext*, rocksdb::LogBuffer*, rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority) internal_repo_rocksdb/repo/db/db_impl/db_impl_compaction_flush.cc:3436 (db_stress+0x60b2cc)
          https://github.com/facebook/rocksdb/issues/3 rocksdb::DBImpl::BackgroundCallCompaction(rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority) internal_repo_rocksdb/repo/db/db_impl/db_impl_compaction_flush.cc:2950 (db_stress+0x606d79)
          https://github.com/facebook/rocksdb/issues/4 rocksdb::DBImpl::BGWorkCompaction(void*) internal_repo_rocksdb/repo/db/db_impl/db_impl_compaction_flush.cc:2693 (db_stress+0x60356a)
      
        Previous write of size 4 at 0x7b8c0000151c by thread T12 (mutexes: write M438955329917552448):
          #0 rocksdb::DBImpl::BackgroundCallCompaction(rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority) internal_repo_rocksdb/repo/db/db_impl/db_impl_compaction_flush.cc:3018 (db_stress+0x6072a1)
          https://github.com/facebook/rocksdb/issues/1 rocksdb::DBImpl::BGWorkCompaction(void*) internal_repo_rocksdb/repo/db/db_impl/db_impl_compaction_flush.cc:2693 (db_stress+0x60356a)
      
      Location is heap block of size 6720 at 0x7b8c00000000 allocated by main thread:
          #0 operator new(unsigned long, std::align_val_t) <null> (db_stress+0xbab5bb)
          https://github.com/facebook/rocksdb/issues/1 rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool) internal_repo_rocksdb/repo/db/db_impl/db_impl_open.cc:1811 (db_stress+0x69769a)
          https://github.com/facebook/rocksdb/issues/2 rocksdb::TransactionDB::Open(rocksdb::DBOptions const&, rocksdb::TransactionDBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::TransactionDB**) internal_repo_rocksdb/repo/utilities/transactions/pessimistic_transaction_db.cc:258 (db_stress+0x8ae1f4)
          https://github.com/facebook/rocksdb/issues/3 rocksdb::StressTest::Open(rocksdb::SharedState*) internal_repo_rocksdb/repo/db_stress_tool/db_stress_test_base.cc:2611 (db_stress+0x32b927)
          https://github.com/facebook/rocksdb/issues/4 rocksdb::StressTest::InitDb(rocksdb::SharedState*) internal_repo_rocksdb/repo/db_stress_tool/db_stress_test_base.cc:290 (db_stress+0x34712c)
      ```
      This PR adds all the missing mutexes that should have been in place.
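      The guarded-read pattern the PR enforces can be sketched as follows (a minimal standalone illustration; `Scheduler` and its members are hypothetical stand-ins, not DBImpl's actual members):

      ```cpp
      #include <cassert>
      #include <mutex>

      // Even debug-only reads of a counter shared with other threads must
      // hold the mutex that guards its writes; otherwise TSAN (correctly)
      // reports a data race like the one above.
      struct Scheduler {
        std::mutex mu;
        int bg_compaction_scheduled = 0;

        void OnCompactionScheduled() {
          std::lock_guard<std::mutex> lock(mu);
          ++bg_compaction_scheduled;
        }

        void AssertScheduledNonNegative() {
          std::lock_guard<std::mutex> lock(mu);  // the lock missing in the buggy asserts
          assert(bg_compaction_scheduled >= 0);
        }
      };

      int main() {
        Scheduler s;
        s.OnCompactionScheduled();
        s.AssertScheduledNonNegative();
        return 0;
      }
      ```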
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10610
      
      Test Plan:
      - Past repro command
      - Existing CI
      
      Reviewed By: riversand963
      
      Differential Revision: D39143016
      
      Pulled By: hx235
      
      fbshipit-source-id: 51dd4db55ad306f3dbda5d0dd54d6f2513cf70f2
  3. 30 Aug 2022, 9 commits
    • Fix an import issue in fbcode. (#10604) · 6cd81330
      Committed by gitbw95
      Summary:
      This should fix an import issue detected in Meta internal tests.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10604
      
      Test Plan: Unit Tests.
      
      Reviewed By: hx235
      
      Differential Revision: D39120414
      
      Pulled By: gitbw95
      
      fbshipit-source-id: dbd016d7f47b9f54aab5ea61e8d3cd79734f46af
    • Use std::make_unique when possible (#10578) · 7c0838e6
      Committed by Yanqin Jin
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10578
      
      Test Plan: make check
      
      Reviewed By: ajkr
      
      Differential Revision: D39064748
      
      Pulled By: riversand963
      
      fbshipit-source-id: c7c135b7b713608edb14614846050ece6d4cc59d
    • Sync dir containing CURRENT after RenameFile on CURRENT as much as possible (#10573) · e484b81e
      Committed by Hui Xiao
      Summary:
      **Context:**
      Below crash test revealed a bug: the directory containing the CURRENT file (`dir_contains_current_file` for short below) was not always synced after a new CURRENT file was created via `RenameFile` as part of the creation.
      
      This bug exposes the risk that such an un-synced directory containing the updated CURRENT cannot survive a host crash (e.g., power loss) and may become corrupted. That would then be followed by a recovery from a corrupted CURRENT, which we don't want.
      
      The root cause is that a nullptr `FSDirectory* dir_contains_current_file` sometimes gets passed down to `SetCurrentFile()`, in which case `dir_contains_current_file->FSDirectory::FsyncWithDirOptions()` is skipped (which would otherwise internally call `Env/FS::SyncDir()`).
      ```
      ./db_stress --acquire_snapshot_one_in=10000 --adaptive_readahead=1 --allow_data_in_errors=True --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=100000 --batch_protection_bytes_per_key=8 --block_size=16384 --bloom_bits=134.8015470676662 --bottommost_compression_type=disable --cache_size=8388608 --checkpoint_one_in=1000000 --checksum_type=kCRC32c --clear_column_family_one_in=0 --compact_files_one_in=1000000 --compact_range_one_in=1000000 --compaction_pri=2 --compaction_ttl=100 --compression_max_dict_buffer_bytes=511 --compression_max_dict_bytes=16384 --compression_type=zstd --compression_use_zstd_dict_trainer=1 --compression_zstd_max_train_bytes=65536 --continuous_verification_interval=0 --data_block_index_type=0 --db=$db --db_write_buffer_size=1048576 --delpercent=5 --delrangepercent=0 --destroy_db_initially=0 --disable_wal=0 --enable_compaction_filter=0 --enable_pipelined_write=1 --expected_values_dir=$exp --fail_if_options_file_error=1 --file_checksum_impl=none --flush_one_in=1000000 --get_current_wal_file_one_in=0 --get_live_files_one_in=1000000 --get_property_one_in=1000000 --get_sorted_wal_files_one_in=0 --index_block_restart_interval=4 --ingest_external_file_one_in=0 --iterpercent=10 --key_len_percent_dist=1,30,69 --level_compaction_dynamic_level_bytes=True --mark_for_compaction_one_file_in=10 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=10000 --max_key_len=3 --max_manifest_file_size=16384 --max_write_batch_group_size_bytes=64 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=0 --memtable_prefix_bloom_size_ratio=0.001 --memtable_protection_bytes_per_key=1 --memtable_whole_key_filtering=1 --mmap_read=1 --nooverwritepercent=1 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=100000000 --optimize_filters_for_memory=1 --paranoid_file_checks=1 --partition_pinning=2 --pause_background_one_in=1000000 
--periodic_compaction_seconds=0 --prefix_size=5 --prefixpercent=5 --prepopulate_block_cache=1 --progress_reports=0 --read_fault_one_in=1000 --readpercent=45 --recycle_log_file_num=0 --reopen=0 --ribbon_starting_level=999 --secondary_cache_fault_one_in=32 --secondary_cache_uri=compressed_secondary_cache://capacity=8388608 --set_options_one_in=10000 --snapshot_hold_ops=100000 --sst_file_manager_bytes_per_sec=0 --sst_file_manager_bytes_per_truncate=0 --subcompactions=3 --sync_fault_injection=1 --target_file_size_base=2097 --target_file_size_multiplier=2 --test_batches_snapshots=1 --top_level_index_pinning=1 --use_full_merge_v1=1 --use_merge=1 --value_size_mult=32 --verify_checksum=1 --verify_checksum_one_in=1000000 --verify_db_one_in=100000 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=524288 --write_buffer_size=4194 --writepercent=35
      ```
      
      ```
      stderr:
      WARNING: prefix_size is non-zero but memtablerep != prefix_hash
      db_stress: utilities/fault_injection_fs.cc:748: virtual rocksdb::IOStatus rocksdb::FaultInjectionTestFS::RenameFile(const std::string &, const std::string &, const rocksdb::IOOptions &, rocksdb::IODebugContext *): Assertion `tlist.find(tdn.second) == tlist.end()' failed.`
      ```
      
      **Summary:**
      The PR ensures the non-test path passes down a non-null dir containing CURRENT (which, by current RocksDB assumption, is just the db dir) by doing the following:
      - Renamed `directory_to_fsync` as `dir_contains_current_file` in `SetCurrentFile()` to tighten the association between this directory and CURRENT file
      - Changed `SetCurrentFile()` API to require `dir_contains_current_file` being passed-in, instead of making it by default nullptr.
          -  Because `SetCurrentFile()`'s `dir_contains_current_file` is passed down from `VersionSet::LogAndApply()` then `VersionSet::ProcessManifestWrites()` (i.e, think about this as a chain of 3 functions related to MANIFEST update), these 2 functions also got refactored to require `dir_contains_current_file`
      - Updated the non-test-path callers of these 3 functions to obtain and pass in non-nullptr `dir_contains_current_file`, which by current assumption of RocksDB, is the `FSDirectory* db_dir`.
          - The `db_impl` path obtains `DBImpl::directories_.getDbDir()`, while callers with no access to such `directories_` create one on the fly via `FileSystem::NewDirectory(..)` and manage it with a unique pointer to ensure a short lifetime.
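      The underlying durability pattern being restored is the classic POSIX "fsync the parent directory after rename" step. A rough standalone sketch (paths and the helper name are illustrative, not RocksDB code; Linux-only):

      ```cpp
      #include <fcntl.h>
      #include <unistd.h>
      #include <cassert>
      #include <cstdio>
      #include <string>

      // Rename a temp file onto its final name, then fsync the directory so
      // the new directory entry itself survives a power loss. Skipping the
      // directory fsync is exactly the bug described above.
      bool InstallCurrentFile(const std::string& dir, const std::string& tmp,
                              const std::string& current) {
        if (std::rename((dir + "/" + tmp).c_str(),
                        (dir + "/" + current).c_str()) != 0) {
          return false;
        }
        int dir_fd = open(dir.c_str(), O_RDONLY | O_DIRECTORY);
        if (dir_fd < 0) return false;
        bool ok = (fsync(dir_fd) == 0);  // the step skipped on the nullptr path
        close(dir_fd);
        return ok;
      }

      int main() {
        FILE* f = std::fopen("CURRENT.tmp", "w");
        std::fputs("MANIFEST-000001\n", f);
        std::fclose(f);
        assert(InstallCurrentFile(".", "CURRENT.tmp", "CURRENT.sketch"));
        std::remove("CURRENT.sketch");
        return 0;
      }
      ```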
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10573
      
      Test Plan:
      - `make check`
      - Passed the repro db_stress command
      - For future improvement: since we currently don't assert that the dir containing CURRENT is non-nullptr (due to https://github.com/facebook/rocksdb/pull/10573#pullrequestreview-1087698899), there is still a chance that future developers mistakenly pass down a nullptr dir containing CURRENT, skipping the dir sync and reintroducing the bug. Therefore a smarter test (e.g., as quoted from ajkr, "(make) unsynced data loss to be dropping files corresponding to unsynced directory entries") is still needed.
      
      Reviewed By: ajkr
      
      Differential Revision: D39005886
      
      Pulled By: hx235
      
      fbshipit-source-id: 336fb9090d0cfa6ca3dd580db86268007dde7f5a
    • Add a dedicated cache entry role for blobs (#10601) · 78185601
      Committed by Levi Tamasi
      Summary:
      The patch adds a dedicated cache entry role for blob values and switches
      to a registered deleter so that blobs show up as a separate bucket
      (as opposed to "Misc") in the cache occupancy statistics, e.g.
      
      ```
      Block cache entry stats(count,size,portion): DataBlock(133515,531.73 MB,13.6866%) BlobValue(1824855,3.10 GB,81.7071%) Misc(1,0.00 KB,0%)
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10601
      
      Test Plan: Ran `make check` and tested the cache occupancy statistics using `db_bench`.
      
      Reviewed By: riversand963
      
      Differential Revision: D39107915
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 8446c3b190a41a144030df73f318eeda4398c125
    • Update statistics for async scan readaheads (#10585) · 72a3fb34
      Committed by anand76
      Summary:
      Imported a fix to "rocksdb.prefetched.bytes.discarded" stat from https://github.com/facebook/rocksdb/issues/10561, and added a new stat "rocksdb.async.prefetch.abort.micros" to measure time spent waiting for async reads to abort.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10585
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D39067000
      
      Pulled By: anand1976
      
      fbshipit-source-id: d7cda71abb48017239bd5fd832345a16c7024faf
    • print value when verification fails (#10587) · 3613d862
      Committed by Yanqin Jin
      Summary:
      When verification fails for db_stress, print more information about
      value read from the db and expected state.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10587
      
      Test Plan:
      make check
      ./db_stress
      
      Reviewed By: akankshamahajan15, hx235
      
      Differential Revision: D39078511
      
      Pulled By: riversand963
      
      fbshipit-source-id: 77ac8ffae01fc3a9b58a02c2e7bbe141e1a18f0b
    • Don't wait for indirect flush in read-only DB (#10569) · c5afbbfe
      Committed by Peter Dillinger
      Summary:
      Some APIs for getting live files, which are used by Checkpoint
      and BackupEngine, can optionally trigger and wait for a flush. These
      would deadlock when used on a read-only DB. Here we fix that by assuming
      the user wants the overall operation to succeed and is OK without
      flushing (because the DB is read-only).
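      The fixed behavior can be sketched as follows (a standalone toy, not the actual DBImpl code; `MiniDb` and its members are hypothetical):

      ```cpp
      #include <cassert>
      #include <string>
      #include <vector>

      // When the DB is read-only, a "get live files" request that would
      // normally trigger and wait for a flush simply skips the flush instead
      // of deadlocking on a flush that can never run.
      struct MiniDb {
        bool read_only = false;
        bool flushed = false;
        std::vector<std::string> live_files{"000001.sst"};

        bool GetLiveFiles(std::vector<std::string>* out, bool flush_memtable) {
          if (flush_memtable && !read_only) {
            flushed = true;  // a real DB would trigger and wait for the flush
          }
          // On a read-only DB, assume the caller is OK without the flush.
          *out = live_files;
          return true;
        }
      };

      int main() {
        MiniDb db;
        db.read_only = true;
        std::vector<std::string> files;
        assert(db.GetLiveFiles(&files, /*flush_memtable=*/true));
        assert(!db.flushed);  // no hang, flush skipped
        return 0;
      }
      ```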
      
      Follow-up work: the same or other issues can be hit by directly invoking
      some DB functions that are clearly not appropriate for read-only
      instance, but are not covered by overrides in DBImplReadOnly and
      CompactedDBImpl. These should be fixed to avoid similar problems on
      accidental misuse. (Long term, it would be nice to have a DBReadOnly
      class without those members, like BackupEngineReadOnly.)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10569
      
      Test Plan: tests updated to catch regression (hang before the fix)
      
      Reviewed By: riversand963
      
      Differential Revision: D38995759
      
      Pulled By: pdillinger
      
      fbshipit-source-id: f5f8bc7123e13cb45bd393dd974d7d6eda20bc68
    • Verify Iterator/Get() against expected state in only `no_batched_ops_test` (#10590) · 5532b462
      Committed by Changyu Bi
      Summary:
      https://github.com/facebook/rocksdb/issues/10538 added `TestIterateAgainstExpected()` in `no_batched_ops_test` to verify iterator correctness against the in-memory expected state. It is not compatible with running after some other stress tests, e.g. `TestPut()` in `batched_op_stress`, that either do not set the expected state when writing to the DB or use keys that cannot be parsed by `GetIntVal()`; the assert [here](https://github.com/facebook/rocksdb/blob/d17be55aab80b856f96f4af89f8d18fef96646b4/db_stress_tool/db_stress_common.h#L520) could then fail. This PR fixes the issue by setting the iterator upper bound to `max_key` when `destroy_db_initially=0`, so as to avoid the key space that `batched_op_stress` touches.
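      The idea of capping the iterator is analogous to a bounded scan over an ordered key space (a self-contained analogy, not db_stress code):

      ```cpp
      #include <cassert>
      #include <map>
      #include <string>
      #include <vector>

      // Restrict a scan to [begin, upper_bound) so keys written by an
      // incompatible workload above the bound are never visited.
      std::vector<int> ScanBelow(const std::map<int, std::string>& db,
                                 int upper_bound) {
        std::vector<int> keys;
        auto end = db.lower_bound(upper_bound);
        for (auto it = db.begin(); it != end; ++it) {
          keys.push_back(it->first);
        }
        return keys;
      }

      int main() {
        std::map<int, std::string> db{{1, "a"}, {5, "b"}, {999, "unparseable"}};
        std::vector<int> keys = ScanBelow(db, /*upper_bound=*/100);
        assert((keys == std::vector<int>{1, 5}));  // 999 is never visited
        return 0;
      }
      ```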
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10590
      
      Test Plan:
      ```
      # set up DB with batched_op_stress
      ./db_stress --test_batches_snapshots=1 --verify_iterator_with_expected_state_one_in=1 --max_key_len=3 --max_key=100000000 --skip_verifydb=1 --continuous_verification_interval=0 --writepercent=85 --delpercent=3 --delrangepercent=0 --iterpercent=10 --nooverwritepercent=1 --prefixpercent=0 --readpercent=2 --key_len_percent_dist=1,30,69
      
      # Before this PR, the following test will fail the asserts with error msg like the following
      # Assertion failed: (size_key <= key_gen_ctx.weights.size() * sizeof(uint64_t)), function GetIntVal, file db_stress_common.h, line 524.
      ./db_stress --verify_iterator_with_expected_state_one_in=1 --max_key_len=3 --max_key=100000000 --skip_verifydb=1 --continuous_verification_interval=0 --writepercent=0 --delpercent=3 --delrangepercent=0 --iterpercent=95 --nooverwritepercent=1 --prefixpercent=0 --readpercent=2 --key_len_percent_dist=1,30,69 --destroy_db_initially=0
      ```
      
      Reviewed By: ajkr
      
      Differential Revision: D39085243
      
      Pulled By: cbi42
      
      fbshipit-source-id: a7dfee2320c330773b623b442d730fd014ec7056
    • Use the default metadata charge policy when creating an LRU cache via the Java API (#10577) · 64e74723
      Committed by Levi Tamasi
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10577
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D39035884
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 48f116f8ca172b7eb5eb3651f39ddb891a7ffade
  4. 28 Aug 2022, 1 commit
  5. 27 Aug 2022, 2 commits
    • Make header more natural. (#10580) · d17be55a
      Committed by zhangenming
      Summary:
      Fixed #10381 for the blog's navigation bar UI.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10580
      
      Reviewed By: hx235
      
      Differential Revision: D39079045
      
      Pulled By: cbi42
      
      fbshipit-source-id: 922cf2624f201c0af42815b23d97361fc0151d93
    • Improve the accounting of memory used by cached blobs (#10583) · 23376aa5
      Committed by Levi Tamasi
      Summary:
      The patch improves the bookkeeping around the memory usage of
      cached blobs in two ways: 1) it uses `malloc_usable_size`, which accounts
      for allocator bin sizes etc., and 2) it also considers the memory usage
      of the `BlobContents` object in addition to the blob itself. Note: some unit
      tests had been relying on the cache charge being equal to the size of the
      cached blob; these were updated.
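      The first improvement can be sketched in isolation (glibc-specific standalone example; `ChargeFor` is a hypothetical helper, not the RocksDB code):

      ```cpp
      #include <malloc.h>
      #include <cassert>
      #include <cstdlib>

      // malloc_usable_size() reports the actual allocator bin size, which can
      // exceed the requested size; charging the cache with it, plus the size
      // of the owning object, gives a more accurate accounting.
      size_t ChargeFor(size_t blob_size, size_t object_overhead) {
        void* p = std::malloc(blob_size);
        size_t charge = malloc_usable_size(p) + object_overhead;
        std::free(p);
        return charge;
      }

      int main() {
        // The charge is at least the requested blob size plus the wrapper
        // object, and may be larger due to allocator bin rounding.
        assert(ChargeFor(100, 32) >= 100 + 32);
        return 0;
      }
      ```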
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10583
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D39060680
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 3583adce2b4ce6e84861f3fadccbfd2e5a3cc482
  6. 26 Aug 2022, 7 commits
    • fix trace_analyzer_tool args column position (#10576) · 7670fdd6
      Committed by bilyz
      Summary:
      The column-meaning explanation was not correct according to the parsed human-readable trace file.
      
      The following is sample data from the parsed human-readable trace file.
      The key is in the first column.
      
      ```
      0x00000005 6 1 0 1661317998095439
      0x00000007 0 1 0 1661317998095479
      0x00000008 6 1 0 1661317998095493
      0x0000000300000001 1 1 6 1661317998101508
      0x0000000300000000 1 1 6 1661317998101508
      0x0000000300000001 0 1 0 1661317998106486
      0x0000000300000000 0 1 0 1661317998106498
      0x0000000A 6 1 0 1661317998106515
      0x00000007 0 1 0 1661317998111887
      0x00000001 6 1 0 1661317998111923
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10576
      
      Reviewed By: ajkr
      
      Differential Revision: D39039110
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: eade6394c7870005b717846af09a848be6f677ce
    • Fix periodic_task unable to re-register the same task type (#10379) · d9e71fb2
      Committed by Jay Zhuang
      Summary:
      Timer has a limitation that it cannot re-register a task with the same name,
      because cancel only marks the task as invalid and waits for the Timer thread
      to clean it up later; until the task is cleaned up, the same task name cannot
      be added. This makes task option updates, which basically cancel and
      re-register the same task name, likely to fail. Change the periodic task name
      to a random unique id and store it in periodic_task_scheduler.
      
      Also refactor the `periodic_work` to `periodic_task` to make each job function
      as a `task`.
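      The unique-id naming scheme can be sketched as follows (a standalone toy; `TaskRegistry` and a counter-based id are illustrative assumptions, not the actual PeriodicTaskScheduler):

      ```cpp
      #include <atomic>
      #include <cassert>
      #include <cstdint>
      #include <map>
      #include <string>

      // Give each registered task a unique generated id so that a cancel
      // followed by a re-register of the "same" task never collides with a
      // not-yet-cleaned-up entry under the old name.
      class TaskRegistry {
       public:
        std::string Register(const std::string& task_type) {
          std::string id = task_type + "#" + std::to_string(next_id_++);
          tasks_[id] = task_type;
          return id;
        }
        void Cancel(const std::string& id) { tasks_.erase(id); }
        size_t Count() const { return tasks_.size(); }

       private:
        std::atomic<uint64_t> next_id_{0};
        std::map<std::string, std::string> tasks_;
      };

      int main() {
        TaskRegistry r;
        std::string a = r.Register("flush_info_log");
        r.Cancel(a);
        std::string b = r.Register("flush_info_log");  // no name collision
        assert(a != b);
        assert(r.Count() == 1);
        return 0;
      }
      ```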
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10379
      
      Test Plan: unittests
      
      Reviewed By: ajkr
      
      Differential Revision: D38000615
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: e4135f9422e3b53aaec8eda54f4e18ce633a279e
    • Introduce a dedicated class to represent blob values (#10571) · 3f57d84a
      Committed by Levi Tamasi
      Summary:
      The patch introduces a new class called `BlobContents`, which represents
      a single uncompressed blob value. We currently use `std::string` for this
      purpose; `BlobContents` is somewhat smaller but the primary reason for a
      dedicated class is that it enables certain improvements and optimizations
      like eliding a copy when inserting a blob into the cache, using custom
      allocators, or more control over and better accounting of the memory usage
      of cached blobs (see https://github.com/facebook/rocksdb/issues/10484).
      (We plan to implement these in subsequent PRs.)
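      Why a dedicated class enables eliding copies can be illustrated with a move-only owner of the blob buffer (a standalone sketch; `BlobValue` is a hypothetical stand-in, not the actual `BlobContents` API):

      ```cpp
      #include <cassert>
      #include <cstring>
      #include <memory>

      // Unlike a std::string that is deep-copied on cache insert, a move-only
      // owner transfers the buffer without copying the bytes.
      class BlobValue {
       public:
        BlobValue(std::unique_ptr<char[]> data, size_t size)
            : data_(std::move(data)), size_(size) {}
        BlobValue(BlobValue&&) = default;      // cheap transfer of ownership
        BlobValue(const BlobValue&) = delete;  // no accidental deep copies
        const char* data() const { return data_.get(); }
        size_t size() const { return size_; }

       private:
        std::unique_ptr<char[]> data_;
        size_t size_;
      };

      int main() {
        auto buf = std::make_unique<char[]>(4);
        std::memcpy(buf.get(), "blob", 4);
        const char* raw = buf.get();
        BlobValue v(std::move(buf), 4);
        BlobValue moved(std::move(v));
        assert(moved.data() == raw);  // same buffer, no copy was made
        assert(moved.size() == 4);
        return 0;
      }
      ```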
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10571
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D39000965
      
      Pulled By: ltamasi
      
      fbshipit-source-id: f296eddf9dec4fc3e11cad525b462bdf63c78f96
    • Support CompactionPri::kRoundRobin in RocksJava (#10572) · 418b36a9
      Committed by Brendan MacDonell
      Summary:
      Pretty trivial — this PR just adds the new compaction priority to the Java API.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10572
      
      Reviewed By: hx235
      
      Differential Revision: D39006523
      
      Pulled By: ajkr
      
      fbshipit-source-id: ea8d665817e7b05826c397afa41c3abcda81484e
    • Update the javadoc for setforceConsistencyChecks (#10574) · 9f290a5d
      Committed by Brendan MacDonell
      Summary:
      As of v6.14 (released in 2020), force_consistency_checks is enabled by default. However, the Java documentation does not seem to have been updated to reflect the change at the time.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10574
      
      Reviewed By: hx235
      
      Differential Revision: D39006566
      
      Pulled By: ajkr
      
      fbshipit-source-id: c7b029484d62deaa1f260ec55084049fe39eb84a
    • Ensure writes to WAL tail during `FlushWAL(true /* sync */)` will be synced (#10560) · 7ad4b386
      Committed by Andrew Kryczka
      Summary:
      WAL append and switch can both happen between `FlushWAL(true /* sync */)`'s sync operations and its call to `MarkLogsSynced()`. We permit this since locks need to be released for the sync operations. Such an appended/switched WAL is both inactive and incompletely synced at the time `MarkLogsSynced()` processes it.
      
      Prior to this PR, `MarkLogsSynced()` assumed all inactive WALs were fully synced and removed them from consideration for future syncs. That was wrong in the scenario described above and led to the latest append(s) never being synced. This PR changes `MarkLogsSynced()` to only remove inactive WALs from consideration for which all flushed data has been synced.
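      The corrected bookkeeping can be sketched as follows (a standalone illustration of the idea; `WalState` and the per-WAL flushed/synced counters are assumptions here, not DBImpl's actual data structures):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <map>

      // Track flushed and synced sizes per WAL number, and only retire an
      // inactive WAL once everything flushed to it has been synced.
      struct WalState {
        uint64_t flushed = 0;
        uint64_t synced = 0;
      };

      void MarkLogsSynced(std::map<uint64_t, WalState>& wals,
                          uint64_t active_wal) {
        for (auto it = wals.begin(); it != wals.end();) {
          bool inactive = (it->first != active_wal);
          bool fully_synced = (it->second.synced >= it->second.flushed);
          if (inactive && fully_synced) {
            it = wals.erase(it);  // safe to retire: nothing unsynced remains
          } else {
            ++it;  // keep for a future sync, even though it is inactive
          }
        }
      }

      int main() {
        std::map<uint64_t, WalState> wals;
        wals[1].flushed = 100;
        wals[1].synced = 100;  // inactive, fully synced -> retired
        wals[2].flushed = 100;
        wals[2].synced = 60;   // appended after the sync started -> kept
        MarkLogsSynced(wals, /*active_wal=*/3);
        assert(wals.count(1) == 0);
        assert(wals.count(2) == 1);
        return 0;
      }
      ```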
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10560
      
      Test Plan: repro unit test for the scenario described above. Without this PR, it fails on "key2" not found
      
      Reviewed By: riversand963
      
      Differential Revision: D38957391
      
      Pulled By: ajkr
      
      fbshipit-source-id: da77175eba97ff251a4219b227b3bb2d4843ed26
    • CI benchmarks refine configuration (#10514) · 7fbee01f
      Committed by Alan Paxton
      Summary:
      CI benchmarks refine configuration
      
      Run only “essential” benchmarks, but for longer
      Fix (reduce) the NUM_KEYS to ensure cached behaviour
      Reduce level size to try to ensure more levels
      
      Refine test durations again, more time per test, but fewer tests.
      In CI benchmark mode, the only read test is readrandom.
      There are still 3 mostly-read tests.
      
      Goal is to squeeze complete run a little bit inside 1 hour so it doesn’t clash with the next run (cron scheduled for main branch), but it gets to run as long as possible, so that results are as credible as possible.
      
      Reduce thread count to physical capacity, in an attempt to reduce throughput variance for write-heavy tests. See Mark Callaghan’s comments in related documentation.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10514
      
      Reviewed By: ajkr
      
      Differential Revision: D38952469
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 72fa6bba897cc47066ced65facd1fd36e28f30a8
  7. 25 Aug 2022, 4 commits
  8. 24 Aug 2022, 10 commits