1. 22 Mar 2022 (5 commits)
    • Update HISTORY.md and version.h for 7.1 release (#9727) · b360d25d
      Authored by Hui Xiao
      Summary:
      As title
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9727
      
      Test Plan: no code change
      
      Reviewed By: ajkr
      
      Differential Revision: D35034541
      
      Pulled By: hx235
      
      fbshipit-source-id: ae839f23db1bdb9e5f787ca653a7685beb2ada68
    • Make mixgraph easier to use (#9711) · 1ca1562e
      Authored by Mark Callaghan
      Summary:
      Changes:
      * improves monitoring by displaying the average size of a Put value and the average scan length
      * forces the minimum value size to be 10; before this it was 0 if you didn't set the distribution parameters
      * uses reasonable defaults for the distribution parameters that determine value size and scan length
      * includes seeks in the "reads ... found" message; before this they were missing
      
      This is for https://github.com/facebook/rocksdb/issues/9672
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9711
      
      Test Plan:
      Before this change:
      
      ./db_bench --benchmarks=fillseq,mixgraph --mix_get_ratio=50 --mix_put_ratio=25 --mix_seek_ratio=25 --num=100000 --value_k=0.2615 --value_sigma=25.45 --iter_k=2.517 --iter_sigma=14.236
      fillseq      :       4.289 micros/op 233138 ops/sec;   25.8 MB/s
      mixgraph     :      18.461 micros/op 54166 ops/sec;  755.0 MB/s ( Gets:50164 Puts:24919 Seek:24917 of 50164 in 75081 found)
      
      After this change:
      
      ./db_bench --benchmarks=fillseq,mixgraph --mix_get_ratio=50 --mix_put_ratio=25 --mix_seek_ratio=25 --num=100000 --value_k=0.2615 --value_sigma=25.45 --iter_k=2.517 --iter_sigma=14.236
      fillseq      :       3.974 micros/op 251553 ops/sec;   27.8 MB/s
      mixgraph     :      16.722 micros/op 59795 ops/sec;  833.5 MB/s ( Gets:50164 Puts:24919 Seek:24917, reads 75081 in 75081 found, avg size: 36.0 value, 504.9 scan)
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D35030190
      
      Pulled By: mdcallag
      
      fbshipit-source-id: d8f555f28d869f752ddb674a524108884511b151
    • Fix a bug in PosixClock (#9695) · cb4d188a
      Authored by KNOEEE
      Summary:
      Multiplier here should be 1e6 to get microseconds.
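      A minimal sketch of the unit conversion in question (hypothetical names, not RocksDB's actual code): seconds must be scaled by 1e6 to obtain microseconds; a factor of 1e3 would yield milliseconds.

```cpp
#include <cstdint>
#include <ctime>

// Hypothetical sketch: the factor for seconds -> microseconds is 1e6.
uint64_t SecondsToMicros(uint64_t secs) { return secs * 1000000ULL; }

// A NowMicros() in the spirit of a POSIX clock: the seconds field is
// scaled by 1e6, the nanoseconds field divided by 1e3.
uint64_t NowMicros() {
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return SecondsToMicros(static_cast<uint64_t>(ts.tv_sec)) +
         static_cast<uint64_t>(ts.tv_nsec) / 1000;
}
```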
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9695
      
      Reviewed By: ajkr
      
      Differential Revision: D34897086
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 9c1d0811ea740ba0a007edc2da199edbd000b88b
    • fix a bug, c api, if enable inplace_update_support, and use create sn… (#9471) · cbe303c1
      Authored by duyuqi
      Summary:
      The C API's release-snapshot call will core dump when inplace_update_support is enabled and a snapshot has been created.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9471
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D34965103
      
      Pulled By: riversand963
      
      fbshipit-source-id: c3aeeb9ea7126c2eda1466102794fecf57b6ab77
    • Enable detect_stack_use_after_return for ASAN (#9714) · 661e0329
      Authored by Jay Zhuang
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/9714
      
      Reviewed By: ajkr
      
      Differential Revision: D34983675
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 0252ec6ee38a0b960df4c92791c7c2bcbfba5ad8
  2. 21 Mar 2022 (1 commit)
    • Provide implementation to prefetch data asynchronously in FilePrefetchBuffer (#9674) · 49a10feb
      Authored by Akanksha Mahajan
      Summary:
      In FilePrefetchBuffer, if reads are sequential, then after prefetching it calls the ReadAsync API to prefetch data asynchronously, so that the data is already available for the next prefetch. The amount of data prefetched asynchronously is readahead_size/2. It uses two buffers, one for synchronous prefetching and one for asynchronous. In case the data overlaps, it is copied from both buffers into a third buffer to make it contiguous.
      This feature is gated by ReadOptions::async_io and is experimental.
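      A deliberately simplified sketch of the double-buffer idea described above (all names are illustrative; the real FilePrefetchBuffer issues ReadAsync against the file system, modeled here with std::async over an in-memory string):

```cpp
#include <cstddef>
#include <future>
#include <string>
#include <utility>

// Illustrative double-buffer prefetcher: cur_ is the synchronous
// buffer, next_ receives data filled asynchronously, and Read()
// swaps them when the async data covers the requested range.
class DoubleBufferPrefetcher {
 public:
  DoubleBufferPrefetcher(std::string file, std::size_t readahead)
      : file_(std::move(file)), readahead_(readahead) {}

  // Serve [offset, offset + n), then kick off an async fill of the
  // next readahead_ / 2 bytes, mirroring the scheme described above.
  std::string Read(std::size_t offset, std::size_t n) {
    if (!Covered(cur_, offset, n)) {
      if (pending_.valid()) next_ = pending_.get();
      if (Covered(next_, offset, n)) {
        std::swap(cur_, next_);  // async data arrived in time
      } else {
        cur_ = Fill(offset, readahead_);  // fall back to a sync read
      }
    }
    std::size_t next_off = cur_.offset + cur_.data.size();
    pending_ = std::async(std::launch::async, [this, next_off] {
      return Fill(next_off, readahead_ / 2);
    });
    return cur_.data.substr(offset - cur_.offset, n);
  }

 private:
  struct Buf {
    std::size_t offset = 0;
    std::string data;
  };
  static bool Covered(const Buf& b, std::size_t off, std::size_t n) {
    return off >= b.offset && off + n <= b.offset + b.data.size();
  }
  Buf Fill(std::size_t off, std::size_t n) {
    Buf b;
    b.offset = off;
    if (off < file_.size()) b.data = file_.substr(off, n);
    return b;
  }
  std::string file_;
  std::size_t readahead_;
  Buf cur_, next_;
  std::future<Buf> pending_;
};
```

      Note the real implementation additionally copies overlapping ranges from both buffers into a third buffer; that detail is omitted here.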
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9674
      
      Test Plan:
      1. Add new unit tests
      2. Run **db_stress** to make sure nothing crashes.
      
          -   Normal prefetch without `async_io` ran successfully:
      ```
      export CRASH_TEST_EXT_ARGS=" --async_io=0"
      make crash_test -j
      ```
      
      3. **Run Regressions**.
         i) Main branch without any change for normal prefetching with async_io disabled:
      
      ```
      ./db_bench -db=/tmp/prefix_scan_prefetch_main -benchmarks="fillseq" -key_size=32 -value_size=512 -num=5000000 -use_direct_io_for_flush_and_compaction=true -target_file_size_base=16777216
      ```
      
      ```
      ./db_bench -use_existing_db=true -db=/tmp/prefix_scan_prefetch_main -benchmarks="seekrandom" -key_size=32 -value_size=512 -num=5000000 -use_direct_reads=true -seek_nexts=327680 -duration=120 -ops_between_duration_checks=1
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      RocksDB:    version 7.0
      Date:       Thu Mar 17 13:11:34 2022
      CPU:        24 * Intel Core Processor (Broadwell)
      CPUCache:   16384 KB
      Keys:       32 bytes each (+ 0 bytes user-defined timestamp)
      Values:     512 bytes each (256 bytes after compression)
      Entries:    5000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    2594.0 MB (estimated)
      FileSize:   1373.3 MB (estimated)
      Write rate: 0 bytes/second
      Read rate: 0 ops/second
      Compression: Snappy
      Compression sampling rate: 0
      Memtablerep: SkipListFactory
      Perf Level: 1
      ------------------------------------------------
      DB path: [/tmp/prefix_scan_prefetch_main]
      seekrandom   :  483618.390 micros/op 2 ops/sec;  338.9 MB/s (249 of 249 found)
      ```
      
        ii) Normal prefetching after the changes, with async_io disabled:
      
      ```
      ./db_bench -use_existing_db=true -db=/tmp/prefix_scan_prefetch_withchange -benchmarks="seekrandom" -key_size=32 -value_size=512 -num=5000000 -use_direct_reads=true -seek_nexts=327680 -duration=120 -ops_between_duration_checks=1
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      RocksDB:    version 7.0
      Date:       Thu Mar 17 14:11:31 2022
      CPU:        24 * Intel Core Processor (Broadwell)
      CPUCache:   16384 KB
      Keys:       32 bytes each (+ 0 bytes user-defined timestamp)
      Values:     512 bytes each (256 bytes after compression)
      Entries:    5000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    2594.0 MB (estimated)
      FileSize:   1373.3 MB (estimated)
      Write rate: 0 bytes/second
      Read rate: 0 ops/second
      Compression: Snappy
      Compression sampling rate: 0
      Memtablerep: SkipListFactory
      Perf Level: 1
      ------------------------------------------------
      DB path: [/tmp/prefix_scan_prefetch_withchange]
      seekrandom   :  471347.227 micros/op 2 ops/sec;  348.1 MB/s (255 of 255 found)
      ```
      
      Reviewed By: anand1976
      
      Differential Revision: D34731543
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 8e23aa93453d5fe3c672b9231ad582f60207937f
  3. 19 Mar 2022 (3 commits)
    • Add manifest fix-up utility for file temperatures (#9683) · a8a422e9
      Authored by Peter Dillinger
      Summary:
      The goal of this change is to allow changes to the "current" (in
      FileSystem) file temperatures to feed back into DB metadata, so that
      they can inform decisions and stats reporting. In part because of
      modular code factoring, it doesn't seem easy to do this automagically,
      where opening an SST file and observing current Temperature different
      from expected would trigger a change in metadata and DB manifest write
      (essentially giving the deep read path access to the write path). It is also
      difficult to do this while the DB is open because of the limitations of
      LogAndApply.
      
      This change allows updating file temperature metadata on a closed DB
      using an experimental utility function UpdateManifestForFilesState()
      or `ldb update_manifest --update_temperatures`. This should suffice for
      "migration" scenarios where outside tooling has placed or re-arranged DB
      files into a (different) tiered configuration without going through
      RocksDB itself (currently, only compaction can change temperature
      metadata).
      
      Some details:
      * Refactored and added unit test for `ldb unsafe_remove_sst_file` because
      of shared functionality
      * Pulled in autovector.h changes from https://github.com/facebook/rocksdb/issues/9546 to fix SuperVersionContext
      move constructor (related to an older draft of this change)
      
      Possible follow-up work:
      * Support updating the manifest with file checksums, such as when a
      new checksum function is used and we want existing DB metadata updated
      for it.
      * It's possible that for some repair scenarios, lighter weight than
      full repair, we might want to support UpdateManifestForFilesState() to
      modify critical file details like size or checksum using same
      algorithm. But let's make sure these are differentiated from modifying
      file details in ways that don't suspect corruption (or require extreme
      trust).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9683
      
      Test Plan: unit tests added
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D34798828
      
      Pulled By: pdillinger
      
      fbshipit-source-id: cfd83e8fb10761d8c9e7f9c020d68c9106a95554
    • Fix assertion error by doing comparison with mutex (#9717) · b2aacaf9
      Authored by Yanqin Jin
      Summary:
      On CircleCI MacOS instances, we have been seeing the following assertion error:
      ```
      Assertion failed: (alive_log_files_tail_ == alive_log_files_.rbegin()), function WriteToWAL, file /Users/distiller/project/db/db_impl/db_impl_write.cc, line 1213.
      Received signal 6 (Abort trap: 6)
      #0   0x1
      #1   abort (in libsystem_c.dylib) + 120
      #2   err (in libsystem_c.dylib) + 0
      #3   rocksdb::DBImpl::WriteToWAL(rocksdb::WriteBatch const&, rocksdb::log::Writer*, unsigned long long*, unsigned long long*, rocksdb::Env::IOPriority, bool, bool) (in librocksdb.7.0.0.dylib) (db_impl_write.cc:1213)
      #4   rocksdb::DBImpl::WriteToWAL(rocksdb::WriteThread::WriteGroup const&, rocksdb::log::Writer*, unsigned long long*, bool, bool, unsigned long long) (in librocksdb.7.0.0.dylib) (db_impl_write.cc:1251)
      #5   rocksdb::DBImpl::WriteImpl(rocksdb::WriteOptions const&, rocksdb::WriteBatch*, rocksdb::WriteCallback*, unsigned long long*, unsigned long long, bool, unsigned long long*, unsigned long, rocksdb::PreReleaseCallback*) (in librocksdb.7.0.0.dylib) (db_impl_write.cc:421)
      #6   rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, rocksdb::WriteBatch*) (in librocksdb.7.0.0.dylib) (db_impl_write.cc:109)
      #7   rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Slice const&) (in librocksdb.7.0.0.dylib) (db_impl_write.cc:2159)
      #8   rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Slice const&) (in librocksdb.7.0.0.dylib) (db_impl_write.cc:37)
      #9   rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::Slice const&, rocksdb::Slice const&, rocksdb::Slice const&) (in librocksdb.7.0.0.dylib) (db.h:382)
      #10  rocksdb::DBBasicTestWithTimestampPrefixSeek_IterateWithPrefix_Test::TestBody() (in db_with_timestamp_basic_test) (db_with_timestamp_basic_test.cc:2926)
      #11  void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (in db_with_timestamp_basic_test) (gtest-all.cc:3899)
      #12  void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (in db_with_timestamp_basic_test) (gtest-all.cc:3935)
      #13  testing::Test::Run() (in db_with_timestamp_basic_test) (gtest-all.cc:3980)
      #14  testing::TestInfo::Run() (in db_with_timestamp_basic_test) (gtest-all.cc:4153)
      #15  testing::TestCase::Run() (in db_with_timestamp_basic_test) (gtest-all.cc:4266)
      #16  testing::internal::UnitTestImpl::RunAllTests() (in db_with_timestamp_basic_test) (gtest-all.cc:6632)
      #17  bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (in db_with_timestamp_basic_test) (gtest-all.cc:3899)
      #18  bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (in db_with_timestamp_basic_test) (gtest-all.cc:3935)
      #19  testing::UnitTest::Run() (in db_with_timestamp_basic_test) (gtest-all.cc:6242)
      #20  RUN_ALL_TESTS() (in db_with_timestamp_basic_test) (gtest.h:22110)
      #21  main (in db_with_timestamp_basic_test) (db_with_timestamp_basic_test.cc:3150)
      #22  start (in libdyld.dylib) + 1
      ```
      
      It's likely caused by concurrent, unprotected access to the deque: even though the element returned by `back()` is never popped, we are comparing `rbegin()` with a cached reverse iterator. To be safe, do the comparison only when we hold the mutex.
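      A minimal sketch of the guard described above (illustrative names, not the actual patch): the `rbegin()` comparison is performed only when the protecting mutex is known to be held.

```cpp
#include <deque>

// Illustrative sketch: compare the cached tail iterator with rbegin()
// only when the protecting mutex is held, since rbegin() reads deque
// bookkeeping that a concurrent pop_front() may be mutating.
struct AliveLogFiles {
  std::deque<int> files;
  std::deque<int>::reverse_iterator tail;

  bool TailConsistent(bool mutex_held) {
    if (!mutex_held) return true;  // skip the unsafe inspection
    return tail == files.rbegin();
  }
};
```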
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9717
      
      Test Plan:
      One example: SSH to one CircleCI MacOS instance and run
      ```
      gtest-parallel -r 1000 -w 8 ./db_test --gtest_filter=DBTest.FlushesInParallelWithCompactRange
      ```
      
      Reviewed By: pdillinger
      
      Differential Revision: D34990696
      
      Pulled By: riversand963
      
      fbshipit-source-id: 62dd48ae6fedbda53d0a64d73de9b948b4c26eee
    • New backup meta schema, with file temperatures (#9660) · cff0d1e8
      Authored by Peter Dillinger
      Summary:
      The primary goal of this change is to add support for backing up and
      restoring (applying on restore) file temperature metadata, without
      committing to either the DB manifest or the FS reported "current"
      temperatures being exclusive "source of truth".
      
      To achieve this goal, we need to add temperature information to backup
      metadata, which requires updated backup meta schema. Fortunately I
      prepared for this in https://github.com/facebook/rocksdb/issues/8069, which began forward compatibility in version
      6.19.0 for this kind of schema update. (Previously, backup meta schema
      was not extensible! Making this schema update public will allow some
      other "nice to have" features like taking backups with hard links, and
      avoiding crc32c checksum computation when another checksum is already
      available.) While schema version 2 is newly public, the default schema
      version is still 1. Until we change the default, users will need to set
      it to 2 to enable features like temperature data backup+restore. New
      metadata like temperature information will be ignored with a warning
      in versions before this change and since 6.19.0. The metadata is
      considered ignorable because a functioning DB can be restored without
      it.
      
      Some detail:
      * Some renaming because "future schema" is now just public schema 2.
      * Initialize some atomics in TestFs (linter reported)
      * Add temperature hint support to SstFileDumper (used by BackupEngine)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9660
      
      Test Plan:
      The related unit test was substantially updated for the new functionality,
      including some shared testing support for tracking temperatures in an FS.
      
      Some other tests and testing hooks into production code also updated for
      making the backup meta schema change public.
      
      Reviewed By: ajkr
      
      Differential Revision: D34686968
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 3ac1fa3e67ee97ca8a5103d79cc87d872c1d862a
  4. 18 Mar 2022 (2 commits)
    • Fix race condition caused by concurrent accesses to forceMmapOff_ when opening... · 3bdbf67e
      Authored by Yanqin Jin
      Fix race condition caused by concurrent accesses to forceMmapOff_ when opening Posix WritableFile (#9685)
      
      Summary:
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9685
      
      Our TSAN reports a race condition when running the test
      ```
      gtest-parallel -r 100 ./external_sst_file_test --gtest_filter=ExternalSSTFileTest.MultiThreaded
      ```
      which leads to the following:
      
      ```
      WARNING: ThreadSanitizer: data race (pid=2683148)
        Write of size 1 at 0x556fede63340 by thread T7:
          #0 rocksdb::(anonymous namespace)::PosixFileSystem::OpenWritableFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::FileOptions const&, bool, std::unique_ptr<rocksdb::FSWritableFile, std::default_delete<rocksdb::FSWritableFile> >*, rocksdb::IODebugContext*) internal_repo_rocksdb/repo/env/fs_posix.cc:334 (external_sst_file_test+0xb61ac4)
          #1 rocksdb::(anonymous namespace)::PosixFileSystem::ReopenWritableFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::FileOptions const&, std::unique_ptr<rocksdb::FSWritableFile, std::default_delete<rocksdb::FSWritableFile> >*, rocksdb::IODebugContext*) internal_repo_rocksdb/repo/env/fs_posix.cc:382 (external_sst_file_test+0xb5ba96)
          #2 rocksdb::CompositeEnv::ReopenWritableFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unique_ptr<rocksdb::WritableFile, std::default_delete<rocksdb::WritableFile> >*, rocksdb::EnvOptions const&) internal_repo_rocksdb/repo/env/composite_env.cc:334 (external_sst_file_test+0xa6ab7f)
          #3 rocksdb::EnvWrapper::ReopenWritableFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unique_ptr<rocksdb::WritableFile, std::default_delete<rocksdb::WritableFile> >*, rocksdb::EnvOptions const&) internal_repo_rocksdb/repo/include/rocksdb/env.h:1428 (external_sst_file_test+0x561f3e)
      Previous read of size 1 at 0x556fede63340 by thread T4:
          #0 rocksdb::(anonymous namespace)::PosixFileSystem::OpenWritableFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::FileOptions const&, bool, std::unique_ptr<rocksdb::FSWritableFile, std::default_delete<rocksdb::FSWritableFile> >*, rocksdb::IODebugContext*) internal_repo_rocksdb/repo/env/fs_posix.cc:328 (external_sst_file_test+0xb61a70)
          #1 rocksdb::(anonymous namespace)::PosixFileSystem::ReopenWritableFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator
      ...
      ```
      
      Fix by making sure the following block gets executed only once:
      ```
            if (!checkedDiskForMmap_) {
              // this will be executed once in the program's lifetime.
              // do not use mmapWrite on non ext-3/xfs/tmpfs systems.
              if (!SupportsFastAllocate(fname)) {
                forceMmapOff_ = true;
              }
              checkedDiskForMmap_ = true;
            }
      ```
      
      Reviewed By: pdillinger
      
      Differential Revision: D34780308
      
      fbshipit-source-id: b761f66b24c8b5b8389d86ea371c8542b8d869d5
    • Deflake DeleteSchedulerTest.StartBGEmptyTrashMultipleTimes (#9706) · f0fca81f
      Authored by Jay Zhuang
      Summary:
      The designed sync point may not be hit if trash files are generated faster
      than they are deleted. In that case a file is deleted directly instead of
      waiting for the background trash-emptying thread to do it.
      Increase the SstFileManager Trash/DB ratio to avoid that.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9706
      
      Test Plan:
      `gtest-parallel ./delete_scheduler_test
      --gtest_filter=DeleteSchedulerTest.StartBGEmptyTrashMultipleTimes -r
      10000 -w 100`
      It was likely to happen on one of the hosts.
      
      Reviewed By: riversand963
      
      Differential Revision: D34964735
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: bb78015489b5f6b3f11783aae7e5853ea197702c
  5. 17 Mar 2022 (3 commits)
    • Minor fix for Windows build with zlib (#9699) · 2586585b
      Authored by Jay Zhuang
      Summary:
      ```
      conversion from 'size_t' to 'uLong', possible loss of data
      ```
      
      Fix https://github.com/facebook/rocksdb/issues/9688
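      The warning and one way to silence it can be sketched as follows (names are stand-ins; zlib's real uLong is `unsigned long`, which remains 32-bit on 64-bit Windows while size_t is 64-bit):

```cpp
#include <cstddef>

// Illustrative sketch, not the actual patch: an explicit, clamped cast
// documents the size_t -> uLong narrowing and silences the warning.
using ULongStandIn = unsigned long;  // plays the role of zlib's uLong

ULongStandIn ToZlibLength(std::size_t n) {
  const ULongStandIn kMax = static_cast<ULongStandIn>(-1);
  return n > kMax ? kMax : static_cast<ULongStandIn>(n);
}
```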
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9699
      
      Reviewed By: riversand963
      
      Differential Revision: D34901116
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 969148a7a8c023449bd85055a1f0eec71d0a9b3f
    • Improve stress test for transactions (#9568) · 58947610
      Authored by Yanqin Jin
      Summary:
      Test only, no change to functionality.
      Extremely low risk of library regression.
      
      Update test key generation by maintaining existing and non-existing keys.
      Update db_crashtest.py to drive multiops_txn stress test for both write-committed and write-prepared.
      Add a make target 'blackbox_crash_test_with_multiops_txn'.
      
      Running the following commands caught the bug exposed in https://github.com/facebook/rocksdb/issues/9571.
      ```
      $rm -rf /tmp/rocksdbtest/*
      $./db_stress -progress_reports=0 -test_multi_ops_txns -use_txn -clear_column_family_one_in=0 \
          -column_families=1 -writepercent=0 -delpercent=0 -delrangepercent=0 -customopspercent=60 \
         -readpercent=20 -prefixpercent=0 -iterpercent=20 -reopen=0 -ops_per_thread=1000 -ub_a=10000 \
         -ub_c=100 -destroy_db_initially=0 -key_spaces_path=/dev/shm/key_spaces_desc -threads=32 -read_fault_one_in=0
      $./db_stress -progress_reports=0 -test_multi_ops_txns -use_txn -clear_column_family_one_in=0
         -column_families=1 -writepercent=0 -delpercent=0 -delrangepercent=0 -customopspercent=60 -readpercent=20 \
         -prefixpercent=0 -iterpercent=20 -reopen=0 -ops_per_thread=1000 -ub_a=10000 -ub_c=100 -destroy_db_initially=0 \
         -key_spaces_path=/dev/shm/key_spaces_desc -threads=32 -read_fault_one_in=0
      ```
      
      Running the following command caught a bug which will be fixed in https://github.com/facebook/rocksdb/issues/9648 .
      ```
      $TEST_TMPDIR=/dev/shm make blackbox_crash_test_with_multiops_wc_txn
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9568
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D34308154
      
      Pulled By: riversand963
      
      fbshipit-source-id: 99ff1b65c19b46c471d2f2d3b47adcd342a1b9e7
    • crash_test Makefile refactoring, add to CircleCI (#9702) · fe9a344c
      Authored by Peter Dillinger
      Summary:
      some Makefile refactoring to support Meta-internal workflows,
      and add a basic crash_test flow to CircleCI
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9702
      
      Test Plan: CI
      
      Reviewed By: riversand963
      
      Differential Revision: D34934315
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 67f17280096d8968d8e44459293f72fb6fe339f3
  6. 16 Mar 2022 (6 commits)
  7. 15 Mar 2022 (2 commits)
    • Fix a TSAN-reported bug caused by concurrent access to std::deque (#9686) · bbdaf63d
      Authored by Yanqin Jin
      Summary:
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9686
      
      According to https://www.cplusplus.com/reference/deque/deque/back/,
      "
      The container is accessed (neither the const nor the non-const versions modify the container).
      The last element is potentially accessed or modified by the caller. Concurrently accessing or modifying other elements is safe.
      "
      
      Also according to https://www.cplusplus.com/reference/deque/deque/pop_front/,
      "
      The container is modified.
      The first element is modified. Concurrently accessing or modifying other elements is safe (although see iterator validity above).
      "
      In RocksDB, we never pop the last element of `DBImpl::alive_log_files_`. We have been
      exploiting this fact and the above two properties to ensure correctness when
      `DBImpl::alive_log_files_` may be accessed concurrently. Specifically, it can be accessed
      in the write path when the db mutex is released (sometimes the log mutex is held instead).
      It can also be accessed in `FindObsoleteFiles()`, where the db mutex is always held, and
      during recovery, where the db mutex is also held.
      Given that we never pop the last element of `alive_log_files_`, we currently do not
      acquire additional locks when accessing it in `WriteToWAL()` as follows:
      ```
      alive_log_files_.back().AddSize(log_entry.size());
      ```
      
      This is problematic.
      
      Check source code of deque.h
      ```
        back() _GLIBCXX_NOEXCEPT
        {
      __glibcxx_requires_nonempty();
      ...
        }
      
        pop_front() _GLIBCXX_NOEXCEPT
        {
      ...
        if (this->_M_impl._M_start._M_cur
            != this->_M_impl._M_start._M_last - 1)
          {
            ...
            ++this->_M_impl._M_start._M_cur;
          }
        ...
        }
      ```
      
      `back()` will actually call `__glibcxx_requires_nonempty()` first.
      If `__glibcxx_requires_nonempty()` is enabled and not an empty macro,
      it will call `empty()`
      ```
      bool empty() {
      return this->_M_impl._M_finish == this->_M_impl._M_start;
      }
      ```
      You can see that it will access `this->_M_impl._M_start`, racing with `pop_front()`.
      Therefore, TSAN will actually catch the bug in this case.
      
      To be able to use TSAN on our library and unit tests, we should always coordinate
      concurrent accesses to STL containers properly.
      
      We need to pass information about db mutex and log mutex into `WriteToWAL()`, otherwise
      it's impossible to know which mutex to acquire inside the function.
      
      To fix this, we can capture the tail of `alive_log_files_` by reference, so that we do not have to call `back()` in `WriteToWAL()`.
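      The approach can be sketched as follows (illustrative names and simplified types): a reference to the tail element is captured once, and the write path then updates it directly instead of calling `back()`, which reads the deque bookkeeping that a concurrent `pop_front()` mutates.

```cpp
#include <cstdint>
#include <deque>

// Illustrative stand-in for the per-WAL bookkeeping entry.
struct LogFileNumberSize {
  uint64_t size = 0;
  void AddSize(uint64_t n) { size += n; }
};

std::deque<LogFileNumberSize> alive_log_files;

// The write path receives the tail by reference and never touches the
// deque itself, so there is no racy call into back(). (References to
// deque elements stay valid when other elements are popped.)
void WriteToWAL(LogFileNumberSize& log_file_tail, uint64_t entry_size) {
  log_file_tail.AddSize(entry_size);
}
```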
      
      Reviewed By: pdillinger
      
      Differential Revision: D34780309
      
      fbshipit-source-id: 1def9821f0c437f2736c6a26445d75890377889b
    • NPE in Java_org_rocksdb_ColumnFamilyOptions_setSstPartitionerFactory (#9622) · 9e05c5e2
      Authored by Tomas Kolda
      Summary:
      There was a mistake that incorrectly cast SstPartitionerFactory (a shared pointer was missed). It worked for the database (correct cast), but not for a column family; trying to set it on a family caused an access violation.
      
      I have also added a test and improved the existing one. The older version passed even without an sst partitioner, which is odd, because on Level 1 we had two SST files with the same key "aaaa1". I was not sure whether that was a new feature, so I changed the test to use overlapping keys: "aaaa0" - "aaaa2" overlaps "aaaa1".
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9622
      
      Reviewed By: ajkr
      
      Differential Revision: D34871968
      
      Pulled By: pdillinger
      
      fbshipit-source-id: a08009766da49fc198692a610e8beb19caf737e6
  8. 14 Mar 2022 (1 commit)
  9. 13 Mar 2022 (4 commits)
  10. 12 Mar 2022 (2 commits)
  11. 11 Mar 2022 (2 commits)
    • Posix API support for Async Read and Poll APIs (#9578) · 8465cccd
      Authored by Akanksha Mahajan
      Summary:
      Provide support for Async Read and Poll in Posix file system using IOUring.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9578
      
      Test Plan: In progress
      
      Reviewed By: anand1976
      
      Differential Revision: D34690256
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 291cbd1380a3cb904b726c34c0560d1b2ce44a2e
    • Fix mempurge crash reported in #8958 (#9671) · 7bed6595
      Authored by Baptiste Lemaire
      Summary:
      Change the `MemPurge` code to address a failure during a crash test reported in https://github.com/facebook/rocksdb/issues/8958.
      
      ### Details and results of the crash investigation:
      These failures happened in a specific scenario where the list of immutable tables was composed of 2 or more memtables, and the last memtable was the output of a previous `Mempurge` operation. Because the `PickMemtablesToFlush` function included a sorting of the memtables (previous PR related to the Mempurge project), and because the `VersionEdit` of the flush class is piggybacked onto a single one of these memtables, the `VersionEdit` was not properly selected and applied to the `VersionSet` of the DB. Since the `VersionSet` was not edited properly, the database was losing track of the SST file created during the flush process, which was subsequently deleted (and as you can expect, caused the tests to crash).
      The following command consistently failed, which was quite convenient to investigate the issue:
      `$ while rm -rf /dev/shm/single_stress && ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/single_stress --experimental_mempurge_threshold=5.493146827397074 --flush_one_in=10000 --reopen=0 --write_buffer_size=262144 --value_size_mult=33 --max_write_buffer_number=3 -ops_per_thread=10000; do : ; done`
      
      ### Solution proposed
      The memtables are no longer sorted based on their `memtableID` in the `PickMemtablesToFlush` function. Additionally, the `next_log_number` of the memtable created as the output of the `Mempurge` function now takes the correct value (the log number of the first memtable being mempurged). Finally, the `VersionEdit` object of the flush class now takes the maximum `next_log_number` of the stack of memtables being flushed, which doesn't change anything when Mempurge is `off` but becomes necessary when Mempurge is `on`.
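      The last point can be sketched with a hypothetical helper (not the actual patch): the flush's VersionEdit records the maximum `next_log_number` across the memtables being flushed.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical helper: a flush over several memtables records the
// maximum next_log_number among them. With a single memtable this is a
// no-op; with a mempurge output in the stack it is what keeps the
// VersionSet consistent.
uint64_t MaxNextLogNumber(const std::vector<uint64_t>& next_log_numbers) {
  uint64_t max_num = 0;
  for (uint64_t n : next_log_numbers) {
    max_num = std::max(max_num, n);
  }
  return max_num;
}
```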
      
      ### Testing of the solution
      The following command no longer fails:
`$ while rm -rf /dev/shm/single_stress && ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/single_stress --experimental_mempurge_threshold=5.493146827397074 --flush_one_in=10000 --reopen=0 --write_buffer_size=262144 --value_size_mult=33 --max_write_buffer_number=3 -ops_per_thread=10000; do : ; done`
      Additionally, I ran `db_crashtest` (`whitebox` and `blackbox`) for 2.5 hours with MemPurge on and did not observe any crash.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9671
      
      Reviewed By: pdillinger
      
      Differential Revision: D34697424
      
      Pulled By: bjlemaire
      
      fbshipit-source-id: d1ab675b361904351ac81a35c184030e52222874
      7bed6595
12. 10 Mar 2022, 3 commits
13. 09 Mar 2022, 5 commits
    • Y
      Support user-defined timestamps in write-committed txns (#9629) · 3b6dc049
Authored by Yanqin Jin
      Summary:
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9629
      
Pessimistic transactions use pessimistic concurrency control, i.e., locking. Keys are
locked upon the first operation that writes the key or declares the intention of writing it. For example,
`PessimisticTransaction::Put()`, `PessimisticTransaction::Delete()`, and
`PessimisticTransaction::SingleDelete()` will write to or delete a key, while
`PessimisticTransaction::GetForUpdate()` is used by the application to indicate
to RocksDB that the transaction intends to perform a write operation later
in the same transaction.
Pessimistic transactions support two-phase commit (2PC). A transaction can be
`Prepare()`'d and then `Commit()`'ed. The prepare phase is similar to a promise: once
`Prepare()` succeeds, the transaction has acquired the necessary resources to commit.
These resources include locks, persistence of the WAL, etc.
Write-committed is the default pessimistic transaction implementation. In a
RocksDB write-committed transaction, `Prepare()` writes data to the WAL as a prepare
section. `Commit()` writes a commit marker to the WAL and then writes data to the
memtables. While writing to the memtables, different keys in the transaction's write batch
are assigned different sequence numbers in ascending order.
      Until commit/rollback, the transaction holds locks on the keys so that no other transaction
      can write to the same keys. Furthermore, the keys' sequence numbers represent the order
in which they are committed and should be made visible. This makes it convenient for us to
implement support for user-defined timestamps.
Since column families with and without timestamps can co-exist in the same database,
a transaction may or may not involve timestamps. Based on this observation, we add two
optional members to each `PessimisticTransaction`: `read_timestamp_` and
`commit_timestamp_`. If no key in the transaction's write batch has a timestamp, then
setting these two variables has no effect. For the rest of this commit, we discuss
only the cases in which these two variables are meaningful.
      
`read_timestamp_` is used mainly for validation and should be set before the first call to
`GetForUpdate()`; otherwise, the latter will return a non-OK status. `GetForUpdate()` calls
`TryLock()`, which can verify whether another transaction has written the same key between
`read_timestamp_` and this call to `GetForUpdate()`. If another transaction has indeed
written the same key, validation fails, and RocksDB allows this transaction to
refine `read_timestamp_` by increasing it. Note that a transaction can still use `Get()`
with a different timestamp to read, but the result of the read should not be used to
determine data that will be written later.
      
`commit_timestamp_` must be set after writes are finished and before the transaction commits.
This applies to both the 2PC and non-2PC cases. In the 2PC case, it is usually set after the
prepare phase succeeds.
      
We currently require that the commit timestamp be chosen after all keys are locked. This
means we disallow the `TransactionDB`-level APIs when user-defined timestamps are used
by the transaction. Specifically, calling `PessimisticTransactionDB::Put()`,
`PessimisticTransactionDB::Delete()`, `PessimisticTransactionDB::SingleDelete()`,
etc. will return a non-OK status because they specify timestamps before locking the keys.
Users are also prompted to use the `Transaction` APIs when they receive the non-OK status.
      
      Reviewed By: ltamasi
      
      Differential Revision: D31822445
      
      fbshipit-source-id: b82abf8e230216dc89cc519564a588224a88fd43
      3b6dc049
    • H
      Rate-limit automatic WAL flush after each user write (#9607) · ca0ef54f
Authored by Hui Xiao
      Summary:
      **Context:**
WAL flush is currently not rate-limited by `Options::rate_limiter`. This PR provides rate-limiting for the automatic WAL flush, the one that happens automatically after each user write operation (i.e., `Options::manual_wal_flush == false`), by adding `WriteOptions::rate_limiter_options`.
      
Note that we are NOT rate-limiting WAL flushes that do NOT happen automatically after each user write, such as `Options::manual_wal_flush == true` combined with manual `FlushWAL()` (rate-limiting multiple WAL flushes), for the benefits of:
- being consistent with [ReadOptions::rate_limiter_priority](https://github.com/facebook/rocksdb/blob/7.0.fb/include/rocksdb/options.h#L515)
- being able to turn off rate-limiting for some WAL flushes but not all (e.g., turning it off for the WAL flush of a critical user write like a service's heartbeat)
      
`WriteOptions::rate_limiter_options` only accepts `Env::IO_USER` and `Env::IO_TOTAL` currently due to an implementation constraint.
- The constraint is that we currently queue parallel writes (including WAL writes) based on a FIFO policy, which does not factor rate-limiter priority into this layer's scheduling. If we allowed lower priorities such as `Env::IO_HIGH/MID/LOW`, and writes specified with lower priorities occurred before ones specified with higher priorities (even just by a tiny bit in arrival time), the former would block the latter, leading to a "priority inversion" issue that contradicts what we promise for rate-limiting priority. Therefore we only allow `Env::IO_USER` and `Env::IO_TOTAL` for now, until that scheduling is improved.
      
      A pre-requisite to this feature is to support operation-level rate limiting in `WritableFileWriter`, which is also included in this PR.
      
      **Summary:**
- Renamed test suite `DBRateLimiterTest` to `DBRateLimiterOnReadTest` to make room for a new test suite
- Accepted `rate_limiter_priority` in `WritableFileWriter`'s private and public write functions
- Passed `WriteOptions::rate_limiter_options` to `WritableFileWriter` in the path of automatic WAL flush.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9607
      
      Test Plan:
      - Added new unit test to verify existing flush/compaction rate-limiting does not break, since `DBTest, RateLimitingTest` is disabled and current db-level rate-limiting tests focus on read only (e.g, `db_rate_limiter_test`, `DBTest2, RateLimitedCompactionReads`).
      - Added new unit test `DBRateLimiterOnWriteWALTest, AutoWalFlush`
      - `strace -ftt -e trace=write ./db_bench -benchmarks=fillseq -db=/dev/shm/testdb -rate_limit_auto_wal_flush=1 -rate_limiter_bytes_per_sec=15 -rate_limiter_refill_period_us=1000000 -write_buffer_size=100000000 -disable_auto_compactions=1 -num=100`
   - verified that WAL flushes (i.e., the _write_ system call) were chunked into 15 bytes and that each _write_ was roughly 1 second apart
   - verified that the chunking disappeared when `-rate_limit_auto_wal_flush=0`
      - crash test: `python3 tools/db_crashtest.py blackbox --disable_wal=0  --rate_limit_auto_wal_flush=1 --rate_limiter_bytes_per_sec=10485760 --interval=10` killed as normal
      
      **Benchmarked on flush/compaction to ensure no performance regression:**
      - compaction with rate-limiting  (see table 1, avg over 1280-run):  pre-change: **915635 micros/op**; post-change:
         **907350 micros/op (improved by 0.106%)**
      ```
      #!/bin/bash
      TEST_TMPDIR=/dev/shm/testdb
      START=1
      NUM_DATA_ENTRY=8
      N=10
      
      rm -f compact_bmk_output.txt compact_bmk_output_2.txt dont_care_output.txt
      for i in $(eval echo "{$START..$NUM_DATA_ENTRY}")
      do
          NUM_RUN=$(($N*(2**($i-1))))
          for j in $(eval echo "{$START..$NUM_RUN}")
          do
             ./db_bench --benchmarks=fillrandom -db=$TEST_TMPDIR -disable_auto_compactions=1 -write_buffer_size=6710886 > dont_care_output.txt && ./db_bench --benchmarks=compact -use_existing_db=1 -db=$TEST_TMPDIR -level0_file_num_compaction_trigger=1 -rate_limiter_bytes_per_sec=100000000 | egrep 'compact'
          done > compact_bmk_output.txt && awk -v NUM_RUN=$NUM_RUN '{sum+=$3;sum_sqrt+=$3^2}END{print sum/NUM_RUN, sqrt(sum_sqrt/NUM_RUN-(sum/NUM_RUN)^2)}' compact_bmk_output.txt >> compact_bmk_output_2.txt
      done
      ```
      - compaction w/o rate-limiting  (see table 2, avg over 640-run):  pre-change: **822197 micros/op**; post-change: **823148 micros/op (regressed by 0.12%)**
      ```
      Same as above script, except that -rate_limiter_bytes_per_sec=0
      ```
      - flush with rate-limiting (see table 3, avg over 320-run, run on the [patch](https://github.com/hx235/rocksdb/commit/ee5c6023a9f6533fab9afdc681568daa21da4953) to augment current db_bench ): pre-change: **745752 micros/op**; post-change: **745331 micros/op (regressed by 0.06 %)**
      ```
       #!/bin/bash
      TEST_TMPDIR=/dev/shm/testdb
      START=1
      NUM_DATA_ENTRY=8
      N=10
      
      rm -f flush_bmk_output.txt flush_bmk_output_2.txt
      
      for i in $(eval echo "{$START..$NUM_DATA_ENTRY}")
      do
          NUM_RUN=$(($N*(2**($i-1))))
          for j in $(eval echo "{$START..$NUM_RUN}")
          do
             ./db_bench -db=$TEST_TMPDIR -write_buffer_size=1048576000 -num=1000000 -rate_limiter_bytes_per_sec=100000000 -benchmarks=fillseq,flush | egrep 'flush'
          done > flush_bmk_output.txt && awk -v NUM_RUN=$NUM_RUN '{sum+=$3;sum_sqrt+=$3^2}END{print sum/NUM_RUN, sqrt(sum_sqrt/NUM_RUN-(sum/NUM_RUN)^2)}' flush_bmk_output.txt >> flush_bmk_output_2.txt
      done
      
      ```
- flush w/o rate-limiting (see table 4, avg over 320-run, run on the [patch](https://github.com/hx235/rocksdb/commit/ee5c6023a9f6533fab9afdc681568daa21da4953) to augment current db_bench): pre-change: **487512 micros/op**, post-change: **485856 micros/op (improved by 0.34%)**
      ```
      Same as above script, except that -rate_limiter_bytes_per_sec=0
      ```
      
      | table 1 - compact with rate-limiting|
      #-run | (pre-change) avg micros/op | std micros/op | (post-change)  avg micros/op | std micros/op | change in avg micros/op  (%)
      -- | -- | -- | -- | -- | --
      10 | 896978 | 16046.9 | 901242 | 15670.9 | 0.475373978
      20 | 893718 | 15813 | 886505 | 17544.7 | -0.8070778478
      40 | 900426 | 23882.2 | 894958 | 15104.5 | -0.6072681153
      80 | 906635 | 21761.5 | 903332 | 23948.3 | -0.3643141948
      160 | 898632 | 21098.9 | 907583 | 21145 | 0.9960695813
      3.20E+02 | 905252 | 22785.5 | 908106 | 25325.5 | 0.3152713278
      6.40E+02 | 905213 | 23598.6 | 906741 | 21370.5 | 0.1688000504
      **1.28E+03** | **908316** | **23533.1** | **907350** | **24626.8** | **-0.1063506533**
      average over #-run | 901896.25 | 21064.9625 | 901977.125 | 20592.025 | 0.008967217682
      
      | table 2 - compact w/o rate-limiting|
      #-run | (pre-change) avg micros/op | std micros/op | (post-change)  avg micros/op | std micros/op | change in avg micros/op  (%)
      -- | -- | -- | -- | -- | --
      10 | 811211 | 26996.7 | 807586 | 28456.4 | -0.4468627768
      20 | 815465 | 14803.7 | 814608 | 28719.7 | -0.105093413
      40 | 809203 | 26187.1 | 797835 | 25492.1 | -1.404839082
      80 | 822088 | 28765.3 | 822192 | 32840.4 | 0.01265071379
      160 | 821719 | 36344.7 | 821664 | 29544.9 | -0.006693285661
      3.20E+02 | 820921 | 27756.4 | 821403 | 28347.7 | 0.05871454135
      **6.40E+02** | **822197** | **28960.6** | **823148** | **30055.1** | **0.1156657103**
      average over #-run | 8.18E+05 | 2.71E+04 | 8.15E+05 | 2.91E+04 |  -0.25
      
      | table 3 - flush with rate-limiting|
      #-run | (pre-change) avg micros/op | std micros/op | (post-change)  avg micros/op | std micros/op | change in avg micros/op  (%)
      -- | -- | -- | -- | -- | --
      10 | 741721 | 11770.8 | 740345 | 5949.76 | -0.1855144994
      20 | 735169 | 3561.83 | 743199 | 9755.77 | 1.09226586
      40 | 743368 | 8891.03 | 742102 | 8683.22 | -0.1703059588
      80 | 742129 | 8148.51 | 743417 | 9631.58| 0.1735547324
      160 | 749045 | 9757.21 | 746256 | 9191.86 | -0.3723407806
      **3.20E+02** | **745752** | **9819.65** | **745331** | **9840.62** | **-0.0564530836**
      6.40E+02 | 749006 | 11080.5 | 748173 | 10578.7 | -0.1112140624
      average over #-run | 743741.4286 | 9004.218571 | 744117.5714 | 9090.215714 | 0.05057441238
      
      | table 4 - flush w/o rate-limiting|
      #-run | (pre-change) avg micros/op | std micros/op | (post-change)  avg micros/op | std micros/op | change in avg micros/op (%)
      -- | -- | -- | -- | -- | --
      10 | 477283 | 24719.6 | 473864 | 12379 | -0.7163464863
      20 | 486743 | 20175.2 | 502296 | 23931.3 | 3.195320734
      40 | 482846 | 15309.2 | 489820 | 22259.5 | 1.444352858
      80 | 491490 | 21883.1 | 490071 | 23085.7 | -0.2887139108
      160 | 493347 | 28074.3 | 483609 | 21211.7 | -1.973864238
      **3.20E+02** | **487512** | **21401.5** | **485856** | **22195.2** | **-0.3396839462**
      6.40E+02 | 490307 | 25418.6 | 485435 | 22405.2 | -0.9936631539
      average over #-run | 4.87E+05 | 2.24E+04 | 4.87E+05 | 2.11E+04 | 0.00E+00
      
      Reviewed By: ajkr
      
      Differential Revision: D34442441
      
      Pulled By: hx235
      
      fbshipit-source-id: 4790f13e1e5c0a95ae1d1cc93ffcf69dc6e78bdd
      ca0ef54f
    • E
Rename mutable_cf_options to signify explicit copy (#9666) · 27d6ef8e
Authored by Ezgi Çiçek
      Summary:
Signify the explicit copy with a comment and a better name for the variable `mutable_cf_options`
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9666
      
      Reviewed By: riversand963
      
      Differential Revision: D34680934
      
      Pulled By: ezgicicek
      
      fbshipit-source-id: b64ef18725fe523835d14ceb4b29bcdfe493f8ed
      27d6ef8e
    • G
      remove redundant assignment code for member state (#9665) · c9674364
Authored by GuKaifeng
      Summary:
Remove redundant assignment code for member `stats` in the constructor of `ImmutableDBOptions`.
There were two identical and redundant statements `stats = statistics.get();` on lines 740 and 748 of the code.
This commit removes the one on line 740.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9665
      
      Reviewed By: ajkr
      
      Differential Revision: D34686649
      
      Pulled By: riversand963
      
      fbshipit-source-id: 8f246ece382b6845528f4e2c843ce09bb66b2b0f
      c9674364
    • P
      Avoid .trash handling race in db_stress Checkpoint (#9673) · 4a9ae4f7
Authored by Peter Dillinger
      Summary:
The shared SstFileManager in db_stress can create background
work that races with TestCheckpoint such that DestroyDir fails because
of a file rename while it is running. This is analogous to the change
already made for TestBackupRestore.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9673
      
      Test Plan:
      make blackbox_crash_test for a while with
      checkpoint_one_in=100
      
      Reviewed By: ajkr
      
      Differential Revision: D34702215
      
      Pulled By: pdillinger
      
      fbshipit-source-id: ac3e166efa28cba6c6f4b9b391e799394603ebfd
      4a9ae4f7
14. 08 Mar 2022, 1 commit