1. 12 May 2022, 2 commits
  2. 11 May 2022, 3 commits
    • Support single delete in ldb (#9469) · 26768edb
      Committed by yaphet
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/9469
      
      Reviewed By: riversand963
      
      Differential Revision: D33953484
      
      fbshipit-source-id: f4e84a2d9865957d744c7e84ff02ffbb0a62b0a8
    • Avoid some warnings-as-error in CircleCI+unity+AVX512F (#9978) · 0d1613aa
      Committed by Peter Dillinger
      Summary:
      Example failure when compiling on sufficiently new hardware with sufficiently new compiler built-in headers:
      
      ```
      In file included from /usr/local/lib/gcc/x86_64-linux-gnu/12.1.0/include/immintrin.h:49,
                       from ./util/bloom_impl.h:21,
                       from table/block_based/filter_policy.cc:31,
                       from unity.cc:167:
      In function '__m512i _mm512_shuffle_epi32(__m512i, _MM_PERM_ENUM)',
          inlined from 'void XXH3_accumulate_512_avx512(void*, const void*, const void*)' at util/xxhash.h:3605:58,
          inlined from 'void XXH3_accumulate(xxh_u64*, const xxh_u8*, const xxh_u8*, size_t, XXH3_f_accumulate_512)' at util/xxhash.h:4229:17,
          inlined from 'void XXH3_hashLong_internal_loop(xxh_u64*, const xxh_u8*, size_t, const xxh_u8*, size_t, XXH3_f_accumulate_512, XXH3_f_scrambleAcc)' at util/xxhash.h:4251:24,
          inlined from 'XXH128_hash_t XXH3_hashLong_128b_internal(const void*, size_t, const xxh_u8*, size_t, XXH3_f_accumulate_512, XXH3_f_scrambleAcc)' at util/xxhash.h:5065:32,
          inlined from 'XXH128_hash_t XXH3_hashLong_128b_withSecret(const void*, size_t, XXH64_hash_t, const void*, size_t)' at util/xxhash.h:5104:39:
      /usr/local/lib/gcc/x86_64-linux-gnu/12.1.0/include/avx512fintrin.h:4459:50: error: '__Y' may be used uninitialized [-Werror=maybe-uninitialized]
      ```
      
      https://app.circleci.com/pipelines/github/facebook/rocksdb/13295/workflows/1695fb5c-40c1-423b-96b4-45107dc3012d/jobs/360416
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9978
      
      Test Plan:
      I was able to re-run in CircleCI with ssh, see the failure, ssh in, and
      verify that adding -fno-avx512f fixed the failure. Will watch build-linux-unity-and-headers.
      
      Reviewed By: riversand963
      
      Differential Revision: D36296028
      
      Pulled By: pdillinger
      
      fbshipit-source-id: ba5955cf2ac730f57d1d18c2f517e92f34be77a3
    • Increase soft open file limit for mini-crashtest on Linux (#9972) · e78451f3
      Committed by Peter Dillinger
      Summary:
      CircleCI was using a soft open file limit of 1024 which would
      frequently be exceeded during test runs. Now using
      ```
      ulimit -S -n `ulimit -H -n`
      ```
      to set the soft limit up to the hard limit (524288 in my test). I've also
      applied the same idiom to the applicable existing macOS configurations to
      reduce hard-coded numbers.
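
      The same idiom can also be applied programmatically. A minimal sketch using the POSIX getrlimit/setrlimit API (this is illustrative only; the PR itself changes shell configuration, not C++ code):

      ```cpp
      #include <sys/resource.h>

      #include <cassert>

      // Raise the process's soft open-file limit to the hard limit,
      // mirroring the shell idiom `ulimit -S -n $(ulimit -H -n)`.
      bool RaiseSoftNofileToHard() {
        struct rlimit lim;
        if (getrlimit(RLIMIT_NOFILE, &lim) != 0) return false;
        lim.rlim_cur = lim.rlim_max;  // soft limit raised to the hard limit
        if (setrlimit(RLIMIT_NOFILE, &lim) != 0) return false;
        // Confirm the new soft limit took effect.
        return getrlimit(RLIMIT_NOFILE, &lim) == 0 && lim.rlim_cur == lim.rlim_max;
      }

      int main() {
        assert(RaiseSoftNofileToHard());
        return 0;
      }
      ```

      Raising the soft limit up to (but not beyond) the hard limit requires no special privileges on Linux, which is why the test scripts can do it unconditionally.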
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9972
      
      Test Plan: CI
      
      Reviewed By: riversand963
      
      Differential Revision: D36262943
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 86320cdf9b68a97fdb73531a7b4a59b4c2d2f73f
  3. 10 May 2022, 6 commits
    • Add microbenchmarks for `DB::GetMergeOperands()` (#9971) · 7b7a37c0
      Committed by Andrew Kryczka
      Summary:
      The new microbenchmarks, DBGetMergeOperandsInMemtable and DBGetMergeOperandsInSstFile, correspond to the two different LSMs tested: all data in one memtable and all data in one SST file, respectively. Both cases are parameterized by thread count (1 or 8) and merge operands per key (1, 32, or 1024). The SST file case is additionally parameterized by whether data is in block cache or mmap'd memory.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9971
      
      Test Plan:
      ```
      $ TEST_TMPDIR=/dev/shm/db_basic_bench/ ./db_basic_bench --benchmark_filter=DBGetMergeOperands
      The number of inputs is very large. DBGet will be repeated at least 192 times.
      The number of inputs is very large. DBGet will be repeated at least 192 times.
      2022-05-09T13:15:40-07:00
      Running ./db_basic_bench
      Run on (36 X 2570.91 MHz CPU s)
      CPU Caches:
        L1 Data 32 KiB (x18)
        L1 Instruction 32 KiB (x18)
        L2 Unified 1024 KiB (x18)
        L3 Unified 25344 KiB (x1)
      Load Average: 4.50, 4.33, 4.37
      ----------------------------------------------------------------------------------------------------------------------------
      Benchmark                                                                  Time             CPU   Iterations UserCounters...
      ----------------------------------------------------------------------------------------------------------------------------
      DBGetMergeOperandsInMemtable/entries_per_key:1/threads:1                 846 ns          846 ns       849893 db_size=0
      DBGetMergeOperandsInMemtable/entries_per_key:32/threads:1               2436 ns         2436 ns       305779 db_size=0
      DBGetMergeOperandsInMemtable/entries_per_key:1024/threads:1            77226 ns        77224 ns         8152 db_size=0
      DBGetMergeOperandsInMemtable/entries_per_key:1/threads:8                 116 ns          929 ns       779368 db_size=0
      DBGetMergeOperandsInMemtable/entries_per_key:32/threads:8                330 ns         2644 ns       280824 db_size=0
      DBGetMergeOperandsInMemtable/entries_per_key:1024/threads:8            12466 ns        99718 ns         7200 db_size=0
      DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:0/threads:1          1640 ns         1640 ns       461262 db_size=21.7826M
      DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:1/threads:1          1693 ns         1693 ns       439936 db_size=21.7826M
      DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:0/threads:1         3999 ns         3999 ns       172881 db_size=19.6981M
      DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:1/threads:1         5544 ns         5543 ns       135657 db_size=19.6981M
      DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:0/threads:1      78767 ns        78761 ns         8395 db_size=19.6389M
      DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:1/threads:1     157242 ns       157238 ns         4495 db_size=19.6389M
      DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:0/threads:8           231 ns         1848 ns       347768 db_size=21.7826M
      DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:1/threads:8           214 ns         1715 ns       393312 db_size=21.7826M
      DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:0/threads:8          596 ns         4767 ns       142088 db_size=19.6981M
      DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:1/threads:8          720 ns         5757 ns       118200 db_size=19.6981M
      DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:0/threads:8      11613 ns        92460 ns         7344 db_size=19.6389M
      DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:1/threads:8      19989 ns       159908 ns         4440 db_size=19.6389M
      ```
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D36258861
      
      Pulled By: ajkr
      
      fbshipit-source-id: 04b733e1cc3a4a70ed9baa894c50fdf96c0d6064
    • Fix format_compatible blowing away its TEST_TMPDIR (#9970) · c5c58708
      Committed by Peter Dillinger
      Summary:
      https://github.com/facebook/rocksdb/issues/9961 broke the format_compatible check because `make clean`
      references TEST_TMPDIR. The Makefile behavior seems reasonable to me,
      so here's a fix in check_format_compatible.sh.
      
      Apparently I also included removing a redundant part of our CircleCI config.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9970
      
      Test Plan: manual run: SHORT_TEST=1 ./tools/check_format_compatible.sh
      
      Reviewed By: riversand963
      
      Differential Revision: D36258172
      
      Pulled By: pdillinger
      
      fbshipit-source-id: d46507f04614e888b414ff23b88d040ae2b5c294
    • Fix conversion issues in MutableOptions (#9194) · 4527bb2f
      Committed by Davide Angelocola
      Summary:
      Removing unnecessary checks around conversion from int/long to double, since the conversion can never overflow the double's range (see https://docs.oracle.com/javase/specs/jls/se9/html/jls-5.html#jls-5.1.2).
      
      For example, `value > Double.MAX_VALUE` is always false when value is a long or an int.
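
      The same observation holds in C++, where it can be checked directly (a sketch, not code from this Java-side PR): converting even the largest 64-bit integer to double may lose precision, but it can never exceed the double's maximum, so such range checks are dead code.

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <limits>

      // A widening integer-to-double conversion may lose precision,
      // but it can never overflow the double's range.
      bool ExceedsDoubleMax(int64_t value) {
        return static_cast<double>(value) > std::numeric_limits<double>::max();
      }

      int main() {
        // Always false, even for the extreme int64_t values.
        assert(!ExceedsDoubleMax(std::numeric_limits<int64_t>::max()));
        assert(!ExceedsDoubleMax(std::numeric_limits<int64_t>::min()));
        return 0;
      }
      ```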
      
      Can you please have a look adamretter? Also fixed some other minor issues (do you prefer a separate PR?)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9194
      
      Reviewed By: ajkr
      
      Differential Revision: D36221694
      
      fbshipit-source-id: bf327c07386560b87ddc0c98039e8d6e8f2f1e82
    • Improve the precision of row entry charge in row_cache (#9337) · 89571b30
      Committed by Wang Yuan
      Summary:
      - For the entry charge, we should only count the value size rather than including the key size in LRUCache
      - The string's capacity reflects the memory usage more precisely than its size
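
      The gist of the charging change can be sketched as follows. The function names here are illustrative only, not RocksDB's actual row-cache code:

      ```cpp
      #include <cassert>
      #include <string>

      // Old behavior (illustrative): charge the cache for key size + value size.
      size_t OldCharge(const std::string& key, const std::string& value) {
        return key.size() + value.size();
      }

      // New behavior (illustrative): charge only for the value, and use
      // capacity() because it reflects the memory actually allocated.
      size_t NewCharge(const std::string& /*key*/, const std::string& value) {
        return value.capacity();
      }

      int main() {
        std::string key(16, 'k');
        std::string value(100, 'v');
        value.reserve(128);  // the allocation may exceed size()
        assert(OldCharge(key, value) == 116);        // includes the key
        assert(NewCharge(key, value) >= value.size());  // tracks allocation
        return 0;
      }
      ```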
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9337
      
      Reviewed By: ajkr
      
      Differential Revision: D36219855
      
      fbshipit-source-id: 393e48ca419d230dc552ae62dd0eb1cc9f45961d
    • Improve memkind library detection (#9134) · 39b6c579
      Committed by Luca Giacchino
      Summary:
      Improve memkind library detection in build_detect_platform:
      
      - The current position of -lmemkind does not work with all versions of gcc
      - LDFLAGS allows specifying non-standard library path through EXTRA_LDFLAGS
      
      After the change, the options match TBB detection.
      This is a follow-up to https://github.com/facebook/rocksdb/issues/6214.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9134
      
      Reviewed By: ajkr, mrambacher
      
      Differential Revision: D32192028
      
      fbshipit-source-id: 115fafe8d93f1fe6aaf80afb32b2cb67aad074c7
    • arena.h: fix Arena::IsInInlineBlock() (#9317) · 9f7968b2
      Committed by leipeng
      Summary:
      When I enable hugepages on my box, a unit test fails; this PR fixes the issue:
      
      [  FAILED  ] ArenaTest.ApproximateMemoryUsage (1 ms)
      
      memory/arena_test.cc:127: Failure
      Value of: arena.IsInInlineBlock()
        Actual: true
      Expected: false
      arena.IsInInlineBlock() = 1
      memory/arena_test.cc:127: Failure
      Value of: arena.IsInInlineBlock()
        Actual: true
      Expected: false
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9317
      
      Reviewed By: ajkr
      
      Differential Revision: D36219813
      
      fbshipit-source-id: 08d040d9f37ec4c16987e4150c2db876180d163d
  4. 07 May 2022, 7 commits
  5. 06 May 2022, 5 commits
    • Fix various spelling errors still found in code (#9653) · b7aaa987
      Committed by Otto Kekäläinen
      Summary:
      dont -> don't
      refered -> referred
      
      This is a re-run of PR#7785 and acc9679c since these typos keep coming back.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9653
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D34879593
      
      fbshipit-source-id: d7631fb779ea0129beae92abfb838038e60790f8
    • Enable unsynced data loss in crash test (#9947) · a62506ae
      Committed by Andrew Kryczka
      Summary:
      `db_stress` already tracks expected state history to verify prefix-recoverability when `sync_fault_injection` is enabled. This PR enables `sync_fault_injection` in `db_crashtest.py`.
      
      Previously enabling `sync_fault_injection` would cause whole unsynced files to be dropped. This PR adds a more interesting case of losing only the tail of unsynced data by implementing `TestFSWritableFile::RangeSync()` and enabling `{wal_,}bytes_per_sync`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9947
      
      Test Plan:
      - regular blackbox, blackbox --simple
      - various commands to stress this new case, such as `TEST_TMPDIR=/dev/shm python3 tools/db_crashtest.py blackbox --max_key=100000 --write_buffer_size=2097152 --avoid_flush_during_recovery=1 --disable_wal=0 --interval=10 --db_write_buffer_size=0 --sync_fault_injection=1 --wal_compression=none --delpercent=0 --delrangepercent=0 --prefixpercent=0 --iterpercent=0 --writepercent=100 --readpercent=0 --wal_bytes_per_sync=131072 --duration=36000 --sync=0 --open_write_fault_one_in=16`
      
      Reviewed By: riversand963
      
      Differential Revision: D36152775
      
      Pulled By: ajkr
      
      fbshipit-source-id: 44b68a7fad0a4cf74af9fe1f39be01baab8141d8
    • Use std::numeric_limits<> (#9954) · 49628c9a
      Committed by sdong
      Summary:
      We still don't fully use std::numeric_limits, relying on a macro instead, mainly to support VS 2013. Since we now only support VS 2017 and up, that is no longer a problem. The code comment claims that MinGW still needs the macro, but we don't have CI running MinGW, so it's hard to validate. Since we now require C++17, it's hard to imagine MinGW could still build RocksDB yet not support std::numeric_limits<>.
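
      The direction of the change can be sketched like this (the macro name below is illustrative, not the exact one the PR removes):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <limits>

      // Old style: a portability macro, once needed for old compilers.
      #define ROCKSDB_MAX_UINT64_ILLUSTRATIVE 0xFFFFFFFFFFFFFFFFULL

      int main() {
        // New style: std::numeric_limits, available wherever C++17 is.
        constexpr uint64_t kMax = std::numeric_limits<uint64_t>::max();
        assert(kMax == ROCKSDB_MAX_UINT64_ILLUSTRATIVE);
        assert(kMax == UINT64_MAX);
        return 0;
      }
      ```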
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9954
      
      Test Plan: See CI Runs.
      
      Reviewed By: riversand963
      
      Differential Revision: D36173954
      
      fbshipit-source-id: a35a73af17cdcae20e258cdef57fcf29a50b49e0
    • platform010 gcc (#9946) · 46f8889b
      Committed by sdong
      Summary:
      Make platform010 gcc build work.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9946
      
      Test Plan:
      ROCKSDB_FBCODE_BUILD_WITH_PLATFORM010=1 make release -j
      ROCKSDB_FBCODE_BUILD_WITH_PLATFORM010=1 make all check -j
      
      Reviewed By: pdillinger, mdcallag
      
      Differential Revision: D36152684
      
      fbshipit-source-id: ca7b0916c51501a72bb15ad33a85e8c5cac5b505
    • Generate pkg-config file via CMake (#9945) · e62c23cc
      Committed by Trynity Mirell
      Summary:
      Fixes https://github.com/facebook/rocksdb/issues/7934
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9945
      
      Test Plan:
      Built via Homebrew pointing to my fork/branch:
      
      ```
        ~/src/github.com/facebook/fbthrift on   main ❯ cat ~/.homebrew/opt/rocksdb/lib/pkgconfig/rocksdb.pc                                                                                                                                                     took  1h 17m 48s at  04:24:54 pm
      prefix="/Users/trynity/.homebrew/Cellar/rocksdb/HEAD-968e4dd"
      exec_prefix="${prefix}"
      libdir="${prefix}/lib"
      includedir="${prefix}/include"
      
      Name: rocksdb
      Description: An embeddable persistent key-value store for fast storage
      URL: https://rocksdb.org/
      Version: 7.3.0
      Cflags: -I"${includedir}"
      Libs: -L"${libdir}" -lrocksdb
      ```
      
      Reviewed By: riversand963
      
      Differential Revision: D36161635
      
      Pulled By: trynity
      
      fbshipit-source-id: 0f1a9c30e43797ee65e6696896e06fde0658456e
  6. 05 May 2022, 5 commits
    • Rename kRemoveWithSingleDelete to kPurge (#9951) · 9d634dd5
      Committed by Yanqin Jin
      Summary:
      PR 9929 adds a new CompactionFilter::Decision, i.e.
      kRemoveWithSingleDelete, so that a CompactionFilter can indicate to
      the CompactionIterator that a PUT can only be removed with an SD. However, how
      the CompactionIterator handles such a key is an implementation detail that
      should not be implied by the public API. In fact,
      such a PUT can just be dropped; this is an optimization we will apply in the near future.
      
      Discussion thread: https://github.com/facebook/rocksdb/pull/9929#discussion_r863198964
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9951
      
      Test Plan: make check
      
      Reviewed By: ajkr
      
      Differential Revision: D36156590
      
      Pulled By: riversand963
      
      fbshipit-source-id: 7b7d01f47bba4cad7d9cca6ca52984f27f88b372
    • Printing IO Error in DumpDBFileSummary (#9940) · 68ac507f
      Committed by sdong
      Summary:
      Right now in DumpDBFileSummary, IO errors aren't printed out, but they are sometimes helpful. Print them out instead.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9940
      
      Test Plan: Watch existing tests pass.
      
      Reviewed By: riversand963
      
      Differential Revision: D36113016
      
      fbshipit-source-id: 13002080fa4dc76589e2c1c5a1079df8a3c9391c
    • Print elapsed time and number of operations completed (#9886) · bf68d1c9
      Committed by Mark Callaghan
      Summary:
      This is inspired by debugging a regression test that runs for ~0.05 seconds; the short
      running time makes it prone to variance. While db_bench ran for ~60 seconds, 59.95 seconds
      were spent opening 128 databases (and doing recovery), so it was harder to notice that the
      benchmark itself only ran for 0.05 seconds.
      
      Normally I add output to the end of the line to make life easier for existing tools that parse it,
      but in this case the output near the end of the line has two optional parts, and one of the
      optional parts adds an extra newline.
      
      This is for https://github.com/facebook/rocksdb/issues/9856
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9886
      
      Test Plan:
      ./db_bench --benchmarks=overwrite,readrandom --num=1000000 --threads=4
      
      old output:
       DB path: [/tmp/rocksdbtest-2260/dbbench]
       overwrite    :      14.108 micros/op 283338 ops/sec;   31.3 MB/s
       DB path: [/tmp/rocksdbtest-2260/dbbench]
       readrandom   :       7.994 micros/op 496788 ops/sec;   55.0 MB/s (1000000 of 1000000 found)
      
      new output:
       DB path: [/tmp/rocksdbtest-2260/dbbench]
       overwrite    :      14.117 micros/op 282862 ops/sec 14.141 seconds 4000000 operations;   31.3 MB/s
       DB path: [/tmp/rocksdbtest-2260/dbbench]
       readrandom   :       8.649 micros/op 458475 ops/sec 8.725 seconds 4000000 operations;   49.8 MB/s (981548 of 1000000 found)
      
      Reviewed By: ajkr
      
      Differential Revision: D36102269
      
      Pulled By: mdcallag
      
      fbshipit-source-id: 5cd8a9e11f5cbe2a46809571afd83335b6b0caa0
    • do not call DeleteFile for not-created sst files (#9920) · 95663ff7
      Committed by jsteemann
      Summary:
      When a memtable is flushed and the flush would lead to a 0 byte .sst
      file being created, RocksDB does not write out the empty .sst file to
      disk.
      However, it still calls Env::DeleteFile() on the file as part of some
      cleanup procedure at the end of BuildTable().
      Because the to-be-deleted file does not exist, this requires
      implementors of the DeleteFile() API to check whether the file exists in
      their own code, or otherwise risk running into PathNotFound errors when
      DeleteFile is invoked on non-existent files.
      This PR fixes the situation so that when no .sst file is created,
      DeleteFile will not be called either.
      TableFileCreationStarted() will still be called as before.
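
      The shape of the fix can be sketched as a guard around the cleanup step. This is a hypothetical simplification, not the actual BuildTable code:

      ```cpp
      #include <cassert>
      #include <string>
      #include <vector>

      // Hypothetical stand-in for Env: records which files it was asked to delete.
      struct FakeEnv {
        std::vector<std::string> deleted;
        void DeleteFile(const std::string& f) { deleted.push_back(f); }
      };

      // Sketch of the cleanup at the end of a BuildTable-like function:
      // only delete the output file if it was actually created.
      void CleanupOutput(FakeEnv* env, const std::string& fname,
                         bool file_was_created) {
        if (file_was_created) {
          env->DeleteFile(fname);
        }
        // If no .sst file was written, skip DeleteFile entirely, so Env
        // implementors never see deletes for non-existent files.
      }

      int main() {
        FakeEnv env;
        CleanupOutput(&env, "000123.sst", /*file_was_created=*/false);
        assert(env.deleted.empty());  // empty flush: no DeleteFile call
        CleanupOutput(&env, "000124.sst", /*file_was_created=*/true);
        assert(env.deleted.size() == 1);
        return 0;
      }
      ```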
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9920
      
      Reviewed By: ajkr
      
      Differential Revision: D36107102
      
      Pulled By: riversand963
      
      fbshipit-source-id: 15881ba3fa3192dd448f906280a1cfc7a68a114a
    • Fix a comment in RateLimiter::RequestToken (#9933) · de537dca
      Committed by Hui Xiao
      Summary:
      **Context/Summary:**
      - As titled
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9933
      
      Test Plan: - No code change
      
      Reviewed By: ajkr
      
      Differential Revision: D36086544
      
      Pulled By: hx235
      
      fbshipit-source-id: 2bdd19f67e45df1e3af4121b0c1a5e866a57826d
  7. 04 May 2022, 7 commits
    • Default `try_load_options` to true when DB is specified (#9937) · 270179bb
      Committed by Jay Zhuang
      Summary:
      If the DB path is specified, the user would expect ldb to load the
      options from that path, but it does not:
      ```
      $ ldb list_live_files_metadata --db=`pwd`
      ```
      Default `try_load_options` to true in that case. The user can still
      disable that by:
      ```
      $ ldb list_live_files_metadata --db=`pwd` --try_load_options=false
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9937
      
      Test Plan:
      `ldb list_live_files_metadata --db=`pwd`` works for
      a db generated with a different options.num_levels.
      
      Reviewed By: ajkr
      
      Differential Revision: D36106708
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 2732fdc027a4d172436b2c9b6a9787b56b10c710
    • Reduce comparator objects init cost in BlockIter (#9611) · 8b74cea7
      Committed by Xinyu Zeng
      Summary:
      This PR solves the problem discussed in https://github.com/facebook/rocksdb/issues/7149. By storing a pointer to the InternalKeyComparator as icmp_ in BlockIter, the object size remains the same, and each call to CompareCurrentKey no longer needs to create Comparator objects. One can use icmp_ directly or the user comparator obtained from icmp_.
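
      The gist of the change can be sketched with simplified classes (illustrative only, not the actual BlockIter code): hold a pointer to a long-lived comparator so every comparison reuses the same object instead of constructing a new wrapper per call.

      ```cpp
      #include <cassert>
      #include <string>
      #include <utility>

      // Simplified stand-in for InternalKeyComparator.
      struct InternalKeyComparator {
        int Compare(const std::string& a, const std::string& b) const {
          return a.compare(b);
        }
      };

      // Sketch of the fixed iterator: a raw pointer member keeps the
      // object size the same while avoiding per-call comparator creation.
      class BlockIterSketch {
       public:
        explicit BlockIterSketch(const InternalKeyComparator* icmp)
            : icmp_(icmp) {}
        void SetKey(std::string k) { current_key_ = std::move(k); }
        int CompareCurrentKey(const std::string& target) const {
          // Reuses icmp_ directly; no temporary comparator objects.
          return icmp_->Compare(current_key_, target);
        }

       private:
        const InternalKeyComparator* icmp_;  // not owned
        std::string current_key_;
      };

      int main() {
        InternalKeyComparator icmp;
        BlockIterSketch it(&icmp);
        it.SetKey("apple");
        assert(it.CompareCurrentKey("banana") < 0);
        assert(it.CompareCurrentKey("apple") == 0);
        return 0;
      }
      ```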
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9611
      
      Test Plan:
      with https://github.com/facebook/rocksdb/issues/9903,
      
      ```
      $ TEST_TMPDIR=/dev/shm python3.6 ../benchmark/tools/compare.py benchmarks ./db_basic_bench ../rocksdb-pr9611/db_basic_bench --benchmark_filter=DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1 --benchmark_repetitions=50
      ...
      Comparing ./db_basic_bench to ../rocksdb-pr9611/db_basic_bench
      Benchmark                                                                                                                                                               Time             CPU      Time Old      Time New       CPU Old       CPU New
      ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      ...
      DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_pvalue                 0.0001          0.0001      U Test, Repetitions: 50 vs 50
      DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_mean                  -0.0483         -0.0483          3924          3734          3924          3734
      DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_median                -0.0713         -0.0713          3971          3687          3970          3687
      DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_stddev                -0.0342         -0.0344           225           217           225           217
      DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_cv                    +0.0148         +0.0146             0             0             0             0
      OVERALL_GEOMEAN                                                                                                                                                      -0.0483         -0.0483             0             0             0             0
      ```
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D35882037
      
      Pulled By: ajkr
      
      fbshipit-source-id: 9e5337bbad8f1239dff7aa9f6549020d599bfcdf
    • Improve comments to options.allow_mmap_reads (#9936) · b82edffc
      Committed by Siying Dong
      Summary:
      The option confused users: with options.allow_mmap_reads = true, CPU usage is high during checksum verification. Add a comment to explain it.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9936
      
      Reviewed By: anand1976
      
      Differential Revision: D36106529
      
      fbshipit-source-id: 3d723bd686f96a84c694c8b2d91ad28d9ccfd979
    • db_basic_bench fix for DB object cleanup (#9939) · 440c7f63
      Committed by Andrew Kryczka
      Summary:
      Use `unique_ptr<DB>` to make sure the DB object is deleted. Previously it was not, which led to accumulating file descriptors for deleted directories because a `DBImpl::db_dir_` from each test remained alive.
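
      The pattern can be sketched as follows. A stand-in `DB` type is used to keep the example self-contained; the real code wraps `rocksdb::DB`, whose `Open()` likewise hands back a raw pointer:

      ```cpp
      #include <cassert>
      #include <memory>

      // Stand-in for rocksdb::DB, counting live instances so leaks are visible.
      struct DB {
        static int live_count;
        DB() { ++live_count; }
        ~DB() { --live_count; }  // releases directory FDs, etc., in the real DB
      };
      int DB::live_count = 0;

      // Open-style factory returning a raw pointer, as DB::Open does.
      void Open(DB** dbptr) { *dbptr = new DB(); }

      void RunBenchmarkIteration() {
        DB* raw = nullptr;
        Open(&raw);
        // Wrap the raw pointer immediately so the DB object (and with it the
        // directory file descriptor it holds) is released when this iteration
        // ends, instead of accumulating across benchmark runs.
        std::unique_ptr<DB> db(raw);
        // ... benchmark body ...
      }

      int main() {
        RunBenchmarkIteration();
        assert(DB::live_count == 0);  // nothing leaked
        return 0;
      }
      ```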
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9939
      
      Test Plan: run `lsof -p $(pidof db_basic_bench)` while benchmark runs; verify no FDs for deleted directories.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D36108761
      
      Pulled By: ajkr
      
      fbshipit-source-id: cfe02646b038a445af7d5db8989eb1f40d658359
    • Fork and simplify LRUCache for developing enhancements (#9917) · bb87164d
      Committed by Peter Dillinger
      Summary:
      To support a project to prototype and evaluate algorithmic
      enhancements and alternatives to LRUCache, here I have separated out
      LRUCache into internal-only "FastLRUCache" and cut it down to
      essentials, so that details like secondary cache handling and
      priorities do not interfere with prototyping. These can be
      re-integrated later as needed, along with refactoring to minimize code
      duplication (which would slow down prototyping for now).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9917
      
      Test Plan:
      unit tests updated to ensure basic functionality has (likely)
      been preserved
      
      Reviewed By: anand1976
      
      Differential Revision: D35995554
      
      Pulled By: pdillinger
      
      fbshipit-source-id: d67b20b7ada3b5d3bfe56d897a73885894a1d9db
    • Fix db_crashtest.py call inconsistency in crash_test.mk (#9935) · 4b9a1a2f
      Committed by Peter Dillinger
      Summary:
      Some tests were crashing because they were not using the custom DB_STRESS_CMD.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9935
      
      Test Plan: internal tests
      
      Reviewed By: riversand963
      
      Differential Revision: D36104347
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 23f080704a124174203f54ffd85578c2047effe5
    • Make --benchmarks=flush flush the default column family (#9887) · b6ec3328
      Committed by Mark Callaghan
      Summary:
      db_bench --benchmarks=flush wasn't flushing the default column family.
      
      This is for https://github.com/facebook/rocksdb/issues/9880
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9887
      
      Test Plan:
      Confirm that flush works (*.log is empty) when "flush" is added to the benchmark list.
      Confirm that *.log is not empty otherwise.
      
      Repeat for all combinations of: using column families, using multiple databases.
      
      ./db_bench --benchmarks=overwrite --num=10000
      ls -lrt /tmp/rocksdbtest-2260/dbbench/*.log
      -rw-r--r-- 1 me users 1380286 Apr 21 10:47 /tmp/rocksdbtest-2260/dbbench/000004.log
      
      ./db_bench --benchmarks=overwrite,flush --num=10000
      ls -lrt /tmp/rocksdbtest-2260/dbbench/*.log
       -rw-r--r-- 1 me users 0 Apr 21 10:48 /tmp/rocksdbtest-2260/dbbench/000008.log
      
      ./db_bench --benchmarks=overwrite --num=10000 --num_column_families=4
      ls -lrt /tmp/rocksdbtest-2260/dbbench/*.log
        -rw-r--r-- 1 me users 1387823 Apr 21 10:49 /tmp/rocksdbtest-2260/dbbench/000004.log
      
      ./db_bench --benchmarks=overwrite,flush --num=10000 --num_column_families=4
      ls -lrt /tmp/rocksdbtest-2260/dbbench/*.log
      -rw-r--r-- 1 me users 0 Apr 21 10:51 /tmp/rocksdbtest-2260/dbbench/000014.log
      
      ./db_bench --benchmarks=overwrite --num=10000 --num_multi_db=2
      ls -lrt /tmp/rocksdbtest-2260/dbbench/[01]/*.log
       -rw-r--r-- 1 me users 1380838 Apr 21 10:55 /tmp/rocksdbtest-2260/dbbench/0/000004.log
       -rw-r--r-- 1 me users 1379734 Apr 21 10:55 /tmp/rocksdbtest-2260/dbbench/1/000004.log
      
      ./db_bench --benchmarks=overwrite,flush --num=10000 --num_multi_db=2
      ls -lrt /tmp/rocksdbtest-2260/dbbench/[01]/*.log
      -rw-r--r-- 1 me users 0 Apr 21 10:57 /tmp/rocksdbtest-2260/dbbench/0/000013.log
      -rw-r--r-- 1 me users 0 Apr 21 10:57 /tmp/rocksdbtest-2260/dbbench/1/000013.log
      
      ./db_bench --benchmarks=overwrite --num=10000 --num_column_families=4 --num_multi_db=2
      ls -lrt /tmp/rocksdbtest-2260/dbbench/[01]/*.log
      -rw-r--r-- 1 me users 1395108 Apr 21 10:52 /tmp/rocksdbtest-2260/dbbench/1/000004.log
      -rw-r--r-- 1 me users 1380411 Apr 21 10:52 /tmp/rocksdbtest-2260/dbbench/0/000004.log
      
      ./db_bench --benchmarks=overwrite,flush --num=10000 --num_column_families=4 --num_multi_db=2
      ls -lrt /tmp/rocksdbtest-2260/dbbench/[01]/*.log
      -rw-r--r-- 1 me users 0 Apr 21 10:54 /tmp/rocksdbtest-2260/dbbench/0/000022.log
      -rw-r--r-- 1 me users 0 Apr 21 10:54 /tmp/rocksdbtest-2260/dbbench/1/000022.log
      
      Reviewed By: ajkr
      
      Differential Revision: D36026777
      
      Pulled By: mdcallag
      
      fbshipit-source-id: d42d3d7efceea7b9a25bbbc0f04461d2b7301122
  8. 03 May 2022, 4 commits
    • Remove ifdef for try_emplace after upgrading to c++17 (#9932) · 2b5df21e
      Committed by Yanqin Jin
      Summary:
      Test plan
      make check
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9932
      
      Reviewed By: ajkr
      
      Differential Revision: D36085404
      
      Pulled By: riversand963
      
      fbshipit-source-id: 2ece14ca0e2e4c1288339ff79e7e126b76eaf786
    • Allow consecutive SingleDelete() in stress/crash test (#9930) · cda34dd6
      Committed by Andrew Kryczka
      Summary:
      We need to support consecutive SingleDelete(), so this PR adds it to the stress/crash tests.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9930
      
      Test Plan: `python3 tools/db_crashtest.py blackbox --simple --nooverwritepercent=50 --writepercent=90 --delpercent=10 --readpercent=0 --prefixpercent=0 --delrangepercent=0 --iterpercent=0 --max_key=1000000 --duration=3600 --interval=10 --write_buffer_size=1048576 --target_file_size_base=1048576 --max_bytes_for_level_base=4194304 --value_size_mult=33`
      
      Reviewed By: riversand963
      
      Differential Revision: D36081863
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3566cdbaed375b8003126fc298968eb1a854317f
    • Fix a bug of CompactionIterator/CompactionFilter using `Delete` (#9929) · 06394ff4
      Committed by Yanqin Jin
      Summary:
      When the compaction filter determines that a key should be removed, it updates the internal key's type
      to `Delete`. If this internal key is preserved by the current compaction but seen by a later compaction
      together with a `SingleDelete`, it causes the compaction iterator to return Corruption.
      
      To fix the issue, the compaction filter should convey more than just the intent to remove
      a key. Therefore, we add a new `kRemoveWithSingleDelete` to `CompactionFilter::Decision`. On seeing
      `kRemoveWithSingleDelete`, the compaction iterator updates the op type of the internal key to `kTypeSingleDelete`.
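
      The new decision can be sketched with simplified enums and a dispatch function (illustrative only; the real types live in CompactionFilter and the compaction iterator):

      ```cpp
      #include <cassert>

      // Simplified versions of the types involved.
      enum class Decision { kKeep, kRemove, kRemoveWithSingleDelete };
      enum class ValueType { kTypeValue, kTypeDeletion, kTypeSingleDelete };

      // Sketch: the compaction iterator maps the filter's decision to the
      // internal key's new op type.
      ValueType ApplyDecision(Decision d, ValueType current) {
        switch (d) {
          case Decision::kRemove:
            // Plain removal converts a PUT into a Delete.
            return ValueType::kTypeDeletion;
          case Decision::kRemoveWithSingleDelete:
            // Removal that must pair correctly with a later SingleDelete.
            return ValueType::kTypeSingleDelete;
          case Decision::kKeep:
            return current;
        }
        return current;
      }

      int main() {
        assert(ApplyDecision(Decision::kRemove, ValueType::kTypeValue) ==
               ValueType::kTypeDeletion);
        assert(ApplyDecision(Decision::kRemoveWithSingleDelete,
                             ValueType::kTypeValue) ==
               ValueType::kTypeSingleDelete);
        assert(ApplyDecision(Decision::kKeep, ValueType::kTypeValue) ==
               ValueType::kTypeValue);
        return 0;
      }
      ```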
      
      In addition, I updated db_stress_shared_state.[cc|h] so that `no_overwrite_ids_` becomes `const`. It is easier to
      reason about thread-safety if accessed from multiple threads. This information is passed to `PrepareTxnDBOptions()`
      when calling from `Open()` so that we can set up the rollback deletion type callback for transactions.
      
      Finally, disable compaction filter for multiops_txn because the key removal logic of `DbStressCompactionFilter` does
      not quite work with `MultiOpsTxnsStressTest`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9929
      
      Test Plan:
      make check
      make crash_test
      make crash_test_with_txn
      
      Reviewed By: anand1976
      
      Differential Revision: D36069678
      
      Pulled By: riversand963
      
      fbshipit-source-id: cedd2f1ba958af59ad3916f1ba6f424307955f92
    • Specify largest_seqno in VerifyChecksum (#9919) · 37f49083
      Committed by Changyu Bi
      Summary:
      `VerifyChecksum()` does not specify `largest_seqno` when creating a `TableReader`. As a result, the `TableReader` uses the `TableReaderOptions` default value (0) for `largest_seqno`. This causes the following error when the file has a nonzero global seqno in its properties:
      ```
      Corruption: An external sst file with version 2 have global seqno property with value , while largest seqno in the file is 0
      ```
      This PR fixes this by specifying `largest_seqno` in `VerifyChecksumInternal` with `largest_seqno` from the file metadata.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9919
      
      Test Plan: `make check`
      
      Reviewed By: ajkr
      
      Differential Revision: D36028824
      
      Pulled By: cbi42
      
      fbshipit-source-id: 428d028a79386f46ef97bb6b6051dc76c83e1f2b
  9. 29 Apr 2022, 1 commit