1. 01 July 2020, 5 commits
    • Clean up blob files based on the linked SST set (#7001) · e367bc7f
      Levi Tamasi committed
      Summary:
      The earlier `VersionBuilder` code only cleaned up blob files that were
      marked as entirely consisting of garbage using `VersionEdits` with
      `BlobFileGarbage`. This covers the cases when table files go through
      regular compaction, where we iterate through the KVs and thus have an
      opportunity to calculate the amount of garbage (that is, most cases).
      However, it does not help when table files are simply dropped (e.g. deletion
      compactions or the `DeleteFile` API). To deal with such cases, the patch
      adds logic that cleans up all blob files at the head of the list until the first
      one with linked SSTs is found. (As an example, let's assume we have blob files
      with numbers 1..10, and the first one with any linked SSTs is number 8.
      This means that SSTs in the `Version` only rely on blob files with numbers >= 8,
      and thus 1..7 are no longer needed.)
      
      The code change itself is pretty small; however, changing the logic like this
      necessitated changes to some tests that have been added recently (namely
      to the ones that use blob files in isolation, i.e. without any table files referring
      to them). Some of these cases were fixed by bypassing `VersionBuilder` altogether
      in order to keep the tests simple (which actually makes them more proper unit tests
      as well), while the `VersionBuilder` unit tests were fixed by adding dummy table
      files to the test cases as needed.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7001
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D22119474
      
      Pulled By: ltamasi
      
      fbshipit-source-id: c6547141355667d4291d9661d6518eb741e7b54a
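      A minimal sketch of the head-of-list cleanup idea described in the commit above, using illustrative types and names rather than the actual `VersionBuilder` code:

          #include <cstdint>
          #include <map>
          #include <unordered_set>
          #include <vector>

          // Illustrative stand-in for the blob file metadata kept per Version.
          struct BlobFileMetaData {
            std::unordered_set<uint64_t> linked_ssts;  // SSTs referencing this blob file
          };

          // Blob files are ordered by file number; everything at the head of the
          // list up to (but not including) the first file with linked SSTs is
          // obsolete and can be cleaned up.
          std::vector<uint64_t> ObsoleteBlobFiles(
              const std::map<uint64_t, BlobFileMetaData>& blob_files) {
            std::vector<uint64_t> obsolete;
            for (const auto& pair : blob_files) {
              if (!pair.second.linked_ssts.empty()) {
                break;  // first still-referenced blob file; keep it and the rest
              }
              obsolete.push_back(pair.first);
            }
            return obsolete;
          }

      With blob files 1..10 and file 8 as the first one with linked SSTs, this returns {1, ..., 7}, matching the example in the summary.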
    • Add recent versions to format compatibility check (#7059) · f5554fd7
      Yanqin Jin committed
      Summary:
      as title.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7059
      
      Test Plan: ./tools/check_format_compatible.sh
      
      Reviewed By: siying
      
      Differential Revision: D22320774
      
      Pulled By: riversand963
      
      fbshipit-source-id: 124d13b08703d077a7aab3678e1eb639fcbcceca
    • Increase transaction timeout and enable deadlock detection in stress test (#7056) · f045ee64
      Cheng Chang committed
      Summary:
      The stress test occasionally fails with errors like `Transaction put: Operation timed out: Timeout waiting to lock key
      terminate called without an active exception`. Based on experiments on a devserver, increasing the timeouts resolves the issue.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7056
      
      Test Plan: watch stress test with txn.
      
      Reviewed By: anand1976
      
      Differential Revision: D22317265
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 2dc3352def5e78d2c39a18d7262a3a65ca98bbba
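      For reference, a minimal sketch of the knobs involved, with assumed timeout values (the actual db_stress settings may differ):

          #include "rocksdb/utilities/transaction_db.h"

          int main() {
            rocksdb::Options options;
            options.create_if_missing = true;

            rocksdb::TransactionDBOptions txn_db_options;
            // Allow more time to acquire locks before reporting a timeout.
            txn_db_options.transaction_lock_timeout = 10000;  // milliseconds

            rocksdb::TransactionDB* txn_db = nullptr;
            rocksdb::Status s = rocksdb::TransactionDB::Open(
                options, txn_db_options, "/tmp/txn_db", &txn_db);
            if (!s.ok()) return 1;

            rocksdb::TransactionOptions txn_options;
            txn_options.deadlock_detect = true;  // detect lock cycles instead of timing out
            txn_options.lock_timeout = 10000;    // per-transaction override, milliseconds

            rocksdb::Transaction* txn =
                txn_db->BeginTransaction(rocksdb::WriteOptions(), txn_options);
            s = txn->Put("key", "value");
            if (s.ok()) s = txn->Commit();
            delete txn;
            delete txn_db;
            return s.ok() ? 0 : 1;
          }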
    • Divide WriteCallbackTest.WriteWithCallbackTest (#7037) · 80b107a0
      sdong committed
      Summary:
      WriteCallbackTest.WriteWithCallbackTest has a deep for-loop and in some cases runs very long. Parameterize it.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7037
      
      Test Plan: Run the test and see that it passes.
      
      Reviewed By: ltamasi
      
      Differential Revision: D22269259
      
      fbshipit-source-id: a1b6687b5bf4609754833d14cf383d68bc7ab27a
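      A hedged sketch of the parameterization pattern with googletest; the test and parameter names below are illustrative, not the actual WriteCallbackTest code:

          #include <tuple>
          #include <gtest/gtest.h>

          class WriteWithCallbackParamTest
              : public ::testing::TestWithParam<std::tuple<bool, bool>> {};

          TEST_P(WriteWithCallbackParamTest, Basic) {
            const bool allow_parallel = std::get<0>(GetParam());
            const bool enable_pipelined_write = std::get<1>(GetParam());
            // Exercise one (allow_parallel, enable_pipelined_write) combination
            // per test case instead of iterating over all of them in one giant body.
            (void)allow_parallel;
            (void)enable_pipelined_write;
          }

          INSTANTIATE_TEST_CASE_P(WriteCallbackTest, WriteWithCallbackParamTest,
                                  ::testing::Combine(::testing::Bool(),
                                                     ::testing::Bool()));

      Each combination then shows up as its own test case, so slow combinations can run (and time out) independently.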
    • db_stress: deep clean directory before checkpoint (#7039) · 2d1d51d3
      sdong committed
      Summary:
      We see the crash test occasionally fail with "A checkpoint operation failed with: Invalid argument: Directory exists". The suspicion is that the directory fails to be deleted because of leftover trash files. Deep clean the directory after the DestroyDB() call.

      Also add more debugging printf output in case it fails.
      Also, preserve the DB if verification fails.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7039
      
      Test Plan: Run db_stress with low --checkpoint_one_in value
      
      Reviewed By: riversand963
      
      Differential Revision: D22271694
      
      fbshipit-source-id: 6a9b2abb664fc69a4dc666741df4f6b23703cd6d
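      A sketch of the "deep clean" idea under the assumption that leftover trash files are what keeps the directory around; DeepCleanDir and its error handling are illustrative, not the actual db_stress helper:

          #include <string>
          #include <vector>

          #include "rocksdb/db.h"
          #include "rocksdb/env.h"

          rocksdb::Status DeepCleanDir(const std::string& dir) {
            rocksdb::Options options;
            rocksdb::Status s = rocksdb::DestroyDB(dir, options);
            if (!s.ok()) return s;

            rocksdb::Env* env = rocksdb::Env::Default();
            std::vector<std::string> files;
            s = env->GetChildren(dir, &files);
            if (!s.ok()) return s;  // e.g. the directory is already gone
            for (const auto& f : files) {
              if (f == "." || f == "..") continue;
              // Remove whatever DestroyDB() left behind (e.g. *.trash files).
              rocksdb::Status del = env->DeleteFile(dir + "/" + f);
              (void)del;  // best effort
            }
            return env->DeleteDir(dir);
          }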
  2. 30 June 2020, 5 commits
    • Compaction filter support for BlobDB (#6850) · 5be2cb69
      Burton Li committed
      Summary:
      Added compaction filter support for BlobDB non-TTL values. As in vanilla RocksDB, the user compaction filter applies to all k/v pairs of the compaction for non-TTL values. It honors `min_blob_size`, which can result in values transitioning between inlined data and stored-in-blob data when a value's size changes.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6850
      
      Reviewed By: siying
      
      Differential Revision: D22263487
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 8fc03f8cde2a5c831e63b436b3dbf1b7f90939e8
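      A minimal sketch of wiring a user CompactionFilter into the stackable BlobDB (whose headers live under utilities/ rather than the public include directory); the filter logic, path, and min_blob_size value are illustrative:

          #include "rocksdb/compaction_filter.h"
          #include "utilities/blob_db/blob_db.h"

          class DropEmptyValuesFilter : public rocksdb::CompactionFilter {
           public:
            bool Filter(int /*level*/, const rocksdb::Slice& /*key*/,
                        const rocksdb::Slice& value, std::string* /*new_value*/,
                        bool* /*value_changed*/) const override {
              return value.empty();  // drop entries whose value is empty
            }
            const char* Name() const override { return "DropEmptyValuesFilter"; }
          };

          int main() {
            DropEmptyValuesFilter filter;
            rocksdb::Options options;
            options.create_if_missing = true;
            options.compaction_filter = &filter;  // now also applies to non-TTL blob values

            rocksdb::blob_db::BlobDBOptions blob_options;
            blob_options.min_blob_size = 1024;  // values >= 1 KiB go to blob files

            rocksdb::blob_db::BlobDB* db = nullptr;
            rocksdb::Status s = rocksdb::blob_db::BlobDB::Open(
                options, blob_options, "/tmp/blobdb", &db);
            if (!s.ok()) return 1;
            delete db;
            return 0;
          }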
    • Disable fsync in some tests to speed them up (#7036) · 58547e53
      sdong committed
      Summary:
      Fsyncing files does not provide extra test coverage in many tests. Provide an option in SpecialEnv to turn it off to speed them up, and enable this option in some tests with relatively long run times.
      Most of those tests could also be split into parameterized gtests. These two speed-up approaches are orthogonal, and we can do both if needed.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7036
      
      Test Plan: Run all tests and make sure they pass.
      
      Reviewed By: ltamasi
      
      Differential Revision: D22268084
      
      fbshipit-source-id: 6d4a838a1b7328c13931a2a5d93de57aa02afaab
    • Extend Get/MultiGet deadline support to table open (#6982) · 9a5886bd
      Anand Ananthabhotla committed
      Summary:
      Current implementation of the ```read_options.deadline``` option only checks the deadline for random file reads during point lookups. This PR extends the checks to file opens, prefetches and preloads as part of table open.
      
      The main changes are in the ```BlockBasedTable```, partitioned index and filter readers, and ```TableCache``` to take ReadOptions as an additional parameter. In ```BlockBasedTable::Open```, in order to retain existing behavior w.r.t checksum verification and block cache usage, we filter out most of the options in ```ReadOptions``` except ```deadline```. However, having the ```ReadOptions``` gives us more flexibility to honor other options like verify_checksums, fill_cache etc. in the future.
      
      There are additional changes in callsites due to function signature changes in ```NewTableReader()``` and ```FilePrefetchBuffer```.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6982
      
      Test Plan: Add new unit tests in db_basic_test
      
      Reviewed By: riversand963
      
      Differential Revision: D22219515
      
      Pulled By: anand1976
      
      fbshipit-source-id: 8a3b92f4a889808013838603aa3ca35229cd501b
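      A small usage sketch of the extended behavior: with ```read_options.deadline``` set, the table opens, prefetches and preloads triggered by the lookup are also subject to the deadline (the 10 ms budget below is illustrative):

          #include <chrono>
          #include <string>

          #include "rocksdb/db.h"

          rocksdb::Status GetWithDeadline(rocksdb::DB* db, const rocksdb::Slice& key,
                                          std::string* value) {
            rocksdb::ReadOptions read_options;
            // Absolute deadline: "now" in microseconds (per Env::NowMicros)
            // plus a 10 ms budget.
            read_options.deadline = std::chrono::microseconds(
                db->GetEnv()->NowMicros() + 10 * 1000);
            return db->Get(read_options, key, value);
          }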
    • Remove 2019 from appveyor (#7038) · d809ae9a
      sdong committed
      Summary:
      VS2019 is covered in CircleCI. The only thing missing there is the -DCMAKE_CXX_STANDARD=20 option. Add the option there and remove the VS2019 build from Appveyor.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7038
      
      Test Plan: Watch build results.
      
      Reviewed By: pdillinger, ltamasi
      
      Differential Revision: D22270010
      
      fbshipit-source-id: 77d30be49d38b41516fa8a12be45395c27b12761
    • Expose KeyMayExist in the C API (#7021) · 1b85d57c
      Stanislav Tkach committed
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/7021
      
      Reviewed By: ajkr
      
      Differential Revision: D22246297
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 81dfd0a49e4d5ce0c9f00772c17cca425757ea24
  3. 27 June 2020, 4 commits
    • Fix data race to VersionSet::io_status_ (#7034) · d47c8711
      Yanqin Jin committed
      Summary:
      After https://github.com/facebook/rocksdb/issues/6949 , VersionSet::io_status_ can be concurrently accessed by multiple
      threads without lock, causing tsan test to fail. For example, a bg flush thread
      resets io_status_ before calling LogAndApply(), while another thread already in
      the process of LogAndApply() reads io_status_. This is a bug.
      
      We do not have to reset io_status_ each time we call LogAndApply(). io_status_
      is part of the state of VersionSet, and it indicates the outcome of preceding
      MANIFEST/CURRENT file IO operations. Its value should be updated only when:

      1. MANIFEST/CURRENT file IO fails for the first time.
      2. MANIFEST/CURRENT file IO succeeds as part of recovering from a prior
         failure without process restart, e.g. by calling Resume().
      
      Test Plan (devserver):
      COMPILE_WITH_TSAN=1 make check
      COMPILE_WITH_TSAN=1 make db_test2
      ./db_test2 --gtest_filter=DBTest2.CompactionStall
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7034
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D22247137
      
      Pulled By: riversand963
      
      fbshipit-source-id: 77b83e05390f3ee3cd2d96d3fdd6fe4f225e3216
    • Fix for TSAN failure in DeleteScheduler (#7029) · b9d51b86
      Akanksha Mahajan committed
      Summary:
      Fix a TSAN failure caused by setting statistics in SstFileManager and DeleteScheduler.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7029
      
      Test Plan:
      1. make check -j64
      2. COMPILE_WITH_TSAN=1 make check -j64
      
      Reviewed By: siying, zhichao-cao
      
      Differential Revision: D22223418
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: c5bf336d711b787908dfeb6166cab4aa2e494d61
    • `BackupEngine::VerifyBackup` verifies checksum by default (#7014) · 1569dc48
      Zitan Chen committed
      Summary:
      A parameter `verify_with_checksum` is added to `BackupEngine::VerifyBackup`, which is true by default. So now `BackupEngine::VerifyBackup` verifies backup files with checksum AND file size by default. When `verify_with_checksum` is false, `BackupEngine::VerifyBackup` only compares file sizes to verify backup files.
      
      Also add a test for the case when corruption does not change the file size.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7014
      
      Test Plan: Passed backupable_db_test
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D22165590
      
      Pulled By: gg814
      
      fbshipit-source-id: 606a7450714e868bceb38598c89fd356c6004f4f
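      A usage sketch with an illustrative backup directory; per the summary above, the default now verifies checksums as well as file sizes, and passing `verify_with_checksum=false` restores the size-only check:

          #include <string>
          #include <vector>

          #include "rocksdb/utilities/backupable_db.h"

          rocksdb::Status VerifyLatestBackup(const std::string& backup_dir) {
            rocksdb::BackupEngine* backup_engine = nullptr;
            rocksdb::Status s = rocksdb::BackupEngine::Open(
                rocksdb::Env::Default(), rocksdb::BackupableDBOptions(backup_dir),
                &backup_engine);
            if (!s.ok()) return s;

            std::vector<rocksdb::BackupInfo> backups;
            backup_engine->GetBackupInfo(&backups);
            if (!backups.empty()) {
              rocksdb::BackupID latest = backups.back().backup_id;
              // Default: verify checksums in addition to file sizes.
              s = backup_engine->VerifyBackup(latest);
              if (s.ok()) {
                // Explicitly opt out of checksum verification: sizes only.
                s = backup_engine->VerifyBackup(latest, /*verify_with_checksum=*/false);
              }
            }
            delete backup_engine;
            return s;
          }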
    • Add unity build to CircleCI (#7026) · f9817201
      sdong committed
      Summary:
      We are still keeping the unity build working, so it's a good idea to add it to a pre-commit CI.
      Use a recent GCC docker image just to get a little bit more coverage. Fix three small issues to make it pass.
      Also make unity_test run db_basic_test rather than db_test to cut the test time. There is no point in running expensive tests here; it was set to run db_test before db_basic_test was separated out.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7026
      
      Test Plan: Watch the tests pass.
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D22223197
      
      fbshipit-source-id: baa3b6cbb623bf359829b63ce35715c75bcb0ed4
  4. 26 June 2020, 9 commits
  5. 25 June 2020, 6 commits
    • Update HISTORY.md to include the Public API Change for DB::OpenForReadonly introduced earlier (#7023) · 95fbb62c
      Zitan Chen committed
      
      Summary:
      `DB::OpenForReadOnly()` now returns `Status::NotFound` when the specified DB directory does not exist. Previously the error returned depended on the underlying `Env`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7023
      
      Reviewed By: ajkr
      
      Differential Revision: D22207845
      
      Pulled By: gg814
      
      fbshipit-source-id: f35830811a0e67efb0ee82eda3a9739bc526baba
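      A small sketch of the documented behavior, with an illustrative path:

          #include "rocksdb/db.h"

          int main() {
            rocksdb::DB* db = nullptr;
            rocksdb::Options options;
            rocksdb::Status s = rocksdb::DB::OpenForReadOnly(
                options, "/path/that/does/not/exist", &db);
            if (s.IsNotFound()) {
              // Previously the exact error depended on the underlying Env.
              return 0;
            }
            delete db;
            return 1;
          }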
    • Add a new option for BackupEngine to store table files under shared_checksum using DB session id in the backup filenames (#6997) · be41c61f
      Zitan Chen committed
      
      Summary:
      `BackupableDBOptions::new_naming_for_backup_files` is added. This option is false by default. When it is true, backup table filenames under directory shared_checksum are of the form `<file_number>_<crc32c>_<db_session_id>.sst`.
      
      Note that when this option is true, it comes into effect only when both `share_files_with_checksum` and `share_table_files` are true.
      
      Three new test cases are added.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6997
      
      Test Plan: Passed make check.
      
      Reviewed By: ajkr
      
      Differential Revision: D22098895
      
      Pulled By: gg814
      
      fbshipit-source-id: a1d9145e7fe562d71cde7ac995e17cb24fd42e76
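      A sketch of enabling the option; the backup directory is illustrative, and the option names come from the summary above:

          #include "rocksdb/utilities/backupable_db.h"

          rocksdb::Status OpenBackupEngineWithNewNaming(
              rocksdb::BackupEngine** backup_engine) {
            rocksdb::BackupableDBOptions backup_options("/backups/mydb");
            backup_options.share_table_files = true;            // required for the new naming
            backup_options.share_files_with_checksum = true;    // required for the new naming
            backup_options.new_naming_for_backup_files = true;  // <file#>_<crc32c>_<session>.sst
            return rocksdb::BackupEngine::Open(rocksdb::Env::Default(), backup_options,
                                               backup_engine);
          }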
    • First step towards handling MANIFEST write error (#6949) · e66199d8
      Yanqin Jin committed
      Summary:
      This PR provides preliminary support for handling IO error during MANIFEST write.
      File write/sync is not guaranteed to be atomic. If we encounter an IOError while writing/syncing to the MANIFEST file, we cannot be sure about the state of the MANIFEST file. The version edits may or may not have reached the file. During cleanup, if we delete the newly-generated SST files referenced by the pending version edit(s), but the version edit(s) actually were persisted in the MANIFEST, then the next recovery attempt will process the version edit(s) and then fail since the SST files have already been deleted.
      One approach is to truncate the MANIFEST after write/sync error, so that it is safe to delete the SST files. However, file truncation may not be supported on certain file systems. Therefore, we take the following approach.
      If an IOError is detected during MANIFEST write/sync, we disable file deletions for the faulty database. Depending on whether the IOError is retryable (set by underlying file system), either RocksDB or application can call `DB::Resume()`, or simply shutdown and restart. During `Resume()`, RocksDB will try to switch to a new MANIFEST and write all existing in-memory version storage in the new file. If this succeeds, then RocksDB may proceed. If all recovery is completed, then file deletions will be re-enabled.
      Note that multiple threads can call `LogAndApply()` at the same time, though only one of them will actually perform the MANIFEST write, possibly batching the version edits of other threads. When the leading MANIFEST writer finishes, all of the MANIFEST-writing threads in this batch will see the same IOError. They will all call `ErrorHandler::SetBGError()`, in which file deletion will be disabled.
      
      Possible future directions:
      - Add an `ErrorContext` structure so that it is easier to pass more info to `ErrorHandler`. Currently, as in this example, a new `BackgroundErrorReason` has to be added.
      
      Test plan (dev server):
      make check
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6949
      
      Reviewed By: anand1976
      
      Differential Revision: D22026020
      
      Pulled By: riversand963
      
      fbshipit-source-id: f3c68a2ef45d9b505d0d625c7c5e0c88495b91c8
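      A hedged sketch of the application-side flow described above: when a background error (such as a failed MANIFEST write) is retryable, the application can attempt `DB::Resume()`; the surrounding helper is illustrative, not the actual ErrorHandler logic:

          #include "rocksdb/db.h"

          rocksdb::Status TryRecover(rocksdb::DB* db, const rocksdb::Status& bg_error) {
            if (bg_error.ok()) {
              return bg_error;
            }
            // Resume() attempts recovery from the background error; on success,
            // RocksDB switches to a new MANIFEST, writes the in-memory version
            // state into it, and re-enables file deletions.
            rocksdb::Status s = db->Resume();
            if (!s.ok()) {
              // Not recoverable in-process; the caller may shut down and restart.
            }
            return s;
          }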
    • Test CircleCI with CLANG-10 (#7025) · 9cc25190
      sdong committed
      Summary:
      It's useful to build RocksDB using a more recent clang version in CI. Add a CircleCI build and fix some issues with it.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7025
      
      Test Plan: See all tests pass.
      
      Reviewed By: pdillinger
      
      Differential Revision: D22215700
      
      fbshipit-source-id: 914a729c2cd3f3ac4a627cc0ac58d4691dca2168
    • Fix unity build broken by #7007 (#7024) · 50d69698
      sdong committed
      Summary:
      https://github.com/facebook/rocksdb/pull/7007 broke the unity build. Fix it by moving the const inside the function.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7024
      
      Test Plan: make unity and see that it builds.
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D22212028
      
      fbshipit-source-id: 5daff7383b691808164d4745ab543238502d946b
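      An illustrative sketch of the fix pattern (not the actual io_tracer code): a unity build compiles multiple .cc files as one translation unit, so identical file-scope constants in unnamed namespaces collide; moving the constant into the function avoids the redefinition:

          // Before: clashes with an identical definition in another .cc file
          // once both are concatenated into the same translation unit.
          // namespace { const unsigned int kCharSize = 1; }

          // After: the constant is local to the function, so no redefinition.
          size_t EncodedCharLength() {
            const unsigned int kCharSize = 1;
            return kCharSize;
          }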
    • Fix the memory leak in Env_basic_test (#7017) · 83a4dd1a
      Zhichao Cao committed
      Summary:
      Fix the memory leak that broke ASAN and other tests, introduced by https://github.com/facebook/rocksdb/issues/6830
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7017
      
      Test Plan: pass asan_check
      
      Reviewed By: siying
      
      Differential Revision: D22190289
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 03a095f698b4f9d72fd9374191b17c890d7c2b56
  6. 24 June 2020, 2 commits
  7. 23 June 2020, 3 commits
    • Minimize memory internal fragmentation for Bloom filters (#6427) · 5b2bbacb
      Peter Dillinger committed
      Summary:
      New experimental option BBTO::optimize_filters_for_memory builds
      filters that maximize their use of "usable size" from malloc_usable_size,
      which is also used to compute block cache charges.
      
      Rather than always "rounding up," we track state in the
      BloomFilterPolicy object to mix essentially "rounding down" and
      "rounding up" so that the average FP rate of all generated filters is
      the same as without the option. (YMMV as heavily accessed filters might
      be unluckily lower accuracy.)
      
      Thus, the option near-minimizes what the block cache considers as
      "memory used" for a given target Bloom filter false positive rate and
      Bloom filter implementation. There are no forward or backward
      compatibility issues with this change, though it only works on the
      format_version=5 Bloom filter.
      
      With Jemalloc, we see about 10% reduction in memory footprint (and block
      cache charge) for Bloom filters, but 1-2% increase in storage footprint,
      due to encoding efficiency losses (FP rate is non-linear with bits/key).
      
      Why not weighted random round up/down rather than state tracking? By
      only requiring malloc_usable_size, we don't actually know what the next
      larger and next smaller usable sizes for the allocator are. We pick a
      requested size, accept and use whatever usable size it has, and use the
      difference to inform our next choice. This allows us to narrow in on the
      right balance without tracking/predicting usable sizes.
      
      Why not weight history of generated filter false positive rates by
      number of keys? This could lead to excess skew in small filters after
      generating a large filter.
      
      Results from filter_bench with jemalloc (irrelevant details omitted):
      
          (normal keys/filter, but high variance)
          $ ./filter_bench -quick -impl=2 -average_keys_per_filter=30000 -vary_key_count_ratio=0.9
          Build avg ns/key: 29.6278
          Number of filters: 5516
          Total size (MB): 200.046
          Reported total allocated memory (MB): 220.597
          Reported internal fragmentation: 10.2732%
          Bits/key stored: 10.0097
          Average FP rate %: 0.965228
          $ ./filter_bench -quick -impl=2 -average_keys_per_filter=30000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
          Build avg ns/key: 30.5104
          Number of filters: 5464
          Total size (MB): 200.015
          Reported total allocated memory (MB): 200.322
          Reported internal fragmentation: 0.153709%
          Bits/key stored: 10.1011
          Average FP rate %: 0.966313
      
          (very few keys / filter, optimization not as effective due to ~59 byte
           internal fragmentation in blocked Bloom filter representation)
          $ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000 -vary_key_count_ratio=0.9
          Build avg ns/key: 29.5649
          Number of filters: 162950
          Total size (MB): 200.001
          Reported total allocated memory (MB): 224.624
          Reported internal fragmentation: 12.3117%
          Bits/key stored: 10.2951
          Average FP rate %: 0.821534
          $ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
          Build avg ns/key: 31.8057
          Number of filters: 159849
          Total size (MB): 200
          Reported total allocated memory (MB): 208.846
          Reported internal fragmentation: 4.42297%
          Bits/key stored: 10.4948
          Average FP rate %: 0.811006
      
          (high keys/filter)
          $ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000000 -vary_key_count_ratio=0.9
          Build avg ns/key: 29.7017
          Number of filters: 164
          Total size (MB): 200.352
          Reported total allocated memory (MB): 221.5
          Reported internal fragmentation: 10.5552%
          Bits/key stored: 10.0003
          Average FP rate %: 0.969358
          $ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
          Build avg ns/key: 30.7131
          Number of filters: 160
          Total size (MB): 200.928
          Reported total allocated memory (MB): 200.938
          Reported internal fragmentation: 0.00448054%
          Bits/key stored: 10.1852
          Average FP rate %: 0.963387
      
      And from db_bench (block cache) with jemalloc:
      
          $ ./db_bench -db=/dev/shm/dbbench.no_optimize -benchmarks=fillrandom -format_version=5 -value_size=90 -bloom_bits=10 -num=2000000 -threads=8 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false
          $ ./db_bench -db=/dev/shm/dbbench -benchmarks=fillrandom -format_version=5 -value_size=90 -bloom_bits=10 -num=2000000 -threads=8 -optimize_filters_for_memory -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false
          $ (for FILE in /dev/shm/dbbench.no_optimize/*.sst; do ./sst_dump --file=$FILE --show_properties | grep 'filter block' ; done) | awk '{ t += $4; } END { print t; }'
          17063835
          $ (for FILE in /dev/shm/dbbench/*.sst; do ./sst_dump --file=$FILE --show_properties | grep 'filter block' ; done) | awk '{ t += $4; } END { print t; }'
          17430747
          $ #^ 2.1% additional filter storage
          $ ./db_bench -db=/dev/shm/dbbench.no_optimize -use_existing_db -benchmarks=readrandom,stats -statistics -bloom_bits=10 -num=2000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false -duration=10 -cache_index_and_filter_blocks -cache_size=1000000000
          rocksdb.block.cache.index.add COUNT : 33
          rocksdb.block.cache.index.bytes.insert COUNT : 8440400
          rocksdb.block.cache.filter.add COUNT : 33
          rocksdb.block.cache.filter.bytes.insert COUNT : 21087528
          rocksdb.bloom.filter.useful COUNT : 4963889
          rocksdb.bloom.filter.full.positive COUNT : 1214081
          rocksdb.bloom.filter.full.true.positive COUNT : 1161999
          $ #^ 1.04 % observed FP rate
          $ ./db_bench -db=/dev/shm/dbbench -use_existing_db -benchmarks=readrandom,stats -statistics -bloom_bits=10 -num=2000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false -optimize_filters_for_memory -duration=10 -cache_index_and_filter_blocks -cache_size=1000000000
          rocksdb.block.cache.index.add COUNT : 33
          rocksdb.block.cache.index.bytes.insert COUNT : 8448592
          rocksdb.block.cache.filter.add COUNT : 33
          rocksdb.block.cache.filter.bytes.insert COUNT : 18220328
          rocksdb.bloom.filter.useful COUNT : 5360933
          rocksdb.bloom.filter.full.positive COUNT : 1321315
          rocksdb.bloom.filter.full.true.positive COUNT : 1262999
          $ #^ 1.08 % observed FP rate, 13.6% less memory usage for filters
      
      (Due to specific key density, this example tends to generate filters that are "worse than average" for internal fragmentation. "Better than average" cases can show little or no improvement.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6427
      
      Test Plan: unit test added, 'make check' with gcc, clang and valgrind
      
      Reviewed By: siying
      
      Differential Revision: D22124374
      
      Pulled By: pdillinger
      
      fbshipit-source-id: f3e3aa152f9043ddf4fae25799e76341d0d8714e
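      A configuration sketch for the new option; the bits-per-key value and the surrounding options are illustrative:

          #include "rocksdb/filter_policy.h"
          #include "rocksdb/options.h"
          #include "rocksdb/table.h"

          rocksdb::Options MakeOptionsWithMemoryFriendlyFilters() {
            rocksdb::BlockBasedTableOptions table_options;
            table_options.format_version = 5;  // the option only works on format_version=5 filters
            table_options.filter_policy.reset(
                rocksdb::NewBloomFilterPolicy(10 /* bits_per_key */));
            // Experimental: size filters to minimize internal fragmentation of the
            // underlying malloc allocation (and thus block cache charges).
            table_options.optimize_filters_for_memory = true;

            rocksdb::Options options;
            options.table_factory.reset(
                rocksdb::NewBlockBasedTableFactory(table_options));
            return options;
          }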
    • Make EncryptEnv inheritable (#6830) · 1092f19d
      Matthew Von-Maszewski committed
      Summary:
      EncryptEnv class is both declared and defined within env_encryption.cc.  This makes it really tough to derive new classes from that base.
      
      This branch moves declaration of the class to rocksdb/env_encryption.h.  The change facilitates making new encryption modules (such as an upcoming openssl AES CTR pull request) possible / easy.
      
      The only coding change was to add the EncryptEnv object to env_basic_test.cc.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6830
      
      Reviewed By: riversand963
      
      Differential Revision: D21706593
      
      Pulled By: ajkr
      
      fbshipit-source-id: 64d2da95a1569ceeb9b1549c3bec5404cf4c89f0
    • Fix double define in IO_tracer (#7007) · d739318b
      Zhichao Cao committed
      Summary:
      Fix the following error
      
      "./trace_replay/io_tracer.h:20:20: error: redefinition of ‘const unsigned int rocksdb::{anonymous}::kCharSize’
       const unsigned int kCharSize = 1;
                          ^~~~~~~~~
      In file included from unity.cc:177:
      trace_replay/block_cache_tracer.cc:22:20: note: ‘const unsigned int rocksdb::{anonymous}::kCharSize’ previously defined here
       const unsigned int kCharSize = 1;"
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7007
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D22142618
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: e6dcd51ccc21d1f58df52cdc7a1c88e54cf4f6e8
  8. 20 June 2020, 5 commits
    • Remove CircleCI clang build's verbose output (#7000) · 096beb78
      sdong committed
      Summary:
      As the CircleCI clang build is stable, the verbose flag is less useful. On the other hand, the long outputs might create other problems. A non-reproducible failure "make: write error: stdout" might be related to it.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7000
      
      Test Plan: Watch the run
      
      Reviewed By: pdillinger
      
      Differential Revision: D22118870
      
      fbshipit-source-id: a4157a4282adddcb0c55c0e9e53b2d9ce18bda66
    • Remove an assertion in FlushAfterIntraL0CompactionCheckConsistencyFail (#7003) · dea4063b
      sdong committed
      Summary:
      FlushAfterIntraL0CompactionCheckConsistencyFail is flaky. It sometimes fails with:
      
      db/db_compaction_test.cc:5186: Failure
      Expected equality of these values:
        10
        NumTableFilesAtLevel(0)
          Which is: 3
      
      I don't see a clear reason why the assertion would always be true, and the necessity of the assertion is not clear either. Remove it.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7003
      
      Test Plan: See that the test still builds.
      
      Reviewed By: riversand963
      
      Differential Revision: D22129753
      
      fbshipit-source-id: 42f0bb05e32b369e8d726bfd3e35c29cf52fe008
    • Fix block checksum for >=4GB, refactor (#6978) · 25a0d0ca
      Peter Dillinger committed
      Summary:
      Although RocksDB falls over in various other ways with KVs
      around 4GB or more, this change fixes how XXH32 and XXH64 were being
      called by the block checksum code to support >= 4GB in case that should
      ever happen, or the code copied for other uses.
      
      This change is not a schema compatibility issue because the checksum
      verification code would checksum the first (block_size + 1) mod 2^32
      bytes while the checksum construction code would checksum the first
      block_size mod 2^32 plus the compression type byte, meaning the
      XXH32/64 checksums for >=4GB block would not match about 255/256 times.
      
      While touching this code, I refactored to consolidate redundant
      implementations, improving diagnostics and performance tracking in some
      cases. Also used less confusing language in those diagnostics.
      
      Makes https://github.com/facebook/rocksdb/issues/6875 obsolete.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6978
      
      Test Plan:
      I was able to write a test for this using an SST file writer
      and VerifyChecksum in a reader. The test fails before the fix, though
      I'm leaving the test disabled because I don't think it's worth the
      expense of running regularly.
      
      Reviewed By: gg814
      
      Differential Revision: D22143260
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 982993d16134e8c50bea2269047f901c1783726e
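      A generic sketch of the underlying idea, checksumming a block plus its trailing compression-type byte with upstream xxhash's streaming API so the length stays a `size_t` on both the write and verify paths; this is an assumption-level illustration, not the actual RocksDB block checksum code:

          #include <cstddef>
          #include <cstdint>

          #include "xxhash.h"

          uint64_t ChecksumBlockWithType(const char* block_data, size_t block_size,
                                         char compression_type) {
            XXH64_state_t* state = XXH64_createState();
            XXH64_reset(state, /*seed=*/0);
            XXH64_update(state, block_data, block_size);  // full block, size_t length
            XXH64_update(state, &compression_type, 1);    // trailing type byte
            uint64_t checksum = XXH64_digest(state);
            XXH64_freeState(state);
            return checksum;
          }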
    • minor fixes for stress/crash contruns (#7006) · d76eed48
      Andrew Kryczka committed
      Summary:
      Avoid using `cf_consistency` together with `enable_compaction_filter` as
      the former heavily uses snapshots while the latter is incompatible with
      snapshots.
      
      Also fix a clang-analyze error for a write to a variable that is never
      read.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7006
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D22141679
      
      Pulled By: ajkr
      
      fbshipit-source-id: 1840ae238168818a9ab5973f90fd78c067399447
    • Remove racially charged terms "whitelist" and "blacklist" (#7008) · 88b42107
      Peter Dillinger committed
      Summary:
      We don't need them.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/7008
      
      Test Plan: "make check" and ensure "make crash_test" starts
      
      Reviewed By: ajkr
      
      Differential Revision: D22143838
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 72c8e16603abc59f4954e304466bc4dc1f58f94e
  9. 19 June 2020, 1 commit