1. 06 Dec 2019: 1 commit
  2. 04 Dec 2019: 2 commits
    • Y
      Make folly-related targets comply with verbosity (#6120) · 4edb4284
      Yanqin Jin committed
      Summary:
      Before this fix, `make all` would emit the full compilation command when building
      object files in the third-party/folly directory even if the default verbosity is
      0 (AM_DEFAULT_VERBOSITY).
      
      Test Plan (devserver):
      ```
      $make all | tee build.log
      $make check
      ```
      Check build.log to verify.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6120
      
      Differential Revision: D18795621
      
      Pulled By: riversand963
      
      fbshipit-source-id: 04641a8359cd4fd55034e6e797ed85de29ee2fe2
      4edb4284
    • C
      Fix compilation error on GCC 4.8.2 (#6106) · f32a311f
      Connor committed
      Summary:
      ```
      In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                       from ./db/merge_context.h:7,
                       from ./db/dbformat.h:16,
                       from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                       from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
      /usr/include/c++/4.8.2/bits/stl_algo.h: In instantiation of ‘_RandomAccessIterator std::__unguarded_partition(_RandomAccessIterator, _RandomAccessIterator, const _Tp&, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Tp = std::pair<std::basic_string<char>, long unsigned int>; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1]’:
      /usr/include/c++/4.8.2/bits/stl_algo.h:2296:78:   required from ‘_RandomAccessIterator std::__unguarded_partition_pivot(_RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1]’
      /usr/include/c++/4.8.2/bits/stl_algo.h:2337:62:   required from ‘void std::__introsort_loop(_RandomAccessIterator, _RandomAccessIterator, _Size, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Size = long int; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1]’
      /usr/include/c++/4.8.2/bits/stl_algo.h:5499:44:   required from ‘void std::sort(_RAIter, _RAIter, _Compare) [with _RAIter = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1’
      tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:79:   required from here
      /usr/include/c++/4.8.2/bits/stl_algo.h:2263:35: error: no match for call to ‘(rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1) (std::pair<std::basic_string<char>, long unsigned int>&, const std::pair<std::basic_string<char>, long unsigned int>&)’
          while (__comp(*__first, __pivot))
                                         ^
      tools/block_cache_analyzer/block_cache_trace_analyzer.cc:582:9: note: candidates are:
             [=](std::pair<std::string, uint64_t>& a,
               ^
      In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                       from ./db/merge_context.h:7,
                       from ./db/dbformat.h:16,
                       from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                       from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
      /usr/include/c++/4.8.2/bits/stl_algo.h:2263:35: note: bool (*)(std::pair<std::basic_string<char>, long unsigned int>&, std::pair<std::basic_string<char>, long unsigned int>&) <conversion>
          while (__comp(*__first, __pivot))
                                         ^
      /usr/include/c++/4.8.2/bits/stl_algo.h:2263:35: note:   candidate expects 3 arguments, 3 provided
      tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note: rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1
                 std::pair<std::string, uint64_t>& b) { return b.second < a.second; });
                                                    ^
      tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note:   no known conversion for argument 2 from ‘const std::pair<std::basic_string<char>, long unsigned int>’ to ‘std::pair<std::basic_string<char>, long unsigned int>&’
      In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                       from ./db/merge_context.h:7,
                       from ./db/dbformat.h:16,
                       from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                       from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
      /usr/include/c++/4.8.2/bits/stl_algo.h:2266:34: error: no match for call to ‘(rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1) (const std::pair<std::basic_string<char>, long unsigned int>&, std::pair<std::basic_string<char>, long unsigned int>&)’
          while (__comp(__pivot, *__last))
                                        ^
      tools/block_cache_analyzer/block_cache_trace_analyzer.cc:582:9: note: candidates are:
             [=](std::pair<std::string, uint64_t>& a,
               ^
      In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                       from ./db/merge_context.h:7,
                       from ./db/dbformat.h:16,
                       from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                       from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
      /usr/include/c++/4.8.2/bits/stl_algo.h:2266:34: note: bool (*)(std::pair<std::basic_string<char>, long unsigned int>&, std::pair<std::basic_string<char>, long unsigned int>&) <conversion>
          while (__comp(__pivot, *__last))
                                        ^
      /usr/include/c++/4.8.2/bits/stl_algo.h:2266:34: note:   candidate expects 3 arguments, 3 provided
      tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note: rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1
                 std::pair<std::string, uint64_t>& b) { return b.second < a.second; });
                                                    ^
      tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note:   no known conversion for argument 1 from ‘const std::pair<std::basic_string<char>, long unsigned int>’ to ‘std::pair<std::basic_string<char>, long unsigned int>&’
      ```
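      The error above comes from libstdc++ 4.8 passing const arguments to a comparator whose lambda parameters are non-const references. A minimal sketch of that kind of fix, with hypothetical data rather than the actual analyzer code:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <cstdint>
      #include <string>
      #include <utility>
      #include <vector>

      using Entry = std::pair<std::string, uint64_t>;

      // Declaring the lambda parameters as const references lets the comparator
      // accept both const and non-const arguments, which GCC 4.8's libstdc++
      // sort internals require.
      void SortByCountDesc(std::vector<Entry>& counts) {
        std::sort(counts.begin(), counts.end(),
                  [](const Entry& a, const Entry& b) {
                    return b.second < a.second;  // descending by count
                  });
      }

      int main() {
        std::vector<Entry> counts = {{"a", 3}, {"b", 7}, {"c", 5}};
        SortByCountDesc(counts);
        assert(counts.front().second == 7);
        assert(counts.back().second == 3);
        return 0;
      }
      ```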
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6106
      
      Differential Revision: D18783943
      
      Pulled By: riversand963
      
      fbshipit-source-id: cc7fc10565f0210b9eebf46b95cb4950ec0b15fa
      f32a311f
  3. 03 Dec 2019: 3 commits
    • Y
      Let DBSecondary close files after catch up (#6114) · fe1147db
      Yanqin Jin committed
      Summary:
      After the secondary instance replays the logs from the primary, certain files become
      obsolete. The secondary should find these files, evict their table readers from the
      table cache, and close them. If this is not done, the secondary will hold on to
      these files and prevent their space from being freed.
      
      Test plan (devserver):
      ```
      $./db_secondary_test --gtest_filter=DBSecondaryTest.SecondaryCloseFiles
      $make check
      $./db_stress -ops_per_thread=100000 -enable_secondary=true -threads=32 -secondary_catch_up_one_in=10000 -clear_column_family_one_in=1000 -reopen=100
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6114
      
      Differential Revision: D18769998
      
      Pulled By: riversand963
      
      fbshipit-source-id: 5d1f151567247196164e1b79d8402fa2045b9120
      fe1147db
    • A
      Remove key length assertion LRUHandle::CalcTotalCharge (#6115) · 16fa6fd2
      anand76 committed
      Summary:
      Inserting an entry in the block cache with a zero-length key is a valid use case. Remove the assertion in `LRUHandle::CalcTotalCharge`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6115
      
      Differential Revision: D18769693
      
      Pulled By: anand1976
      
      fbshipit-source-id: 34cc159650300dda6d7273480640478f28392cda
      16fa6fd2
    • D
      Add missing DataBlock-related functions to the C-API (#6101) · 048472f6
      David Palm committed
      Summary:
      Adds two missing functions to the C-API:
      
      - `rocksdb_block_based_options_set_data_block_index_type`
      - `rocksdb_block_based_options_set_data_block_hash_ratio`
      
      This enables users in other languages to enjoy the new(-ish) feature.
      
      The changes here are partially overlapping with [another PR](https://github.com/facebook/rocksdb/pull/5630) but are more focused on the DataBlock indexing options.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6101
      
      Differential Revision: D18765639
      
      fbshipit-source-id: 4a8947e71b179f26fa1eb83c267dd47ee64ac3b3
      048472f6
  4. 28 Nov 2019: 6 commits
  5. 27 Nov 2019: 17 commits
    • J
      Work around weird unused errors with Mingw (#6075) · c16b0874
      John Ericson committed
      Summary:
      From the rest of the code, it looks like this can maybe be given the attribute unconditionally, but I couldn't test with MSVC, so I defensively put it under a preprocessor conditional.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6075
      
      Differential Revision: D18723749
      
      fbshipit-source-id: 45fc8732c28dd29aab1644225d68f3c6f39bd69b
      c16b0874
    • S
      Support options.max_open_files = -1 with periodic_compaction_seconds (#6090) · aa1857e2
      sdong committed
      Summary:
      options.periodic_compaction_seconds isn't supported when options.max_open_files != -1. This is because the file creation time is stored in table properties, which are not guaranteed to be loaded unless options.max_open_files = -1. Relax this constraint by storing the information in the manifest.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6090
      
      Test Plan: Pass all existing tests; Modify an existing test to force the manifest value to take 0 to simulate backward compatibility case; manually open the DB generated with the change by release 4.2.
      
      Differential Revision: D18702268
      
      fbshipit-source-id: 13e0bd94f546498a04f3dc5fc0d9dff5125ec9eb
      aa1857e2
    • A
      Fix HISTORY.md for 6.6.0 (#6096) · 496a6ae8
      anand76 committed
      Summary:
      Some of the entries were incorrectly listed under 6.5.0.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6096
      
      Differential Revision: D18722801
      
      Pulled By: gfosco
      
      fbshipit-source-id: 18d1187deb6a9d69a8feb68b727d2f720a65f2bc
      496a6ae8
    • P
      Expose and elaborate FilterBuildingContext (#6088) · ca3b6c28
      Peter Dillinger committed
      Summary:
      This change enables custom implementations of FilterPolicy to
      wrap a variety of NewBloomFilterPolicy configurations and select among them based on
      contextual information such as table level and compaction style.
      
      * Moves FilterBuildingContext to the public API and elaborates it with more
      useful data. (It would be nice to include more general options-like data,
      but at the time this object is constructed we are using the internal APIs
      ImmutableCFOptions and MutableCFOptions, and don't have easy access to
      ColumnFamilyOptions, as far as I can tell.)
      
      * Renames BloomFilterPolicy::GetFilterBitsBuilderInternal to
      GetBuilderWithContext, because it's now public.
      
      * Plumbs through the table's "level_at_creation" for filter building
      context.
      
      * Simplified some tests by adding GetBuilder() to
      MockBlockBasedTableTester.
      
      * Adds test as DBBloomFilterTest.ContextCustomFilterPolicy, including
      sample wrapper class LevelAndStyleCustomFilterPolicy.
      
      * Fixes a cross-test bug in DBBloomFilterTest.OptimizeFiltersForHits
      where it does not reset perf context.
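      The wrapping pattern described above can be sketched outside RocksDB's actual API. All names below are hypothetical stand-ins, not the real FilterPolicy or FilterBuildingContext types:

      ```cpp
      #include <cassert>

      // Hypothetical stand-in for the context the summary describes,
      // carrying the table's level at creation.
      struct FilterBuildingContext {
        int level_at_creation;
      };

      // Hypothetical stand-in for a configured filter policy.
      struct Policy {
        double bits_per_key;
      };

      // Select a cheaper filter for the last level and a more accurate one
      // elsewhere, mirroring the LevelAndStyleCustomFilterPolicy idea.
      Policy SelectPolicy(const FilterBuildingContext& ctx, int num_levels) {
        if (ctx.level_at_creation == num_levels - 1) {
          return Policy{5.0};   // bottom level: save memory
        }
        return Policy{10.0};    // upper levels: better FP rate
      }

      int main() {
        assert(SelectPolicy({6}, 7).bits_per_key == 5.0);
        assert(SelectPolicy({0}, 7).bits_per_key == 10.0);
        return 0;
      }
      ```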
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6088
      
      Test Plan: make check, valgrind on db_bloom_filter_test
      
      Differential Revision: D18697817
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 5f987a2d7b07cc7a33670bc08ca6b4ca698c1cf4
      ca3b6c28
    • A
      Fix compilation under MSVC VS2015 (#6081) · 6d58ea90
      Adam Retter committed
      Summary:
      **NOTE**: this also needs to be back-ported to 6.4.6 and possibly older branches if further releases from them are envisaged.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6081
      
      Differential Revision: D18710107
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 03260f9316566e2bfc12c7d702d6338bb7941e01
      6d58ea90
    • P
      Add shared library for musl-libc (#3143) · 8ae149eb
      Patrick Double committed
      Summary:
      Add the jni library for musl-libc, specifically for incorporating into Alpine based docker images. The classifier is `musl64`.
      
      I have signed the CLA electronically.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/3143
      
      Differential Revision: D18719372
      
      fbshipit-source-id: 6189d149310b6436d6def7d808566b0234b23313
      8ae149eb
    • L
      Refactor and clean up the code that reads a blob from a file (#6093) · d9314a92
      Levi Tamasi committed
      Summary:
      This patch factors out the logic that reads a (potentially compressed) blob
      from a file into a separate helper method `GetRawBlobFromFile`, and cleans
      up the code a bit. Also, errors during decompression are now logged/propagated
      to the user by returning a `Status` code of `Corruption`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6093
      
      Test Plan: `make check`
      
      Differential Revision: D18716673
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 44144bc064cab616862d5643f34384f2bae6eb78
      d9314a92
    • P
      Allow fractional bits/key in BloomFilterPolicy (#6092) · 57f30322
      Peter Dillinger committed
      Summary:
      There's no technological impediment to allowing the Bloom
      filter bits/key to be non-integer (fractional/decimal) values, and it
      provides finer control over the memory vs. accuracy trade-off. This is
      especially handy in using the format_version=5 Bloom filter in place
      of the old one, because bits_per_key=9.55 provides the same accuracy as
      the old bits_per_key=10.
      
      This change not only requires refining the logic for choosing the best
      num_probes for a given bits/key setting, it revealed a flaw in that logic.
      As bits/key gets higher, the best num_probes for a cache-local Bloom
      filter is closer to bpk / 2 than to bpk * 0.69, the best choice for a
      standard Bloom filter. For example, at 16 bits per key, the best
      num_probes is 9 (FP rate = 0.0843%) not 11 (FP rate = 0.0884%).
      This change fixes and refines that logic (for the format_version=5
      Bloom filter only, just in case) based on empirical tests to find
      accuracy inflection points between each num_probes.
      
      Although bits_per_key is now specified as a double, the new Bloom
      filter converts/rounds this to "millibits / key" for predictable/precise
      internal computations. Just in case of unforeseen compatibility
      issues, we round to the nearest whole number bits / key for the
      legacy Bloom filter, so as not to unlock new behaviors for it.
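      The millibits conversion and whole-bit rounding described above can be sketched as follows. This is a simplification for illustration, not the actual BloomFilterPolicy code:

      ```cpp
      #include <cassert>

      // Convert a fractional bits-per-key setting to integer millibits-per-key,
      // rounding to the nearest thousandth of a bit, as the summary describes
      // for the format_version=5 filter.
      int BitsPerKeyToMillibits(double bits_per_key) {
        return static_cast<int>(bits_per_key * 1000.0 + 0.5);
      }

      // For the legacy Bloom filter, round to the nearest whole bit instead,
      // so no new behavior is unlocked for it.
      int LegacyWholeBits(double bits_per_key) {
        return static_cast<int>(bits_per_key + 0.5);
      }

      int main() {
        assert(BitsPerKeyToMillibits(9.55) == 9550);
        assert(LegacyWholeBits(9.55) == 10);  // legacy path keeps whole bits
        return 0;
      }
      ```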
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6092
      
      Test Plan: unit tests included
      
      Differential Revision: D18711313
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 1aa73295f152a995328cb846ef9157ae8a05522a
      57f30322
    • L
      Refactor blob file creation logic (#6066) · 72daa92d
      Levi Tamasi committed
      Summary:
      The patch refactors and cleans up the logic around creating new blob files
      by moving the common code of `SelectBlobFile` and `SelectBlobFileTTL`
      to a new helper method `CreateBlobFileAndWriter`, bringing the implementation
      of `SelectBlobFile` and `SelectBlobFileTTL` into sync, and increasing encapsulation
      by adding new constructors for `BlobFile` and `BlobLogHeader`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6066
      
      Test Plan:
      Ran `make check` and used the BlobDB mode of `db_bench` to sanity test both
      the TTL and the non-TTL code paths.
      
      Differential Revision: D18646921
      
      Pulled By: ltamasi
      
      fbshipit-source-id: e5705a84807932e31dccab4f49b3e64369cea26d
      72daa92d
    • J
      Use lowercase for shlwapi.lib rpcrt4.lib (#6076) · 771e1723
      John Ericson committed
      Summary:
      This fixes MinGW cross compilation from case-sensitive file systems, at no harm to MinGW builds on Windows.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6076
      
      Differential Revision: D18710554
      
      fbshipit-source-id: a9f299ac3aa019f7dbc07ed0c4a79e19cf99b488
      771e1723
    • A
      Fix naming of library on PPC64LE (#6080) · 1bf316e5
      Adam Retter committed
      Summary:
      **NOTE**: This also needs to be back-ported to 6.4.6
      
      Fix a regression introduced in f2bf0b2d by https://github.com/facebook/rocksdb/pull/5674 whereby the compiled library would get the wrong name on PPC64LE platforms.
      
      On PPC64LE, the regression caused the library to be named `librocksdbjni-linux64.so` instead of `librocksdbjni-linux-ppc64le.so`.
      
      This PR corrects the name back to `librocksdbjni-linux-ppc64le.so` and also corrects the ordering of conditional arguments in the Makefile to match the expected order as defined in the documentation for Make.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6080
      
      Differential Revision: D18710351
      
      fbshipit-source-id: d4db87ef378263b57de7f9edce1b7d15644cf9de
      1bf316e5
    • A
      Small improvements to Docker build for RocksJava (#6079) · 7f145195
      Adam Retter committed
      Summary:
      * We can reuse downloaded 3rd-party libraries
      * We can isolate the build to a Docker volume. This is useful for investigating failed builds, as we can examine the volume by assigning it a name during the build.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6079
      
      Differential Revision: D18710263
      
      fbshipit-source-id: 93f456ba44b49e48941c43b0c4d53995ecc1f404
      7f145195
    • P
      Remove unused/undefined ImmutableCFOptions() (#6086) · 4f17d33d
      Peter Dillinger committed
      Summary:
      The default constructor is not used or even defined.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6086
      
      Differential Revision: D18695669
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 6b6ac46029f4fb6edf1c11ee6ce1d9f172b2eaf2
      4f17d33d
    • A
      Update 3rd-party libraries used by RocksJava (#6084) · 382b154b
      Adam Retter committed
      Summary:
      * LZ4 1.8.3 -> 1.9.2
      * ZSTD 1.4.0 -> 1.4.4
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6084
      
      Differential Revision: D18710224
      
      fbshipit-source-id: a461ef19a473d3480acdc027f627ec3048730692
      382b154b
    • S
      Make default value of options.ttl to be 30 days when it is supported. (#6073) · 77eab5c8
      sdong committed
      Summary:
      By default options.ttl is disabled. We believe a better default is 30 days, which means deleted data will be removed from SST files slightly after 30 days, for most of the cases.
      
      Make the default UINT64_MAX - 1 to indicate that it is not overridden by users.
      
      Change periodic_compaction_seconds to use UINT64_MAX - 1 instead of UINT64_MAX as well, to be consistent. Also fix a small bug in the previous periodic_compaction_seconds default code.
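      The sentinel scheme described above can be sketched with a hypothetical helper (not RocksDB's actual sanitization code):

      ```cpp
      #include <cassert>
      #include <cstdint>

      // UINT64_MAX - 1 acts as "not set by the user"; sanitize it to 30 days.
      constexpr uint64_t kUnset = UINT64_MAX - 1;
      constexpr uint64_t k30Days = 30ull * 24 * 60 * 60;

      uint64_t SanitizeTtl(uint64_t ttl) {
        return ttl == kUnset ? k30Days : ttl;
      }

      int main() {
        assert(SanitizeTtl(kUnset) == 2592000ull);  // default becomes 30 days
        assert(SanitizeTtl(0) == 0);                // explicit 0 still disables
        assert(SanitizeTtl(3600) == 3600);          // user values pass through
        return 0;
      }
      ```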
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6073
      
      Test Plan: Add unit tests for it.
      
      Differential Revision: D18669626
      
      fbshipit-source-id: 957cd4374cafc1557d45a0ba002010552a378cc8
      77eab5c8
    • S
      Ignore value of BackupableDBOptions::max_valid_backups_to_open when B… (#6072) · fcd7e038
      Sebastiano Peluso committed
      Summary:
      This change ignores the value of BackupableDBOptions::max_valid_backups_to_open when a BackupEngine is not read-only.
      
      Issue: https://github.com/facebook/rocksdb/issues/4997
      
      Note on tests: I had to remove test case WriteOnlyEngine of BackupableDBTest because it was not consistent with the new semantics of BackupableDBOptions::max_valid_backups_to_open. Maybe we should think about adding a new interface for append-only BackupEngines. On the other hand, I changed the LimitBackupsOpened test case to use a read-only BackupEngine, and I added a new specific test case for the change.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6072
      
      Reviewed By: pdillinger
      
      Differential Revision: D18687364
      
      Pulled By: sebastianopeluso
      
      fbshipit-source-id: 77bc1f927d623964d59137a93de123bbd719da4e
      fcd7e038
    • S
      Update HISTORY.md for forward compatibility (#6085) · 0bc87442
      sdong committed
      Summary:
      https://github.com/facebook/rocksdb/pull/6060 broke forward compatibility for releases from 3.10 to 4.2. Update HISTORY.md to mention it. Also remove it from the compatibility tests.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6085
      
      Differential Revision: D18691694
      
      fbshipit-source-id: 4ef903783dc722b8a4d3e8229abbf0f021a114c9
      0bc87442
  6. 23 Nov 2019: 4 commits
  7. 22 Nov 2019: 1 commit
  8. 21 Nov 2019: 4 commits
    • Y
      Fix a data race between GetColumnFamilyMetaData and MarkFilesBeingCompacted (#6056) · 0ce0edbe
      Yanqin Jin committed
      Summary:
      Use db mutex to protect the execution of Version::GetColumnFamilyMetaData()
      called in DBImpl::GetColumnFamilyMetaData().
      Without mutex, GetColumnFamilyMetaData() races with MarkFilesBeingCompacted()
      for access to FileMetaData::being_compacted.
      Other than mutex, there are several more alternatives.
      
      - Make FileMetaData::being_compacted an atomic variable. This will make
        FileMetaData non-copy-able.
      
      - Separate being_compacted from FileMetaData. This requires re-organizing data
        structures that are already used in many places.
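      The drawback of the first alternative can be shown directly: a `std::atomic` member deletes the copy constructor of the containing struct. This is a generic illustration, not RocksDB's actual FileMetaData:

      ```cpp
      #include <atomic>
      #include <type_traits>

      struct WithPlainBool {
        bool being_compacted = false;
      };

      struct WithAtomicBool {
        std::atomic<bool> being_compacted{false};  // std::atomic is not copyable
      };

      // The plain version stays copyable; the atomic version does not,
      // which is why the summary calls it non-copy-able.
      static_assert(std::is_copy_constructible<WithPlainBool>::value, "");
      static_assert(!std::is_copy_constructible<WithAtomicBool>::value, "");

      int main() { return 0; }
      ```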
      
      Test Plan (dev server):
      ```
      make check
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6056
      
      Differential Revision: D18620488
      
      Pulled By: riversand963
      
      fbshipit-source-id: 87f89660b5d5e2ab4ef7962b7b2a7d00e346aa3b
      0ce0edbe
    • C
      Add asserts in transaction example (#6055) · c0983d06
      Cheng Chang committed
      Summary:
      The intention of the example for read committed is clearer with these added asserts.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6055
      
      Test Plan: `cd examples && make transaction_example && ./transaction_example`
      
      Differential Revision: D18621830
      
      Pulled By: riversand963
      
      fbshipit-source-id: a94b08c5958b589049409ee4fc4d6799e5cbef79
      c0983d06
    • S
      Add operator[] to autovector::iterator_impl. (#6047) · 3cd75736
      Stephan T. Lavavej committed
      Summary:
      This is a required operator for random-access iterators, and an upcoming update for Visual Studio 2019 will change the C++ Standard Library's heap algorithms to use this operator.
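      For a random-access iterator, `operator[]` is conventionally defined in terms of `operator+` and dereference. A minimal sketch of the idea (not autovector's actual code):

      ```cpp
      #include <cassert>
      #include <cstddef>

      // Minimal random-access-style iterator over a contiguous array, showing
      // the conventional operator[] definition: *(it + n).
      template <typename T>
      class SimpleIter {
       public:
        explicit SimpleIter(T* p) : p_(p) {}
        SimpleIter operator+(std::ptrdiff_t n) const { return SimpleIter(p_ + n); }
        T& operator*() const { return *p_; }
        T& operator[](std::ptrdiff_t n) const { return *(*this + n); }

       private:
        T* p_;
      };

      int main() {
        int data[] = {10, 20, 30};
        SimpleIter<int> it(data);
        assert(it[0] == 10);  // heap algorithms index iterators like this
        assert(it[2] == 30);
        return 0;
      }
      ```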
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6047
      
      Differential Revision: D18618531
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 08d10bc85bf2dbc3f7ef0fa3c777e99f1e927ef5
      3cd75736
    • S
      Sanitize input in DB::MultiGet() API (#6054) · 27ec3b34
      sdong committed
      Summary:
      The new DB::MultiGet() doesn't validate input for num_keys > 1, and GCC-9 complains about it. Fix it by returning directly when num_keys == 0.
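      The shape of the fix can be sketched generically. The function below is hypothetical, not the actual DB::MultiGet signature:

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <string>
      #include <vector>

      // Return early when num_keys == 0 so later code never touches keys[0],
      // which is the kind of sanitization the summary describes.
      std::vector<std::string> MultiGetSketch(size_t num_keys,
                                              const std::string* keys) {
        std::vector<std::string> values;
        if (num_keys == 0) {
          return values;  // nothing to do; avoids indexing empty input
        }
        for (size_t i = 0; i < num_keys; ++i) {
          values.push_back("value_for_" + keys[i]);
        }
        return values;
      }

      int main() {
        assert(MultiGetSketch(0, nullptr).empty());  // safe even with null keys
        std::string k[] = {"a"};
        assert(MultiGetSketch(1, k).size() == 1);
        return 0;
      }
      ```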
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6054
      
      Test Plan: Build with GCC-9 and see it passes.
      
      Differential Revision: D18608958
      
      fbshipit-source-id: 1c279aff3c7fe6e9d5a6d085ed02550ecea4fdb2
      27ec3b34
  9. 20 Nov 2019: 2 commits
    • P
      Fixes for g++ 4.9.2 compatibility (#6053) · 0306e012
      Peter Dillinger committed
      Summary:
      Taken from merryChris in https://github.com/facebook/rocksdb/issues/6043
      
      Stackoverflow ref on {{}} vs. {}:
      https://stackoverflow.com/questions/26947704/implicit-conversion-failure-from-initializer-list
      
      Note to reader: .clear() does not empty out an ostringstream, but .str("")
      suffices because we don't have to worry about clearing error flags.
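      The `.clear()` vs. `.str("")` point in the note can be shown concretely:

      ```cpp
      #include <cassert>
      #include <sstream>
      #include <string>

      int main() {
        std::ostringstream oss;
        oss << "hello";
        oss.clear();             // only resets error flags; buffer is untouched
        assert(oss.str() == "hello");
        oss.str("");             // actually empties the buffer
        assert(oss.str().empty());
        oss << "world";          // the stream is reusable afterwards
        assert(oss.str() == "world");
        return 0;
      }
      ```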
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6053
      
      Test Plan: make check, manual run of filter_bench
      
      Differential Revision: D18602259
      
      Pulled By: pdillinger
      
      fbshipit-source-id: f6190f83b8eab4e80e7c107348839edabe727841
      0306e012
    • L
      Fix corruption with intra-L0 on ingested files (#5958) · ec3e3c3e
      Little-Wallace committed
      Summary:
      ## Problem Description
      
      Our process aborted when it called `CheckConsistency`, and the information in `stderr` showed "`L0 files seqno 3001491972 3004797440 vs. 3002875611 3004524421`". Here are the causes of the accident that I investigated.
      
      * RocksDB calls `CheckConsistency` whenever the `MANIFEST` file is updated. It checks the sequence number interval of every file, except files which were ingested.
      * When a file is ingested into RocksDB, it is assigned the value of the global sequence number, and the minimum and maximum seqno of the file are equal, both being the global sequence number.
      * `CheckConsistency` determines whether a file was ingested by whether the smallest and largest seqno of the sstable file are equal.
      * If IntraL0Compaction picks an sst that was just ingested and compacts it into another sst, the `smallest_seqno` of the new file will be smaller than its `largest_seqno`.
          * If more than one ingested file was ingested before the memtable was scheduled to flush, and they were all compacted into one new sstable file by `IntraL0Compaction`, the sequence interval of the new file will be included in the interval of the memtable, so `CheckConsistency` will return a `Corruption`.
          * If an sstable was ingested after the memtable was scheduled to flush, it would be assigned a larger seqno than the memtable. Then the file was compacted with other files (all flushed before the memtable) in L0 into one file. This compaction started before the flush job of the memtable but completed after the flush job finished, so the new file produced by the compaction (call it s1) has a larger sequence number interval than the file produced by the flush (call it s2). **But some data in s1 was written into RocksDB before s2, so it is possible that some data in s2 was covered by old data in s1.** Of course, this also produces a `Corruption` because of the seqno overlap. The relationship of the files is:
          > s1.smallest_seqno < s2.smallest_seqno < s2.largest_seqno < s1.largest_seqno
      
      So I skip picking sst files that were ingested, in function `FindIntraL0Compaction`.
      
      ## Reason
      
      Here is my bug report: https://github.com/facebook/rocksdb/issues/5913
      
      There are two situations that can cause the check to fail.
      
      ### First situation
      - First we ingest five external ssts into RocksDB, and they happen to land in L0. There was already some data in the memtable, which makes the smallest sequence number of the memtable less than that of the ssts we ingested.
      
      - If there was a compaction job that compacted ssts from L0 to L1, `LevelCompactionPicker` would trigger an `IntraL0Compaction` that compacts these five ssts from L0 to L0. We call the result sst A, which was merged from the five ingested ssts.
      
      - Then some data is put into the memtable, and the memtable is flushed to L0. We call this sst B.
      - RocksDB checks consistency, finds that the `smallest_seqno` of B is less than that of A, and crashes. Because A was merged from five ssts, its smallest sequence number is less than its own largest sequence number, so RocksDB cannot tell that A was produced by ingestion.
      
      ### Second situation
      
      - First we have flushed many ssts in L0; we call them [s1, s2, s3].
      
      - There is an immutable memtable waiting to be flushed, but because the flush thread is busy, it has not been picked yet. We call it m1. At this moment, one sst is ingested into L0. We call it s4. Because s4 is ingested after m1 became an immutable memtable, it has a larger sequence number than m1.
      
      - m1 is flushed to L0. Because it is small, this flush job finishes quickly. We call the result s5.
      
      - [s1, s2, s3, s4] are compacted into one L0 sst by IntraL0Compaction. We call it s6.
        - compacted 4@0 files to L0
      - When s6 is added to the manifest, the corruption happens: the largest sequence number of s6 equals that of s4, and both are larger than that of s5. But because s1 is older than m1, the smallest sequence number of s6 is smaller than that of s5.
         - s6.smallest_seqno < s5.smallest_seqno < s5.largest_seqno < s6.largest_seqno
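      The ingestion heuristic, and how intra-L0 compaction breaks it, can be sketched with plain seqno intervals. This is a simplified illustration, not the actual VersionSet code:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <cstdint>
      #include <vector>

      struct FileMeta {
        uint64_t smallest_seqno;
        uint64_t largest_seqno;
      };

      // The heuristic described above: an ingested file has
      // smallest_seqno == largest_seqno (both are the global sequence number).
      bool LooksIngested(const FileMeta& f) {
        return f.smallest_seqno == f.largest_seqno;
      }

      // Intra-L0 compaction of several files produces one file covering the
      // union of their seqno intervals.
      FileMeta Merge(const std::vector<FileMeta>& in) {
        FileMeta out = in.front();
        for (const FileMeta& f : in) {
          out.smallest_seqno = std::min(out.smallest_seqno, f.smallest_seqno);
          out.largest_seqno = std::max(out.largest_seqno, f.largest_seqno);
        }
        return out;
      }

      int main() {
        // Two ingested files, each with a point interval.
        FileMeta a{100, 100}, b{200, 200};
        assert(LooksIngested(a) && LooksIngested(b));
        // After intra-L0 compaction the merged file no longer looks ingested,
        // so the interval check applies to it and can report Corruption.
        FileMeta merged = Merge({a, b});
        assert(!LooksIngested(merged));
        assert(merged.smallest_seqno == 100 && merged.largest_seqno == 200);
        return 0;
      }
      ```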
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5958
      
      Differential Revision: D18601316
      
      fbshipit-source-id: 5fe54b3c9af52a2e1400728f565e895cde1c7267
      ec3e3c3e