1. 31 Oct 2019 (2 commits)
  2. 30 Oct 2019 (3 commits)
    • Move pipeline write waiting logic into WaitForPendingWrites() (#5716) · a3960fc8
      Committed by sdong
      Summary:
      In pipeline writing mode, memtable switching needs to wait for memtable writing to finish, to make sure that by the time memtables are made immutable, no inserts are still going to them. This is currently done in DBImpl::SwitchMemtable(), after flush_scheduler_.TakeNextColumnFamily() is called to fetch the list of column families to switch. However, flush_scheduler_.TakeNextColumnFamily() itself is not thread-safe when called concurrently with flush_scheduler_.ScheduleFlush().
      This change provides a fix, which moves the waiting logic before flush_scheduler_.TakeNextColumnFamily(). WaitForPendingWrites() is a natural place for this logic.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5716
      
      Test Plan: Run all tests with ASAN and TSAN.
      
      Differential Revision: D18217658
      
      fbshipit-source-id: b9c5e765c9989645bf10afda7c5c726c3f82f6c3
      a3960fc8
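      The ordering constraint this fix establishes (wait for in-flight writes before taking column families from the flush scheduler) can be sketched with a minimal pending-writes tracker; `WriteTracker` and its method names are illustrative, not RocksDB's actual classes:

      ```cpp
      #include <cassert>
      #include <condition_variable>
      #include <mutex>

      // Minimal sketch of "wait for in-flight memtable writes before switching
      // memtables". BeginWrite()/EndWrite() bracket each memtable write; the
      // switcher calls WaitForPendingWrites() *before* picking column families
      // to flush, mirroring the ordering this commit establishes.
      class WriteTracker {
       public:
        void BeginWrite() {
          std::lock_guard<std::mutex> l(mu_);
          ++pending_;
        }
        void EndWrite() {
          {
            std::lock_guard<std::mutex> l(mu_);
            --pending_;
          }
          cv_.notify_all();
        }
        void WaitForPendingWrites() {
          std::unique_lock<std::mutex> l(mu_);
          cv_.wait(l, [this] { return pending_ == 0; });
        }

       private:
        std::mutex mu_;
        std::condition_variable cv_;
        int pending_ = 0;
      };
      ```

      In this sketch, the memtable switcher would call WaitForPendingWrites() before touching the flush scheduler, so no writer can still be inserting into a memtable that is about to become immutable.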
    • db_stress: CF Consistency check to use random CF to validate iterator results (#5983) · f22aaf8b
      Committed by sdong
      Summary:
      Right now, in db_stress's iterator tests, we always use the same CF to validate iterator results. This commit changes that so a randomized CF is used in the CF consistency test, where every CF should hold exactly the same data. This should help catch more bugs.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5983
      
      Test Plan: Run "make crash_test_with_atomic_flush".
      
      Differential Revision: D18217643
      
      fbshipit-source-id: 3ac998852a0378bb59790b20c5f236f6a5d681fe
      f22aaf8b
    • Auto enable Periodic Compactions if a Compaction Filter is used (#5865) · 4c9aa30a
      Committed by Sagar Vemuri
      Summary:
      - Periodic compactions are auto-enabled if a compaction filter or a compaction filter factory is set, in Level Compaction.
      - The default value of `periodic_compaction_seconds` is changed to UINT64_MAX, which lets RocksDB auto-tune periodic compactions as needed. An explicit value of 0 still works as before, i.e. it disables periodic compactions completely. For now, on seeing a compaction filter together with a UINT64_MAX value for `periodic_compaction_seconds`, RocksDB will make SST files older than 30 days go through periodic compactions.
      
      Some RocksDB users rely on compaction filters to control when their data can be deleted, usually with custom TTL logic. But compactions can occasionally be delayed for a considerable time before the TTL expires, due to factors like a low write rate to a key range or data already sitting at the bottom level. The Periodic Compactions feature was originally built to help such cases. Periodic compactions are now auto-enabled by default when compaction filters or compaction filter factories are used, as collecting garbage this way is generally helpful in all such cases.
      
      `periodic_compaction_seconds` is set to a large value, 30 days, in `SanitizeOptions` when RocksDB sees that a `compaction_filter` or `compaction_filter_factory` is used.
      
      This is done only for Level Compaction style.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5865
      
      Test Plan:
      - Added a new test `DBCompactionTest.LevelPeriodicCompactionWithCompactionFilters` to make sure that `periodic_compaction_seconds` is set if either `compaction_filter` or `compaction_filter_factory` options are set.
      - `COMPILE_WITH_ASAN=1 make check`
      
      Differential Revision: D17659180
      
      Pulled By: sagar0
      
      fbshipit-source-id: 4887b9cf2e53cf2dc93a7b658c6b15e1181217ee
      4c9aa30a
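      The defaulting rule described above can be modeled as a small stand-alone function (a sketch; `SanitizePeriodicCompactionSeconds` is a hypothetical name, not the actual RocksDB code):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <limits>

      // Sketch of the auto-tuning rule: UINT64_MAX means "let RocksDB decide";
      // with a compaction filter (or factory) and level compaction style, that
      // resolves to 30 days. Any explicit setting, including 0 (disabled), is
      // respected unchanged.
      uint64_t SanitizePeriodicCompactionSeconds(uint64_t configured,
                                                 bool has_compaction_filter,
                                                 bool level_compaction_style) {
        const uint64_t kAuto = std::numeric_limits<uint64_t>::max();
        const uint64_t kThirtyDays = 30ull * 24 * 60 * 60;  // 2592000 seconds
        if (configured == kAuto && has_compaction_filter &&
            level_compaction_style) {
          return kThirtyDays;
        }
        return configured;  // explicit values (including 0) left as-is
      }
      ```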
  3. 29 Oct 2019 (2 commits)
  4. 26 Oct 2019 (4 commits)
  5. 25 Oct 2019 (8 commits)
    • Clean up some filter tests and comments (#5960) · 013babc6
      Committed by Peter Dillinger
      Summary:
      Some filtering tests were unfriendly to new implementations of
      FilterBitsBuilder because of dynamic_cast to FullFilterBitsBuilder. Most
      of those have now been cleaned up, worked around, or at least changed
      from crash on dynamic_cast failure to individual test failure.
      
      Also put some clarifying comments on filter-related APIs.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5960
      
      Test Plan: make check
      
      Differential Revision: D18121223
      
      Pulled By: pdillinger
      
      fbshipit-source-id: e83827d9d5d96315d96f8e25a99cd70f497d802c
      013babc6
    • Update column families' log number altogether after flushing during recovery (#5856) · 2309fd63
      Committed by Yanqin Jin
      Summary:
      A bug occasionally shows up in crash test, and https://github.com/facebook/rocksdb/issues/5851 reproduces it.
      The bug can surface in the following way.
      1. The database has multiple column families.
      2. Before a DB restart, the last log file is corrupted in the middle (not at the tail).
      3. During recovery, the DB crashes after flushing one column family but before flushing another.
      
      The DB will then fail to open again with the error "SST file is ahead of WALs".
      The fix is to update the log numbers associated with all column families together, after flushing all column families' memtables. The version edits are written to a new MANIFEST, and only after all these version edits are written successfully does RocksDB (atomically) point the CURRENT file to the new MANIFEST.
      
      Test plan (on devserver):
      ```
      $make all && make check
      ```
      Specifically
      ```
      $make db_test2
      $./db_test2 --gtest_filter=DBTest2.CrashInRecoveryMultipleCF
      ```
      Also checked for compatibility as follows.
      Use this branch, run DBTest2.CrashInRecoveryMultipleCF and preserve the db directory.
      Then checkout 5.4, build ldb, and dump the MANIFEST.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5856
      
      Differential Revision: D17620818
      
      Pulled By: riversand963
      
      fbshipit-source-id: b52ce5969c9a8052cacec2bd805fcfb373589039
      2309fd63
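      The crash-safe installation pattern the fix relies on, write the complete new MANIFEST first and only then atomically repoint CURRENT, can be sketched with plain file operations (the helper, file names, and paths are illustrative):

      ```cpp
      #include <cassert>
      #include <cstdio>
      #include <string>

      // Sketch: write the complete new MANIFEST first, then atomically repoint
      // CURRENT at it via rename(). If a crash happens before the rename,
      // CURRENT still names the old, fully valid MANIFEST.
      bool InstallNewManifest(const std::string& dir, const std::string& edits) {
        const std::string manifest = dir + "/MANIFEST-000002";
        FILE* m = std::fopen(manifest.c_str(), "w");
        if (m == nullptr) return false;
        std::fputs(edits.c_str(), m);  // all column families' version edits
        std::fclose(m);

        const std::string tmp = dir + "/CURRENT.dbtmp";
        FILE* c = std::fopen(tmp.c_str(), "w");
        if (c == nullptr) return false;
        std::fputs("MANIFEST-000002\n", c);
        std::fclose(c);
        // rename() atomically replaces CURRENT on POSIX file systems.
        return std::rename(tmp.c_str(), (dir + "/CURRENT").c_str()) == 0;
      }
      ```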
    • Misc hashing updates / upgrades (#5909) · ca7ccbe2
      Committed by Peter Dillinger
      Summary:
      - Updated our included xxhash implementation to version 0.7.2 (== the latest dev version as of 2019-10-09).
      - Using XXH_NAMESPACE (like other fb projects) to avoid potential name collisions.
      - Added fastrange64, and unit tests for it and fastrange32. These are faster alternatives to hash % range.
      - Use preview version of XXH3 instead of MurmurHash64A for NPHash64
      -- Had to update cache_test to increase probability of passing for any given hash function.
      - Use fastrange64 instead of % with uses of NPHash64
      -- Had to fix WritePreparedTransactionTest.CommitOfDelayedPrepared to avoid deadlock apparently caused by new hash collision.
      - Set default seed for NPHash64 because specifying a seed rarely makes sense for it.
      - Removed an unnecessary include of xxhash.h in a popular .h file
      - Rename preview version of XXH3 to XXH3p for clarity and to ease backward compatibility in case final version of XXH3 is integrated.
      
      Relying on existing unit tests for NPHash64-related changes. Each new implementation of fastrange64 passed unit tests when manipulating my local build to select it. I haven't done any integration performance tests, but I consider the improved performance of the pieces being swapped in to be well established.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5909
      
      Differential Revision: D18125196
      
      Pulled By: pdillinger
      
      fbshipit-source-id: f6bf83d49d20cbb2549926adf454fd035f0ecc0d
      ca7ccbe2
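      fastrange64 maps a 64-bit hash onto [0, range) with a multiply-and-shift instead of the slower modulo; a minimal version, relying on the unsigned __int128 extension available in GCC and Clang, looks like:

      ```cpp
      #include <cassert>
      #include <cstdint>

      // Map a 64-bit hash uniformly onto [0, range) without the cost of `%`.
      // The high 64 bits of the 128-bit product hash * range give a value
      // proportional to hash, scaled into the target range.
      inline uint64_t FastRange64(uint64_t hash, uint64_t range) {
        return static_cast<uint64_t>(
            (static_cast<unsigned __int128>(hash) * range) >> 64);
      }
      ```

      Unlike `hash % range`, this preserves uniformity for well-distributed hashes while compiling to a single widening multiply.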
    • FilterPolicy consolidation, part 2/2 (#5966) · ec11eff3
      Committed by Peter Dillinger
      Summary:
      The parts that are used to implement FilterPolicy /
      NewBloomFilterPolicy and not used other than for the block-based table
      should be consolidated under table/block_based/filter_policy*.
      
      This change is step 2 of 2:
      mv util/bloom.cc table/block_based/filter_policy.cc
      This gets its own PR so that git has the best chance of following the
      rename for blame purposes. Note that low-level shared implementation
      details of Bloom filters remain in util/bloom_impl.h, and
      util/bloom_test.cc remains where it is for now.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5966
      
      Test Plan: make check
      
      Differential Revision: D18124930
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 823bc09025b3395f092ef46a46aa5ba92a914d84
      ec11eff3
    • Propagate SST and blob file numbers through the EventListener interface (#5962) · f7e7b34e
      Committed by Levi Tamasi
      Summary:
      This patch adds a number of new information elements to the FlushJobInfo and
      CompactionJobInfo structures that are passed to EventListeners via the
      OnFlush{Begin, Completed} and OnCompaction{Begin, Completed} callbacks.
      Namely, for flushes, the file numbers of the new SST and the oldest blob file it
      references are propagated. For compactions, the new pieces of information are
      the file number, level, and the oldest blob file referenced by each compaction
      input and output file.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5962
      
      Test Plan:
      Extended the EventListener unit tests with logic that checks that these information
      elements are correctly propagated from the corresponding FileMetaData.
      
      Differential Revision: D18095568
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 6874359a6aadb53366b5fe87adcb2f9bd27a0a56
      f7e7b34e
    • FilterPolicy consolidation, part 1/2 (#5963) · dd19014a
      Committed by Peter Dillinger
      Summary:
      The parts that are used to implement FilterPolicy /
      NewBloomFilterPolicy and not used other than for the block-based table
      should be consolidated under table/block_based/filter_policy*. I don't
      foresee sharing these APIs with e.g. the Plain Table because they don't
      expose hashes for reuse in indexing.
      
      This change is step 1 of 2:
      (a) mv table/full_filter_bits_builder.h to
      table/block_based/filter_policy_internal.h which I expect to expand
      soon to internally reveal more implementation details for testing.
      (b) consolidate eventual contents of table/block_based/filter_policy.cc
      in util/bloom.cc, which has the most elaborate revision history
      (see step 2 ...)
      
      Step 2 soon to follow:
      mv util/bloom.cc table/block_based/filter_policy.cc
      This gets its own PR so that git has the best chance of following the
      rename for blame purposes. Note that low-level shared implementation
      details of Bloom filters are in util/bloom_impl.h.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5963
      
      Test Plan: make check
      
      Differential Revision: D18121199
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 8f21732c3d8909777e3240e4ac3123d73140326a
      dd19014a
    • Vary key size and alignment in filter_bench (#5933) · 28370085
      Committed by Peter Dillinger
      Summary:
      The first version of filter_bench has selectable key size
      but that size does not vary throughout a test run. This artificially
      favors "branchy" hash functions like the existing BloomHash,
      MurmurHash1, probably because of optimal return for branch prediction.
      
      This change primarily varies those key sizes from -2 to +2 bytes vs.
      the average selected size. We also set the default key size at 24 to
      better reflect our best guess of typical key size.
      
      But steadily random key sizes may not be realistic either. So this
      change introduces a new filter_bench option:
      -vary_key_size_log2_interval=n where the same key size is used 2^n
      times and then changes to another size. I've set the default at 5
      (32 times same size) as a compromise between deployments with
      rather consistent vs. rather variable key sizes. On my Skylake
      system, the performance boost to MurmurHash1 largely lies between
      n=10 and n=15.
      
      Also added -vary_key_alignment (bool, now default=true), though this
      doesn't currently seem to matter in hash functions under
      consideration.
      
      This change also does a "dry run" for each testing scenario, to improve
      the accuracy of those numbers, as there was more difference between
      scenarios than expected. Subtracting gross test run times from dry run
      times is now also embedded in the output, because these "net" times are
      generally the most useful.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5933
      
      Differential Revision: D18121683
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 3c7efee1c5661a5fe43de555e786754ddf80dc1e
      28370085
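      The key-size schedule described above, sizes within ±2 bytes of the average and each held for 2^n consecutive queries, can be sketched as follows (the mixing constant and function name are illustrative, not filter_bench's actual code):

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <cstdint>

      // Pick the key size for a given query: hold one size for
      // 2^log2_interval consecutive queries, then jump to another size in
      // [avg_size - 2, avg_size + 2]. The multiplicative mix is an arbitrary
      // illustrative choice (golden-ratio constant).
      size_t KeySizeForQuery(uint64_t query_index, size_t avg_size,
                             int log2_interval) {
        uint64_t bucket = query_index >> log2_interval;
        uint64_t mixed = bucket * 0x9E3779B97F4A7C15ull;
        int offset = static_cast<int>(mixed % 5) - 2;  // in [-2, +2]
        return avg_size + offset;
      }
      ```

      With the defaults described above (avg_size 24, log2_interval 5), each run of 32 queries shares one size before the schedule moves on.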
    • Add test showing range tombstones can create excessively large compactions (#5956) · 25095311
      Committed by Dan Lambright
      Summary:
      For more information on the original problem see this [link](https://github.com/facebook/rocksdb/issues/3977).
      
      This change adds two new tests. They are identical except that one uses range tombstones and the other does not. Each test generates files at L2 that overlap with keys in L3. The test that uses range tombstones generates a single file at L2. This single file creates a very large range overlap that will in turn trigger an excessively large compaction.
      
      ```
      1: T001 - T005
      2:  000 -  005
      ```
      
      In contrast, the test that uses key ranges generates 3 files at L2. As a single file is compacted at a time, those 3 files will generate less work per compaction iteration.
      
      ```
      1:  001 - 002
      1:  003 - 004
      1:  005
      2:  000 - 005
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5956
      
      Differential Revision: D18071631
      
      Pulled By: dlambrig
      
      fbshipit-source-id: 12abae75fb3e0b022d228c6371698aa5e53385df
      25095311
  6. 24 Oct 2019 (3 commits)
    • CfConsistencyStressTest to validate key consistent across CFs in TestGet() (#5863) · 9f1e5a0b
      Committed by sdong
      Summary:
      Right now, in the CF consistency stress test's TestGet(), keys are just fetched without validation. With this change, half of the time the test checks that all CFs return the same value for the same key.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5863
      
      Test Plan: Run "make crash_test_with_atomic_flush" and see tests pass. Hack the code to generate some inconsistency and observe the test fails as expected.
      
      Differential Revision: D17934206
      
      fbshipit-source-id: 00ba1a130391f28785737b677f80f366fb83cced
      9f1e5a0b
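      The cross-CF check amounts to asserting that every column family returns the same lookup outcome for a key; a minimal sketch of that invariant (types and names are illustrative, not the db_stress code):

      ```cpp
      #include <cassert>
      #include <string>
      #include <utility>
      #include <vector>

      // A Get() outcome per column family: (found, value).
      using GetResult = std::pair<bool, std::string>;

      // True iff all column families agree on both presence and value of the
      // key, mirroring the invariant the CF consistency test maintains.
      bool AllCfsConsistent(const std::vector<GetResult>& results) {
        for (size_t i = 1; i < results.size(); ++i) {
          if (results[i] != results[0]) return false;
        }
        return true;
      }
      ```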
    • Remove unused BloomFilterPolicy::hash_func_ (#5961) · 6a32e3b5
      Committed by Peter Dillinger
      Summary:
      This is an internal, file-local "feature" that is not used and
      potentially confusing.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5961
      
      Test Plan: make check
      
      Differential Revision: D18099018
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 7870627eeed09941d12538ec55d10d2e164fc716
      6a32e3b5
    • Make buckifier python3 compatible (#5922) · b4ebda7a
      Committed by Yanqin Jin
      Summary:
      Make buckifier/buckify_rocksdb.py run on both Python 3 and 2
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5922
      
      Test Plan:
      ```
      $python3 buckifier/buckify_rocksdb.py
      $python3 buckifier/buckify_rocksdb.py '{"fake": {"extra_deps": [":test_dep", "//fakes/module:mock1"], "extra_compiler_flags": ["-DROCKSDB_LITE", "-Os"]}}'
      $python2 buckifier/buckify_rocksdb.py
      $python2 buckifier/buckify_rocksdb.py '{"fake": {"extra_deps": [":test_dep", "//fakes/module:mock1"], "extra_compiler_flags": ["-DROCKSDB_LITE", "-Os"]}}'
      ```
      
      Differential Revision: D17920611
      
      Pulled By: riversand963
      
      fbshipit-source-id: cc6e2f36013a88a710d96098f6ca18cbe85e3f62
      b4ebda7a
  7. 23 Oct 2019 (2 commits)
  8. 22 Oct 2019 (8 commits)
  9. 19 Oct 2019 (7 commits)
    • Store the filter bits reader alongside the filter block contents (#5936) · 29ccf207
      Committed by Levi Tamasi
      Summary:
      Amongst other things, PR https://github.com/facebook/rocksdb/issues/5504 refactored the filter block readers so that
      only the filter block contents are stored in the block cache (as opposed to the
      earlier design where the cache stored the filter block reader itself, leading to
      potentially dangling pointers and concurrency bugs). However, this change
      introduced a performance hit since with the new code, the metadata fields are
      re-parsed upon every access. This patch reunites the block contents with the
      filter bits reader to eliminate this overhead; since this is still a self-contained
      pure data object, it is safe to store it in the cache. (Note: this is similar to how
      the zstd digest is handled.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5936
      
      Test Plan:
      make asan_check
      
      filter_bench results for the old code:
      
      ```
      $ ./filter_bench -quick
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      Building...
      Build avg ns/key: 26.7153
      Number of filters: 16669
      Total memory (MB): 200.009
      Bits/key actual: 10.0647
      ----------------------------
      Inside queries...
        Dry run (46b) ns/op: 33.4258
        Single filter ns/op: 42.5974
        Random filter ns/op: 217.861
      ----------------------------
      Outside queries...
        Dry run (25d) ns/op: 32.4217
        Single filter ns/op: 50.9855
        Random filter ns/op: 219.167
          Average FP rate %: 1.13993
      ----------------------------
      Done. (For more info, run with -legend or -help.)
      
      $ ./filter_bench -quick -use_full_block_reader
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      Building...
      Build avg ns/key: 26.5172
      Number of filters: 16669
      Total memory (MB): 200.009
      Bits/key actual: 10.0647
      ----------------------------
      Inside queries...
        Dry run (46b) ns/op: 32.3556
        Single filter ns/op: 83.2239
        Random filter ns/op: 370.676
      ----------------------------
      Outside queries...
        Dry run (25d) ns/op: 32.2265
        Single filter ns/op: 93.5651
        Random filter ns/op: 408.393
          Average FP rate %: 1.13993
      ----------------------------
      Done. (For more info, run with -legend or -help.)
      ```
      
      With the new code:
      
      ```
      $ ./filter_bench -quick
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      Building...
      Build avg ns/key: 25.4285
      Number of filters: 16669
      Total memory (MB): 200.009
      Bits/key actual: 10.0647
      ----------------------------
      Inside queries...
        Dry run (46b) ns/op: 31.0594
        Single filter ns/op: 43.8974
        Random filter ns/op: 226.075
      ----------------------------
      Outside queries...
        Dry run (25d) ns/op: 31.0295
        Single filter ns/op: 50.3824
        Random filter ns/op: 226.805
          Average FP rate %: 1.13993
      ----------------------------
      Done. (For more info, run with -legend or -help.)
      
      $ ./filter_bench -quick -use_full_block_reader
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      Building...
      Build avg ns/key: 26.5308
      Number of filters: 16669
      Total memory (MB): 200.009
      Bits/key actual: 10.0647
      ----------------------------
      Inside queries...
        Dry run (46b) ns/op: 33.2968
        Single filter ns/op: 58.6163
        Random filter ns/op: 291.434
      ----------------------------
      Outside queries...
        Dry run (25d) ns/op: 32.1839
        Single filter ns/op: 66.9039
        Random filter ns/op: 292.828
          Average FP rate %: 1.13993
      ----------------------------
      Done. (For more info, run with -legend or -help.)
      ```
      
      Differential Revision: D17991712
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 7ea205550217bfaaa1d5158ebd658e5832e60f29
      29ccf207
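      The idea of caching a self-contained object that bundles the raw block with its once-parsed metadata can be sketched like this (field names and the metadata layout are illustrative, not the actual RocksDB types):

      ```cpp
      #include <cassert>
      #include <string>
      #include <utility>

      // Parse metadata once, at construction, and keep it next to the raw
      // bytes. The whole object is pure data, safe to store in a block cache,
      // so the per-access re-parsing described above is avoided.
      struct CachedFilterEntry {
        std::string contents;  // raw filter block bytes
        int num_probes;        // parsed metadata (illustrative: last byte)

        explicit CachedFilterEntry(std::string raw)
            : contents(std::move(raw)),
              num_probes(contents.empty()
                             ? 0
                             : static_cast<unsigned char>(contents.back())) {}
      };
      ```

      Every lookup through the cache then reads `num_probes` directly instead of re-decoding it from `contents`, which is the overhead this commit eliminates.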
    • Fix TestIterate for HashSkipList in db_stress (#5942) · c53db172
      Committed by Yanqin Jin
      Summary:
      Since SeekForPrev() (used by Prev()) is not supported by HashSkipList when a prefix extractor is used, we disable it when stress testing HashSkipList.
      
      - Change the default memtablerep to skip list.
      - Avoid Prev() when memtablerep is HashSkipList and prefix is used.
      
      Test Plan (on devserver):
      ```
      $make db_stress
      $./db_stress -ops_per_thread=10000 -reopen=1 -destroy_db_initially=true -column_families=1 -threads=1 -column_families=1 -memtablerep=prefix_hash
      $# or simply
      $./db_stress
      $./db_stress -memtablerep=prefix_hash
      ```
      Results must print "Verification successful".
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5942
      
      Differential Revision: D18017062
      
      Pulled By: riversand963
      
      fbshipit-source-id: af867e59aa9e6f533143c984d7d529febf232fd7
      c53db172
    • Refactor / clean up / optimize FullFilterBitsReader (#5941) · 5f8f2fda
      Committed by Peter Dillinger
      Summary:
      FullFilterBitsReader, after creating in BloomFilterPolicy, was
      responsible for decoding metadata bits. This meant that
      FullFilterBitsReader::MayMatch had some metadata checks in order to
      implement "always true" or "always false" functionality in the case
      of inconsistent or trivial metadata. This made for ugly
      mixing-of-concerns code and probably had some runtime cost. It also
      didn't really support plugging in alternative filter implementations
      with extensions to the existing metadata schema.
      
      BloomFilterPolicy::GetFilterBitsReader is now (exclusively) responsible
      for decoding filter metadata bits and constructing appropriate instances
      deriving from FilterBitsReader. "Always false" and "always true" derived
      classes allow FullFilterBitsReader not to be concerned with handling of
      trivial or inconsistent metadata. This also makes for easy expansion
      to alternative filter implementations in new, alternative derived
      classes. This change makes calls to FilterBitsReader::MayMatch
      *necessarily* virtual because there's now more than one built-in
      implementation. Compared with the previous implementation's extra
      'if' checks in MayMatch, there's no consistent performance difference,
      measured by (an older revision of) filter_bench (differences here seem
      to be within noise):
      
          Inside queries...
          -  Dry run (407) ns/op: 35.9996
          +  Dry run (407) ns/op: 35.2034
          -  Single filter ns/op: 47.5483
          +  Single filter ns/op: 47.4034
          -  Batched, prepared ns/op: 43.1559
          +  Batched, prepared ns/op: 42.2923
          ...
          -  Random filter ns/op: 150.697
          +  Random filter ns/op: 149.403
          ----------------------------
          Outside queries...
          -  Dry run (980) ns/op: 34.6114
          +  Dry run (980) ns/op: 34.0405
          -  Single filter ns/op: 56.8326
          +  Single filter ns/op: 55.8414
          -  Batched, prepared ns/op: 48.2346
          +  Batched, prepared ns/op: 47.5667
          -  Random filter ns/op: 155.377
          +  Random filter ns/op: 153.942
               Average FP rate %: 1.1386
      
      Also, the FullFilterBitsReader ctor was responsible for a surprising
      amount of CPU in production, due in part to inefficient determination of
      the CACHE_LINE_SIZE used to construct the filter being read. The
      overwhelming common case (same as my CACHE_LINE_SIZE) is now
      substantially optimized, as shown with filter_bench with
      -new_reader_every=1 (old option - see below) (repeatable result):
      
          Inside queries...
          -  Dry run (453) ns/op: 118.799
          +  Dry run (453) ns/op: 105.869
          -  Single filter ns/op: 82.5831
          +  Single filter ns/op: 74.2509
          ...
          -  Random filter ns/op: 224.936
          +  Random filter ns/op: 194.833
          ----------------------------
          Outside queries...
          -  Dry run (aa1) ns/op: 118.503
          +  Dry run (aa1) ns/op: 104.925
          -  Single filter ns/op: 90.3023
          +  Single filter ns/op: 83.425
          ...
          -  Random filter ns/op: 220.455
          +  Random filter ns/op: 175.7
               Average FP rate %: 1.13886
      
      However PR#5936 has/will reclaim most of this cost. After that PR, the optimization of this code path is likely negligible, but nonetheless it's clear we aren't making performance any worse.
      
      Also fixed inadequate check of consistency between filter data size and
      num_lines. (Unit test updated.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5941
      
      Test Plan:
      previously added unit tests FullBloomTest.CorruptFilters and
      FullBloomTest.RawSchema
      
      Differential Revision: D18018353
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 8e04c2b4a7d93223f49a237fd52ef2483929ed9c
      5f8f2fda
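      The dispatch described above, decode the metadata once in the factory and map trivial or unrecognized cases to dedicated readers, can be sketched as follows (class names, the metadata encoding, and the fallback policy here are illustrative):

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <memory>
      #include <string>

      // Sketch of factory-side metadata dispatch: trivial readers handle the
      // degenerate cases so the real reader's MayMatch() needs no metadata
      // checks on the hot path.
      struct FilterBitsReaderSketch {
        virtual ~FilterBitsReaderSketch() = default;
        virtual bool MayMatch(const std::string& key) = 0;
      };

      struct AlwaysTrueFilter : FilterBitsReaderSketch {
        bool MayMatch(const std::string&) override { return true; }
      };

      struct AlwaysFalseFilter : FilterBitsReaderSketch {
        bool MayMatch(const std::string&) override { return false; }
      };

      // Hypothetical factory: an empty filter added no keys, so "always
      // false" is safe; unrecognized metadata falls back to "always true" so
      // no key is ever wrongly excluded (illustrative policy).
      std::unique_ptr<FilterBitsReaderSketch> MakeReader(size_t data_len,
                                                         int num_probes) {
        if (data_len == 0) return std::make_unique<AlwaysFalseFilter>();
        if (num_probes < 1 || num_probes > 30) {
          return std::make_unique<AlwaysTrueFilter>();
        }
        return std::make_unique<AlwaysTrueFilter>();  // real reader elided
      }
      ```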
    • Fix PlainTableReader not to crash sst_dump (#5940) · fe464bca
      Committed by Peter Dillinger
      Summary:
      Plain table SSTs could crash sst_dump because of a bug in
      PlainTableReader that can leave table_properties_ as null. Even if it
      was intended not to keep the table properties in some cases, they were
      leaked on the offending code path.
      
      Steps to reproduce:
      
          $ db_bench --benchmarks=fillrandom --num=2000000 --use_plain_table --prefix-size=12
          $ sst_dump --file=0000xx.sst --show_properties
          from [] to []
          Process /dev/shm/dbbench/000014.sst
          Sst file format: plain table
          Raw user collected properties
          ------------------------------
          Segmentation fault (core dumped)
      
      Also added missing unit testing of plain table full_scan_mode, and
      an assertion in NewIterator to check for regression.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5940
      
      Test Plan: new unit test, manual, make check
      
      Differential Revision: D18018145
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 4310c755e824c4cd6f3f86a3abc20dfa417c5e07
      fe464bca
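      The defensive pattern behind the fix, tolerate a reader that failed to populate table properties instead of dereferencing a null pointer, can be sketched with simplified stand-in types (not the actual RocksDB classes):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <memory>
      #include <string>

      // Simplified stand-in for the properties a reader may fail to populate.
      struct TableProps {
        uint64_t num_entries = 0;
      };

      // Code that prints SST properties must handle a null pointer gracefully
      // rather than segfault, as sst_dump did on the offending code path.
      std::string DescribeProps(const std::shared_ptr<const TableProps>& props) {
        if (props == nullptr) {
          return "(table properties unavailable)";  // previously: crash
        }
        return "entries=" + std::to_string(props->num_entries);
      }
      ```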
    • Enable trace_replay with multi-threads (#5934) · 526e3b97
      Committed by Zhichao Cao
      Summary:
      In the current trace replay, all the queries are serialized and executed by a single thread, which may not closely simulate the original application's query pattern. This PR implements multi-threaded replay: users can set the number of threads used to replay the trace, and the queries generated from the trace records are scheduled onto the thread pool's job queue.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5934
      
      Test Plan: test with make check and real trace replay.
      
      Differential Revision: D17998098
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 87eecf6f7c17a9dc9d7ab29dd2af74f6f60212c8
      526e3b97
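      The scheduling scheme described above can be sketched as a minimal job-queue thread pool (illustrative only, not the actual replayer code):

      ```cpp
      #include <atomic>
      #include <cassert>
      #include <condition_variable>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      // Minimal job-queue pool in the spirit of multi-threaded replay:
      // decoded trace records become jobs that worker threads drain
      // concurrently. The destructor drains remaining jobs, then joins.
      class ReplayPool {
       public:
        explicit ReplayPool(int num_threads) {
          for (int i = 0; i < num_threads; ++i) {
            workers_.emplace_back([this] { Drain(); });
          }
        }
        void Schedule(std::function<void()> job) {
          {
            std::lock_guard<std::mutex> l(mu_);
            jobs_.push(std::move(job));
          }
          cv_.notify_one();
        }
        ~ReplayPool() {
          {
            std::lock_guard<std::mutex> l(mu_);
            done_ = true;
          }
          cv_.notify_all();
          for (auto& w : workers_) w.join();
        }

       private:
        void Drain() {
          for (;;) {
            std::function<void()> job;
            {
              std::unique_lock<std::mutex> l(mu_);
              cv_.wait(l, [this] { return done_ || !jobs_.empty(); });
              if (jobs_.empty()) return;  // done_ set and queue drained
              job = std::move(jobs_.front());
              jobs_.pop();
            }
            job();  // e.g. issue the Get/Put described by a trace record
          }
        }
        std::mutex mu_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> jobs_;
        bool done_ = false;
        std::vector<std::thread> workers_;
      };
      ```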
    • Update HISTORY.md with recent BlobDB adjacent changes · 69bd8a28
      Committed by Levi Tamasi
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5939
      
      Differential Revision: D18009096
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 032a48a302f9da38aecf4055b5a8d4e1dffd9dc7
      69bd8a28
    • Expose db stress tests (#5937) · e60cc092
      Committed by Yanqin Jin
      Summary:
      Expose the db_stress test by providing db_stress_tool.h as a public header.
      This PR does the following:
      - adds a new header, db_stress_tool.h, in include/rocksdb/
      - renames db_stress.cc to db_stress_tool.cc
      - adds a db_stress.cc which simply invokes a test function.
      - updates the Makefile accordingly.
      
      Test Plan (dev server):
      ```
      make db_stress
      ./db_stress
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5937
      
      Differential Revision: D17997647
      
      Pulled By: riversand963
      
      fbshipit-source-id: 1a8d9994f89ce198935566756947c518f0052410
      e60cc092
  10. 18 Oct 2019 (1 commit)
    • Support decoding blob indexes in sst_dump (#5926) · fdc1cb43
      Committed by Levi Tamasi
      Summary:
      The patch adds a new command line parameter --decode_blob_index to sst_dump.
      If this switch is specified, sst_dump prints blob indexes in a human readable format,
      printing the blob file number, offset, size, and expiration (if applicable) for blob
      references, and the blob value (and expiration) for inlined blobs.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5926
      
      Test Plan:
      Used db_bench's BlobDB mode to generate SST files containing blob references with
      and without expiration, as well as inlined blobs with and without expiration (note: the
      latter are stored as plain values), and confirmed sst_dump correctly prints all four types
      of records.
      
      Differential Revision: D17939077
      
      Pulled By: ltamasi
      
      fbshipit-source-id: edc5f58fee94ba35f6699c6a042d5758f5b3963d
      fdc1cb43