1. 08 Nov, 2019 (6 commits)
  2. 07 Nov, 2019 (3 commits)
    • db_stress: improve TestGet() failure printing (#5989) · 111ebf31
      sdong committed
      Summary:
      Right now, in db_stress's CF consistency test's TestGet case, if a failure happens, we print keys and values as plain strings rather than in hex, so some bytes are not printed out, which makes debugging harder. Fix it by printing in hex instead.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5989
      
      Test Plan: Build db_stress and see that it passes.
      
      Differential Revision: D18363552
      
      fbshipit-source-id: 09d1b8f6fbff37441cbe7e63a1aef27551226cec
    • Workload generator (Mixgraph) based on prefix hotness (#5953) · 8ea087ad
      Zhichao Cao committed
      Summary:
      In the previous PR https://github.com/facebook/rocksdb/issues/4788, the user can use the db_bench mix_graph option to generate a workload modeled on the social graph, where keys are generated based on key access hotness. In this PR, the user can further model key-range hotness and fit it to a two-term exponential distribution. First, the user cuts the whole key space into small key ranges (e.g., the key ranges are the same size and the number of key ranges equals the number of SST files). Then, the user calculates the average access count per key of each key range as the key-range hotness. Next, the user fits the key-range hotness to the two-term exponential distribution f(x) = a*exp(b*x) + c*exp(d*x) to obtain the values of a, b, c, and d. These correspond to the db_bench parameters used in the example below (keyrange_dist_a, keyrange_dist_b, keyrange_dist_c, and keyrange_dist_d). Finally, the user can run db_bench specifying these parameters.
      For example:
      `./db_bench --benchmarks="mixgraph" -use_direct_io_for_flush_and_compaction=true -use_direct_reads=true -cache_size=268435456 -key_dist_a=0.002312 -key_dist_b=0.3467 -keyrange_dist_a=14.18 -keyrange_dist_b=-2.917 -keyrange_dist_c=0.0164 -keyrange_dist_d=-0.08082 -keyrange_num=30 -value_k=0.2615 -value_sigma=25.45 -iter_k=2.517 -iter_sigma=14.236 -mix_get_ratio=0.85 -mix_put_ratio=0.14 -mix_seek_ratio=0.01 -sine_mix_rate_interval_milliseconds=5000 -sine_a=350 -sine_b=0.0105 -sine_d=50000 --perf_level=2 -reads=1000000 -num=5000000 -key_size=48`
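      As a rough illustration of the model (a hedged sketch, not the actual db_bench code), the snippet below builds per-key-range weights from the fitted two-term exponential f(x) = a*exp(b*x) + c*exp(d*x) and samples key ranges proportionally; the parameter values are copied from the example command above.
      ```cpp
      // Hypothetical sketch of key-range selection under a fitted two-term
      // exponential hotness model; not the actual db_bench implementation.
      #include <cmath>
      #include <cstdio>
      #include <random>
      #include <vector>

      int main() {
        // Fitted parameters, copied from the example command line above.
        const double a = 14.18, b = -2.917, c = 0.0164, d = -0.08082;
        const int keyrange_num = 30;

        // f(x) = a*exp(b*x) + c*exp(d*x): modeled average access count per key
        // of key-range x (ranges ordered from hottest to coldest).
        std::vector<double> weight(keyrange_num);
        for (int x = 0; x < keyrange_num; ++x) {
          weight[x] = a * std::exp(b * x) + c * std::exp(d * x);
        }

        // Sample key ranges proportionally to their modeled hotness.
        std::mt19937_64 rng(0);
        std::discrete_distribution<int> pick(weight.begin(), weight.end());
        std::vector<long> hits(keyrange_num, 0);
        for (int i = 0; i < 1000000; ++i) {
          ++hits[pick(rng)];
        }
        for (int x = 0; x < keyrange_num; ++x) {
          std::printf("key-range %2d: %ld accesses\n", x, hits[x]);
        }
        return 0;
      }
      ```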
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5953
      
      Test Plan: Run db_bench with different parameters and check the results.
      
      Differential Revision: D18053527
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 171f8b3142bd76462f1967c58345ad7e4f84bab7
    • Enable write-conflict snapshot in stress tests (#5897) · 50804656
      Maysam Yabandeh committed
      Summary:
      DBImpl extends the public GetSnapshot() with a GetSnapshotForWriteConflictBoundary() method that takes snapshots specifically for write-write conflict checking. Compaction treats such snapshots differently so as not to GC a value written after the snapshot, so that the write conflict remains visible even after the compaction. The patch extends the stress tests with such snapshots.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5897
      
      Differential Revision: D17937476
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: bd8b0c578827990302194f63ae0181e15752951d
  3. 06 Nov, 2019 (3 commits)
  4. 05 Nov, 2019 (1 commit)
    • WritePrepared: Fix flaky test MaxCatchupWithNewSnapshot (#5850) · 52733b44
      Maysam Yabandeh committed
      Summary:
      MaxCatchupWithNewSnapshot tests that the snapshot sequence number will be larger than the max sequence number when the snapshot was taken. However, since the test does not have access to the max sequence number at the moment the snapshot was taken, it uses the max sequence number read afterwards, which could have advanced past the snapshot by then, making the test flaky.
      The fix is to compare with the max sequence number read before the snapshot was taken, which is a lower bound for its value at the time the snapshot was taken.
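      A minimal sketch of the corrected comparison, using the public DB API rather than the test's internals (the actual test code differs):
      ```cpp
      // Sketch of the non-flaky check: record the latest sequence number *before*
      // taking the snapshot and use it as a lower bound.
      #include <cassert>
      #include "rocksdb/db.h"

      void CheckSnapshotSequence(rocksdb::DB* db) {
        // Lower bound for the max sequence number at the time the snapshot is taken.
        rocksdb::SequenceNumber seq_before = db->GetLatestSequenceNumber();
        const rocksdb::Snapshot* snap = db->GetSnapshot();
        // Concurrent writers may advance the sequence number at any time, so a
        // value read *after* GetSnapshot() is not a valid bound; seq_before is.
        assert(snap->GetSequenceNumber() >= seq_before);
        db->ReleaseSnapshot(snap);
      }
      ```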
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5850
      
      Test Plan: ~/gtest-parallel/gtest-parallel --repeat=12800 ./write_prepared_transaction_test --gtest_filter="*MaxCatchupWithNewSnapshot*"
      
      Differential Revision: D17608926
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: b122ae5a27f982b290bd60da852e28d3c5eb0136
  5. 02 Nov, 2019 (3 commits)
  6. 01 Nov, 2019 (6 commits)
    • crash_test: disable periodic compaction in FIFO compaction. (#5993) · 5b656584
      sdong committed
      Summary:
      A recent commit makes the periodic compaction option valid in FIFO compaction, where it means TTL. But we failed to disable it in the crash test, causing an assert failure. Fix it by keeping it disabled.
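      For reference, the option combination the crash test now enforces can be expressed as follows (a sketch of the constraint, not the db_crashtest.py logic):
      ```cpp
      #include "rocksdb/options.h"

      rocksdb::Options MakeFifoCrashTestOptions() {
        rocksdb::Options options;
        options.compaction_style = rocksdb::kCompactionStyleFIFO;  // --compaction_style=2
        // Under FIFO compaction the periodic compaction option is interpreted as a
        // TTL, so the crash test keeps it disabled to avoid the assert failure.
        options.periodic_compaction_seconds = 0;  // --periodic_compaction_seconds=0
        return options;
      }
      ```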
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5993
      
      Test Plan: Restart "make crash_test" many times and make sure --periodic_compaction_seconds=0 is always used when --compaction_style=2 (FIFO) is chosen.
      
      Differential Revision: D18263223
      
      fbshipit-source-id: c91a802017d83ae89ac43827d1b0012861933814
    • Add new persistent 64-bit hash (#5984) · 18f57f5e
      Peter Dillinger committed
      Summary:
      For upcoming new SST filter implementations, we will use a new
      64-bit hash function (XXH3 preview, slightly modified). This change
      updates hash.{h,cc} for that change, adds unit tests, and out-of-lines
      the implementations to keep hash.h as clean/small as possible.
      
      In developing the unit tests, I discovered that the XXH3 preview always
      returns zero for the empty string. Zero is problematic for some
      algorithms (including an upcoming SST filter implementation) if it
      occurs more often than at the "natural" rate, so it should not be
      returned from trivial values using trivial seeds. I modified our fork
      of XXH3 to return a modest hash of the seed for the empty string.
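      The fix can be thought of as the following conceptual sketch (illustrative only; the actual change lives inside RocksDB's fork of XXH3, here called XXH3p, and differs in detail):
      ```cpp
      #include <cstdint>

      // For empty input, return a modest mix of the seed instead of the constant 0
      // that the unmodified XXH3 preview produced regardless of seed, so trivial
      // values hashed with trivial seeds do not all collapse to 0.
      uint64_t EmptyInputHashSketch(uint64_t seed) {
        seed ^= seed >> 33;
        seed *= 0xFF51AFD7ED558CCDULL;  // illustrative finalizer-style multiply
        seed ^= seed >> 33;
        return seed;
      }
      ```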
      
      With the hash function details out-of-lined from hash.h, it makes sense to
      enable XXH_INLINE_ALL, so that direct calls to XXH64/XXH32/XXH3p
      are inlined. To fix array-bounds warnings on some inline calls, I
      injected some casts to uintptr_t in xxhash.cc. (Issue reported to Yann.)
      Revised: reverted the use of XXH_INLINE_ALL for now. Some Facebook
      checks are unhappy about #include'ing the xxhash.cc file. I would
      fix that by renaming it to xxhash_cc.h, but to best preserve history I want
      to do that in a separate commit (PR) from the uintptr_t casts.
      
      Also updated filter_bench for this change, improving the performance
      predictability of dry run hashing and adding support for 64-bit hash
      (for upcoming new SST filter implementations, minor dead code in the
      tool for now).
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5984
      
      Differential Revision: D18246567
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 6162fbf6381d63c8cc611dd7ec70e1ddc883fbb8
    • Prepare filter tests for more implementations (#5967) · 685e8956
      Peter Dillinger committed
      Summary:
      This change sets up for alternate implementations underlying
      BloomFilterPolicy:
      
      * Refactor BloomFilterPolicy and expose in internal .h file so that it's easy to iterate over / select implementations for testing, regardless of what the best public interface will look like. Most notably updated db_bloom_filter_test to use this.
      * Hide FullFilterBitsBuilder from unit tests (alternate derived classes planned); expose the part important for testing (CalculateSpace), as abstract class BuiltinFilterBitsBuilder. (Also cleaned up internally exposed interface to CalculateSpace.)
      * Rename BloomTest -> BlockBasedBloomTest for clarity (despite ongoing confusion between block-based table and block-based filter)
      * Assert that the block-based filter construction interface is only used on an appropriately constructed BloomFilterPolicy. (A couple of tests were updated to add ", true"; see the sketch below.)
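      The ", true" mentioned above is the second argument of the public factory function, which selects the legacy block-based filter format; a minimal usage sketch:
      ```cpp
      #include <memory>
      #include "rocksdb/filter_policy.h"
      #include "rocksdb/options.h"
      #include "rocksdb/table.h"

      rocksdb::Options MakeOptionsWithBlockBasedFilter() {
        rocksdb::BlockBasedTableOptions table_options;
        // The second argument selects the old block-based filter format; tests
        // exercising that interface now construct the policy with ", true".
        table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(
            10 /* bits_per_key */, true /* use_block_based_builder */));
        rocksdb::Options options;
        options.table_factory.reset(
            rocksdb::NewBlockBasedTableFactory(table_options));
        return options;
      }
      ```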
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5967
      
      Test Plan: make check
      
      Differential Revision: D18138704
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 55ef9273423b0696309e251f50b8c1b5e9ec7597
    • Disable pre-5.5 versions in the format compatibility test (#5990) · 351e2540
      Levi Tamasi committed
      Summary:
      We have updated earlier release branches going back to 5.5 so they are
      built using gcc7 by default. Disabling ancient versions before that
      until we figure out a plan for them.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5990
      
      Test Plan: Ran the script locally.
      
      Differential Revision: D18252386
      
      Pulled By: ltamasi
      
      fbshipit-source-id: a7bbb30dc52ff2eaaf31a29ecc79f7cf4e2834dc
    • Support periodic compaction in universal compaction (#5970) · aa6f7d09
      sdong committed
      Summary:
      Previously, periodic compaction was not supported in universal compaction. Add the support using the following approach: if any file is marked as qualified for periodic compaction, trigger a full compaction. If a full compaction is prevented by files currently being compacted, try to compact higher levels than the files currently being compacted. If in this way we can only compact the last sorted run and none of the files to be compacted qualifies for periodic compaction, skip the compaction. This prevents the same single-level compaction from being executed again and again.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5970
      
      Test Plan: Add several test cases.
      
      Differential Revision: D18147097
      
      fbshipit-source-id: 8ecc308154d9aca96fb192c51fbceba3947550c1
    • Make FIFO compaction take default 30 days TTL by default (#5987) · 2a9e5caf
      sdong committed
      Summary:
      Right now, by default, FIFO compaction has no TTL. We believe a default TTL of 30 days will be better. With this patch, the default is changed to 30 days. The default of Options.periodic_compaction_seconds will mean the same as options.ttl. If Options.ttl and Options.periodic_compaction_seconds are both left at their defaults, a 30-day TTL will be used. If both options are set, the stricter value of the two will be used.
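      A hedged sketch of how the two options interact under FIFO compaction after this change:
      ```cpp
      #include "rocksdb/options.h"

      rocksdb::Options FifoTtlExample() {
        rocksdb::Options options;
        options.compaction_style = rocksdb::kCompactionStyleFIFO;

        // Case 1: leave options.ttl and options.periodic_compaction_seconds at
        // their defaults -> a 30-day TTL is used.

        // Case 2: set both -> the stricter (smaller) value wins; here the
        // effective TTL would be 10 days.
        options.ttl = 10 * 24 * 60 * 60;                           // 10 days
        options.periodic_compaction_seconds = 20 * 24 * 60 * 60;   // 20 days
        return options;
      }
      ```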
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5987
      
      Test Plan: Add an option sanitize test to cover the case.
      
      Differential Revision: D18237935
      
      fbshipit-source-id: a6dcea1f36c3849e13c0a69e413d73ad8eab58c9
  7. 31 Oct, 2019 (3 commits)
    • Turn compaction asserts to runtime check (#5935) · dccaf9f0
      Maysam Yabandeh committed
      Summary:
      Compaction iterator has many assert statements that are active only during test runs. Some rare bugs could show up only at runtime and violate the assert conditions, yet go unnoticed since assert statements are not compiled into release builds. Turning the assert statements into runtime checks has some pros and cons:
      Pros:
      - A bug that would result in incorrect data would be detected early, before the incorrect data is written to disk.
      
      Cons:
      - Runtime overhead, which should be negligible since compaction CPU is a minority of the overall CPU usage.
      - The assert conditions might already be violated at runtime, and turning them into runtime failures might result in reliability issues.
      
      The patch takes a conservative step in this direction by logging the assert violations at runtime. If we see any violation reported in the logs, we investigate. Otherwise, we can go ahead and turn them into runtime errors.
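      The conservative step amounts to keeping the assert while also reporting the violation at runtime, roughly like the sketch below (the actual patch reports through the compaction iterator's info log, not stderr):
      ```cpp
      #include <cassert>
      #include <cstdio>

      // Sketch of the "assert plus runtime report" pattern described above.
      void CheckCompactionInvariant(bool condition, const char* what) {
        if (!condition) {
          // In release builds the assert below compiles away, so also report the
          // violation at runtime. If such reports never appear in the wild, the
          // checks can later be promoted to hard runtime errors.
          std::fprintf(stderr, "compaction invariant violated: %s\n", what);
        }
        assert(condition);
      }
      ```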
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5935
      
      Differential Revision: D18229697
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: f1890eca80ccd7cca29737f1825badb9aa8038a8
    • crash_test: disable atomic flush with pipelined write (#5986) · 0337d87b
      sdong committed
      Summary:
      Recently, pipelined write was enabled even if atomic flush is enabled, which causes a sanitizing failure in db_stress. Revert this change.
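      In option terms, this is the combination being avoided (a sketch of the constraint, not the db_stress sanitization code):
      ```cpp
      #include "rocksdb/options.h"

      rocksdb::Options MakeAtomicFlushOptions() {
        rocksdb::Options options;
        options.atomic_flush = true;
        // Pipelined write stays off when atomic flush is enabled; enabling both is
        // the combination that caused the sanitizing failure in db_stress.
        options.enable_pipelined_write = false;
        return options;
      }
      ```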
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5986
      
      Test Plan: Run "make crash_test_with_atomic_flush" and see it to run for some while so that the old sanitizing error (which showed up quickly) doesn't show up.
      
      Differential Revision: D18228278
      
      fbshipit-source-id: 27fdf2f8e3e77068c9725a838b9bef4ab25a2553
    • Add more release branches to tools/check_format_compatible.sh (#5985) · 15119f08
      sdong committed
      Summary:
      More release branches have been created. We should include them in the continuous format compatibility checks.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5985
      
      Test Plan: Let's see whether it passes.
      
      Differential Revision: D18226532
      
      fbshipit-source-id: 75d8cad5b03ccea4ce16f00cea1f8b7893b0c0c8
  8. 30 Oct, 2019 (3 commits)
    • Move pipeline write waiting logic into WaitForPendingWrites() (#5716) · a3960fc8
      sdong committed
      Summary:
      In pipeline writing mode, memtable switching needs to wait for memtable writing to finish to make sure that when memtables are made immutable, inserts are not going to them. This is currently done in DBImpl::SwitchMemtable(). This is done after flush_scheduler_.TakeNextColumnFamily() is called to fetch the list of column families to switch. The function flush_scheduler_.TakeNextColumnFamily() itself, however, is not thread-safe when being called together with flush_scheduler_.ScheduleFlush().
      This change provides a fix, which moves the waiting logic before flush_scheduler_.TakeNextColumnFamily(). WaitForPendingWrites() is a natural place where the logic can happen.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5716
      
      Test Plan: Run all tests with ASAN and TSAN.
      
      Differential Revision: D18217658
      
      fbshipit-source-id: b9c5e765c9989645bf10afda7c5c726c3f82f6c3
    • db_stress: CF Consistency check to use random CF to validate iterator results (#5983) · f22aaf8b
      sdong committed
      Summary:
      Right now, in db_stress's iterator tests, we always use the same CF to validate iterator results. This commit changes it so that a random CF is used in the CF consistency test, where every CF should hold exactly the same data. This would help catch more bugs.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5983
      
      Test Plan: Run "make crash_test_with_atomic_flush".
      
      Differential Revision: D18217643
      
      fbshipit-source-id: 3ac998852a0378bb59790b20c5f236f6a5d681fe
    • Auto enable Periodic Compactions if a Compaction Filter is used (#5865) · 4c9aa30a
      Sagar Vemuri committed
      Summary:
      - Periodic compactions are auto-enabled if a compaction filter or a compaction filter factory is set, in Level Compaction.
      - The default value of `periodic_compaction_seconds` is changed to UINT64_MAX, which lets RocksDB auto-tune periodic compactions as needed. An explicit value of 0 will still work as before, i.e. to disable periodic compactions completely. For now, on seeing a compaction filter along with a UINT64_MAX value for `periodic_compaction_seconds`, RocksDB will make SST files older than 30 days go through periodic compactions.
      
      Some RocksDB users make use of compaction filters to control when their data can be deleted, usually with custom TTL logic. But it is occasionally possible that compactions get delayed by a considerable time before the TTL expiry, due to factors like few writes to a key range, data having reached the bottom level, etc. The Periodic Compactions feature was originally built to help such cases. Now periodic compactions are auto-enabled by default when compaction filters or compaction filter factories are used, as collecting garbage is generally helpful in all such cases.
      
      `periodic_compaction_seconds` is set to a large value, 30 days, in `SanitizeOptions` when RocksDB sees that a `compaction_filter` or `compaction_filter_factory` is used.
      
      This is done only for Level Compaction style.
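      A minimal sketch of the triggering configuration: with a compaction filter set and `periodic_compaction_seconds` left at its new default, SanitizeOptions turns the value into roughly 30 days (the filter below is a hypothetical no-op used only for illustration):
      ```cpp
      #include <string>
      #include "rocksdb/compaction_filter.h"
      #include "rocksdb/options.h"
      #include "rocksdb/slice.h"

      // Trivial compaction filter used only to illustrate the auto-enabling
      // behavior; it keeps every key.
      class KeepEverythingFilter : public rocksdb::CompactionFilter {
       public:
        bool Filter(int /*level*/, const rocksdb::Slice& /*key*/,
                    const rocksdb::Slice& /*existing_value*/,
                    std::string* /*new_value*/,
                    bool* /*value_changed*/) const override {
          return false;  // false == keep the key
        }
        const char* Name() const override { return "KeepEverythingFilter"; }
      };

      rocksdb::Options MakeOptionsWithCompactionFilter() {
        static KeepEverythingFilter filter;
        rocksdb::Options options;
        options.compaction_style = rocksdb::kCompactionStyleLevel;
        options.compaction_filter = &filter;
        // periodic_compaction_seconds is left at its new default (UINT64_MAX,
        // i.e. "auto-tune"); SanitizeOptions will turn it into ~30 days because a
        // compaction filter is present. Setting it to 0 still disables the feature.
        return options;
      }
      ```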
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5865
      
      Test Plan:
      - Added a new test `DBCompactionTest.LevelPeriodicCompactionWithCompactionFilters` to make sure that `periodic_compaction_seconds` is set if either `compaction_filter` or `compaction_filter_factory` options are set.
      - `COMPILE_WITH_ASAN=1 make check`
      
      Differential Revision: D17659180
      
      Pulled By: sagar0
      
      fbshipit-source-id: 4887b9cf2e53cf2dc93a7b658c6b15e1181217ee
  9. 29 Oct, 2019 (2 commits)
  10. 26 Oct, 2019 (4 commits)
  11. 25 Oct, 2019 (6 commits)
    • Clean up some filter tests and comments (#5960) · 013babc6
      Peter Dillinger committed
      Summary:
      Some filtering tests were unfriendly to new implementations of
      FilterBitsBuilder because of dynamic_cast to FullFilterBitsBuilder. Most
      of those have now been cleaned up, worked around, or at least changed
      from crash on dynamic_cast failure to individual test failure.
      
      Also put some clarifying comments on filter-related APIs.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5960
      
      Test Plan: make check
      
      Differential Revision: D18121223
      
      Pulled By: pdillinger
      
      fbshipit-source-id: e83827d9d5d96315d96f8e25a99cd70f497d802c
    • Update column families' log number altogether after flushing during recovery (#5856) · 2309fd63
      Yanqin Jin committed
      Summary:
      A bug occasionally shows up in crash test, and https://github.com/facebook/rocksdb/issues/5851 reproduces it.
      The bug can surface in the following way.
      1. Database has multiple column families.
      2. Before a restart, the last log file gets corrupted in the middle (not at the tail).
      3. During the restart, the DB crashes between flushing two column families.
      
      Then the DB will fail to open again with the error "SST file is ahead of WALs".
      The solution is to update the log numbers associated with all column families together, after flushing all column families' memtables. The version edits should be written to a new MANIFEST. Only after writing all these version edits succeeds does RocksDB (atomically) point the CURRENT file to the new MANIFEST.
      
      Test plan (on devserver):
      ```
      $make all && make check
      ```
      Specifically
      ```
      $make db_test2
      $./db_test2 --gtest_filter=DBTest2.CrashInRecoveryMultipleCF
      ```
      Also checked for compatibility as follows.
      Use this branch, run DBTest2.CrashInRecoveryMultipleCF and preserve the db directory.
      Then checkout 5.4, build ldb, and dump the MANIFEST.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5856
      
      Differential Revision: D17620818
      
      Pulled By: riversand963
      
      fbshipit-source-id: b52ce5969c9a8052cacec2bd805fcfb373589039
    • Misc hashing updates / upgrades (#5909) · ca7ccbe2
      Peter Dillinger committed
      Summary:
      - Updated our included xxhash implementation to version 0.7.2 (== the latest dev version as of 2019-10-09).
      - Using XXH_NAMESPACE (like other fb projects) to avoid potential name collisions.
      - Added fastrange64, and unit tests for it and fastrange32. These are faster alternatives to hash % range (see the sketch after this list).
      - Use preview version of XXH3 instead of MurmurHash64A for NPHash64
      -- Had to update cache_test to increase probability of passing for any given hash function.
      - Use fastrange64 instead of % with uses of NPHash64
      -- Had to fix WritePreparedTransactionTest.CommitOfDelayedPrepared to avoid deadlock apparently caused by new hash collision.
      - Set default seed for NPHash64 because specifying a seed rarely makes sense for it.
      - Removed unnecessary include xxhash.h in a popular .h file
      - Rename preview version of XXH3 to XXH3p for clarity and to ease backward compatibility in case final version of XXH3 is integrated.
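      The fastrange technique referenced in the first bullet maps a hash onto [0, range) with a multiply-and-shift instead of a modulo; a generic sketch (the helpers added in this change live in util/hash.h and may differ in names and argument order):
      ```cpp
      // Generic sketch of the fastrange idea (multiply-shift reduction);
      // illustrative only, not the exact RocksDB helpers.
      #include <cstdint>

      inline uint32_t FastRange32Sketch(uint32_t hash, uint32_t range) {
        // Equivalent in distribution to hash % range for well-mixed hashes, but
        // uses a multiply and a shift instead of a division.
        return static_cast<uint32_t>((uint64_t{hash} * range) >> 32);
      }

      inline uint64_t FastRange64Sketch(uint64_t hash, uint64_t range) {
        // Needs 128-bit multiplication; unsigned __int128 is a GCC/Clang extension.
        return static_cast<uint64_t>(
            (static_cast<unsigned __int128>(hash) * range) >> 64);
      }
      ```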
      
      Relying on existing unit tests for NPHash64-related changes. Each new implementation of fastrange64 passed unit tests when manipulating my local build to select it. I haven't done any integration performance tests, but I consider the improved performance of the pieces being swapped in to be well established.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5909
      
      Differential Revision: D18125196
      
      Pulled By: pdillinger
      
      fbshipit-source-id: f6bf83d49d20cbb2549926adf454fd035f0ecc0d
    • FilterPolicy consolidation, part 2/2 (#5966) · ec11eff3
      Peter Dillinger committed
      Summary:
      The parts that are used to implement FilterPolicy /
      NewBloomFilterPolicy and not used other than for the block-based table
      should be consolidated under table/block_based/filter_policy*.
      
      This change is step 2 of 2:
      mv util/bloom.cc table/block_based/filter_policy.cc
      This gets its own PR so that git has the best chance of following the
      rename for blame purposes. Note that low-level shared implementation
      details of Bloom filters remain in util/bloom_impl.h, and
      util/bloom_test.cc remains where it is for now.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5966
      
      Test Plan: make check
      
      Differential Revision: D18124930
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 823bc09025b3395f092ef46a46aa5ba92a914d84
    • Propagate SST and blob file numbers through the EventListener interface (#5962) · f7e7b34e
      Levi Tamasi committed
      Summary:
      This patch adds a number of new information elements to the FlushJobInfo and
      CompactionJobInfo structures that are passed to EventListeners via the
      OnFlush{Begin, Completed} and OnCompaction{Begin, Completed} callbacks.
      Namely, for flushes, the file numbers of the new SST and the oldest blob file it
      references are propagated. For compactions, the new pieces of information are
      the file number, level, and the oldest blob file referenced by each compaction
      input and output file.
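      A hedged sketch of consuming the new flush-side information through the public listener API; the field names used here (file_number, oldest_blob_file_number) are assumed from the description above and may differ slightly:
      ```cpp
      #include <cstdio>
      #include "rocksdb/listener.h"

      class BlobAwareListener : public rocksdb::EventListener {
       public:
        void OnFlushCompleted(rocksdb::DB* /*db*/,
                              const rocksdb::FlushJobInfo& info) override {
          // Assumed field names: the number of the newly written SST file and the
          // oldest blob file it references.
          std::printf("flushed SST #%llu (oldest blob file #%llu)\n",
                      static_cast<unsigned long long>(info.file_number),
                      static_cast<unsigned long long>(info.oldest_blob_file_number));
        }
      };

      // Registration (sketch): options.listeners.emplace_back(
      //     std::make_shared<BlobAwareListener>());
      ```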
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5962
      
      Test Plan:
      Extended the EventListener unit tests with logic that checks that these information
      elements are correctly propagated from the corresponding FileMetaData.
      
      Differential Revision: D18095568
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 6874359a6aadb53366b5fe87adcb2f9bd27a0a56
    • FilterPolicy consolidation, part 1/2 (#5963) · dd19014a
      Peter Dillinger committed
      Summary:
      The parts that are used to implement FilterPolicy /
      NewBloomFilterPolicy and not used other than for the block-based table
      should be consolidated under table/block_based/filter_policy*. I don't
      foresee sharing these APIs with e.g. the Plain Table because they don't
      expose hashes for reuse in indexing.
      
      This change is step 1 of 2:
      (a) mv table/full_filter_bits_builder.h to
      table/block_based/filter_policy_internal.h which I expect to expand
      soon to internally reveal more implementation details for testing.
      (b) consolidate eventual contents of table/block_based/filter_policy.cc
      in util/bloom.cc, which has the most elaborate revision history
      (see step 2 ...)
      
      Step 2 soon to follow:
      mv util/bloom.cc table/block_based/filter_policy.cc
      This gets its own PR so that git has the best chance of following the
      rename for blame purposes. Note that low-level shared implementation
      details of Bloom filters are in util/bloom_impl.h.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5963
      
      Test Plan: make check
      
      Differential Revision: D18121199
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 8f21732c3d8909777e3240e4ac3123d73140326a