1. 28 Sep 2019, 4 commits
    • Explicitly declare atomic flush incompatible with pipelined write (#5860) · 643df920
      Committed by Yanqin Jin
      Summary:
      Atomic flush is incompatible with pipelined write, at least for now. If
      pipelined write is enabled, a thread performing a write can exit the write
      thread and start inserting into memtables. Consequently, a thread performing
      a flush can enter the write thread and race with the former's memtable
      insertion, causing undefined results in terms of data persistence.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5860
      
      Test Plan:
      ```
      $make all && make check
      ```
      
      Differential Revision: D17638944
      
      Pulled By: riversand963
      
      fbshipit-source-id: abc578dc49a5dbe41bc5adcecf448f8e042a6d49
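      The race described above boils down to two options that must not be enabled together. Below is a minimal sketch of such an open-time validation check, with illustrative names (not RocksDB's actual option structs or validation code):

      ```cpp
      #include <cassert>

      // Hypothetical sketch: the two options are mutually exclusive, so an
      // option-validation step must reject the pair before the DB opens.
      struct DBOptionsSketch {
        bool atomic_flush = false;
        bool enable_pipelined_write = false;
      };

      // Returns false for the invalid combination, standing in for a
      // Status-style error returned at open time.
      bool ValidateOptions(const DBOptionsSketch& opts) {
        // A flusher entering the write thread while a pipelined writer inserts
        // into memtables is the race described above, so forbid the pair.
        return !(opts.atomic_flush && opts.enable_pipelined_write);
      }
      ```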
    • db_stress: fix run time error when prefix_size = -1 (#5862) · 5cd8aaf7
      Committed by sdong
      Summary:
      When prefix_size = -1, the stress test crashes with a runtime error because of an overflow. Fix it by using 7 instead of -1 in prefix scan mode.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5862
      
      Test Plan:
      Run
      python -u tools/db_crashtest.py --simple whitebox --random_kill_odd \
            888887 --compression_type=zstd
      and see it doesn't crash.
      
      Differential Revision: D17642313
      
      fbshipit-source-id: f029e7651498c905af1b1bee6d310ae50cdcda41
    • crash_test to do some verification for prefix extractor and iterator bounds. (#5846) · 679a45d0
      Committed by sdong
      Summary:
      For now, crash_test is not able to report any failure in the logic related to iterator upper/lower bounds, prefix iterators, or reseek. These features are prone to errors. Improve db_stress in several ways:
      (1) For each iterator run, reseek up to 3 times.
      (2) For every iterator, create a control iterator with the same upper/lower bounds but with total-order seek, and compare the results against the iterator.
      (3) Make the simple crash test avoid a prefix size setting so there is more coverage.
      (4) Make prefix_size = 0 a valid size, and use -1 to indicate a disabled prefix extractor.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5846
      
      Test Plan: Manually hack the code to create wrong results and see they are caught by the tool.
      
      Differential Revision: D17631760
      
      fbshipit-source-id: acd460a177bd2124a5ffd7fff490702dba63030b
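      The cross-check in (2) can be sketched with an illustrative model: results seen through a bounded scan must equal a total-order control scan filtered by the same bounds. Here std::map stands in for the DB; none of this is db_stress's actual code:

      ```cpp
      #include <cassert>
      #include <map>
      #include <string>
      #include <vector>

      // Bounded scan: start at the lower bound, stop at the upper bound.
      std::vector<std::string> BoundedScan(
          const std::map<std::string, std::string>& db,
          const std::string& lower, const std::string& upper) {
        std::vector<std::string> keys;
        for (auto it = db.lower_bound(lower);
             it != db.end() && it->first < upper; ++it) {
          keys.push_back(it->first);
        }
        return keys;
      }

      // Control scan: total-order walk over everything, filtered afterwards.
      // Any divergence from BoundedScan would indicate a bug.
      std::vector<std::string> ControlScan(
          const std::map<std::string, std::string>& db,
          const std::string& lower, const std::string& upper) {
        std::vector<std::string> keys;
        for (const auto& kv : db) {
          if (kv.first >= lower && kv.first < upper) keys.push_back(kv.first);
        }
        return keys;
      }
      ```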
    • Add unordered write option rocksjava (#5839) · 51185592
      Committed by Chen, You
      Summary:
      Add the unordered_write option API and related unit tests to rocksjava.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5839
      
      Differential Revision: D17604446
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: c6b07e85ca9d5e3a92973ddb6ab2bc079e53c9c1
  2. 27 Sep 2019, 2 commits
    • Add TryCatchUpWithPrimary to StackableDB (#5855) · ae458357
      Committed by Yanqin Jin
      Summary:
      As the title says.
      
      Test Plan (on devserver):
      ```
      $make all && make check
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5855
      
      Differential Revision: D17615125
      
      Pulled By: riversand963
      
      fbshipit-source-id: bd6ed8cf59eafff41f0d1fc044f39e8f3573172a
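      A sketch of why a stackable wrapper needs this override: without forwarding, a call through the wrapper would hit the base class default instead of the wrapped DB's implementation. The types below are illustrative stand-ins, with int return codes in place of Status:

      ```cpp
      #include <cassert>

      // Stand-in for the DB base class: the default virtual returns an
      // error code (think Status::NotSupported).
      struct DBLike {
        virtual ~DBLike() = default;
        virtual int TryCatchUpWithPrimary() { return -1; }
      };

      // Stand-in for a secondary DB instance that actually implements it.
      struct SecondaryDB : DBLike {
        int TryCatchUpWithPrimary() override { return 0; }  // "OK"
      };

      // Stand-in for StackableDB: the commit adds a forwarding override so
      // the wrapped DB's implementation is reached, not the base default.
      struct StackableDBSketch : DBLike {
        explicit StackableDBSketch(DBLike* db) : db_(db) {}
        int TryCatchUpWithPrimary() override {
          return db_->TryCatchUpWithPrimary();
        }
        DBLike* db_;
      };
      ```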
    • Add a unit test to reproduce a corruption bug (#5851) · 76e951db
      Committed by sdong
      Summary:
      This is a bug that occasionally shows up in the crash test, and this unit test reproduces it. The bug is the following:
      1. The database has multiple CFs.
      2. Before one DB restart, the last log file is corrupted in the middle (not at the tail).
      3. During the restart, the DB crashes between the flushes of two CFs.
      The DB will then fail to be opened again with the error "SST file is ahead of WALs".
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5851
      
      Test Plan: Run the test itself.
      
      Differential Revision: D17614721
      
      fbshipit-source-id: 1b0abce49b203a76a039e38e76bc940429975f20
  3. 25 Sep 2019, 2 commits
    • Fix a bug in format_version 3 + partition filters + prefix search (#5835) · 6652c94f
      Committed by Maysam Yabandeh
      Summary:
      Partitioned filters make use of a top-level index to find the partition in which the filter resides. The top-level index has one key per partition, and that key is guaranteed to be greater than or equal to any key in the partition. When used with format_version 3, which excludes the sequence number from index keys, the separator key in the index can be equal to the prefix of the keys in the next partition. In that case, when searching for the key, the top-level index leads us to the previous partition, which has no key with that prefix. The prefix bloom test thus returns false, although the prefix exists in the bloom of the next partition.
      The patch fixes this with a hack: it always adds the prefix of the first key of the next partition to the bloom of the current partition. This way, in the corner case where the index leads us to the previous partition, we can still find the prefix in the bloom filter there.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5835
      
      Differential Revision: D17513585
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: e2d1ff26c759e6e03875c4d57f4228316ecf50e9
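      A toy model of the corner case and the fix, with a std::set standing in for the partition's bloom filter (purely illustrative, not the real index or filter code):

      ```cpp
      #include <cassert>
      #include <set>
      #include <string>
      #include <vector>

      // Each partition carries a bloom-like set of prefixes and a separator
      // key that is >= every key in the partition.
      struct Partition {
        std::string separator;
        std::set<std::string> prefixes;  // stand-in for the bloom filter
      };

      // Top-level index lookup followed by a bloom probe. With format_version
      // 3 the separator can shorten to exactly the prefix of the next
      // partition's keys, so a probe for that prefix lands one partition
      // early; the fix adds the next partition's first prefix to the
      // current partition's bloom so the probe still succeeds.
      bool MayContainPrefix(const std::vector<Partition>& parts,
                            const std::string& prefix) {
        for (const auto& p : parts) {
          if (prefix <= p.separator) {  // index sends us to this partition
            return p.prefixes.count(prefix) > 0;
          }
        }
        return false;
      }
      ```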
    • Add class comment for Block · c9932d18
      Committed by Levi Tamasi
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5832
      
      Differential Revision: D17550773
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 66972bb008516e55b6fbba58ddd10234346d5d11
  4. 24 Sep 2019, 3 commits
    • Update HISTORY.md for stop manual compaction · 02554b3c
      Committed by WangQingping
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5837
      
      Differential Revision: D17529753
      
      fbshipit-source-id: 98bbf22c690384b2f440286151dffdaaa744e97c
    • Remove invalid comparison of va_list and nullptr (#5836) · 2367656b
      Committed by Yikun Jiang
      Summary:
      Comparing a va_list with nullptr is always false on any arch, and raises an invalid-operands error in an aarch64 environment (`error: invalid operands of types ‘va_list {aka __va_list}’ and ‘std::nullptr_t’ to binary ‘operator!=’`).
      
      This patch removes this invalid assert.
      
      Closes: https://github.com/facebook/rocksdb/issues/4277
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5836
      
      Differential Revision: D17532470
      
      fbshipit-source-id: ca98078ecbc6a9416c69de3bd6ffcfa33a0f0185
    • Fix format-diff.sh detecting changes vs. upstream (#5831) · 42f898bf
      Committed by Peter Dillinger
      Summary:
      format-diff.sh, a.k.a. 'make format', used 'master' to decide which
      commits are probably unpublished. It is much better to use the facebook
      remote's master, since the local master may not be caught up and may
      have its own unpublished commits. The script now tries to compare against
      the facebook remote's master branch (the branch pointer is updated by any
      fetch or pull), because those differences are what would be considered
      the differences for a pull request.
      
      Also, the script used to compare against the *parent* of the merge-base
      with that reference point, which is just wrong since that includes the
      last published commit.
      
      In case of problems, you can now customize the reference point by
      setting the FORMAT_UPSTREAM variable.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5831
      
      Test Plan: manual
      
      Differential Revision: D17528462
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 50fdb8795d683bf3c14d449669c1a5299e0dfa8b
  5. 21 Sep 2019, 2 commits
  6. 20 Sep 2019, 1 commit
  7. 19 Sep 2019, 6 commits
    • Remove snap_refresh_nanos option (#5826) · 6ec6a4a9
      Committed by Maysam Yabandeh
      Summary:
      The snap_refresh_nanos option didn't bring much benefit. Remove the feature to simplify the code.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5826
      
      Differential Revision: D17467147
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 4f950b046990d0d1292d7fc04c2ccafaf751c7f0
    • Refactor deletefile_test.cc (#5822) · a9c5e8e9
      Committed by Yanqin Jin
      Summary:
      Make DeleteFileTest inherit DBTestBase to avoid code duplication.
      
      Test Plan (on devserver)
      ```
      $make deletefile_test
      $./deletefile_test
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5822
      
      Differential Revision: D17456750
      
      Pulled By: riversand963
      
      fbshipit-source-id: 224e97967da7b98838a98981cd5095d3230a814f
    • Make clang-analyzer happy (#5821) · 2cbb61ea
      Committed by Levi Tamasi
      Summary:
      clang-analyzer has uncovered a bunch of places where the code is relying
      on pointers being valid and one case (in VectorIterator) where a moved-from
      object is being used:
      
      In file included from db/range_tombstone_fragmenter.cc:17:
      ./util/vector_iterator.h:23:18: warning: Method called on moved-from object 'keys' of type 'std::vector'
              current_(keys.size()) {
                       ^~~~~~~~~~~
      1 warning generated.
      utilities/persistent_cache/block_cache_tier_file.cc:39:14: warning: Called C++ object pointer is null
        Status s = env->NewRandomAccessFile(filepath, file, opt);
                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:47:19: warning: Called C++ object pointer is null
        Status status = env_->GetFileSize(Path(), size);
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:290:14: warning: Called C++ object pointer is null
        Status s = env_->FileExists(Path());
                   ^~~~~~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:363:35: warning: Called C++ object pointer is null
          CacheWriteBuffer* const buf = alloc_->Allocate();
                                        ^~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:399:41: warning: Called C++ object pointer is null
        const uint64_t file_off = buf_doff_ * alloc_->BufferSize();
                                              ^~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:463:33: warning: Called C++ object pointer is null
        size_t start_idx = lba.off_ / alloc_->BufferSize();
                                      ^~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:515:5: warning: Called C++ object pointer is null
          alloc_->Deallocate(bufs_[i]);
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      7 warnings generated.
      ar: creating librocksdb_debug.a
      utilities/memory/memory_test.cc:68:25: warning: Called C++ object pointer is null
            cache_set->insert(db->GetDBOptions().row_cache.get());
                              ^~~~~~~~~~~~~~~~~~
      1 warning generated.
      
      The patch fixes these by adding assertions and explicitly passing in zero
      when initializing VectorIterator::current_ (which preserves the existing
      behavior).
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5821
      
      Test Plan: Ran make check and make analyze to make sure the warnings have disappeared.
      
      Differential Revision: D17455949
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 363619618ea649a0674287f9f3b3393e390571ee
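      The VectorIterator part can be reproduced with a minimal pattern (simplified names, not the real class). Members are initialized in declaration order, so the original code read `keys.size()` after `keys` had already been moved from; the fix passes zero explicitly:

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <string>
      #include <utility>
      #include <vector>

      // Simplified reproduction of the moved-from warning: keys_ consumes
      // `keys` first, so current_ then reads a moved-from, unspecified value.
      struct BuggyIter {
        std::vector<std::string> keys_;
        std::size_t current_;
        explicit BuggyIter(std::vector<std::string> keys)
            : keys_(std::move(keys)), current_(keys.size()) {}  // flagged line
      };

      // The fix: initialize current_ with an explicit zero, preserving the
      // existing behavior without touching the moved-from object.
      struct FixedIter {
        std::vector<std::string> keys_;
        std::size_t current_;
        explicit FixedIter(std::vector<std::string> keys)
            : keys_(std::move(keys)), current_(0) {}
      };
      ```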
    • Remove unneeded unlock statement (#5809) · 2389aa2d
      Summary:
      The destructor will automatically unlock.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5809
      
      Differential Revision: D17453694
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 5348bff8e6a620a05ff639a5454e8d82ae98a22d
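      The point relies on standard RAII locking; a generic sketch with std::lock_guard, standing in for the lock type in the patched code:

      ```cpp
      #include <cassert>
      #include <mutex>

      int counter = 0;
      std::mutex mu;

      // The guard's destructor releases the mutex when the scope ends, so an
      // explicit unlock right before the closing brace is redundant.
      void Increment() {
        std::lock_guard<std::mutex> guard(mu);
        ++counter;
        // no mu.unlock() needed: ~lock_guard() runs here and unlocks
      }
      ```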
    • Refactor ObsoleteFilesTest to inherit from DBTestBase (#5820) · 6a279037
      Committed by Yanqin Jin
      Summary:
      Make class ObsoleteFilesTest inherit from DBTestBase.
      
      Test plan (on devserver):
      ```
      $COMPILE_WITH_ASAN=1 make obsolete_files_test
      $./obsolete_files_test
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5820
      
      Differential Revision: D17452348
      
      Pulled By: riversand963
      
      fbshipit-source-id: b09f4581a18022ca2bfd79f2836c0bf7083f5f25
    • Adding support for deleteFilesInRanges in JNI (#4031) · 3a408eea
      Committed by Tomas Kolda
      Summary:
      This is a very useful method call for achieving https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4031
      
      Differential Revision: D13515418
      
      Pulled By: vjnadimpalli
      
      fbshipit-source-id: 930b48e0992ef07fd1edd0b0cb5f780fabb1b4b5
  8. 18 Sep 2019, 3 commits
  9. 17 Sep 2019, 13 commits
  10. 16 Sep 2019, 1 commit
  11. 14 Sep 2019, 3 commits
    • sst_dump recompress show #blocks compressed and not compressed (#5791) · 2ed91622
      Committed by Peter (Stig) Edwards
      Summary:
      Closes https://github.com/facebook/rocksdb/issues/1474
      Helps show when the 12.5% threshold for GoodCompressionRatio (originally from ldb) is hit.
      
      Example output:
      
      ```
      > ./sst_dump --file=/tmp/test.sst --command=recompress
      from [] to []
      Process /tmp/test.sst
      Sst file format: block-based
      Block Size: 16384
      Compression: kNoCompression           Size:  122579836 Blocks:   2300 Compressed:      0 (  0.0%) Not compressed (ratio):   2300 (100.0%) Not compressed (abort):      0 (  0.0%)
      Compression: kSnappyCompression       Size:   46289962 Blocks:   2300 Compressed:   2119 ( 92.1%) Not compressed (ratio):    181 (  7.9%) Not compressed (abort):      0 (  0.0%)
      Compression: kZlibCompression         Size:   29689825 Blocks:   2300 Compressed:   2301 (100.0%) Not compressed (ratio):      0 (  0.0%) Not compressed (abort):      0 (  0.0%)
      Unsupported compression type: kBZip2Compression.
      Compression: kLZ4Compression          Size:   44785490 Blocks:   2300 Compressed:   1950 ( 84.8%) Not compressed (ratio):    350 ( 15.2%) Not compressed (abort):      0 (  0.0%)
      Compression: kLZ4HCCompression        Size:   37498895 Blocks:   2300 Compressed:   2301 (100.0%) Not compressed (ratio):      0 (  0.0%) Not compressed (abort):      0 (  0.0%)
      Unsupported compression type: kXpressCompression.
      Compression: kZSTD                    Size:   32208707 Blocks:   2300 Compressed:   2301 (100.0%) Not compressed (ratio):      0 (  0.0%) Not compressed (abort):      0 (  0.0%)
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5791
      
      Differential Revision: D17347870
      
      fbshipit-source-id: af10849c010b46b20e54162b70123c2805ffe526
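      The 12.5% threshold means compression must save at least one eighth of the raw block size for a block to count as "Compressed"; otherwise the block lands in the "Not compressed (ratio)" column. A sketch of that predicate, reconstructed from the description rather than copied from sst_dump:

      ```cpp
      #include <cassert>
      #include <cstddef>

      // A block passes only when the compressed size is below 87.5% of the
      // raw size, i.e. compression saves at least raw_size / 8 bytes.
      bool GoodCompressionRatio(std::size_t compressed_size,
                                std::size_t raw_size) {
        return compressed_size < raw_size - (raw_size / 8);
      }
      ```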
    • merging_iterator.cc: Small refactoring (#5793) · bf5dbc17
      Committed by sdong
      Summary:
      1. Move the similar logic for adding a valid iterator to the heap and checking an invalid iterator's status code into shared helper functions.
      2. Because of 1, in the direction-changing case, move around the places where we check status a little bit so that we can call the helper function there too. The logic only diverges in the case where the iterator is valid but the status is not OK, which is not expected to happen; add an assertion for that.
      3. Move the logic for changing direction from forward to backward into a separate function so the unlikely code path is not in Prev().
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5793
      
      Test Plan: run all existing tests.
      
      Differential Revision: D17374397
      
      fbshipit-source-id: d595ffcf156095c4bd0f5532bacba854482a2332
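      Point 1 can be sketched with an illustrative helper (names and types are invented for the sketch, not taken from merging_iterator.cc):

      ```cpp
      #include <cassert>
      #include <vector>

      // Stand-in for a child iterator: either positioned on a key (valid) or
      // exhausted, in which case its status decides whether that is an error.
      struct ChildIter {
        bool valid;
        bool status_ok;
      };

      // One shared place that either collects a valid child for the merge
      // heap or checks the invalid child's status. Returns false only when
      // an invalid child carries an error, which must stop the merge.
      bool AddToHeapOrCheckStatus(const ChildIter& child,
                                  std::vector<const ChildIter*>* heap) {
        if (child.valid) {
          heap->push_back(&child);
          return true;
        }
        return child.status_ok;  // invalid is fine only when status is OK
      }
      ```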
    • Allow ingesting overlapping files (#5539) · 97631357
      Committed by Igor Canadi
      Summary:
      Currently IngestExternalFile() fails when its input files' ranges overlap. This condition doesn't need to hold for files that are to be ingested in L0, though.
      
      This commit allows overlapping files and forces their target level to L0.
      
      Additionally, ingest job's completion is logged to EventLogger, analogous to flush and compaction jobs.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5539
      
      Differential Revision: D17370660
      
      Pulled By: riversand963
      
      fbshipit-source-id: 749a3899b17d1be267a5afd5b0a99d96b38ab2f3
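      The new placement rule can be sketched as: if the ingested files' key ranges overlap one another, force level 0; otherwise keep whatever target level would normally be picked. Illustrative code only, not the actual ingestion job:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <string>
      #include <utility>
      #include <vector>

      // [smallest, largest] user-key range of one file to ingest.
      using Range = std::pair<std::string, std::string>;

      // After sorting by smallest key, any file starting at or before the
      // previous file's largest key overlaps it.
      bool RangesOverlap(std::vector<Range> files) {
        std::sort(files.begin(), files.end());
        for (std::size_t i = 1; i < files.size(); ++i) {
          if (files[i].first <= files[i - 1].second) return true;
        }
        return false;
      }

      // Mutually overlapping files cannot coexist below L0, so force L0;
      // otherwise keep the normally chosen target level.
      int PickIngestLevel(const std::vector<Range>& files, int best_level) {
        return RangesOverlap(files) ? 0 : best_level;
      }
      ```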