1. 20 Sep 2019 (1 commit)
  2. 19 Sep 2019 (6 commits)
    • Remove snap_refresh_nanos option (#5826) · 6ec6a4a9
      Maysam Yabandeh committed
      Summary:
      The snap_refresh_nanos option didn't bring much benefit. Remove the feature to simplify the code.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5826
      
      Differential Revision: D17467147
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 4f950b046990d0d1292d7fc04c2ccafaf751c7f0
    • Refactor deletefile_test.cc (#5822) · a9c5e8e9
      Yanqin Jin committed
      Summary:
      Make DeleteFileTest inherit DBTestBase to avoid code duplication.
      
      Test Plan (on devserver)
      ```
      $ make deletefile_test
      $ ./deletefile_test
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5822
      
      Differential Revision: D17456750
      
      Pulled By: riversand963
      
      fbshipit-source-id: 224e97967da7b98838a98981cd5095d3230a814f
    • Make clang-analyzer happy (#5821) · 2cbb61ea
      Levi Tamasi committed
      Summary:
      clang-analyzer has uncovered a bunch of places where the code is relying
      on pointers being valid and one case (in VectorIterator) where a moved-from
      object is being used:
      
      In file included from db/range_tombstone_fragmenter.cc:17:
      ./util/vector_iterator.h:23:18: warning: Method called on moved-from object 'keys' of type 'std::vector'
              current_(keys.size()) {
                       ^~~~~~~~~~~
      1 warning generated.
      utilities/persistent_cache/block_cache_tier_file.cc:39:14: warning: Called C++ object pointer is null
        Status s = env->NewRandomAccessFile(filepath, file, opt);
                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:47:19: warning: Called C++ object pointer is null
        Status status = env_->GetFileSize(Path(), size);
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:290:14: warning: Called C++ object pointer is null
        Status s = env_->FileExists(Path());
                   ^~~~~~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:363:35: warning: Called C++ object pointer is null
          CacheWriteBuffer* const buf = alloc_->Allocate();
                                        ^~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:399:41: warning: Called C++ object pointer is null
        const uint64_t file_off = buf_doff_ * alloc_->BufferSize();
                                              ^~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:463:33: warning: Called C++ object pointer is null
        size_t start_idx = lba.off_ / alloc_->BufferSize();
                                      ^~~~~~~~~~~~~~~~~~~~
      utilities/persistent_cache/block_cache_tier_file.cc:515:5: warning: Called C++ object pointer is null
          alloc_->Deallocate(bufs_[i]);
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      7 warnings generated.
      ar: creating librocksdb_debug.a
      utilities/memory/memory_test.cc:68:25: warning: Called C++ object pointer is null
            cache_set->insert(db->GetDBOptions().row_cache.get());
                              ^~~~~~~~~~~~~~~~~~
      1 warning generated.
      
      The patch fixes these by adding assertions and explicitly passing in zero
      when initializing VectorIterator::current_ (which preserves the existing
      behavior).
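      The moved-from pattern behind the VectorIterator warning can be sketched as follows (class and member names simplified, not the actual RocksDB code): the member initializer list read `keys.size()` after `keys` had already been moved into the `keys_` member. Passing zero explicitly, as the patch does, avoids touching a moved-from object while preserving the existing behavior (a moved-from `std::vector` is empty in practice).

      ```cpp
      #include <cassert>
      #include <string>
      #include <utility>
      #include <vector>

      // Simplified stand-in for VectorIterator's constructor.
      class VectorIteratorSketch {
       public:
        explicit VectorIteratorSketch(std::vector<std::string> keys)
            : keys_(std::move(keys)),
              // Before the fix: current_(keys.size()) read the moved-from
              // parameter. Passing 0 explicitly keeps the old behavior
              // without relying on a moved-from object's state.
              current_(0) {}

        size_t current() const { return current_; }
        size_t size() const { return keys_.size(); }

       private:
        std::vector<std::string> keys_;
        size_t current_;
      };
      ```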
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5821
      
      Test Plan: Ran make check and make analyze to make sure the warnings have disappeared.
      
      Differential Revision: D17455949
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 363619618ea649a0674287f9f3b3393e390571ee
    • Remove unneeded unlock statement (#5809) · 2389aa2d
      Summary:
      The destructor will unlock automatically.
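      The reasoning can be sketched with `std::lock_guard` standing in for RocksDB's own lock wrapper: the guard's destructor releases the mutex at scope exit, so an explicit unlock just before the end of the scope is redundant.

      ```cpp
      #include <cassert>
      #include <mutex>

      std::mutex mu;

      void CriticalSection() {
        std::lock_guard<std::mutex> guard(mu);
        // ... work under the lock ...
        // mu.unlock();  // redundant: guard's destructor unlocks at scope exit
      }
      ```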
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5809
      
      Differential Revision: D17453694
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 5348bff8e6a620a05ff639a5454e8d82ae98a22d
    • Refactor ObsoleteFilesTest to inherit from DBTestBase (#5820) · 6a279037
      Yanqin Jin committed
      Summary:
      Make class ObsoleteFilesTest inherit from DBTestBase.
      
      Test plan (on devserver):
      ```
      $ COMPILE_WITH_ASAN=1 make obsolete_files_test
      $ ./obsolete_files_test
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5820
      
      Differential Revision: D17452348
      
      Pulled By: riversand963
      
      fbshipit-source-id: b09f4581a18022ca2bfd79f2836c0bf7083f5f25
    • Adding support for deleteFilesInRanges in JNI (#4031) · 3a408eea
      Tomas Kolda committed
      Summary:
      It is a very useful method call for achieving https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4031
      
      Differential Revision: D13515418
      
      Pulled By: vjnadimpalli
      
      fbshipit-source-id: 930b48e0992ef07fd1edd0b0cb5f780fabb1b4b5
  3. 18 Sep 2019 (3 commits)
  4. 17 Sep 2019 (13 commits)
  5. 16 Sep 2019 (1 commit)
  6. 14 Sep 2019 (8 commits)
    • sst_dump recompress show #blocks compressed and not compressed (#5791) · 2ed91622
      Peter (Stig) Edwards committed
      Summary:
      Closes https://github.com/facebook/rocksdb/issues/1474
      Helps show when the 12.5% threshold for GoodCompressionRatio (originally from ldb) is hit.
      
      Example output:
      
      ```
      > ./sst_dump --file=/tmp/test.sst --command=recompress
      from [] to []
      Process /tmp/test.sst
      Sst file format: block-based
      Block Size: 16384
      Compression: kNoCompression           Size:  122579836 Blocks:   2300 Compressed:      0 (  0.0%) Not compressed (ratio):   2300 (100.0%) Not compressed (abort):      0 (  0.0%)
      Compression: kSnappyCompression       Size:   46289962 Blocks:   2300 Compressed:   2119 ( 92.1%) Not compressed (ratio):    181 (  7.9%) Not compressed (abort):      0 (  0.0%)
      Compression: kZlibCompression         Size:   29689825 Blocks:   2300 Compressed:   2301 (100.0%) Not compressed (ratio):      0 (  0.0%) Not compressed (abort):      0 (  0.0%)
      Unsupported compression type: kBZip2Compression.
      Compression: kLZ4Compression          Size:   44785490 Blocks:   2300 Compressed:   1950 ( 84.8%) Not compressed (ratio):    350 ( 15.2%) Not compressed (abort):      0 (  0.0%)
      Compression: kLZ4HCCompression        Size:   37498895 Blocks:   2300 Compressed:   2301 (100.0%) Not compressed (ratio):      0 (  0.0%) Not compressed (abort):      0 (  0.0%)
      Unsupported compression type: kXpressCompression.
      Compression: kZSTD                    Size:   32208707 Blocks:   2300 Compressed:   2301 (100.0%) Not compressed (ratio):      0 (  0.0%) Not compressed (abort):      0 (  0.0%)
      ```
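      The 12.5% threshold can be sketched as follows. This is a simplified illustration, not the exact RocksDB expression: a block counts as compressed only if it shrinks by more than one eighth of its raw size; otherwise it is kept uncompressed.

      ```cpp
      #include <cassert>
      #include <cstddef>

      // A block is stored compressed only if the compressed size is smaller
      // than the raw size by more than 12.5% (one eighth).
      bool GoodCompressionRatio(size_t compressed_size, size_t raw_size) {
        return compressed_size < raw_size - (raw_size / 8);
      }
      ```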
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5791
      
      Differential Revision: D17347870
      
      fbshipit-source-id: af10849c010b46b20e54162b70123c2805ffe526
    • merging_iterator.cc: Small refactoring (#5793) · bf5dbc17
      sdong committed
      Summary:
      1. Move the similar logic of adding a valid iterator to the heap and checking an invalid iterator's status code into the same helper functions.
      2. Because of 1, in the direction-changing case, move the status checks around a little so that the helper function can be called there too. The logic only diverges when the iterator is valid but its status is not OK, which is not expected to happen; add an assertion for that.
      3. Move the logic for changing direction from forward to backward into a separate function so that the unlikely code path is not in Prev().
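      A rough sketch of the helper pattern described in (1), with hypothetical names and stand-in types: one place either adds a valid child iterator to the heap or surfaces the status of an invalid one.

      ```cpp
      #include <cassert>
      #include <vector>

      // Stand-in for a child iterator; a plain vector stands in for the heap.
      struct ChildIter {
        bool valid;
        bool status_ok;
      };

      // Either push a valid child onto the heap, or record the error status
      // of an invalid one. A valid iterator with a non-OK status is not
      // expected; the assertion mirrors the one the PR adds.
      void AddToHeapOrCheckStatus(ChildIter* child, std::vector<ChildIter*>* heap,
                                  bool* saw_error) {
        if (child->valid) {
          assert(child->status_ok);
          heap->push_back(child);
        } else if (!child->status_ok) {
          *saw_error = true;
        }
      }
      ```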
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5793
      
      Test Plan: run all existing tests.
      
      Differential Revision: D17374397
      
      fbshipit-source-id: d595ffcf156095c4bd0f5532bacba854482a2332
    • Allow ingesting overlapping files (#5539) · 97631357
      Igor Canadi committed
      Summary:
      Currently IngestExternalFile() fails when its input files' ranges overlap. This condition doesn't need to hold for files that are to be ingested in L0, though.
      
      This commit allows overlapping files and forces their target level to L0.
      
      Additionally, ingest job's completion is logged to EventLogger, analogous to flush and compaction jobs.
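      The rule can be sketched as follows (names and types hypothetical, not the actual ingestion code): if any two input files' key ranges intersect, they cannot be placed below L0, so the target level is forced to 0.

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <string>
      #include <vector>

      struct FileRange {
        std::string smallest, largest;
      };

      // Sort by smallest key, then check each adjacent pair for overlap.
      bool HaveOverlap(std::vector<FileRange> files) {
        std::sort(files.begin(), files.end(),
                  [](const FileRange& a, const FileRange& b) {
                    return a.smallest < b.smallest;
                  });
        for (size_t i = 1; i < files.size(); ++i) {
          if (files[i].smallest <= files[i - 1].largest) return true;
        }
        return false;
      }

      // Overlapping inputs force the target level to L0.
      int TargetLevel(const std::vector<FileRange>& files, int preferred_level) {
        return HaveOverlap(files) ? 0 : preferred_level;
      }
      ```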
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5539
      
      Differential Revision: D17370660
      
      Pulled By: riversand963
      
      fbshipit-source-id: 749a3899b17d1be267a5afd5b0a99d96b38ab2f3
    • Refactor ArenaWrappedDBIter into separate files (#5801) · 83a6a614
      anand76 committed
      Summary:
      Move definition and implementation for ArenaWrappedDBIter into its own .h/.cc files. Also, change inlining of functions to better comply with the Google C++ style guide.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5801
      
      Test Plan: make check
      
      Differential Revision: D17371012
      
      Pulled By: anand1976
      
      fbshipit-source-id: c1361abc2851575111e357a63d88be3b3d6cb341
    • Clean up + fix build scripts re: USE_SSE= and PORTABLE= (#5800) · 6a171724
      Peter Dillinger committed
      Summary:
      In preparing to utilize a new Intel instruction extension, I
      noticed problems with the existing build script in regard to the
      existing utilized extensions, either with USE_SSE or PORTABLE flags.
      
      * PORTABLE=0 was interpreted the same as PORTABLE=1. Now empty and 0
      mean the same. (I guess you were not supposed to set PORTABLE= if you
      wanted non-portable--except that...)
      * The Facebook build script extensions would set PORTABLE=1 even if
      it's already set in a make var or environment. Now it does not override
      a non-empty setting, so use PORTABLE=0 for fully optimized build,
      overriding Facebook environment default.
      * Put in an explanation of the USE_SSE flag where it's used by
      build_detect_platform, and cleaned up some confusing/redundant
      associated logic.
      * If USE_SSE was set and expected intrinsics were not available,
      build_detect_platform would exit early, but the build would proceed with a
      broken, incomplete configuration. Now that case is reported as a warning
      and recovered from gracefully.
      * If USE_SSE was set and expected intrinsics were not available,
      build would still try to use flags like -msse4.2 etc. which could lead
      to unexpected compilation failure or binary incompatibility. Now those
      flags are not used if the warning is issued.
      
      This should not break or change existing, valid build scripts.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5800
      
      Test Plan: manual case testing
      
      Differential Revision: D17369543
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 4ee244911680ae71144d272c40aceea548e3ce88
    • Update history.md for option memtable_insert_hint_per_batch (#5799) · 9ba88a1e
      Lingjing You committed
      Summary:
      Update history.md for option memtable_insert_hint_per_batch
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5799
      
      Differential Revision: D17369186
      
      fbshipit-source-id: 71d82f9d99d9a52d1475d1b0153670957b6111e9
    • Update HISTORY.md for option to make write group size configurable (#5798) · 27f516ac
      Ronak Sisodia committed
      Summary:
      Update HISTORY.md for option to make write group size configurable .
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5798
      
      Differential Revision: D17369062
      
      fbshipit-source-id: 390a3fa0b01675e91879486a729cf2cc7624d106
    • Refactor some confusing logic in PlainTableReader · aa2486b2
      Peter Dillinger committed
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5780
      
      Test Plan: existing plain table unit test
      
      Differential Revision: D17368629
      
      Pulled By: pdillinger
      
      fbshipit-source-id: f25409cdc2f39ebe8d5cbb599cf820270e6b5d26
  7. 13 Sep 2019 (3 commits)
    • Add insert hints for each writebatch (#5728) · 1a928c22
      Lingjing You committed
      Summary:
      Add an insert hint for each WriteBatch so that it can be used in concurrent writes, and add a write option to enable it.
      
      Bench result (qps):
      
      `./db_bench --benchmarks=fillseq -allow_concurrent_memtable_write=true -num=4000000 -batch-size=1 -threads=1 -db=/data3/ylj/tmp -write_buffer_size=536870912 -num_column_families=4`
      
      master:
      
      | batch size \ thread num | 1       | 2       | 4       | 8       |
      | ----------------------- | ------- | ------- | ------- | ------- |
      | 1                       | 387883  | 220790  | 308294  | 490998  |
      | 10                      | 1397208 | 978911  | 1275684 | 1733395 |
      | 100                     | 2045414 | 1589927 | 1798782 | 2681039 |
      | 1000                    | 2228038 | 1698252 | 1839877 | 2863490 |
      
      fillseq with writebatch hint:
      
      | batch size \ thread num | 1       | 2       | 4       | 8       |
      | ----------------------- | ------- | ------- | ------- | ------- |
      | 1                       | 286005  | 223570  | 300024  | 466981  |
      | 10                      | 970374  | 813308  | 1399299 | 1753588 |
      | 100                     | 1962768 | 1983023 | 2676577 | 3086426 |
      | 1000                    | 2195853 | 2676782 | 3231048 | 3638143 |
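      The benefit an insert hint provides can be illustrated with `std::map`'s hinted insert, used here purely as an analogy for the memtable skip list, not as the PR's implementation: for sequential keys, a good hint turns each insert into an amortized constant-time operation instead of a full O(log n) search.

      ```cpp
      #include <cassert>
      #include <cstdio>
      #include <map>
      #include <string>

      // Insert 1000 ascending keys with a hinted insert.
      std::map<std::string, int> BuildTable() {
        std::map<std::string, int> table;
        for (int i = 0; i < 1000; ++i) {
          char key[16];
          std::snprintf(key, sizeof(key), "key%06d", i);
          // end() is the correct hint for ascending keys: insertion just
          // before the hint is amortized O(1).
          table.emplace_hint(table.end(), key, i);
        }
        return table;
      }
      ```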
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5728
      
      Differential Revision: D17297240
      
      fbshipit-source-id: b053590a6d77871f1ef2f911a7bd013b3899b26c
    • arm64 crc prefetch optimise (#5773) · a378a4c2
      HouBingjian committed
      Summary:
      Prefetch the data of the following block to avoid cache misses during CRC calculation.
      
      Performance was tested on a Kunpeng 920 server (ARMv8, 64 cores @ 2.6 GHz):
      ./db_bench --benchmarks=crc32c --block_size=500000000
      before optimization: 587313.500 micros/op 1 ops/sec;  811.9 MB/s (500000000 per op)
      after optimization : 289248.500 micros/op 3 ops/sec; 1648.5 MB/s (500000000 per op)
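      The technique can be sketched as follows. This is a hypothetical illustration, not the commit's ARM assembly: while checksumming the current block, issue a prefetch for the next one so its cache lines are already in flight when the loop reaches it. A trivial rolling sum stands in for the real CRC32C, and `__builtin_prefetch` is a GCC/Clang builtin.

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <cstdint>

      uint32_t ChecksumWithPrefetch(const uint8_t* data, size_t len) {
        constexpr size_t kBlock = 1024;
        uint32_t sum = 0;
        for (size_t off = 0; off < len; off += kBlock) {
          if (off + kBlock < len) {
            __builtin_prefetch(data + off + kBlock);  // hint: next block soon
          }
          const size_t end = (off + kBlock < len) ? off + kBlock : len;
          for (size_t i = off; i < end; ++i) {
            sum = sum * 31u + data[i];  // stand-in for CRC32C
          }
        }
        return sum;
      }
      ```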
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5773
      
      Differential Revision: D17347339
      
      fbshipit-source-id: bfcd74f0f0eb4b322b959be68019ddcaae1e3341
    • Temporarily disable hash index in stress tests (#5792) · d35ffd56
      Levi Tamasi committed
      Summary:
      PR https://github.com/facebook/rocksdb/issues/4020 implicitly enabled the hash index as well in stress/crash
      tests, resulting in assertion failures in Block. This patch disables
      the hash index until we can pinpoint the root cause of these issues.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5792
      
      Test Plan:
      Ran tools/db_crashtest.py and made sure it only uses index types 0 and 2
      (binary search and partitioned index).
      
      Differential Revision: D17346777
      
      Pulled By: ltamasi
      
      fbshipit-source-id: b4318f37f1fda3ee1bbff4ef2c2f556ca9e6b551
  8. 12 Sep 2019 (5 commits)
    • Fix RocksDB bug in block_cache_trace_analyzer.cc on Windows (#5786) · e8c2e68b
      Adam Retter committed
      Summary:
      This is required to compile on Windows with Visual Studio 2015.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5786
      
      Differential Revision: D17335994
      
      fbshipit-source-id: 8f9568310bc6f697e312b5e24ad465e9084f0011
    • Option to make write group size configurable (#5759) · d05c0fe4
      Ronak Sisodia committed
      Summary:
      The maximum batch size that can be written to the WAL was controlled in a static manner: if the leader write is smaller than 128 KB, the group size limit is the leader write size plus 128 KB; otherwise the limit is 1 MB. Both values were statically defined. This change adds an option to make the write group size configurable.
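      The static sizing rule described above can be sketched as follows (function name hypothetical): the group may grow past the leader by 128 KB of follower batches, but never past 1 MB in total.

      ```cpp
      #include <cassert>
      #include <cstdint>

      uint64_t MaxWriteBatchGroupSize(uint64_t leader_size) {
        const uint64_t kMaxSize = 1ull << 20;  // 1 MB overall cap
        const uint64_t kSlack = 128ull << 10;  // 128 KB of follower room
        // Small leader: leader size + 128 KB. Large leader: hard 1 MB cap.
        return leader_size < kSlack ? leader_size + kSlack : kMaxSize;
      }
      ```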
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5759
      
      Differential Revision: D17329298
      
      fbshipit-source-id: a3d910629d8d8ca84ea39ad89c2b2d284571ded5
    • Use delete to disable automatic generated methods. (#5009) · 9eb3e1f7
      Shylock Hg committed
      Summary:
      Use `= delete` to disable automatically generated methods instead of declaring them private, and group the constructors together for clarity. This modification causes an unused-field warning, so add the unused attribute to suppress it.
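      A minimal sketch of the idiom: deleting the copy operations makes misuse a clear compile-time error at the call site, and the deleted members can sit right next to the other constructors instead of being hidden in a `private:` section.

      ```cpp
      #include <cassert>
      #include <type_traits>

      class NonCopyable {
       public:
        NonCopyable() = default;
        // Deleted copy operations, grouped with the default constructor.
        NonCopyable(const NonCopyable&) = delete;
        NonCopyable& operator=(const NonCopyable&) = delete;
      };

      static_assert(!std::is_copy_constructible<NonCopyable>::value,
                    "copying is disabled at compile time");
      static_assert(!std::is_copy_assignable<NonCopyable>::value,
                    "assignment is disabled at compile time");
      ```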
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5009
      
      Differential Revision: D17288733
      
      fbshipit-source-id: 8a767ce096f185f1db01bd28fc88fef1cdd921f3
    • record the timestamp on first configure (#4799) · fcda80fc
      Wilfried Goesgens committed
      Summary:
      With this change, cmake no longer re-generates the timestamp on subsequent builds, which avoids needless rebuilds of the library.
      
      This improves compile-time turnaround when RocksDB is included as a compilable library: previously, the timestamp .cc file was re-generated on every build, re-compiling and re-linking the RocksDB library even though nothing in the source had actually changed.
      The original timestamp is recorded into `CMakeCache.txt` and will remain there until you flush this cache.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4799
      
      Differential Revision: D17290040
      
      fbshipit-source-id: 28357fef3422693c9c19e88fa2873c8db0f662ed
    • Support partitioned index and filters in stress/crash tests (#4020) · dd2a35f1
      Andrew Kryczka committed
      Summary:
      - In `db_stress`, support choosing index type and whether to enable filter partitioning, and randomly set those options in crash test
      - When partitioned filter is enabled by crash test, force partitioned index to also be enabled since it's a prerequisite
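      The sanitization rule in the second bullet can be sketched as follows (struct and value names hypothetical): if the random configuration picks partitioned filters, the two-level (partitioned) index must be forced on, since it is a prerequisite.

      ```cpp
      #include <cassert>

      struct StressConfig {
        bool partition_filters;
        int index_type;  // 0 = binary search, 2 = two-level (partitioned)
      };

      StressConfig Sanitize(StressConfig cfg) {
        if (cfg.partition_filters) {
          cfg.index_type = 2;  // partitioned filters require a partitioned index
        }
        return cfg;
      }
      ```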
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4020
      
      Test Plan:
      currently this is blocked on fixing the bug that crash test caught:
      
      ```
      $ TEST_TMPDIR=/data/compaction_bench python ./tools/db_crashtest.py blackbox --simple --interval=10 --max_key=10000000
      ...
      Verification failed for column family 0 key 937501: Value not found: NotFound:
      Crash-recovery verification failed :(
      ```
      
      Differential Revision: D8508683
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 0337e5d0558bcef26b1f3699f47265a2c1e99629