1. June 29, 2022: 1 commit
  2. June 28, 2022: 4 commits
  3. June 26, 2022: 1 commit
    • Add API for writing wide-column entities (#10242) · c73d2a9d
      Committed by Levi Tamasi
      Summary:
      The patch builds on https://github.com/facebook/rocksdb/pull/9915 and adds
      a new API called `PutEntity` that can be used to write a wide-column entity
      to the database. The new API is added to both `DB` and `WriteBatch`. Note
      that currently there is no way to retrieve these entities; more precisely, all
      read APIs (`Get`, `MultiGet`, and iterator) return `NotSupported` when they
      encounter a wide-column entity that is required to answer a query. Read-side
      support (as well as other missing functionality like `Merge`, compaction filter,
      and timestamp support) will be added in later PRs.
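      
      As a quick illustration, here is a minimal usage sketch (hypothetical caller; it assumes the `PutEntity` signature and the `WideColumns` container introduced by this PR, and as noted above the entity cannot be read back yet at this point):
      
      ```cpp
      #include "rocksdb/db.h"
      #include "rocksdb/wide_columns.h"
      
      // Minimal sketch: write a wide-column entity under a single key.
      rocksdb::Status WriteEntity(rocksdb::DB* db) {
        // Each WideColumn is a (name, value) pair attached to the key.
        rocksdb::WideColumns columns{{"name", "alice"}, {"city", "london"}};
        return db->PutEntity(rocksdb::WriteOptions(), db->DefaultColumnFamily(),
                             "user:1", columns);
      }
      ```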
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10242
      
      Test Plan: `make check`
      
      Reviewed By: riversand963
      
      Differential Revision: D37369748
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 7f5e412359ed7a400fd80b897dae5599dbcd685d
  4. June 25, 2022: 4 commits
    • Temporarily disable mempurge in crash test (#10252) · f322f273
      Committed by Andrew Kryczka
      Summary:
      Need to disable it for now as CI is failing, particularly `MultiOpsTxnsStressTest`. Investigation details in internal task T124324915. This PR disables mempurge more widely than `MultiOpsTxnsStressTest` until we know the issue is contained to that particular test.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10252
      
      Reviewed By: riversand963
      
      Differential Revision: D37432948
      
      Pulled By: ajkr
      
      fbshipit-source-id: d0cf5b0e0ec7c3142c382a0347f35a4c34f4607a
    • Pass rate_limiter_priority through filter block reader functions to FS (#10251) · 8e63d90f
      Committed by Bo Wang
      Summary:
      With https://github.com/facebook/rocksdb/pull/9996, we can pass the rate_limiter_priority to the FS in most cases. This PR updates the code path for the filter block reader accordingly.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10251
      
      Test Plan: Current unit tests should pass.
      
      Reviewed By: pdillinger
      
      Differential Revision: D37427667
      
      Pulled By: gitbw95
      
      fbshipit-source-id: 1ce5b759b136efe4cfa48a6b97e2f837ff087433
    • Fix the flaky cursor persist test (#10250) · 410ca2ef
      Committed by zczhu
      Summary:
      The 'PersistRoundRobinCompactCursor' unit test in `db_compaction_test` may occasionally fail due to an inconsistent LSM state. The issue is fixed by adding `Flush()` and `WaitForFlushMemTable()` to produce a more predictable and stable LSM state.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10250
      
      Test Plan: 'PersistRoundRobinCompactCursor' unit test in `db_compaction_test`
      
      Reviewed By: jay-zhuang, riversand963
      
      Differential Revision: D37426091
      
      Pulled By: littlepig2013
      
      fbshipit-source-id: 56fbaab0384c380c1f279a16dc8732b139c9f611
    • Reduce overhead of SortFileByOverlappingRatio() (#10161) · 246d4697
      Committed by sdong
      Summary:
      Currently SortFileByOverlappingRatio() is O(n log n). This is usually fine, but when there are a lot of files in an LSM-tree, SortFileByOverlappingRatio() can take a non-trivial amount of time. The problem is severe when the user is loading keys in sorted order, where compactions are all trivial moves; this operation then becomes the bottleneck and limits the total throughput. This commit makes SortFileByOverlappingRatio() find only the top 50 files based on score. 50 files are usually enough for the parallel compactions needed at the level, and if they are not, we fall back to random selection, which should be acceptable.
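      
      The idea can be sketched as follows (hypothetical names, not the actual RocksDB code): replace the full sort with a linear-time top-k selection, then sort only the selected candidates.
      
      ```cpp
      #include <algorithm>
      #include <vector>
      
      struct FileScore {
        int file_id;
        double score;  // e.g. overlapping ratio; lower is picked first here
      };
      
      // Sketch: keep only the kTopK best files instead of sorting all of them.
      void SelectTopCandidates(std::vector<FileScore>& files) {
        constexpr size_t kTopK = 50;  // usually enough parallel compactions
        auto better = [](const FileScore& a, const FileScore& b) {
          return a.score < b.score;
        };
        if (files.size() > kTopK) {
          // O(n) on average, instead of O(n log n) for a full sort.
          std::nth_element(files.begin(), files.begin() + kTopK, files.end(),
                           better);
          files.resize(kTopK);
        }
        std::sort(files.begin(), files.end(), better);  // sort just the top 50
      }
      ```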
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10161
      
      Test Plan:
      Run a fillseq that generates a lot of files, and observe that throughput improved (although stalls are not yet eliminated). The command run was:
      
      TEST_TMPDIR=/dev/shm/ ./db_bench_sort --benchmarks=fillseq --compression_type=lz4 --write_buffer_size=5000000 --num=100000000 --value_size=1000
      
      The throughput improved by 11%.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37129469
      
      fbshipit-source-id: 492da2ef5bfc7cdd6daa3986b50d2ff91f88542d
  5. June 24, 2022: 9 commits
    • BlobDB in crash test hitting assertion (#10249) · 052666ae
      Committed by Gang Liao
      Summary:
      This task is to fix assertion failures during the crash test runs. The cache entry size might not match the value size because the value size can include the on-disk (possibly compressed) size. Therefore, we removed the assertions.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10249
      
      Reviewed By: ltamasi
      
      Differential Revision: D37407576
      
      Pulled By: gangliao
      
      fbshipit-source-id: 577559f267c5b2437bcd0631cd0efabb6dde3b69
    • Fix race condition between file purge and backup/checkpoint (#10187) · 725df120
      Committed by Yanqin Jin
      Summary:
      Resolves https://github.com/facebook/rocksdb/issues/10129
      
      I extracted this fix from https://github.com/facebook/rocksdb/issues/7516 since it's also already a bug in the main branch, and we want to separate it from the main part of the PR.
      
      There can be a race condition between two threads. Thread 1 executes
      `DBImpl::FindObsoleteFiles()` while thread 2 executes `GetSortedWals()`.
      ```
      Time   thread 1                                thread 2
        |  mutex_.lock
        |  read disable_delete_obsolete_files_
        |  ...
        |  wait on log_sync_cv and release mutex_
        |                                          mutex_.lock
        |                                          ++disable_delete_obsolete_files_
        |                                          mutex_.unlock
        |                                          mutex_.lock
        |                                          while (pending_purge_obsolete_files > 0) { bg_cv.wait;}
        |                                          wake up with mutex_ locked
        |                                          compute WALs tracked by MANIFEST
        |                                          mutex_.unlock
        |  wake up with mutex_ locked
        |  ++pending_purge_obsolete_files_
        |  mutex_.unlock
        |
        |  delete obsolete WAL
        |                                          WAL missing but tracked in MANIFEST.
        V
      ```
      
      The proposed fix eliminates the possibility of the above by incrementing
      `pending_purge_obsolete_files_` before `FindObsoleteFiles()` can possibly release the mutex.
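      
      A toy model of the ordering change (illustration only; the real code lives in `DBImpl`):
      
      ```cpp
      #include <mutex>
      
      struct PurgeState {
        std::mutex mu;                         // stands in for DBImpl::mutex_
        int pending_purge_obsolete_files = 0;  // stands in for the DBImpl counter
      };
      
      // Before the fix, the increment happened after a wait that released the
      // mutex, so a concurrent GetSortedWals() could run in the gap. The fix
      // increments the counter while the mutex is still held, before anything
      // can release it.
      void FindObsoleteFilesFixed(PurgeState& s) {
        std::lock_guard<std::mutex> lock(s.mu);
        ++s.pending_purge_obsolete_files;  // visible before any wait releases mu
        // ... gather obsolete files; any subsequent wait is now safe ...
      }
      ```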
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10187
      
      Test Plan: make check
      
      Reviewed By: ltamasi
      
      Differential Revision: D37214235
      
      Pulled By: riversand963
      
      fbshipit-source-id: 556ab1b58ae6d19150169dfac4db08195c797184
    • Wrapper for benchmark.sh to run a sequence of db_bench tests (#10215) · 60619057
      Committed by Mark Callaghan
      Summary:
      This provides two things:
      1) It runs a sequence of db_bench tests. This sequence was chosen to provide
      good coverage with less variance.
      2) It makes it easier to do A/B testing of multiple binaries. It combines
      the report.tsv files into summary.tsv to make it easier to compare results
      across multiple binaries.
      
      Example output for 2) is:
      
      ops_sec mb_sec  lsm_sz  blob_sz c_wgb   w_amp   c_mbps  c_wsecs c_csecs b_rgb   b_wgb   usec_op p50     p99     p99.9   p99.99  pmax    uptime  stall%  Nstall  u_cpu   s_cpu   rss     test    date    version job_id
      1115171 446.7   9GB             8.9     1.0     454.7   26      26      0       0       0.9     0.5     2       7       51      5547    20      0.0     0       0.1     0.1     0.2     fillseq.wal_disabled.v400       2022-04-12T08:53:51     6.0
      1045726 418.9   8GB     0.0GB   8.4     1.0     432.4   27      26      0       0       1.0     0.5     2       6       102     5618    20      0.0     0       0.1     0.0     0.1     fillseq.wal_disabled.v400       2022-04-12T12:25:36     6.28
      
      ops_sec mb_sec  lsm_sz  blob_sz c_wgb   w_amp   c_mbps  c_wsecs c_csecs b_rgb   b_wgb   usec_op p50     p99     p99.9   p99.99  pmax    uptime  stall%  Nstall  u_cpu   s_cpu   rss     test    date    version job_id
      2969192 1189.3  16GB            0.0             0.0     0       0       0       0       10.8    9.3     25      33      49      13551   1781    0.0     0       48.2    6.8     16.8    readrandom.t32  2022-04-12T08:54:28     6.0
      2692922 1078.6  16GB    0.0GB   0.0             0.0     0       0       0       0       11.9    10.2    30      38      56      49735   1781    0.0     0       47.8    6.7     16.8    readrandom.t32  2022-04-12T12:26:15     6.28
      
      ...
      
      ops_sec mb_sec  lsm_sz  blob_sz c_wgb   w_amp   c_mbps  c_wsecs c_csecs b_rgb   b_wgb   usec_op p50     p99     p99.9   p99.99  pmax    uptime  stall%  Nstall  u_cpu   s_cpu   rss     test    date    version job_id
      180227  72.2    38GB            1126.4  8.7     643.2   3286    3218    0       0       177.6   50.2    2687    4083    6148    854083  1793    68.4    7804    17.0    5.9     0.5     overwrite.t32.s0        2022-04-12T11:55:21     6.0
      236512  94.7    31GB    0.0GB   1502.9  8.9     862.2   5242    5125    0       0       135.3   59.9    2537    3268    5404    18545   1785    49.7    5112    25.5    8.0     9.4     overwrite.t32.s0        2022-04-12T15:27:25     6.28
      
      Example output with formatting preserved is here:
      https://gist.github.com/mdcallag/4432e5bbaf91915c916d46bd6ce3c313
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10215
      
      Test Plan: run it
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37299892
      
      Pulled By: mdcallag
      
      fbshipit-source-id: e6e0ed638fd7e8deeb869d700593fdc3eba899c8
    • Add suggest_compact_range() and suggest_compact_range_cf() to C API. (#10175) · 2a3792ed
      Committed by Yueh-Hsuan Chiang
      Summary:
      Add suggest_compact_range() and suggest_compact_range_cf() to C API.
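      
      A hypothetical usage sketch (the signature is assumed to mirror `rocksdb_compact_range()`; verify against `rocksdb/c.h`):
      
      ```cpp
      #include "rocksdb/c.h"
      
      /* Suggest compaction of the key range ["a", "z"); hypothetical sketch. */
      void SuggestRange(rocksdb_t* db) {
        char* err = NULL;
        rocksdb_suggest_compact_range(db, "a", 1, "z", 1, &err);
        if (err != NULL) {
          /* inspect/log the error, then free it */
          rocksdb_free(err);
        }
      }
      ```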
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10175
      
      Test Plan:
      As verifying the result requires SyncPoint, which is not available in c_test.c,
      the test currently just invokes the functions and makes sure they do not crash.
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37305191
      
      Pulled By: ajkr
      
      fbshipit-source-id: 0fe257b45914f6c9aeb985d8b1820dafc57a20db
    • Cut output files at compaction cursors (#10227) · 17a1d65e
      Committed by zczhu
      Summary:
      The files behind the compaction cursor contain newer data than the files ahead of it. If a compaction writes a file that spans from before its output level’s cursor to after it, then data before the cursor will be contaminated with the old timestamp from the data after the cursor. To avoid this, we can split the output file into two – one entirely before the cursor and one entirely after the cursor. Note that, in rare cases, we **DO NOT** need to cut the file if it is a trivial move, since the file will not be contaminated by older files. In such a case, the compact cursor is not guaranteed to be the boundary of the file, but it does not hurt the round-robin selection process.
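      
      A simplified sketch of the cut decision (hypothetical helper, not the actual compaction code): while emitting output, a new file is started once the next key reaches the output level's cursor, so no single output file straddles it.
      
      ```cpp
      #include <string>
      
      // Hypothetical helper: returns true when the output writer should close
      // the current file and open a new one, so no output file spans the cursor.
      bool ShouldCutAtCursor(const std::string& next_key,
                             const std::string& cursor,
                             bool current_file_started_before_cursor,
                             bool is_trivial_move) {
        if (is_trivial_move) return false;  // trivially moved files are not cut
        return current_file_started_before_cursor && next_key >= cursor;
      }
      ```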
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10227
      
      Test Plan:
      Add 'RoundRobinCutOutputAtCompactCursor' unit test in `db_compaction_test`
      
      Task: [T122216351](https://www.internalfb.com/intern/tasks/?t=122216351)
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37388088
      
      Pulled By: littlepig2013
      
      fbshipit-source-id: 9246a6a084b6037b90d6ab3183ba4dfb75a3378d
    • Read from blob cache first when MultiGetBlob() (#10225) · ba1f62dd
      Committed by Gang Liao
      Summary:
      There is currently no caching mechanism for blobs, which is not ideal especially when the database resides on remote storage (where we cannot rely on the OS page cache). As part of this task, we would like to make it possible for the application to configure a blob cache.
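      
      As a sketch of the intended configuration (the `blob_cache` column family option was being introduced in this PR series; the option names are assumptions based on the PR description):
      
      ```cpp
      #include "rocksdb/cache.h"
      #include "rocksdb/options.h"
      
      // Sketch: dedicate a 1 GiB LRU cache to blob values.
      rocksdb::Options MakeBlobOptions() {
        rocksdb::Options options;
        options.enable_blob_files = true;
        options.blob_cache = rocksdb::NewLRUCache(1ULL << 30);
        return options;
      }
      ```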
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10225
      
      Test Plan:
      Added test cases for `MultiGetBlob()`, the new API this task introduces for `BlobSource`.
      
      This PR is a part of https://github.com/facebook/rocksdb/issues/10156
      
      Reviewed By: ltamasi
      
      Differential Revision: D37358364
      
      Pulled By: gangliao
      
      fbshipit-source-id: aff053a37615d96d768fb9aedde17da5618c7ae6
    • Fix key size in cache_bench (#10234) · b52620ab
      Committed by Guido Tagliavini Ponce
      Summary:
      cache_bench wasn't generating 16B keys, which are necessary for FastLRUCache. Also:
      - Added asserts in cache_bench, which assumes that inserts never fail. When they do fail (for example, if we use keys of the wrong size), memory allocated to the values is leaked, and eventually the program crashes (see the sketch after this list).
      - Moved kCacheKeySize to the right spot.
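      
      The added check looks roughly like this (hypothetical wrapper; cache_bench's actual code differs):
      
      ```cpp
      #include <cassert>
      #include <cstddef>
      #include "rocksdb/cache.h"
      
      // Hypothetical wrapper: a failed Insert would otherwise silently leak the
      // heap-allocated value (e.g. when the key size is wrong for FastLRUCache).
      void CheckedInsert(rocksdb::Cache* cache, const rocksdb::Slice& key,
                         void* value, size_t charge,
                         void (*deleter)(const rocksdb::Slice&, void*)) {
        rocksdb::Status s = cache->Insert(key, value, charge, deleter);
        assert(s.ok());
      }
      ```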
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10234
      
      Test Plan:
      ``make -j24 check``. Also, run cache_bench with FastLRUCache and check that memory usage doesn't blow up:
      ``./cache_bench -cache_type=fast_lru_cache -num_shard_bits=6 -skewed=true \
                              -lookup_insert_percent=100 -lookup_percent=0 -insert_percent=0 -erase_percent=0 \
                              -populate_cache=true -cache_size=1073741824 -ops_per_thread=10000000 \
                              -value_bytes=8192 -resident_ratio=1 -threads=16``
      
      Reviewed By: pdillinger
      
      Differential Revision: D37382949
      
      Pulled By: guidotag
      
      fbshipit-source-id: b697a942ebb215de5d341f98dc8566763436ba9b
    • Don't count no prefix as Bloom hit (#10244) · f81ea75d
      Committed by Peter Dillinger
      Summary:
      When a key is "out of domain" for the prefix_extractor (no
      prefix assigned), the Bloom filter is not queried. PerfContext
      was counting this as a Bloom "hit" while Statistics doesn't count it
      as a prefix Bloom check. I think it's more accurate to call it neither
      hit nor miss, so this change makes the PerfContext counting
      more like Statistics.
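      
      Conceptually, the change introduces a third outcome for out-of-domain keys (illustrative model, not RocksDB's actual counters):
      
      ```cpp
      // Illustrative model: a key with no prefix assigned is counted as neither
      // hit nor miss, matching the Statistics behavior.
      enum class PrefixBloomOutcome { kNotChecked, kHit, kMiss };
      
      PrefixBloomOutcome CountPrefixBloom(bool key_in_prefix_domain,
                                          bool filter_may_match) {
        if (!key_in_prefix_domain) {
          return PrefixBloomOutcome::kNotChecked;  // previously counted as a hit
        }
        return filter_may_match ? PrefixBloomOutcome::kHit
                                : PrefixBloomOutcome::kMiss;
      }
      ```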
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10244
      
      Test Plan:
      Tests updated and expanded (Get and MultiGet). Iterator test
      coverage of the change will come in the next PR.
      
      Reviewed By: bjlemaire
      
      Differential Revision: D37371297
      
      Pulled By: pdillinger
      
      fbshipit-source-id: fed132fba6a92b2314ab898d449fce2d1586c157
    • Dynamically changeable `MemPurge` option (#10011) · 5879053f
      Committed by Baptiste Lemaire
      Summary:
      **Summary**
      Make the mempurge option flag a Mutable Column Family option flag. Therefore, the mempurge feature can be dynamically toggled.
      
      **Motivation**
      RocksDB users prefer having the ability to switch features on and off without having to close and reopen the DB. This is particularly important if the feature causes issues and needs to be turned off. Dynamically changing a DB option flag does not currently seem possible.
      Moreover, with this new change, the MemPurge feature can be toggled on or off independently between column families, which we see as a major improvement.
      
      **Content of this PR**
      This PR includes removal of the `experimental_mempurge_threshold` flag as a DB option flag, and its re-introduction as a `MutableCFOption` flag. I updated the code to handle dynamic changes of the flag (in particular inside the `FlushJob` file). Additionally, this PR includes a new test to demonstrate the ability of the code to toggle the MemPurge feature on and off, as well as the addition in the `db_stress` module of 2 different mempurge threshold values (0.0 and 1.0) that can be randomly changed with the `set_option_one_in` flag. This is useful to stress test the dynamic changes.
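      
      With the flag mutable, it can presumably be toggled at runtime via `DB::SetOptions()`; a sketch, with the option name taken from this PR:
      
      ```cpp
      #include <string>
      #include <unordered_map>
      #include "rocksdb/db.h"
      
      // Sketch: enable (1.0) or disable (0.0) mempurge without reopening the DB.
      rocksdb::Status ToggleMempurge(rocksdb::DB* db, bool enable) {
        return db->SetOptions(db->DefaultColumnFamily(),
                              {{"experimental_mempurge_threshold",
                                enable ? "1.0" : "0.0"}});
      }
      ```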
      
      **Benchmarking**
      Within the next 12 hours, I will add numbers showing that there is no performance impact.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10011
      
      Reviewed By: pdillinger
      
      Differential Revision: D36462357
      
      Pulled By: bjlemaire
      
      fbshipit-source-id: 5e3d63bdadf085c0572ecc2349e7dd9729ce1802
  6. June 23, 2022: 4 commits
    • Add the blob cache to the stress tests and the benchmarking tool (#10202) · 2352e2df
      Committed by Gang Liao
      Summary:
      In order to facilitate correctness and performance testing, we would like to add the new blob cache to our stress test tool `db_stress` and our continuously running crash test script `db_crashtest.py`, as well as our synthetic benchmarking tool `db_bench` and the BlobDB performance testing script `run_blob_bench.sh`.
      As part of this task, we would also like to utilize these benchmarking tools to get some initial performance numbers about the effectiveness of caching blobs.
      
      This PR is a part of https://github.com/facebook/rocksdb/issues/10156
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10202
      
      Reviewed By: ltamasi
      
      Differential Revision: D37325739
      
      Pulled By: gangliao
      
      fbshipit-source-id: deb65d0d414502270dd4c324d987fd5469869fa8
    • Fix typo in comments and code (#10233) · c073ed76
      Committed by Bo Wang
      Summary:
      Fix typo in comments and code.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10233
      
      Test Plan: Existing unit tests should pass.
      
      Reviewed By: jay-zhuang, anand1976
      
      Differential Revision: D37356702
      
      Pulled By: gitbw95
      
      fbshipit-source-id: 32c019adcc6dcc95a9882b38147a310091368e51
    • Add get_column_family_metadata() and related functions to C API (#10207) · e103b872
      Committed by Yueh-Hsuan Chiang
      Summary:
      * Add metadata related structs and functions in the C API (a usage sketch follows the list), including
        - `rocksdb_get_column_family_metadata()` and `rocksdb_get_column_family_metadata_cf()`
           that return `rocksdb_column_family_metadata_t`.
        - `rocksdb_column_family_metadata_t` and its get functions & destroy function.
        - `rocksdb_level_metadata_t` and its get functions & destroy function.
        - `rocksdb_file_metadata_t` and its get functions & destroy functions.
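      
      A hypothetical usage sketch (the getter names are assumptions based on the list above; verify against `rocksdb/c.h`):
      
      ```cpp
      #include <cstdio>
      #include "rocksdb/c.h"
      
      /* Hypothetical sketch: fetch, inspect, and free column family metadata. */
      void PrintCfMetadata(rocksdb_t* db) {
        rocksdb_column_family_metadata_t* meta =
            rocksdb_get_column_family_metadata(db);
        std::printf(
            "cf size: %llu bytes\n",
            (unsigned long long)rocksdb_column_family_metadata_get_size(meta));
        rocksdb_column_family_metadata_destroy(meta);
      }
      ```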
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10207
      
      Test Plan:
      Extend the existing c_test.c to include additional checks for column_family_metadata
      inside CheckCompaction.
      
      Reviewed By: riversand963
      
      Differential Revision: D37305209
      
      Pulled By: ajkr
      
      fbshipit-source-id: 0a5183206353acde145f5f9b632c3bace670aa6e
    • Adapt benchmark result script to new fields. (#10120) · a16e2ff8
      Committed by Alan Paxton
      Summary:
      Recently merged CI benchmark scripts were failing.
      
      There has clearly been a major revision of the fields of the benchmark output. The upload script expects and sanity-checks the existence of some fields (and changes the date to conform to the OpenSearch format), so the script needs to change.
      
      Also add a bit more exception checking to make it more obvious when this happens again.
      
      We have deleted the existing report.tsv from the benchmark machine. An existing report.tsv is appended to by default, so if the fields change, later rows no longer match the header. This makes for an upload that dies halfway through the report file, when the format no longer matches the header.
      
      Re-instate the config.yml for running the benchmarks, so we can once again test it in situ.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10120
      
      Reviewed By: pdillinger
      
      Differential Revision: D37314908
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 34f5243fee694b75c6838eb55d3398e4273254b2
  7. June 22, 2022: 8 commits
    • Continue to deflake BackupEngineTest.Concurrency (#10228) · 36fefd7e
      Committed by Yanqin Jin
      Summary:
      Even after https://github.com/facebook/rocksdb/issues/10069, `BackupEngineTest.Concurrency` is still flaky, though with a decreased probability of failure.
      
      Repro steps are as follows:
      ```bash
      make backup_engine_test
      gtest-parallel -r 1000 -w 64 ./backup_engine_test --gtest_filter=BackupEngineTest.Concurrency
      ```
      
      The first two commits of this PR demonstrate how the test is flaky. https://github.com/facebook/rocksdb/issues/10069 handles the case in which
      `Rename()` returns `IOError` with subcode `PathNotFound`, and `CreateLoggerFromOptions()`
      allows the operation to succeed, as expected by the test. However, `BackupEngineTest` uses
      `RemapFileSystem` on top of `ChrootFileSystem`, which can return `NotFound` instead of `IOError`.
      
      This behavior is different from `Env::Default()`, which returns PathNotFound if the src of `rename()`
      does not exist. We should make the behavior of the test Env/FS match a real Env/FS.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10228
      
      Test Plan:
      ```bash
      make check
      gtest-parallel -r 1000 -w 64 ./backup_engine_test --gtest_filter=BackupEngineTest.Concurrency
      ```
      
      Reviewed By: pdillinger
      
      Differential Revision: D37337241
      
      Pulled By: riversand963
      
      fbshipit-source-id: 07a53115e424467b55a731866e571f0ad4c6635d
    • Expose the initial logger creation error (#10223) · 9586dcf1
      Committed by Yanqin Jin
      Summary:
      https://github.com/facebook/rocksdb/issues/9984 changes the behavior of RocksDB: if logger creation fails during `SanitizeOptions()`,
      `DB::Open()` will fail. However, since `SanitizeOptions()` is called in `DBImpl::DBImpl()`, we cannot
      directly expose the error to the caller without some additional work.
      This is a first version proposal which:
      - Adds a new member `init_logger_creation_s` to `DBImpl` to store the result of the initial logger creation
      - Checks the error during `DB::Open()` and returns it to the caller if non-OK
      
      This is not ideal. We could alternatively move the logger creation logic out of `SanitizeOptions()`.
      Since `SanitizeOptions()` is used in other places, we would need to check whether this change breaks anything,
      in case other callers of `SanitizeOptions()` assume that a logger should be created.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10223
      
      Test Plan: make check
      
      Reviewed By: pdillinger
      
      Differential Revision: D37321717
      
      Pulled By: riversand963
      
      fbshipit-source-id: 58042358a86369d606549dd9938933dd47591c4b
    • Update API comment about Options::best_efforts_recovery (#10180) · 42c631b3
      Committed by Yanqin Jin
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10180
      
      Reviewed By: pdillinger
      
      Differential Revision: D37182037
      
      Pulled By: riversand963
      
      fbshipit-source-id: a8dc865b86e2249beb7a543c317e94a14781e910
    • Add data block hash index to crash test, fix MultiGet issue (#10220) · 84210c94
      Committed by Peter Dillinger
      Summary:
      There was a bug in the MultiGet enhancement in https://github.com/facebook/rocksdb/issues/9899 with data
      block hash index, which was not caught because data block hash index was
      never added to stress tests. This change fixes both issues.
      
      Fixes https://github.com/facebook/rocksdb/issues/10186
      
      I intend to pick this into the 7.4.0 release candidate
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10220
      
      Test Plan:
      The failure quickly reproduces in the crash test with
      kDataBlockBinaryAndHash, and does not seem to reproduce with the fix.
      Reproducing the failure with a unit test would, I believe, be too tricky and fragile
      to be worthwhile.
      
      Reviewed By: anand1976
      
      Differential Revision: D37315647
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 9f648265bba867275edc752f7a56611a59401cba
    • Refactor wal filter processing during recovery (#10214) · d654888b
      Committed by Yanqin Jin
      Summary:
      Refactor so that DBImpl::RecoverLogFiles does not have to deal with the implementation
      details of WalFilter.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10214
      
      Test Plan: make check
      
      Reviewed By: ajkr
      
      Differential Revision: D37299122
      
      Pulled By: riversand963
      
      fbshipit-source-id: acf1a80f1ef75da393d375f55968b2f3ac189816
    • Update LZ4 library for platform009 (#10224) · f7605ec6
      Committed by Bo Wang
      Summary:
      Update LZ4 library for platform009.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10224
      
      Test Plan: Current unit tests should pass.
      
      Reviewed By: anand1976
      
      Differential Revision: D37321801
      
      Pulled By: gitbw95
      
      fbshipit-source-id: 8a3d3019d9f7478ac737176f2d2f443c0159829e
    • Add basic kRoundRobin compaction policy (#10107) · 30141461
      Committed by zczhu
      Summary:
      Add `kRoundRobin` as a compaction priority. The implementation is as follows.
      
      - Define a cursor as the smallest internal key in the successor of the selected file. Add `vector<InternalKey> compact_cursor_` to `VersionStorageInfo`, where each element (`InternalKey`) in `compact_cursor_` represents a cursor. In the round-robin compaction policy, we simply select the first file (assuming files are sorted) whose smallest InternalKey is larger than or equal to the cursor; see the sketch after this list. After a file is chosen, we create a new `Fsize` vector in which the selected file is placed at the first position in `temp`, and the next cursor is then updated to the smallest InternalKey in the successor of the selected file (the above logic is implemented in `SortFileByRoundRobin`).
      - After a compaction succeeds, typically `InstallCompactionResults()`, we choose the next cursor for the input level and save it to `edit`. When calling `LogAndApply`, we save the next cursor with its level into some local variable and finally apply the change to `vstorage` in `SaveTo` function.
      - Cursors are persisted pair by pair (<level, InternalKey>) in `EncodeTo` so that they can be reconstructed when reopening. An empty cursor will not be encoded to the MANIFEST.
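      
      A simplified sketch of the selection step (plain strings stand in for `InternalKey`; not the actual `SortFileByRoundRobin` code):
      
      ```cpp
      #include <string>
      #include <vector>
      
      struct FileMeta {
        std::string smallest_key;  // stands in for the file's smallest InternalKey
      };
      
      // Pick the first file (files sorted by smallest key) whose smallest key is
      // >= the level's cursor; wrap to the front once the cursor passes all files.
      size_t PickRoundRobinFile(const std::vector<FileMeta>& sorted_files,
                                const std::string& cursor) {
        for (size_t i = 0; i < sorted_files.size(); ++i) {
          if (sorted_files[i].smallest_key >= cursor) return i;
        }
        return 0;  // restart the round-robin pass
      }
      ```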
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10107
      
      Test Plan: add unit test (`CompactionPriRoundRobin`) in `compaction_picker_test`, add `kRoundRobin` priority in `CompactionPriTest` from `db_compaction_test`, and add `PersistRoundRobinCompactCursor` in `db_compaction_test`
      
      Reviewed By: ajkr
      
      Differential Revision: D37316037
      
      Pulled By: littlepig2013
      
      fbshipit-source-id: 9f481748190ace416079139044e00df2968fb1ee
    • Destroy initial db dir for a test in DBWALTest (#10221) · b012d235
      Committed by Yanqin Jin
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10221
      
      Reviewed By: hx235
      
      Differential Revision: D37316280
      
      Pulled By: riversand963
      
      fbshipit-source-id: 062781acec2f36beebc62003bcc8ec280488d572
  8. June 21, 2022: 6 commits
  9. June 20, 2022: 2 commits
  10. June 19, 2022: 1 commit