1. 18 Aug 2021, 3 commits
    • Fix bug caused by releasing snapshot(s) during compaction (#8608) · 2b367fa8
      Committed by Yanqin Jin
      Summary:
      In debug mode, we are seeing assertion failure as follows
      
      ```
      db/compaction/compaction_iterator.cc:980: void rocksdb::CompactionIterator::PrepareOutput(): \
      Assertion `ikey_.type != kTypeDeletion && ikey_.type != kTypeSingleDeletion' failed.
      ```
      
      It is caused by releasing earliest snapshot during compaction between the execution of
      `NextFromInput()` and `PrepareOutput()`.
      
      In one case, as demonstrated in unit test `WritePreparedTransaction.ReleaseEarliestSnapshotDuringCompaction_WithSD2`,
      an incorrect result may be returned by a subsequent range scan if assertions are disabled, as in an
      optimized build: the SingleDelete marker's sequence number is zeroed out, but the preceding PUT is also
      written to the SST file after compaction. Due to the logic of DBIter, the PUT will not be
      skipped and will be returned by the iterator during the range scan. https://github.com/facebook/rocksdb/issues/8661 illustrates what happened.
      
      Fix by taking a more conservative approach: make compaction zero out the sequence number
      only if the key is in the earliest snapshot that existed when the compaction started.
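
      As a purely conceptual illustration of the conservative check described above (names and the comparison are my own simplification, not the actual `CompactionIterator` code):
      
      ```cpp
      #include <cstdint>
      
      // Hypothetical sketch: zero out the sequence number only when the key was
      // already visible in the snapshot that was earliest at the time the
      // compaction began, so a snapshot released mid-compaction cannot change
      // the decision.
      bool CanZeroOutSequenceNumber(uint64_t key_seq,
                                    uint64_t earliest_snapshot_at_compaction_start) {
        return key_seq <= earliest_snapshot_at_compaction_start;
      }
      ```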
      
      Another assertion failure is
      ```
      Assertion `current_user_key_snapshot_ == last_snapshot' failed.
      ```
      
      It's caused by releasing the snapshot between the PUT and SingleDelete during compaction.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8608
      
      Test Plan: make check
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D30145645
      
      Pulled By: riversand963
      
      fbshipit-source-id: 699f58e66faf70732ad53810ccef43935d3bbe81
    • Add statistics support to integrated BlobDB (#8667) · 6878cedc
      Committed by Levi Tamasi
      Summary:
      The patch adds statistics support to the integrated BlobDB implementation,
      namely the tickers `BLOB_DB_BLOB_FILE_BYTES_READ` and
      `BLOB_DB_GC_{NUM_KEYS,BYTES}_RELOCATED`, and the histograms
      `BLOB_DB_(DE)COMPRESSION_MICROS`. (Some other statistics, like
      `BLOB_DB_BLOB_FILE_BYTES_WRITTEN`, `BLOB_DB_BLOB_FILE_SYNCED`,
      `BLOB_DB_BLOB_FILE_{READ,WRITE,SYNC}_MICROS` were already supported.)
      Note that the vast majority of the old BlobDB's tickers/histograms are not
      really applicable to the new implementation, since they e.g. pertain to calling
      dedicated BlobDB APIs (which the integrated BlobDB does not have) or are
      tied to the legacy BlobDB's design of writing blob files synchronously when
      a write API is called. Such statistics are marked "legacy BlobDB only" in
      `statistics.h`.
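      
      For reference, here is a minimal sketch of reading the tickers/histograms named above through the standard `Statistics` API; it assumes `options.statistics` was set (e.g. via `rocksdb::CreateDBStatistics()`) before the DB was opened.
      
      ```cpp
      #include <cstdint>
      #include <cstdio>
      #include <memory>
      
      #include "rocksdb/options.h"
      #include "rocksdb/statistics.h"
      
      void PrintBlobStats(const rocksdb::Options& options) {
        std::shared_ptr<rocksdb::Statistics> stats = options.statistics;
        uint64_t blob_bytes_read =
            stats->getTickerCount(rocksdb::BLOB_DB_BLOB_FILE_BYTES_READ);
        rocksdb::HistogramData compression;
        stats->histogramData(rocksdb::BLOB_DB_COMPRESSION_MICROS, &compression);
        std::printf("blob file bytes read: %llu, avg compression micros: %f\n",
                    static_cast<unsigned long long>(blob_bytes_read),
                    compression.average);
      }
      ```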
      
      Fixes https://github.com/facebook/rocksdb/issues/8645.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8667
      
      Test Plan: Ran `make check` and tested the new statistics using `db_bench`.
      
      Reviewed By: riversand963
      
      Differential Revision: D30356884
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 5f8a833faee60401c5643c2f0a6c0415488190a4
    • Exclude property kLiveSstFilesSizeAtTemperature from stress_test (#8668) · 0729b287
      Committed by Jay Zhuang
      Summary:
      Just like other per_level properties.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8668
      
      Test Plan: stress_test
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D30360967
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 70da2557b95c55e8081b04ebf1a909a0fe69488f
  2. 17 Aug 2021, 2 commits
    • Add a stat to count secondary cache hits (#8666) · add68bd2
      Committed by anand76
      Summary:
      Add a stat for secondary cache hits. The `Cache::Lookup` API had an unused `stats` parameter. This PR uses that to pass the pointer to a `Statistics` object that `LRUCache` uses to record the stat.
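      
      A short sketch of the idea, assuming the hit ticker added by this change is named `SECONDARY_CACHE_HITS` (my recollection, not confirmed here): the caller forwards its `Statistics` pointer through `Cache::Lookup` so the cache can record the stat.
      
      ```cpp
      #include "rocksdb/cache.h"
      #include "rocksdb/slice.h"
      #include "rocksdb/statistics.h"
      
      void LookupWithStats(rocksdb::Cache* cache, const rocksdb::Slice& key,
                           rocksdb::Statistics* stats) {
        // The Statistics pointer is forwarded so the cache implementation can
        // record secondary cache hits against it.
        rocksdb::Cache::Handle* handle = cache->Lookup(key, stats);
        if (handle != nullptr) {
          // ... use the cached value ...
          cache->Release(handle);
        }
      }
      ```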
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8666
      
      Test Plan: Update a unit test in lru_cache_test
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D30353816
      
      Pulled By: anand1976
      
      fbshipit-source-id: 2046f78b460428877a26ffdd2bb914ae47dfbe77
    • Stable cache keys using DB session ids in SSTs (#8659) · a207c278
      Committed by Peter Dillinger
      Summary:
      Use DB session ids in SST table properties to make cache keys
      stable across DB re-open and copy / move / restore / etc.
      
      These new cache keys are currently only enabled when FileSystem does not
      provide GetUniqueId. For now, they are typically larger, so slightly
      less efficient.
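      
      A purely conceptual sketch of why session ids help (this is not the actual key format RocksDB uses): a key built only from properties stored in the SST file itself stays identical after re-open, copy, move, or restore, unlike keys derived from inode-based unique IDs.
      
      ```cpp
      #include <cstdint>
      #include <string>
      
      std::string StableBlockCacheKey(const std::string& db_session_id,
                                      uint64_t file_number, uint64_t block_offset) {
        // Every component here survives copying/restoring the DB directory.
        return db_session_id + "#" + std::to_string(file_number) + "#" +
               std::to_string(block_offset);
      }
      ```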
      
      Relevant to https://github.com/facebook/rocksdb/issues/7405
      
      This change has a minor regression in PersistentCache functionality:
      metaindex blocks are no longer cached in PersistentCache. Table properties
      blocks already were not, but ideally should be. I didn't spend effort to
      fix & test these issues because we don't believe PersistentCache is used much,
      if at all, and expect SecondaryCache to replace it. (Though PRs are welcome.)
      
      FIXME: there is more to be fixed for stable cache keys on external SST files
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8659
      
      Test Plan:
      new unit test added, which fails when disabling new
      functionality
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D30297705
      
      Pulled By: pdillinger
      
      fbshipit-source-id: e8539a5c8802a79340405629870f2e3fb3822d3a
  3. 16 Aug 2021, 4 commits
  4. 14 Aug 2021, 1 commit
    • Improve MemPurge sampling (#8656) · e51be2c5
      Committed by Baptiste Lemaire
      Summary:
      Previously, the `MemPurge` sampling function was assessing whether a random entry from a memtable was garbage or not by simply querying the given memtable (see https://github.com/facebook/rocksdb/issues/8628 for more details).
      In this diff, I am updating the sampling function by querying not only the memtable the entry was drawn from, but also all subsequent memtables that have a greater memtable ID.
      I also added the size of the value for KV entries in the payload/useful payload estimates (the omission of which was one of the reasons why sampling was not as good as always mempurging in terms of L0 SST file reduction).
      Once these changes were made, I was able to clean obsolete objects and functions from the `MemtableList` struct, and did a bit of cleanup everywhere.
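      
      A conceptual sketch of the sampling check described above, written with generic standard-library types rather than RocksDB internals (the helper and its types are illustrative only):
      
      ```cpp
      #include <cstdint>
      #include <map>
      #include <string>
      #include <vector>
      
      // key -> latest sequence number seen for that key in one memtable
      using MemTableView = std::map<std::string, uint64_t>;
      
      // A sampled entry is treated as garbage only if a newer version of the key
      // exists in the memtable it came from or in any later (higher-ID) memtable
      // that is part of the same flush.
      bool SampledEntryIsObsolete(const std::string& key, uint64_t entry_seqno,
                                  const std::vector<MemTableView>& memtables,
                                  size_t source_index) {
        for (size_t i = source_index; i < memtables.size(); ++i) {
          auto it = memtables[i].find(key);
          if (it != memtables[i].end() && it->second > entry_seqno) {
            return true;  // a newer version shadows the sampled entry
          }
        }
        return false;
      }
      ```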
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8656
      
      Reviewed By: pdillinger
      
      Differential Revision: D30288583
      
      Pulled By: bjlemaire
      
      fbshipit-source-id: 7646a545ec56f4715949daa59ab5eee74540feb3
  5. 13 Aug 2021, 1 commit
    • Code cleanup for trace replayer (#8652) · 74a652a4
      Committed by Merlin Mao
      Summary:
      - Remove extra `;` in trace_record.h
      - Remove some unnecessary `assert` in trace_record_handler.cc
      - Initialize `env_` after `exec_handler_` in `ReplayerImpl` so that the db is asserted when creating the handler, before `db->GetEnv()` is called.
      - Update history to include the new `TraceReader::Reset()`
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8652
      
      Reviewed By: ajkr
      
      Differential Revision: D30276872
      
      Pulled By: autopear
      
      fbshipit-source-id: 476ee162e0f241490c6209307448343a5b326b37
  6. 12 Aug 2021, 4 commits
    • Make TraceRecord and Replayer public (#8611) · f58d2767
      Committed by Merlin Mao
      Summary:
      New public interfaces:
      `TraceRecord` and `TraceRecord::Handler`, available in `rocksdb/trace_record.h`.
      `Replayer`, available in `rocksdb/utilities/replayer.h`.
      
      Users can use `DB::NewDefaultReplayer()` to create a Replayer to automatically or manually replay a trace file.
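      
      A hedged usage sketch of automatic replay; the exact signatures of `NewFileTraceReader`, `DB::NewDefaultReplayer`, `Replayer::Prepare`, and `Replayer::Replay` are written from memory and should be checked against the headers named above.
      
      ```cpp
      #include <memory>
      #include <string>
      #include <utility>
      
      #include "rocksdb/db.h"
      #include "rocksdb/env.h"
      #include "rocksdb/trace_reader_writer.h"
      #include "rocksdb/utilities/replayer.h"
      
      rocksdb::Status ReplayTrace(rocksdb::DB* db,
                                  rocksdb::ColumnFamilyHandle* default_cf,
                                  const std::string& trace_path) {
        // Open the previously recorded trace file.
        std::unique_ptr<rocksdb::TraceReader> reader;
        rocksdb::Status s = rocksdb::NewFileTraceReader(
            db->GetEnv(), rocksdb::EnvOptions(), trace_path, &reader);
        if (!s.ok()) return s;
      
        // Create a Replayer bound to the live DB and its column families.
        std::unique_ptr<rocksdb::Replayer> replayer;
        s = db->NewDefaultReplayer({default_cf}, std::move(reader), &replayer);
        if (!s.ok()) return s;
      
        s = replayer->Prepare();  // read the trace header / rewind
        if (!s.ok()) return s;
        // Auto replay with default options and no per-record result callback.
        return replayer->Replay(rocksdb::ReplayOptions(), nullptr);
      }
      ```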
      
      Unit tests:
      - `./db_test2 --gtest_filter="DBTest2.TraceAndReplay"`: Updated with the internal API changes.
      - `./db_test2 --gtest_filter="DBTest2.TraceAndManualReplay"`: New for manual replay.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8611
      
      Reviewed By: ajkr
      
      Differential Revision: D30266329
      
      Pulled By: autopear
      
      fbshipit-source-id: 1ecb3cbbedae0f6a67c18f0cc82e002b4d81b6f8
    • Re-add retired mempurge flag definitions for legacy-options-file temporary support. (#8650) · a53563d8
      Committed by Baptiste Lemaire
      Summary:
      Current internal regression tests pass an old option flag, `experimental_allow_mempurge`, to a more recently built db.
      This flag was retired and removed in a recent PR (https://github.com/facebook/rocksdb/issues/8628), and therefore the following error comes up: `Failed: Invalid argument: Could not find option: : experimental_allow_mempurge`.
      In this PR, I reintroduce the two flags retired in https://github.com/facebook/rocksdb/issues/8628, `experimental_allow_mempurge` and `experimental_mempurge_policy`, in `db_options.cc` and mark them both as `kDeprecated`.
      This is a temporary fix to buy us time to find a long-term solution, which will hopefully consist of ignoring unrecognized options prefixed with `experimental_`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8650
      
      Reviewed By: pdillinger
      
      Differential Revision: D30257307
      
      Pulled By: bjlemaire
      
      fbshipit-source-id: 35303655fd2dd9789fd9e3c450e9d8009f3c1f54
    • Update and enhance check_format_compatible.sh (#8651) · 6450e9fc
      Committed by Peter Dillinger
      Summary:
      The last few releases overlooked adding to this test. This
      change fixes that.
      
      This change also fixes the problem of older branches not understanding
      ROCKSDB_NO_FBCODE and referencing compilers no longer supported.
      During the test, build_detect_platform is patched to force no FBCODE
      compiler usage. (We should not need to update old branches perpetually.)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8651
      
      Test Plan: local run reproduces regression described in https://github.com/facebook/rocksdb/issues/8650
      
      Reviewed By: jay-zhuang, zhichao-cao
      
      Differential Revision: D30261872
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 02b447d224d7e0eb8613c63185437ded146713bc
    • Add suggestion for btrfs user to disable preallocation (#8646) · 87e23587
      Committed by Jay Zhuang
      Summary:
      Add a comment for `options.allow_fallocate` noting that btrfs does not
      free preallocated space, along with a suggestion to disable
      preallocation.
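      
      For illustration, following the suggestion in the new comment looks like this (`allow_fallocate` is an existing `DBOptions` member):
      
      ```cpp
      #include "rocksdb/options.h"
      
      rocksdb::Options MakeBtrfsFriendlyOptions() {
        rocksdb::Options options;
        options.allow_fallocate = false;  // avoid btrfs pinning preallocated space
        return options;
      }
      ```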
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8646
      
      Test Plan: No code change
      
      Reviewed By: ajkr
      
      Differential Revision: D30240050
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 75b7190bc8276ce8d8ac2d0cb9064b386cbf4768
  7. 11 Aug 2021, 2 commits
    • Memtable sampling for mempurge heuristic. (#8628) · e3a96c48
      Committed by Baptiste Lemaire
      Summary:
      Changes the API of the MemPurge process: the `bool experimental_allow_mempurge` and `experimental_mempurge_policy` flags have been replaced by a `double experimental_mempurge_threshold` option.
      This change of API reflects another major change introduced in this PR: the MemPurgeDecider() function now works by sampling the memtables being flushed to estimate the overall amount of useful payload (payload minus the garbage), and then compare this useful payload estimate with the `double experimental_mempurge_threshold` value.
      Therefore, when the value of this flag is `0.0` (default value), mempurge is simply deactivated. On the other hand, a value of `DBL_MAX` would be equivalent to always going through a mempurge regardless of the garbage ratio estimate.
      At the moment, a `double experimental_mempurge_threshold` value other than 0.0 or `DBL_MAX` is only supported with the `SkipList` memtable representation.
      Regarding the sampling, this PR includes the introduction of a `MemTable::UniqueRandomSample` function that collects (approximately) random entries from the memtable by using the new `SkipList::Iterator::RandomSeek()` under the hood, or by iterating through each memtable entry, depending on the target sample size and the total number of entries.
      The unit tests have been readapted to support this new API.
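      
      A minimal sketch of configuring the new option described above (the semantics shown in the comments follow the description in this summary):
      
      ```cpp
      #include <cfloat>
      
      #include "rocksdb/options.h"
      
      rocksdb::Options MakeMempurgeOptions() {
        rocksdb::Options options;
        // Flush-time sampling compares the useful-payload estimate to this value.
        options.experimental_mempurge_threshold = 1.0;
        // options.experimental_mempurge_threshold = 0.0;      // disabled (default)
        // options.experimental_mempurge_threshold = DBL_MAX;  // always mempurge
        return options;
      }
      ```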
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8628
      
      Reviewed By: pdillinger
      
      Differential Revision: D30149315
      
      Pulled By: bjlemaire
      
      fbshipit-source-id: 1feef5390c95db6f4480ab4434716533d3947f27
    • Attempt to deflake DBTestXactLogIterator.TransactionLogIteratorCorruptedLog (#8627) · f63331eb
      Committed by Levi Tamasi
      Summary:
      The patch attempts to deflake `DBTestXactLogIterator.TransactionLogIteratorCorruptedLog`
      by disabling file deletions while retrieving the list of WAL files and truncating the first WAL file.
      This is to prevent the `PurgeObsoleteFiles` call triggered by `GetSortedWalFiles` from
      invalidating the result of `GetSortedWalFiles`. The patch also cleans up the test case a bit
      and changes it to use `test::TruncateFile` instead of calling the `truncate` syscall directly.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8627
      
      Test Plan: `make check`
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D30147002
      
      Pulled By: ltamasi
      
      fbshipit-source-id: db11072a4ad8900a2f859cb5294e22b1888c23f6
  8. 10 Aug 2021, 4 commits
    • Simplify GenericRateLimiter algorithm (#8602) · 82b81dc8
      Committed by Andrew Kryczka
      Summary:
      `GenericRateLimiter` slow path handles requests that cannot be satisfied
      immediately.  Such requests enter a queue, and their thread stays in `Request()`
      until they are granted or the rate limiter is stopped.  These threads are
      responsible for unblocking themselves.  The work to do so is split into two main
      duties.
      
      (1) Waiting for the next refill time.
      (2) Refilling the bytes and granting requests.
      
      Prior to this PR, the slow path logic involved a leader election algorithm to
      pick one thread to perform (1) followed by (2).  It elected the thread whose
      request was at the front of the highest priority non-empty queue since that
      request was most likely to be granted.  This algorithm was efficient in terms of
      reducing intermediate wakeups, where a thread wakes up only to resume
      waiting after finding its request is not granted.  However, the conceptual
      complexity of this algorithm was too high.  It took me a long time to draw a
      timeline to understand how it works for just one edge case, yet there were so
      many.
      
      This PR drops the leader election to reduce conceptual complexity.  Now, the two
      duties can be performed by whichever thread acquires the lock first.  The risk
      of this change is increasing the number of intermediate wakeups, however, we
      took steps to mitigate that.
      
      - `wait_until_refill_pending_` flag ensures only one thread performs (1). This
      prevents the thundering herd problem at the next refill time. The remaining
      threads wait on their condition variable with an unbounded duration -- thus we
      must remember to notify them to ensure forward progress.
      - (1) is typically done by a thread at the front of a queue. This is trivial
      when the queues are initially empty as the first choice that arrives must be
      the only entry in its queue. When queues are initially non-empty, we achieve
      this by having (2) notify a thread at the front of a queue (preferring higher
      priority) to perform the next duty.
      - We do not require any additional wakeup for (2). Typically it will just be
      done by the thread that finished (1).
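      
      To make the two duties and the flag easier to picture, here is a self-contained toy token-bucket sketch of the scheme (assumed names; this is not RocksDB's `GenericRateLimiter`): whichever thread holds the lock after the refill time performs the refill, and the flag keeps only one thread in a timed wait.
      
      ```cpp
      #include <chrono>
      #include <condition_variable>
      #include <cstdint>
      #include <mutex>
      
      class SimpleRateLimiter {
        using Clock = std::chrono::steady_clock;
      
       public:
        SimpleRateLimiter(int64_t bytes_per_refill, std::chrono::milliseconds period)
            : bytes_per_refill_(bytes_per_refill),
              period_(period),
              available_(bytes_per_refill),
              next_refill_(Clock::now() + period) {}
      
        // Blocks until `bytes` can be deducted from the available budget.
        void Request(int64_t bytes) {
          std::unique_lock<std::mutex> lock(mu_);
          while (available_ < bytes) {
            if (!wait_until_refill_pending_) {
              // Duty (1): exactly one thread performs the timed wait for refill.
              wait_until_refill_pending_ = true;
              cv_.wait_until(lock, next_refill_);
              wait_until_refill_pending_ = false;
              // Duty (2): refill, then wake the other waiters to re-check.
              available_ = bytes_per_refill_;
              next_refill_ = Clock::now() + period_;
              cv_.notify_all();
            } else {
              // Everyone else waits unbounded and relies on the notify above.
              cv_.wait(lock);
            }
          }
          available_ -= bytes;
        }
      
       private:
        const int64_t bytes_per_refill_;
        const std::chrono::milliseconds period_;
        int64_t available_;
        Clock::time_point next_refill_;
        bool wait_until_refill_pending_ = false;
        std::mutex mu_;
        std::condition_variable cv_;
      };
      ```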
      
      Combined, the second and third bullet points above suggest the refill/granting
      will typically be done by a request at the front of its queue.  This is
      important because one wakeup is saved when a granted request happens to be in an
      already running thread.
      
      Note there are a few cases that still lead to intermediate wakeups, however.  The
      first two are existing issues that also apply to the old algorithm; however, the
      third (including both subpoints) is new.
      
      - No request may be granted (only possible when rate limit dynamically
      decreases).
      - Requests from a different queue may be granted.
      - (2) may be run by a non-front request thread causing it to not be granted even
      if some requests in that same queue are granted. It can happen for a couple
      (unlikely) reasons.
        - A new request may sneak in and grab the lock at the refill time, before the
      thread finishing (1) can wake up and grab it.
        - A new request may sneak in and grab the lock and execute (1) before (2)'s
      chosen candidate can wake up and grab the lock. Then that non-front request
      thread performing (1) can carry over to perform (2).
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8602
      
      Test Plan:
      - Use existing tests. The edge cases listed in the comment are all performance
      related; I could not really think of any related to correctness. The logic
      looks the same whether a thread wakes up/finishes its work early/on-time/late,
      or whether the thread is chosen vs. "steals" the work.
      - Verified write throughput and CPU overhead are basically the same with and
        without this change, even in a rate limiter heavy workload:
      
      Test command:
      ```
      $ rm -rf /dev/shm/dbbench/ && TEST_TMPDIR=/dev/shm /usr/bin/time ./db_bench -benchmarks=fillrandom -num_multi_db=64 -num_low_pri_threads=64 -num_high_pri_threads=64 -write_buffer_size=262144 -target_file_size_base=262144 -max_bytes_for_level_base=1048576 -rate_limiter_bytes_per_sec=16777216 -key_size=24 -value_size=1000 -num=10000 -compression_type=none -rate_limiter_refill_period_us=1000
      ```
      
      Results before this PR:
      
      ```
      fillrandom   :     108.463 micros/op 9219 ops/sec;    9.0 MB/s
      7.40user 8.84system 1:26.20elapsed 18%CPU (0avgtext+0avgdata 256140maxresident)k
      ```
      
      Results after this PR:
      
      ```
      fillrandom   :     108.108 micros/op 9250 ops/sec;    9.0 MB/s
      7.45user 8.23system 1:26.68elapsed 18%CPU (0avgtext+0avgdata 255688maxresident)k
      ```
      
      Reviewed By: hx235
      
      Differential Revision: D30048013
      
      Pulled By: ajkr
      
      fbshipit-source-id: 6741bba9d9dfbccab359806d725105817fef818b
    • rocksdb: don't call LZ4_loadDictHC with null dictionary · a756fb9c
      Committed by Lucian Grijincu
      Summary: UBSAN revealed a pointer underflow when `LZ4HC_init_internal` is called with a null `start`.
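      
      A sketch of the kind of guard the title describes, at a hypothetical call site (not the actual RocksDB compression code):
      
      ```cpp
      #include <lz4hc.h>
      
      #include <string>
      
      void MaybeLoadDict(LZ4_streamHC_t* stream, const std::string& dict) {
        // Skip dictionary loading entirely when the dictionary is empty, so
        // LZ4_loadDictHC never sees a null pointer.
        if (!dict.empty()) {
          LZ4_loadDictHC(stream, dict.data(), static_cast<int>(dict.size()));
        }
      }
      ```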
      
      Reviewed By: ajkr
      
      Differential Revision: D30181874
      
      fbshipit-source-id: ca9bbac1a85c58782871d7f153af733b000cc66c
    • Add an unittest for tiered storage universal compaction (#8631) · 61f83dfe
      Committed by Jay Zhuang
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8631
      
      Reviewed By: siying
      
      Differential Revision: D30200385
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: 0fa2bb15e74ff81762d767f234078e0fe0106c55
    • Move old files to warm tier in FIFO compactions (#8310) · e7c24168
      Committed by sdong
      Summary:
      Some FIFO users want to keep the data for longer, but the old data is rarely accessed. This feature allows users to configure FIFO compaction so that data older than a threshold is moved to a warm storage tier.
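      
      A hedged configuration sketch; the field name `age_for_warm` in `CompactionOptionsFIFO` is my assumption of how the threshold is exposed.
      
      ```cpp
      #include "rocksdb/options.h"
      
      rocksdb::Options MakeFifoWarmTierOptions() {
        rocksdb::Options options;
        options.compaction_style = rocksdb::kCompactionStyleFIFO;
        // Data older than this many seconds is moved to the warm storage tier.
        options.compaction_options_fifo.age_for_warm = 60 * 60 * 24 * 7;  // ~1 week
        return options;
      }
      ```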
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8310
      
      Test Plan: Add several unit tests.
      
      Reviewed By: ajkr
      
      Differential Revision: D28493792
      
      fbshipit-source-id: c14824ea634814dee5278b449ab5c98b6e0b5501
  9. 08 Aug 2021, 1 commit
    • Fix db_stress failure (#8632) · 052c24a6
      Committed by Akanksha Mahajan
      Summary:
      FaultInjectionTestFS injects errors in the Rename operation. Because of an
      injected error, info.log fails to be created when the rename returns an error; info_log is then set to nullptr, which leads to the assertion failure.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8632
      
      Test Plan: run the db_stress job locally
      
      Reviewed By: ajkr
      
      Differential Revision: D30167387
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 8d08c4c33e8f0cabd368bbb498d21b9de0660067
  10. 07 Aug 2021, 7 commits
  11. 06 Aug 2021, 4 commits
    • Correct javadoc for Env#setBackgroundThreads(int) (#8576) · 8ca08178
      Committed by Brendan MacDonell
      Summary:
      By default, the low priority pool is not the flush pool, so calling `Env#setBackgroundThreads` without providing a priority will not do what the caller expected.
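      
      For illustration, here is the same distinction in the C++ `Env` API, which the Java binding mirrors: pass the pool explicitly, since the default LOW pool serves compactions while flushes run in the HIGH pool.
      
      ```cpp
      #include "rocksdb/env.h"
      
      void ConfigurePools(rocksdb::Env* env) {
        env->SetBackgroundThreads(2, rocksdb::Env::Priority::HIGH);  // flush pool
        env->SetBackgroundThreads(4, rocksdb::Env::Priority::LOW);   // compaction pool
      }
      ```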
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8576
      
      Reviewed By: ajkr
      
      Differential Revision: D29925154
      
      Pulled By: mrambacher
      
      fbshipit-source-id: cd7211fc374e7d9929a9b88ea0a5ba8134b76099
    • Make MergeOperator+CompactionFilter/Factory into Customizable Classes (#8481) · d057e832
      Committed by mrambacher
      Summary:
      - Changed MergeOperator, CompactionFilter, and CompactionFilterFactory into Customizable classes.
       - Added Options/Configurable/Object Registration for TTL and Cassandra variants
       - Changed the StringAppend MergeOperators to accept a string delimiter rather than a simple char.  Made the delimiter into a configurable option
       - Added tests for new functionality
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8481
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D30136050
      
      Pulled By: mrambacher
      
      fbshipit-source-id: 271d1772835935b6773abaf018ee71e42f9491af
    • Dynamically configure BlockBasedTableOptions.prepopulate_block_cache (#8620) · fd207993
      Committed by Akanksha Mahajan
      Summary:
      Dynamically configure BlockBasedTableOptions.prepopulate_block_cache using DB::SetOptions.
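      
      A sketch of the runtime reconfiguration described above; the option-string form passed through `block_based_table_factory` is an assumption based on the usual `SetOptions` syntax.
      
      ```cpp
      #include "rocksdb/db.h"
      
      rocksdb::Status ToggleBlockCachePrepopulation(rocksdb::DB* db) {
        // Applies to the default column family; an overload taking a
        // ColumnFamilyHandle* exists for others.
        return db->SetOptions(
            {{"block_based_table_factory", "{prepopulate_block_cache=kFlushOnly;}"}});
      }
      ```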
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8620
      
      Test Plan: Added new unit test
      
      Reviewed By: anand1976
      
      Differential Revision: D30091319
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: fb586d1848a8dd525bba7b2f9eeac34f2fc6d82c
    • Attempt to deflake ObsoleteFilesTest.DeleteObsoleteOptionsFile (#8624) · 9b25d26d
      Committed by Levi Tamasi
      Summary:
      We've been seeing occasional crashes on CI while inserting into the
      vectors in `ObsoleteFilesTest.DeleteObsoleteOptionsFile`. The crashes
      don't reproduce locally (could be either a race or an object lifecycle
      issue) but the good news is that the vectors in question are not really
      used for anything meaningful by the test. (The assertion about the sizes
      of the two vectors being equal is guaranteed to hold, since the two sync
      points where they are populated are right after each other.) The patch
      simply removes the vectors from the test, alongside the associated
      callbacks and sync points.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8624
      
      Test Plan: `make check`
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D30118485
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 0a4c3d06584e84cd2b1dcc212d274fa1b89cb647
  12. 05 Aug 2021, 5 commits
    • Update HISTORY for PR8585 (#8623) · b01a428d
      Committed by Yanqin Jin
      Summary:
      Update HISTORY.md for PR https://github.com/facebook/rocksdb/issues/8585 .
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8623
      
      Reviewed By: ltamasi
      
      Differential Revision: D30121910
      
      Pulled By: riversand963
      
      fbshipit-source-id: 525af43fad908a498f22ed4f934ec5cbf60e6d25
    • Do not attempt to rename non-existent info log (#8622) · a685a701
      Committed by Andrew Kryczka
      Summary:
      Previously we attempted to rename "LOG" to "LOG.old.*" without checking
      its existence first. "LOG" had no reason to exist in a new DB.
      
      Errors in renaming a non-existent "LOG" were swallowed via
      `PermitUncheckedError()`, so things worked. However, the storage service's
      error monitoring was detecting all these benign rename failures, so it
      is better to fix it. Also, with this PR we can now distinguish rename failures
      that happen for other reasons and return them.
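      
      A sketch of the approach described above using the public `Env` API (variable names are hypothetical): skip the rename when the old info log does not exist, and surface other rename errors.
      
      ```cpp
      #include <string>
      
      #include "rocksdb/env.h"
      
      rocksdb::Status ArchiveInfoLogIfPresent(rocksdb::Env* env,
                                              const std::string& log_path,
                                              const std::string& archived_path) {
        rocksdb::Status s = env->FileExists(log_path);
        if (s.IsNotFound()) {
          return rocksdb::Status::OK();  // a new DB has no "LOG" to rename
        }
        if (!s.ok()) {
          return s;  // propagate unexpected errors from the existence check
        }
        return env->RenameFile(log_path, archived_path);
      }
      ```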
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8622
      
      Test Plan: new unit test
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D30115189
      
      Pulled By: ajkr
      
      fbshipit-source-id: e2f337ffb2bd171be0203172abc8e16e7809b170
    • Fix clang failure (#8621) · a074d46a
      Committed by Akanksha Mahajan
      Summary:
      Fixed a clang job failure caused by a memory leak.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8621
      
      Test Plan: CircleCI clang job
      
      Reviewed By: pdillinger
      
      Differential Revision: D30114337
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 16572b9bcbaa053c2ab7bc1c344148d0e6f8039c
    • Remove corruption error injection in FaultInjectionTestFS (#8616) · c268859a
      Committed by anand76
      Summary:
      `FaultInjectionTestFS` injects various types of read errors in `FileSystem` APIs. One type of error is corruption errors, where data is intentionally corrupted or truncated. There is corresponding validation in db_stress to verify that an injected error results in a user-visible Get/MultiGet error. However, for corruption errors, it's hard to know when a corruption is supposed to be detected by the user request, due to prefetching and, in the case of direct IO, padding. This results in false positives. So remove that functionality.
      
      Block checksum validation for Get/MultiGet is confined to `BlockFetcher`, so we don't lose a lot by disabling this since it's a small surface area to test.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8616
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D30074422
      
      Pulled By: anand1976
      
      fbshipit-source-id: 6a61fac18f95514c15364b75013799ddf83294df
    • Improve rate limiter implementation's readability (#8596) · dbe3810c
      Committed by hx235
      Summary:
      Context:
      As the need arises for new resource-management features built on RocksDB's rate limiter, such as https://github.com/facebook/rocksdb/pull/8595, it is about time to re-learn our rate limiter and make this learning process easier for others by improving its readability. The comments/assertions/one extra else-branch are added based on my best understanding of the current rate_limiter.cc and rate_limiter_test.cc after giving them a hard read.
      - Add code comments/assertion/one extra else-branch (that is not affecting existing behavior, see PR comment) to describe how leader-election works under multi-thread settings in GenericRateLimiter::Request()
      - Add code comments to describe a non-obvious trick during clean-up of rate limiter destructor
      - Add code comments to explain more about the starvation being fixed in GenericRateLimiter::Refill() through partial byte-granting
      - Add code comments to the rate limiter's setup in a complicated unit test in rate_limiter_test
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8596
      
      Test Plan: - passed existing rate_limiter_test.cc
      
      Reviewed By: ajkr
      
      Differential Revision: D29982590
      
      Pulled By: hx235
      
      fbshipit-source-id: c3592986bb5b0c90d8229fe44f425251ec7e8a0a
  13. 04 Aug 2021, 2 commits