1. April 28, 2018 (2 commits)
    • Add max_subcompactions as a compaction option · ed7a95b2
      Committed by Huachao Huang
      Summary:
      Sometimes we want to compact files as fast as possible, but don't want to set a large `max_subcompactions` in the `DBOptions` by default.
      I add a `max_subcompactions` option to `CompactionOptions` so that we can choose an appropriate concurrency dynamically. A usage sketch follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3775
      
      Differential Revision: D7792357
      
      Pulled By: ajkr
      
      fbshipit-source-id: 94f54c3784dce69e40a229721a79a97e80cd6a6c
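      
      A minimal usage sketch (not code from the PR): the new field is set per call on `CompactionOptions`, so a manual `DB::CompactFiles` can run with higher parallelism without raising `max_subcompactions` globally. The input file list and output level below are placeholders.
      
      ```
      #include <string>
      #include <vector>
      #include "rocksdb/db.h"
      #include "rocksdb/options.h"
      
      // Compact a chosen set of SST files with a per-call subcompaction limit,
      // instead of raising max_subcompactions globally in DBOptions.
      rocksdb::Status CompactFilesFast(rocksdb::DB* db,
                                       const std::vector<std::string>& input_files,
                                       int output_level) {
        rocksdb::CompactionOptions compact_opts;
        compact_opts.max_subcompactions = 4;  // concurrency chosen for this call only
        return db->CompactFiles(compact_opts, input_files, output_level);
      }
      ```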
    • Rename pending_compaction_ to queued_for_compaction_. · 7dfbe335
      Committed by Yanqin Jin
      Summary:
      We use `queued_for_flush_` to indicate a column family has been added to the
      flush queue. Similarly and to be consistent in our naming, we need to use `queued_for_compaction_` to indicate a column family has been added to the compaction queue. In the past we used
      `pending_compaction_` which can also be ambiguous.
      Closes https://github.com/facebook/rocksdb/pull/3781
      
      Differential Revision: D7790063
      
      Pulled By: riversand963
      
      fbshipit-source-id: 6786b11a4fcaea36dc9b4672233dbe042f921804
  2. April 27, 2018 (7 commits)
    • Rename pending_flush_ to queued_for_flush_. · 513b5ce6
      Committed by Yanqin Jin
      Summary:
      With ColumnFamilyData::pending_flush_, we have the following code snippet in DBImpl::SchedulePendingFlush:
      
      ```
      if (!cfd->pending_flush() && cfd->imm()->IsFlushPending()) {
      ...
      }
      ```
      
      `Pending` is ambiguous, and I feel `queued_for_flush` is a better name,
      especially for the sake of readability.
      Closes https://github.com/facebook/rocksdb/pull/3777
      
      Differential Revision: D7783066
      
      Pulled By: riversand963
      
      fbshipit-source-id: f1bd8c8bfe5eafd2c94da0d8566c9b2b6bb57229
    • Add virtual Truncate method to Env · 37cd617b
      Committed by Nathan VanBenschoten
      Summary:
      This change adds a virtual `Truncate` method to `Env`, which truncates
      the named file to the specified size. At the moment, this is only
      supported for `MockEnv`, but other `Env`s could be extended to override
      the method too. This is the same approach that methods like `LinkFile` and
      `AreSameFile` have taken.
      
      This is useful for any user of the in-memory `Env`. The implementation's
      header is not exported, so before this change, it was impossible to
      access its already existing `Truncate` method. A usage sketch follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3779
      
      Differential Revision: D7785789
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3bcdaeea7b7180529f7d9b496dc67b791a00bbf0
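      
      A sketch of calling the new virtual from application code, assuming the signature `Status Truncate(const std::string& fname, size_t size)`. The file name and size are placeholders, and `NewMemEnv` is used here to stand in for the in-memory `Env` mentioned above.
      
      ```
      #include <memory>
      #include <string>
      #include "rocksdb/env.h"
      
      // Truncate a file held by the in-memory Env. Assumed signature:
      // virtual Status Truncate(const std::string& fname, size_t size).
      rocksdb::Status TruncateInMemFile() {
        std::unique_ptr<rocksdb::Env> mem_env(rocksdb::NewMemEnv(rocksdb::Env::Default()));
        const std::string fname = "/dir/example.log";  // hypothetical file name
        // ... assume the file was previously written via mem_env->NewWritableFile ...
        return mem_env->Truncate(fname, 1024 /* new size in bytes */);
      }
      ```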
    • Allow options file in db_stress and db_crashtest · db36f222
      Committed by Andrew Kryczka
      Summary:
      - When options file is provided to db_stress, take supported options from the file instead of from flags
      - Call `BuildOptionsTable` after `Open` so it can use `options_` once it has been populated either from flags or from file
      - Allow options filename to be passed via `db_crashtest.py`
      Closes https://github.com/facebook/rocksdb/pull/3768
      
      Differential Revision: D7755331
      
      Pulled By: ajkr
      
      fbshipit-source-id: 5205cc5deb0d74d677b9832174153812bab9a60a
    • Remove block-based table assertion for non-empty filter block · 7004e454
      Committed by Andrew Kryczka
      Summary:
      7a6353bd prevents empty filter blocks from being written for SST files containing range deletions only. However, the assertion this PR removes is still a problem, as we could be reading from a DB generated by a RocksDB build without the 7a6353bd patch. So remove the assertion. We already skip this check when `cache_index_and_filter_blocks=false`, so it should be safe.
      Closes https://github.com/facebook/rocksdb/pull/3773
      
      Differential Revision: D7769964
      
      Pulled By: ajkr
      
      fbshipit-source-id: 7285762446f2cd2ccf16efd7a988a106fbb0d8d3
    • Sync parent directory after deleting a file in delete scheduler · 63c965cd
      Committed by Siying Dong
      Summary:
      Sync the parent directory after deleting a file in the delete scheduler. Otherwise, trim speed may not be as smooth as we want. A sketch of the directory-sync pattern follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3767
      
      Differential Revision: D7760136
      
      Pulled By: siying
      
      fbshipit-source-id: ec131d53b61953f09c60d67e901e5eeb2716b05f
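      
      A sketch of the general POSIX pattern behind this change (not the delete scheduler's actual code): after removing a file, fsync its parent directory so the directory-entry update reaches disk promptly.
      
      ```
      #include <fcntl.h>
      #include <unistd.h>
      #include <cstdio>
      #include <string>
      
      // Delete a file and then fsync its parent directory. Path handling is
      // simplified; a real implementation would derive dir_path from file_path.
      bool DeleteFileAndSyncDir(const std::string& file_path, const std::string& dir_path) {
        if (std::remove(file_path.c_str()) != 0) {
          return false;
        }
        int dir_fd = open(dir_path.c_str(), O_RDONLY);
        if (dir_fd < 0) {
          return false;
        }
        bool ok = (fsync(dir_fd) == 0);
        close(dir_fd);
        return ok;
      }
      ```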
    • Fix the bloom filter skipping empty prefixes · 7e4e3814
      Committed by Maysam Yabandeh
      Summary:
      bc0da4b5 optimized bloom filters by skipping duplicate entries when the whole key and prefixes are both added to the bloom. However, it used the empty string as the initial value of the last entry added to the bloom. This is incorrect since an empty key/prefix is a valid entry by itself. This patch fixes that.
      Closes https://github.com/facebook/rocksdb/pull/3776
      
      Differential Revision: D7778803
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: d5a065daebee17f9403cac51e9d5626aac87bfbc
    • WritePrepared Txn: disable rollback in stress test · e5a4dacf
      Committed by Maysam Yabandeh
      Summary:
      The WritePrepared rollback implementation is not ready to be invoked in the middle of a workload. This is due to the lack of synchronization to obtain the cf handle from the db. Temporarily disabling this until the problem with rollback is fixed.
      Closes https://github.com/facebook/rocksdb/pull/3772
      
      Differential Revision: D7769041
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 0e3b0ce679bc2afba82e653a40afa3f045722754
  3. April 26, 2018 (5 commits)
  4. April 25, 2018 (2 commits)
    • Add crash-recovery correctness check to db_stress · a4fb1f8c
      Committed by Andrew Kryczka
      Summary:
      Previously, our `db_stress` tool held the expected state of the DB in-memory, so after crash-recovery, there was no way to verify data correctness. This PR adds an option, `--expected_values_file`, which specifies a file holding the expected values.
      
      In black-box testing, the `db_stress` process can be killed arbitrarily, so updates to the `--expected_values_file` must be atomic. We achieve this by `mmap`ing the file and relying on `std::atomic<uint32_t>` for atomicity. Actually this doesn't provide a total guarantee on what we want as `std::atomic<uint32_t>` could, in theory, be translated into multiple stores surrounded by a mutex. We can verify our assumption by looking at `std::atomic::is_always_lock_free`.
      
      For the `mmap`'d file, we didn't have an existing way to expose its contents as a raw memory buffer. This PR adds that via the `Env::NewMemoryMappedFileBuffer` function and the `MemoryMappedFileBuffer` class. A minimal sketch of the mmap-plus-atomics approach follows this entry.
      
      `db_crashtest.py` is updated to use an expected values file for black-box testing. On the first iteration (when the DB is created), an empty file is provided as `db_stress` will populate it when it runs. On subsequent iterations, that same filename is provided so `db_stress` can check the data is as expected on startup.
      Closes https://github.com/facebook/rocksdb/pull/3629
      
      Differential Revision: D7463144
      
      Pulled By: ajkr
      
      fbshipit-source-id: c8f3e82c93e045a90055e2468316be155633bd8b
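      
      A minimal sketch of the mmap-plus-atomics idea (not db_stress's actual code): the expected-values file is mapped read-write and each 4-byte slot is accessed through `std::atomic<uint32_t>`, which is what makes a kill at an arbitrary point recoverable.
      
      ```
      #include <fcntl.h>
      #include <sys/mman.h>
      #include <unistd.h>
      #include <atomic>
      #include <cstddef>
      #include <cstdint>
      
      // As the summary notes, this relies on 32-bit atomics being lock-free.
      static_assert(std::atomic<uint32_t>::is_always_lock_free,  // C++17
                    "expected lock-free 32-bit atomics");
      
      // Map an existing expected-values file and view it as an array of atomics.
      std::atomic<uint32_t>* MapExpectedValues(const char* path, size_t num_keys) {
        int fd = open(path, O_RDWR);
        if (fd < 0) return nullptr;
        void* mem = mmap(nullptr, num_keys * sizeof(uint32_t),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);  // the mapping stays valid after the fd is closed
        if (mem == MAP_FAILED) return nullptr;
        return static_cast<std::atomic<uint32_t>*>(mem);
      }
      ```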
    • Skip duplicate bloom keys when whole_key and prefix are mixed · bc0da4b5
      Committed by Maysam Yabandeh
      Summary:
      Currently we rely on FilterBitsBuilder to skip duplicate keys. It does that by comparing the hash of the key to the hash of the last added entry. This logic breaks, however, when whole_key_filtering is mixed with prefix blooms, as their additions to FilterBitsBuilder will be interleaved. The patch fixes that by comparing the last whole key and last prefix with the whole key and prefix of the new key, respectively, and skipping the call to FilterBitsBuilder if it is a duplicate. A simplified sketch follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3764
      
      Differential Revision: D7744413
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 15df73bbbafdfd754d4e1f42ea07f47b03bc5eb8
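      
      A simplified sketch of the dedup idea (not the actual FilterBlockBuilder code): the last added whole key and the last added prefix are tracked separately, and an explicit "seen" flag avoids the empty-string-sentinel problem fixed in 7e4e3814 above. `AddToFilter` is a hypothetical stand-in for the call into FilterBitsBuilder.
      
      ```
      #include <string>
      #include "rocksdb/slice.h"
      
      class DedupFilterAdder {
       public:
        void AddKeyAndPrefix(const rocksdb::Slice& whole_key,
                             const rocksdb::Slice& prefix) {
          if (!last_whole_key_set_ || whole_key.compare(last_whole_key_) != 0) {
            AddToFilter(whole_key);
            last_whole_key_.assign(whole_key.data(), whole_key.size());
            last_whole_key_set_ = true;  // empty keys are valid, so track "seen" explicitly
          }
          if (!last_prefix_set_ || prefix.compare(last_prefix_) != 0) {
            AddToFilter(prefix);
            last_prefix_.assign(prefix.data(), prefix.size());
            last_prefix_set_ = true;
          }
        }
      
       private:
        void AddToFilter(const rocksdb::Slice& /*entry*/) { /* FilterBitsBuilder::AddKey */ }
        std::string last_whole_key_, last_prefix_;
        bool last_whole_key_set_ = false, last_prefix_set_ = false;
      };
      ```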
  5. April 24, 2018 (3 commits)
    • Support lowering CPU priority of background threads · 090c78a0
      Committed by Gabriel Wicke
      Summary:
      Background activities like compaction can negatively affect
      latency of higher-priority tasks like request processing. To avoid this,
      rocksdb already lowers the IO priority of background threads on Linux
      systems. While this takes care of typical IO-bound systems, it does not
      help much when CPU (temporarily) becomes the bottleneck. This is
      especially likely when using more expensive compression settings.
      
      This patch adds an API to allow for lowering the CPU priority of
      background threads, modeled on the IO priority API. Benchmarks (see
      below) show significant latency and throughput improvements when CPU
      bound. As a result, workloads with some CPU usage bursts should benefit
      from lower latencies at a given utilization, or should be able to push
      utilization higher at a given request latency target.
      
      A useful side effect is that compaction CPU usage is now easily visible
      in common tools, allowing for an easier estimation of the contribution
      of compaction vs. request processing threads.
      
      As with IO priority, the implementation is limited to Linux, degrading
      to a no-op on other systems. A sketch of the underlying Linux mechanism follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3763
      
      Differential Revision: D7740096
      
      Pulled By: gwicke
      
      fbshipit-source-id: e5d32373e8dc403a7b0c2227023f9ce4f22b413c
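      
      A sketch of the underlying Linux mechanism such an API relies on (not RocksDB's exact code): each thread has its own nice value, addressable by its tid, so a background thread can lower its own scheduling priority.
      
      ```
      #include <sys/resource.h>
      #include <sys/syscall.h>
      #include <unistd.h>
      
      // Lower the calling thread's CPU priority by raising its nice value.
      // On non-Linux systems this compiles to a no-op, matching the behavior
      // described above.
      void LowerCurrentThreadCpuPriority() {
      #ifdef __linux__
        pid_t tid = static_cast<pid_t>(syscall(SYS_gettid));
        setpriority(PRIO_PROCESS, tid, 19 /* highest nice value = lowest priority */);
      #endif
      }
      ```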
    • Improve write time breakdown stats · affe01b0
      Committed by Mike Kolupaev
      Summary:
      There's a group of stats in PerfContext for profiling the write path. They break down the write time into WAL write, memtable insert, throttling, and everything else. We use these stats a lot for figuring out the cause of slow writes.
      
      These stats got a bit out of date and are now categorizing some interesting things as "everything else", and also do some double counting. This PR fixes it and adds two new stats: time spent waiting for other threads of the batch group, and time spent waiting for scheduling flushes/compactions. These will probably be enough to explain all the occasional abnormally slow (multiple-second) writes that we're seeing. A sketch of reading these PerfContext counters follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3602
      
      Differential Revision: D7251562
      
      Pulled By: al13n321
      
      fbshipit-source-id: 0a2d0f5a4fa5677455e1f566da931cb46efe2a0d
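      
      A sketch of reading the write-path breakdown around a write. The field names shown are the long-standing PerfContext counters; the two counters added by this change are only described generically above, so they are not named here.
      
      ```
      #include <iostream>
      #include "rocksdb/perf_context.h"
      #include "rocksdb/perf_level.h"
      
      // Enable timing, perform a write elsewhere, then print the breakdown.
      void DumpWriteBreakdown() {
        rocksdb::SetPerfLevel(rocksdb::PerfLevel::kEnableTime);
        rocksdb::get_perf_context()->Reset();
        // ... perform db->Put(...) or db->Write(...) here ...
        const rocksdb::PerfContext* ctx = rocksdb::get_perf_context();
        std::cout << "WAL write:        " << ctx->write_wal_time << " ns\n"
                  << "memtable insert:  " << ctx->write_memtable_time << " ns\n"
                  << "write throttling: " << ctx->write_delay_time << " ns\n"
                  << "pre/post process: " << ctx->write_pre_and_post_process_time << " ns\n";
        rocksdb::SetPerfLevel(rocksdb::PerfLevel::kDisable);
      }
      ```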
    • Revert "Skip deleted WALs during recovery" · d5afa737
      Committed by Siying Dong
      Summary:
      This reverts commit 73f21a7b.
      
      It breaks compatibility. When a DB is created using a build with this new change, opening the DB and reading the data will fail with this error:
      
      "Corruption: Can't access /000000.sst: IO error: while stat a file for size: /tmp/xxxx/000000.sst: No such file or directory"
      
      This is because the dummy AddFile4 entry generated by the new code will be treated as a real entry by an older build. The older build will think there is a real file with number 0, but there isn't such a file.
      Closes https://github.com/facebook/rocksdb/pull/3762
      
      Differential Revision: D7730035
      
      Pulled By: siying
      
      fbshipit-source-id: f2051859eff20ef1837575ecb1e1bb96b3751e77
  6. April 21, 2018 (7 commits)
    • Avoid directory renames in BackupEngine · a8a28da2
      Committed by Andrew Kryczka
      Summary:
      We used to name private directories like "1.tmp" while BackupEngine populated them, and then rename without the ".tmp" suffix (i.e., rename "1.tmp" to "1") after all files were copied. On glusterfs, directory renames like this require operations across many hosts, and partial failures have caused operational problems.
      
      Fortunately we don't need to rename private directories. We already have a meta-file that uses the tempfile-rename pattern (sketched after this entry) to commit a backup atomically after all its files have been successfully copied. So we can copy private files directly to their final location, and there is no longer a directory rename.
      Closes https://github.com/facebook/rocksdb/pull/3749
      
      Differential Revision: D7705610
      
      Pulled By: ajkr
      
      fbshipit-source-id: fd724a28dd2bf993ce323a5f2cb7e7d6980cc346
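      
      A sketch of the general tempfile-rename commit pattern the summary refers to (not BackupEngine's actual code): write the meta file under a temporary name, then rename it into place, so the backup only becomes visible once it is complete.
      
      ```
      #include <cstdio>
      #include <fstream>
      #include <string>
      
      // Write contents to final_path atomically: readers observe either the old
      // file or the fully written new one, never a partial write.
      bool CommitMetaFile(const std::string& final_path, const std::string& contents) {
        const std::string tmp_path = final_path + ".tmp";
        {
          std::ofstream out(tmp_path, std::ios::binary | std::ios::trunc);
          out << contents;
          if (!out.good()) return false;
        }  // the temporary file is closed here
        // rename() replaces final_path atomically on POSIX filesystems.
        return std::rename(tmp_path.c_str(), final_path.c_str()) == 0;
      }
      ```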
    • Disable EnvPosixTest::FilePermission · 2e72a589
      Committed by Yi Wu
      Summary:
      The test is flaky in our CI but could not be reproduced manually on the same CI host. Disabling it.
      Closes https://github.com/facebook/rocksdb/pull/3753
      
      Differential Revision: D7716320
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 6bed3b05880c1d24e8dc86bc970e5181bc98fb45
    • WritePrepared Txn: rollback via commit · bb2a2ec7
      Committed by Maysam Yabandeh
      Summary:
      Currently WritePrepared rolls back a transaction with prepare sequence number prepare_seq by i) writing a single rollback batch with rollback_seq, ii) adding <rollback_seq, rollback_seq> to the commit cache, and iii) removing prepare_seq from the PrepareHeap.
      This is correct assuming that no snapshot is taken while a transaction is rolled back, which is the case for the way MySQL does rollback, i.e., after recovery. Otherwise, if max_evicted_seq advances past prepare_seq, a live snapshot might assume the data is committed since it does not find it in the CommitCache.
      The change is to simply add <prepare_seq, rollback_seq> to the commit cache before removing prepare_seq from the PrepareHeap. In this way, if max_evicted_seq advances past prepare_seq, the existing mechanism that checks evicted entries against live snapshots will make sure that the live snapshot does not see the data of the rolled-back transaction.
      Closes https://github.com/facebook/rocksdb/pull/3745
      
      Differential Revision: D7696193
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: c9a2d46341ddc03554dded1303520a1cab74ef9c
    • Add a stat for MultiGet keys found, update memtable hit/miss stats · dbdaa466
      Committed by Anand Ananthabhotla
      Summary:
      1. Add a new ticker stat rocksdb.number.multiget.keys.found to track the
      number of keys successfully read
      2. Update rocksdb.memtable.hit/miss in DBImpl::MultiGet(). It was being done in
      DBImpl::GetImpl(), but not in MultiGet. A sketch of reading these tickers follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3730
      
      Differential Revision: D7677364
      
      Pulled By: anand1976
      
      fbshipit-source-id: af22bd0ef8ddc5cf2b4244b0a024e539fe48bca5
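      
      A sketch of reading the relevant tickers after some MultiGet calls, assuming statistics were enabled via `CreateDBStatistics()`. The enum name `NUMBER_MULTIGET_KEYS_FOUND` is inferred from the ticker string quoted above and may differ.
      
      ```
      #include <iostream>
      #include <memory>
      #include "rocksdb/options.h"
      #include "rocksdb/statistics.h"
      
      // Print MultiGet and memtable counters from the DB's Statistics object.
      void DumpMultiGetStats(const rocksdb::Options& options) {
        std::shared_ptr<rocksdb::Statistics> stats = options.statistics;
        if (!stats) return;  // set options.statistics = rocksdb::CreateDBStatistics() beforehand
        std::cout << "multiget keys read:  "
                  << stats->getTickerCount(rocksdb::NUMBER_MULTIGET_KEYS_READ) << "\n"
                  << "multiget keys found: "
                  << stats->getTickerCount(rocksdb::NUMBER_MULTIGET_KEYS_FOUND) << "\n"
                  << "memtable hits:       "
                  << stats->getTickerCount(rocksdb::MEMTABLE_HIT) << "\n"
                  << "memtable misses:     "
                  << stats->getTickerCount(rocksdb::MEMTABLE_MISS) << "\n";
      }
      ```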
    • WritePrepared Txn: enable TryAgain for duplicates at the end of the batch · c3d1e36c
      Committed by Maysam Yabandeh
      Summary:
      The WriteBatch::Iterate will try with a larger sequence number if the memtable reports a duplicate. This is signaled with the TryAgain status. So far the assumption was that the last entry in the batch will never return TryAgain, which is correct when the WAL is created via WritePrepared since it always appends a batch separator if a natural one does not exist. However, when reading a WAL generated by WriteCommitted, this batch separator might not exist. Although WritePrepared is not supposed to be able to read a WAL generated by WriteCommitted, we should avoid confusing scenarios in which the behavior becomes unpredictable. The patch fixes that by allowing TryAgain even for the last entry of the write batch.
      Closes https://github.com/facebook/rocksdb/pull/3747
      
      Differential Revision: D7708391
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: bfaddaa9b14a4cdaff6977f6f63c789a6ab1ee0d
    • Propagate fill_cache config to partitioned index iterator · 17e04039
      Committed by Maysam Yabandeh
      Summary:
      Currently the partitioned index iterator creates a new ReadOptions which ignores the fill_cache config set in the ReadOptions passed by the user. The patch propagates fill_cache from the user's ReadOptions to that of the partitioned index iterator.
      It also clarifies the contract of fill_cache: i) it does not apply to filters, ii) the block cache is still charged for the size of the data block, and iii) the block is still pinned if it is already in the block cache. A short usage sketch follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3739
      
      Differential Revision: D7678308
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 53ed96424ae922e499e2d4e3580ddc3f0db893da
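      
      A short sketch of the user-facing setting this change makes effective for partitioned indexes: with `fill_cache = false`, a scan should not pollute the block cache.
      
      ```
      #include <memory>
      #include "rocksdb/db.h"
      #include "rocksdb/options.h"
      
      // Scan the whole DB without inserting the blocks it reads into the block cache.
      void ScanWithoutFillingCache(rocksdb::DB* db) {
        rocksdb::ReadOptions read_opts;
        read_opts.fill_cache = false;  // now also respected by the partitioned index iterator
        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(read_opts));
        for (it->SeekToFirst(); it->Valid(); it->Next()) {
          // consume it->key() / it->value()
        }
      }
      ```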
    • Fix GitHub issue #3716: gcc-8 warnings · dee95a1a
      Committed by przemyslaw.skibinski@percona.com
      Summary:
      Fix the following gcc-8 warnings:
      - conflicting C language linkage declaration [-Werror]
      - writing to an object with no trivial copy-assignment [-Werror=class-memaccess]
      - array subscript -1 is below array bounds [-Werror=array-bounds]
      
      Solves https://github.com/facebook/rocksdb/issues/3716
      Closes https://github.com/facebook/rocksdb/pull/3736
      
      Differential Revision: D7684161
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 47c0423d26b74add251f1d3595211eee1e41e54a
  7. April 20, 2018 (3 commits)
  8. April 19, 2018 (3 commits)
    • Add block cache related DB properties · ad511684
      Committed by Yi Wu
      Summary:
      Add DB properties "rocksdb.block-cache-capacity", "rocksdb.block-cache-usage", "rocksdb.block-cache-pinned-usage" to show block cache usage. A sketch of querying them follows this entry.
      Closes https://github.com/facebook/rocksdb/pull/3734
      
      Differential Revision: D7657180
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: dd34a019d5878dab539c51ee82669e97b2b745fd
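      
      A sketch of querying the new properties; `DB::GetProperty` returns the value as a string holding a byte count.
      
      ```
      #include <iostream>
      #include <string>
      #include "rocksdb/db.h"
      
      // Print block cache capacity, current usage, and pinned usage.
      void PrintBlockCacheUsage(rocksdb::DB* db) {
        std::string capacity, usage, pinned;
        if (db->GetProperty("rocksdb.block-cache-capacity", &capacity) &&
            db->GetProperty("rocksdb.block-cache-usage", &usage) &&
            db->GetProperty("rocksdb.block-cache-pinned-usage", &pinned)) {
          std::cout << "block cache capacity:     " << capacity << " bytes\n"
                    << "block cache usage:        " << usage << " bytes\n"
                    << "block cache pinned usage: " << pinned << " bytes\n";
        }
      }
      ```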
    • include thread-pool priority in thread names · 3cea6139
      Committed by Andrew Kryczka
      Summary:
      Previously threads were named "rocksdb:bg<index in thread pool>", so the first thread in all thread pools would be named "rocksdb:bg0". Users want to be able to distinguish threads used for flush (high-pri) vs regular compaction (low-pri) vs compaction to bottom-level (bottom-pri). So I changed the thread naming convention to include the thread-pool priority.
      Closes https://github.com/facebook/rocksdb/pull/3702
      
      Differential Revision: D7581415
      
      Pulled By: ajkr
      
      fbshipit-source-id: ce04482b6acd956a401ef22dc168b84f76f7d7c1
    • Improve db_stress with transactions · 6d06be22
      Committed by Maysam Yabandeh
      Summary:
      db_stress was already capable of running transactions by setting use_txn. Running it under stress showed a couple of problems, which are fixed in this patch.
      - An uncommitted transaction must be either rolled back or committed after recovery.
      - The current implementation of WritePrepared transactions cannot handle a cf drop before a crash. Clarified that in the comments and added safety checks. When running with use_txn, clear_column_family_one_in must be set to 0.
      Closes https://github.com/facebook/rocksdb/pull/3733
      
      Differential Revision: D7654419
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: a024bad80a9dc99677398c00d29ff17d4436b7f3
  9. April 18, 2018 (1 commit)
  10. April 17, 2018 (3 commits)
  11. April 16, 2018 (4 commits)