Commit c073c2ed authored by Hui Xiao, committed by Facebook GitHub Bot

Revert "Clarify comment about compaction_readahead_size's sanitizatio… (#11773)

Summary:
Revert "Clarify comment about compaction_readahead_size's sanitization change (https://github.com/facebook/rocksdb/issues/11755)"

This reverts commit 45131659.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11773

Reviewed By: ajkr

Differential Revision: D48832320

Pulled By: hx235

fbshipit-source-id: 96cef26a885134360766a83505f6717598eac6a9
Parent 4234a6a3
@@ -22,7 +22,6 @@
 ### Behavior Changes
 * Statistics `rocksdb.sst.read.micros` now includes time spent on multi read and async read into the file
 * For Universal Compaction users, periodic compaction (option `periodic_compaction_seconds`) will be set to 30 days by default if block based table is used.
-* `Options::compaction_readahead_size` will be sanitized to 2MB when set to 0 under non-direct IO since we have moved prefetching responsibility to page cache for compaction read with readhead size equal to `Options::compaction_readahead_size` under non-direct IO (#11631)
 ### Bug Fixes
 * Fix a bug in FileTTLBooster that can cause users with a large number of levels (more than 65) to see errors like "runtime error: shift exponent .. is too large.." (#11673).
@@ -951,13 +951,10 @@ struct DBOptions {
   enum AccessHint { NONE, NORMAL, SEQUENTIAL, WILLNEED };
   AccessHint access_hint_on_compaction_start = NORMAL;
-  // The size RocksDB uses to perform readahead during compaction read.
-  // If set zero, RocksDB will sanitize it to be 2MB during db open.
-  // If you're
+  // If non-zero, we perform bigger reads when doing compaction. If you're
   // running RocksDB on spinning disks, you should set this to at least 2MB.
   // That way RocksDB's compaction is doing sequential instead of random reads.
   //
-  //
   // Default: 0
   //
   // Dynamically changeable through SetDBOptions() API.