Commit eba3fc64 authored by Lei Jin

make corruption_test:CompactionInputErrorParanoid deterministic

Summary:
The test writes ~10M of data. The default L0 compaction trigger is 4 files, and there are 2 write buffers, so the DB can accommodate ~6M of data before compaction is guaranteed to happen. Encoding seems to do a good job of shrinking the data, so sometimes compaction never gets triggered and the test fails quite often.
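
For readers skimming the numbers, here is a small standalone sketch (not part of the commit) of the capacity arithmetic in the summary; the constants are the ones cited above and in the diff below.

#include <cstdio>

int main() {
  const long long data_written    = 10LL * 1024 * 1024;  // the test writes ~10M of data
  const long long l0_trigger      = 4;        // default level0_file_num_compaction_trigger
  const long long write_buffers   = 2;        // max_write_buffer_number used by the test
  const long long old_buffer_size = 1048576;  // write_buffer_size before this change (1M)
  const long long new_buffer_size = 131072;   // write_buffer_size after this change (128K)

  // Old settings: 4 L0 files plus 2 write buffers of 1M each hold ~6M before a
  // compaction must run; well-encoded data can shrink ~10M of input enough
  // that this threshold is sometimes never reached.
  std::printf("written: %lld, old headroom: %lld bytes\n",
              data_written, (l0_trigger + write_buffers) * old_buffer_size);

  // New settings: the same headroom is only ~768K, far below ~10M of writes,
  // so the L0 compaction the test relies on is always triggered.
  std::printf("written: %lld, new headroom: %lld bytes\n",
              data_written, (l0_trigger + write_buffers) * new_buffer_size);
  return 0;
}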

Test Plan: ran it multiple times and it passed every time

Reviewers: igor, sdong

Reviewed By: sdong

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17775
Parent 9433e359
@@ -326,7 +326,8 @@ TEST(CorruptionTest, CompactionInputError) {
 TEST(CorruptionTest, CompactionInputErrorParanoid) {
   Options options;
   options.paranoid_checks = true;
-  options.write_buffer_size = 1048576;
+  options.write_buffer_size = 131072;
+  options.max_write_buffer_number = 2;
   Reopen(&options);
   DBImpl* dbi = reinterpret_cast<DBImpl*>(db_);
@@ -349,7 +350,7 @@ TEST(CorruptionTest, CompactionInputErrorParanoid) {
   Status s;
   std::string tmp1, tmp2;
   bool failed = false;
-  for (int i = 0; i < 10000 && s.ok(); i++) {
+  for (int i = 0; i < 10000; i++) {
     s = db_->Put(WriteOptions(), Key(i, &tmp1), Value(i, &tmp2));
     if (!s.ok()) {
       failed = true;
@@ -257,6 +257,8 @@ struct ColumnFamilyOptions {
   // Number of files to trigger level-0 compaction. A value <0 means that
   // level-0 compaction will not be triggered by number of files at all.
+  //
+  // Default: 4
   int level0_file_num_compaction_trigger;
   // Soft limit on number of level-0 files. We start slowing down writes at this
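
As a usage note (not from the commit), here is a minimal sketch of setting the options touched in this diff on rocksdb::Options when opening a DB; the DB path and the create_if_missing flag are illustrative assumptions, not part of the test.

#include <cstdio>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;                // illustrative assumption, not in the test
  options.paranoid_checks = true;                  // fail hard when corrupt data is detected
  options.write_buffer_size = 131072;              // small memtables, as in the test fix
  options.max_write_buffer_number = 2;
  options.level0_file_num_compaction_trigger = 4;  // the documented default

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/level0_trigger_sketch", &db);
  if (!s.ok()) {
    std::fprintf(stderr, "open failed: %s\n", s.ToString().c_str());
    return 1;
  }
  delete db;
  return 0;
}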