- 06 Aug, 2015 (10 commits)
-
-
Committed by Islam AbdelRahman
Summary: Update the DeleteScheduler tests so that they verify the penalties applied while waiting, instead of measuring elapsed time, which is not reliable.
Test Plan:
make -j64 delete_scheduler_test && ./delete_scheduler_test
COMPILE_WITH_TSAN=1 make -j64 delete_scheduler_test && ./delete_scheduler_test
COMPILE_WITH_ASAN=1 make -j64 delete_scheduler_test && ./delete_scheduler_test
make -j64 db_test && ./db_test --gtest_filter="DBTest.RateLimitedDelete:DBTest.DeleteSchedulerMultipleDBPaths"
COMPILE_WITH_TSAN=1 make -j64 db_test && ./db_test --gtest_filter="DBTest.RateLimitedDelete:DBTest.DeleteSchedulerMultipleDBPaths"
COMPILE_WITH_ASAN=1 make -j64 db_test && ./db_test --gtest_filter="DBTest.RateLimitedDelete:DBTest.DeleteSchedulerMultipleDBPaths"
Reviewers: yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D43635
-
Committed by sdong
Summary: Currently, valgrind_check does not fail on test failures, which creates confusion. valgrind_check should fail if a test fails.
Test Plan: Manually change tests to return a test failure or cause a memory leak, and verify that valgrind_check behaves correctly.
Reviewers: anthony, yhchiang, IslamAbdelRahman, igor, kradhakrishnan, rven
Reviewed By: rven
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43629
-
Summary: Fix the build failure.
Test Plan: make all
Reviewers: sdong, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43623
-
Summary: The list of info log files of a DB can be obtained using the new function.
Test Plan: New test in db_test.cc passed.
Reviewers: yhchiang, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: IslamAbdelRahman, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D41715
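The log entry above does not name the new function. As an illustration only, here is a minimal sketch assuming the API is a free helper named GetInfoLogList() declared in a utilities header; the function name and header path are assumptions, not confirmed by this entry.

```cpp
#include <iostream>
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/utilities/info_log_finder.h"  // assumed header for GetInfoLogList()

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/info_log_demo", &db);
  if (!s.ok()) {
    std::cerr << s.ToString() << std::endl;
    return 1;
  }

  // Assumed signature: Status GetInfoLogList(DB* db, std::vector<std::string>* list);
  std::vector<std::string> info_logs;
  s = rocksdb::GetInfoLogList(db, &info_logs);
  if (s.ok()) {
    for (const auto& name : info_logs) {
      std::cout << name << std::endl;  // e.g. LOG, LOG.old.<timestamp>
    }
  }

  delete db;
  return 0;
}
```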
-
Committed by sdong
Summary: Add two unit tests for SyncWAL(). One makes sure SyncWAL() does not block writes in another thread; the other makes sure SyncWAL() does not wait for ongoing writes to finish before executing. Also create a new test file, db_wal_test, and move two WAL-related tests from db_test into it.
Test Plan: Run the new tests.
Reviewers: IslamAbdelRahman, rven, kradhakrishnan, kolmike, tnovak, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43605
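For context, a minimal sketch of how DB::SyncWAL() is typically used: writes are issued with WriteOptions::sync left false, and the WAL is flushed to stable storage explicitly afterwards. The database path and keys below are illustrative only.

```cpp
#include <cassert>
#include "rocksdb/db.h"

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/syncwal_demo", &db);
  assert(s.ok());

  // Unsynced write: it reaches the WAL but is not fsync'ed yet.
  rocksdb::WriteOptions wo;
  wo.sync = false;
  s = db->Put(wo, "key1", "value1");
  assert(s.ok());

  // Explicitly sync the WAL; per the tests above, other writers may keep
  // writing while this call is in progress.
  s = db->SyncWAL();
  assert(s.ok());

  delete db;
  return 0;
}
```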
-
Committed by sdong
Summary: Measure read latency as a histogram and record it in statistics. Compaction inputs are excluded when possible (unfortunately this is usually not possible, since we usually take the table reader from the table cache).
Test Plan: Run db_bench and check that the stats show up, for example:
rocksdb.sst.read.micros statistics Percentiles :=> 50 : 1.238522 95 : 2.529740 99 : 3.912180
Reviewers: kradhakrishnan, rven, anthony, IslamAbdelRahman, MarkCallaghan, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43275
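A minimal sketch of reading that histogram programmatically, assuming the "rocksdb.sst.read.micros" histogram is exposed as the rocksdb::SST_READ_MICROS enum in statistics.h (the enum name and the database path are assumptions):

```cpp
#include <iostream>
#include "rocksdb/db.h"
#include "rocksdb/statistics.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.statistics = rocksdb::CreateDBStatistics();

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/read_micros_demo", &db);
  if (!s.ok()) return 1;

  // Issue a read so SST files are actually touched (value may not exist).
  std::string value;
  db->Get(rocksdb::ReadOptions(), "some_key", &value);

  // Pull the SST read latency histogram out of the statistics object.
  rocksdb::HistogramData hist;
  options.statistics->histogramData(rocksdb::SST_READ_MICROS, &hist);
  std::cout << "p50=" << hist.median
            << " p95=" << hist.percentile95
            << " p99=" << hist.percentile99 << std::endl;

  delete db;
  return 0;
}
```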
-
Committed by sdong
Summary: "make commit-prereq" fails to clean up java, which can cause rocksjava failures.
Test Plan: Run commit-prereq.
Reviewers: IslamAbdelRahman, rven, kradhakrishnan, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43575
-
Committed by Islam AbdelRahman
Summary: This patch fixes the false positive of DBTest.FlushSchedule under TSAN; we don't need to disable this test.
Test Plan: COMPILE_WITH_TSAN=1 make -j64 db_test && ./db_test --gtest_filter="DBTest.FlushSchedule"
Reviewers: yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D43599
-
Committed by Islam AbdelRahman
Summary: Fix a TSAN false positive and relax the conditions when running under TSAN.
Test Plan: COMPILE_WITH_TSAN=1 make -j64 delete_scheduler_test && ./delete_scheduler_test
Reviewers: yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D43593
-
Committed by sdong
Summary: During forward iteration, if the current key is a merge, the internal iterator is positioned at the next key. If Prev() is called at that point, an extra Prev() is needed to recover the position. This is the second attempt at the fix after reverting ec70fea4; this time the fix is narrowed to the case where the current key is a merge key, and it avoids the reseeking logic used for max_iterating skipping.
Test Plan: Enable the two disabled tests and make sure they pass.
Reviewers: rven, IslamAbdelRahman, kradhakrishnan, tnovak, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43557
-
- 05 Aug, 2015 (23 commits)
-
-
Committed by Andres Notzli
Summary: While working on https://reviews.facebook.net/D43179, I found duplicate code in the tests. This patch removes it.
Test Plan: make clean all check
Reviewers: igor, sdong, rven, anthony, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43263
-
Committed by Mike Kolupaev
Summary: Subj. We really need this feature. The previous diff, D40899, has most of the changes needed to make this possible; this diff just adds the method.
Test Plan: `make check`; the new test fails without this diff; ran with ASAN, TSAN and valgrind.
Reviewers: igor, rven, IslamAbdelRahman, anthony, kradhakrishnan, tnovak, yhchiang, sdong
Reviewed By: sdong
Subscribers: MarkCallaghan, maykov, hermanlee4, yoshinorim, tnovak, dhruba
Differential Revision: https://reviews.facebook.net/D40905
-
Committed by Ari Ekmekji
Summary: Updated DBTest, DBCompactionTest and CompactionJobStatsTest to run compaction-related tests once with subcompactions enabled and once disabled, using the TEST_P test type in the Google Test suite.
Test Plan: ./db_test ./db_compaction_test ./compaction_job_stats_test
Reviewers: sdong, igor, anthony, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43443
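As an illustration of the pattern described above, a value-parameterized Google Test runs the same body once per parameter value. The fixture and test names here are invented for the example, not the ones used in the actual commit.

```cpp
#include <gtest/gtest.h>

// Hypothetical fixture: the bool parameter stands for "subcompactions enabled".
class CompactionParamTest : public ::testing::TestWithParam<bool> {
 protected:
  bool subcompactions_enabled() const { return GetParam(); }
};

TEST_P(CompactionParamTest, RunsWithAndWithoutSubcompactions) {
  // The real tests would configure the DB based on GetParam() and then
  // exercise compaction; here we only show the parameterization mechanics.
  if (subcompactions_enabled()) {
    // ... e.g. set more than one subcompaction in the options ...
  }
  SUCCEED();
}

// Instantiate the suite twice: once with subcompactions off, once on.
// (gtest of that era used INSTANTIATE_TEST_CASE_P rather than _SUITE_P.)
INSTANTIATE_TEST_CASE_P(SubcompactionToggle, CompactionParamTest,
                        ::testing::Values(false, true));
```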
-
Committed by Islam AbdelRahman
Summary: Introduce a DeleteScheduler that allows enforcing a rate limit on file deletion. Instead of deleting files immediately, files are moved to a trash directory and deleted by a background thread that applies a sleep penalty between deletes when needed. PurgeObsoleteFiles and PurgeObsoleteWALFiles have been updated to use the delete_scheduler instead of env_->DeleteFile.
Test Plan: added delete_scheduler_test; existing unit tests
Reviewers: kradhakrishnan, anthony, rven, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D43221
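A rough sketch of how a database might be pointed at such a scheduler. The factory name NewDeleteScheduler, its parameters, the header path, and the DBOptions::delete_scheduler field are assumptions based on the description above, not confirmed by this log entry.

```cpp
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/delete_scheduler.h"  // assumed header for the new class

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Assumed factory: move deleted files to a trash dir and erase them at
  // roughly 1 MB/s instead of unlinking them immediately.
  std::shared_ptr<rocksdb::DeleteScheduler> scheduler(
      rocksdb::NewDeleteScheduler(rocksdb::Env::Default(),
                                  "/tmp/delete_demo_trash",
                                  1024 * 1024 /* rate_bytes_per_sec */));
  options.delete_scheduler = scheduler;  // assumed DBOptions field

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/delete_demo", &db);
  // ... normal reads/writes; obsolete SST and WAL files now go through
  // the scheduler rather than Env::DeleteFile ...
  delete db;
  return s.ok() ? 0 : 1;
}
```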
-
Committed by Yueh-Hsuan Chiang
Summary: Update JAVA-HISTORY.md for v3.13.
Test Plan: no code change.
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43539
-
Committed by Yueh-Hsuan Chiang
Fix shared library names on OSX
-
Committed by Yueh-Hsuan Chiang
Summary: Fixed the RocksJava test failure of shouldSetTestCappedPrefixExtractor by adding the missing native implementation of useCappedPrefixExtractor.
Test Plan:
make jclean
make rocksdbjava -j32
make jtest
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43551
-
Committed by Yueh-Hsuan Chiang
RemoveEmptyValueCompactionFilter
-
Committed by ashishn
-
Committed by Yoshinori Matsunobu
Summary: MyRocks is using the latest jemalloc version, not 3.6.0. Combining multiple versions (3.6.0 in RocksDB and latest in MyRocks) broke some features -- for example, getting SIGSEGV when heap profiling was enabled. This diff switches to the latest jemalloc if the environment variable ROCKSDB_FBCODE_BUILD_WITH_481=1 is set. My understanding is that this env variable was used by MyRocks only, so it should be safe to change.
Test Plan: built MyRocks and verified that jemalloc heap profiling worked
Reviewers: igor, rven, yhchiang, jtolmer, maykov, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D43479
-
Committed by Yueh-Hsuan Chiang
Summary: Merge pull request #665 by adamretter. Exposes BackupEngine from C++ to the Java API; previously only BackupableDB was available.
Test Plan: BackupEngineTest.java
Reviewers: fyrz, igor, ankgup87, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D42873
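For reference, the underlying C++ BackupEngine that the Java binding wraps is used roughly as below. This is a sketch, not the binding code itself; the backup and database directory paths are illustrative.

```cpp
#include <cassert>
#include "rocksdb/db.h"
#include "rocksdb/utilities/backupable_db.h"

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  assert(rocksdb::DB::Open(options, "/tmp/backup_demo_db", &db).ok());

  // Open a BackupEngine pointing at a separate backup directory.
  rocksdb::BackupEngine* backup_engine = nullptr;
  rocksdb::Status s = rocksdb::BackupEngine::Open(
      rocksdb::Env::Default(),
      rocksdb::BackupableDBOptions("/tmp/backup_demo_backups"),
      &backup_engine);
  assert(s.ok());

  // Take a backup of the live DB.
  s = backup_engine->CreateNewBackup(db);
  assert(s.ok());

  delete backup_engine;
  delete db;
  return 0;
}
```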
-
Committed by Yueh-Hsuan Chiang
Another attempt at adding the Java API and tests to the travis build
-
Committed by Yueh-Hsuan Chiang
Summary: Make DBCompactionTest.SkipStatsUpdateTest more stable by removing a flaky but unnecessary assertion on the size of the db, since simply checking the random file open count suffices.
Test Plan: db_compaction_test
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43533
-
Committed by Yueh-Hsuan Chiang
Summary: Polish HISTORY.md.
Test Plan: no code change.
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D43527
-
Committed by sdong
Summary: In a recent change, crash_test can put data under TEST_TMPDIR. However, the directory is not cleaned before running the test, which may cause unexpected results. Clean it.
Test Plan: Run the whitebox and blackbox crash tests against non-existing DBs, or non-empty but incompatible DBs, and make sure they work as expected.
Reviewers: kradhakrishnan, rven, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43515
-
Committed by Yueh-Hsuan Chiang
Summary: Fix a typo and update HISTORY.md for NewCompactOnDeletionCollectorFactory().
Test Plan: no code change.
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43521
-
Committed by Yueh-Hsuan Chiang
Summary: UpdateAccumulatedStats() is used to optimize compaction decisions, especially when the number of deletion entries is high, but this function can slow down DB::Open, especially on disk. This patch adds DBOptions::skip_stats_update_on_db_open, which skips UpdateAccumulatedStats() during DB::Open() when set to true.
Test Plan: Add DBCompactionTest.SkipStatsUpdateTest
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: tnovak, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42843
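A minimal sketch of turning the new option on; the database path is illustrative.

```cpp
#include "rocksdb/db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Skip UpdateAccumulatedStats() at open time to speed up DB::Open(),
  // at the cost of less accurate stats-driven compaction right after open.
  options.skip_stats_update_on_db_open = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/skip_stats_demo", &db);
  if (s.ok()) {
    delete db;
  }
  return s.ok() ? 0 : 1;
}
```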
-
Committed by sdong
Summary: Currently, the whitebox crash test is not really exercised, because the DB is destroyed after each crash. With this fix, during the first half of the run time, the test keeps reopening the crashed DB and continuing from there.
Test Plan: "make whitebox_crash_test" and see the same DB keep crashing and being reopened.
Reviewers: IslamAbdelRahman, yhchiang, rven, kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43503
-
Committed by sdong
Summary: Currently crash_test only puts data under /tmp, which is inflexible if we want to cover different file systems or media. Make crash_test honor TEST_TMPDIR so that users can run it against another file system.
Test Plan: Run blackbox_crash_test and whitebox_crash_test with and without TEST_TMPDIR set, and make sure the DBs are put in the right place.
Reviewers: kradhakrishnan, yhchiang, rven, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43509
-
Committed by sdong
Summary: crash_test currently runs only complicated configurations: multiple column families, prefix hash, frequently changing options, many compaction threads, etc. These options are good for covering new features, but we lose coverage of the most common use cases. Furthermore, by running only with multiple column families, we are not able to create LSM trees that are large enough to cover some stress cases. Make half of the crash_test runs use the simple setup: single column family, default memtable, one compaction thread, no option changes.
Test Plan: Run crash_test
Reviewers: rven, yhchiang, IslamAbdelRahman, kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43461
-
Committed by Boyang Zhang
Fixed memory leak error
-
Committed by Boyang Zhang
Summary: So I took a look and saw that a raw pointer to TableBuilder was used. Changed it to a unique_ptr. I think this should work, but I cannot run valgrind correctly on my local machine to test it.
Test Plan: Run valgrind, but it's not working locally. It says I'm executing an unrecognized instruction.
Reviewers: yhchiang
Subscribers: dhruba, sdong
Differential Revision: https://reviews.facebook.net/D43485
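The general pattern being applied is replacing a manually deleted owning pointer with std::unique_ptr. The types below are placeholders, not the actual sst_dump code.

```cpp
#include <memory>

// Placeholder standing in for the real rocksdb::TableBuilder.
struct TableBuilder {
  void Add(const char* key, const char* value) {}
  void Finish() {}
};

void BuildTableLeaky() {
  TableBuilder* builder = new TableBuilder();  // leaks if an early return happens
  builder->Add("k", "v");
  builder->Finish();
  delete builder;  // easy to forget on every exit path
}

void BuildTableSafe() {
  // unique_ptr owns the builder; it is destroyed on every exit path,
  // including early returns and exceptions.
  std::unique_ptr<TableBuilder> builder(new TableBuilder());
  builder->Add("k", "v");
  builder->Finish();
}
```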
-
Committed by sdong
Summary: In "sst_dump --show_compression_sizes", a reference to CompressionOptions is kept in TableBuilderOptions after the referenced object is destroyed, causing a memory issue.
Test Plan: Run valgrind against SSTDumpToolTest.CompressedSizes and make sure it is fixed.
Reviewers: IslamAbdelRahman, yhchiang, kradhakrishnan, rven
Reviewed By: rven
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43497
-
- 04 Aug, 2015 (6 commits)
-
-
Committed by Yueh-Hsuan Chiang
Summary: Fix a compile warning in compact_on_deletion_collector in some environments.
Test Plan: make
Reviewers: igor, sdong, anthony, IslamAbdelRahman
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43467
-
Committed by Yueh-Hsuan Chiang
Summary: This diff adds CompactOnDeletionCollector in utilities/table_properties_collectors, which applies a sliding window over an sst file and marks the file as needing compaction when it observes enough deletion entries within the consecutive keys covered by the window.
Test Plan: compact_on_deletion_collector_test
Reviewers: igor, anthony, IslamAbdelRahman, kradhakrishnan, yoshinorim, sdong
Reviewed By: sdong
Subscribers: maykov, dhruba
Differential Revision: https://reviews.facebook.net/D41175
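A minimal sketch of wiring the collector into a DB through the factory mentioned elsewhere in this log, NewCompactOnDeletionCollectorFactory(). The header path, parameter order, and the window/trigger values are assumptions chosen for the example.

```cpp
#include "rocksdb/db.h"
#include "rocksdb/utilities/table_properties_collectors.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Mark an SST file as need-compaction when, within any window of 128
  // consecutive entries, at least 50 are deletions (values are arbitrary).
  options.table_properties_collector_factories.emplace_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(
          128 /* sliding_window_size */, 50 /* deletion_trigger */));

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/compact_on_deletion_demo", &db);
  if (s.ok()) {
    delete db;
  }
  return s.ok() ? 0 : 1;
}
```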
-
Committed by Venkatesh Radhakrishnan
Summary: The compact files API had a bug where some overlapping files were not added. These are files which overlap with files that were added to the compaction input files, but not with the original set of input files. This happens only when more than two levels are involved in the compaction. An example illustrates this better. Level 2 has one input file, 1.sst, which spans [20,30]. Level 3 has added file 2.sst, which spans [10,25]. Level 4 has file 3.sst, which spans [35,40], and input file 4.sst, which spans [46,50]. The existing code would not add 3.sst to the set of input_files because it only becomes an overlapping file in level 4, and it wasn't one in level 3. When installing the results of the compaction, 3.sst would overlap with an output file from the compact files call and trip the assertion in version_set.cc:1130:
// Must not overlap
assert(level <= 0 || level_files->empty() ||
       internal_comparator_->Compare(
           (*level_files)[level_files->size() - 1]->largest, f->smallest) < 0);
This change now also adds overlapping files from the current level to the set of input files, so that we don't hit the assertion above.
Test Plan: d=/tmp/j; rm -rf $d; seq 1000 | parallel --gnu --eta 'd=/tmp/j/d-{}; mkdir -p $d; TEST_TMPDIR=$d ./db_compaction_test --gtest_filter=*CompactilesOnLevel* --gtest_also_run_disabled_tests >& '$d'/log-{}'
Reviewers: igor, yhchiang, sdong
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43437
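For orientation, the manual-compaction entry point this bug sits behind is DB::CompactFiles(). A rough usage sketch follows; the chosen levels and the idea of filtering by level 2 are illustrative only.

```cpp
#include <string>
#include <vector>
#include "rocksdb/db.h"

// Compact an explicit set of SST files into a target level.
// File names are typically discovered via DB::GetLiveFilesMetaData().
rocksdb::Status CompactLevel2Files(rocksdb::DB* db) {
  std::vector<rocksdb::LiveFileMetaData> metadata;
  db->GetLiveFilesMetaData(&metadata);

  std::vector<std::string> input_files;
  for (const auto& meta : metadata) {
    if (meta.level == 2) {
      input_files.push_back(meta.name);  // e.g. "/000123.sst"
    }
  }

  // Ask RocksDB to compact those files into level 3; RocksDB pulls in any
  // additional overlapping files it needs (the subject of the fix above).
  return db->CompactFiles(rocksdb::CompactionOptions(), input_files,
                          3 /* output_level */);
}
```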
-
Committed by Venkatesh Radhakrishnan
Summary: Made SuggestCompactRangeNoTwoLevel0Compactions deterministic by forcing a flush after generating a file and waiting for compaction at the end.
Test Plan: Run SuggestCompactRangeNoTwoLevel0Compactions
Reviewers: yhchiang, igor, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43449
-
Committed by Ari Ekmekji
Summary: As of now, compactions involving files from Level 0 and Level 1 are single-threaded because the files in L0, although sorted, are not range-partitioned like the other levels. This means that during an L0-L1 compaction, each file from L1 potentially needs to be merged with all the files from L0. This attempt to parallelize L0-L1 compaction assigns a thread and a corresponding iterator to each L1 file; each iterator then considers only the key range found in that L1 file and only the L0 files that have those keys (and only the specific portions of those L0 files in which those keys are found). In this way the overlap between different iterators working on the same files is minimized and potentially eliminated. The first step is to restructure the compaction logic to break L0-L1 compactions into multiple, smaller, sequential compactions. Eventually each of these smaller jobs will run simultaneously. Areas to pay extra attention to are:
- Correct aggregation of compaction job statistics across multiple threads
- Proper opening/closing of output files (make sure each thread's is unique)
- Keys that span multiple L1 files
- Skewed distributions of keys within L0 files
Test Plan: Make and run db_test (the newer version has separate compaction tests) and compaction_job_stats_test
Reviewers: igor, noetzli, anthony, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: MarkCallaghan, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42699
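From the user's side, subcompactions ended up being controlled by a DB option. A minimal sketch under the assumption that the option is DBOptions::max_subcompactions, as in later RocksDB releases; this diff itself may have used a different internal flag.

```cpp
#include "rocksdb/db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Allow a single compaction job (including L0->L1) to be split into up to
  // 4 subcompactions that run in parallel. Assumed option name.
  options.max_subcompactions = 4;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/subcompaction_demo", &db);
  if (s.ok()) {
    delete db;
  }
  return s.ok() ? 0 : 1;
}
```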
-
Committed by sdong
Summary: Right now, ldb dump_manifest refuses to work if there are 20 levels. Extend the limit to 64.
Test Plan: Run the tool with 20 levels.
Reviewers: kradhakrishnan, anthony, IslamAbdelRahman, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D42879
-
- 01 Aug, 2015 (1 commit)
-
-
Committed by Andres Noetzli
Summary: Fixed typos.
Test Plan: None
Reviewers: igor, yhchiang, sdong, anthony, rven
Reviewed By: rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43365
-