- 14 Jul, 2015 (1 commit)
-
Committed by sdong
Summary: This helps the Windows port format their changes, as discussed. It might have formatted some other code too, because the last 10 commits include more.
Test Plan: Build it.
Reviewers: anthony, IslamAbdelRahman, kradhakrishnan, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D41961
-
- 12 Jul, 2015 (1 commit)
-
Committed by sdong
Summary: The public API depends on port/port.h, which is wrong. Fix it. The build was also broken with gcc 4.8.1 because MAX_INT32 was not recognized; fix it by using ::max on Linux.
Test Plan: Build it and try to build an external project on top of it.
Reviewers: anthony, yhchiang, kradhakrishnan, igor
Reviewed By: igor
Subscribers: yoshinorim, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D41745
-
- 11 Jul, 2015 (6 commits)
-
Committed by sdong
Summary: Print whether fast CRC32 is supported in the DB info LOG.
Test Plan: Run db_bench and see that it prints correctly.
Reviewers: yhchiang, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: MarkCallaghan, yoshinorim, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D41733
-
Committed by sdong
Summary: new_table_iterator_nanos is not cleared in PerfContext::Reset(), while new_table_block_iter_nanos is cleared twice. Fix it. Also fix a comment.
Test Plan: Build and run db_bench with --perf_context to see the value shown.
Reviewers: kradhakrishnan, anthony, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D41721
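The counter-reset bug class is easy to reproduce in miniature. A minimal sketch (hypothetical two-counter struct, not the real rocksdb::PerfContext) showing why every counter must be cleared exactly once in Reset():

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical context illustrating the bug fixed above: if Reset() clears
// one member twice and forgets another, stale timings leak into the next
// measurement.
struct PerfContextSketch {
  uint64_t new_table_block_iter_nanos = 0;
  uint64_t new_table_iterator_nanos = 0;

  void Reset() {
    // The fix: each counter is zeroed exactly once.
    new_table_block_iter_nanos = 0;
    new_table_iterator_nanos = 0;
  }
};
```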
-
Committed by Siying Dong
Windows Port from Microsoft
-
-
Committed by Dmitri Smirnov
Rule of five: add destructor. Add a note to COMMIT.md for 3rd-party json.
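For context, the "rule of five" means a type that manages a resource should define all five special member functions. A minimal sketch with a hypothetical Buffer class (not the actual 3rd-party json code this commit touches):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>

// Hypothetical resource-owning type declaring all five special members.
class Buffer {
 public:
  explicit Buffer(std::size_t n) : size_(n), data_(new char[n]) {}
  ~Buffer() { delete[] data_; }                 // 1. destructor
  Buffer(const Buffer& o)                       // 2. copy constructor
      : size_(o.size_), data_(new char[o.size_]) {
    std::copy(o.data_, o.data_ + size_, data_);
  }
  Buffer(Buffer&& o) noexcept                   // 3. move constructor
      : size_(o.size_), data_(o.data_) {
    o.size_ = 0;
    o.data_ = nullptr;
  }
  Buffer& operator=(const Buffer& o) {          // 4. copy assignment
    Buffer tmp(o);                              // copy-and-swap
    std::swap(size_, tmp.size_);
    std::swap(data_, tmp.data_);
    return *this;
  }
  Buffer& operator=(Buffer&& o) noexcept {      // 5. move assignment
    std::swap(size_, o.size_);
    std::swap(data_, o.data_);
    return *this;
  }
  std::size_t size() const { return size_; }

 private:
  std::size_t size_;
  char* data_;
};
```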
-
Committed by sdong
Summary: Add a perf context counter to help users figure out time spent reading index and bloom filter blocks.
Test Plan: Will write a unit test.
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D41433
-
- 10 Jul, 2015 (8 commits)
-
Committed by Dmitri Smirnov
Committed by Alexander Zinoviev <alexander.zinoviev@me.com> 7/9/2015 2:42:41 PM
-
Committed by Dmitri Smirnov
-
Committed by Dmitri Smirnov
-
Committed by Alexander Zinoviev
-
Committed by krad
Summary: The t/DBTest.DropWrites test still fails under certain gcc versions in the release unit test. I unfortunately cannot repro the failure (since the compilers have a mapped library which I am not able to map to correctly). I suspect clock skew.
Test Plan: Run make check
CC: sdong, igor
Task ID: #7312624
Blame Rev:
-
Committed by Aaron Feldman
Summary: The new flag --cache_index_and_filter_blocks sets BlockBasedTableOptions.cache_index_and_filter_blocks.
Test Plan: make db_bench. Working on benchmarks with the new flag.
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D41481
-
Committed by unknown
a different patch.
-
- 09 Jul, 2015 (3 commits)
-
Committed by Dmitri Smirnov
1) Fix a crash in env_win.cc that prevented db_test from running to completion, and add some new tests.
2) Fix new corruption tests in DBTest by allowing shared truncation of files. Note that this is generally needed ONLY for tests.
3) Close the database so the WAL is closed prior to inducing corruption, similar to what we did within the Corruption tests.
-
Summary: Change the naming style of getters and setters in compaction.h according to the Google C++ style.
Test Plan: Compilation success
Reviewers: sdong
Reviewed By: sdong
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D41265
-
Committed by krad
Summary: Currently there is no test in the suite for the case where there are multiple WAL files and there is corruption in one of them; we only have tests for single-WAL-file corruption scenarios. Added tests to mock the scenarios for all combinations of recovery modes and corruption in specified file locations.
Test Plan: Run make check
Reviewers: sdong, igor
CC: leveldb@
Task ID: #7501229
Blame Rev:
-
- 08 Jul, 2015 (13 commits)
-
Committed by Dmitri Smirnov
-
Committed by Alexander Zinoviev
-
Committed by agiardullo
Summary: The coverage test has been occasionally failing due to this timing check.
Test Plan: run test
Reviewers: yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41367
-
Summary: Build failure fix for type cast issues.
Test Plan: compiled
Reviewers: sdong, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D41349
-
Committed by Yueh-Hsuan Chiang
Summary: This patch reverts "Replace std::priority_queue in MergingIterator with custom heap" (commit b6655a67) as it causes a db_stress failure.
Test Plan: ./db_stress --test_batches_snapshots=1 --threads=32 --write_buffer_size=4194304 --destroy_db_initially=0 --reopen=20 --readpercent=45 --prefixpercent=5 --writepercent=35 --delpercent=5 --iterpercent=10 --db=/tmp/rocksdb_crashtest_KdCI5F --max_key=100000000 --mmap_read=0 --block_size=16384 --cache_size=1048576 --open_files=500000 --verify_checksum=1 --sync=0 --progress_reports=0 --disable_wal=0 --disable_data_sync=1 --target_file_size_base=2097152 --target_file_size_multiplier=2 --max_write_buffer_number=3 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --filter_deletes=0 --memtablerep=prefix_hash --prefix_size=7 --ops_per_thread=200 --kill_random_test=97
Reviewers: igor, anthony, lovro, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41343
-
Summary: This change enables trivial move if all the input files are non-overlapping while doing Universal Compaction.
Test Plan: ./compaction_picker_test and db_test ran successfully with the new test cases.
Reviewers: sdong
Reviewed By: sdong
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D40875
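A trivial move is only safe when the input files cover disjoint key ranges. A rough sketch of such a non-overlap check (hypothetical types, with plain std::string ordering standing in for the user comparator; not the actual compaction-picker code):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical per-file key range; real code would use the column family's
// comparator rather than std::string ordering.
struct FileRange {
  std::string smallest;
  std::string largest;
};

// With files sorted by smallest key, the inputs are non-overlapping iff
// each file ends strictly before the next one begins.
bool AllNonOverlapping(const std::vector<FileRange>& sorted_files) {
  for (std::size_t i = 1; i < sorted_files.size(); ++i) {
    if (sorted_files[i - 1].largest >= sorted_files[i].smallest) {
      return false;  // ranges touch or overlap: no trivial move
    }
  }
  return true;
}
```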
-
Committed by Yueh-Hsuan Chiang
Summary: Fixed a bug in the test ThreadStatusSingleCompaction where SyncPoint traces are not cleared before the test begins its second iteration.
Test Plan: db_test
Reviewers: sdong, anthony, IslamAbdelRahman, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41337
-
Committed by Yueh-Hsuan Chiang
Summary: Remove assert(current_ == CurrentReverse()) in MergingIterator::Prev() because it is possible to have some keys larger than the seek key inserted between Seek() and SeekToLast(), which makes current_ not equal to CurrentReverse().
Test Plan: db_stress
Reviewers: igor, sdong, IslamAbdelRahman, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41331
-
Committed by Yueh-Hsuan Chiang
Summary: Update HISTORY.md for Listener.
Test Plan: no code change
Reviewers: igor, sdong, IslamAbdelRahman, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41325
-
Committed by Yueh-Hsuan Chiang
Summary: Fixes valgrind errors in column_family_test.
Test Plan: `make check`, `make valgrind_check`
Reviewers: igor, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D41181
-
Committed by Yueh-Hsuan Chiang
Summary: This diff reverts the following two previous diffs related to DBIter::FindPrevUserKey(), which make db_stress unstable. We should bake a better fix for this.
* "Fix a comparison in DBIter::FindPrevUserKey()" ec70fea4
* "Fixed endless loop in DBIter::FindPrevUserKey()" acee2b08
Test Plan: db_stress
Reviewers: anthony, igor, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41301
-
Committed by Aaron Feldman
Summary: This addresses a test failure where an exception occurred in the constructor's call to CreateDirIfMissing(). The existence of unjoined threads prevented this exception from propagating properly. See http://stackoverflow.com/questions/7381757/c-terminate-called-without-an-active-exception
Test Plan: Re-run tests from task #7626266
Reviewers: sdong, anthony, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D41313
-
- 07 Jul, 2015 (2 commits)
-
Committed by Andres Notzli
Summary: This patch adds three test cases for ExpandWhileOverlapping() to the compaction_picker_test test suite. ExpandWhileOverlapping() only has an effect if the comparison function for the internal keys allows for overlapping user keys in different SST files on the same level. Thus, this patch adds a comparator based on sequence numbers to compaction_picker_test for the new test cases.
Test Plan:
- make compaction_picker_test && ./compaction_picker_test -> All tests pass
- Replace the body of ExpandWhileOverlapping() with `return true`, then compile and run ./compaction_picker_test as before -> New tests fail
Reviewers: sdong, yhchiang, rven, anthony, IslamAbdelRahman, kradhakrishnan, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41277
-
Committed by Igor Canadi
Summary: Two issues:
* The input keys to the compaction don't include the sequence number.
* The sequence number is set to max(seq_num), but it should be set to max(seq_num)+1, because the condition here is strictly-larger (i.e. we will only zero out a sequence number if the DB's sequence number is strictly greater than the key's sequence number): https://github.com/facebook/rocksdb/blob/master/db/compaction_job.cc#L830
Test Plan: make compaction_job_test && ./compaction_job_test
Reviewers: sdong, lovro
Reviewed By: lovro
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41247
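The second issue is an off-by-one in the visibility condition. A tiny hypothetical predicate (not the real compaction_job code) mirroring the strictly-greater check linked in the summary, which is why the test must use max(seq_num)+1 rather than max(seq_num):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical mirror of the linked condition: a key's sequence number may
// be zeroed out only when the DB's sequence number is strictly greater than
// the key's, not merely equal to it.
bool CanZeroOutSequence(uint64_t key_seq, uint64_t db_seq) {
  return db_seq > key_seq;  // strictly greater: equality is not enough
}
```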
-
- 06 Jul, 2015 (1 commit)
-
Committed by lovro
Summary: While profiling compaction in our service I noticed a lot of CPU (~15% of compaction) being spent in MergingIterator and key comparison. Looking at the code, I found MergingIterator was (understandably) using std::priority_queue for the multiway merge. Keys in our dataset include sequence numbers that increase with time. Adjacent keys in an L0 file are very likely to be adjacent in the full database. Consequently, compaction will often pick a chunk of rows from the same L0 file before switching to another one. It would be great to avoid the O(log K) operation per row while compacting.
This diff replaces std::priority_queue with a custom binary heap implementation. It has a "replace top" operation that is cheap when the new top is the same as the old one (i.e. the priority of the top entry is decreased but it still stays on top).
Test Plan: make check. To test the effect on performance, I generated databases with data patterns that mimic what I describe in the summary (rows have a mostly increasing sequence number). I see a 10-15% CPU decrease for compaction (and a matching throughput improvement on tmpfs). The exact improvement depends on the number of L0 files and the amount of locality. Performance on randomly distributed keys seems on par with the old code.
Reviewers: kailiu, sdong, igor
Reviewed By: igor
Subscribers: yoshinorim, dhruba, tnovak
Differential Revision: https://reviews.facebook.net/D29133
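The "replace top" idea can be illustrated with a toy min-heap (a hypothetical minimal sketch, not the actual heap added in this diff): when the replacement element still belongs on top, the sift-down loop exits on its first iteration and the operation is O(1), versus the O(log K) pop+push that std::priority_queue forces.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Toy min-heap with a replace_top() operation: cheap when the new value
// still compares smallest (the common case during a run of adjacent keys
// drawn from the same input file).
template <typename T>
class MinHeap {
 public:
  void push(T v) {
    data_.push_back(v);
    std::push_heap(data_.begin(), data_.end(), std::greater<T>());
  }
  const T& top() const { return data_.front(); }
  // Replace the root in place and restore the heap property; if v is still
  // the minimum, SiftDown breaks immediately.
  void replace_top(T v) {
    data_.front() = v;
    SiftDown(0);
  }

 private:
  void SiftDown(std::size_t i) {
    const std::size_t n = data_.size();
    while (true) {
      std::size_t l = 2 * i + 1, r = l + 1, m = i;
      if (l < n && data_[l] < data_[m]) m = l;
      if (r < n && data_[r] < data_[m]) m = r;
      if (m == i) break;  // heap property already holds
      std::swap(data_[i], data_[m]);
      i = m;
    }
  }
  std::vector<T> data_;
};
```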
-
- 03 Jul, 2015 (5 commits)
-
Committed by Dmitri Smirnov
-
Committed by Dmitri Smirnov
-
Committed by Ari Ekmekji
Summary: Introduced a new category, HEADER_LEVEL, in the InfoLogLevel enum in env.h. Modified Log() in env.cc to use Header() when InfoLogLevel == HEADER_LEVEL. Updated tests in auto_roll_logger_test to ensure the header is handled properly in these cases.
Test Plan: Augment existing tests in auto_roll_logger_test
Reviewers: igor, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41067
-
Committed by Yueh-Hsuan Chiang
Summary: Fixed an endless loop in DBIter::FindPrevUserKey().
Test Plan: ./db_stress --test_batches_snapshots=1 --threads=32 --write_buffer_size=4194304 --destroy_db_initially=0 --reopen=20 --readpercent=45 --prefixpercent=5 --writepercent=35 --delpercent=5 --iterpercent=10 --db=/tmp/rocksdb_crashtest_KdCI5F --max_key=100000000 --mmap_read=0 --block_size=16384 --cache_size=1048576 --open_files=500000 --verify_checksum=1 --sync=0 --progress_reports=0 --disable_wal=0 --disable_data_sync=1 --target_file_size_base=2097152 --target_file_size_multiplier=2 --max_write_buffer_number=3 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --filter_deletes=0 --memtablerep=prefix_hash --prefix_size=7 --ops_per_thread=200 --kill_random_test=97
Reviewers: tnovak, igor, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41085
-
Committed by Mike Kolupaev
Summary: This fixes the following scenario we've hit:
- We reached max_total_wal_size, created a new WAL, and scheduled flushing all memtables corresponding to the old one.
- Before the last of these flushes started, its column family was dropped; the last background flush call was a no-op; no one removed the old WAL from alive_logs_.
- Hours passed and no flushes happened even though lots of data was written: data goes to different column families, compactions are disabled, and old column families are dropped before the memtable grows big enough to trigger a flush; the old WAL still sits in alive_logs_, preventing the max_total_wal_size limit from kicking in.
- A few more hours pass and we run out of disk space because of one huge .log file.
Test Plan: `make check`; backported the new test and checked that it fails without this diff.
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D40893
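The invariant the fix restores can be sketched as follows (hypothetical bookkeeping, not the actual alive_logs_ code): a WAL is deletable once no live column family still needs it, so dropping a column family must trigger the same re-evaluation that a completed flush does.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <limits>
#include <map>
#include <string>

// Hypothetical bookkeeping: each live column family records the oldest WAL
// number it still needs; WALs numbered below OldestNeededLog() may be deleted.
struct WalTracker {
  std::map<std::string, uint64_t> min_log_needed;

  uint64_t OldestNeededLog() const {
    uint64_t oldest = std::numeric_limits<uint64_t>::max();
    for (const auto& kv : min_log_needed) {
      oldest = std::min(oldest, kv.second);
    }
    return oldest;
  }

  // The missing step in the scenario above: dropping a CF must remove its
  // reference, or its old WAL pins disk space forever.
  void DropColumnFamily(const std::string& cf) { min_log_needed.erase(cf); }
};
```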
-