- 22 May 2015 (6 commits)
Committed by Yueh-Hsuan Chiang
Summary: Allow EventLogger to directly log from a JSONWriter. This allows the JSONWriter to be shared by EventLogger and potentially EventListener, which is an important step toward integrating EventLogger and EventListener. This patch also rewrites EventLoggerHelpers::LogTableFileCreation() to use the new API while generating an identical log.
Test Plan: Run db_bench in debug mode and make sure the log is correct and no assertions fail.
Reviewers: sdong, anthony, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38709
-
Committed by Igor Canadi
Summary: This turns out to be pretty bad, because if we prioritize L0->L1 then L1 can grow artificially large, which makes L0->L1 more and more expensive. For example:
256MB @ L0 + 256MB @ L1 --> 512MB @ L1
256MB @ L0 + 512MB @ L1 --> 768MB @ L1
256MB @ L0 + 768MB @ L1 --> 1GB @ L1
...
256MB @ L0 + 10GB @ L1 --> 10.2GB @ L1
At some point we need to start compacting L1->L2 to speed up L0->L1.
Test Plan: The performance improvement is massive for a heavy write workload. This is the benchmark I ran: https://phabricator.fb.com/P19842671. Before this change, the benchmark took 47 minutes to complete. After, the benchmark finished in 2 minutes. You can see full results here: https://phabricator.fb.com/P19842674
Also, we ran this diff on MongoDB on RocksDB on one replica set. Before the change, our initial sync was so slow that it couldn't keep up with primary writes. After the change, the import finished without any issues.
Reviewers: dynamike, MarkCallaghan, rven, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38637
-
Committed by Igor Canadi
Summary: Having stats in our LOG more often will help a lot with perf debugging.
Test Plan: none
Reviewers: sdong, MarkCallaghan
Reviewed By: MarkCallaghan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38781
-
Committed by Yueh-Hsuan Chiang
Summary: Make DB::GetDbIdentity() a const function.
Test Plan: make db_test
Reviewers: igor, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38745
-
The New Features title was repeated twice. Fixed it.
-
Committed by Yueh-Hsuan Chiang
Add LDFLAGS to Java static library
-
- 21 May 2015 (1 commit)
Committed by Yueh-Hsuan Chiang
Summary: Rename JSONWritter to JSONWriter.
Test Plan: make
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38733
-
- 20 May 2015 (7 commits)
Committed by DerekSchenk
Includes the LDFLAGS so that the correct libraries will be linked. This links rt to resolve https://github.com/facebook/rocksdb/issues/606.
-
Committed by Yueh-Hsuan Chiang
Summary: Dump db stats at WARN level.
Test Plan: run db_bench and verify the LOG
Reviewers: igor, MarkCallaghan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38691
-
Committed by Yueh-Hsuan Chiang
Summary: Update HISTORY.md for the GetThreadList() update.
Test Plan: no code change
Reviewers: sdong, rven, anthony, krishnanm86, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38685
-
Committed by Mark Callaghan
Summary: See https://gist.github.com/mdcallag/89ebb2b8cbd331854865 for the IO stats. I added "Cumulative compaction:" and "Interval compaction:" lines. The IO rates can be confusing. Rates for the per-level stats lines, Wr(MB/s) & Rd(MB/s), are computed using the duration of the compaction job. If the job reads 10MB, writes 9MB, and the job (IO & merging) takes 1 second, then the rates are 10MB/s for reads and 9MB/s for writes. The IO rates in the "Cumulative compaction" line use the total uptime. The IO rates in the "Interval compaction" line use the interval uptime. So the Cumulative & Interval compaction IO rates cannot be compared to the per-level IO rates. But both forms of the rates are useful for debugging perf.
Test Plan: run db_bench
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D38667
-
Flipped the unreleased section to 3.11
-
Committed by Igor Canadi
Summary: In the third-party2 build we need to force the git sha, because we're compiling from a different git repository.
Test Plan: `FORCE_GIT_SHA=igor make`
Reviewers: kradhakrishnan, sdong
Reviewed By: kradhakrishnan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38679
-
Committed by Igor Canadi
Summary: Not sure why this fails on some compilers and doesn't on others.
Test Plan: none
Reviewers: meyering, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38673
-
- 19 May 2015 (6 commits)
Committed by Igor Canadi
Summary: sync_file_range is not always asynchronous and can thus block writes if we do this for the WAL in the foreground thread. See more here: http://yoshinorimatsunobu.blogspot.com/2014/03/how-syncfilerange-really-works.html
Some users don't want us to call sync_file_range on WALs; others do. Thus, I'm adding a separate option, wal_bytes_per_sync, to control calling sync_file_range on WAL files. bytes_per_sync will now apply only to table files.
Test Plan: no more sync_file_range for the WAL, as evidenced by strace
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38253
-
Committed by Igor Canadi
Summary: As title. I spent some time thinking about it, and I don't think there should be any issue with running manual compaction and flushes in parallel.
Test Plan: make check works
Reviewers: rven, yhchiang, sdong
Reviewed By: yhchiang, sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38355
-
Committed by Yueh-Hsuan Chiang
Summary: Fixed the following compile errors, caused by gcc versions whose std::map has no emplace:
util/thread_status_impl.cc: In static member function 'static std::map<std::basic_string<char>, long unsigned int> rocksdb::ThreadStatus::InterpretOperationProperties(rocksdb::ThreadStatus::OperationType, const uint64_t*)':
util/thread_status_impl.cc:88:20: error: 'class std::map<std::basic_string<char>, long unsigned int>' has no member named 'emplace'
(the same error repeats at lines 90, 94, 96, 98, and 101)
make: *** [util/thread_status_impl.o] Error 1
Test Plan: make db_bench
Reviewers: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38643
-
Committed by stash93
Summary: Call the Flush() function instead.
Test Plan: make all check
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D38583
-
Committed by sdong
Summary: DBTest.DynamicLevelMaxBytesCompactRange needs to make sure L0 is not empty in order to cover the code paths we want to cover. However, the current code has a bug that might leave that condition unsatisfied. Improve the test to ensure it.
Test Plan: Run the test in an environment where it used to fail. Also run it many times.
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D38631
-
Committed by sdong
Summary: CompactRange() is now much more expensive with a dynamic level base size, as it goes through all the levels. Skip the unused levels between level 0 and the base level.
Test Plan: Run all unit tests
Reviewers: yhchiang, rven, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D37125
-
- 16 May 2015 (4 commits)
Committed by Yueh-Hsuan Chiang
Summary: Allow GetThreadList() to report flush properties, which include:
* job id
* number of bytes written since the flush started
* total size of input mem-tables
Test Plan: ./db_bench --threads=30 --num=1000000 --benchmarks=fillrandom --thread_status_per_interval=100 --value_size=1000
Sample output from db_bench, tracking the same flush job:
ThreadID ThreadType cfName Operation ElapsedTime Stage State OperationProperties
140213879898240 High Pri default Flush 5789 us FlushJob::WriteLevel0Table BytesMemtables 4112835 | BytesWritten 577104 | JobID 8 |
140213879898240 High Pri default Flush 30.634 ms FlushJob::WriteLevel0Table BytesMemtables 4112835 | BytesWritten 1734865 | JobID 8 |
Reviewers: rven, igor, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38505
-
Committed by Yueh-Hsuan Chiang
Summary: Use a better way to initialize ThreadStatus::kNumOperationProperties.
Test Plan: make
Reviewers: sdong, rven, anthony, krishnanm86, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38547
-
Committed by Igor Canadi
Summary: When trying to compact the entire database with SuggestCompactRange(), we first try the left-most files. This is pretty bad, because:
1) the left part of the LSM tree will be overly compacted, while the right part will not be touched
2) the first compaction will pick up the left-most file. The second compaction will try to pick up the next left-most, but this will often be impossible, because there's a big chance that the second file's range on level N+1 is already being compacted.
I observed both of these problems when running Mongo+RocksDB and trying to compact the DB to clean up tombstones. I was unable to clean them up :( This diff adds a bit of randomness to choosing a file: it chooses a file at random and tries to compact that one. This should solve both problems specified here.
Test Plan: make check
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38379
-
Committed by sdong
Summary: Currently rocksdb_build_git_sha is determined from the git sha. That is hard if the release is built not from the repository directly but from a copy of the source code. Change to use the versions given in the Makefile.
Test Plan: Run "make util/build_version.cc"
Reviewers: kradhakrishnan, rven, meyering, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D38451
-
- 15 May 2015 (1 commit)
Committed by sdong
Summary: DBTest.DynamicLevelMaxBytesBase2 has a check that is not necessary and may fail. Remove it, and add two unrelated checks.
Test Plan: Run the test
Reviewers: yhchiang, rven, kradhakrishnan, anthony, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D38457
-
- 14 May 2015 (3 commits)
Committed by sdong
Summary: Universal compactions with multiple levels should use a file preallocation size based on the file size when the output level is not level 0.
Test Plan: Run all tests.
Reviewers: igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D38439
-
Committed by Yueh-Hsuan Chiang
Summary: Make ThreadStatus::InterpretOperationProperties take const uint64_t*.
Test Plan:
make
make OPT=-DROCKSDB_LITE shared_lib
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38445
-
Committed by sdong
Summary: Add --rate_limiter_bytes_per_sec to db_bench to allow rate limiting of writes to disk.
Test Plan: Run ./db_bench --benchmarks=fillseq --num=30000000 --rate_limiter_bytes_per_sec=3000000 --num_multi_db=8 -disable_wal and check io_stats to confirm the rate is limited.
Reviewers: yhchiang, rven, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D38385
-
- 13 May 2015 (4 commits)
Committed by Yueh-Hsuan Chiang
Summary: Fixed the following compile error in db/column_family.cc:
db/column_family.cc:633:33: error: 'ASSERT_GT' was not declared in this scope
ASSERT_GT(listeners.size(), 0U);
Test Plan: make db_test
Reviewers: igor, sdong, rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38367
-
Committed by Yueh-Hsuan Chiang
Summary: Fixed a bug in EventListener::OnCompactionCompleted() that returned an incorrect list of input / output file names.
Test Plan: Extend the existing test in listener_test.cc
Reviewers: sdong, rven, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38349
-
Committed by Igor Canadi
Summary: Example output:
{"time_micros": 1431463794310521, "job": 353, "event": "table_file_creation", "file_number": 387, "file_size": 86937, "table_info": {"data_size": "81801", "index_size": "9751", "filter_size": "0", "raw_key_size": "23448", "raw_average_key_size": "24.000000", "raw_value_size": "990571", "raw_average_value_size": "1013.890481", "num_data_blocks": "245", "num_entries": "977", "filter_policy_name": "", "kDeletedKeys": "0"}}
Also fixed a bug where BuildTable() in recovery was passing an Env::IOHigh argument into the paranoid_checks_file parameter.
Test Plan: make check + check out the output in the log
Reviewers: sdong, rven, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38343
-
Committed by Igor Canadi
Summary: This caused a crash of our MongoDB + RocksDB instance. PickCompactionBySize() sets its own parent_index. We never reset this parent_index when picking with PickFilesMarkedForCompactionExperimental(), so we might end up doing SetupOtherInputs() with a parent_index that was set by PickCompactionBySize(), even though we're using a compaction calculated by PickFilesMarkedForCompactionExperimental().
Test Plan: Added a unit test that fails with an assertion on master.
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38337
-
- 12 May 2015 (1 commit)
Committed by agiardullo
Summary: Added a couple of functions to WriteBatchWithIndex to make it easier to query the value of a key, including reading pending writes from a batch. (This is needed for transactions.)
I created write_batch_with_index_internal.h to store an internal-only helper function, since there wasn't a good place in the existing class hierarchy to put it (and it didn't seem right to stick it inside WriteBatchInternal::Rep). Since I needed access to the WriteBatchEntryComparator, I moved some helper classes from write_batch_with_index.cc into write_batch_with_index_internal.h/.cc. WriteBatchIndexEntry, ReadableWriteBatch, and WriteBatchEntryComparator are all unchanged (just moved to different files).
Test Plan: Added new unit tests.
Reviewers: rven, yhchiang, sdong, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38037
-
- 10 May 2015 (1 commit)
Committed by Igor Canadi
Summary: In new clang we need to add override to every overridden function.
Test Plan: none
Reviewers: rven
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D38259
-
- 09 May 2015 (2 commits)
Committed by Igor Canadi
Summary: When reporting a compaction that was started because of SuggestCompactRange(), we should treat it as a manual compaction.
Test Plan: none
Reviewers: yhchiang, rven
Reviewed By: rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38139
-
Committed by Yueh-Hsuan Chiang
Summary: Don't treat warnings as errors when building rocksdbjavastatic.
Test Plan: make rocksdbjavastatic -j32
Reviewers: rven, fyrz, adamretter, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38187
-
- 08 May 2015 (3 commits)
Committed by Igor Canadi
Summary: Without this I get a bunch of questions when I run `make clean`.
Test Plan: no more questions!
Reviewers: rven, yhchiang, meyering, anthony
Reviewed By: meyering, anthony
Subscribers: meyering, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38145
-
Committed by Igor Canadi
Summary: In D28521 we removed GarbageCollect() from BackupEngine's constructor. The reason was that opening BackupEngine on HDFS was very slow, and in most cases we didn't have any garbage. We allowed the user to call GarbageCollect() upon detecting garbage files in the backup directory.
Unfortunately, this left us vulnerable to an interesting issue. Let's say we started a backup and copied files {1, 3}, but the backup failed. On another host, we restore the DB from backup and generate {1, 3, 5}. Since {1, 3} are already there, we will not overwrite them. However, these files might be from a different database, so their contents might differ. See internal task t6781803 for more info.
Now, when we're copying files and discover a file already there, we check:
1. If the file is not referenced from any backups, we overwrite the file.
2. If the file is referenced from other backups AND the checksums don't match, we fail the backup. This will only happen if the user is using a single backup directory to back up two different databases.
3. If the file is referenced from other backups AND the checksums match, it's all good. We skip the copy and go copy the next file.
Test Plan: Added a new test to backupable_db_test. The test fails before this patch.
Reviewers: sdong, rven, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D37599
-
Committed by Igor Canadi
Summary: as title
Test Plan: none
Reviewers: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38175
-
- 07 May 2015 (1 commit)
Committed by Jörg Maier
Small fixes to Java benchmark
-