- 28 January 2020 (4 commits)
-
Committed by sdong
Summary: Commits related to hash index fix have been reverted in 6.7.fb branch. Update HISTORY.md to keep it in sync. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6337 Differential Revision: D19593717 fbshipit-source-id: 466178dc6205c9e41ccced41bf281a0952bdc2ca
-
Committed by Andrew Kryczka
Summary: It chooses the oldest memtable, not the largest one. This is an important difference for users whose CFs receive non-uniform write rates. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6335 Differential Revision: D19588865 Pulled By: maysamyabandeh fbshipit-source-id: 62ad4325b0182f5f27858584cd73fd5978fb2cec
-
Committed by sdong
Summary: https://github.com/facebook/rocksdb/pull/6028 introduced a bug for the hash index in SST files. If a table reader is created while total order seek is used, prefix_extractor might be passed into the table reader as null. When prefix seek is later used with the same table reader, the hash index is consulted but the prefix extractor is null, and the program would crash. Fix the issue by changing http://github.com/facebook/rocksdb/pull/6028 so that prefix_extractor is preserved and ReadOptions.total_order_seek is checked instead. Also, a null pointer check is added so that a bug like this won't cause a segfault in the future. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6328 Test Plan: Add a unit test that would fail without the fix. The stress test that reproduces the crash would pass. Differential Revision: D19586751 fbshipit-source-id: 8de77690167ddf5a77a01e167cf89430b1bfba42
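A minimal sketch of the kind of defensive check described above (illustrative only, not the actual BlockBasedTable code; the function name and parameters are assumptions):

```cpp
#include "rocksdb/slice.h"
#include "rocksdb/slice_transform.h"

// Illustrative only: before consulting the hash (prefix) index, verify that a
// prefix extractor is actually available and applicable; otherwise fall back
// to the regular binary-search index path instead of dereferencing null.
bool CanUseHashIndex(const rocksdb::SliceTransform* prefix_extractor,
                     bool total_order_seek, const rocksdb::Slice& user_key) {
  if (total_order_seek || prefix_extractor == nullptr) {
    return false;  // no prefix semantics requested or possible
  }
  return prefix_extractor->InDomain(user_key);
}
```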
-
Committed by Peter Dillinger
Summary: Remove the redundant PartitionedFilterBlockBuilder::num_added_ and ::NumAdded since the parent class, FullFilterBlockBuilder, already provides them. Also rename filters_in_partition_ and filters_per_partition_ to keys_added_to_partition_ and keys_per_partition_ to improve readability. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6299 Test Plan: make check Differential Revision: D19413278 Pulled By: pdillinger fbshipit-source-id: 04926ee7874477d659cb2b6ae03f2d995fb747e5
-
- 25 January 2020 (2 commits)
-
Committed by Fosco Marotto
Summary: Adjusted history for 6.6.1 and 6.6.2, switched master version to 6.7.0. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6320 Differential Revision: D19499272 Pulled By: gfosco fbshipit-source-id: 2bafb2456951f231e411e9c03aaa4c044f497684
-
Committed by Maysam Yabandeh
Summary: The function was left unimplemented. Although we currently don't have a use for it, it was declared with an assert(0) to prevent mistakenly using the remove_prefix of the parent class. The function body containing only assert(0), however, causes issues with some compilers' warning levels. The patch implements the function to avoid the warning. It also piggybacks fixes for some minor code warnings about unnecessary semicolons after function definitions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6330 Differential Revision: D19559062 Pulled By: maysamyabandeh fbshipit-source-id: 3a022484f688c9abd4556e5412bcc2628ab96a00
-
- 24 January 2020 (3 commits)
-
Committed by Levi Tamasi
Summary: The earlier code used two conflicting definitions for the number of input records going into a compaction, one based on the `rocksdb.num.entries` table property and one based on `CompactionIterationStats`. The first one is correct and in line with how output records are counted, while the second one incorrectly ignores input records in various cases when the `CompactionIterator` advances or reseeks the input iterator (this can happen, amongst other cases, when dealing with `SingleDelete`s, regular `Delete`s, `Merge`s, and compaction filters). This can result in the code undercounting the input records and computing an incorrect value for "records dropped" during the compaction. The patch fixes this by switching over to the correct (table property based) input record count for "records dropped". Pull Request resolved: https://github.com/facebook/rocksdb/pull/6325 Test Plan: Tested using `make check` and `db_bench`. Differential Revision: D19525491 Pulled By: ltamasi fbshipit-source-id: 4340b0b2f41546db8e356db70ca02199e48fa636
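Put differently, with the table-property-based count the dropped-record figure becomes a plain difference. A hedged sketch with hypothetical types (not the actual CompactionJob code):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-input-file view of the rocksdb.num.entries table property.
struct InputFileProps {
  uint64_t num_entries;
};

// records dropped = (entries across all compaction inputs) - (entries written
// to the compaction outputs). Deriving the first term from
// CompactionIterationStats undercounted whenever the CompactionIterator
// advanced or reseeked the input (SingleDelete, Delete, Merge, compaction
// filters, ...), which is the bug described above.
uint64_t RecordsDropped(const std::vector<InputFileProps>& inputs,
                        uint64_t output_records) {
  uint64_t input_records = 0;
  for (const InputFileProps& f : inputs) {
    input_records += f.num_entries;
  }
  return input_records >= output_records ? input_records - output_records : 0;
}
```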
-
Committed by anand76
Summary: When there is a write stall, the active write group leader calls ```BeginWriteStall()``` to walk the queue of writers and remove any with the ```no_slowdown``` option set. There was a bug in the code which updated the back pointer but not the forward pointer (```link_newer```), corrupting the list and causing some threads to wait forever. This PR fixes it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6322 Test Plan: Add a unit test in db_write_test Differential Revision: D19538313 Pulled By: anand1976 fbshipit-source-id: 6fbed819e594913f435886606f5d36f74f235c3a
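For readers unfamiliar with the writer queue, the bug amounts to removing a node from a doubly linked list while fixing only one of the two links. A self-contained sketch of the correct removal, using hypothetical stand-in types rather than the real WriteThread code:

```cpp
#include <cstddef>

// Hypothetical stand-in for a queued writer; the real WriteThread::Writer has
// link_older / link_newer pointers with the same roles.
struct Writer {
  Writer* link_older = nullptr;  // towards older writers (head)
  Writer* link_newer = nullptr;  // towards newer writers (tail)
  bool no_slowdown = false;
};

// Remove w from the list. Both neighbors must be re-linked; updating only
// link_older (as in the bug) leaves the newer neighbor pointing at a removed
// node, corrupting the list and leaving later walkers waiting forever.
void Unlink(Writer* w, Writer** newest) {
  if (w->link_newer != nullptr) {
    w->link_newer->link_older = w->link_older;
  } else if (newest != nullptr && *newest == w) {
    *newest = w->link_older;  // w was the newest writer
  }
  if (w->link_older != nullptr) {
    w->link_older->link_newer = w->link_newer;
  }
  w->link_older = nullptr;
  w->link_newer = nullptr;
}
```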
-
Committed by Maysam Yabandeh
Summary: This reverts commit 8e309b35. The stress tests are failing. Revert it until we figure out the root cause. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6327 Differential Revision: D19537657 Pulled By: maysamyabandeh fbshipit-source-id: bf34a5dd720825957729e136e9a5a729a240e61a
-
- 23 January 2020 (1 commit)
-
Committed by Maysam Yabandeh
Summary: kHashSearch is incompatible with values of index_block_restart_interval larger than 1. Setting it to 1 in stress tests would avoid confusion about the test parameters. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6324 Differential Revision: D19525669 Pulled By: maysamyabandeh fbshipit-source-id: fbf3a797e0ebcebb4d32eba3728cf3583906fc8a
-
- 22 January 2020 (3 commits)
-
Committed by matthewvon
Summary: This is a simple edit to make two #include file paths within range_del_aggregator.{h,cc} consistent with everywhere else. The impact of this inconsistency is that it actually breaks a Bazel-based build on the Windows platform. The same pragma once failure occurs with both Windows Visual C++ 2019 and clang for Windows 9.0. Bazel's "sandboxing" of the builds causes both compilers to not recognize "rocksdb/types.h" and "include/rocksdb/types.h" as the same file (likewise comparator.h). My guess is that the mixing of backslashes and forward slashes within path names is the underlying issue. But everything builds fine once the include paths in these two source files are consistent with the rest of the repository. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6321 Differential Revision: D19506585 Pulled By: ltamasi fbshipit-source-id: 294c346607edc433ab99eaabc9c880ee7426817a
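Concretely, the inconsistency is between the two spellings shown below; the fix uses whichever form the rest of the repository already uses (the plain "rocksdb/..." prefix), so both compilers see a single file for #pragma once purposes. The exact before/after is in the PR; this is only an illustration:

```cpp
// Form used throughout the repository (relative to the include/ root):
#include "rocksdb/types.h"
#include "rocksdb/comparator.h"

// Inconsistent spelling that resolves to the same file outside of Bazel's
// sandboxing, but not inside it:
// #include "include/rocksdb/types.h"
```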
-
Committed by Levi Tamasi
Summary: Currently, this test case tries to infer whether `VersionStorageInfo::UpdateAccumulatedStats` was called during open by checking the number of files opened against an arbitrary threshold (10). This makes the test brittle and results in sporadic failures. The patch changes the test case to use sync points to directly test whether `UpdateAccumulatedStats` was called. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6306 Test Plan: `make check` Differential Revision: D19439544 Pulled By: ltamasi fbshipit-source-id: ceb7adf578222636a0f51740872d0278cd1a914f
-
Committed by sdong
Summary: The block-based table's hash index had been disabled in the crash test due to bugs. We fixed a bug and re-enable it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6310 Test Plan: Finished one round of the "crash_test_with_atomic_flush" test successfully while exclusively running the hash index. Another run also ran for several hours without failure. Differential Revision: D19455856 fbshipit-source-id: 1192752d2c1e81ed7e5c5c7a9481c841582d5274
-
- 21 January 2020 (1 commit)
-
Committed by Peter Dillinger
Summary: With many millions of keys, the old Bloom filter implementation for the block-based table (format_version <= 4) would have an excessive FP rate due to the limitations of feeding the Bloom filter with a 32-bit hash. This change computes an estimated inflated FP rate due to this effect and warns in the log whenever an SST filter is constructed (almost certainly a "full" rather than a "partitioned" filter) that exceeds a 1.5x FP rate due to this effect. The detailed condition is only checked if 3 million keys or more have been added to a filter, as this should be a lower bound for common bits/key settings (< 20). Recommended remedies include smaller SST file size, using format_version >= 5 (for the new Bloom filter), or using partitioned filters. This does not change behavior other than generating warnings for some constructed filters using the old implementation.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6317
Test Plan: Example with warning, 15M keys @ 15 bits/key (working_mem_size_mb is just to stop after building one filter if it's large):
$ ./filter_bench -quick -impl=0 -working_mem_size_mb=1 -bits_per_key=15 -average_keys_per_filter=15000000 2>&1 | grep 'FP rate'
[WARN] [/block_based/filter_policy.cc:292] Using legacy SST/BBT Bloom filter with excessive key count (15.0M @ 15bpk), causing estimated 1.8x higher filter FP rate. Consider using new Bloom with format_version>=5, smaller SST file size, or partitioned filters. Predicted FP rate %: 0.766702 Average FP rate %: 0.66846
Example without warning (150K keys):
$ ./filter_bench -quick -impl=0 -working_mem_size_mb=1 -bits_per_key=15 -average_keys_per_filter=150000 2>&1 | grep 'FP rate'
Predicted FP rate %: 0.422857 Average FP rate %: 0.379301
With more samples at 15 bits/key:
150K keys -> no warning; actual: 0.379% FP rate (baseline)
1M keys -> no warning; actual: 0.396% FP rate, 1.045x
9M keys -> no warning; actual: 0.563% FP rate, 1.485x
10M keys -> warning (1.5x); actual: 0.564% FP rate, 1.488x
15M keys -> warning (1.8x); actual: 0.668% FP rate, 1.76x
25M keys -> warning (2.4x); actual: 0.880% FP rate, 2.32x
At 10 bits/key:
150K keys -> no warning; actual: 1.17% FP rate (baseline)
1M keys -> no warning; actual: 1.16% FP rate
10M keys -> no warning; actual: 1.32% FP rate, 1.13x
25M keys -> no warning; actual: 1.63% FP rate, 1.39x
35M keys -> warning (1.6x); actual: 1.81% FP rate, 1.55x
At 5 bits/key:
150K keys -> no warning; actual: 9.32% FP rate (baseline)
25M keys -> no warning; actual: 9.62% FP rate, 1.03x
200M keys -> no warning; actual: 12.2% FP rate, 1.31x
250M keys -> warning (1.5x); actual: 12.8% FP rate, 1.37x
300M keys -> warning (1.6x); actual: 13.4% FP rate, 1.43x
The reason for the modest inaccuracy at low bits/key is that the assumption of independence between a collision between 32-bit hash values feeding the filter and an FP in the filter is not quite true for implementations using "simple" logic to compute indices from the stock hash result. There's math on this in my dissertation, but I don't think it's worth the effort just for these extreme cases (> 100 million keys and low-ish bits/key).
Differential Revision: D19471715 Pulled By: pdillinger fbshipit-source-id: f80c96893a09bf1152630ff0b964e5cdd7e35c68
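As a rough back-of-the-envelope check of the numbers above (explicitly not the exact estimate in filter_policy.cc): a query key whose 32-bit hash collides with that of any added key is guaranteed to be a false positive, so the collision probability roughly adds on top of the filter's baseline FP rate. The baseline value below is an assumption taken from the 150K-key measurement quoted above.

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // Assumed baseline FP rate: the 150K-key measurement above (0.379% at
  // 15 bits/key). 2^32 is the size of the hash space feeding the old filter.
  const double base_fp = 0.00379;
  const double num_keys = 15e6;

  // Probability that a query key's 32-bit hash collides with at least one
  // added key's hash (approximation valid for n << 2^32).
  const double collision = 1.0 - std::exp(-num_keys / 4294967296.0);

  // A colliding query is always a false positive; otherwise the baseline
  // FP rate applies.
  const double inflated = collision + (1.0 - collision) * base_fp;
  std::printf("estimated inflation: %.2fx\n", inflated / base_fp);
  // Prints roughly 1.9x for 15M keys, in the same ballpark as the ~1.8x
  // warning / 1.76x measured ratio quoted above.
  return 0;
}
```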
-
- 18 January 2020 (3 commits)
-
Committed by Peter Dillinger
Summary: Help users that would benefit most from the new Bloom filter implementation by logging a warning that recommends using format_version >= 5.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6312
Test Plan:
$ (for BPK in 10 13 14 19 20 50; do ./filter_bench -quick -impl=0 -bits_per_key=$BPK -m_queries=1 2>&1; done) | grep 'its/key'
Bits/key actual: 10.0647
Bits/key actual: 13.0593
[WARN] [/block_based/filter_policy.cc:546] Using legacy Bloom filter with high (14) bits/key. Significant filter space and/or accuracy improvement is available with format_verion>=5.
Bits/key actual: 14.0581
[WARN] [/block_based/filter_policy.cc:546] Using legacy Bloom filter with high (19) bits/key. Significant filter space and/or accuracy improvement is available with format_verion>=5.
Bits/key actual: 19.0542
[WARN] [/block_based/filter_policy.cc:546] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_verion>=5.
Bits/key actual: 20.0584
[WARN] [/block_based/filter_policy.cc:546] Using legacy Bloom filter with high (50) bits/key. Dramatic filter space and/or accuracy improvement is available with format_verion>=5.
Bits/key actual: 50.0577
Differential Revision: D19457191 Pulled By: pdillinger fbshipit-source-id: 073d94cde5c70e03a160f953e1100c15ea83eda4
-
Committed by chenyou-fdu
Summary: When we do concurrent writes, different write operations may have WAL enabled or disabled. But the data from write operations with WAL disabled would still be logged into the log files, leading to extra disk write/sync even though we do not want any guarantee for that part of the data. Details can be found in https://github.com/facebook/rocksdb/issues/6280. This PR avoids mixing the two types in a write group. The advantage is simpler reasoning about the write group's content. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6290 Differential Revision: D19448598 Pulled By: maysamyabandeh fbshipit-source-id: 3d990a0f79a78ea1bfc90773f6ebafc1884c20de
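The mix that the PR keeps out of a single write group comes from callers toggling the per-write WAL flag. A minimal sketch of the two kinds of writes involved (key/value names are placeholders):

```cpp
#include <cassert>

#include "rocksdb/db.h"

void MixedWalWrites(rocksdb::DB* db) {
  rocksdb::WriteOptions with_wal;     // default: WAL enabled
  rocksdb::WriteOptions without_wal;
  without_wal.disableWAL = true;      // caller opts out of durability

  // Issued concurrently from different threads, writes like these could land
  // in the same write group; before this PR the WAL-disabled data was still
  // appended to the log, paying write/sync cost for no guarantee.
  rocksdb::Status s1 = db->Put(with_wal, "k1", "v1");
  rocksdb::Status s2 = db->Put(without_wal, "k2", "v2");
  assert(s1.ok() && s2.ok());
}
```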
-
Committed by Matt Bell
Summary: This PR adds a `rocksdb_options_set_atomic_flush` function to the C API. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6307 Differential Revision: D19451313 Pulled By: ltamasi fbshipit-source-id: 750495642ef55b1ea7e13477f85c38cd6574849c
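A minimal usage sketch of the new setter (error handling trimmed; the database path is a placeholder). It is written as C++ against the rocksdb/c.h header that the C API ships:

```cpp
#include <cstdio>
#include <cstdlib>

#include "rocksdb/c.h"

int main() {
  rocksdb_options_t* options = rocksdb_options_create();
  rocksdb_options_set_create_if_missing(options, 1);
  // The setter added by this PR: 1 enables atomic flush across column
  // families, 0 (the default) disables it.
  rocksdb_options_set_atomic_flush(options, 1);

  char* err = nullptr;
  rocksdb_t* db = rocksdb_open(options, "/tmp/atomic_flush_example", &err);
  if (err != nullptr) {
    std::fprintf(stderr, "open failed: %s\n", err);
    std::free(err);
  } else {
    rocksdb_close(db);
  }
  rocksdb_options_destroy(options);
  return 0;
}
```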
-
- 17 January 2020 (5 commits)
-
Committed by sdong
Summary: A previous change was meant to make db_stress run with sync=1 for 1/20 of the time in crash_test, but a bug caused it to always run with sync=1. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6304 Test Plan: Start and kill "python -u tools/db_crashtest.py --simple whitebox" multiple times and observe that most of the time sync=0 is used while some of the time sync=1 is used. Differential Revision: D19433000 fbshipit-source-id: 7a0adba39b17a1b3acbbd791bb0cdb743b91fa95
-
Committed by sdong
Summary: A recent bug fix related to the hash index introduced a new bug: the hash index can return NotFound, but this is not handled by BlockBasedTable::Get(). The end result is that Get() stops executing too early. Fix it by ignoring the NotFound code in Get(). Pull Request resolved: https://github.com/facebook/rocksdb/pull/6305 Test Plan: A problematic DB used to return NotFound incorrectly and is now able to return the correct result. Will try to construct a unit test too. Differential Revision: D19438925 fbshipit-source-id: e751afa8c13728d56511cfeb1bc811ecb99f3217
-
Committed by Levi Tamasi
Summary: https://github.com/facebook/rocksdb/pull/2205 introduced a new configuration option called `max_background_jobs`, superseding the earlier options `max_background_flushes` and `max_background_compactions`. However, unlike `max_background_compactions`, setting `max_background_jobs` dynamically through the `SetDBOptions` interface does not adjust the size of the thread pools (see https://github.com/facebook/rocksdb/issues/6298). The patch fixes this. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6300 Test Plan: Extended unit test. Differential Revision: D19430899 Pulled By: ltamasi fbshipit-source-id: 704006605b3c13c3d1b997ccc0831ee369721074
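For reference, the dynamic path being fixed is the string-based SetDBOptions call; after this patch a change like the following also resizes the background thread pools, matching the existing behavior of max_background_compactions. A minimal sketch:

```cpp
#include <cassert>

#include "rocksdb/db.h"

void RaiseBackgroundJobs(rocksdb::DB* db) {
  // After this fix, adjusting max_background_jobs at runtime also resizes the
  // low-/high-priority background thread pools.
  rocksdb::Status s = db->SetDBOptions({{"max_background_jobs", "8"}});
  assert(s.ok());
  (void)s;
}
```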
-
Committed by Cheng Chang
Summary: Add asserts to show the intentions of result explicitly. Add examples to show the effect of optimistic transaction more clearly. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6074 Test Plan: `cd examples && make optimistic_transaction_example && ./optimistic_transaction_example` Differential Revision: D18964309 Pulled By: cheng-chang fbshipit-source-id: a524616ed9981edf2fd37ae61c5ed18c5cf25f55
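For context, a trimmed-down sketch of the optimistic (validate-at-commit) flow that the example demonstrates; the path is a placeholder and error handling is condensed to asserts:

```cpp
#include <cassert>

#include "rocksdb/utilities/optimistic_transaction_db.h"
#include "rocksdb/utilities/transaction.h"

using namespace rocksdb;

int main() {
  Options options;
  options.create_if_missing = true;

  OptimisticTransactionDB* txn_db = nullptr;
  Status s = OptimisticTransactionDB::Open(options, "/tmp/otxn_example", &txn_db);
  assert(s.ok());

  WriteOptions write_options;
  Transaction* txn = txn_db->BeginTransaction(write_options);

  // Writes are buffered in the transaction; conflicts are only detected at
  // Commit() time, which is what the added asserts in the example make explicit.
  s = txn->Put("key1", "value1");
  assert(s.ok());

  s = txn->Commit();
  // Commit() returns Busy if another writer touched the same keys after this
  // transaction accessed them.
  assert(s.ok() || s.IsBusy());

  delete txn;
  delete txn_db;
  return 0;
}
```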
-
Committed by sdong
Summary: A recent fix to the prefix hash index, https://github.com/facebook/rocksdb/pull/6292, caused a bug where the newly created NotFound status in the hash index is never reset. This causes a reseek or implicit reseek to sometimes return wrong results. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6302 Test Plan: Add a unit test that would fail without the fix. A crash test with the hash index would fail within several seconds. With the fix, it runs for several minutes before failing with another, unrelated failure. Differential Revision: D19424572 fbshipit-source-id: c5276f36a95fd0e2837e30190476d2fe21ed8566
-
- 16 January 2020 (2 commits)
-
Committed by Levi Tamasi
Summary: As of 1/15/2020, Maven Central does not support plain HTTP. Because of this, our Travis and AppVeyor builds have started failing during the assertj download step. This patch will hopefully fix these issues. See https://blog.sonatype.com/central-repository-moving-to-https for more info. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6301 Test Plan: Will monitor the builds. ("I don't always test my changes but when I do, I do it in production.") Differential Revision: D19422923 Pulled By: ltamasi fbshipit-source-id: 76f9a8564a5b66ddc721d705f9cbfc736bf7a97d
-
Committed by sdong
Summary: When prefix mode is enabled and the prefix of the target does not exist, the expected behavior is for Seek to land on any key larger than the target and for SeekForPrev to land on any key less than the target. Currently, the prefix index (kHashSearch) returns OK status but sets Invalid() to indicate two cases: i) a prefix of the searched key does not exist, ii) the key is beyond the range of the keys in the SST file. The SeekForPrev implementation in BlockBasedTable thus does not have enough information to know when it should set the index key to first (to return a key smaller than the target). The patch fixes that by returning NotFound status for cases where the prefix does not exist. SeekForPrev in BlockBasedTable accordingly calls SeekToFirst instead of SeekToLast on the index iterator. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6297 Test Plan: SeekForPrev of a non-existing prefix is added to block_test.cc, and a test case is added in db_test2, which fails without the fix. Differential Revision: D19404695 fbshipit-source-id: cafbbf95f8f60ff9ede9ccc99d25bfa1cf6fcdc3
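The expected semantics can be pinned down with a tiny example. A hedged sketch, assuming a DB opened with a 3-byte prefix extractor that contains exactly the keys "foo1" and "goo1" (mirroring the kind of case the db_test2 test exercises):

```cpp
#include <cassert>
#include <memory>

#include "rocksdb/db.h"

// With prefix mode enabled and keys {"foo1", "goo1"} in the DB,
// SeekForPrev("g11") targets the non-existent prefix "g11"; the expected
// result is a key smaller than the target ("foo1" here), which is what the
// NotFound-based handling described above is meant to produce.
void PrefixSeekForPrev(rocksdb::DB* db) {
  rocksdb::ReadOptions read_options;
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(read_options));
  it->SeekForPrev("g11");
  assert(it->Valid() && it->key() == rocksdb::Slice("foo1"));
}
```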
-
- 15 January 2020 (3 commits)
-
Committed by Levi Tamasi
Summary: In addition to removing the earlier partially implemented garbage collection logic from the BlobDB codebase, the patch also removes the test cases (as well as the related sync points, as appropriate) that were only relevant for the old implementation, and reworks the remaining ones so they use the new GC logic. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6278 Test Plan: `make check` Differential Revision: D19335226 Pulled By: ltamasi fbshipit-source-id: 0cc1794bc9892feda1426ed5522a318f3cb1b692
-
Committed by sdong
Summary: A recent commit adds a unit test that uses a function not available in the LITE build. Fix it by avoiding the call. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6295 Test Plan: Run the test in the LITE build and see it pass. Differential Revision: D19395678 fbshipit-source-id: 37b42835bae02511630d80f7cafb1179401bc033
-
Committed by Maysam Yabandeh
Summary: The kHashSearch index type is incompatible with index_block_restart_interval values larger than 1. The patch asserts that and also resets the index_block_restart_interval value if it is incompatible with kHashSearch. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6294 Differential Revision: D19394229 Pulled By: maysamyabandeh fbshipit-source-id: 8a12712ab25e81094a7f71ecd43f773dd4fb6acd
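For illustration, an options combination that the sanitization described above accepts; a minimal sketch (kHashSearch also needs a prefix extractor, here a fixed 3-byte one):

```cpp
#include "rocksdb/options.h"
#include "rocksdb/slice_transform.h"
#include "rocksdb/table.h"

rocksdb::Options MakeHashSearchOptions() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.index_type = rocksdb::BlockBasedTableOptions::kHashSearch;
  // Anything larger than 1 is incompatible with the hash index; the patch
  // asserts on / resets such values, so pin it to 1 explicitly.
  table_options.index_block_restart_interval = 1;

  rocksdb::Options options;
  // The hash index builds its buckets from key prefixes, so a prefix
  // extractor must be configured as well.
  options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(3));
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```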
-
- 14 January 2020 (1 commit)
-
Committed by sdong
Summary: The fractional cascading index is not correctly generated when two files at the same level contain the same smallest or largest user key. The result is that it would hit an assertion in debug mode, and lower-level files might be skipped. This might cause wrong results when the same user keys are merge operands and Get() is called using the exact user key; in that case, the lower files would need to be further checked. The fix is to correct the fractional cascading index. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6285 Test Plan: Add a unit test which would trigger the assertion that the fix addresses. Differential Revision: D19358426 fbshipit-source-id: 39b2b1558075fd95e99491d462a67f9f2298c48e
-
- 11 January 2020 (3 commits)
-
Committed by Qinfan Wu
Summary: This makes it easier to call the functions from Rust as otherwise they require mutable types. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6283 Differential Revision: D19349991 Pulled By: wqfish fbshipit-source-id: e8da7a75efe8cd97757baef8ca844a054f2519b4
-
Committed by Sagar Vemuri
Summary: Look at all compaction input files to compute the oldest ancestor time. In https://github.com/facebook/rocksdb/issues/5992 we changed how the creation_time (aka oldest-ancestor-time) table property of compaction output files is computed, from max(creation-time-of-all-compaction-inputs) to min(creation-time-of-all-inputs). This exposed a bug where, during compaction, only the creation_time values of the L0 compaction inputs were being looked at, and all other input levels were being ignored. This PR fixes the issue. Some TTL compactions when using level-style compaction might not have run due to this bug. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6279 Test Plan: Enhanced the unit tests to validate that the correct time is propagated to the compaction outputs. Differential Revision: D19337812 Pulled By: sagar0 fbshipit-source-id: edf8a72f11e405e93032ff5f45590816debe0bb4
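The gist of the fix: the minimum has to be taken over the inputs of every level participating in the compaction, not just L0. A self-contained sketch with hypothetical types, not the actual CompactionJob code:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical stand-ins for compaction input metadata.
struct FileInfo {
  uint64_t oldest_ancestor_time;  // creation time propagated from ancestors
};
struct LevelInputs {
  std::vector<FileInfo> files;
};

// Correct: walk the inputs of every level. The bug was only looking at the
// first (L0) element of `inputs`, which could overstate the result and keep
// TTL compactions from firing.
uint64_t OldestAncestorTime(const std::vector<LevelInputs>& inputs) {
  uint64_t oldest = std::numeric_limits<uint64_t>::max();
  for (const LevelInputs& level : inputs) {
    for (const FileInfo& f : level.files) {
      oldest = std::min(oldest, f.oldest_ancestor_time);
    }
  }
  return oldest;
}
```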
-
Committed by Maysam Yabandeh
Summary: unordered_write is incompatible with non-zero max_successive_merges. Although we check this at runtime, we currently don't prevent the user from setting this combination in the options. This has led stress tests to fail when this combination is tried via ::SetOptions. The patch fixes that and also reverts the changes performed by https://github.com/facebook/rocksdb/pull/6254, in which max_successive_merges was mistakenly declared incompatible with unordered_write. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6284 Differential Revision: D19356115 Pulled By: maysamyabandeh fbshipit-source-id: f06dadec777622bd75f267361c022735cf8cecb6
-
- 10 January 2020 (2 commits)
-
Committed by anand76
Summary: Undo https://github.com/facebook/rocksdb/issues/6243 and fix the crash test failures. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6273 Test Plan: Run make ubsan_crash_test Differential Revision: D19331472 Pulled By: anand1976 fbshipit-source-id: 30aa4a36c1b0f77a97159d82bbfd1cd767878e28
-
Committed by Yanqin Jin
Summary: Fix compilation under LITE by putting `#ifndef ROCKSDB_LITE` around a code block. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6277 Differential Revision: D19334157 Pulled By: riversand963 fbshipit-source-id: 947111ed68aa550f5ea424b216c1442a8af9e32b
-
- 09 January 2020 (6 commits)
-
Committed by sdong
Summary: Some shadow warnings show up when using gcc 4.8. An example:
./utilities/blob_db/blob_compaction_filter.h: In constructor ‘rocksdb::blob_db::BlobIndexCompactionFilterFactoryBase::BlobIndexCompactionFilterFactoryBase(rocksdb::blob_db::BlobDBImpl*, rocksdb::Env*, rocksdb::Statistics*)’:
./utilities/blob_db/blob_compaction_filter.h:121:7: error: declaration of ‘blob_db_impl’ shadows a member of 'this' [-Werror=shadow]
 : blob_db_impl_(blob_db_impl), env_(_env), statistics_(_statistics) {}
 ^
Fix them.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6242 Test Plan: Build and see the warnings go away. Differential Revision: D19217789 fbshipit-source-id: 8ef631941f23dab47a388e060adec24b72efd65e
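For reference, the class of warning being silenced can be reproduced with a small example under gcc 4.8 and -Wshadow; the fix follows the same pattern visible in the quoted initializer list (underscore-prefixed constructor parameters). Illustrative class only, not the actual BlobDB one:

```cpp
// Illustrative reproduction of the gcc 4.8 -Wshadow diagnostic.
class StatsHolder {
 public:
  // Before: a constructor parameter named `statistics` shadows the member
  // accessor `statistics()` below, which old gcc reports as
  // "declaration of 'statistics' shadows a member of 'this'".
  //
  // After (the pattern used in the fix): prefix the parameter with an
  // underscore so nothing is shadowed.
  explicit StatsHolder(int _statistics) : statistics_(_statistics) {}

  int statistics() const { return statistics_; }

 private:
  int statistics_;
};
```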
-
Committed by Yanqin Jin
Summary: Remove a comment. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6274 Differential Revision: D19323151 Pulled By: riversand963 fbshipit-source-id: d0d804d6882edcd94e35544ef45578b32ff1caae
-
Committed by Huisheng Liu
Summary: Exclude timestamp in key comparison during boundary calculation to avoid key versions being excluded. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6205 Differential Revision: D19166765 Pulled By: riversand963 fbshipit-source-id: bbe08816fef8de349a83ebd59a595ad844021f24
-
Committed by Amber1990Zhang
Summary: As title. Add a new user, [**Nebula Graph**](https://github.com/vesoft-inc/nebula), to the user doc. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6271 Differential Revision: D19319345 fbshipit-source-id: 52a54372cecc701c34da4ea6b1cf27f3b7498efb
-
Committed by sdong
Summary: Right now, when validating the prefix iterator, if the control iterator is invalidated but the prefix iterator returns a value, we treat it as a test failure. However, this fails to consider the case where a file or memtable containing a tombstone is filtered out by a prefix bloom filter. The fix is to relax the check in this case: if we are out of the prefix range, ignore the check. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6269 Test Plan: Run crash_test for a short while and it still passes. Differential Revision: D19317594 fbshipit-source-id: b964a1cdc1df5efe439d4b32f8023e1fbc8598c1
-
Committed by Maysam Yabandeh
Summary: The crash test is failing with non-ok status after TransactionDB::Open. This patch adds more debugging information. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6272 Differential Revision: D19314527 Pulled By: maysamyabandeh fbshipit-source-id: d45ecb0f2144e052fb4b5fdd483150440991a3b4
-
- 08 January 2020 (1 commit)
-
Committed by Adam Retter
Summary: This is the start of some JMH microbenchmarks for RocksJava. Such benchmarks can help us decide on performance improvements of the Java API. At the moment, I have only added benchmarks for various Comparator options, as that is one of the first areas where I want to improve performance. I plan to expand this to many more tests. Details of how to compile and run the benchmarks are in the `README.md`. A run of these on a XEON 3.5 GHz 4vCPU (QEMU Virtual CPU version 2.5+) / 8GB RAM KVM with Ubuntu 18.04, OpenJDK 1.8.0_232, and gcc 8.3.0 produced the following:
```
# Run complete. Total time: 01:43:17

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial experiments, perform baseline and negative tests that provide experimental control, make sure the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts. Do not assume the numbers tell you what you want them to tell.

Benchmark                 (comparatorName)                          Mode  Cnt       Score      Error  Units
ComparatorBenchmarks.put  native_bytewise                          thrpt   25  122373.920 ± 2200.538  ops/s
ComparatorBenchmarks.put  java_bytewise_adaptive_mutex             thrpt   25   17388.201 ± 1444.006  ops/s
ComparatorBenchmarks.put  java_bytewise_non-adaptive_mutex         thrpt   25   16887.150 ± 1632.204  ops/s
ComparatorBenchmarks.put  java_direct_bytewise_adaptive_mutex      thrpt   25   15644.572 ± 1791.189  ops/s
ComparatorBenchmarks.put  java_direct_bytewise_non-adaptive_mutex  thrpt   25   14869.601 ± 2252.135  ops/s
ComparatorBenchmarks.put  native_reverse_bytewise                  thrpt   25  116528.735 ± 4168.797  ops/s
ComparatorBenchmarks.put  java_reverse_bytewise_adaptive_mutex     thrpt   25   10651.975 ±  545.998  ops/s
ComparatorBenchmarks.put  java_reverse_bytewise_non-adaptive_mutex thrpt   25   10514.224 ±  930.069  ops/s
```
Indicating a ~7x difference between comparators implemented natively (C++) and those implemented in Java. Let's see if we can't improve on that in the near future... Pull Request resolved: https://github.com/facebook/rocksdb/pull/6241 Differential Revision: D19290410 Pulled By: pdillinger fbshipit-source-id: 25d44bf3a31de265502ed0c5d8a28cf4c7cb9c0b
-