- 20 Aug, 2017 (1 commit)
-
-
Committed by Andrew Kryczka
Summary: Add this counter stat to track usage of the deletion-dropping optimization. If usage is low, we can delete it to prevent bugs like #2726. Closes https://github.com/facebook/rocksdb/pull/2761 Differential Revision: D5665421 Pulled By: ajkr fbshipit-source-id: 881befa2d199838dac88709e7b376a43d304e3d4
-
- 19 Aug, 2017 (4 commits)
-
-
Committed by Maysam Yabandeh
Summary: https://github.com/facebook/rocksdb/pull/2661 mistakenly updates the version. This patch reverts it. Closes https://github.com/facebook/rocksdb/pull/2760 Differential Revision: D5662089 Pulled By: maysamyabandeh fbshipit-source-id: f4735e37921c0ced6081a89080c78ac3728aa8bd
-
Committed by Andrew Kryczka
Summary: it's a new feature that'll be released in 5.8, introduced by PR #2498. Closes https://github.com/facebook/rocksdb/pull/2759 Differential Revision: D5661923 Pulled By: ajkr fbshipit-source-id: 9ba9f0d146c453715358ef2dd298aa7765649d7c
-
Committed by Andrew Kryczka
Summary: With this PR, we can measure read-amp for queries where perf_context is enabled as follows:

```cpp
SetPerfLevel(kEnableCount);
Get(1, "foo");
double read_amp = static_cast<double>(get_perf_context()->block_read_byte) /
                  get_perf_context()->get_read_bytes;
SetPerfLevel(kDisable);
```

Our internal infra enables perf_context for a sampling of queries. So we'll be able to compute the read-amp for the sample set, which can give us a good estimate of read-amp. Closes https://github.com/facebook/rocksdb/pull/2749 Differential Revision: D5647240 Pulled By: ajkr fbshipit-source-id: ad73550b06990cf040cc4528fa885360f308ec12
-
Committed by Maysam Yabandeh
Summary: This fixes the existing logic for pinning L0 index partitions. The patch preloads the partitions into the block cache and pins them if they belong to level 0 and pin_l0 is set. The drawback is that it does many small IOs when preloading all the partitions into the cache if direct I/O is enabled. Working on a solution for that. Closes https://github.com/facebook/rocksdb/pull/2661 Differential Revision: D5554010 Pulled By: maysamyabandeh fbshipit-source-id: 1e6f32a3524d71355c77d4138516dcfb601ca7b2
-
- 18 Aug, 2017 (4 commits)
-
-
Committed by Archit Mishra
Summary: Changes:
* extended the wait_txn_map to track additional information
* designed a circular buffer to store the n latest deadlocks' information
* added test coverage to verify the additional information tracked is accurately stored in the buffer
Closes https://github.com/facebook/rocksdb/pull/2630 Differential Revision: D5478025 Pulled By: armishra fbshipit-source-id: 2b138de7b5a73f5ca554fc3ff8220a3be49f39e7
-
Committed by yiwu-arbug
Summary: Clang complains about a cast from uint64_t to int32 in db_stress. Fixing it. Closes https://github.com/facebook/rocksdb/pull/2755 Differential Revision: D5655947 Pulled By: yiwu-arbug fbshipit-source-id: cfac10e796e0adfef4727090b50975b0d6e2c9be
-
Committed by yiwu-arbug
Summary: On the initial call to BlobDBImpl::WaStats(), `all_periods_write_` would be empty, so it will crash when we call pop_front() at line 1627. Apparently it is meant to pop only when `all_periods_write_.size() > kWriteAmplificationStatsPeriods`. The whole write amp calculation doesn't seem to be correct and it is not being exposed. Will work on it later. Test Plan: Change kWriteAmplificationStatsPeriodMillisecs to 1000 (1 second) and run db_bench --use_blob_db for 5 minutes. Closes https://github.com/facebook/rocksdb/pull/2751 Differential Revision: D5648269 Pulled By: yiwu-arbug fbshipit-source-id: b843d9a09bb5f9e1b713d101ec7b87e54b5115a4
-
Committed by Sagar Vemuri
Summary: Updating Cassandra merge operator to make use of a single merge operand when needed. Single merge operand support has been introduced in #2721. Closes https://github.com/facebook/rocksdb/pull/2753 Differential Revision: D5652867 Pulled By: sagar0 fbshipit-source-id: b9fbd3196d3ebd0b752626dbf9bec9aa53e3e26a
-
- 17 Aug, 2017 (6 commits)
-
-
Committed by Sagar Vemuri
Summary: Added a function `MergeOperator::DoesAllowSingleMergeOperand()` to allow invoking a merge operator even with a single merge operand, if overridden. This is needed for Cassandra-on-RocksDB work. All Cassandra writes are through merges and this will allow a single merge-value to be updated in the merge-operator invoked via a compaction, if needed, due to an expired TTL. Closes https://github.com/facebook/rocksdb/pull/2721 Differential Revision: D5608706 Pulled By: sagar0 fbshipit-source-id: f299f9f91c4d1ac26e48bd5906e122c1c5e5f3fc
-
Committed by follitude
Summary: PTAL ajkr Closes https://github.com/facebook/rocksdb/pull/2750 Differential Revision: D5648052 Pulled By: ajkr fbshipit-source-id: 7cd1ddd61364d5a55a10fdd293fa74b2bf89dd98
-
Committed by Andrew Kryczka
Summary: Fix some things that made this command hard to use from the CLI:
- use default values for `target_file_size_base` and `max_bytes_for_level_base`. Previously we were using small values for these but the default value of `write_buffer_size`, which led to an enormous number of L1 files.
- add a failure message when `value_size_mult` is too big. Previously there was just an assert, so in non-debug mode it'd overrun the value buffer and crash mysteriously.
- only print verification success if there's no failure. Before, it'd print both in the failure case.
- support `memtable_prefix_bloom_size_ratio`
- support `num_bottom_pri_threads` (universal compaction)
Closes https://github.com/facebook/rocksdb/pull/2741 Differential Revision: D5629495 Pulled By: ajkr fbshipit-source-id: ddad97d6d4ba0884e7c0f933b0a359712514fc1d
-
Committed by Andrew Kryczka
Summary: the range delete tombstones in memtable should be added to the aggregator even when the memtable's prefix bloom filter tells us the lookup key's not there. This bug could cause data to temporarily reappear until the memtable containing range deletions is flushed. Reported in #2743. Closes https://github.com/facebook/rocksdb/pull/2745 Differential Revision: D5639007 Pulled By: ajkr fbshipit-source-id: 04fc6facb6f978340a3f639536f4ca7c0d73dfc9
-
Committed by Andrew Kryczka
Summary: We forgot to recompute compaction scores after picking a universal compaction like we do in level compaction (https://github.com/facebook/rocksdb/blob/a34b2e388ee51173e44f6aa290f1301c33af9e67/db/compaction_picker.cc#L691-L695). This leads to a fairness issue where we waste compactions on CFs/DB instances that don't need it while others can starve. Previously, ccecf3f4 fixed the issue for the read-amp-based compaction case; this PR avoids the issue earlier and also for size-ratio-based compactions. Closes https://github.com/facebook/rocksdb/pull/2688 Differential Revision: D5566191 Pulled By: ajkr fbshipit-source-id: 010bccb2a107f6a76f3d3022b90aadce5cc48feb
-
Committed by Maysam Yabandeh
Summary: Implement the main body of the WritePrepared pseudo code. This includes PrepareInternal and CommitInternal, as well as AddCommitted which updates the commit map. It also provides an IsInSnapshot method that could later be called from the read path to decide if a version is in the read snapshot or should otherwise be skipped. This patch lacks unit tests and does not attempt to offer an efficient implementation. The idea is to have the API specified so that we can work on related tasks in parallel. Closes https://github.com/facebook/rocksdb/pull/2713 Differential Revision: D5640021 Pulled By: maysamyabandeh fbshipit-source-id: bfa7a05e8d8498811fab714ce4b9c21530514e1c
-
- 16 Aug, 2017 (4 commits)
-
-
Committed by Sagar Vemuri
Summary: The `PartialMergeMulti` implementation is enough for Cassandra, and `PartialMerge` is not required. Implementing both will just duplicate the code. As per https://github.com/facebook/rocksdb/blob/master/include/rocksdb/merge_operator.h#L130-L135:

```cpp
// The default implementation of PartialMergeMulti will use this function
// as a helper, for backward compatibility. Any successor class of
// MergeOperator should either implement PartialMerge or PartialMergeMulti,
// although implementing PartialMergeMulti is suggested as it is in general
// more effective to merge multiple operands at a time instead of two
// operands at a time.
```

Closes https://github.com/facebook/rocksdb/pull/2737 Reviewed By: scv119 Differential Revision: D5633073 Pulled By: sagar0 fbshipit-source-id: ef4fa102c22fec6a0175ed12f5c44c15afe3c8ca
-
Committed by Siying Dong
Summary: Similar to the bug fixed by https://github.com/facebook/rocksdb/pull/2726, FIFO with compaction and kCompactionStyleNone during user customized CompactFiles() with output level to be 0 can suffer from the same problem. Fix it by leveraging the bottommost_level_ flag. Closes https://github.com/facebook/rocksdb/pull/2735 Differential Revision: D5626906 Pulled By: siying fbshipit-source-id: 2b148d0461c61dbd986d74655e384419ae442158
-
Committed by lxcode
Summary: If ROCKSDB_LITE is defined, a call to abort() is introduced. This call requires stdlib.h. Build log of unpatched 5.7.1: http://beefy9.nyi.freebsd.org/data/110amd64-default/447974/logs/rocksdb-lite-5.7.1.log Closes https://github.com/facebook/rocksdb/pull/2744 Reviewed By: yiwu-arbug Differential Revision: D5632372 Pulled By: lxcode fbshipit-source-id: b2a8e692bf14ccf1f875f3a00463e87bba310a2b
-
Committed by Andrew Kryczka
Summary: Support a window of `active_width` keys that rolls through `[0, max_key)` over the duration of the test. Operations only affect keys inside the window. This gives us the ability to detect L0->L0 deletion bug (#2722). Closes https://github.com/facebook/rocksdb/pull/2739 Differential Revision: D5628555 Pulled By: ajkr fbshipit-source-id: 9cb2d8f4ab1a7c73f7797b8e19f7094970ea8749
-
- 15 Aug, 2017 (3 commits)
-
-
Committed by Neal Poole
Summary: Most of the data used here in shell commands is not generated directly from user input, but some data (i.e., from environment variables) may have been externally influenced. It is a good practice to escape this data before using it in a shell command. Originally D4800264 but we never quite got it merged. Reviewed By: yiwu-arbug Differential Revision: D5595052 fbshipit-source-id: c09d8b47fe35fc6a47afb4933ccad9d56ca8d7be
-
Committed by yiwu-arbug
Summary: Also verify it fixes gcc7 compile failure #2672 (see also #2699) Closes https://github.com/facebook/rocksdb/pull/2732 Differential Revision: D5620348 Pulled By: yiwu-arbug fbshipit-source-id: 87db657ab734f23b1bfaaa9db9b9956d10eaef59
-
Committed by Yi Wu
-
- 14 Aug, 2017 (4 commits)
-
-
Committed by Nikhil Benesch
Summary: Some compilers require `-std=c++11` for the `cstdint` header to be available. We already have logic to add `-std=c++11` to `CXXFLAGS` when the compiler is not MSVC; simply reorder CMakeLists.txt so that logic happens before the calls to `CHECK_CXX_SOURCE_COMPILES`. Additionally add a missing `set(CMAKE_REQUIRED_FLAGS, ...)` before a call to `CHECK_C_SOURCE_COMPILES`. Closes https://github.com/facebook/rocksdb/pull/2535 Differential Revision: D5384244 Pulled By: yiwu-arbug fbshipit-source-id: 2dbae4297c5d8ab4636e08b1457ffb2d3e37aef4
-
Committed by Nikhil Benesch
Summary: With some compilers, `-std=c++11` is necessary for <cstdint> to be available. Pass this flag via $PLATFORM_CXXFLAGS. Fixes #2488. Closes https://github.com/facebook/rocksdb/pull/2545 Differential Revision: D5620610 Pulled By: yiwu-arbug fbshipit-source-id: 2f975b8c1ad52e283e677d9a33543abd064f13ce
-
Committed by Jay
Summary: This pr enables linking all the supported compression libraries via cmake. Closes https://github.com/facebook/rocksdb/pull/2552 Differential Revision: D5620607 Pulled By: yiwu-arbug fbshipit-source-id: b6949181f305bfdf04a98f898c92fd0caba0c45a
-
Committed by Andrew Gallagher
Summary: - Remove default arch-specified flags. - Move non-default arch-specific flags to arch-specific param. Reviewed By: yiwu-arbug Differential Revision: D5597499 fbshipit-source-id: c53108ac39c73ac36893d3fd9aaf3b5e3080f1ae
-
- 13 Aug, 2017 (1 commit)
-
-
Committed by Adam Retter
Summary: Closes https://github.com/facebook/rocksdb/pull/2730 Differential Revision: D5619256 Pulled By: ajkr fbshipit-source-id: c80d697eeceab91964259132e58f5cd2219efb93
-
- 12 Aug, 2017 (9 commits)
-
-
Committed by Andrew Kryczka
Summary: `KeyNotExistsBeyondOutputLevel` didn't consider L0 files' key-ranges. So if a key was covered only by older L0 files' key-ranges, we would incorrectly drop deletions of that key. This PR just skips the deletion-dropping optimization when the output level is L0. Closes https://github.com/facebook/rocksdb/pull/2726 Differential Revision: D5617286 Pulled By: ajkr fbshipit-source-id: 4bff1396b06d49a828ba4542f249191052915bce
-
Committed by Andrew Kryczka
Summary: - like other subcommands, reporting compression sizes should be specified with the `--command` CLI arg. - also added `--compression_types` arg as it's useful to restrict the types of compression used, at least in my dictionary compression experiments. Closes https://github.com/facebook/rocksdb/pull/2706 Differential Revision: D5589520 Pulled By: ajkr fbshipit-source-id: 305bb4ebcc95eecc8a85523cd3b1050619c9ddc5
-
Committed by Andrew Kryczka
Summary: Previously we could only select the CF on which to operate uniformly at random. This is a limitation, e.g., when testing universal compaction as all CFs would need to run full compaction at roughly the same time, which isn't realistic. This PR allows the user to specify the probability distribution for selecting CFs via the `--column_family_distribution` argument. Closes https://github.com/facebook/rocksdb/pull/2677 Differential Revision: D5544436 Pulled By: ajkr fbshipit-source-id: 478d56260995236ae90895ce5bd51f38882e185a
-
Committed by Andrew Kryczka
Summary: Sounds like we're willing to trade off minor inaccuracy in stats for speed. Start with histogram stats. Ticker stats will be harder (and, IMO, we shouldn't change them in this manner) as many test cases rely on them being exactly correct. Closes https://github.com/facebook/rocksdb/pull/2720 Differential Revision: D5607884 Pulled By: ajkr fbshipit-source-id: 1b754cda35ea6b252d1fdd5aa3cfb58866506372
-
Committed by yiwu-arbug
Summary: Fix c_test missing deletion of write batch pointer. Closes https://github.com/facebook/rocksdb/pull/2725 Differential Revision: D5613866 Pulled By: yiwu-arbug fbshipit-source-id: bf3f59a6812178577c9c25bae558ef36414a1f51
-
Committed by yiwu-arbug
Summary: During GC, blob DB uses an optimistic transaction to delete or replace the index entry in the LSM, to guarantee correctness if there's a normal write writing to the same key. However, the previous implementation doesn't call SetSnapshot() nor use GetForUpdate() of the transaction API; instead it does its own sequence number checking before beginning the transaction. A normal write can sneak in after the sequence number check and overwrite the key, and the GC will delete or relocate the old version of the key by mistake. Update the code to properly use GetForUpdate() to check the existing index entry. After the patch the sequence number stored with each blob record is useless, so I'm considering removing the sequence number from the blob record in another patch. Closes https://github.com/facebook/rocksdb/pull/2703 Differential Revision: D5589178 Pulled By: yiwu-arbug fbshipit-source-id: 8dc960cd5f4e61b36024ba7c32d05584ce149c24
-
Committed by Andrew Kryczka
Summary: Closes https://github.com/facebook/rocksdb/pull/2724 Differential Revision: D5613416 Pulled By: ajkr fbshipit-source-id: ed55fb66ab1b41dfdfe765fe3264a1c87a8acb00
-
Committed by Kent767
Summary: It would be super helpful to not have to recompile rocksdb to get this performance tweak for mechanical disks. I have signed the CLA. Closes https://github.com/facebook/rocksdb/pull/2718 Differential Revision: D5606994 Pulled By: yiwu-arbug fbshipit-source-id: c05e92bad0d03bd38211af1e1ced0d0d1e02f634
-
Committed by Siying Dong
Summary: Right now, if direct I/O is enabled, prefetching the last 512KB cannot be applied, except for compaction inputs or when readahead is enabled for iterators. This can create a lot of I/O for HDD cases. To solve the problem, the 512KB is prefetched in block based table if direct I/O is enabled. The prefetched buffer is passed in together with the random access file reader, so that we try to read from the buffer before reading from the file. This can be extended in the future to support flexible user iterator readahead too. Closes https://github.com/facebook/rocksdb/pull/2708 Differential Revision: D5593091 Pulled By: siying fbshipit-source-id: ee36ff6d8af11c312a2622272b21957a7b5c81e7
-
- 11 Aug, 2017 (4 commits)
-
-
Committed by yiwu-arbug
Summary: This reverts #2699 to fix the clang build. Closes https://github.com/facebook/rocksdb/pull/2723 Differential Revision: D5610207 Pulled By: yiwu-arbug fbshipit-source-id: 6857f4556d6d18f17b74cf81fa936d1dc0bd364c
-
Committed by Siying Dong
Summary: TSAN shows an error when we grab too many locks at the same time. In the TSAN crash test, make one shard key cover 2^22 keys so that fewer locks will be held at the same time. Closes https://github.com/facebook/rocksdb/pull/2719 Differential Revision: D5609035 Pulled By: siying fbshipit-source-id: 930e5d63fff92dbc193dc154c4c615efbdf06c6a
-
Committed by Stanislav Tkach
Summary: (#2564) Closes https://github.com/facebook/rocksdb/pull/2669 Differential Revision: D5594151 Pulled By: yiwu-arbug fbshipit-source-id: 67ae9446342f3323d6ecad8e811f4158da194270
-
Committed by Daniel Black
Summary: Error was:

```
utilities/persistent_cache/block_cache_tier.cc: In instantiation of 'void rocksdb::Add(std::map<std::__cxx11::basic_string<char>, double>*, const string&, const T&) [with T = double; std::__cxx11::string = std::__cxx11::basic_string<char>]':
utilities/persistent_cache/block_cache_tier.cc:147:40: required from here
utilities/persistent_cache/block_cache_tier.cc:141:23: error: type qualifiers ignored on cast result type [-Werror=ignored-qualifiers]
  stats->insert({key, static_cast<const double>(t)});
```

Fixing like #2562. Closes https://github.com/facebook/rocksdb/pull/2603 Differential Revision: D5600910 Pulled By: yiwu-arbug fbshipit-source-id: 891a5ec7e451d2dec6ad1b6b7fac545657f87363
-