- 09 Dec 2020, 1 commit
-
-
Committed by Cheng Chang
Summary: Consider the following case:
1. All column families are flushed, so all WALs become obsolete, but no WAL has been removed from disk yet because removal is asynchronous; a VersionEdit is written to MANIFEST indicating that WALs before a certain number are obsolete, say 3.
2. `SyncWAL` is called, so all the on-disk WALs are synced, and with track_and_verify_wal_in_manifest=true they are tracked in MANIFEST, say WALs 1 and 2.
3. The DB crashes.
4. During DB recovery, while replaying MANIFEST, we first see that WALs with number < 3 are obsolete, then we see that WALs 1 and 2 are synced; with the current implementation of `WalSet`, the recovered `WalSet` includes WALs 1 and 2.
5. WALs 1 and 2 are asynchronously deleted from disk, and the WAL verification algorithm fails with `Corruption: missing WAL`.

This case is reproduced in a new unit test, `DBBasicTestTrackWal::DoNotTrackObsoleteWal`. The fix is to maintain the upper bound of the obsolete WAL numbers in `WalSet`: any WAL with a number below the maintained bound is considered obsolete and is not tracked even if it is later synced. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7725 Test Plan: 1. The new unit test `DBBasicTestTrackWal::DoNotTrackObsoleteWal` is added. 2. Run `make crash_test` on devserver. Reviewed By: riversand963 Differential Revision: D25238914 Pulled By: cheng-chang fbshipit-source-id: f5dccd57c3d89f19565ec5731f2d42f06d272b72
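To illustrate the idea, here is a minimal, hypothetical sketch of a `WalSet`-like structure that keeps the obsolete-WAL upper bound; the names and interface are simplified and do not match the real class:

```cpp
#include <cstdint>
#include <map>

// Hypothetical sketch of the WalSet idea described above; the real class has a
// richer interface (WalMetadata, Status returns, etc.).
class WalSetSketch {
 public:
  // Called when a VersionEdit marks all WALs with number < `number` obsolete.
  void DeleteWalsBefore(uint64_t number) {
    if (number > min_wal_number_to_keep_) {
      min_wal_number_to_keep_ = number;
    }
    wals_.erase(wals_.begin(), wals_.lower_bound(number));
  }

  // Called when a VersionEdit records a synced WAL. WALs already known to be
  // obsolete are ignored, which is the fix: replaying "WAL synced" after
  // "WALs before N are obsolete" no longer resurrects the obsolete WAL.
  void AddWal(uint64_t number, uint64_t size_bytes) {
    if (number < min_wal_number_to_keep_) {
      return;  // obsolete, do not track
    }
    wals_[number] = size_bytes;
  }

 private:
  std::map<uint64_t, uint64_t> wals_;    // WAL number -> synced size
  uint64_t min_wal_number_to_keep_ = 0;  // upper bound of obsolete WAL numbers
};
```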
-
- 08 Dec 2020, 8 commits
-
-
Committed by Yanqin Jin
Summary: This PR removes a nested loop inside ProcessManifestWrites. The new implementation has the same behavior as the old code with simpler logic and lower complexity. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7751 Test Plan: make check Run make crash_test on devserver and succeeds 3 times. Reviewed By: ltamasi Differential Revision: D25363526 Pulled By: riversand963 fbshipit-source-id: 27e681949dacd7501a752e5e517b9e85b54ccb2e
-
Committed by Sergei Petrunia
Summary: This PR has two commits: 1. Modify the code to allow different Lock Managers (of any kind) to be used. It is implied that a LockManager uses its own custom LockTracker. 2. Add definitions for Range Locking (the class Endpoint and the GetRangeLock() function). cheng-chang, is this what you had in mind (should the PR have both item 1 and item 2)? Pull Request resolved: https://github.com/facebook/rocksdb/pull/7443 Reviewed By: zhichao-cao Differential Revision: D24123172 Pulled By: cheng-chang fbshipit-source-id: c6548ad6d4cc3c25f68d13b29147bc6fdf357185
-
Committed by mrambacher
Summary: This change eliminates the need for a lot of the PermitUncheckedError calls on return from ErrorHandler methods. The calls are no longer needed as the status is returned as a reference rather than a copy. Additionally, this means that the originating status (recovery_error_, bg_error_) is not cleared implicitly as a result of calling one of these methods. For this class, I do not know if the proper behavior should be to call PermitUncheckedError in the destructor or if the checked state should be cleared when the status is cleared. I did tests both ways. Without the code in the destructor, the status will need to be cleared in at least some of the places where it is set to OK. When running tests, I found no instances where this class was destructed with a non-OK, non-checked Status. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7539 Reviewed By: anand1976 Differential Revision: D25340565 Pulled By: pdillinger fbshipit-source-id: 1730c035c81a475875ea745226112030ec25136c
-
Committed by Levi Tamasi
Summary: `googletest` uses exceptions to communicate assertion failures when `GTEST_THROW_ON_FAILURE` is set, which does not go well with `std::thread`s, since an exception escaping the top-level function of an `std::thread` object or an `std::thread` getting destroyed without having been `join`ed or `detach`ed first results in a call to `std::terminate`. The patch fixes this by moving the `Status` assertions of background operations in `ExternalSstFileTest.PickedLevelBug` to the main thread. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7754 Test Plan: `make check` Reviewed By: riversand963 Differential Revision: D25383808 Pulled By: ltamasi fbshipit-source-id: 32fb2721e5169ec898d218900bc0d83eead45d03
-
Committed by davkor
Summary: This PR adds a fuzzer to the project and infrastructure to integrate RocksDB with OSS-Fuzz. OSS-Fuzz is a service run by Google that performs continuous fuzzing of important open source projects. The LevelDB project is also being fuzzed by OSS-Fuzz (https://github.com/google/oss-fuzz/tree/master/projects/leveldb). Essentially, OSS-Fuzz will perform the fuzzing for you and email you bug reports, coverage reports etc. All we need is a set of email addresses that will receive this information. For cross-referencing, the PR that adds the OSS-Fuzz logic is here: https://github.com/google/oss-fuzz/pull/4642 The `db_fuzzer` of this PR performs stateful fuzzing of RocksDB by calling a sequence of RocksDB's APIs with random input in each fuzz iteration. Each fuzz iteration thus creates a new instance of RocksDB and operates on this given instance. The goal is to test diverse states of RocksDB and ensure no state leads to error conditions, e.g. memory corruption vulnerabilities. The fuzzer is similar (although more complex) to the fuzzer that is currently being used to analyse LevelDB (https://github.com/google/oss-fuzz/blob/master/projects/leveldb/fuzz_db.cc) Pull Request resolved: https://github.com/facebook/rocksdb/pull/7674 Reviewed By: pdillinger Differential Revision: D25238536 Pulled By: cheng-chang fbshipit-source-id: 610331c49a77eb68d3b1d7d5ef1b0ce230ac0630
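For illustration only, here is a stripped-down sketch of what a stateful libFuzzer target over the RocksDB API can look like; the actual `db_fuzzer` added by this PR drives many more operations per iteration:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

#include "rocksdb/db.h"

// Sketch of a stateful fuzz target: each iteration opens a fresh DB and drives
// a couple of APIs with fuzzer-provided bytes. Statuses are deliberately not
// asserted on; the fuzzer is looking for crashes and memory errors.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  if (!rocksdb::DB::Open(options, "/tmp/db_fuzzer_sketch", &db).ok()) {
    return 0;
  }

  // Split the fuzz input into a key half and a value half.
  std::string key(reinterpret_cast<const char*>(data), size / 2);
  std::string value(reinterpret_cast<const char*>(data) + size / 2,
                    size - size / 2);

  db->Put(rocksdb::WriteOptions(), key, value);
  std::string result;
  db->Get(rocksdb::ReadOptions(), key, &result);

  delete db;
  rocksdb::DestroyDB("/tmp/db_fuzzer_sketch", options);
  return 0;
}
```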
-
Committed by Akanksha Mahajan
Summary: Handle misuse of the snprintf return value to avoid out-of-bound reads/writes. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7686 Test Plan: make check -j64 Reviewed By: riversand963 Differential Revision: D25030831 Pulled By: akankshamahajan15 fbshipit-source-id: 1a1d181c067c78b94d720323ae00b79566b57cfa
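The underlying pitfall is generic C/C++: `snprintf` returns the length the formatted output would have had, which can exceed the buffer size, so using the raw return value as an offset can walk past the buffer. A hedged illustration of the safe pattern (not the exact RocksDB call sites):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>

// snprintf returns the length the formatted string *would* have had, which may
// exceed the buffer. Clamping before using it as an offset avoids the
// out-of-bound access described above.
void AppendTwice(char* buf, size_t buf_size, int value) {
  if (buf == nullptr || buf_size == 0) {
    return;
  }
  int ret = snprintf(buf, buf_size, "value=%d;", value);
  if (ret < 0) {
    return;  // encoding error
  }
  // Safe offset: never larger than the space actually consumed in `buf`.
  size_t written = std::min(static_cast<size_t>(ret), buf_size - 1);
  snprintf(buf + written, buf_size - written, "again=%d;", value);
}
```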
-
Committed by Neil Mitchell
Summary: Buck TARGETS files are sometimes parsed with Python, and sometimes with Starlark - this TARGETS file was not Starlark compliant. In Starlark you can't have a top-level if in a TARGETS file, but you can have a ternary `a if b else c`. Therefore I converted TARGETS, and updated the generator for it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7743 Reviewed By: pdillinger Differential Revision: D25342587 Pulled By: ndmitchell fbshipit-source-id: 88cbe8632071a45a3ea8675812967614c62c78d1
-
Committed by Akanksha Mahajan
Summary: Added a fix for the failure of DBTest2.PartitionedIndexUserToInternalKey on ppc64le in Travis. Closes https://github.com/facebook/rocksdb/issues/7746 Pull Request resolved: https://github.com/facebook/rocksdb/pull/7752 Test Plan: Ran the Travis job multiple times and it passed. Will keep watching the Travis job after this patch. Reviewed By: pdillinger Differential Revision: D25373130 Pulled By: akankshamahajan15 fbshipit-source-id: fa0e3f85f75b687415044a506e42cc38ead87975
-
- 06 Dec 2020, 1 commit
-
-
Committed by Yanqin Jin
Summary: Following https://github.com/facebook/rocksdb/issues/7655 and https://github.com/facebook/rocksdb/issues/7657, this PR adds `full_history_ts_low_` to `ColumnFamilyData`. `ColumnFamilyData::full_history_ts_low_` will be used to create `FlushJob` and `CompactionJob`. `ColumnFamilyData::full_history_ts_low` is persisted to the MANIFEST file. An application can only increase its value. Consider the following case:

> The database has a key at ts=950. `full_history_ts_low` is first set to 1000, and then a GC is triggered and cleans up all data older than 1000. If the application sets `full_history_ts_low` to 900 afterwards and tries to read at ts=960, the key at 950 is not seen. From the perspective of the read, the result is hard to reason about. For simplicity, we just do not allow decreasing full_history_ts_low for now.

During recovery, the value of `full_history_ts_low` is restored for each column family if applicable. Note that version edits in the MANIFEST file for the same column family may have `full_history_ts_low` unsorted due to the potential interleaving of `LogAndApply` calls; only the max will be used to restore the state of the column family. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7740 Test Plan: make check Reviewed By: ltamasi Differential Revision: D25296217 Pulled By: riversand963 fbshipit-source-id: 24acda1df8262cd7cfdc6ce7b0ec56438abe242a
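A tiny sketch of the restore rule described above, using plain integer timestamps purely for illustration (the real code works on encoded timestamp slices inside the VersionEdit handling):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustration with plain integer timestamps: during MANIFEST replay, the
// possibly-unsorted full_history_ts_low values recorded for one column family
// collapse to their maximum, which becomes the restored value.
uint64_t RestoreFullHistoryTsLow(const std::vector<uint64_t>& edit_ts_lows) {
  uint64_t ts_low = 0;  // 0 means "never set"
  for (uint64_t edit_ts : edit_ts_lows) {
    ts_low = std::max(ts_low, edit_ts);
  }
  return ts_low;
}
```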
-
- 05 Dec 2020, 5 commits
-
-
Committed by Peter Dillinger
Summary: Add macos+cmake build on CircleCI instead. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7745 Test Plan: CI Reviewed By: riversand963 Differential Revision: D25352864 Pulled By: pdillinger fbshipit-source-id: 6b0a328cbe715bc3b43d70e919a27c834edcf079
-
Committed by Levi Tamasi
Summary: The patch adds iterator support to the integrated BlobDB implementation. Whenever a blob reference is encountered during iteration, the corresponding blob is retrieved by calling `Version::GetBlob`, assuming the `expose_blob_index` (formerly `allow_blob`) flag is *not* set. (Note: the flag is set by the old stacked BlobDB implementation, which has its own blob file handling/blob retrieval logic.) In addition, `DBIter` now uniformly returns `Status::NotSupported` with the error message `"BlobDB does not support merge operator."` when encountering a blob reference while performing a merge (instead of potentially returning a message that implies the database should be opened using the stacked BlobDB's `Open`.) TODO: We can implement support for lazily retrieving the blob value (or in other words, bypassing the retrieval of blob values based on key) by extending the `Iterator` API with a new `PrepareValue` method (similarly to `InternalIterator`, which already supports lazy values). Pull Request resolved: https://github.com/facebook/rocksdb/pull/7731 Test Plan: `make check` Reviewed By: riversand963 Differential Revision: D25256293 Pulled By: ltamasi fbshipit-source-id: c39cd782011495a526cdff99c16f5fca400c4811
-
Committed by Zhichao Cao
Summary: In the current code base, in FlushMemtable, when `(Flush_reason == FlushReason::kErrorRecoveryRetryFlush && (!cfd->mem()->IsEmpty() || !cached_recoverable_state_empty_.load()))`, we assert that cfd->imm()->NumNotFlushed() > 0. However, there are some corner cases that can fail this assert: 1) If there are multiple CFs, some of which have immutable memtables and some of which don't, then in ResumeImpl all CFs will call FlushMemtable, which will hit the assert. 2) A regular flush is scheduled and running while the resume thread is waiting. New KVs are inserted and SchedulePendingFlush is called. The regular flush will continue to call MaybeScheduleFlushAndCompaction until all the immutable memtables are flushed. When the regular flush ends and the auto resume thread starts to schedule new flushes, cfd->imm()->NumNotFlushed() can be 0. This PR removes the assert and adds comments. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7744 Test Plan: make check and pass the stress test Reviewed By: riversand963 Differential Revision: D25340573 Pulled By: zhichao-cao fbshipit-source-id: eac357bdace660247c197f01a9ff6857e3c97672
-
Committed by Adam Retter
Summary: Closes https://github.com/facebook/rocksdb/issues/7710 I tested this on an Apple DTK (Developer Transition Kit) with an Apple A12Z Bionic CPU and macOS Big Sur (11.0.1). Previously the arm64-specific CRC optimisations were limited to Linux... Well, now Apple Silicon is also arm64 but runs macOS ;-) Pull Request resolved: https://github.com/facebook/rocksdb/pull/7714 Reviewed By: ltamasi Differential Revision: D25287349 Pulled By: pdillinger fbshipit-source-id: 639b168bf0ac2652907531e9604936ac4974b577
-
Committed by Zhichao Cao
Summary: In the error_handler auto recovery case, if recovery_in_prog_ is false, the recovery has either finished or failed. In that case, the auto recovery thread should have finished its execution, so recovery_thread_ should be null. However, in some cases it is not null; the caller should not return directly. Instead, it should wait for a while and create a new thread to execute the new recovery. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7700 Test Plan: make check, error_handler_fs_test Reviewed By: anand1976 Differential Revision: D25098233 Pulled By: zhichao-cao fbshipit-source-id: 5a1cba234ca18f6dd5d1be88e02d66e1d5ce931b
-
- 04 Dec 2020, 2 commits
-
-
Committed by Cheng Chang
Summary: When two-phase commit is enabled, if there is prepared data in a WAL, the WAL should be kept, and the minimum log number of such a WAL is written to MANIFEST during flush. In atomic flush, such information was not written to MANIFEST; this PR fixes that. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7570 Test Plan: Added a new unit test `DBAtomicFlushTest.ManualFlushUnder2PC`; this test fails in atomic flush without this PR and succeeds with it. Reviewed By: riversand963 Differential Revision: D24394222 Pulled By: cheng-chang fbshipit-source-id: 60ce74b21b704804943be40c8de01b41269cf116
-
Committed by Ramkumar Vadivelu
Summary: Update check_format_compatible.sh with 6.15.fb Pull Request resolved: https://github.com/facebook/rocksdb/pull/7738 Reviewed By: ajkr Differential Revision: D25307717 Pulled By: ramvadiv fbshipit-source-id: 49f5c6366e8c8a2ade9697975453c9c65e919f1b
-
- 03 Dec 2020, 4 commits
-
-
Committed by Zhichao Cao
Add kManifestWriteNoWAL to BackgroundErrorReason to handle Flush IO Error when WAL is disabled (#7693) Summary: In the current code base, all manifest writes that hit an IO error are assigned the reason BackgroundErrorReason::kManifestWrite, which is mapped to kHardError if the IO error is retryable. However, if the system does not use the WAL, all retryable IO errors should be mapped to kSoftError. This PR handles this special case by adding kManifestWriteNoWAL to BackgroundErrorReason. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7693 Test Plan: make check, added new test cases to error_handler_fs_test Reviewed By: anand1976 Differential Revision: D25066204 Pulled By: zhichao-cao fbshipit-source-id: d59553896c2eac3fb37c05238544d2b265379462
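A hedged sketch of the mapping rule, with names simplified from the real `ErrorHandler` logic (which also consults retryability and recovery state):

```cpp
// Sketch only: the real decision is made in ErrorHandler and also depends on
// whether the IO error is retryable and on auto-recovery settings.
enum class Reason { kManifestWrite, kManifestWriteNoWAL };
enum class Severity { kSoftError, kHardError };

Severity MapRetryableManifestWriteError(Reason reason) {
  // Without a WAL there is no durability promise for the lost writes, so a
  // retryable MANIFEST write failure can be downgraded to a soft error;
  // otherwise it stays a hard error.
  return reason == Reason::kManifestWriteNoWAL ? Severity::kSoftError
                                               : Severity::kHardError;
}
```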
-
Committed by Peter Dillinger
Summary: The minimum rate check in RateLimiterTest.Rate can fail in Facebook's CI system Sandcastle, presumably due to heavily loaded machines. This change disables the minimum rate check for Sandcastle runs, and cleans up the code disabling it on other CI environments. (The amount of conditionally compiled code shall be minimized.) Pull Request resolved: https://github.com/facebook/rocksdb/pull/7728 Test Plan: try new test with and without setting envvar SANDCASTLE=1 Reviewed By: ltamasi Differential Revision: D25247642 Pulled By: pdillinger fbshipit-source-id: d786233af37af9a874adbb3a9e2707ec52c27a5a
-
Committed by Jay Zhuang
Summary: Add timestamp to the `CompactRange()` and `GetApproximateSizes` range keys if needed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7684 Test Plan: make check Reviewed By: riversand963 Differential Revision: D25015421 Pulled By: jay-zhuang fbshipit-source-id: 51ca0756087eb053a3b11801e5c7ce1c6e2d38a9
-
Committed by Yanqin Jin
Summary: https://github.com/facebook/rocksdb/issues/7340 reports and reproduces an assertion failure caused by a combination of the following:
- Atomic flush is disabled.
- A column family can appear multiple times in the flush queue at the same time. This behavior was introduced in release 5.17.

Consequently, it is possible that two flushes race with each other: one bg flush thread flushes all memtables, and the other thread calls `FlushMemTableToOutputFile()` afterwards and hits the assertion error below.

```
assert(cfd->imm()->NumNotFlushed() != 0);
assert(cfd->imm()->IsFlushPending());
```

Fix this by reverting the behavior: in the non-atomic-flush case, a column family can appear in the flush queue at most once at the same time. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7362 Test Plan: make check Also ran the stress test successfully 10 times with `make crash_test`. Reviewed By: ajkr Differential Revision: D25172996 Pulled By: riversand963 fbshipit-source-id: f1559b6366cc609e961e3fc83fae548f1fad08ce
-
- 02 Dec 2020, 3 commits
-
-
Committed by Jay Zhuang
Summary: Timestamp should not be included in prefix extractor, as we discussed here: https://github.com/facebook/rocksdb/pull/7589#discussion_r511068586 Pull Request resolved: https://github.com/facebook/rocksdb/pull/7668 Test Plan: added unittest Reviewed By: riversand963 Differential Revision: D24966265 Pulled By: jay-zhuang fbshipit-source-id: 0dae618c333d4b7942a40d556535a1795e060aea
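A minimal sketch of the convention described above, assuming a fixed-width timestamp suffix appended to the user key (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cstddef>

#include "rocksdb/slice.h"

// Hypothetical helper: with user-defined timestamps, the fixed-width timestamp
// suffix is stripped from the user key before the key is handed to the prefix
// extractor, so prefixes never depend on the timestamp.
rocksdb::Slice StripTimestamp(const rocksdb::Slice& user_key, size_t ts_size) {
  assert(user_key.size() >= ts_size);
  return rocksdb::Slice(user_key.data(), user_key.size() - ts_size);
}
```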
-
Committed by Adam Retter
Summary: Closes https://github.com/facebook/rocksdb/issues/7691 The optimised CRC code for PPC64le which was originally imported in https://github.com/facebook/rocksdb/pull/2353 is not compatible with Clang 11. It looks like the code most likely originated from https://github.com/antonblanchard/crc32-vpmsum. The code relied on a GCC header file `ppc-asm.h` which is not available in Clang. To solve this, I have taken the same approach as the upstream project from which the CRC code came https://github.com/antonblanchard/crc32-vpmsum/commit/ffc8018efc1e4f05d22a9fc8dde57109dd09368b#diff-ec3e62c56fbcddeb07230f2a4673c1abd7f0f1cc8e48a2aa560056cfc1b25d60 and simply imported a copy of the GCC header file into our code-base, which will be used when Clang is the compiler on ppc64le. **NOTE**: The new file `util/ppc-asm.h` may have licensing implications which I guess need to be approved by RocksDB/Facebook before this is merged Pull Request resolved: https://github.com/facebook/rocksdb/pull/7713 Reviewed By: jay-zhuang Differential Revision: D25222645 Pulled By: pdillinger fbshipit-source-id: e3fec9136f26ce1eb7a027048bcf77a6cb3c769c
-
Committed by Peter Dillinger
Summary: TSAN reports that our stack trace handler makes unsafe calls during a signal handler. I just tried fixing some of them and I don't think it's fixable unless we can get away from using FILE stdio. Even if we can use lower level functions only, I'm not sure it's fixed. I also tried suppressing the reports with function and file level TSAN suppression, but that doesn't seem to work, perhaps because the violation is reported on the callee, not the caller. So I added a warning to be printed whenever these violations would be reported that they are practically ignorable. Internal ref: T77844138 Pull Request resolved: https://github.com/facebook/rocksdb/pull/7723 Test Plan: run external_sst_file_test with seeded abort(), with TSAN (TSAN warnings + new warning) and without TSAN (no warning, just stack trace). Reviewed By: akankshamahajan15 Differential Revision: D25228011 Pulled By: pdillinger fbshipit-source-id: 3eda1d6e7ca3cdc64076cf99ae954168837d2818
-
- 01 Dec 2020, 2 commits
-
-
Committed by Andrew Kryczka
Summary: WAL may be truncated to an incomplete record due to crash while writing the last record or corruption. In the former case, no hole will be produced since no ACK'd data was lost. In the latter case, a hole could be produced without this PR since we proceeded to recover the next WAL as if nothing happened. This PR changes the record reading code to always report a corruption for incomplete records in `kPointInTimeRecovery` mode, and the upper layer will only ignore them if the next WAL has consecutive seqnum (i.e., we are guaranteed no hole). While this solves the hole problem for the case of incomplete records, the possibility is still there if the WAL is corrupted by truncation to an exact record boundary. This PR also regresses how much data can be recovered when writes are mixed with/without `WriteOptions::disableWAL`, as then we can not distinguish between a seqnum gap caused by corruption and a seqnum gap caused by a `disableWAL` write. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7701 Test Plan: Interestingly there already was a test for this case (`DBWALTestWithParams.kPointInTimeRecovery`); it just had a typo bug in the verification that prevented it from noticing holes in recovery. Reviewed By: anand1976 Differential Revision: D25111765 Pulled By: ajkr fbshipit-source-id: 5e330b13b1ee2b5be096cea9d0ff6075843e57b6
-
Committed by Steve Yen
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/7716 Reviewed By: pdillinger Differential Revision: D25214340 Pulled By: zhichao-cao fbshipit-source-id: 143a8e7d076917e60bbe6993d60ec55f33e2ab56
-
- 24 Nov 2020, 2 commits
-
-
Committed by Levi Tamasi
Summary: The patch adds basic garbage collection support to the integrated BlobDB implementation. Valid blobs residing in the oldest blob files are relocated as they are encountered during compaction. The threshold that determines which blob files qualify is computed based on the configuration option `blob_garbage_collection_age_cutoff`, which was introduced in https://github.com/facebook/rocksdb/issues/7661 . Once a blob is retrieved for the purposes of relocation, it passes through the same logic that extracts large values to blob files in general. This means that if, for instance, the size threshold for key-value separation (`min_blob_size`) got changed or writing blob files got disabled altogether, it is possible for the value to be moved back into the LSM tree. In particular, one way to re-inline all blob values if needed would be to perform a full manual compaction with `enable_blob_files` set to `false`, `enable_blob_garbage_collection` set to `true`, and `blob_garbage_collection_age_cutoff` set to `1.0` (see the sketch after this entry).

Some TODOs that I plan to address in separate PRs:
1) We'll have to measure the amount of new garbage in each blob file and log `BlobFileGarbage` entries as part of the compaction job's `VersionEdit`. (For the time being, blob files are cleaned up solely based on the `oldest_blob_file_number` relationships.)
2) When compression is used for blobs, the compression type hasn't changed, and the blob still qualifies for being written to a blob file, we can simply copy the compressed blob to the new file instead of going through decompression and compression.
3) We need to update the formula for computing write amplification to account for the amount of data read from blob files as part of GC.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7694 Test Plan: `make check` Reviewed By: riversand963 Differential Revision: D25069663 Pulled By: ltamasi fbshipit-source-id: bdfa8feb09afcf5bca3b4eba2ba72ce2f15cd06a
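To make the re-inlining recipe above concrete, here is a hedged sketch of the option changes and manual compaction it implies, assuming these blob options are dynamically changeable via `SetOptions` on the build in question:

```cpp
#include <string>
#include <unordered_map>

#include "rocksdb/db.h"

// Sketch of the "re-inline all blob values" recipe described above: stop
// writing new blob files, let GC relocate everything (age cutoff 1.0), then
// run a full manual compaction so the values end up back in the LSM tree.
rocksdb::Status ReInlineAllBlobs(rocksdb::DB* db,
                                 rocksdb::ColumnFamilyHandle* cf) {
  rocksdb::Status s = db->SetOptions(
      cf, {{"enable_blob_files", "false"},
           {"enable_blob_garbage_collection", "true"},
           {"blob_garbage_collection_age_cutoff", "1.0"}});
  if (!s.ok()) {
    return s;
  }
  rocksdb::CompactRangeOptions cro;
  // nullptr begin/end means the whole key range.
  return db->CompactRange(cro, cf, nullptr, nullptr);
}
```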
-
Committed by Andrew Kryczka
Summary: This PR updates `MemTable::Add()`, `MemTable::Update()`, and `MemTable::UpdateCallback()` to return `Status` objects, and adapts the client code in `MemTableInserter`. The goal is to prepare these functions for key-value checksum, where we want to verify key-value integrity while adding to memtable. After this PR, the memtable mutation functions can report a failed integrity check by returning `Status::Corruption`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7656 Reviewed By: riversand963 Differential Revision: D24900497 Pulled By: ajkr fbshipit-source-id: 1a7e80581e3774676f2bbba2f0a0b04890f40009
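A hedged, simplified illustration of the calling pattern this enables; the interface below is hypothetical, and the real `MemTable::Add` takes additional parameters such as the sequence number and value type:

```cpp
#include "rocksdb/slice.h"
#include "rocksdb/status.h"

// Hypothetical, simplified interface to illustrate the change: mutation
// methods now report failures (e.g. a failed key-value integrity check)
// through Status instead of asserting or returning void.
class MemTableSketch {
 public:
  rocksdb::Status Add(const rocksdb::Slice& key, const rocksdb::Slice& value) {
    // Stand-in for the real insertion; a checksum mismatch would surface here.
    if (key.empty()) {
      return rocksdb::Status::Corruption("key-value integrity check failed");
    }
    return rocksdb::Status::OK();
  }
};

// Caller side (MemTableInserter in the real code) now propagates the Status
// instead of continuing after a failed insert.
rocksdb::Status InsertIntoMemtable(MemTableSketch* mem,
                                   const rocksdb::Slice& key,
                                   const rocksdb::Slice& value) {
  return mem->Add(key, value);
}
```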
-
- 23 Nov 2020, 1 commit
-
-
Committed by Peter Dillinger
Summary: These new unit tests should ensure that we don't accidentally change the interpretation of bits for what I call the Standard128Ribbon filter internally, available publicly as NewExperimentalRibbonFilterPolicy. There is very little intuitive reason for the values we check against in these tests; I just plugged in the right expected values after watching each test fail initially. Most (but not all) of the tests are essentially "whitebox" "round-trip" tests: we create a filter from fixed keys and first compare the checksum of those filter bytes against a saved value. We also run queries against other fixed keys, comparing which ones return false positives against a saved set. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7696 Test Plan: test addition and refactoring only Reviewed By: jay-zhuang Differential Revision: D25082289 Pulled By: pdillinger fbshipit-source-id: b5ca646fdcb5a1c2ad2085eda4a1fd44c4287f67
-
- 21 Nov 2020, 1 commit
-
-
Committed by Yanqin Jin
Summary: Allow corruption_test to run on custom env loaded via `Env::LoadEnv()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7699 Test Plan: ``` make corruption_test ./corruption_test ``` Also run on in-house custom env. Reviewed By: zhichao-cao Differential Revision: D25135525 Pulled By: riversand963 fbshipit-source-id: 7941e7ce342dc88ec2cd63e90f7674a2f57de6b7
-
- 20 Nov 2020, 3 commits
-
-
Committed by anand76
Summary: Fix initialization order of DBOptions and kHostnameForDbHostId by making the initialization of the latter static rather than dynamic. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7702 Reviewed By: ajkr Differential Revision: D25111633 Pulled By: anand1976 fbshipit-source-id: 7afad834a66e40bcd8694a43b40d378695212224
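The underlying hazard is the classic C++ static initialization order problem; a generic sketch (not the exact RocksDB declarations) of the dynamic-vs-static initialization difference and the fix:

```cpp
#include <string>

// Dynamic initialization hazard (sketched, not the exact RocksDB code):
//   const std::string kHostnameForDbHostId = "__hostname__";
// runs a constructor during static initialization, in an unspecified order
// relative to globals in other translation units that read it.
//
// Constant (static) initialization: a constexpr character array is initialized
// before any dynamic initializer runs, so it is always safe to reference.
constexpr char kHostnameForDbHostId[] = "__hostname__";

struct OptionsSketch {
  // Safe regardless of translation-unit initialization order.
  std::string db_host_id = kHostnameForDbHostId;
};

// A global options object like this previously raced with the dynamic
// initializer of the constant; with constexpr there is nothing to race.
OptionsSketch default_options;
```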
-
Committed by Cheng Chang
Summary: Updates the option description and HISTORY. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7689 Test Plan: N/A Reviewed By: zhichao-cao Differential Revision: D25056238 Pulled By: cheng-chang fbshipit-source-id: 6af1ef6f8dcf2173cbc0fccadc0e06cefd92bcae
-
Committed by Dylan Wen
Summary: Hi there, This PR fixes a few typos in comments in `cache/lru_cache.h`. Thanks Pull Request resolved: https://github.com/facebook/rocksdb/pull/7687 Reviewed By: ajkr Differential Revision: D25064674 Pulled By: jay-zhuang fbshipit-source-id: fe633369d5b82c5aac42d4ee8d551b9d657237d1
-
- 19 Nov 2020, 2 commits
-
-
Committed by Cheng Chang
Summary: An empty WAL won't be backed up by the BackupEngine. So if we track empty WALs in MANIFEST, then when restoring from a backup, the restore may report corruption because the empty WAL is missing. The report is technically correct (the WAL exists in the main DB but not in the backup), but missing an empty WAL does not logically break DB consistency. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7697 Test Plan: watch existing tests to pass Reviewed By: pdillinger Differential Revision: D25077194 Pulled By: cheng-chang fbshipit-source-id: 01917b57234b92b6063925f2ee9452c5732bdc03
-
Committed by Cheng Chang
Summary: It's worth mentioning the corner case bug fixed in PR 7621. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7690 Test Plan: N/A Reviewed By: zhichao-cao Differential Revision: D25056678 Pulled By: cheng-chang fbshipit-source-id: 1ab42ec080f3ffe21f5d97acf65ee0af993112ba
-
- 18 Nov 2020, 5 commits
-
-
Committed by Akanksha Mahajan
Summary: Add a cmake-mingw job to the CircleCI build. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7144 Test Plan: watch the CircleCI cmake-mingw build Reviewed By: jay-zhuang Differential Revision: D25039744 Pulled By: akankshamahajan15 fbshipit-source-id: 92584c9d5ad161b93d5e5a1303aac306e7985108
-
Committed by Cheng Chang
Summary: The logic for computing min_log_number_to_keep in atomic flush was incorrect. For example, when all column families are flushed, the min_log_number_to_keep should be the latest new log. But the incorrect logic calls `PrecomputeMinLogNumberToKeepNon2PC` for each column family and returns the minimum of them. However, `PrecomputeMinLogNumberToKeepNon2PC(cf)` assumes column families other than `cf` are flushed, but in case all column families are flushed, this assumption is incorrect. Without this fix, the WAL referenced by the computed min_log_number_to_keep may actually contain no unflushed data, so it might already have been deleted from disk before recovery, and an incorrect `Corruption: missing WAL` error would then be reported. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7660 Test Plan: run `make crash_test_with_atomic_flush` on devserver; added a unit test in `db_flush_test` Reviewed By: riversand963 Differential Revision: D24906265 Pulled By: cheng-chang fbshipit-source-id: 08deda62e71f67f59e3b7925cdd86dd09bd4f430
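A simplified sketch of the corrected intent (hypothetical helper, not the real `PrecomputeMinLogNumberToKeep*` functions): after an atomic flush, the flushed column families' log numbers advance to the new WAL before taking the minimum, so flushing every column family yields the latest log rather than a stale minimum.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified sketch, not the real PrecomputeMinLogNumberToKeep* helpers.
// Each column family tracks the smallest WAL number that may still hold its
// unflushed data. After an atomic flush, every flushed CF's number advances to
// the new WAL, and the WAL to keep is the minimum over the *post-flush*
// numbers, so flushing all CFs yields the latest log.
uint64_t MinLogNumberToKeepAfterAtomicFlush(
    std::vector<uint64_t> cf_log_numbers,       // per-CF current log numbers
    const std::vector<size_t>& flushed_cf_idx,  // CFs in this atomic flush
    uint64_t new_log_number) {                  // WAL created for new writes
  if (cf_log_numbers.empty()) {
    return new_log_number;
  }
  for (size_t idx : flushed_cf_idx) {
    cf_log_numbers[idx] = new_log_number;
  }
  return *std::min_element(cf_log_numbers.begin(), cf_log_numbers.end());
}
```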
-
Committed by Adam Retter
Summary: Expands on https://github.com/facebook/rocksdb/pull/7016 so that when `PORTABLE=1` is set the dependencies for RocksJava static target will also be built with backwards compatibility for MacOS as far back as 10.12 (i.e. 2016). Pull Request resolved: https://github.com/facebook/rocksdb/pull/7683 Reviewed By: ajkr Differential Revision: D25034164 Pulled By: pdillinger fbshipit-source-id: dc9e51828869ed9ec336a8a86683e4d0bfe04f27
-
Committed by Adam Retter
Summary: Closes https://github.com/facebook/rocksdb/issues/7269 I have only tested this on macOS, let's see what CI makes of it for the other platforms... Pull Request resolved: https://github.com/facebook/rocksdb/pull/7624 Reviewed By: ajkr Differential Revision: D24834305 Pulled By: pdillinger fbshipit-source-id: ba818d8424297ccebd18ed854b044764c2dbab5f
-
Committed by Cheng Chang
Summary: This is the initial PR to support adding fuzz tests to RocksDB. It includes the necessary build infrastructure, and includes an example fuzzer. There is also a README serving as the tutorial for how to add more tests. Pull Request resolved: https://github.com/facebook/rocksdb/pull/7685 Test Plan: Manually build and run the fuzz test according to README. Reviewed By: pdillinger Differential Revision: D25013847 Pulled By: cheng-chang fbshipit-source-id: c91e3b337398d7f4d8f769fd5091cd080487b171
-