- 13 Oct 2017 (1 commit)

Committed by Adam Retter
Summary: This PR also includes some cleanup, bug fixes, and refactoring of the Java API. However, these are really precursors on the road to CompactionFilterFactory support. Closes https://github.com/facebook/rocksdb/pull/1241 Differential Revision: D6012778 Pulled By: sagar0 fbshipit-source-id: 0774465940ee99001a78906e4fed4ef57068ad5c

- 12 Oct 2017 (3 commits)

Committed by Andrew Kryczka
Summary: The referencing logic is super confusing, so I added a comment at the part that took me the longest to figure out. Closes https://github.com/facebook/rocksdb/pull/2996 Differential Revision: D6034969 Pulled By: ajkr fbshipit-source-id: 9cc2e744c1f79d6d57d378f86ed59238a5f583db

Committed by Yi Wu
Summary: Close #2955 Closes https://github.com/facebook/rocksdb/pull/2960 Differential Revision: D5962872 Pulled By: yiwu-arbug fbshipit-source-id: a6472d5c015bea3dc476c572ff5a5c90259e6059

Committed by Kefu Chai
Summary: It turns out that, with the older GCC shipped with CentOS 7, the SSE4.2 intrinsics are not available even with "target" specified. So we need to pass "-msse42" when checking the compiler's SSE4.2 support and when building crc32c.cc, which uses SSE4.2 intrinsics for CRC32. Signed-off-by: Kefu Chai <tchaikov@gmail.com> Closes https://github.com/facebook/rocksdb/pull/2950 Differential Revision: D6032298 Pulled By: siying fbshipit-source-id: 124c946321043661b3fb0a70b6cdf4c9c5126ab4

- 11 Oct 2017 (3 commits)

Committed by Yi Wu
Summary: Travis doesn't have ccache installed on Mac. Only run the ccache command when it exists. https://docs.travis-ci.com/user/caching/#ccache-cache Closes https://github.com/facebook/rocksdb/pull/2990 Differential Revision: D6028837 Pulled By: yiwu-arbug fbshipit-source-id: 1e2d1c7f37be2c73773258c1fd5f24eebe7a06c6

Committed by Zhongyi Xie
Summary: Right now in `PutCFImpl` we always increment the NUMBER_KEYS_UPDATED counter for both in-place updates and insertions. This PR fixes this by using the correct counter for either case. Closes https://github.com/facebook/rocksdb/pull/2986 Differential Revision: D6016300 Pulled By: miasantreble fbshipit-source-id: 0aed327522e659450d533d1c47d3a9f568fac65d

Committed by Andrew Kryczka
Summary: The file numbers assigned post-repair were sometimes smaller than older files' numbers, due to `LogAndApply` saving the wrong next file number in the manifest.
- Mark the highest file seen during repair as used before `LogAndApply` so the correct next file number will be stored.
- Renamed `MarkFileNumberUsedDuringRecovery` to `MarkFileNumberUsed`, since it is now used during repair in addition to recovery.
- Added `TEST_Current_Next_FileNo` to expose the next file number for the unit test.
Closes https://github.com/facebook/rocksdb/pull/2988 Differential Revision: D6018083 Pulled By: ajkr fbshipit-source-id: 3f25cbf74439cb8f16dd12af90b67f9f9f75e718

- 10 Oct 2017 (4 commits)

Committed by Jay Patel
Summary: Hi, as part of some optimization, we're using multiple DB locations (tmpfs and spindle) to store data and have configured max_bytes_for_level_multiplier_additional. But max_bytes_for_level_multiplier_additional is not used to compute the actual size of the level when picking the DB location. So, even if a DB location does not have space, RocksDB mistakenly puts the level at that location. Can someone please verify the fix? Let me know of any other changes required. Thanks, Jay Closes https://github.com/facebook/rocksdb/pull/2704 Differential Revision: D5992515 Pulled By: ajkr fbshipit-source-id: cbbc6c0e0a7dbdca91c72e0f37b218c4cec57e28

Committed by Zhongyi Xie
Summary: Closes https://github.com/facebook/rocksdb/pull/2976 Differential Revision: D5994759 Pulled By: miasantreble fbshipit-source-id: 985c31dccb957cb970c302f813cd07a1e8cb6438

Committed by Yi Wu
Summary: On iterator create, take a snapshot, create a ReadCallback and pass the ReadCallback to the underlying DBIter to check if key is committed. Closes https://github.com/facebook/rocksdb/pull/2981 Differential Revision: D6001471 Pulled By: yiwu-arbug fbshipit-source-id: 3565c4cdaf25370ba47008b0e0cb65b31dfe79fe

Committed by Sagar Vemuri
Summary: Some WriteOptions defaults were not clearly documented, so I added comments to make the defaults more explicit. Closes https://github.com/facebook/rocksdb/pull/2984 Differential Revision: D6014500 Pulled By: sagar0 fbshipit-source-id: a28078818e335e42b303c1fc6fbfec692ed16c7c

- 07 Oct 2017 (4 commits)

Committed by Yi Wu
Summary: On WritePreparedTxnDB destruction there could be a running compaction/flush holding a SnapshotChecker, which holds a pointer back to the WritePreparedTxnDB. Make sure those jobs finish before destructing WritePreparedTxnDB. This is caught by TransactionTest::SeqAdvanceTest. Closes https://github.com/facebook/rocksdb/pull/2982 Differential Revision: D6002957 Pulled By: yiwu-arbug fbshipit-source-id: f1e70390c9798d1bd7959f5c8e2a1c14100773c3

Committed by Maysam Yabandeh
Summary: Enable WritePrepared policy for existing transaction tests. Closes https://github.com/facebook/rocksdb/pull/2972 Differential Revision: D5993614 Pulled By: maysamyabandeh fbshipit-source-id: d1eb53e2920c4e2a56434bb001231c98426f3509

Committed by Sagar Vemuri
Summary: Enabled the WAL, during GC, for the blob index, which is stored in regular RocksDB. Closes https://github.com/facebook/rocksdb/pull/2975 Differential Revision: D5997384 Pulled By: sagar0 fbshipit-source-id: b76c1487d8b5be0e36c55e8d77ffe3d37d63d85b

Committed by Yi Wu
Summary: Update Compaction/Flush to support WritePreparedTxnDB: add SnapshotChecker, which is a proxy to query WritePreparedTxnDB::IsInSnapshot. Pass SnapshotChecker to DBImpl on WritePreparedTxnDB open. CompactionIterator uses it to check whether a key has been committed and whether it is visible to a snapshot. In CompactionIterator:
- check if the key has been committed. If not, output uncommitted keys as-is.
- use SnapshotChecker to check if the key is visible to a snapshot when needed.
- do not output a key with seq = 0 if the key is not committed.
Closes https://github.com/facebook/rocksdb/pull/2926 Differential Revision: D5902907 Pulled By: yiwu-arbug fbshipit-source-id: 945e037fdf0aa652dc5ba0ad879461040baa0320

- 06 Oct 2017 (3 commits)

Committed by Adrien Schildknecht
Summary: Add a new function in Listener to let the caller know when rocksdb is stalling writes. Closes https://github.com/facebook/rocksdb/pull/2897 Differential Revision: D5860124 Pulled By: schischi fbshipit-source-id: ee791606169aa64f772c86f817cebf02624e05e1

Committed by Yi Wu
Summary: Compaction will output keys with sequence number 0 if they are visible to the earliest snapshot. Adding a test to make sure IsInSnapshot() reports that sequence number 0 is visible to any snapshot. Closes https://github.com/facebook/rocksdb/pull/2974 Differential Revision: D5990665 Pulled By: yiwu-arbug fbshipit-source-id: ef50ebc777ff8ca688771f3ab598c7a609b0b65e

Committed by Andrew Kryczka
Summary: Fixing warnings is important, especially for release code. Closes https://github.com/facebook/rocksdb/pull/2971 Differential Revision: D5980596 Pulled By: ajkr fbshipit-source-id: 04f4ea3fb005dcda33d60342e4361e380bc4dfb1

- 05 Oct 2017 (4 commits)

Committed by Maysam Yabandeh
Summary: With WriteCommitted, when the write batch has duplicate keys, the txn db simply inserts them into the db with different seq numbers and lets the db ignore/merge the duplicate values at read time. With WritePrepared, all the entries of the batch are inserted with the same seq number, which prevents us from benefiting from this simple solution. This patch applies a hackish solution to unblock the end-to-end testing; the hack is to be replaced with a proper solution soon. The patch simply detects the duplicate key insertions and marks the previous one as obsolete. Then, before writing to the db, it rewrites the batch, eliminating the obsolete keys. This incurs a memcpy cost. Furthermore, handling duplicate merges would require doing a FullMerge instead of simply ignoring the previous value, which is not handled by this patch. Closes https://github.com/facebook/rocksdb/pull/2969 Differential Revision: D5976337 Pulled By: maysamyabandeh fbshipit-source-id: 114e65b66f137d8454ff2d1d782b8c05da95f989

Committed by Andrew Kryczka
Summary: Dynamic adjustment of the rate limit according to demand for background I/O. It increases by a factor when the limiter is drained too frequently, and decreases by the same factor when the limiter is not drained frequently enough. The parameters for this behavior are fixed in `GenericRateLimiter::Tune`. Other changes:
- make the rate limiter's `Env*` configurable for testing
- track the number of drain intervals in RateLimiter, so we don't have to rely on stats, which may be shared across DB instances different from the ones that share the RateLimiter.
Closes https://github.com/facebook/rocksdb/pull/2899 Differential Revision: D5858704 Pulled By: ajkr fbshipit-source-id: cc2bac30f85e7f6fd63655d0a6732ef9ed7403b1

Committed by Adam Kupczyk
Summary: This change causes the following change in the result of the test `./db_bench --writes 10000000 --benchmarks="fillrandom" --compression_type none`:
from: fillrandom : 3.177 micros/op 314804 ops/sec; 34.8 MB/s
to: fillrandom : 2.777 micros/op 360087 ops/sec; 39.8 MB/s
Closes https://github.com/facebook/rocksdb/pull/2961 Differential Revision: D5977822 Pulled By: yiwu-arbug fbshipit-source-id: 1ea77707bffa978b1592b0c5d0fe76bfa1930f8d

Committed by Manuel Ung
Summary: Currently, RocksDB does not allow reopening a preexisting DB that had no merge operator defined with a merge operator defined. This means that if a DB ever wants to add a merge operator, there's currently no way to do so. Fix this by adding a new verification type `kByNameAllowFromNull`, which allows old values to be nullptr and new values to be non-nullptr. Closes https://github.com/facebook/rocksdb/pull/2958 Differential Revision: D5961131 Pulled By: lth fbshipit-source-id: 06179bebd0d90db3d43690b5eb7345e2d5bab1eb

- 04 Oct 2017 (6 commits)

Committed by Andrew Kryczka
Summary: Previously the thread pool might be non-empty after joining since concurrent submissions could spawn new threads. This problem didn't affect our background flush/compaction thread pools because the `shutting_down_` flag prevented new jobs from being submitted during/after joining. But I wanted to be able to reuse the `ThreadPool` without such external synchronization. Closes https://github.com/facebook/rocksdb/pull/2953 Differential Revision: D5951920 Pulled By: ajkr fbshipit-source-id: 0efec7d0056d36d1338367da75e8b0c089bbc973

Committed by Andrew Kryczka
Summary: We need to tell the iterator the compaction output file's level so it can apply proper optimizations, like pinning filter and index blocks when user enables `pin_l0_filter_and_index_blocks_in_cache` and the output file's level is zero. Closes https://github.com/facebook/rocksdb/pull/2949 Differential Revision: D5945597 Pulled By: ajkr fbshipit-source-id: 2389decf9026ffaa32d45801a77d002529f64a62

Committed by Maysam Yabandeh
Summary: I cannot locally reproduce the valgrind leak report, but based on my code inspection, not deleting txn1 might be the reason.
```
==197848== 2,990 (544 direct, 2,446 indirect) bytes in 1 blocks are definitely lost in loss record 15 of 16
==197848==    at 0x4C2D06F: operator new(unsigned long) (in /usr/local/fbcode/gcc-5-glibc-2.23/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==197848==    by 0x7D5B31: rocksdb::WritePreparedTxnDB::BeginTransaction(rocksdb::WriteOptions const&, rocksdb::TransactionOptions const&, rocksdb::Transaction*) (pessimistic_transaction_db.cc:173)
==197848==    by 0x7D80C1: rocksdb::PessimisticTransactionDB::Initialize(std::vector<unsigned long, std::allocator<unsigned long> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> > const&) (pessimistic_transaction_db.cc:115)
==197848==    by 0x7DC42F: rocksdb::WritePreparedTxnDB::Initialize(std::vector<unsigned long, std::allocator<unsigned long> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> > const&) (pessimistic_transaction_db.cc:151)
==197848==    by 0x7D8CA0: rocksdb::TransactionDB::WrapDB(rocksdb::DB*, rocksdb::TransactionDBOptions const&, std::vector<unsigned long, std::allocator<unsigned long> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> > const&, rocksdb::TransactionDB**) (pessimistic_transaction_db.cc:275)
==197848==    by 0x7D9F26: rocksdb::TransactionDB::Open(rocksdb::DBOptions const&, rocksdb::TransactionDBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::TransactionDB**) (pessimistic_transaction_db.cc:227)
==197848==    by 0x7DB349: rocksdb::TransactionDB::Open(rocksdb::Options const&, rocksdb::TransactionDBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::TransactionDB**) (pessimistic_transaction_db.cc:198)
==197848==    by 0x52ABD2: rocksdb::TransactionTest::ReOpenNoDelete() (transaction_test.h:87)
==197848==    by 0x51F7B8: rocksdb::WritePreparedTransactionTest_BasicRecoveryTest_Test::TestBody() (write_prepared_transaction_test.cc:843)
==197848==    by 0x857557: HandleSehExceptionsInMethodIfSupported<testing::Test, void> (gtest-all.cc:3824)
==197848==    by 0x857557: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (gtest-all.cc:3860)
==197848==    by 0x84E7EB: testing::Test::Run() [clone .part.485] (gtest-all.cc:3897)
==197848==    by 0x84E9BC: Run (gtest-all.cc:3888)
==197848==    by 0x84E9BC: testing::TestInfo::Run() [clone .part.486] (gtest-all.cc:4072)
```
Closes https://github.com/facebook/rocksdb/pull/2963 Differential Revision: D5968856 Pulled By: maysamyabandeh fbshipit-source-id: 2ac512bbcad37dc8eeeffe4f363978913354180c

Committed by Sagar Vemuri
Summary: Also made the test easier to understand:
- changed the value size to ~1MB.
- switched to NoCompression. We don't need compression in this test for dynamic options anyway.
The test failures started happening starting from #2893. Closes https://github.com/facebook/rocksdb/pull/2957 Differential Revision: D5959392 Pulled By: sagar0 fbshipit-source-id: 2d55641e429246328bc6d10fcb9ef540d6ce07da

Committed by Yi Wu
Summary: Make SnapshotConcurrentAccessTest run at the beginning of the queue. Test Plan: `make all check -j64` on devserver Closes https://github.com/facebook/rocksdb/pull/2962 Differential Revision: D5965871 Pulled By: yiwu-arbug fbshipit-source-id: 8cb5a47c2468be0fbbb929226a143ec5848bfaa9

Committed by Yi Wu
Summary: Add the kTypeBlobIndex value type, which will be used by blob db only, to insert a (key, blob_offset) KV pair. The purpose is to:
1. Make it possible to open an existing rocksdb instance as a blob db. Existing values will be of kTypeIndex type, while values inserted by blob db will be of kTypeBlobIndex.
2. Make rocksdb able to detect whether the db contains values written by blob db, and if so return an error.
3. Make it possible to have blob db optionally store a value in an SST file (with kTypeValue type) or as a blob value (with kTypeBlobIndex type).
The root db (DBImpl) basically pretends kTypeBlobIndex entries are normal values on write. On Get, if is_blob is provided, return whether the value read is of kTypeBlobIndex type, or return a Status::NotSupported() status if is_blob is not provided. On scan, an allow_blob flag is passed, and if the flag is true, return whether the value is of kTypeBlobIndex type via iter->IsBlob(). Changes on the blob db side will be in a separate patch. Closes https://github.com/facebook/rocksdb/pull/2886 Differential Revision: D5838431 Pulled By: yiwu-arbug fbshipit-source-id: 3c5306c62bc13bb11abc03422ec5cbcea1203cca

- 03 Oct 2017 (6 commits)

Committed by Andrew Kryczka
Summary: There's no point populating the block cache during this read. The key we read is guaranteed to be overwritten with a new `kValueType` key immediately afterwards, so can't be accessed again. A user was seeing high turnover of data blocks, at least partially due to this. Closes https://github.com/facebook/rocksdb/pull/2959 Differential Revision: D5961672 Pulled By: ajkr fbshipit-source-id: e7cb27c156c5db3b32af355c780efb99dbdf087c

Committed by Maysam Yabandeh
Summary: Implement the rollback of WritePrepared txns. For each modified value, it reads the value before the txn and writes it back. This cancels out the effect of the transaction. It also removes the rolled-back txn from the prepared heap. Closes https://github.com/facebook/rocksdb/pull/2946 Differential Revision: D5937575 Pulled By: maysamyabandeh fbshipit-source-id: a6d3c47f44db3729f44b287a80f97d08dc4e888d

Committed by Sagar Vemuri
Summary: Now that RocksDB supports conditional merging during point lookups (introduced in #2923), Cassandra value merge operator can be updated to pass in a limit. The limit needs to be passed in from the Cassandra code. Closes https://github.com/facebook/rocksdb/pull/2947 Differential Revision: D5938454 Pulled By: sagar0 fbshipit-source-id: d64a72d53170d8cf202b53bd648475c3952f7d7f

Committed by Aliaksei Sandryhaila
Summary: PR 2893 introduced a variable that is only used in TEST_SYNC_POINT_CALLBACK. When RocksDB is not built in debug mode, this method is not compiled in, and the variable is unused, which triggers a compiler error. This patch reverts the corresponding part of #2893. Closes https://github.com/facebook/rocksdb/pull/2956 Reviewed By: yiwu-arbug Differential Revision: D5955679 Pulled By: asandryh fbshipit-source-id: ac4a8e85b22da7f02efb117cd2e4a6e07ba73390

Committed by Adam Retter
Summary: This enables us to cross-build ppc64le RocksJava binaries with a suitably old version of glibc (2.17) on CentOS 7. Closes https://github.com/facebook/rocksdb/pull/2491 Differential Revision: D5955301 Pulled By: sagar0 fbshipit-source-id: 69ef9746f1dc30ffde4063dc764583d8c7ae937e

Committed by Siying Dong
Summary: Make `ldb dump --count_only` print a histogram of value sizes. Also, fix a bug where `ldb dump --path=<db_path>` doesn't work. Closes https://github.com/facebook/rocksdb/pull/2944 Differential Revision: D5954527 Pulled By: siying fbshipit-source-id: c620a444ec544258b8d113f5f663c375dd53d6be

- 30 Sep 2017 (2 commits)

Committed by Zhongyi Xie
Summary: In SST files, the restart interval helps us search in data blocks. However, some meta blocks are read sequentially, so there's no need for restart points; the restart interval only introduces extra space in the block (https://github.com/facebook/rocksdb/blob/master/table/block_builder.cc#L80). We will see if we can remove this redundant space (maybe by setting the restart interval to infinite). Closes https://github.com/facebook/rocksdb/pull/2940 Differential Revision: D5930139 Pulled By: miasantreble fbshipit-source-id: 92b1b23c15cffa90378343ac846b713623b19c21

Committed by Maysam Yabandeh
Summary: This template reminds the users to use issues only for bug reports. The template is written according to the github guidelines at https://help.github.com/articles/creating-an-issue-template-for-your-repository/ Closes https://github.com/facebook/rocksdb/pull/2948 Differential Revision: D5943558 Pulled By: maysamyabandeh fbshipit-source-id: c83b5d211ea8e334107141967689b2f0c453bbc9

- 29 Sep 2017 (4 commits)

Committed by Maysam Yabandeh
Summary: When using the compressed cache it is possible that the status is ok but the block is not actually added to the block cache. The patch takes this case into account. Closes https://github.com/facebook/rocksdb/pull/2945 Differential Revision: D5937613 Pulled By: maysamyabandeh fbshipit-source-id: 5428cf1115e5046b3d01ab78d26cb181122af4c6

Committed by Andrew Kryczka
Summary: It was broken when `NotifyCollectTableCollectorsOnFinish` was introduced. That function called `Finish` on each of the `TablePropertiesCollector`s, and `CompactOnDeletionCollector::Finish()` was resetting all its internal state. Then, when we checked whether compaction was necessary, the flag had already been cleared. Fixed the above issue by avoiding resetting internal state during `Finish()`. Multiple calls to `Finish()` are allowed, but callers cannot invoke `AddUserKey()` on the collector after any finishes. Closes https://github.com/facebook/rocksdb/pull/2936 Differential Revision: D5918659 Pulled By: ajkr fbshipit-source-id: 4f05e9d80e50ee762ba1e611d8d22620029dca6b

Committed by Maysam Yabandeh
Summary: Recover txns from the WAL. Also added some unit tests. Closes https://github.com/facebook/rocksdb/pull/2901 Differential Revision: D5859596 Pulled By: maysamyabandeh fbshipit-source-id: 6424967b231388093b4effffe0a3b1b7ec8caeb0

Committed by Yu Shu
Summary: The default one will try to install rocksdb:x86-windows, which leads to the build failing at the last step (CMake error: RocksDB only supports x64), because it tries to install a series of x86 packages, and those cannot proceed to the rocksdb:x86-windows build. By using rocksdb:x64-windows, we can make sure to install the x64 version. Tested on Win10 x64. Closes https://github.com/facebook/rocksdb/pull/2941 Differential Revision: D5937139 Pulled By: sagar0 fbshipit-source-id: 15637fe23df59326a0e607bd4d5c48733e20bae3