- 15 Nov, 2016 (2 commits)
-
-
Committed by Andrew Kryczka
Summary: This conditional should only open a new file that's dedicated to range deletions when it's the sole output of the subcompaction. Previously, we created such a file whenever the table builder was nullptr, which would've also been the case whenever the CompactionIterator's final key coincided with the final output table becoming full. Closes https://github.com/facebook/rocksdb/pull/1507 Differential Revision: D4174613 Pulled By: ajkr fbshipit-source-id: 9ffacea
-
Committed by Andrew Kryczka
Summary: This makes it easier to implement future optimizations like range collapsing. Closes https://github.com/facebook/rocksdb/pull/1504 Differential Revision: D4172214 Pulled By: ajkr fbshipit-source-id: ac4942f
-
- 14 Nov, 2016 (3 commits)
-
-
Committed by Joel Marcey
Need to figure out why `relative_url` is still not prepending the right value at seemingly random times.
-
Committed by Yi Wu
Summary: Currently our skip-list has an optimization to speed up sequential inserts from a single stream, by remembering the last insert position. We extend the idea to support sequential inserts from multiple streams, and even tolerate small reordering within each stream. This PR is the interface part, adding the following: - Add `memtable_insert_prefix_extractor` to allow specifying a prefix for each key. - Add an `InsertWithHint()` interface to the memtable, to allow the underlying implementation to return a hint of the insert position, which can later be passed back to optimize inserts. - The memtable will maintain a map from prefix to hints and pass the hint via `InsertWithHint()` if `memtable_insert_prefix_extractor` is non-null. Closes https://github.com/facebook/rocksdb/pull/1419 Differential Revision: D4079367 Pulled By: yiwu-arbug fbshipit-source-id: 3555326
-
Committed by Yi Wu
Summary: Implement an insert hint in the skip-list to hint at the insert position. This optimizes for write workloads with multiple streams of sequential writes, for example a stream of keys a1, a2, a3... alongside another of b1, b2, b3... Each stream is not necessarily strictly sequential and can get reordered a little. The user can specify a prefix extractor, and `SkipListRep` can thus maintain a hint for each stream for fast inserts into the memtable. This is the internal implementation part; see #1419 for the interface part. See inline comments for details. Closes https://github.com/facebook/rocksdb/pull/1449 Differential Revision: D4106781 Pulled By: yiwu-arbug fbshipit-source-id: f4d48c4
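The multi-stream idea above can be sketched with a toy sorted container that keeps a per-prefix insert hint; the class and method names are illustrative, not RocksDB's actual `SkipListRep` internals:

```python
import bisect

class HintedSortedList:
    """Toy stand-in for a skip-list with per-prefix insert hints."""

    def __init__(self, prefix_len=1):
        self.keys = []            # sorted list of keys
        self.prefix_len = prefix_len
        self.hints = {}           # prefix -> index just after the last insert

    def insert(self, key):
        prefix = key[:self.prefix_len]
        hint = self.hints.get(prefix, 0)
        # If the hinted position still brackets the key, skip the full search.
        # Hints go stale when other streams shift positions, so fall back to
        # a normal binary search (a real skip-list would splice nodes instead).
        if (hint <= len(self.keys)
                and (hint == 0 or self.keys[hint - 1] <= key)
                and (hint == len(self.keys) or key <= self.keys[hint])):
            pos = hint
        else:
            pos = bisect.bisect_left(self.keys, key)
        self.keys.insert(pos, key)
        self.hints[prefix] = pos + 1
        return pos
```

Sequential inserts within one stream hit the hint path; interleaved streams only occasionally fall back to the full search, which is the effect the `InsertWithHint()` interface in the companion commit exposes.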
-
- 13 Nov, 2016 (3 commits)
-
-
Committed by Islam AbdelRahman
Summary: If the user did not call SstFileWriter::Finish(), or called Finish() but it failed, we need to abandon the builder to avoid destroying it while it's still open. Closes https://github.com/facebook/rocksdb/pull/1502 Differential Revision: D4171660 Pulled By: IslamAbdelRahman fbshipit-source-id: ab6f434
-
Committed by Lijun Tang
Summary: Closes https://github.com/facebook/rocksdb/pull/1488 Differential Revision: D4157784 Pulled By: siying fbshipit-source-id: f150081
-
Committed by Andrew Kryczka
Summary: Change DumpTable() so we can see the range deletion meta-block. Closes https://github.com/facebook/rocksdb/pull/1505 Differential Revision: D4172227 Pulled By: ajkr fbshipit-source-id: ae35665
-
- 12 Nov, 2016 (1 commit)
-
-
Committed by Maysam Yabandeh
Summary: Currently the compaction stats are printed to stdout. We want to export the compaction stats in a map format so that upper-layer apps (e.g., MySQL) can present the stats in any format required by them. Closes https://github.com/facebook/rocksdb/pull/1477 Differential Revision: D4149836 Pulled By: maysamyabandeh fbshipit-source-id: b3df19f
-
- 11 Nov, 2016 (5 commits)
-
-
Committed by Joel Marcey
-
Committed by Aaron Gao
Summary: This fixes a typo in a previous fix. Closes https://github.com/facebook/rocksdb/pull/1487 Differential Revision: D4157381 Pulled By: lightmark fbshipit-source-id: f079be8
-
Committed by Reid Horuff
Summary: Originally, sequence ids were calculated during recovery based on the first seqid found in the first log recovered. The working seqid was then incremented from that value on every insertion that took place. This was faulty because of the potential for missing log files or inserts that skipped the WAL. The current recovery scheme grabs the sequence number from the batch currently being recovered and increments it using MemTableInserter to track how many actual inserts take place. This works for 2PC batches as well as for scenarios where some logs are missing or some inserts skipped the WAL. Closes https://github.com/facebook/rocksdb/pull/1486 Differential Revision: D4156064 Pulled By: reidHoruff fbshipit-source-id: a6da8d9
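The arithmetic of the new scheme can be sketched as follows; the function name and the batch representation are hypothetical, chosen only to illustrate counting the inserts actually applied per recovered batch:

```python
def recover_next_sequence(batches):
    """Each recovered WAL batch carries its starting sequence number and the
    number of inserts actually applied to the memtable.  The next sequence
    after recovery is max(start + count), which stays correct even when
    earlier logs are missing or some writes skipped the WAL."""
    next_seq = 0
    for start, insert_count in batches:
        next_seq = max(next_seq, start + insert_count)
    return next_seq
```

Deriving the next sequence from each batch rather than from the first seqid of the first log is what makes the scheme robust to gaps.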
-
Committed by Sergey Balabanov
Reviewed By: IslamAbdelRahman Differential Revision: D4114816 fbshipit-source-id: 8082936
-
Committed by Anirban Rahut
Summary: Enhance sst_dump to be able to parse internal keys. Closes https://github.com/facebook/rocksdb/pull/1482 Differential Revision: D4154175 Pulled By: siying fbshipit-source-id: b0e28b1
-
- 10 Nov, 2016 (6 commits)
-
-
Committed by Andrew Kryczka
Summary: Closes https://github.com/facebook/rocksdb/pull/1490 Differential Revision: D4158821 Pulled By: IslamAbdelRahman fbshipit-source-id: 59b73f4
-
Committed by Andrew Kryczka
Summary: This fixes a correctness issue where ranges with same begin key would overwrite each other. This diff uses InternalKey as TombstoneMap's key such that all tombstones have unique keys even when their start keys overlap. We also update TombstoneMap to use an internal key comparator. End-to-end tests pass and are here (https://gist.github.com/ajkr/851ffe4c1b8a15a68d33025be190a7d9) but cannot be included yet since the DeleteRange() API is yet to be checked in. Note both tests failed before this fix. Closes https://github.com/facebook/rocksdb/pull/1484 Differential Revision: D4155248 Pulled By: ajkr fbshipit-source-id: 304b4b9
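The keying change can be illustrated with a toy TombstoneMap; the helper names are hypothetical:

```python
def add_by_user_key(tmap, start, end, seq):
    # Old behavior: keyed by begin key alone, so a second tombstone with the
    # same begin key silently overwrites the first.
    tmap[start] = (start, end, seq)

def add_by_internal_key(tmap, start, end, seq):
    # Fixed behavior: key by (begin key, sequence number) -- effectively an
    # InternalKey -- so tombstones whose begin keys collide stay distinct.
    tmap[(start, seq)] = (start, end, seq)
```

With the internal-key scheme, two deletions starting at the same user key both survive in the map instead of the later one clobbering the earlier one.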
-
Committed by Yueh-Hsuan Chiang
Summary: Fix the following RocksDB Lite build failure in db/c_test.c:
db/c_test.c:1051:3: error: implicit declaration of function 'fprintf' is invalid in C99 [-Werror,-Wimplicit-function-declaration] fprintf(stderr, "SKIPPED\n"); ^
db/c_test.c:1051:3: error: declaration of built-in function 'fprintf' requires inclusion of the header <stdio.h> [-Werror,-Wbuiltin-requires-header]
db/c_test.c:1051:11: error: use of undeclared identifier 'stderr' fprintf(stderr, "SKIPPED\n"); ^
3 errors generated.
Closes https://github.com/facebook/rocksdb/pull/1479 Differential Revision: D4151160 Pulled By: yhchiang fbshipit-source-id: a471a30
-
Committed by Reid Horuff
Summary: Copied from https://github.com/mdlugajczyk/rocksdb/commit/5ebfd2623a01e69a4cbeae3ed2b788f2a84056ad Opening an existing RocksDB attempts recovery from log files, which used the wrong sequence number to create the memtable. This is a regression introduced in change a4003363. This change includes a test demonstrating the problem; without the fix the test fails with "Operation failed. Try again.: Transaction could not check for conflicts for operation at SequenceNumber 1 as the MemTable only contains changes newer than SequenceNumber 2. Increasing the value of the max_write_buffer_number_to_maintain option could reduce the frequency of this error" This change is a joint effort by Peter 'Stig' Edwards (thatsafunnyname) and me. Closes https://github.com/facebook/rocksdb/pull/1458 Differential Revision: D4143791 Pulled By: reidHoruff fbshipit-source-id: 5a25033
-
Committed by Peter (Stig) Edwards
Summary: Use 16384 as the e.g. value for ldb's --compression_max_dict_bytes option. I think 14 was copied and pasted from the options in the lines above. Closes https://github.com/facebook/rocksdb/pull/1483 Differential Revision: D4154393 Pulled By: siying fbshipit-source-id: ef53a69
-
Committed by Islam AbdelRahman
Summary: A deadlock is possible if the following happens: (1) the writer thread is stopped because it's waiting for compaction to finish; (2) compaction is waiting for current IngestExternalFile() calls to finish; (3) IngestExternalFile() is waiting to acquire the writer thread; (4) the writer thread is held by stopped writes that are waiting for compactions to finish. This patch fixes the issue by not incrementing num_running_ingest_file_ except when we acquire the writer thread. This patch includes a unit test to reproduce the described scenario. Closes https://github.com/facebook/rocksdb/pull/1480 Differential Revision: D4151646 Pulled By: IslamAbdelRahman fbshipit-source-id: 09b39db
-
- 09 Nov, 2016 (5 commits)
-
-
Committed by Joel Marcey
Trying to remove the baseurl term.
-
Committed by Islam AbdelRahman
Summary: In ForwardIterator::SeekInternal(), we may end up passing an empty Slice representing an internal key to InternalKeyComparator::Compare, and when we try to extract the user key from this empty Slice, we will create a slice with size = 0 - 8 (which underflows and causes us to read invalid memory as well). Scenarios to reproduce these issues are in the unit tests. Closes https://github.com/facebook/rocksdb/pull/1467 Differential Revision: D4136660 Pulled By: lightmark fbshipit-source-id: 151e128
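The bug class here is unsigned underflow: internal keys carry an 8-byte (sequence, type) suffix, and subtracting 8 from an empty slice's size wraps around in size_t arithmetic. A simulation, assuming a 64-bit `size_t`:

```python
def user_key_size(internal_key_size):
    """Simulate `internal_key.size() - 8` in unsigned 64-bit arithmetic.
    For an empty slice this wraps to 2**64 - 8, a huge bogus length that
    leads to reading invalid memory."""
    return (internal_key_size - 8) % (2 ** 64)
```

This is why the empty Slice must be rejected before the user key is extracted, not after.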
-
Committed by Aaron Gao
Summary: Fix lint errors about tabs and duplicate includes. Closes https://github.com/facebook/rocksdb/pull/1476 Differential Revision: D4149646 Pulled By: lightmark fbshipit-source-id: 2e0a632
-
Committed by Karthik
Summary: The general convention in RocksDB is to use GFLAGS instead of google. Fixing the anomaly. Closes https://github.com/facebook/rocksdb/pull/1470 Differential Revision: D4149213 Pulled By: kradhakrishnan fbshipit-source-id: 2dafa53
-
Committed by Jay Lee
Summary: Support seek_for_prev in the C API. Closes https://github.com/facebook/rocksdb/pull/1457 Differential Revision: D4135360 Pulled By: lightmark fbshipit-source-id: 61256b0
-
- 08 Nov, 2016 (6 commits)
-
-
Committed by Joel Marcey
-
Committed by Joel Marcey
-
Committed by Joel Marcey
-
Committed by Joel Marcey
-
Committed by Joel Marcey
-
Committed by Joel Marcey
Key change is using the new `absolute_url` and `relative_url` filters http://jekyllrb.com/news/2016/10/06/jekyll-3-3-is-here/ https://github.com/blog/2277-what-s-new-in-github-pages-with-jekyll-3-3
-
- 06 Nov, 2016 (2 commits)
-
-
Committed by Adam Retter
Summary: Needed for working with `get` after `merge` on a WBWI. Closes https://github.com/facebook/rocksdb/pull/1093 Differential Revision: D4137978 Pulled By: yhchiang fbshipit-source-id: e18d50d
-
Committed by Andrew Kryczka
Summary: This handles two issues: (1) the range deletion iterator sometimes outlives the table reader that created it, in which case the block must not be destroyed during table reader destruction; and (2) we prefer to read these range tombstone meta-blocks from file fewer times. - Extracted cache-populating logic from NewDataBlockIterator() into a separate function: MaybeLoadDataBlockToCache() - Use MaybeLoadDataBlockToCache() to load the range deletion meta-block and pin it through the reader's lifetime. This code reuse works since the range deletion meta-block has the same format as data blocks. - Use NewDataBlockIterator() to create range deletion iterators, which uses the block cache if enabled, otherwise reads the block from file. Either way, the underlying block won't disappear until after the iterator is destroyed. Closes https://github.com/facebook/rocksdb/pull/1459 Differential Revision: D4123175 Pulled By: ajkr fbshipit-source-id: 8f64281
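The lifetime fix can be sketched with a toy cache where each iterator keeps its own reference to the block, so the block survives the reader that created it and the file is read only once; the class names are illustrative, not RocksDB's:

```python
class BlockCache:
    """Toy block cache keyed by block name."""

    def __init__(self):
        self._blocks = {}

    def get_or_load(self, key, load_fn):
        if key not in self._blocks:
            self._blocks[key] = load_fn()  # hit the "file" only on a miss
        return self._blocks[key]


class TableReader:
    def __init__(self, cache, name, tombstones):
        self._cache = cache
        self._name = name
        self._tombstones = tombstones
        self.file_reads = 0

    def _load_meta_block(self):
        self.file_reads += 1
        return list(self._tombstones)

    def new_range_del_iter(self):
        # The returned reference pins the block: it stays alive even after
        # this reader is destroyed (Python references stand in for RocksDB's
        # cache-handle pinning).
        return self._cache.get_or_load(self._name, self._load_meta_block)
```

Creating several iterators triggers a single load, and the block remains usable after the reader is gone, which is the pair of properties the commit is after.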
-
- 05 Nov, 2016 (2 commits)
-
-
Committed by Andrew Kryczka
Summary: Note: reviewed in https://reviews.facebook.net/D65115 - DBIter maintains a range tombstone accumulator. We don't clean up obsolete tombstones yet, so if the user seeks back and forth, the same tombstones will be added to the accumulator multiple times. - DBImpl::NewInternalIterator() (used to make DBIter's underlying iterator) adds memtable/L0 range tombstones; L1+ range tombstones are added on-demand during NewSecondaryIterator() (see D62205) - DBIter uses ShouldDelete() when advancing to check whether keys are covered by range tombstones Closes https://github.com/facebook/rocksdb/pull/1464 Differential Revision: D4131753 Pulled By: ajkr fbshipit-source-id: be86559
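The accumulator/ShouldDelete interaction can be sketched as follows; this is an illustrative model, not the actual RangeDelAggregator class:

```python
class RangeDelAccumulator:
    """Collects range tombstones as the iterator touches memtables and
    levels; should_delete() reports whether a (key, sequence) pair is
    shadowed.  Obsolete tombstones are never pruned, mirroring the note
    above, so seeking back and forth just re-adds the same entries."""

    def __init__(self):
        self._tombstones = []

    def add_tombstones(self, tombstones):
        self._tombstones.extend(tombstones)  # no deduplication yet

    def should_delete(self, user_key, seq):
        # A key is covered if some tombstone spans it with a newer sequence.
        return any(start <= user_key < end and tseq > seq
                   for start, end, tseq in self._tombstones)
```

A key survives only if no accumulated tombstone both spans it and postdates it.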
-
Committed by Islam AbdelRahman
Summary: We can support SST files >2GB but we don't support blocks >2GB Closes https://github.com/facebook/rocksdb/pull/1465 Differential Revision: D4132140 Pulled By: yiwu-arbug fbshipit-source-id: 63bf12d
-
- 04 Nov, 2016 (3 commits)
-
-
Committed by Andrew Kryczka
Summary: During Get()/MultiGet(), build up a RangeDelAggregator with range tombstones as we search through the live memtable, immutable memtables, and SST files. This aggregator is then used by memtable.cc's SaveValue() and GetContext::SaveValue() to check whether keys are covered. Added tests for Get on memtables/files; end-to-end tests are mainly in https://reviews.facebook.net/D64761 Closes https://github.com/facebook/rocksdb/pull/1456 Differential Revision: D4111271 Pulled By: ajkr fbshipit-source-id: 6e388d4
-
Committed by zhangjinpeng1987
Summary: Add C api for RateLimiter. Closes https://github.com/facebook/rocksdb/pull/1455 Differential Revision: D4116362 Pulled By: yiwu-arbug fbshipit-source-id: cb05a8d
-
Committed by Joel Marcey
RocksDB hit the same problem that Nuclide had. https://github.com/facebook/nuclide/issues/789 https://github.com/facebook/nuclide/pull/793
-
- 03 Nov, 2016 (1 commit)
-
-
Committed by Yi Wu
Summary: Add avoid_flush_during_shutdown DB option. Closes https://github.com/facebook/rocksdb/pull/1451 Differential Revision: D4108643 Pulled By: yiwu-arbug fbshipit-source-id: abdaf4d
-
- 02 Nov, 2016 (1 commit)
-
-
Committed by Benoit Girard
Summary: Closes https://github.com/facebook/rocksdb/pull/1427 Differential Revision: D4094732 Pulled By: yiwu-arbug fbshipit-source-id: b9b79e9
-