1. Nov 21, 2017 (1 commit)
    • Add a ticker stat for number of keys skipped during iteration · d394a6bb
      Committed by anand1976
      Summary:
      This diff adds a new ticker stat, NUMBER_ITER_SKIP, to count the
      number of internal keys skipped during iteration. Keys can be skipped
      due to deletes, or because their sequence number is lower or higher
      than the one requested.
      
      Also fix an issue where, when StatisticsData is naturally aligned on a
      cacheline boundary, the padding becomes a zero-size array, which the
      Windows compiler rejects. In that case, add a full cacheline of padding
      to keep it happy. We cannot conditionally add the padding, as gcc
      doesn't allow using sizeof in preprocessor directives.
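      
      A minimal sketch of the padding logic described above (member names and
      the payload size are illustrative, not RocksDB's actual definition):
      
      ```cpp
      #include <cstddef>
      #include <cstdint>
      
      struct StatisticsData {
        static const size_t kCacheLine = 64;
        static const size_t kTickerCount = 10;  // illustrative payload size
        std::uint64_t tickers[kTickerCount];
      
        // sizeof cannot appear in #if, so the choice is made in a constant
        // expression instead: when the payload already fills whole cachelines,
        // remainder-based padding would be a zero-size array (rejected by the
        // Windows compiler), so pad a full cacheline in that case.
        static const size_t kRemainder =
            sizeof(std::uint64_t) * kTickerCount % kCacheLine;
        char padding[kRemainder == 0 ? kCacheLine : kCacheLine - kRemainder];
      };
      
      static_assert(sizeof(StatisticsData) % StatisticsData::kCacheLine == 0,
                    "size is a whole number of cachelines");
      ```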
      Closes https://github.com/facebook/rocksdb/pull/3177
      
      Differential Revision: D6353897
      
      Pulled By: anand1976
      
      fbshipit-source-id: 441d5a09af9c4e22e7355242dfc0c7b27aa0a6c2
  2. Nov 12, 2017 (1 commit)
  3. Nov 02, 2017 (1 commit)
    • Added support for differential snapshots · 7fe3b328
      Committed by Mikhail Antonov
      Summary:
      The motivation for this PR is to add support for differential (incremental) snapshots to RocksDB: a snapshot of the DB's changes between two points in time. One can think of it as the diff between two sequence numbers, a diff D (an SST file, or just a set of KVs) that can be applied at sequence number S1 to bring the database to its state at sequence number S2.
      
      This feature would be useful for various distributed storage layers built on top of RocksDB, as it should help reduce the resources (time and network bandwidth) needed to recover and rebuild DB instances as replicas in the context of distributed storage.
      
      From the API standpoint, this would look like a client app requesting an iterator between a start seqnum and the current DB state, and reading the "diff".
      
      This is a draft PR for initial review and discussion of the approach; I'm going to rework some parts and keep updating the PR.
      
      For now, what's done here according to initial discussions:
      
      Preserving deletes:
       - We want to be able to optionally preserve recent deletes for some defined period of time, so that if a delete came in recently and might need to be included in the next incremental snapshot it wouldn't get dropped by a compaction. This is done by adding a new param to Options (a preserve-deletes flag) and a new variable to DBImpl where we keep track of the sequence number after which we don't want to drop tombstones, even if they are otherwise eligible for deletion.
       - I also added a new API call for clients to be able to advance this cutoff seqnum after which we drop deletes; I assume it's more flexible to let clients control this, since otherwise we'd need to keep some kind of timestamp <-> seqnum mapping inside the DB, which sounds messy and painful to support. Clients could make use of it by periodically calling GetLatestSequenceNumber(), noting the timestamp, doing some calculation, and figuring out by how much we need to advance the cutoff seqnum.
       - Compaction codepath in compaction_iterator.cc has been modified to avoid dropping tombstones with seqnum > cutoff seqnum.
      
      Iterator changes:
       - a couple of params added to ReadOptions, to optionally allow the client to request internal keys instead of user keys (so that the client can get the latest value of a key, be it a delete marker or a put), as well as a min timestamp and min seqnum.
      
      TableCache changes:
       - I modified the table_cache code to be able to quickly exclude SST files from the iterator heap if creation_time on the file is less than iter_start_ts as passed in ReadOptions. That would help a lot in some DB settings (like reading very recent data only or using FIFO compactions), but not so much for universal compaction with a more or less long iterator time span.
      
      What's left:
      
       - Still looking at how to best plug that inside DBIter codepath. So far it seems that FindNextUserKeyInternal only parses values as UserKeys, and iter->key() call generally returns user key. Can we add new API to DBIter as internal_key(), and modify this internal method to optionally set saved_key_ to point to the full internal key? I don't need to store actual seqnum there, but I do need to store type.
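      
      A hedged sketch of the client flow described above, using the option and
      call names around this change (preserve_deletes, ReadOptions::iter_start_seqnum,
      DB::SetPreserveDeletesSequenceNumber()); since this is a draft PR, treat
      the exact fields and signatures as assumptions. Assumes the DB was opened
      with the new preserve-deletes option enabled:
      
      ```cpp
      #include <memory>
      #include "rocksdb/db.h"
      
      void ReadDiffSince(rocksdb::DB* db, rocksdb::SequenceNumber since) {
        rocksdb::ReadOptions ro;
        ro.iter_start_seqnum = since;  // only entries newer than `since`
        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
        for (it->SeekToFirst(); it->Valid(); it->Next()) {
          // In this mode the iterator surfaces recent deletes as tombstone
          // entries instead of collapsing them away, so the consumer sees
          // the full diff between `since` and the current state.
        }
        // Once the diff is durable downstream, advance the cutoff so
        // compaction may again drop tombstones older than it.
        db->SetPreserveDeletesSequenceNumber(db->GetLatestSequenceNumber());
      }
      ```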
      Closes https://github.com/facebook/rocksdb/pull/2999
      
      Differential Revision: D6175602
      
      Pulled By: mikhail-antonov
      
      fbshipit-source-id: c779a6696ee2d574d86c69cec866a3ae095aa900
  4. Oct 27, 2017 (2 commits)
    • Fix coverity uninitialized fields warnings · 47166bae
      Committed by Prashant D
      Pulled By: ajkr
      
      Differential Revision: D6170448
      
      fbshipit-source-id: 5fd6d1608fc0df27c94d9f5059315ce7f79b8f5c
    • implement lower bound for iterators · 95667383
      Committed by Andrew Kryczka
      Summary:
      - for `SeekToFirst()`, just convert it to a regular `Seek()` if lower bound is specified
      - for operations that iterate backwards over user keys (`SeekForPrev`, `SeekToLast`, `Prev`), change `PrevInternal` to check whether user key went below lower bound every time the user key changes -- same approach we use to ensure we stay within a prefix when `prefix_same_as_start=true`.
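      
      A minimal usage sketch (iterate_lower_bound is the ReadOptions field this
      PR wires up; note the Slice must outlive the iterator):
      
      ```cpp
      #include <memory>
      #include "rocksdb/db.h"
      
      void ScanBounded(rocksdb::DB* db) {
        rocksdb::ReadOptions ro;
        rocksdb::Slice lower("key100");  // must outlive the iterator
        ro.iterate_lower_bound = &lower;
        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
        it->SeekToFirst();  // converted internally to Seek(lower bound)
        for (it->SeekToLast(); it->Valid(); it->Prev()) {
          // PrevInternal stops once the user key drops below the bound
        }
      }
      ```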
      Closes https://github.com/facebook/rocksdb/pull/3074
      
      Differential Revision: D6158654
      
      Pulled By: ajkr
      
      fbshipit-source-id: cb0e3a922e2650d2cd4d1c6e1c0f1e8b729ff518
  5. Oct 26, 2017 (1 commit)
    • Fix tombstone scans in SeekForPrev outside prefix · addfe1ef
      Committed by Islam AbdelRahman
      Summary:
      When doing a Seek() or SeekForPrev() we should stop the moment we see a key with a different prefix than the start key, if ReadOptions::prefix_same_as_start is set to true.
      
      Right now we don't stop if we encounter a tombstone outside the prefix while executing SeekForPrev()
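      
      For context, a sketch of the read pattern the fix protects (assumes the
      column family was opened with a prefix extractor; key format illustrative):
      
      ```cpp
      #include <memory>
      #include "rocksdb/db.h"
      
      void PrefixScanBack(rocksdb::DB* db) {
        rocksdb::ReadOptions ro;
        ro.prefix_same_as_start = true;  // stay within the seek key's prefix
        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
        // After this fix, SeekForPrev() no longer walks tombstones that lie
        // outside the prefix of the target key.
        it->SeekForPrev("user1|item9");
        for (; it->Valid(); it->Prev()) {
          // keys here all share the prefix of "user1|item9"
        }
      }
      ```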
      Closes https://github.com/facebook/rocksdb/pull/3067
      
      Differential Revision: D6149638
      
      Pulled By: IslamAbdelRahman
      
      fbshipit-source-id: 7f659862d2bf552d3c9104a360c79439ceba2f18
  6. Oct 20, 2017 (1 commit)
  7. Oct 10, 2017 (1 commit)
    • WritePrepared Txn: Iterator · 8c392a31
      Committed by Yi Wu
      Summary:
      On iterator creation, take a snapshot, create a ReadCallback, and pass the ReadCallback to the underlying DBIter to check whether a key is committed.
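      
      An illustrative sketch of the callback idea (a hypothetical stand-in, not
      RocksDB's actual internal interface): the wrapped DBIter asks, per
      internal key, whether the entry's sequence number is committed with
      respect to the snapshot taken at iterator creation:
      
      ```cpp
      #include <cstdint>
      #include <functional>
      
      struct ReadCallbackSketch {
        std::uint64_t snapshot_seq;  // taken when the iterator was created
        std::function<bool(std::uint64_t, std::uint64_t)> is_committed;
      
        // The wrapped DBIter skips entries for which this returns false.
        bool Visible(std::uint64_t entry_seq) const {
          return is_committed(entry_seq, snapshot_seq);
        }
      };
      ```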
      Closes https://github.com/facebook/rocksdb/pull/2981
      
      Differential Revision: D6001471
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 3565c4cdaf25370ba47008b0e0cb65b31dfe79fe
  8. Oct 04, 2017 (1 commit)
    • Add ValueType::kTypeBlobIndex · d1cab2b6
      Committed by Yi Wu
      Summary:
      Add kTypeBlobIndex value type, which will be used by blob db only, to insert a (key, blob_offset) KV pair. The purpose is to
      1. Make it possible to open an existing rocksdb instance as blob db. Existing values will be of kTypeValue type, while values inserted by blob db will be of kTypeBlobIndex.
      2. Make rocksdb able to detect whether the db contains values written by blob db and, if so, return an error.
      3. Make it possible to have blob db optionally store value in SST file (with kTypeValue type) or as a blob value (with kTypeBlobIndex type).
      
      The root db (DBImpl) basically treats kTypeBlobIndex as a normal value on write. On Get, if is_blob is provided, it returns whether the value read is of kTypeBlobIndex type; if is_blob is not provided, it returns Status::NotSupported(). On scan, an allow_blob flag is passed, and if the flag is true, whether the value is of kTypeBlobIndex type is returned via iter->IsBlob().
      
      Changes on blob db side will be in a separate patch.
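      
      A hedged sketch of the scan-side behavior described; IsBlob() lives on
      RocksDB's internal iterator wrapper rather than the public Iterator
      interface, so the interface below is a hypothetical stand-in:
      
      ```cpp
      struct BlobAwareIterator {
        virtual ~BlobAwareIterator() {}
        virtual void SeekToFirst() = 0;
        virtual bool Valid() const = 0;
        virtual void Next() = 0;
        virtual bool IsBlob() const = 0;  // true iff entry is kTypeBlobIndex
      };
      
      void Scan(BlobAwareIterator* it) {
        for (it->SeekToFirst(); it->Valid(); it->Next()) {
          if (it->IsBlob()) {
            // entry is a blob index: a reference into a blob file, to be
            // resolved by the blob db layer (follow-up patch)
          } else {
            // entry is a plain value (kTypeValue) stored in the SST
          }
        }
      }
      ```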
      Closes https://github.com/facebook/rocksdb/pull/2886
      
      Differential Revision: D5838431
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 3c5306c62bc13bb11abc03422ec5cbcea1203cca
  9. Sep 15, 2017 (1 commit)
    • Three code-level optimizations to Iterator::Next() · edcbb369
      Committed by Siying Dong
      Summary:
      Three small optimizations:
      (1) iter_->IsKeyPinned() shouldn't be called if read_options.pin_data is not true. This call may cascade all the way down the iterator tree.
      (2) Reuse the iterator key object in DBIter::FindNextUserEntryInternal(). The constructor of the class has some overhead.
      (3) Move the switching direction logic in MergingIterator::Next() to a separate function.
      
      These three in total improve readseq performance by about 3% in my benchmark setting.
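      
      A sketch of optimization (1): short-circuit so the virtual IsKeyPinned()
      call chain down the iterator tree is only entered when pinning was
      requested. Class and member names here are illustrative:
      
      ```cpp
      struct InnerIter {
        bool IsKeyPinned() const { return true; }  // stand-in for the real call
      };
      
      struct DBIterSketch {
        bool KeyIsPinned() const {
          // `pin_data_ &&` avoids the inner call entirely in the common case
          return pin_data_ && inner_.IsKeyPinned();
        }
        InnerIter inner_;
        bool pin_data_ = false;
      };
      ```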
      Closes https://github.com/facebook/rocksdb/pull/2880
      
      Differential Revision: D5829252
      
      Pulled By: siying
      
      fbshipit-source-id: 991aea10c6d6c3b43769cb4db168db62954ad1e3
  10. Sep 12, 2017 (1 commit)
    • Make DBIter class final · 2dd22e54
      Committed by Siying Dong
      Summary:
      DBIter is referenced in ArenaWrappedDBIter, which is a simple wrapper. If DBIter is final, some virtual function calls can be avoided. Some functions can even be inlined, like DBIter.value() into ArenaWrappedDBIter.value() and DBIter.key() into ArenaWrappedDBIter.key(). The performance gain is hard to measure; I just ran the memory-only benchmark for readseq and saw it didn't regress. There shouldn't be any harm in doing it; it just gives the compiler more choices.
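      
      An illustrative example of the effect (not RocksDB's declarations): once
      the concrete class is final, calls through a pointer or reference of that
      exact type can be devirtualized and inlined:
      
      ```cpp
      struct IterBase {
        virtual ~IterBase() {}
        virtual int key() const = 0;
      };
      
      struct FinalIter final : IterBase {
        int key() const override { return k_; }
        int k_ = 0;
      };
      
      // The static type is final, so the compiler can resolve key() at
      // compile time and inline it instead of dispatching through the vtable.
      int WrapperKey(const FinalIter& it) { return it.key(); }
      ```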
      Closes https://github.com/facebook/rocksdb/pull/2859
      
      Differential Revision: D5799888
      
      Pulled By: siying
      
      fbshipit-source-id: 829788f91310c40282dcfb7e412e6ef489931143
  11. Aug 19, 2017 (1 commit)
    • perf_context measure user bytes read · ed0a4c93
      Committed by Andrew Kryczka
      Summary:
      With this PR, we can measure read-amp for queries where perf_context is enabled as follows:
      
      ```
      SetPerfLevel(kEnableCount);
      Get(1, "foo");
      double read_amp = static_cast<double>(get_perf_context()->block_read_byte) /
                        get_perf_context()->get_read_bytes;  // cast before dividing to avoid integer truncation
      SetPerfLevel(kDisable);
      ```
      
      Our internal infra enables perf_context for a sampling of queries. So we'll be able to compute the read-amp for the sample set, which can give us a good estimate of read-amp.
      Closes https://github.com/facebook/rocksdb/pull/2749
      
      Differential Revision: D5647240
      
      Pulled By: ajkr
      
      fbshipit-source-id: ad73550b06990cf040cc4528fa885360f308ec12
  12. Jul 25, 2017 (1 commit)
    • Add Iterator::Refresh() · e67b35c0
      Committed by Siying Dong
      Summary:
      Add and implement Iterator::Refresh(). When this function is called, if the super version doesn't change, update the iterator's sequence number to the latest one and invalidate the iterator; if the super version changed, recreate the whole iterator. This can help users reuse the iterator more easily.
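      
      A minimal usage sketch:
      
      ```cpp
      #include <memory>
      #include "rocksdb/db.h"
      
      void ReuseIterator(rocksdb::DB* db) {
        std::unique_ptr<rocksdb::Iterator> it(
            db->NewIterator(rocksdb::ReadOptions()));
        // ... use the iterator while other threads keep writing ...
        rocksdb::Status s = it->Refresh();  // advance to the latest seqnum
        if (s.ok()) {
          for (it->SeekToFirst(); it->Valid(); it->Next()) {
            // now sees data written after the iterator was first created
          }
        }
      }
      ```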
      Closes https://github.com/facebook/rocksdb/pull/2621
      
      Differential Revision: D5464500
      
      Pulled By: siying
      
      fbshipit-source-id: f548bd35e85c1efca2ea69273802f6704eba6ba9
  13. Jul 16, 2017 (1 commit)
  14. Jun 29, 2017 (1 commit)
    • Make "make analyze" happy · 18c63af6
      Committed by Siying Dong
      Summary:
      "make analyze" is reporting some errors. It's complicated to look but it seems to me that they are all false positive. Anyway, I think cleaning them up is a good idea. Some of the changes are hacky but I don't know a better way.
      Closes https://github.com/facebook/rocksdb/pull/2508
      
      Differential Revision: D5341710
      
      Pulled By: siying
      
      fbshipit-source-id: 6070e430e0e41a080ef441e05e8ec827d45efab6
  15. May 31, 2017 (1 commit)
  16. May 24, 2017 (1 commit)
    • Fix errors in clang-analyzer builds · 7d8207f1
      Committed by Sagar Vemuri
      Summary:
      Fix build error in db_iter.cc when running clang-analyzer.
      ```
        CC       db/db_iter.o
      db/db_iter.cc:938:21: error: no matching constructor for initialization of 'rocksdb::ParsedInternalKey'
        ParsedInternalKey ikey(Slice(), 0, 0);
                          ^    ~~~~~~~~~~~~~
      ./db/dbformat.h:84:3: note: candidate constructor not viable: no known conversion from 'int' to 'rocksdb::ValueType' for 3rd argument
        ParsedInternalKey(const Slice& u, const SequenceNumber& seq, ValueType t)
        ^
      ./db/dbformat.h:78:8: note: candidate constructor (the implicit copy constructor) not viable: requires 1 argument, but 3 were provided
      struct ParsedInternalKey {
             ^
      ./db/dbformat.h:78:8: note: candidate constructor (the implicit move constructor) not viable: requires 1 argument, but 3 were provided
      ./db/dbformat.h:83:3: note: candidate constructor not viable: requires 0 arguments, but 3 were provided
        ParsedInternalKey() { }  // Intentionally left uninitialized (for speed)
        ^
      1 error generated.
      ```
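      
      The error comes from passing the int literal 0 where a ValueType is
      expected; the shape of the fix is to pass an actual enumerator (which
      enumerator the PR chose is an assumption here):
      
      ```cpp
      ParsedInternalKey ikey(Slice(), 0, kTypeValue);  // ValueType, not int
      ```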
      Closes https://github.com/facebook/rocksdb/pull/2354
      
      Differential Revision: D5115751
      
      Pulled By: sagar0
      
      fbshipit-source-id: b0e386d4e935e4725b07761c3ca5f7a8cbde3692
  17. May 20, 2017 (1 commit)
    • Suppress clang-analyzer false positive · d746aead
      Committed by Yi Wu
      Summary:
      Fixing two types of clang-analyzer false positives:
      * db is deleted and then reopened, and clang-analyzer thinks we are reusing the pointer after it has been deleted. Add asserts to hint to clang-analyzer that the pointer has been recreated.
      * ParsedInternalKey is (intentionally) uninitialized. Initialize the struct only when clang-analyzer is running.
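      
      A sketch of the two hinting patterns described (test-style pseudo-code;
      names and exact placement are illustrative, though __clang_analyzer__ is
      the macro the analyzer really defines):
      
      ```cpp
      // 1) After delete + reopen, assert the pointer was recreated so the
      //    analyzer stops tracking the deleted object.
      delete db;
      db = nullptr;
      ASSERT_OK(rocksdb::DB::Open(options, dbname, &db));
      assert(db != nullptr);
      
      // 2) Give ParsedInternalKey defined contents only for the analyzer
      //    build, keeping the intentionally-uninitialized fast path otherwise.
      #ifdef __clang_analyzer__
      ParsedInternalKey ikey(Slice(), 0, kTypeValue);
      #else
      ParsedInternalKey ikey;
      #endif
      ```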
      Closes https://github.com/facebook/rocksdb/pull/2334
      
      Differential Revision: D5093801
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: f51355382098eb3da5ab9f64e094c6d03e6bdf7d
  18. Apr 28, 2017 (1 commit)
  19. Apr 11, 2017 (1 commit)
    • Reduce the number of params needed to construct DBIter · 7124268a
      Committed by Sagar Vemuri
      Summary:
      DBIter, and in turn NewDBIterator and NewArenaWrappedDBIterator, take a bunch of params. They can be reduced by passing in ReadOptions directly instead of passing in every param separately. It also seems much cleaner, as a bunch of the params towards the end are optional.
      
      (Recently I introduced max_skippable_internal_keys, which added one more to the already huge count).
      
      Idea courtesy IslamAbdelRahman
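      
      A before/after sketch of the shape of the change (signatures illustrative,
      not the exact ones in the codebase):
      
      ```cpp
      // before: each knob travels as its own trailing parameter
      Iterator* NewDBIteratorBefore(Env* env, const ImmutableCFOptions& ioptions,
                                    const Comparator* user_key_comparator,
                                    InternalIterator* internal_iter,
                                    SequenceNumber sequence,
                                    uint64_t max_sequential_skip_in_iterations,
                                    uint64_t max_skippable_internal_keys,
                                    bool pin_data, bool total_order_seek);
      
      // after: the optional knobs ride along inside ReadOptions
      Iterator* NewDBIteratorAfter(Env* env, const ReadOptions& read_options,
                                   const ImmutableCFOptions& ioptions,
                                   const Comparator* user_key_comparator,
                                   InternalIterator* internal_iter,
                                   SequenceNumber sequence);
      ```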
      Closes https://github.com/facebook/rocksdb/pull/2116
      
      Differential Revision: D4857128
      
      Pulled By: sagar0
      
      fbshipit-source-id: 7d239df094b94bd9ea79d145cdf825478ac037a8
  20. Apr 06, 2017 (1 commit)
  21. Apr 05, 2017 (1 commit)
  22. Apr 04, 2017 (1 commit)
  23. Mar 31, 2017 (1 commit)
    • Option to fail a request as incomplete when skipping too many internal keys · c6d04f2e
      Committed by Sagar Vemuri
      Summary:
      Operations like Seek/Next/Prev sometimes take too long to complete when there are many internal keys to be skipped. Adding an option, max_skippable_internal_keys, which sets a threshold on the maximum number of keys that can be skipped, helps address cases where it is much better to fail a request as incomplete than to wait a considerable time for it to complete.
      
      This feature -- to fail an iterator seek request as incomplete, is disabled by default when max_skippable_internal_keys = 0. It is enabled only when max_skippable_internal_keys > 0.
      
      This feature is based on the discussion mentioned in the PR https://github.com/facebook/rocksdb/pull/1084.
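      
      A minimal usage sketch of the option:
      
      ```cpp
      #include <memory>
      #include "rocksdb/db.h"
      
      void BoundedSeek(rocksdb::DB* db) {
        rocksdb::ReadOptions ro;
        ro.max_skippable_internal_keys = 1000;  // 0 (default) disables the limit
        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
        it->Seek("some_key");
        if (!it->Valid() && it->status().IsIncomplete()) {
          // too many internal keys were skipped; the caller decides whether
          // to retry, fall back, or surface the error
        }
      }
      ```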
      Closes https://github.com/facebook/rocksdb/pull/2000
      
      Differential Revision: D4753223
      
      Pulled By: sagar0
      
      fbshipit-source-id: 1c973f7
  24. Mar 16, 2017 (1 commit)
    • Add macros to include file name and line number during Logging · e1916368
      Committed by Islam AbdelRahman
      Summary:
      Current logging looks like this:
      ```
      2017/03/14-14:20:30.393432 7fedde9f5700 (Original Log Time 2017/03/14-14:20:30.393414) [default] Level summary: base level 1 max bytes base 268435456 files[1 0 0 0 0 0 0] max score 0.25
      2017/03/14-14:20:30.393438 7fedde9f5700 [JOB 2] Try to delete WAL files size 61417909, prev total WAL file size 73820858, number of live WAL files 2.
      2017/03/14-14:20:30.393464 7fedde9f5700 [DEBUG] [JOB 2] Delete /dev/shm/old_logging//MANIFEST-000001 type=3 #1 -- OK
      2017/03/14-14:20:30.393472 7fedde9f5700 [DEBUG] [JOB 2] Delete /dev/shm/old_logging//000003.log type=0 #3 -- OK
      2017/03/14-14:20:31.427103 7fedd49f1700 [default] New memtable created with log file: #9. Immutable memtables: 0.
      2017/03/14-14:20:31.427179 7fedde9f5700 [JOB 3] Syncing log #6
      2017/03/14-14:20:31.427190 7fedde9f5700 (Original Log Time 2017/03/14-14:20:31.427170) Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots allowed 1, compaction slots scheduled 1
      2017/03/14-14:20:31.
      ```
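      
      A sketch of the macro-based call sites the PR introduces (ROCKS_LOG_*
      macro naming per the PR; argument names and the rendered output are
      illustrative):
      
      ```cpp
      // Instead of Log(InfoLogLevel::INFO_LEVEL, info_log, ...):
      ROCKS_LOG_INFO(immutable_db_options_.info_log,
                     "[%s] Level summary: %s", cfd->GetName().c_str(), summary);
      // The macro prepends the call site, so the line above would render as
      // something like: "[db/version_set.cc:123] [default] Level summary: ..."
      ```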
      Closes https://github.com/facebook/rocksdb/pull/1990
      
      Differential Revision: D4708695
      
      Pulled By: IslamAbdelRahman
      
      fbshipit-source-id: cb8968f
  25. Mar 09, 2017 (1 commit)
  26. Jan 06, 2017 (1 commit)
    • Maintain position in range deletions map · b104b878
      Committed by Andrew Kryczka
      Summary:
      When deletion-collapsing mode is enabled (i.e., for DBIter/CompactionIterator), we maintain position in the tombstone maps across calls to ShouldDelete(). Since iterators often access keys sequentially (or reverse-sequentially), scanning forward/backward from the last position can be faster than binary-searching the map for every key.
      
      - When Next() is invoked on an iterator, we use kForwardTraversal to scan forwards, if needed, until arriving at the range deletion containing the next key.
      - Similarly for Prev(), we use kBackwardTraversal to scan backwards in the range deletion map.
      - When the iterator seeks, we use kBinarySearch for repositioning.
      - After tombstones are added or before the first ShouldDelete() invocation, the current position is set to invalid, which forces kBinarySearch to be used.
      - Non-iterator users (i.e., Get()) use kFullScan, which has the same behavior as before---scan the whole map for every key passed to ShouldDelete().
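      
      A conceptual sketch of the modes as an enum (names from the list above;
      the declaration itself is illustrative):
      
      ```cpp
      enum class TombstonePositioningMode {
        kFullScan,           // Get(): scan the whole map for every key
        kForwardTraversal,   // Next(): walk forward from the cached position
        kBackwardTraversal,  // Prev(): walk backward from the cached position
        kBinarySearch,       // Seek(), or cached position was invalidated
      };
      ```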
      Closes https://github.com/facebook/rocksdb/pull/1701
      
      Differential Revision: D4350318
      
      Pulled By: ajkr
      
      fbshipit-source-id: 5129b76
  27. Dec 20, 2016 (1 commit)
    • Collapse range deletions · 50e305de
      Committed by Andrew Kryczka
      Summary:
      Added a tombstone-collapsing mode to RangeDelAggregator, which eliminates overlap in the TombstoneMap. In this mode, we can check whether a tombstone covers a user key using upper_bound() (i.e., binary search). However, the tradeoff is that the overhead of adding tombstones is now higher, so at first I've only enabled it for range scans (compaction/flush/user iterators), where we expect a high number of calls to ShouldDelete() for the same tombstones. Point queries like Get() will still use the linear scan approach.
      
      Also in this diff I changed RangeDelAggregator's TombstoneMap to use multimap with user keys instead of map with internal keys. Callers sometimes provided ParsedInternalKey directly, from which it would've required string copying to derive an internal key Slice with which we could search the map.
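      
      A sketch of the upper_bound() lookup in a collapsed map, where each entry
      marks a transition point and its value is the seqnum of the tombstone
      covering keys up to the next transition (0 meaning no coverage).
      Illustrative, not the actual structures:
      
      ```cpp
      #include <cstdint>
      #include <map>
      #include <string>
      
      using CollapsedTombstoneMap = std::map<std::string, std::uint64_t>;
      
      // Returns true if the internal key (user_key, key_seq) is covered by a
      // newer range tombstone.
      bool ShouldDeleteSketch(const CollapsedTombstoneMap& m,
                              const std::string& user_key,
                              std::uint64_t key_seq) {
        auto it = m.upper_bound(user_key);  // first transition strictly after key
        if (it == m.begin()) return false;  // key precedes every tombstone
        --it;                               // transition governing this key
        return it->second > key_seq;        // covering tombstone is newer
      }
      ```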
      Closes https://github.com/facebook/rocksdb/pull/1614
      
      Differential Revision: D4270397
      
      Pulled By: ajkr
      
      fbshipit-source-id: 93092c7
  28. Dec 17, 2016 (1 commit)
  29. Nov 30, 2016 (1 commit)
  30. Nov 29, 2016 (1 commit)
    • Less linear search in DBIter::Seek() when keys are overwritten a lot · 236d4c67
      Committed by Mike Kolupaev
      Summary:
      In one deployment we saw high latencies (presumably from slow iterator operations) and a lot of CPU time reported by perf with this stack:
      
      ```
        rocksdb::MergingIterator::Next
        rocksdb::DBIter::FindNextUserEntryInternal
        rocksdb::DBIter::Seek
      ```
      
      I think what's happening is:
      1. we create a snapshot iterator,
      2. we do lots of Put()s for the same key x; this creates lots of entries in memtable,
      3. we seek the iterator to a key slightly smaller than x,
      4. the seek walks over lots of entries in memtable for key x, skipping them because of high sequence numbers.
      
      CC IslamAbdelRahman
      Closes https://github.com/facebook/rocksdb/pull/1413
      
      Differential Revision: D4083879
      
      Pulled By: IslamAbdelRahman
      
      fbshipit-source-id: a83ddae
  31. Nov 22, 2016 (1 commit)
    • Range deletion microoptimizations · fd43ee09
      Committed by Andrew Kryczka
      Summary:
      - Made RangeDelAggregator's InternalKeyComparator member a reference-to-const so we don't need to copy-construct it. Also added InternalKeyComparator to ImmutableCFOptions so we don't need to construct one for each DBIter.
      - Made MemTable::NewRangeTombstoneIterator and the table readers' NewRangeTombstoneIterator() functions return nullptr instead of NewEmptyInternalIterator to avoid the allocation. Updated callers accordingly.
      Closes https://github.com/facebook/rocksdb/pull/1548
      
      Differential Revision: D4208169
      
      Pulled By: ajkr
      
      fbshipit-source-id: 2fd65cf
  32. Nov 19, 2016 (1 commit)
    • Lazily initialize RangeDelAggregator's map and pinning manager · 3f622152
      Committed by Andrew Kryczka
      Summary:
      Since a RangeDelAggregator is created for each read request, these heap-allocating member variables were consuming significant CPU (~3% total) which slowed down request throughput. The map and pinning manager are only necessary when range deletions exist, so we can defer their initialization until the first range deletion is encountered. Currently lazy initialization is done for reads only since reads pass us a single snapshot, which is easier to store on the stack for later insertion into the map than the vector passed to us by flush or compaction.
      
      Note the Arena member variable is still expensive, I will figure out what to do with it in a subsequent diff. It cannot be lazily initialized because we currently use this arena even to allocate empty iterators, which is necessary even when no range deletions exist.
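      
      A sketch of the deferred-allocation pattern described (members and types
      illustrative, not RocksDB's):
      
      ```cpp
      #include <cstdint>
      #include <map>
      #include <memory>
      #include <string>
      
      class LazyRangeDelAggregator {
       public:
        void AddTombstone(const std::string& start_key, std::uint64_t seq) {
          if (!map_) {  // heap allocation deferred to the first tombstone
            map_.reset(new std::map<std::string, std::uint64_t>());
          }
          (*map_)[start_key] = seq;
        }
        bool ShouldDelete(const std::string& key, std::uint64_t key_seq) const {
          if (!map_) return false;  // common case: no range deletions, no work
          auto it = map_->upper_bound(key);
          return it != map_->begin() && (--it)->second > key_seq;
        }
      
       private:
        std::unique_ptr<std::map<std::string, std::uint64_t>> map_;
      };
      ```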
      Closes https://github.com/facebook/rocksdb/pull/1539
      
      Differential Revision: D4203488
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3b36279
  33. Nov 05, 2016 (1 commit)
    • DeleteRange user iterator support · 9e7cf346
      Committed by Andrew Kryczka
      Summary:
      Note: reviewed in  https://reviews.facebook.net/D65115
      
      - DBIter maintains a range tombstone accumulator. We don't cleanup obsolete tombstones yet, so if the user seeks back and forth, the same tombstones would be added to the accumulator multiple times.
      - DBImpl::NewInternalIterator() (used to make DBIter's underlying iterator) adds memtable/L0 range tombstones, L1+ range tombstones are added on-demand during NewSecondaryIterator() (see D62205)
      - DBIter uses ShouldDelete() when advancing to check whether keys are covered by range tombstones
      Closes https://github.com/facebook/rocksdb/pull/1464
      
      Differential Revision: D4131753
      
      Pulled By: ajkr
      
      fbshipit-source-id: be86559
  34. Oct 19, 2016 (1 commit)
    • fix db_stress assertion failure · 5e0d6b4c
      Committed by Aaron Gao
      Summary: in rocksdb::DBIter::FindValueForCurrentKey(), last_not_merge_type could also be a SingleDelete, a case that had been omitted
      
      Test Plan: db_iter_test
      
      Reviewers: yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D65187
  35. Oct 14, 2016 (1 commit)
    • fix assertion failure in Prev() · 21e8dace
      Committed by Aaron Gao
      Summary:
      fix assertion failure in db_stress.
      It happens because the prefix seek key is larger than the merge iterator key when they have the same user key
      
      Test Plan: ./db_stress --max_background_compactions=1 --max_write_buffer_number=3 --sync=0 --reopen=20 --write_buffer_size=33554432 --delpercent=5 --log2_keys_per_lock=10 --block_size=16384 --allow_concurrent_memtable_write=0 --test_batches_snapshots=0 --max_bytes_for_level_base=67108864 --progress_reports=0 --mmap_read=0 --writepercent=35 --disable_data_sync=0 --readpercent=50 --subcompactions=4 --ops_per_thread=20000000 --memtablerep=skip_list --prefix_size=0 --target_file_size_multiplier=1 --column_families=1 --threads=32 --disable_wal=0 --open_files=500000 --destroy_db_initially=0 --target_file_size_base=16777216 --nooverwritepercent=1 --iterpercent=10 --max_key=100000000 --prefixpercent=0 --use_clock_cache=false --kill_random_test=888887 --cache_size=1048576 --verify_checksum=1
      
      Reviewers: sdong, andrewkr, yiwu, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D65025
  36. Oct 12, 2016 (1 commit)
    • new Prev() prefix support using SeekForPrev() · 447f1712
      Committed by Aaron Gao
      Summary:
      1) The previous solution for Prev() prefix support is not clean.
      Since I added the SeekForPrev() API, Prev() can now be symmetric to Next(),
      and we no longer need SeekToLast() to be called in Prev().
      
      Also, Next() will Seek(prefix_seek_key_) to solve the problem of possible inconsistency between db_iter and merge_iter when there is a merge_operator. And prefix_seek_key_ is only refreshed when changing direction to forward.
      
      2) This diff also fixes a bug in Iterator::SeekToLast() when iterate_upper_bound_ is used together with a prefix extractor.
      
      Test cases are added for the above two cases.
      
      There are some tests for the SeekToLast() in Prev(); I will clean them up later.
      
      Test Plan: make all check
      
      Reviewers: IslamAbdelRahman, andrewkr, yiwu, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D63933
  37. Sep 30, 2016 (1 commit)
  38. Sep 28, 2016 (1 commit)
    • Add SeekForPrev() to Iterator · f517d9dd
      Committed by Aaron Gao
      Summary:
      Add a new Iterator API, `SeekForPrev`: find the last key that is <= the target key.
      - supports prefix_extractor
      - supports prefix_same_as_start
      - supports upper_bound
      - not supported in iterators without Prev()
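      
      A minimal usage sketch of the new API:
      
      ```cpp
      #include <memory>
      #include "rocksdb/db.h"
      
      void ScanBackwardFrom(rocksdb::DB* db, const rocksdb::Slice& target) {
        std::unique_ptr<rocksdb::Iterator> it(
            db->NewIterator(rocksdb::ReadOptions()));
        it->SeekForPrev(target);  // lands on the last key <= target
        for (; it->Valid(); it->Prev()) {
          // walk backwards from the position found
        }
      }
      ```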
      
      Also add tests in db_iter_test and db_iterator_test
      
      Pass all tests
      Cheers!
      
      Test Plan: make all check -j64
      
      Reviewers: andrewkr, yiwu, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D64149
  39. Sep 09, 2016 (1 commit)