1. 26 Aug 2014, 1 commit
  2. 03 Jun 2014, 1 commit
    • In DB::NewIterator(), try to allocate the whole iterator tree in an arena · df9069d2
      Committed by sdong
      Summary:
      In this patch, we try to allocate the whole iterator tree, starting from DBIter, from an arena:
      1. ArenaWrappedDBIter is created to serve as the entry point of an iterator tree, with an arena in it.
      2. Add an option to create iterators from an arena for the following iterators: DBIter, MergingIterator, MemtableIterator, all mem table iterators, all table reader iterators and the two level iterator.
      3. MergeIteratorBuilder is created to incrementally build the tree of internal iterators. It is passed to the mem table list and version set, which add iterators to it.
      
      Limitations:
      (1) Only DB::NewIterator() without tailing uses the arena. Other cases, including read-only DBs and compactions, still allocate from malloc.
      (2) The two level iterator itself is allocated in the arena, but the iterators inside it are not.
      
      Test Plan: make all check
      
      Reviewers: ljin, haobo
      
      Reviewed By: haobo
      
      Subscribers: leveldb, dhruba, yhchiang, igor
      
      Differential Revision: https://reviews.facebook.net/D18513
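      The arena pattern described above can be sketched as follows. This is a toy model (the `Arena`, `Iter`, and `NewIter` names are illustrative, not RocksDB's actual classes): iterators are placement-new'd into an arena and all freed together when the arena is destroyed, instead of one malloc/free per iterator.

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <vector>

// Toy arena: each Allocate grabs a block, and all blocks are freed
// together when the arena is destroyed.
class Arena {
 public:
  ~Arena() { for (char* b : blocks_) delete[] b; }
  void* Allocate(size_t bytes) {
    char* b = new char[bytes];
    blocks_.push_back(b);
    allocated_ += bytes;
    return b;
  }
  size_t allocated() const { return allocated_; }
 private:
  std::vector<char*> blocks_;
  size_t allocated_ = 0;
};

struct Iter {  // stand-in for an internal iterator
  int value;
  explicit Iter(int v) : value(v) {}
};

// Mirrors the "create iterator from arena" option: when an arena is
// given, the iterator lives in it and is never deleted individually.
Iter* NewIter(int v, Arena* arena) {
  if (arena == nullptr) return new Iter(v);            // old path: malloc
  return new (arena->Allocate(sizeof(Iter))) Iter(v);  // arena path
}

// Build a whole "tree" of n iterators in one arena.
size_t BuildTree(Arena* arena, int n) {
  for (int i = 0; i < n; ++i) NewIter(i, arena);
  return arena->allocated();
}
```

      A caller like ArenaWrappedDBIter would own the arena, so tearing down the entry point frees every internal iterator at once.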
  3. 30 Apr 2014, 1 commit
    • Add a new mem-table representation based on cuckoo hash. · 9d9d2965
      Committed by Yueh-Hsuan Chiang
      Summary:
      = Major Changes =
       * Add a new mem-table representation, HashCuckooRep, which is based on cuckoo hash.
         Cuckoo hash uses multiple hash functions, which allows each key to have multiple
         possible locations in the mem-table.
      
         - Put: When inserting a key, it tries to find whether one of the key's possible
           locations is vacant and stores the key there.  If none of its possible
           locations is available, it kicks out a victim key and stores the new
           key at that location.  The kicked-out victim key is then stored at a
           vacant space among its own possible locations, or kicks out another
           victim.  In this diff, the kick-out path (known as the cuckoo path)
           is found using BFS, which guarantees it is the shortest.
      
       - Get: Simply tries all possible locations of a key --- this guarantees
         worst-case constant time complexity.
      
       - Time complexity: O(1) for Get, and average O(1) for Put if the
         fullness of the mem-table is below 80%.
      
        - By default two hash functions are used; the number of hash functions
          used by the cuckoo hash may dynamically increase if it fails to find
          a short-enough kick-out path.
      
       - Currently, HashCuckooRep does not support iteration and snapshots,
         as our current main purpose of this is to optimize point access.
      
      = Minor Changes =
      * Add IsSnapshotSupported() to DB to indicate whether the current DB
        supports snapshots.  If it returns false, then DB::GetSnapshot() will
        always return nullptr.
      
      Test Plan:
      Run existing tests.  Will develop a test specifically for cuckoo hash in
      the next diff.
      
      Reviewers: sdong, haobo
      
      Reviewed By: sdong
      
      CC: leveldb, dhruba, igor
      
      Differential Revision: https://reviews.facebook.net/D16155
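      As a rough illustration of the Put/Get behavior described above, here is a toy cuckoo table with two hash functions and a bounded kick-out loop. The class, method names, and hash mixing are invented for the sketch (HashCuckooRep's real layout differs, and it grows the number of hash functions instead of simply failing):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Toy cuckoo table storing non-negative int keys; -1 marks a vacant slot.
class CuckooTable {
 public:
  explicit CuckooTable(size_t buckets) : slots_(buckets, -1) {}

  // Each key has two candidate slots; if both are taken, kick out a
  // victim and re-insert it along its own candidate slots.
  bool Put(int key) {
    int cur = key;
    for (int kicks = 0; kicks < 64; ++kicks) {  // bound the cuckoo path
      size_t h1 = Hash(cur, 0), h2 = Hash(cur, 1);
      if (slots_[h1] < 0) { slots_[h1] = cur; return true; }
      if (slots_[h2] < 0) { slots_[h2] = cur; return true; }
      std::swap(cur, slots_[h1]);               // kick out a victim
    }
    return false;  // in the real rep: add a hash function or flush
  }

  // Get probes only the candidate slots: worst-case constant time.
  bool Get(int key) const {
    return slots_[Hash(key, 0)] == key || slots_[Hash(key, 1)] == key;
  }

 private:
  size_t Hash(int key, int fn) const {
    return ((size_t)key * 2654435761u + fn * 97) % slots_.size();
  }
  std::vector<int> slots_;
};
```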
  4. 26 Apr 2014, 1 commit
  5. 23 Apr 2014, 1 commit
    • Expose number of entries in mem tables to users · a5707407
      Committed by sdong
      Summary: In this patch, two new DB properties are defined, rocksdb.num-immutable-mem-table and rocksdb.num-entries-imm-mem-tables, through which the number of entries in mem tables can be exposed to users.
      
      Test Plan:
      Cover the codes in db_test
      make all check
      
      Reviewers: haobo, ljin, igor
      
      Reviewed By: igor
      
      CC: nkg-, igor, yhchiang, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D18207
  6. 13 Mar 2014, 2 commits
    • A heuristic way to check if a memtable is full · 11da8bc5
      Committed by Kai Liu
      Summary:
      This is based on https://reviews.facebook.net/D15027. It's not finished, but I would like to share a prototype that avoids arena over-allocation while making better use of the already allocated memory blocks.
      
      Instead of checking the approximate memtable size, we take a deeper look at the arena, incorporating the essential idea that @sdong suggests: flush when the arena has allocated its last block and that block is "almost full".
      
      Test Plan: N/A
      
      Reviewers: haobo, sdong
      
      Reviewed By: sdong
      
      CC: leveldb, sdong
      
      Differential Revision: https://reviews.facebook.net/D15051
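      The heuristic can be sketched as follows, assuming hypothetical arena statistics (block size, blocks allocated, free bytes in the newest block). The 10% "almost full" threshold is an illustrative choice, not the patch's actual constant:

```cpp
#include <cassert>
#include <cstddef>

// Toy model of the heuristic: instead of comparing an approximate
// memtable size against write_buffer_size, look at the arena itself and
// declare the memtable full when the arena has allocated its last block
// and that block is almost used up. Field names are illustrative.
struct ArenaStats {
  size_t block_size;         // size of each arena block
  size_t blocks_allocated;   // blocks handed out so far
  size_t last_block_unused;  // bytes still free in the newest block
};

bool ShouldFlush(const ArenaStats& s, size_t write_buffer_size) {
  size_t max_blocks = write_buffer_size / s.block_size;
  if (s.blocks_allocated < max_blocks) return false;  // room for more blocks
  // On the last block: "almost full" = little usable space remains.
  return s.last_block_unused < s.block_size / 10;
}
```

      This way a flush is never triggered while the arena still has whole blocks to hand out, so the already allocated memory is actually consumed first.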
    • [CF] Code cleanup part 1 · fb2346fc
      Committed by Igor Canadi
      Summary:
      I'm cleaning up some code preparing for the big diff review tomorrow. This is the first part of the cleanup.
      
      Changes are mostly cosmetic. The goal is to decrease the amount of code difference between the columnfamilies and master branches.
      
      This diff also fixes race condition when dropping column family.
      
      Test Plan: Ran db_stress with variety of parameters
      
      Reviewers: dhruba, haobo
      
      Differential Revision: https://reviews.facebook.net/D16833
  7. 26 Feb 2014, 1 commit
    • [CF] Better handling of memtable logs · b69e7d99
      Committed by Igor Canadi
      Summary: DBImpl now keeps a list of alive_log_files_. On every FindObsoleteFiles, it deletes all alive log files whose numbers are smaller than versions_->MinLogNumber().
      
      Test Plan:
      make check passes
      no specific unit tests yet, will add
      
      Reviewers: dhruba, haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D16293
  8. 12 Feb 2014, 1 commit
    • Reduce malloc of iterators in Get() code paths · 33042669
      Committed by Siying Dong
      Summary:
      This patch optimizes Get() code paths by avoiding malloc of iterators. Iterator creation is moved into the mem table rep implementations, where a callback is invoked whenever a key is found. This is the same practice as in the (SST) table readers.
      
      In db_bench, for readrandom following a writeseq, with no compression, a single thread and tmpfs, throughput improved from 139027 to 144958 ops/sec, about 4%.
      
      Test Plan: make all check
      
      Reviewers: dhruba, haobo, igor
      
      Reviewed By: haobo
      
      CC: leveldb, yhchiang
      
      Differential Revision: https://reviews.facebook.net/D14685
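      A minimal sketch of the callback pattern, with invented names (`Rep::Get`, `Lookup`) standing in for the memtable rep API: the rep scans its own entries and invokes a callback per match, so no iterator object is ever heap-allocated for a point lookup.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Toy memtable rep: instead of returning a heap-allocated iterator for
// point lookups, Get() walks matching entries itself and invokes a
// callback for each one.
class Rep {
 public:
  void Insert(const std::string& key, const std::string& value) {
    entries_.emplace(key, value);
  }
  // The callback returns false to stop scanning (e.g. key found).
  void Get(const std::string& key,
           const std::function<bool(const std::string&)>& cb) const {
    for (auto it = entries_.lower_bound(key);
         it != entries_.end() && it->first == key; ++it) {
      if (!cb(it->second)) return;  // no iterator object ever allocated
    }
  }
 private:
  std::multimap<std::string, std::string> entries_;
};

// Caller-side Get(): capture the result through the callback.
std::string Lookup(const Rep& rep, const std::string& key) {
  std::string result = "<not found>";
  rep.Get(key, [&](const std::string& v) { result = v; return false; });
  return result;
}
```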
  9. 31 Jan 2014, 1 commit
    • Clean up arena API · 4e0298f2
      Committed by kailiu
      Summary:
      Easy things go first. This patch moves arena to the internal dir; the coming patch will build on it to deal with memtable_rep.
      
      Test Plan: make check
      
      Reviewers: haobo, sdong, dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15615
  10. 28 Jan 2014, 2 commits
  11. 16 Jan 2014, 1 commit
    • Remove the unnecessary use of shared_ptr · eae1804f
      Committed by kailiu
      Summary:
      shared_ptr is slower than unique_ptr (which comes with literally no performance cost compared with raw pointers).
      In memtable and memtable rep, we use shared_ptr where we actually should use unique_ptr.
      
      According to igor's previous work, we are likely to make quite some performance gain from this diff.
      
      Test Plan: make check
      
      Reviewers: dhruba, igor, sdong, haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15213
  12. 15 Jan 2014, 1 commit
    • VersionEdit not to take NumLevels() · 055e6df4
      Committed by Igor Canadi
      Summary:
      I will submit a sequence of diffs that are preparing master branch for column families. There are a lot of implicit assumptions in the code that are making column family implementation hard. If I make the change only in column family branch, it will make merging back to master impossible.
      
      Most of the diffs will be simple code refactorings, so I hope we can have fast turnaround time. Feel free to grab me in person to discuss any of them.
      
      This diff removes the number-of-levels check from VersionEdit. It is used only when a VersionEdit is read, not written, but has to be set when it is written. I believe it is the right thing to make VersionEdit dumb and check consistency on the caller side. This will also make it much easier to implement column families, since different column families can have different numbers of levels.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      Reviewed By: kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15159
  13. 14 Jan 2014, 1 commit
    • Add read/modify/write functionality to Put() api · 8454cfe5
      Committed by Naman Gupta
      Summary: The application can set a callback function which is applied to the previous value to calculate the new value. The new value is set in place if the previous value exists in the memtable and the new value is smaller than the previous one; otherwise the new value is added normally.
      
      Test Plan: fbmake. Added unit tests. All unit tests pass.
      
      Reviewers: dhruba, haobo
      
      Reviewed By: haobo
      
      CC: sdong, kailiu, xinyaohu, sumeet, leveldb
      
      Differential Revision: https://reviews.facebook.net/D14745
  14. 11 Jan 2014, 1 commit
    • Improve RocksDB "get" performance by computing merge result in memtable · a09ee106
      Committed by Schalk-Willem Kruger
      Summary:
      Added an option (max_successive_merges) that can be used to specify the
      maximum number of successive merge operations on a key in the memtable.
      This can be used to improve performance of the "get" operation. If many
      successive merge operations are performed on a key, the performance of "get"
      operations on the key deteriorates, as the value has to be computed for each
      "get" operation by applying all the successive merge operations.
      
      FB Task ID: #3428853
      
      Test Plan:
      make all check
      db_bench --benchmarks=readrandommergerandom
      counter_stress_test
      
      Reviewers: haobo, vamsi, dhruba, sdong
      
      Reviewed By: haobo
      
      CC: zshao
      
      Differential Revision: https://reviews.facebook.net/D14991
  15. 12 Dec 2013, 1 commit
  16. 11 Dec 2013, 1 commit
  17. 07 Dec 2013, 1 commit
  18. 04 Dec 2013, 1 commit
    • Get rid of some shared_ptrs · 043fc14c
      Committed by Igor Canadi
      Summary:
      I went through all remaining shared_ptrs and removed the ones I found unnecessary. Only GenerateCachePrefix() is called fairly often, so don't expect much of a perf win.
      
      The ones that are left are accessed infrequently and I think we're fine with keeping them.
      
      Test Plan: make asan_check
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14427
  19. 26 Nov 2013, 1 commit
    • Free obsolete memtables outside the dbmutex. · 27bbef11
      Committed by Dhruba Borthakur
      Summary:
      Large memory allocations and frees are costly and best done outside the
      db-mutex. The memtables are already allocated outside the db-mutex but
      they were being freed while holding the db-mutex.
      This patch frees obsolete memtables outside the db-mutex.
      
      Test Plan:
      make check
      db_stress
      
      Unit tests pass, I am in the process of running stress tests.
      
      Reviewers: haobo, igor, emayanke
      
      Reviewed By: haobo
      
      CC: reconnect.grayhat, leveldb
      
      Differential Revision: https://reviews.facebook.net/D14319
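      The detach-under-lock, free-after-unlock pattern looks roughly like this (a toy model; the real code works on the MemTableList and the db-mutex, and the names below are invented):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <mutex>
#include <vector>

// Stand-in for a memtable with a large allocation behind it.
struct MemTable {
  std::vector<char> buf = std::vector<char>(1 << 20);
};

std::mutex db_mutex;  // stand-in for the db-mutex
std::vector<std::unique_ptr<MemTable>> obsolete;

// Pattern from the patch: while holding the mutex, only *detach* the
// obsolete memtables; run the (expensive) frees after unlocking.
size_t FreeObsoleteMemtables() {
  std::vector<std::unique_ptr<MemTable>> to_free;
  {
    std::lock_guard<std::mutex> l(db_mutex);
    to_free.swap(obsolete);  // cheap pointer swap under the lock
  }
  size_t n = to_free.size();
  to_free.clear();           // the large frees happen unlocked
  return n;
}
```

      The critical section shrinks to a pointer swap, so writers blocked on the db-mutex never wait behind a large free.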
  20. 21 Nov 2013, 1 commit
    • [Only for Performance Branch] A Hacky patch to lazily generate memtable key for prefix-hashed memtables. · 58e1956d
      Committed by Siying Dong
      
      Summary:
      For prefix mem tables, encoding the mem table key may be unnecessary if the prefix doesn't have any key. This patch is a little bit hacky, but I want to try out the performance gain of this lazy initialization.
      
      In longer term, we might want to revisit the way we abstract mem tables implementations.
      
      Test Plan: make all check
      
      Reviewers: haobo, igor, kailiu
      
      Reviewed By: igor
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14265
  21. 07 Nov 2013, 1 commit
    • [RocksDB] Generalize prefix-aware iterator to be used for more than one Seek · fd204488
      Committed by Haobo Xu
      Summary: Added a prefix_seek flag in ReadOptions to indicate that Seek is prefix-aware (it might not return data with a different prefix) and not bound to a specific prefix. Multiple Seeks and range scans can be invoked on the same iterator. If a specific prefix is specified, this flag will be ignored. This is just a quick prototype that works for PrefixHashRep; the new lockless memtable could easily be extended with this support too.
      
      Test Plan: test it on Leaf
      
      Reviewers: dhruba, kailiu, sdong, igor
      
      Reviewed By: igor
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13929
  22. 01 Nov 2013, 1 commit
    • In-place updates for equal keys and similar sized values · fe250702
      Committed by Naman Gupta
      Summary:
      Currently, each put allocates fresh memory and adds a new entry to the memtable with a new sequence number, irrespective of whether the key already exists in the memtable. This diff is an attempt to update the value in place for existing keys. It currently handles a very simple case:
      1. The key already exists in the current memtable. It does not update values in place in immutable memtables or snapshots.
      2. The latest value type is a 'put', i.e. kTypeValue.
      3. The new value size is less than the existing value, to avoid reallocating memory.
      
      TODO: For a put of an existing key, deallocate the memory taken by values of other value types until a kTypeValue is found, i.e. remove kTypeMerge.
      TODO: Update the transaction log, to allow consistent reload of the memtable.
      
      Test Plan: Added a unit test verifying the in-place update. But some other unit tests are broken due to invalid sequence number checks. Will fix them next.
      
      Reviewers: xinyaohu, sumeet, haobo, dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12423
      
      Automatic commit by arc
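      The three conditions can be sketched like this, with a std::map playing the memtable and invented names; the real code reuses the existing entry's buffer rather than assigning a std::string:

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy sketch of the diff's three conditions: update in place only when
// (1) the key is already in the *current* memtable, (2) its latest
// entry is a plain value (kTypeValue), and (3) the new value is smaller
// than the old one, so no reallocation is needed.
enum ValueType { kTypeValue, kTypeMerge };
struct Entry { ValueType type; std::string value; };

// Returns true iff the in-place path was taken.
bool PutMaybeInplace(std::map<std::string, Entry>& mem,
                     const std::string& key, const std::string& value) {
  auto it = mem.find(key);
  if (it != mem.end() && it->second.type == kTypeValue &&
      value.size() < it->second.value.size()) {
    it->second.value.assign(value);  // overwrite the existing slot
    return true;                     // in-place update happened
  }
  mem[key] = Entry{kTypeValue, value};  // normal insert path
  return false;
}
```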
  23. 17 Oct 2013, 1 commit
  24. 06 Oct 2013, 1 commit
  25. 05 Oct 2013, 1 commit
  26. 13 Sep 2013, 1 commit
    • [RocksDB] Remove Log file immediately after memtable flush · 0e422308
      Committed by Haobo Xu
      Summary: As titled. A DB log file's life cycle is tied to the memtable it backs. Once the memtable is flushed to an sst file and committed, we should be able to delete the log file without holding the mutex. This is part of the bigger change to avoid FindObsoleteFiles at runtime. It deals with log files; sst files will be dealt with later.
      
      Test Plan: make check; db_bench
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11709
  27. 24 Aug 2013, 1 commit
  28. 23 Aug 2013, 1 commit
    • Add three new MemTableRep's · 74781a0c
      Committed by Jim Paton
      Summary:
      This patch adds three new MemTableRep's: UnsortedRep, PrefixHashRep, and VectorRep.
      
      UnsortedRep stores keys in an std::unordered_map of std::sets. When an iterator is requested, it dumps the keys into an std::set and iterates over that.
      
      VectorRep stores keys in an std::vector. When an iterator is requested, it creates a copy of the vector and sorts it using std::sort. The iterator accesses that new vector.
      
      PrefixHashRep stores keys in an unordered_map mapping prefixes to ordered sets.
      
      I also added one API change. I added a function MemTableRep::MarkImmutable. This function is called when the rep is added to the immutable list. It doesn't do anything yet, but it seems like that could be useful. In particular, for the vectorrep, it means we could elide the extra copy and just sort in place. The only reason I haven't done that yet is because the use of the ArenaAllocator complicates things (I can elaborate on this if needed).
      
      Test Plan:
      make -j32 check
      ./db_stress --memtablerep=vector
      ./db_stress --memtablerep=unsorted
      ./db_stress --memtablerep=prefixhash --prefix_size=10
      
      Reviewers: dhruba, haobo, emayanke
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12117
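      VectorRep's defer-sorting-until-iteration idea, as a toy sketch (the method names are invented for illustration): inserts are O(1) appends, and ordering is paid for only when an iterator is actually requested.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Toy VectorRep: keys go into a plain std::vector in arrival order; a
// sorted copy is made only when someone asks to iterate.
class VectorRep {
 public:
  void Insert(const std::string& key) { keys_.push_back(key); }

  // Expensive step deferred to iteration time: copy, then sort the
  // copy, leaving the rep itself untouched for concurrent inserts.
  std::vector<std::string> NewIteratorSnapshot() const {
    std::vector<std::string> sorted = keys_;
    std::sort(sorted.begin(), sorted.end());
    return sorted;
  }

 private:
  std::vector<std::string> keys_;
};
```

      The MarkImmutable() hook mentioned above would let a rep like this skip the copy and sort in place once no more inserts can arrive.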
  29. 14 Aug 2013, 1 commit
    • Counter for merge failure · f1bf1694
      Committed by Mayank Agarwal
      Summary:
      With Merge returning bool, it can keep failing silently (e.g. when failing to fetch the timestamp in TTL). We need to detect this through a rocksdb counter which is bumped whenever Merge returns false. This will also be super useful for the mcrocksdb-counter service, where Merge may fail. Added a counter NUMBER_MERGE_FAILURES and updated db/merge_helper.cc appropriately.
      
      I felt it might be better to put the counter bumping directly in Merge as a default function of the MergeOperator class, but the user should not be aware of this, so this approach seems better to me.
      
      Test Plan: make all check
      
      Reviewers: dnicholas, haobo, dhruba, vamsi
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12129
  30. 06 Aug 2013, 1 commit
    • [RocksDB] [MergeOperator] The new Merge Interface! Uses merge sequences. · c2d7826c
      Committed by Deon Nicholas
      Summary:
      Here are the major changes to the Merge Interface. It has been expanded
      to handle cases where the MergeOperator is not associative. It does so by stacking
      up merge operations while scanning through the key history (i.e.: during Get() or
      Compaction), until a valid Put/Delete/end-of-history is encountered; it then
      applies all of the merge operations in the correct sequence starting with the
      base/sentinel value.
      
      I have also introduced an "AssociativeMerge" function which allows the user to
      take advantage of associative merge operations (such as in the case of counters).
      The implementation will always attempt to merge the operations/operands themselves
      together when they are encountered, and will resort to the "stacking" method if
      and only if the "associative-merge" fails.
      
      This implementation is conjectured to allow MergeOperator to handle the general
      case, while still providing the user with the ability to take advantage of certain
      efficiencies in their own merge-operator / data-structure.
      
      NOTE: This is a preliminary diff. This must still go through a lot of review,
      revision, and testing. Feedback welcome!
      
      Test Plan:
        -This is a preliminary diff. I have only just begun testing/debugging it.
        -I will be testing this with the existing MergeOperator use-cases and unit-tests
      (counters, string-append, and redis-lists)
        -I will be "desk-checking" and walking through the code with the help of gdb.
        -I will find a way of stress-testing the new interface / implementation using
      db_bench, db_test, merge_test, and/or db_stress.
        -I will ensure that my tests cover all cases: Get-Memtable,
      Get-Immutable-Memtable, Get-from-Disk, Iterator-Range-Scan, Flush-Memtable-to-L0,
      Compaction-L0-L1, Compaction-Ln-L(n+1), Put/Delete found, Put/Delete not-found,
      end-of-history, end-of-file, etc.
        -A lot of feedback from the reviewers.
      
      Reviewers: haobo, dhruba, zshao, emayanke
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11499
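      The stacking scheme can be illustrated with a toy merge operator. String concatenation stands in for a real user-defined operator, and the function below models only the final step, applying the stacked operands in the correct sequence on top of the base value found in the key history:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy version of the "stacking" scheme: while scanning key history from
// newest to oldest, merge operands are pushed onto a stack until a base
// Put (or end-of-history) appears; then they are applied oldest-first
// starting with the base/sentinel value.
std::string ApplyMergeStack(const std::string& base_value,
                            const std::vector<std::string>& newest_first) {
  std::vector<std::string> stack = newest_first;  // operands as scanned
  std::string result = base_value;
  while (!stack.empty()) {       // pop the oldest operand first
    result += "+" + stack.back();
    stack.pop_back();
  }
  return result;
}
```

      An associative operator could instead collapse adjacent operands as they are encountered, which is exactly what the "AssociativeMerge" fast path attempts before falling back to stacking.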
  31. 02 Aug 2013, 1 commit
    • Expand KeyMayExist to return the proper value if it can be found in memory and also check block_cache · 59d0b02f
      Committed by Mayank Agarwal
      
      Summary: Removed KeyMayExistImpl because KeyMayExist now demands Get-like semantics. Removed no_io from memtable and imm because we need the proper value now and shouldn't just stop when we see a Merge in the memtable. Added checks to block_cache. Updated documentation and unit tests.
      
      Test Plan: make all check; db_stress for 1 hour
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11853
  32. 01 Aug 2013, 1 commit
    • Make arena block size configurable · 0f0a24e2
      Committed by Xing Jin
      Summary:
      Add an option for arena block size, with a default value of 4096 bytes. The arena will allocate blocks of this size.
      
      I am not sure about passing parameters to the skiplist in the new virtualized framework, though I talked to Jim a bit. So I added Jim as a reviewer.
      
      Test Plan:
      new unit test, I am running db_test.
      
      For passing the parameter from the configured option to Arena, I tried tests like:
      
        TEST(DBTest, Arena_Option) {
          std::string dbname = test::TmpDir() + "/db_arena_option_test";
          DestroyDB(dbname, Options());
      
          DB* db = nullptr;
          Options opts;
          opts.create_if_missing = true;
          opts.arena_block_size = 1000000;  // tested 99, 999999
          Status s = DB::Open(opts, dbname, &db);
          db->Put(WriteOptions(), "a", "123");
        }
      
      and printed some debug info. The results look good. Any suggestions for such a unit test?
      
      Reviewers: haobo, dhruba, emayanke, jpaton
      
      Reviewed By: dhruba
      
      CC: leveldb, zshao
      
      Differential Revision: https://reviews.facebook.net/D11799
  33. 24 Jul 2013, 1 commit
    • Virtualize SkipList Interface · 52d7ecfc
      Committed by Jim Paton
      Summary: This diff virtualizes the skiplist interface so that users can provide their own implementation of a backing store for MemTables. Eventually, the backing store will be responsible for its own synchronization, allowing users (and us) to experiment with different lockless implementations.
      
      Test Plan:
      make clean
      make -j32 check
      ./db_stress
      
      Reviewers: dhruba, emayanke, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11739
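      What "virtualizing" the interface buys can be sketched like this. The abstract class below is a simplification of the MemTableRep idea, with invented method names, and a std::set stands in for the skiplist:

```cpp
#include <cassert>
#include <set>
#include <string>

// The memtable talks only to an abstract rep; a skiplist is just one
// pluggable implementation behind it.
class MemTableRep {
 public:
  virtual ~MemTableRep() = default;
  virtual void Insert(const std::string& key) = 0;
  virtual bool Contains(const std::string& key) const = 0;
};

// Stand-in for the default skiplist-backed rep.
class OrderedRep : public MemTableRep {
 public:
  void Insert(const std::string& key) override { keys_.insert(key); }
  bool Contains(const std::string& key) const override {
    return keys_.count(key) > 0;
  }
 private:
  std::set<std::string> keys_;
};

// Memtable-side code sees only the interface, so users can swap in
// their own (possibly lockless) backing store.
bool DemoInsertAndLookup(MemTableRep& rep, const std::string& key) {
  rep.Insert(key);
  return rep.Contains(key);
}
```

      This is the seam the later commits exploit: VectorRep, PrefixHashRep, and HashCuckooRep all slot in behind the same abstract interface.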
  34. 12 Jul 2013, 1 commit
    • Make rocksdb-deletes faster using bloom filter · 2a986919
      Committed by Mayank Agarwal
      Summary:
      Wrote a new function in db_impl.cc, CheckKeyMayExist, that calls Get with a new parameter turned on, which makes Get return false only if the bloom filters can guarantee that the key is not in the database. Delete calls this function, and if the deletes_use_filter option is turned on and CheckKeyMayExist returns false, the delete will be dropped, saving:
      1. A put of delete type
      2. Space in the db, and
      3. Compaction time
      
      Test Plan:
      make all check;
      will run db_stress and db_bench and enhance unit-test once the basic design gets approved
      
      Reviewers: dhruba, haobo, vamsi
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11607
  35. 19 Jun 2013, 1 commit
    • Compact multiple memtables before flushing to storage. · 6acbe0fc
      Committed by Dhruba Borthakur
      Summary:
      Merge multiple memtables in memory before writing them out to a file in L0.
      
      There is a new config parameter, min_write_buffer_number_to_merge, that specifies the number of write buffers that should be merged together into a single file in storage. The system will not flush write buffers to storage unless at least this many buffers have accumulated in memory. The default value of this new parameter is 1, which means that a write buffer is flushed to disk as soon as it is ready.
      
      Test Plan: make check
      
      Differential Revision: https://reviews.facebook.net/D11241
  36. 04 May 2013, 1 commit
    • [Rocksdb] Support Merge operation in rocksdb · 05e88540
      Committed by Haobo Xu
      Summary:
      This diff introduces a new Merge operation into rocksdb.
      The purpose of this review is mostly getting feedback from the team (everyone please) on the design.
      
      Please focus on the four files under include/leveldb/, as they spell the client visible interface change.
      include/leveldb/db.h
      include/leveldb/merge_operator.h
      include/leveldb/options.h
      include/leveldb/write_batch.h
      
      Please go over local/my_test.cc carefully, as it is a concrete use case.
      
      Please also review the implementation files to see if the straw-man implementation makes sense.
      
      Note that the diff does pass all of make check and truly supports a forward iterator over the db and a version of Get that's based on the iterator.
      
      Future work:
      - Integration with compaction
      - A raw Get implementation
      
      I am working on a wiki that explains the design and implementation choices, but coding comes
      just naturally and I think it might be a good idea to share the code earlier. The code is
      heavily commented.
      
      Test Plan: run all local tests
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: dhruba
      
      CC: leveldb, zshao, sheki, emayanke, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D9651
  37. 04 Mar 2013, 1 commit
    • Ability for rocksdb to compact when flushing the in-memory memtable to a file in L0. · 806e2643
      Committed by Dhruba Borthakur
      Summary:
      Rocks accumulates recent writes and deletes in the in-memory memtable. When the memtable is full, it writes the contents of the memtable to a file in L0.
      
      This patch removes redundant records at the time of the flush. If there are multiple versions of the same key in the memtable, then only the most recent one is dumped into the output file. The purging of redundant records occurs only if the most recent snapshot is earlier than the earliest record in the memtable.
      
      Should we switch on this feature by default or should we keep this feature
      turned off in the default settings?
      
      Test Plan: Added test case to db_test.cc
      
      Reviewers: sheki, vamsi, emayanke, heyongqiang
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D8991
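      The purge-at-flush idea, as a toy sketch: the flush input arrives sorted by key with the newest sequence number first, and only the first (most recent) version of each key is written out. The snapshot check described above is omitted here:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// (key, sequence number); input is sorted by key, newest seqno first,
// which is the order a memtable flush scans entries in.
using Entry = std::pair<std::string, int>;

// Toy flush that drops redundant records: keep only the most recent
// version of each key. (The real code also verifies that no live
// snapshot is older than the entries being dropped.)
std::vector<Entry> FlushWithPurge(const std::vector<Entry>& newest_first) {
  std::vector<Entry> out;
  for (const Entry& e : newest_first) {
    if (!out.empty() && out.back().first == e.first)
      continue;  // older version of the same key: purge it
    out.push_back(e);
  }
  return out;
}
```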
  38. 04 Jan 2013, 1 commit
    • Fixing and adding some comments · 8cd86a7b
      Committed by Kosie van der Merwe
      Summary:
      `MemTableList::Add()` neglected to mention that it took ownership of the reference held by its caller.
      
      The comment in `MemTable::Get()` was wrong in describing the format of the key.
      
      Test Plan: None
      
      Reviewers: dhruba, sheki, emayanke, vamsi
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D7755