1. 02 Oct 2014, 1 commit
    • make compaction related options changeable · 5ec53f3e
      Committed by Lei Jin
      Summary:
      make compaction related options changeable. Most of the changes are tedious and
      follow the same convention: grab MutableCFOptions at the beginning of compaction
      under the mutex, then pass it throughout the job and register it in the
      SuperVersion at the end.
      
      Test Plan: make all check
      
      Reviewers: igor, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23349
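      
      For context, the user-facing effect of this change is that compaction-related column family options can be tuned on a live DB instead of requiring a reopen. A minimal sketch, assuming the string-keyed DB::SetOptions() entry point; the option names and values below are illustrative:
      
          #include <string>
          #include <unordered_map>
          #include "rocksdb/db.h"
          
          // Sketch: adjust compaction-related options on a running DB. The new values
          // are captured in MutableCFOptions and picked up by the next compaction job,
          // as described in the summary above.
          void TuneCompaction(rocksdb::DB* db) {
            std::unordered_map<std::string, std::string> new_opts = {
                {"level0_file_num_compaction_trigger", "8"},
                {"target_file_size_base", "67108864"},       // 64 MB
                {"max_bytes_for_level_base", "536870912"}};  // 512 MB
            auto s = db->SetOptions(new_opts);  // applied under the DB mutex
            (void)s;  // error handling elided in this sketch
          }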
  2. 23 Sep 2014, 1 commit
    • WriteBatchWithIndex to allow different Comparators for different column families · d0de413f
      Committed by sdong
      Summary:
      Previously, a single column family was given to WriteBatchWithIndex to index keys for all column families. Now an extra map from column family ID to comparator is maintained, which can override the default comparator given in the constructor. A WriteBatchWithIndex::SetComparatorForCF() is added so users can set a comparator per column family.
      
      Also move more code into the anonymous namespace.
      
      Test Plan: Add a unit test
      
      Reviewers: ljin, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb, yhchiang
      
      Differential Revision: https://reviews.facebook.net/D23355
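      
      A hedged sketch of how the new per-CF comparator hook might be used. The method name comes from the summary above; its exact signature (column family ID plus Comparator*) is an assumption, and the CF id 7 is a placeholder:
      
          #include "rocksdb/comparator.h"
          #include "rocksdb/utilities/write_batch_with_index.h"
          
          // Index entries for one column family in its own key order instead of the
          // default comparator passed to the constructor.
          void IndexWithPerCfComparator() {
            rocksdb::WriteBatchWithIndex batch(rocksdb::BytewiseComparator());
            // Assumed signature: (uint32_t column_family_id, const Comparator*).
            batch.SetComparatorForCF(7, rocksdb::ReverseBytewiseComparator());
          }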
  3. 18 Sep 2014, 1 commit
  4. 09 Sep 2014, 1 commit
    • MemTableOptions · 52311463
      Committed by Lei Jin
      Summary: removed references to options in WriteBatch and DBImpl::Get()
      
      Test Plan: make all check
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23049
  5. 05 Sep 2014, 1 commit
  6. 19 Aug 2014, 1 commit
    • WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index · 28b5c760
      Committed by sdong
      Summary:
      Add WriteBatchWithIndex so that a user can query data out of a WriteBatch, to support MongoDB's read-your-own-writes.
      
      WriteBatchWithIndex uses a skiplist to store the binary index. The index stores the offset of the entry in the write batch. When searching for a key, the entry's key is read by reading the entry from the write batch at that offset.
      
      Define a new iterator class for querying data out of WriteBatchWithIndex. A user can create an iterator of the write batch for one column family, seek to a key and keep calling Next() to see subsequent entries.
      
      I will add more unit tests if people are OK with this API.
      
      Test Plan:
      make all check
      Add unit tests.
      
      Reviewers: yhchiang, igor, MarkCallaghan, ljin
      
      Reviewed By: ljin
      
      Subscribers: dhruba, leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D21381
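      
      A minimal read-your-own-writes sketch, assuming the public WriteBatchWithIndex API as it exists today (NewIterator(), WBWIIterator, WriteEntry); keys and values are placeholders:
      
          #include <iostream>
          #include <memory>
          #include "rocksdb/utilities/write_batch_with_index.h"
          
          void ReadOwnWrites() {
            rocksdb::WriteBatchWithIndex batch;
            batch.Put("key1", "value1");
            batch.Put("key2", "value2");
            batch.Delete("key1");
          
            // The iterator walks the skiplist index; each entry points back into the
            // underlying WriteBatch, where the key/value bytes actually live.
            std::unique_ptr<rocksdb::WBWIIterator> it(batch.NewIterator());
            for (it->Seek("key1"); it->Valid(); it->Next()) {
              rocksdb::WriteEntry e = it->Entry();
              std::cout << (e.type == rocksdb::kDeleteRecord ? "delete " : "put ")
                        << e.key.ToString() << std::endl;
            }
          }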
  7. 26 Apr 2014, 1 commit
  8. 15 Mar 2014, 1 commit
  9. 13 Mar 2014, 1 commit
    • [CF] Code cleanup part 1 · fb2346fc
      Committed by Igor Canadi
      Summary:
      I'm cleaning up some code in preparation for the big diff review tomorrow. This is the first part of the cleanup.
      
      Changes are mostly cosmetic. The goal is to decrease the amount of code difference between the columnfamilies and master branches.
      
      This diff also fixes a race condition when dropping a column family.
      
      Test Plan: Ran db_stress with a variety of parameters
      
      Reviewers: dhruba, haobo
      
      Differential Revision: https://reviews.facebook.net/D16833
  10. 27 Feb 2014, 1 commit
    • [CF] Handle failure in WriteBatch::Handler · 8b7ab995
      Committed by Igor Canadi
      Summary:
      * Add a ColumnFamilyHandle::GetID() function. The client needs to know a column family's ID to be able to construct a WriteBatch
      * Handle WriteBatch::Handler failure gracefully. Since WriteBatch is not very smart (it takes a raw CF ID), a client can add data to a WriteBatch for a column family that doesn't exist. In that case, we need to gracefully return a failure status from DB::Write(). To do that, I added a Status return value to the WriteBatch::Handler functions PutCF, DeleteCF and MergeCF.
      
      Test Plan: Added test to column_family_test
      
      Reviewers: dhruba, haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D16323
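      
      A short sketch of the two pieces described above, assuming the present-day interface: ColumnFamilyHandle::GetID() supplies the raw CF id that goes into the batch, and a WriteBatch::Handler reports failure through the Status it returns. The max_known_cf_id bound is a placeholder for the sketch:
      
          #include <cstdint>
          #include "rocksdb/db.h"
          #include "rocksdb/write_batch.h"
          
          // GetID() yields the raw id a Handler later sees in PutCF/DeleteCF/MergeCF.
          uint32_t IdOf(rocksdb::ColumnFamilyHandle* handle) { return handle->GetID(); }
          
          class ValidatingHandler : public rocksdb::WriteBatch::Handler {
           public:
            explicit ValidatingHandler(uint32_t max_known_cf_id)
                : max_known_cf_id_(max_known_cf_id) {}
            rocksdb::Status PutCF(uint32_t cf_id, const rocksdb::Slice& /*key*/,
                                  const rocksdb::Slice& /*value*/) override {
              if (cf_id > max_known_cf_id_) {
                // Unknown column family: fail gracefully instead of silently writing.
                return rocksdb::Status::InvalidArgument("unknown column family");
              }
              return rocksdb::Status::OK();
            }
          
           private:
            uint32_t max_known_cf_id_;  // placeholder bound for the sketch
          };
      
      A WriteBatch::Iterate(&handler) call (and, internally, DB::Write()) then surfaces that Status to the caller.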
  11. 07 Feb 2014, 1 commit
    • [CF] Propagate correct options to WriteBatch::InsertInto · 8fa8a708
      Committed by Igor Canadi
      Summary:
      A WriteBatch can contain writes for multiple column families in one batch, and every column family has different options, so we have to add a way for the write batch to get the options for an arbitrary column family.
      
      This required a bit more acrobatics since lots of interfaces had to be changed.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15957
  12. 04 Feb 2014, 1 commit
  13. 28 Jan 2014, 1 commit
  14. 15 Jan 2014, 1 commit
  15. 09 Jan 2014, 1 commit
    • Add column family information to WAL · 19e3ee64
      Committed by Igor Canadi
      Summary:
      I have added three new value types:
      * kTypeColumnFamilyDeletion
      * kTypeColumnFamilyValue
      * kTypeColumnFamilyMerge
      which include the column family ID as a Varint32 before the data (value, deletion and merge). These value types are used only in the WAL (not in memtables yet).
      
      This endeavour required changing some WriteBatch internals.
      
      Test Plan: Added a unit test
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15045
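      
      To make "column family Varint32 before the data" concrete, here is an illustrative, self-contained sketch of how such a record could be laid out. This is not RocksDB's internal code; the helper names are made up for the example:
      
          #include <cstdint>
          #include <string>
          
          // LevelDB/RocksDB-style varint32: 7 bits per byte, high bit = continuation.
          void PutVarint32(std::string* dst, uint32_t v) {
            while (v >= 0x80) {
              dst->push_back(static_cast<char>((v & 0x7f) | 0x80));
              v >>= 7;
            }
            dst->push_back(static_cast<char>(v));
          }
          
          // For the column-family-aware record types, the varint32 column family ID
          // sits between the type tag and the key/value payload.
          std::string EncodeCfRecord(char value_type, uint32_t cf_id,
                                     const std::string& payload) {
            std::string rec;
            rec.push_back(value_type);  // e.g. kTypeColumnFamilyValue
            PutVarint32(&rec, cf_id);   // column family ID as Varint32
            rec.append(payload);        // key/value encoding follows as before
            return rec;
          }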
  16. 04 Dec 2013, 1 commit
    • Get rid of some shared_ptrs · 043fc14c
      Committed by Igor Canadi
      Summary:
      I went through all remaining shared_ptrs and removed the ones I found unnecessary. Only GenerateCachePrefix() is called fairly often, so don't expect big perf wins.
      
      The ones that are left are accessed infrequently, and I think we're fine with keeping them.
      
      Test Plan: make asan_check
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14427
  17. 02 Dec 2013, 1 commit
  18. 09 Nov 2013, 1 commit
    • WriteBatch::Put() overload that gathers key and value from arrays of slices · 8a46ecd3
      Committed by lovro
      Summary: In our project, when writing to the database, we want to form the value as the concatenation of a small header and a larger payload.  It's a shame to have to copy the payload just so we can give the RocksDB API a linear view of the value.  Since RocksDB makes a copy internally, it's easy to support gather writes.
      
      Test Plan: write_batch_test, new test case
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13947
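      
      A sketch of the gather write, assuming the SliceParts-based Put() overload: the value is handed over as an array of slices (header + payload) and concatenated by RocksDB's internal copy, so the caller never builds a contiguous buffer. Names and arguments are placeholders:
      
          #include "rocksdb/db.h"
          #include "rocksdb/slice.h"
          #include "rocksdb/write_batch.h"
          
          void GatherWrite(rocksdb::DB* db, const rocksdb::Slice& key,
                           const rocksdb::Slice& header, const rocksdb::Slice& payload) {
            rocksdb::Slice key_parts[1] = {key};
            rocksdb::Slice value_parts[2] = {header, payload};
          
            rocksdb::WriteBatch batch;
            // The two value parts are concatenated when RocksDB copies the entry
            // into the batch; no extra copy is needed on the caller's side.
            batch.Put(rocksdb::SliceParts(key_parts, 1),
                      rocksdb::SliceParts(value_parts, 2));
            rocksdb::Status s = db->Write(rocksdb::WriteOptions(), &batch);
            (void)s;  // error handling elided in this sketch
          }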
  19. 01 Nov 2013, 1 commit
    • In-place updates for equal keys and similar sized values · fe250702
      Committed by Naman Gupta
      Summary:
      Currently, for each put, fresh memory is allocated and a new entry is added to the memtable with a new sequence number, irrespective of whether the key already exists in the memtable. This diff is an attempt to update the value in place for existing keys. It currently handles a very simple case:
      1. The key already exists in the current memtable. Values in immutable memtables or snapshots are not updated in place.
      2. The latest value type is a 'put', i.e. kTypeValue.
      3. The new value is smaller than the existing value, to avoid reallocating memory.
      
      TODO: For a put of an existing key, deallocate the memory taken by values of other value types until a kTypeValue is found, i.e. remove kTypeMerge entries.
      TODO: Update the transaction log to allow consistent reload of the memtable.
      
      Test Plan: Added a unit test verifying the in-place update. But some other unit tests are broken due to invalid sequence number checks. Will fix them next.
      
      Reviewers: xinyaohu, sumeet, haobo, dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12423
      
      Automatic commit by arc
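      
      A minimal sketch of opting in to this path, assuming the inplace_update_support option that exposes it; per the summary it only applies when the key is already in the active memtable, the latest entry is a kTypeValue put, and the new value is smaller than the old one.
      
          #include <string>
          #include "rocksdb/db.h"
          #include "rocksdb/options.h"
          
          rocksdb::DB* OpenWithInplaceUpdates(const std::string& path) {
            rocksdb::Options options;
            options.create_if_missing = true;
            options.inplace_update_support = true;     // update values in place when safe
            options.inplace_update_num_locks = 10000;  // striped locks guarding updates
          
            rocksdb::DB* db = nullptr;
            rocksdb::Status s = rocksdb::DB::Open(options, path, &db);
            return s.ok() ? db : nullptr;
          }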
  20. 17 Oct 2013, 1 commit
  21. 05 Oct 2013, 1 commit
  22. 24 Aug 2013, 1 commit
  23. 23 Aug 2013, 1 commit
    • Add three new MemTableRep's · 74781a0c
      Committed by Jim Paton
      Summary:
      This patch adds three new MemTableRep's: UnsortedRep, PrefixHashRep, and VectorRep.
      
      UnsortedRep stores keys in an std::unordered_map of std::sets. When an iterator is requested, it dumps the keys into an std::set and iterates over that.
      
      VectorRep stores keys in an std::vector. When an iterator is requested, it creates a copy of the vector and sorts it using std::sort. The iterator accesses that new vector.
      
      PrefixHashRep stores keys in an unordered_map mapping prefixes to ordered sets.
      
      I also made one API change: I added a function, MemTableRep::MarkImmutable, which is called when the rep is added to the immutable list. It doesn't do anything yet, but it seems like it could be useful. In particular, for the VectorRep it means we could elide the extra copy and just sort in place. The only reason I haven't done that yet is that the use of the ArenaAllocator complicates things (I can elaborate on this if needed).
      
      Test Plan:
      make -j32 check
      ./db_stress --memtablerep=vector
      ./db_stress --memtablerep=unsorted
      ./db_stress --memtablerep=prefixhash --prefix_size=10
      
      Reviewers: dhruba, haobo, emayanke
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12117
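      
      A sketch of selecting alternative memtable representations through Options, assuming the factory helpers in today's rocksdb/memtablerep.h (the prefix-hash rep is exposed there as a hash-skiplist factory; UnsortedRep has no present-day factory and is omitted):
      
          #include "rocksdb/memtablerep.h"
          #include "rocksdb/options.h"
          #include "rocksdb/slice_transform.h"
          
          void UseAlternativeMemtables() {
            // VectorRep: appends into a vector, sorts when iteration is requested.
            rocksdb::Options vector_opts;
            vector_opts.memtable_factory.reset(new rocksdb::VectorRepFactory());
          
            // Prefix-hash style rep: needs a prefix extractor to bucket keys.
            rocksdb::Options prefix_hash_opts;
            prefix_hash_opts.prefix_extractor.reset(
                rocksdb::NewFixedPrefixTransform(10));
            prefix_hash_opts.memtable_factory.reset(
                rocksdb::NewHashSkipListRepFactory());
          }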
  24. 22 Aug 2013, 1 commit
    • Allow WriteBatch::Handler to abort iteration · cb703c9d
      Committed by Jim Paton
      Summary:
      Sometimes you don't need to iterate through the whole WriteBatch. This diff makes the Handler member functions return a bool that indicates whether to abort. If they return true, the iteration stops.
      
      One thing I just thought of is that this will break backwards-compatibility. Maybe it would be better to add a virtual member function WriteBatch::Handler::ShouldAbort() that returns false by default. Comments requested.
      
      I still have to add a new unit test for the abort code, but let's finalize the API first.
      
      Test Plan: make -j32 check
      
      Reviewers: dhruba, haobo, vamsi, emayanke
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12339
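      
      For reference, present-day RocksDB exposes this capability as WriteBatch::Handler::Continue(), which Iterate() checks between entries and which returns true by default; a minimal sketch assuming that interface:
      
          #include "rocksdb/write_batch.h"
          
          // Stop iterating after the first ten entries (an arbitrary cap for the sketch).
          class FirstTenHandler : public rocksdb::WriteBatch::Handler {
           public:
            rocksdb::Status PutCF(uint32_t /*cf_id*/, const rocksdb::Slice& /*key*/,
                                  const rocksdb::Slice& /*value*/) override {
              ++seen_;
              return rocksdb::Status::OK();
            }
            bool Continue() override { return seen_ < 10; }
          
           private:
            int seen_ = 0;
          };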
  25. 15 Aug 2013, 1 commit
    • Implement log blobs · 0307c5fe
      Committed by Jim Paton
      Summary:
      This patch adds the ability for the user to add sequences of arbitrary data (blobs) to write batches. These blobs are saved to the log along with everything else in the write batch. You can add multiple blobs per WriteBatch, and the ordering of blobs, puts, merges, and deletes is preserved.
      
      Blobs are not saved to SST files. RocksDB ignores blobs in every way except for writing them to the log.
      
      Before committing this patch, I need to add some test code. But I'm submitting it now so people can comment on the API.
      
      Test Plan: make -j32 check
      
      Reviewers: dhruba, haobo, vamsi
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12195
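      
      A short usage sketch, assuming the PutLogData() API that carries these blobs: the blob travels in the WAL in order with the other operations but never reaches memtables or SST files, and a log reader can observe it through WriteBatch::Handler::LogData(). The blob contents here are placeholders:
      
          #include "rocksdb/db.h"
          #include "rocksdb/write_batch.h"
          
          void WriteWithBlob(rocksdb::DB* db) {
            rocksdb::WriteBatch batch;
            batch.Put("key", "value");
            // Placeholder blob; only the log (and log readers) will ever see it.
            batch.PutLogData("replication-metadata-goes-here");
            batch.Delete("old-key");
            rocksdb::Status s = db->Write(rocksdb::WriteOptions(), &batch);
            (void)s;  // error handling elided in this sketch
          }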
  26. 24 Jul 2013, 1 commit
    • Virtualize SkipList Interface · 52d7ecfc
      Committed by Jim Paton
      Summary: This diff virtualizes the skiplist interface so that users can provide their own implementation of a backing store for MemTables. Eventually, the backing store will be responsible for its own synchronization, allowing users (and us) to experiment with different lockless implementations.
      
      Test Plan:
      make clean
      make -j32 check
      ./db_stress
      
      Reviewers: dhruba, emayanke, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11739
  27. 27 Jun 2013, 1 commit
  28. 04 May 2013, 1 commit
    • [Rocksdb] Support Merge operation in rocksdb · 05e88540
      Committed by Haobo Xu
      Summary:
      This diff introduces a new Merge operation into rocksdb.
      The purpose of this review is mostly getting feedback from the team (everyone please) on the design.
      
      Please focus on the four files under include/leveldb/, as they spell out the client-visible interface change.
      include/leveldb/db.h
      include/leveldb/merge_operator.h
      include/leveldb/options.h
      include/leveldb/write_batch.h
      
      Please go over local/my_test.cc carefully, as it is a concrete use case.
      
      Please also review the implementation files to see if the straw man implementation makes sense.
      
      Note that the diff does pass all make check tests and truly supports a forward iterator over the db and a version
      of Get that's based on an iterator.
      
      Future work:
      - Integration with compaction
      - A raw Get implementation
      
      I am working on a wiki that explains the design and implementation choices, but coding comes
      just naturally and I think it might be a good idea to share the code earlier. The code is
      heavily commented.
      
      Test Plan: run all local tests
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: dhruba
      
      CC: leveldb, zshao, sheki, emayanke, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D9651
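      
      A client-side sketch of the Merge operation, assuming the AssociativeMergeOperator convenience base class from today's merge_operator.h (the diff above exposes the lower-level MergeOperator interface); the counter encoding is a placeholder:
      
          #include <cstdint>
          #include <string>
          #include "rocksdb/db.h"
          #include "rocksdb/merge_operator.h"
          
          // A uint64 counter that is incremented with DB::Merge() instead of a
          // read-modify-write round trip.
          class UInt64AddOperator : public rocksdb::AssociativeMergeOperator {
           public:
            bool Merge(const rocksdb::Slice& /*key*/, const rocksdb::Slice* existing,
                       const rocksdb::Slice& value, std::string* new_value,
                       rocksdb::Logger* /*logger*/) const override {
              uint64_t base = existing ? std::stoull(existing->ToString()) : 0;
              *new_value = std::to_string(base + std::stoull(value.ToString()));
              return true;
            }
            const char* Name() const override { return "UInt64AddOperator"; }
          };
          
          // Usage: options.merge_operator = std::make_shared<UInt64AddOperator>();
          //        db->Merge(rocksdb::WriteOptions(), "counter", "1");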
  29. 07 Nov 2012, 1 commit
  30. 09 Mar 2012, 1 commit
  31. 01 Nov 2011, 1 commit
    • A number of fixes: · 36a5f8ed
      Committed by Hans Wennborg
      - Replace raw slice comparison with a call to user comparator.
        Added test for custom comparators.
      
      - Fix end of namespace comments.
      
      - Fixed bug in picking inputs for a level-0 compaction.
      
        When finding overlapping files, the covered range may expand
        as files are added to the input set.  We now correctly expand
        the range when this happens instead of continuing to use the
        old range.  For example, suppose L0 contains files with the
        following ranges:
      
            F1: a .. d
            F2:    c .. g
            F3:       f .. j
      
        and the initial compaction target is F3.  We used to search
        for range f..j, which yielded {F2,F3}.  However, we now expand
        the range as soon as another file is added.  In this case,
        when F2 is added, we expand the range to c..j and restart the
        search.  That picks up file F1 as well.
      
        This change fixes a bug related to deleted keys showing up
        incorrectly after a compaction as described in Issue 44.
      
      (Sync with upstream @25072954)
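      
      A schematic of the fixed-point expansion the level-0 fix describes (not RocksDB's actual code; FileRange and GetOverlappingFiles are made up for the illustration):
      
          #include <string>
          #include <vector>
          
          struct FileRange { std::string smallest, largest; };
          
          // All level-0 files whose [smallest, largest] range overlaps [begin, end].
          std::vector<FileRange> GetOverlappingFiles(const std::vector<FileRange>& level0,
                                                     const std::string& begin,
                                                     const std::string& end) {
            std::vector<FileRange> out;
            for (const auto& f : level0) {
              if (f.largest >= begin && f.smallest <= end) out.push_back(f);
            }
            return out;
          }
          
          // Keep widening [begin, end] until the overlapping set stops growing, so that
          // F1 (a..d) is picked up once F2 (c..g) drags the range from f..j down to c..j.
          std::vector<FileRange> PickLevel0Inputs(const std::vector<FileRange>& level0,
                                                  std::string begin, std::string end) {
            std::vector<FileRange> inputs = GetOverlappingFiles(level0, begin, end);
            while (true) {
              std::string new_begin = begin, new_end = end;
              for (const auto& f : inputs) {
                if (f.smallest < new_begin) new_begin = f.smallest;
                if (f.largest > new_end) new_end = f.largest;
              }
              if (new_begin == begin && new_end == end) return inputs;  // fixed point
              begin = new_begin;
              end = new_end;
              inputs = GetOverlappingFiles(level0, begin, end);  // restart the search
            }
          }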
  32. 21 May 2011, 1 commit
  33. 21 Apr 2011, 1 commit
  34. 20 Apr 2011, 2 commits
  35. 19 Apr 2011, 1 commit
  36. 13 Apr 2011, 1 commit
  37. 31 Mar 2011, 1 commit
  38. 19 Mar 2011, 1 commit