1. 02 Jun 2015 (2 commits)
    • Allow users to migrate to options.level_compaction_dynamic_level_bytes=true using CompactRange() · 4266d4fd
      Committed by sdong
      Summary: In DB::CompactRange(), change the parameter "reduce_level" to "change_level". Users can compact all data to the last level if needed. By doing so, they can migrate the DB to options.level_compaction_dynamic_level_bytes=true.
      
      Test Plan: Add a unit test for it.
      
      Reviewers: yhchiang, anthony, kradhakrishnan, igor, rven
      
      Reviewed By: rven
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D39099
      4266d4fd
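      A usage sketch of the migration path described above, assuming the pre-CompactRangeOptions overload that takes change_level/target_level directly (the DB path is hypothetical):
      ```lang=cpp
      #include "rocksdb/db.h"

      int main() {
        rocksdb::Options options;
        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/testdb", &db);
        if (!s.ok()) return 1;

        // change_level=true places the compacted output at target_level; the
        // last level is where level_compaction_dynamic_level_bytes expects
        // the bulk of the data to live.
        s = db->CompactRange(nullptr, nullptr,
                             /*change_level=*/true,
                             /*target_level=*/options.num_levels - 1);
        delete db;
        if (!s.ok()) return 1;

        // Reopen with the new option enabled.
        options.level_compaction_dynamic_level_bytes = true;
        s = rocksdb::DB::Open(options, "/tmp/testdb", &db);
        delete db;
        return s.ok() ? 0 : 1;
      }
      ```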
    • Removed DBImpl::notifying_events_ · d333820b
      Committed by Yueh-Hsuan Chiang
      Summary:
      DBImpl::notifying_events_ is an internal counter in DBImpl that is
      used to prevent the DB from closing while it is notifying events.
      However, since all current events rely on either compaction or flush,
      which already have similar counters preventing DB close, it is safe
      to remove notifying_events_.
      
      Test Plan:
      listener_test
      examples/compact_files_example
      
      Reviewers: igor, anthony, kradhakrishnan, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39315
      d333820b
  2. 30 May 2015 (1 commit)
    • Optimistic Transactions · dc9d70de
      Committed by agiardullo
      Summary: Optimistic transactions supporting begin/commit/rollback semantics.  Conflict detection currently relies on checking the memtable for collisions at commit time.  Not yet implemented is a way of ensuring the memtable retains some minimum amount of history, so that we won't fail to commit when the memtable is empty.  You should probably start with transaction.h to get an overview of what is currently supported.
      
      Test Plan: Added a new test, but still need to look into stress testing.
      
      Reviewers: yhchiang, igor, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: adamretter, MarkCallaghan, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D33435
      dc9d70de
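      A begin/commit/rollback sketch of this interface. It uses the later, unified Transaction/OptimisticTransactionDB names and headers, which may differ slightly from this diff (the summary points at transaction.h as the authoritative surface):
      ```lang=cpp
      #include "rocksdb/utilities/optimistic_transaction_db.h"
      #include "rocksdb/utilities/transaction.h"

      int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        rocksdb::OptimisticTransactionDB* txn_db = nullptr;
        rocksdb::Status s = rocksdb::OptimisticTransactionDB::Open(
            options, "/tmp/txn_testdb", &txn_db);
        if (!s.ok()) return 1;

        rocksdb::Transaction* txn =
            txn_db->BeginTransaction(rocksdb::WriteOptions());
        txn->Put("key1", "value1");

        // Conflicts are checked against the memtable at commit time; a
        // collision with a concurrent write yields a non-OK status here.
        s = txn->Commit();
        if (!s.ok()) {
          txn->Rollback();
        }
        delete txn;
        delete txn_db;
        return 0;
      }
      ```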
  3. 27 May 2015 (1 commit)
  4. 22 May 2015 (1 commit)
  5. 12 May 2015 (1 commit)
    • API to fetch from both a WriteBatchWithIndex and the db · 711465cc
      Committed by agiardullo
      Summary:
      Added a couple of functions to WriteBatchWithIndex to make it easier to query the value of a key, including reading pending writes from a batch.  (This is needed for transactions.)
      
      I created write_batch_with_index_internal.h to hold an internal-only helper function, since there wasn't a good place in the existing class hierarchy for it (and it didn't seem right to stick this function inside WriteBatchInternal::Rep).
      
      Since I needed to access the WriteBatchEntryComparator, I moved some helper classes from write_batch_with_index.cc into write_batch_with_index_internal.h/.cc.  WriteBatchIndexEntry, ReadableWriteBatch, and WriteBatchEntryComparator are all unchanged (just moved to different files).
      
      Test Plan: Added new unit tests.
      
      Reviewers: rven, yhchiang, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38037
      711465cc
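      A sketch of the two read paths this adds, assuming the GetFromBatch/GetFromBatchAndDB names and an already-open db handle:
      ```lang=cpp
      #include <string>

      #include "rocksdb/db.h"
      #include "rocksdb/utilities/write_batch_with_index.h"

      void QueryPendingWrites(rocksdb::DB* db) {
        rocksdb::WriteBatchWithIndex batch;
        batch.Put("k1", "pending-value");

        std::string value;
        // Sees only the batch's own, not-yet-committed writes.
        rocksdb::Status s =
            batch.GetFromBatch(rocksdb::DBOptions(), "k1", &value);

        // Checks the batch first, then falls through to the DB -- the read
        // path a transaction needs to observe its own uncommitted writes.
        s = batch.GetFromBatchAndDB(db, rocksdb::ReadOptions(), "k2", &value);
      }
      ```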
  6. 06 May 2015 (1 commit)
  7. 28 Apr 2015 (1 commit)
    • Include a bunch more events into EventLogger · 1bb4928d
      Committed by Igor Canadi
      Summary:
      Added these events:
      * Recovery start, finish and also when recovery creates a file
      * Trivial move
      * Compaction start, finish and when compaction creates a file
      * Flush start, finish
      
      Also includes small fix to EventLogger
      
      Also added the option ROCKSDB_PRINT_EVENTS_TO_STDOUT, which is useful when debugging. I've spent far too much time chasing LOG files.
      
      Still didn't get sst table properties into the JSON output. They are written very deep in the stack; I'll address that in a separate diff.
      
      TODO:
      * Write a specification. Let's first use this for a while and figure out what data is good to put here; after that, we'll write the spec.
      * Write tools that parse and analyze LOGs. These could be in Python or Go. Good intern task.
      
      Test Plan: Ran db_bench with ROCKSDB_PRINT_EVENTS_TO_STDOUT. Here's the output: https://phabricator.fb.com/P19811976
      
      Reviewers: sdong, yhchiang, rven, MarkCallaghan, kradhakrishnan, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37521
      1bb4928d
  8. 24 Apr 2015 (1 commit)
    • Implement DB::PromoteL0 method · 2dc421df
      Committed by Giuseppe Ottaviano
      Summary:
      This diff implements a new `DB` method `PromoteL0` which moves all files in L0
      to a given level skipping compaction, provided that the files have disjoint
      ranges and all levels up to the target level are empty.
      
      This method provides finer-grained control over trivial compactions, and it is
      useful for bulk-loading pre-sorted keys. Compared to D34797, it does not change
      the semantics of an existing operation (which could impact existing code).
      
      PromoteL0 is designed to work well in combination with the proposed
      `GetSstFileWriter`/`AddFile` interface, enabling users to "design" the level
      structure by populating one level at a time. Such fine-grained control can be
      very useful for static or mostly-static databases.
      
      Test Plan: `make check`
      
      Reviewers: IslamAbdelRahman, philipp, MarkCallaghan, yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D37107
      2dc421df
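      A bulk-loading sketch of the new method; the exact signature (column family handle plus target level) is assumed from the summary:
      ```lang=cpp
      #include "rocksdb/db.h"

      // After ingesting pre-sorted, non-overlapping data that sits in L0,
      // move those files to a deeper level without rewriting any of them.
      rocksdb::Status PromoteBulkLoad(rocksdb::DB* db) {
        // Fails unless the L0 files have disjoint ranges and levels
        // 1..target_level are empty, per the preconditions above.
        return db->PromoteL0(db->DefaultColumnFamily(), /*target_level=*/2);
      }
      ```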
  9. 18 Apr 2015 (1 commit)
    • Add experimental API MarkForCompaction() · 6059bdf8
      Committed by Igor Canadi
      Summary:
      Some Mongo+Rocks datasets in Parse's environment are not compacting very frequently. During quiet periods (with no IO), we'd like to schedule compactions so that our reads become faster. Aggressively compacting during quiet periods also helps when write bursts happen. In addition, we want to compact files that contain deleted key ranges (like old oplog keys).
      
      All of this is currently not possible with CompactRange() because it's single-threaded and blocks all other compactions from happening. Running CompactRange() also risks blocking writes, because we generate too many Level 0 files before the compaction is over. Stopping writes is very dangerous because they hold transaction locks. We tried running manual compaction once on Mongo+Rocks and everything fell apart.
      
      MarkForCompaction() solves all of those problems. It is a very lightweight manual compaction. It is lower priority than automatic compactions, which means it shouldn't interfere with the background processes keeping the LSM tree clean. However, if no automatic compactions need to be run (or we have extra background threads available), we will start compacting files that are marked for compaction.
      
      Test Plan: added a new unit test
      
      Reviewers: yhchiang, rven, MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: yoshinorim, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37083
      6059bdf8
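      A sketch of triggering the marking from user code, assuming the experimental SuggestCompactRange entry point that exposes this mechanism (header and signature are assumptions):
      ```lang=cpp
      #include "rocksdb/db.h"
      #include "rocksdb/experimental.h"

      // Mark every file overlapping the given range for low-priority
      // compaction. Unlike CompactRange(), this returns immediately; the
      // marked files are picked up when compaction threads would otherwise
      // be idle.
      rocksdb::Status NudgeCompaction(rocksdb::DB* db) {
        return rocksdb::experimental::SuggestCompactRange(
            db, db->DefaultColumnFamily(), /*begin=*/nullptr, /*end=*/nullptr);
      }
      ```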
  10. 31 Mar 2015 (1 commit)
    • Clean up old log files in background threads · fd3dbef2
      Committed by Igor Canadi
      Summary:
      Cleaning up log files can do heavy IO, since we call ftruncate() in the destructor. We don't want to call ftruncate() in user threads.
      
      This diff moves the cleaning to background threads (flush and compaction).
      
      Test Plan: make check, will also run valgrind
      
      Reviewers: yhchiang, rven, MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36177
      fd3dbef2
  11. 17 Mar 2015 (1 commit)
  12. 14 Mar 2015 (1 commit)
    • EventLogger · 52d8347a
      Committed by Igor Canadi
      Summary:
      Here's my proposal for making our LOGs easier to read by machines.
      
      The idea is to dump all events as JSON objects. JSON is easy to read by humans, but more importantly, it's easy to read by machines. That way, we can parse this, load into SQLite/mongo and then query or visualize.
      
      I started with table_create and table_delete events, but if everybody agrees, I'll continue by adding more events (flush/compaction/etc.).
      
      Test Plan:
      Ran db_bench. Observed:
      2015/01/15-14:13:25.788019 1105ef000 EVENT_LOG_v1 {"time_micros": 1421360005788015, "event": "table_file_creation", "file_number": 12, "file_size": 1909699}
      2015/01/15-14:13:25.956500 110740000 EVENT_LOG_v1 {"time_micros": 1421360005956498, "event": "table_file_deletion", "file_number": 12}
      
      Reviewers: yhchiang, rven, dhruba, MarkCallaghan, lgalanis, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31647
      52d8347a
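      The parsing-tools TODO is straightforward given the line format shown in the Test Plan; a minimal sketch (a hypothetical helper, not part of RocksDB):
      ```lang=cpp
      #include <string>

      // Extract the JSON payload from an EVENT_LOG_v1 line so it can be fed
      // to any JSON parser or loaded into SQLite/mongo for querying.
      // Returns an empty string for non-event lines.
      std::string ExtractEventJson(const std::string& log_line) {
        static const std::string kMarker = "EVENT_LOG_v1 ";
        const size_t pos = log_line.find(kMarker);
        if (pos == std::string::npos) return "";
        return log_line.substr(pos + kMarker.size());
      }
      ```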
  13. 12 Mar 2015 (2 commits)
  14. 27 Feb 2015 (1 commit)
    • rocksdb: Add missing override · 62247ffa
      Committed by Igor Sugak
      Summary:
      When using the latest clang (3.6 or 3.7/trunk), rocksdb fails to build with many errors. Almost all of them are missing-override errors. This diff adds the missing override keywords. No manual changes.
      
      Prerequisites: bear and a clang 3.5 build with extra tools
      
      ```lang=bash
      % USE_CLANG=1 bear make all # generate a compilation database http://clang.llvm.org/docs/JSONCompilationDatabase.html
      % clang-modernize -p . -include . -add-override
      % make format
      ```
      
      Test Plan:
      Make sure all tests are passing.
      ```lang=bash
      % #Use default fb code clang.
      % make check
      ```
      Verify fewer errors and no missing-override errors.
      ```lang=bash
      % # Have trunk clang present in path.
      % ROCKSDB_NO_FBCODE=1 CC=clang CXX=clang++ make
      ```
      
      Reviewers: igor, kradhakrishnan, rven, meyering, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34077
      62247ffa
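      For illustration, the kind of mechanical rewrite clang-modernize performs here (hypothetical types, not from the diff):
      ```lang=cpp
      struct FileBase {
        virtual ~FileBase() {}
        virtual void Flush() {}
      };

      struct MyFile : public FileBase {
        // Before the rewrite, Flush() silently overrode the base method and
        // newer clang warned about the inconsistency; with the keyword the
        // intent is explicit, and the compiler errors out if the base
        // signature ever drifts.
        void Flush() override {}
      };
      ```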
  15. 13 Feb 2015 (1 commit)
    • Introduce job_id for flush and compaction · e7ea51a8
      Committed by Igor Canadi
      Summary:
      It would be good to assign background jobs their own IDs. Two benefits:
      1) makes LOGs more readable
      2) I might use it in my EventLogger, which will try to make our LOG easier to read/query/visualize
      
      Test Plan: ran rocksdb, read the LOG
      
      Reviewers: sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31617
      e7ea51a8
  16. 05 Feb 2015 (1 commit)
    • Add a counter for collecting the wait time on db mutex. · 181191a1
      Committed by Yueh-Hsuan Chiang
      Summary:
      Add a counter for collecting the wait time on db mutex.
      Also add MutexWrapper and CondVarWrapper for measuring wait time.
      
      Test Plan:
      ./db_test
      export ROCKSDB_TESTS=MutexWaitStats
      ./db_test
      
      verify stats output using db_bench
      make clean
      make release
      ./db_bench --statistics=1 --benchmarks=fillseq,readwhilewriting --num=10000 --threads=10
      
      Sample output:
          rocksdb.db.mutex.wait.micros COUNT : 7546866
      
      Reviewers: MarkCallaghan, rven, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D32787
      181191a1
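      Reading the counter from application code, assuming DB_MUTEX_WAIT_MICROS is the ticker enum behind "rocksdb.db.mutex.wait.micros":
      ```lang=cpp
      #include <cstdint>

      #include "rocksdb/options.h"
      #include "rocksdb/statistics.h"

      // Enable statistics before opening the DB, run the workload, then
      // read the accumulated wait time on the DB mutex.
      uint64_t MutexWaitMicros(rocksdb::Options* options) {
        if (!options->statistics) {
          options->statistics = rocksdb::CreateDBStatistics();
        }
        // ... open the DB with *options and run the workload ...
        return options->statistics->getTickerCount(
            rocksdb::DB_MUTEX_WAIT_MICROS);
      }
      ```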
  17. 28 Jan 2015 (1 commit)
  18. 27 Jan 2015 (2 commits)
    • Rename DBImpl::log_dir_unsynced_ to log_dir_synced_ · be8f0b12
      Committed by sdong
      Summary: log_dir_unsynced_ is a confusing name. Rename it to log_dir_synced_ and flip the value.
      
      Test Plan: Run ./fault_injection_test
      
      Reviewers: rven, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D32235
      be8f0b12
    • Sync WAL Directory and DB Path if different from DB directory · d888c957
      Committed by sdong
      Summary:
      1. If the WAL directory is different from the DB directory, sync the directory after creating a log file under it.
      2. After creating an SST file, sync its parent directory instead of the DB directory.
      3. Change the kResetDeleteUnsyncedFiles check in fault_injection_test. Since we changed the behavior to sync a log file's parent directory after the first WAL sync, instead of at creation, kResetDeleteUnsyncedFiles is no longer guaranteed to show post-sync updates.
      
      Test Plan: make all check
      
      Reviewers: yhchiang, rven, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D32067
      d888c957
  19. 13 Jan 2015 (1 commit)
  20. 22 Dec 2014 (1 commit)
    • Speed up FindObsoleteFiles() · 0acc7388
      Committed by Igor Canadi
      Summary:
      There are two versions of FindObsoleteFiles():
      * full scan, which is executed every 6 hours (and it's terribly slow)
      * no full scan, which is executed every time a background process finishes and an iterator is deleted
      
      This diff is optimizing the second case (no full scan). Here's what we do before the diff:
      * Get the list of obsolete files (files with ref==0). Some files in the obsolete_files set might actually be live.
      * Get the list of live files to avoid deleting files that are live.
      * Delete files that are in obsolete_files and not in live_files.
      
      After this diff:
      * The only files with ref==0 that are still live are files that have been part of move compaction. Don't include moved files in obsolete_files.
      * Get the list of obsolete files (which exclude moved files).
      * No need to get the list of live files, since all files in obsolete_files need to be deleted.
      
      I'll post the benchmark results, but you can get the feel of it here: https://reviews.facebook.net/D30123
      
      This depends on D30123.
      
      P.S. We should do full scan only in failure scenarios, not every 6 hours. I'll do this in a follow-up diff.
      
      Test Plan:
      One new unit test. Made sure that the unit test fails if we don't have an `if (!f->moved)` safeguard in ~Version.
      
      make check
      
      Big number of compactions and flushes:
      
        ./db_stress --threads=30 --ops_per_thread=20000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=15 --max_background_compactions=10 --max_background_flushes=10 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000
      
      Reviewers: yhchiang, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D30249
      0acc7388
  21. 20 Dec 2014 (1 commit)
    • Rewritten system for scheduling background work · fdb6be4e
      Committed by Igor Canadi
      Summary:
      When scaling to higher number of column families, the worst bottleneck was MaybeScheduleFlushOrCompaction(), which did a for loop over all column families while holding a mutex. This patch addresses the issue.
      
      The approach is similar to our earlier efforts: instead of a pull model, where we do something for every column family, we use a push model -- when we detect that a column family is ready to be flushed/compacted, we add it to flush_queue_/compaction_queue_. That way we don't need to loop over every column family in MaybeScheduleFlushOrCompaction.
      
      Here are the performance results:
      
      Command:
      
          ./db_bench --write_buffer_size=268435456 --db_write_buffer_size=268435456 --db=/fast-rocksdb-tmp/rocks_lots_of_cf --use_existing_db=0 --open_files=55000 --statistics=1 --histogram=1 --disable_data_sync=1 --max_write_buffer_number=2 --sync=0 --benchmarks=fillrandom --threads=16 --num_column_families=5000  --disable_wal=1 --max_background_flushes=16 --max_background_compactions=16 --level0_file_num_compaction_trigger=2 --level0_slowdown_writes_trigger=2 --level0_stop_writes_trigger=3 --hard_rate_limit=1 --num=33333333 --writes=33333333
      
      Before the patch:
      
           fillrandom   :      26.950 micros/op 37105 ops/sec;    4.1 MB/s
      
      After the patch:
      
            fillrandom   :      17.404 micros/op 57456 ops/sec;    6.4 MB/s
      
      The next bottleneck is VersionSet::AddLiveFiles, which is painfully slow when we have a lot of files. A fix is coming in the next patch, but when I removed that code, here's what I got:
      
            fillrandom   :       7.590 micros/op 131758 ops/sec;   14.6 MB/s
      
      Test Plan:
      make check
      
      two stress tests:
      
      Big number of compactions and flushes:
      
          ./db_stress --threads=30 --ops_per_thread=20000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=15 --max_background_compactions=10 --max_background_flushes=10 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000
      
      max_background_flushes=0, to verify that this case also works correctly
      
          ./db_stress --threads=30 --ops_per_thread=2000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=3 --max_background_compactions=3 --max_background_flushes=0 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000
      
      Reviewers: ljin, rven, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D30123
      fdb6be4e
  22. 12 Dec 2014 (1 commit)
    • Improve scalability of DB::GetSnapshot() · d7a48666
      Committed by sdong
      Summary: Currently, DB::GetSnapshot() doesn't scale to a large number of column families, as it needs to go through all column families to determine whether snapshots are supported. This patch optimizes it.
      
      Test Plan:
      Add unit tests to cover negative cases.
      make all check
      
      Reviewers: yhchiang, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D30093
      d7a48666
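      For context, the call being optimized, in its ordinary usage (the interface itself is unchanged by this patch):
      ```lang=cpp
      #include <string>

      #include "rocksdb/db.h"

      // Pin a consistent view, read through it, then release it.
      void ReadAtSnapshot(rocksdb::DB* db) {
        const rocksdb::Snapshot* snapshot = db->GetSnapshot();
        rocksdb::ReadOptions read_options;
        read_options.snapshot = snapshot;
        std::string value;
        db->Get(read_options, "some-key", &value);
        db->ReleaseSnapshot(snapshot);
      }
      ```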
  23. 09 Dec 2014 (1 commit)
  24. 06 Dec 2014 (1 commit)
  25. 03 Dec 2014 (1 commit)
    • Enforce write buffer memory limit across column families · a14b7873
      Committed by Jonah Cohen
      Summary:
      Introduces a new class for managing write buffer memory across column
      families.  We supplement ColumnFamilyOptions::write_buffer_size with
      ColumnFamilyOptions::write_buffer, a shared pointer to a WriteBuffer
      instance that enforces memory limits before flushing out to disk.
      
      Test Plan: Added SharedWriteBuffer unit test to db_test.cc
      
      Reviewers: sdong, rven, ljin, igor
      
      Reviewed By: igor
      
      Subscribers: tnovak, yhchiang, dhruba, xjin, MarkCallaghan, yoshinorim
      
      Differential Revision: https://reviews.facebook.net/D22581
      a14b7873
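      A sketch of wiring up the shared limit, assuming db_write_buffer_size is the user-facing knob (it is the option the db_bench runs in the newer commits above exercise):
      ```lang=cpp
      #include "rocksdb/db.h"

      // Each memtable still targets write_buffer_size, while the shared
      // WriteBuffer caps total memtable memory across all column families.
      rocksdb::Status OpenWithSharedBuffer(rocksdb::DB** db) {
        rocksdb::Options options;
        options.create_if_missing = true;
        options.write_buffer_size = 64 * 1024 * 1024;      // per memtable
        options.db_write_buffer_size = 256 * 1024 * 1024;  // across all CFs
        return rocksdb::DB::Open(options, "/tmp/shared_wb_db", db);
      }
      ```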
  26. 21 Nov 2014 (2 commits)
    • Moved checkpoint to utilities · 004f416b
      Committed by Venkatesh Radhakrishnan
      Summary:
      Moved checkpoint to utilities.
      Addressed comments by Igor, Siying, Dhruba
      
      Test Plan: db_test/SnapshotLink
      
      Reviewers: dhruba, igor, sdong
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D29079
      004f416b
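      Usage of the relocated utility, assuming the Checkpoint factory shape it settled on under utilities/:
      ```lang=cpp
      #include "rocksdb/db.h"
      #include "rocksdb/utilities/checkpoint.h"

      // Create an openable snapshot of a running DB; links to live files
      // are used when the checkpoint directory is on the same disk.
      rocksdb::Status TakeCheckpoint(rocksdb::DB* db) {
        rocksdb::Checkpoint* checkpoint = nullptr;
        rocksdb::Status s = rocksdb::Checkpoint::Create(db, &checkpoint);
        if (!s.ok()) return s;
        // The target directory must not already exist.
        s = checkpoint->CreateCheckpoint("/tmp/my_checkpoint");
        delete checkpoint;
        return s;
      }
      ```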
    • Introduce GetThreadList API · d0c5f28a
      Committed by Yueh-Hsuan Chiang
      Summary:
      Add the GetThreadList API, which allows developers to track the
      status of each thread.  Currently, calling GetThreadList will
      only return the list of background threads in RocksDB, with their
      thread-id and thread-type (priority) set.  More support will be
      added in later diffs.
      
      ThreadStatus currently has the following properties:
      
        // A unique ID for the thread.
        const uint64_t thread_id;
      
        // The type of the thread; it could be ROCKSDB_HIGH_PRIORITY,
        // ROCKSDB_LOW_PRIORITY, or USER_THREAD.
        const ThreadType thread_type;
      
        // The name of the DB instance the thread is currently
        // involved with.  It is set to an empty string if the thread
        // is not involved in any DB operation.
        const std::string db_name;
      
        // The name of the column family the thread is currently
        // involved with.  It is set to an empty string if the thread
        // is not involved with any column family.
        const std::string cf_name;
      
        // The event the current thread is involved with.
        // It is set to an empty string if the information about the event
        // is not currently available.
        const std::string event;
      
      Test Plan:
      ./thread_list_test
      export ROCKSDB_TESTS=GetThreadList
      ./db_test
      
      Reviewers: rven, igor, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D25047
      d0c5f28a
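      Calling the new API from user code, assuming the Env-level entry point it shipped with:
      ```lang=cpp
      #include <vector>

      #include "rocksdb/env.h"
      #include "rocksdb/thread_status.h"

      // Enumerate RocksDB background threads and their current assignment.
      void DumpThreadList() {
        std::vector<rocksdb::ThreadStatus> thread_list;
        rocksdb::Status s =
            rocksdb::Env::Default()->GetThreadList(&thread_list);
        if (!s.ok()) return;
        for (const auto& thread : thread_list) {
          // thread.thread_id, thread.thread_type, thread.db_name and
          // thread.cf_name carry the properties listed above.
          (void)thread;
        }
      }
      ```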
  27. 19 Nov 2014 (1 commit)
  28. 15 Nov 2014 (2 commits)
    • Explicitly clean JobContext · 5c04acda
      Committed by Igor Canadi
      Summary: This way we can guarantee that old MemTables get destructed before DBImpl gets destructed, which might be useful if we want to make them depend on state from DBImpl.
      
      Test Plan: make check with asserts in JobContext's destructor
      
      Reviewers: ljin, sdong, yhchiang, rven, jonahcohen
      
      Reviewed By: jonahcohen
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D28959
      5c04acda
    • Provide openable snapshots · 6c1b040c
      Committed by Venkatesh Radhakrishnan
      Summary: Store links to live files in a directory on the same disk.
      
      Test Plan:
      Take a snapshot and open it. Added a test, GetSnapshotLink, in
      db_test.
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D28713
      6c1b040c
  29. 14 Nov 2014 (2 commits)
  30. 08 Nov 2014 (2 commits)
    • CompactFiles, EventListener and GetDatabaseMetaData · 28c82ff1
      Committed by Yueh-Hsuan Chiang
      Summary:
      This diff adds three sets of APIs to RocksDB.
      
      = GetColumnFamilyMetaData =
      * This API allows users to obtain the current state of a RocksDB instance for one column family.
      * See GetColumnFamilyMetaData in include/rocksdb/db.h
      
      = EventListener =
      * A virtual class that allows users to implement a set of
        call-back functions which will be called when specific
        events of a RocksDB instance happen.
      * To register an EventListener, simply insert it into ColumnFamilyOptions::listeners
      
      = CompactFiles =
      * The CompactFiles API takes a set of file numbers and an output level, and RocksDB
        will try to compact those files into the specified level.
      
      = Example =
      * Example code can be found in example/compact_files_example.cc, which implements
        a simple external compactor using the EventListener, GetColumnFamilyMetaData, and
        CompactFiles APIs.
      
      Test Plan:
      listener_test
      compactor_test
      example/compact_files_example
      export ROCKSDB_TESTS=CompactFiles
      db_test
      export ROCKSDB_TESTS=MetaData
      db_test
      
      Reviewers: ljin, igor, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D24705
      28c82ff1
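      The external-compactor pattern in miniature (a sketch following the summary; compact_files_example.cc is the authoritative version):
      ```lang=cpp
      #include <string>
      #include <vector>

      #include "rocksdb/db.h"

      // Snapshot the LSM shape with GetColumnFamilyMetaData, then ask
      // RocksDB to compact all current L0 files into level 1 by name.
      rocksdb::Status CompactAllL0(rocksdb::DB* db) {
        rocksdb::ColumnFamilyMetaData meta;
        db->GetColumnFamilyMetaData(&meta);

        std::vector<std::string> input_files;
        for (const auto& file : meta.levels[0].files) {
          input_files.push_back(file.name);
        }
        if (input_files.empty()) return rocksdb::Status::OK();
        return db->CompactFiles(rocksdb::CompactionOptions(), input_files,
                                /*output_level=*/1);
      }
      ```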
    • Redesign pending_outputs_ · 53af5d87
      Committed by Igor Canadi
      Summary:
      Here's a prototype of redesigning pending_outputs_. This way, we don't have to expose pending_outputs_ to other classes (CompactionJob, FlushJob, MemtableList). DBImpl takes care of it.
      
      Still have to write some comments, but should be good enough to start the discussion.
      
      Test Plan: make check, will also run stress test
      
      Reviewers: ljin, sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D28353
      53af5d87
  31. 05 Nov 2014 (1 commit)
  32. 01 Nov 2014 (1 commit)
    • CompactionJob · 74eb4fbe
      Committed by Igor Canadi
      Summary:
      Long awaited CompactionJob class! Move most compaction-related things from DBImpl to CompactionJob, making CompactionJob easier to test and understand.
      
      Currently this is just replicating exactly the same functionality with as little change as possible. As future work, we should:
      1. Add CompactionJob tests (I think I'll do that tomorrow)
      2. Reduce CompactionJob's state that it inherits from DBImpl
      3. Figure out how to do yielding to flush better. Currently I implemented a callback as we agreed yesterday, but I don't think it's a good long term solution.
      
      This reduces db_impl.cc from 5000+ LOC to 3400!
      
      Test Plan: make check, will add CompactionJob-specific tests, probably also move some tests from db_test to compaction_job_test
      
      Reviewers: rven, yhchiang, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D27957
      74eb4fbe
  33. 30 Oct 2014 (1 commit)
    • WalManager · 63590548
      Committed by Igor Canadi
      Summary: Decoupling the code that deals with archived log files from DBImpl. That will make this code easier to reason about and test. It will also make the code easier to improve, because an improver doesn't have to understand DBImpl in its entirety.
      
      Test Plan: added test
      
      Reviewers: ljin, yhchiang, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D27873
      63590548