1. 02 Jul 2015 (1 commit)
    • Windows Port from Microsoft · 18285c1e
      Dmitri Smirnov committed
       Summary: Make RocksDB build and run on Windows so that it is functionally
       complete and performant. All existing test cases run with no
       regressions. Performance numbers are in the pull request.
      
       Test plan: make all of the existing unit tests pass, obtain perf numbers.
      
       Co-authored-by: Praveen Rao praveensinghrao@outlook.com
       Co-authored-by: Sherlock Huang baihan.huang@gmail.com
       Co-authored-by: Alex Zinoviev alexander.zinoviev@me.com
       Co-authored-by: Dmitri Smirnov dmitrism@microsoft.com
  2. 18 Jun 2015 (2 commits)
    • Use CompactRangeOptions for CompactRange · 12e030a9
      Islam AbdelRahman committed
      Summary:
      This diff updates DB::CompactRange to take a CompactRangeOptions struct instead of multiple parameters.
      The old CompactRange overload is still available but deprecated.
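      
      A minimal calling sketch (hedged: this assumes the CompactRangeOptions defaults from the public header; nullptr bounds mean the whole key space):
      
      ```lang=cpp
      #include <cassert>
      #include "rocksdb/db.h"

      // Compact the entire key space using default CompactRangeOptions.
      void FullCompact(rocksdb::DB* db) {
        rocksdb::CompactRangeOptions options;  // all defaults
        // nullptr begin/end selects the whole key range.
        rocksdb::Status s = db->CompactRange(options, nullptr, nullptr);
        assert(s.ok());
      }
      ```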
      
      Test Plan:
      make all check
      make rocksdbjava
      USE_CLANG=1 make all
      OPT=-DROCKSDB_LITE make release
      
      Reviewers: sdong, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40209
    • Clean up InstallSuperVersion · 25d60056
      Igor Canadi committed
      Summary:
      We go to great lengths to make sure MaybeScheduleFlushOrCompaction() is called outside of the write thread. But it is still called while holding the DB mutex, so it's not that much cheaper anyway.
      
      This diff removes the "optimization" and cleans up the code a bit.
      
      Test Plan: make check
      
      Reviewers: rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40113
  3. 17 Jun 2015 (1 commit)
  4. 12 Jun 2015 (2 commits)
    • Slow down writes by bytes written · 7842920b
      sdong committed
      Summary:
      With this patch, we slow writes into the database down to the rate of options.delayed_write_rate (a new option).
      
      For thread synchronization, the write controller is still protected by the DB mutex, and GetDelay() runs inside it; the frequency of reading the clock inside GetDelay() is kept to a minimum. I verified it through db_bench and it seems to work.
      
      hard_rate_limit is deprecated.
      
      options.delayed_write_rate is still not dynamically changeable. Need to work on it as a follow-up.
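      
      A sketch of wiring up the new option, assuming the delayed_write_rate field name from the public options header (the 2MB/s value is illustrative only):
      
      ```lang=cpp
      #include <string>
      #include "rocksdb/db.h"
      #include "rocksdb/options.h"

      // Open a DB that throttles delayed writes to ~2MB/s.
      rocksdb::DB* OpenThrottled(const std::string& path) {
        rocksdb::Options options;
        options.create_if_missing = true;
        options.delayed_write_rate = 2 * 1024 * 1024;  // bytes per second
        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, path, &db);
        return s.ok() ? db : nullptr;
      }
      ```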
      
      Test Plan: Add new unit tests in db_test
      
      Reviewers: yhchiang, rven, kradhakrishnan, anthony, MarkCallaghan, igor
      
      Reviewed By: igor
      
      Subscribers: ikabiljo, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36351
    • Add largest sequence to FlushJobInfo · d6ce0f7c
      Islam AbdelRahman committed
      Summary:
      Adding the largest sequence number to FlushJobInfo,
      and passing the flushed file's metadata to NotifyOnFlushCompleted, which includes a lot of other values that we may want to expose in FlushJobInfo.
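      
      A minimal listener sketch. Hedged: this assumes the later, consolidated signature where OnFlushCompleted receives a FlushJobInfo directly, with field names (cf_name, file_path, largest_seqno) as in the current listener.h:
      
      ```lang=cpp
      #include <cstdio>
      #include "rocksdb/db.h"
      #include "rocksdb/listener.h"

      // Log each flushed file together with its largest sequence number.
      class FlushLogger : public rocksdb::EventListener {
       public:
        void OnFlushCompleted(rocksdb::DB* /*db*/,
                              const rocksdb::FlushJobInfo& info) override {
          std::printf("flushed %s (cf=%s), largest_seqno=%llu\n",
                      info.file_path.c_str(), info.cf_name.c_str(),
                      static_cast<unsigned long long>(info.largest_seqno));
        }
      };
      // Register before Open: options.listeners.push_back(std::make_shared<FlushLogger>());
      ```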
      
      Test Plan: make check
      
      Reviewers: igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D39927
  5. 06 Jun 2015 (1 commit)
  6. 05 Jun 2015 (1 commit)
    • Allowing L0 -> L1 trivial move on sorted data · 3ce3bb3d
      Islam AbdelRahman committed
      Summary:
      This diff updates the logic of how we do trivial move: trivial move can now run on any number of files in the input level, as long as they do not overlap.
      
      The conditions for trivial move have been updated
      
      Introduced conditions:
        - Trivial move cannot happen if we have a compaction filter (except if the compaction is not manual)
        - Input level files cannot be overlapping
      
      Removed conditions:
        - Trivial move only ran when the compaction was not manual
        - The input level could contain only one file
      
      More context on what tests failed because of Trivial move
      ```
      DBTest.CompactionsGenerateMultipleFiles
      This test expects compaction of a file in L0 to generate multiple files in L1; it fails with trivial move because we end up with a single file in L1
      ```
      
      ```
      DBTest.NoSpaceCompactRange
      This test expects compaction to fail when we force the environment to report running out of space. That is not valid in a trivial-move situation,
      because a trivial move needs no extra space and does not check for it
      ```
      
      ```
      DBTest.DropWrites
      Similar to DBTest.NoSpaceCompactRange
      ```
      
      ```
      DBTest.DeleteObsoleteFilesPendingOutputs
      This test expects a file in L2 to be deleted after it is moved to L3. That does not hold with trivial move: although the file was moved, it is now used by L3
      ```
      
      ```
      CuckooTableDBTest.CompactionIntoMultipleFiles
      Same as DBTest.CompactionsGenerateMultipleFiles
      ```
      
      This diff is based on a work by @sdong https://reviews.facebook.net/D34149
      
      Test Plan: make -j64 check
      
      Reviewers: rven, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: yhchiang, ott, march, dhruba, sdong
      
      Differential Revision: https://reviews.facebook.net/D34797
  7. 03 Jun 2015 (3 commits)
    • Fix compile warning in db/db_impl · 8afafc27
      Yueh-Hsuan Chiang committed
      Summary:
      Fix the following compile warning in db/db_impl
      
        db/db_impl.cc:1603:19: error: implicit conversion loses integer precision: 'const uint64_t' (aka 'const unsigned long') to 'int' [-Werror,-Wshorten-64-to-32]
           info.job_id = job_id;
                       ~ ^~~~~~
      
      Test Plan: db_test
      
      Reviewers: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39423
    • Allow EventListener::OnCompactionCompleted to return CompactionJobStats. · fe5c6321
      Yueh-Hsuan Chiang committed
      Summary:
      Allow EventListener::OnCompactionCompleted to return CompactionJobStats,
      which contains useful information about a compaction.
      
      Example CompactionJobStats returned by OnCompactionCompleted():
          smallest_output_key_prefix 05000000
          largest_output_key_prefix 06990000
          elapsed_time 42419
          num_input_records 300
          num_input_files 3
          num_input_files_at_output_level 2
          num_output_records 200
          num_output_files 1
          actual_bytes_input 167200
          actual_bytes_output 110688
          total_input_raw_key_bytes 5400
          total_input_raw_value_bytes 300000
          num_records_replaced 100
          is_manual_compaction 1
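      
      A sketch of consuming these stats from a listener, assuming the current listener.h shape where OnCompactionCompleted receives a CompactionJobInfo whose stats member is the CompactionJobStats:
      
      ```lang=cpp
      #include <cstdio>
      #include "rocksdb/db.h"
      #include "rocksdb/listener.h"

      // Print a few CompactionJobStats fields when a compaction finishes.
      class CompactionStatsLogger : public rocksdb::EventListener {
       public:
        void OnCompactionCompleted(rocksdb::DB* /*db*/,
                                   const rocksdb::CompactionJobInfo& info) override {
          const rocksdb::CompactionJobStats& stats = info.stats;
          std::printf("compaction: %llu input -> %llu output records, manual=%d\n",
                      static_cast<unsigned long long>(stats.num_input_records),
                      static_cast<unsigned long long>(stats.num_output_records),
                      static_cast<int>(stats.is_manual_compaction));
        }
      };
      ```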
      
      Test Plan: Developed a mega test in db_test which covers 20 variables in CompactionJobStats.
      
      Reviewers: rven, igor, anthony, sdong
      
      Reviewed By: sdong
      
      Subscribers: tnovak, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38463
    • Add EventListener::OnTableFileCreated() · fc838212
      Yueh-Hsuan Chiang committed
      Summary:
      Add EventListener::OnTableFileCreated(), which will be called
      when a table file is created.  This patch is part of the
      EventLogger and EventListener integration.
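      
      A minimal sketch, assuming the TableFileCreationInfo fields (file_path, cf_name, job_id) as declared in the current listener.h:
      
      ```lang=cpp
      #include <cstdio>
      #include "rocksdb/listener.h"

      // Report every newly created table file and the job that produced it.
      class TableFileLogger : public rocksdb::EventListener {
       public:
        void OnTableFileCreated(const rocksdb::TableFileCreationInfo& info) override {
          std::printf("table file %s created for cf %s by job %d\n",
                      info.file_path.c_str(), info.cf_name.c_str(), info.job_id);
        }
      };
      ```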
      
      Test Plan: Augment existing test in db/listener_test.cc
      
      Reviewers: anthony, kradhakrishnan, rven, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38865
  8. 02 Jun 2015 (2 commits)
    • Allow users to migrate to options.level_compaction_dynamic_level_bytes=true using CompactRange() · 4266d4fd
      sdong committed
      Summary: In DB::CompactRange(), change the parameter "reduce_level" to "change_level". Users can compact all data down to the last level if needed. By doing so, users can migrate the DB to options.level_compaction_dynamic_level_bytes=true.
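      
      A migration sketch using the renamed parameter (hedged: field names per the current CompactRangeOptions; the target is normally the last level, num_levels - 1):
      
      ```lang=cpp
      #include "rocksdb/db.h"

      // Push all data to the bottom level, then flip
      // level_compaction_dynamic_level_bytes on the next DB open.
      rocksdb::Status MigrateToDynamicLevelBytes(rocksdb::DB* db, int num_levels) {
        rocksdb::CompactRangeOptions opts;
        opts.change_level = true;            // move output to target_level when done
        opts.target_level = num_levels - 1;  // the last level
        return db->CompactRange(opts, nullptr, nullptr);
      }
      ```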
      
      Test Plan: Add a unit test for it.
      
      Reviewers: yhchiang, anthony, kradhakrishnan, igor, rven
      
      Reviewed By: rven
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D39099
    • Removed DBImpl::notifying_events_ · d333820b
      Yueh-Hsuan Chiang committed
      Summary:
      DBImpl::notifying_events_ is an internal counter in DBImpl which is
      used to prevent DB close while DB is notifying events.  However, as
      the current events all rely on either compaction or flush, which
      already have similar counters to prevent DB close, it is safe to
      remove notifying_events_.
      
      Test Plan:
      listener_test
      examples/compact_files_example
      
      Reviewers: igor, anthony, kradhakrishnan, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39315
  9. 30 May 2015 (1 commit)
    • Optimistic Transactions · dc9d70de
      agiardullo committed
      Summary: Optimistic transactions supporting begin/commit/rollback semantics.  Currently relies on checking the memtable to determine if there are any collisions at commit time.  Not yet implemented is a way of ensuring the memtable retains some minimum amount of history, so that we won't fail to commit when the memtable is empty.  You should probably start with transaction.h to get an overview of what is currently supported.
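      
      A begin/commit sketch, hedged because the class names were still settling when this diff landed; this follows the current utilities API:
      
      ```lang=cpp
      #include <cassert>
      #include <string>
      #include "rocksdb/utilities/optimistic_transaction_db.h"
      #include "rocksdb/utilities/transaction.h"

      // Write a key transactionally; Commit() fails if a conflicting write
      // happened after the transaction began.
      void TxnExample(const std::string& path) {
        rocksdb::Options options;
        options.create_if_missing = true;
        rocksdb::OptimisticTransactionDB* txn_db = nullptr;
        rocksdb::Status s =
            rocksdb::OptimisticTransactionDB::Open(options, path, &txn_db);
        assert(s.ok());

        rocksdb::Transaction* txn = txn_db->BeginTransaction(rocksdb::WriteOptions());
        txn->Put("key", "value");
        s = txn->Commit();  // collision check happens here, against the memtable
        delete txn;
        delete txn_db;
      }
      ```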
      
      Test Plan: Added a new test, but still need to look into stress testing.
      
      Reviewers: yhchiang, igor, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: adamretter, MarkCallaghan, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D33435
  10. 27 May 2015 (1 commit)
  11. 22 May 2015 (1 commit)
  12. 12 May 2015 (1 commit)
    • API to fetch from both a WriteBatchWithIndex and the db · 711465cc
      agiardullo committed
      Summary:
      Added a couple of functions to WriteBatchWithIndex to make it easier to query the value of a key, including reading pending writes from a batch.  (This is needed for transactions.)
      
      I created write_batch_with_index_internal.h to store an internal-only helper function, since there wasn't a good place in the existing class hierarchy to put it (and it didn't seem right to stick this function inside WriteBatchInternal::Rep).
      
      Since I needed to access the WriteBatchEntryComparator, I moved some helper classes from write_batch_with_index.cc into write_batch_with_index_internal.h/.cc.  WriteBatchIndexEntry, ReadableWriteBatch, and WriteBatchEntryComparator are all unchanged (just moved to a different file(s)).
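      
      A read-through sketch, assuming the signatures in write_batch_with_index.h (GetFromBatchAndDB falls back to the DB for keys the batch does not touch):
      
      ```lang=cpp
      #include <string>
      #include "rocksdb/db.h"
      #include "rocksdb/utilities/write_batch_with_index.h"

      // Read a key's value as it would appear if the batch were applied.
      std::string ReadThroughBatch(rocksdb::DB* db) {
        rocksdb::WriteBatchWithIndex batch;
        batch.Put("k1", "pending-value");

        std::string value;
        rocksdb::Status s =
            batch.GetFromBatchAndDB(db, rocksdb::ReadOptions(), "k1", &value);
        return s.ok() ? value : std::string();
      }
      ```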
      
      Test Plan: Added new unit tests.
      
      Reviewers: rven, yhchiang, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38037
  13. 06 May 2015 (1 commit)
  14. 28 Apr 2015 (1 commit)
    • Include a bunch more events in EventLogger · 1bb4928d
      Igor Canadi committed
      Summary:
      Added these events:
      * Recovery start, finish and also when recovery creates a file
      * Trivial move
      * Compaction start, finish and when compaction creates a file
      * Flush start, finish
      
      Also includes small fix to EventLogger
      
      Also added option ROCKSDB_PRINT_EVENTS_TO_STDOUT which is useful when we debug things. I've spent far too much time chasing LOG files.
      
      Still didn't get SST table properties into the JSON. They are written very deep in the stack. I'll address that in a separate diff.
      
      TODO:
      * Write a specification. Let's first use this for a while and figure out what data is good to put here. After that we'll write the spec
      * Write tools that parse and analyze LOGs. This can be in python or go. Good intern task.
      
      Test Plan: Ran db_bench with ROCKSDB_PRINT_EVENTS_TO_STDOUT. Here's the output: https://phabricator.fb.com/P19811976
      
      Reviewers: sdong, yhchiang, rven, MarkCallaghan, kradhakrishnan, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37521
  15. 24 Apr 2015 (1 commit)
    • Implement DB::PromoteL0 method · 2dc421df
      Giuseppe Ottaviano committed
      Summary:
      This diff implements a new `DB` method `PromoteL0` which moves all files in L0
      to a given level skipping compaction, provided that the files have disjoint
      ranges and all levels up to the target level are empty.
      
      This method provides finer-grain control for trivial compactions, and it is
      useful for bulk-loading pre-sorted keys. Compared to D34797, it does not change
      the semantics of an existing operation, which can impact existing code.
      
      PromoteL0 is designed to work well in combination with the proposed
      `GetSstFileWriter`/`AddFile` interface, enabling to "design" the level structure
      by populating one level at a time. Such fine-grained control can be very useful
      for static or mostly-static databases.
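      
      A bulk-loading sketch around the new method (hedged: assuming the (ColumnFamilyHandle*, int) signature in db.h):
      
      ```lang=cpp
      #include "rocksdb/db.h"

      // After bulk-loading pre-sorted, non-overlapping files into L0, push
      // them straight to a deeper level without rewriting any data.
      rocksdb::Status PushL0To(rocksdb::DB* db, int target_level) {
        // Fails without side effects if L0 files overlap, or if any level up
        // to target_level is non-empty.
        return db->PromoteL0(db->DefaultColumnFamily(), target_level);
      }
      ```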
      
      Test Plan: `make check`
      
      Reviewers: IslamAbdelRahman, philipp, MarkCallaghan, yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D37107
  16. 18 Apr 2015 (1 commit)
    • Add experimental API MarkForCompaction() · 6059bdf8
      Igor Canadi committed
      Summary:
      Some Mongo+Rocks datasets in Parse's environment are not doing compactions very frequently. During the quiet period (with no IO), we'd like to schedule compactions so that our reads become faster. Also, aggressively compacting during quiet periods helps when write bursts happen. In addition, we also want to compact files that are containing deleted key ranges (like old oplog keys).
      
      All of this is currently not possible with CompactRange() because it's single-threaded and blocks all other compactions from happening. Running CompactRange() risks an issue of blocking writes because we generate too much Level 0 files before the compaction is over. Stopping writes is very dangerous because they hold transaction locks. We tried running manual compaction once on Mongo+Rocks and everything fell apart.
      
      MarkForCompaction() solves all of those problems. This is very light-weight manual compaction. It is lower priority than automatic compactions, which means it shouldn't interfere with background process keeping the LSM tree clean. However, if no automatic compactions need to be run (or we have extra background threads available), we will start compacting files that are marked for compaction.
      
      Test Plan: added a new unit test
      
      Reviewers: yhchiang, rven, MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: yoshinorim, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37083
  17. 31 Mar 2015 (1 commit)
    • Clean up old log files in background threads · fd3dbef2
      Igor Canadi committed
      Summary:
      Cleaning up log files can do heavy IO, since we call ftruncate() in the destructor. We don't want to call ftruncate() in user threads.
      
      This diff moves cleaning to background threads (flush and compaction)
      
      Test Plan: make check, will also run valgrind
      
      Reviewers: yhchiang, rven, MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36177
  18. 17 Mar 2015 (1 commit)
  19. 14 Mar 2015 (1 commit)
    • EventLogger · 52d8347a
      Igor Canadi committed
      Summary:
      Here's my proposal for making our LOGs easier to read by machines.
      
      The idea is to dump all events as JSON objects. JSON is easy to read by humans, but more importantly, it's easy to read by machines. That way, we can parse this, load into SQLite/mongo and then query or visualize.
      
      I started with table_create and table_delete events, but if everybody agrees, I'll continue by adding more events (flush/compaction/etc etc)
      
      Test Plan:
      Ran db_bench. Observed:
      2015/01/15-14:13:25.788019 1105ef000 EVENT_LOG_v1 {"time_micros": 1421360005788015, "event": "table_file_creation", "file_number": 12, "file_size": 1909699}
      2015/01/15-14:13:25.956500 110740000 EVENT_LOG_v1 {"time_micros": 1421360005956498, "event": "table_file_deletion", "file_number": 12}
      
      Reviewers: yhchiang, rven, dhruba, MarkCallaghan, lgalanis, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31647
  20. 12 Mar 2015 (2 commits)
  21. 27 Feb 2015 (1 commit)
    • rocksdb: Add missing override · 62247ffa
      Igor Sugak committed
      Summary:
      When using the latest clang (3.6 or 3.7/trunk), rocksdb fails to build with many errors. Almost all of them are missing-override errors. This diff adds the missing override keywords. No manual changes.
      
      Prerequisites: bear and clang 3.5 build with extra tools
      
      ```lang=bash
      % USE_CLANG=1 bear make all # generate a compilation database http://clang.llvm.org/docs/JSONCompilationDatabase.html
      % clang-modernize -p . -include . -add-override
      % make format
      ```
      
      Test Plan:
      Make sure all tests are passing.
      ```lang=bash
      % #Use default fb code clang.
      % make check
      ```
      Verify fewer errors and no missing-override errors.
      ```lang=bash
      % # Have trunk clang present in path.
      % ROCKSDB_NO_FBCODE=1 CC=clang CXX=clang++ make
      ```
      
      Reviewers: igor, kradhakrishnan, rven, meyering, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34077
  22. 13 Feb 2015 (1 commit)
    • Introduce job_id for flush and compaction · e7ea51a8
      Igor Canadi committed
      Summary:
      It would be good to assign background jobs their IDs. Two benefits:
      1) makes LOGs more readable
      2) I might use it in my EventLogger, which will try to make our LOG easier to read/query/visualize
      
      Test Plan: ran rocksdb, read the LOG
      
      Reviewers: sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31617
  23. 05 Feb 2015 (1 commit)
    • Add a counter for collecting the wait time on db mutex. · 181191a1
      Yueh-Hsuan Chiang committed
      Summary:
      Add a counter for collecting the wait time on db mutex.
      Also add MutexWrapper and CondVarWrapper for measuring wait time.
      
      Test Plan:
      ./db_test
      export ROCKSDB_TESTS=MutexWaitStats
      ./db_test
      
      verify stats output using db_bench
      make clean
      make release
      ./db_bench --statistics=1 --benchmarks=fillseq,readwhilewriting --num=10000 --threads=10
      
      Sample output:
          rocksdb.db.mutex.wait.micros COUNT : 7546866
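      
      A sketch of reading the counter programmatically, assuming the DB_MUTEX_WAIT_MICROS ticker name from statistics.h:
      
      ```lang=cpp
      #include <cstdio>
      #include <memory>
      #include "rocksdb/statistics.h"

      // Read the new ticker; set options.statistics = rocksdb::CreateDBStatistics()
      // before opening the DB so that the counter is populated.
      void PrintMutexWaitMicros(const std::shared_ptr<rocksdb::Statistics>& stats) {
        std::printf("rocksdb.db.mutex.wait.micros COUNT : %llu\n",
                    static_cast<unsigned long long>(
                        stats->getTickerCount(rocksdb::DB_MUTEX_WAIT_MICROS)));
      }
      ```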
      
      Reviewers: MarkCallaghan, rven, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D32787
  24. 28 Jan 2015 (1 commit)
  25. 27 Jan 2015 (2 commits)
    • Rename DBImpl::log_dir_unsynced_ to log_dir_synced_ · be8f0b12
      sdong committed
      Summary: log_dir_unsynced_ is a confusing name. Rename it to log_dir_synced_ and flip the value.
      
      Test Plan: Run ./fault_injection_test
      
      Reviewers: rven, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D32235
    • Sync WAL Directory and DB Path if different from DB directory · d888c957
      sdong committed
      Summary:
      1. If the WAL directory is different from the DB directory, sync the WAL directory after creating a log file under it.
      2. After creating an SST file, sync its parent directory instead of the DB directory.
      3. Change the check of kResetDeleteUnsyncedFiles in fault_injection_test. Since the behavior changed to sync a log file's parent directory after the first WAL sync rather than at creation, kResetDeleteUnsyncedFiles no longer guarantees to show post-sync updates.
      
      Test Plan: make all check
      
      Reviewers: yhchiang, rven, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D32067
  26. 13 Jan 2015 (1 commit)
  27. 22 Dec 2014 (1 commit)
    • Speed up FindObsoleteFiles() · 0acc7388
      Igor Canadi committed
      Summary:
      There are two versions of FindObsoleteFiles():
      * full scan, which is executed every 6 hours (and it's terribly slow)
      * no full scan, which is executed every time a background process finishes or an iterator is deleted
      
      This diff optimizes the second case (no full scan). Here's what we do before the diff:
      * Get the list of obsolete files (files with ref==0). Some files in the obsolete_files set might actually be live.
      * Get the list of live files to avoid deleting files that are live.
      * Delete files that are in obsolete_files and not in live_files.
      
      After this diff:
      * The only files with ref==0 that are still live are files that have been part of move compaction. Don't include moved files in obsolete_files.
      * Get the list of obsolete files (which exclude moved files).
      * No need to get the list of live files, since all files in obsolete_files need to be deleted.
      
      I'll post the benchmark results, but you can get the feel of it here: https://reviews.facebook.net/D30123
      
      This depends on D30123.
      
      P.S. We should do full scan only in failure scenarios, not every 6 hours. I'll do this in a follow-up diff.
      
      Test Plan:
      One new unit test. Made sure that unit test fails if we don't have a `if (!f->moved)` safeguard in ~Version.
      
      make check
      
      Big number of compactions and flushes:
      
        ./db_stress --threads=30 --ops_per_thread=20000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=15 --max_background_compactions=10 --max_background_flushes=10 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000
      
      Reviewers: yhchiang, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D30249
  28. 20 Dec 2014 (1 commit)
    • Rewritten system for scheduling background work · fdb6be4e
      Igor Canadi committed
      Summary:
      When scaling to higher number of column families, the worst bottleneck was MaybeScheduleFlushOrCompaction(), which did a for loop over all column families while holding a mutex. This patch addresses the issue.
      
      The approach is similar to our earlier efforts: instead of a pull-model, where we do something for every column family, we can do a push-based model -- when we detect that column family is ready to be flushed/compacted, we add it to the flush_queue_/compaction_queue_. That way we don't need to loop over every column family in MaybeScheduleFlushOrCompaction.
      
      Here are the performance results:
      
      Command:
      
          ./db_bench --write_buffer_size=268435456 --db_write_buffer_size=268435456 --db=/fast-rocksdb-tmp/rocks_lots_of_cf --use_existing_db=0 --open_files=55000 --statistics=1 --histogram=1 --disable_data_sync=1 --max_write_buffer_number=2 --sync=0 --benchmarks=fillrandom --threads=16 --num_column_families=5000  --disable_wal=1 --max_background_flushes=16 --max_background_compactions=16 --level0_file_num_compaction_trigger=2 --level0_slowdown_writes_trigger=2 --level0_stop_writes_trigger=3 --hard_rate_limit=1 --num=33333333 --writes=33333333
      
      Before the patch:
      
           fillrandom   :      26.950 micros/op 37105 ops/sec;    4.1 MB/s
      
      After the patch:
      
            fillrandom   :      17.404 micros/op 57456 ops/sec;    6.4 MB/s
      
      Next bottleneck is VersionSet::AddLiveFiles, which is painfully slow when we have a lot of files. This is coming in the next patch, but when I removed that code, here's what I got:
      
            fillrandom   :       7.590 micros/op 131758 ops/sec;   14.6 MB/s
      
      Test Plan:
      make check
      
      two stress tests:
      
      Big number of compactions and flushes:
      
          ./db_stress --threads=30 --ops_per_thread=20000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=15 --max_background_compactions=10 --max_background_flushes=10 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000
      
      max_background_flushes=0, to verify that this case also works correctly
      
          ./db_stress --threads=30 --ops_per_thread=2000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=3 --max_background_compactions=3 --max_background_flushes=0 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000
      
      Reviewers: ljin, rven, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D30123
  29. 12 Dec 2014 (1 commit)
    • Improve scalability of DB::GetSnapshot() · d7a48666
      sdong committed
      Summary: Currently DB::GetSnapshot() doesn't scale to a large number of column families, as it needs to go through all column families to find out whether snapshots are supported. This patch optimizes that.
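      
      For context, a typical GetSnapshot() call pattern whose cost this patch reduces:
      
      ```lang=cpp
      #include <string>
      #include "rocksdb/db.h"

      // Pin reads to a point in time, then release the snapshot.
      std::string ReadAtSnapshot(rocksdb::DB* db, const rocksdb::Slice& key) {
        const rocksdb::Snapshot* snap = db->GetSnapshot();
        rocksdb::ReadOptions read_options;
        read_options.snapshot = snap;
        std::string value;
        db->Get(read_options, key, &value);
        db->ReleaseSnapshot(snap);  // always release to avoid pinning old data
        return value;
      }
      ```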
      
      Test Plan:
      Add unit tests to cover negative cases.
      make all check
      
      Reviewers: yhchiang, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D30093
  30. 09 Dec 2014 (1 commit)
  31. 06 Dec 2014 (1 commit)
  32. 03 Dec 2014 (1 commit)
    • Enforce write buffer memory limit across column families · a14b7873
      Jonah Cohen committed
      Summary:
      Introduces a new class for managing write buffer memory across column
      families.  We supplement ColumnFamilyOptions::write_buffer_size with
      ColumnFamilyOptions::write_buffer, a shared pointer to a WriteBuffer
      instance that enforces memory limits before flushing out to disk.
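      
      A configuration sketch. Hedged: this uses the DBOptions::db_write_buffer_size knob (the same name appears in the db_bench/db_stress commands in later entries above) as the user-visible face of the shared WriteBuffer:
      
      ```lang=cpp
      #include "rocksdb/options.h"

      // Cap total memtable memory across all column families, while keeping
      // a smaller per-memtable limit.
      rocksdb::Options SharedWriteBufferOptions() {
        rocksdb::Options options;
        options.write_buffer_size = 64 << 20;      // per-memtable: 64MB
        options.db_write_buffer_size = 256 << 20;  // shared across all CFs: 256MB
        return options;
      }
      ```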
      
      Test Plan: Added SharedWriteBuffer unit test to db_test.cc
      
      Reviewers: sdong, rven, ljin, igor
      
      Reviewed By: igor
      
      Subscribers: tnovak, yhchiang, dhruba, xjin, MarkCallaghan, yoshinorim
      
      Differential Revision: https://reviews.facebook.net/D22581
  33. 21 Nov 2014 (1 commit)