1. 06 Jun, 2015 1 commit
  2. 05 Jun, 2015 1 commit
    • I
      Allowing L0 -> L1 trivial move on sorted data · 3ce3bb3d
      Committed by Islam AbdelRahman
      Summary:
      This diff updates the logic of how we do trivial move: a trivial move can now run on any number of files in the input level as long as they are not overlapping.
      
      The conditions for trivial move have been updated
      
      Introduced conditions:
        - Trivial move cannot happen if we have a compaction filter (except if the compaction is not manual)
        - Input level files cannot be overlapping
      
      Removed conditions:
        - Trivial move only ran when the compaction was not manual
        - The input level could contain only one file
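      The "input level files cannot be overlapping" condition can be sketched roughly as follows (a hypothetical simplification, not RocksDB's actual implementation; real keys go through the user comparator):

```python
# Toy check: files are (smallest_key, largest_key) tuples. A multi-file
# trivial move is only allowed if, after sorting by smallest key, no
# file's key range intersects its neighbor's.
def can_trivial_move(files):
    ordered = sorted(files)  # sort by smallest key
    for prev, cur in zip(ordered, ordered[1:]):
        if cur[0] <= prev[1]:  # next file starts before the previous ends
            return False
    return True
```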
      
      More context on what tests failed because of Trivial move
      ```
      DBTest.CompactionsGenerateMultipleFiles
      This test expects a compaction of a file in L0 to generate multiple files in L1. It fails with trivial move because we end up with one file in L1.
      ```
      
      ```
      DBTest.NoSpaceCompactRange
      This test expects compaction to fail when we force the environment to report running out of space. Of course this is not valid in the trivial move situation,
      because a trivial move does not need any extra space and did not check for it.
      ```
      
      ```
      DBTest.DropWrites
      Similar to DBTest.NoSpaceCompactRange
      ```
      
      ```
      DBTest.DeleteObsoleteFilesPendingOutputs
      This test expects a file in L2 to be deleted after it is moved to L3. This is not valid with trivial move: although the file was moved, it is now used by L3.
      ```
      
      ```
      CuckooTableDBTest.CompactionIntoMultipleFiles
      Same as DBTest.CompactionsGenerateMultipleFiles
      ```
      
      This diff is based on work by @sdong https://reviews.facebook.net/D34149
      
      Test Plan: make -j64 check
      
      Reviewers: rven, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: yhchiang, ott, march, dhruba, sdong
      
      Differential Revision: https://reviews.facebook.net/D34797
      3ce3bb3d
  3. 04 Jun, 2015 1 commit
  4. 03 Jun, 2015 4 commits
    • Y
      Fix compile warning in db/db_impl · 8afafc27
      Committed by Yueh-Hsuan Chiang
      Summary:
      Fix the following compile warning in db/db_impl
      
        db/db_impl.cc:1603:19: error: implicit conversion loses integer precision: 'const uint64_t' (aka 'const unsigned long') to 'int' [-Werror,-Wshorten-64-to-32]
           info.job_id = job_id;
                       ~ ^~~~~~
      
      Test Plan: db_test
      
      Reviewers: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39423
      8afafc27
    • Y
      Allow EventListener::OnCompactionCompleted to return CompactionJobStats. · fe5c6321
      Committed by Yueh-Hsuan Chiang
      Summary:
      Allow EventListener::OnCompactionCompleted to return CompactionJobStats,
      which contains useful information about a compaction.
      
      Example CompactionJobStats returned by OnCompactionCompleted():
          smallest_output_key_prefix 05000000
          largest_output_key_prefix 06990000
          elapsed_time 42419
          num_input_records 300
          num_input_files 3
          num_input_files_at_output_level 2
          num_output_records 200
          num_output_files 1
          actual_bytes_input 167200
          actual_bytes_output 110688
          total_input_raw_key_bytes 5400
          total_input_raw_value_bytes 300000
          num_records_replaced 100
          is_manual_compaction 1
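      Several of the fields above are mutually consistent, which is handy for sanity checks; for instance, average raw key/value sizes follow from the record counts (a back-of-the-envelope computation on the sample numbers above, not RocksDB code):

```python
# Derived metrics from the sample CompactionJobStats above.
stats = {
    "num_input_records": 300,
    "num_output_records": 200,
    "total_input_raw_key_bytes": 5400,
    "total_input_raw_value_bytes": 300000,
    "num_records_replaced": 100,
}

avg_key_bytes = stats["total_input_raw_key_bytes"] / stats["num_input_records"]
avg_value_bytes = stats["total_input_raw_value_bytes"] / stats["num_input_records"]
# Records dropped by this compaction = replaced (overwritten) records:
records_dropped = stats["num_input_records"] - stats["num_output_records"]
```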
      
      Test Plan: Developed a mega test in db_test which covers 20 variables in CompactionJobStats.
      
      Reviewers: rven, igor, anthony, sdong
      
      Reviewed By: sdong
      
      Subscribers: tnovak, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38463
      fe5c6321
    • Y
      Add EventListener::OnTableFileCreated() · fc838212
      Committed by Yueh-Hsuan Chiang
      Summary:
      Add EventListener::OnTableFileCreated(), which will be called
      when a table file is created.  This patch is part of the
      EventLogger and EventListener integration.
      
      Test Plan: Augment existing test in db/listener_test.cc
      
      Reviewers: anthony, kradhakrishnan, rven, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38865
      fc838212
    • Y
      Add a stats counter for DB_WRITE back which was mistakenly removed. · 898e803f
      Committed by Yueh-Hsuan Chiang
      Summary: Add back a stats counter for DB_WRITE which was mistakenly removed.
      
      Test Plan: augment GroupCommitTest
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39399
      898e803f
  5. 02 Jun, 2015 3 commits
    • M
      more times in perf_context and iostats_context · ec7a9443
      Committed by Mike Kolupaev
      Summary:
      We occasionally get write stalls (>1s Write() calls) on HDD under read load. The following timers explain almost all of the stalls:
       - perf_context.db_mutex_lock_nanos
       - perf_context.db_condition_wait_nanos
       - iostats_context.open_time
       - iostats_context.allocate_time
       - iostats_context.write_time
       - iostats_context.range_sync_time
       - iostats_context.logger_time
      
      In my experiments each of these occasionally takes >1s on the write path under some workload. There are rare cases when Write() takes long but none of these timers does.
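      The diagnostic described here amounts to summing these timers and comparing against the wall-clock time of the Write() call (a hypothetical sketch; the field names follow the list above, a common time unit is assumed, and the sample values are made up):

```python
# Toy slow-write attribution: sum the listed perf_context/iostats_context
# timers and report what fraction of a slow Write() call they explain.
TIMERS = [
    "db_mutex_lock_nanos", "db_condition_wait_nanos",  # perf_context
    "open_time", "allocate_time", "write_time",        # iostats_context
    "range_sync_time", "logger_time",
]

def explain_stall(samples, write_call_nanos):
    accounted = sum(samples.get(t, 0) for t in TIMERS)
    return accounted / write_call_nanos  # fraction of the stall explained

# Made-up sample: a stall dominated by range_sync_time.
sample = {"db_mutex_lock_nanos": 200, "range_sync_time": 1_400, "write_time": 300}
```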
      
      Test Plan: Added code to our application to write the listed timings to log for slow writes. They usually add up to almost exactly the time Write() call took.
      
      Reviewers: rven, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: march, dhruba, tnovak
      
      Differential Revision: https://reviews.facebook.net/D39177
      ec7a9443
    • S
      Allow users to migrate to options.level_compaction_dynamic_level_bytes=true using CompactRange() · 4266d4fd
      Committed by sdong
      Summary: In DB::CompactRange(), change the parameter "reduce_level" to "change_level". Users can compact all data to the last level if needed. By doing so, users can migrate the DB to options.level_compaction_dynamic_level_bytes=true.
      
      Test Plan: Add a unit test for it.
      
      Reviewers: yhchiang, anthony, kradhakrishnan, igor, rven
      
      Reviewed By: rven
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D39099
      4266d4fd
    • Y
      Removed DBImpl::notifying_events_ · d333820b
      Committed by Yueh-Hsuan Chiang
      Summary:
      DBImpl::notifying_events_ is an internal counter in DBImpl which is
      used to prevent DB close while the DB is notifying events.  However, as
      all current events rely on either compaction or flush, which
      already have similar counters to prevent DB close, it is safe to
      remove notifying_events_.
      
      Test Plan:
      listener_test
      examples/compact_files_example
      
      Reviewers: igor, anthony, kradhakrishnan, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39315
      d333820b
  6. 30 May, 2015 2 commits
    • A
      fix LITE build · bc7a7a40
      Committed by agiardullo
      Summary: Broken by optimistic transaction diff.  (I only built 'release' not 'static_lib' when testing).
      
      Test Plan: build
      
      Reviewers: yhchiang, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39219
      bc7a7a40
    • A
      Optimistic Transactions · dc9d70de
      Committed by agiardullo
      Summary: Optimistic transactions supporting begin/commit/rollback semantics.  Currently relies on checking the memtable to determine if there are any collisions at commit time.  Not yet implemented is a way of ensuring the memtable retains some minimum amount of history, so that we won't fail to commit when the memtable is empty.  You should probably start with transaction.h to get an overview of what is currently supported.
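      The begin/commit/rollback flow with commit-time conflict detection can be illustrated with a toy model (purely illustrative; the real OptimisticTransactionDB API and its memtable-based check differ):

```python
class ConflictError(Exception):
    pass

class ToyDB:
    """Tracks, per key, the sequence number of its last write -- a stand-in
    for checking the memtable at commit time."""
    def __init__(self):
        self.data, self.last_write_seq, self.seq = {}, {}, 0

    def begin(self):
        return ToyOptimisticTxn(self)

class ToyOptimisticTxn:
    def __init__(self, db):
        self.db, self.snapshot_seq, self.writes = db, db.seq, {}

    def put(self, key, value):
        self.writes[key] = value  # buffered; invisible until commit

    def commit(self):
        # Conflict if any written key was modified after this txn began.
        for key in self.writes:
            if self.db.last_write_seq.get(key, 0) > self.snapshot_seq:
                raise ConflictError(key)
        for key, value in self.writes.items():
            self.db.seq += 1
            self.db.data[key] = value
            self.db.last_write_seq[key] = self.db.seq

    def rollback(self):
        self.writes.clear()

db = ToyDB()
t1, t2 = db.begin(), db.begin()
t1.put("k", "v1")
t2.put("k", "v2")
t1.commit()          # succeeds: nobody touched "k" since t1 began
try:
    t2.commit()      # fails: "k" was modified after t2's snapshot
    conflicted = False
except ConflictError:
    conflicted = True
```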
      
      Test Plan: Added a new test, but still need to look into stress testing.
      
      Reviewers: yhchiang, igor, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: adamretter, MarkCallaghan, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D33435
      dc9d70de
  7. 29 May, 2015 3 commits
    • A
      Support saving history in memtable_list · c8153510
      Committed by agiardullo
      Summary:
      For transactions, we are using the memtables to validate that there are no write conflicts.  But after flushing, we don't have any memtables, and transactions could fail to commit.  So we want to somehow keep around some extra history to use for conflict checking.  In addition, we want to provide a way to increase the size of this history if too many transactions fail to commit.
      
      After chatting with people, it seems like everyone prefers just using memtables to store this history (instead of a separate history structure).  The best place for this is abstracted inside the memtable_list.  I decided to create a separate list in MemtableListVersion, as using the same list complicated the flush/install-flush-results logic too much.
      
      This diff adds a new parameter to control how much memtable history to keep around after flushing.  However, it sounds like people aren't too fond of adding new parameters.  So I am making the default size of flushed+not-flushed memtables be set to max_write_buffers.  This should not change the maximum amount of memory used, but makes it more likely we're running closer to the limit.  (We now postpone deleting flushed memtables until the max_write_buffer limit is reached.)  So while we might use more memory on average, we are still obeying the limit set (and you could argue it's better to go ahead and use up memory now instead of waiting for a write stall to happen to test this limit).
      
      However, if people are opposed to this default behavior, we can easily set it to 0 and require this parameter be set in order to use transactions.
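      The retention policy described above (keep flushed memtables as history, trimming the oldest only once flushed plus unflushed memtables exceed max_write_buffers) can be modeled simply (an illustrative model, not the MemTableList implementation):

```python
from collections import deque

class ToyMemtableList:
    """Keeps flushed memtables as history for conflict checking, dropping
    the oldest only when flushed + unflushed exceed max_write_buffers."""
    def __init__(self, max_write_buffers):
        self.max_write_buffers = max_write_buffers
        self.unflushed, self.history = deque(), deque()

    def add(self, memtable):
        self.unflushed.append(memtable)

    def flush_oldest(self):
        self.history.append(self.unflushed.popleft())
        # Trim history so total memtables stay within the existing limit.
        while len(self.unflushed) + len(self.history) > self.max_write_buffers:
            self.history.popleft()  # oldest history is dropped first
```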
      
      Test Plan: Added an xfunc test to play around with setting different values of this parameter in all tests.  Added testing in memtablelist_test and planning on adding more testing here.
      
      Reviewers: sdong, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37443
      c8153510
    • Y
      Rename EventLoggerHelpers EventHelpers · ec4ff4e9
      Committed by Yueh-Hsuan Chiang
      Summary:
      Rename EventLoggerHelpers to EventHelpers, as it's going to include
      all event-related helper functions instead of only EventLogger-related ones.
      
      Test Plan: make
      
      Reviewers: sdong, rven, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39093
      ec4ff4e9
    • Y
      [API Change] Move listeners from ColumnFamilyOptions to DBOptions · 672dda9b
      Committed by Yueh-Hsuan Chiang
      Summary: Move listeners from ColumnFamilyOptions to DBOptions
      
      Test Plan:
      listener_test
      compact_files_test
      
      Reviewers: rven, anthony, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39087
      672dda9b
  8. 27 May, 2015 1 commit
  9. 23 May, 2015 3 commits
    • Y
      Fixed two bugs on logging file deletion. · 5c224d1b
      Committed by Yueh-Hsuan Chiang
      Summary:
      This patch fixes the following two bugs on logging file deletion.
      
      1.  Previously, file deletion failure was only logged at INFO_LEVEL.
          This patch changes it to ERROR_LEVEL and does some code cleanup.
      
      2.  EventLogger previously would always generate the same log on
          table file deletion, even when the deletion was not successful.
          Now the resulting status of the file deletion is also logged.
      
      Test Plan: make all check
      
      Reviewers: sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38817
      5c224d1b
    • Y
      Change the log-level of DB summary and options from INFO_LEVEL to WARN_LEVEL · dc81efe4
      Committed by Yueh-Hsuan Chiang
      Summary: Change the log-level of DB summary and options from INFO_LEVEL to WARN_LEVEL
      
      Test Plan:
      Use db_bench to verify the log level.
      
      Sample output:
          2015/05/22-00:20:39.778064 7fff75b41300 [WARN] RocksDB version: 3.11.0
          2015/05/22-00:20:39.778095 7fff75b41300 [WARN] Git sha rocksdb_build_git_sha:7fee8775
          2015/05/22-00:20:39.778099 7fff75b41300 [WARN] Compile date May 22 2015
          2015/05/22-00:20:39.778101 7fff75b41300 [WARN] DB SUMMARY
          2015/05/22-00:20:39.778145 7fff75b41300 [WARN] SST files in /tmp/rocksdbtest-691931916/dbbench dir, Total Num: 0, files:
          2015/05/22-00:20:39.778148 7fff75b41300 [WARN] Write Ahead Log file in /tmp/rocksdbtest-691931916/dbbench:
          2015/05/22-00:20:39.778150 7fff75b41300 [WARN]          Options.error_if_exists: 0
          2015/05/22-00:20:39.778152 7fff75b41300 [WARN]        Options.create_if_missing: 1
          2015/05/22-00:20:39.778153 7fff75b41300 [WARN]          Options.paranoid_checks: 1
      
      Reviewers: MarkCallaghan, igor, kradhakrishnan
      
      Reviewed By: igor
      
      Subscribers: sdong, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38835
      dc81efe4
    • Y
      Avoid logging under mutex in DBImpl::WriteLevel0TableForRecovery(). · 2abb5926
      Committed by Yueh-Hsuan Chiang
      Summary: Avoid logging under mutex in DBImpl::WriteLevel0TableForRecovery().
      
      Test Plan: make all check
      
      Reviewers: igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38823
      2abb5926
  10. 22 May, 2015 1 commit
  11. 20 May, 2015 1 commit
  12. 19 May, 2015 3 commits
  13. 13 May, 2015 1 commit
    • I
      Add more table properties to EventLogger · dbd95b75
      Committed by Igor Canadi
      Summary:
      Example output:
      
          {"time_micros": 1431463794310521, "job": 353, "event": "table_file_creation", "file_number": 387, "file_size": 86937, "table_info": {"data_size": "81801", "index_size": "9751", "filter_size": "0", "raw_key_size": "23448", "raw_average_key_size": "24.000000", "raw_value_size": "990571", "raw_average_value_size": "1013.890481", "num_data_blocks": "245", "num_entries": "977", "filter_policy_name": "", "kDeletedKeys": "0"}}
      
      Also fixed a bug where BuildTable() in recovery was passing the Env::IOHigh argument into the paranoid_checks_file parameter.
      
      Test Plan: make check + check out the output in the log
      
      Reviewers: sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38343
      dbd95b75
  14. 12 May, 2015 1 commit
    • A
      API to fetch from both a WriteBatchWithIndex and the db · 711465cc
      Committed by agiardullo
      Summary:
      Added a couple functions to WriteBatchWithIndex to make it easier to query the value of a key including reading pending writes from a batch.  (This is needed for transactions).
      
      I created write_batch_with_index_internal.h to use to store an internal-only helper function since there wasn't a good place in the existing class hierarchy to store this function (and it didn't seem right to stick this function inside WriteBatchInternal::Rep).
      
      Since I needed to access the WriteBatchEntryComparator, I moved some helper classes from write_batch_with_index.cc into write_batch_with_index_internal.h/.cc.  WriteBatchIndexEntry, ReadableWriteBatch, and WriteBatchEntryComparator are all unchanged (just moved to a different file(s)).
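      The lookup semantics being added amount to "check the batch's pending writes first (including pending deletes), then fall back to the DB" (a toy illustration; the real WriteBatchWithIndex API has a different signature and iterates the indexed batch):

```python
DELETED = object()  # tombstone marker for a delete pending in the batch

def get_from_batch_and_db(batch, db, key):
    """Toy version: pending batch writes (including deletes) shadow the DB."""
    if key in batch:
        value = batch[key]
        return None if value is DELETED else value
    return db.get(key)

db = {"a": "old", "b": "kept"}
batch = {"a": "new", "b": DELETED, "c": "pending"}
```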
      
      Test Plan: Added new unit tests.
      
      Reviewers: rven, yhchiang, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D38037
      711465cc
  15. 06 May, 2015 2 commits
    • I
      Cleanup CompactionJob · 65fe1cfb
      Committed by Igor Canadi
      Summary:
      Couple changes:
      1. instead of SnapshotList, just take a vector of snapshots
      2. don't take a separate is_snapshots_supported parameter. If there are snapshots in the list, that means they are supported. I actually think we should get rid of this notion of snapshots not being supported.
      3. don't pass in mutable_cf_options as a parameter. Lifetime of mutable_cf_options is a bit tricky to maintain, so it's better to not pass it in for the whole compaction job. We only really need it when we install the compaction results.
      
      Test Plan: make check
      
      Reviewers: sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36627
      65fe1cfb
    • L
      fix crashes in stats and compaction filter for db_ttl_impl · df4130ad
      Committed by Laurent Demailly
      Summary: fix crashes in stats and compaction filter for db_ttl_impl
      
      Test Plan:
      Ran build with lots of debugging
      https://reviews.facebook.net/differential/diff/194175/
      
      Reviewers: yhchiang, igor, rven
      
      Reviewed By: igor
      
      Subscribers: rven, dhruba
      
      Differential Revision: https://reviews.facebook.net/D38001
      df4130ad
  16. 05 May, 2015 1 commit
  17. 02 May, 2015 1 commit
  18. 01 May, 2015 1 commit
    • K
      Optimize GetApproximateSizes() to use lesser CPU cycles. · d4540654
      Committed by krad
      Summary:
      CPU profiling reveals GetApproximateSizes() as a performance bottleneck. The current implementation is sub-optimal: it scans every file in every level to compute the result.
      
      We can take advantage of the fact that all levels above 0 are sorted in increasing order of key ranges and use binary search to locate the starting index. This reduces the number of comparisons required to compute the result.
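      The optimization rests on the sorted, non-overlapping property of levels above 0: rather than scanning every file, binary-search for the first file whose range can overlap the query (a simplified sketch, not the actual VersionSet code):

```python
import bisect

def first_overlapping_index(files, start_key):
    """files: (smallest, largest) tuples, sorted and non-overlapping --
    true for every level above 0. Returns the index of the first file
    whose largest key is >= start_key, i.e. the first overlap candidate."""
    largest_keys = [f[1] for f in files]
    return bisect.bisect_left(largest_keys, start_key)

level = [("a", "c"), ("e", "g"), ("j", "m"), ("p", "s")]
```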
      
      Test Plan: We have good test coverage. Run the tests.
      
      Reviewers: sdong, igor, rven, dynamike
      
      Subscribers: dynamike, maykov, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37755
      d4540654
  19. 30 Apr, 2015 1 commit
  20. 28 Apr, 2015 1 commit
    • I
      Include bunch of more events into EventLogger · 1bb4928d
      Committed by Igor Canadi
      Summary:
      Added these events:
      * Recovery start, finish and also when recovery creates a file
      * Trivial move
      * Compaction start, finish and when compaction creates a file
      * Flush start, finish
      
      Also includes small fix to EventLogger
      
      Also added option ROCKSDB_PRINT_EVENTS_TO_STDOUT which is useful when we debug things. I've spent far too much time chasing LOG files.
      
      Still didn't get sst table properties in JSON. They are written very deep in the stack. I'll address that in a separate diff.
      
      TODO:
      * Write specification. Let's first use this for a while and figure out what's good data to put here, too. After that we'll write spec
      * Write tools that parse and analyze LOGs. This can be in python or go. Good intern task.
      
      Test Plan: Ran db_bench with ROCKSDB_PRINT_EVENTS_TO_STDOUT. Here's the output: https://phabricator.fb.com/P19811976
      
      Reviewers: sdong, yhchiang, rven, MarkCallaghan, kradhakrishnan, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37521
      1bb4928d
  21. 24 Apr, 2015 2 commits
    • S
      Fix CompactRange for universal compaction with num_levels > 1 · d01bbb53
      Committed by sdong
      Summary:
      CompactRange for universal compaction with num_levels > 1 has a bug, and the unit test also has a bug so it doesn't capture the problem.
      Fix it: revert compact range to logic equivalent to num_levels=1, always compacting all files together.
      
      It should also fix DBTest.IncreaseUniversalCompactionNumLevels. The issue was that options.write_buffer_size = 100 << 10 and options.write_buffer_size = 100 << 10 were not used in later test scenarios, so a write_buffer_size of 4MB was used and the compaction trigger condition was no longer as expected.
      
      Test Plan: Run the new test and all test suites
      
      Reviewers: yhchiang, rven, kradhakrishnan, anthony, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D37551
      d01bbb53
    • I
      Abstract out SetMaxPossibleForUserKey() and SetMinPossibleForUserKey · e003d386
      Committed by Igor Canadi
      Summary:
      Based on feedback from D37083.
      
      Are all of these correct? In some places it seems like we're doing SetMaxPossibleForUserKey() although we want the smallest possible internal key for the user key.
      
      Test Plan: make check
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37341
      e003d386
  22. 15 Apr, 2015 1 commit
    • S
      Bug of trivial move of dynamic level · debaf85e
      Committed by sdong
      Summary: D36669 introduced a bug: trivially moved data goes not to the specified level but to the next level, which will incorrectly be level 1 for a level-0 compaction if the base level is not level 1. Fix it by respecting the output level.
      
      Test Plan: Run all tests
      
      Reviewers: MarkCallaghan, rven, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D37119
      debaf85e
  23. 11 Apr, 2015 1 commit
    • I
      Make Compaction class easier to use · 47b87439
      Committed by Igor Canadi
      Summary:
      The goal of this diff is to make Compaction class easier to use. This should also make new compaction algorithms easier to write (like CompactFiles from @yhchiang and dynamic leveled and multi-leveled universal from @sdong).
      
      Here are couple of things demonstrating that Compaction class is hard to use:
      1. we have two constructors of Compaction class
      2. there's this thing called grandparents_, but it appears to only be set up for leveled compaction and not CompactFiles
      3. it's easy to introduce a subtle and dangerous bug like this: D36225
      4. SetupBottomMostLevel() is hard to understand and it shouldn't be. See this comment: https://github.com/facebook/rocksdb/blob/afbafeaeaebfd27a0f3e992fee8e0c57d07658fa/db/compaction.cc#L236-L241. It also made it harder for @yhchiang to write CompactFiles, as evidenced by this: https://github.com/facebook/rocksdb/blob/afbafeaeaebfd27a0f3e992fee8e0c57d07658fa/db/compaction_picker.cc#L204-L210
      
      The problem is that we create a Compaction object, which holds a lot of state, and then pass it around to some functions. After those functions are done mutating it, we call a couple of functions on the Compaction object, like SetupBottommostLevel() and MarkFilesBeingCompacted(). It is very hard to see what's happening with all that Compaction state while it's travelling across different functions. If you're writing a new PickCompaction() function you need to try really hard to understand which functions you need to run on the Compaction object and what state you need to set up.
      
      My proposed solution is to make important parts of Compaction immutable after construction. PickCompaction() should calculate compaction inputs and then pass them onto Compaction object once they are finalized. That makes it easy to create a new compaction -- just provide all the parameters to the constructor and you're done. No need to call confusing functions after you created your object.
      
      This diff doesn't fully achieve that goal, but it comes pretty close. Here are some of the changes:
      * have one Compaction constructor instead of two.
      * inputs_ is constant after construction
      * MarkFilesBeingCompacted() is now private to Compaction class and automatically called on construction/destruction.
      * SetupBottommostLevel() is gone. Compaction figures it out on its own based on the input.
      * CompactionPicker's functions are not passing around Compaction object anymore. They are only passing around the state that they need.
      
      Test Plan:
      make check
      make asan_check
      make valgrind_check
      
      Reviewers: rven, anthony, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: sdong, yhchiang, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36687
      47b87439
  24. 10 Apr, 2015 1 commit
  25. 09 Apr, 2015 1 commit
    • S
      Create EnvOptions using sanitized DB Options · b1bbdd79
      Committed by sdong
      Summary: Previously EnvOptions used unsanitized DB options; bytes_per_sync is turned off when a rate_limiter is used, but that adjustment didn't take effect. Now EnvOptions is created from the sanitized DB options.
      
      Test Plan: See different I/O pattern in db_bench running fillseq.
      
      Reviewers: yhchiang, kradhakrishnan, rven, anthony, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36723
      b1bbdd79
  26. 07 Apr, 2015 1 commit
    • I
      Clean up compression logging · 5e067a7b
      Committed by Igor Canadi
      Summary: We now add warnings when a user configures a compression type that is not supported.
      
      Test Plan:
      Configured compression to non-supported values. Observed messages in my log:
      
          2015/03/26-12:17:57.586341 7ffb8a496840 [WARN] Compression type chosen for level 2 is not supported: LZ4. RocksDB will not compress data on level 2.
      
          2015/03/26-12:19:10.768045 7f36f15c5840 [WARN] Compression type chosen is not supported: LZ4. RocksDB will not compress data.
      
      Reviewers: rven, sdong, yhchiang
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35979
      5e067a7b