1. 28 Jun, 2017 · 1 commit
    • FIFO Compaction with TTL · 1cd45cd1
      Sagar Vemuri authored
      Summary:
      Introducing FIFO compactions with TTL.
      
      FIFO compaction is based on size only, which makes it tricky to enable in production since use cases can have organic growth. A user requested an option to drop files based on the time of their creation instead of the total size.
      
      To address that request:
      - Added a new TTL option to FIFO compaction options.
      - Updated FIFO compaction score to take TTL into consideration.
      - Added a new table property, creation_time, to keep track of when the SST file is created.
      - creation_time is set as follows:
        - On Flush: Set to the time of flush.
        - On Compaction: Set to the max creation_time of all the files involved in the compaction.
        - On Repair and Recovery: Set to the time of repair/recovery.
        - Old files created prior to this code change will have a creation_time of 0.
      - FIFO compaction with TTL is enabled when ttl > 0. All files older than the TTL will be deleted during compaction, i.e. `if (file.creation_time < (current_time - ttl)) then delete(file)`. This enables cases where you might want to delete all files older than, say, 1 day.
      - FIFO compaction will fall back to the prior way of deleting files based on size if:
        - the creation_time of all files involved in compaction is 0.
        - the total size (of all SST files combined) does not drop below `compaction_options_fifo.max_table_files_size` even if the files older than ttl are deleted.
      
      This feature is not supported if max_open_files != -1 or with table formats other than Block-based.
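
      To make the new knobs concrete, here is a minimal sketch of enabling FIFO compaction with a TTL. It assumes the option is exposed as `compaction_options_fifo.ttl` in seconds, per the description above; treat it as an illustration rather than the exact API surface of this diff:
      ```
      #include "rocksdb/db.h"
      #include "rocksdb/options.h"

      int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        options.compaction_style = rocksdb::kCompactionStyleFIFO;
        // Cap total SST size at ~100MB, as in the benchmark below.
        options.compaction_options_fifo.max_table_files_size = 100 * 1024 * 1024;
        // Additionally drop any file whose creation_time is older than 1 day
        // (TTL in seconds; the field name is assumed here).
        options.compaction_options_fifo.ttl = 24 * 60 * 60;
        // Required by this feature: all files must be kept open.
        options.max_open_files = -1;

        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/fifo_ttl_demo", &db);
        // ... write and read as usual; files older than the TTL are dropped
        // by the next FIFO compaction ...
        delete db;
        return s.ok() ? 0 : 1;
      }
      ```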
      
      **Test Plan:**
      Added tests.
      
      **Benchmark results:**
      Base: FIFO with max size: 100MB ::
      ```
      svemuri@dev15905 ~/rocksdb (fifo-compaction) $ TEST_TMPDIR=/dev/shm ./db_bench --benchmarks=readwhilewriting --num=5000000 --threads=16 --compaction_style=2 --fifo_compaction_max_table_files_size_mb=100
      
      readwhilewriting :       1.924 micros/op 519858 ops/sec;   13.6 MB/s (1176277 of 5000000 found)
      ```
      
      With TTL (a low one for testing) ::
      ```
      svemuri@dev15905 ~/rocksdb (fifo-compaction) $ TEST_TMPDIR=/dev/shm ./db_bench --benchmarks=readwhilewriting --num=5000000 --threads=16 --compaction_style=2 --fifo_compaction_max_table_files_size_mb=100 --fifo_compaction_ttl=20
      
      readwhilewriting :       1.902 micros/op 525817 ops/sec;   13.7 MB/s (1185057 of 5000000 found)
      ```
      Example Log lines:
      ```
      2017/06/26-15:17:24.609249 7fd5a45ff700 (Original Log Time 2017/06/26-15:17:24.609177) [db/compaction_picker.cc:1471] [default] FIFO compaction: picking file 40 with creation time 1498515423 for deletion
      2017/06/26-15:17:24.609255 7fd5a45ff700 (Original Log Time 2017/06/26-15:17:24.609234) [db/db_impl_compaction_flush.cc:1541] [default] Deleted 1 files
      ...
      2017/06/26-15:17:25.553185 7fd5a61a5800 [DEBUG] [db/db_impl_files.cc:309] [JOB 0] Delete /dev/shm/dbbench/000040.sst type=2 #40 -- OK
      2017/06/26-15:17:25.553205 7fd5a61a5800 EVENT_LOG_v1 {"time_micros": 1498515445553199, "job": 0, "event": "table_file_deletion", "file_number": 40}
      ```
      
      SST files remaining in the dbbench dir after db_bench execution completed:
      ```
      svemuri@dev15905 ~/rocksdb (fifo-compaction)  $ ls -l /dev/shm//dbbench/*.sst
      -rw-r--r--. 1 svemuri users 30749887 Jun 26 15:17 /dev/shm//dbbench/000042.sst
      -rw-r--r--. 1 svemuri users 30768779 Jun 26 15:17 /dev/shm//dbbench/000044.sst
      -rw-r--r--. 1 svemuri users 30757481 Jun 26 15:17 /dev/shm//dbbench/000046.sst
      ```
      Closes https://github.com/facebook/rocksdb/pull/2480
      
      Differential Revision: D5305116
      
      Pulled By: sagar0
      
      fbshipit-source-id: 3e5cfcf5dd07ed2211b5b37492eb235b45139174
  2. 12 Jun, 2017 · 1 commit
    • Sample number of reads per SST file · 5582123d
      Siying Dong authored
      Summary:
      We estimate the number of reads per SST file by updating a per-file counter on sampled read requests. This information can later be used to trigger compactions to improve read performance.
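
      As a rough sketch of the sampling idea (names here are hypothetical, not the actual RocksDB internals): keep one cheap atomic counter per SST file and bump it only for a sampled fraction of reads, so the estimate costs almost nothing on the read path:
      ```
      #include <atomic>
      #include <cstdint>

      struct FileReadSampler {
        // Count roughly 1 in every 32 reads (the sampling rate is illustrative).
        static constexpr uint64_t kSampleRate = 32;
        std::atomic<uint64_t> sampled_reads{0};

        void MaybeRecordRead(uint64_t random_value) {
          if (random_value % kSampleRate == 0) {
            sampled_reads.fetch_add(1, std::memory_order_relaxed);
          }
        }

        // Scale back up to estimate the true number of reads on this file.
        uint64_t EstimatedReads() const {
          return sampled_reads.load(std::memory_order_relaxed) * kSampleRate;
        }
      };
      ```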
      Closes https://github.com/facebook/rocksdb/pull/2417
      
      Differential Revision: D5193528
      
      Pulled By: siying
      
      fbshipit-source-id: b4241c5ad0eaf444b61afb53f8e6290d9f5da2df
  3. 28 Apr, 2017 · 1 commit
  4. 06 Apr, 2017 · 1 commit
  5. 08 Dec, 2016 · 1 commit
  6. 14 Oct, 2016 · 1 commit
    • Fix compaction conflict with running compaction · 5691a1d8
      Islam AbdelRahman authored
      Summary:
      Issue scenario:
      (1) We have 3 files in L1, and we issue a compaction that will compact them into 1 file in L2.
      (2) While compaction (1) is running, we flush a file into L0 and trigger another compaction that decides to move this file to L1 and then move it again to L2 (this file doesn't overlap with any other files).
      (3) Compaction (1) finishes and installs the file it generated in L2, but this file overlaps with the file we generated in (2), so we break the LSM consistency.
      
      It looks like this issue can be triggered by using non-exclusive manual compaction or AddFile().
      
      Test Plan: unit tests
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: hermanlee4, jkedgar, andrewkr, dhruba, yoshinorim
      
      Differential Revision: https://reviews.facebook.net/D64947
  7. 14 Sep, 2016 · 1 commit
    • Refactor MutableCFOptions · 81747f1b
      Yi Wu authored
      Summary:
      * Change the constructor of MutableCFOptions to depend only on ColumnFamilyOptions.
      * Move `max_subcompactions`, `compaction_options_fifo` and `compaction_pri` to ImmutableCFOptions to make it clear that they are immutable.
      
      Test Plan: existing unit tests.
      
      Reviewers: yhchiang, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D63945
  8. 03 Sep, 2016 · 1 commit
  9. 02 Sep, 2016 · 1 commit
    • Merge options source_compaction_factor, max_grandparent_overlap_bytes and... · 32149059
      sdong authored
      Merge options source_compaction_factor, max_grandparent_overlap_bytes and expanded_compaction_factor into max_compaction_bytes
      
      Summary: To reduce the number of options, merge source_compaction_factor, max_grandparent_overlap_bytes and expanded_compaction_factor into max_compaction_bytes.
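
      A small sketch of what the consolidated option looks like in user code; the multiplier chosen here is purely illustrative:
      ```
      rocksdb::Options options;
      options.target_file_size_base = 64 * 1024 * 1024;
      // One byte-based cap now bounds how much a single compaction may touch,
      // replacing the three factor-based options named above.
      options.max_compaction_bytes = 25 * options.target_file_size_base;
      ```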
      
      Test Plan: Add two new unit tests. Run all existing tests, including jtest.
      
      Reviewers: yhchiang, igor, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D59829
  10. 22 Jun, 2016 · 1 commit
  11. 28 Apr, 2016 · 1 commit
  12. 25 Mar, 2016 · 1 commit
    • Fix data race issue when sub-compaction is used in CompactionJob · be9816b3
      Yueh-Hsuan Chiang authored
      Summary:
      When subcompaction is used, all subcompactions share the same Compaction
      pointer in CompactionJob, while each subcompaction keeps its own mutable
      stats in SubcompactionState. However, there is still some mutable state
      currently stored in the shared Compaction pointer.
      
      This patch makes three changes (sketched below):
      
      1. Make the shared Compaction pointer const so that it can never be modified
         during the compaction.
      2. Move necessary states from Compaction to SubcompactionState.
      3. Make functions of Compaction const if the function does not modify
         its internal state.
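
      A condensed sketch of the ownership split this patch enforces (types simplified; not the actual class definitions):
      ```
      #include <cstdint>

      // Shared across all subcompactions; the const pointer below means no
      // thread can modify it mid-compaction.
      struct Compaction { /* immutable inputs, levels, boundaries, ... */ };

      struct SubcompactionState {
        const Compaction* compaction;     // shared, read-only
        uint64_t num_output_records = 0;  // mutable, owned by one thread
        explicit SubcompactionState(const Compaction* c) : compaction(c) {}
      };
      ```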
      
      Test Plan: rocksdb and MyRocks test
      
      Reviewers: sdong, kradhakrishnan, andrewkr, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: andrewkr, dhruba, yoshinorim, gunnarku, leveldb
      
      Differential Revision: https://reviews.facebook.net/D55923
  13. 09 Mar, 2016 · 1 commit
  14. 10 Feb, 2016 · 1 commit
  15. 23 Dec, 2015 · 1 commit
  16. 19 Dec, 2015 · 1 commit
  17. 10 Oct, 2015 · 1 commit
    • Passing table properties to compaction callback · 3d07b815
      Alexey Maykov authored
      Summary: It would be nice to have access to table properties in compaction callbacks. In the MyRocks project, this will make it possible to update optimizer statistics online.
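
      For illustration, a listener consuming table properties after a compaction might look like the sketch below. It assumes the properties are surfaced through CompactionJobInfo::table_properties, as in today's EventListener API; the exact callback plumbing added by this diff may differ:
      ```
      #include <iostream>
      #include "rocksdb/listener.h"

      class StatsUpdater : public rocksdb::EventListener {
       public:
        void OnCompactionCompleted(rocksdb::DB* /*db*/,
                                   const rocksdb::CompactionJobInfo& info) override {
          // One entry per table file touched by the compaction; num_entries
          // is the kind of signal the optimizer statistics above would use.
          for (const auto& prop : info.table_properties) {
            std::cout << prop.first << ": " << prop.second->num_entries
                      << " entries\n";
          }
        }
      };
      ```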
      
      Test Plan: ran the unit test. Ran myrocks with the new way of collecting stats.
      
      Reviewers: igor, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D48267
  18. 17 Sep, 2015 · 1 commit
    • Improving condition for bottommost level during compaction · a5e312a7
      Mayank Pundir authored
      Summary: The diff modifies the condition checked to determine the bottommost level during compaction. Previously, the absence of files in higher levels alone was used as the condition. Now, the function additionally evaluates whether the files in higher levels have non-overlapping key ranges; if so, the level can safely be considered the bottommost level.
      
      Test Plan: Unit test cases added and passing. However, unit tests of universal compaction are failing as a result of the changes made in this diff. Need to understand why that is happening.
      
      Reviewers: igor
      
      Subscribers: dhruba, sdong, lgalanis, meyering
      
      Differential Revision: https://reviews.facebook.net/D46473
  19. 11 Sep, 2015 · 1 commit
    • Determine boundaries of subcompactions · 3c37b3cc
      Ari Ekmekji authored
      Summary:
      Up to this point, the subcompactions that make up a compaction
      job have been divided based on the key range of the L1 files, and each
      subcompaction has handled the key range of only one file. However
      DBOption.max_subcompactions allows the user to designate how many
      subcompactions at most to perform. This patch updates
      CompactionJob::GetSubcompactionBoundaries() to determine these
      divisions based on that option and other input/system factors.
      
      The current approach orders the starting and/or ending keys of certain
      compaction input files and then generates a histogram to approximate the
      size covered by the key range between each consecutive pair of keys. Then
      it clusters these ranges into groups of approximately equal total size. The
      approach has also been adapted to work for universal compaction as well,
      instead of just level-based compaction as before.
      
      These subcompactions are then executed in parallel by locally spawning
      threads, one for each. The results are then aggregated and the compaction
      completed.
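
      A simplified sketch of the grouping step (the real GetSubcompactionBoundaries() also weighs the other input/system factors mentioned above):
      ```
      #include <cstdint>
      #include <string>
      #include <vector>

      struct RangeWithSize {
        std::string start_key;
        uint64_t approx_bytes;  // histogram-estimated size of this key range
      };

      // Merge consecutive ranges left to right so each subcompaction covers
      // roughly total / max_subcompactions bytes.
      std::vector<std::string> PickBoundaries(
          const std::vector<RangeWithSize>& ranges, uint32_t max_subcompactions) {
        uint64_t total = 0;
        for (const auto& r : ranges) total += r.approx_bytes;
        const uint64_t target = total / max_subcompactions + 1;

        std::vector<std::string> boundaries;
        uint64_t accumulated = 0;
        for (const auto& r : ranges) {
          accumulated += r.approx_bytes;
          if (accumulated >= target &&
              boundaries.size() + 1 < max_subcompactions) {
            boundaries.push_back(r.start_key);  // start a new subcompaction here
            accumulated = 0;
          }
        }
        return boundaries;
      }
      ```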
      
      Test Plan: make all && make check
      
      Reviewers: yhchiang, anthony, igor, noetzli, sdong
      
      Reviewed By: sdong
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43269
  20. 22 Aug, 2015 · 1 commit
    • Changed 'num_subcompactions' to the more accurate 'max_subcompactions' · b6def58f
      Ari Ekmekji authored
      Summary:
      Up until this point we had DbOptions.num_subcompactions, but
      it is semantically more correct to call this max_subcompactions since
      we will schedule *up to* DbOptions.max_subcompactions smaller compactions
      at a time during a compaction job.
      
      I also added a --subcompactions option to db_bench
      
      Test Plan: make all && make check
      
      Reviewers: sdong, igor, anthony, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D45069
  21. 19 Aug, 2015 · 1 commit
    • [Parallel L0-L1 Compaction Prep]: Giving Subcompactions Their Own State · f0da6977
      Ari Ekmekji authored
      Summary:
      In preparation for running multiple threads at the same time during
      a compaction job, this patch assigns each subcompaction its own state
      (instead of sharing the one global CompactionState). Each subcompaction then
      uses this state to update its statistics, keep track of its snapshots, etc.
      during the course of execution. Then at the end of all the executions the
      statistics are aggregated across the subcompactions so that the final result
      is the same as if only one larger compaction had run.
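
      The execute-then-aggregate flow described above, as a bare sketch (RunSubcompaction stands in for the real per-range compaction work):
      ```
      #include <cstdint>
      #include <thread>
      #include <vector>

      struct SubcompactionState {
        uint64_t bytes_written = 0;    // per-thread statistics, never shared
        uint64_t num_output_files = 0;
      };

      void RunCompactionJob(std::vector<SubcompactionState>& subs,
                            void (*RunSubcompaction)(SubcompactionState*)) {
        std::vector<std::thread> threads;
        for (auto& sub : subs) {
          threads.emplace_back(RunSubcompaction, &sub);  // one thread each
        }
        for (auto& t : threads) t.join();

        // Aggregate only after every thread has finished, so the totals look
        // as if one larger compaction had run.
        uint64_t total_bytes = 0;
        uint64_t total_files = 0;
        for (const auto& sub : subs) {
          total_bytes += sub.bytes_written;
          total_files += sub.num_output_files;
        }
        (void)total_bytes;
        (void)total_files;
      }
      ```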
      
      Test Plan: ./db_test  ./db_compaction_test  ./compaction_job_test
      
      Reviewers: sdong, anthony, igor, noetzli, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43239
  22. 04 Aug, 2015 · 1 commit
    • Parallelize L0-L1 Compaction: Restructure Compaction Job · 40c64434
      Ari Ekmekji authored
      Summary:
      As of now compactions involving files from Level 0 and Level 1 are single
      threaded because the files in L0, although sorted, are not range partitioned like
      the other levels. This means that during L0-L1 compaction each file from L1
      needs to be merged with potentially all the files from L0.
      
      This attempt to parallelize the L0-L1 compaction assigns a thread and a
      corresponding iterator to each L1 file that then considers only the key range
      found in that L1 file and only the L0 files that have those keys (and only the
      specific portion of those L0 files in which those keys are found). In this way
      the overlap is minimized and potentially eliminated between different iterators
      focusing on the same files.
      
      The first step is to restructure the compaction logic to break L0-L1 compactions
      into multiple, smaller, sequential compactions. Eventually each of these smaller
      jobs will be run simultaneously. Areas to pay extra attention to are:
      
        # Correct aggregation of compaction job statistics across multiple threads
        # Proper opening/closing of output files (make sure each thread's is unique)
        # Keys that span multiple L1 files
        # Skewed distributions of keys within L0 files
      
      Test Plan: Make and run db_test (newer version has separate compaction tests) and compaction_job_stats_test
      
      Reviewers: igor, noetzli, anthony, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42699
  23. 18 Jul, 2015 · 1 commit
    • Deprecate CompactionFilterV2 · a96fcd09
      Igor Canadi authored
      Summary: It has been around for a while and it looks like it never found any uses in the wild. It's also complicating our compaction_job code quite a bit. We're deprecating it in 3.13, but will put it back in 3.14 if we actually find users that need this feature.
      
      Test Plan: make check
      
      Reviewers: noetzli, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42405
  24. 14 Jul, 2015 · 1 commit
    • "make format" against last 10 commits · f9728640
      sdong authored
      Summary: This helps the Windows port format its changes, as discussed. Might have formatted some other code too because the last 10 commits include more.
      
      Test Plan: Build it.
      
      Reviewers: anthony, IslamAbdelRahman, kradhakrishnan, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D41961
  25. 09 Jul, 2015 · 1 commit
  26. 08 Jul, 2015 · 1 commit
  27. 05 Jun, 2015 · 1 commit
    • Allowing L0 -> L1 trivial move on sorted data · 3ce3bb3d
      Islam AbdelRahman authored
      Summary:
      This diff updates the logic of how we do trivial move; now trivial move can run on any number of files in the input level as long as they are not overlapping.
      
      The conditions for trivial move have been updated
      
      Introduced conditions:
        - Trivial move cannot happen if we have a compaction filter (except if the compaction is not manual)
        - Input level files cannot be overlapping
      
      Removed conditions:
        - Trivial move only runs when the compaction is not manual
        - The input level can contain only 1 file
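
      Condensed into a single predicate, the updated conditions read roughly like this (a hypothetical helper; the real check lives in the Compaction/compaction-picker code and considers more factors):
      ```
      // Returns true if the inputs can simply be re-linked to the output level
      // instead of being rewritten.
      bool CanTrivialMove(bool is_manual_compaction, bool has_compaction_filter,
                          bool input_files_overlap,
                          bool overlaps_with_output_level) {
        // A compaction filter must see every key-value pair, so a manual
        // compaction with a filter cannot be a trivial move.
        if (has_compaction_filter && is_manual_compaction) return false;
        if (input_files_overlap) return false;         // inputs must be disjoint
        if (overlaps_with_output_level) return false;  // nothing to merge below
        return true;
      }
      ```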
      
      More context on what tests failed because of Trivial move
      ```
      DBTest.CompactionsGenerateMultipleFiles
      This test is expecting compaction on a file in L0 to generate multiple files in L1; it will fail with trivial move because we end up with one file in L1
      ```
      
      ```
      DBTest.NoSpaceCompactRange
      This test expects compaction to fail when we force the environment to report running out of space. Of course, this is not valid in a trivial move situation,
      because a trivial move does not need any extra space, and did not check for that
      ```
      
      ```
      DBTest.DropWrites
      Similar to DBTest.NoSpaceCompactRange
      ```
      
      ```
      DBTest.DeleteObsoleteFilesPendingOutputs
      This test expects that a file in L2 is deleted after it's moved to L3. This is not valid with trivial move because, although the file was moved, it is now used by L3
      ```
      
      ```
      CuckooTableDBTest.CompactionIntoMultipleFiles
      Same as DBTest.CompactionsGenerateMultipleFiles
      ```
      
      This diff is based on a work by @sdong https://reviews.facebook.net/D34149
      
      Test Plan: make -j64 check
      
      Reviewers: rven, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: yhchiang, ott, march, dhruba, sdong
      
      Differential Revision: https://reviews.facebook.net/D34797
  28. 06 May, 2015 · 1 commit
    • Cleanup CompactionJob · 65fe1cfb
      Igor Canadi authored
      Summary:
      A couple of changes:
      1. instead of SnapshotList, just take a vector of snapshots
      2. don't take a separate parameter is_snapshots_supported. If there are snapshots in the list, that means they are supported. I actually think we should get rid of this notion of snapshots not being supported.
      3. don't pass in mutable_cf_options as a parameter. Lifetime of mutable_cf_options is a bit tricky to maintain, so it's better to not pass it in for the whole compaction job. We only really need it when we install the compaction results.
      
      Test Plan: make check
      
      Reviewers: sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36627
  29. 28 Apr, 2015 · 1 commit
    • Include bunch of more events into EventLogger · 1bb4928d
      Igor Canadi authored
      Summary:
      Added these events:
      * Recovery start, finish and also when recovery creates a file
      * Trivial move
      * Compaction start, finish and when compaction creates a file
      * Flush start, finish
      
      Also includes a small fix to EventLogger
      
      Also added option ROCKSDB_PRINT_EVENTS_TO_STDOUT which is useful when we debug things. I've spent far too much time chasing LOG files.
      
      Still didn't get SST table properties in JSON. They are written very deep in the stack. I'll address that in a separate diff.
      
      TODO:
      * Write a specification. Let's first use this for a while and figure out what data is good to put here. After that we'll write the spec.
      * Write tools that parse and analyze LOGs. This can be in python or go. Good intern task.
      
      Test Plan: Ran db_bench with ROCKSDB_PRINT_EVENTS_TO_STDOUT. Here's the output: https://phabricator.fb.com/P19811976
      
      Reviewers: sdong, yhchiang, rven, MarkCallaghan, kradhakrishnan, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D37521
  30. 11 Apr, 2015 · 1 commit
    • Make Compaction class easier to use · 47b87439
      Igor Canadi authored
      Summary:
      The goal of this diff is to make Compaction class easier to use. This should also make new compaction algorithms easier to write (like CompactFiles from @yhchiang and dynamic leveled and multi-leveled universal from @sdong).
      
      Here are a couple of things demonstrating that the Compaction class is hard to use:
      1. we have two constructors of the Compaction class
      2. there's this thing called grandparents_, but it appears to only be set up for leveled compaction and not CompactFiles
      3. it's easy to introduce a subtle and dangerous bug like this: D36225
      4. SetupBottomMostLevel() is hard to understand and it shouldn't be. See this comment: https://github.com/facebook/rocksdb/blob/afbafeaeaebfd27a0f3e992fee8e0c57d07658fa/db/compaction.cc#L236-L241. It also made it harder for @yhchiang to write CompactFiles, as evidenced by this: https://github.com/facebook/rocksdb/blob/afbafeaeaebfd27a0f3e992fee8e0c57d07658fa/db/compaction_picker.cc#L204-L210
      
      The problem is that we create a Compaction object, which holds a lot of state, and then pass it around to some functions. After those functions are done mutating it, we call a couple of functions on the Compaction object, like SetupBottommostLevel() and MarkFilesBeingCompacted(). It is very hard to see what's happening to all of the Compaction's state while it travels across different functions. If you're writing a new PickCompaction() function, you need to try really hard to understand which functions you need to run on the Compaction object and what state you need to set up.
      
      My proposed solution is to make important parts of Compaction immutable after construction. PickCompaction() should calculate compaction inputs and then pass them onto Compaction object once they are finalized. That makes it easy to create a new compaction -- just provide all the parameters to the constructor and you're done. No need to call confusing functions after you created your object.
      
      This diff doesn't fully achieve that goal, but it comes pretty close. Here are some of the changes:
      * have one Compaction constructor instead of two.
      * inputs_ is constant after construction
      * MarkFilesBeingCompacted() is now private to Compaction class and automatically called on construction/destruction.
      * SetupBottommostLevel() is gone. Compaction figures it out on its own based on the input.
      * CompactionPicker's functions are not passing around Compaction object anymore. They are only passing around the state that they need.
      
      Test Plan:
      make check
      make asan_check
      make valgrind_check
      
      Reviewers: rven, anthony, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: sdong, yhchiang, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36687
  31. 03 Apr, 2015 · 1 commit
  32. 03 Mar, 2015 · 1 commit
    • options.level_compaction_dynamic_level_bytes to allow RocksDB to pick size... · db037393
      Igor Canadi authored
      options.level_compaction_dynamic_level_bytes to allow RocksDB to pick size bases of levels dynamically.
      
      Summary:
      When max_bytes_for_level_base is fixed, the ratio between the size of the largest level and the second-largest one can range from 0 to the multiplier. This makes the LSM tree frequently irregular and unpredictable. It can also cause poor space amplification in some cases.
      
      In this improvement (proposed by Igor Kabiljo), we introduce a parameter, options.level_compaction_dynamic_level_bytes. When it is turned on, RocksDB is free to pick a level base in the range of (options.max_bytes_for_level_base / options.max_bytes_for_level_multiplier, options.max_bytes_for_level_base] so that real level ratios are close to options.max_bytes_for_level_multiplier.
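
      A simplified sketch of the dynamic sizing idea (not the exact RocksDB algorithm): anchor the targets on the actual size of the last level and divide by the multiplier going up, so adjacent level ratios stay close to the configured multiplier:
      ```
      #include <cstdint>
      #include <vector>

      std::vector<uint64_t> DynamicLevelTargets(uint64_t last_level_size,
                                                int num_levels,
                                                uint64_t multiplier) {
        std::vector<uint64_t> targets(static_cast<size_t>(num_levels), 0);
        uint64_t size = last_level_size;
        for (int level = num_levels - 1; level >= 1; --level) {
          targets[static_cast<size_t>(level)] = size;
          size /= multiplier;  // each level above is `multiplier` times smaller
        }
        return targets;  // targets[0] unused: L0 is count-, not size-triggered
      }
      ```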
      
      Test Plan: New unit tests and pass tests suites including valgrind.
      
      Reviewers: MarkCallaghan, rven, yhchiang, igor, ikabiljo
      
      Reviewed By: ikabiljo
      
      Subscribers: yoshinorim, ikabiljo, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31437
  33. 21 Feb, 2015 · 2 commits
  34. 16 Jan, 2015 · 1 commit
    • Remove Compaction::ReleaseInputs(). · b229f970
      Yueh-Hsuan Chiang authored
      Summary:
      This patch removes the unnecessary Compaction::ReleaseInputs().
      
      Compaction::ReleaseInputs() tries to unref its input_version
      and column_family.  However, such unref is always done in
      ~Compaction(), and all current ReleaseInputs() calls are
      right before the destructor.
      
      Test Plan: ./db_test
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: igor, rven, dhruba, sdong
      
      Differential Revision: https://reviews.facebook.net/D31605
  35. 17 Dec, 2014 · 1 commit
  36. 15 Nov, 2014 · 2 commits
    • Fix build · 04ca7481
      Igor Canadi authored
    • CompactionJobTest · 9be338cf
      Igor Canadi authored
      Summary:
      This is just a simple test that passes two files through a compaction. It shows the framework so that people can continue building new compaction *unit* tests.
      In the future we might want to move some Compaction* tests from DBTest here. For example, CompactBetweenSnapshot seems a good candidate.
      
      Hopefully this test can be simpler when we mock out VersionSet.
      
      Test Plan: this is a test
      
      Reviewers: ljin, rven, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D28449
  37. 12 Nov, 2014 · 1 commit
    • Turn on -Wshorten-64-to-32 and fix all the errors · 767777c2
      Igor Canadi authored
      Summary:
      We need to turn on -Wshorten-64-to-32 for mobile. See D1671432 (internal phabricator) for details.
      
      This diff turns on the warning flag and fixes all the errors. There were also some interesting errors that I might call bugs, especially in plain table. Going forward, I think it makes sense to have this flag turned on and be very very careful when converting 64-bit to 32-bit variables.
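
      The class of error the flag catches, and the style of fix applied throughout this diff, in an illustrative miniature:
      ```
      #include <cstdint>
      #include <vector>

      uint32_t NumEntries(const std::vector<int>& v) {
        // return v.size();  // -Wshorten-64-to-32: size_t (64-bit) -> uint32_t
        return static_cast<uint32_t>(v.size());  // explicit, reviewed narrowing
      }
      ```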
      
      Test Plan: compiles
      
      Reviewers: ljin, rven, yhchiang, sdong
      
      Reviewed By: yhchiang
      
      Subscribers: bobbaldwin, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D28689
  38. 08 Nov, 2014 · 1 commit
    • CompactFiles, EventListener and GetDatabaseMetaData · 28c82ff1
      Yueh-Hsuan Chiang authored
      Summary:
      This diff adds three sets of APIs to RocksDB.
      
      = GetColumnFamilyMetaData =
      * This API allows users to obtain the current state of a RocksDB instance for one column family.
      * See GetColumnFamilyMetaData in include/rocksdb/db.h
      
      = EventListener =
      * A virtual class that allows users to implement a set of
        call-back functions which will be called when specific
        events of a RocksDB instance happens.
      * To register EventListener, simply insert an EventListener to ColumnFamilyOptions::listeners
      
      = CompactFiles =
      * The CompactFiles API takes a set of file numbers and an output level, and RocksDB
        will try to compact those files into the specified level.
      
      = Example =
      * Example code can be found in example/compact_files_example.cc, which implements
        a simple external compactor using EventListener, GetColumnFamilyMetaData, and
        CompactFiles API.
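
      A minimal sketch in the spirit of example/compact_files_example.cc: list the files currently in L0 via GetColumnFamilyMetaData, then ask CompactFiles to merge them into L1 (error handling elided):
      ```
      #include <string>
      #include <vector>
      #include "rocksdb/db.h"

      rocksdb::Status CompactLevel0(rocksdb::DB* db) {
        rocksdb::ColumnFamilyMetaData meta;
        db->GetColumnFamilyMetaData(&meta);  // snapshot of files per level

        std::vector<std::string> input_files;
        for (const auto& file : meta.levels[0].files) {
          input_files.push_back(file.name);  // name is relative to the DB path
        }
        if (input_files.empty()) return rocksdb::Status::OK();

        return db->CompactFiles(rocksdb::CompactionOptions(), input_files,
                                /*output_level=*/1);
      }
      ```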
      
      Test Plan:
      listener_test
      compactor_test
      example/compact_files_example
      export ROCKSDB_TESTS=CompactFiles
      db_test
      export ROCKSDB_TESTS=MetaData
      db_test
      
      Reviewers: ljin, igor, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D24705