1. 10 Jul, 2013 (1 commit)
  2. 01 Jul, 2013 (2 commits)
    • D
      Reduce write amplification by merging files in L0 back into L0 · 47c4191f
      Committed by Dhruba Borthakur
      Summary:
      There is a new option called hybrid_mode which, when switched on,
      causes HBase-style compactions: files from L0 are
      compacted back into L0. The meat of this compaction algorithm
      is in PickCompactionHybrid().

      All files reside in L0, which means all files have overlapping
      keys. Each file is time-bounded, i.e. each file contains a
      range of keys that were inserted around the same time. The
      start-seqno and the end-seqno refer to the timeframe in which
      these keys were inserted.  Files with contiguous seqnos
      are compacted together into a larger file. All files are
      ordered from most recent to oldest.

      The current compaction algorithm looks for
      candidate files starting from the most recent file. It continues to
      add files to the same compaction run as long as the
      total size of the files chosen so far is smaller than the size of
      the next candidate file. This logic still needs to be debated
      and validated.

      The above logic should reduce write amplification to a
      large extent... will publish numbers shortly.
      
      Test Plan: dbstress runs for 6 hours with no data corruption (tested so far).
      
      Differential Revision: https://reviews.facebook.net/D11289
      47c4191f
    • D
      Reduce write amplification by merging files in L0 back into L0 · 554c06dd
      Committed by Dhruba Borthakur
      Summary:
      There is a new option called hybrid_mode which, when switched on,
      causes HBase-style compactions: files from L0 are
      compacted back into L0. The meat of this compaction algorithm
      is in PickCompactionHybrid().

      All files reside in L0, which means all files have overlapping
      keys. Each file is time-bounded, i.e. each file contains a
      range of keys that were inserted around the same time. The
      start-seqno and the end-seqno refer to the timeframe in which
      these keys were inserted.  Files with contiguous seqnos
      are compacted together into a larger file. All files are
      ordered from most recent to oldest.

      The current compaction algorithm looks for
      candidate files starting from the most recent file. It continues to
      add files to the same compaction run as long as the
      total size of the files chosen so far is smaller than the size of
      the next candidate file. This logic still needs to be debated
      and validated.

      The above logic should reduce write amplification to a
      large extent... will publish numbers shortly.
      
      Test Plan: dbstress runs for 6 hours with no data corruption (tested so far).
      
      Differential Revision: https://reviews.facebook.net/D11289
      554c06dd
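      A minimal sketch of the candidate-selection rule described in the two commits above (keep adding older L0 files while the accumulated size stays below the next candidate's size). The FileMeta struct and PickL0CompactionCandidates() name are illustrative stand-ins, not the actual PickCompactionHybrid() code:

      #include <cstdint>
      #include <vector>

      struct FileMeta {          // hypothetical stand-in for FileMetaData
        uint64_t file_size;      // bytes
        uint64_t start_seqno;    // oldest seqno in the file
        uint64_t end_seqno;      // newest seqno in the file
      };

      // `files` must be ordered from most recent to oldest, as described above.
      // Returns how many of the leading files should be merged into one new L0 file.
      size_t PickL0CompactionCandidates(const std::vector<FileMeta>& files) {
        uint64_t accumulated = 0;
        size_t picked = 0;
        for (const FileMeta& f : files) {
          // Stop once the files chosen so far are no longer smaller than the next
          // candidate: merging a small tail into a much larger file would mostly
          // rewrite the large file and increase write amplification.
          if (picked > 0 && accumulated >= f.file_size) break;
          accumulated += f.file_size;
          ++picked;
        }
        return picked;  // candidates [0, picked) are compacted together
      }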
  3. 13 Jun, 2013 (1 commit)
    • H
      [RocksDB] cleanup EnvOptions · bdf10859
      Committed by Haobo Xu
      Summary:
      This diff simplifies EnvOptions by treating it as POD, similar to Options.
      - virtual functions are removed and member fields are accessed directly.
      - StorageOptions is removed.
      - Options.allow_readahead and Options.allow_readahead_compactions are deprecated.
      - Unused global variables are removed: useOsBuffer, useFsReadAhead, useMmapRead, useMmapWrite
      
      Test Plan: make check; db_stress
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11175
      bdf10859
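      The spirit of the change is that EnvOptions becomes a plain struct whose fields are read directly rather than through virtual getters. A rough sketch, with field names chosen for illustration rather than copied from the diff:

      // Plain-old-data options object: no virtual functions, fields accessed directly.
      struct EnvOptions {
        bool use_os_buffer = true;                 // buffered reads through the OS page cache
        bool use_mmap_reads = false;               // read files via mmap
        bool use_mmap_writes = true;               // write files via mmap
        bool use_readahead_for_compaction = true;  // filesystem readahead on compaction reads
      };

      // Callers copy and tweak the struct, then pass it down, e.g.:
      //   EnvOptions soptions;
      //   soptions.use_mmap_reads = true;
      //   env->NewRandomAccessFile(fname, &file, soptions);   // call shape is illustrative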
  4. 04 Jun, 2013 (1 commit)
  5. 04 May, 2013 (1 commit)
    • H
      [Rocksdb] Support Merge operation in rocksdb · 05e88540
      Committed by Haobo Xu
      Summary:
      This diff introduces a new Merge operation into rocksdb.
      The purpose of this review is mostly getting feedback from the team (everyone please) on the design.
      
      Please focus on the four files under include/leveldb/, as they spell the client visible interface change.
      include/leveldb/db.h
      include/leveldb/merge_operator.h
      include/leveldb/options.h
      include/leveldb/write_batch.h
      
      Please go over local/my_test.cc carefully, as it is a concrete use case.

      Please also review the implementation files to see if the straw-man implementation makes sense.

      Note that the diff does pass all of make check and truly supports a forward iterator over the db and a version
      of Get that is based on the iterator.
      
      Future work:
      - Integration with compaction
      - A raw Get implementation
      
      I am working on a wiki that explains the design and implementation choices, but coding comes
      just naturally and I think it might be a good idea to share the code earlier. The code is
      heavily commented.
      
      Test Plan: run all local tests
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: dhruba
      
      CC: leveldb, zshao, sheki, emayanke, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D9651
      05e88540
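      A small usage sketch of the Merge operation, written against the later public rocksdb:: naming (AssociativeMergeOperator); the diff under review placed these headers under include/leveldb/, so treat names and paths here as assumptions:

      #include <cstdint>
      #include <string>
      #include "rocksdb/db.h"
      #include "rocksdb/merge_operator.h"

      // A merge operator that treats values as decimal 64-bit counters and adds them.
      class UInt64AddOperator : public rocksdb::AssociativeMergeOperator {
       public:
        bool Merge(const rocksdb::Slice& /*key*/, const rocksdb::Slice* existing,
                   const rocksdb::Slice& value, std::string* new_value,
                   rocksdb::Logger* /*logger*/) const override {
          uint64_t base = existing ? std::stoull(existing->ToString()) : 0;
          *new_value = std::to_string(base + std::stoull(value.ToString()));
          return true;
        }
        const char* Name() const override { return "UInt64AddOperator"; }
      };

      // Usage: a read-modify-write without an explicit Get:
      //   options.merge_operator.reset(new UInt64AddOperator);
      //   db->Merge(rocksdb::WriteOptions(), "hits", "1");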
  6. 16 Apr, 2013 (1 commit)
  7. 13 Apr, 2013 (1 commit)
    • H
      [RocksDB] [Performance] Speed up FindObsoleteFiles · 013e9ebb
      Committed by Haobo Xu
      Summary:
      FindObsoleteFiles was slow while holding the single big lock, which resulted in bad p99 behavior.
      Didn't profile anything, but several things could be improved:
      1. VersionSet::AddLiveFiles works with std::set, which is by itself slow (a tree).
         You also don't know how many dynamic allocations occur just for building up this tree.
         Switched to std::vector; also added logic to pre-calculate the total size and do just one allocation.
      2. Don't see why env_->GetChildren() needs to be mutex protected; moved it to PurgeObsoleteFiles where
         the mutex could be unlocked.
      3. Switched std::set to std::unordered_set; the conversion from the vector also happens inside PurgeObsoleteFiles.
      I have a feeling this should pretty much fix it.
      
      Test Plan: make check;  db_stress
      
      Reviewers: dhruba, heyongqiang, MarkCallaghan
      
      Reviewed By: dhruba
      
      CC: leveldb, zshao
      
      Differential Revision: https://reviews.facebook.net/D10197
      013e9ebb
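      The gist of items 1 and 3 above, as a sketch with illustrative names: collect live file numbers into a pre-sized std::vector under the mutex, and build the std::unordered_set used for membership tests later, outside the lock, in PurgeObsoleteFiles:

      #include <cstdint>
      #include <unordered_set>
      #include <vector>

      // Under the mutex: append live file numbers into a vector sized up front,
      // so the hot path does one allocation instead of building a tree node by node.
      void AddLiveFiles(std::vector<uint64_t>* live,
                        const std::vector<std::vector<uint64_t>>& files_per_version) {
        size_t total = 0;
        for (const auto& v : files_per_version) total += v.size();
        live->reserve(live->size() + total);                    // one allocation
        for (const auto& v : files_per_version)
          live->insert(live->end(), v.begin(), v.end());
      }

      // Outside the mutex (in PurgeObsoleteFiles): convert to a hash set for O(1)
      // lookups while scanning the directory listing from env_->GetChildren().
      std::unordered_set<uint64_t> ToLookupSet(const std::vector<uint64_t>& live) {
        return std::unordered_set<uint64_t>(live.begin(), live.end());
      }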
  8. 21 Mar, 2013 (1 commit)
    • D
      Ability to configure bufferedio-reads, filesystem-readaheads and mmap-read-write per database. · ad96563b
      Committed by Dhruba Borthakur
      Summary:
      This patch allows an application to specify, per database, whether to use buffered io,
      reads via mmap, and writes via mmap. Earlier, a global static variable
      was used to configure this functionality.
      
      The default setting remains the same (and is backward compatible):
       1. use bufferedio
       2. do not use mmaps for reads
       3. use mmap for writes
       4. use readaheads for reads needed for compaction
      
      I also added a parameter to db_bench to be able to explicitly specify
      whether to do readaheads for compactions or not.
      
      Test Plan: make check
      
      Reviewers: sheki, heyongqiang, MarkCallaghan
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9429
      ad96563b
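      After this change the behaviour is driven by per-database Options fields instead of a global static. A sketch of what setting them looks like; the exact field names in the diff may differ from these later RocksDB names:

      #include "rocksdb/db.h"        // header path as in later releases; illustrative
      #include "rocksdb/options.h"

      int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        // Defaults described above: buffered io, no mmap reads, mmap writes,
        // readahead for compaction reads. Override per database, for example:
        options.allow_mmap_reads = true;   // read sst files via mmap for this DB only

        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/testdb", &db);
        delete db;
        return s.ok() ? 0 : 1;
      }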
  9. 20 Mar, 2013 (1 commit)
  10. 12 Mar, 2013 (1 commit)
    • D
      Prevent segfault because SizeUnderCompaction was called without any locks. · ebf16f57
      Committed by Dhruba Borthakur
      Summary:
      SizeBeingCompacted was called without any lock protection. This causes
      crashes, especially when running db_bench with value_size=128K.
      The fix is to compute the sizes being compacted while holding the mutex and
      to pass these values into the call to Finalize.
      
      (gdb) where
      #4  leveldb::VersionSet::SizeBeingCompacted (this=this@entry=0x7f0b490931c0, level=level@entry=4) at db/version_set.cc:1827
      #5  0x000000000043a3c8 in leveldb::VersionSet::Finalize (this=this@entry=0x7f0b490931c0, v=v@entry=0x7f0b3b86b480) at db/version_set.cc:1420
      #6  0x00000000004418d1 in leveldb::VersionSet::LogAndApply (this=0x7f0b490931c0, edit=0x7f0b3dc8c200, mu=0x7f0b490835b0, new_descriptor_log=<optimized out>) at db/version_set.cc:1016
      #7  0x00000000004222b2 in leveldb::DBImpl::InstallCompactionResults (this=this@entry=0x7f0b49083400, compact=compact@entry=0x7f0b2b8330f0) at db/db_impl.cc:1473
      #8  0x0000000000426027 in leveldb::DBImpl::DoCompactionWork (this=this@entry=0x7f0b49083400, compact=compact@entry=0x7f0b2b8330f0) at db/db_impl.cc:1757
      #9  0x0000000000426690 in leveldb::DBImpl::BackgroundCompaction (this=this@entry=0x7f0b49083400, madeProgress=madeProgress@entry=0x7f0b41bf2d1e, deletion_state=...) at db/db_impl.cc:1268
      #10 0x0000000000428f42 in leveldb::DBImpl::BackgroundCall (this=0x7f0b49083400) at db/db_impl.cc:1170
      #11 0x000000000045348e in BGThread (this=0x7f0b49023100) at util/env_posix.cc:941
      #12 leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper (arg=0x7f0b49023100) at util/env_posix.cc:874
      #13 0x00007f0b4a7cf10d in start_thread (arg=0x7f0b41bf3700) at pthread_create.c:301
      #14 0x00007f0b49b4b11d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
      
      Test Plan:
      make check
      
      I am running db_bench with a value size of 128K to see if the segfault is fixed.
      
      Reviewers: MarkCallaghan, sheki, emayanke
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9279
      ebf16f57
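      The pattern of the fix, sketched with standard-library types rather than the real VersionSet members: snapshot the mutex-protected state while the lock is held, then pass that snapshot into the function that needs it, instead of letting that function re-read shared state without the lock:

      #include <cstdint>
      #include <mutex>
      #include <vector>

      std::mutex mu;
      std::vector<uint64_t> bytes_being_compacted;   // shared state, guarded by mu

      // Stand-in for Finalize(): works only on the snapshot it is given and never
      // touches the shared vector, so it cannot race with compaction threads.
      void Finalize(const std::vector<uint64_t>& size_being_compacted) {
        (void)size_being_compacted;  // ...recompute per-level compaction scores...
      }

      void LogAndApplySketch() {
        std::vector<uint64_t> snapshot;
        {
          std::lock_guard<std::mutex> l(mu);
          snapshot = bytes_being_compacted;   // computed while holding the mutex
        }
        Finalize(snapshot);                   // safe regardless of later locking
      }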
  11. 04 Mar, 2013 (1 commit)
    • M
      Add rate_delay_limit_milliseconds · 993543d1
      Committed by Mark Callaghan
      Summary:
      This adds the rate_delay_limit_milliseconds option to make the delay
      configurable in MakeRoomForWrite when the max compaction score is too high.
      This delay is called the Ln slowdown. This change also counts the Ln slowdown
      per level to make it possible to see where the stalls occur.
      
      From IO-bound performance testing, the Level N stalls occur:
      * with compression -> at the largest uncompressed level. This makes sense
                            because compaction for compressed levels is much
                            slower. When Lx is uncompressed and Lx+1 is compressed
                            then files pile up at Lx because the (Lx,Lx+1)->Lx+1
                            compaction process is the first to be slowed by
                            compression.
      * without compression -> at level 1
      
      Task ID: #1832108
      
      Blame Rev:
      
      Test Plan:
      run with real data, added test
      
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D9045
      993543d1
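      A sketch of the write-path throttle the new option controls; the names and the exact delay policy here are assumptions, not the diff's code: when the maximum compaction score is too high, MakeRoomForWrite sleeps for a bounded, configurable time and the stall is charged to the level that caused it:

      #include <chrono>
      #include <cstdint>
      #include <thread>
      #include <vector>

      struct Stats {
        std::vector<uint64_t> level_slowdown_micros;   // per-level Ln stall accounting
      };

      // Illustrative throttle: sleep when the worst level's compaction score is too
      // high, with the sleep bounded by the new rate_delay_limit_milliseconds option.
      void MaybeDelayWrite(double max_compaction_score, int worst_level,
                           int rate_delay_limit_milliseconds, Stats* stats) {
        if (max_compaction_score <= 1.0) return;   // compaction is keeping up
        std::this_thread::sleep_for(
            std::chrono::milliseconds(rate_delay_limit_milliseconds));
        stats->level_slowdown_micros[worst_level] +=
            uint64_t(rate_delay_limit_milliseconds) * 1000;
      }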
  12. 01 Mar, 2013 (1 commit)
  13. 25 Jan, 2013 (1 commit)
    • C
      Use fallocate to prevent excessive allocation of sst files and logs · 3dafdfb2
      Committed by Chip Turner
      Summary:
      On some filesystems, pre-allocation can claim a considerable
      amount of space.  xfs in our production environment pre-allocates by
      1GB, for instance.  By using fallocate to inform the kernel of our
      expected file sizes, we eliminate this wastage (which isn't recovered
      until the file is closed, and in the case of LOG files that can be a
      considerable amount of time).
      
      Test Plan:
      created an xfs loopback filesystem, mounted with
      allocsize=4M, and ran db_stress.  The LOG file without this change was 4M;
      with it, the file was 128k and then grew to its normal size.
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: adsharma, leveldb
      
      Differential Revision: https://reviews.facebook.net/D7953
      3dafdfb2
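      The underlying mechanism is the Linux fallocate(2) call. A minimal sketch of telling the kernel the expected size of a new file (flags and error handling simplified; the actual Env code in the diff differs):

      #ifndef _GNU_SOURCE
      #define _GNU_SOURCE      // fallocate() and FALLOC_FL_KEEP_SIZE are GNU/Linux extensions
      #endif
      #include <fcntl.h>
      #include <unistd.h>
      #include <cstdio>

      int main() {
        int fd = open("/tmp/demo.sst", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { std::perror("open"); return 1; }
        // Reserve ~2MB up front; FALLOC_FL_KEEP_SIZE keeps the reported file size at
        // the bytes actually written, so closing early does not leave a huge file.
        if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 2 * 1024 * 1024) != 0) {
          std::perror("fallocate");   // e.g. not supported on this filesystem
        }
        close(fd);
        return 0;
      }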
  14. 24 Jan, 2013 (1 commit)
    • C
      Fix a number of object lifetime/ownership issues · 2fdf91a4
      Committed by Chip Turner
      Summary:
      Replace manual memory management with std::unique_ptr in a
      number of places; not exhaustive, but this fixes a few leaks with file
      handles as well as clarifies semantics of the ownership of file handles
      with log classes.
      
      Test Plan: db_stress, make check
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: zshao, leveldb, heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D8043
      2fdf91a4
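      An illustration of the ownership style the diff moves toward: the file handle is created into a std::unique_ptr and handed to the log writer, so exactly one owner closes it. The class names below are stand-ins, not the diff's:

      #include <cstdio>
      #include <memory>
      #include <string>
      #include <utility>

      class WritableFile {                            // stand-in for the Env file handle
       public:
        explicit WritableFile(const std::string& name) : f_(std::fopen(name.c_str(), "w")) {}
        ~WritableFile() { if (f_) std::fclose(f_); }  // closed exactly once
        void Append(const std::string& data) { if (f_) std::fputs(data.c_str(), f_); }
       private:
        std::FILE* f_;
      };

      class LogWriter {                               // takes ownership explicitly
       public:
        explicit LogWriter(std::unique_ptr<WritableFile> dest) : dest_(std::move(dest)) {}
        void AddRecord(const std::string& rec) { dest_->Append(rec); }
       private:
        std::unique_ptr<WritableFile> dest_;          // no manual delete, no double close
      };

      int main() {
        auto file = std::make_unique<WritableFile>("/tmp/demo.log");
        LogWriter writer(std::move(file));            // ownership is clear at the call site
        writer.AddRecord("hello\n");
        return 0;
      }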
  15. 17 Jan, 2013 (1 commit)
    • A
      rollover manifest file. · 7d5a4383
      Committed by Abhishek Kona
      Summary:
      Check in LogAndApply whether the file size is more than the limit set in
      Options.
      Things to consider: will this be expensive?
      
      Test Plan: make all check. Inputs on a new unit test?
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D7701
      7d5a4383
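      A sketch of the check described above; the option and member names are assumptions: when LogAndApply sees the current MANIFEST exceed the configured limit, it starts a new descriptor file:

      #include <cstdint>

      // Illustrative shape of the rollover decision inside LogAndApply():
      bool ShouldRollManifest(uint64_t current_manifest_size,
                              uint64_t max_manifest_file_size) {
        // Writing a fresh MANIFEST costs one full snapshot of the version state now,
        // but keeps recovery time and disk usage bounded.
        return current_manifest_size > max_manifest_file_size;
      }

      // ...in LogAndApply():
      //   if (ShouldRollManifest(manifest_size, options_->max_manifest_file_size)) {
      //     create MANIFEST-<N+1>, write a full snapshot of the current version,
      //     and point CURRENT at the new file.
      //   }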
  16. 16 Jan, 2013 (1 commit)
    • K
      Fixed bug with seek compactions on Level 0 · 28fe86c4
      Committed by Kosie van der Merwe
      Summary: Due to how the code handled compactions in Level 0 in `PickCompaction()`, two compactions on level 0 could run and produce tables in level 1 that overlap. However, this case seems like it would only occur on a seek compaction, which is unlikely on level 0. Furthermore, level 0 and level 1 had to have a certain arrangement of files.
      
      Test Plan:
      make check
      
      Reviewers: dhruba, vamsi
      
      Reviewed By: dhruba
      
      CC: leveldb, sheki
      
      Differential Revision: https://reviews.facebook.net/D7923
      28fe86c4
  17. 11 Jan, 2013 (1 commit)
  18. 17 Dec, 2012 (1 commit)
    • Z
      manifest_dump: Add --hex=1 option · c2809753
      Committed by Zheng Shao
      Summary: Without this option, manifest_dump does not print binary keys for files in a human-readable way.
      
      Test Plan:
      ./manifest_dump --hex=1 --verbose=0 --file=/data/users/zshao/fdb_comparison/leveldb/fbobj.apprequest-0_0_original/MANIFEST-000002
      manifest_file_number 589 next_file_number 590 last_sequence 2311567 log_number 543  prev_log_number 0
      --- level 0 --- version# 0 ---
       532:1300357['0000455BABE20000' @ 2183973 : 1 .. 'FFFCA5D7ADE20000' @ 2184254 : 1]
       536:1308170['000198C75CE30000' @ 2203313 : 1 .. 'FFFCF94A79E30000' @ 2206463 : 1]
       542:1321644['0002931AA5E50000' @ 2267055 : 1 .. 'FFF77B31C5E50000' @ 2270754 : 1]
       544:1286390['000410A309E60000' @ 2278592 : 1 .. 'FFFE470A73E60000' @ 2289221 : 1]
       538:1298778['0006BCF4D8E30000' @ 2217050 : 1 .. 'FFFD77DAF7E30000' @ 2220489 : 1]
       540:1282353['00090D5356E40000' @ 2231156 : 1 .. 'FFFFF4625CE40000' @ 2231969 : 1]
      --- level 1 --- version# 0 ---
       510:2112325['000007F9C2D40000' @ 1782099 : 1 .. '146F5B67B8D80000' @ 1905458 : 1]
       511:2121742['146F8A3023D60000' @ 1824388 : 1 .. '28BC8FBB9CD40000' @ 1777993 : 1]
       512:801631['28BCD396F1DE0000' @ 2080191 : 1 .. '3082DBE9ADDB0000' @ 1989927 : 1]
      
      Reviewers: dhruba, sheki, emayanke
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D7425
      c2809753
  19. 05 Dec, 2012 (1 commit)
  20. 29 Nov, 2012 (1 commit)
    • A
      Fix all the lint errors. · d29f1819
      Committed by Abhishek Kona
      Summary:
      Scripted the removal of all trailing spaces and converted all tabs to
      spaces.

      Also fixed other lint errors.
      All lint errors from this point on should be taken seriously.
      
      Test Plan: make all check
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D7059
      d29f1819
  21. 27 Nov, 2012 (1 commit)
    • D
      Assertion failure while running unit tests with OPT=-g · 2a396999
      Committed by Dhruba Borthakur
      Summary:
      When we expand the range of keys for a level 0 compaction, we
      need to invoke ParentFilesInCompaction() only once for the
      entire range of keys that is being compacted. We were invoking
      it for each file that was being compacted, but this triggers
      an assertion because the files' ranges were contiguous but
      non-overlapping.
      
      I renamed ParentFilesInCompaction to ParentRangeInCompaction
      to adequately represent that it is the range-of-keys and
      not individual files that we compact in a single compaction run.
      
      Here is the assertion that is fixed by this patch.
      db_test: db/version_set.cc:585: void leveldb::Version::ExtendOverlappingInputs(int, const leveldb::Slice&, const leveldb::Slice&, std::vector<leveldb::FileMetaData*, std::allocator<leveldb::FileMetaData*> >*, int): Assertion `user_cmp->Compare(flimit, user_begin) >= 0' failed.
      
      Test Plan: make clean check OPT=-g
      
      Reviewers: sheki
      
      Reviewed By: sheki
      
      CC: MarkCallaghan, emayanke, leveldb
      
      Differential Revision: https://reviews.facebook.net/D6963
      2a396999
  22. 20 Nov, 2012 (2 commits)
  23. 08 Nov, 2012 (3 commits)
    • D
      Move filesize-based-sorting to outside the Mutex · 95dda378
      Committed by Dhruba Borthakur
      Summary:
      When a new version is created, we sort all the files at every
      level based on their size. This is necessary because we want
      to compact the largest file first. The sorting takes quite a
      bit of CPU.
      
      Moved the sorting code to be outside the mutex. Also, the
      earlier code was sorting files at all levels but we do not
      need to sort the highest-number level because those files
      are never the cause of any compaction. To reduce sorting
      costs, we sort only the first few files in each level
      because it is likely that those are the only files in that
      level that will be picked for compaction.
      
      At steady state, I have seen that this patch increases
      throughput from 1500 writes/sec to 1700 writes/sec at the
      end of a 72-hour run. The cpu saving from not sorting the
      last level was not noticeable in this test run because
      there were only 100K files in the highest numbered level.
      I expect the cpu saving to be significant when the number of
      files is much higher.

      This is mostly an early preview and not ready for rigorous review.

      With this patch, writes/sec is now bottlenecked not by the sorting code but by GetOverlappingInputs. I am working on a patch to optimize GetOverlappingInputs.
      
      Test Plan: make check
      
      Reviewers: MarkCallaghan, heyongqiang
      
      Reviewed By: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D6411
      95dda378
    • D
      Fixed compilation error in previous merge. · 18cb6004
      Committed by Dhruba Borthakur
      Summary:
      Fixed compilation error in previous merge.
      
      Test Plan:
      
      Reviewers:
      
      CC:
      
      Task ID: #
      
      Blame Rev:
      18cb6004
    • D
      Avoid doing a exhaustive search when looking for overlapping files. · 9b87a2ba
      Committed by Dhruba Borthakur
      Summary:
      Version::GetOverlappingInputs() is called multiple times in
      the compaction code path. Each invocation does a binary search
      for overlapping files in the specified key range.
      This patch remembers the offset of an overlapping file when
      GetOverlappingInputs() is called the first time within
      a compaction run. Succeeding calls to GetOverlappingInputs()
      use the remembered index to avoid the binary search.

      I measured that 1000 iterations of GetOverlappingInputs
      take around 4500 microseconds without this patch. If I use
      this patch with the hint on every invocation, then 1000
      iterations take about 3900 microseconds.
      
      Test Plan: make check OPT=-g
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: MarkCallaghan, emayanke, sheki
      
      Differential Revision: https://reviews.facebook.net/D6513
      9b87a2ba
  24. 06 Nov, 2012 (2 commits)
    • D
      The method GetOverlappingInputs should use binary search. · cb7a0022
      Committed by Dhruba Borthakur
      Summary:
      The method Version::GetOverlappingInputs used a sequential search
      to map a key range to a set of files. But the files are arranged
      in ascending order of key, so a binary search is more effective.
      
      This patch implements Version::GetOverlappingInputsBinarySearch
      that finds one file that corresponds to the specified key range
      and then iterates backwards and forwards to find all overlapping
      files.
      
      This patch is critical for making compactions efficient, especially
      when there are thousands of files in a single level.
      
      I measured that 1000 iterations of TEST_MaxNextLevelOverlappingBytes
      takes 16000 microseconds without this patch. With this patch, the
      same method takes about 4600 microseconds.
      
      Test Plan: Almost all unit tests in db_test uses this method to lookup keys.
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: MarkCallaghan, emayanke, sheki
      
      Differential Revision: https://reviews.facebook.net/D6465
      cb7a0022
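      A self-contained sketch of the approach described above, assuming the level's files are sorted by smallest key and do not overlap (true for levels above 0): binary-search for the first file that could overlap the range, then scan forward to collect the whole overlapping run:

      #include <string>
      #include <vector>

      struct FileMeta { std::string smallest, largest; };   // user keys, illustrative

      // `files` is sorted by smallest key and non-overlapping (a level > 0).
      std::vector<size_t> GetOverlappingInputsSketch(const std::vector<FileMeta>& files,
                                                     const std::string& begin,
                                                     const std::string& end) {
        std::vector<size_t> out;
        // Binary search for the first file whose largest key is >= begin.
        size_t lo = 0, hi = files.size();
        while (lo < hi) {
          size_t mid = (lo + hi) / 2;
          if (files[mid].largest < begin) lo = mid + 1; else hi = mid;
        }
        // Collect files until one starts past the end of the requested range.
        for (size_t i = lo; i < files.size() && files[i].smallest <= end; ++i)
          out.push_back(i);
        return out;
      }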
    • H
      Add a tool to change number of levels · d55c2ba3
      Committed by heyongqiang
      Summary: as subject.
      
      Test Plan: manually tested it; will add a test case
      
      Reviewers: dhruba, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D6345
      d55c2ba3
  25. 30 Oct, 2012 (1 commit)
    • M
      Adds DB::GetNextCompaction and then uses that for rate limiting db_bench · 70c42bf0
      Committed by Mark Callaghan
      Summary:
      Adds a method that returns the score for the next level that most
      needs compaction. That method is then used by db_bench to rate limit threads.
      Threads are put to sleep at the end of each stats interval until the score
      is less than the limit. The limit is set via the --rate_limit=$double option.
      The specified value must be > 1.0. Also adds the option --stats_per_interval
      to enable additional metrics reported every stats interval.
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      run db_bench
      
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D6243
      70c42bf0
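      The db_bench side of this, sketched; GetMaxCompactionScore() below is a stub standing in for the new score query described above: at the end of each stats interval a writer thread naps until the reported score drops below --rate_limit:

      #include <chrono>
      #include <thread>

      // Stub standing in for the score of the level that most needs compaction.
      static double GetMaxCompactionScore() { return 0.5; }

      // Called by each db_bench thread at the end of a stats interval: back off until
      // compaction catches up and the score falls below --rate_limit (must be > 1.0).
      static void RateLimitThread(double rate_limit) {
        while (GetMaxCompactionScore() >= rate_limit) {
          std::this_thread::sleep_for(std::chrono::seconds(1));
        }
      }

      int main() { RateLimitThread(2.0); return 0; }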
  26. 26 Oct, 2012 (1 commit)
    • D
      Greedy algorithm for picking files to compact. · 5b0fe6c7
      Committed by Dhruba Borthakur
      Summary:
      It is best if we pick the largest file to compact in a level.
      This reduces the write amplification factor for compactions.
      Each level has an auxiliary data structure called files_by_size_
      that sorts all files by their size. This data structure is
      updated when a new version is created.
      
      Test Plan: make check
      
      Differential Revision: https://reviews.facebook.net/D6195
      5b0fe6c7
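      A sketch of the auxiliary ordering, with illustrative names: when a version is built, record the file indices of each level sorted by decreasing size, so compaction picking can start from the largest file:

      #include <algorithm>
      #include <cstdint>
      #include <numeric>
      #include <vector>

      struct FileMeta { uint64_t number; uint64_t file_size; };

      // Built once per version: indices into `files`, largest file first.
      std::vector<size_t> BuildFilesBySize(const std::vector<FileMeta>& files) {
        std::vector<size_t> by_size(files.size());
        std::iota(by_size.begin(), by_size.end(), size_t{0});
        std::sort(by_size.begin(), by_size.end(), [&](size_t a, size_t b) {
          return files[a].file_size > files[b].file_size;   // descending by size
        });
        return by_size;   // compaction starts from by_size.front(), the largest file
      }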
  27. 23 Oct, 2012 (1 commit)
  28. 20 Oct, 2012 (1 commit)
    • D
      This is the mega-patch multi-threaded compaction · 1ca05843
      Committed by Dhruba Borthakur
      published in https://reviews.facebook.net/D5997.
      
      Summary:
      This patch allows compaction to occur in multiple background threads
      concurrently.
      
      If a manual compaction is issued, the system falls back to a
      single-compaction-thread model. This is done to ensure correctness
      and simplicity of the code. When the manual compaction is finished,
      the system resumes its concurrent-compaction mode automatically.
      
      The updates to the manifest are done via a group-commit approach.
      
      Test Plan: run db_bench
      1ca05843
  29. 25 Sep, 2012 (1 commit)
    • D
      The BackupAPI should also list the length of the manifest file. · ae36e509
      Committed by Dhruba Borthakur
      Summary:
      The GetLiveFiles() api lists the set of sst files and the current
      MANIFEST file. But the database continues to append new data to the
      MANIFEST file even while the application is backing it up to the
      backup location. This means that the database version that is
      stored in the MANIFEST file in the backup location
      does not correspond to the sst files returned by GetLiveFiles.

      This patch adds a new parameter to GetLiveFiles. The new parameter
      returns the current size of the MANIFEST file.
      
      Test Plan: Unit test attached.
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D5631
      ae36e509
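      A usage sketch of the extended API, using the signature as it appears in later RocksDB releases (treat the exact form as an assumption): the extra out-parameter tells the backup tool how many MANIFEST bytes belong to the consistent view:

      #include <cstdint>
      #include <string>
      #include <vector>
      #include "rocksdb/db.h"     // header path as in later releases; illustrative

      void BackupSketch(rocksdb::DB* db) {
        std::vector<std::string> live_files;    // sst files plus CURRENT and MANIFEST
        uint64_t manifest_file_size = 0;
        // flush_memtable=true so the backup does not also need the write-ahead log.
        rocksdb::Status s = db->GetLiveFiles(live_files, &manifest_file_size, true);
        if (!s.ok()) return;
        // Copy each sst file in full, but copy only the first manifest_file_size bytes
        // of the MANIFEST: bytes appended after the call are not part of this view.
      }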
  30. 18 Sep, 2012 (1 commit)
  31. 30 Aug, 2012 (1 commit)
  32. 29 Aug, 2012 (1 commit)
    • H
      merge 1.5 · a4f9b8b4
      Committed by heyongqiang
      Summary:
      
      as subject
      
      Test Plan:
      
      db_test table_test
      
      Reviewers: dhruba
      a4f9b8b4
  33. 22 Aug, 2012 (1 commit)
  34. 18 Aug, 2012 (2 commits)
    • D
      Utility to dump manifest contents. · 2aa514ec
      Committed by Dhruba Borthakur
      Summary:
      ./manifest_dump --file=/tmp/dbbench/MANIFEST-000002
      
      Output looks like
      
      manifest_file_number 30 next_file_number 31 last_sequence 388082 log_number 28  prev_log_number 0
      --- level 0 ---
      --- level 1 ---
      --- level 2 ---
       5:3244155['0000000000000000' @ 1 : 1 .. '0000000000028220' @ 28221 : 1]
       7:3244177['0000000000028221' @ 28222 : 1 .. '0000000000056441' @ 56442 : 1]
       9:3244156['0000000000056442' @ 56443 : 1 .. '0000000000084662' @ 84663 : 1]
       11:3244178['0000000000084663' @ 84664 : 1 .. '0000000000112883' @ 112884 : 1]
       13:3244158['0000000000112884' @ 112885 : 1 .. '0000000000141104' @ 141105 : 1]
       15:3244176['0000000000141105' @ 141106 : 1 .. '0000000000169325' @ 169326 : 1]
       17:3244156['0000000000169326' @ 169327 : 1 .. '0000000000197546' @ 197547 : 1]
       19:3244178['0000000000197547' @ 197548 : 1 .. '0000000000225767' @ 225768 : 1]
       21:3244155['0000000000225768' @ 225769 : 1 .. '0000000000253988' @ 253989 : 1]
       23:3244179['0000000000253989' @ 253990 : 1 .. '0000000000282209' @ 282210 : 1]
       25:3244157['0000000000282210' @ 282211 : 1 .. '0000000000310430' @ 310431 : 1]
       27:3244176['0000000000310431' @ 310432 : 1 .. '0000000000338651' @ 338652 : 1]
       29:3244156['0000000000338652' @ 338653 : 1 .. '0000000000366872' @ 366873 : 1]
      --- level 3 ---
      --- level 4 ---
      --- level 5 ---
      --- level 6 ---
      
      Test Plan: run on test directory created by dbbench
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: hustliubo
      
      Differential Revision: https://reviews.facebook.net/D4743
      2aa514ec
    • H
      add compaction log · 680e571c
      Committed by heyongqiang
      Summary:
      add compaction summary to log
      
      log looks like:
      
      2012/08/17-18:18:32.557334 7fdcaa2bb700 Compaction summary: Base level 0, input file:[11 9 7 ],[]
      
      Test Plan: tested via db_test
      
      Reviewers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D4749
      680e571c