1. 30 Jan 2014, 1 commit
    • InternalStatistics · 3c0dcf0e
      Igor Canadi authored
      Summary:
      In DBImpl we keep track of some statistics internally and expose them via GetProperty(). This diff encapsulates all the internal statistics into a class InternalStatistics. Most of it is copy/paste.
      
      Apart from cleaning up db_impl.cc, this diff is also necessary for column families, since every column family should have its own CompactionStats, MakeRoomForWrite-stall stats, etc. It's much easier to keep track of these per column family if they're nicely encapsulated in their own class.
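
      A minimal sketch of the shape such an encapsulation can take, one instance
      per column family (member names here are illustrative, not the actual
      class interface):

         #include <cstdint>
         #include <vector>

         struct CompactionStats {
           uint64_t micros = 0;
           uint64_t bytes_read = 0;
           uint64_t bytes_written = 0;
         };

         class InternalStatistics {
          public:
           explicit InternalStatistics(int num_levels)
               : compaction_stats_(num_levels) {}

           // Called at the end of each compaction on this column family.
           void AddCompactionStats(int level, const CompactionStats& s) {
             compaction_stats_[level].micros += s.micros;
             compaction_stats_[level].bytes_read += s.bytes_read;
             compaction_stats_[level].bytes_written += s.bytes_written;
           }

           // Called from MakeRoomForWrite when a write is stalled.
           void RecordWriteStall(uint64_t micros) { stall_micros_ += micros; }

          private:
           std::vector<CompactionStats> compaction_stats_;  // indexed by level
           uint64_t stall_micros_ = 0;
         };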
      
      Test Plan: make check
      
      Reviewers: dhruba, kailiu, haobo, sdong, emayanke
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15273
  2. 28 Jan 2014, 1 commit
    • Fsync directory after we create a new file · 832158e7
      Igor Canadi authored
      Summary:
      @dhruba, I'm not sure where we need to sync the directory. I implemented the function in Env() and added the dir sync just after we close the newly created file in the builder.
      
      Should I also add FsyncDir() to new files that get created by a compaction?
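
      For reference, the pattern in question is the standard POSIX one -- fsync
      the directory's own file descriptor after creating a file in it. A hedged
      sketch (FsyncDir is a hypothetical free function here, not the actual Env
      interface):

         #include <fcntl.h>
         #include <unistd.h>

         // Persist the directory entry of a newly created file.
         // Returns 0 on success, -1 on error.
         int FsyncDir(const char* dirname) {
           int fd = open(dirname, O_RDONLY | O_DIRECTORY);
           if (fd < 0) return -1;
           int rc = fsync(fd);
           close(fd);
           return rc;
         }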
      
      Test Plan: Confirmed that FsyncDir is returning Status::OK()
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D14751
  3. 25 Jan 2014, 1 commit
    • MemTableListVersion · c583157d
      Igor Canadi authored
      Summary:
      MemTableListVersion is to MemTableList what Version is to VersionSet. I took almost the same ideas to develop MemTableListVersion. The point is to have the std::list copy done in the background, while flushing, rather than in the foreground (MultiGet() and NewIterator()) under a mutex! Also, whenever we copied the MemTableList, we also copied some MemTableList metadata (flush_requested_, commit_in_progress_, etc.), which was wasteful.
      
      This diff avoids the std::list copy under a mutex in both MultiGet() and NewIterator(). I created a small database with a number of immutable memtables; the time to create 100,000 iterators in a single thread (!) decreased from {188739, 215703, 198028} to {154352, 164035, 159817}. A lot of the savings come from code under a mutex, so we should see much higher savings with multiple threads. Creating a new iterator is very important to the LogDevice team.
      
      I also think this diff will make SuperVersion obsolete for performance reasons. I will try it in the next diff. SuperVersion gave us huge savings on the Get() code path, but I think most of those savings came from copying the MemTableList under a mutex. If we had MemTableListVersion, we would never need to copy the entire object (like we still do in NewIterator() and MultiGet()).
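
      Structurally this is the same ref-counted scheme Version uses; a hedged
      sketch of the class (not the actual code):

         #include <list>

         class MemTable;  // opaque in this sketch

         // Readers Ref() the current version and walk memlist_ without the
         // DB mutex; a flush builds a new version and swaps the pointer in
         // under the mutex, so readers never copy the list itself.
         class MemTableListVersion {
          public:
           void Ref() { ++refs_; }
           void Unref() {
             if (--refs_ == 0) delete this;  // can run outside the mutex
           }

          private:
           ~MemTableListVersion() = default;
           std::list<MemTable*> memlist_;  // immutable once published
           int refs_ = 1;  // assumed to be manipulated under the DB mutex
         };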
      
      Test Plan: `make check` works. I will also do `make valgrind_check` before commit
      
      Reviewers: dhruba, haobo, kailiu, sdong, emayanke, tnovak
      
      Reviewed By: kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15255
  4. 24 Jan 2014, 2 commits
    • CompactRange() to return status · aba2acb5
      Lei Jin authored
      Summary: as title
      
      Test Plan:
      make all check
      What other tests should I cover?
      
      Reviewers: igor, haobo
      
      Differential Revision: https://reviews.facebook.net/D15339
    • Tailing iterator · 81c9cc9b
      Tomislav Novak authored
      Summary:
      This diff implements a special type of iterator that doesn't create a snapshot
      (can be used to read newly inserted data) and is optimized for doing sequential
      reads.
      
      TailingIterator uses the current superversion number to determine whether to
      invalidate its internal iterators. If the version hasn't changed, it can often
      avoid doing expensive seeks over immutable structures (sst files and immutable
      memtables).
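
      A minimal usage sketch, assuming an open rocksdb::DB* db:

         #include <memory>
         #include "rocksdb/db.h"

         void TailAll(rocksdb::DB* db) {
           rocksdb::ReadOptions read_options;
           read_options.tailing = true;  // no snapshot: sees newly written keys
           std::unique_ptr<rocksdb::Iterator> iter(db->NewIterator(read_options));
           for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
             // consume iter->key() / iter->value() here
           }
         }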
      
      Test Plan:
      * new unit tests
      * running LD with this patch
      
      Reviewers: igor, dhruba, haobo, sdong, kailiu
      
      Reviewed By: sdong
      
      CC: leveldb, lovro, march
      
      Differential Revision: https://reviews.facebook.net/D15285
  5. 23 Jan 2014, 1 commit
    • Refactor Recover() code · 6fe9b577
      Igor Canadi authored
      Summary:
      This diff does two things:
      * Rethinks how we call Recover() with the read_only option. Before, we called it with a pointer to the memtable where we'd like those changes applied. This memtable is set in db_impl_readonly.cc and it's actually DBImpl::mem_. Why don't we just apply updates to mem_ right away? It seems more intuitive.
      * Changes when we apply updates to the manifest. Before, the process was to recover all the logs, flush them to sst files and then do one giant commit that atomically adds all recovered sst files and sets the next log number. This works well enough, but causes some small trouble for my column family approach, since I can't have one VersionEdit apply to more than a single column family[1]. The change here is to commit the files recovered from logs right away. Here is the state of the world before the change:
      1. Recover log 5, add new sst files to edit
      2. Recover log 7, add new sst files to edit
      3. Recover log 8, add new sst files to edit
      4. Commit all added sst files to the manifest and mark log files 5, 7 and 8 as recovered (via the SetLogNumber(9) function)
      After the change, we'll do:
      1. Recover log 5, commit the new sst files and set log 5 as recovered
      2. Recover log 7, commit the new sst files and set log 7 as recovered
      3. Recover log 8, commit the new sst files and set log 8 as recovered
      
      The added (small) benefit is that if we fail after (2), the next recovery will only have to recover log 8. Previously, we would have had to restart the recovery from the beginning. The bigger benefit is enabling easier integration of multiple column families in the recovery code path.
      
      [1] I'm happy to discuss this decision, but I believe this is the cleanest way to go. It also makes backward compatibility much easier. We don't have a requirement of adding multiple column families atomically.
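
      In pseudocode, the new flow is roughly (a sketch only; names follow the
      existing VersionSet/VersionEdit vocabulary):

         // Commit each log's recovered files before touching the next log.
         for (uint64_t log_number : logs_to_recover) {  // e.g. {5, 7, 8}
           VersionEdit edit;
           RecoverLogFile(log_number, &edit);   // replay WAL into sst files
           edit.SetLogNumber(log_number + 1);   // this log is now recovered
           versions_->LogAndApply(&edit, &mutex_);  // small commit per log
         }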
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, kailiu, sdong
      
      Reviewed By: kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15237
  6. 18 Jan 2014, 1 commit
    • Fix SlowdownAmount · 439e36db
      Mark Callaghan authored
      Summary:
      This had a few bugs.
      1) bottom and top were reversed: top is for the max value, but the callers were passing the max
      value to bottom. The result is that the max sleep was used when n >= bottom.
      2) one of the callers passed values of type double, and these values are frequently between
      1.0 and 2.0, so rounding threw away most of their precision.
      3) sometimes the function returned 0 when there should have been a stall.
      
      With this change and one other diff (out for review soon) there are slightly fewer stalls on one workload.
      
      With the fix.
      Stalls(secs): 160.166 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 58.495 leveln_slowdown
      Stalls(count): 910261 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 54526 leveln_slowdown
      
      Without the fix.
      Stalls(secs): 172.227 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 56.538 leveln_slowdown
      Stalls(count): 160831 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 52845 leveln_slowdown
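
      A hedged sketch of the corrected interpolation (parameter and constant
      names are illustrative, not the exact RocksDB code):

         #include <cstdint>

         // Ramp the stall from 0 at `bottom` to kMaxSleepMicros at `top`,
         // in double arithmetic so inputs between 1.0 and 2.0 keep their
         // precision, and never return 0 inside the stall range.
         uint64_t SlowdownAmount(double n, double bottom, double top) {
           const double kMaxSleepMicros = 1000.0;
           if (n >= top) return static_cast<uint64_t>(kMaxSleepMicros);
           if (n <= bottom) return 0;
           double fraction = (n - bottom) / (top - bottom);
           uint64_t micros = static_cast<uint64_t>(fraction * kMaxSleepMicros);
           return micros > 0 ? micros : 1;
         }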
      
      Test Plan:
      run db_bench for --benchmarks=overwrite with IO-bound database
      
      Reviewers: haobo
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15243
  7. 16 Jan 2014, 1 commit
    • Move functions from VersionSet to Version · 2f4eda78
      Igor Canadi authored
      Summary:
      There were some functions in VersionSet that had no reason to be there rather than in Version. Moving them to Version will make the column families implementation easier.
      
      The functions moved are:
      * NumLevelBytes
      * LevelSummary
      * LevelFileSummary
      * MaxNextLevelOverlappingBytes
      * AddLiveFiles (previously AddLiveFilesCurrentVersion())
      * NeedSlowdownForNumLevel0Files
      
      This diff builds on (and depends on) D15171.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, kailiu, sdong, emayanke
      
      Reviewed By: sdong
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15183
  8. 15 Jan 2014, 2 commits
    • Fix CompactRange to apply filter to every key · d9cd7a06
      Igor Canadi authored
      Summary:
      When doing CompactRange(), we should first flush the memtable and then calculate max_level_with_files. Also, we want to compact all the levels that have files, including level `max_level_with_files`.
      
      This patch fixed the unit test.
      
      Test Plan: Added a failing unit test and a fix, so it's not failing anymore.
      
      Reviewers: dhruba, haobo, sdong
      
      Reviewed By: haobo
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D14421
    • BuildBatchGroup -- memcpy outside of lock · 7d9f21cf
      Igor Canadi authored
      Summary: When building a batch group, don't actually build a new batch, since that requires a heavy-weight memcpy and malloc. Only store references to the batches and build the batch group without the lock held.
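
      A self-contained sketch of the pattern, with a batch modeled as a plain
      byte buffer (names are illustrative):

         #include <mutex>
         #include <string>
         #include <vector>

         struct Writer { std::string batch; };

         // Under the mutex, only collect pointers to the queued batches;
         // the expensive concatenation happens after the lock is released.
         std::string BuildBatchGroup(std::mutex& mu,
                                     const std::vector<Writer*>& queue) {
           std::vector<const std::string*> refs;
           {
             std::lock_guard<std::mutex> guard(mu);  // short critical section
             for (const Writer* w : queue) refs.push_back(&w->batch);
           }
           std::string group;  // heavy memcpy/malloc done without the lock
           for (const std::string* b : refs) group.append(*b);
           return group;
         }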
      
      Test Plan:
      `make check`
      
      I am also planning to run performance tests. The workload that will benefit from this change is readwhilewriting. I will post the results once I have them.
      
      Reviewers: dhruba, haobo, kailiu
      
      Reviewed By: haobo
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D15063
  9. 08 Jan 2014, 2 commits
    • Don't always compress L0 files written by memtable flush · 50994bf6
      Mark Callaghan authored
      Summary:
      Code was always compressing L0 files written by a memtable flush
      when compression was enabled. Now this is done when
      min_level_to_compress=0 for leveled compaction and when
      universal_compaction_size_percent=-1 for universal compaction.
      
      Task ID: #3416472
      
      Test Plan:
      ran db_bench with compression options
      
      Reviewers: dhruba, igor, sdong
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14757
    • Fix a deadlock in CompactRange() · 9f690ec6
      Tomislav Novak authored
      Summary:
      The way DBImpl::TEST_CompactRange() throttles down the number of bg compactions
      can cause it to deadlock when CompactRange() is called concurrently from
      multiple threads. Imagine a following scenario with only two threads
      (max_background_compactions is 10 and bg_compaction_scheduled_ is initially 0):
      
         1. Thread #1 increments bg_compaction_scheduled_ (to LargeNumber), sets
            bg_compaction_scheduled_ to 9 (newvalue), schedules the compaction
            (bg_compaction_scheduled_ is now 10) and waits for it to complete.
         2. Thread #2 calls TEST_CompactRange(), increments bg_compaction_scheduled_
            (now LargeNumber + 10) and waits on a cv for bg_compaction_scheduled_ to
            drop to LargeNumber.
         3. BG thread completes the first manual compaction, decrements
            bg_compaction_scheduled_ and wakes up all threads waiting on bg_cv_.
            Thread #1 runs, increments bg_compaction_scheduled_ by LargeNumber again
            (now 2*LargeNumber + 9). Since that's more than LargeNumber + newvalue,
            thread #2 also goes to sleep (waiting on bg_cv_), without resetting
            bg_compaction_scheduled_.
      
      This diff attempts to address the problem by introducing a new counter
      bg_manual_only_ (when positive, MaybeScheduleFlushOrCompaction() will only
      schedule manual compactions).
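
      A hedged sketch of the guard (bg_manual_only_ as named in this message;
      HasPendingManualCompaction() is a hypothetical helper, and the
      surrounding scheduling logic is elided):

         // Inside MaybeScheduleFlushOrCompaction(), called with the DB
         // mutex held:
         if (bg_manual_only_ > 0 && !HasPendingManualCompaction()) {
           // A manual compaction is waiting for background work to drain;
           // don't let automatic compactions take the slots it counts on.
           return;
         }
         // ... otherwise schedule flushes/compactions as before ...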
      
      Test Plan:
      I could pretty much consistently reproduce the deadlock with a program that
      calls CompactRange(nullptr, nullptr) immediately after Write() from multiple
      threads. This no longer happens with this patch.
      
      Tests (make check) pass.
      
      Reviewers: dhruba, igor, sdong, haobo
      
      Reviewed By: igor
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14799
  10. 02 Jan 2014, 1 commit
    • Support multi-threaded DisableFileDeletions() and EnableFileDeletions() · b60c14f6
      Igor Canadi authored
      Summary:
      We don't want two threads to clash if they concurrently call DisableFileDeletions() and EnableFileDeletions(). I'm adding a counter that will enable file deletions only after all DisableFileDeletions() calls have been negated with EnableFileDeletions().
      
      However, we also don't want to break the old behavior, so I added a force parameter to EnableFileDeletions(). If force is true, we will still enable file deletions after every call to EnableFileDeletions(), which is the current behavior.
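
      A generic, self-contained sketch of the counting scheme (not DBImpl's
      actual members):

         #include <mutex>

         class FileDeletionGate {
          public:
           void Disable() {
             std::lock_guard<std::mutex> l(mu_);
             ++disabled_count_;
           }
           // force=true keeps the old semantics: a single Enable re-enables
           // deletions no matter how many Disables are outstanding.
           void Enable(bool force) {
             std::lock_guard<std::mutex> l(mu_);
             if (force) disabled_count_ = 0;
             else if (disabled_count_ > 0) --disabled_count_;
           }
           bool DeletionsAllowed() {
             std::lock_guard<std::mutex> l(mu_);
             return disabled_count_ == 0;
           }
          private:
           std::mutex mu_;
           int disabled_count_ = 0;
         };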
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, sanketh
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14781
  11. 21 Dec 2013, 1 commit
    • [RocksDB] Optimize locking for Get · 1fdb3f7d
      Igor Canadi authored
      Summary:
      Instead of locking and saving a DB state, we can cache a DB state and update it only when it changes. This change reduces lock contention and speeds up read operations on the DB.
      
      Performance improvements are substantial, although there is some cost in no-read workloads. I ran the regression tests on my devserver and here are the numbers:
      
        overwrite                    56345  ->   63001
        fillseq                      193730 ->  185296
        readrandom                   771301 -> 1219803 (58% improvement!)
        readrandom_smallblockcache   677609 ->  862850
        readrandom_memtable_sst      710440 -> 1109223
        readrandom_fillunique_random 221589 ->  247869
        memtablefillrandom           105286 ->   92643
        memtablereadrandom           763033 -> 1288862
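
      A generic sketch of the cached-state idea (illustrative struct, not the
      actual implementation): readers take a reference to an immutable bundle
      of DB state and read without the DB mutex; writers install a fresh
      bundle only when memtables or files actually change.

         #include <atomic>
         #include <cstdint>

         class MemTable;  // opaque in this sketch
         class Version;   // opaque in this sketch

         struct SuperVersion {
           MemTable* mem = nullptr;      // active memtable
           MemTable* imm = nullptr;      // immutable memtable list
           Version* current = nullptr;   // current set of sst files
           std::atomic<uint32_t> refs{1};

           void Ref() { refs.fetch_add(1, std::memory_order_relaxed); }
           bool Unref() {  // true when the caller should clean up
             return refs.fetch_sub(1, std::memory_order_acq_rel) == 1;
           }
         };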
      
      Test Plan:
      make asan_check
      I am also running db_stress
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14679
  12. 13 Dec 2013, 1 commit
    • Add monitoring for universal compaction and add counters for compaction IO · e9e6b00d
      Mark Callaghan authored
      Summary:
      Adds these counters
      { WAL_FILE_SYNCED, "rocksdb.wal.synced" }
        number of writes that request a WAL sync
      { WAL_FILE_BYTES, "rocksdb.wal.bytes" },
        number of bytes written to the WAL
      { WRITE_DONE_BY_SELF, "rocksdb.write.self" },
        number of writes processed by the calling thread
      { WRITE_DONE_BY_OTHER, "rocksdb.write.other" },
        number of writes not processed by the calling thread. Instead these were
        processed by the current holder of the write lock
      { WRITE_WITH_WAL, "rocksdb.write.wal" },
        number of writes that request WAL logging
      { COMPACT_READ_BYTES, "rocksdb.compact.read.bytes" },
        number of bytes read during compaction
      { COMPACT_WRITE_BYTES, "rocksdb.compact.write.bytes" },
        number of bytes written during compaction
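
      A usage sketch for reading the new tickers through the public statistics
      object (standard Statistics API; the values depend on the workload):

         #include <cstdint>
         #include "rocksdb/options.h"
         #include "rocksdb/statistics.h"

         rocksdb::Options MakeOptionsWithStats() {
           rocksdb::Options options;
           options.statistics = rocksdb::CreateDBStatistics();
           return options;  // open the DB with these, then run the workload
         }

         uint64_t CompactionReadBytes(const rocksdb::Options& options) {
           return options.statistics->getTickerCount(
               rocksdb::COMPACT_READ_BYTES);
         }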
      
      Per-interval stats output was updated with WAL stats and correct stats for universal compaction
      including a correct value for write-amplification. It now looks like:
                                     Compactions
      Level  Files Size(MB) Score Time(sec)  Read(MB) Write(MB)    Rn(MB)  Rnp1(MB)  Wnew(MB) RW-Amplify Read(MB/s) Write(MB/s)      Rn     Rnp1     Wnp1     NewW    Count  Ln-stall Stall-cnt
      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        0        7      464  46.4       281      3411      3875      3411         0      3875        2.1      12.1        13.8      621        0      240      240      628       0.0         0
      Uptime(secs): 310.8 total, 2.0 interval
      Writes cumulative: 9999999 total, 9999999 batches, 1.0 per batch, 1.22 ingest GB
      WAL cumulative: 9999999 WAL writes, 9999999 WAL syncs, 1.00 writes per sync, 1.22 GB written
      Compaction IO cumulative (GB): 1.22 new, 3.33 read, 3.78 write, 7.12 read+write
      Compaction IO cumulative (MB/sec): 4.0 new, 11.0 read, 12.5 write, 23.4 read+write
      Amplification cumulative: 4.1 write, 6.8 compaction
      Writes interval: 100000 total, 100000 batches, 1.0 per batch, 12.5 ingest MB
      WAL interval: 100000 WAL writes, 100000 WAL syncs, 1.00 writes per sync, 0.01 MB written
      Compaction IO interval (MB): 12.49 new, 14.98 read, 21.50 write, 36.48 read+write
      Compaction IO interval (MB/sec): 6.4 new, 7.6 read, 11.0 write, 18.6 read+write
      Amplification interval: 101.7 write, 102.9 compaction
      Stalls(secs): 142.924 level0_slowdown, 0.000 level0_numfiles, 0.805 memtable_compaction, 0.000 leveln_slowdown
      Stalls(count): 132461 level0_slowdown, 0 level0_numfiles, 3 memtable_compaction, 0 leveln_slowdown
      
      Task ID: #3329644, #3301695
      
      Test Plan:
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14583
  13. 10 Dec 2013, 1 commit
    • [RocksDB] BackupableDB · fb9fce4f
      Igor Canadi authored
      Summary:
      In this diff I present BackupableDB v1. You can easily use it to back up your DB, and it will do incremental snapshots for you.
      Let's first describe how you would use BackupableDB. It inherits the StackableDB interface, so you can easily construct it with your DB object -- it adds a method RollTheSnapshot() to the DB object. When you call RollTheSnapshot(), the current snapshot of the DB is stored in the backup dir. To restore, you can just call RestoreDBFromBackup() on a BackupableDB (which is a static method) and it will restore all files from the backup dir. In the next version it will even support automatic backups every X minutes.
      
      There are multiple things you can configure:
      1. backup_env and db_env can be different, which is awesome because you can easily back up to HDFS or wherever you like.
      2. sync - if true, it *guarantees* backup consistency on machine reboot
      3. number of snapshots to keep - this will keep the last N snapshots around, if you want, for some reason, to be able to restore from an earlier snapshot. All backups are incremental - if we already have 00010.sst, we will not copy it again. *IMPORTANT* -- this is based on the assumption that 00010.sst never changes: two files named 00010.sst from the same DB will always be exactly the same. Is this true? I always copy the manifest, current and log files.
      4. You can decide if you want to flush the memtables before you back up, or whether you're fine with backing up the log files -- either way, you get a complete and consistent view of the database at the time of backup.
      5. More options can be found in BackupableDBOptions
      
      Here is the directory structure I use:
      
         backup_dir/CURRENT_SNAPSHOT      - just 4 bytes holding the latest snapshot
         backup_dir/0, 1, 2, ...          - files containing a serialized version of each snapshot - each holds a list of files
         backup_dir/files/*.sst           - sst files shared between snapshots - if one snapshot references 00010.sst and another needs to back it up from the DB, it will just reference the same file
         backup_dir/files/0/, 1/, 2/, ... - snapshot directories containing private snapshot files - current, manifest and log files
      
      All the files are ref counted and deleted immediately when they go out of scope.
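
      A hedged sketch of that incremental rule (a hypothetical helper using
      std::filesystem, not the actual BackupableDB code): an sst file whose
      name already exists under files/ is assumed identical and skipped;
      everything else is always copied.

         #include <filesystem>
         namespace fs = std::filesystem;

         void BackupFile(const fs::path& src, const fs::path& files_dir) {
           fs::path dst = files_dir / src.filename();
           bool is_sst = src.extension() == ".sst";
           if (is_sst && fs::exists(dst)) return;  // shared, already present
           fs::copy_file(src, dst, fs::copy_options::overwrite_existing);
         }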
      
      Some other stuff in this diff:
      1. Added a GetEnv() method to the DB. Discussed with @haobo and we agreed that it seems the right thing to do.
      2. Fixed StackableDB interface. The way it was set up before, I was not able to implement BackupableDB.
      
      Test Plan:
      I have a unittest, but please don't look at this yet. I just hacked it up to help me with debugging. I will write a lot of good tests and update the diff.
      
      Also, `make asan_check`
      
      Reviewers: dhruba, haobo, emayanke
      
      Reviewed By: dhruba
      
      CC: leveldb, haobo
      
      Differential Revision: https://reviews.facebook.net/D14295
  14. 05 Dec 2013, 1 commit
  15. 04 Dec 2013, 1 commit
    • Get rid of some shared_ptrs · 043fc14c
      Igor Canadi authored
      Summary:
      I went through all remaining shared_ptrs and removed the ones I found unnecessary. Only GenerateCachePrefix() is called fairly often, so don't expect big perf wins.
      
      The ones that are left are accessed infrequently and I think we're fine with keeping them.
      
      Test Plan: make asan_check
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14427
  16. 29 Nov 2013, 1 commit
  17. 26 Nov 2013, 2 commits
  18. 15 Nov 2013, 1 commit
    • PurgeObsoleteFiles() unittest · a0ce3fd0
      Igor Canadi authored
      Summary:
      Created a unittest that verifies that automatic deletion performed by PurgeObsoleteFiles() works correctly.
      
      Also, a few small fixes on the logic side -- call version_set_->GetObsoleteFiles() in FindObsoleteFiles() instead of at arbitrary call sites.
      
      Test Plan: Created a unit test
      
      Reviewers: dhruba, haobo, nkg-
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14079
  19. 13 Nov 2013, 1 commit
    • Small changes in Deleting obsolete files · 9bc4a26f
      Igor Canadi authored
      Summary:
      @haobo's suggestions from https://reviews.facebook.net/D13827
      
      Renaming some variables, deprecating purge_log_after_flush, changing a for loop into a range-based for loop.
      
      I have not implemented deleting objects outside of the mutex yet because it would require a big code change - we would delete the object in db_impl, which currently does not know anything about the object because its type (FileMetaData) is defined in version_edit.h. We should do it at some point, though.
      
      Test Plan: Ran deletefile_test
      
      Reviewers: haobo
      
      Reviewed By: haobo
      
      CC: leveldb, haobo
      
      Differential Revision: https://reviews.facebook.net/D14025
  20. 12 Nov 2013, 1 commit
  21. 09 Nov 2013, 1 commit
    • Speed up FindObsoleteFiles · 1510339e
      Igor Canadi authored
      Summary:
      Here's one solution we discussed for speeding up FindObsoleteFiles: keep a set of all files in DBImpl and update the set every time we create a file. I probably missed a few other spots where we create a file.
      
      It might speed things up a bit, but it makes the code uglier. I don't really like it.
      
      A much better approach would be to abstract all file handling into a separate class. Think of it as a layer between DBImpl and Env. Having a separate class deal with file naming and deletion would benefit both code cleanliness (especially with a huge DBImpl) and speed. It will take a huge effort to do this, though.
      
      Let's discuss offline today.
      
      Test Plan: Ran ./db_stress, verified that files are getting deleted
      
      Reviewers: dhruba, haobo, kailiu, emayanke
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D13827
  22. 07 Nov 2013, 1 commit
    • WAL log retention policy based on archive size. · c2be2cba
      shamdor authored
      Summary:
      Archive cleaning will still happen every WAL_ttl seconds,
      but archived logs will be deleted only if the archive size
      is greater than a WAL_size_limit value.
      Empty archived logs will be deleted every WAL_ttl.
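
      A configuration sketch (option names as in rocksdb::Options; the values
      are illustrative):

         #include "rocksdb/options.h"

         rocksdb::Options MakeWalRetentionOptions() {
           rocksdb::Options options;
           options.WAL_ttl_seconds = 3600;    // examine the archive hourly
           options.WAL_size_limit_MB = 1024;  // delete archived logs only
                                              // once the archive exceeds 1 GB
           return options;
         }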
      
      Test Plan:
      1. Unit tests pass.
      2. Benchmark.
      
      Reviewers: emayanke, dhruba, haobo, sdong, kailiu, igor
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13869
  23. 05 Nov 2013, 1 commit
    • Making the transaction log iterator more robust · f837f5b1
      Mayank Agarwal authored
      Summary:
      strict essentially means that we MUST find the start sequence. Thus, if strict is set, we should return when the start sequence is not found in the first file. This takes care of ending the iterator in case of permanent gaps due to corruptions in the log files.
      Also created a NextImpl function with an internal variable to distinguish whether Next is being called from StartSequence or by the application.
      Set a NotFound::gaps status to give an indication that gaps happened.
      Polished the inline documentation in various places.
      
      Test Plan:
      * db_repl_stress test
      * db_test relating to transaction log iterator
      * fbcode/wormhole/rocksdb/rocks_log_iterator
      * sigma production machine sigmafio032.prn1
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13689
  24. 02 Nov 2013, 1 commit
    • Implement a compressed block cache. · b4ad5e89
      Dhruba Borthakur authored
      Summary:
      RocksDB can now support an uncompressed block cache, a compressed
      block cache, or both. Lookups first look for a block in the
      uncompressed cache; only if it is not found there is it looked up
      in the compressed cache. If it is found in the compressed cache,
      it is uncompressed and inserted into the uncompressed cache.
      
      It is possible that the same block resides in the compressed cache
      as well as the uncompressed cache at the same time. Both caches
      have their own individual LRU policy.
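
      A configuration sketch, assuming the cache fields live directly on
      Options as they did in this era (sizes are illustrative):

         #include "rocksdb/cache.h"
         #include "rocksdb/options.h"

         rocksdb::Options MakeTwoTierCacheOptions() {
           rocksdb::Options options;
           // Uncompressed tier, checked first on every lookup.
           options.block_cache = rocksdb::NewLRUCache(512 << 20);
           // Compressed tier, checked only on an uncompressed-cache miss.
           options.block_cache_compressed = rocksdb::NewLRUCache(1024 << 20);
           return options;
         }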
      
      Test Plan: Unit test case attached.
      
      Reviewers: kailiu, sdong, haobo, leveldb
      
      Reviewed By: haobo
      
      CC: xjin, haobo
      
      Differential Revision: https://reviews.facebook.net/D12675
  25. 31 Oct 2013, 1 commit
    • Follow-up Cleaning-up After D13521 · f03b2df0
      Siying Dong authored
      Summary:
      This patch is to address @haobo's comments on D13521:
      1. rename Table to TableReader and make its factory function GetTableReader
      2. move the compression type selection logic out of TableBuilder and into the compaction logic
      3. more accurate comments
      4. move stat name constants into the BlockBasedTable implementation
      5. remove some leftover code in simple_table_db_test
      
      Test Plan: pass test suites.
      
      Reviewers: haobo, dhruba, kailiu
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13785
  26. 25 Oct 2013, 1 commit
    • Unify DeleteFile and DeleteWalFiles · 56305221
      Mayank Agarwal authored
      Summary:
      This is to simplify the rocksdb public APIs and improve code quality.
      Added a parameter to ParseFileName for the log subtype and improved the code for deleting a wal file.
      Wrote exhaustive unit tests in delete_file_test.
      Unification of other redundant APIs can be taken up in a separate diff.
      
      Test Plan: Expanded delete_file test
      
      Reviewers: dhruba, haobo, kailiu, sdong
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13647
  27. 18 Oct 2013, 1 commit
    • Universal Compaction to Have a Size Percentage Threshold To Decide Whether to Compress · 9edda370
      Siying Dong authored
      Summary:
      This patch adds an option for universal compaction that lets us compress output files only if the previously compacted files have not yet reached a specified ratio, to save CPU in some cases.
      
      Compression is always skipped for flushes, because the size information is not easy to evaluate in the flush case. We can improve this later.
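
      A configuration sketch (the option name follows CompactionOptionsUniversal;
      the threshold value is illustrative):

         #include "rocksdb/options.h"

         rocksdb::Options MakeUniversalCompressionOptions() {
           rocksdb::Options options;
           options.compaction_style = rocksdb::kCompactionStyleUniversal;
           // Compress compaction output only while previously compacted
           // data has not yet reached this percentage; -1 disables the gate.
           options.compaction_options_universal.compression_size_percent = 90;
           return options;
         }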
      
      Test Plan:
      add test
      DBTest.UniversalCompactionCompressRatio1 and DBTest.UniversalCompactionCompressRatio12
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13467
  28. 17 Oct 2013, 2 commits
    • Add appropriate LICENSE and Copyright message. · 9cd22109
      Dhruba Borthakur authored
      Summary:
      Add appropriate LICENSE and Copyright message.
      
      Test Plan:
      make check
      
    • Enable background flush thread by default and fix issues related to it · 073cbfc8
      Siying Dong authored
      Summary:
      Enable the background flush thread in this patch and fix unit tests with:
      (1) After a background flush, schedule a background compaction if the condition is satisfied;
      (2) Fix a bug where, if universal compaction is enabled and the number of levels is set to 0, compaction is not automatically triggered;
      (3) Fix unit tests to wait for compaction to finish, instead of flush, before checking the compaction results.
      
      Test Plan: pass all unit tests
      
      Reviewers: haobo, xjin, dhruba
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13461
  29. 15 Oct 2013, 1 commit
    • Change Function names from Compaction->Flush When they really mean Flush · 88f2f890
      Siying Dong authored
      Summary: While debugging the unit test failures that appeared when enabling the background flush thread, I felt the function names could be made clearer for people to understand. Also, once the names are fixed, bugs in several tests become obvious (and some of those tests are failing). This patch cleans it up for future maintenance.
      
      Test Plan: Run test suites.
      
      Reviewers: haobo, dhruba, xjin
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13431
  30. 06 Oct 2013, 1 commit
  31. 05 Oct 2013, 3 commits
    • Removed scribe, thrift and java modules. · 0a9f873f
      Dhruba Borthakur authored
      Summary: Removed scribe, thrift and java modules.
      
      Test Plan:
      make release
      make check
      
      Reviewers: emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13293
    • Change namespace from leveldb to rocksdb · a143ef9b
      Dhruba Borthakur authored
      Summary:
      Change namespace from leveldb to rocksdb. This allows a single
      application to link in open-source leveldb code as well as
      rocksdb code into the same process.
      
      Test Plan: compile rocksdb
      
      Reviewers: emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13287
    • Add backward compatible option in GetLiveFiles to choose whether to not Flush first · 854d2363
      Mayank Agarwal authored
      Summary:
      As explained in the comments in GetLiveFiles in db.h, this option causes the flush to be skipped in GetLiveFiles, because some use-cases call GetSortedWalFiles after GetLiveFiles to generate more complete snapshots.
      Using GetSortedWalFiles after GetLiveFiles allows us not to flush in GetLiveFiles first, because the wals have everything.
      Note: file deletions will be disabled before calling GLF or GSWF, so live logs will not move to archive logs or get deleted.
      Note: the manifest file is truncated to a proper value in GLF, so it will always replay from the proper wal files on a restart.
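
      A usage sketch of the resulting snapshot recipe (public DB API; error
      handling elided):

         #include <cstdint>
         #include <string>
         #include <vector>
         #include "rocksdb/db.h"
         #include "rocksdb/transaction_log.h"

         void SnapshotFileList(rocksdb::DB* db) {
           db->DisableFileDeletions();  // nothing gets archived or deleted
           std::vector<std::string> live_files;
           uint64_t manifest_size = 0;
           db->GetLiveFiles(live_files, &manifest_size,
                            /*flush_memtable=*/false);
           rocksdb::VectorLogPtr wal_files;
           db->GetSortedWalFiles(wal_files);  // wals cover the skipped flush
           // ... copy live_files (manifest up to manifest_size) and wals ...
           db->EnableFileDeletions();
         }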
      
      Test Plan: make
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13257
  32. 04 Oct 2013, 1 commit
  33. 13 Sep 2013, 1 commit
    • [RocksDB] Remove Log file immediately after memtable flush · 0e422308
      Haobo Xu authored
      Summary: As title. The DB log file life cycle is tied to the memtable it backs. Once the memtable is flushed to an sst file and committed, we should be able to delete the log file without holding the mutex. This is part of the bigger change to avoid FindObsoleteFiles at runtime. It deals with log files; sst files will be dealt with later.
      
      Test Plan: make check; db_bench
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11709