1. 30 January 2014 (5 commits)
    • InternalStatistics · 3c0dcf0e
      Committed by Igor Canadi
      Summary:
      In DBImpl we keep track of some statistics internally and expose them via GetProperty(). This diff encapsulates all the internal statistics into a class InternalStatistics. Most of it is copy/paste.
      
      Apart from cleaning up db_impl.cc, this diff is also necessary for column families, since every column family should have its own CompactionStats, MakeRoomForWrite-stall stats, etc. It's much easier to keep track of these per column family if they're nicely encapsulated in their own class.
      
      Test Plan: make check
      
      Reviewers: dhruba, kailiu, haobo, sdong, emayanke
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15273
    • set bg_error_ when background flush goes wrong · d118707f
      Committed by Lei Jin
      Summary: as title
      
      Test Plan: unit test
      
      Reviewers: haobo, igor, sdong, kailiu, dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15435
    • Change ColumnFamilyData from struct to class · fa99d53e
      Committed by Igor Canadi
      Summary: ColumnFamilyData has grown a lot; it holds much more data now. It makes more sense to encapsulate it better by making it a class.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, kailiu, sdong
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15579
    • PurgeObsoleteFiles in DropColumnFamily · 4662969b
      Committed by Igor Canadi
      Summary: When we drop the column family, we want to delete all the files from that column family.
      
      Test Plan: make check
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15561
    • Read from and write to different column families · f24a3ee5
      Committed by Igor Canadi
      Summary: This one is big. It adds the ability to write to and read from different column families (see the unit test). It also supports recovery of different column families from the log, which was the hardest part to reason about. We need to make sure we never delete a log file that has unflushed data from any column family. To support that, I added another concept: versions_->MinLogNumber() (a sketch of the idea follows at the end of this entry).
      
      Test Plan: Added a unit test in column_family_test
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15537
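      The commit above hinges on never deleting a WAL that still holds unflushed data for any column family. A minimal, self-contained C++ sketch of the MinLogNumber() idea follows; the struct and names here are illustrative stand-ins, not the actual RocksDB internals.

      #include <cstdint>
      #include <limits>
      #include <vector>

      // Simplified stand-in: each column family remembers the earliest log
      // whose entries have not yet been flushed to an SST file.
      struct ColumnFamilySketch {
        uint64_t log_number;
      };

      // A log file may only be deleted once its number is smaller than the
      // minimum log number across *all* column families.
      uint64_t MinLogNumber(const std::vector<ColumnFamilySketch>& cfds) {
        uint64_t min_log = std::numeric_limits<uint64_t>::max();
        for (const auto& cfd : cfds) {
          if (cfd.log_number < min_log) {
            min_log = cfd.log_number;
          }
        }
        return min_log;
      }

      // Usage: only delete log files whose number is < MinLogNumber(cfds).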
  2. 28 January 2014 (6 commits)
    • [column families] Removing VersionSet::current() · 4bf25357
      Committed by Igor Canadi
      Summary: Instead of VersionSet::current(), DBImpl uses default_cfd_->current directly.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15483
    • Update monitoring to include average time per compaction and stall · 90f29ccb
      Committed by Mark Callaghan
      Summary:
      The new columns are msComp and msStall, which report the average time per compaction and per stall for that level, in milliseconds.
      Level  Files Size(MB) Score Time(sec)  Read(MB) Write(MB)    Rn(MB)  Rnp1(MB)  Wnew(MB) RW-Amplify Read(MB/s) Write(MB/s)      Rn     Rnp1     Wnp1     NewW    Count   msComp   msStall  Ln-stall Stall-cnt
      ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        0        8       15   1.5         2         0        30         0         0        30        0.0       0.0        15.5        0        0        0        0       16      112       0.2       1.3      7568
        1        8       16   1.6         1        26        26        15        11        16        3.5      17.6        18.1        8        6       13        7        3      362       0.0       0.0         0
        2        1        2   0.0         0         0         2         0         0         2        0.0       0.0        18.4        0        0        0        0        1       50       0.0       0.0         0
      
      Test Plan:
      run db_bench
      
      
      Reviewers: haobo
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15345
    • LogAndApply to take ColumnFamilyData · 511b03a5
      Committed by Igor Canadi
      Summary: This removes the default implementation of LogAndApply that applied the change to the default column family. It is mostly simple reformatting.
      
      Test Plan: make check
      
      Reviewers: dhruba, kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15465
    • [column families] Move memtable and immutable memtable list to column family data · eb055609
      Committed by Igor Canadi
      Summary: All memtables and immutable memtables are moved from DBImpl to ColumnFamilyData. For now, they are all referenced from the default column family in DBImpl. It shouldn't be hard to get them from a custom column family.
      
      Test Plan: make check
      
      Reviewers: dhruba, kailiu, sdong
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15459
    • Fsync directory after we create a new file · 832158e7
      Committed by Igor Canadi
      Summary:
      @dhruba, I'm not sure where we need to sync the directory. I implemented the function in Env() and added the dir sync just after we close the newly created file in the builder.
      
      Should I also add FsyncDir() to new files that get created by a compaction?
      
      Test Plan: Confirmed that FsyncDir is returning Status::OK()
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D14751
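      For reference, a minimal POSIX sketch of syncing a directory after creating a file in it, which is the effect the FsyncDir() described above is after. This is a generic illustration, not the actual Env implementation.

      #include <fcntl.h>
      #include <unistd.h>
      #include <cerrno>

      // Open the directory itself and fsync it, so the new directory entry
      // (the freshly created file) is durable, not just the file's contents.
      int FsyncDirectory(const char* dir_path) {
        int fd = open(dir_path, O_RDONLY);
        if (fd < 0) {
          return errno;
        }
        int rc = fsync(fd);
        int saved_errno = (rc < 0) ? errno : 0;
        close(fd);
        return saved_errno;
      }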
    • Move NeedsCompaction() from VersionSet to Version · 6c2ca1d3
      Committed by Igor Canadi
      Summary: There is no reason to have functions NeedCompaction(), MaxCompactionScore() and MaxCompactionScoreLevel() in VersionSet, since they don't access any data in VersionSet.
      
      Test Plan: make check
      
      Reviewers: kailiu, haobo, sdong
      
      Reviewed By: kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15333
  3. 25 January 2014 (3 commits)
    • Fixing iterator cleanup for Tailing iterator · f653fdcf
      Committed by Igor Canadi
      Immutable tailing iterator doesn't set CleanupState::mem, so we don't
      have to unref it.
    • MemTableListVersion · c583157d
      Committed by Igor Canadi
      Summary:
      MemTableListVersion is to MemTableList what Version is to VersionSet. I used almost the same ideas to develop MemTableListVersion. The goal is to have the std::list copy done in the background, while flushing, rather than in the foreground (MultiGet() and NewIterator()) under a mutex! Also, whenever we copied MemTableList, we also copied some MemTableList metadata (flush_requested_, commit_in_progress_, etc.), which was wasteful.
      
      This diff avoids the std::list copy under a mutex in both MultiGet() and NewIterator(). I created a small database with some number of immutable memtables, and creating 100,000 iterators in a single thread (!) decreased from {188739, 215703, 198028} to {154352, 164035, 159817}. A lot of the savings come from code under a mutex, so we should see much higher savings with multiple threads. Creating new iterators is very important to the LogDevice team.
      
      I also think this diff will make SuperVersion obsolete for performance reasons. I will try that in the next diff. SuperVersion gave us huge savings on the Get() code path, but I think most of the savings came from copying MemTableList under a mutex. If we had MemTableListVersion, we would never need to copy the entire object (like we still do in NewIterator() and MultiGet()). A sketch of the versioned-list idea follows at the end of this entry.
      
      Test Plan: `make check` works. I will also do `make valgrind_check` before commit
      
      Reviewers: dhruba, haobo, kailiu, sdong, emayanke, tnovak
      
      Reviewed By: kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15255
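      A minimal sketch of the versioned-list idea described above: readers grab a reference to an immutable snapshot of the memtable list under the mutex, and the list copy happens only when a writer installs a new version. Names are simplified stand-ins, and shared_ptr stands in for the manual Ref()/Unref() counting; this is not the real MemTableListVersion.

      #include <list>
      #include <memory>
      #include <mutex>

      struct MemTableStub {};  // stand-in for a real memtable

      // Immutable snapshot of the list of immutable memtables.
      class MemTableListVersionSketch {
       public:
        explicit MemTableListVersionSketch(std::list<std::shared_ptr<MemTableStub>> mems)
            : mems_(std::move(mems)) {}
        const std::list<std::shared_ptr<MemTableStub>>& memtables() const { return mems_; }
       private:
        const std::list<std::shared_ptr<MemTableStub>> mems_;
      };

      class MemTableListSketch {
       public:
        // Writer side (e.g. flush): build a new version, then swap it in.
        void Add(std::shared_ptr<MemTableStub> m) {
          std::lock_guard<std::mutex> l(mu_);
          std::list<std::shared_ptr<MemTableStub>> mems =
              current_ ? current_->memtables()
                       : std::list<std::shared_ptr<MemTableStub>>();
          mems.push_back(std::move(m));
          current_ = std::make_shared<MemTableListVersionSketch>(std::move(mems));
        }
        // Reader side (MultiGet()/NewIterator()): only a pointer copy under
        // the mutex, never a std::list copy.
        std::shared_ptr<const MemTableListVersionSketch> current() const {
          std::lock_guard<std::mutex> l(mu_);
          return current_;
        }
       private:
        mutable std::mutex mu_;
        std::shared_ptr<const MemTableListVersionSketch> current_;
      };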
    • Fix a bug in DBImpl::CreateColumnFamily · 09489d39
      Committed by Igor Canadi
  4. 24 January 2014 (3 commits)
    • CompactRange() to return status · aba2acb5
      Committed by Lei Jin
      Summary: as title
      
      Test Plan:
      make all check
      What other tests should I cover?
      
      Reviewers: igor, haobo
      
      CC:
      
      Differential Revision: https://reviews.facebook.net/D15339
    • Tailing iterator · 81c9cc9b
      Committed by Tomislav Novak
      Summary:
      This diff implements a special type of iterator that doesn't create a snapshot
      (can be used to read newly inserted data) and is optimized for doing sequential
      reads.
      
      TailingIterator uses the current superversion number to determine whether to
      invalidate its internal iterators. If the version hasn't changed, it can often
      avoid doing expensive seeks over immutable structures (SST files and immutable
      memtables). A sketch of this check follows at the end of this entry.
      
      Test Plan:
      * new unit tests
      * running LD with this patch
      
      Reviewers: igor, dhruba, haobo, sdong, kailiu
      
      Reviewed By: sdong
      
      CC: leveldb, lovro, march
      
      Differential Revision: https://reviews.facebook.net/D15285
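      A minimal sketch of the invalidation check described above: the iterator remembers the superversion number it was built against and rebuilds its internal iterators only when that number changes. All names and the Rebuild step are illustrative stand-ins.

      #include <atomic>
      #include <cstdint>

      // Stand-in for the DB-side counter that is bumped whenever a new
      // superversion (memtables + SST files) is installed.
      std::atomic<uint64_t> g_superversion_number{1};

      class TailingIteratorSketch {
       public:
        void Seek(/* const Slice& target */) {
          uint64_t current = g_superversion_number.load(std::memory_order_acquire);
          if (current != built_against_) {
            // The immutable structures changed; cached iterators are stale.
            RebuildInternalIterators();
            built_against_ = current;
          }
          // Otherwise reuse the existing iterators and skip the expensive
          // seeks over SST files and immutable memtables.
        }
       private:
        void RebuildInternalIterators() { /* re-create mutable + immutable iterators */ }
        uint64_t built_against_ = 0;
      };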
    • ColumnFamilySet · 7c5e583a
      Committed by Igor Canadi
      Summary:
      I created a separate class, ColumnFamilySet, to keep track of column families. Before, we did this in VersionSet; I believe this approach is cleaner. (A small sketch of the idea follows at the end of this entry.)
      
      Let me know if you have any comments. I will commit tomorrow.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, kailiu, sdong
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15357
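      A small sketch of what a registry along these lines might look like: a map from ID and name to per-family state. This is a hypothetical simplification, not the actual ColumnFamilySet interface.

      #include <cstdint>
      #include <memory>
      #include <string>
      #include <unordered_map>

      struct ColumnFamilyDataSketch {
        uint32_t id = 0;
        std::string name;
        // Per-family state (memtables, current Version, options, ...) would live here.
      };

      class ColumnFamilySetSketch {
       public:
        ColumnFamilyDataSketch* Create(const std::string& name) {
          auto cfd = std::make_unique<ColumnFamilyDataSketch>();
          cfd->id = next_id_++;
          cfd->name = name;
          ColumnFamilyDataSketch* raw = cfd.get();
          by_id_[raw->id] = std::move(cfd);
          by_name_[name] = raw;
          return raw;
        }
        ColumnFamilyDataSketch* Get(const std::string& name) const {
          auto it = by_name_.find(name);
          return it == by_name_.end() ? nullptr : it->second;
        }
       private:
        uint32_t next_id_ = 0;
        std::unordered_map<uint32_t, std::unique_ptr<ColumnFamilyDataSketch>> by_id_;
        std::unordered_map<std::string, ColumnFamilyDataSketch*> by_name_;
      };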
  5. 23 January 2014 (2 commits)
    • Fix wrong merge · f9a25dda
      Committed by Igor Canadi
    • Refactor Recover() code · 6fe9b577
      Committed by Igor Canadi
      Summary:
      This diff does two things:
      * Rethinks how we call Recover() with the read_only option. Before, we called it with a pointer to the memtable where we'd like to apply those changes. This memtable is set in db_impl_readonly.cc and it's actually DBImpl::mem_. Why don't we just apply updates to mem_ right away? It seems more intuitive.
      * Changes when we apply updates to the manifest. Before, the process was to recover all the logs, flush them to sst files and then do one giant commit that atomically adds all recovered sst files and sets the next log number. This works well enough, but causes some small trouble for my column family approach, since I can't have one VersionEdit apply to more than a single column family[1]. The change here is to commit the files recovered from logs right away. Here is the state of the world before the change:
      1. Recover log 5, add new sst files to edit
      2. Recover log 7, add new sst files to edit
      3. Recover log 8, add new sst files to edit
      4. Commit all added sst files to manifest and mark log files 5, 7 and 8 as recovered (via SetLogNumber(9) function)
      After the change, we'll do:
      1. Recover log 5, commit the new sst files and set log 5 as recovered
      2. Recover log 7, commit the new sst files and set log 7 as recovered
      3. Recover log 8, commit the new sst files and set log 8 as recovered
      
      The added (small) benefit is that if we fail after (2), the new recovery will only have to recover log 8. In the previous case, we'd have to restart the recovery from the beginning. The bigger benefit will be easier integration of multiple column families in the recovery code path. (A pseudocode sketch of the per-log commit loop follows at the end of this entry.)
      
      [1] I'm happy to discuss this decision, but I believe this is the cleanest way to go. It also makes backward compatibility much easier. We don't have a requirement of adding multiple column families atomically.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, kailiu, sdong
      
      Reviewed By: kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15237
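      A pseudocode-style C++ sketch of the per-log commit loop described above. The helper functions are hypothetical stand-ins for the real recovery machinery, stubbed out so the sketch compiles.

      #include <cstdint>
      #include <vector>

      struct VersionEditSketch { uint64_t log_number = 0; };

      // Hypothetical stubs standing in for the real recovery steps.
      void ReplayLogIntoMemtables(uint64_t /*log*/, VersionEditSketch* /*edit*/) {}
      void FlushMemtablesAndRecordFiles(VersionEditSketch* /*edit*/) {}
      void CommitToManifest(const VersionEditSketch& /*edit*/) {}

      // After the change: each recovered log is committed to the MANIFEST
      // immediately, so a crash mid-recovery only redoes the remaining logs.
      void RecoverLogs(const std::vector<uint64_t>& log_numbers) {
        for (uint64_t log : log_numbers) {
          VersionEditSketch edit;
          ReplayLogIntoMemtables(log, &edit);
          FlushMemtablesAndRecordFiles(&edit);
          edit.log_number = log + 1;   // logs up to and including `log` are recovered
          CommitToManifest(edit);      // one commit per log, not one giant commit
        }
      }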
  6. 18 January 2014 (3 commits)
    • Boost access before mutex is unlocked · 4e8321bf
      Committed by Mark Callaghan
      Summary:
      This moves the use of versions_ to before the mutex is unlocked
      to avoid a possible race.
      
      Test Plan:
      make check
      
      
      Reviewers: haobo, dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15279
    • Statistics code cleanup · 83681bf9
      Committed by Igor Canadi
      Summary: I'm separating the code-cleanup part of https://reviews.facebook.net/D14517. This will make D14517 easier to understand and this diff easier to review.
      
      Test Plan: make check
      
      Reviewers: haobo, kailiu, sdong, dhruba, tnovak
      
      Reviewed By: tnovak
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15099
    • Fix SlowdownAmount · 439e36db
      Committed by Mark Callaghan
      Summary:
      This had a few bugs.
      1) bottom and top were reversed: top is for the max value, but the callers were passing the max value to bottom. The result is that the max sleep is used when n >= bottom.
      2) One of the callers passed values of type double, and these values are frequently between 1.0 and 2.0, so rounding does bad things.
      3) Sometimes the function returned 0 when there should have been a stall.
      
      With this change and one other diff (out for review soon) there are slightly fewer stalls on one workload.
      
      With the fix.
      Stalls(secs): 160.166 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 58.495 leveln_slowdown
      Stalls(count): 910261 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 54526 leveln_slowdown
      
      Without the fix.
      Stalls(secs): 172.227 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 56.538 leveln_slowdown
      Stalls(count): 160831 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 52845 leveln_slowdown
      
      Test Plan:
      run db_bench for --benchmarks=overwrite with IO-bound database
      
      
      Reviewers: haobo
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15243
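      A self-contained sketch of a slowdown computation with the issues above in mind: the inputs stay in floating point (no early rounding), bottom and top map to the minimum and maximum of the stall range, and the result never silently drops to zero inside that range. This is an illustrative reconstruction, not the exact SlowdownAmount() code.

      #include <algorithm>
      #include <cstdint>

      // Returns a delay in microseconds that grows linearly from 0 at `bottom`
      // to `max_delay_us` at `top`.
      uint64_t SlowdownMicros(double n, double bottom, double top,
                              uint64_t max_delay_us) {
        if (n <= bottom) return 0;           // below the stall range: no sleep
        if (n >= top) return max_delay_us;   // at or above the top: maximum sleep
        double fraction = (n - bottom) / (top - bottom);
        uint64_t delay =
            static_cast<uint64_t>(fraction * static_cast<double>(max_delay_us));
        // Make sure we actually stall inside the range instead of returning 0.
        return std::max<uint64_t>(delay, 1);
      }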
  7. 16 January 2014 (2 commits)
    • Move functions from VersionSet to Version · 2f4eda78
      Committed by Igor Canadi
      Summary:
      There were some functions in VersionSet that had no reason to be there rather than in Version. Moving them to Version will make the column families implementation easier.
      
      The functions moved are:
      * NumLevelBytes
      * LevelSummary
      * LevelFileSummary
      * MaxNextLevelOverlappingBytes
      * AddLiveFiles (previously AddLiveFilesCurrentVersion())
      * NeedSlowdownForNumLevel0Files
      
      The diff continues on (and depends on) D15171
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, kailiu, sdong, emayanke
      
      Reviewed By: sdong
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15183
    • Decrease reliance on VersionSet::NumberLevels() · 65a8a52b
      Committed by Igor Canadi
      Summary:
      With column families, VersionSet will not have a constant number of levels (each CF can have different options), so we'll need to eliminate calls to VersionSet::NumberLevels().
      
      This diff decreases the number of call sites, but we're not there yet. It associates the number of levels with Version (each Version is associated with a single CF) instead of VersionSet.
      
      I have also slightly changed how VersionSet keeps track of the manifest size.
      
      This diff also modifies the constructor of Compaction so that it takes input_version and automatically Ref()s it. Before, this was done outside of the constructor.
      
      In the next diffs I will continue to decrease the number of call sites of VersionSet::NumberLevels() and also references to current_.
      
      Test Plan: make check
      
      Reviewers: haobo, dhruba, kailiu, sdong
      
      Reviewed By: sdong
      
      Differential Revision: https://reviews.facebook.net/D15171
  8. 15 January 2014 (7 commits)
    • [RocksDB Performance Branch] DBImpl.NewInternalIterator() to reduce works inside mutex · 9b51af5a
      Committed by Siying Dong
      Summary: To reduce mutex contention caused by DBImpl.NewInternalIterator(), move all the iterator creation work in this function out of the mutex, leaving only the object ref and get inside. (A sketch of the pattern follows at the end of this entry.)
      
      Test Plan:
      make all check
      Will also run db_stress for a while to make sure there is no problem.
      
      Reviewers: haobo, dhruba, kailiu
      
      Reviewed By: haobo
      
      CC: igor, leveldb
      
      Differential Revision: https://reviews.facebook.net/D14589
      
      Conflicts:
      	db/db_impl.cc
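      A minimal sketch of the "ref under the mutex, build outside" pattern described above, using simplified stand-in types rather than the real DBImpl members.

      #include <memory>
      #include <mutex>
      #include <vector>

      struct IteratorSourceSketch {
        // Stand-in for a memtable, immutable memtable, or Version; the int
        // plays the role of an iterator object.
        std::unique_ptr<int> NewIterator() const { return std::make_unique<int>(0); }
      };

      struct DBSketch {
        std::mutex mu;
        std::vector<std::shared_ptr<IteratorSourceSketch>> sources;

        std::vector<std::unique_ptr<int>> NewInternalIterators() {
          // Under the mutex: only grab references to the current sources.
          std::vector<std::shared_ptr<IteratorSourceSketch>> pinned;
          {
            std::lock_guard<std::mutex> l(mu);
            pinned = sources;  // cheap ref copies, no iterator construction
          }
          // Outside the mutex: do the comparatively expensive iterator creation.
          std::vector<std::unique_ptr<int>> iters;
          for (const auto& src : pinned) {
            iters.push_back(src->NewIterator());
          }
          return iters;
        }
      };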
    • Fix CompactRange to apply filter to every key · d9cd7a06
      Committed by Igor Canadi
      Summary:
      When doing CompactRange(), we should first flush the memtable and then calculate max_level_with_files. Also, we want to compact all the levels that have files, including level `max_level_with_files`.
      
      This patch fixed the unit test.
      
      Test Plan: Added a failing unit test and a fix, so it's not failing anymore.
      
      Reviewers: dhruba, haobo, sdong
      
      Reviewed By: haobo
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D14421
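      A pseudocode-style C++ sketch of the flow the fix describes: flush first, then compute the deepest level with files, then compact every level up to and including it. The helper functions are hypothetical stubs, not the real DBImpl calls.

      #include <algorithm>

      // Hypothetical stubs standing in for the real DBImpl operations.
      void FlushMemTableStub() {}
      int MaxLevelWithFilesStub() { return 3; }
      void CompactLevelStub(int /*level*/) {}

      void CompactRangeSketch(int num_levels) {
        // 1) Flush the memtable first, so freshly written keys reach SST files
        //    and participate in the manual compaction (and see the filter).
        FlushMemTableStub();
        // 2) Only now compute the deepest level that has files.
        int max_level_with_files = MaxLevelWithFilesStub();
        // 3) Compact every level that has files, *including* max_level_with_files,
        //    so the compaction filter is applied to every key.
        int last = std::min(max_level_with_files, num_levels - 1);
        for (int level = 0; level <= last; ++level) {
          CompactLevelStub(level);
        }
      }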
    • VersionEdit not to take NumLevels() · 055e6df4
      Committed by Igor Canadi
      Summary:
      I will submit a sequence of diffs that are preparing master branch for column families. There are a lot of implicit assumptions in the code that are making column family implementation hard. If I make the change only in column family branch, it will make merging back to master impossible.
      
      Most of the diffs will be simple code refactorings, so I hope we can have fast turnaround time. Feel free to grab me in person to discuss any of them.
      
      This diff removes the number-of-levels check from VersionEdit. It is used only when VersionEdit is read, not written, but has to be set when it is written. I believe it is the right thing to make VersionEdit dumb and check consistency on the caller side. This will also make it much easier to implement column families, since different column families can have different numbers of levels.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      Reviewed By: kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15159
    • BuildBatchGroup -- memcpy outside of lock · 7d9f21cf
      Committed by Igor Canadi
      Summary: When building a batch group, don't actually build a new batch, since that requires a heavyweight memcpy and malloc. Only store references to the batches and build the batch group without the lock held. (A sketch of the two phases follows at the end of this entry.)
      
      Test Plan:
      `make check`
      
      I am also planning to run performance tests. The workload that will benefit from this change is readwhilewriting. I will post the results once I have them.
      
      Reviewers: dhruba, haobo, kailiu
      
      Reviewed By: haobo
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D15063
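      A minimal sketch of the two phases described above: under the mutex we only collect pointers to the writers' batches; the memcpy-heavy merge happens after the mutex is released. The types are simplified stand-ins for Writer/WriteBatch.

      #include <deque>
      #include <string>
      #include <vector>

      struct WriteBatchSketch { std::string rep; };
      struct WriterSketch { WriteBatchSketch* batch; };

      // Phase 1 (mutex held): gather references only, no copying.
      std::vector<WriteBatchSketch*> BuildBatchGroupRefs(
          const std::deque<WriterSketch*>& writers) {
        std::vector<WriteBatchSketch*> group;
        for (WriterSketch* w : writers) {
          group.push_back(w->batch);
        }
        return group;
      }

      // Phase 2 (mutex released): do the heavyweight concatenation.
      WriteBatchSketch MergeBatchGroup(const std::vector<WriteBatchSketch*>& group) {
        WriteBatchSketch merged;
        for (const WriteBatchSketch* b : group) {
          merged.rep.append(b->rep);  // stand-in for the real batch append logic
        }
        return merged;
      }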
    • Use sanitized options while opening db · 1d9bac4d
      Committed by Naman Gupta
      Summary: We use SanitizeOptions() to set appropriate values for some options, based on other options. So we should use the sanitized options by default. Luckily it hasn't caused a bug yet, but it could result in a bug in the future.
      
      Test Plan: make check
      
      Reviewers: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14103
    • Pre-calculate whether to slow down for too many level 0 files · fbbf0d14
      Committed by Siying Dong
      Summary: Currently in DBImpl::MakeRoomForWrite(), we evaluate "versions_->NumLevelFiles(0) >= options_.level0_slowdown_writes_trigger" to check whether the writer thread needs to slow down. However, versions_->NumLevelFiles(0) is slightly more expensive than we expected. By caching the result of the comparison when installing a new version, we can avoid this function call every time.
      
      Test Plan:
      make all check
      Manually triggered this behavior by applying the universal compaction style and made sure inserts are slowed down after there are a certain number of files.
      
      Reviewers: haobo, kailiu, igor
      
      Reviewed By: kailiu
      
      CC: nkg-, leveldb
      
      Differential Revision: https://reviews.facebook.net/D15141
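      The caching idea above amounts to evaluating the comparison once when a new version is installed and reading a plain boolean on the write path. A tiny illustrative sketch with hypothetical member names:

      struct VersionSketch {
        int num_level0_files = 0;
        // Computed once, when this version is installed.
        bool need_slowdown_for_num_level0_files = false;
      };

      void OnInstallVersion(VersionSketch* v, int level0_slowdown_writes_trigger) {
        // Evaluate the (slightly expensive) condition once per version...
        v->need_slowdown_for_num_level0_files =
            v->num_level0_files >= level0_slowdown_writes_trigger;
      }

      bool MakeRoomForWriteNeedsSlowdown(const VersionSketch& current) {
        // ...so the hot write path only reads a cached boolean.
        return current.need_slowdown_for_num_level0_files;
      }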
    • DB::Put() to estimate write batch data size needed and pre-allocate buffer · 51dd2192
      Committed by Siying Dong
      Summary:
      In one of the CPU profiles, we see some CPU cost from string::reserve() inside Batch.Put(). This patch should be able to reduce some of that cost by allocating a sufficient buffer beforehand.
      
      Since it is a trivial percentage of CPU cost, I didn't find a way to show the improvement in one of the benchmarks. I'll deploy it to the same application and do the same CPU profiling to make sure those CPU costs are reduced.
      
      Test Plan: make all check
      
      Reviewers: haobo, kailiu, igor
      
      Reviewed By: haobo
      
      CC: leveldb, nkg-
      
      Differential Revision: https://reviews.facebook.net/D15135
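      A small sketch of the pre-allocation idea: estimate the encoded size of the record and reserve it on the destination string before appending, so the string does not grow (and reallocate) repeatedly. The layout constants below are illustrative, not the exact WriteBatch encoding.

      #include <string>

      void AppendPutRecordSketch(std::string* rep,
                                 const std::string& key,
                                 const std::string& value) {
        // Rough upper bound: 1 type byte + up to 5 bytes per varint32 length
        // prefix + the key and value payloads (illustrative arithmetic only).
        size_t estimated = 1 + 5 + key.size() + 5 + value.size();
        rep->reserve(rep->size() + estimated);

        rep->push_back(0x01);  // stand-in for the record type tag
        rep->append(key);      // the real code writes a varint32 length first
        rep->append(value);
      }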
  9. 14 January 2014 (1 commit)
  10. 09 January 2014 (2 commits)
    • StopWatch not to get time if it is created for statistics and it is disabled · 55753163
      Committed by Siying Dong
      Summary: Currently, even if statistics are not enabled, a StopWatch created only for statistics still gets the time of day, which is wasteful. This patch adds a new option to StopWatch to skip that call in this case.
      
      Test Plan: make all check
      
      Reviewers: dhruba, haobo, igor
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14703
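      A minimal sketch of the "don't read the clock when nobody will consume the measurement" idea; this is a simplified model, not the actual StopWatch class.

      #include <chrono>
      #include <cstdint>

      class StopWatchSketch {
       public:
        explicit StopWatchSketch(bool stats_enabled)
            : enabled_(stats_enabled),
              start_(enabled_ ? NowMicros() : 0) {}  // skip the clock read when disabled

        uint64_t ElapsedMicros() const { return enabled_ ? NowMicros() - start_ : 0; }

       private:
        static uint64_t NowMicros() {
          return static_cast<uint64_t>(
              std::chrono::duration_cast<std::chrono::microseconds>(
                  std::chrono::steady_clock::now().time_since_epoch()).count());
        }
        const bool enabled_;
        const uint64_t start_;
      };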
    • Add column family information to WAL · 19e3ee64
      Committed by Igor Canadi
      Summary:
      I have added three new value types:
      * kTypeColumnFamilyDeletion
      * kTypeColumnFamilyValue
      * kTypeColumnFamilyMerge
      which include the column family ID as a Varint32 before the data (value, deletion and merge). These value types are used only in the WAL (not in memtables yet).
      
      This endeavour required changing some WriteBatch internals.
      
      Test Plan: Added a unittest
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15045
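      A small sketch of what prefixing a record with a column family Varint32 looks like, using a hand-rolled varint encoder. The tag constant is a placeholder, not the real enum value of kTypeColumnFamilyValue.

      #include <cstdint>
      #include <string>

      // Standard LEB128-style varint32 encoding.
      void PutVarint32(std::string* dst, uint32_t v) {
        while (v >= 0x80) {
          dst->push_back(static_cast<char>((v & 0x7f) | 0x80));
          v >>= 7;
        }
        dst->push_back(static_cast<char>(v));
      }

      const char kColumnFamilyValueTag = 0x20;  // placeholder tag value

      // Record layout sketch: [type byte][varint32 column family id][key][value].
      void AppendColumnFamilyPut(std::string* rep, uint32_t column_family_id,
                                 const std::string& key, const std::string& value) {
        rep->push_back(kColumnFamilyValueTag);
        PutVarint32(rep, column_family_id);
        PutVarint32(rep, static_cast<uint32_t>(key.size()));
        rep->append(key);
        PutVarint32(rep, static_cast<uint32_t>(value.size()));
        rep->append(value);
      }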
  11. 08 January 2014 (3 commits)
    • Don't always compress L0 files written by memtable flush · 50994bf6
      Committed by Mark Callaghan
      Summary:
      Code was always compressing L0 files written by a memtable flush
      when compression was enabled. Now this is done when
      min_level_to_compress=0 for leveled compaction and when
      universal_compaction_size_percent=-1 for universal compaction.
      
      Task ID: #3416472
      
      Test Plan:
      ran db_bench with compression options
      
      
      Reviewers: dhruba, igor, sdong
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14757
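      The rule described above can be read as a small predicate. A hedged sketch using the option names from the commit message (the signature is simplified and hypothetical, not the actual code):

      // Returns true if an L0 file produced by a memtable flush should be
      // compressed, per the rule in the commit message above.
      bool ShouldCompressLevel0Output(bool leveled_compaction,
                                      int min_level_to_compress,
                                      int universal_compaction_size_percent) {
        if (leveled_compaction) {
          // Leveled: compress flush output only if compression starts at level 0.
          return min_level_to_compress == 0;
        }
        // Universal: compress flush output only if size-based selection is off.
        return universal_compaction_size_percent == -1;
      }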
    • [column families] Implement DB::OpenWithColumnFamilies() · 72918eff
      Committed by Igor Canadi
      Summary:
      In addition to implementing OpenWithColumnFamilies, this diff also includes some minor changes:
      * Changed all column family names from Slice() to std::string. The performance of column family name handling is not critical, and it's more convenient and cleaner to have names as std::strings
      * Implemented ColumnFamilyOptions(const Options&) and DBOptions(const Options&)
      * Added ColumnFamilyOptions to VersionSet::ColumnFamilyData. ColumnFamilyOptions are specified on OpenWithColumnFamilies() and CreateColumnFamily()
      
      I will keep the diff in the Phabricator for a day or two and will push to the branch then. Feel free to comment even after the diff has been pushed.
      
      Test Plan: Added a simple unit test
      
      Reviewers: dhruba, haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15033
    • Fix a deadlock in CompactRange() · 9f690ec6
      Committed by Tomislav Novak
      Summary:
      The way DBImpl::TEST_CompactRange() throttles down the number of bg compactions
      can cause it to deadlock when CompactRange() is called concurrently from
      multiple threads. Imagine a following scenario with only two threads
      (max_background_compactions is 10 and bg_compaction_scheduled_ is initially 0):
      
         1. Thread #1 increments bg_compaction_scheduled_ (to LargeNumber), sets
            bg_compaction_scheduled_ to 9 (newvalue), schedules the compaction
            (bg_compaction_scheduled_ is now 10) and waits for it to complete.
         2. Thread #2 calls TEST_CompactRange(), increments bg_compaction_scheduled_
            (now LargeNumber + 10) and waits on a cv for bg_compaction_scheduled_ to
            drop to LargeNumber.
         3. BG thread completes the first manual compaction, decrements
            bg_compaction_scheduled_ and wakes up all threads waiting on bg_cv_.
            Thread #1 runs, increments bg_compaction_scheduled_ by LargeNumber again
            (now 2*LargeNumber + 9). Since that's more than LargeNumber + newvalue,
            thread #2 also goes to sleep (waiting on bg_cv_), without resetting
            bg_compaction_scheduled_.
      
      This diff attempts to address the problem by introducing a new counter
      bg_manual_only_ (when positive, MaybeScheduleFlushOrCompaction() will only
      schedule manual compactions).
      
      Test Plan:
      I could pretty much consistently reproduce the deadlock with a program that
      calls CompactRange(nullptr, nullptr) immediately after Write() from multiple
      threads. This no longer happens with this patch.
      
      Tests (make check) pass.
      
      Reviewers: dhruba, igor, sdong, haobo
      
      Reviewed By: igor
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14799
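      A minimal sketch of the scheduling guard introduced above: while manual compactions are pending, the scheduler refuses to queue automatic compactions. Apart from bg_manual_only, the names paraphrase the description, and this is not the actual DBImpl logic.

      struct CompactionSchedulerSketch {
        int bg_compaction_scheduled = 0;
        int max_background_compactions = 1;
        // Positive while manual compactions are waiting; while positive, only
        // manual compaction work may be scheduled.
        int bg_manual_only = 0;
        bool manual_compaction_pending = false;

        void MaybeScheduleCompaction() {
          if (bg_compaction_scheduled >= max_background_compactions) {
            return;  // background thread pool already saturated
          }
          if (bg_manual_only > 0 && !manual_compaction_pending) {
            return;  // don't let automatic compactions starve the manual one
          }
          ++bg_compaction_scheduled;
          // ... hand the work item to the background thread pool ...
        }
      };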
  12. 03 January 2014 (1 commit)
  13. 02 January 2014 (1 commit)
    • Support multi-threaded DisableFileDeletions() and EnableFileDeletions() · b60c14f6
      Committed by Igor Canadi
      Summary:
      We don't want two threads to clash if they concurrently call DisableFileDeletions() and EnableFileDeletions(). I'm adding a counter that will enable file deletions only after all DisableFileDeletions() calls have been negated with EnableFileDeletions().
      
      However, we also don't want to break the old behavior, so I added a parameter force to EnableFileDeletions(). If force is true, we will still enable file deletions after every call to EnableFileDeletions(), which is what is happening now.
      
      Test Plan: make check
      
      Reviewers: dhruba, haobo, sanketh
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14781
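      A small self-contained sketch of the counting scheme described above, including the force escape hatch (simplified API, not the actual DBImpl methods):

      #include <mutex>

      class FileDeletionGateSketch {
       public:
        void DisableFileDeletions() {
          std::lock_guard<std::mutex> l(mu_);
          ++disable_count_;
        }
        // Deletions are re-enabled only when every Disable call has been matched,
        // unless force is true, which restores the old enable-immediately behavior.
        void EnableFileDeletions(bool force) {
          std::lock_guard<std::mutex> l(mu_);
          if (force) {
            disable_count_ = 0;
          } else if (disable_count_ > 0) {
            --disable_count_;
          }
        }
        bool DeletionsEnabled() const {
          std::lock_guard<std::mutex> l(mu_);
          return disable_count_ == 0;
        }
       private:
        mutable std::mutex mu_;
        int disable_count_ = 0;
      };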
  14. 21 December 2013 (1 commit)
    • [RocksDB] Optimize locking for Get · 1fdb3f7d
      Committed by Igor Canadi
      Summary:
      Instead of locking and saving a DB state, we can cache a DB state and update it only when it changes. This change reduces lock contention and speeds up read operations on the DB.
      
      Performance improvements are substantial, although there is some cost in no-read workloads. I ran the regression tests on my devserver and here are the numbers:
      
        overwrite                    56345  ->   63001
        fillseq                      193730 ->  185296
        readrandom                   771301 -> 1219803 (58% improvement!)
        readrandom_smallblockcache   677609 ->  862850
        readrandom_memtable_sst      710440 -> 1109223
        readrandom_fillunique_random 221589 ->  247869
        memtablefillrandom           105286 ->   92643
        memtablereadrandom           763033 -> 1288862
      
      Test Plan:
      make asan_check
      I am also running db_stress
      
      Reviewers: dhruba, haobo, sdong, kailiu
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D14679
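      A minimal sketch of the cached-state idea: readers atomically grab a shared snapshot of the structures Get() needs, and the snapshot is replaced only when those structures change. shared_ptr stands in for the reference counting a SuperVersion-style object would use; the names are illustrative.

      #include <memory>

      struct ReadStateSketch {
        // In a real DB this would reference the memtable, the immutable
        // memtables and the current Version -- everything Get() needs.
        int placeholder = 0;
      };

      class DBStateCacheSketch {
       public:
        // Read path: no mutex, just grab the current snapshot.
        std::shared_ptr<const ReadStateSketch> Acquire() const {
          return std::atomic_load(&current_);
        }
        // Flush/compaction/switch-memtable path: install a new snapshot only
        // when the underlying state actually changes.
        void Install(std::shared_ptr<const ReadStateSketch> next) {
          std::atomic_store(&current_, std::move(next));
        }
       private:
        std::shared_ptr<const ReadStateSketch> current_ =
            std::make_shared<ReadStateSketch>();
      };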