  1. 22 Nov 2016 (7 commits)
  2. 21 Nov 2016 (1 commit)
    • Fix deadlock when calling getMergedHistogram · a0deec96
      Committed by Changli Gao
      Summary:
      When StatisticsImpl::HistogramInfo::getMergedHistogram() is called while a
      dying thread is running ThreadLocalPtr::StaticMeta::OnThreadExit() to merge
      its thread-local values into HistogramInfo, a deadlock can occur: the former
      acquires merge_lock and then ThreadMeta::mutex_, while the latter acquires
      ThreadMeta::mutex_ and then merge_lock. In short, the two lock orders are
      inverted.
      
      This patch addresses the issue by releasing merge_lock before folding in the
      thread values (a hedged sketch of the two lock orders follows this entry).
      Closes https://github.com/facebook/rocksdb/pull/1552
      
      Differential Revision: D4211942
      
      Pulled By: ajkr
      
      fbshipit-source-id: ef89bcb
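      A minimal C++ sketch of the lock-order inversion described above and the
      shape of the fix. All names and function bodies are illustrative stand-ins,
      not the actual StatisticsImpl / ThreadLocalPtr code:

        #include <mutex>

        std::mutex merge_lock;          // stands in for HistogramInfo's merge lock
        std::mutex thread_meta_mutex;   // stands in for ThreadMeta::mutex_

        // Before the fix: getMergedHistogram() held merge_lock and then grabbed the
        // per-thread mutex, while OnThreadExit() acquired them in the opposite
        // order -- a classic lock-order inversion that can deadlock.
        void GetMergedHistogram_Old() {
          std::lock_guard<std::mutex> m(merge_lock);
          std::lock_guard<std::mutex> t(thread_meta_mutex);  // merge_lock -> mutex_
          // ... fold each live thread's histogram into the merged result ...
        }

        void OnThreadExit() {
          std::lock_guard<std::mutex> t(thread_meta_mutex);
          std::lock_guard<std::mutex> m(merge_lock);         // mutex_ -> merge_lock
          // ... fold the dying thread's histogram into the merged result ...
        }

        // After the fix (as described in the summary): merge_lock is released
        // before the thread values are folded, so the two mutexes are never held
        // together and no ordering cycle can form.
        void GetMergedHistogram_Fixed() {
          {
            std::lock_guard<std::mutex> m(merge_lock);
            // ... read or copy only the state that merge_lock protects ...
          }
          std::lock_guard<std::mutex> t(thread_meta_mutex);
          // ... fold the thread values without holding merge_lock ...
        }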
  3. 20 Nov 2016 (2 commits)
    • Remove Arena in RangeDelAggregator · fe349db5
      Committed by Andrew Kryczka
      Summary:
      The Arena's construction/destruction introduced significant overhead to read-heavy workloads just by creating empty vectors for its blocks, so avoid it in RangeDelAggregator.
      Closes https://github.com/facebook/rocksdb/pull/1547
      
      Differential Revision: D4207781
      
      Pulled By: ajkr
      
      fbshipit-source-id: 9d1c130
    • Use more efficient hash map for deadlock detection · e63350e7
      Committed by Manuel Ung
      Summary:
      Currently, deadlock cycles are held in a std::unordered_map. The problem is that it allocates/deallocates memory on every insertion/deletion, which limits throughput since we perform this expensive operation while holding a global mutex. Fix this by using a vector that caches its memory instead (a hedged sketch of the idea follows this entry).
      
      Running the deadlock stress test, this change increased throughput from 39k txns/s to 49k txns/s. The effect is more noticeable in MyRocks.
      Closes https://github.com/facebook/rocksdb/pull/1545
      
      Differential Revision: D4205662
      
      Pulled By: lth
      
      fbshipit-source-id: ff990e4
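      A minimal sketch of the vector-backed replacement idea, assuming a flat list
      of (transaction, waiting-on) pairs whose Clear() keeps its capacity. The
      class and member names are illustrative, not the actual lock-manager code:

        #include <cstdint>
        #include <utility>
        #include <vector>

        class FlatTxnMap {
         public:
          void Insert(uint64_t txn_id, uint64_t waiting_on) {
            // Reuses capacity cached by the vector; no allocation after warm-up.
            entries_.emplace_back(txn_id, waiting_on);
          }
          const uint64_t* Find(uint64_t txn_id) const {
            // Linear scan; acceptable because deadlock cycles are short.
            for (const auto& e : entries_) {
              if (e.first == txn_id) return &e.second;
            }
            return nullptr;
          }
          void Clear() { entries_.clear(); }  // keeps capacity, no deallocation
         private:
          std::vector<std::pair<uint64_t, uint64_t>> entries_;
        };

      The linear Find() trades asymptotic lookup cost for allocation-free inserts,
      which is the favorable trade-off when the structure is small and every
      operation runs under a global mutex.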
  4. 19 Nov 2016 (5 commits)
    • Skip ldb test in Travis · a13bde39
      Committed by Siying Dong
      Summary:
      Travis is now building the ldb tests. Disable them for now to unblock other tests while we investigate.
      Closes https://github.com/facebook/rocksdb/pull/1546
      
      Differential Revision: D4209404
      
      Pulled By: siying
      
      fbshipit-source-id: 47edd97
    • Direct I/O Reads Handle the last sector correctly. · 73843aa6
      Committed by Siying Dong
      Summary:
      Currently, in Direct I/O read mode, the last sector of the file, if not full, is not handled correctly. If the return value of pread is not a multiple of kSectorSize, we still go ahead and continue reading, even though the buffer is no longer aligned. With this commit, if the return value is not a multiple of kSectorSize and everything up to the last sector has been read, we simply return (a hedged sketch of the read loop follows this entry).
      Closes https://github.com/facebook/rocksdb/pull/1550
      
      Differential Revision: D4209609
      
      Pulled By: lightmark
      
      fbshipit-source-id: cb0b439
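      A hedged sketch of the read-loop termination condition described above. The
      function name, the kSectorSize value, and the error handling are assumptions
      for illustration, not the actual PosixRandomAccessFile code:

        #include <unistd.h>
        #include <cstddef>

        static const size_t kSectorSize = 512;  // assumed sector size for the sketch

        ssize_t DirectReadAligned(int fd, char* aligned_buf, size_t len, off_t offset) {
          size_t done = 0;
          while (done < len) {
            ssize_t r = pread(fd, aligned_buf + done, len - done, offset + done);
            if (r <= 0) break;                  // EOF or error
            done += static_cast<size_t>(r);
            if (static_cast<size_t>(r) % kSectorSize != 0) {
              // Short, non-sector-aligned return: we just consumed the file's
              // final, partially filled sector, so stop instead of issuing
              // another read from a now-unaligned buffer position.
              break;
            }
          }
          return static_cast<ssize_t>(done);
        }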
    • Implement PositionedAppend for PosixWritableFile · 9d60151b
      Committed by Maysam Yabandeh
      Summary:
      This patch clarifies the contract of PositionedAppend with some unit
      tests and also implements it for PosixWritableFile; a hedged sketch of a
      positioned write follows this entry. (Tasks: 14524071)
      Closes https://github.com/facebook/rocksdb/pull/1514
      
      Differential Revision: D4204907
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 06eabd2
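      A hedged sketch of what a positioned write can look like on POSIX using
      pwrite(). The function name and the bool return convention are illustrative;
      the real PosixWritableFile carries more state (alignment, Status conversion)
      that this omits:

        #include <unistd.h>
        #include <string>

        // Write `data` at an explicit file offset rather than at the file's
        // current position, retrying on short writes.
        bool PositionedWrite(int fd, const std::string& data, off_t offset) {
          const char* src = data.data();
          size_t left = data.size();
          while (left > 0) {
            ssize_t n = pwrite(fd, src, left, offset);
            if (n < 0) return false;  // caller would turn errno into a Status
            src += n;
            left -= static_cast<size_t>(n);
            offset += n;
          }
          return true;
        }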
    • Lazily initialize RangeDelAggregator's map and pinning manager · 3f622152
      Committed by Andrew Kryczka
      Summary:
      Since a RangeDelAggregator is created for each read request, these heap-allocating member variables were consuming significant CPU (~3% total), which slowed down request throughput. The map and pinning manager are only necessary when range deletions exist, so we can defer their initialization until the first range deletion is encountered (a hedged sketch of the pattern follows this entry). Currently lazy initialization is done for reads only, since reads pass us a single snapshot, which is easier to store on the stack for later insertion into the map than the vector passed to us by flush or compaction.
      
      Note that the Arena member variable is still expensive; I will figure out what to do with it in a subsequent diff. It cannot be lazily initialized because we currently use this arena even to allocate empty iterators, which is necessary even when no range deletions exist.
      Closes https://github.com/facebook/rocksdb/pull/1539
      
      Differential Revision: D4203488
      
      Pulled By: ajkr
      
      fbshipit-source-id: 3b36279
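      A hedged sketch of the lazy-initialization pattern described above. The
      class, the map's key/value layout, and the member names are illustrative,
      not the actual RangeDelAggregator members:

        #include <iterator>
        #include <map>
        #include <memory>
        #include <string>

        class RangeDelAggregatorSketch {
         public:
          void AddTombstone(const std::string& start, const std::string& end) {
            InitIfNeeded();                 // first tombstone triggers allocation
            (*stripe_map_)[start] = end;
          }
          bool ShouldDelete(const std::string& key) const {
            if (!stripe_map_) return false; // fast path: no range deletions seen
            auto it = stripe_map_->upper_bound(key);
            // Check whether key falls inside the tombstone with the greatest
            // start <= key, i.e. key is in [start, end).
            return it != stripe_map_->begin() && key < std::prev(it)->second;
          }
         private:
          void InitIfNeeded() {
            if (!stripe_map_) {
              stripe_map_.reset(new std::map<std::string, std::string>());
            }
          }
          std::unique_ptr<std::map<std::string, std::string>> stripe_map_;
        };

      Reads that never encounter a range deletion only ever touch the null check,
      so they avoid the heap allocation entirely.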
    • cmake: s/STEQUAL/STREQUAL/ · 41e77b83
      Committed by Kefu Chai
      Summary:
      Signed-off-by: Kefu Chai <tchaikov@gmail.com>
      Closes https://github.com/facebook/rocksdb/pull/1540
      
      Differential Revision: D4207564
      
      Pulled By: siying
      
      fbshipit-source-id: 567415b
  5. 18 Nov 2016 (4 commits)
  6. 17 Nov 2016 (8 commits)
  7. 16 Nov 2016 (10 commits)
  8. 15 Nov 2016 (3 commits)