1. 20 Nov 2015, 1 commit
    • Reduce moving memory in LDB::ScanCommand · 88e05277
      Committed by Islam AbdelRahman
      Summary:
      Based on https://github.com/facebook/rocksdb/issues/843
      It appears that when the data is hot, we spend a significant amount of time moving data out of RocksDB blocks. This patch reduces memory copies where possible.
      
      Original performance
      ```
      $ time ./ldb --db=/home/tec/local/ellina_test/testdb scan > /dev/null
      real	0m16.736s
      user	0m11.993s
      sys	0m4.725s
      ```
      
      Performance after reducing memcpy
      ```
      $ time ./ldb --db=/home/tec/local/ellina_test/testdb scan > /dev/null
      real	0m11.590s
      user	0m6.983s
      sys	0m4.595s
      ```
      
      Test Plan:
      Dump the output of the scan into two files and verify that they are exactly the same.
      make check
      
      Reviewers: sdong, yhchiang, anthony, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D51093
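      The idea above can be sketched in plain C++. The functions below are hypothetical illustrations, not RocksDB code: they contrast copying each entry through an intermediate `std::string` (two copies per entry) with appending a non-owning `std::string_view` directly to the output buffer (one copy per entry).
      ```
      #include <string>
      #include <string_view>
      #include <vector>

      // Hypothetical "before": each entry is copied out of the block into a
      // temporary string, then copied again into the output buffer.
      std::string ScanWithExtraCopy(const std::vector<std::string>& block) {
        std::string out;
        for (const auto& entry : block) {
          std::string tmp = entry;  // extra copy out of the block
          out += tmp;               // second copy into the output
          out += '\n';
        }
        return out;
      }

      // Hypothetical "after": a non-owning view avoids the intermediate copy;
      // the bytes are copied only once, into the output buffer.
      std::string ScanWithViews(const std::vector<std::string>& block) {
        std::string out;
        for (const auto& entry : block) {
          std::string_view view(entry);  // no copy, just a view
          out.append(view);              // single copy into the output
          out += '\n';
        }
        return out;
      }
      ```
      Both functions produce identical output; only the number of copies differs, which is the kind of saving the numbers above reflect.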
  2. 19 Nov 2015, 5 commits
  3. 18 Nov 2015, 15 commits
  4. 17 Nov 2015, 14 commits
  5. 14 Nov 2015, 2 commits
    • Reuse file iterators in tailing iterator when memtable is flushed · 7824444b
      Committed by Venkatesh Radhakrishnan
      Summary:
      Under a tailing workload, there were increased block cache misses when a
      memtable was flushed, because we were rebuilding all iterators whenever
      the version set changed. This was exacerbated in the case of
      iterate_upper_bound: file iterators beyond the iterate_upper_bound had
      already been deleted and were brought back by the rebuild, only to be
      deleted again. We now renew the iterators instead: we build iterators
      only for files that were added and delete file iterators only for files
      that were removed.
      Refer to https://reviews.facebook.net/D50463 for the previous version.
      
      Test Plan: DBTestTailingIterator.TailingIteratorTrimSeekToNext
      
      Reviewers: anthony, IslamAbdelRahman, igor, tnovak, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: yhchiang, march, dhruba, leveldb, lovro
      
      Differential Revision: https://reviews.facebook.net/D50679
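      A minimal sketch of the renew idea, with hypothetical names (`FileIterator` and `RenewIterators` are illustrations, not the RocksDB API): iterators for files present in both versions are reused, only newly added files get fresh iterators, and iterators for removed files are dropped.
      ```
      #include <map>
      #include <set>
      #include <string>

      // Stand-in for a real per-file iterator.
      struct FileIterator { std::string file; };

      // Given the old iterator map and the file set of the new version,
      // reuse what we can, build only what is new, drop what is gone.
      std::map<std::string, FileIterator> RenewIterators(
          const std::map<std::string, FileIterator>& old_iters,
          const std::set<std::string>& new_files) {
        std::map<std::string, FileIterator> renewed;
        for (const auto& file : new_files) {
          auto it = old_iters.find(file);
          if (it != old_iters.end()) {
            renewed.emplace(file, it->second);          // reuse existing
          } else {
            renewed.emplace(file, FileIterator{file});  // build only new
          }
        }
        // Iterators for files absent from new_files are simply not carried
        // over, i.e. they are deleted.
        return renewed;
      }
      ```
      Reusing the surviving iterators is what preserves their warm block cache state across a memtable flush.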
    • Make sure that CompactFiles does not run two parallel Level 0 compactions · 2ae4d7d7
      Committed by Venkatesh Radhakrishnan
      Summary:
      Since level 0 files can have overlapping key ranges, two level 0
      compactions cannot run in parallel. CompactFiles needs to check for
      this before starting a compaction.
      
      Test Plan: CompactFilesTest.L0ConflictsFiles
      
      Reviewers: igor, IslamAbdelRahman, anthony, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D50079
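      A minimal sketch of such a conflict check, with hypothetical names (`Compaction` and `CanRunCompaction` are illustrations, not the RocksDB API): a candidate compaction touching level 0 is rejected while another level 0 compaction is already running.
      ```
      #include <vector>

      // Stand-in for a scheduled compaction job.
      struct Compaction { bool includes_level0; };

      // Because L0 files may overlap each other, at most one compaction
      // that reads level 0 may run at a time.
      bool CanRunCompaction(const std::vector<Compaction>& running,
                            const Compaction& candidate) {
        if (!candidate.includes_level0) return true;
        for (const auto& c : running) {
          if (c.includes_level0) return false;  // two L0 compactions conflict
        }
        return true;
      }
      ```
      Compactions on higher levels pick disjoint key ranges, so they never need this check.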
  6. 13 Nov 2015, 3 commits