1. September 19, 2014 (3 commits)
    • CuckooTable: add one option to allow identity function for the first hash function · 51af7c32
      Committed by Lei Jin
      Summary:
      MurmurHash becomes expensive when we do millions of Get() calls per second
      in one thread. Add this option to allow the first hash function to be the
      identity function. It increases QPS from 3.7M/s to ~4.3M/s. I did not
      observe an improvement in end-to-end RocksDB performance; this may be
      caused by other bottlenecks that I will address in a separate diff. (A
      usage sketch follows this entry.)
      
      Test Plan:
      ```
      [ljin@dev1964 rocksdb] ./cuckoo_table_reader_test --enable_perf --file_dir=/dev/shm --write --identity_as_first_hash=0
      ==== Test CuckooReaderTest.WhenKeyExists
      ==== Test CuckooReaderTest.WhenKeyExistsWithUint64Comparator
      ==== Test CuckooReaderTest.CheckIterator
      ==== Test CuckooReaderTest.CheckIteratorUint64
      ==== Test CuckooReaderTest.WhenKeyNotFound
      ==== Test CuckooReaderTest.TestReadPerformance
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.272us (3.7 Mqps) with batch size of 0, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.138us (7.2 Mqps) with batch size of 10, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.142us (7.1 Mqps) with batch size of 25, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.142us (7.0 Mqps) with batch size of 50, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.144us (6.9 Mqps) with batch size of 100, # of found keys 125829120
      
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.201us (5.0 Mqps) with batch size of 0, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.121us (8.3 Mqps) with batch size of 10, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.123us (8.1 Mqps) with batch size of 25, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.121us (8.3 Mqps) with batch size of 50, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.112us (8.9 Mqps) with batch size of 100, # of found keys 104857600
      
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.251us (4.0 Mqps) with batch size of 0, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.107us (9.4 Mqps) with batch size of 10, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.099us (10.1 Mqps) with batch size of 25, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.100us (10.0 Mqps) with batch size of 50, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.116us (8.6 Mqps) with batch size of 100, # of found keys 83886080
      
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.189us (5.3 Mqps) with batch size of 0, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.095us (10.5 Mqps) with batch size of 10, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.096us (10.4 Mqps) with batch size of 25, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.098us (10.2 Mqps) with batch size of 50, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.105us (9.5 Mqps) with batch size of 100, # of found keys 73400320
      
      [ljin@dev1964 rocksdb] ./cuckoo_table_reader_test --enable_perf --file_dir=/dev/shm --write --identity_as_first_hash=1
      ==== Test CuckooReaderTest.WhenKeyExists
      ==== Test CuckooReaderTest.WhenKeyExistsWithUint64Comparator
      ==== Test CuckooReaderTest.CheckIterator
      ==== Test CuckooReaderTest.CheckIteratorUint64
      ==== Test CuckooReaderTest.WhenKeyNotFound
      ==== Test CuckooReaderTest.TestReadPerformance
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.230us (4.3 Mqps) with batch size of 0, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.086us (11.7 Mqps) with batch size of 10, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.088us (11.3 Mqps) with batch size of 25, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.083us (12.1 Mqps) with batch size of 50, # of found keys 125829120
      With 125829120 items, utilization is 93.75%, number of hash functions: 2.
      Time taken per op is 0.083us (12.1 Mqps) with batch size of 100, # of found keys 125829120
      
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.159us (6.3 Mqps) with batch size of 0, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.078us (12.8 Mqps) with batch size of 10, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.080us (12.6 Mqps) with batch size of 25, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.080us (12.5 Mqps) with batch size of 50, # of found keys 104857600
      With 104857600 items, utilization is 78.12%, number of hash functions: 2.
      Time taken per op is 0.082us (12.2 Mqps) with batch size of 100, # of found keys 104857600
      
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.154us (6.5 Mqps) with batch size of 0, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.077us (13.0 Mqps) with batch size of 10, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.077us (12.9 Mqps) with batch size of 25, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.078us (12.8 Mqps) with batch size of 50, # of found keys 83886080
      With 83886080 items, utilization is 62.50%, number of hash functions: 2.
      Time taken per op is 0.079us (12.6 Mqps) with batch size of 100, # of found keys 83886080
      
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.218us (4.6 Mqps) with batch size of 0, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.083us (12.0 Mqps) with batch size of 10, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.085us (11.7 Mqps) with batch size of 25, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.086us (11.6 Mqps) with batch size of 50, # of found keys 73400320
      With 73400320 items, utilization is 54.69%, number of hash functions: 2.
      Time taken per op is 0.078us (12.8 Mqps) with batch size of 100, # of found keys 73400320
      ```
      
      Reviewers: sdong, igor, yhchiang
      
      Reviewed By: igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23451
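      A minimal usage sketch, assuming the CuckooTableOptions::identity_as_first_hash
      field that this option corresponds to in today's rocksdb/table.h; the factory
      signature at the time of this diff may have differed:
      ```
      #include "rocksdb/db.h"
      #include "rocksdb/options.h"
      #include "rocksdb/table.h"

      // Sketch only: open a DB whose SST files use the cuckoo table format and
      // whose first hash function is the identity function. Appropriate only
      // when keys are already uniformly distributed fixed-size integers.
      int main() {
        rocksdb::CuckooTableOptions cuckoo_opts;
        cuckoo_opts.identity_as_first_hash = true;  // the option added here

        rocksdb::Options options;
        options.table_factory.reset(rocksdb::NewCuckooTableFactory(cuckoo_opts));
        options.create_if_missing = true;

        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/cuckoo_demo", &db);
        if (!s.ok()) return 1;
        delete db;
        return 0;
      }
      ```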
    • Fixed a signed-unsigned comparison in spatial_db.cc -- issue #293 · 03504355
      Committed by Yueh-Hsuan Chiang
      Summary:
      Fixed a signed-unsigned comparison in spatial_db.cc
      
      ```
      utilities/spatialdb/spatial_db.cc:542:38: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
      cc1plus: all warnings being treated as errors
      make: *** [utilities/spatialdb/spatial_db.o] Error 1
      ```
      
      Test Plan:
      make spatial_db_test
      ./spatial_db_test
      
      Reviewers: ljin, sdong, reddragon, igor
      
      Reviewed By: reddragon
      
      Subscribers: reddragon, leveldb
      
      Differential Revision: https://reviews.facebook.net/D23565
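      The offending line itself isn't reproduced here; as an illustration only,
      this is the usual shape of such a fix under -Werror=sign-compare:
      ```
      #include <cstddef>
      #include <vector>

      // Illustrative only, not the actual spatial_db.cc code. A signed loop
      // index compared against the unsigned result of size() trips
      // -Werror=sign-compare:
      //
      //   for (int i = 0; i < v.size(); ++i) {}  // signed vs. unsigned
      //
      // The fix is to make both sides the same signedness, e.g. size_t:
      long Sum(const std::vector<int>& v) {
        long total = 0;
        for (size_t i = 0; i < v.size(); ++i) {  // size_t matches v.size()
          total += v[i];
        }
        return total;
      }
      ```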
    • Fix synchronization issues · 2fb1fea3
      Committed by Igor Canadi
  2. September 18, 2014 (14 commits)
  3. September 17, 2014 (2 commits)
  4. September 16, 2014 (4 commits)
  5. September 15, 2014 (1 commit)
  6. September 14, 2014 (1 commit)
  7. September 13, 2014 (5 commits)
  8. September 12, 2014 (5 commits)
    • Add make install · ebb5c65e
      Committed by Yueh-Hsuan Chiang
      Summary:
      Add make install. If INSTALL_PATH is not set, RocksDB is installed under
      the "/usr/local" directory (/usr/local/include for headers and
      /usr/local/lib for library file(s)).
      
      Test Plan:
      Write a simple RocksDB app, test.cc, and do the following:

      ```
      make clean
      make static_lib -j32
      sudo make install
      g++ -std=c++11 test.cc -lrocksdb -lbz2 -lz -o test
      ./test

      sudo make uninstall
      make clean
      make shared_lib -j32
      sudo make install
      g++ -std=c++11 test.cc -lrocksdb -lbz2 -lz -o test
      ./test

      make INSTALL_PATH=/tmp/path install
      make INSTALL_PATH=/tmp/path uninstall
      ```

      Then make sure things are installed / uninstalled in the specified path.
      
      Reviewers: ljin, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23211
    • add_wrapped_bloom_test · 0352a9fa
      Committed by Feng Zhu
      Summary:
      1. Wrap a filter policy the way fbcode/multifeed/rocksdb/MultifeedRocksDbKey.h does,
         to ensure that RocksDB works correctly after the FilterPolicy interface change.
         (A sketch of the wrapping pattern follows this entry.)
      
      Test Plan: 1. valgrind ./bloom_test
      
      Reviewers: ljin, igor, yhchiang, dhruba, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23229
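      A minimal sketch of the wrapping pattern being exercised, assuming the
      classic FilterPolicy interface (Name/CreateFilter/KeyMayMatch) RocksDB had
      at the time; the actual wrapper used by bloom_test is not reproduced here:
      ```
      #include <memory>
      #include <string>

      #include "rocksdb/filter_policy.h"
      #include "rocksdb/slice.h"

      // Illustrative wrapper (not the one in bloom_test): delegates every call
      // to an inner bloom filter policy, mirroring how a project-specific
      // header can wrap a policy while the interface underneath changes.
      class WrappedBloomPolicy : public rocksdb::FilterPolicy {
       public:
        WrappedBloomPolicy() : inner_(rocksdb::NewBloomFilterPolicy(10)) {}

        const char* Name() const override { return "WrappedBloomPolicy"; }

        void CreateFilter(const rocksdb::Slice* keys, int n,
                          std::string* dst) const override {
          inner_->CreateFilter(keys, n, dst);  // delegate filter construction
        }

        bool KeyMayMatch(const rocksdb::Slice& key,
                         const rocksdb::Slice& filter) const override {
          return inner_->KeyMayMatch(key, filter);  // delegate lookups
        }

       private:
        std::unique_ptr<const rocksdb::FilterPolicy> inner_;
      };
      ```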
    • Don't run background jobs (flush, compactions) when bg_error_ is set · 9c0e66ce
      Committed by Igor Canadi
      Summary:
      If bg_error_ is set, we mark the DB read-only. However, the current behavior still continues flushes and compactions even though bg_error_ is set.

      On the other hand, if bg_error_ is set, we return Status::OK() from CompactRange(), although the compaction didn't actually succeed. (A schematic of the fix follows this entry.)

      This is clearly not desired behavior. I found this while debugging t5132159, although I'm pretty sure the two aren't related.

      Also, when we're shutting down, it's dangerous to exit RunManualCompaction(), since that destructs the ManualCompaction object. A background compaction job might still hold a reference to manual_compaction_, which would lead to undefined behavior. I changed the behavior so that we only exit RunManualCompaction() when the manual compaction job is marked done.
      
      Test Plan: make check
      
      Reviewers: sdong, ljin, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23223
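      The gist of the fix as a schematic, with names simplified; this is not
      the actual DBImpl code:
      ```
      #include <mutex>

      #include "rocksdb/status.h"

      // Schematic of the guard described above, not the actual DBImpl code.
      // Once bg_error_ is set the DB is effectively read-only, so background
      // work is skipped and manual compactions report the stored error
      // instead of a misleading Status::OK().
      class DBSketch {
       public:
        rocksdb::Status CompactRangeSketch() {
          std::lock_guard<std::mutex> l(mu_);
          if (!bg_error_.ok()) {
            return bg_error_;  // fail loudly rather than pretending success
          }
          // ... schedule and wait for the manual compaction ...
          return rocksdb::Status::OK();
        }

        void MaybeScheduleFlushOrCompaction() {
          std::lock_guard<std::mutex> l(mu_);
          if (!bg_error_.ok()) {
            return;  // no flushes/compactions once the DB is read-only
          }
          // ... enqueue background work ...
        }

       private:
        std::mutex mu_;
        rocksdb::Status bg_error_;  // set when a background job hits an error
      };
      ```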
    • Fix valgrind test · a9639bda
      Committed by Igor Canadi
      Summary: Get valgrind to stop complaining about an uninitialized value.

      Test Plan: valgrind no longer complains.
      
      Reviewers: sdong, yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23289
    • Relax FlushSchedule test · d1f24dc7
      Committed by Igor Canadi
      Summary: The test makes sure that we don't call flush too often. For that, it's OK to check that we have fewer than 10 table files. Otherwise the test is flaky, because it's hard to estimate the number of entries in the memtable before it gets flushed (any ideas?).
      
      Test Plan: Still works, but hopefully less flaky.
      
      Reviewers: ljin, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23241
  9. September 11, 2014 (4 commits)
    • Push model for flushing memtables · 3d9e6f77
      Committed by Igor Canadi
      Summary:
      When a memtable is full, it calls the registered callback. That callback registers the column family as needing a flush. Every write then checks whether any column families need to be flushed. This completely eliminates the need for the MakeRoomForWrite() function and simplifies our write code path. (A toy sketch follows this entry.)

      There is some concurrency complexity when the column family is dropped. I made it a bit less complex by dropping the column family from the write thread in https://reviews.facebook.net/D22965. Let me know if you want to discuss this.

      Test Plan: make check works. I'll also run db_stress, creating and dropping column families, for a while.
      
      Reviewers: yhchiang, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23067
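      A toy sketch of the push model, simplified far beyond the real code;
      FlushSchedulerSketch and its names are hypothetical:
      ```
      #include <mutex>
      #include <unordered_set>

      // Toy sketch of the push model described above (not the real DBImpl
      // code). A full memtable "pushes" its column family onto a pending set
      // via a callback; the write path then picks the flush work up instead
      // of computing it inside a MakeRoomForWrite()-style function.
      struct ColumnFamilySketch {
        int id;
      };

      class FlushSchedulerSketch {
       public:
        // Called from the memtable-full callback.
        void ScheduleFlush(ColumnFamilySketch* cf) {
          std::lock_guard<std::mutex> l(mu_);
          pending_.insert(cf);
        }

        // Called at the start of every write: drain whatever needs flushing.
        template <typename FlushFn>
        void RunPendingFlushes(FlushFn flush) {
          std::unordered_set<ColumnFamilySketch*> todo;
          {
            std::lock_guard<std::mutex> l(mu_);
            todo.swap(pending_);
          }
          for (ColumnFamilySketch* cf : todo) flush(cf);
        }

       private:
        std::mutex mu_;
        std::unordered_set<ColumnFamilySketch*> pending_;
      };
      ```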
    • [unit test] CompactRange should fail if we don't have space · 059e584d
      Committed by Igor Canadi
      Summary:
      See t5106397.
      
      Also, a few more changes:
      1. In unit tests, the assumption is that writes are dropped when there is no space left on the device. I changed the wording around this.
      2. InvalidArgument() errors are only for invalid user-provided arguments. When a file is corrupted, we need to return Status::Corruption instead. (A schematic of this convention follows this entry.)
      
      Test Plan: make check
      
      Reviewers: sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23145
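      Point 2 as a schematic; OpenTableFile is a hypothetical function, not the
      patched code:
      ```
      #include <string>

      #include "rocksdb/status.h"

      // Hypothetical example of the convention, not the patched code:
      // InvalidArgument is reserved for bad user input, while a file that
      // fails its integrity check is reported as Corruption.
      rocksdb::Status OpenTableFile(const std::string& fname, bool verify) {
        if (fname.empty()) {
          return rocksdb::Status::InvalidArgument("file name must be non-empty");
        }
        bool checksum_ok = true;  // stand-in for reading/verifying the footer
        if (verify && !checksum_ok) {
          return rocksdb::Status::Corruption(fname, "bad table file checksum");
        }
        return rocksdb::Status::OK();
      }
      ```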
    • fix RocksDB java build · dd641b21
      Committed by Lei Jin
      Summary: as title
      
      Test Plan: make rocksdbjava
      
      Reviewers: sdong, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23193
    • add_qps_info_in cache bench · 53404d9f
      Committed by Feng Zhu
      Summary: Print QPS in the benchmark summary output. (A sketch of the arithmetic follows this entry.)
      
      Test Plan: ./cache_bench
      
      Reviewers: yhchiang, ljin, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23079
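      Computing a QPS figure is simple arithmetic; a hedged sketch only, as the
      real cache_bench code differs:
      ```
      #include <chrono>
      #include <cstdio>

      // Illustrative only; the real cache_bench code differs. QPS is just
      // operations completed divided by elapsed wall-clock seconds.
      int main() {
        const long ops = 10000000;
        auto start = std::chrono::steady_clock::now();
        // ... run `ops` cache lookups/inserts here ...
        auto end = std::chrono::steady_clock::now();
        double secs = std::chrono::duration<double>(end - start).count();
        std::printf("qps: %.1f\n", secs > 0 ? ops / secs : 0.0);
        return 0;
      }
      ```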
  10. September 10, 2014 (1 commit)