1. 23 Jul 2014, 1 commit
    • Add a utility function to guess optimized options based on constraints · e6de0210
      sdong committed
      Summary:
      Add a function GetOptions() that, based on four user-given parameters (read amplification threshold, write amplification threshold, memory budget for memtables, and target DB size), picks a compaction style and parameters for it. Background thread settings are not touched yet. (A usage sketch follows at the end of this entry.)
      
      One limitation of this algorithm: since the compression ratio and key/value sizes are hard to predict, the level-0 file size is hard to derive from the write buffer size, so a simple 1:1 ratio is used here.
      
      Sample results: https://reviews.facebook.net/P477
      
      Test Plan: Will add a unit test that runs some sample scenarios and checks that the options picked make sense.
      
      Reviewers: yhchiang, dhruba, haobo, igor, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18741
      e6de0210
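      A minimal usage sketch of the helper described above, assuming a free function
      rocksdb::GetOptions(); the parameter names, order, and values shown here are taken
      from the summary rather than from the actual header:

        #include "rocksdb/db.h"
        #include "rocksdb/options.h"

        int main() {
          // Assumed shape of the new helper: pick a compaction style and tuning
          // parameters from a memtable memory budget, read/write amplification
          // thresholds, and a target DB size.
          rocksdb::Options options = rocksdb::GetOptions(
              512 << 20,     // memory budget for memtables: 512 MB
              8,             // read amplification threshold
              32,            // write amplification threshold
              64ULL << 30);  // target DB size: 64 GB

          options.create_if_missing = true;
          rocksdb::DB* db = nullptr;
          rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/guessed_options_db", &db);
          delete db;  // non-null only if Open() succeeded; deleting nullptr is a no-op
          return s.ok() ? 0 : 1;
        }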
  2. 22 Jul 2014, 2 commits
    • Fixed some make and linking issues of RocksDBJava · ae7743f2
      Yueh-Hsuan Chiang committed
      Summary:
      Fixed some make and linking issues of RocksDBJava. Specifically:
      * Added JAVA_LDFLAGS, which does not include gflags.
      * The rocksdbjava library now uses JAVA_LDFLAGS instead of LDFLAGS.
      * java/Makefile now includes build_config.mk.
      * Rearranged the make rocksdbjava workflow to ensure the library file is correctly
        included in the jar file.
      
      Test Plan:
      make rocksdbjava
      make jdb_bench
      java/jdb_bench.sh
      
      Reviewers: dhruba, swapnilghike, zzbennett, rsumbaly, ankgup87
      
      Reviewed By: ankgup87
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D20289
      ae7743f2
    • Adding a new SST table builder based on Cuckoo Hashing · cf3da899
      Radheshyam Balasundaram committed
      Summary:
      A Cuckoo-hashing-based SST table builder. Contains:
      - Cuckoo hashing logic and file storage logic.
      - Unit tests for the logic.
      (A standalone sketch of the cuckoo insertion idea follows at the end of this entry.)
      
      Test Plan:
      make cuckoo_table_builder_test
      ./cuckoo_table_builder_test
      make check all
      
      Reviewers: yhchiang, igor, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D19545
      cf3da899
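      A standalone, simplified sketch of the cuckoo insertion idea behind the builder
      (two candidate buckets per key, bounded displacement); this is an illustration,
      not the actual CuckooTableBuilder code, and TinyCuckoo is a made-up name:

        #include <cstddef>
        #include <functional>
        #include <string>
        #include <utility>
        #include <vector>

        class TinyCuckoo {
         public:
          explicit TinyCuckoo(std::size_t num_buckets) : buckets_(num_buckets) {}

          // Try to place the key into one of its two candidate buckets; on a
          // collision, displace the resident key and re-insert it, up to a
          // bounded number of kicks.
          bool Insert(std::string key) {
            for (int kicks = 0; kicks < kMaxKicks; ++kicks) {
              for (int i = 0; i < 2; ++i) {
                std::size_t b = Bucket(key, i);
                if (buckets_[b].empty()) {
                  buckets_[b] = std::move(key);
                  return true;
                }
              }
              // Both candidates are full: evict the occupant of the first one
              // and try to re-home it in the next round.
              std::size_t victim = Bucket(key, 0);
              std::swap(key, buckets_[victim]);
            }
            return false;  // table too full; a real builder would grow or rehash
          }

         private:
          static constexpr int kMaxKicks = 128;
          std::size_t Bucket(const std::string& key, int which) const {
            return std::hash<std::string>{}(key + static_cast<char>('0' + which)) %
                   buckets_.size();
          }
          std::vector<std::string> buckets_;
        };

        int main() {
          TinyCuckoo table(8);
          const char* keys[] = {"apple", "banana", "cherry", "durian"};
          for (const char* k : keys) {
            table.Insert(k);
          }
          return 0;
        }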
  3. 18 Jul 2014, 1 commit
  4. 17 Jul 2014, 1 commit
  5. 11 Jul 2014, 2 commits
    • Update master to version 3.3 · 01700b69
      sdong committed
      Summary: As title.
      
      Test Plan: no need
      
      Reviewers: igor, yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: haobo, dhruba, xjin, leveldb
      
      Differential Revision: https://reviews.facebook.net/D19629
      01700b69
    • JSON (Document) API sketch · f0a8be25
      Igor Canadi committed
      Summary:
      This is a rough sketch of our new document API. I would like some thoughts and comments on the high-level architecture and API.
      
      I didn't optimize for performance at all. Leaving some low-hanging fruit so that we can be happy when we fix them! :)
      
      Currently, a bunch of features are not supported at all: indexes can only be specified when creating the database, and there is no query planner whatsoever. These will all be added in due time. (A toy illustration of the intended insert/query flow follows at the end of this entry.)
      
      Test Plan: Added a simple unit test
      
      Reviewers: haobo, yhchiang, dhruba, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18747
      f0a8be25
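      To make the intended usage concrete, here is a toy, in-memory stand-in for the
      insert/query flow and the "indexes only at creation time" limitation; it does not
      use the actual classes from this diff, and all names below are illustrative:

        #include <cstddef>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // Toy stand-in for a JSON document: flat string->string fields only.
        using Doc = std::map<std::string, std::string>;

        // Minimal document store with a single secondary index that must be
        // declared up front, and no query planner.
        class ToyDocumentDB {
         public:
          explicit ToyDocumentDB(std::string indexed_field)
              : indexed_field_(std::move(indexed_field)) {}

          void Insert(const Doc& doc) {
            auto id = docs_.size();
            docs_.push_back(doc);
            auto it = doc.find(indexed_field_);
            if (it != doc.end()) {
              index_[it->second].push_back(id);  // maintain the secondary index
            }
          }

          // Query by the indexed field only.
          std::vector<Doc> Query(const std::string& value) const {
            std::vector<Doc> result;
            auto it = index_.find(value);
            if (it == index_.end()) return result;
            for (std::size_t id : it->second) result.push_back(docs_[id]);
            return result;
          }

         private:
          std::string indexed_field_;
          std::vector<Doc> docs_;
          std::map<std::string, std::vector<std::size_t>> index_;
        };

        int main() {
          ToyDocumentDB db("city");
          db.Insert({{"name", "alice"}, {"city", "nyc"}});
          db.Insert({{"name", "bob"}, {"city", "sf"}});
          for (const Doc& d : db.Query("nyc")) {
            std::cout << d.at("name") << "\n";  // prints: alice
          }
          return 0;
        }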
  6. 09 Jul 2014, 1 commit
    • generic rate limiter · 5ef1ba7f
      Lei Jin committed
      Summary:
      A generic rate limiter that can be shared by threads and RocksDB
      instances. Will use this to smooth out write traffic generated by
      compaction and flush. This will help us get better p99 behavior on flash
      storage. (A usage sketch follows at the end of this entry.)
      
      Test Plan:
      unit test output
      ==== Test RateLimiterTest.Rate
      request size [1 - 1023], limit 10 KB/sec, actual rate: 10.374969 KB/sec, elapsed 2002265
      request size [1 - 2047], limit 20 KB/sec, actual rate: 20.771242 KB/sec, elapsed 2002139
      request size [1 - 4095], limit 40 KB/sec, actual rate: 41.285299 KB/sec, elapsed 2202424
      request size [1 - 8191], limit 80 KB/sec, actual rate: 81.371605 KB/sec, elapsed 2402558
      request size [1 - 16383], limit 160 KB/sec, actual rate: 162.541268 KB/sec, elapsed 3303500
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D19359
      5ef1ba7f
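      A short usage sketch of sharing one limiter across DB instances. The public header
      path, the NewGenericRateLimiter() factory, and the Options::rate_limiter hook shown
      below reflect how this was later exposed; whether they are part of this particular
      diff is an assumption.

        #include <memory>
        #include "rocksdb/db.h"
        #include "rocksdb/options.h"
        #include "rocksdb/rate_limiter.h"

        int main() {
          // One limiter shared by every DB that is handed this Options object:
          // it caps the total bytes/sec written by flushes and compactions.
          std::shared_ptr<rocksdb::RateLimiter> limiter(
              rocksdb::NewGenericRateLimiter(10 << 20));  // 10 MB/sec

          rocksdb::Options options;
          options.create_if_missing = true;
          options.rate_limiter = limiter;  // the same pointer can be reused across DBs

          rocksdb::DB* db = nullptr;
          rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rate_limited_db", &db);
          delete db;
          return s.ok() ? 0 : 1;
        }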
  7. 23 Jun 2014, 1 commit
  8. 20 Jun 2014, 1 commit
    • JSONDocument · 00b26c3a
      Igor Canadi committed
      Summary:
      After evaluating options for JSON storage, I decided to implement our own. The reason is that we'll be able to optimize it better and we get to reduce unnecessary dependencies (which is what we'd get with folly).
      
      I also plan to write a serializer/deserializer for JSONDocument with our own binary format similar to BSON. That way we'll store binary JSON format in RocksDB instead of the plain-text JSON. This means less storage and faster deserialization.
      
      There are still some inefficiencies left here. I plan to optimize them after we develop a functioning DocumentDB, so that we can move and iterate faster. (A toy length-prefixed encoding sketch follows at the end of this entry.)
      
      Test Plan: added a unit test
      
      Reviewers: dhruba, haobo, sdong, ljin, yhchiang
      
      Reviewed By: haobo
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18831
      00b26c3a
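      A self-contained toy of the length-prefixed binary idea (in the spirit of BSON):
      it encodes only flat string->string documents in host byte order and is not the
      JSONDocument serializer itself:

        #include <cstdint>
        #include <cstring>
        #include <iostream>
        #include <map>
        #include <string>

        // Append a 32-bit length (host byte order, for brevity) followed by raw bytes.
        void PutLengthPrefixed(std::string* out, const std::string& s) {
          uint32_t len = static_cast<uint32_t>(s.size());
          out->append(reinterpret_cast<const char*>(&len), sizeof(len));
          out->append(s);
        }

        std::string GetLengthPrefixed(const std::string& in, std::size_t* pos) {
          uint32_t len = 0;
          std::memcpy(&len, in.data() + *pos, sizeof(len));
          *pos += sizeof(len);
          std::string s = in.substr(*pos, len);
          *pos += len;
          return s;
        }

        // Encode a flat document as: field count, then (key, value) pairs.
        std::string Encode(const std::map<std::string, std::string>& doc) {
          std::string out;
          uint32_t count = static_cast<uint32_t>(doc.size());
          out.append(reinterpret_cast<const char*>(&count), sizeof(count));
          for (const auto& kv : doc) {
            PutLengthPrefixed(&out, kv.first);
            PutLengthPrefixed(&out, kv.second);
          }
          return out;
        }

        std::map<std::string, std::string> Decode(const std::string& in) {
          std::map<std::string, std::string> doc;
          std::size_t pos = 0;
          uint32_t count = 0;
          std::memcpy(&count, in.data(), sizeof(count));
          pos += sizeof(count);
          for (uint32_t i = 0; i < count; ++i) {
            std::string k = GetLengthPrefixed(in, &pos);
            std::string v = GetLengthPrefixed(in, &pos);
            doc[k] = v;
          }
          return doc;
        }

        int main() {
          std::string blob = Encode({{"name", "rocksdb"}, {"type", "kv"}});
          std::map<std::string, std::string> doc = Decode(blob);
          std::cout << doc["name"] << "\n";  // prints: rocksdb
          return 0;
        }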
  9. 24 May 2014, 1 commit
  10. 11 May 2014, 1 commit
  11. 08 May 2014, 1 commit
    • Better INSTALL.md and Makefile rules · 313b2e5d
      Igor Canadi committed
      Summary: We have a lot of problems with gflags. However, when compiling the rocksdb static library, we don't need the gflags dependency. Reorganized INSTALL.md so that first-time users don't need any dependencies installed to build the rocksdb static library.
      
      Test Plan: none
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18501
      313b2e5d
  12. 06 May 2014, 1 commit
    • log_and_apply_bench on a new benchmark framework · d2569fea
      Igor Canadi committed
      Summary:
      db_test includes a benchmark for LogAndApply. This diff removes it from db_test and puts it into a separate log_and_apply_bench. I mainly wanted to play around with our new benchmark framework and figure out how it works.
      
      I would also like to show you how great it is! I believe the right set of microbenchmarks can speed up our productivity a lot and help catch regressions early. (A simplified harness sketch follows at the end of this entry.)
      
      Test Plan: no
      
      Reviewers: dhruba, haobo, sdong, ljin, yhchiang
      
      Reviewed By: yhchiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18261
      d2569fea
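      A hypothetical, much-simplified sketch of what such a microbenchmark looks like;
      the real benchharness presumably exposes a macro-based interface like
      folly/Benchmark.h, and RunBenchmark below is an illustrative name, not the
      framework's API:

        #include <chrono>
        #include <cstdint>
        #include <iostream>

        // Time a callable over N iterations and report the average cost per call.
        template <typename Func>
        void RunBenchmark(const char* name, uint64_t iters, Func&& fn) {
          auto start = std::chrono::steady_clock::now();
          for (uint64_t i = 0; i < iters; ++i) {
            fn();
          }
          auto end = std::chrono::steady_clock::now();
          double ns = static_cast<double>(
              std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count());
          std::cout << name << ": " << ns / iters << " ns/iter\n";
        }

        int main() {
          volatile uint64_t sink = 0;
          RunBenchmark("increment", 10 * 1000 * 1000, [&] { sink = sink + 1; });
          return 0;
        }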
  13. 05 May 2014, 1 commit
  14. 30 Apr 2014, 3 commits
  15. 29 Apr 2014, 1 commit
  16. 26 Apr 2014, 2 commits
  17. 25 Apr 2014, 1 commit
  18. 22 Apr 2014, 4 commits
    • Single-threaded asan_crash_test · d0939cdc
      Igor Canadi committed
      d0939cdc
    • Rename "benchmark" back to "bench". · 8dc34364
      Igor Canadi committed
      Also, `benchharness.cc` is no longer compiled into the rocksdb library.
      8dc34364
    • Added benchmark functionality on the lines of folly/Benchmark.h · ff1b5df4
      Pratyush Seth committed
      Summary: Added benchmark functionality along the lines of folly/Benchmark.h.
      
      Test Plan: Added unit tests
      
      Reviewers: igor, haobo, sdong, ljin, yhchiang, dhruba
      
      Reviewed By: igor
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D17973
      ff1b5df4
    • hints for narrowing down FindFile range and avoiding checking irrelevant L0 files · 0f2d7681
      Lei Jin committed
      Summary:
      The file tree structure in Version is prebuilt and the range of each file is known.
      On the Get() code path, we do a binary search in FindFile() by comparing the
      target key with each file's largest key, and we also check the range of each L0 file.
      With some pre-calculated knowledge, each key comparison that has already been done can serve
      as a hint to narrow down further searches:
      (1) If a key falls within an L0 file's range, we can safely skip the next
      file if its range does not overlap with the current one.
      (2) If a key falls within a file's range on levels L0 to Ln-1, we only
      need to binary search the next level for files that overlap with the current one.
      
      (1) will be able to skip some files depending on the key distribution.
      (2) can greatly reduce the range of the binary search, especially for bottom
      levels, given that one file most likely only overlaps with N files from
      the level below (where N is max_bytes_for_level_multiplier). So on level
      L, we will only look at ~N files instead of N^L files.
      
      Some initial results: measured with a 500M-key DB, when writes are light (10k/s = 1.2M/s), this
      improves QPS by ~7% on top of blocked bloom. When writes are heavier (80k/s =
      9.6M/s), it gives us a ~13% improvement. (A simplified sketch of hint (2) follows at the end of this entry.)
      
      Test Plan: make all check
      
      Reviewers: haobo, igor, dhruba, sdong, yhchiang
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D17205
      0f2d7681
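      A simplified sketch of hint (2): once the target key has matched a file on the
      current level, the binary search on the next level can be restricted to the files
      overlapping that file's key range (FileMeta and NarrowNextLevel are illustrative
      names, not the real Version code):

        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        struct FileMeta {
          std::string smallest;
          std::string largest;
        };

        // Binary search (within [lo, hi)) for the first file whose largest key
        // is >= key; files on a level are sorted and non-overlapping.
        std::size_t FindFile(const std::vector<FileMeta>& files, const std::string& key,
                             std::size_t lo, std::size_t hi) {
          while (lo < hi) {
            std::size_t mid = lo + (hi - lo) / 2;
            if (files[mid].largest < key) {
              lo = mid + 1;
            } else {
              hi = mid;
            }
          }
          return lo;
        }

        // Narrow the candidate range on the next level to files that can overlap
        // the key range of the file matched on the current level.
        std::pair<std::size_t, std::size_t> NarrowNextLevel(
            const std::vector<FileMeta>& next_level, const FileMeta& current) {
          std::size_t lo = FindFile(next_level, current.smallest, 0, next_level.size());
          std::size_t hi = FindFile(next_level, current.largest, lo, next_level.size());
          if (hi < next_level.size()) ++hi;  // include the file holding 'largest'
          return {lo, hi};
        }

        int main() {
          std::vector<FileMeta> next_level = {{"a", "c"}, {"d", "f"}, {"g", "k"}, {"l", "p"}};
          FileMeta current = {"e", "h"};  // the file matched on the level above
          std::pair<std::size_t, std::size_t> r = NarrowNextLevel(next_level, current);
          std::cout << "search files [" << r.first << ", " << r.second << ")\n";  // [1, 3)
          return 0;
        }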
  19. 19 Apr 2014, 1 commit
  20. 18 Apr 2014, 2 commits
  21. 17 Apr 2014, 1 commit
  22. 16 Apr 2014, 6 commits
  23. 11 Apr 2014, 1 commit
  24. 09 Apr 2014, 2 commits
    • [JNI] Add an initial benchmark for java binding for rocksdb. · 0f5cbcd7
      Yueh-Hsuan Chiang committed
      Summary:
      * Add a benchmark for the Java binding for rocksdb.  The Java benchmark
        is a complete rewrite based on the C++ db/db_bench.cc and the
        DbBenchmark in dain's Java leveldb.
      * Support multithreading.
      * 'readseq' is currently not supported, as it requires a RocksDB Iterator.
      
      * usage:
      
        --benchmarks
          Comma-separated list of operations to run in the specified order
              Actual benchmarks:
                      fillseq    -- write N values in sequential key order in async mode
                      fillrandom -- write N values in random key order in async mode
                      fillbatch  -- write N/1000 batch where each batch has 1000 values
                                    in random key order in sync mode
                      fillsync   -- write N/100 values in random key order in sync mode
                      fill100K   -- write N/1000 100K values in random order in async mode
                      readseq    -- read N times sequentially
                      readrandom -- read N times in random order
                      readhot    -- read N times in random order from 1% section of DB
              Meta Operations:
                      delete     -- delete DB
          DEFAULT: [fillseq, readrandom, fillrandom]
      
        --compression_ratio
          Arrange to generate values that shrink to this fraction of
              their original size after compression
          DEFAULT: 0.5
      
        --use_existing_db
          If true, do not destroy the existing database.  If you set this
              flag and also specify a benchmark that wants a fresh database,  that benchmark will fail.
          DEFAULT: false
      
        --num
          Number of key/values to place in database.
          DEFAULT: 1000000
      
        --threads
          Number of concurrent threads to run.
          DEFAULT: 1
      
        --reads
          Number of read operations to do.  If negative, do --nums reads.
      
        --key_size
          The size of each key in bytes.
          DEFAULT: 16
      
        --value_size
          The size of each value in bytes.
          DEFAULT: 100
      
        --write_buffer_size
          Number of bytes to buffer in memtable before compacting
              (initialized to default value by 'main'.)
          DEFAULT: 4194304
      
        --cache_size
          Number of bytes to use as a cache of uncompressed data.
              Negative means use default settings.
          DEFAULT: -1
      
        --seed
          Seed base for random number generators.
          DEFAULT: 0
      
        --db
          Use the db with the following name.
          DEFAULT: /tmp/rocksdbjni-bench
      
      * Add RocksDB.write().
      
      Test Plan: make jbench
      
      Reviewers: haobo, sdong, dhruba, ankgup87
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D17433
      0f5cbcd7
    • macros for perf_context · 92c1eb02
      Lei Jin committed
      Summary: This will allow us to disable them completely, either for iOS or for better performance. (An illustrative pattern follows at the end of this entry.)
      
      Test Plan: will run make all check
      
      Reviewers: igor, haobo, dhruba
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D17511
      92c1eb02
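      An illustrative compile-time pattern for such macros (the names ToyPerfContext,
      TOY_PERF_COUNTER_ADD, and DISABLE_PERF_CONTEXT are assumptions, not the actual
      identifiers used in the perf_context code):

        #include <cstdint>
        #include <iostream>

        struct ToyPerfContext {
          uint64_t user_key_comparison_count = 0;
          uint64_t block_read_count = 0;
        };

        static ToyPerfContext toy_perf_context;

        #if defined(DISABLE_PERF_CONTEXT)
          // Compiled out entirely: zero overhead, e.g. for an iOS build.
          #define TOY_PERF_COUNTER_ADD(metric, value) do {} while (0)
        #else
          #define TOY_PERF_COUNTER_ADD(metric, value) \
            do { toy_perf_context.metric += (value); } while (0)
        #endif

        int main() {
          TOY_PERF_COUNTER_ADD(user_key_comparison_count, 3);
          TOY_PERF_COUNTER_ADD(block_read_count, 1);
          std::cout << toy_perf_context.user_key_comparison_count << " "
                    << toy_perf_context.block_read_count << "\n";  // 3 1 (0 0 if disabled)
          return 0;
        }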
  25. 05 Apr 2014, 1 commit