1. 03 October 2014 (2 commits)
  2. 01 October 2014 (2 commits)
  3. 30 September 2014 (3 commits)
    • C
      add centos 5.6 build instead of ubuntu. · 0b923f0f
      Chris Riccomini authored
      0b923f0f
    • M
      Package generation for Ubuntu and CentOS · ee1f3ccb
      mike@arpaia.co authored
      Summary:
      I put together a script to assist in the generation of debs and
      rpms. I've tested that this works on Ubuntu via Vagrant. I've included the
      Vagrantfile here, but I can remove it if it's not useful. The package.sh
      script should work on any Ubuntu or CentOS machine; I just added a bit of
      logic in there to allow a base Ubuntu or CentOS machine to be able to build
      RocksDB from scratch.
      
      Example output on Ubuntu 14.04:
      
      ```
      root@vagrant-ubuntu-trusty-64:/vagrant# ./tools/package.sh
      [+] g++-4.7 is already installed. skipping.
      [+] libgflags-dev is already installed. skipping.
      [+] ruby-all-dev is already installed. skipping.
      [+] fpm is already installed. skipping.
      Created package {:path=>"rocksdb_3.5_amd64.deb"}
      root@vagrant-ubuntu-trusty-64:/vagrant# dpkg --info rocksdb_3.5_amd64.deb
       new debian package, version 2.0.
       size 17392022 bytes: control archive=1518 bytes.
           275 bytes,    11 lines      control
          2911 bytes,    38 lines      md5sums
       Package: rocksdb
       Version: 3.5
       License: BSD
       Vendor: Facebook
       Architecture: amd64
       Maintainer: rocksdb@fb.com
       Installed-Size: 83358
       Section: default
       Priority: extra
       Homepage: http://rocksdb.org/
       Description: RocksDB is an embeddable persistent key-value store for fast storage.
       ```
      
       Example output on CentOS 6.5:
      
       ```
       [root@localhost vagrant]# rpm -qip rocksdb-3.5-1.x86_64.rpm
       Name        : rocksdb                      Relocations: /usr
       Version     : 3.5                               Vendor: Facebook
       Release     : 1                             Build Date: Mon 29 Sep 2014 01:26:11 AM UTC
       Install Date: (not installed)               Build Host: localhost
       Group       : default                       Source RPM: rocksdb-3.5-1.src.rpm
       Size        : 96231106                         License: BSD
       Signature   : (none)
       Packager    : rocksdb@fb.com
       URL         : http://rocksdb.org/
       Summary     : RocksDB is an embeddable persistent key-value store for fast storage.
       Description :
       RocksDB is an embeddable persistent key-value store for fast storage.
       ```
      
      Test Plan:
      How this gets used is really up to the RocksDB core team. If you
      want to actually get this into mainline, you might have to change `make
      install` such that it installs the RocksDB shared object file as well, which
      would require you to link against gflags (maybe?) and that would require some
      potential modifications to the script here (basically adding a dependency on
      that package).

      Currently, this will install the headers and a pre-compiled statically linked
      object file. If that's what you want out of life, then this requires no
      modifications.
      
      Reviewers: ljin, yhchiang, igor
      
      Reviewed By: igor
      
      Differential Revision: https://reviews.facebook.net/D24141
      ee1f3ccb
    • L
      use GetContext to replace callback function pointer · 2faf49d5
      Lei Jin authored
      Summary:
      Instead of passing a callback function pointer and its argument on the
      Table::Get() interface, pass a GetContext. This makes the interface cleaner
      and possibly improves performance. Also adds a fast path for SaveValue().
      
      Test Plan: make all check
      
      Reviewers: igor, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D24057
      2faf49d5
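      The callback-to-context refactor described above can be sketched as follows. This is a toy illustration of the pattern only, not RocksDB's actual GetContext or Table::Get() signatures; LookupContext and TableGet are made-up names:

      ```cpp
      #include <cassert>
      #include <string>
      #include <utility>

      // Instead of threading a callback function pointer plus a void* argument
      // through the table lookup path, the lookup state lives in a context
      // object that the table reader drives.
      class LookupContext {
       public:
        explicit LookupContext(std::string user_key)
            : user_key_(std::move(user_key)) {}

        // Called by the table reader for each candidate entry. Returns false
        // once the lookup is satisfied, so the scan can stop early (the "fast
        // path" the summary mentions).
        bool SaveValue(const std::string& key, const std::string& value) {
          if (key == user_key_) {
            value_ = value;
            found_ = true;
            return false;  // stop scanning
          }
          return true;  // keep scanning
        }

        bool found() const { return found_; }
        const std::string& value() const { return value_; }

       private:
        std::string user_key_;
        std::string value_;
        bool found_ = false;
      };

      // A toy "table" that drives the context instead of invoking a raw callback.
      inline void TableGet(const std::pair<std::string, std::string>* entries,
                           int n, LookupContext* ctx) {
        for (int i = 0; i < n; ++i) {
          if (!ctx->SaveValue(entries[i].first, entries[i].second)) {
            return;
          }
        }
      }
      ```

      The state that previously rode along as a void* argument is now owned by the context, so new lookup state can be added without changing the interface.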
  4. 27 September 2014 (1 commit)
  5. 16 September 2014 (1 commit)
  6. 12 September 2014 (1 commit)
    • Y
      Add make install · ebb5c65e
      Yueh-Hsuan Chiang authored
      Summary:
      Add make install.  If INSTALL_PATH is not set, then rocksdb will be
      installed under the "/usr/local" directory (/usr/local/include for headers
      and /usr/local/lib for library file(s).)
      
      Test Plan:
      Develop a simple rocksdb app, called test.cc, and do the following:
      
      make clean
      make static_lib -j32
      sudo make install
      g++ -std=c++11 test.cc -lrocksdb -lbz2 -lz -o test
      ./test
      
      sudo make uninstall
      make clean
      make shared_lib -j32
      sudo make install
      g++ -std=c++11 test.cc -lrocksdb -lbz2 -lz -o test
      ./test
      
      make INSTALL_PATH=/tmp/path install
      make INSTALL_PATH=/tmp/path uninstall
      and make sure things are installed / uninstalled in the specified path.
      
      Reviewers: ljin, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D23211
      ebb5c65e
  7. 09 September 2014 (4 commits)
    • I
      Merger test · 6bb7e3ef
      Igor Canadi authored
      Summary: I abandoned https://reviews.facebook.net/D18789, but I wrote a good unit test there, so let's check it in. :)
      
      Test Plan: this is a test
      
      Reviewers: sdong, yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22827
      6bb7e3ef
    • N
      Addressing review comments · 1d284db2
      Naveen authored
      1d284db2
    • I
      Push- instead of pull-model for managing Write stalls · a2bb7c3c
      Igor Canadi authored
      Summary:
      Introducing WriteController, which is a source of truth about per-DB write delays. Let's define a DB epoch as a period during which there are no flushes or compactions (i.e. a new epoch starts when a flush or compaction finishes). Each epoch can either:
      * proceed with all writes without delay
      * delay all writes by fixed time
      * stop all writes
      
      The three modes are recomputed at each epoch change (flush, compaction), rather than on every write (which is currently the case).
      
      When we have a lot of column families, our current pull behavior adds a big overhead, since we need to loop over every column family for every write. With the new push model, the overhead on the Write code path is minimal.
      
      This is just the start. Next step is to also take care of stalls introduced by slow memtable flushes. The final goal is to eliminate function MakeRoomForWrite(), which currently needs to be called for every column family by every write.
      
      Test Plan: make check for now. I'll add some unit tests later. Also, perf test.
      
      Reviewers: dhruba, yhchiang, MarkCallaghan, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22791
      a2bb7c3c
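      The epoch-recomputed push model can be sketched roughly like this. WriteController's real interface is not shown in the commit, so the class below, its thresholds, and its method names are all illustrative:

      ```cpp
      #include <cassert>
      #include <cstdint>

      // Three write modes, recomputed once per epoch change instead of on
      // every write.
      enum class WriteMode { kProceed, kDelay, kStop };

      class WriteController {
       public:
        // Called at epoch boundaries (after a flush or compaction finishes).
        // The L0-file thresholds here are made-up example values.
        void RecomputeOnEpochChange(int l0_files) {
          if (l0_files >= 12) {
            mode_ = WriteMode::kStop;
            delay_us_ = 0;
          } else if (l0_files >= 8) {
            mode_ = WriteMode::kDelay;
            delay_us_ = 1000;  // delay all writes by a fixed time
          } else {
            mode_ = WriteMode::kProceed;
            delay_us_ = 0;
          }
        }

        // The write path only reads the cached decision -- no per-write loop
        // over all column families.
        WriteMode mode() const { return mode_; }
        uint64_t delay_micros() const { return delay_us_; }

       private:
        WriteMode mode_ = WriteMode::kProceed;
        uint64_t delay_us_ = 0;
      };
      ```

      The point of the design is the call pattern: the expensive decision runs only when a flush or compaction finishes, while every write does a cheap read of the cached mode.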
    • F
      Implement full filter for block based table. · 0af157f9
      Feng Zhu authored
      Summary:
      1. Make filter_block.h a base class. Derive block_based_filter_block and full_filter_block. The former is the traditional filter block; full_filter_block is newly added. It generates a filter block that contains all the keys in the SST file.

      2. When querying a key, the table first checks if a full_filter is available. If not, it goes to the exact data block and checks using the block_based filter.

      3. Users can choose to use full_filter or the traditional block_based_filter. They are stored in the SST file under different meta index names, "filter.filter_policy" or "full_filter.filter_policy", so the table reader is able to know the filter block type.
      
      4. Some optimizations have been done for full_filter_block, thus it requires a different interface compared to the original one in filter_policy.h.
      
      5. Actual implementation of filter bits coding/decoding is placed in util/bloom_impl.cc
      
      Benchmark: base commit 1d23b5c4
      Command:
      db_bench --db=/dev/shm/rocksdb --num_levels=6 --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 --write_buffer_size=134217728 --max_write_buffer_number=2 --target_file_size_base=33554432 --max_bytes_for_level_base=1073741824 --verify_checksum=false --max_background_compactions=4 --use_plain_table=0 --memtablerep=prefix_hash --open_files=-1 --mmap_read=1 --mmap_write=0 --bloom_bits=10 --bloom_locality=1 --memtable_bloom_bits=500000 --compression_type=lz4 --num=393216000 --use_hash_search=1 --block_size=1024 --block_restart_interval=16 --use_existing_db=1 --threads=1 --benchmarks=readrandom --disable_auto_compactions=1
      Read QPS increased by about 30%, from 2230002 to 2991411.
      
      Test Plan:
      make all check
      valgrind db_test
      db_stress --use_block_based_filter = 0
      ./auto_sanity_test.sh
      
      Reviewers: igor, yhchiang, ljin, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D20979
      0af157f9
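      A minimal sketch of the full-filter idea: one filter covering every key in the file, consulted before any data block is read (contrast with the per-data-block filters of the traditional scheme). The Bloom hashing and sizing below are toy placeholders, not the coding in util/bloom_impl.cc:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>
      #include <vector>

      class FullFilter {
       public:
        explicit FullFilter(size_t bits)
            : bits_(bits ? bits : 1), bitmap_((bits_ + 7) / 8, 0) {}

        // Builder side: every key in the SST file is added to the one filter.
        void AddKey(const std::string& key) {
          uint64_t h = Hash(key);
          uint64_t delta = (h >> 17) | 1;  // odd, so probes differ
          for (int i = 0; i < kNumProbes; ++i) {
            uint64_t bit = (h + i * delta) % bits_;
            bitmap_[bit / 8] |= static_cast<uint8_t>(1u << (bit % 8));
          }
        }

        // Reader side: may return true for keys never added (false positive),
        // but never returns false for a key that was added.
        bool MayContain(const std::string& key) const {
          uint64_t h = Hash(key);
          uint64_t delta = (h >> 17) | 1;
          for (int i = 0; i < kNumProbes; ++i) {
            uint64_t bit = (h + i * delta) % bits_;
            if (!(bitmap_[bit / 8] & (1u << (bit % 8)))) return false;
          }
          return true;
        }

       private:
        static const int kNumProbes = 6;
        static uint64_t Hash(const std::string& s) {
          uint64_t h = 1469598103934665603ull;  // FNV-1a
          for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
          return h;
        }
        size_t bits_;
        std::vector<uint8_t> bitmap_;
      };
      ```

      Because the filter covers the whole file, a negative answer lets the reader skip the file without touching any data block, which is where the QPS gain in the benchmark above comes from.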
  8. 06 September 2014 (1 commit)
  9. 27 August 2014 (1 commit)
  10. 19 August 2014 (2 commits)
    • S
      WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index · 28b5c760
      sdong authored
      Summary:
      Add WriteBatchWithIndex so that a user can query data out of a WriteBatch, to support MongoDB's read-its-own-write.
      
      WriteBatchWithIndex uses a skiplist to store the binary index. The index stores the offset of each entry in the write batch. When searching for a key, the entry's key is read from the write batch at that offset.

      Define a new iterator class for querying data out of WriteBatchWithIndex. A user can create an iterator of the write batch for one column family, seek to a key and keep calling Next() to see subsequent entries.
      
      I will add more unit tests if people are OK about this API.
      
      Test Plan:
      make all check
      Add unit tests.
      
      Reviewers: yhchiang, igor, MarkCallaghan, ljin
      
      Reviewed By: ljin
      
      Subscribers: dhruba, leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D21381
      28b5c760
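      The offset-index design described in the summary can be sketched like this, with a std::map standing in for the skiplist. This illustrates the layout idea only (keys and values live once in the flat batch buffer; the index holds only offsets) and is not the WriteBatchWithIndex API:

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <cstring>
      #include <map>
      #include <string>

      class IndexedBatch {
       public:
        void Put(const std::string& key, const std::string& value) {
          size_t offset = rep_.size();
          AppendString(key);
          AppendString(value);
          index_[key] = offset;  // a later write to the same key wins
        }

        // Read-your-own-writes: look up the offset, then re-read the entry
        // from the batch buffer at that offset.
        bool Get(const std::string& key, std::string* value) const {
          auto it = index_.find(key);
          if (it == index_.end()) return false;
          size_t pos = it->second;
          ReadString(&pos);           // skip the stored key
          *value = ReadString(&pos);  // the stored value
          return true;
        }

       private:
        // Length-prefixed encoding of one string into the batch buffer.
        void AppendString(const std::string& s) {
          size_t len = s.size();
          rep_.append(reinterpret_cast<const char*>(&len), sizeof(len));
          rep_.append(s);
        }
        std::string ReadString(size_t* pos) const {
          size_t len;
          std::memcpy(&len, rep_.data() + *pos, sizeof(len));
          *pos += sizeof(len);
          std::string s = rep_.substr(*pos, len);
          *pos += len;
          return s;
        }
        std::string rep_;                      // the flat write-batch buffer
        std::map<std::string, size_t> index_;  // key -> offset into rep_
      };
      ```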
    • N
      RocksDB static build · ddb8039e
      Naveen authored
      Makefile changes to download and build the dependencies.
      Load the shared library when RocksDB is initialized.
      ddb8039e
  11. 13 August 2014 (1 commit)
  12. 12 August 2014 (2 commits)
    • R
      Integrating Cuckoo Hash SST Table format into RocksDB · 9674c11d
      Radheshyam Balasundaram authored
      Summary:
      Contains the following changes:
      - Implementation of cuckoo_table_factory
      - Adding cuckoo table into AdaptiveTableFactory
      - Adding cuckoo_table_db_test, along the lines of plain_table_db_test
      - Minor fixes to Reader: When a key is found in the table, return the key found instead of the search key.
      - Minor fixes to Builder: Add table properties that are required by Version::UpdateTemporaryStats() during Get operation. Don't define curr_node as a reference variable as the memory locations may get reassigned during tree.push_back operation, leading to invalid memory access.
      
      Test Plan:
      cuckoo_table_reader_test --enable_perf
      cuckoo_table_builder_test
      cuckoo_table_db_test
      make check all
      make valgrind_check
      make asan_check
      
      Reviewers: sdong, igor, yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D21219
      9674c11d
    • M
      Changes to support unity build: · 93e6b5e9
      miguelportilla authored
      * Script for building the unity.cc file via Makefile
      * Unity executable Makefile target for testing builds
      * Source code changes to fix compilation of unity build
      93e6b5e9
  13. 26 July 2014 (1 commit)
    • R
      Implementation of CuckooTableReader · 62f9b071
      Radheshyam Balasundaram authored
      Summary:
      Contains:
      - Implementation of TableReader based on Cuckoo Hashing
      - Unit tests for CuckooTableReader
      - Performance test for TableReader
      
      Test Plan:
      make cuckoo_table_reader_test
      ./cuckoo_table_reader_test
      make valgrind_check
      make asan_check
      
      Reviewers: yhchiang, sdong, igor, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D20511
      62f9b071
  14. 24 July 2014 (2 commits)
    • I
      SpatialDB · 62963304
      Igor Canadi authored
      Summary:
      This diff is adding spatial index support to RocksDB.
      
      When creating the DB, the user specifies a list of spatial indexes. Spatial indexes can cover different areas and have different resolutions (i.e. number of tiles). This is useful for supporting different zoom levels.

      Each element inserted into SpatialDB has:
      * a bounding box, which determines how the element will be indexed
      * a string blob, which will usually be the WKB representation of the polygon (http://en.wikipedia.org/wiki/Well-known_text)
      * a feature set, which is a map of key-value pairs, where a value can be an int, double, bool, null or a string. A FeatureSet will be a set of tags associated with geo elements (for example, 'road': 'highway' and similar)
      * a list of indexes to insert the element in. For example, a small river element will be inserted in the index for a high zoom level, while a country border will be inserted in all indexes (including the index for the low zoom level).

      Each query is executed on a single spatial index. A query guarantees that it will return all elements intersecting the specified bounding box, but it might also return some extra non-intersecting elements.

      Test Plan: Added a bunch of unit tests in spatial_db_test
      
      Reviewers: dhruba, yinwang
      
      Reviewed By: yinwang
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D20361
      62963304
    • Y
      [Java] Add the missing ROCKSDB_JAR variable in Makefile · b5c4c0b8
      Yueh-Hsuan Chiang authored
      Summary:
      Add the missing ROCKSDB_JAR variable in Makefile, which was mistakenly
      removed in https://reviews.facebook.net/D20289.
      
      Test Plan:
      export ROCKSDB_JAR=
      make rocksdbjava
      b5c4c0b8
  15. 23 July 2014 (2 commits)
    • I
      Also bump version in Makefile · f82d4a24
      Igor Canadi authored
      f82d4a24
    • S
      Add a utility function to guess optimized options based on constraints · e6de0210
      sdong authored
      Summary:
      Add a function GetOptions() which, based on four parameters users give (read/write amplification thresholds, memory budget for memtables, and target DB size), picks a compaction style and parameters for it. Background threads are not touched yet.

      One limit of this algorithm: since the compression rate and key/value sizes are hard to predict, it's hard to predict the level 0 file size from the write buffer size. We simply assume a 1:1 ratio here.
      
      Sample results: https://reviews.facebook.net/P477
      
      Test Plan: Will add a unit test where sample scenarios are given and verify that the picked results make sense
      
      Reviewers: yhchiang, dhruba, haobo, igor, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18741
      e6de0210
  16. 22 July 2014 (2 commits)
    • Y
      Fixed some make and linking issues of RocksDBJava · ae7743f2
      Yueh-Hsuan Chiang authored
      Summary:
      Fixed some make and linking issues of RocksDBJava. Specifically:
      * Add JAVA_LDFLAGS, which does not include gflags
      * rocksdbjava library now uses JAVA_LDFLAGS instead of LDFLAGS
      * java/Makefile now includes build_config.mk
      * rearrange make rocksdbjava workflow to ensure the library file is correctly
        included in the jar file.
      
      Test Plan:
      make rocksdbjava
      make jdb_bench
      java/jdb_bench.sh
      
      Reviewers: dhruba, swapnilghike, zzbennett, rsumbaly, ankgup87
      
      Reviewed By: ankgup87
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D20289
      ae7743f2
    • R
      Adding a new SST table builder based on Cuckoo Hashing · cf3da899
      Radheshyam Balasundaram authored
      Summary:
      Cuckoo Hashing based SST table builder. Contains:
      - Cuckoo Hashing logic and file storage logic.
      - Unit tests for logic
      
      Test Plan:
      make cuckoo_table_builder_test
      ./cuckoo_table_builder_test
      make check all
      
      Reviewers: yhchiang, igor, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D19545
      cf3da899
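      The cuckoo-hashing insertion that this builder is based on can be sketched as follows. The hash functions, table sizing, and kick bound are made-up placeholders, unrelated to the actual cuckoo_table_builder code (which also handles file storage):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>
      #include <utility>
      #include <vector>

      // Each key has two candidate buckets; on collision, the resident key is
      // evicted to its alternate bucket, repeating up to a bound. An empty
      // string marks a free slot, so empty keys cannot be stored in this toy.
      class CuckooTable {
       public:
        explicit CuckooTable(size_t buckets) : slots_(buckets) {}

        bool Insert(std::string key) {
          size_t b = Bucket(key, 0);
          for (int kicks = 0; kicks < 64; ++kicks) {
            if (slots_[b].empty()) {
              slots_[b] = std::move(key);
              return true;
            }
            std::swap(key, slots_[b]);            // evict the resident key
            size_t b0 = Bucket(key, 0);
            b = (b == b0) ? Bucket(key, 1) : b0;  // send it to its other bucket
          }
          return false;  // a real builder would resize/rehash here
        }

        // Lookup probes at most two buckets -- the property that makes cuckoo
        // tables attractive for point reads.
        bool Contains(const std::string& key) const {
          return slots_[Bucket(key, 0)] == key || slots_[Bucket(key, 1)] == key;
        }

       private:
        size_t Bucket(const std::string& key, int which) const {
          // Two FNV-style hashes distinguished by seed; toy placeholders.
          uint64_t h = 1469598103934665603ull +
                       static_cast<uint64_t>(which) * 0x9e3779b97f4a7c15ull;
          for (unsigned char c : key) { h ^= c; h *= 1099511628211ull; }
          return h % slots_.size();
        }
        std::vector<std::string> slots_;
      };
      ```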
  17. 18 July 2014 (1 commit)
  18. 17 July 2014 (1 commit)
  19. 11 July 2014 (2 commits)
    • S
      Update master to version 3.3 · 01700b69
      sdong authored
      Summary: As titled
      
      Test Plan: no need
      
      Reviewers: igor, yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: haobo, dhruba, xjin, leveldb
      
      Differential Revision: https://reviews.facebook.net/D19629
      01700b69
    • I
      JSON (Document) API sketch · f0a8be25
      Igor Canadi authored
      Summary:
      This is a rough sketch of our new document API. Would like to get some thoughts and comments about the high-level architecture and API.
      
      I didn't optimize for performance at all. Leaving some low-hanging fruit so that we can be happy when we fix them! :)
      
      Currently, a bunch of features are not supported at all. Indexes can only be specified when creating the database. There is no query planner whatsoever. This will all be added in due time.
      
      Test Plan: Added a simple unit test
      
      Reviewers: haobo, yhchiang, dhruba, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18747
      f0a8be25
  20. 09 July 2014 (1 commit)
    • L
      generic rate limiter · 5ef1ba7f
      Lei Jin authored
      Summary:
      A generic rate limiter that can be shared by threads and rocksdb
      instances. Will use this to smooth out write traffic generated by
      compaction and flush. This will help us get better p99 behavior on flash
      storage.
      
      Test Plan:
      unit test output
      ==== Test RateLimiterTest.Rate
      request size [1 - 1023], limit 10 KB/sec, actual rate: 10.374969 KB/sec, elapsed 2002265
      request size [1 - 2047], limit 20 KB/sec, actual rate: 20.771242 KB/sec, elapsed 2002139
      request size [1 - 4095], limit 40 KB/sec, actual rate: 41.285299 KB/sec, elapsed 2202424
      request size [1 - 8191], limit 80 KB/sec, actual rate: 81.371605 KB/sec, elapsed 2402558
      request size [1 - 16383], limit 160 KB/sec, actual rate: 162.541268 KB/sec, elapsed 3303500
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D19359
      5ef1ba7f
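      The arithmetic behind such a limiter can be sketched as a token bucket. The real RateLimiter is shared across threads and sleeps callers internally; this toy single-threaded version (all names are illustrative) is driven by an explicit clock and merely returns how long the caller should wait:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <cstdint>

      class TokenBucketLimiter {
       public:
        explicit TokenBucketLimiter(int64_t rate_bytes_per_sec)
            : rate_(rate_bytes_per_sec) {}

        // Charges `bytes` against the budget and returns how many microseconds
        // the caller should wait before issuing the I/O (0 if within budget).
        int64_t Request(int64_t bytes, int64_t now_us) {
          // Accrue budget since the last call, capped at one second of burst.
          tokens_ = std::min(
              tokens_ + (now_us - last_us_) * rate_ / 1000000, rate_);
          last_us_ = now_us;
          tokens_ -= bytes;                   // may go negative: we owe time
          if (tokens_ >= 0) return 0;
          return -tokens_ * 1000000 / rate_;  // time until the debt is repaid
        }

       private:
        int64_t rate_;        // bytes per second
        int64_t tokens_ = 0;  // current budget in bytes (negative = debt)
        int64_t last_us_ = 0;
      };
      ```

      Smoothing comes from the cap: bursts larger than the budget turn into proportional waits, so compaction and flush traffic is spread out instead of landing all at once (the p99 benefit mentioned above).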
  21. 23 June 2014 (1 commit)
  22. 20 June 2014 (1 commit)
    • I
      JSONDocument · 00b26c3a
      Igor Canadi authored
      Summary:
      After evaluating options for JSON storage, I decided to implement our own. The reason is that we'll be able to optimize it better and we get to reduce unnecessary dependencies (which is what we'd get with folly).
      
      I also plan to write a serializer/deserializer for JSONDocument with our own binary format similar to BSON. That way we'll store binary JSON format in RocksDB instead of the plain-text JSON. This means less storage and faster deserialization.
      
      There are still some inefficiencies left here. I plan to optimize them after we develop a functioning DocumentDB. That way we can move and iterate faster.
      
      Test Plan: added a unit test
      
      Reviewers: dhruba, haobo, sdong, ljin, yhchiang
      
      Reviewed By: haobo
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18831
      00b26c3a
  23. 24 May 2014 (1 commit)
  24. 11 May 2014 (1 commit)
  25. 08 May 2014 (1 commit)
    • I
      Better INSTALL.md and Makefile rules · 313b2e5d
      Igor Canadi authored
      Summary: We have a lot of problems with gflags. However, when compiling the rocksdb static library, we don't need the gflags dependency. Reorganize INSTALL.md so that first-time users don't need any dependencies installed to actually build the rocksdb static library.
      
      Test Plan: none
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18501
      313b2e5d
  26. 06 May 2014 (1 commit)
    • I
      log_and_apply_bench on a new benchmark framework · d2569fea
      Igor Canadi authored
      Summary:
      db_test includes a benchmark for LogAndApply. This diff removes it from db_test and puts it into a separate log_and_apply bench. I just wanted to play around with our new benchmark framework and figure out how it works.

      I would also like to show you how great it is! I believe the right set of microbenchmarks can speed up our productivity a lot and help catch regressions early.
      
      Test Plan: no
      
      Reviewers: dhruba, haobo, sdong, ljin, yhchiang
      
      Reviewed By: yhchiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D18261
      d2569fea
  27. 05 May 2014 (1 commit)