1. Oct 24, 2013 (1 commit)
  2. Oct 17, 2013 (1 commit)
  3. Oct 06, 2013 (1 commit)
  4. Oct 05, 2013 (1 commit)
  5. Oct 03, 2013 (1 commit)
  6. Oct 01, 2013 (1 commit)
    • Phase 2 of iterator stress test · 7edb92b8
      Committed by Natalie Hildebrandt
      Summary: Using an iterator instead of the Get method, each thread goes through a portion of the database and verifies values by comparing to the shared state.
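
      For orientation, the verification loop is roughly the following (a sketch only; SharedState, Key(), and VerificationAbort() stand in for db_stress helpers, and their exact signatures are assumptions here):

          // Sketch: walk the thread's key range [start, end) with an iterator
          // and compare what the DB returns against the shared state.
          ReadOptions read_opts;
          std::unique_ptr<Iterator> iter(db_->NewIterator(read_opts));
          iter->Seek(Key(start));
          for (long i = start; i < end; i++) {
            bool in_db = iter->Valid() && iter->key().ToString() == Key(i);
            bool in_shared = shared.Exists(i);  // hypothetical shared-state lookup
            if (in_db && !in_shared) {
              VerificationAbort("Unexpected value found", i);
            } else if (!in_db && in_shared) {
              VerificationAbort("Value not found", i);
            }
            if (in_db) {
              iter->Next();  // advance only past keys actually present in the DB
            }
          }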
      
      Test Plan:
      ./db_stress --db=/tmp/tmppp --max_key=10000 --ops_per_thread=10000
      
      To test some basic cases, the following lines can be added (each set in turn) to the verifyDb method with the following expected results:
      
    // Should abort with "Unexpected value found"
    shared.Delete(start);

    // Should abort with "Value not found"
    WriteOptions write_opts;
    db_->Delete(write_opts, Key(start));

    // Should succeed
    WriteOptions write_opts;
    shared.Delete(start);
    db_->Delete(write_opts, Key(start));

    // Should abort with "Value not found"
    WriteOptions write_opts;
    db_->Delete(write_opts, Key(start + (end-start)/2));

    // Should abort with "Value not found"
    WriteOptions write_opts;
    db_->Delete(write_opts, Key(end-1));

    // Should abort with "Unexpected value"
    shared.Delete(end-1);

    // Should abort with "Unexpected value"
    shared.Delete(start + (end-start)/2);

    // Should abort with "Value not found"
    WriteOptions write_opts;
    db_->Delete(write_opts, Key(start));
    shared.Delete(start);
    db_->Delete(write_opts, Key(end-1));
    db_->Delete(write_opts, Key(end-2));
      
      To test the out of range abort, change the key in the for loop to Key(i+1), so that the key defined by the index i is now outside of the supposed range of the database.
      
      Reviewers: emayanke
      
      Reviewed By: emayanke
      
      CC: dhruba, xjin
      
      Differential Revision: https://reviews.facebook.net/D13071
  7. Sep 20, 2013 (1 commit)
    • Phase 1 of an iterator stress test · 43354182
      Committed by Natalie Hildebrandt
      Summary:
Added MultiIterate(), which does a seek and some Next/Prev calls. Only the iterator status is checked; there is no data integrity check.
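
      The operation is roughly the following (a sketch, not the committed code; ThreadState, its random generator, and the db_ member follow db_stress conventions and are assumptions here):

          // Sketch of MultiIterate(): seek to a key, take a few random
          // Next/Prev steps, and check only the iterator status, not the data.
          Status MultiIterate(ThreadState* thread, const ReadOptions& options,
                              const Slice& key) {
            std::unique_ptr<Iterator> iter(db_->NewIterator(options));
            iter->Seek(key);
            for (int i = 0; i < 100 && iter->Valid(); i++) {
              if (thread->rand.OneIn(2)) {
                iter->Next();
              } else {
                iter->Prev();
              }
            }
            return iter->status();  // only the status is verified by the caller
          }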
      
      Test Plan:
      make db_stress
      ./db_stress --iterpercent=<nonzero value> --readpercent=, etc.
      
      Reviewers: emayanke, dhruba, xjin
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12915
  8. Sep 14, 2013 (1 commit)
    • Added a parameter to limit the maximum space amplification for universal compaction. · 4012ca1c
      Committed by Dhruba Borthakur
      Summary:
      Added a new field called max_size_amplification_ratio in the
      CompactionOptionsUniversal structure. This determines the maximum
      percentage overhead of space amplification.
      
The size amplification is defined as the ratio of the sum of the sizes of
all other files to the size of the oldest file. If the size amplification
exceeds the specified value, then min_merge_width and max_merge_width are
ignored and a full compaction of all files is done.
A value of 10 means that a database storing 100 bytes of user data could
occupy up to 110 bytes of physical storage.
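
      For reference, enabling the new limit looks like this (a minimal sketch; the value is the percentage bound described above):

          Options options;
          options.compaction_style = kCompactionStyleUniversal;
          // If the files other than the oldest one add up to more than 10% of
          // the oldest file's size, min/max_merge_width are ignored and a full
          // compaction of all files is triggered.
          options.compaction_options_universal.max_size_amplification_ratio = 10;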
      
      Test Plan: Unit test DBTest.UniversalCompactionSpaceAmplification added.
      
      Reviewers: haobo, emayanke, xjin
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12825
  9. Aug 24, 2013 (1 commit)
  10. Aug 23, 2013 (1 commit)
    • Add three new MemTableRep's · 74781a0c
      Committed by Jim Paton
      Summary:
      This patch adds three new MemTableRep's: UnsortedRep, PrefixHashRep, and VectorRep.
      
UnsortedRep stores keys in an std::unordered_set. When an iterator is requested, it dumps the keys into an std::set and iterates over that.
      
      VectorRep stores keys in an std::vector. When an iterator is requested, it creates a copy of the vector and sorts it using std::sort. The iterator accesses that new vector.
      
      PrefixHashRep stores keys in an unordered_map mapping prefixes to ordered sets.
      
I also added one API change: a function MemTableRep::MarkImmutable. This function is called when the rep is added to the immutable list. It doesn't do anything yet, but it seems like it could be useful. In particular, for the VectorRep, it means we could elide the extra copy and just sort in place. The only reason I haven't done that yet is that the use of the ArenaAllocator complicates things (I can elaborate on this if needed).
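
      To illustrate the VectorRep idea, a simplified sketch (not the actual MemTableRep interface):

          #include <algorithm>
          #include <string>
          #include <vector>

          // Sketch: inserts are O(1) appends to an unsorted vector; requesting
          // an iterator pays the cost by copying and sorting a key snapshot.
          class VectorRepSketch {
           public:
            void Insert(std::string key) { keys_.push_back(std::move(key)); }
            std::vector<std::string> SortedSnapshot() const {
              std::vector<std::string> copy(keys_);
              std::sort(copy.begin(), copy.end());
              return copy;  // the iterator walks this sorted copy
            }
           private:
            std::vector<std::string> keys_;
          };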
      
      Test Plan:
      make -j32 check
      ./db_stress --memtablerep=vector
      ./db_stress --memtablerep=unsorted
      ./db_stress --memtablerep=prefixhash --prefix_size=10
      
      Reviewers: dhruba, haobo, emayanke
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12117
  11. Aug 21, 2013 (1 commit)
  12. Aug 16, 2013 (1 commit)
    • Benchmarking for Merge Operator · ad48c3c2
      Committed by Deon Nicholas
      Summary:
      Updated db_bench and utilities/merge_operators.h to allow for dynamic benchmarking
      of merge operators in db_bench. Added a new test (--benchmarks=mergerandom), which performs
      a bunch of random Merge() operations over random keys. Also added a "--merge_operator=" flag
      so that the tester can easily benchmark different merge operators. Currently supports
      the PutOperator and UInt64Add operator. Support for stringappend or list append may come later.
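
      For context, an associative add operator in the style of the UInt64Add operator looks roughly like this (a sketch; the actual class in utilities/merge_operators.h may differ, and DecodeFixed64/PutFixed64 are the fixed-width coding helpers):

          // Sketch: interprets values as fixed-width 64-bit integers and merges
          // by addition, so each Merge() behaves like a counter increment.
          class UInt64AddSketch : public AssociativeMergeOperator {
           public:
            virtual bool Merge(const Slice& key, const Slice* existing_value,
                               const Slice& value, std::string* new_value,
                               Logger* logger) const override {
              uint64_t base = 0;
              if (existing_value != nullptr) {
                base = DecodeFixed64(existing_value->data());  // assumes 8-byte values
              }
              new_value->clear();
              PutFixed64(new_value, base + DecodeFixed64(value.data()));
              return true;  // the merge always succeeds
            }
            virtual const char* Name() const override { return "UInt64AddSketch"; }
          };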
      
      Test Plan:
1. make db_bench
2. Test the PutOperator (simulating Put) as follows:
   ./db_bench --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=put --threads=2
3. Test the UInt64AddOperator (simulating numeric addition) similarly:
   ./db_bench --value_size=8 --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=uint64add --threads=2
      
      Reviewers: haobo, dhruba, zshao, MarkCallaghan
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11535
  13. Aug 15, 2013 (2 commits)
  14. Aug 06, 2013 (1 commit)
  15. Aug 02, 2013 (1 commit)
    • Expand KeyMayExist to return the proper value if it can be found in memory and also check block_cache · 59d0b02f
      Committed by Mayank Agarwal
      
Summary: Removed KeyMayExistImpl because KeyMayExist now demands Get-like semantics. Removed no_io from memtable and imm because we need the proper value now and shouldn't just stop when we see a Merge in the memtable. Added checks to block_cache. Updated documentation and unit tests.
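
      The resulting caller-side contract is roughly the following (value_found reports whether the returned value can be trusted):

          std::string value;
          bool value_found = false;
          // Returns false only when the key is definitely absent; when it
          // returns true, value is usable only if value_found is also true.
          bool may_exist = db->KeyMayExist(ReadOptions(), key, &value, &value_found);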
      
Test Plan: make all check; db_stress for 1 hour
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11853
  16. Jul 24, 2013 (1 commit)
    • Use KeyMayExist for WriteBatch-Deletes · bf66c10b
      Committed by Mayank Agarwal
      Summary:
Introduced KeyMayExist checking during writebatch-delete and removed it from the outer Delete API, because that API uses writebatch-delete.
Added code to skip fetching the Table from disk if it is not already present in table_cache.
Some renaming of variables.
Introduced KeyMayExistImpl, which allows checking from a specified sequence number in GetImpl; useful for checking a partially written writebatch.
Changed KeyMayExist to not be pure virtual and provided a default implementation.
Expanded unit tests in db_test to check appropriately.
Ran db_stress for 1 hour with ./db_stress --max_key=100000 --ops_per_thread=10000000 --delpercent=50 --filter_deletes=1 --statistics=1.
      
Test Plan: db_stress; make check
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D11745
  17. Jul 12, 2013 (1 commit)
    • Make rocksdb-deletes faster using bloom filter · 2a986919
      Committed by Mayank Agarwal
Summary:
Wrote a new function in db_impl.cc, CheckKeyMayExist, that calls Get with a new parameter turned on which makes Get return false only if the bloom filters can guarantee that the key is not in the database. Delete calls this function, and if the option deletes_use_filter is turned on and CheckKeyMayExist returns false, the delete will be dropped (see the sketch below), saving:
1. Put of delete type
2. Space in the db, and
3. Compaction time
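
      A sketch of the intended usage (the later commits in this log refer to the option as filter_deletes, e.g. the --filter_deletes=1 db_stress run above; this summary names it deletes_use_filter):

          Options options;
          options.filter_deletes = true;  // name used by the db_stress run above;
                                          // this summary calls it deletes_use_filter
          // With the option on, Delete() first runs CheckKeyMayExist(); when the
          // bloom filters guarantee the key is absent, the tombstone is dropped.
          db->Delete(WriteOptions(), key);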
      
      Test Plan:
make all check;
will run db_stress and db_bench and enhance unit tests once the basic design gets approved
      
      Reviewers: dhruba, haobo, vamsi
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11607
  18. Jul 11, 2013 (1 commit)
    • Print complete statistics in db_stress · 821889e2
      Committed by Mayank Agarwal
Summary: db_stress should also print complete statistics like db_bench does. Needed this when I wanted to measure the number of delete I/Os dropped due to CheckKeyMayExist, to be introduced to the rocksdb codebase later to make deletes in rocksdb faster.
      
Test Plan: make db_stress; ./db_stress --max_key=100 --ops_per_thread=1000 --statistics=1
      
      Reviewers: sheki, dhruba, vamsi, haobo
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D11655
  19. Jul 04, 2013 (1 commit)
  20. Jul 01, 2013 (2 commits)
    • Reduce write amplification by merging files in L0 back into L0 · 47c4191f
      Committed by Dhruba Borthakur
      Summary:
There is a new option called hybrid_mode which, when switched on,
causes HBase-style compactions. Files from L0 are
compacted back into L0. The meat of this compaction algorithm
is in PickCompactionHybrid().

All files reside in L0, which means all files have overlapping
keys. Each file has a time bound, i.e. each file contains a
range of keys that were inserted around the same time. The
start-seqno and the end-seqno refer to the timeframe when
these keys were inserted. Files that have contiguous seqnos
are compacted together into a larger file. All files are
ordered from most recent to oldest.

The current compaction algorithm starts looking for
candidate files from the most recent file. It continues to
add more files to the same compaction run as long as the
sum of the sizes of the files chosen so far is smaller than the
next candidate's file size. This logic still needs to be debated
and validated.
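
      In pseudocode, the selection rule reads roughly as follows (a sketch of the rule described above, not the actual PickCompactionHybrid code; files_newest_first stands for the L0 file list ordered newest first):

          // Sketch: accumulate recent L0 files while their total size stays
          // below the size of the next (older) candidate file.
          std::vector<FileMetaData*> inputs;
          uint64_t sum = 0;
          for (FileMetaData* f : files_newest_first) {  // most recent first
            if (!inputs.empty() && sum >= f->file_size) {
              break;  // the files chosen so far outweigh the next candidate
            }
            inputs.push_back(f);
            sum += f->file_size;
          }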
      
      The above logic should reduce write amplification to a
      large extent... will publish numbers shortly.
      
      Test Plan: dbstress runs for 6 hours with no data corruption (tested so far).
      
      Differential Revision: https://reviews.facebook.net/D11289
    • Reduce write amplification by merging files in L0 back into L0 · 554c06dd
      Committed by Dhruba Borthakur
      Summary:
There is a new option called hybrid_mode which, when switched on,
causes HBase-style compactions. Files from L0 are
compacted back into L0. The meat of this compaction algorithm
is in PickCompactionHybrid().

All files reside in L0, which means all files have overlapping
keys. Each file has a time bound, i.e. each file contains a
range of keys that were inserted around the same time. The
start-seqno and the end-seqno refer to the timeframe when
these keys were inserted. Files that have contiguous seqnos
are compacted together into a larger file. All files are
ordered from most recent to oldest.

The current compaction algorithm starts looking for
candidate files from the most recent file. It continues to
add more files to the same compaction run as long as the
sum of the sizes of the files chosen so far is smaller than the
next candidate's file size. This logic still needs to be debated
and validated.
      
      The above logic should reduce write amplification to a
      large extent... will publish numbers shortly.
      
      Test Plan: dbstress runs for 6 hours with no data corruption (tested so far).
      
      Differential Revision: https://reviews.facebook.net/D11289
  21. Jun 20, 2013 (1 commit)
  22. Jun 18, 2013 (1 commit)
  23. Jun 13, 2013 (1 commit)
    • [RocksDB] cleanup EnvOptions · bdf10859
      Committed by Haobo Xu
      Summary:
      This diff simplifies EnvOptions by treating it as POD, similar to Options.
      - virtual functions are removed and member fields are accessed directly.
      - StorageOptions is removed.
      - Options.allow_readahead and Options.allow_readahead_compactions are deprecated.
      - Unused global variables are removed: useOsBuffer, useFsReadAhead, useMmapRead, useMmapWrite
      
      Test Plan: make check; db_stress
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11175
  24. May 24, 2013 (1 commit)
  25. May 22, 2013 (2 commits)
    • [Kill randomly at various points in source code for testing] · 760dd475
      Committed by Vamsi Ponnekanti
      Summary:
This is the initial version; a sketch of the basic kill-point idea follows the list of possible future extensions:
(a) Killing from more places in the source code
(b) Hashing the stack and using that hash in determining whether to crash.
    This is to avoid crashing more often at source lines that are executed
    more often.
(c) Raising exceptions or returning errors instead of killing
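
      The kill-point mechanism amounts to something like this (an illustrative sketch, not the committed code; the function name and odds parameter are assumptions):

          #include <csignal>
          #include <cstdlib>

          // Sketch: at an instrumented point, die abruptly with probability
          // 1/odds, leaving the DB in whatever half-written state it was in.
          void TestKillRandom(int odds) {
            if (odds > 0 && std::rand() % odds == 0) {
              raise(SIGKILL);  // the crash test script restarts and verifies
            }
          }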
      
      Test Plan:
      This whole thing is for testing.
      
Here is part of the output:
      
      python2.7 tools/db_crashtest2.py -d 600
      Running db_stress
      
      db_stress retncode -15 output LevelDB version     : 1.5
      Number of threads   : 32
      Ops per thread      : 10000000
      Read percentage     : 50
      Write-buffer-size   : 4194304
      Delete percentage   : 30
      Max key             : 1000
      Ratio #ops/#keys    : 320000
      Num times DB reopens: 0
      Batches/snapshots   : 1
      Purge redundant %   : 50
      Num keys per lock   : 4
      Compression         : snappy
      ------------------------------------------------
      No lock creation because test_batches_snapshots set
      2013/04/26-17:55:17  Starting database operations
      Created bg thread 0x7fc1f07ff700
      ... finished 60000 ops
      Running db_stress
      
      db_stress retncode -15 output LevelDB version     : 1.5
      Number of threads   : 32
      Ops per thread      : 10000000
      Read percentage     : 50
      Write-buffer-size   : 4194304
      Delete percentage   : 30
      Max key             : 1000
      Ratio #ops/#keys    : 320000
      Num times DB reopens: 0
      Batches/snapshots   : 1
      Purge redundant %   : 50
      Num keys per lock   : 4
      Compression         : snappy
      ------------------------------------------------
      Created bg thread 0x7ff0137ff700
      No lock creation because test_batches_snapshots set
      2013/04/26-17:56:15  Starting database operations
      ... finished 90000 ops
      
      Revert Plan: OK
      
      Task ID: #2252691
      
      Reviewers: dhruba, emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb, haobo
      
      Differential Revision: https://reviews.facebook.net/D10581
    • Add a check to db_stress to not allow disable_wal and reopens set together · 3827403c
      Committed by Mayank Agarwal
      Summary: db can't reopen safely with disable_wal set!
      
      Test Plan: make db_stress; run db_stress with disable_wal and reopens set and see error
      
      Reviewers: dhruba, vamsi
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10857
  26. May 21, 2013 (1 commit)
  27. May 03, 2013 (1 commit)
    • Timestamp and TTL Wrapper for rocksdb · d786b25e
      Committed by Mayank Agarwal
      Summary:
When opened with a DBTimestamp::Open call, timestamps are prepended to and stripped from the value during subsequent Put and Get calls respectively. The timestamp is used in Get and in a custom compaction filter to discard values that have exceeded the TTL specified during Open.
Have made a temporary change to the Makefile to let us test with the temporary file TestTime.cc. Have also changed the private members of db_impl.h to protected to let them be inherited by the new class DBTimestamp.
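
      The wrapper idea in miniature (a sketch only; DBTimestampSketch and AppendCurrentTimestamp are illustrative names, not the committed API):

          // Sketch: Put prepends the current time to the value before handing
          // it to the underlying DB; Get strips it and discards expired values.
          Status DBTimestampSketch::Put(const WriteOptions& opts, const Slice& key,
                                        const Slice& val) {
            std::string with_ts;
            AppendCurrentTimestamp(&with_ts);  // illustrative helper
            with_ts.append(val.data(), val.size());
            return db_->Put(opts, key, with_ts);
          }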
      
Test Plan: make db_timestamp; TestTime.cc (will not check it in) shows how to use the APIs currently, but I will write unit tests shortly
      
      Reviewers: dhruba, vamsi, haobo, sheki, heyongqiang, vkrest
      
      Reviewed By: vamsi
      
      CC: zshao, xjin, vkrest, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D10311
  28. Apr 09, 2013 (2 commits)
  29. Apr 02, 2013 (1 commit)
    • Python script to periodically run and kill the db_stress test · e937d471
      Committed by Mayank Agarwal
Summary: The script runs and kills the stress test periodically. Default values are used in the script for now. Should I make this a part of the Makefile or the automated rocksdb build? The values can be easily changed in the script right now, but should I add support for variable values or input to the script? I believe the script achieves its objective of crashing unsafely and reopening to expect sanity in the database.
      
      Test Plan: python tools/db_crashtest.py
      
      Reviewers: dhruba, vamsi, MarkCallaghan
      
      Reviewed By: vamsi
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9369
  30. Mar 28, 2013 (1 commit)
    • memory manage statistics · 63f216ee
      Committed by Abhishek Kona
      Summary:
Earlier, the Statistics object was a raw pointer. This meant the user had to clean up
the Statistics object after creating the database. In most use cases the database is created in a function and the statistics pointer goes out of scope, so the Statistics object would never be deleted.
Now using a shared_ptr to manage this.
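
      With the change, the usual pattern becomes (CreateDBStatistics hands back a shared_ptr, so no explicit delete is needed):

          Options options;
          options.statistics = CreateDBStatistics();  // std::shared_ptr<Statistics>
          // ... create and use the database; the Statistics object is freed
          // automatically when the last reference (e.g. the copy held by the
          // DB) goes away, even if this Options instance goes out of scope.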
      
      Want this in before the next release.
      
      Test Plan: make all check.
      
      Reviewers: dhruba, emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9735
  31. Mar 11, 2013 (1 commit)
    • [Report the #gets and #founds in db_stress] · 8ade9359
      Committed by Vamsi Ponnekanti
      Summary:
      Also added some comments and fixed some bugs in
      stats reporting. Now the stats seem to match what is expected.
      
      Test Plan:
      [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --test_batches_snapshots=1 --ops_per_thread=1000 --threads=1 --max_key=320
      LevelDB version     : 1.5
      Number of threads   : 1
      Ops per thread      : 1000
      Read percentage     : 10
      Delete percentage   : 30
      Max key             : 320
      Ratio #ops/#keys    : 3
      Num times DB reopens: 10
      Batches/snapshots   : 1
      Num keys per lock   : 4
      Compression         : snappy
      ------------------------------------------------
      No lock creation because test_batches_snapshots set
      2013/03/04-15:58:56  Starting database operations
      2013/03/04-15:58:56  Reopening database for the 1th time
      2013/03/04-15:58:56  Reopening database for the 2th time
      2013/03/04-15:58:56  Reopening database for the 3th time
      2013/03/04-15:58:56  Reopening database for the 4th time
      Created bg thread 0x7f4542bff700
      2013/03/04-15:58:56  Reopening database for the 5th time
      2013/03/04-15:58:56  Reopening database for the 6th time
      2013/03/04-15:58:56  Reopening database for the 7th time
      2013/03/04-15:58:57  Reopening database for the 8th time
      2013/03/04-15:58:57  Reopening database for the 9th time
      2013/03/04-15:58:57  Reopening database for the 10th time
      2013/03/04-15:58:57  Reopening database for the 11th time
      2013/03/04-15:58:57  Limited verification already done during gets
      Stress Test : 1811.551 micros/op 552 ops/sec
                  : Wrote 0.10 MB (0.05 MB/sec) (598% of 1011 ops)
                  : Wrote 6050 times
                  : Deleted 3050 times
                  : 500/900 gets found the key
                  : Got errors 0 times
      
      [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=1000 --threads=1 --max_key=320
      LevelDB version     : 1.5
      Number of threads   : 1
      Ops per thread      : 1000
      Read percentage     : 10
      Delete percentage   : 30
      Max key             : 320
      Ratio #ops/#keys    : 3
      Num times DB reopens: 10
      Batches/snapshots   : 0
      Num keys per lock   : 4
      Compression         : snappy
      ------------------------------------------------
      Creating 80 locks
      2013/03/04-15:58:17  Starting database operations
      2013/03/04-15:58:17  Reopening database for the 1th time
      2013/03/04-15:58:17  Reopening database for the 2th time
      2013/03/04-15:58:17  Reopening database for the 3th time
      2013/03/04-15:58:17  Reopening database for the 4th time
      Created bg thread 0x7fc0f5bff700
      2013/03/04-15:58:17  Reopening database for the 5th time
      2013/03/04-15:58:17  Reopening database for the 6th time
      2013/03/04-15:58:18  Reopening database for the 7th time
      2013/03/04-15:58:18  Reopening database for the 8th time
      2013/03/04-15:58:18  Reopening database for the 9th time
      2013/03/04-15:58:18  Reopening database for the 10th time
      2013/03/04-15:58:18  Reopening database for the 11th time
      2013/03/04-15:58:18  Starting verification
      Stress Test : 1836.258 micros/op 544 ops/sec
                  : Wrote 0.01 MB (0.01 MB/sec) (59% of 1011 ops)
                  : Wrote 605 times
                  : Deleted 305 times
                  : 50/90 gets found the key
                  : Got errors 0 times
      2013/03/04-15:58:18  Verification successful
      
      Revert Plan: OK
      
      Task ID: #
      
      Reviewers: emayanke, dhruba
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9081
  32. Mar 08, 2013 (1 commit)
    • Make db_stress not purge redundant keys on some opens · 3b6653b1
      Committed by amayank
Summary: In light of the new option introduced by commit 806e2643, where the database has an option to compact before flushing to disk, we want the stress test to exercise both sides of the option. Made the stress test 'deterministically' and configurably change that option across reopens.
      
Test Plan: make db_stress; ./db_stress with some different options
      
      Reviewers: dhruba, vamsi
      
      Reviewed By: dhruba
      
      CC: leveldb, sheki
      
      Differential Revision: https://reviews.facebook.net/D9165
  33. Feb 26, 2013 (1 commit)
  34. Feb 23, 2013 (1 commit)
    • [Add a second kind of verification to db_stress] · 465b9103
      Committed by Vamsi Ponnekanti
      Summary:
Currently the test tracks all writes in memory and
uses them for verification at the end. This has 4 problems:
(a) It needs a mutex for each write to ensure the in-memory update
and the leveldb update are done atomically. This slows down the
benchmark.
(b) The verification phase at the end is time consuming as well.
(c) It does not test batch writes or snapshots.
(d) We cannot kill the test and restart it multiple times in a
loop because the in-memory state will be lost.
      
I am adding a FLAGS_multi that does MultiGet/MultiPut/MultiDelete
instead of get/put/delete to get/put/delete a group of related
keys with the same values atomically. Every get retrieves the group
of keys and checks that their values are the same. This does not have
the above problems, but the downside is that it does less
validation than the other approach.
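
      Schematically, the grouped read-and-verify step looks like this (a sketch; group_keys stands for the related keys written together by MultiPut):

          // Sketch: fetch the whole group via MultiGet and check that every
          // key in the group carries the same value.
          std::vector<std::string> values;
          std::vector<Status> statuses = db->MultiGet(read_opts, group_keys, &values);
          for (size_t i = 1; i < group_keys.size(); i++) {
            if (statuses[0].ok() && statuses[i].ok() && values[i] != values[0]) {
              fprintf(stderr, "grouped keys disagree\n");  // verification failure
            }
          }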
      
      Test Plan:
This whole thing is a test! Here is a small run; I am doing a larger run now.
      
      [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=10000 --multi=1 --ops_per_key=25
      LevelDB version     : 1.5
      Number of threads   : 32
      Ops per thread      : 10000
      Read percentage     : 10
      Delete percentage   : 30
      Max key             : 2147483648
      Num times DB reopens: 10
      Num keys per lock   : 4
      Compression         : snappy
      ------------------------------------------------
      Creating 536870912 locks
      2013/02/20-16:59:32  Starting database operations
      Created bg thread 0x7f9ebcfff700
      2013/02/20-16:59:37  Reopening database for the 1th time
      2013/02/20-16:59:46  Reopening database for the 2th time
      2013/02/20-16:59:57  Reopening database for the 3th time
      2013/02/20-17:00:11  Reopening database for the 4th time
      2013/02/20-17:00:25  Reopening database for the 5th time
      2013/02/20-17:00:36  Reopening database for the 6th time
      2013/02/20-17:00:47  Reopening database for the 7th time
      2013/02/20-17:00:59  Reopening database for the 8th time
      2013/02/20-17:01:10  Reopening database for the 9th time
      2013/02/20-17:01:20  Reopening database for the 10th time
      2013/02/20-17:01:31  Reopening database for the 11th time
      2013/02/20-17:01:31  Starting verification
      Stress Test : 109.125 micros/op 22191 ops/sec
                  : Wrote 0.00 MB (0.23 MB/sec) (59% of 32 ops)
                  : Deleted 10 times
      2013/02/20-17:01:31  Verification successful
      
      Revert Plan: OK
      
      Task ID: #
      
      Reviewers: dhruba, emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D8733
  35. Feb 22, 2013 (1 commit)
    • Exploring the rocksdb stress test · 1052ea23
      Committed by amayank
      Summary:
Fixed a bug in the stress test where the correct size was not being
passed to GenerateValue. This bug had been there since the beginning, but
assertions were switched on in our code base only recently.
Added comments at the top detailing how the stress test works and how to
speed it up or slow it down for investigation.
      
      Test Plan: make all check. ./db_stress
      
      Reviewers: dhruba, asad
      
      Reviewed By: dhruba
      
      CC: vamsi, sheki, heyongqiang, zshao
      
      Differential Revision: https://reviews.facebook.net/D8727
  36. Feb 21, 2013 (1 commit)
    • Introduce histogram in statistics.h · fe10200d
      Committed by Abhishek Kona
      Summary:
* Introduce histograms in statistics.h
* A stopwatch to measure time.
* Two timers as a proof of concept.
Replaced NULL with nullptr to fight some lint errors.
Should be useful for Google.
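
      The measurement pattern for the two proof-of-concept timers is roughly (a sketch; DB_GET names one of the histogram types, and the exact method names may differ):

          // Sketch: time an operation and feed the elapsed micros into a histogram.
          uint64_t start = env->NowMicros();
          // ... the operation being measured, e.g. a Get ...
          statistics->measureTime(DB_GET, env->NowMicros() - start);

          // Later, read the distribution back out.
          HistogramData hist;
          statistics->histogramData(DB_GET, &hist);
          printf("median %.2f us, p99 %.2f us\n", hist.median, hist.percentile99);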
      
      Test Plan:
Ran db_bench and checked stats.
      make all check
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D8637