1. 04 April 2013, 1 commit
  2. 03 April 2013, 3 commits
  3. 02 April 2013, 1 commit
    • Python script to periodically run and kill the db_stress test · e937d471
      Committed by Mayank Agarwal
      Summary: The script runs and kills the stress test periodically. Default values are used in the script for now. Should I make this part of the Makefile or of the automated rocksdb build? The values can easily be changed in the script, but should I add support for variable values or input to the script? I believe the script achieves its objective: crashing the database unsafely and then reopening it, expecting it to still be sane.
      
      Test Plan: python tools/db_crashtest.py
      
      Reviewers: dhruba, vamsi, MarkCallaghan
      
      Reviewed By: vamsi
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9369
  4. 29 March 2013, 4 commits
    • Let's get rid of delete as much as possible; here are some examples. · 645ff8f2
      Committed by Haobo Xu
      Summary:
      If a class owns an object (a minimal sketch follows below):
       - If the object can be null => use a unique_ptr; no delete.
       - If the object cannot be null => you don't even need new, let alone delete.
       - For a runtime-sized array => use a vector; no delete.
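
      A minimal C++ sketch of these ownership rules. The class and member names
      are illustrative stand-ins, not taken from the RocksDB code:

      #include <cstddef>
      #include <memory>
      #include <vector>

      struct Cache {};                       // stand-in for some owned component

      class Table {
       public:
        explicit Table(std::size_t num_blocks)
            : optional_(nullptr),            // may be null: unique_ptr, still no delete
              offsets_(num_blocks, 0) {}     // runtime-sized array: vector, no delete

        void EnableCache() { optional_.reset(new Cache()); }

       private:
        Cache required_;                     // never null: plain member, no new/delete
        std::unique_ptr<Cache> optional_;    // owned and nullable
        std::vector<std::size_t> offsets_;   // owned, dynamically sized
      };

      None of the three members needs an explicit delete; cleanup is handled by
      the members' own destructors.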
      
      Test Plan: make check
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb, zshao, sheki, emayanke, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D9783
    • [RocksDB] Fix binary search while finding probable wal files · 3b51605b
      Committed by Abhishek Kona
      Summary:
      On a GetUpdatesSince call, RocksDB does a binary search over the files that might contain the requested sequence number.
      There was a bug in that binary search: when the file pointed to by the middle index was empty or corrupt, the search needs to resize the candidate vector and update its indexes.
      This change fixes that.
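
      A minimal, self-contained sketch of the kind of search this fix describes,
      assuming a vector of candidate WAL files sorted by starting sequence number.
      The types and the helper name are illustrative, not the actual RocksDB
      internals:

      #include <cstdint>
      #include <vector>

      struct WalFileInfo {
        uint64_t start_sequence;  // first sequence number recorded in the file
        bool readable;            // false if the file is empty or corrupt
      };

      // Returns the index of the file most likely to contain 'target', or -1.
      // When the probed file is unreadable it is dropped from the candidate set
      // and the search bounds are recomputed, which is the step this fix adds.
      int FindProbableWalFile(std::vector<WalFileInfo>* files, uint64_t target) {
        int64_t start = 0;
        int64_t end = static_cast<int64_t>(files->size()) - 1;
        while (start <= end) {
          int64_t mid = start + (end - start) / 2;
          if (!(*files)[mid].readable) {
            files->erase(files->begin() + mid);               // shrink the vector ...
            end = static_cast<int64_t>(files->size()) - 1;    // ... and fix the bound
            continue;
          }
          if ((*files)[mid].start_sequence == target) return static_cast<int>(mid);
          if ((*files)[mid].start_sequence < target) start = mid + 1;
          else end = mid - 1;
        }
        // 'end' now points at the last file starting at or before 'target'.
        return end >= 0 ? static_cast<int>(end) : -1;
      }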
      
      Test Plan: existing unit tests pass.
      
      Reviewers: heyongqiang, dhruba
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9777
    • [Rocksdb] Fix crash on finding a db with no log files; error out instead · 8e9c781a
      Committed by Abhishek Kona
      Summary:
      If the vector of probable WAL files found by GetUpdatesSince is empty, it was still returned to the
      user, which caused a std::range_error to be thrown.
      The probable file list is now checked, and an IOError status is returned instead of OK.
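
      A small sketch of the guard this change describes; the Status type and the
      function name are stand-ins, not the actual RocksDB code:

      #include <string>
      #include <vector>

      struct Status {
        static Status OK() { return Status{true, ""}; }
        static Status IOError(const std::string& msg) { return Status{false, msg}; }
        bool ok;
        std::string message;
      };

      Status CheckProbableWalFiles(const std::vector<std::string>& wal_files) {
        if (wal_files.empty()) {
          // Previously the empty list flowed through and later indexing threw
          // std::range_error; now the caller gets an explicit error status.
          return Status::IOError("no wal files contain the requested sequence number");
        }
        return Status::OK();
      }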
      
      Test Plan: added a unit test.
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9771
    • Use non-mmapped files for the write-ahead log · 7fdd5f5b
      Committed by Abhishek Kona
      Summary:
      Use non-mmapped files for the write-ahead log.
      The earlier use of mmapped files made the log iterator read ahead and miss records.
      Now the reader and writer point to the same physical location.
      
      There is no perf regression:
      ./db_bench --benchmarks=fillseq --db=/dev/shm/mmap_test --num=$(million 20) --use_existing_db=0 --threads=2
      With this diff:
      fillseq      :      10.756 micros/op 185281 ops/sec;   20.5 MB/s
      Without this diff:
      fillseq      :      11.085 micros/op 179676 ops/sec;   19.9 MB/s
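
      (For reference, 10.756 micros/op across the two benchmark threads works out
      to roughly 2 x (1,000,000 / 10.756) ≈ 186K ops/sec, consistent with the
      reported 185281 ops/sec.)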
      
      Test Plan: unit test included
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9741
  5. 28 March 2013, 1 commit
    • Memory-manage the statistics object · 63f216ee
      Committed by Abhishek Kona
      Summary:
      Earlier Statistics object was a raw pointer. This meant the user had to clear up
      the Statistics object after creating the database. In most use cases the database is created in a function and the statistics pointer is out of scope. Hence the statistics object would never be deleted.
      Now Using a shared_ptr to manage this.
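
      A minimal usage sketch of the shared_ptr-managed statistics object. The type
      and field names mirror the idea of the change but should be read as
      illustrative, not as the exact RocksDB API:

      #include <memory>

      struct Statistics { /* tickers, histograms, ... */ };

      struct Options {
        std::shared_ptr<Statistics> statistics;  // previously a raw Statistics*
      };

      void OpenDatabaseSomewhere() {
        Options options;
        options.statistics = std::make_shared<Statistics>();
        // ... open the database with 'options'. When both this scope and the DB
        // drop their references, the Statistics object is freed automatically,
        // instead of leaking the way the raw pointer did.
      }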
      
      Want this in before the next release.
      
      Test Plan: make all check.
      
      Reviewers: dhruba, emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9735
  6. 27 March 2013, 1 commit
  7. 23 March 2013, 1 commit
    • Integrate the manifest_dump command with ldb · a8bf8fe5
      Committed by Simon Marlow
      Summary:
      Syntax:
      
         manifest_dump [--verbose] --num=<manifest_num>
      
      e.g.
      
      $ ./ldb --db=/home/smarlow/tmp/testdb manifest_dump --num=12
      manifest_file_number 13 next_file_number 14 last_sequence 3 log_number
      11  prev_log_number 0
      --- level 0 --- version# 0 ---
       6:116['a1' @ 1 : 1 .. 'a1' @ 1 : 1]
       10:130['a3' @ 2 : 1 .. 'a4' @ 3 : 1]
      --- level 1 --- version# 0 ---
      --- level 2 --- version# 0 ---
      --- level 3 --- version# 0 ---
      --- level 4 --- version# 0 ---
      --- level 5 --- version# 0 ---
      --- level 6 --- version# 0 ---
      
      Test Plan: - Tested on an example DB (see output in summary)
      
      Reviewers: sheki, dhruba
      
      Reviewed By: sheki
      
      CC: leveldb, heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D9609
  8. 22 March 2013, 4 commits
  9. 21 March 2013, 4 commits
    • Run compactions even if the workload is read-only or read-mostly. · d0798f67
      Committed by Dhruba Borthakur
      Summary:
      The events that trigger compaction:
      * opening the database
      * Get -> only if seek compaction is not disabled and other checks are true
      * MakeRoomForWrite -> when memtable is full
      * BackgroundCall ->
        If the background thread is about to do a compaction run, it schedules
        a new background task to trigger a possible compaction. This causes
        additional background threads to find and process other compactions that
        can run concurrently (see the sketch after this list).
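
      A hedged sketch of the BackgroundCall idea above; the queue type and the
      scheduling hook are stand-ins for illustration, not the env_posix API:

      #include <functional>
      #include <queue>

      // Stand-in for the background thread pool's work queue.
      struct BackgroundQueue {
        std::queue<std::function<void()>> tasks;
        void Schedule(std::function<void()> task) { tasks.push(std::move(task)); }
      };

      void BackgroundCall(BackgroundQueue* queue, bool compaction_pending) {
        if (compaction_pending) {
          // About to run a compaction: enqueue another background call so that a
          // second thread can look for more compaction work to run concurrently.
          queue->Schedule([queue] { BackgroundCall(queue, false); });
        }
        // ... pick a compaction and run it ...
      }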
      
      Test Plan: ran db_bench with overwrite and readonly alternately.
      
      Reviewers: sheki, MarkCallaghan
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9579
    • Ability to configure buffered-io reads, filesystem readaheads and mmap read/write per database. · ad96563b
      Committed by Dhruba Borthakur
      Summary:
      This patch allows an application to specify whether to use bufferedio,
      reads-via-mmaps and writes-via-mmaps per database. Earlier, there
      was a global static variable that was used to configure this functionality.
      
      The default setting remains the same (and is backward compatible):
       1. use bufferedio
       2. do not use mmaps for reads
       3. use mmap for writes
       4. use readaheads for reads needed for compaction
      
      I also added a parameter to db_bench to be able to explicitly specify
      whether to do readaheads for compactions or not.
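
      A sketch of what this per-database configuration looks like in code. The
      field names below are assumptions made for illustration and are not
      confirmed against the options this patch actually adds:

      struct Options {
        bool allow_os_buffer = true;              // 1. use buffered I/O
        bool allow_mmap_reads = false;            // 2. do not use mmaps for reads
        bool allow_mmap_writes = true;            // 3. use mmap for writes
        bool allow_readahead_compactions = true;  // 4. readahead for compaction reads
      };

      void ConfigureOneDatabase() {
        Options options;                  // defaults match the list above
        options.allow_mmap_reads = true;  // opt this one database into mmap reads
        // ... pass 'options' when opening this database; other databases in the
        // same process keep their own, independent settings.
      }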
      
      Test Plan: make check
      
      Reviewers: sheki, heyongqiang, MarkCallaghan
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9429
    • 1.5.8.1.fb release. · 2adddeef
      Committed by Dhruba Borthakur
      Summary:
      
      Test Plan:
      
      Reviewers:
      
      CC:
      
      Task ID: #
      
      Blame Rev:
    • Removing boost from ldb_cmd.cc · a6f42754
      Committed by Mayank Agarwal
      Summary: Getting rid of boost in our github codebase, since it caused problems with third-party deployments.
      
      Test Plan: make ldb; python tools/ldb_test.py
      
      Reviewers: sheki, dhruba
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9543
  10. 20 March 2013, 5 commits
  11. 19 March 2013, 1 commit
  12. 16 March 2013, 1 commit
  13. 15 March 2013, 2 commits
    • Doing away with boost in ldb_cmd.h · a78fb5e8
      Committed by Mayank Agarwal
      Summary: boost functions cause complications when deploying to third-party environments.
      
      Test Plan: make
      
      Reviewers: sheki, dhruba
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9441
    • Enhance db_bench · 5a8c8845
      Committed by Mark Callaghan
      Summary:
      Add --benchmarks=updaterandom for read-modify-write workloads. This is different
      from --benchmarks=readrandomwriterandom in a few ways. First, an "operation" is the
      combined time to do the read & write rather than treating them as two ops. Second,
      the same key is used for the read & write.
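
      Roughly, one updaterandom operation behaves like the sketch below; this is a
      hedged illustration of the semantics just described, with a stand-in DB
      interface, not the db_bench source:

      #include <string>

      struct DB {
        virtual bool Get(const std::string& key, std::string* value) = 0;
        virtual void Put(const std::string& key, const std::string& value) = 0;
        virtual ~DB() {}
      };

      void UpdateRandomOp(DB* db, const std::string& key,
                          const std::string& new_value) {
        std::string existing;
        db->Get(key, &existing);   // read ...
        db->Put(key, new_value);   // ... then write back the SAME key
        // The Get+Put pair is timed and counted as a single operation, whereas
        // readrandomwriterandom counts the read and the write as two separate
        // ops on independent keys.
      }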
      
      Change RandomGenerator to support rows larger than 1M. It was using "assert"
      to fail, and assert is compiled away when -DNDEBUG is used.
      
      Add more options to db_bench
      --duration - sets the number of seconds for tests to run. When not set the
      operation count continues to be the limit. This is used by random operation
      tests.
      
      --use_snapshot - when set GetSnapshot() is called prior to each random read.
      This is to measure the overhead from using snapshots.
      
      --get_approx - when set GetApproximateSizes() is called prior to each random
      read. This is to measure the overhead for a query optimizer.
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      run db_bench
      
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D9267
  14. 14 March 2013, 2 commits
    • Updating fbcode.gcc471.sh to use jemalloc 3.3.1 · e93dc3c0
      Committed by Mayank Agarwal
      Summary: Updated TOOL_CHAIN_LIB_BASE to use the third-party version for jemalloc-3.3.1 which contains a bug fix in quarantine.cc. This was detected while debugging valgrind issues with the rocksdb table_test
      
      Test Plan: make table_test;valgrind --leak-check=full ./table_test
      
      Reviewers: dhruba, sheki, vamsi
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9387
    • Use posix_fallocate as default. · 1ba5abca
      Committed by Abhishek Kona
      Summary:
      ftruncate does not return an error on disk-full. This causes a SIGBUS when
      the database tries to issue a Put call on a full disk.
      
      Use posix_fallocate for allocation instead of ftruncate.
      Add a check to use mmapped files only on ext4, xfs and tmpfs, as
      posix_fallocate is very slow on ext3 and older.
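
      A minimal sketch of the allocation pattern described above, preferring
      posix_fallocate so that a full disk surfaces as an error code instead of a
      later SIGBUS on an mmapped write. The helper is illustrative only:

      #include <fcntl.h>
      #include <cstdio>

      // Pre-allocate 'len' bytes for 'fd'. posix_fallocate reports ENOSPC up
      // front, whereas ftruncate can "succeed" and leave the process to fault
      // later when a mapped page cannot be backed by real disk blocks.
      int PreallocateFile(int fd, off_t len) {
        int err = posix_fallocate(fd, 0, len);
        if (err != 0) {  // returns the error number directly; errno is not set
          std::fprintf(stderr, "preallocation failed: %d\n", err);
          return err;    // e.g. ENOSPC on a full disk
        }
        return 0;
      }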
      
      Test Plan: make all check
      
      Reviewers: dhruba, chip
      
      Reviewed By: dhruba
      
      CC: adsharma, leveldb
      
      Differential Revision: https://reviews.facebook.net/D9291
  15. 13 March 2013, 2 commits
  16. 12 March 2013, 2 commits
    • Prevent segfault because SizeUnderCompaction was called without any locks. · ebf16f57
      Committed by Dhruba Borthakur
      Summary:
      SizeBeingCompacted was called without any lock protection. This causes
      crashes, especially when running db_bench with value_size=128K.
      The fix is to compute SizeUnderCompaction while holding the mutex and to
      pass those values into the call to Finalize.
      
      (gdb) where
      #4  leveldb::VersionSet::SizeBeingCompacted (this=this@entry=0x7f0b490931c0, level=level@entry=4) at db/version_set.cc:1827
      #5  0x000000000043a3c8 in leveldb::VersionSet::Finalize (this=this@entry=0x7f0b490931c0, v=v@entry=0x7f0b3b86b480) at db/version_set.cc:1420
      #6  0x00000000004418d1 in leveldb::VersionSet::LogAndApply (this=0x7f0b490931c0, edit=0x7f0b3dc8c200, mu=0x7f0b490835b0, new_descriptor_log=<optimized out>) at db/version_set.cc:1016
      #7  0x00000000004222b2 in leveldb::DBImpl::InstallCompactionResults (this=this@entry=0x7f0b49083400, compact=compact@entry=0x7f0b2b8330f0) at db/db_impl.cc:1473
      #8  0x0000000000426027 in leveldb::DBImpl::DoCompactionWork (this=this@entry=0x7f0b49083400, compact=compact@entry=0x7f0b2b8330f0) at db/db_impl.cc:1757
      #9  0x0000000000426690 in leveldb::DBImpl::BackgroundCompaction (this=this@entry=0x7f0b49083400, madeProgress=madeProgress@entry=0x7f0b41bf2d1e, deletion_state=...) at db/db_impl.cc:1268
      #10 0x0000000000428f42 in leveldb::DBImpl::BackgroundCall (this=0x7f0b49083400) at db/db_impl.cc:1170
      #11 0x000000000045348e in BGThread (this=0x7f0b49023100) at util/env_posix.cc:941
      #12 leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper (arg=0x7f0b49023100) at util/env_posix.cc:874
      #13 0x00007f0b4a7cf10d in start_thread (arg=0x7f0b41bf3700) at pthread_create.c:301
      #14 0x00007f0b49b4b11d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
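
      A hedged sketch of the locking pattern the fix describes: the sizes are
      computed while the mutex is held and then passed into Finalize, so Finalize
      never reads the compaction bookkeeping unprotected. Class and method names
      are stand-ins, not the actual version_set.cc code:

      #include <cstdint>
      #include <mutex>
      #include <vector>

      struct Version {};  // stand-in

      class VersionSetSketch {
       public:
        void LogAndApply(Version* v) {
          std::vector<uint64_t> size_being_compacted(num_levels_);
          {
            std::lock_guard<std::mutex> guard(mu_);  // hold the mutex ...
            for (int level = 0; level < num_levels_; ++level) {
              size_being_compacted[level] = SizeBeingCompactedLocked(level);
            }
          }  // ... only while reading the shared compaction state
          Finalize(v, size_being_compacted);  // consumes the precomputed sizes
        }

       private:
        uint64_t SizeBeingCompactedLocked(int /*level*/) { return 0; }  // stand-in
        void Finalize(Version*, const std::vector<uint64_t>&) {}        // stand-in
        std::mutex mu_;
        int num_levels_ = 7;
      };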
      
      Test Plan:
      make check
      
      I am running db_bench with a value size of 128K to see if the segfault is fixed.
      
      Reviewers: MarkCallaghan, sheki, emayanke
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9279
    • Make the build-time show up in the leveldb library. · c04c956b
      Committed by Dhruba Borthakur
      Summary:
      This fixes a regression caused by
      https://github.com/facebook/rocksdb/commit/772f75b3fbc5cfcf4d519114751efeae04411fa1
      
      If you run "strings libleveldb.a | grep leveldb_build_git_datetime" it will
      show you the time when the binary was built.
      
      Test Plan: make check
      
      Reviewers: emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9273
  17. 11 March 2013, 1 commit
    • [Report the #gets and #founds in db_stress] · 8ade9359
      Committed by Vamsi Ponnekanti
      Summary:
      Also added some comments and fixed some bugs in
      stats reporting. Now the stats seem to match what is expected.
      
      Test Plan:
      [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --test_batches_snapshots=1 --ops_per_thread=1000 --threads=1 --max_key=320
      LevelDB version     : 1.5
      Number of threads   : 1
      Ops per thread      : 1000
      Read percentage     : 10
      Delete percentage   : 30
      Max key             : 320
      Ratio #ops/#keys    : 3
      Num times DB reopens: 10
      Batches/snapshots   : 1
      Num keys per lock   : 4
      Compression         : snappy
      ------------------------------------------------
      No lock creation because test_batches_snapshots set
      2013/03/04-15:58:56  Starting database operations
      2013/03/04-15:58:56  Reopening database for the 1th time
      2013/03/04-15:58:56  Reopening database for the 2th time
      2013/03/04-15:58:56  Reopening database for the 3th time
      2013/03/04-15:58:56  Reopening database for the 4th time
      Created bg thread 0x7f4542bff700
      2013/03/04-15:58:56  Reopening database for the 5th time
      2013/03/04-15:58:56  Reopening database for the 6th time
      2013/03/04-15:58:56  Reopening database for the 7th time
      2013/03/04-15:58:57  Reopening database for the 8th time
      2013/03/04-15:58:57  Reopening database for the 9th time
      2013/03/04-15:58:57  Reopening database for the 10th time
      2013/03/04-15:58:57  Reopening database for the 11th time
      2013/03/04-15:58:57  Limited verification already done during gets
      Stress Test : 1811.551 micros/op 552 ops/sec
                  : Wrote 0.10 MB (0.05 MB/sec) (598% of 1011 ops)
                  : Wrote 6050 times
                  : Deleted 3050 times
                  : 500/900 gets found the key
                  : Got errors 0 times
      
      [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=1000 --threads=1 --max_key=320
      LevelDB version     : 1.5
      Number of threads   : 1
      Ops per thread      : 1000
      Read percentage     : 10
      Delete percentage   : 30
      Max key             : 320
      Ratio #ops/#keys    : 3
      Num times DB reopens: 10
      Batches/snapshots   : 0
      Num keys per lock   : 4
      Compression         : snappy
      ------------------------------------------------
      Creating 80 locks
      2013/03/04-15:58:17  Starting database operations
      2013/03/04-15:58:17  Reopening database for the 1th time
      2013/03/04-15:58:17  Reopening database for the 2th time
      2013/03/04-15:58:17  Reopening database for the 3th time
      2013/03/04-15:58:17  Reopening database for the 4th time
      Created bg thread 0x7fc0f5bff700
      2013/03/04-15:58:17  Reopening database for the 5th time
      2013/03/04-15:58:17  Reopening database for the 6th time
      2013/03/04-15:58:18  Reopening database for the 7th time
      2013/03/04-15:58:18  Reopening database for the 8th time
      2013/03/04-15:58:18  Reopening database for the 9th time
      2013/03/04-15:58:18  Reopening database for the 10th time
      2013/03/04-15:58:18  Reopening database for the 11th time
      2013/03/04-15:58:18  Starting verification
      Stress Test : 1836.258 micros/op 544 ops/sec
                  : Wrote 0.01 MB (0.01 MB/sec) (59% of 1011 ops)
                  : Wrote 605 times
                  : Deleted 305 times
                  : 50/90 gets found the key
                  : Got errors 0 times
      2013/03/04-15:58:18  Verification successful
      
      Revert Plan: OK
      
      Task ID: #
      
      Reviewers: emayanke, dhruba
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9081
  18. 09 March 2013, 3 commits
  19. 08 March 2013, 1 commit
    • Make db_stress not purge redundant keys on some opens · 3b6653b1
      Committed by amayank
      Summary: In light of the new option introduced by commit 806e2643, where the database has an option to compact before flushing to disk, we want the stress test to exercise both sides of the option. The stress test now deterministically and configurably changes that option across reopens.
      
      Test Plan: make db_stress; ./db_stress with some different options
      
      Reviewers: dhruba, vamsi
      
      Reviewed By: dhruba
      
      CC: leveldb, sheki
      
      Differential Revision: https://reviews.facebook.net/D9165