1. 09 April 2013 (3 commits)
  2. 06 April 2013 (2 commits)
  3. 05 April 2013 (1 commit)
  4. 04 April 2013 (2 commits)
    • [Getting warning while running db_crashtest] · 2b9a360c
      Committed by Vamsi Ponnekanti
      Summary:
      When I run db_crashtest, I see a lot of warnings saying that db_stress completed
      before it was killed. To fix that, I made ops per thread a very large value so that it keeps
      running until it is killed.
      
      I also set #reopens to 0. Since we are killing the process anyway, the 'simulated crash'
      that happens during reopen may not add additional value.
      
      I usually see 10-25K ops happening before the kill. So I increased max_key from 100 to
      1000 so that we use more distinct keys.
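      Purely as an illustration of the settings described above, a db_stress invocation might look
      like this (the flag spellings and the exact large value are assumptions, not taken from this diff):

          ./db_stress --ops_per_thread=100000000 --reopen=0 --max_key=1000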
      
      Test Plan:
      Ran a few times.
      
      Revert Plan: OK
      
      Task ID: #
      
      Reviewers: emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9909
    • Release 1.5.8.2.fb · d0c46a65
      Committed by Mayank Agarwal
      Test Plan: make all check
      
      Reviewers: sheki
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9915
  5. 03 April 2013 (3 commits)
  6. 02 April 2013 (1 commit)
    • Python script to periodically run and kill the db_stress test · e937d471
      Committed by Mayank Agarwal
      Summary: The script runs and kills the stress test periodically. Default values are used in the script for now. Should I make this a part of the Makefile or the automated rocksdb build? The values can easily be changed in the script right now, but should I add support for variable values or input to the script? I believe the script achieves its objective of causing unsafe crashes and then reopening the database to check that it is still sane.
      
      Test Plan: python tools/db_crashtest.py
      
      Reviewers: dhruba, vamsi, MarkCallaghan
      
      Reviewed By: vamsi
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9369
  7. 29 March 2013 (4 commits)
    • Let's get rid of delete as much as possible, here are some examples. · 645ff8f2
      Committed by Haobo Xu
      Summary:
      If a class owns an object:
       - If the object can be null => use a unique_ptr; no delete needed.
       - If the object cannot be null => hold it by value; you don't even need new, let alone delete.
       - For a runtime-sized array => use a vector; no delete needed.
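      As a minimal C++ sketch of these ownership rules (the class and member names below are made up for illustration, not taken from the diff):

          #include <cstddef>
          #include <memory>
          #include <vector>

          struct Logger {};  // placeholder types, purely for illustration
          struct Cache {};

          class Example {
           public:
            explicit Example(std::size_t n) : values_(n) {}
            void EnableCache() { cache_.reset(new Cache); }  // still no explicit delete anywhere
           private:
            Logger logger_;                 // cannot be null: held by value, no new needed
            std::unique_ptr<Cache> cache_;  // can be null: unique_ptr owns it, no delete
            std::vector<int> values_;       // runtime-sized array: vector, no delete
          };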
      
      Test Plan: make check
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb, zshao, sheki, emayanke, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D9783
    • [RocksDB] Fix binary search while finding probable wal files · 3b51605b
      Committed by Abhishek Kona
      Summary:
      When GetUpdatesSince is called, RocksDB does a binary search over the files that might contain the requested sequence number.
      There was a bug in the binary search: when the file pointed to by the middle index was empty/corrupt, the vector needs to be resized and the indexes updated.
      This diff fixes that.
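      Purely as an illustration of the pattern described above (this is a hypothetical sketch, not the actual GetUpdatesSince code; names and structure are assumptions):

          #include <cstdint>
          #include <vector>

          struct LogFile { uint64_t start_seq; bool ok; };  // ok == false: empty/corrupt

          // Binary search for the last file whose starting sequence number is <= target,
          // dropping empty/corrupt files from the candidate list as they are probed.
          int FindProbableFile(std::vector<LogFile>& files, uint64_t target) {
            int left = 0, right = static_cast<int>(files.size()) - 1;
            while (left <= right) {
              int mid = left + (right - left) / 2;
              if (!files[mid].ok) {
                // The probed file is empty/corrupt: resize the vector and update
                // the indexes before continuing the search.
                files.erase(files.begin() + mid);
                --right;
                continue;
              }
              if (files[mid].start_seq <= target) {
                left = mid + 1;
              } else {
                right = mid - 1;
              }
            }
            return right;  // candidate index, or -1 if no usable file qualifies
          }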
      
      Test Plan: existing unit tests pass.
      
      Reviewers: heyongqiang, dhruba
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9777
    • [Rocksdb] Fix Crash on finding a db with no log files. Error out instead · 8e9c781a
      Committed by Abhishek Kona
      Summary:
      If the vector returned by GetUpdatesSince is empty, it is still returned to the
      user, which causes a std::range error to be thrown.
      The probable file list is now checked, and an IOError status is returned instead of OK.
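      As an illustration of the new behaviour (a toy Status type stands in for the real one; the surrounding names are assumptions):

          #include <string>
          #include <vector>

          struct Status {
            bool ok;
            std::string msg;
            static Status OK() { return {true, ""}; }
            static Status IOError(const std::string& m) { return {false, m}; }
          };

          // Refuse to hand back an empty file list the caller would index into.
          Status CheckProbableFiles(const std::vector<std::string>& files) {
            if (files.empty()) {
              return Status::IOError("no wal files contain the requested sequence number");
            }
            return Status::OK();
          }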
      
      Test Plan: added a unit test.
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9771
    • Use non-mmapd files for Write-Ahead Files · 7fdd5f5b
      Committed by Abhishek Kona
      Summary:
      Use non-mmapped files for the write-ahead log.
      The earlier use of mmapped files made the log iterator read ahead and miss records.
      Now the reader and writer point to the same physical location.

      There is no perf regression:
      ./db_bench --benchmarks=fillseq --db=/dev/shm/mmap_test --num=$(million 20) --use_existing_db=0 --threads=2
      With this diff:
      fillseq      :      10.756 micros/op 185281 ops/sec;   20.5 MB/s
      Without this diff:
      fillseq      :      11.085 micros/op 179676 ops/sec;   19.9 MB/s
      
      Test Plan: unit test included
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9741
  8. 28 March 2013 (1 commit)
    • memory manage statistics · 63f216ee
      Committed by Abhishek Kona
      Summary:
      Earlier, the Statistics object was a raw pointer. This meant the user had to clean up
      the Statistics object after creating the database. In most use cases the database is created in a function and the statistics pointer goes out of scope, so the Statistics object would never be deleted.
      A shared_ptr is now used to manage this.

      I want this in before the next release.
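      Purely as a generic illustration of the ownership change (a stand-in Options/Statistics pair, not the actual RocksDB API):

          #include <memory>

          struct Statistics { long num_gets = 0; long num_puts = 0; };

          struct Options {
            // Before: Statistics* statistics;  // caller had to remember to delete it
            std::shared_ptr<Statistics> statistics;  // now freed automatically when unused
          };

          Options MakeOptions() {
            Options options;
            options.statistics = std::make_shared<Statistics>();
            return options;  // the shared_ptr keeps the stats alive for the DB and the caller
          }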
      
      Test Plan: make all check.
      
      Reviewers: dhruba, emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9735
  9. 27 March 2013 (1 commit)
  10. 23 March 2013 (1 commit)
    • Integrate the manifest_dump command with ldb · a8bf8fe5
      Committed by Simon Marlow
      Summary:
      Syntax:
      
         manifest_dump [--verbose] --num=<manifest_num>
      
      e.g.
      
      $ ./ldb --db=/home/smarlow/tmp/testdb manifest_dump --num=12
      manifest_file_number 13 next_file_number 14 last_sequence 3 log_number
      11  prev_log_number 0
      --- level 0 --- version# 0 ---
       6:116['a1' @ 1 : 1 .. 'a1' @ 1 : 1]
       10:130['a3' @ 2 : 1 .. 'a4' @ 3 : 1]
      --- level 1 --- version# 0 ---
      --- level 2 --- version# 0 ---
      --- level 3 --- version# 0 ---
      --- level 4 --- version# 0 ---
      --- level 5 --- version# 0 ---
      --- level 6 --- version# 0 ---
      
      Test Plan: - Tested on an example DB (see output in summary)
      
      Reviewers: sheki, dhruba
      
      Reviewed By: sheki
      
      CC: leveldb, heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D9609
  11. 22 March 2013 (4 commits)
  12. 21 March 2013 (4 commits)
    • Run compactions even if workload is readonly or read-mostly. · d0798f67
      Committed by Dhruba Borthakur
      Summary:
      The events that trigger compaction:
      * opening the database
      * Get -> only if seek compaction is not disabled and other checks are true
       * MakeRoomForWrite -> when the memtable is full
      * BackgroundCall ->
        If the background thread is about to do a compaction run, it schedules
        a new background task to trigger a possible compaction. This will cause
        additional background threads to find and process other compactions that
        can run concurrently.
      
      Test Plan: ran db_bench with overwrite and readonly alternately.
      
      Reviewers: sheki, MarkCallaghan
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9579
    • Ability to configure bufferedio-reads, filesystem-readaheads and mmap-read-write per database. · ad96563b
      Committed by Dhruba Borthakur
      Summary:
      This patch allows an application to specify whether to use bufferedio,
      reads-via-mmaps and writes-via-mmaps per database. Earlier, there
      was a global static variable that was used to configure this functionality.
      
      The default setting remains the same (and is backward compatible):
       1. use bufferedio
       2. do not use mmaps for reads
       3. use mmap for writes
       4. use readaheads for reads needed for compaction
      
      I also added a parameter to db_bench to be able to explicitly specify
      whether to do readaheads for compactions or not.
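      As a sketch of the per-database shape of these settings (the field names below are hypothetical, not necessarily the ones introduced by this diff; previously the equivalent knobs were global statics):

          // Hypothetical per-database option fields mirroring the defaults listed above.
          struct Options {
            bool allow_os_buffer = true;          // 1. use bufferedio
            bool allow_mmap_reads = false;        // 2. do not use mmaps for reads
            bool allow_mmap_writes = true;        // 3. use mmap for writes
            bool readahead_compactions = true;    // 4. readaheads for compaction reads
          };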
      
      Test Plan: make check
      
      Reviewers: sheki, heyongqiang, MarkCallaghan
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9429
    • 1.5.8.1.fb release. · 2adddeef
      Committed by Dhruba Borthakur
      Summary:
      
      Test Plan:
      
      Reviewers:
      
      CC:
      
      Task ID: #
      
      Blame Rev:
    • Removing boost from ldb_cmd.cc · a6f42754
      Committed by Mayank Agarwal
      Summary: Getting rid of boost in our github codebase, which caused problems for third-party builds.
      
      Test Plan: make ldb; python tools/ldb_test.py
      
      Reviewers: sheki, dhruba
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9543
  13. 20 March 2013 (5 commits)
  14. 19 March 2013 (1 commit)
  15. 16 March 2013 (1 commit)
  16. 15 March 2013 (2 commits)
    • Doing away with boost in ldb_cmd.h · a78fb5e8
      Committed by Mayank Agarwal
      Summary: boost functions cause complications when deploying to third-party environments.
      
      Test Plan: make
      
      Reviewers: sheki, dhruba
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9441
    • Enhance db_bench · 5a8c8845
      Committed by Mark Callaghan
      Summary:
      Add --benchmarks=updaterandom for read-modify-write workloads. This is different
      from --benchmarks=readrandomwriterandom in a few ways. First, an "operation" is the
      combined time to do the read & write rather than treating them as two ops. Second,
      the same key is used for the read & write.
      
      Change RandomGenerator to support rows larger than 1M. It previously used "assert"
      to fail, and assert is compiled away when -DNDEBUG is used.
      
      Add more options to db_bench:
      --duration - sets the number of seconds for tests to run. When not set, the
      operation count continues to be the limit. This is used by the random operation
      tests.

      --use_snapshot - when set, GetSnapshot() is called prior to each random read.
      This is to measure the overhead of using snapshots.

      --get_approx - when set, GetApproximateSizes() is called prior to each random
      read. This is to measure the overhead for a query optimizer.
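      For example, an illustrative invocation combining the new flags (the values here are arbitrary and the exact flag syntax is an assumption):

          ./db_bench --benchmarks=updaterandom --duration=60 --use_snapshot=1 --get_approx=1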
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      run db_bench
      
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D9267
  17. 14 March 2013 (2 commits)
    • Updating fbcode.gcc471.sh to use jemalloc 3.3.1 · e93dc3c0
      Committed by Mayank Agarwal
      Summary: Updated TOOL_CHAIN_LIB_BASE to use the third-party version of jemalloc-3.3.1, which contains a bug fix in quarantine.cc. This was detected while debugging valgrind issues with the rocksdb table_test.
      
      Test Plan: make table_test;valgrind --leak-check=full ./table_test
      
      Reviewers: dhruba, sheki, vamsi
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9387
    • Use posix_fallocate as default. · 1ba5abca
      Committed by Abhishek Kona
      Summary:
      ftruncate does not report an error on disk-full. This causes a SIGBUS when
      the database tries to issue a Put call on a full disk.

      Use posix_fallocate for allocation instead of ftruncate.
      Add a check to use mmapped files only on ext4, xfs and tmpfs, as
      posix_fallocate is very slow on ext3 and older filesystems.
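      As a hedged sketch of the new behaviour (illustrative only, not the diff's actual code): posix_fallocate reports a full disk through its return value up front, instead of the process taking a SIGBUS later when a mmapped page cannot be backed.

          #include <fcntl.h>
          #include <cstdio>

          // Reserve 'len' bytes for the file up front. A full disk shows up here
          // as an ENOSPC return value rather than as a SIGBUS on a later write.
          bool ReserveSpace(int fd, off_t len) {
            int err = posix_fallocate(fd, /*offset=*/0, len);
            if (err != 0) {  // returns an errno value directly; it does not set errno
              std::fprintf(stderr, "posix_fallocate failed: %d\n", err);
              return false;
            }
            return true;
          }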
      
      Test Plan: make all check
      
      Reviewers: dhruba, chip
      
      Reviewed By: dhruba
      
      CC: adsharma, leveldb
      
      Differential Revision: https://reviews.facebook.net/D9291
  18. 13 March 2013 (2 commits)