1. 23 April 2013, 5 commits
    • [RocksDB] Print stack trace to stderr instead of stdout · 06d3487b
      Committed by Haobo Xu
      Summary: Some scripts (like regression_build_test.sh) redirect stdout to a tmp file and delete it on exit, so the stack trace printed on a segfault is lost. Writing it to stderr should keep the stack trace visible in the continuous build output.
      
      Test Plan: ./signal_test, make check
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10485
      06d3487b
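      A minimal sketch of the behavior this change relies on (not RocksDB code; assumes a typical shell redirection such as ./regression_build_test.sh > /tmp/out): only stdout is redirected, so anything written to stderr still reaches the terminal or CI log.
      
      // Illustration only: stderr survives a stdout-only redirect.
      #include <cstdio>
      
      int main() {
        std::printf("progress line\n");              // lost if stdout goes to a deleted tmp file
        std::fprintf(stderr, "stack trace line\n");  // still visible in the build output
        return 0;
      }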
    • Avoid global static initialization in Env::Default() · 958b9c80
      Committed by Kai Liu
      Summary:
      Mark's task description from #2316777
      
      Env::Default() comes from util/env_posix.cc
      
      This is a static global.
      
      static PosixEnv default_env;
      
      Env* Env::Default() {
        return &default_env;
      }
      
      -----
      
      These globals assume default_env was initialized first. I don't think that is safe or correct to do (http://stackoverflow.com/questions/1005685/c-static-initialization-order)
      
      const string AutoRollLoggerTest::kTestDir(
          test::TmpDir() + "/db_log_test");
      const string AutoRollLoggerTest::kLogFile(
          test::TmpDir() + "/db_log_test/LOG");
      Env* AutoRollLoggerTest::env = Env::Default();
      
      Test Plan:
      run make clean && make && make check
      But how can I know if it works in Ubuntu?
      
      Reviewers: MarkCallaghan, chip
      
      Reviewed By: chip
      
      CC: leveldb, dhruba, haobo
      
      Differential Revision: https://reviews.facebook.net/D10491
      958b9c80
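      One common way to sidestep the initialization-order problem described above is a construct-on-first-use (function-local static) Env::Default(); the following is only a sketch of that idea, not necessarily the exact shape of the patch.
      
      // Sketch: the default env is built the first time Default() is called,
      // so other globals can safely call Env::Default() during their own initialization.
      class Env {
       public:
        virtual ~Env() {}
        static Env* Default();
      };
      
      class PosixEnv : public Env {};
      
      Env* Env::Default() {
        static PosixEnv default_env;   // constructed on first call, not at program start
        return &default_env;           // note: initialization is thread-safe only from C++11 on
      }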
    • [RocksDB] Move table.h to table/ · eb6d1396
      Committed by Haobo Xu
      Summary:
      - There is no point in exposing table.h to the public.
      - Fixed make clean to also remove *.d files.
      
      Test Plan: make check; db_stress
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10479
      eb6d1396
    • [RocksDB] Fix ReadMissing in db_bench · 344e832f
      Committed by Abhishek Kona
      Summary: D8943 broke read_missing. Fix it by appending a "." to the generated key so the lookup never matches a stored key.
      
      Test Plan: generate, print and check the key has a "."
      
      Reviewers: dhruba, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10455
      344e832f
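      A hypothetical illustration of the idea (helper names invented, not the db_bench code): the write phase stores keys in one format, and the read-missing phase appends a "." so its lookups can never hit a stored key.
      
      #include <cstdio>
      #include <string>
      
      // Key actually stored by the benchmark's write phase.
      std::string StoredKey(long n) {
        char buf[32];
        std::snprintf(buf, sizeof(buf), "%016ld", n);
        return std::string(buf);
      }
      
      // Key used by the read-missing phase: same digits plus a trailing '.',
      // which guarantees the read misses.
      std::string MissingKey(long n) {
        return StoredKey(n) + ".";
      }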
    • Initialize parameters in the constructor. · 3cb7bf81
      Committed by Dhruba Borthakur
      Summary:
      RocksDB doesn't build on an Ubuntu VM; this should be fixed with this patch.
      
      g++ --version
      g++ (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
      
      util/env_posix.cc:68:24: sorry, unimplemented: non-static data member initializers
      util/env_posix.cc:68:24: error: ISO C++ forbids in-class initialization of non-const static member ‘use_os_buffer’
      util/env_posix.cc:113:24: sorry, unimplemented: non-static data member initializers
      util/env_posix.cc:113:24: error: ISO C++ forbids in-class initialization of non-const static member ‘use_os_buffer’
      
      Test Plan: make check
      
      Reviewers: sheki, leveldb
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D10461
      3cb7bf81
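      The errors come from C++11 non-static data member initializers, which gcc 4.6 does not implement. A minimal sketch of the kind of change involved (the class name is hypothetical; only the field name comes from the error message):
      
      // Before (in-class initializer, rejected by gcc 4.6):
      //   class PosixFileExample {
      //     bool use_os_buffer = true;
      //   };
      
      // After: initialize the member in the constructor instead.
      class PosixFileExample {
       public:
        PosixFileExample() : use_os_buffer(true) {}
       private:
        bool use_os_buffer;
      };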
  2. 21 April 2013, 4 commits
    • [RocksDB] CompactionFilter cleanup · b4243e5a
      Committed by Haobo Xu
      Summary:
      - Removed compaction_filter_value from the callback interface; the compaction filter is now restricted to purging values.
      - Modified some comments to reflect the current status.
      
      Test Plan: make check
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10335
      b4243e5a
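      A hypothetical shape for a purge-only callback (the signature is invented for illustration and is not the exact interface of this revision): the filter can only decide whether a key/value pair is dropped during compaction, not rewrite the value.
      
      #include <string>
      
      // Return true to purge the entry during compaction, false to keep it unchanged.
      using PurgeOnlyCompactionFilter =
          bool (*)(int level, const std::string& key, const std::string& existing_value);
      
      // Example policy: drop entries whose value is empty.
      static bool DropEmptyValues(int /*level*/, const std::string& /*key*/,
                                  const std::string& existing_value) {
        return existing_value.empty();
      }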
    • Add --writes_per_second rate limit, print p99.99 in histogram · b1ff9ac9
      Committed by Mark Callaghan
      Summary:
      Adds the --writes_per_second rate limit for the readwhilewriting test.
      The purpose is to optionally avoid saturating storage with writes and compaction,
      and to measure read response time while some writes are being done.
      
      Changes the histogram code to also print the p99.99 value.
      
      Test Plan:
      make check, ran db_bench with it
      
      Reviewers: haobo
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10305
      b1ff9ac9
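      A minimal sketch of how a writes-per-second cap can be enforced in a benchmark loop (helper invented, not the db_bench implementation): after each write, sleep just long enough that the average rate stays at or below the target.
      
      #include <chrono>
      #include <thread>
      
      // Sleep as needed so that `done` writes have taken at least done/limit seconds in total.
      void RateLimitWrites(long done, long writes_per_second,
                           std::chrono::steady_clock::time_point start) {
        if (writes_per_second <= 0) return;   // treat 0 as unlimited
        std::chrono::duration<double> min_elapsed(
            static_cast<double>(done) / writes_per_second);
        auto elapsed = std::chrono::steady_clock::now() - start;
        if (elapsed < min_elapsed) {
          std::this_thread::sleep_for(min_elapsed - elapsed);
        }
      }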
    • [RocksDB] fix build · e0b60923
      Committed by Haobo Xu
      Summary: forgot to include signal_test.cc
      
      Test Plan: make check
      
      Reviewers: sheki
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10281
      e0b60923
    • [RocksDB] Add stacktrace signal handler · 1255dcd4
      Committed by Haobo Xu
      Summary:
      This diff provides the ability to print out a stack trace when the process receives certain signals.
      Currently, this is enabled for the following program-error-related signals:
      SIGILL SIGSEGV SIGBUS SIGABRT
      An application simply #includes "util/stack_trace.h" and calls leveldb::InstallStackTraceHandler() during initialization, if the signal handler is needed. It is not done automatically when opening the db, because installing signal handlers is the application's (process's) responsibility, and some applications might already have their own (like fbcode).
      
      Sample output:
      Received signal 11 (Segmentation fault)
      #0  0x408ff0 ./signal_test() [0x408ff0] /home/haobo/rocksdb/util/signal_test.cc:4
      #1  0x40827d ./signal_test() [0x40827d] /home/haobo/rocksdb/util/signal_test.cc:24
      #2  0x7f8bb183172e /usr/local/fbcode/gcc-4.7.1-glibc-2.14.1/lib/libc.so.6(__libc_start_main+0x10e) [0x7f8bb183172e] ??:0
      #3  0x408ebc ./signal_test() [0x408ebc] /home/engshare/third-party/src/glibc/glibc-2.14.1/glibc-2.14.1/csu/../sysdeps/x86_64/elf/start.S:113
      Segmentation fault (core dumped)
      
      For each frame, we print the raw pointer, the symbol provided by backtrace_symbols (still not good enough), and the source file/line. Note that address translation is done by shelling out directly to addr2line; "??:0" means addr2line failed to do the translation. Hacky, but I think it's good enough for now.
      
      Test Plan: signal_test.cc
      
      Reviewers: dhruba, MarkCallaghan
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10173
      1255dcd4
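      A small self-contained sketch of the general technique (glibc backtrace written to stderr; it omits the addr2line translation and the other details of the real handler):
      
      #include <csignal>
      #include <cstdio>
      #include <execinfo.h>
      #include <initializer_list>
      #include <unistd.h>
      
      static void StackTraceHandler(int sig) {
        std::fprintf(stderr, "Received signal %d\n", sig);
        void* frames[100];
        int n = backtrace(frames, 100);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);  // raw frames straight to stderr
        signal(sig, SIG_DFL);   // restore the default handler and re-raise so the process still dies
        raise(sig);
      }
      
      void InstallStackTraceHandlerSketch() {
        for (int sig : {SIGILL, SIGSEGV, SIGBUS, SIGABRT}) {
          signal(sig, StackTraceHandler);
        }
      }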
  3. 16 April 2013, 3 commits
  4. 13 April 2013, 1 commit
    • [RocksDB] [Performance] Speed up FindObsoleteFiles · 013e9ebb
      Committed by Haobo Xu
      Summary:
      FindObsoleteFiles was slow and held the single big lock, which resulted in bad p99 behavior.
      Nothing was profiled, but several things could be improved:
      1. VersionSet::AddLiveFiles worked with std::set, which is slow by itself (a tree),
         and you don't know how many dynamic allocations occur just to build up that tree.
         Switched to std::vector, and added logic to pre-calculate the total size and do just one allocation.
      2. There is no reason env_->GetChildren() needs to be mutex protected; moved it to PurgeObsoleteFiles,
         where the mutex can be unlocked.
      3. Switched std::set to std::unordered_set; the conversion from vector also happens inside PurgeObsoleteFiles.
      I have a feeling this should pretty much fix it.
      
      Test Plan: make check;  db_stress
      
      Reviewers: dhruba, heyongqiang, MarkCallaghan
      
      Reviewed By: dhruba
      
      CC: leveldb, zshao
      
      Differential Revision: https://reviews.facebook.net/D10197
      013e9ebb
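      A hypothetical sketch of the pattern described in items 1 and 3 (names invented): collect live file numbers into a pre-sized std::vector while the mutex is held, then build the std::unordered_set used for membership checks after the lock is released.
      
      #include <cstddef>
      #include <cstdint>
      #include <unordered_set>
      #include <vector>
      
      // Under the db mutex: append into a vector that is reserved up front.
      void AddLiveFilesSketch(const std::vector<std::vector<uint64_t>>& levels,
                              std::vector<uint64_t>* live) {
        std::size_t total = 0;
        for (const auto& level : levels) total += level.size();
        live->reserve(live->size() + total);              // one allocation instead of many
        for (const auto& level : levels) {
          live->insert(live->end(), level.begin(), level.end());
        }
      }
      
      // After unlocking: O(1) membership checks while scanning directory entries.
      std::unordered_set<uint64_t> ToLiveSet(const std::vector<uint64_t>& live) {
        return std::unordered_set<uint64_t>(live.begin(), live.end());
      }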
  5. 12 April 2013, 3 commits
  6. 11 April 2013, 5 commits
    • Release 1.5.9.fb to third party · 6936f9bb
      Committed by Mayank Agarwal
      Summary: new release 1.5.9.fb
      
      Test Plan: release
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D10149
      6936f9bb
    • Exit and join the background compaction threads while running rocksdb tests · 6594fef7
      Committed by Mayank Agarwal
      Summary:
      The background compaction threads were never exited and therefore caused
      memory leaks while running rocksdb tests. Changed the PosixEnv destructor to exit and join them, and changed the tests accordingly.
      The memory leaked has been reduced from 320 bytes to 64 bytes in all the tests. The remaining 64
      bytes are related to pthread_exit, but I still have to figure out why. The current stack trace with
      table_test.cc: 64 bytes in 1 blocks are possibly lost in loss record 4 of 5
         at 0x475D8C: malloc (jemalloc.c:914)
         by 0x400D69E: _dl_map_object_deps (dl-deps.c:505)
         by 0x4013393: dl_open_worker (dl-open.c:263)
         by 0x400F015: _dl_catch_error (dl-error.c:178)
         by 0x4013B2B: _dl_open (dl-open.c:569)
         by 0x5D3E913: do_dlopen (dl-libc.c:86)
         by 0x400F015: _dl_catch_error (dl-error.c:178)
         by 0x5D3E9D6: __libc_dlopen_mode (dl-libc.c:47)
         by 0x5048BF3: pthread_cancel_init (unwind-forcedunwind.c:53)
         by 0x5048DC9: _Unwind_ForcedUnwind (unwind-forcedunwind.c:126)
         by 0x5046D9F: __pthread_unwind (unwind.c:130)
         by 0x50413A4: pthread_exit (pthreadP.h:289)
      
      Test Plan: make all check
      
      Reviewers: dhruba, sheki, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb, chip
      
      Differential Revision: https://reviews.facebook.net/D9573
      6594fef7
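      A simplified sketch of the shutdown pattern (member names invented, not the actual PosixEnv code): the destructor sets an exit flag, wakes the background workers, and joins them so the test process does not leak thread resources.
      
      #include <condition_variable>
      #include <mutex>
      #include <thread>
      #include <vector>
      
      class BackgroundPoolSketch {
       public:
        ~BackgroundPoolSketch() {
          {
            std::lock_guard<std::mutex> lock(mu_);
            exit_ = true;                 // tell workers to stop picking up work
          }
          cv_.notify_all();               // wake any worker sleeping on the queue
          for (auto& t : threads_) t.join();
        }
      
       private:
        std::mutex mu_;
        std::condition_variable cv_;
        bool exit_ = false;
        std::vector<std::thread> threads_;
      };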
    • Set FD_CLOEXEC after each file open · e21ba94a
      Committed by heyongqiang
      Summary: As the subject says. This is causing a problem in adsconv. Ideally, this flag should be set at open time, but O_CLOEXEC is only supported with Linux kernel ≥2.6.23 and glibc ≥2.7.
      
      Test Plan: run db_test
      
      Reviewers: dhruba, MarkCallaghan, haobo
      
      Reviewed By: dhruba
      
      CC: leveldb, chip
      
      Differential Revision: https://reviews.facebook.net/D10089
      e21ba94a
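      A minimal sketch of the technique (hypothetical wrapper, assuming POSIX): set the close-on-exec flag with fcntl() right after open(), which also works on older kernels that lack O_CLOEXEC.
      
      #include <fcntl.h>
      
      // Open a file and mark its descriptor close-on-exec; returns -1 on failure.
      int OpenCloexec(const char* path, int flags) {
        int fd = open(path, flags, 0644);
        if (fd < 0) return -1;
        int fdflags = fcntl(fd, F_GETFD);
        if (fdflags != -1) {
          fcntl(fd, F_SETFD, fdflags | FD_CLOEXEC);   // not atomic, unlike O_CLOEXEC at open time
        }
        return fd;
      }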
    • Printing the options that db_crashtest.py is run with · f51b3750
      Committed by Mayank Agarwal
      Summary: So we know which options the crash test was run with. Also changed print to sys.stdout.write, which is more standard.
      
      Test Plan: python tools/db_crashtest.py
      
      Reviewers: vamsi, akushner, dhruba
      
      Reviewed By: akushner
      
      Differential Revision: https://reviews.facebook.net/D10119
      f51b3750
    • Prevent segfault in OpenCompactionOutputFile · 77305871
      Committed by Dhruba Borthakur
      Summary:
      The segfault happened because the program was unable to open a new
      sst file (as part of the compaction) after the process ran out of
      file descriptors.
      
      The fix is to check the return status of the file creation before taking
      any other action.
      
      Program received signal SIGSEGV, Segmentation fault.
      [Switching to Thread 0x7fabf03f9700 (LWP 29904)]
      leveldb::DBImpl::OpenCompactionOutputFile (this=this@entry=0x7fabf9011400, compact=compact@entry=0x7fabf741a2b0) at db/db_impl.cc:1399
      1399    db/db_impl.cc: No such file or directory.
      (gdb) where
      
      Test Plan: make check
      
      Reviewers: MarkCallaghan, sheki
      
      Reviewed By: MarkCallaghan
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D10101
      77305871
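      A hedged sketch of the fix pattern (types simplified, not the db_impl.cc code): check the status of the file-creation call and bail out before touching the file object.
      
      #include <memory>
      #include <string>
      
      struct Status { bool ok; std::string msg; };
      struct WritableFile {};
      
      // Stub standing in for the env call; here it always "fails" for illustration.
      Status NewWritableFileStub(const std::string& /*name*/,
                                 std::unique_ptr<WritableFile>* /*out*/) {
        return Status{false, "Too many open files"};
      }
      
      Status OpenCompactionOutputSketch(const std::string& fname,
                                        std::unique_ptr<WritableFile>* outfile) {
        Status s = NewWritableFileStub(fname, outfile);
        if (!s.ok) {
          return s;   // e.g. out of file descriptors: report the error, don't dereference
        }
        // ... safe to configure and use *outfile here ...
        return s;
      }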
  7. 10 April 2013, 1 commit
  8. 09 April 2013, 4 commits
  9. 06 April 2013, 2 commits
  10. 05 April 2013, 1 commit
  11. 04 April 2013, 2 commits
    • [Getting warning while running db_crashtest] · 2b9a360c
      Committed by Vamsi Ponnekanti
      Summary:
      When I run db_crashtest, I see a lot of warnings saying that db_stress completed
      before it was killed. To fix that, I made ops per thread a very large value so that it keeps
      running until it is killed.
      
      I also set #reopens to 0. Since we are killing the process anyway, the 'simulated crash'
      that happens during reopen may not add additional value.
      
      I usually see 10-25K ops happening before the kill. So I increased max_key from 100 to
      1000 so that we use more distinct keys.
      
      Test Plan:
      Ran a few times.
      
      Revert Plan: OK
      
      Reviewers: emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9909
      2b9a360c
    • Release 1.5.8.2.fb · d0c46a65
      Committed by Mayank Agarwal
      Test Plan: make all check
      
      Reviewers: sheki
      
      Reviewed By: sheki
      
      Differential Revision: https://reviews.facebook.net/D9915
      d0c46a65
  12. 03 April 2013, 3 commits
  13. 02 April 2013, 1 commit
    • Python script to periodically run and kill the db_stress test · e937d471
      Committed by Mayank Agarwal
      Summary: The script runs and kills the stress test periodically. Default values are used in the script for now. Should I make this part of the Makefile or the automated rocksdb build? The values can easily be changed in the script right now, but should I add support for variable values or input to the script? I believe the script achieves its objective of crashing unsafely and reopening to expect sanity in the database.
      
      Test Plan: python tools/db_crashtest.py
      
      Reviewers: dhruba, vamsi, MarkCallaghan
      
      Reviewed By: vamsi
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9369
      e937d471
  14. 29 March 2013, 4 commits
    • Let's get rid of delete as much as possible; here are some examples. · 645ff8f2
      Committed by Haobo Xu
      Summary:
      If a class owns an object:
       - If the object can be null => use a unique_ptr; no delete needed.
       - If the object cannot be null => you don't even need new, let alone delete.
       - For a runtime-sized array => use vector; no delete needed.
      
      Test Plan: make check
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb, zshao, sheki, emayanke, MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D9783
      645ff8f2
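      A small illustration of the three cases (hypothetical class, not taken from the patch):
      
      #include <cstddef>
      #include <memory>
      #include <string>
      #include <vector>
      
      class Logger {};
      
      class OwnerSketch {
       public:
        explicit OwnerSketch(std::size_t buf_size) : buffer_(buf_size) {}
        void EnableLogging() { optional_logger_.reset(new Logger()); }
      
       private:
        std::unique_ptr<Logger> optional_logger_;   // may be null; deleted automatically
        std::string name_;                          // never null: plain member, no new at all
        std::vector<char> buffer_;                  // runtime-sized array: vector, no delete
      };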
    • [RocksDB] Fix binary search while finding probable wal files · 3b51605b
      Committed by Abhishek Kona
      Summary:
      RocksDB does a binary search over the files that might contain the requested sequence number in the GetUpdatesSince call.
      There was a bug in the binary search: when the file pointed to by the middle index was empty or corrupt, the code needs to resize the vector and update the indexes.
      This now fixes that.
      
      Test Plan: existing unit tests pass.
      
      Reviewers: heyongqiang, dhruba
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9777
      3b51605b
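      A simplified sketch of the kind of fix described (data layout invented; the real GetUpdatesSince code differs): when the probe lands on an empty or corrupt file, remove it from the vector and re-clamp the search bounds before continuing.
      
      #include <cstdint>
      #include <vector>
      
      struct WalFileSketch {
        uint64_t start_seq;   // first sequence number in the file; 0 marks empty/corrupt here
      };
      
      // Return the index of the last file whose start_seq <= target, or -1 if none.
      long FindProbableFile(std::vector<WalFileSketch>* files, uint64_t target) {
        long lo = 0, hi = static_cast<long>(files->size()) - 1, ans = -1;
        while (lo <= hi) {
          long mid = lo + (hi - lo) / 2;
          if ((*files)[mid].start_seq == 0) {              // empty/corrupt: drop and re-clamp
            files->erase(files->begin() + mid);
            hi = static_cast<long>(files->size()) - 1;     // indexes shifted, so reset the bound
            continue;
          }
          if ((*files)[mid].start_seq <= target) {
            ans = mid;
            lo = mid + 1;
          } else {
            hi = mid - 1;
          }
        }
        return ans;
      }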
    • [Rocksdb] Fix crash on finding a db with no log files. Error out instead · 8e9c781a
      Committed by Abhishek Kona
      Summary:
      If the vector returned by GetUpdatesSince was empty, it was still returned to the
      user, which caused it to throw an std::range error.
      The probable file list is now checked, and an IOError status is returned instead of OK.
      
      Test Plan: added a unit test.
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9771
      8e9c781a
    • Use non-mmapped files for Write-Ahead Files · 7fdd5f5b
      Committed by Abhishek Kona
      Summary:
      Use non-mmapped files for the Write-Ahead log.
      The earlier use of mmapped files made the log iterator read ahead and miss records.
      Now the reader and writer will point to the same physical location.
      
      There is no perf regression:
      ./db_bench --benchmarks=fillseq --db=/dev/shm/mmap_test --num=$(million 20) --use_existing_db=0 --threads=2
      With this diff:
      fillseq      :      10.756 micros/op 185281 ops/sec;   20.5 MB/s
      Without this diff:
      fillseq      :      11.085 micros/op 179676 ops/sec;   19.9 MB/s
      
      Test Plan: unit test included
      
      Reviewers: dhruba, heyongqiang
      
      Reviewed By: heyongqiang
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9741
      7fdd5f5b
  15. 28 March 2013, 1 commit
    • Memory-manage statistics · 63f216ee
      Committed by Abhishek Kona
      Summary:
      Earlier, the Statistics object was a raw pointer. This meant the user had to clean up
      the Statistics object after creating the database. In most use cases the database is created in a function and the statistics pointer goes out of scope, so the statistics object would never be deleted.
      Now using a shared_ptr to manage this.
      
      Want this in before the next release.
      
      Test Plan: make all check.
      
      Reviewers: dhruba, emayanke
      
      Reviewed By: emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9735
      63f216ee
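      A hedged sketch of the ownership change (types simplified, not the actual RocksDB headers): holding the statistics object in a std::shared_ptr inside the options means it is released automatically when the last holder goes away, even if the creating function has long since returned.
      
      #include <memory>
      
      struct Statistics {
        long num_reads = 0;
        long num_writes = 0;
      };
      
      struct Options {
        // Before: Statistics* statistics;  the caller had to remember to delete it.
        std::shared_ptr<Statistics> statistics;   // now freed when the last reference drops
      };
      
      Options MakeOptionsSketch() {
        Options opt;
        opt.statistics = std::make_shared<Statistics>();
        return opt;   // no leak even though there is never an explicit delete
      }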