1. 02 May, 2015 (1 commit)
  2. 28 Apr, 2015 (1 commit)
    • Add scripts to run leveldb benchmark · a087f80e
      Committed by Mark Callaghan
      Summary:
      This runs a benchmark for LevelDB similar to what we have
      in tools/run_flash_bench.sh. It requires changes to db_bench that I published
      in a LevelDB fork on github.  Some results are at:
      http://smalldatum.blogspot.com/2015/04/comparing-leveldb-and-rocksdb-take-2.html
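
      For context, a hedged sketch of the environment-variable interface the existing RocksDB
      scripts use (taken from tools/benchmark.sh usage elsewhere in this log); the LevelDB script
      added here is described as similar, but its exact name and options are not shown in this
      message, so the paths and test list below are placeholders:

      ```lang=bash
      # Pattern used by the existing RocksDB scripts (tools/benchmark.sh shown here);
      # the new LevelDB script is described as following a similar structure.
      export DB_DIR=/data/bench_db        # database directory (placeholder)
      export WAL_DIR=/data/bench_wal      # WAL directory (placeholder)
      export OUTPUT_DIR=/tmp/bench_out    # per-test output and summary report
      mkdir -p ${OUTPUT_DIR}
      ./tools/benchmark.sh fillseq,overwrite,readrandom,readwhilewriting
      ```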
      
      Sample output:
      ops/sec	mb/sec	usec/op	avg	p50	Test
      525	16.4	1904.5	1904.5	111.0	fillseq.v32768
      75187	15.5	13.3	13.3	4.4	fillseq.v200
      28328	5.8	35.3	35.3	4.7	overwrite.t1.s0
      175438	0.0	5.7	5.7	4.4	readrandom.t1
      28490	5.9	35.1	35.1	4.7	overwrite.t1.s0
      121951	0.0	8.2	8.2	5.7	readwhilewriting.t1
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D37749
      a087f80e
  3. 26 Apr, 2015 (1 commit)
  4. 25 Apr, 2015 (1 commit)
  5. 24 Apr, 2015 (1 commit)
    • Set --seed per test · 283a0429
      Committed by Mark Callaghan
      Summary:
      This is done to avoid having each thread use the same seed between runs
      of db_bench. Without this, we can inflate the OS filesystem cache hit rate on
      reads for read-heavy tests, and we generally see the same key sequences get
      generated between test runs.
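
      A minimal sketch of the idea using db_bench's --seed option directly; the date-based
      seed derivation and the other flag values here are only an illustration, not what the
      benchmark scripts necessarily do:

      ```lang=bash
      # Use a different seed per run so threads don't replay the same key sequence
      # across invocations (the date-based derivation is illustrative only).
      seed=$(date +%s)
      ./db_bench --benchmarks=readrandom --use_existing_db=1 --num=1000000 \
          --threads=8 --seed=${seed}
      ```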
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D37563
      283a0429
  6. 23 Apr, 2015 (1 commit)
    • Improve benchmark scripts · 78dbd087
      Committed by Mark Callaghan
      Summary:
      This adds:
      1) use of --level_compaction_dynamic_level_bytes=true
      2) use of --bytes_per_sync=2M
      The second is a big win for disks. The first helps in general.
      
      This also adds a new test, fillseq with 32kb values, to increase the peak
      ingest rate and make it more likely that storage limits throughput.
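
      A hedged example of how these settings map onto db_bench flags; the key count is
      arbitrary, and the assumption is that db_bench exposes both options directly (the
      commit quotes the sync size as 2M, written out in bytes below):

      ```lang=bash
      # 1) dynamic level sizing, 2) sync file data incrementally every 2MB of writes,
      # plus the new fillseq variant with 32KB values to raise peak ingest.
      ./db_bench --benchmarks=fillseq \
          --level_compaction_dynamic_level_bytes=true \
          --bytes_per_sync=$((2 * 1024 * 1024)) \
          --value_size=32768 --num=1000000
      ```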
      
      Sample output from the first 3 tests - https://gist.github.com/mdcallag/e793bd3038e367b05d6f
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D37509
      78dbd087
  7. 14 Apr, 2015 (1 commit)
    • Get benchmark.sh loads to run faster · 9da87480
      Committed by Mark Callaghan
      Summary:
      This changes loads to use a vector memtable and to disable the WAL. This also
      increases the chance we will see IO bottlenecks during loads, which is good for
      stress testing hardware. But I also think it is a good way to load data quickly,
      as this is a bulk operation and the WAL isn't needed.
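
      A hedged sketch of the load settings described above, expressed as db_bench flags
      (benchmark.sh wires these up itself; the key count and value size here are arbitrary):

      ```lang=bash
      # Bulk load: vector memtable (append-only, sorted once at flush time) and no WAL.
      ./db_bench --benchmarks=fillseq --memtablerep=vector --disable_wal=1 \
          --num=1000000 --value_size=200
      ```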
      
      The two numbers per line below are the MB/sec rates for fillseq and bulkload,
      using a skiplist or vector memtable with the WAL enabled or disabled. There is
      a big benefit from using the vector memtable with the WAL disabled. Alas, there
      is also a perf bug in the use of std::sort for ordered input when the vector is
      flushed; a task is open for that.
        112, 66 - skiplist with wal
        250, 116 - skiplist without wal
        110, 108 - vector with wal
        232, 370 - vector without wal
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36957
      9da87480
  8. 09 Apr, 2015 (1 commit)
    • Script to check whether RocksDB can read DB generated by previous releases and vice versa · ee9bdd38
      Committed by sdong
      Summary: Add a script that checks out a list of tagged releases, builds each one, and loads the same data with it. Finally, it checks out the target build and makes sure that build can successfully open the DB and read all the data. It is implemented through the ldb tool, because ldb is available in all previous builds, so we don't have to cross-build anything.
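
      The script automates the checkout and build per tag; a minimal sketch of the kind of
      ldb-based check it performs might look like the following (the paths are placeholders and
      the exact commands are an assumption, not copied from the script):

      ```lang=bash
      # Write with an older release's ldb, then verify the target build can read it back.
      /path/to/old/ldb --db=/tmp/xcompat_db put key1 value1
      /path/to/new/ldb --db=/tmp/xcompat_db get key1
      /path/to/new/ldb --db=/tmp/xcompat_db scan
      ```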
      
      Test Plan: Run the script.
      
      Reviewers: yhchiang, rven, anthony, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36639
      ee9bdd38
  9. 07 Apr, 2015 (1 commit)
    • Add p99.9 and p99.99 response time to benchmark report, add new summary report · 3be82bc8
      Committed by Mark Callaghan
      Summary:
      This adds p99.9 and p99.99 response times to the benchmark report and
      adds a second report, report2.txt, that lists tests in test order rather
      than the order in which they were run, so overwrite results are listed for
      all thread counts, then update results, and so on.
      
      Also changes fillseq to compress all levels, to avoid write-amp from rewriting
      uncompressed files when they reach the first level that compresses.
      
      Increase max_write_buffer_number to avoid stalls during fillseq and make
      max_background_flushes agree with max_write_buffer_number.
      
      See https://gist.github.com/mdcallag/297ff4316a25cb2988f7 for an example
      of the new report (report2.txt)
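
      For context, a hedged sketch of where those percentiles can come from, assuming the report
      is scraped from db_bench's histogram output (benchmark name and key count are arbitrary):

      ```lang=bash
      # --histogram=1 makes db_bench print per-test latency percentiles (including
      # P99.9 and P99.99), which report generation can then scrape.
      ./db_bench --benchmarks=readrandom --use_existing_db=1 --num=1000000 \
          --threads=8 --histogram=1
      ```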
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36537
      3be82bc8
  10. 31 Mar, 2015 (2 commits)
    • Add --stats_interval_seconds to db_bench · 1bd70fb5
      Committed by Mark Callaghan
      Summary:
      The --stats_interval_seconds option determines the interval for stats reporting
      and overrides --stats_interval when set. I also changed tools/benchmark.sh
      to report stats every 60 seconds, so I can avoid trying to figure out a
      good value for --stats_interval per test and per storage device.
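
      A small example of the new flag on a long-running test; the benchmark, key count, and
      thread count are arbitrary:

      ```lang=bash
      # Report stats every 60 seconds of wall-clock time; this overrides --stats_interval.
      ./db_bench --benchmarks=overwrite --use_existing_db=1 --num=1000000 \
          --threads=8 --stats_interval_seconds=60 --stats_per_interval=1
      ```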
      
      Task ID: #6631621
      
      Blame Rev:
      
      Test Plan:
      run tools/run_flash_bench, look at output
      
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36189
      1bd70fb5
    • Make the benchmark scripts configurable and add tests · 99ec2412
      Committed by Mark Callaghan
      Summary:
      This makes run_flash_bench.sh configurable. Previously it was hardwired for 1B keys, and tests
      ran for 12 hours each, which kept me from using it. This change makes it configurable, adds more
      tests, makes the per-test duration configurable, and refactors the test scripts.
      
      Adds the seekrandomwhilemerging test to db_bench which is the same as seekrandomwhilewriting except
      the writer thread does Merge rather than Put.
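
      A hedged sketch of invoking the new benchmark directly; the merge operator choice and the
      counts are illustrative rather than what the scripts use:

      ```lang=bash
      # Like seekrandomwhilewriting, but the writer thread issues Merge; "put" selects
      # db_bench's put merge operator.
      ./db_bench --benchmarks=seekrandomwhilemerging --merge_operator=put \
          --use_existing_db=1 --num=1000000 --threads=8
      ```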
      
      Forces the stall-time column in compaction IO stats to use a fixed format (H:M:S), which makes
      it easier to scrape and parse. Also adds an option to AppendHumanMicros to force a fixed format,
      since automation and humans sometimes want different formats.
      
      Calls thread->stats.AddBytes(bytes); in db_bench for more tests to get the MB/sec summary
      stats in the output at test end.
      
      Adds the average ingest rate to compaction IO stats. Output now looks like:
      https://gist.github.com/mdcallag/2bd64d18be1b93adc494
      
      More information on the benchmark output is at https://gist.github.com/mdcallag/db43a58bd5ac624f01e1
      
      For benchmark.sh, this changes the default RocksDB configuration to reduce stalls:
      * min_level_to_compress from 2 to 3
      * hard_rate_limit from 2 to 3
      * max_grandparent_overlap_factor and max_bytes_for_level_multiplier from 10 to 8
      * L0 file count triggers from 4,8,12 to 4,12,20 for (start,stall,stop)
      
      Task ID: #6596829
      
      Blame Rev:
      
      Test Plan:
      run tools/run_flash_bench.sh
      
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36075
      99ec2412
  11. 28 Mar, 2015 (1 commit)
    • Make auto_sanity_test always use the db_sanity_test.cc of the newer commit. · cfa57640
      Committed by Yueh-Hsuan Chiang
      Summary:
      Whenever we add new tests in db_sanity_test.cc, the verification test
      will fail, since the old version of db_sanity_test.cc does not have the
      newly added test.  This patch makes auto_sanity_test.sh always use
      the db_sanity_test.cc of the newer commit.
      
      As a result, a macro guard is added to allow db_sanity_test.cc to be
      backward compatible.
      
      Test Plan: tools/auto_sanity_check.sh
      
      Reviewers: sdong, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35997
      cfa57640
  12. 19 Mar, 2015 (1 commit)
    • Add readwhilemerging benchmark · dfccc7b4
      Committed by Mark Callaghan
      Summary:
      This is like readwhilewriting but uses Merge rather than Put in the writer thread.
      I am using it for in-progress benchmarks. I don't think the other benchmarks for Merge
      cover this behavior. The purpose of this test is to measure read performance when
      readers might have to merge results. This will also benefit from work-in-progress
      to add skewed key generation.
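
      A hedged example of running the new test; the operator, counts, and duration are
      illustrative:

      ```lang=bash
      # N reader threads plus one writer thread that issues Merge instead of Put.
      ./db_bench --benchmarks=readwhilemerging --merge_operator=put \
          --use_existing_db=1 --num=1000000 --threads=8 --duration=60
      ```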
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35115
      dfccc7b4
  13. 18 Mar, 2015 (1 commit)
    • rocksdb: switch to gtest · b4b69e4f
      Committed by Igor Sugak
      Summary:
      Our existing test notation is very similar to what is used in gtest, which makes it easy to adopt what is different.
      In this diff I modify existing [[ https://code.google.com/p/googletest/wiki/Primer#Test_Fixtures:_Using_the_Same_Data_Configuration_for_Multiple_Te | test fixture ]] classes to inherit from `testing::Test`. Also, for unit tests that use a fixture class, `TEST` is replaced with `TEST_F`, as required in gtest.
      
      There are several custom `main` functions in our existing tests. To make this transition easier, I modify all `main` functions to follow gtest notation. Eventually we can remove them and use the implementation of `main` that gtest provides.
      
      ```lang=bash
      % cat ~/transform
      #!/bin/sh
      files=$(git ls-files '*test\.cc')
      for file in $files
      do
        if grep -q "rocksdb::test::RunAllTests()" $file
        then
          if grep -Eq '^class \w+Test {' $file
          then
            perl -pi -e 's/^(class \w+Test) {/${1}: public testing::Test {/g' $file
            perl -pi -e 's/^(TEST)/${1}_F/g' $file
          fi
          perl -pi -e 's/(int main.*\{)/${1}::testing::InitGoogleTest(&argc, argv);/g' $file
          perl -pi -e 's/rocksdb::test::RunAllTests/RUN_ALL_TESTS/g' $file
        fi
      done
      % sh ~/transform
      % make format
      ```
      
      Second iteration of this diff contains only scripted changes.
      
      Third iteration contains manual changes to fix last errors and make it compilable.
      
      Test Plan:
      Build and notice no errors.
      ```lang=bash
      % USE_CLANG=1 make check -j55
      ```
      Tests are still testing.
      
      Reviewers: meyering, sdong, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35157
      b4b69e4f
  14. 17 Mar, 2015 (1 commit)
    • rocksdb: Replace ASSERT* with EXPECT* in functions that do not return a void value · 9fd6edf8
      Committed by Igor Sugak
      Summary:
      gtest does not use exceptions to fail a unit test by design, and `ASSERT*`s are implemented using `return`. As a consequence, we cannot use `ASSERT*` in a function that does not return a `void` value ([[ https://code.google.com/p/googletest/wiki/AdvancedGuide#Assertion_Placement | 1]]), and we have to fix our existing code. This diff does this in a generic way, with no manual changes.
      
      In order to detect all existing `ASSERT*` uses in functions that do not return a void value, I changed the code to generate compile errors for such cases.
      
      In `util/testharness.h` I defined `EXPECT*` assertions, the same way as `ASSERT*`, and redefined `ASSERT*` to return `void`. Then executed:
      
      ```lang=bash
      % USE_CLANG=1 make all -j55 -k 2> build.log
      % perl -naF: -e 'print "-- -number=".$F[1]." ".$F[0]."\n" if  /: error:/' \
      build.log | xargs -L 1 perl -spi -e 's/ASSERT/EXPECT/g if $. == $number'
      % make format
      ```
      After that I reverted the change to `ASSERT*` in `util/testharness.h`, but preserved the introduced `EXPECT*`, which is the same as `ASSERT*`. These will be deleted once we switch to gtest.
      
      This diff is independent and contains manual changes only in `util/testharness.h`.
      
      Test Plan:
      Make sure all tests are passing.
      ```lang=bash
      % USE_CLANG=1 make check
      ```
      
      Reviewers: igor, lgalanis, sdong, yufei.zhu, rven, meyering
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D33333
      9fd6edf8
  15. 14 Mar, 2015 (1 commit)
    • Switch to use_existing_db=1 for updaterandom and mergerandom · 58878f1c
      Committed by Mark Callaghan
      Summary:
      Without this change, about half of the updaterandom reads and merge puts will be for keys that don't exist.
      I think it is better for these tests to start with a full database and use fillseq to fill it.
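
      A minimal sketch of the intended workflow (paths and counts are arbitrary): fill once, then
      run the read-modify tests against the existing, fully populated DB:

      ```lang=bash
      # Populate the database once with fillseq...
      ./db_bench --benchmarks=fillseq --db=/tmp/bench_db --num=1000000
      # ...then updaterandom/mergerandom reuse it, so reads hit keys that exist.
      ./db_bench --benchmarks=updaterandom --db=/tmp/bench_db --use_existing_db=1 --num=1000000
      ./db_bench --benchmarks=mergerandom --db=/tmp/bench_db --use_existing_db=1 \
          --merge_operator=put --num=1000000
      ```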
      
      Task ID: #
      
      Blame Rev:
      
      Test Plan:
      Revert Plan:
      
      Database Impact:
      
      Memcache Impact:
      
      Other Notes:
      
      EImportant:
      
      - begin *PUBLIC* platform impact section -
      Bugzilla: #
      - end platform impact -
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35043
      58878f1c
  16. 07 Mar, 2015 (1 commit)
    • Single threaded tests -> sync=0; multi threaded tests -> sync=1 by default unless DB_BENCH_NO_SYNC is defined · e126e0da
      Committed by Leonidas Galanis
      
      Summary:
      Single threaded tests -> sync=0; multi threaded tests -> sync=1 by default, unless DB_BENCH_NO_SYNC is defined.
      
      Also added updaterandom and mergerandom with putOperator. I am waiting for some results from udb on this.
      
      Test Plan:
      DB_BENCH_NO_SYNC=1 WAL_DIR=/tmp OUTPUT_DIR=/tmp/b DB_DIR=/tmp ./tools/benchmark.sh debug,bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting,updaterandom,mergerandom
      
      WAL_DIR=/tmp OUTPUT_DIR=/tmp/b DB_DIR=/tmp ./tools/benchmark.sh debug,bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting,updaterandom,mergerandom
      
      Verify sync settings
      
      Reviewers: sdong, MarkCallaghan, igor, rven
      
      Reviewed By: igor, rven
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D34185
      e126e0da
  17. 27 Feb, 2015 (1 commit)
    • rocksdb: Add missing override · 62247ffa
      Committed by Igor Sugak
      Summary:
      When using the latest clang (3.6 or 3.7/trunk), rocksdb fails to build with many errors. Almost all of them are missing-override errors. This diff adds the missing override keywords. No manual changes.
      
      Prerequisites: bear and clang 3.5 build with extra tools
      
      ```lang=bash
      % USE_CLANG=1 bear make all # generate a compilation database http://clang.llvm.org/docs/JSONCompilationDatabase.html
      % clang-modernize -p . -include . -add-override
      % make format
      ```
      
      Test Plan:
      Make sure all tests are passing.
      ```lang=bash
      % #Use default fb code clang.
      % make check
      ```
      Verify fewer errors and no missing-override errors.
      ```lang=bash
      % # Have trunk clang present in path.
      % ROCKSDB_NO_FBCODE=1 CC=clang CXX=clang++ make
      ```
      
      Reviewers: igor, kradhakrishnan, rven, meyering, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34077
      62247ffa
  18. 24 Feb, 2015 (1 commit)
  19. 12 Feb, 2015 (1 commit)
  20. 28 Jan, 2015 (1 commit)
    • Remove blob store from the codebase · e8bf2310
      Committed by Igor Canadi
      Summary: We don't have plans to work on this in the short term. If we ever resurrect the project, we can find the code in the history. No need for it to linger around.
      
      Test Plan: no test
      
      Reviewers: yhchiang, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D32349
      e8bf2310
  21. 15 Jan, 2015 (3 commits)
    • Change db_stress to work with format_version == 2 · 2bb05900
      Committed by Igor Canadi
      2bb05900
    • New BlockBasedTable version -- better compressed block format · 9ab5adfc
      Committed by Igor Canadi
      Summary:
      This diff adds BlockBasedTable format_version = 2. The new format version brings a better compressed block format for these compressions:
      1) Zlib -- encode the decompressed size in the compressed block header
      2) BZip2 -- encode the decompressed size in the compressed block header
      3) LZ4 and LZ4HC -- instead of doing a memcpy of size_t, encode the size as a varint32. The memcpy is very bad because the DB is not portable across big/little-endian machines, or even across platforms where size_t might be 8 or 4 bytes.
      
      It does not affect the format for snappy.
      
      If you write a new database with format_version = 2, it will not be readable by RocksDB versions before 3.10. DB::Open() will return corruption in that case.
      
      Test Plan:
      Added a new test in db_test.
      I will also run db_bench and verify VSIZE when block_cache == 1GB
      
      Reviewers: yhchiang, rven, MarkCallaghan, dhruba, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31461
      9ab5adfc
    • Add LZ4 compression to sanity test · 516a0426
      Committed by Igor Canadi
      Summary: This will be used to test format changes in https://reviews.facebook.net/D31461
      
      Test Plan: run it
      
      Reviewers: MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31515
      516a0426
  22. 06 Jan, 2015 (1 commit)
    • benchmark.sh won't run through all tests properly if one specifies wal_dir to be different than the db directory · 9d5bd411
      Committed by Leonidas Galanis
      
      Summary:
      A command line like this to run all the tests:
      source benchmark.config.sh && nohup ./benchmark.sh 'bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting'
      where
      benchmark.config.sh is:
      export DB_DIR=/data/mysql/rocksdata
      export WAL_DIR=/txlogs/rockswal
      export OUTPUT_DIR=/root/rocks_benchmarking/output
      
      Will fail for the tests that need a new DB.
      
      Also: 1) set disable_data_sync=0 and 2) add a debug mode to run through all the tests more quickly.
      
      Test Plan: run ./benchmark.sh 'debug,bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting' and verify that there are no complaints about WAL dir not being empty.
      
      Reviewers: sdong, yhchiang, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D30909
      9d5bd411
  23. 13 Dec, 2014 (1 commit)
    • Added 'dump_live_files' command to ldb tool. · cef6f843
      Committed by Qiao Yang
      Summary:
      Preliminary diff to solicit comments.
      Given a DB path, dump all SST files (keys/values and properties), the WAL file, and the manifest
      files. What command options do we need to support for this command? Maybe
      output_hex for keys?
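
      A hedged usage sketch of the new command; the path is a placeholder, and --hex is an existing
      ldb option shown here as one possible answer to the output_hex question:

      ```lang=bash
      # Dump keys/values and properties for all live SST files, plus WAL/manifest information.
      ./ldb --db=/path/to/db dump_live_files
      # Same, but with keys and values hex-encoded.
      ./ldb --db=/path/to/db --hex dump_live_files
      ```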
      
      Test Plan: Create additional ldb unit tests.
      
      Reviewers: sdong, rven
      
      Reviewed By: rven
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D29547
      cef6f843
  24. 11 Dec, 2014 (1 commit)
  25. 05 Dec, 2014 (2 commits)
    • Fix compile warning in db_stress · a5d4fc0a
      Committed by Yueh-Hsuan Chiang
      Summary:
      Fix compile warning in db_stress
      
      Test Plan:
      make db_stress
      a5d4fc0a
    • Fix compile warning in db_stress.cc on Mac · 97c19408
      Committed by Yueh-Hsuan Chiang
      Summary:
      Fix the following compile warning in db_stress.cc on Mac
      tools/db_stress.cc:1688:52: error: format specifies type 'unsigned long' but the argument has type '::google::uint64' (aka 'unsigned long long') [-Werror,-Wformat]
          fprintf(stdout, "DB-write-buffer-size: %lu\n", FLAGS_db_write_buffer_size);
                                                 ~~~     ^~~~~~~~~~~~~~~~~~~~~~~~~~
                                                 %llu
      
      Test Plan:
      make
      97c19408
  26. 03 Dec, 2014 (1 commit)
    • Enforce write buffer memory limit across column families · a14b7873
      Committed by Jonah Cohen
      Summary:
      Introduces a new class for managing write buffer memory across column
      families.  We supplement ColumnFamilyOptions::write_buffer_size with
      ColumnFamilyOptions::write_buffer, a shared pointer to a WriteBuffer
      instance that enforces memory limits before flushing out to disk.
      
      Test Plan: Added SharedWriteBuffer unit test to db_test.cc
      
      Reviewers: sdong, rven, ljin, igor
      
      Reviewed By: igor
      
      Subscribers: tnovak, yhchiang, dhruba, xjin, MarkCallaghan, yoshinorim
      
      Differential Revision: https://reviews.facebook.net/D22581
      a14b7873
  27. 25 Nov, 2014 (1 commit)
  28. 21 Nov, 2014 (1 commit)
    • first rdb commit · bafce619
      Committed by Saghm Rossi
      Summary: First commit for rdb shell
      
      Test Plan: unit_test.js does simple assertions on most of the main functionality; will update with rest of tests
      
      Reviewers: igor, rven, lijn, yhciang, sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D28749
      bafce619
  29. 15 Nov, 2014 (1 commit)
    • Make db_stress build for ROCKSDB_LITE · a177742a
      Committed by sdong
      Summary:
      Make db_stress build for ROCKSDB_LITE.
      The test doesn't pass, though; it seg faults quickly. But I took a look and it doesn't seem to be related to the lite version. It is likely a bug inside RocksDB.
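
      A hedged sketch of the build, assuming ROCKSDB_LITE is enabled by passing the define through
      the Makefile's OPT variable; the commit may wire this up differently:

      ```lang=bash
      # Assumption: the lite subset is selected with -DROCKSDB_LITE via OPT.
      OPT="-DROCKSDB_LITE" make db_stress -j32
      ```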
      
      Test Plan: make db_stress
      
      Reviewers: yhchiang, rven, ljin, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D28797
      a177742a
  30. 12 Nov, 2014 (1 commit)
    • Turn on -Wshorten-64-to-32 and fix all the errors · 767777c2
      Committed by Igor Canadi
      Summary:
      We need to turn on -Wshorten-64-to-32 for mobile. See D1671432 (internal phabricator) for details.
      
      This diff turns on the warning flag and fixes all the errors. There were also some interesting errors that I might call bugs, especially in plain table. Going forward, I think it makes sense to have this flag turned on and to be very careful when converting 64-bit variables to 32-bit.
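
      For illustration, a tiny standalone reproduction of the class of error the flag catches;
      the file name and function below are hypothetical, and the warning is clang-specific:

      ```lang=bash
      # Hypothetical one-file demo of the kind of error -Wshorten-64-to-32 catches (clang only).
      cat > shorten_demo.cc <<'EOF'
      #include <cstddef>
      int truncate_count(std::size_t n) {
        int small = n;  // implicit 64-bit -> 32-bit conversion on LP64 platforms
        return small;
      }
      EOF
      clang++ -Wshorten-64-to-32 -c shorten_demo.cc  # warns about the implicit conversion above
      ```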
      
      Test Plan: compiles
      
      Reviewers: ljin, rven, yhchiang, sdong
      
      Reviewed By: yhchiang
      
      Subscribers: bobbaldwin, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D28689
      767777c2
  31. 09 Nov, 2014 (1 commit)
  32. 08 Nov, 2014 (2 commits)
  33. 05 Nov, 2014 (1 commit)
  34. 01 Nov, 2014 (2 commits)