1. 02 May 2015: 1 commit
  2. 24 Apr 2015: 1 commit
    • M
      Set --seed per test · 283a0429
      Committed by Mark Callaghan
      Summary:
      This is done to avoid having each thread use the same seed between runs
      of db_bench. Without this we can inflate the OS filesystem cache hit rate on
      reads for read-heavy tests, and we generally see the same key sequences get
      generated between test runs.
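
A minimal sketch of the idea, assuming db_bench is in the current directory (the benchmark name and path are assumptions; only the --seed flag comes from the commit): derive the seed from the wall clock so each run generates a different key sequence.

```shell
# Sketch: pick a per-run seed so repeated db_bench runs generate
# different key sequences instead of re-reading the same keys.
seed=$(date +%s)
echo "./db_bench --benchmarks=readrandom --seed=$seed"
```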
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D37563
      283a0429
  3. 23 Apr 2015: 1 commit
    • M
      Improve benchmark scripts · 78dbd087
      Committed by Mark Callaghan
      Summary:
      This adds:
      1) use of --level_compaction_dynamic_level_bytes=true
      2) use of --bytes_per_sync=2M
      The second is a big win for disks. The first helps in general.
      
      This also adds a new test: fillseq with 32KB values, to increase the peak
      ingest rate and make it more likely that storage limits throughput.
      
      Sample output from the first 3 tests - https://gist.github.com/mdcallag/e793bd3038e367b05d6f
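
As a sketch, the two new options and the 32KB-value fillseq variant translate to db_bench flags like this (the db_bench path and the rest of the invocation are assumptions):

```shell
# Sketch: the two options this change enables, plus the large-value
# fillseq test it adds. 2M = 2097152 bytes, 32KB = 32768 bytes.
cmd="./db_bench --benchmarks=fillseq \
  --level_compaction_dynamic_level_bytes=true \
  --bytes_per_sync=$((2 * 1024 * 1024)) \
  --value_size=$((32 * 1024))"
echo "$cmd"
```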
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D37509
      78dbd087
  4. 14 Apr 2015: 1 commit
    • M
      Get benchmark.sh loads to run faster · 9da87480
      Committed by Mark Callaghan
      Summary:
      This changes loads to use a vector memtable and disable the WAL. This also
      increases the chance we will see IO bottlenecks during loads, which is good
      for stress-testing the hardware. But I also think it is a good way to load
      data quickly, as this is a bulk operation and the WAL isn't needed.
      
      The pairs of numbers below are the MB/sec rates for fillseq and bulkload,
      using a skiplist or vector memtable, with the WAL enabled or disabled. There
      is a big benefit from using the vector memtable with the WAL disabled. Alas,
      there is also a perf bug in the use of std::sort for ordered input when the
      vector is flushed. A task is open for that.
        112, 66 - skiplist with wal
        250, 116 - skiplist without wal
        110, 108 - vector with wal
        232, 370 - vector without wal
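
A sketch of the fastest configuration from the table above (vector memtable, WAL disabled); the db_bench path and benchmark choice are assumptions:

```shell
# Sketch: bulk-load with a vector memtable and the WAL disabled,
# the "vector without wal" row from the table above.
load="./db_bench --benchmarks=fillseq \
  --memtablerep=vector \
  --disable_wal=1"
echo "$load"
```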
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36957
      9da87480
  5. 07 Apr 2015: 1 commit
    • M
      Add p99.9 and p99.99 response time to benchmark report, add new summary report · 3be82bc8
      Committed by Mark Callaghan
      Summary:
      This adds p99.9 and p99.99 response times to the benchmark report and
      adds a second report, report2.txt, that lists tests in test order rather
      than the order in which they were run, so the overwrite tests are listed
      for all thread counts, then the update tests, and so on.
      
      Also changes fillseq to compress all levels, to avoid write-amplification
      from rewriting uncompressed files when they reach the first level to be
      compressed.
      
      Increase max_write_buffer_number to avoid stalls during fillseq and make
      max_background_flushes agree with max_write_buffer_number.
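
The write-buffer tuning in the last paragraph can be sketched as follows; the value of N and the exact relationship between the two options are assumptions, since the commit gives neither:

```shell
# Sketch with an assumed buffer count; the point is keeping
# max_background_flushes in step with max_write_buffer_number
# so flushes can drain the buffers before fillseq stalls.
N=8   # hypothetical value, not from the commit
opts="--max_write_buffer_number=$N --max_background_flushes=$((N - 1))"
echo "$opts"
```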
      
      See https://gist.github.com/mdcallag/297ff4316a25cb2988f7 for an example
      of the new report (report2.txt)
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36537
      3be82bc8
  6. 31 Mar 2015: 2 commits
    • M
      Add --stats_interval_seconds to db_bench · 1bd70fb5
      Committed by Mark Callaghan
      Summary:
      The --stats_interval_seconds option determines the interval for stats
      reporting and overrides --stats_interval when set. I also changed
      tools/benchmark.sh to report stats every 60 seconds so I can avoid trying
      to figure out a good value for --stats_interval per test and per storage device.
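
A sketch of the new option in use, matching the 60-second interval benchmark.sh now uses (the db_bench path and benchmark name are assumptions):

```shell
# Sketch: report stats on a fixed time interval instead of every
# N operations, so one setting works across storage devices.
cmd="./db_bench --benchmarks=overwrite --stats_interval_seconds=60"
echo "$cmd"
```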
      
      Task ID: #6631621
      
      Test Plan:
      run tools/run_flash_bench, look at output
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36189
      1bd70fb5
    • M
      Make the benchmark scripts configurable and add tests · 99ec2412
      Committed by Mark Callaghan
      Summary:
      This makes run_flash_bench.sh configurable. Previously it was hardwired for 1B keys and tests
      ran for 12 hours each, which kept me from using it. This makes it configurable, adds more tests,
      makes the per-test duration configurable and refactors the test scripts.
      
      Adds the seekrandomwhilemerging test to db_bench which is the same as seekrandomwhilewriting except
      the writer thread does Merge rather than Put.
      
      Forces the stall-time column in compaction IO stats to use a fixed format (H:M:S), which makes
      it easier to scrape and parse. Also adds an option to AppendHumanMicros to force a fixed format,
      since automation and humans sometimes want different formats.
      
      Calls thread->stats.AddBytes(bytes); in db_bench for more tests to get the MB/sec summary
      stats in the output at test end.
      
      Adds the average ingest rate to compaction IO stats. Output now looks like:
      https://gist.github.com/mdcallag/2bd64d18be1b93adc494
      
      More information on the benchmark output is at https://gist.github.com/mdcallag/db43a58bd5ac624f01e1
      
      For benchmark.sh changes default RocksDB configuration to reduce stalls:
      * min_level_to_compress from 2 to 3
      * hard_rate_limit from 2 to 3
      * max_grandparent_overlap_factor and max_bytes_for_level_multiplier from 10 to 8
      * L0 file count triggers from 4,8,12 to 4,12,20 for (start,stall,stop)
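
The benchmark.sh defaults from the list above can be sketched as db_bench flag settings; mapping the (start, stall, stop) triggers onto these three flag names is my reading, and the surrounding invocation is an assumption:

```shell
# Sketch of the new defaults from the list above.
defaults="--min_level_to_compress=3 \
  --hard_rate_limit=3 \
  --max_grandparent_overlap_factor=8 \
  --max_bytes_for_level_multiplier=8 \
  --level0_file_num_compaction_trigger=4 \
  --level0_slowdown_writes_trigger=12 \
  --level0_stop_writes_trigger=20"
echo "$defaults"
```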
      
      Task ID: #6596829
      
      Test Plan:
      run tools/run_flash_bench.sh
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36075
      99ec2412
  7. 19 Mar 2015: 1 commit
    • M
      Add readwhilemerging benchmark · dfccc7b4
      Committed by Mark Callaghan
      Summary:
      This is like readwhilewriting but uses Merge rather than Put in the writer thread.
      I am using it for in-progress benchmarks. I don't think the other benchmarks for Merge
      cover this behavior. The purpose for this test is to measure read performance when
      readers might have to merge results. This will also benefit from work-in-progress
      to add skewed key generation.
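
A sketch of running the new test (the merge operator choice, thread count, and db_bench path are assumptions):

```shell
# Sketch: readers run point lookups while one writer issues Merge
# operations; a merge operator must be configured for Merge to work.
cmd="./db_bench --benchmarks=readwhilemerging \
  --merge_operator=put \
  --use_existing_db=1 \
  --threads=8"
echo "$cmd"
```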
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35115
      dfccc7b4
  8. 14 Mar 2015: 1 commit
    • M
      Switch to use_existing_db=1 for updaterandom and mergerandom · 58878f1c
      Committed by Mark Callaghan
      Summary:
      Without this change about half of the updaterandom reads and merge puts will be for keys that don't exist.
      I think it is better for these tests to start with a full database and use fillseq to fill it.
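
The two-step flow described above, as a sketch (only the fillseq and use_existing_db flags come from the commit; the rest of each invocation is an assumption):

```shell
# Sketch: load the full key space first, then run updaterandom
# against the existing DB so every update hits an existing key.
echo "./db_bench --benchmarks=fillseq --use_existing_db=0"
echo "./db_bench --benchmarks=updaterandom --use_existing_db=1"
```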
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35043
      58878f1c
  9. 07 Mar 2015: 1 commit
    • L
      Single threaded tests -> sync=0 Multi threaded tests -> sync=1 by default... · e126e0da
      Committed by Leonidas Galanis
      Single-threaded tests -> sync=0; multi-threaded tests -> sync=1 by default unless DB_BENCH_NO_SYNC is defined.
      
      Summary:
      Single-threaded tests -> sync=0; multi-threaded tests -> sync=1 by default unless DB_BENCH_NO_SYNC is defined.
      
      Also added updaterandom and mergerandom with putOperator. I am waiting for some results from udb on this.
      
      Test Plan:
      DB_BENCH_NO_SYNC=1 WAL_DIR=/tmp OUTPUT_DIR=/tmp/b DB_DIR=/tmp ./tools/benchmark.sh debug,bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting,updaterandom,mergerandom
      
      WAL_DIR=/tmp OUTPUT_DIR=/tmp/b DB_DIR=/tmp ./tools/benchmark.sh debug,bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting,updaterandom,mergerandom
      
      Verify sync settings
      
      Reviewers: sdong, MarkCallaghan, igor, rven
      
      Reviewed By: igor, rven
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D34185
      e126e0da
  10. 06 Jan 2015: 1 commit
    • L
      benchmark.sh won't run through all tests properly if one specifies wal_dir to... · 9d5bd411
      Committed by Leonidas Galanis
      benchmark.sh won't run through all tests properly if one specifies wal_dir to be different from the db directory.
      
      Summary:
      A command line like this to run all the tests:
      source benchmark.config.sh && nohup ./benchmark.sh 'bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting'
      where
      benchmark.config.sh is:
      export DB_DIR=/data/mysql/rocksdata
      export WAL_DIR=/txlogs/rockswal
      export OUTPUT_DIR=/root/rocks_benchmarking/output
      
      will fail for the tests that need a new DB.
      
      This change also 1) sets disable_data_sync=0 and 2) adds a debug mode to run through all the tests more quickly.
      
      Test Plan: run ./benchmark.sh 'debug,bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting' and verify that there are no complaints about WAL dir not being empty.
      
      Reviewers: sdong, yhchiang, rven, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D30909
      9d5bd411
  11. 11 Dec 2014: 1 commit
  12. 11 Oct 2014: 1 commit
  13. 13 Sep 2014: 1 commit
    • L
      standardize scripts to run RocksDB benchmarks · add22e35
      Committed by Lei Jin
      Summary:
      Hopefully these scripts will allow people to run and reproduce benchmarks easily.
      I think it is time to re-run the flash benchmarks and report results.
      Please comment if any other benchmark runs are needed.
      
      Test Plan: ran it
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: igor
      
      Subscribers: dhruba, MarkCallaghan, leveldb
      
      Differential Revision: https://reviews.facebook.net/D23139
      add22e35