1. 13 Feb 2023 · 1 commit
    • remove dependency on options.h for port_posix.h and port_win.h (#11214) · 42d6652b
      Committed by Wentian Guo
      Summary:
      The files in `port/`, such as `port_posix.h`, are a layer over the system libraries, so they shouldn't include DB-specific files like `options.h`. This PR removes that dependency.
      
      # How
      The reason `port_posix.h` (and `port_win.h`) include `options.h` is to use `CpuPriority`, since `port_posix.h` declares a method `SetCpuPriority()` that takes a `CpuPriority`.
      - `SetCpuPriority()` makes sense in `port_posix.h`, as it has a platform-dependent implementation.
      - The `CpuPriority` enum is defined in `env.h` but used in both `rocksdb/include` and `port/`.
      
      Hence, we define the `CpuPriority` enum in a common file, `port_defs.h`, so that both `rocksdb/include` and `port/` can include it.
      
      Removing this dependency caused compile errors in some other files that could no longer find definitions, so the necessary header includes were added to resolve them.
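      For illustration, a minimal sketch of the shared-header idea (the enumerator names follow the public `CpuPriority` in `env.h`; treat this as a sketch rather than the exact contents of `port_defs.h`):
      
      ```
      // port/port_defs.h (sketch): definitions shared by port/ and the public
      // headers, with no dependency on DB-specific headers such as options.h.
      #pragma once
      
      #include "rocksdb/rocksdb_namespace.h"
      
      namespace ROCKSDB_NAMESPACE {
      
      enum class CpuPriority {
        kIdle = 0,
        kLow = 1,
        kNormal = 2,
        kHigh = 3,
      };
      
      }  // namespace ROCKSDB_NAMESPACE
      ```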
      
      # Test
      make all check -j
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/11214
      
      Reviewed By: pdillinger
      
      Differential Revision: D43196910
      
      Pulled By: guowentian
      
      fbshipit-source-id: 70deccb72844cfb08fcc994f76c6ef6df5d55ab9
  2. 14 Jan 2023 · 1 commit
  3. 25 Oct 2022 · 1 commit
  4. 18 Jun 2022 · 1 commit
    • Use optimized folly DistributedMutex in LRUCache when available (#10179) · 1aac8145
      Committed by Peter Dillinger
      Summary:
      folly DistributedMutex is faster than standard mutexes, though it
      imposes some static obligations on usage. See
      https://github.com/facebook/folly/blob/main/folly/synchronization/DistributedMutex.h
      for details. Here we use this alternative for our Cache implementations
      (especially LRUCache) for better locking performance, when RocksDB is
      compiled with folly.
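      
      As a hedged illustration (not the wrapper RocksDB actually ships; `ScopedDMutexLock` is a made-up name), one way to adapt folly's token-based locking to the scoped-lock pattern cache shards typically use:
      
      ```
      #include <utility>
      
      #include <folly/synchronization/DistributedMutex.h>
      
      // Sketch only: folly::DistributedMutex::lock() returns a proxy/token that
      // must be handed back to unlock(), so a RAII guard hides that obligation.
      class ScopedDMutexLock {
       public:
        explicit ScopedDMutexLock(folly::DistributedMutex& mu)
            : mu_(mu), proxy_(mu_.lock()) {}
        ~ScopedDMutexLock() { mu_.unlock(std::move(proxy_)); }
      
        ScopedDMutexLock(const ScopedDMutexLock&) = delete;
        ScopedDMutexLock& operator=(const ScopedDMutexLock&) = delete;
      
       private:
        folly::DistributedMutex& mu_;
        // Whatever proxy type lock() returns; deduced to avoid naming internals.
        decltype(std::declval<folly::DistributedMutex&>().lock()) proxy_;
      };
      ```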
      
      Also added information to cache_bench output and to the DB LOG about which
      distributed mutex implementation is being used.
      
      Intended follow-up:
      * Use DMutex in more places, perhaps improving API to support non-scoped
      locking
      * Fix linking with fbcode compiler (needs ROCKSDB_NO_FBCODE=1 currently)
      
      Credit: Thanks Siying for reminding me about this line of work that was previously
      left unfinished.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10179
      
      Test Plan:
      for correctness, existing tests. CircleCI config updated.
      Also Meta-internal buck build updated.
      
      For performance, ran simultaneous before & after cache_bench. Out of three
      comparison runs, the middle improvement to ops/sec was +21%:
      
      Baseline: USE_CLANG=1 DEBUG_LEVEL=0 make -j24 cache_bench (fbcode
      compiler)
      
      ```
      Complete in 20.201 s; Rough parallel ops/sec = 1584062
      Thread ops/sec = 107176
      
      Operation latency (ns):
      Count: 32000000 Average: 9257.9421  StdDev: 122412.04
      Min: 134  Median: 3623.0493  Max: 56918500
      Percentiles: P50: 3623.05 P75: 10288.02 P99: 30219.35 P99.9: 683522.04 P99.99: 7302791.63
      ```
      
      New: (add USE_FOLLY=1)
      
      ```
      Complete in 16.674 s; Rough parallel ops/sec = 1919135  (+21%)
      Thread ops/sec = 135487
      
      Operation latency (ns):
      Count: 32000000 Average: 7304.9294  StdDev: 108530.28
      Min: 132  Median: 3777.6012  Max: 91030902
      Percentiles: P50: 3777.60 P75: 10169.89 P99: 24504.51 P99.9: 59721.59 P99.99: 1861151.83
      ```
      
      Reviewed By: anand1976
      
      Differential Revision: D37182983
      
      Pulled By: pdillinger
      
      fbshipit-source-id: a17eb05f25b832b6a2c1356f5c657e831a5af8d1
  5. 16 Jun 2022 · 1 commit
    • Change the instruction used for a pause on arm64 (#10118) · 2e5a323d
      Committed by Ali Saidi
      Summary:
      While the yield instruction conceptually sounds correct, on most platforms it is
      a simple nop that doesn't delay execution anywhere close to what an x86
      pause instruction does. In other projects with spin-wait loops, an isb has been
      observed to be much closer to the x86 behavior.
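      
      A hedged sketch of the kind of spin-wait hint involved (the function name is illustrative, not the exact RocksDB helper in port/):
      
      ```
      // Platform-dependent spin-wait hint. On arm64, "isb" stalls the pipeline
      // far more like x86's "pause" than "yield", which is close to a nop.
      inline void SpinWaitPause() {
      #if defined(__i386__) || defined(__x86_64__)
        asm volatile("pause");
      #elif defined(__aarch64__)
        asm volatile("isb");
      #else
        // No architecture-specific hint; fall back to a compiler barrier only.
        asm volatile("" ::: "memory");
      #endif
      }
      ```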
      
      On a Graviton3 system, the following test improves by roughly 2x with this
      change, averaged over 20 runs:
      
      ```
      ./db_bench  -benchmarks=fillrandom -threads=64 -batch_size=1
      -memtablerep=skip_list -value_size=100 --num=100000
      -level0_slowdown_writes_trigger=9999 -level0_stop_writes_trigger=9999
      -disable_auto_compactions --max_write_buffer_number=8 -max_background_flushes=8
      --disable_wal --write_buffer_size=160000000 --block_size=16384
      --allow_concurrent_memtable_write -compression_type none
      ```
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10118
      
      Reviewed By: jay-zhuang
      
      Differential Revision: D37120578
      
      fbshipit-source-id: c20bde4298222edfab7ff7cb6d42497e7012400d
  6. 15 Jun 2022 · 1 commit
    • Modify the instructions emitted for PREFETCH on arm64 (#10117) · b550fc0b
      Committed by Ali Saidi
      Summary:
      __builtin_prefetch(...., 1) prefetches into the L2 cache on x86, while the same
      call emits a pldl3keep instruction on arm64, which doesn't seem to be close enough.
      
      Testing on Graviton3 and M1 systems with memtablerep_bench fillrandom,
      skiplist throughput increased as follows when adjusting the 1 to 2 or 3:
      ```
                 1 -> 2     1 -> 3
      ----------------------------
      Graviton3   +10%        +15%
      M1          +10%        +10%
      ```
      
      Given that prefetching into the L1 cache seems to help, I chose that conversion.
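      
      A hedged sketch of the change in terms of the builtin's locality argument (the macro name is illustrative, not the exact RocksDB macro):
      
      ```
      // __builtin_prefetch(addr, rw, locality): locality runs 0..3, where 3 means
      // "keep in all cache levels" (closest to the core). On x86 a hint of 1
      // already lands in L2, but on arm64 it becomes pldl3keep, so the hint is
      // raised there.
      #if defined(__aarch64__)
      #define SKETCH_PREFETCH(addr, rw) __builtin_prefetch((addr), (rw), 3)
      #else
      #define SKETCH_PREFETCH(addr, rw) __builtin_prefetch((addr), (rw), 1)
      #endif
      ```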
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10117
      
      Reviewed By: pdillinger
      
      Differential Revision: D37120475
      
      fbshipit-source-id: db1ef43f941445019c68316500a2250acc643d5e
  7. 27 May 2022 · 1 commit
    • Remove code that only compiles for Visual Studio versions older than 2015 (#10065) · 6c500826
      Committed by tagliavini
      Summary:
      There are currently some preprocessor checks that assume support for Visual Studio versions older than 2015 (i.e., 0 < _MSC_VER < 1900), although we don't support them any more.
      
      We removed all code that only compiles on those older versions, except third-party/ files.
      
      The ROCKSDB_NOEXCEPT symbol is now obsolete, since it always expands to noexcept; we removed it.
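      
      For reference, a sketch of the compatibility shim that becomes unnecessary once `_MSC_VER >= 1900` is guaranteed; call sites now just write `noexcept` directly:
      
      ```
      // Obsolete pattern removed by this change (sketch): pre-2015 MSVC lacked
      // noexcept, so the macro expanded to nothing there and to noexcept elsewhere.
      #if defined(_MSC_VER) && (_MSC_VER < 1900)
      #define ROCKSDB_NOEXCEPT
      #else
      #define ROCKSDB_NOEXCEPT noexcept
      #endif
      ```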
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/10065
      
      Reviewed By: pdillinger
      
      Differential Revision: D36721901
      
      Pulled By: guidotag
      
      fbshipit-source-id: a2892d365ef53cce44a0a7d90dd6b72ee9b5e5f2
  8. 19 May 2022 · 1 commit
  9. 06 May 2022 · 1 commit
    • Use std::numeric_limits<> (#9954) · 49628c9a
      Committed by sdong
      Summary:
      We still don't fully use std::numeric_limits and instead use a macro, mainly to support VS 2013. Since we now only support VS 2017 and up, this is no longer a problem. The code comment claims that MinGW still needs the macro, but we don't have a CI job running MinGW, so it's hard to validate; given that we now require C++17, it's hard to imagine MinGW would still build RocksDB yet not support std::numeric_limits<>.
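      
      A minimal sketch of the direction (the constant names are illustrative, not the exact macros being replaced):
      
      ```
      #include <cstdint>
      #include <limits>
      
      // Instead of a port-level macro kept around for old toolchains, the limits
      // come straight from the standard library.
      constexpr uint64_t kSketchMaxUint64 = std::numeric_limits<uint64_t>::max();
      constexpr int64_t kSketchMaxInt64 = std::numeric_limits<int64_t>::max();
      ```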
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9954
      
      Test Plan: See CI Runs.
      
      Reviewed By: riversand963
      
      Differential Revision: D36173954
      
      fbshipit-source-id: a35a73af17cdcae20e258cdef57fcf29a50b49e0
  10. 05 Feb 2022 · 1 commit
    • Require C++17 (#9481) · fd3e0f43
      Committed by Peter Dillinger
      Summary:
      Drop support for some old compilers by requiring C++17 standard
      (or higher). See https://github.com/facebook/rocksdb/issues/9388
      
      The first modification based on this is to remove some conditional compilation in slice.h (also
      better for ODR).
      
      Also in this PR:
      * Fix some Makefile formatting that seems to affect ASSERT_STATUS_CHECKED config in
      some cases
      * Add c_test to NON_PARALLEL_TEST in Makefile
      * Fix a clang-analyze reported "potential leak" in lru_cache_test
      * Better "compatibility" definition of DEFINE_uint32 for old versions of gflags
      * Fix a linking problem with shared libraries in Makefile (`./random_test: error while loading shared libraries: librocksdb.so.6.29: cannot open shared object file: No such file or directory`)
      * Always set ROCKSDB_SUPPORT_THREAD_LOCAL and use thread_local (from C++11)
        * TODO in later PR: clean up that obsolete flag
      * Fix a cosmetic typo in c.h (https://github.com/facebook/rocksdb/issues/9488)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9481
      
      Test Plan:
      CircleCI config substantially updated.
      
      * Upgrade to latest Ubuntu images for each release
      * Generally prefer Ubuntu 20, but keep a couple Ubuntu 16 builds with oldest supported
      compilers, to ensure compatibility
      * Remove .circleci/cat_ignore_eagain except for Ubuntu 16 builds, because this is to work
      around a kernel bug that should not affect anything but Ubuntu 16.
      * Remove designated gcc-9 build, because the default linux build now uses GCC 9 from
      Ubuntu 20.
      * Add some `apt-key add` to fix some apt "couldn't be verified" errors
      * Generally drop SKIP_LINK=1; work-around no longer needed
      * Generally `add-apt-repository` before `apt-get update` as manual testing indicated the
      reverse might not work.
      
      Travis:
      * Use gcc-7 by default (remove specific gcc-7 and gcc-4.8 builds)
      * TODO in later PR: fix s390x "Assembler messages: Error: invalid switch -march=z14" failure
      
      AppVeyor:
      * Completely dropped because we are dropping VS2015 support and CircleCI covers
      VS >= 2017
      
      Also local testing with old gflags (out of necessity when using ROCKSDB_NO_FBCODE=1).
      
      Reviewed By: mrambacher
      
      Differential Revision: D33946377
      
      Pulled By: pdillinger
      
      fbshipit-source-id: ae077c823905b45370a26c0103ada119459da6c1
  11. 23 Dec 2021 · 1 commit
    • Fixes for building RocksJava builds on s390x (#9321) · 65996dd7
      Committed by Adam Retter
      Summary:
      * Added Docker build environment for RocksJava on s390x
      * Cache alignment size for s390x was incorrectly calculated on gcc 6.4.0
      * Tighter control over which installed version of Java is used is required - build now correctly adheres to `JAVA_HOME` if it is set
      * Alpine build scripts should be used on Alpine (previously CentOS script worked by falling through to minimal gcc version)
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9321
      
      Reviewed By: mrambacher
      
      Differential Revision: D33259624
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: d791a5150581344925c3c3f9cbb9a3622d63b3b6
  12. 23 Oct 2021 · 1 commit
  13. 25 Sep 2021 · 1 commit
    • Prevent deadlock in db_stress with DbStressCompactionFilter (#8956) · 791bff5b
      Committed by Andrew Kryczka
      Summary:
      The cyclic dependency was:
      
      - `StressTest::OperateDb()` locks the mutex for key 'k'
      - `StressTest::OperateDb()` calls a function like `PauseBackgroundWork()`, which waits for pending compaction to complete.
      - The pending compaction reaches key `k` and `DbStressCompactionFilter::FilterV2()` calls `Lock()` on that key's mutex, which hangs forever.
      
      The cycle can be broken by using a new function, `port::Mutex::TryLock()`, which returns immediately upon failure to acquire a lock. In that case `DbStressCompactionFilter::FilterV2()` can just decide to keep the key.
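      
      A hedged sketch of the filter-side logic described above (all names are illustrative, using std::mutex::try_lock in place of the new port::Mutex::TryLock):
      
      ```
      #include <mutex>
      
      enum class SketchDecision { kKeep, kRemove };
      
      // If the per-key mutex cannot be acquired immediately, keep the key rather
      // than block: the thread holding it may itself be waiting for compaction.
      SketchDecision FilterKeySketch(std::mutex& key_mutex, bool want_remove) {
        if (!key_mutex.try_lock()) {
          return SketchDecision::kKeep;  // waiting here is what caused the deadlock
        }
        SketchDecision d = want_remove ? SketchDecision::kRemove
                                       : SketchDecision::kKeep;
        key_mutex.unlock();
        return d;
      }
      ```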
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8956
      
      Reviewed By: riversand963
      
      Differential Revision: D31183718
      
      Pulled By: ajkr
      
      fbshipit-source-id: 329e4a31ce43085af174cf367ef560b5a04399c5
  14. 08 Sep 2021 · 1 commit
  15. 31 Aug 2021 · 1 commit
    • Built-in support for generating unique IDs, bug fix (#8708) · 13ded694
      Committed by Peter Dillinger
      Summary:
      Env::GenerateUniqueId() works fine on Windows and on POSIX
      where /proc/sys/kernel/random/uuid exists. Our other implementation is
      flawed and easily produces collisions in a new multi-threaded test.
      As we rely more heavily on DB session ID uniqueness, this becomes a
      serious issue.
      
      This change combines several individually suitable entropy sources
      for reliable generation of random unique IDs, with goal of uniqueness
      and portability, not cryptographic strength nor maximum speed.
      
      Specifically:
      * Moves code for getting UUIDs from the OS to port::GenerateRfcUuid
      rather than in Env implementation details. Callers are now told whether
      the operation fails or succeeds.
      * Adds an internal API GenerateRawUniqueId for generating high-quality
      128-bit unique identifiers, by combining entropy from three "tracks":
        * Lots of info from default Env like time, process id, and hostname.
        * std::random_device
        * port::GenerateRfcUuid (when working)
      * Built-in implementations of Env::GenerateUniqueId() will now always
      produce an RFC 4122 UUID string, either from platform-specific API or
      by converting the output of GenerateRawUniqueId.
      
      DB session IDs now use GenerateRawUniqueId while DB IDs (not as
      critical) try to use port::GenerateRfcUuid but fall back on
      GenerateRawUniqueId with conversion to an RFC 4122 UUID.
      
      GenerateRawUniqueId is declared and defined under env/ rather than util/
      or even port/ because of the Env dependency.
      
      Likely follow-up: enhance GenerateRawUniqueId to be faster after the
      first call and to guarantee uniqueness within the lifetime of a single
      process (imparting the same property onto DB session IDs).
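      
      A hedged sketch of the "combine several entropy tracks" idea; this is not the actual GenerateRawUniqueId implementation, and the function name and mixing constants are illustrative:
      
      ```
      #include <chrono>
      #include <cstdint>
      #include <functional>
      #include <random>
      #include <thread>
      
      // Mix a clock reading, std::random_device output, and a hash of the thread
      // id into a 128-bit value. The real code also folds in process id,
      // hostname, and other default-Env information.
      void SketchRawUniqueId(uint64_t* high, uint64_t* low) {
        std::random_device rd;
        uint64_t t = static_cast<uint64_t>(
            std::chrono::steady_clock::now().time_since_epoch().count());
        uint64_t r = (static_cast<uint64_t>(rd()) << 32) ^ rd();
        uint64_t tid = std::hash<std::thread::id>{}(std::this_thread::get_id());
        *high = (t * 0x9E3779B97F4A7C15ULL) ^ r;
        *low = (r * 0xC2B2AE3D27D4EB4FULL) ^ tid;
      }
      ```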
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8708
      
      Test Plan:
      A new mini-stress test in env_test checks the various public
      and internal APIs for uniqueness, including each track of
      GenerateRawUniqueId individually. We can't hope to verify anywhere close
      to 128 bits of entropy, but it can at least detect flaws as bad as the
      old code. Serial execution of the new tests takes about 350 ms on
      my machine.
      
      Reviewed By: zhichao-cao, mrambacher
      
      Differential Revision: D30563780
      
      Pulled By: pdillinger
      
      fbshipit-source-id: de4c9ff4b2f581cf784fcedb5f39f16e5185c364
  16. 25 Aug 2021 · 1 commit
    • Add port::GetProcessID() (#8693) · 318fe694
      Committed by Peter Dillinger
      Summary:
      Useful in some places for object uniqueness across processes.
      Currently used for generating a host-wide identifier of Cache objects
      but expected to be used soon in some unique id generation code.
      
      `int64_t` is chosen as the return type because POSIX uses a signed integer type,
      usually `int`, for `pid_t`, while Windows uses `DWORD`, which is `uint32_t`.
      
      Future work: avoid copy-pasted declarations in port_*.h, perhaps with
      port_common.h always included from port.h
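      
      A hedged sketch of the POSIX side of such a helper (the Windows build would return GetCurrentProcessId() instead; the exact RocksDB declaration is not reproduced here):
      
      ```
      #include <cstdint>
      
      #include <unistd.h>
      
      namespace port_sketch {
      
      // Signed 64-bit comfortably holds both POSIX pid_t (a signed int) and the
      // Windows DWORD (uint32_t) process identifiers.
      int64_t GetProcessID() { return static_cast<int64_t>(getpid()); }
      
      }  // namespace port_sketch
      ```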
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8693
      
      Test Plan: manual for now
      
      Reviewed By: ajkr, anand1976
      
      Differential Revision: D30492876
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 39fc2788623cc9f4787866bdb67a4d183dde7eef
  17. 27 Mar 2021 · 1 commit
  18. 05 Jun 2020 · 1 commit
  19. 29 Mar 2020 · 1 commit
    • Be able to decrease background thread's CPU priority when creating database backup (#6602) · ee50b8d4
      Committed by Cheng Chang
      Summary:
      When creating a database backup, the background threads not only consume IO resources by copying files, but also consume CPU, for example by computing checksums. During peak times, the CPU consumption of these background threads might affect online queries.
      
      This PR makes it possible to decrease CPU priority of these threads when creating a new backup.
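      
      A hedged sketch of the underlying Linux mechanism (not the option surface this PR adds to the backup API): lowering the calling thread's nice value so copy and checksum work yields CPU to foreground queries.
      
      ```
      #include <sys/resource.h>
      #include <sys/syscall.h>
      #include <unistd.h>
      
      // On Linux, setpriority(PRIO_PROCESS, tid, ...) with a thread id adjusts
      // only that thread. 19 is the lowest (most yielding) nice value.
      void LowerCurrentThreadCpuPrioritySketch() {
        const pid_t tid = static_cast<pid_t>(syscall(SYS_gettid));
        setpriority(PRIO_PROCESS, static_cast<id_t>(tid), 19);
      }
      ```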
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6602
      
      Test Plan: make check
      
      Reviewed By: siying, zhichao-cao
      
      Differential Revision: D20683216
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 9978b9ed9488e8ce135e90ca083e5b4b7221fd84
  20. 23 Feb 2020 · 1 commit
  21. 21 Feb 2020 · 1 commit
    • Replace namespace name "rocksdb" with ROCKSDB_NAMESPACE (#6433) · fdf882de
      Committed by sdong
      Summary:
      When dynamically linking two binaries together, different builds of RocksDB from two sources might cause symbol conflicts. To give users a tool to solve this problem, the RocksDB namespace is changed to a macro that can be overridden at build time.
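      
      The mechanism, sketched (this mirrors the shape of rocksdb_namespace.h but is not a verbatim copy):
      
      ```
      // If the build does not override it (e.g. -DROCKSDB_NAMESPACE=myrocks),
      // the namespace defaults to the familiar "rocksdb".
      #ifndef ROCKSDB_NAMESPACE
      #define ROCKSDB_NAMESPACE rocksdb
      #endif
      
      namespace ROCKSDB_NAMESPACE {
      // ... all RocksDB symbols live here ...
      }  // namespace ROCKSDB_NAMESPACE
      ```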
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6433
      
      Test Plan: Build release, all, and jtest. Also try building with ROCKSDB_NAMESPACE overridden via a build flag.
      
      Differential Revision: D19977691
      
      fbshipit-source-id: aa7f2d0972e1c31d75339ac48478f34f6cfcfb3e
  22. 17 Sep 2019 · 2 commits
    • Refactor/consolidate legacy Bloom implementation details (#5784) · 68626249
      Committed by Peter Dillinger
      Summary:
      Refactoring to consolidate implementation details of legacy
      Bloom filters. This helps to organize and document some related,
      obscure code.
      
      Also added make/cpp var TEST_CACHE_LINE_SIZE so that it's easy to
      compile and run unit tests for non-native cache line size. (Fixed a
      related test failure in db_properties_test.)
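      
      A hedged sketch of how such a make/cpp variable can feed into the code (the constant name is illustrative, not the exact port macro):
      
      ```
      #include <cstddef>
      
      // Tests can be compiled with -DTEST_CACHE_LINE_SIZE=128U (or 256U) to
      // exercise code paths for non-native cache line sizes; otherwise a typical
      // native default is used.
      #ifdef TEST_CACHE_LINE_SIZE
      constexpr size_t kSketchCacheLineSize = TEST_CACHE_LINE_SIZE;
      #else
      constexpr size_t kSketchCacheLineSize = 64;
      #endif
      ```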
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5784
      
      Test Plan:
      make check, including recently added Bloom schema unit tests
      (in ./plain_table_db_test && ./bloom_test), and including with
      TEST_CACHE_LINE_SIZE=128U and TEST_CACHE_LINE_SIZE=256U. Tested the
      schema tests with temporary fault injection into new implementations.
      
      Some performance testing with modified unit tests suggest a small to moderate
      improvement in speed.
      
      Differential Revision: D17381384
      
      Pulled By: pdillinger
      
      fbshipit-source-id: ee42586da996798910fc45ac0b6289147f16d8df
    • Revert changes from PR#5784 accidentally in PR#5780 (#5810) · d3a6726f
      Committed by Peter Dillinger
      Summary:
      This will allow us to fix history by having the code changes for PR#5784 properly attributed to it.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5810
      
      Differential Revision: D17400231
      
      Pulled By: pdillinger
      
      fbshipit-source-id: 2da8b1cdf2533cfedb35b5526eadefb38c291f09
  23. 14 Sep 2019 · 1 commit
  24. 12 Sep 2019 · 1 commit
  25. 21 Mar 2019 · 1 commit
    • Make adaptivity of LRU cache mutexes configurable (#5054) · 34f8ac0c
      Committed by Levi Tamasi
      Summary:
      The patch adds a new config option to LRUCacheOptions that enables
      users to choose whether to use an adaptive mutex for the LRU block
      cache (on platforms where adaptive mutexes are supported). The default
      is true if RocksDB is compiled with -DROCKSDB_DEFAULT_TO_ADAPTIVE_MUTEX,
      false otherwise.
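      
      A usage sketch based on the public LRUCacheOptions API (the capacity and shard values are arbitrary examples):
      
      ```
      #include <memory>
      
      #include "rocksdb/cache.h"
      
      std::shared_ptr<rocksdb::Cache> MakeAdaptiveMutexLRUCache() {
        rocksdb::LRUCacheOptions opts;
        opts.capacity = 64 << 20;        // 64 MiB, arbitrary for the example
        opts.num_shard_bits = 6;
        opts.use_adaptive_mutex = true;  // opt in where adaptive mutexes exist
        return rocksdb::NewLRUCache(opts);
      }
      ```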
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5054
      
      Differential Revision: D14542749
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 0065715ab6cf91f10444b737fed8c8aee6a8a0d2
  26. 13 Mar 2019 · 1 commit
  27. 10 Feb 2018 · 1 commit
  28. 25 Jul 2017 · 1 commit
  29. 16 Jul 2017 · 1 commit
  30. 15 Jul 2017 · 1 commit
  31. 09 May 2017 · 1 commit
  32. 28 Apr 2017 · 1 commit
  33. 22 Apr 2017 · 1 commit
  34. 07 Feb 2017 · 1 commit
    • Windows thread · 0a4cdde5
      Committed by Dmitri Smirnov
      Summary:
      Introduce new methods into the public threadpool interface:
      - Allow submission of std::functions, as they allow greater flexibility.
      - Add Joining methods to the implementation to join scheduled and submitted jobs, with
        an option to cancel jobs that did not start executing.
      - Remove ugly `#ifdefs` between the pthread and std implementations; make it uniform.
      - Introduce a pimpl for a drop-in replacement of the implementation.
      - Introduce the rocksdb::port::Thread typedef as a replacement for std::thread. On POSIX, Thread defaults to std::thread as before (see the sketch after this list).
      - Implement WindowsThread, which allocates memory in a more controllable manner than the Windows std::thread, with a replaceable implementation.
      - There should be no functionality changes.
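      
      Hedged sketch of the POSIX side of the typedef described above (the Windows build substitutes the custom WindowsThread class, which is not reproduced here; the namespace name is illustrative):
      
      ```
      #include <thread>
      
      namespace port_sketch {
      
      // On POSIX, port::Thread stays a plain std::thread; Windows builds alias a
      // WindowsThread class that controls its own allocation.
      using Thread = std::thread;
      
      }  // namespace port_sketch
      ```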
      Closes https://github.com/facebook/rocksdb/pull/1823
      
      Differential Revision: D4492902
      
      Pulled By: siying
      
      fbshipit-source-id: c74cb11
  35. 28 Dec 2016 · 1 commit
  36. 28 May 2016 · 1 commit
    • Handle overflow case of rate limiter's parameters · f62fbd2c
      Committed by sdong
      Summary: When rate_bytes_per_sec * refill_period_us_ overflows, the actual enforced rate becomes very low. Handle this case so the resulting rate stays large instead.
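      
      A hedged sketch of the overflow guard being described (function name, exact clamping value, and arithmetic are illustrative; refill_period_us is assumed to be positive):
      
      ```
      #include <cstdint>
      #include <limits>
      
      // refill bytes per period = rate_bytes_per_sec * refill_period_us / 1000000.
      // If the multiplication would overflow, clamp instead of silently wrapping
      // to a tiny (and therefore far too strict) refill size.
      int64_t CalculateRefillBytesPerPeriodSketch(int64_t rate_bytes_per_sec,
                                                  int64_t refill_period_us) {
        const int64_t kMax = std::numeric_limits<int64_t>::max();
        if (rate_bytes_per_sec > kMax / refill_period_us) {
          return kMax / 1000000;
        }
        return rate_bytes_per_sec * refill_period_us / 1000000;
      }
      ```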
      
      Test Plan: Add a unit test for it.
      
      Reviewers: IslamAbdelRahman, andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: yiwu, lightmark, leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D58929
  37. 10 Feb 2016 · 1 commit
  38. 14 Jan 2016 · 1 commit
  39. 26 Dec 2015 · 1 commit
    • support for concurrent adds to memtable · 7d87f027
      Committed by Nathan Bronson
      Summary:
      This diff adds support for concurrent adds to the skiplist memtable
      implementations.  Memory allocation is made thread-safe by the addition of
      a spinlock, with small per-core buffers to avoid contention.  Concurrent
      memtable writes are made via an additional method and don't impose a
      performance overhead on the non-concurrent case, so parallelism can be
      selected on a per-batch basis.
      
      Write thread synchronization is an increasing bottleneck for higher levels
      of concurrency, so this diff adds --enable_write_thread_adaptive_yield
      (default off).  This feature causes threads joining a write batch
      group to spin for a short time (default 100 usec) using sched_yield,
      rather than going to sleep on a mutex.  If the timing of the yield calls
      indicates that another thread has actually run during the yield then
      spinning is avoided.  This option improves performance for concurrent
      situations even without parallel adds, although it has the potential to
      increase CPU usage (and the heuristic adaptation is not yet mature).
      
      Parallel writes are not currently compatible with
      inplace updates, update callbacks, or delete filtering.
      Enable it with --allow_concurrent_memtable_write (and
      --enable_write_thread_adaptive_yield).  Parallel memtable writes
      are performance neutral when there is no actual parallelism, and in
      my experiments (SSD server-class Linux and varying contention and key
      sizes for fillrandom) they are always a performance win when there is
      more than one thread.
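      
      For reference, a hedged sketch of how these knobs map onto the Options API (the field names are the ones RocksDB exposes today; no claim that every one of them landed in this exact change):
      
      ```
      #include "rocksdb/options.h"
      
      rocksdb::Options MakeConcurrentMemtableWriteOptions() {
        rocksdb::Options options;
        options.allow_concurrent_memtable_write = true;     // parallel skiplist adds
        options.enable_write_thread_adaptive_yield = true;  // spin before sleeping
        options.write_thread_max_yield_usec = 100;          // default spin budget
        return options;
      }
      ```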
      
      Statistics are updated earlier in the write path, dropping the number
      of DB mutex acquisitions from 2 to 1 for almost all cases.
      
      This diff was motivated and inspired by Yahoo's cLSM work.  It is more
      conservative than cLSM: RocksDB's write batch group leader role is
      preserved (along with all of the existing flush and write throttling
      logic) and concurrent writers are blocked until all memtable insertions
      have completed and the sequence number has been advanced, to preserve
      linearizability.
      
      My test config is "db_bench -benchmarks=fillrandom -threads=$T
      -batch_size=1 -memtablerep=skip_list -value_size=100 --num=1000000/$T
      -level0_slowdown_writes_trigger=9999 -level0_stop_writes_trigger=9999
      -disable_auto_compactions --max_write_buffer_number=8
      -max_background_flushes=8 --disable_wal --write_buffer_size=160000000
      --block_size=16384 --allow_concurrent_memtable_write" on a two-socket
      Xeon E5-2660 @ 2.2Ghz with lots of memory and an SSD hard drive.  With 1
      thread I get ~440Kops/sec.  Peak performance for 1 socket (numactl
      -N1) is slightly more than 1Mops/sec, at 16 threads.  Peak performance
      across both sockets happens at 30 threads, and is ~900Kops/sec, although
      with fewer threads there is less performance loss when the system has
      background work.
      
      Test Plan:
      1. concurrent stress tests for InlineSkipList and DynamicBloom
      2. make clean; make check
      3. make clean; DISABLE_JEMALLOC=1 make valgrind_check; valgrind db_bench
      4. make clean; COMPILE_WITH_TSAN=1 make all check; db_bench
      5. make clean; COMPILE_WITH_ASAN=1 make all check; db_bench
      6. make clean; OPT=-DROCKSDB_LITE make check
      7. verify no perf regressions when disabled
      
      Reviewers: igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: MarkCallaghan, IslamAbdelRahman, anthony, yhchiang, rven, sdong, guyg8, kradhakrishnan, dhruba
      
      Differential Revision: https://reviews.facebook.net/D50589