1. 24 Aug, 2018 · 1 commit
  2. 28 Jun, 2018 · 1 commit
  3. 22 May, 2018 · 1 commit
    • Move prefix_extractor to MutableCFOptions · c3ebc758
      Committed by Zhongyi Xie
      Summary:
      Currently it is not possible to change the bloom filter configuration without restarting the DB, which causes a lot of operational complexity for users.
      This PR aims to make it possible to dynamically change bloom filter config.
      Closes https://github.com/facebook/rocksdb/pull/3601
      
      Differential Revision: D7253114
      
      Pulled By: miasantreble
      
      fbshipit-source-id: f22595437d3e0b86c95918c484502de2ceca120c
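      Moving prefix_extractor into MutableCFOptions means readers must be able to pick up a new extractor while older readers keep a consistent view. A minimal self-contained sketch of that idea (hypothetical class names, not RocksDB's actual API), using a shared_ptr swapped under a mutex:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <memory>
      #include <mutex>
      #include <string>

      // Hypothetical stand-in for rocksdb::SliceTransform.
      struct PrefixExtractor {
        size_t len;
        explicit PrefixExtractor(size_t n) : len(n) {}
        std::string Transform(const std::string& key) const {
          return key.substr(0, std::min(len, key.size()));
        }
      };

      // Sketch of a mutable CF option: readers grab a snapshot of the
      // current extractor; a SetOptions-style update swaps the pointer.
      class MutableCFOptions {
       public:
        void SetPrefixExtractor(std::shared_ptr<const PrefixExtractor> pe) {
          std::lock_guard<std::mutex> g(mu_);
          extractor_ = std::move(pe);
        }
        std::shared_ptr<const PrefixExtractor> GetPrefixExtractor() const {
          std::lock_guard<std::mutex> g(mu_);
          return extractor_;  // snapshot stays valid even if replaced later
        }
       private:
        mutable std::mutex mu_;
        std::shared_ptr<const PrefixExtractor> extractor_ =
            std::make_shared<PrefixExtractor>(4);
      };

      int main() {
        MutableCFOptions opts;
        auto snap = opts.GetPrefixExtractor();  // reader takes a snapshot
        opts.SetPrefixExtractor(
            std::make_shared<PrefixExtractor>(2));  // live config change
        assert(snap->Transform("abcdef") == "abcd");  // old reader unaffected
        assert(opts.GetPrefixExtractor()->Transform("abcdef") == "ab");
        return 0;
      }
      ```

      The shared_ptr snapshot is what makes the change safe without a restart: in-flight reads keep the extractor they started with.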
  4. 06 Mar, 2018 · 1 commit
  5. 23 Feb, 2018 · 2 commits
  6. 20 Oct, 2017 · 1 commit
  7. 13 Sep, 2017 · 1 commit
    • Fix naming in InternalKey · 5785b1fc
      Committed by Amy Xu
      Summary:
      - Switched all instances of SetMinPossibleForUserKey and SetMaxPossibleForUserKey in accordance with InternalKeyComparator's comparison logic
      Closes https://github.com/facebook/rocksdb/pull/2868
      
      Differential Revision: D5804152
      
      Pulled By: axxufb
      
      fbshipit-source-id: 80be35e04f2e8abc35cc64abe1fecb03af24e183
  8. 12 Aug, 2017 · 2 commits
    • make sst_dump compression size command consistent · 8254e9b5
      Committed by Andrew Kryczka
      Summary:
      - Like other subcommands, reporting compression sizes is now specified with the `--command` CLI arg.
      - Also added a `--compression_types` arg, as it's useful to restrict the types of compression used, at least in my dictionary compression experiments.
      Closes https://github.com/facebook/rocksdb/pull/2706
      
      Differential Revision: D5589520
      
      Pulled By: ajkr
      
      fbshipit-source-id: 305bb4ebcc95eecc8a85523cd3b1050619c9ddc5
    • Support prefetch last 512KB with direct I/O in block based file reader · 666a005f
      Committed by Siying Dong
      Summary:
      Right now, if direct I/O is enabled, the last 512KB is not prefetched, except for compaction inputs or when readahead is enabled for iterators. This can create a lot of I/O in HDD cases. To solve the problem, the last 512KB is prefetched in the block based table if direct I/O is enabled. The prefetched buffer is passed in together with the random access file reader, so that we try to read from the buffer before reading from the file. This can be extended in the future to support flexible user iterator readahead too.
      Closes https://github.com/facebook/rocksdb/pull/2708
      
      Differential Revision: D5593091
      
      Pulled By: siying
      
      fbshipit-source-id: ee36ff6d8af11c312a2622272b21957a7b5c81e7
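      The buffer-first read path described above can be sketched as follows. This is a self-contained illustration (a hypothetical class, not RocksDB's FilePrefetchBuffer): a read is served from the prefetched tail when the requested range falls inside it, and falls back to the "file" otherwise.

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>

      // Hypothetical sketch: a prefetched tail buffer consulted before the file.
      class TailPrefetchBuffer {
       public:
        TailPrefetchBuffer(const std::string& file, uint64_t prefetch_bytes)
            : file_(file) {
          offset_ = file.size() > prefetch_bytes ? file.size() - prefetch_bytes : 0;
          buf_ = file.substr(offset_);  // simulate one large tail read
        }
        // Serve from the buffer when possible; otherwise "hit the disk".
        std::string Read(uint64_t off, uint64_t len, bool* from_buffer) const {
          if (off >= offset_ && off + len <= offset_ + buf_.size()) {
            *from_buffer = true;
            return buf_.substr(off - offset_, len);
          }
          *from_buffer = false;
          return file_.substr(off, len);  // fallback: direct file read
        }
       private:
        std::string file_;   // stands in for the on-disk SST file
        std::string buf_;    // prefetched tail (e.g. the last 512KB)
        uint64_t offset_;    // file offset where the buffer starts
      };

      int main() {
        std::string file(1000, 'x');
        file.replace(990, 5, "index");     // pretend footer/index sits at the tail
        TailPrefetchBuffer pb(file, 100);  // prefetch the last 100 bytes
        bool from_buffer = false;
        assert(pb.Read(990, 5, &from_buffer) == "index" && from_buffer);
        assert(pb.Read(0, 5, &from_buffer) == "xxxxx" && !from_buffer);
        return 0;
      }
      ```

      Since SST footers, index and filter blocks live at the end of the file, one tail prefetch absorbs the reads that table open would otherwise issue individually.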
  9. 10 Aug, 2017 · 1 commit
    • add VerifyChecksum() to db.h · 7848f0b2
      Committed by Aaron G
      Summary:
      We need a tool to check for any SST file corruption in the DB.
      It will check all the SST files in the current version and read all the blocks (data, meta, index) with checksum verification. If any verification fails, the function will return a non-OK status.
      Closes https://github.com/facebook/rocksdb/pull/2498
      
      Differential Revision: D5324269
      
      Pulled By: lightmark
      
      fbshipit-source-id: 6f8a272008b722402a772acfc804524c9d1a483b
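      Block-level checksum verification boils down to recomputing a checksum over each block's bytes and comparing it with the stored value. A minimal sketch with a toy FNV-1a checksum standing in for the CRC32c/xxHash that RocksDB actually uses (names here are hypothetical):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>
      #include <vector>

      // Toy checksum standing in for CRC32c/xxHash.
      uint32_t Checksum(const std::string& data) {
        uint32_t h = 2166136261u;  // FNV-1a
        for (unsigned char c : data) { h = (h ^ c) * 16777619u; }
        return h;
      }

      struct Block { std::string data; uint32_t stored_checksum; };

      // Sketch of a VerifyChecksum()-style pass: walk every block,
      // recompute, compare. Returns true (OK) only if all blocks verify.
      bool VerifyChecksum(const std::vector<Block>& blocks) {
        for (const Block& b : blocks) {
          if (Checksum(b.data) != b.stored_checksum) return false;  // corruption
        }
        return true;
      }

      int main() {
        std::vector<Block> blocks = {
            {"data block", Checksum("data block")},
            {"index block", Checksum("index block")},
        };
        assert(VerifyChecksum(blocks));   // all blocks intact -> OK
        blocks[0].data[0] = 'D';          // flip a byte: simulated corruption
        assert(!VerifyChecksum(blocks));  // verification now fails
        return 0;
      }
      ```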
  10. 29 Jul, 2017 · 1 commit
    • Replace dynamic_cast<> · 21696ba5
      Committed by Siying Dong
      Summary:
      Replace dynamic_cast<> so that users can choose to build with RTTI off, saving several bytes per object and gaining a little more available memory.
      Some nontrivial changes:
      1. Add Comparator::GetRootComparator() to get around the internal comparator hack
      2. Add the two experimental functions to DB
      3. Add TableFactory::GetOptionString() to avoid unnecessary casting to get the option string
      4. Since 3 is done, move the option-parsing functions for table factory into the table factory files too, to be symmetric.
      Closes https://github.com/facebook/rocksdb/pull/2645
      
      Differential Revision: D5502723
      
      Pulled By: siying
      
      fbshipit-source-id: fd13cec5601cf68a554d87bfcf056f2ffa5fbf7c
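      A common way to remove a dynamic_cast<> when RTTI is off is to replace it with a virtual hook on the base class, which is the idea behind GetRootComparator() for wrapped comparators. A self-contained sketch of that pattern (hypothetical classes, not RocksDB's):

      ```cpp
      #include <cassert>

      // Base class exposes a virtual hook instead of relying on dynamic_cast.
      class Comparator {
       public:
        virtual ~Comparator() = default;
        // By default a comparator is its own root.
        virtual const Comparator* GetRootComparator() const { return this; }
      };

      // A wrapper (like an internal key comparator) forwards to the user
      // comparator it wraps, so callers can recover it without RTTI.
      class WrappingComparator : public Comparator {
       public:
        explicit WrappingComparator(const Comparator* user) : user_(user) {}
        const Comparator* GetRootComparator() const override {
          return user_->GetRootComparator();
        }
       private:
        const Comparator* user_;
      };

      int main() {
        Comparator user;
        WrappingComparator wrapped(&user);
        // No dynamic_cast needed to get back the user comparator:
        assert(wrapped.GetRootComparator() == &user);
        assert(user.GetRootComparator() == &user);
        return 0;
      }
      ```

      The virtual hook costs one vtable entry instead of per-object RTTI metadata, which is exactly the trade the PR is after.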
  11. 22 Jul, 2017 · 2 commits
  12. 16 Jul, 2017 · 1 commit
  13. 29 Jun, 2017 · 1 commit
    • Improve Status message for block checksum mismatches · 397ab111
      Committed by Mike Kolupaev
      Summary:
      We've got some DBs where iterators return Status with message "Corruption: block checksum mismatch" all the time. That's not very informative. It would be much easier to investigate if the error message contained the file name - then we would know e.g. how old the corrupted file is, which would be very useful for finding the root cause. This PR adds file name, offset and other stuff to some block corruption-related status messages.
      
      It doesn't improve all the error messages, just a few that were easy to improve. I'm mostly interested in "block checksum mismatch" and "Bad table magic number" since they're the only corruption errors that I've ever seen in the wild.
      Closes https://github.com/facebook/rocksdb/pull/2507
      
      Differential Revision: D5345702
      
      Pulled By: al13n321
      
      fbshipit-source-id: fc8023d43f1935ad927cef1b9c55481ab3cb1339
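      Enriching the status message is mostly a matter of carrying the file name and offset through to where the error string is built; a minimal sketch (hypothetical helper, not RocksDB's Status API):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <sstream>
      #include <string>

      // Sketch: build a corruption message that names the file and offset,
      // so "block checksum mismatch" becomes actionable.
      std::string CorruptionMessage(const std::string& file, uint64_t offset,
                                    uint32_t expected, uint32_t actual) {
        std::ostringstream oss;
        oss << "Corruption: block checksum mismatch: expected " << expected
            << ", got " << actual << " in " << file << " offset " << offset;
        return oss.str();
      }

      int main() {
        std::string msg = CorruptionMessage("/db/000123.sst", 4096, 42, 17);
        assert(msg.find("/db/000123.sst") != std::string::npos);
        assert(msg.find("offset 4096") != std::string::npos);
        return 0;
      }
      ```

      With the file name in the message, the age of the corrupted file (and hence a likely root-cause window) can be read straight off the filesystem.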
  14. 28 Apr, 2017 · 1 commit
  15. 06 Apr, 2017 · 1 commit
  16. 14 Mar, 2017 · 1 commit
    • Add ability to search for key prefix in sst_dump tool · ebd5639b
      Committed by Reid Horuff
      Summary:
      Add the flag --prefix to the sst_dump tool.
      This flag is similar to, and mutually exclusive with, the --from flag.
      
      --prefix=0x00FF will return all rows prefixed with 0x00FF.
      The --to flag may also be specified and will work as expected.
      
      These changes were used to help debug the power cycle corruption issue and were tested by scanning through a udb.
      Closes https://github.com/facebook/rocksdb/pull/1984
      
      Differential Revision: D4691814
      
      Pulled By: reidHoruff
      
      fbshipit-source-id: 027f261
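      A prefix scan like --prefix is typically implemented as a seek to the prefix followed by iteration while keys still start with it. A self-contained sketch over a sorted map (illustrative only, not sst_dump's actual code):

      ```cpp
      #include <cassert>
      #include <map>
      #include <string>
      #include <vector>

      // Sketch: seek to the prefix, collect keys while they still carry it.
      std::vector<std::string> PrefixScan(
          const std::map<std::string, std::string>& kv,
          const std::string& prefix) {
        std::vector<std::string> out;
        for (auto it = kv.lower_bound(prefix); it != kv.end(); ++it) {
          // Stop as soon as a key no longer starts with the prefix.
          if (it->first.compare(0, prefix.size(), prefix) != 0) break;
          out.push_back(it->first);
        }
        return out;
      }

      int main() {
        std::map<std::string, std::string> kv = {
            {"app:1", "a"}, {"app:2", "b"}, {"base:1", "c"}, {"zoo:1", "d"}};
        std::vector<std::string> hits = PrefixScan(kv, "app:");
        assert(hits.size() == 2 && hits[0] == "app:1" && hits[1] == "app:2");
        assert(PrefixScan(kv, "none").empty());
        return 0;
      }
      ```

      This is also why --prefix and --from are mutually exclusive: the prefix itself serves as the starting key of the scan.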
  17. 04 Jan, 2017 · 1 commit
  18. 15 Dec, 2016 · 1 commit
  19. 11 Nov, 2016 · 1 commit
  20. 18 Sep, 2016 · 1 commit
  21. 07 Sep, 2016 · 1 commit
    • Support ZSTD with finalized format · 607628d3
      Committed by sdong
      Summary:
      ZSTD 1.0.0 is coming. We can finally add support for ZSTD without worrying about compatibility.
      Still keep ZSTDNotFinal for compatibility reasons.
      
      Test Plan: Run all tests. Run db_bench with ZSTD version with RocksDB built with ZSTD 1.0 and older.
      
      Reviewers: andrewkr, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: cyan, igor, IslamAbdelRahman, leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D63141
  22. 03 Sep, 2016 · 1 commit
  23. 20 Jul, 2016 · 1 commit
    • New Statistics to track Compression/Decompression (#1197) · 9430333f
      Committed by John Alexander
      * Added new statistics and refactored to allow ioptions to be passed around as required to access environment and statistics pointers (and, as a convenient side effect, info_log pointer).
      
      * Prevent incrementing compression counter when compression is turned off in options.
      
      * Added two more supported compression types to test code in db_test.cc
      
      * Added new StatsLevel that excludes compression timing.
      
      * Fixed casting error in coding.h
      
      * Fixed CompressionStatsTest for new StatsLevel.
      
      * Removed unused variable that was breaking the Linux build
  24. 20 May, 2016 · 1 commit
    • Added "number of merge operands" to statistics in ssts. · f6e404c2
      Committed by Richard Cairns Jr
      Summary:
      A couple of notes from the diff:
        - The namespace block I added at the top of table_properties_collector.cc was in reaction to an issue I was having with PutVarint64 and reusing the "val" string.  I'm not sure this is the cleanest way of doing this, but abstracting it out at least results in the correct behavior.
        - I chose "rocksdb.merge.operands" as the property name.  I am open to suggestions for better names.
        - The change to sst_dump_tool.cc seems a bit inelegant to me.  Is there a better way to do the if-else block?
      
      Test Plan:
      I added a test case in table_properties_collector_test.cc.  It adds two merge operands and checks to make sure that both of them are reflected by GetMergeOperands.  It also checks to make sure the wasPropertyPresent bool is properly set in the method.
      
      Running both of these tests should pass:
      ./table_properties_collector_test
      ./sst_dump_test
      
      Reviewers: IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D58119
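      A table-properties collector of this kind just counts merge entries as they are added and publishes the count under a named property. A self-contained sketch of the idea (hypothetical interface, not RocksDB's actual TablePropertiesCollector; the real property value is varint-encoded via PutVarint64, plain decimal is used here for clarity):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <map>
      #include <string>

      enum class EntryType { kPut, kMerge, kDelete };

      // Sketch: count merge operands during table build, then publish the
      // total under a named property such as "rocksdb.merge.operands".
      class MergeOperandCollector {
       public:
        void AddUserKey(const std::string& /*key*/, EntryType type) {
          if (type == EntryType::kMerge) ++num_merge_operands_;
        }
        void Finish(std::map<std::string, std::string>* props) const {
          (*props)["rocksdb.merge.operands"] =
              std::to_string(num_merge_operands_);
        }
       private:
        uint64_t num_merge_operands_ = 0;
      };

      int main() {
        MergeOperandCollector c;
        c.AddUserKey("k1", EntryType::kPut);
        c.AddUserKey("k1", EntryType::kMerge);
        c.AddUserKey("k2", EntryType::kMerge);
        std::map<std::string, std::string> props;
        c.Finish(&props);
        assert(props["rocksdb.merge.operands"] == "2");
        return 0;
      }
      ```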
  25. 07 May, 2016 · 1 commit
    • [ldb] Export ldb_cmd*.h · 04dec2a3
      Committed by Arun Sharma
      Summary:
      This is needed so that rocksdb users can extend the
      included ldb tool by adding their own custom commands.
      
      Test Plan: make -j ldb
      
      Reviewers: sdong, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D57243
  26. 03 May, 2016 · 1 commit
  27. 28 Apr, 2016 · 1 commit
    • Shared dictionary compression using reference block · 843d2e31
      Committed by Andrew Kryczka
      Summary:
      This adds a new metablock containing a shared dictionary that is used
      to compress all data blocks in the SST file. The size of the shared dictionary
      is configurable in CompressionOptions and defaults to 0. It's currently only
      used for zlib/lz4/lz4hc, but the block will be stored in the SST regardless of
      the compression type if the user chooses a nonzero dictionary size.
      
      During compaction, the dictionary is computed by randomly sampling the
      first output file in each subcompaction. The sampling intervals are
      pre-computed by assuming the output file will have the maximum allowable
      length. If the file turns out smaller, some of the pre-computed intervals
      can fall beyond end-of-file; those samples are skipped and the dictionary
      ends up a bit smaller. After the dictionary is generated from the first
      file in a subcompaction, it is loaded into the compression library before
      writing each block in each subsequent file of that subcompaction.
      
      On the read path, the reader gets the dictionary from the metablock, if it
      exists, and loads it into the compression library before reading each block.
      
      Test Plan: new unit test
      
      Reviewers: yhchiang, IslamAbdelRahman, cyan, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, yoshinorim, kradhakrishnan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D52287
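      The interval pre-computation above can be sketched as: pick sample offsets assuming the maximum file length, then drop any that land past the actual end-of-file. A self-contained illustration (hypothetical function, not the compaction code):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <vector>

      // Sketch: pre-compute evenly spaced sample offsets assuming the file
      // will reach max_file_len; samples past actual_file_len are skipped,
      // which simply yields a slightly smaller dictionary.
      std::vector<uint64_t> SampleOffsets(uint64_t max_file_len,
                                          uint64_t sample_len,
                                          uint64_t num_samples,
                                          uint64_t actual_file_len) {
        std::vector<uint64_t> offsets;
        uint64_t stride = max_file_len / num_samples;
        for (uint64_t i = 0; i < num_samples; ++i) {
          uint64_t off = i * stride;
          if (off + sample_len > actual_file_len) continue;  // beyond EOF: skip
          offsets.push_back(off);
        }
        return offsets;
      }

      int main() {
        // Planned for a 1000-byte file, but the file ended up only 450 bytes:
        std::vector<uint64_t> offs = SampleOffsets(1000, 50, 5, 450);
        // Stride of 200 -> planned offsets 0,200,400,600,800; last two skipped.
        assert(offs == std::vector<uint64_t>({0, 200, 400}));
        return 0;
      }
      ```

      Pre-computing against the maximum length avoids a second pass over the file just to place the samples.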
  28. 20 Apr, 2016 · 1 commit
  29. 09 Apr, 2016 · 1 commit
    • Improve sst_dump help message · 05229903
      Committed by Islam AbdelRahman
      Summary:
      Current Message
      
      ```
      sst_dump [--command=check|scan|none|raw] [--verify_checksum] --file=data_dir_OR_sst_file [--output_hex] [--input_key_hex] [--from=<user_key>] [--to=<user_key>] [--read_num=NUM] [--show_properties] [--show_compression_sizes] [--show_compression_sizes [--set_block_size=<block_size>]]
      ```
      New message
      
      ```
      sst_dump --file=<data_dir_OR_sst_file> [--command=check|scan|raw]
          --file=<data_dir_OR_sst_file>
            Path to SST file or directory containing SST files
      
          --command=check|scan|raw
              check: Iterate over entries in files but don't print anything except if an error is encountered (default command)
              scan: Iterate over entries in files and print them to screen
              raw: Dump all the table contents to <file_name>_dump.txt
      
          --output_hex
            Can be combined with scan command to print the keys and values in Hex
      
          --from=<user_key>
            Key to start reading from when executing check|scan
      
          --to=<user_key>
            Key to stop reading at when executing check|scan
      
          --read_num=<num>
            Maximum number of entries to read when executing check|scan
      
          --verify_checksum
            Verify file checksum when executing check|scan
      
          --input_key_hex
            Can be combined with --from and --to to indicate that these values are encoded in Hex
      
          --show_properties
            Print table properties after iterating over the file
      
          --show_compression_sizes
            Independent command that will recreate the SST file using 16K block size with different
            compressions and report the size of the file using such compression
      
          --set_block_size=<block_size>
            Can be combined with --show_compression_sizes to set the block size that will be used
            when trying different compression algorithms
      ```
      
      Test Plan: none
      
      Reviewers: yhchiang, andrewkr, kradhakrishnan, yiwu, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D56325
  30. 07 Apr, 2016 · 1 commit
    • Embed column family name in SST file · 2391ef72
      Committed by Andrew Kryczka
      Summary:
      Added the column family name to the properties block. This property
      is omitted only if the property is unavailable, such as when RepairDB()
      writes SST files.
      
      In a next diff, I will change RepairDB to use this new property for
      deciding to which column family an existing SST file belongs. If this
      property is missing, it will add it to the "unknown" column family (same
      as its existing behavior).
      
      Test Plan:
      New unit test:
      
        $ ./db_table_properties_test --gtest_filter=DBTablePropertiesTest.GetColumnFamilyNameProperty
      
      Reviewers: IslamAbdelRahman, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D55605
  31. 31 Mar, 2016 · 1 commit
  32. 10 Feb, 2016 · 1 commit
  33. 14 Jan, 2016 · 1 commit
    • tools/sst_dump_tool_imp.h not to depend on "util/testutil.h" · b54d4dd4
      Committed by sdong
      Summary:
      util/testutil.h doesn't seem to be used in tools/sst_dump_tool_imp.h. Remove it.
      Also move some other includes to tools/sst_dump_tool.cc instead.
      
      Test Plan: Build with GCC, CLANG and with GCC 4.81 and 4.9.
      
      Reviewers: yuslepukhin, yhchiang, rven, anthony, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D52791
  34. 24 Dec, 2015 · 1 commit
    • Skip bottom-level filter block caching when hit-optimized · e089db40
      Committed by Andrew Kryczka
      Summary:
      When Get() or NewIterator() trigger file loads, skip caching the filter block if
      (1) optimize_filters_for_hits is set and (2) the file is on the bottommost
      level. Also skip checking filters under the same conditions, which means that
      for a preloaded file or a file that was trivially-moved to the bottom level, its
      filter block will eventually expire from the cache.
      
      - added parameters/instance variables in various places in order to propagate the config ("skip_filters") from version_set to block_based_table_reader
      - in BlockBasedTable::Rep, this optimization prevents filter from being loaded when the file is opened simply by setting filter_policy = nullptr
      - in BlockBasedTable::Get/BlockBasedTable::NewIterator, this optimization prevents filter from being used (even if it was loaded already) by setting filter = nullptr
      
      Test Plan:
      updated unit test:
      
        $ ./db_test --gtest_filter=DBTest.OptimizeFiltersForHits
      
      will also run 'make check'
      
      Reviewers: sdong, igor, paultuckfield, anthony, rven, kradhakrishnan, IslamAbdelRahman, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D51633
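      The gating condition itself is small; a sketch of the decision this change threads from version_set down to the table reader (hypothetical function name, simplified to a plain bottommost-level check):

      ```cpp
      #include <cassert>

      // Sketch: filters are skipped only when both conditions from the
      // summary hold: the hit-optimized option is set AND the file sits on
      // the bottommost level, where a point lookup will find the key if it
      // exists anywhere, so a bloom filter's negative answer buys little.
      bool SkipFilters(bool optimize_filters_for_hits, int level,
                       int num_levels) {
        bool is_bottommost = (level == num_levels - 1);
        return optimize_filters_for_hits && is_bottommost;
      }

      int main() {
        assert(SkipFilters(true, 6, 7));    // hit-optimized, bottom level: skip
        assert(!SkipFilters(true, 0, 7));   // upper level: keep the filter
        assert(!SkipFilters(false, 6, 7));  // option off: keep the filter
        return 0;
      }
      ```

      Threading a single bool like this through the call chain is what lets both the load path (filter_policy = nullptr) and the lookup path (filter = nullptr) honor the same decision.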
  35. 31 Oct, 2015 · 1 commit
  36. 30 Oct, 2015 · 1 commit
  37. 15 Oct, 2015 · 1 commit