1. 12 Oct, 2021 (4 commits)
    • Some code cleanup (#9003) · 1a79839c
      Committed by Yanqin Jin
      Summary:
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9003
      
      Clean up some code before the real work.
      
      Reviewed By: ltamasi
      
      Differential Revision: D31525563
      
      fbshipit-source-id: 44558b3594f2200adc7d8621b08b06c77e358a27
    • Make it possible to force the garbage collection of the oldest blob files (#8994) · 3e1bf771
      Committed by Levi Tamasi
      Summary:
      The current BlobDB garbage collection logic works by relocating the valid
      blobs from the oldest blob files as they are encountered during compaction,
      and cleaning up blob files once they contain nothing but garbage. However,
      with sufficiently skewed workloads, it is theoretically possible to end up in a
      situation when few or no compactions get scheduled for the SST files that contain
      references to the oldest blob files, which can lead to increased space amp due
      to the lack of GC.
      
      In order to efficiently handle such workloads, the patch adds a new BlobDB
      configuration option called `blob_garbage_collection_force_threshold`,
      which signals to BlobDB to schedule targeted compactions for the SST files
      that keep alive the oldest batch of blob files if the overall ratio of garbage in
      the given blob files meets the threshold *and* all the given blob files are
      eligible for GC based on `blob_garbage_collection_age_cutoff`. (For example,
      if the new option is set to 0.9, targeted compactions will get scheduled if the
      sum of garbage bytes meets or exceeds 90% of the sum of total bytes in the
      oldest blob files, assuming all affected blob files are below the age-based cutoff.)
      The net result of these targeted compactions is that the valid blobs in the oldest
      blob files are relocated and the oldest blob files themselves cleaned up (since
      *all* SST files that rely on them get compacted away).
      
      These targeted compactions are similar to periodic compactions in the sense
      that they force certain SST files that otherwise would not get picked up to undergo
      compaction and also in the sense that instead of merging files from multiple levels,
      they target a single file. (Note: such compactions might still include neighboring files
      from the same level due to the need of having a "clean cut" boundary but they never
      include any files from any other level.)
      
      This functionality is currently only supported with the leveled compaction style
      and is inactive by default (since the default value is set to 1.0, i.e. 100%).
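
      The trigger condition described above boils down to a simple predicate. A minimal sketch, assuming a hypothetical helper (the actual decision is made inside RocksDB's compaction-picking logic, not by a free function like this):

      ```cpp
      #include <cassert>
      #include <cstdint>

      // Hypothetical sketch of the forced blob GC trigger: schedule targeted
      // compactions once the oldest blob files are all age-eligible and their
      // overall garbage ratio meets blob_garbage_collection_force_threshold.
      bool ShouldForceBlobGC(uint64_t garbage_bytes, uint64_t total_bytes,
                             double force_threshold, bool all_below_age_cutoff) {
        if (!all_below_age_cutoff || total_bytes == 0) {
          return false;  // files must first pass the age-based cutoff
        }
        return static_cast<double>(garbage_bytes) >=
               force_threshold * static_cast<double>(total_bytes);
      }
      ```

      With the default threshold of 1.0, the predicate only fires when the oldest files are pure garbage, which is why the feature is effectively inactive by default.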
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8994
      
      Test Plan: Ran `make check` and tested using `db_bench` and the stress/crash tests.
      
      Reviewed By: riversand963
      
      Differential Revision: D31489850
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 44057d511726a0e2a03c5d9313d7511b3f0c4eab
    • Protect existing files in `FaultInjectionTest{Env,FS}::ReopenWritableFile()` (#8995) · a282eff3
      Committed by Andrew Kryczka
      Summary:
      `FaultInjectionTest{Env,FS}::ReopenWritableFile()` functions were accidentally deleting WALs from previous `db_stress` runs causing verification to fail. They were operating under the assumption that `ReopenWritableFile()` would delete any existing file. It was a reasonable assumption considering the `{Env,FileSystem}::ReopenWritableFile()` documentation stated that would happen. The only problem was neither the implementations we offer nor the "real" clients in RocksDB code followed that contract. So, this PR updates the contract as well as fixing the fault injection client usage.
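
      The corrected contract (reopen keeps existing contents and appends, rather than truncating) is essentially append-mode semantics. A minimal standard-library illustration, independent of RocksDB's `Env`/`FileSystem` classes (the path below is arbitrary):

      ```cpp
      #include <cassert>
      #include <fstream>
      #include <sstream>
      #include <string>

      // "Reopening" a writable file must not delete what is already there:
      // the second open uses append mode, so the first write survives.
      std::string WriteThenReopen(const std::string& path) {
        { std::ofstream f(path); f << "wal-1;"; }                 // initial open (truncates)
        { std::ofstream f(path, std::ios::app); f << "wal-2;"; }  // reopen: append, don't truncate
        std::ifstream in(path);
        std::ostringstream contents;
        contents << in.rdbuf();
        return contents.str();
      }
      ```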
      
      The fault injection change exposed that `ExternalSSTFileBasicTest.SyncFailure` was relying on a fault injection `Env` dropping unsynced data written by a regular `Env`. I changed that test to make its `SstFileWriter` use fault injection `Env`, and also implemented `LinkFile()` in fault injection so the unsynced data is tracked under the new name.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8995
      
      Test Plan:
      - Verified it fixes the following failure:
      
      ```
      $ ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/rocksdb_crashtest_whitebox --delpercent=5 --expected_values_dir=/dev/shm/rocksdb_crashtest_expected --iterpercent=0 --key_len_percent_dist=1,30,69 --max_key=100000 --max_key_len=3 --nooverwritepercent=1 --ops_per_thread=1000 --prefixpercent=0 --readpercent=60 --reopen=0 --target_file_size_base=1048576 --test_batches_snapshots=0 --write_buffer_size=1048576 --writepercent=35 --value_size_mult=33 -threads=1
      ...
      $ ./db_stress --avoid_flush_during_recovery=1 --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/rocksdb_crashtest_whitebox --delpercent=5 --destroy_db_initially=0 --expected_values_dir=/dev/shm/rocksdb_crashtest_expected --iterpercent=10 --key_len_percent_dist=1,30,69 --max_bytes_for_level_base=4194304 --max_key=100000 --max_key_len=3 --nooverwritepercent=1 --open_files=-1 --open_metadata_write_fault_one_in=8 --open_write_fault_one_in=16 --ops_per_thread=1000 --prefix_size=-1 --prefixpercent=0 --readpercent=50 --sync=1 --target_file_size_base=1048576 --test_batches_snapshots=0 --write_buffer_size=1048576 --writepercent=35 --value_size_mult=33 -threads=1
      ...
      Verification failed for column family 0 key 000000000000001300000000000000857878787878 (1143): Value not found: NotFound:
      Crash-recovery verification failed :(
      ...
      ```
      
      - `make check -j48`
      
      Reviewed By: ltamasi
      
      Differential Revision: D31495388
      
      Pulled By: ajkr
      
      fbshipit-source-id: 7886ccb6a07cb8b78ad7b6c1c341ccf40bb68385
    • Initialize cache dumper `DumpUnit` in constructor (#9014) · ee239df3
      Committed by Andrew Kryczka
      Summary:
      Should fix clang-analyze:
      
      ```
      utilities/cache_dump_load_impl.cc:296:38: warning: The left operand of '!=' is a garbage value
        while (io_s.ok() && dump_unit.type != CacheDumpUnitType::kFooter) {
                            ~~~~~~~~~~~~~~ ^
      ```
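
      The warning comes from comparing a member that was never assigned. A generic sketch of the fix pattern, with illustrative names rather than RocksDB's actual `DumpUnit`:

      ```cpp
      #include <cassert>

      enum class UnitType { kData, kFooter };

      // Before the fix, a default-constructed unit left `type` uninitialized,
      // so `unit.type != UnitType::kFooter` read a garbage value. Initializing
      // every member in the constructor makes the first read well-defined.
      // (The particular default chosen here is illustrative.)
      struct DumpUnitSketch {
        UnitType type;
        DumpUnitSketch() : type(UnitType::kData) {}
      };
      ```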
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/9014
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D31546912
      
      Pulled By: ajkr
      
      fbshipit-source-id: a2e0dc7874e8c1c6abf190862b5d49e6a6ad6d01
  2. 09 Oct, 2021 (4 commits)
  3. 08 Oct, 2021 (6 commits)
    • stop populating unused/invalid MergingIterator heaps (#8975) · c0ec58ec
      Committed by Andrew Kryczka
      Summary:
      I was looking at https://github.com/facebook/rocksdb/issues/2636 and got very confused that `MergingIterator::AddIterator()` is populating `min_heap_` with dangling pointers. There is justification in the comments that `min_heap_` will be cleared before it's used, but it'd be cleaner to not populate it with dangling pointers in the first place. Also made similar change in the constructor for consistency, although the pointers there would not be dangling, just unused.
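
      The cleaned-up shape can be sketched as follows: `AddIterator()` only appends to the child list, and the min-heap is built from live children when actually seeking. This is a toy model with `int` keys standing in for child iterators:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <functional>
      #include <vector>

      struct MiniMergingIterator {
        std::vector<int> children;  // stand-ins for the child iterators
        std::vector<int> min_heap;  // built on demand, never pre-populated

        void AddIterator(int key) { children.push_back(key); }

        int SeekToFirst() {
          // Rebuild the heap from the current children; nothing stale or
          // unused ever sits in min_heap between seeks.
          min_heap = children;
          std::make_heap(min_heap.begin(), min_heap.end(), std::greater<int>());
          return min_heap.front();  // smallest key across all children
        }
      };
      ```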
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8975
      
      Test Plan: rely on existing tests
      
      Reviewed By: pdillinger, hx235
      
      Differential Revision: D31273767
      
      Pulled By: ajkr
      
      fbshipit-source-id: 127ca9dd1f82f77f55dd0c3f19511de3282fc229
    • Cancel manual compactions waiting on automatic compactions to drain (#8991) · fcaa7ff6
      Committed by Andrew Kryczka
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8991
      
      Test Plan: the new test hangs forever without this fix and passes with this fix.
      
      Reviewed By: hx235
      
      Differential Revision: D31456419
      
      Pulled By: ajkr
      
      fbshipit-source-id: a82c0e5560b6e6153089dccd8e46163c61b07bff
    • Warning about incompatible options with level_compaction_dynamic_level_bytes (#8329) · 8717c268
      Committed by Kajetan Janiak
      Summary:
      This change introduces warnings instead of a silent override when trying to use level_compaction_dynamic_level_bytes with multiple cf_paths/db_paths.
      I have completed the CLA.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8329
      
      Reviewed By: hx235
      
      Differential Revision: D31399713
      
      Pulled By: ajkr
      
      fbshipit-source-id: 29c6fe5258d1f739b4590ecd44aee44f55415595
    • Add file temperature related counter and bytes stats to io_stats (#8710) · b632ed0c
      Committed by Zhichao Cao
      Summary:
      For tiered storage project, we need to know the block read count and read bytes of files with different temperature. Add FileIOByTemperature to IOStatsContext and collect the bytes read and read count from different temperature files through the RandomAccessFileReader.
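
      The accounting described above can be sketched as a per-temperature map of read counts and bytes. The names below are illustrative, not the actual `IOStatsContext` fields:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <map>
      #include <utility>

      enum class Temperature { kHot, kWarm, kCold };

      // Hypothetical sketch: every block read through the file reader bumps a
      // (read count, read bytes) pair keyed by the file's temperature.
      struct FileIOByTemperatureSketch {
        std::map<Temperature, std::pair<uint64_t, uint64_t>> stats;
        void RecordRead(Temperature t, uint64_t bytes) {
          auto& entry = stats[t];
          entry.first += 1;       // read count
          entry.second += bytes;  // bytes read
        }
      };
      ```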
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8710
      
      Test Plan: make check, add the testing cases
      
      Reviewed By: siying
      
      Differential Revision: D30582400
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: d83173de594374fc8404af5ce93a6a9be72c7141
    • Introduce a mechanism to dump out blocks from block cache and re-insert to secondary cache (#8912) · 699f4504
      Committed by Zhichao Cao
      Summary:
      Background: Cache warm-up causes potential read performance degradation due to reading blocks from storage into the block cache. Since, in production, the workload and access pattern for a given DB are stable, a potential solution is to dump out the blocks belonging to that DB to persistent storage (e.g., to a file) and bulk-load them into the secondary cache before the DB is relaunched. For example, migrating a DB from host A to host B takes only a short period of time, during which the access pattern to blocks in the block cache will not change much. It is efficient to dump out the blocks of the DB, migrate them to the destination host, and insert them into the secondary cache before relaunching the DB.
      
      Design: we introduce the CacheDumpWriter and CacheDumpReader interfaces for users to store the blocks dumped out from the block cache. RocksDB encodes all the information and sends the resulting string to the writer. Users can implement their own writer if they want. CacheDumper and CacheLoad are introduced to save and load the blocks, respectively.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8912
      
      Test Plan: add new tests to lru_cache_test and pass make check.
      
      Reviewed By: pdillinger
      
      Differential Revision: D31452871
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 11ab4f5d03e383f476947116361d54188d36ec48
    • Misc doc fixes (#8983) · fe994bbd
      Committed by Ramkumar Vadivelu
      Summary:
      - Update few stale GitHub wiki link references from rocksdb.org
      - Update the API comments for ignore_range_deletions
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8983
      
      Reviewed By: ajkr
      
      Differential Revision: D31355965
      
      Pulled By: ramvadiv
      
      fbshipit-source-id: 245ac4a6913976dd82afa308bc4aae6bff3d788c
  4. 06 Oct, 2021 (3 commits)
  5. 04 Oct, 2021 (1 commit)
    • Fix LITE mode builds on MacOs (#8981) · 78722983
      Committed by mrambacher
      Summary:
      On MacOS, there were errors building in LITE mode related to unused private member variables:
      
      ```
      In file included from ./db/compaction/compaction_job.h:20:
      ./db/blob/blob_file_completion_callback.h:87:19: error: private field ‘sst_file_manager_’ is not used [-Werror,-Wunused-private-field]
        SstFileManager* sst_file_manager_;
                        ^
      ./db/blob/blob_file_completion_callback.h:88:22: error: private field ‘mutex_’ is not used [-Werror,-Wunused-private-field]
        InstrumentedMutex* mutex_;
                           ^
      ./db/blob/blob_file_completion_callback.h:89:17: error: private field ‘error_handler_’ is not used [-Werror,-Wunused-private-field]
        ErrorHandler* error_handler_;
      ```
      
      This PR resolves those build issues by removing the values as members in LITE mode and fixing the constructor to ignore the input values in LITE mode (otherwise we get unused parameter warnings).
      
      Tested by validating compiles without warnings.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8981
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D31320141
      
      Pulled By: mrambacher
      
      fbshipit-source-id: d67875ebbd39a9555e4f09b2d37159566dd8a085
  6. 02 Oct, 2021 (4 commits)
    • Add additional checks for three existing unit tests (#8973) · 2cdaf5ca
      Committed by Yanqin Jin
      Summary:
      With test sync points, we can assert on the equality of iterator value in three existing
      unit tests.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8973
      
      Test Plan:
      ```
      gtest-parallel -r 1000 ./db_test2 --gtest_filter=DBTest2.IterRaceFlush2:DBTest2.IterRaceFlush1:DBTest2.IterRefreshRaceFlush
      ```
      
      make check
      
      Reviewed By: akankshamahajan15
      
      Differential Revision: D31256340
      
      Pulled By: riversand963
      
      fbshipit-source-id: a9440767ab383e0ec61bd43ffa8fbec4ba562ea2
    • Enable SingleDelete with user defined ts in db_bench and crash tests (#8971) · 84d71f30
      Committed by Akanksha Mahajan
      Summary:
      Enable SingleDelete with user defined timestamp in db_bench,
      db_stress and crash test
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8971
      
      Test Plan:
      1. For db_stress, ran the command for full duration: i) python3 -u tools/db_crashtest.py
      --enable_ts whitebox --nooverwritepercent=100
      ii) make crash_test_with_ts
      
      2. For db_bench, ran:  ./db_bench -benchmarks=randomreplacekeys
      -user_timestamp_size=8 -use_single_deletes=true
      
      Reviewed By: riversand963
      
      Differential Revision: D31246558
      
      Pulled By: akankshamahajan15
      
      fbshipit-source-id: 29cd8740c9921341e52f09242fca3c44d75a12b7
    • Update USERS.md (#8923) · e36b9da5
      Committed by byronhe
      Summary:
      fix typo
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8923
      
      Reviewed By: mrambacher
      
      Differential Revision: D31003331
      
      Pulled By: zhichao-cao
      
      fbshipit-source-id: 00cfcac247621b8bc6d43a3d45c6a11c9dece5b0
    • Remove IOSTATS_ADD_IF_POSITIVE() (#8984) · 7f08a850
      Committed by sdong
      Summary:
      IOSTATS_ADD_IF_POSITIVE() doesn't seem to be a macro that improves performance; it does the opposite. The counter to add is almost always positive, so the if is just a waste. Furthermore, adding to a thread-local variable seems to be much cheaper than an if condition when branch prediction can be wrong. Remove the macro.
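
      The two patterns can be compared side by side with illustrative macros (not the RocksDB originals). For non-negative values they are observably equivalent, and the unconditional form avoids a data-dependent branch:

      ```cpp
      #include <cassert>
      #include <cstdint>

      // The removed pattern: guard the add behind a branch.
      #define ADD_IF_POSITIVE(counter, v) \
        do {                              \
          if ((v) > 0) (counter) += (v);  \
        } while (0)

      // The retained pattern: add unconditionally. Adding 0 is a no-op, and a
      // plain add to a thread-local counter is cheaper than a mispredictable if.
      #define ADD_ALWAYS(counter, v) \
        do {                         \
          (counter) += (v);          \
        } while (0)
      ```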
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8984
      
      Test Plan: See CI completes.
      
      Reviewed By: anand1976
      
      Differential Revision: D31348163
      
      fbshipit-source-id: 30af6d45e1aa8bbc09b2c046206cce6f67f4777a
  7. 01 Oct, 2021 (3 commits)
    • List blob files when using command - list_live_files_metadata (#8976) · e5bfb91d
      Committed by Pradeep Ambati
      Summary:
      The ldb list_live_files_metadata command does not print any information about blob files currently. We would like to add this functionality. Note that list_live_files_metadata has two different modes of operation: the default one, which shows the LSM tree structure, and another one, which can be enabled using the flag --sort_by_filename and simply lists the files in numerical order regardless of level. We would like to show blob files in both modes.
      
      Changes:
      1. Using GetAllColumnFamilyMetaData API instead of GetLiveFilesMetaData API for fetching live files data.
      
      Testing:
      1. Created a sample rocksdb instance using the db_bench command (this creates both SST and blob files)
      2. Checked if the blob files are listed or not by using ldb commands.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8976
      
      Reviewed By: ltamasi
      
      Differential Revision: D31316061
      
      Pulled By: pradeepambati
      
      fbshipit-source-id: d15cdea192febf7a45f28deee2ba40615d3d84ab
    • ErrorExit if num<1000 for fillsync and fill100K (#8391) · 1953b63c
      Committed by Peter (Stig) Edwards
      Summary:
      This is to avoid an exception and core dump when running
        db_bench -benchmarks fillsync -num 999
      https://github.com/facebook/rocksdb/issues/8390
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8391
      
      Reviewed By: pdillinger
      
      Differential Revision: D29139688
      
      Pulled By: mrambacher
      
      fbshipit-source-id: b9e306728ad25a7aac75f6154699aa852bc07bd1
    • Don't ignore deletion rate limit if WAL dir is different (#8967) · 532ff334
      Committed by anand76
      Summary:
      If WAL dir is different from the DB dir, we should still honor the SstFileManager deletion rate limit for SST files.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8967
      
      Test Plan: Add a new unit test in db_sst_test
      
      Reviewed By: pdillinger
      
      Differential Revision: D31220116
      
      Pulled By: anand1976
      
      fbshipit-source-id: bcde8a53a7d728e15e597fb5d07ee86c1b38bd28
  8. 30 Sep, 2021 (2 commits)
  9. 29 Sep, 2021 (4 commits)
    • Cleanup includes in dbformat.h (#8930) · 13ae16c3
      Committed by mrambacher
      Summary:
      This header file was including everything and the kitchen sink when it did not need to.  This resulted in many places including this header when they needed other pieces instead.
      
      Cleaned up this header to only include what was needed and fixed up the remaining code to include what was now missing.
      
      Hopefully, this sort of code hygiene cleanup will speed up the builds...
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8930
      
      Reviewed By: pdillinger
      
      Differential Revision: D31142788
      
      Pulled By: mrambacher
      
      fbshipit-source-id: 6b45de3f300750c79f751f6227dece9cfd44085d
    • Refactor expected state in stress/crash test (#8913) · 559943cd
      Committed by Andrew Kryczka
      Summary:
      This is a precursor refactoring to enable an upcoming feature: persistence failure correctness testing.
      
      - Changed `--expected_values_path` to `--expected_values_dir` and migrated "db_crashtest.py" to use the new flag. For persistence failure correctness testing there are multiple possible correct states since unsynced data is allowed to be dropped. Making it possible to restore all these possible correct states will eventually involve files containing snapshots of expected values and DB trace files.
      - The expected values directory is managed by an `ExpectedStateManager` instance. Managing expected state files is separated out of `SharedState` to prevent `SharedState` from becoming too complex when the new files and features (snapshotting, tracing, and restoring) are introduced.
      - Migrated expected values file access/management out of `SharedState` into a separate class called `ExpectedState`. This is not exposed directly to the test but rather the `ExpectedState` for the latest values file is accessed via a pass-through API on `ExpectedStateManager`. This forces the test to always access the single latest `ExpectedState`.
      - Changed the initialization of the latest expected values file to use a tempfile followed by rename, and also add cleanup logic for possible stranded tempfiles.
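
      The tempfile-followed-by-rename step in the last bullet is the standard recipe for atomically replacing a state file. A minimal sketch (path handling simplified):

      ```cpp
      #include <cassert>
      #include <cstdio>
      #include <fstream>
      #include <sstream>
      #include <string>

      // Write the complete contents to a sibling tempfile first, then rename it
      // into place. On POSIX, rename() replaces the target atomically, so readers
      // never observe a half-written state file; a crash leaves at most a
      // stranded tempfile, which is why cleanup logic for those is also added.
      bool AtomicWriteFile(const std::string& final_path, const std::string& data) {
        const std::string tmp = final_path + ".tmp";
        {
          std::ofstream f(tmp);
          if (!f) return false;
          f << data;
        }
        return std::rename(tmp.c_str(), final_path.c_str()) == 0;
      }
      ```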
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8913
      
      Test Plan:
      run in several ways; try to make sure it's not obviously broken.
      
      - crashtest blackbox without TEST_TMPDIR
      ```
      $ python3 tools/db_crashtest.py blackbox --simple --write_buffer_size=1048576 --target_file_size_base=1048576 --max_bytes_for_level_base=4194304 --max_key=100000 --value_size_mult=33 --compression_type=none --duration=120 --interval=10 --compression_type=none --blob_compression_type=none
      ```
      - crashtest blackbox with TEST_TMPDIR
      ```
      $ TEST_TMPDIR=/dev/shm python3 tools/db_crashtest.py blackbox --simple --write_buffer_size=1048576 --target_file_size_base=1048576 --max_bytes_for_level_base=4194304 --max_key=100000 --value_size_mult=33 --compression_type=none --duration=120 --interval=10 --compression_type=none --blob_compression_type=none
      ```
      - crashtest whitebox with TEST_TMPDIR
      ```
      $ TEST_TMPDIR=/dev/shm python3 tools/db_crashtest.py whitebox --simple --write_buffer_size=1048576 --target_file_size_base=1048576 --max_bytes_for_level_base=4194304 --max_key=100000 --value_size_mult=33 --compression_type=none --duration=120 --interval=10 --compression_type=none --blob_compression_type=none --random_kill_odd=88887
      ```
      - db_stress without expected_values_dir
      ```
      $ ./db_stress --write_buffer_size=1048576 --target_file_size_base=1048576 --max_bytes_for_level_base=4194304 --max_key=100000 --value_size_mult=33 --compression_type=none --ops_per_thread=10000 --clear_column_family_one_in=0 --destroy_db_initially=true
      ```
      - db_stress with expected_values_dir and manual corruption
      ```
      $ ./db_stress --write_buffer_size=1048576 --target_file_size_base=1048576 --max_bytes_for_level_base=4194304 --max_key=100000 --value_size_mult=33 --compression_type=none --ops_per_thread=10000 --clear_column_family_one_in=0 --destroy_db_initially=true --expected_values_dir=./
      // modify one byte in "./LATEST.state"
      $ ./db_stress --write_buffer_size=1048576 --target_file_size_base=1048576 --max_bytes_for_level_base=4194304 --max_key=100000 --value_size_mult=33 --compression_type=none --ops_per_thread=10000 --clear_column_family_one_in=0 --destroy_db_initially=false --expected_values_dir=./
      ...
      Verification failed for column family 0 key 0000000000000000 (0): Value not found: NotFound:
      ...
      ```
      
      Reviewed By: riversand963
      
      Differential Revision: D30921951
      
      Pulled By: ajkr
      
      fbshipit-source-id: babfe218062e55d018c9b046536c0289fb78f41c
    • Add remote compaction read/write bytes statistics (#8939) · 6b34eb0e
      Committed by Jay Zhuang
      Summary:
      Add basic read/write bytes statistics on the primary side:
      `REMOTE_COMPACT_READ_BYTES`
      `REMOTE_COMPACT_WRITE_BYTES`
      
      Fixed existing statistics missing some IO for remote compaction.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8939
      
      Test Plan: CI
      
      Reviewed By: ajkr
      
      Differential Revision: D31074672
      
      Pulled By: jay-zhuang
      
      fbshipit-source-id: c57afdba369990185008ffaec7e3fe7c62e8902f
    • Support "level_at_creation" in TablePropertiesCollectorFactory::Context (#8919) · d6bd1a02
      Committed by Hui Xiao
      Summary:
      Context:
      Exposing the level of the sst file (i.e, table) where it is created in `TablePropertiesCollectorFactory::Context` allows users of `TablePropertiesCollectorFactory` to customize some implementation details of `TablePropertiesCollectorFactory` and `TablePropertiesCollector` based on the level of creation. For example, `TablePropertiesCollector::NeedCompact()` can return different values based on level of creation.
      - Declared an extra field `level_at_creation` in `TablePropertiesCollectorFactory::Context`
      - Allowed `level_at_creation` to be passed in as an argument in `IntTblPropCollectorFactory::CreateIntTblPropCollector()` and `UserKeyTablePropertiesCollectorFactory::CreateIntTblPropCollector()`, the latter of which is an internal wrapper of user's passed-in `TablePropertiesCollectorFactory::CreateTablePropertiesCollector()` used in table-building process
      - Called `IntTblPropCollectorFactory::CreateIntTblPropCollector()` with `level_at_creation` passed into both `BlockBasedTableBuilder` and `PlainTableBuilder`
        -  `PlainTableBuilder` previously did not capture `level_at_creation` from `TableBuilderOptions` in `PlainTableFactory`. In order for it to call the method with this parameter, this PR also made `PlainTableBuilder` capture `level_at_creation` as a required parameter
      - Called `IntTblPropCollectorFactory::CreateIntTblPropCollector()` with `level_at_creation` its overridden functions in its derived classes, including `RegularKeysStartWithAFactory::CreateIntTblPropCollector()` in `table_properties_collector_test.cc`, `SstFileWriterPropertiesCollectorFactory::CreateIntTblPropCollector()` in `sst_file_writer_collectors.h`
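
      The shape of the change can be sketched with simplified stand-ins for `TablePropertiesCollectorFactory::Context` and a level-aware collector (hypothetical types, for illustration only):

      ```cpp
      #include <cassert>

      // Simplified stand-in for TablePropertiesCollectorFactory::Context with
      // the newly added field.
      struct CollectorContext {
        int column_family_id = 0;
        int level_at_creation = -1;
      };

      // A collector whose NeedCompact()-style decision depends on the level
      // the table is being created at (example policy, not RocksDB's).
      struct LevelAwareCollector {
        int level;
        explicit LevelAwareCollector(int l) : level(l) {}
        bool NeedCompact(int bottommost_level) const {
          return level < bottommost_level;  // e.g. only flag non-bottommost files
        }
      };

      LevelAwareCollector CreateCollector(const CollectorContext& ctx) {
        return LevelAwareCollector(ctx.level_at_creation);  // level now flows through
      }
      ```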
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8919
      
      Test Plan:
      - Passed the added assertion for `context.level_at_creation`
      - Passed existing tests
      - Run `Make` to make sure adding a required parameter to `PlainTableBuilder`'s constructor does not break anything
      
      Reviewed By: anand1976
      
      Differential Revision: D30951729
      
      Pulled By: hx235
      
      fbshipit-source-id: c4a0173b0d9344a4cf47e1b987d759c1c73cb474
  10. 28 Sep, 2021 (7 commits)
  11. 27 Sep, 2021 (1 commit)
    • Make SliceTransform into a Customizable class (#8641) · e0f697d2
      Committed by mrambacher
      Summary:
      Made SliceTransform into a Customizable class.
      
      Would be nice to write a test that stored and used a custom transform in an SST table.
      
      There are a set of tests (DBBlockFliterTest.PrefixExtractor*, SamePrefixTest.InDomainTest, PrefixTest.PrefixAndWholeKeyTest) that run the same with or without a SliceTransform/PrefixFilter. Is this expected?
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/8641
      
      Reviewed By: zhichao-cao
      
      Differential Revision: D31142793
      
      Pulled By: mrambacher
      
      fbshipit-source-id: bb08672fccbfdc263dcae21f25a62307e1facda1
  12. 25 Sep, 2021 (1 commit)