1. 11 Jan 2019: 1 commit
  2. 10 Jan 2019: 2 commits
    • Remove duplicates from SnapshotList::GetAll (#4860) · d56ac22b
      Committed by Maysam Yabandeh
      Summary:
The vector returned by SnapshotList::GetAll could contain duplicate entries if two separate snapshots have the same sequence number. When this vector is used in compaction, the duplicates serve no purpose and can be safely dropped. Removing them also simplifies the reasoning in the compaction_iterator.cc code: for example, when searching for previous_snap we currently take the snapshot before the current one, but the code that uses it also expects it to be strictly less than the current snapshot, which is easier to follow when the snapshot list has no duplicate entries. (A minimal dedup sketch follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4860
      
      Differential Revision: D13615502
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: d45bf01213ead5f39db811f951802da6fcc3332b
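      As a rough illustration of the deduplication described above, and not the actual SnapshotList code (the function name and the use of a plain vector of sequence numbers are assumptions), removing duplicates from an ordered list of snapshot sequence numbers is a one-pass std::unique:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the sequence numbers returned by
// SnapshotList::GetAll(); the real code walks a list of snapshot objects.
std::vector<uint64_t> GetAllDeduped(std::vector<uint64_t> seqs) {
  // Sequence numbers are returned in ascending order, so equal values are
  // adjacent and std::unique can drop the duplicates in a single pass.
  std::sort(seqs.begin(), seqs.end());  // defensive; already sorted in practice
  seqs.erase(std::unique(seqs.begin(), seqs.end()), seqs.end());
  return seqs;
}
```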
    • Initialize two members in PerfContext (#4859) · 75714b4c
      Committed by Yanqin Jin
      Summary:
As titled. Since PerfContext is part of the public API, it is possible to create a local object of that type, so it is safer to explicitly initialize the two members to 0.
When PerfContext is created as a thread-local object, all members are already zero-initialized according to the C++ standard. (A small sketch of the distinction follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4859
      
      Differential Revision: D13614504
      
      Pulled By: riversand963
      
      fbshipit-source-id: 406ff548e105a074f379ad1054d56fece5f524a0
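      A minimal sketch of the distinction the summary relies on; the struct and member names below are placeholders, not the actual PerfContext fields. A thread_local object has static storage duration and is zero-initialized, while a plain local object is only safe to read if its members carry explicit initializers:

```cpp
#include <cstdint>

struct PerfCountersSketch {
  // Explicit in-class initializers make a locally created object safe to read.
  uint64_t counter_a = 0;  // placeholder names, not the real PerfContext members
  uint64_t counter_b = 0;
};

// Static storage duration: zero-initialized even without the "= 0" above.
thread_local PerfCountersSketch tls_counters;

void Example() {
  // Automatic storage: without the in-class initializers these members
  // would hold indeterminate values.
  PerfCountersSketch local_counters;
  (void)local_counters;
  (void)tls_counters;
}
```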
  3. 09 Jan 2019: 4 commits
    • Free memory after use · ffc9f846
      Committed by Yanqin Jin
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/4857
      
      Differential Revision: D13602688
      
      Pulled By: riversand963
      
      fbshipit-source-id: 993419a6afb982a7a701ff71daebebb4b4a6b265
    • WritePrepared: Report released snapshots in IsInSnapshot (#4856) · f3a99e8a
      Committed by Maysam Yabandeh
      Summary:
Previously IsInSnapshot assumed that the snapshot is valid at the time the function is called. However, there are cases where that does not hold. An example is background compactions, where the compaction algorithm operates on a list of snapshots, some of which might have been released by the time they are passed to IsInSnapshot. The patch makes two changes that let the caller tell the difference: i) any live snapshot below max is added to max_committed_seq_, which allows IsInSnapshot to confidently tell whether a snapshot passed to it is invalid when it is below max; ii) the IsInSnapshot API is extended with a "released" variable that is set to true when IsInSnapshot finds no such snapshot below max and has no other way to give a certain answer. In such cases the return value is true, but the caller should also check the "released" boolean after the call.
In short, the API changes are:
i) If the snapshot is valid, no change is required.
ii) If the snapshot might be invalid, a reference to the "released" boolean must be passed to IsInSnapshot.
ii-a) If the snapshot is above max, IsInSnapshot can determine the return value using the commit cache.
ii-b) Otherwise, if the snapshot is in old_commit_map_, IsInSnapshot can use that to tell whether the value was visible to the snapshot.
ii-c) Otherwise, it sets "released" to true and also returns true.
(A hedged sketch of this decision ladder follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4856
      
      Differential Revision: D13599847
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 1752be28667f886a1efec8cae5714b9b7a8f1e0f
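      The decision ladder in ii-a/ii-b/ii-c could look roughly like the following. This is a hedged sketch only: the data structures, parameter list, and helper are assumptions for illustration, not the actual WritePrepared implementation.

```cpp
#include <cstdint>
#include <map>
#include <set>

// All names below are illustrative stand-ins, not RocksDB internals.
struct SnapshotStateSketch {
  uint64_t max_evicted_seq = 0;
  // Tracked snapshots at or below max; for each one, the set of prepare
  // sequence numbers that were NOT visible to it (loosely playing the role
  // the summary assigns to old_commit_map_).
  std::map<uint64_t, std::set<uint64_t>> old_commit_map;

  bool SeqCommittedPerCommitCache(uint64_t prep_seq, uint64_t snapshot_seq) const {
    // Placeholder for the commit-cache lookup; not the real logic.
    return prep_seq <= snapshot_seq;
  }

  bool IsInSnapshotSketch(uint64_t prep_seq, uint64_t snapshot_seq,
                          bool* released) const {
    if (snapshot_seq > max_evicted_seq) {
      // ii-a) Above max: the commit cache can answer definitively.
      return SeqCommittedPerCommitCache(prep_seq, snapshot_seq);
    }
    auto it = old_commit_map.find(snapshot_seq);
    if (it != old_commit_map.end()) {
      // ii-b) Tracked below max: visible unless recorded as not visible.
      return it->second.count(prep_seq) == 0;
    }
    // ii-c) Not tracked below max: the snapshot must have been released.
    // Report that and return true; the caller must also check *released.
    if (released != nullptr) {
      *released = true;
    }
    return true;
  }
};
```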
    • Non-initial file preloading should always prefetch index and filter (#4852) · 8641e9ad
      Committed by Siying Dong
      Summary:
https://github.com/facebook/rocksdb/pull/3340 introduced preloading when max_open_files != -1, but it does not preload the index and filter in the non-initial file loading case. That behavior is a bit too complicated to reason about, and we observed one MyRocks use case where the filter was expected to be preloaded but was not. To simplify things, we now always prefetch the index and filter; they are expected to be loaded during the file verification phase anyway. (A hedged sketch follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4852
      
      Differential Revision: D13595402
      
      Pulled By: siying
      
      fbshipit-source-id: d4d8624eb3e849e20aeb990df2100502d85aff31
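      A hedged sketch of the simplification described above; the struct and function names are illustrative, not the actual table-opening code. The change is essentially replacing a conditional prefetch decision with an unconditional one:

```cpp
// Illustrative only; not the actual BlockBasedTable/VersionSet code.
struct TableOpenOptionsSketch {
  bool prefetch_index_and_filter = false;
};

TableOpenOptionsSketch MakeOpenOptions(bool is_initial_load) {
  TableOpenOptionsSketch opts;
  // Before: prefetch only on the initial load, which was hard to reason about.
  // opts.prefetch_index_and_filter = is_initial_load;
  (void)is_initial_load;
  // After: always prefetch, since the blocks are read during file
  // verification anyway.
  opts.prefetch_index_and_filter = true;
  return opts;
}
```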
    • WritePrepared: improve IsInSnapshotEmptyMapTest (#4853) · cd227d74
      Committed by Maysam Yabandeh
      Summary:
IsInSnapshotEmptyMapTest verifies that IsInSnapshot returns the correct value for existing data after a recovery, where max is not zero and yet the commit cache is empty. The existing test was preliminary and is improved in this patch. The patch also increases the db sequence after recovery so that a snapshot taken immediately after recovery has a sequence number different from max_evicted_seq. This simplifies the logic in IsInSnapshot, which no longer has to consider the special case of an old snapshot being equal to max_evicted_seq yet not present in old_commit_map.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4853
      
      Differential Revision: D13595223
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 77c12ca8a3f61a47479a93bef2038ff502dc3322
  4. 08 Jan 2019: 4 commits
  5. 05 Jan 2019: 2 commits
    • Minor fix: single delete a blob value is not a mismatch (#4848) · cf852fdf
      Committed by Yi Wu
      Summary:
In the compaction iterator, if the value following a single delete is a blob value, it should not be treated as a mismatch. This is only a minor fix and doesn't affect correctness.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4848
      
      Differential Revision: D13585812
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 0ff6223fa03a644ac9fd8a2d77f9d6711d0a62b0
    • Fix point lookup on range tombstone sentinel endpoint (#4829) · 9e2c804f
      Committed by Andrew Kryczka
      Summary:
      Previously for point lookup we decided which file to look into based on user key overlap only. We also did not truncate range tombstones in the point lookup code path. These two ideas did not interact well in cases like this:
      
      - L1 has range tombstone [a, c)#1 and point key b#2. The data is split between file1 with range [a#1,1, b#72057594037927935,15], and file2 with range [b#2, c#1].
      - L1's file2 gets compacted to L2.
      - User issues `Get()` for b#3.
      - L1's file1 is opened and the range tombstone [a, c)#1 is found for b, while no point-key for b is found in L1.
      - `Get()` assumes that the range tombstone must cover all data in that range in lower levels, so short circuits and returns `NotFound`.
      
The solution is not to look into files that overlap with the point lookup only at a range tombstone sentinel endpoint. In the above example, this means not opening L1's file1 or its tombstones during the `Get()`. (A hedged sketch of the skip check follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4829
      
      Differential Revision: D13561355
      
      Pulled By: ajkr
      
      fbshipit-source-id: a13c21c816870a2f5d32a48af6dbd719a7d9d19f
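      A hedged sketch of the skip condition described above. The types and helper are assumptions for illustration; the property the check relies on is that a range tombstone sentinel endpoint carries the maximum sequence number and the range-deletion type, as in the b#72057594037927935,15 boundary in the example.

```cpp
#include <cstdint>
#include <string>

// Illustrative summary of a file boundary key; not the actual
// FileMetaData/InternalKey types.
struct BoundaryKeySketch {
  std::string user_key;
  uint64_t sequence;
  bool is_range_deletion;  // type portion of the internal key
};

constexpr uint64_t kMaxSequenceNumberSketch = (1ull << 56) - 1;

// Return true if the file may need to be opened for a point lookup of
// `lookup_user_key`, given its [smallest, largest] boundary keys.
bool MayContainPointKey(const std::string& lookup_user_key,
                        const BoundaryKeySketch& smallest,
                        const BoundaryKeySketch& largest) {
  if (lookup_user_key < smallest.user_key ||
      lookup_user_key > largest.user_key) {
    return false;  // no user-key overlap at all
  }
  // If the only overlap is at the largest boundary and that boundary is a
  // range tombstone sentinel, the file holds no point data for this key:
  // skip it so its truncated tombstone cannot shadow data in lower levels.
  if (lookup_user_key == largest.user_key && largest.is_range_deletion &&
      largest.sequence == kMaxSequenceNumberSketch) {
    return false;
  }
  return true;
}
```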
  6. 04 Jan 2019: 6 commits
  7. 03 Jan 2019: 7 commits
  8. 29 Dec 2018: 1 commit
    • Preload some files even if options.max_open_files (#3340) · f0dda35d
      Committed by Siying Dong
      Summary:
Choose to preload some files even if options.max_open_files != -1. This can slightly narrow the performance gap between options.max_open_files == -1 and a large finite value. To avoid a significant regression in DB reopen speed when options.max_open_files != -1, the number of files preloaded at DB open time is limited to 16. (A hedged sketch of the cap follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/3340
      
      Differential Revision: D6686945
      
      Pulled By: siying
      
      fbshipit-source-id: 8ec11bbdb46e3d0cdee7b6ad5897a09c5a07869f
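      A hedged sketch of capping preloading at DB open time as described above; the limit of 16 comes from the summary, while the surrounding structures and function names are assumptions:

```cpp
#include <cstddef>
#include <vector>

struct FileMetaSketch { /* placeholder for per-file metadata */ };

// Open table readers for at most `max_preload` files at DB open time;
// the rest are opened lazily on first access.
void PreloadSomeFiles(const std::vector<FileMetaSketch>& files,
                      size_t max_preload = 16) {
  size_t preloaded = 0;
  for (const auto& file : files) {
    if (preloaded >= max_preload) {
      break;  // cap reached; avoid slowing down DB reopen
    }
    // OpenTableReaderSketch(file);  // hypothetical table-reader warm-up
    (void)file;
    ++preloaded;
  }
}
```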
  9. 27 Dec 2018: 2 commits
    • Compaction limiter miscs (#4795) · 46e3209e
      Committed by Burton Li
      Summary:
1. Remove the unused API SubtractCompactionTask().
2. Assert that outstanding tasks drop to zero in the ConcurrentTaskLimiterImpl destructor.
3. Remove the GetOutstandingTask() check from the manual compaction test, as TEST_WaitForCompact() is not synchronized with 'delete prepicked_compaction' in DBImpl::BGWorkCompaction(), which may make the test flaky.
(A hedged sketch of the destructor assertion follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4795
      
      Differential Revision: D13542183
      
      Pulled By: siying
      
      fbshipit-source-id: 5eb2a47e62efe4126937149aa0df6e243ebefc33
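      A hedged sketch of point 2 above, asserting in the destructor that no tasks remain outstanding; the class shape is an assumption, not the actual ConcurrentTaskLimiterImpl:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

class TaskLimiterSketch {
 public:
  ~TaskLimiterSketch() {
    // If this fires, some caller left an outstanding-task count behind.
    assert(outstanding_tasks_.load(std::memory_order_relaxed) == 0);
  }

  void AddTask() { outstanding_tasks_.fetch_add(1, std::memory_order_relaxed); }
  void FinishTask() { outstanding_tasks_.fetch_sub(1, std::memory_order_relaxed); }

 private:
  std::atomic<int64_t> outstanding_tasks_{0};
};
```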
    • Fix typos in comments (#4819) · b1288cdc
      Committed by Max
      Summary:
      Fix some typos in comments.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4819
      
      Differential Revision: D13548543
      
      Pulled By: siying
      
      fbshipit-source-id: ca2e128fa47bef32892fc3627a7541fd9e2d5c3f
  10. 22 Dec 2018: 2 commits
  11. 21 Dec 2018: 2 commits
    • fix DeleteRange memory leak for mmap and block cache (#4810) · e0be1bc4
      Committed by Andrew Kryczka
      Summary:
Previously we were cleaning up the range tombstone meta-block by calling `ReleaseCachedEntry`, which wouldn't work if `value != nullptr && cache_handle == nullptr`. This happened at least in the case with mmap reads and block cache both enabled. I noticed `NewDataBlockIterator` intends to handle all these cases, so I migrated to that instead of `NewUnfragmentedRangeTombstoneIterator`.

Also changed the table-opening logic to fail on `ReadRangeDelBlock` failure, since that can cause data corruption. Added a test case to verify this behavior. Note the test case does not fail on `TryReopen` because failure to preload table handlers is not considered critical; however, it does fail on any read involving that file since it cannot return correct data. (A hedged sketch of the cleanup follows this entry.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4810
      
      Differential Revision: D13534296
      
      Pulled By: ajkr
      
      fbshipit-source-id: 55dde1111717cea6ec4bf38418daab81ccef3599
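      A hedged sketch of the ownership issue described above; the types and names are illustrative, not the actual BlockBasedTable code. A cleanup that only releases a cache handle leaks the block when the block is owned directly, as on the mmap path, so both modes must be handled:

```cpp
struct BlockSketch { /* decoded meta-block contents */ };

struct CacheHandleSketch { /* opaque handle owned by a block cache */ };

inline void ReleaseCacheHandle(CacheHandleSketch* /*handle*/) {
  // Stub: a real cache would decrement the handle's reference count here.
}

// Cleanup covering both ownership modes: if the block came from the block
// cache, release the handle; otherwise the caller owns the block and must
// delete it itself (the case the old ReleaseCachedEntry-only path missed).
void CleanupBlock(BlockSketch* value, CacheHandleSketch* cache_handle) {
  if (cache_handle != nullptr) {
    ReleaseCacheHandle(cache_handle);
  } else {
    delete value;  // value != nullptr && cache_handle == nullptr (mmap path)
  }
}
```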
    • Introduce a CPU time counter in perf_context (#4741) · da1c64b6
      Committed by Siying Dong
      Summary:
Introduce the first CPU timing counter, perf_context.get_cpu_nanos. This opens the door to more CPU counters in the future.
Only the Posix Env implements it, using clock_gettime() with CLOCK_THREAD_CPUTIME_ID; how accurate the counter is depends on the platform. (A hedged sketch follows this entry.)
PerfStepTimer now takes an Env as an argument, which is sometimes passed in. The immediate reason is to let the unit tests use SpecialEnv, where we can inject logic, but in the long term this is a good change.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4741
      
      Differential Revision: D13287798
      
      Pulled By: siying
      
      fbshipit-source-id: 090361049d9d5095d1d1a369fe1338d2e2e1c73f
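      A hedged sketch of reading per-thread CPU time the way the summary describes for the Posix Env; this is standard POSIX clock_gettime usage, not the actual RocksDB Env implementation, and the function name is a placeholder:

```cpp
#include <cstdint>
#include <ctime>

// Returns the calling thread's consumed CPU time in nanoseconds, or 0 if
// the clock is unavailable. Accuracy depends on the platform, as noted above.
uint64_t ThreadCpuNanosSketch() {
#if defined(CLOCK_THREAD_CPUTIME_ID)
  struct timespec ts;
  if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) != 0) {
    return 0;
  }
  return static_cast<uint64_t>(ts.tv_sec) * 1000000000ull +
         static_cast<uint64_t>(ts.tv_nsec);
#else
  return 0;
#endif
}
```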
  12. 20 Dec 2018: 4 commits
  13. 19 Dec 2018: 3 commits