- 14 Feb 2017 (2 commits)
-
-
Committed by Yi Wu
Summary: It seems `has_unpersisted_data_` is read from the read thread and written from the write thread concurrently, without synchronization. Made it an atomic. I updated the logic not because I saw any problem with it, but because it felt confusing. Closes https://github.com/facebook/rocksdb/pull/1869 Differential Revision: D4555837 Pulled By: yiwu-arbug fbshipit-source-id: eff2ab8
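As a sketch of the pattern this fix applies (hypothetical class and member names, not RocksDB's actual code): a flag stored by the write thread and loaded by the read thread must be a `std::atomic`, otherwise the concurrent accesses are a data race.

```cpp
#include <atomic>

// Hypothetical stand-in for a flag like has_unpersisted_data_: stored by the
// write thread, loaded by the read thread. std::atomic makes the concurrent
// access well-defined; release/acquire ordering publishes prior writes.
class WriteState {
 public:
  void MarkUnpersisted() {
    has_unpersisted_data_.store(true, std::memory_order_release);
  }
  void MarkPersisted() {
    has_unpersisted_data_.store(false, std::memory_order_release);
  }
  bool HasUnpersistedData() const {
    return has_unpersisted_data_.load(std::memory_order_acquire);
  }

 private:
  std::atomic<bool> has_unpersisted_data_{false};
};
```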
-
Committed by Sagar Vemuri
Summary: Remove the disableDataSync option, and the similarly named disable_data_sync option. This is being done to simplify options, and also because the performance gains of this feature can be achieved by other methods. Closes https://github.com/facebook/rocksdb/pull/1859 Differential Revision: D4541292 Pulled By: sagar0 fbshipit-source-id: 5b3a6ca
-
- 11 Feb 2017 (1 commit)
-
-
Committed by Dmitri Smirnov
Summary: Record the first parsed sequence number as the minimum, so we can find the true minimum; otherwise everything is larger than zero. Fix the comparator name comparison. Closes https://github.com/facebook/rocksdb/pull/1858 Differential Revision: D4544365 Pulled By: ajkr fbshipit-source-id: 439cbc2
-
- 09 Feb 2017 (2 commits)
-
-
Committed by Andrew Kryczka
Summary: It was really annoying to have two places (top and bottom of compaction loop) where we cut output files. I had bugs in both DeleteRange and dictionary compression due to updating only one of the two. This diff consolidates the file-cutting logic to the bottom of the compaction loop. Keep in mind that my goal with input_status is to be consistent with the past behavior, even though I'm not sure it's ideal. Closes https://github.com/facebook/rocksdb/pull/1832 Differential Revision: D4503038 Pulled By: ajkr fbshipit-source-id: 7da5213
-
Committed by Maysam Yabandeh
Summary: Printing compaction stats used to operate on two variables:
- number_levels_: for printing the layers
- num_levels_to_check: for updating the compaction score
After commit 361010d4 these two are mixed up, and as a result the last layer might not be printed out: https://fb.facebook.com/groups/rocksdb.internal/permalink/1315716625143616/ number_levels_ was used to decide which layers to print: https://github.com/facebook/rocksdb/blob/672300f47f72b28c164fdf98a08171c09e311205/db/internal_stats.cc#L753 but after the patch it is based on the return value of DumpCFMapStats https://github.com/facebook/rocksdb/blob/361010d44738de48ffc4fd9add70caa0891a0719/db/internal_stats.cc#L929 which returns num_levels_to_check: https://github.com/facebook/rocksdb/blob/361010d44738de48ffc4fd9add70caa0891a0719/db/internal_stats.cc#L917 Closes https://github.com/facebook/rocksdb/pull/1853 Differential Revision: D4529280 Pulled By: maysamyabandeh fbshipit-source-id: 3fd9448
-
- 07 Feb 2017 (3 commits)
-
-
Committed by Maysam Yabandeh
Summary: Partition Index blocks and use a Partition-index as a 2nd level index. The two-level index can be used by setting BlockBasedTableOptions::kTwoLevelIndexSearch as the index type and configuring BlockBasedTableOptions::index_per_partition t15539501 Closes https://github.com/facebook/rocksdb/pull/1814 Differential Revision: D4473535 Pulled By: maysamyabandeh fbshipit-source-id: bffb87e
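A minimal sketch of what a two-level index lookup looks like (hypothetical structures for illustration, not the actual BlockBasedTable code): the top-level "partition index" maps the last key of each partition to a second-level index block, which in turn maps the last key of each data block to that block's location.

```cpp
#include <map>
#include <string>

struct BlockHandle { int offset; };  // stands in for a block's file location

using IndexBlock = std::map<std::string, BlockHandle>;    // 2nd level
using PartitionIndex = std::map<std::string, IndexBlock>; // 1st level

// Find the data block that may contain `key`: binary-search the partition
// index first, then the chosen second-level index block.
const BlockHandle* Lookup(const PartitionIndex& top, const std::string& key) {
  auto part = top.lower_bound(key);  // first partition whose last key >= key
  if (part == top.end()) return nullptr;
  auto blk = part->second.lower_bound(key);
  if (blk == part->second.end()) return nullptr;
  return &blk->second;
}
```

The win is that only the small top level needs to stay pinned in memory; second-level index blocks can be loaded (and cached) on demand.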
-
Committed by Dmitri Smirnov
Summary: Introduce new methods into the public threadpool interface:
- allow submission of std::functions, as they allow greater flexibility
- add Joining methods to the implementation to join scheduled and submitted jobs, with an option to cancel jobs that did not start executing
- remove ugly `#ifdefs` between the pthread and std implementations; make it uniform
- introduce a pimpl for a drop-in replacement of the implementation
- introduce a rocksdb::port::Thread typedef as a replacement for std::thread; on Posix, Thread defaults to std::thread as before
- implement WindowsThread, which allocates memory in a more controllable manner than the Windows std::thread, with a replaceable implementation
- there should be no functionality changes
Closes https://github.com/facebook/rocksdb/pull/1823 Differential Revision: D4492902 Pulled By: siying fbshipit-source-id: c74cb11
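A minimal sketch of a pool with the interface direction described, i.e. submitting `std::function` jobs and joining with optional cancellation of not-yet-started jobs (hypothetical class, not RocksDB's ThreadPoolImpl):

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class MiniPool {
 public:
  explicit MiniPool(int n) {
    for (int i = 0; i < n; ++i) workers_.emplace_back([this] { Run(); });
  }
  void Submit(std::function<void()> job) {
    { std::lock_guard<std::mutex> l(mu_); jobs_.push(std::move(job)); }
    cv_.notify_one();
  }
  // Join all workers; optionally discard jobs that have not started executing.
  void JoinAll(bool cancel_pending) {
    {
      std::lock_guard<std::mutex> l(mu_);
      done_ = true;
      if (cancel_pending) {
        std::queue<std::function<void()>> empty;
        jobs_.swap(empty);
      }
    }
    cv_.notify_all();
    for (auto& t : workers_) t.join();
  }

 private:
  void Run() {
    for (;;) {
      std::function<void()> job;
      {
        std::unique_lock<std::mutex> l(mu_);
        cv_.wait(l, [this] { return done_ || !jobs_.empty(); });
        if (jobs_.empty()) return;  // done_ set and nothing left to run
        job = std::move(jobs_.front());
        jobs_.pop();
      }
      job();  // run outside the lock
    }
  }
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> jobs_;
  std::vector<std::thread> workers_;
  bool done_ = false;
};
```

With `JoinAll(false)` the queue drains before the workers exit; with `JoinAll(true)` pending jobs are dropped and only already-running jobs finish.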
-
Committed by Vitaliy Liptchinsky
Summary: Added method that returns approx num of entries as well as size for memtables. Closes https://github.com/facebook/rocksdb/pull/1841 Differential Revision: D4511990 Pulled By: VitaliyLi fbshipit-source-id: 9a4576e
-
- 04 Feb 2017 (1 commit)
-
-
Committed by Siying Dong
Summary: In theory, Get() can return a wrong result if it races in a special way with a flush. The bug can be reproduced in DBTest2.GetRaceFlush. Fix this bug by getting the snapshot after referencing the super version. Closes https://github.com/facebook/rocksdb/pull/1816 Differential Revision: D4475958 Pulled By: siying fbshipit-source-id: bd9e67a
-
- 03 Feb 2017 (1 commit)
-
-
Committed by Islam AbdelRahman
Summary: merger.h was always a confusing name for me; simply give the file a better name. Closes https://github.com/facebook/rocksdb/pull/1836 Differential Revision: D4505357 Pulled By: IslamAbdelRahman fbshipit-source-id: 07b28d8
-
- 28 Jan 2017 (2 commits)
-
-
Committed by oranagra
Summary:
- rocksdb_property_int (so that we don't have to parse strings)
- rocksdb_set_options (to allow controlling options via strings)
- a few other missing options exposed
- a documentation comment fix
Closes https://github.com/facebook/rocksdb/pull/1793 Differential Revision: D4456569 Pulled By: yiwu-arbug fbshipit-source-id: 9f1fac1
-
Committed by Siying Dong
Summary: In the patch in which the LRU cache was made to use dynamic shard bits, I changed it to 2 shard bits to make the test happy. Looks like it is occasionally still unhappy. Change it to 4 shard bits. Closes https://github.com/facebook/rocksdb/pull/1815 Differential Revision: D4475849 Pulled By: siying fbshipit-source-id: 575ff00
-
- 27 Jan 2017 (2 commits)
-
-
Committed by Siying Dong
Summary: If users call NewLRUCache() without passing in the number of shard bits, instead of using the hard-coded 6 we'll determine it based on capacity. Closes https://github.com/facebook/rocksdb/pull/1584 Differential Revision: D4242517 Pulled By: siying fbshipit-source-id: 86b0f18
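A sketch of a capacity-based policy of the kind described (the exact thresholds here, a minimum of ~512KB per shard and a cap of 6 shard bits, are an assumption for illustration, not necessarily what the commit uses):

```cpp
#include <cstddef>

// Pick the number of cache shard bits from capacity: enough shards for
// concurrency, but each shard should get at least ~512KB so small caches
// are not fragmented into uselessly tiny shards. Capped at 6 bits (64 shards).
int ShardBitsForCapacity(size_t capacity) {
  const size_t min_shard_size = 512u * 1024;  // assumed minimum per shard
  int bits = 0;
  size_t num_shards = capacity / min_shard_size;
  while (num_shards >>= 1) {
    if (++bits >= 6) return 6;
  }
  return bits;
}
```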
-
Committed by Siying Dong
Summary: A current data race issue in Get() and Flush() can cause a Get() to return wrong results when a flush happened in the middle. Disable the test for now. Closes https://github.com/facebook/rocksdb/pull/1813 Differential Revision: D4472310 Pulled By: siying fbshipit-source-id: 5755ebd
-
- 26 Jan 2017 (3 commits)
-
-
Committed by sdong
Summary: logs_.back() is called out of DB mutex, which can cause data race. We move the access into the DB mutex protection area. Closes https://github.com/facebook/rocksdb/pull/1774 Reviewed By: AsyncDBConnMarkedDownDBException Differential Revision: D4417472 Pulled By: AsyncDBConnMarkedDownDBException fbshipit-source-id: 2da1f1e
-
Committed by Islam AbdelRahman
Summary: GetAndRefSuperVersion() should not be called again in the same thread before ReturnAndCleanupSuperVersion() is called. If we have a compaction filter that uses DB::Get, this will happen:
```
CompactFiles() {
  GetAndRefSuperVersion() // -- first call
  ..
  CompactionFilter() {
    GetAndRefSuperVersion() // -- second call
    ReturnAndCleanupSuperVersion()
  }
  ..
  ReturnAndCleanupSuperVersion()
}
```
We solve this issue the same way Iterator solves it, by using GetReferencedSuperVersion(). This was discovered in https://github.com/facebook/mysql-5.6/issues/427 by alxyang. Closes https://github.com/facebook/rocksdb/pull/1803 Differential Revision: D4460155 Pulled By: IslamAbdelRahman fbshipit-source-id: 5e54322
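A toy model of the hazard and the fix's shape (hypothetical globals and names, a deliberate simplification of the real thread-local SuperVersion caching): the per-thread cached resource must not be handed out twice, so a nested acquisition falls back to a separately referenced copy.

```cpp
struct SuperVersion { int refs = 0; };

thread_local bool tl_in_use = false;  // models "cached SV already handed out"
SuperVersion cached;    // stands in for the thread-local cached SuperVersion
SuperVersion separate;  // stands in for a freshly referenced copy

SuperVersion* GetAndRef() {
  if (tl_in_use) {   // nested call (e.g. DB::Get from a compaction filter):
    separate.refs++; // take a separate reference instead of the cached one
    return &separate;
  }
  tl_in_use = true;
  cached.refs++;
  return &cached;
}

void ReturnAndCleanup(SuperVersion* sv) {
  sv->refs--;
  if (sv == &cached) tl_in_use = false;
}
```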
-
Committed by Andrew Kryczka
Summary: when writing RangeDelAggregator::AddToBuilder, I forgot that there are sentinel tombstones in the middle of the interval map since gaps between real tombstones are represented with sentinels. blame: #1614 Closes https://github.com/facebook/rocksdb/pull/1804 Differential Revision: D4460426 Pulled By: ajkr fbshipit-source-id: 69444b5
-
- 25 Jan 2017 (3 commits)
-
-
Committed by Islam AbdelRahman
Summary: GetAndRefSuperVersionUnlocked(), ReturnAndCleanupSuperVersionUnlocked(), and GetColumnFamilyHandleUnlocked() are dead code, not used anywhere. Closes https://github.com/facebook/rocksdb/pull/1802 Differential Revision: D4459948 Pulled By: IslamAbdelRahman fbshipit-source-id: 30fa89d
-
Committed by Andrew Kryczka
Summary: It's a test case for #1797. Also got rid of kTypeDeletion in the conditional since we treat it the same as kTypeRangeDeletion. Closes https://github.com/facebook/rocksdb/pull/1800 Differential Revision: D4451300 Pulled By: ajkr fbshipit-source-id: b39dda1
-
Committed by Andrew Kryczka
Summary: This test ensures RangeDelAggregator can still access blocks even if it outlives the table readers that created them (detailed description in comments). I plan to optimize away the extra cache lookup we currently do in BlockBasedTable::NewRangeTombstoneIterator(), as it is ~5% CPU in my random read benchmark in a database with 1k tombstones. This test will help make sure nothing breaks in the process. Closes https://github.com/facebook/rocksdb/pull/1739 Differential Revision: D4375954 Pulled By: ajkr fbshipit-source-id: aef9357
-
- 24 Jan 2017 (3 commits)
-
-
Committed by Andrew Kryczka
Summary: change the iterator status to NotSupported as soon as a range tombstone is encountered by a ForwardIterator. Closes https://github.com/facebook/rocksdb/pull/1593 Differential Revision: D4246294 Pulled By: ajkr fbshipit-source-id: aef9f49
-
Committed by Hyeonseok Oh
Summary: I fixed exisit -> exist Closes https://github.com/facebook/rocksdb/pull/1799 Differential Revision: D4451466 Pulled By: yiwu-arbug fbshipit-source-id: b447c3a
-
Committed by yinqiwen
Summary: Closes https://github.com/facebook/rocksdb/pull/1797 Differential Revision: D4448004 Pulled By: ajkr fbshipit-source-id: 3ffc27c
-
- 21 Jan 2017 (6 commits)
-
-
Committed by Vitaliy Liptchinsky
Summary: Fix a GetApproximateSize bug for the case of computing stats for memtables only. Closes https://github.com/facebook/rocksdb/pull/1795 Differential Revision: D4445507 Pulled By: IslamAbdelRahman fbshipit-source-id: 3905846
-
Committed by Jay Lee
Summary: Allow setting a SavePoint on WriteBatch in the C API. Closes https://github.com/facebook/rocksdb/pull/1698 Differential Revision: D4378556 Pulled By: yiwu-arbug fbshipit-source-id: afca746
-
Committed by Changli Gao
Summary: We should validate this option, otherwise we may see std::out_of_range thrown at db/db_impl.cc:1124:

    1123   for (unsigned int i = 0; i <= end; i++) {
    1124     std::string& to_delete = old_info_log_files.at(i);
    1125     std::string full_path_to_delete =
    1126         (immutable_db_options_.db_log_dir.empty()

Closes https://github.com/facebook/rocksdb/pull/1722 Differential Revision: D4379495 Pulled By: yiwu-arbug fbshipit-source-id: e136552
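The shape of the validation is simple; here is a sketch (the function name is hypothetical, and the assumption that the offending option is a keep-at-least-one count like keep_log_file_num is mine, inferred from the log-deletion loop above):

```cpp
#include <cstddef>

// Sanitize a "number of log files to keep" style option before it is used
// to index old_info_log_files: a value of 0 would make the deletion loop's
// bounds wrap and .at(i) throw std::out_of_range.
size_t SanitizeKeepLogFileNum(size_t keep_log_file_num) {
  return keep_log_file_num < 1 ? 1 : keep_log_file_num;
}
```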
-
Committed by Shu Zhang
Summary: Closes https://github.com/facebook/rocksdb/pull/1783 Differential Revision: D4443463 Pulled By: IslamAbdelRahman fbshipit-source-id: 39d21d6
-
Committed by Siying Dong
Summary: If users directly call OptimizeForPointLookup(), it is broken, as the option isn't compatible with parallel memtable insert. Fix it by using the memtable bloom filter instead. Closes https://github.com/facebook/rocksdb/pull/1791 Differential Revision: D4442836 Pulled By: siying fbshipit-source-id: bf6c9cd
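For context on why a memtable bloom filter helps point lookups, here is a toy whole-key bloom filter (illustration only, not RocksDB's DynamicBloom; sizes and the hash-seeding scheme are arbitrary): a negative probe lets a Get() skip a memtable without a full search.

```cpp
#include <bitset>
#include <functional>
#include <string>

class ToyBloom {
 public:
  void Add(const std::string& key) {
    for (size_t i = 0; i < kProbes; ++i) bits_.set(Hash(key, i) % kBits);
  }
  // False means definitely absent; true means "maybe present"
  // (false positives are possible, false negatives are not).
  bool MayContain(const std::string& key) const {
    for (size_t i = 0; i < kProbes; ++i)
      if (!bits_.test(Hash(key, i) % kBits)) return false;
    return true;
  }

 private:
  static size_t Hash(const std::string& key, size_t seed) {
    // Derive kProbes hashes from one base hash (simplistic seeding).
    return std::hash<std::string>{}(key) * (seed * 2 + 1) + seed;
  }
  static const size_t kBits = 4096;
  static const size_t kProbes = 4;
  std::bitset<kBits> bits_;
};
```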
-
Committed by Vitaliy Liptchinsky
Summary: Added an option to GetApproximateSizes to exclude file stats, as MyRocks has those counted exactly and we need only stats from memtables. Closes https://github.com/facebook/rocksdb/pull/1787 Differential Revision: D4441111 Pulled By: IslamAbdelRahman fbshipit-source-id: c11f4c3
-
- 20 Jan 2017 (3 commits)
-
-
Committed by Yi Wu
Summary: Fix the bug where, if syncing the log fails, FlushJob::Run() will not be executed and the reference to cfd->current() will not be released. Closes https://github.com/facebook/rocksdb/pull/1792 Differential Revision: D4441316 Pulled By: yiwu-arbug fbshipit-source-id: 5523e28
-
Committed by Islam AbdelRahman
Summary: Disable IngestExternalFile() in read only mode Closes https://github.com/facebook/rocksdb/pull/1781 Differential Revision: D4439179 Pulled By: IslamAbdelRahman fbshipit-source-id: b7e46e7
-
Committed by Reid Horuff
Summary: Consider the following single column family scenario:
prepare in log A
commit in log B
*WAL is too large, flush all CFs to release log A*
*CFA is on log B, so we do not see that CFA depends on log A, and no flush is requested*
To fix this we must also consider the log containing the prepare section when determining which log a CF is dependent on. Closes https://github.com/facebook/rocksdb/pull/1768 Differential Revision: D4403265 Pulled By: reidHoruff fbshipit-source-id: ce800ff
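The core of the fix reduces to a min over two log numbers; a sketch with a hypothetical struct (the real code tracks this inside the column family / transaction state):

```cpp
#include <algorithm>
#include <cstdint>

struct CfLogState {
  uint64_t data_log;     // log holding the CF's oldest unflushed write
  uint64_t prepare_log;  // log holding the oldest outstanding prepare; 0 if none
};

// The WAL a CF depends on is the *minimum* of the log containing its
// unflushed data and the log containing its prepare sections. Ignoring
// the prepare log is exactly the bug in the scenario above.
uint64_t MinLogNeeded(const CfLogState& cf) {
  if (cf.prepare_log == 0) return cf.data_log;
  return std::min(cf.data_log, cf.prepare_log);
}
```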
-
- 19 Jan 2017 (1 commit)
-
-
Committed by Andrew Kryczka
Summary: CockroachDB exposed this bug in #1778. The bug happens when a compaction's output files are ended due to exceeding max_compaction_bytes. In that case we weren't taking into account the next file's start key when deciding how far to extend the current file's max_key. This caused the non-overlapping key-range invariant to be violated. Note this was correctly handled for the usual case of cutting compaction output, which is file size exceeding max_output_file_size. I am not sure why these are two separate code paths, but we can consider refactoring them to prevent such errors in the future. Closes https://github.com/facebook/rocksdb/pull/1784 Differential Revision: D4430235 Pulled By: ajkr fbshipit-source-id: 80af748
-
- 18 Jan 2017 (2 commits)
-
-
Committed by Islam AbdelRahman
Summary: When debugging tests, it's useful to preserve the DB to investigate it and check the logs. This will allow us to set KEEP_DB=1 to preserve the DB. Closes https://github.com/facebook/rocksdb/pull/1759 Differential Revision: D4393826 Pulled By: IslamAbdelRahman fbshipit-source-id: 1bff689
-
Committed by Siying Dong
Summary: If concurrent memtable insert is enabled, and one prepare command and a normal command are grouped into a commit group, the sequence ID will be calculated incorrectly. Closes https://github.com/facebook/rocksdb/pull/1730 Differential Revision: D4371081 Pulled By: siying fbshipit-source-id: cd40c6d
-
- 12 Jan 2017 (2 commits)
-
-
Committed by Mike Kolupaev
Summary: DB shutdown aborts running compactions by setting an atomic shutting_down=true that CompactionJob periodically checks. Without this PR it checks it before processing every _output_ value. If compaction filter filters everything out, the compaction is uninterruptible. This PR adds checks for shutting_down on every _input_ value (in CompactionIterator and MergeHelper). There's also some minor code cleanup along the way. Closes https://github.com/facebook/rocksdb/pull/1639 Differential Revision: D4306571 Pulled By: yiwu-arbug fbshipit-source-id: f050890
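The shape of the change, checking the shutdown flag per *input* record rather than per output, can be sketched like this (hypothetical loop standing in for CompactionIterator; the modulo "filter" is just a stand-in for a compaction filter that drops records):

```cpp
#include <atomic>
#include <vector>

// Returns false if aborted by shutdown. Because the flag is checked for every
// input value, a compaction whose filter drops everything (and so produces no
// outputs) can still be interrupted promptly.
bool ProcessInputs(const std::vector<int>& inputs,
                   const std::atomic<bool>& shutting_down,
                   std::vector<int>* outputs) {
  for (int v : inputs) {
    if (shutting_down.load(std::memory_order_acquire)) return false;
    if (v % 2 == 0) continue;  // "compaction filter" drops even values
    outputs->push_back(v);
  }
  return true;
}
```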
-
Committed by Changli Gao
Summary: Closes https://github.com/facebook/rocksdb/pull/1763 Differential Revision: D4398796 Pulled By: yiwu-arbug fbshipit-source-id: b82636d
-
- 10 Jan 2017 (1 commit)
-
-
Committed by Dmitri Smirnov
Summary: Enable direct I/O on WritableFileImpl::Append, with the offset being the current length of the file. Enable UniqueID tests on Windows; disable the others but let them compile. UniqueID tests are valuable for detecting failures on different filesystems and the upcoming ReFS. Clear output in WinEnv GetChildren. This is different from the previous strategy: do not touch output on failure. Make sure DBTest.OpenWhenOpen works with the Windows error message. Closes https://github.com/facebook/rocksdb/pull/1746 Differential Revision: D4385681 Pulled By: IslamAbdelRahman fbshipit-source-id: c07b702
-
- 09 Jan 2017 (2 commits)
-
-
Committed by Maysam Yabandeh
Summary: This reverts commit 54d94e9c. The pull request was landed by mistake. Closes https://github.com/facebook/rocksdb/pull/1755 Differential Revision: D4391678 Pulled By: maysamyabandeh fbshipit-source-id: 36d5149
-
Committed by Maysam Yabandeh
Summary: Currently point lookup values are copied to a string provided by the user. This incurs an extra memcpy cost. This patch allows doing point lookups via a PinnableSlice, which pins the source memory location (instead of copying its content) and releases it after the content is consumed by the user. The old Get(string) API is translated to the new API underneath. Here is the summary of improvements:
1. value 100 bytes: 1.8% regular, 1.2% merge values
2. value 1k bytes: 11.5% regular, 7.5% merge values
3. value 10k bytes: 26% regular, 29.9% merge values
The improvement for merge could be more if we extend this approach to pin the merge output and delay the full merge operation until the user actually needs it. We have put that off as future work. PS: Sometimes we observe a small decrease in performance when switching from t5452014 to this patch, but with the old Get(string) API; the difference is small and could be noise. More importantly, it is safely cancelled out. Closes https://github.com/facebook/rocksdb/pull/1732 Differential Revision: D4374613 Pulled By: maysamyabandeh fbshipit-source-id: a077f1a
-