- 11 Oct 2018, 8 commits
-
-
Committed by UncP
Summary: In `WriteThread::LaunchParallelMemTableWriters`, there is `write_group->running.store(write_group->size);` https://github.com/facebook/rocksdb/blob/master/db/write_thread.cc#L510 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4450 Differential Revision: D10201900 Pulled By: yiwu-arbug fbshipit-source-id: 96c8fbbba5aff7ba8a6ceb3117a2bd7cc9b2f34b
-
Committed by Simon Grätzer
Summary: When I override `WriteBatch::Handler::Continue` to return _false_ at some point, I always get the `Status::Corruption` error. I don't think this check is used correctly here: the counter in `found` cannot reflect all entries in the WriteBatch when we exit the loop early. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4478 Differential Revision: D10317416 Pulled By: yiwu-arbug fbshipit-source-id: cccae3382805035f9b3239b66682b5fcbba6bb61
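A minimal stdlib-only sketch (not the real RocksDB API; the handler and iteration logic here are simplified assumptions) of why an early-exiting `Continue()` makes an entry count unreliable:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified model of a batch handler: Continue() can stop iteration early,
// so the number of entries the handler saw need not equal the batch size.
struct Handler {
  int limit;    // stop after this many entries
  int seen = 0;
  bool Continue() const { return seen < limit; }
  void Put(const std::string& /*key*/) { ++seen; }
};

// Iterate the batch, honoring Continue() before each entry, and return the
// number of entries actually visited (the analogue of `found` above).
int Iterate(const std::vector<std::string>& batch, Handler* h) {
  for (const auto& key : batch) {
    if (!h->Continue()) break;  // early exit: remaining entries are skipped
    h->Put(key);
  }
  return h->seen;
}
```

With a 5-entry batch and a handler that stops after 2, `found` is 2, so a check that demands `found == batch size` would wrongly report corruption.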
-
Committed by Andrew Kryczka
Summary: Even during `DBIter::Prev()`, there is a case where we need to use `RangeDelPositioningMode::kForwardTraversal`. In particular, when we hit too many internal keys for a single user key, we use seek to find the newest internal key. If it's a merge operand, we then scan forwards, collecting the merge operands. This forward scan should be using `RangeDelPositioningMode::kForwardTraversal`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4481 Differential Revision: D10319507 Pulled By: ajkr fbshipit-source-id: b5ce7352461f3a7696b28a5136ae0076f2bde51f
-
Committed by Peter Pei
Summary: Fix #4288. Add `OnCompactionBegin` support to `rocksdb::EventListener`. Currently, we only have these three callbacks:
- OnFlushBegin
- OnFlushCompleted
- OnCompactionCompleted
As paolococchi requested in #4288, and ajkr agreed, we should also support `OnCompactionBegin`. This PR attempts to implement support for `OnCompactionBegin`. Hope it is useful to you. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4431 Differential Revision: D10055515 Pulled By: yiwu-arbug fbshipit-source-id: 39c0f95f8e9ff1c7ca3a10787502a17f258d2334
-
Committed by Yi Wu
Summary: Fix security warning from github: https://nvd.nist.gov/vuln/detail/CVE-2018-17567 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4480 Reviewed By: gfosco Differential Revision: D10316443 Pulled By: yiwu-arbug fbshipit-source-id: 34555c6f5774d779734b664b9baa70bd4209175f
-
Committed by Andrew Kryczka
Summary: I wrote a couple of tests using the public API to expose/prevent the bugs we talked about. In particular:
- When files have overlapping endpoints and a range tombstone spans them, ensure the largest key does not reappear to readers. This was happening due to a bug that skipped writing range tombstones to an output file when their begin key exactly matched the file's largest key.
- When a tombstone spans multiple atomic compaction units, ensure newer keys do not disappear by being compacted beneath it. This happened due to a range tombstone appearing untruncated to readers when it spanned files with overlapping endpoints, even if it extended into files without overlapping endpoints (i.e., different atomic compaction units).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4476 Differential Revision: D10286001 Pulled By: ajkr fbshipit-source-id: bb5ca51d0f90812fb37bfe1d01aec93f7eda55aa
-
Committed by Abhishek Madan
Summary: Follow-up to https://github.com/facebook/rocksdb/pull/4432. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4479 Differential Revision: D10304151 Pulled By: abhimadan fbshipit-source-id: 3608b95c324702ca26791f95cb26dae1d49efbe7
-
Committed by Zhichao Cao
Summary: If the query types being analyzed do not appear in the trace, the current trace_analyzer will use 0 as the begin time, which creates a time range from 1970/01/01 to the present and wastes a huge amount of memory. Fixed by adding the trace_create_time to limit the duration. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4473 Differential Revision: D10246204 Pulled By: zhichao-cao fbshipit-source-id: 42850b080b2e62f586fe73afd7737c2246d1a8c8
-
- 10 Oct 2018, 7 commits
-
-
Committed by Anand Ananthabhotla
Summary: There is a bug when the write queue leader is blocked on a write delay/stop and the queue has writers with WriteOptions::no_slowdown set to true: they are not woken up until the write stall is cleared. The fix introduces a dummy writer, inserted at the tail, to indicate a write stall and prevent further inserts into the queue, plus a condition variable that writers who can tolerate slowdown wait on before adding themselves to the queue. The leader calls WriteThread::BeginWriteStall() to add the dummy writer and then walks the queue to fail any writers with no_slowdown set. Once the stall clears, the leader calls WriteThread::EndWriteStall() to remove the dummy writer and signal the condition variable. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4475 Differential Revision: D10285827 Pulled By: anand1976 fbshipit-source-id: 747465e5e7f07a829b1fb0bc1afcd7b93f4ab1a9
-
Committed by jsteemann
Summary: This avoids a few copies of std::string and other structs in the context of range-based for loops. Instead of copying the values for each iteration, use a const reference to avoid copying. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4459 Differential Revision: D10282045 Pulled By: sagar0 fbshipit-source-id: 5012e910dca279abd2be847e1fb432d96274edfb
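A small stdlib sketch of the pattern being fixed (the copy-counting type and functions are illustrative, not from the RocksDB codebase): a counter makes the cost of `for (auto t : v)` versus `for (const auto& t : v)` visible.

```cpp
#include <cassert>
#include <string>
#include <vector>

// An element type that counts how often it is copied.
struct Tracked {
  static int copies;
  std::string s;
  explicit Tracked(std::string v) : s(std::move(v)) {}
  Tracked(const Tracked& o) : s(o.s) { ++copies; }
};
int Tracked::copies = 0;

// By-value loop variable: one copy per iteration.
size_t TotalLenByValue(const std::vector<Tracked>& v) {
  size_t n = 0;
  for (auto t : v) n += t.s.size();
  return n;
}

// Const reference: same result, zero copies.
size_t TotalLenByConstRef(const std::vector<Tracked>& v) {
  size_t n = 0;
  for (const auto& t : v) n += t.s.size();
  return n;
}
```

For elements like `std::string` the per-iteration copy also means a heap allocation, which is exactly the overhead the commit removes.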
-
Committed by moozzyk
Summary: Fixes: #4401 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4444 Differential Revision: D10282120 Pulled By: sagar0 fbshipit-source-id: d9ddcc1b132208ae7f806fa2106add6fec1baa11
-
Committed by jsteemann
Summary: Fixes a typo in error messages returned by Iterator::GetProperty(...) Pull Request resolved: https://github.com/facebook/rocksdb/pull/4457 Differential Revision: D10281965 Pulled By: sagar0 fbshipit-source-id: 1cd3c665f467ef06cdfd9f482692e6f8568f3d22
-
Committed by Jiri Appl
Summary: The original logic was assuming that the only architectures that the code would build for on Windows were x86 and x64. This change will enable building for arm32 on Windows as well. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4349 Differential Revision: D10280887 Pulled By: sagar0 fbshipit-source-id: 9ca0bede25505d22e13acf916d38aeeaaf5d981a
-
Committed by Abhishek Madan
Summary: To more accurately truncate range tombstones at SST boundaries, we now represent them in RangeDelAggregator using InternalKeys, which are end-key-exclusive as they were before this change. During compaction, "atomic compaction unit boundaries" (the range of keys contained in neighbouring and overlapping SSTs) are propagated down to RangeDelAggregator to truncate range tombstones at those boundaries instead. See https://github.com/facebook/rocksdb/pull/4432#discussion_r221072219 and https://github.com/facebook/rocksdb/pull/4432#discussion_r221138683 for motivating examples. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4432 Differential Revision: D10263952 Pulled By: abhimadan fbshipit-source-id: 2fe85ff8a02b3a6a2de2edfe708012797a7bd579
-
Committed by Zhongyi Xie
Summary: This should fix the currently failing TSAN jobs. The TSAN callstack:
> WARNING: ThreadSanitizer: data race (pid=87440)
> Read of size 8 at 0x7d580000fce0 by thread T22 (mutexes: write M548703):
>   #0 rocksdb::InternalStats::DumpCFStatsNoFileHistogram(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) db/internal_stats.cc:1204 (column_family_test+0x00000080eca7)
>   #1 rocksdb::InternalStats::DumpCFStats(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) db/internal_stats.cc:1169 (column_family_test+0x0000008106d0)
>   #2 rocksdb::InternalStats::HandleCFStats(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, rocksdb::Slice) db/internal_stats.cc:578 (column_family_test+0x000000810720)
>   #3 rocksdb::InternalStats::GetStringProperty(rocksdb::DBPropertyInfo const&, rocksdb::Slice const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) db/internal_stats.cc:488 (column_family_test+0x00000080670c)
>   #4 rocksdb::DBImpl::DumpStats() db/db_impl.cc:625 (column_family_test+0x00000070ce9a)
> Previous write of size 8 at 0x7d580000fce0 by main thread:
>   #0 rocksdb::InternalStats::AddCFStats(rocksdb::InternalStats::InternalCFStatsType, unsigned long) db/internal_stats.h:324 (column_family_test+0x000000693bbf)
>   #1 rocksdb::ColumnFamilyData::RecalculateWriteStallConditions(rocksdb::MutableCFOptions const&) db/column_family.cc:818 (column_family_test+0x000000693bbf)
>   #2 rocksdb::ColumnFamilyTest_WriteStallSingleColumnFamily_Test::TestBody() db/column_family_test.cc:2563 (column_family_test+0x0000005e5a49)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4474 Differential Revision: D10262099 Pulled By: miasantreble fbshipit-source-id: 1247973a3ca32e399b4575d3401dd5439c39efc5
-
- 09 Oct 2018, 7 commits
-
-
Committed by Zhongyi Xie
Summary: Currently, statistics are supposed to be dumped to the info log at intervals of `options.stats_dump_period_sec`. However, the implementation bound this to the compaction thread, meaning that if the database has been serving very light traffic, the stats may not get dumped at all. We decided to separate stats dumping into a new timed thread using `TimerQueue`, which is already used in blob_db. This will allow us to schedule new timed tasks with more deterministic behavior. Tested with db_bench using `--stats_dump_period_sec=20` on the command line:
> LOG:2018/09/17-14:07:45.575025 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
> LOG:2018/09/17-14:08:05.643286 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
> LOG:2018/09/17-14:08:25.691325 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
> LOG:2018/09/17-14:08:45.740989 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------

LOG content:
> 2018/09/17-14:07:45.575025 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
> 2018/09/17-14:07:45.575080 7fe99fbfe700 [WARN] [db/db_impl.cc:606] ** DB Stats **
> Uptime(secs): 20.0 total, 20.0 interval
> Cumulative writes: 4447K writes, 4447K keys, 4447K commit groups, 1.0 writes per commit group, ingest: 5.57 GB, 285.01 MB/s
> Cumulative WAL: 4447K writes, 0 syncs, 4447638.00 writes per sync, written: 5.57 GB, 285.01 MB/s
> Cumulative stall: 00:00:0.012 H:M:S, 0.1 percent
> Interval writes: 4447K writes, 4447K keys, 4447K commit groups, 1.0 writes per commit group, ingest: 5700.71 MB, 285.01 MB/s
> Interval WAL: 4447K writes, 0 syncs, 4447638.00 writes per sync, written: 5.57 MB, 285.01 MB/s
> Interval stall: 00:00:0.012 H:M:S, 0.1 percent
>
> ** Compaction Stats [default] **
> Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4382 Differential Revision: D9933051 Pulled By: miasantreble fbshipit-source-id: 6d12bb1e4977674eea4bf2d2ac6d486b814bb2fa
-
Committed by Fosco Marotto
Summary: Forgot this in previous commit. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4472 Differential Revision: D10244227 Pulled By: gfosco fbshipit-source-id: ba0cf7a2f5271f0d9f9443004e2620887cd5fd11
-
Committed by DorianZheng
Summary: - Fix a DBImpl API race condition. The timeline of the execution flow is as follows:
```
timeline   user_thread1                             user_thread2
t1    |    cfh = GetColumnFamilyHandleUnlocked(0)
t2    |    id1 = cfh->GetID()
t3    |                                             GetColumnFamilyHandleUnlocked(1)
t4    |    id2 = cfh->GetID()
      V
```
The original implementation returns a pointer to a stateful variable, so the returned `ColumnFamilyHandle` will be changed when another thread calls `GetColumnFamilyHandleUnlocked` with a different column family id.
- Expose the ColumnFamily ID to the compaction event listener.
- Fix the return status of `DBImpl::GetLatestSequenceForKey`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4391 Differential Revision: D10221243 Pulled By: yiwu-arbug fbshipit-source-id: dec60ee9ff0c8261a2f2413a8506ec1063991993
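A simplified single-threaded model of the stateful-return bug (the types and `GetHandleUnlocked` name are hypothetical stand-ins, not the real RocksDB signatures): the pointer handed to the first caller is silently repointed by a later call, while returning by value is immune.

```cpp
#include <cassert>

struct Handle {
  int id = -1;
};

// Buggy shape: the getter reuses one member object, so every caller
// receives a pointer to the same mutable slot.
struct StatefulDB {
  Handle reused_;
  Handle* GetHandleUnlocked(int id) {
    reused_.id = id;
    return &reused_;  // shared state: earlier callers see later mutations
  }
};

// Fixed shape: return by value, so each caller owns an independent copy.
struct FixedDB {
  Handle GetHandleUnlocked(int id) {
    Handle h;
    h.id = id;
    return h;
  }
};
```

In the buggy version, the handle obtained for column family 0 reports id 1 after a second call, which is the race observed in the timeline above once two threads are involved.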
-
Committed by DorianZheng
Summary: PTAL Pull Request resolved: https://github.com/facebook/rocksdb/pull/4466 Differential Revision: D10241358 Pulled By: yiwu-arbug fbshipit-source-id: 99664eb286860a6c8844d50efeb0ef6f0e10dd1e
-
Committed by DorianZheng
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/4467 Differential Revision: D10241418 Pulled By: yiwu-arbug fbshipit-source-id: f6adbe7292b2c934e14971c7432b3eb115c35026
-
Committed by Fosco Marotto
Summary: 5.16.x status wasn't tracked; also updated for the pending 5.17 release. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4471 Differential Revision: D10240925 Pulled By: gfosco fbshipit-source-id: 95ab368a04a65b201d2518097af69edf2402f544
-
Committed by Ben Clay
Summary: JNI passthrough for utilities/memory/memory_util.cc sagar0 adamretter Pull Request resolved: https://github.com/facebook/rocksdb/pull/4446 Differential Revision: D10174578 Pulled By: sagar0 fbshipit-source-id: d1d196d771dff22afb7ef7500f308233675696f8
-
- 06 Oct 2018, 2 commits
-
-
Committed by Maysam Yabandeh
Summary: It also renames InstallMemtableFlushResults to MaybeInstallMemtableFlushResults to clarify its contract. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4464 Differential Revision: D10224918 Pulled By: maysamyabandeh fbshipit-source-id: 04e3f2d8542002cb9f8010cb436f5152751b3cbe
-
Committed by Maysam Yabandeh
Summary: The contract of snprintf says that it returns "the number of characters that would have been written if n had been sufficiently large" (http://www.cplusplus.com/reference/cstdio/snprintf/). The existing code, however, assumed that the return value is the actual number of written bytes and used it to reposition the starting point for the next call to snprintf. This leads to a buffer overflow when the last call to snprintf has filled up the buffer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4465 Differential Revision: D10224080 Pulled By: maysamyabandeh fbshipit-source-id: 40f44e122d15b0db439812a0a361167cf012de3e
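A sketch of the correct repositioning logic (a minimal illustration of the bug class, not the actual patched RocksDB code): the cursor must advance by what actually fit in the buffer, clamping snprintf's would-be length.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Append formatted text into buf (capacity cap), starting at *pos.
// snprintf returns the length the output WOULD have had, so advancing the
// cursor by the raw return value can push it past the buffer; instead we
// clamp it to the space that was actually available.
void SafeAppend(char* buf, size_t cap, size_t* pos, const char* text) {
  if (*pos >= cap) return;  // buffer already full
  int n = snprintf(buf + *pos, cap - *pos, "%s", text);
  if (n < 0) return;        // encoding error
  size_t written = static_cast<size_t>(n);
  size_t avail = cap - *pos;
  // Advance by what actually fit (avail - 1 leaves room for the NUL),
  // not by the would-be length.
  *pos += (written < avail) ? written : avail - 1;
}
```

With an 8-byte buffer, appending "abcde" and then "fghij" leaves the cursor at 7 (the last writable byte) instead of 10, so a further call cannot write out of bounds.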
-
- 05 Oct 2018, 4 commits
-
-
Committed by Dmitry Alimov
Summary: Fix some typos in the comments Pull Request resolved: https://github.com/facebook/rocksdb/pull/4456 Differential Revision: D10209214 Pulled By: miasantreble fbshipit-source-id: dff857ba60396bc95126e635db96d7dc8330d2cb
-
Committed by Andrew Gallagher
Summary: Work around upstream bug with modules: https://bugs.llvm.org/show_bug.cgi?id=39184. Reviewed By: yiwu-arbug Differential Revision: D10209569 fbshipit-source-id: 696853a02a3869e9c33d0e61168ad4b0436fa3c0
-
Committed by Yanqin Jin
Summary: The old error message was misleading because it led people to believe the truncation operation failed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4454 Differential Revision: D10203575 Pulled By: riversand963 fbshipit-source-id: c76482a132566635cb55d4c73d45c461f295ec43
-
Committed by Zhongyi Xie
Summary: This should fix the currently failing contrun tests: rocksdb-contrun-no_compression, rocksdb-contrun-tsan, rocksdb-contrun-tsan_crash. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4453 Differential Revision: D10202626 Pulled By: miasantreble fbshipit-source-id: 850b07f14f671b5998c22d8239e2a55b2fc1e355
-
- 04 Oct 2018, 1 commit
-
-
Committed by JiYou
Summary: This fix is for `level == 0` in `GetOverlappingInputs()`:
- In `GetOverlappingInputs()`, if `level == 0`, there is a potential risk of index overflow when `i == 0`.
- Optimize the process when `expand = true`; the expected complexity can be reduced to O(n).
Signed-off-by: JiYou <jiyou09@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/4385 Differential Revision: D10181001 Pulled By: riversand963 fbshipit-source-id: 46eef8a1d1605c9329c164e6471cd5c5b6de16b5
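One common shape of the `i == 0` hazard in a backward scan (an illustrative guess at the bug class, not the actual RocksDB loop): with an unsigned index, `i--` at zero wraps around instead of terminating, so a safe idiom keeps the index one past the element being read.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Buggy shape: for (size_t i = n - 1; i >= 0; i--) never terminates,
// because an unsigned i wraps to SIZE_MAX when decremented past zero
// (and n - 1 itself wraps when n == 0).
//
// Safe idiom: loop while i > 0 and index with i - 1.
int SumBackward(const std::vector<int>& v) {
  int sum = 0;
  for (size_t i = v.size(); i > 0; i--) {
    sum += v[i - 1];  // i - 1 is the real index; no wraparound possible
  }
  return sum;
}
```

The empty-vector case is handled for free: `v.size() == 0` fails the `i > 0` test immediately.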
-
- 03 Oct 2018, 3 commits
-
-
Committed by Igor Canadi
Summary: This is a conceptually simple change, but it touches many files to pass the allocator through function calls. We introduce CacheAllocator, which can be used by clients to configure custom allocator for cache blocks. Our motivation is to hook this up with folly's `JemallocNodumpAllocator` (https://github.com/facebook/folly/blob/f43ce6d6866b7b994b3019df561109afae050ebc/folly/experimental/JemallocNodumpAllocator.h), but there are many other possible use cases. Additionally, this commit cleans up memory allocation in `util/compression.h`, making sure that all allocations are wrapped in a unique_ptr as soon as possible. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4437 Differential Revision: D10132814 Pulled By: yiwu-arbug fbshipit-source-id: be1343a4b69f6048df127939fea9bbc96969f564
-
Committed by Yanqin Jin
Summary: Before running CompactFilesTest.SentinelCompressionType, we should check whether zlib and snappy are supported. CompactFilesTest.SentinelCompressionType is a newly added test. Compilation and linking with different options, e.g. COMPILE_WITH_TSAN, COMPILE_WITH_ASAN, etc., lead to generation of different binaries. On the one hand, it's not clear why zlib or snappy is present under ASAN but not under TSAN. On the other hand, changing the compilation flags for TSAN or ASAN seems a bigger change worth much more attention. To unblock the cont-runs, I suggest that we simply add these two checks at the beginning of the test, as we did for GeneralTableTest.ApproximateOffsetOfCompressed in table/table_test.cc. Future actions include investigating the absence of zlib and snappy when compiling with TSAN, i.e. COMPILE_WITH_TSAN=1, if necessary. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4443 Differential Revision: D10140935 Pulled By: riversand963 fbshipit-source-id: 62f96d1e685386accd2ef0b98f6f754d3fd67b3e
-
Committed by Jakub Cech
Summary: Adding IOTA Foundation to USERS.MD Pull Request resolved: https://github.com/facebook/rocksdb/pull/4436 Differential Revision: D10108142 Pulled By: sagar0 fbshipit-source-id: 948dc9f7169cec5c113ae347f1af765a41355aae
-
- 02 Oct 2018, 2 commits
-
-
Committed by Gihwan Oh
Summary: Add newline for readability Pull Request resolved: https://github.com/facebook/rocksdb/pull/4434 Differential Revision: D10127684 Pulled By: riversand963 fbshipit-source-id: 39f3ed7eaea655b6ff83474bc9f7616c6ad59107
-
Committed by Yanqin Jin
Summary: In DBCompactionTestWithParam::ManualLevelCompactionOutputPathId, there is a race condition between `DBTestBase::GetSstFileCount` and `DBImpl::PurgeObsoleteFiles`. The following graph explains why:
```
Timeline  db_compact_test_t                 bg_flush_t                  bg_compact_t
   |      [initiate bg flush and
   |       start waiting]
   |                                        flush
   |                                        DeleteObsoleteFiles
   |      [waken up by bg_flush_t which
   |       signaled in DeleteObsoleteFiles]
   |
   |      [initiate compaction and
   |       start waiting]
   |                                                                    [compact,
   |                                                                     set manual.done to true]
   |                                        [signal at the end of
   |                                         BackgroundCallFlush]
   |      [waken up by bg_flush_t
   |       which signaled before
   |       returning from
   |       BackgroundCallFlush]
   |
   |      Check manual.done is true
   |
   |      GetSstFileCount  <-- race condition -->  PurgeObsoleteFiles
   V
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4440 Differential Revision: D10122628 Pulled By: riversand963 fbshipit-source-id: 3ede73c39fee6ad804dc6ac1ed84759c7e63977f
-
- 01 Oct 2018, 1 commit
-
-
Committed by Andrew Kryczka
Summary: Previously `CompactFiles` with `CompressionType::kDisableCompressionOption` caused program to crash on assertion failure. This PR fixes the crash by adding support for that setting. Now, that setting will cause RocksDB to choose compression according to the column family's options. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4438 Differential Revision: D10115761 Pulled By: ajkr fbshipit-source-id: a553c6fa76fa5b6f73b0d165d95640da6f454122
-
- 28 Sep 2018, 2 commits
-
-
Committed by Yi Wu
Summary: Introduce the `RepeatableThread` utility to run a task periodically in a separate thread. It is basically the same as the class of the same name in fbcode, and in addition provides a helper method to let tests mock time and trigger execution one run at a time. We can use this class to replace `TimerQueue` in #4382 and `BlobDB`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4423 Differential Revision: D10020932 Pulled By: yiwu-arbug fbshipit-source-id: 3616bef108c39a33c92eedb1256de424b7c04087
-
Committed by JiYou
Summary: `FindFile()` and `FindFileInRange()` actually work the same as `std::lower_bound()`. Use `std::lower_bound()` to reduce the repeated code.
- Change `FindFile()` and `FindFileInRange()` to use `std::lower_bound()`.
Signed-off-by: JiYou <jiyou09@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/4372 Differential Revision: D9919677 Pulled By: ajkr fbshipit-source-id: f74aaa30e2f80e410e299c5a5bca4eaf2a7a26de
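A simplified model of the refactor (here a "file" is summarized by just its largest key; the real code compares InternalKeys with a user-supplied comparator): `FindFile` returns the first file whose largest key is >= the target, which is exactly `std::lower_bound` over files sorted by largest key.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal stand-in for FileMetaData: only the largest key matters here.
struct FileMeta {
  int largest_key;
};

// Index of the first file whose largest key is >= key, or files.size()
// if no such file exists -- the same contract as version_set's FindFile.
size_t FindFile(const std::vector<FileMeta>& files, int key) {
  auto it = std::lower_bound(
      files.begin(), files.end(), key,
      [](const FileMeta& f, int k) { return f.largest_key < k; });
  return static_cast<size_t>(it - files.begin());
}
```

Replacing a hand-rolled binary search with `std::lower_bound` removes duplicated loop logic and makes the "first element not less than key" contract explicit.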
-
- 27 Sep 2018, 3 commits
-
-
Committed by Sagar Vemuri
Summary: The assert in PosixEnv::FileExists is currently based on the return value of the `access` syscall. Instead it should be based on errno. Initially I wanted to remove this assert, as [`access`](https://linux.die.net/man/2/access) can error out in a few other cases (like EROFS). But on thinking more, it feels like the assert is doing the right thing: it's good to crash on EROFS, EFAULT, EINVAL, and other major filesystem-related problems so that the user is immediately aware of the problems while testing. (I think it might be ok to crash on EIO as well, but there might be a specific reason why it was decided not to crash for EIO, and I don't have that context. So letting the assert checks remain as-is for now.) Pull Request resolved: https://github.com/facebook/rocksdb/pull/4427 Differential Revision: D10037200 Pulled By: sagar0 fbshipit-source-id: 5cc96116a2e53cef701f444a8b5290576f311e51
-
Committed by Andrew Kryczka
Summary: I guess we didn't update this script when `--allow_concurrent_memtable_write` became true by default. Fixes #4413. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4428 Differential Revision: D10036452 Pulled By: ajkr fbshipit-source-id: f464be0642bd096d9040f82cdc3eae614a902183
-
Committed by Yi Wu
Summary: Improve log handling when avoid_flush_during_recovery=true. 1. Restore total_log_size_ after recovery by summing up the existing log sizes. Fixes #4253. 2. Truncate the last existing log, since this log can contain preallocated space and it would be a waste to keep that space. This avoids the case where a crash loop in a user application causes a lot of logs of non-trivial size to be created, ultimately taking up all disk space. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4405 Differential Revision: D9953933 Pulled By: yiwu-arbug fbshipit-source-id: 967780fee8acec7f358b6eb65190fb4684f82e56
-