- 18 May, 2020: 40 commits
-
-
Committed by Piotr Nowojski
-
Committed by Yang Wang
-
Committed by Yang Wang
-
Committed by Aljoscha Krettek
Before, we were locking on the partition state object itself to prevent concurrent access (and to make sure that changes are visible across threads). However, after recent changes we hold the checkpoint lock for emitting the whole "bundle" of records from Kafka. We can now also just use the checkpoint lock in the periodic emitter callback and no longer need the fine-grained locking on the state for record emission.
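The locking scheme described above can be sketched in plain Java (class and method names here are illustrative, not Flink's actual classes): both the record-emitting path and the periodic watermark callback synchronize on one shared checkpoint lock, so no per-partition state lock is needed.

```java
import java.util.ArrayList;
import java.util.List;

class CheckpointLockSketch {
    private final Object checkpointLock = new Object();
    private final List<String> output = new ArrayList<>();
    private long maxTimestamp = Long.MIN_VALUE;

    /** Called from the fetcher thread for a whole bundle of records. */
    void emitBundle(List<String> records, long bundleTimestamp) {
        synchronized (checkpointLock) { // coarse-grained: one lock per bundle
            output.addAll(records);
            maxTimestamp = Math.max(maxTimestamp, bundleTimestamp);
        }
    }

    /** Called from the periodic watermark emitter thread. */
    long currentMaxTimestamp() {
        synchronized (checkpointLock) { // same lock guarantees visibility
            return maxTimestamp;
        }
    }
}
```

Holding one coarse lock per bundle trades a little emitter-thread latency for much simpler reasoning about visibility between the fetcher and timer threads.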
-
Committed by Aljoscha Krettek
-
Committed by Aljoscha Krettek
-
Committed by Aljoscha Krettek
This can be used in source implementations (or anywhere else really) that need to multiplex watermark updates from multiple partitions/splits into one single (network) watermark output.
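The multiplexing idea can be sketched as follows (a hypothetical JDK-only model, not the actual Flink class): per-split watermarks are tracked separately, and the combined watermark forwarded to the single output is the minimum across all splits, so it only advances once every split has advanced.

```java
import java.util.HashMap;
import java.util.Map;

class WatermarkMultiplexer {
    private final Map<String, Long> perSplitWatermarks = new HashMap<>();

    void registerSplit(String splitId) {
        perSplitWatermarks.put(splitId, Long.MIN_VALUE);
    }

    /** Watermarks per split only move forward, hence the max-merge. */
    void onWatermark(String splitId, long watermark) {
        perSplitWatermarks.merge(splitId, watermark, Math::max);
    }

    /** The watermark that may safely be emitted: min over all splits. */
    long combinedWatermark() {
        long min = Long.MAX_VALUE;
        for (long wm : perSplitWatermarks.values()) {
            min = Math.min(min, wm);
        }
        return perSplitWatermarks.isEmpty() ? Long.MIN_VALUE : min;
    }
}
```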
-
Committed by Aljoscha Krettek
-
Committed by Aljoscha Krettek
We add Suppliers for both TimestampAssigner and WatermarkGenerator so that they do not have to be Serializable, and add API methods for using them in WatermarkStrategies. This makes the WatermarkStrategy the one-stop shop when it comes to timestamps/watermarks, and makes it possible, for example, to easily generate code from the Table API.

The WatermarkStrategy has a default RecordTimestampAssigner, which means that in most cases users don't need to specify a TimestampAssigner but will use the timestamps provided by the source. Only a WatermarkGenerator is required in most cases.

We will also use this for compatibility with the old assigners/extractors in the KafkaConsumer. In the old model, the timestamp assigner and the watermark extractor were one and the same object, and extracting a timestamp updates the state of the assigner. The KafkaConsumer deserializes a new extractor for each partition, meaning we have to keep the new-model assigner and watermark generator in one and the same object. (The wrapping WatermarkStrategy for the old assigners returns the same object for both TimestampAssigner and WatermarkGenerator.)
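A minimal sketch of the supplier-based pattern described above (the interface and class names are illustrative, not Flink's actual API): the strategy itself is Serializable and hands out fresh assigner/generator instances via suppliers, and the default assigner simply uses the timestamp already carried by the record.

```java
import java.io.Serializable;
import java.util.function.Supplier;

interface TsAssigner<T> { long extractTimestamp(T event, long recordTimestamp); }
interface WmGenerator<T> { void onEvent(T event, long timestamp); }

class WmStrategySketch<T> implements Serializable {
    // In the real design these would be serializable supplier functions;
    // the created assigner/generator instances need not be Serializable.
    private final transient Supplier<TsAssigner<T>> assignerSupplier;
    private final transient Supplier<WmGenerator<T>> generatorSupplier;

    WmStrategySketch(Supplier<TsAssigner<T>> assigners,
                     Supplier<WmGenerator<T>> generators) {
        this.assignerSupplier = assigners;
        this.generatorSupplier = generators;
    }

    /** Default: keep the timestamp already provided by the source record. */
    static <T> WmStrategySketch<T> withDefaultAssigner(
            Supplier<WmGenerator<T>> generators) {
        return new WmStrategySketch<>(() -> (event, recordTs) -> recordTs,
                                      generators);
    }

    TsAssigner<T> createTimestampAssigner() { return assignerSupplier.get(); }
    WmGenerator<T> createWatermarkGenerator() { return generatorSupplier.get(); }
}
```

Because each call to the suppliers yields a fresh instance, stateful generators can safely be created once per partition or split.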
-
Committed by Stephan Ewen
We add adapters for the old AssignerWithPeriodicWatermarks and AssignerWithPunctuatedWatermarks and use the newly introduced operator.
-
Committed by Aljoscha Krettek
-
Committed by Aljoscha Krettek
This might seem overkill now but it can become useful for future tests.
-
Committed by Stephan Ewen
This adds a method in DataStream as well as a new operator that does watermarks using a WatermarkStrategy.
-
Committed by Stephan Ewen
-
Committed by Aljoscha Krettek
-
Committed by Aljoscha Krettek
We don't want to necessarily treat this as an extra thing in the future. It really is just event-time with a special timestamp assigner.
-
Committed by Stephan Ewen
-
Committed by Leonard Xu
This closes #12176
-
Committed by Leonard Xu
[hotfix][table-planner-blink] Planner should pass query's changelog mode to DynamicTableSink#getChangelogMode
-
Committed by Timo Walther
-
Committed by Jingsong Lee
This closes #12206
-
Committed by Rong Rong
This closes #11936.
-
Committed by Aljoscha Krettek
This reverts commit 0341b4a8. The test doesn't work because the snapshots contain hard-coded paths to the local machine where the test snapshot data was created.
-
Committed by Jiangjie (Becket) Qin
The patch also adds a new SourceEventType of NoMoreSplitsEvent to allow the SourceReaderBase to exit after all the work is done.
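The exit condition can be sketched like this (a hypothetical JDK-only model, not the actual SourceReaderBase): the reader may only finish once the "no more splits" signal has arrived AND all already-assigned splits have been fully processed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class SourceReaderSketch {
    private final Deque<String> assignedSplits = new ArrayDeque<>();
    private boolean noMoreSplits = false;

    void addSplit(String split) { assignedSplits.add(split); }

    /** Corresponds to receiving the NoMoreSplitsEvent from the enumerator. */
    void handleNoMoreSplits() { noMoreSplits = true; }

    /** Processes one split (if any) and reports whether the reader may exit. */
    boolean processOneAndCheckFinished() {
        assignedSplits.poll(); // pretend we fully consumed this split
        return noMoreSplits && assignedSplits.isEmpty();
    }
}
```

Without the explicit signal, an empty split queue is ambiguous: the reader cannot tell "done" apart from "more splits are still on their way".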
-
Committed by Yangze Guo
This closes #11920.
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
The caching is overly aggressive and potentially even ignores changes to the .yml files.
-
Committed by Chesnay Schepler
-
Committed by Aljoscha Krettek
We need this for future changes to the serialization format.
-
Committed by Aljoscha Krettek
This reverts commit 339f5d84. I'm reverting these three related commits because it is important to have confidence in our testing and to clearly separate the addition of the Bucket State upgrade test from changing the serializer.
-
Committed by Aljoscha Krettek
This reverts commit f850ec78. I'm reverting these three related commits because it is important to have confidence in our testing and to clearly separate the addition of the Bucket State upgrade test from changing the serializer.
-
Committed by Aljoscha Krettek
This reverts commit 547c168a. I'm reverting these three related commits because it is important to have confidence in our testing and to clearly separate the addition of the Bucket State upgrade test from changing the serializer.
-
Committed by Andrey Zagrebin
-
Committed by Andrey Zagrebin
After #9747, managed memory is allocated with UNSAFE, not as direct NIO buffers as before 1.10. Releasing a segment also released the underlying unsafe memory, which is dangerous in general because there can still be references to Java objects giving access to the released memory. If such a reference ever leaks, the illegal memory access can corrupt memory in other parts of the code without even causing a segmentation fault.

The solution is similar to how Java handles the direct memory limit:
- the underlying byte buffers of segments are registered in a phantom reference queue with a Java GC cleaner that releases the unsafe memory
- all allocations and reservations are governed by a memory limit and an atomic available-memory counter
- if the available memory is not enough while reserving, processing of the phantom reference queue is triggered, hopefully releasing buffers already discovered by the GC cleaners
- memory can be released directly or in a GC cleaner

GC is also sped up by nulling out the byte buffer reference in `HybridMemorySegment#free`, which is inaccessible anyway after freeing. Otherwise a lot of tests, which hold accidental references to memory segments, would have to be fixed to not hold them. `MemoryManager#verifyEmpty` checks that everything can be GC'ed at the end of the tests, and after slot closing in production, to detect memory leaks if any other references are held, e.g. from `HybridMemorySegment#wrap`. This closes #11109.
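The cleaner-plus-limit scheme can be sketched with the JDK's `java.lang.ref.Cleaner` (a simplified model, not Flink's actual MemoryManager): each allocation is registered with a cleaner whose action returns the reserved bytes to an atomic available-memory counter, so unreachable segments are reclaimed via GC and a hard limit is enforced at reservation time.

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicLong;

class ManagedMemorySketch {
    private static final Cleaner CLEANER = Cleaner.create();
    private final AtomicLong available;

    ManagedMemorySketch(long limitBytes) {
        this.available = new AtomicLong(limitBytes);
    }

    long availableBytes() { return available.get(); }

    /** Reserves memory owned by 'owner'; freed when GC'ed or clean()ed. */
    Cleaner.Cleanable allocate(Object owner, long bytes) {
        if (available.addAndGet(-bytes) < 0) {
            available.addAndGet(bytes); // roll back the failed reservation
            // The real code would trigger reference-queue processing here
            // before giving up, to reclaim already-unreachable segments.
            throw new OutOfMemoryError("managed memory limit exceeded");
        }
        // The cleaning action must not capture 'owner' itself (that would
        // keep it reachable forever); it only captures the size to release.
        return CLEANER.register(owner, () -> available.addAndGet(bytes));
    }
}
```

Explicit release goes through `Cleanable#clean()`, which runs the action at most once; the phantom-reference machinery only acts as the safety net for owners that were never freed explicitly.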
-
Committed by Andrey Zagrebin
-
Committed by Andrey Zagrebin
-
Committed by Gary Yao
Document command line options for the 'lein run test' command. In the Docker section, add that DataStreamAllroundTestProgram must be built first. Add details about types/roles of nodes and minimum number of nodes required. This closes #12019.
-
Committed by Gary Yao
Make ssh-keygen output the private key in PEM format so that Jsch does not complain about a wrong private key format.
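The fix boils down to ssh-keygen's `-m PEM` key-format option; the exact file path and key size below are illustrative:

```shell
# Generate an RSA key pair whose private key is written in classic PEM format.
# Without -m PEM, recent ssh-keygen versions default to the OpenSSH-native
# format ("-----BEGIN OPENSSH PRIVATE KEY-----"), which Jsch rejects.
ssh-keygen -t rsa -b 4096 -m PEM -f ./id_rsa -N "" -q

# The first line of the private key file shows which format was produced:
head -n 1 ./id_rsa
```

A PEM-format key starts with `-----BEGIN RSA PRIVATE KEY-----`, which is what Jsch expects.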
-