- 22 Jul 2020, 1 commit
  - Committed by Niels Basjes
    This closes #12907
- 08 Jun 2020, 1 commit
  - Committed by Jark Wu
    This closes #12471
- 18 May 2020, 1 commit
  - Committed by Piotr Nowojski
- 17 May 2020, 1 commit
  - Committed by Danny Chan
    This closes #12150
- 24 Feb 2020, 1 commit
  - Committed by Chesnay Schepler
- 21 Feb 2020, 1 commit
  - Committed by Chesnay Schepler
- 13 Jan 2020, 1 commit
  - Committed by Chesnay Schepler
- 17 Dec 2019, 1 commit
  - Committed by Jark Wu
    This closes #10536
- 10 Dec 2019, 1 commit
  - Committed by Gary Yao
- 05 Sep 2019, 1 commit
  - Committed by Chesnay Schepler
- 01 Aug 2019, 1 commit
  - Committed by Chesnay Schepler
- 12 Jul 2019, 2 commits
  - Committed by Kurt Young
  - Committed by Jincheng Sun
    Brief change log:
    - remove the Scala version suffix for connector-hive and queryable-state-client-java
    - add the Scala dependencies for table-api-scala and flink-sql-connectors
    - correct the Scala-free check logic in `verify_scala_suffixes.sh`
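The Scala-free check mentioned above boils down to flagging artifacts whose IDs carry a Scala version suffix. A minimal sketch of that idea (this is not the actual `verify_scala_suffixes.sh`; the function name, file argument, and suffix list are assumptions for illustration):

```shell
#!/usr/bin/env sh
# Sketch: list artifactIds in a pom.xml that end in a Scala version
# suffix such as _2.11 or _2.12, i.e. modules that are not Scala-free.
list_scala_suffixed() {
    # extract <artifactId>...</artifactId> values, then keep only
    # those ending in a Scala suffix; '|| true' keeps a clean exit
    # when nothing matches
    grep -o '<artifactId>[^<]*</artifactId>' "$1" \
        | sed 's/<[^>]*>//g' \
        | grep -E '_2\.1[12]$' || true
}
```

A real check would run this over every module's effective pom and fail the build when a module that should be Scala-free reports a suffixed dependency.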
- 11 Apr 2019, 1 commit
  - Committed by Chesnay Schepler
- 25 Feb 2019, 1 commit
  - Committed by Aljoscha Krettek
- 31 Jan 2019, 1 commit
  - Committed by Timo Walther
    This commit splits the flink-table module into multiple submodules in accordance with FLIP-32 (step 1). The new module structure looks as follows:

                           flink-table-common
                                  ^
                                  |
        flink-table-api-java <------- flink-table-api-scala
                 ^                            ^
                 |                            |
        flink-table-api-java-bridge   flink-table-api-scala-bridge
                 ^
                 |
        flink-table-planner

    The module structure assumes that the type system has been reworked such that only one table environment exists for both Java and Scala users. The module `flink-table-planner` contains the content of the old `flink-table` module. From there we can distribute ported classes to their final modules without breaking backwards compatibility or forcing users to update their dependencies again. For example, if a user wants to implement a pure table program in Scala, `flink-table-api-scala` and `flink-table-planner` need to be added to the project. Until we support pure table programs, `flink-table-api-scala/java-bridge` and `flink-table-planner` need to be added to the project. This closes #7587.
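The dependency pairs named in the commit message above could be declared roughly like this in a user's pom. This is a sketch only: the Scala-suffixed artifactIds and the version property are illustrative assumptions, not taken from the commit.

```xml
<!-- sketch: dependencies for a pure Scala table program
     (artifactIds and version property are illustrative) -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-scala_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
```

Until pure table programs are supported, `flink-table-api-scala-bridge` would take the place of `flink-table-api-scala` above, per the commit message.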
- 04 Jan 2019, 1 commit
  - Committed by zentol
- 30 Nov 2018, 1 commit
  - Committed by yanghua
    This commit removes all classes and methods that were deprecated in Flink 1.6 for separating the Kafka connectors from the Avro and JSON formats. This closes #7182.
- 20 Nov 2018, 1 commit
  - Committed by Till Rohrmann
    In order to satisfy dependency convergence, we need to exclude kafka-clients from the base flink-connector-kafka-x dependency in every flink-connector-kafka-y module. This closes #7140.
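The exclusion described above would look roughly like this in one of the connector poms. The concrete module names and the version property here are illustrative assumptions, not copied from the commit.

```xml
<!-- sketch: inside a flink-connector-kafka-0.10 pom, depend on the
     0.9 connector's code but not on its kafka-clients, so this module
     converges on its own Kafka client version -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.9_2.11</artifactId>
    <version>${project.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

With the transitive kafka-clients excluded, Maven's dependency convergence check no longer sees two competing client versions in the same module.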
- 09 Nov 2018, 1 commit
  - Committed by zentol
- 03 Nov 2018, 1 commit
  - Committed by Till Rohrmann
- 20 Jul 2018, 1 commit
  - Committed by Timo Walther
    This closes #6366.
- 17 Jul 2018, 1 commit
  - Committed by Till Rohrmann
- 29 May 2018, 1 commit
  - Committed by kai-chi
    This closes #5840.
- 15 Mar 2018, 1 commit
  - Committed by Timo Walther
    This closes #5673.
- 28 Feb 2018, 1 commit
  - Committed by Timo Walther
    This closes #5564.
- 27 Feb 2018, 1 commit
  - Committed by Till Rohrmann
- 16 Feb 2018, 1 commit
  - Committed by twalthr
    This closes #5491.
- 22 Nov 2017, 1 commit
  - Committed by Aljoscha Krettek
- 07 Nov 2017, 2 commits
  - Committed by Aljoscha Krettek
  - Committed by Stephan Ewen
    Because Avro is not set to 'provided', the build system puts it into the user-code fat jar rather than assuming it will be part of Flink's 'lib' folder. That way Avro is loaded child-first through the user-code class loader, giving each load an independent copy and avoiding version conflicts and caching problems.
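The scoping choice above can be sketched as a Maven dependency declaration. The version shown is an illustrative assumption; the point is what is deliberately absent.

```xml
<!-- sketch: default (compile) scope, so Avro is bundled into the
     user-code fat jar and loaded child-first by the user-code
     class loader -->
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>1.8.2</version>
    <!-- deliberately NOT <scope>provided</scope>: 'provided' would
         assume Avro is already on Flink's lib/ classpath and keep it
         out of the fat jar -->
</dependency>
```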
- 03 Nov 2017, 3 commits
  - Committed by Stephan Ewen
    This removes all dependencies on Scala-dependent projects. This commit introduces a hard-wired test dependency on 'flink-test-utils_2.11' to avoid introducing a Scala version dependency due to a non-exported test utility.
  - Committed by Aljoscha Krettek
    This also adds a new test that verifies that we correctly register Avro serializers when they are present, and modifies an existing test to verify that we correctly register dummy classes.
  - Committed by twalthr
- 02 Nov 2017, 1 commit
  - Committed by Piotr Nowojski
    This might include some bugfixes.
- 01 Nov 2017, 1 commit
  - Committed by Aljoscha Krettek
    We use custom serializers to ensure that we have control over the serialization format, which makes future evolution of the format easier. This also implements custom serializers for KafkaProducer11, the only TwoPhaseCommitSinkFunction we currently have.
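The motivation above, owning the byte format so it can evolve, can be sketched with plain `java.io`. The state class, field names, and version tag below are hypothetical illustrations, not Flink's actual serializer API: the key idea is writing an explicit version first so a future reader can branch on it.

```java
import java.io.*;

public class VersionedTxnSerializer {

    // Hypothetical transaction state, standing in for a sink's checkpointed state.
    static class TxnState {
        final String transactionalId;
        final long producerId;
        TxnState(String transactionalId, long producerId) {
            this.transactionalId = transactionalId;
            this.producerId = producerId;
        }
    }

    static final int VERSION = 1; // bump when the byte format changes

    static byte[] serialize(TxnState s) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(VERSION);           // version tag first, enabling evolution
            out.writeUTF(s.transactionalId);
            out.writeLong(s.producerId);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static TxnState deserialize(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            int version = in.readInt();
            if (version != VERSION) {        // a real serializer would migrate here
                throw new IOException("Unsupported state version: " + version);
            }
            return new TxnState(in.readUTF(), in.readLong());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        TxnState restored = deserialize(serialize(new TxnState("txn-1", 42L)));
        System.out.println(restored.transactionalId + " " + restored.producerId);
        // prints "txn-1 42"
    }
}
```

Relying on a framework's default (e.g. Java serialization) would tie the checkpoint format to class internals; an explicit versioned format keeps the on-wire layout under the author's control.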
- 10 Oct 2017, 1 commit
  - Committed by Piotr Nowojski
- 07 Aug 2017, 1 commit
  - Committed by Piotr Nowojski
    Sometimes 1000m was not enough memory to run the at-least-once tests with broker failures on Travis. This closes #4456.
- 23 Jul 2017, 1 commit
  - Committed by Piotr Nowojski
    This closes #4321
- 28 May 2017, 1 commit
  - Committed by zentol