- 16 Jul 2018, 17 commits
-
-
Committed by Till Rohrmann
-
Committed by Till Rohrmann
-
Committed by Stephan Ewen
[FLINK-9314] [security] (part 4) Add mutual authentication for internal Netty and Blob Server connections. This closes #6326.
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
[FLINK-9313] [security] (part 2) Split SSL configuration into internal (rpc, data transport, blob server) and external (REST). This also uses SSLEngineFactory for all SSLEngine creations.
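This internal/external split surfaces as two separate groups of configuration keys. As a hedged illustration (key names as introduced around this change; the paths are placeholders, and you should verify the exact keys against the docs for your Flink version), internal and REST SSL can be toggled independently in `flink-conf.yaml`:

```yaml
# Internal connectivity: rpc, data transport, blob server
security.ssl.internal.enabled: true
security.ssl.internal.keystore: /path/to/internal.keystore
security.ssl.internal.truststore: /path/to/internal.truststore

# External connectivity: REST endpoints
security.ssl.rest.enabled: true
security.ssl.rest.keystore: /path/to/rest.keystore
security.ssl.rest.truststore: /path/to/rest.truststore
```

Keeping the two groups separate lets a deployment use mutual authentication internally while exposing the REST endpoint with an ordinary server certificate.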
-
Committed by Stephan Ewen
This removes hostname verification from SSL client sockets. With client authentication, this is no longer needed and it is not compatible with various container environments.
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Timo Walther
This closes #6332.
-
Committed by Yun Tang
This closes #6260.
-
Committed by Bowen Li
This closes #6109.
-
Committed by Bill Lee
This closes #5516.
-
Committed by maqingxiang-it
[FLINK-9404] [file sink] Bucketing sink uses target directory rather than home directory during reflectTruncate. This closes #6050.
-
- 15 Jul 2018, 5 commits
-
-
Committed by Timo Walther
Usually it is very uncommon to define both a batch and a streaming source in the same factory. Separating by environment is a concept that can be found throughout the entire flink-table module, because both sources and sinks behave quite differently per environment. This closes #6323.
-
Committed by Timo Walther
The declaration of a table type is SQL Client/context specific and should not be part of a descriptor.
-
Committed by Timo Walther
Rename to TableFactory and move it to the org.apache.flink.table.factories package. Unify source/sink/format factories with the same logic and exceptions.
-
Committed by Shuyi Chen
This closes #6201.
-
Committed by Timo Walther
This PR introduces a format discovery mechanism based on Java Service Providers. The general `TableFormatFactory` is similar to the existing table source discovery mechanism. However, it allows for arbitrary format interfaces that might be introduced in the future. At the moment, a connector can request configured instances of `DeserializationSchema` and `SerializationSchema`. In the future we can add interfaces such as a `Writer` or `KeyedSerializationSchema` without breaking backwards compatibility. This PR deprecates the existing strong coupling of connector and format for the Kafka table sources and table source factories. It introduces descriptor-based alternatives.
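The Java Service Provider pattern underlying this discovery mechanism can be sketched as follows. This is a minimal, self-contained illustration, not Flink's actual code: the `FormatFactory` interface and `countRegisteredFactories` helper are invented names standing in for `TableFormatFactory` and its discovery logic.

```java
import java.util.ServiceLoader;

public class FormatDiscovery {

    // Hypothetical factory interface, for illustration only; the interface
    // added by this PR is TableFormatFactory in org.apache.flink.table.factories.
    public interface FormatFactory {
        String formatType();
    }

    // Discovers every FormatFactory implementation that a jar on the classpath
    // registers via a META-INF/services entry naming the interface.
    public static int countRegisteredFactories() {
        int count = 0;
        for (FormatFactory factory : ServiceLoader.load(FormatFactory.class)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // No provider jar is on the classpath in this standalone sketch,
        // so discovery finds nothing and prints 0.
        System.out.println(countRegisteredFactories());
    }
}
```

The key property is that new format implementations are picked up purely by being on the classpath, which is what lets connectors and formats be decoupled.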
-
- 14 Jul 2018, 12 commits
-
-
Committed by Andrey Zagrebin
[FLINK-9701] [state] (follow up) Use StateTtlConfiguration.DISABLED instead of null, make it Serializable, and add convenience methods to its builder. This closes #6331.
-
Committed by Dawid Wysakowicz
Added NoOrFixedIfCheckpointingEnabledRestartStrategy. This closes #6283.
-
Committed by sihuazhou
This closes #6185.
-
Committed by Till Rohrmann
Extend the flink-container/docker/build.sh script to also accept a Flink archive to build the image from. This makes it easier to build an image from one of the convenience releases.
-
Committed by Till Rohrmann
The Kubernetes files contain a job-cluster service specification, a job specification for the StandaloneJobClusterEntryPoint, and a deployment for TaskManagers. This closes #6320.
-
Committed by Till Rohrmann
This commit adds a Dockerfile for a standalone job cluster image. The image contains the Flink distribution and a specified user code jar. The entrypoint will start the StandaloneJobClusterEntryPoint with the provided job classname. This closes #6319.
-
Committed by Till Rohrmann
With this commit we can use dynamic properties to overwrite configuration values in the TaskManagerRunner. This closes #6318.
-
Committed by Till Rohrmann
With this commit we can use dynamic properties to overwrite configuration values in the ClusterEntrypoint. This closes #6317.
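The "dynamic properties overwrite configuration values" idea can be sketched as follows. This is a simplified stand-in, assuming `-Dkey=value` style arguments and a plain map in place of Flink's `Configuration` class; the method name is invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicProperties {

    // Parses "-Dkey=value" style arguments and overlays them on a base
    // configuration map. Dynamic properties win over the base values,
    // mirroring how they override flink-conf.yaml entries at startup.
    public static Map<String, String> applyDynamicProperties(
            Map<String, String> base, String[] args) {
        Map<String, String> result = new HashMap<>(base);
        for (String arg : args) {
            if (arg.startsWith("-D")) {
                String kv = arg.substring(2);
                int eq = kv.indexOf('=');
                if (eq > 0) {
                    result.put(kv.substring(0, eq), kv.substring(eq + 1));
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();
        base.put("taskmanager.memory.size", "1024m");
        Map<String, String> merged = applyDynamicProperties(
                base, new String[] {"-Dtaskmanager.memory.size=2048m"});
        System.out.println(merged.get("taskmanager.memory.size")); // prints 2048m
    }
}
```

This kind of override is what makes the container entry points configurable without baking a new flink-conf.yaml into each image.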
-
Committed by Till Rohrmann
This closes #6316.
-
Committed by Till Rohrmann
The StandaloneJobClusterEntryPoint is the basic entry point for containers. It is started with the user code jar in its classpath and the classname of the user program. The entrypoint will then load this user program via the classname and execute its main method. This will generate a JobGraph which is then used to start the MiniDispatcher. This closes #6315.
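The "load the user program via the classname and execute its main method" step is a standard reflective pattern. A minimal sketch, with an invented `runMain` helper and a nested `UserJob` class standing in for a user program from a jar on the classpath (Flink's actual entry point additionally builds a JobGraph rather than just invoking main directly):

```java
import java.lang.reflect.Method;

public class EntryPointSketch {

    // Stand-in for a user program whose main method would normally live
    // in a user code jar on the classpath.
    public static class UserJob {
        public static String lastArg;
        public static void main(String[] args) {
            lastArg = args.length > 0 ? args[0] : null;
        }
    }

    // Loads the given class by name and invokes its static main(String[]),
    // the reflective pattern an entry point can use to run a user program
    // that is only known by its classname at startup.
    public static void runMain(String className, String[] args) throws Exception {
        Class<?> clazz = Class.forName(className);
        Method main = clazz.getMethod("main", String[].class);
        main.invoke(null, (Object) args);
    }

    public static void main(String[] args) throws Exception {
        runMain("EntryPointSketch$UserJob", new String[] {"--job-id", "42"});
        System.out.println(UserJob.lastArg); // prints --job-id
    }
}
```

Note the `(Object) args` cast: without it, varargs handling would spread the array into separate reflection arguments instead of passing it as the single `String[]` parameter.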
-
Committed by Till Rohrmann
The cluster component command line parser is responsible for parsing the common command line arguments with which the cluster components are started. These include the configDir, webui-port, and dynamic properties. This closes #6314.
-
Committed by Till Rohrmann
-
- 13 Jul 2018, 6 commits
-
-
Committed by Rune Skou Larsen
Maintain a deterministic port ordering, so we can have expectations on which endpoint is behind which port index. This closes #6288.
-
Committed by gyao
Use the Jepsen framework (https://github.com/jepsen-io/jepsen) to implement tests that verify Flink's HA capabilities under real-world faults, such as sudden TaskManager/JobManager termination, HDFS NameNode unavailability, network partitions, etc. The Flink cluster under test is automatically deployed on YARN (session & job mode) and Mesos. Provide Dockerfiles for local test development. This closes #6240.
-
Committed by yanghua
This closes #6129.
-
Committed by klion26
This closes #6305.
-
Committed by Stephan Ewen
The upgraded ciphers are not yet supported on all platforms and JDK versions, making the getting-started process rough. Instead, we document our recommendation to set these values in the configuration. This reverts "[FLINK-9310] [security] Update standard cipher suites for secure mode".
-
Committed by zentol
This closes #6102.
-