- 15 Jan 2017, 1 commit

By Haohui Mai
This closes #3123
- 12 Jan 2017, 1 commit

By Lorenz Buehmann
This will also allow for using comma-separated values in the CLI. This closes #3072
- 02 Jan 2017, 1 commit

By zentol
- 24 Dec 2016, 1 commit

By Till Rohrmann
- 21 Dec 2016, 2 commits

By Robert Metzger

By Stephan Ewen
Currently, every project in Flink has a hard (compile-scope) dependency on the jsr305, slf4j, and log4j artifacts. That way they are pulled into every fat jar, including user fat jars, as soon as they refer to a connector or library. This commit changes the behavior in two ways:
1. It removes the concrete logger dependencies from the root pom file and instead adds them to the 'flink-core' project. That way, all modules that refer to 'flink-core' will have those dependencies as well, but projects that have 'flink-core' as provided (connectors, libraries, user programs, etc.) will transitively have those dependencies as provided as well.
2. It overrides the slf4j and jsr305 dependencies in the parents of 'flink-connectors', 'flink-libraries', and 'flink-metrics' and sets them to 'provided'. That way all core projects pull the logger classes, but projects that are not part of flink-dist (and rather bundled into fat jars) will not bundle these dependencies again.
flink-dist puts the dependencies into the fat jar (slf4j, jsr305) or the lib folder (log4j).
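The 'provided' override described in point 2 could look roughly like this in a connector parent pom (a hedged sketch, not the actual commit content; versions are assumed to come from dependency management elsewhere):

```xml
<!-- Sketch for e.g. flink-connectors/pom.xml: mark the logging and
     annotation dependencies as 'provided' so that fat jars built on
     top of these modules do not bundle them again. -->
<dependencies>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>com.google.code.findbugs</groupId>
    <artifactId>jsr305</artifactId>
    <scope>provided</scope>
  </dependency>
</dependencies>
```

Because 'provided' is not transitive into the packaged artifact, user fat jars that depend on a connector pick up these classes at compile time but leave bundling them to flink-dist.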
- 17 Dec 2016, 2 commits

By David Anderson
[FLINK-5344] Fixed the dockerized doc build, which had been broken for a while. Fixed the -p option. Reverted the main Gemfile back to Ruby 1.9 to keep the build bot happy, and created a new Gemfile in ruby2/Gemfile to keep the incremental build option available. This closes #3016.

By Maximilian Michels
This uses Flakka (a custom Akka 2.3 build) to resolve the issue that the bind address needs to match the external address of the JobManager. With the changes applied, we can now bind to all interfaces, e.g. via 0.0.0.0 (IPv4) or :: (IPv6). For this to work properly, the configuration entry JOB_MANAGER_IPC_ADDRESS now represents the external address of the JobManager. Consequently, it should not be resolved to an IP address anymore, because it may not be resolvable from within containerized environments. Akka treats this address as the logical address. Any messages that are not tagged with this address will be received by the actor system (because we listen on all interfaces) but will be dropped subsequently. In addition, we need the external address of the JobManager to be able to publish it to ZooKeeper for HA setups.
Flakka: https://github.com/mxm/flakka
Patch applied: https://github.com/akka/akka/pull/15610
- convert host to lower case
- use a consistent format for IPv6 addresses
- adapt config and test cases
- adapt the documentation to clarify the address config entry
- TaskManager: resolve the initial hostname of the StandaloneLeaderRetrievalService
This closes #2917.
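In configuration terms, the address entry described above corresponds to `jobmanager.rpc.address` in flink-conf.yaml. A sketch of a containerized setup (the hostname and port values here are illustrative assumptions, not taken from the commit):

```yaml
# flink-conf.yaml (illustrative values)
# The *external* address of the JobManager, as TaskManagers and
# ZooKeeper see it. It is no longer resolved to an IP address, so a
# hostname that only resolves outside the container is acceptable.
jobmanager.rpc.address: jobmanager.example.com
jobmanager.rpc.port: 6123
# The JobManager process itself binds to all interfaces
# (0.0.0.0 for IPv4, :: for IPv6), so no separate bind address is set.
```

Messages arriving on the listening socket but addressed to a different logical address are dropped by Akka, which is why the configured value must match what other processes actually use.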
- 09 Dec 2016, 1 commit

By Robert Metzger
This closes #2953.
- 07 Dec 2016, 1 commit

By Marton Balassi
- 02 Dec 2016, 1 commit

By Fabian Hueske
This closes #2897.
- 30 Nov 2016, 1 commit

By Robert Metzger
This closes #2850
- 23 Nov 2016, 1 commit

By Aleksandr Chermenin
[FLINK-2608] Updated test with Java collections.
[FLINK-2608] Updated Chill and Kryo dependencies.
[FLINK-2608] Added collections serialization test.
This closes #2623.
- 06 Oct 2016, 2 commits

By Stephan Ewen
[FLINK-4737] [core] Ensure that Flink and its Hadoop dependency pull the same version of 'commons-compress'

By Greg Hogan
Upgrades JUnit from 4.11 to 4.12, Mockito from 1.9.5 to 1.10.19, and PowerMock from 1.5.5 to 1.6.5. This closes #2597
- 28 Sep 2016, 1 commit

By shijinkui
This closes #2458
- 27 Sep 2016, 1 commit

By twalthr
This closes #2549.
- 21 Sep 2016, 1 commit

By Vijay Srinivasaraghavan
[FLINK-3929] Added Keytab-based Kerberos support to enable secure Flink cluster deployment (addresses the HDFS, Kafka, and ZK services).
[FLINK-3929] Added MiniKDC support for the Kafka, ZooKeeper, RollingFS, and YARN integration test modules.
- 02 Sep 2016, 2 commits

By Maximilian Michels
The version change didn't cause the Scalastyle errors. The only viable solution to prevent random failures of the Scalastyle plugin seems to be to disable Scalastyle checks for the affected source file.

By Maximilian Michels
This closes #2462
- 30 Aug 2016, 1 commit

By Maximilian Michels
- update the checkstyle plugin to 2.17
- update the scalastyle plugin to 0.8.0
- 29 Aug 2016, 1 commit

By wrighe3
Implemented the Mesos AppMaster, including:
- runners for the AppMaster and TaskManager
- MesosFlinkResourceManager as a Mesos framework
- ZK persistent storage for Mesos tasks
- reusable scheduler actors for:
  - offer handling using Netflix Fenzo (LaunchCoordinator)
  - reconciliation (ReconciliationCoordinator)
  - task monitoring (TaskMonitor)
  - connection monitoring (ConnectionMonitor)
- a lightweight HTTP server to serve artifacts to the Mesos fetcher (ArtifactServer)
- scenario-based logging for:
  - connectivity issues
  - offer handling (receive, process, decline, rescind, accept)
Incorporates FLINK-4152, FLINK-3904, FLINK-4141, FLINK-3675, FLINK-4166.
- 24 Aug 2016, 1 commit

By Ufuk Celebi
- Add redirect layout
- Remove Maven artifact name warning
- Add info box if stable, but not latest
- Add font-awesome 4.6.3
- Add sidenav layout
This closes #2387.
- 10 Aug 2016, 2 commits

By Stephan Ewen
This commit moves all 'Writable'-related code to the 'flink-hadoop-compatibility' project and uses reflection in 'flink-core' to instantiate WritableTypeInfo when needed. This closes #2338

By Stephan Ewen
This closes #2343
- 05 Aug 2016, 1 commit

By Stephan Ewen
This moves the API compatibility checks into the API projects that use stability annotations. Previously, every project ran the tests, regardless of whether it contained public API classes or not. This closes #2334
- 04 Aug 2016, 1 commit

By Stephan Ewen
- 03 Aug 2016, 1 commit

By Marton Balassi
This closes #2324
- 12 Jul 2016, 1 commit

By twalthr
This closes #2209.
- 06 Jul 2016, 1 commit

By Robert Metzger
(The groups being the tests starting from A-N and N-Z.) This closes #2201
- 05 Jul 2016, 1 commit

By Stephan Ewen
Makes the JUnit test utils (TestLogger, retry rules, ...) properly available to other projects without the 'flink-core' test-jar, via the 'flink-test-utils-junit' project. Makes the ForkableMiniCluster, TestEnvironment, and other test utilities available in the 'main' scope of the 'flink-test-utils' project. Creates a 'flink-test-utils-parent' project that holds the 'flink-test-utils-junit' and 'flink-test-utils' projects. Also moves some tests between projects and inlines some very simple utility functions in order to simplify some test-jar dependencies.
- 04 Jul 2016, 1 commit

By Ismaël Mejía
Some of the changes include:
- Remove unneeded dependencies (nano, wget)
- Remove the apt lists to reduce image size
- Reduce the number of layers in the Docker image (a Docker best practice)
- Remove useless variables and base the code on generic ones, e.g. FLINK_HOME
- Change the default JDK from Oracle to openjdk-8-jre-headless, for two reasons:
  1. You cannot legally repackage the Oracle JDK in Docker images.
  2. The OpenJDK headless variant is more appropriate for a server image (no GUI stuff).
- Return the port assignments to the standard Flink ones:
  - variable: docker-flink -> flink
  - taskmanager.rpc.port: 6121 -> 6122
  - taskmanager.data.port: 6122 -> 6121
  - jobmanager.web.port: 8080 -> 8081
This closes #2176
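A minimal Dockerfile following the practices listed above might look like this (a hedged sketch; the base image, paths, and install details are illustrative assumptions, not the actual file from the commit):

```dockerfile
FROM debian:jessie

ENV FLINK_HOME=/opt/flink

# Single RUN layer: install a headless JDK and remove the apt lists in
# the same layer, so the package index never bloats an image layer.
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jre-headless && \
    rm -rf /var/lib/apt/lists/*

COPY flink/ $FLINK_HOME/

# Standard Flink ports: JobManager RPC, TaskManager data/RPC, web UI
EXPOSE 6123 6121 6122 8081
```

Combining install and cleanup into one RUN matters because each Dockerfile instruction creates a layer; deleting files in a later layer does not shrink the image.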
- 01 Jul 2016, 1 commit

By Maximilian Michels
- always ship the lib folder
- properly set up the classpath from the supplied ship files
- clean up the deploy() method of YarnClusterDescriptor
- add a test case
This closes #2187
- 27 Jun 2016, 1 commit

By Till Rohrmann
This closes #2112
- 25 Jun 2016, 1 commit

By Maximilian Michels
Jar arguments with a single '-' were not parsed correctly if options were present. For example, in `./flink run <options> file.jar -arg value` the jar arguments would be parsed as "arg" and "value". Interestingly, this only happened when <options> were present. The issue has been fixed in commons-cli 1.3.1. A test case was added to guard against regressions. This closes #2139
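The intended "stop at the first non-option token" behavior can be sketched in plain Java (a simplified illustration of the parsing rule, not Flink's or commons-cli's actual code; all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class ArgSplitSketch {

    /**
     * Returns the program arguments: everything from the first
     * non-option token on (typically the user's jar file), so that
     * later tokens like "-arg value" are passed through untouched.
     */
    static List<String> programArgs(Set<String> optionsWithValue, String[] args) {
        int i = 0;
        // Skip recognized CLI options (and, for options that take a
        // value, the value token that follows them).
        while (i < args.length && args[i].startsWith("-")) {
            i += optionsWithValue.contains(args[i]) ? 2 : 1;
        }
        // Everything from the first non-option token onward belongs
        // to the user program, dashes included.
        return new ArrayList<>(Arrays.asList(args).subList(i, args.length));
    }

    public static void main(String[] args) {
        // Simulates: ./flink run -p 4 file.jar -arg value
        String[] cmd = {"-p", "4", "file.jar", "-arg", "value"};
        System.out.println(programArgs(Set.of("-p"), cmd));
        // prints: [file.jar, -arg, value]
    }
}
```

The buggy behavior corresponds to continuing to interpret "-arg" as an option after the jar file has already been seen; the fix is precisely to stop option parsing at the first non-option token.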
- 15 Jun 2016, 1 commit

By Maximilian Michels
The ScalaShellITCase sometimes gets stuck before test execution with no output in the logs. We ran about a hundred builds against Surefire 2.18.1, which confirmed that the failures don't occur with this version. Waiting for an upstream fix until this can be reverted. This closes #2101
- 31 May 2016, 2 commits

By Maximilian Michels
This makes running tests locally a more pleasant experience. It still uses all exposed CPU cores (virtual or real). A custom fork count can be set using the flink.forkCount property, e.g. -Dflink.forkCount=4.

By Maximilian Michels
The Flink documentation build process is currently quite messy. These changes move us to a new build process with proper dependency handling. It ensures that we all use the same dependency versions for consistent build output. It also eases automated building on other systems (like the ASF Buildbot). The goal was to make the documentation build process easier and self-contained.
- use Ruby's Bundler gem to install dependencies
- update the README
- adapt the Dockerfile
- add additional rules to .gitignore
- change the default doc output path from /target to /content (the default path of the flink-web repository)
This closes #2033
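With Bundler in place, building the docs reduces to the standard Bundler workflow (a generic sketch of that workflow; the exact commands, Jekyll usage, and directory layout of Flink's docs build are assumptions, not taken from the commit):

```shell
cd docs
gem install bundler      # once per machine
bundle install           # installs the exact gem versions pinned in the Gemfile
bundle exec jekyll build --destination content
```

Running tools through `bundle exec` is what guarantees everyone builds with the same dependency versions, since it resolves gems against the Gemfile lock rather than whatever is installed globally.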
- 27 May 2016, 2 commits

By Robert Metzger

By Robert Metzger