- 02 Aug, 2016 1 commit
-
-
Committed by Ufuk Celebi
This reverts commit 81cf2296. We had an incorrect implementation of Murmur hash in Flink 1.0. This was fixed in 641a0d43 for Flink 1.1. Then we thought that we needed to revert this in order to ensure backwards compatibility between Flink 1.0 and 1.1 savepoints (81cf22). It turns out that savepoint backwards compatibility is broken for other reasons, too. Therefore, we revert 81cf22 here, ending up with a correct implementation of Murmur hash again.
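Why the exact hash implementation matters for savepoints: keys are routed to parallel subtasks by hashing, so two Flink versions must compute bit-identical hashes for restored state to land on the same subtask. A minimal illustration using the standard MurmurHash3 32-bit finalizer (this is not Flink's actual code; the key and parallelism below are made up):

```java
public class Murmur {
    // MurmurHash3 32-bit finalizer ("fmix32"). Changing any of these
    // constants or shifts changes every partition assignment, which is
    // exactly what breaks savepoint compatibility between versions.
    static int fmix32(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    public static void main(String[] args) {
        int parallelism = 4; // illustrative values
        int key = 42;
        // The subtask this key's state is assigned to. A different hash
        // implementation would route the same key elsewhere on restore.
        System.out.println(Math.abs(fmix32(key) % parallelism));
    }
}
```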
-
- 01 Aug, 2016 4 commits
-
-
Committed by zentol
This closes #2311
-
Committed by Ufuk Celebi
Some source files had the executable (-x) flag set. Before this change:

```
$ find . -perm +111 -type f | grep "\.java"
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/places/Attributes.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/places/BoundingBox.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/places/Places.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/Contributors.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/Coordinates.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/CurrentUserRetweet.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/entities/Entities.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/entities/HashTags.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/entities/Media.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/entities/Size.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/entities/Symbol.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/entities/URL.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/entities/UserMention.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/tweet/Tweet.java
./flink-contrib/flink-tweet-inputformat/src/main/java/org/apache/flink/contrib/tweetinputformat/model/User/Users.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/Graph.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/gsa/ApplyFunction.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/gsa/GatherFunction.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/gsa/GatherSumApplyIteration.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/gsa/Neighbor.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/gsa/SumFunction.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/GSAConnectedComponents.java
./flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/GSASingleSourceShortestPaths.java
./flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/examples/GSASingleSourceShortestPaths.java
./flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/test/GatherSumApplyITCase.java
./flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
```

After this change:

```
$ find . -perm +111 -type f | grep "\.java"
```
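A fix like this is usually a one-line `chmod`; for illustration only, here is a hedged Java sketch (not the actual fix used in the commit) that walks a source tree and strips the executable bit from every `.java` file:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ClearExecBit {
    // Walk 'root' and clear the executable bit on every regular .java file.
    // Returns how many files were changed.
    static int clearExecutable(Path root) throws IOException {
        int cleared = 0;
        try (Stream<Path> stream = Files.walk(root)) {
            List<Path> files = stream.collect(Collectors.toList());
            for (Path p : files) {
                File f = p.toFile();
                if (f.isFile() && p.toString().endsWith(".java") && f.canExecute()) {
                    f.setExecutable(false, false); // drop x for owner, group, other
                    cleared++;
                }
            }
        }
        return cleared;
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println("Cleared executable bit on " + clearExecutable(root) + " file(s)");
    }
}
```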
-
Committed by zentol
This closes #2310
-
Committed by zentol
This closes #2308
-
- 29 Jul, 2016 5 commits
-
-
Committed by Maximilian Michels
The json module is bundled with Ruby 1.9.0 and later. This resolves dependency problems with different versions of Ruby.
-
Committed by Aditi Viswanathan
This closes #2307.
-
Committed by zentol
-
Committed by zentol
-
Committed by Neelesh Srinivas Salian
Closes #2299
-
- 28 Jul, 2016 2 commits
- 27 Jul, 2016 3 commits
-
-
Committed by Maximilian Michels
- fail if the config couldn't be loaded
- remove duplicate API methods
- remove undocumented XML loading feature
- generate YAML conf in tests instead of XML conf
- only load one config file (flink-conf.yaml) instead of all XML or YAML files
- make GlobalConfiguration non-global and remove the static SINGLETON
- fix test cases
- add test cases

This closes #2123
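The "load exactly one flink-conf.yaml and fail loudly if it is missing" behavior can be sketched as follows. This is a simplified stand-in, not the real GlobalConfiguration; Flink's parser handles more than plain `key: value` lines:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfLoader {
    // Load key/value pairs from a single flink-conf.yaml. Throws instead
    // of silently falling back to defaults when the file is missing.
    static Map<String, String> load(Path confFile) throws IOException {
        if (!Files.exists(confFile)) {
            throw new IOException("Config file not found: " + confFile);
        }
        Map<String, String> conf = new LinkedHashMap<>();
        for (String line : Files.readAllLines(confFile)) {
            String s = line.trim();
            if (s.isEmpty() || s.startsWith("#")) {
                continue; // skip blank lines and comments
            }
            int i = s.indexOf(':');
            if (i > 0) {
                conf.put(s.substring(0, i).trim(), s.substring(i + 1).trim());
            }
        }
        return conf;
    }
}
```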
-
Committed by kl0u
-
Committed by Maximilian Michels
-
- 26 Jul, 2016 11 commits
-
-
Committed by Aljoscha Krettek
This also updates documentation and tests. Reporters can now be specified like this:

```
metrics.reporters: foo,bar
metrics.reporter.foo.class: JMXReporter.class
metrics.reporter.foo.port: 10
metrics.reporter.bar.class: GangliaReporter.class
metrics.reporter.bar.port: 11
metrics.reporter.bar.something: 42
```
-
Committed by Maximilian Michels
Unfortunately, we can't deploy snapshots atomically using the Nexus repository. The staging process that leads to an atomic deployment is only designed to work for releases. The best we can do is to retry deploying artifacts in case of failures.
- introduce retry in case of failure of snapshot deployment
- simplify deployment script

This closes #2296
-
Committed by Till Rohrmann
- Add YarnFlinkResourceManager test to reaccept task manager registrations from a re-elected job manager
- Remove unnecessary sync logic between JobManager and ResourceManager
- Avoid duplicate registration attempts in case of a refused registration
- Add test case to check that not an excessive amount of RegisterTaskManager messages are sent
- Remove containersLaunched from YarnFlinkResourceManager and instead not clearing registeredWorkers when JobManager loses leadership
- Let YarnFlinkResourceManagerTest extend TestLogger
- Harden YarnFlinkResourceManager.getContainersFromPreviousAttempts
- Add FatalErrorOccurred message handler to FlinkResourceManager; increase timeout for YarnFlinkResourceManagerTest; add additional constructor to TestingYarnFlinkResourceManager for tests
- Rename registeredWorkers field into startedWorkers. Additionally, the RegisterResource message is renamed into NotifyResourceStarted, which tells the RM that a resource has been started; this reflects the current semantics of the startedWorkers map in the resource manager.
- Fix concurrency issues in TestingLeaderRetrievalService

This closes #2257
-
Committed by Maximilian Michels
-
Committed by zentol
- moved user-facing API to 'flink-metrics/flink-metrics-core'
- moved JMXReporter to 'flink-metrics/flink-metrics-jmx'
- moved remaining metric classes to 'flink-runtime'

This closes #2226
-
Committed by zentol
This closes #2286
-
Committed by Aljoscha Krettek
This closes #2278.
-
Committed by Ufuk Celebi
Savepoints were previously persisted without any meta data, using default Java serialization of `CompletedCheckpoint`. This commit introduces a savepoint interface with version-specific serializers and stores savepoints with meta data. Savepoints expose a version number and a Collection<TaskState> for savepoint restore. Currently, there is only one savepoint version: SavepointV0 (Flink 1.1). This is the current savepoint version, which holds a reference to the checkpoint task state collection, but is serialized with a custom serializer not relying on default Java serialization. Therefore, it should not happen again that we need to stick to certain classes in future Flink versions. The savepoints are stored in `FsSavepointStore` with the following format:

```
MagicNumber SavepointVersion Savepoint
- MagicNumber      => int
- SavepointVersion => int (returned by Savepoint#getVersion())
- Savepoint        => bytes (serialized via version-specific SavepointSerializer)
```

The header is minimal (magic number, version). All savepoint-specific meta data can be moved to the savepoint itself. This is also where we would have to add new meta data in future versions, allowing us to differentiate between different savepoint versions when we change the serialization stack. All savepoint-related classes have been moved from checkpoint to a new sub package `checkpoint.savepoint`. This closes #2194.
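The header layout described in this commit (magic number, version, then a version-specific payload) can be sketched like this. The magic constant below is a placeholder, not Flink's real value, and the methods are illustrative, not the actual `FsSavepointStore` API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SavepointHeader {
    // Placeholder magic number; Flink's actual constant differs.
    static final int MAGIC = 0x5A5EF0E7;

    // Write MagicNumber (int), SavepointVersion (int), then the payload
    // bytes produced by a version-specific serializer.
    static byte[] write(int version, byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(MAGIC);
        out.writeInt(version);
        out.write(payload);
        out.flush();
        return bos.toByteArray();
    }

    // Read back the version so the caller can dispatch to the matching
    // SavepointSerializer, regardless of how the payload itself evolves.
    static int readVersion(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        if (in.readInt() != MAGIC) {
            throw new IOException("Not a savepoint file");
        }
        return in.readInt();
    }
}
```

Keeping the header this small is the point: everything version-dependent lives behind the version number, so future formats only need a new serializer, not a new file layout.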
-
Committed by twalthr
-
Committed by smarthi
This closes #2162.
-
Committed by Greg Hogan
The default maximum Akka transfer size is 10 MB. This commit reduces the number of generator blocks from 2^20 to 2^15, which removes the limit on graph size. The original limit of one million blocks was intended to future-proof scalability. This is a temporary fix, as graph generation will be reworked in FLINK-3997.
-
- 25 Jul, 2016 7 commits
-
-
Committed by erli ding
This closes #2233
-
Committed by Jark Wu
This closes #2280.
-
Committed by twalthr
This closes #2292.
-
Committed by twalthr
This closes #2283.
-
Committed by Flavio Pompermaier
-
Committed by Ufuk Celebi
The `BlobServer` acts as a local cache for uploaded BLOBs. The life-cycle of each BLOB is bound to the life-cycle of the `BlobServer`. If the BlobServer shuts down (on JobManager shut down), all local files will be removed. With HA, BLOBs are persisted to another file system (e.g. HDFS) via the `BlobStore` in order to have BLOBs available after a JobManager failure (or shut down). These BLOBs are only allowed to be removed when the job that requires them enters a globally terminal state (`FINISHED`, `CANCELLED`, `FAILED`). This commit removes the `BlobStore` clean up call from the `BlobServer` shutdown. The `BlobStore` files will only be cleaned up via the `BlobLibraryCacheManager`'s clean up task (periodically or on BlobLibraryCacheManager shutdown). This means that there is a chance that BLOBs will linger around after the job has terminated, if the job manager fails before the clean up. This closes #2256.
-
Committed by zentol
Prevents the CheckpointCommitter from failing a job if either commitCheckpoint() or isCheckpointCommitted() failed. Instead, we will try again on the next notify(). This closes #2287
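The "swallow the failure and retry on the next notification" pattern described here might look like the following sketch. The names and shape are illustrative, not Flink's actual CheckpointCommitter API:

```java
import java.util.ArrayList;
import java.util.List;

public class RetryingCommitter {
    interface CommitFn {
        void commit(long checkpointId) throws Exception;
    }

    private final List<Long> pending = new ArrayList<>();

    // Called on each checkpoint-complete notification. Tries to commit
    // everything still pending; a failure is swallowed rather than failing
    // the job, and the remaining ids are retried on the next notify.
    void notifyCheckpointComplete(long checkpointId, CommitFn commit) {
        pending.add(checkpointId);
        try {
            for (Long id : new ArrayList<>(pending)) {
                commit.commit(id);
                pending.remove(id); // removes by value (Long), not by index
            }
        } catch (Exception e) {
            // external system unavailable: keep the ids queued and move on
        }
    }

    int pendingCount() {
        return pending.size();
    }
}
```

The trade-off is at-least-once semantics for the commit call itself: a checkpoint may be committed later than it completed, so the external system must tolerate late or repeated commits.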
-
- 23 Jul, 2016 2 commits
-
-
Committed by Greg Hogan
The user must now specify the ID type as "integer" or "string" when reading a graph from a CSV file. This closes #2250
-
Committed by Greg Hogan
This closes #2248
-
- 22 Jul, 2016 2 commits
-
-
Committed by Ufuk Celebi
Suspended jobs were leading to shutdown of the checkpoint coordinator and hence removal of checkpoint state. For standalone recovery mode this is OK, as no state can be recovered anyway (unchanged in this PR). For HA, though, this led to removal of checkpoint state which we actually want to keep for recovery. We have the following behaviour now:

```
           | Standalone | High Availability
-----------+------------+-------------------
 SUSPENDED | Discard    | Keep
-----------+------------+-------------------
 FINISHED/ | Discard    | Discard
 FAILED/   |            |
 CANCELED  |            |
```

This closes #2276.
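The retention behaviour this commit describes reduces to a single predicate; the sketch below uses made-up names, not Flink's actual classes:

```java
public class CheckpointRetention {
    enum JobStatus { SUSPENDED, FINISHED, FAILED, CANCELED }

    // Keep checkpoint state only when a job is suspended under HA, since
    // a re-elected JobManager may still recover it. Every globally
    // terminal state discards the state in both modes.
    static boolean keepState(JobStatus status, boolean highAvailability) {
        return status == JobStatus.SUSPENDED && highAvailability;
    }
}
```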
-
Committed by Till Rohrmann
This closes #2284.
-
- 21 Jul, 2016 3 commits
-
-
Committed by Aljoscha Krettek
This closes #2279
-
Committed by Till Rohrmann
This PR adds a JM metric which shows the time it took to restart a job. The time is measured between entering the `JobStatus.RESTARTING` state and reaching the `JobStatus.RUNNING` state. During this time, the restarting time is continuously updated. The metric only shows the time for the last restart attempt. The metric is published in the job metric group under the name "restartingTime". This closes #2271.
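The measurement could be sketched as a gauge that ticks live while the job is restarting and freezes at the final duration once RUNNING is reached. The class and method names below are illustrative, not the actual Flink implementation:

```java
public class RestartingTimeGauge {
    private volatile long restartingAt = -1L; // when RESTARTING was entered
    private volatile long runningAt = -1L;    // when RUNNING was reached again

    void onRestarting(long nowMillis) {
        restartingAt = nowMillis;
        runningAt = -1L; // a new attempt resets the previous measurement
    }

    void onRunning(long nowMillis) {
        runningAt = nowMillis;
    }

    // Continuously updated while restarting; frozen at the final duration
    // once RUNNING. Only the last restart attempt is reported.
    long getValue(long nowMillis) {
        if (restartingAt < 0) {
            return 0L; // the job never restarted
        }
        long end = (runningAt >= 0) ? runningAt : nowMillis;
        return end - restartingAt;
    }
}
```

The time is injected as a parameter here only to keep the sketch deterministic; a real gauge would call `System.currentTimeMillis()` itself.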
-
Committed by Aljoscha Krettek
-