1. 29 Nov 2017, 1 commit
  2. 28 Nov 2017, 2 commits
  3. 22 Nov 2017, 2 commits
  4. 20 Nov 2017, 1 commit
  5. 15 Nov 2017, 1 commit
  6. 11 Nov 2017, 2 commits
  7. 09 Nov 2017, 2 commits
  8. 08 Nov 2017, 2 commits
  9. 07 Nov 2017, 3 commits
  10. 06 Nov 2017, 1 commit
  11. 03 Nov 2017, 3 commits
  12. 02 Nov 2017, 3 commits
    • [FLINK-7778] [build] Shade ZooKeeper dependency (followups) · 8afadd45
      Stephan Ewen committed
        - Rename the 'flink-shaded-curator-recipes' module to 'flink-shaded-curator',
          because it actually contains more Curator code than just the recipes.

        - Move the exception handling logic of 'ZooKeeperAccess' directly into the
          ZooKeeperStateHandleStore.
    • [FLINK-7778] [build] Shade ZooKeeper dependency (part 2) · d368a07a
      zentol committed
      This closes #4927
    • [FLINK-7778] [build] Shade ZooKeeper dependency (part 1) · 4d028236
      Stephan Ewen committed
      Shading the ZooKeeper dependency makes sure that this specific version of
      ZooKeeper is used by the Flink runtime module. The ZooKeeper version is
      sensitive, because Flink's high availability depends on bug fixes in
      later ZooKeeper versions.

      This prevents situations where added dependencies (for example, transitive
      dependencies of Hadoop) cause a different ZooKeeper version to end up on
      the classpath and be loaded.

      This commit also removes the 'flink-shaded-curator' module, which was
      originally created to shade Guava within Curator, but is now obsolete,
      because newer versions of Curator already shade Guava.
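      Relocating a dependency like this is typically done with the maven-shade-plugin.
      The following is a minimal sketch, assuming a hypothetical target package;
      Flink's actual relocation pattern and module setup may differ:

      ```xml
      <!-- Sketch: shade ZooKeeper by relocating its packages into a private
           namespace so no other ZooKeeper on the classpath can shadow it.
           The shadedPattern below is illustrative, not Flink's actual one. -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <executions>
          <execution>
            <phase>package</phase>
            <goals><goal>shade</goal></goals>
            <configuration>
              <relocations>
                <relocation>
                  <pattern>org.apache.zookeeper</pattern>
                  <shadedPattern>org.apache.flink.shaded.zookeeper.org.apache.zookeeper</shadedPattern>
                </relocation>
              </relocations>
            </configuration>
          </execution>
        </executions>
      </plugin>
      ```

      After shading, all references inside the produced jar point at the relocated
      package, so a different ZooKeeper pulled in transitively cannot be loaded in
      its place.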
  13. 31 Oct 2017, 1 commit
  14. 27 Oct 2017, 1 commit
    • [FLINK-7908][QS] Restructure the queryable state module. · 0c771505
      kkloudas committed
      The QS module is split into core and client. The core should be put in the
      lib folder to enable queryable state, while the client is the one that the
      user programs against. The main reason for the restructuring is to remove
      the dependency on flink-runtime from the user's program.
  15. 26 Oct 2017, 1 commit
    • [FLINK-7502][metrics] Improve PrometheusReporter · 56017a98
      Maximilian Bode committed
      * Do not throw an exception when the same metric is added twice
      * Add the possibility to configure a port range
      * Bump prometheus.version 0.0.21 -> 0.0.26
      * Use simpleclient_httpserver instead of nanohttpd
      * Guard gauge reporting against null values
      * Guard close() against NPE

      This closes #4586.
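      The configurable port range would be used roughly as follows in
      flink-conf.yaml. This is a sketch; verify the exact keys against the
      Flink metrics documentation for your version:

      ```yaml
      # Sketch: enable the Prometheus reporter and let it bind to the first
      # free port in a range (useful when several TaskManagers share a host).
      metrics.reporters: prom
      metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
      metrics.reporter.prom.port: 9250-9260
      ```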
  16. 23 Oct 2017, 1 commit
  17. 16 Oct 2017, 2 commits
  18. 15 Oct 2017, 2 commits
  19. 14 Oct 2017, 4 commits
  20. 11 Oct 2017, 1 commit
  21. 07 Oct 2017, 1 commit
    • [FLINK-7739] [kafka connector] Fix test instabilities · 3581a335
      Piotr Nowojski committed
        - Set shorter heartbeat intervals. The default pause value of 60 seconds
          is too large (tests would time out before Akka reacts).

        - Exclude the Netty dependency from ZooKeeper. ZooKeeper was pulling in a
          conflicting Netty version. The conflict was extremely subtle: the
          TaskManager in Kafka tests was deadlocking in some rare corner cases.

      This closes #4775
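      Excluding a transitive dependency such as Netty is done in the POM. A
      minimal sketch, with a placeholder version property:

      ```xml
      <!-- Sketch: keep ZooKeeper but drop the Netty it would pull in
           transitively, so the project's own Netty version wins. -->
      <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>${zookeeper.version}</version>
        <exclusions>
          <exclusion>
            <groupId>io.netty</groupId>
            <artifactId>netty</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
      ```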
  22. 06 Oct 2017, 1 commit
    • [FLINK-7768] [core] Load File Systems via Java Service abstraction · 77e3701c
      Stephan Ewen committed
      This changes the discovery mechanism for file systems from static class-name
      configuration to a service mechanism (META-INF/services).

      As part of that, it factors the HDFS and MapR FS implementations into
      separate modules.

      With this change, users can add new file system implementations and make
      them available simply by adding them to the classpath.
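      The META-INF/services mechanism described above is the standard
      java.util.ServiceLoader pattern. A minimal, self-contained sketch follows;
      'FileSystemFactory' is a hypothetical interface for illustration, not
      Flink's actual SPI:

      ```java
      import java.util.ServiceLoader;

      // Sketch of META-INF/services-based discovery. A provider jar would ship a
      // text file META-INF/services/FileSystemDiscovery$FileSystemFactory whose
      // lines name implementation classes; ServiceLoader instantiates them.
      public class FileSystemDiscovery {

          // Hypothetical SPI interface, for illustration only.
          public interface FileSystemFactory {
              String scheme();
          }

          // Discovers all implementations registered on the classpath.
          public static int countFactories() {
              int found = 0;
              for (FileSystemFactory factory : ServiceLoader.load(FileSystemFactory.class)) {
                  System.out.println("found file system scheme: " + factory.scheme());
                  found++;
              }
              return found;
          }

          public static void main(String[] args) {
              // With no provider jars on the classpath, discovery finds nothing;
              // adding a jar that ships the services file makes its factory appear.
              System.out.println("factories discovered: " + countFactories());
          }
      }
      ```

      The point of the commit is exactly this last property: new implementations
      become visible without any static class-name configuration.
      
      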
  23. 27 Sep 2017, 1 commit
    • [FLINK-2268] Don't include Hadoop deps in flink-core/flink-java · a477db39
      Aljoscha Krettek committed
      This also makes them optional in flink-runtime, which is enabled by the
      previous changes to only use Hadoop dependencies if they are available.

      This also requires adding a few explicit dependencies in other modules,
      because they were relying on transitive dependencies of the Hadoop deps.
      The most common such dependency is, ha!, commons-io.
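      Marking a dependency optional in Maven keeps it on the module's own compile
      classpath but off consumers' transitive classpaths, which is what makes the
      "use Hadoop only if available" behavior possible. A sketch with illustrative
      coordinates:

      ```xml
      <!-- Sketch: an optional dependency is available when building this module,
           but downstream modules must declare it explicitly if they need it. -->
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
        <optional>true</optional>
      </dependency>
      ```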
  24. 21 Sep 2017, 1 commit