1. 08 Feb 2016, 5 commits
    • [FLINK-3336] Add Rescale Data Shipping for DataStream · 9f6a8b6d
      Committed by Aljoscha Krettek
      This is the Javadoc of DataStream.rescale() that describes the
      behaviour:
      
      Sets the partitioning of the {@link DataStream} so that the output elements
      are distributed evenly to a subset of instances of the next operation in a round-robin
      fashion.
      
      The subset of downstream operations to which the upstream operation sends
      elements depends on the degree of parallelism of both the upstream and downstream operation.
      For example, if the upstream operation has parallelism 2 and the downstream operation
      has parallelism 4, then one upstream operation would distribute elements to two
      downstream operations while the other upstream operation would distribute to the other
      two downstream operations. If, on the other hand, the downstream operation has parallelism
      2 while the upstream operation has parallelism 4, then two upstream operations will
      distribute to one downstream operation while the other two upstream operations will
      distribute to the other downstream operation.
      
      In cases where the parallelisms are not multiples of each other, one or several
      downstream operations will have a differing number of inputs from upstream operations.
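      The subset assignment described above can be sketched as a small pure function. This is an illustrative model of the described behaviour, not Flink's actual RescalePartitioner, and it assumes the two parallelisms are multiples of each other (the even case the Javadoc describes first):

      ```java
      import java.util.ArrayList;
      import java.util.List;

      public class RescaleSketch {

          /**
           * Hypothetical sketch: returns the downstream subtask indices that one
           * upstream subtask cycles through in round-robin fashion under rescale,
           * assuming one parallelism is a multiple of the other.
           */
          static List<Integer> downstreamSubset(int upstreamIndex, int upstreamParallelism,
                                                int downstreamParallelism) {
              List<Integer> subset = new ArrayList<>();
              if (downstreamParallelism >= upstreamParallelism) {
                  // Each upstream task owns a consecutive block of downstream tasks.
                  int blockSize = downstreamParallelism / upstreamParallelism;
                  for (int i = 0; i < blockSize; i++) {
                      subset.add(upstreamIndex * blockSize + i);
                  }
              } else {
                  // Several upstream tasks share a single downstream task.
                  int group = upstreamParallelism / downstreamParallelism;
                  subset.add(upstreamIndex / group);
              }
              return subset;
          }
      }
      ```

      With upstream parallelism 2 and downstream parallelism 4, upstream task 0 cycles through downstream tasks 0 and 1, and upstream task 1 through tasks 2 and 3, matching the example in the Javadoc.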
    • [FLINK-3339] Make ValueState.update(null) act as ValueState.clear() · 7469c17c
      Committed by Aljoscha Krettek
      This was causing problems with TypeSerializers that choke on null
      values, especially in the Scala KeyedStream.*WithState() family of
      functions.
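      A minimal in-memory sketch of the semantics this commit introduces (not Flink's actual ValueState implementation): a null update is routed to clear(), so a null value never reaches a TypeSerializer that cannot handle it:

      ```java
      public class NullClearingValueState<T> {
          private T value; // null means "no value stored"

          public T value() {
              return value;
          }

          public void update(T newValue) {
              // FLINK-3339 behaviour: treat update(null) as an explicit clear()
              if (newValue == null) {
                  clear();
              } else {
                  value = newValue;
              }
          }

          public void clear() {
              value = null;
          }
      }
      ```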
    • 6d83c9d9
    • [FLINK-2721] [Storm Compatibility] Add Tuple meta information · 7eb6edd5
      Committed by mjsax
      This closes #1591
  2. 07 Feb 2016, 1 commit
    • [FLINK-3120] [runtime] Manually configure Netty's ByteBufAllocator · 168c1f0a
      Committed by Ufuk Celebi
      tl;dr Change default Netty configuration to be relative to number of slots,
      i.e. configure one memory arena (in PooledByteBufAllocator) per slot and use one
      event loop thread per slot. Behaviour can still be manually overwritten. With
      this change, we can expect 16 MB of direct memory allocated per task slot by
      Netty.
      
      Problem: We were using Netty's default PooledByteBufAllocator instance, which
      is subject to changing behaviour between Netty versions (happened between
      versions 4.0.27.Final and 4.0.28.Final resulting in increased memory
      consumption) and whose default memory consumption depends on the number of
      available cores in the system. This can be problematic, for example, in YARN
      setups where users run one slot per task manager on machines with many cores,
      resulting in a relatively large amount of allocated memory.
      
      Solution: We instantiate a PooledByteBufAllocator instance manually and wrap
      it as a NettyBufferPool. Our instance configures one arena per task slot as
      default. It's desirable to have the number of arenas match the number of event
      loop threads to minimize lock contention (Netty's default tried to ensure this
      as well), hence the number of threads is changed as well to match the number
      of slots as default. Both number of threads and arenas can still be manually
      configured.
      
      This closes #1593.
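      The "16 MB of direct memory per task slot" figure above follows from Netty's default chunk size. A back-of-the-envelope sketch (assumed defaults, not Flink's actual NettyBufferPool code): each PooledByteBufAllocator arena allocates in chunks of pageSize << maxOrder bytes, which is 16 MiB with Netty's default page size of 8 KiB and max order of 11, so one arena per slot costs roughly one chunk per slot:

      ```java
      public class NettyArenaMath {
          static final int PAGE_SIZE = 8192; // Netty's default page size
          static final int MAX_ORDER = 11;   // Netty's default max order

          /** Size of one allocation chunk, i.e. one arena's initial footprint. */
          static long chunkSizeBytes() {
              return (long) PAGE_SIZE << MAX_ORDER; // 8 KiB * 2^11 = 16 MiB
          }

          /** Expected direct memory when configuring one arena per task slot. */
          static long expectedDirectMemoryBytes(int numberOfSlots) {
              return numberOfSlots * chunkSizeBytes();
          }
      }
      ```

      For a task manager with 4 slots this predicts 64 MiB of Netty direct memory, independent of how many cores the machine has.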
  3. 05 Feb 2016, 9 commits
  4. 04 Feb 2016, 16 commits
  5. 03 Feb 2016, 8 commits
  6. 02 Feb 2016, 1 commit
    • [FLINK-2348] Fix unstable IterateExampleITCase. · bfff86c8
      Committed by Stephan Ewen
      This deactivates the validation of results, which is not reliably possible under the current model (timeout on feedback).
      For now, the test only checks that the job executes properly.
      
      Also adds proper logging property files for the examples projects.