- 10 Feb 2016, 5 commits
-
-
Committed by Georgios Andrianakis
This closes #1618
-
Committed by Chiwan Park
This closes #1585
-
Committed by Fabian Hueske
This closes #1599
-
Committed by Robert Metzger
-
Committed by Stephan Ewen
[FLINK-3373] [build] Revert HTTP Components versions because of incompatibility with Hadoop >= 2.6.0
-
- 09 Feb 2016, 9 commits
-
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
[FLINK-3373] [build] Bump version of transitive HTTP Components dependency to 4.4.4 (core) / 4.5.1 (client)
-
Committed by Robert Metzger
-
Committed by Stephan Ewen
This also cleans up the generics in the RocksDB state classes. This closes #1608
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Ufuk Celebi
This closes #1601
-
Committed by Andrea Sella
This closes #1602
-
Committed by Stephan Ewen
-
- 08 Feb 2016, 17 commits
-
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Stephan Ewen
-
Committed by Greg Hogan
Removes curator dependency exclusions from flink-runtime. This resolves NoClassDefFoundError exceptions when running `mvn test`. This partially reverts e31a4d8a. This closes #1596
-
Committed by Robert Metzger
-
Committed by Ufuk Celebi
This closes #1578.
-
Committed by Ufuk Celebi
-
Committed by Ufuk Celebi
-
Committed by Ufuk Celebi
-
Committed by Stefano Baghino
This closes #1598.
-
Committed by Maximilian Michels
-
Committed by Aljoscha Krettek
-
Committed by Aljoscha Krettek
This is the Javadoc of DataStream.rescale() that describes the behaviour:

Sets the partitioning of the {@link DataStream} so that the output elements are distributed evenly to a subset of instances of the next operation in a round-robin fashion.

The subset of downstream operations to which the upstream operation sends elements depends on the degree of parallelism of both the upstream and downstream operation. For example, if the upstream operation has parallelism 2 and the downstream operation has parallelism 4, then one upstream operation would distribute elements to two downstream operations while the other upstream operation would distribute to the other two downstream operations. If, on the other hand, the downstream operation has parallelism 2 while the upstream operation has parallelism 4, then two upstream operations will distribute to one downstream operation while the other two upstream operations will distribute to the other downstream operation.

In cases where the different parallelisms are not multiples of each other, one or several downstream operations will have a differing number of inputs from upstream operations.
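The channel assignment described above can be sketched as a small stand-alone model. This is only an illustration of the documented semantics for the case where one parallelism is a multiple of the other, not Flink's actual partitioner code; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class RescaleModel {
    // For upstream subtask `up` out of `pUp`, return the downstream subtask
    // indices (out of `pDown`) it would send to under rescale-style
    // partitioning. Assumes pDown is a multiple of pUp, or vice versa.
    static List<Integer> targets(int up, int pUp, int pDown) {
        List<Integer> out = new ArrayList<>();
        if (pDown >= pUp) {
            // each upstream subtask feeds a consecutive block of downstream subtasks
            int perUpstream = pDown / pUp;
            for (int i = 0; i < perUpstream; i++) {
                out.add(up * perUpstream + i);
            }
        } else {
            // several upstream subtasks share one downstream subtask
            int upPerDown = pUp / pDown;
            out.add(up / upPerDown);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(targets(0, 2, 4)); // upstream 0 of 2 -> [0, 1]
        System.out.println(targets(1, 2, 4)); // upstream 1 of 2 -> [2, 3]
        System.out.println(targets(3, 4, 2)); // upstream 3 of 4 -> [1]
    }
}
```

With parallelism 2 upstream and 4 downstream, each upstream subtask covers two downstream subtasks; with 4 upstream and 2 downstream, pairs of upstream subtasks share one downstream subtask, matching the examples in the Javadoc.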
-
Committed by Aljoscha Krettek
This was causing problems with TypeSerializers that choke on null values, especially in the Scala KeyedStream.*WithState() family of functions.
-
Committed by Ufuk Celebi
-
Committed by Stephan Ewen
-
Committed by mjsax
This closes #1591
-
- 07 Feb 2016, 1 commit
-
-
Committed by Ufuk Celebi
tl;dr: Change the default Netty configuration to be relative to the number of slots, i.e. configure one memory arena (in PooledByteBufAllocator) per slot and use one event loop thread per slot. The behaviour can still be manually overridden. With this change, we can expect 16 MB of direct memory allocated per task slot by Netty.

Problem: We were using Netty's default PooledByteBufAllocator instance, which is subject to changing behaviour between Netty versions (this happened between versions 4.0.27.Final and 4.0.28.Final, resulting in increased memory consumption) and whose default memory consumption depends on the number of available cores in the system. This can be problematic, for example, in YARN setups where users run one slot per task manager on machines with many cores, resulting in a relatively high amount of allocated memory.

Solution: We instantiate a PooledByteBufAllocator instance manually and wrap it as a NettyBufferPool. Our instance configures one arena per task slot as the default. It is desirable to have the number of arenas match the number of event loop threads to minimize lock contention (Netty's default tried to ensure this as well), hence the number of threads is changed to match the number of slots by default as well. Both the number of threads and arenas can still be manually configured.

This closes #1593.
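As a back-of-envelope check of the 16 MB figure: Netty's pooled allocator reserves roughly one 16 MiB chunk per arena by default (8 KiB pages, maxOrder 11), so one arena per slot comes out to about 16 MiB of direct memory per slot. A minimal sizing sketch, with hypothetical names (this is not Flink's NettyBufferPool code, just the arithmetic behind the estimate):

```java
public class NettyPoolSizing {
    // Default Netty chunk size: 8 KiB page size, maxOrder 11 -> 8 KiB * 2^11 = 16 MiB.
    static final long CHUNK_SIZE_BYTES = 8192L << 11;

    // Per the commit message: one arena and one event-loop thread per task slot.
    static int arenas(int numSlots) {
        return numSlots;
    }

    static int eventLoopThreads(int numSlots) {
        return numSlots;
    }

    // Rough direct-memory expectation: one chunk per arena.
    static long expectedDirectMemoryBytes(int numSlots) {
        return arenas(numSlots) * CHUNK_SIZE_BYTES;
    }

    public static void main(String[] args) {
        // 4 slots -> 4 arenas, 4 event-loop threads, ~64 MiB direct memory
        System.out.println(expectedDirectMemoryBytes(4) / (1024 * 1024) + " MiB");
    }
}
```

Matching the arena count to the thread count matters because each event-loop thread is then likely to get its own arena, minimizing allocator lock contention, which is the rationale stated in the commit.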
-
- 05 Feb 2016, 8 commits
-
-
Committed by Aljoscha Krettek
-
Committed by Robert Metzger
This closes #1428
-
Committed by Stephan Ewen
This closes #1590
-
Committed by Till Rohrmann
The GradientDescent implementation did not work with sparse input data because it requires the gradient to be dense. This patch makes sure that the gradient sum is always dense. This closes #1587.
-
Committed by Ufuk Celebi
-
Committed by Stefano Baghino
This closes #1592.
-
Committed by Aljoscha Krettek
-
Committed by Márton Balassi
This closes #1574
-
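The idea behind the fix can be illustrated with a stand-alone sketch. All names here are hypothetical, and plain index-to-value maps stand in for sparse gradient vectors; the point is that accumulating into a dense array keeps the running sum dense regardless of the sparsity pattern of the individual gradients.

```java
import java.util.List;
import java.util.Map;

public class DenseGradientSum {
    // Accumulate (possibly sparse) per-example gradients into a dense array,
    // so the sum never depends on which indices happen to be present in any
    // single gradient.
    static double[] sumGradients(int dim, Iterable<Map<Integer, Double>> sparseGradients) {
        double[] sum = new double[dim]; // dense accumulator, all zeros initially
        for (Map<Integer, Double> gradient : sparseGradients) {
            for (Map.Entry<Integer, Double> entry : gradient.entrySet()) {
                sum[entry.getKey()] += entry.getValue();
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // two sparse gradients over a 4-dimensional weight vector
        Map<Integer, Double> g1 = Map.of(0, 1.0, 3, 2.0);
        Map<Integer, Double> g2 = Map.of(3, 1.5);
        double[] s = sumGradients(4, List.of(g1, g2));
        System.out.println(java.util.Arrays.toString(s)); // [1.0, 0.0, 0.0, 3.5]
    }
}
```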