DataStream programs in Flink are regular programs that implement transformations on data streams (e.g., filtering, updating state, defining windows, aggregating). The data streams are initially created from various sources (e.g., message queues, socket streams, files). Results are returned via sinks, which may for example write the data to files, or to standard output (for example the command line terminal). Flink programs run in a variety of contexts, standalone, or embedded in other programs. The execution can happen in a local JVM, or on clusters of many machines.
Please see [basic concepts](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/api_concepts.html) for an introduction to the basic concepts of the Flink API.
In order to create your own Flink DataStream program, we encourage you to start with [anatomy of a Flink Program](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/api_concepts.html#anatomy-of-a-flink-program) and gradually add your own [stream transformations](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/index.html). The remaining sections act as references for additional operations and advanced features.
## Example Program
The following program is a complete, working example of a streaming window word count application that counts the words coming from a web socket in 5-second windows. You can copy & paste the code to run it locally.
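A minimal Java version of such a program might look like the following sketch, consistent with the Flink 1.7 `WindowWordCount` example (the `Splitter` helper class and the job name are illustrative):

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class WindowWordCount {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read lines from the socket, split them into (word, 1) pairs,
        // and sum the counts per word over 5-second windows.
        DataStream<Tuple2<String, Integer>> dataStream = env
                .socketTextStream("localhost", 9999)
                .flatMap(new Splitter())
                .keyBy(0)
                .timeWindow(Time.seconds(5))
                .sum(1);

        dataStream.print();

        env.execute("Window WordCount");
    }

    // Splits each input line into (word, 1) tuples.
    public static class Splitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String sentence, Collector<Tuple2<String, Integer>> out) {
            for (String word : sentence.split(" ")) {
                out.collect(new Tuple2<>(word, 1));
            }
        }
    }
}
```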
To run the example program, first start the input stream with netcat from a terminal:
```bash
nc -lk 9999
```
Just type some words, hitting return for a new word. These will be the input to the word count program. If you want to see counts greater than 1, type the same word again and again within 5 seconds (increase the window size from 5 seconds if you cannot type that fast ☺).
## Data Sources

Sources are where your program reads its input from. You can attach a source to your program by using `StreamExecutionEnvironment.addSource(sourceFunction)`. Flink comes with a number of pre-implemented source functions, but you can always write your own custom sources by implementing the `SourceFunction` for non-parallel sources, or by implementing the `ParallelSourceFunction` interface or extending the `RichParallelSourceFunction` for parallel sources.
File-based:

* `readTextFile(path)` - Reads text files, i.e. files that respect the `TextInputFormat` specification, line-by-line and returns them as Strings.
* `readFile(fileInputFormat, path)` - Reads (once) files as dictated by the specified file input format.
* `readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo)` - This is the method called internally by the two previous ones. It reads files in the `path` based on the given `fileInputFormat`. Depending on the provided `watchType`, this source may periodically monitor (every `interval` ms) the path for new data (`FileProcessingMode.PROCESS_CONTINUOUSLY`), or process the data currently in the path once and exit (`FileProcessingMode.PROCESS_ONCE`). Using the `pathFilter`, the user can further exclude files from being processed.
Under the hood, Flink splits the file reading process into two sub-tasks, namely _directory monitoring_ and _data reading_. Each of these sub-tasks is implemented by a separate entity. Monitoring is implemented by a single, **non-parallel** (parallelism = 1) task, while reading is performed by multiple tasks running in parallel. The parallelism of the latter is equal to the job parallelism. The role of the single monitoring task is to scan the directory (periodically or only once depending on the `watchType`), find the files to be processed, divide them into _splits_, and assign these splits to the downstream readers. The readers are the ones who will read the actual data. Each split is read by only one reader, while a reader can read multiple splits, one-by-one.

**Important notes:**
1. If the `watchType` is set to `FileProcessingMode.PROCESS_CONTINUOUSLY`, when a file is modified, its contents are re-processed entirely. This can break the “exactly-once” semantics, as appending data at the end of a file will lead to **all** its contents being re-processed.
2. If the `watchType` is set to `FileProcessingMode.PROCESS_ONCE`, the source scans the path **once** and exits, without waiting for the readers to finish reading the file contents. Of course the readers will continue reading until all file contents are read. Closing the source leads to no more checkpoints after that point. This may lead to slower recovery after a node failure, as the job will resume reading from the last checkpoint.
Collection-based:

* `fromCollection(Iterator, Class)` - Creates a data stream from an iterator. The class specifies the data type of the elements returned by the iterator.
* `fromParallelCollection(SplittableIterator, Class)` - Creates a data stream from an iterator, in parallel. The class specifies the data type of the elements returned by the iterator.
* `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in parallel.
Custom:
* `addSource` - Attach a new source function. For example, to read from Apache Kafka you can use `addSource(new FlinkKafkaConsumer08<>(...))`. See [connectors](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/index.html) for more details.
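As a sketch of what a custom non-parallel source can look like (the class name and emission logic below are illustrative, not from the original docs):

```java
import org.apache.flink.streaming.api.functions.source.SourceFunction;

// A simple non-parallel source: emits an increasing counter until cancelled.
public class CounterSource implements SourceFunction<Long> {

    private volatile boolean isRunning = true;

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long counter = 0L;
        while (isRunning) {
            ctx.collect(counter++); // emit the next element
            Thread.sleep(100);      // throttle emission a little
        }
    }

    @Override
    public void cancel() {
        isRunning = false;
    }
}
```

Such a source would then be attached with `env.addSource(new CounterSource())`.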
## DataStream Transformations

Please see [operators](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/index.html) for an overview of the available stream transformations.
## Data Sinks

Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them. Flink comes with a variety of built-in output formats that are encapsulated behind operations on the DataStreams:
* `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are obtained by calling the _toString()_ method of each element.
* `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field delimiters are configurable. The value for each field comes from the _toString()_ method of the objects.
* `print()` / `printToErr()` - Prints the _toString()_ value of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is prepended to the output. This can help to distinguish between different calls to _print_. If the parallelism is greater than 1, the output will also be prepended with the identifier of the task which produced the output.
* `addSink` - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as Apache Kafka) that are implemented as sink functions.
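As a sketch of a custom sink function (the class name and the logging behavior are illustrative, not from the original docs):

```java
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

// A trivial sink: writes each element to standard output,
// standing in for a real external system.
public class LoggingSink implements SinkFunction<String> {

    @Override
    public void invoke(String value, Context context) {
        System.out.println("sink received: " + value);
    }
}
```

It would be attached to a stream with `stream.addSink(new LoggingSink())`.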
Note that the `write*()` methods on `DataStream` are mainly intended for debugging purposes. They do not participate in Flink’s checkpointing, which means these functions usually have at-least-once semantics. Whether data is flushed to the target system depends on the implementation of the OutputFormat, so not all elements sent to the OutputFormat immediately show up in the target system. Also, in failure cases, those records might be lost.
For reliable, exactly-once delivery of a stream into a file system, use the `flink-connector-filesystem`. Also, custom implementations through the `.addSink(...)` method can participate in Flink’s checkpointing for exactly-once semantics.
## Iterations

Iterative streaming programs implement a step function and embed it into an `IterativeStream`. As a DataStream program may never finish, there is no maximum number of iterations. Instead, you need to specify which part of the stream is fed back to the iteration and which part is forwarded downstream using a `split` transformation or a `filter`. Here, we show an example using filters. First, we define an `IterativeStream`:
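For example (a sketch, assuming an upstream `DataStream<Integer> input`):

```java
IterativeStream<Integer> iteration = input.iterate();
```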
Then, we specify the logic that will be executed inside the loop using a series of transformations (here a simple `map` transformation):
```java
DataStream<Integer> iterationBody = iteration.map(/* this is executed many times */);
```
To close an iteration and define the iteration tail, call the `closeWith(feedbackStream)` method of the `IterativeStream`. The DataStream given to the `closeWith` function will be fed back to the iteration head. A common pattern is to use a filter to separate the part of the stream that is fed back, and the part of the stream which is propagated forward. These filters can, e.g., define the “termination” logic, where an element is allowed to propagate downstream rather than being fed back.
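Using filters, the tail could look as follows (a sketch; the filter predicates are placeholders):

```java
// Feed one part of the stream back to the iteration head...
iteration.closeWith(iterationBody.filter(/* one part of the stream */));
// ...and forward the other part downstream.
DataStream<Integer> output = iterationBody.filter(/* some other part of the stream */);
```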
## Execution Parameters

Please refer to [execution configuration](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/execution_configuration.html) for an explanation of most parameters. These parameters pertain specifically to the DataStream API:
* `setAutoWatermarkInterval(long milliseconds)`: Set the interval for automatic watermark emission. You can get the current value with `long getAutoWatermarkInterval()`.
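For example (a sketch; the 500 ms interval is illustrative):

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setAutoWatermarkInterval(500); // emit watermarks every 500 ms
```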
### Fault Tolerance

[State & Checkpointing](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/state/checkpointing.html) describes how to enable and configure Flink’s checkpointing mechanism.
### Controlling Latency

By default, elements are not transferred on the network one-by-one (which would cause unnecessary network traffic) but are buffered. The size of the buffers (which are actually transferred between machines) can be set in the Flink config files. While this method is good for optimizing throughput, it can cause latency issues when the incoming stream is not fast enough. To control throughput and latency, you can use `env.setBufferTimeout(timeoutMillis)` on the execution environment (or on individual operators) to set a maximum wait time for the buffers to fill up. After this time, the buffers are sent automatically even if they are not full. The default value for this timeout is 100 ms.
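The timeout can be set on the whole environment or on individual operators, roughly as follows (a sketch; the timeout value and the doubling map are placeholders):

```java
long timeoutMillis = 100;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setBufferTimeout(timeoutMillis); // applies to the whole job

env.generateSequence(1, 10)
        .map(new MapFunction<Long, Long>() {
            @Override
            public Long map(Long value) {
                return value * 2; // placeholder transformation
            }
        })
        .setBufferTimeout(timeoutMillis); // overrides the timeout for this operator only
```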
To maximize throughput, call `setBufferTimeout(-1)`, which removes the timeout so that buffers are only flushed when they are full. To minimize latency, set the timeout to a value close to 0 (for example 5 or 10 ms). A buffer timeout of 0 should be avoided, because it can cause severe performance degradation.
## Debugging

Before running a streaming program in a distributed cluster, it is a good idea to make sure that the implemented algorithm works as desired. Hence, implementing data analysis programs is usually an incremental process of checking results, debugging, and improving.
Flink provides features to significantly ease the development process of data analysis programs by supporting local debugging from within an IDE, injection of test data, and collection of result data. This section gives some hints on how to ease the development of Flink programs.
### Local Execution Environment

A `LocalStreamEnvironment` starts a Flink system within the same JVM process it was created in. If you start the LocalEnvironment from an IDE, you can set breakpoints in your code and easily debug your program.
A LocalEnvironment is created and used as follows:
```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();

DataStream<String> lines = env.addSource(/* some source */);
// build your program

env.execute();
```
### Collection Data Sources
Flink provides special data sources which are backed by Java collections to ease testing. Once a program has been tested, the sources and sinks can be easily replaced by sources and sinks that read from / write to external systems.
```scala
val env = StreamExecutionEnvironment.createLocalEnvironment()

// Create a DataStream from a list of elements
val myInts = env.fromElements(1, 2, 3, 4, 5)

// Create a DataStream from an Iterator
val longIt: Iterator[Long] = ...
val myLongs = env.fromCollection(longIt)
```
**Note:** Currently, the collection data source requires that data types and iterators implement `Serializable`. Furthermore, collection data sources can not be executed in parallel (parallelism = 1).
### Iterator Data Sink

Flink also provides a sink to collect DataStream results for testing and debugging purposes. It can be used as follows:
```scala
import org.apache.flink.streaming.experimental.DataStreamUtils
import scala.collection.JavaConverters.asScalaIteratorConverter

val myResult: DataStream[(String, Int)] = ...
val myOutput: Iterator[(String, Int)] = DataStreamUtils.collect(myResult.javaStream).asScala
```
**Note:** The `flink-streaming-contrib` module was removed in Flink 1.5.0. Its classes have been moved into `flink-streaming-java` and `flink-streaming-scala`.
## Where to go next?

* [Operators](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/index.html): Specification of available streaming operators.
* [Event Time](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/event_time.html): Introduction to Flink’s notion of time.
* [State & Fault Tolerance](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/state/index.html): Explanation of how to develop stateful applications.
* [Connectors](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/index.html): Description of available input and output connectors.