Commit 34b7bd1d authored by: W wizardforcel

2019-02-17 15:40:04

Parent dd3a99bf
......@@ -51,7 +51,7 @@ This extension introduces new methods in both the DataSet and DataStream Scala A
| --- | --- | --- |
| **mapWith** | **map (DataSet)** |
<figure class="highlight">
```
data.mapWith {
......@@ -59,12 +59,12 @@ data.mapWith {
}
```
</figure>
|
| **mapPartitionWith** | **mapPartition (DataSet)** |
<figure class="highlight">
```
data.mapPartitionWith {
......@@ -72,12 +72,12 @@ data.mapPartitionWith {
}
```
</figure>
|
| **flatMapWith** | **flatMap (DataSet)** |
<figure class="highlight">
```
data.flatMapWith {
......@@ -85,12 +85,12 @@ data.flatMapWith {
}
```
</figure>
|
| **filterWith** | **filter (DataSet)** |
<figure class="highlight">
```
data.filterWith {
......@@ -98,12 +98,12 @@ data.filterWith {
}
```
</figure>
|
| **reduceWith** | **reduce (DataSet, GroupedDataSet)** |
<figure class="highlight">
```
data.reduceWith {
......@@ -111,12 +111,12 @@ data.reduceWith {
}
```
</figure>
|
| **reduceGroupWith** | **reduceGroup (GroupedDataSet)** |
<figure class="highlight">
```
data.reduceGroupWith {
......@@ -124,12 +124,12 @@ data.reduceGroupWith {
}
```
</figure>
|
| **groupingBy** | **groupBy (DataSet)** |
<figure class="highlight">
```
data.groupingBy {
......@@ -137,12 +137,12 @@ data.groupingBy {
}
```
</figure>
|
| **sortGroupWith** | **sortGroup (GroupedDataSet)** |
<figure class="highlight">
```
grouped.sortGroupWith(Order.ASCENDING) {
......@@ -150,12 +150,12 @@ grouped.sortGroupWith(Order.ASCENDING) {
}
```
</figure>
|
| **combineGroupWith** | **combineGroup (GroupedDataSet)** |
<figure class="highlight">
```
grouped.combineGroupWith {
......@@ -163,12 +163,12 @@ grouped.combineGroupWith {
}
```
</figure>
|
| **projecting** | **apply (JoinDataSet, CrossDataSet)** |
<figure class="highlight">
```
data1.join(data2).
......@@ -183,12 +183,12 @@ data1.cross(data2).projecting {
}
```
</figure>
|
| **projecting** | **apply (CoGroupDataSet)** |
<figure class="highlight">
```
data1.coGroup(data2).
......@@ -200,7 +200,7 @@ data1.coGroup(data2).
}
```
</figure>
|
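The table above shows each extension method in isolation. As a rough end-to-end sketch (assuming the extension is enabled with `import org.apache.flink.api.scala.extensions._`, and using a hypothetical `Sale` case class), the partial-function variants chain just like the regular DataSet operators:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions._

// Hypothetical element type used only for this sketch.
case class Sale(product: String, amount: Double)

object DataSetExtensionSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val sales = env.fromElements(Sale("a", 10.0), Sale("a", 5.0), Sale("b", 2.5))

    val totals = sales
      .filterWith { case Sale(_, amount) => amount > 1.0 }   // destructure instead of s => s.amount > 1.0
      .mapWith { case Sale(p, amount) => Sale(p, amount * 1.2) }
      .groupingBy { case Sale(p, _) => p }
      .reduceWith { case (Sale(p, a1), Sale(_, a2)) => Sale(p, a1 + a2) }

    totals.print()
  }
}
```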
......@@ -210,7 +210,7 @@ data1.coGroup(data2).
| --- | --- | --- |
| **mapWith** | **map (DataStream)** |
<figure class="highlight">
```
data.mapWith {
......@@ -218,12 +218,12 @@ data.mapWith {
}
```
</figure>
|
| **mapPartitionWith** | **mapPartition (DataStream)** |
<figure class="highlight">
```
data.mapPartitionWith {
......@@ -231,12 +231,12 @@ data.mapPartitionWith {
}
```
</figure>
|
| **flatMapWith** | **flatMap (DataStream)** |
<figure class="highlight">
```
data.flatMapWith {
......@@ -244,12 +244,12 @@ data.flatMapWith {
}
```
</figure>
|
| **filterWith** | **filter (DataStream)** |
<figure class="highlight">
```
data.filterWith {
......@@ -257,12 +257,12 @@ data.filterWith {
}
```
</figure>
|
| **keyingBy** | **keyBy (DataStream)** |
<figure class="highlight">
```
data.keyingBy {
......@@ -270,12 +270,12 @@ data.keyingBy {
}
```
</figure>
|
| **mapWith** | **map (ConnectedDataStream)** |
<figure class="highlight">
```
data.mapWith(
......@@ -284,12 +284,12 @@ data.mapWith(
)
```
</figure>
|
| **flatMapWith** | **flatMap (ConnectedDataStream)** |
<figure class="highlight">
```
data.flatMapWith(
......@@ -298,12 +298,12 @@ data.flatMapWith(
)
```
</figure>
|
| **keyingBy** | **keyBy (ConnectedDataStream)** |
<figure class="highlight">
```
data.keyingBy(
......@@ -312,12 +312,12 @@ data.keyingBy(
)
```
</figure>
|
| **reduceWith** | **reduce (KeyedStream, WindowedStream)** |
<figure class="highlight">
```
data.reduceWith {
......@@ -325,12 +325,12 @@ data.reduceWith {
}
```
</figure>
|
| **foldWith** | **fold (KeyedStream, WindowedStream)** |
<figure class="highlight">
```
data.foldWith(User(bought = 0)) {
......@@ -338,12 +338,12 @@ data.foldWith(User(bought = 0)) {
}
```
</figure>
|
| **applyWith** | **apply (WindowedStream)** |
<figure class="highlight">
```
data.applyWith(0)(
......@@ -351,12 +351,12 @@ data.applyWith(0)(
windowFunction = case (k, w, sum) => // [...] )
```
</figure>
|
| **projecting** | **apply (JoinedStream)** |
<figure class="highlight">
```
data1.join(data2).
......@@ -367,7 +367,7 @@ data1.join(data2).
}
```
</figure>
|
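A comparable DataStream sketch (again hedged: the streaming extension import `org.apache.flink.streaming.api.scala.extensions._` and the `Reading` type are assumptions for illustration) combines `filterWith`, `keyingBy`, and `reduceWith`:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.extensions._

// Hypothetical element type used only for this sketch.
case class Reading(sensor: String, value: Double)

object DataStreamExtensionSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    env.fromElements(Reading("a", 1.0), Reading("a", 2.0), Reading("b", 3.0))
      .filterWith { case Reading(_, v) => v > 0 }         // keep positive readings
      .keyingBy { case Reading(sensor, _) => sensor }     // key by sensor id
      .reduceWith { case (Reading(s, v1), Reading(_, v2)) => Reading(s, v1 + v2) }
      .print()

    env.execute("DataStream extensions sketch")
  }
}
```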
......
(This diff is collapsed.)
......@@ -175,7 +175,7 @@ This section gives a brief overview of the available transformations. The [trans
| **Map**
PythonDataStream → PythonDataStream | Takes one element and produces one element.
<figure class="highlight">
```
class Doubler(MapFunction):
......@@ -185,13 +185,13 @@ class Doubler(MapFunction):
data_stream.map(Doubler())
```
</figure>
|
| **FlatMap**
PythonDataStream → PythonDataStream | Takes one element and produces zero, one, or more elements.
<figure class="highlight">
```
class Tokenizer(FlatMapFunction):
......@@ -201,13 +201,13 @@ class Tokenizer(FlatMapFunction):
data_stream.flat_map(Tokenizer())
```
</figure>
|
| **Filter**
PythonDataStream → PythonDataStream | Evaluates a boolean function for each element and retains those for which the function returns true.
<figure class="highlight">
```
class GreaterThan1000(FilterFunction):
......@@ -217,13 +217,13 @@ class GreaterThan1000(FilterFunction):
data_stream.filter(GreaterThan1000())
```
</figure>
|
| **KeyBy**
PythonDataStream → PythonKeyedStream | Logically partitions a stream into disjoint partitions, each partition containing elements of the same key. Internally, this is implemented with hash partitioning. See [keys](/dev/api_concepts#specifying-keys) on how to specify keys. This transformation returns a PythonKeyedStream.
<figure class="highlight">
```
class Selector(KeySelector):
......@@ -233,13 +233,13 @@ class Selector(KeySelector):
data_stream.key_by(Selector())  # key by field "someKey"
```
</figure>
|
| **Reduce**
PythonKeyedStream → PythonDataStream | A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and emits the new value.
<figure class="highlight">
```
class Sum(ReduceFunction):
......@@ -251,13 +251,13 @@ class Sum(ReduceFunction):
data.reduce(Sum())
```
</figure>
|
| **Window**
PythonKeyedStream → PythonWindowedStream | Windows can be defined on already partitioned KeyedStreams. Windows group the data in each key according to some characteristic (e.g., the data that arrived within the last 5 seconds). See [windows](operators/windows.html) for a complete description of windows.
<figure class="highlight">
```
keyed_stream.count_window(10, 5) # Last 10 elements, sliding (jumping) by 5 elements
......@@ -267,13 +267,13 @@ keyed_stream.time_window(milliseconds(30)) # Last 30 milliseconds of data
keyed_stream.time_window(milliseconds(100), milliseconds(20)) # Last 100 milliseconds of data, sliding (jumping) by 20 milliseconds
```
</figure>
|
| **Window Apply**
PythonWindowedStream → PythonDataStream | Applies a general function to the window as a whole. Below is a function that manually sums the elements of a window.
<figure class="highlight">
```
class WindowSum(WindowFunction):
......@@ -286,13 +286,13 @@ class WindowSum(WindowFunction):
windowed_stream.apply(WindowSum())
```
</figure>
|
| **Window Reduce**
PythonWindowedStream → PythonDataStream | Applies a functional reduce function to the window and returns the reduced value.
<figure class="highlight">
```
class Sum(ReduceFunction):
......@@ -304,25 +304,25 @@ class Sum(ReduceFunction):
windowed_stream.reduce(Sum())
```
</figure>
|
| **Union**
PythonDataStream* → PythonDataStream | Union of two or more data streams creating a new stream containing all the elements from all the streams. Note: If you union a data stream with itself you will get each element twice in the resulting stream.
<figure class="highlight">
```
data_stream.union(other_stream1, other_stream2, ...)
```
</figure>
|
| **Split**
PythonDataStream → PythonSplitStream | Split the stream into two or more streams according to some criterion.
<figure class="highlight">
```
class StreamSelector(OutputSelector):
......@@ -332,13 +332,13 @@ class StreamSelector(OutputSelector):
split_stream = data_stream.split(StreamSelector())
```
</figure>
|
| **Select**
SplitStream → DataStream | Select one or more streams from a split stream.
<figure class="highlight">
```
even_data_stream = split_stream.select("even")
......@@ -346,13 +346,13 @@ odd_data_stream = split_stream.select("odd")
all_data_stream = split_stream.select("even", "odd")
```
</figure>
|
| **Iterate**
PythonDataStream → PythonIterativeStream → PythonDataStream | Creates a "feedback" loop in the flow, by redirecting the output of one operator to some previous operator. This is especially useful for defining algorithms that continuously update a model. The following code starts with a stream and applies the iteration body continuously. Elements that are greater than 0 are sent back to the feedback channel, and the rest of the elements are forwarded downstream. See [iterations](#iterations) for a complete description.
<figure class="highlight">
```
class MinusOne(MapFunction):
......@@ -374,7 +374,7 @@ iteration.close_with(feedback)
output = iteration_body.filter(LessEqualToZero())
```
</figure>
|
......
(This diff is collapsed.)
......@@ -126,63 +126,63 @@ This section gives a brief overview of the available transformations. The [trans
| --- | --- |
| **Map** | Takes one element and produces one element.
<figure class="highlight">
```
data.map(lambda x: x * 2)
```
</figure>
|
| **FlatMap** | Takes one element and produces zero, one, or more elements.
<figure class="highlight">
```
data.flat_map(
  lambda x, c: [(1, word) for word in x.lower().split()])
```
</figure>
|
| **MapPartition** | Transforms a parallel partition in a single function call. The function gets the partition as an `Iterator` and can produce an arbitrary number of result values. The number of elements in each partition depends on the parallelism and previous operations.
<figure class="highlight">
```
data.map_partition(lambda x,c: [value * 2 for value in x])
```
</figure>
|
| **Filter** | Evaluates a boolean function for each element and retains those for which the function returns true.
<figure class="highlight">
```
data.filter(lambda x: x > 1000)
```
</figure>
|
| **Reduce** | Combines a group of elements into a single element by repeatedly combining two elements into one. Reduce may be applied on a full data set, or on a grouped data set.
<figure class="highlight">
```
data.reduce(lambda x,y : x + y)
```
</figure>
|
| **ReduceGroup** | Combines a group of elements into one or more elements. ReduceGroup may be applied on a full data set, or on a grouped data set.
<figure class="highlight">
```
class Adder(GroupReduceFunction):
......@@ -194,12 +194,12 @@ class Adder(GroupReduceFunction):
data.reduce_group(Adder())
```
</figure>
|
| **Aggregate** | Performs a built-in operation (sum, min, max) on one field of all the Tuples in a data set or in each group of a data set. Aggregation can be applied on a full dataset or on a grouped data set.
<figure class="highlight">
```
# This code finds the sum of all of the values in the first field and the maximum of all of the values in the second field
......@@ -209,12 +209,12 @@ data.aggregate(Aggregation.Sum, 0).and_agg(Aggregation.Max, 1)
data.sum(0).and_agg(Aggregation.Max, 1)
```
</figure>
|
| **Join** | Joins two data sets by creating all pairs of elements that are equal on their keys. Optionally uses a JoinFunction to turn the pair of elements into a single element. See [keys](#specifying-keys) on how to define join keys.
<figure class="highlight">
```
# In this case tuple fields are used as keys.
......@@ -223,51 +223,51 @@ data.sum(0).and_agg(Aggregation.Max, 1)
result = input1.join(input2).where(0).equal_to(1)
```
</figure>
|
| **CoGroup** | The two-dimensional variant of the reduce operation. Groups each input on one or more fields and then joins the groups. The transformation function is called per pair of groups. See [keys](#specifying-keys) on how to define coGroup keys.
<figure class="highlight">
```
data1.co_group(data2).where(0).equal_to(1)
```
</figure>
|
| **Cross** | Builds the Cartesian product (cross product) of two inputs, creating all pairs of elements. Optionally uses a CrossFunction to turn the pair of elements into a single element.
<figure class="highlight">
```
result = data1.cross(data2)
```
</figure>
|
| **Union** | Produces the union of two data sets.
<figure class="highlight">
```
data.union(data2)
```
</figure>
|
| **ZipWithIndex** | Assigns consecutive indexes to each element. For more information, please refer to the [Zip Elements Guide](zip_elements_guide.html#zip-with-a-dense-index).
<figure class="highlight">
```
data.zip_with_index()
```
</figure>
|
......
......@@ -438,24 +438,24 @@ _Logical offsets_ enable navigation within the events that were mapped to a part
| --- | --- |
|
<figure class="highlight">
```
LAST(variable.field, n)
```
</figure>
| Returns the value of the field from the event that was mapped to the _n_-th _last_ element of the variable. The counting starts at the last element mapped. |
|
<figure class="highlight">
```
FIRST(variable.field, n)
```
</figure>
| Returns the value of the field from the event that was mapped to the _n_-th element of the variable. The counting starts at the first element mapped. |
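For context, `FIRST` and `LAST` appear in the `MEASURES` and `DEFINE` clauses of a `MATCH_RECOGNIZE` block. Below is a hedged sketch of such a query issued through the Scala Table API (the `Ticker` table, its fields, and the pattern are placeholders; the optional second argument is the offset described above):

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.TableEnvironment

object LogicalOffsetSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // Assumes a table Ticker(symbol, price, rowtime) has been registered beforehand.
    val result = tableEnv.sqlQuery(
      """
        |SELECT *
        |FROM Ticker
        |MATCH_RECOGNIZE (
        |  PARTITION BY symbol
        |  ORDER BY rowtime
        |  MEASURES
        |    FIRST(A.price) AS firstPrice,  -- field of the first event mapped to A
        |    LAST(A.price)  AS lastPrice    -- field of the last event mapped to A
        |  ONE ROW PER MATCH
        |  PATTERN (A+ B)
        |  DEFINE
        |    A AS A.price < 15,
        |    B AS B.price >= 15
        |) AS T
      """.stripMargin)
  }
}
```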
......
(This diff is collapsed.)
......@@ -229,7 +229,7 @@ String literals must be enclosed in single quotes (e.g., `SELECT 'Hello World'`)
| **Scan / Select / As**
Batch Streaming |
<figure class="highlight">
```
SELECT * FROM Orders
......@@ -237,13 +237,13 @@ SELECT * FROM Orders
SELECT a, c AS d FROM Orders
```
</figure>
|
| **Where / Filter**
Batch Streaming |
<figure class="highlight">
```
SELECT * FROM Orders WHERE b = 'red'
......@@ -251,19 +251,19 @@ SELECT * FROM Orders WHERE b = 'red'
SELECT * FROM Orders WHERE a % 2 = 0
```
</figure>
|
| **User-defined Scalar Functions (Scalar UDF)**
Batch Streaming | UDFs must be registered in the TableEnvironment. See the [UDF documentation](udfs.html) for details on how to specify and register scalar UDFs.
<figure class="highlight">
```
SELECT PRETTY_PRINT(user) FROM Orders
```
</figure>
|
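The row above shows only the query side. As a minimal sketch of the registration step it mentions (the `PrettyPrint` class is a hypothetical implementation and the `Orders` table is assumed to be registered), a scalar UDF is registered on the TableEnvironment under the name used in the SQL:

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.functions.ScalarFunction

// Hypothetical scalar function that renders a user id as a readable string.
class PrettyPrint extends ScalarFunction {
  def eval(user: Long): String = s"user-$user"
}

object ScalarUdfSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // Register under the name used in the query, then call it like a built-in function.
    tableEnv.registerFunction("PRETTY_PRINT", new PrettyPrint)
    val result = tableEnv.sqlQuery("SELECT PRETTY_PRINT(user) FROM Orders")
  }
}
```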
......@@ -275,7 +275,7 @@ SELECT PRETTY_PRINT(user) FROM Orders
Batch Streaming
Result Updating | **Note:** GroupBy on a streaming table produces an updating result. See the [Dynamic Tables Streaming Concepts](streaming/dynamic_tables.html) page for details.
<figure class="highlight">
```
SELECT a, SUM(b) as d
......@@ -283,13 +283,13 @@ FROM Orders
GROUP BY a
```
</figure>
|
| **GroupBy Window Aggregation**
Batch Streaming | Use a group window to compute a single result row per group. See [Group Windows](#group-windows) section for more details.
<figure class="highlight">
```
SELECT user, SUM(amount)
......@@ -297,13 +297,13 @@ FROM Orders
GROUP BY TUMBLE(rowtime, INTERVAL '1' DAY), user
```
</figure>
|
| **Over Window aggregation**
Streaming | **Note:** All aggregates must be defined over the same window, i.e., same partitioning, sorting, and range. Currently, only windows with PRECEDING (UNBOUNDED and bounded) to CURRENT ROW range are supported. Ranges with FOLLOWING are not supported yet. ORDER BY must be specified on a single [time attribute](streaming/time_attributes.html)
<figure class="highlight">
```
SELECT COUNT(amount) OVER (
......@@ -320,26 +320,26 @@ WINDOW w AS (
ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
```
</figure>
|
| **Distinct**
Batch Streaming
Result Updating |
<figure class="highlight">
```
SELECT DISTINCT users FROM Orders
```
</figure>
**Note:** For streaming queries the required state to compute the query result might grow infinitely depending on the number of distinct fields. Please provide a query configuration with valid retention interval to prevent excessive state size. See [Query Configuration](streaming/query_configuration.html) for details. |
| **Grouping sets, Rollup, Cube**
Batch |
<figure class="highlight">
```
SELECT SUM(amount)
......@@ -347,13 +347,13 @@ FROM Orders
GROUP BY GROUPING SETS ((user), (product))
```
</figure>
|
| **Having**
Batch Streaming |
<figure class="highlight">
```
SELECT SUM(amount)
......@@ -362,13 +362,13 @@ GROUP BY users
HAVING SUM(amount) > 50
```
</figure>
|
| **User-defined Aggregate Functions (UDAGG)**
Batch Streaming | UDAGGs must be registered in the TableEnvironment. See the [UDF documentation](udfs.html) for details on how to specify and register UDAGGs.
<figure class="highlight">
```
SELECT MyAggregate(amount)
......@@ -376,7 +376,7 @@ FROM Orders
GROUP BY users
```
</figure>
|
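Several notes in this table (and in the joins below) recommend providing a query configuration with a retention interval for streaming queries. A rough sketch of what that looks like in the Scala Table API (Flink 1.x-style `StreamQueryConfig`; the `Orders` table is assumed to be registered):

```scala
import org.apache.flink.api.common.time.Time
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

object RetentionSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // Clean up per-key state that has been idle for between 12 and 24 hours.
    val qConfig = tableEnv.queryConfig
    qConfig.withIdleStateRetentionTime(Time.hours(12), Time.hours(24))

    val distinctUsers = tableEnv.sqlQuery("SELECT DISTINCT users FROM Orders")

    // The configuration is applied when the updating result is emitted as a retract stream.
    distinctUsers.toRetractStream[Row](qConfig).print()
    env.execute("retention sketch")
  }
}
```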
......@@ -387,20 +387,20 @@ GROUP BY users
| **Inner Equi-join**
Batch Streaming | Currently, only equi-joins are supported, i.e., joins that have at least one conjunctive condition with an equality predicate. Arbitrary cross or theta joins are not supported. **Note:** The order of joins is not optimized. Tables are joined in the order in which they are specified in the FROM clause. Make sure to specify tables in an order that does not yield a cross join (Cartesian product), which is not supported and would cause the query to fail.
<figure class="highlight">
```
SELECT *
FROM Orders INNER JOIN Product ON Orders.productId = Product.id
```
</figure>
**Note:** For streaming queries the required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with valid retention interval to prevent excessive state size. See [Query Configuration](streaming/query_configuration.html) for details. |
| **Outer Equi-join**
Batch Streaming Result Updating | Currently, only equi-joins are supported, i.e., joins that have at least one conjunctive condition with an equality predicate. Arbitrary cross or theta joins are not supported. **Note:** The order of joins is not optimized. Tables are joined in the order in which they are specified in the FROM clause. Make sure to specify tables in an order that does not yield a cross join (Cartesian product), which is not supported and would cause the query to fail.
<figure class="highlight">
```
SELECT *
......@@ -413,7 +413,7 @@ SELECT *
FROM Orders FULL OUTER JOIN Product ON Orders.productId = Product.id
```
</figure>
**Note:** For streaming queries the required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with valid retention interval to prevent excessive state size. See [Query Configuration](streaming/query_configuration.html) for details. |
| **Time-windowed Join**
......@@ -423,7 +423,7 @@ Batch Streaming | **Note:** Time-windowed joins are a subset of regular joins th
* `ltime >= rtime AND ltime < rtime + INTERVAL '10' MINUTE`
* `ltime BETWEEN rtime - INTERVAL '10' SECOND AND rtime + INTERVAL '5' SECOND`
<figure class="highlight">
```
SELECT *
......@@ -432,50 +432,50 @@ WHERE o.id = s.orderId AND
o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
```
</figure>
The example above joins each order with its corresponding shipment if the order was shipped within four hours of being received. |
| **Expanding arrays into a relation**
Batch Streaming | Unnesting WITH ORDINALITY is not supported yet.
<figure class="highlight">
```
SELECT users, tag
FROM Orders CROSS JOIN UNNEST(tags) AS t (tag)
```
</figure>
|
| **Join with Table Function**
Batch Streaming | Joins a table with the results of a table function. Each row of the left (outer) table is joined with all rows produced by the corresponding call of the table function. User-defined table functions (UDTFs) must be registered beforehand. See the [UDF documentation](udfs.html) for details on how to specify and register UDTFs. **Inner Join** A row of the left (outer) table is dropped if its table function call returns an empty result.
<figure class="highlight">
```
SELECT users, tag
FROM Orders, LATERAL TABLE(unnest_udtf(tags)) t AS tag
```
</figure>
**Left Outer Join** If a table function call returns an empty result, the corresponding outer row is preserved and the result is padded with null values.
<figure class="highlight">
```
SELECT users, tag
FROM Orders LEFT JOIN LATERAL TABLE(unnest_udtf(tags)) t AS tag ON TRUE
```
</figure>
**Note:** Currently, only literal `TRUE` is supported as predicate for a left outer join against a lateral table. |
| **Join with Temporal Table**
Streaming | [Temporal tables](streaming/temporal_tables.html) are tables that track changes over time. A [temporal table function](streaming/temporal_tables.html#temporal-table-functions) provides access to the state of a temporal table at a specific point in time. The syntax to join a table with a temporal table function is the same as in _Join with Table Function_. **Note:** Currently, only inner joins with temporal tables are supported. Assuming _Rates_ is a [temporal table function](streaming/temporal_tables.html#temporal-table-functions), the join can be expressed in SQL as follows:
<figure class="highlight">
```
SELECT
......@@ -487,7 +487,7 @@ WHERE
r_currency = o_currency
```
</figure>
For more information, see the detailed [temporal tables concept description](streaming/temporal_tables.html). |
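The join above assumes that `Rates` already exists. A hedged sketch of how such a temporal table function might be created and registered (the rates history data and field names are placeholders chosen to match the query):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object TemporalTableSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // An append-only history of currency rates, with a processing-time attribute appended.
    val ratesHistory = env
      .fromElements(("Euro", 114L), ("Yen", 1L), ("Euro", 116L))
      .toTable(tableEnv, 'r_currency, 'r_rate, 'r_proctime.proctime)

    // Versioned over r_proctime, keyed by r_currency, registered under the name used in the join.
    val rates = ratesHistory.createTemporalTableFunction('r_proctime, 'r_currency)
    tableEnv.registerFunction("Rates", rates)
  }
}
```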
......@@ -498,7 +498,7 @@ For more information please check the more detailed [temporal tables concept des
| **Union**
Batch |
<figure class="highlight">
```
SELECT *
......@@ -509,13 +509,13 @@ FROM (
)
```
</figure>
|
| **UnionAll**
Batch Streaming |
<figure class="highlight">
```
SELECT *
......@@ -526,13 +526,13 @@ FROM (
)
```
</figure>
|
| **Intersect / Except**
Batch |
<figure class="highlight">
```
SELECT *
......@@ -543,9 +543,9 @@ FROM (
)
```
</figure>
<figure class="highlight">
```
SELECT *
......@@ -556,13 +556,13 @@ FROM (
)
```
</figure>
|
| **In**
Batch Streaming | Returns true if an expression exists in a given table sub-query. The sub-query table must consist of one column. This column must have the same data type as the expression.
<figure class="highlight">
```
SELECT user, amount
......@@ -572,13 +572,13 @@ WHERE product IN (
)
```
</figure>
**Note:** For streaming queries the operation is rewritten in a join and group operation. The required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with valid retention interval to prevent excessive state size. See [Query Configuration](streaming/query_configuration.html) for details. |
| **Exists**
Batch Streaming | Returns true if the sub-query returns at least one row. Only supported if the operation can be rewritten in a join and group operation.
<figure class="highlight">
```
SELECT user, amount
......@@ -588,7 +588,7 @@ WHERE product EXISTS (
)
```
</figure>
**Note:** For streaming queries the operation is rewritten in a join and group operation. The required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with valid retention interval to prevent excessive state size. See [Query Configuration](streaming/query_configuration.html) for details. |
......@@ -599,7 +599,7 @@ WHERE product EXISTS (
| **Order By**
Batch Streaming | **Note:** The result of streaming queries must be primarily sorted on an ascending [time attribute](streaming/time_attributes.html). Additional sorting attributes are supported.
<figure class="highlight">
```
SELECT *
......@@ -607,13 +607,13 @@ FROM Orders
ORDER BY orderTime
```
</figure>
|
| **Limit**
Batch |
<figure class="highlight">
```
SELECT *
......@@ -621,7 +621,7 @@ FROM Orders
LIMIT 3
```
</figure>
|
......@@ -632,7 +632,7 @@ LIMIT 3
| **Insert Into**
Batch Streaming | Output tables must be registered in the TableEnvironment (see [Register a TableSink](common.html#register-a-tablesink)). Moreover, the schema of the registered table must match the schema of the query.
<figure class="highlight">
```
INSERT INTO OutputTable
......@@ -640,7 +640,7 @@ SELECT users, tag
FROM Orders
```
</figure>
|
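As a rough sketch of the registration step mentioned above (sink path, field names, and types are placeholders matching the example query), a `TableSink` is registered under the target table name before the `INSERT INTO` statement is submitted with `sqlUpdate`:

```scala
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.{TableEnvironment, Types}
import org.apache.flink.table.sinks.CsvTableSink

object InsertIntoSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // Register a CSV sink as "OutputTable"; its schema must match the query result.
    tableEnv.registerTableSink(
      "OutputTable",
      Array("users", "tag"),
      Array[TypeInformation[_]](Types.STRING, Types.STRING),
      new CsvTableSink("/tmp/output.csv", "|"))

    // Assumes an "Orders" table with fields "users" and "tag" is registered.
    tableEnv.sqlUpdate("INSERT INTO OutputTable SELECT users, tag FROM Orders")
    env.execute("insert into sketch")
  }
}
```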
......@@ -771,7 +771,7 @@ val tableEnv = TableEnvironment.getTableEnvironment(env)
| **MATCH_RECOGNIZE**
Streaming | Searches for a given pattern in a streaming table according to the `MATCH_RECOGNIZE` [ISO standard](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip). This makes it possible to express complex event processing (CEP) logic in SQL queries. For a more detailed description, see the dedicated page for [detecting patterns in tables](streaming/match_recognize.html).
<figure class="highlight">
```
SELECT T.aid, T.bid, T.cid
......@@ -791,7 +791,7 @@ MATCH_RECOGNIZE (
) AS T
```
</figure>
|
......
(This diff is collapsed.)
......@@ -353,7 +353,7 @@ As you can see `{a1 a2 a3}` or `{a2 a3}` are not returned due to the stop condit
| --- | --- |
| **where(condition)** | Defines a condition for the current pattern. To match the pattern, an event must satisfy the condition. Multiple consecutive where() clauses lead to their conditions being ANDed:
<figure class="highlight">
```
pattern.where(new IterativeCondition<Event>() {
......@@ -364,12 +364,12 @@ pattern.where(new IterativeCondition<Event>() {
});
```
</figure>
|
| **or(condition)** | Adds a new condition which is ORed with an existing one. An event can match the pattern only if it passes at least one of the conditions:
<figure class="highlight">
```
pattern.where(new IterativeCondition<Event>() {
......@@ -385,12 +385,12 @@ pattern.where(new IterativeCondition<Event>() {
});
```
</figure>
|
| **until(condition)** | Specifies a stop condition for a looping pattern, i.e. if an event matching the given condition occurs, no more events are accepted into the pattern. Applicable only in conjunction with `oneOrMore()`. **NOTE:** This allows the state of the corresponding pattern to be cleaned up based on an event condition.
<figure class="highlight">
```
pattern.oneOrMore().until(new IterativeCondition<Event>() {
......@@ -401,84 +401,84 @@ pattern.oneOrMore().until(new IterativeCondition<Event>() {
});
```
</figure>
|
| **subtype(subClass)** | Defines a subtype condition for the current pattern. An event can only match the pattern if it is of this subtype:
<figure class="highlight">
```
pattern.subtype(SubEvent.class);
```
</figure>
|
| **oneOrMore()** | Specifies that this pattern expects at least one occurrence of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_java). **NOTE:** It is advised to use either `until()` or `within()` to enable state clearing.
<figure class="highlight">
```
pattern.oneOrMore();
```
</figure>
|
| **timesOrMore(#times)** | Specifies that this pattern expects at least **#times** occurrences of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_java).
<figure class="highlight">
```
pattern.timesOrMore(2);
```
</figure>
|
| **times(#ofTimes)** | Specifies that this pattern expects an exact number of occurrences of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_java).
<figure class="highlight">
```
pattern.times(2);
```
</figure>
|
| **times(#fromTimes, #toTimes)** | Specifies that this pattern expects occurrences between **#fromTimes** and **#toTimes** of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_java).
<figure class="highlight">
```
pattern.times(2, 4);
```
</figure>
|
| **optional()** | Specifies that this pattern is optional, i.e. it may not occur at all. This is applicable to all aforementioned quantifiers.
<figure class="highlight">
```
pattern.oneOrMore().optional();
```
</figure>
|
| **greedy()** | Specifies that this pattern is greedy, i.e. it will repeat as many times as possible. This is only applicable to quantifiers; group patterns are currently not supported.
<figure class="highlight">
```
pattern.oneOrMore().greedy();
```
</figure>
|
......@@ -486,113 +486,113 @@ pattern.oneOrMore().greedy();
| --- | --- |
| **where(condition)** | Defines a condition for the current pattern. To match the pattern, an event must satisfy the condition. Multiple consecutive where() clauses lead to their conditions being ANDed:
<figure class="highlight">
```
pattern.where(event => ... /* some condition */)
```
</figure>
|
| **or(condition)** | Adds a new condition which is ORed with an existing one. An event can match the pattern only if it passes at least one of the conditions:
<figure class="highlight">
```
pattern.where(event => ... /* some condition */)
.or(event => ... /* alternative condition */)
```
</figure>
|
| **until(condition)** | Specifies a stop condition for a looping pattern, i.e. if an event matching the given condition occurs, no more events are accepted into the pattern. Applicable only in conjunction with `oneOrMore()`. **NOTE:** This allows the state of the corresponding pattern to be cleaned up based on an event condition.
<figure class="highlight">
```
pattern.oneOrMore().until(event => ... /* some condition */)
```
</figure>
|
| **subtype(subClass)** | Defines a subtype condition for the current pattern. An event can only match the pattern if it is of this subtype:
<figure class="highlight">
```
pattern.subtype(classOf[SubEvent])
```
</figure>
|
| **oneOrMore()** | Specifies that this pattern expects at least one occurrence of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_scala). **NOTE:** It is advised to use either `until()` or `within()` to enable state clearing.
<figure class="highlight">
```
pattern.oneOrMore()
```
</figure>
|
| **timesOrMore(#times)** | Specifies that this pattern expects at least **#times** occurrences of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_scala).
<figure class="highlight">
```
pattern.timesOrMore(2)
```
</figure>
|
| **times(#ofTimes)** | Specifies that this pattern expects an exact number of occurrences of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_scala).
<figure class="highlight">
```
pattern.times(2)
```
</figure>
|
| **times(#fromTimes, #toTimes)** | Specifies that this pattern expects occurrences between **#fromTimes** and **#toTimes** of a matching event. By default a relaxed internal contiguity (between subsequent events) is used. For more info on internal contiguity see [consecutive](#consecutive_scala).
<figure class="highlight">
```
pattern.times(2, 4)
```
</figure>
|
| **optional()** | Specifies that this pattern is optional, i.e. it may not occur at all. This is applicable to all aforementioned quantifiers.
<figure class="highlight">
```
pattern.oneOrMore().optional()
```
</figure>
|
| **greedy()** | Specifies that this pattern is greedy, i.e. it will repeat as many times as possible. This is only applicable to quantifiers; group patterns are currently not supported.
<figure class="highlight">
```
pattern.oneOrMore().greedy()
```
</figure>
|
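As a small combined sketch of the condition and quantifier methods above (the `Event`/`SubEvent` classes and the thresholds are assumptions used only for illustration):

```scala
import org.apache.flink.cep.scala.pattern.Pattern

// Hypothetical event hierarchy used only for this sketch.
class Event(val name: String, val value: Double)
class SubEvent(name: String, value: Double, val severity: Int) extends Event(name, value)

object ConditionSketch {
  // At least one SubEvent with value > 10.0, stopping once an event named "end" is seen.
  val loop: Pattern[Event, SubEvent] = Pattern
    .begin[Event]("middle")
    .subtype(classOf[SubEvent])
    .where(_.value > 10.0)
    .oneOrMore()
    .until(_.name == "end")
}
```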
......@@ -722,7 +722,7 @@ For looping patterns (e.g. `oneOrMore()` and `times()`) the default is _relaxed
| --- | --- |
| **consecutive()** | Works in conjunction with `oneOrMore()` and `times()` and imposes strict contiguity between the matching events, i.e. any non-matching element breaks the match (as in `next()`). If not applied, a relaxed contiguity (as in `followedBy()`) is used. E.g. a pattern like:
<figure class="highlight">
```
Pattern.<Event>begin("start").where(new SimpleCondition<Event>() {
......@@ -745,12 +745,12 @@ Pattern.<Event>begin("start").where(new SimpleCondition<Event>() {
});
```
</figure>
Will generate the following matches for an input sequence: C D A1 A2 A3 D A4 B. With consecutive applied: {C A1 B}, {C A1 A2 B}, {C A1 A2 A3 B}. Without consecutive applied: {C A1 B}, {C A1 A2 B}, {C A1 A2 A3 B}, {C A1 A2 A3 A4 B} |
| **allowCombinations()** | Works in conjunction with `oneOrMore()` and `times()` and imposes non-deterministic relaxed contiguity between the matching events (as in `followedByAny()`). If not applied, a relaxed contiguity (as in `followedBy()`) is used. E.g. a pattern like:
<figure class="highlight">
```
Pattern.<Event>begin("start").where(new SimpleCondition<Event>() {
......@@ -773,7 +773,7 @@ Pattern.<Event>begin("start").where(new SimpleCondition<Event>() {
});
```
</figure>
Will generate the following matches for an input sequence: C D A1 A2 A3 D A4 B. With combinations enabled: {C A1 B}, {C A1 A2 B}, {C A1 A3 B}, {C A1 A4 B}, {C A1 A2 A3 B}, {C A1 A2 A4 B}, {C A1 A3 A4 B}, {C A1 A2 A3 A4 B}. Without combinations enabled: {C A1 B}, {C A1 A2 B}, {C A1 A2 A3 B}, {C A1 A2 A3 A4 B} |
......@@ -781,7 +781,7 @@ Will generate the following matches for an input sequence: C D A1 A2 A3 D A4 Bwi
| --- | --- |
| **consecutive()** | Works in conjunction with `oneOrMore()` and `times()` and imposes strict contiguity between the matching events, i.e. any non-matching element breaks the match (as in `next()`). If not applied, a relaxed contiguity (as in `followedBy()`) is used. E.g. a pattern like:
<figure class="highlight">
```
Pattern.begin("start").where(_.getName().equals("c"))
......@@ -790,12 +790,12 @@ Pattern.begin("start").where(_.getName().equals("c"))
.followedBy("end1").where(_.getName().equals("b"))
```
</figure>
Will generate the following matches for an input sequence: C D A1 A2 A3 D A4 B. With consecutive applied: {C A1 B}, {C A1 A2 B}, {C A1 A2 A3 B}. Without consecutive applied: {C A1 B}, {C A1 A2 B}, {C A1 A2 A3 B}, {C A1 A2 A3 A4 B} |
| **allowCombinations()** | Works in conjunction with `oneOrMore()` and `times()` and imposes non-deterministic relaxed contiguity between the matching events (as in `followedByAny()`). If not applied, a relaxed contiguity (as in `followedBy()`) is used. E.g. a pattern like:
<figure class="highlight">
```
Pattern.begin("start").where(_.getName().equals("c"))
......@@ -804,7 +804,7 @@ Pattern.begin("start").where(_.getName().equals("c"))
.followedBy("end1").where(_.getName().equals("b"))
```
</figure>
Will generate the following matches for an input sequence: C D A1 A2 A3 D A4 B. With combinations enabled: {C A1 B}, {C A1 A2 B}, {C A1 A3 B}, {C A1 A4 B}, {C A1 A2 A3 B}, {C A1 A2 A4 B}, {C A1 A3 A4 B}, {C A1 A2 A3 A4 B}. Without combinations enabled: {C A1 B}, {C A1 A2 B}, {C A1 A2 A3 B}, {C A1 A2 A3 A4 B} |
......@@ -863,18 +863,18 @@ val start: Pattern[Event, _] = Pattern.begin(
| --- | --- |
| **begin(#name)** | Defines a starting pattern:
<figure class="highlight">
```
Pattern<Event, ?> start = Pattern.<Event>begin("start");
```
</figure>
|
| **begin(#pattern_sequence)** | Defines a starting pattern:
<figure class="highlight">
```
Pattern<Event, ?> start = Pattern.<Event>begin(
......@@ -882,23 +882,23 @@ Pattern<Event, ?> start = Pattern.<Event>begin(
);
```
</figure>
|
| **next(#name)** | Appends a new pattern. A matching event has to directly succeed the previous matching event (strict contiguity):
<figure class="highlight">
```
Pattern<Event, ?> next = start.next("middle");
```
</figure>
|
| **next(#pattern_sequence)** | Appends a new pattern. A sequence of matching events have to directly succeed the previous matching event (strict contiguity):
<figure class="highlight">
```
Pattern<Event, ?> next = start.next(
......@@ -906,23 +906,23 @@ Pattern<Event, ?> next = start.next(
);
```
</figure>
|
| **followedBy(#name)** | Appends a new pattern. Other events can occur between a matching event and the previous matching event (relaxed contiguity):
<figure class="highlight">
```
Pattern<Event, ?> followedBy = start.followedBy("middle");
```
</figure>
|
| **followedBy(#pattern_sequence)** | Appends a new pattern. Other events can occur between a sequence of matching events and the previous matching event (relaxed contiguity):
<figure class="highlight">
```
Pattern<Event, ?> followedBy = start.followedBy(
......@@ -930,23 +930,23 @@ Pattern<Event, ?> followedBy = start.followedBy(
);
```
</figure>
|
| **followedByAny(#name)** | Appends a new pattern. Other events can occur between a matching event and the previous matching event, and alternative matches will be presented for every alternative matching event (non-deterministic relaxed contiguity):
<figure class="highlight">
```
Pattern<Event, ?> followedByAny = start.followedByAny("middle");
```
</figure>
|
| **followedByAny(#pattern_sequence)** | Appends a new pattern. Other events can occur between a sequence of matching events and the previous matching event, and alternative matches will be presented for every alternative sequence of matching events (non-deterministic relaxed contiguity):
<figure class="highlight">
```
Pattern<Event, ?> followedByAny = start.followedByAny(
......@@ -954,40 +954,40 @@ Pattern<Event, ?> followedByAny = start.followedByAny(
);
```
</figure>
|
| **notNext()** | Appends a new negative pattern. A matching (negative) event has to directly succeed the previous matching event (strict contiguity) for the partial match to be discarded:
<figure class="highlight">
```
Pattern<Event, ?> notNext = start.notNext("not");
```
</figure>
|
| **notFollowedBy()** | Appends a new negative pattern. A partial matching event sequence will be discarded even if other events occur between the matching (negative) event and the previous matching event (relaxed contiguity):
<figure class="highlight">
```
Pattern<Event, ?> notFollowedBy = start.notFollowedBy("not");
```
</figure>
|
| **within(time)** | Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event sequence exceeds this time, it is discarded:
<figure class="highlight">
```
pattern.within(Time.seconds(10));
```
</figure>
|
......@@ -995,18 +995,18 @@ pattern.within(Time.seconds(10));
| --- | --- |
| **begin(#name)** | Defines a starting pattern:
<figure class="highlight">
```
val start = Pattern.begin[Event]("start")
```
</figure>
|
| **begin(#pattern_sequence)** | Defines a starting pattern:
<figure class="highlight">
```
val start = Pattern.begin(
......@@ -1014,23 +1014,23 @@ val start = Pattern.begin(
)
```
</figure>
|
| **next(#name)** | Appends a new pattern. A matching event has to directly succeed the previous matching event (strict contiguity):
<figure class="highlight">
```
val next = start.next("middle")
```
</figure>
|
| **next(#pattern_sequence)** | Appends a new pattern. A sequence of matching events have to directly succeed the previous matching event (strict contiguity):
<figure class="highlight">
```
val next = start.next(
......@@ -1038,23 +1038,23 @@ val next = start.next(
)
```
</figure>
|
| **followedBy(#name)** | Appends a new pattern. Other events can occur between a matching event and the previous matching event (relaxed contiguity) :
<figure class="highlight">
```
val followedBy = start.followedBy("middle")
```
</figure>
|
| **followedBy(#pattern_sequence)** | Appends a new pattern. Other events can occur between a sequence of matching events and the previous matching event (relaxed contiguity) :
<figure class="highlight">
```
val followedBy = start.followedBy(
......@@ -1062,23 +1062,23 @@ val followedBy = start.followedBy(
)
```
</figure>
|
| **followedByAny(#name)** | Appends a new pattern. Other events can occur between a matching event and the previous matching event, and alternative matches will be presented for every alternative matching event (non-deterministic relaxed contiguity):
<figure class="highlight">
```
val followedByAny = start.followedByAny("middle")
```
</figure>
|
| **followedByAny(#pattern_sequence)** | Appends a new pattern. Other events can occur between a sequence of matching events and the previous matching event, and alternative matches will be presented for every alternative sequence of matching events (non-deterministic relaxed contiguity):
<figure class="highlight">
```
val followedByAny = start.followedByAny(
......@@ -1086,40 +1086,40 @@ val followedByAny = start.followedByAny(
)
```
</figure>
|
| **notNext()** | Appends a new negative pattern. A matching (negative) event has to directly succeed the previous matching event (strict contiguity) for the partial match to be discarded:
<figure class="highlight">
```
val notNext = start.notNext("not")
```
</figure>
|
| **notFollowedBy()** | Appends a new negative pattern. A partial matching event sequence will be discarded even if other events occur between the matching (negative) event and the previous matching event (relaxed contiguity):
<figure class="highlight">
```
val notFollowedBy = start.notFollowedBy("not")
```
</figure>
|
| **within(time)** | Defines the maximum time interval for an event sequence to match the pattern. If a non-completed event sequence exceeds this time, it is discarded:
<figure class="highlight">
```
pattern.within(Time.seconds(10))
```
</figure>
|
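Putting the combining methods above together, a minimal end-to-end sketch (the `LoginEvent` type, the sample data, and the selected output are assumptions) builds a sequence with `begin`, `next`, `followedBy`, and `within`, and applies it to a stream via `CEP.pattern`:

```scala
import org.apache.flink.cep.scala.CEP
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

// Hypothetical event type used only for this sketch.
case class LoginEvent(user: String, action: String)

object CepSequenceSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val events: DataStream[LoginEvent] = env.fromElements(
      LoginEvent("alice", "fail"),
      LoginEvent("alice", "fail"),
      LoginEvent("alice", "success"))

    // Two failures in a row, eventually followed by a success, all within 10 seconds.
    val pattern: Pattern[LoginEvent, LoginEvent] = Pattern
      .begin[LoginEvent]("first").where(_.action == "fail")
      .next("second").where(_.action == "fail")
      .followedBy("end").where(_.action == "success")
      .within(Time.seconds(10))

    CEP.pattern(events, pattern)
      .select(m => m("end").head.user)   // emit the user of the final matching event
      .print()

    env.execute("cep sketch")
  }
}
```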
......
......@@ -9,7 +9,7 @@ The logic blocks with which the `Graph` API and top-level algorithms are assembl
| degree.annotate.directed.
**VertexInDegree** | Annotate vertices of a [directed graph](#graph-representation) with the in-degree.
<figure class="highlight">
```
DataSet<Vertex<K, LongValue>> inDegree = graph
......@@ -17,7 +17,7 @@ DataSet<Vertex<K, LongValue>> inDegree = graph
.setIncludeZeroDegreeVertices(true));
```
</figure>
Optional configuration:
......@@ -29,7 +29,7 @@ Optional configuration:
| degree.annotate.directed.
**VertexOutDegree** | Annotate vertices of a [directed graph](#graph-representation) with the out-degree.
<figure class="highlight">
```
DataSet<Vertex<K, LongValue>> outDegree = graph
......@@ -37,7 +37,7 @@ DataSet<Vertex<K, LongValue>> outDegree = graph
.setIncludeZeroDegreeVertices(true));
```
</figure>
Optional configuration:
......@@ -49,7 +49,7 @@ Optional configuration:
| degree.annotate.directed.
**VertexDegrees** | Annotate vertices of a [directed graph](#graph-representation) with the degree, out-degree, and in-degree.
<figure class="highlight">
```
DataSet<Vertex<K, Tuple2<LongValue, LongValue>>> degrees = graph
......@@ -57,7 +57,7 @@ DataSet<Vertex<K, Tuple2<LongValue, LongValue>>> degrees = gra
.setIncludeZeroDegreeVertices(true));
```
</figure>
Optional configuration:
......@@ -69,14 +69,14 @@ Optional configuration:
| degree.annotate.directed.
**EdgeSourceDegrees** | Annotate edges of a [directed graph](#graph-representation) with the degree, out-degree, and in-degree of the source ID.
<figure class="highlight">
```
DataSet<Edge<K, Tuple2<EV, Degrees>>> sourceDegrees = graph
.run(new EdgeSourceDegrees());
```
</figure>
Optional configuration:
......@@ -86,14 +86,14 @@ Optional configuration:
| degree.annotate.directed.
**EdgeTargetDegrees** | Annotate edges of a [directed graph](#graph-representation) with the degree, out-degree, and in-degree of the target ID.
<figure class="highlight">
```
DataSet<Edge<K, Tuple2<EV, Degrees>>> targetDegrees = graph
.run(new EdgeTargetDegrees());
```
</figure>
Optional configuration:
......@@ -103,14 +103,14 @@ Optional configuration:
| degree.annotate.directed.
**EdgeDegreesPair** | Annotate edges of a [directed graph](#graph-representation) with the degree, out-degree, and in-degree of both the source and target vertices.
<figure class="highlight">
```
DataSet<Edge<K, Tuple2<EV, Degrees>>> degrees = graph
.run(new EdgeDegreesPair());
```
</figure>
Optional configuration:
......@@ -120,7 +120,7 @@ Optional configuration:
| degree.annotate.undirected.
**VertexDegree** | Annotate vertices of an [undirected graph](#graph-representation) with the degree.
<figure class="highlight">
```
DataSet<Vertex<K, LongValue>> degree = graph
......@@ -129,7 +129,7 @@ DataSet<Vertex<K, LongValue>> degree = graph
.setReduceOnTargetId(true));
```
</figure>
Optional configuration:
......@@ -143,7 +143,7 @@ Optional configuration:
| degree.annotate.undirected.
**EdgeSourceDegree** | Annotate edges of an [undirected graph](#graph-representation) with degree of the source ID.
<figure class="highlight">
```
DataSet<Edge<K, Tuple2<EV, LongValue>>> sourceDegree = graph
......@@ -151,7 +151,7 @@ DataSet<Edge<K, Tuple2<EV, LongValue>>> sourceDegree = graph
.setReduceOnTargetId(true));
```
</figure>
Optional configuration:
......@@ -163,7 +163,7 @@ Optional configuration:
| degree.annotate.undirected.
**EdgeTargetDegree** | Annotate edges of an [undirected graph](#graph-representation) with degree of the target ID.
<figure class="highlight">
```
DataSet<Edge<K, Tuple2<EV, LongValue>>> targetDegree = graph
......@@ -171,7 +171,7 @@ DataSet<Edge<K, Tuple2<EV, LongValue>>> targetDegree = graph
.setReduceOnSourceId(true));
```
</figure>
Optional configuration:
......@@ -183,7 +183,7 @@ Optional configuration:
| degree.annotate.undirected.
**EdgeDegreePair** | Annotate edges of an [undirected graph](#graph-representation) with the degree of both the source and target vertices.
<figure class="highlight">
```
DataSet<Edge<K, Tuple3<EV, LongValue, LongValue>>> pairDegree = graph
......@@ -191,7 +191,7 @@ DataSet<Edge<K, Tuple3<EV, LongValue, LongValue>>> pairDegree
.setReduceOnTargetId(true));
```
</figure>
Optional configuration:
......@@ -203,7 +203,7 @@ Optional configuration:
| degree.filter.undirected.
**MaximumDegree** | Filter an [undirected graph](#graph-representation) by maximum degree.
<figure class="highlight">
```
Graph<K, VV, EV> filteredGraph = graph
......@@ -212,7 +212,7 @@ Graph<K, VV, EV> filteredGraph = graph
.setReduceOnTargetId(true));
```
</figure>
Optional configuration:
......@@ -226,13 +226,13 @@ Optional configuration:
| simple.directed.
**Simplify** | Remove self-loops and duplicate edges from a [directed graph](#graph-representation).
<figure class="highlight">
```
graph.run(new Simplify());
```
</figure>
Optional configuration:
......@@ -242,13 +242,13 @@ Optional configuration:
| simple.undirected.
**Simplify** | Add symmetric edges and remove self-loops and duplicate edges from an [undirected graph](#graph-representation).
<figure class="highlight">
```
graph.run(new Simplify());
```
</figure>
Optional configuration:
......@@ -258,13 +258,13 @@ Optional configuration:
| translate.
**TranslateGraphIds** | Translate vertex and edge IDs using the given `TranslateFunction`.
<figure class="highlight">
```
graph.run(new TranslateGraphIds(new LongValueToStringValue()));
```
</figure>
Required configuration:
......@@ -278,13 +278,13 @@ Optional configuration:
| translate.
**TranslateVertexValues** | Translate vertex values using the given `TranslateFunction`.
<figure class="highlight">
```
graph.run(new TranslateVertexValues(new LongValueAddOffset(vertexCount)));
```
</figure>
Required configuration:
......@@ -298,13 +298,13 @@ Optional configuration:
| translate.
**TranslateEdgeValues** | Translate edge values using the given `TranslateFunction`.
<figure class="highlight">
```
graph.run(new TranslateEdgeValues(new Nullify()));
```
</figure>
Required configuration:
......