doujutun3207 / flink
Commit a1fef27b
Authored Aug 02, 2016 by twalthr

[FLINK-4180] [FLINK-4181] [table] Ensure examples consistency

Parent: 123c637e

Showing 9 changed files with 165 additions and 73 deletions (+165 −73)
flink-libraries/flink-table/src/main/java/org/apache/flink/examples/java/WordCountSQL.java  (+40 −25)
flink-libraries/flink-table/src/main/java/org/apache/flink/examples/java/WordCountTable.java  (+41 −27)
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/StreamSQLExample.scala  (+18 −4)
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/StreamTableExample.scala  (+16 −2)
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/TPCHQuery3Table.scala  (+4 −3)
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/WordCountSQL.scala  (+21 −2)
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/WordCountTable.scala  (+19 −4)
flink-libraries/flink-table/src/test/java/org/apache/flink/api/java/batch/table/AggregationsITCase.java  (+3 −3)
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsITCase.scala  (+3 −3)
flink-libraries/flink-table/src/main/java/org/apache/flink/examples/java/JavaSQLExample.java → flink-libraries/flink-table/src/main/java/org/apache/flink/examples/java/WordCountSQL.java

@@ -24,35 +24,25 @@ import org.apache.flink.api.java.table.BatchTableEnvironment;
 import org.apache.flink.api.table.TableEnvironment;

 /**
- * Simple example that shows how the Batch SQL used in Java.
+ * Simple example that shows how the Batch SQL API is used in Java.
+ *
+ * This example shows how to:
+ *  - Convert DataSets to Tables
+ *  - Register a Table under a name
+ *  - Run a SQL query on the registered Table
+ *
  */
-public class JavaSQLExample {
-
-	public static class WC {
-		public String word;
-		public long frequency;
-
-		// Public constructor to make it a Flink POJO
-		public WC() {}
-
-		public WC(String word, long frequency) {
-			this.word = word;
-			this.frequency = frequency;
-		}
-
-		@Override
-		public String toString() {
-			return "WC " + word + " " + frequency;
-		}
-	}
+public class WordCountSQL {
+
+	// *************************************************************************
+	//     PROGRAM
+	// *************************************************************************

 	public static void main(String[] args) throws Exception {

 		// set up execution environment
 		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-		BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+		BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

 		DataSet<WC> input = env.fromElements(
 			new WC("Hello", 1),
@@ -60,13 +50,38 @@ public class JavaSQLExample {
 			new WC("Hello", 1));

 		// register the DataSet as table "WordCount"
-		tableEnv.registerDataSet("WordCount", input, "word, frequency");
+		tEnv.registerDataSet("WordCount", input, "word, frequency");

 		// run a SQL query on the Table and retrieve the result as a new Table
-		Table table = tableEnv.sql(
+		Table table = tEnv.sql(
 			"SELECT word, SUM(frequency) as frequency FROM WordCount GROUP BY word");

-		DataSet<WC> result = tableEnv.toDataSet(table, WC.class);
+		DataSet<WC> result = tEnv.toDataSet(table, WC.class);

 		result.print();
 	}

+	// *************************************************************************
+	//     USER DATA TYPES
+	// *************************************************************************
+
+	public static class WC {
+		public String word;
+		public long frequency;
+
+		// public constructor to make it a Flink POJO
+		public WC() {}
+
+		public WC(String word, long frequency) {
+			this.word = word;
+			this.frequency = frequency;
+		}
+
+		@Override
+		public String toString() {
+			return "WC " + word + " " + frequency;
+		}
+	}
 }
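The renamed WordCountSQL example groups rows by word and sums their frequencies via SQL. The semantics of the query `SELECT word, SUM(frequency) as frequency FROM WordCount GROUP BY word` can be sketched without a Flink dependency using plain Java streams (the class and method names below are illustrative only, not part of this commit):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSqlSketch {

    // Mirrors the WC POJO from the example: a (word, frequency) pair.
    public record WC(String word, long frequency) {}

    // Plain-Java analogue of:
    //   SELECT word, SUM(frequency) as frequency FROM WordCount GROUP BY word
    public static Map<String, Long> wordCount(List<WC> input) {
        return input.stream()
            .collect(Collectors.groupingBy(WC::word, Collectors.summingLong(WC::frequency)));
    }

    public static void main(String[] args) {
        // Same three rows as the example: two "Hello", one "Ciao".
        List<WC> input = List.of(new WC("Hello", 1), new WC("Ciao", 1), new WC("Hello", 1));
        System.out.println(wordCount(input));
    }
}
```

This is only a semantic sketch; the Flink version executes the same aggregation as a distributed DataSet program.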
flink-libraries/flink-table/src/main/java/org/apache/flink/examples/java/JavaTableExample.java → flink-libraries/flink-table/src/main/java/org/apache/flink/examples/java/WordCountTable.java

@@ -24,48 +24,62 @@ import org.apache.flink.api.java.table.BatchTableEnvironment;
 import org.apache.flink.api.table.TableEnvironment;

 /**
- * Very simple example that shows how the Java Table API can be used.
+ * Simple example for demonstrating the use of the Table API for a Word Count in Java.
+ *
+ * This example shows how to:
+ *  - Convert DataSets to Tables
+ *  - Apply group, aggregate, select, and filter operations
+ *
  */
-public class JavaTableExample {
-
-	public static class WC {
-		public String word;
-		public long count;
-
-		// Public constructor to make it a Flink POJO
-		public WC() {}
-
-		public WC(String word, long count) {
-			this.word = word;
-			this.count = count;
-		}
-
-		@Override
-		public String toString() {
-			return "WC " + word + " " + count;
-		}
-	}
+public class WordCountTable {
+
+	// *************************************************************************
+	//     PROGRAM
+	// *************************************************************************

 	public static void main(String[] args) throws Exception {
 		ExecutionEnvironment env = ExecutionEnvironment.createCollectionsEnvironment();
-		BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+		BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

 		DataSet<WC> input = env.fromElements(
 			new WC("Hello", 1),
 			new WC("Ciao", 1),
 			new WC("Hello", 1));

-		Table table = tableEnv.fromDataSet(input);
+		Table table = tEnv.fromDataSet(input);

 		Table filtered = table
 			.groupBy("word")
-			.select("word.count as count, word")
-			.filter("count = 2");
+			.select("word, frequency.sum as frequency")
+			.filter("frequency = 2");

-		DataSet<WC> result = tableEnv.toDataSet(filtered, WC.class);
+		DataSet<WC> result = tEnv.toDataSet(filtered, WC.class);

 		result.print();
 	}

+	// *************************************************************************
+	//     USER DATA TYPES
+	// *************************************************************************
+
+	public static class WC {
+		public String word;
+		public long frequency;
+
+		// public constructor to make it a Flink POJO
+		public WC() {}
+
+		public WC(String word, long frequency) {
+			this.word = word;
+			this.frequency = frequency;
+		}
+
+		@Override
+		public String toString() {
+			return "WC " + word + " " + frequency;
+		}
+	}
 }
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/StreamSQLExample.scala

@@ -23,11 +23,19 @@ import org.apache.flink.api.table.TableEnvironment
 import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}

 /**
- * Simple example for demonstrating the use of SQL on Stream Table.
+ * Simple example for demonstrating the use of SQL on a Stream Table.
+ *
+ * This example shows how to:
+ *  - Convert DataStreams to Tables
+ *  - Register a Table under a name
+ *  - Run a StreamSQL query on the registered Table
+ *
  */
 object StreamSQLExample {

-  case class Order(user: Long, product: String, amount: Int)
+  // *************************************************************************
+  //     PROGRAM
+  // *************************************************************************

   def main(args: Array[String]): Unit = {
@@ -45,11 +53,11 @@ object StreamSQLExample {
       Order(2L, "rubber", 3),
       Order(4L, "beer", 1)))

-    // register the DataStream under the name "OrderA" and "OrderB"
+    // register the DataStreams under the name "OrderA" and "OrderB"
     tEnv.registerDataStream("OrderA", orderA, 'user, 'product, 'amount)
     tEnv.registerDataStream("OrderB", orderB, 'user, 'product, 'amount)

-    // Union two tables
+    // union the two tables
     val result = tEnv.sql(
       "SELECT STREAM * FROM OrderA WHERE amount > 2 UNION ALL " +
         "SELECT STREAM * FROM OrderB WHERE amount < 2")
@@ -59,4 +67,10 @@ object StreamSQLExample {

     env.execute()
   }

+  // *************************************************************************
+  //     USER DATA TYPES
+  // *************************************************************************
+
+  case class Order(user: Long, product: String, amount: Int)
 }
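The StreamSQL query above unions two filtered selections: orders from OrderA with amount > 2 and orders from OrderB with amount < 2, keeping duplicates (UNION ALL). A minimal non-Flink sketch of that semantics, with illustrative names not taken from the commit:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamSqlUnionSketch {

    // Mirrors the Order case class: (user, product, amount).
    public record Order(long user, String product, int amount) {}

    // Plain-Java analogue of:
    //   SELECT STREAM * FROM OrderA WHERE amount > 2
    //   UNION ALL
    //   SELECT STREAM * FROM OrderB WHERE amount < 2
    public static List<Order> unionFiltered(List<Order> orderA, List<Order> orderB) {
        return Stream.concat(
                orderA.stream().filter(o -> o.amount() > 2),
                orderB.stream().filter(o -> o.amount() < 2))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Order> a = List.of(new Order(1L, "beer", 3), new Order(2L, "rubber", 1));
        List<Order> b = List.of(new Order(2L, "rubber", 3), new Order(4L, "beer", 1));
        // Keeps (1, beer, 3) from A and (4, beer, 1) from B.
        System.out.println(unionFiltered(a, b));
    }
}
```

UNION ALL (unlike UNION) performs no deduplication, which is why a simple concatenation of the two filtered streams is sufficient here.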
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/StreamTableExample.scala

@@ -23,11 +23,18 @@ import org.apache.flink.api.table.TableEnvironment
 import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}

 /**
- * Simple example for demonstrating the use of Table API on Stream Table.
+ * Simple example for demonstrating the use of Table API on a Stream Table.
+ *
+ * This example shows how to:
+ *  - Convert DataStreams to Tables
+ *  - Apply union, select, and filter operations
+ *
  */
 object StreamTableExample {

-  case class Order(user: Long, product: String, amount: Int)
+  // *************************************************************************
+  //     PROGRAM
+  // *************************************************************************

   def main(args: Array[String]): Unit = {
@@ -45,6 +52,7 @@ object StreamTableExample {
       Order(2L, "rubber", 3),
       Order(4L, "beer", 1))).toTable(tEnv)

+    // union the two tables
     val result: DataStream[Order] = orderA.unionAll(orderB)
       .select('user, 'product, 'amount)
       .where('amount > 2)
@@ -55,4 +63,10 @@ object StreamTableExample {

     env.execute()
   }

+  // *************************************************************************
+  //     USER DATA TYPES
+  // *************************************************************************
+
+  case class Order(user: Long, product: String, amount: Int)
 }
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/TPCHQuery3Table.scala

@@ -54,9 +54,6 @@ import org.apache.flink.api.table.TableEnvironment
  *        o_orderdate;
  * }}}
  *
- * Compared to the original TPC-H query this version does not sort the result by revenue
- * and orderdate.
- *
  * Input files are plain text CSV files using the pipe character ('|') as field separator
  * as generated by the TPC-H data generator which is available at
  * [http://www.tpc.org/tpch/](http://www.tpc.org/tpch/).
@@ -73,6 +70,10 @@ import org.apache.flink.api.table.TableEnvironment
  */
 object TPCHQuery3Table {

+  // *************************************************************************
+  //     PROGRAM
+  // *************************************************************************
+
   def main(args: Array[String]) {
     if (!parseParameters(args)) {
       return
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/WordCountSQL.scala

@@ -22,10 +22,19 @@ import org.apache.flink.api.scala.table._
 import org.apache.flink.api.table.TableEnvironment

 /**
- * Simple example that shows how the Batch SQL used in Scala.
+ * Simple example that shows how the Batch SQL API is used in Scala.
+ *
+ * This example shows how to:
+ *  - Convert DataSets to Tables
+ *  - Register a Table under a name
+ *  - Run a SQL query on the registered Table
+ *
  */
 object WordCountSQL {

-  case class WC(word: String, count: Int)
+  // *************************************************************************
+  //     PROGRAM
+  // *************************************************************************

   def main(args: Array[String]): Unit = {
@@ -34,10 +43,20 @@ object WordCountSQL {
     val tEnv = TableEnvironment.getTableEnvironment(env)

     val input = env.fromElements(WC("hello", 1), WC("hello", 1), WC("ciao", 1))

+    // register the DataSet as table "WordCount"
     tEnv.registerDataSet("WordCount", input, 'word, 'frequency)

+    // run a SQL query on the Table and retrieve the result as a new Table
     val table = tEnv.sql("SELECT word, SUM(frequency) FROM WordCount GROUP BY word")

     table.toDataSet[WC].print()
   }

+  // *************************************************************************
+  //     USER DATA TYPES
+  // *************************************************************************
+
+  case class WC(word: String, frequency: Long)
 }
flink-libraries/flink-table/src/main/scala/org/apache/flink/examples/scala/WordCountTable.scala

@@ -23,11 +23,18 @@ import org.apache.flink.api.scala.table._
 import org.apache.flink.api.table.TableEnvironment

 /**
- * Simple example for demonstrating the use of the Table API for a Word Count.
- */
+ * Simple example for demonstrating the use of the Table API for a Word Count in Scala.
+ *
+ * This example shows how to:
+ *  - Convert DataSets to Tables
+ *  - Apply group, aggregate, select, and filter operations
+ *
+ */
 object WordCountTable {

-  case class WC(word: String, count: Int)
+  // *************************************************************************
+  //     PROGRAM
+  // *************************************************************************

   def main(args: Array[String]): Unit = {
@@ -39,9 +46,17 @@ object WordCountTable {
     val expr = input.toTable(tEnv)
     val result = expr
       .groupBy('word)
-      .select('word, 'count.sum as 'count)
+      .select('word, 'frequency.sum as 'frequency)
+      .filter('frequency === 2)
       .toDataSet[WC]

     result.print()
   }

+  // *************************************************************************
+  //     USER DATA TYPES
+  // *************************************************************************
+
+  case class WC(word: String, frequency: Long)
 }
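The Table API chain in WordCountTable groups by word, sums the frequencies, and keeps only words whose total equals 2. The same group/aggregate/filter pipeline can be sketched with plain Java streams (names below are illustrative, not from the commit):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountTableSketch {

    // Mirrors the WC case class: a (word, frequency) pair.
    public record WC(String word, long frequency) {}

    // Plain-Java analogue of the Table API chain:
    //   .groupBy('word).select('word, 'frequency.sum as 'frequency).filter('frequency === 2)
    public static List<WC> frequencyTwo(List<WC> input) {
        // 1. group by word, summing frequencies
        Map<String, Long> sums = input.stream()
            .collect(Collectors.groupingBy(WC::word, Collectors.summingLong(WC::frequency)));
        // 2. keep only totals equal to 2, re-wrapping as WC rows
        return sums.entrySet().stream()
            .filter(e -> e.getValue() == 2L)
            .map(e -> new WC(e.getKey(), e.getValue()))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<WC> input = List.of(new WC("hello", 1), new WC("hello", 1), new WC("ciao", 1));
        // Only "hello" reaches a total of 2.
        System.out.println(frequencyTwo(input));
    }
}
```

Note that, as in the example, the filter applies to the aggregated column, so it corresponds to a SQL HAVING clause rather than a WHERE clause.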
flink-libraries/flink-table/src/test/java/org/apache/flink/api/java/batch/table/AggregationsITCase.java

@@ -33,7 +33,7 @@ import org.apache.flink.test.util.MultipleProgramsTestBase;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
-import org.apache.flink.examples.java.JavaTableExample.WC;
+import org.apache.flink.examples.java.WordCountTable.WC;

 import java.util.List;
@@ -196,8 +196,8 @@ public class AggregationsITCase extends MultipleProgramsTestBase {
 		Table filtered = table
 			.groupBy("word")
-			.select("word.count as count, word")
-			.filter("count = 2");
+			.select("word.frequency as frequency, word")
+			.filter("frequency = 2");

 		List<String> result = tableEnv.toDataSet(filtered, WC.class)
 			.map(new MapFunction<WC, String>() {
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsITCase.scala

@@ -195,11 +195,11 @@ class AggregationsITCase(mode: TestExecutionMode) extends MultipleProgramsTestBa
     val expr = input.toTable(tEnv)
     val result = expr
       .groupBy('word)
-      .select('word, 'count.sum as 'count)
-      .filter('count === 2)
+      .select('word, 'frequency.sum as 'frequency)
+      .filter('frequency === 2)
       .toDataSet[MyWC]

-    val mappedResult = result.map(w => (w.word, w.count * 10)).collect()
+    val mappedResult = result.map(w => (w.word, w.frequency * 10)).collect()

     val expected = "(hello,20)\n" + "(hola,20)"
     TestBaseUtils.compareResultAsText(mappedResult.asJava, expected)
   }