Commit 0385651f
Authored September 16, 2014 by Aljoscha Krettek

[scala] Add Scalastyle, use scalastyle-config.xml from Spark

Parent: fd280981
Showing 32 changed files with 627 additions and 386 deletions.
.gitignore (+1, -0)
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/ConnectedComponents.scala (+7, -4)
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/EnumTrianglesBasic.scala (+108, -106)
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/PageRankBasic.scala (+3, -1)
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/TransitiveClosureNaive.scala (+90, -87)
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/misc/PiEstimation.scala (+1, -1)
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/ml/LinearRegression.scala (+102, -102)
flink-scala/src/main/scala/org/apache/flink/api/scala/DataSet.scala (+28, -7)
flink-scala/src/main/scala/org/apache/flink/api/scala/ExecutionEnvironment.scala (+27, -4)
flink-scala/src/main/scala/org/apache/flink/api/scala/GroupedDataSet.scala (+1, -1)
flink-scala/src/main/scala/org/apache/flink/api/scala/coGroupDataSet.scala (+2, -2)
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/Counter.scala (+2, -4)
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/MacroContextHolder.scala (+4, -4)
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TreeGen.scala (+4, -5)
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeAnalyzer.scala (+13, -8)
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeDescriptors.scala (+11, -6)
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeInformationGen.scala (+12, -9)
flink-scala/src/main/scala/org/apache/flink/api/scala/crossDataSet.scala (+1, -1)
flink-scala/src/main/scala/org/apache/flink/api/scala/joinDataSet.scala (+1, -1)
flink-scala/src/main/scala/org/apache/flink/api/scala/package.scala (+2, -2)
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/ScalaTupleComparator.scala (+1, -2)
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/ScalaTupleSerializer.scala (+1, -2)
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/ScalaTupleTypeInfo.scala (+2, -3)
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/TypeUtils.scala (+1, -2)
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala (+3, -1)
flink-scala/src/test/scala/org/apache/flink/api/scala/ScalaAPICompletenessTest.scala (+11, -10)
flink-scala/src/test/scala/org/apache/flink/api/scala/functions/SemanticPropertiesTranslationTest.scala (+4, -4)
flink-scala/src/test/scala/org/apache/flink/api/scala/operators/translation/DeltaIterationTranslationTest.scala (+7, -3)
flink-scala/src/test/scala/org/apache/flink/api/scala/operators/translation/ReduceTranslationTest.scala (+3, -1)
flink-scala/src/test/scala/org/apache/flink/api/scala/runtime/TupleSerializerTest.scala (+2, -2)
pom.xml (+26, -1)
tools/maven/scalastyle-config.xml (new file, +146, -0)
.gitignore

```diff
 .cache
+scalastyle-output.xml
 .classpath
 .idea
 .metadata
```
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/ConnectedComponents.scala

```diff
@@ -71,7 +71,8 @@ object ConnectedComponents {
     // assign the initial components (equal to the vertex id)
     val vertices = getVerticesDataSet(env).map { id => (id, id) }

-    // undirected edges by emitting for each input edge the input edges itself and an inverted version
+    // undirected edges by emitting for each input edge the input edges itself and an inverted
+    // version
     val edges = getEdgesDataSet(env).flatMap { edge => Seq(edge, (edge._2, edge._1)) }

     // open a delta iteration
@@ -106,20 +107,22 @@ object ConnectedComponents {
   private def parseParameters(args: Array[String]): Boolean = {
     if (args.length > 0) {
       fileOutput = true
       if (args.length == 4) {
         verticesPath = args(0)
         edgesPath = args(1)
         outputPath = args(2)
         maxIterations = args(3).toInt
       } else {
-        System.err.println("Usage: ConnectedComponents <vertices path> <edges path> <result path> <max number of iterations>")
+        System.err.println("Usage: ConnectedComponents <vertices path> <edges path> <result path>" +
+          " <max number of iterations>")
         false
       }
     } else {
       System.out.println("Executing Connected Components example with built-in default data.")
       System.out.println("  Provide parameters to read input data from a file.")
-      System.out.println("  Usage: ConnectedComponents <vertices path> <edges path> <result path> <max number of iterations>")
+      System.out.println("  Usage: ConnectedComponents <vertices path> <edges path> <result path>" +
+        " <max number of iterations>")
     }
     true
   }
```
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/EnumTrianglesBasic.scala

The hunk (@@ -65,111 +65,113 @@) replaces the body of the object with a re-indented copy; apart from whitespace, the visible change is the rewrapped `TriadBuilder` scaladoc. The new version:

```scala
object EnumTrianglesBasic {

  def main(args: Array[String]) {
    if (!parseParameters(args)) {
      return
    }

    // set up execution environment
    val env = ExecutionEnvironment.getExecutionEnvironment

    // read input data
    val edges = getEdgeDataSet(env)

    // project edges by vertex id
    val edgesById = edges map(e => if (e.v1 < e.v2) e else Edge(e.v2, e.v1))

    val triangles = edgesById
      // build triads
      .groupBy("v1").sortGroup("v2", Order.ASCENDING).reduceGroup(new TriadBuilder())
      // filter triads
      .join(edgesById).where("v2", "v3").equalTo("v1", "v2") { (t, _) => t }

    // emit result
    if (fileOutput) {
      triangles.writeAsCsv(outputPath, "\n", ",")
    } else {
      triangles.print()
    }

    // execute program
    env.execute("TriangleEnumeration Example")
  }

  // *************************************************************************
  //     USER DATA TYPES
  // *************************************************************************

  case class Edge(v1: Int, v2: Int) extends Serializable
  case class Triad(v1: Int, v2: Int, v3: Int) extends Serializable

  // *************************************************************************
  //     USER FUNCTIONS
  // *************************************************************************

  /**
   * Builds triads (triples of vertices) from pairs of edges that share a vertex. The first vertex
   * of a triad is the shared vertex, the second and third vertex are ordered by vertexId. Assumes
   * that input edges share the first vertex and are in ascending order of the second vertex.
   */
  class TriadBuilder extends GroupReduceFunction[Edge, Triad] {

    val vertices = mutable.MutableList[Integer]()

    override def reduce(edges: java.lang.Iterable[Edge], out: Collector[Triad]) = {
      // clear vertex list
      vertices.clear()

      // build and emit triads
      for (e <- edges.asScala) {
        // combine vertex with all previously read vertices
        for (v <- vertices) {
          out.collect(Triad(e.v1, v, e.v2))
        }
        vertices += e.v2
      }
    }
  }

  // *************************************************************************
  //     UTIL METHODS
  // *************************************************************************

  private def parseParameters(args: Array[String]): Boolean = {
    if (args.length > 0) {
      fileOutput = true
      if (args.length == 2) {
        edgePath = args(0)
        outputPath = args(1)
      } else {
        System.err.println("Usage: EnumTriangleBasic <edge path> <result path>")
        false
      }
    } else {
      System.out.println("Executing Enum Triangles Basic example with built-in default data.")
      System.out.println("  Provide parameters to read input data from files.")
      System.out.println("  See the documentation for the correct format of input files.")
      System.out.println("  Usage: EnumTriangleBasic <edge path> <result path>")
    }
    true
  }

  private def getEdgeDataSet(env: ExecutionEnvironment): DataSet[Edge] = {
    if (fileOutput) {
      env.readCsvFile[Edge](edgePath, fieldDelimiter = ' ', includedFields = Array(0, 1))
    } else {
      val edges = EnumTrianglesData.EDGES.map {
        case Array(v1, v2) => new Edge(v1.asInstanceOf[Int], v2.asInstanceOf[Int])
      }
      env.fromCollection(edges)
    }
  }

  private var fileOutput: Boolean = false
  private var edgePath: String = null
  private var outputPath: String = null
}
```
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/PageRankBasic.scala

```diff
@@ -86,7 +86,9 @@ object PageRankBasic {
       // initialize lists
       .map(e => AdjacencyList(e.sourceId, Array(e.targetId)))
       // concatenate lists
-      .groupBy("sourceId").reduce((l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds))
+      .groupBy("sourceId").reduce {
+        (l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds)
+      }

     // start iteration
     val finalRanks = pagesWithRanks.iterateWithTermination(maxIterations) {
```
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/graph/TransitiveClosureNaive.scala

The hunk (@@ -23,90 +23,93 @@) re-indents the whole object; the visible changes are braces added around the output branches, the long usage strings split to fit the line-length limit, and a trailing newline (the old file ended without one). The new version:

```scala
object TransitiveClosureNaive {

  def main(args: Array[String]): Unit = {
    if (!parseParameters(args)) {
      return
    }

    val env = ExecutionEnvironment.getExecutionEnvironment

    val edges = getEdgesDataSet(env)

    val paths = edges.iterateWithTermination(maxIterations) { prevPaths: DataSet[(Long, Long)] =>

      val nextPaths = prevPaths
        .join(edges)
        .where(1).equalTo(0) {
          (left, right) => (left._1, right._2)
        }
        .union(prevPaths)
        .groupBy(0, 1)
        .reduce((l, r) => l)

      val terminate = prevPaths
        .coGroup(nextPaths)
        .where(0).equalTo(0) {
          (prev, next, out: Collector[(Long, Long)]) => {
            val prevPaths = prev.toSet
            for (n <- next)
              if (!prevPaths.contains(n)) out.collect(n)
          }
        }
      (nextPaths, terminate)
    }

    if (fileOutput) {
      paths.writeAsCsv(outputPath, "\n", " ")
    } else {
      paths.print()
    }

    env.execute("Scala Transitive Closure Example")
  }

  private var fileOutput: Boolean = false
  private var edgesPath: String = null
  private var outputPath: String = null
  private var maxIterations: Int = 10

  private def parseParameters(programArguments: Array[String]): Boolean = {
    if (programArguments.length > 0) {
      fileOutput = true
      if (programArguments.length == 3) {
        edgesPath = programArguments(0)
        outputPath = programArguments(1)
        maxIterations = Integer.parseInt(programArguments(2))
      } else {
        System.err.println("Usage: TransitiveClosure <edges path> <result path> <max number of " +
          "iterations>")
        return false
      }
    } else {
      System.out.println("Executing TransitiveClosure example with default parameters and " +
        "built-in default data.")
      System.out.println("  Provide parameters to read input data from files.")
      System.out.println("  See the documentation for the correct format of input files.")
      System.out.println("  Usage: TransitiveClosure <edges path> <result path> <max number of " +
        "iterations>")
    }
    true
  }

  private def getEdgesDataSet(env: ExecutionEnvironment): DataSet[(Long, Long)] = {
    if (fileOutput) {
      env.readCsvFile[(Long, Long)](edgesPath, fieldDelimiter = ' ', includedFields = Array(0, 1))
        .map { x => (x._1, x._2) }
    } else {
      val edgeData = ConnectedComponentsData.EDGES map {
        case Array(x, y) => (x.asInstanceOf[Long], y.asInstanceOf[Long])
      }
      env.fromCollection(edgeData)
    }
  }
}
```
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/misc/PiEstimation.scala

```diff
@@ -37,7 +37,7 @@ object PiEstimation {
       val y = Math.random()
       if (x * x + y * y < 1) 1L else 0L
     }
-    .reduce(_+_)
+    .reduce(_ + _)

   // ratio of samples in upper right quadrant vs total samples gives surface of upper
   // right quadrant, times 4 gives surface of whole unit circle, i.e. PI
```
flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/ml/LinearRegression.scala

Both hunks (@@ -62,101 +62,101 @@ and @@ -164,30 +164,30 @@) re-indent the object; apart from whitespace, the visible change is that the long console messages are split across lines. The new version:

```scala
object LinearRegression {

  def main(args: Array[String]) {
    if (!parseParameters(args)) {
      return
    }

    val env = ExecutionEnvironment.getExecutionEnvironment
    val data = getDataSet(env)
    val parameters = getParamsDataSet(env)

    val result = parameters.iterate(numIterations) { currentParameters =>
      val newParameters = data
        .map(new SubUpdate).withBroadcastSet(currentParameters, "parameters")
        .reduce { (p1, p2) =>
          val result = p1._1 + p2._1
          (result, p1._2 + p2._2)
        }
        .map { x => x._1.div(x._2) }
      newParameters
    }

    if (fileOutput) {
      result.writeAsText(outputPath)
    } else {
      result.print()
    }

    env.execute("Scala Linear Regression example")
  }

  /**
   * A simple data sample, x means the input, and y means the target.
   */
  case class Data(var x: Double, var y: Double)

  /**
   * A set of parameters -- theta0, theta1.
   */
  case class Params(theta0: Double, theta1: Double) {
    def div(a: Int): Params = {
      Params(theta0 / a, theta1 / a)
    }

    def + (other: Params) = {
      Params(theta0 + other.theta0, theta1 + other.theta1)
    }
  }

  // *************************************************************************
  //     USER FUNCTIONS
  // *************************************************************************

  /**
   * Compute a single BGD type update for every parameters.
   */
  class SubUpdate extends RichMapFunction[Data, (Params, Int)] {

    private var parameter: Params = null

    /** Reads the parameters from a broadcast variable into a collection. */
    override def open(parameters: Configuration) {
      val parameters = getRuntimeContext.getBroadcastVariable[Params]("parameters").asScala
      parameter = parameters.head
    }

    def map(in: Data): (Params, Int) = {
      val theta0 =
        parameter.theta0 - 0.01 * ((parameter.theta0 + (parameter.theta1 * in.x)) - in.y)
      val theta1 =
        parameter.theta1 - 0.01 * (((parameter.theta0 + (parameter.theta1 * in.x)) - in.y) * in.x)
      (Params(theta0, theta1), 1)
    }
  }

  // *************************************************************************
  //     UTIL METHODS
  // *************************************************************************

  private var fileOutput: Boolean = false
  private var dataPath: String = null
  private var outputPath: String = null
  private var numIterations: Int = 10

  private def parseParameters(programArguments: Array[String]): Boolean = {
    if (programArguments.length > 0) {
      fileOutput = true
      if (programArguments.length == 3) {
        dataPath = programArguments(0)
        outputPath = programArguments(1)
        numIterations = programArguments(2).toInt
      } else {
        System.err.println("Usage: LinearRegression <data path> <result path> <num iterations>")
        false
      }
    } else {
      System.out.println("Executing Linear Regression example with default parameters and " +
        "built-in default data.")
      System.out.println("  Provide parameters to read input data from files.")
      System.out.println("  We provide a data generator to create synthetic input files for this " +
        "program.")
      System.out.println("  Usage: LinearRegression <data path> <result path> <num iterations>")
    }
    true
  }

  private def getDataSet(env: ExecutionEnvironment): DataSet[Data] = {
    if (fileOutput) {
      env.readCsvFile[(Double, Double)](dataPath, fieldDelimiter = ' ', includedFields = Array(0, 1))
        .map { t => new Data(t._1, t._2) }
    } else {
      val data = LinearRegressionData.DATA map {
        case Array(x, y) => Data(x.asInstanceOf[Double], y.asInstanceOf[Double])
      }
      env.fromCollection(data)
    }
  }

  private def getParamsDataSet(env: ExecutionEnvironment): DataSet[Params] = {
    val params = LinearRegressionData.PARAMS map {
      case Array(x, y) => Params(x.asInstanceOf[Double], y.asInstanceOf[Double])
    }
    env.fromCollection(params)
  }
}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/DataSet.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,7 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala

 import org.apache.commons.lang3.Validate
@@ -610,9 +609,9 @@ class DataSet[T: ClassTag](private[flink] val set: JavaDataSet[T]) {
       new Keys.FieldPositionKeys[T](fieldIndices, set.getType, false))
   }

-  //	public UnsortedGrouping<T> groupBy(String... fields) {
-  //		new UnsortedGrouping<T>(this, new Keys.ExpressionKeys<T>(fields, getType()));
-  //	}
+  // public UnsortedGrouping<T> groupBy(String... fields) {
+  //   new UnsortedGrouping<T>(this, new Keys.ExpressionKeys<T>(fields, getType()));
+  // }

   // --------------------------------------------------------------------------------------------
   //  Joining
@@ -807,7 +806,7 @@ class DataSet[T: ClassTag](private[flink] val set: JavaDataSet[T]) {
   /**
    * Creates a new DataSet by performing delta (or workset) iterations using the given step
-   * function. At the beginning `this` DataSet is the solution set and `workset` is  the Workset.
+   * function. At the beginning `this` DataSet is the solution set and `workset` is the Workset.
    * The iteration step function gets the current solution set and workset and must output the
    * delta for the solution set and the workset for the next iteration.
    *
@@ -825,6 +824,28 @@ class DataSet[T: ClassTag](private[flink] val set: JavaDataSet[T]) {
     wrap(result)
   }

+  /**
+   * Creates a new DataSet by performing delta (or workset) iterations using the given step
+   * function. At the beginning `this` DataSet is the solution set and `workset` is the Workset.
+   * The iteration step function gets the current solution set and workset and must output the
+   * delta for the solution set and the workset for the next iteration.
+   *
+   * Note: The syntax of delta iterations are very likely going to change soon.
+   */
+  def iterateDelta[R: ClassTag](workset: DataSet[R], maxIterations: Int, keyFields: Array[String])(
+      stepFunction: (DataSet[T], DataSet[R]) => (DataSet[T], DataSet[R])) = {
+    val fieldIndices = fieldNames2Indices(set.getType, keyFields)
+
+    val key = new FieldPositionKeys[T](fieldIndices, set.getType, false)
+    val iterativeSet = new DeltaIteration[T, R](
+      set.getExecutionEnvironment, set.getType, set, workset.set, key, maxIterations)
+    val (newSolution, newWorkset) = stepFunction(
+      wrap(iterativeSet.getSolutionSet),
+      wrap(iterativeSet.getWorkset))
+    val result = iterativeSet.closeWith(newSolution.set, newWorkset.set)
+    wrap(result)
+  }
+
   // -------------------------------------------------------------------------------------------
   //  Custom Operators
   // -------------------------------------------------------------------------------------------
@@ -919,4 +940,4 @@ class DataSet[T: ClassTag](private[flink] val set: JavaDataSet[T]) {
   def printToErr(): DataSink[T] = {
     output(new PrintingOutputFormat[T](true))
   }
-}
\ No newline at end of file
+}
```
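The new overload keys the delta iteration by case-class field names rather than tuple positions, which is why it resolves the names through `fieldNames2Indices` before building the `FieldPositionKeys`. A minimal, hypothetical usage sketch against this Scala API (the `Vertex` type and the no-op step function are illustrative, not part of the commit):

```scala
import org.apache.flink.api.scala._

object DeltaIterationSketch {
  // Illustrative record type; "id" is the key field used to match solution-set entries.
  case class Vertex(id: Long, component: Long)

  def main(args: Array[String]) {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val initialSolution = env.fromElements(Vertex(1L, 1L), Vertex(2L, 2L))
    val initialWorkset = env.fromElements(Vertex(2L, 1L))

    // The step function receives the current solution set and workset and must
    // return the solution-set delta plus the workset for the next superstep.
    val result = initialSolution.iterateDelta(initialWorkset, 10, Array("id")) {
      (solution, workset) =>
        val delta = workset // trivial step: re-emit the workset as the delta
        (delta, delta)
    }

    result.print()
    env.execute("Delta iteration sketch")
  }
}
```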
flink-scala/src/main/scala/org/apache/flink/api/scala/ExecutionEnvironment.scala

```diff
@@ -22,7 +22,7 @@ import java.util.UUID
 import org.apache.commons.lang3.Validate
 import org.apache.flink.api.common.JobExecutionResult
 import org.apache.flink.api.java.io._
-import org.apache.flink.api.java.typeutils.{TupleTypeInfoBase, BasicTypeInfo}
+import org.apache.flink.api.java.typeutils.{ValueTypeInfo, TupleTypeInfoBase, BasicTypeInfo}
 import org.apache.flink.api.scala.operators.ScalaCsvInputFormat
 import org.apache.flink.core.fs.Path
@@ -30,7 +30,7 @@ import org.apache.flink.api.java.{ExecutionEnvironment => JavaEnv}
 import org.apache.flink.api.common.io.{InputFormat, FileInputFormat}
 import org.apache.flink.api.java.operators.DataSource
-import org.apache.flink.types.TypeInformation
+import org.apache.flink.types.{StringValue, TypeInformation}
 import org.apache.flink.util.{NumberSequenceIterator, SplittableIterator}
 import scala.collection.JavaConverters._
@@ -104,6 +104,27 @@ class ExecutionEnvironment(javaEnv: JavaEnv) {
     wrap(source)
   }

+  /**
+   * Creates a DataSet of Strings produced by reading the given file line wise.
+   * This method is similar to [[readTextFile]], but it produces a DataSet with mutable
+   * [[StringValue]] objects, rather than Java Strings. StringValues can be used to tune
+   * implementations to be less object and garbage collection heavy.
+   *
+   * @param filePath The path of the file, as a URI (e.g., "file:///some/local/file" or
+   *                 "hdfs://host:port/file/path").
+   * @param charsetName The name of the character set used to read the file. Default is UTF-8.
+   */
+  def readTextFileWithValue(
+      filePath: String,
+      charsetName: String = "UTF-8"): DataSet[StringValue] = {
+    Validate.notNull(filePath, "The file path may not be null.")
+    val format = new TextValueInputFormat(new Path(filePath))
+    format.setCharsetName(charsetName)
+    val source = new DataSource[StringValue](
+      javaEnv, format, new ValueTypeInfo[StringValue](classOf[StringValue]))
+    wrap(source)
+  }
+
   /**
    * Creates a DataSet by reading the given CSV file. The type parameter must be used to specify
    * a Tuple type that has the same number of fields as there are fields in the CSV file. If the
@@ -337,8 +358,9 @@ class ExecutionEnvironment(javaEnv: JavaEnv) {
   def createProgramPlan(jobName: String = "") = {
     if (jobName.isEmpty) {
       javaEnv.createProgramPlan()
-    } else
+    } else {
       javaEnv.createProgramPlan(jobName)
+    }
   }
@@ -360,7 +382,8 @@ object ExecutionEnvironment {
    * of parallelism of the local environment is the number of hardware contexts (CPU cores/threads).
    */
-  def createLocalEnvironment(degreeOfParallelism: Int = Runtime.getRuntime.availableProcessors()): ExecutionEnvironment = {
+  def createLocalEnvironment(
+      degreeOfParallelism: Int = Runtime.getRuntime.availableProcessors()): ExecutionEnvironment = {
     val javaEnv = JavaEnv.createLocalEnvironment()
     javaEnv.setDegreeOfParallelism(degreeOfParallelism)
     new ExecutionEnvironment(javaEnv)
```
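A short, hypothetical sketch of calling the new `readTextFileWithValue` (the input path is a placeholder):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.types.StringValue

object StringValueSketch {
  def main(args: Array[String]) {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Placeholder input path; any text file reachable from the job would do.
    val lines: DataSet[StringValue] = env.readTextFileWithValue("file:///tmp/input.txt")

    // StringValue is mutable, so the runtime can reuse instances instead of
    // allocating a fresh java.lang.String per line.
    lines.map { line => line.getValue.length }.print()

    env.execute("StringValue sketch")
  }
}
```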
flink-scala/src/main/scala/org/apache/flink/api/scala/GroupedDataSet.scala

A whitespace-only change (+1, -1) in the hunk @@ -267,7 +267,7 @@: the wrapped parameter line of `reduceGroup` is re-indented. The surrounding code:

```scala
def reduceGroup[R: TypeInformation: ClassTag](
    fun: (TraversableOnce[T], Collector[R]) => Unit): DataSet[R] = {
  Validate.notNull(fun, "Group reduce function must not be null.")
  val reducer = new GroupReduceFunction[T, R] {
    def reduce(in: java.lang.Iterable[T], out: Collector[R]) {
```
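For context, a small hypothetical sketch of the `(TraversableOnce, Collector)` overload whose signature is re-indented above (the key/count pairs are made up for illustration):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.util.Collector

object ReduceGroupSketch {
  def main(args: Array[String]) {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val pairs = env.fromElements((1, "a"), (1, "b"), (2, "c"))

    // Count the elements per key: the function sees all values of one group at
    // once and may emit any number of results through the collector.
    val counts = pairs.groupBy(0).reduceGroup {
      (values: TraversableOnce[(Int, String)], out: Collector[(Int, Int)]) =>
        val buffered = values.toList
        out.collect((buffered.head._1, buffered.size))
    }

    counts.print()
    env.execute("Group reduce sketch")
  }
}
```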
flink-scala/src/main/scala/org/apache/flink/api/scala/coGroupDataSet.scala

```diff
@@ -33,8 +33,8 @@ import scala.reflect.ClassTag

 /**
- * A specific [[DataSet]] that results from a `coGroup` operation. The result of a default coGroup is
- * a tuple containing two arrays of values from the two sides of the coGroup. The result of the
+ * A specific [[DataSet]] that results from a `coGroup` operation. The result of a default coGroup
+ * is a tuple containing two arrays of values from the two sides of the coGroup. The result of the
  * coGroup can be changed by specifying a custom coGroup function using the `apply` method or by
  * providing a [[RichCoGroupFunction]].
  *
```
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/Counter.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,8 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.codegen
-
 private[flink] class Counter {
@@ -29,4 +27,4 @@ private[flink] class Counter {
     current
   }
-}
\ No newline at end of file
+}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/MacroContextHolder.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
```

In the second hunk (@@ -24,8 +24,8 @@) the mixin chain of `newMacroHelper` is re-wrapped, one trait per line:

```scala
private[flink] object MacroContextHolder {
  def newMacroHelper[C <: Context](c: C) = new MacroContextHolder[c.type](c)
    with TypeDescriptors[c.type]
    with TypeAnalyzer[c.type]
    with TreeGen[c.type]
    with TypeInformationGen[c.type]
}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TreeGen.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,8 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.codegen
-
 import scala.language.implicitConversions
@@ -50,10 +48,11 @@ private[flink] trait TreeGen[C <: Context] { this: MacroContextHolder[C] with Ty
       reify(c.Expr(source).splice.asInstanceOf[T]).tree

   def maybeMkAsInstanceOf[S: c.WeakTypeTag, T: c.WeakTypeTag](source: Tree): Tree = {
-    if (weakTypeOf[S] <:< weakTypeOf[T])
+    if (weakTypeOf[S] <:< weakTypeOf[T]) {
       source
-    else
+    } else {
       mkAsInstanceOf[T](source)
+    }
   }

   // def mkIdent(target: Symbol): Tree = Ident(target) setType target.tpe
```
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeAnalyzer.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,8 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.codegen
-
 import scala.Option.option2Iterable
@@ -107,10 +105,11 @@ private[flink] trait TypeAnalyzer[C <: Context] { this: MacroContextHolder[C]
           appliedType(d.asType.toType, dArgs)
         }

-        if (dTpe <:< tpe)
+        if (dTpe <:< tpe) {
           Some(analyze(dTpe))
-        else
+        } else {
           None
+        }
       }

       val errors = subTypes flatMap { _.findByType[UnsupportedDescriptor] }
```

The remaining hunks (@@ -150,7 +149,11 @@, @@ -167,7 +170,9 @@, @@ -221,7 +226,7 @@) re-wrap calls that exceeded the line-length limit; the new forms:

```scala
case true => Some(FieldAccessor(bGetter, bSetter, bTpe, isBaseField = true,
  analyze(bTpe.termSymbol.asMethod.returnType)))

case desc @ BaseClassDescriptor(_, _, getters, baseSubTypes) =>
  desc.copy(
    getters = getters map updateField,
    subTypes = baseSubTypes map wireBaseFields)

case FieldAccessor(fgetter, _, _, _, UnsupportedDescriptor(_, fTpe, errors)) =>
  errors map { err => "Field " + fgetter.name + ": " + fTpe + " - " + err }
```
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeDescriptors.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,8 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.codegen
-
 import scala.language.postfixOps
```

The remaining hunks (@@ -122,7 +120,8 @@, @@ -151,7 +150,8 @@, @@ -164,7 +164,12 @@) re-wrap over-long lines; the new forms:

```scala
override def flatten = this +:
  ((getters flatMap { _.desc.flatten }) ++ (subTypes flatMap { _.flatten }))

case CaseClassDescriptor(thatId, thatTpe, thatMutable, thatCtor, thatGetters) =>
  (id, tpe, mutable, ctor, getters).equals(
    thatId, thatTpe, thatMutable, thatCtor, thatGetters)

case class FieldAccessor(
    getter: Symbol,
    setter: Symbol,
    tpe: Type,
    isBaseField: Boolean,
    desc: UDTDescriptor)
```
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeInformationGen.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,8 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.codegen
-
 import org.apache.flink.api.common.typeutils.TypeSerializer
```

In @@ -116,14 +114,16 @@ the `mkValueTypeInfo` and `mkWritableTypeInfo` signatures are wrapped to fit the line-length limit:

```scala
def mkValueTypeInfo[T <: Value : c.WeakTypeTag](
    desc: UDTDescriptor): c.Expr[TypeInformation[T]] = {
  val tpeClazz = c.Expr[Class[T]](Literal(Constant(desc.tpe)))
  reify {
    new ValueTypeInfo[T](tpeClazz.splice)
  }
}

def mkWritableTypeInfo[T <: Writable : c.WeakTypeTag](
    desc: UDTDescriptor): c.Expr[TypeInformation[T]] = {
  val tpeClazz = c.Expr[Class[T]](Literal(Constant(desc.tpe)))
  reify {
    new WritableTypeInfo[T](tpeClazz.splice)
  }
}
```

The over-long commented-out code is rewrapped as well, and the missing trailing newline is added:

```diff
@@ -153,7 +153,8 @@ private[flink] trait TypeInformationGen[C <: Context] {
     c.Expr[T](result)
   }

-//  def mkCaseClassTypeInfo[T: c.WeakTypeTag](desc: CaseClassDescriptor): c.Expr[TypeInformation[T]] = {
+//  def mkCaseClassTypeInfo[T: c.WeakTypeTag](
+//      desc: CaseClassDescriptor): c.Expr[TypeInformation[T]] = {
 //    val tpeClazz = c.Expr[Class[_]](Literal(Constant(desc.tpe)))
 //    val caseFields = mkCaseFields(desc)
 //    reify {
@@ -178,10 +179,12 @@ private[flink] trait TypeInformationGen[C <: Context] {
 //    c.Expr(mkMap(fields))
 //  }
 //
-//  protected def getFields(name: String, desc: UDTDescriptor): Seq[(String, UDTDescriptor)] = desc match {
+//  protected def getFields(name: String, desc: UDTDescriptor): Seq[(String, UDTDescriptor)] =
+//    desc match {
 //    // Flatten product types
 //    case CaseClassDescriptor(_, _, _, _, getters) =>
-//      getters filterNot { _.isBaseField } flatMap { f => getFields(name + "." + f.getter.name, f.desc) }
+//      getters filterNot { _.isBaseField } flatMap {
+//        f => getFields(name + "." + f.getter.name, f.desc) }
 //    case _ => Seq((name, desc))
 //  }
-}
\ No newline at end of file
+}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/crossDataSet.scala

```diff
@@ -129,4 +129,4 @@ private[flink] object CrossDataSetImpl {
     new CrossDataSetImpl(crossOperator, leftSet, rightSet)
   }
-}
\ No newline at end of file
+}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/joinDataSet.scala

```diff
@@ -228,4 +228,4 @@ private[flink] class UnfinishedJoinOperationImpl[T, O](
     new JoinDataSetImpl(joinOperator, leftSet.set, rightSet.set, leftKey, rightKey)
   }
-}
\ No newline at end of file
+}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/package.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -44,4 +44,4 @@ package object scala {
       "supported on Case Classes (for now).")
   }
-}
\ No newline at end of file
+}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/ScalaTupleComparator.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,7 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.typeutils

 import org.apache.flink.api.common.typeutils.{TypeComparator, TypeSerializer}
```
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/ScalaTupleSerializer.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,7 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.typeutils

 import org.apache.flink.api.common.typeutils.TypeSerializer
```
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/ScalaTupleTypeInfo.scala

```diff
@@ -15,7 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.typeutils

 import org.apache.flink.api.java.typeutils.{AtomicType, TupleTypeInfoBase}
@@ -79,8 +78,8 @@ abstract class ScalaTupleTypeInfo[T <: Product](
   def getFieldIndices(fields: Array[String]): Array[Int] = {
     val result = fields map { x => fieldNames.indexOf(x) }
     if (result.contains(-1)) {
-      throw new IllegalArgumentException("Fields '" + fields.mkString(", ") + "' are not valid for" +
-        " " + tupleClass + " with fields '" + fieldNames.mkString(", ") + "'.")
+      throw new IllegalArgumentException("Fields '" + fields.mkString(", ") +
+        "' are not valid for " + tupleClass + " with fields '" + fieldNames.mkString(", ") + "'.")
     }
     result
   }
```
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/TypeUtils.scala

```diff
@@ -7,7 +7,7 @@
  * "License"); you may not use this file except in compliance
  * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
@@ -15,7 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala.typeutils

 import scala.reflect.macros.Context
```
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala

```diff
@@ -71,7 +71,9 @@ private[flink] abstract class UnfinishedKeyPairOperation[T, O, R](
    * This only works on a CaseClass [[DataSet]].
    */
   def where(firstLeftField: String, otherLeftFields: String*) = {
-    val fieldIndices = fieldNames2Indices(leftSet.set.getType, firstLeftField +: otherLeftFields.toArray)
+    val fieldIndices = fieldNames2Indices(
+      leftSet.set.getType,
+      firstLeftField +: otherLeftFields.toArray)

     val leftKey = new FieldPositionKeys[T](fieldIndices, leftSet.set.getType)
     new HalfUnfinishedKeyPairOperation[T, O, R](this, leftKey)
```
flink-scala/src/test/scala/org/apache/flink/api/scala/ScalaAPICompletenessTest.scala

```diff
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License. You may obtain a copy of the License at
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
  *
- *    http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.flink.api.scala

 import java.lang.reflect.Method
@@ -149,7 +149,8 @@ class ScalaAPICompletenessTest {
     checkMethods("SingleInputOperator", "DataSet",
       classOf[SingleInputOperator[_, _, _]], classOf[DataSet[_]])

-    checkMethods("TwoInputOperator", "DataSet", classOf[TwoInputOperator[_, _, _, _]], classOf[DataSet[_]])
+    checkMethods("TwoInputOperator", "DataSet",
+      classOf[TwoInputOperator[_, _, _, _]], classOf[DataSet[_]])

     checkMethods("SingleInputUdfOperator", "DataSet",
       classOf[SingleInputUdfOperator[_, _, _]], classOf[DataSet[_]])
```
flink-scala/src/test/scala/org/apache/flink/api/scala/functions/SemanticPropertiesTranslationTest.scala

```diff
@@ -46,7 +46,7 @@ class SemanticPropertiesTranslationTest {
     try {
       val env = ExecutionEnvironment.getExecutionEnvironment
-      val input = env.fromElements((3l, "test", 42))
+      val input = env.fromElements((3L, "test", 42))
       input.map(new WildcardConstantMapper[(Long, String, Int)]).print()
       val plan = env.createProgramPlan()
@@ -83,7 +83,7 @@ class SemanticPropertiesTranslationTest {
     try {
       val env = ExecutionEnvironment.getExecutionEnvironment
-      val input = env.fromElements((3l, "test", 42))
+      val input = env.fromElements((3L, "test", 42))
       input.map(new IndividualConstantMapper[Long, String, Int]).print()
       val plan = env.createProgramPlan()
@@ -120,8 +120,8 @@ class SemanticPropertiesTranslationTest {
     try {
       val env = ExecutionEnvironment.getExecutionEnvironment
-      val input1 = env.fromElements((3l, "test"))
-      val input2 = env.fromElements((3l, 3.1415))
+      val input1 = env.fromElements((3L, "test"))
+      val input2 = env.fromElements((3L, 3.1415))
       input1.join(input2).where(0).equalTo(0)(
         new ForwardingTupleJoin[Long, String, Long, Double]).print()
```
flink-scala/src/test/scala/org/apache/flink/api/scala/operators/translation/DeltaIterationTranslationTest.scala

```diff
@@ -100,7 +100,9 @@ class DeltaIterationTranslationTest {
       assertEquals(classOf[IdentityMapper[_]], worksetMapper.getUserCodeWrapper.getUserCodeClass)
-      assertEquals(classOf[NextWorksetMapper], nextWorksetMapper.getUserCodeWrapper.getUserCodeClass)
+      assertEquals(
+        classOf[NextWorksetMapper],
+        nextWorksetMapper.getUserCodeWrapper.getUserCodeClass)

       if (solutionSetJoin.getUserCodeWrapper.getUserCodeObject.isInstanceOf[WrappingFunction[_]]) {
@@ -203,7 +205,8 @@ class DeltaIterationTranslationTest {
 //    val iteration: DeltaIteration[Tuple3[Double, Long, String], Tuple2[Double,
 //      String]] = initialSolutionSet.iterateDelta(initialWorkSet, 10, 1)
 //    try {
-//      iteration.getWorkset.coGroup(iteration.getSolutionSet).where(1).equalTo(2).`with`(new DeltaIterationTranslationTest.SolutionWorksetCoGroup1)
+//      iteration.getWorkset.coGroup(iteration.getSolutionSet).where(1).equalTo(2).`with`(
+//        new DeltaIterationTranslationTest.SolutionWorksetCoGroup1)
 //      fail("Accepted invalid program.")
 //    }
 //    catch {
@@ -211,7 +214,8 @@ class DeltaIterationTranslationTest {
 //    }
 //    }
 //    try {
-//      iteration.getSolutionSet.coGroup(iteration.getWorkset).where(2).equalTo(1).`with`(new DeltaIterationTranslationTest.SolutionWorksetCoGroup2)
+//      iteration.getSolutionSet.coGroup(iteration.getWorkset).where(2).equalTo(1).`with`(
+//        new DeltaIterationTranslationTest.SolutionWorksetCoGroup2)
 //      fail("Accepted invalid program.")
 //    }
 //    catch {
```
...
flink-scala/src/test/scala/org/apache/flink/api/scala/operators/translation/ReduceTranslationTest.scala
浏览文件 @
0385651f
...
...
@@ -121,7 +121,9 @@ class ReduceTranslationTest {
assertEquals
(
keyValueInfo
,
reducer
.
getOperatorInfo
.
getOutputType
)
assertEquals
(
keyValueInfo
,
keyProjector
.
getOperatorInfo
.
getInputType
)
assertEquals
(
initialData
.
set
.
getType
,
keyProjector
.
getOperatorInfo
.
getOutputType
)
assertEquals
(
classOf
[
KeyExtractingMapper
[
_
,
_
]],
keyExtractor
.
getUserCodeWrapper
.
getUserCodeClass
)
assertEquals
(
classOf
[
KeyExtractingMapper
[
_
,
_
]],
keyExtractor
.
getUserCodeWrapper
.
getUserCodeClass
)
assertTrue
(
keyExtractor
.
getInput
.
isInstanceOf
[
GenericDataSourceBase
[
_
,
_
]])
}
catch
{
...
...
flink-scala/src/test/scala/org/apache/flink/api/scala/runtime/TupleSerializerTest.scala

```diff
@@ -34,8 +34,8 @@ class TupleSerializerTest {
   @Test
   def testTuple1Int(): Unit = {
-    val testTuples =
-      Array(Tuple1(42), Tuple1(1), Tuple1(0), Tuple1(-1), Tuple1(Int.MaxValue), Tuple1(Int.MinValue))
+    val testTuples = Array(
+      Tuple1(42), Tuple1(1), Tuple1(0), Tuple1(-1), Tuple1(Int.MaxValue), Tuple1(Int.MinValue))

     runTests(testTuples)
   }
```
pom.xml
浏览文件 @
0385651f
...
...
@@ -522,8 +522,10 @@ under the License.
<exclude>
**/*.creole
</exclude>
<exclude>
CONTRIBUTORS
</exclude>
<exclude>
DEPENDENCIES
</exclude>
<!-- Build fi
el
s -->
<!-- Build fi
le
s -->
<exclude>
tools/maven/checkstyle.xml
</exclude>
<exclude>
tools/maven/scalastyle-config.xml
</exclude>
<exclude>
**/scalastyle-output.xml
</exclude>
<exclude>
tools/maven/suppressions.xml
</exclude>
<exclude>
**/pom.xml
</exclude>
<exclude>
**/pom.hadoop2.xml
</exclude>
...
...
@@ -556,6 +558,29 @@ under the License.
          <logViolationsToConsole>true</logViolationsToConsole>
        </configuration>
      </plugin>
+     <plugin>
+       <groupId>org.scalastyle</groupId>
+       <artifactId>scalastyle-maven-plugin</artifactId>
+       <version>0.5.0</version>
+       <configuration>
+         <verbose>false</verbose>
+         <failOnViolation>true</failOnViolation>
+         <includeTestSourceDirectory>true</includeTestSourceDirectory>
+         <failOnWarning>false</failOnWarning>
+         <sourceDirectory>${basedir}/src/main/scala</sourceDirectory>
+         <testSourceDirectory>${basedir}/src/test/scala</testSourceDirectory>
+         <configLocation>tools/maven/scalastyle-config.xml</configLocation>
+         <outputFile>${project.basedir}/scalastyle-output.xml</outputFile>
+         <outputEncoding>UTF-8</outputEncoding>
+       </configuration>
+       <executions>
+         <execution>
+           <goals>
+             <goal>check</goal>
+           </goals>
+         </execution>
+       </executions>
+     </plugin>
      <plugin>
        <!-- just define the Java version to be used for compiling and plugins -->
        <groupId>org.apache.maven.plugins</groupId>
...
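A short usage note: because the plugin declares an execution with the `check` goal, Scalastyle runs as part of the normal Maven build and, with `failOnViolation` set to `true`, fails the build on any violation, writing its report to `scalastyle-output.xml` in each module (presumably why that file is added to the exclude list in the hunk above). The check can presumably also be triggered on its own with `mvn scalastyle:check`, though that invocation is not part of this commit.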
tools/maven/scalastyle-config.xml
0 → 100644
<!--
~ Licensed to the Apache Software Foundation (ASF) under one or more
~ contributor license agreements. See the NOTICE file distributed with
~ this work for additional information regarding copyright ownership.
~ The ASF licenses this file to You under the Apache License, Version 2.0
~ (the "License"); you may not use this file except in compliance with
~ the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<!-- NOTE: This was taken and adapted from Apache Spark. -->
<!-- If you wish to turn off checking for a section of code, you can put a comment in the source
before and after the section, with the following syntax: -->
<!-- // scalastyle:off -->
<!-- ... -->
<!-- // naughty stuff -->
<!-- ... -->
<!-- // scalastyle:on -->
<scalastyle>
 <name>Scalastyle standard configuration</name>
 <check level="error" class="org.scalastyle.file.FileTabChecker" enabled="true"></check>
 <!-- <check level="error" class="org.scalastyle.file.FileLengthChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="maxFileLength"><![CDATA[800]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <check level="error" class="org.scalastyle.file.HeaderMatchesChecker" enabled="true">
  <parameters>
   <parameter name="header"><![CDATA[/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */]]></parameter>
  </parameters>
 </check>
 <check level="error" class="org.scalastyle.scalariform.SpacesAfterPlusChecker" enabled="true"></check>
 <check level="error" class="org.scalastyle.file.WhitespaceEndOfLineChecker" enabled="false"></check>
 <check level="error" class="org.scalastyle.scalariform.SpacesBeforePlusChecker" enabled="true"></check>
 <check level="error" class="org.scalastyle.file.FileLineLengthChecker" enabled="true">
  <parameters>
   <parameter name="maxLineLength"><![CDATA[100]]></parameter>
   <parameter name="tabSize"><![CDATA[2]]></parameter>
   <parameter name="ignoreImports">true</parameter>
  </parameters>
 </check>
 <check level="error" class="org.scalastyle.scalariform.ClassNamesChecker" enabled="true">
  <parameters>
   <parameter name="regex"><![CDATA[[A-Z][A-Za-z]*]]></parameter>
  </parameters>
 </check>
 <check level="error" class="org.scalastyle.scalariform.ObjectNamesChecker" enabled="true">
  <parameters>
   <parameter name="regex"><![CDATA[[A-Z][A-Za-z]*]]></parameter>
  </parameters>
 </check>
 <check level="error" class="org.scalastyle.scalariform.PackageObjectNamesChecker" enabled="true">
  <parameters>
   <parameter name="regex"><![CDATA[^[a-z][A-Za-z]*$]]></parameter>
  </parameters>
 </check>
 <check level="error" class="org.scalastyle.scalariform.EqualsHashCodeChecker" enabled="false"></check>
 <!-- <check level="error" class="org.scalastyle.scalariform.IllegalImportsChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="illegalImports"><![CDATA[sun._,java.awt._]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <check level="error" class="org.scalastyle.scalariform.ParameterNumberChecker" enabled="true">
  <parameters>
   <parameter name="maxParameters"><![CDATA[10]]></parameter>
  </parameters>
 </check>
 <!-- <check level="error" class="org.scalastyle.scalariform.MagicNumberChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="ignore"><![CDATA[-1,0,1,2,3]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <check level="error" class="org.scalastyle.scalariform.NoWhitespaceBeforeLeftBracketChecker" enabled="false"></check>
 <check level="error" class="org.scalastyle.scalariform.NoWhitespaceAfterLeftBracketChecker" enabled="false"></check>
 <!-- <check level="error" class="org.scalastyle.scalariform.ReturnChecker" enabled="true"></check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.NullChecker" enabled="true"></check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.NoCloneChecker" enabled="true"></check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.NoFinalizeChecker" enabled="true"></check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.CovariantEqualsChecker" enabled="true"></check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.StructuralTypeChecker" enabled="true"></check> -->
 <!-- <check level="error" class="org.scalastyle.file.RegexChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="regex"><![CDATA[println]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.NumberOfTypesChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="maxTypes"><![CDATA[30]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.CyclomaticComplexityChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="maximum"><![CDATA[10]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <check level="error" class="org.scalastyle.scalariform.UppercaseLChecker" enabled="true"></check>
 <check level="error" class="org.scalastyle.scalariform.SimplifyBooleanExpressionChecker" enabled="false"></check>
 <check level="error" class="org.scalastyle.scalariform.IfBraceChecker" enabled="true">
  <parameters>
   <parameter name="singleLineAllowed"><![CDATA[true]]></parameter>
   <parameter name="doubleLineAllowed"><![CDATA[true]]></parameter>
  </parameters>
 </check>
 <!-- <check level="error" class="org.scalastyle.scalariform.MethodLengthChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="maxLength"><![CDATA[50]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.MethodNamesChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="regex"><![CDATA[^[a-z][A-Za-z0-9]*$]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.NumberOfMethodsInTypeChecker" enabled="true"> -->
 <!--  <parameters> -->
 <!--   <parameter name="maxMethods"><![CDATA[30]]></parameter> -->
 <!--  </parameters> -->
 <!-- </check> -->
 <!-- <check level="error" class="org.scalastyle.scalariform.PublicMethodsHaveTypeChecker" enabled="true"></check> -->
 <check level="error" class="org.scalastyle.file.NewLineAtEofChecker" enabled="true"></check>
 <check level="error" class="org.scalastyle.file.NoNewLineAtEofChecker" enabled="false"></check>
</scalastyle>
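To make the suppression syntax described in the comment at the top of this file concrete, here is a minimal, hypothetical Scala sketch; the object name and the over-long literal are invented for illustration, and only the `// scalastyle:off` / `// scalastyle:on` markers come from the config's own documentation:

// Hypothetical example: all Scalastyle checks are disabled between the markers.
object SuppressionExample {
  // scalastyle:off
  val banner = "a deliberately over-long string literal that would otherwise trip the 100-character FileLineLengthChecker configured above"
  // scalastyle:on
}

Scalastyle also accepts a rule id after off/on (e.g. `// scalastyle:off line.size.limit`) to disable a single check, whereas the bare form shown in the comment disables them all.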