Commit ce0c2901 (doujutun3207/flink)
Authored on Feb 28, 2015 by Gyula Fora
Parent: b7b547d8

[streaming] Several minor cleanups

Showing 15 changed files with 50 additions and 51 deletions (+50 -51)
Files changed:

flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaConsumerExample.java  +1 -1
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaProducerExample.java  +1 -1
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/api/KafkaSink.java  +5 -7
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/api/KafkaSource.java  +17 -23
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/config/EncoderWrapper.java  +1 -1
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/config/KafkaConfigWrapper.java  +1 -1
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/config/PartitionerWrapper.java  +1 -0
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/partitioner/KafkaConstantPartitioner.java  +1 -0
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/partitioner/KafkaDistributePartitioner.java  +2 -1
flink-staging/flink-streaming/flink-streaming-connectors/src/test/java/org/apache/flink/streaming/connectors/kafka/StringSerializerTest.java  +10 -4
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/StreamGraph.java  +1 -1
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/StreamingJobGraphGenerator.java  +1 -1
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/datastream/DataStream.java  +1 -1
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java  +0 -2
flink-staging/flink-streaming/flink-streaming-core/src/test/java/org/apache/flink/streaming/api/streamvertex/StreamVertexTest.java  +7 -7
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaConsumerExample.java

@@ -58,4 +58,4 @@ public class KafkaConsumerExample {
 		}
 	}
-}
\ No newline at end of file
+}
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaProducerExample.java

@@ -38,7 +38,7 @@ public class KafkaProducerExample {
 		StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment().setDegreeOfParallelism(4);

-		@SuppressWarnings("unused")
+		@SuppressWarnings({ "unused", "serial" })
 		DataStream<String> stream1 = env.addSource(new SourceFunction<String>() {

 			@Override
 			public void invoke(Collector<String> collector) throws Exception {
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/api/KafkaSink.java

@@ -19,6 +19,11 @@ package org.apache.flink.streaming.connectors.kafka.api;
 import java.util.Properties;

+import kafka.javaapi.producer.Producer;
+import kafka.producer.KeyedMessage;
+import kafka.producer.ProducerConfig;
+import kafka.serializer.DefaultEncoder;
+
 import org.apache.flink.streaming.api.function.sink.RichSinkFunction;
 import org.apache.flink.streaming.connectors.kafka.config.EncoderWrapper;
 import org.apache.flink.streaming.connectors.kafka.config.PartitionerWrapper;

@@ -26,12 +31,6 @@ import org.apache.flink.streaming.connectors.kafka.partitioner.KafkaDistributePa
 import org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner;
 import org.apache.flink.streaming.connectors.util.SerializationSchema;

-import kafka.javaapi.producer.Producer;
-import kafka.producer.KeyedMessage;
-import kafka.producer.ProducerConfig;
-import kafka.serializer.DefaultEncoder;
-import kafka.utils.VerifiableProperties;
-
 /**
  * Sink that emits its inputs to a Kafka topic.
  *

@@ -105,7 +104,6 @@ public class KafkaSink<IN> extends RichSinkFunction<IN> {
 		partitionerWrapper.write(props);

 		ProducerConfig config = new ProducerConfig(props);
-		VerifiableProperties props1 = config.props();
 		producer = new Producer<IN, byte[]>(config);

 		initDone = true;
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/api/KafkaSource.java

@@ -22,23 +22,23 @@ import java.util.List;
 import java.util.Map;
 import java.util.Properties;

-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.streaming.api.datastream.DataStream;
-import org.apache.flink.streaming.connectors.ConnectorSource;
-import org.apache.flink.streaming.connectors.util.DeserializationSchema;
-import org.apache.flink.util.Collector;
-
 import kafka.consumer.Consumer;
 import kafka.consumer.ConsumerConfig;
 import kafka.consumer.ConsumerIterator;
 import kafka.consumer.KafkaStream;
 import kafka.javaapi.consumer.ConsumerConnector;

+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.connectors.ConnectorSource;
+import org.apache.flink.streaming.connectors.util.DeserializationSchema;
+import org.apache.flink.util.Collector;
+
 /**
  * Source that listens to a Kafka topic.
  *
  * @param <OUT>
  *            Type of the messages on the topic.
  */
 public class KafkaSource<OUT> extends ConnectorSource<OUT> {
 	private static final long serialVersionUID = 1L;

@@ -50,25 +50,20 @@ public class KafkaSource<OUT> extends ConnectorSource<OUT> {
 	private transient ConsumerConnector consumer;
 	private transient ConsumerIterator<byte[], byte[]> consumerIterator;

 	private int partitionIndex;
 	private int numberOfInstances;

 	private long zookeeperSyncTimeMillis;
 	private static final long ZOOKEEPER_DEFAULT_SYNC_TIME = 200;
 	private OUT outTuple;

 	/**
 	 * Creates a KafkaSource that consumes a topic.
 	 *
 	 * @param zookeeperHost
 	 *            Address of the Zookeeper host (with port number).
 	 * @param topicId
 	 *            ID of the Kafka topic.
 	 * @param deserializationSchema
 	 *            User defined deserialization schema.
 	 * @param zookeeperSyncTimeMillis
 	 *            Synchronization time with zookeeper.
 	 */
 	public KafkaSource(String zookeeperHost, String topicId,
 			DeserializationSchema<OUT> deserializationSchema, long zookeeperSyncTimeMillis) {

@@ -96,10 +91,9 @@ public class KafkaSource<OUT> extends ConnectorSource<OUT> {
 		props.put("auto.commit.interval.ms", "1000");

 		consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

 		partitionIndex = getRuntimeContext().getIndexOfThisSubtask();
 		numberOfInstances = getRuntimeContext().getNumberOfParallelSubtasks();

-		Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer
-				.createMessageStreams(Collections.singletonMap(topicId, 1));
+		Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer
+				.createMessageStreams(Collections.singletonMap(topicId, 1));
 		List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topicId);
 		KafkaStream<byte[], byte[]> stream = streams.get(0);

@@ -108,9 +102,9 @@ public class KafkaSource<OUT> extends ConnectorSource<OUT> {
 	/**
 	 * Called to forward the data from the source to the {@link DataStream}.
 	 *
 	 * @param collector
 	 *            The Collector for sending data to the dataStream
 	 */
 	@Override
 	public void invoke(Collector<OUT> collector) throws Exception {

(The remaining +/- pairs in this file, rendered identically above, are whitespace-only re-indentations of the Javadoc and wrapped statements.)
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/config/EncoderWrapper.java

@@ -43,4 +43,4 @@ public class EncoderWrapper<T> extends KafkaConfigWrapper<SerializationSchema<T,
 		return wrapped.serialize(element);
 	}
-}
\ No newline at end of file
+}
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/config/KafkaConfigWrapper.java

@@ -57,4 +57,4 @@ public abstract class KafkaConfigWrapper<T extends Serializable> {
 		properties.put(getClass().getCanonicalName(), stringSerializer.serialize(wrapped));
 	}
-}
\ No newline at end of file
+}
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/config/PartitionerWrapper.java

@@ -38,6 +38,7 @@ public class PartitionerWrapper<T> extends KafkaConfigWrapper<KafkaPartitioner<T
 		super(properties);
 	}

+	@SuppressWarnings("unchecked")
 	@Override
 	public int partition(Object key, int numPartitions) {
 		return wrapped.partition((T) key, numPartitions);
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/partitioner/KafkaConstantPartitioner.java

@@ -19,6 +19,7 @@ package org.apache.flink.streaming.connectors.kafka.partitioner;
 public class KafkaConstantPartitioner<T> implements KafkaPartitioner<T> {

+	private static final long serialVersionUID = 1L;
 	private int partition;

 	public KafkaConstantPartitioner(int partition) {
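Several classes in this commit gain an explicit `serialVersionUID`. Without one, the JVM derives a UID from the class's structure, so any later structural edit (a new field or method) changes the derived value and previously serialized instances fail to deserialize with an `InvalidClassException`. The roundtrip below is a minimal sketch of the mechanism, not Flink code; the `Payload` class is illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerialUidDemo {

    public static class Payload implements Serializable {
        // Pinning the UID keeps previously serialized bytes readable
        // across source-compatible edits to this class.
        private static final long serialVersionUID = 1L;
        public int value;

        public Payload(int value) {
            this.value = value;
        }
    }

    // Serialize and immediately deserialize a Payload, returning its value.
    public static int roundtrip(Payload p) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bos);
        out.writeObject(p);
        out.flush();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        return ((Payload) in.readObject()).value;
    }
}
```

If `serialVersionUID` were omitted and a field later added, the same roundtrip against bytes written by the old class version would throw instead of returning.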
flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/partitioner/KafkaDistributePartitioner.java

@@ -26,6 +26,7 @@ package org.apache.flink.streaming.connectors.kafka.partitioner;
 */
 public class KafkaDistributePartitioner<T> implements KafkaPartitioner<T> {

+	private static final long serialVersionUID = 1L;
 	int currentPartition;

 	public KafkaDistributePartitioner() {

@@ -37,4 +38,4 @@ public class KafkaDistributePartitioner<T> implements KafkaPartitioner<T> {
 		return currentPartition++ % numberOfPartitions;
 	}
-}
\ No newline at end of file
+}
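The distribute partitioner's entire logic is `currentPartition++ % numberOfPartitions`: each call returns the next partition in cyclic order. A self-contained sketch of the same pattern (the class name is illustrative; note the unguarded counter, which overflows after `Integer.MAX_VALUE` calls, at which point the remainder can go negative):

```java
public class RoundRobin {
    private int current = 0;

    // Returns partitions 0, 1, ..., n-1, 0, 1, ... in order.
    public int partition(int numberOfPartitions) {
        return current++ % numberOfPartitions;
    }
}
```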
flink-staging/flink-streaming/flink-streaming-connectors/src/test/java/org/apache/flink/streaming/connectors/kafka/StringSerializerTest.java

@@ -27,6 +27,8 @@ import org.junit.Test;
 public class StringSerializerTest {

 	private static class MyClass implements Serializable {

+		private static final long serialVersionUID = 1L;
+
 		private int a;
 		private String b;

@@ -37,11 +39,15 @@ public class StringSerializerTest {
 		@Override
 		public boolean equals(Object o) {
-			try {
-				MyClass other = (MyClass) o;
-				return a == other.a && b.equals(other.b);
-			} catch (ClassCastException e) {
+			if (o == null) {
 				return false;
+			} else {
+				try {
+					MyClass other = (MyClass) o;
+					return a == other.a && b.equals(other.b);
+				} catch (ClassCastException e) {
+					return false;
+				}
 			}
 		}
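The rewritten `equals()` adds an explicit null guard. The old version relied on the cast to throw, but `(MyClass) null` succeeds, so `equals(null)` previously surfaced as a `NullPointerException` when `other.a` or `other.b` was read, instead of returning false. An idiomatic alternative, not taken from the commit, handles null and wrong types in one step with `instanceof` (which is false for null); `MyClassLike` is an illustrative stand-in for the test's `MyClass`:

```java
public class MyClassLike {
    private final int a;
    private final String b;

    public MyClassLike(int a, String b) {
        this.a = a;
        this.b = b;
    }

    @Override
    public boolean equals(Object o) {
        // instanceof covers both null and non-MyClassLike arguments,
        // so no exception-driven control flow is needed.
        if (!(o instanceof MyClassLike)) {
            return false;
        }
        MyClassLike other = (MyClassLike) o;
        return a == other.a && b.equals(other.b);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals.
        return 31 * a + b.hashCode();
    }
}
```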
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/StreamGraph.java

@@ -772,4 +772,4 @@ public class StreamGraph extends StreamingPlan {
 	}
-}
\ No newline at end of file
+}
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/StreamingJobGraphGenerator.java

@@ -239,7 +239,7 @@ public class StreamingJobGraphGenerator {
 		AbstractJobVertex downStreamVertex = streamVertices.get(downStreamvertexID);

 		StreamConfig downStreamConfig = new StreamConfig(downStreamVertex.getConfiguration());
-		StreamConfig upStreamConfig = headOfChain == upStreamvertexID ? new StreamConfig(
+		StreamConfig upStreamConfig = headOfChain.equals(upStreamvertexID) ? new StreamConfig(
 				headVertex.getConfiguration()) : chainedConfigs.get(headOfChain).get(
 				upStreamvertexID);
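The fix above replaces `==` with `.equals()` for the boxed vertex ids. `headOfChain` and `upStreamvertexID` are `Integer` objects, and `==` compares references: the language only guarantees that boxed values from -128 to 127 are cached, so an identity check silently starts failing once ids leave that range. A minimal standalone demonstration (class and method names are illustrative, not from Flink; the second assertion assumes a default JVM without an extended autobox cache):

```java
public class BoxedIdComparison {

    // Identity comparison: only reliable for values in the Integer
    // cache (-128..127) on a default JVM.
    public static boolean sameById(Integer a, Integer b) {
        return a == b;
    }

    // Value comparison: correct for any id.
    public static boolean sameByValue(Integer a, Integer b) {
        return a.equals(b);
    }
}
```

`sameById(100, 100)` returns true only because both literals box to the same cached object; with ids like 1000 the two boxes are distinct and the identity check fails, which is exactly the trap the commit removes.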
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/datastream/DataStream.java

@@ -1262,7 +1262,7 @@ public class DataStream<OUT> {
 	private void validateMerge(Integer id) {
 		for (DataStream<OUT> ds : this.mergedStreams) {
-			if (ds.getId() == id) {
+			if (ds.getId().equals(id)) {
 				throw new RuntimeException("A DataStream cannot be merged with itself");
 			}
 		}
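This is the same boxed-`Integer` pitfall as in `StreamingJobGraphGenerator`: `ds.getId()` returns an `Integer`, so the old `==` guard only detected a self-merge for ids inside the autobox cache. A hypothetical, simplified analog of the corrected check (`MergeValidator` is a stand-in, not Flink's class):

```java
import java.util.List;

public class MergeValidator {

    // Throws if id already occurs among the merged stream ids.
    // equals() makes the check work for ids beyond the Integer cache,
    // where == would compare distinct boxed objects and stay silent.
    public static void validateMerge(Integer id, List<Integer> mergedIds) {
        for (Integer other : mergedIds) {
            if (other.equals(id)) {
                throw new RuntimeException("A DataStream cannot be merged with itself");
            }
        }
    }
}
```

With the old `==` form, `validateMerge(1000, asList(1000))` would pass without throwing, letting an invalid self-merge through.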
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java

@@ -73,8 +73,6 @@ public abstract class StreamExecutionEnvironment {
 	protected StreamGraph streamGraph;

-	private static StreamExecutionEnvironmentFactory executionEnvironmentFactory;
-
 	// --------------------------------------------------------------------------------------------
 	// Constructor and Properties
 	// --------------------------------------------------------------------------------------------
flink-staging/flink-streaming/flink-streaming-core/src/test/java/org/apache/flink/streaming/api/streamvertex/StreamVertexTest.java

@@ -21,9 +21,11 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.fail;

 import java.util.Arrays;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
+import java.util.Set;

 import org.apache.flink.api.common.functions.RichMapFunction;
 import org.apache.flink.api.java.tuple.Tuple1;

@@ -87,7 +89,6 @@ public class StreamVertexTest {
 		LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(SOURCE_PARALELISM);
-
 		try {
 			env.fromCollection(null);
 			fail();

@@ -133,14 +134,13 @@ public class StreamVertexTest {
 		}
 	}

-	static HashSet<String> resultSet;
-
 	private static class SetSink implements SinkFunction<String> {
 		private static final long serialVersionUID = 1L;
+		public static Set<String> result = Collections.synchronizedSet(new HashSet<String>());

 		@Override
 		public void invoke(String value) {
-			resultSet.add(value);
+			result.add(value);
 		}
 	}

@@ -153,19 +153,19 @@ public class StreamVertexTest {
 		fromStringElements.connect(generatedSequence).map(new CoMap()).addSink(new SetSink());

-		resultSet = new HashSet<String>();
 		env.execute();

 		HashSet<String> expectedSet = new HashSet<String>(Arrays.asList("aa", "bb", "cc", "0",
 				"1", "2", "3"));
-		assertEquals(expectedSet, resultSet);
+		assertEquals(expectedSet, SetSink.result);
 	}

 	@Test
 	public void runStream() throws Exception {
 		StreamExecutionEnvironment env = new TestStreamEnvironment(SOURCE_PARALELISM, MEMORYSIZE);

-		env.addSource(new MySource()).setParallelism(SOURCE_PARALELISM).map(new MyTask()).addSink(new MySink());
+		env.addSource(new MySource()).setParallelism(SOURCE_PARALELISM).map(new MyTask())
+				.addSink(new MySink());

 		env.execute();
 		assertEquals(10, data.keySet().size());
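The test cleanup moves result collection into a `Collections.synchronizedSet` owned by the sink class: sink instances run in parallel subtasks, and concurrent `add` calls on an unsynchronized `HashSet` can drop elements or corrupt the table. A standalone illustration of the same collection pattern (class name, thread count, and element count are arbitrary, not from the test):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelCollect {

    // Mirrors the test's SetSink.result: one synchronized set shared
    // by every concurrently running "subtask".
    static final Set<String> result =
            Collections.synchronizedSet(new HashSet<String>());

    // Spawns `threads` workers, each adding `perThread` distinct
    // elements, then waits for completion and returns the set.
    public static Set<String> collect(int threads, int perThread) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool.execute(() -> {
                for (int i = 0; i < perThread; i++) {
                    result.add(id + "-" + i); // safe under contention
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return result;
    }
}
```

Because every element is distinct and the set is synchronized, the final size is exactly threads * perThread; with a plain `HashSet` the same run could lose updates nondeterministically.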