2dot5 / ClickHouse
Commit d4798363
Authored on Apr 18, 2019 by Ivan Lezhankin
Parent: 7723a63d

Add test on lost messages

2 changed files with 49 additions and 2 deletions (+49 −2):
- dbms/tests/integration/helpers/docker_compose_kafka.yml (+1 −1)
- dbms/tests/integration/test_storage_kafka/test.py (+48 −1)
dbms/tests/integration/helpers/docker_compose_kafka.yml

```diff
@@ -12,7 +12,7 @@ services:
             - label:disable
     kafka1:
-        image: confluentinc/cp-kafka:4.1.0
+        image: confluentinc/cp-kafka:5.2.0
         hostname: kafka1
         ports:
             - "9092:9092"
```
dbms/tests/integration/test_storage_kafka/test.py

```diff
@@ -7,7 +7,8 @@ from helpers.test_tools import TSV
 import json
 import subprocess
-from kafka import KafkaProducer
+import kafka.errors
+from kafka import KafkaAdminClient, KafkaProducer
 from google.protobuf.internal.encoder import _VarintBytes
 """
@@ -318,6 +319,52 @@ def test_kafka_materialized_view(kafka_cluster):
     ''')
 
 
+def test_kafka_flush_on_big_message(kafka_cluster):
+    # Create batches of messages of size ~100Kb
+    kafka_messages = 10000
+    batch_messages = 1000
+    messages = [json.dumps({'key': i, 'value': 'x' * 100}) * batch_messages for i in range(kafka_messages)]
+    kafka_produce('flush', messages)
+
+    instance.query('''
+        DROP TABLE IF EXISTS test.view;
+        DROP TABLE IF EXISTS test.consumer;
+
+        CREATE TABLE test.kafka (key UInt64, value String)
+            ENGINE = Kafka
+            SETTINGS
+                kafka_broker_list = 'kafka1:19092',
+                kafka_topic_list = 'flush',
+                kafka_group_name = 'flush',
+                kafka_format = 'JSONEachRow',
+                kafka_max_block_size = 10;
+        CREATE TABLE test.view (key UInt64, value String)
+            ENGINE = MergeTree
+            ORDER BY key;
+        CREATE MATERIALIZED VIEW test.consumer TO test.view AS
+            SELECT * FROM test.kafka;
+    ''')
+
+    client = KafkaAdminClient(bootstrap_servers="localhost:9092")
+    received = False
+    while not received:
+        try:
+            offsets = client.list_consumer_group_offsets('flush')
+            for topic, offset in offsets.items():
+                if topic.topic == 'flush' and offset.offset == kafka_messages:
+                    received = True
+                    break
+        except kafka.errors.GroupCoordinatorNotAvailableError:
+            continue
+
+    for _ in range(20):
+        time.sleep(1)
+        result = instance.query('SELECT count() FROM test.view')
+        if int(result) == kafka_messages * batch_messages:
+            break
+
+    assert int(result) == kafka_messages * batch_messages, 'ClickHouse lost some messages: {}'.format(result)
+
+
 if __name__ == '__main__':
     cluster.start()
     raw_input("Cluster created, press any key to destroy...")
```
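For context on the test above: each produced "message" is not one JSON row but `batch_messages` rows concatenated back to back (valid input for `JSONEachRow`-style row-at-a-time parsing, though not a JSON array). A standalone sketch, runnable without any broker, checking that this construction really lands near the ~100 KB the test's comment claims:

```python
import json

# Same construction as in the test: one Kafka "message" is batch_messages
# JSON rows concatenated together (concatenated, not wrapped in a JSON array).
batch_messages = 1000
row = json.dumps({'key': 0, 'value': 'x' * 100})
message = row * batch_messages

print(len(row))      # one row: 123 bytes of JSON for a single-digit key
print(len(message))  # one message: ~123 KB, matching the "~100Kb" comment
```

With 10000 such messages, the final assertion expects `kafka_messages * batch_messages` = 10,000,000 rows in `test.view`; anything less means rows were dropped between Kafka and the materialized view.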