Commit 586f60d2

Repository: 2dot5/ClickHouse

    Ensure multipart upload works in S3 storage tests.

Author:    Pavel Kovalenko, Nov 21, 2019
Committer: Pavel Kovalenko, Nov 21, 2019
Parent:    65ff10c8
Showing 2 changed files with 33 additions and 7 deletions:

- dbms/tests/integration/helpers/docker_compose_minio.yml (+3, -0)
- dbms/tests/integration/test_storage_s3/test.py (+30, -7)
dbms/tests/integration/helpers/docker_compose_minio.yml

```diff
@@ -20,9 +20,12 @@ services:
   # Redirects all requests to origin Minio.
   redirect:
     image: schmunk42/nginx-redirect
+    volumes:
+      - /nginx:/nginx
     environment:
       - SERVER_REDIRECT=minio1:9001
       - SERVER_REDIRECT_CODE=307
+      - SERVER_ACCESS_LOG=/nginx/access.log

 volumes:
   data1-1:
```
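The mounted `/nginx` volume is what lets the test read the redirector's access log from the host side. The redirect service itself answers every request with an HTTP 307 pointing at the origin Minio, logging the request as it passes; 307, unlike 301/302, requires the client to repeat the same method, so a PUT stays a PUT. A minimal stand-in for that behaviour, assuming an in-memory log rather than the real schmunk42/nginx-redirect image, can be sketched as:

```python
# Sketch of the redirect service's behaviour (an illustration, not the
# actual nginx image used by the test): log every request, then answer
# with a 307 redirect to the origin.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ORIGIN = "minio1:9001"  # mirrors SERVER_REDIRECT from the compose file
ACCESS_LOG = []         # in-memory stand-in for /nginx/access.log


class RedirectHandler(BaseHTTPRequestHandler):
    def _redirect(self):
        # Record the request line, then point the client at the origin.
        ACCESS_LOG.append(self.requestline)
        self.send_response(307)
        self.send_header("Location", "http://%s%s" % (ORIGIN, self.path))
        self.end_headers()

    # 307 preserves the method, so GET and PUT are handled identically.
    do_GET = _redirect
    do_PUT = _redirect

    def log_message(self, fmt, *args):
        pass  # silence default stderr logging


server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PUT", "/root/test_multipart.csv?partNumber=1", body=b"1,2,3\n")
resp = conn.getresponse()
server.shutdown()
```

Each part of a multipart upload is a separate PUT, so every part leaves one line in the access log, which is exactly the signal the test below counts.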
dbms/tests/integration/test_storage_s3/test.py

```diff
@@ -65,6 +65,14 @@ def get_s3_file_content(cluster, filename):
     return data_str


+# Returns nginx access log lines.
+def get_nginx_access_logs():
+    handle = open("/nginx/access.log", "r")
+    data = handle.readlines()
+    handle.close()
+    return data
+
+
 @pytest.fixture(scope="module")
 def cluster():
     try:
```
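The helper just slurps the shared access log; the interesting step is filtering its output down to part uploads. A small sketch of that filtering, using hypothetical log lines (illustrative only, not captured from the test environment):

```python
# Count PUT requests mentioning a given object name in access-log lines.
# The sample lines below are invented for illustration; real nginx log
# formatting may differ in detail.
def count_part_uploads(log_lines, filename):
    return len([line for line in log_lines
                if filename in line and "PUT" in line])


sample_logs = [
    '172.18.0.4 - - [21/Nov/2019:10:00:00 +0000] "PUT /root/test_multipart.csv?uploadId=a&partNumber=1 HTTP/1.1" 307 0',
    '172.18.0.4 - - [21/Nov/2019:10:00:01 +0000] "PUT /root/test_multipart.csv?uploadId=a&partNumber=2 HTTP/1.1" 307 0',
    '172.18.0.4 - - [21/Nov/2019:10:00:02 +0000] "GET /root/test_multipart.csv HTTP/1.1" 307 0',
]

parts = count_part_uploads(sample_logs, "test_multipart.csv")  # 2: the GET is ignored
```

Taking `len()` of the filtered list is the robust way to compare against a threshold; comparing a bare `filter(...)` result to an integer only "works" under Python 2's permissive mixed-type ordering.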
```diff
@@ -155,14 +163,29 @@ def test_multipart_put(cluster):
     instance = cluster.instances["dummy"]  # type: ClickHouseInstance
     table_format = "column1 UInt32, column2 UInt32, column3 UInt32"

-    long_data = [[i, i + 1, i + 2] for i in range(100000)]
-    long_values_csv = "".join(["{},{},{}\n".format(x, y, z) for x, y, z in long_data])
-    filename = "test.csv"
-    put_query = "insert into table function s3('http://{}:{}/{}/{}', 'CSV', '{}') format CSV".format(
-        cluster.minio_host, cluster.minio_port, cluster.minio_bucket, filename, table_format)
-
     # Minimum size of part is 5 Mb for Minio.
     # See: https://github.com/minio/minio/blob/master/docs/minio-limits.md
-    run_query(instance, put_query, stdin=long_values_csv, settings={'s3_min_upload_part_size': 5 * 1024 * 1024})
+    min_part_size_bytes = 5 * 1024 * 1024
+    csv_size_bytes = int(min_part_size_bytes * 1.5)  # To have 2 parts.
+    one_line_length = 6  # 3 digits, 2 commas, 1 line separator.

-    assert long_values_csv == get_s3_file_content(cluster, filename)
+    # Generate data having size more than one part
+    int_data = [[1, 2, 3] for i in range(csv_size_bytes / one_line_length)]
+    csv_data = "".join(["{},{},{}\n".format(x, y, z) for x, y, z in int_data])
+
+    assert len(csv_data) > min_part_size_bytes
+
+    filename = "test_multipart.csv"
+    put_query = "insert into table function s3('http://{}:{}/{}/{}', 'CSV', '{}') format CSV".format(
+        cluster.minio_redirect_host, cluster.minio_redirect_port, cluster.minio_bucket, filename, table_format)
+
+    run_query(instance, put_query, stdin=csv_data, settings={'s3_min_upload_part_size': min_part_size_bytes})
+
+    # Use Nginx access logs to count number of parts uploaded to Minio.
+    nginx_logs = get_nginx_access_logs()
+    uploaded_parts = filter(lambda log_line: log_line.find(filename) >= 0 and log_line.find("PUT") >= 0, nginx_logs)
+    assert uploaded_parts > 1
+
+    assert csv_data == get_s3_file_content(cluster, filename)
```
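The arithmetic behind the test's sizing is worth spelling out: the payload is 1.5 times the configured minimum part size, so the writer has to flush one full 5 MiB part plus a roughly 2.5 MiB remainder. (Note in passing that `range(csv_size_bytes / one_line_length)` relies on Python 2's integer division; under Python 3 it would need `//`.) A back-of-the-envelope check of the expected part count, separate from the test itself:

```python
# Sanity check of the test's sizing, not part of the test code.
import math

min_part_size_bytes = 5 * 1024 * 1024            # Minio's minimum part size
csv_size_bytes = int(min_part_size_bytes * 1.5)  # payload size used by the test

# With s3_min_upload_part_size set to the threshold, the upload splits
# into ceil(size / part_size) parts: one full part plus the remainder.
expected_parts = math.ceil(csv_size_bytes / min_part_size_bytes)
print(expected_parts)  # 2
```

This is why the test only asserts "more than one part": any payload strictly between one and two part sizes guarantees a multipart upload without depending on exact part boundaries.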