Greenplum / Gpdb
Commit 47687ca0
Authored Jan 28, 2019 by Ning Yu
Check for data integration on expand_after_icw pipeline.
Parent: d824012e
Showing 3 changed files with 126 additions and 10 deletions (+126 −10)
concourse/scripts/common.bash (+55 −10)
concourse/scripts/filter_dump.sed (+17 −0)
concourse/scripts/scan_partial_table.py (+54 −0)
concourse/scripts/common.bash
@@ -78,21 +78,49 @@ EOF

```bash
    echo "$pgoptions"
}

# detect for partial tables from all the non-template databases,
# exit code is 0 if no partial table is found, or 1 otherwise
function list_partial_tables() {
    local pgoptions="$(get_pgoptions)"

    su gpadmin -c bash <<EOF
. /usr/local/greenplum-db-devel/greenplum_path.sh
export PGOPTIONS='$pgoptions'
python $CWDIR/scan_partial_table.py
EOF
}

# usage: sort_dump < input_file > output_file
#
# filter and sort the 'INSERT INTO' lines of "pg_dumpall --inserts" output.
# will also append the database name to end of each line as comment.
function sort_dump() {
    sed -nrf "$CWDIR/filter_dump.sed" | sort
}

# usage: expand_cluster <old_size> <new_size>
function expand_cluster() {
    local old="$1"
    local new="$2"
    local inputfile="/tmp/inputfile.${old}-${new}"
    local pidfile="/tmp/postmaster.pid.${old}-${new}"
    local dump_before="/tmp/dump.${old}-${new}.before.sql"
    local dump_after="/tmp/dump.${old}-${new}.after.sql"
    local sorted_dump_before="/tmp/sorted-dump.${old}-${new}.before.sql"
    local sorted_dump_after="/tmp/sorted-dump.${old}-${new}.after.sql"
    local sorted_dump_diff="/tmp/sorted-dump.${old}-${new}.diff"
    local dbname="postgres"
    local pgoptions="$(get_pgoptions)"
    local retval=0
    local uncompleted
    local partial

    pushd gpdb_src/gpAux/gpdemo
    gen_gpexpand_input "$old" "$new"

    # dump before expansion
    su gpadmin -c "pg_dumpall --inserts -Oxaf '$dump_before'"

    # Backup master pid, by checking it later we can know whether the cluster is
    # restarted during the tests.
    su gpadmin -c "head -n 1 $MASTER_DATA_DIRECTORY/postmaster.pid > $pidfile"
```
@@ -104,23 +132,40 @@ function expand_cluster() {

```bash
    uncompleted=$(su gpadmin -c "psql -Aqtd $dbname -c \"select count(*) from gpexpand.status_detail where status <> 'COMPLETED'\"")

    # cleanup
    su gpadmin -c "yes | PGOPTIONS='$pgoptions' gpexpand -s -c"
    su gpadmin -c "dropdb $dbname" 2>/dev/null || :  # ignore failure

    # dump after expansion
    su gpadmin -c "pg_dumpall --inserts -Oxaf '$dump_after'"

    popd

    if [ "$uncompleted" -ne 0 ]; then
        echo "error: fail to expand some tables"
        retval=1
    fi

    # double check gp_distribution_policy.numsegments in every database
    if ! list_partial_tables; then
        echo "error: some tables are not expanded"
        retval=1
    fi

    echo "checking for data integration after expansion..."
    sort_dump < "$dump_before" > "$sorted_dump_before"
    sort_dump < "$dump_after" > "$sorted_dump_after"
    if diff -u0 "$sorted_dump_before" "$sorted_dump_after" > "$sorted_dump_diff"; then
        echo "before and after dumps have no difference"
    else
        echo "error: before and after dumps differ, here are part of the sorted diff:"
        head -n50 "$sorted_dump_diff"
        retval=1
    fi

    if [ "$retval" -eq 0 ]; then
        echo "all the tables are successfully expanded"
    fi
    return $retval
}
```
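The data-integration check above reduces to: project each dump down to its `INSERT INTO` lines, tag and sort them, then compare the two sorted sets, so physical row order in the dumps cannot cause false alarms. A minimal Python sketch of that idea (hypothetical helper names, not part of this commit; the real pipeline tags lines via filter_dump.sed, which reads the database name from the dump's own `\connect` lines):

```python
def normalize_dump(dump_text, dbname):
    """Keep only the INSERT INTO lines of a dump, tag each with a database
    comment, and sort them -- a simplified stand-in for filter_dump.sed + sort."""
    tagged = [
        line + '/* DATABASE: %s */' % dbname
        for line in dump_text.splitlines()
        if line.startswith('INSERT INTO')
    ]
    return sorted(tagged)


def dumps_match(before, after, dbname):
    # Equal sorted INSERT sets mean no rows were lost or duplicated by the
    # expansion, regardless of the order in which each dump emitted them.
    return normalize_dump(before, dbname) == normalize_dump(after, dbname)
```

For example, two dumps containing the same rows in reversed order compare equal, while a dump missing a row does not.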
# usage: make_cluster [<demo_cluster_options>]
...
concourse/scripts/filter_dump.sed
new file (0 → 100644)

```sed
# foreach database name
\@^\\connect (.*)$@ {
  # adjust its format
  s@@/* DATABASE: \1 */@;
  # copy it to hold space
  h;
}

# foreach insert command
\@^INSERT INTO@ {
  # append the database name from hold space
  G;
  # join the two lines
  s@\n@@;
  # output it
  p;
}
```
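The sed script uses the hold space as a one-slot memory: each `\connect <db>` line is rewritten into a `/* DATABASE: ... */` comment and held, and every following `INSERT INTO` line gets that comment appended (`G` then `s@\n@@`). A rough Python emulation of that state machine, for illustration only:

```python
import re


def tag_inserts(lines):
    """Emulate filter_dump.sed: remember the current database from each
    \\connect line (sed's hold space) and append it as a comment to every
    following INSERT INTO line; all other lines are dropped (sed -n)."""
    hold = ''  # plays the role of the hold space
    out = []
    for line in lines:
        m = re.match(r'^\\connect (.*)$', line)
        if m:
            hold = '/* DATABASE: %s */' % m.group(1)
        elif line.startswith('INSERT INTO'):
            out.append(line + hold)  # G + s@\n@@ join the two lines
    return out
```

Because the tag travels with each line, the later `sort` can reorder lines from all databases freely without losing track of where each row came from.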
concourse/scripts/scan_partial_table.py
new file (0 → 100755)

```python
#!/usr/bin/env python

import sys

from gppylib.db import dbconn

list_dbs_sql = '''
select datname from pg_database
 where datallowconn and not datistemplate
'''

get_cluster_size_sql = '''
select numsegments from gp_toolkit.__gp_number_of_segments
'''

scan_sql = '''
select n.nspname, c.relname
  from gp_distribution_policy d
  join pg_class c on c.oid = d.localoid
  join pg_namespace n on n.oid = c.relnamespace
 where d.numsegments <> {cluster_size:d}
   and c.relstorage <> 'x'
'''

dburl = dbconn.DbURL()
conn = dbconn.connect(dburl)
cursor = dbconn.execSQL(conn, list_dbs_sql)
dbnames = [row[0] for row in cursor]
cursor.close()
cluster_size = int(dbconn.execSQLForSingleton(conn, get_cluster_size_sql))
conn.close()

print('scanning for partial tables...')

retval = 0
for dbname in dbnames:
    dburl = dbconn.DbURL(dbname=dbname)
    conn = dbconn.connect(dburl)
    cursor = dbconn.execSQL(conn, scan_sql.format(cluster_size=cluster_size))
    if cursor.rowcount > 0:
        retval = 1
        for row in cursor:
            print('- "{dbname}"."{namespace}"."{relname}"'.format(
                dbname=dbname.replace('"', '""'),
                namespace=row[0].replace('"', '""'),
                relname=row[1].replace('"', '""')))
    cursor.close()
    conn.close()

sys.exit(retval)
```
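The `replace('"', '""')` calls in the report loop follow the SQL rule that a double quote inside a quoted identifier is written as two double quotes. A small illustration of the same quoting, with hypothetical helper names not present in the script:

```python
def quote_ident(name):
    """Quote a SQL identifier, doubling any embedded double quotes, the same
    escaping the script applies before printing partial-table names."""
    return '"%s"' % name.replace('"', '""')


def qualified_name(dbname, namespace, relname):
    # Produces the same '- "db"."schema"."table"' shape as the report lines.
    return '- %s.%s.%s' % (quote_ident(dbname),
                           quote_ident(namespace),
                           quote_ident(relname))
```

This keeps the output unambiguous even for database, schema, or table names that themselves contain quotes or dots.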